
Anomaly detection and explanation

Martin Kopp

Czech Technical University in Prague

Cisco Systems, Cognitive Research Team in Prague

Institute of Computer Science, Academy of Sciences of the Czech Republic

March 12, 2015


Outline

1 Anomaly detection

2 Anomaly explanation
   Sapling random forests: minimal explanation, maximal explanation, rules aggregation

3 Clustering
   voting vectors, feature deviations, evaluation


Anomaly detection: motivation

Anomaly detection is about ...

(figure: scatter plot of normal and anomalous samples)

Anomaly detection: motivation

... point of view.

(figure: two scatter plots of the same normal and anomalous samples, seen from two different points of view)

Anomaly detection: motivation

Anomaly in crowd

Source: www.svcl.ucsd.edu/projects/anomaly/

Anomaly detection: motivation

Network security
   typical proportion of anomalies is 0.1-1 %
   0.5 million data points → 1000 anomalies

Particle physics
   typical proportion of anomalies is 10^-3 to 10^-4 %
   2 million data points → 100 anomalies


Anomaly detection: problem statement

“An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.”

Hawkins 1980, Identification of Outliers.

Anomaly detection: problem statement

Defining a normal region covering every possible normal behaviour is very difficult.
The boundary between normal and anomalous behaviour is often not precise.
Some anomalous events adapt to appear normal.
Even normal behaviour may evolve over time.
Obtaining labelled data for training and validation of models is usually a major issue.
The data often contains noise that tends to be similar to the actual anomalies and is hence difficult to distinguish and remove.

Anomaly detection: types of detectors

Anomaly detectors
   statistical
   linear
   proximity based (cluster, distance, density)
   domain specific

Anomaly detection: types of detectors

Statistical anomaly detectors
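A statistical detector fits a distribution to the data and flags low-probability points. A minimal sketch in Python, assuming roughly Gaussian data; the z-score rule and the 3-sigma threshold are illustrative choices, not from the slides:

import numpy as np

def zscore_anomalies(x, threshold=3.0):
    # flag samples more than `threshold` standard deviations from the mean
    z = np.abs(x - x.mean()) / x.std()
    return z > threshold

# toy usage: 500 normal points plus one injected outlier at 8.0
x = np.concatenate([np.random.normal(0, 1, 500), [8.0]])
print(np.flatnonzero(zscore_anomalies(x)))  # includes index 500, the outlier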

Anomaly detection: types of detectors

Linear model based detectors

(figure: two panels of y vs. x1 with data, least-squares fit, and confidence bounds)
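The detector in the figure fits a line and treats points outside the confidence bounds as outliers. A rough sketch of the same idea, using a least-squares fit and flagging residuals beyond k standard deviations (the choice of k is an assumption):

import numpy as np

def regression_outliers(x, y, k=2.0):
    # fit y ~ x by least squares and compute residuals
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    # flag points whose residual exceeds k standard deviations
    return np.abs(residuals) > k * residuals.std()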

Anomaly detection: types of detectors

Cluster based detectors

(figure: Gaussian mixture density pdf(gm,[x,y]) over the x-y plane)
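The panel title pdf(gm,[x,y]) suggests a Gaussian mixture density. A sketch of a cluster-based detector along those lines with scikit-learn; the component count and cutoff quantile are assumptions:

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_anomalies(X, n_components=3, quantile=0.01):
    # fit a Gaussian mixture and score each sample by its log-density
    gm = GaussianMixture(n_components=n_components, random_state=0).fit(X)
    log_density = gm.score_samples(X)
    # flag the samples falling in the lowest `quantile` of the density
    return log_density < np.quantile(log_density, quantile)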

Anomaly detection: types of detectors

Distance based detectors

https://baldscientist.wordpress.com/2013/02/02/is-free-will-a-matter-of-being-a-conscious-outlier/

Anomaly detection: types of detectors

Density based detectors

http://scikit-learn.org/stable/modules/outlier_detection.html
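The linked page covers, among others, the Local Outlier Factor, a classic density-based detector. A minimal usage sketch; the data and the n_neighbors and contamination values are placeholders:

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

X = np.random.normal(0, 1, size=(1000, 2))  # stand-in data

# fit_predict returns -1 for outliers and 1 for inliers
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
labels = lof.fit_predict(X)
anomalies = X[labels == -1]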


Anomaly explanation: history

Grubbs 1950 - anomaly detection [1]
Knorr 1999 - the question [2]
Dang 2013 - an answer [3]

[1] Grubbs 1950, Sample criteria for testing outlying observations.
[2] Knorr, Edwin M., and Raymond T. Ng, 1999, Finding intensional knowledge of distance-based outliers.
[3] Dang, Xuan Hong, et al., 2013, Local outlier detection with interpretation.

Anomaly explanation: motivation

Network security: attack vs. unscheduled backup
Particle physics: Higgs boson vs. misconfiguration of equipment
Astronomy: cosmic microwave background vs. pigeon nest
Fraud detection: holiday vs. credit card fraud

Anomaly explanation: problem definition

We have:
   a dataset
   an anomaly detection algorithm
   samples labelled as suspicious

We want to:
   examine the suspicious samples
   interpret them clearly
      as a small subset of features
      as a human-readable set of rules


Sapling Random Forest: sapling

(a) In nature


(b) In theoretical informatics

Sapling Random Forest: summary

ensembles of specifically trained CARTs
multiple trees per anomaly
specifically made training sets -> grow sets
trees are quite small -> saplings

Sapling Random Forest: summary

Summary of the SRF for minimal explanation

Input: data
y ← anomalyDetector(data)
for all data(y == anomaly) do
    G ← createGrowSet(size, method)
    T ← trainTree(G)
    SRF ← T
end for
extractRules(SRF)
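A compact Python rendering of the loop above, assuming shallow scikit-learn CARTs as the saplings and random grow-set selection; the helper name create_grow_set and the depth limit are illustrative, not the exact procedure:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def create_grow_set(x_a, X_normal, size=100):
    # one anomaly plus `size` randomly chosen normal samples
    idx = np.random.choice(len(X_normal), size, replace=False)
    G = np.vstack([x_a, X_normal[idx]])
    y = np.array([1] + [0] * size)  # 1 = anomaly, 0 = normal
    return G, y

def sapling_random_forest(X, is_anomaly, size=100):
    # grow one shallow tree (sapling) per detected anomaly
    X_normal = X[~is_anomaly]
    forest = []
    for x_a in X[is_anomaly]:
        G, y = create_grow_set(x_a, X_normal, size)
        forest.append(DecisionTreeClassifier(max_depth=3).fit(G, y))
    return forest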

Sapling Random Forest: algorithm

Input: data

(figure: the input data)

Sapling Random Forest: algorithm

y ← anomalyDetector(data)

(figure: the data labelled normal / anomalous by the detector)

Sapling Random Forest: algorithm

G ← createGrowSet(size, method)

(figure: grow set selection; (a) random selection, (b) k-nn selection; normal, analyzed, and chosen samples)

Sapling Random Forest: grow set selection

A grow set $G$ contains an anomaly $x_a$ and several normal samples $x_n \subseteq X_n$.

typical size |G| = 100

random selection
   fast even in high dimensions
   multiple trees can be grown -> robust

k-nn selection (see the sketch below)
   deterministic - more trees are useless
   slow in high dimensions
   superior in low dimensions
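A sketch of the k-nn selection referenced above; the helper name is illustrative:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_grow_set(x_a, X_normal, k=100):
    # the k normal samples closest to the anomaly; deterministic,
    # so growing several trees from it adds nothing
    nn = NearestNeighbors(n_neighbors=k).fit(X_normal)
    _, idx = nn.kneighbors(x_a.reshape(1, -1))
    return X_normal[idx[0]]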

Sapling Random Forest: grow set selection

(figure: AUC across problems for grow set sizes |G| = 2, 5, 10, 20, 40, 80; (a) random selection, (b) k-nn selection)

Sapling Random Forest: tree training

T ← trainTree(G)

(figure: the grow set and the resulting sapling with inner-node splits x1 < 0.31 and x1 < 0.51 separating normal samples from the anomaly)

Sapling Random Forest: splitting criterion

Gini's index
$Gi = 1 - p_a^2 - p_n^2$

Information gain
$\arg\max_{h \in H} -\sum_{b \in \{L,R\}} \frac{|S_b(h)|}{|S|} H(S_b(h))$

Sapling Random Forest: splitting criterion

Simplified criterion
$\arg\min_{h \in H} |S_a(h)|$

Maximal margin
$\arg\max_{d \in D} \min_{x_n \in S_n} |x_n^d - x_a^d|$
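A sketch of the maximal margin criterion in Python: choose the dimension where the anomaly lies farthest from its nearest normal sample; placing the threshold halfway across the gap is an assumption:

import numpy as np

def maximal_margin_split(x_a, S_n):
    # per-dimension distance from the anomaly to the closest normal sample
    gaps = np.abs(S_n - x_a).min(axis=0)
    # arg max_d min_n |x_n^d - x_a^d|
    d = int(np.argmax(gaps))
    nearest = S_n[np.argmin(np.abs(S_n[:, d] - x_a[d])), d]
    return d, (x_a[d] + nearest) / 2.0  # (feature index, threshold)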


Sapling Random Forest: algorithm

SRF ← T

(figure: the trained sapling, with splits x1 < 0.31 and x1 < 0.51, is added to the forest)

Sapling Random Forest: tree training

(figure: AUC across problems for 1, 10, 20, and 40 repetitions; (a) ground truth, (b) Local Outlier Factor)

Sapling Random Forest: explaining an anomaly

extractRules(SRF): C = (x2 > 2.2)

(figure: the rule x2 > 2.2 drawn over the data)

Sapling Random Forest: explaining an anomaly

extractRules(SRF): C = (x2 > 2.2) ∧ (x1 < −2.1)

(figure: the rules x2 > 2.2 and x1 < −2.1 drawn over the data)

Sapling Random Forest: explaining an anomaly

extractRules(SRF): C = (x2 > 2.2) ∧ (x1 < −2.1) ∧ (x1 > 2.2) ∧ . . .

(figure: all extracted rules drawn over the data)

Sapling Random Forest: explaining an anomaly

The set of all possible rules is defined as
$H = \{ h_{j,\theta} \mid j \in \{1, \dots, d\},\ \theta \in \mathbb{R} \}$
where
$h_{j,\theta}(x) = \begin{cases} +1 & \text{if } x_j > \theta \\ -1 & \text{otherwise} \end{cases}$

d ... number of features
θ ... inner node threshold
x_j ... j-th feature of sample x


Let $h_{j_1,\theta_1}, \dots, h_{j_t,\theta_t}$ be the set of decisions taken in the inner nodes on the path from the root to the leaf with the anomaly $x_a$. Then $x_a$ is explained as a conjunction of atomic conditions.
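With a scikit-learn tree standing in for a sapling, that conjunction can be read directly off the decision path; a sketch (the function name is illustrative):

import numpy as np

def explain(tree, x_a):
    # walk the path from the root to the leaf holding the anomaly
    t = tree.tree_
    path = tree.decision_path(x_a.reshape(1, -1)).indices
    rules = []
    for node in path:
        if t.children_left[node] == -1:  # leaf: no condition to record
            continue
        j, theta = t.feature[node], t.threshold[node]
        op = "<=" if x_a[j] <= theta else ">"
        rules.append(f"x{j} {op} {theta:.2f}")
    return " AND ".join(rules)  # e.g. "x1 > 0.31 AND x1 <= 0.51"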

Sapling Random Forest: rules extraction

Rules are of the form
$C = (x_{j_1} > \theta_1) \wedge (x_{j_2} < \theta_2) \wedge \dots \wedge (x_{j_t} > \theta_t)$

We calculate the group sizes
$r_{2j} = \sum_{C \in D} \sum_{h \in C} I(j \in h, L)$
$r_{2j-1} = \sum_{C \in D} \sum_{h \in C} I(j \in h, R)$

$I(j \in h) = \begin{cases} +1 & \text{if } h \text{ is a} < \text{rule on } x_j \\ -1 & \text{otherwise} \end{cases}$

Sapling Random Forest: rules extraction

and choose only the k most frequent, where
$k = \arg\min_k \left( \frac{1}{\sum_{j=1}^{2d} r_j} \sum_{j=1}^{k} r_j > \tau \right)$

Then we aggregate similar rules and choose the most strict thresholds:
$h_j^R = \arg\min_{h \in H_j^R} \theta_h$
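A NumPy rendering of the coverage rule for k, given a vector r of per-group vote counts; a sketch of the formula above, with ties and the strict inequality handled loosely:

import numpy as np

def k_most_frequent(r, tau=0.9):
    # sort rule groups by frequency, most frequent first
    order = np.argsort(r)[::-1]
    covered = np.cumsum(r[order]) / r.sum()
    # smallest k whose k most frequent groups cover a fraction tau
    k = int(np.searchsorted(covered, tau)) + 1
    return order[:k]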

Sapling Random Forest: summary

Summary of the SRF for maximal explanation

y ← anomalyDetection(data)
for all data(y == anomaly) do
    f ← allFeatures
    while d < τ do
        G ← createGrowSet(size, f)
        t ← trainTree(G)
        SRF ← SRF + t
        f ← f − topSplitFeature(t)
        D ← nnDistance(G)
        d ← D(anomaly) / max(D)
    end while
end for
extractRules(SRF)
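A Python rendering of the maximal-explanation loop under the same scikit-learn assumptions as the earlier minimal sketch; the stopping ratio tau, the tree depth, and the nearest-neighbour details are illustrative:

import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def maximal_explanation_forest(x_a, X_normal, tau=0.5, size=100):
    features = list(range(X_normal.shape[1]))  # f <- allFeatures
    forest, d = [], 0.0
    while d < tau and features:
        idx = np.random.choice(len(X_normal), size, replace=False)
        G = np.vstack([x_a[features], X_normal[idx][:, features]])
        y = np.array([1] + [0] * size)
        t = DecisionTreeClassifier(max_depth=3).fit(G, y)
        forest.append((t, list(features)))  # SRF <- SRF + t
        root = t.tree_.feature[0]
        if root < 0:  # root is a leaf: nothing left to remove
            break
        features.pop(root)  # f <- f - topSplitFeature(t)
        # anomaly's nearest-neighbour distance relative to the grow set
        D = NearestNeighbors(n_neighbors=2).fit(G).kneighbors(G)[0][:, 1]
        d = D[0] / D.max()
    return forest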

Sapling Random Forest: max vs. min

(figure: feature relevance over (a) the average zero and (b) the average one; colour scale 0 to 1)

Sapling Random Forest: max vs. min

(figure: (a) minimal explanation, (b) maximal explanation)

Sapling Random Forest: max vs. min

(figure: (a) maximal explanation relevance, (b) minimal explanation)

Sapling Random Forest: results

Anomaly explanation as feature selection

(figure: AUC across problems for LOF, k-nn, srfMax, and srfMin)

Sapling Random Forest: results

Anomaly explanation as feature selection

(figure: number of selected features per problem, log scale; all features vs. srfMin vs. srfMax)


Clustering: motivation

investigation of multiple anomalies at once
generalized anomaly groups
discovery of large-scale anomalies
domain knowledge

Clustering: voting vectors

binary vector of tree votes
T x A matrix (one row per tree, one column per anomaly; a construction sketch follows below)
saplings are anomaly specific
a sapling votes for similar anomalies
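A sketch of building the T x A voting matrix from the saplings of the earlier minimal-explanation sketch; rows are trees, columns are the analysed anomalies:

import numpy as np

def voting_matrix(forest, X_anomalies):
    # entry (t, a) is 1 when sapling t also votes anomaly a anomalous
    return np.array([tree.predict(X_anomalies) for tree in forest])

# anomalies can then be clustered by their vote columns, e.g.
# KMeans(...).fit_predict(voting_matrix(forest, X_anomalies).T)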

Clustering: voting vectors

Example of voting vectors

(figure: example voting matrix, trees voting x data samples)

Clustering: features deviation matrix

deviation in feature ranges
the most strict threshold is stored
lower and upper boundary per feature
T x 2d matrix, but it can be reduced (one row is sketched below)
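A sketch of one row of the T x 2d matrix: the strictest lower and upper thresholds a single sapling places around the anomaly, again assuming scikit-learn trees as saplings:

import numpy as np

def feature_deviation_row(tree, x_a, d):
    # start with unbounded intervals for each of the d features
    lo, hi = np.full(d, -np.inf), np.full(d, np.inf)
    t = tree.tree_
    for node in tree.decision_path(x_a.reshape(1, -1)).indices:
        if t.children_left[node] == -1:  # leaf
            continue
        j, theta = t.feature[node], t.threshold[node]
        if x_a[j] <= theta:
            hi[j] = min(hi[j], theta)  # tightest upper boundary
        else:
            lo[j] = max(lo[j], theta)  # tightest lower boundary
    return np.concatenate([lo, hi])  # 2d-vector: lower and upper bounds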

Clustering: features deviation matrix

Example of features deviation matrix

(figure: example features deviation matrix, values from −1 to 1)

Clustering: results

Grow set size vs performance

(figure: clustering accuracy vs. grow set size for the raw, raw reduced, voting, and fdm representations)

Clustering: results

Number of clusters vs performance

(figure: clustering accuracy vs. number of anomaly clusters for the raw, raw reduced, voting, and fdm representations)


Conclusion and future work

Conclusion
   anomaly explanation
      most important features
      human-readable rules
   arbitrary anomaly detector
   real time / data streams

Future work
   multi-dimensional anomalies
   cluster rules aggregation
   fuzzy rules


Thank you for your attention.
