Page 1

AMCS/CS 340: Data Mining

Classification I: Decision Tree

Xiangliang Zhang

King Abdullah University of Science and Technology

Page 2

Classification: Definition

Given a collection of records (the training set):
– Each record contains a set of attributes; one of the attributes is the class.

Find a model for the class attribute as a function of the values of the other attributes.

Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model.

Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.

Page 3

Classification Example

Training Set (Refund: categorical; Marital Status: categorical; Taxable Income: continuous; Cheat: class):

Tid | Refund | Marital Status | Taxable Income | Cheat
1 | Yes | Single | 125K | No
2 | No | Married | 100K | No
3 | No | Single | 70K | No
4 | Yes | Married | 120K | No
5 | No | Divorced | 95K | Yes
6 | No | Married | 60K | No
7 | Yes | Divorced | 220K | No
8 | No | Single | 85K | Yes
9 | No | Married | 75K | No
10 | No | Single | 90K | Yes

Test Set:

Refund | Marital Status | Taxable Income | Cheat
No | Single | 75K | ?
Yes | Married | 50K | ?
No | Married | 150K | ?
Yes | Divorced | 90K | ?
No | Single | 40K | ?
No | Married | 80K | ?

A classifier is learned as a model from the training set and then applied to the test set: here, predicting borrowers who cheat on loan payments.

Page 4

Issues: Evaluating Classification Methods

• Accuracy
– classifier accuracy: how well the class labels of test data are predicted
• Speed
– time to construct the model (training time)
– time to use the model (classification/prediction time)
• Robustness: handling noise and missing values
• Scalability: efficiency on large-scale data
• Interpretability
– understanding and insight provided by the model
• Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules

Page 5

Classification Techniques

• Decision Tree based Methods
• Rule-based Methods
• Learning from Neighbors
• Bayesian Classification
• Neural Networks
• Ensemble Methods
• Support Vector Machines

Page 6

Example of a Decision Tree

Training data: the 10-record loan dataset from Page 3 (Refund: categorical; Marital Status: categorical; Taxable Income: continuous; Cheat: class).

Model: a decision tree whose internal nodes carry the splitting attributes:

Refund?
├─ Yes → NO
└─ No → MarSt?
    ├─ Married → NO
    └─ Single, Divorced → TaxInc?
        ├─ < 80K → NO
        └─ > 80K → YES
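As a sanity check, a tree like this can be fitted in a few lines of scikit-learn. This is a minimal sketch, not the lecture's algorithm: it assumes a hand-rolled one-hot encoding of the categorical attributes (sklearn trees do not accept categorical values directly), and the learned tree may differ from the one drawn above.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: Refund (1=Yes), MarSt one-hot (Single, Married, Divorced), TaxInc (in K)
X = [
    [1, 1, 0, 0, 125], [0, 0, 1, 0, 100], [0, 1, 0, 0, 70],
    [1, 0, 1, 0, 120], [0, 0, 0, 1, 95],  [0, 0, 1, 0, 60],
    [1, 0, 0, 1, 220], [0, 1, 0, 0, 85],  [0, 0, 1, 0, 75],
    [0, 1, 0, 0, 90],
]
y = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]

clf = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
print(export_text(clf, feature_names=["Refund", "Single", "Married", "Divorced", "TaxInc"]))

# An unseen record: Refund=No, Married, Taxable Income 80K
print(clf.predict([[0, 0, 1, 0, 80]]))  # expected ['No']: married borrowers never cheat here
```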

Page 7

Another Example of Decision Tree

Same training data, different tree:

MarSt?
├─ Married → NO
└─ Single, Divorced → Refund?
    ├─ Yes → NO
    └─ No → TaxInc?
        ├─ < 80K → NO
        └─ > 80K → YES

There could be more than one tree that fits the same data!

Page 8

Decision Tree Classification Task

Training Set:

Tid | Attrib1 | Attrib2 | Attrib3 | Class
1 | Yes | Large | 125K | No
2 | No | Medium | 100K | No
3 | No | Small | 70K | No
4 | Yes | Medium | 120K | No
5 | No | Large | 95K | Yes
6 | No | Medium | 60K | No
7 | Yes | Large | 220K | No
8 | No | Small | 85K | Yes
9 | No | Medium | 75K | No
10 | No | Small | 90K | Yes

Test Set:

Tid | Attrib1 | Attrib2 | Attrib3 | Class
11 | No | Small | 55K | ?
12 | Yes | Medium | 80K | ?
13 | Yes | Large | 110K | ?
14 | No | Small | 95K | ?
15 | No | Large | 67K | ?

Induction: a tree induction algorithm learns a model (the decision tree) from the training set. Deduction: the model is applied to assign classes to the test set.

Pages 9-14

Apply Model to Test Data

Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree (as on Page 6) and follow the branch that matches the record at each node: Refund = No leads to the MarSt node; MarSt = Married leads to the leaf NO. Assign Cheat to "No".

Page 15

Decision Tree Classification Task (recap)

Induction: a tree induction algorithm learns a decision tree from the training set. Deduction: the tree is applied to the test set. (Tables as on Page 8.)

Page 16

Algorithm for Decision Tree Induction

• Basic algorithm (a greedy algorithm)
– The tree is constructed in a top-down, recursive, divide-and-conquer manner
– At the start, all the training examples are at the root
– Examples are partitioned recursively based on selected attributes
– Attributes are categorical (if continuous-valued, they are discretized in advance)
– Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)

• Conditions for stopping partitioning
– All samples for a given node belong to the same class
– There are no remaining attributes for further partitioning; majority voting is employed for classifying the leaf
– There are no samples left

Page 17

Decision Tree Induction

Many algorithms:
• Hunt's Algorithm (one of the earliest)
• CART
• ID3, C4.5
• SLIQ, SPRINT

Page 18

General Structure of Hunt's Algorithm

Let Dt be the set of training records that reach a node t.

General procedure:
• If Dt contains only records that belong to the same class yt, then t is a leaf node labeled as yt.
• If Dt is an empty set, then t is a leaf node labeled by the default class yd.
• If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset.

(Illustration: the 10 loan training records from Page 3 form Dt at the root, waiting to be split.)
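As a concrete rendering of the three cases, here is a minimal sketch in Python (not Hunt's original formulation): records are dicts with a "class" key, and the attribute test, which Hunt's algorithm leaves open, is chosen here by minimum weighted Gini over categorical multi-way splits.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def hunt(records, attrs, default):
    """Return a leaf label or a nested dict {(attr, value): subtree}."""
    if not records:                          # case 2: empty -> default class
        return default
    labels = [r["class"] for r in records]
    if len(set(labels)) == 1:                # case 1: pure node -> leaf labeled y_t
        return labels[0]
    majority = Counter(labels).most_common(1)[0][0]
    if not attrs:                            # no attributes left -> majority vote
        return majority

    def split_impurity(a):                   # weighted Gini of a multi-way split on a
        parts = {}
        for r in records:
            parts.setdefault(r[a], []).append(r["class"])
        return sum(len(p) / len(records) * gini(p) for p in parts.values())

    best = min(attrs, key=split_impurity)    # case 3: split and recurse
    rest = [a for a in attrs if a != best]
    return {(best, v): hunt([r for r in records if r[best] == v], rest, majority)
            for v in sorted(set(r[best] for r in records))}

records = [{"Refund": "Yes", "MarSt": "Single", "class": "No"},
           {"Refund": "No", "MarSt": "Married", "class": "No"},
           {"Refund": "No", "MarSt": "Single", "class": "Yes"}]
print(hunt(records, ["Refund", "MarSt"], default="No"))
# {('Refund', 'No'): {('MarSt', 'Married'): 'No', ('MarSt', 'Single'): 'Yes'},
#  ('Refund', 'Yes'): 'No'}
```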

Page 19

Hunt's Algorithm (on the loan data)

Step 1: a single leaf predicting the majority class: Don't Cheat.

Step 2: split on Refund:
– Refund = Yes → Don't Cheat
– Refund = No → Don't Cheat

Step 3: refine the Refund = No branch by splitting on Marital Status:
– Married → Don't Cheat
– Single, Divorced → Cheat

Step 4: refine the Single/Divorced branch by splitting on Taxable Income:
– < 80K → Don't Cheat
– >= 80K → Cheat

Page 20

Issues of Hunt's Algorithm

Determine how to split the records:
– How to specify the attribute test condition?
• How many branches?
• What partition threshold for splitting?
– How to determine the best split?
• Which attribute to choose?

Page 21

How to Specify the Test Condition?

Depends on the attribute type:
• Nominal
• Ordinal
• Continuous

Depends on the number of ways to split:
• 2-way split
• Multi-way split

Page 22

Splitting Based on Nominal Attributes

Multi-way split: use as many partitions as distinct values.
Example: CarType → Family / Sports / Luxury

Binary split: divide the values into two subsets; need to find the optimal partitioning.
Example: CarType → {Family, Luxury} vs. {Sports}, or {Sports, Luxury} vs. {Family}

Page 23

Splitting Based on Ordinal Attributes

Multi-way split: use as many partitions as distinct values.
Example: Size → Small / Medium / Large

Binary split: divide the values into two subsets; need to find the optimal partitioning.
Example: Size → {Small, Medium} vs. {Large}, or {Medium, Large} vs. {Small}

What about the split {Small, Large} vs. {Medium}? (It breaks the ordering of the values, so it is not a valid split for an ordinal attribute.)

Page 24

Splitting Based on Continuous Attributes

Different ways of handling:
• Binary decision: (A < v) or (A >= v); consider all possible splits and find the best cut.
• Discretization to form an ordinal categorical attribute; ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering.

Page 25

Splitting Based on Continuous Attributes (illustrative figure)

Page 26

Issues of Hunt's Algorithm (recap)

– How to specify the attribute test condition? (addressed above)
– How to determine the best split? Which attribute to choose? (next)

Page 27

How to Determine the Best Split

Before splitting: 10 records of class 0 and 10 records of class 1. (Figure: candidate splits on different attributes.)

Page 28

How to Determine the Best Split

Greedy approach: nodes with a homogeneous class distribution are preferred. This requires a measure of node impurity:

• C0: 5, C1: 5 → non-homogeneous, high degree of impurity
• C0: 9, C1: 1 → homogeneous, low degree of impurity

Page 29

How to Determine the Best Split (cont.)

Before splitting: 10 records of class 0 and 10 of class 1; compare the candidate splits by how homogeneous (low-impurity) their child nodes are. (Figure.)

Page 30

Measures of Node Impurity

• Gini index
– a measure of how frequently a randomly chosen element from a set would be incorrectly labeled if it were labeled randomly according to the distribution of labels in the set.
• Entropy
– a measure of the uncertainty associated with a random variable.
• Misclassification error
– the proportion of misclassified samples.

Page 31

Measures of Node Impurity

• Gini index: GINI(t) = 1 − Σ_j [p(j|t)]²
• Entropy: Entropy(t) = −Σ_j p(j|t) log2 p(j|t)
• Misclassification error: Error(t) = 1 − max_j p(j|t)

where p(j|t) is the relative frequency of class j at node t.

Example (node t with class counts C1 = 0, C2 = 6):
p(C1|t) = 0/6 = 0, p(C2|t) = 6/6 = 1
GINI(t) = 1 − 0² − 1² = 0
Entropy(t) = −0·log2(0) − 1·log2(1) = 0
Error(t) = 1 − max(0, 1) = 0
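The three measures are easy to check numerically; a small sketch (class counts in, impurity out; entropy in bits, with the 0·log 0 = 0 convention):

```python
import math

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    n = sum(counts)
    return sum(-(c / n) * math.log2(c / n) for c in counts if 0 < c < n)

def error(counts):
    return 1.0 - max(counts) / sum(counts)

for counts in [(0, 6), (1, 5), (3, 3)]:   # nodes t1, t2, t3 of the next slide
    print(counts, round(gini(counts), 3), round(entropy(counts), 3), round(error(counts), 3))
# (0, 6) 0.0 0 0.0   (1, 5) 0.278 0.65 0.167   (3, 3) 0.5 1.0 0.5
```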

Page 32

Comparison Among Measures of Node Impurity

For a 2-class problem (formulas as on Page 31):

Node t1 (C1 = 0, C2 = 6): GINI = 0, Entropy = 0, Error = 0
Node t2 (C1 = 1, C2 = 5): GINI = 0.278, Entropy = 0.650, Error = 0.167
Node t3 (C1 = 3, C2 = 3): GINI = 0.5, Entropy = 1, Error = 0.5

Page 33

Comparison Among Measures of Node Impurity (cont.)

For a 2-class problem, all three measures are zero for a pure node and largest when the two classes are equally frequent. (Figure: impurity vs. class probability.)

Page 34

Quality of Split

When a node p with n records is split into k partitions (children), with ni = number of records at child i, the quality of the split is computed as

GAIN_split = GINI(p) − Σ_{i=1..k} (ni/n) · GINI(i)

or, with information gain,

GAIN_split = Entropy(p) − Σ_{i=1..k} (ni/n) · Entropy(i)

• Measures the reduction in Gini/entropy achieved by the split.
• Choose the split that achieves the greatest reduction (maximizes GAIN).

Page 35

Quality of Split: Binary Attributes

A binary attribute splits node P into two partitions, nodes N1 and N2 (branches Yes/No):

Parent: C1 = 6, C2 = 6, Gini = 0.500
Children: N1: C1 = 4, C2 = 3; N2: C1 = 2, C2 = 3

Gini(N1) = 1 − (4/7)² − (3/7)² = 0.4898
Gini(N2) = 1 − (2/5)² − (3/5)² = 0.480
Gini_split(children) = 7/12 × 0.4898 + 5/12 × 0.480 = 0.486
Gain_split = Gini(parent) − Gini_split(children) = 0.500 − 0.486 = 0.014
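The same arithmetic, as a small sketch:

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gain_split(parent, children):
    """Reduction in weighted Gini impurity achieved by a split."""
    n = sum(parent)
    return gini(parent) - sum(sum(c) / n * gini(c) for c in children)

print(round(gain_split((6, 6), [(4, 3), (2, 3)]), 3))   # 0.014
```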

Page 36

Decision Tree Induction (recap): Hunt's Algorithm, CART, ID3/C4.5, SLIQ/SPRINT. Next: CART.

Page 37

CART

CART: Classification and Regression Trees
• constructs trees with only binary splits (simplifies the splitting criterion)
• uses the Gini index as the splitting criterion
• splits on the attribute that provides the smallest Gini_split(p), equivalently the largest GAIN_split(p)
• needs to enumerate all possible splitting points for each attribute

Page 38

Continuous Attributes: Computing the Gini Index

For efficient computation, for each attribute:
• Sort the attribute values.
• Set candidate split positions as the midpoints between adjacent sorted values.
• Linearly scan these values, each time updating the count matrix and computing the Gini index.
• Choose the split position that has the least Gini index.

Example (Taxable Income, class Cheat):

Sorted values: 60, 70, 75, 85, 90, 95, 100, 120, 125, 220 (Cheat: No, No, No, Yes, Yes, Yes, No, No, No, No)
Candidate split positions: 55, 65, 72, 80, 87, 92, 97, 110, 122, 172, 230
Gini at each position: 0.420, 0.400, 0.375, 0.343, 0.417, 0.400, 0.300, 0.343, 0.375, 0.400, 0.420

The best split is at position 97 (Gini = 0.300): the Yes/No counts are 3/3 for Income <= 97 and 0/4 for Income > 97.
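A sketch of the scan: it keeps running class counts on each side of the candidate cut, so the whole pass is linear after the initial sort (the returned cut 97.5 is the exact midpoint that the slide rounds to 97):

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

def best_numeric_split(values, labels):
    """Return (cut, gini_split) minimizing weighted Gini over midpoint candidates."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    below, above = {}, {}
    for _, lab in pairs:
        above[lab] = above.get(lab, 0) + 1
    best = (None, float("inf"))
    for i in range(n - 1):
        lab = pairs[i][1]                      # move record i below the cut
        below[lab] = below.get(lab, 0) + 1
        above[lab] -= 1
        if pairs[i][0] == pairs[i + 1][0]:
            continue                           # no midpoint between equal values
        cut = (pairs[i][0] + pairs[i + 1][0]) / 2
        g = ((i + 1) * gini(tuple(below.values()))
             + (n - i - 1) * gini(tuple(above.values()))) / n
        if g < best[1]:
            best = (cut, g)
    return best

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_numeric_split(income, cheat))       # (97.5, 0.3)
```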

Page 39

Decision Tree Induction (recap): Hunt's Algorithm, CART, ID3/C4.5, SLIQ/SPRINT. Next: ID3 and C4.5.

Page 40

How to Find the Best Split

Parent node: C1 = 10, C2 = 10, Gini = 0.500

Two-way splits on CarType (find the best partition of values):
• {Sports, Luxury} vs. {Family}: C1 = 9 / 1, C2 = 7 / 3 → Gini = 0.468
• {Family, Luxury} vs. {Sports}: C1 = 2 / 8, C2 = 10 / 0 → Gini = 0.167

Multi-way split on CarType:
• Family / Sports / Luxury: C1 = 1 / 8 / 1, C2 = 3 / 0 / 7 → Gini = 0.163

The multi-way split has the smallest weighted Gini, hence the largest gain: 0.500 − 0.163 = 0.337.

Page 41

Which Attribute to Split?

Comparing candidate splits with Gain = 0.02, Gain = 0.337 (the CarType split above), and Gain = 0.5. Is the Gain = 0.5 split, with one small pure partition per distinct value, really the best?

Disadvantage: gain tends to prefer splits that result in a large number of partitions, each being small but pure.
• A value that is unique for each record (e.g., an ID) yields a model that is not predictive.
• A small number of records in each node does not allow reliable prediction.

Page 42

Splitting Based on Gain Ratio

Gain ratio (parent node p is split into k partitions, ni = number of records in partition i):

GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = −Σ_{i=1..k} (ni/n) log(ni/n)

• designed to overcome the disadvantage of information gain
• adjusts the information gain by the entropy of the partitioning (SplitINFO)
• higher-entropy partitioning (a large number of small partitions) is penalized!
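A sketch of the adjustment; purely for illustration it divides the Gini-based gain of the multi-way CarType split from Page 40 (0.337, partition sizes 4/8/8) by that split's SplitINFO:

```python
import math

def split_info(sizes):
    """Entropy of the partition sizes (log base 2)."""
    n = sum(sizes)
    return sum(-(s / n) * math.log2(s / n) for s in sizes if s)

def gain_ratio(gain, sizes):
    return gain / split_info(sizes)

print(round(split_info([4, 8, 8]), 3))          # 1.522
print(round(gain_ratio(0.337, [4, 8, 8]), 3))   # 0.221
# A one-partition-per-record split has a huge SplitINFO, so its ratio collapses.
```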

Page 43

Comparing Attribute Selection Measures

The three measures, in general, return good results, but:
– Gini gain:
• biased toward multivalued attributes
• has difficulty when the number of classes is large
• tends to favor tests that result in equal-sized partitions with purity in both partitions
– Information gain:
• biased toward multivalued attributes
– Gain ratio:
• tends to prefer unbalanced splits in which one partition is much smaller than the others

Page 44

ID3 and C4.5

• ID3 (Ross Quinlan, 1986) is the precursor to the C4.5 algorithm (Ross Quinlan, 1993).
• C4.5 is an extension of the earlier ID3 algorithm:
– For each unused attribute Ai, compute the information GAIN (ID3) or GainRATIO (C4.5) from splitting on Ai
– Find the best splitting attribute Abest with the highest GAIN or GainRATIO
– Create a decision node that splits on Abest
– Recurse on the sublists obtained by splitting on Abest, and add those nodes as children of the node

Page 45

Improvements of C4.5 over the ID3 Algorithm

Handling both continuous and discrete attributes
• To handle continuous attributes, C4.5 creates a threshold and then splits the list into those records whose attribute value is above the threshold and those whose value is less than or equal to it.

Handling training data with missing attribute values
• C4.5 allows attribute values to be marked as ? for missing. Missing attribute values are simply not used in gain and entropy calculations.

Pruning trees after creation
• C4.5 goes back through the tree once it has been created and attempts to remove branches that do not help, replacing them with leaf nodes.

Page 46

C4.5

• Issues:
– Needs the entire data set to fit in memory, making it unsuitable for large datasets.
– Needs a lot of computation at every stage of decision tree construction.

• You can download the software from:
http://www2.cs.uregina.ca/~dbd/cs831/notes/ml/dtrees/c4.5/c4.5r8.tar.gz
• More information:
http://www2.cs.uregina.ca/~dbd/cs831/notes/ml/dtrees/c4.5/tutorial.html

Page 47

Decision Tree Induction (recap): Hunt's Algorithm, CART, ID3/C4.5, SLIQ/SPRINT. Next: SLIQ and SPRINT.

Page 48

SLIQ: A Decision Tree Classifier

SLIQ, Supervised Learning In Quest (EDBT'96, Mehta et al.):
• uses a pre-sorting technique in the tree-growing phase (eliminates the need to sort data at each node)
• creates a separate list for each attribute of the training data
• a separate list, called the class list, is created for the class labels attached to the examples
• requires that only the class list and one attribute list be kept in memory at any time
• suitable for classification of large disk-resident datasets
• applies to both numerical and categorical attributes

Page 49

SLIQ Methodology

Start → generate an attribute list for each attribute → sort the attribute lists for NUMERIC attributes → create the decision tree by partitioning records → End. Only NUMERIC attributes are sorted.

Example training data:

Driver's Age | CarType | Class (risk)
23 | Family | HIGH
17 | Sports | HIGH
43 | Sports | HIGH
68 | Family | LOW
32 | Truck | LOW
20 | Family | HIGH

Sorted Age attribute list:

Age | Class | RecId
17 | HIGH | 2
20 | HIGH | 6
23 | HIGH | 1
32 | LOW | 5
43 | HIGH | 3
68 | LOW | 4

Page 50

Numeric Attributes: Splitting Index

Scan the sorted Age list, evaluating candidate partition positions. Class histograms (Cbelow / Cabove) and Gini_split at three positions:

• Position 0 (no records below): Cbelow HIGH 0, LOW 0; Cabove HIGH 4, LOW 2 → Gini_split = 0.44
• Position 3 (first three records below): Cbelow HIGH 3, LOW 0; Cabove HIGH 1, LOW 2 → Gini_split = 0.22
• Position 6 (all records below): Cbelow HIGH 4, LOW 2; Cabove HIGH 0, LOW 0 → Gini_split = 0.44

Page 51

Numeric Attributes: Splitting Index (cont.)

The best partition position (position 3, Gini_split = 0.22) corresponds to the midpoint between ages 23 and 32: split on age < 27.5 (node N1) vs. age > 27.5 (node N2).

Page 52

Numeric Attributes: Splitting Index (result)

After splitting on age < 27.5 vs. > 27.5, the attribute lists are partitioned:

N1 (age < 27.5), all HIGH:

Age | Class | CarType | RecId
17 | HIGH | sport | 2
20 | HIGH | family | 6
23 | HIGH | family | 1

N2 (age > 27.5), to be split further on CarType:

Age | Class | CarType | RecId
32 | LOW | truck | 5
43 | HIGH | sport | 3
68 | LOW | family | 4

Page 53

Decision Tree Issues

• Underfitting and overfitting
• Missing values

Page 54

Underfitting and Overfitting (figure)

Page 55

Overfitting and Tree Pruning

• Overfitting: an induced tree may overfit the training data
– Too many branches, some of which may reflect anomalies due to noise or outliers
– Poor accuracy on unseen samples

• Two approaches to avoid overfitting
– Pre-pruning: stop tree construction early; do not split a node if this would result in the gain measure falling below a threshold
• Difficult to choose an appropriate threshold
– Post-pruning: remove branches from a "fully grown" tree
• Use a set of data different from the training data to decide which is the "best pruned tree"

Page 56

Tree Post-Pruning

• Reduced error pruning
– Start from the leaves
– Replace each node with its most popular class
– Keep the change if the prediction accuracy is not affected

• Cost complexity pruning
– Generate a series of trees T0, ..., Tm (T0 is the initial tree, Tm is the root alone)
– Construct tree Ti by replacing a subtree of tree Ti−1 with a leaf node whose class label is determined by the majority class of the instances in the subtree
– Which subtree to remove? The one that minimizes the increase in error per pruned leaf:

t_remove = argmin_t [ err(prune(T, t), D) − err(T, D) ] / [ |leaves(T)| − |leaves(prune(T, t))| ]

Page 57

Decision Tree Issues (recap): underfitting and overfitting covered; next, missing values.

Page 58

Handling Missing Values of Attributes

Missing values affect decision tree construction in three different ways:
– how impurity measures are computed
– how instances with missing values are distributed to child nodes
– how a test instance with a missing value is classified

Page 59

Computing the Impurity Measure

Training data as on Page 3, except that record 10 has a missing value: Refund = ? (Marital Status = Single, Taxable Income = 90K, Class = Yes). Class counts by Refund value:

Refund | Class=Yes | Class=No
Yes | 0 | 3
No | 2 | 4
? | 1 | 0

Before splitting: Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

Split on Refund:
Entropy(Refund=Yes) = 0
Entropy(Refund=No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.8813 − 0.551 = 0.3303

(The missing-value record is simply not used in the child terms: the child weights 0.3 and 0.6 sum to 0.9, the fraction of records with a known Refund value.)
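The arithmetic checks out in a couple of lines (entropies in bits):

```python
import math

def H(probs):
    """Entropy of a class distribution given as probabilities."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

parent = H([0.3, 0.7])                            # 3 Yes / 7 No over all 10 records
children = 0.3 * H([0, 1]) + 0.6 * H([2/6, 4/6])  # Refund=Yes and Refund=No nodes
print(round(parent, 4), round(children, 3), round(parent - children, 4))
# 0.8813 0.551 0.3303
```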

Page 60

Distribute Instances

Records 1-9 (known Refund) go down the tree normally: the Refund=Yes node gets Class=Yes 0 / Class=No 3; the Refund=No node gets Class=Yes 2 / Class=No 4.

Record 10 (Refund = ?, Single, 90K, Class=Yes) is distributed fractionally:
• Probability that Refund = Yes is 3/9; probability that Refund = No is 6/9.
• Assign the record to the left (Yes) child with weight 3/9 and to the right (No) child with weight 6/9.
• Resulting counts: Yes child: Class=Yes 0 + 3/9, Class=No 3; No child: Class=Yes 2 + 6/9, Class=No 4.

Page 61

Classify Instances

New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Using the fractional counts accumulated at the MarSt node of the tree (Refund → MarSt → TaxInc):

 | Married | Single | Divorced | Total
Class=No | 3 | 1 | 0 | 4
Class=Yes | 6/9 | 1 | 1 | 2.67
Total | 3.67 | 2 | 1 | 6.67

Probability that Marital Status = Married is 3.67/6.67 = 0.55
Probability that Marital Status = {Single, Divorced} is 3/6.67 = 0.45

Page 62

Classify Instances (cont.)

Follow both branches with those weights: the Married branch leads to the leaf NO, while the Single/Divorced branch, with Taxable Income 85K > 80K, leads to the leaf YES. Therefore:

Probability that Class = Yes is 0.45
Probability that Class = No is 0.55

Page 63

Questions

• What is a decision tree?
• How to choose the best attribute to split?
• How to decide the branches when splitting?
• What are CART, ID3, and C4.5? What are their differences?
• How can decision trees be used to learn from large data?
• How to avoid overfitting in decision tree learning?
• How can missing values be handled by a decision tree?

Page 64

Classification Techniques (roadmap, as on Page 5): Decision Tree based Methods covered; next, Rule-based Methods.

Page 65

Tree → Rules

How can a tree be used in other ways?

Page 66

Rule Extraction from a Decision Tree

From the tree (Refund → Marital Status → Taxable Income), the classification rules are:

(Refund = Yes) ==> No
(Refund = No, Marital Status = {Single, Divorced}, Taxable Income < 80K) ==> No
(Refund = No, Marital Status = {Single, Divorced}, Taxable Income > 80K) ==> Yes
(Refund = No, Marital Status = {Married}) ==> No

• One rule is created for each path from the root to a leaf.
• Each attribute-value pair along a path forms a conjunction; the leaf holds the class prediction.
• Rules are mutually exclusive (rules are independent and each record is covered by at most one rule) and exhaustive (each record is covered by at least one rule).
• The rule set contains as much information as the tree.
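The flattening itself is a depth-first walk; a sketch over the nested-dict trees produced by the Hunt's-algorithm sketch on Page 18 (the toy tree below is hypothetical and omits the income test):

```python
def extract_rules(tree, path=()):
    """Yield one (conditions, label) pair per root-to-leaf path."""
    if not isinstance(tree, dict):             # leaf: emit the accumulated path
        yield path, tree
        return
    for (attr, value), subtree in tree.items():
        yield from extract_rules(subtree, path + ((attr, value),))

toy = {("Refund", "Yes"): "No",
       ("Refund", "No"): {("MarSt", "Married"): "No",
                          ("MarSt", "Single/Divorced"): "Yes"}}
for conds, label in extract_rules(toy):
    print(" AND ".join(f"{a}={v}" for a, v in conds), "==>", label)
# Refund=Yes ==> No
# Refund=No AND MarSt=Married ==> No
# Refund=No AND MarSt=Single/Divorced ==> Yes
```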

Page 67

Rule Extraction from a Decision Tree (cont.)

Rules can be simplified. For example, the last rule above,
(Refund = No, Marital Status = {Married}) ==> No,
simplifies to
(Marital Status = {Married}) ==> No.

Page 68

Rule-Based Classifier

• Classify records by using a collection of "if...then..." rules.
• Rule: represent the knowledge in the form of IF-THEN rules, (Condition) → y, where
– Condition is a conjunction of attribute tests (the rule antecedent)
– y is the class label (the rule consequent)

Examples of classification rules:
– (Blood Type = Warm) ∧ (Lay Eggs = Yes) → Birds
– (Taxable Income < 50K) ∧ (Refund = Yes) → Cheat = No

• Classifier: a rule R covers an instance x if the attributes of the instance satisfy the condition of the rule; instance x is then labeled by the consequent of R.

Page 69

Sequential Covering Algorithm

• Extract rules directly from the training data, e.g., FOIL (Quinlan, 1990), AQ (Michalski et al., 1986), CN2 (Clark & Niblett, 1989), RIPPER (Cohen, 1995).
• Steps (sketched in code below):
1. Start from an empty rule set.
2. Grow a rule for one class Ci using the Learn-One-Rule function.
3. Remove the training records covered by the rule.
4. Repeat steps 2 and 3 until a stopping criterion is met, e.g., no more training examples, or the quality of the rule returned falls below a user-specified threshold.

Compare with decision-tree induction: a decision tree learns a set of rules simultaneously.
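A minimal sketch of the covering loop. The learn_one_rule here greedily picks the single attribute test with the best precision on the positive class, which is far simpler than FOIL or RIPPER, but it shows the grow / remove-covered / repeat structure; the data and threshold are hypothetical:

```python
def learn_one_rule(records, pos):
    """Pick the (attribute, value) test with the highest precision for class `pos`."""
    best, best_prec = None, -1.0
    for r in records:
        for attr, val in r.items():
            if attr == "class":
                continue
            covered = [s for s in records if s[attr] == val]
            prec = sum(s["class"] == pos for s in covered) / len(covered)
            if prec > best_prec:
                best, best_prec = (attr, val), prec
    return best, best_prec

def sequential_covering(records, pos, min_precision=0.6):
    rules = []
    while any(r["class"] == pos for r in records):
        rule, prec = learn_one_rule(records, pos)
        if prec < min_precision:
            break                                  # rule quality below threshold
        rules.append(rule)
        records = [r for r in records if r[rule[0]] != rule[1]]  # remove covered
    return rules

data = [{"Refund": "No", "MarSt": "Single", "class": "Yes"},
        {"Refund": "No", "MarSt": "Divorced", "class": "Yes"},
        {"Refund": "Yes", "MarSt": "Single", "class": "No"},
        {"Refund": "No", "MarSt": "Married", "class": "No"}]
print(sequential_covering(data, "Yes"))   # [('MarSt', 'Divorced')]
```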

Page 70

Rules for 2-Class and Multi-Class Problems

For a 2-class problem:
• choose one of the classes as the positive class and the other as the negative class
• learn rules for the positive class
• the negative class becomes the default class

For a multi-class problem:
• order the classes according to increasing class prevalence (the fraction of instances that belong to a particular class)
• learn the rule set for the smallest class first, treating the rest as the negative class
• repeat with the next smallest class as the positive class

Example (loan data, ordered by class: the 3 Cheat=Yes records first, then the 7 Cheat=No records):

Tid | Refund | Marital Status | Taxable Income | Class
5 | No | Divorced | 95K | Yes
8 | No | Single | 85K | Yes
10 | No | Single | 90K | Yes
1 | Yes | Single | 125K | No
4 | Yes | Married | 120K | No
7 | Yes | Divorced | 220K | No
3 | No | Single | 70K | No
2 | No | Married | 100K | No
6 | No | Married | 60K | No
9 | No | Married | 75K | No

Page 71

AMCS/CS 340: Data Mining

Classification: Evaluation

Xiangliang Zhang

King Abdullah University of Science and Technology

Page 72

Model Evaluation

• Metrics for performance evaluation: how to evaluate the performance of a model?
• Methods for performance evaluation: how to obtain reliable estimates?
• Methods for model comparison: how to compare the relative performance among competing models?

Page 73

Metrics for Performance Evaluation

Focus on the predictive capability of a model, rather than on how fast it classifies or builds models, scalability, etc.

Confusion matrix:

                    PREDICTED Class=Yes | PREDICTED Class=No
ACTUAL Class=Yes    a (TP)              | b (FN)
ACTUAL Class=No     c (FP)              | d (TN)

(TP: true positive; FN: false negative; FP: false positive; TN: true negative)

Most widely used metric:

Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)

Page 74

Limitation of Accuracy

Consider a 2-class problem with unbalanced classes:
• number of Class 0 examples = 9990
• number of Class 1 examples = 10

If the model predicts everything to be Class 0, accuracy is 9990/10000 = 99.9%. Accuracy is misleading here because the model does not detect any Class 1 example.

Page 75

Other Measures

With the confusion matrix cells a (TP), b (FN), c (FP), d (TN):

Precision p = a / (a + c)
Recall r = a / (a + b)
F-measure F = 2rp / (r + p) = 2a / (2a + b + c)
True positive rate = a / (a + b)
False positive rate = c / (c + d)
Weighted accuracy = (w1·a + w4·d) / (w1·a + w2·b + w3·c + w4·d)
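A sketch computing these from the four cells (the counts in the example call are hypothetical):

```python
def metrics(tp, fn, fp, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # equals the true positive rate
    f_measure = 2 * recall * precision / (recall + precision)
    fpr = fp / (fp + tn)             # false positive rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return precision, recall, f_measure, fpr, accuracy

print(metrics(tp=40, fn=10, fp=20, tn=30))
# approx (0.667, 0.8, 0.727, 0.4, 0.7)
```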

Page 76

Model Evaluation (roadmap, as on Page 72): metrics covered; next, methods for performance evaluation.

Page 77

Methods for Performance Evaluation

How to obtain a reliable estimate of performance? The performance of a model may depend on factors other than the learning algorithm:
• class distribution
• cost of misclassification
• size of the training and test sets

Page 78

Methods of Estimation

• Holdout: reserve 2/3 for training and 1/3 for testing
• Random subsampling: repeated holdout
• Cross-validation: partition the data into k disjoint subsets
– k-fold: train on k−1 partitions, test on the remaining one
– Leave-one-out: k = n
• Bootstrap: sampling with replacement
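A sketch of k-fold splitting: shuffle once, deal out k disjoint folds, and let each fold serve as the test set exactly once:

```python
import random

def k_fold_indices(n, k, seed=0):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]        # k disjoint subsets
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

for train, test in k_fold_indices(10, 5):
    print(sorted(test))    # 5 disjoint folds that together cover all 10 records
```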

Page 79

Model Evaluation (roadmap, as on Page 72): next, methods for model comparison.

Page 80

ROC (Receiver Operating Characteristic)

• Developed in the 1950s for signal detection theory, to analyze noisy signals; characterizes the trade-off between positive hits and false alarms.
• An ROC curve plots the TP rate (y-axis) against the FP rate (x-axis).
• The performance of a classifier is represented as a point on the ROC curve; changing the algorithm's threshold or the sample distribution changes the location of the point.
• Example: a 1-dimensional data set containing 2 classes (positive and negative); any point located at x > t is classified as positive. At threshold t: TPR = 0.5, FPR = 0.12.

Page 81

ROC Curve

Points (FPR, TPR) on the curve:
• (0, 0): declare everything to be the negative class
• (1, 1): declare everything to be the positive class
• (0, 1): ideal

Diagonal line:
• random guessing
• below the diagonal line: the prediction is the opposite of the true class

Page 82

Using ROC for Model Comparison

• Neither model consistently outperforms the other: M1 is better for small FPR, M2 is better for large FPR.
• The area under the ROC curve summarizes the curve: ideal, area = 1; random guess, area = 0.5.

Page 83

How to Construct an ROC Curve

Test instances with their posterior probability P(+|x) and true class, sorted by P(+|x):

P(+|x): 0.25 | 0.43 | 0.53 | 0.76 | 0.85 | 0.85 | 0.85 | 0.87 | 0.93 | 0.95
Class:  +    | -    | +    | -    | -    | -    | +    | -    | +    | +

Use each value as a threshold t and count the instances with P(+|x) >= t (the number of + gives TP, the number of - gives FP):

Threshold t: 0.25 0.43 0.53 0.76 0.85 0.85 0.85 0.87 0.93 0.95 1.00
TP:  5 4 4 3 3 3 3 2 2 1 0
FP:  5 5 4 4 3 2 1 1 0 0 0
TN:  0 0 1 1 2 3 4 4 5 5 5
FN:  0 1 1 2 2 2 2 3 3 4 5
TPR: 1 0.8 0.8 0.6 0.6 0.6 0.6 0.4 0.4 0.2 0
FPR: 1 1 0.8 0.8 0.6 0.4 0.2 0.2 0 0 0

Plotting (FPR, TPR) for each threshold gives the ROC curve. (The three 0.85 columns step through the tied instances one at a time.)
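A sketch of the construction; it uses each distinct score as a threshold (so the three tied 0.85 columns of the table collapse into their final state) and returns the (FPR, TPR) points to plot:

```python
def roc_points(scores, labels):
    """(FPR, TPR) per threshold t, predicting + when score >= t."""
    P = labels.count("+")
    N = labels.count("-")
    pts = [(0.0, 0.0)]                     # threshold above every score
    for t in sorted(set(scores), reverse=True):
        tp = sum(s >= t and l == "+" for s, l in zip(scores, labels))
        fp = sum(s >= t and l == "-" for s, l in zip(scores, labels))
        pts.append((fp / N, tp / P))
    return pts

scores = [0.25, 0.43, 0.53, 0.76, 0.85, 0.85, 0.85, 0.87, 0.93, 0.95]
labels = ["+", "-", "+", "-", "-", "-", "+", "-", "+", "+"]
print(roc_points(scores, labels))
# [(0.0, 0.0), (0.0, 0.2), (0.0, 0.4), (0.2, 0.4), (0.6, 0.6),
#  (0.8, 0.6), (0.8, 0.8), (1.0, 0.8), (1.0, 1.0)]
```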

Page 84

Confidence Interval for Accuracy

• Prediction can be regarded as a Bernoulli trial
– A Bernoulli trial has 2 possible outcomes; for prediction: correct or wrong
– A collection of Bernoulli trials has a binomial distribution: x ~ Bin(N, p), where x is the number of correct predictions
– e.g.: toss a fair coin 50 times; how many heads would turn up? Expected number of heads = N × p = 50 × 0.5 = 25

• Given x (the number of correct predictions), or equivalently acc = x/N, and N (the number of test instances), can we predict p (the true accuracy of the model)?

Page 85

Confidence Interval for Accuracy (cont.)

For large test sets (N > 30), acc has a normal distribution with mean p and variance p(1 − p)/N:

P( Z_{α/2} <= (acc − p) / sqrt(p(1 − p)/N) <= Z_{1−α/2} ) = 1 − α

Solving for p gives the confidence interval:

p = [ 2·N·acc + Z²_{α/2} ± Z_{α/2} · sqrt( Z²_{α/2} + 4·N·acc − 4·N·acc² ) ] / [ 2·(N + Z²_{α/2}) ]

Page 86

Confidence Interval for Accuracy (example)

Consider a model that produces an accuracy of 80% when evaluated on 100 test instances:
• N = 100, acc = 0.8
• Let 1 − α = 0.95 (95% confidence)
• From the standard normal distribution table, Z_{α/2} = 1.96

1 − α: 0.99 → Z = 2.58; 0.98 → 2.33; 0.95 → 1.96; 0.90 → 1.65

How the interval tightens as N grows (acc = 0.8):

N: 50 | 100 | 500 | 1000 | 5000
p (lower): 0.670 | 0.711 | 0.763 | 0.774 | 0.789
p (upper): 0.888 | 0.866 | 0.833 | 0.824 | 0.811
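A sketch of the interval formula from Page 85, reproducing the table (values match up to rounding):

```python
import math

def acc_interval(acc, n, z=1.96):
    """Normal-approximation confidence interval for the true accuracy p."""
    mid = 2 * n * acc + z * z
    half = z * math.sqrt(z * z + 4 * n * acc - 4 * n * acc * acc)
    denom = 2 * (n + z * z)
    return (mid - half) / denom, (mid + half) / denom

for n in [50, 100, 500, 1000, 5000]:
    lo, hi = acc_interval(0.8, n)
    print(n, round(lo, 3), round(hi, 3))   # e.g. 100 -> 0.711 0.867
```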

Page 87

Test of Significance

Given two models:
• Model M1: accuracy = 85%, tested on 30 instances
• Model M2: accuracy = 75%, tested on 5000 instances

Can we say M1 is better than M2?
• How much confidence can we place on the accuracy of M1 and M2?
• Can the difference in performance be explained as a result of random fluctuations in the test set?

Page 88

Comparing the Performance of 2 Models

Given two models, say M1 and M2, which is better?
• M1 is tested on D1 (size = n1), with error rate e1
• M2 is tested on D2 (size = n2), with error rate e2
• Assume D1 and D2 are independent
• If n1 and n2 are sufficiently large, then e1 ~ N(μ1, σ1²) and e2 ~ N(μ2, σ2²)
• Approximate variance (binomial distribution): σ̂i² = ei(1 − ei)/ni

Page 89

Comparing the Performance of 2 Models (cont.)

To test whether the performance difference is statistically significant, consider d = e1 − e2:

d ~ N(d_t, σ_t²), where d_t is the true difference.

Since D1 and D2 are independent, their variances add up:

σ_t² ≈ σ̂1² + σ̂2² = e1(1 − e1)/n1 + e2(1 − e2)/n2

At the (1 − α) confidence level: d_t = d ± Z_{α/2} · σ̂_t

Page 90

An Illustrative Example

Given: M1: n1 = 30, e1 = 0.15; M2: n2 = 5000, e2 = 0.25

d = |e2 − e1| = 0.1 (2-sided test)

σ̂_d² = 0.15 × (1 − 0.15)/30 + 0.25 × (1 − 0.25)/5000 = 0.0043

At the 95% confidence level, Z_{α/2} = 1.96:

d_t = 0.100 ± 1.96 × sqrt(0.0043) = 0.100 ± 0.128

The interval contains 0, so the difference may not be statistically significant.
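The same arithmetic as a sketch:

```python
import math

def diff_interval(e1, n1, e2, n2, z=1.96):
    """Confidence interval for the true difference between two error rates."""
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2
    d = abs(e1 - e2)
    half = z * math.sqrt(var)
    return d - half, d + half

lo, hi = diff_interval(0.15, 30, 0.25, 5000)
print(round(lo, 3), round(hi, 3))   # -0.028 0.228: the interval contains 0
```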

Page 91

Our Teaching Assistant

Name: Francisco Franco
Email: [email protected]

Responsibilities:
• Homework
• Project report
• Final grades
• Questions