Classification with Decision Trees I
Instructor: Qiang Yang, Hong Kong University of Science and Technology
Thanks: Eibe Frank and Jiawei Han
INTRODUCTION

Given a set of pre-classified examples, build a model or classifier to classify new cases.
This is supervised learning, in that the classes are known for the examples used to build the classifier.
A classifier can be a set of rules, a decision tree, a neural network, etc.
Typical applications: credit approval, target marketing, fraud detection, medical diagnosis, treatment effectiveness analysis, …
Constructing a Classifier

The goal is to maximize accuracy on new cases that have a similar class distribution.
Since new cases are not available at construction time, the given examples are divided into a training set and a testing set. The classifier is built using the training set and evaluated using the testing set.
The goal is to be accurate on the testing set, so it is essential to capture the "structure" shared by both sets.
Overfitting rules that work well on the training set but poorly on the testing set must be pruned.
Example

Training Data (input to the classification algorithm):

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no

Resulting Classifier (Model):

IF rank = 'professor' OR years > 6
THEN tenured = 'yes'
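As a quick illustration, this rule reads directly as a one-line predicate. The sketch below is only an illustrative rendering of the induced model, using the attribute values as they appear in the table:

```python
def tenured(rank, years):
    """The induced rule: professors, or anyone with more than 6 years, get tenure."""
    return rank == "Professor" or years > 6

print(tenured("Assistant Prof", 3))  # False (matches Mike's label)
print(tenured("Professor", 2))       # True  (matches Bill's label)
```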
Example (Cont'd)

Testing Data (used to evaluate the classifier):

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes

Unseen Data: (Jeff, Professor, 4). Tenured?
Evaluation Criteria

Accuracy on test set: the rate of correct classification on the testing set. E.g., if 90 out of 100 testing cases are classified correctly, the accuracy is 90%.
Error rate on test set: the percentage of wrong predictions on the test set.
Confusion matrix: for binary class values "yes" and "no", a matrix showing the true positive, true negative, false positive and false negative counts.
Speed and scalability: the time to build the classifier and to classify new cases, and the scalability with respect to the data size.
Robustness: handling noise and missing values.
                      Predicted class
                      Yes              No
Actual class   Yes    True positive    False negative
               No     False positive   True negative
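A small sketch of these measures in plain Python; the function name `evaluate` and the returned layout are illustrative choices, not anything prescribed by the slides:

```python
def evaluate(actual, predicted, positive="yes"):
    """Accuracy, error rate, and binary confusion-matrix counts."""
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    tn = sum(a != positive and p != positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    accuracy = (tp + tn) / len(actual)
    return {"accuracy": accuracy, "error_rate": 1 - accuracy,
            "confusion": {"TP": tp, "FN": fn, "FP": fp, "TN": tn}}

# E.g. 90 correct predictions out of 100 test cases give accuracy 0.90, error rate 0.10.
```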
Evaluation Techniques

Holdout: a fixed training set/testing set split. Good for a large set of data.
k-fold cross-validation: divide the data set into k sub-samples. In each run, use one distinct sub-sample as the testing set and the remaining k-1 sub-samples as the training set, and evaluate the method using the average of the k runs. This reduces the randomness of the training set/testing set split (see the sketch below).
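A minimal sketch of k-fold cross-validation in plain Python. `build_classifier` and `accuracy` are hypothetical stand-ins for whatever learner and scoring routine are in use:

```python
def cross_validate(data, k, build_classifier, accuracy):
    folds = [data[i::k] for i in range(k)]   # k (nearly) same-sized sub-samples
    scores = []
    for i in range(k):
        test = folds[i]                      # one distinct fold for testing
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = build_classifier(train)      # build on the remaining k-1 folds
        scores.append(accuracy(model, test))
    return sum(scores) / k                   # average over the k runs
```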
Cross Validation: Holdout Method

Break up the data into groups of the same size.
Hold aside one group for testing and use the rest to build the model.
Repeat, holding out a different group in each iteration.

[Figure: the held-out test group rotates across the iterations]
Continuous Classes

Sometimes classes are continuous, in that they come from a continuous domain, e.g., temperature or stock price.
Regression is well suited to this case: linear and multiple regression, and non-linear regression.
We shall focus on categorical classes, e.g., colors or Yes/No binary decisions, and deal with continuous class values later, in CART.
DECISION TREE [Quinlan93]
An internal node represents a test on an attribute.
A branch represents an outcome of the test, e.g., Color=red.
A leaf node represents a class label or class label distribution.
At each node, one attribute is chosen to split the training examples into classes that are as distinct as possible.
A new case is classified by following a matching path to a leaf node.
Training Set:

Outlook   Temperature  Humidity  Windy  Class
sunny     hot          high      false  N
sunny     hot          high      true   N
overcast  hot          high      false  P
rain      mild         high      false  P
rain      cool         normal    false  P
rain      cool         normal    true   N
overcast  cool         normal    true   P
sunny     mild         high      false  N
sunny     cool         normal    false  P
rain      mild         normal    false  P
sunny     mild         normal    true   P
overcast  mild         high      true   P
overcast  hot          normal    false  P
rain      mild         high      true   N
Example: the decision tree for this training set

Outlook?
  sunny    -> Humidity? (high -> N, normal -> P)
  overcast -> P
  rain     -> Windy?    (true -> N, false -> P)
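A minimal sketch of how a new case follows a matching path to a leaf, with the tree above encoded as nested dicts (an illustrative representation, not Quinlan's format):

```python
tree = {"Outlook": {
    "sunny":    {"Humidity": {"high": "N", "normal": "P"}},
    "overcast": "P",
    "rain":     {"Windy": {"true": "N", "false": "P"}},
}}

def classify(node, case):
    """Follow the branch matching the case's attribute value until a leaf."""
    while isinstance(node, dict):
        attribute, branches = next(iter(node.items()))
        node = branches[case[attribute]]
    return node

print(classify(tree, {"Outlook": "sunny", "Humidity": "high"}))  # N
```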
Building Decision Tree [Q93]

Top-down tree construction: at the start, all training examples are at the root; partition the examples recursively by choosing one attribute each time (see the sketch below).
Bottom-up tree pruning: remove subtrees or branches, in a bottom-up manner, to improve the estimated accuracy on new cases.
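A compact, hedged sketch of the top-down phase (pruning omitted). It assumes each example is a dict with a "Class" key, and it picks the attribute whose split leaves the least remaining information, which is equivalent to maximizing the information gain defined on the next slides:

```python
from math import log2
from collections import Counter

def entropy(examples):
    counts = Counter(e["Class"] for e in examples)
    n = len(examples)
    return -sum(c/n * log2(c/n) for c in counts.values())

def build(examples, attributes):
    if len({e["Class"] for e in examples}) == 1 or not attributes:
        # pure node, or nothing left to split on: return the majority class
        return Counter(e["Class"] for e in examples).most_common(1)[0][0]
    def remaining_info(a):       # expected information after splitting on a
        subsets = {}
        for e in examples:
            subsets.setdefault(e[a], []).append(e)
        return sum(len(s)/len(examples) * entropy(s) for s in subsets.values())
    best = min(attributes, key=remaining_info)
    return {best: {v: build([e for e in examples if e[best] == v],
                            [a for a in attributes if a != best])
                   for v in {e[best] for e in examples}}}

# e.g. build(weather_examples, ["Outlook", "Temperature", "Humidity", "Windy"])
# yields a nested-dict tree like the one sketched earlier.
```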
Choosing the Splitting Attribute

At each node, the available attributes are evaluated on the basis of how well they separate the classes of the training examples. A goodness function is used for this purpose.
Typical goodness functions: information gain (ID3/C4.5), information gain ratio, gini index.
Which attribute to select?
A criterion for attribute selection

Which is the best attribute? The one that will result in the smallest tree.
Heuristic: choose the attribute that produces the "purest" nodes.
A popular impurity criterion is information gain: it increases with the average purity of the subsets that an attribute produces.
Strategy: choose the attribute that results in the greatest information gain.
Computing information

Information is measured in bits. Given a probability distribution, the information required to predict an event is the distribution's entropy. Entropy gives the required information in bits (this can involve fractions of bits!).
Formula for computing the entropy:

$\mathrm{entropy}(p_1, p_2, \ldots, p_n) = -p_1 \log p_1 - p_2 \log p_2 - \cdots - p_n \log p_n$
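The formula translates directly into Python; using log base 2 gives the answer in bits (a small sketch):

```python
from math import log2

def entropy(*probs):
    # 0*log(0) is taken as 0 by convention (log 0 alone is undefined)
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy(0.5, 0.5))   # 1.0 bit: a fair coin flip
print(entropy(2/5, 3/5))   # 0.971 bits: used on the next slide
```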
Example: attribute "Outlook"

"Outlook" = "Sunny":
info([2,3]) = entropy(2/5, 3/5) = -(2/5) log(2/5) - (3/5) log(3/5) = 0.971 bits

"Outlook" = "Overcast":
info([4,0]) = entropy(1, 0) = -1 log(1) - 0 log(0) = 0 bits
(Note: 0 log(0) is normally not defined, but is taken as 0 here.)

"Outlook" = "Rainy":
info([3,2]) = entropy(3/5, 2/5) = -(3/5) log(3/5) - (2/5) log(2/5) = 0.971 bits

Expected information for the attribute:
info([2,3], [4,0], [3,2]) = (5/14) × 0.971 + (4/14) × 0 + (5/14) × 0.971 = 0.693 bits
Computing the information gain

Information gain = (information before splitting) - (information after splitting)

gain("Outlook") = info([9,5]) - info([2,3],[4,0],[3,2]) = 0.940 - 0.693 = 0.247 bits

Information gain for the attributes from the weather data:
gain("Outlook") = 0.247 bits
gain("Temperature") = 0.029 bits
gain("Humidity") = 0.152 bits
gain("Windy") = 0.048 bits
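A quick numeric check of gain("Outlook"), reusing the small entropy function sketched above:

```python
from math import log2

def entropy(*probs):
    return -sum(p * log2(p) for p in probs if p > 0)

info_before = entropy(9/14, 5/14)   # info([9,5]): 9 P and 5 N examples overall
info_after = (5/14)*entropy(2/5, 3/5) + (4/14)*entropy(1.0) + (5/14)*entropy(3/5, 2/5)
print(round(info_before, 3), round(info_after, 3))   # 0.94 0.693
print(round(info_before - info_after, 3))            # 0.247
```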
Continuing to split (within the Outlook = "sunny" subset):

gain("Temperature") = 0.571 bits
gain("Humidity") = 0.971 bits
gain("Windy") = 0.020 bits
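A numeric check of two of these gains; the class counts are read off the five "sunny" rows of the training set (2 P and 3 N overall; Humidity splits them into 3 N / 2 P, Windy into [1 P, 2 N] and [1 P, 1 N]):

```python
from math import log2

def entropy(*probs):
    return -sum(p * log2(p) for p in probs if p > 0)

before = entropy(2/5, 3/5)                                      # 0.971 bits
after_humidity = (3/5)*entropy(1.0) + (2/5)*entropy(1.0)        # both branches pure
after_windy = (3/5)*entropy(1/3, 2/3) + (2/5)*entropy(1/2, 1/2)
print(round(before - after_humidity, 3))   # gain("Humidity") = 0.971
print(round(before - after_windy, 3))      # gain("Windy")    = 0.02
```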
The final decision tree
Note: not all leaves need to be pure; sometimes identical instances have different classes. Splitting stops when the data can't be split any further.
Highly-branching attributes
Problematic: attributes with a large number of values (extreme case: ID code)
Subsets are more likely to be pure if there is a large number of values
Information gain is biased towards choosing attributes with a large number of values
This may result in overfitting (selection of an attribute that is non-optimal for prediction)
Another problem: fragmentation
The gain ratio

Gain ratio: a modification of the information gain that reduces its bias towards highly-branching attributes.
The gain ratio takes the number and size of branches into account when choosing an attribute: it corrects the information gain by the intrinsic information of a split.
Intrinsic information (also called split information): the entropy of the distribution of instances into branches, i.e., how much information we need to tell which branch an instance belongs to.

$\mathrm{IntrinsicInfo}(S, A) = -\sum_i \frac{|S_i|}{|S|} \log_2 \frac{|S_i|}{|S|}$

$\mathrm{GainRatio}(S, A) = \frac{\mathrm{Gain}(S, A)}{\mathrm{IntrinsicInfo}(S, A)}$
Gain Ratio

The intrinsic information is large when the data is spread evenly over the branches and small when all the data belong to one branch.
The gain ratio (Quinlan'86) normalizes the information gain by this intrinsic information.
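Both formulas as a small Python sketch, checked against the 5/4/5 split that "Outlook" produces on the 14 training examples:

```python
from math import log2

def intrinsic_info(branch_sizes):
    """Entropy of the distribution of instances into branches."""
    total = sum(branch_sizes)
    return -sum(n/total * log2(n/total) for n in branch_sizes if n)

def gain_ratio(gain, branch_sizes):
    return gain / intrinsic_info(branch_sizes)

print(round(intrinsic_info([5, 4, 5]), 3))      # 1.577
print(round(gain_ratio(0.247, [5, 4, 5]), 3))   # 0.157 (0.156 with unrounded gain)
```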
Computing the gain ratio

Example: the intrinsic information for "ID code", which splits the 14 training instances into 14 singleton branches:

intrinsic_info([1,1,...,1]) = 14 × (-1/14 × log(1/14)) = 3.807 bits

The importance of an attribute decreases as its intrinsic information gets larger.

Definition of the gain ratio:

gain_ratio("Attribute") = gain("Attribute") / intrinsic_info("Attribute")

Example:

gain_ratio("ID_code") = 0.940 / 3.807 = 0.246
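The same arithmetic, as a one-off check:

```python
from math import log2

print(round(14 * (-(1/14) * log2(1/14)), 3))   # 3.807 bits of intrinsic information
print(round(0.940 / 3.807, 3))                 # 0.247 (0.246 with unrounded inputs)
```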
Gain ratios for weather data

            Outlook                    Temperature
Info:       0.693                      0.911
Gain:       0.940 - 0.693 = 0.247      0.940 - 0.911 = 0.029
Split info: info([5,4,5]) = 1.577      info([4,6,4]) = 1.362
Gain ratio: 0.247/1.577 = 0.156        0.029/1.362 = 0.021

            Humidity                   Windy
Info:       0.788                      0.892
Gain:       0.940 - 0.788 = 0.152      0.940 - 0.892 = 0.048
Split info: info([7,7]) = 1.000        info([8,6]) = 0.985
Gain ratio: 0.152/1 = 0.152            0.048/0.985 = 0.049
More on the gain ratio

"Outlook" still comes out on top among the real attributes; however, "ID code" has an even greater gain ratio.
Standard fix: an ad hoc test to prevent splitting on that type of attribute.
Problem with the gain ratio: it may overcompensate, i.e., choose an attribute just because its intrinsic information is very low.
Standard fix: first, only consider attributes with greater than average information gain; then compare them on gain ratio.
Gini Index

If a data set T contains examples from n classes, the gini index gini(T) is defined as

$\mathrm{gini}(T) = 1 - \sum_{j=1}^{n} p_j^2$

where p_j is the relative frequency of class j in T. gini(T) is minimized if the classes in T are skewed.
After splitting T into two subsets T1 and T2 with sizes N1 and N2, the gini index of the split data is defined as

$\mathrm{gini}_{split}(T) = \frac{N_1}{N}\,\mathrm{gini}(T_1) + \frac{N_2}{N}\,\mathrm{gini}(T_2)$

The attribute providing the smallest gini_split(T) is chosen to split the node.
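A direct sketch of both definitions; the "Windy" counts in the check are read off the weather training set (windy = false gives [6 P, 2 N], windy = true gives [3 P, 3 N]):

```python
def gini(class_counts):
    """gini(T) = 1 - sum of squared class frequencies."""
    total = sum(class_counts)
    return 1 - sum((n/total) ** 2 for n in class_counts)

def gini_split(counts1, counts2):
    """Size-weighted gini of a two-way split."""
    n1, n2 = sum(counts1), sum(counts2)
    return n1/(n1+n2) * gini(counts1) + n2/(n1+n2) * gini(counts2)

print(round(gini([9, 5]), 3))                # 0.459 before splitting
print(round(gini_split([6, 2], [3, 3]), 3))  # 0.429 after splitting on "Windy"
```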
Discussion

The algorithm for top-down induction of decision trees ("ID3") was developed by Ross Quinlan; the gain ratio is just one modification of this basic algorithm.
It led to the development of C4.5, which can deal with numeric attributes, missing values, and noisy data.
A similar approach: CART (linear regression trees; WF book, Chapter 6.5).
There are many other attribute selection criteria, but they make almost no difference to the accuracy of the result.