1
CISC 4631 Data Mining
Lecture 04: Decision Trees
These slides are based on the slides by
• Tan, Steinbach and Kumar (textbook authors)
• Eamonn Keogh (UC Riverside)
• Raymond Mooney (UT Austin)
2
Classification: Definition
• Given a collection of records (training set)
– Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
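As a concrete illustration of this train/test protocol (not part of the original slides), here is a minimal scikit-learn sketch; the tiny dataset is made up for the example:

```python
# Minimal sketch of the train/test protocol described above (illustrative only).
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = [[1, 125], [0, 100], [0, 70], [1, 120], [0, 95],
     [0, 60], [1, 220], [0, 85], [0, 75], [0, 90]]                    # attribute vectors
y = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]   # class labels

# Hold out part of the data as a test set to estimate accuracy on unseen records.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)      # induction: learn the model
print(accuracy_score(y_test, model.predict(X_test)))        # deduction: apply it to the test set
```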
3
Illustrating Classification Task
Induction: a learning algorithm learns a model from the Training Set.
Deduction: the learned model is applied to the Test Set.

Training Set
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?
4
Classification Techniques
• Decision Tree based Methods
• Rule-based Methods
• Memory based reasoning
• Neural Networks
• Naïve Bayes and Bayesian Belief Networks
• Support Vector Machines
5
Example of a Decision Tree
Training Data (attribute types: categorical, categorical, continuous, class)
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)
Refund?
  Yes → NO
  No  → MarSt?
          Married → NO
          Single, Divorced → TaxInc?
                               < 80K → NO
                               > 80K → YES
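The model above maps directly onto nested conditionals. A minimal hand-coded Python sketch of this particular tree (the function name is mine, not from the slides):

```python
def predict_cheat(refund, marital_status, taxable_income):
    """Hand-coded version of the decision tree above (illustrative sketch)."""
    if refund == "Yes":
        return "No"
    # Refund == "No"
    if marital_status == "Married":
        return "No"
    # Single or Divorced: split on taxable income at 80K
    return "Yes" if taxable_income > 80_000 else "No"

# Example: the test record used later in the "Apply Model to Test Data" slides.
print(predict_cheat("No", "Married", 80_000))   # -> "No"
```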
6
Another Example of Decision Tree
Training Data: the same table as on the previous slide (Tid, Refund, Marital Status, Taxable Income, Cheat).

Model: Decision Tree
MarSt?
  Married → NO
  Single, Divorced → Refund?
                       Yes → NO
                       No  → TaxInc?
                               < 80K → NO
                               > 80K → YES
There could be more than one tree that fits the same data!
7
Decision Tree Classification Task
Same framework as the "Illustrating Classification Task" figure: a tree induction algorithm learns a Decision Tree model from the Training Set (induction), and the tree is then applied to the Test Set (deduction).
8
Apply Model to Test Data
Refund?
  Yes → NO
  No  → MarSt?
          Married → NO
          Single, Divorced → TaxInc?
                               < 80K → NO
                               > 80K → YES

Test Data:  Refund = No,  Marital Status = Married,  Taxable Income = 80K,  Cheat = ?
Start from the root of the tree.
9–13
Apply Model to Test Data (continued)
These slides repeat the same tree and test record, stepping through the traversal: Refund = No sends the record to the MarSt test, and MarSt = Married sends it to the NO leaf.
Assign Cheat to “No”
14
Decision Tree Terminology
15
Decision Tree Classification Task
(Same induction/deduction figure as slide 7: the tree induction algorithm learns a Decision Tree from the Training Set, which is then applied to the Test Set.)
16
Decision Tree Induction
• Many algorithms:
– Hunt’s Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
• John Ross Quinlan is a computer science researcher in data mining and decision theory. He has contributed extensively to the development of decision tree algorithms, including inventing the canonical C4.5 and ID3 algorithms.
17
Decision Tree Classifier
Ross Quinlan
(Scatter plot: Antenna Length on the vertical axis vs. Abdomen Length on the horizontal axis, with Katydid and Grasshopper examples.)

Abdomen Length > 7.1?
  yes → Katydid
  no  → Antenna Length > 6.0?
          yes → Katydid
          no  → Grasshopper
18
Decision trees predate computers: a classic insect identification key.
Antennae shorter than body?
  Yes → Grasshopper
  No  → 3 Tarsi?
          No  → Cricket
          Yes → Foretibia has ears?
                  Yes → Katydids
                  No  → Camel Cricket
19
A decision tree is a classifier in the form of a tree structure
– Decision node: specifies a test on a single attribute
– Leaf node: indicates the value of the target attribute
– Arc/edge: one outcome of the split on an attribute
– Path: a conjunction of tests leading to the final decision
Decision trees classify instances or examples by starting at the root of the tree and moving through it until a leaf node is reached.
Definition
20
• Decision tree generation consists of two phases
– Tree construction
  • At the start, all the training examples are at the root
  • Partition examples recursively based on selected attributes
– Tree pruning
  • Identify and remove branches that reflect noise or outliers
• Use of a decision tree: classifying an unknown sample
– Test the attribute values of the sample against the decision tree
Decision Tree Classification
21
Decision Tree Representation
• Each internal node tests an attribute
• Each branch corresponds to an attribute value
• Each leaf node assigns a classification

Outlook?
  sunny    → Humidity?  (high → no,  normal → yes)
  overcast → yes
  rain     → Wind?      (strong → no,  weak → yes)
22
• Basic algorithm (a greedy algorithm)
– The tree is constructed in a top-down, recursive, divide-and-conquer manner
– At the start, all the training examples are at the root
– Attributes are categorical (continuous-valued attributes can be discretized in advance)
– Examples are partitioned recursively based on selected attributes
– Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
• Conditions for stopping partitioning
– All samples for a given node belong to the same class
– There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf)
– There are no samples left
How do we construct the decision tree?
23
Top-Down Decision Tree Induction
• Main loop:
1. A ← the “best” decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant of the node
4. Sort the training examples to the leaf nodes
5. If the training examples are perfectly classified, then STOP; else iterate over the new leaf nodes
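A compact runnable sketch of this greedy loop (ID3-style), assuming categorical attributes and the information-gain measure introduced later in the lecture; all names are mine:

```python
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attributes):
    """Pick the attribute whose split gives the largest reduction in entropy."""
    def gain(a):
        parts = {}
        for row, y in zip(rows, labels):
            parts.setdefault(row[a], []).append(y)
        return entropy(labels) - sum(len(p) / len(labels) * entropy(p) for p in parts.values())
    return max(attributes, key=gain)

def build_tree(rows, labels, attributes):
    if len(set(labels)) == 1:                      # all examples in one class -> leaf
        return labels[0]
    if not attributes:                             # no attributes left -> majority vote
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, labels, attributes)   # steps 1-2: choose and assign the best attribute
    node = {a: {}}
    for v in {row[a] for row in rows}:             # steps 3-4: one descendant per value of A
        subset = [(r, y) for r, y in zip(rows, labels) if r[a] == v]
        sub_rows, sub_labels = zip(*subset)
        node[a][v] = build_tree(list(sub_rows), list(sub_labels),
                                [b for b in attributes if b != a])
    return node

# Tiny illustrative run on three records described by two categorical attributes.
rows = [{"Refund": "Yes", "MarSt": "Single"},
        {"Refund": "No",  "MarSt": "Married"},
        {"Refund": "No",  "MarSt": "Single"}]
print(build_tree(rows, ["No", "No", "Yes"], ["Refund", "MarSt"]))
```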
24
Tree Induction
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion
• Issues
– Determine how to split the records
  • How to specify the attribute test condition?
  • How to determine the best split?
– Determine when to stop splitting
25
How To Split Records
• Random split
– The tree can grow huge
– These trees are hard to understand
– Larger trees are typically less accurate than smaller trees
• Principled criterion
– Selection of an attribute to test at each node: choosing the most useful attribute for classifying examples
– How? Information gain
  • Measures how well a given attribute separates the training examples according to their target classification
  • This measure is used to select among the candidate attributes at each step while growing the tree
26
Tree Induction
• Greedy strategy:
– Split the records based on an attribute test that optimizes a certain criterion
– Hunt’s algorithm: recursively partition training records into successively purer subsets
• How to measure purity/impurity?
– Entropy and information gain (covered in the lecture slides)
– Gini (covered in the textbook)
– Classification error
27
How to determine the Best Split
Before splitting: 10 records of class 0, 10 records of class 1.

Own Car?    Yes: C0: 6, C1: 4        No: C0: 4, C1: 6
Car Type?   Family: C0: 1, C1: 3     Sports: C0: 8, C1: 0     Luxury: C0: 1, C1: 7
Student ID? c1: C0: 1, C1: 0  ...  c10: C0: 1, C1: 0     c11: C0: 0, C1: 1  ...  c20: C0: 0, C1: 1

Which test condition is the best?
Why is Student ID a bad feature to use?
28
How to determine the Best Split
• Greedy approach:
– Nodes with a homogeneous class distribution are preferred
• Need a measure of node impurity:
– C0: 5, C1: 5  → non-homogeneous, high degree of impurity
– C0: 9, C1: 1  → homogeneous, low degree of impurity
29
Picking a Good Split Feature
• The goal is to have the resulting tree be as small as possible, per Occam’s razor.
• Finding a minimal decision tree (in nodes, leaves, or depth) is an NP-hard optimization problem.
• The top-down divide-and-conquer method does a greedy search for a simple tree but is not guaranteed to find the smallest.
– General lesson in Machine Learning and Data Mining: “Greed is good.”
• Want to pick a feature that creates subsets of examples that are relatively “pure” in a single class, so they are “closer” to being leaf nodes.
• There are a variety of heuristics for picking a good test; a popular one is based on information gain, which originated with the ID3 system of Quinlan (1979).
R. Mooney, UT Austin
30
Information Theory
• Think of playing "20 questions": I am thinking of an integer between 1 and 1,000. What is the first question you would ask? Why?
• Entropy measures how much more information you need before you can identify the integer.
• Initially, there are 1,000 possible values, which we assume are equally likely.
• What is the maximum number of questions you need to ask?
R. Mooney, UT Austin
31
Entropy
• Entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is:

    Entropy(S) = – p1 log2(p1) – p0 log2(p0)

  where p1 is the fraction of positive examples in S and p0 is the fraction of negatives.
• If all examples are in one category, entropy is zero (we define 0 log(0) = 0).
• If examples are equally mixed (p1 = p0 = 0.5), entropy is at its maximum of 1.
• Entropy can be viewed as the number of bits required on average to encode the class of an example in S, where data compression (e.g., Huffman coding) is used to give shorter codes to more likely cases.
• For multi-class problems with c categories, entropy generalizes to:

    Entropy(S) = – Σ (i = 1..c) pi log2(pi)
32
Entropy Plot for Binary Classification
• The entropy is 0 if the outcome is certain.
• The entropy is maximum if we have no knowledge of the system (i.e., any outcome is equally possible).
(Plot: entropy of a 2-class problem as a function of the proportion of one of the two classes.)
33
Information Gain
• The expected reduction in entropy caused by partitioning the examples according to an attribute.
• Equivalently, the number of bits saved when encoding the target value of an arbitrary member of S, given the value of attribute A.
34
Information Gain in Decision Tree Induction
• Assume that using attribute A, the current set will be partitioned into some number of child sets
• The encoding information that would be gained by branching on A:

    Gain(A) = E(current set) – E(all child sets)

Note: entropy is at its minimum (zero) when the collection of objects is completely uniform, i.e., all objects belong to the same class.
35
Examples for Computing Entropy

    Entropy(t) = – Σj p(j | t) log2 p(j | t)

NOTE: p(j | t) is computed as the relative frequency of class j at node t.

C1: 0, C2: 6    P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
                Entropy = – 0 log2 0 – 1 log2 1 = – 0 – 0 = 0

C1: 1, C2: 5    P(C1) = 1/6,  P(C2) = 5/6
                Entropy = – (1/6) log2 (1/6) – (5/6) log2 (5/6) = 0.65

C1: 2, C2: 4    P(C1) = 2/6,  P(C2) = 4/6
                Entropy = – (2/6) log2 (2/6) – (4/6) log2 (4/6) = 0.92

C1: 3, C2: 3    P(C1) = 3/6 = 1/2,  P(C2) = 3/6 = 1/2
                Entropy = – (1/2) log2 (1/2) – (1/2) log2 (1/2) = (1/2) + (1/2) = 1
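These numbers are easy to check in code; a small sketch:

```python
import math

def entropy(counts):
    """Entropy of a node given its class counts; 0*log2(0) is treated as 0."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

print(entropy([0, 6]))   # 0.0
print(entropy([1, 5]))   # ~0.650
print(entropy([2, 4]))   # ~0.918 (0.92 above)
print(entropy([3, 3]))   # 1.0
```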
36
How to Calculate log2 x
• Many calculators only have buttons for log10 x and loge x (note: "log" typically means log10)
• You can calculate the log for any base b as follows:
– logb(x) = logk(x) / logk(b)
– Thus log2(x) = log10(x) / log10(2)
– Since log10(2) ≈ 0.301, just calculate the log base 10 and divide by 0.301 to get the log base 2
– You can use this for HW if needed
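In Python, for example (math.log2 is also available directly):

```python
import math

x = 6.0
print(math.log10(x) / math.log10(2))   # change of base: log2(x) = log10(x) / log10(2)
print(math.log2(x))                    # same value, computed directly
```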
Splitting Based on INFO...
• Information Gain:

    GAIN_split = Entropy(p) – Σ (i = 1..k) (ni / n) Entropy(i)

  Parent node p is split into k partitions; ni is the number of records in partition i.
– Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
– Used in ID3 and C4.5
– Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
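A small sketch of GAIN_split as defined above (helper names are mine):

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gain_split(parent_counts, children_counts):
    """GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(child_i)."""
    n = sum(parent_counts)
    return entropy(parent_counts) - sum(sum(ch) / n * entropy(ch) for ch in children_counts)

# Parent with 10 records of each class, split into two children.
print(gain_split([10, 10], [[8, 2], [2, 8]]))   # ~0.278
```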
38
Continuous Attributes? (more on this later)
• Each non-leaf node is a test, its edges partitioning the attribute values into subsets (easy for a discrete attribute).
• For a continuous attribute:
– Partition the continuous values of attribute A into a discrete set of intervals
– Or create a new boolean attribute Ac by looking for a threshold c:

    Ac = true   if A ≥ c
         false  otherwise

How do we choose c?
39
Person   Hair Length   Weight   Age   Class
Homer 0” 250 36 M
Marge 10” 150 34 F
Bart 2” 90 10 M
Lisa 6” 78 8 F
Maggie 4” 20 1 F
Abe 1” 170 70 M
Selma 8” 160 41 F
Otto 10” 180 38 M
Krusty 6” 200 45 M
Comic 8” 290 38 ?
40
Let us try splitting on Hair Length

    Entropy(S) = – (p/(p+n)) log2 (p/(p+n)) – (n/(p+n)) log2 (n/(p+n))
    Gain(A) = E(current set) – E(all child sets)

Hair Length <= 5?
Parent: Entropy(4F, 5M) = – (4/9) log2(4/9) – (5/9) log2(5/9) = 0.9911
yes: Entropy(1F, 3M) = – (1/4) log2(1/4) – (3/4) log2(3/4) = 0.8113
no:  Entropy(3F, 2M) = – (3/5) log2(3/5) – (2/5) log2(2/5) = 0.9710

Gain(Hair Length <= 5) = 0.9911 – (4/9 · 0.8113 + 5/9 · 0.9710) = 0.0911
41
Let us try splitting on Weight

Weight <= 160?
Parent: Entropy(4F, 5M) = – (4/9) log2(4/9) – (5/9) log2(5/9) = 0.9911
yes: Entropy(4F, 1M) = – (4/5) log2(4/5) – (1/5) log2(1/5) = 0.7219
no:  Entropy(0F, 4M) = – (0/4) log2(0/4) – (4/4) log2(4/4) = 0

Gain(Weight <= 160) = 0.9911 – (5/9 · 0.7219 + 4/9 · 0) = 0.5900
42
Let us try splitting on Age

Age <= 40?
Parent: Entropy(4F, 5M) = – (4/9) log2(4/9) – (5/9) log2(5/9) = 0.9911
yes: Entropy(3F, 3M) = – (3/6) log2(3/6) – (3/6) log2(3/6) = 1
no:  Entropy(1F, 2M) = – (1/3) log2(1/3) – (2/3) log2(2/3) = 0.9183

Gain(Age <= 40) = 0.9911 – (6/9 · 1 + 3/9 · 0.9183) = 0.0183
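All three gains can be reproduced from the class counts above; a quick check (same helpers as the earlier sketch, with female/male counts per child node):

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

parent = [4, 5]                             # 4 F, 5 M in the training set
print(gain(parent, [[1, 3], [3, 2]]))       # Hair Length <= 5  -> ~0.0911
print(gain(parent, [[4, 1], [0, 4]]))       # Weight <= 160     -> ~0.5900
print(gain(parent, [[3, 3], [1, 2]]))       # Age <= 40         -> ~0.0183
```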
43
Weight <= 160?
  yes → Hair Length <= 2?
  no  → Male

Of the 3 features we had, Weight was best. But while people who weigh over 160 are perfectly classified (as males), the under-160 people are not perfectly classified, so we simply recurse!
This time we find that we can split on Hair Length, and we are done!
44
Weight <= 160?
  yes → Hair Length <= 2?
          yes → Male
          no  → Female
  no  → Male

We don’t need to keep the data around, just the test conditions.
How would these people be classified?
45
It is trivial to convert Decision Trees to rules...

Weight <= 160?
  yes → Hair Length <= 2?
          yes → Male
          no  → Female
  no  → Male

Rules to Classify Males/Females
If Weight greater than 160, classify as Male
Else if Hair Length less than or equal to 2, classify as Male
Else classify as Female
46
Decision tree for a typical shared-care setting applying the system for the diagnosis of prostatic obstructions.
Once we have learned the decision tree, we don’t even need a computer!
This decision tree is attached to a medical machine, and is designed to help nurses make decisions about what type of doctor to call.
47
(Figure: a one-split tree on “Wears green?” (Yes / No) that perfectly separates Male from Female in a tiny training set.)

The worked examples we have seen were performed on small datasets. However, with small datasets there is a great danger of overfitting the data...
When you have few data points, there are many possible splitting rules that perfectly classify the data but will not generalize to future datasets.
For example, the rule “Wears green?” perfectly classifies the data, but so does “Mother's name is Jacqueline?”, and so does “Has blue shoes?”...
48
How to Find the Best Split: GINI
Before splitting: parent node with counts C0: N00, C1: N01 and impurity M0.

Split on A?   Yes → Node N1 (C0: N10, C1: N11), impurity M1
              No  → Node N2 (C0: N20, C1: N21), impurity M2
              weighted child impurity: M12
Split on B?   Yes → Node N3 (C0: N30, C1: N31), impurity M3
              No  → Node N4 (C0: N40, C1: N41), impurity M4
              weighted child impurity: M34

Gain = M0 – M12  vs.  M0 – M34
49
Measure of Impurity: GINI (at node t)
• Gini index for a given node t with classes j:

    GINI(t) = 1 – Σj [p(j | t)]²

  NOTE: p(j | t) is computed as the relative frequency of class j at node t.
• Example: two classes C1 & C2, and node t has 5 C1 and 5 C2 examples. Compute Gini(t):
– 1 – [p(C1|t)² + p(C2|t)²] = 1 – [(5/10)² + (5/10)²]
– = 1 – [¼ + ¼] = ½
– Do you think this Gini value indicates a good split or a bad split? Is it an extreme value?
50
More on Gini
• The worst Gini corresponds to probabilities of 1/nc, where nc is the number of classes.
– For 2-class problems the worst Gini is ½
• How do we get the best Gini? Come up with an example for node t with 10 examples from classes C1 and C2:
– 10 C1 and 0 C2. Now what is the Gini?
  • 1 – [(10/10)² + (0/10)²] = 1 – [1 + 0] = 0
– So 0 is the best Gini
• So for 2-class problems:
– Gini varies from 0 (best) to ½ (worst)
51
Some More Examples
• Below are the Gini values for 4 nodes with different class distributions, ordered from best to worst. See the next slide for the computations.
– Note that thus far we are only computing GINI for one node. We need to compute it for a split and then compute the change in Gini from the parent node.

C1: 0, C2: 6   Gini = 0.000
C1: 1, C2: 5   Gini = 0.278
C1: 2, C2: 4   Gini = 0.444
C1: 3, C2: 3   Gini = 0.500
Examples for Computing GINI

    GINI(t) = 1 – Σj [p(j | t)]²

C1: 0, C2: 6    P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
                Gini = 1 – P(C1)² – P(C2)² = 1 – 0 – 1 = 0

C1: 1, C2: 5    P(C1) = 1/6,  P(C2) = 5/6
                Gini = 1 – (1/6)² – (5/6)² = 0.278

C1: 2, C2: 4    P(C1) = 2/6,  P(C2) = 4/6
                Gini = 1 – (2/6)² – (4/6)² = 0.444
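The same kind of check for Gini (a sketch):

```python
def gini(counts):
    """Gini(t) = 1 - sum_j p(j|t)^2, with p(j|t) the relative class frequencies."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))   # 0.0
print(gini([1, 5]))   # ~0.278
print(gini([2, 4]))   # ~0.444
print(gini([3, 3]))   # 0.5
```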
53
Splitting Criteria Based on Classification Error
• Classification error at a node t:

    Error(t) = 1 – max_i P(i | t)

• Measures the misclassification error made by a node.
• Maximum (1 – 1/nc) when records are equally distributed among all classes, implying the least interesting information.
• Minimum (0.0) when all records belong to one class, implying the most interesting information.
54
Examples for Computing Error

    Error(t) = 1 – max_i P(i | t)

C1: 0, C2: 6    P(C1) = 0/6 = 0,  P(C2) = 6/6 = 1
                Error = 1 – max(0, 1) = 1 – 1 = 0

C1: 1, C2: 5    P(C1) = 1/6,  P(C2) = 5/6
                Error = 1 – max(1/6, 5/6) = 1 – 5/6 = 1/6

C1: 2, C2: 4    P(C1) = 2/6,  P(C2) = 4/6
                Error = 1 – max(2/6, 4/6) = 1 – 4/6 = 1/3
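And for classification error:

```python
def classification_error(counts):
    """Error(t) = 1 - max_i P(i|t)."""
    n = sum(counts)
    return 1 - max(counts) / n

print(classification_error([0, 6]))   # 0.0
print(classification_error([1, 5]))   # ~0.167 (1/6)
print(classification_error([2, 4]))   # ~0.333 (1/3)
```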
55
Comparison Among Splitting Criteria
For a 2-class problem:
56
Discussion
• Error rate is often the metric used to evaluate a classifier (but not always)
– So it seems reasonable to use error rate to determine the best split
– That is, why not just use a splitting metric that matches the ultimate evaluation metric?
– But this is wrong!
• The reason is related to the fact that decision trees use a greedy strategy, so we need to use a splitting metric that leads to globally better results
• The other metrics will empirically outperform error rate, although there is no proof of this
DTs in practice...
• Growing to purity is bad (overfitting)
(Scatter plot: x1 = petal length on the horizontal axis, x2 = sepal width on the vertical axis.)
57
DTs in practice...
• Growing to purity is bad (overfitting)
(Same scatter plot as the previous slide.)
58
59
DTs in practice...
• Growing to purity is bad (overfitting)
– Terminate growth early
– Grow to purity, then prune back
60
DTs in practice...
• Growing to purity is bad (overfitting)
(Scatter plot: x1 = petal length, x2 = sepal width. Annotations: “Not statistically supportable leaf”, “Remove split & merge leaves”.)
61
Avoid Overfitting in Classification (more on overfitting later)
• The generated tree may overfit the training data
– Too many branches, some of which may reflect anomalies due to noise or outliers
– The result is poor accuracy on unseen samples
• Two approaches to avoid overfitting
– Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
  • Difficult to choose an appropriate threshold
– Postpruning: remove branches from a “fully grown” tree to get a sequence of progressively pruned trees
  • Use a set of data different from the training data to decide which is the “best pruned tree”
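For reference, in a library such as scikit-learn both ideas correspond to tuning parameters of DecisionTreeClassifier; a hedged sketch with arbitrary example values (not from the slides):

```python
from sklearn.tree import DecisionTreeClassifier

# Prepruning: stop growth early via thresholds on tree depth and node size.
prepruned = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10)

# Postpruning: grow fully, then prune back with cost-complexity pruning;
# ccp_alpha is usually chosen on held-out data (e.g., via cross-validation).
postpruned = DecisionTreeClassifier(ccp_alpha=0.01)
```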
62
Tree Induction
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion
• Issues
– Determine how to split the records
  • How to specify the attribute test condition?
  • How to determine the best split?
– Determine when to stop splitting
63
How to Specify the Test Condition?
• Depends on attribute type
– Nominal
– Ordinal
– Continuous
• Depends on the number of ways to split
– 2-way split
– Multi-way split
64
Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as there are distinct values.
  CarType → {Family}, {Sports}, {Luxury}
• Binary split: divide the values into two subsets; need to find the optimal partitioning.
  CarType → {Sports, Luxury} vs. {Family}   OR   CarType → {Family, Luxury} vs. {Sports}
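Finding the optimal binary partitioning means examining the 2^(k-1) – 1 ways of splitting k values into two non-empty subsets; a small sketch (helper name is mine):

```python
from itertools import combinations

def binary_partitions(values):
    """All ways to split a set of nominal values into two non-empty subsets."""
    values = list(values)
    rest = values[1:]
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            left = {values[0], *combo}
            right = set(values) - left
            if right:                      # skip the trivial split with an empty side
                yield left, right

for left, right in binary_partitions(["Family", "Sports", "Luxury"]):
    print(left, "vs", right)
# Three partitions (set printing order may vary):
#   {'Family'} vs {'Sports', 'Luxury'}
#   {'Family', 'Sports'} vs {'Luxury'}
#   {'Family', 'Luxury'} vs {'Sports'}
```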
65
Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as there are distinct values.
  Size → {Small}, {Medium}, {Large}
• Binary split: divide the values into two subsets; need to find the optimal partitioning.
  Size → {Small, Medium} vs. {Large}   OR   Size → {Medium, Large} vs. {Small}
• What about this split?  Size → {Small, Large} vs. {Medium}
66
Splitting Based on Continuous Attributes
• Different ways of handling
– Discretization to form an ordinal categorical attribute
  • Static: discretize once at the beginning
  • Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
– Binary decision: (A < v) or (A ≥ v)
  • Consider all possible splits and find the best cut
  • Can be more compute intensive
67
Splitting Based on Continuous Attributes
(i) Binary split:    Taxable Income > 80K?   (Yes / No)
(ii) Multi-way split: Taxable Income?   < 10K,  [10K, 25K),  [25K, 50K),  [50K, 80K),  > 80K
68
Data Fragmentation
• Number of instances gets smaller as you traverse down the tree
• Number of instances at the leaf nodes could be too small to make any statistically significant decision
69
Search Strategy
• Finding an optimal decision tree is NP-hard
• The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution
70
Expressiveness
• Decision trees provide an expressive representation for learning discrete-valued functions
– But they do not generalize well to certain types of Boolean functions
  • Example: the parity function
    – Class = 1 if there is an even number of Boolean attributes with truth value = True
    – Class = 0 if there is an odd number of Boolean attributes with truth value = True
  • For accurate modeling, we must have a complete tree
• Not expressive enough for modeling continuous variables
– Particularly when the test condition involves only a single attribute at a time
71
Decision Boundary
x < 0.43?
  Yes → y < 0.47?   (Yes → 4 : 0,   No → 0 : 4)
  No  → y < 0.33?   (Yes → 0 : 3,   No → 4 : 0)

(Scatter plot of the points in the unit square, x on the horizontal axis and y on the vertical axis, partitioned into rectangular regions by the tree above.)
• Border line between two neighboring regions of different classes is known as decision boundary
• Decision boundary is parallel to axes because test condition involves a single attribute at-a-time
72
Oblique Decision Trees
Test condition: x + y < 1   (one side Class = +, the other side the second class)
• Test condition may involve multiple attributes
• More expressive representation
• Finding optimal test condition is computationally expensive
73
Vertical/Horizontal Boundaries
500 circular and 500 triangular data points.
Circular points:   0.5 ≤ sqrt(x1² + x2²) ≤ 1
Triangular points: sqrt(x1² + x2²) > 1  or  sqrt(x1² + x2²) < 0.5
74
Tree Replication
(Figure: a tree with tests P, Q, R, and S in which the subtree rooted at Q, with child S and 0/1 leaves, appears in more than one branch.)
• The same subtree appears in multiple branches
75
Model Evaluation
• Metrics for Performance Evaluation– How to evaluate the performance of a model?
• Methods for Performance Evaluation– How to obtain reliable estimates?
76
(Three scatter plots: two 10 × 10 problems and one 100 × 100 problem.)
Which of the “Problems” can be solved by a Decision Tree?
1) Deep bushy tree
2) Useless
3) Deep bushy tree
The decision tree has a hard time with correlated attributes.
77
Advantages/Disadvantages of Decision Trees
• Advantages:
– Easy to understand (doctors love them!)
– Easy to generate rules
• Disadvantages:
– May suffer from overfitting
– Classifies by rectangular partitioning (so does not handle correlated features very well)
– Can be quite large; pruning is necessary
– Does not handle streaming data easily