Machine Learning Boosting (based on Rob Schapire’s IJCAI’99 talk and slides by B. Pardo)




Page 1:

Machine Learning

Boosting

(based on Rob Schapire's IJCAI'99 talk and slides by B. Pardo)

Page 2:

Horse Race Prediction

Page 3:

How to Make $$$ In Horse Races?

• Ask a professional.
• Suppose:

– Professional cannot give single highly accurate rule

– …but presented with a set of races, can always generate better-than-random rules

• Can you get rich?

Page 4:

Idea

1) Ask expert for rule-of-thumb
2) Assemble set of cases where rule-of-thumb fails (hard cases)
3) Ask expert for a rule-of-thumb to deal with the hard cases
4) Go to Step 2

• Combine all rules-of-thumb
• Expert could be "weak" learning algorithm
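A minimal sketch of this loop in Python. `ask_expert` stands in for any routine that, given a set of cases, returns a better-than-random rule; it and the other names are illustrative placeholders, not anything defined in these slides:

```python
def boost_by_hard_cases(ask_expert, cases, labels, rounds=5):
    rules = []
    focus_cases, focus_labels = cases, labels
    for _ in range(rounds):
        rule = ask_expert(focus_cases, focus_labels)        # 1) ask for a rule-of-thumb
        rules.append(rule)
        hard = [(x, y) for x, y in zip(cases, labels)       # 2) assemble the hard cases
                if rule(x) != y]                            #    (where the rule fails)
        if not hard:
            break
        focus_cases, focus_labels = zip(*hard)              # 3) ask about the hard cases
    return rules                                            # 4) repeat; combine rules later
```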

Page 5:

Questions

• How to choose races on each round?
  – concentrate on "hardest" races (those most often misclassified by previous rules of thumb)
• How to combine rules of thumb into single prediction rule?
  – take (weighted) majority vote of rules of thumb

Page 6:

Boosting

• boosting = general method of converting rough rules of thumb into highly accurate prediction rule

• more technically:
  – given a "weak" learning algorithm that can consistently find a hypothesis (classifier) with error ≤ ½ − γ
  – a boosting algorithm can provably construct a single hypothesis with error ≤ ε

Page 7:

This Lecture

• Introduction to boosting (AdaBoost)
• Analysis of training error
• Analysis of generalization error based on theory of margins
• Extensions
• Experiments

Page 8:

A Formal View of Boosting

• Given training set {(x1,y1),…,(xm,ym)}
• yi ∈ {−1,+1} is the correct label of instance xi ∈ X
• for timesteps t = 1,…,T:
  – construct a distribution Dt on {1,…,m}
  – find a weak hypothesis ht : X → {−1,+1} with error εt on Dt:

      εt = Pr_{i∼Dt}[ ht(xi) ≠ yi ]

• Output a final hypothesis Hfinal that combines the weak hypotheses in a good way. HOW?

Page 9:

Weighting the Votes

    Hfinal(x) = sgn( Σt αt ht(x) )

• Hfinal is a weighted combination of the choices from all our hypotheses.
  – αt: how seriously we take hypothesis t
  – ht(x): what hypothesis t guessed

Page 10:

The Hypothesis Weight

• αt determines how “seriously” we take this particular classifier’s answer

    αt = ½ ln( (1 − εt) / εt )

where εt is the error of ht on the training distribution Dt
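A quick numeric reading of this weight (the εt values below are illustrative, not from the slides): αt grows as the weak hypothesis's error drops below ½, and would be 0 at εt = ½.

```python
import math

# Illustrative error values: the closer eps_t is to 0, the larger alpha_t.
for eps in (0.45, 0.30, 0.10):
    alpha = 0.5 * math.log((1 - eps) / eps)
    print(f"eps_t = {eps:.2f}  ->  alpha_t = {alpha:.3f}")
# eps_t = 0.45  ->  alpha_t = 0.100
# eps_t = 0.30  ->  alpha_t = 0.424
# eps_t = 0.10  ->  alpha_t = 1.099
```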

Page 11:

The Training Distribution

• Dt determines which elements in the training set we focus on.

    D1(i) = 1/m        (m = size of the training set)

    Dt+1(i) = ( Dt(i) / Zt ) · e^(−αt)   if yi = ht(xi)
            = ( Dt(i) / Zt ) · e^(+αt)   if yi ≠ ht(xi)

where yi is the right answer, ht(xi) is what we guessed, and Zt is a normalization factor
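A minimal sketch of this reweighting step, assuming ±1 labels and predictions stored in NumPy arrays (the example numbers are made up):

```python
import numpy as np

def update_distribution(D, y, preds, alpha):
    # multiply by e^(-alpha) where the guess was right, e^(+alpha) where it was wrong
    new_D = D * np.exp(-alpha * y * preds)
    return new_D / new_D.sum()              # dividing by Z_t keeps it a distribution

m = 5
D = np.full(m, 1.0 / m)                      # D_1(i) = 1/m
y     = np.array([+1, +1, -1, -1, +1])       # the right answers
preds = np.array([+1, -1, -1, +1, +1])       # what the weak hypothesis guessed
print(update_distribution(D, y, preds, alpha=0.42))
# the two misclassified examples (i = 1 and i = 3) end up with the largest weight
```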

Page 12:

The Hypothesis Weight

    Dt+1(i) = ( Dt(i) / Zt ) · e^(−αt)   if yi = ht(xi)
            = ( Dt(i) / Zt ) · e^(+αt)   if yi ≠ ht(xi)

    αt = ½ ln( (1 − εt) / εt ) > 0

Since εt < ½, αt is positive: correctly classified examples have their weight multiplied by e^(−αt) < 1, misclassified examples by e^(+αt) > 1.

Page 13:

AdaBoost [Freund&Schapire ’97]

• constructing Dt:
  – D1(i) = 1/m
  – given Dt and ht:

      Dt+1(i) = ( Dt(i) / Zt ) · e^(−αt)   if yi = ht(xi)
              = ( Dt(i) / Zt ) · e^(+αt)   if yi ≠ ht(xi)
              = ( Dt(i) / Zt ) · exp( −αt · yi · ht(xi) )

    where Zt = normalization constant and αt = ½ ln( (1 − εt) / εt ) > 0

• final hypothesis:

      Hfinal(x) = sgn( Σt αt ht(x) )
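A compact, runnable sketch of the algorithm on this slide. It uses decision stumps as the weak learner purely for illustration (the slide itself does not fix the weak learner), and NumPy arrays with ±1 labels:

```python
import numpy as np

def train_stump(X, y, D):
    """Weak learner: single-feature threshold rule with lowest weighted error under D."""
    best = (0, 0.0, +1, 1.0)                        # (feature, threshold, sign, error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = D[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, sign, err)
    return best

def adaboost(X, y, T=20):
    """Returns the list of (alpha_t, stump_t) pairs, one per round."""
    m = len(y)
    D = np.full(m, 1.0 / m)                          # D_1(i) = 1/m
    ensemble = []
    for _ in range(T):
        j, thr, sign, eps = train_stump(X, y, D)     # weak hypothesis h_t, error eps_t
        if eps >= 0.5:                               # no better than random: stop early
            break
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-12))
        pred = np.where(X[:, j] <= thr, sign, -sign)
        D = D * np.exp(-alpha * y * pred)            # D_{t+1} ∝ D_t · exp(−alpha_t y_i h_t(x_i))
        D /= D.sum()                                 # divide by Z_t
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    """H_final(x) = sgn( Σ_t alpha_t h_t(x) )"""
    votes = sum(alpha * np.where(X[:, j] <= thr, sign, -sign)
                for alpha, j, thr, sign in ensemble)
    return np.sign(votes)
```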

Page 14:

Toy Example

Page 15:

Round 1

Page 16:

Round 2

Page 17:

Round 3

Page 18:

Final Hypothesis

Page 19:

Analyzing the Training Error

• Theorem [Freund&Schapire '97]: write εt as ½ − γt

      training error( Hfinal ) ≤ exp( −2 Σt γt² )

  so if ∀t: γt ≥ γ > 0, then

      training error( Hfinal ) ≤ e^(−2 γ² T)
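For a feel of the rate, here is the bound evaluated at an illustrative edge of γ = 0.1 (numbers not from the slides): the guaranteed training error falls below 1% within a few hundred rounds.

```python
import math

gamma = 0.1                          # every weak hypothesis is 10% better than random
for T in (10, 100, 250):
    bound = math.exp(-2 * gamma**2 * T)
    print(f"T = {T:3d}  ->  training error <= {bound:.4f}")
# T =  10  ->  training error <= 0.8187
# T = 100  ->  training error <= 0.1353
# T = 250  ->  training error <= 0.0067
```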

Page 20:

Analyzing the Training Error

So what? This means AdaBoost is adaptive:
• does not need to know γ or T a priori
• works as long as γt > 0

Page 21:

Proof Intuition

• on round t: increase weight of examples incorrectly classified by ht
• if xi incorrectly classified by Hfinal
  – then xi incorrectly classified by weighted majority of ht's
  – then xi must have "large" weight under final dist. DT+1
• since total weight ≤ 1: number of incorrectly classified examples "small"

Page 22:

Analyzing Generalization Error

We expect:
• training error to continue to drop (or reach zero)
• test error to increase when Hfinal becomes "too complex" (Occam's razor)

Page 23:

A Typical Run

• Test error does not increase even after 1,000 rounds (~2,000,000 nodes)

• Test error continues to drop after training error is zero!
• Occam's razor wrongly predicts "simpler" rule is better.

(boosting C4.5 on the "letter" dataset)

Page 24:

A Better Story: Margins

Key idea: consider the confidence (margin):

• with

      f(x) = Σt αt ht(x) / Σt αt  ∈ [−1, +1]

      Hfinal(x) = sgn( f(x) )

• define: margin of (x, y) = y · f(x)
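A small sketch of this margin computation in code, reusing the (alpha, stump) ensemble format from the AdaBoost sketch above (that format is an assumption of these notes, not part of the slide):

```python
import numpy as np

def margins(ensemble, X, y):
    """margin(x, y) = y · f(x), with f(x) = Σ_t alpha_t h_t(x) / Σ_t alpha_t."""
    votes = sum(alpha * np.where(X[:, j] <= thr, sign, -sign)
                for alpha, j, thr, sign in ensemble)
    f = votes / sum(alpha for alpha, *_ in ensemble)
    return y * f        # lies in [-1, +1]; positive exactly when Hfinal is correct on (x, y)
```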

Page 25:

Margins for Toy Example

Page 26:

The Margin Distribution

epoch                  5      100     1000
training error (%)     0.0    0.0     0.0
test error (%)         8.4    3.3     3.1
% margins ≤ 0.5        7.7    0.0     0.0
minimum margin         0.14   0.52    0.55

Page 27:

Boosting Maximizes Margins

• Can be shown to minimize

      Σi e^(−yi f(xi))  =  Σi exp( −yi Σt αt ht(xi) )

  where the exponent yi Σt αt ht(xi) is ∝ to the margin of (xi, yi)
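In code this objective is one line; a sketch, assuming `votes[i]` holds the unnormalized vote Σt αt ht(xi) (as computed inside the sketches above):

```python
import numpy as np

def exponential_loss(votes, y):
    """Σ_i exp(-y_i f(x_i)), the quantity AdaBoost can be shown to minimize."""
    return np.exp(-y * votes).sum()
```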

Page 28:

Analyzing Boosting Using Margins

generalization error bounded by function of training sample margins:

• larger margin ⇒ better bound
• bound independent of # of epochs
• boosting tends to increase margins of training examples by concentrating on those with smallest margin

      error ≤ P̂r[ margin_f(x, y) ≤ θ ] + Õ( √( VC(H) / (m θ²) ) )

Page 29:

Relation to SVMs

SVM: map x into high-dim space, separate data linearly

Page 30:

Relation to SVMs (cont.)

    Hα(x) = +1 if α · h(x) > 0,  −1 otherwise

e.g., with

    h(x) = (1, x, x², x³, x⁴, x⁵)
    α = (10, 1, −5, 0, 0, 2)

this gives

    H(x) = +1 if 10 + x − 5x² + 2x⁵ > 0,  −1 otherwise

Page 31:

Relation to SVMs

• Both maximize margins:

      θ = maxα mini  yi ( α · h(xi) ) / ||α||

• SVM: Euclidean norm ||α||2 (L2)
• AdaBoost: Manhattan norm ||α||1 (L1)

• Has implications for optimization, PAC bounds

See [Freund et al ‘98] for details

Page 32:

Extensions: Multiclass Problems

• Reduce to binary problem by creating several binary questions for each example:

• "does or does not example x belong to class 1?"
• "does or does not example x belong to class 2?"
• "does or does not example x belong to class 3?"

. . .
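A sketch of this one-vs-rest reduction. `train_binary` and `score` stand for any binary boosting routine and its real-valued vote (for example the stump-based `adaboost` sketched earlier); both names are placeholders, not from the slides:

```python
import numpy as np

def train_one_vs_rest(train_binary, X, y):
    """One binary booster per class k, answering 'does x belong to class k?'."""
    return {k: train_binary(X, np.where(y == k, +1, -1)) for k in np.unique(y)}

def predict_one_vs_rest(score, models, X):
    """Pick the class whose binary booster answers 'yes' most confidently."""
    classes = sorted(models)
    scores = np.stack([score(models[k], X) for k in classes], axis=1)
    return np.array(classes)[scores.argmax(axis=1)]
```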

Page 33:

Extensions: Confidences and Probabilities

• Prediction of hypothesis ht:   sgn( ht(x) )

• Confidence of hypothesis ht:   | ht(x) |

• Probability of Hfinal:

      Prf[ y = +1 | x ] = e^f(x) / ( e^f(x) + e^−f(x) )

[Schapire&Singer ‘98], [Friedman, Hastie & Tibshirani ‘98]
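This probability is just a logistic transform of the vote; a minimal sketch (the sample f values are illustrative):

```python
import numpy as np

def prob_positive(f):
    """Pr_f[y = +1 | x] = e^f / (e^f + e^-f), i.e. the logistic function applied to 2·f(x)."""
    return 1.0 / (1.0 + np.exp(-2.0 * f))

print(prob_positive(np.array([-1.0, 0.0, 0.5])))   # ≈ [0.119, 0.5, 0.731]
```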

Page 34:

Practical Advantages of AdaBoost

• (quite) fast
• simple + easy to program
• only a single parameter to tune (T)
• no prior knowledge
• flexible: can be combined with any classifier (neural net, C4.5, …)
• provably effective (assuming weak learner)

• shift in mind set: goal now is merely to find hypotheses that are better than random guessing

• finds outliers

Page 35:

Caveats

• performance depends on data & weak learner
• AdaBoost can fail if
  – weak hypothesis too complex (overfitting)
  – weak hypothesis too weak (γt → 0 too quickly): underfitting; low margins → overfitting

• empirically, AdaBoost seems susceptible to noise

Page 36:

UCI Benchmarks

Comparison with
• C4.5 (Quinlan's decision tree algorithm)
• Decision stumps (only a single attribute)

Page 37:

Text Categorization

database: Reuters

Page 38:

Conclusion

• boosting: a useful tool for classification problems
• grounded in rich theory
• performs well experimentally
• often (but not always) resistant to overfitting
• many applications

• but:
  – slower classifiers
  – result less comprehensible
  – sometimes susceptible to noise

Page 39:

Other Ensembles

• Bagging

• Stacking

Page 40:

Background

• [Valiant '84]: introduced theoretical PAC model for studying machine learning
• [Kearns & Valiant '88]: open problem of finding a boosting algorithm
• [Schapire '89], [Freund '90]: first polynomial-time boosting algorithms
• [Drucker, Schapire & Simard '92]: first experiments using boosting

Page 41:

Background (cont.)

• [Freund & Schapire '95]:
  – introduced the AdaBoost algorithm
  – strong practical advantages over previous boosting algorithms

• experiments using AdaBoost:
  [Drucker & Cortes '95]    [Schapire & Singer '98]
  [Jackson & Craven '96]    [Maclin & Opitz '97]
  [Freund & Schapire '96]   [Bauer & Kohavi '97]
  [Quinlan '96]             [Schwenk & Bengio '98]
  [Breiman '96]             [Dietterich '98]

• continuing development of theory & algorithms:
  [Schapire, Freund, Bartlett & Lee '97]   [Schapire & Singer '98]
  [Breiman '97]                            [Mason, Bartlett & Baxter '98]
  [Grove & Schuurmans '98]                 [Friedman, Hastie & Tibshirani '98]