Lecture 8: Ensemble classifiers. Bagging. Boosting

• Idea of boosting
• AdaBoost algorithm (Freund and Schapire)
• Why does boosting work?

COMP-652, Lecture 8 - October 5, 2009 1

Recall: Bias and variance

• For regression problems, the expected error can be decomposed as Bias² + Variance + Noise (a derivation is sketched below)
• Bias is typically caused by the hypothesis class being too simple, and hence not able to represent the true function (underfitting)
• Variance is typically caused by the hypothesis class being too large (overfitting)
• There is often a trade-off between bias and variance
• A similar but more involved decomposition of the error can be done for classification problems (using the 0-1 loss error function)
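For concreteness, here is a standard sketch of that regression decomposition at a fixed input x, assuming y = f(x) + ε with E[ε] = 0 and Var(ε) = σ², and writing h̄(x) = E_D[h_D(x)] for the average hypothesis over training sets D (the ε and σ² notation follows the usual convention and is not taken from the slide):

```latex
\begin{align*}
\mathbb{E}_{D,\varepsilon}\!\left[(y - h_D(x))^2\right]
  &= \mathbb{E}\!\left[(f(x) - h_D(x) + \varepsilon)^2\right]
   = \mathbb{E}_D\!\left[(f(x) - h_D(x))^2\right] + \sigma^2 \\
  &= \underbrace{\left(f(x) - \bar{h}(x)\right)^2}_{\mathrm{Bias}^2}
   + \underbrace{\mathbb{E}_D\!\left[(h_D(x) - \bar{h}(x))^2\right]}_{\mathrm{Variance}}
   + \underbrace{\sigma^2}_{\mathrm{Noise}}
\end{align*}
```

The cross terms vanish because E[ε] = 0 (and ε is independent of D) and E_D[h_D(x) − h̄(x)] = 0.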

COMP-652, Lecture 8 - October 5, 2009 2

Measuring bias and variance in practice

• Recall that bias and variance are both defined as expectations:

Bias(x) = E_P[f(x) − h̄(x)]

Var(x) = E_P[(h(x) − h̄(x))²]

• To get expected values, we simulated multiple data sets by drawing samples with replacement from the original data set
• This gives a set of hypotheses, whose predictions can be averaged together

COMP-652, Lecture 8 - October 5, 2009 3

Bootstrap replicates

• Given data set D, construct a bootstrap replicate of D, called D_b, which has the same number of examples, by drawing samples from D with replacement
• Use the learning algorithm to construct a hypothesis h_b by training on D_b
• Compute the prediction of h_b on each of the remaining points, from the set T_b = D − D_b
• This process is repeated B times, where B is typically a few hundred
• If D is very large, the replicates should contain m < |D| points (still drawn with replacement)
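A minimal Python/numpy sketch of this procedure (illustrative, not from the lecture): X and y are assumed to be numpy arrays, and `fit(X, y)` is assumed to be any learning routine that returns a callable predictor.

```python
import numpy as np

def bootstrap_predictions(X, y, fit, B=200, m=None, rng=None):
    """Train on B bootstrap replicates of (X, y) and collect out-of-bag predictions.

    Returns `preds`, where preds[i] holds the predictions made for example i by
    every hypothesis h_b whose replicate D_b did not contain example i.
    """
    rng = np.random.default_rng(rng)
    n = len(X)
    m = n if m is None else m                  # use m < n if the data set is very large
    preds = [[] for _ in range(n)]
    for _ in range(B):
        idx = rng.integers(0, n, size=m)       # draw the replicate D_b with replacement
        oob = np.setdiff1d(np.arange(n), idx)  # the remaining points T_b = D - D_b
        h_b = fit(X[idx], y[idx])              # train h_b on D_b
        if len(oob) > 0:
            for i, p in zip(oob, h_b(X[oob])): # predictions of h_b on T_b
                preds[i].append(p)
    return preds
```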

COMP-652, Lecture 8 - October 5, 2009 4

Estimating bias and variance

• For each point, we have a set of estimates h_1(x), . . ., h_K(x), with K ≤ B
• The average prediction, determined empirically, is:

h̄(x) = (1/K) Σ_{k=1}^{K} h_k(x)

• We estimate the bias as:

y − h̄(x)

• We estimate the variance as:

(1/(K − 1)) Σ_{k=1}^{K} (h̄(x) − h_k(x))²
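Continuing the hypothetical sketch above, the two estimators on this slide translate directly (assuming every point collected at least two out-of-bag predictions):

```python
import numpy as np

def estimate_bias_variance(preds, y):
    """Per-point bias and variance estimates, following the slide's formulas.

    preds[i] is the list of out-of-bag predictions h_1(x_i), ..., h_K(x_i)
    collected by the bootstrap sketch above.
    """
    bias = np.zeros(len(y))
    var = np.zeros(len(y))
    for i, hk in enumerate(preds):
        hk = np.asarray(hk, dtype=float)
        h_bar = hk.mean()                                    # average prediction h̄(x_i)
        bias[i] = y[i] - h_bar                               # y - h̄(x)
        var[i] = ((h_bar - hk) ** 2).sum() / (len(hk) - 1)   # (1/(K-1)) sum of (h̄(x) - h_k(x))²
    return bias, var
```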

COMP-652, Lecture 8 - October 5, 2009 5

Approximations

• Bootstrap replicates are not real data
• We typically ignore the noise
• If we had multiple points with the same x value, we could estimate the noise
• Alternatively, we can estimate the noise using “similar points”, if this is appropriate

COMP-652, Lecture 8 - October 5, 2009 6

Bagging: Bootstrap aggregation

• If we did all the work to get the hypotheses h_b, why not use all of them to make a prediction?
• In the classification case, all hypotheses can have a vote, and we pick the majority class
• For regression, we can average all the predictions
• Which hypothesis classes would benefit most from this approach?
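A bagging sketch in the same hypothetical style (the `fit` interface and the use of non-negative integer class labels are assumptions, not part of the lecture):

```python
import numpy as np

def bagging_predict(X, y, X_new, fit, B=100, classification=True, rng=None):
    """Bagging: train on B bootstrap replicates, then vote (or average)."""
    rng = np.random.default_rng(rng)
    n = len(X)
    preds = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # bootstrap replicate D_b of D
        h_b = fit(X[idx], y[idx])
        preds.append(h_b(X_new))
    preds = np.asarray(preds)              # shape (B, len(X_new))
    if classification:
        def majority(column):              # each hypothesis gets one vote
            return np.bincount(column.astype(int)).argmax()
        return np.apply_along_axis(majority, 0, preds)
    return preds.mean(axis=0)              # average the predictions for regression
```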

COMP-652, Lecture 8 - October 5, 2009 7

Estimated bias and variance of bagging

• According to our way of estimating variance and bias, bagging eliminates variance altogether!
• In practice, bagging tends to reduce variance and increase bias
• Hence, the main benefit is for “unstable” learners, i.e., learners with high variance.
• This includes complex hypothesis classes, e.g. decision trees (even unpruned), neural networks, and nearest-neighbor-type methods

COMP-652, Lecture 8 - October 5, 2009 8

Ensemble learning in general

• Ensemble learning algorithms work by running a base learning algorithm multiple times, then combining the predictions of the different hypotheses obtained, using some form of voting
• One approach is to construct several classifiers independently, then combine their predictions. Examples include:
– Bagging
– Randomizing the test selection in decision trees
– Using a different subset of input features to train different neural nets
• A second approach is to coordinate the construction of the hypotheses in the ensemble.

COMP-652, Lecture 8 - October 5, 2009 9

Additive models

• In an ensemble, the output on any instance is computed by averaging the outputs of several hypotheses, possibly with different weights.
• Hence, we should choose the individual hypotheses and their weights in such a way as to provide a good fit
• This suggests that instead of constructing the hypotheses independently, we should construct them such that new hypotheses focus on instances that are problematic for existing hypotheses.
• Boosting is an algorithm implementing this idea

COMP-652, Lecture 8 - October 5, 2009 10

Main idea of boosting

Component classifiers should concentrate more on difficult examples:

• Examine the training set
• Derive some rough “rule of thumb”
• Re-weight the examples of the training set, concentrating on “hard” cases for the previous rule
• Derive a second rule of thumb
• And so on... (repeat this T times)
• Combine the rules of thumb into a single, accurate predictor

Questions:

• How do we re-weight the examples?
• How do we combine the rules into a single classifier?

COMP-652, Lecture 8 - October 5, 2009 11

Notation

• Assume that examples are drawn independently from some probability distribution P on the set of possible data D
• Notation: J_P(h) is the expected error of h when data is drawn from P:

J_P(h) = Σ_{⟨x,y⟩} J(h(x), y) · P(⟨x, y⟩)

where J(h(x), y) could be the squared error, or the 0/1 loss
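In the boosting slides that follow, the quantity actually computed is the empirical version of J_P(h) under a weighting D over the N training examples; a one-line sketch with 0/1 loss (the function name is illustrative):

```python
import numpy as np

def weighted_error(pred, y, D):
    """Empirical J_D(h): expected 0/1 loss of predictions `pred` under the weights D."""
    return float(np.sum(D * (pred != y)))  # D is a probability vector over the N examples
```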

COMP-652, Lecture 8 - October 5, 2009 12

Weak learners

• Assume we have some “weak” binary classifiers (e.g., decision stumps: x_i > t)
• “Weak” means J_P(h) < 1/2 − γ where γ > 0 (i.e., the true error of the classifier is better than random)
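A sketch of such a weak learner: a decision stump of the form x_i > t, trained by brute force to minimize the weighted 0/1 error under a distribution D over the training examples (labels are assumed to be in {−1, +1}; the helper name is illustrative):

```python
import numpy as np

def train_stump(X, y, D):
    """Return a stump h(x) = s if x[j] > t else -s with minimal weighted error."""
    n, d = X.shape
    best_err, best = np.inf, None
    for j in range(d):                           # candidate feature
        for t in np.unique(X[:, j]):             # candidate threshold
            for s in (+1, -1):                   # orientation of the stump
                pred = np.where(X[:, j] > t, s, -s)
                err = np.sum(D * (pred != y))    # weighted 0/1 error J_D
                if err < best_err:
                    best_err, best = err, (j, t, s)
    j, t, s = best
    return lambda Xq: np.where(Xq[:, j] > t, s, -s)
```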

COMP-652, Lecture 8 - October 5, 2009 13

Boosting classifier

[Diagram: the original data set is re-weighted into distributions D1, D2, ..., Dn; a weak learner is trained on each distribution, producing hypotheses H1, H2, ..., Hn, which are combined into the final hypothesis F(H1, H2, ..., Hn).]

COMP-652, Lecture 8 - October 5, 2009 14

AdaBoost (Freund & Schapire, 1995)

1. Input N training examples {(x_1, y_1), . . ., (x_N, y_N)}, where x_i are the inputs and y_i is the desired class label

2. Let D_1(x_i) = 1/N (we start with a uniform distribution)

3. Repeat T times:
(a) Construct D_{t+1} from D_t (details in a moment)
(b) Train a new hypothesis h_{t+1} on distribution D_{t+1}

4. Construct the final hypothesis:

h_f(x) = sign( Σ_t α_t h_t(x) )

COMP-652, Lecture 8 - October 5, 2009 15

Constructing the new distribution

We want data on which we make mistakes to be emphasized:

D_{t+1}(x_i) = (1/Z_t) · D_t(x_i) × { β_t  if h_t(x_i) = y_i;  1  otherwise }

where

β_t = J_{D_t}(h_t) / (1 − J_{D_t}(h_t))

and Z_t is a normalization factor (set such that the probabilities D_{t+1}(x_i) sum to 1).

Construct the final hypothesis:

h_f(x) = sign( Σ_t α_t h_t(x) ),  where  α_t = log(1/β_t)
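Putting the two slides together, a minimal sketch of the full loop (labels are assumed to be in {−1, +1}, and `weak_learner(X, y, D)` is assumed to return a callable hypothesis, e.g. the stump trainer sketched earlier):

```python
import numpy as np

def adaboost(X, y, weak_learner, T=50):
    """AdaBoost (Freund & Schapire): h_f(x) = sign(sum_t alpha_t h_t(x))."""
    N = len(X)
    D = np.full(N, 1.0 / N)                     # D_1(x_i) = 1/N
    hyps, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, D)               # train h_t on distribution D_t
        err = float(np.sum(D * (h(X) != y)))    # J_{D_t}(h_t), the weighted 0/1 loss
        if err == 0.0:                          # perfect hypothesis: keep it and stop
            hyps.append(h)
            alphas.append(1.0)
            break
        if err >= 0.5:                          # no longer better than random: stop
            break
        beta = err / (1.0 - err)                # beta_t
        D = D * np.where(h(X) == y, beta, 1.0)  # down-weight correctly classified points
        D = D / D.sum()                         # divide by Z_t so D_{t+1} sums to 1
        hyps.append(h)
        alphas.append(np.log(1.0 / beta))       # alpha_t = log(1/beta_t)

    def h_f(X_query):
        votes = sum(a * h_t(X_query) for a, h_t in zip(alphas, hyps))
        return np.sign(votes)
    return h_f
```

With the earlier stump sketch, `h_f = adaboost(X, y, train_stump, T=50)` would combine up to 50 stumps into the weighted vote above.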

COMP-652, Lecture 8 - October 5, 2009 16

Toy example

[Figure: a toy two-class data set used to illustrate AdaBoost (screenshot labeled “Toy Example”, “Round 1”).]

COMP-652, Lecture 8 - October 5, 2009 17

Toy example: First step

[Figure: Round 1, the first weak hypothesis on the toy data set.]

COMP-652, Lecture 8 - October 5, 2009 18

Toy example: Second step

[Figure: Round 2, the second weak hypothesis, trained on the re-weighted examples.]

COMP-652, Lecture 8 - October 5, 2009 19

Toy example: Third step

[Figure: Round 3, the third weak hypothesis.]

COMP-652, Lecture 8 - October 5, 2009 20

Toy example: Final hypothesis

[Figure: the final combined hypothesis on the toy data set.]

UCI benchmarks, comparison with:
• C4.5 (Quinlan's decision tree algorithm)
• Decision stumps (only a single attribute)

COMP-652, Lecture 8 - October 5, 2009 21

Real data set: Text Categorization

Extensions: multiclass problems

• Reduce to a binary problem by creating several binary questions for each example:
– Does or does not example x belong to class 1?
– Does or does not example x belong to class 2?
– Does or does not example x belong to class 3?
. . .

Text categorization

• Decision stumps: presence of a word or short phrase. Example:

“If the word Clinton appears in the document predict
document is about politics”

• Databases: Reuters, AP

COMP-652, Lecture 8 - October 5, 2009 22

Boosting empirical evaluation

Boosting: experimental results (slide © Carlos Guestrin, 2005-2007). Comparison of C4.5, boosting C4.5, and boosting decision stumps (depth-1 trees) on 27 benchmark datasets [Freund & Schapire, 1996].

[Figure: plots comparing the errors of the competing methods.]

COMP-652, Lecture 8 - October 5, 2009 23

Bagging vs. Boosting

The figures and text below are excerpted from Freund & Schapire (1996):

[Figure 4: Comparison of boosting and bagging for each of the weak learners (scatter plots of bagging error versus boosting error for FindAttrTest, FindDecRule, and C4.5).]

… and assign each mislabel weight 1 times the number of times it was chosen. The hypotheses computed in this manner are then combined using voting in a natural manner; namely, given [...], the combined hypothesis outputs the label which maximizes [...]. For either error or pseudo-loss, the differences between bagging and boosting can be summarized as follows: (1) bagging always uses resampling rather than reweighting; (2) bagging does not modify the distribution over examples or mislabels, but instead always uses the uniform distribution; and (3) in forming the final hypothesis, bagging gives equal weight to each of the weak hypotheses.

3.3 THE EXPERIMENTS

We conducted our experiments on a collection of machine learning datasets available from the repository at the University of California at Irvine.³ A summary of some of the properties of these datasets is given in Table 1. Some datasets are provided with a test set. For these, we reran each algorithm 20 times (since some of the algorithms are randomized), and averaged the results. For datasets with no provided test set, we used 10-fold cross validation, and averaged the results over 10 runs (for a total of 100 runs of each algorithm on each dataset).

In all our experiments, we set the number of rounds of boosting or bagging to be 100.

3.4 RESULTS AND DISCUSSION

The results of our experiments are shown in Table 2. The figures indicate test error rate averaged over multiple runs of each algorithm. Columns indicate which weak learning algorithm was used, and whether pseudo-loss (AdaBoost.M2) or error (AdaBoost.M1) was used. Note that pseudo-loss was not used on any two-class problems since the resulting algorithm would be identical to the corresponding error-based algorithm. Columns labeled “–” indicate that the weak learning algorithm was used by itself (with no boosting or bagging). Columns using boosting or bagging are marked “boost” and “bag,” respectively.

One of our goals in carrying out these experiments was to determine if boosting using pseudo-loss (rather than error) is worthwhile. Figure 3 shows how the different algorithms performed on each of the many-class (> 2 classes) problems using pseudo-loss versus error. Each point in the scatter plot represents the error achieved by the two competing algorithms on a given benchmark, so there is one point

³ URL: http://www.ics.uci.edu/~mlearn/MLRepository.html

[Figure 5: Comparison of C4.5 versus various other boosting and bagging methods (scatter plots of the error of C4.5 versus boosting FindAttrTest, boosting FindDecRule, boosting C4.5, and bagging C4.5).]

for each benchmark. These experiments indicate that boosting using pseudo-loss clearly outperforms boosting using error. Using pseudo-loss did dramatically better than error on every non-binary problem (except it did slightly worse on “iris” with three classes). Because AdaBoost.M2 did so much better than AdaBoost.M1, we will only discuss AdaBoost.M2 henceforth.

As the figure shows, using pseudo-loss with bagging gave mixed results in comparison to ordinary error. Overall, pseudo-loss gave better results, but occasionally, using pseudo-loss hurt considerably.

Figure 4 shows similar scatterplots comparing the performance of boosting and bagging for all the benchmarks and all three weak learners. For boosting, we plotted the error rate achieved using pseudo-loss. To present bagging in the best possible light, we used the error rate achieved using either error or pseudo-loss, whichever gave the better result on that particular benchmark. (For the binary problems, and experiments with C4.5, only error was used.)

For the simpler weak learning algorithms (FindAttrTest and FindDecRule), boosting did significantly and uniformly better than bagging. The boosting error rate was worse than the bagging error rate (using either pseudo-loss or error) on a very small number of benchmark problems, and on these, the difference in performance was quite small. On average, for FindAttrTest, boosting improved the error rate over using FindAttrTest alone by 55.2%, compared to bagging which gave an improvement of only 11.0% using pseudo-loss or 8.4% using error. For FindDecRule, boosting improved the error rate by 53.0%, bagging by only 18.8% using pseudo-loss, 13.1% using error.

When using C4.5 as the weak learning algorithm, boosting and bagging seem more evenly matched, although boosting still seems to have a slight advantage. On average, boosting improved the error rate by 24.8%, bagging by 20.0%. Boosting beat bagging by more than 2% on 6 of the benchmarks, while bagging did not beat boosting by this amount on any benchmark. For the remaining 20 benchmarks, the difference in performance was less than 2%.

Figure 5 shows in a similar manner how C4.5 performed compared to bagging with C4.5, and compared to boosting with each of the weak learners (using pseudo-loss for the non-binary problems). As the figure shows, using boosting with FindAttrTest does quite well as a learning algorithm in its own right, in comparison to C4.5. This algorithm beat C4.5 on 10 of the benchmarks (by at least 2%), tied on 14, and lost on 3. As mentioned above, its average performance relative to using FindAttrTest by itself was 55.2%. In comparison, C4.5's improvement in performance [...]


COMP-652, Lecture 8 - October 5, 2009 24

Parallel of bagging and boosting

• Bagging is typically faster, but may get a smaller error reduction (not by much)
• Bagging works well with “reasonable” classifiers
• Boosting works with very simple classifiers, e.g., BoosTexter does text classification using decision stumps based on single words
• Boosting may have a problem if a lot of the data is mislabeled, because it will focus on those examples a lot, leading to overfitting.

COMP-652, Lecture 8 - October 5, 2009 25

Why does boosting work?

• Weak learners have high bias
• By combining them, we get more expressive classifiers
• Hence, boosting is a bias-reduction technique
• What happens as we run boosting longer?

COMP-652, Lecture 8 - October 5, 2009 26

Why does boosting work?

• Weak learners have high bias
• By combining them, we get more expressive classifiers
• Hence, boosting is a bias-reduction technique
• What happens as we run boosting longer? Intuitively, we get more and more complex hypotheses
• How would you expect bias and variance to evolve over time?

COMP-652, Lecture 8 - October 5, 2009 27

A naive (but reasonable) analysis of generalization error

• Expect the training error to continue to drop (until it reaches 0)
• Expect the test error to increase as we get more voters, and h_f becomes too complex.

[Plot: a sketch of how the error would be expected to evolve with the number of boosting rounds under this naive analysis.]

COMP-652, Lecture 8 - October 5, 2009 28

Actual typical run of AdaBoost

[Plot: AdaBoost training and test error (%) as a function of the number of rounds, on a logarithmic scale from 10 to 1000 rounds.]

• Test error does not increase even after 1000 rounds (more than 2 million decision nodes)!
• Test error continues to drop even after training error reaches 0!

These are consistent results across many sets of experiments!

COMP-652, Lecture 8 - October 5, 2009 29

Classification margin

• Boosting constructs hypotheses of the form h_f(x) = sign(f(x))
• The classification of an example is correct if sign(f(x)) = y
• The margin of a training example is defined as:

margin(f(x), y) = y · f(x)

• The margin tells us how close the decision boundary is to the data point
• The minimum margin over the data set gives an idea of how close the training points are to the decision boundary
• A higher margin on the training set should yield a lower generalization error
• Intuitively, increasing the margin is similar to lowering the variance
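A short sketch for computing these margins from a boosted ensemble, assuming the lists of hypotheses and weights from the AdaBoost loop are available; dividing f by Σ_t α_t (a common convention in the boosting-margin literature, not stated on the slide) keeps the margins in [−1, 1]:

```python
import numpy as np

def margins(X, y, hyps, alphas):
    """Normalized margins y * f(x), where f(x) = sum_t alpha_t h_t(x) / sum_t alpha_t."""
    alphas = np.asarray(alphas, dtype=float)
    f = sum(a * h(X) for a, h in zip(alphas, hyps)) / alphas.sum()
    return y * f                     # one margin per training example

# e.g. margins(X, y, hyps, alphas).min() is the minimum margin over the data set
```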

COMP-652, Lecture 8 - October 5, 2009 30

Effect of boosting on the margin

[Plots: error (%) versus the number of rounds (log scale, 10 to 1000), and the distribution of training-set margins over the range [−1, 1].]

• Between rounds 5 and 10 there is no training error reduction
• But there is a significant shift in the margin distribution!
• There is a formal proof that boosting increases the margin
• Next time: classifiers that explicitly aim to construct a large margin.

COMP-652, Lecture 8 - October 5, 2009 31

Summary

• Ensemble methods combine several hypotheses into one prediction
• They work better than the best individual hypothesis from the same class because they reduce bias or variance (or both)
• Bagging is mainly a variance-reduction technique, useful for complex hypotheses
• Its main idea is to sample the data repeatedly, train several classifiers, and average their predictions.
• Boosting focuses on harder examples, and gives a weighted vote to the hypotheses.
• Boosting works by reducing bias and increasing the classification margin.

COMP-652, Lecture 8 - October 5, 2009 32