Machine Learning – Linear Classifiers and Boosting CS 271: Fall 2007 Instructor: Padhraic Smyth


Page 1

Machine Learning – Linear Classifiers and Boosting

CS 271: Fall 2007

Instructor: Padhraic Smyth

Page 2

Final Exam: Thursday 1:30, this classroom

• Same format as midterm

• Topics:
  – Search:
    • 3.1-3.5, 4.1-4.4, 6.1-6.3 (no CSPs)
  – Propositional Logic:
    • all of Ch 7 except FC, BC, and DPLL
  – First-Order Logic:
    • all of Ch 8, and 9.1, 9.2 and 9.5
  – Uncertainty and Bayesian Networks:
    • all of Chapter 13
    • 14.1-14.4, except for subsections on continuous variables
  – Machine Learning:
    • Decision trees and boosting: 18.1-18.4
    • Perceptrons: Ch 20, pages 737-742
  – Plus all material in class slides and homeworks related to the topics above – but not including the face detection material in today’s slides

Page 3

Outline

• Different types of learning problems

• Different types of learning algorithms

• Supervised learning
  – Decision trees
  – Naïve Bayes
  – Perceptrons, Multi-layer Neural Networks
  – Boosting

• Applications: learning to detect faces in images

• Reading for today’s lecture:
  – Chapter 18.1 to 18.4 (inclusive), plus pages 736-743

Page 4

Training Data for Supervised Learning

Page 5

Classification Problem with Overlap

[Scatter plot: training data from two overlapping classes, plotted as Feature 1 (x-axis) vs. Feature 2 (y-axis), both on a 0–8 scale]

Page 6

Inductive learning

• Let x represent the input vector of attributes
  – xj is the jth component of the vector x, i.e., the value of the jth attribute, j = 1,…,d

• Let f(x) represent the value of the target variable for x
  – The implicit mapping from x to f(x) is unknown to us
  – We just have training data pairs, D = {x, f(x)}, available

• We want to learn a mapping from x to f, i.e., find h(x; θ) that is “close” to f(x) for all training data points x

  θ are the parameters of our predictor h(..)

• Examples:
  – h(x; θ) = sign(w1x1 + w2x2 + w3)

– hk(x) = (x1 OR x2) AND (x3 OR NOT(x4))
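For concreteness, here is a minimal sketch (toy data and parameter values are made up for illustration, not from the slides) of measuring how “close” a candidate hypothesis h(x; θ) of the first form is to the targets f(x) on a small training set:

```python
import numpy as np

# Toy training pairs D = {x, f(x)}: two attributes per input, labels +1 / -1.
X = np.array([[2.0, 1.0], [1.0, 3.0], [4.0, 5.0], [5.0, 2.0]])
f = np.array([-1, -1, +1, +1])

def h(x, theta):
    """Linear-threshold hypothesis h(x; theta) = sign(w1*x1 + w2*x2 + w3)."""
    w1, w2, w3 = theta
    return 1 if w1 * x[0] + w2 * x[1] + w3 > 0 else -1

def training_error(theta):
    """Fraction of training points where h(x; theta) disagrees with f(x)."""
    predictions = np.array([h(x, theta) for x in X])
    return np.mean(predictions != f)

# Learning = searching the parameter space for a theta that makes h close to f.
print(training_error((1.0, 1.0, -6.0)))   # one candidate setting: error 0.0
print(training_error((0.0, 1.0, -2.5)))   # another candidate: error 0.5
```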

Page 7

Decision Boundaries

[Scatter plot: the same two-class data on Feature 1 vs. Feature 2, now with a decision boundary separating Decision Region 1 from Decision Region 2]

Page 8

Classification in Euclidean Space

• A classifier is a partition of the space x into disjoint decision regions
  – Each region has a label attached
  – Regions with the same label need not be contiguous
  – For a new test point, find which decision region it is in, and predict the corresponding label

• Decision boundaries = boundaries between decision regions
  – The “dual representation” of decision regions

• We can characterize a classifier by the equations for its decision boundaries

• Learning a classifier = searching for the decision boundaries that optimize our objective function

Page 9

Example: Decision Trees

• When applied to real-valued attributes, decision trees produce “axis-parallel” linear decision boundaries

• Each internal node is a binary threshold test of the form xj > t?
  – converts each real-valued feature into a binary one
  – requires evaluation of N-1 possible threshold locations for N data points, for each real-valued attribute, for each internal node (see the sketch below)
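A minimal sketch of this threshold search for one real-valued attribute, scoring each of the N-1 candidate midpoints by information gain (function names and the toy data are mine, shown only to illustrate the idea):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array (labels in {-1, +1})."""
    if len(labels) == 0:
        return 0.0
    p = np.mean(labels == 1)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_threshold(x, y):
    """Scan the N-1 candidate thresholds (midpoints between consecutive sorted
    values of one attribute x) and return the split x > t with max information gain."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    base = entropy(ys)
    best_t, best_gain = None, -1.0
    for i in range(len(xs) - 1):          # the N-1 candidate locations
        t = 0.5 * (xs[i] + xs[i + 1])     # midpoint between consecutive values
        left, right = ys[xs <= t], ys[xs > t]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(ys)
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# Example: a threshold near 3.0 separates the two classes cleanly.
x = np.array([1.0, 2.0, 2.5, 3.5, 4.0, 5.0])
y = np.array([-1, -1, -1, +1, +1, +1])
print(best_threshold(x, y))   # -> (3.0, 1.0)
```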

Page 10

Decision Tree Example

[Figure: training data plotted as Income (x-axis) vs. Debt (y-axis)]

Page 11

Decision Tree Example

[Figure: the data split by a vertical threshold t1 on Income; the corresponding tree has root test “Income > t1”, with one branch still to be expanded (“??”)]

Page 12

Decision Tree Example

[Figure: a second threshold t2 on Debt further splits one side of the Income > t1 partition; the tree now tests Income > t1, then Debt > t2, with one branch still to be expanded (“??”)]

Page 13

Decision Tree Example

[Figure: a third threshold t3 on Income completes the partition; the tree tests Income > t1, Debt > t2, and Income > t3]

Page 14

Decision Tree Example

[Figure: the completed partition defined by thresholds t1, t2, t3, with the corresponding tree tests Income > t1, Debt > t2, Income > t3]

Note: tree boundaries are linear and axis-parallel

Page 15

A Simple Classifier: Minimum Distance Classifier

• Training
  – Separate the training vectors by class
  – Compute the mean μk for each class k, k = 1,…,m

• Prediction
  – Compute the closest mean to a test vector x’ (using Euclidean distance)
  – Predict the corresponding class

• In the 2-class case, the decision boundary is the hyperplane that is halfway between the 2 means and orthogonal to the line connecting them

• This is a very simple-minded classifier – easy to think of cases where it will not work very well
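A minimal sketch of this train/predict procedure (the class name, method names, and toy data below are illustrative choices, not from the slides):

```python
import numpy as np

class MinimumDistanceClassifier:
    """Training: compute one mean vector per class.
    Prediction: assign a test point to the class with the closest mean
    (Euclidean distance)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # mu_k = mean of the training vectors belonging to class k
        self.means_ = np.array([X[y == k].mean(axis=0) for k in self.classes_])
        return self

    def predict(self, X):
        # distances from every test point to every class mean
        d = np.linalg.norm(X[:, None, :] - self.means_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Toy 2-class example (made-up data)
X = np.array([[1.0, 1.0], [2.0, 1.5], [6.0, 6.0], [7.0, 5.5]])
y = np.array([1, 1, 2, 2])
clf = MinimumDistanceClassifier().fit(X, y)
print(clf.predict(np.array([[2.0, 2.0], [6.5, 6.0]])))   # -> [1 2]
```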

Page 16

Minimum Distance Classifier

[Scatter plot: the two-class data on Feature 1 vs. Feature 2, classified by the minimum distance classifier]

Page 17

Another Example: Nearest Neighbor Classifier

• The nearest-neighbor classifier
  – Given a test point x’, compute the distance between x’ and each input data point
  – Find the closest neighbor in the training data
  – Assign x’ the class label of this neighbor
  – (this sort of generalizes the minimum distance classifier to exemplars)

• If Euclidean distance is used as the distance measure (the most common choice), the nearest neighbor classifier results in piecewise linear decision boundaries

• Many extensions
  – e.g., kNN: vote based on the k nearest neighbors
  – k can be chosen by cross-validation
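A minimal sketch of the kNN prediction just described (k=1 gives the basic nearest-neighbor classifier; the function name and toy data are illustrative):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=1):
    """Find the k closest training points to x_test (Euclidean distance)
    and return the majority class label among them."""
    dists = np.linalg.norm(X_train - x_test, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]                    # indices of the k closest neighbors
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

# Toy example (made-up data)
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 5.0], [6.0, 5.5]])
y_train = np.array([1, 1, 2, 2])
print(knn_predict(X_train, y_train, np.array([1.2, 1.4]), k=1))  # -> 1
print(knn_predict(X_train, y_train, np.array([5.4, 5.2]), k=3))  # -> 2
```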

Page 18

Local Decision Boundaries

[Figure: three points of class 1 and three points of class 2 on Feature 1 vs. Feature 2, with a query point “?”. Boundary? Points that are equidistant between points of class 1 and class 2. Note: locally the boundary is linear]

Page 19

Finding the Decision Boundaries

[Figure: the same six labeled points and query point “?”; the local decision boundaries are constructed piece by piece]

Page 20

Finding the Decision Boundaries

[Figure: the same six labeled points and query point “?”; further local boundary segments added]

Page 21

Finding the Decision Boundaries

[Figure: the same six labeled points and query point “?”; remaining local boundary segments added]

Page 22

Overall Boundary = Piecewise Linear

[Figure: the same points with the overall piecewise-linear boundary separating the Decision Region for Class 1 from the Decision Region for Class 2]

Page 23

Nearest-Neighbor Boundaries on this data set?

[Scatter plot: the two-class data from earlier, plotted on Feature 1 vs. Feature 2]

Page 24

Linear Classifiers

• Linear classifier = a single linear decision boundary (for the 2-class case)

• We can always represent a linear decision boundary by a linear equation:

  w1 x1 + w2 x2 + … + wd xd = Σj wj xj = wᵀx = 0

• In d dimensions, this defines a (d-1)-dimensional hyperplane
  – for d=3 we get a plane; for d=2 we get a line

• For prediction we simply check whether Σj wj xj > 0

• The wj are the weights (parameters)
  – Learning consists of searching in the d-dimensional weight space for the set of weights (the linear boundary) that minimizes an error measure

• Note that a minimum distance classifier is a special (restricted) case of a linear classifier
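To see why the minimum distance classifier is a special case of a linear classifier, note that comparing squared Euclidean distances to the two class means μ₁ and μ₂ reduces to a linear inequality (the ||x||² terms cancel):

```latex
\|x-\mu_1\|^2 < \|x-\mu_2\|^2
\;\Longleftrightarrow\;
-2\,\mu_1^\top x + \|\mu_1\|^2 \;<\; -2\,\mu_2^\top x + \|\mu_2\|^2
\;\Longleftrightarrow\;
\underbrace{2(\mu_1-\mu_2)^\top}_{w^\top} x \;+\; \underbrace{\|\mu_2\|^2-\|\mu_1\|^2}_{w_0} \;>\; 0 ,
```

so the decision boundary wᵀx + w₀ = 0 is a hyperplane that passes through the midpoint of the two means and is orthogonal to μ₁ − μ₂, exactly as described on the earlier slide.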

Page 25

[Scatter plot: the two-class data on Feature 1 vs. Feature 2, with a candidate linear decision boundary drawn through it]

A Possible Decision Boundary

Page 26

[Scatter plot: the two-class data on Feature 1 vs. Feature 2, with a different candidate linear decision boundary]

Another Possible Decision Boundary

Page 27

[Scatter plot: the two-class data on Feature 1 vs. Feature 2, with the minimum-error linear decision boundary]

Minimum Error Decision Boundary

Page 28

The Perceptron Classifier (pages 740-743 in text)

• The perceptron classifier is just another name for a linear classifier for 2-class data, i.e.,

output(x) = sign( Σj wj xj )

• Loosely motivated by a simple model of how neurons fire

• For mathematical convenience, class labels are +1 for one class and -1 for the other

• Two major types of algorithms for training perceptrons
  – Objective function = classification accuracy (“error correcting”)
  – Objective function = squared error (use gradient descent)
  – Gradient descent is generally faster and more efficient – but there is a problem!

Page 29

The Sigmoid Function

• The sigmoid function is defined as

  σ(f) = [ 2 / ( 1 + exp(−f) ) ] − 1

• Derivative of the sigmoid

  ∂σ(f)/∂f = 0.5 * ( σ(f) + 1 ) * ( 1 − σ(f) )

[Plot: σ(f) as a function of f]
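As a quick sanity check (not part of the slides), the sketch below compares the analytic derivative formula above with a numerical finite-difference estimate:

```python
import numpy as np

def sigma(f):
    """Sigmoid used on this slide: maps f to the interval (-1, +1)."""
    return 2.0 / (1.0 + np.exp(-f)) - 1.0

def dsigma(f):
    """Derivative formula from the slide: 0.5 * (sigma(f) + 1) * (1 - sigma(f))."""
    s = sigma(f)
    return 0.5 * (s + 1.0) * (1.0 - s)

f = np.linspace(-4, 4, 9)
numeric = (sigma(f + 1e-6) - sigma(f - 1e-6)) / 2e-6   # central finite difference
print(np.max(np.abs(numeric - dsigma(f))))             # prints a tiny number: they agree
```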

Page 30

Two different types of perceptron output

[Plot: perceptron output o(f) versus f]

The x-axis of the plot is f(x) = f = the weighted sum of inputs; the y-axis is the perceptron output

• Thresholded output: takes values +1 or -1

• Sigmoid output σ(f): takes real values between -1 and +1

The sigmoid is in effect an approximation to the threshold function above, but it has a gradient that we can use for learning

Page 31

Squared Error for Perceptron with Sigmoidal Output

• Squared error = E[w] = Σi [ σ(f[x(i)]) − t(i) ]²

  where x(i) is the ith input vector in the training data, i = 1,…,N
        t(i) is the ith target value (-1 or 1)

  f[x(i)] = Σj wj xj(i) is the weighted sum of inputs

  σ(f[x(i)]) is the sigmoid of the weighted sum

• Note that everything is fixed (once we have the training data) except for the weights w

• So we want to minimize E[w] as a function of w

Page 32

Gradient Descent Learning of Weights

Gradient Descent Rule:

w_new = w_old − η ∇E[w]     where

∇E[w] is the gradient of the error function E with respect to the weights, and η is the learning rate (small, positive)

Notes:

1. This moves us downhill in the direction −∇E[w] (steepest descent)

2. How far we go is determined by the value of η

Page 33

Gradient Descent Update Equation

• From basic calculus, for a perceptron with sigmoid output and the squared error objective function, the gradient with respect to weight wj for a single input x(i) is

  ∂E[w]/∂wj = − ( t(i) − σ[f(i)] ) σ′[f(i)] xj(i)

• Gradient descent weight update rule:

  wj = wj + η ( t(i) − σ[f(i)] ) σ′[f(i)] xj(i)

  – can be rewritten as:

    wj = wj + η * error * c * xj(i)

    where error = ( t(i) − σ[f(i)] ) and c = σ′[f(i)]
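The slide states the gradient without derivation; as a supporting step, applying the chain rule to a single training example gives (the constant factor of 2 is conventionally absorbed into the learning rate η):

```latex
\frac{\partial}{\partial w_j}\big(\sigma(f(i)) - t(i)\big)^2
= 2\,\big(\sigma(f(i)) - t(i)\big)\,\sigma'(f(i))\,\frac{\partial f(i)}{\partial w_j}
= -2\,\big(t(i) - \sigma(f(i))\big)\,\sigma'(f(i))\,x_j(i),
```

since f(i) = Σj wj xj(i) implies ∂f(i)/∂wj = xj(i). Stepping in the negative gradient direction then yields exactly the update rule above.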

Page 34

Pseudo-code for Perceptron Training

• Inputs: N training examples (feature vectors), N targets (class labels), learning rate η
• Outputs: a set of learned weights w1,…,wd

Initialize each wj (e.g., randomly)

while (termination condition not satisfied)
    for i = 1 : N                % loop over data points (one pass = an iteration)
        for j = 1 : d            % loop over weights
            deltawj = η ( t(i) − σ[f(i)] ) σ′[f(i)] xj(i)
            wj = wj + deltawj
        end
    end
    calculate termination condition
end
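Below is a minimal runnable Python version of this pseudo-code (a sigmoid perceptron trained by incremental gradient descent on squared error). The toy data, the fixed iteration count used as the termination condition, and the variable names are illustrative choices, not part of the slides:

```python
import numpy as np

def sigma(f):
    return 2.0 / (1.0 + np.exp(-f)) - 1.0          # sigmoid with outputs in (-1, +1)

def train_perceptron(X, t, eta=0.1, n_iterations=200):
    """Incremental gradient descent for a sigmoid perceptron with squared error.
    X: N x d array of inputs, t: N targets in {-1, +1}. Returns the learned weights."""
    N, d = X.shape
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.1, 0.1, size=d)             # initialize each w_j randomly
    for _ in range(n_iterations):                  # simple termination condition
        for i in range(N):                         # loop over data points
            f = np.dot(w, X[i])                    # weighted sum of inputs
            s = sigma(f)
            ds = 0.5 * (s + 1.0) * (1.0 - s)       # sigmoid derivative
            w += eta * (t[i] - s) * ds * X[i]      # update all weights w_j at once
    return w

# Toy linearly separable problem; the last column of 1s acts as an intercept weight.
X = np.array([[-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0], [1.0, 2.0, 1.0], [2.0, 1.0, 1.0]])
t = np.array([-1.0, -1.0, 1.0, 1.0])
w = train_perceptron(X, t)
print(np.sign(X @ w))                              # should match t: [-1 -1  1  1]
```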

Page 35

Comments on Perceptron Learning

• Iteration = one pass through all of the data

• Algorithm presented = incremental gradient descent
  – Weights are updated after visiting each input example
  – Alternatives:
    • Batch: update weights after each iteration (typically slower)
    • Stochastic: randomly select examples and then do weight updates (see text, p. 742)

• Rate of convergence
  – E[w] is convex as a function of w, so there are no local minima
  – So convergence is guaranteed as long as the learning rate is small enough
    • But if we make it too small, learning will be *very* slow
    • But if the learning rate is too large, we move further per step, but can overshoot the solution, oscillate, and not converge at all

Page 36

Multi-Layer Perceptrons (p744-747 in text)

• What if we took K perceptrons and trained them in parallel and then took a weighted sum of their sigmoidal outputs?
  – This is a multi-layer neural network with a single “hidden” layer (the outputs of the first set of perceptrons)
  – If we train them jointly in parallel, then intuitively different perceptrons could learn different parts of the solution
    • Mathematically, they define different local decision boundaries in the input space, giving us a more powerful model

• How would we train such a model?
  – Backpropagation algorithm = a clever way to do gradient descent
  – Bad news: many local minima and many parameters
    • training is hard and slow
  – Neural networks generated much excitement in AI research in the late 1980’s and 1990’s
    • But now techniques like boosting and support vector machines are often preferred
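A minimal sketch of the forward pass for such a single-hidden-layer network: K sigmoid perceptrons applied to the input, followed by a weighted sum of their outputs (the shapes and randomly drawn weights below are purely illustrative):

```python
import numpy as np

def sigma(f):
    return 2.0 / (1.0 + np.exp(-f)) - 1.0

def mlp_forward(x, W_hidden, w_output):
    """Single hidden layer: K perceptrons in parallel, then a weighted sum.
    x: d-dim input, W_hidden: K x d weight matrix, w_output: K output weights."""
    hidden = sigma(W_hidden @ x)        # outputs of the K hidden perceptrons
    return np.dot(w_output, hidden)     # weighted sum of the sigmoidal outputs

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0, 2.0])
W_hidden = rng.normal(size=(4, 3))      # K = 4 hidden units, d = 3 inputs
w_output = rng.normal(size=4)
print(mlp_forward(x, W_hidden, w_output))
```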

Page 37

Learning to Detect Faces

A Large-Scale Application of Machine Learning

(This material is not in the text; for further information see the paper by P. Viola and M. Jones, International Journal of Computer Vision, 2004.)

Page 38

Viola-Jones Face Detection Algorithm

• Overview:
  – Viola-Jones technique overview
  – Features
  – Integral Images
  – Feature Extraction
  – Weak Classifiers
  – Boosting and classifier evaluation
  – Cascade of boosted classifiers
  – Example Results

Page 39

Viola Jones Technique Overview

• Three major contributions/phases of the algorithm:
  – Feature extraction
  – Learning using boosting and decision stumps
  – Multi-scale detection algorithm

• Feature extraction and feature evaluation
  – Rectangular features are used; with a new image representation, their calculation is very fast

• Classifier learning using a method called boosting

• A combination of simple classifiers is very effective

Page 40

Features

• Four basic types
  – They are easy to calculate
  – The white areas are subtracted from the black ones
  – A special representation of the sample, called the integral image, makes feature extraction faster

Page 41

Integral images

• Summed area tables

• A representation in which the sum of the pixel values inside any rectangle can be calculated with four accesses of the integral image
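A sketch of the idea (the array layout with a prepended zero row/column and the function names are my choices, not from the paper): precompute the cumulative sums once, then any rectangle sum is four lookups.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] = sum of img over all pixels above and to the
    left of (r, c), inclusive. A zero row/column is prepended so that rectangle
    sums need no boundary special-casing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, height, width):
    """Sum of pixel values inside a rectangle, using 4 accesses of the integral image."""
    b, r = top + height, left + width
    return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

img = np.arange(16.0).reshape(4, 4)       # toy 4x4 "image"
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))           # 5 + 6 + 9 + 10 = 30.0
print(img[1:3, 1:3].sum())                # direct check: 30.0
```

A rectangular feature is then just a difference of a few such rectangle sums, which is why feature evaluation is so cheap.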

Page 42

Fast Computation of Pixel Sums

Page 43

Feature Extraction

• Features are extracted from subwindows of a sample image
  – The base size for a subwindow is 24 by 24 pixels
  – Each of the four feature types is scaled and shifted across all possible combinations
    • In a 24 pixel by 24 pixel subwindow there are ~160,000 possible features to be calculated

Page 44

Learning with many features

• We have 160,000 features – how can we learn a classifier with only a few hundred training examples without overfitting?

• Idea:
  – Learn a single very simple classifier (a “weak classifier”)
  – Classify the data
  – Look at where it makes errors
  – Reweight the data so that the inputs where we made errors get higher weight in the learning process
  – Now learn a 2nd simple classifier on the weighted data
  – Combine the 1st and 2nd classifiers and reweight the data according to where they make errors
  – Learn a 3rd classifier on the weighted data
  – … and so on, until we learn T simple classifiers
  – The final classifier is the combination of all T classifiers
  – This procedure is called “Boosting” – it works very well in practice (a code sketch of the loop follows below)
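A compact sketch of this reweighting loop in the spirit of AdaBoost. The weak learner is left abstract, and the specific reweighting and α formulas follow the standard AdaBoost recipe, which the slides do not spell out:

```python
import numpy as np

def boost(X, y, weak_learner, T):
    """Generic boosting loop. weak_learner(X, y, w) must return a function
    h(X) -> predictions in {-1, +1} trained on weighted data."""
    N = len(y)
    w = np.full(N, 1.0 / N)                     # start with uniform data weights
    classifiers, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)               # 1. learn a simple classifier
        pred = h(X)                             # 2. classify the (training) data
        err = np.sum(w[pred != y])              # 3. weighted error: where it is wrong
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # confidence of this weak classifier
        w *= np.exp(-alpha * y * pred)          # 4. upweight the misclassified inputs
        w /= w.sum()
        classifiers.append(h)
        alphas.append(alpha)
    # Final classifier = weighted combination of all T weak classifiers
    def H(Xnew):
        return np.sign(sum(a * h(Xnew) for a, h in zip(alphas, classifiers)))
    return H
```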

Page 45

“Decision Stumps”

• Decision stump = a decision tree with only a single root node
  – Certainly a very weak learner!
  – Say the attributes are real-valued
  – The decision stump algorithm looks at all possible thresholds for each attribute
  – It selects the one with the maximum information gain
  – The resulting classifier is a simple threshold on a single feature
    • Outputs a +1 if the attribute is above the threshold
    • Outputs a -1 if the attribute is below the threshold
  – Note: we can restrict the search to the N-1 “midpoint” locations between the sorted attribute values for each feature, so the complexity is N log N per attribute
  – Note: this is exactly equivalent to learning a perceptron with a single intercept term (so we could also learn these stumps via gradient descent and mean squared error)
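The slide describes stump learning with information gain; for the boosted setting, a common variant instead picks the threshold that minimizes the weighted classification error, which makes it a drop-in weak learner for the boosting sketch above. The sketch below does that (the variable names and the polarity trick are my choices):

```python
import numpy as np

def stump_learner(X, y, w):
    """Train a decision stump on weighted data: pick the (feature, threshold,
    polarity) with the smallest weighted error. Returns a predict function."""
    best = (None, None, 1, np.inf)                     # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        values = np.sort(np.unique(X[:, j]))
        thresholds = 0.5 * (values[:-1] + values[1:])  # the "midpoint" locations
        for t in thresholds:
            for polarity in (+1, -1):
                pred = polarity * np.where(X[:, j] > t, 1, -1)
                err = np.sum(w[pred != y])             # weighted error of this stump
                if err < best[3]:
                    best = (j, t, polarity, err)
    j, t, polarity, _ = best
    return lambda Xnew: polarity * np.where(Xnew[:, j] > t, 1, -1)
```

With the earlier sketch, a boosted stump classifier would then be built as, e.g., `H = boost(X, y, stump_learner, T=50)`.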

Page 46

Boosting Example

Page 47

First classifier

Page 48

First 2 classifiers

Page 49

First 3 classifiers

Page 50

Final Classifier learned by Boosting

Page 51

Final Classifier learned by Boosting

Page 52

Boosting with Decision Stumps

• Viola-Jones algorithm
  – With K attributes (e.g., K = 160,000) we have 160,000 different decision stumps to choose from

  – At each stage of boosting:
    • given the reweighted data from the previous stage
    • train all K (160,000) single-feature perceptrons
    • select the single best classifier at this stage
    • combine it with the other previously selected classifiers
    • reweight the data
    • learn all K classifiers again, select the best, combine, reweight
    • repeat until you have T classifiers selected

  – Very computationally intensive
    • Learning K decision stumps T times
    • e.g., K = 160,000 and T = 1000

Page 53

How is classifier combining done?

• At each stage we select the best classifier on the current iteration and combine it with the set of classifiers learned so far

• How are the classifiers combined?
  – Take weight × (classifier output) for each classifier, sum these up, and compare to a threshold (very simple)

  – The boosting algorithm automatically provides the appropriate weight for each classifier and the threshold

  – This version of boosting is known as the AdaBoost algorithm

  – Some nice mathematical theory shows that it is in fact a very powerful machine learning technique
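Written out explicitly (using the standard AdaBoost classifier weights, which the slide does not give), the combined classifier after T rounds is

```latex
H(x) \;=\; \operatorname{sign}\!\left(\sum_{t=1}^{T} \alpha_t\, h_t(x) \;-\; \theta\right),
\qquad
\alpha_t \;=\; \tfrac{1}{2}\,\ln\!\frac{1-\varepsilon_t}{\varepsilon_t},
```

where h_t is the weak classifier selected at round t, ε_t is its weighted training error, and θ is the decision threshold (0 in basic AdaBoost; in the Viola-Jones cascade the per-stage thresholds are additionally adjusted to favor very high detection rates).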

Page 54

Reduction in Error as Boosting adds Classifiers

Page 55

Useful Features Learned by Boosting

Page 56

A Cascade of Classifiers

Page 57

Detection in Real Images

• Basic classifier operates on 24 x 24 subwindows

• Scaling:
  – Scale the detector (rather than the images)
  – Features can easily be evaluated at any scale
  – Scale by factors of 1.25

• Location:
  – Move the detector around the image (e.g., in 1 pixel increments)

• Final detections:
  – A real face may result in multiple nearby detections
  – Postprocess the detected subwindows to combine overlapping detections into a single detection
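A schematic sketch of this scanning procedure (the classify_window stub stands in for the trained cascade, and the postprocessing/merging step is omitted; the 24×24 base size and the 1.25 scale factor come from the slide, everything else is illustrative):

```python
def detect_faces(image_width, image_height, classify_window, base=24, scale_step=1.25):
    """Slide a detector window of growing size across the image.
    classify_window(x, y, size) is assumed to return True for face-like subwindows.
    Returns raw detections, which would then be merged/postprocessed."""
    detections = []
    size = base
    while size <= min(image_width, image_height):
        step = 1                                   # e.g., 1-pixel increments
        for y in range(0, image_height - size + 1, step):
            for x in range(0, image_width - size + 1, step):
                if classify_window(x, y, size):
                    detections.append((x, y, size))
        size = int(size * scale_step)              # scale the detector, not the image
    return detections
```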

Page 58

Training

• Examples of 24x24 images with faces

Page 59

Small set of 111 Training Images

Page 60

Sample results using the Viola-Jones Detector

• Notice detection at multiple scales

Page 61

More Detection Examples

Page 62

Practical implementation

• Details discussed in Viola-Jones paper

• Training time = weeks (with 5k faces and 9.5k non-faces)

• Final detector has 38 layers in the cascade, 6060 features

• On a 700 MHz processor:
  – Can process a 384 x 288 image in 0.067 seconds (in 2003, when the paper was written)

Page 63

Summary

• Learning
  – Given a training data set, a class of models, and an error function, this is essentially a search or optimization problem

• Different approaches to learning
  – Divide-and-conquer: decision trees
  – Global decision boundary learning: perceptrons
  – Constructing classifiers incrementally: boosting

• Learning to recognize faces
  – Viola-Jones algorithm: a state-of-the-art face detector, entirely learned from data, using boosting + decision stumps