
PAC-learning, VC Dimension

Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
October 24th, 2007
©2005-2007 Carlos Guestrin


A simple setting…

- Classification: m data points
- Finite number of possible hypotheses (e.g., decision trees of depth d)
- A learner finds a hypothesis h that is consistent with the training data: it gets zero error in training, error_train(h) = 0
- What is the probability that h has more than ε true error, error_true(h) ≥ ε?


How likely is a bad hypothesis to get m data points right?

- A hypothesis h that is consistent with the training data got m i.i.d. points right
- h is "bad" if it gets all this data right, but has high true error
- Prob. that an h with error_true(h) ≥ ε gets one data point right: at most 1 − ε
- Prob. that an h with error_true(h) ≥ ε gets m data points right: at most (1 − ε)^m
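To make the (1 − ε)^m claim concrete, here is a small simulation (my own sketch, not from the slides; all names are made up): a hypothesis whose true error is exactly ε survives m i.i.d. points with probability (1 − ε)^m.

```python
import random

# Sketch: a "bad" hypothesis with true error exactly eps errs on each
# i.i.d. point independently with probability eps, so it gets all m
# points right with probability (1 - eps)**m.
eps, m, trials = 0.1, 30, 100_000

lucky = 0
for _ in range(trials):
    if all(random.random() > eps for _ in range(m)):  # no error on any point
        lucky += 1

print("empirical P(all m right):", lucky / trials)
print("analytic  (1 - eps)^m   :", (1 - eps) ** m)   # ~0.042 for these values
```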


But there are many possible hypotheses that are consistent with the training data


How likely is the learner to pick a bad hypothesis?

- Prob. that an h with error_true(h) ≥ ε gets m data points right: at most (1 − ε)^m
- There are k hypotheses consistent with the data; how likely is the learner to pick a bad one?


Union bound

P(A or B or C or D or …) ≤ P(A) + P(B) + P(C) + P(D) + …


How likely is the learner to pick a bad hypothesis?

- Prob. that a fixed h with error_true(h) ≥ ε gets m data points right: at most (1 − ε)^m
- There are k hypotheses consistent with the data; by the union bound, the prob. that the learner picks a bad one is at most k(1 − ε)^m ≤ |H|(1 − ε)^m ≤ |H| e^(−mε), using 1 − ε ≤ e^(−ε)


Review: Generalization error in finite hypothesis spaces [Haussler ’88]

Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1: for any learned hypothesis h that is consistent with the training data,

P(error_true(h) > ε) ≤ |H| e^(−mε)


Using a PAC bound

Typically, 2 use cases (set |H| e^(−mε) ≤ δ and solve):
- 1: Pick ε and δ, get m: m ≥ (ln|H| + ln(1/δ)) / ε
- 2: Pick m and δ, get ε: ε ≥ (ln|H| + ln(1/δ)) / m
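A small calculator for the two use cases (a sketch of mine; the names m_needed and eps_guaranteed are invented here), solving |H| e^(−mε) ≤ δ:

```python
import math

def m_needed(H_size, eps, delta):
    """Use case 1: pick eps and delta, get the number of samples m."""
    return math.ceil((math.log(H_size) + math.log(1 / delta)) / eps)

def eps_guaranteed(H_size, m, delta):
    """Use case 2: pick m and delta, get the error guarantee eps."""
    return (math.log(H_size) + math.log(1 / delta)) / m

# Example: |H| = 2**20 hypotheses, 95% confidence (delta = 0.05).
print(m_needed(2**20, eps=0.05, delta=0.05))      # ~338 samples
print(eps_guaranteed(2**20, m=1000, delta=0.05))  # ~0.017
```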


Review: Generalization error in finite hypothesis spaces [Haussler ’88]

Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1: for any learned hypothesis h that is consistent with the training data,

P(error_true(h) > ε) ≤ |H| e^(−mε)

Even if h makes zero errors on the training data, it may still make errors at test time.


Limitations of Haussler ’88 bound

- Requires a consistent classifier (zero training error)
- Depends on the size of the hypothesis space |H|, which may be huge, or infinite


What if our classifier does not have zero error on the training data?

- A learner with zero training error may make mistakes on the test set
- What about a learner with training error error_train(h) > 0?


Simpler question: What’s the expected error of a hypothesis?

- Estimating the error of a hypothesis is like estimating the parameter of a coin!
- Chernoff bound: for m i.i.d. coin flips x_1, …, x_m, where x_i ∈ {0,1} and P(x_i = 1) = θ, for 0 < ε < 1:

P(θ − (1/m) Σ_i x_i > ε) ≤ e^(−2mε²)


Using the Chernoff bound to estimate the error of a single hypothesis

Let x_i = 1 when h errs on the i-th sample; then θ = error_true(h) and (1/m) Σ_i x_i = error_train(h), so for a single fixed h:

P(error_true(h) − error_train(h) > ε) ≤ e^(−2mε²)
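A quick simulation of the single-hypothesis case (my own sketch, with invented names): estimate how often the training error underestimates a true error of θ by more than ε, and compare to e^(−2mε²). The bound holds, though it is loose.

```python
import math, random

theta, m, eps, trials = 0.3, 100, 0.1, 50_000  # true error, sample size, slack

bad = 0
for _ in range(trials):
    # error_train = fraction of m Bernoulli(theta) "mistake" indicators.
    theta_hat = sum(random.random() < theta for _ in range(m)) / m
    if theta - theta_hat > eps:  # training error too optimistic by > eps
        bad += 1

print("empirical P(theta - theta_hat > eps):", bad / trials)              # ~0.01
print("Chernoff bound exp(-2*m*eps^2)      :", math.exp(-2 * m * eps**2))  # ~0.135
```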


But we are comparing many hypotheses: Union bound

- For each hypothesis h_i: P(error_true(h_i) − error_train(h_i) > ε) ≤ e^(−2mε²)
- What if I am comparing two hypotheses, h_1 and h_2? By the union bound, the probability that either one deviates by more than ε is at most 2e^(−2mε²)


Generalization bound for |H| hypotheses

Theorem: Hypothesis space H finite, dataset D with m i.i.d. samples, 0 < ε < 1: for any learned hypothesis h,

P(error_true(h) − error_train(h) > ε) ≤ |H| e^(−2mε²)


PAC bound and Bias-Variance tradeoff

- Important: the PAC bound holds for all h, but it doesn’t guarantee that the algorithm finds the best h!!!
- Or, after moving some terms around, with probability at least 1 − δ:

error_true(h) ≤ error_train(h) + √((ln|H| + ln(1/δ)) / (2m))
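The gap term as a function, to see the tradeoff numerically (a sketch; generalization_gap is my name for it): growing |H| allows smaller error_train (less bias) but widens the gap (more variance), while growing m shrinks the gap.

```python
import math

def generalization_gap(H_size, m, delta):
    # sqrt((ln|H| + ln(1/delta)) / (2m)), the "variance" term of the bound.
    return math.sqrt((math.log(H_size) + math.log(1 / delta)) / (2 * m))

for H_size in (2**10, 2**20, 2**40):
    print(H_size, generalization_gap(H_size, m=10_000, delta=0.05))
```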


What about the size of the hypothesis space?

How large is the hypothesis space?


Boolean formulas with n binary features

There are 2^(2^n) distinct Boolean functions of n binary inputs, so ln|H| = 2^n ln 2 — exponential in n.


Number of decision trees of depth k

Recursive solution. Given n attributes, let H_k = number of decision trees of depth k:

- H_0 = 2
- H_(k+1) = (# choices of root attribute) × (# possible left subtrees) × (# possible right subtrees) = n · H_k · H_k

Write L_k = log2 H_k:

- L_0 = 1
- L_(k+1) = log2 n + 2 L_k

So L_k = (2^k − 1)(1 + log2 n) + 1
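A few lines to check the recursion against the closed form (my own sanity check, not part of the lecture):

```python
from math import log2

n = 5        # number of attributes
H = 2        # H_0
for k in range(6):
    closed = (2**k - 1) * (1 + log2(n)) + 1
    print(k, log2(H), closed)  # the two values agree for every k
    H = n * H * H              # recursion: H_{k+1} = n * H_k^2
```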


PAC bound for decision trees of depth k

- With ln|H_k| = L_k ln 2 = ((2^k − 1)(1 + log2 n) + 1) ln 2, the bound gives m ≥ (ln|H_k| + ln(1/δ)) / ε. Bad!!! The number of points is exponential in the depth!
- But, for m data points, the decision tree can’t get too big…
- The number of leaves is never more than the number of data points
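Plugging the depth-k count into use case 1 shows the blow-up (a sketch; m_for_depth is an invented helper, and the values of n, ε, δ are arbitrary):

```python
import math

def m_for_depth(k, n, eps=0.1, delta=0.05):
    L_k = (2**k - 1) * (1 + math.log2(n)) + 1   # log2 |H_k| from the recursion
    return math.ceil((L_k * math.log(2) + math.log(1 / delta)) / eps)

for k in range(1, 8):
    print(k, m_for_depth(k, n=10))  # roughly doubles with every extra level
```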


Number of decision trees with k leaves

H_k = number of decision trees with k leaves; H_1 = 2 (a single leaf, labeled + or −)

Loose bound (one way to count: at most 4^(k−1) tree shapes, n^(k−1) choices of attributes at the internal nodes, and 2^k leaf labelings): H_k ≤ 4^(k−1) · n^(k−1) · 2^k, so ln H_k = O(k ln n) — linear in the number of leaves, not exponential.

Reminder: with probability at least 1 − δ, error_true(h) ≤ error_train(h) + √((ln|H| + ln(1/δ)) / (2m))


PAC bound for decision trees with k leaves – Bias-Variance revisited

Substituting ln|H_k| = O(k ln n): with probability at least 1 − δ,

error_true(h) ≤ error_train(h) + √((O(k ln n) + ln(1/δ)) / (2m))

With k = m leaves there is no bias but lots of variance; with k < m leaves, some bias but less variance.


Announcements

- Midterm: Thursday, Oct. 25th, 5-6:30pm, MM A14
- Covers all content up to, and including, SVMs and kernels; not learning theory
- You may use any book, class notes, your printouts of class materials that are on the class website, including my annotated slides and relevant readings, and Andrew Moore’s tutorials. You cannot use materials brought by other students.
- Calculators are not necessary. No laptops, PDAs, or cell phones.


What did we learn from decision trees?

- Bias-Variance tradeoff formalized
- Moral of the story: the complexity of learning is measured not by the size of the hypothesis space, but by the maximum number of points that allows consistent classification
  - Complexity m: no bias, lots of variance
  - Lower than m: some bias, less variance


What about continuous hypothesis spaces?

- Continuous hypothesis space: |H| = ∞. Infinite variance???
- No: as with decision trees, we only care about the maximum number of points that can be classified exactly!


How many points can a linear boundary classify exactly? (1-D)

Answer: 2 — a threshold on the line realizes every labeling of 2 distinct points, but not every labeling of 3 (e.g., +, −, +).


How many points can a linear boundary classify exactly? (2-D)

Answer: 3 — any 3 non-collinear points can be labeled arbitrarily by a line, but no 4 points can (the XOR labeling fails).


How many points can a linear boundary classify exactly? (d-D)

Answer: d + 1, matching VC(linear classifiers) = d + 1 below.


PAC bound using VC dimension

- The maximum number of training points that can be classified exactly is the VC dimension!!!
- It measures the relevant size of the hypothesis space, as with decision trees with k leaves


Shattering a set of points

A hypothesis space H shatters a set of points if, for every possible labeling of those points, some h ∈ H classifies all of them correctly.


VC dimension

The VC dimension of a hypothesis space H is the size of the largest set of points that H can shatter.


PAC bound using VC dimension

- The maximum number of training points that can be classified exactly is the VC dimension!!!
- It measures the relevant size of the hypothesis space, as with decision trees with k leaves
- Bound for infinite-dimensional hypothesis spaces [Vapnik]: with probability at least 1 − δ,

error_true(h) ≤ error_train(h) + √((VC(H)(ln(2m / VC(H)) + 1) + ln(4/δ)) / m)
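The same two use cases work here; a small helper for the gap term (a sketch of mine using the standard constants above; vc_gap is an invented name):

```python
import math

def vc_gap(vc, m, delta):
    # sqrt((VC(H) * (ln(2m/VC(H)) + 1) + ln(4/delta)) / m)
    return math.sqrt((vc * (math.log(2 * m / vc) + 1) + math.log(4 / delta)) / m)

# Linear classifier with d = 20 features: VC = d + 1 = 21.
for m in (1_000, 10_000, 100_000):
    print(m, vc_gap(vc=21, m=m, delta=0.05))  # gap shrinks roughly as sqrt(1/m)
```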


Examples of VC dimension

- Linear classifiers: VC(H) = d + 1, for d features plus the constant term b
- Neural networks: VC(H) ≈ number of parameters; local minima mean NNs will probably not find the best parameters
- 1-Nearest neighbor? VC(H) = ∞: 1-NN labels its own training set perfectly, so it shatters arbitrarily large sets of distinct points, and the bound becomes vacuous
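Why 1-NN shatters everything, in a few lines (my own illustration): each training point is its own nearest neighbor, so 1-NN reproduces any labeling of its own training set.

```python
import random

def one_nn_predict(train, x):
    # train: list of ((x1, x2), label) pairs with distinct points.
    return min(train, key=lambda p: (p[0][0] - x[0])**2 + (p[0][1] - x[1])**2)[1]

points = [(random.random(), random.random()) for _ in range(10)]
labels = [random.choice([0, 1]) for _ in range(10)]          # an arbitrary labeling
train = list(zip(points, labels))
print(all(one_nn_predict(train, x) == y for x, y in train))  # True: shattered
```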


Another VC dim. example – What can we shatter? What’s the VC dim. of decision stumps in 2-D?

Some sets of 3 points can be shattered (pick 3 points whose x- and y-orderings differ).


Another VC dim. example – What can’t we shatter? What’s the VC dim. of decision stumps in 2-D?

No set of 4 points can be shattered (axis-aligned thresholds realize at most 14 < 16 of the labelings), so the VC dimension is 3.
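A brute-force check of both claims (my own verification code): enumerate every labeling an axis-aligned stump can realize on a point set.

```python
from itertools import product

def stump_labelings(points):
    """All +/-1 labelings realizable by sign * (coordinate > threshold)."""
    out = set()
    for axis in (0, 1):
        ts = sorted({p[axis] for p in points})
        thresholds = [ts[0] - 1] + [(a + b) / 2 for a, b in zip(ts, ts[1:])] + [ts[-1] + 1]
        for t, sign in product(thresholds, (1, -1)):
            out.add(tuple(sign * (1 if p[axis] > t else -1) for p in points))
    return out

three = [(1, 2), (2, 1), (3, 3)]         # x- and y-orderings differ
print(len(stump_labelings(three)))        # 8 = 2^3: shattered
four = [(1, 1), (2, 2), (3, 3), (4, 4)]
print(len(stump_labelings(four)))         # 8 < 16: not shattered
```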


What you need to know

- Finite hypothesis spaces: derive the results by counting the number of hypotheses, with and without mistakes on the training data
- The complexity of the classifier depends on the number of points that can be classified exactly: finite case – decision trees; infinite case – VC dimension
- Bias-Variance tradeoff in learning theory
- Remember: will your algorithm find the best classifier?


Big Picture

Machine Learning – 10701/15781
Carlos Guestrin
Carnegie Mellon University
October 24th, 2007


What you have learned thus far

- Learning is function approximation
- Point estimation
- Regression
- Naïve Bayes
- Logistic regression
- Bias-Variance tradeoff
- Neural nets
- Decision trees
- Cross validation
- Boosting
- Instance-based learning
- SVMs
- Kernel trick
- PAC learning
- VC dimension
- Margin bounds
- Mistake bounds


Review material in terms of…

- Types of learning problems
- Hypothesis spaces
- Loss functions
- Optimization algorithms


BIG PICTURE (a few points of comparison)

Learning task: DE = density estimation, Cl = classification, Reg = regression
Loss function: LL = log-loss/MLE, Mrg = margin-based, RMS = squared error, exp-loss = exponential loss

Method                    Task, loss
Naïve Bayes               DE; LL
Logistic regression       DE; LL
Neural nets               DE, Cl, Reg; RMS
Boosting                  Cl; exp-loss
SVMs                      Cl; Mrg
Instance-based learning   DE, Cl, Reg
SVM regression            Reg; Mrg
Kernel regression         Reg; RMS
Linear regression         Reg; RMS
Decision trees            DE, Cl, Reg

This is a very incomplete view!!!