Classification
Giuseppe Attardi, Università di Pisa

Classification


Page 1: Classification

Classification

Giuseppe Attardi
Università di Pisa

Page 2: Classification

Classification
 Define classes/categories
 Label text
 Extract features
 Choose a classifier
  Naive Bayes Classifier
  Decision Trees
  Maximum Entropy
  …
 Train it
 Use it to classify new examples

Page 3: Classification

Naïve Bayes

More powerful than Decision Trees

Every feature gets a say in determining which label should be assigned to a given input value.

Slide from Heng Ji

Page 4: Classification

Naïve Bayes: Strengths
 Very simple model
  Easy to understand
  Very easy to implement
 Can scale easily to millions of training examples (just need counts!)
 Very efficient, fast training and classification
 Modest space storage
 Widely used because it works really well for text categorization
 Linear, but non-parallel decision boundaries

Slide from Heng Ji
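The "just need counts" point can be made concrete. Below is a minimal sketch of a multinomial Naive Bayes text classifier in plain Python; the toy documents, labels, and function names are invented for illustration. Training only tallies class and word counts, and classification multiplies add-one-smoothed likelihood estimates under each class:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (list_of_words, label). Training is just counting."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)   # label -> Counter of word frequencies
    vocab = set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def classify_nb(model, words):
    class_counts, word_counts, vocab = model
    n_docs = sum(class_counts.values())
    best_label, best_logp = None, float("-inf")
    for label in class_counts:
        # log prior + sum of log likelihoods with add-one smoothing
        logp = math.log(class_counts[label] / n_docs)
        total = sum(word_counts[label].values())
        for w in words:
            logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

# Toy corpus (invented)
docs = [("the election of the president".split(), "politics"),
        ("president wins election".split(), "politics"),
        ("the poet wrote a poem".split(), "poetry"),
        ("a new poem by the poet".split(), "poetry")]
model = train_nb(docs)
print(classify_nb(model, "president and election".split()))   # politics
```

Note how every feature (word) "gets a say": each term contributes one log-likelihood to the total score of each class.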

Page 5: Classification

Naïve Bayes: Weaknesses
 The Naïve Bayes independence assumption has two consequences:
  The linear ordering of words is ignored (bag of words model)
  The words are assumed independent of each other given the class, yet President is more likely to occur in a context that contains election than in a context that contains poet
 The Naïve Bayes assumption is inappropriate if there are strong conditional dependencies between the variables
 Nonetheless, Naïve Bayes models do well in a surprisingly large number of cases, because often we are interested in classification accuracy and not in accurate probability estimations
 Does not optimize prediction accuracy

Slide from Heng Ji

Page 6: Classification

The naivete of independence
 The Naïve Bayes assumption is inappropriate if there are strong conditional dependencies between the variables
 The classifier may end up "double-counting" the effect of highly correlated features, pushing the classifier closer to a given label than is justified
 Consider a name gender classifier:
  the features ends-with(a) and ends-with(vowel) are dependent on one another, because if an input value has the first feature, then it must also have the second feature
  For features like these, the duplicated information may be given more weight than is justified by the training set

Slide from Heng Ji

Page 7: Classification

Decision Trees: Strengths
 able to generate understandable rules
 perform classification without requiring much computation
 able to handle both continuous and categorical variables
 provide a clear indication of which features are most important for prediction or classification

Slide from Heng Ji

Page 8: Classification

Decision Trees: Weaknesses
 prone to errors in classification problems with many classes and a relatively small number of training examples: since each branch in the decision tree splits the training data, the amount of training data available to train nodes lower in the tree can become quite small
 can be computationally expensive to train: need to compare all possible splits, and pruning is also expensive

Slide from Heng Ji

Page 9: Classification

Decision Trees: Weaknesses
 typically examine one field at a time
 this leads to rectangular classification boxes that may not correspond well with the actual distribution of records in the decision space
 such ordering limits their ability to exploit features that are relatively independent of one another
 Naive Bayes overcomes this limitation by allowing all features to act "in parallel"

Slide from Heng Ji

Page 10: Classification

Linearly separable data
 (figure: Class 1 and Class 2 separated by a linear decision boundary)

Slide from Heng Ji

Page 11: Classification

Non-linearly separable data
 (figure: Class 1 and Class 2, with no linear boundary between them)

Slide from Heng Ji

Page 12: Classification

Non-linearly separable data
 (figure: Class 1 and Class 2 separated by a non-linear classifier)

Slide from Heng Ji

Page 13: Classification

Linear versus Non-Linear Algorithms
 Linearly or non-linearly separable data? We can find out only empirically
 Linear algorithms (algorithms that find a linear decision boundary)
  When we think the data is linearly separable
  Advantages: simpler, fewer parameters
  Disadvantages: high-dimensional data (as in NLP) is usually not linearly separable
  Examples: Perceptron, Winnow, large margin
  Note: we can use linear algorithms also for non-linear problems (see kernel methods)

Slide from Heng Ji

Page 14: Classification

Linear versus Non-Linear Algorithms
 Non-linear algorithms
  When the data is non-linearly separable
  Advantages: more accurate
  Disadvantages: more complicated, more parameters
  Example: kernel methods
  Note: the distinction between linear and non-linear also applies to multi-class classification (we'll see this later)

Slide from Heng Ji

Page 15: Classification

Simple Linear Algorithms
 Perceptron algorithm
  Linear
  Binary classification
  Online (processes data sequentially, one data point at a time)
  Mistake driven
  A simple single-layer neural network

Slide from Heng Ji

Page 16: Classification

Linear Algebra

Page 17: Classification

Basic Concepts
 A vector in Rⁿ is an ordered set of n real numbers: v = (1, 6, 3, 4) is in R⁴
 A matrix in R^(m×n) has m rows and n columns

Page 18: Classification

Vector Addition
 (figure: v + w as the diagonal of the parallelogram formed by vectors v and w)

Page 19: Classification

Vector Products
 Vector dot (inner) product: v · w = Σᵢ vᵢwᵢ (a scalar)
 Vector outer product: (v wᵀ)ᵢⱼ = vᵢwⱼ (a matrix)

Page 20: Classification

Geometrical Interpretation
 Vector norm: a norm ||x|| of a vector is a measure of the "length" of the vector
 Angle between vectors v and w: cos θ = (v · w) / (||v|| ||w||)

Page 21: Classification

Matrix Product
 Matrix product, for 2×2 matrices A = (aᵢⱼ) and B = (bᵢⱼ):

  AB = | a11·b11 + a12·b21   a11·b12 + a12·b22 |
       | a21·b11 + a22·b21   a21·b12 + a22·b22 |
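As a sanity check, the 2×2 product formula above can be computed directly. The helper below is a small plain-Python sketch (the matrices are arbitrary example values):

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows: (AB)ij = sum_k A[i][k] * B[k][j]."""
    rows, inner, cols = len(A), len(B), len(B[0])
    assert len(A[0]) == inner, "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(mat_mul(A, B))   # [[19, 22], [43, 50]]
```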

Page 22: Classification

Vector-Matrix Product

Page 23: Classification

Hyperplane
 Hyperplane equation: wx + b = 0
 (figure: the hyperplane wx + b = 0, with normal vector w at distance -b/||w|| from the origin; wx + b > 0 on one side and wx + b < 0 on the other)

Page 24: Classification

Vector of Features
 Binary features:
  word is capitalized (Trump)
  word made of digits (2016)
  all upper case (USA)
 tree = <0, 0, 0>
 Dylan = <1, 0, 0>

Page 25: Classification

Non-Binary Features
 Bag of words:
  "the presence of words in the document" → [2, 1, 1, 1, 1, 1]
  "the absence of words in the document" → [2, 0, 1, 1, 1, 1, 1]
 Use a dictionary to assign an index to each word: dict[newword] = len(dict)
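The dict[newword] = len(dict) trick can be wrapped into a tiny bag-of-words vectorizer; this sketch assumes whitespace tokenization, and reproduces the two count vectors from the slide:

```python
def vectorize(tokens, index):
    """Grow the word -> index dictionary, then return a count vector."""
    for w in tokens:
        if w not in index:
            index[w] = len(index)   # the trick from the slide
    counts = [0] * len(index)
    for w in tokens:
        counts[index[w]] += 1
    return counts

index = {}
v1 = vectorize("the presence of words in the document".split(), index)
print(v1)   # [2, 1, 1, 1, 1, 1]
v2 = vectorize("the absence of words in the document".split(), index)
print(v2)   # [2, 0, 1, 1, 1, 1, 1]
```

The second vector is one slot longer because "absence" was a new word: the shared dictionary keeps indices stable across documents.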

Page 26: Classification

Linear Binary Classification
 Data: {(xᵢ, yᵢ)}, i = 1…n
  x in R^d: a feature vector in d-dimensional space
  y in {-1, +1}: the label (class, category)
 Question: find a linear decision boundary wx + b = 0 (a hyperplane) such that the classification rule associated with it has minimal probability of error
 Classification rule: y = sign(wx + b), which means:
  if wx + b > 0 then y = +1
  if wx + b < 0 then y = -1

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 27: Classification

Linear Binary Classification
 Find a good hyperplane (w, b) in R^(d+1) that correctly classifies data points as much as possible
 In online fashion: one data point at a time, update the weights as necessary
 (figure: data points on both sides of the hyperplane wx + b = 0)
 Classification rule: y = sign(wx + b)

G. Lanckriet, Statistical Learning Theory Tutorial

Page 28: Classification

Perceptron

Page 29: Classification

The Perceptron

It obeyed the following rule: if the sum of the weighted inputs exceeds a threshold, output 1, else output 0:

 output = 1 if Σᵢ inputᵢ · weightᵢ > threshold
 output = 0 if Σᵢ inputᵢ · weightᵢ < threshold

(figure: inputs feeding a weighted sum Σ xᵢwᵢ and a threshold unit that produces the output)

*Frank Rosenblatt (1962). Principles of Neurodynamics, Spartan, New York, NY.
Subsequent progress was inspired by the invention of learning rules inspired by ideas from neuroscience. Rosenblatt's Perceptron could automatically learn to categorise or classify input vectors into types.

Page 30: Classification

Binary Threshold Neurons
 McCulloch-Pitts (1943)
 First compute a weighted sum of the inputs from other neurons: z = Σᵢ xᵢwᵢ
 Then output y = 1 if the weighted sum exceeds the threshold, 0 otherwise
 (figure: the step function y versus z, jumping from 0 to 1 at the threshold)
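A binary threshold neuron is a one-liner. The sketch below uses an illustrative weight vector and threshold (with weights (1, 1) and threshold 1.5 the unit happens to compute logical AND):

```python
def threshold_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted sum exceeds the threshold."""
    z = sum(x * w for x, w in zip(inputs, weights))
    return 1 if z > threshold else 0

print(threshold_neuron([1, 1], [1, 1], 1.5))   # 1
print(threshold_neuron([1, 0], [1, 1], 1.5))   # 0
```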

Page 31: Classification

Perceptron as Single-Layer Neural Network
 y = sign(wx + b)
 (figure: inputs x1, x2 and a bias unit feeding a single output through weights w1, w2, b)

Page 32: Classification

Multi-layer Perceptron

(figure: inputs x1, x2, …, xn feeding one or more hidden layers, which feed the output layer)

Page 33: Classification

Properties of Architecture
 • No connections within a layer
 • No direct connections between input and output layers
 • Fully connected between layers
 • Often more than 3 layers
 • Number of output units need not equal number of input units
 • Number of hidden units per layer can be more or less than input or output units

Each unit is a perceptron: yⱼ = f(Σᵢ₌₁..m wⱼᵢxᵢ + bⱼ)
Often include bias as an extra weight

Page 34: Classification

What do each of the layers do?
 1st layer draws linear boundaries
 2nd layer combines the boundaries
 3rd layer can generate arbitrarily complex boundaries

Page 35: Classification

Perceptron Learning Rule
 Assuming the problem is linearly separable, there is a learning rule that converges in a finite time
 Motivation: a new (unseen) input pattern that is similar to an old (seen) input pattern is likely to be classified correctly

Page 36: Classification

Learning Rule
 Basic idea:
  go over all existing data patterns, whose labeling is known, and check their classification with the current weight vector
  if correct, continue
  if not, add to the weights a quantity proportional to the product of the input pattern with the desired output y (+1 or -1)
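The rule above can be sketched as a complete training loop in plain Python. Labels are ±1 and the data points are the four toy points used a few slides later (scaled into floats); the loop is mistake driven, updating only on errors:

```python
def train_perceptron(data, lr=0.1, epochs=100):
    """data: list of (features, label) with label in {-1, +1}."""
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in data:
            y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if y_hat != y:           # mistake driven: update only on errors
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:              # convergence is guaranteed if separable
            break
    return w, b

data = [((0.5, 1.0), +1), ((1.0, 0.5), +1), ((-1.0, 0.5), -1), ((-1.0, 1.0), -1)]
w, b = train_perceptron(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x, _ in data]
print(preds)   # [1, 1, -1, -1]
```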

Page 37: Classification

Hebb Rule
 In 1949, Hebb postulated that the changes in a synapse are proportional to the correlation between the firing of the neurons that are connected through the synapse (the pre- and post-synaptic neurons)
 "Neurons that fire together, wire together"

Page 38: Classification

Example: A Simple Problem
 4 points, linearly separable:
  y = +1: (1/2, 1) and (1, 1/2)
  y = -1: x0 = (-1, 1/2) and x1 = (-1, 1)

Page 39: Classification

Initial Weights
 w0 = (0, 1)
 (figure: the four points and the separating line for w0)

Page 40: Classification

Updating Weights
 Learning rule: wᵢ₊₁ = wᵢ + Δwᵢ, where Δwᵢ = η xᵢ (yᵢ - ŷᵢ) and η is the learning rate
 η = 1/3, w0 = (0, 1), x0 = (-1, ½)

 Δw0 = 1/3 · (-1, ½) · (0 - sign(w0 · x0))
     = 1/3 · (-1, ½) · (-1)
     = (1/3, -1/6)
 w1 = (0, 1) + (1/3, -1/6) = (1/3, 5/6)
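The arithmetic of this update can be checked mechanically. The snippet below reproduces the slide's numbers exactly, using exact fractions (and the slide's 0/1-style target of 0 for the negative class):

```python
from fractions import Fraction as F

def sign(v):
    return 1 if v > 0 else -1

eta = F(1, 3)
w0 = (F(0), F(1))
x0 = (F(-1), F(1, 2))

# delta_w0 = eta * x0 * (0 - sign(w0 . x0))
dot = w0[0] * x0[0] + w0[1] * x0[1]      # w0 . x0 = 1/2 > 0, so sign is +1
factor = eta * (0 - sign(dot))
dw0 = (factor * x0[0], factor * x0[1])
w1 = (w0[0] + dw0[0], w0[1] + dw0[1])
print(dw0)   # (Fraction(1, 3), Fraction(-1, 6))
print(w1)    # (Fraction(1, 3), Fraction(5, 6))
```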

Page 41: Classification

First Correction
 w1 = (1/3, 5/6)
 (figure: the separating line after the first update)

Page 42: Classification

Updating Weights
 The upper-left point is still wrongly classified
 w2 = w1 + Δw1
 Δw1 = 1/3 · (-1, ½) · (0 - sign(w1 · x1))
 w2 = (1/3, 5/6) + 1/3 · (-1) · (-1, 1/2) = (2/3, 2/3)

Page 43: Classification

Second Correction
 w2 = (2/3, 2/3)
 (figure: the separating line after the second update)

Page 44: Classification

Example
 All 4 points are now classified correctly
 Toy problem: only 2 updates required
 Correction of the weights was simply a rotation of the separating hyperplane
 Rotation can be applied in the right direction, but may require many updates

Page 45: Classification

Deriving the Delta Rule
 Define the error as the squared residuals summed over all training cases: E = ½ Σₙ (yₙ - ŷₙ)²
 Now differentiate to get the error derivatives for the weights; for a linear neuron ŷ = Σᵢ wᵢxᵢ this gives ∂E/∂wᵢ = -Σₙ xᵢⁿ (yₙ - ŷₙ)
 The batch delta rule changes the weights in proportion to their error derivatives summed over all training cases: Δwᵢ = -ε ∂E/∂wᵢ
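For a linear neuron ŷ = Σᵢ wᵢxᵢ, one batch delta-rule step can be sketched as follows. The toy data and learning rate are invented; the targets were generated from the "true" weights (2, -1), which gradient descent should recover:

```python
def batch_delta_step(w, data, lr):
    """One batch update: dE/dw_i = -sum_n x_i^n (y_n - yhat_n); w <- w - lr * dE/dw."""
    grads = [0.0] * len(w)
    for x, y in data:
        y_hat = sum(wi * xi for wi, xi in zip(w, x))   # linear neuron
        for i, xi in enumerate(x):
            grads[i] += -xi * (y - y_hat)
    return [wi - lr * g for wi, g in zip(w, grads)]

# Toy regression data whose exact least-squares solution is w = (2, -1)
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
w = [0.0, 0.0]
for _ in range(200):
    w = batch_delta_step(w, data, lr=0.1)
print([round(wi, 3) for wi in w])   # [2.0, -1.0]
```

Because the error surface is a quadratic bowl (next slide), these batch steps descend smoothly toward the single minimum.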

Page 46: Classification

The Error Surface
 The error surface lies in a space with a horizontal axis for each weight and one vertical axis for the error
 For a linear neuron, it is a quadratic bowl
  Vertical cross-sections are parabolas
  Horizontal cross-sections are ellipses
 (figure: the bowl-shaped error surface E over the weights w1, w2)

Page 47: Classification

Gradient Descent
 Online learning zig-zags around the direction of steepest descent
 (figure: weight space (w1, w2) with the constraint lines from training case 1 and training case 2, and a zig-zagging descent path)

Page 48: Classification

Support Vector Machines

Page 49: Classification

Large Margin Classifier
 Another family of linear algorithms
 Intuition (Vapnik, 1965): if the classes are linearly separable:
  Separate the data
  Place the hyperplane "far" from the data: large margin
  Statistical results guarantee good generalization
 (figure: a separating hyperplane placed close to the data, labeled BAD)

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 50: Classification

Maximal Margin Classifier
 (figure: a separating hyperplane placed far from both classes, labeled GOOD)
 Intuition (Vapnik, 1965): if linearly separable:
  Separate the data
  Place the hyperplane "far" from the data: large margin
  Statistical results guarantee good generalization

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 51: Classification

Large Margin Classifier
 If not linearly separable:
  Allow some errors
  Still, try to place the hyperplane "far" from each class

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 52: Classification

Large Margin Classifiers
 Advantages: theoretically better (better error bounds)
 Limitations: computationally more expensive; a large quadratic programming problem

Page 53: Classification

Linear Classifiers
 f(x, w, b) = sign(w·x + b)
 How would you classify this data?
 (figure: points of two classes, +1 and -1, with the regions w·x + b > 0, w·x + b = 0 and w·x + b < 0)

Page 54: Classification

Linear Classifiers
 f(x, w, b) = sign(w·x + b)
 How would you classify this data?
 (figure: the same data with another candidate separating line)

Page 55: Classification

Linear Classifiers
 f(x, w, b) = sign(w·x + b)
 How would you classify this data?
 (figure: yet another candidate separating line)

Page 56: Classification

Linear Classifiers
 f(x, w, b) = sign(w·x + b)
 Any of these would be fine…
 …but which is best?

Page 57: Classification

Linear Classifiers
 f(x, w, b) = sign(w·x + b)
 How would you classify this data?
 (figure: a candidate line under which a point is misclassified to the +1 class)

Page 58: Classification

Classifier Margin
 f(x, w, b) = sign(w·x + b)
 Define the margin of a linear classifier as the width that the boundary could be increased by before hitting a datapoint

Page 59: Classification

Maximum Margin
 f(x, w, b) = sign(w·x + b)
 The maximum margin linear classifier is the linear classifier with the, um, maximum margin
 This is the simplest kind of SVM, called a linear SVM (LSVM)
 Support vectors are those datapoints that the margin pushes up against
 1. Maximizing the margin is good according to intuition and PAC theory
 2. It implies that only the support vectors are important; other training examples are ignorable
 3. Empirically it works very very well

Page 60: Classification

Digression: PAC Theory
 Two important aspects of complexity in machine learning:
 1. Sample complexity: in many learning problems, training data is expensive and we should hope not to need too much of it
 2. Computational complexity: a neural network, for example, which takes an hour to train may be of no practical use in complex financial prediction problems
 It is important that both the amount of training data required for a prescribed level of performance and the running time of the learning algorithm in learning from this data do not increase too dramatically as the "difficulty" of the learning problem increases

Page 61: Classification

Digression: PAC Theory
 Such issues have been formalised and investigated over the past decade within the field of "computational learning theory"
 One popular framework for discussing such problems is the probabilistic framework which has become known as the "probably approximately correct", or PAC, model of learning

Page 62: Classification

Linear SVM Mathematically
 What we know:
  w · x⁺ + b = +1
  w · x⁻ + b = -1
  w · (x⁺ - x⁻) = 2
 Margin width: M = (x⁺ - x⁻) · w / ||w|| = 2 / ||w||
 (figure: the parallel hyperplanes wx + b = 1, wx + b = 0 and wx + b = -1, with the "Predict Class = +1" zone on one side and the "Predict Class = -1" zone on the other)

Page 63: Classification

Linear SVM Mathematically
 Goal:
 1) Correctly classify all training data:
    w·xᵢ + b ≥ +1 if yᵢ = +1
    w·xᵢ + b ≤ -1 if yᵢ = -1
    i.e. yᵢ(w·xᵢ + b) ≥ 1 for all i
 2) Maximize the margin M = 2/||w||, which is the same as minimizing Φ(w) = ½ wᵀw
 We can formulate a quadratic optimization problem and solve for w and b:
  Minimize ½ wᵀw subject to yᵢ(w·xᵢ + b) ≥ 1 for all i

Page 64: Classification

Solving the Optimization Problem
 We need to optimize a quadratic function subject to linear constraints.
 Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them.
 The solution involves constructing a dual problem where a Lagrange multiplier αᵢ is associated with every constraint in the primal problem:
  Primal: find w and b such that Φ(w) = ½ wᵀw is minimized and, for all {(xᵢ, yᵢ)}: yᵢ(wᵀxᵢ + b) ≥ 1
  Dual: find α₁…α_N such that Q(α) = Σαᵢ - ½ ΣΣ αᵢαⱼyᵢyⱼxᵢᵀxⱼ is maximized, subject to (1) Σαᵢyᵢ = 0 and (2) αᵢ ≥ 0 for all αᵢ

Page 65: Classification

Digression: Lagrange Multipliers
 The method of Lagrange multipliers provides a strategy for finding the maxima and minima of a function subject to constraints
 For instance, consider the optimization problem: maximize f(x, y) subject to g(x, y) = c
 We introduce a new variable λ, called a Lagrange multiplier, and study the Lagrange function defined by Λ(x, y, λ) = f(x, y) + λ (g(x, y) - c)
 (the λ term may be either added or subtracted)
 If (x, y) is a maximum for the original constrained problem, then there exists a λ such that (x, y, λ) is a stationary point for the Lagrange function
 (stationary points are those points where the partial derivatives of Λ are zero)

Page 66: Classification

The Optimization Problem Solution
 The solution has the form:
  w = Σ αᵢyᵢxᵢ
  b = yₖ - wᵀxₖ for any xₖ such that αₖ ≠ 0
 Each non-zero αᵢ indicates that the corresponding xᵢ is a support vector.
 Then the classifying function has the form: f(x) = Σ αᵢyᵢxᵢᵀx + b
 Notice that it relies on an inner product between the test point x and the support vectors xᵢ
 Also keep in mind that solving the optimization problem involved computing the inner products xᵢᵀxⱼ between all pairs of training points.

Page 67: Classification

Dataset with Noise
 Hard margin: so far we require all data points to be classified correctly (no training error)
 What if the training set is noisy?
  Solution 1: use very powerful kernels → OVERFITTING!
 (figure: noisy two-class data on which a hard-margin boundary overfits)

Page 68: Classification

Soft Margin Classification
 Slack variables ξᵢ can be added to allow misclassification of difficult or noisy examples.
 (figure: the hyperplanes wx + b = -1, 0, +1, with slack variables ξ marking points that fall inside the margin)
 What should our quadratic optimization criterion be?
  Minimize ½ wᵀw + C Σₖ₌₁..R ξₖ

Page 69: Classification

Hard Margin vs. Soft Margin
 The old formulation:
  Find w and b such that Φ(w) = ½ wᵀw is minimized and, for all {(xᵢ, yᵢ)}: yᵢ(wᵀxᵢ + b) ≥ 1
 The new formulation, incorporating slack variables:
  Find w and b such that Φ(w) = ½ wᵀw + C Σξᵢ is minimized and, for all {(xᵢ, yᵢ)}: yᵢ(wᵀxᵢ + b) ≥ 1 - ξᵢ and ξᵢ ≥ 0 for all i
 The parameter C can be viewed as a way to control overfitting.
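One way to see C in action is subgradient descent on the equivalent unconstrained objective ½||w||² + C Σᵢ max(0, 1 - yᵢ(w·xᵢ + b)), where the max term is the hinge loss absorbing the slack variables. This is a sketch for intuition (toy data, invented hyperparameters), not the quadratic-programming solver the slides describe:

```python
def svm_sgd(data, C=1.0, lr=0.01, epochs=500):
    """Subgradient descent on 0.5*||w||^2 + C * sum_i max(0, 1 - y_i*(w.x_i + b))."""
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = list(w), 0.0            # gradient of the regularizer 0.5*||w||^2 is w
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) < 1:  # inside the margin
                for i in range(d):
                    gw[i] -= C * y * x[i]
                gb -= C * y
        w = [wi - lr * g for wi, g in zip(w, gw)]
        b -= lr * gb
    return w, b

data = [((2.0, 2.0), +1), ((2.0, 3.0), +1), ((-2.0, -2.0), -1), ((-1.0, -2.0), -1)]
w, b = svm_sgd(data)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x, _ in data]
print(preds)   # [1, 1, -1, -1]
```

A large C penalizes margin violations heavily (toward hard margin); a small C lets the regularizer dominate and tolerates more slack.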

Page 70: Classification

Linear SVMs: Overview
 The classifier is a separating hyperplane.
 The most "important" training points are the support vectors; they define the hyperplane.
 Quadratic optimization algorithms can identify which training points xᵢ are support vectors, i.e. have non-zero Lagrange multipliers αᵢ.
 Both in the dual formulation of the problem and in the solution, training points appear only inside dot products:
  Find α₁…α_N such that Q(α) = Σαᵢ - ½ ΣΣ αᵢαⱼyᵢyⱼxᵢᵀxⱼ is maximized, subject to (1) Σαᵢyᵢ = 0 and (2) 0 ≤ αᵢ ≤ C for all αᵢ
  f(x) = Σ αᵢyᵢxᵢᵀx + b

Page 71: Classification

Non-Linear Problem
 (figure: a dataset that is not linearly separable)

Page 72: Classification

Non-Linear Problem
 (figure: another non-linearly separable dataset)

Page 73: Classification

Non-Linear Problem
 Kernel methods
  A family of non-linear algorithms
  Transform the non-linear problem into a linear one (in a different feature space)
  Use linear algorithms to solve the linear problem in the new space

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 74: Classification

Basic Principle of Kernel Methods
 Φ: R^d → R^D (D >> d)
 Example: X = [x z] → Φ(X) = [x² z² xz]
 f(x) = sign(w1x² + w2z² + w3xz + b)
 Decision boundary in the new space: wᵀΦ(x) + b = 0
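The Φ(X) = [x², z², xz] mapping can be tried on data that is hopeless for a line. In the invented toy set below, the positive points lie between the two negative points on the x-axis, so no line separates them in 2-D; after the mapping, the plane with w = (1, 1, 0), b = -1 (i.e. x² + z² = 1) does:

```python
def phi(p):
    """Map R^2 -> R^3: (x, z) -> (x^2, z^2, x*z)."""
    x, z = p
    return (x * x, z * z, x * z)

# Positives near the origin, negatives far away on the x-axis:
# the positives sit between the negatives, so the sets are not
# linearly separable in the original 2-D space.
points = [((0.3, 0.0), +1), ((0.0, 0.3), +1), ((2.0, 0.0), -1), ((-2.0, 0.0), -1)]

# In the mapped space, x^2 + z^2 - 1 = 0 separates them:
w, b = (1.0, 1.0, 0.0), -1.0
preds = [1 if sum(wi * fi for wi, fi in zip(w, phi(p))) + b < 0 else -1
         for p, _ in points]
print(preds)   # [1, 1, -1, -1]
```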

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 75: Classification

Basic Principle of Kernel Methods
 Linear separability: more likely in high dimensions
 Mapping: Φ maps the input into a high-dimensional feature space
 Classifier: construct a linear classifier in the high-dimensional feature space
 Motivation: an appropriate choice of Φ leads to linear separability
 We can do this efficiently!

Gert Lanckriet, Statistical Learning Theory Tutorial

Page 76: Classification

Basic Principle of Kernel Methods
 We can use the linear algorithms seen before (for example, the perceptron) for classification in the higher-dimensional space

Page 77: Classification

Non-Linear SVMs
 Datasets that are linearly separable with some noise work out great
 But what are we going to do if the dataset is just too hard?
 How about… mapping the data to a higher-dimensional space?
 (figure: 1-D data on the x axis that is not separable becomes separable after mapping x → (x, x²))

Page 78: Classification

Non-Linear SVMs: Feature Spaces
 General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:
 Φ: x → φ(x)

Page 79: Classification

The "Kernel Trick"
 The linear classifier relies on the dot product between vectors: K(xᵢ, xⱼ) = xᵢᵀxⱼ
 If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the dot product becomes: K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ)
 A kernel function is a function that corresponds to an inner product in some expanded feature space
 Example: for x = [x₁ x₂], let K(xᵢ, xⱼ) = (1 + xᵢᵀxⱼ)²
 We need to show that K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ):

  K(xᵢ, xⱼ) = (1 + xᵢᵀxⱼ)²
            = 1 + xᵢ₁²xⱼ₁² + 2xᵢ₁xⱼ₁xᵢ₂xⱼ₂ + xᵢ₂²xⱼ₂² + 2xᵢ₁xⱼ₁ + 2xᵢ₂xⱼ₂
            = [1  xᵢ₁²  √2·xᵢ₁xᵢ₂  xᵢ₂²  √2·xᵢ₁  √2·xᵢ₂]ᵀ [1  xⱼ₁²  √2·xⱼ₁xⱼ₂  xⱼ₂²  √2·xⱼ₁  √2·xⱼ₂]
            = φ(xᵢ)ᵀφ(xⱼ)

 where φ(x) = [1  x₁²  √2·x₁x₂  x₂²  √2·x₁  √2·x₂]
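The identity is easy to verify numerically. The check below compares (1 + aᵀb)² against the dot product of the explicit 6-dimensional feature vectors for a pair of example points:

```python
import math

def poly_kernel(a, b):
    """K(a, b) = (1 + a.b)^2 for 2-D vectors."""
    return (1 + a[0] * b[0] + a[1] * b[1]) ** 2

def phi(x):
    """Explicit map: [1, x1^2, sqrt(2) x1 x2, x2^2, sqrt(2) x1, sqrt(2) x2]."""
    r2 = math.sqrt(2)
    return [1, x[0] ** 2, r2 * x[0] * x[1], x[1] ** 2, r2 * x[0], r2 * x[1]]

a, b = (1.0, 2.0), (3.0, -1.0)
lhs = poly_kernel(a, b)                                # 2 multiplications
rhs = sum(u * v for u, v in zip(phi(a), phi(b)))       # 6-D dot product
print(lhs, round(rhs, 10))   # 4.0 4.0
```

The point of the trick: the left-hand side never builds the 6-dimensional vectors, yet computes exactly their inner product.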

Page 80: Classification

What Functions are Kernels?
 For some functions K(xᵢ, xⱼ), checking that K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ) can be cumbersome
 Mercer's theorem: every positive semi-definite symmetric function is a kernel

Page 81: Classification

Examples of Kernel Functions
 Linear: K(xᵢ, xⱼ) = xᵢᵀxⱼ
 Polynomial of power p: K(xᵢ, xⱼ) = (1 + xᵢᵀxⱼ)^p
 Gaussian (radial-basis function network): K(xᵢ, xⱼ) = exp(-||xᵢ - xⱼ||² / 2σ²)
 Sigmoid: K(xᵢ, xⱼ) = tanh(β₀ xᵢᵀxⱼ + β₁)

Page 82: Classification

Non-Linear SVMs Mathematically
 Dual problem formulation:
  Find α₁…α_N such that Q(α) = Σαᵢ - ½ ΣΣ αᵢαⱼyᵢyⱼK(xᵢ, xⱼ) is maximized, subject to (1) Σαᵢyᵢ = 0 and (2) αᵢ ≥ 0 for all αᵢ
 The solution is: f(x) = Σ αᵢyᵢK(xᵢ, x) + b
 Optimization techniques for finding the αᵢ's remain the same!

Page 83: Classification

Non-Linear SVM: Overview
 SVM locates a separating hyperplane in the feature space and classifies points in that space
 It does not need to represent the space explicitly; it simply defines a kernel function
 The kernel function plays the role of the dot product in the feature space.

Page 84: Classification

Multi-Class Classification
 Given: some data items that belong to one of M possible classes
 Task: train the classifier and predict the class for a new data item
 Geometrically: a harder problem, no more simple geometry

Page 85: Classification

Multi-Class Classification
 (figure: data points belonging to several classes)

Page 86: Classification

Multiclass Approaches
 One vs. rest:
  Build N binary classifiers, one for each class Cᵢ against all the others
  Choose the class with the highest score
 One vs. one:
  Build N(N-1)/2 classifiers, one for each pair of classes
  Use voting to choose the class
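The one-vs-rest scheme needs only a binary trainer that returns a real-valued score. The sketch below reuses a tiny perceptron as that binary learner; the three-class toy data and all names are invented for illustration:

```python
def train_binary(data, epochs=100):
    """Tiny perceptron returning (w, b); labels must be in {-1, +1}."""
    d = len(data[0][0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def train_one_vs_rest(data, classes):
    """One binary classifier per class: class c against all the others."""
    models = {}
    for c in classes:
        relabeled = [(x, +1 if y == c else -1) for x, y in data]
        models[c] = train_binary(relabeled)
    return models

def predict(models, x):
    """Choose the class whose classifier gives the highest score w.x + b."""
    def score(m):
        w, b = m
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=lambda c: score(models[c]))

data = [((0.0, 1.0), "a"), ((0.2, 0.9), "a"),
        ((1.0, 0.0), "b"), ((0.9, 0.2), "b"),
        ((-1.0, -1.0), "c"), ((-0.9, -1.1), "c")]
models = train_one_vs_rest(data, ["a", "b", "c"])
print(predict(models, (0.1, 1.0)))    # a
print(predict(models, (-1.0, -0.9)))  # c
```

One-vs-one works the same way, except the wrapper trains a classifier for each pair of classes and tallies votes instead of comparing scores.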

Page 87: Classification

Properties of SVM
Flexibility in choosing a similarity function
Sparseness of solution when dealing with large data sets

only support vectors are used to specify the separating hyperplane

Ability to handle large feature spaces

complexity does not depend on the dimensionality of the feature space

Overfitting can be controlled by the soft margin approach
Nice math property:

a simple convex optimization problem which is guaranteed to converge to a single global solution
Feature Selection
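The sparseness property can be illustrated directly: in the dual solution most αi are zero, and only the points with αi > 0 (the support vectors) appear in the separating hyperplane. The α values below are a hypothetical solution:

```python
# Hypothetical dual solution for 8 training points.
alphas = [0.0, 0.7, 0.0, 0.0, 1.2, 0.0, 0.5, 0.0]

# Only the nonzero multipliers identify support vectors.
support_idx = [i for i, a in enumerate(alphas) if a > 1e-8]
print(f"{len(support_idx)} support vectors out of {len(alphas)} training points")
```

Prediction cost therefore scales with the number of support vectors, not with the training-set size.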

Page 88: Classification

SVM Applications
SVM has been used successfully in many real-world problems:

text (and hypertext) categorization

image classification – different types of sub-problems

bioinformatics (protein classification, cancer classification)

hand-written character recognition

Page 89: Classification

Weakness of SVM
It is sensitive to noise

A relatively small number of mislabeled examples can dramatically decrease the performance

It only considers two classes

how to do multi-class classification with SVM? Answer:

1. With output arity m, learn m SVMs:
SVM 1 learns “Output == 1” vs “Output != 1”
SVM 2 learns “Output == 2” vs “Output != 2”
…
SVM m learns “Output == m” vs “Output != m”

2. To predict the output for a new input, predict with each SVM and find out which one puts the prediction the furthest into the positive region.
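The one-vs-rest prediction rule described above reduces to an argmax over the m decision values. The scorers below are hypothetical decision functions standing in for the trained "class m vs rest" SVMs:

```python
# One-vs-rest: run every "class m vs rest" decision function and pick the
# class whose value is furthest into the positive region.
def one_vs_rest_predict(x, scorers):
    scores = [f(x) for f in scorers]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical decision functions for 3 classes on a 1-D input.
scorers = [
    lambda x: 1 - x,            # "class 0 vs rest"
    lambda x: 1 - abs(x - 2),   # "class 1 vs rest"
    lambda x: x - 3,            # "class 2 vs rest"
]
```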

Page 90: Classification

Application: Text Categorization
Task: the classification of natural text (or hypertext) documents into a fixed number of predefined categories based on their content.

- email filtering, web searching, sorting documents by topic, etc.

A document can be assigned to more than one category, so this can be viewed as a series of binary classification problems, one for each category
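This multi-label view can be sketched as one independent binary decision per category over bag-of-words features. The vocabulary, weight vectors and biases below are hypothetical, standing in for linear SVMs trained separately per category:

```python
# Bag-of-words feature vector: word counts over a fixed vocabulary.
def bow(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

# One binary linear classifier per category; a document may get several labels.
def categorize(text, vocab, weights, biases):
    x = bow(text, vocab)
    labels = []
    for cat, w in weights.items():
        score = sum(wi * xi for wi, xi in zip(w, x)) + biases[cat]
        if score > 0:  # each category is an independent binary decision
            labels.append(cat)
    return labels

# Hypothetical learned parameters.
vocab = ["goal", "match", "election", "vote"]
weights = {"sports":   [1.0, 1.0, -1.0, -1.0],
           "politics": [-1.0, -1.0, 1.0, 1.0]}
biases = {"sports": -0.5, "politics": -0.5}
```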

Page 91: Classification

Application: Face Expression Recognition
Construct the feature space, using eigenvectors or other means
Multiple-class problem, several expressions
Use a multi-class SVM

Page 92: Classification

Some Issues
Choice of kernel
- Gaussian or polynomial kernel is the default
- if ineffective, more elaborate kernels are needed

Choice of kernel parameters
- e.g. σ in the Gaussian kernel
- σ is the distance between the closest points with different classifications
- in the absence of reliable criteria, applications rely on the use of a validation set or cross-validation to set such parameters

Optimization criterion – hard margin vs. soft margin
- a lengthy series of experiments in which various parameters are tested
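The validation-set procedure for setting a kernel parameter can be sketched generically. Here `train_fn` and `score_fn` are hypothetical placeholders for training an SVM with a given σ and measuring its validation accuracy:

```python
# Pick the kernel parameter that scores best on held-out data.
def select_param(candidates, train_fn, score_fn, train_data, valid_data):
    best, best_score = None, float("-inf")
    for sigma in candidates:
        model = train_fn(train_data, sigma)   # train with this sigma
        score = score_fn(model, valid_data)   # evaluate on the validation set
        if score > best_score:
            best, best_score = sigma, score
    return best
```

In practice the candidates are usually spaced on a logarithmic grid, and the loop is repeated per fold for cross-validation.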

Page 93: Classification

Additional Resources
libSVM
An excellent tutorial on VC-dimension and Support Vector Machines:

C.J.C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):955-974, 1998.

The VC/SRM/SVM Bible:
Statistical Learning Theory by Vladimir Vapnik, Wiley-Interscience; 1998

http://www.kernel-machines.org/

Page 94: Classification

Reference
Support Vector Machine Classification of Microarray Gene Expression Data, Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Sugnet, Manuel Ares, Jr., David Haussler

www.cs.utexas.edu/users/mooney/cs391L/svm.ppt

Text categorization with Support Vector Machines: learning with many relevant features, T. Joachims, ECML-98