
Page 1

Support Vector Machines

Stephan Dreiseitl University of Applied Sciences Upper Austria at Hagenberg

Harvard-MIT Division of Health Sciences and Technology
HST.951J: Medical Decision Support

Page 2

Overview

• Motivation
• Statistical learning theory
• VC dimension
• Optimal separating hyperplanes
• Kernel functions
• Performance evaluation

Page 3

Motivation

Given data D = {(xi, ti)} distributed according to P(x, t), which model gives better performance on the test set?

[Figure: the same six data points (x1, x2) classified by four different models. All four models agree on the labels of four of the points; they disagree on the labels of (0.5, −0.2) and (−0.2, 0.8).]

Page 4

Motivation

• Neural networks model p(t|x); complexity is controlled by
  – Topology restriction
  – Early stopping
  – Weight decay
  – Bayesian approach

• SVM: capacity control
• Based on statistical learning theory

Page 5

Statistical learning theory

• Given data D = {(xi, ti)}, model output y(α, xi) ∈ {+1, −1}, class labels ti ∈ {+1, −1}

• Fundamental question: Is learning consistent?

• Can we infer performance on test set (generalization error) from performance on training set?

Page 6

Statistical learning theory

Average error on a data set D for model with parameter α:

$R_{\mathrm{emp}}(\alpha) = \frac{1}{2n} \sum_{i=1}^{n} |y(\alpha, x_i) - t_i|$

Expected error of the same model on unseen data distributed like D:

$R(\alpha) = \frac{1}{2} \int |y(\alpha, x) - t| \, dP(x, t)$
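To make Remp concrete, here is a minimal NumPy sketch (an addition, not part of the original slides); since y and t both take values in {+1, −1}, |y − t|/2 is exactly the 0-1 loss:

```python
import numpy as np

def empirical_risk(y, t):
    """Remp: mean of |y - t| / 2, i.e. the fraction of misclassified
    points when both y and t take values in {+1, -1}."""
    return np.mean(np.abs(y - t)) / 2

# Example: 2 mistakes out of 5 predictions -> Remp = 0.4
print(empirical_risk(np.array([1, 1, -1, 1, -1]),
                     np.array([1, -1, -1, -1, -1])))
```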

Page 7

Statistical learning theory

• How can we relate Remp and R?
• Generalization error R(α) depends on empirical error Remp(α) and capacity h of the model

• With probability 1 − η:

$R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h(\log(2n/h) + 1) - \log(\eta/4)}{n}}$

where h is the VC (Vapnik-Chervonenkis) dimension

Page 8

VC dimension

• Capacity measure for classification models

• Largest number of points that can be shattered by the model

• A classifier shatters a set of data points if, for every possible labeling, the points can be separated

• Not the same as number of parameters in model!

Page 9

Shattering

Straight lines can shatter 3 points in 2-space.

Model: sign(α • x)
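To make shattering concrete, here is a small sketch (not from the slides; it adds an explicit bias term, i.e. the model is sign(w • x + b)) that brute-forces all 2³ = 8 labelings of three non-collinear points and checks that each is linearly separable, using the perceptron algorithm, which is guaranteed to converge on separable data:

```python
import itertools
import numpy as np

def separable(points, labels, max_epochs=1000):
    """Perceptron on bias-augmented inputs; returns True as soon as a
    line sign(w . x + b) classifies all points correctly."""
    X = np.hstack([points, np.ones((len(points), 1))])  # append bias input
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x, t in zip(X, labels):
            if np.sign(w @ x) != t:   # sign(0) = 0 also counts as a mistake
                w += t * x
                mistakes += 1
        if mistakes == 0:
            return True
    return False

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # non-collinear
print(all(separable(pts, np.array(lab))
          for lab in itertools.product([-1, 1], repeat=3)))  # True: shattered
```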

Page 10

Shattering

Another model: sign(x • x − α), whose decision boundary is a circle of radius √α around the origin

Yet another model: sign(βx • x − α)

Page 11

VC dimension

• Largest number of points for which there is arrangement that can be shattered

• For straight lines in 2-space, VC dim. is 3 • For hyperplanes in n-space, VC dim. is

n + 1 • There is model with one parameter and

infinite VC dimension

Page 12

Structural risk minimization

$R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h(\log(2n/h) + 1) - \log(\eta/4)}{n}}$

• Fix the data set and order the classifiers according to their VC dimension

• For each classifier, train and calculate the right-hand side of the bound

• The best classifier minimizes the right-hand side (a numerical sketch follows below)
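A minimal numerical sketch of this procedure; the training errors and VC dimensions below are hypothetical, chosen only for illustration (η = 0.05, n = 1000):

```python
import numpy as np

def vc_bound(r_emp, h, n, eta=0.05):
    """Right-hand side of the VC bound, valid with probability 1 - eta."""
    conf = np.sqrt((h * (np.log(2 * n / h) + 1) - np.log(eta / 4)) / n)
    return r_emp + conf

n = 1000
# Hypothetical (Remp, h) pairs for models of growing capacity:
models = [(0.20, 3), (0.12, 10), (0.08, 50), (0.05, 200), (0.04, 600)]
bounds = [vc_bound(r, h, n) for r, h in models]
best = int(np.argmin(bounds))
print(f"best model index: {best}, bound: {bounds[best]:.3f}")
```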

Page 13

Structural risk minimization

$R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h(\log(2n/h) + 1) - \log(\eta/4)}{n}}$

[Figure: models f1(α) through f5(α) ordered by increasing VC dimension. Remp decreases with capacity while the VC confidence term grows; their sum, the upper bound, is minimized at an intermediate model, marked "best".]

Page 14

Model selection

• Cross-validation: use test sets to estimate error
• Penalize model complexity:
  – Akaike information criterion (AIC)
  – Bayesian information criterion (BIC)
• Structural risk minimization

Page 15

Support Vector Machines

• Implement hyperplanes, so their VC dimension is known

• Algorithmic realization of concepts from statistical learning theory

• Determine the hyperplane that maximizes the margin between classes

Page 16

Geometry of hyperplanes

Page 17

Geometry of hyperplanes

Hyperplanes invariant to scaling of parameters:

{x | w • x + w0 = 0} = {x | cw • x + cw0 = 0}

Example: 3x − y − 4 = 0 and 6x − 2y − 8 = 0 describe the same line.

Page 18

Geometry of hyperplanes

Evaluating 3x − y − 4 and 6x − 2y − 8 at the point (2, −4):

3 · 2 − (−4) − 4 = 6
6 · 2 − 2 · (−4) − 8 = 12

The value g(x) doubles with the scaling, but the normalized distance |g(x)|/||w|| is unchanged: 6/√10 = 12/√40.
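A quick numerical check of this invariance (a sketch, not from the slides):

```python
import numpy as np

def signed_distance(w, w0, x):
    """Normalized distance of x from the hyperplane {x | w.x + w0 = 0}."""
    return (w @ x + w0) / np.linalg.norm(w)

x = np.array([2.0, -4.0])
print(signed_distance(np.array([3.0, -1.0]), -4.0, x))  # 6 / sqrt(10)
print(signed_distance(np.array([6.0, -2.0]), -8.0, x))  # 12 / sqrt(40), same value
```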

Page 19

Optimal hyperplanes

We want

w • xi + w0 ≥ +1 for all xi in class 1 (ti = +1)
w • xi + w0 ≤ −1 for all xi in class 2 (ti = −1)

[Figure: labeled points on either side of the hyperplane, with the level sets w • x + w0 = −1, 0, +1 marked.]

Page 20

Optimal hyperplanes

Points on the dashed lines satisfy g(x) = +1 and g(o) = −1, respectively.

Margin = |g(x)| / ||w|| + |g(o)| / ||w|| = 2 / ||w||

Largest (optimal) margin: maximizing 2/||w|| is equivalent to minimizing ||w||²

subject to ti (w • xi + w0) − 1 ≥ 0

Page 21

Optimal hyperplanes

• Optimal hyperplane has largest margin (“large margin classifiers”)

• Parameter estimation problem turned into constrained optimization problem

• Unique solution w = Σ αi ti xi, where the sum runs over the inputs xi on the margin ("support vectors")

• Decision function g(x) = sign(Σ αi ti xi • x + w0)
• All other points xj are irrelevant to the solution! (A sketch follows below.)
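A minimal sketch of this on toy separable data, using scikit-learn (an addition here, not the lecture's tooling); a very large C approximates the hard-margin problem, and the support vectors and margin 2/||w|| can be read off the fitted model:

```python
import numpy as np
from sklearn.svm import SVC

# Toy separable data, labels in {-1, +1}
X = np.array([[1.0, 1.0], [2.0, 2.5], [2.5, 1.0],
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, -2.5]])
t = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, t)  # large C ~ hard margin
w, w0 = clf.coef_[0], clf.intercept_[0]
print("support vectors:\n", clf.support_vectors_)
print("margin 2/||w|| =", 2 / np.linalg.norm(w))
```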

Page 22

Nonseparable data sets

• Introduce slack variables ξi ≥ 0
• Constraints are then

w • xi + w0 ≥ +1 − ξi for all xi in class 1
w • xi + w0 ≤ −1 + ξi for all xi in class 2

• Minimize ||w||² + C Σ ξi; the parameter C trades margin width against training errors (see the sketch below)
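A short sketch of the effect of C on overlapping classes (again using scikit-learn and synthetic data as assumptions, not the lecture's setup); smaller C tolerates more slack, giving a wider margin and more support vectors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_blobs

# Overlapping classes: slack variables are needed
X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.5, random_state=0)
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, 2 * y - 1)  # labels -> {-1, +1}
    print(f"C={C:>6}: {clf.n_support_.sum()} support vectors")
```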

Page 23

Nonlinear SVM

• Idea: nonlinearly project the data into a higher-dimensional space with Φ: Rm → H

• Apply the optimal hyperplane algorithm in H

Page 24

Nonlinear SVM example

Idea: Project R2→ R3 via

Φ(x1, x2) = (x1², √2 x1x2, x2²)

Page 25

Nonlinear SVM example

Do the math:

$\Phi\!\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \cdot \Phi\!\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \begin{pmatrix} x_1^2 \\ \sqrt{2}\,x_1 x_2 \\ x_2^2 \end{pmatrix} \cdot \begin{pmatrix} y_1^2 \\ \sqrt{2}\,y_1 y_2 \\ y_2^2 \end{pmatrix}
= x_1^2 y_1^2 + 2 x_1 x_2 y_1 y_2 + x_2^2 y_2^2
= (x_1 y_1 + x_2 y_2)^2
= \left(\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \cdot \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}\right)^{\!2}$
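A quick numerical sanity check of this identity (a sketch, not from the slides):

```python
import numpy as np

def phi(x):
    """Explicit feature map R^2 -> R^3 from the slide."""
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

rng = np.random.default_rng(0)
x, y = rng.normal(size=2), rng.normal(size=2)
print(phi(x) @ phi(y), (x @ y) ** 2)  # identical up to rounding
```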

Page 26

Nonlinear SVM

• Recall: Input data xi enters calculation only via dot products xi ·xj or Φ(xi)·Φ(xj)

• Kernel trick: K(xi ,xj) = Φ(xi)·Φ(xj)

• Advantage: no need to calculate Φ
• Advantage: no need to know H
• What are admissible kernel functions?

Page 27

Kernel functions

• Satisfy Mercer's condition
• Most widely used (and their parameters):
  – Polynomials (degree)
  – Gaussians (variance)

• For a given kernel function K, the projection Φ and the projection space H are not unique
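The two kernel families named above are easy to write down; a minimal sketch (the polynomial form follows the lecture's example (x • y)^d, though a common variant adds an offset):

```python
import numpy as np

def poly_kernel(x, y, d=2):
    """Polynomial kernel as in the lecture's example; a common
    variant is (x @ y + 1) ** d."""
    return (x @ y) ** d

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel with width parameter sigma."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(poly_kernel(x, y), gaussian_kernel(x, y))
```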

Page 28

Kernel function example

• Data space R2; x, y ∈ R2

• K(x, y) = (x • y)²

• Possible H=R3

• Possible Φ(x1, x2) = (x1², √2 x1x2, x2²)

Page 29

Kernel function example

K(x, y) = (x • y)²

Page 30

SVM examples

Linearly separable data, C = 100

Page 31

SVM examples

C = 100 vs. C = 1

Page 32

SVM examples

Linear function vs. quadratic polynomial

Page 33

SVM examples

Quadratic polynomial, C = 10 vs. C = 100

Page 34

SVM examples

Cubic polynomial vs. Gaussian, σ = 1

Page 35

SVM examples

Quadratic polynomial vs. cubic polynomial

Page 36

SVM examples

Cubic polynomial vs. degree-4 polynomial

Page 37

SVM examples

Gaussian, σ = 1 vs. Gaussian, σ = 3

Page 38

Performance comparison

• Log. regression ⇔ ANN ⇔ SVM
• Real-world data set
• 1619 lesion images
• 107 morphometric features:
  – Global (size, shape)
  – Local (color distributions)
• Use ROC analysis (a sketch follows below)
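ROC analysis summarizes a classifier by the area under its ROC curve (AUC); a minimal sketch using illustrative synthetic scores, not the study's lesion data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Illustrative only -- random labels and noisy scores
rng = np.random.default_rng(1)
t = rng.integers(0, 2, size=200)              # true labels
scores = t + rng.normal(scale=1.0, size=200)  # noisy classifier output
print("AUC:", roc_auc_score(t, scores))
```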

Page 39

CN ⇔ DN + MM

• Logistic regression: 0.829
• Artificial neural networks: 0.826
• Support vector machines (polynomial kernel): 0.738 to 0.813
• Support vector machines (Gaussian kernel): 0.786 to 0.831

Page 40

MM ⇔ CN + DN

• Logistic regression: 0.968
• Artificial neural networks: 0.968
• Support vector machines (polynomial kernel): 0.854 to 0.918
• Support vector machines (Gaussian kernel): 0.947 to 0.970

Page 41

Summary

• SVM based on statistical learning theory
• Bounds on generalization performance
• Optimal separating hyperplanes
• Kernel trick (projection)
• Performs comparably to logistic regression and neural networks

Page 42

Pointers to the literature

• Burges C. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery. 1998;2(2):121-167.

• Cristianini N, Shawe-Taylor J. An introduction to support vector machines. Cambridge University Press; 2000.

• Vapnik V. Statistical learning theory. Wiley-Interscience; 1998.