Eigenfaces for recognition
Matthew Turk and Alex Pentland, J. Cognitive Neuroscience, 1991


Page 1: Eigenfaces for recognition

Matthew Turk and Alex Pentland, J. Cognitive Neuroscience, 1991

Page 2: Linear subspaces

Suppose the data points are arranged as in the figure: a 2D scatter of orange points lying close to a line. Idea: fit a line; the classifier measures distance to the line.

Convert x into (v1, v2) coordinates.

What does the v2 coordinate measure?

-  distance to the line; use it for classification, since it is near 0 for the orange points

What does the v1 coordinate measure?

-  position along the line; use it to specify which orange point it is

Page 3: Dimensionality reduction

•  We can represent the orange points with only their v1 coordinates (since v2 coordinates are all essentially 0)

•  This makes it much cheaper to store and compare points

•  A bigger deal for higher-dimensional problems (a 2D sketch follows)
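A minimal sketch of this reduction in 2D, assuming NumPy; the coordinates below are synthetic stand-ins for the orange points:

```python
import numpy as np

# Synthetic "orange points": 2D points scattered tightly around a line.
rng = np.random.default_rng(0)
t = rng.uniform(-5, 5, size=100)
pts = np.stack([t, 0.5 * t], axis=1) + rng.normal(0, 0.05, size=(100, 2))

# Fit the line: v1 and v2 are eigenvectors of the scatter matrix A.
mean = pts.mean(axis=0)
centered = pts - mean
A = centered.T @ centered
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v2, v1 = eigvecs[:, 0], eigvecs[:, 1]  # smallest / largest eigenvalue

coords_v1 = centered @ v1  # position along the line: keep these
coords_v2 = centered @ v2  # distance to the line: all essentially 0
print(np.abs(coords_v2).max())  # tiny, so storing v1 coordinates suffices
```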

Page 4: Linear subspaces

Consider the variation along a unit direction v among all of the orange points:

var(v) = Σ_x ((x − x̄) · v)² = vᵀ A v,  where A = Σ_x (x − x̄)(x − x̄)ᵀ and x̄ is the mean

What unit vector v minimizes var?

What unit vector v maximizes var?

Solution: v1 is the eigenvector of A with the largest eigenvalue; v2 is the eigenvector of A with the smallest eigenvalue.
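A quick numerical check of that claim, assuming NumPy; the matrix below is an arbitrary symmetric positive semi-definite stand-in for A:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2))
A = M @ M.T  # symmetric positive semi-definite, like a scatter matrix

eigvals, eigvecs = np.linalg.eigh(A)
v_min, v_max = eigvecs[:, 0], eigvecs[:, -1]

# No random unit vector attains more variance than the top eigenvector
# (or less than the bottom one).
for _ in range(1000):
    v = rng.normal(size=2)
    v /= np.linalg.norm(v)
    var = v @ A @ v
    assert v_min @ A @ v_min - 1e-9 <= var <= v_max @ A @ v_max + 1e-9
```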

Page 5: Principal component analysis

•  Suppose each data point is N-dimensional
   –  The same procedure applies: maximize var(v) = vᵀ A v over unit vectors v, with A built from the centered data as before
   –  The eigenvectors of A define a new coordinate system
      •  the eigenvector with the largest eigenvalue captures the most variation among the training vectors x
      •  the eigenvector with the smallest eigenvalue has the least variation
   –  We can compress the data using the top few eigenvectors
      •  this corresponds to choosing a “linear subspace”: representing points on a line, plane, or “hyper-plane”
      •  these eigenvectors are known as the principal components
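A sketch of that compression step, assuming NumPy; the principal components are taken from an SVD of the centered data, which gives the same eigenvectors as forming A explicitly:

```python
import numpy as np

def pca_compress(X, K):
    """Represent each row of X (n points, N dims) by K coefficients."""
    mean = X.mean(axis=0)
    centered = X - mean
    # Right singular vectors of the centered data = eigenvectors of A,
    # ordered by decreasing eigenvalue.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:K]               # the top-K principal components
    coeffs = centered @ components.T  # K numbers per point instead of N
    return mean, components, coeffs

def pca_reconstruct(mean, components, coeffs):
    """Approximate the original points from their K coefficients."""
    return mean + coeffs @ components

# Compress 100-dimensional points down to 5 coordinates each.
X = np.random.default_rng(2).normal(size=(200, 100))
mean, comps, coeffs = pca_compress(X, K=5)
X_hat = pca_reconstruct(mean, comps, coeffs)
```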

Page 6: The space of faces

•  An image is a point in a high-dimensional space
   –  An N x M image is a point in R^NM
   –  We can define vectors in this space as we did in the 2D case

(figure: adding two face images pixel by pixel gives another image, i.e. another point in this space)
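A minimal sketch of that correspondence, assuming NumPy; the arrays stand in for real face images:

```python
import numpy as np

N, M = 64, 64                    # image height and width
img_a = np.zeros((N, M))         # stand-ins for two face images
img_b = np.ones((N, M))

# An N x M image is a point in R^NM: flatten it into a vector.
x_a = img_a.reshape(-1)          # N*M = 4096 entries
x_b = img_b.reshape(-1)

# Vector arithmetic works exactly as in the 2D case.
blend = 0.5 * x_a + 0.5 * x_b    # still a point in image space
blend_img = blend.reshape(N, M)  # and back to an N x M image
```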

Page 7: Face Recognition and Detection

Dimensionality reduction

•  The set of faces is a “subspace” of the set of images
   –  We can find the best subspace using PCA
   –  This is like fitting a “hyper-plane” to the set of faces
      •  spanned by vectors v1, v2, ..., vK
      •  any face x can then be approximated as x ≈ x̄ + a1·v1 + a2·v2 + ... + aK·vK for some coefficients a1, ..., aK

Page 8: Eigenfaces

•  PCA extracts the eigenvectors of A
   –  This gives a set of vectors v1, v2, v3, ...
   –  Each vector is a direction in face space
      •  what do these look like?

Pages 9-10: (figures: the eigenfaces, i.e. the eigenvectors reshaped and displayed as images)
Page 11: Projecting onto the eigenfaces

•  The eigenfaces v1, ..., vK span the space of faces
   –  A face x is converted to eigenface coordinates by
      ai = vi · (x − x̄),  i = 1, ..., K,  giving x → (a1, a2, ..., aK)
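A sketch of the conversion, assuming NumPy, with `mean_face` and the rows of `eigenfaces` coming from the PCA step on the training faces:

```python
import numpy as np

def to_eigenface_coords(x, mean_face, eigenfaces):
    """Project a flattened face x onto K eigenfaces.

    eigenfaces: (K, N*M) array with one unit-norm eigenvector per row.
    Returns the eigenface coordinates (a1, ..., aK).
    """
    return eigenfaces @ (x - mean_face)

def from_eigenface_coords(a, mean_face, eigenfaces):
    """Map K coordinates back to an approximate face in image space."""
    return mean_face + a @ eigenfaces
```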

Page 12: Recognition with eigenfaces

1.  Process the image database (set of images with labels)
    •  Run PCA to compute the eigenfaces
    •  Calculate the K coefficients for each image

2.  Given a new image x (to be recognized), calculate its K coefficients

3.  Detect whether x is a face

4.  If it is a face, who is it?
    •  Find the closest labeled face in the database
       –  nearest neighbor in K-dimensional space
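A sketch of the whole pipeline, assuming NumPy; `face_thresh` is a hypothetical threshold on the distance from face space, one common way to implement step 3:

```python
import numpy as np

def train(images, labels, K):
    """images: (n, N*M) array of flattened, labeled training faces."""
    mean = images.mean(axis=0)
    _, _, Vt = np.linalg.svd(images - mean, full_matrices=False)
    eigenfaces = Vt[:K]                      # step 1a: compute eigenfaces
    coeffs = (images - mean) @ eigenfaces.T  # step 1b: K coefficients each
    return mean, eigenfaces, coeffs, labels

def recognize(x, model, face_thresh):
    mean, eigenfaces, coeffs, labels = model
    a = eigenfaces @ (x - mean)              # step 2: K coefficients for x
    # Step 3: x is a face if it lies close to the face subspace, i.e. if
    # reconstructing it from only K coefficients loses little.
    recon = mean + a @ eigenfaces
    if np.linalg.norm(x - recon) > face_thresh:
        return None                          # not a face
    # Step 4: nearest labeled neighbor in K-dimensional coefficient space.
    dists = np.linalg.norm(coeffs - a, axis=1)
    return labels[np.argmin(dists)]
```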

Page 13: Choosing the dimension K

(figure: the eigenvalues plotted in decreasing order, with the index running from 1 to NM and a cutoff chosen at K)

•  How many eigenfaces should we use?
•  Look at the decay of the eigenvalues
   –  the eigenvalue tells you the amount of variance “in the direction” of that eigenface
   –  ignore eigenfaces with low variance

We choose K so that the retained eigenvalues account for most of the total variance; in this case, we say that we “preserve” 90% or 95% of the information in our data.
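A sketch of that rule, assuming NumPy; `keep` is the fraction of total variance to preserve, e.g. 0.90 or 0.95:

```python
import numpy as np

def choose_k(images, keep=0.95):
    """Smallest K whose top-K eigenvalues preserve `keep` of the variance."""
    centered = images - images.mean(axis=0)
    # Squared singular values of the centered data are the eigenvalues of A.
    s = np.linalg.svd(centered, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)  # cumulative variance fraction
    return int(np.searchsorted(frac, keep)) + 1
```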

Page 14: Limitations

(figures only)

Page 15: Limitations

(figures only)