Lecture 11: Face Recognition and Feature Reduction
Juan Carlos Niebles and Ranjay Krishna, Stanford Vision and Learning Lab
Stanford University, 2-Nov-17

Source: vision.stanford.edu/teaching/cs131_fall1718/files/12_svd_PCA.pdf



Recap: Curse of dimensionality
• Assume 5000 points uniformly distributed in the unit hypercube and we want to apply 5-NN. Suppose our query point is at the origin.
  – In 1 dimension, we must go a distance of 5/5000 = 0.001 on average to capture the 5 nearest neighbors.
  – In 2 dimensions, we must go out (0.001)^(1/2) to get a square that contains 0.001 of the volume.
  – In d dimensions, we must go (0.001)^(1/d).
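To see how fast this grows, a small sketch (the dimensions chosen are just illustrative):

    # Side length of a hypercube that contains a 0.001 fraction of the unit cube's volume
    for d in (1, 2, 3, 10, 100):
        side = 0.001 ** (1.0 / d)
        print(f"d = {d:3d}: must go out about {side:.3f}")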


What we will learn today
• Singular value decomposition
• Principal Component Analysis (PCA)
• Image compression


Singular Value Decomposition (SVD)
• There are several computer algorithms that can "factorize" a matrix, representing it as the product of some other matrices
• The most useful of these is the Singular Value Decomposition
• Represents any matrix A as a product of three matrices: UΣV^T
• Python command:
  – U, S, Vt = numpy.linalg.svd(A)   (the third output is V^T)

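For concreteness, a minimal NumPy sketch (the matrix values are just an illustration):

    import numpy as np

    A = np.array([[3.0, 1.0],
                  [1.0, 3.0],
                  [0.0, 2.0]])              # a 3x2 example matrix

    U, S, Vt = np.linalg.svd(A)             # S holds the singular values, Vt is V^T

    # Rebuild A: put the singular values on the diagonal of a 3x2 Sigma
    Sigma = np.zeros(A.shape)
    np.fill_diagonal(Sigma, S)
    print(np.allclose(A, U @ Sigma @ Vt))   # True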


Singular Value Decomposition (SVD)
• UΣV^T = A, where U and V are rotation matrices, and Σ is a scaling matrix.


Singular Value Decomposition (SVD)
• Beyond 2x2 matrices:
  – In general, if A is m x n, then U will be m x m, Σ will be m x n, and V^T will be n x n.
  – (Note the dimensions work out to produce m x n after multiplication)

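A quick shape check with NumPy (the 5x3 size is arbitrary; note that numpy returns the singular values as a 1-D array rather than a full m x n Σ):

    import numpy as np

    A = np.random.rand(5, 3)                       # m = 5, n = 3
    U, S, Vt = np.linalg.svd(A)                    # full SVD (default)
    print(U.shape, S.shape, Vt.shape)              # (5, 5) (3,) (3, 3)

    U_r, S_r, Vt_r = np.linalg.svd(A, full_matrices=False)
    print(U_r.shape, Vt_r.shape)                   # (5, 3) (3, 3) -- "economy" SVD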


Singular Value Decomposition (SVD)
• U and V are always rotation matrices.
  – Geometric rotation may not be an applicable concept, depending on the matrix, so we call them "unitary" matrices – each column is a unit vector.
• Σ is a diagonal matrix
  – The number of nonzero entries = rank of A
  – The algorithm always sorts the entries high to low


SVD Applications
• We've discussed SVD in terms of geometric transformation matrices
• But SVD of an image matrix can also be very useful
• To understand this, we'll look at a less geometric interpretation of what SVD is doing


SVD Applications
• Look at how the multiplication works out, left to right:
• Column 1 of U gets scaled by the first value from Σ.
• The resulting vector gets scaled by row 1 of V^T to produce a contribution to the columns of A


SVD Applications
• Each product of (column i of U) · (value i from Σ) · (row i of V^T) produces a component of the final A.


SVD Applications
• We're building A as a linear combination of the columns of U
• Using all columns of U, we'll rebuild the original matrix perfectly
• But, in real-world data, often we can just use the first few columns of U and we'll get something close (e.g. the first A_partial, above)

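A small sketch of this truncated reconstruction (the random matrix and the choice k = 2 are only illustrative):

    import numpy as np

    A = np.random.rand(6, 5)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)

    k = 2                                            # keep only the first k components
    A_partial = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

    # Each kept term is a rank-1 piece: (column i of U) * S[i] * (row i of Vt)
    print(np.linalg.norm(A - A_partial))             # reconstruction error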


SVD Applications
• We can call those first few columns of U the Principal Components of the data
• They show the major patterns that can be added to produce the columns of the original matrix
• The rows of V^T show how the principal components are mixed to produce the columns of the matrix


SVD Applications
• We can look at Σ to see that the first column has a large effect, while the second column has a much smaller effect in this example


SVD Applications
• For this image, using only the first 10 of 300 principal components produces a recognizable reconstruction
• So, SVD can be used for image compression


SVD for symmetric matrices
• If A is a symmetric matrix, it can be decomposed as A = UΣU^T
• Compared to a traditional SVD decomposition, U = V and U is an orthogonal matrix.


Principal Component Analysis
• Remember, columns of U are the Principal Components of the data: the major patterns that can be added to produce the columns of the original matrix
• One use of this is to construct a matrix where each column is a separate data sample
• Run SVD on that matrix, and look at the first few columns of U to see patterns that are common among the columns
• This is called Principal Component Analysis (or PCA) of the data samples


Principal Component Analysis
• Often, raw data samples have a lot of redundancy and patterns
• PCA can allow you to represent data samples as weights on the principal components, rather than using the original raw form of the data
• By representing each sample as just those weights, you can represent just the "meat" of what's different between samples
• This minimal representation makes machine learning and other algorithms much more efficient


How is SVD computed?
• For this class: tell Python to do it. Use the result.
• But, if you're interested, one computer algorithm to do it makes use of eigenvectors!


Eigenvector definition
• Suppose we have a square matrix A. We can solve for a vector x and scalar λ such that Ax = λx
• In other words, find vectors where, if we transform them with A, the only effect is to scale them with no change in direction.
• These vectors are called eigenvectors (German for "self vector" of the matrix), and the scaling factors λ are called eigenvalues
• An m x m matrix will have ≤ m eigenvectors where λ is nonzero


Finding eigenvectors
• Computers can find an x such that Ax = λx using this iterative algorithm:
  – x = random unit vector
  – while (x hasn't converged):
    • x = Ax
    • normalize x
• x will quickly converge to an eigenvector
• Some simple modifications will let this algorithm find all eigenvectors

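A minimal power-iteration sketch in NumPy (the convergence test and the example matrix are illustrative choices):

    import numpy as np

    def power_iteration(A, num_iters=1000, tol=1e-10):
        # Returns a dominant eigenvector and eigenvalue of square matrix A
        x = np.random.rand(A.shape[0])
        x /= np.linalg.norm(x)                  # random unit vector
        for _ in range(num_iters):
            x_new = A @ x                       # x = Ax
            x_new /= np.linalg.norm(x_new)      # normalize x
            if np.linalg.norm(x_new - x) < tol: # has x converged?
                x = x_new
                break
            x = x_new
        lam = x @ A @ x                         # Rayleigh quotient gives λ for unit x
        return x, lam

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    v, lam = power_iteration(A)
    print(lam, v)                               # ≈ 3.62 and its eigenvector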


Finding SVD
• Eigenvectors are for square matrices, but SVD is for all matrices
• To do svd(A), computers can do this:
  – Take eigenvectors of AA^T (that matrix is always square).
    • These eigenvectors are the columns of U.
    • Square roots of the eigenvalues are the singular values (the entries of Σ).
  – Take eigenvectors of A^T A (that matrix is always square).
    • These eigenvectors are columns of V (or rows of V^T)

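A small numerical check of this relationship (random matrix; eigenvector signs can differ between the two computations, so only the eigenvalues are compared here):

    import numpy as np

    A = np.random.rand(4, 3)
    U, S, Vt = np.linalg.svd(A, full_matrices=False)

    # Eigenvalues of A^T A (and the top ones of A A^T) equal the squared singular values
    evals_ATA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
    evals_AAT = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1][:3]

    print(np.allclose(evals_ATA, S**2))   # True
    print(np.allclose(evals_AAT, S**2))   # True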


Finding SVD
• Moral of the story: SVD is fast, even for large matrices
• It's useful for a lot of stuff
• There are also other algorithms to compute SVD or part of the SVD
  – Python's np.linalg.svd() command has options to efficiently compute only what you need, if performance becomes an issue


A detailed geometric explanation of SVD is here: http://www.ams.org/samplings/feature-column/fcarc-svd


What we will learn today
• Introduction to face recognition
• Principal Component Analysis (PCA)
• Image compression


Covariance
• Variance and covariance are measures of the "spread" of a set of points around their center of mass (mean)
• Variance – a measure of the deviation from the mean for points in one dimension, e.g. heights
• Covariance – a measure of how much each of the dimensions varies from the mean with respect to the others
• Covariance is measured between 2 dimensions to see if there is a relationship between them, e.g. number of hours studied & marks obtained
• The covariance between one dimension and itself is the variance


Covariance
• So, if you had a 3-dimensional data set (x, y, z), then you could measure the covariance between the x and y dimensions, the y and z dimensions, and the x and z dimensions. Measuring the covariance between x and x, or y and y, or z and z would give you the variance of the x, y and z dimensions respectively


Covariance matrix
• Representing covariance between dimensions as a matrix, e.g. for 3 dimensions:

      C = | cov(x,x)  cov(x,y)  cov(x,z) |
          | cov(y,x)  cov(y,y)  cov(y,z) |
          | cov(z,x)  cov(z,y)  cov(z,z) |

• The diagonal is the variances of x, y and z
• cov(x,y) = cov(y,x), hence the matrix is symmetric about the diagonal
• N-dimensional data will result in an N x N covariance matrix

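A quick NumPy illustration (random 3-D data; np.cov treats each row as one dimension by default):

    import numpy as np

    X = np.random.rand(3, 100)     # 100 samples of (x, y, z), one column per sample
    C = np.cov(X)                  # 3x3 covariance matrix

    print(C.shape)                 # (3, 3)
    print(np.allclose(C, C.T))     # True: symmetric about the diagonal
    print(np.diag(C))              # variances of x, y and z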


Covariance
• What is the interpretation of covariance calculations?
  – e.g.: 2-dimensional data set
  – x: number of hours studied for a subject
  – y: marks obtained in that subject
  – covariance value is, say, 104.53
  – what does this value mean?


Covariance interpretation


Covariance interpretation
• The exact value is not as important as its sign.
• A positive value of covariance indicates both dimensions increase or decrease together, e.g. as the number of hours studied increases, the marks in that subject increase.
• A negative value indicates that while one increases the other decreases, or vice-versa, e.g. active social life at PSU vs performance in the CS dept.
• If the covariance is zero, the two dimensions are uncorrelated with each other, e.g. heights of students vs the marks obtained in a subject


Example data
• Covariance between the two axes is high. Can we reduce the number of dimensions to just 1?


Geometric interpretation of PCA


Geometric interpretation of PCA
• Let's say we have a set of 2D data points x. But we see that all the points lie on a line in 2D.
• So, 2 dimensions are redundant to express the data. We can express all the points with just one dimension.
  (Figure: a 1D subspace in 2D)


PCA: Principal Component Analysis
• Given a set of points, how do we know if they can be compressed like in the previous example?
  – The answer is to look into the correlation between the points
  – The tool for doing this is called PCA


PCA Formulation
• Basic idea:
  – If the data lives in a subspace, it is going to look very flat when viewed from the full space, e.g. a 1D subspace in 2D, or a 2D subspace in 3D
(Slide inspired by N. Vasconcelos)


PCA Formulation
• Assume x is Gaussian with covariance Σ.
• Recall that a Gaussian is defined by its mean and covariance.
• Recall that μ and Σ of a Gaussian are defined as:
  μ = E[x],   Σ = E[(x − μ)(x − μ)^T]
(Figure: a 2D Gaussian over axes x1 and x2, with principal directions φ1, φ2 and principal lengths λ1, λ2)


PCA formulation
• A Gaussian's covariance matrix is symmetric, so we can express it as:
  – Σ = UΛU^T = UΛ^(1/2) (UΛ^(1/2))^T
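A numerical check of this factorization (covariance built from random 2-D data):

    import numpy as np

    X = np.random.rand(2, 500)            # 2-D data, one sample per column
    Sigma = np.cov(X)                     # symmetric 2x2 covariance matrix

    lam, U = np.linalg.eigh(Sigma)        # eigenvalues Λ (ascending) and eigenvectors U
    print(np.allclose(Sigma, U @ np.diag(lam) @ U.T))    # Σ = U Λ U^T

    half = U @ np.diag(np.sqrt(lam))      # UΛ^(1/2)
    print(np.allclose(Sigma, half @ half.T))             # Σ = (UΛ^(1/2))(UΛ^(1/2))^T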


PCA Formulation
• If x is Gaussian with covariance Σ,
  – Principal components φi are the eigenvectors of Σ
  – Principal lengths λi are the eigenvalues of Σ
• By computing the eigenvalues we know the data is
  – Not flat if λ1 ≈ λ2
  – Flat if λ1 >> λ2


PCA Algorithm (training)


PCA Algorithm (testing)

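The equations on these two slides are figures and are not in the extracted text; below is a hedged sketch of the usual training (learn mean and components) and testing (project a new sample) steps, with illustrative function and variable names:

    import numpy as np

    def pca_train(X, k):
        # X: d x n data matrix, one example per column; returns mean and top-k components
        mu = X.mean(axis=1, keepdims=True)
        Xc = X - mu                               # center the data
        Sigma = Xc @ Xc.T / X.shape[1]            # d x d sample covariance
        lam, U = np.linalg.eigh(Sigma)            # eigenvalues in ascending order
        idx = np.argsort(lam)[::-1][:k]           # indices of the k largest eigenvalues
        return mu, U[:, idx]

    def pca_test(x, mu, components):
        # Project a new sample onto the learned principal components
        return components.T @ (x - mu.ravel())

    X = np.random.rand(5, 200)                    # 200 samples of 5-D data
    mu, comps = pca_train(X, k=2)
    print(pca_test(X[:, 0], mu, comps))           # 2-D code for the first sample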


PCA by SVD
• An alternative manner to compute the principal components, based on singular value decomposition
• Quick reminder: SVD
  – Any real n x m matrix (n > m) can be decomposed as A = MΠN^T
  – where M is an (n x m) column-orthonormal matrix of left singular vectors (columns of M)
  – Π is an (m x m) diagonal matrix of singular values
  – N^T is an (m x m) row-orthonormal matrix of right singular vectors (the columns of N)


PCA by SVD
• To relate this to PCA, we consider the data matrix X = [x1 … xn], with one example per column
• The sample mean is μ = (1/n) Σi xi


PCA by SVD
• Center the data by subtracting the mean from each column of X
• The centered data matrix is Xc = X − μ1^T (μ subtracted from every column)


PCA by SVD
• The sample covariance matrix is
  Σ = (1/n) Σi (xi − μ)(xi − μ)^T = (1/n) Σi xic (xic)^T
  where xic is the ith column of Xc
• This can be written as Σ = (1/n) Xc Xc^T


PCA by SVD
• The matrix Xc^T / √n is real (n x d). Assuming n > d it has SVD decomposition
  Xc^T / √n = MΠN^T
• and therefore
  Σ = (1/n) Xc Xc^T = NΠM^T MΠN^T = NΠ²N^T


PCA by SVD
• Note that N is (d x d) and orthonormal, and Π² is diagonal. This is just the eigenvalue decomposition of Σ
• It follows that
  – The eigenvectors of Σ are the columns of N
  – The eigenvalues of Σ are the diagonal entries of Π² (the squared singular values)
• This gives an alternative algorithm for PCA


PCA by SVD
• In summary, computation of PCA by SVD
• Given X with one example per column
  – Create the centered data matrix Xc
  – Compute its SVD: Xc^T / √n = MΠN^T
  – Principal components are the columns of N, eigenvalues are the diagonal entries of Π²

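A compact NumPy sketch of this summary (random data; full_matrices=False keeps the economy-size factors):

    import numpy as np

    X = np.random.rand(5, 200)             # d = 5 dimensions, n = 200 examples (one per column)
    n = X.shape[1]

    mu = X.mean(axis=1, keepdims=True)
    Xc = X - mu                            # centered data matrix

    M, Pi, Nt = np.linalg.svd(Xc.T / np.sqrt(n), full_matrices=False)
    components = Nt.T                      # principal components: the columns of N
    eigenvalues = Pi ** 2                  # eigenvalues of the sample covariance

    # Cross-check against the covariance's own eigendecomposition
    Sigma = Xc @ Xc.T / n
    print(np.allclose(np.sort(np.linalg.eigvalsh(Sigma)), np.sort(eigenvalues)))   # True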


Rule of thumb for finding the number of PCA components
• A natural measure is to pick the eigenvectors that explain p% of the data variability
  – Can be done by plotting the ratio rk = (λ1 + … + λk) / (λ1 + … + λn) as a function of k
  – E.g. we need 3 eigenvectors to cover 70% of the variability of this data set

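A short sketch of picking k this way (the eigenvalues come from the covariance of random data; the 70% threshold is the slide's example):

    import numpy as np

    X = np.random.rand(10, 500)
    Xc = X - X.mean(axis=1, keepdims=True)
    lam = np.linalg.eigvalsh(Xc @ Xc.T / X.shape[1])[::-1]   # eigenvalues, largest first

    r = np.cumsum(lam) / np.sum(lam)          # r_k for k = 1..n
    k = int(np.searchsorted(r, 0.70)) + 1     # smallest k with r_k >= 70%
    print(f"need {k} eigenvectors to cover 70% of the variability")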


What we will learn today
• Introduction to face recognition
• Principal Component Analysis (PCA)
• Image compression


Original Image
• Divide the original 372x492 image into patches:
• Each patch is an instance that contains 12x12 pixels on a grid
• View each as a 144-D vector
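A hedged sketch of this patch-based compression (image loading and the choice of k are illustrative; assumes a grayscale image whose height and width are multiples of 12):

    import numpy as np

    def compress_patches(img, k):
        # PCA-compress non-overlapping 12x12 patches of a grayscale image to k dimensions
        h, w = img.shape
        patches = (img.reshape(h // 12, 12, w // 12, 12)
                      .transpose(0, 2, 1, 3)
                      .reshape(-1, 144).T)       # 144 x (number of patches), one patch per column
        mu = patches.mean(axis=1, keepdims=True)
        Pc = patches - mu
        U, S, Vt = np.linalg.svd(Pc, full_matrices=False)
        codes = U[:, :k].T @ Pc                  # k-D code for every patch
        recon = U[:, :k] @ codes + mu            # reconstruct from k components
        return (recon.T.reshape(h // 12, w // 12, 12, 12)
                      .transpose(0, 2, 1, 3).reshape(h, w))

    img = np.random.rand(372, 492)               # stand-in for the 372x492 image
    print(np.linalg.norm(img - compress_patches(img, k=16)))   # L2 reconstruction error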


L2 error and PCA dimension


PCA compression: 144D → 60D


PCA compression: 144D → 16D


16 most important eigenvectors



PCA compression: 144D → 6D


6 most important eigenvectors


PCA compression: 144D → 3D


3 most important eigenvectors


PCA compression: 144D → 1D


What we have learned today
• Introduction to face recognition
• Principal Component Analysis (PCA)
• Image compression
