Unsupervised learning (part 1), Lecture 19. David Sontag, New York University. Slides adapted from Carlos Guestrin, Dan Klein, Luke Zettlemoyer, Dan Weld, Vibhav Gogate, and Andrew Moore.



Page 1:

Unsupervised learning (part 1), Lecture 19

David Sontag, New York University

Slides adapted from Carlos Guestrin, Dan Klein, Luke Zettlemoyer, Dan Weld, Vibhav Gogate, and Andrew Moore

Page 2:

Bayesian networks enable use of domain knowledge

Will my car start this morning?

Heckerman et al., Decision-Theoretic Troubleshooting, 1995

Bayesian networks (Reference: Chapter 3)

A Bayesian network is specified by a directed acyclic graph G = (V, E) with:

1. One node i ∈ V for each random variable X_i
2. One conditional probability distribution (CPD) per node, $p(x_i \mid x_{\mathrm{Pa}(i)})$, specifying the variable's probability conditioned on its parents' values

Corresponds one-to-one with a particular factorization of the joint distribution:

$$p(x_1, \ldots, x_n) = \prod_{i \in V} p(x_i \mid x_{\mathrm{Pa}(i)})$$

Powerful framework for designing algorithms to perform probability computations.

David Sontag (NYU), Graphical Models, Lecture 1, January 31, 2013, slide 30/44
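As a concrete illustration of this factorization (not from the slides), here is a minimal sketch that evaluates the joint probability of one full assignment in a tiny hypothetical network by multiplying its CPDs; the structure and probability tables are made-up illustration values.

```python
import numpy as np

# Hypothetical 3-variable network: Rain -> WetGrass <- Sprinkler (all binary).
p_rain = np.array([0.8, 0.2])                     # p(rain)
p_sprinkler = np.array([0.6, 0.4])                # p(sprinkler)
p_wet = np.array([[[0.99, 0.01],                  # p(wet | rain, sprinkler), indexed [rain][sprinkler][wet]
                   [0.10, 0.90]],
                  [[0.20, 0.80],
                   [0.05, 0.95]]])

def joint(rain, sprinkler, wet):
    # p(x_1, ..., x_n) = prod_i p(x_i | x_Pa(i))
    return p_rain[rain] * p_sprinkler[sprinkler] * p_wet[rain, sprinkler, wet]

print(joint(rain=1, sprinkler=0, wet=1))  # 0.2 * 0.6 * 0.80 = 0.096
```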

Page 3:


Bayesian networks enable use of domain knowledge

What is the differential diagnosis?

Beinlich et al., The ALARM Monitoring System, 1989

Page 4:

Bayesian networks are generative models

• Can sample from the joint distribution, top-down

• Suppose Y can be "spam" or "not spam", and X_i is a binary indicator of whether word i is present in the e-mail

• Let's try generating a few emails!

• Often helps to think about Bayesian networks as a generative model when constructing the structure and thinking about the model assumptions

[Diagram: naive Bayes structure with label Y as the parent of features X_1, X_2, X_3, ..., X_n]
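To make the generative story concrete, here is a minimal sketch of top-down (ancestral) sampling from such a naive Bayes spam model; the vocabulary, prior, and per-word probabilities are made-up illustration values, not parameters from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a tiny naive Bayes spam model.
p_spam = 0.4                                              # P(Y = spam)
vocab = ["viagra", "meeting", "free", "lunch"]
p_word_given_spam     = np.array([0.8, 0.1, 0.7, 0.1])    # P(X_i = 1 | Y = spam)
p_word_given_not_spam = np.array([0.01, 0.5, 0.2, 0.4])   # P(X_i = 1 | Y = not spam)

def sample_email():
    # Top-down: first sample the label, then each word indicator given the label.
    y = rng.random() < p_spam
    p_words = p_word_given_spam if y else p_word_given_not_spam
    x = rng.random(len(vocab)) < p_words
    return ("spam" if y else "not spam"), [w for w, present in zip(vocab, x) if present]

for _ in range(3):
    print(sample_email())
```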

Page 5:

Inference in Bayesian networks

• Computing marginal probabilities in tree-structured Bayesian networks is easy
  – The algorithm called "belief propagation" generalizes what we showed for hidden Markov models to arbitrary trees

• Wait... this isn't a tree! What can we do?

[Diagrams: a hidden Markov model with hidden states Y_1, ..., Y_6 and observations X_1, ..., X_6, and the naive Bayes model with label Y and features X_1, ..., X_n]

Page 6:

Inference in Bayesian networks

• In some cases (such as this) we can transform the model into what is called a "junction tree", and then run belief propagation

Page 7:

Approximate inference

• There is also a wealth of approximate inference algorithms that can be applied to Bayesian networks such as these

• Markov chain Monte Carlo algorithms repeatedly sample assignments for estimating marginals

• Variational inference algorithms (deterministic) find a simpler distribution which is "close" to the original, then compute marginals using the simpler distribution
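To make the Monte Carlo idea concrete, here is a minimal Gibbs-sampling sketch on a hypothetical three-node chain A → B → C with made-up CPDs; it estimates the marginal P(A = 1 | C = 1) by repeatedly resampling each unobserved variable given the others. This is an illustration of the general idea, not an algorithm from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CPDs for a binary chain A -> B -> C (illustration values only).
p_a = 0.3                           # P(A = 1)
p_b_given_a = np.array([0.2, 0.9])  # P(B = 1 | A = a), indexed by a
p_c_given_b = np.array([0.1, 0.8])  # P(C = 1 | B = b), indexed by b

def bernoulli_pmf(p, value):
    return p if value == 1 else 1.0 - p

def gibbs_marginal_a_given_c1(num_samples=20_000, burn_in=1_000):
    a, b = 0, 0                     # arbitrary initialization; C is clamped to 1
    samples = []
    for t in range(num_samples + burn_in):
        # Resample A from P(A | B) proportional to P(A) P(B | A)
        w = np.array([bernoulli_pmf(p_a, av) * bernoulli_pmf(p_b_given_a[av], b)
                      for av in (0, 1)])
        a = int(rng.random() < w[1] / w.sum())
        # Resample B from P(B | A, C=1) proportional to P(B | A) P(C=1 | B)
        w = np.array([bernoulli_pmf(p_b_given_a[a], bv) * p_c_given_b[bv]
                      for bv in (0, 1)])
        b = int(rng.random() < w[1] / w.sum())
        if t >= burn_in:
            samples.append(a)
    return np.mean(samples)         # Monte Carlo estimate of P(A = 1 | C = 1)

print(gibbs_marginal_a_given_c1())
```

For this tiny chain the exact answer is about 0.57 by direct enumeration, which the sampler's estimate should approach as the number of samples grows.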

Page 8:

Maximum likelihood estimation in Bayesian networks

Suppose that we know the Bayesian network structure G.

Let $\theta_{x_i \mid x_{pa(i)}}$ be the parameter giving the value of the CPD $p(x_i \mid x_{pa(i)})$.

Maximum likelihood estimation corresponds to solving

$$\max_\theta \; \frac{1}{M} \sum_{m=1}^{M} \log p(x^{(m)}; \theta)$$

subject to the non-negativity and normalization constraints.

This is equal to

$$\max_\theta \frac{1}{M} \sum_{m=1}^{M} \log p(x^{(m)}; \theta)
= \max_\theta \frac{1}{M} \sum_{m=1}^{M} \sum_{i=1}^{N} \log p\!\left(x_i^{(m)} \mid x_{pa(i)}^{(m)}; \theta\right)
= \max_\theta \sum_{i=1}^{N} \frac{1}{M} \sum_{m=1}^{M} \log p\!\left(x_i^{(m)} \mid x_{pa(i)}^{(m)}; \theta\right)$$

The optimization problem decomposes into an independent optimization problem for each CPD! Has a simple closed-form solution.

David Sontag (NYU), Graphical Models, Lecture 10, April 11, 2013, slide 17/22
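As a sketch of that closed-form solution for tabular (discrete) CPDs, the maximum likelihood estimate is just the normalized count of each child value for every parent configuration. The toy data and network below are made up for illustration; X3's parents are assumed to be (X1, X2).

```python
import numpy as np
from collections import defaultdict

# Hypothetical binary data: columns are (X1, X2, X3), with X3's parents being (X1, X2).
data = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
])

# Closed-form MLE for the CPD p(x3 | x1, x2): normalized counts per parent configuration.
counts = defaultdict(lambda: np.zeros(2))
for x1, x2, x3 in data:
    counts[(int(x1), int(x2))][int(x3)] += 1

cpd = {pa: c / c.sum() for pa, c in counts.items()}
print(cpd)  # e.g. p(X3 = 1 | X1 = 1, X2 = 1) = 0.5
```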


Page 9:

Returning to clustering…

• Clusters may overlap
• Some clusters may be "wider" than others
• Can we model this explicitly?
• With what probability is a point from a cluster?

Page 10:

Probabilistic Clustering

• Try a probabilistic model!
  – allows overlaps, clusters of different size, etc.

• Can tell a generative story for data: P(Y) P(X|Y)

• Challenge: we need to estimate model parameters without labeled Ys

Y      X1     X2
??     0.1    2.1
??     0.5   -1.1
??     0.0    3.0
??    -0.1   -2.0
??     0.2    1.5
…      …      …

Page 11:

Gaussian Mixture Models

[Figure: data points grouped around three cluster means μ_1, μ_2, μ_3]

• P(Y): There are k components

• P(X|Y): Each component generates data from a multivariate Gaussian with mean μ_i and covariance matrix Σ_i

Each data point is assumed to have been sampled from a generative process:

1. Choose component i with probability P(y = i)  [Multinomial]
2. Generate data point ~ N(μ_i, Σ_i)

$$P(X = x_j \mid Y = i) = \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i)\right)$$

By fitting this model (unsupervised learning), we can learn new insights about the data.
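A minimal sketch of this two-step generative process, with made-up mixing weights, means, and covariances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D mixture with k = 3 components (illustration values only).
mixing_weights = np.array([0.5, 0.3, 0.2])                           # P(y = i)
means = np.array([[0.0, 0.0], [4.0, 4.0], [-3.0, 3.0]])              # mu_i
covs = np.array([np.eye(2), 0.5 * np.eye(2), np.diag([2.0, 0.3])])   # Sigma_i

def sample_gmm(n):
    # 1. Choose a component for each point with probability P(y = i).
    ys = rng.choice(len(mixing_weights), size=n, p=mixing_weights)
    # 2. Generate each point from the chosen component's Gaussian.
    xs = np.array([rng.multivariate_normal(means[y], covs[y]) for y in ys])
    return xs, ys

X, Y = sample_gmm(500)
print(X[:3], Y[:3])
```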

Page 12:

Multivariate Gaussians

Σ ∝ identity matrix

$$P(X = x_j \mid Y = i) = \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i)\right)$$

[Figure: plot of the density P(X = x_j)]

Page 13:

Multivariate Gaussians

Σ = diagonal matrix: the X_i are independent, à la Gaussian Naive Bayes

$$P(X = x_j \mid Y = i) = \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i)\right)$$

[Figure: plot of the density P(X = x_j)]

Page 14:

Multivariate Gaussians

Σ = arbitrary (semidefinite) matrix:
  – specifies rotation (change of basis)
  – eigenvalues specify relative elongation

$$P(X = x_j \mid Y = i) = \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i)\right)$$

[Figure: plot of the density P(X = x_j)]

Page 15:

Multivariate Gaussians

$$P(X = x_j \mid Y = i) = \frac{1}{(2\pi)^{m/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x_j - \mu_i)^T \Sigma_i^{-1} (x_j - \mu_i)\right)$$

Covariance matrix Σ = degree to which the x_i vary together; eigenvalue λ of Σ [annotations pointing into the formula and figure]
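The sketch below, using hypothetical parameter values, evaluates this density under the three covariance structures above (identity, diagonal, arbitrary) and prints the eigenvalues of Σ, which control the elongation of the contours:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 2-D mean and three covariance structures matching the slides' taxonomy.
mu = np.array([1.0, -1.0])
sigma_identity = np.eye(2)                        # Sigma proportional to identity: spherical contours
sigma_diagonal = np.diag([3.0, 0.5])              # diagonal Sigma: axis-aligned ellipses, independent X_i
theta = np.pi / 4                                 # full Sigma: rotated ellipse; eigenvectors give the rotation,
R = np.array([[np.cos(theta), -np.sin(theta)],    # eigenvalues the relative elongation
              [np.sin(theta),  np.cos(theta)]])
sigma_full = R @ np.diag([3.0, 0.5]) @ R.T

x = np.array([2.0, 0.0])
for name, sigma in [("identity", sigma_identity),
                    ("diagonal", sigma_diagonal),
                    ("full", sigma_full)]:
    density = multivariate_normal(mean=mu, cov=sigma).pdf(x)
    eigvals = np.linalg.eigvalsh(sigma)
    print(f"{name:8s}  p(x) = {density:.4f}   eigenvalues of Sigma = {eigvals}")
```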

Page 16:

Modelling eruption of geysers

Old Faithful data set

[Figure: scatter plot of time to eruption vs. duration of last eruption]

Page 17:

Modelling eruption of geysers

Old Faithful data set

[Figure: fit with a single Gaussian vs. a mixture of two Gaussians]

Page 18:

Marginal distribution for mixtures of Gaussians

$$p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)$$

[Figure annotations: "component" $\mathcal{N}(x \mid \mu_k, \Sigma_k)$, "mixing coefficient" $\pi_k$; here K = 3]

Page 19:

Marginal distribution for mixtures of Gaussians

Page 20:

Learning mixtures of Gaussians

[Figure panels: original data (hypothesized); observed data (y missing); inferred y's (learned model)]

Shown is the posterior probability that a point was generated from the i-th Gaussian: $\Pr(Y = i \mid x)$

Page 21:

ML estimation in the supervised setting

• Univariate Gaussian

• Mixture of Multivariate Gaussians

The ML estimate for each of the multivariate Gaussians is given by:

$$\mu^{k}_{ML} = \frac{1}{n}\sum_{j=1}^{n} x_j \qquad \Sigma^{k}_{ML} = \frac{1}{n}\sum_{j=1}^{n} \left(x_j - \mu^{k}_{ML}\right)\left(x_j - \mu^{k}_{ML}\right)^{T}$$

where the sums are just over the x generated from the k-th Gaussian.
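A minimal sketch of these closed-form estimates on made-up labeled data (the supervised setting, where each point's component is known):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled data: points x_j with known component labels y_j.
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 100),
               rng.multivariate_normal([5, 5], [[2, 0.5], [0.5, 1]], 150)])
y = np.array([0] * 100 + [1] * 150)

def ml_estimates(X, y, k):
    # Closed-form MLE for component k: mean and covariance of the points labeled k.
    Xk = X[y == k]
    mu = Xk.mean(axis=0)
    diffs = Xk - mu
    sigma = diffs.T @ diffs / len(Xk)   # (1/n) sum (x_j - mu)(x_j - mu)^T
    return mu, sigma

for k in (0, 1):
    mu, sigma = ml_estimates(X, y, k)
    print(f"component {k}: mu = {mu}, Sigma =\n{sigma}")
```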

Page 22:

What about with unobserved data?

• Maximize the marginal likelihood:
  – $\arg\max_\theta \prod_j P(x_j) = \arg\max_\theta \prod_j \sum_{k=1}^{K} P(Y_j = k, x_j)$

• Almost always a hard problem!
  – Usually no closed form solution
  – Even when log P(X, Y) is convex, log P(X) generally isn't…
  – Many local optima
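To make the objective concrete, the sketch below evaluates the marginal log-likelihood $\sum_j \log \sum_k P(Y_j = k, x_j)$ for a hypothetical two-component GMM; the parameters are made up, and the data rows reuse the illustrative unlabeled table from the earlier slide.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

# Hypothetical 2-component GMM parameters and unlabeled data (illustration values only).
weights = np.array([0.6, 0.4])
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]
X = np.array([[0.1, 2.1], [0.5, -1.1], [0.0, 3.0], [-0.1, -2.0], [0.2, 1.5]])

def marginal_log_likelihood(X, weights, means, covs):
    # log P(x_j) = log sum_k P(Y_j = k) P(x_j | Y_j = k), summed over the data points.
    log_joint = np.stack([np.log(w) + multivariate_normal(m, c).logpdf(X)
                          for w, m, c in zip(weights, means, covs)], axis=1)
    return logsumexp(log_joint, axis=1).sum()

print(marginal_log_likelihood(X, weights, means, covs))
```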

Page 23:

Expectation Maximization

1977: Dempster, Laird, & Rubin

Page 24:

The EM Algorithm

• A clever method for maximizing marginal likelihood:
  – $\arg\max_\theta \prod_j P(x_j) = \arg\max_\theta \prod_j \sum_{k=1}^{K} P(Y_j = k, x_j)$
  – Based on coordinate descent. Easy to implement (e.g., no line search, learning rates, etc.)

• Alternate between two steps:
  – Compute an expectation
  – Compute a maximization

• Not magic: still optimizing a non-convex function with lots of local optima
  – The computations are just easier (often, significantly so)

Page 25:

EM: Two Easy Steps

Objective: $\arg\max_\theta \log \prod_j \sum_{k=1}^{K} P(Y_j = k, x_j; \theta) = \arg\max_\theta \sum_j \log \sum_{k=1}^{K} P(Y_j = k, x_j; \theta)$

Data: {x_j | j = 1..n}

• E-step: Compute expectations to "fill in" missing y values according to the current parameters, θ
  – For all examples j and values k for Y_j, compute: $P(Y_j = k \mid x_j; \theta)$

• M-step: Re-estimate the parameters with "weighted" MLE estimates
  – Set $\theta_{new} = \arg\max_\theta \sum_j \sum_k P(Y_j = k \mid x_j; \theta_{old}) \log P(Y_j = k, x_j; \theta)$

Particularly useful when the E and M steps have closed form solutions
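A minimal sketch of the two steps for a 1-D mixture, assuming a known shared variance and made-up synthetic data and initializations (so only the means and mixing weights are re-estimated):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])  # unlabeled 1-D data

K, sigma = 2, 1.0                 # assume K components and a known, shared variance
mu = rng.normal(size=K)           # initial guesses for the means
pi = np.full(K, 1.0 / K)          # initial mixing weights P(Y = k)

for it in range(50):
    # E-step: responsibilities P(Y_j = k | x_j; theta) for every point and component.
    joint = pi * norm.pdf(x[:, None], loc=mu, scale=sigma)   # shape (n, K)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: weighted MLE re-estimates of the parameters.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    pi = nk / len(x)

print(mu, pi)
```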

Page 26:

Gaussian Mixture Example: Start

Page 27:

After first iteration

Page 28:

After 2nd iteration

Page 29:

After 3rd iteration

Page 30:

After 4th iteration

Page 31:

After 5th iteration

Page 32:

After 6th iteration

Page 33:

After 20th iteration

Page 34:

EM for GMMs: only learning means (1D)

Iterate: On the t-th iteration let our estimates be λ_t = {μ_1^(t), μ_2^(t), …, μ_K^(t)}

E-step: Compute "expected" classes of all data points

$$P(Y_j = k \mid x_j, \mu_1 \ldots \mu_K) \propto \exp\left(-\frac{1}{2\sigma^2}(x_j - \mu_k)^2\right) P(Y_j = k)$$

M-step: Compute the most likely new μs given the class expectations

$$\mu_k = \frac{\sum_{j=1}^{m} P(Y_j = k \mid x_j)\, x_j}{\sum_{j=1}^{m} P(Y_j = k \mid x_j)}$$

Page 35:

What if we do hard assignments?

Iterate: On the t-th iteration let our estimates be λ_t = {μ_1^(t), μ_2^(t), …, μ_K^(t)}

E-step: Compute "expected" classes of all data points

$$P(Y_j = k \mid x_j, \mu_1 \ldots \mu_K) \propto \exp\left(-\frac{1}{2\sigma^2}(x_j - \mu_k)^2\right) P(Y_j = k)$$

M-step: Compute the most likely new μs given the class expectations, now using hard assignments

$$\mu_k = \frac{\sum_{j=1}^{m} \delta(Y_j = k, x_j)\, x_j}{\sum_{j=1}^{m} \delta(Y_j = k, x_j)}$$

where δ represents a hard assignment to the "most likely" or nearest cluster (compare with the soft update $\mu_k = \frac{\sum_{j=1}^{m} P(Y_j = k \mid x_j)\, x_j}{\sum_{j=1}^{m} P(Y_j = k \mid x_j)}$ from the previous slide).

Equivalent to the k-means clustering algorithm!
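A sketch of this hard-assignment variant on the same kind of made-up 1-D data; each point is assigned to its nearest mean and each mean is then recomputed as the average of its assigned points, exactly the k-means updates (the sketch assumes no cluster ever becomes empty):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 300)])  # unlabeled 1-D data
K = 2
mu = rng.choice(x, size=K, replace=False)   # initialize means at random data points

for it in range(25):
    # Hard E-step: delta assigns each point to its nearest (most likely) cluster.
    assign = np.argmin((x[:, None] - mu[None, :]) ** 2, axis=1)
    # M-step: each mean becomes the average of the points hard-assigned to it.
    mu = np.array([x[assign == k].mean() for k in range(K)])

print(mu)
```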

Page 36:

E.M. for General GMMs

Iterate: On the t-th iteration let our estimates be λ_t = {μ_1^(t), …, μ_K^(t), Σ_1^(t), …, Σ_K^(t), p_1^(t), …, p_K^(t)}

E-step: Compute "expected" classes of all data points for each class

$$P(Y_j = k \mid x_j; \lambda_t) \propto p_k^{(t)}\, p\!\left(x_j; \mu_k^{(t)}, \Sigma_k^{(t)}\right)$$

where $p_k^{(t)}$ is shorthand for the estimate of P(y = k) on the t-th iteration, and $p(x_j; \mu_k^{(t)}, \Sigma_k^{(t)})$ evaluates the probability of a multivariate Gaussian at x_j.

M-step: Compute the weighted MLE for the parameters given the expected classes above

$$\mu_k^{(t+1)} = \frac{\sum_j P(Y_j = k \mid x_j; \lambda_t)\, x_j}{\sum_j P(Y_j = k \mid x_j; \lambda_t)} \qquad
\Sigma_k^{(t+1)} = \frac{\sum_j P(Y_j = k \mid x_j; \lambda_t)\, \left[x_j - \mu_k^{(t+1)}\right]\left[x_j - \mu_k^{(t+1)}\right]^T}{\sum_j P(Y_j = k \mid x_j; \lambda_t)}$$

$$p_k^{(t+1)} = \frac{\sum_j P(Y_j = k \mid x_j; \lambda_t)}{m} \qquad \text{where } m = \text{number of training examples}$$
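A minimal end-to-end sketch of these updates for a general (multivariate) GMM, with made-up synthetic data, random initialization, and no regularization of the covariances:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Hypothetical unlabeled 2-D data drawn from two overlapping clusters.
X = np.vstack([rng.multivariate_normal([0, 0], np.eye(2), 200),
               rng.multivariate_normal([4, 3], [[2, 0.6], [0.6, 1]], 300)])
n, d = X.shape
K = 2

# Initialization: random means from the data, identity covariances, uniform mixing weights.
mu = X[rng.choice(n, K, replace=False)]
sigma = np.array([np.eye(d) for _ in range(K)])
p = np.full(K, 1.0 / K)

for it in range(100):
    # E-step: responsibilities P(Y_j = k | x_j; lambda_t) proportional to p_k * N(x_j; mu_k, Sigma_k).
    joint = np.stack([p[k] * multivariate_normal(mu[k], sigma[k]).pdf(X) for k in range(K)], axis=1)
    resp = joint / joint.sum(axis=1, keepdims=True)
    # M-step: weighted MLE updates for means, covariances, and mixing weights.
    nk = resp.sum(axis=0)
    mu = (resp.T @ X) / nk[:, None]
    for k in range(K):
        diff = X - mu[k]
        sigma[k] = (resp[:, k, None] * diff).T @ diff / nk[k]
    p = nk / n

print(mu, p, sep="\n")
```

With hard 0/1 responsibilities, this loop degenerates to the k-means updates from the previous slide.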

Page 37:

The general learning problem with missing data

• Marginal likelihood: X is observed, Z (e.g. the class labels Y) is missing:

$$\ell(\theta : \text{Data}) = \sum_{j} \log P(x_j; \theta) = \sum_{j} \log \sum_{z} P(x_j, z; \theta)$$

• Objective: Find argmax_θ ℓ(θ : Data)

• Assuming hidden variables are missing completely at random (otherwise, we should explicitly model why the values are missing)

Page 38:

Properties of EM

• One can prove that:
  – EM converges to a local maximum
  – Each iteration improves the log-likelihood

• How? (Same as k-means)
  – Likelihood objective instead of the k-means objective
  – M-step can never decrease the likelihood

Page 39:

EM pictorially: derivation of the EM algorithm

[Figure 2: the likelihood objective L(θ) and the lower bound l(θ|θ_n) at iteration n, which touch at θ = θ_n; the next estimate θ_{n+1} is where l(θ|θ_n) is maximized.]

Figure 2 caption: Graphical interpretation of a single iteration of the EM algorithm: The function l(θ|θ_n) is bounded above by the likelihood function L(θ). The functions are equal at θ = θ_n. The EM algorithm chooses θ_{n+1} as the value of θ for which l(θ|θ_n) is a maximum. Since L(θ) ≥ l(θ|θ_n), increasing l(θ|θ_n) ensures that the value of the likelihood function L(θ) is increased at each step.

We now have a function l(θ|θ_n) which is bounded above by the likelihood function L(θ). Additionally, observe that

$$\begin{aligned}
l(\theta_n \mid \theta_n) &= L(\theta_n) + \Delta(\theta_n \mid \theta_n) \\
&= L(\theta_n) + \sum_{z} P(z \mid X, \theta_n) \ln \frac{P(X \mid z, \theta_n)\, P(z \mid \theta_n)}{P(z \mid X, \theta_n)\, P(X \mid \theta_n)} \\
&= L(\theta_n) + \sum_{z} P(z \mid X, \theta_n) \ln \frac{P(X, z \mid \theta_n)}{P(X, z \mid \theta_n)} \\
&= L(\theta_n) + \sum_{z} P(z \mid X, \theta_n) \ln 1 \\
&= L(\theta_n), \qquad (16)
\end{aligned}$$

so for θ = θ_n the functions l(θ|θ_n) and L(θ) are equal.

Our objective is to choose a value of θ so that L(θ) is maximized. We have shown that the function l(θ|θ_n) is bounded above by the likelihood function L(θ) and that the values of the functions l(θ|θ_n) and L(θ) are equal at the current estimate θ = θ_n. Therefore, any θ which increases l(θ|θ_n) will also increase L(θ). In order to achieve the greatest possible increase in the value of L(θ), the EM algorithm calls for selecting θ such that l(θ|θ_n) is maximized. We denote this updated value as θ_{n+1}. This process is illustrated in Figure 2.

(Figure and derivation from the tutorial by Sean Borman)

David Sontag (NYU), Graphical Models, Lecture 11, April 12, 2012, slide 11/13

Page 40:

What you should know

• Mixture of Gaussians

• EM for mixture of Gaussians:
  – How to learn maximum likelihood parameters in the case of unlabeled data
  – Relation to K-means
    • Two step algorithm, just like K-means
    • Hard / soft clustering
    • Probabilistic model

• Remember, EM can get stuck in local minima
  – And empirically it DOES