Monash University short course, part I

First part of the slides for a short course at Monash University, EBS, July 2012

MCMC and likelihood-free methods
Part/day I: Markov chain methods

Christian P. Robert

Université Paris-Dauphine, IUF, & CREST

Monash University, EBS, July 18, 2012


Outline

1. Computational issues in Bayesian statistics
   (Motivations and leading example)
2. The Metropolis-Hastings Algorithm
3. The Gibbs Sampler
4. Population Monte Carlo


abc of Bayesian perspective

What is Bayesian statistics?

Statistical model defined by a likelihood function

f(x1, . . . , xn|θ) = L(θ|x1, . . . , xn)

[inversion of what varies]

Bayesian approach turns the likelihood into a conditional density:

π(θ|x1, . . . , xn) ∝ π(θ)L(θ|x1, . . . , xn)

using a reference measure (or a prior) π(θ)

[Thomas Bayes, 1701–1761]


New perspective

- Uncertainty on the parameters θ of a model modeled through a probability distribution π on Θ, called prior distribution

- Inference processed through distribution of θ conditional on x, π(θ|x), called posterior distribution

π(θ|x) = f(x|θ)π(θ) / ∫ f(x|θ)π(θ) dθ .



Justifications

- Semantic drift from unknown to random

- Actualization of the information on θ by extracting the information on θ contained in the observation x

- Allows incorporation of imperfect information in the decision process

- Unique mathematical way to condition upon the observations (conditional perspective)

- Penalization factor


Posterior distribution

π(θ|x) central to Bayesian inference

- Operates conditional upon the observations

- Incorporates the requirement of the Likelihood Principle

- Avoids averaging over the unobserved values of x

- Coherent updating of the information available on θ, independent of the order in which i.i.d. observations are collected

- Provides a complete inferential scope


Latent variables

Latent structures make life harder!

Even simple models may lead to computational complications, as in latent variable models

f(x|θ) = ∫ f*(x, x*|θ) dx*

- If (x, x*) observed, fine!

- If only x observed, trouble!


example: mixture models

Models of mixtures of distributions:

X ∼ fj with probability pj ,

for j = 1, 2, . . . , k, with overall density

X ∼ p1 f1(x) + · · · + pk fk(x) .

For a sample of independent random variables (X1, · · · , Xn), sample density

∏_{i=1}^{n} {p1 f1(xi) + · · · + pk fk(xi)} .

Expanding this product of sums into a sum of products involves k^n elementary terms: too prohibitive to compute in large samples.
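Note that evaluating the likelihood in its product form only costs O(kn) operations; it is the expansion over the k^n latent allocations that is prohibitive. A minimal R sketch for a two-component normal mixture (sample, weights and means below are illustrative values, not taken from the slides):

mixloglik = function(x, p, mu1, mu2)
  sum(log(p * dnorm(x, mu1) + (1 - p) * dnorm(x, mu2)))

x = c(rnorm(30, 0), rnorm(70, 2.5))   # simulated sample of size n = 100
mixloglik(x, 0.3, 0, 2.5)             # direct O(kn) evaluation of the log-likelihood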


Simple mixture (1)

[Figure: log-likelihood surface over (µ1, µ2), case of the 0.3 N (µ1, 1) + 0.7 N (µ2, 1) likelihood]


Simple mixture (2)

For mixture of two normal distributions,

0.3 N (µ1, 1) + 0.7 N (µ2, 1) ,

likelihood proportional to

∏_{i=1}^{n} [0.3 ϕ(xi − µ1) + 0.7 ϕ(xi − µ2)]

containing 2^n terms.


Complex maximisation

Standard maximization techniques often fail to find the global maximum because of multimodality or undesirable behavior (usually at the frontier of the domain) of the likelihood function.

Example

In the special case

f(x|µ, σ) = (1 − ε) exp{−x²/2} + (ε/σ) exp{−(x − µ)²/2σ²}

with ε > 0 known, whatever n, the likelihood is unbounded:

lim_{σ→0} L(µ = x1, σ|x1, . . . , xn) = ∞


Unbounded likelihood

[Figure: likelihood surfaces over (µ, σ) for sample sizes n = 3, 6, 12, 24, 48, 96; case of the 0.3 N (0, 1) + 0.7 N (µ, σ) likelihood]


Mixture once again

Observations from

x1, . . . , xn ∼ f(x|θ) = p ϕ(x; µ1, σ1) + (1 − p) ϕ(x; µ2, σ2)

Prior

µi|σi ∼ N (ξi, σi²/ni), σi² ∼ IG(νi/2, si²/2), p ∼ Be(α, β)

Posterior

π(θ|x1, . . . , xn) ∝ ∏_{j=1}^{n} {p ϕ(xj ; µ1, σ1) + (1 − p) ϕ(xj ; µ2, σ2)} π(θ)

= Σ_{ℓ=0}^{n} Σ_{(kt)} ω(kt) π(θ|(kt))

[O(2^n)]


Mixture once again (cont’d)

For a given permutation (kt), conditional posterior distribution

π(θ|(kt)) = N (ξ1(kt), σ1²/(n1 + ℓ)) × IG((ν1 + ℓ)/2, s1(kt)/2)
          × N (ξ2(kt), σ2²/(n2 + n − ℓ)) × IG((ν2 + n − ℓ)/2, s2(kt)/2)
          × Be(α + ℓ, β + n − ℓ)


Mixture once again (cont’d)

where

x̄1(kt) = (1/ℓ) Σ_{t=1}^{ℓ} x_{kt} ,  ŝ1(kt) = Σ_{t=1}^{ℓ} (x_{kt} − x̄1(kt))² ,
x̄2(kt) = (1/(n − ℓ)) Σ_{t=ℓ+1}^{n} x_{kt} ,  ŝ2(kt) = Σ_{t=ℓ+1}^{n} (x_{kt} − x̄2(kt))²

and

ξ1(kt) = (n1 ξ1 + ℓ x̄1(kt)) / (n1 + ℓ) ,  ξ2(kt) = (n2 ξ2 + (n − ℓ) x̄2(kt)) / (n2 + n − ℓ) ,

s1(kt) = s1² + ŝ1(kt) + (n1 ℓ / (n1 + ℓ)) (ξ1 − x̄1(kt))² ,

s2(kt) = s2² + ŝ2(kt) + (n2 (n − ℓ) / (n2 + n − ℓ)) (ξ2 − x̄2(kt))² ,

the posterior updates of the hyperparameters.


Mixture once again

Bayes estimator of θ:

δ^π(x1, . . . , xn) = Σ_{ℓ=0}^{n} Σ_{(kt)} ω(kt) E^π[θ|x, (kt)]

Too costly: 2^n terms


The AR(p) model

AR(p) model

Auto-regressive representation of a time series,

xt|xt−1, . . . ∼ N ( µ + Σ_{i=1}^{p} ϱi (xt−i − µ), σ² )

- Generalisation of AR(1)

- Among the most commonly used models in dynamic settings

- More challenging than the static models (stationarity constraints)

- Different models depending on the processing of the starting value x0


Unwieldy stationarity constraints

Practical difficulty: for complex models, stationarity constraints get quite involved, to the point of being unknown in some cases

Example (AR(1))

Case of linear Markovian dependence on the last value

xt = µ + ϱ(xt−1 − µ) + εt ,  εt i.i.d. ∼ N (0, σ²)

If |ϱ| < 1, (xt)_{t∈Z} can be written as

xt = µ + Σ_{j=0}^{∞} ϱ^j εt−j

and this is a stationary representation.


Stationary but...

If |ϱ| > 1, alternative stationary representation

xt = µ − Σ_{j=1}^{∞} ϱ^{−j} εt+j .

This stationary solution is criticized as artificial because xt is correlated with future white noises (εs)_{s>t}, unlike the case when |ϱ| < 1.
Non-causal representation...


Stationarity+causality

Stationarity constraints in the prior as a restriction on the values of θ.

Theorem

AR(p) model second-order stationary and causal iff the roots of the polynomial

P(x) = 1 − Σ_{i=1}^{p} ϱi x^i

are all outside the unit circle


Stationarity constraints

Under stationarity constraints, complex parameter space: each value of ϱ needs to be checked for roots of the corresponding polynomial with modulus less than 1

E.g., for an AR(2) process with autoregressive polynomial P(u) = 1 − ϱ1 u − ϱ2 u², the constraint is

ϱ1 + ϱ2 < 1, ϱ2 − ϱ1 < 1 and |ϱ2| < 1

[Figure: the AR(2) stationarity triangle in the (θ1, θ2) plane]
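The root condition is easy to check numerically; a minimal R sketch (function name and test values are illustrative):

# second-order stationarity of an AR(2): all roots of
# P(u) = 1 - rho1 u - rho2 u^2 outside the unit circle
ar2stat = function(rho1, rho2)
  all(Mod(polyroot(c(1, -rho1, -rho2))) > 1)

ar2stat(0.5, 0.3)   # TRUE: inside the triangle
ar2stat(0.8, 0.5)   # FALSE: rho1 + rho2 > 1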


The MA(q) model


Alternative type of time series

xt = µ + εt − Σ_{j=1}^{q} ϑj εt−j ,  εt ∼ N (0, σ²)

Stationary but, for identifiability considerations, the polynomial

Q(x) = 1 − Σ_{j=1}^{q} ϑj x^j

must have all its roots outside the unit circle


Identifiability

Example

For the MA(1) model, xt = µ + εt − ϑ1 εt−1,

var(xt) = (1 + ϑ1²) σ²

can also be written

xt = µ + ε̃t−1 − (1/ϑ1) ε̃t ,  ε̃t ∼ N (0, ϑ1² σ²) ,

Both pairs (ϑ1, σ) & (1/ϑ1, ϑ1 σ) lead to alternative representations of the same model.


Properties of MA models

- Non-Markovian model (but special case of hidden Markov)

- Autocovariance γx(s) is null for |s| > q


Representations

x1:T is a normal random variable with constant mean µ and covariance matrix

Σ = ( σ²  γ1  γ2  . . .  γq    0   . . .  0   0
      γ1  σ²  γ1  . . .  γq−1  γq  . . .  0   0
                     . . .
      0   0   0   . . .  0     0   . . .  γ1  σ² ) ,

with (|s| ≤ q)

γs = σ² Σ_{i=0}^{q−|s|} ϑi ϑi+|s|

Not manageable in practice [large T's]


Representations (contd.)

Conditional on past (ε0, . . . , ε−q+1),

L(µ, ϑ1, . . . , ϑq, σ|x1:T , ε0, . . . , ε−q+1) ∝

σ^{−T} ∏_{t=1}^{T} exp{ −( xt − µ + Σ_{j=1}^{q} ϑj ε̂t−j )² / 2σ² } ,

where (t > 0)

ε̂t = xt − µ + Σ_{j=1}^{q} ϑj ε̂t−j ,  ε̂0 = ε0, . . . , ε̂1−q = ε1−q

Recursive definition of the likelihood, still costly O(T × q)
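A minimal R sketch of this recursion (function and argument names are illustrative; eps0 holds the conditioning pre-sample innovations ε̂1−q, . . . , ε̂0, oldest first):

maloglik = function(x, mu, theta, sigma, eps0) {
  q = length(theta)
  eps = c(eps0, numeric(length(x)))   # stores eps-hat_{1-q}, ..., eps-hat_T
  for (t in seq_along(x))             # eps-hat_t = x_t - mu + sum_j theta_j eps-hat_{t-j}
    eps[q + t] = x[t] - mu + sum(theta * eps[(q + t - 1):t])
  sum(dnorm(eps[-(1:q)], 0, sigma, log = TRUE))   # O(T x q) cost overall
}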


Representations (contd.)

Encompassing approach for general time series models: state-space representation

xt = G yt + εt , (1)

yt+1 = F yt + ξt , (2)

(1) is the observation equation and (2) is the state equation

Note

This is a special case of hidden Markov model


MA(q) state-space representation

For the MA(q) model, take

yt = (εt−q, . . . , εt−1, εt)′

and then

yt+1 = ( 0 1 0 . . . 0
         0 0 1 . . . 0
             . . .
         0 0 0 . . . 1
         0 0 0 . . . 0 ) yt + εt+1 (0, 0, . . . , 0, 1)′

xt = µ − (ϑq ϑq−1 . . . ϑ1 −1) yt .


MA(q) state-space representation (cont’d)

Example

For the MA(1) model, observation equation

xt = (1 0) yt

with

yt = (y1t y2t)′

directed by the state equation

yt+1 = ( 0 1
         0 0 ) yt + εt+1 ( 1
                           ϑ1 ) .


Typology of problems

© A typology of Bayes computational problems

(i). latent variable models in general

(ii). use of a complex parameter space, as for instance in constrained parameter sets like those resulting from imposing stationarity constraints in dynamic models;

(iii). use of a complex sampling model with an intractable likelihood, as for instance in some graphical models;

(iv). use of a huge dataset;

(v). use of a complex prior distribution (which may be the posterior distribution associated with an earlier sample);

(vi). use of a particular inferential procedure, as for instance Bayes factors

B^π_{01}(x) = { P(θ ∈ Θ0 | x) / P(θ ∈ Θ1 | x) } / { π(θ ∈ Θ0) / π(θ ∈ Θ1) } .


The Metropolis-Hastings Algorithm


Monte Carlo basics

General purpose

A major computational issue in Bayesian statistics:

Given a density π known up to a normalizing constant, and an integrable function h, compute

Π(h) = ∫ h(x) π(x) µ(dx) / ∫ π(x) µ(dx)

when ∫ h(x) π(x) µ(dx) is intractable.


Monte Carlo 101

Generate an iid sample x1, . . . , xN from π and estimate Π(h) by

Π^MC_N(h) = N^{−1} Σ_{i=1}^{N} h(xi) .

LLN: Π^MC_N(h) →a.s. Π(h)

If Π(h²) = ∫ h²(x) π(x) µ(dx) < ∞,

CLT: √N ( Π^MC_N(h) − Π(h) ) ⇝ N ( 0, Π{[h − Π(h)]²} ) .

Caveat leading to MCMC

Often impossible or inefficient to simulate directly from Π
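For illustration, a minimal R sketch with π = N (0, 1) and h(x) = x², so that Π(h) = 1 exactly (all values illustrative):

N = 1e5
x = rnorm(N)          # iid sample from pi
mean(x^2)             # MC estimate of Pi(h), close to 1
sd(x^2) / sqrt(N)     # CLT-based standard error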


Importance Sampling


For Q proposal distribution such that Q(dx) = q(x) µ(dx), alternative representation

Π(h) = ∫ h(x) {π/q}(x) q(x) µ(dx) .

Principle of importance (!)

Generate an iid sample x1, . . . , xN ∼ Q and estimate Π(h) by

Π^IS_{Q,N}(h) = N^{−1} Σ_{i=1}^{N} h(xi) {π/q}(xi) .


Properties of importance

Then

LLN: Π^IS_{Q,N}(h) →a.s. Π(h) and if Q((hπ/q)²) < ∞,

CLT: √N ( Π^IS_{Q,N}(h) − Π(h) ) ⇝ N ( 0, Q{(hπ/q − Π(h))²} ) .

Caveat

If normalizing constant of π unknown, impossible to use Π^IS_{Q,N}

Generic problem in Bayesian Statistics: π(θ|x) ∝ f(x|θ) π(θ).


Self-Normalised Importance Sampling

Self normalized version

Π^SNIS_{Q,N}(h) = ( Σ_{i=1}^{N} {π/q}(xi) )^{−1} Σ_{i=1}^{N} h(xi) {π/q}(xi) .

LLN: Π^SNIS_{Q,N}(h) →a.s. Π(h)

and if Π((1 + h²)(π/q)) < ∞,

CLT: √N ( Π^SNIS_{Q,N}(h) − Π(h) ) ⇝ N ( 0, Π{(π/q)(h − Π(h))²} ) .

© The quality of the SNIS approximation depends on the choice of Q
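A minimal R sketch, taking the unnormalised target π(x) = exp(−x²/2) (a N (0, 1) missing its constant), the heavier-tailed proposal Q = t3 and h(x) = x² (all choices illustrative):

N = 1e5
x = rt(N, df = 3)                   # iid sample from Q
w = exp(-x^2 / 2) / dt(x, df = 3)   # pi/q, no normalising constant required
sum(w * x^2) / sum(w)               # SNIS estimate of Pi(h), close to 1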


Monte Carlo Methods based on Markov Chains

Running Monte Carlo via Markov Chains (MCMC)

It is not necessary to use a sample from the distribution f to approximate the integral

I = ∫ h(x) f(x) dx ,

[notation warning: π turned into f!]

We can obtain X1, . . . , Xn ∼ f (approx) without directly simulating from f, using an ergodic Markov chain with stationary distribution f

[portrait of Andrei Markov]

Running Monte Carlo via Markov Chains (2)

Idea

For an arbitrary starting value x(0), an ergodic chain (X(t)) is generated using a transition kernel with stationary distribution f


- irreducible Markov chain with stationary distribution f is ergodic with limiting distribution f under weak conditions

- hence convergence in distribution of (X(t)) to a random variable from f

- for T0 "large enough", X(T0) distributed from f

- Markov sequence is a dependent sample X(T0), X(T0+1), . . . generated from f

- Birkhoff's ergodic theorem extends LLN, sufficient for most approximation purposes


Problem: How can one build a Markov chain with a given stationary distribution?


The Metropolis–Hastings algorithm


Arguments: The algorithm uses the objective (target) density

f

and a conditional density

q(y|x)

called the instrumental (or proposal) distribution

[portrait of Nicholas Metropolis]


The MH algorithm

Algorithm (Metropolis–Hastings)

Given x(t),

1. Generate Yt ∼ q(y|x(t)).

2. Take

X(t+1) = { Yt    with prob. ρ(x(t), Yt),
           x(t)  with prob. 1 − ρ(x(t), Yt),

where

ρ(x, y) = min{ (f(y)/f(x)) (q(x|y)/q(y|x)), 1 } .
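A minimal R sketch of one such transition (the names f, q and rq are illustrative placeholders for the target density, the proposal density and a sampler from it):

mhstep = function(x, f, q, rq) {
  y = rq(x)                                        # Y_t ~ q(.|x^(t))
  rho = min(1, f(y) * q(x, y) / (f(x) * q(y, x)))  # q(a, b) stands for q(a|b)
  if (runif(1) < rho) y else x                     # accept with probability rho
}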


Features

- Independent of normalizing constants for both f and q(·|x) (i.e., those constants independent of x)

- Never move to values with f(y) = 0

- The chain (x(t))t may take the same value several times in a row, even though f is a density wrt Lebesgue measure

- The sequence (yt)t is usually not a Markov chain


Convergence properties

1. The M-H Markov chain is reversible, with invariant/stationary density f since it satisfies the detailed balance condition

f(y) K(y, x) = f(x) K(x, y)

2. As f is a probability measure, the chain is positive recurrent

3. If

Pr[ f(Yt) q(X(t)|Yt) / f(X(t)) q(Yt|X(t)) ≥ 1 ] < 1 , (1)

that is, the event {X(t+1) = X(t)} is possible, then the chain is aperiodic


Convergence properties (2)

4. If

q(y|x) > 0 for every (x, y), (2)

the chain is irreducible

5. For M-H, f-irreducibility implies Harris recurrence

6. Thus, for M-H satisfying (1) and (2)

(i) For h, with Ef|h(X)| < ∞,

lim_{T→∞} (1/T) Σ_{t=1}^{T} h(X(t)) = ∫ h(x) df(x) a.e. f.

(ii) and

lim_{n→∞} ‖ ∫ K^n(x, ·) µ(dx) − f ‖_TV = 0

for every initial distribution µ, where K^n(x, ·) denotes the kernel for n transitions.


Random-walk Metropolis-Hastings algorithms

Random walk Metropolis–Hastings

Use of a local perturbation as proposal

Yt = X(t) + εt ,

where εt ∼ g, independent of X(t).
The instrumental density is of the form g(y − x) and the Markov chain is a random walk if we take g to be symmetric: g(x) = g(−x)


Random walk Metropolis–Hastings [code]

Algorithm (Random walk Metropolis)

Given x(t)

1. Generate Yt ∼ g(y − x(t))

2. Take

X(t+1) = { Yt    with prob. min{ 1, f(Yt)/f(x(t)) },
           x(t)  otherwise.
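As the slide title promises code, here is a minimal R sketch for a generic (possibly unnormalised) target density f, with Gaussian increments of scale delta (names and values illustrative):

rwmh = function(T, delta, f = dnorm, x0 = 0) {
  x = numeric(T); x[1] = x0
  for (t in 2:T) {
    y = x[t - 1] + delta * rnorm(1)   # Y_t = X^(t) + eps_t
    x[t] = if (runif(1) < f(y) / f(x[t - 1])) y else x[t - 1]
  }
  x
}
chain = rwmh(1e4, delta = 1)          # e.g., 10,000 draws for a N(0,1) target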


The original example

Example (Random walk and normal target)

Generate N (0, 1) based on the uniform proposal [−δ, δ]. The probability of acceptance is then

ρ(x(t), yt) = exp{(x(t)² − yt²)/2} ∧ 1 .

[Hastings (1970)]


The original example

Example (Random walk & normal (2))

Sample statistics

δ          0.1      0.5     1.0
mean       0.399   −0.111   0.10
variance   0.698    1.11    1.06

© As δ ↑, we get better histograms and a faster exploration of the support of f.

[Figure: three samples based on U[−δ, δ] with (a) δ = 0.1, (b) δ = 0.5 and (c) δ = 1.0, superimposed with the convergence of the means (15,000 simulations)]


Mixtures by random walk MH

Example (Mixture models)

π(θ|x) ∝ ∏_{j=1}^{n} ( Σ_{ℓ=1}^{k} pℓ f(xj|µℓ, σℓ) ) π(θ)

Metropolis-Hastings proposal:

θ(t+1) = { θ(t) + ωε(t)  if u(t) < ρ(t)
           θ(t)          otherwise

where

ρ(t) = π(θ(t) + ωε(t)|x) / π(θ(t)|x) ∧ 1

and ω scaled for good acceptance rate


Mixtures by random walk MH

[Figure: random walk sampling (50,000 iterations), general case of a 3-component normal mixture; Celeux & al., 2000]


Mixtures by random walk MH

[Figure: random walk MCMC output for .7N (µ1, 1) + .3N (µ2, 1)]


Convergence properties

Uniform ergodicity prohibited by random walk structure

At best, geometric ergodicity:

Theorem (Sufficient ergodicity)

For a symmetric density f, log-concave in the tails, and a positive and symmetric density g, the chain (X(t)) is geometrically ergodic.

[Mengersen & Tweedie, 1996]


illustration of the tail effect

Example (Comparison of tails)

Random walk Metropolis-Hastings algorithms based on a N (0, 1) instrumental for the generation of (left) a N (0, 1) distribution and (right) a distribution with density ψ(x) ∝ (1 + |x|)^{−3}

[Figure: 90% confidence envelopes of the means, derived from 500 parallel independent chains]


Further convergence properties

Under assumptions

- (A1) f is super-exponential, i.e. it is positive with positive continuous first derivative such that lim_{|x|→∞} n(x)′∇ log f(x) = −∞ where n(x) := x/|x|.
In words: exponential decay of f in every direction with rate tending to ∞

- (A2) lim sup_{|x|→∞} n(x)′m(x) < 0, where m(x) = ∇f(x)/|∇f(x)|.
In words: non-degeneracy of the contour manifold Cf(y) = {y : f(y) = f(x)}

Q is geometrically ergodic, and V(x) ∝ f(x)^{−1/2} verifies the drift condition

[Jarner & Hansen, 2000]


Further [further] convergence properties

If P ψ-irreducible and aperiodic, for r = (r(n))_{n∈N} a real-valued non-decreasing sequence such that, for all n, m ∈ N,

r(n + m) ≤ r(n) r(m),

and r(0) = 1, for C a small set, τC = inf{n ≥ 1, Xn ∈ C}, and h ≥ 1, assume

sup_{x∈C} Ex[ Σ_{k=0}^{τC−1} r(k) h(Xk) ] < ∞,


then,

S(f, C, r) := { x ∈ X : Ex{ Σ_{k=0}^{τC−1} r(k) h(Xk) } < ∞ }

is full and absorbing and for x ∈ S(f, C, r),

lim_{n→∞} r(n) ‖P^n(x, ·) − f‖_h = 0.

[Tuominen & Tweedie, 1994]


Comments

- [CLT, Rosenthal's inequality...] h-ergodicity implies CLT for additive (possibly unbounded) functionals of the chain, Rosenthal's inequality and so on...

- [Control of the moments of the return-time] The condition implies (because h ≥ 1) that

sup_{x∈C} Ex[r0(τC)] ≤ sup_{x∈C} Ex{ Σ_{k=0}^{τC−1} r(k) h(Xk) } < ∞,

where r0(n) = Σ_{l=0}^{n} r(l). Can be used to derive bounds for the coupling time, an essential step to determine computable bounds, using coupling inequalities

[Roberts & Tweedie, 98; Fort & Moulines, 00; Jones et al., 02]


Alternative conditions

The condition is not really easy to work with... [Possible alternative conditions]

(a) [Tuominen, Tweedie, 1994] There exists a sequence (Vn)_{n∈N}, Vn ≥ r(n)h, such that

(i) sup_C V0 < ∞,
(ii) {V0 = ∞} ⊂ {V1 = ∞} and
(iii) P Vn+1 ≤ Vn − r(n)h + b r(n) I_C.


(b) [Fort, 2000] ∃V ≥ f ≥ 1 and b < ∞, such that sup_C V < ∞ and

P V(x) + Ex{ Σ_{k=0}^{σC} ∆r(k) f(Xk) } ≤ V(x) + b I_C(x)

where σC is the hitting time on C and

∆r(k) = r(k) − r(k − 1), k ≥ 1 and ∆r(0) = r(0).

Result: (a) ⇔ (b) ⇔ sup_{x∈C} Ex{ Σ_{k=0}^{τC−1} r(k) f(Xk) } < ∞.


Extensions

Langevin Algorithms

Proposal based on the Langevin diffusion Lt, defined by the stochastic differential equation

dLt = dBt + (1/2) ∇ log f(Lt) dt,

where Bt is the standard Brownian motion

Theorem

The Langevin diffusion is the only non-explosive diffusion which is reversible with respect to f.


Discretization

Instead, consider the sequence

x(t+1) = x(t) + (σ²/2) ∇ log f(x(t)) + σ εt ,  εt ∼ Np(0, Ip)

where σ² corresponds to the discretization step

Unfortunately, the discretized chain may be transient, for instance when

lim_{x→±∞} | σ² ∇ log f(x) |x|^{−1} | > 1


MH correction

Accept the new value Yt with probability

( f(Yt) / f(x(t)) ) · ( exp{ −‖Yt − x(t) − (σ²/2) ∇ log f(x(t))‖² / 2σ² } / exp{ −‖x(t) − Yt − (σ²/2) ∇ log f(Yt)‖² / 2σ² } ) ∧ 1 .

Choice of the scaling factor σ

Should lead to an acceptance rate of 0.574 to achieve optimal convergence rates (when the components of x are uncorrelated)

[Roberts & Rosenthal, 1998; Girolami & Calderhead, 2011]
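A minimal R sketch of one such Metropolis-adjusted Langevin (MALA) transition, assuming user-supplied functions logf and gradlogf for log f and ∇ log f (illustrative names):

malastep = function(x, logf, gradlogf, sigma) {
  mux = x + sigma^2 / 2 * gradlogf(x)    # drifted mean of the proposal
  y = mux + sigma * rnorm(length(x))
  muy = y + sigma^2 / 2 * gradlogf(y)
  lrho = logf(y) - logf(x) -             # log of the acceptance ratio above
    sum((x - muy)^2) / (2 * sigma^2) + sum((y - mux)^2) / (2 * sigma^2)
  if (log(runif(1)) < lrho) y else x
}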


Optimizing the Acceptance Rate

Problem of choice of the transition kernel from a practical point of view

Most common alternatives:

(a) a fully automated algorithm like ARMS; [Gilks & Wild, 1992]

(b) an instrumental density g which approximates f, such that f/g is bounded for uniform ergodicity to apply;

(c) a random walk

In both cases (b) and (c), the choice of g is critical.


Case of the random walk

Different approach to acceptance rates

A high acceptance rate does not indicate that the algorithm is moving correctly since it indicates that the random walk is moving too slowly on the surface of f.

If x(t) and yt are close, i.e. f(x(t)) ≃ f(yt), y is accepted with probability

min( f(yt)/f(x(t)), 1 ) ≃ 1 .

For multimodal densities with well separated modes, the negative effect of limited moves on the surface of f clearly shows.


Case of the random walk (2)

If the average acceptance rate is low, the successive values of f(yt) tend to be small compared with f(x(t)), which means that the random walk moves quickly on the surface of f since it often reaches the "borders" of the support of f


Rule of thumb

In small dimensions, aim at an average acceptance rate of 50%. In large dimensions, at an average acceptance rate of 25%.

[Gelman, Gilks and Roberts, 1995]

warning: rule to be taken with a pinch of salt!


Role of scale

Example (Noisy AR(1))

Hidden Markov chain from a regular AR(1) model,

xt+1 = ϕ xt + εt+1 ,  εt ∼ N (0, τ²)

and observables

yt|xt ∼ N (xt², σ²)

The distribution of xt given xt−1, xt+1 and yt is

exp −(1/2τ²) { (xt − ϕ xt−1)² + (xt+1 − ϕ xt)² + (τ²/σ²)(yt − xt²)² } .
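For reference, a minimal R sketch of the log of this full conditional (function and argument names are illustrative); its bimodality in xt is what drives the scale discussion below:

logcond = function(xt, xtm1, xtp1, yt, phi, tau, sigma)
  -((xt - phi * xtm1)^2 + (xtp1 - phi * xt)^2 +
      tau^2 / sigma^2 * (yt - xt^2)^2) / (2 * tau^2)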


Role of scale

Example (Noisy AR(1) continued)

For a Gaussian random walk with scale ω small enough, the random walk never jumps to the other mode. But if the scale ω is sufficiently large, the Markov chain explores both modes and gives a satisfactory approximation of the target distribution.


[Figure: Markov chain based on a random walk with scale ω = .1]


[Figure: Markov chain based on a random walk with scale ω = .5]


MA(2)

Since the constraints on (ϑ1, ϑ2) are well-defined, use a flat prior over the triangle.

Simple representation of the likelihood:

library(mnormt)
# log-likelihood of the standardised MA(2) model (mu = 0, sigma = 1),
# through the banded Toeplitz covariance matrix of the series y = x_{1:n}
ma2like = function(theta){
  n = length(y)
  sigma = toeplitz(c(1 + theta[1]^2 + theta[2]^2,   # gamma_0
                     theta[1] + theta[1]*theta[2],  # gamma_1
                     theta[2],                      # gamma_2
                     rep(0, n - 3)))                # gamma_s = 0 for |s| > 2
  dmnorm(y, rep(0, n), sigma, log = TRUE)           # y taken from the workspace
}


Basic RWHM for MA(2)

Algorithm 1 RW-HM-MA(2) sampler

set ω and ϑ(1)
for i = 2 to T do
  generate ϑ̃j ∼ U(ϑj(i−1) − ω, ϑj(i−1) + ω), j = 1, 2
  set p = 0 and ϑ(i) = ϑ(i−1)
  if ϑ̃ within the triangle then
    p = exp(ma2like(ϑ̃) − ma2like(ϑ(i−1)))
  end if
  if U < p then
    ϑ(i) = ϑ̃
  end if
end for
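A direct R transcription of Algorithm 1 (a sketch reusing ma2like above; the triangle test encodes ϑ1 + ϑ2 < 1, ϑ2 − ϑ1 < 1 and |ϑ2| < 1):

rwhmma2 = function(T, omega, theta0 = c(0, 0)) {
  theta = matrix(theta0, T, 2, byrow = TRUE)
  for (i in 2:T) {
    prop = theta[i - 1, ] + runif(2, -omega, omega)  # uniform random-walk move
    inside = sum(prop) < 1 && prop[2] - prop[1] < 1 && abs(prop[2]) < 1
    p = if (inside) exp(ma2like(prop) - ma2like(theta[i - 1, ])) else 0
    theta[i, ] = if (runif(1) < p) prop else theta[i - 1, ]
  }
  theta
}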


Outcome

Result with a simulated sample of 100 points and ϑ1 = 0.6, ϑ2 = 0.2 and scale ω = 0.2

[Figure: scatterplot of the simulated (θ1, θ2) chain over the stationarity triangle]

●●

●●●

●●

●●●●●●●

●●●●

●●●

●●●

●●●

●●

●●●●●

●●●●●●●

●●●●●●

●●●●●●●

●●●●●●●●

●●

●●●

●●

●●●●

●●●●

●●

●●●●

●●

●●●●●●

●●

●●

●●●●●

●●

● ●●●●

●●

●●●

●●●●

●●●

●●

●●●

●●●●●●●●●●

●●●

●●●

●●●●

●●●●●●

●●●●●●●

●●●

●●●●●●●

●●●●●

●●●

●●

●●●●●●●●●●

●●●●●

●●

●●●●

●●●●●

●●

●●●●

●●●●●●

●●

●●●●

●●●●●●●●

●●

●●●●●

●●

●● ●

●●

●●●●●

●●●●●●●●●●●

●●●

●●●●●

●●

●●●●

●●●

●●●●●●●●●●●

●●●●

●●●●

●●

●●

●●

●●

●●●●●●

●●

●●●●●

●●●●

●●●●

●●●●●●●●● ●●

●●●

●●

●●

●●●●●●●●●

●●●

●●●

●●●●

●●●

●●●●

●●●●●

●●●●

●●●●

●●●●●●●●●

●●●

●●

●●

●●●●●

●●●●●

●●●

●●●●● ●

●●

●●

●●

●●

●●●●

●●

●●●●

●●

●●

●●●●●●●●●●●

●●●●●●●

●●●●●

●●

●●

●●●

●●●●

●●●●

●●●

●●●

●●

●●●●●●

●●●

●●

●●●●●●●●●●

●●●

●●

●●

●●

●●

●●●●

●●

●●●

●●●●

●●

●●●●●

●●●●●●

●●●●●●

●●● ●

●●●●

●●

●●

●●●

●●

●●●●●●

●●●●●●

●●●●●●●

●●

●●

●●●●●●●●●●●●●●●●●●

●●

●●●

●●

●●●

●●

●●●

●●

●●●● ●●●●●

●●●●●●●●

●●●

●●●●●●

●●●

●●●●●

●●●●●

● ●●●●●

●●●

●●●●●●

●●

●●

●●

● ●●●●●

●●

●●

●●●●●●●

●●●●●●

●●●

●●

●●

●●●

● ●●●●●●●

●●●●●●●●●●

●●●●●●●●●

●●

●●●●●

●●●●●●

●●

●●●

●●●●●●

●●●●●● ●●●●●●

●●

●●●●

●●●●●●●●

●●●●●●●

●●

●●

●●●●

●●●●● ●●●

●●●●●●●

●●●●●

●●●●●●●

●●

●●

●●

●●●●

●●●

● ●●●●●

●●

●●●●●●

●●●

●●●●

●●●

●●

●●

●●

●●●●●

●●

●●

●●●●

●●

●●●●● ●●●●●●●●

●●

●●

●●●●

●●

●●●

●●●●

●●●●●●

●●●●●

●●●●

●●●●●●●

●●●

●●

●●●

●●●

●●●●●●●●●●●●●●●●●●

●●●

●●●●●●●

●●●●

●●●●●●●●●●●●●●●●

●●●

●●●●●

●●●●

●●

●●

●●●●

●●

●●●●●●●

●●●

●●●●●●●●

●●●●●

●●

●●●

●●

●●●●●●

●●●●●

●●●

●●●

●●●

●●

●●

●●

●●●●●●●

●●●●●●●●

●●●●

●●●

●●●●

●●

●●●●●●●●

●●●●●●●●

●●

●●●

●●

●●●●●

●●●

●●●

●●●●

●●●●

●●●●●●

●●●●●

●●●

●●●

●●●●●●●

●●●

●●●●

●●●

●●

●●●●●●●

●●●●●●●●●●

●●

●●●

●●●●●●●

●●●●

●●

●●

●●

●●●●●●●

●●●●

●●

●●●

●●●●●●●

●●●●●

●●●●●●

●●

●●●●●●●

●●

●●●

●●●●●●●

●●●●●

●●●●

●●

●●●●

●●

●●

●●●

●●

●●●●●●

●●●

●●●

●●●●●

●●

●●●●●●●●●●

●●●

●●●●●●●●●●●●●

●●

●●●●●

●●

●●

●●●

●●●●●●●●

●●

●●●●●●●●

●●●●●●●●●●●●●●●●

●●●

●●●●●

●●●●●●

●●

●●●●

●●

●●●●●

●●●●

●●●●●

●●●●

●●

●●●

●●

●●

●●

●●

●●

●●●●●●

●●

●●

●●●●

●●●

●●●

●●●●●

●●

●●●●

●●

●●●●●

●●●

●●●

●●●●●

●●●●●●●●●●●●●

●●●

●●●

●●

●●●●●

●●●●

●●

●●

●●●

●●

●●●●●●●●●●

●●●●

●●

●●●●●●●

● ●●

●●

●●

●●●●

●●

●●●●●

●●●

●●●●●●●●●

●●●●●●●●●●

●●

●●●●●●● ●●●●

●●●

●●●●●●

●●●●

●●

●●●●●●

●●●●●

●●

●●●●

●●●

● ●●●●●●●●●●●

●●

●●

●●●●●

●●

●●●●

●●

●●●●●●●

●●

●●●

●●●●

●●

●●●●●

●●●

●●●

●●●●●●●●●●●

●●●●●●

●●●●●●●●

●●

●●

●●

●●●●●●

●●●

●●●

●●

●●●●

●●●●●●●●●●●

●●●●●●●●

●●●

●●●

●●●●●●●●

●●●

●●

●●

●●●●●

●●●●

●●●●●

●●●●

●●●●●●●●●●●

●●

●●●●

●●

●●●

●●●●●●

●●

●●●

●●●●●

●●

●●●●●●●●●

●●

●●●

●●●●●

●●●●●●●●●●

●●

●●●●●

●●●●

●●●●

●●●

●●

●●

●●●●●●●●●●●

●●●

●●●●●●●

●●●●●●

●●

●●

●●●●●●●●●

●●●●●

●●●●●●

●●●●●●●●●●●●

●●●●●

●●

●●

●●●●●●●

●●●●●●●●●

●●

●●

●●

●●

●●●●●●●●●

●●

●●●●

●●●●

●●●●

●●

●●●●●●●●●●●●

●●

●●●●●●

●●●●●●●●

●●●●

●●●●●●

●●

●●●●

●●

●●●●●●●●

●●●

●●●●

●●

●●

●●

●●●

●●●

●●●●●●●

●●●●●

●●

●●●●●●

●●●

●●●●●

●●

●●

●●●●●

●●

●●

●●●

●●●

●●

●●●

●●●●●●●●●

●●●●●

●●●●●●●●●●●●●

●●●

●●

●●●●●●●●

●●●

●●●

●●●●●

●●●

●●

●●●●●●

●●●●●●●●

●●●●

●●

●●

●●

●●●●●●●●

●●●

●●●●●●●●●

●●

●●●●●

●●●●

●●●●●

●●●●●●●

●●●

●●●●

●●

●●

●●●●●●●●●●●●

●●●●●

●●●●●●●

●●●●

●●●●

●●

●●●●

●●●●●●

●●

●●●●

●●

●●

●●●●●

●●

●●●●

●●●●●●●

●●●●●

●●●●●●●

●●●

●●●

●●

●●●

●●●

●●●●●

●●●

● ●●

●●●●●●●

●●●●●

●●●●

●●●●

●●●

●●●●

●●● ●

●●●●●●●

●●●●

●●●●●

●●

●●

●●●

●●●●

●●

●●

●●

●●

●●●●●

●●

●●

●●

●●●

●●●●

●●●

●●●●●●●●●●●●●

●●●●●●●●●●●●●

●●●

●●●●●●●●●●●●●

●●●

●●●●

●●●

●●

●●

●●

●●

●●●

●●●●

●●●●●●

●●●●●●

●●

●●

●●●●●●

●●●●●●●

●●

●●●

●●●

●●●●●●●●●●●●●●

●●●●●

●●●●●●●●

●●●●

●●

●●●●●●●●

●●

●●●●●●

●●

●●

●●

●●●●

●●

●●●●

●●●●

●●●●●

●●●

●●●●●●●

●●●●

●●●●●●●● ●●

●●●

●●

●●●●●

●●●●●●●

●●

●●●●●●

●●●

●●

●●●●●●

●●●●●●

●●●●●●●●●

●●

●●●●●

●●●●●●●

●●●●●

●●

●●●

●●●

●●

●●

●●

●●●

●●●●

●●●

●●●

●●●

●●●●●

●●

●●

●●

●●

●●●

●●●●●●●●

●●●

●●●●●●●●●●

●●

●●●●●

●●●●●

●●

●●

●●●

●●●

●●

●●●●●●●●

●●●

●●●

●●●●●●●●●●●●●

●●

●●●●●

●●

●●●●●●●●●●

●●●

●●●●

●●●●●●

●●

●●●●●●●●●

●●●

●●

●●

●●●●●●● ●●●●●

●●●● ●●

●●

● ●●●●

●●●

●●●●

●●

●●●●

●●●

●●●●

●●●●●●●●●●●●●●

●●●

●●●

●●

●●

●●●●●

●●●●●●●●●●

●●●●●●●●●●●●●

●●●

●●

●●●●●●

●●●●●●

●●●●●●

●●

● ●● ●

●●

●●●●●

●●●●●

●●

●●

●●●●●●●●●●●●●

●●

●●

●●●●●●● ●●●

●●●●

●●●

●●●●●

●●●●●●●●

●●

●●

●●●

●●●

●●●●●●●

●●●●●●●●

●●●●

●●●●

●●●●●●●●●●

●●●●●●

●●●●●●

●●

●●

● ●●

●●●●●●●●●●●●●●

●●

●●●

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Metropolis-Hastings Algorithm

Extensions

Outcome

Result with a simulated sample of 100 points and ϑ1 = 0.6, ϑ2 = 0.2 and scale ω = 0.5

[scatter plot of the simulated chain in the (θ1, θ2) plane, with θ1 ∈ (−2, 2) and θ2 ∈ (−1, 1)]

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Metropolis-Hastings Algorithm

Extensions

Outcome

Result with a simulated sample of 100 points and ϑ1 = 0.6, ϑ2 = 0.2 and scale ω = 2.0

[scatter plot of the simulated chain in the (θ1, θ2) plane, with θ1 ∈ (−2, 2) and θ2 ∈ (−1, 1)]

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

The Gibbs Sampler


Computational issues in Bayesian statistics

The Metropolis-Hastings Algorithm

The Gibbs Sampler

Population Monte Carlo

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

General Principles

A very specific simulation algorithm based on the target distribution f:

1. Uses the conditional densities f1, . . . , fp from f

2. Start with the random variable X = (X1, . . . , Xp)

3. Simulate from the conditional densities,

Xi | x1, x2, . . . , xi−1, xi+1, . . . , xp ∼ fi(xi | x1, x2, . . . , xi−1, xi+1, . . . , xp)

for i = 1, 2, . . . , p.


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

Gibbs code

Algorithm (Gibbs sampler)

Given x^(t) = (x_1^(t), . . . , x_p^(t)), generate

1. X_1^(t+1) ∼ f1(x1 | x_2^(t), . . . , x_p^(t));

2. X_2^(t+1) ∼ f2(x2 | x_1^(t+1), x_3^(t), . . . , x_p^(t));

. . .

p. X_p^(t+1) ∼ fp(xp | x_1^(t+1), . . . , x_{p−1}^(t+1)).

Then X^(t+1) → X ∼ f

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

Properties

The full conditional densities f1, . . . , fp are the only densities used for simulation. Thus, even in a high-dimensional problem, all of the simulations may be univariate

The Gibbs sampler is not reversible with respect to f. However, each of its p components is. Besides, it can be turned into a reversible sampler, either using the Random Scan Gibbs sampler or running instead the (double) sequence

f1 · · · fp−1 fp fp−1 · · · f1


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

2D Gibbs sampler

Example (Bivariate Gibbs sampler)

(X,Y ) ∼ f(x, y)

Generate a sequence of observations by: Set X0 = x0

For t = 1, 2, . . . , generate

Yt ∼ fY |X(·|xt−1)

Xt ∼ fX|Y (·|yt)

where fY |X and fX|Y are the conditional distributions
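As a concrete instance, here is a minimal R sketch of this two-stage scheme for a toy bivariate normal target with correlation ρ (the target and ρ = 0.8 are illustrative assumptions, not taken from the slides); both full conditionals are then N(ρ·, 1 − ρ²):

rho = 0.8; T = 10^4
x = numeric(T); y = numeric(T)                 # chains, started at X0 = 0
for (t in 2:T) {
  y[t] = rnorm(1, rho*x[t-1], sqrt(1-rho^2))   # Yt ~ fY|X(.|x(t-1))
  x[t] = rnorm(1, rho*y[t],   sqrt(1-rho^2))   # Xt ~ fX|Y(.|yt)
}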

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

toy example: iid N (µ, σ2) variates

When Y1, . . . , Yn are iid N(y | µ, σ²) with both µ and σ unknown, the posterior in (µ, σ²) is conjugate outside a standard family

But...

µ | Y1:n, σ² ∼ N(µ | (1/n) Σ_{i=1}^n Yi, σ²/n)

σ² | Y1:n, µ ∼ IG(σ² | n/2 − 1, (1/2) Σ_{i=1}^n (Yi − µ)²)

assuming constant (improper) priors on both µ and σ²

I Hence we may use the Gibbs sampler for simulating from the posterior of (µ, σ²)


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

toy example: code

R Gibbs Sampler for Gaussian posterior

n = length(Y);            # Y: vector of observations
S = sum(Y);
mu = S/n;                 # initialise mu at the sample mean
for (i in 1:500) {
  S2 = sum((Y-mu)^2);
  sigma2 = 1/rgamma(1,n/2-1,S2/2);      # sigma2 | mu, Y ~ IG(n/2-1, S2/2)
  mu = S/n + sqrt(sigma2/n)*rnorm(1);   # mu | sigma2, Y ~ N(mean(Y), sigma2/n)
}
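Note that rgamma(1, n/2-1, S2/2) follows R's (shape, rate) parameterisation, so its reciprocal is a draw from the IG(n/2 − 1, S2/2) full conditional above, while the last line draws µ from its N(Ȳ, σ²/n) full conditional.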

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

Example of results with n = 10 observations from the N (0, 1) distribution

Number of Iterations 1, 2, 3, 4, 5, 10, 25, 50, 100, 500

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

General Principles

Limitations of the Gibbs sampler

Formally, a special case of a sequence of 1-D M-H kernels, all with acceptance rate uniformly equal to 1. The Gibbs sampler

1. limits the choice of instrumental distributions

2. requires some knowledge of f

3. is, by construction, multidimensional

4. does not apply to problems where the number of parameters varies, as the resulting chain is not irreducible.


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Latent variables are back

The Gibbs sampler can be generalized to a much wider setting. A density g is a completion of f if

∫_Z g(x, z) dz = f(x)

Note

The variable z may be meaningless for the problem


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Purpose

g should have full conditionals that are easy to simulate for a Gibbs sampler to be implemented with g rather than f

For p > 1, write y = (x, z) and denote the conditional densities of g(y) = g(y1, . . . , yp) by

Y1|y2, . . . , yp ∼ g1(y1|y2, . . . , yp),

Y2|y1, y3, . . . , yp ∼ g2(y2|y1, y3, . . . , yp),

. . . ,

Yp|y1, . . . , yp−1 ∼ gp(yp|y1, . . . , yp−1).

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Generic Gibbs sampler

The move from Y (t) to Y (t+1) is defined as follows:

Algorithm (Completion Gibbs sampler)

Given (y_1^(t), . . . , y_p^(t)), simulate

1. Y_1^(t+1) ∼ g1(y1 | y_2^(t), . . . , y_p^(t)),

2. Y_2^(t+1) ∼ g2(y2 | y_1^(t+1), y_3^(t), . . . , y_p^(t)),

. . .

p. Y_p^(t+1) ∼ gp(yp | y_1^(t+1), . . . , y_{p−1}^(t+1)).

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Mixture illustration

Example (Mixtures all over again)

Hierarchical missing data structure: If

X1, . . . , Xn ∼ Σ_{i=1}^k p_i f(x | θi),

then

X | Z ∼ f(x | θZ), Z ∼ p1 I(z = 1) + · · · + pk I(z = k),

Z is the component indicator associated with observation x

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Mixture illustration

Example (Mixtures (2))

Conditionally on (Z1, . . . , Zn) = (z1, . . . , zn) :

π(p1, . . . , pk, θ1, . . . , θk | x1, . . . , xn, z1, . . . , zn)

∝ p_1^{α1+n1−1} · · · p_k^{αk+nk−1} × π(θ1 | y1 + n1 x̄1, λ1 + n1) · · · π(θk | yk + nk x̄k, λk + nk),

with n_i = Σ_j I(zj = i) and x̄_i = Σ_{j: zj=i} xj / n_i.

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Mixture illustration

Algorithm (Mixture Gibbs sampler)

1. Simulate

θi ∼ π(θi | yi + ni x̄i, λi + ni) (i = 1, . . . , k)

(p1, . . . , pk) ∼ D(α1 + n1, . . . , αk + nk)

2. Simulate (j = 1, . . . , n)

Zj | xj, p1, . . . , pk, θ1, . . . , θk ∼ Σ_{i=1}^k p_{ij} I(zj = i)

with p_{ij} ∝ p_i f(xj | θi) (i = 1, . . . , k), and update n_i and x̄_i (i = 1, . . . , k).
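As an illustration, here is a minimal R sketch of this two-step sampler for the special case of k Gaussian components with known unit variance and conjugate normal priors on the means; the prior hyperparameters, their parameterisation (prior precision lambda and prior mean m0), and the starting values are illustrative assumptions:

mix_gibbs = function(x, k, T = 1000, alpha = rep(1, k),
                     m0 = rep(0, k), lambda = rep(1, k)) {
  theta = as.vector(quantile(x, (1:k)/(k+1)))  # crude starting values
  p = rep(1/k, k)
  for (t in 1:T) {
    # step 2: allocations, P(Zj = i) proportional to p_i f(xj | theta_i)
    w = sapply(1:k, function(i) p[i] * dnorm(x, theta[i], 1))
    z = apply(w, 1, function(wj) sample(1:k, 1, prob = wj))
    ni = tabulate(z, k)
    si = sapply(1:k, function(i) sum(x[z == i]))
    # step 1: weights (Dirichlet via normalised Gamma draws), then means
    p = rgamma(k, alpha + ni); p = p/sum(p)
    theta = rnorm(k, (lambda*m0 + si)/(lambda + ni), 1/sqrt(lambda + ni))
  }
  list(theta = theta, p = p, z = z)
}

For instance, mix_gibbs(c(rnorm(50), rnorm(50, 3)), k = 2) recovers two well-separated means, up to the trapping behaviour illustrated on the next slide.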

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

A wee problem

[two scatter plots in the (µ1, µ2) plane: "Gibbs started at random" and "Gibbs stuck at the wrong mode"]


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Slice sampler as generic Gibbs

If f(θ) can be written as a product

∏_{i=1}^k fi(θ),

it can be completed as

∏_{i=1}^k I_{0 ≤ ωi ≤ fi(θ)},

leading to the following Gibbs algorithm:


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Slice sampler (code)

Algorithm (Slice sampler)

Simulate

1. ω_1^(t+1) ∼ U[0, f1(θ^(t))];

. . .

k. ω_k^(t+1) ∼ U[0, fk(θ^(t))];

k+1. θ^(t+1) ∼ U_{A^(t+1)}, with

A^(t+1) = {y : fi(y) ≥ ω_i^(t+1), i = 1, . . . , k}.
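For k = 1 the algorithm is easy to code whenever the slice is available in closed form. A minimal R sketch for the unnormalised target f(x) = exp(−x²/2), an illustrative choice for which A = {x : f(x) ≥ ω} = [−√(−2 log ω), √(−2 log ω)]:

T = 10^4
x = numeric(T)                               # chain started at x = 0
for (t in 2:T) {
  omega = runif(1, 0, exp(-x[t-1]^2/2))      # 1. omega ~ U[0, f(x^(t-1))]
  bound = sqrt(-2*log(omega))                # slice A^(t) = [-bound, bound]
  x[t]  = runif(1, -bound, bound)            # 2. x^(t) ~ U_A
}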

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Example of results with a truncated N (−3, 1) distribution

[histogram of the sampler output over x ∈ (0, 1), vertical scale 0.000–0.010]

Number of Iterations 2, 3, 4, 5, 10, 50, 100

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Good slices

The slice sampler usually enjoys good theoretical properties (like geometric ergodicity, and even uniform ergodicity under bounded f and bounded X). As k increases, the determination of the set A^(t+1) may get increasingly complex.

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Stochastic volatility

Example (Stochastic volatility core distribution)

Difficult part of the stochastic volatility model

π(x) ∝ exp −{σ²(x − µ)² + β² exp(−x) y² + x}/2,

simplified into exp −{x² + α exp(−x)}/2

Slice sampling means simulation from a uniform distribution on

A = {x : exp −{x² + α exp(−x)}/2 ≥ u} = {x : x² + α exp(−x) ≤ ω}

if we set ω = −2 log u.

Note: Inversion of x² + α exp(−x) = ω needs to be done by trial-and-error.
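One hypothetical way of doing this inversion in R is to expand a bracket outward from the current point until x² + α exp(−x) − ω changes sign, then call uniroot on each side; α and the current value x_cur are illustrative, and convexity of the function guarantees the slice is the interval between the two roots:

alpha = 1; x_cur = 0.5
u = runif(1, 0, exp(-(x_cur^2 + alpha*exp(-x_cur))/2))
omega = -2*log(u)                        # slice level, so g(x_cur) < 0 below
g = function(x) x^2 + alpha*exp(-x) - omega
lo = x_cur - 1; while (g(lo) < 0) lo = lo - 1    # trial-and-error bracketing
hi = x_cur + 1; while (g(hi) < 0) hi = hi + 1
A = c(uniroot(g, c(lo, x_cur))$root, uniroot(g, c(x_cur, hi))$root)
x_new = runif(1, A[1], A[2])             # uniform draw over the slice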


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Completion

Stochastic volatility

[two panels: autocorrelation of the chain against the lag, and a histogram of the chain with the target density in overlay]

Histogram of a Markov chain produced by a slice sampler and target distribution in overlay.

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

Properties of the Gibbs sampler

Theorem (Convergence)

For (Y1, Y2, . . . , Yp) ∼ g(y1, . . . , yp),

if either [Positivity condition]

(i) g^(i)(yi) > 0 for every i = 1, . . . , p implies that g(y1, . . . , yp) > 0, where g^(i) denotes the marginal distribution of Yi, or

(ii) the transition kernel is absolutely continuous with respect to g,

then the chain is irreducible and positive Harris recurrent.

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

Properties of the Gibbs sampler (2)

Consequences

(i) If ∫ h(y) g(y) dy < ∞, then

lim_{T→∞} (1/T) Σ_{t=1}^T h(Y^(t)) = ∫ h(y) g(y) dy a.e. g.

(ii) If, in addition, (Y^(t)) is aperiodic, then

lim_{n→∞} ‖ ∫ K^n(y, ·) µ(dy) − f ‖_TV = 0

for every initial distribution µ.

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

Slice sampler


For convergence, the properties of Xt and of f(Xt) are identical

Theorem (Uniform ergodicity)

If f is bounded and supp f is bounded, the simple slice sampler is uniformly ergodic.

[Mira & Tierney, 1997]

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

A small set for a slice sampler


For ε^⋆ > ε_⋆,

C = {x ∈ X : ε_⋆ < f(x) < ε^⋆}

is a small set:

P(x, ·) ≥ (ε_⋆/ε^⋆) µ(·)

where

µ(A) = (1/ε_⋆) ∫_0^{ε_⋆} λ(A ∩ L(ε)) / λ(L(ε)) dε

if L(ε) = {x ∈ X : f(x) > ε}

[Roberts & Rosenthal, 1998]

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

Slice sampler: drift

Under differentiability and monotonicity conditions, the slice sampler also verifies a drift condition with V(x) = f(x)^{−β}, is geometrically ergodic, and there even exist explicit bounds on the total variation distance

[Roberts & Rosenthal, 1998]

Example (Exponential Exp(1)): For n > 23,

‖K^n(x, ·) − f(·)‖_TV ≤ 0.054865 (0.985015)^n (n − 15.7043)


MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

Slice sampler: convergence


Theorem

For any density such that

ε (∂/∂ε) λ({x ∈ X : f(x) > ε}) is non-increasing,

then

‖K^523(x, ·) − f(·)‖_TV ≤ 0.0095

[Roberts & Rosenthal, 1998]

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

Convergence

A poor slice sampler

Example

Consider

f(x) = exp{−‖x‖}, x ∈ R^d

Slice sampler equivalent to a one-dimensional slice sampler on

π(z) = z^{d−1} e^{−z}, z > 0,

or on

π(u) = e^{−u^{1/d}}, u > 0

Poor performance when d is large (heavy tails)

[eight panels: sample runs and autocorrelation functions in dimensions 1, 10, 20, and 100]

Sample runs of log(u) and ACFs for log(u) (Roberts & Rosenthal, 1999)

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

The Hammersley-Clifford theorem

Hammersley-Clifford theorem

An illustration that conditionals determine the joint distribution

Theorem

If the joint density g(y1, y2) has conditional distributions g1(y1 | y2) and g2(y2 | y1), then

g(y1, y2) = g2(y2 | y1) / ∫ g2(v | y1)/g1(y1 | v) dv.

[Hammersley & Clifford, circa 1970]
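A quick numerical sanity check of this identity in R, for a toy standard bivariate normal with correlation r, so that both conditionals are N(r·, 1 − r²); the values of r, y1, y2 are illustrative:

r = 0.5; y1 = 0.3; y2 = -0.7
g1 = function(a, b) dnorm(a, r*b, sqrt(1-r^2))   # g1(y1 | y2)
g2 = function(b, a) dnorm(b, r*a, sqrt(1-r^2))   # g2(y2 | y1)
denom = integrate(function(v) g2(v, y1)/g1(y1, v), -Inf, Inf)$value
g_hc = g2(y2, y1)/denom                          # right-hand side above
g_joint = exp(-(y1^2 - 2*r*y1*y2 + y2^2)/(2*(1-r^2)))/(2*pi*sqrt(1-r^2))
all.equal(g_hc, g_joint)                         # TRUE up to tolerance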

MCMC and likelihood-free methods Part/day I: Markov chain methods

The Gibbs Sampler

The Hammersley-Clifford theorem

General HC decomposition

Under the positivity condition, the joint distribution g satisfies

g(y1, . . . , yp) ∝ ∏_{j=1}^p g_{ℓj}(y_{ℓj} | y_{ℓ1}, . . . , y_{ℓj−1}, y′_{ℓj+1}, . . . , y′_{ℓp}) / g_{ℓj}(y′_{ℓj} | y_{ℓ1}, . . . , y_{ℓj−1}, y′_{ℓj+1}, . . . , y′_{ℓp})

for every permutation ℓ on {1, 2, . . . , p} and every y′ ∈ Y.

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Sequential importance sampling

Computational issues in Bayesian statistics

The Metropolis-Hastings Algorithm

The Gibbs Sampler

Population Monte Carlo

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Importance sampling (revisited)


Approximation of integrals

I = ∫ h(x) π(x) dx

by unbiased estimators

Î = (1/n) Σ_{i=1}^n ϱi h(xi)

when

x1, . . . , xn iid ∼ q(x) and ϱi := π(xi)/q(xi)
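In R, with illustrative choices π = N(0, 1), q = Student t with 3 degrees of freedom and h(x) = x², the estimator reads:

n = 10^5
x = rt(n, df = 3)                  # x1, ..., xn iid from q
rho = dnorm(x)/dt(x, df = 3)       # importance weights pi(xi)/q(xi)
I_hat = mean(rho * x^2)            # unbiased estimate of E[X^2] = 1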

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Iterated importance sampling

As in Markov chain Monte Carlo (MCMC) algorithms, introduction of a temporal dimension:

x_i^(t) ∼ qt(x | x_i^(t−1)), i = 1, . . . , n, t = 1, . . .

and

Î_t = (1/n) Σ_{i=1}^n ϱ_i^(t) h(x_i^(t))

is still unbiased for

ϱ_i^(t) = πt(x_i^(t)) / qt(x_i^(t) | x_i^(t−1)), i = 1, . . . , n

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Fundamental importance equality

Preservation of unbiasedness

E[ h(X^(t)) π(X^(t)) / qt(X^(t) | X^(t−1)) ]

= ∫ h(x) [π(x)/qt(x | y)] qt(x | y) g(y) dx dy

= ∫ h(x) π(x) dx

for any distribution g on X^(t−1)

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Sequential variance decomposition

Furthermore,

var(Î_t) = (1/n²) Σ_{i=1}^n var(ϱ_i^(t) h(x_i^(t))),

if var(ϱ_i^(t)) exists, because the x_i^(t)'s are conditionally uncorrelated

Note

This decomposition is still valid for correlated [in i] x_i^(t)'s when incorporating the weights ϱ_i^(t)

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Simulation of a population

The importance distribution of the sample (a.k.a. particles) x^(t),

qt(x^(t) | x^(t−1)),

can depend on the previous sample x^(t−1) in any possible way, as long as the marginal distributions

q_it(x) = ∫ qt(x^(t)) dx_{−i}^(t)

can be expressed to build the importance weights

ϱ_it = π(x_i^(t)) / q_it(x_i^(t))

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Special case of the product proposal

If

qt(x^(t) | x^(t−1)) = ∏_{i=1}^n q_it(x_i^(t) | x^(t−1))

[Independent proposals], then

var(Î_t) = (1/n²) Σ_{i=1}^n var(ϱ_i^(t) h(x_i^(t)))

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Validation


E[ ϱ_i^(t) h(X_i^(t)) ϱ_j^(t) h(X_j^(t)) ]

= ∫ h(xi) [π(xi)/q_it(xi | x^(t−1))] h(xj) [π(xj)/q_jt(xj | x^(t−1))] q_it(xi | x^(t−1)) q_jt(xj | x^(t−1)) dxi dxj g(x^(t−1)) dx^(t−1)

= Eπ[h(X)]²

whatever the distribution g on x^(t−1)

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Self-normalised version

In general, π is unscaled and the weight

ϱ_i^(t) ∝ π(x_i^(t)) / q_it(x_i^(t)), i = 1, . . . , n,

is scaled so that Σ_i ϱ_i^(t) = 1

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Self-normalised version properties

I Loss of the unbiasedness property and the variance decomposition

I Normalising constant can be estimated by

ϖ_t = (1/tn) Σ_{τ=1}^t Σ_{i=1}^n π(x_i^(τ)) / q_iτ(x_i^(τ))

I Variance decomposition (approximately) recovered if ϖ_{t−1} is used instead
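In code, only the final lines of the earlier importance-sampling sketch change: the weights, now known only up to a constant, are renormalised to sum to one before averaging (same illustrative target and proposal):

n = 10^5
x = rt(n, df = 3)                   # proposal draws, as before
w = dnorm(x)/dt(x, df = 3)          # weights, any multiplicative constant works
w = w/sum(w)                        # scaled so that sum_i rho_i = 1
I_sn = sum(w * x^2)                 # self-normalised (no longer unbiased) estimate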

MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Sampling importance resampling

Importance sampling from g can also produce samples from the target π

[Rubin, 1987]

Theorem (Bootstrapped importance sampling)

If a sample (x⋆_i)_{1≤i≤m} is derived from the weighted sample (x_i, ϱ_i)_{1≤i≤n} by multinomial sampling with weights ϱ_i, then

x⋆_i ∼ π(x)

Note

Obviously, the x⋆_i's are not iid
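A minimal R sketch of this multinomial resampling step, again for the illustrative pair π = N(0, 1) and g = N(0, 2²):

n = 10^5; m = 10^4
x = rnorm(n, 0, 2)                          # weighted sample from g
rho = dnorm(x)/dnorm(x, 0, 2)               # importance weights pi/g
x_star = sample(x, m, replace = TRUE, prob = rho)  # multinomial resampling
# x_star is (approximately) distributed from pi, but its entries are not iid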


MCMC and likelihood-free methods Part/day I: Markov chain methods

Population Monte Carlo

Iterated sampling importance resampling

This principle can be extended to iterated importance sampling: after each iteration, resampling produces a sample from \pi

[Again, not iid!]

Incentive

Use previous sample(s) to learn about \pi and q


Generic Population Monte Carlo

Algorithm (Population Monte Carlo Algorithm)

For t = 1, \ldots, T

  For i = 1, \ldots, n,

    1. Select the generating distribution q_{it}(\cdot)
    2. Generate x_i^{(t)} \sim q_{it}(x)
    3. Compute \varrho_i^{(t)} = \pi(x_i^{(t)}) / q_{it}(x_i^{(t)})

  Normalise the \varrho_i^{(t)}'s into \bar\varrho_i^{(t)}'s

  Generate J_{i,t} \sim \mathcal{M}((\bar\varrho_i^{(t)})_{1\le i\le n}) and set x_{i,t} = x^{(t)}_{J_{i,t}}
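A compact sketch of the full loop in Python; the common Gaussian random-walk proposal centred at each previous particle and the N(0,1) target are assumptions for illustration only (in general the q_{it}'s may differ across particles):

import numpy as np

rng = np.random.default_rng(3)
log_pi = lambda x: -0.5 * x**2          # assumed unscaled target: N(0,1)

n, T, scale = 1_000, 10, 1.0
x = rng.normal(0.0, 5.0, size=n)        # initial particle population

for t in range(T):
    # 1-2. Generate x_i^(t) from q_it: here a Gaussian walk around the previous particle
    prop = x + scale * rng.normal(size=n)
    # 3. Importance weights pi / q_it, with q_it the N(x_i, scale^2) density
    log_q = -0.5 * ((prop - x) / scale)**2 - np.log(scale * np.sqrt(2 * np.pi))
    log_w = log_pi(prop) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                        # normalised weights
    x = prop[rng.choice(n, size=n, p=w)]   # multinomial resampling step

print(x.mean(), x.var())                # ~0, ~1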


D-kernels in competition

A general adaptive construction: construct q_{i,t} as a mixture of D different transition kernels depending on x_i^{(t-1)},

\[
q_{i,t} = \sum_{\ell=1}^{D} p_{t,\ell}\, K_\ell(x_i^{(t-1)}, x)\,, \qquad \sum_{\ell=1}^{D} p_{t,\ell} = 1\,,
\]

and adapt the weights p_{t,\ell}.

Darwinian example

Take p_{t,\ell} proportional to the survival rate of the points (a.k.a. particles) x_i^{(t)} generated from K_\ell


Implementation

Algorithm (D-kernel PMC)

For t = 1, \ldots, T

  generate (K_{i,t})_{1\le i\le N} \sim \mathcal{M}((p_{t,k})_{1\le k\le D})

  for 1 \le i \le N, generate

  \tilde{x}_{i,t} \sim K_{K_{i,t}}(x)

  compute and renormalise the importance weights \omega_{i,t}

  generate (J_{i,t})_{1\le i\le N} \sim \mathcal{M}((\omega_{i,t})_{1\le i\le N})

  take x_{i,t} = \tilde{x}_{J_{i,t},t} and p_{t+1,d} = \sum_{i=1}^{N} \omega_{i,t}\,\mathbb{I}_d(K_{i,t})
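A sketch of this kernel-competition step with two assumed Gaussian random-walk kernels of different scales and an assumed N(0,1) target (all names illustrative); the final print foreshadows the degeneracy of this naive update discussed on the "Things can go wrong" slide below:

import numpy as np

rng = np.random.default_rng(4)
log_pi = lambda x: -0.5 * x**2                     # assumed unscaled N(0,1) target

N, T = 2_000, 15
scales = np.array([0.1, 2.0])                      # D = 2 competing random-walk kernels
p = np.array([0.5, 0.5])                           # initial mixture weights p_{1,d}
x = rng.normal(0.0, 5.0, size=N)

for t in range(T):
    k = rng.choice(2, size=N, p=p)                 # K_{i,t} ~ M((p_{t,d}))
    prop = x + scales[k] * rng.normal(size=N)      # proposal from the selected kernel
    # naive (non-Rao-Blackwellised) weight: target over the selected kernel only
    log_q = -0.5 * ((prop - x) / scales[k])**2 - np.log(scales[k] * np.sqrt(2 * np.pi))
    w = np.exp(log_pi(prop) - log_q)
    w /= w.sum()
    p = np.array([w[k == d].sum() for d in range(2)])  # p_{t+1,d} = sum_i w_i I_d(K_{i,t})
    x = prop[rng.choice(N, size=N, p=w)]           # multinomial resampling

print(p)   # with this naive update, p drifts toward (1/2, 1/2): no learning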


Links with particle filters

I Sequential setting where π = πt changes with t: PopulationMonte Carlo also adapts to this case

I Can be traced back all the way to Hammersley and Morton(1954) and the self-avoiding random walk problem

I Gilks and Berzuini (2001) produce iterated samples with (SIR)resampling steps, and add an MCMC step: this step must usea πt invariant kernel

I Chopin (2001) uses iterated importance sampling to handlelarge datasets: this is a special case of PMC where the qit’sare the posterior distributions associated with a portion kt ofthe observed dataset


Links with particle filters (2)

I Rubinstein and Kroese's (2004) cross-entropy method is parameterised importance sampling targeted at rare events

I Stavropoulos and Titterington's (1999) smooth bootstrap and Warnes' (2001) kernel coupler use nonparametric kernels on the previous importance sample to build an improved proposal: this is a special case of PMC

I West's (1992) mixture approximation is a precursor of the smooth bootstrap

I Mengersen and Robert's (2002) "pinball sampler" is an MCMC attempt at population sampling

I Del Moral, Doucet and Jasra's (2006, JRSS B) sequential Monte Carlo samplers also relate to PMC, with a Markovian dependence on the past sample x^{(t)} but (limited) stationarity constraints


Things can go wrong

Unexpected behaviour of the mixture weights when the number of particles increases:

\[
\sum_{i=1}^{N} \omega_{i,t}\,\mathbb{I}_{K_{i,t}=d} \xrightarrow{P} \frac{1}{D}
\]

Conclusion

At each iteration, every weight converges to 1/D: the algorithm fails to learn from experience!!


Saved by Rao-Blackwell!!

Modification: Rao-Blackwellisation (= conditioning)

Use the whole mixture in the importance weight:

\[
\omega_{i,t} = \frac{\pi(x_{i,t})}{\sum_{d=1}^{D} p_{t,d}\,K_d(x_{i,t-1}, x_{i,t})}
\]

instead of

\[
\omega_{i,t} = \frac{\pi(x_{i,t})}{K_{K_{i,t}}(x_{i,t-1}, x_{i,t})}
\]
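In the earlier sketch, the change amounts to one weighting line: evaluate the full mixture density rather than the selected kernel alone. A hedged helper, with the same illustrative Gaussian random-walk kernels as above:

import numpy as np

def rb_weights(x_prev, prop, p, scales, log_pi):
    """Rao-Blackwellised weights: pi(x) / sum_d p_d K_d(x_prev, x)."""
    # mixture density over all D Gaussian kernels, not just the one actually used
    q_mix = sum(
        p[d] * np.exp(-0.5 * ((prop - x_prev) / scales[d])**2)
             / (scales[d] * np.sqrt(2 * np.pi))
        for d in range(len(p))
    )
    w = np.exp(log_pi(prop)) / q_mix
    return w / w.sum()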


Adapted algorithm

Algorithm (Rao-Blackwellised D-kernel PMC)

At time t (t = 1, \ldots, T),

  Generate (K_{i,t})_{1\le i\le N} \overset{\text{iid}}{\sim} \mathcal{M}((p_{t,d})_{1\le d\le D});

  Generate (\tilde{x}_{i,t})_{1\le i\le N} \overset{\text{ind}}{\sim} K_{K_{i,t}}(x_{i,t-1}, x) and set

  \omega_{i,t} = \pi(\tilde{x}_{i,t}) \Big/ \sum_{d=1}^{D} p_{t,d}\,K_d(x_{i,t-1}, \tilde{x}_{i,t});

  Generate (J_{i,t})_{1\le i\le N} \overset{\text{iid}}{\sim} \mathcal{M}((\bar\omega_{i,t})_{1\le i\le N}) and set

  x_{i,t} = \tilde{x}_{J_{i,t},t} \quad\text{and}\quad
  p_{t+1,d} = \sum_{i=1}^{N} \bar\omega_{i,t}\,
  \frac{p_{t,d}\,K_d(x_{i,t-1}, \tilde{x}_{i,t})}{\sum_{j=1}^{D} p_{t,j}\,K_j(x_{i,t-1}, \tilde{x}_{i,t})}.


Convergence properties

Theorem (LLN)

Under regularity assumptions, for h \in L^1_\Pi and for every t \ge 1,

\[
\frac{1}{N}\sum_{i=1}^{N} \omega_{i,t}\, h(x_{i,t}) \xrightarrow[N\to\infty]{P} \Pi(h)
\qquad\text{and}\qquad
p_{t,d} \xrightarrow[N\to\infty]{P} \alpha_d^t
\]

The limiting coefficients (\alpha_d^t)_{1\le d\le D} are defined recursively as

\[
\alpha_d^t = \alpha_d^{t-1} \int \left( \frac{K_d(x, x')}{\sum_{j=1}^{D} \alpha_j^{t-1} K_j(x, x')} \right) \Pi\otimes\Pi(dx, dx').
\]


Recursion on the weights

Set F as

\[
F(\alpha) = \left( \alpha_d \int \left[ \frac{K_d(x, x')}{\sum_{j=1}^{D} \alpha_j K_j(x, x')} \right] \Pi\otimes\Pi(dx, dx') \right)_{1\le d\le D}
\]

on the simplex

\[
\mathcal{S} = \left\{ \alpha = (\alpha_1, \ldots, \alpha_D);\ \forall d \in \{1, \ldots, D\},\ \alpha_d \ge 0 \ \text{and}\ \sum_{d=1}^{D} \alpha_d = 1 \right\},
\]

and define the sequence

\[
\alpha^{t+1} = F(\alpha^t)
\]


Kullback divergence

Definition (Kullback divergence)

For \alpha \in \mathcal{S},

\[
KL(\alpha) = \int \left[ \log\left( \frac{\pi(x)\,\pi(x')}{\pi(x)\sum_{d=1}^{D} \alpha_d K_d(x, x')} \right) \right] \Pi\otimes\Pi(dx, dx').
\]

This is the Kullback divergence between \Pi and the mixture.

Goal: obtain the mixture closest to \Pi, i.e., the one that minimises KL(\alpha)


Connection with RBDPMCA

Theorem

Under the assumption

\[
\forall d \in \{1, \ldots, D\}, \quad -\infty < \int \log\left(K_d(x, x')\right)\, \Pi\otimes\Pi(dx, dx') < \infty\,,
\]

for every \alpha \in \mathcal{S},

\[
KL(F(\alpha)) \le KL(\alpha).
\]

Conclusion

The Kullback divergence decreases at every iteration of RBDPMCA


An integrated EM interpretation

We have

\[
\alpha^{\min} = \arg\min_{\alpha\in\mathcal{S}} KL(\alpha)
= \arg\max_{\alpha\in\mathcal{S}} \int \log p_\alpha(\mathbf{x})\, \Pi\otimes\Pi(d\mathbf{x})
= \arg\max_{\alpha\in\mathcal{S}} \int \log \int p_\alpha(\mathbf{x}, K)\, dK\ \Pi\otimes\Pi(d\mathbf{x})
\]

for \mathbf{x} = (x, x') and K \sim \mathcal{M}((\alpha_d)_{1\le d\le D}). Then \alpha^{t+1} = F(\alpha^t) means

\[
\alpha^{t+1} = \arg\max_{\alpha} \iint E_{\alpha^t}\left( \log p_\alpha(\mathbf{X}, K) \mid \mathbf{X} = \mathbf{x} \right) \Pi\otimes\Pi(d\mathbf{x})
\]

and

\[
\lim_{t\to\infty} \alpha^t = \alpha^{\min}
\]


Illustration

Example (A toy example)

Take the target

\[
\tfrac{1}{4}\,\mathcal{N}(-1, 0.3)(x) + \tfrac{1}{4}\,\mathcal{N}(0, 1)(x) + \tfrac{1}{2}\,\mathcal{N}(3, 2)(x)
\]

and use 3 proposals: \mathcal{N}(-1, 0.3), \mathcal{N}(0, 1) and \mathcal{N}(3, 2)

[Surprise!!!]

Then the mixture weights evolve as

   t    p_{t,1}      p_{t,2}       p_{t,3}
   1    0.0500000    0.05000000    0.9000000
   2    0.2605712    0.09970292    0.6397259
   6    0.2740816    0.19160178    0.5343166
  10    0.2989651    0.19200904    0.5090259
  16    0.2651511    0.24129039    0.4935585

[Figure: weight evolution]
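A sketch reproducing this experiment with the Rao-Blackwellised update; since the proposals are fixed and independent here, K_d(x_{t-1}, x) = q_d(x). Seed and sample size are arbitrary assumptions, so the trajectory will only roughly match the table above:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Target 1/4 N(-1,0.3) + 1/4 N(0,1) + 1/2 N(3,2); proposals use the same components
locs, sds, tgt_w = np.array([-1., 0., 3.]), np.array([0.3, 1., 2.]), np.array([.25, .25, .5])
pi = lambda x: (tgt_w * stats.norm.pdf(x[:, None], locs, sds)).sum(axis=1)

N, p = 20_000, np.array([0.05, 0.05, 0.90])        # initial weights as in the table
for t in range(16):
    k = rng.choice(3, size=N, p=p)                 # select components
    x = rng.normal(locs[k], sds[k])                # independent mixture proposals
    comp = p * stats.norm.pdf(x[:, None], locs, sds)   # p_d q_d(x_i), shape (N, 3)
    w = pi(x) / comp.sum(axis=1)                   # Rao-Blackwellised weights
    w /= w.sum()
    # p_{t+1,d}: weighted posterior probabilities of each component
    p = (w[:, None] * comp / comp.sum(axis=1, keepdims=True)).sum(axis=0)
    print(t + 1, p)                                # weight evolution across iterations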


[Figure: target and mixture evolution]


Learning scheme

The efficiency of the SNIS approximation depends on the choice of Q, ranging from optimal,

\[
q(x) \propto |h(x) - \Pi(h)|\,\pi(x)\,,
\]

to useless,

\[
\mathrm{var}\,\hat\Pi^{SNIS}_{Q,N}(h) = +\infty
\]

Example (PMC = adaptive importance sampling)

Population Monte Carlo produces a sequence of proposals Q_t aiming at improving efficiency, i.e.

\[
\mathrm{Kull}(\pi, q_t) \le \mathrm{Kull}(\pi, q_{t-1})
\qquad\text{or}\qquad
\mathrm{var}\,\hat\Pi^{SNIS}_{Q_t,\infty}(h) \le \mathrm{var}\,\hat\Pi^{SNIS}_{Q_{t-1},\infty}(h)
\]

[Cappé, Douc, Guillin, Marin, Robert, 04, 07a, 07b, 08]


AMIS

Multiple Importance Sampling

Recycling: given several proposals Q_1, \ldots, Q_T, for 1 \le t \le T generate an iid sample

\[
x_{t1}, \ldots, x_{tN} \sim Q_t
\]

and estimate \Pi(h) by

\[
\hat\Pi^{MIS}_{Q,N}(h) = T^{-1}\sum_{t=1}^{T} N^{-1}\sum_{i=1}^{N} h(x_{ti})\,\omega_{ti}
\]

where \omega_{ti} need not equal \pi(x_{ti})/q_t(x_{ti}): the single-proposal weight

\[
\omega_{ti} = \frac{\pi(x_{ti})}{q_t(x_{ti})}
\]

is correct..., and the deterministic mixture weight

\[
\omega_{ti} = \frac{\pi(x_{ti})}{T^{-1}\sum_{\ell=1}^{T} q_\ell(x_{ti})}
\]

is still correct!
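A minimal sketch contrasting the two weightings; the N(0,1) target and the three Gaussian proposals are illustrative assumptions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
target = stats.norm(0, 1)                          # assumed target pi
props = [stats.norm(m, 2) for m in (-3., 0., 3.)]  # T = 3 assumed proposals Q_t
N = 10_000

samples = [q.rvs(size=N, random_state=rng) for q in props]

# Single-proposal weights: each sample weighted against its own proposal only
naive = [target.pdf(x) / q.pdf(x) for x, q in zip(samples, props)]

# Deterministic mixture weights: weighted against the average of all proposals
mix = [target.pdf(x) / np.mean([q.pdf(x) for q in props], axis=0) for x in samples]

h = lambda x: x**2
for w, name in [(naive, "single"), (mix, "mixture")]:
    est = np.mean([np.mean(wt * h(x)) for wt, x in zip(w, samples)])
    print(name, est)   # both ~= 1, but the mixture weights are far more stable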


Mixture representation

Deterministic mixture correction of the weights proposed by Owen and Zhou (JASA, 2000)

I The corresponding estimator is still unbiased [if not self-normalised]

I All particles are on the same weighting scale, rather than each on its own

I Large-variance proposals Q_t do not take over

I Variance reduction thanks to weight stabilisation & recycling

I Removes the randomness in the component choice [= Rao-Blackwellisation]


Global adaptation

Global Adaptation

At iteration t = 1, \ldots, T,

1. For 1 \le i \le N_1, generate

   x_i^t \sim T_3(\mu^{t-1}, \Sigma^{t-1})

   [a Student's t proposal with 3 degrees of freedom]

2. Calculate the mixture importance weight of particle x_i^t,

   \omega_i^t = \pi(x_i^t) \big/ \delta_i^t\,,

   where

   \delta_i^t = \sum_{l=0}^{t-1} q_{T(3)}(x_i^t;\, \mu^l, \Sigma^l)


Backward reweighting

3. If t \ge 2, actualise the weights of all past particles x_i^l, 1 \le l \le t-1:

   \omega_i^l = \pi(x_i^l) \big/ \delta_i^l\,,

   where

   \delta_i^l \leftarrow \delta_i^l + q_{T(3)}(x_i^l;\, \mu^{t-1}, \Sigma^{t-1})

4. Compute IS estimates of the target mean and variance, \mu^t and \Sigma^t, where

   \mu_j^t = \sum_{l=1}^{t}\sum_{i=1}^{N_1} \omega_i^l\,(x_j)_i^l \Big/ \sum_{l=1}^{t}\sum_{i=1}^{N_1} \omega_i^l \quad \ldots
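A condensed sketch of these AMIS steps for a d-dimensional target, with multivariate Student t(3) proposals; the N(0, I) target, sizes and seed are illustrative assumptions, not from the course:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
d, N1, T = 2, 1_000, 10
log_pi = stats.multivariate_normal(np.zeros(d), np.eye(d)).logpdf  # assumed target

mu, Sigma = np.zeros(d), 25.0 * np.eye(d)           # initial proposal parameters
params, X, delta = [], [], []

for t in range(T):
    q = stats.multivariate_t(mu, Sigma, df=3)
    params.append(q)
    x = q.rvs(size=N1, random_state=rng)
    X.append(x)
    # delta_i^t: sum of all proposal densities used so far, at the new points
    delta.append(sum(p.pdf(x) for p in params))
    # backward reweighting: add the newest proposal density at all past points
    for l in range(t):
        delta[l] = delta[l] + q.pdf(X[l])
    # current weights for every particle generated so far
    w = np.concatenate([np.exp(log_pi(X[l])) / delta[l] for l in range(t + 1)])
    w = w / w.sum()
    xs = np.vstack(X)
    mu = w @ xs                                     # IS estimate of the target mean
    diff = xs - mu
    Sigma = (w[:, None] * diff).T @ diff            # IS estimate of the target covariance

print(mu, np.diag(Sigma))                           # ~0 and ~1 for the N(0, I) target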


A toy example

[Figure] Banana shape benchmark: marginal distribution of (x_1, x_2) for the parameters \sigma_1^2 = 100 and b = 0.03. Contours represent 60% (red), 90% (black) and 99.9% (blue) confidence regions in the marginal space.
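For reference, this benchmark is usually obtained by twisting a Gaussian; a sketch under the stated parameters, assuming the standard twist form (which may differ in detail from the exact density behind the figure):

import numpy as np
from scipy import stats

def log_banana(x, sigma1_sq=100.0, b=0.03, p=2):
    """Twisted-Gaussian (banana) log-density: x2 shifted by b*(x1^2 - sigma1^2)."""
    y = np.array(x, dtype=float).copy()
    y[1] = y[1] + b * (y[0]**2 - sigma1_sq)          # undo the twist
    var = np.ones(p)
    var[0] = sigma1_sq                               # first coordinate has variance sigma1^2
    return stats.multivariate_normal(np.zeros(p), np.diag(var)).logpdf(y)

print(log_banana([0.0, 0.0]))   # log-density evaluation at the origin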


A toy example

[Figure] Banana shape example: boxplots of 10 replicate ESSs for the AMIS scheme (left) and the NOT-AMIS scheme (right), for p = 5, 10, 20.