MCMC and likelihood-free methods Part/day II: ABC
Christian P. Robert
Université Paris-Dauphine, IuF, & CREST
Monash University, EBS, July 18, 2012

Monash University short course, part II


Description: second part of my slides for the Special Econometrics series at Monash Uni., Jul. 20


Page 1: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

MCMC and likelihood-free methodsPart/day II: ABC

Christian P. Robert

Université Paris-Dauphine, IuF, & CREST

Monash University, EBS, July 18, 2012

Page 2: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Outline

simulation-based methods in Econometrics

Genetics of ABC

Approximate Bayesian computation

ABC for model choice

ABC model choice consistency

Page 3: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Econ’ections

simulation-based methods inEconometrics

Genetics of ABC

Approximate Bayesian computation

ABC for model choice

ABC model choice consistency

Page 4: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Usages of simulation in Econometrics

Similar exploration of simulation-based techniques in Econometrics

I Simulated method of moments

I Method of simulated moments

I Simulated pseudo-maximum-likelihood

I Indirect inference

[Gourieroux & Monfort, 1996]

Page 5: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Simulated method of moments

Given observations $y^o_{1:n}$ from a model
$$y_t = r(y_{1:(t-1)}, \varepsilon_t, \theta)\,, \qquad \varepsilon_t \sim g(\cdot)$$
simulate $\varepsilon^\star_{1:n}$, derive
$$y^\star_t(\theta) = r(y_{1:(t-1)}, \varepsilon^\star_t, \theta)$$
and estimate $\theta$ by
$$\arg\min_\theta \sum_{t=1}^n \left(y^o_t - y^\star_t(\theta)\right)^2$$
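A minimal R sketch of this first criterion on a toy AR(1)-type model (the choice of r, the sample size and all object names below are illustrative assumptions, not taken from the slides): one fixed set of simulated shocks is reused for every candidate θ, and the simulated sum of squares is minimised.

R sketch
# simulated method of moments on a toy model y_t = theta*y_{t-1} + eps_t
set.seed(1)
n <- 200; theta0 <- 0.6
r <- function(yprev, eps, theta) theta * yprev + eps   # assumed model r(.)
eps <- rnorm(n); yo <- numeric(n)
for (t in 2:n) yo[t] <- r(yo[t-1], eps[t], theta0)     # pseudo-observed data
epstar <- rnorm(n)                                     # fixed simulated shocks
smm_crit <- function(theta) {
  ystar <- numeric(n)
  for (t in 2:n) ystar[t] <- r(ystar[t-1], epstar[t], theta)
  sum((yo - ystar)^2)                                  # criterion to minimise
}
optimize(smm_crit, interval = c(-0.99, 0.99))$minimum  # SMM estimate of theta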

Page 6: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Simulated method of moments

Given observations $y^o_{1:n}$ from a model
$$y_t = r(y_{1:(t-1)}, \varepsilon_t, \theta)\,, \qquad \varepsilon_t \sim g(\cdot)$$
simulate $\varepsilon^\star_{1:n}$, derive
$$y^\star_t(\theta) = r(y_{1:(t-1)}, \varepsilon^\star_t, \theta)$$
and estimate $\theta$ by
$$\arg\min_\theta \left\{\sum_{t=1}^n y^o_t - \sum_{t=1}^n y^\star_t(\theta)\right\}^2$$

Page 7: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Method of simulated moments

Given a statistic vector $K(y)$ with
$$\mathbb{E}_\theta[K(Y_t)\mid y_{1:(t-1)}] = k(y_{1:(t-1)};\theta)$$
find an unbiased estimator of $k(y_{1:(t-1)};\theta)$,
$$\tilde k(\varepsilon_t, y_{1:(t-1)};\theta)$$
Estimate $\theta$ by
$$\arg\min_\theta \left\|\sum_{t=1}^n \left[K(y_t) - \frac{1}{S}\sum_{s=1}^S \tilde k(\varepsilon^s_t, y_{1:(t-1)};\theta)\right]\right\|$$

[Pakes & Pollard, 1989]

Page 8: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Indirect inference

Minimise (in θ) the distance between estimators $\hat\beta$ based on pseudo-models for genuine observations and for observations simulated under the true model and the parameter θ.

[Gourieroux, Monfort, & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]

Page 9: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Indirect inference (PML vs. PSE)

Example of the pseudo-maximum-likelihood (PML)
$$\hat\beta(y) = \arg\max_\beta \sum_t \log f^\star(y_t \mid \beta, y_{1:(t-1)})$$
leading to
$$\arg\min_\theta \left\|\hat\beta(y^o) - \hat\beta(y_1(\theta),\ldots,y_S(\theta))\right\|^2$$
when
$$y^s(\theta) \sim f(y\mid\theta)\,, \quad s = 1,\ldots,S$$

Page 10: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Indirect inference (PML vs. PSE)

Example of the pseudo-score-estimator (PSE)
$$\hat\beta(y) = \arg\min_\beta \left\{\sum_t \frac{\partial \log f^\star}{\partial\beta}(y_t \mid \beta, y_{1:(t-1)})\right\}^2$$
leading to
$$\arg\min_\theta \left\|\hat\beta(y^o) - \hat\beta(y_1(\theta),\ldots,y_S(\theta))\right\|^2$$
when
$$y^s(\theta) \sim f(y\mid\theta)\,, \quad s = 1,\ldots,S$$

Page 11: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Consistent indirect inference

...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier...

Consistency depending on the criterion and on the asymptotic identifiability of θ

[Gourieroux & Monfort, 1996, p. 66]

Page 12: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Consistent indirect inference

...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier...

Consistency depending on the criterion and on the asymptotic identifiability of θ

[Gourieroux & Monfort, 1996, p. 66]

Page 13: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

AR(2) vs. MA(1) example

true (MA) model
$$y_t = \varepsilon_t - \theta\varepsilon_{t-1}$$
and [wrong!] auxiliary (AR) model
$$y_t = \beta_1 y_{t-1} + \beta_2 y_{t-2} + u_t$$

R code
x=eps=rnorm(250)
x[2:250]=x[2:250]-0.5*x[1:249]
simeps=rnorm(250)
propeta=seq(-.99,.99,le=199)
dist=rep(0,199)
bethat=as.vector(arima(x,c(2,0,0),incl=FALSE)$coef)
for (t in 1:199)
  dist[t]=sum((as.vector(arima(c(simeps[1],simeps[2:250]-propeta[t]*
    simeps[1:249]),c(2,0,0),incl=FALSE)$coef)-bethat)^2)
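The indirect-inference estimate is then the proposed value minimising this distance; a short hedged follow-up, assuming the vectors produced by the code above are in the workspace:

R sketch
propeta[which.min(dist)]                               # indirect-inference estimate of theta
plot(propeta, dist, type="l", xlab="theta", ylab="distance")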

Page 14: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

AR(2) vs. MA(1) example

One sample:

[Figure: distance to the auxiliary AR(2) estimate as a function of θ, for one sample.]

Page 15: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

AR(2) vs. MA(1) example

Many samples:

[Figure: distribution of the distance-minimising values of θ over many samples.]

Page 16: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

Choice of pseudo-model

Pick model such that

1. β(θ) not flat (i.e. sensitive to changes in θ)

2. β(θ) not dispersed (i.e. robust against changes in y^s(θ))

[Frigessi & Heggland, 2004]

Page 17: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

ABC using indirect inference (1)

We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms by using indirect inference (...) In the indirect inference approach to ABC the parameters of an auxiliary model fitted to the data become the summary statistics. Although applicable to any ABC technique, we embed this approach within a sequential Monte Carlo algorithm that is completely adaptive and requires very little tuning (...)

[Drovandi, Pettitt & Faddy, 2011]

© Indirect inference provides summary statistics for ABC...

Page 18: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

simulation-based methods in Econometrics

ABC using indirect inference (2)

...the above result shows that, in the limit as h → 0, ABC will be more accurate than an indirect inference method whose auxiliary statistics are the same as the summary statistic that is used for ABC (...) Initial analysis showed that which method is more accurate depends on the true value of θ.

[Fearnhead and Prangle, 2012]

© Indirect inference provides estimates rather than global inference...

Page 19: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Genetics of ABC

simulation-based methods inEconometrics

Genetics of ABC

Approximate Bayesian computation

ABC for model choice

ABC model choice consistency

Page 20: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Genetic background of ABC

ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ)

This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC.

[Griffith & al., 1997; Tavare & al., 1999]

Page 21: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Population genetics

[Part derived from the teaching material of Raphael Leblois, ENS Lyon, November 2010]

I Describe the genotypes, estimate the allele frequencies, determine their distribution among individuals, populations and between populations;

I Predict and understand the evolution of gene frequencies in populations as a result of various factors.

© Analyses the effect of various evolutionary forces (mutation, drift, migration, selection) on the evolution of gene frequencies in time and space.

Page 22: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

A[B]ctors

Towards an explanation of the mechanisms of evolutionary changes, mathematical models of evolution started to be developed in the years 1920-1930.

Ronald A. Fisher (1890-1962), Sewall Wright (1889-1988), John B.S. Haldane (1892-1964)

Page 23: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Wright-Fisher model

I A population of constant size, in which individuals reproduce at the same time.

I Each gene in a generation is a copy of a gene of the previous generation.

I In the absence of mutation and selection, allele frequencies drift (increase and decrease) inevitably until the fixation of an allele.

I Drift thus leads to the loss of genetic variation within populations.

Page 24: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Coalescent theory

[Kingman, 1982; Tajima, Tavare, &tc]


Coalescent theory is interested in the genealogy of a sample of genes back in time to the common ancestor of the sample.

Page 25: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Common ancestor

[Figure: coalescent tree, with the time of coalescence (T) indicated. Modelling of the genetic drift process by going back in time to the common ancestor of a sample of genes.]

The different lineages merge when we go back in the past.

Page 26: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Neutral mutations

I Under the assumption of neutrality of the genetic markers, the mutations are independent of the genealogy, i.e. the genealogy only depends on the demographic processes.

I We construct the genealogy according to the demographic parameters (e.g. N), then we add the mutations a posteriori on the branches, from the MRCA to the leaves of the tree.

I This produces polymorphism data under the demographic and mutational models under consideration.

Page 27: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Demo-genetic inference

Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors

The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time

Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ).

Page 28: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Intractable likelihood

Missing (too missing!) data structure:
$$f(\mathbf{y}|\theta) = \int_{\mathcal{G}} f(\mathbf{y}|G,\theta)\,f(G|\theta)\,\mathrm{d}G$$

The genealogies are considered as nuisance parameters.

This problem thus differs from the phylogenetic approach, where the tree is the parameter of interest.

Page 29: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

A genuine example of application


Pygmy populations: do they have a common origin? Is there a lot of exchange between pygmy and non-pygmy populations?

Page 30: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Scenarios under competition


Different possible scenarios; scenario choice by ABC

Verdu et al. 2009

Page 31: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Simulation results


Different possible scenarios; scenario choice by ABC.

Scenario 1a is strongly supported relative to the others: this argues for a common origin of the West African pygmy populations.

Verdu et al. 2009


© Scenario 1a is chosen.

Page 32: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Genetics of ABC

Most likely scenario


Evolutionary scenario: a "story" is told on the basis of these inferences.

Verdu et al. 2009

Page 33: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Approximate Bayesian computation

simulation-based methods inEconometrics

Genetics of ABC

Approximate Bayesian computation

ABC for model choice

ABC model choice consistency

Page 34: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Intractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step
$$f(\mathbf{y}|\theta) = \int_{\mathcal{Z}} f(\mathbf{y},\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}$$
is impossible or too costly because of the dimension of z

© MCMC cannot be implemented!

Page 35: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Intractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step
$$f(\mathbf{y}|\theta) = \int_{\mathcal{Z}} f(\mathbf{y},\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}$$
is impossible or too costly because of the dimension of z

© MCMC cannot be implemented!

Page 36: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Illustrations

Example

Stochastic volatility model: for t = 1, . . . , T,
$$y_t = \exp(z_t)\,\varepsilon_t\,, \qquad z_t = a + b z_{t-1} + \sigma\eta_t\,,$$
T very large makes it difficult to include z within the simulated parameters

[Figure: highest weight trajectories of the volatility z_t over t = 0, . . . , 1000.]

Page 37: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Illustrations

Example

Potts model: if y takes values on a grid Y of size $k^n$ and
$$f(\mathbf{y}|\theta) \propto \exp\Big\{\theta\sum_{l\sim i}\mathbb{I}_{y_l = y_i}\Big\}$$
where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant
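A minimal R sketch of this point (the grid size, number of colours and the nearest-neighbour structure are illustrative assumptions): the unnormalised density is cheap to evaluate, while the normalising constant would require summing it over all $k^n$ configurations.

R sketch
# unnormalised Potts density on a small m x m grid with k colours (toy illustration)
potts_unnorm <- function(y, theta) {
  m <- nrow(y)
  s <- sum(y[, -m] == y[, -1]) + sum(y[-m, ] == y[-1, ])  # horizontal + vertical neighbour matches
  exp(theta * s)
}
m <- 3; k <- 2
y <- matrix(sample(1:k, m^2, replace = TRUE), m, m)
potts_unnorm(y, theta = 0.8)
# the normalising constant sums potts_unnorm over all k^(m^2) grids: 2^9 here,
# but already 2^100 for a 10 x 10 binary grid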

Page 38: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Illustrations

Example (Genesis)

Phylogenetic tree: in population genetics, reconstitution of a common ancestor from a sample of genes via a phylogenetic tree that is close to impossible to integrate out [100 processor days with 4 parameters]

[Cornuet et al., 2009, Bioinformatics]

Page 39: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

The ABC method

Bayesian setting: target is π(θ)f (x |θ)

When likelihood f(x|θ) not in closed form, likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
$$\theta' \sim \pi(\theta)\,, \quad \mathbf{z} \sim f(\mathbf{z}|\theta')\,,$$
until the auxiliary variable z is equal to the observed value, z = y.

[Tavare et al., 1997]

Page 40: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

The ABC method

Bayesian setting: target is π(θ)f(x|θ)

When likelihood f(x|θ) not in closed form, likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
$$\theta' \sim \pi(\theta)\,, \quad \mathbf{z} \sim f(\mathbf{z}|\theta')\,,$$
until the auxiliary variable z is equal to the observed value, z = y.

[Tavare et al., 1997]

Page 41: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

The ABC method

Bayesian setting: target is π(θ)f(x|θ)

When likelihood f(x|θ) not in closed form, likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
$$\theta' \sim \pi(\theta)\,, \quad \mathbf{z} \sim f(\mathbf{z}|\theta')\,,$$
until the auxiliary variable z is equal to the observed value, z = y.

[Tavare et al., 1997]

Page 42: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Why does it work?!

The proof is trivial:

$$f(\theta_i) \propto \sum_{\mathbf{z}\in\mathcal{D}} \pi(\theta_i)\,f(\mathbf{z}|\theta_i)\,\mathbb{I}_{\mathbf{y}}(\mathbf{z}) \;\propto\; \pi(\theta_i)\,f(\mathbf{y}|\theta_i) = \pi(\theta_i|\mathbf{y})\,.$$

[Accept–Reject 101]

Page 43: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Earlier occurrence

‘Bayesian statistics and Monte Carlo methods are ideallysuited to the task of passing many models over onedataset’

[Don Rubin, Annals of Statistics, 1984]

Note Rubin (1984) does not promote this algorithm for likelihood-free simulation but frequentist intuition on posterior distributions: parameters from posteriors are more likely to be those that could have generated the data.

Page 44: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

A as A...pproximative

When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
$$\varrho(\mathbf{y},\mathbf{z}) \le \varepsilon$$
where ϱ is a distance

Output distributed from
$$\pi(\theta)\,P_\theta\{\varrho(\mathbf{y},\mathbf{z}) < \varepsilon\} \;\propto\; \pi(\theta \mid \varrho(\mathbf{y},\mathbf{z}) < \varepsilon)$$

[Pritchard et al., 1999]

Page 45: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

A as A...pproximative

When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
$$\varrho(\mathbf{y},\mathbf{z}) \le \varepsilon$$
where ϱ is a distance

Output distributed from
$$\pi(\theta)\,P_\theta\{\varrho(\mathbf{y},\mathbf{z}) < \varepsilon\} \;\propto\; \pi(\theta \mid \varrho(\mathbf{y},\mathbf{z}) < \varepsilon)$$

[Pritchard et al., 1999]

Page 46: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

ABC algorithm

Algorithm 1 Likelihood-free rejection sampler 2

for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for

where η(y) defines a (not necessarily sufficient) statistic
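A minimal R sketch of Algorithm 1 as a generic function; the function and argument names, and the toy normal-mean usage after it, are illustrative assumptions rather than material from the slides.

R sketch
# likelihood-free rejection sampler (Algorithm 1)
abc_reject <- function(N, rprior, rlik, eta, y, rho, eps) {
  theta <- numeric(N)
  for (i in 1:N) {
    repeat {
      thetap <- rprior()                      # theta' ~ prior
      z <- rlik(thetap)                       # z ~ f(.|theta')
      if (rho(eta(z), eta(y)) <= eps) break   # accept when summaries are close enough
    }
    theta[i] <- thetap
  }
  theta
}
# toy usage: normal mean, summary = sample mean
y <- rnorm(20, mean = 1)
out <- abc_reject(N = 100,
  rprior = function() rnorm(1, 0, 10),
  rlik   = function(th) rnorm(20, th),
  eta    = mean, y = y,
  rho    = function(a, b) abs(a - b), eps = 0.1)
hist(out)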

Page 47: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Output

The likelihood-free algorithm samples from the marginal in z of:
$$\pi_\varepsilon(\theta,\mathbf{z}|\mathbf{y}) = \frac{\pi(\theta)f(\mathbf{z}|\theta)\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z})}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,,$$
where $A_{\varepsilon,\mathbf{y}} = \{\mathbf{z}\in\mathcal{D}\mid\rho(\eta(\mathbf{z}),\eta(\mathbf{y})) < \varepsilon\}$.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
$$\pi_\varepsilon(\theta|\mathbf{y}) = \int \pi_\varepsilon(\theta,\mathbf{z}|\mathbf{y})\,\mathrm{d}\mathbf{z} \approx \pi(\theta|\mathbf{y})\,.$$

Page 48: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Output

The likelihood-free algorithm samples from the marginal in z of:
$$\pi_\varepsilon(\theta,\mathbf{z}|\mathbf{y}) = \frac{\pi(\theta)f(\mathbf{z}|\theta)\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z})}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,,$$
where $A_{\varepsilon,\mathbf{y}} = \{\mathbf{z}\in\mathcal{D}\mid\rho(\eta(\mathbf{z}),\eta(\mathbf{y})) < \varepsilon\}$.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
$$\pi_\varepsilon(\theta|\mathbf{y}) = \int \pi_\varepsilon(\theta,\mathbf{z}|\mathbf{y})\,\mathrm{d}\mathbf{z} \approx \pi(\theta|\mathbf{y})\,.$$

Page 49: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Convergence of ABC (first attempt)

What happens when ε→ 0?

If f(·|θ) is continuous in y, uniformly in θ [!], given an arbitrary δ > 0, there exists ε0 such that ε < ε0 implies

Page 50: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Convergence of ABC (first attempt)

What happens when ε→ 0?

If f(·|θ) is continuous in y, uniformly in θ [!], given an arbitrary δ > 0, there exists ε0 such that ε < ε0 implies
$$\frac{\pi(\theta)\int f(\mathbf{z}|\theta)\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z})\,\mathrm{d}\mathbf{z}}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta} \;\in\; \frac{\pi(\theta)f(\mathbf{y}|\theta)(1\mp\delta)\,\mu(B_\varepsilon)}{\int_\Theta\pi(\theta)f(\mathbf{y}|\theta)\,\mathrm{d}\theta\,(1\pm\delta)\,\mu(B_\varepsilon)}$$

Page 51: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Convergence of ABC (first attempt)

What happens when ε→ 0?

If f(·|θ) is continuous in y, uniformly in θ [!], given an arbitrary δ > 0, there exists ε0 such that ε < ε0 implies
$$\frac{\pi(\theta)\int f(\mathbf{z}|\theta)\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z})\,\mathrm{d}\mathbf{z}}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta} \;\in\; \frac{\pi(\theta)f(\mathbf{y}|\theta)(1\mp\delta)\,\cancel{\mu(B_\varepsilon)}}{\int_\Theta\pi(\theta)f(\mathbf{y}|\theta)\,\mathrm{d}\theta\,(1\pm\delta)\,\cancel{\mu(B_\varepsilon)}}$$

Page 52: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Convergence of ABC (first attempt)

What happens when ε→ 0?

If f(·|θ) is continuous in y, uniformly in θ [!], given an arbitrary δ > 0, there exists ε0 such that ε < ε0 implies
$$\frac{\pi(\theta)\int f(\mathbf{z}|\theta)\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z})\,\mathrm{d}\mathbf{z}}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta} \;\in\; \frac{\pi(\theta)f(\mathbf{y}|\theta)(1\mp\delta)\,\cancel{\mu(B_\varepsilon)}}{\int_\Theta\pi(\theta)f(\mathbf{y}|\theta)\,\mathrm{d}\theta\,(1\pm\delta)\,\cancel{\mu(B_\varepsilon)}}$$

[Proof extends to other continuous-in-0 kernels Kε]

Page 53: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Convergence of ABC (second attempt)

What happens when ε→ 0?

For B ⊂ Θ, we have
$$\int_B \frac{\int_{A_{\varepsilon,\mathbf{y}}} f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\pi(\theta)\,\mathrm{d}\theta = \int_{A_{\varepsilon,\mathbf{y}}} \frac{\int_B f(\mathbf{z}|\theta)\pi(\theta)\,\mathrm{d}\theta}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\mathrm{d}\mathbf{z}$$
$$= \int_{A_{\varepsilon,\mathbf{y}}} \frac{\int_B f(\mathbf{z}|\theta)\pi(\theta)\,\mathrm{d}\theta}{m(\mathbf{z})}\;\frac{m(\mathbf{z})}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\mathrm{d}\mathbf{z}$$
$$= \int_{A_{\varepsilon,\mathbf{y}}} \pi(B|\mathbf{z})\;\frac{m(\mathbf{z})}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\mathrm{d}\mathbf{z}$$
which indicates convergence for a continuous π(B|z).

Page 54: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Convergence of ABC (second attempt)

What happens when ε→ 0?

For B ⊂ Θ, we have
$$\int_B \frac{\int_{A_{\varepsilon,\mathbf{y}}} f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\pi(\theta)\,\mathrm{d}\theta = \int_{A_{\varepsilon,\mathbf{y}}} \frac{\int_B f(\mathbf{z}|\theta)\pi(\theta)\,\mathrm{d}\theta}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\mathrm{d}\mathbf{z}$$
$$= \int_{A_{\varepsilon,\mathbf{y}}} \frac{\int_B f(\mathbf{z}|\theta)\pi(\theta)\,\mathrm{d}\theta}{m(\mathbf{z})}\;\frac{m(\mathbf{z})}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\mathrm{d}\mathbf{z}$$
$$= \int_{A_{\varepsilon,\mathbf{y}}} \pi(B|\mathbf{z})\;\frac{m(\mathbf{z})}{\int_{A_{\varepsilon,\mathbf{y}}\times\Theta}\pi(\theta)f(\mathbf{z}|\theta)\,\mathrm{d}\mathbf{z}\,\mathrm{d}\theta}\,\mathrm{d}\mathbf{z}$$
which indicates convergence for a continuous π(B|z).

Page 55: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Probit modelling on Pima Indian women

Example (R benchmark)

200 Pima Indian women with observed variables

I plasma glucose concentration in oral glucose tolerance test

I diastolic blood pressure

I diabetes pedigree function

I presence/absence of diabetes

Probability of diabetes function of above variables
$$P(y = 1|x) = \Phi(x_1\beta_1 + x_2\beta_2 + x_3\beta_3)\,,$$
Test of H0 : β3 = 0 for 200 observations of Pima.tr based on a g-prior modelling:
$$\beta \sim \mathcal{N}_3\left(0,\, n\,(X^{\mathrm T}X)^{-1}\right)$$
Use of importance function inspired from the MLE estimate distribution
$$\beta \sim \mathcal{N}(\hat\beta,\, \hat\Sigma)$$

Page 56: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Probit modelling on Pima Indian women

Example (R benchmark)

200 Pima Indian women with observed variables

I plasma glucose concentration in oral glucose tolerance test

I diastolic blood pressure

I diabetes pedigree function

I presence/absence of diabetes

Probability of diabetes function of above variables
$$P(y = 1|x) = \Phi(x_1\beta_1 + x_2\beta_2 + x_3\beta_3)\,,$$
Test of H0 : β3 = 0 for 200 observations of Pima.tr based on a g-prior modelling:
$$\beta \sim \mathcal{N}_3\left(0,\, n\,(X^{\mathrm T}X)^{-1}\right)$$
Use of importance function inspired from the MLE estimate distribution
$$\beta \sim \mathcal{N}(\hat\beta,\, \hat\Sigma)$$

Page 57: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Probit modelling on Pima Indian women

Example (R benchmark)

200 Pima Indian women with observed variables

I plasma glucose concentration in oral glucose tolerance test

I diastolic blood pressure

I diabetes pedigree function

I presence/absence of diabetes

Probability of diabetes function of above variables
$$P(y = 1|x) = \Phi(x_1\beta_1 + x_2\beta_2 + x_3\beta_3)\,,$$
Test of H0 : β3 = 0 for 200 observations of Pima.tr based on a g-prior modelling:
$$\beta \sim \mathcal{N}_3\left(0,\, n\,(X^{\mathrm T}X)^{-1}\right)$$
Use of importance function inspired from the MLE estimate distribution
$$\beta \sim \mathcal{N}(\hat\beta,\, \hat\Sigma)$$

Page 58: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Probit modelling on Pima Indian women

Example (R benchmark)

200 Pima Indian women with observed variables

I plasma glucose concentration in oral glucose tolerance test

I diastolic blood pressure

I diabetes pedigree function

I presence/absence of diabetes

Probability of diabetes function of above variables
$$P(y = 1|x) = \Phi(x_1\beta_1 + x_2\beta_2 + x_3\beta_3)\,,$$
Test of H0 : β3 = 0 for 200 observations of Pima.tr based on a g-prior modelling:
$$\beta \sim \mathcal{N}_3\left(0,\, n\,(X^{\mathrm T}X)^{-1}\right)$$
Use of importance function inspired from the MLE estimate distribution
$$\beta \sim \mathcal{N}(\hat\beta,\, \hat\Sigma)$$

Page 59: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Pima Indian benchmark

Figure: Comparison between density estimates of the marginals on β1 (left), β2 (center) and β3 (right) from ABC rejection samples (red) and MCMC samples (black).

Page 60: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

MA example

Back to the MA(q) model
$$x_t = \varepsilon_t + \sum_{i=1}^q \vartheta_i \varepsilon_{t-i}$$
Simple prior: uniform over the inverse [real and complex] roots in
$$\mathcal{Q}(u) = 1 - \sum_{i=1}^q \vartheta_i u^i$$
under the identifiability conditions

Page 61: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

MA example

Back to the MA(q) model
$$x_t = \varepsilon_t + \sum_{i=1}^q \vartheta_i \varepsilon_{t-i}$$
Simple prior: uniform prior over the identifiability zone, e.g. triangle for MA(2)

Page 62: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

MA example (2)

ABC algorithm thus made of

1. picking a new value (ϑ1, ϑ2) in the triangle

2. generating an iid sequence (εt)−q<t≤T

3. producing a simulated series (x ′t)1≤t≤T

Distance: basic distance between the series

$$\rho\big((x'_t)_{1\le t\le T}, (x_t)_{1\le t\le T}\big) = \sum_{t=1}^T (x_t - x'_t)^2$$
or distance between summary statistics like the q autocorrelations
$$\tau_j = \sum_{t=j+1}^T x_t x_{t-j}$$
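A minimal R sketch of one such ABC pass for an MA(2) model with the autocorrelation-type summaries (the sample size, number of simulations, the retained fraction and all object names are illustrative assumptions):

R sketch
# one-shot ABC for MA(2) with tau_j summaries (toy sketch)
T <- 100; q <- 2
x <- arima.sim(n = T, model = list(ma = c(0.6, 0.2)))    # pseudo-observed series
tau <- function(x, q) { n <- length(x); sapply(1:q, function(j) sum(x[(j+1):n] * x[1:(n-j)])) }
tobs <- tau(x, q)
N <- 1e4
out <- matrix(NA, N, q + 1)
for (i in 1:N) {
  repeat {                                               # uniform draw over the MA(2) triangle
    th <- c(runif(1, -2, 2), runif(1, -1, 1))
    if (th[1] + th[2] > -1 && th[1] - th[2] < 1) break
  }
  eps <- rnorm(T + q)
  xs <- eps[(q+1):(T+q)] + th[1]*eps[q:(T+q-1)] + th[2]*eps[(q-1):(T+q-2)]
  out[i, ] <- c(th, sum((tau(xs, q) - tobs)^2))          # summary distance
}
abc <- out[out[, 3] <= quantile(out[, 3], 0.01), 1:2]    # keep the 1% closest draws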

Page 63: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

MA example (2)

ABC algorithm thus made of

1. picking a new value (ϑ1, ϑ2) in the triangle

2. generating an iid sequence (εt)−q<t≤T

3. producing a simulated series (x ′t)1≤t≤T

Distance: basic distance between the series

$$\rho\big((x'_t)_{1\le t\le T}, (x_t)_{1\le t\le T}\big) = \sum_{t=1}^T (x_t - x'_t)^2$$
or distance between summary statistics like the q autocorrelations
$$\tau_j = \sum_{t=j+1}^T x_t x_{t-j}$$

Page 64: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Comparison of distance impact

Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model

Page 65: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Comparison of distance impact

[Figure: ABC samples of θ1 (left) and θ2 (right) as the tolerance decreases.]

Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model

Page 66: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Comparison of distance impact

[Figure: ABC samples of θ1 (left) and θ2 (right) as the tolerance decreases.]

Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model

Page 67: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

Homonymy

The ABC algorithm is not to be confused with the ABC algorithm

The Artificial Bee Colony algorithm is a swarm based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behaviour of honey bee colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources (...) close to their hive. The model also defines two leading modes of behaviour (...): recruitment of foragers to rich food sources resulting in positive feedback and abandonment of poor sources by foragers causing negative feedback.

[Karaboga, Scholarpedia]

Page 68: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

ABC advances

Simulating from the prior is often poor in efficiency

Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...

[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]

...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε

[Beaumont et al., 2002]

...or even by including ε in the inferential framework [ABCµ]

[Ratmann et al., 2009]

Page 69: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

ABC advances

Simulating from the prior is often poor in efficiency

Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...

[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]

...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε

[Beaumont et al., 2002]

...or even by including ε in the inferential framework [ABCµ]

[Ratmann et al., 2009]

Page 70: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

ABC advances

Simulating from the prior is often poor in efficiency

Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...

[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]

...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε

[Beaumont et al., 2002]

...or even by including ε in the inferential framework [ABCµ]

[Ratmann et al., 2009]

Page 71: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

ABC basics

ABC advances

Simulating from the prior is often poor in efficiency

Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...

[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]

...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger ε

[Beaumont et al., 2002]

...or even by including ε in the inferential framework [ABCµ]

[Ratmann et al., 2009]

Page 72: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP

Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace θ's with locally regressed transforms

(use with BIC)

$$\theta^* = \theta - \{\eta(\mathbf{z}) - \eta(\mathbf{y})\}^{\mathrm T}\hat\beta$$

[Csilléry et al., TEE, 2010]

where $\hat\beta$ is obtained by [NP] weighted least square regression on (η(z) − η(y)) with weights
$$K_\delta\{\rho(\eta(\mathbf{z}), \eta(\mathbf{y}))\}$$

[Beaumont et al., 2002, Genetics]
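A minimal R sketch of this local regression adjustment with an Epanechnikov-type kernel (the helper name abc_adjust, the toy normal-mean setup and all tuning values are illustrative assumptions, not the reference implementation):

R sketch
# local linear regression adjustment of ABC output (Beaumont et al., 2002 flavour)
# theta: simulated parameters, s: matrix of simulated summaries (one row each),
# sobs: observed summaries, delta: bandwidth/tolerance
abc_adjust <- function(theta, s, sobs, delta) {
  d <- sqrt(rowSums(sweep(s, 2, sobs)^2))          # rho(eta(z), eta(y))
  w <- ifelse(d < delta, 1 - (d/delta)^2, 0)       # Epanechnikov-type weights K_delta
  keep <- w > 0
  X <- sweep(s[keep, , drop = FALSE], 2, sobs)     # eta(z) - eta(y)
  fit <- lm(theta[keep] ~ X, weights = w[keep])    # weighted least squares
  theta[keep] - X %*% coef(fit)[-1]                # theta* = theta - {eta(z)-eta(y)}^T beta-hat
}
# toy usage: normal mean, summaries = (mean, variance) of simulated samples
n <- 30; yobs <- rnorm(n, 1)
th <- rnorm(5e3, 0, 5)
s <- t(sapply(th, function(t) { z <- rnorm(n, t); c(mean(z), var(z)) }))
adj <- abc_adjust(th, s, c(mean(yobs), var(yobs)), delta = 1)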

Page 73: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (regression)

Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight directly simulation by
$$K_\delta\{\rho(\eta(\mathbf{z}(\theta)), \eta(\mathbf{y}))\}$$
or
$$\frac{1}{S}\sum_{s=1}^S K_\delta\{\rho(\eta(\mathbf{z}^s(\theta)), \eta(\mathbf{y}))\}$$
[consistent estimate of f(η|θ)]

Curse of dimensionality: poor estimate when d = dim(η) is large...

Page 74: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (regression)

Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight directly simulation by
$$K_\delta\{\rho(\eta(\mathbf{z}(\theta)), \eta(\mathbf{y}))\}$$
or
$$\frac{1}{S}\sum_{s=1}^S K_\delta\{\rho(\eta(\mathbf{z}^s(\theta)), \eta(\mathbf{y}))\}$$
[consistent estimate of f(η|θ)]

Curse of dimensionality: poor estimate when d = dim(η) is large...

Page 75: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (density estimation)

Use of the kernel weights

Kδ {ρ(η(z(θ)), η(y))}

leads to the NP estimate of the posterior expectation
$$\frac{\sum_i \theta_i\, K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}{\sum_i K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}$$

[Blum, JASA, 2010]
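A minimal R sketch of this kernel-weighted (Nadaraya-Watson) estimate on a toy normal-mean problem (all names and tuning values are illustrative assumptions):

R sketch
# kernel-weighted estimate of E[theta | y] from ABC simulations (toy sketch)
n <- 30; yobs <- rnorm(n, 1)
th <- rnorm(5e3, 0, 5)
d  <- abs(sapply(th, function(t) mean(rnorm(n, t))) - mean(yobs))  # rho(eta(z), eta(y))
w  <- pmax(1 - (d/0.2)^2, 0)                                       # Epanechnikov K_delta
sum(th * w) / sum(w)                                               # estimated posterior mean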

Page 76: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (density estimation)

Use of the kernel weights

Kδ {ρ(η(z(θ)), η(y))}

leads to the NP estimate of the posterior conditional density
$$\frac{\sum_i K_b(\theta_i - \theta)\, K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}{\sum_i K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}$$

[Blum, JASA, 2010]

Page 77: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (density estimations)

Other versions incorporating regression adjustments
$$\frac{\sum_i K_b(\theta^*_i - \theta)\, K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}{\sum_i K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}$$
In all cases, error
$$\mathbb{E}[\hat g(\theta|\mathbf{y})] - g(\theta|\mathbf{y}) = cb^2 + c\delta^2 + O_P(b^2+\delta^2) + O_P(1/n\delta^d)$$
$$\mathrm{var}(\hat g(\theta|\mathbf{y})) = \frac{c}{nb\delta^d}(1 + o_P(1))$$

Page 78: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (density estimations)

Other versions incorporating regression adjustments
$$\frac{\sum_i K_b(\theta^*_i - \theta)\, K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}{\sum_i K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}$$
In all cases, error
$$\mathbb{E}[\hat g(\theta|\mathbf{y})] - g(\theta|\mathbf{y}) = cb^2 + c\delta^2 + O_P(b^2+\delta^2) + O_P(1/n\delta^d)$$
$$\mathrm{var}(\hat g(\theta|\mathbf{y})) = \frac{c}{nb\delta^d}(1 + o_P(1))$$

[Blum, JASA, 2010]

Page 79: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NP (density estimations)

Other versions incorporating regression adjustments
$$\frac{\sum_i K_b(\theta^*_i - \theta)\, K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}{\sum_i K_\delta\{\rho(\eta(\mathbf{z}(\theta_i)), \eta(\mathbf{y}))\}}$$
In all cases, error
$$\mathbb{E}[\hat g(\theta|\mathbf{y})] - g(\theta|\mathbf{y}) = cb^2 + c\delta^2 + O_P(b^2+\delta^2) + O_P(1/n\delta^d)$$
$$\mathrm{var}(\hat g(\theta|\mathbf{y})) = \frac{c}{nb\delta^d}(1 + o_P(1))$$

[standard NP calculations]

Page 80: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NCH

Incorporating non-linearities and heteroscedasticities:
$$\theta^* = \hat m(\eta(\mathbf{y})) + [\theta - \hat m(\eta(\mathbf{z}))]\,\frac{\hat\sigma(\eta(\mathbf{y}))}{\hat\sigma(\eta(\mathbf{z}))}$$
where

I $\hat m(\eta)$ estimated by non-linear regression (e.g., neural network)

I $\hat\sigma(\eta)$ estimated by non-linear regression on residuals
$$\log\{\theta_i - \hat m(\eta_i)\}^2 = \log\sigma^2(\eta_i) + \xi_i$$

[Blum & Francois, 2009]

Page 81: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NCH

Incorporating non-linearities and heteroscedasticities:
$$\theta^* = \hat m(\eta(\mathbf{y})) + [\theta - \hat m(\eta(\mathbf{z}))]\,\frac{\hat\sigma(\eta(\mathbf{y}))}{\hat\sigma(\eta(\mathbf{z}))}$$
where

I $\hat m(\eta)$ estimated by non-linear regression (e.g., neural network)

I $\hat\sigma(\eta)$ estimated by non-linear regression on residuals
$$\log\{\theta_i - \hat m(\eta_i)\}^2 = \log\sigma^2(\eta_i) + \xi_i$$

[Blum & Francois, 2009]

Page 82: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NCH (2)

Why neural network?

I fights curse of dimensionality

I selects relevant summary statistics

I provides automated dimension reduction

I offers a model choice capability

I improves upon multinomial logistic

[Blum & Francois, 2009]

Page 83: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-NCH (2)

Why neural network?

I fights curse of dimensionality

I selects relevant summary statistics

I provides automated dimension reduction

I offers a model choice capability

I improves upon multinomial logistic

[Blum & Francois, 2009]

Page 84: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-MCMC

Markov chain (θ(t)) created via the transition function
$$\theta^{(t+1)} = \begin{cases} \theta' \sim K_\omega(\theta'|\theta^{(t)}) & \text{if } x \sim f(x|\theta') \text{ is such that } x = y\\ & \text{and } u \sim \mathcal{U}(0,1) \le \dfrac{\pi(\theta')K_\omega(\theta^{(t)}|\theta')}{\pi(\theta^{(t)})K_\omega(\theta'|\theta^{(t)})}\,,\\[4pt] \theta^{(t)} & \text{otherwise,}\end{cases}$$
has the posterior π(θ|y) as stationary distribution

[Marjoram et al, 2003]

Page 85: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-MCMC

Markov chain (θ(t)) created via the transition function
$$\theta^{(t+1)} = \begin{cases} \theta' \sim K_\omega(\theta'|\theta^{(t)}) & \text{if } x \sim f(x|\theta') \text{ is such that } x = y\\ & \text{and } u \sim \mathcal{U}(0,1) \le \dfrac{\pi(\theta')K_\omega(\theta^{(t)}|\theta')}{\pi(\theta^{(t)})K_\omega(\theta'|\theta^{(t)})}\,,\\[4pt] \theta^{(t)} & \text{otherwise,}\end{cases}$$
has the posterior π(θ|y) as stationary distribution

[Marjoram et al, 2003]

Page 86: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-MCMC (2)

Algorithm 2 Likelihood-free MCMC sampler

Use Algorithm 1 to get (θ(0), z(0))
for t = 1 to N do
  Generate θ′ from Kω(·|θ(t−1)),
  Generate z′ from the likelihood f(·|θ′),
  Generate u from U[0,1],
  if $u \le \dfrac{\pi(\theta')K_\omega(\theta^{(t-1)}|\theta')}{\pi(\theta^{(t-1)})K_\omega(\theta'|\theta^{(t-1)})}\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}')$ then
    set (θ(t), z(t)) = (θ′, z′)
  else
    (θ(t), z(t)) = (θ(t−1), z(t−1)),
  end if
end for

Page 87: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Why does it work?

Acceptance probability does not involve calculating the likelihood and
$$\frac{\pi_\varepsilon(\theta',\mathbf{z}'|\mathbf{y})}{\pi_\varepsilon(\theta^{(t-1)},\mathbf{z}^{(t-1)}|\mathbf{y})}\times\frac{q(\theta^{(t-1)}|\theta')\,f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}{q(\theta'|\theta^{(t-1)})\,f(\mathbf{z}'|\theta')} = \frac{\pi(\theta')\,\cancel{f(\mathbf{z}'|\theta')}\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}')}{\pi(\theta^{(t-1)})\,f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}^{(t-1)})}\times\frac{q(\theta^{(t-1)}|\theta')\,f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}{q(\theta'|\theta^{(t-1)})\,\cancel{f(\mathbf{z}'|\theta')}}$$

Page 88: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Why does it work?

Acceptance probability does not involve calculating the likelihood and
$$\frac{\pi_\varepsilon(\theta',\mathbf{z}'|\mathbf{y})}{\pi_\varepsilon(\theta^{(t-1)},\mathbf{z}^{(t-1)}|\mathbf{y})}\times\frac{q(\theta^{(t-1)}|\theta')\,f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}{q(\theta'|\theta^{(t-1)})\,f(\mathbf{z}'|\theta')} = \frac{\pi(\theta')\,\cancel{f(\mathbf{z}'|\theta')}\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}')}{\pi(\theta^{(t-1)})\,\cancel{f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}^{(t-1)})}\times\frac{q(\theta^{(t-1)}|\theta')\,\cancel{f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}}{q(\theta'|\theta^{(t-1)})\,\cancel{f(\mathbf{z}'|\theta')}}$$

Page 89: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Why does it work?

Acceptance probability does not involve calculating the likelihood and
$$\frac{\pi_\varepsilon(\theta',\mathbf{z}'|\mathbf{y})}{\pi_\varepsilon(\theta^{(t-1)},\mathbf{z}^{(t-1)}|\mathbf{y})}\times\frac{q(\theta^{(t-1)}|\theta')\,f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}{q(\theta'|\theta^{(t-1)})\,f(\mathbf{z}'|\theta')} = \frac{\pi(\theta')\,\cancel{f(\mathbf{z}'|\theta')}\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}')}{\pi(\theta^{(t-1)})\,\cancel{f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}\,\cancel{\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}^{(t-1)})}}\times\frac{q(\theta^{(t-1)}|\theta')\,\cancel{f(\mathbf{z}^{(t-1)}|\theta^{(t-1)})}}{q(\theta'|\theta^{(t-1)})\,\cancel{f(\mathbf{z}'|\theta')}}$$
$$= \frac{\pi(\theta')\,q(\theta^{(t-1)}|\theta')}{\pi(\theta^{(t-1)})\,q(\theta'|\theta^{(t-1)})}\,\mathbb{I}_{A_{\varepsilon,\mathbf{y}}}(\mathbf{z}')$$

Page 90: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

Case of
$$x \sim \frac{1}{2}\mathcal{N}(\theta, 1) + \frac{1}{2}\mathcal{N}(-\theta, 1)$$
under prior θ ∼ N(0, 10)

ABC sampler
thetas=rnorm(N,sd=10)
zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)
eps=quantile(abs(zed-x),.01)
abc=thetas[abs(zed-x)<eps]

Page 91: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

Case of
$$x \sim \frac{1}{2}\mathcal{N}(\theta, 1) + \frac{1}{2}\mathcal{N}(-\theta, 1)$$
under prior θ ∼ N(0, 10)

ABC sampler
thetas=rnorm(N,sd=10)
zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)
eps=quantile(abs(zed-x),.01)
abc=thetas[abs(zed-x)<eps]
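As a hedged sanity check (assuming x, N and the vector abc produced by the sampler above are in the workspace), the ABC output can be compared with the exact posterior of this toy mixture:

R sketch
# exact posterior density of the toy mixture, up to a constant
post <- function(t) dnorm(t, sd = 10) * (0.5*dnorm(x - t) + 0.5*dnorm(x + t))
ts <- seq(min(abc) - 1, max(abc) + 1, le = 500)
hist(abc, prob = TRUE, breaks = 50, col = "grey")
lines(ts, post(ts) / integrate(post, -Inf, Inf)$value, lwd = 2)   # normalised exact posterior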

Page 92: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

Case of
$$x \sim \frac{1}{2}\mathcal{N}(\theta, 1) + \frac{1}{2}\mathcal{N}(-\theta, 1)$$
under prior θ ∼ N(0, 10)

ABC-MCMC sampler
metas=rep(0,N)
metas[1]=rnorm(1,sd=10)
zed[1]=x
for (t in 2:N){
  metas[t]=rnorm(1,mean=metas[t-1],sd=5)
  zed[t]=rnorm(1,mean=(1-2*(runif(1)<.5))*metas[t],sd=1)
  if ((abs(zed[t]-x)>eps)||(runif(1)>dnorm(metas[t],sd=10)/dnorm(metas[t-1],sd=10))){
    metas[t]=metas[t-1]
    zed[t]=zed[t-1]}
}

Page 93: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

x = 2

Page 94: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

x = 2

[Figure: approximate posterior density of θ for x = 2.]

Page 95: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

x = 10

Page 96: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

x = 10

[Figure: approximate posterior density of θ for x = 10.]

Page 97: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

x = 50

Page 98: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A toy example

x = 50

[Figure: approximate posterior density of θ for x = 50.]

Page 99: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABCµ

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Use of a joint density

f (θ, ε|y) ∝ ξ(ε|y, θ)× πθ(θ)× πε(ε)

where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)

Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.

Page 100: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABCµ

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Use of a joint density

f (θ, ε|y) ∝ ξ(ε|y, θ)× πθ(θ)× πε(ε)

where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)

Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.

Page 101: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABCµ details

Multidimensional distances ρk (k = 1, . . . , K) and errors εk = ρk(ηk(z), ηk(y)), with
$$\varepsilon_k \sim \xi_k(\varepsilon|\mathbf{y},\theta) \approx \widehat{\xi_k}(\varepsilon|\mathbf{y},\theta) = \frac{1}{Bh_k}\sum_b K[\{\varepsilon_k-\rho_k(\eta_k(\mathbf{z}_b),\eta_k(\mathbf{y}))\}/h_k]$$
then used in replacing ξ(ε|y, θ) with $\min_k \widehat{\xi_k}(\varepsilon|\mathbf{y},\theta)$

ABCµ involves acceptance probability
$$\frac{\pi(\theta',\varepsilon')}{\pi(\theta,\varepsilon)}\;\frac{q(\theta',\theta)\,q(\varepsilon',\varepsilon)}{q(\theta,\theta')\,q(\varepsilon,\varepsilon')}\;\frac{\min_k \widehat{\xi_k}(\varepsilon'|\mathbf{y},\theta')}{\min_k \widehat{\xi_k}(\varepsilon|\mathbf{y},\theta)}$$

Page 102: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABCµ details

Multidimensional distances ρk (k = 1, . . . , K) and errors εk = ρk(ηk(z), ηk(y)), with
$$\varepsilon_k \sim \xi_k(\varepsilon|\mathbf{y},\theta) \approx \widehat{\xi_k}(\varepsilon|\mathbf{y},\theta) = \frac{1}{Bh_k}\sum_b K[\{\varepsilon_k-\rho_k(\eta_k(\mathbf{z}_b),\eta_k(\mathbf{y}))\}/h_k]$$
then used in replacing ξ(ε|y, θ) with $\min_k \widehat{\xi_k}(\varepsilon|\mathbf{y},\theta)$

ABCµ involves acceptance probability
$$\frac{\pi(\theta',\varepsilon')}{\pi(\theta,\varepsilon)}\;\frac{q(\theta',\theta)\,q(\varepsilon',\varepsilon)}{q(\theta,\theta')\,q(\varepsilon,\varepsilon')}\;\frac{\min_k \widehat{\xi_k}(\varepsilon'|\mathbf{y},\theta')}{\min_k \widehat{\xi_k}(\varepsilon|\mathbf{y},\theta)}$$

Page 103: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABCµ multiple errors

[© Ratmann et al., PNAS, 2009]

Page 104: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABCµ for model choice

[© Ratmann et al., PNAS, 2009]

Page 105: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Questions about ABCµ

For each model under comparison, marginal posterior on ε used toassess the fit of the model (HPD includes 0 or not).

I Is the data informative about ε? [Identifiability]

I How is the prior π(ε) impacting the comparison?

I How is using both ξ(ε|x0, θ) and πε(ε) compatible with a standard probability model? [reminiscent of Wilkinson]

I Where is the penalisation for complexity in the model comparison?

[X, Mengersen & Chen, 2010, PNAS]

Page 106: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Questions about ABCµ

For each model under comparison, marginal posterior on ε used toassess the fit of the model (HPD includes 0 or not).

I Is the data informative about ε? [Identifiability]

I How is the prior π(ε) impacting the comparison?

I How is using both ξ(ε|x0, θ) and πε(ε) compatible with a standard probability model? [reminiscent of Wilkinson]

I Where is the penalisation for complexity in the model comparison?

[X, Mengersen & Chen, 2010, PNAS]

Page 107: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC

Another sequential version producing a sequence of Markov transition kernels Kt and of samples (θ(t)_1, . . . , θ(t)_N) (1 ≤ t ≤ T)

ABC-PRC Algorithm

1. Select θ⋆ at random from previous θ(t−1)_i's with probabilities ω(t−1)_i (1 ≤ i ≤ N).

2. Generate
$$\theta^{(t)}_i \sim K_t(\theta|\theta^\star)\,, \qquad x \sim f(x|\theta^{(t)}_i)\,,$$

3. Check that ϱ(x, y) < ε, otherwise start again.

[Sisson et al., 2007]

Page 108: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC

Another sequential version producing a sequence of Markov transition kernels Kt and of samples (θ(t)_1, . . . , θ(t)_N) (1 ≤ t ≤ T)

ABC-PRC Algorithm

1. Select θ⋆ at random from previous θ(t−1)_i's with probabilities ω(t−1)_i (1 ≤ i ≤ N).

2. Generate
$$\theta^{(t)}_i \sim K_t(\theta|\theta^\star)\,, \qquad x \sim f(x|\theta^{(t)}_i)\,,$$

3. Check that ϱ(x, y) < ε, otherwise start again.

[Sisson et al., 2007]

Page 109: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Why PRC?

Partial rejection control: Resample from a population of weighted particles by pruning away particles with weights below threshold C, replacing them by new particles obtained by propagating an existing particle by an SMC step and modifying the weights accordingly.

[Liu, 2001]

PRC justification in ABC-PRC:

Suppose we then implement the PRC algorithm for some c > 0 such that only identically zero weights are smaller than c

Trouble is, there is no such c ...

Page 110: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Why PRC?

Partial rejection control: Resample from a population of weighted particles by pruning away particles with weights below threshold C, replacing them by new particles obtained by propagating an existing particle by an SMC step and modifying the weights accordingly.

[Liu, 2001]

PRC justification in ABC-PRC:

Suppose we then implement the PRC algorithm for some c > 0 such that only identically zero weights are smaller than c

Trouble is, there is no such c ...

Page 111: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC weight

Probability ω(t)_i computed as
$$\omega^{(t)}_i \propto \pi(\theta^{(t)}_i)\,L_{t-1}(\theta^\star|\theta^{(t)}_i)\,\{\pi(\theta^\star)\,K_t(\theta^{(t)}_i|\theta^\star)\}^{-1}\,,$$
where Lt−1 is an arbitrary transition kernel.

In case
$$L_{t-1}(\theta'|\theta) = K_t(\theta|\theta')\,,$$
all weights are equal under a uniform prior.

Inspired from Del Moral et al. (2006), who use backward kernels Lt−1 in SMC to achieve unbiasedness

Page 112: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC weight

Probability ω(t)_i computed as
$$\omega^{(t)}_i \propto \pi(\theta^{(t)}_i)\,L_{t-1}(\theta^\star|\theta^{(t)}_i)\,\{\pi(\theta^\star)\,K_t(\theta^{(t)}_i|\theta^\star)\}^{-1}\,,$$
where Lt−1 is an arbitrary transition kernel.

In case
$$L_{t-1}(\theta'|\theta) = K_t(\theta|\theta')\,,$$
all weights are equal under a uniform prior.

Inspired from Del Moral et al. (2006), who use backward kernels Lt−1 in SMC to achieve unbiasedness

Page 113: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC weight

Probability ω(t)_i computed as
$$\omega^{(t)}_i \propto \pi(\theta^{(t)}_i)\,L_{t-1}(\theta^\star|\theta^{(t)}_i)\,\{\pi(\theta^\star)\,K_t(\theta^{(t)}_i|\theta^\star)\}^{-1}\,,$$
where Lt−1 is an arbitrary transition kernel.

In case
$$L_{t-1}(\theta'|\theta) = K_t(\theta|\theta')\,,$$
all weights are equal under a uniform prior.

Inspired from Del Moral et al. (2006), who use backward kernels Lt−1 in SMC to achieve unbiasedness

Page 114: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC bias

Lack of unbiasedness of the method

Joint density of the accepted pair (θ(t−1), θ(t)) proportional to
$$\pi(\theta^{(t-1)}|\mathbf{y})\,K_t(\theta^{(t)}|\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t)})\,,$$
For an arbitrary function h(θ), E[ωt h(θ(t))] proportional to
$$\iint h(\theta^{(t)})\,\frac{\pi(\theta^{(t)})\,L_{t-1}(\theta^{(t-1)}|\theta^{(t)})}{\pi(\theta^{(t-1)})\,K_t(\theta^{(t)}|\theta^{(t-1)})}\,\pi(\theta^{(t-1)}|\mathbf{y})\,K_t(\theta^{(t)}|\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t)})\,\mathrm{d}\theta^{(t-1)}\,\mathrm{d}\theta^{(t)}$$
$$\propto \iint h(\theta^{(t)})\,\frac{\pi(\theta^{(t)})\,L_{t-1}(\theta^{(t-1)}|\theta^{(t)})}{\pi(\theta^{(t-1)})\,K_t(\theta^{(t)}|\theta^{(t-1)})}\,\pi(\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t-1)})\,K_t(\theta^{(t)}|\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t)})\,\mathrm{d}\theta^{(t-1)}\,\mathrm{d}\theta^{(t)}$$
$$\propto \int h(\theta^{(t)})\,\pi(\theta^{(t)}|\mathbf{y})\left\{\int L_{t-1}(\theta^{(t-1)}|\theta^{(t)})\,f(\mathbf{y}|\theta^{(t-1)})\,\mathrm{d}\theta^{(t-1)}\right\}\mathrm{d}\theta^{(t)}\,.$$

Page 115: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-PRC bias

Lack of unbiasedness of the method

Joint density of the accepted pair (θ(t−1), θ(t)) proportional to
$$\pi(\theta^{(t-1)}|\mathbf{y})\,K_t(\theta^{(t)}|\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t)})\,,$$
For an arbitrary function h(θ), E[ωt h(θ(t))] proportional to
$$\iint h(\theta^{(t)})\,\frac{\pi(\theta^{(t)})\,L_{t-1}(\theta^{(t-1)}|\theta^{(t)})}{\pi(\theta^{(t-1)})\,K_t(\theta^{(t)}|\theta^{(t-1)})}\,\pi(\theta^{(t-1)}|\mathbf{y})\,K_t(\theta^{(t)}|\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t)})\,\mathrm{d}\theta^{(t-1)}\,\mathrm{d}\theta^{(t)}$$
$$\propto \iint h(\theta^{(t)})\,\frac{\pi(\theta^{(t)})\,L_{t-1}(\theta^{(t-1)}|\theta^{(t)})}{\pi(\theta^{(t-1)})\,K_t(\theta^{(t)}|\theta^{(t-1)})}\,\pi(\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t-1)})\,K_t(\theta^{(t)}|\theta^{(t-1)})\,f(\mathbf{y}|\theta^{(t)})\,\mathrm{d}\theta^{(t-1)}\,\mathrm{d}\theta^{(t)}$$
$$\propto \int h(\theta^{(t)})\,\pi(\theta^{(t)}|\mathbf{y})\left\{\int L_{t-1}(\theta^{(t-1)}|\theta^{(t)})\,f(\mathbf{y}|\theta^{(t-1)})\,\mathrm{d}\theta^{(t-1)}\right\}\mathrm{d}\theta^{(t)}\,.$$

Page 116: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A mixture example (1)

Toy model of Sisson et al. (2007): if

$$\theta \sim \mathcal{U}(-10, 10)\,, \quad x|\theta \sim 0.5\,\mathcal{N}(\theta, 1) + 0.5\,\mathcal{N}(\theta, 1/100)\,,$$
then the posterior distribution associated with y = 0 is the normal mixture
$$\theta|y = 0 \sim 0.5\,\mathcal{N}(0, 1) + 0.5\,\mathcal{N}(0, 1/100)$$
restricted to [−10, 10].

Furthermore, true target available as
$$\pi(\theta\,\big|\,|x| < \varepsilon) \propto \Phi(\varepsilon-\theta) - \Phi(-\varepsilon-\theta) + \Phi(10(\varepsilon-\theta)) - \Phi(-10(\varepsilon+\theta))\,.$$

Page 117: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

“Ugly, squalid graph...”

[Figure: comparison of τ = 0.15 and τ = 1/0.15 in Kt.]

Page 118: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A PMC version

Use of the same kernel idea as ABC-PRC but with IS correction [check PMC]. Generate a sample at iteration t by
$$\pi_t(\theta^{(t)}) \propto \sum_{j=1}^N \omega^{(t-1)}_j K_t(\theta^{(t)}|\theta^{(t-1)}_j)$$
modulo acceptance of the associated xt, and use an importance weight associated with an accepted simulation θ(t)_i
$$\omega^{(t)}_i \propto \pi(\theta^{(t)}_i)\big/\pi_t(\theta^{(t)}_i)\,.$$
© Still likelihood free

[Beaumont et al., 2009]

Page 119: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Our ABC-PMC algorithm

Given a decreasing sequence of approximation levels ε1 ≥ . . . ≥ εT,

1. At iteration t = 1,
For i = 1, . . . , N
  Simulate θ(1)_i ∼ π(θ) and x ∼ f(x|θ(1)_i) until ϱ(x, y) < ε1
  Set ω(1)_i = 1/N
Take τ² as twice the empirical variance of the θ(1)_i's

2. At iteration 2 ≤ t ≤ T,
For i = 1, . . . , N, repeat
  Pick θ⋆_i from the θ(t−1)_j's with probabilities ω(t−1)_j
  generate θ(t)_i | θ⋆_i ∼ N(θ⋆_i, τ²_t) and x ∼ f(x|θ(t)_i)
until ϱ(x, y) < εt
Set $\omega^{(t)}_i \propto \pi(\theta^{(t)}_i)\Big/\sum_{j=1}^N \omega^{(t-1)}_j\,\varphi\big(\tau_t^{-1}\{\theta^{(t)}_i - \theta^{(t-1)}_j\}\big)$
Take τ²_{t+1} as twice the weighted empirical variance of the θ(t)_i's
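A compact R sketch of this ABC-PMC scheme on the toy mixture of Sisson et al. (2007) used later in these slides (the tolerance sequence, N and all other tuning values are illustrative assumptions):

R sketch
# ABC-PMC sketch for the toy mixture x|theta ~ .5 N(theta,1) + .5 N(theta,1/100), prior U(-10,10)
rdata <- function(theta) theta + ifelse(runif(1) < .5, rnorm(1, sd = 1), rnorm(1, sd = .1))
y <- 0; N <- 1000; eps <- c(2, 1, 0.5, 0.1)
theta <- numeric(N)
for (i in 1:N) {                        # iteration t = 1: sample from the prior
  repeat { p <- runif(1, -10, 10); if (abs(rdata(p) - y) < eps[1]) break }
  theta[i] <- p
}
w <- rep(1/N, N)
tau2 <- 2 * var(theta)                  # twice the empirical variance
for (t in 2:length(eps)) {
  thnew <- numeric(N)
  for (i in 1:N) {
    repeat {                            # move a resampled particle until acceptance
      star <- sample(theta, 1, prob = w)
      prop <- rnorm(1, star, sqrt(tau2))
      if (abs(prop) <= 10 && abs(rdata(prop) - y) < eps[t]) break
    }
    thnew[i] <- prop
  }
  # importance weights: flat prior over [-10,10] divided by the mixture proposal
  w <- 1 / sapply(thnew, function(p) sum(w * dnorm(p, theta, sqrt(tau2))))
  w <- w / sum(w)
  tau2 <- 2 * sum(w * (thnew - sum(w * thnew))^2)   # twice the weighted empirical variance
  theta <- thnew
}
hist(theta, prob = TRUE)                # compare with the true mixture posterior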

Page 120: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Sequential Monte Carlo

SMC is a simulation technique to approximate a sequence of related probability distributions πn with π0 "easy" and πT as target.

Iterated IS as PMC: particles moved from time n − 1 to time n via kernel Kn and use of a sequence of extended targets π̃n
$$\widetilde\pi_n(\mathbf{z}_{0:n}) = \pi_n(\mathbf{z}_n)\prod_{j=0}^{n-1} L_j(\mathbf{z}_{j+1},\mathbf{z}_j)$$
where the Lj's are backward Markov kernels [check that πn(zn) is a marginal]

[Del Moral, Doucet & Jasra, Series B, 2006]

Page 121: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Sequential Monte Carlo (2)

Algorithm 3 SMC sampler

sample z(0)_i ∼ γ0(x) (i = 1, . . . , N)
compute weights w(0)_i = π0(z(0)_i)/γ0(z(0)_i)
for t = 1 to N do
  if ESS(w(t−1)) < NT then
    resample N particles z(t−1) and set weights to 1
  end if
  generate z(t)_i ∼ Kt(z(t−1)_i, ·) and set weights to
  $$w^{(t)}_i = w^{(t-1)}_i\,\frac{\pi_t(z^{(t)}_i)\,L_{t-1}(z^{(t)}_i, z^{(t-1)}_i)}{\pi_{t-1}(z^{(t-1)}_i)\,K_t(z^{(t-1)}_i, z^{(t)}_i)}$$
end for

[Del Moral, Doucet & Jasra, Series B, 2006]

Page 122: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-SMC

[Del Moral, Doucet & Jasra, 2009]

True derivation of an SMC-ABC algorithm

Use of a kernel Kn associated with target πεn and derivation of the backward kernel
$$L_{n-1}(z, z') = \frac{\pi_{\varepsilon_n}(z')\,K_n(z', z)}{\pi_{\varepsilon_n}(z)}$$
Update of the weights
$$w_{in} \propto w_{i(n-1)}\,\frac{\sum_{m=1}^M \mathbb{I}_{A_{\varepsilon_n}}(x^m_{in})}{\sum_{m=1}^M \mathbb{I}_{A_{\varepsilon_{n-1}}}(x^m_{i(n-1)})}$$
when $x^m_{in} \sim K(x_{i(n-1)}, \cdot)$

Page 123: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-SMCM

Modification: make M repeated simulations of the pseudo-data z given the parameter, rather than a single [M = 1] simulation, leading to a weight proportional to the number of accepted zi's

ω(θ) = (1/M) Σ_{i=1}^M I_{ρ(η(y), η(zi)) < ε}

[limit in M means exact simulation from (tempered) target]
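A toy Python illustration of this weight, assuming a scalar model z ∼ N(θ, 1), summary η(z) = z and Euclidean distance; the model, tolerance and M are illustrative only.

import numpy as np

def abc_smcm_weight(theta, y_obs, M=50, eps=0.5, rng=np.random.default_rng(1)):
    """Proportion of the M pseudo-data sets falling within eps of the observation."""
    z = rng.normal(theta, 1.0, size=M)        # M repeated simulations given theta
    return np.mean(np.abs(z - y_obs) < eps)   # (1/M) * sum of indicator functions

print(abc_smcm_weight(theta=0.3, y_obs=0.0))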

Page 124: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Properties of ABC-SMC

The ABC-SMC method properly uses a backward kernel L(z, z′) to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. The update of the importance weights reduces to the ratio of the proportions of surviving particles.
Major assumption: the forward kernel K is supposed to be invariant against the true target [tempered version of the true posterior]

Adaptivity in the ABC-SMC algorithm is only found in the on-line construction of the thresholds εt, decreased slowly enough to keep a large number of accepted transitions


Page 126: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A mixture example (2)

Recovery of the target, whether using a fixed standard deviation of τ = 0.15 or τ = 1/0.15, or a sequence of adaptive τt's.

[Figure: density estimates of the target under the three choices of τ]

Page 127: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Yet another ABC-PRC

Another version of ABC-PRC called PRC-ABC, with a proposal distribution Mt, S replications of the pseudo-data, and a PRC step:

PRC-ABC Algorithm

1. Select θ* at random from the θ_i^{(t−1)}'s with probabilities ω_i^{(t−1)}

2. Generate θ_i^{(t)} ∼ Kt, x1, . . . , xS iid ∼ f(x | θ_i^{(t−1)}), set

   ω_i^{(t)} = Σ_s Kt{η(xs) − η(y)} / Kt(θ_i^{(t)}) ,

   and accept with probability p(i) = ω_i^{(t)} / c_{t,N}

3. Set ω_i^{(t)} ∝ ω_i^{(t)} / p(i) (1 ≤ i ≤ N)

[Peters, Sisson & Fan, Stat & Computing, 2012]

Page 128: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Yet another ABC-PRC

Another version of ABC-PRC called PRC-ABC, with a proposal distribution Mt, S replications of the pseudo-data, and a PRC step:

I Use of a NP estimate Mt(θ) = Σ ω_i^{(t−1)} ψ(θ | θ_i^{(t−1)}) (with a fixed variance?!)

I S = 1 recommended for "greatest gains in sampler performance"

I c_{t,N} upper quantile of non-zero weights at time t

Page 129: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Wilkinson’s exact BC

ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function

πε(θ, z | y) = π(θ) f(z | θ) Kε(y − z) / ∫ π(θ) f(z | θ) Kε(y − z) dz dθ ,

with Kε a kernel parameterised by the bandwidth ε.
[Wilkinson, 2008]

Theorem

The ABC algorithm based on the assumption of a randomised observation y = z + ξ, ξ ∼ Kε, and an acceptance probability of

Kε(y − z)/M

gives draws from the posterior distribution π(θ | y).
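A schematic Python version of this accept step, for a scalar model y ∼ N(θ, 1) with a Gaussian kernel Kε (so M = Kε(0) bounds the kernel); the prior, tolerance and sample sizes are illustrative assumptions.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y_obs, eps, n_keep = 0.0, 0.5, 2000
M = norm.pdf(0, scale=eps)                    # bound on the kernel K_eps

samples = []
while len(samples) < n_keep:
    theta = rng.normal(0, 10)                 # prior draw (illustrative)
    z = rng.normal(theta, 1)                  # pseudo-data from the model
    if rng.uniform() < norm.pdf(y_obs - z, scale=eps) / M:
        samples.append(theta)                 # draw from the posterior under y = z + K_eps noise

print(np.mean(samples))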


Page 131: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

How exact a BC?

“Using ε to represent measurement error isstraightforward, whereas using ε to model the modeldiscrepancy is harder to conceptualize and not ascommonly used”

[Richard Wilkinson, 2008]

Page 132: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

How exact a BC?

Pros

I Pseudo-data from true model and observed data from noisymodel

I Interesting perspective in that outcome is completelycontrolled

I Link with ABCµ and assuming y is observed with ameasurement error with density Kε

I Relates to the theory of model approximation[Kennedy & O’Hagan, 2001]

Cons

I Requires Kε to be bounded by M

I True approximation error never assessed

I Requires a modification of the standard ABC algorithm

Page 133: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC for HMMs

Specific case of a hidden Markov model

Xt+1 ∼ Qθ(Xt, ·)
Yt+1 ∼ gθ(· | xt+1)

where only y°_{1:n} is observed.
[Dean, Singh, Jasra, & Peters, 2011]

Use of specific constraints, adapted to the Markov structure:

{y1 ∈ B(y°1, ε)} × · · · × {yn ∈ B(y°n, ε)}


Page 135: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-MLE for HMMs

ABC-MLE defined by

θ̂εn = arg maxθ Pθ(Y1 ∈ B(y°1, ε), . . . , Yn ∈ B(y°n, ε))

Exact MLE for the likelihood — same basis as Wilkinson! —

pεθ(y°1, . . . , y°n)

corresponding to the perturbed process

(xt, yt + εzt)_{1≤t≤n} , zt ∼ U(B(0, 1))

[Dean, Singh, Jasra, & Peters, 2011]


Page 137: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

ABC-MLE is biased

I ABC-MLE is asymptotically (in n) biased with target

l ε(θ) = Eθ∗ [log pεθ(Y1|Y−∞:0)]

I but ABC-MLE converges to true value in the sense

l εn(θn)→ l ε(θ)

for all sequences (θn) converging to θ and εn ↘ ε


Page 139: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Noisy ABC-MLE

Idea: modify instead the data from the start

(y°1 + εζ1, . . . , y°n + εζn)

[see Fearnhead-Prangle] and noisy ABC-MLE estimate

arg maxθ Pθ(Y1 ∈ B(y°1 + εζ1, ε), . . . , Yn ∈ B(y°n + εζn, ε))

[Dean, Singh, Jasra, & Peters, 2011]

Page 140: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Consistent noisy ABC-MLE

Degrading the data improves the estimation performances:

I Noisy ABC-MLE is asymptotically (in n) consistent

I under further assumptions, the noisy ABC-MLE is asymptotically normal

I increase in variance of order ε−2

I likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]

Page 141: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

SMC for ABC likelihood

Algorithm 4 SMC ABC for HMMs

Given θ
for k = 1, . . . , n do
  generate proposals (x_k^1, y_k^1), . . . , (x_k^N, y_k^N) from the model
  weigh each proposal with ω_k^l = I_{B(y°k + εζk, ε)}(y_k^l)
  renormalise the weights and sample the x_k^l's accordingly
end for
approximate the likelihood by

∏_{k=1}^n ( Σ_{l=1}^N ω_k^l / N )

[Jasra, Singh, Martin, & McCoy, 2010]
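A compact Python sketch of this likelihood approximation for a toy Gaussian random-walk HMM (Xt+1 = Xt + noise, Yt = Xt + noise); the model, tolerance and noise terms ζk are illustrative choices, not from the slides.

import numpy as np

def smc_abc_hmm_lik(theta, y0, eps=0.5, N=500, rng=np.random.default_rng(3)):
    """ABC approximation of the likelihood of a Gaussian random-walk HMM (illustrative)."""
    n = len(y0)
    zeta = rng.uniform(-1, 1, size=n)             # noise perturbing the observed series
    x = np.zeros(N)
    lik = 1.0
    for k in range(n):
        x = x + rng.normal(0, theta, size=N)      # propagate hidden states X_{k+1} ~ Q_theta(X_k, .)
        y = x + rng.normal(0, 1, size=N)          # proposals for the observations
        w = (np.abs(y - (y0[k] + eps * zeta[k])) < eps).astype(float)
        if w.sum() == 0:
            return 0.0                            # all proposals rejected: approximate likelihood is zero
        lik *= w.mean()                           # factor (sum_l w_l) / N
        x = rng.choice(x, size=N, p=w / w.sum())  # resample hidden states among accepted ones
    return lik

y_obs = np.cumsum(np.random.default_rng(4).normal(0, 1, 20))
print(smc_abc_hmm_lik(1.0, y_obs))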

Page 142: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

Which summary?

Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic [except when done by the experimenters in the field]

Starting from a large collection of available summary statistics, Joyce and Marjoram (2008) consider their sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test

I Not taking into account the sequential nature of the tests

I Depends on parameterisation

I Order of inclusion matters


Page 145: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Alphabet soup

A connected Monte Carlo study

I Repeating simulations of the pseudo-data per simulated parameter does not improve the approximation

I The tolerance level ε does not seem to be highly influential

I The choice of distance / summary statistics / calibration factors is paramount to a successful approximation

I ABC-SMC outperforms ABC-MCMC

[Mckinley, Cook, Deardon, 2009]

Page 146: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Semi-automatic ABC

Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal

ABC then considered from a purely inferential viewpoint and calibrated for estimation purposes
Use of a randomised (or 'noisy') version of the summary statistics

η̃(y) = η(y) + τε

Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic [calibration constraint: ABC approximation with the same posterior mean as the true randomised posterior]


Page 148: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Summary statistics

I Optimality of the posterior expectation E[θ|y] of theparameter of interest as summary statistics η(y)!

I Use of the standard quadratic loss function

(θ − θ0)TA(θ − θ0) .


Page 150: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Details on Fearnhead and Prangle (F&P) ABC

Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that

(θ, ysim) ∼ g(θ) f(ysim | θ)

is accepted with probability (hence the bound)

K[{S(ysim) − sobs}/h]

and the corresponding importance weight defined by

π(θ) / g(θ)

[Fearnhead & Prangle, 2012]

Page 151: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Errors, errors, and errors

Three levels of approximation

I π(θ | yobs) by π(θ | sobs): loss of information [ignored]

I π(θ | sobs) by

  πABC(θ | sobs) = ∫ π(s) K[{s − sobs}/h] π(θ | s) ds / ∫ π(s) K[{s − sobs}/h] ds

  : noisy observations

I πABC(θ | sobs) by importance Monte Carlo based on N simulations, represented by var(a(θ) | sobs)/Nacc [expected number of acceptances]

[M. Twain/B. Disraeli]

Page 152: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Average acceptance asymptotics

For the average acceptance probability/approximate likelihood

p(θ|sobs) =

∫f (ysim|θ) K [{S(ysim)− sobs}/h] dysim ,

overall acceptance probability

p(sobs) =

∫p(θ|sobs)π(θ) dθ = π(sobs)hd + o(hd)

[F&P, Lemma 1]

Page 153: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Optimal importance proposal

Best choice of importance proposal in terms of effective sample size

g?(θ|sobs) ∝ π(θ)p(θ|sobs)1/2

[Not particularly useful in practice]

I note that p(θ|sobs) is an approximate likelihood

I reminiscent of parallel tempering

I could be approximately achieved by attrition of half of thedata


Page 155: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Calibration of h

"This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need hd to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|sobs) is close, in some sense, to π(θ|yobs)" (F&P, p.5)

I turns h into an absolute value while it should be context-dependent and user-calibrated

I only addresses one term in the approximation error and acceptance probability ("curse of dimensionality")

I h large prevents πABC(θ | sobs) from being close to π(θ | sobs)

I d small prevents π(θ | sobs) from being close to π(θ | yobs) ("curse of [dis]information")

Page 156: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Calibrating ABC

“If πABC is calibrated, then this means that probability statementsthat are derived from it are appropriate, and in particular that wecan use πABC to quantify uncertainty in estimates” (F&P, p.5)

Definition

For 0 < q < 1 and subset A, event Eq(A) made of sobs such thatPrABC(θ ∈ A|sobs) = q. Then ABC is calibrated if

Pr(θ ∈ A|Eq(A)) = q

I unclear meaning of conditioning on Eq(A)


Page 158: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Calibrated ABC

Theorem (F&P)

Noisy ABC, where

sobs = S(yobs) + hε , ε ∼ K (·)

is calibrated

[Wilkinson, 2008]

No condition on h!!

Page 159: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Calibrated ABC

Consequence: when h =∞

Theorem (F&P)

The prior distribution is always calibrated

is this a relevant property then?

Page 160: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

More about calibrated ABC

"Calibration is not universally accepted by Bayesians. It is even more questionable here as we care how statements we make relate to the real world, not to a mathematically defined posterior." R. Wilkinson

I Same reluctance about the prior being calibrated

I Property depending on prior, likelihood, and summary

I Calibration is a frequentist property (almost a p-value!)

I More sensible to account for the simulator's imperfections than to use noisy-ABC against a meaningless base measure

[Wilkinson, 2012]

Page 161: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Converging ABC

Theorem (F&P)

For noisy ABC, the expected noisy-ABC log-likelihood,

E{log[p(θ | sobs)]} = ∫∫ log[p(θ | S(yobs) + ε)] π(yobs | θ0) K(ε) dyobs dε ,

has its maximum at θ = θ0.

True for any choice of summary statistic? Even ancillary statistics?! [Imposes at least identifiability...]
Relevant in asymptotia and not for the data at hand

Page 162: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Converging ABC

Corollary

For noisy ABC, the ABC posterior converges onto a point mass on the true parameter value as m → ∞.

For standard ABC, not always the case (unless h goes to zero).

Strength of regularity conditions (c1) and (c2) in Bernardo & Smith, 1994? [out-of-reach constraints on likelihood and posterior]
Again, there must be conditions imposed upon the summary statistics...

Page 163: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Loss motivated statistic

Under the quadratic loss function,

Theorem (F&P)

(i) The minimal posterior error E[L(θ, θ̂) | yobs] occurs when θ̂ = E(θ | yobs) (!)

(ii) When h → 0, EABC(θ | sobs) converges to E(θ | yobs)

(iii) If S(yobs) = E[θ | yobs], then for θ̂ = EABC[θ | sobs]

  E[L(θ, θ̂) | yobs] = trace(AΣ) + h² ∫ x^T A x K(x) dx + o(h²) .

Measure-theoretic difficulties? The dependence of sobs on h makes me uncomfortable: inherent to noisy ABC. Relevant for the choice of K?

Page 164: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Optimal summary statistic

“We take a different approach, and weaken the requirement forπABC to be a good approximation to π(θ|yobs). We argue for πABC

to be a good approximation solely in terms of the accuracy ofcertain estimates of the parameters.” (F&P, p.5)

From this result, F&P

I derive their choice of summary statistic,

S(y) = E(θ|y)

[almost sufficient]

I suggest

h = O(N−1/(2+d)) and h = O(N−1/(4+d))

as optimal bandwidths for noisy and standard ABC.

Page 165: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Optimal summary statistic

“We take a different approach, and weaken the requirement forπABC to be a good approximation to π(θ|yobs). We argue for πABC

to be a good approximation solely in terms of the accuracy ofcertain estimates of the parameters.” (F&P, p.5)

From this result, F&P

I derive their choice of summary statistic,

S(y) = E(θ|y)

[wow! EABC[θ|S(yobs)] = E[θ|yobs]]

I suggest

h = O(N−1/(2+d)) and h = O(N−1/(4+d))

as optimal bandwidths for noisy and standard ABC.

Page 166: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Caveat

Since E(θ|yobs) is most usually unavailable, F&P suggest

(i) use a pilot run of ABC to determine a region of non-negligibleposterior mass;

(ii) simulate sets of parameter values and data;

(iii) use the simulated sets of parameter values and data toestimate the summary statistic; and

(iv) run ABC with this choice of summary statistic.

where is the assessment of the first stage error?


Page 168: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Approximating the summary statistic

As in Beaumont et al. (2002) and Blum and François (2010), F&P use a linear regression to approximate E(θ | yobs):

θi = β0^{(i)} + β^{(i)} f(y) + εi

with f made of standard transforms
[Further justifications?]
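A minimal Python sketch of this regression step: pairs (θi, yi) are simulated from a pilot prior predictive, θ is regressed on transforms f(y), and the fitted value serves as the summary S(y) ≈ E[θ | y]. The toy model and the choice f(y) = (1, y, y²) are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(5)
n_pilot = 5000
theta = rng.uniform(-5, 5, n_pilot)                  # pilot draws from the prior (illustrative)
y = rng.normal(theta, 1.0)                           # one observation per parameter (toy model)

F = np.column_stack([np.ones(n_pilot), y, y**2])     # f(y): standard transforms (illustrative)
beta, *_ = np.linalg.lstsq(F, theta, rcond=None)     # least-squares fit of theta on f(y)

def summary(y_new):
    """Estimated S(y) ~ E[theta | y], used as the ABC summary statistic."""
    return np.array([1.0, y_new, y_new**2]) @ beta

print(summary(0.7))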

Page 169: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

[my]questions about semi-automatic ABC

I dependence on h and S(·) in the early stage

I reduction of Bayesian inference to point estimation

I approximation error in step (i) not accounted for

I not parameterisation invariant

I practice shows that proper approximation to genuine posterior distributions stems from using a (much) larger number of summary statistics than the dimension of the parameter

I the validity of the approximation to the optimal summary statistic depends on the quality of the pilot run

I important inferential issues like model choice are not covered by this approach.

[Robert, 2012]

Page 170: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

More about semi-automatic ABC

[End of section derived from comments on Read Paper, to appear]

"The apparently arbitrary nature of the choice of summary statistics has always been perceived as the Achilles heel of ABC." M. Beaumont

I "Curse of dimensionality" linked with the increase of the dimension of the summary statistic

I Connection with principal component analysis [Itan et al., 2010]

I Connection with partial least squares [Wegmann et al., 2009]

I Beaumont et al. (2002) postprocessed output is used as input by F&P to run a second ABC


Page 172: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Wood’s alternative

Instead of a non-parametric kernel approximation to the likelihood

(1/R) Σ_r Kε{η(yr) − η(yobs)}

Wood (2010) suggests a normal approximation

η(y(θ)) ∼ Nd(µθ, Σθ)

whose parameters can be approximated based on the R simulations (for each value of θ).

I Parametric versus non-parametric rate [Uh?!]

I Automatic weighting of the components of η(·) through Σθ

I Dependence on the normality assumption (pseudo-likelihood?)

[Cornebise, Girolami & Kosmidis, 2012]
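A small Python sketch of this synthetic-likelihood idea: for a given θ, simulate R replicates, compute summaries, fit a Gaussian and evaluate the observed summary under it. The toy simulator and the choice of summaries (mean and variance) are illustrative assumptions.

import numpy as np

def synthetic_loglik(theta, s_obs, R=200, rng=np.random.default_rng(6)):
    """Normal approximation N(mu_theta, Sigma_theta) to the distribution of the summaries."""
    sims = rng.normal(theta, 1.0, size=(R, 50))             # R pseudo-data sets (toy model)
    S = np.column_stack([sims.mean(1), sims.var(1)])         # summary statistics eta(y) (illustrative)
    mu, Sigma = S.mean(0), np.cov(S, rowvar=False)
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (logdet + diff @ np.linalg.solve(Sigma, diff) + len(mu) * np.log(2 * np.pi))

print(synthetic_loglik(0.2, s_obs=np.array([0.1, 1.05])))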


Page 174: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Reinterpretation and extensions

Reinterpretation of the ABC output as joint simulation from

π(x, y | θ) = f(x | θ) πY|X(y | x)

where

πY|X(y | x) = Kε(y − x)

Reinterpretation of noisy ABC

if y | y_obs ∼ πY|X(· | y_obs), then marginally

y ∼ πY|θ(· | θ0)

© Explains the consistency of Bayesian inference based on y and π
[Lee, Andrieu & Doucet, 2012]


Page 176: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Reinterpretation and extensions

Also fits within the pseudo-marginal of Andrieu and Roberts (2009)

Algorithm 5 Rejuvenating GIMH-ABC

At time t, given θt = θ:
  Sample θ′ ∼ g(· | θ).
  Sample z_{1:N} ∼ π⊗N_{Y|Θ}(· | θ′).
  Sample x_{1:N−1} ∼ π⊗(N−1)_{Y|Θ}(· | θ).
  With probability

    min{ 1, [π(θ′) g(θ | θ′) / π(θ) g(θ′ | θ)] × [Σ_{i=1}^N I_{Bh(zi)}(y)] / [1 + Σ_{i=1}^{N−1} I_{Bh(xi)}(y)] }

  set θt+1 = θ′. Otherwise set θt+1 = θ.

[Lee, Andrieu & Doucet, 2012]

Page 177: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Reinterpretation and extensions

Also fits within the pseudo-marginal of Andrieu and Roberts (2009)

Algorithm 6 One-hit MCMC-ABC

At time t, with θt = θ:
  Sample θ′ ∼ g(· | θ).
  With probability 1 − min{1, π(θ′) g(θ | θ′) / π(θ) g(θ′ | θ)}, set θt+1 = θ and go to time t + 1.
  Sample zi ∼ πY|Θ(· | θ′) and xi ∼ πY|Θ(· | θ) for i = 1, . . . until y ∈ Bh(zi) and/or y ∈ Bh(xi).
  If y ∈ Bh(zi), set θt+1 = θ′ and go to time t + 1.
  If y ∈ Bh(xi), set θt+1 = θ and go to time t + 1.
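A Python sketch of this one-hit kernel on a toy scalar model (prior N(0, 10²), data y ∼ N(θ, 1), ball Bh of radius h around y); the random-walk proposal g and all numerical values are illustrative assumptions. Ties (both pseudo-data hit at the same draw) are resolved here in favour of the proposal.

import numpy as np

rng = np.random.default_rng(7)
y, h, T = 0.0, 0.5, 5000
log_prior = lambda t: -0.5 * t**2 / 100          # N(0, 10^2) prior (illustrative)

theta = 0.0
chain = []
for _ in range(T):
    prop = theta + rng.normal(0, 1)              # symmetric random-walk proposal g
    # first reject on the prior ratio alone (g is symmetric)
    if np.log(rng.uniform()) > log_prior(prop) - log_prior(theta):
        chain.append(theta); continue
    # then race pseudo-data under theta' and theta until one hits the ball B_h(y)
    while True:
        z = rng.normal(prop, 1)
        x = rng.normal(theta, 1)
        if abs(z - y) < h:
            theta = prop; break                  # proposal hits first: accept
        if abs(x - y) < h:
            break                                # current value hits first: reject
    chain.append(theta)

print(np.mean(chain[1000:]))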

[Lee, Andrieu & Doucet, 2012]

Page 178: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Reinterpretation and extensions

Also fits within the pseudo-marginal of Andrieu and Roberts (2009)

Validation

c© The invariant distribution of θ is

πθ|Y (·|y)

for both algorithms.

[Lee, Andrieu & Doucet, 2012]

Page 179: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

ABC for Markov chains

Rewriting the posterior as

π(θ)^{1−n} π(θ | x1) ∏_t π(θ | xt−1, xt)

where π(θ | xt−1, xt) ∝ f(xt | xt−1, θ) π(θ)

I Allows for a stepwise ABC, replacing each π(θ | xt−1, xt) by an ABC approximation

I Similarity with F&P's multiple sources of data (and also with Dean et al., 2011)

[White et al., 2010, 2012]


Page 181: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Back to sufficiency

Difference between regular sufficiency, equivalent to

π(θ | y) = π(θ | η(y))

for all θ's and all priors π, and

marginal sufficiency, stated as

π(µ(θ) | y) = π(µ(θ) | η(y))

for all θ's, the given prior π and a subvector µ(θ)
[Basu, 1977]

Relates to F&P's main result, but could even be reduced to conditional sufficiency

π(µ(θ) | yobs) = π(µ(θ) | η(yobs))

(if feasible at all...)
[Dawson, 2012]


Page 184: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

ABC and BCa’s

Arguments in favour of a comparison of ABC with bootstrap in theevent π(θ) is not defined from prior information but asconventional non-informative prior.

[Efron, 2012]

Page 185: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Predictive performances

Instead of posterior means, other aspects of the posterior to explore. E.g., look at minimising the loss of information

∫ p(θ, y) log [p(θ, y) / p(θ)p(y)] dθ dy − ∫ p(θ, η(y)) log [p(θ, η(y)) / p(θ)p(η(y))] dθ dη(y)

for the selection of summary statistics.
[Filippi, Barnes, & Stumpf, 2012]

Page 186: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Auxiliary variables

The auxiliary variable method avoids computation of the intractable constant in the likelihood

f(y | θ) = f̃(y | θ) / Zθ

Introduce pseudo-data z with artificial target g(z | θ, y)
Generate θ′ ∼ K(θ, θ′) and z′ ∼ f̃(z | θ′)

[Møller, Pettitt, Berthelsen, & Reeves, 2006]

Page 187: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Auxiliary variables

The auxiliary variable method avoids computation of the intractable constant in the likelihood

f(y | θ) = f̃(y | θ) / Zθ

Introduce pseudo-data z with artificial target g(z | θ, y)
Generate θ′ ∼ K(θ, θ′) and z′ ∼ f̃(z | θ′)
Accept with probability

[π(θ′) f̃(y | θ′) g(z′ | θ′, y) / π(θ) f̃(y | θ) g(z | θ, y)] × [K(θ′, θ) f̃(z | θ) / K(θ, θ′) f̃(z′ | θ′)] ∧ 1

[Møller, Pettitt, Berthelsen, & Reeves, 2006]


Page 189: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Auxiliary variables

Auxiliary variable method avoids computations of untractableconstant in likelihood

f (y|θ) = Zθ f (y|θ)

Introduce pseudo-data z with artificial target g(z|θ, y)Generate θ′ ∼ K (θ, θ′) and z′ ∼ f (z|θ′)

For Gibbs random fields , existence of a genuine sufficient statistic η(y).[Møller, Pettitt, Berthelsen, & Reeves, 2006]

Page 190: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

Auxiliary variables and ABC

Special case of ABC when

I g(z | θ, y) = Kε(η(z) − η(y))

I f(y | θ′) f(z | θ) / f(y | θ) f(z′ | θ′) replaced by one [or not?!]

Consequences

I likelihood-free (ABC) versus constant-free (AVM)

I in ABC, Kε(·) should be allowed to depend on θ

I for Gibbs random fields, the auxiliary approach should be preferred to ABC

[Møller, 2012]


Page 192: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

ABC and BIC

Idea of applying BIC during the local regression:

I Run regular ABC

I Select summary statistics during the local regression

I Recycle the prior simulation sample (reference table) with those summary statistics

I Rerun the corresponding local regression (low cost)

[Pudlo & Sedki, 2012]

Page 193: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

Approximate Bayesian computation

Automated summary statistic selection

A Brave New World?!

Page 194: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

ABC for model choice

simulation-based methods inEconometrics

Genetics of ABC

Approximate Bayesian computation

ABC for model choice

ABC model choice consistency

Page 195: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Model choice

Bayesian model choice

Several models M1, M2, . . . are considered simultaneously for a dataset y and the model index M is part of the inference.
Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm).
Goal is to derive the posterior distribution of M, a challenging computational target when models are complex.

Page 196: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Model choice

Generic ABC for model choice

Algorithm 7 Likelihood-free model choice sampler (ABC-MC)

for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θm from the prior πm(θm)
    Generate z from the model fm(z | θm)
  until ρ{η(z), η(y)} < ε
  Set m(t) = m and θ(t) = θm
end for
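A Python sketch of this model-choice sampler for the Poisson-versus-geometric toy example discussed later in this section, with summary η(x) = Σ xi; the priors (λ ∼ Exp(1), p ∼ U(0, 1)), tolerance and pretend data are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(8)
x_obs = rng.poisson(3.0, size=30)                 # pretend data
s_obs = x_obs.sum()
T, eps, n = 500, 5, len(x_obs)

draws = []
for _ in range(T):
    while True:
        m = rng.integers(2)                       # model index from a uniform prior
        if m == 0:                                # Poisson P(lambda), lambda ~ Exp(1) (illustrative prior)
            theta = rng.exponential(1.0)
            z = rng.poisson(theta, size=n)
        else:                                     # geometric G(p), p ~ U(0,1) (illustrative prior)
            theta = rng.uniform()
            z = rng.geometric(theta, size=n) - 1  # support {0, 1, 2, ...}
        if abs(z.sum() - s_obs) <= eps:           # rho{eta(z), eta(y)} < eps with eta = sum
            draws.append((m, theta)); break

post_m0 = np.mean([m == 0 for m, _ in draws])
print("P(M = Poisson | y) approx", post_m0)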

Page 197: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Model choice

ABC estimates

Posterior probability π(M = m | y) approximated by the frequency of acceptances from model m

(1/T) Σ_{t=1}^T I_{m(t) = m} .

Issues with the implementation:

I should tolerances ε be the same for all models?

I should summary statistics vary across models (incl. their dimension)?

I should the distance measure ρ vary as well?

Page 198: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Model choice

ABC estimates

Posterior probability π(M = m | y) approximated by the frequency of acceptances from model m

(1/T) Σ_{t=1}^T I_{m(t) = m} .

Extension to a weighted polychotomous logistic regression estimate of π(M = m | y), with non-parametric kernel weights

[Cornuet et al., DIYABC, 2009]


Page 200: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Model choice

The Great ABC controversy

On-going controversy in phylogeographic genetics about the validity of using ABC for testing

Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)

Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csilléry et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...

Page 201: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Gibbs random fields

Gibbs distribution

The rv y = (y1, . . . , yn) is a Gibbs random field associated with the graph G if

f(y) = (1/Z) exp{ − Σ_{c∈C} Vc(yc) } ,

where Z is the normalising constant, C is the set of cliques of G and Vc is any function, also called a potential
U(y) = Σ_{c∈C} Vc(yc) is the energy function (and sufficient statistic)

© Z is usually unavailable in closed form


Page 203: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Potts model

Potts model

Vc(y) is of the form

Vc(y) = θ S(y) = θ Σ_{l∼i} δ_{yl = yi}

where l∼i denotes a neighbourhood structure

In most realistic settings, the summation

Zθ = Σ_{x∈X} exp{θ^T S(x)}

involves too many terms to be manageable, and numerical approximations cannot always be trusted

[Cucala, Marin, CPR & Titterington, 2009]


Page 205: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Bayesian Model Choice

Comparing a model with potential S0 taking values in R^{p0} versus a model with potential S1 taking values in R^{p1} can be done through the Bayes factor corresponding to the priors π0 and π1 on each parameter space

Bm0/m1(x) = ∫ exp{θ0^T S0(x)}/Z_{θ0,0} π0(dθ0) / ∫ exp{θ1^T S1(x)}/Z_{θ1,1} π1(dθ1)

Use of Jeffreys' scale to select the most appropriate model


Page 207: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Neighbourhood relations

Choice to be made between M neighbourhood relations

im∼ i ′ (0 ≤ m ≤ M − 1)

withSm(x) =

∑im∼i ′

I{xi=xi′}

driven by the posterior probabilities of the models.

Page 208: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Model index

Formalisation via a model index M that appears as a new parameter, with prior distribution π(M = m) and π(θ | M = m) = πm(θm)

Computational target:

P(M = m | x) ∝ ∫_{Θm} fm(x | θm) πm(θm) dθm π(M = m)


Page 210: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Sufficient statistics

By definition, if S(x) sufficient statistic for the joint parameters(M, θ0, . . . , θM−1),

P(M = m|x) = P(M = m|S(x)) .

For each model m, own sufficient statistic Sm(·) andS(·) = (S0(·), . . . ,SM−1(·)) also sufficient.


Page 212: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Sufficient statistics in Gibbs random fields

For Gibbs random fields,

x | M = m ∼ fm(x | θm) = f¹m(x | S(x)) f²m(S(x) | θm)
                       = [1 / n(S(x))] f²m(S(x) | θm)

where

n(S(x)) = #{x̃ ∈ X : S(x̃) = S(x)}

© S(x) is therefore also sufficient for the joint parameters
[Specific to Gibbs random fields!]

Page 213: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

ABC model choice Algorithm

ABC-MC

I Generate m* from the prior π(M = m).

I Generate θ*_{m*} from the prior π_{m*}(·).

I Generate x* from the model f_{m*}(· | θ*_{m*}).

I Compute the distance ρ(S(x0), S(x*)).

I Accept (θ*_{m*}, m*) if ρ(S(x0), S(x*)) < ε.

Note: when ε = 0 the algorithm is exact

Page 214: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

ABC approximation to the Bayes factor

Frequency ratio:

BF_{m0/m1}(x0) = [P(M = m0 | x0) / P(M = m1 | x0)] × [π(M = m1) / π(M = m0)]
              = [#{m_{i*} = m0} / #{m_{i*} = m1}] × [π(M = m1) / π(M = m0)] ,

replaced with

BF̂_{m0/m1}(x0) = [(1 + #{m_{i*} = m0}) / (1 + #{m_{i*} = m1})] × [π(M = m1) / π(M = m0)]

to avoid indeterminacy (also a Bayes estimate).
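A one-line Python helper for this corrected ratio, given the accepted model indices from the sampler above; the default uniform model prior is an assumption.

import numpy as np

def abc_bayes_factor(m_draws, prior_m0=0.5, prior_m1=0.5):
    """(1 + #{m = 0}) / (1 + #{m = 1}) times the prior odds correction."""
    m = np.asarray(m_draws)
    return (1 + np.sum(m == 0)) / (1 + np.sum(m == 1)) * (prior_m1 / prior_m0)

print(abc_bayes_factor([0, 0, 1, 0, 1]))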


Page 216: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Toy example

iid Bernoulli model versus two-state first-order Markov chain, i.e.

f0(x | θ0) = exp(θ0 Σ_{i=1}^n I_{xi=1}) / {1 + exp(θ0)}^n ,

versus

f1(x | θ1) = (1/2) exp(θ1 Σ_{i=2}^n I_{xi=xi−1}) / {1 + exp(θ1)}^{n−1} ,

with priors θ0 ∼ U(−5, 5) and θ1 ∼ U(0, 6) (inspired by "phase transition" boundaries).
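A Python sketch of simulation from these two models (it can feed the ABC-MC sampler above); S0(x) = Σ I{xi = 1} and S1(x) = Σ I{xi = xi−1} are the per-model sufficient statistics, and the parameter values below are illustrative.

import numpy as np

rng = np.random.default_rng(10)

def sim_bernoulli(theta0, n):
    """iid model: P(x_i = 1) = exp(theta0) / (1 + exp(theta0))."""
    p = 1 / (1 + np.exp(-theta0))
    return rng.binomial(1, p, size=n)

def sim_markov(theta1, n):
    """two-state chain: P(x_i = x_{i-1}) = exp(theta1) / (1 + exp(theta1)), x_1 uniform."""
    q = 1 / (1 + np.exp(-theta1))
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for i in range(1, n):
        x[i] = x[i - 1] if rng.uniform() < q else 1 - x[i - 1]
    return x

x = sim_markov(2.0, 100)
S0, S1 = x.sum(), np.sum(x[1:] == x[:-1])       # per-model sufficient statistics
print(S0, S1)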

Page 217: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Gibbs random fields

Toy example (2)

[Figure]
(left) Comparison of the true BF_{m0/m1}(x0) with BF̂_{m0/m1}(x0) (in logs) over 2,000 simulations and 4·10^6 proposals from the prior. (right) Same when using a tolerance ε corresponding to the 1% quantile on the distances.

Page 218: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Back to sufficiency

‘Sufficient statistics for individual models are unlikely tobe very informative for the model probability.’

[Scott Sisson, Jan. 31, 2011, X.’Og]

If η1(x) sufficient statistic for model m = 1 and parameter θ1 andη2(x) sufficient statistic for model m = 2 and parameter θ2,(η1(x), η2(x)) is not always sufficient for (m, θm)

c© Potential loss of information at the testing level


Page 221: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (T →∞)

ABC approximation

B̂12(y) = [Σ_{t=1}^T I_{mt=1} I_{ρ{η(zt),η(y)}≤ε}] / [Σ_{t=1}^T I_{mt=2} I_{ρ{η(zt),η(y)}≤ε}] ,

where the (mt, zt)'s are simulated from the (joint) prior

As T goes to infinity, the limit is

Bε12(y) = ∫ I_{ρ{η(z),η(y)}≤ε} π1(θ1) f1(z | θ1) dz dθ1 / ∫ I_{ρ{η(z),η(y)}≤ε} π2(θ2) f2(z | θ2) dz dθ2
        = ∫ I_{ρ{η,η(y)}≤ε} π1(θ1) f1^η(η | θ1) dη dθ1 / ∫ I_{ρ{η,η(y)}≤ε} π2(θ2) f2^η(η | θ2) dη dθ2 ,

where f1^η(η | θ1) and f2^η(η | θ2) are the distributions of η(z)


Page 223: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (ε→ 0)

When ε goes to zero,

Bη12(y) = ∫ π1(θ1) f1^η(η(y) | θ1) dθ1 / ∫ π2(θ2) f2^η(η(y) | θ2) dθ2 ,

© Bayes factor based on the sole observation of η(y)


Page 225: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Limiting behaviour of B12 (under sufficiency)

If η(y) is a sufficient statistic for both models,

fi(y | θi) = gi(y) fi^η(η(y) | θi)

Thus

B12(y) = ∫_{Θ1} π(θ1) g1(y) f1^η(η(y) | θ1) dθ1 / ∫_{Θ2} π(θ2) g2(y) f2^η(η(y) | θ2) dθ2
       = [g1(y) ∫ π1(θ1) f1^η(η(y) | θ1) dθ1] / [g2(y) ∫ π2(θ2) f2^η(η(y) | θ2) dθ2]
       = [g1(y) / g2(y)] Bη12(y) .

[Didelot, Everitt, Johansen & Lawson, 2011]

© No discrepancy only when cross-model sufficiency holds


Page 227: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Poisson/geometric example

Sample

x = (x1, . . . , xn)

from either a Poisson P(λ) or from a geometric G(p). Then

S = Σ_{i=1}^n xi = η(x)

is a sufficient statistic for either model, but not simultaneously

Discrepancy ratio

g1(x) / g2(x) = [S! n^{−S} / ∏_i xi!] / [1 / C(n+S−1, S)]
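A small Python check of this ratio (on the log scale, using gammaln for the factorials and the binomial coefficient C(n+S−1, S)); the observations below are a made-up example.

import numpy as np
from scipy.special import gammaln

def log_discrepancy_ratio(x):
    """log of g1(x)/g2(x) = [S! n^{-S} / prod(x_i!)] * C(n+S-1, S)."""
    x = np.asarray(x)
    n, S = len(x), x.sum()
    log_g1 = gammaln(S + 1) - S * np.log(n) - gammaln(x + 1).sum()
    log_binom = gammaln(n + S) - gammaln(S + 1) - gammaln(n)   # log C(n+S-1, S)
    return log_g1 + log_binom          # dividing by 1/C(n+S-1, S) multiplies by the binomial

print(log_discrepancy_ratio([2, 0, 3, 1, 1]))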

Page 228: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Poisson/geometric discrepancy

Range of B12(x) versus Bη12(x): the values produced have nothing in common.


Page 230: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Formal recovery

Creating an encompassing exponential family

f(x | θ1, θ2, α1, α2) ∝ exp{θ1^T η1(x) + θ2^T η2(x) + α1 t1(x) + α2 t2(x)}

leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x))
[Didelot, Everitt, Johansen & Lawson, 2011]

In the Poisson/geometric case, if ∏_i xi! is added to S, there is no discrepancy

Page 231: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Formal recovery

Creating an encompassing exponential family

f(x | θ1, θ2, α1, α2) ∝ exp{θ1^T η1(x) + θ2^T η2(x) + α1 t1(x) + α2 t2(x)}

leads to a sufficient statistic (η1(x), η2(x), t1(x), t2(x))
[Didelot, Everitt, Johansen & Lawson, 2011]

Only applies in genuine sufficiency settings...

© Inability to evaluate the loss brought by the summary statistics

Page 232: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Meaning of the ABC-Bayes factor

‘This is also why focus on model discrimination typically(...) proceeds by (...) accepting that the Bayes Factorthat one obtains is only derived from the summarystatistics and may in no way correspond to that of thefull model.’

[Scott Sisson, Jan. 31, 2011, X.’Og]

In the Poisson/geometric case, if E[yi] = θ0 > 0,

lim_{n→∞} Bη12(y) = [(θ0 + 1)² / θ0] e^{−θ0}


Page 234: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

MA(q) divergence

Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε is equal to the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) model with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.
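For reference, a minimal ABC model-choice sketch in the spirit of this experiment (not the exact code behind the figure): the priors on the MA coefficients, the number of reference simulations and the autocovariance lags used below are assumptions of the sketch.

# A minimal ABC model-choice sketch for MA(1) vs MA(2) based on
# autocovariance summaries. Priors (uniform on simple boxes) and tuning
# constants are assumptions of this sketch, not the slide's exact settings.
import numpy as np

rng = np.random.default_rng(1)
n = 50

def simulate_ma(theta, n):
    """Simulate n observations from an MA(q) model with coefficients theta."""
    q = len(theta)
    eps = rng.standard_normal(n + q)
    y = eps[q:].copy()
    for j, t in enumerate(theta, start=1):
        y += t * eps[q - j:n + q - j]
    return y

def summaries(y):
    """First two empirical autocovariances, used as (insufficient) summaries."""
    y = y - y.mean()
    return np.array([np.dot(y[k:], y[:len(y) - k]) / len(y) for k in (1, 2)])

y_obs = simulate_ma([0.6, 0.2], n)       # pseudo-observed data from an MA(2)
s_obs = summaries(y_obs)

N = 50_000
models = rng.integers(1, 3, size=N)      # uniform prior on {MA(1), MA(2)}
dist = np.empty(N)
for i in range(N):
    if models[i] == 1:
        theta = [rng.uniform(-1, 1)]
    else:
        theta = [rng.uniform(-2, 2), rng.uniform(-1, 1)]
    dist[i] = np.sum((summaries(simulate_ma(theta, n)) - s_obs) ** 2)

for q in (0.10, 0.01):                   # acceptance quantiles (epsilon)
    keep = dist <= np.quantile(dist, q)
    p1 = np.mean(models[keep] == 1)
    print(f"epsilon = {q:.0%} quantile: P(MA(1)|y) ~ {p1:.3f}, P(MA(2)|y) ~ {1 - p1:.3f}")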

Page 235: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

MA(q) divergence

Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε is equal to the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.

Page 236: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Further comments

‘There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η’s: η1 and η∗1) the ratio of evidences may no longer be equal to one.’

[Michael Stumpf, Jan. 28, 2011, ’Og]

Using different summary statistics [on different models] may indicate the loss of information brought by each set, but agreement does not lead to trustworthy approximations.

Page 237: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

A population genetics evaluation

Population genetics example with

I 3 populations

I 2 scenarios

I 15 individuals

I 5 loci

I single mutation parameter

I 24 summary statistics

I 2 million ABC proposals

I importance [tree] sampling alternative

Page 239: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Stability of importance sampling

[Figure: importance sampling estimates across replications, plotted on the (0, 1) scale]

Page 240: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Comparison with ABC

Use of 24 summary statistics and DIY-ABC logistic correction

[Scatterplot: ABC estimates of P(M=1|y) against Importance Sampling estimates of P(M=1|y)]

Page 241: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Comparison with ABC

Use of 15 summary statistics and DIY-ABC logistic correction

[Scatterplot: ABC estimates of P(M=1|y) against Importance Sampling estimates of P(M=1|y)]

Page 242: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

Comparison with ABC

Use of 24 summary statistics and DIY-ABC logistic correction

[Scatterplot: ABC estimates of P(M=1|y) against Importance Sampling estimates of P(M=1|y)]

Page 243: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC for model choice

Generic ABC model choice

The only safe cases???

Besides specific models like Gibbs random fields,

using distances over the data itself escapes the discrepancy...
[Toni & Stumpf, 2010; Sousa & al., 2009]

...and so does the use of more informal model fitting measures
[Ratmann & al., 2009]

Page 245: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

ABC model choice consistency

simulation-based methods in Econometrics

Genetics of ABC

Approximate Bayesian computation

ABC for model choice

ABC model choice consistency

Page 246: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

The starting point

Central question to the validation of ABC for model choice:

When is a Bayes factor based on an insufficient statistic T(y) consistent?

Note: © drawn on T(y) through B^T_12(y) necessarily differs from © drawn on y through B12(y)

Page 248: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

A benchmark if toy example

Comparison suggested by referee of PNAS paper [thanks]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

Page 249: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

A benchmark if toy example

Comparison suggested by referee of PNAS paper [thanks]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).
Four possible statistics (a simulation sketch follows the list):

1. sample mean y (sufficient for M1 but not M2);

2. sample median med(y) (insufficient);

3. sample variance var(y) (ancillary);

4. median absolute deviation mad(y) = med(|y − med(y)|).
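A minimal ABC sketch for this benchmark, looping over the four candidate statistics; the N(0, 4) prior on the mean and the 1% tolerance quantile follow the caption quoted later, while the reference-table size and the seed are assumptions of the sketch.

# Minimal ABC model choice for the normal/Laplace benchmark, one candidate
# summary statistic at a time. Prior on the mean and tolerance follow the
# later caption; other settings are assumptions of this sketch.
import numpy as np

rng = np.random.default_rng(3)
n = 100
y_obs = rng.normal(0.0, 1.0, size=n)                  # pseudo-data from M1

def stat(y, which):
    """One of the four candidate summaries, computed along the last axis."""
    if which == "mean":
        return np.mean(y, axis=-1)
    if which == "median":
        return np.median(y, axis=-1)
    if which == "var":
        return np.var(y, axis=-1)
    med = np.median(y, axis=-1, keepdims=True)
    return np.median(np.abs(y - med), axis=-1)        # mad

N = 50_000
models = rng.integers(1, 3, size=N)                   # uniform prior on {M1, M2}
thetas = rng.normal(0.0, 2.0, size=N)                 # N(0, 4) prior on the mean
normal_sims = rng.normal(thetas[:, None], 1.0, size=(N, n))
laplace_sims = rng.laplace(thetas[:, None], 1.0 / np.sqrt(2.0), size=(N, n))
sims = np.where((models == 1)[:, None], normal_sims, laplace_sims)

for which in ("mean", "median", "var", "mad"):
    d = np.abs(stat(sims, which) - stat(y_obs, which))
    keep = d <= np.quantile(d, 0.01)                  # 1% distance quantile
    print(f"{which:6s} P(M1 | s) ~ {np.mean(models[keep] == 1):.3f}")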

Page 250: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

A benchmark if toy example

Comparison suggested by referee of PNAS paper [thanks]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

[Figure: density of the posterior probability]

Page 251: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

A benchmark if toy example

Comparison suggested by referee of PNAS paper [thanks]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

[Figure: boxplots for Gauss and Laplace samples, n = 100]

Page 252: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

A benchmark if toy example

Comparison suggested by referee of PNAS paper [thanks]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

[Figures: densities of the posterior probability (left) and of the probability (right)]

Page 253: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

A benchmark if toy example

Comparison suggested by referee of PNAS paper [thanks]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), a Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

[Figures: boxplots for Gauss and Laplace samples, n = 100 (two panels)]

Page 254: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

Framework

Starting from sample

y = (y1, . . . , yn)

the observed sample, not necessarily iid with true distribution

y ∼ Pn

Summary statistics

T(y) = Tn = (T1(y),T2(y), · · · ,Td(y)) ∈ Rd

with true distribution Tn ∼ Gn.

Page 255: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Formalised framework

Framework

© Comparison of

– under M1, y ∼ F1,n(·|θ1) where θ1 ∈ Θ1 ⊂ Rp1

– under M2, y ∼ F2,n(·|θ2) where θ2 ∈ Θ2 ⊂ Rp2

turned into

– under M1, T(y) ∼ G1,n(·|θ1), and θ1|T(y) ∼ π1(·|Tn)

– under M2, T(y) ∼ G2,n(·|θ2), and θ2|T(y) ∼ π2(·|Tn)

Page 256: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A1] There exist a sequence {vn} converging to +∞, an a.c. distribution Q with continuous bounded density q(·), a symmetric d × d positive definite matrix V0 and a vector µ0 ∈ Rd such that

$$
v_n V_0^{-1/2}(T_n - \mu_0) \xrightarrow[n\to\infty]{} Q, \quad \text{under } G_n
$$

and for all M > 0,

$$
\sup_{v_n|t-\mu_0|<M} \Big|\, |V_0|^{1/2} v_n^{-d}\, g_n(t) - q\big(v_n V_0^{-1/2}\{t-\mu_0\}\big) \Big| = o(1)
$$

Page 257: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A2] For i = 1, 2, there exist d × d symmetric positive definite matrices Vi(θi) and vectors µi(θi) ∈ Rd such that

$$
v_n V_i(\theta_i)^{-1/2}(T_n - \mu_i(\theta_i)) \xrightarrow[n\to\infty]{} Q, \quad \text{under } G_{i,n}(\cdot|\theta_i)\,.
$$

Page 258: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A3] For i = 1, 2, there exist sets Fn,i ⊂ Θi and constants εi, τi, αi > 0 such that for all τ > 0,

$$
\sup_{\theta_i\in F_{n,i}} G_{i,n}\big[\,|T_n - \mu_i(\theta_i)| > \tau\,|\mu_i(\theta_i)-\mu_0| \wedge \varepsilon_i \,\big|\, \theta_i\big]
\;\lesssim\; v_n^{-\alpha_i}\, \sup_{\theta_i\in F_{n,i}} \big(|\mu_i(\theta_i)-\mu_0| \wedge \varepsilon_i\big)^{-\alpha_i}
$$

with πi(F^c_{n,i}) = o(v_n^{−τi}).

Page 259: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A4] For u > 0, let

$$
S_{n,i}(u) = \big\{\theta_i \in F_{n,i}\,;\ |\mu_i(\theta_i)-\mu_0| \le u\, v_n^{-1}\big\}
$$

If inf{|µi(θi) − µ0|; θi ∈ Θi} = 0, there exist constants di < τi ∧ αi − 1 such that

$$
\pi_i\big(S_{n,i}(u)\big) \sim u^{d_i} v_n^{-d_i}\,, \qquad \forall\, u \lesssim v_n
$$

Page 260: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A5] If inf{|µi(θi) − µ0|; θi ∈ Θi} = 0, there exists U > 0 such that for any M > 0,

$$
\sup_{v_n|t-\mu_0|<M}\ \sup_{\theta_i\in S_{n,i}(U)}
\Big|\, |V_i(\theta_i)|^{1/2} v_n^{-d}\, g_i(t|\theta_i) - q\big(v_n V_i(\theta_i)^{-1/2}(t-\mu_i(\theta_i))\big) \Big| = o(1)
$$

and

$$
\lim_{M\to\infty}\ \limsup_n\ \frac{\pi_i\big(S_{n,i}(U) \cap \{\|V_i(\theta_i)^{-1}\| + \|V_i(\theta_i)\| > M\}\big)}{\pi_i\big(S_{n,i}(U)\big)} = 0\,.
$$

Page 261: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Assumptions

A collection of asymptotic “standard” assumptions:

[A1]–[A2] are standard central limit theorems ([A1] redundant when one model is “true”)
[A3] controls the large deviations of the estimator Tn from the estimand µ(θ)
[A4] is the standard prior mass condition found in Bayesian asymptotics (di is the effective dimension of the parameter)
[A5] controls convergence more tightly, especially when µi is not one-to-one

Page 262: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Effective dimension

[A4] Understanding d1, d2: defined only when µ0 ∈ {µi(θi), θi ∈ Θi},

$$
\pi_i\big(\theta_i : |\mu_i(\theta_i)-\mu_0| < n^{-1/2}\big) = O\big(n^{-d_i/2}\big)
$$

where di is the effective dimension of the model Θi around µ0
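A heuristic reading of di, not taken from the slides and assuming in addition that µi is smooth and injective with a full-rank Jacobian and that πi has a positive continuous density at µi⁻¹(µ0):

$$
\pi_i\big(\theta_i : |\mu_i(\theta_i)-\mu_0| < n^{-1/2}\big)
\;\asymp\; \pi_i\big(\theta_i : |\theta_i - \mu_i^{-1}(\mu_0)| \lesssim n^{-1/2}\big)
\;\asymp\; n^{-p_i/2}
$$

so that di coincides with the parameter dimension pi in that case, while a non-injective or degenerate µi inflates the prior mass of this neighbourhood and can only lower di.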

Page 263: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Asymptotic marginals

Asymptotically, under [A1]–[A5],

$$
m_i(t) = \int_{\Theta_i} g_i(t|\theta_i)\,\pi_i(\theta_i)\, d\theta_i
$$

is such that
(i) if inf{|µi(θi) − µ0|; θi ∈ Θi} = 0,

$$
C_l\, v_n^{d-d_i} \le m_i(T_n) \le C_u\, v_n^{d-d_i}
$$

and
(ii) if inf{|µi(θi) − µ0|; θi ∈ Θi} > 0,

$$
m_i(T_n) = o_{P_n}\big[v_n^{d-\tau_i} + v_n^{d-\alpha_i}\big].
$$

Page 264: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Within-model consistency

Under the same assumptions, if inf{|µi(θi) − µ0|; θi ∈ Θi} = 0, the posterior distribution of µi(θi) given Tn is consistent at rate 1/vn provided αi ∧ τi > di.

Note: di can truly be seen as an effective dimension of the model under the posterior πi(·|Tn), since if µ0 ∈ {µi(θi); θi ∈ Θi},

$$
m_i(T_n) \sim v_n^{d-d_i}
$$

Page 266: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Between-model consistency

Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of Tn under both models. And only by this mean value!

Page 267: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Between-model consistency

Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of Tn under both models. And only by this mean value!

Indeed, if

inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} = inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0

then

$$
C_l\, v_n^{-(d_1-d_2)} \;\le\; \frac{m_1(T_n)}{m_2(T_n)} \;\le\; C_u\, v_n^{-(d_1-d_2)}\,,
$$

where Cl, Cu = OPn(1), irrespective of the true model.
© Only depends on the difference d1 − d2

Page 268: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Between-model consistency

Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of Tn under both models. And only by this mean value!

Else, if

inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} > inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0

then

$$
\frac{m_1(T_n)}{m_2(T_n)} \;\ge\; C_u \min\big(v_n^{-(d_1-\alpha_2)},\, v_n^{-(d_1-\tau_2)}\big)\,,
$$

Page 269: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Consistency theorem

If

inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} = inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0,

the Bayes factor

$$
B^{T}_{12} = O\big(v_n^{-(d_1-d_2)}\big)
$$

irrespective of the true model. It is consistent iff Pn is within the model with the smallest dimension

Page 270: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Consistency results

Consistency theorem

If Pn belongs to one of the two models and if µ0 cannot be attained by the other one:

0 = min (inf{|µ0 − µi (θi )|; θi ∈ Θi}, i = 1, 2)

< max (inf{|µ0 − µi (θi )|; θi ∈ Θi}, i = 1, 2) ,

then the Bayes factor B^T_12 is consistent

Page 271: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Consequences on summary statistics

Bayes factor driven by the means µi(θi) and the relative position of µ0 wrt both sets {µi(θi); θi ∈ Θi}, i = 1, 2.

For ABC, this implies the most likely statistics Tn are ancillary statistics with different mean values under both models

Else, if Tn asymptotically depends on some of the parameters of the models, it is quite likely that there exists θi ∈ Θi such that µi(θi) = µ0 even though model M1 is misspecified

Page 272: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [1]

If

$$
T_n = n^{-1}\sum_{i=1}^{n} X_i^4, \qquad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \qquad \mu_2(\theta) = 6 + \cdots
$$

and the true distribution is Laplace with mean θ0 = 1, under the Gaussian model the value θ∗ = 2√3 − 3 leads to µ0 = µ(θ∗)

[here d1 = d2 = d = 1]

Page 273: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [1]

If

$$
T_n = n^{-1}\sum_{i=1}^{n} X_i^4, \qquad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \qquad \mu_2(\theta) = 6 + \cdots
$$

and the true distribution is Laplace with mean θ0 = 1, under the Gaussian model the value θ∗ = 2√3 − 3 leads to µ0 = µ(θ∗)
[here d1 = d2 = d = 1]
© a Bayes factor associated with such a statistic is inconsistent
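A small numerical illustration of this matching (not from the slides; the closed-form fourth moments used below, 3 + 6θ² + θ⁴ for N(θ, 1) and 6 + 6θ² + θ⁴ for the Laplace with variance one, are assumptions of the sketch):

# Numerical illustration of why the empirical fourth moment cannot separate
# the two models: for a Laplace truth one can find a Gaussian mean theta*
# with the same limiting fourth moment. Moment formulas are assumptions of
# this sketch.
import numpy as np

def mu1(theta):                      # Gaussian model, E[X^4]
    return 3 + 6 * theta**2 + theta**4

def mu2(theta):                      # Laplace model (variance one), E[X^4]
    return 6 + 6 * theta**2 + theta**4

theta0 = 1.0                         # Laplace truth
mu0 = mu2(theta0)

# solve mu1(theta*) = mu0 for theta* >= 0 (quadratic in theta*^2)
t2 = -3 + np.sqrt(9 + (mu0 - 3))     # root of t2^2 + 6 t2 + 3 - mu0 = 0
theta_star = np.sqrt(t2)
print("mu0 =", mu0, " theta* =", theta_star, " mu1(theta*) =", mu1(theta_star))

# empirical check: the two limiting values of the summary statistic coincide
rng = np.random.default_rng(4)
x_lap = rng.laplace(theta0, 1 / np.sqrt(2), size=1_000_000)
x_gau = rng.normal(theta_star, 1.0, size=1_000_000)
print("empirical fourth moments:", np.mean(x_lap**4), np.mean(x_gau**4))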

Page 274: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [1]

If

$$
T_n = n^{-1}\sum_{i=1}^{n} X_i^4, \qquad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \qquad \mu_2(\theta) = 6 + \cdots
$$

[Figure: “Fourth moment” — density of the probabilities]

Page 275: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [1]

If

$$
T_n = n^{-1}\sum_{i=1}^{n} X_i^4, \qquad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \qquad \mu_2(\theta) = 6 + \cdots
$$

[Figure: boxplots for M1 and M2, n = 1000]

Page 276: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [1]

If

$$
T_n = n^{-1}\sum_{i=1}^{n} X_i^4, \qquad \mu_1(\theta) = 3 + \theta^4 + 6\theta^2, \qquad \mu_2(\theta) = 6 + \cdots
$$

Caption: Comparison of the distributions of the posterior probabilities that the data is from a normal model (as opposed to a Laplace model) with unknown mean, when the data is made of n = 1000 observations either from a normal (M1) or Laplace (M2) distribution with mean one and when the summary statistic in the ABC algorithm is restricted to the empirical fourth moment. The ABC algorithm uses proposals from the prior N(0, 4) and selects the tolerance as the 1% distance quantile.

Page 277: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [0]

When

$$
T(y) = \big\{\, y_n^{(4)},\ y_n^{(6)} \,\big\}
$$

and the true distribution is Laplace with mean θ0 = 0, then µ0 = 6 and µ1(θ∗1) = 6 with θ∗1 = 2√3 − 3
[d1 = 1 and d2 = 1/2]
thus

B12 ∼ n^{−1/4} → 0 : consistent

Under the Gaussian model, µ0 = 3 and µ2(θ2) ≥ 6 > 3 ∀θ2, so

B12 → +∞ : consistent
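A complementary numerical check (again not from the slides, with the standard fourth and sixth moment formulas below taken as assumptions of the sketch) that no Gaussian mean can match both limiting moments of the centred Laplace model, which is what restores consistency:

# Numerical check that the pair (fourth, sixth) moment of the Laplace model
# cannot be matched by any Gaussian mean value. The moment formulas are
# assumptions of this sketch (standard normal/Laplace computations).
import numpy as np

def gaussian_moments(theta):
    """(E[X^4], E[X^6]) for X ~ N(theta, 1)."""
    m4 = theta**4 + 6 * theta**2 + 3
    m6 = theta**6 + 15 * theta**4 + 45 * theta**2 + 15
    return np.array([m4, m6])

laplace_moments = np.array([6.0, 90.0])   # X ~ Laplace(0, 1/sqrt(2)): E[X^4], E[X^6]

thetas = np.linspace(-5, 5, 100_001)
gaps = np.array([np.linalg.norm(gaussian_moments(t) - laplace_moments) for t in thetas])
print("smallest gap over theta:", gaps.min())        # stays bounded away from 0
print("achieved at theta =", thetas[gaps.argmin()])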

Page 279: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Toy example: Laplace versus Gauss [0]

When

$$
T(y) = \big\{\, y_n^{(4)},\ y_n^{(6)} \,\big\}
$$

[Figure: “Fourth and sixth moments” — density of the posterior probabilities]

Page 280: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Embedded models

When M1 is a submodel of M2, and if the true distribution belongs to the smaller model M1, the Bayes factor is of order

$$
v_n^{-(d_1-d_2)}
$$

Page 281: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Embedded models

When M1 is a submodel of M2, and if the true distribution belongs to the smaller model M1, the Bayes factor is of order

$$
v_n^{-(d_1-d_2)}
$$

If the summary statistic is only informative on a parameter that is the same under both models, i.e. if d1 = d2, then
© the Bayes factor is not consistent

Page 282: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Embedded models

When M1 is a submodel of M2, and if the true distribution belongs to the smaller model M1, the Bayes factor is of order

$$
v_n^{-(d_1-d_2)}
$$

Else, d1 < d2 and the Bayes factor is consistent under M1. If the true distribution is not in M1, then
© the Bayes factor is consistent only if µ1 ≠ µ2 = µ0

Page 283: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Another toy example: Quantile distribution

$$
Q(p; A, B, g, k) = A + B\left[1 + \frac{1-\exp\{-g\,z(p)\}}{1+\exp\{-g\,z(p)\}}\right]\left[1 + z(p)^2\right]^{k} z(p)
$$

with A, B, g and k the location, scale, skewness and kurtosis parameters.
Embedded models (a simulation sketch of the quantile function follows the list):

I M1 : g = 0 and k ∼ U [−1/2, 5]

I M2 : g ∼ U [0, 4] and k ∼ U [−1/2, 5].
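A small sketch of inverse-transform simulation from this quantile family, taking z(p) to be the standard normal quantile Φ⁻¹(p); that reading of z(p), the parameter values and the omission of the usual extra constant of the g-and-k family are assumptions of the sketch.

# Inverse-transform simulation from the quantile family displayed above,
# taking z(p) to be the standard normal quantile (an assumption of this
# sketch). Parameter values below are arbitrary illustrations.
import numpy as np
from scipy.stats import norm

def quantile_Q(p, A, B, g, k):
    """Q(p; A, B, g, k) exactly as written on the slide."""
    z = norm.ppf(p)
    skew = 1 + (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))
    return A + B * skew * (1 + z**2) ** k * z

def simulate(n, A, B, g, k, rng):
    """Draw n values by plugging uniforms into the quantile function."""
    return quantile_Q(rng.uniform(size=n), A, B, g, k)

rng = np.random.default_rng(5)
x1 = simulate(10_000, A=0.0, B=1.0, g=0.0, k=0.5, rng=rng)   # a draw under M1 (g = 0)
x2 = simulate(10_000, A=0.0, B=1.0, g=2.0, k=0.5, rng=rng)   # a draw under M2
print("sample skewness proxy:", np.mean(x1**3), np.mean(x2**3))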

Page 284: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Consistency [or not]

[Figure: boxplots of the posterior probabilities of M1, for samples from M1 and M2 (caption on the next slide)]

Page 285: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Summary statistics

Consistency [or not]

Caption: Comparison of the distributions of the posterior probabilities that the data is from model M1 when the data is made of 100 observations (left column), 1000 observations (central column) and 10,000 observations (right column), either from M1 or M2, when the summary statistics in the ABC algorithm are made of the empirical quantile at level 10% (first row), the empirical quantiles at levels 10% and 90% (second row), and the empirical quantiles at levels 10%, 40%, 60% and 90% (third row), respectively. The boxplots rely on 100 replicas and the ABC algorithms are based on 10^4 proposals from the prior, with the tolerance being chosen as the 1% quantile on the distances.

Page 286: Monash University short course, part II

MCMC and likelihood-free methods Part/day II: ABC

ABC model choice consistency

Conclusions

Conclusions

• Model selection feasible with ABC
• Choice of summary statistics is paramount
• At best, ABC → π(· | T(y)), which concentrates around µ0
• For estimation: {θ; µ(θ) = µ0} = θ0
• For testing: {µ1(θ1), θ1 ∈ Θ1} ∩ {µ2(θ2), θ2 ∈ Θ2} = ∅
