
ABC methodology and applications

Jean-Michel Marin & Christian P. Robert

I3M, Université Montpellier 2, Université Paris-Dauphine, & University of Warwick

ISBA 2014, Cancun. July 13


Outline

1 Approximate Bayesian computation

2 ABC as an inference machine

3 ABC for model choice

4 Genetics of ABC


Approximate Bayesian computation

1 Approximate Bayesian computation: ABC basics, Alphabet soup, ABC-MCMC

2 ABC as an inference machine

3 ABC for model choice

4 Genetics of ABC


Intractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step

f(y|θ) = ∫_Z f(y, z|θ) dz

is impossible or too costly because of the dimension of z

© MCMC cannot be implemented!



The ABC method

Bayesian setting: target is π(θ)f (y|θ)

When the likelihood f(y|θ) is not in closed form, use a likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating

θ′ ∼ π(θ), z ∼ f(z|θ′),

until the auxiliary variable z is equal to the observed value, z = y.

[Tavare et al., 1997]



Why does it work?!

The proof is trivial:

f(θi) ∝ Σ_{z∈D} π(θi) f(z|θi) I_y(z) ∝ π(θi) f(y|θi) = π(θi|y).

[Accept–Reject 101]


Earlier occurrence

‘Bayesian statistics and Monte Carlo methods are ideally suited to the task of passing many models over one dataset’

[Don Rubin, Annals of Statistics, 1984]

Note Rubin (1984) does not promote this algorithm for likelihood-free simulation but for frequentist intuition on posterior distributions: parameters from posteriors are more likely to be those that could have generated the data.


A as A...pproximative

When y is a continuous random variable, equality z = y is replaced with a tolerance condition,

ρ(y, z) ≤ ε

where ρ is a distance.

Output distributed from

π(θ) Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)

[Pritchard et al., 1999]



ABC algorithm

Algorithm 1 Likelihood-free rejection sampler

for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for

where η(y) defines a (not necessarily sufficient) [summary] statistic
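A minimal runnable sketch of this rejection sampler, in Python rather than the R used later in the slides, with a toy normal model and the identity as summary statistic (all function names are illustrative, not from the tutorial):

```python
import random

def abc_rejection(y_obs, prior_sample, simulate, distance, eps, n):
    """Likelihood-free rejection sampler (sketch of Algorithm 1)."""
    accepted = []
    while len(accepted) < n:
        theta = prior_sample()          # theta' ~ pi(.)
        z = simulate(theta)             # z ~ f(.|theta')
        if distance(z, y_obs) <= eps:   # rho{eta(z), eta(y)} <= eps
            accepted.append(theta)
    return accepted

# toy check: y | theta ~ N(theta, 1), theta ~ U(-10, 10), eta = identity
random.seed(1)
y_obs = 2.0
sample = abc_rejection(
    y_obs,
    prior_sample=lambda: random.uniform(-10, 10),
    simulate=lambda t: random.gauss(t, 1.0),
    distance=lambda z, y: abs(z - y),
    eps=0.5, n=200)
```

With a small ε the accepted θ's concentrate around the values most likely to have produced y_obs.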


Output

The likelihood-free algorithm samples from the marginal in z of:

πε(θ, z|y) = π(θ) f(z|θ) I_{Aε,y}(z) / ∫_{Aε,y×Θ} π(θ) f(z|θ) dz dθ,

where Aε,y = {z ∈ D | ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:

πε(θ|y) = ∫ πε(θ, z|y) dz ≈ π(θ|y).



A toy example

Case of

y | θ ∼ N1(2(θ + 2)θ(θ − 2), 0.1 + θ²)

and

θ ∼ U[−10,10]

when y = 2 and ρ(y, z) = |y − z|

[Richard Wilkinson, Tutorial at NIPS 2013]
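A Python sketch of this toy model (the tutorial's own figures were produced in R), showing how the number of accepted simulations shrinks as the tolerance moves through the values used on the next slides:

```python
import random

random.seed(2)
y = 2.0
N = 20000
draws = []
for _ in range(N):
    t = random.uniform(-10, 10)                 # theta ~ U[-10, 10]
    m = 2 * (t + 2) * t * (t - 2)               # mean 2(theta+2)theta(theta-2)
    z = random.gauss(m, (0.1 + t ** 2) ** 0.5)  # y|theta ~ N(m, 0.1 + theta^2)
    draws.append((t, abs(z - y)))               # rho(y, z) = |y - z|

# accepted counts for the tolerances of the following slides
for eps in (7.5, 5.0, 2.5, 1.0):
    kept = [t for t, d in draws if d <= eps]
    print(eps, len(kept))
```

Smaller ε keeps fewer draws but brings the accepted θ's closer to the true posterior.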

A toy example

ε = 7.5

[Figure: simulated (θ, D) pairs with the acceptance band −ε, +ε around the observation, and the resulting ABC density against the true density, for ε = 7.5]

A toy example

ε = 5

[Figure: simulated (θ, D) pairs with the acceptance band −ε, +ε around the observation, and the resulting ABC density against the true density, for ε = 5]

A toy example

ε = 2.5

[Figure: simulated (θ, D) pairs with the acceptance band −ε, +ε around the observation, and the resulting ABC density against the true density, for ε = 2.5]

A toy example

ε = 1

[Figure: simulated (θ, D) pairs with the acceptance band −ε, +ε around the observation, and the resulting ABC density against the true density, for ε = 1]


ABC as knn

[Biau et al., 2013, Annales de l’IHP]

Practice of ABC: determine the tolerance ε as a quantile of the observed distances, say the 10% or 1% quantile,

ε = εN = qα(d1, . . . , dN)

• Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice

[Blum & François, 2010]

• ABC is a k-nearest neighbour (knn) method with kN = NεN

[Loftsgaarden & Quesenberry, 1965]
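In Python, choosing ε as the α-quantile of the simulated distances is exactly keeping the k = αN nearest neighbours of the observation (toy normal model, identity summary; names are illustrative):

```python
import random

random.seed(3)
N, alpha = 5000, 0.01
y = 2.0
pairs = []
for _ in range(N):
    t = random.uniform(-10, 10)     # theta ~ U[-10, 10]
    z = random.gauss(t, 1.0)        # z ~ f(.|theta)
    pairs.append((abs(z - y), t))   # (distance to observation, theta)

pairs.sort()                        # order by distance to the observation
k = max(1, int(alpha * N))          # knn view: k_N = alpha * N neighbours
eps = pairs[k - 1][0]               # tolerance = alpha-quantile of distances
abc_sample = [t for _, t in pairs[:k]]
```

The accepted sample is the same whether one thinks of it as "distances below the quantile ε" or as "the k nearest simulations".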



ABC consistency

Provided

kN / log log N → ∞ and kN / N → 0

as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1,

(1/kN) Σ_{j=1}^{kN} ϕ(θj) → E[ϕ(θj) | S = s0]

[Devroye, 1982]

Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints

kN → ∞, kN/N → 0, hN → 0 and h_N^p kN → ∞



Rates of convergence

Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like

• when m = 1, 2, 3, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}

• when m = 4, kN ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N

• when m > 4, kN ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}

[Biau et al., 2013]

where p is the dimension of θ and m the dimension of the summary statistics

Drag: Only applies to sufficient summary statistics



Probit modelling on Pima Indian women

Example (R benchmark)

200 Pima Indian women with observed variables

• plasma glucose concentration in oral glucose tolerance test

• diastolic blood pressure

• diabetes pedigree function

• presence/absence of diabetes

Probability of diabetes as a function of the above variables,

P(y = 1|x) = Φ(x1β1 + x2β2 + x3β3),

for the 200 observations, based on a g-prior modelling:

β ∼ N3(0, n (XTX)−1)

Use of the MLE estimates as summary statistics and of the distance

ρ(y, z) = ||β̂(z) − β̂(y)||²



Pima Indian benchmark

Figure: Comparison between density estimates of the marginals on β1 (left), β2 (center) and β3 (right), from ABC rejection samples (red) and MCMC samples (black).


MA example

Case of the MA(2) model

xt = εt + Σ_{i=1}^{2} ϑi ε_{t−i}

Simple prior: uniform prior over the identifiability zone, e.g. a triangle for MA(2)


MA example (2)

ABC algorithm thus made of

1 picking a new value (ϑ1, ϑ2) in the triangle

2 generating an iid sequence (εt)_{−2<t≤T}

3 producing a simulated series (x′t)_{1≤t≤T}

Distance: choice between the basic distance between the series,

ρ((x′t)_{1≤t≤T}, (xt)_{1≤t≤T}) = Σ_{t=1}^{T} (xt − x′t)²,

or a distance between summary statistics like the 2 autocorrelations

τj = Σ_{t=j+1}^{T} xt x_{t−j}
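The two distances above can be sketched in a few lines of Python (illustrative helpers, not code from the tutorial; the MA(2) series is built from an iid Gaussian sequence as in step 2):

```python
import random

def simulate_ma2(th1, th2, T, seed=None):
    """Simulate x_t = eps_t + th1*eps_{t-1} + th2*eps_{t-2}, t = 1..T."""
    rng = random.Random(seed)
    eps = [rng.gauss(0, 1) for _ in range(T + 2)]   # (eps_t) for -2 < t <= T
    return [eps[t + 2] + th1 * eps[t + 1] + th2 * eps[t] for t in range(T)]

def dist_series(x, xp):
    """Basic distance: sum_t (x_t - x'_t)^2."""
    return sum((a - b) ** 2 for a, b in zip(x, xp))

def tau(x, j):
    """Summary statistic tau_j = sum_{t=j+1}^T x_t x_{t-j}."""
    return sum(x[t] * x[t - j] for t in range(j, len(x)))

x = simulate_ma2(0.6, 0.2, 200, seed=4)
xp = simulate_ma2(0.5, 0.1, 200, seed=5)
d_series = dist_series(x, xp)
d_summary = (tau(x, 1) - tau(xp, 1)) ** 2 + (tau(x, 2) - tau(xp, 2)) ** 2
```

The summary-based distance compares only the two autocorrelation statistics, not the raw series.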



Comparison of distance impact

Evaluation of the tolerance on the ABC sample against both distances (ε = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model

[Figure: ABC posterior samples of θ1 and θ2 at the three tolerance levels, for both distances]


Homonymy

The ABC algorithm is not to be confused with the ABC algorithm

The Artificial Bee Colony algorithm is a swarm-based meta-heuristic algorithm that was introduced by Karaboga in 2005 for optimizing numerical problems. It was inspired by the intelligent foraging behavior of honey bees. The algorithm is specifically based on the model proposed by Tereshko and Loengarov (2005) for the foraging behaviour of honey bee colonies. The model consists of three essential components: employed and unemployed foraging bees, and food sources. The first two components, employed and unemployed foraging bees, search for rich food sources (...) close to their hive. The model also defines two leading modes of behaviour (...): recruitment of foragers to rich food sources resulting in positive feedback and abandonment of poor sources by foragers causing negative feedback.

[Karaboga, Scholarpedia]


ABC advances

Simulating from the prior is often poor in efficiency:

Either modify the proposal distribution on θ to increase the density of z's within the vicinity of y...

[Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]

...or view the problem as conditional density estimation and develop techniques to allow for a larger ε

[Beaumont et al., 2002]

...or even include ε in the inferential framework [ABCµ]

[Ratmann et al., 2009]



ABC-NP

Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms

θ* = θ − {η(z) − η(y)}T β̂

[Csilléry et al., TEE, 2010]

where β̂ is obtained by [NP] weighted least square regression on (η(z) − η(y)) with weights

Kδ{ρ(η(z), η(y))}

[Beaumont et al., 2002, Genetics]
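A one-dimensional sketch of this local regression adjustment in Python (Epanechnikov-type kernel weights; all names are illustrative, not from the paper). When θ happens to be exactly linear in η(z), every particle is mapped onto the regression prediction at η(y):

```python
def adjust(thetas, etas, eta_obs, delta):
    """theta* = theta - beta_hat * (eta(z) - eta(y)), beta_hat by weighted LS."""
    # kernel weights K_delta{rho(eta(z), eta(y))}, Epanechnikov-type
    w = [max(0.0, 1 - ((e - eta_obs) / delta) ** 2) for e in etas]
    x = [e - eta_obs for e in etas]          # regressors eta(z) - eta(y)
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * ti for wi, ti in zip(w, thetas))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * ti for wi, xi, ti in zip(w, x, thetas))
    beta = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)  # WLS slope
    return [t - beta * xi for t, xi in zip(thetas, x)]
```

Particles far from η(y) get weight zero and do not influence β̂.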


ABC-NP (regression)

Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012): weight the simulation directly by

Kδ{ρ(η(z(θ)), η(y))}

or

(1/S) Σ_{s=1}^{S} Kδ{ρ(η(zs(θ)), η(y))}

[consistent estimate of f(η|θ)]

Curse of dimensionality: poor estimate when d = dim(η) is large...



ABC-NP (regression)

Use of the kernel weights

Kδ {ρ(η(z(θ)), η(y))}

leads to the NP estimate of the posterior expectation

Σi θi Kδ{ρ(η(z(θi)), η(y))} / Σi Kδ{ρ(η(z(θi)), η(y))}

[Blum, JASA, 2010]
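This Nadaraya–Watson form can be sketched in a few lines of Python (a Gaussian kernel for Kδ is an illustrative choice):

```python
import math

def nw_posterior_mean(thetas, dists, delta):
    """Kernel-weighted posterior mean:
    sum_i theta_i K_delta(d_i) / sum_i K_delta(d_i),
    with d_i = rho(eta(z(theta_i)), eta(y)) and Gaussian K_delta."""
    w = [math.exp(-0.5 * (d / delta) ** 2) for d in dists]
    return sum(t * wi for t, wi in zip(thetas, w)) / sum(w)
```

With equal distances this reduces to the plain sample mean, while simulations far from η(y) contribute essentially nothing.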


ABC-NP (regression)

Use of the kernel weights

Kδ {ρ(η(z(θ)), η(y))}

leads to the NP estimate of the posterior conditional density

Σi Kb(θi − θ) Kδ{ρ(η(z(θi)), η(y))} / Σi Kδ{ρ(η(z(θi)), η(y))}

[Blum, JASA, 2010]


ABC-NP (regression)

[Figure: simulated (S, θ) pairs around sobs with the acceptance band −ε, +ε and the fitted regression line]

In rejection ABC, the red points are used to approximate the histogram. Using regression adjustment, we use the estimate of the posterior mean at sobs and the residuals from the fitted line to form the posterior.


ABC-NP (density estimations)

Other versions incorporating regression adjustments

Σi Kb(θ*i − θ) Kδ{ρ(η(z(θi)), η(y))} / Σi Kδ{ρ(η(z(θi)), η(y))}

In all cases, error

E[ĝ(θ|y)] − g(θ|y) = cb² + cδ² + OP(b² + δ²) + OP(1/nδ^d)

var(ĝ(θ|y)) = (c / nbδ^d)(1 + oP(1))

[Blum, JASA, 2010]



ABC-NCH

Incorporating non-linearities and heteroscedasticities:

θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y)) / σ̂(η(z))

where

• m̂(η) is estimated by non-linear regression (e.g., neural network)

• σ̂(η) is estimated by non-linear regression on the residuals

log{θi − m̂(ηi)}² = log σ²(ηi) + ξi

[Blum & Francois, 2009]



ABC-NCH (2)

Why neural network?

• fights curse of dimensionality

• selects relevant summary statistics

• provides automated dimension reduction

• offers a model choice capability

• improves upon multinomial logistic

[Blum & Francois, 2009]



ABC-MCMC

Markov chain (θ(t)) created via the transition function

θ(t+1) =
  θ′ ∼ Kω(θ′|θ(t))  if x ∼ f(x|θ′) is such that x = y and u ∼ U(0, 1) ≤ [π(θ′) Kω(θ(t)|θ′)] / [π(θ(t)) Kω(θ′|θ(t))],
  θ(t)  otherwise,

has the posterior π(θ|y) as stationary distribution

[Marjoram et al., 2003]



ABC-MCMC (2)

Algorithm 2 Likelihood-free MCMC sampler

Use Algorithm 1 to get (θ(0), z(0))
for t = 1 to N do
  Generate θ′ from Kω(·|θ(t−1)),
  Generate z′ from the likelihood f(·|θ′),
  Generate u from U[0,1],
  if u ≤ [π(θ′) Kω(θ(t−1)|θ′) / π(θ(t−1)) Kω(θ′|θ(t−1))] I_{Aε,y}(z′) then
    set (θ(t), z(t)) = (θ′, z′)
  else
    (θ(t), z(t)) = (θ(t−1), z(t−1)),
  end if
end for

end for


Why does it work?

Acceptance probability does not involve calculating the likelihood, since

[πε(θ′, z′|y) / πε(θ(t−1), z(t−1)|y)] × [q(θ(t−1)|θ′) f(z(t−1)|θ(t−1)) / q(θ′|θ(t−1)) f(z′|θ′)]

= [π(θ′) f(z′|θ′) I_{Aε,y}(z′) / π(θ(t−1)) f(z(t−1)|θ(t−1)) I_{Aε,y}(z(t−1))] × [q(θ(t−1)|θ′) f(z(t−1)|θ(t−1)) / q(θ′|θ(t−1)) f(z′|θ′)]

= [π(θ′) q(θ(t−1)|θ′) / π(θ(t−1)) q(θ′|θ(t−1))] × I_{Aε,y}(z′),

once the likelihood terms f(z′|θ′) and f(z(t−1)|θ(t−1)) cancel and I_{Aε,y}(z(t−1)) = 1 for the current state.


A toy example

Case of

x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1)

under prior θ ∼ N(0, 10)

ABC sampler

thetas=rnorm(N,sd=10)
zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)
eps=quantile(abs(zed-x),.01)
abc=thetas[abs(zed-x)<eps]



ABC-MCMC sampler

metas=rep(0,N)
metas[1]=rnorm(1,sd=10)
zed[1]=x
for (t in 2:N){
  metas[t]=rnorm(1,mean=metas[t-1],sd=5)
  zed[t]=rnorm(1,mean=(1-2*(runif(1)<.5))*metas[t],sd=1)
  if ((abs(zed[t]-x)>eps)||(runif(1)>dnorm(metas[t],sd=10)/dnorm(metas[t-1],sd=10))){
    metas[t]=metas[t-1]
    zed[t]=zed[t-1]}
}

A toy example

x = 2

[Figure: ABC posterior density of θ for x = 2]

A toy example

x = 10

[Figure: ABC posterior density of θ for x = 10]

A toy example

x = 50

[Figure: ABC posterior density of θ for x = 50]


A PMC version

Use of the same kernel idea as ABC-PRC (Sisson et al., 2007) but with IS correction.

Generate a sample at iteration t by

π̂t(θ(t)) ∝ Σ_{j=1}^{N} ω_j^{(t−1)} Kt(θ(t) | θ_j^{(t−1)})

modulo acceptance of the associated xt, and use an importance weight associated with an accepted simulation θ_i^{(t)},

ω_i^{(t)} ∝ π(θ_i^{(t)}) / π̂t(θ_i^{(t)}).

© Still likelihood-free

[Beaumont et al., 2009]


ABC-PMC algorithm

Given a decreasing sequence of approximation levels ε1 ≥ . . . ≥ εT,

1. At iteration t = 1,
   For i = 1, ..., N
     Simulate θ_i^{(1)} ∼ π(θ) and x ∼ f(x|θ_i^{(1)}) until ρ(x, y) < ε1
     Set ω_i^{(1)} = 1/N
   Take τ² as twice the empirical variance of the θ_i^{(1)}'s

2. At iteration 2 ≤ t ≤ T,
   For i = 1, ..., N, repeat
     Pick θ*_i from the θ_j^{(t−1)}'s with probabilities ω_j^{(t−1)}
     generate θ_i^{(t)} | θ*_i ∼ N(θ*_i, σ_t²) and x ∼ f(x|θ_i^{(t)})
   until ρ(x, y) < εt
   Set ω_i^{(t)} ∝ π(θ_i^{(t)}) / Σ_{j=1}^{N} ω_j^{(t−1)} ϕ{σ_t^{−1}(θ_i^{(t)} − θ_j^{(t−1)})}
   Take τ²_{t+1} as twice the weighted empirical variance of the θ_i^{(t)}'s
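A compact Python sketch of the two steps above on a toy normal model (all names and the Gaussian move kernel with twice the weighted variance are as in the algorithm; the toy target is illustrative):

```python
import math
import random

def abc_pmc(y, prior_sample, prior_pdf, simulate, dist, eps_seq, N, seed=0):
    """Sketch of the ABC-PMC algorithm (illustrative, 1-D parameter)."""
    rng = random.Random(seed)
    # iteration t = 1: plain rejection at the largest tolerance, weights 1/N
    thetas, weights = [], []
    while len(thetas) < N:
        t = prior_sample(rng)
        if dist(simulate(t, rng), y) < eps_seq[0]:
            thetas.append(t)
            weights.append(1.0 / N)
    for eps in eps_seq[1:]:
        mean = sum(w * t for w, t in zip(weights, thetas))
        var = sum(w * (t - mean) ** 2 for w, t in zip(weights, thetas))
        sigma = math.sqrt(2.0 * var)        # tau^2 = twice the weighted variance
        new_thetas = []
        while len(new_thetas) < N:
            star = rng.choices(thetas, weights=weights)[0]
            t = rng.gauss(star, sigma)      # move through N(theta*, sigma_t^2)
            if dist(simulate(t, rng), y) < eps:
                new_thetas.append(t)
        # importance weights: prior over the Gaussian mixture proposal
        new_weights = []
        for t in new_thetas:
            mix = sum(w * math.exp(-0.5 * ((t - tj) / sigma) ** 2)
                      for w, tj in zip(weights, thetas)) / (sigma * math.sqrt(2 * math.pi))
            new_weights.append(prior_pdf(t) / mix)
        total = sum(new_weights)
        thetas, weights = new_thetas, [w / total for w in new_weights]
    return thetas, weights

# toy run: theta ~ U(-5, 5), x | theta ~ N(theta, 1), observed y = 0
thetas, weights = abc_pmc(
    0.0,
    prior_sample=lambda r: r.uniform(-5, 5),
    prior_pdf=lambda t: 0.1 if -5 <= t <= 5 else 0.0,
    simulate=lambda t, r: r.gauss(t, 1.0),
    dist=lambda a, b: abs(a - b),
    eps_seq=(3.0, 1.5, 0.75), N=200, seed=6)
```

The weighted sample at the final tolerance approximates the posterior, which here is centred near the observation.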


Sequential Monte Carlo

SMC is a simulation technique to approximate a sequence of related probability distributions πn, with π0 "easy" and πT as target.

Iterated IS as PMC: particles moved from time n−1 to time n via kernel Kn, and use of a sequence of extended targets π̃n,

π̃n(z0:n) = πn(zn) Π_{j=0}^{n−1} Lj(zj+1, zj)

where the Lj's are backward Markov kernels [check that πn(zn) is a marginal]

[Del Moral, Doucet & Jasra, Series B, 2006]


Sequential Monte Carlo (2)

Algorithm 3 SMC sampler

sample z_i^{(0)} ∼ γ0(x) (i = 1, . . . , N)
compute weights w_i^{(0)} = π0(z_i^{(0)}) / γ0(z_i^{(0)})
for t = 1 to T do
  if ESS(w^{(t−1)}) < NT then
    resample the N particles z^{(t−1)} and set the weights to 1
  end if
  generate z_i^{(t)} ∼ Kt(z_i^{(t−1)}, ·) and set the weights to
    w_i^{(t)} = w_i^{(t−1)} πt(z_i^{(t)}) L_{t−1}(z_i^{(t)}, z_i^{(t−1)}) / [π_{t−1}(z_i^{(t−1)}) Kt(z_i^{(t−1)}, z_i^{(t)})]
end for

[Del Moral, Doucet & Jasra, Series B, 2006]


ABC-SMC

[Del Moral, Doucet & Jasra, 2009]

True derivation of an SMC-ABC algorithm.

Use of a kernel Kn associated with target πεn and derivation of the backward kernel

L_{n−1}(z, z′) = πεn(z′) Kn(z′, z) / πεn(z)

Update of the weights

w_{in} ∝ w_{i(n−1)} [Σ_{m=1}^{M} I_{Aεn}(x^m_{in})] / [Σ_{m=1}^{M} I_{Aε_{n−1}}(x^m_{i(n−1)})]

when x^m_{in} ∼ K(x_{i(n−1)}, ·)


ABC-SMCM

Modification: make M repeated simulations of the pseudo-data z given the parameter, rather than using a single [M = 1] simulation, leading to a weight proportional to the number of accepted zi's,

ω(θ) = (1/M) Σ_{i=1}^{M} I_{ρ(η(y),η(zi))<ε}

[limit in M means exact simulation from (tempered) target]


Properties of ABC-SMC

The ABC-SMC method properly uses a backward kernel L(z, z′) to simplify the importance weight and to remove the dependence on the unknown likelihood from this weight. The update of the importance weights is reduced to the ratio of the proportions of surviving particles.

Major assumption: the forward kernel K is supposed to be invariant against the true target [tempered version of the true posterior].

Adaptivity in the ABC-SMC algorithm is only found in the on-line construction of the thresholds εt, slowly enough to keep a large number of accepted transitions.



A mixture example (2)

Recovery of the target, whether using a fixed standard deviation of τ = 0.15 or τ = 1/0.15, or a sequence of adaptive τt's.

[Figure: five density estimates of θ over (−3, 3), each recovering the bimodal target]


ABC inference machine

1 Approximate Bayesian computation

2 ABC as an inference machine: Exact ABC, ABCµ, Automated summary statistic selection

3 ABC for model choice

4 Genetics of ABC


How Bayesian is aBc..?

• may be a convergent method of inference (meaningful? sufficient? foreign?)

• approximation error unknown (w/o massive simulation)

• pragmatic/empirical B (there is no other solution!)

• many calibration issues (tolerance, distance, statistics)

• the NP side should be incorporated into the whole B picture

• the approximation error should also be part of the B inference


Wilkinson’s exact (A)BC

ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function,

πε(θ, z|y) = π(θ) f(z|θ) Kε(y − z) / ∫ π(θ) f(z|θ) Kε(y − z) dz dθ,

with Kε a kernel parameterised by bandwidth ε.

[Wilkinson, 2013]

Theorem

The ABC algorithm based on the assumption of a randomised observation y = z + ξ, ξ ∼ Kε, and an acceptance probability of

Kε(y − z)/M

gives draws from the posterior distribution π(θ|y).
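A Python sketch of this exact-ABC acceptance step, with a Gaussian Kε (bounded by M = Kε(0)); the conjugate normal setup used to check it is illustrative, not from the slides:

```python
import math
import random

def exact_abc(y, prior_sample, simulate, eps, n, seed=0):
    """Accept (theta, z) with probability K_eps(y - z) / M,
    Gaussian K_eps with bound M = K_eps(0)."""
    rng = random.Random(seed)
    norm = 1.0 / (eps * math.sqrt(2.0 * math.pi))   # M = sup of the kernel
    out = []
    while len(out) < n:
        t = prior_sample(rng)
        z = simulate(t, rng)
        k = norm * math.exp(-0.5 * ((y - z) / eps) ** 2)
        if rng.random() <= k / norm:   # i.e. exp(-(y - z)^2 / (2 eps^2))
            out.append(t)
    return out

# conjugate check: theta ~ N(0, 1), z | theta ~ N(theta, 1), and a
# N(0, eps^2) observation error imply theta | y ~ N(y / (2 + eps^2), .)
draws = exact_abc(
    1.0,
    prior_sample=lambda r: r.gauss(0.0, 1.0),
    simulate=lambda t, r: r.gauss(t, 1.0),
    eps=0.5, n=500, seed=7)
```

The accepted θ's target the exact posterior under the assumed measurement-error model, here centred near 1/(2 + ε²) ≈ 0.44.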



How exact a BC?

“Using ε to represent measurement error is straightforward, whereas using ε to model the model discrepancy is harder to conceptualize and not as commonly used”

[Richard Wilkinson, 2013]


How exact a BC?

Pros

• Pseudo-data from true model and observed data from noisy model

• Interesting perspective in that the outcome is completely controlled

• Link with ABCµ and assuming y is observed with a measurement error with density Kε

• Relates to the theory of model approximation [Kennedy & O'Hagan, 2001]

Cons

• Requires Kε to be bounded by M

• True approximation error never assessed

• Requires a modification of the standard ABC algorithm

Page 86: Ab cancun

Noisy ABC

Idea: Modify the data from the start,

ỹ = y0 + εζ1,

with the same scale ε as ABC [see Fearnhead & Prangle], and run ABC on ỹ.

Then ABC produces an exact simulation from the noisy posterior π(θ | ỹ) [Dean et al., 2011; Fearnhead and Prangle, 2012]


Page 88: Ab cancun

Consistent noisy ABC

• Degrading the data improves the estimation performances:
  • noisy ABC-MLE is asymptotically (in n) consistent
  • under further assumptions, the noisy ABC-MLE is asymptotically normal
  • increase in variance of order ε^{−2}
  • likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]

Page 89: Ab cancun

ABCµ

[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]

Use of a joint density

f(θ, ε | y) ∝ ξ(ε | y, θ) × π_θ(θ) × π_ε(ε)

where y is the data and ξ(ε | y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z | θ)

Warning! Replacement of ξ(ε | y, θ) with a non-parametric kernel approximation.


Page 91: Ab cancun

ABCµ details

Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with

\varepsilon_k \sim \xi_k(\varepsilon \mid y, \theta) \approx \hat\xi_k(\varepsilon \mid y, \theta) = \frac{1}{B h_k} \sum_b K\left[\{\varepsilon_k - \rho_k(\eta_k(z_b), \eta_k(y))\}/h_k\right]

then used in replacing ξ(ε | y, θ) with min_k \hat\xi_k(ε | y, θ)

ABCµ involves acceptance probability

\frac{\pi(\theta', \varepsilon')}{\pi(\theta, \varepsilon)}\; \frac{q(\theta', \theta)\, q(\varepsilon', \varepsilon)}{q(\theta, \theta')\, q(\varepsilon, \varepsilon')}\; \frac{\min_k \hat\xi_k(\varepsilon' \mid y, \theta')}{\min_k \hat\xi_k(\varepsilon \mid y, \theta)}


Page 93: Ab cancun

ABCµ multiple errors

[ c© Ratmann et al., PNAS, 2009]

Page 94: Ab cancun

ABCµ for model choice

[ c© Ratmann et al., PNAS, 2009]

Page 95: Ab cancun

Semi-automatic ABC

Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal

ABC is then considered from a purely inferential viewpoint and calibrated for estimation purposes

Use of a randomised (or ‘noisy’) version of the summary statistics

η̃(y) = η(y) + τε

Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic

Page 96: Ab cancun

Summary statistics

Main results:

• Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)!

• Use of the standard quadratic loss function

(θ − θ_0)^T A (θ − θ_0)\,.


Page 98: Ab cancun

Details on Fearnhead and Prangle (F&P) ABC

Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that

(θ, y_sim) ∼ g(θ) f(y_sim | θ)

is accepted with probability (hence the bound)

K[{S(y_sim) − s_obs}/h]

and the corresponding importance weight defined by

π(θ)/g(θ)

[Fearnhead & Prangle, 2012]

Page 99: Ab cancun

Average acceptance asymptotics

For the average acceptance probability/approximate likelihood

p(\theta \mid s_{obs}) = \int f(y_{sim} \mid \theta)\, K[\{S(y_{sim}) - s_{obs}\}/h]\, dy_{sim}\,,

the overall acceptance probability is

p(s_{obs}) = \int p(\theta \mid s_{obs})\, \pi(\theta)\, d\theta = \pi(s_{obs})\, h^d + o(h^d)

[F&P, Lemma 1]

Page 100: Ab cancun

Calibration of h

“This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)” (F&P, p.5)

• turns h into an absolute value while it should be context-dependent and user-calibrated

• only addresses one term in the approximation error and acceptance probability (“curse of dimensionality”)

• h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs)

• d small prevents π(θ|s_obs) from being close to π(θ|y_obs) (“curse of [dis]information”)

Page 101: Ab cancun

Converging ABC

Theorem (F&P)

For noisy ABC, under identifiability assumptions, the expected noisy-ABC log-likelihood,

E\{\log[p(\theta \mid s_{obs})]\} = \int\!\!\int \log[p(\theta \mid S(y_{obs}) + \varepsilon)]\, \pi(y_{obs} \mid \theta_0)\, K(\varepsilon)\, dy_{obs}\, d\varepsilon\,,

has its maximum at θ = θ0.

Page 102: Ab cancun

Converging ABC

Corollary

For noisy ABC, under regularity constraints on summary statistics, the ABC posterior converges onto a point mass on the true parameter value as m → ∞.

Page 103: Ab cancun

Loss motivated statistic

Under quadratic loss function,

Theorem (F&P)

(i) The minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!)

(ii) When h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs)

(iii) If S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs]

E[L(\theta, \hat\theta) \mid y_{obs}] = \mathrm{trace}(A\Sigma) + h^2 \int x^T A x\, K(x)\, dx + o(h^2).

Page 104: Ab cancun

Optimal summary statistic

“We take a different approach, and weaken the requirement for π_ABC to be a good approximation to π(θ|y_obs). We argue for π_ABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (F&P, p.5)

From this result, F&P

• derive their choice of summary statistic,

S(y) = E(θ|y)

[almost sufficient]

• suggest

h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})

as optimal bandwidths for noisy and standard ABC.

Page 105: Ab cancun

Optimal summary statistic

“We take a different approach, and weaken the requirement for π_ABC to be a good approximation to π(θ|y_obs). We argue for π_ABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (F&P, p.5)

From this result, F&P

• derive their choice of summary statistic,

S(y) = E(θ|y)

[wow! EABC[θ|S(yobs)] = E[θ|yobs]]

• suggest

h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})

as optimal bandwidths for noisy and standard ABC.

Page 106: Ab cancun

Caveat

Since E(θ|y_obs) is most usually unavailable, F&P suggest

(i) use a pilot run of ABC to determine a region of non-negligible posterior mass;

(ii) simulate sets of parameter values and data;

(iii) use the simulated sets of parameter values and data to estimate the summary statistic; and

(iv) run ABC with this choice of summary statistic.
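Step (iii) is typically carried out by regressing the simulated parameters on functions of the simulated data; a minimal sketch in a Gaussian location model (the prior, the regression features and all sizes are illustrative assumptions, not F&P's actual settings):

```python
import numpy as np

rng = np.random.default_rng(7)
N, n = 5000, 20

# pilot simulations: theta from the prior, data from the model
theta = rng.normal(0.0, 2.0, size=N)                 # hypothetical N(0, 4) prior
ysim = rng.normal(theta[:, None], 1.0, size=(N, n))  # Gaussian location model

# regress theta on features of the data (mean and median: an illustrative choice)
X = np.column_stack([np.ones(N), ysim.mean(axis=1), np.median(ysim, axis=1)])
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)

def S(y):
    """fitted summary statistic approximating E[theta | y]"""
    return float(np.array([1.0, y.mean(), np.median(y)]) @ beta)

y_obs = rng.normal(0.5, 1.0, size=n)   # pseudo-observed dataset
s_obs = S(y_obs)
```

The fitted linear predictor then serves as the one-dimensional summary statistic S(y) ≈ E[θ|y] in the final ABC run.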

Page 107: Ab cancun

ABC for model choice

1 Approximate Bayesian computation

2 ABC as an inference machine

3 ABC for model choice

4 Genetics of ABC

Page 108: Ab cancun

Bayesian model choice

Several models M1, M2, . . . are considered simultaneously for a dataset y and the model index M is part of the inference. Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm). Goal is to derive the posterior distribution of M, a challenging computational target when models are complex.

Page 109: Ab cancun

Generic ABC for model choice

Algorithm 4 Likelihood-free model choice sampler (ABC-MC)

for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θm from the prior πm(θm)
    Generate z from the model fm(z|θm)
  until ρ{η(z), η(y)} < ε
  Set m(t) = m and θ(t) = θm
end for
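A minimal runnable sketch of this sampler, for a hypothetical pair of models (Gaussian versus Laplace location, sample mean as summary statistic — all choices illustrative, anticipating the toy benchmark later in the deck):

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, T = 100, 0.1, 500

y = rng.normal(0.0, 1.0, size=n)      # observed data (here drawn from model 1)
s_obs = y.mean()                       # summary statistic eta(y): sample mean

models = []
for _ in range(T):
    while True:
        m = int(rng.integers(1, 3))            # m ~ uniform prior on {1, 2}
        theta = rng.normal(0.0, 2.0)           # theta_m ~ prior pi_m
        if m == 1:
            z = rng.normal(theta, 1.0, size=n)                    # M1: Gaussian
        else:
            z = rng.laplace(theta, 1.0 / np.sqrt(2), size=n)      # M2: Laplace, variance one
        if abs(z.mean() - s_obs) < eps:        # rho{eta(z), eta(y)} < eps
            models.append(m)
            break

post_m1 = models.count(1) / T    # frequency estimate of pi(M = 1 | y)
```

Since the sample mean is (nearly) identically distributed under both models, this particular summary leaves the two models indistinguishable — exactly the insufficiency issue discussed in the following slides.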

Page 110: Ab cancun

ABC estimates

Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m,

\frac{1}{T} \sum_{t=1}^{T} \mathbb{I}_{m^{(t)} = m}\,.

Issues with implementation:

• should tolerances ε be the same for all models?

• should summary statistics vary across models (incl. their dimension)?

• should the distance measure ρ vary as well?

Page 111: Ab cancun

ABC estimates

Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m,

\frac{1}{T} \sum_{t=1}^{T} \mathbb{I}_{m^{(t)} = m}\,.

Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights

[Cornuet et al., DIYABC, 2009]


Page 113: Ab cancun

The Great ABC controversy

On-going controversy in phylogeographic genetics about the validity of using ABC for testing

Against: Templeton, 2008, 2009, 2010a, 2010b, 2010c argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!)

Replies: Fagundes et al., 2008, Beaumont et al., 2010, Berger et al., 2010, Csillery et al., 2010 point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC...

Page 114: Ab cancun

Back to sufficiency

If η1(x) is a sufficient statistic for model m = 1 and parameter θ1 and η2(x) is a sufficient statistic for model m = 2 and parameter θ2, (η1(x), η2(x)) is not always sufficient for (m, θm)

© Potential loss of information at the testing level


Page 116: Ab cancun

Limiting behaviour of B12 (T →∞)

ABC approximation:

\widehat{B}_{12}(y) = \frac{\sum_{t=1}^{T} \mathbb{I}_{m_t=1}\, \mathbb{I}_{\rho\{\eta(z_t),\eta(y)\}\le\varepsilon}}{\sum_{t=1}^{T} \mathbb{I}_{m_t=2}\, \mathbb{I}_{\rho\{\eta(z_t),\eta(y)\}\le\varepsilon}}\,,

where the (m_t, z_t)'s are simulated from the (joint) prior.

As T goes to infinity, the limit is

B_{12}^{\varepsilon}(y) = \frac{\int \mathbb{I}_{\rho\{\eta(z),\eta(y)\}\le\varepsilon}\, \pi_1(\theta_1) f_1(z\mid\theta_1)\, dz\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta(z),\eta(y)\}\le\varepsilon}\, \pi_2(\theta_2) f_2(z\mid\theta_2)\, dz\, d\theta_2} = \frac{\int \mathbb{I}_{\rho\{\eta,\eta(y)\}\le\varepsilon}\, \pi_1(\theta_1) f_1^{\eta}(\eta\mid\theta_1)\, d\eta\, d\theta_1}{\int \mathbb{I}_{\rho\{\eta,\eta(y)\}\le\varepsilon}\, \pi_2(\theta_2) f_2^{\eta}(\eta\mid\theta_2)\, d\eta\, d\theta_2}\,,

where f_1^{\eta}(\eta\mid\theta_1) and f_2^{\eta}(\eta\mid\theta_2) are the distributions of η(z).


Page 118: Ab cancun

Limiting behaviour of B12 (ε→ 0)

When ε goes to zero,

B_{12}^{\eta}(y) = \frac{\int \pi_1(\theta_1)\, f_1^{\eta}(\eta(y) \mid \theta_1)\, d\theta_1}{\int \pi_2(\theta_2)\, f_2^{\eta}(\eta(y) \mid \theta_2)\, d\theta_2}\,,

© Bayes factor based on the sole observation of η(y)


Page 120: Ab cancun

Limiting behaviour of B12 (under sufficiency)

If η(y) is a sufficient statistic for both models,

f_i(y \mid \theta_i) = g_i(y)\, f_i^{\eta}(\eta(y) \mid \theta_i)

Thus

B_{12}(y) = \frac{\int_{\Theta_1} \pi(\theta_1)\, g_1(y)\, f_1^{\eta}(\eta(y)\mid\theta_1)\, d\theta_1}{\int_{\Theta_2} \pi(\theta_2)\, g_2(y)\, f_2^{\eta}(\eta(y)\mid\theta_2)\, d\theta_2} = \frac{g_1(y) \int \pi_1(\theta_1)\, f_1^{\eta}(\eta(y)\mid\theta_1)\, d\theta_1}{g_2(y) \int \pi_2(\theta_2)\, f_2^{\eta}(\eta(y)\mid\theta_2)\, d\theta_2} = \frac{g_1(y)}{g_2(y)}\, B_{12}^{\eta}(y)\,.

[Didelot, Everitt, Johansen & Lawson, 2011]

© No discrepancy only when there is cross-model sufficiency


Page 122: Ab cancun

MA(q) divergence

[Boxplots of ABC posterior probabilities of models MA(1) and MA(2) across tolerance levels]

Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equals the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) model with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.

Page 123: Ab cancun

MA(q) divergence

[Boxplots of ABC posterior probabilities of models MA(1) and MA(2) across tolerance levels]

Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equals the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.

Page 124: Ab cancun

Further comments

‘There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η’s: η1 and η1*) the ratio of evidences may no longer be equal to one.’

[Michael Stumpf, Jan. 28, 2011, ’Og]

Using different summary statistics [on different models] may indicate the loss of information brought by each set, but agreement does not lead to trustworthy approximations.

Page 125: Ab cancun

LDA summaries for model choice

In parallel to F&P’s semi-automatic ABC, the selection of the most discriminant subvector out of a collection of summary statistics can be based on Linear Discriminant Analysis (LDA)

[Estoup & al., 2012, Mol. Ecol. Res.]

Solution now implemented in DIYABC.2 [Cornuet & al., 2008, Bioinf.; Estoup & al., 2013]

Page 126: Ab cancun

Implementation

Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models

Selection based on normalized Euclidian distance computed between observed and simulated raw summaries

Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables

Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables

[Cornuet & al., 2008, Bioinformatics]

Page 127: Ab cancun

Implementation

Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models

Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables

When computing LDA functions, weight simulated data with an Epanechnikov kernel

Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables

[Cornuet & al., 2008, Bioinformatics]

Page 128: Ab cancun

LDA advantages

• much faster computation of scenario probabilities via polychotomous regression

• a (much) lower number of explanatory variables improves the accuracy of the ABC approximation, reduces the tolerance ε and avoids extra costs in constructing the reference table

• allows for a large collection of initial summaries

• ability to evaluate Type I and Type II errors on more complex models [more on this later]

• LDA reduces correlation among explanatory variables

When available, using both simulated and real data sets, posterior probabilities of scenarios computed from LDA-transformed and raw summaries are strongly correlated


Page 130: Ab cancun

A stylised problem

Central question to the validation of ABC for model choice:

When is a Bayes factor based on an insufficient statistic T(y) consistent?

Note/warning: the conclusion drawn on T(y) through B^T_12(y) necessarily differs from the conclusion drawn on y through B12(y) [Marin, Pillai, X, & Rousseau, JRSS B, 2013]


Page 132: Ab cancun

A benchmark, if toy, example

Comparison suggested by referee of PNAS paper [thanks!]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), the Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

Four possible statistics:

1 sample mean ȳ (sufficient for M1 if not M2);

2 sample median med(y) (insufficient);

3 sample variance var(y) (ancillary);

4 median absolute deviation mad(y) = med(|y − med(y)|).
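The four candidate statistics are immediate to compute; a quick sketch on one simulated Gaussian sample (seed and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, size=100)     # one sample from M1

med = np.median(y)
stats = {
    "mean": y.mean(),                   # sufficient for M1, not for M2
    "median": med,                      # insufficient
    "variance": y.var(ddof=1),          # ancillary under both models
    "mad": np.median(np.abs(y - med)),  # median absolute deviation
}
```

Only the mad statistic turns out to separate the two models reliably, as the following slides show.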

Page 133: Ab cancun

A benchmark, if toy, example

Comparison suggested by referee of PNAS paper [thanks!]:[X, Cornuet, Marin, & Pillai, Aug. 2011]

Model M1: y ∼ N(θ1, 1) opposed to model M2: y ∼ L(θ2, 1/√2), the Laplace distribution with mean θ2 and scale parameter 1/√2 (variance one).

[Boxplots (n = 100) of posterior probabilities under the Gauss and Laplace models]

Page 134: Ab cancun

Framework

Starting from sample

y = (y1, . . . , yn)

the observed sample, not necessarily iid with true distribution

y ∼ Pn

Summary statistics

T(y) = T^n = (T_1(y), T_2(y), . . . , T_d(y)) ∈ R^d

with true distribution T^n ∼ G_n.

Page 135: Ab cancun

Framework

© Comparison of

– under M1, y ∼ F1,n(·|θ1) where θ1 ∈ Θ1 ⊂ R^{p1}

– under M2, y ∼ F2,n(·|θ2) where θ2 ∈ Θ2 ⊂ R^{p2}

turned into

– under M1, T(y) ∼ G1,n(·|θ1), and θ1|T(y) ∼ π1(·|T^n)

– under M2, T(y) ∼ G2,n(·|θ2), and θ2|T(y) ∼ π2(·|T^n)

Page 136: Ab cancun

Assumptions

A collection of asymptotic “standard” assumptions:

[A1] is a standard central limit theorem under the true model, with asymptotic mean µ0

[A2] controls the large deviations of the estimator T^n from the model mean µ(θ)

[A3] is the standard prior mass condition found in Bayesian asymptotics (di is the effective dimension of the parameter)

[A4] restricts the behaviour of the model density against the true density

[Think CLT!]

Page 137: Ab cancun

Asymptotic marginals

Asymptotically, under [A1]–[A4]

m_i(t) = \int_{\Theta_i} g_i(t \mid \theta_i)\, \pi_i(\theta_i)\, d\theta_i

is such that

(i) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0,

C_l\, v_n^{d-d_i} \le m_i(T^n) \le C_u\, v_n^{d-d_i}

and

(ii) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} > 0,

m_i(T^n) = o_{P^n}\!\left[v_n^{d-\tau_i} + v_n^{d-\alpha_i}\right].

Page 138: Ab cancun

Between-model consistency

Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!

Page 139: Ab cancun

Between-model consistency

Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!

Indeed, if

inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0

then

C_l\, v_n^{-(d_1-d_2)} \le m_1(T^n)\big/m_2(T^n) \le C_u\, v_n^{-(d_1-d_2)}\,,

where C_l, C_u = O_{P^n}(1), irrespective of the true model.

© Only depends on the difference d_1 − d_2: no consistency

Page 140: Ab cancun

Between-model consistency

Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!

Else, if

inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} > inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0

then

\frac{m_1(T^n)}{m_2(T^n)} \ge C_u \min\left(v_n^{-(d_1-\alpha_2)},\, v_n^{-(d_1-\tau_2)}\right)

Page 141: Ab cancun

Checking for adequate statistics

Run a practical check of the relevance (or non-relevance) of T^n:

null hypothesis that both models are compatible with the statistic T^n,

H0 : inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} = 0

against

H1 : inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} > 0

The testing procedure provides estimates of the mean of T^n under each model and checks for equality.

Page 142: Ab cancun

Checking in practice

• Under each model M_i, generate an ABC sample θ_{i,l}, l = 1, . . . , L

• For each θ_{i,l}, generate y_{i,l} ∼ F_{i,n}(·|ψ_{i,l}), derive T^n(y_{i,l}) and compute

\hat\mu_i = \frac{1}{L} \sum_{l=1}^{L} T^n(y_{i,l})\,, \quad i = 1, 2\,.

• Conditionally on T^n(y),

\sqrt{L}\, \{\hat\mu_i - E^{\pi}[\mu_i(\theta_i) \mid T^n(y)]\} \rightsquigarrow N(0, V_i)\,,

• Test for a common mean,

H_0 : \hat\mu_1 \sim N(\mu_0, V_1)\,, \quad \hat\mu_2 \sim N(\mu_0, V_2)

against the alternative of different means,

H_1 : \hat\mu_i \sim N(\mu_i, V_i)\,, \text{ with } \mu_1 \neq \mu_2\,.
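A sketch of this check on the Gauss-versus-Laplace example, using the mad statistic, whose predictive mean differs across the two models (the "posterior" draws of the location parameters below are placeholders rather than genuine ABC output, and all settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
L, n = 200, 100

# placeholder "ABC posterior" draws of the location parameter under each model
theta1 = rng.normal(0.0, 0.1, size=L)
theta2 = rng.normal(0.0, 0.1, size=L)

def mad(x):
    # median absolute deviation, computed row-wise
    return np.median(np.abs(x - np.median(x, axis=-1, keepdims=True)), axis=-1)

# predictive summaries T_n(y_{i,l}) under each model
T1 = mad(rng.normal(theta1[:, None], 1.0, size=(L, n)))                # Gaussian
T2 = mad(rng.laplace(theta2[:, None], 1.0 / np.sqrt(2), size=(L, n)))  # Laplace

mu1, mu2 = T1.mean(), T2.mean()
z = (mu1 - mu2) / np.sqrt(T1.var(ddof=1) / L + T2.var(ddof=1) / L)
# a large |z| rejects H0: both models cannot match the mean of mad
```

Because the mad statistic has different predictive means under the Gaussian and Laplace models, the common-mean hypothesis is rejected and the statistic is relevant for model comparison.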

Page 143: Ab cancun

Toy example: Laplace versus Gauss

[Boxplots of the normalised χ² statistic, without and with mad, under the Gauss and Laplace models]

Page 144: Ab cancun

Genetics of ABC

1 Approximate Bayesian computation

2 ABC as an inference machine

3 ABC for model choice

4 Genetics of ABC

Page 145: Ab cancun

Genetic background of ABC

ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ).

This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC.

[Griffith & al., 1997; Tavare & al., 1999]

Page 146: Ab cancun

Population genetics

[Part derived from the teaching material of Raphael Leblois, ENS Lyon, November 2010]

• Describe the genotypes, estimate the allele frequencies, determine their distribution among individuals, within populations and between populations;

• Predict and understand the evolution of gene frequencies in populations as a result of various factors.

© Analyses the effect of various evolutionary forces (mutation, drift, migration, selection) on the evolution of gene frequencies in time and space.

Page 147: Ab cancun

Wright-Fisher model

• A population of constant size, in which individuals reproduce at the same time.

• Each gene in a generation is a copy of a gene of the previous generation.

• In the absence of mutation and selection, allele frequencies drift inevitably until the fixation of an allele.

• Drift therefore leads to the loss of genetic variation within populations.

Page 148: Ab cancun

Coalescent theory

[Kingman, 1982; Tajima, Tavare, &tc]

Coalescence theory is interested in the genealogy of a sample of genes back in time to the common ancestor of the sample.

Page 149: Ab cancun

Common ancestor

Modelling the genetic drift process by "going back in time" to the common ancestor of a sample of genes: the different lineages merge (coalesce) as we go back into the past.

Page 150: Ab cancun

Neutral mutations

• Under the assumption of neutrality of the genetic markers under study, the mutations are independent of the genealogy, i.e. the genealogy depends only on the demographic processes.

• We therefore construct the genealogy according to the demographic parameters (e.g. N), then add the mutations a posteriori on the branches, from the MRCA to the leaves of the tree.

• This yields polymorphism data under the demographic and mutational models under consideration.

Page 151: Ab cancun

Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium

Kingman's genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2)

Mutations according to the Simple stepwise Mutation Model (SMM):
• dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with probability 1/2


Page 153: Ab cancun

Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium

Observations: leaves of the tree; θ = ?

Kingman's genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2)

Mutations according to the Simple stepwise Mutation Model (SMM):
• dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
• MRCA = 100
• independent mutations: ±1 with probability 1/2

Page 154: Ab cancun

Much more interesting models. . .

• several independent loci: independent gene genealogies and mutations

• different populations, linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, selection pressure, etc.

• larger sample size: usually between 50 and 100 genes

Page 155: Ab cancun

Available population scenarios

Between populations: three types of events, backward in time

• the divergence is the fusion between two populations,

• the admixture is the split of a population into two parts,

• the migration allows the move of some lineages of a population to another.

[Figure 2.2: Example genealogy of five individuals sampled from a single closed population at equilibrium. The sampled individuals are the leaves of the dendrogram; the inter-coalescence times T2, . . . , T5 are independent, with Tk exponentially distributed with parameter k(k − 1)/2.]

[Figure 2.3: Graphical representations of the three types of between-population events in a demographic scenario. Divergence and admixture are instantaneous events: (a) two populations evolve and then fuse in a divergence; (b) three populations evolve in parallel for an admixture, each "tube" carrying the genealogy of a population that evolves independently of the others according to a Kingman coalescent. (c) Migration is slightly more complicated because of the gene flow (no more independence): a single evolutionary process governs the two populations together, and moves of lineages between Pop1 and Pop2 compete with coalescence events.]

Page 156: Ab cancun

A complex scenario

The goal is to discriminate between different population scenarios from a dataset of polymorphism (DNA sample) y observed at the present time.

[Figure 2.1: Example of a complex evolutionary scenario composed of between-population events, involving four sampled populations Pop1, . . . , Pop4 and two unobserved populations Pop5 and Pop6. The branches of the diagram are "tubes" and the demographic scenario constrains the genealogy to stay inside them. Migration between Pop3 and Pop4 over the period [0, t3] is parameterised by the migration rates m and m′. The two admixture events are parameterised by the dates t1 and t3 and the respective admixture rates r and s. The three remaining events are divergences, at t2, t4 and t5, and the event at t′4 corresponds to a change of effective size in population Pop4.]

Page 157: Ab cancun

Demo-genetic inference

Each model is characterized by a set of parameters θ that cover historical (divergence times, admixture times, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors.

The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time.

Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ).

Page 158: Ab cancun

Untractable likelihood

Missing (too missing!) data structure:

f(y \mid \theta) = \int_{\mathcal{G}} f(y \mid G, \theta)\, f(G \mid \theta)\, dG

The genealogies are considered as nuisance parameters.

This problem thus differs from the phylogenetic approach, where the tree is the parameter of interest.

Page 159: Ab cancun

A genuine example of application

Pygmy populations: do they have a common origin? Is there a lot of exchange between pygmy and non-pygmy populations?

Page 160: Ab cancun

Scenarios under competition

Different possible scenarios; choice of scenario by ABC

[Verdu et al. 2009]

Page 161: Ab cancun

Simulation results

Different possible scenarios; choice of scenario by ABC: scenario 1a is strongly supported over the others, which argues for a common origin of the pygmy populations of Western Africa [Verdu et al. 2009]

© Scenario 1A is chosen.

Page 162: Ab cancun

Most likely scenario

Evolutionary scenario: one "tells" a story from these inferences

[Verdu et al. 2009]

Page 163: Ab cancun

Instance of ecological questions [message in a beetle]

• How did the Asian Ladybird beetle arrive in Europe?

• Why do they swarm right now?

• What are the routes of invasion?

• How to get rid of them?

• Why did the chicken cross the road?

[Lombaert & al., 2010, PLoS ONE]

Page 164: Ab cancun

Worldwide invasion routes of Harmonia Axyridis

For each outbreak, the arrow indicates the most likely invasion pathway and the associated posterior probability, with 95% credible intervals in brackets

[Estoup et al., 2012, Molecular Ecology Res.]


Page 166: Ab cancun

A population genetic illustration of ABC model choice

Two populations (1 and 2) having diverged at a fixed known time in the past, and a third population (3) which diverged from one of those two populations (models 1 and 2, respectively).

Observation of 50 diploid individuals per population genotyped at 5, 50 or 100 independent microsatellite loci.

[Figure: divergence scenarios for Model 1 and Model 2]

Page 167: Ab cancun

A population genetic illustration of ABC model choice


Stepwise mutation model: the number of repeats of the mutated gene increases or decreases by one. Mutation rate µ common to all loci, set to 0.005 (single parameter), with uniform prior distribution

µ ∼ U [0.0001, 0.01]
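As a rough sketch of this setup (not the authors' implementation; all names are illustrative), one prior draw of µ and a stepwise-mutation update of a repeat number over a branch of length τ could look like:

```python
import math
import random

def draw_mu(rng):
    # uniform prior on the mutation rate, as on the slide
    return rng.uniform(0.0001, 0.01)

def poisson(lam, rng):
    # Knuth's Poisson generator (the random module has no built-in Poisson)
    n, p, thresh = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= thresh:
            return n
        n += 1

def mutate_repeats(repeat, mu, tau, rng):
    """Stepwise mutation model: over a branch of length tau the
    number of mutations is Poisson(mu * tau), and each mutation
    moves the repeat number up or down by one."""
    for _ in range(poisson(mu * tau, rng)):
        repeat += rng.choice((-1, 1))
    return repeat

rng = random.Random(1)
mu = draw_mu(rng)                                # one prior draw
x = mutate_repeats(20, mu, tau=1000.0, rng=rng)  # mutated repeat count
```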

Page 168: Ab cancun

A population genetic illustration of ABC model choice

Summary statistics associated to the (δµ)2 distance

x_{l,i,j} repeat number of the allele at locus l = 1, . . . , L for individual i = 1, . . . , 100 within population j = 1, 2, 3. Then

$$(\delta\mu)^2_{j_1,j_2} = \frac{1}{L}\sum_{l=1}^{L}\left(\frac{1}{100}\sum_{i_1=1}^{100} x_{l,i_1,j_1} - \frac{1}{100}\sum_{i_2=1}^{100} x_{l,i_2,j_2}\right)^2.$$
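As a minimal illustration (not the authors' code; the function name and the array layout x[l][i][j] are assumptions), the (δµ)² statistic can be computed directly from the repeat numbers:

```python
def delta_mu_sq(x, j1, j2):
    """(delta mu)^2 distance between populations j1 and j2.

    x[l][i][j] is the repeat number at locus l for individual i
    in population j: the statistic averages, over loci, the squared
    difference of the per-population mean repeat numbers."""
    L = len(x)       # number of loci
    n = len(x[0])    # individuals per population (100 on the slide)
    total = 0.0
    for l in range(L):
        mean1 = sum(x[l][i][j1] for i in range(n)) / n
        mean2 = sum(x[l][i][j2] for i in range(n)) / n
        total += (mean1 - mean2) ** 2
    return total / L
```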

Page 169: Ab cancun

A population genetic illustration of ABC model choice

For two copies of locus l with allele sizes x_{l,i,j_1} and x_{l,i',j_2}, most recent common ancestor at coalescence time τ_{j_1,j_2}, gene genealogy distance of 2τ_{j_1,j_2}, hence number of mutations Poisson with parameter 2µτ_{j_1,j_2}. Therefore,

$$\mathbb{E}\left\{\left(x_{l,i,j_1} - x_{l,i',j_2}\right)^2 \,\middle|\, \tau_{j_1,j_2}\right\} = 2\mu\tau_{j_1,j_2}$$

and

                      Model 1    Model 2
E{(δµ)²_{1,2}}        2µ1 t′     2µ2 t′
E{(δµ)²_{1,3}}        2µ1 t      2µ2 t′
E{(δµ)²_{2,3}}        2µ1 t′     2µ2 t
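The conditional expectation 2µτ can be checked by Monte Carlo: the squared displacement of a symmetric ±1 walk with a Poisson(2µτ) number of steps has expectation equal to the Poisson mean. A quick sketch (not part of the original slides; names are illustrative):

```python
import math
import random

def sq_diff_sample(mu, tau, rng):
    # Mutations separating the two copies along a genealogy of total
    # length 2*tau are Poisson(2*mu*tau); each shifts the difference
    # in allele size by +1 or -1.
    lam = 2 * mu * tau
    # Knuth's Poisson generator
    n, p, thresh = 0, 1.0, math.exp(-lam)
    while True:
        p *= rng.random()
        if p <= thresh:
            break
        n += 1
    return sum(rng.choice((-1, 1)) for _ in range(n)) ** 2

rng = random.Random(42)
mu, tau = 0.005, 500.0   # so 2*mu*tau = 5
est = sum(sq_diff_sample(mu, tau, rng) for _ in range(20000)) / 20000
# est should be close to 2*mu*tau = 5
```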

Page 170: Ab cancun

A population genetic illustration of ABC model choice

Thus,

• Bayes factor based only on distance (δµ)²_{1,2} not convergent: if µ1 = µ2, same expectation

• Bayes factor based only on distance (δµ)²_{1,3} or (δµ)²_{2,3} not convergent: if µ1 = 2µ2 or 2µ1 = µ2, same expectation

• if two of the three distances are used, the Bayes factor converges: there is no (µ1, µ2) for which all expectations are equal
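This identifiability argument can be checked numerically. A minimal sketch, assuming for illustration that t′ = 2t (the slides only fix t and t′ as distinct divergence times); the function name and numeric values are hypothetical:

```python
def expectations(mu, model, t=100.0, tp=200.0):
    """Expected (delta mu)^2 distances, ordered ((1,2), (1,3), (2,3)),
    read off the table of expectations; t and tp (= t') are
    illustrative values chosen with tp = 2 * t."""
    if model == 1:
        return (2 * mu * tp, 2 * mu * t, 2 * mu * tp)
    return (2 * mu * tp, 2 * mu * tp, 2 * mu * t)

e1 = expectations(0.004, model=1)   # mu1
e2 = expectations(0.002, model=2)   # mu2, with mu1 = 2 * mu2
# The (1,3) expectations coincide, so that distance alone cannot
# discriminate the models; adding (2,3) breaks the tie.
```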

Page 171: Ab cancun

A population genetic illustration of ABC model choice

[Figure: boxplots over three panels — DM2(12), DM2(13), and DM2(13) & DM2(23) — each for 5, 50 and 100 loci]

Posterior probabilities that the data is from model 1 for 5, 50 and 100 loci

Page 172: Ab cancun

A population genetic illustration of ABC model choice

[Figure: boxplots of p-values for DM2(12) versus DM2(13) & DM2(23)]