
HAL Id: hal-01081888
https://hal.archives-ouvertes.fr/hal-01081888
Submitted on 12 Nov 2014

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

A survey of rare event simulation methods for static input–output models

Jérôme Morio, Mathieu Balesdent, Damien Jacquemart, Christelle Vergé

To cite this version:
Jérôme Morio, Mathieu Balesdent, Damien Jacquemart, Christelle Vergé. A survey of rare event simulation methods for static input–output models. Simulation Modelling Practice and Theory, Elsevier, 2014, 49, pp. 287-304. 10.1016/j.simpat.2014.10.007. hal-01081888


A survey of rare event simulation methods for static input–output models

Jérôme Morio 1,∗, Mathieu Balesdent 2, Damien Jacquemart 2,3, Christelle Vergé 2,4,5

Abstract

Crude Monte-Carlo or quasi Monte-Carlo methods are well suited to characterize events whose associated probabilities are not too low with respect to the simulation budget. For very seldom observed events, such as the collision probability between two aircraft in airspace, these approaches do not lead to accurate results. Indeed, the number of available samples is often insufficient to estimate such low probabilities (at least 10^6 samples are needed to estimate a probability of order 10^{-4} with 10% relative error with Monte-Carlo simulations). In this article, we review different appropriate techniques to estimate rare event probabilities that require a smaller number of samples. These methods can be divided into four main categories: parameterization techniques of probability density function tails, simulation techniques such as importance sampling or importance splitting, geometric methods to approximate the input failure space and, finally, surrogate modelling. Each technique is detailed, its advantages and drawbacks are described, and a synthesis that aims at giving some clues to the following question is given: "which technique to use for which problem?".

Key words: Monte-Carlo methods, Rare event, Input–output model, Simulation

∗ Corresponding author
Email addresses: jerome.morio@onera.fr (Jérôme Morio), mathieu.balesdent@onera.fr (Mathieu Balesdent), damien.jacquemart@onera.fr (Damien Jacquemart), christelle.verge@onera.fr (Christelle Vergé).
1 Onera - The French Aerospace Lab, BP 74025, 31055 Toulouse Cedex, France. Tel.: +33 5 62 25 26 63
2 Onera - The French Aerospace Lab, BP 80100, 91123 Palaiseau Cedex, France
3 INRIA Rennes, ASPI Applications of interacting particle systems to statistics, Campus de Beaulieu, 35042 Rennes, France
4 INRIA Bordeaux, 351 cours de la Libération, 33405 Talence Cedex, France
5 CNES, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9, France


1. Introduction

Rare event estimation has become a large area of research in the reliability engineering and system safety domains. A significant number of methods, from sampling to extreme value theory, has been proposed to reduce the computational burden of rare event estimation. However, it is often difficult to determine which algorithm is the most adapted to a given problem. Moreover, the existing survey articles on rare events are often focused on specific algorithms [1–3]. The novelties of this article are thus to provide a broad view of the currently available techniques to estimate rare event probabilities, described with a unified notation, and to provide some clues to answer this question: which rare event technique is the most adapted to a given situation?

The general problem considered in this article is analysed in a first section, and then all the different methods are described separately. Their advantages and drawbacks are also given. Finally, a synthesis helps the reader to determine the most appropriate method for a given rare event estimation problem.

Let us consider a d-dimensional random vector X with a probability density function (PDF) h_0, a continuous positive scalar function φ : R^d → R, and a threshold S. The different components of X will be denoted X = (X^1, X^2, ..., X^d) in the following. The function φ is static, i.e., it does not depend on time, and represents for instance an input–output model. This kind of model is notably used in numerous engineering applications [4–9]. We assume that the output Y = φ(X) is a scalar random variable. In this article, we propose to review different algorithms that can be efficient to estimate the probability P = P(φ(X) > S) when this quantity is rare relatively to the available simulation budget N, that is, when P < 1/N. For the sake of conciseness, the issue of extreme quantile estimation is not addressed, even if the vast majority of the methods presented in this paper can be adapted to this specific case. The case of dynamic systems modelled with Markov chains is also not considered in this paper. Specific algorithm extensions for large complex systems modelled by a network or a coherent fault tree are completely detailed in [10] and will not be much developed here. The latter corresponds to the case where the inputs X^i, i = 1, ..., d, follow a Bernoulli distribution and the output is equivalent to an indicator function.

2. Monte-Carlo methods

A simple way to estimate a probability is to consider crude Monte-Carlo (CMC) [11–16]. For that purpose, one generates N independent and identically distributed (i.i.d.) samples X_1, ..., X_N from the PDF h_0 and computes their outputs with the function φ: φ(X_1), ..., φ(X_N). The probability P(φ(X) > S), also called failure probability, is then estimated with

\hat{P}_{CMC} = \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > S},   (1)

where 1_{φ(X_i)>S} is equal to 1 if φ(X_i) > S and 0 otherwise. This estimate converges to the true probability, as shown by the law of large numbers [13]. The positive and negative aspects of CMC are described in Table 1.

Table 1. Advantages and drawbacks of CMC methods.
  Advantages: Simple implementation; Information on φ not needed; No bias.
  Drawbacks: Slow convergence; Significant simulation budget for rare events.

A possible indicator of the estimation efficiency is notably its relative deviation. The relative deviation, or relative error, RE of an estimator \hat{P} of P is given by the following ratio:

RE(\hat{P}) = \frac{\sigma_{\hat{P}}}{E(\hat{P})},   (2)

with σ_{\hat{P}} the standard deviation of \hat{P} and E the mathematical expectation. The relative error is said to be bounded when RE(\hat{P}) remains bounded as P → 0 [17,18]. In that case, the number of samples needed to reach a specified relative error is bounded whatever the rarity of the event φ(X) > S. The logarithmic efficiency LE can also be defined for an unbiased estimator \hat{P} with [17,18]

LE(\hat{P}) = \lim_{P \to 0} \frac{\log\left(E(\hat{P}^2)\right)}{\log(P)} = 2.   (3)

Logarithmic efficiency is a necessary but not sufficient condition for bounded relative error. Characterizing the rare event probability estimate with these concepts is very important, even if they are often difficult to verify in practice.

Since \hat{P}_{CMC} is unbiased, the relative error of the estimator \hat{P}_{CMC} is given by the ratio σ_{\hat{P}_{CMC}}/P, with σ_{\hat{P}_{CMC}} the standard deviation of \hat{P}_{CMC}. Knowing the true probability P of the event φ(X) > S, one has [11,19]

\frac{\sigma_{\hat{P}_{CMC}}}{P} = \frac{1}{\sqrt{N}} \frac{\sqrt{P - P^2}}{P}.   (4)

Considering rare event probability estimation, that is, when P takes low values, one obtains

\lim_{P \to 0} \frac{\sigma_{\hat{P}_{CMC}}}{P} = \lim_{P \to 0} \frac{1}{\sqrt{NP}} = +\infty.   (5)

The relative deviation is consequently unbounded. For instance, to estimate a probability P of order 10^{-4} with a 10% relative deviation, at least 10^6 samples are required. The simulation budget is thus an issue when the computation time required to obtain a sample φ(X_i) is not negligible. CMC is thus not adapted to rare event estimation, and a wide collection of statistical and simulation methods has been developed. The following sections describe the different available alternatives to CMC to improve probability estimations, i.e., to reduce the number of required samples, increase the estimation accuracy, and thus decrease RE(\hat{P}).
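As an illustration, the following minimal Python sketch implements the CMC estimate of Eq. 1 and the empirical version of the relative error of Eq. 4, for a hypothetical toy model φ (the sum of the input components) and arbitrarily chosen values of N and S; it is a sketch under these assumptions, not part of the original article.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Hypothetical toy input-output model: sum of the input components.
    return x.sum(axis=1)

d, N, S = 2, 100_000, 4.0           # dimension, simulation budget, threshold
X = rng.standard_normal((N, d))     # i.i.d. samples from h0 = N(0, I_d)
hits = phi(X) > S                   # indicators 1_{phi(X_i) > S}

P_cmc = hits.mean()                                      # Eq. (1)
re = np.sqrt(P_cmc - P_cmc**2) / (np.sqrt(N) * P_cmc)    # empirical version of Eq. (4)
print(f"P_CMC = {P_cmc:.2e}, relative error ~ {100 * re:.1f}%")
```

For this mild threshold the estimate is usable, but lowering P by two orders of magnitude without increasing N would make the relative error explode, as Eq. 5 predicts.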

3. Statistical techniques

Statistical techniques make it possible to derive a probability estimate and associated confidence intervals from a fixed set of samples φ(X_1), ..., φ(X_N). The main statistical approaches, extreme value theory and large deviation theory, model the behaviour of the PDF tails. Let us review their theoretical foundations.


3.1. Extreme value theory

Extreme value theory (EVT) [20,21] characterizes the distribution tails of a random variable, based on a reasonable number of observations. Thanks to its general conditions of application, this theory has been widely used for describing extreme meteorological phenomena, with applications in hydrology [22] and snowfall [23], but also in finance and insurance [20,24], and engineering [25].

3.1.1. Law of sample maxima

EVT is notably very useful when one has to work with only a fixed set of data. One consequently assumes in the following that a finite set of i.i.d. samples φ(X_1), ..., φ(X_N) of the output is available, but also that one cannot generate new samples of φ(X). The associated ordered sample set is defined by φ(X_(1)) ≤ φ(X_(2)) ≤ ... ≤ φ(X_(N)). EVT enables one to estimate, for some threshold S, the probability P(φ(X) > S).

The founding theorem of EVT [20,26,27] states that, under some conditions, the maxima of an i.i.d. sequence converge to a generalized extreme value (GEV) distribution G_ξ, which admits the following cumulative distribution function (CDF)

G_\xi(x) = \begin{cases} \exp(-\exp(-x)), & \text{for } \xi = 0, \\ \exp\left(-(1+\xi x)^{-1/\xi}\right), & \text{for } \xi \neq 0. \end{cases}   (6)

The set of GEV distributions is composed of three distinct types, characterized by ξ = 0, ξ > 0 and ξ < 0, that correspond to the Gumbel, Fréchet and Weibull distributions respectively. Let us define G, the CDF of the i.i.d. samples φ(X_1), ..., φ(X_N).

Theorem 3.1 Suppose there exist a_N and b_N, with a_N > 0, such that, for all y ∈ R,

P\left(\frac{\phi(X_{(N)}) - b_N}{a_N} \le y\right) = G^N(a_N y + b_N) \longrightarrow G^*(y) \quad \text{as } N \to \infty,

where G^* is a non-degenerate CDF; then G^* is a GEV distribution G_ξ. In this case, one denotes G ∈ MDA(ξ) (MDA = maximum domain of attraction).

The sequences a_N and b_N are computed in [20] for most well-known PDFs. An approximation of P(φ(X) > S) [20] for large values of S and N can also be obtained:

P_{EVT}(\phi(X) > S) \approx \frac{1}{N}\left(1 + \xi\left(\frac{S - b_N}{a_N}\right)\right)^{-1/\xi}.   (7)

The GEV approach is notably used when only samples of maxima are available. In that case, the different parameters of the GEV distribution are obtained by determining maximum likelihood or probability weighted moment estimators. When samples of maxima are not available, it is required to group the samples φ(X_1), ..., φ(X_N) into blocks and fit the GEV using the maximum of each block (block maxima method). The main difficulty is to determine an efficient sample size for the different blocks.
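The short Python sketch below illustrates the block maxima approach on a synthetic, hypothetical data set (standard Gaussian outputs, block size chosen by hand); it relies on scipy.stats.genextreme, whose shape parameter corresponds to −ξ, and uses the relation between the block-maximum exceedance probability and P(φ(X) > S) for rare thresholds. It is only a sketch under these assumptions.

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(1)

# Hypothetical fixed set of outputs phi(X_1), ..., phi(X_N) (here standard Gaussian).
N, block_size = 100_000, 500
y = rng.standard_normal(N)

# Block maxima: split the N samples into blocks and keep the maximum of each block.
maxima = y.reshape(-1, block_size).max(axis=1)

# Fit a GEV distribution to the block maxima (scipy's shape c corresponds to -xi).
c, loc, scale = genextreme.fit(maxima)

S = 4.5
# P(block maximum > S) from the fitted GEV; for a rare threshold, P(phi(X) > S) is
# approximately this exceedance probability divided by the block size.
p_block = genextreme.sf(S, c, loc=loc, scale=scale)
print("P_EVT ~", p_block / block_size, " exact:", norm.sf(S))
```

The accuracy of the extrapolation depends strongly on the block size and on the quality of the GEV fit, which is precisely the tuning difficulty mentioned above.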

3.1.2. Peak over threshold approach

Instead of grouping the samples into block maxima, the peak over threshold (POT) approach considers the largest samples φ(X_i) to estimate the probability P(φ(X) > S).

There are two equivalent ways of analyzing extremes with POT. The most common is to characterize the distribution of samples above a threshold u, which is given by the generalized Pareto CDF. An alternative is to use a Poisson point process which counts the number of threshold exceedances. This approach is not developed in this article, but one can refer to [27] for more details. The first paper linking EVT with the distribution of a threshold exceedance is [28]. Later, De Haan obtained a result of the same type, with a slightly simplified conclusion, using slowly varying functions [29]. The following theorem [20] can then be obtained:

Theorem 3.2 Let us assume that the distribution function G of the i.i.d. samples φ(X_1), ..., φ(X_N) is continuous. Set y* = sup{y, G(y) < 1} = inf{y, G(y) = 1}. Then, the two following assertions are equivalent:
(i) G ∈ MDA(ξ),
(ii) there exists a positive and measurable function u ↦ β(u) such that

\lim_{u \to y^*} \; \sup_{0 < y < y^* - u} \left| G_u(y) - H_{\xi,\beta(u)}(y) \right| = 0,

where G_u(y) = P(φ(X) − u ≤ y | φ(X) > u), and H_{ξ,β(u)} is the CDF of a generalized Pareto distribution (GPD) with shape parameter ξ and scale parameter β(u). The expression of the GPD distribution function is the following:

H_{\xi,\beta}(x) = \begin{cases} 1 - \exp\left(-\frac{x}{\beta}\right), & \text{for } \xi = 0, \\ 1 - \left(1 + \frac{\xi x}{\beta}\right)^{-1/\xi}, & \text{for } \xi \neq 0. \end{cases}   (8)

This theorem is in fact useful to estimate a probability of exceedance. Indeed, the probability P(φ(X) > S) can be rewritten as

P(\phi(X) > S) = P(\phi(X) > S \mid \phi(X) > u)\, P(\phi(X) > u)   (9)

for S > u. A natural estimate of P(φ(X) > u) is given by

P_{CMC}(\phi(X) > u) = \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > u}.   (10)

With Theorem 3.2 and for a sufficiently large value of u, one obtains

P(\phi(X) > S \mid \phi(X) > u) = 1 - H_{\xi,\beta(u)}(S - u).   (11)

The estimate of P(φ(X) > S) is then built with

P_{POT}(\phi(X) > S) = \left(\frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > u}\right) \times \left(1 - H_{\xi,\beta(u)}(S - u)\right).   (12)

The mathematical justification of Eq. 11 and Eq. 12 is notably discussed in [21], [30], [31], or [32] for a given set of samples, to determine whether this set is suitable for the application of POT. Three parameters have to be determined in the POT probability estimate of Eq. 12: the threshold u and the couple (ξ, β(u)). The choice of u is very influential since it determines the samples that are used in the estimation of (ξ, β(u)). Indeed, a high threshold leads to considering only a small number of samples in the estimation of (ξ, β(u)), whose estimate can then be spoiled by a large variance, whereas a low threshold introduces a bias in the probability estimate [33]. There are several methods to determine a valuable threshold u knowing the samples. The most well-known ones are the Hill plot and the mean excess plot [20]. These methods are nevertheless very empirical since they are based on graphical interpretation. It is often necessary in practice to compare the estimates of u given by the different methods. Once the value of u is set, the parameters (ξ, β(u)) are often estimated by maximum likelihood [34] or, more occasionally, by the method of moments [35]. The estimate P_POT(φ(X) > S) given in Eq. 12 for S > u is then completely defined. A review of these different methods can be found in [36]. It is not possible, to our knowledge, to control the probability error estimate in EVT. Nevertheless, the use of bootstrap on the samples φ(X_1), ..., φ(X_N) [37] can give some information on the efficiency of EVT.
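A minimal Python sketch of the POT estimate of Eq. 12 is given below for a synthetic, hypothetical data set (exponentially distributed outputs, so that the exact tail is known) and a threshold u fixed by hand at the empirical 99% quantile rather than chosen by the graphical methods mentioned above.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)

# Hypothetical fixed set of outputs (exponential, so the true tail is exp(-S)).
N, S = 100_000, 10.0
y = rng.exponential(size=N)

u = np.quantile(y, 0.99)                 # POT threshold (hand-picked here)
exceedances = y[y > u] - u

# Fit a GPD to the excesses over u (location fixed at 0), cf. Theorem 3.2.
xi, _, beta = genpareto.fit(exceedances, floc=0)

# Eq. (12): P_POT = P(phi(X) > u) * (1 - H_{xi, beta(u)}(S - u)).
p_u = (y > u).mean()
P_pot = p_u * genpareto.sf(S - u, xi, loc=0, scale=beta)
print("P_POT ~", P_pot, " exact:", np.exp(-S))
```

Note that the estimate extrapolates well beyond the available samples only because the fitted GPD is an adequate model of the excesses; a poorly chosen u would bias it, as discussed above.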

3.1.3. Block maxima versus POT

The POT method takes into account all relevant high samples of φ(X_1), ..., φ(X_N), whereas the block maxima method can miss some of these high samples and, at the same time, consider some lower samples in its probability estimation. Thus, POT seems to be more appropriate for modelling the sample PDF tail. Nevertheless, the block maxima method is preferable when the available samples are not exactly i.i.d. or when only samples of maxima are available. For instance, the samples of a monthly river maximum height correspond to this situation. Finally, the tuning of the block maxima size turns out to be easier than the tuning of the POT threshold u in many situations [38]. The advantages and drawbacks of EVT are presented in Table 2.

Table 2. Advantages and drawbacks of EVT.
  Advantages: No need to resample; Can be applied with a relatively low value of N.
  Drawbacks: Complex estimation of the adequate parameters (u, ξ, β(u)) or of the block maxima size; Less efficient than simulation methods when resampling is possible.

3.2. Large deviation theory

Large deviation theory (LDT) characterizes the asymptotic behaviour of PDF sequence tails [39–41]; more precisely, it analyses how a PDF sequence tail deviates from its typical behaviour described by the law of large numbers. LDT can be used to evaluate the convergence of rare event algorithms [42–46]. Let us define H_N = J(φ(X_1), ..., φ(X_N)), a random variable indexed by N with J a continuous scalar function, H its mathematical expectation, and V_N = H_N − H. One says that V_N satisfies the principle of large deviations with a continuous rate function I if the following limit exists:

\lim_{N \to \infty} \frac{1}{N} \ln\left[P(|V_N| > \gamma)\right] = -I(\gamma).   (13)

The existence of this limit implies, for a large value of N, that

P(|V_N| > \gamma) \approx \exp\left(-N I(\gamma)\right).   (14)

The probability decays exponentially as N grows to infinity, at a rate depending on γ. This approximation is a well-known result of LDT. If the limit does not exist, then P(|V_N| > γ) has a too singular behaviour or decreases faster than exponentially. If the limit is equal to 0, then the tail P(|V_N| > γ) decreases with N more slowly than exp(−Na) with a > 0. The computation of the rate function I is not obvious but can be obtained through the Gärtner-Ellis theorem [47]. Let us define the function λ(θ) of V_N by

\lambda(\theta) = \lim_{N \to \infty} \frac{1}{N} \ln\left[E\left(\exp(N \theta V_N)\right)\right],   (15)

with θ ∈ R.

Theorem 3.3 (Gärtner-Ellis theorem) If the function λ(θ) of the variable V_N exists and is differentiable for all θ ∈ R, then V_N satisfies the principle of large deviations and I(γ) is given by

I(\gamma) = \sup_{\theta \in \mathbb{R}} \left[\theta \gamma - \lambda(\theta)\right].

In the specific case of a scalar function J, one can derive the Cramér theorem from the Gärtner-Ellis theorem [47].

Theorem 3.4 (Cramér theorem) If V_N = \frac{1}{N} \sum_{i=1}^{N} J(\phi(X_i)), where the random variables J(φ(X_i)) are i.i.d., the rate function is given by

I(\gamma) = \sup_{\theta \in \mathbb{R}} \left[\theta \gamma - \lambda(\theta)\right],

with

\lambda(\theta) = \ln\left[E\left(\exp\left(\theta J(\phi(X))\right)\right)\right].

This theorem only holds for light-tailed distributions.

Let us consider the Monte-Carlo probability estimate given in Eq. 1. In that case, one has J(φ(·)) = 1_{φ(·)>S}. The random variable J(φ(X_i)) follows a Bernoulli distribution of mean P. The sequence V_N is defined by

V_N = \left(\frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > S}\right) - P.   (16)

The functions λ(θ) and I(γ) can be derived for some well-known PDFs. In the case of Bernoulli distributions of mean P, one has

\lambda(\theta) = \ln\left(P \exp(\theta) + 1 - P\right),   (17)

and

I(\gamma) = \gamma \ln\left(\frac{\gamma}{P}\right) + (1 - \gamma) \ln\left(\frac{1 - \gamma}{1 - P}\right).   (18)

One can then obtain the convergence speed of the Monte-Carlo probability estimate as a function of the number of samples with the following equation:

\lim_{N \to \infty} \frac{1}{N} \ln\left[P(|V_N| > \gamma)\right] = -I(\gamma) = -\gamma \ln\left(\frac{\gamma}{P}\right) - (1 - \gamma) \ln\left(\frac{1 - \gamma}{1 - P}\right).   (19)

The quantity I(γ) corresponds to the relative entropy (Kullback-Leibler divergence) of a coin toss with bias γ with respect to the true value P. In a lot of situations, the large deviation rate function is the Kullback-Leibler divergence [47].

LDT cannot in fact be applied directly to determine a rare event probability in a realistic practical case where the density of Y is not known a priori. LDT can be useful to analyze the deviation of a probability estimate, notably if the probability estimate is a sum of random variables, as shown by Eq. 19 for the CMC estimate. Specific surveys on LDT can be found in [3,48].
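As a numerical illustration of Eqs. 17–19, the short Python sketch below evaluates the Bernoulli rate function I(γ) for hand-picked values of P and γ and compares −I(γ) with (1/N) log of the exact binomial tail for increasing N; the agreement is only in the exponential rate, as LDT predicts. The values of P, γ and N are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.stats import binom

P, gamma = 0.3, 0.32   # Bernoulli mean and deviation level (illustrative values)

# Rate function of Eq. (18): relative entropy between Bernoulli(gamma) and Bernoulli(P).
I = gamma * np.log(gamma / P) + (1 - gamma) * np.log((1 - gamma) / (1 - P))

for N in (100, 1_000, 10_000, 100_000):
    # Exact tail of the empirical mean of N Bernoulli(P) variables, P(mean >= gamma).
    log_tail = binom.logsf(np.ceil(N * gamma) - 1, N, P)
    # Eq. (13): (1/N) log of this tail should approach -I(gamma) as N grows.
    print(f"N={N:>7d}  (1/N) log P = {log_tail / N:+.5f}   -I(gamma) = {-I:+.5f}")
```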

4. Importance sampling

4.1. Principle of importance sampling

The objective of importance sampling (IS) is to reduce the variance of the Monte-Carlo estimator P_CMC [17,19,49–53]. The main idea is to generate the samples X_1, ..., X_N with an auxiliary PDF h that is able to generate more samples such that φ(X) > S than the PDF h_0, and then to introduce a weight in the probability estimate to take into account the change in the PDF generating the samples. The IS probability estimate P_IS is then given by

P_{IS} = \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > S} \frac{h_0(X_i)}{h(X_i)}.   (20)

The term P_IS is an unbiased estimate of the probability P. Its variance is given by the following equation:

Var(P_{IS}) = \frac{Var\left(1_{\phi(X) > S}\, w(X)\right)}{N},   (21)

with w(X) = h_0(X)/h(X). The term w(X) is often called the likelihood ratio in the importance sampling literature. The variance of P_IS strongly depends on the choice of h. If h is well chosen, the IS estimate has a much smaller variance than the Monte-Carlo estimate, and conversely. The objective of IS is to decrease the estimation variance, and one can thus define an optimal IS auxiliary density that minimizes the variance Var(P_IS). Since variances are non-negative quantities, the optimal auxiliary density h_opt is determined by cancelling the variance in Eq. 21. It is well known that h_opt is then defined by [54]

h_{opt}(X) = \frac{1_{\phi(X) > S}\, h_0(X)}{P}.   (22)

The optimal auxiliary density h_opt unfortunately depends on the probability P that one tries to estimate and is unusable in practice. Nevertheless, h_opt can be useful to determine an efficient sampling PDF. Indeed, a valuable auxiliary sampling PDF h will be close to the PDF h_opt relative to a given criterion. An optimization of the auxiliary sampling PDF is then necessary. In some specific cases or for specific functions φ, the importance sampling probability estimate can have a bounded relative error, as demonstrated in [55,56], or logarithmic efficiency [57,58].
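The following minimal Python sketch implements the IS estimate of Eq. 20 with a hand-picked auxiliary density (the input Gaussian translated toward the failure region, anticipating the simple change of measure of Section 4.4) and a toy model φ; both the model and the shift are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def phi(x):
    # Hypothetical toy model: sum of the d input components.
    return x.sum(axis=1)

d, N, S = 2, 10_000, 6.0
# Auxiliary density h: the input Gaussian translated by c toward the failure region
# (a hand-picked choice; CE or NAIS, described below, would optimise it automatically).
c = np.full(d, S / d)
X = rng.standard_normal((N, d)) + c

# Likelihood ratio w(X) = h0(X)/h(X) for h0 = N(0, I_d) and h = N(c, I_d).
log_w = norm.logpdf(X).sum(axis=1) - norm.logpdf(X - c).sum(axis=1)
P_is = np.mean((phi(X) > S) * np.exp(log_w))            # Eq. (20)
print("P_IS ~", P_is, " exact:", norm.sf(S / np.sqrt(d)))
```

With roughly half of the auxiliary samples falling in the failure region, a probability of order 10^{-5} is estimated here with only 10^4 samples, which CMC could not achieve.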

Specific surveys on IS have been proposed, such as [1,59], and thus the complete list of possible importance sampling algorithms will not be described, for the sake of conciseness. We only review the main algorithms in the next sections.


4.2. Cross-entropy optimization of the importance sampling auxiliary density

Let us define h_λ, a family of PDFs indexed by a parameter λ ∈ Δ, where Δ is the multidimensional space of the PDF parameters. The parameter λ is, for instance, the mean and the covariance matrix in the case of Gaussian densities. The objective of IS with cross-entropy (CE) is to determine the parameter λ_opt that minimizes the Kullback-Leibler divergence between h_{λ_opt} and h_opt [60,61]. The value of λ_opt is thus obtained with

\lambda_{opt} = \arg\min_{\lambda \in \Delta} D(h_{opt}, h_\lambda),   (23)

where D is the Kullback-Leibler divergence, defined between a PDF q and a PDF p by

D(q, p) = \int_{\mathbb{R}^d} q(x) \ln(q(x))\,dx - \int_{\mathbb{R}^d} q(x) \ln(p(x))\,dx.   (24)

Determining the parameter λ_opt with Eq. 23 is not obvious since it depends on the unknown PDF h_opt. In fact, it can be shown [60] that Eq. 23 is equivalent to the following one:

\lambda_{opt} = \arg\max_{\lambda \in \Delta} E\left[1_{\phi(X) > S} \ln\left(h_\lambda(X)\right)\right].   (25)

In practice, one does not focus directly on Eq. 25 since it requires the knowledge of some samples of X such that φ(X) > S. In most realistic applications, this is not the case. Thus, one proceeds iteratively to estimate λ_opt with an increasing sequence of thresholds

\gamma_0 < \gamma_1 < \gamma_2 < ... < \gamma_k < ... \le S,   (26)

chosen adaptively using a quantile definition. At iteration k, the value λ_{k−1} is available and one determines in practice

\lambda_k = \arg\max_{\lambda \in \Delta} \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > \gamma_k} \frac{h_0(X_i)}{h_{\lambda_{k-1}}(X_i)} \ln(h_\lambda(X_i)),   (27)

where the samples X_1, ..., X_N are generated with h_{λ_{k−1}}. The probability P_CE is then estimated with IS at the last iteration. The cross-entropy optimization algorithm for the IS density is described more precisely by the following scheme:
(i) k = 1, define h_{λ_0} = h_0 and set ρ ∈ ]0, 1[.
(ii) Generate the population X_1, ..., X_N according to the PDF h_{λ_{k−1}} and apply the function φ in order to have Y_1 = φ(X_1), ..., Y_N = φ(X_N).
(iii) Compute γ_k = min(S, Y_ρ), where Y_ρ denotes the empirical ρ-quantile of Y_1, ..., Y_N.
(iv) Optimize the parameters of the auxiliary PDF family with

\lambda_k = \arg\max_{\lambda \in \Delta} \frac{1}{N} \sum_{i=1}^{N} \left[1_{\phi(X_i) > \gamma_k} \frac{h_0(X_i)}{h_{\lambda_{k-1}}(X_i)} \ln\left(h_\lambda(X_i)\right)\right].

(v) If γ_k < S, set k ← k + 1 and go back to step (ii).
(vi) Estimate the probability P_{CE}(\phi(X) > S) = \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > S} \frac{h_0(X_i)}{h_{\lambda_{k-1}}(X_i)}.

The advantages and drawbacks of CE are presented in Table 3. CE is a very practical algorithm to approximate the optimal sampling density. Nevertheless, the choice of the parametric density family h_λ has to be made carefully to obtain valuable results. Due to the adaptiveness of the algorithm, it is difficult to ensure the robustness (logarithmic efficiency) of the CE estimate in the general case [62]. The concept of probabilistic bounded relative error has therefore been proposed.

Table 3. Advantages and drawbacks of CE.
  Advantages: Simple optimization for exponential PDF families; Fast computation.
  Drawbacks: Strong influence of the initial parametric density choice; Difficult to apply in cases where the optimal auxiliary density is multimodal.
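The Python sketch below follows steps (i)–(vi) above for a Gaussian parametric family, for which the maximization of step (iv) has the well-known closed-form solution given by the weighted mean and covariance of the elite samples. The toy model φ, the quantile level ρ and the sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn, norm

rng = np.random.default_rng(4)

def phi(x):
    return x.sum(axis=1)                 # hypothetical toy model

d, N, S, rho = 2, 2_000, 6.0, 0.9
h0 = mvn(mean=np.zeros(d), cov=np.eye(d))
mu, cov = np.zeros(d), np.eye(d)         # parameters lambda_k of the Gaussian family

gamma = -np.inf
while gamma < S:
    hk = mvn(mean=mu, cov=cov)
    X = hk.rvs(size=N, random_state=rng)                      # step (ii)
    Y = phi(X)
    gamma = min(S, np.quantile(Y, rho))                        # step (iii)
    w = (Y > gamma) * np.exp(h0.logpdf(X) - hk.logpdf(X))      # weights of step (iv)
    # Closed-form maximiser of step (iv) for the Gaussian family: weighted mean/covariance.
    mu = (w[:, None] * X).sum(axis=0) / w.sum()
    diff = X - mu
    cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(axis=0) / w.sum()
    cov += 1e-6 * np.eye(d)                                    # jitter for numerical stability

P_ce = np.mean((Y > S) * np.exp(h0.logpdf(X) - hk.logpdf(X)))  # step (vi)
print("P_CE ~", P_ce, " exact:", norm.sf(S / np.sqrt(d)))
```

The final estimate reuses the samples generated with the last density h_{λ_{k−1}}, as in step (vi); a unimodal Gaussian family is adequate here but would fail if the optimal density of Eq. 22 were multimodal, as noted in Table 3.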

4.3. Non-parametric adaptive importance sampling

The objective of the non-parametric adaptive importance sampling (NAIS) technique [63–66] is to approximate the IS optimal auxiliary density given in Eq. 22 with a kernel density function [67]. NAIS does not require the choice of a PDF family and is thus more flexible than a parametric model. The iterative principle is relatively similar to the CE optimization and is described by the following steps. For the sake of simplicity, the algorithm is presented with a Gaussian kernel, but other kinds of kernels can be used.
(i) k = 1 and set ρ ∈ ]0, 1[.
(ii) Generate the population X_1^{(k)}, ..., X_N^{(k)} according to the PDF h_{k−1} and apply the function φ in order to have Y_1^{(k)} = φ(X_1^{(k)}), ..., Y_N^{(k)} = φ(X_N^{(k)}).
(iii) Compute γ_k = min(S, Y_ρ^{(k)}), where Y_ρ^{(k)} denotes the empirical ρ-quantile of Y_1^{(k)}, ..., Y_N^{(k)}.
(iv) Estimate I_k = \frac{1}{kN} \sum_{j=1}^{k} \sum_{i=1}^{N} 1_{\phi(X_i^{(j)}) \ge \gamma_k} \frac{h_0(X_i^{(j)})}{h_{j-1}(X_i^{(j)})}.
(v) Update the Gaussian kernel sampling PDF with

h_k(X) = \frac{1}{k N I_k \det(B_k)} \sum_{j=1}^{k} \sum_{i=1}^{N} w_j(X_i^{(j)})\, K_d\!\left(B_k^{-1}\left(X - X_i^{(j)}\right)\right),   (28)

where K_d is the standard d-dimensional Gaussian density with zero mean, B_k = diag(b_k^1, ..., b_k^d) is a diagonal bandwidth matrix, and w_j(·) = 1_{\phi(\cdot) \ge \gamma_k} \frac{h_0(\cdot)}{h_{j-1}(\cdot)}. The bandwidth coefficients in the matrix B_{k+1} can be optimized according to the AMISE (asymptotic mean integrated square error) criterion [11] and [68].
(vi) If γ_k < S, set k ← k + 1 and go back to step (ii).
(vii) Estimate the probability P_{NAIS}(\phi(X) > S) = \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i^{(k)}) > S} \frac{h_0(X_i^{(k)})}{h_{k-1}(X_i^{(k)})}.

The advantages and drawbacks of NAIS are presented in Table 4. The use of a kernel density function enables a more flexible and general model than CE. It becomes, however, very difficult to apply NAIS in cases where the input dimension d is greater than about 10, due to the numerical cost induced by the use of kernel density estimation [66].


Table 4. Advantages and drawbacks of NAIS.
  Advantages: No choice of a parametric density; Efficient in cases where the optimal auxiliary density is multimodal.
  Drawbacks: Computation time; Inapplicable when d is greater than about 10.
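A simplified Python sketch in the spirit of NAIS is given below: it reuses the adaptive-threshold loop of CE but models the auxiliary density with a weighted Gaussian kernel estimate (scipy.stats.gaussian_kde). Contrary to Eq. 28, only the current batch of samples is used at each iteration, and the bandwidth is Scott's rule rather than the AMISE-optimized B_k; it is therefore only an illustrative approximation of the algorithm, on a toy model φ.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn, gaussian_kde, norm

rng = np.random.default_rng(5)

def phi(x):
    return x.sum(axis=1)                 # hypothetical toy model

d, N, S, rho = 2, 2_000, 6.0, 0.9
h0 = mvn(mean=np.zeros(d), cov=np.eye(d))

X = h0.rvs(size=N, random_state=rng)
log_h = h0.logpdf(X)                     # log-density used to generate the current samples
gamma = -np.inf
while gamma < S:
    Y = phi(X)
    gamma = min(S, np.quantile(Y, rho))
    if gamma < S:
        # Weights w_j of Eq. (28), restricted here to the current batch only.
        w = (Y > gamma) * np.exp(h0.logpdf(X) - log_h)
        kde = gaussian_kde(X[w > 0].T, weights=w[w > 0])   # kernel auxiliary density
        X = kde.resample(N).T                              # next sampling population
        log_h = kde.logpdf(X.T)

P_nais = np.mean((Y > S) * np.exp(h0.logpdf(X) - log_h))   # step (vii)
print("P_NAIS ~", P_nais, " exact:", norm.sf(S / np.sqrt(d)))
```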

4.4. Simple changes of measure

The use of CE or NAIS is not always necessary, notably for simple cases of the function φ(·). Conventional changes of the density h_0 can then be efficient to decrease the probability estimate variance. Scaling and translation can be applied to the initial PDF h_0. Scaling consists in defining the auxiliary PDF h so that

h(X) = \frac{1}{a} h_0\!\left(\frac{X}{a}\right),   (29)

with a ∈ R*. Translation is another simple change of density that can be applied in IS. The new auxiliary density is defined with translation by

h(X) = h_0(X - c),   (30)

with c ∈ R^d. The choices of a and c strongly influence the importance sampling efficiency. Valuable values of a and c are not obvious to find without some knowledge of the function φ.

4.5. Exponential twisting

The principle of exponential twisting is very similar to LDT and the saddle point approximation [69–72]. The main idea of exponential twisting is to define the auxiliary density on the output Y = φ(X) with

h(y) = \exp\left(\theta y - \lambda(\theta)\right) g(y),   (31)

where g is the density of the random variable Y and λ(θ) = ln(E(exp(θY))). The probability is then determined with

P_{TW} = E\left(1_{Y > S}\, \frac{g(Y)}{h(Y)}\right).

The variable Y has to have exponential moments so that λ(θ) is finite for at least some values of θ ∈ R. The PDF h(y) depends on the parameter θ. An optimal value θ_opt can be obtained with the saddle point approximation through

\left.\frac{d\lambda(\theta)}{d\theta}\right|_{\theta = \theta_{opt}} = S.   (32)

The parameter θ_opt is estimated numerically. Exponential twisting can thus only be applied in some specific cases, notably if Y = \sum_{i=1}^{d} X^i (a function used in some queueing models) or if the density g is analytically known. In the case of a sum of random variables, this estimator has a bounded relative error if the input has a light tail [73,74]. In the case of large deviation probabilities and under some general conditions, logarithmic efficiency is guaranteed with exponential twisting importance sampling [75].
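As an elementary illustration, assume the density g of Y is known analytically and Gaussian: Y ~ N(0,1), so that λ(θ) = θ²/2, the twisted density of Eq. 31 is simply N(θ,1), and Eq. 32 gives θ_opt = S. The Python sketch below estimates P(Y > S) with this twisted density; the chosen distribution and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Y ~ N(0, 1): lambda(theta) = theta**2 / 2, so the twisted density
# exp(theta*y - lambda(theta)) g(y) of Eq. (31) is N(theta, 1) and Eq. (32)
# gives theta_opt = S.
N, S = 10_000, 5.0
theta = S

Y = rng.normal(loc=theta, scale=1.0, size=N)              # samples from the twisted density h
log_ratio = norm.logpdf(Y) - norm.logpdf(Y, loc=theta)    # g(Y)/h(Y)
P_tw = np.mean((Y > S) * np.exp(log_ratio))
print("P_TW ~", P_tw, " exact:", norm.sf(S))
```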


5. FORM/SORM

First/second-order reliability methods (FORM/SORM) [76–79] are considered as reliable computational methods for structural reliability. FORM is an analytical approximation in which the reliability index is interpreted as the minimum distance from the origin to the limit state surface in the standardized normal input space. This limit state surface characterizes the input region where φ(X) > S. The most probable failure point (design point) is searched for using mathematical programming methods. Since the performance function is approximated by a linear function at the design point, accuracy problems occur when the performance function is strongly nonlinear or if the most probable failure point is not unique [80]. The second-order reliability method (SORM) has been established as an attempt to improve the accuracy of FORM. SORM approximates the limit state surface at the design point by a second-order surface.

FORM/SORM methods are applied in four stages to estimate P(φ(X) > S):
(i) Apply a transformation T on the input X such that R = T(X), with R following a reduced centered normal PDF. Depending on the available information on the PDF of X, several transformations can be proposed [81–86]. See Table 5 for details on the correspondence between assumptions and transformations.
(ii) Evaluate the most probable failure point β such that

\beta = \arg\min_{R} \|R\|,   (33)

subject to the constraint S − φ(T^{-1}(R)) = 0, where ‖·‖ is the Euclidean norm. The constraint S − φ(T^{-1}(R)) = 0 defines the limit of the failure space for the variable R. The parameter β is the design point and ‖β‖ is the reliability index. Several algorithms have been proposed to solve this optimization problem, as proposed in [82,83,87,88].
(iii) Approximate the surface S − φ(T^{-1}(R)) = 0 at the solution β. In the case of FORM, this surface is a hyperplane; it is a paraboloid in the case of SORM [89].
(iv) Estimate the failure probability with, in the case of FORM,

P_{FORM}(\phi(X) > S) = \Omega(-\|\beta\|),   (34)

where Ω is the CDF of a reduced centered normal PDF. In the case of SORM, the failure probability is given by [90]

P_{SORM}(\phi(X) > S) = \Omega(-\|\beta\|) \prod_{i=1}^{d-1} \left(1 - \|\beta\|\, \kappa_i\right)^{-\frac{1}{2}},   (35)

where κ_i denotes the principal curvatures of S − φ(T^{-1}(R)) at the design point β. The term κ_i is defined by

\kappa_i = \left.\frac{\partial^2\left(S - \phi(T^{-1}(R))\right)}{\partial (R^i)^2}\right|_{R = \beta},   (36)

with R^i, i = 1, ..., d, a component of the vector R.

A first-order saddle point approximation (FOSPA) [91,92] method has also been proposed as an improvement to FORM/SORM. It consists in using LDT and the saddle point approximation [69–72], which considers the function

\lambda(\theta) = \ln\left[E\left(\exp(\theta \phi(X))\right)\right],   (37)

to estimate the distribution function of φ(X). Indeed, it is possible to show that

P(\phi(X) > S) \approx 1 - \Omega\left(w + \frac{1}{w} \ln\left(\frac{v}{w}\right)\right),   (38)

with

w = \operatorname{sign}(\theta_s)\left(2\left(\theta_s S - \lambda(\theta_s)\right)\right)^{\frac{1}{2}},   (39)

and

v = \theta_s \left(\left.\frac{d^2\lambda(\theta)}{d\theta^2}\right|_{\theta = \theta_s}\right)^{\frac{1}{2}}.   (40)

The parameter θ_s is the saddle point and is the solution of the equation

\left.\frac{d\lambda(\theta)}{d\theta}\right|_{\theta = \theta_s} = S.   (41)

The approximation proposed in Eq. 38 is not easily computable in the general case. It is thus often necessary to linearize the function φ near the most probable failure point with the constraint S − φ(X) = 0, and also to linearize the function λ. These linearizations simplify the estimation of λ(θ) in Eq. 37 and of θ_s. The moment method is also used to approximate the function λ in [91,93,94].

The advantages and drawbacks of geometric methods such as FORM/SORM/FOSPA are given in Table 6. These methods do not require a large simulation budget to obtain a valuable result. Nevertheless, the different assumptions require one to be careful when applying FORM/SORM/FOSPA to a realistic case of function φ. There is also no control of the error in FORM/SORM. However, it is possible from FORM/SORM to determine an importance sampling auxiliary density and then to sample with it to estimate the rare event probability.

Table 5. Possible transformations T depending on the assumptions on the PDF of X.
  X is Gaussian with uncorrelated components -> Hasofer-Lind transformation
  X has independent components (not assumed to be Gaussian) -> Diagonal transformation
  Only the marginal laws of X and their covariance are known -> Nataf transformation
  The complete law of X is known -> Rosenblatt transformation

Table 6. Advantages and drawbacks of FORM/SORM.
  Advantages: Very restricted simulation budget needed.
  Drawbacks: Difficult to apply when the optimal auxiliary density is multimodal; Transformation needed on the input variables if they are not Gaussian; Not adapted to non-linear and to high-dimensional functions φ; No possible control of the error.
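A minimal Python sketch of FORM is given below for an input already in the standard normal space (stage (i) is skipped) and a linear toy limit state, for which the first-order approximation of Eq. 34 is exact; the design point of Eq. 33 is found with a generic constrained optimizer rather than a dedicated reliability algorithm. All choices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def phi(r):
    # Hypothetical toy limit state, linear so that FORM is exact in this example.
    return r.sum()

d, S = 2, 6.0

# Stage (ii), Eq. (33): design point = closest point to the origin on S - phi(R) = 0.
res = minimize(fun=lambda r: r @ r,                  # squared Euclidean norm
               x0=np.ones(d),
               constraints={"type": "eq", "fun": lambda r: S - phi(r)},
               method="SLSQP")
beta = np.linalg.norm(res.x)                         # reliability index ||beta||

# Stage (iv), Eq. (34): first-order approximation of the failure probability.
P_form = norm.cdf(-beta)
print("design point:", res.x, " P_FORM ~", P_form, " exact:", norm.sf(S / np.sqrt(d)))
```

Only the optimizer's model evaluations are needed, which is why the simulation budget is so small; for a strongly non-linear φ the same code would return a biased probability, as warned in Table 6.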

6. Line sampling

6.1. Principle

The underlying idea of line sampling (LS) [95–97] is to employ lines instead of random points in order to probe the failure domain of the system, i.e. the set of X such that φ(X) > S. It has to be applied on input random variables that have a zero-mean standard normal density. Let us first assume that X follows a multidimensional zero-mean standard normal distribution and define the set A = {X ∈ R^d | φ(X) > S}. The set A can also be expressed in the following way:

A = \{X \in \mathbb{R}^d \mid X^1 \in A_1(X^{-1})\},   (42)

where the set A_1(X^{-1}) is defined on R and depends on X^{-1} = (X^2, X^3, ..., X^d). Similar sets A_1 can be defined with respect to any direction in the random parameter space and for any measurable A. The failure probability P(φ(X) > S) can be written with integrals in the following way:

P = \int_{\mathbb{R}^d} 1_{\phi(X) > S}\, h_0(X)\, dX = \int_{\mathbb{R}^d} 1_{X \in A}\, h_0(X)\, dX = \int_{\mathbb{R}^{d-1}} \int_{\mathbb{R}} 1_{X^1 \in A_1}\, h_0(X)\, dX^1 dX^{-1}.

It can then be rewritten as a mathematical expectation over the variable X^{-1}, thanks to the Gaussian assumptions, with

P = E\left(P(X^1 \in A_1 \mid X^{-1})\right).   (43)

The failure probability is described as the expectation of the continuous random variable P(X^1 ∈ A_1) with respect to the variable X^{-1}. This expectation is replaced in practice in LS by its Monte-Carlo estimate

P_{LS} = \frac{1}{N_C} \sum_{i=1}^{N_C} P\left(X^1 \in A_1(X_i^{-1})\right),   (44)

where X_1^{-1}, ..., X_{N_C}^{-1} are samples of the random variable X^{-1}. It is still necessary to estimate the probability P(X^1 ∈ A_1(X_i^{-1})), that is,

P\left(X^1 \in A_1(X_i^{-1})\right) = \int_{\mathbb{R}} 1_{X^1 \in A_1(X_i^{-1})}\, \omega(X^1)\, dX^1,   (45)

where ω is the zero-mean standard normal density. It is possible to show that this integral can be approximated with

P\left(X^1 \in A_1(X_i^{-1})\right) \approx \int_{c_i}^{\infty} \omega(X^1)\, dX^1,   (46)

where c_i is the value of X^1 such that φ(c_i, X_i^{-1}) = S. This approximation is only valuable if there is only one intersection point between the input failure region and the chosen sampling direction. The variance of the LS estimate is always lower than or equal to that of the CMC estimate [95]. Nevertheless, to our knowledge, the logarithmic efficiency of this algorithm has never been established.

6.2. Algorithm

The computational steps of the algorithm are:
(i) Assume X follows a centered Gaussian PDF. If it is not the case, apply a transformation on X as described in Table 5.
(ii) In the standard normal space, determine the unit important direction vector α ∈ R^d. It is the direction that enables one to reach the surface S − φ(X) = 0 with the shortest path from the origin. This direction can be found with Markov chain Monte-Carlo methods [98]. To simplify the notations, one assumes that the important direction vector is α = (1, 0, ..., 0). If it is not the case, a rotation has to be applied to the variable X.
(iii) Generate N_C samples X_1^{-1}, ..., X_{N_C}^{-1} of the variable X^{-1} and estimate for each of these samples the probability P(X^1 ∈ A_1(X_i^{-1})) using Eq. 46.
(iv) Compute the LS probability estimate with

P_{LS} = \frac{1}{N_C} \sum_{i=1}^{N_C} P\left(X^1 \in A_1(X_i^{-1})\right).   (47)

A joint use of Monte-Carlo simulations and line sampling, which does not need the knowledge of the direction α, has been proposed in [99,100]. It nevertheless requires some a priori information on φ(·) in order to be efficient. The advantages and drawbacks of LS are presented in Table 7.

Table 7. Advantages and drawbacks of LS.
  Advantages: Restricted simulation budget needed; Simple implementation.
  Drawbacks: Difficult to apply when the optimal auxiliary density is multimodal; Transformation needed on the input variables if they are not Gaussian; Needs a priori information on φ.
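The following Python sketch applies steps (ii)–(iv) to a mildly non-linear toy model with d = 2; the important direction α is assumed known here instead of being obtained by MCMC, and the intersection c_i of each line with the surface φ = S is found by one-dimensional root finding. All model and parameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

rng = np.random.default_rng(7)

def phi(x1, x2):
    # Hypothetical mildly non-linear toy model.
    return x1 + x2 + 0.1 * (x1 - x2) ** 2

S, NC = 4.0, 200
alpha = np.array([1.0, 1.0]) / np.sqrt(2)     # important direction (assumed known here)
alpha_perp = np.array([1.0, -1.0]) / np.sqrt(2)

P_lines = []
for xi in rng.standard_normal(NC):            # samples of the coordinate orthogonal to alpha
    # Find c_i such that phi(c_i * alpha + xi * alpha_perp) = S along the line (Eq. 46).
    g = lambda t: phi(*(t * alpha + xi * alpha_perp)) - S
    c_i = brentq(g, -20.0, 20.0)
    P_lines.append(norm.sf(c_i))              # one-dimensional Gaussian integral beyond c_i

P_ls = np.mean(P_lines)                       # Eq. (47)
print("P_LS ~", P_ls)

# Crude Monte-Carlo reference with a much larger budget, for comparison only.
Xref = rng.standard_normal((1_000_000, 2))
print("P_CMC ~", np.mean(phi(Xref[:, 0], Xref[:, 1]) > S))
```

Each line costs only a few model evaluations (inside the root finder), and the variance of the estimate comes only from the residual variability of Φ(−c_i) across lines, which is why LS can beat CMC by orders of magnitude when a good direction α is available.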


7. Adaptive splitting technique

7.1. Principle

The idea of importance splitting, also called subset sampling, subset simulation or sequential Monte-Carlo, is to decompose the sought probability into a product of conditional probabilities that can be estimated with a reasonable simulation budget. It was first proposed in a physical context in 1951 [101], and numerous variants have since been worked out. Considering the set A = {X ∈ R^d | φ(X) > S}, the objective of the adaptive splitting technique (AST) [102–106] is to determine the probability P(X ∈ A) = P(φ(X) > S). For that purpose, the principle of AST [107–113] is to iteratively estimate supersets of A and then to estimate P(X ∈ A) with conditional probabilities. Let us define A_0 = R^d ⊃ A_1 ⊃ ... ⊃ A_{n−1} ⊃ A_n = A, a decreasing sequence of subsets of R^d with smallest element A = A_n. The probability P(X ∈ A) can then be rewritten in the following way:

P(X \in A) = \prod_{k=1}^{n} P(X \in A_k \mid X \in A_{k-1}),   (48)

where P(X ∈ A_k | X ∈ A_{k−1}) is the probability that X ∈ A_k knowing that X ∈ A_{k−1}. An optimal choice of the sequence A_k, k = 0, ..., n, is obtained when P(X ∈ A_k | X ∈ A_{k−1}) = ρ, where ρ is a constant, that is, when all the conditional probabilities are equal. The variance of the estimate of P(X ∈ A) is indeed minimized in this configuration, as shown in [114,115]. Consequently, if each P(X ∈ A_k | X ∈ A_{k−1}) is well estimated, then the probability P(X ∈ A) is estimated more accurately with AST than with a direct estimation by Monte-Carlo [116].

Let us define h^k, the density of X restricted to the set A_k. The subset A_k can be defined by A_k = {X ∈ R^d | φ(X) > S_k} for k = 0, ..., n, with S = S_n > S_{n−1} > ... > S_k > ... > S_0. Determining the sequence A_k is equivalent to choosing some values for S_k, k = 0, ..., n. The values of S_k for k = 0, ..., n can be determined in an adaptive manner to obtain valuable results [116], using the ρ-quantile of samples generated with the PDF h^k.

7.2. Algorithm

The different stages of AST to estimate P(φ(X) > S) are the following:
(i) Set k = 0, ρ ∈ ]0, 1[ and h^0 = h_0.
(ii) Generate N samples X_1^{(k)}, ..., X_N^{(k)} from h^k and apply the function φ in order to have Y_1^{(k)} = φ(X_1^{(k)}), ..., Y_N^{(k)} = φ(X_N^{(k)}).
(iii) Estimate the ρ-quantile γ_ρ^{(k)} of the samples Y_1^{(k)}, ..., Y_N^{(k)}.
(iv) Determine the subset A_{k+1} = {X ∈ R^d | φ(X) > γ_ρ^{(k)}} and the conditional density h^{k+1}.
(v) If γ_ρ^{(k)} < S, set k ← k + 1 and go back to stage (ii). Otherwise, estimate the probability with

P_{AST} = (1 - \rho)^k \times \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i^{(k)}) > S}.


Table 8. Advantages and drawbacks of AST.
  Advantages: Applicable in high dimensions and to non-linear systems; Efficient for very rare events (P < 10^{-6}).
  Drawbacks: Important simulation budget; Difficult to apply on non-Gaussian inputs.

Generating directly independent samples from the conditional densities h^k is in most cases impossible, as they are usually unknown [102,117]. Nevertheless, AST provides an iterative way to do it, yet in a dependent fashion, using an h_0-reversible Markovian kernel K(X, ·). With such a kernel and X_k following the density h^k, one can distribute a random variable Ξ_k according to h^k with the following proposal/refusal method [116]:

\Xi_k = \Xi_k(X_k) = \begin{cases} K(X_k, \cdot), & \text{if } K(X_k, \cdot) \in A_k, \\ X_k, & \text{otherwise.} \end{cases}

This proposal/refusal algorithm enables one to generate any number of samples according to h^k in a relatively simple manner. It also enables us to keep constant the number of samples used to estimate each P(X ∈ A_{k+1} | X ∈ A_k). This operation has to be applied for each density h^k. The generated samples are unfortunately dependent and identically distributed according to h^k. Up to now, there is no way to do this in an independent fashion. However, under mild conditions, it can be shown [117] that applying the proposal/refusal method several times may decrease the variance.

The advantages and drawbacks of AST are described in Table 8. AST is often applied to estimate very rare events (P < 10^{-6}). For higher probabilities, other simulation methods such as IS are more efficient than AST [116]. The logarithmic efficiency has been proved for splitting with fixed levels in [118].
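The Python sketch below follows stages (i)–(v) for a standard Gaussian input and a toy model φ, reaching a probability of order 10^{-8}. For h_0 = N(0, I_d), a convenient h_0-reversible kernel K is the Gaussian autoregressive (Crank-Nicolson) move, combined with the proposal/refusal step described above; the correlation 0.8, the number of kernel applications and the level ρ are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)

def phi(x):
    return x.sum(axis=-1)                 # hypothetical toy model

d, N, S, rho = 2, 2_000, 8.0, 0.75        # rho-quantile level of the adaptive thresholds
n_moves = 5                               # kernel applications per level

X = rng.standard_normal((N, d))           # stage (ii), k = 0: samples from h0
Y = phi(X)
P_ast, gamma = 1.0, -np.inf

while gamma < S:
    gamma = min(S, np.quantile(Y, rho))                   # stage (iii)
    if gamma < S:
        P_ast *= 1.0 - rho                                # one intermediate level passed
        # Keep the samples already in A_{k+1} and replicate them to restore N samples.
        seeds = np.flatnonzero(Y > gamma)
        idx = rng.choice(seeds, size=N)
        X, Y = X[idx].copy(), Y[idx].copy()
        # h0-reversible Gaussian kernel + proposal/refusal restricted to A_{k+1}.
        for _ in range(n_moves):
            Xp = 0.8 * X + np.sqrt(1 - 0.8**2) * rng.standard_normal((N, d))
            Yp = phi(Xp)
            accept = Yp > gamma
            X[accept], Y[accept] = Xp[accept], Yp[accept]
    else:
        P_ast *= np.mean(Y > S)                           # final level, stage (v)

print("P_AST ~", P_ast, " exact:", norm.sf(S / np.sqrt(d)))
```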

8. CMC inspired methods

Even if CMC is not adapted to rare event estimation, it can nevertheless be slightly improved with the use of stratified sampling or Latin hypercube sampling, as described in the following subsections.

8.1. Stratified sampling

The principle of stratified sampling (SS) is very similar to CMC [119]. The idea is to propose more samples in the regions of the input space where 1_{φ(X)>S} = 1. SS thus consists in partitioning the support of X, defined as R^d in the general case as proposed in Section 1, into several subsets Q_i, i = 1, ..., m, such that Q_i ∩ Q_j = ∅ for i ≠ j and ∪_i Q_i = R^d. One then generates n_i i.i.d. samples X_1^i, ..., X_{n_i}^i from the PDF h_{Q_i} defined by

h_{Q_i}(X) = 1_{X \in Q_i}\, \frac{h_0(X)}{d_i},   (49)

where d_i is defined by

d_i = \int_{Q_i} h_0(x)\, dx.   (50)

The required number of samples N in SS is equal to N = \sum_{i=1}^{m} n_i. The SS probability estimate P_SS is then obtained with

P_{SS} = \sum_{i=1}^{m} d_i\, P_{h_{Q_i}},   (51)

where P_{h_{Q_i}} is defined as

P_{h_{Q_i}} = \frac{1}{n_i} \sum_{j=1}^{n_i} 1_{\phi(X_j^i) > S}.   (52)

The relative deviation of P_SS depends notably on n_i and h_{Q_i}, and is given by the following equation [120]:

\frac{\sigma_{P_{SS}}}{P} = \frac{1}{P} \sqrt{\sum_{i=1}^{m} d_i^2\, \frac{P_{h_{Q_i}}\left(1 - P_{h_{Q_i}}\right)}{n_i}},   (53)

where P_{h_{Q_i}} denotes here the true value of the conditional probability. If m = 1, the previous equation corresponds to the CMC relative deviation given in Eq. 4. The choice of the subsets Q_i and of the n_i is thus very important in order to reduce the Monte-Carlo estimator variance, but requires some information on the input–output function φ. If one has no clue on where 1_{φ(X)>S} = 1 in the input space, the method of stratified sampling is not applicable, and it can even increase the Monte-Carlo relative deviation if Q_i and n_i are not adapted to φ. An adaptive version of SS has been proposed in [121]. Table 9 sums up the characteristics of the stratified sampling estimator. An extended version of SS, called the coverage Monte-Carlo method in [122,123], has been proposed for specific systems represented by a fault tree or a network, using its minimal cuts to improve the probability estimation. For the same kind of systems, recursive variance reduction methods, described in [124,125], have also been proposed and have some links with SS. They are one of the most efficient methods for this application [126].

Table 9. Advantages and drawbacks of stratified sampling.
  Advantages: Simple implementation; Potential decrease of the CMC relative deviation.
  Drawbacks: Necessary information on the function φ; The subset definition strongly influences the probability estimate accuracy.
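A minimal one-dimensional Python sketch of Eqs. 49–52 is given below; the strata are defined through hand-picked quantiles of h_0 and the sample allocation n_i favours the tail stratum, which presumes some knowledge of where φ(X) > S occurs, exactly as discussed above. The model and the numerical choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

phi = lambda x: x                        # hypothetical toy model (d = 1)
S = 3.5

# Strata Q_i defined through quantiles of h0 = N(0,1); d_i = P(X in Q_i) (Eq. 50).
edges = np.array([0.0, 0.99, 0.999, 1.0])          # probability edges of the strata
d_i = np.diff(edges)
n_i = np.array([1_000, 2_000, 4_000])              # more samples where phi(X) > S may occur

P_ss = 0.0
for (lo, hi), di, ni in zip(zip(edges[:-1], edges[1:]), d_i, n_i):
    # Sampling from h_Qi (Eq. 49) by inverse-CDF restricted to the stratum.
    u = rng.uniform(lo, hi, size=ni)
    x = norm.ppf(u)
    P_ss += di * np.mean(phi(x) > S)               # Eqs. (51)-(52)

print("P_SS ~", P_ss, " exact:", norm.sf(S))
```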

8.2. Monte-Carlo method with Latin hypercube sampling

Latin hypercube sampling (LHS) [127–132] can be used instead of stratified sampling when the subsets Q_i are difficult to estimate. The principle is to stratify independently each of the d input dimensions of X = (X^1, X^2, ..., X^d) into N equiprobable intervals of probability 1/N. For a given dimension k, one generates one sample in each interval according to the conditional law of h_0 for the dimension k, and thus obtains N scalar samples. The random matching of the scalar samples between the different dimensions yields N d-tuples X_1, ..., X_N that constitute an LHS design. The probability with LHS is estimated in the same way as with Monte-Carlo:

P_{LHS} = \frac{1}{N} \sum_{i=1}^{N} 1_{\phi(X_i) > S}.   (54)

This estimate is unbiased and its relative deviation is always lower than that of CMC [133,134]. The advantages and drawbacks of LHS are described in Table 10. In [135], the use of LHS allows a decrease of the relative deviation of the Monte-Carlo method by a factor √2. This reduction is interesting and divides the computational effort by 2. It is nevertheless possible to obtain a better decrease of the estimate variance with statistical or simulation techniques dedicated to rare event estimation. Some information about the relative error bound of LHS sampling can be found in [15]. The logarithmic efficiency of this algorithm has not been proved.

Table 10. Advantages and drawbacks of LHS.
  Advantages: Simple implementation.
  Drawbacks: Weak potential decrease of the CMC relative deviation.
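The sketch below builds an LHS design with scipy.stats.qmc, maps the stratified uniforms through the inverse CDF of h_0 (here independent standard Gaussians), and evaluates Eq. 54 on a toy model; the model and the budget are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

def phi(x):
    return x.sum(axis=1)                 # hypothetical toy model

d, N, S = 2, 100_000, 4.0
sampler = qmc.LatinHypercube(d=d, seed=10)
U = sampler.random(N)                    # one point per 1/N-probability interval in each dimension
X = norm.ppf(U)                          # map the stratified uniforms through the inverse CDF of h0

P_lhs = np.mean(phi(X) > S)              # Eq. (54)
print("P_LHS ~", P_lhs, " exact:", norm.sf(S / np.sqrt(d)))
```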

9. Other simulation algorithms

9.1. Control variates

The control variate method [136,137] is a variance reduction technique used in Monte-Carlo methods. The principle is the following. Let us define the random variable H = 1_{φ(X)>S}. One has E(H) = P, and one can define a random variable m such that E(m) = τ is known. One can also define the variable H*, given a coefficient c, by

H^* = H + c(m - \tau).   (55)

The variable H* is also an unbiased estimator of P for any choice of the coefficient c. The variance of H* is given by

Var(H^*) = Var(H) + c^2 Var(m) + 2c\, Cov(H, m),   (56)

where Cov(H, m) is the covariance between H and m. It can be shown that choosing the optimal coefficient c* defined by

c^* = \frac{-Cov(H, m)}{Var(m)}   (57)

minimizes the variance of H*. In that case, the variance of H* is equal to

Var(H^*) = (1 - \rho^2)\, Var(H),   (58)

where ρ is the correlation coefficient between H and m. Unfortunately, the optimal coefficient c* is not available in practice, and thus different techniques allow one to choose efficient values of c. When the system can be bounded, that is, if one can determine φ_L and φ_R such that φ_L(X) < φ(X) < φ_R(X) for all X, the use of control variates can decrease the variance of the probability estimate. Such developments have notably been proposed in [138] for fault trees.
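The Python sketch below illustrates Eqs. 55–58 with a toy model bounded from below by a cheap function φ_L whose exceedance probability is known analytically; the coefficient c* is estimated empirically from the same samples. The functions and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

phi   = lambda x: x[:, 0] + x[:, 1] + 0.1 * (x[:, 0] - x[:, 1]) ** 2   # toy model
phi_L = lambda x: x[:, 0] + x[:, 1]          # cheap lower bound with known statistics

N, S = 100_000, 4.0
X = rng.standard_normal((N, 2))

H = (phi(X) > S).astype(float)               # quantity of interest
m = (phi_L(X) > S).astype(float)             # control variate
tau = norm.sf(S / np.sqrt(2))                # E(m) is known analytically here

c_star = -np.cov(H, m)[0, 1] / np.var(m)     # empirical version of Eq. (57)
H_star = H + c_star * (m - tau)              # Eq. (55)

print("P_CMC ~", H.mean(), "  P_CV ~", H_star.mean())
print("variance ratio ~", H_star.var() / H.var())   # ~ (1 - rho^2), Eq. (58)
```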

9.2. Antithetic variates

The antithetic variates (AV) algorithm [52,139] is another variance reduction technique. Let us assume that one has two random variables H_1 and H_2 with the same probability law as H = 1_{φ(X)>S}. One then has

E(H) = \frac{1}{2}\left(E(H_1) + E(H_2)\right) = E\left(\frac{H_1 + H_2}{2}\right),   (59)

and also

Var\left(\frac{H_1 + H_2}{2}\right) = \frac{Var(H_1) + Var(H_2) + 2\, Cov(H_1, H_2)}{4}.   (60)

If H_1 and H_2 are i.i.d., then Cov(H_1, H_2) = 0 and one obtains the same variance as the Monte-Carlo estimate. The principle of AV is to obtain samples such that Cov(H_1, H_2) < 0. For instance, if X follows a multidimensional normal PDF with mean µ and covariance matrix Σ, then X′ = 2µ − X follows the same law as X. In that case, one can generate H_1 = 1_{φ(X)>S} and H_2 = 1_{φ(X′)>S} and reduce the variance of the Monte-Carlo estimate of P.

Control and antithetic variates cannot be easily applied in cases where the function φ is not known analytically, which reduces the potential applicability of these methods. Recent results have cast significant doubt on their interest [140]. Dagger sampling, described in [141] and more recently in [142], is an extension of the antithetic variates method. It improves the CMC estimate for specific systems such as networks or fault trees.
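The short sketch below implements the antithetic construction X′ = 2µ − X and Eq. 59 on a toy model; as the printed covariance shows, the variance gain is marginal for a rare-event indicator, which is consistent with the reservations expressed above. The model and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(12)

def phi(x):
    return x.sum(axis=1)                 # hypothetical toy model

d, N, S = 2, 500_000, 3.0
mu = np.zeros(d)
X = rng.standard_normal((N, d)) + mu
Xp = 2 * mu - X                          # antithetic samples: same law as X

H1 = (phi(X) > S).astype(float)
H2 = (phi(Xp) > S).astype(float)
P_av = np.mean((H1 + H2) / 2)            # Eq. (59)

# For a rare-event indicator the antithetic pair is almost never jointly equal to 1,
# so Cov(H1, H2) is only slightly negative and the variance gain is marginal.
print("P_AV ~", P_av, " exact:", norm.sf(S / np.sqrt(d)))
print("Cov(H1, H2) ~", np.cov(H1, H2)[0, 1])
```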

10. Use of metamodels in rare event probability estimation

Being able to build an efficient surrogate model, which allows one to reduce the number of calls to the expensive input–output function φ while keeping a good accuracy, is a key point in rare event probability estimation. A great number of methods have been proposed and compared in recent years. For the sake of conciseness, we do not review in this paper all the methods present in the literature, which is very profuse on this subject. A survey of the different metamodel methods can be found in [80]. In this section, we present the main surrogate models that have been combined with importance sampling and Monte-Carlo estimators. Classical deterministic surrogate models such as polynomials and splines have been tested and compared to neural networks and the first-order reliability method (FORM) [143–145]. Polynomial chaos expansions have been associated with Monte-Carlo sampling to estimate failure probabilities [146]. Support vector machines have also been employed to estimate the domains of failure [147] and have been coupled to rare event estimators such as subset sampling [148].

The Kriging method [149–151] presents some advantages in rare event probability estimation. Indeed, this surrogate model is based on a Gaussian process, which allows one to estimate the variance of the prediction error and consequently to define a confidence domain of the surrogate model. This indicator can be directly used to refine the model, i.e., to choose new points at which to evaluate the real function in order to improve the accuracy of the model. Kriging has been extensively used with the classical Monte-Carlo estimator [152], the importance sampling method [145,153–155], importance sampling with control variates [156] or subset simulation [157–159]. The way to refine the Kriging model is a key point, and different strategies have been proposed [155,160,161] to exploit the complete probabilistic description given by the Kriging model and to evaluate the minimal number of points on the real expensive input–output function. A numerical comparison of different Kriging-based methods to estimate a probability of failure can be found in [162].

The advantages and drawbacks of metamodel-based rare event probability estimators are given in Table 11.

Table 11. Advantages and drawbacks of metamodel probability estimates.
  Advantages: Allow to greatly reduce the computation time; Allow to use a greater simulation budget.
  Drawbacks: Induce approximation errors due to the surrogate model; Require knowledge on φ to build a consistent model, especially where φ(X) > S.
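As a minimal illustration of the Kriging-based approach, the Python sketch below fits a Gaussian-process surrogate (scikit-learn) on a small design of experiments and then runs a Monte-Carlo estimation on the surrogate only; the design size, the kernel and the toy model are illustrative assumptions, and no adaptive refinement is performed, contrary to the strategies of [155,160,161].

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(13)

def phi(x):
    # Stands in for the expensive input-output model (cheap here, for illustration).
    return x.sum(axis=1) + 0.1 * (x[:, 0] - x[:, 1]) ** 2

d, S = 2, 4.0

# Small design of experiments on which the expensive model is actually evaluated.
X_doe = rng.uniform(-5, 5, size=(100, d))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_doe, phi(X_doe))

# Monte-Carlo on the surrogate only: no further call to phi is needed.
X_mc = rng.standard_normal((50_000, d))
mean, std = gp.predict(X_mc, return_std=True)
P_meta = np.mean(mean > S)
print("P from surrogate ~", P_meta)
# In practice the predictive std would be used to add DoE points near the surface
# phi(X) = S before trusting this estimate (refinement strategies of [155,160,161]).
```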

11. Synthesis

The proposed synthesis of this arti le onsists of a series of questions than an help

the reader to hoose the appropriate methods for his estimation problem.

(i) Is it possible to use the fun tion φ to resample? If resampling is not possible, that

is if one onsiders only a xed set of samples φ(X1), ..., φ(XN ), the only available

methods are EVT and metamodel probability estimate. If resampling is possible,

the other simulation methods presented in this arti le are more e ient than EVT.

(ii) Is the density of Y or the fun tion φ analyti ally known? If it is the ase, then

it an be interesting to fo us on LDT, exponential twisting, simple hanges of

importan e sampling, ontrol variates and antitheti variates. If these methods are

not e ient, then more general algorithms are more omplex to implement but

should be e ient.

(iii) Is the input region whi h gives φ(X) > S approximately known? If yes, then SS

and FORM/SORM/FOSPA are adapted.

(iv) Is the input region whi h gives φ(X) > S multimodal? If yes or if the answer to

this question is not known, the use of CE, FORM/SORM/FOSPA is not advised.

(v) What is the dimension d of the problem? If d < 10 (value given as an order of

magnitude), NAIS, FORM/SORM/FOSPA and LS an be onsidered. If d > 10,AST and CE are the most e ient algorithms.

(vi) What is the available simulation budget N? If N > 1000 (value given as an order of

magnitude), then CE, NAIS and AST are adapted. IfN < 1000, FORM/SORM/FOSPA

and LS have to be used. CE, NAIS and AST an also be applied when N < 1000but jointly used with a surrogate model.

(vii) Is the fun tion φ highly non linear? If it is the ase, then FORM/SORM/FOSPA,

LS and surrogate model an imply a bias in the estimation and has to be applied

arefully whereas AST is adapted.

21

Page 23: A survey of rare event simulation methods for static input ...A survey of rare event simulation methods for static input–output models Jérôme Morio, Mathieu Balesdent, Damien Jacquemart,

(viii) Is it possible to prove that the probability estimate has a bounded relative error or is logarithmically efficient? IS with exponential twisting or with CE optimisation (in a specific context) and AST have been proved to have good robustness properties in certain applications. The two robustness notions are recalled just below.
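For reference, these two robustness properties can be stated as follows (standard definitions, see e.g. [18,55]), denoting p(S) = P(φ(X) > S) and \hat{p}(S) an unbiased estimator of p(S) built for the threshold S:

\[
\limsup_{S \to \infty} \frac{\sqrt{\operatorname{Var}\big(\hat{p}(S)\big)}}{p(S)} < \infty \quad \text{(bounded relative error)},
\qquad
\lim_{S \to \infty} \frac{\log \mathbb{E}\big[\hat{p}(S)^2\big]}{\log p(S)^2} = 1 \quad \text{(logarithmic efficiency)}.
\]

Bounded relative error is the stronger property: it implies logarithmic efficiency and means that the simulation budget needed to reach a prescribed relative error does not explode as the event becomes rarer.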

Table 12 sums up these different answers. It is often difficult in practice to choose the most efficient rare event method for a given problem. Indeed, as described in this article, a large collection of methods is available to estimate a rare event probability with more or less accuracy depending on the problem characteristics. The answers to all the previous questions can guide the reader to an appropriate algorithm. An open topic in rare event estimation is the analysis of the robustness properties of the different probability estimates in very general cases. It would ease the comparison of the different algorithms to determine which method could potentially lead to the required simulation budget for a fixed relative error.
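Purely as an illustration of how the questions above can be chained, the short Python sketch below filters a set of candidate methods according to the answers to questions (i) and (iv)-(vii); it is a simplification introduced here for readability and does not encode the full content of Table 12 (in particular, questions (ii), (iii) and (viii) are left out).

# Illustrative method-selection helper based on questions (i), (iv)-(vii); not the full Table 12.
CANDIDATES = {"CE", "NAIS", "AST", "LS", "FORM/SORM/FOSPA", "SS", "EVT", "Surrogates"}

def candidate_methods(resampling_possible, region_multimodal, dimension, budget, phi_highly_nonlinear):
    if not resampling_possible:                      # question (i)
        return ["EVT", "Surrogates"]
    methods = CANDIDATES - {"EVT"}
    if region_multimodal:                            # question (iv)
        methods -= {"CE", "FORM/SORM/FOSPA"}
    if dimension < 10:                               # question (v), order of magnitude
        methods &= {"NAIS", "FORM/SORM/FOSPA", "LS"}
    else:
        methods &= {"AST", "CE"}
    if budget < 1000:                                # question (vi); CE/NAIS/AST would need a surrogate here
        methods -= {"CE", "NAIS", "AST"}
    if phi_highly_nonlinear:                         # question (vii)
        methods -= {"FORM/SORM/FOSPA", "LS", "Surrogates"}
    return sorted(methods)

print(candidate_methods(True, False, 4, 500, False))  # -> ['FORM/SORM/FOSPA', 'LS']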

Table 12
Synthesis table - √ (resp. ×): the method presents some advantages (resp. drawbacks) for the considered characteristic. The rows of the table correspond to the methods AST, Anti. Var., CE, Cont. Var., EVT, Exp. Tw., FORM/SORM, LDT, LS, Surrogates, NAIS and SS, and its columns to the problem characteristics: impossibility of resampling, density of Y known, φ known analytically, φ and Y unknown, region Y > S partially known, region Y > S disjoint (or no information available), d < 10, d > 10, N > 1000, N < 1000, and φ non-linear.

12. Acknowledgements

The work of Jérôme Morio has received funding from the European Community's Seventh Framework Program (FP7/2012-2015) under grant agreement number 284802 (HIPOW project). The work of Damien Jacquemart is financially supported by DGA (Direction Générale de l'Armement) and Onera. The work of Christelle Vergé is financially supported by CNES (Centre National d'Etudes Spatiales) and Onera. The authors thank the anonymous reviewer for his valuable remarks.

References

[1] P. J. Smith, M. Shafi, H. Gao, Quick simulation: a review of importance sampling techniques in communications systems, Selected Areas in Communications, IEEE Journal on 15 (4) (1997) 597-613.


[2] M. Rocco, Extreme value theory for finance: A survey, Bank of Italy Occasional Paper 99 (2012) 1-72.
[3] S. Juneja, P. Shahabuddin, Chapter 11 Rare-event simulation techniques: An introduction and recent advances, in: S. G. Henderson, B. L. Nelson (Eds.), Simulation, Vol. 13 of Handbooks in Operations Research and Management Science, Elsevier, 2006, pp. 291-350.
[4] A. J. Keane, P. B. Nair, Computational Approaches for Aerospace Design, John Wiley & Sons, Ltd, 2005.
[5] J. Morio, R. Pastel, F. Le Gland, Missile target accuracy estimation with importance splitting, Aerospace Science and Technology 25 (1) (2013) 40-44.
[6] I. Banerjee, S. Pal, S. Maiti, Computationally efficient black-box modeling for feasibility analysis, Computers & Chemical Engineering 34 (9) (2010) 1515-1521.
[7] A. Grancharova, J. Kocijan, T. A. Johansen, Explicit output-feedback nonlinear predictive control based on black-box models, Engineering Applications of Artificial Intelligence 24 (2) (2011) 388-397.
[8] V. Kreinovich, S. Ferson, A new Cauchy-based black-box technique for uncertainty in risk analysis, Reliability Engineering & System Safety 85 (1-3) (2004) 267-279.
[9] K. Worden, C. Wong, U. Parlitz, A. Hornstein, D. Engster, T. Tjahjowidodo, F. Al-Bender, D. Rizos, S. Fassois, Identification of pre-sliding and sliding friction dynamics: Grey box and black-box models, Mechanical Systems and Signal Processing 21 (1) (2007) 514-534.
[10] H. Cancela, M. El Khadiri, G. Rubino, Rare event analysis by Monte Carlo techniques in static models, John Wiley & Sons, Ltd, 2009, pp. 145-170.
[11] B. W. Silverman, Density estimation for statistics and data analysis, in: Monographs on Statistics and Applied Probability, London: Chapman and Hall, 1986.
[12] G. A. Mikhailov, Parametric Estimates by the Monte Carlo Method, VSP, Utrecht (NED), 1999.
[13] I. M. Sobol, A Primer for the Monte Carlo Method, CRC Press, Boca Raton, Fl., 1994.
[14] C. Robert, G. Casella, Monte Carlo Statistical Methods, Springer, New York, 2005.
[15] H. Niederreiter, J. Spanier, Monte Carlo and Quasi-Monte Carlo Methods, Springer, 2000.
[16] G. S. Fishman, Monte-Carlo: Concepts, Algorithms, and Applications, Springer, New York, 1996.
[17] P. L'Ecuyer, M. Mandjes, B. Tuffin, Importance Sampling in Rare Event Simulation, John Wiley & Sons, Ltd, 2009, pp. 17-38.
[18] P. L'Ecuyer, J. H. Blanchet, B. Tuffin, P. W. Glynn, Asymptotic robustness of estimators in rare-event simulation, ACM Trans. Model. Comput. Simul. 20 (1) (2010) 1-37.
[19] D. P. Kroese, R. Y. Rubinstein, Monte-Carlo methods, Wiley Interdisciplinary Reviews: Computational Statistics 4 (1) (2012) 48-58.
[20] P. Embrechts, C. Kluppelberg, T. Mikosch, Modelling extremal events for insurance and finance, ZOR Zeitschrift für Operations Research - Mathematical Methods of Operations Research 97 (1) (1994) 1-34.
[21] S. Kotz, S. Nadarajah, Extreme Value Distributions. Theory and Applications, Imperial College Press, London, 2000.
[22] E. Towler, B. Rajagopalan, E. Gilleland, R. S. Summers, D. Yates, R. W. Katz, Modeling hydrologic and water quality extremes in a changing climate: A statistical approach based on extreme value theory, Water Resources Research 46 (11) (2010) 1-11.
[23] J. Blanchet, C. Marty, M. Lehning, Extreme value statistics of snowfall in the Swiss alpine region, Water Resources Research 45 (5) (2009) 1-12.
[24] R. D. Reiss, M. Thomas, Statistical Analysis of Extreme Values, Birkhauser, 1997.
[25] E. Castillo, A. Hadi, J. Sarabia, Extreme Value and Related Models with Applications in Engineering and Science, John Wiley & Sons, New Jersey, 2005.
[26] B. Gnedenko, Sur la distribution limite du terme maximum d'une série aléatoire, Annals of Mathematics 44 (3) (1943) 423-453.
[27] S. Resnick, Extreme values, regular variation and point processes, New York, 1987.
[28] J. Pickands, Statistical inference using extreme order statistics, Annals of Statistics 3 (1) (1975) 119-131.
[29] L. de Haan, Slow variation and characterization of domains of attraction, in: J. de Oliveira (Ed.), Statistical Extremes and Applications, Vol. 131 of NATO ASI Series, Springer Netherlands, 1984, pp. 31-48.
[30] M. I. Fraga Alves, M. Ivette Gomes, Statistical choice of extreme value domains of attraction - a comparative analysis, Communications in Statistics - Theory and Methods 25 (4) (1996) 789-811.


[31] H. Drees, L. de Haan, D. Li, Approximations to the tail empirical distribution function with application to testing extreme value conditions, Journal of Statistical Planning and Inference 136 (10) (2006) 3498-3538.
[32] R. B. D'Agostino, M. A. Stephens, Goodness-of-fit Techniques, Vol. 68 of Statistics: a Series of Textbooks and Monographs, Dekker, New York, 1996.
[33] A. L. M. Dekkers, L. De Haan, On the estimation of the extreme-value index and large quantile estimation, The Annals of Statistics 17 (4) (1999) 1795-1832.
[34] S. G. Coles, An introduction to statistical modeling of extreme values, Springer, New York, 2001.
[35] J. Hosking, J. Wallis, Parameter and quantile estimation for the generalized Pareto distribution, Technometrics 29 (3) (1987) 339-349.
[36] C. Neves, M. Fraga Alves, Reiss and Thomas' automatic selection of the number of extremes, Computational Statistics and Data Analysis 47 (4) (2004) 689-704.
[37] J. Geluk, L. de Haan, On bootstrap sample size in extreme value theory, Econometric Institute Research Papers EI 2002-40, Erasmus University Rotterdam, Erasmus School of Economics (ESE), Econometric Institute (2002).
[38] A. Ferreira, L. De Haan, On the block maxima method in extreme value theory, submitted. URL http://arxiv.org/pdf/1310.3222.pdf
[39] S. R. S. Varadhan, Special invited paper: Large deviations, The Annals of Probability 36 (2) (2008) 397-419.
[40] A. Dembo, O. Zeitouni, Large deviations techniques and applications, Springer-Verlag, New York, 1998.
[41] F. den Hollander, Large Deviations, American Mathematical Society, New York, 2008.
[42] P. Dupuis, H. Wang, Importance sampling, large deviations, and differential games, Stochastics and Stochastics Reports 76 (2004) 481-508.
[43] J. S. Sadowsky, Evaluation of large deviation probabilities via importance sampling, in: Signals, Systems and Computers, 1994 Conference Record of the Twenty-Eighth Asilomar Conference on, Vol. 1, 1994, pp. 30-34.
[44] P. Glasserman, Y. Wang, Counter examples in importance sampling for large deviations probabilities, Ann. Appl. Prob. 7 (1997) 731-746.
[45] T. Dean, P. Dupuis, Splitting for rare event simulation: a large deviation approach to design and analysis, Stochastic Processes and their Applications 119 (2) (2009) 562-587.
[46] P. Del Moral, J. Garnier, Genealogical particle analysis of rare events, The Annals of Applied Probability 15 (2005) 2496-2534.
[47] H. Touchette, The large deviation approach to statistical mechanics, Physics Reports 478 (1-3) (2009) 1-69.
[48] J. Blanchet, H. Lam, State-dependent importance sampling for rare-event simulation: An overview and recent advances, Surveys in Operations Research and Management Science 17 (1) (2012) 38-59.
[49] G. C. Orsak, B. Aazhang, A class of optimum importance sampling strategies, Information Sciences 84 (1-2) (1995) 139-160.
[50] J.-F. Richard, W. Zhang, Efficient high-dimensional importance sampling, Journal of Econometrics 141 (2) (2007) 1385-1411.
[51] S. Engelund, R. Rackwitz, A benchmark study on importance sampling techniques in structural reliability, Structural Safety 12 (4) (1993) 255-276.
[52] J. Hammersley, D. Handscomb, Monte-Carlo methods, Methuen, London, 1964.
[53] R. E. Melchers, Importance sampling in structural systems, Structural Safety 6 (1) (1989) 3-10.
[54] J. A. Bucklew, Introduction to Rare Event Simulation, Springer, 2004.
[55] D. L. McLeish, Bounded relative error importance sampling and rare event simulation, ASTIN Bulletin 40 (2010) 377-398.
[56] S. Juneja, Estimating tail probabilities of heavy tailed distributions with asymptotically zero relative error, Queueing Systems 57 (2-3) (2007) 115-127.
[57] P. Dupuis, H. Wang, Dynamic importance sampling for uniformly recurrent Markov chains, The Annals of Applied Probability 15 (1A) (2005) 1-38.
[58] P. Dupuis, A. D. Sezer, H. Wang, Dynamic importance sampling for queueing networks, The Annals of Applied Probability 17 (4) (2007) 1306-1346.
[59] S. T. Tokdar, R. E. Kass, Importance sampling: a review, Wiley Interdisciplinary Reviews: Computational Statistics 2 (1) (2010) 54-60.


[60] R. Rubinstein, D. Kroese, The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning (Information Science and Statistics), Springer, 2004.
[61] T. H. de Mello, R. Y. Rubinstein, Rare event estimation for static models via cross-entropy and importance sampling, John Wiley, New York, 2002.
[62] B. Tuffin, A. Ridder, Probabilistic bounded relative error for rare event simulation learning techniques, in: Simulation Conference (WSC), Proceedings of the 2012 Winter Simulation Conference, 2012, pp. 1-12.
[63] P. Zhang, Nonparametric importance sampling, Journal of the American Statistical Association 91 (434) (1996) 1245-1253.
[64] J. C. Neddermeyer, Non-parametric partial importance sampling for financial derivative pricing, Quantitative Finance 11 (2010) 1193-1206.
[65] J. C. Neddermeyer, Computationally efficient nonparametric importance sampling, Journal of the American Statistical Association 104 (2009) 788-802.
[66] J. Morio, Extreme quantile estimation with nonparametric adaptive importance sampling, Simulation Modelling Practice and Theory 27 (2012) 76-89.
[67] M. P. Wand, M. C. Jones, Kernel Smoothing, Chapman and Hall, New York, 1994.
[68] I. K. Glad, N. L. Hjort, N. G. Ushakov, Mean-squared error of kernel estimators for finite values of the sample size, Journal of Mathematical Sciences 146 (2007) 5977-5983.
[69] H. E. Daniels, Saddlepoint approximations in statistics, Ann. Math. Statist. 25 (4) (1954) 631-650.
[70] J. L. Jensen, Saddlepoint Approximations, Oxford University Press, USA, 1995.
[71] S. Huzurbazar, Practical saddlepoint approximations, The American Statistician 53 (3) (1999) 225-232.
[72] C. Goutis, G. Casella, Explaining the saddlepoint approximation, The American Statistician 53 (3) (1999) 216-224.
[73] S. Asmussen, Applied probability and queues, Springer, New York, 2003.
[74] D. Siegmund, Importance sampling in the Monte Carlo study of sequential tests, The Annals of Statistics 4 (4) (1976) 673-684.
[75] A. B. Dieker, M. Mandjes, On asymptotically efficient simulation of large deviation probabilities, Advances in Applied Probability 37 (2) (2005) 539-552.
[76] Z. Yan-Gang, O. Tetsuro, A general procedure for first/second-order reliability method (FORM/SORM), Structural Safety 21 (2) (1999) 95-112.
[77] R. Bjerager, Methods for structural reliability computation, Springer Verlag, New York, 1991, pp. 89-136.
[78] H. O. Madsen, S. Krenk, N. C. Lind, Methods of structural safety, Springer-Verlag, Englewood Cliffs, 1986.
[79] T. Lassen, N. Recho, Fatigue Life Analyses of Welded Structures, ISTE Wiley, New York, 2006.
[80] B. Sudret, Meta-models for structural reliability and uncertainty quantification, in: Asian-Pacific Symposium on Structural Reliability and its Applications, Singapore, Singapore, 2012.
[81] A. Nataf, Distribution des distributions dont les marges sont données (in French), Comptes rendus de l'Académie des Sciences 225 (1962) 42-43.
[82] A. Hasofer, N. Lind, An exact and invariant first-order reliability format, Journal of Engineering Mechanics 100 (1974) 111-121.
[83] L. Pei-Ling, A. D. Kiureghian, Optimization algorithms for structural reliability, Structural Safety 9 (3) (1991) 161-177.
[84] M. Rosenblatt, Remarks on a multivariate transformation, Annals of Mathematical Statistics 23 (1952) 470-472.
[85] R. Lebrun, A. Dutfoy, An innovating analysis of the Nataf transformation from the copula viewpoint, Probabilistic Engineering Mechanics 24 (3) (2009) 312-320.
[86] R. Lebrun, A. Dutfoy, A generalization of the Nataf transformation to distributions with elliptical copula, Probabilistic Engineering Mechanics 24 (2) (2009) 172-178.
[87] R. Rackwitz, B. Flessler, Structural reliability under combined random load sequences, Computers and Structures 9 (5) (1978) 489-494.
[88] O. Ditlevsen, H. O. Madsen, Structural Reliability Methods, John Wiley and Sons, New York, 1996.
[89] P. Bjerager, On computation methods for structural reliability analysis, Structural Safety 9 (2) (1990) 79-96.


[90] K. Breitung, Asymptotic approximation for multinormal integrals, Journal of Engineering Mechanics 110 (3) (1984) 357-366.
[91] B. Huang, X. Du, Probabilistic uncertainty analysis by mean-value first order saddlepoint approximation, Reliability Engineering and System Safety 93 (2) (2008) 325-336.
[92] X. Du, A. Sudjianto, First order saddlepoint approximation for reliability analysis, AIAA Journal 42 (6) (2004) 1199-1207.
[93] Z. Lu, S. Song, Z. Yue, J. Wang, Reliability sensitivity method by line sampling, Structural Safety 30 (6) (2008) 517-532.
[94] Y.-G. Zhao, T. Ono, Moment methods for structural reliability, Structural Safety 23 (1) (2001) 47-75.
[95] P. S. Koutsourelakis, H. J. Pradlwarter, G. I. Schueller, Reliability of structures in high dimensions, part I: algorithms and application, Probabilistic Engineering Mechanics 19 (2004) 409-417.
[96] P. Koutsourelakis, Reliability of structures in high dimensions. Part II: theoretical validation, Probabilistic Engineering Mechanics 19 (4) (2004) 419-423.
[97] G. I. Schueller, H. J. Pradlwarter, P. S. Koutsourelakis, A critical appraisal of reliability estimation procedures for high dimensions, Probabilistic Engineering Mechanics 19 (2004) 463-474.
[98] C. A. Schenk, H. J. Pradlwarter, G. I. Schuëller, Realistic and efficient reliability estimation in space engineering, in: Proceedings of the 16th ASCE Engineering Mechanics Conference, Seattle, 2003.
[99] P. Bjerager, Probability integration by directional simulation, Journal of Engineering Mechanics 114 (8) (1988) 1288-1302.
[100] J. Nie, B. R. Ellingwood, A new directional simulation method for system reliability. Part II: application of neural networks, Probabilistic Engineering Mechanics 19 (4) (2004) 437-447.
[101] H. Kahn, T. Harris, Estimation of particle transmission by random sampling, Appl. Math. Ser. 12 (1951) 27-30.
[102] F. Cérou, P. Del Moral, T. Furon, A. Guyader, Rare event simulation for a static distribution, Vol. 7, RESIM, 2008, pp. 107-115.
[103] F. Cérou, A. Guyader, Adaptive particle techniques and rare event estimation, in: Proceedings of ESAIM conference, Vol. 19, 2007, pp. 65-72.
[104] P. L'Ecuyer, V. Demers, B. Tuffin, Splitting for rare event simulation, in: Proceedings of the 2006 Winter Simulation Conference, 2006, pp. 137-148.
[105] M. Villén-Altamirano, J. Villén-Altamirano, Restart: a straightforward method for fast simulation of rare events, 1994, pp. 282-289.
[106] P. Glasserman, P. Heidelberger, P. Shahabuddin, T. Zajic, Splitting for rare event simulation: analysis of simple cases, in: Proceedings of the 1996 Winter Simulation Conference, 1996, pp. 302-308.
[107] M. N. Rosenbluth, A. W. Rosenbluth, Monte-Carlo calculation of the average extension of molecular chains, J. Chem. Phys. 23 (1955) 356-359.
[108] P. L'Ecuyer, F. Le Gland, P. Lezaud, B. Tuffin, Splitting Techniques, John Wiley & Sons, Ltd, 2009, pp. 39-61.
[109] P. Del Moral, Feynman-Kac Formulae. Genealogical and Interacting Particle Systems with Applications, Probability and its Applications, Springer, New York, 2004.
[110] S. K. Au, Reliability-based design sensitivity by efficient simulation, Computers and Structures 83 (2005) 1048-1061.
[111] S. K. Au, J. L. Beck, Estimation of small failure probabilities in high dimensions by subset simulations, Probabilistic Engineering Mechanics 16 (4) (2001) 263-277.
[112] Z. I. Botev, D. P. Kroese, Efficient Monte-Carlo simulation via the generalized splitting method, Statistics and Computing 22 (1) (2012) 1-16.
[113] Z. I. Botev, D. P. Kroese, An efficient algorithm for rare-event probability estimation, combinatorial optimization, and counting, Methodol. Comput. Appl. Probab. 10 (4) (2012) 471-505.
[114] A. Lagnoux, Rare event simulation, Probability in the Engineering and Informational Sciences 20 (2006) 45-66.
[115] F. Cérou, P. Del Moral, F. Le Gland, P. Lezaud, Genetic genealogical models in rare event analysis, INRIA report 1797 (2006) 1-30.
[116] F. Cérou, P. Del Moral, T. Furon, A. Guyader, Sequential Monte-Carlo for rare event estimation, Statistics and Computing 22 (2012) 795-808.


[117] L. Tierney, Markov chains for exploring posterior distributions, Annals of Statistics 22 (1994) 1701-1762.
[118] F. Cérou, P. Del Moral, A. Guyader, A nonasymptotic theorem for unnormalized Feynman-Kac particle models, Ann. Inst. H. Poincaré Probab. Statist. 47 (3) (2011) 629-649.
[119] W. G. Cochran, Sampling Techniques, Wiley, New York, 1977.
[120] M. Keramat, R. Kielbasa, A study of stratified sampling in variance reduction techniques for parametric yield estimation, Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on 45 (5) (1998) 575-583.
[121] M. M. Zuniga, J. Garnier, E. Remy, E. de Rocquigny, Adaptive directional stratification for controlled estimation of the probability of a rare event, Reliability Engineering & System Safety 96 (12) (2011) 1691-1712.
[122] R. M. Karp, M. G. Luby, A new Monte-Carlo method for estimating the failure probability of an n-component system, Tech. Rep. UCB/CSD-83-117, EECS Department, University of California, Berkeley (1983).
[123] H. Kumamoto, T. Tanaka, K. Inoue, A new Monte Carlo method for evaluating system-failure probability, Reliability, IEEE Transactions on R-36 (1) (1987) 63-69.
[124] H. Cancela, M. El Khadiri, A recursive variance-reduction algorithm for estimating communication-network reliability, Reliability, IEEE Transactions on 44 (4) (1995) 595-602.
[125] H. Cancela, M. El Khadiri, The recursive variance-reduction simulation algorithm for network reliability evaluation, Reliability, IEEE Transactions on 52 (2) (2003) 207-212.
[126] D. P. Kroese, T. Taimre, Z. I. Botev, Handbook of Monte Carlo Methods, John Wiley & Sons, Hoboken, USA, 2011.
[127] M. D. McKay, R. J. Beckman, W. Conover, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics 21 (1979) 239-245.
[128] M. D. McKay, Latin hypercube sampling as a tool in uncertainty analysis of computer models, in: WSC '92 Proceedings of the 24th conference on Winter simulation, ACM, New York, 1992, pp. 557-564.
[129] R. L. Inman, J. C. Helson, J. E. Campbell, An approach to sensitivity analysis of computer models: Part I - introduction, input variable selection and preliminary variable assessment, Journal of Quality Technology 13 (3) (1981) 174-183.
[130] Z. Keqin, D. Zegong, L. Chuntu, Latin hypercube sampling used in the calculation of the fracture probability, Reliability Engineering & System Safety 59 (2) (1998) 239-242.
[131] B. Ayyub, L. Kwan-Ling, Structural reliability assessment using Latin hypercube sampling, in: Proceedings of the International Conference on Structural Safety and Reliability, ICOSSAR'89, San Francisco, USA, 1989, pp. 174-184.
[132] P. Zhang, P. Breitkopf, C. Knopf-Lenoir, W. Zhang, Diffuse response surface model based on moving Latin hypercube patterns for reliability-based design optimization of ultrahigh strength steel NC milling parameters, Struct. Multidiscip. Optim. 44 (5) (2011) 613-628.
[133] M. Keramat, R. Kielbasa, Modified Latin hypercube sampling Monte-Carlo (MLHSMC) estimation for average quality index, Analog Integr. Circuits Signal Process. 19 (1) (1999) 87-98.
[134] M. Keramat, R. Kielbasa, Worst case efficiency of Latin hypercube sampling Monte-Carlo (LHSMC) yield estimator of electrical circuits, in: Proc. IEEE Int. Symp. Circuits Syst., Hong Kong, 1997, pp. 1660-1663.
[135] G. Olsson, A. Sandberg, O. Dahlblom, On Latin hypercube sampling for structural reliability analysis, Structural Safety 25 (1) (2003) 47-68.
[136] S. P. Meyn, Control Techniques for Complex Networks, Cambridge University Press, 2007.
[137] P. Glasserman, Monte-Carlo Methods in Financial Engineering, Springer, New York, 2003.
[138] H. Kumamoto, K. Tanaka, K. Inoue, Efficient evaluation of system reliability by Monte Carlo method, Reliability, IEEE Transactions on R-26 (5) (1977) 311-315.
[139] P. K. Sarkar, M. A. Prasad, Variance reduction in Monte-Carlo radiation transport using antithetic variates, Annals of Nuclear Energy 19 (5) (1992) 253-265.
[140] E. Saliby, R. J. Paul, A farewell to the use of antithetic variates in Monte Carlo simulation, JORS 60 (7) (2009) 1026-1035.
[141] H. Kumamoto, K. Tanaka, K. Inoue, E. J. Henley, Dagger-sampling Monte Carlo for system unavailability evaluation, Reliability, IEEE Transactions on R-29 (2) (1980) 122-125.


[142] S. Rongfu, S. Chanan, C. Lin, S. Yuanzhang, Short-term reliability evaluation using control variable based dagger sampling method, Electric Power Systems Research 80 (6) (2010) 682-689.
[143] C. Bucher, U. Bourgund, A fast and efficient response surface approach for structural reliability problems, Structural Safety 7 (1990) 57-66.
[144] H. Gomes, A. Awruch, Comparison of response surface and neural network with other methods for structural reliability analysis, Structural Safety 26 (2004) 49-67.
[145] L. Schueremans, D. Van Gemert, Use of Kriging as meta-model in simulation procedures for structural reliability, in: 9th International Conference on Structural Safety and Reliability, Rome, 2005, pp. 2483-2490.
[146] J. Li, D. Xiu, Evaluation of failure probability via surrogate models, Journal of Computational Physics 229 (2010) 8966-8980.
[147] A. Basudhar, S. Missoum, A. Sanchez, Limit state function identification using Support Vector Machines for discontinuous responses and disjoint failure domains, Probabilistic Engineering Mechanics 23 (2008) 1-11.
[148] J.-M. Bourinet, F. Deheeger, M. Lemaire, Assessing small failure probabilities by combined subset simulation and support vector machines, Structural Safety 33 (2011) 343-353.
[149] G. Matheron, Principles of geostatistics, Economic Geology 58 (8) (1963) 1246.
[150] M. K. Sasena, Flexibility and efficiency enhancements for constrained global design optimization with Kriging approximation, Ph.D. thesis, University of Michigan (2002).
[151] T. J. Santner, B. J. Williams, N. I. Notz, The design and analysis of computer experiments, 2003.
[152] B. Echard, N. Gayton, M. Lemaire, AK-MCS: An active learning reliability method combining Kriging and Monte-Carlo Simulation, Structural Safety 33 (2011) 145-154.
[153] J. Janusevskis, R. Le Riche, Simultaneous Kriging-based estimation and optimization of mean response, Journal of Global Optimization 55 (2) (2012) 313-336.
[154] M. Balesdent, J. Morio, J. Marzat, Kriging-based adaptive importance sampling algorithms for rare event estimation, Structural Safety 13 (2013) 1-10.
[155] V. Dubourg, E. Deheeger, B. Sudret, Metamodel-based importance sampling for the simulation of rare events, in: M. Faber, J. Kohler, K. Nishilima (Eds.), Proceedings of the 11th International Conference of Statistics and Probability in Civil Engineering (ICASP2011), Zurich, Switzerland, 2011.
[156] C. Cannamela, J. Garnier, B. Iooss, Controlled stratification for quantile estimation, Annals of Applied Statistics 2 (4) (2008) 1554-1580.
[157] E. Vazquez, J. Bect, A sequential Bayesian algorithm to estimate a probability of failure, in: 15th IFAC Symposium on System Identification (SYSID'09), Saint-Malo, France, July 6-8, 2009.
[158] L. Li, J. Bect, E. Vazquez, Bayesian Subset Simulation: a Kriging-based subset simulation algorithm for the estimation of small probabilities of failure, in: Proceedings of PSAM 11 and ESREL 2012, 25-29 June 2012, Helsinki, Finland, 2012.
[159] J. Bect, D. Ginsbourger, L. Li, V. Picheny, E. Vazquez, Sequential design of computer experiments for the estimation of a probability of failure, Statistics and Computing 22 (3) (2012) 773-793.
[160] V. Picheny, Improving accuracy and compensating for uncertainty in surrogate modeling, Ph.D. thesis, University of Florida (2009).
[161] V. Baudoui, P. Klotz, J.-B. Hiriart-Urruty, S. Jan, F. Morel, LOcal Uncertainty Processing (LOUP) method for multidisciplinary robust design optimization, Structural and Multidisciplinary Optimization 46 (5) (2012) 1-16.
[162] L. Li, J. Bect, E. Vazquez, A numerical comparison of two sequential Kriging-based algorithms to estimate a probability of failure, in: Uncertainty in Computer Model Conference, Sheffield, UK, July 12-14, 2010.
