20
Stochastic Simulation Methods for Engineering Predictions

Dan M. Ghiocel
Ghiocel Predictive Technologies, Inc.

20.1 Introduction ........ 20-1
20.2 One-Dimensional Random Variables ........ 20-3
    Uniform Random Numbers • Inverse Probability Transformation • Sampling Acceptance-Rejection • Bivariate Transformation • Density Decomposition
20.3 Stochastic Vectors with Correlated Components ........ 20-5
    Gaussian Vectors • NonGaussian Vectors
20.4 Stochastic Fields (or Processes) ........ 20-8
    One-Level Hierarchical Simulation Models • Two-Level Hierarchical Simulation Models
20.5 Simulation in High-Dimensional Stochastic Spaces ........ 20-21
    Sequential Importance Sampling (SIS) • Dynamic Monte Carlo (DMC) • Computing Tail Distribution Probabilities • Employing Stochastic Linear PDE Solutions • Incorporating Modeling Uncertainties
20.6 Summary ........ 20-31

0-8493-1180-X/04/$0.00+$1.50 © 2004 by CRC Press LLC
1180_C20.fm Page 1 Wednesday, October 27, 2004 8:04 PM

20.1 Introduction

The term "simulation" comes from the Latin word simulatio, which means to imitate a phenomenon or a generic process. In the language of mathematics, the term "simulation" was used for the first time during the World War II period at the Los Alamos National Laboratory by the renowned mathematicians Von Neumann, Ulam, and Metropolis and the physicist Fermi, in the context of their nuclear physics research for the atomic bomb. About the same time, they also introduced an exotic term in mathematics, namely "Monte Carlo" methods. These were defined as numerical methods that use artificially generated statistical selections for reproducing complex random phenomena and for solving multidimensional integral problems. The name "Monte Carlo" was inspired by the famous Monte Carlo casino roulette, in Monaco, which was the best available generator of uniform random numbers.

In scientific applications, stochastic simulation methods based on random sampling algorithms, or Monte Carlo methods, are used to solve two types of problems: (1) to generate random samples that belong to a given stochastic model, or (2) to compute expectations (integrals) with respect to a given distribution. The expectations might be probabilities, or discrete vectors or continuous fields of probabilities; in other words, probability distributions. It should be noted that if the first problem is solved,




Engineering Design Reliability Handbook

so that random samples are available, then the solution to the second problem, computing expectations, becomes trivial because the expectations can be approximated by the statistical averaging of the random samples.

The power of Monte Carlo methods manifests visibly for multidimensional probability integration problems, for which typical deterministic integration algorithms are extremely inefficient. For very low-dimensional problems, in one dimension or two, Monte Carlo methods converge too slowly and therefore there is no practical interest in employing them. For a given number of solution points N, the Monte Carlo estimator converges to the exact solution with a rate of the order O(N^{-1/2}), in comparison with some deterministic methods that converge much faster, with rates up to the order O(N^{-4}) or even O(exp(−N)). Unfortunately, for multidimensional problems, the classical deterministic integration schemes based on regular discretization grids fail fatally. The key difference between stochastic and deterministic methods is that the Monte Carlo methods are, by their nature, meshless methods, while the deterministic integration methods are regular grid-based methods. The problem is that the classical grid-based integration methods scale very poorly with the space dimensionality.
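The O(N^{-1/2}) convergence behavior is easy to observe numerically. A minimal sketch, using the one-dimensional integral of x² over [0, 1] (equal to 1/3) purely as an assumed test case:

```python
import random

def mc_integral(n, seed=17):
    """Plain Monte Carlo estimate of the integral of x^2 over [0, 1]."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 for _ in range(n)) / n

# The absolute error decays roughly like N^(-1/2): increasing N by a
# factor of 100 should shrink the typical error by about a factor of 10.
errors = {n: abs(mc_integral(n) - 1 / 3) for n in (100, 10_000, 1_000_000)}
for n, err in errors.items():
    print(n, err)
```

The same estimator, applied in many dimensions, keeps this statistical convergence rate, which is the point made above.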

The fact that the Monte Carlo estimator convergence rate is independent of the input space dimensionality makes the standard Monte Carlo method very popular for scientific computing applications. Although the statistical convergence rate of the Monte Carlo method is independent of the input space dimensionality, there are still two difficulties to address that are dependent on it: (1) how to generate uniformly distributed random samples in the prescribed stochastic input domain, and (2) how to control the variance of the Monte Carlo estimator for highly "nonuniform" stochastic variations of the integrand in the prescribed input domain. These two difficulties become increasingly important as the input space dimensionality increases. One can imagine a Monte Carlo integration scheme as a sort of random fractional factorial sampling scheme, or random quadrature, that uses a refined cartesian grid of the discretized multidimensional input domain with many nodes left unfilled with data, versus the deterministic integration schemes, or deterministic quadratures, that can be viewed as complete factorial sampling schemes using cartesian grids with all nodes filled with data. Thus, a very important aspect of getting good results with Monte Carlo is to ensure, as much as possible, a uniform filling of the input space domain grid with statistically independent solution points, so that potentially important local subspaces are not missed. The hurting problem of the standard Monte Carlo method is that, for high-dimensional spaces, it is difficult to get a uniform filling of the input space. The consequence of a nonuniform filling can be a sharp increase in the variance of the Monte Carlo estimator, which severely reduces the attraction of the method.

The classical way to improve the application of Monte Carlo methods to high-dimensional problems is to partition the stochastic input space into subdomains with uniform statistical properties. Instead of dealing with the entire stochastic space, we deal with subdomains. This is like defining a much coarser grid in the space to work with. The space decomposition into subdomains may substantially accelerate the statistical convergence for multidimensional problems. Stratified sampling and importance sampling schemes are different sampling weighting schemes based on stochastic space decomposition into partitions. Using space decomposition, we are able to control the variation of the estimator variance over the input domain by focusing on the most important stochastic space regions (those contributing most to the integrand estimate) and adding more random data points in those regions.
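The variance-reduction effect of such a decomposition can be sketched with the simplest possible partition: splitting [0, 1] into n equal strata and drawing one uniform point per stratum. The x² integrand is again only an assumed test case:

```python
import random

def plain_mc(n, rng):
    """Plain Monte Carlo estimate of the integral of x^2 over [0, 1]."""
    return sum(rng.random() ** 2 for _ in range(n)) / n

def stratified_mc(n, rng):
    """Stratified estimate: one uniform sample per subdomain [i/n, (i+1)/n).
    The integrand varies far less inside each stratum, so the estimator
    variance drops sharply compared with plain Monte Carlo."""
    return sum(((i + rng.random()) / n) ** 2 for i in range(n)) / n

rng = random.Random(23)
n, reps = 100, 200
plain_errs = [abs(plain_mc(n, rng) - 1 / 3) for _ in range(reps)]
strat_errs = [abs(stratified_mc(n, rng) - 1 / 3) for _ in range(reps)]
print(sum(plain_errs) / reps, sum(strat_errs) / reps)
```

With the same number of solution points, the stratified estimator has a visibly smaller average error, which is exactly the mechanism the space decomposition exploits.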

Another useful way to deal with the high dimensionality of the stochastic space is to simulate correlated random samples, instead of statistically independent samples, that describe a random path, or walk, in the stochastic space. If the random walk has a higher attraction toward the regions with larger probability masses, then the time (number of steps) spent in one region versus the time spent in another region is proportional to the probability mass distributions in those regions. By withdrawing samples at equal time intervals (numbers of steps), we can build an approximation of the prescribed joint probability distribution. There are two distinct classes of Monte Carlo methods for simulating random samples from a prescribed multivariate probability distribution: (1) static Monte Carlo (SMC), which is based on the generation of independent samples from univariate marginal densities or conditional densities; and (2) dynamic Monte Carlo (DMC), which is based on spatially correlated samples that describe the random evolutions of a fictitious stochastic dynamic system that has a stationary probability distribution identical with the prescribed joint probability distribution. Both classes of Monte Carlo methods are discussed herein.

This chapter focuses on the application of Monte Carlo methods to simulate random samples of a variety of stochastic models, from the simplest stochastic models described by one-dimensional random variable models to the most complex stochastic models described by hierarchical stochastic network-like models. The chapter also includes a solid section on high-dimensional simulation problems that touches on key numerical issues including sequential sampling, dynamic sampling, computation of expectations and probabilities, modeling uncertainties, and sampling using stochastic partial-differential equation solutions. The goal of this chapter is to provide readers with a conceptual understanding of different simulation methods without trying to overwhelm them with all the implementation details, which can be found elsewhere as cited in the text.

20.2 One-Dimensional Random Variables

There is a large number of simulation methods available to generate random variables with various probability distributions. Only the most popular of these methods are reviewed in this chapter: (1) inverse probability transformation, (2) sampling acceptance-rejection, (3) bivariate functional transformation, and (4) density decomposition. The description of these methods can be found in many textbooks on Monte Carlo simulation [1–5]. Herein, the intention is to discuss the basic concepts of these methods and very briefly describe their numerical implementations. In addition to these classical simulation methods, there is a myriad of other methods, most of them specific to particular types of problems, that have been developed over the past decades but for obvious reasons are not included.

20.2.1 Uniform Random Numbers

The starting point for any stochastic simulation technique is the construction of a reliable uniform random number generator. Typically, random generators produce random numbers that are uniformly distributed in the interval [0, 1]. The most common uniform random generators are based on the linear congruential method [1–5].

The quality of the uniform random number generator reflects on the quality of the simulation results; it is one of the key factors that needs special attention. The quality of a random generator is measured by (1) the uniformity with which its samples fill the prescribed interval, and (2) its period, which is defined by the number of samples after which the generator restarts the same sequence of independent random numbers all over again. Uniformity is a crucial aspect for producing accurate simulations, while the generator periodicity is important only when a large volume of samples is needed. If the volume of samples exceeds the periodicity of the random number generator, then the sequence of numbers generated afterward is perfectly correlated with the initial sequence of numbers, instead of being statistically independent. There are also other serial correlation types that can be produced by different number generators [4]. If we need a volume of samples for simulating a rare random event that is several times larger than the generator periodicity, then the simulation results will be wrong due to the serial correlation of the generated random numbers.

An efficient way to improve the quality of the generated sequence of uniform numbers and break their serial correlation is to combine the outputs of a few generators into a single uniform random number that will have a much larger sequence periodicity and, expectedly, better space uniformity. As general advice to the reader, it is always a good idea to use statistical testing to check the quality of the random number generator.
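As a concrete illustration, a minimal linear congruential generator can be sketched in a few lines; the modulus and multiplier below are the classic Park-Miller "minimal standard" parameters, and any other well-tested parameter set could be substituted:

```python
class LCG:
    """Minimal linear congruential generator: x_{k+1} = (a * x_k) mod m."""

    def __init__(self, seed=1):
        self.m = 2**31 - 1               # Park-Miller "minimal standard" modulus
        self.a = 16807                   # multiplier
        self.state = seed % self.m or 1  # state must stay in 1..m-1

    def next_uniform(self):
        """Return the next pseudo-random deviate, uniform on (0, 1)."""
        self.state = (self.a * self.state) % self.m
        return self.state / self.m

gen = LCG(seed=2004)
samples = [gen.next_uniform() for _ in range(10_000)]
print(min(samples), max(samples), sum(samples) / len(samples))
```

The period of this generator is m − 1 at best; a production code would combine several such generators or use a modern library generator, as advised above.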

20.2.2 Inverse Probability Transformation

Inverse probability transformation (IPT) is based on a lemma that says that, for any random variable x with the probability distribution F, if we define a new random variable y = F^{-1}(u) using the inverse probability transformation F^{-1} of F, then y has the probability distribution F.

The argument u of the inverse probability transformation stands for a uniformly distributed random variable.

The general algorithm of the IPT method for any type of probability distribution is as follows:

Step 1: Initialize the uniform random number generator.
Step 2: Implement the algorithm for computing F^{-1}.
Step 3: Generate a uniform random number u in the interval [0, 1].
Step 4: Compute the generated deviate x with the distribution F by computing x = F^{-1}(u).
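A minimal sketch of these steps, assuming an exponential target distribution F(x) = 1 − exp(−λx), whose inverse F^{-1}(u) = −ln(1 − u)/λ is available in closed form:

```python
import math
import random

def sample_exponential_ipt(lam, n, seed=42):
    """IPT Steps 3-4: draw u ~ U[0, 1], then return x = F^-1(u)
    for the exponential distribution F(x) = 1 - exp(-lam * x)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential_ipt(lam=2.0, n=50_000)
mean = sum(samples) / len(samples)   # the exponential mean is 1/lam = 0.5
print(mean)
```

For distributions whose inverse CDF has no closed form, Step 2 would implement F^{-1} numerically, for example by root finding on F or by interpolating a tabulated CDF.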

Because probability (cumulative) distribution functions (CDFs) are monotonic increasing functions, the IPT method can be applied to any type of probability distribution, continuous or discrete, analytically or numerically defined, with any probability density function (PDF) shape, from symmetric to extremely skewed, from bell-shaped to multimodal.

Very importantly, IPT can also be used to transform a sequence of Gaussian random deviates into a sequence of nonGaussian random deviates for any arbitrarily shaped probability distribution. A generalization of the IPT method, called the transformation method, is provided by Press et al. [4].

20.2.3 Sampling Acceptance-Rejection

Sampling acceptance-rejection (SAR) uses an auxiliary, trial, or proposal density to generate a random variable with a prescribed distribution. SAR is based on the following lemma: if x is a random variable with the PDF f(x), and y is another variable with the PDF g(y) such that f(y) = 0 wherever g(y) = 0, then the variable x can be generated by first generating the variable y, assuming that there is a positive constant α such that α ≥ f(x)/g(x). If u is a uniform random variable defined on [0, 1], then the PDF of y conditioned on the relationship 0 ≤ u ≤ f(y)/[α g(y)] is identical with f(x).

A typical numerical implementation of the SAR method for simulating a variable x with the PDF f(x) is as follows:

Step 1: Initialize the uniform random number generator and select the constant α.
Step 2: Generate a uniform number u on [0, 1] and a random sample of y.
Step 3: If u > f(y)/[α g(y)], then reject the pair (u, y) and go back to the previous step.
Step 4: Otherwise, accept the pair (u, y) and compute x = y.
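The four steps can be sketched for an assumed target density f(x) = 2x on [0, 1], with a uniform trial density g(y) = 1, so that α = 2 bounds the ratio f/g:

```python
import random

def sample_sar(n, seed=7):
    """Sampling acceptance-rejection for the target f(x) = 2x on [0, 1],
    with trial density g(y) = 1 and constant alpha = 2 >= f(y)/g(y)."""
    rng = random.Random(seed)
    alpha = 2.0
    out = []
    while len(out) < n:
        u = rng.random()                   # Step 2: uniform u on [0, 1]
        y = rng.random()                   # Step 2: sample y from g
        if u > (2.0 * y) / (alpha * 1.0):  # Step 3: reject (u, y), retry
            continue
        out.append(y)                      # Step 4: accept, x = y
    return out

samples = sample_sar(20_000)
mean = sum(samples) / len(samples)         # E[x] = 2/3 under f(x) = 2x
print(mean)
```

The acceptance rate is 1/α, so a tight bound α keeps the method efficient; a loose bound wastes trial samples.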

It should be noted that SAR can be easily extended to generate random vectors and matrices withindependent components.

The technical literature includes a variety of numerical implementations based on the principle of SAR [6]. Some of the newer implementations are the weighted resampling and sampling importance resampling (SIR) schemes [6, 7] and adaptive rejection sampling schemes [8].

20.2.4 Bivariate Transformation

Bivariate transformation (BT) can be viewed as a generalization of the IPT method for bivariate probability distributions. A well-known application of BT is the Box-Muller method for simulating standard Gaussian random variables [9]. Details are provided in [4–6].
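A sketch of the Box-Muller transformation, which maps two independent uniform deviates into two independent standard Gaussian deviates:

```python
import math
import random

def box_muller_pair(rng):
    """Transform two independent uniforms into two independent
    standard Gaussian deviates (the Box-Muller transformation)."""
    u1 = 1.0 - rng.random()  # shift into (0, 1] so the logarithm is finite
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

rng = random.Random(0)
zs = [z for _ in range(25_000) for z in box_muller_pair(rng)]
mean = sum(zs) / len(zs)
var = sum(z * z for z in zs) / len(zs) - mean**2
print(mean, var)
```

The pair (u₁, u₂) is interpreted as polar coordinates of a bivariate Gaussian sample, which is exactly the bivariate functional transformation idea described above.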

20.2.5 Density Decomposition

Density decomposition (DD) exploits the fact that an arbitrary PDF, f(x), can be closely approximated by a linear combination of elementary density functions as follows:

f(x) = ∑_{i=1}^{N} p_i g_i(x)    (20.1)

in which 0 ≤ p_i < 1, 1 ≤ i ≤ N, and ∑_{i=1}^{N} p_i = 1.


The typical implementation of DD for a discrete variable x is as follows:

Step 1: Initialize for the generation of a uniform random number and of a variable z_i with the PDF g_i(x). This implies that x = z_i with the probability p_i.
Step 2: Compute the cumulative values G_i(x) = ∑_{j=1}^{i} p_j for all i values, i = 1, N.
Step 3: Generate a uniform random variable u on [0, 1].
Step 4: Loop over j = 1, N and check if u < G_j(x). If not, go to the next j.
Step 5: Generate z_j and assign x = z_j.

It should be noted that the speed of the above algorithm is highest when the p_i values are ordered in descending order. DD can also be combined with SAR; an example of such a combination is the Butcher method [10] for simulating normal deviates.
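A sketch of the algorithm for an assumed two-component mixture f(x) = 0.7 g₁(x) + 0.3 g₂(x), with g₁ and g₂ taken as Gaussian densities purely for illustration (note that the weights are already in descending order):

```python
import random

def sample_density_decomposition(n, seed=1):
    """Sample f(x) = 0.7*g1(x) + 0.3*g2(x), with g1 = N(0, 1), g2 = N(5, 1).
    G holds the cumulative sums G_i = p_1 + ... + p_i used in Step 4."""
    rng = random.Random(seed)
    p = [0.7, 0.3]                                 # mixture weights, descending
    means = [0.0, 5.0]
    G = [sum(p[: i + 1]) for i in range(len(p))]   # cumulative probabilities
    out = []
    for _ in range(n):
        u = rng.random()                                  # Step 3
        j = next(i for i, gi in enumerate(G) if u < gi)   # Step 4
        out.append(rng.gauss(means[j], 1.0))              # Step 5: x = z_j
    return out

samples = sample_density_decomposition(40_000)
mean = sum(samples) / len(samples)   # E[x] = 0.7*0.0 + 0.3*5.0 = 1.5
print(mean)
```

Because the largest weight is tested first, the loop in Step 4 exits after one comparison 70% of the time, which is the reason for the descending ordering.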

20.3 Stochastic Vectors with Correlated Components

In this section, the numerical simulation of stochastic vectors with independent and correlated components is discussed. Both Gaussian and nonGaussian vectors are considered.

20.3.1 Gaussian Vectors

A multivariate Gaussian stochastic vector x of size m with a mean vector μ and a covariance matrix Σ is completely described by the following multidimensional joint probability density function (JPDF):

f_x(x; μ, Σ) = (2π)^{-m/2} (det Σ)^{-1/2} exp[-(1/2)(x − μ)^T Σ^{-1} (x − μ)]    (20.2)

The probability distribution of vector x is usually denoted N(μ, Σ). If the Gaussian vector is a standard Gaussian vector, z, with a zero mean, 0, and a covariance matrix equal to the identity matrix, I, then its probability distribution is N(0, I). The JPDF of vector z is defined by:

f_z(z; 0, I) = (2π)^{-m/2} exp(-(1/2) z^T z) = ∏_{i=1}^{m} (2π)^{-1/2} exp(-(1/2) z_i^2)    (20.3)

To simulate a random sample of a Gaussian stochastic vector x (with correlated components) that has the probability distribution N(μ, Σ), a two-step computational procedure is needed:

Step 1: Simulate a standard Gaussian vector z (with independent random components) having the probability distribution N(0, I). This step can be achieved using standard routines for simulating independent normal random variables.
Step 2: Compute the vector x using the linear matrix equation

x = μ + S z    (20.4)

where the matrix S is called the square-root matrix of the positive-definite covariance matrix Σ. The matrix S is a lower triangular matrix that is computed using the Choleski decomposition, defined by the equation S S^T = Σ.

Another situation of practical interest is to generate a Gaussian vector y with a probability distribution N(μ_y, Σ_yy) that is conditioned on an input Gaussian vector x with a probability distribution N(μ_x, Σ_xx). Assume that the augmented vector [y, x]^T has the mean μ and the covariance matrix Σ defined by

μ = [μ_y, μ_x]^T,    Σ = [[Σ_yy, Σ_yx], [Σ_xy, Σ_xx]]    (20.5)


then the statistics of the conditional vector y are computed by the matrix equations:

μ_{y|x} = μ_y + Σ_yx Σ_xx^{-1} (x − μ_x),    Σ_{yy|x} = Σ_yy − Σ_yx Σ_xx^{-1} Σ_xy    (20.6)

It is interesting to note that the above relationships define a stochastic condensation procedure that reduces the size of a stochastic vector, from the size of the total vector [y, x]^T with probability distribution N(μ, Σ) to the size of the reduced vector y with probability distribution N(μ_y, Σ_yy). The above simulation procedures employed for Gaussian vectors can be extended to nonGaussian vectors, as shown in the next subsection.

An alternate technique to the Choleski decomposition for simulating stochastic vectors with correlated components is the popular principal component analysis (PCA), which is based on the eigen decomposition of the covariance matrix. If PCA is used, then the matrix equation in Step 2 above is replaced by the following matrix equation

x = μ + Φ Λ^{1/2} z    (20.7)

where Λ and Φ are the eigenvalue and eigenvector matrices, respectively, of the covariance matrix.
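The two-step procedure of Equation 20.4 can be sketched directly; the 2 × 2 covariance matrix and the hand-coded Choleski factor below are illustrative assumptions (in practice a library linear-algebra routine would factor Σ):

```python
import math
import random

def sample_gaussian_vector(mu, cov, n, seed=3):
    """Simulate x = mu + S z (Equation 20.4) for a 2-D Gaussian vector,
    where S is the lower-triangular Choleski factor with S S^T = cov."""
    s11 = math.sqrt(cov[0][0])
    s21 = cov[1][0] / s11
    s22 = math.sqrt(cov[1][1] - s21**2)
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)  # Step 1: z ~ N(0, I)
        out.append((mu[0] + s11 * z1,                      # Step 2: x = mu + S z
                    mu[1] + s21 * z1 + s22 * z2))
    return out

mu, cov = [1.0, -1.0], [[4.0, 2.4], [2.4, 4.0]]
xs = sample_gaussian_vector(mu, cov, 50_000)
cov01 = sum((a - mu[0]) * (b - mu[1]) for a, b in xs) / len(xs)
print(cov01)   # sample cross-covariance, close to cov[0][1]
```

The PCA variant of Equation 20.7 would differ only in Step 2, multiplying z by Φ Λ^{1/2} instead of the Choleski factor S.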

20.3.2 NonGaussian Vectors

A nonGaussian stochastic vector is completely defined by its JPDF. However, most often in practice, a nonGaussian vector is only partially defined by its second-order moments (i.e., the mean vector and the covariance matrix) and its marginal probability distribution (MCDF) vector. This definition loses information about the high-order statistical moments, which are not defined.

These partially defined nonGaussian vectors form a special class of nonGaussian vectors called translation vectors. Thus, the translation vectors are nonGaussian vectors defined by their second-order moments and their marginal distributions. Although the translation vectors are not completely defined stochastic vectors, they are of great practicality: first, because in practice we most often have statistical information limited to second-order moments and marginal probability distributions; and second, because they capture the most critical nonGaussian aspects, which are most significantly reflected in the marginal distributions. Of great practical benefit is that translation vectors can be easily mapped into Gaussian vectors, and therefore can be easily handled and programmed.

To generate a nonGaussian (translation) stochastic vector y with a given covariance matrix Σ_yy and a marginal distribution vector F, first a Gaussian image vector x with a covariance matrix Σ_xx and marginal distribution vector Φ(x) is simulated. Then, the original nonGaussian vector y is simulated by applying IPT to the MCDF vector F as follows:

y = F^{-1}(Φ(x)) = g(x)    (20.8)

However, for simulating the Gaussian image vector we need to define its covariance matrix as a transform of the covariance matrix of the original nonGaussian vector. Between the elements of the scaled covariance (correlation coefficient) matrix of the original nonGaussian vector, ρ_{yi,yj}, and the elements of the scaled covariance (correlation coefficient) matrix of the Gaussian image vector, ρ_{xi,xj}, there is the following relation:

ρ_{yi,yj} σ_{yi} σ_{yj} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} [F_i^{-1}(Φ(x_i)) − μ_{yi}] [F_j^{-1}(Φ(x_j)) − μ_{yj}] φ(x_i, x_j) dx_i dx_j    (20.9)


where the bivariate Gaussian probability density is defined by:

φ(x_i, x_j) = [2π σ_{xi} σ_{xj} (1 − ρ_{xi,xj}^2)^{0.5}]^{-1} exp{ −[ (x_i − μ_{xi})^2/σ_{xi}^2 − 2 ρ_{xi,xj} (x_i − μ_{xi})(x_j − μ_{xj})/(σ_{xi} σ_{xj}) + (x_j − μ_{xj})^2/σ_{xj}^2 ] / [2(1 − ρ_{xi,xj}^2)] }    (20.10)

Two probability transformation options are attractive: (1) the components of the Gaussian image stochastic vector have means and variances equal to those of the original nonGaussian vector (i.e., μ_y = μ_x and σ_y = σ_x), or (2) the Gaussian image vector is a standard Gaussian vector (i.e., μ_x = 0 and σ_x = 1). Depending on the selected option, the above equations take a simpler form.

Thus, the problem of generating nonGaussian vectors is a four-step procedure, of which two steps are identical with those used for generating a Gaussian vector:

Step 1: Compute the covariance matrix of the Gaussian image vector x using Equation 20.9.
Step 2: Generate a standard Gaussian vector z.
Step 3: Generate a Gaussian vector x using Equation 20.4 or Equation 20.7.
Step 4: Simulate the nonGaussian vector y using the inverse probability transformation of the MCDF, y = F^{-1}(Φ(x)).

Sometimes in practice, the covariance matrix transformation is neglected, assuming that the correlation coefficient matrix of the original nonGaussian vector and that of the image Gaussian vector are identical. This assumption is wrong and may produce a violation of the physics of the problem. It should be noted that calculations of correlation coefficients have indicated that if ρ_{xi,xj} is equal to 0 or 1, then ρ_{yi,yj} is also equal to 0 or 1, respectively; however, when ρ_{xi,xj} is −1, then ρ_{yi,yj} is not necessarily equal to −1. For significant negative correlations between vector components, the effects of the covariance matrix transformation become significant, especially for nonGaussian vectors that have skewed marginal PDF shapes. Unfortunately, this is not always appreciated in the engineering literature; there are a number of journal articles on probabilistic applications in which the significance of the covariance matrix transformation is underestimated. For example, the application of the popular Nataf probability model to nonGaussian vectors with correlated components neglects the covariance matrix transformation. Grigoriu [11] showed a simple example of a bivariate lognormal distribution for which the lowest value of the correlation coefficient is −0.65, and not −1.00, which is the lowest value of the correlation coefficient for a bivariate Gaussian distribution.

For the particular situation of a nonGaussian component y_i and a Gaussian component x_j, the correlation coefficient ρ_{yi,xj} can be computed by

ρ_{yi,xj} σ_{yi} σ_{xj} = ∫_{-∞}^{∞} ∫_{-∞}^{∞} [F_i^{-1}(Φ(x_i)) − μ_{yi}] (x_j − μ_{xj}) φ(x_i, x_j) dx_i dx_j    (20.11)

As shown in the next section, the above probability transformation simulation procedure can be extended from nonGaussian vectors to multivariate nonGaussian stochastic fields. The nonGaussian stochastic fields that are partially defined by their mean and covariance functions and marginal distributions are called translation stochastic fields.
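Equation 20.8 can be sketched for a single pair of vector components. An exponential marginal is assumed purely for illustration because its inverse CDF is explicit, and the Gaussian-image correlation ρ_x below is simply picked rather than solved from Equation 20.9:

```python
import math
import random

def gauss_cdf(x):
    """Standard Gaussian CDF, Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def translation_pair(rho_x, n, seed=11):
    """Simulate a translation vector with exponential(1) marginals:
    build a correlated standard Gaussian image pair (x1, x2) with
    correlation rho_x, then map each component through
    y = F^-1(Phi(x)), with F^-1(u) = -ln(1 - u) (Equation 20.8)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        x1 = z1
        x2 = rho_x * z1 + math.sqrt(1.0 - rho_x**2) * z2
        out.append((-math.log(1.0 - gauss_cdf(x1)),
                    -math.log(1.0 - gauss_cdf(x2))))
    return out

ys = translation_pair(rho_x=0.8, n=30_000)
m1 = sum(a for a, _ in ys) / len(ys)   # exponential(1) marginal mean is 1
print(m1)
```

Note that the resulting correlation of (y₁, y₂) is somewhat smaller in magnitude than ρ_x; this distortion is exactly what the covariance matrix transformation of Equation 20.9 corrects for.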

Another way to generate nonGaussian stochastic vectors is based on the application of the SAR method, described above for one-dimensional random variables, to random vectors with correlated components. This application of SAR to multivariate cases is the basis of the Metropolis-Hastings algorithm, which is extremely popular in the Bayesian statistics community. Importantly, the Metropolis-Hastings algorithm [12] is not limited to translation vectors. The basic idea is to sample directly from the JPDF of the nonGaussian vector using an adaptive and importance sampling strategy based on the application of SAR. At each


simulation step, random samples are generated from a simple proposal, or trial, JPDF that is different from the target JPDF and are then weighted in accordance with the importance ratio. This produces dependent samples that represent a Markov chain random walk in the input space. The Metropolis-Hastings algorithm is the cornerstone of the Markov chain Monte Carlo (MCMC) simulation approach discussed in Section 20.5 of this chapter.

To move from the vector sample, or state, x_t to state x_{t+1}, the following steps are applied:

Step 1: Sample y ~ q(x_t, y) from the trial density, which is initially assumed to be identical to the conditional density q_t(y | x).
Step 2: Compute the acceptance probability

α(x_t, y) = min{ 1, [π(y) q(y, x_t)] / [π(x_t) q(x_t, y)] }    (20.12)

where π is the target JPDF.
Step 3: Set x_{t+1} = y for sampling acceptance; otherwise, set x_{t+1} = x_t for sampling rejection.

Based on the ergodicity of the simulated Markov chain, the random samples of the target JPDF are obtained by executing repeated draws, spaced an equal number of steps apart, from the Markov chain movement in the stochastic space. It is also possible to use multiple Markov chains simultaneously.
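The three steps can be sketched for a one-dimensional target, assumed here to be a standard Gaussian so that π can be evaluated in closed form, with a symmetric random-walk proposal for which the q ratio in Equation 20.12 cancels:

```python
import math
import random

def metropolis_hastings(n_steps, step=1.0, seed=5):
    """Random-walk Metropolis-Hastings targeting pi(x) ~ exp(-x^2 / 2).
    With a symmetric proposal q, the acceptance probability of
    Equation 20.12 reduces to min(1, pi(y) / pi(x_t))."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)                   # Step 1: propose y
        alpha = min(1.0, math.exp(0.5 * (x * x - y * y)))  # Step 2
        if rng.random() < alpha:                           # Step 3: accept y ...
            x = y
        chain.append(x)                                    # ... else keep x_t
    return chain

chain = metropolis_hastings(100_000)
kept = chain[10_000:]                  # discard a burn-in segment
mean = sum(kept) / len(kept)
var = sum(v * v for v in kept) / len(kept) - mean**2
print(mean, var)
```

The chain states are correlated, which is why a burn-in segment is dropped and why, as noted above, draws are usually taken at equally spaced steps.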

A competing algorithm to the Metropolis-Hastings algorithm is the so-called Gibbs sampler [13]. The Gibbs sampler assumes that the probability of sampling rejection is zero; that is, all samples are accepted. This makes it simpler but less flexible than the Metropolis-Hastings algorithm. One step of the Gibbs sampler that moves the chain from state x_t to state x_{t+1} involves the following simulation steps, breaking the vector x into its k components:

Step 1: Sample x_1^{t+1} ~ π(x_1 | x_2^t, ..., x_k^t).
Step 2: Sample x_2^{t+1} ~ π(x_2 | x_1^{t+1}, x_3^t, ..., x_k^t).    (20.13)
...
Step j: Sample x_j^{t+1} ~ π(x_j | x_1^{t+1}, ..., x_{j-1}^{t+1}, x_{j+1}^t, ..., x_k^t).
...
Step k: Sample x_k^{t+1} ~ π(x_k | x_1^{t+1}, ..., x_{k-1}^{t+1}).

To ensure an accurate vector simulation for both the Metropolis-Hastings and the Gibbs algorithms, it is important to check the ergodicity of the generated chain, especially whether it can get stuck in a local energy minimum. The Gibbs sampler is the most susceptible to getting stuck in different regions of the parameter space (metastable states).
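One full Gibbs sweep per Equation 20.13 can be sketched for a bivariate standard Gaussian target with correlation ρ, assumed here because its full conditionals are available in closed form, π(x₁ | x₂) = N(ρx₂, 1 − ρ²):

```python
import math
import random

def gibbs_bivariate(rho, n_steps, seed=9):
    """Gibbs sampler for a bivariate standard Gaussian with correlation rho.
    Each component is redrawn from its full conditional, so every draw
    is accepted (the rejection probability is zero)."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x1, x2, chain = 0.0, 0.0, []
    for _ in range(n_steps):
        x1 = rng.gauss(rho * x2, sd)   # Step 1: x1 ~ pi(x1 | x2)
        x2 = rng.gauss(rho * x1, sd)   # Step 2: x2 ~ pi(x2 | new x1)
        chain.append((x1, x2))
    return chain

chain = gibbs_bivariate(rho=0.6, n_steps=50_000)
corr = sum(a * b for a, b in chain) / len(chain)   # E[x1 * x2] = rho
print(corr)
```

As ρ approaches ±1, the conditional standard deviation shrinks and the chain moves in ever smaller steps, illustrating why strongly correlated targets can trap the Gibbs sampler in one region of the parameter space.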

Most MCMC researchers currently use derivatives of the Metropolis-Hastings algorithm. When implementing the Gibbs sampler or Metropolis-Hastings algorithms, key questions arise: (1) How do we block components to account for the correlation structure and dimensionality of the target distribution? (2) How do we choose the updating scanning strategy: deterministic or random? (3) How do we devise the proposal densities? (4) How do we carry out convergence checking and diagnostics on one or more realizations of the Markov chain?

20.4 Stochastic Fields (or Processes)

A stochastic process or field is completely defined by its JPDF. Typically, the term “stochastic process” is used particularly in conjunction with the time evolution of a dynamic random phenomenon, while the term “stochastic field” is used in conjunction with the spatial variation of a stochastic surface. A space-time stochastic process is a stochastic function having time and space as independent arguments. The term “space-time stochastic process” is synonymous with the term “time-varying stochastic field.” More generally, a stochastic function is the output of a complex physical stochastic system. Because a stochastic output can be described by a stochastic surface in terms of input parameters (for given


ranges of variability), it appears that the term “stochastic field” is a more appropriate term for stochastic function approximation. Thus, stochastic field fits well with stochastic boundary value problems. Stochastic process fits well with stochastic dynamic phenomena and random vibration, especially for stochastic stationary (steady-state) problems that assume an infinite time axis. The term “stochastic field” is used hereafter.

Usually, in advanced engineering applications, simplistic stochastic models are used for idealizing component stochastic loading, material properties, manufacturing geometry, and assembly deviations. To simplify the stochastic modeling, it is often assumed that the shape of spatial random variations is deterministic. Thus, the spatial variability is reduced to a single random variable problem, specifically to a random scale factor applied to a deterministic spatial shape. Another simplified stochastic model that has been extensively used in practice is the traditional response surface method based on quadratic regression and experimental design rules (such as circumscribed central composite design, CCCD, or Box-Behnken design, BBD). The response surface method imposes a global quadratic trend surface for approximating stochastic spatial variations that might violate the physics of the problem. However, the traditional response surface method is practical for mildly nonlinear stochastic problems with a reduced number of random parameters.

From the point of view of a design engineer, the simplification of stochastic modeling is highly desired. A design engineer would like to keep his stochastic modeling as simple as possible so he can understand it and simulate it with a good confidence level (obviously, this confidence is subjective and depends on the analyst’s background and experience). Therefore, the key question of the design engineer is: Do I need to use stochastic field models for random variations, or can I use simpler random variable models? The answer is yes or no on a case-by-case basis. Obviously, if by simplifying the stochastic modeling the design engineer significantly violates the physics behind the stochastic variability, then he has no choice; he has to use refined stochastic field models. For example, stochastic field modeling is important for turbine vibration applications because blade mode-localization and flutter phenomena can occur. These blade vibration-related phenomena are extremely sensitive to small spatial variations in blade properties or geometry produced by the manufacturing or assembly process. Another example of the need for refined stochastic field modeling is the seismic analysis of large-span bridges. For large-span bridges, the effects of the nonsynchronicity and spatial variation of incident seismic waves on structural stresses can be very significant. To capture these effects, we need to simulate the earthquake ground motion as a dynamic stochastic field or, equivalently, as a space-time stochastic process, as described later in this section.

A stochastic field can be homogeneous or nonhomogeneous, isotropic or anisotropic, depending on whether its statistics are invariant or variant to axis translation and, respectively, invariant or variant to axis rotation in the physical parameter space. Depending on the complexity of the physics described by the stochastic field, the stochastic modeling assumptions can affect the simulated solutions negligibly or severely. Also, for complex problems, if the entire set of stochastic input and stochastic system parameters is considered, then the dimensionality of the stochastic space spanned by the stochastic field model can be extremely large.

This chapter describes two important classes of stochastic simulation techniques for continuous multivariate stochastic fields (or stochastic functionals) that can be successfully used in advanced engineering applications. Both classes of simulation techniques are based on the decomposition of the stochastic field into a set of elementary uncorrelated stochastic functions or variables. The most desirable situation from an engineering perspective is to be able to simulate the original stochastic field using a reduced number of elementary stochastic functions or variables. The dimensionality reduction of the stochastic input is extremely beneficial because it also reduces the overall dimensionality of the engineering reliability problem.

This chapter focuses on stochastic field simulation models that use a limited number of elementary stochastic functions, also called stochastic reduced-order models. Many other popular stochastic simulation techniques, used especially in conjunction with random signal processing — such as discrete autoregressive (AR) models, moving-average (MA) models, combined ARMA or ARIMA models, Gabor

1180_C20.fm Page 9 Wednesday, October 27, 2004 8:04 PM

Page 10: Stochastic Simulation Methods for Engineering Predictions · variety of stochastic models—from the simplest stochastic models described by one-dimensional random variable models,

20-10 Engineering Design Reliability Handbook

transform models, wavelet transform models, and many others — are not included due to space limitation. This is not to overshadow their merit.

In this chapter two important types of stochastic simulation models are described:

1. One-level hierarchical stochastic field (or stochastic functional) model. This simulation model is based on an explicit representation of a stochastic field. This representation is based on a statistical function (causal relationship) approximation by nonlinear regression. Thus, the stochastic field is approximated by a stochastic hypersurface u that is conditioned on the stochastic input x. The typical explicit representation of a stochastic field has the following form:

u \mid \mathbf{x} = u(\mathbf{x}) = \bar{u}_{u|\mathbf{x}} + [u(\mathbf{x}) - \bar{u}_{u|\mathbf{x}}]   (20.14)

In the above equation, first the conditional mean term, ū_{u|x}, is computed by minimizing the global mean-square error over the sample space. Then, the randomly fluctuating term [u(x) − ū_{u|x}] is treated as a zero-mean decomposable stochastic field that can be factorized using a Wiener-Fourier series representation. This type of stochastic approximation, based on regression, is limited to convergence in the mean-square sense.

In the traditional response surface method, the series expansion term is limited to a single term defined by a stochastic residual vector defined directly in the original stochastic space (the vector components are the differences between exact and mean values determined at selected sampling points via experimental design rules).

2. Two-level hierarchical stochastic field (or stochastic functional) model. This simulation model is based on an implicit representation of a stochastic field. This representation is based on the joint probability density function (JPDF) (non-causal relationship) estimation. Thus, the stochastic field u is described by the JPDF of an augmented stochastic system [x, u]^T that includes both the stochastic input x and the stochastic field u. The augmented stochastic system is completely defined by its JPDF f(x, u). This JPDF implicitly defines the correlation structure of the field. Then, the conditional PDF f(u | x) of the stochastic field can be computed using the JPDF of the augmented system and the JPDF of the stochastic input f(x) as follows:

f(\mathbf{u} \mid \mathbf{x}) = f(\mathbf{x}, \mathbf{u}) / f(\mathbf{x})   (20.15)

The JPDF of the augmented stochastic system can be conveniently computed using its projections onto a stochastic space defined by a set of locally defined, overlapping JPDF models. The set of local or conditional JPDFs describes completely the local structure of the stochastic field. This type of stochastic approximation, based on joint density estimation, converges in a probability sense.
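Equation 20.15 can be checked on a simple case. In the sketch below, the augmented system [x, u]^T is taken to be a standard bivariate Gaussian with an assumed correlation ρ = 0.6, for which the conditional density is known in closed form; this is an illustration, not a model from the text.

```python
import numpy as np

def gauss_pdf(v, mean, var):
    # Univariate Gaussian PDF with variance var
    return np.exp(-(v - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def joint_pdf(x, u, rho):
    # Standard bivariate Gaussian JPDF f(x, u) with correlation rho
    det = 1.0 - rho ** 2
    q = (x ** 2 - 2.0 * rho * x * u + u ** 2) / det
    return np.exp(-0.5 * q) / (2.0 * np.pi * np.sqrt(det))

rho, x0, u0 = 0.6, 0.8, -0.3          # assumed illustrative values

# Equation 20.15: f(u | x) = f(x, u) / f(x)
f_cond = joint_pdf(x0, u0, rho) / gauss_pdf(x0, 0.0, 1.0)

# Closed form for the Gaussian case: u | x ~ N(rho * x, 1 - rho^2)
f_exact = gauss_pdf(u0, rho * x0, 1.0 - rho ** 2)
```

The ratio of the joint to the marginal density reproduces the known conditional Gaussian density exactly.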

There are a few key aspects that differentiate the two stochastic field simulation models. The one-level hierarchical model is based on statistical function estimation, which is a more restrictive approximation problem than the density estimation employed by the two-level hierarchical model. Statistical function estimation based on regression can fail when the stochastic field projection on the input space is not convex. Density estimation is a much more general estimation problem than a statistical function estimation problem. Density estimation is always well-conditioned because there is no causal relationship implied.

The two-level hierarchical model uses the local density functions to approximate a stochastic field. The optimal solution rests between the use of a large number of small-sized isotropic-structure density functions (with no correlation structure) and a reduced number of large-sized anisotropic-structure density functions (with strong correlation structure). The preference is for a reduced number of local density functions. Cross-validation or Bayesian inference techniques can be used to select the optimal stochastic models, based either on error minimization or likelihood maximization.


20.4.1 One-Level Hierarchical Simulation Models

To simulate complex-pattern nonGaussian stochastic fields (functionals), it is advantageous to represent them using a Wiener-Fourier type series [14–16]:

u(\mathbf{x}, \theta) = \sum_{i=0}^{\infty} u_i(\mathbf{x})\, f_i(\theta) = \sum_{i=0}^{\infty} u_i(\mathbf{x})\, f_i(\mathbf{z}(\theta))   (20.16)

where the argument z is a set of independent standard Gaussian random variables and f is a set of orthogonal basis functions (that can be expressed in terms of the random variable set z). A simple choice is to take the set of stochastic basis functions f equal to the random variable set z. There are two main disadvantages when using Wiener-Fourier series approximations. First, the stochastic orthogonal basis functions f_i are multidimensional functions that require intensive computations, especially for high-dimensional stochastic problems, for which a large number of coupling terms need to be included to achieve adequate convergence of the series.

Generally, under certain integrability conditions, a stochastic function can be decomposed in an orthogonal structure in a probability measure space. Assume that the uncorrelated stochastic variables z_i, i = 1, 2, …, m, are defined on a probability space, that u(z_1, …, z_m) is a square-integrable function with respect to the probability measure, and that {p_{i,k}(z_i)}, i = 1, 2, …, m, are complete sets of square-integrable stochastic functions orthogonal with respect to the probability density P_i(dz_i)/dz_i = f_{z_i}(z_i), so that E[p_k(z_i) p_l(z_i)] = 0 for all k ≠ l, k, l = 0, 1, …, and i = 1, …, m. Then, the function u(z_1, …, z_m) can be expanded in a generalized Wiener-Fourier series:

u(z_1, \ldots, z_m) = \sum_{k_1=0}^{\infty} \cdots \sum_{k_m=0}^{\infty} u_{k_1 \cdots k_m}\, p_{k_1}(z_1) \cdots p_{k_m}(z_m)   (20.17)

where {p_{k_1}(z_1), …, p_{k_m}(z_m)}, k_1, …, k_m = 0, 1, …, are complete sets of stochastic orthogonal (uncorrelated) functions. The generalized Wiener-Fourier series coefficients u_{k_1…k_m} can be computed by solving the integral

u_{k_1 \cdots k_m} = \int \cdots \int u(z_1, \ldots, z_m)\, p_{k_1}(z_1) \cdots p_{k_m}(z_m)\, f_{z_1}(z_1) \cdots f_{z_m}(z_m)\, dz_1 \cdots dz_m   (20.18)

The coefficients u_{k_1…k_m} have a key minimizing property; specifically, the integral difference

D = \int \cdots \int \left[ u(z_1, \ldots, z_m) - \sum_{k_1=0}^{M_1} \cdots \sum_{k_m=0}^{M_m} g_{k_1 \cdots k_m}\, p_{k_1}(z_1) \cdots p_{k_m}(z_m) \right]^2 f_{z_1}(z_1) \cdots f_{z_m}(z_m)\, dz_1 \cdots dz_m   (20.19)

reaches its minimum only for g_{k_1…k_m} = u_{k_1…k_m}.

Several factorization techniques can be used for the simulation of complex-pattern stochastic fields. An example is the use of the Pearson differential equation for defining different types of stochastic series representations based on orthogonal Hermite, Legendre, Laguerre, and Chebyshev polynomials. These polynomial expansions are usually called Askey chaos series [16]. A major application of stochastic field decomposition theory is the spectral representation of stochastic fields using covariance kernel factorization. These covariance-based techniques have a large potential for engineering applications because they can be applied to any complex, static, or dynamic nonGaussian stochastic field. Herein, in addition to covariance-based factorization techniques, an Askey polynomial chaos series model based on Wiener-Hermite stochastic polynomials is presented. This polynomial chaos model based on Wiener-Hermite series has been extensively used by many researchers over the past decade [15–17].


20.4.1.1 Covariance-Based Simulation Models

Basically, there are two competing simulation techniques using the covariance kernel factorization: (1) the Choleski decomposition technique (Equation 20.4) and (2) the Karhunen-Loeve (KL) expansion (Equation 20.7). They can be employed to simulate both static and dynamic stochastic fields. A notable property of these two simulation techniques is that they can handle both real-valued and complex-valued covariance kernels. For simulating space-time processes (or dynamic stochastic fields), the two covariance-based techniques can be employed either in the time-space domain, by decomposing the cross-covariance kernel, or in the complex frequency-wavelength domain, by decomposing the complex cross-spectral density kernel. For real-valued covariance kernels, the application of the KL expansion technique is equivalent to the application of the Proper Orthogonal Decomposition (POD) and Principal Component Analysis (PCA) techniques [17, 18].

More generally, the Choleski decomposition and the KL expansion can be applied to any arbitrary square-integrable, complex-valued stochastic field u(x, θ). Because the covariance kernel Cov[u(x, θ), u(x′, θ)] of the complex-valued stochastic field is a Hermitian kernel, it can be factorized using either the Choleski or the KL decomposition.

If the KL expansion is used, the covariance function is expanded in the following eigenseries:

\mathrm{Cov}[u(\mathbf{x}, \theta), u(\mathbf{x}', \theta)] = \sum_{n=0}^{\infty} \lambda_n\, \Phi_n(\mathbf{x})\, \Phi_n(\mathbf{x}')   (20.20)

where λ_n and Φ_n(x) are the eigenvalue and the eigenvector, respectively, of the covariance kernel, computed by solving the integral equation (based on Mercer’s theorem) [19]:

\int_D \mathrm{Cov}[u(\mathbf{x}, \theta), u(\mathbf{x}', \theta)]\, \Phi_n(\mathbf{x}')\, d\mathbf{x}' = \lambda_n \Phi_n(\mathbf{x})   (20.21)

As a result of the covariance function being Hermitian, all its eigenvalues are real, and the associated complex eigenfunctions that correspond to distinct eigenvalues are mutually orthogonal. Thus, they form a complete set spanning the stochastic space that contains the field u. It can be shown that if this deterministic function set is used to represent the stochastic field, then the stochastic coefficients used in the expansion are also mutually orthogonal (uncorrelated).

The KL series expansion has the general form

u(\mathbf{x}, \theta) = \sum_{i=0}^{n} \sqrt{\lambda_i}\, z_i(\theta)\, \Phi_i(\mathbf{x})   (20.22)

where the set {z_i} represents the set of uncorrelated random variables that are computed by solving the stochastic integral:

z_i(\theta) = \frac{1}{\sqrt{\lambda_i}} \int_D \Phi_i(\mathbf{x})\, u(\mathbf{x}, \theta)\, d\mathbf{x}   (20.23)

The KL expansion is an optimal spectral representation with respect to the second-order statistics of the stochastic field. Equation 20.23 indicates that the KL expansion can also be applied to nonGaussian stochastic fields if sample data from them are available. For many engineering applications in continuum mechanics, the KL expansion is fast mean-square convergent; that is, only a few expansion terms need to be included.
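A minimal discrete KL sketch, assuming a zero-mean Gaussian field on a one-dimensional grid with a hypothetical exponential covariance kernel; the eigenpairs of the discretized covariance matrix play the role of λ_n and Φ_n(x) in Equation 20.22.

```python
import numpy as np

# Discrete KL expansion of a zero-mean field on a 1-D grid.
# The exponential covariance kernel and correlation length are assumed.
x = np.linspace(0.0, 1.0, 200)
corr_len = 0.3
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # covariance matrix

lam, phi = np.linalg.eigh(C)         # eigenvalues in ascending order
lam, phi = lam[::-1], phi[:, ::-1]   # reorder: dominant modes first

n_terms = 10                         # truncation: a stochastic reduced-order model
rng = np.random.default_rng(1)
z = rng.standard_normal(n_terms)     # uncorrelated standard Gaussian z_i

# Equation 20.22, truncated: u(x) = sum_i sqrt(lam_i) z_i Phi_i(x)
u = phi[:, :n_terms] @ (np.sqrt(lam[:n_terms]) * z)

# Fraction of total variance captured by the retained modes
captured = lam[:n_terms].sum() / lam.sum()
```

For this smooth kernel, a handful of modes capture most of the field variance, which is the fast mean-square convergence noted in the text.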

An important practical aspect of the above covariance-based simulation techniques is that they can be easily applied in conjunction with the marginal probability transformation (Equation 20.8 and Equation 20.9)


to simulate nonGaussian (translation) stochastic fields, either static or dynamic. For nonGaussian (translation) stochastic fields, two simulation models can be employed:

1. Original space expansion. Perform the simulation in the original nonGaussian space using the covariance-based expansion with a set {z_i} of uncorrelated nonGaussian variables that can be computed as generalized Fourier coefficients by the integral

z_i(\theta) = \int_D u_i(\mathbf{x})\, u(\mathbf{x}, \theta)\, d\mathbf{x}   (20.24)

In particular, for the KL expansion, Equation 20.23 is used.

2. Transformed space expansion. Perform the simulation in the transformed Gaussian space using the covariance-based expansion with a set {z_i} of standard Gaussian variables, and then transform the Gaussian field to nonGaussian using the marginal probability transformation (Equation 20.8 and Equation 20.9).
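The transformed-space model can be sketched as follows. Since Equation 20.8 and Equation 20.9 are not reproduced in this excerpt, the sketch applies the generic marginal (translation) map u_NG = F⁻¹(Φ(u_G)), with an assumed exponential target marginal as an illustrative choice.

```python
import numpy as np
from math import erf, sqrt

# Step 2 in code: simulate in the Gaussian space, then map each sample
# through the marginal (translation) transformation u_NG = F^{-1}(Phi(u_G)).
# The exponential target marginal (rate = 2) is an assumed example.

def std_normal_cdf(g):
    return 0.5 * (1.0 + erf(g / sqrt(2.0)))

def exponential_ppf(p, rate=2.0):
    # Inverse CDF of the assumed exponential target marginal
    return -np.log(1.0 - p) / rate

rng = np.random.default_rng(3)
u_gauss = rng.standard_normal(50000)          # e.g. samples of one KL variable
p = np.vectorize(std_normal_cdf)(u_gauss)     # Phi(u_G), uniform in (0, 1)
u_nongauss = exponential_ppf(p)               # nonGaussian (translation) samples
```

The mapped samples follow the target marginal (here, mean 1/rate = 0.5) while inheriting their correlation structure from the underlying Gaussian samples.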

In the engineering literature there are many examples of the application of covariance-based expansions for simulating either static or dynamic stochastic fields. In the civil engineering field, Choleski decomposition and the KL expansion were used by several researchers, including Yamazaki and Shinozuka [20], Deodatis [21], Deodatis and Shinozuka [22], Ghiocel [23], and Ghiocel and Ghanem [24], to simulate the random spatial variation of soil properties and earthquake ground motions. Shinozuka [25] and Ghiocel and Trandafir [26] used Choleski decomposition to simulate the stochastic spatial variation of wind velocity during storms. Ghiocel and Ghiocel [27, 28] employed the KL expansion to simulate the stochastic wind fluctuating pressure field on large-diameter cooling towers using a nonhomogeneous, anisotropic dynamic stochastic field model. In the aerospace engineering field, Romanovski [29] and Thomas, Dowell, and Hall [30] used the KL expansion (or POD) to simulate the unsteady pressure field on aircraft jet engine blades. Ghiocel [31, 32] used the KL expansion as a stochastic classifier for jet engine vibration-based fault diagnostics.

An important aerospace industry application is the stochastic simulation of engine blade geometry deviations due to manufacturing and assembly processes. These random manufacturing deviations have a complex stochastic variation pattern [33]. The blade thickness can significantly affect the forced response of rotating bladed disks [34–36]. Due to the cyclic symmetry geometry configuration of engine bladed disks, small manufacturing deviations in blade geometries produce a mode-localization phenomenon, specifically called mistuning, that can increase the airfoil vibratory stresses up to a few times. Blair and Annis [37] used the Choleski decomposition to simulate blade thickness variations, assuming that the thickness variation is a homogeneous and isotropic field. Other researchers, including Ghiocel [38, 39], Griffiths and Tschopp [40], Cassenti [41], and Brown and Grandhi [35], have used the KL expansion (or POD, PCA) to simulate more complex stochastic blade thickness variations due to manufacturing. Ghiocel [36] applied the KL expansion both in the original, nonGaussian stochastic space and in the transformed Gaussian space. It should be noted that the blade thickness variation fields are highly nonhomogeneous and exhibit multiple, localized anisotropic directions due to the manufacturing process constraints (these stochastic variations are also technology dependent). If different blade manufacturing technology datasets are included in a single database, then the resultant stochastic blade thickness variations could be highly nonGaussian, with multimodal, leptokurtic, and platykurtic marginal PDF shapes. More generally, for modeling the blade geometry variations in space, Ghiocel suggested a 3V-3D stochastic field model (three variables, Δx, Δy, Δz, in three dimensions, x, y, z). For only the blade thickness variation, a 1V-2D stochastic field (one variable, thickness, in two dimensions, the blade surface grid) is sufficient. It should be noted that for multivariate-multidimensional stochastic fields composed of several elementary one-dimensional component fields, the KL expansion must be applied to the entire covariance matrix of the stochastic field set, which includes the coupling of all the component fields.

One key advantage of the KL expansion over the Choleski decomposition, which makes the KL expansion (or POD, PCA) more attractive in practice, is that the transformed stochastic space obtained using the


KL expansion typically has a highly reduced dimensionality when compared with the original stochastic space. In contrast, the Choleski decomposition preserves the original stochastic space dimensionality. For this reason, some researchers [29, 30] consider the KL expansion (or POD, PCA) a stochastic reduced-order model for simulating complex stochastic patterns. The KL expansion, in addition to space dimensionality reduction, also provides great insight into the stochastic field structure. The eigenvectors of the covariance matrix play, in stochastic modeling, a role similar to the vibration eigenvectors in structural dynamics; complex spatial variation patterns are decomposed into just a few dominant spatial variation mode shapes. For example, blade thickness variation mode shapes provide great insight into the effects of technological process variability. These insights are very valuable for improving blade manufacturing technology.

The remaining part of this subsection illustrates the application of covariance-based stochastic simulation techniques to generate dynamic stochastic fields. Specifically, the covariance-based methods are used to simulate the stochastic earthquake ground surface motion at a given site.

To simulate the earthquake ground surface motion at a site, a nonstationary, nonhomogeneous stochastic vector process model is required. For illustrative purposes, it is assumed that the stochastic process is a nonhomogeneous, nonstationary, 1V-2D space-time Gaussian process (one variable, the acceleration in an arbitrary horizontal direction, and two dimensions for the horizontal ground surface). The space-time process at any time moment is completely defined by its evolutionary cross-spectral density kernel. For a seismic acceleration field u(x, t), the cross-spectral density for two motion locations i and k is:

S_{u_i,u_k}(\omega, t) = [S_{u_i,u_i}(\omega, t)\, S_{u_k,u_k}(\omega, t)]^{1/2}\, \mathrm{Coh}_{u_i,u_k}(\omega, t)\, \exp[-i\omega (X_{i,D} - X_{k,D})/V_D(t)]   (20.25)

where S_{u_i,u_k}(ω, t) is the cross-spectral density function for the point motions u_i and u_k, and S_{u_j,u_j}(ω, t), j = i, k, is the auto-spectral density for location point j. The function Coh_{u_i,u_k}(ω, t) is the stationary or “lagged” coherence function for locations i and k. The “lagged” coherence is a measure of the similarity of the two point motions, including only the amplitude spatial variation. Herein, it is assumed that the frequency-dependent spatial correlation structure of the stochastic process is time-invariant. The exponential factor exp[−iω(X_{i,D} − X_{k,D})/V_D(t)] represents the wave passage effect in the direction D, expressed in the frequency domain by a phase angle due to the motion delays between the two locations X_i and X_k. The parameter V_D(t) is the apparent horizontal wave velocity in the D direction. Most often in practice, the nonstationary stochastic models of ground motions are based on the assumption that the “lagged” coherence and the apparent directional velocity are independent of time. However, in real earthquakes, the coherence and directional wave velocity vary during the earthquake duration, depending on the arrival times of the different seismic wave packages hitting the site from various directions.

Because the cross-spectral density matrix is Hermitian, either the Choleski decomposition or the KL expansion can be applied. If the Choleski decomposition is applied, then

\mathbf{S}(\omega, t) = \mathbf{C}(\omega, t)\, \mathbf{C}^{*}(\omega, t)   (20.26)

where the matrix C(ω, t) is a complex-valued lower triangular matrix. Then the space-time nonstationary stochastic process can be simulated using the trigonometric series, as the number of frequency components NF → ∞ [21, 22]:

u_i(t) = 2 \sum_{k=1}^{NL} \sum_{j=1}^{NF} |C_{i,k}(\omega_j, t)|\, \sqrt{\Delta\omega}\, \cos[\omega_j t - \theta_{i,k}(\omega_j, t) + \Phi_{k,j}], \quad \text{for } i = 1, 2, \ldots, NL   (20.27)

In the above equation, NL is the number of space locations describing the spatial variation of the motion. The first phase angle term in Equation 20.27 is computed by

\theta_{i,k}(\omega, t) = \tan^{-1} \frac{\mathrm{Im}[C_{i,k}(\omega, t)]}{\mathrm{Re}[C_{i,k}(\omega, t)]}   (20.28)


and the second phase angle term Φ_{k,j} is a random phase angle uniformly distributed in [0, 2π] (the uniform random distribution is consistent with the Gaussian assumption). The above procedure can be used for any space-time stochastic process. For nonGaussian processes, the procedure can be applied in conjunction with the inverse probability transformation (Equation 20.8 and Equation 20.9).

If the complex-valued coherence function (including wave passage effects) is used, then its eigenfunctions are complex functions. Calculations have shown that typically one to five coherence function modes are needed to get an accurate simulation of the process [23, 24]. The number of needed coherence function modes depends mainly on the soil layering stiffness and the frequency range of interest; for high-frequency components, a larger number of coherence modes is needed.
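A stationary, two-location (NL = 2) simplification of Equations 20.25 to 20.28 can be sketched as below. The auto-spectrum and "lagged" coherence are assumed illustrative functions, and because the coherence is taken real-valued (no wave passage effect), the phase angle θ_{i,k} of Equation 20.28 is zero.

```python
import numpy as np

# Stationary, two-location simplification of Equations 20.25 to 20.28.
# The auto-spectrum S(w) and "lagged" coherence below are assumed
# illustrative functions; a real-valued C makes theta_{i,k} = 0.
NL, NF, dw = 2, 256, 0.1
w = dw * (np.arange(NF) + 0.5)              # frequency grid [rad/s]
S = 1.0 / (1.0 + w ** 2)                    # assumed auto-spectral density
coh = np.exp(-0.05 * w)                     # assumed lagged coherence

rng = np.random.default_rng(7)
t = np.linspace(0.0, 40.0, 2000)
u = np.zeros((NL, t.size))
for j in range(NF):
    # Cross-spectral matrix at w_j and its Choleski factor (Equation 20.26)
    Sj = S[j] * np.array([[1.0, coh[j]], [coh[j], 1.0]])
    C = np.linalg.cholesky(Sj)                  # lower triangular
    phase = rng.uniform(0.0, 2.0 * np.pi, NL)   # random phases Phi_{k,j}
    for i in range(NL):
        for k in range(i + 1):
            # Equation 20.27 with theta_{i,k} = 0 (real-valued C here)
            u[i] += 2.0 * abs(C[i, k]) * np.sqrt(dw) * np.cos(w[j] * t + phase[k])
```

Because the two locations share the random phases of the common Choleski component, the simulated motions are strongly correlated at the low frequencies where the assumed coherence is close to one.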

20.4.1.2 Polynomial Chaos Series-Based Simulation Models

Ghanem and Spanos [15] discussed theoretical aspects and key details of the application of Polynomial Chaos to various problems in their monograph on the spectral representation of stochastic functionals.

Polynomial chaos expansion models can be formally expressed as a nonlinear functional of a set of standard Gaussian variables or, in other words, expanded in a set of stochastic orthogonal polynomial functions. The most popular polynomial chaos series model is that proposed by Ghanem and Spanos [15] using a Wiener-Hermite polynomial series:

u(\mathbf{x}, t, \theta) = a_0(\mathbf{x}, t)\, \Gamma_0 + \sum_{i_1=1}^{\infty} a_{i_1}(\mathbf{x}, t)\, \Gamma_1(z_{i_1}(\theta)) + \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} a_{i_1 i_2}(\mathbf{x}, t)\, \Gamma_2(z_{i_1}(\theta), z_{i_2}(\theta)) + \cdots   (20.29)

The symbol Γ_n(z_{i_1}, …, z_{i_n}) denotes the polynomial chaos of order n in the variables (z_{i_1}, …, z_{i_n}). Introducing a one-to-one mapping to a set with ordered indices denoted by {ψ_i(θ)} and truncating the polynomial chaos expansion after the pth term, Equation 20.29 can be rewritten as

u(\mathbf{x}, t, \theta) = \sum_{j=0}^{p} u_j(\mathbf{x}, t)\, \psi_j(\theta)   (20.30)

The polynomial expansion functions are orthogonal in the L2 sense; that is, their inner product with respect to the Gaussian measure that defines their statistical correlation, E[ψ_j ψ_k], is zero for j ≠ k. A given truncated series can be refined along the random dimension either by adding more random variables to the set {z_i} or by increasing the maximum order of the polynomials included in the stochastic expansion. The first refinement takes into account higher-frequency random fluctuations of the underlying stochastic process, while the second refinement captures strong nonlinear dependence of the solution process on this underlying process. Using the orthogonality property of the polynomial chaoses, the coefficients of the stochastic expansion solution can be computed by

u_k = \frac{E[u\, \psi_k]}{E[\psi_k^2]}, \quad \text{for } k = 1, \ldots, K   (20.31)

A method for constructing polynomial chaoses of order n is by generating the corresponding multidimensional Wiener-Hermite polynomials. These polynomials can be generated using the partial differential recurrence rule defined by

\frac{\partial}{\partial z_{i_j}} \Gamma_n(z_{i_1}(\theta), \ldots, z_{i_n}(\theta)) = n\, \Gamma_{n-1}(z_{i_1}(\theta), \ldots, z_{i_{n-1}}(\theta))   (20.32)

The orthogonality of the polynomial chaoses is expressed by the inner product in the L2 sense with respect to the Gaussian measure:

\int_{-\infty}^{\infty} \Gamma_n(z_{i_1}, \ldots, z_{i_n})\, \Gamma_m(z_{i_1}, \ldots, z_{i_m})\, \exp\left(-\tfrac{1}{2}\, \mathbf{z}^T \mathbf{z}\right) d\mathbf{z} = (2\pi)^{n/2}\, n!\, \delta_{nm}   (20.33)
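For a single Gaussian variable, the Wiener-Hermite chaoses Γ_n reduce to the probabilists' Hermite polynomials He_n, and Equation 20.31 can be evaluated by Gauss-Hermite quadrature. The lognormal functional u(z) = exp(z) below is an assumed example with known coefficients u_k = e^{1/2}/k!.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# One-dimensional Wiener-Hermite chaos: Gamma_n = He_n (probabilists').
# Assumed example functional: u(z) = exp(z), a lognormal variable, with
# known chaos coefficients u_k = exp(1/2) / k!.
nodes, weights = He.hermegauss(40)     # Gauss-Hermite(e) quadrature rule
weights = weights / weights.sum()      # normalize to the Gaussian measure

def herm(k, z):
    c = np.zeros(k + 1)
    c[k] = 1.0
    return He.hermeval(z, c)           # evaluates He_k(z)

u = np.exp(nodes)
coeffs = []
for k in range(5):
    num = np.sum(u * herm(k, nodes) * weights)     # E[u psi_k]
    den = np.sum(herm(k, nodes) ** 2 * weights)    # E[psi_k^2] = k!
    coeffs.append(num / den)                       # Equation 20.31
```

The denominator reproduces the n! normalization of Equation 20.33 (here with the measure normalized to one), and the computed coefficients match the closed-form values.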


Although popular in the engineering community, the polynomial chaos series may not necessarily be an efficient computational tool to approximate multivariate nonGaussian stochastic fields. The major problem is the stochastic dimensionality issue. Sometimes, the dimensionality of the stochastic transformed space defined by a polynomial chaos basis can be even larger than the dimensionality of the original stochastic space. Nair [42] discussed this stochastic dimensionality aspect related to the application of polynomial chaos series. Grigoriu [43] showed that the indiscriminate use of polynomial chaos approximations for stochastic simulation can result in inaccurate reliability estimates. To improve its convergence, the polynomial chaos series can be applied in conjunction with the inverse marginal probability transformation (Equation 20.8 and Equation 20.9), as suggested by Ghiocel and Ghanem [24].

20.4.2 Two-Level Hierarchical Simulation Models

The two-level hierarchical simulation model is based on the decomposition of the JPDF of the implicit input-output stochastic system into local or conditional JPDF models. The resulting stochastic local JPDF expansion can be used to describe, in detail, very complex nonstationary, multivariate-multidimensional nonGaussian stochastic fields. It should be noted that the local JPDF expansion is convergent in a probability sense, in contrast with the Wiener-Fourier expansions, which are convergent only in a mean-square sense.

The stochastic local JPDF basis expansion, or briefly, the local density expansion, can be also viewed as a Wiener-Fourier series of a composite type that provides both a global approximation and a local functional description of the stochastic field. In contrast to the classical Wiener-Fourier series that provides a global representation of a stochastic field by employing global basis functions, such as the covariance kernel eigenfunctions in the KL expansion or the polynomial chaoses [16], the stochastic local density expansion provides both a global and a local representation of a stochastic field.

Stochastic local density expansion can be implemented as a two-layer stochastic neural network, while a Wiener-Fourier series can be implemented as a one-layer stochastic neural network. It should be noted that stochastic neural-network models train much faster than the usual multilayer perceptron (MLP) neural networks. Figure 20.1 describes in pictorial format the analogy between the stochastic local JPDF expansion and a two-layer neural-network model.

In the local density expansion, the overall JPDF of the stochastic model is obtained by integration over the local JPDF model space:

(20.34)    g(u) = ∫ f(u | α) p(α) dα

where p(α) is a continuous distribution that plays the role of the probability weighting function over the local model space. In a discrete form, the weighting function can be expressed for a number N of local JPDF models by

(20.35)    p(α) = Σ_{i=1}^{N} P(α_i) δ(α − α_i)

in which δ(α − α_i) is the Dirac delta operator. Typically, the parameters α_i are assumed or known, and the discrete weighting parameters P(α_i) are the unknowns. The overall JPDF of the stochastic model can be computed in the discrete form by

(20.36)    g(u) = Σ_{i=1}^{N} g(u | α_i) P(α_i)
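Sampling from a discrete-form overall JPDF such as Equation 20.36 reduces to two draws: first pick a local model i with probability P(α_i), then sample from that local conditional density. A minimal sketch, with hypothetical univariate Gaussian local models:

```python
import random

# Hypothetical local models f(u | alpha_i), each alpha_i summarized by
# (mean, std); weights are the discrete probabilities P(alpha_i).
local_models = [(-2.0, 0.5), (0.0, 1.0), (3.0, 0.8)]
weights = [0.2, 0.5, 0.3]            # must sum to 1

def sample_overall_jpdf(rng):
    # Draw u ~ g(u) = sum_i g(u | alpha_i) P(alpha_i): pick a local
    # model with probability P(alpha_i), then sample that local density.
    i = rng.choices(range(len(weights)), weights=weights)[0]
    mean, std = local_models[i]
    return rng.gauss(mean, std)

rng = random.Random(42)
us = [sample_overall_jpdf(rng) for _ in range(50000)]
mean_est = sum(us) / len(us)
# mixture mean = 0.2*(-2) + 0.5*0 + 0.3*3 = 0.5
print(mean_est)
```

The same two-stage draw extends directly to multivariate local Gaussians, where the second stage samples from the local mean vector and covariance matrix.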


The parameters α_i can be represented by the second-order statistics (computed by local averaging) of the local JPDF models. Thus, the overall JPDF expression can be rewritten as

(20.37)    g(u) = Σ_{i=1}^{N} f_i(u | ū_i, Σ_i) P_i

where ū_i and Σ_i are the mean vector and covariance matrix, respectively, of the local JPDF model i. Also, P_i = n_i/N, Σ_{i=1}^{N} P_i = 1, and P_i > 0 for i = 1, …, N. Typically, the types of the local JPDF are assumed and the probability weights are computed from the sample datasets. Often, it is assumed that the local JPDFs are multivariate Gaussian models. This assumption implies that the stochastic field is described locally by a second-order stochastic field model. Thus, nonGaussianity is assumed only globally. It is possible to include nonGaussianity locally using the marginal probability transformation of the local JPDF.

The local JPDF models are typically defined on partially overlapping partitions within stochastic spaces called soft partitions. Each local JPDF contributes to the overall JPDF estimate in a localized convex region of the input space. The complexity of the stochastic field model is associated with the number of local JPDF models and the number of stochastic inputs that define the stochastic space dimensionality. Thus, the stochastic model complexity is defined by the number of local JPDFs times the number of input variables. For most applications, the model implementations that are based on a large number of highly localized JPDFs are not good choices because these models have high complexity. For high complexities, we need very informative data to build the locally refined stochastic models. Simpler models with a lesser number of JPDF models have a much faster statistical convergence and are more robust. Thus, the desire is to reduce model complexity to an optimal level.

The complexity reduction of the stochastic field model can be accomplished either by reducing the number of local JPDFs or by reducing their dimensionality. Complexity reduction can be achieved using cross-validation and then removing the local JPDF that produces the minimum variation of the error estimate or the model likelihood estimate. We can also use penalty functions (evidence functions) that

FIGURE 20.1 Analogy between the local JPDF expansion and a two-layer stochastic neural network model.



reduce the model complexity by selecting the most plausible stochastic models based on the evidence coming from sample datasets [44].

Alternatively, we can reduce the model complexity by reducing the dimensionality of local JPDF models using local factor analyzers. Local factor analysis reduces the number of local models by replacing the local variables in the original space by a reduced number of new local variables defined in local transformed spaces using a stochastic linear space projection. If deterministic projection instead of stochastic projection is applied, the factor analysis coincides with PCA. However, in practice, especially for high-dimensional problems, it is useful to consider a stochastic space projection by adding some random noise to the deterministic space projection. Adding noise is beneficial because instead of solving a multidimensional eigen-problem for decomposing the covariance kernel, we can implement an extremely fast convergent iterative algorithm for computing the principal covariance directions. The local JPDF expansion combined with the local probabilistic PCA decomposition is called herein the local PPCA expansion. Figure 20.2 shows the simulated sample data and the local PPCA models for a complex pattern trivariate stochastic field.
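The fast iterative computation of principal covariance directions mentioned above can be illustrated with a plain power iteration. This is a deterministic sketch of the idea only: the probabilistic PPCA version would add the noise model and an EM update, and the 2-D synthetic cloud below is a hypothetical example.

```python
import random

def power_iteration_principal_direction(data, iters=200):
    # Iteratively compute the leading principal covariance direction
    # without forming or eigen-decomposing the full covariance matrix.
    d = len(data[0])
    n = len(data)
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    centered = [[x[j] - mean[j] for j in range(d)] for x in data]
    w = [1.0] * d                                    # arbitrary start vector
    for _ in range(iters):
        # v = C w computed as (1/n) X^T (X w), avoiding the matrix C itself
        xw = [sum(r[j] * w[j] for j in range(d)) for r in centered]
        v = [sum(centered[i][j] * xw[i] for i in range(n)) / n
             for j in range(d)]
        norm = sum(c * c for c in v) ** 0.5
        w = [c / norm for c in v]                    # renormalize each step
    return w

# Synthetic elongated 2-D cloud along the direction (1, 1)/sqrt(2)
rng = random.Random(1)
data = []
for _ in range(2000):
    t = rng.gauss(0.0, 3.0)      # large variance along the cloud axis
    e = rng.gauss(0.0, 0.3)      # small variance across it
    data.append((t + e, t - e))
w = power_iteration_principal_direction(data)
print(w)    # close to (0.707, 0.707) up to sign
```

Each iteration only needs two passes over the data, which is what makes this kind of scheme attractive in high dimensions compared with a full eigen-solution.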

It should be noted that if the correlation structure of local JPDF models is ignored, these local JPDFs lose their functionality to a significant extent. If the correlation is neglected, a loss of information occurs, as shown in Figure 20.3. The radial basis functions, such as standard Gaussians, or other kernels ignore the local correlation structure due to their shape constraint. By this they lose accuracy for

FIGURE 20.2 Sample data and local PPCA models for a trivariate stochastic field.

FIGURE 20.3 Loss of information if the local correlation structure is ignored.



modeling a complex stochastic field. One way to see radial basis function expansions is to look at them as degenerate local JPDF models. These radial functions are commonly used in a variety of practical applications, often in the form of the radial basis function network (RBFN), fuzzy basis function network (FBFN), and adaptive network-based fuzzy inference systems (ANFIS). Figure 20.4 compares the RBFN model with the local PPCA expansion model. The local PPCA expansion model is clearly more accurate and better captures all the data samples, including those that are remote in the tails of the JPDF. For a given accuracy, the local PPCA expansion needs a much smaller number of local models than the radial basis function model. As shown in the figure, the local PPCA models can elongate along long data point clouds in any arbitrary direction. The radial basis functions do not have this flexibility. Thus, to model a long data point cloud, a large number of overlapping radial basis functions are needed. And even so, the numerical accuracy of radial basis function expansions may still not be as good as the accuracy of local PPCA expansions with just a few arbitrarily oriented local point cloud models. It should be noted that many other popular statistical approximation models, such as Bayesian Belief Nets (BBN), Specht’s Probability Neural Network (PNN), in addition to ANFIS, RBF, and FBF networks, and even the two-layer MLP networks, have some visible conceptual similarities with the more refined local PPCA expansion.
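The accuracy loss of isotropic (radial) kernels on elongated point clouds can be quantified with a small numerical check: fit the same cloud with a full-covariance Gaussian and with the best isotropic Gaussian and compare average log-likelihoods. The cloud geometry and variances below are hypothetical.

```python
import math
import random

def gauss2d_logpdf(x, mean, cov):
    # Log-density of a bivariate Gaussian with full covariance,
    # with the 2x2 inverse written out explicitly.
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    det = a * c - b * b
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    q = (c * dx * dx - 2 * b * dx * dy + a * dy * dy) / det
    return -math.log(2 * math.pi) - 0.5 * math.log(det) - 0.5 * q

rng = random.Random(7)
# Elongated cloud: standard deviation 3 along x, 0.3 along y
pts = [(rng.gauss(0, 3.0), rng.gauss(0, 0.3)) for _ in range(5000)]

full_cov = [[9.0, 0.0], [0.0, 0.09]]     # anisotropic (local PPCA-like) model
iso_var = (9.0 + 0.09) / 2.0             # best isotropic (radial) variance
iso_cov = [[iso_var, 0.0], [0.0, iso_var]]

ll_full = sum(gauss2d_logpdf(p, (0, 0), full_cov) for p in pts) / len(pts)
ll_iso = sum(gauss2d_logpdf(p, (0, 0), iso_cov) for p in pts) / len(pts)
print(ll_full, ll_iso)   # the full-covariance model fits markedly better
```

The gap in average log-likelihood is exactly the information lost by the shape-constrained radial kernel, which is what Figure 20.3 and Figure 20.4 illustrate graphically.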

FIGURE 20.4 Local PPCA expansion model versus RBF network model. (a) Decomposition of sample space in local stochastic models; (b) computed JPDF estimates.



FIGURE 20.5 Local PPCA expansion model captures the nonuniform stochastic variability; a) Initial dataset, uniform data, b) Locally perturbed dataset, nonuniform data.


The stochastic local PPCA expansion is also capable of including accurately local, nonuniform stochastic variations. Figure 20.5 illustrates this fact by comparing the local PPCA expansions for two sample datasets that are identical except for a local domain shown at the bottom left of the plots. For the first dataset (left plots), the local variation was much smaller than for the second dataset (right plots). As seen by comparing the plots, the local PPCA expansion captures well this highly local variability of probabilistic response.

The decomposition of stochastic parameter space in a set of local models defined on soft partitions offers several possibilities for simulating high-dimensional complex stochastic fields. This stochastic sample space decomposition in a reduced number of stochastic local models, also called states, opens up a unique avenue for developing powerful stochastic simulation techniques based on stochastic process or field theory, such as dynamic Monte Carlo. Dynamic Monte Carlo techniques are discussed in the next section.

20.5 Simulation in High-Dimensional Stochastic Spaces

In simulating random samples from a joint probability density g(x), in principle there are always ways to do it because we can relate the multidimensional joint density g(x) to univariate probability densities defined by the full conditional densities g(x_j | x_1, …, x_{j−1}, x_{j+1}, …, x_M):

(20.38)    g(x) = g(x_1, …, x_M) = Π_{j=1}^{M} g(x_j | x_1, …, x_{j−1})
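The chain-rule factorization of the joint density suggests sampling the components one at a time from their conditionals. A minimal sketch, where each conditional is a hypothetical Gaussian AR(1) density (an illustrative choice, not a model from the chapter):

```python
import random

def sample_joint_via_chain_rule(n, rng):
    # g(x) = g(x1) g(x2|x1) ... g(xn|x1,...,x_{n-1}); here each conditional
    # is a hypothetical Gaussian AR(1): x_j | x_{j-1} ~ N(0.8 x_{j-1}, 1)
    x = [rng.gauss(0.0, 1.0)]                 # marginal g(x1) = N(0, 1)
    for _ in range(1, n):
        x.append(rng.gauss(0.8 * x[-1], 1.0))
    return x

rng = random.Random(3)
paths = [sample_joint_via_chain_rule(5, rng) for _ in range(20000)]
cov01 = sum(p[0] * p[1] for p in paths) / len(paths)
print(cov01)   # E[x1 x2] = 0.8 for this particular chain
```

This works because each conditional here is explicit; the point of the paragraph that follows is precisely that in real applications such explicit conditionals are rarely available.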

Unfortunately, in real-life applications, these (univariate) full conditional densities are seldom available in an explicit form that is suitable to direct sampling. Thus, to simulate a random sample from g(x), we have few choices: (1) use independent random samples drawn from univariate marginal densities (static, independent sampling, SMC) to build the full conditionals via some predictive models; or (2) use independent samples from trial univariate marginal and conditional densities (sequential importance sampling, SIS), defined by the probability chain rule decomposition of the joint density, and then adaptively weight them using recursion schemes; or (3) use independent samples drawn from the univariate full conditional densities to produce spatially dependent samples (trajectories) of the joint density using recursion schemes that reflect the stochastic system dynamics (dynamic, correlated sampling, DMC).

For multidimensional stochastic fields, the standard SMC sampling techniques can fail to provide good estimators because their variances can increase sharply with the input space dimensionality. This variance increase depends on stochastic simulation model complexity. For this reason, variance reduction techniques are required. However, classical techniques for variance reduction such as stratified sampling, importance sampling, directional sampling, control variable, antithetic variable, and Rao-Blackwellization methods are difficult to apply in high dimensions.

For simulating complex pattern, high-dimensional stochastic fields, the most adequate techniques are the adaptive importance sampling techniques, such as the sequential importance sampling with independent samples, SIS, or with correlated samples, DMC. Currently, in the technical literature there exists a myriad of recently developed adaptive, sequential, single sample-based or population-based evolutionary algorithms for high-dimensional simulations [6, 45]. Many of these techniques are based on constructing a Markov chain structure of the stochastic field. The fundamental Markov assumption of one-step memory significantly reduces the chain structures of the full or partial conditional densities, thus making the computations affordable. In addition, Markov process theory is well-developed, so that reliable convergence criteria can be implemented.

20.5.1 Sequential Importance Sampling (SIS)

SIS is a class of variance reduction techniques based on adaptive importance sampling schemes [45, 46]. A useful strategy to simulate a sample of a joint distribution is to build up a trial density sequentially until it converges to the target distribution. Assuming a trial density constructed as

g(x) = g(x_1) g(x_2 | x_1) ⋯ g(x_N | x_1, …, x_{N−1})


and using the same decomposition for the target density, π(x) = π(x_1) π(x_2 | x_1) ⋯ π(x_N | x_1, …, x_{N−1}), the importance sampling weight can be computed recursively by

(20.39)    w_t(x_t) = w_{t−1}(x_{t−1}) π(x_t | x_{t−1}) / g_t(x_t | x_{t−1})

An efficient SIS algorithm is based on the following recursive computation steps, using an improved adaptive sampling weight [6, 46]:

Step 1: Simulate x_t from g_t(x_t | x_{t−1}), and let x_t = (x_{t−1}, x_t).
Step 2: Compute the adaptive sampling weight by

(20.40)    w_t = w_{t−1} π_t(x_t) / [π_{t−1}(x_{t−1}) g_t(x_t | x_{t−1})]

The entire sample of the input x is adaptively weighted until the convergence of the trial density to the target density is reached.

An important application of the SIS simulation technique is the nonlinear filtering algorithm based on the linear state-space model that is very popular in the system dynamics and control engineering community. In this algorithm, the system dynamics consist of two major coupled parts: (1) the state equation, typically represented by a Markov process; and (2) the observation equations, which are usually written as

(20.41)    x_t ~ q_t(· | x_{t−1}, θ),    y_t ~ f_t(· | x_t, φ)

where y_t are the observed state variables and x_t are the unobserved state variables. The distribution of x_t is computed using the recursion:

(20.42)    π_t(x_t) = ∫ q_t(x_t | x_{t−1}) f_t(y_t | x_t) π_{t−1}(x_{t−1}) dx_{t−1}

In applications, the discrete version of this model is called the Hidden Markov Model (HMM). If the conditional distributions q_t and f_t are Gaussian, the resulting model is called the linear dynamic model, and the solution can be obtained analytically via recursion and coincides with the popular Kalman filter solution.

To improve the statistical convergence of SIS, we can combine it with resampling techniques, sometimes called SIR, or with rejection control and marginalization techniques.
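The SIS weight recursion combined with the resampling (SIR) idea gives the bootstrap particle filter for a state-space model of the Equation 20.41 type. The Gaussian state and observation densities, their coefficients, and the observation sequence below are all hypothetical choices for illustration.

```python
import math
import random

# Hypothetical state-space model:
#   x_t ~ N(0.9 x_{t-1}, 1)   (state/Markov equation q_t)
#   y_t ~ N(x_t, 0.5^2)       (observation equation f_t)
def bootstrap_filter(ys, n_particles, rng):
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    ws = [1.0 / n_particles] * n_particles
    for y in ys:
        # propagate with the trial density g_t = q_t (bootstrap choice)
        xs = [rng.gauss(0.9 * x, 1.0) for x in xs]
        # adaptive weight update: w_t proportional to w_{t-1} f_t(y_t | x_t)
        ws = [w * math.exp(-0.5 * ((y - x) / 0.5) ** 2) for w, x in zip(ws, xs)]
        total = sum(ws)
        ws = [w / total for w in ws]
        # resampling step (the SIR variant mentioned in the text)
        xs = rng.choices(xs, weights=ws, k=n_particles)
        ws = [1.0 / n_particles] * n_particles
    return sum(xs) / len(xs)          # filtered mean E[x_T | y_1..T]

rng = random.Random(11)
est = bootstrap_filter([1.0, 1.2, 0.9, 1.1], n_particles=5000, rng=rng)
print(est)
```

With Gaussian q_t and f_t as here, the same filtering problem admits the exact Kalman recursion mentioned above, which makes this model a convenient sanity check for particle-filter code.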

20.5.2 Dynamic Monte Carlo (DMC)

DMC simulates realizations of a stationary stochastic field (or process) that has a unique stationary probability density identical to the prescribed JPDF. Instead of sampling directly in the original Cartesian space from marginal distributions, as SMC does, DMC generates a stochastic trajectory of spatially dependent samples, or a random walk, in the state space (or local model space). Figure 20.6 shows the basic conceptual differences between standard SMC and DMC. The Gibbs sampler and Metropolis-Hastings algorithms presented in Section 20.3.2 are the basic ingredients of dynamic Monte Carlo techniques.

The crucial part of the DMC is how to invent the best ergodic stochastic evolution for the underlying system that converges to the desired probability distribution. In practice, the most used DMC simulation



models are based on irreducible, aperiodic, reversible stationary Markov process models. The DMC simulation based on Markov process models is called Markov Chain Monte Carlo simulation, or briefly MCMC. Higher-order Markov chains or nonMarkovian models are more difficult to implement. When using MCMC, a key modeling aspect is how to ensure that the samples of the chain are ergodic, especially when there is a chance that they can be stuck in a local energy minimum. The principal drawback of MCMC is that it has only local moves along its random path. This leads to a very slow convergence to equilibrium or, worse, to a stop in a local state.
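The local-move behavior described above can be seen in the basic random-walk Metropolis algorithm, sketched here for a simple univariate target; the standard Gaussian target and the step size are illustrative choices.

```python
import math
import random

def metropolis(logpdf, x0, step, n, rng):
    # Random-walk Metropolis: a basic MCMC ingredient whose proposals
    # are purely local moves around the current state.
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < logpdf(prop) - logpdf(x):
            x = prop                   # accept the proposed move
        samples.append(x)              # a rejection keeps the current state
    return samples

# Target: standard Gaussian, log pi(x) = -x^2/2 (up to a constant)
rng = random.Random(5)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 50000, rng)
burned = chain[5000:]                  # discard burn-in
print(sum(burned) / len(burned))                  # near 0
print(sum(x * x for x in burned) / len(burned))   # near 1
```

Because every proposal is a small local step, successive samples are strongly correlated; this is exactly the slow-mixing behavior that the Hamiltonian formulation below is designed to overcome.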

A good strategy for improving MCMC movement in space is to combine the local powerful properties of MCMC with the global updating properties of molecular dynamics simulation using a Hamiltonian formulation of system dynamics. This combination is based on the Hamiltonian Markov Chain Monte Carlo (H-MCMC) algorithm [47]. The Hamiltonian MCMC is also called the Hybrid Monte Carlo (HMC). As an alternate to H-MCMC, an MCMC with random jumps has been developed [48].

Herein we describe briefly the H-MCMC algorithm. This algorithm is the preferred MCMC algorithm for high-dimensional stochastic simulation problems. The H-MCMC simulation algorithm can be efficiently implemented in conjunction with conditional PPCA expansion models. Hanson [47] claims in his research paper that “the efficiency of Hamiltonian method does not drop much with increasing dimensionality.” Hanson has used H-MCMC for stochastic problems with a parameter space size up to 128 dimensions.

The H-MCMC algorithm is used to simulate a random sample of a canonical (or Gibbs, or Boltzmann) JPDF of the form:

(20.43)    π(x) = c exp(−H(x))

This distribution is specific to Gibbs fields or, equivalently, to Markov fields [49]. In the above equation, H(x) is the Hamiltonian of the dynamic system.

In the Hamiltonian formulation, the evolution of the dynamic system is completely known once the total energy, or the Hamiltonian function in generalized coordinates, H(q, p), is defined:

(20.44)    H(q, p) = Σ_{i=1}^{n} p_i²/(2m_i) + E(q)

FIGURE 20.6 Conceptual differences between static Monte Carlo and dynamic Monte Carlo.



where q and p are the vectors of the generalized coordinates and generalized momenta (in the phase space), m_i are the masses associated with the individual particles of the system, and E(q) is the potential energy that ensures a coupling between the particles. System dynamics is represented by a trajectory in the phase space. For any arbitrary evolutionary function f(q, p), its rate of change is expressed by its time derivative:

(20.45)    df(q, p)/dt = Σ_{i=1}^{n} [(∂f/∂q_i) q̇_i + (∂f/∂p_i) ṗ_i] = Σ_{i=1}^{n} [(∂f/∂q_i)(∂H/∂p_i) − (∂f/∂p_i)(∂H/∂q_i)]

For stationary stochastic dynamics, the Hamiltonian is conserved. Now, using the molecular dynamics (MD) approach, we can discretize the Hamiltonian function evolution using finite difference equations over the time chopped in small increments. The most used implementation is the leap-frog scheme [47]:

(20.48)    p_i(t + Δt/2) = p_i(t) − (Δt/2) ∂E(q(t))/∂q_i
           q_i(t + Δt) = q_i(t) + Δt p_i(t + Δt/2)
           p_i(t + Δt) = p_i(t + Δt/2) − (Δt/2) ∂E(q(t + Δt))/∂q_i

The leap-frog scheme is time reversible and preserves the phase space volume. The resulting Hamiltonian-MCMC algorithm is capable of having trajectory jumps between different iso-energy surfaces and converges to a target canonical (or Gibbs, Boltzmann) distribution.
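A minimal univariate sketch of the H-MCMC idea combines the leap-frog integrator above (with unit masses) with a Metropolis accept/reject correction; the standard Gaussian target, step size, and trajectory length are illustrative assumptions.

```python
import math
import random

def leapfrog(q, p, grad_E, dt, steps):
    # Leap-frog integration of Hamiltonian dynamics with unit masses:
    # H(q, p) = p^2/2 + E(q).
    p = p - 0.5 * dt * grad_E(q)          # initial half-step in momentum
    for _ in range(steps - 1):
        q = q + dt * p
        p = p - dt * grad_E(q)
    q = q + dt * p
    p = p - 0.5 * dt * grad_E(q)          # final half-step in momentum
    return q, p

def hmc(E, grad_E, x0, n, rng, dt=0.2, steps=10):
    # Hamiltonian MCMC targeting pi(x) proportional to exp(-E(x)).
    x, out = x0, []
    for _ in range(n):
        p0 = rng.gauss(0.0, 1.0)          # refresh the fictitious momentum
        q, p = leapfrog(x, p0, grad_E, dt, steps)
        dH = (E(q) + 0.5 * p * p) - (E(x) + 0.5 * p0 * p0)
        if math.log(rng.random()) < -dH:  # Metropolis correction
            x = q
        out.append(x)
    return out

# Target: standard Gaussian, E(x) = x^2/2
rng = random.Random(9)
chain = hmc(lambda x: 0.5 * x * x, lambda x: x, 0.0, 20000, rng)
print(sum(chain) / len(chain))                    # near 0
print(sum(x * x for x in chain) / len(chain))     # near 1
```

Each proposal here is a whole simulated trajectory rather than a small random step, which is why the resulting chain moves globally through the state space while the accept/reject step keeps the canonical distribution exact.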

Figure 20.7 and Figure 20.8 show the application of H-MCMC to a simple bivariate stochastic field model composed of three overlapping local Gaussians. Figure 20.7 shows a simulated trajectory using the H-MCMC. The random samples were obtained by drawing sample data at each random walk step. Figure 20.8 compares the target JPDF and the estimated JPDF. The estimated JPDF is computed from the 250 random samples drawn at each step of the Markov chain evolution. It can be observed that there is a visible spatial correlation between the generated random samples. In practice, the random samples should be drawn periodically after a large number of random walk steps, so that the spatial correlation is lost. This improves considerably the convergence in probability of the H-MCMC simulation.

An extremely useful application of the DMC simulation is the stochastic interpolation for stochastic fields with missing data, especially for heavy-tailed nonGaussianity. DMC can be also used to simulate conditional-mean surfaces (response surfaces). Figure 20.9 illustrates the use of H-MCMC in conjunction with the local PPCA expansion for a univariate highly nonGaussian stochastic field with unknown mean. It should be noted that the quality of estimation depends on the data density in the sample space. Figure 20.10 shows another simple example of application of H-MCMC for simulating a univariate stochastic field with known mean using different persistence parameters (that define the magnitude of fictitious generalized momenta). The figure shows the H-MCMC samples and the exact (given) and estimated mean curves. By comparing the two plots in Figure 20.10, it should be noted that larger persistence parameters (larger generalized momenta) improve the simulation results for the mean curve in remote regions where data density is very low. Larger fictitious momenta produce larger system dynamics and, as a result, increase the variance and reduce the bias of the mean estimates. It appears that larger momenta in H-MCMC simulations are appropriate for probability estimations in the distribution tails of the joint distributions. Thus, DMC can be also employed to establish confidence bounds of stochastically interpolated surfaces for different probability levels. From the figure we can note that larger momenta provide less biased confidence interval estimates in the remote area of the distribution tails.



Other techniques to accelerate the MCMC convergence are simulated tempering and annealing. Simulated tempering and simulated annealing are popular MCMC applications of stochastic simulation to optimization under uncertainty problems. These techniques, which are described in many textbooks, are not discussed herein; this is not meant to diminish their merit.

20.5.3 Computing Tail Distribution Probabilities

As discussed in the introductory section of this chapter, stochastic simulation can be applied to either (1) simulate random quantities or (2) compute expectations. For probabilistic engineering analyses, both applications of stochastic simulation are of practical interest. For reliability and risk assessment analyses, the second application of simulation is used.

This section addresses the use of stochastic simulation to compute expectations. Only selective aspects are reviewed herein. The focus is on solving high-complexity stochastic problems.

FIGURE 20.7 The prescribed stochastic model and the H-MCMC simulated random samples (drawn point data and random walk).



Expectations can include probability integrals for computing marginal distributions, evidences (the normalization constant for the product of likelihood and prior in Bayes theorem), statistical moments, and expectations of functions. Thus, in most stochastic problems, the numerical aspect consists of computing the expected value of a function of interest f(x) with respect to a target probability density g(x), where x is the stochastic input. Assuming that x_1, x_2, …, x_N are an independent, identically distributed sample set from g(x), then the Monte Carlo estimator is computed by simple statistical averaging over the entire sample set:

(20.49)    f̂ = (1/N) Σ_{i=1}^{N} f(x_i)

The estimator is unbiased and has the variance

(20.50)    var(f̂) = (1/N) ∫ [f(x) − E_g f(x)]² g(x) dx

FIGURE 20.8 Target JPDF versus estimated JPDF (using 250 samples drawn at each step).



implying that the sampling error (i.e., standard deviation) of f̂ is O(N^{−1/2}), independent of the stochastic space dimensionality. Thus, the MC simulation estimator always converges with a constant rate, the error decreasing proportionally to the inverse square root of the number of samples. This is a key property of the standard Monte Carlo estimator that makes the SMC simulation method so popular in all engineering fields.

Many textbooks provide suggestions on how to select the required number of samples to estimate the probability of failure for a given percentage error, independent of an assumed probability estimate. If the mean probability estimate P̂_f has a normal distribution, the number of required samples for the confidence 1 − α/2 is given by

(20.51)    n = t²_{1−α/2, n−1} (1 − P̂_f) / (ε² P̂_f)

in which ε denotes the target relative (percentage) error.

where t is the T or Student distribution quantile with n − 1 degrees of freedom. Thus, it appears that if we respect the above equation and select a small error, then we have nothing to worry about. This is not necessarily true, especially for high-dimensional problems that involve nonGaussian stochastic fields (or responses). The delicate problem is that for standard stochastic simulation based on SMC, although the convergence rate is independent of space dimensionality, the probability estimate variance is not independent of space dimensionality. In fact, it can increase dramatically with dimensionality increase. The probability estimate variance increase affects most severely the tail probability estimates (low and high fractiles) that typically are dependent on localized probability mass distributions in some remote areas of the high-dimensional space.
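The sample-size formula of Equation 20.51 is easy to evaluate; the sketch below uses the large-n normal approximation to the Student t quantile, so the n it returns is the nominal value that, as the text warns, can be misleading in high dimensions.

```python
from statistics import NormalDist

def required_samples(p_f, eps, confidence=0.95):
    # Nominal SMC sample size for estimating a failure probability p_f
    # with relative error eps: n = z^2 (1 - p_f) / (eps^2 p_f), using the
    # normal approximation to the Student t quantile (valid for large n).
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    return int(z * z * (1.0 - p_f) / (eps * eps * p_f)) + 1

# e.g. p_f = 1e-3 with 10% relative error at 95% confidence
n = required_samples(1e-3, 0.10)
print(n)   # several hundred thousand samples
```

Note how the required n scales roughly as 1/p_f: the rarer the failure event, the more prohibitive plain SMC becomes, which motivates the variance reduction and adaptive methods of this section.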

Figure 20.11 shows a simple example of the increase of probability estimate variance with space dimensionality. Consider the computation of the failure probability integral, I, defined by a volume that is internal to a multivariate standard Gaussian function:

(20.52)    I = ∫_0^{1.4} ⋯ ∫_0^{1.4} exp[−½(x_1² + x_2² + ⋯ + x_n²)] dx_1 dx_2 ⋯ dx_n

(20.52)

FIGURE 20.9 Stochastic interpolation using a local PPCA expansion model via H-MCMC simulation.



Figure 20.11 shows the plot of the coefficient of variation (c.o.v.) and the 90% (normalized) confidence interval (a measure of the sampling error) as a function of the input space dimension N, which varies up to 10. For a number of 10,000 random samples, the c.o.v. of the failure probability estimate increases from about 3% for one dimension to about 87% for ten dimensions.
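The growth of the estimate's c.o.v. with dimension can be reproduced qualitatively with a simple hit-or-miss estimator of a box probability under the standard Gaussian. The estimator, bounds, and normalization are assumptions of this sketch, so the numbers differ from those quoted for Figure 20.11, but the sharp growth with dimension at a fixed sample budget is the same effect.

```python
import math
import random

def cov_of_probability_estimate(dim, n_samples, rng):
    # Hit-or-miss estimate of p = P(0 <= Z_j <= 1.4 for all j), with Z a
    # standard Gaussian vector, then the estimator's c.o.v.:
    # c.o.v. = sqrt((1 - p) / (n p)).
    hits = 0
    for _ in range(n_samples):
        if all(0.0 <= rng.gauss(0.0, 1.0) <= 1.4 for _ in range(dim)):
            hits += 1
    p = hits / n_samples
    return math.sqrt((1.0 - p) / (n_samples * p)) if p > 0 else float("inf")

rng = random.Random(13)
results = {dim: cov_of_probability_estimate(dim, 10000, rng)
           for dim in (1, 5, 8)}
for dim, cov in results.items():
    print(dim, cov)
# the c.o.v. of the estimate grows sharply with dimension at a fixed budget
```

Because the box probability shrinks geometrically with dimension, the fixed 10,000-sample budget captures fewer and fewer hits, and the relative error of the tail probability estimate blows up accordingly.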

Thus, for the (random) point estimates of failure probabilities that we are typically getting from our reliability analyses, the use of Equation 20.51 for determining the number of required random samples can be erroneous, and possibly unconservative, as sketched in Figure 20.12. The required number of samples increases with the complexity of the stochastic simulation model, namely with its dimensionality and its nonGaussianity (or nonlinearity of stochastic dependencies).

20.5.4 Employing Stochastic Linear PDE Solutions

A special category of stochastic fields comprises the physics-based stochastic fields. These physics-based fields are defined by the output of computational stochastic mechanics analyses. The physics-based stochastic fields, or stochastic functionals, can be efficiently decomposed using stochastic reduced-order models (ROM) based on the underlying physics of the problem. This physics is typically described by stochastic partial-differential equations. Stochastic ROMs are computed by projecting

FIGURE 20.10 Effects of persistence (fictitious momenta) on the H-MCMC stochastic simulations; (a) persistence parameter is 0.95, (b) persistence parameter is 0.98.

−0.5

−1.5

−1.5 −1 −0.5 0 0.5 1 1.5−2

−1

0

0.5

1.5

2

1

2

−0.5

−1.5

−1.5 −1 −0.5 0 0.5 1 1.5−2

−1

0

0.5

1.5

2

1

2

40 samples

40 samples

Predicted Mean ResponseDMC Random Samples

Exact Mean ResponseData Points

Exact Mean ResponseData Points

Predicted Mean ResponseDMC Random Samples

(b) persistence parameter is 0.98

(a) persistence parameter is 0.95


the original stochastic physical field on reduced-size stochastic subspaces. Iterative algorithms using stochastic preconditioning can be employed in conjunction with physics-based stochastic ROM models.

The most widely used physics-based stochastic ROMs are those based on stochastic projections in the preconditioned eigen subspace, the Taylor subspace, and the Krylov subspace. These stochastic subspace projections are accurate for linear stochastic systems (functionals, fields) [42]. Nonlinear variations can be included by stochastically preconditioning Newton-type iterative algorithms. Both approximate and exact solvers can be used as preconditioners, in conjunction with either stochastic ROMs or full models. The key ingredients of stochastic ROM implementations are (1) fast-convergent stochastic subspace expansions, (2) stochastic domain decomposition, (3) fast iterative solvers using stochastic preconditioners, and (4) automatic differentiation to compute function derivatives.
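The Krylov-subspace projection idea can be sketched on a toy parametric system. Everything below is a synthetic stand-in (the matrices, sizes, and the parametric form K(θ) = K0 + θK1 are assumptions, not the chapter's systems): preconditioning with the mean matrix K0 clusters the spectrum of the preconditioned operator near unity, so a small Krylov basis built by Arnoldi iteration captures the solution at each stochastic simulation step.

```python
import numpy as np

rng = np.random.default_rng(0)

def arnoldi_basis(A, b, m):
    """Orthonormal basis of span{b, Ab, ..., A^(m-1)b} via modified Gram-Schmidt."""
    Q = np.zeros((b.size, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = A @ Q[:, j - 1]
        for i in range(j):
            w -= (Q[:, i] @ w) * Q[:, i]
        Q[:, j] = w / np.linalg.norm(w)
    return Q

# synthetic parametric system K(theta) = K0 + theta*K1
n = 40
M = rng.standard_normal((n, n))
K0 = M @ M.T + n * np.eye(n)       # mean stiffness, well conditioned
K1 = 0.05 * (M + M.T)              # symmetric random fluctuation pattern
f = rng.standard_normal(n)

# precondition by the mean matrix: (I + theta*K0^-1 K1) u = K0^-1 f
A = np.linalg.solve(K0, K1)
g = np.linalg.solve(K0, f)

theta = 0.7                        # one realization of the random parameter
B = np.eye(n) + theta * A
Q = arnoldi_basis(B, g, m=8)       # 8-vector Krylov basis
u_rom = Q @ np.linalg.solve(Q.T @ B @ Q, Q.T @ g)   # Galerkin-reduced solve
u_full = np.linalg.solve(K0 + theta * K1, f)        # full-order reference
err = np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full)
print("relative error with 8 of 40 basis vectors:", err)
```

At each simulation step only the small reduced system is solved, which is the source of the efficiency gain the chapter describes for the 72-blade example below.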

A useful industry application example of a stochastic ROM based on an eigen subspace projection is the Subset of Nominal Modes (SNM) method developed by Griffin and co-workers [50] for computing the mistuned (mode-localization) responses of jet engine bladed-disks due to random manufacturing deviations. The SNM approach is exact for proportional small variations of the linear dynamic system mass

FIGURE 20.11 Effects of stochastic space dimensionality on the computed probability estimates: (a) coefficient of variation of the Pf estimate vs. the number of dimensions; (b) 90% confidence interval of the Pf estimate (upper and lower bounds, plotted as the 95% fractile of Pf over the mean Pf) vs. the number of dimensions N. Both panels are for the integration of the standard normal JPDF between (0,1) in N dimensions, with 1000 simulations per Pf point estimate and 100 point estimates per given dimension.


and stiffness matrices. Ghiocel [36] suggested combining the SNM approach with the KL expansion for simulating mistuned responses more realistically by including the effects of nonproportional blade variations due to manufacturing. The main idea is to use the KL expansion to reduce the stochastic input dimensionality and then use SNM iteratively to reduce the stochastic dynamic system dimensionality.
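The KL input-reduction step can be sketched as follows. The covariance model, the correlation length, and the 95% variance cutoff are illustrative assumptions (only the 72-blade count comes from the example below): eigendecompose the covariance of the blade-to-blade deviation field, keep the dominant modes, and drive them with a reduced set of independent standard normal variables.

```python
import numpy as np

rng = np.random.default_rng(1)

# assumed covariance of a blade-to-blade manufacturing deviation field:
# exponential correlation over the circular blade positions of a 72-blade disk
n_blades, corr_len, sigma = 72, 8.0, 1.0
k = np.arange(n_blades)
dist = np.abs(k[:, None] - k[None, :])
dist = np.minimum(dist, n_blades - dist)          # periodic (disk) distance
C = sigma**2 * np.exp(-dist / corr_len)

# Karhunen-Loeve decomposition: keep modes carrying 95% of the variance
lam, phi = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]                     # descending eigenvalues
lam, phi = np.clip(lam[order], 0.0, None), phi[:, order]
m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1
print(f"{m} KL terms retain 95% of the variance across {n_blades} blades")

# one simulated deviation pattern from only m standard normal variables
xi = rng.standard_normal(m)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)
```

The reduced vector xi of m variables, rather than the full 72 correlated blade deviations, is what would then feed the iterative SNM solve.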

The stochastic field projections in the Taylor and Krylov subspaces can be very efficient for both static and dynamic stochastic applications. Figure 20.13 illustrates the application of the Krylov subspace expansion for computing the vibratory mistuned response of a 72-blade compressor engine blisk system. The system stochasticity is produced by the random variations in airfoil geometry due to manufacturing. Figure 20.13 shows the blade-tip stochastic responses in the frequency domain (transfer functions) for a given engine-order excitation [34, 51]. The stochastic Krylov expansion requires only 14 basis vectors to converge. This means that only a reduced-size system of 14 differential equations must be solved at each stochastic simulation step.

The application of physics-based stochastic ROMs to stochastic simulation is limited to stochastic fields that can be described by known linear partial differential equations. The application of these stochastic ROMs to computational stochastic mechanics problems is straightforward, as presented by Nair [42]. However, for stochastic fields that have an unknown functional structure, the physics-based stochastic ROMs are not directly applicable.

20.5.5 Incorporating Modeling Uncertainties

Engineering stochastic simulation procedures should be capable of computing confidence or variation bounds of probability estimates. The variation of probability estimates is generated by the presence of epistemic or modeling uncertainties due to (1) a lack of a sufficient collection of data (small-sample-size issue); (2) a nonrepresentative collection of statistical data with respect to the entire statistical population characteristics or stochastic system physical behavior (nonrepresentative-data issue); (3) a lack of fit of the stochastic model with respect to a given statistical dataset, i.e., a bias is typically introduced due to smoothing (model statistical-fitting issue); and (4) a lack of accuracy of the prediction model with respect to the real system physical behavior for given input data points, i.e., a bias is introduced at each predicted data point due to prediction inaccuracy (model lack-of-accuracy issue).
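For the small-sample-size category, one common way to quantify the resulting variation of a probability estimate is a percentile bootstrap over the available 0/1 outcomes. This is a generic statistical sketch, not the chapter's specific procedure, and the function name and numbers are illustrative:

```python
import random
import statistics

def bootstrap_pf_interval(failures, n_boot=2000, conf=0.90, seed=7):
    """Percentile-bootstrap interval for a failure probability estimated
    from a small set of 0/1 outcomes, one way to quantify the
    small-sample-size (epistemic) part of the uncertainty."""
    rng = random.Random(seed)
    n = len(failures)
    # resample the outcomes with replacement and re-estimate Pf each time
    reps = sorted(statistics.fmean(rng.choices(failures, k=n))
                  for _ in range(n_boot))
    lo = reps[int((1 - conf) / 2 * n_boot)]
    hi = reps[int((1 + conf) / 2 * n_boot) - 1]
    return statistics.fmean(failures), (lo, hi)

# e.g. 6 failures observed in 200 simulations
data = [1] * 6 + [0] * 194
pf, (lo, hi) = bootstrap_pf_interval(data)
print(f"Pf = {pf:.3f}, 90% bootstrap interval = ({lo:.3f}, {hi:.3f})")
```

The width of the interval, not the point estimate alone, is what a variation-bound report should carry forward.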

FIGURE 20.12 Space dimensionality effects on the accuracy of the computed probability estimates.

[Figure 20.12 panels: scatter of the estimate P̂f around Pf,Exact vs. the number of samples n, shown for a low dimension N1 (high precision) and a higher dimension N2 (low precision); sample sizes n1 and n2 are marked on each plot.]


The first three modeling uncertainty categories are all associated with statistical modeling aspects. The fourth modeling uncertainty category is different: it addresses the prediction uncertainty due to the accuracy limitations of our computational models. In addition to these four uncertainty categories, which contain information that is objective in nature, there is also an uncertainty category related to the subjective information coming from engineering experience and judgment. Engineering judgment is an extremely useful source of information for situations where only scarce data or no data are available.

The above categories of uncertainties are described in detail in other chapters of this book. The purpose of mentioning them here is to highlight the necessity of including them in stochastic simulation algorithms. Unfortunately, this also increases the overall dimensionality of the stochastic simulation problem. Strategies to reduce the overall stochastic space dimensionality are needed [52].

20.6 Summary

Stochastic simulation is a powerful engineering computational tool that allows accurate predictions for complex nondeterministic problems. The most important aspect when employing simulation is to ensure that the underlying stochastic model is compatible with the physics of the problem. By oversimplifying stochastic modeling, one can significantly violate the physics behind stochastic variabilities.

FIGURE 20.13 Convergence of the stochastic Krylov subspace expansion for the random mistuned response of the 72-blade compressor engine blisk; the solid line is the full-model solution and the dashed line is the reduced-model solution.

[Figure 20.13 panels: stochastic approximation convergence for the 1st blade mode family response of the 72-blade compressor model; blade-tip transfer functions over the 900 to 1000 frequency range, comparing the exact solution with 4-term, 10-term, and 14-term (converged) expansions.]


Because stochastic simulation is an almost endless subject, the focus of this chapter is on the conceptual understanding of these techniques, not on implementation details. This chapter reviews only the stochastic simulation techniques that, in the author's opinion, are the most significant for engineering design applications.

The chapter describes various stochastic simulation models, from simple one-dimensional random variable models to complex stochastic network-based models. Significant attention is given to the decomposition of stochastic fields (or functionals) in constituent basis functions. Advances in stochastic functional analysis for developing efficient simulation techniques based on stochastic reduced-order models are included.

Another topic of special attention is stochastic simulation in high-dimensional spaces. The chapter reviews recent developments based on sequential importance sampling and dynamic simulation via a Hamiltonian formulation. Other key topics that are only briefly discussed include the computation of tail probabilities and the incorporation of epistemic or modeling uncertainties in simulation.

References

1. Hammersley, J.M. and Handscomb, D.C., Monte Carlo Methods, Methuen and Co. Ltd., London, 1964.

2. Knuth, D.E., The Art of Computer Programming, Addison-Wesley, Reading, MA, 1969.

3. Vaduva, I., Simulation Models for Computer Applications, The Series Mathematical Basis of Operational Research, Technical Publishing House, Bucharest, 1977.

4. Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P., Numerical Recipes in FORTRAN: The Art of Scientific Computing, Cambridge University Press, New York, 1992.

5. Law, A.M. and Kelton, W.D., Simulation Modeling and Analysis, McGraw-Hill, New York, 1991.

6. Gamerman, D., Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference, Texts in Statistical Science, Chapman & Hall, New York, 1997.

7. Rubin, D.B., Using the SIR Algorithm to Simulate Posterior Distributions (with discussion), in Bayesian Statistics, Vol. 3, Oxford University Press, Oxford, England, 395–402, 1987.

8. Gilks, W.R. and Wild, P., Adaptive Rejection Sampling for Gibbs Sampling, Applied Statistics, 41, 337–348, 1992.

9. Box, G.E.P. and Muller, M.E., A Note on the Generation of Random Normal Deviates, Annals of Mathematical Statistics, 29, 610–611, 1958.

10. Butcher, J.C., Random Sampling from the Normal Distribution, Computing Journal, 3, 1961.

11. Grigoriu, M., Applied Non-Gaussian Processes, Prentice Hall, Englewood Cliffs, NJ, 1995.

12. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., and Teller, E., Equations of State Calculations by Fast Computing Machines, Journal of Chemical Physics, 21(6), 1087–1091, 1953.

13. Geman, S. and Geman, D., Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721–741, 1984.

14. Ghiocel, D.M., Stochastic Field Models for Advanced Engineering Applications, 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, AIAA/ASCE/ASME Non-Deterministic Approaches Forum, Seattle, WA, April 16–19, 2001.

15. Ghanem, R.G. and Spanos, P.D., Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, Heidelberg, 1991.

16. Wiener, N., The Homogeneous Chaos, American Journal of Mathematics, 60, 897–936, 1938.

17. Pearson, K., On Lines and Planes of Closest Fit to Systems of Points in Space, Philosophical Magazine, 2, 559–572, 1901.

18. Everitt, B.S. and Dunn, G., Applied Multivariate Data Analysis, Arnold, Oxford University Press, London, 2001.

19. Grigoriu, M., Stochastic Calculus, Birkhauser, Boston, MA, 2002.

20. Yamazaki, K. and Shinozuka, M., Safety Evaluation of Stochastic Systems by Monte Carlo Simulation, 9th SMiRT Conference, Volume M, Lausanne, August 17–21, 1987.

21. Deodatis, G., Non-Stationary Stochastic Vector Processes: Seismic Ground Motion Applications, Probabilistic Engineering Mechanics, 11, 149–168, 1996.

22. Shinozuka, M. and Deodatis, G., Stochastic Process Models for Earthquake Ground Motion, Probabilistic Engineering Mechanics, 3, 191–204, 1988.

23. Ghiocel, D.M., Uncertainties of Seismic Soil-Structure Interaction Analysis: Significance, Modeling and Examples, invited paper at the US–Japan Workshop on Seismic SSI, organized by the NSF/USGS, Menlo Park, CA, September 22–24, 1998.

24. Ghiocel, D.M. and Ghanem, R.G., Stochastic Finite Element Analysis for Soil-Structure Interaction, Journal of Engineering Mechanics, ASCE, 128(1), January 2002.

25. Shinozuka, M., Stochastic Fields and Their Digital Simulation, in Stochastic Methods of Structural Dynamics, Schueller, G.I. and Shinozuka, M., Eds., Martinus Nijhoff, Dordrecht, 1987, 93–133.

26. Ghiocel, D.M. and Trandafir, R., Application of Space-Time Stochastic Processes in Load Modeling for Structural Design of Special Facilities, 6th National Symposium on Informatics in Civil Engineering, Timisoara, June 1988 (in Romanian).

27. Ghiocel, D.M. and Ghiocel, D., Structural Behavior of Large Cooling Towers on Wind Load, Scientific Bulletin of Civil Engineering, Vol. 1, ICB, Bucharest, 1988 (in Romanian).

28. Ghiocel, D.M. and Ghiocel, D., Stochastic Modeling and Simulation of the Wind Fluctuating Pressure Field on Large-Diameter Cooling Tower Structures, ASCE Probabilistic Structural Mechanics Conference, Sandia National Laboratories, Albuquerque, NM, July 24–26, 2004 (submitted).

29. Romanowski, M., Reduced Order Unsteady Aerodynamic and Aeroelastic Models Using Karhunen-Loeve Eigenmodes, AIAA Paper 96-3981, 1996.

30. Thomas, J.P., Dowell, E.H., and Hall, K.C., Three-Dimensional Transonic Aeroelasticity Using Proper Orthogonal Decomposition-Based Reduced Order Models, Journal of Aircraft, 40(3), 544–551, 2003.

31. Ghiocel, D.M., Refined Stochastic Field Models for Jet Engine Vibration and Fault Diagnostics, Proceedings of the International Gas Turbine & Aeroengine Congress, 2000 TURBO EXPO, Paper 2000-GT-654, Munich, Germany, May 8–11, 2000.

32. Ghiocel, D.M., A New Perspective on Health Management Using Stochastic Fault Diagnostic and Prognostic Models, International Journal of Advanced Manufacturing Systems, Special Issue on "Non-Deterministic Methods for Design and Manufacturing Under Uncertainties," 4(1), October 2001.

33. Hilbert, G. and Blair, A., Accuracy of Forced Response Prediction of Gas Turbine Rotor Blades, 4th National Turbine Engine HCF Conference, Monterey, CA, March 6–9, 1999.

34. Ghiocel, D.M., Mistuning Analysis of Bladed-Disks, in STI BLADE-GT Engineering Research Report for AFRL/PRT at Wright-Patterson Air Force Base, proprietary information, USAF Contract F33615-96-C-2678, Dayton, OH, June 2002.

35. Brown, J. and Grandhi, R.V., Probabilistic Analysis of Geometric Uncertainty Effects on Blade-Alone Forced Response, Proceedings of the 2004 ASME Turbo Expo, IGTI 2004, Vienna, June 14–17, 2004.

36. Ghiocel, D.M., Critical Probabilistic Modeling and Implementation Issues for Geometry Mistuning in Engine Blisks, presentation at the 6th Annual FAA/AF/NASA/Navy Workshop on the Application of Probabilistic Methods to Gas Turbine Engines, Solomons Island, MD, March 18–20, 2003.

37. Blair, A. and Annis, C., Development of Probabilistic Methods for Campbell Diagram Frequency Placement, presentation at the 7th National Turbine Engine High Cycle Fatigue Conference, Monterey, CA, April 16–19, 2003.

38. Ghiocel, D.M., Stochastic Modeling of Airfoil Thickness Spatial Variability due to Manufacturing Process, presented at the 6th National Turbine Engine High Cycle Fatigue Conference, Jacksonville, FL, March 5–8, 2001.

39. Ghiocel, D.M., Stochastic Field Models for Aircraft Jet Engines, Journal of Aerospace Engineering, ASCE, Special Issue on Reliability of Aerospace Structures, 14(4), October 2001.

40. Griffiths, J.A. and Tschopp, J.A., Selected Topics in Probabilistic Gas Turbine Engine Turbomachinery Design, in Engineering Design Reliability Handbook, Nikolaidis, E., Ghiocel, D.M., and Singhal, S., Eds., CRC Press, Boca Raton, FL, 2004.

41. Casenti, B., Probabilistic Blade Design, United Technologies - Pratt and Whitney Final Report, USAF Contract F33615-98-C-2928, June 2003.

42. Nair, P.B., Projection Schemes in Stochastic Finite Element Analysis, in Engineering Design Reliability Handbook, Nikolaidis, E., Ghiocel, D.M., and Singhal, S., Eds., CRC Press, Boca Raton, FL, 2004.

43. Grigoriu, M., Formulation and Solution of Engineering Problems Involving Uncertainty, invited presentation at the biannual SAE Probabilistic Methods Committee Meeting, Detroit, October 7–9, 2003.

44. MacKay, D.J.C., Bayesian Interpolation, Neural Computation, 4(3), 415–447, 1992.

45. Robert, C.P. and Casella, G., Monte Carlo Statistical Methods, Springer-Verlag, Heidelberg, 1999.

46. Liu, J.S., Monte Carlo Strategies in Scientific Computing, Springer Series in Statistics, Springer-Verlag, Heidelberg, 2001.

47. Hanson, K.M., Markov Chain Monte Carlo Posterior Sampling with the Hamiltonian Method, Proceedings SPIE 4322, Medical Imaging: Image Processing, Sonka, M. and Hanson, K.M., Eds., San Diego, CA, 2001, 456–467.

48. Green, P.J., Monte Carlo Methods: An Overview, invited paper, Proceedings of the IMA Conference on Complex Stochastic Systems and Engineering, Leeds, U.K., September 1993.

49. Fismen, M., Exact Simulation Using Markov Chains, M.Sc. thesis, Norwegian Institute of Technology and Science, Trondheim, December 1997.

50. Yang, M.-T. and Griffin, J.H., A Reduced Order Model of Mistuning Using a Subset of Nominal Modes, Journal of Engineering for Gas Turbines and Power, 123(4), 893–900, 2001.

51. Keerti, A., Nikolaidis, E., Ghiocel, D.M., and Kirsch, U., Combined Approximations for Efficient Probabilistic Analysis of Structures, AIAA Journal, 2004.

52. Ghiocel, D.M., Combining Stochastic Networks, Bayesian Inference and Dynamic Monte Carlo for Estimating Variation Bounds of HCF Risk Prediction, presentation at the 7th National Turbine Engine High Cycle Fatigue Conference, Monterey, CA, April 16–19, 2003.
