17
Targeted Testing for Reliability Validation Prabhu Soundappan Efstratios Nikolaidis Balaji Dheenadayalan Mechanical, Industrial and Manufacturing Engineering Department, The University of Toledo, Toledo, OH 43606 ABSTRACT Methods for designing targeted tests for reliability validation of structures obtained from reliability-based design are presented. These methods optimize the test parameters to minimize the variance in the estimated reliability (or equivalently the failure probability) estimated from the tests. The tests are designed using information from analytical models used to design the structure. Both analytical tests, in which very detailed models are used as reference, or physical tests can be designed using the methods presented. The methods are demonstrated on examples and their robustness to errors in the analytical models used to design the tests is assessed. INTRODUCTION MOTIVATION Reliability-based design can help build safer and/or more economical products than deterministic design. Aerospace, and automotive manufacturers have used reliability-based design to improve their products and reduce costs. In reliability-based design we use models for predicting performance and models of uncertainties, which involve approximations. Therefore, it is important to validate the results of reliability-based design. The reliability of a design can be validated analytically using very detailed models as reference or physical testing. The former approach can account for the effect of errors in approximate models but not for the effect of errors in the models of uncertainties. Analytical validation can also be expensive if very detailed models are used. Physical testing can account for the effect of both errors, but it is expensive. Pressure to reduce cost and to build products faster makes it important to develop targeted testing approaches for reducing the cost of tests for reliability validation. Accelerated physical tests can reduce the test time but are applicable only to problems for estimating time varying reliability. They shorten the duration of tests by over stressing designs but they cannot reduce the number of tests required to estimate the reliability. One way to reduce the number of tests is to design them using information from the analytical models employed in reliability-based optimization. This paper presents approaches for designing both physical and analytical tests that use such information to maximize the confidence in the estimated reliability from the tests. PREVIOUS WORK This subsection reviews studies on accelerated testing, Importance Sampling (IS) and Bayesian testing. Accelerated tests allow us to reduce the time required to collect experimental data about the life of a system by testing products under more severe conditions than the operating conditions. In most applications the time to failure or the time over which a performance property of a product degrades are estimated Singpurwalla et al [1], and Lawless [2] survey statistical theory for accelerated testing. Nelson [3] briefly presents the basic concepts of applied statistical models and models of accelerated testing. Nelson also discusses accelerated test models, analysis of the data and development of test plans. Kececioglu [4] discusses several accelerated life models and their practical applications. Extensive research has been done on Monte-Carlo simulation techniques [5-7]. Standard Monte-Carlo simulation is impractical for designs with high reliability (e.g. 
higher than 0.9 6 ) or products that are expensive to analyze. Therefore, variance reduction techniques such as IS, and Stratified Sampling (Law [8], Ayyub [9] and Nelson [10]) have been developed to reduce the cost of simulation. Unlike accelerated testing, which only reduces the test time, these techniques reduce the number of realizations of a design that needs to be tested to estimate reliability. Melchers [11&12] discussed in detail IS approaches. Although IS is effective, it can only be used for computer-based simulation testing. Mease and Nair [13, 14] developed a new testing methodology using IS to optimize the values of parameters that can be controlled in physical tests. Bayesian methods combine subjective information with measurements to estimate the reliability of a design or to construct a model of a random variable. If one has prior

Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

Embed Size (px)

Citation preview

Page 1: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

Targeted Testing for Reliability Validation

Prabhu Soundappan Efstratios Nikolaidis Balaji Dheenadayalan Mechanical, Industrial and Manufacturing Engineering Department, The University of Toledo, Toledo, OH 43606

ABSTRACT Methods for designing targeted tests for reliability validation of structures obtained from reliability-based design are presented. These methods optimize the test parameters to minimize the variance in the estimated reliability (or equivalently the failure probability) estimated from the tests. The tests are designed using information from analytical models used to design the structure. Both analytical tests, in which very detailed models are used as reference, or physical tests can be designed using the methods presented. The methods are demonstrated on examples and their robustness to errors in the analytical models used to design the tests is assessed.

INTRODUCTION MOTIVATION Reliability-based design can help build safer and/or more economical products than deterministic design. Aerospace, and automotive manufacturers have used reliability-based design to improve their products and reduce costs. In reliability-based design we use models for predicting performance and models of uncertainties, which involve approximations. Therefore, it is important to validate the results of reliability-based design.

The reliability of a design can be validated analytically using very detailed models as reference or physical testing. The former approach can account for the effect of errors in approximate models but not for the effect of errors in the models of uncertainties. Analytical validation can also be expensive if very detailed models are used. Physical testing can account for the effect of both errors, but it is expensive. Pressure to reduce cost and to build products faster makes it important to develop targeted testing approaches for reducing the cost of tests for reliability validation.

Accelerated physical tests can reduce the test time but are applicable only to problems for estimating time varying reliability. They shorten the duration of tests by

over stressing designs but they cannot reduce the number of tests required to estimate the reliability.

One way to reduce the number of tests is to design them using information from the analytical models employed in reliability-based optimization. This paper presents approaches for designing both physical and analytical tests that use such information to maximize the confidence in the estimated reliability from the tests.

PREVIOUS WORK

This subsection reviews studies on accelerated testing, Importance Sampling (IS) and Bayesian testing.

Accelerated tests allow us to reduce the time required to collect experimental data about the life of a system by testing products under more severe conditions than the operating conditions. In most applications the time to failure or the time over which a performance property of a product degrades are estimated Singpurwalla et al [1], and Lawless [2] survey statistical theory for accelerated testing. Nelson [3] briefly presents the basic concepts of applied statistical models and models of accelerated testing. Nelson also discusses accelerated test models, analysis of the data and development of test plans. Kececioglu [4] discusses several accelerated life models and their practical applications.

Extensive research has been done on Monte-Carlo simulation techniques [5-7]. Standard Monte-Carlo simulation is impractical for designs with high reliability (e.g. higher than 0.96) or products that are expensive to analyze. Therefore, variance reduction techniques such as IS, and Stratified Sampling (Law [8], Ayyub [9] and Nelson [10]) have been developed to reduce the cost of simulation. Unlike accelerated testing, which only reduces the test time, these techniques reduce the number of realizations of a design that needs to be tested to estimate reliability. Melchers [11&12] discussed in detail IS approaches. Although IS is effective, it can only be used for computer-based simulation testing. Mease and Nair [13, 14] developed a new testing methodology using IS to optimize the values of parameters that can be controlled in physical tests.

Bayesian methods combine subjective information with measurements to estimate the reliability of a design or to construct a model of a random variable. If one has prior

Page 2: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

information from experts of from an analytical model, one can obtain a more accurate estimate of reliability using Bayesian methods than standard statistics. Morgan [16] describes the basic ideas and philosophy of Bayesian inference. Berger [17] explains the reasons for using Bayesian methods in reliability and risk analysis. Winkler [18] presents procedures for applying Bayesian methods to a wide range of commonly used statistical methods, including analysis of variance and regression analysis. Martz and Waller [19] apply Bayesian methods to numerous applications in reliability and risk assessment. Bayesian methods have also been used for reliability validation [20, 21]. Martz and Waterman [22] presented an approach for determining the optimal stress for testing a single test unit. This approach requires expressing errors in an analytical model using one or more parameters, and a prior probability density function (PDF) of these parameters.

OBJECTIVES The objective of this paper is to present methodologies for designing efficient targeted analytical or physical tests to validate the reliability of a structure obtained using reliability-based design. For this purpose, information from the analytical models of the structure and the uncertainties is used to optimize selected parameters of a test that can be easily observed and controlled. Only time invariant reliability problems are considered.

METHODS FOR TARGETED PHYSICAL AND ANALYTICAL TESTS IN THIS PAPER Four approaches for validating the reliability of systems through targeted testing are presented; two IS approaches, a Bayesian approach and an approach for testing components of a series system. All approaches minimize the variance of the PDF of the estimated probability of failure, but they can also be readily modified to minimize the Shannon entropy of the same PDF. The first three approaches target a location in the space of the random variables that is most likely to cause failure when generating sample values of the test variables. For example they select values of the load parameters that are both likely to occur and cause failure. Targeted testing for components of series systems performs more tests on components that are more likely to fail and can be tested at low cost.

IS is a variance reduction technique used to assess the reliability of structural systems through simulation. This approach does not sample from the true joint PDF of the random variables; instead it samples from a joint PDF that produces many failures, which are the most likely to occur. Sampling from this PDF reduces the number of samples required to validate a system/ component by several orders of magnitude.

IS physical testing approaches maximize the information obtained about failure probabilities in fewer tests than

the tests required by a standard test approach that tests random realizations of a design under actual operating conditions. These approaches use the idea of IS to physically test a biased set of samples to validate the probability of failure.

IS can validate the reliability of a design using a detailed (reference) model, which is too expensive for design. The IS approach finds an optimum sampling PDF of the random variables to maximize the confidence in the estimate of the reliability of the design for a given number of simulations.

Bayesian testing uses Bayes’ rule to update the PDFs of the parameters that represent errors of a model according to the results of tests. The targeted Bayesian testing approach optimizes the values of those test parameters that we can observe and control (e.g. the load and the dimensions of a design).

Variance reduction techniques can also be used to validate system reliability when estimates of component reliabilities are available from analysis. The proposed testing methodology targets those components that have high failure probability and can be tested at a low cost. The number of tests on each component is optimized to minimize the variance in the failure probability estimated from the test

OUTLINE

First, the IS approach for analytical estimation of the reliability of a design is reviewed. Then the approach for validating reliability analytically using a very detailed model is presented. Then variance reduction techniques for targeted physical testing are presented. These include two IS physical testing approaches, namely Mease and Nair’s approach and an optimization approach. Then the targeted Bayesian approach is presented. Finally, the methodology for validating the reliability of a series system is presented.

IMPORTANCE SAMPLING Monte Carlo simulation is a widely used tool for evaluating the safety of systems analytically by testing many realizations of these systems on the computer. This simulation technique can also be used for both virtual (analytical) and physical testing to estimate the reliability of systems.

Consider a system with n variables, whose performance function is G(X1, X2,…,Xn). G(X) is defined in a way that it is negative when the system fails, and greater than zero when the system survives. The equation for estimating the probability of failure through Monte-Carlo simulation is

( )( )∑=

≤=≈N

jjf GI

NPP

11 01 x (1)

Page 3: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

where jx is defines the jth realization of the system in the sample tested and I is an indicator function, which is 1 when G(X) < 0, and 0 when G(X) > 0. The estimated failure probability is considered random, in the sense that it varies from sample to sample. The variance of the estimated probability of failure is an indicator of the quality of this estimate. The lower the variance, the higher is the quality of the estimate. The variance of the estimate in Eq. (1) is

( )[ ]( )[ ]NGI

GP

202

01

≤= X

X

σσ (2)

The true variance of the indicator function, ( )[ ]0≤XGI , is given by

( )[ ] ( )[ ] ( ) 2220 0 trueGI PFdfGI −⋅≤= ∫ ∫≤ xxX XX Lσ (3)

Equation (2) indicates that the variance of the estimator of the failure probability reduces with the sample size increasing. Unfortunately, it is expensive to test physically a large sample of systems. Another way to improve the quality of the estimate of the failure probability is to reduce the variance in equation (2). IS techniques sample from a PDF, hX(x), which is selected in a way that the variance in the estimate of the probability of failure is reduced compared to the one from equations (2) and (3), instead of the true PDF of the random variables, fX(x) (Figs 1 and 2). hX(x) is called IS or reference sampling PDF.

Failure domain

G(X) < 0Safe domain

G(X) > 0

Limit State Surface G(X) = 0

X1

X2

Sampling domain

Failure domain

G(X) < 0Safe domain

G(X) > 0

Limit State Surface G(X) = 0

X1

X2

Sampling domain Figure 1: Monte Carlo Simulation Sampling Domain

Failure domain

G(X) < 0Safe domain

G(X) > 0

Limit State Surface G(X) = 0

X1

X2

Sampling domain

Failure domain

G(X) < 0Safe domain

G(X) > 0

Limit State Surface G(X) = 0

X1

X2

Sampling domain Figure 2: Sampling Domain in Importance Sampling

Technique An unbiased estimate of the probability of failure when using, hX(x), is given by

( )[ ] ( )( ) ( ) xxxxX X

X

X dhhfGIPPf ⋅⋅≤=≈ ∫ ∫ 02 K (4)

The variance of this estimator can be derived from equation (2)

( )( )[ ] ( )

( ) ( )

N

PFdhh

fXGI

NhIfVar

Ntrue

P

22

2

0

2

−⋅

⋅≤

=

=∫ ∫ xx

xx

XX

XL

σ

(5)

The variance of the estimate in equation (5) can be made 0 by selecting the reference density as

( ) [ ] ( )[ ] ( ) xx

xx

X

XX dfXGI

fXGIh

∫ ∫ ⋅≤⋅≤

=0)(0)(

(6)

The PDF in equation (6) is the true PDF of the variables truncated in the survival domain and normalized by the probability of failure of the system. If this reference density is found then the variance of the estimate will be zero. Indeed, upon substitution into (5) we get

( )[ ] 0)())((0 2

22=−

⋅≤=

∫ ∫ truePFdhfGI

hIfVar x

xxX

X

XL

(7)

Equation (6) can also be written as

( ) ( )[ ] ( )truePF

fGIh

xXx X

X⋅≤

=0

(8)

Figure 3 shows the true PDF of variables in the joint space and the joint PDF of the variables truncated in the

Page 4: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

survival region. The equations for the reference PDF can be found in Melchers [11, 12].

X Y

Original Sampling PDF Normalized Optimal Sampling PDF

X Y

Original Sampling PDF Normalized Optimal Sampling PDF

Figure 3: Joint Sampling PDF’s in a 2 Variable Space

One challenge in using IS is that we need to determine the failure probability in (8). The true probability of failure can be surrogated with an approximation to find a near optimal reference density. This approach will be used in the next section to determine an near optimal sampling function for validating the estimate of the failure probability obtained from an approximate model by estimating the same probability from a reference detailed model.

Unfortunately, the optimal PDF cannot be used in physical testing because it is impractical to try to observe and control all random variables. For example, variables, such as Young’s modulus or yield stress cannot be controlled. Thus even if we determine that the values of these variables in an optimal sample that needs to be tested, it is too expensive to construct such a sample. In physical testing, it is possible to observe and control only a limited number of design variables, such as the load and selected dimensions. In the following sections, we present two methods for creating an optimal sample for the observable variables.

VALIDATION USING VERY DETAILED MODELS It can be shown that using the optimum sampling PDF we can estimate the true failure probability of the system exactly using only one sample value [11]. However, this PDF has no direct practical use because to determine it we need to calculate the failure probability of the system (see the denominator in equation 9). Therefore, this PDF is never used in Monte Carlo simulation. However, if we can approximately determine this optimum sampling PDF then we can reduce substantially the size of the sample needed to estimate accurately the failure probability.

IMPORTANCE SAMPLING METHODOLOGY We now present the approach of validating the reliability of design using a very detailed model and using IS.

First, the optimal sampling PDF is estimated. In doing so, we use the performance of the approximate model in (6), instead of the performance function of the detailed model, because the former function is more economical to compute. Then a sample is drawn from the optimal PDF and is used to estimate the probability of failure of the detailed model.

First, we generate a large sample of random numbers from the true PDF of the random variables, and screen out those sample values for which the approximate model survives. Thus we obtain a reduced sample that contains only the values that cause the approximate model to fail. The reduced sample has size approximately equal to the size of the original sample times the failure probability of the approximate model. Thus, the reduced sample is usually several orders of magnitude smaller than the original. For example, if the original sample has size 106 and the probability of failure is 10-3, then the reduced sample has only 103 values. This reduced sample is used to estimate the failure probability of the detailed model.

The optimal sampling density function might not accurately represent the optimal density of the detailed model because it is based on the approximate model. We may also screen out sample values from the original sample that cause the detailed model to fail but not the approximate model, which will result in underestimating the probability of failure. To avoid this we inflate the failure region of the approximate model, and estimate the failure probability of the detailed model using the reduced sample obtained from the approximate model with the inflated failure region. If we repeat this process inflating the failure region to different degrees, we can determine the probability of failure of the detailed model through a convergence study.

Let C be a parameter used to inflate the failure region of the approximate model. For example, the failure region is inflated by including parameter C in the indicator function.

( )[ ]CGI japprox ≤x (9)

where Gapprox (X) is the performance function of the approximate model.

Then, the estimated failure probability is;

( )[ ] ( )( )∑

=⋅≤⋅≈

N

j j

jjf Ch

fGI

NCP

approxail

1 ,01)(

det xx

xX

X (10)

where ( )Ch japprox,xX is the optimum reference density

function from the approximate model, given the value of C

( ) ( )[ ] ( )( )CP

fCGICh

approxapprox

f

jjapproxj

xxx X

X⋅≤

=, (11)

Page 5: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

where ( )CPapproxf is the probability of failure of the

approximate model, given value of C

Substituting Equation (11) in (10) we get

( )[ ] ( )∑=

⋅≤⋅≈N

jfjf CPGI

NCP

approxail1

01)(det

x (12)

Simplifying Equation (12)

approxail ff PNMP ⋅=

det (13)

where M is the number of failures of the detailed model and N is the size of the reduced sample.

VALIDATING THE PROBABILITY OF FAILURE OF A FRAME For illustrating the method of validating the probability of failure, a symmetrical single-story frame consisting of a horizontal beam supported by two vertical beams is considered. The frame has uniform cross section with a uniformly distributed load acting on the horizontal beam of the frame. The vertical beams are fixed at the bottom ends. The frame is made of aluminum and the cross section of the frame is a T-section throughout. Figure 4 shows a CAD model of the frame.

Figure 4: CAD Model of a Simple Portal Frame

The stress at the junction of the beams (hot point) cannot be estimated using a beam model. Detailed finite element analysis is required for the assessment of the stress at the hot point. During the optimization of a frame structure it is expensive to use the detailed finite element model (detailed model) for the analysis. Therefore, the stress at the hot point is approximated using a response surface polynomial (approximate model). The approximation can be validated using a detailed finite element model. However, in probabilistic design one also needs to validate the probability of failure of the

optimum design. Validating the probability of failure using a detailed model and standard Monte-Carlo simulation is too expensive. Therefore, the probability of yielding at the hot point is calculated using the IS methodology described in the previous subsection.

In the probabilistic design methodology the load, the Young’s Modulus, the yield stresses both in the heat affected and the non heat affected zones and the error in approximating the stress using the local model are considered random. In validating failure probability using the detailed model, all the above variables are considered except for the error in the model, which is zero because the detailed model is considered perfectly accurate. The uniformly distributed load and the Young’s modulus are normally distributed and the strength is modeled using the Weibull distribution.

Standard Monte-Carlo simulation is performed to capture the failures in the approximate model for various values of the inflation factor C. Only designs that fail in the approximate model are considered in the IS approach for calculating the probability of failure of the detailed model. The procedure for calculating the probability of failure is repeated for various values of C till the converged failure probability of the detailed model is found.

Table 1 shows the results of the IS approach for various values of C. As the failure region is inflated in the space of the random variables by changing the values of C, the failure probability of the detailed model nearly converged to the probability of failure of the approximate model. The number of failures converged to 62 and was found not to increase for higher values of C. Fig 5 compares the probability of failure of the approximate model to the probability of failure of the detailed model for various values of C. The probability of failure of the detailed model stabilizes to 6.20E-04 after a certain value of C. We consider this value as the true probability of failure of the local model at the hot point. From the reliability analysis of the reliability based local-global design optimum, it is found that the probability of failure of the local model at the hot point is 7.60E-04. One reason for the overestimation of the failure probability of the hot point by the approximate model can be the use of a uniform probability density function with large standard deviation for the approximation error.

Page 6: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

S.No C

Number of failures of the approx

model *

Number of failures in the detailed

model

PF of the detailed model

Standard Deviation

True PF of the

Approx model

1 -0.1 42 34 3.40E-04 4.39E-04 7.60E-04

2 -0.05 66 45 4.50E-04 3.21E-04 7.60E-04

3 0 92 47 4.70E-04 2.36E-04 7.60E-04

4 0.025 120 53 5.30E-04 1.92E-04 7.60E-04

5 0.05 144 57 5.70E-04 1.66E-04 7.60E-04

6 0.075 175 58 5.80E-04 1.38E-04 7.60E-04

7 0.1 221 58 5.80E-04 1.09E-04 7.60E-04

8 0.15 339 62 6.20E-04 7.34E-05 7.60E-04

9 0.2 534 62 6.20E-04 4.66E-05 7.60E-04

* It is also equal to the number of simulations in the detailed model Table 1 Results of IS Approach

Fig 5: Convergence of the probability of failure of the detailed model

VALIDATION USING PHYSICAL TESTING VARIANCE REDUCTION TECHNIQUES FOR OPTIMUM DESIGN OF TESTS Mease and Nair approach Mease and Nair extended IS to derive the optimal reference density function by optimizing the PDF of those variables that can be controlled in physical tests. This approach can be used for reliability validation/estimation through physical testing.

Let jcon XX ,...,1=X be the set of controllable variables

and njun XX ,...,1+=X the set of uncontrollable variables in a test. An unbiased estimate (mean value of estimate equal to the true failure probability) of the probability of failure is:

( )[ ] ( )( )∑

=≤=

Ni con

conunconf

con

con

hf

GIN

p,...,1

0,1xx

XXX

X (14)

The variance of this estimate is:

( )[ ] ( )( )

( ) ( )Np

ddfhh

fGI

Nf

unconunconcon

conuncon uncon

con

con

2

2

2

0,1 −≤∫∫ xxxxx

xxx XX

X

X

(15)

or

( )[ ] ( ){ } ( )( ) N

pd

hf

dfGIN

fcon

con

conunununcon

con

conun

22

0,1 −≤∫ ∫ xxx

xxxxX

XX

(16)

which is equal to:

( ) ( )( ) N

pd

hf

pN

fcon

con

conconf

con

con

221 −∫ x

xx

xX

X

(17)

In the above equation, pf (xcon) is the conditional failure probability given that the controllable variables are equal to x1,…,xj;

( ) ( )[ ] ( ) ununconconf dfGIpun

xxxx X∫ ≤= 0 (18) It can be readily shown that the estimate of the probability of failure still converges to the true probability of failure, when the reference PDF is used instead of the true PDF [13, 14]. That means that, as we increase the number of tests to infinity, the estimate of the failure probability converges to the true value, even though a biased sampling function for the test parameters is used.

Mease and Nair [13, 14] showed that the variance of the estimator of the failure probability becomes minimum when the reference density of the controllable variables is proportional to the product of the square root of the conditional probability of failure ( )( )confp x and joint PDF of the controllable variables. Specifically, the optimal reference PDF of the controllable variables ( jcon XX ,...,1=X ) is

( ) ( ) ( )confconcon pfhconcon

xxx XX ⋅∝ (19) This reference density should minimize the estimator in equation (5). Mease and Nair's PDF reduces to the optimal sampling PDF in eq. (8) when all the design variables in a problem can be controlled. The optimal reference density function in equation (19) can be derived as follows [13, 14].

The problem is to show that the optimal PDF minimizes the variance estimate in equation (15). Since the second term in equation (15) for the variance is constant, it is sufficient to show that by using the above optimal sampling PDF, the first term on the right hand side of equation (15) (that is the mean square of the estimator) becomes equal to its lower bound.

From Schwarz’s inequality for any two random variables W and Z we have

3.00E-04

4.00E-04

5.00E-04

6.00E-04

7.00E-04

8.00E-04

-0.1 -0.05 0 0.025 0.05 0.075 0.1 0.15 0.2

Value of C

PF o

f the

Det

aile

d M

odel

Detailed Model

Approximate Model (True PF)

Page 7: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

[ ] [ ] [ ]2222 EEE WZWZ ≥⋅ (20)

Let )(f xX be the PDF of X, and ( ) ( )( )x

xx

V

X

hpf

Z f⋅=

and ( )( )xx

X

VfhW =

Substituting the values of Z and W in equation (20)

( ) ( )( ) ( ) ( )

( ) ( ) ( ) ( )∫∫∫ ⋅≥⋅⋅⋅⋅

xxxxxxxxx

xxx

XXX

XX

X

X dfpdffhdf

hpf

ff

(21)

But the variance of a variable is greater or equal to the square of its standard deviation:

( ) ( ) ( ) ( )( ) ( ) ( )∫∫∫ ⋅=⋅≥⋅ xxxxxxxxx XXX dfpdfpdfp fff2

(22)

Therefore, simplifying equation (22)

( ) ( )( ) ( ) ( )[ ] 2

2

∫ ⋅≥⋅

∫ xxxxx

xxX

X

X dfpdhpf

ff (23)

Note that the left hand side is the mean square of the estimate of the failure probability in the general case where an arbitrary sampling PDF is used for the controllable variables (equation 15). According to

equation (23), ( ) ( )[ ]N

dfp conconconf12

∫ ⋅ xxx X is a

lower bound of the mean square of the probability of failure. The mean square can only become equal to the lower bound if, )( conconh xX , is selected as

( )( ) ( )( ) ( )∫ ⋅

⋅=

confcon

confconconcon pf

pfh

con

con xx

xxx

X

XX (24)

If this reference density is used for sampling the controllable variables, the variance of the estimator in equation (15) will be minimum. This PDF is identical with Mease and Nair's optimal PDF in equation (19). Q.E.D.

Optimization approach The optimization approach is also an IS approach. The objective of the optimization approach is also to minimize the variance of the estimate of the probability of failure in Eq. (7). This approach assumes a certain form of the sampling PDF, which is defined in terms of a number of parameters, such as mean values, and standard deviations. These parameters are optimized to

minimize the variance of the estimator of the failure probability or equivalently the variance of indicator function, I.

( )[ ] ( )( ) ( ) ( ) 2

22 0

trueunconcon

conI PFdfh

hfXGI

unconcon

con −⋅⋅

⋅≤

= ∫ ∫ xxxx

xXX

X

XLσ

(25) where ( )concon

h xX is the joint multivariate distribution of the controllable variables.

The following optimization problem is solved to find the optimal reference density:

Find the parameters of ( )conconh xX

To minimize 2Iσ (26)

This multivariate distribution will reduce the variance of the estimated probability of failure. Once the parameters of the multivariate distribution, ( )concon

h xX , are found, a sample can be drawn from this optimal density to design the test. The variance of the estimate of this approach will be greater than or equal to the variance from Mease and Nair approach. In the next subsection, we compare the two methodologies on a case study. The robustness of these IS based approaches are also examined.

Case study demonstrating the physical testing approaches Consider a structural element (Fig. 6) with random strength, Su, and stress, S. The element fails when the stress exceeds the strength and this is represented by the performance function G(Su, S) = Su – S. The random variables are assumed to be independent with PDFs,

( )uS sf u and ( )sfS .

Load Load Stress S

Load Load Load Load Stress S

Figure 6: Structural Element The probability of failure is

( )( ) ( ) ( )dssfsFSSGPJp SSuuf ⋅=<== ∫∞

∞−0,

(27)

or

( )( ) ( ) ( ) ( )∫ ∫∞

∞−

∞−⋅⋅=<= dsdssfsfssISSGPp uSuSuuuf ,0,

(28)

Page 8: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

where ( )ssifssif

ssIu

uu ≥

<⟨=01

,

The variance of indicator function, I is:

( )[ ] ( )[ ] ( ) ( ) 2200 JdsdssfsfGIGIVar uSuSu −⋅⋅<=< ∫ ∫

(29)

In the IS approach, we try to reduce the estimator in Eq. (29) by sampling from a different density function (reference density) rather than from the original PDF. In the tests, we can only control the applied stress. In our analytical models, we have estimated the PDF’s of the strength and the stress. We will use these estimated PDF’s to determine the optimal sampling PDF for the applied stress.

If the optimal reference density for the stress, hS(s), is used to estimate the probability of failure, the variance of the indicator function is:

( )[ ] ( ) ( )( ) ( ) ( ) 2

2

,0 JdsdsshsfshsfssIGIVar uSuSu

S

Su −⋅⋅

⋅=< ∫ ∫

(30)

The reference density in Eq. (15) for the stress-strength model is

( ) ( ) ( )spsfsh fSS ⋅∝ (31)

where pf (s) is the conditional failure probability given that the stress is equal to s.

In the optimization approach we assume the optimal density of the stress to be a multivariate normal density function, hX(x). Then the optimal density of the controllable variables is

( )( ) ( )

( ) C

Ch

concon

con

concon

con⋅⋅

−⋅⋅′−⋅−

=

π2

µµ21exp 1

xx

X

xxx

(32)

where C is the covariance matrix and conXµ is the vector

of the mean values of the random variables.

=2

1

212

211

jconjconjcon

conconcon

XXX,j

XX,X

σσσρ

σσρσ

L

MOM

L

C

(33)

Therefore, our optimization problem formulation will be

FindconXµ ,

conXσ , jjjj ,1,23,2,12,1 ,...,...,,,..., −ρρρρρ

To minimize 2Iσ (34)

The above formulation yields the means, standard deviations and correlation coefficients of the stress.

Once the parameters of the joint reference PDF are found samples can be generated from the joint reference PDF to perform the reliability validation.

First we consider a case where the true PDFs of the random variables are known. We assume the reference density of the stress to be normally distributed with unknown mean and standard deviation. The following optimization problem is solved to find these parameters

Find hh andσµ

To minimize ( )[ ]0<GIVar (35)

where we assume that ( ) ( )hhS Nsh σµ ,~=

In this example, the true PDFs of the stress and the strength are known to be normal, independent with mean values 15 and 20 and standard deviations 2 and 3, respectively. The results from the two IS approaches and the standard testing approach, which tests random beams under random loads, are listed in table 2.

It is observed that the two IS approaches reduce the variance of the indicator function to less than two thirds of the variance from standard testing, which tests random samples of the test parameters.

To study the robustness of the IS approaches we consider three cases in which we do not know the true PDFs of the stress, strength and the performance function. In all cases, the same analytical model for the PDF of the stress and the performance function as in the case where we know the true PDFs of the variables are used to derive the reference PDF, but these quantities are in error.

Case 1: The true PDF of the strength is, Su ~ N [19.5, 2.75], where N [19.5, 2.75] is a normal PDF with mean 19.5 and standard deviation, 2.75. The stress has same PDF as the in the case where we know the true PDFs of these variables.

Case 2: The true PDF of the stress is, S ~ N[15.5, 2.2]. The strength has same PDF as the in the case where we know the true PDFs of these variables.

Case 3: The true performance function is,

( ) ESSESSG uu −−=,, (36)

where E~N[1 , 0.3].

Using Monte-Carlo we simulated tests with 10, 100 and 1,000 nominally identical components (table 3). In these tests, the reference PDFs found using the wrong analytical models were used for the test stress.

Table 3 compares the variance of the probability of failure for the case in which the correct analytical model is used to design the sets. The results in table 3 are validated using computer simulation in table 4.

It is observed from table 3 that Mease and Nair's approach and the optimization approach reduce the variance of the indicator function or the variance of the

Page 9: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

estimate of the probability of failure (when one test is performed) by more than 40 %. Since the variance in the estimate of the failure probability is inversely proportional to the number of tests, IS approaches need 40 % less tests to predict the probability of failure with the same

variance as the standard testing approach. The IS approaches still reduced the variance of the indicator function to less than two thirds when there is error in the PDF’s of the strength and the stress or in the analytical model of the performance function.

Table 2: Probability of Failure and Variance for different approaches

Approach/ Assumptions

True probability of

failure

Variance of the Indicator

function

True probability of

failure

Variance of the Indicator

function

True probability of

failure

Variance of the Indicator

function

True probability of

failure

Variance of the Indicator

function

Standard 0.05917 0.055 0.09296 0.084 0.0884 0.08021 0.10674 0.095

Mease and Nair 0.05917 0.033 0.09296 0.059 0.0884 0.057 0.10674 0.064

Optimization 0.05917 0.033 0.09296 0.059 0.0884 0.057 0.10674 0.064

Error in the performance function; True function is

g(Su,S,E) = Su-S-E , E ~ N(1,0.3)

No errorError in PDF of strength

True Strength : Su is N ~ (19.5,2.75)

Error in PDF of stress, True Stress : S is

N ~ (15.5,2.2)

Table 3: Comparison of testing approaches when the analytical model for deriving the reference density is correct (second

column) or it is wrong (last three columns) No of simul-ations

Approach

Variance of PF(n)

Error in PF

PF Variance of PF(n)

Error in PF

PF Variance of PF(n)

Error in PF

PF Variance of PF(n)

Error in PF

PF

Standard 1.60E-02 1.41E-01 2.00E-01 9.30E-02 8.84E-02 1.07E-01

Mease and Nair

8.91E-03 3.97E-02 9.89E-02 1.69E-02 1.22E-01 2.15E-01 3.04E-03 5.70E-02 3.14E-02 1.11E-02 2.03E-02 1.27E-01

Optimization9.65E-03 4.91E-02 1.08E-01 6.85E-03 1.90E-02 7.40E-02 4.69E-03 3.91E-02 4.93E-02 1.74E-02 1.18E-01 2.25E-01

Standard 2.91E-04 2.92E-02 3.00E-02 9.79E-04 1.70E-02 1.10E-01 3.84E-04 4.84E-02 4.00E-02 1.20E-03 3.33E-02 1.40E-01

Mease and Nair

5.04E-04 5.92E-03 5.33E-02 5.71E-04 3.21E-02 6.08E-02 5.20E-04 3.34E-02 5.50E-02 7.69E-04 2.27E-02 8.40E-02

Optimization7.17E-04 1.86E-02 7.78E-02 1.01E-03 2.09E-02 1.14E-01 3.80E-04 4.89E-02 3.95E-02 8.92E-04 7.74E-03 9.90E-02

Standard 4.39E-05 1.32E-02 4.60E-02 8.35E-05 9.60E-04 9.20E-02 7.61E-05 5.40E-03 8.30E-02 9.00E-05 6.74E-03 1.00E-01

Mease and Nair

5.07E-05 5.57E-03 5.36E-02 8.44E-05 1.40E-04 9.31E-02 5.65E-05 2.82E-02 6.02E-02 9.87E-05 4.26E-03 1.11E-01

Optimization4.83E-05 8.31E-03 5.09E-02 8.56E-05 1.53E-03 9.45E-02 5.98E-05 2.45E-02 6.39E-02 9.24E-05 3.79E-03 1.03E-01

No error, True PF = 5.92E-02

Error in PDF of Stress, True PF (with error) =

8.84E-02

Error in PDF of Strength, True PF (with error) =

9.30E-02

10

100

1000

Error in performance function, True PF (with error) =

1.07E-02

Table 4: Comparison of testing approaches for 10, 100 and 1000 simulations

Note: The standard testing method estimated a zero failure probability when 10 simulations were used and there was error in the analytical model (last three columns).

The results in table 4 concur with those in table 3. When the true PDFs of the stress and the strength are known

and the performance function in the analytical model is correct, then both IS approaches (Mease and Nair’s and

Approach True Probability of Failure

Variance of the Indicator function

% Reduction in Variance

Standard 0.05917 0.055

Mease and Nair 0.05917 0.033 40.00%

Optimization 0.05917 0.033 40.00%

Page 10: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

the optimization approach) provide more accurate estimates of the failure probability than the standard testing approach. In particular, for 10 simulations, the estimate of the failure probability from the IS approaches is considerably more accurate than that from the standard approach and has considerably lower variance. The IS approaches did better than the standard approach for 10 simulations, even when there was error is the analytical models of the uncertainties and the performance function. For 100 and 1,000, simulations no method did consistently better when there was error in the analytical models. When there is error in the PDF of the strength used in the analytical model or in the performance function, the estimated failure probability from the tests still converges to the true one. However, when the PDF of the stress is incorrect in the analytical model, the estimates of the failure probability are biased, that is, they do not converge to the true probability of failure.

Figures 7 and 8 show the estimates of the failure probability from the standard testing approach and the IS approaches, as a function of the number of simulations. Figure 7 is for the case in which there is no error in the analytical models, while Fig. 8 is for the case in which there is error in the PDF of the applied stress. It is observed that the results of the two IS approaches converge faster than those of the standard approach. However, the IS approaches provide a biased estimator of the probability of failure when there is error in the PDF of the stress (Fig. 8).

Variance Convergence Graph (S-Su Model)No Error

0

0.02

0.04

0.06

0.08

0.1

0.12

0.14

1 10 100 1000

Number of Simulations

Prob

abili

ty o

f Fai

lure

Optimization Standard Mease and NairTrue Pf

Figure 7: Convergence study (No error in analytical

models)

Variance Convergence Graph (S-Su Model)Error in CAE Model of Stress

0

0.02

0.04

0.06

0.08

0.1

0.12

1 10 100 1000

Number of Simulations

Prob

abilit

y of

Fai

lure

Optimization Standard Mease and NairTrue Pf

Figure 8: Convergence study (Error in PDF of stress)

The estimates of the failure probabilities of Mease and Nair's approach and the optimization approach are biased because a wrong PDF of the stress is used to generate sample values of the stress in the test. On the other hand, when there is error in the assumed PDF of the strength or in the model of the performance function, then the estimate of the failure probability from the tests still converges to the true failure probability. The reason is that the correct PDF of the stress is used in the tests. Generally, the estimates of the failure probability from a test are biased only if the assumed PDF of the controllable variables is wrong. If the assumed PDF of the uncontrollable variables is wrong, IS testing approaches still yield unbiased estimates of the failure probability. Finally, from figures 7and 8, it is observed that the variation in the probability of failure from the standard approach is higher than the IS approaches.

The reduction in the variance from both the IS approaches is comparable. In general, Mease and Nair’s approach reduces the variance in the estimated failure probability more than the optimization approach because the former approach finds the true optimum sampling PDF, while the latter approach finds an approximation. Each of these approaches has a drawback. In Mease and Nair's approach, generating random numbers from the joint optimal reference density function is expensive. This expense will increase with the number of controllable variables. In the optimization approach solving for the parameters of the optimal multivariate distribution is expensive and will be increasingly tedious with more number of variables. Moreover the assumption about the multivariate distribution in the optimization approach will take its toll when the real joint reference cannot be approximated by a joint multivariate density function.

Concluding remarks Two IS based testing techniques used for reliability assessment through testing of structural systems are presented. The approaches are found be efficient. Relatively fewer numbers of tests are needed to

Page 11: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

estimate the probability of failure with same confidence as a standard testing approach. The approaches converge to the true probability of failure faster than the standard testing approach. The rate of convergence depends on many factors such as the number of design variables, number of controllable variables, and the influence of controllable variables on the performance function (G). The approaches become expensive (or even impractical) when the number of controllable design variables (j) is greater than 4 or 5. We can overcome this difficulty with improved numerical techniques.

TARGETED BAYESIAN TESTING Methodology In Bayesian testing, a set of parameters is considered to quantify the uncertainty in an analytical model. These parameters include errors in the deterministic model for predicting the performance of a system and parameters defining the PDFs of the random variables. In the Bayesian approach, these parameters are treated as random variables. Prior PDF’s are constructed for these parameters based on judgment, which are updated according to the evidence from physical tests. Then a PDF of the probability of failure can be derived from the updated PDFs of the model parameters and the functional relation between these parameters and the probability of failure of the system. In the Bayesian approach, the test parameters are optimized to minimize the variance of the updated PDF of the probability of failure.

In the following we will present the Bayesian approach considering a single error parameter θ for the performance function of a system, ( )θ,Xg . Let ( )θΘ′f be the PDF of the error parameter. The posterior PDF of the error parameter, ( )Ef /" θΘ , is:

( ) ( ) ( )kfELEf θθθ

'" // Θ

Θ⋅

= (37)

where the outcome of a test, E, can be a failure (I=1) or a success (I=0).

Therefore, the posterior PDF of the error is:

( ) ( ) ( )ikfiIPiIf θθθ

'" // Θ

Θ⋅=

== (38)

where ( ) error theof PDFprior theis' θΘf , and

( ) θθ =Θ= given test theof outcome theof likelihood theis/iIP

( )( )( )

( )∫ ∫== =≤=>

xxX dfIifXgIifXg

iIP L10,00,

/ θθ

θ (39)

The normalization constant is

( ) ( )∫∞

∞−⋅== θθθ θ dfiIPki

'/ (40)

The probability of failure, PF, is a function of error parameter, θ, because this parameter affects the performance function

( ) ( )( )∫ ∫= ≤

xxX

X dfgPF L

0,θθ (41)

The expected value of the probability of failure is:

( ) ( ) ( )∑ ∫=

∞−Θ =⋅

=⋅=

1,0

" /)(i

iIPdiIfPFPFE θθθ

(42)

The posterior failure PDF of the failure probability is given by:

( ) ( ) ( )pfPF

ddPF

iIfiIpffPF1

"" where)(

// −Θ ==== θ

θθ

θ (43)

After the posterior PDF of the probability of failure is calculated, the variance of the posterior probability of failure, VarPF, can be calculated:

( ) 2

1,0

1

0

"2 )()(/ PFEiIPdpfiIpffpfVari

PFPF −=⋅=⋅= ∑ ∫=

(44)

The following optimization problem is solved to find the optimal test parameters.

Find the values of the controllable parameters in a test, x1, x2,…, xj

To minimize VarPF (45)

One may prefer minimizing Shannon’s entropy instead of the variance of the probability of failure. In this case, the objective function in the formulation (equation 45) is Shannon’s entropy, instead of the variance of the failure probability. Shannon’s entropy measures the dispersion in the estimated failure probability. When a probability density function of a continuous variable is defined in a real interval, then Shannon’s entropy is defined in relative terms using a reference probability density function

∫ ⋅−=∞

∞−dx)x(g

p(x)logp(x)H(X) (46)

where X is the random variable with PDF p(x), and g(x) is the reference density function of X. Shannon’s entropy for the probability of failure is

Page 12: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

( ) ( )( ) dpfpfgpffpffH

PF

PFPF

⋅−= ∫

"1

0

" ln (47)

where ( )pfgPF is assumed to be a uniform PDF between 0 and 1.

The optimal test parameters for the first test can be determined by minimizing either the variance of the failure probability or Shannon’s entropy. Then based on the result from the first test, the error PDF can be updated using Bayes’ rule. The posterior PDF of the error is used as the prior PDF for a 2nd test. Using the same procedure, n tests can be performed until the required confidence in the estimated probability of failure is achieved. Alternatively, one may perform a fixed number of tests he/she can afford.

Numerical Example A structural element is subjected to one load that results in stress S. The resistance, SU, and load, L, are random. The structural element fails if its resistance, SU, is less than the stress S (L/A). The cross sectional area, A, is deterministic. We assume there is an error, Θ, in the uncertainty model of the stress, whose PDF is fθ (θ). The true stress is equal to the stress from the uncertainty model minus the error. We have estimates of the PDF’s of the strength, the stress and the error. The objective is to find the optimum load level in a test to estimate the probability of failure with maximum confidence regardless of the outcome of the test.

The analytical model of the performance function is:

( ) θθ +−=ALSLSg UU ,, (48)

The posterior PDF of the error is:

( ) ( ) ( )ikfiIPiIf θθθ θ

θ

'" // ⋅=

== (49)

where the likelihood of a failure or a success in a test are

( )

−=

−≤== θθθ

ALF

ALSPIP

USU/1 and (50)

( )

−−=

−>== θθθ

ALF

ALSPIP

USU 1/0 (51)

The normalizing constant k is

( ) 1if' =⋅

−= ∫

∞−Idf

ALFk

US θθθ θ (52)

and ( ) 0if1 ' =⋅

−−= ∫

∞−Idf

ALFk

US θθθ θ (53)

The probability of failure as a function of error is

( ) ( )dllfALFPF LSU∫

∞−⋅

−= 'θθ (54)

The variance of the probability of failure is given by

( ) ( )( ) ( ) ( )∑ ∫=

∞−=⋅

=⋅=−=

1,0

"2 //i

PF iIPdiIfiIPFEPFVar θθθ θ

(55)

The optimum test load in a test is found solving the optimization problem (equation 45). We assume numerical values for the above problem to assess the effectiveness of the targeted testing approach. Following are the PDFs of the strength, stress and the prior PDF of the error:

( ) [ ]5.2,20~Nsf uSU , ( ) [ ]5.2,15~NsfS , ( ) [ ]5.0,0~Nf θθ

The results of two consecutive tests are shown in figures 11 and 12.

Page 13: Targeted Testing for Reliability Validationenikolai/research/Targeted-testing-Web.pdf · Targeted Testing for Reliability Validation ... [16] describes the basic ... physically test

Prior PDF of PF

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (1st test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (1st test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Optimal load = 20success

success success

failure

failure failure

Optimal load = 19.7 Optimal load = 20

Prior PDF of PF

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (1st test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (1st test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Posterior PDF (2nd test)

010203040

0 0.05 0.1 0.15

Probabilty of failure (pf)

f(pf)

Optimal load = 20success

success success

failure

failure failure

Optimal load = 19.7 Optimal load = 20

Figure 9: PDF’s of probability of failure for two consecutive tests

Prior PDF of Error

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (1st test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (1st test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Optimal load = 20

Optimal load = 19.7 Optimal load = 20

success

successsuccess failurefailure

failure

Prior PDF of Error

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (1st test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (1st test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Posterior PDF (2nd test)

00.15

0.30.45

0.60.75

-2 -1 0 1 2Error, e

f(e)

Optimal load = 20

Optimal load = 19.7 Optimal load = 20

success

successsuccess failurefailure

failure

Figure 10: PDF’s of error for two consecutive tests

Figures 9 and 10 show the prior and posterior PDFs of the probability of failure and of the error for two consecutive tests. For the first test, the optimal load is found to be 20. If the component survives this test, the posterior PDF indicates that the failure probability at the nominal operating stress decreases relative to its prior value. Figure 10 shows that if the component fails, the error is more likely to be negative, that is, the true strength is likely lower than its prior expected value. If the component survives the first test, the optimal stress for the second test is 19.7; if it fails the first test at a load of 20, the second test is also conducted at 20. If the component survives both tests, the Bayesian approach yields posterior PDFs of the error and the strength indicating that the strength was very likely underestimated, and the probability of failure overestimated, before the tests.
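The updating behind Figures 9 and 10 can be illustrated with a simple grid-based Bayes computation. The sketch below is not the paper's implementation; it assumes a hypothetical model in which the true strength equals the predicted strength R_hat plus an error e with a normal prior, the unit-to-unit scatter sigma_u is normal, and a test at a given load yields only a pass/fail outcome. All numerical values (R_hat, sigma_e, sigma_u, L_op, test_load) are illustrative placeholders.

import numpy as np
from scipy.stats import norm

# --- Hypothetical model (assumed values, not taken from the paper) ---
R_hat   = 22.0   # strength predicted by the analytical model
sigma_e = 1.5    # std. dev. of the prior on the model error e
sigma_u = 1.0    # unit-to-unit scatter of actual strength about R_hat + e
L_op    = 18.0   # nominal operating load at which reliability is validated

# Discretize the error and set its prior PDF
e = np.linspace(-6.0, 6.0, 2001)
prior = norm.pdf(e, loc=0.0, scale=sigma_e)

def survival_prob(load, e_grid):
    """P(specimen survives a test at 'load' | error e): strength ~ N(R_hat + e, sigma_u)."""
    return 1.0 - norm.cdf(load, loc=R_hat + e_grid, scale=sigma_u)

def update(prior_pdf, load, survived):
    """Posterior PDF of e after one pass/fail test at 'load' (grid Bayes rule)."""
    like = survival_prob(load, e) if survived else 1.0 - survival_prob(load, e)
    post = like * prior_pdf
    return post / np.trapz(post, e)           # renormalize on the grid

def failure_prob(pdf_e, load):
    """Failure probability at 'load', averaging over the error PDF."""
    return np.trapz((1.0 - survival_prob(load, e)) * pdf_e, e)

# One test at an elevated load, then evaluate Pf at the operating load
test_load = 20.0
post_survive = update(prior, test_load, survived=True)
post_fail    = update(prior, test_load, survived=False)

print("Prior Pf at operating load:          %.4f" % failure_prob(prior, L_op))
print("Posterior Pf (survived test at 20):  %.4f" % failure_prob(post_survive, L_op))
print("Posterior Pf (failed test at 20):    %.4f" % failure_prob(post_fail, L_op))

In this sketch, survival at the elevated load shifts the error PDF toward positive values and lowers the failure probability at the operating load, while a failure shifts it toward negative values, qualitatively matching the trend described above.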

Concluding remarks

A technique for optimizing Bayesian tests for validating the probability of failure of components/systems has been presented. The approach improves the confidence in the reliability of components at operating stress levels by testing them at higher stress levels. It is appropriate when modeling uncertainty can be expressed in terms of error parameters in the analytical model of the performance of a system/component and the PDFs of these parameters can be estimated. The Bayesian approach may work with only a few tests (fewer than five), as opposed to the variance reduction techniques of the previous section.

SERIES SYSTEM VALIDATION

Methodology

Consider a series system consisting of n independently operating components whose estimated probabilities of failure are P1, P2, ..., Pn. The system reliability is then approximately equal to 1 - (P1 + P2 + ... + Pn). Suppose we can perform a total of k tests on these components to validate the system reliability. An approach is presented that minimizes the variance of the estimated system probability of failure by optimizing the number of tests, ki, performed on each component.

Figure 11: Series system with n components (components 1, 2, ..., n with failure probabilities P1, P2, ..., Pn)

The probability of failure of the series system is approximately equal to

$$P_S = P_1 + P_2 + \cdots + P_n \qquad (56)$$

where $P_i$ is the probability of failure of the $i$th component.

If the probability of failure of component $i$ is estimated by testing a sample of $k_i$ components, then the variance of the estimated component probability of failure is

$$\sigma_{P_i}^2 = \frac{P_i\,(1 - P_i)}{k_i} \qquad (57)$$

Then the variance of the estimated system probability is

$$\sigma_{P_S}^2 = \sigma_{P_1}^2 + \sigma_{P_2}^2 + \cdots + \sigma_{P_n}^2 \qquad (58)$$

The objective is to reduce the variance of the system probability of failure, $\sigma_{P_S}^2$. We develop a component test plan for two cases: (1) the total number of tests is fixed at k; (2) the total testing cost is fixed at c. One optimization formulation is studied for each case; both minimize the variance of the system probability of failure by finding the number of tests for each component.

Optimization formulation 1:

Find $k_1, k_2, \ldots, k_n$

to minimize

$$\sigma_{P_S}^2 = \sum_{i=1}^{n} \sigma_{P_i}^2 \qquad (59)$$

so that

$$\sum_{i=1}^{n} k_i = k$$

The Lagrangian of the above problem is

$$L = \sum_{i=1}^{n} \sigma_{P_i}^2 + \lambda \left( \sum_{i=1}^{n} k_i - k \right) \qquad (60)$$

The minimum of the Lagrangian, which satisfies the equality constraints, minimizes the variance [Eq. (59)]. At the minimum, the gradient of the Lagrangian is zero,

$$\nabla L = 0 \qquad (61)$$

$$\frac{\partial \sigma_{P_i}^2}{\partial k_i} + \lambda = 0 \;\Rightarrow\; \frac{\partial \sigma_{P_1}^2}{\partial k_1} = \frac{\partial \sigma_{P_2}^2}{\partial k_2} = \cdots = \frac{\partial \sigma_{P_n}^2}{\partial k_n} = \text{constant}, \qquad i = 1, \ldots, n \qquad (62)$$

Solving equation (62) yields

$$\frac{P_1 (1 - P_1)}{k_1^2} = \frac{P_2 (1 - P_2)}{k_2^2} = \cdots = \frac{P_n (1 - P_n)}{k_n^2} \qquad (63)$$

We know that the total number of tests is equal to the sum of the tests on the individual components

$$k_1 + k_2 + \cdots + k_n = k \qquad (64)$$

Equations (63) and (64) are the optimality equations; they can be solved for $k_1, k_2, \ldots, k_n$, as shown below. An example then presents the optimality condition graphically.
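Combining Eqs. (63) and (64) gives the optimal allocation in closed form; the short derivation below uses only these two equations and introduces no additional assumptions. From Eq. (63), $P_i(1-P_i)/k_i^2$ equals the same constant for every component, so $k_i \propto \sqrt{P_i(1-P_i)}$; substituting into Eq. (64) fixes the proportionality constant:

$$k_i = k\,\frac{\sqrt{P_i\,(1-P_i)}}{\sum_{j=1}^{n}\sqrt{P_j\,(1-P_j)}}, \qquad i = 1, \ldots, n.$$

For small $P_i$ (so that $1-P_i \approx 1$) this reduces to $k_i \propto \sqrt{P_i}$.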

Example: Consider a two-component system whose component failure probabilities are estimated to be $P_1$ and $P_2$. Figure 12 shows the optimality condition

$$\frac{\partial \sigma_{P_1}^2}{\partial k_1} = \frac{\partial \sigma_{P_2}^2}{\partial k_2} = \text{constant}$$

for minimizing the variance $\sigma_{P_S}^2$.


Figure 12: Optimality condition for a two-component system (variance curves $\sigma_{P_1}^2$ and $\sigma_{P_2}^2$ versus $k_1$, with $k_2 = k - k_1$; at the optimum the slopes $\phi_1$ and $\phi_2$ of the two curves are equal)

If the component probabilities of failure are very small, equation (63) reduces to

$$\frac{k_i}{k_j} = \sqrt{\frac{P_i}{P_j}}\,, \qquad i = 1,\; j = 2, \ldots, n \qquad (65)$$

Optimization formulation 2:

This formulation considers the testing costs explicitly. The cost of testing one unit of component $i$ is $c_i$ and the total budget for testing all the components is $c$. The optimization problem is formulated as follows.

Find $k_1, k_2, \ldots, k_n$

to minimize

$$\sigma_{P_S}^2 = \sum_{i=1}^{n} \sigma_{P_i}^2 \qquad (66)$$

so that

$$\sum_{i=1}^{n} c_i\, k_i = c$$

The optimality condition can be derived using the same procedure as in formulation 1; for this formulation it yields the following equations:

$$\frac{P_1 (1 - P_1)}{c_1 k_1^2} = \frac{P_2 (1 - P_2)}{c_2 k_2^2} = \cdots = \frac{P_n (1 - P_n)}{c_n k_n^2} \qquad (67)$$

$$c_1 k_1 + c_2 k_2 + \cdots + c_n k_n = c \qquad (68)$$
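Combining Eqs. (67) and (68) in the same way as in formulation 1 gives the cost-weighted allocation in closed form (derived directly from the two equations above, with no additional assumptions):

$$k_i = c\,\frac{\sqrt{P_i\,(1-P_i)/c_i}}{\sum_{j=1}^{n}\sqrt{c_j\,P_j\,(1-P_j)}}, \qquad i = 1, \ldots, n,$$

so that, other things being equal, components that are cheaper to test or have larger failure probabilities receive more tests.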

In both formulations we obtain n equations in n unknowns; solving this set of equations determines how many tests to conduct on each component. A numerical example with a three-component system is considered next; the proposed methodology finds an optimum test plan that minimizes the variance of the estimate of the system reliability.

Numerical Example

Consider a product line that produces systems made of three major components. We want to estimate the probability of failure of the system by testing samples of the three components, and the budget allows testing a total of 100 components. The estimated probabilities of failure of the three components are 0.08, 0.05 and 0.02. The variance reduction approach for determining an optimal system testing plan is used to find the sizes of the samples of the three components to be tested.

Figure 13: Series system of three components with their estimated probabilities of failure (P1 = 0.08, P2 = 0.05, P3 = 0.02)

The following equations will be solved to find the optimum sample sizes

$$\frac{P_1 (1 - P_1)}{k_1^2} = \frac{P_2 (1 - P_2)}{k_2^2} \qquad (69)$$

$$\frac{P_1 (1 - P_1)}{k_1^2} = \frac{P_3 (1 - P_3)}{k_3^2} \qquad (70)$$

$$k_1 + k_2 + k_3 = 100 \qquad (71)$$

Solving the above set of equations yields the optimal values k1 = 43, k2 = 35 and k3 = 22, and the variance of the estimated system probability of failure is 3.96E-03. If we used the traditional approach of testing an equal number of each component (k1 = k2 = k3 = 33), the variance would be 4.23E-03.

For the same example we consider different numerical values: k = 10 and component failure probabilities of 0.13, 0.01 and 0.01, respectively. The optimal sample sizes are 6, 2 and 2, respectively, and the variance of the system estimate under the optimal test plan is 2.87E-02; if equal sample sizes (3.3 samples each) were tested, the variance would be 3.99E-02.
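The allocations and variances above can be reproduced with the closed-form expression for $k_i$. The short script below is a sketch, not code from the paper; the function and variable names are ours, and the sample sizes are rounded to integers before the variance is evaluated.

import numpy as np

def optimal_allocation(p, k_total):
    """Allocate k_total tests across components in proportion to sqrt(P_i(1-P_i))."""
    p = np.asarray(p, dtype=float)
    weights = np.sqrt(p * (1.0 - p))
    k = k_total * weights / weights.sum()
    return np.rint(k).astype(int)             # round to whole numbers of tests

def system_variance(p, k):
    """Variance of the estimated series-system failure probability, Eqs. (57)-(58)."""
    p = np.asarray(p, dtype=float)
    k = np.asarray(k, dtype=float)
    return np.sum(p * (1.0 - p) / k)

# First example: k = 100, P = (0.08, 0.05, 0.02)
p1 = [0.08, 0.05, 0.02]
k1 = optimal_allocation(p1, 100)               # -> [43, 35, 22]
print(k1, system_variance(p1, k1))             # variance ~ 3.96e-03
print(system_variance(p1, [33, 33, 33]))       # equal allocation ~ 4.2e-03

# Second example: k = 10, P = (0.13, 0.01, 0.01)
p2 = [0.13, 0.01, 0.01]
k2 = optimal_allocation(p2, 10)                # -> [6, 2, 2]
print(k2, system_variance(p2, k2))             # variance ~ 2.87e-02
print(system_variance(p2, [10/3] * 3))         # equal allocation ~ 3.99e-02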

Following the optimum test plan, we can estimate the system failure probability with higher confidence than by testing equal numbers of each component. The test plan indicates that we should test larger numbers of the components whose failure probabilities are higher.


CONCLUSIONS

The paper presented methodologies for designing targeted tests for reliability validation that increase the confidence in the estimated failure probability of a component or system for a given number of tests. Four approaches for physical tests and one approach for numerical tests were developed and demonstrated.

An IS approach for validating a design obtained using an approximate model was developed. This approach computes the probability of failure of the same design using a very detailed model as a reference and also using IS. The method was demonstrated on a frame structure whose local details were analyzed using an approximate model.

Mease and Nair's approach and an optimization approach, both of which optimize the sampling probability distributions of the parameters that can be controlled in tests, were presented and illustrated. These approaches use IS techniques. Both were shown to be more accurate than standard techniques that test random samples of components or systems under real operating conditions. The robustness of these approaches to errors in the analytical models was also investigated. The approaches yield unbiased estimates of the probability of failure even if there are errors in the analytical probability distributions of the uncontrollable variables; however, errors in the distributions of the controllable parameters bias the estimate of the failure probability from the test. Whether the true or the approximate probability distributions of the controllable random variables were used to design the tests, the IS approaches improved the accuracy of the estimate of the probability of failure compared to the standard approach. However, finding the optimal sampling probability distribution can be expensive when there are more than five controllable test parameters.

A Bayesian targeted testing approach that validates the reliability of systems by performing optimal physical tests was developed. This approach models errors in the analytical model of a system with random parameters, and it minimizes the variance of the probability of failure estimated from tests by testing the components/systems at more severe conditions than the operating conditions. The approach is suitable for validation problems in which only a few systems (e.g., one to five) can be tested. In an example problem, the Bayesian approach improved the confidence in the probability of failure of a simple structure.

A variance reduction approach for validating series system reliability by optimizing the number of tests to be performed on each component was developed. The approach relies on analytical estimates of the failure probabilities of the components to optimize the number of components of each type that should be tested so as to maximize the confidence in the system probability of failure. If the components are independent and their failure probabilities are small (e.g., 0.05), the optimum number of tests for a component is approximately proportional to the square root of its failure probability.

ACKNOWLEDGMENTS

The work presented in this report has been partially supported by the grant "Analytical Certification and Multidisciplinary Integration" provided by The Dayton Area Graduate Study Institute (DAGSI) through the Air Force Institute of Technology.

REFERENCES

[1] Mann, N.R., Schafer, R.E., and Singpurwalla, N.D., 1974, Methods for Statistical Analysis of Reliability and Life Data, Wiley, New York.

[2] Lawless, J.F., 1976, Statistical Models and Methods for Lifetime Data, Wiley, New York.

[3] Nelson, W., 1990, Accelerated Testing: Statistical Models, Test Plans, and Data Analyses, Wiley, New York.

[4] Kececioglu, D., 1993, Reliability and Life Testing Handbook Volumes 1 and 2, PTR Prentice Hall, New Jersey.

[5] Kleijnen, J.P.C., 1974, Statistical Techniques in Simulation, Dekker, New York.

[6] Haldar, A., and Mahadevan, S., 2000, Probability, Reliability and Statistical Methods in Engineering Design, Wiley, New York.

[7] Sobol, I.M., 1974, The Monte Carlo Method, University of Chicago Press, Chicago.

[9] Kamal, H.A., Ayyub, B.M., 2000, “Variance Reduction Techniques for Simulation-Based Structural Reliability Assessment of Systems,” ASCE Specialty Conference on Probabilistic Mechanics and Structural Reliability.

[10] Nelson, W., 1977, “Optimum Demonstration Tests with Grouped Inspection Data,” IEEE Trans. on Reliability, R-26, pp. 226-231.

[11] Melchers, R.E., 1987, Structural Reliability Analysis and Prediction, Ellis Horwood Ltd, Chichester.

[12] Melchers, R.E., 1990, “Search-Based Importance Sampling,” Structural Safety, Vol. 9, pp. 117-128.

[13] Mease, D., and Nair, V., 2002, "Accelerated testing in Structural Reliability Using CAE Models," Joint Statistical Meeting, New York.

[14] Mease, D., and Nair, V., 2003, “Variance Reduction Techniques for Reliability Estimation Using CAE Models,” Reliability and Robust Design in Automotive Engineering, Special Publication 1736, SAE International, pp. 69-72.

[15] Bayes, T., 1763, Essay Towards Solving a Problem in the Doctrine of Chances, Biometrika, Vol. 45, pp. 298-315.

[16] Morgan, B. W., 1968, An Introduction to Bayesian Statistical Decision Processes, Prentice-Hall, New Jersey.

[17] Berger, J. O., 1985, Decision Theory and Bayesian Analysis, Springer-Verlag, New York, pp. 109-113.

[18] Winkler, R. L., 1972, Introduction to Bayesian Inference and Decision, Holt, Rinehart and Winston, Inc.

[19] Martz, H. F., Waller, R. A., 1982, Bayesian Reliability Analysis, Wiley, New York.

[20] Polson, N., Singpurwalla, N.D., Verdinelli, I., 1993, In Reliability and Decision Making, Chapman and Hall, New York.

[21] Erto, P., and Giorgio, M., 2002, “Assessing high reliability via Bayesian approach and accelerated tests,” Reliability Engineering and System Safety, Vol. 76, pp. 301-310.

[22] Martz, H.F., and Waterman, M.S., 1978, “A Bayesian Model for Determining the Optimal Test Stress for a Single Test Unit,” Technometrics, Vol. 20, No. 2.