LA-6126
UC-79p
Reporting Date: October 1975
Issued: February 1976

The Basics of Bayesian Reliability Estimation from Attribute Test Data

by

H. F. Martz, Jr.
R. A. Waller

Los Alamos Scientific Laboratory
of the University of California
LOS ALAMOS, NEW MEXICO 87545

An Affirmative Action/Equal Opportunity Employer

UNITED STATES
ENERGY RESEARCH AND DEVELOPMENT ADMINISTRATION
CONTRACT W-7405-ENG. 36




Printed in the United States of America. Available from
National Technical Information Service
U.S. Department of Commerce
5285 Port Royal Road
Springfield, VA 22151
Price: Printed Copy $4.50; Microfiche $2.25

This report was prepared as an account of work sponsored by the United States Government. Neither the United States nor the United States Energy Research and Development Administration, nor any of their employees, nor any of their contractors, subcontractors, or their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights.


THE BASICS OF BAYESIAN RELIABILITY ESTIMATION
FROM ATTRIBUTE TEST DATA

by

H. F. Martz, Jr. and R. A. Waller

ABSTRACT

The basic notions of Bayesian reliability estimation from attribute lifetest data are presented in an introductory and expository manner. This report will be useful to persons interested in becoming familiar with the Bayesian approach to reliability estimation. Both Bayesian point and interval estimates of the probability of surviving the lifetest, the reliability, are discussed. The necessary formulas are simply stated, and examples are given to illustrate their use. In particular, a binomial model in conjunction with a beta prior model is considered. Particular attention is given to the procedure for selecting an appropriate prior model in practice. Empirical Bayes point and interval estimates of reliability are discussed and examples are given.

1. INTRODUCTION

Basic statistical notions used in reliability estimation are contained in a companion report by Waller and Martz.1 The reader unfamiliar with such statistical notions as random variables, probability models, point and interval estimation, and distribution functions should refer to that report, which also provides an introduction to classical, Bayesian, and empirical Bayes reliability estimation for the time-dependent case. Characteristics which serve to identify reliability estimation for the time-dependent case are (1) a probability model for the time-to-failure lifetime of the item under investigation is assumed; (2) the data consist of the observed times-to-failure of a set of items; and (3) the resulting reliability estimate is functionally dependent upon time. In particular, the exponential and Weibull probability models are considered in Ref. 1.

In this report, we discuss the time-independent case. The time-independent case characteristically considers a set of n items placed on lifetest for a specified length of time. At the end of the test, the number of survivors is observed and recorded. Both point and interval estimates of the probability of


failure-free operation (reliability) for the duration of the test must be obtained. Data of this type are sometimes called attribute test data because only the survival/nonsurvival, not the lifetime, is recorded for each test item. Such a lifetest is called an attribute test, whereas a test in which the actual failure times are recorded is called a variables test. There are two main advantages of an attribute test over a variables test. First, it is generally more economical to conduct an attribute test than a variables test. Second, a probability model for the times-to-failure variable is unnecessary, and the results are valid regardless of the underlying time-to-failure probability distribution. There are also two major disadvantages when attribute testing is compared to variables testing. First, if the failure times are monitored and recorded, and if the probability distribution of these failure times is known, the resulting reliability estimates are generally superior (e.g., shorter interval estimates) to those based on corresponding attribute data. Second, the reliability estimates based on attribute data are not functionally dependent upon time. That is, the estimates pertain only to the reliability


for a time period equal to the test duration, and reliability estimates cannot be given for other time periods. It is important to note that variables test data can be converted to attribute test data but not vice versa.

To illustrate the above and to supply numerical data for use and reference in later sections, consider the following example. Suppose that five independent samples of ten 60-W light bulbs were placed on a 1000-h lifetest. The test results are given in Table I.

Using these data we shall calculate the classical, Bayes, and empirical Bayes point and interval estimates of the probability that any bulb in the population from which these samples have been selected will survive 1000 h.

Let the variable X denote the number of survivors of a lifetest of duration t from a sample of size n. It is commonly assumed that the survival/nonsurvival of each item in the sample is independent of the remaining items and that each item has the same underlying probability of surviving the test. Under these assumptions it may be shown that X has a binomial probability distribution given by

P(X = x) = [n!/(x!(n-x)!)] R^x (1-R)^(n-x),  x = 0, 1, ..., n,   (1)

where n! = n(n-1)...(2)(1), and R is the unknown probability of surviving the lifetest of duration t. Thus R is the reliability required to be estimated. Also, P(·) denotes the probability of the event within the parentheses. For example, in Sample 1, Table I, n = 10 and X has the value x = 5.
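As an illustrative check, the binomial model of Eq. (1) can be evaluated directly. The following sketch is an editorial addition, not part of the original report; the function name and the trial reliability R = 0.5 are illustrative only.

```python
from math import comb

def binom_pmf(x: int, n: int, R: float) -> float:
    """P(X = x) for the binomial model of Eq. (1):
    n independent items, each surviving with probability R."""
    return comb(n, x) * R**x * (1 - R)**(n - x)

# Sample 1 of Table I places n = 10 items on test. If the true
# reliability were R = 0.5, the chance of exactly x = 5 survivors:
p = binom_pmf(5, 10, 0.5)
print(round(p, 4))  # → 0.2461
```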

Classical estimators for R are considered in Sec. II; Bayes estimators in Sec. III; and empirical Bayes estimators in Sec. IV.

TABLE I

ATTRIBUTE TEST RESULTS FOR 60-W
LIGHT BULBS IN A 1000-H LIFETEST

Sample    Sample     No. of
  No.      Size     Survivors
   1        10          5
   2        10          6
   3        10          5
   4        10          8
   5        10          6

II. CLASSICAL ESTIMATION

The classical point estimator for R is simply the observed fraction of items surviving the lifetest; that is,

R̂ = x/n.   (2)

Bowker and Lieberman2 present the necessary equations for constructing either a 100(1-α)% two-sided or a 100(1-α)% lower one-sided confidence interval estimate of the reliability R. The 100(1-α)% two-sided confidence interval (TCI) is

100(1-α)% TCI:
    x/[x + (n-x+1)F(α/2; 2n-2x+2, 2x)] ≤ R ≤ (x+1)F(α/2; 2x+2, 2n-2x)/[(n-x) + (x+1)F(α/2; 2x+2, 2n-2x)],   (3)

where F(α/2; 2n-2x+2, 2x) is the so-called upper α/2 percentage point of an F distribution with 2n-2x+2 numerator degrees of freedom and 2x denominator degrees of freedom. Similarly, F(α/2; 2x+2, 2n-2x) is the upper α/2 percentage point of an F distribution with 2x+2 numerator degrees of freedom and 2n-2x denominator degrees of freedom. The Appendix gives the required percentage points for α/2 = 0.005, 0.01, 0.025, 0.05, and 0.10. Thus, 99, 98, 95, 90, and 80% TCI estimates may be constructed from these values. Frequently 95% intervals are desired. To use the Appendix, compute the value of 100(α/2) for the α desired. Then in the Appendix find the page heading "100(α/2) Percentage Points of the F Distribution." For example, if a 95% TCI is desired, then α = 0.05 and 100(α/2) = 2.5. The page heading "2.5 Percentage Points of the F Distribution" would then be used.

A 100(1-α)% lower one-sided confidence interval (LCI) is given by

100(1-α)% LCI:
    x/[x + (n-x+1)F(α; 2n-2x+2, 2x)] ≤ R ≤ 1.   (4)


Here the required percentage points would be obtained from the Appendix under the heading "100α Percentage Points of the F Distribution," and the 99.5, 99, 97.5, 95, and 90% LCI estimates may be constructed.

Example: For Sample No. 1, Table I, n = 10 and x = 5. A point estimate of R is R̂ = 5/10 = 0.50.


Hence we estimate that 50% of all such 60-W light bulbs have a lifetime of at least 1000 h. A 95% TCI estimate of R becomes

95% TCI: 5/[5 + (10-5+1)(3.62)] ≤ R ≤ (5+1)(3.62)/[(10-5) + (5+1)(3.62)],

since F(0.025; 12, 10) = 3.62, or

95% TCI: 0.19 ≤ R ≤ 0.81.

Thus, we are 95% confident that R is contained in the interval [0.19, 0.81].
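The interval of Eq. (3) can be checked without F tables: the F-percentile form is algebraically equivalent to inverting the binomial tail probabilities (the Clopper-Pearson construction), which can be done by bisection. The sketch below is an editorial illustration under that equivalence; function names are not from the report.

```python
from math import comb

def binom_tail_ge(x, n, R):
    """P(X >= x) under the binomial model of Eq. (1)."""
    return sum(comb(n, k) * R**k * (1 - R)**(n - k) for k in range(x, n + 1))

def classical_tci(x, n, alpha=0.05):
    """100(1-alpha)% two-sided confidence interval for R,
    equivalent to the F-percentile form of Eq. (3)."""
    def solve(f):
        # bisection for the root of an increasing function on (0, 1)
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        return (lo + hi) / 2
    # lower limit solves P(X >= x | R) = alpha/2,
    # upper limit solves P(X <= x | R) = alpha/2
    lower = 0.0 if x == 0 else solve(lambda R: binom_tail_ge(x, n, R) - alpha / 2)
    upper = 1.0 if x == n else solve(
        lambda R: alpha / 2 - (1 - binom_tail_ge(x + 1, n, R)))
    return lower, upper

lo, hi = classical_tci(5, 10)
print(round(lo, 2), round(hi, 2))  # → 0.19 0.81, matching the example
```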

III. BAYESIAN ESTIMATION

It is important to note that the Bayesian development requires more assumptions than are required for the preceding classical development. As the assumptions are presented, an attempt is made to discuss their relevancy in practical situations. For the classical estimation procedures in Sec. II we assumed that R was an unknown constant and used the sample data to estimate its value. The Bayesian philosophy is to treat R as a variable. That is, we suppose that in any given test of the kind described in Sec. I the value of R underlying the test is the value of a variable according to a so-called prior distribution. Therefore the prior distribution is the probability model for the parameter R. The prior distribution represents the totality of the engineer's knowledge and assumptions about R before any data are observed in the test. The prior distribution of R should reflect the engineer's beliefs concerning the likely values of R before he observes the test results. If, on the basis of engineering judgment and experience, he believes that P(R > 0.80) = 0.90, the prior distribution should exhibit this property.

A. Selecting a Prior Distribution.

Selecting a prior distribution is the engineer's choice and represents an assumptive and subjective step in the analysis. In theory any probability distribution over the interval (0,1] can serve as a prior model. Further, two engineers will not necessarily choose the same prior model for a given problem owing to different degrees of belief concerning R.

Although there are no rigid rules for selecting prior models, there are analytic advantages in using what Raiffa and Schlaifer3 call conjugate prior models. For the parameter R considered here, the family of beta probability density functions defined by

g(R) = [Γ(n0)/(Γ(x0)Γ(n0-x0))] R^(x0-1) (1-R)^(n0-x0-1),  0 < R < 1,
     = 0, otherwise,   (5)

yields conjugate prior models. For reasons discussed later, the prior parameters x0 and n0 are looked upon as a "pseudo number of survivors" and "pseudo sample size," respectively, of a lifetest of duration t. The modifier "pseudo" should be thought of as meaning "pretended." Information about x0 and n0 may have been gained from one or more previous tests of a similar nature (see Sec. IV). Also, the gamma function, Γ, appearing in Eq. (5) is defined as

Γ(a) = ∫₀^∞ t^(a-1) e^(-t) dt, and when a is an integer, Γ(a) = (a-1)!. The prior mean and variance of Eq. (5) are given by E(R) = x0/n0 and V(R) = [x0(n0 - x0)]/[n0²(n0 + 1)], respectively.
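The prior moments in this (x0, n0) parameterization are quick to compute. A minimal editorial sketch (function names illustrative, not from the report):

```python
def beta_prior_mean(x0, n0):
    """E(R) = x0/n0 for the beta prior of Eq. (5)."""
    return x0 / n0

def beta_prior_var(x0, n0):
    """V(R) = x0(n0 - x0) / [n0^2 (n0 + 1)]."""
    return x0 * (n0 - x0) / (n0**2 * (n0 + 1))

# The two priors used in later examples share the mean 0.6
# but differ sharply in spread (standard deviation):
print(beta_prior_mean(3, 5), round(beta_prior_var(3, 5) ** 0.5, 2))        # → 0.6 0.2
print(beta_prior_mean(60, 100), round(beta_prior_var(60, 100) ** 0.5, 2))  # → 0.6 0.05
```

The second, "concentrated" prior carries the same best guess as the first but far more pseudo data behind it, which is exactly the distinction the following figures illustrate.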

Different prior models may be chosen by specifying different values for x0 and n0. Various prior models with means of 0.5 are shown in Fig. 1. Notice that the prior models are unimodal and bell-shaped for relatively large values of x0 and n0. For small values of x0 and n0, the models tend to become U-shaped, with the probability tending to "pile up" at the extremes of R. Notice also that the uniform model, which assumes that all permissible values of R are equally likely, is a special case of Eq. (5) whenever x0 = 1 and n0 = 2.

Fig. 1. Typical beta prior models with mean 0.5.

Page 6: The Basics of Bayesian Reliability Estimation from

A similar variety of shapes exists for larger values of the prior mean, although the curves become asymmetric for small values of n0. Figure 2 illustrates some typical beta models with means of 0.9. Generally, for a fixed mean, the larger the value of n0 the more the beta prior model is concentrated around the mean. For example, from Fig. 2, x0 = 0.90 and n0 = 1 yields a beta model with a mean of 0.90 and standard deviation 0.21, whereas x0 = 90 and n0 = 100 yields a beta model with the same mean but with a standard deviation of 0.03.

We repeat that the most important consideration in selecting a prior model for R is that the model selected must represent the engineer's knowledge and experience concerning R. That is, the model reflects his beliefs about R before observing the test results. Each of the curves in Figs. 1 and 2 conveys different prior beliefs about R. For example, in Fig. 2 the model (c) conveys the essential beliefs that (1) the prior mean of R is 0.90; (2) the "effective" range of R values is (0.8, 0.975); and (3) values of R in the immediate vicinity of 0.9 are quite likely. Thus, in this case, the engineer "strongly believes" in the relatively high reliability of the item in question before obtaining the test results. Such a model will be referred to as a "concentrated" beta model. In other words, concentrated beta models are those with small variance. However, consider model (b) in Fig. 2. Although the engineer believes the prior mean to be 0.90, he is uncertain how close R is to this value. This is reflected in the distribution of density over a wider range of R values. Such a prior model will be called a "diffuse" beta model.

There are several existing techniques for determining values of x0 and n0 consistent with prior belief in R. This problem is discussed by many authors such as MacFarland,4 Schick and Lin,5 and Bonis.6 However, the most useful treatment appears

Fig. 2. Typical beta prior models with mean 0.9.

to be a paper by Weiler7, who considers beta prior distributions in the following situations:

(1) Past records provide the numbers of defectives found in each of N samples of size n from N previously inspected batches.

(2) The expected reliability in a batch and theprobability of a batch exceeding twice the expectedreliability are known approximately.

(3) The probabilities of R exceeding a certain valueRi and of R falling below another value R2 areapproximately known.

Situation (1) is essentially an empirical Bayes ap-proach and will be further considered in Sec. IV.A.Situations (2) and (3) are likely not to be as useful toreliability engineers as the following situation:

Suppose that a reliability engineer can specify anytwo of the following three quantities.

  • The prior mean reliability R1 [R1 = E(R)].

  • The value of reliability R2 such that, before the test results, there is only a 5% chance that the reliability R will exceed R2 [P(R > R2) = 0.05].

  • The value of reliability R3 such that, before the test results, there is only a 5% chance that the reliability R will not exceed R3 [P(R < R3) = 0.05].

Any two of these are sufficient to identify a pair of values x0 and n0, the required parameters (sometimes called "hyperparameters") of the prior beta distribution. Some values are given in Table II for two values of R1 and various values of R2.

An inspection of Table II shows that n0 and x0 tend to be large when R1 and R2 are close to each other. To illustrate the use of the table, assume that the prior mean reliability 0.96 is desired and that the desired prior 95th percentile is 0.9999. Then this prior belief is consistent with a beta distribution having n0 = 11.43755 and x0 = 10.98004.
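Rows of Table II can be reproduced numerically: fix the prior mean R1, then search for the n0 whose beta prior puts exactly 5% of its probability above R2. The following sketch is an editorial illustration (function names, grid size, and brackets are assumptions); it integrates the beta density by Simpson's rule after the substitution s = t^b (t = 1 - R), which removes the integrable singularity at R = 1, and then bisects on n0.

```python
from math import lgamma, exp, log

def upper_tail(x0, n0, R2, m=2000):
    """P(R > R2) for the beta prior of Eq. (5), with a = x0, b = n0 - x0.
    The substitution s = t**b (t = 1 - R) makes the integrand smooth
    near R = 1, so Simpson's rule converges quickly."""
    a, b = x0, n0 - x0
    s_max = (1 - R2) ** b
    h = s_max / m
    f = lambda s: (1 - s ** (1 / b)) ** (a - 1)
    total = f(0.0) + f(s_max)                       # composite Simpson rule
    total += 4 * sum(f(i * h) for i in range(1, m, 2))
    total += 2 * sum(f(i * h) for i in range(2, m, 2))
    log_beta = lgamma(a) + lgamma(b) - lgamma(a + b)  # log B(a, b)
    return exp(log((h / 3) * total / b) - log_beta)

def fit_n0(R1, R2, tail=0.05):
    """Bisect for the n0 (with x0 = R1*n0) giving P(R > R2) = tail."""
    lo, hi = 2.0, 500.0        # tail probability decreases as n0 grows
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if upper_tail(R1 * mid, mid, R2) < tail else (mid, hi)
    return (lo + hi) / 2

n0 = fit_n0(0.96, 0.9999)
print(round(n0, 2), round(0.96 * n0, 2))  # close to Table II's 11.44 and 10.98
```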

B. The Posterior Model.

Suppose that a satisfactory beta prior model has been selected according to the preceding discussion; that is, the values for x0 and n0 have been identified. Further suppose that attribute test data subsequently become available in which it is known that x items from a sample of n items have survived a lifetest of duration t. By means of Bayes theorem,1 we can obtain the probability distribution


TABLE II

VALUES OF x0 AND n0 CORRESPONDING TO
SELECTED VALUES OF R1 AND R2

   R1        R2          n0           x0
0.99000   0.99500    797.71876    789.74157
0.99000   0.99550    631.03126    624.72095
0.99000   0.99600    506.01563    500.95548
0.99000   0.99650    408.78126    404.69345
0.99000   0.99700    332.38282    329.05899
0.99000   0.99750    269.87501    267.17626
0.99000   0.99800    218.28126    216.09845
0.99000   0.99850    174.62501    172.87876
0.99000   0.99900    135.91970    134.57040
0.99000   0.99950     98.22657     97.24430
0.99000   0.99960     90.28907     89.38618
0.99000   0.99970     81.70314     80.88611
0.99000   0.99980     72.42970     71.70540
0.99000   0.99990     60.52345     59.91821
0.99000   0.99991     59.06252     58.47189
0.99000   0.99992     57.54688     56.97141
0.99000   0.99993     56.10939     55.54830
0.99000   0.99994     54.14064     53.59923
0.99000   0.99995     52.17189     51.65017
0.99000   0.99996     50.20314     49.70111
0.99000   0.99997     47.25002     46.77752
0.99000   0.99998     44.29689     43.85392
0.99000   0.99999     39.68751     39.29063
0.96000   0.97000    917.77345    881.06251
0.96000   0.98000    195.46095    187.64251
0.96000   0.99000     66.47657     63.81751
0.96000   0.99500     38.39064     36.85501
0.96000   0.99550     35.71876     34.29001
0.96000   0.99600     33.46877     32.13001
0.96000   0.99650     31.50002     30.24001
0.96000   0.99700     29.06253     27.90003
0.96000   0.99750     26.57814     25.51501
0.96000   0.99800     24.21878     23.25003
0.96000   0.99850     21.82813     20.95501
0.96000   0.99900     19.06255     18.30004
0.96000   0.99950     15.75002     15.12001
0.96000   0.99960     15.00006     14.40006
0.96000   0.99970     14.06256     13.50006
0.96000   0.99980     12.89845     12.38251
0.96000   0.99990     11.43755     10.98004
0.96000   0.99991     11.25006     10.80006
0.96000   0.99992     10.91407     10.47751
0.96000   0.99993     10.65628     10.23003
0.96000   0.99994     10.48442     10.06504
0.96000   0.99995     10.14070      9.73507
0.96000   0.99996      9.84377      9.45001
0.96000   0.99997      9.37506      9.00006
0.96000   0.99998      8.92970      8.57251
0.96000   0.99999      8.15634      7.83009

of R conditional on the test results. This is the so-called posterior distribution (or posterior model) of R. For the case of a binomial distribution of the test results and a beta prior distribution, Bayes theorem yields the posterior distribution of R given by

h(R|x,n) = h(R|x,n; x0,n0)
         = [Γ(n+n0)/(Γ(x+x0)Γ(n+n0-x-x0))] R^(x+x0-1) (1-R)^(n+n0-x-x0-1),  0 < R < 1.   (6)

Note that the posterior distribution is also a beta model, with x0 and n0 in Eq. (5) replaced by (x + x0) and (n + n0), respectively. Thus the posterior mean and variance are (x + x0)/(n + n0) and [(x + x0)(n + n0 - x - x0)]/[(n + n0)²(n + n0 + 1)], respectively. A prior model that yields a posterior model of the same family is called a conjugate prior model.

Example: Consider the data in Sample 1, Table I. Further assume that the prior model selected for R is a beta model with x0 = 3 and n0 = 5. Since x = 5 and n = 10 for Sample 1, Table I, we substitute these values into Eq. (6) to find the posterior model for R given by

h(R|5,10) = [Γ(15)/(Γ(8)Γ(7))] R^7 (1 - R)^6,  0 < R < 1,

whereas the prior model for R is

g(R) = [Γ(5)/(Γ(3)Γ(2))] R^2 (1 - R),  0 < R < 1.

Figure 3 gives a graphic comparison between g(R) and h(R|5,10). We observe from Fig. 3 that the posterior mean of 0.53 has shifted from the prior mean of 0.6 toward the sample result R̂ = 0.5. This is a significant change, ascribed to using a "diffuse" prior beta model. Figure 4 illustrates the corresponding prior and posterior models for a "concentrated" prior beta model in which x0 = 60 and n0 = 100. Note that the posterior mean of 0.59 represents a smaller shift toward the sample result R̂ = 0.5 than that in Fig. 3. Generally, posterior models corresponding to diffuse prior models tend to be more sensitive to the sample results than those based upon concentrated prior models. This important feature has led to increasing use of diffuse prior models in practice.
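The conjugate update behind these figures is a one-line computation: Eq. (6) replaces (x0, n0) by (x + x0, n + n0). An editorial sketch (function names illustrative):

```python
def posterior_params(x, n, x0, n0):
    """Beta posterior of Eq. (6): x0 -> x + x0 and n0 -> n + n0."""
    return x + x0, n + n0

def posterior_mean(x, n, x0, n0):
    xpost, npost = posterior_params(x, n, x0, n0)
    return xpost / npost

# Sample 1 (x = 5, n = 10) under the diffuse and concentrated priors:
print(round(posterior_mean(5, 10, 3, 5), 2))     # → 0.53 (large shift toward 0.5)
print(round(posterior_mean(5, 10, 60, 100), 2))  # → 0.59 (small shift toward 0.5)
```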


Fig. 3. The prior and posterior models for the data in Sample 1, Table I (x0 = 3 and n0 = 5).

C. Bayesian Estimation of Reliability

Before the attribute test data are obtained, all Bayes estimates are based on the prior model. After the test data become available, the posterior model is used. The posterior model is a function of both the prior model (and its parameters) and the attribute test results. The "no-data" estimates based on the prior model are referred to by Ref. 4 as "soft" Bayes estimates, and those based on the posterior model are called "hard" or "mixed" Bayes estimates.

In this section we present and illustrate bothBayesian point estimates and Bayesian probabilityinterval estimators for the reliability R.

Fig. 4. The prior and posterior models for the data in Sample 1, Table I (x0 = 60 and n0 = 100).

A commonly used Bayesian point estimator for R is the mean of either the prior model (no-data situation) or the posterior model (data situation). Likewise, the Bayesian probability interval estimators are obtained from either the prior model (no-data situation) or the posterior model (data situation). The Bayesian point estimator of R for the no-data situation and a beta prior model is thus the fraction of "pseudo" items surviving a lifetest of duration t, or

R̂0 = pseudo number of survivors / pseudo sample size = x0/n0.   (7)

Reference 8 presents a procedure which, in a slightly modified form, can be used here for constructing Bayes probability interval estimators of R. Using their procedure and simplifying, the 100(1-α)% two-sided Bayes probability interval (TBPI) estimator for R is given by

100(1-α)% TBPI(0):
    x0/[x0 + (n0-x0)F(α/2; 2n0-2x0, 2x0)] ≤ R ≤ x0 F(α/2; 2x0, 2n0-2x0)/[(n0-x0) + x0 F(α/2; 2x0, 2n0-2x0)],   (8)

where F(α/2; 2n0-2x0, 2x0) is the upper 100(α/2) percentage point of an F distribution with (2n0-2x0) numerator and 2x0 denominator degrees of freedom, respectively. The quantity F(α/2; 2x0, 2n0-2x0) is defined similarly, except that the numerator and denominator degrees of freedom are reversed. These F values are found in the Appendix (see Sec. II), from which 99, 98, 95, 90, and 80% TBPI estimates may be constructed. Frequently, 95% (α = 0.05) interval estimates are required in practice.

A 100(1-α)% lower one-sided Bayes probability interval (LBPI) estimator for R for the no-data situation is

100(1-α)% LBPI(0):
    x0/[x0 + (n0-x0)F(α; 2n0-2x0, 2x0)] ≤ R ≤ 1.   (9)

By means of the F values in the Appendix, 99.5, 99, 97.5, 95, and 90% LBPI estimates may be constructed.


It is remarked here that the TBPI estimator given above is a so-called "symmetric" Bayes interval estimator, in which the probability that R exceeds the upper limit is equal to the probability that R does not exceed the lower limit. For asymmetric posterior models this may not be the shortest Bayes 100(1-α)% interval estimator.

After the test data become available, the Bayesian point estimator of R for a beta prior model is the fraction of "combined" items surviving a lifetest of duration t. That is,

R̂ = combined number of survivors / combined sample size = (x + x0)/(n + n0).   (10)

Several things are noted. When n = 0 (and thus x = 0), Eq. (10) reduces to the mean of the prior model, as expected. Also, as n increases with n0 held constant, the Bayes point estimate approaches x/n, the classical point estimate. This is sometimes paraphrased by saying that the influence of the prior model, through x0 and n0, on the Bayes estimate decreases as the quantity of test data increases.

For the situation where test data are available, the 100(1-α)% TBPI estimator for R becomes

100(1-α)% TBPI(n):
    (x+x0)/[(x+x0) + (n+n0-x-x0)F(α/2; 2n+2n0-2x-2x0, 2x+2x0)] ≤ R
        ≤ (x+x0)F(α/2; 2x+2x0, 2n+2n0-2x-2x0)/[(n+n0-x-x0) + (x+x0)F(α/2; 2x+2x0, 2n+2n0-2x-2x0)],   (11)

where the F values are interpreted as before. Likewise, the corresponding 100(1-α)% LBPI estimator for R is

100(1-α)% LBPI(n):
    (x+x0)/[(x+x0) + (n+n0-x-x0)F(α; 2n+2n0-2x-2x0, 2x+2x0)] ≤ R ≤ 1.   (12)

Example: To illustrate the foregoing results, we suppose that a beta prior model with x0 = 3 and n0 = 5 is appropriate for the light bulb test data recorded in Sample 1, Table I. The prior and posterior models are provided in the preceding example and are graphed in Fig. 3. Before observing the test results, the no-data point estimate of reliability R is

R̂0 = 3/5 = 0.6.

The no-data 95% TBPI is

95% TBPI(0): 3/[3 + (5-3)(6.23)] ≤ R ≤ 3(9.20)/[(5-3) + 3(9.20)]

because F(0.025; 4, 6) = 6.23 and F(0.025; 6, 4) = 9.20, or

95% TBPI(0): 0.19 ≤ R ≤ 0.93.

Similarly, the Bayes point estimate, once the test data in the sample become available, is

R̂ = (5+3)/(10+5) = 0.53.

The corresponding 95% TBPI estimate is

95% TBPI(10): (5+3)/[(5+3) + (10+5-5-3)(2.82)] ≤ R ≤ (5+3)(2.93)/[(10+5-5-3) + (5+3)(2.93)],

where F(0.025; 16, 14) = 2.93 (by interpolation) and F(0.025; 14, 16) = 2.82 (by interpolation), or

95% TBPI(10): 0.29 ≤ R ≤ 0.77.

This 95% Bayes probability interval estimate may be compared to the 95% classical confidence interval estimate [0.19, 0.81] in Sec. II. The Bayes interval estimate is shorter than the classical interval estimate for the same data. This gain is the result of the additional assumptions required in the Bayesian analysis. If the assumptions are reasonable, the resulting gain is beneficial.
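When x0 and n0 are integers, the beta-percentile intervals of Eqs. (8) and (11) can be checked without F tables through the standard order-statistics identity P(Beta(a, b) ≤ p) = P(Binomial(a+b-1, p) ≥ a), bisecting for the posterior quantiles. The sketch below is an editorial illustration (function names are assumptions, not from the report):

```python
from math import comb

def beta_cdf_int(p, a, b):
    """P(Beta(a, b) <= p) for integer a, b >= 1, computed via the
    binomial identity: equals P(Binomial(a+b-1, p) >= a)."""
    n = a + b - 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, n + 1))

def tbpi(x, n, x0, n0, alpha=0.05):
    """100(1-alpha)% symmetric TBPI from the Beta(x+x0, n+n0-x-x0)
    posterior of Eq. (6); with x = n = 0 it reduces to Eq. (8)."""
    a, b = x + x0, (n + n0) - (x + x0)
    def quantile(q):
        lo, hi = 0.0, 1.0                  # bisection: CDF is increasing in p
        for _ in range(60):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if beta_cdf_int(mid, a, b) < q else (lo, mid)
        return (lo + hi) / 2
    return quantile(alpha / 2), quantile(1 - alpha / 2)

print([round(v, 2) for v in tbpi(0, 0, 3, 5)])   # → [0.19, 0.93], the no-data interval
print([round(v, 2) for v in tbpi(5, 10, 3, 5)])  # → [0.29, 0.77]
```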

The development in this section requires that the prior model be fully specified; that is, the values of x0 and n0 must be stated to use the preceding methods. Thus, the Bayesian methods presented are subjective in that we assume both a family of prior models, such as the beta family, as well as values for the parameters x0 and n0 in the beta family. By specifying these values a particular member of the


family is selected for use. In an effort to remove someof this subjectivity, empirical Bayes methods useempirical test data to either estimate the parametersof a given prior model or estimate the prior modelitself. Some of these empirical Bayes techniques arepresented in the next section.

IV. EMPIRICAL BAYES ESTIMATION

By empirical Bayes estimation we mean either (1) the procedure and resulting estimates of reliability for a beta prior model in which x0 and n0 are both unknown and are to be estimated from past test data, or (2) the procedure and resulting estimates of reliability for a completely unknown and unspecified prior model which must be estimated. In simple terms, empirical Bayes denotes the techniques used to empirically estimate Bayes estimators in the absence of complete knowledge about the prior model. Krutchkoff9 provides a brief introductory exposition on empirical Bayes. Martz10 and Lemon11 discuss the use of empirical Bayes in reliability.

In order to construct empirical Bayes estimates, past test data must be available which reasonably satisfy certain conditions. These conditions will be discussed later.

In Sec. IV.A, we consider the case in which a beta prior model, but with unknown parameters, is assumed. The case in which the prior model is completely unknown and unspecified is considered in Sec. IV.B.

A. Beta Prior Model with Unknown Parameters. Suppose that an engineer has decided that a beta prior model [Eq. (5)] adequately reflects his prior beliefs about the reliability R which is to be estimated. That is, he believes that the variety of curves within the beta family is sufficient to express his prior belief. His immediate problem is to identify values for x0 and n0. Further, suppose that he wishes to calculate values from the results of previous attribute tests that have been conducted on the same or similar items. Let us be more explicit. Consider the following situation: A sequence of N attribute lifetests has previously been conducted in which, for the jth lifetest, xj items survived a lifetest of duration t out of a sample of size nj. Further, let Rj denote the unknown reliability in the jth lifetest. Note that all previous lifetests are for the same duration, although the sample sizes may be different in each experiment. We assume that (1) the lifetests are conditionally statistically independent of each other and (2) the underlying sequence of reliability values R1,...,RN are statistically independent realizations of a beta variable with constant (unchanging) x0 and n0 values over the sequence of tests. Such a situation might arise when conducting routine lifetests for purposes of acceptance qualification on successive production lots of the same item. It could also occur whenever test data exist pursuant to previous usage of the same or similar items in other project programs.

Recall that the classical estimator of Rj in the jth lifetest is R̂j = xj/nj. Weiler7 presents simple moment estimators for x0 and n0 under the assumption that the sample size nj is the same in each lifetest. Under the assumption that the sample sizes do not vary too greatly from lifetest to lifetest, Copas12 presents simple moment estimators for the prior mean and variance. Upon simplification these estimators yield the estimators for x0 and n0 given by

n̂0 = N²(ΣR̂j - ΣR̂j²) / {N[N ΣR̂j² - K ΣR̂j] - (N - K)(ΣR̂j)²}   (13)

and

x̂0 = n̂0 ΣR̂j / N,   (14)

where the summations range from 1 to N and where K = Σ nj⁻¹. These estimators reduce to those given by Weiler for the case of a constant sample size, and they provide good estimates of the prior beta parameters x0 and n0 when N is large.

When N is small, sampling error may cause Eq. (13) to yield negative estimates. If this occurs, we estimate n0 from the equation

n̂0 = N ΣR̂j.   (15)

Example: Suppose that the set of five samples in Table I satisfies the assumptions stated at the beginning of Sec. IV.A. Here N = 5, nj = 10, and K = 0.5. Based on these data we desire to estimate x0 and n0 in the assumed beta model underlying these data. Now R̂1 = 0.5, R̂2 = 0.6, R̂3 = 0.5, R̂4 = 0.8, and R̂5 = 0.6. Thus

ΣR̂j = 3.0,  ΣR̂j² = 1.86,

and using Eq. (13) we have

n̂0 = 25(3.0 - 1.86) / {5[5(1.86) - 0.5(3.0)] - (5 - 0.5)(3.0)²} = -19.0.


With n̂0 < 0, we use Eq. (15) and find that

n̂0 = 15.0,

and thus

x̂0 = (15.0)(3.0)/5 = 9.0.

Figure 5 shows a graph of the fitted prior beta model from Eq. (5). The corresponding fitted posterior model for the data in Sample 1, Table I, is also graphed in Fig. 5. The fitted prior model in Fig. 5 has a prior mean of 0.60 and a prior standard deviation of 0.12, whereas the fitted posterior model has a mean of 0.56 and a standard deviation of 0.10.

For x̂0 = 9 and n̂0 = 15, the empirical Bayes point and 95% two-sided empirical Bayes probability interval estimates of R for Sample 1, Table I, are 0.56 and [0.37, 0.74], respectively. These estimates may be directly compared to those in the example in Sec. III.C. The previous Bayes point estimate was 0.53, which agrees closely with the estimate here. The previous Bayes probability interval estimate was [0.29, 0.77], which is considerably wider than the empirical Bayes interval estimate here. This is due to the fitted beta model being more concentrated than the beta model assumed in the example in Sec. III.C.
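The moment-fitting steps of Eqs. (13)-(15) are mechanical enough to script. An editorial sketch applied to the Table I samples (function names are illustrative; the Eq. (15) fallback, used when Eq. (13) goes negative, follows the reconstruction of that equation given above):

```python
def fit_beta_prior(survivors, sizes):
    """Moment estimates (x0, n0) of the beta prior from past
    attribute tests, per Eqs. (13)-(15)."""
    N = len(survivors)
    K = sum(1 / n for n in sizes)
    Rhat = [x / n for x, n in zip(survivors, sizes)]
    s1, s2 = sum(Rhat), sum(r * r for r in Rhat)
    n0 = N**2 * (s1 - s2) / (N * (N * s2 - K * s1) - (N - K) * s1**2)  # Eq. (13)
    if n0 <= 0:
        n0 = N * s1          # Eq. (15): small-sample fallback
    x0 = n0 * s1 / N         # Eq. (14)
    return x0, n0

# Table I: five samples of 10 bulbs with 5, 6, 5, 8, 6 survivors.
# Eq. (13) yields -19.0 here, so the fallback applies.
x0, n0 = fit_beta_prior([5, 6, 5, 8, 6], [10] * 5)
print(round(x0, 1), round(n0, 1))  # → 9.0 15.0
```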

B. Unknown and Unspecified Prior Model. Theengineer faced with the decision of selecting a prior

Fig. 5. The fitted beta prior and posterior models for the data in Sample 1, Table I (x̂0 = 9 and n̂0 = 15).

model may be uncertain about which prior model he should use. Suppose that he desires Bayesian estimates but wishes to use a method that does not require any decision regarding the family to which the prior model belongs, such as the beta family. Thus, not only is the engineer unwilling to assign values to the parameters, such as x0 and n0, he is unwilling to assume even a prior family, such as the beta family. If past test data are available, these data can be used to construct an empirical estimate of the prior model itself. This is the situation usually referred to as empirical Bayes.

Consider the following situation. A sequence of N attribute lifetests has previously been conducted in which, for the jth lifetest, xj items survived a lifetest of duration t out of a sample of size nj. Suppose that these test results are available to the engineer. Further, let Rj denote the unknown reliability in the jth lifetest. We assume that (1) the lifetests are conditionally statistically independent of each other, and (2) the underlying sequence of reliability values R1,...,RN are statistically independent realizations of a variable from a completely unknown and unassumed common prior density.

Several investigators have considered this situation. Copas12 presents two point estimators for Rj, and Griffin and Krutchkoff13 develop an estimator similar to one of those proposed by Copas. Lemon and Krutchkoff14 present a general method for constructing estimators of Rj. This technique is used by Lemon11 to develop a simple point estimator of Rj. Martz and Lian15,16 survey and compare several of these estimators and also develop their own estimators for Rj. The simplest and most direct estimator is that suggested by Lemon11. This estimator is presented here, although it is emphasized that point estimators exist that generally possess smaller mean squared errors (see Refs. 15 and 16).

Lemon's procedure is basically to approximate the unknown and unspecified prior model by the discrete probability model given by

P(R = Rj) = 1/N, j = 1, 2, ..., N,   (16)

where Rj = xj/nj. It is also assumed that the sample sizes do not vary too greatly over the sequence of lifetest experiments. This estimated prior model places a probability of 1/N at each of the observed sample estimates in the sequence of lifetest experiments. Using Bayes' theorem in conjunction with Eq. (16), the corresponding empirical posterior model is


P(R = Rj | x, n) ≡ P(R = Rj | x, n; R1, ..., RN)

    = Rj^x (1 - Rj)^(n-x) / [ Σ(i=1 to N) Ri^x (1 - Ri)^(n-x) ],  j = 1, 2, ..., N,   (17)

and zero otherwise.

Note that the posterior model also places probability only at the observed sample estimates. Therefore, interval estimates based on such a posterior model cannot be expected to be "smooth" in the sense that the probability of interval coverage will always be close to the desired value. Smoother posterior models can be empirically obtained from empirical prior models which assign probabilities to those values of R adjacent to the observed estimates Rj. Such an approach, which is a more complicated refinement of the simple approach presented here, has been considered by Martz10 and Martz and Lian.15,16 In general, the more refined approach yields superior interval estimates, and also point estimates which have smaller mean squared error. If two or more values Rj coincide, both the empirical prior and posterior models accumulate the probability for the coincident values.

Example: Suppose that the set of five samples in Table I satisfies the assumptions stated at the beginning of Sec. IV.B. Now R1 = 0.5, R2 = 0.6, R3 = 0.5, R4 = 0.8, and R5 = 0.6. The empirical prior model becomes

P(R = 0.5) = 1/5 + 1/5 = 0.4,

P(R = 0.6) = 1/5 + 1/5 = 0.4,

P(R = 0.8) = 1/5 = 0.2.

Figure 6 shows a graph of this prior probability model. After the data in Sample 1, Table I, have been observed, the corresponding empirical posterior model is obtained using Eq. (17):

P(R = 0.5 | 5, 10) = 0.27 + 0.27 = 0.54,

P(R = 0.6 | 5, 10) = 0.22 + 0.22 = 0.44,

P(R = 0.8 | 5, 10) = 0.02.
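The prior and posterior just described can be reproduced in a few lines. The sketch below is not from the report; since only Sample 1's counts (5 of 10) are given in this excerpt, it assumes for illustration that all five Table I samples had nj = 10, so that xj = 5, 6, 5, 8, 6.

```python
# Illustrative implementation of Eqs. (16)-(17): the empirical prior puts mass
# 1/N on each past estimate Rj = xj/nj; Bayes' theorem with the binomial
# likelihood of the current result (x of n surviving) reweights those masses.
# The (xj, nj) pairs below are hypothetical except for Sample 1.

def empirical_posterior(past, x, n):
    """past: list of (xj, nj); returns {Rj: posterior probability},
    accumulating mass when estimates coincide."""
    weights = {}
    for xj, nj in past:
        r = xj / nj
        w = r ** x * (1.0 - r) ** (n - x)   # binomial kernel; the 1/N cancels
        weights[r] = weights.get(r, 0.0) + w
    total = sum(weights.values())
    return {r: w / total for r, w in weights.items()}

past = [(5, 10), (6, 10), (5, 10), (8, 10), (6, 10)]   # Table I (assumed)
post = empirical_posterior(past, x=5, n=10)
print({r: round(p, 2) for r, p in post.items()})
```

Up to rounding, this reproduces the 0.54 and 0.44 above; the mass at R = 0.8 comes out 0.029 here rather than the report's 0.02, presumably because the actual Table I sample sizes differ slightly from the equal nj = 10 assumed.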

Fig. 6. The empirical prior model fitted to the data in Table I.

The posterior model is presented in Fig. 7. It is interesting to compare the empirical prior and posterior models. Because the sample test data support the prior value 0.5, the posterior model significantly shrinks the prior probability that R = 0.8. At the same time, the posterior probability that R = 0.6 is slightly increased from its prior value because 0.6 is close to the test results. However, the posterior probability that R = 0.5 is larger than the posterior probability that R = 0.6, because the test results coincided with 0.5. Thus, the empirical posterior model exhibits the desired behavior of altering prior belief based on the observed lifetest results.

Now let us consider empirical Bayes estimators for Rk, the reliability in the kth experiment. The empirical Bayes point estimator of Rk becomes

Rk = [ Σ(j=1 to N) Rj^(xk+1) (1 - Rj)^(nk-xk) ] / [ Σ(j=1 to N) Rj^xk (1 - Rj)^(nk-xk) ],   (18)

where Rj = xj/nj. If the posterior model has been calculated, an alternative form for computing Rk is given by

Rk = Σ(j=1 to N) Rj P(R = Rj | xk, nk).   (19)



A 100(1-α)% or larger two-sided symmetric empirical Bayes probability interval (TEBPI) estimator of the reliability Rk in the kth lifetest experiment is given by

at least 100(1-α)% TEBPI: R(j-+1) ≤ Rk ≤ R(j+), k = 1, 2, ..., N,   (20)

where R(1), ..., R(N) denotes the ordered (smallest to largest) sequence of classical reliability estimates. Here j+ is the smallest integer which satisfies the relation

Σ(j = j+ + 1 to N) P(R = R(j) | xk, nk) ≤ α/2,

and j- is the largest integer satisfying

Σ(j = 1 to j-) P(R = R(j) | xk, nk) ≤ α/2.

If no such integer j- exists between 1 and N, then j- is set equal to 0. The lower end point of the interval estimate will thus be R(1). In the event that j- = 0 and j+ = N, the required interval estimate will be the observed range of the classical sequence of reliability estimates. Also, the empirical probability that Rk lies within the interval is 1. Similarly, the empirical probability that Rk lies within any interval [Eq. (20)] can be calculated by summing the probabilities in the posterior model [Eq. (17)] corresponding to the values R(j) lying within the interval.

A corresponding 100(1-α)% or larger lower one-sided empirical Bayes probability interval (LEBPI) estimator of the reliability Rk in the kth lifetest experiment is given by

at least 100(1-α)% LEBPI: R(j*+1) ≤ Rk, k = 1, 2, ..., N,   (21)

where j* is the largest integer satisfying the relation

Σ(j = 1 to j*) P(R = R(j) | xk, nk) ≤ α.

Again, j* is set equal to 0 if no such integer exists between 1 and N.

Example: Suppose that the set of five samples in Table I satisfies the assumptions stated at the beginning of Sec. IV.B. Further, suppose we desire both point and 95% or larger two-sided interval estimates of the reliability R1 corresponding to Sample 1 in Table I. Thus k = 1. Now R(1) = 0.5, R(2) = 0.5, R(3) = 0.6, R(4) = 0.6, and R(5) = 0.8. Also xk = x1 = 5 and nk = n1 = 10. Substituting these values into Eq. (18) yields the empirical Bayes point estimate of R1 given by

R1 = [2(0.5)^6(1-0.5)^5 + 2(0.6)^6(1-0.6)^5 + (0.8)^6(1-0.8)^5] / [2(0.5)^5(1-0.5)^5 + 2(0.6)^5(1-0.6)^5 + (0.8)^5(1-0.8)^5] = 0.55.

The alternative form [Eq. (19)] and the posterior model in the preceding example also yield

R1 = 0.5(0.54) + 0.6(0.44) + 0.8(0.02) = 0.55.

This estimate closely agrees with the Bayes estimate of 0.56 based on the fitted beta prior model. Now consider the required interval estimate in which α = 0.05. From the posterior model in Fig. 7, j+ = 4 and j- = 0. Thus the interval estimate is

at least 95% TEBPI: 0.5 ≤ R1 ≤ 0.6.

The empirical probability that this interval contains R1 is 0.98. Recall that the corresponding classical, Bayes, and fitted Bayes interval estimates were previously found to be [0.19, 0.81], [0.29, 0.77], and [0.37, 0.74], respectively. The interval estimate here is shorter than any of these other estimates.
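The point estimate and interval construction of this example can be sketched end to end. The code below is illustrative rather than the report's: it again assumes all five Table I samples had nj = 10, and because that assumption puts posterior mass 0.029 (rather than the report's rounded 0.02) at R = 0.8, a 90% TEBPI is requested so the resulting interval matches the [0.5, 0.6] form of the example.

```python
# Illustrative sketch of Eq. (19) (point estimate) and Eq. (20) (TEBPI).
# The (xj, nj) pairs below are hypothetical except for Sample 1.

def eb_point_and_tebpi(past, x, n, alpha):
    rs = sorted(xj / nj for xj, nj in past)      # ordered estimates R(1)..R(N)
    w = [r ** x * (1.0 - r) ** (n - x) for r in rs]
    total = sum(w)
    p = [wi / total for wi in w]                 # posterior mass at each R(j)

    point = sum(r * pi for r, pi in zip(rs, p))  # Eq. (19)

    N = len(rs)
    # j+ : smallest j whose upper-tail mass is <= alpha/2 (j = N always works).
    j_plus = next(j for j in range(1, N + 1) if sum(p[j:]) <= alpha / 2)
    # j- : largest j whose lower-tail mass is <= alpha/2, or 0 if none.
    j_minus = max((j for j in range(1, N + 1) if sum(p[:j]) <= alpha / 2),
                  default=0)
    interval = (rs[j_minus], rs[j_plus - 1])     # (R(j- + 1), R(j+))
    coverage = sum(p[j_minus:j_plus])            # empirical coverage probability
    return point, interval, coverage

past = [(5, 10), (6, 10), (5, 10), (8, 10), (6, 10)]   # Table I (assumed)
point, interval, cov = eb_point_and_tebpi(past, x=5, n=10, alpha=0.10)
print(round(point, 2), interval, round(cov, 2))   # 0.55 (0.5, 0.6) 0.97
```

With alpha = 0.05 under the same assumed data, the 0.029 tail mass at 0.8 exceeds α/2, so this sketch would widen the interval to (0.5, 0.8) with coverage 1.0; reproducing the report's 95% example exactly depends on the actual Table I counts.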

Fig. 7. The empirical posterior model fitted to the data in Sample 1, Table I.



REFERENCES

1. R. A. Waller and H. F. Martz, Jr., "Bayesian Reliability Estimation: The State of the Art and Future Needs," Los Alamos Scientific Laboratory report LA-6003 (October 1975).

2. A. H. Bowker and G. J. Lieberman, Engineering Statistics (Prentice Hall, New York, 1972), 2nd Ed.

3. H. Raiffa and R. Schlaifer, Applied Statistical Decision Theory (Harvard University Press, Boston, MA, 1961).

4. W. J. MacFarland, "Bayes' Equation, Reliability, and Multiple Hypothesis Testing," IEEE Trans. Rel. R-21, 136-147 (1972).

5. G. J. Schick and C. Y. Lin, "Determination of Subjective Prior Distributions for Bayesian Analysis of Maintainability Problems," Proceedings of the 1974 Annual Reliability and Maintainability Symposium, Los Angeles, CA (January 1974), pp. 21-26.

6. A. J. Bonis, "Why Bayes Is Better," Proceedings of the 1975 Annual Reliability and Maintainability Symposium, Washington, DC (January 1975), pp. 340-348.

7. H. Weiler, "The Use of Incomplete Beta Functions for Prior Distributions in Binomial Sampling," Technometrics 7, 335-347 (August 1965).

8. N. R. Mann, R. E. Schafer, and N. D. Singpurwalla, Methods for Statistical Analysis of Reliability and Life Data (John Wiley and Sons, New York, 1974).

9. R. G. Krutchkoff, "Empirical Bayes Estimation," The American Statistician 26, 14-16 (December 1972).

10. H. F. Martz, Jr., "Pooling Life-Test Data by Means of the Empirical Bayes Method," IEEE Trans. Rel. R-24, 27-30 (April 1975).

11. G. H. Lemon, "An Empirical Bayes Approach to Reliability," IEEE Trans. Rel. R-21, 155-158 (August 1972).

12. J. B. Copas, "Empirical Bayes Methods and the Repeated Use of a Standard," Biometrika 59, 349-360 (1972).

13. B. S. Griffin and R. G. Krutchkoff, "Optimal Linear Estimators: An Empirical Bayes Version with Application to the Binomial Distribution," Biometrika 58, 195-201 (1971).

14. G. H. Lemon and R. G. Krutchkoff, "An Empirical Bayes Smoothing Technique," Biometrika 56, 361-365 (1969).

15. H. F. Martz, Jr., and M. G. Lian, "A Survey and Comparison of Several Empirical Bayes Estimators for the Binomial Parameter," J. Statis. Comput. Simul. 3, 165-178 (1974).

16. H. F. Martz, Jr., and M. G. Lian, "Empirical Bayes Estimation of the Binomial Parameter," Biometrika 61, 517-523 (1974).


[The remaining pages of the scanned original contain appendix tables, including a table indexed by degrees of freedom for the denominator (evidently percentage points of the F distribution). The tabulated values could not be recovered from the scan.]