

Computational Statistics & Data Analysis 51 (2007) 4692–4706, www.elsevier.com/locate/csda

Diagnostics analysis for log-Birnbaum–Saunders regression models ☆

Feng-Chang Xie a,∗, Bo-Cheng Wei b

a Department of Applied Mathematics, Nanjing Agricultural University, Nanjing 210095, China
b Department of Mathematics, Southeast University, Nanjing 210096, China

Received 8 February 2005; received in revised form 14 August 2006; accepted 20 August 2006. Available online 18 September 2006.

Abstract

In this paper, several diagnostic measures are proposed based on the case-deletion model for log-Birnbaum–Saunders regression models (LBSRM), which might be a necessary supplement to the recent work presented by Galea et al. [2004. Influence diagnostics in log-Birnbaum–Saunders regression models. J. Appl. Statist. 31, 1049–1064], who studied influence diagnostics for the LBSRM mainly based on local influence analysis. It is shown that the case-deletion model is equivalent to the mean-shift outlier model in the LBSRM, and an outlier test is presented based on the mean-shift outlier model. Furthermore, we investigate a test of homogeneity for the shape parameter in the LBSRM, a problem mentioned by both Rieck and Nedelman [1991. A log-linear model for the Birnbaum–Saunders distribution. Technometrics 33, 51–60] and Galea et al. [2004. Influence diagnostics in log-Birnbaum–Saunders regression models. J. Appl. Statist. 31, 1049–1064]. We obtain the likelihood ratio and score statistics for such a test. Finally, a numerical example is given to illustrate our methodology, and the properties of the likelihood ratio and score statistics are investigated through Monte Carlo simulations.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Case-deletion model; Generalized Cook distance; Likelihood distance; Log-Birnbaum–Saunders regression models; Mean-shift outlier model; Score test; Test of homogeneity; Simulation study

1. Introduction

The Birnbaum–Saunders distribution (Birnbaum and Saunders, 1969a, b; Desmond, 1985), the log-Birnbaum–Saunders (LBS) distribution, and log-Birnbaum–Saunders regression (LBSR) are quite useful for failure time data and have received much attention in the literature. Galea et al. (2004) gave a nice overview of this topic; see also Rieck and Nedelman (1991), Achar (1993), Tsionas (2001), Wang et al. (2006) and the references therein. For the Birnbaum–Saunders distribution, Birnbaum and Saunders (1969b) discussed the maximum likelihood estimates (MLEs); Achar (1993) developed some Bayesian methods; Ng et al. (2003) derived a modified moment estimation and Wang et al. (2006) explored a modified censored moment estimation. For log-Birnbaum–Saunders regression models (LBSRMs), Rieck and Nedelman (1991) investigated the MLEs and the least-squares estimates (LSE) of the parameters; Tsionas (2001) discussed Bayesian estimation for the model. Recently, Galea et al. (2004) presented various diagnostic methods for

☆ The project was supported by NSFC 10371016.
∗ Corresponding author. Tel.: +86 25 84395205; fax: +86 25 84432420.

E-mail address: [email protected] (F.-C. Xie).

0167-9473/$ - see front matter © 2006 Elsevier B.V. All rights reserved. doi:10.1016/j.csda.2006.08.030


LBSRMs. The central part of their results is based on the local influence analysis of Cook (1986). Alternatively, this paper discusses influence diagnostics based on another basic method: case-deletion diagnostics. It is well known that, to assess the influence of the ith observation on the parameter estimates, a direct approach is to compute single-case diagnostics with the ith case deleted. Since the pioneering work of Cook (1977), case-deletion diagnostics such as Cook's distance or the likelihood distance have been successfully applied to various statistical models; see, for example, Christensen et al. (1992), Davison and Tsai (1992), Wei (1998), Tang et al. (2000), Galea et al. (2005), etc. In this paper, we propose several diagnostic measures based on the case-deletion model (CDM) for the LBSRM, which might be a necessary supplement to Galea et al. (2004). We also prove that in the LBSRM the MLEs in the CDM are equal to the MLEs in the mean-shift outlier model (MSOM), which is usually called the equivalence between the two models. Based on the MSOM, a score statistic is obtained for identifying outliers in the LBSRM. Another interesting problem we study is the test for homogeneity of the shape parameter in the LBSRM. This problem has been mentioned by both Rieck and Nedelman (1991) and Galea et al. (2004). In fact, if the shape parameter, say α, were not homogeneous in the LBSRM, say α_i for each observation i, then inference in LBSR would be much more difficult. We propose a diagnostic test for detecting homogeneity of the shape parameter.

The paper is organized as follows. The rest of this section introduces LBSRMs. In Section 2, we give a brief sketch of the case-deletion method and obtain one-step approximations of the estimates in the CDM; we then propose several case-deletion measures, such as the generalized Cook distance and the likelihood distance, for identifying influential observations. Section 3 gives an equivalence theorem between the CDM and the MSOM for the LBSRM; at the same time, a score test statistic for detecting outliers is obtained based on the MSOM. In Section 4, we discuss the test for homogeneity of the shape parameter in the LBSRM, and the likelihood ratio statistic and the score statistic are presented for this test. An illustrative example is reported in Section 5, and the properties of the likelihood ratio and score statistics are investigated through Monte Carlo simulations in Section 6. Some concluding remarks are given in the final section. All proofs of the theorems are deferred to the Appendix.

Let T have a Birnbaum–Saunders distribution. Based on Theorem 1.1 of Rieck and Nedelman (1991), the distribution of Y = log T is then a sinh-normal distribution, and the density function of Y is given by

f_Y(y) = (α√(2π))⁻¹ cosh((y − γ)/2) exp[−2α⁻² sinh²((y − γ)/2)],   y ∈ R,

where α is a shape parameter and γ is a location parameter. This is the special case σ = 2 of the sinh-normal distribution (denoted by SN(α, γ, σ)) introduced by Rieck and Nedelman (1991), also named the logarithmic Birnbaum–Saunders distribution: U ~ SN(α, γ, σ) if R = 2α⁻¹ sinh{(U − γ)/σ} ~ N(0, 1).

The LBSRM discussed in this paper is given by

y_i = X_iᵀβ + ε_i,   i = 1, 2, …, n,   (1)

where the ε_i's are mutually independent random errors with LBS distribution SN(α, 0, 2), the X_i's are p-dimensional explanatory variables and β is a p-dimensional unknown vector parameter to be estimated. By Rieck and Nedelman (1991), the MLE of α² can be obtained as

α̂² = (4/n) Σ_{i=1}^n sinh²((y_i − X_iᵀβ̂)/2),

where β̂ is the MLE of β, and a numerical procedure must be used to determine it. When α > 2 the likelihood may have multiple maxima, but it has been shown that if α ≤ 2 the MLE of β is unique provided X = (X₁, X₂, …, X_n)ᵀ has rank p (Rieck, 1989). However, the case α > 2 is unusual in practice.
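As a small illustration of these two facts, the sketch below (Python/NumPy; the function names are ours, not part of the paper) draws errors from SN(α, 0, 2) through the relation ε = 2 sinh⁻¹(αz/2) with z ~ N(0, 1), which is the sampling device used later in Section 6, and evaluates α̂² from a given β̂ by the displayed formula:

```python
import numpy as np

def sample_sn_errors(alpha, n, seed=None):
    """Draw n errors from SN(alpha, 0, 2): eps = 2*arcsinh(alpha*z/2) with z ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    return 2.0 * np.arcsinh(alpha * z / 2.0)

def alpha2_mle(y, X, beta_hat):
    """Closed-form MLE of alpha^2 given the MLE beta_hat (the displayed formula above)."""
    return 4.0 * np.mean(np.sinh((y - X @ beta_hat) / 2.0) ** 2)
```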

The log-likelihood function for a random sample y = (y₁, y₂, …, y_n)ᵀ from model (1) may be expressed as

l(θ) = Σ_{i=1}^n log{ (α√(2π))⁻¹ cosh((y_i − μ_i)/2) exp[−(2/α²) sinh²((y_i − μ_i)/2)] }
     = −(n/2) log 8π + Σ_{i=1}^n log ξ_i1 − (1/2) Σ_{i=1}^n ξ²_i2,   (2)


where θ = (βᵀ, α)ᵀ, μ_i = X_iᵀβ for i = 1, 2, …, n, and

ξ_i1 = (2/α) cosh((y_i − μ_i)/2),   ξ_i2 = (2/α) sinh((y_i − μ_i)/2).   (3)
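The log-likelihood (2)–(3) is easy to code directly. The sketch below (Python with NumPy/SciPy; a minimal illustration with our own function names, not the MATLAB code used in Section 5) evaluates l(θ) and obtains the MLE of (β, α) by numerical maximization, started at the least-squares estimate of β:

```python
import numpy as np
from scipy.optimize import minimize

def loglik(theta, y, X):
    """Log-likelihood (2) of the LBSRM; theta = (beta_1, ..., beta_p, alpha)."""
    beta, alpha = theta[:-1], theta[-1]
    u = (y - X @ beta) / 2.0
    xi1 = (2.0 / alpha) * np.cosh(u)   # xi_{i1} in (3)
    xi2 = (2.0 / alpha) * np.sinh(u)   # xi_{i2} in (3)
    return (-0.5 * len(y) * np.log(8.0 * np.pi)
            + np.sum(np.log(xi1)) - 0.5 * np.sum(xi2 ** 2))

def fit_lbsrm(y, X):
    """MLE of (beta, alpha) by numerical maximization of (2), started at the LSE of beta."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    alpha0 = np.sqrt(4.0 * np.mean(np.sinh((y - X @ beta0) / 2.0) ** 2))
    start = np.append(beta0, alpha0)
    res = minimize(lambda t: -loglik(t, y, X), start, method="Nelder-Mead")
    return res.x[:-1], res.x[-1]   # beta_hat, alpha_hat
```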

2. Influence diagnostics based on CDM

Case-deletion is a common approach to studying the effect of dropping the ith case from the data set. The CDM for model (1) is given by

y_j = X_jᵀβ + ε_j,   j = 1, 2, …, n,  j ≠ i.   (4)

In the following, a quantity with a subscript "(i)" means the original quantity with the ith case deleted. For model (4), the log-likelihood function of θ is denoted by l_(i)(θ). Let θ̂_(i) = (β̂_(i)ᵀ, α̂_(i))ᵀ be the ML estimate of θ from l_(i)(θ).

To assess the influence of the ith case on the ML estimate θ̂ = (β̂ᵀ, α̂)ᵀ, the basic idea is to compare the difference between θ̂_(i) and θ̂. If deletion of a case seriously influences the estimates, more attention should be paid to that case. Hence, if θ̂_(i) is far from θ̂, then the ith case is regarded as an influential observation.

Since θ̂_(i) is needed for every case, the total computational burden involved can be quite heavy, especially when n is large. Hence, the following one-step approximation θ̂¹_(i) is often used to reduce the burden (see Cook and Weisberg, 1982, p. 182):

θ̂¹_(i) = θ̂ + {−l̈(θ̂)}⁻¹ l̇_(i)(θ̂),   (5)

where l̇_(i)(θ̂) = ∂l_(i)(θ)/∂θ|_{θ=θ̂} and l̈(θ̂) = ∂²l(θ)/∂θ∂θᵀ|_{θ=θ̂}. Then we have the following important theorem.

Theorem 1. For the LBSRM (1), the relationship between the parameter estimates from the full data and from the data with the ith case deleted can be expressed as

β̂¹_(i) = β̂ + {(XᵀÂX)⁻¹(X_i r̂_i − Xᵀĥ ĉ⁻¹ δ̂_i)},   (6)

α̂¹_(i) = α̂ + {[ĉ⁻¹ + ĉ⁻¹ ĥᵀX(XᵀÂX)⁻¹Xᵀĥ ĉ⁻¹] δ̂_i − ĉ⁻¹ ĥᵀX(XᵀÂX)⁻¹X_i r̂_i},   (7)

where A = V − h c⁻¹ hᵀ and

V = diag(v₁, v₂, …, v_n),   v_i = (1/4) sech²((y_i − μ_i)/2) − (1/α²) cosh(y_i − μ_i),
h = (h₁, h₂, …, h_n)ᵀ,   h_i = −(2/α³) sinh(y_i − μ_i),   c = n/α² − (12/α⁴) Σ_{i=1}^n sinh²((y_i − μ_i)/2),
r_i = ξ_i1 ξ_i2/2 − ξ_i2/(2ξ_i1),   δ_i = −1/α + ξ²_i2/α,

with hats denoting evaluation at the full-data ML estimate θ̂ = (β̂ᵀ, α̂)ᵀ.

From this theorem, we can see the difference between the estimates with and without the ith case deleted, and we can obtain case-deletion measures for assessing influential observations in LBSR.
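For concreteness, the following sketch (Python/NumPy; helper names are ours) computes the one-step approximations (6)–(7) for every case, evaluating V, h, c, r_i and δ_i at the full-data MLE as in Theorem 1:

```python
import numpy as np

def one_step_case_deletion(y, X, beta_hat, alpha_hat):
    """One-step approximations (6)-(7) of the case-deletion estimates for all cases.

    Returns beta1 (n x p) and alpha1 (n,); row i approximates the estimates obtained
    when case i is deleted.  All quantities are evaluated at the full-data MLE."""
    n, p = X.shape
    resid = y - X @ beta_hat
    u = resid / 2.0
    xi1 = (2.0 / alpha_hat) * np.cosh(u)
    xi2 = (2.0 / alpha_hat) * np.sinh(u)
    v = 0.25 / np.cosh(u) ** 2 - np.cosh(resid) / alpha_hat ** 2        # v_i
    h = -2.0 * np.sinh(resid) / alpha_hat ** 3                          # h_i
    c = n / alpha_hat ** 2 - 12.0 * np.sum(np.sinh(u) ** 2) / alpha_hat ** 4
    r = xi1 * xi2 / 2.0 - xi2 / (2.0 * xi1)                             # r_i
    delta = -1.0 / alpha_hat + xi2 ** 2 / alpha_hat                     # delta_i
    Xth = X.T @ h                                                        # X'h
    M = X.T @ (v[:, None] * X) - np.outer(Xth, Xth) / c                  # X'AX
    Minv = np.linalg.inv(M)
    beta1 = np.empty((n, p))
    alpha1 = np.empty(n)
    for i in range(n):
        beta1[i] = beta_hat + Minv @ (X[i] * r[i] - Xth * delta[i] / c)              # (6)
        alpha1[i] = (alpha_hat
                     + (1.0 / c + (Xth @ Minv @ Xth) / c ** 2) * delta[i]
                     - (Xth @ Minv @ X[i]) * r[i] / c)                               # (7)
    return beta1, alpha1
```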

2.1. Generalized Cook distance

The generalized Cook distance is defined as a standardized norm of θ̂_(i) − θ̂:

GD_i = (θ̂_(i) − θ̂)ᵀ M (θ̂_(i) − θ̂),   (8)


where M is a nonnegative definite matrix which measures the weighted combination of the elements of the difference θ̂_(i) − θ̂. Cook and Weisberg (1982) considered several choices for M. A commonly used choice is the observed information M = −l̈(θ̂). Substituting (5) into (8), we obtain the following approximation:

GD¹_i = l̇_(i)(θ̂)ᵀ {−l̈(θ̂)}⁻¹ l̇_(i)(θ̂).   (9)

Then, by a little calculation, we get

GD¹_i = {−r̂_i X_iᵀ(XᵀÂX)⁻¹X_i r̂_i + 2 r̂_i X_iᵀ(XᵀÂX)⁻¹Xᵀĥ ĉ⁻¹ δ̂_i − δ̂_i [ĉ⁻¹ + ĉ⁻¹ ĥᵀX(XᵀÂX)⁻¹Xᵀĥ ĉ⁻¹] δ̂_i},   (10)

where r̂_i, Â, ĥ, ĉ and δ̂_i are given in Theorem 1. On the other hand, it is often of interest to consider the influence of the ith case on the estimate of β or of α separately. Then, from (9), the generalized Cook distance for a parameter subset can be defined as

GD¹_i(β) = {l̇_(i)β(θ̂)ᵀ {(I_p, 0)[−l̈(θ̂)]⁻¹(I_p, 0)ᵀ} l̇_(i)β(θ̂)},   (11)

GD¹_i(α) = {l̇_(i)α(θ̂)ᵀ {(0ᵀ, 1)[−l̈(θ̂)]⁻¹(0ᵀ, 1)ᵀ} l̇_(i)α(θ̂)},   (12)

where l̇_(i)β(θ̂) = ∂l_(i)(θ)/∂β|_{θ=θ̂}, l̇_(i)α(θ̂) = ∂l_(i)(θ)/∂α|_{θ=θ̂}, I_p is the identity matrix of order p and 0 = (0, …, 0)ᵀ is the p-dimensional zero vector. The values of GD_i(β) and GD_i(α) reveal the impact of the ith case on the estimates of β and α, respectively. Then, by a little calculation, we have

GD¹_i(β) = {−r̂_i X_iᵀ(XᵀÂX)⁻¹X_i r̂_i},   (13)

GD¹_i(α) = {−δ̂_i [ĉ⁻¹ + ĉ⁻¹ ĥᵀX(XᵀÂX)⁻¹Xᵀĥ ĉ⁻¹] δ̂_i},   (14)

where r̂_i, Â, ĥ, ĉ and δ̂_i are given in Theorem 1.

2.2. Likelihood distance

Another popular measure of the difference between θ̂ and θ̂_(i) is the likelihood distance (Cook and Weisberg, 1982)

LD_i = 2{l(θ̂) − l(θ̂_(i))}.   (15)

We can directly apply Theorem 1 to compute it. Substituting (5) into (15), we obtain the following approximation:

LD¹_i = 2{l(θ̂) − l(θ̂ + {−l̈(θ̂)}⁻¹ l̇_(i)(θ̂))}.   (16)

Besides, we can also compute β̂_j − β̂_j(i) (j = 1, …, p) to see the difference between β̂ and β̂_(i).
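The case-deletion measures of this section can be approximated from the same building blocks. The sketch below (Python/NumPy; again a minimal illustration with our own function names, and one could equally use the closed forms (6)–(7) of the previous sketch) forms the observed information (A.1), applies the one-step update (5), and returns GD¹_i, GD¹_i(β), GD¹_i(α) and LD¹_i via (9), (11)–(12) and (16):

```python
import numpy as np

def loglik(theta, y, X):
    beta, alpha = theta[:-1], theta[-1]
    u = (y - X @ beta) / 2.0
    return (-0.5 * len(y) * np.log(8.0 * np.pi)
            + np.sum(np.log((2.0 / alpha) * np.cosh(u)))
            - 0.5 * np.sum(((2.0 / alpha) * np.sinh(u)) ** 2))

def case_deletion_diagnostics(y, X, beta_hat, alpha_hat):
    """Approximate GD_i, GD_i(beta), GD_i(alpha) and LD_i for all cases.

    Uses the observed information (A.1), the score of l_(i) at the full-data MLE,
    the one-step update (5) and the approximations (9), (11), (12) and (16)."""
    n, p = X.shape
    resid = y - X @ beta_hat
    u = resid / 2.0
    xi1 = (2.0 / alpha_hat) * np.cosh(u)
    xi2 = (2.0 / alpha_hat) * np.sinh(u)
    v = 0.25 / np.cosh(u) ** 2 - np.cosh(resid) / alpha_hat ** 2
    h = -2.0 * np.sinh(resid) / alpha_hat ** 3
    c = n / alpha_hat ** 2 - 12.0 * np.sum(np.sinh(u) ** 2) / alpha_hat ** 4
    r = xi1 * xi2 / 2.0 - xi2 / (2.0 * xi1)
    delta = -1.0 / alpha_hat + xi2 ** 2 / alpha_hat
    # second-derivative matrix of l(theta) at the MLE, with blocks X'VX, X'h and c
    ddl = np.zeros((p + 1, p + 1))
    ddl[:p, :p] = X.T @ (v[:, None] * X)
    ddl[:p, p] = ddl[p, :p] = X.T @ h
    ddl[p, p] = c
    info_inv = np.linalg.inv(-ddl)                     # {-l..(theta_hat)}^{-1}
    theta_hat = np.append(beta_hat, alpha_hat)
    GD, GDb, GDa, LD = (np.empty(n) for _ in range(4))
    l_hat = loglik(theta_hat, y, X)
    for i in range(n):
        score_i = np.append(-X[i] * r[i], -delta[i])   # dl_(i)/dtheta at the MLE
        GD[i] = score_i @ info_inv @ score_i                            # (9)
        GDb[i] = score_i[:p] @ info_inv[:p, :p] @ score_i[:p]           # (11)
        GDa[i] = score_i[p] ** 2 * info_inv[p, p]                       # (12)
        theta_1i = theta_hat + info_inv @ score_i                       # one-step (5)
        LD[i] = 2.0 * (l_hat - loglik(theta_1i, y, X))                  # (16)
    return GD, GDb, GDa, LD
```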

3. MSOM and outlier test

The CDM is the basis for constructing effective diagnostic statistics. Another commonly used diagnostic model is the MSOM

y_j = X_jᵀβ + ε_j,   j = 1, 2, …, n,  j ≠ i,
y_i = X_iᵀβ + φ + ε_i,   (17)

where φ is an extra parameter indicating the presence of an outlier (Cook and Weisberg, 1982, p. 20). Obviously, if the value of φ is nonzero, then the ith case may be an outlier because it no longer comes from the original model (1). Let β̂_mi, α̂_mi and φ̂_mi be the ML estimates for model (17). Then we have the following equivalence theorem.

Theorem 2. For the CDM (4) and the MSOM (17), if the related MLEs are unique, then we have

β̂_mi = β̂_(i),   α̂²_mi ≈ α̂²_(i).   (18)

Theorem 2 shows that, although the forms of the CDM (4) and the MSOM (17) are different, they have the same statistical characteristics for estimation, so the effects of the ith case on the estimates under models (4) and (17) are consistent.


The equivalence of the CDM and the MSOM for linear regression was observed in Cook and Weisberg (1982), and has been extended to a broader class of models; see, for example, Wei (1998) and Fung et al. (2002).

From the MSOM (17), we can consider a test of hypothesis to identify outliers:

H0: φ = 0;   H1: φ ≠ 0.   (19)

If H0 is rejected, then the ith case may be a possible outlier. The following theorem provides a score statistic for the test.

Theorem 3. For the MSOM (17), the score statistic for the test of hypothesis (19) is

SC_i = {r̂²_i ĝ⁻¹_i},   (20)

where

g_i = u²_i X_iᵀK X_i + n⁻¹α² u_i d_i hᵀX K X_i − (2n)⁻¹α² d²_i + (4n²)⁻¹α⁴ d²_i hᵀX K Xᵀh − u_i,
d_i = −ξ_i1 ξ_i2/α,   K = [Xᵀ(V + (2n)⁻¹α² h hᵀ)X]⁻¹,   u_i = (1 − ξ²_i1 − ξ²_i2 − ξ²_i2/ξ²_i1)/4,

r_i and h are given in Theorem 1, ξ_i1 and ξ_i2 are given in (3), and hats denote evaluation at θ̂.
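Rather than coding (20) term by term, an equivalent numerical route is the general score-test form SC_i = (∂l_mi/∂φ)ᵀ A²² (∂l_mi/∂φ) used in the Appendix (Eq. (A.4)), with the observed information of the MSOM obtained by finite differences. The sketch below takes this route (Python/NumPy; the finite-difference helpers and the function names are assumptions of this illustration, not the paper's closed form):

```python
import numpy as np

def msom_loglik(params, y, X, i):
    """Log-likelihood of the MSOM (17); params = (beta, alpha, phi), case i is mean-shifted."""
    beta, alpha, phi = params[:-2], params[-2], params[-1]
    mu = X @ beta
    mu[i] = mu[i] + phi
    u = (y - mu) / 2.0
    return (-0.5 * len(y) * np.log(8.0 * np.pi)
            + np.sum(np.log((2.0 / alpha) * np.cosh(u)))
            - 0.5 * np.sum(((2.0 / alpha) * np.sinh(u)) ** 2))

def num_grad(f, x, eps=1e-5):
    """Central-difference gradient."""
    g = np.zeros_like(x)
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        g[k] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

def num_hess(f, x, eps=1e-4):
    """Central-difference Hessian (symmetrized)."""
    H = np.zeros((x.size, x.size))
    for k in range(x.size):
        e = np.zeros_like(x)
        e[k] = eps
        H[:, k] = (num_grad(f, x + e, eps) - num_grad(f, x - e, eps)) / (2.0 * eps)
    return 0.5 * (H + H.T)

def outlier_score_stat(y, X, beta_hat, alpha_hat, i):
    """Score statistic for H0: phi = 0 in the MSOM of case i, via the form (A.4)."""
    theta0 = np.concatenate([beta_hat, [alpha_hat, 0.0]])  # H0 estimate: full-data MLE, phi = 0
    f = lambda t: msom_loglik(t, y, X, i)
    score = num_grad(f, theta0)              # beta/alpha components are ~0 at the MLE
    A = np.linalg.inv(-num_hess(f, theta0))  # blocks of {-l..}^{-1}, cf. (A.3)
    return score[-1] ** 2 * A[-1, -1]
```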

4. Test for homogeneity of shape parameter

To provide an appropriate statistical framework for the estimation problem, Rieck and Nedelman (1991) assumed that the shape parameter α does not depend on the explanatory vector X_i, i.e. α_i = α for i = 1, 2, …, n. Galea et al. (2004) also mentioned this problem and assumed homogeneity of the shape parameter in their discussion. However, this assumption usually needs to be checked. In this section, we propose a test of hypothesis for detecting homogeneity of the shape parameter. To this aim, we apply a commonly used parameterization technique for tests of homogeneity; see, for example, Cook and Weisberg (1983) and Smyth (1989). Now suppose that the homogeneous model (1) is modified as

y_i = X_iᵀβ + ε_i,   i = 1, 2, …, n,
ε_i ~ SN(α_i, 0, 2),   α_i = α m_i,   m_i = m(X_i, γ),   (21)

where α is the baseline shape parameter, m(X_i, γ) is a weight function, and γ is a q-dimensional vector parameter indicating the heterogeneity of the shape parameter. It is assumed that there is a unique value γ₀ of γ such that m(X_i, γ₀) = 1 for all i. Hence the test for homogeneity of the shape parameter is equivalent to the test of hypothesis:

H0: γ = γ₀;   H1: γ ≠ γ₀.

From (21), we can obtain the likelihood ratio statistic and the score statistic for the above test. Note that in (21), γ is the parameter of interest and β, α are nuisance parameters. The log-likelihood function for model (21) is given by

l(θ) = −(n/2) log 8π + Σ_{i=1}^n log ξ_i1 − (1/2) Σ_{i=1}^n ξ²_i2,   (22)

where

ξ_i1 = (2/(α m_i)) cosh((y_i − μ_i)/2),   ξ_i2 = (2/(α m_i)) sinh((y_i − μ_i)/2),   (23)

with θ = (βᵀ, γᵀ, α)ᵀ. Let θ̂₁ = (β̂₁ᵀ, γ̂₁ᵀ, α̂₁)ᵀ be the ML estimate of θ, let m̂_i = m(X_i, γ̂₁), and let θ̂ = (β̂ᵀ, γ₀ᵀ, α̂)ᵀ be the ML estimate under H0. Then we have the following theorems.

Theorem 4. For model (21), the likelihood ratio statistic for H0 is given by

LR = n log( α̂² m̂_g / α̂₁² ),   m̂_g = [ ∏_{i=1}^n { cosh((y_i − X_iᵀβ̂₁)/2) / ( m̂_i cosh((y_i − X_iᵀβ̂)/2) ) } ]^{2/n}.   (24)
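In practice, LR can also be computed simply by fitting model (21) and its restricted version numerically. The sketch below (Python with NumPy/SciPy) does this for the power weight m(X_i, γ) = x_i^γ used later in Sections 5 and 6; the weight choice, the optimizer and the function names are assumptions of this illustration, not part of the theorem:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def het_loglik(params, y, X, x_w):
    """Log-likelihood of model (21) with the power weight m_i = x_i^gamma.

    params = (beta, alpha, gamma); x_w is a positive covariate entering the weight."""
    beta, alpha, gamma = params[:-2], params[-2], params[-1]
    alpha_i = alpha * x_w ** gamma                 # alpha_i = alpha * m(X_i, gamma)
    u = (y - X @ beta) / 2.0
    xi1 = (2.0 / alpha_i) * np.cosh(u)
    xi2 = (2.0 / alpha_i) * np.sinh(u)
    return (-0.5 * len(y) * np.log(8.0 * np.pi)
            + np.sum(np.log(xi1)) - 0.5 * np.sum(xi2 ** 2))

def lr_homogeneity_test(y, X, x_w):
    """LR statistic for H0: gamma = 0 (homogeneous shape) and its chi^2_1 p-value."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    alpha0 = np.sqrt(4.0 * np.mean(np.sinh((y - X @ beta0) / 2.0) ** 2))
    nll0 = lambda t: -het_loglik(np.append(t, 0.0), y, X, x_w)   # gamma fixed at 0
    nll1 = lambda t: -het_loglik(t, y, X, x_w)                   # gamma free
    fit0 = minimize(nll0, np.append(beta0, alpha0), method="Nelder-Mead")
    fit1 = minimize(nll1, np.append(fit0.x, 0.0), method="Nelder-Mead")
    LR = 2.0 * (fit0.fun - fit1.fun)               # = 2 {l(theta1_hat) - l(theta_hat)}
    return LR, chi2.sf(LR, df=1)
```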


Theorem 5. For model (21), if m_i = m(X_i, γ) is a twice differentiable function of γ, then the score statistic for H0 is given by

SC = { êᵀṁ [ êᵀm̈ − ṁᵀ(Ĥ − P̂)ṁ ]⁻¹ ṁᵀê },   (25)

where

ṁ_ia = ∂m_i/∂γ_a,  i = 1, …, n,  a = 1, …, q,  ṁ = (ṁ_ia)_{n×q},
m̈_iab = ∂²m_i/∂γ_a∂γ_b,  i = 1, …, n,  a, b = 1, …, q,  m̈ = (m̈_iab)_{n×q×q},
P = QSQ − (α/n) τ hᵀSQ − (α/n) QSh τᵀ − (2/n) ττᵀ + (α²/n²) τ (hᵀSh) τᵀ,
S = XKXᵀ,   τ = (ξ²_12, …, ξ²_n2)ᵀ,   e = 1 − τ,   1 = (1, 1, …, 1)ᵀ,
H = I_n − 3 diag(τ),   Q = diag(q₁, …, q_n),   q_i = −2 sinh(y_i − μ_i)/α²,

and hats denote evaluation at the ML estimate θ̂ under H0.

5. Example

Now let us see an application of the diagnostic statistics developed in the previous sections. Consider the biaxial fatigue data set reported by Brown and Miller (see Rieck and Nedelman, 1991; Galea et al., 2004) on the life of a metal piece in cycles to failure. The response N is the number of cycles to failure and the explanatory variable W is the work per cycle (mJ/m³). Forty-six observations were considered; see Table 1 of Galea et al. (2004).

Based on the discussion of Rieck and Nedelman (1991), the following log-linear model is adequate for the biaxial fatigue data:

y_i = log N_i = β₀ + β₁ log W_i + ε_i,   i = 1, 2, …, 46,   (26)

where ε_i ~ SN(α, 0, 2). The MLEs are β̂₀ = 12.2792, β̂₁ = −1.6706 and α̂ = 0.4102.

By a little calculation, we obtain the residuals ε̂_i = y_i − ŷ_i and R_i = 2α̂⁻¹ sinh{ε̂_i/2}; the scatter plot of R_i versus the predicted values ŷ_i is presented in Fig. 1. From Fig. 1, we can see that the distribution of the R_i seems to be approximately normal. Since U ~ SN(α, γ, σ) if 2α⁻¹ sinh{(U − γ)/σ} ~ N(0, 1), the distribution of the ε̂_i should therefore be approximately sinh-normal.

Fig. 1. Plot of R_i versus the predicted values ŷ_i.
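A sketch of this fit and residual check (Python with NumPy/SciPy) is given below; it assumes arrays N and W holding the 46 observations of Table 1 of Galea et al. (2004), which are not reproduced here, and the function names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, y, X):
    beta, alpha = theta[:-1], theta[-1]
    u = (y - X @ beta) / 2.0
    return -(-0.5 * len(y) * np.log(8.0 * np.pi)
             + np.sum(np.log((2.0 / alpha) * np.cosh(u)))
             - 0.5 * np.sum(((2.0 / alpha) * np.sinh(u)) ** 2))

def fit_fatigue_model(N, W):
    """Fit model (26), y = log N = b0 + b1 log W + eps, and return the residual checks."""
    y = np.log(N)
    X = np.column_stack([np.ones_like(y), np.log(W)])
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    alpha0 = np.sqrt(4.0 * np.mean(np.sinh((y - X @ beta0) / 2.0) ** 2))
    fit = minimize(neg_loglik, np.append(beta0, alpha0), args=(y, X), method="Nelder-Mead")
    beta_hat, alpha_hat = fit.x[:-1], fit.x[-1]
    y_hat = X @ beta_hat
    eps_hat = y - y_hat                                # residuals
    R = 2.0 / alpha_hat * np.sinh(eps_hat / 2.0)       # should look roughly N(0, 1), cf. Fig. 1
    return beta_hat, alpha_hat, y_hat, R
```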


Table 1
Influence diagnostics based on case-deletion

i    GD_i(β)   GD_i(α)   GD_i     LD_i     SC_i
1    0.0062    0.0095    0.0157   0.0173   0.0678
2    0.1512    0.0057    0.1568   0.1515   1.8629
3    0.1038    0.0017    0.1056   0.1063   1.4767
4    0.2592    0.0779    0.3368   0.3116   4.0159
5    0.2444    0.0751    0.3198   0.3039   3.9415
6    0.0600    0.0002    0.0602   0.0576   1.1599
7    0.0399    0.0001    0.0400   0.0379   0.9355
8    0.0609    0.0041    0.0649   0.0602   1.6302
9    0.0087    0.0060    0.0147   0.0170   0.2528
10   0.0546    0.0044    0.0591   0.0611   1.6461
11   0.0168    0.0024    0.0192   0.0180   0.5258
12   0.1293    0.0929    0.2228   0.2128   4.0706
13   0.0134    0.0031    0.0164   0.0190   0.4643
14   0.0007    0.0103    0.0110   0.0117   0.0275
15   0.0029    0.0085    0.0115   0.0116   0.1111
16   0.0011    0.0100    0.0110   0.0127   0.0411
17   0.0187    0.0007    0.0194   0.0172   0.7419
18   0.0001    0.0107    0.0109   0.0118   0.0052
19   0.0148    0.0013    0.0160   0.0190   0.6476
20   0.0107    0.0029    0.0136   0.0122   0.4730
21   0.0154    0.0010    0.0164   0.0143   0.6801
22   0.0471    0.0141    0.0614   0.0637   2.1337
23   0.0017    0.0091    0.0108   0.0109   0.0831
24   0.0066    0.0047    0.0114   0.0102   0.3319
25   0.0344    0.0035    0.0378   0.0319   1.5549
26   0.0161    0.0006    0.0167   0.0203   0.7491
27   0.0021    0.0089    0.0110   0.0109   0.0949
28   0.0184    0.0005    0.0190   0.0157   0.7714
29   0.0000    0.0108    0.0109   0.0120   0.0004
30   0.0188    0.0008    0.0196   0.0235   0.7146
31   0.0507    0.0066    0.0573   0.0613   1.7785
32   0.1464    0.1019    0.2490   0.2402   4.2437
33   0.0200    0.0021    0.0221   0.0261   0.5628
34   0.0023    0.0096    0.0119   0.0120   0.0596
35   0.0059    0.0078    0.0136   0.0165   0.1521
36   0.0108    0.0057    0.0165   0.0151   0.2772
37   0.0011    0.0103    0.0113   0.0132   0.0274
38   0.0429    0.0000    0.0429   0.0382   1.0194
39   0.0862    0.0090    0.0950   0.0856   1.9546
40   0.0562    0.0003    0.0565   0.0614   1.1827
41   0.0001    0.0108    0.0109   0.0124   0.0031
42   0.0348    0.0009    0.0356   0.0402   0.7306
43   0.0212    0.0035    0.0247   0.0224   0.4403
44   0.0252    0.0027    0.0278   0.0321   0.5101
45   0.0204    0.0046    0.0250   0.0231   0.3573
46   0.2855    0.0343    0.3194   0.2954   3.1233

5.1. Influence diagnostics based on case-deletion

In this subsection, we use MATLAB 6.5 to compute the case-deletion measures GD_i and LD_i presented in Section 2 and the score statistic for outliers SC_i presented in Section 3. The results of these influence measures are listed in Table 1, and three index plots are displayed in Figs. 2–4.

From this table and these figures, we can see that cases 2–5, 12, 32 and 46 are influential observations. This is consistent with the results presented by Galea et al. (2004). Furthermore, cases 4, 5, 12, 32 and 46 are the most influential observations.


Fig. 2. Index plot of GD_i(β).

Fig. 3. Index plot of GD_i(α).

We eliminate the most influential observations 4, 5, 12, 32 and 46 and refit the model; the resulting parameter estimates are given in Table 2. As we notice, observations 4, 5 and 46 are the most influential on the estimates of β₀ and β₁.

5.2. Test for homogeneity of shape parameter

Both Rieck and Nedelman (1991) and Galea et al. (2004) assumed that the shape parameter is homogeneous in LBSRMs. Now let us see whether this assumption is appropriate for the biaxial fatigue data.


Fig. 4. Index plot of GD_i.

Table 2
The MLEs of the parameters after dropping the cases indicated

Deleted case   α̂        β̂₀        β̂₁
None           0.4102   12.2792   −1.6706
4              0.3965   12.0842   −1.6208
5              0.3967   12.4670   −1.7183
12             0.3960   12.3787   −1.6934
32             0.3952   12.2041   −1.6441
46             0.4005   12.4698   −1.7289

Consider the heterogeneous LBSRM version of (26):

y_i = log N_i = β₀ + β₁ log W_i + ε_i,   i = 1, 2, …, 46,
ε_i ~ SN(α_i, 0, 2),   α_i = α m_i = α m(X_i, γ),   X_i = log W_i.   (27)

To test the homogeneity of the shape parameter, we assume that m_i = m(X_i, γ) = X_i^γ for simplicity. It is easily seen that when γ = 0 we have m(X_i, γ) = 1 and α_i = α for all i. Hence the test for homogeneity of the shape parameter becomes the test of hypothesis H0: γ = 0. By Theorems 4 and 5 of Section 4 and a little computation, we obtain the likelihood ratio statistic LR = 0.7605 and the score test statistic SC = 0.7531, with corresponding p-values of about 0.38. Therefore, we cannot reject the hypothesis H0, and the assumption of a homogeneous shape parameter is suitable for the biaxial fatigue data. This result is consistent with the arguments of Rieck and Nedelman (1991) and Galea et al. (2004), as expected. Moreover, as pointed out by Chen (1983), the score test statistic is not very sensitive to the functional form of m(X_i, γ) in tests for homogeneity of a variance parameter; this might also be true for the test of homogeneity of the shape parameter. As suggested by Cook and Weisberg (1983), the power function and the exponential function are usually employed in practice. Now we assume that m(X_i, γ) = e^{X_i γ}. The test for homogeneity of the shape parameter is still H0: γ = 0. By a similar computation we have LR = 0.6366, SC = 0.6367 and corresponding p-values greater than 0.4. Therefore, we again cannot reject H0. These results are quite similar to those obtained with m(X_i, γ) = X_i^γ. To study the effect of the most influential cases on the statistics LR and SC, we eliminate the most influential observations 4, 5, 12, 32 and 46 and recompute LR and SC under the weight function m(X_i, γ) = e^{X_i γ}.


Table 3
The values of LR and SC after dropping the cases indicated

Deleted case   LR       SC
None           0.6366   0.6367
4              0.2497   0.2490
5              0.0916   0.0912
12             0.3181   0.3165
32             1.4024   1.4272
46             1.6542   1.6594

Table 4
Simulated sizes and powers of SC and LR

n    Statistic   γ = 0     γ = 0.2   γ = 0.4   γ = 0.6   γ = 0.8
10   LR          0.0394    0.0620    0.1112    0.1872    0.2602
     SC          0.0756    0.0832    0.1400    0.2174    0.2966
20   LR          0.0528    0.1016    0.2596    0.4656    0.6796
     SC          0.0700    0.1324    0.3128    0.5282    0.7224
40   LR          0.0476    0.1264    0.3992    0.7124    0.9004
     SC          0.0530    0.1392    0.4244    0.7310    0.9084
60   LR          0.0482    0.1702    0.5472    0.8680    0.9830
     SC          0.0524    0.1912    0.5744    0.8840    0.9856
80   LR          0.0490    0.2102    0.6984    0.9522    0.9984
     SC          0.0524    0.2344    0.7196    0.9594    0.9986

The results are given in Table 3. From Table 3, we can see that the values of LR and SC change somewhat when the most influential observations are deleted, but all of them remain smaller than the χ²₁ critical value at the 0.05 level.

6. Simulation study

In this section, the performance of the two tests is examined using Monte Carlo simulations in order to study the small- and moderate-sample properties of the proposed test statistics. The model considered in the simulation study is identical to that used for the biaxial fatigue data shown above:

y_i = β₀ + β₁ x_i + ε_i,   i = 1, 2, …, n,   (28)

where ε_i ~ SN(α_i, 0, 2) and, for simplicity, α_i = α x_i^γ. As is usual, the estimated parameter values based on the biaxial fatigue data were treated as the true values in the simulation study; we set β₀ = 12.2792, β₁ = −1.6706 and α = 0.4102. We first generated random numbers from a uniform distribution on the interval [1, 6] as the values of the x_i (i = 1, 2, …, n). To obtain the y_i, the errors ε_i were generated from SN(α_i, 0, 2) by first sampling z from N(0, 1) and then setting ε_i = 2 sinh⁻¹(α_i z/2); with the true parameter values and a given γ, the y_i were then obtained from model (28). We set γ = 0, 0.2, 0.4, 0.6 and 0.8, respectively. Carrying out this procedure n times gives a data set {(y_i, x_i), i = 1, 2, …, n}, from which the values of the test statistics are computed. For each given value of γ, we performed 5000 replications (the values of the x_i were kept fixed across replications). The proportion of replications in which the null hypothesis was rejected is the simulated power. All statistics were compared with the χ²₁ critical value at the 0.05 level. The simulations were performed for n = 10, 20, 40, 60, 80 and the different values of γ to obtain the simulated sizes and powers of the two test statistics.
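A condensed version of this simulation design is sketched below (Python/NumPy). It reuses lr_homogeneity_test from the sketch in Section 4 — an assumption of this illustration — and reports the rejection rate of the LR test only; with 5000 replications per setting the loop is slow but follows the design described above:

```python
import numpy as np
# assumes lr_homogeneity_test (and het_loglik) from the sketch in Section 4 are in scope

def simulate_power(n, gamma, n_rep=5000, seed=0):
    """Simulated rejection rate of the LR homogeneity test at the chi^2_1 0.05 level.

    Follows the design of Section 6: true values from the fatigue-data fit,
    x_i ~ U[1, 6] held fixed over replications, alpha_i = alpha * x_i^gamma."""
    rng = np.random.default_rng(seed)
    beta0, beta1, alpha = 12.2792, -1.6706, 0.4102
    x = rng.uniform(1.0, 6.0, n)
    X = np.column_stack([np.ones(n), x])
    alpha_i = alpha * x ** gamma
    reject = 0
    for _ in range(n_rep):
        z = rng.standard_normal(n)
        eps = 2.0 * np.arcsinh(alpha_i * z / 2.0)      # eps_i from SN(alpha_i, 0, 2)
        y = beta0 + beta1 * x + eps
        LR, _ = lr_homogeneity_test(y, X, x)
        reject += LR > 3.841                           # chi^2_1 critical value at 0.05
    return reject / n_rep
```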


Table 4 lists the sizes and powers of the test statistics SC and LR under the weight function x_i^γ and sample sizes n = 10, 20, 40, 60, 80, based on 5000 simulations. The results for testing γ = 0 indicate that the actual sizes of the tests are close to 0.05 for n ≥ 20, and the powers increase as γ and/or n increases; the tests therefore perform well. Table 4 also shows that there is no significant difference between the score statistic SC and the likelihood ratio statistic LR in terms of size and power.

7. Concluding remarks

Influence diagnostics and homogeneity tests for parameters are important steps in regression analysis. This paper proposes several diagnostic measures based on the CDM for log-Birnbaum–Saunders regression models (LBSRM), shows that the CDM is equivalent to the MSOM in the LBSRM, and presents an outlier test based on the MSOM. Furthermore, the test of homogeneity for the shape parameter in the LBSRM is investigated through the likelihood ratio and score statistics. The numerical example and the simulation study show that the diagnostic methods presented in this paper are effective.

Acknowledgements

We would like to thank the Editor, Associate Editor and referees for their helpful comments and suggestions, which led to a significant improvement of the paper. This work was supported by NSFC (10371016) and the Grant for Young Teachers in Nanjing Agricultural University (KJ06036).

Appendix A

Proof of Theorem 1. To apply Eq. (5) to the CDM, we need the first two derivatives of l(θ) with respect to θ. From (2) and (3) we have

∂²l(θ)/∂β∂βᵀ = Σ_{i=1}^n [ (1/4) sech²((y_i − μ_i)/2) − (1/α²) cosh(y_i − μ_i) ] X_i X_iᵀ = XᵀVX,

∂²l(θ)/∂β∂α = −(2/α³) Σ_{i=1}^n sinh(y_i − μ_i) X_i = Xᵀh,

∂²l(θ)/∂α² = n/α² − (12/α⁴) Σ_{i=1}^n sinh²((y_i − μ_i)/2) = c.

Then the observed information matrix −l̈(θ) is obtained from

l̈(θ) = [ ∂²l(θ)/∂β∂βᵀ   ∂²l(θ)/∂β∂α
          ∂²l(θ)/∂α∂βᵀ   ∂²l(θ)/∂α²  ]
      = [ XᵀVX   Xᵀh
          hᵀX    c   ].   (A.1)

The partial derivatives of l_(i)(θ) with respect to θ are, respectively, given by

∂l_(i)(θ)/∂β = Σ_{j≠i} ( ξ_j1 ξ_j2/2 − ξ_j2/(2ξ_j1) ) X_j,

∂l_(i)(θ)/∂α = −(n − 1)/α + (1/α) Σ_{j≠i} ξ²_j2.


Since ∂l(θ)/∂β|_(β̂,α̂) = 0 and ∂l(θ)/∂α|_(β̂,α̂) = 0, we have

∂l_(i)(θ)/∂β|_(β̂,α̂) = X_i ( ξ_i2/(2ξ_i1) − ξ_i1 ξ_i2/2 )|_(β̂,α̂) = −X_i r̂_i,

∂l_(i)(θ)/∂α|_(β̂,α̂) = ( 1/α − ξ²_i2/α )|_(β̂,α̂) = −δ̂_i.

Then, substituting all the above results into Eq. (5), we obtain the approximations of β̂_(i) and α̂_(i), respectively. □

Proof of Theorem 2. For the CDM (4), it follows from (2) that the log-likelihood function is

l_(i)(θ) = −((n − 1)/2) log 8π + Σ_{j≠i} log ξ_j1 − (1/2) Σ_{j≠i} ξ²_j2.

So the estimates β̂_(i) and α̂_(i), respectively, satisfy the equations

∂l_(i)(θ)/∂β = Σ_{j≠i} ( ξ_j1 ξ_j2/2 − ξ_j2/(2ξ_j1) ) X_j = 0,

∂l_(i)(θ)/∂α = −(n − 1)/α + (1/α) Σ_{j≠i} ξ²_j2 = 0.

For the MSOM (17), the log-likelihood function can be written as

l_mi(θ) = −(n/2) log 8π + ( Σ_{j≠i} log ξ_j1 − (1/2) Σ_{j≠i} ξ²_j2 ) + ( log ξ_i1 − (1/2) ξ²_i2 )
        = l_(i)(θ) − (1/2) log 8π + ( log ξ_i1 − (1/2) ξ²_i2 ),

where

ξ_i1 = (2/α) cosh( (y_i − X_iᵀβ − φ)/2 ),   ξ_i2 = (2/α) sinh( (y_i − X_iᵀβ − φ)/2 ).

So the estimates β̂_mi, α̂_mi and φ̂_mi, respectively, satisfy the equations

∂l_mi(θ)/∂β = ∂l_(i)(θ)/∂β − { (1/(α ξ_i1)) sinh( (y_i − X_iᵀβ − φ)/2 ) − (ξ_i2/α) cosh( (y_i − X_iᵀβ − φ)/2 ) } X_i = 0,

∂l_mi(θ)/∂φ = −{ (1/(α ξ_i1)) sinh( (y_i − X_iᵀβ − φ)/2 ) − (ξ_i2/α) cosh( (y_i − X_iᵀβ − φ)/2 ) } = 0.   (A.2)

Combining the above two equations, we get

∂l_mi(θ)/∂β = ∂l_(i)(θ)/∂β = 0.

Since, for α ≤ 2, the MLE of β is unique when X = (X₁, X₂, …, X_n)ᵀ has rank p (Rieck, 1989), the ML estimates are unique for the CDM and the MSOM, so we have β̂_mi = β̂_(i).


By the expression for α̂² (see Section 1), we can obtain the estimates α̂²_(i) and α̂²_mi for the CDM and the MSOM, respectively:

α̂²_(i) = (4/(n − 1)) Σ_{j≠i} sinh²( (y_j − X_jᵀβ̂_(i))/2 ),

α̂²_mi = (4/n) Σ_{j≠i} sinh²( (y_j − X_jᵀβ̂_mi)/2 ) + (4/n) sinh²( (y_i − X_iᵀβ̂_mi − φ̂_mi)/2 )
       = (1 − 1/n) α̂²_(i) + (4/n) sinh²( (y_i − X_iᵀβ̂_mi − φ̂_mi)/2 ).

Substituting the expressions of ξ_i1 and ξ_i2 into (A.2), we get either

sinh( (y_i − X_iᵀβ̂_mi − φ̂_mi)/2 ) = 0   or   sinh²( (y_i − X_iᵀβ̂_mi − φ̂_mi)/2 ) = α̂²_mi/4 − 1,

which results in α̂²_mi = (1 − 1/n) α̂²_(i) ≈ α̂²_(i), or α̂²_mi = α̂²_(i) − 4/(n − 1) ≈ α̂²_(i) for large n. This completes the proof of Theorem 2. □

Proof of Theorem 3. Let θ₁ = (βᵀ, α)ᵀ and θ₂ = φ. To obtain the score test statistic for the hypothesis H0: θ₂ = φ = 0, we need the first two derivatives of l_mi(θ) under H0. For the parameter θ₁ = (βᵀ, α)ᵀ, the related derivatives have been given in (A.1). The derivatives of l_mi(θ) with respect to θ₂ = φ under H0 are, respectively, given by

∂l_mi(θ)/∂φ |_θ̂ = ( −ξ_i2/(2ξ_i1) + ξ_i1 ξ_i2/2 )|_θ̂ = r̂_i,

∂²l_mi(θ)/∂φ² |_θ̂ = [ −(1/4)(ξ_i2/ξ_i1)² + 1/4 − (1/4)(ξ²_i1 + ξ²_i2) ]|_θ̂ = û_i,

∂²l_mi(θ)/∂φ∂βᵀ |_θ̂ = [ −(1/4)(ξ_i2/ξ_i1)² + 1/4 − (1/4)(ξ²_i1 + ξ²_i2) ]|_θ̂ X_iᵀ = û_i X_iᵀ,

∂²l_mi(θ)/∂φ∂α |_θ̂ = ( −(1/α) ξ_i1 ξ_i2 )|_θ̂ = d̂_i.

Then the observed information matrix −l̈_mi(θ) evaluated at θ̂ can be obtained from

l̈_mi(θ̂) = [ ∂²l_mi/∂β∂βᵀ   ∂²l_mi/∂β∂α   ∂²l_mi/∂β∂φ
             ∂²l_mi/∂α∂βᵀ   ∂²l_mi/∂α²    ∂²l_mi/∂α∂φ
             ∂²l_mi/∂φ∂βᵀ   ∂²l_mi/∂φ∂α   ∂²l_mi/∂φ²  ]|_θ̂

          = [ XᵀVX       Xᵀh    û_i X_i
              hᵀX        ĉ      d̂_i
              û_i X_iᵀ   d̂_i    û_i    ].


Now let −l̈(θ̂) and −l̈⁻¹(θ̂) be partitioned according to the parameters θ₁ and θ₂ as

−l̈(θ̂) = [ A₁₁  A₁₂
           A₂₁  A₂₂ ],   −l̈⁻¹(θ̂) = [ A¹¹  A¹²
                                      A²¹  A²² ].   (A.3)

Then the score statistic for the hypothesis H0: φ = 0 is (Cox and Hinkley, 1974)

SC_i = { (∂l_mi(θ)/∂φ)ᵀ A²² (∂l_mi(θ)/∂φ) }|_θ̂.   (A.4)

Substituting all the above results into (A.4), we obtain Theorem 3. □

Proof of Theorem 4. The likelihood ratio statistic is given by

LR = 2{ l(θ̂₁) − l(θ̂) }
   = 2{ ( Σ_{i=1}^n log ξ_i1 − (1/2) Σ_{i=1}^n ξ²_i2 )|_θ̂₁ − ( Σ_{i=1}^n log ξ_i1 − (1/2) Σ_{i=1}^n ξ²_i2 )|_θ̂ }.

According to models (1) and (21), we have

α̂² = (4/n) Σ_{i=1}^n sinh²( (y_i − X_iᵀβ̂)/2 ),

α̂₁² = (4/n) Σ_{i=1}^n (1/m̂²_i) sinh²( (y_i − X_iᵀβ̂₁)/2 ).

So, combining the above expressions, we get

LR = 2{ n log(α̂/α̂₁) − Σ_{i=1}^n log m̂_i + Σ_{i=1}^n [ log cosh((y_i − X_iᵀβ̂₁)/2) − log cosh((y_i − X_iᵀβ̂)/2) ] }
   = n log( α̂² m̂_g / α̂₁² ).  □

Proof of Theorem 5. Let θ₁ = (βᵀ, α)ᵀ and θ₂ = γ. To obtain the score test statistic for the hypothesis H0: γ = γ₀, we need the first two derivatives of l(θ) under H0. For the parameter θ₁ = (βᵀ, α)ᵀ, the related derivatives have been given in (A.1). The derivatives of l(θ) with respect to θ₂ = γ under H0 are, respectively, given by

∂l(θ)/∂γ |_θ̂ = −ṁᵀê,

∂²l(θ)/∂γ∂γᵀ |_θ̂ = ( ṁᵀṁ − 1ᵀm̈ − 3 ṁᵀdiag(τ)ṁ + τᵀm̈ )|_θ̂ = ṁᵀĤṁ − êᵀm̈,

∂²l(θ)/∂γ∂βᵀ |_θ̂ = ṁᵀQ̂X,

∂²l(θ)/∂γ∂α |_θ̂ = −(2/α̂) ṁᵀτ̂.


Then the observed information matrix −l̈(θ) evaluated at θ̂ can be obtained from

l̈(θ̂) = [ ∂²l/∂β∂βᵀ   ∂²l/∂β∂α   ∂²l/∂β∂γᵀ
          ∂²l/∂α∂βᵀ   ∂²l/∂α²    ∂²l/∂α∂γᵀ
          ∂²l/∂γ∂βᵀ   ∂²l/∂γ∂α   ∂²l/∂γ∂γᵀ ]|_θ̂

       = [ XᵀVX     Xᵀh            XᵀQ̂ṁ
           hᵀX      ĉ              −(2/α̂) τ̂ᵀṁ
           ṁᵀQ̂X     −(2/α̂) ṁᵀτ̂     ṁᵀĤṁ − êᵀm̈ ].

Now let −l̈(θ̂) and −l̈⁻¹(θ̂) be partitioned according to the parameters θ₁ and θ₂ as given in (A.3); then the score statistic for the hypothesis γ = γ₀ is

SC = { (∂l(θ)/∂γ)ᵀ A²² (∂l(θ)/∂γ) }|_θ̂ = { êᵀṁ [ êᵀm̈ − ṁᵀ(Ĥ − P̂)ṁ ]⁻¹ ṁᵀê }.  □

References

Achar, J.A., 1993. Inference for Birnbaum–Saunders fatigue life model using Bayesian method. Comput. Statist. Data Anal. 15, 367–380.
Birnbaum, Z.W., Saunders, S.C., 1969a. A new family of life distribution. J. Appl. Probab. 6, 319–327.
Birnbaum, Z.W., Saunders, S.C., 1969b. Estimation for a family of life distributions with application to fatigue. J. Appl. Probab. 6, 328–347.
Chen, C.F., 1983. Score test for regression models. J. Amer. Statist. Assoc. 78, 158–161.
Christensen, R., Pearson, L., Johnson, W., 1992. Case-deletion diagnostics for mixed models. Technometrics 34, 38–45.
Cook, R.D., 1977. Detection of influential observations in linear regression. Technometrics 19, 15–18.
Cook, R.D., 1986. Assessment of local influence. J. Roy. Statist. Soc. B 48, 133–169.
Cook, R.D., Weisberg, S., 1982. Residuals and Influence in Regression. Chapman & Hall, New York.
Cook, R.D., Weisberg, S., 1983. Diagnostics for heteroscedasticity in regression. Biometrika 70 (1), 1–10.
Cox, D.R., Hinkley, D.V., 1974. Theoretical Statistics. Chapman & Hall, London.
Davison, A.C., Tsai, C.L., 1992. Regression model diagnostics. Internat. Statist. Rev. 60, 337–355.
Desmond, A., 1985. Stochastic models of failure in random environments. Canad. J. Statist. 13, 171–183.
Fung, W.K., Zhu, Z.Y., Wei, B.C., He, X., 2002. Influence diagnostics and outlier tests for semi-parametric mixed models. J. Roy. Statist. Soc. B 64, 565–579.
Galea, M., Leiva-Sanchez, V., Paula, G.A., 2004. Influence diagnostics in log-Birnbaum–Saunders regression models. J. Appl. Statist. 31, 1049–1064.
Galea, M., Paula, G.A., Cysneiros, F.J., 2005. On diagnostics in symmetrical nonlinear models. Statist. Probab. Lett. 73, 459–467.
Ng, H.K.T., Kundu, D., Balakrishnan, N., 2003. Modified moment estimation for the two-parameter Birnbaum–Saunders distribution. Comput. Statist. Data Anal. 43, 283–298.
Rieck, J.R., 1989. Statistical analysis for the Birnbaum–Saunders fatigue life distribution. Ph.D. Thesis, Clemson University, Department of Mathematical Sciences, unpublished.
Rieck, J.R., Nedelman, J.R., 1991. A log-linear model for the Birnbaum–Saunders distribution. Technometrics 33, 51–60.
Smyth, G.K., 1989. Generalized linear models with varying dispersion. J. Roy. Statist. Soc. B 51, 47–60.
Tang, N.S., Wei, B.C., Wang, X.R., 2000. Influence diagnostics in nonlinear reproductive dispersion models. Statist. Probab. Lett. 26, 59–68.
Tsionas, E.G., 2001. Bayesian inference in Birnbaum–Saunders regression. Comm. Statist. Theory Methods 30 (1), 179–193.
Wang, Z.H., Desmond, A.F., Lu, X.W., 2006. Modified censored moment estimation for the two-parameter Birnbaum–Saunders distribution. Comput. Statist. Data Anal. 4, 1033–1051.
Wei, B.C., 1998. Exponential Family Nonlinear Models. Springer, Singapore.