

Hindawi Publishing Corporation
Journal of Probability and Statistics
Volume 2010, Article ID 596839, 17 pages
doi:10.1155/2010/596839

Research Article

Estimating the Conditional Tail Expectation in the Case of Heavy-Tailed Losses

Abdelhakim Necir,1 Abdelaziz Rassoul,2 and Ričardas Zitikis3

1 Laboratory of Applied Mathematics, Mohamed Khider University of Biskra, 07000, Algeria
2 Ecole Nationale Superieure d'Hydraulique, Guerouaou, BP 31, Blida, 09000, Algeria
3 Department of Statistical and Actuarial Sciences, University of Western Ontario, London, ON, Canada N6A 5B7

Correspondence should be addressed to Ričardas Zitikis, [email protected]

Received 21 October 2009; Accepted 20 January 2010

Academic Editor: Edward Furman

Copyright © 2010 Abdelhakim Necir et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The conditional tail expectation (CTE) is an important actuarial risk measure and a useful tool in financial risk assessment. Under the classical assumption that the second moment of the loss variable is finite, the asymptotic normality of the nonparametric CTE estimator has already been established in the literature. The noted result, however, is not applicable when the loss variable follows any distribution with infinite second moment, which is a frequent situation in practice. With the help of extreme-value methodology, in this paper we offer a solution to the problem by suggesting a new CTE estimator, which is applicable when losses have finite means but infinite variances.

1. Introduction

One of the most important actuarial risk measures is the conditional tail expectation (CTE) (see, e.g., [1]), which is the average amount of loss given that the loss exceeds a specified quantile. Hence, the CTE provides a measure of the capital needed due to the exposure to the loss, and thus serves as a risk measure. Not surprisingly, therefore, the CTE continues to receive increased attention in the actuarial and financial literature, where we also find its numerous extensions and generalizations (see, e.g., [2–8], and references therein). We next present basic notation and definitions.

Let X be a loss random variable with cumulative distribution function (cdf) F. Usually, the cdf F is assumed to be continuous and defined on the entire real line, with negative loss interpreted as gain. We also assume the continuity of F throughout the present paper. The CTE of the risk or loss X is then defined, for every t ∈ (0, 1),


by

\[
\mathrm{CTE}_F(t) = E[X \mid X > Q(t)], \tag{1.1}
\]

where Q(t) = inf{x : F(x) ≥ t} is the quantile function corresponding to the cdf F. Since the cdf F is continuous, we easily check that

\[
\mathrm{CTE}_F(t) = \frac{1}{1 - t} \int_t^1 Q(s)\,ds. \tag{1.2}
\]

Naturally, the CTE is unknown since the cdf F is unknown. Hence, it is desirable to establish statistical inferential results such as confidence intervals for CTEF(t) with specified confidence levels and margins of error. We shall next show how to accomplish this task, initially assuming the classical moment assumption E[X²] < ∞. Namely, suppose that we have independent random variables X1, X2, ..., each with the cdf F, and let X1:n < ··· < Xn:n denote the order statistics of X1, ..., Xn. It is natural to define an empirical estimator of CTEF(t) by the formula

\[
\mathrm{CTE}_n(t) = \frac{1}{1 - t} \int_t^1 Q_n(s)\,ds, \tag{1.3}
\]

where Qn(s) is the empirical quantile function, which is equal to the ith order statistic Xi:n for all s ∈ ((i − 1)/n, i/n] and for all i = 1, ..., n. The asymptotic behavior of the estimator CTEn(t) has been studied by Brazauskas et al. [9], and we next formulate their most relevant result for our paper as a theorem.
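In computational terms, the estimator (1.3) is just the average of the empirical quantile function over (t, 1]; a minimal sketch (the function name and 0-based indexing are ours, not the paper's):

```python
def empirical_cte(sample, t):
    """Empirical CTE_n(t) = (1/(1-t)) * integral of Q_n(s) over (t, 1],
    where Q_n equals the order statistic x[i] (0-based) on (i/n, (i+1)/n]."""
    x = sorted(sample)
    n = len(x)
    total = 0.0
    for i, xi in enumerate(x):
        # overlap of (i/n, (i+1)/n] with (t, 1]
        lo, hi = max(i / n, t), (i + 1) / n
        if hi > lo:
            total += (hi - lo) * xi
    return total / (1.0 - t)
```

For t = 0 this reduces to the sample mean.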

Theorem 1.1. Assume that E[X²] < ∞. Then for every t ∈ (0, 1), we have the asymptotic normality statement

\[
\sqrt{n}\,\bigl(\mathrm{CTE}_n(t) - \mathrm{CTE}_F(t)\bigr)(1 - t) \xrightarrow{d} \mathcal{N}\bigl(0, \sigma^2(t)\bigr), \tag{1.4}
\]

when n → ∞, where the asymptotic variance σ²(t) is given by the formula

\[
\sigma^2(t) = \int_t^1 \int_t^1 \bigl(\min(x, y) - xy\bigr)\,dQ(x)\,dQ(y). \tag{1.5}
\]

The assumption E[X²] < ∞ is, however, quite restrictive as the following example shows. Suppose that F is the Pareto cdf with index γ > 0, that is, 1 − F(x) = x^{−1/γ} for all x ≥ 1. Let us focus on the case γ < 1, because when γ ≥ 1, then CTEF(t) = +∞ for every t ∈ (0, 1). Theorem 1.1 covers only the values γ ∈ (0, 1/2) in view of the assumption E[X²] < ∞. When γ ∈ [1/2, 1), we have E[X²] = ∞ but, nevertheless, CTEF(t) is well defined and finite since E[X] < ∞. Analogous remarks hold for other distributions with Pareto-like tails, and we shall indeed work with such general distributions in this paper.
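The moment claims for this Pareto cdf can be verified directly (our own one-line check): for r > 0,

```latex
E[X^r] \;=\; \int_1^\infty x^r \,\frac{1}{\gamma}\, x^{-1/\gamma - 1}\,dx
\;=\;
\begin{cases}
\dfrac{1}{1 - r\gamma}, & r\gamma < 1,\\[1ex]
+\infty, & r\gamma \ge 1,
\end{cases}
```

so that E[X] is finite precisely when γ < 1, while E[X²] is finite only when γ < 1/2.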


Namely, recall that the cdf F is regularly varying at infinity with index (−1/γ) < 0 if

\[
\lim_{t \to \infty} \frac{1 - F(tx)}{1 - F(t)} = x^{-1/\gamma} \tag{1.6}
\]

for every x > 0. This class includes a number of popular distributions such as Pareto, generalized Pareto, Burr, Fréchet, Student, and so forth, which are known to be appropriate models for fitting large insurance claims, fluctuations of prices, log-returns, and so forth (see, e.g., [10]). In the remainder of this paper, therefore, we restrict ourselves to this class of distributions. For more information on the topic and, generally, on extreme value models and their manifold applications, we refer to the monographs by Beirlant et al. [11], Castillo et al. [12], de Haan and Ferreira [13], and Resnick [14].

The rest of the paper is organized as follows. In Section 2 we construct an alternative, called "new", CTE estimator by utilizing an extreme value approach. In Section 3 we establish the asymptotic normality of the new CTE estimator and illustrate its performance with a small simulation study. The main result, which is Theorem 3.1 stated in Section 3, is proved in Section 4.

2. Construction of a New CTE Estimator

We have already noted that the "old" estimator CTEn(t) does not yield the asymptotic normality (in the classical sense) beyond the condition E[X²] < ∞. Indeed, this follows by setting t = 0, in which case CTEn(t) becomes the sample mean of X1, ..., Xn, and thus the asymptotic normality of CTEn(0) is equivalent to the classical Central Limit Theorem (CLT). Similar arguments show that the finite second moment is necessary for having the asymptotic normality (in the classical sense) of CTEn(t) at any fixed "level" t ∈ (0, 1). Indeed, note that the asymptotic variance σ²(t) in Theorem 1.1 is finite only if E[X²] < ∞.

For this reason, we next construct an alternative CTE estimator, which takes into account the different asymptotic properties of moderate and high quantiles in the case of heavy-tailed distributions. Hence, from now on we assume that γ ∈ (1/2, 1). Before delving into the construction details, we first formulate the new CTE estimator:

\[
\mathrm{CTE}_n(t) = \frac{1}{1 - t} \int_t^{1 - k/n} Q_n(s)\,ds + \frac{k\,X_{n-k:n}}{n(1 - t)(1 - \gamma_n)}, \tag{2.1}
\]

where we use the simple yet useful and powerful estimator of Hill [15],

\[
\gamma_n = \frac{1}{k} \sum_{i=1}^{k} \bigl(\log X_{n-i+1:n} - \log X_{n-k:n}\bigr), \tag{2.2}
\]

of the tail index γ ∈ (1/2, 1). The integers k = kn ∈ {1, ..., n} are such that k → ∞ and k/n → 0 when n → ∞, and we note at the outset that their choice presents a challenging task. In Figures 1 and 2, we illustrate the performance of the new estimator CTEn(t) with respect to the sample size n ≥ 1, with the integers k = kn chosen according to the method proposed by Cheng and Peng [16]. Note that when t increases through the values 0.25, 0.50, 0.75, and 0.90


[Figure 1 appears here: panels (a)–(d), each plotting two curves, "New estimator" and "Sample estimator", against n ∈ [0, 20000].]

Figure 1: Values of the CTE estimator CTEn(t) (vertical axis) versus sample sizes n (horizontal axis) evaluated at the levels t = 0.25, t = 0.50, t = 0.75, and t = 0.90 (panels (a)–(d), resp.) in the Pareto case with the tail index γ = 2/3.

(panels (a)–(d), resp.), the vertical axes of the panels also increase, which reflects the fact that the larger the t gets, the more erratic the "new" and "old" estimators become. Note also that the empirical (i.e., "old") estimator underestimates the theoretical CTEF(t), which is a well-known phenomenon (see [17]).
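Hill's estimator (2.2) is straightforward to compute from the k upper order statistics; a minimal sketch (function name ours, not the paper's):

```python
import math

def hill_estimator(sample, k):
    """Hill's estimator (2.2): mean of the log-excesses of the k largest
    observations over the next-largest order statistic X_{n-k:n}."""
    x = sorted(sample)
    if not 0 < k < len(x):
        raise ValueError("need 0 < k < n")
    top = x[-k:]  # X_{n-k+1:n}, ..., X_{n:n}
    return sum(math.log(v) for v in top) / k - math.log(x[-k - 1])
```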

We have based the construction of CTEn(t) on the recognition that one should estimate moderate and high quantiles differently when the underlying distribution is heavy-tailed. For this, we first recall that the high quantile qs is, by definition, equal to Q(1 − s) for sufficiently small s. For an estimation theory of high quantiles in the case of heavy-tailed distributions we refer to, for example, Weissman [18], Dekkers and de Haan [19], Matthys and Beirlant [20], Gomes et al. [21], and references therein. We shall use the Weissman estimator

\[
\hat{q}_s = \left(\frac{k}{n}\right)^{\gamma_n} X_{n-k:n}\, s^{-\gamma_n}, \qquad 0 < s < \frac{k}{n}, \tag{2.3}
\]

of the high quantile qs. Then we write CTEF(t) as the sum CTE1,n(t) + CTE2,n(t), with the two summands defined, together with their respective empirical estimators, as follows:

\[
\mathrm{CTE}_{1,n}(t) = \frac{1}{1 - t} \int_t^{1 - k/n} Q(s)\,ds \approx \frac{1}{1 - t} \int_t^{1 - k/n} Q_n(s)\,ds =: \widehat{\mathrm{CTE}}_{1,n}(t),
\]
\[
\mathrm{CTE}_{2,n}(t) = \frac{1}{1 - t} \int_{1 - k/n}^{1} Q(s)\,ds \approx \frac{1}{1 - t} \int_{1 - k/n}^{1} \hat{q}_{1-s}\,ds =: \widehat{\mathrm{CTE}}_{2,n}(t). \tag{2.4}
\]


[Figure 2 appears here: panels (a)–(d), each plotting two curves, "New estimator" and "Sample estimator", against n ∈ [0, 20000].]

Figure 2: Values of the CTE estimator CTEn(t) (vertical axis) versus sample sizes n (horizontal axis) evaluated at the levels t = 0.25, t = 0.50, t = 0.75, and t = 0.90 (panels (a)–(d), resp.) in the Pareto case with the tail index γ = 3/4.

Simple integration gives the formula

\[
\widehat{\mathrm{CTE}}_{2,n}(t) = \frac{k\,X_{n-k:n}}{n(1 - \gamma_n)(1 - t)}. \tag{2.5}
\]

Consequently, the sum of the two empirical summands is an estimator of CTEF(t), and this is exactly the estimator CTEn(t) introduced above. We shall investigate the asymptotic normality of the new estimator in the next section, accompanied by an illustrative simulation study.
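Putting the pieces together, the new estimator (2.1) can be sketched as follows (the function and variable names are ours; only the formula is the paper's):

```python
import math

def cte_new(sample, t, k):
    """New CTE estimator (2.1): empirical quantile integral over (t, 1-k/n]
    plus the Weissman-type tail term k*X_{n-k:n} / (n*(1-t)*(1-gamma_n))."""
    x = sorted(sample)
    n = len(x)
    # Hill's estimator (2.2) of the tail index
    gamma_n = sum(math.log(v) for v in x[-k:]) / k - math.log(x[-k - 1])
    # integrate the empirical quantile function over (t, 1 - k/n]
    body = 0.0
    for i, xi in enumerate(x):
        lo, hi = max(i / n, t), min((i + 1) / n, 1 - k / n)
        if hi > lo:
            body += (hi - lo) * xi
    tail = k * x[-k - 1] / (n * (1 - t) * (1 - gamma_n))
    return body / (1 - t) + tail
```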

3. Main Theorem and Its Practical Implementation

We start this section by noting that Hill's estimator γn has been thoroughly studied, improved, and generalized in the literature. For example, weak consistency of γn has been established by Mason [22] assuming only that the underlying distribution is regularly varying at infinity. Asymptotic normality of γn has been investigated under various conditions by a number of researchers, including Csörgő and Mason [23], Beirlant and Teugels [24], and Dekkers et al. [25]; see also references therein.

The main theoretical result of this paper, which is Theorem 3.1 below, establishes the asymptotic normality of the new CTE estimator CTEn(t). To formulate the theorem, we need to introduce an assumption that ensures the asymptotic normality of Hill's estimator γn. Namely, the cdf F satisfies the generalized second-order regular variation condition with


second-order parameter ρ ≤ 0 (see [26, 27]) if there exists a function a(t) which does not change its sign in a neighbourhood of infinity and is such that, for every x > 0,

\[
\lim_{t \to \infty} \frac{1}{a(t)} \left( \frac{1 - F(tx)}{1 - F(t)} - x^{-1/\gamma} \right) = x^{-1/\gamma}\, \frac{x^{\rho/\gamma} - 1}{\rho/\gamma}. \tag{3.1}
\]

When ρ = 0, then the ratio on the right-hand side of (3.1) is interpreted as log x. For statistical inference concerning the second-order parameter ρ, we refer, for example, to Peng and Qi [28], Gomes et al. [21], and Gomes and Pestana [29]. Furthermore, in the formulation of Theorem 3.1, we shall also use the function A(z) = γ²a(U(z)), where U(z) = Q(1 − 1/z).

Theorem 3.1. Assume that the cdf F satisfies condition (3.1) with γ ∈ (1/2, 1). Then for any sequence of integers k = kn → ∞ such that k/n → 0 and k^{1/2}A(n/k) → 0 when n → ∞, we have that for any fixed t ∈ (0, 1),

\[
\frac{\sqrt{n}\,\bigl(\mathrm{CTE}_n(t) - \mathrm{CTE}_F(t)\bigr)(1 - t)}{(k/n)^{1/2} X_{n-k:n}} \xrightarrow{d} \mathcal{N}\bigl(0, \sigma_\gamma^2\bigr), \tag{3.2}
\]

where the asymptotic variance σ²_γ is given by the formula

\[
\sigma_\gamma^2 = \frac{\gamma^4}{(1 - \gamma)^4 (2\gamma - 1)}. \tag{3.3}
\]

The asymptotic variance σ²_γ does not depend on t, unlike the variance in Theorem 1.1. This is not surprising because the heaviness of the right-most tail of F makes the asymptotic behaviour of ∫_t^1 (Qn(s) − Q(s)) ds "heavier" than the classical CLT-type behaviour of ∫_0^t (Qn(s) − Q(s)) ds, for any fixed t. This in turn implies that, under the conditions of Theorem 3.1, statement (3.2) is equivalent to the same statement in the case t = 0. The latter statement concerns estimating the mean E[X] of a heavy-tailed distribution. Therefore, we can view Theorem 3.1 as a consequence of Peng [30], and at the same time we can view the results of Peng [30] as a consequence of Theorem 3.1 by setting t = 0 in it. Despite this equivalence, in Section 4 we give a proof of Theorem 3.1 for the sake of completeness. Our proof, however, is crucially based on a powerful technique called the Vervaat process (see [31–33] for details and references).

To discuss the practical implementation of Theorem 3.1, we first fix a significance level ς ∈ (0, 1) and use the classical notation zς/2 for the (1 − ς/2)-quantile of the standard normal distribution N(0, 1). Given a realization of the random variables X1, ..., Xn (e.g., claim amounts), which follow a cdf F satisfying the conditions of Theorem 3.1, we construct a level 1 − ς confidence interval for CTEF(t) as follows. First, we choose an appropriate number k of extreme values. Since Hill's estimator has in general a substantial variance for small k and a considerable bias for large k, we search for a k that balances the two shortcomings, which is indeed a well-known hurdle when estimating the tail index. To resolve this issue, several procedures have been suggested in the literature, and we refer to, for example, Dekkers and de Haan [34], Drees and Kaufmann [35], Danielsson et al. [36], Cheng and Peng [16], Neves and Fraga Alves [37], Gomes et al. [38], and references therein.
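As a simpler illustration of the variance–bias trade-off just described, one can inspect the classical "Hill plot", that is, the Hill estimates over the whole range of k, and look for a stable stretch. This sketch is only that eyeball heuristic, not the data-driven method of Cheng and Peng [16] that the paper actually uses:

```python
import math

def hill_plot(sample):
    """Hill estimates (2.2) for every k = 1, ..., n-1, as (k, estimate) pairs;
    a stable region of the resulting curve suggests a reasonable k."""
    x = sorted(sample)
    n = len(x)
    logs = [math.log(v) for v in x]
    out, running = [], 0.0
    for k in range(1, n):
        running += logs[n - k]            # add log X_{n-k+1:n}
        out.append((k, running / k - logs[n - 1 - k]))
    return out
```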


Table 1: Point estimates CTEn(t) and 95% confidence intervals for CTEF(t) when γ = 2/3.

t = 0.75, CTEF(t) = 7.005

n      k*    CTEn(t)   error   rmse    lower    upper    cover   length
1000    54   6.876     0.045   0.303   6.356    7.397    0.839   1.041
2000   100   6.831     0.025   0.231   6.463    7.199    0.882   0.736
5000   219   7.119     0.016   0.194   6.881    7.357    0.895   0.476

t = 0.90, CTEF(t) = 12.533

n      k*    CTEn(t)   error   rmse    lower    upper    cover   length
1000    54   12.753    0.017   0.534   12.241   13.269   0.847   1.028
2000   100   12.487    0.003   0.294   12.137   12.838   0.841   0.701
5000   219   12.461    0.005   0.236   12.246   12.676   0.887   0.430

Table 2: Point estimates CTEn(t) and 95% confidence intervals for CTEF(t) when γ = 3/4.

t = 0.75, CTEF(t) = 9.719

n      k*    CTEn(t)   error   rmse    lower    upper    cover   length
1000    51   9.543     0.018   0.582   8.589    9.543    0.854   0.954
2000   104   9.808     0.009   0.466   9.150    10.466   0.888   1.316
5000   222   9.789     0.007   0.410   9.363    10.215   0.915   0.852

t = 0.90, CTEF(t) = 18.494

n      k*    CTEn(t)   error   rmse    lower    upper    cover   length
1000    51   18.199    0.015   0.989   17.437   18.960   0.874   1.523
2000   104   18.696    0.011   0.858   18.052   19.340   0.895   1.288
5000   222   18.541    0.002   0.798   18.092   18.990   0.925   0.898

In our current study, we employ the method of Cheng and Peng [16] for an appropriate value k* of the "parameter" k. Having computed Hill's estimator and consequently determined X_{n−k*:n}, we then compute the corresponding values of CTEn(t) and σ²_{γn}, and denote them by CTE*n(t) and σ²*_{γn}, respectively. Finally, using Theorem 3.1, we arrive at the following (1 − ς)-confidence interval for CTEF(t):

\[
\mathrm{CTE}^{*}_n(t) \pm z_{\varsigma/2}\, \frac{(k^*/n)^{1/2}\, X_{n-k^*:n}\, \sqrt{\sigma^{2*}_{\gamma_n}}}{(1 - t)\sqrt{n}}. \tag{3.4}
\]

To illustrate the performance of this confidence interval, we have carried out a small-scale simulation study based on the Pareto cdf F(x) = 1 − x^{−1/γ}, x ≥ 1, with the tail index γ set to 2/3 and 3/4, and the level t set to 0.75 and 0.90. We have generated 200 independent replicates of three samples of sizes n = 1000, 2000, and 5000. For every simulated sample, we have obtained estimates CTEn(t). Then we have calculated the arithmetic averages over the values from the 200 repetitions, with the absolute error (error) and root mean squared error (rmse) of the new estimator CTEn(t) reported in Table 1 (γ = 2/3) and Table 2 (γ = 3/4). In the tables, we have also reported the 95%-confidence intervals (3.4) with their lower and upper bounds, coverage probabilities, and lengths.
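A single replication of such an experiment, with the interval computed as in (3.4), might look as follows (the defaults loosely mimic the Table 2 setting; the seed, names, and the rounded quantile z = 1.96 are our illustrative choices, not the paper's code):

```python
import math
import random

def pareto_cte_interval(gamma=0.75, t=0.75, n=5000, k=222, seed=12345):
    """Simulate a Pareto(1/gamma) sample and return a 95% interval as in (3.4)."""
    rng = random.Random(seed)
    # F(x) = 1 - x^(-1/gamma) via inverse transform; 1-U avoids U = 0
    x = sorted((1.0 - rng.random()) ** (-gamma) for _ in range(n))
    gamma_n = sum(math.log(v) for v in x[-k:]) / k - math.log(x[-k - 1])  # Hill (2.2)
    body = 0.0
    for i, xi in enumerate(x):  # integrate Q_n over (t, 1 - k/n]
        lo, hi = max(i / n, t), min((i + 1) / n, 1 - k / n)
        if hi > lo:
            body += (hi - lo) * xi
    cte = body / (1 - t) + k * x[-k - 1] / (n * (1 - t) * (1 - gamma_n))
    sigma = math.sqrt(gamma_n ** 4 / ((1 - gamma_n) ** 4 * (2 * gamma_n - 1)))  # (3.3)
    half = 1.96 * math.sqrt(k / n) * x[-k - 1] * sigma / ((1 - t) * math.sqrt(n))
    return cte - half, cte + half
```

Repeating this over many seeds and counting how often the interval covers the true CTE reproduces the kind of coverage figures reported in the tables.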

We note emphatically that the above coverage probabilities and lengths of confidence intervals can be improved by employing more precise but, naturally, considerably more


complex estimators of the tail index. Such estimators are described in the monographs by Beirlant et al. [11], Castillo et al. [12], de Haan and Ferreira [13], and Resnick [14]. Since the publication of these monographs, numerous journal articles have appeared on the topic. Our aim in this paper, however, is to present a simple yet useful result that highlights how much Actuarial Science and developments in Mathematical Statistics, Probability, and Stochastic Processes are interrelated, and thus benefit from each other.

4. Proof of Theorem 3.1

We start the proof of Theorem 3.1 with the decomposition

\[
\bigl(\mathrm{CTE}_n(t) - \mathrm{CTE}_F(t)\bigr)(1 - t) = A_{n,1}(t) + A_{n,2}, \tag{4.1}
\]

where

\[
A_{n,1}(t) = \int_t^{1 - k/n} \bigl(Q_n(s) - Q(s)\bigr)\,ds, \qquad
A_{n,2} = \frac{k/n}{1 - \gamma_n}\, X_{n-k:n} - \int_{1 - k/n}^{1} Q(s)\,ds. \tag{4.2}
\]

(4.2)

We shall show below that there are Brownian bridges Bn such that

\[
\frac{\sqrt{n}\, A_{n,1}(t)}{(k/n)^{1/2}\, Q(1 - k/n)} = -\frac{\int_0^{1 - k/n} B_n(s)\,dQ(s)}{(k/n)^{1/2}\, Q(1 - k/n)} + o_P(1), \tag{4.3}
\]

\[
\frac{\sqrt{n}\, A_{n,2}}{(k/n)^{1/2}\, Q(1 - k/n)} = \frac{\gamma^2}{(1 - \gamma)^2} \sqrt{\frac{n}{k}}\, B_n\!\left(1 - \frac{k}{n}\right) - \frac{\gamma}{(1 - \gamma)^2} \sqrt{\frac{n}{k}} \int_{1 - k/n}^{1} \frac{B_n(s)}{1 - s}\,ds + o_P(1). \tag{4.4}
\]

Assuming for the time being that statements (4.3) and (4.4) hold, we next complete the proof of Theorem 3.1. To simplify the presentation, we use the following notation:

\[
W_{1,n} = -\frac{\int_0^{1 - k/n} B_n(s)\,dQ(s)}{(k/n)^{1/2}\, Q(1 - k/n)}, \qquad
W_{2,n} = \frac{\gamma^2}{(1 - \gamma)^2} \sqrt{\frac{n}{k}}\, B_n\!\left(1 - \frac{k}{n}\right),
\]
\[
W_{3,n} = -\frac{\gamma}{(1 - \gamma)^2} \sqrt{\frac{n}{k}} \int_{1 - k/n}^{1} \frac{B_n(s)}{1 - s}\,ds. \tag{4.5}
\]


Hence, we have the asymptotic representation

\[
\frac{\sqrt{n}\,\bigl(\mathrm{CTE}_n(t) - \mathrm{CTE}_F(t)\bigr)(1 - t)}{(k/n)^{1/2}\, Q(1 - k/n)} = W_{1,n} + W_{2,n} + W_{3,n} + o_P(1). \tag{4.6}
\]

The sum W1,n + W2,n + W3,n is a centered Gaussian random variable. To calculate its asymptotic variance, we establish the following limits:

\[
E\bigl[W_{1,n}^2\bigr] \longrightarrow \frac{2\gamma}{2\gamma - 1}, \qquad
E\bigl[W_{2,n}^2\bigr] \longrightarrow \frac{\gamma^4}{(1 - \gamma)^4}, \qquad
E\bigl[W_{3,n}^2\bigr] \longrightarrow \frac{2\gamma^2}{(1 - \gamma)^4},
\]
\[
2E[W_{1,n} W_{2,n}] \longrightarrow \frac{-2\gamma^2}{(1 - \gamma)^2}, \qquad
2E[W_{1,n} W_{3,n}] \longrightarrow \frac{2\gamma}{(1 - \gamma)^2}, \qquad
2E[W_{2,n} W_{3,n}] \longrightarrow \frac{-2\gamma^3}{(1 - \gamma)^4}. \tag{4.7}
\]

Summing up the right-hand sides of the above six limits, we obtain σ²_γ, whose expression in terms of the parameter γ is given in Theorem 3.1. Finally, since X_{n−k:n}/Q(1 − k/n) converges in probability to 1 (see, e.g., the proof of the Corollary in [39]), the classical Slutsky's lemma completes the proof of Theorem 3.1. Of course, we are still left to verify statements (4.3) and (4.4), which make the contents of the following two subsections.
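For the reader who wishes to see the summation spelled out (our own check, not part of the original proof), the six limits can be combined over common denominators as

```latex
\frac{2\gamma}{2\gamma-1}
+ \underbrace{\frac{2\gamma - 2\gamma^2}{(1-\gamma)^2}}_{=\,2\gamma/(1-\gamma)}
+ \frac{\gamma^4 - 2\gamma^3 + 2\gamma^2}{(1-\gamma)^4}
= \frac{2\gamma^2}{(1-\gamma)(2\gamma-1)}
+ \frac{\gamma^2(\gamma^2 - 2\gamma + 2)}{(1-\gamma)^4},
```

and since γ⁴ = (2γ − 1)γ²(γ² − 2γ + 2) + 2γ²(1 − γ)³, the right-hand side reduces to γ⁴/((1 − γ)⁴(2γ − 1)), which is precisely σ²_γ of (3.3).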

4.1. Proof of Statement (4.3)

If Q were continuously differentiable, then statement (4.3) would follow easily from the proof of Theorem 2 in [39]. We do not assume differentiability of Q, and thus a new proof is required, which is crucially based on the Vervaat process (see [31–33] and references therein)

\[
V_n(t) = \int_0^{t} \bigl(Q_n(s) - Q(s)\bigr)\,ds + \int_{-\infty}^{Q(t)} \bigl(F_n(x) - F(x)\bigr)\,dx. \tag{4.8}
\]

Hence, for every t such that 0 < t < 1 − k/n, which is satisfied for all sufficiently large n since t is fixed, we have that

\[
A_{n,1}(t) = \int_0^{1 - k/n} \bigl(Q_n(s) - Q(s)\bigr)\,ds - \int_0^{t} \bigl(Q_n(s) - Q(s)\bigr)\,ds
\]
\[
= -\int_{Q(t)}^{Q(1 - k/n)} \bigl(F_n(x) - F(x)\bigr)\,dx + V_n\!\left(1 - \frac{k}{n}\right) - V_n(t). \tag{4.9}
\]

It is well known (see [31–33]) that Vn(t) is nonnegative and does not exceed −(Fn(Q(t)) − t)(Qn(t) − Q(t)). Since the cdf F is continuous by assumption, we therefore have that

\[
\sqrt{n}\, V_n(t) \le |e_n(t)|\,|Q_n(t) - Q(t)|, \tag{4.10}
\]


where en(t) is the uniform empirical process √n(Fn(Q(t)) − F(Q(t))), which for large n looks like the Brownian bridge Bn(t). Note also that, with the just introduced notation en, the integral on the right-hand side of (4.9) is equal to ∫_{Q(t)}^{Q(1−k/n)} en(F(x)) dx. Hence,

\[
\frac{\sqrt{n}\, A_{n,1}(t)}{(k/n)^{1/2}\, Q(1 - k/n)} = -\frac{\int_{Q(t)}^{Q(1 - k/n)} e_n(F(x))\,dx}{(k/n)^{1/2}\, Q(1 - k/n)}
+ O_P(1)\, \frac{|e_n(1 - k/n)|\,|Q_n(1 - k/n) - Q(1 - k/n)|}{(k/n)^{1/2}\, Q(1 - k/n)}
+ O_P(1)\, \frac{|e_n(t)|\,|Q_n(t) - Q(t)|}{(k/n)^{1/2}\, Q(1 - k/n)}. \tag{4.11}
\]

We shall next replace the empirical process en by an appropriate Brownian bridge Bn in the first integral on the right-hand side of (4.11) with an error term of magnitude oP(1), and we shall also show that the second and third summands on the right-hand side of (4.11) are of the order oP(1). The replacement of en by Bn can be accomplished using, for example, Corollary 2.1 on page 48 of Csörgő et al. [40], which states that, on an appropriately constructed probability space and for any 0 ≤ ν < 1/4, we have that

\[
\sup_{1/n \le s \le 1 - 1/n} \frac{|e_n(s) - B_n(s)|}{s^{1/2 - \nu}(1 - s)^{1/2 - \nu}} = O_P\bigl(n^{-\nu}\bigr). \tag{4.12}
\]

This result is applicable in the current situation since we can always place our original problem into the required probability space, because our main results are "in probability". Furthermore, since Q(t) ≤ x ≤ Q(1 − k/n), we have that t ≤ F(x) ≤ 1 − k/n. Hence, statement (4.12) implies that

\[
-\frac{\int_{Q(t)}^{Q(1 - k/n)} e_n(F(x))\,dx}{(k/n)^{1/2}\, Q(1 - k/n)} = -\frac{\int_{Q(t)}^{Q(1 - k/n)} B_n(F(x))\,dx}{(k/n)^{1/2}\, Q(1 - k/n)}
+ O_P(1)\, \frac{\int_{Q(t)}^{Q(1 - k/n)} (1 - F(x))^{1/2 - \nu}\,dx}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)}. \tag{4.13}
\]

Changing the variables of integration and using the property (k/n)^{1/2}Q(1 − k/n) → ∞ when n → ∞, we obtain that

\[
-\frac{\int_{Q(t)}^{Q(1 - k/n)} B_n(F(x))\,dx}{(k/n)^{1/2}\, Q(1 - k/n)} = -\frac{\int_0^{1 - k/n} B_n(s)\,dQ(s)}{(k/n)^{1/2}\, Q(1 - k/n)} + o_P(1). \tag{4.14}
\]

The main term on the right-hand side of (4.14) is W1,n. We shall next show that the right-most summand of (4.13) converges to 0 when n → ∞.


Changing the variable of integration and then integrating by parts, we obtain the bound

\[
\frac{\int_{Q(t)}^{Q(1 - k/n)} (1 - F(x))^{1/2 - \nu}\,dx}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)}
\le \frac{(1 - s)^{1/2 - \nu}\, Q(s)\big|_t^{1 - k/n}}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)}
+ O(1)\, \frac{\int_t^{1 - k/n} (1 - s)^{-1/2 - \nu}\, Q(s)\,ds}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)}. \tag{4.15}
\]

We want to show that the right-hand side of bound (4.15) converges to 0 when n → ∞. For this, we first note that

\[
\frac{(1 - s)^{1/2 - \nu}\, Q(s)\big|_t^{1 - k/n}}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)}
= \frac{1}{k^{\nu}} - \frac{(1 - t)^{1/2 - \nu}\, Q(t)}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)} \longrightarrow 0. \tag{4.16}
\]

Next, with the notation φ(u) = Q(1 − u)/u^{1/2+ν}, we have that

\[
\frac{\int_t^{1 - k/n} (1 - s)^{-1/2 - \nu}\, Q(s)\,ds}{n^{\nu}\,(k/n)^{1/2}\, Q(1 - k/n)}
= \frac{1}{k^{\nu}}\, \frac{\int_{k/n}^{1 - t} \varphi(s)\,ds}{(k/n)\,\varphi(k/n)} \longrightarrow 0 \tag{4.17}
\]

when n → ∞, where the convergence to 0 follows from Result 1 in the Appendix of Necir and Meraghni [39]. Taking statements (4.15)–(4.17) together, we have that the right-most summand of (4.13) converges to 0 when n → ∞.

Consequently, in order to complete the proof of statement (4.3), we are left to show that the second and third summands on the right-hand side of (4.11) are of the order oP(1). The third summand is of the order oP(1) because |en(t)||Qn(t) − Q(t)| = OP(1) and (k/n)^{1/2}Q(1 − k/n) → ∞. Hence, we are only left to show that the second summand on the right-hand side of equation (4.11) is of the order oP(1), for which we shall show that

\[
\frac{|e_n(1 - k/n)|}{(k/n)^{1/2}} \left| \frac{Q_n(1 - k/n)}{Q(1 - k/n)} - 1 \right| = o_P(1). \tag{4.18}
\]

To prove statement (4.18), we first note that

\[
\frac{|e_n(1 - k/n)|}{(k/n)^{1/2}} \le \frac{|e_n(1 - k/n) - B_n(1 - k/n)|}{(k/n)^{1/2}} + \frac{|B_n(1 - k/n)|}{(k/n)^{1/2}}. \tag{4.19}
\]

The first summand on the right-hand side of bound (4.19) is of the order OP(1) due to statement (4.12) with ν = 0. The second summand on the right-hand side of bound (4.19) is of the order OP(1) due to a statement on page 49 of Csörgő et al. [40] (see the displayed bound just below statement (2.39) therein). Hence, to complete the proof of statement (4.18), we need to check that

\[
\frac{Q_n(1 - k/n)}{Q(1 - k/n)} = 1 + o_P(1). \tag{4.20}
\]


Observe that, for each n, the distribution of Qn(1 − k/n) is the same as that of Q(E_n^{-1}(1 − k/n)), where E_n^{-1} is the uniform empirical quantile function. Furthermore, the processes {1 − E_n^{-1}(1 − s), 0 ≤ s ≤ 1} and {E_n^{-1}(s), 0 ≤ s ≤ 1} are equal in distribution. Hence, statement (4.20) is equivalent to

\[
\frac{Q\bigl(1 - E_n^{-1}(k/n)\bigr)}{Q(1 - k/n)} = 1 + o_P(1). \tag{4.21}
\]

From the Glivenko-Cantelli theorem we have that E_n^{-1}(k/n) − k/n → 0 almost surely, which also implies that E_n^{-1}(k/n) → 0 since k/n → 0 by our choice of k. Moreover, we know from Theorem 0 and Remark 1 of Wellner [41] that

\[
\sup_{1/n \le s \le 1} s^{-1}\,\bigl|E_n^{-1}(s) - s\bigr| = o_P(1), \tag{4.22}
\]

from which we conclude that

\[
\frac{n\,E_n^{-1}(k/n)}{k} = 1 + o_P(1). \tag{4.23}
\]

Since the function s ↦ Q(1 − s) is regularly varying at zero, using Potter's inequality (see the 5th assertion of Proposition B.1.9 on page 367 of de Haan and Ferreira [13]), we obtain that

\[
\frac{Q\bigl(1 - E_n^{-1}(k/n)\bigr)}{Q(1 - k/n)} = (1 + o_P(1)) \left( \frac{n\,E_n^{-1}(k/n)}{k} \right)^{-\gamma \pm \theta} \tag{4.24}
\]

for any θ ∈ (0, γ). In view of (4.23), the right-hand side of (4.24) is equal to 1 + oP(1), which implies statement (4.21) and thus finishes the proof of statement (4.3).

4.2. Proof of Statement (4.4)

The proof of statement (4.4) is similar to that of Theorem 2 in Necir et al. [42], though some adjustments are needed since we are now concerned with the CTE risk measure. We therefore present the main blocks of the proof together with pinpointed references to Necir et al. [42] for specific technical details.

We start the proof with the function U(z) = Q(1 − 1/z) that was already used in the formulation of Theorem 3.1. Hence, if Y is a random variable with the distribution function G(z) = 1 − 1/z, z ≥ 1, then U(Y) = Q(G(Y)) =_d X because G(Y) is a uniform on the interval [0, 1] random variable. Hence,

\[
A_{n,2} = \frac{k/n}{1 - \gamma_n}\, U(Y_{n-k:n}) - \int_0^{k/n} U\!\left(\frac{1}{s}\right) ds, \tag{4.25}
\]


and so we have

\[
\frac{\sqrt{n}\, A_{n,2}}{(k/n)^{1/2}\, Q(1 - k/n)}
= \sqrt{k} \left( \frac{1}{1 - \gamma_n}\, \frac{U(Y_{n-k:n})}{U(n/k)} - \frac{(n/k) \int_0^{k/n} U(1/s)\,ds}{U(n/k)} \right)
\]
\[
= \sqrt{k} \left( \frac{1}{1 - \gamma_n}\, \frac{U(Y_{n-k:n})}{U(n/k)} - \frac{1}{1 - \gamma} \right)
+ \sqrt{k} \left( \frac{1}{1 - \gamma} - \frac{\int_1^{\infty} s^{-2}\, U(ns/k)\,ds}{U(n/k)} \right). \tag{4.26}
\]

We next show that the right-most term in (4.26) converges to 0 when n → ∞. For this reason, we first rewrite the term as follows:

\[
\sqrt{k} \left( \frac{1}{1 - \gamma} - \frac{\int_1^{\infty} s^{-2}\, U(ns/k)\,ds}{U(n/k)} \right)
= -\sqrt{k} \int_1^{\infty} \frac{1}{s^2} \left( \frac{U(ns/k)}{U(n/k)} - s^{\gamma} \right) ds. \tag{4.27}
\]

The right-hand side of (4.27) converges to 0 (see the notes on page 149 of Necir et al. [42]) due to the second-order condition (3.1), which can equivalently be rewritten as

\[
\lim_{z \to \infty} \frac{1}{A(z)} \left( \frac{U(zs)}{U(z)} - s^{\gamma} \right) = s^{\gamma}\, \frac{s^{\rho} - 1}{\rho} \tag{4.28}
\]

for every s > 0, where A(z) = γ²a(U(z)). Note that √k A(n/k) → 0 when n → ∞. Hence, in order to complete the proof of statement (4.4), we need to check that

\[
\sqrt{k} \left( \frac{1}{1 - \gamma_n}\, \frac{U(Y_{n-k:n})}{U(n/k)} - \frac{1}{1 - \gamma} \right)
= \frac{\gamma^2}{(1 - \gamma)^2} \sqrt{\frac{n}{k}}\, B_n\!\left(1 - \frac{k}{n}\right)
- \frac{\gamma}{(1 - \gamma)^2} \sqrt{\frac{n}{k}} \int_{1 - k/n}^{1} \frac{B_n(s)}{1 - s}\,ds + o_P(1). \tag{4.29}
\]

With Hill’s estimator written in the form

\[
\gamma_n = \frac{1}{k} \sum_{i=1}^{k} \log\!\left( \frac{U(Y_{n-i+1:n})}{U(Y_{n-k:n})} \right), \tag{4.30}
\]

we proceed with the proof of statement (4.29) as follows:

\[
\begin{aligned}
\sqrt{k}\left(\frac{1-\gamma}{1-\widehat{\gamma}_n}\,\frac{U(Y_{n-k:n})}{U(n/k)}-1\right)
&=\sqrt{k}\,\frac{1-\gamma}{1-\widehat{\gamma}_n}\left(\frac{U(Y_{n-k:n})}{U(n/k)}-\left(\frac{Y_{n-k:n}}{n/k}\right)^{\gamma}\right)\\
&\quad+\sqrt{k}\,\frac{1-\gamma}{1-\widehat{\gamma}_n}\left(\left(\frac{Y_{n-k:n}}{n/k}\right)^{\gamma}-1\right)
+\sqrt{k}\,\frac{\widehat{\gamma}_n-\gamma}{1-\widehat{\gamma}_n}.
\end{aligned}
\tag{4.31}
\]
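Hill's estimator (4.30) is straightforward to compute from the top order statistics of a sample. Below is a minimal Python sketch with an exact Pareto sample, for which U(z) = z^γ; the values γ = 0.6, n, and k are illustrative choices for the sketch:

```python
import math
import random

GAMMA = 0.6            # illustrative tail index (assumption for this sketch)
N, K = 100_000, 1_000  # illustrative sample size and number of top order statistics

random.seed(1)
# exact Pareto losses: X = (1 - V)^(-GAMMA) with V uniform on (0, 1)
x = sorted((1.0 - random.random()) ** (-GAMMA) for _ in range(N))

x_nk = x[N - K - 1]    # X_{n-k:n}, the (n-k)-th order statistic
# Hill's estimator: average log-excess of the top k order statistics
gamma_hat = sum(math.log(x[N - i] / x_nk) for i in range(1, K + 1)) / K
assert abs(gamma_hat - GAMMA) < 0.1  # consistency of Hill's estimator
```

For an exact Pareto sample the estimate fluctuates around γ with standard deviation of order γ/√k.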


Furthermore, we have that

\[
\begin{aligned}
\sqrt{k}\,\frac{\widehat{\gamma}_n-\gamma}{1-\widehat{\gamma}_n}
&=\frac{1}{1-\widehat{\gamma}_n}\,\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left(\log\frac{U(Y_{n-i+1:n})}{U(Y_{n-k:n})}-\gamma\log\frac{Y_{n-i+1:n}}{Y_{n-k:n}}\right)\\
&\quad+\frac{\gamma}{1-\widehat{\gamma}_n}\,\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left(\log\frac{Y_{n-i+1:n}}{Y_{n-k:n}}-1\right).
\end{aligned}
\tag{4.32}
\]
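Decomposition (4.32) is an exact algebraic identity, as a quick numerical check confirms. In the sketch below, the quantile function U(z) = z^γ(1 + z^ρ) and the constants are illustrative assumptions:

```python
import math
import random

GAMMA, RHO = 0.6, -0.5
N, K = 50_000, 500  # illustrative sample size and tail fraction

def U(z):
    # hypothetical tail quantile function with a second-order term
    return z ** GAMMA * (1.0 + z ** RHO)

random.seed(2)
# order statistics of a Pareto(1) sample: Y = 1/(1 - V), V uniform on (0, 1)
y = sorted(1.0 / (1.0 - random.random()) for _ in range(N))
y_nk = y[N - K - 1]                        # Y_{n-k:n}
top = [y[N - i] for i in range(1, K + 1)]  # Y_{n-i+1:n}, i = 1, ..., k

gamma_hat = sum(math.log(U(t) / U(y_nk)) for t in top) / K  # Hill's estimator (4.30)

lhs = math.sqrt(K) * (gamma_hat - GAMMA) / (1.0 - gamma_hat)
rhs = (sum(math.log(U(t) / U(y_nk)) - GAMMA * math.log(t / y_nk) for t in top)
       + GAMMA * sum(math.log(t / y_nk) - 1.0 for t in top)) / ((1.0 - gamma_hat) * math.sqrt(K))
assert abs(lhs - rhs) < 1e-8  # (4.32) holds exactly, up to rounding
```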

Arguments on page 156 of Necir et al. [42] imply that the first term on the right-hand side of (4.32) is of the order O_P(√k A(Y_{n−k:n})), and a note on page 157 of Necir et al. [42] says that √k A(Y_{n−k:n}) = o_P(1). Hence, the first term on the right-hand side of (4.32) is of the order o_P(1). Analogous considerations using bound (2.5) instead of (2.4) on page 156 of Necir et al. [42] imply that the first term on the right-hand side of (4.31) is of the order o_P(1). Hence, in summary, we have that

\[
\begin{aligned}
\sqrt{k}\left(\frac{1-\gamma}{1-\widehat{\gamma}_n}\,\frac{U(Y_{n-k:n})}{U(n/k)}-1\right)
&=\frac{1-\gamma}{1-\widehat{\gamma}_n}\,\sqrt{k}\left(\left(\frac{Y_{n-k:n}}{n/k}\right)^{\gamma}-1\right)\\
&\quad+\frac{\gamma}{1-\widehat{\gamma}_n}\,\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left(\log\frac{Y_{n-i+1:n}}{Y_{n-k:n}}-1\right)+o_P(1).
\end{aligned}
\tag{4.33}
\]

We now need to connect the right-hand side of (4.33) with the Brownian bridges B_n. To this end, we first convert the Y-based order statistics into U-based (i.e., uniform on [0, 1]) order statistics. For this we recall that the cdf of Y is G, and thus Y is equal in distribution to G⁻¹(U), which is 1/(1 − U). Consequently,

\[
\begin{aligned}
\sqrt{k}\left(\frac{1-\gamma}{1-\widehat{\gamma}_n}\,\frac{U(Y_{n-k:n})}{U(n/k)}-1\right)
&=\frac{1-\gamma}{1-\widehat{\gamma}_n}\,\sqrt{k}\left(\left(\frac{1}{(n/k)(1-U_{n-k:n})}\right)^{\gamma}-1\right)\\
&\quad+\frac{\gamma}{1-\widehat{\gamma}_n}\,\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left(\log\frac{1-U_{n-k:n}}{1-U_{n-i+1:n}}-1\right)+o_P(1).
\end{aligned}
\tag{4.34}
\]

Next we choose a sequence of Brownian bridges B_n (see pages 158-159 in [42] and references therein) such that the following two asymptotic representations hold:

\[
\begin{aligned}
\sqrt{k}\left(\left(\frac{1}{(n/k)(1-U_{n-k:n})}\right)^{\gamma}-1\right)
&=-\gamma\sqrt{\frac{n}{k}}\,B_n\!\left(1-\frac{k}{n}\right)+o_P(1),\\
\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left(\log\frac{1-U_{n-k:n}}{1-U_{n-i+1:n}}-1\right)
&=\sqrt{\frac{n}{k}}\,B_n\!\left(1-\frac{k}{n}\right)
-\sqrt{\frac{n}{k}}\int_{1-k/n}^{1}\frac{B_n(s)}{1-s}\,ds+o_P(1).
\end{aligned}
\tag{4.35}
\]
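The first representation in (4.35) can be probed by simulation: since Var(√(n/k) B_n(1 − k/n)) = (n/k)(k/n)(1 − k/n) = 1 − k/n, the statistic on the left should be roughly centered with variance near γ²(1 − k/n). A minimal Monte Carlo sketch, with illustrative values of γ, n, k, and the number of replications:

```python
import math
import random
import statistics

GAMMA = 0.6
N, K, REPS = 2_000, 100, 1_000  # illustrative sample size, tail fraction, replications

random.seed(3)
sample = []
for _ in range(REPS):
    u = sorted(random.random() for _ in range(N))
    u_nk = u[N - K - 1]  # U_{n-k:n}
    stat = math.sqrt(K) * (((N / K) * (1.0 - u_nk)) ** (-GAMMA) - 1.0)
    sample.append(stat)

# the statistic should be approximately N(0, GAMMA^2 (1 - K/N))
assert abs(statistics.mean(sample)) < 0.15
assert abs(statistics.variance(sample) - GAMMA ** 2 * (1.0 - K / N)) < 0.08
```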


Using these two statements on the right-hand side of (4.34), and also keeping in mind that γ̂n is a consistent estimator of γ (see [22]), we have that

\[
\sqrt{k}\left(\frac{1-\gamma}{1-\widehat{\gamma}_n}\,\frac{U(Y_{n-k:n})}{U(n/k)}-1\right)
=\frac{\gamma^{2}}{1-\gamma}\sqrt{\frac{n}{k}}\,B_n\!\left(1-\frac{k}{n}\right)
-\frac{\gamma}{1-\gamma}\sqrt{\frac{n}{k}}\int_{1-k/n}^{1}\frac{B_n(s)}{1-s}\,ds+o_P(1).
\tag{4.36}
\]

Dividing both sides of equation (4.36) by 1 − γ, we arrive at (4.29). This completes the proof of statement (4.4) and of Theorem 3.1 as well.

Acknowledgments

Our work on the revision of this paper has been considerably influenced by constructive criticism and suggestions by three anonymous referees and the editor in charge of the manuscript, Edward Furman, and we are indebted to all of them. Results of the paper were first announced at the 44th Actuarial Research Conference at the University of Wisconsin, Madison, Wisconsin, July 30–August 1, 2009. The authors are grateful to participants of this most stimulating conference, organized by the Society of Actuaries, for generous feedback. The research has been partially supported by grants from the Society of Actuaries (SOA) and the Natural Sciences and Engineering Research Council (NSERC) of Canada.

References

[1] M. Denuit, J. Dhaene, M. Goovaerts, and R. Kaas, Actuarial Theory for Dependent Risks: Measures, Orders and Models, Wiley, Chichester, UK, 2005.
[2] Z. M. Landsman and E. A. Valdez, "Tail conditional expectations for elliptical distributions," North American Actuarial Journal, vol. 7, no. 4, pp. 55–71, 2003.
[3] M. R. Hardy and J. L. Wirch, "The iterated CTE: a dynamic risk measure," North American Actuarial Journal, vol. 8, no. 4, pp. 62–75, 2004.
[4] J. Cai and H. Li, "Conditional tail expectations for multivariate phase-type distributions," Journal of Applied Probability, vol. 42, no. 3, pp. 810–825, 2005.
[5] B. J. Manistre and G. H. Hancock, "Variance of the CTE estimator," North American Actuarial Journal, vol. 9, no. 2, pp. 129–156, 2005.
[6] E. Furman and Z. Landsman, "Tail variance premium with applications for elliptical portfolio of risks," Astin Bulletin, vol. 36, no. 2, pp. 433–462, 2006.
[7] E. Furman and R. Zitikis, "Weighted risk capital allocations," Insurance: Mathematics & Economics, vol. 43, no. 2, pp. 263–269, 2008.
[8] E. Furman and R. Zitikis, "Weighted pricing functionals with applications to insurance: an overview," North American Actuarial Journal, vol. 13, pp. 483–496, 2009.
[9] V. Brazauskas, B. L. Jones, M. L. Puri, and R. Zitikis, "Estimating conditional tail expectation with actuarial applications in view," Journal of Statistical Planning and Inference, vol. 138, no. 11, pp. 3590–3604, 2008.
[10] J. Beirlant, G. Matthys, and G. Dierckx, "Heavy-tailed distributions and rating," Astin Bulletin, vol. 31, no. 1, pp. 37–58, 2001.
[11] J. Beirlant, Y. Goegebeur, J. Teugels, and J. Segers, Statistics of Extremes, Wiley Series in Probability and Statistics, John Wiley & Sons, Chichester, UK, 2004.
[12] E. Castillo, A. S. Hadi, N. Balakrishnan, and J. M. Sarabia, Extreme Value and Related Models with Applications in Engineering and Science, Wiley Series in Probability and Statistics, Wiley-Interscience, Hoboken, NJ, USA, 2005.
[13] L. de Haan and A. Ferreira, Extreme Value Theory: An Introduction, Springer Series in Operations Research and Financial Engineering, Springer, New York, NY, USA, 2006.
[14] S. I. Resnick, Heavy-Tail Phenomena: Probabilistic and Statistical Modeling, Springer Series in Operations Research and Financial Engineering, Springer, New York, NY, USA, 2007.
[15] B. M. Hill, "A simple general approach to inference about the tail of a distribution," The Annals of Statistics, vol. 3, no. 5, pp. 1163–1174, 1975.
[16] S. Cheng and L. Peng, "Confidence intervals for the tail index," Bernoulli, vol. 7, no. 5, pp. 751–760, 2001.
[17] J. T. Kim and M. R. Hardy, "Quantifying and correcting the bias in estimated risk measures," Astin Bulletin, vol. 37, no. 2, pp. 365–386, 2007.
[18] I. Weissman, "Estimation of parameters and large quantiles based on the k largest observations," Journal of the American Statistical Association, vol. 73, no. 364, pp. 812–815, 1978.
[19] A. L. M. Dekkers and L. de Haan, "On the estimation of the extreme-value index and large quantile estimation," The Annals of Statistics, vol. 17, no. 4, pp. 1795–1832, 1989.
[20] G. Matthys and J. Beirlant, "Estimating the extreme value index and high quantiles with exponential regression models," Statistica Sinica, vol. 13, no. 3, pp. 853–880, 2003.
[21] M. I. Gomes, F. Figueiredo, and S. Mendonça, "Asymptotically best linear unbiased tail estimators under a second-order regular variation condition," Journal of Statistical Planning and Inference, vol. 134, no. 2, pp. 409–433, 2005.
[22] D. M. Mason, "Laws of large numbers for sums of extreme values," The Annals of Probability, vol. 10, no. 3, pp. 754–764, 1982.
[23] S. Csörgő and D. M. Mason, "Central limit theorems for sums of extreme values," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 98, no. 3, pp. 547–558, 1985.
[24] J. Beirlant and J. L. Teugels, "Asymptotic normality of Hill's estimator," in Extreme Value Theory (Oberwolfach, 1987), J. Hüsler and R.-D. Reiss, Eds., vol. 51 of Lecture Notes in Statistics, pp. 148–155, Springer, New York, NY, USA, 1989.
[25] A. L. M. Dekkers, J. H. J. Einmahl, and L. de Haan, "A moment estimator for the index of an extreme-value distribution," The Annals of Statistics, vol. 17, no. 4, pp. 1833–1855, 1989.
[26] L. de Haan and U. Stadtmüller, "Generalized regular variation of second order," Australian Mathematical Society Journal Series A, vol. 61, no. 3, pp. 381–395, 1996.
[27] J. Geluk, L. de Haan, S. Resnick, and C. Stărică, "Second-order regular variation, convolution and the central limit theorem," Stochastic Processes and Their Applications, vol. 69, no. 2, pp. 139–159, 1997.
[28] L. Peng and Y. Qi, "Estimating the first- and second-order parameters of a heavy-tailed distribution," Australian & New Zealand Journal of Statistics, vol. 46, no. 2, pp. 305–312, 2004.
[29] M. I. Gomes and D. Pestana, "A simple second-order reduced bias' tail index estimator," Journal of Statistical Computation and Simulation, vol. 77, no. 5-6, pp. 487–504, 2007.
[30] L. Peng, "Estimating the mean of a heavy tailed distribution," Statistics & Probability Letters, vol. 52, no. 3, pp. 255–264, 2001.
[31] R. Zitikis, "The Vervaat process," in Asymptotic Methods in Probability and Statistics, B. Szyszkowicz, Ed., pp. 667–694, North-Holland, Amsterdam, The Netherlands, 1998.
[32] Y. Davydov and R. Zitikis, "Generalized Lorenz curves and convexifications of stochastic processes," Journal of Applied Probability, vol. 40, no. 4, pp. 906–925, 2003.
[33] Y. Davydov and R. Zitikis, "Convex rearrangements of random elements," in Asymptotic Methods in Stochastics, B. Szyszkowicz, Ed., vol. 44 of Fields Institute Communications, pp. 141–171, American Mathematical Society, Providence, RI, USA, 2004.
[34] A. L. M. Dekkers and L. de Haan, "Optimal choice of sample fraction in extreme-value estimation," Journal of Multivariate Analysis, vol. 47, no. 2, pp. 173–195, 1993.
[35] H. Drees and E. Kaufmann, "Selecting the optimal sample fraction in univariate extreme value estimation," Stochastic Processes and Their Applications, vol. 75, no. 2, pp. 149–172, 1998.
[36] J. Danielsson, L. de Haan, L. Peng, and C. G. de Vries, "Using a bootstrap method to choose the sample fraction in tail index estimation," Journal of Multivariate Analysis, vol. 76, no. 2, pp. 226–248, 2001.
[37] C. Neves and M. I. Fraga Alves, "Reiss and Thomas' automatic selection of the number of extremes," Computational Statistics & Data Analysis, vol. 47, no. 4, pp. 689–704, 2004.
[38] M. I. Gomes, D. Pestana, and F. Caeiro, "A note on the asymptotic variance at optimal levels of a bias-corrected Hill estimator," Statistics & Probability Letters, vol. 79, no. 3, pp. 295–303, 2009.
[39] A. Necir and D. Meraghni, "Empirical estimation of the proportional hazard premium for heavy-tailed claim amounts," Insurance: Mathematics & Economics, vol. 45, no. 1, pp. 49–58, 2009.
[40] M. Csörgő, S. Csörgő, L. Horváth, and D. M. Mason, "Weighted empirical and quantile processes," The Annals of Probability, vol. 14, no. 1, pp. 31–85, 1986.
[41] J. A. Wellner, "Limit theorems for the ratio of the empirical distribution function to the true distribution function," Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, vol. 45, no. 1, pp. 73–88, 1978.
[42] A. Necir, D. Meraghni, and F. Meddi, "Statistical estimate of the proportional hazard premium of loss," Scandinavian Actuarial Journal, no. 3, pp. 147–161, 2007.
