This article was downloaded by: [University of Illinois Chicago] on 20 November 2014, at 05:54. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number 1072954; registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Statistics: A Journal of Theoretical and Applied Statistics. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gsta20

Sharp upper mean-variance bounds for trimmed means from restricted families. Katarzyna Danielak, Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, P.O. Box 21, 00-956 Warszawa 10, Poland. Published online: 29 Oct 2010.

To cite this article: Katarzyna Danielak (2003) Sharp upper mean-variance bounds for trimmed means from restricted families, Statistics: A Journal of Theoretical and Applied Statistics, 37:4, 305-324, DOI: 10.1080/0233188031000123762

To link to this article: http://dx.doi.org/10.1080/0233188031000123762
Statistics, July 2003, Vol. 37(4), pp. 305–324
SHARP UPPER MEAN–VARIANCE BOUNDS FOR TRIMMED MEANS FROM RESTRICTED FAMILIES
KATARZYNA DANIELAK*
Institute of Mathematics, Polish Academy of Sciences, ul. Śniadeckich 8, P.O. Box 21, 00-956 Warszawa 10, Poland
(Received 14 May 2002; In final form 16 January 2003)
We apply the method of projection to derive sharp upper mean–variance bounds on trimmed means of order statistics of i.i.d. samples coming from restricted families of probability measures. Two families are considered: distributions with decreasing density and with decreasing failure rate. We also present some numerical results.
Keywords: Order statistic; Trimmed means; Decreasing density; Decreasing failure rate; Convex cone; Projection
1 INTRODUCTION
Assume that $X_1, \ldots, X_n$ are independent identically distributed random variables with common distribution function $F$, mean $\mu$, finite variance $\sigma^2$ and quantile function $F^{-1}$ (the right continuous version, for definiteness) defined by
\[ F^{-1}(x) = \sup\{y\colon F(y) \le x\}, \quad x \in [0,1). \]
Let $X_{1:n} \le \cdots \le X_{n:n}$ be the order statistics from the sample $X_1, \ldots, X_n$. We denote the trimmed mean consisting of the order statistics from the $j$th to the $k$th one by
\[ T_{j,k:n} = \frac{1}{k-j+1} \sum_{i=j}^{k} X_{i:n}, \quad 1 \le j \le k \le n. \]
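As a concrete illustration (the code and its function names are ours, not part of the paper), $T_{j,k:n}$ can be computed by sorting the sample and averaging the selected order statistics:

```python
def trimmed_mean(sample, j, k):
    """T_{j,k:n}: average of the j-th through k-th order statistics (1-indexed)."""
    xs = sorted(sample)
    return sum(xs[j - 1:k]) / (k - j + 1)

sample = [4.0, 1.0, 3.0, 10.0, 2.0]            # n = 5; the outlier 10.0 inflates the mean
n = len(sample)
mean = sum(sample) / n                         # T_{1,n:n}, sensitive to the outlier
sym_trimmed = trimmed_mean(sample, 2, n - 1)   # drop one extreme from each side
print(mean, sym_trimmed)                       # 4.0 3.0
```

For $j = 1$, $k = n$ this reduces to the sample mean; trimming one extreme from each side visibly dampens the outlier's effect.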
The sample mean $T_{1,n:n}$, which is the most natural estimate of the population mean in general nonparametric models, is extremely sensitive to outliers and other violations in the model. The aim of trimming is to reduce this sensitivity. Symmetrically trimmed means $T_{j,n-j+1:n}$, for properly chosen $j$, $1 \le j \le n/2$, are asymptotically most variance-robust estimates of location against various symmetric violations of symmetric marginal distributions (cf. Jaeckel, 1971). Asymmetrically trimmed means can be used in estimating the mean in the case of asymmetric distributions. The lower and upper selection differentials $(T_{1,k:n} - \mu)/\sigma$ and $(T_{n+1-k,n:n} - \mu)/\sigma$, $1 \le k \le n$, respectively, have numerous applications in genetics
* E-mail: k.danielak@impan.gov.pl
ISSN 0233-1888 print; ISSN 1029-4910 online © 2003 Taylor & Francis Ltd. DOI: 10.1080/0233188031000123762
(see, e.g., Burrows, 1972). They were also used in tests for detecting outliers (see Barnett and Lewis, 1984). Single order statistics $T_{k,k:n}$ are most bias-robust against contaminations of symmetric and asymmetric margins, respectively (cf. Huber, 1964; Rychlik, 1987; Zieliński, 1988; Chen, 1998). Danielak and Rychlik (2003) applied Hölder's inequality combined with Moriguti's inequality to derive sharp upper and lower bounds on the bias $E_F T_{j,k:n} - \mu$ in different scale units for i.i.d. samples coming from general distributions. Gajek and Rychlik (1998) applied the method of projection to provide sharp bounds for the expectations of single order statistics based on i.i.d. samples coming from restricted families of distributions (life distributions with decreasing density, decreasing failure rate and symmetric unimodal ones) in terms of the square root of the second raw moment.

The purpose of this paper is to find sharp upper mean–variance bounds for the expectations of trimmed means $T_{j,k:n}$, $1 \le j \le k \le n$, of independent identically distributed random variables coming from restricted families of probability measures. We restrict our attention to distributions with decreasing density (DD) and decreasing failure rate (DFR). A distribution function $F$ has a decreasing density if $F^{-1}U = F^{-1}$ is convex in $(0,1)$, where $U(x) = x$, $0 \le x < 1$. Similarly, $F$ has a decreasing failure rate if $F^{-1}V$ is convex, with $V(x) = 1 - e^{-x}$, $x > 0$. Hence the two cases can be examined together as a family of distributions $F$ such that $F^{-1}W$, $W = U, V$, is convex in the support of $W$. We say in such a case that $F$ succeeds $W$ in the convex order ($F \succ_c W$), which was defined by van Zwet (1964). The bounds obtained for these two families are more accurate than those for general distributions established in Danielak and Rychlik (2003). The results are stronger than those in Gajek and Rychlik (1998), since we obtain mean–variance bounds in place of second-moment bounds. Moreover, we consider not only single order statistics, but trimmed means consisting of them as well.
The density and the distribution function of the $i$th order statistic from the standard uniform i.i.d. sample of size $n$ are given by
\[ f_{i:n}(x) = n B_{i-1,n-1}(x), \qquad F_{i:n}(x) = \sum_{m=i}^{n} B_{m,n}(x), \]
where
\[ B_{j,m}(x) = \binom{m}{j} x^{j} (1-x)^{m-j}, \quad j = 0, \ldots, m, \quad m = 0, 1, \ldots \]
are the classical Bernstein polynomials. Using the representation
\[ E_F X_{i:n} = \int_0^1 F^{-1}(x)\, f_{i:n}(x)\, dx, \]
we obtain
\[ E_F T_{j,k:n} - \mu = \int_0^1 [F^{-1}(x) - \mu][\,f_{j,k:n}(x) - 1]\, dx, \tag{1.1} \]
where
\[ f_{j,k:n}(x) = \frac{1}{k-j+1} \sum_{i=j}^{k} f_{i:n}(x). \]
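These densities are straightforward to evaluate numerically; the following sketch (ours; the helper names are not from the paper) checks that $f_{i:n}$ integrates to 1 and reproduces the well-known value $E_U X_{i:n} = i/(n+1)$:

```python
from math import comb

def bernstein(j, m, x):
    """Classical Bernstein polynomial B_{j,m}(x)."""
    return comb(m, j) * x**j * (1 - x)**(m - j)

def f_order(i, n, x):
    """Density f_{i:n}(x) = n B_{i-1,n-1}(x) of the i-th uniform order statistic."""
    return n * bernstein(i - 1, n - 1, x)

# Midpoint-rule checks: total mass 1 and mean i/(n+1).
i, n, N = 3, 5, 50_000
xs = [(t + 0.5) / N for t in range(N)]
mass = sum(f_order(i, n, x) for x in xs) / N
mean = sum(x * f_order(i, n, x) for x in xs) / N
print(round(mass, 4), round(mean, 4))   # 1.0 0.5, since 3/(5+1) = 0.5
```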
Changing variables in (1.1) we get
\[ E_F T_{j,k:n} - \mu = \int_{a}^{d} [F^{-1}W(x) - \mu][\,f_{j,k:n}W(x) - 1]\, w(x)\, dx = (F^{-1}W - \mu,\ f_{j,k:n}W - 1)_W, \tag{1.2} \]
where $W$ is an absolutely continuous distribution function with density $w$, support $[a, d) = [a_W, d_W)$ and a finite second moment. Here $(\cdot,\cdot)_W$ stands for the inner product in the real Hilbert space $L^2([a,d), w(x)\,dx)$ of functions square integrable with respect to the weight function $w$ on the interval $[a,d)$.

Applying the Schwarz inequality to (1.2) and noting that
\[ \|F^{-1}W - \mu\|_W = \|F^{-1} - \mu\|_U = \sigma, \]
we get
\[ E_F T_{j,k:n} - \mu \le \|\,f_{j,k:n}W - 1\|_W\, \sigma. \tag{1.3} \]
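Inequality (1.3) is easy to check numerically in the simplest setting $W = U$ with $F$ itself uniform; the sketch below (ours, not from the paper) compares the exact left-hand side with the Schwarz bound:

```python
from math import comb, sqrt

def f_jkn(j, k, n, x):
    """Averaged order-statistic density f_{j,k:n}(x) for a standard uniform sample."""
    return sum(n * comb(n - 1, i - 1) * x**(i - 1) * (1 - x)**(n - i)
               for i in range(j, k + 1)) / (k - j + 1)

# For F uniform on [0, 1]: E_F T_{j,k:n} - mu = (1/(k-j+1)) sum_i i/(n+1) - 1/2,
# and sigma = 1/sqrt(12).  The right-hand side is ||f_{j,k:n} - 1||_U * sigma.
j, k, n, N = 4, 5, 6, 100_000
lhs = sum(i / (n + 1) for i in range(j, k + 1)) / (k - j + 1) - 0.5
norm2 = sum((f_jkn(j, k, n, (t + 0.5) / N) - 1)**2 for t in range(N)) / N
rhs = sqrt(norm2) / sqrt(12)
print(lhs <= rhs)   # True: the Schwarz bound (1.3) holds
```

Equality would require the two factors in (1.2) to be proportional, which fails here, so the bound is strict in this example.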
The bound in (1.3) is sharp iff the two factors of the integrand in (1.2) are proportional. The transformations $F^{-1}W - \mu$, for arbitrary $F$ with a finite second moment satisfying $F \succ_c W$, form the family of functions
\[ C_W^0 = \left\{ g \in C_W\colon \int_a^d g(x)\, w(x)\, dx = 0 \right\} \]
with
\[ C_W = \{ g \in L^2([a,d), w(x)\,dx)\colon g \text{ is nondecreasing and convex} \}. \]
In general, the functions $f_{j,k:n}W - 1$ are not nondecreasing and convex, except for the case $j = k = n$ and $W = U$, when $f_{n,n:n}U(x) = n x^{n-1}$, $0 \le x < 1$. Then
\[ \frac{E_F T_{n,n:n} - \mu}{\sigma} \le \frac{n-1}{(2n-1)^{1/2}} \]
with
\[ F(x) = \begin{cases} 0, & \dfrac{x-\mu}{\sigma} < -\dfrac{(2n-1)^{1/2}}{n-1}, \\[1ex] \left\{ \dfrac{1}{n}\left[ 1 + \dfrac{n-1}{(2n-1)^{1/2}}\, \dfrac{x-\mu}{\sigma} \right] \right\}^{1/(n-1)}, & -\dfrac{(2n-1)^{1/2}}{n-1} \le \dfrac{x-\mu}{\sigma} \le (2n-1)^{1/2}, \\[1ex] 1, & \dfrac{x-\mu}{\sigma} > (2n-1)^{1/2}, \end{cases} \]
attaining the bound. It is easy to see that $F$ is a DD distribution. The result can be derived directly from the Schwarz inequality (cf. Gumbel, 1954; Hartley and David, 1954) and it coincides with that for the general family of distributions. We exclude this case from further study. In order to find sharp bounds in (1.3) we apply the projection method. We sketch here only the essence of the method; for a thorough treatment we refer the reader to Rychlik (2001b). If the respective family of modifications of centered quantile functions $F^{-1}W - \mu$ forms a convex cone, and the expectation functional can be represented as an inner product of a fixed function with $F^{-1}W - \mu$ (cf. (1.2)), then the best bound in $\sigma$ units is the norm of the projection of that function onto the mentioned convex cone. The bound is achieved by $F$ such that $F^{-1}W - \mu$ is proportional to the projection.
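The constant in the bound for $E_F T_{n,n:n}$ is just the norm appearing in (1.3); for the reader's convenience, the short computation (ours) runs:

```latex
\|f_{n,n:n} - 1\|_U^2
  = \int_0^1 (n x^{n-1} - 1)^2\, dx
  = \frac{n^2}{2n-1} - 2\int_0^1 n x^{n-1}\, dx + 1
  = \frac{n^2}{2n-1} - 1
  = \frac{(n-1)^2}{2n-1},
```

so that $\|f_{n,n:n} - 1\|_U = (n-1)/(2n-1)^{1/2}$, which is exactly the bound in $\sigma$ units stated above.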
Observe that $C_W$ and $C_W^0$ are convex cones, and the former one is translation invariant: $g \in C_W$ implies that $g + c \in C_W$ for arbitrary real $c$. By the above arguments, we need to find the projection $P_W^0(f_{j,k:n}W - 1)$ of $f_{j,k:n}W - 1$ onto $C_W^0$. The following simple lemma (cf. Rychlik, 2001a) allows us to replace the original projection problem by a simpler one of projecting $f_{j,k:n}W$ onto $C_W$. The respective projection will be denoted by $P_W f_{j,k:n}W$.

LEMMA 1 Suppose that $\check{C}$ is a translation invariant convex cone in $L^2([a,d), w(x)\,dx)$ with $\int_a^d w(x)\, dx = 1$. If the projection $\check{P}h$ of arbitrary $h \in L^2([a,d), w(x)\,dx)$ onto $\check{C}$ exists, then
\[ \int_a^d \check{P}h(x)\, w(x)\, dx = \int_a^d h(x)\, w(x)\, dx. \tag{1.4} \]

It follows from the above lemma that $P_W^0(f_{j,k:n}W - 1) = P_W f_{j,k:n}W - 1$, since we have $\int_a^d (f_{j,k:n}W(x) - 1)\, w(x)\, dx = 0$, and $P_W(f_{j,k:n}W - 1) = P_W f_{j,k:n}W - 1$ by the definition of projection. Summing up, we get
\[ \frac{E_F T_{j,k:n} - \mu}{\sigma} \le \|P_W f_{j,k:n}W - 1\|_W, \tag{1.5} \]
and the equality in (1.5) is attained by a unique $F$ satisfying
\[ F^{-1}W(x) - \mu = \sigma\, \frac{P_W f_{j,k:n}W(x) - 1}{\|P_W f_{j,k:n}W - 1\|_W}, \tag{1.6} \]
due to the variance condition.
In Section 2, we describe the shape of the projection by means of geometric arguments, up to two real parameters, and then determine them. Section 3 contains the main results of the paper, and Section 4 contains some numerical results. Since the proofs are quite long, we have placed them in the Appendix.
2 THE PROJECTION PROBLEM
We present below assumptions on the projected functions $h = f_{j,k:n}W$. Since we restrict our attention to $W = U$ and $V$, the assumptions are chosen so as to cover these cases. They describe the monotonicity and the regions of convexity and concavity of the functions considered.

(A) For some $-\infty < a \le b < c \le d \le +\infty$, let $h$ be a bounded, nonnegative and twice differentiable function on $[a,d)$, such that $h(a) = 0$, $h$ is strictly increasing and strictly convex on $(a,b)$, strictly increasing and strictly concave on $(b,c)$, and strictly decreasing on $(c,d)$. Either of $(a,b)$ and $(c,d)$ may be empty. In addition we assume that $\int_a^d h(x)\, w(x)\, dx = 1$, where $w$ is a positive weight function satisfying $\int_a^d w(x)\, dx = 1$.
The lemma below describes the behavior of $f_{j,k:n}W$ for $W = U, V$.

LEMMA 2 (a) Let $W = U$. If $n = 2$, then $f_{1,1:2}$ is decreasing and $f_{2,2:2}$ is convex increasing. If $n \ge 3$, then for $1 = j \le k \le n-1$ the functions $f_{j,k:n}$ are decreasing. For $2 = j \le k \le n-1$ the functions $f_{j,k:n}$ are first concave increasing, then decreasing. For $2 < j \le k \le n-1$ they are convex increasing, concave increasing and finally decreasing. If $k = n$, then the $f_{j,k:n}$ are concave increasing for $j = 2$, convex increasing then concave increasing for $2 < j < k$, and convex increasing for $j = k$.

(b) Let $W = V$. If $n = 2$, then $f_{1,1:2}V$ is decreasing and $f_{2,2:2}V$ is concave increasing. If $n \ge 3$, then for $1 = j \le k \le n-1$ the functions $f_{j,k:n}V$ are decreasing, and for $2 = j \le k \le n-1$ they are first concave increasing, then decreasing. If $2 < j \le k \le n-1$, then the $f_{j,k:n}V$ are convex increasing,
concave increasing and ultimately decreasing. If $2 < j \le k = n$, then the $f_{j,k:n}V$ are first convex increasing, then concave increasing.
Therefore the functions $f_{j,k:n}W$ satisfy condition (A) except for $1 = j \le k \le n$, $W = U, V$, and $1 < j = k = n$, $W = U$. The latter case was discussed earlier. If $j = 1$ and $k = n$, then $T_{1,n:n}$ is the sample mean and its expectation is equal to $\mu$. If $j = 1$ and $k \ne n$, then the $f_{j,k:n}W$ are decreasing. Clearly, their projections onto $C_W$ are constant functions. Lemma 1 gives $P_W f_{1,k:n}W = 1$, and by (1.5) we have
\[ E_F T_{1,k:n} \le \mu, \quad k = 1, \ldots, n. \]
Note that we do not need to use the projection method in this case. It is easily seen that $T_{1,k:n} = (1/k)\sum_{i=1}^{k} X_{i:n} \le (1/n)\sum_{i=1}^{n} X_{i:n}$. Taking expectations of both sides of the last inequality, we obtain the desired result. We exclude this trivial case from further study and consider below only those $f_{j,k:n}W$ that satisfy assumptions (A).
The following lemma describes geometric properties of the projection of an arbitrary function $h$ satisfying (A).

LEMMA 3 Let $C_W^* \subset C_W$ be the class of functions of the form
\[ g^*(x) = \begin{cases} h(x), & x \le y, \\ h(y) + \alpha(x-y), & x > y, \end{cases} \]
for $\alpha \ge h'(y)$, $a \le y \le b$, or
\[ g^*(x) = h(y) + \alpha(x-y), \quad \alpha \ge 0, \quad a \le y \le c. \]
Then for any $g \in C_W$ there exists a function $g^* \in C_W^*$ such that
\[ \|h - g\| \ge \|h - g^*\|. \]
In Proposition 1 we describe the exact form of the projection of an arbitrary function $h$ satisfying assumptions (A). In order to determine the optimal parameters $y$ and $\alpha$ we introduce the following notions. Let
\[ \alpha_1(y) = \frac{\int_y^d (x-y)[h(x) - h(y)]\, w(x)\, dx}{\int_y^d (x-y)^2\, w(x)\, dx}. \tag{2.1} \]
It is easy to show that $\alpha_1(y)$ is the slope of the best approximation of $h$ in $[y,d)$ among the linear functions taking the value $h(y)$ at $y$. In addition we define
\[ Y(y) = \alpha_1(y) - h'(y), \quad a \le y \le b, \tag{2.2} \]
and
\[ Z(y) = \int_y^d [h(x) - \alpha_1(y)(x-y) - h(y)]\, w(x)\, dx. \tag{2.3} \]

PROPOSITION 1 If the set $\mathcal{Y} = \{y \in (a,b]\colon Y(y) \ge 0 \text{ and } Z(y) = 0\}$ is not empty and $y^* = \sup \mathcal{Y}$, then
\[ P_W h(x) = \begin{cases} h(x), & x \le y^*, \\ \alpha_1(y^*)(x-y^*) + h(y^*), & x > y^*. \end{cases} \tag{2.4} \]
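Proposition 1 translates directly into a numerical procedure: scan $(a,b]$ for a zero of $Z$ and attach a linear continuation with slope $\alpha_1(y^*)$. The sketch below (ours; all numerics and names are assumptions, not the paper's) does this for $W = U$, so $w \equiv 1$ on $[0,1)$, with $h = f_{5:6}(x) = 30x^4(1-x)$, which satisfies (A) with $a = 0$, $b = 0.6$, $c = 0.8$, $d = 1$; it then checks property (1.4) of Lemma 1:

```python
def h(x):
    return 30 * x**4 * (1 - x)

def integrate(f, lo, hi, steps=2000):
    # Midpoint rule; accurate enough for these smooth integrands.
    dx = (hi - lo) / steps
    return sum(f(lo + (t + 0.5) * dx) for t in range(steps)) * dx

def alpha1(y):
    # Slope (2.1) of the best linear approximation of h on [y, 1)
    # among lines taking the value h(y) at y.
    num = integrate(lambda x: (x - y) * (h(x) - h(y)), y, 1.0)
    den = integrate(lambda x: (x - y) ** 2, y, 1.0)
    return num / den

def Z(y):
    # Criterion (2.3); the projection breakpoint y* is a zero of Z.
    a1 = alpha1(y)
    return integrate(lambda x: h(x) - a1 * (x - y) - h(y), y, 1.0)

# Locate the zero of Z in (0, b] = (0, 0.6] by grid scan plus bisection.
grid = [0.02 * t for t in range(1, 31)]
vals = [Z(y) for y in grid]
idx = max(t for t in range(29) if vals[t] * vals[t + 1] <= 0)
lo, hi, f_lo = grid[idx], grid[idx + 1], vals[idx]
for _ in range(25):
    mid = (lo + hi) / 2
    f_mid = Z(mid)
    if f_lo * f_mid <= 0:
        hi = mid
    else:
        lo, f_lo = mid, f_mid
ystar = (lo + hi) / 2
slope = alpha1(ystar)     # here alpha1(y*) > h'(y*), i.e. Y(y*) >= 0

def proj(x):
    # The projection (2.4): h itself up to y*, linear continuation beyond.
    return h(x) if x <= ystar else h(ystar) + slope * (x - ystar)

# Property (1.4) of Lemma 1: the projection preserves the weighted mean of h.
print(round(integrate(proj, 0.0, 1.0), 3))   # 1.0
```

Since $\int_a^d P_W h\, w\, dx = \int_a^d h\, w\, dx - Z(y^*)$, the final check succeeds exactly because $y^*$ is a zero of $Z$.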
Otherwise
\[ P_W h(x) = \max\{\alpha_2, 0\}(x-a) + \min\{\beta_2, 1\}, \tag{2.5} \]
where
\[ \alpha_2 = \frac{\int_a^d (x-a) h(x) w(x)\, dx - \int_a^d (x-a) w(x)\, dx}{\int_a^d (x-a)^2 w(x)\, dx - \left[\int_a^d (x-a) w(x)\, dx\right]^2}, \tag{2.6} \]
\[ \beta_2 = \frac{\int_a^d (x-a)^2 w(x)\, dx - \int_a^d (x-a) h(x) w(x)\, dx \int_a^d (x-a) w(x)\, dx}{\int_a^d (x-a)^2 w(x)\, dx - \left[\int_a^d (x-a) w(x)\, dx\right]^2}. \tag{2.7} \]

Observe that for $y \in (a,b]$ and $Y(y) \ge 0$ a function of the form (2.4) belongs to $C_W$. Formulae (2.6) and (2.7) define the parameters of the optimal linear approximation of $h$ in $[a,d)$. If $\alpha_2 \le 0$, which is equivalent to $\beta_2 \ge 1$, then $P_W h = 1$. Otherwise (2.5) is a strictly increasing linear function.
Remark 1 If the projection is of the form (2.5), then $\beta_2 \ge 0$. Suppose, contrary to our claim, that $\beta_2 < 0$. Then $h(x)$ and $\alpha_2(x-a) + \beta_2$ either have at most two common points, say $\delta \le \Delta$, in $(a,b)$, or $h(x) > \alpha_2(x-a) + \beta_2$ there. In the former case the function equal to $h$ in $[a,\delta]$ and to $\alpha_2(x-a) + \beta_2$ in $(\delta, d)$ would be a better nondecreasing convex approximation of $h$ in $[a,d)$ than $\alpha_2(x-a) + \beta_2$. In the latter case $h(x)$ and $\alpha_2(x-a) + \beta_2$ have exactly one common point $\gamma \in (b,d)$. Obviously we can find a linear function that passes through $(\gamma, h(\gamma))$, has a smaller slope and is less distant from $h$ in $[a,d)$ than $\alpha_2(x-a) + \beta_2$. This means that $\alpha_2(x-a) + \beta_2$ is not the optimal linear approximation.
3 MAIN RESULTS
We are now in a position to present the main results of the paper. We first focus on trimmed means of order statistics coming from distributions with decreasing density. We assume that $n \ge 3$, because for $n = 2$ the bounds for general distributions with finite variance hold.

PROPOSITION 2 Let $X_1, \ldots, X_n$ be i.i.d. with distribution function $F$ and decreasing density, $E_F X_1 = \mu$ and $\mathrm{var}_F X_1 = \sigma^2$.

If $j + k \le n+1$, then $E_F T_{j,k:n} \le \mu$.

If $n+1 < j+k \le \frac{4}{3}(n+1)$, then
\[ \frac{E_F T_{j,k:n} - \mu}{\sigma} \le \sqrt{3}\left( \frac{j+k}{n+1} - 1 \right), \tag{3.1} \]
with equality iff $F$ is the uniform distribution on $[\mu - \sqrt{3}\sigma,\ \mu + \sqrt{3}\sigma]$.

If $\frac{4}{3}(n+1) < j+k \le 2n-1$, then
\[ \frac{E_F T_{j,k:n} - \mu}{\sigma} \le B = B(j,k,n) \tag{3.2} \]
for
\[ B^2 = \frac{(n!)^2}{(k-j+1)^2 (2n-1)!} \left\{ \sum_{i=j}^{k} \binom{2i-2}{i-1}\binom{2n-2i}{n-i} F_{2i-1:2n-1}(y^*) + 2 \sum_{i=j+1}^{k} \sum_{l=j}^{i-1} \binom{i+l-2}{i-1}\binom{2n-i-l}{n-i} F_{i+l-1:2n-1}(y^*) \right\} + (1-y^*) f_{j,k:n}^2(y^*) + \alpha_1 (1-y^*)^2 f_{j,k:n}(y^*) + \frac{1}{3}(1-y^*)^3 \alpha_1^2 - 1 \tag{3.3} \]
with
\[ \alpha_1 = \alpha_1(y^*) = \frac{1}{k-j+1} \sum_{i=j}^{k} \left\{ \frac{3}{(1-y^*)^3} \left[ \frac{i}{n+1}\big(1 - F_{i+1:n+1}(y^*)\big) - y^*\big(1 - F_{i:n}(y^*)\big) \right] - \frac{3}{2(1-y^*)}\, f_{i:n}(y^*) \right\} \]
and $y^*$ being the smallest positive zero of the polynomial
\[ \tilde{Z}(x) = \sum_{m=1}^{j-1} (k-j+1)[4(n+1) - 3(k+j) + 2(m-1)]\, f_{m:n+2}(x) - \sum_{m=j}^{k} [n^2 + 3k^2 - 4nk + (n-k)(2m-1)]\, f_{m:n+2}(x). \]
The bound (3.2) is attained iff
\[ F(x) = \begin{cases} 0, & \dfrac{x-\mu}{\sigma} < -\dfrac{1}{B}, \\[1ex] f_{j,k:n}^{-1}\left( \dfrac{x-\mu}{\sigma}\, B + 1 \right), & -\dfrac{1}{B} \le \dfrac{x-\mu}{\sigma} < \dfrac{f_{j,k:n}(y^*) - 1}{B}, \\[1ex] y^* + \dfrac{((x-\mu)/\sigma)\, B + 1 - f_{j,k:n}(y^*)}{\alpha_1}, & \dfrac{f_{j,k:n}(y^*) - 1}{B} \le \dfrac{x-\mu}{\sigma} < \dfrac{\alpha_1(1-y^*) + f_{j,k:n}(y^*) - 1}{B}, \\[1ex] 1, & \dfrac{x-\mu}{\sigma} \ge \dfrac{\alpha_1(1-y^*) + f_{j,k:n}(y^*) - 1}{B}. \end{cases} \tag{3.4} \]
For single order statistics, that is for $j = k$, we obtain the following corollary.

COROLLARY 1 If $i \le \frac{1}{2}(n+1)$, then $E_F X_{i:n} \le \mu$.

If $\frac{1}{2}(n+1) < i \le \frac{2}{3}(n+1)$, then
\[ \frac{E_F X_{i:n} - \mu}{\sigma} \le \sqrt{3}\left( \frac{2i}{n+1} - 1 \right), \]
with equality iff $F$ is the uniform distribution on $[\mu - \sqrt{3}\sigma,\ \mu + \sqrt{3}\sigma]$.

If $\frac{2}{3}(n+1) < i \le n-1$, then
\[ \frac{E_F X_{i:n} - \mu}{\sigma} \le B = B(i,n) \tag{3.5} \]
for
\[ B^2 = \frac{(n!)^2}{(2n-1)!} \binom{2i-2}{i-1}\binom{2n-2i}{n-i} F_{2i-1:2n-1}(y^*) + (1-y^*) f_{i:n}^2(y^*) + (1-y^*)^2 \alpha_1 f_{i:n}(y^*) + (1-y^*)^3 \frac{\alpha_1^2}{3} - 1 \]
with
\[ \alpha_1 = \alpha_1(y^*) = \frac{3}{(1-y^*)^3} \left\{ \frac{i}{n+1}\big[1 - F_{i+1:n+1}(y^*)\big] - y^*\big[1 - F_{i:n}(y^*)\big] \right\} - \frac{3}{2(1-y^*)}\, f_{i:n}(y^*) \]
and $y^*$ being the smallest positive zero of the polynomial
\[ \tilde{Z}(x) = \sum_{m=1}^{i-1} [4(n+1) - 2(3i - m + 1)]\, f_{m:n+2}(x) - (n-i)(n-i-1)\, f_{i:n+2}(x). \]
The bound (3.5) is attained by (3.4) with $f_{j,k:n}$ replaced by $f_{i:n}$.
The next proposition and corollary refer to the case $W = V$, that is, distributions with decreasing failure rate. First, we introduce the following notation:
\[ S(i,n) = E_V X_{i:n} = \sum_{m=1}^{i} \frac{1}{n+1-m}, \quad 1 \le i \le n, \]
\[ T(j,k,n) = E_V T_{j,k:n} = \frac{1}{k-j+1} \sum_{m=1}^{k} \frac{k - \max\{j,m\} + 1}{n-m+1}, \quad 1 \le j \le k \le n. \]
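The quantities $S(i,n)$ and $T(j,k,n)$ are simple finite sums; a minimal sketch (ours) computes them exactly with rational arithmetic and confirms that $T(j,k,n)$ is the average of $S(i,n)$ over $i = j, \ldots, k$:

```python
from fractions import Fraction

def S(i, n):
    """S(i, n) = E_V X_{i:n}: expected i-th order statistic of n standard exponentials."""
    return sum(Fraction(1, n + 1 - m) for m in range(1, i + 1))

def T(j, k, n):
    """T(j, k, n) = E_V T_{j,k:n}, in the closed form given above."""
    return sum(Fraction(k - max(j, m) + 1, n - m + 1)
               for m in range(1, k + 1)) / (k - j + 1)

# T(j, k, n) is just the average of S(i, n) over i = j, ..., k.
n, j, k = 15, 10, 12
avg = sum(S(i, n) for i in range(j, k + 1)) / (k - j + 1)
print(T(j, k, n) == avg, float(T(j, k, n)))
```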
PROPOSITION 3 Let $X_1, \ldots, X_n$ be independent identically distributed random variables with distribution function $F$, decreasing failure rate, mean $\mu$ and variance $\sigma^2$.

If $T(j,k,n) \le 1$, then $E_F T_{j,k:n} \le \mu$.

If $1 < T(j,k,n) \le 2$, then
\[ \frac{E_F T_{j,k:n} - \mu}{\sigma} \le T(j,k,n) - 1, \]
and the equality holds for the exponential distribution with location $\mu - \sigma$ and scale $\sigma$.

Finally, if $T(j,k,n) > 2$, then
\[ \frac{E_F T_{j,k:n} - \mu}{\sigma} \le C = C(j,k,n) \tag{3.6} \]
for
\[ C^2 = \frac{(n!)^2}{(k-j+1)^2 (2n-1)!} \left[ \sum_{i=j}^{k} \binom{2i-2}{i-1}\binom{2n-2i}{n-i} F_{2i-1:2n-1}(z^*) + 2 \sum_{i=j+1}^{k} \sum_{l=j}^{i-1} \binom{i+l-2}{i-1}\binom{2n-i-l}{n-i} F_{i+l-1:2n-1}(z^*) \right] + (1-z^*)\big[ 2\alpha_1^2 + 2\alpha_1 f_{j,k:n}(z^*) + f_{j,k:n}^2(z^*) \big] - 1 \]
with
\[ \alpha_1 = \alpha_1(V^{-1}(z^*)) = \frac{1}{2(k-j+1)(n+1)(1-z^*)} \left[ \sum_{i=j}^{k} \left( \sum_{m=1}^{i} S(i+1-m,\, n+1-m)\, f_{m:n+1}(z^*) - (n-i+1)\, f_{i:n+1}(z^*) \right) \right], \]
where $z^*$ is the smallest positive zero of the polynomial
\[ \begin{aligned} \tilde{Z}(x) = {} & (k-j+1) \sum_{m=1}^{j-1} [2 - T(j-m+1,\, k-m+1,\, n-m+1)]\, f_{m:n+1}(x) \\ & + \{ (k-j+1)[2 - T(1,\, k-j+1,\, n-j+1)] - n + j - 1 \}\, f_{j:n+1}(x) \\ & + \sum_{m=j+1}^{k} \{ (k-m+1)[2 - T(1,\, k-m+1,\, n-m+1)] - n + m - 1 \}\, f_{m:n+1}(x). \end{aligned} \]
The equality holds in (3.6) iff
\[ F(x) = \begin{cases} 0, & \dfrac{x-\mu}{\sigma} < -\dfrac{1}{C}, \\[1ex] f_{j,k:n}^{-1}\left( \dfrac{x-\mu}{\sigma}\, C + 1 \right), & -\dfrac{1}{C} \le \dfrac{x-\mu}{\sigma} < \dfrac{f_{j,k:n}(z^*) - 1}{C}, \\[1ex] V\left( \dfrac{((x-\mu)/\sigma)\, C + 1 - f_{j,k:n}(z^*)}{\alpha_1} + V^{-1}(z^*) \right), & \dfrac{x-\mu}{\sigma} \ge \dfrac{f_{j,k:n}(z^*) - 1}{C}. \end{cases} \tag{3.7} \]
For single order statistics from DFR samples we obtain the following corollary.

COROLLARY 2 If $S(i,n) \le 1$, then $E_F X_{i:n} \le \mu$.

If $1 < S(i,n) \le 2$, then
\[ \frac{E_F X_{i:n} - \mu}{\sigma} \le S(i,n) - 1, \]
and the equality holds for the exponential distribution with location $\mu - \sigma$ and scale $\sigma$.

Finally, if $S(i,n) > 2$, then
\[ \frac{E_F X_{i:n} - \mu}{\sigma} \le C = C(i,n) \tag{3.8} \]
for
\[ C^2 = \frac{(n!)^2}{(2n-1)!} \binom{2i-2}{i-1}\binom{2n-2i}{n-i} F_{2i-1:2n-1}(z^*) + (1-z^*)\big[ 2\alpha_1^2 + 2\alpha_1 f_{i:n}(z^*) + f_{i:n}^2(z^*) \big] - 1 \]
with
\[ \alpha_1 = \alpha_1(V^{-1}(z^*)) = \frac{1}{2(n+1)(1-z^*)} \left[ \sum_{m=1}^{i} S(i+1-m,\, n+1-m)\, f_{m:n+1}(z^*) - (n-i+1)\, f_{i:n+1}(z^*) \right] \]
where $z^*$ is the smallest positive zero of the polynomial
\[ \tilde{Z}(x) = \sum_{m=1}^{i-1} [2 - S(i-m+1,\, n-m+1)]\, f_{m:n+1}(x) - [n - i - 1 + S(1,\, n-i+1)]\, f_{i:n+1}(x). \]
The bound (3.8) is attained by (3.7) with $f_{j,k:n}$ replaced by $f_{i:n}$.
TABLE I Sharp upper bounds on trimmed means $T_{j,k:15}$ for general, DD and DFR distributions (rows: $j = 2, \ldots, 15$; columns: $k = 9, \ldots, 15$; each cell lists the general, DD and DFR bounds, top to bottom).

            k=9      k=10     k=11     k=12     k=13     k=14     k=15
j=2   Gen   0.12367  0.13317  0.14273  0.15247  0.16258  0.17338  0.18570
      DD    0        0        0        0        0        0        0.10825
      DFR   0        0        0        0        0        0        0.06667
j=3   Gen   0.22992  0.24183  0.25350  0.26513  0.27697  0.28939  0.30324
      DD    0        0        0        0        0        0.10825  0.21651
      DFR   0        0        0        0        0        0        0.13810
j=4   Gen   0.31758  0.33196  0.34584  0.35951  0.37329  0.38763  0.40353
      DD    0        0        0        0        0.10825  0.21651  0.32476
      DFR   0        0        0        0        0        0.02382  0.21502
j=5   Gen   0.39593  0.41347  0.43009  0.44630  0.46255  0.47928  0.49787
      DD    0        0        0        0.10825  0.21651  0.32476  0.43301
      DFR   0        0        0        0        0        0.09636  0.29835
j=6   Gen   0.46863  0.49032  0.51061  0.53011  0.54956  0.56940  0.59155
      DD    0        0        0.10825  0.21651  0.32476  0.43301  0.54127
      DFR   0        0        0        0        0.03202  0.17493  0.38926
j=7   Gen   0.53718  0.56448  0.58965  0.61359  0.63713  0.66127  0.68818
      DD    0        0.10825  0.21651  0.32476  0.43301  0.54127  0.64979
      DFR   0        0        0        0        0.10956  0.26064  0.48926
j=8   Gen   0.60219  0.63678  0.66854  0.69852  0.72776  0.75763  0.79110
      DD    0.10825  0.21651  0.32476  0.43301  0.54127  0.64979  0.76175
      DFR   0        0        0        0.06966  0.19442  0.35496  0.60037
j=9   Gen   0.66376  0.70751  0.74801  0.78623  0.82340  0.86130  0.90412
      DD    0.21651  0.32476  0.43301  0.54127  0.64979  0.76174  0.88201
      DFR   0        0        0.04601  0.15573  0.28823  0.45990  0.72537
j=10  Gen            0.77663  0.82834  0.87776  0.92610  0.97558  1.0323
      DD             0.43301  0.54127  0.64979  0.76174  0.88194  1.0162
      DFR            0.03490  0.13490  0.25156  0.39323  0.57823  0.86823
j=11  Gen                     0.90967  0.97400  1.0382   1.1048   1.1831
      DD                      0.64979  0.76174  0.88192  1.0159   1.1722
      DFR                     0.23490  0.35990  0.51267  0.71406  1.0349
j=12  Gen                              1.0757   1.1623   1.2551   1.3692
      DD                               0.88192  1.0158   1.1711   1.3627
      DFR                              0.48490  0.65156  0.87378  1.2368
j=13  Gen                                       1.3026   1.4364   1.6141
      DD                                        1.1708   1.3595   1.6112
      DFR                                       0.81823  1.0683   1.4993
j=14  Gen                                                1.6666   1.9711
      DD                                                 1.6030   1.9705
      DFR                                                1.3226   1.8797
j=15  Gen                                                         2.5997
      DD                                                          2.5997
      DFR                                                         2.5469
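Several DD entries of Table I fall in the uniform regime $n+1 < j+k \le \frac{4}{3}(n+1)$ of Proposition 2, where the bound has the closed form $\sqrt{3}\,((j+k)/(n+1) - 1)$; a quick reproduction (ours) for a few cells with $n = 15$:

```python
from math import sqrt

n = 15
for j, k in [(2, 15), (3, 15), (8, 9), (9, 12)]:
    # Closed form (3.1); values match Table I after rounding to 5 digits.
    print(j, k, round(sqrt(3) * ((j + k) / (n + 1) - 1), 5))
```

For instance, $(j,k) = (2,15)$ gives $\sqrt{3}/16 \approx 0.10825$, the value shown in the table.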
The forms of the bounds in Propositions 2 and 3 depend on the values of $E_W T_{j,k:n}$. In particular, for DD distributions they depend on $\frac{1}{2}(j+k)$, i.e., the average of the ranks of the extreme elements of the trimmed means.

The squares of our bounds (3.5) and (3.8) for single order statistics and large $i$ are 1 less than those obtained by Gajek and Rychlik (1998). The respective bounds are attained by similar distributions, being inverses of polynomials on the left part of the support and $W$, up to a location-scale transformation, on the right. The situation is similar for the distribution functions attaining the bounds for trimmed means with large $E_W T_{j,k:n}$. For moderate $j$ and $k$, the equalities are attained by distribution functions without the left inverse-polynomial part. The difference between our results for single order statistics with small $i$ and those obtained by Gajek and Rychlik (1998) lies in the fact that their bounds were always positive and attained by $W$, whereas we additionally have trivial bounds for small $j$ and $k$ ($i$, respectively). Note that the distribution functions attaining the positive bounds are absolutely continuous in every case.
4 NUMERICAL RESULTS
Table I contains the numerical values of the optimal upper bounds on the trimmed means of
i.i.d. samples of size 15. For each pair of entries j and k we have bounds for general (with
finite second moment), DD and DFR distributions, respectively. All the bounds are expressed
in the standard deviation units. The evaluations for the second and third row of each sub-
column are determined from Propositions 2 and 3, respectively. We present the results
obtained for 9 � j � k � n ¼ 15, since for j � 8 all the bounds for DD and DFR distribu-
tions are equal to 0. For trimmed means coming from general distributions some of these
bounds are positive, but we omit them here since we present these results only for compar-
ison with those obtained for restricted families. The bounds for DFR distributions are tighter
than those for DD distributions since the former class is narrower. Obviously, also the bounds
for DD distributions are sharper than those obtained for general distributions, except for the
case j ¼ k ¼ n ¼ 15: We observe the increase of bounds when we increase j or k: Some
values occur several times in the DD case. This happens for identical 17 � j þ k � 21,
when the bound is attained by the uniform distribution.
Acknowledgement
The author wishes to thank Tomasz Rychlik for helpful comments and discussion.
References
Barnett, V. and Lewis, T. (1984). Outliers in Statistical Data, 2nd ed. Wiley, New York.
Burrows, P. M. (1972). Expected selection differentials for directional selection. Biometrics, 28, 1091–1100.
Chen, Z. (1998). A note on bias robustness of the median. Statist. Probab. Lett., 38, 363–368.
Danielak, K. and Rychlik, T. (2003). Exact bounds for the bias of trimmed means. Austral. & New Zealand J. Statist. (to appear).
Gajek, L. and Rychlik, T. (1998). Projection method for moment bounds on order statistics from restricted families. II. Independent case. J. Multivar. Anal., 64, 156–182.
Gumbel, E. J. (1954). The maxima of the mean largest value and of the range. Ann. Math. Statist., 25, 76–84.
Hartley, H. O. and David, H. A. (1954). Universal bounds for mean range and extreme observation. Ann. Math. Statist., 25, 85–99.
Huber, P. J. (1964). Robust estimation of a location parameter. Ann. Math. Statist., 35, 73–101.
Jaeckel, L. A. (1971). Robust estimates of location: Symmetry and asymmetric contamination. Ann. Math. Statist., 42, 1020–1034.
Rychlik, T. (1987). An asymptotically most bias-stable estimator of location parameter. Statistics, 18, 563–571.
Rychlik, T. (2001a). Mean-variance bounds for order statistics from dependent DFR, IFR, DFRA and IFRA samples. J. Statist. Plann. Inference, 92, 21–38.
Rychlik, T. (2001b). Projecting Statistical Functionals, Lecture Notes in Statistics, Vol. 160. Springer-Verlag, New York.
Schoenberg, I. J. (1959). On variation diminishing approximation methods. In: Langer, R. E. (Ed.), On Numerical Approximation: Proc. of Symp. Madison, 1958, Univ. Wisconsin Press, Madison, pp. 249–274.
van Zwet, W. R. (1964). Convex Transformations of Random Variables, Math. Centre Tracts, Vol. 7. Mathematisch Centrum, Amsterdam.
Zieliński, R. (1988). Stable estimation of location parameter – nonasymptotic approach. Statistics, 19, 229–231.
APPENDIX A: THE PROOFS
We shall frequently refer to the following lemma, presented in Gajek and Rychlik (1998). The proof of the first statement, known as the Variation Diminishing Property (VDP) of Bernstein polynomials, can be found in Schoenberg (1959).

LEMMA 4 (VDP) The number of zeros of a given combination of Bernstein polynomials
\[ B(x) = \sum_{k=0}^{m} a_k B_{k,m}(x), \quad x \in (0,1), \tag{A.1} \]
does not exceed the number of sign changes of the sequence $a_0, \ldots, a_m$. The first and the last signs of (A.1) are identical with the signs of the first and last nonzero elements of $a_0, \ldots, a_m$, respectively.

We also apply the auxiliary formulae
\[ x B_{l,m}(x) = \frac{l+1}{m+1}\, B_{l+1,m+1}(x), \tag{A.2} \]
\[ (1-x)^s B_{l,m}(x) = \frac{(m-l+s)!\, m!}{(m-l)!\, (m+s)!}\, B_{l,m+s}(x), \tag{A.3} \]
\[ B'_{l,m}(x) = m [B_{l-1,m-1}(x) - B_{l,m-1}(x)], \tag{A.4} \]
\[ \int_y^1 B_{l,m}(x)\, dx = \frac{1}{m+1} \sum_{s=0}^{l} B_{s,m+1}(y). \tag{A.5} \]
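Identities (A.2), (A.4) and (A.5) are easy to confirm numerically; the sketch below (ours) checks them with an exact evaluation, a central finite difference and a quadrature, respectively ((A.3) can be checked the same way):

```python
from math import comb

def B(l, m, x):
    """Bernstein polynomial B_{l,m}(x); zero outside 0 <= l <= m."""
    if l < 0 or l > m:
        return 0.0
    return comb(m, l) * x**l * (1 - x)**(m - l)

l, m, x, y = 2, 5, 0.37, 0.4

# (A.2): x B_{l,m}(x) = (l+1)/(m+1) B_{l+1,m+1}(x)
lhs_a2 = x * B(l, m, x)
rhs_a2 = (l + 1) / (m + 1) * B(l + 1, m + 1, x)

# (A.4): B'_{l,m}(x) = m [B_{l-1,m-1}(x) - B_{l,m-1}(x)], via a central difference
eps = 1e-6
lhs_a4 = (B(l, m, x + eps) - B(l, m, x - eps)) / (2 * eps)
rhs_a4 = m * (B(l - 1, m - 1, x) - B(l, m - 1, x))

# (A.5): int_y^1 B_{l,m} dx = (1/(m+1)) sum_{s=0}^{l} B_{s,m+1}(y), midpoint rule
N = 20_000
lhs_a5 = sum(B(l, m, y + (t + 0.5) * (1 - y) / N) for t in range(N)) * (1 - y) / N
rhs_a5 = sum(B(s, m + 1, y) for s in range(l + 1)) / (m + 1)

print(abs(lhs_a2 - rhs_a2) < 1e-12, abs(lhs_a4 - rhs_a4) < 1e-4,
      abs(lhs_a5 - rhs_a5) < 1e-6)   # True True True
```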
Proof of Lemma 2 (a) We analyze the behavior of $f_{j,k:n}U = f_{j,k:n}$. The case $n = 2$ is obvious, since $f_{1,1:2}(x) = 2(1-x)$ and $f_{2,2:2}(x) = 2x$. Let $n \ge 3$. Note that $f_{j,k:n}(0) = 0$ and $f_{j,k:n}(1) = 0$ except for $k = n$. Using (A.4) and adopting the convention that $B_{l,m}(x) = 0$ for $l > m$ or $l < 0$, we get
\[ f'_{j,k:n}(x) = \frac{n(n-1)}{k-j+1} \{ B_{j-2,n-2}(x) - B_{k-1,n-2}(x) \} \]
and
\[ f''_{j,k:n}(x) = \frac{n(n-1)(n-2)}{k-j+1} \{ B_{j-3,n-3}(x) - B_{j-2,n-3}(x) - B_{k-2,n-3}(x) + B_{k-1,n-3}(x) \}. \]
If $j = 1$ and $k < n$, then, by the VDP, the $f'_{j,k:n}$ are negative, and so the $f_{j,k:n}$ are decreasing. It follows from the VDP that the $f'_{j,k:n}$ are first positive, then negative (are $+-$) for $2 \le j \le k \le n-1$, and positive for $k = n$. Hence the $f_{j,k:n}$ are increasing–decreasing in the former case and increasing in the latter. For $2 = j \le k \le n-1$ the functions $f''_{j,k:n}$ are negative–positive if $k \le n-2$ and negative
if $k = n-1$. The VDP combined with the analysis of $f'_{j,k:n}$ shows that in both cases the $f_{j,k:n}$ are first concave increasing and then decreasing. For $2 < j \le k \le n-1$ the behavior of $f'_{j,k:n}$ and the VDP show that the $f''_{j,k:n}$ are $+-+$ if $k \le n-2$ and $+-$ if $k = n-1$, and hence the $f_{j,k:n}$ are as we claimed. We now turn to the case $k = n$. If $j = 2$, then the $f''_{j,k:n}$ are negative and the $f_{j,k:n}$ are concave increasing. If $2 < j < k$, then the $f''_{j,k:n}$ are positive–negative, and finally for $j = k$ they are positive. Therefore, the $f_{j,k:n}$ are first convex increasing, then concave increasing in the former case, and convex increasing in the latter.
(b) For $n = 2$ we have $f_{1,1:2}V(x) = 2e^{-x}$, $f_{2,2:2}V(x) = 2(1 - e^{-x})$, and the claim follows. Let $n \ge 3$ and $C_{l,m}(x) = B_{l,m}(1 - e^{-x})$. Since $C_{l,m}$ is a composition of the increasing function $V$ and a Bernstein polynomial, the VDP holds for linear combinations of the $C_{l,m}$ as well. Applying the formula
\[ C_{l,m}(x)\, e^{-sx} = \frac{(m-l+s)!\, m!}{(m-l)!\, (m+s)!}\, C_{l,m+s}(x), \]
we obtain
\[ (f_{j,k:n}V)'(x) = \frac{n(n-1)e^{-x}}{k-j+1} \{ C_{j-2,n-2}(x) - C_{k-1,n-2}(x) \} = \frac{n}{k-j+1} \{ (n-j+1) C_{j-2,n-1}(x) - (n-k) C_{k-1,n-1}(x) \}, \]
\[ (f_{j,k:n}V)''(x) = \frac{n(n-1)}{k-j+1}\, e^{-x} \{ (n-j+1) C_{j-3,n-2}(x) - (n-j+1) C_{j-2,n-2}(x) - (n-k) C_{k-2,n-2}(x) + (n-k) C_{k-1,n-2}(x) \}. \]
Obviously, the $f_{j,k:n}V$ are decreasing for $j = 1$, increasing–decreasing for $2 \le j \le k \le n-1$, and increasing for $2 \le j \le k = n$. It remains to verify the signs of $(f_{j,k:n}V)''$. If $2 = j \le k \le n-1$, then the $(f_{j,k:n}V)''$ are $-+$. If $2 < j \le k \le n-1$, then the $(f_{j,k:n}V)''$ are $+-+$. For $j = 2$ and $k = n$ the functions $(f_{j,k:n}V)''$ are negative, and for $2 < j \le k = n$ they are positive–negative. The required claims follow immediately. ∎
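The reduction formula for $C_{l,m}(x) e^{-sx}$ used in part (b) can be verified numerically (a sketch of ours, with assumed helper names):

```python
from math import comb, exp, factorial

def C(l, m, x):
    """C_{l,m}(x) = B_{l,m}(1 - e^{-x}): Bernstein polynomial composed with V."""
    u = 1 - exp(-x)
    return comb(m, l) * u**l * (1 - u)**(m - l)

# Check C_{l,m}(x) e^{-sx} = (m-l+s)! m! / ((m-l)! (m+s)!) * C_{l,m+s}(x).
l, m, s, x = 3, 6, 2, 0.9
lhs = C(l, m, x) * exp(-s * x)
coef = factorial(m - l + s) * factorial(m) / (factorial(m - l) * factorial(m + s))
rhs = coef * C(l, m + s, x)
print(abs(lhs - rhs) < 1e-12)   # True
```

The identity follows because $1 - V(x) = e^{-x}$, so multiplying by $e^{-sx}$ simply raises the $(1-u)$ factor from power $m-l$ to $m-l+s$.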
The structure of the proof of Lemma 3 is similar to that of Lemma 1 in Gajek and Rychlik (1998), concerning the projection of functions onto the convex cone of nondecreasing convex functions $g$ which additionally satisfy $g(a) = 0$. However, our proof contains so many modifications that we decided to present it here in a complete and detailed form.

Proof of Lemma 3 Note first that it is enough to consider functions which are nonnegative, because $\max\{0, g\}$ is nondecreasing, convex and less distant from $h$ than an arbitrary $g \in C_W$. In addition, we take into account only functions $g \in C_W$ which cross $h$ at some $x_0 \in (a,c)$, because the projection $P_W h$ satisfies (1.4). Take an arbitrary function $g \in C_W$ satisfying the two above conditions. Let $\gamma_0 = \sup\{x \in [a,b]\colon g(x) = h(x)\}$. If $g(x) > h(x)$ for every $x \in [a,b]$, then we put $\gamma_0 = -\infty$, and define $\gamma = \max\{a, \gamma_0\}$. We now analyze the relations between $g$ and $h$ in $[b,d)$.

Suppose first that $g > h$ in a right neighborhood of $b$. Then two cases are possible:

(i) either $g > h$ on the whole $(b,d)$, or $g$ and $h$ are tangent at one point of $(b,c)$;

(ii) $g$ crosses $h$ at a point $\delta \in (b,c)$, say, and either $g < h$ on $(\delta, d)$ or $g$ and $h$ cross each other at a single point $\Delta \in (\delta, d)$.

In both cases we construct modifications $g_i$, $i = 1, 2$, of the original $g$ which belong to $C_W^*$ and are less distant from $h$ than $g$.
(i) We have g(x) ≥ h(x) for every x ∈ (γ, d). Note that γ = γ_0 > a in this case, because (1.4) would not be satisfied otherwise. There exists a straight line, say l_1, separating the convex curve g and the concave one h in (b, c). This line must have a common point, say β, with h such that β ∈ [γ, b], because l_1 runs beneath g and over the concave part of h, and g(γ) = h(γ). Let g_1 = max{h, l_1}. Obviously g_1 ∈ C_W^* and h ≤ g_1 ≤ g in [a, d), which finally gives ‖g_1 − h‖ ≤ ‖g − h‖.
(ii) Extend the definition of Δ as follows:

Δ = sup{x ≥ δ : g(x) ≤ h(x)}.

If g < h on (δ, d) we have Δ = d, which is possible only if d < ∞. Note that g(Δ) = lim_{x↗Δ} g(x) ≤ h(Δ) = lim_{x↗Δ} h(x) < ∞ are well defined. Let l_{δΔ} denote the linear function passing through (δ, g(δ)) and (Δ, g(Δ)). Then we have

g(x) ≤ l_{δΔ}(x) ≤ h(x), x ∈ [δ, Δ],

and for Δ < d

h(x) ≤ l_{δΔ}(x) ≤ g(x), x ∈ [Δ, d).

The latter inequalities hold for x ∈ (b, d) as well. The convexity of g implies l_{δΔ}(x) ≤ g(x) for x ∈ [a, δ]. The following two subcases are possible: either h(x) ≥ l_{δΔ}(x) for x ∈ [a, δ], or there exists a unique β ∈ [a, δ] such that h(β) = l_{δΔ}(β). If γ_0 > −∞, then β exists and β > γ_0 ≥ a. Moreover, h'(β) ≤ l'_{δΔ}(β). In the former subcase g_2(x) = l_{δΔ}(x) is the desired modification. In the latter subcase we take

g_2(x) = h(x) for a ≤ x ≤ β, and g_2(x) = l_{δΔ}(x) for β ≤ x < d.

This ends the proof in case (ii).
Suppose now that g < h in the right neighborhood of b. Then γ = γ_0, and the relation g < h holds in the right neighborhood of γ_0 as well. It follows that there exists at most one point Δ > b such that g(Δ) = h(Δ). If g < h holds in the whole (b, d), then we set Δ = d. Note that in the case Δ = d = ∞ the function g is constant and belongs to C_W^*.

For each y ∈ [a, b] we define the line tangent to h at y as

l_y(x) = h(y) + h'(y)(x − y).   (A.6)

Notice that the slopes of (A.6) increase. The function y ↦ l_y(Δ), y ∈ [a, b], is strictly increasing, continuous and satisfies

l_b(Δ) ≥ h(Δ) ≥ g(Δ).

Two more cases are possible:

(iii) l_{γ_0}(Δ) ≤ h(Δ),
(iv) l_{γ_0}(Δ) > h(Δ).

In case (iii) the relations l_{γ_0}(Δ) ≤ h(Δ) ≤ l_b(Δ) imply the existence of a point δ ∈ [γ_0, b] such that l_δ(Δ) = h(Δ). It follows that

g(x) ≤ l_δ(x) ≤ h(x), x ∈ [δ, Δ],

and for Δ < d

h(x) ≤ l_δ(x) ≤ g(x), x ∈ [Δ, d).
318 K. DANIELAK
Hence

g_3(x) = h(x) for a ≤ x ≤ δ, and g_3(x) = l_δ(x) for δ ≤ x < d

belongs to C_W^* and is less distant from h than g.

(iv) If l_{γ_0}(Δ) > h(Δ), then

l'_{γ_0} = h'(γ_0) > [h(Δ) − h(γ_0)]/(Δ − γ_0) ≥ [g(Δ) − g(γ_0)]/(Δ − γ_0) ≥ g'(γ_0+).

Let l_{γ_0Δ} denote the linear function secant to g at γ_0 and Δ. If γ_0 = a, then l_{γ_0Δ} runs between g and h in the whole [a, d) and we take it as a modification of g. If a < γ_0, then we have g'(γ_0+) ≥ g'(γ_0−) by the convexity of g. It follows that h'(γ_0) > g'(γ_0−) and h < g in the left neighborhood of γ_0. Then l_{γ_0Δ}(x) < l_{γ_0}(x) in (γ_0, d) and

g(x) ≤ l_{γ_0Δ}(x) ≤ h(x), x ∈ [γ_0, Δ],   (A.7)
h(x) ≤ l_{γ_0Δ}(x) ≤ g(x), x ∈ (Δ, d) for Δ < d.   (A.8)

If there exists δ ∈ [a, γ_0) such that g(δ) = h(δ) and g(x) > h(x) for x ∈ (δ, γ_0), then

l_{γ_0Δ}(δ) ≤ g(δ) = h(δ).

Moreover, h(γ_0−) < l_{γ_0Δ}(γ_0−) ≤ g(γ_0−). It follows that there exists a point δ_0 ∈ [δ, γ_0) such that l_{γ_0Δ} crosses h at δ_0 and

h(x) < l_{γ_0Δ}(x) ≤ g(x), δ_0 ≤ x ≤ γ_0,   (A.9)

and h'(δ_0) < l'_{γ_0Δ}. This together with (A.7)–(A.9) and h(δ_0) = l_{γ_0Δ}(δ_0) implies that

g_4(x) = h(x) for a ≤ x ≤ δ_0, and g_4(x) = l_{γ_0Δ}(x) for δ_0 ≤ x < d,   (A.10)

is the desired modification.

If g(x) > h(x) for all x ∈ [a, γ_0), then l_{γ_0Δ} and h can cross at some δ_0 ∈ [a, γ_0). The arguments of the previous subcase imply that (A.10) improves the approximation of h. Finally, if l_{γ_0Δ}(x) ≥ h(x) for all x ∈ (a, γ_0), then it suffices to replace g by l_{γ_0Δ}. ∎
Proof of Proposition 1 Suppose first that the projection has the form (2.4). We try to determine the optimal parameters y and α by minimizing the function

D(y, α) = ∫_y^d [h(x) − h(y) − α(x − y)]² w(x) dx   (A.11)

for y ∈ [a, b] and α ≥ h'(y). For a fixed y, (A.11) is a quadratic function of α, attaining its global minimum at α_1(y) defined in (2.1). The restriction implies that the optimal slope is max{α_1(y), h'(y)}. Using the same reasoning as in Gajek and Rychlik (1998, Proof of Lemma 2), we show that the latter infimum in

inf D(y, α) = min{ inf_{y∈V} D(y, α_1(y)), inf_{y∈V^c} D(y, h'(y)) },

where V = {y ∈ (a, b] : Y(y) ≥ 0} and V^c = (a, b] \ V, is redundant. If y ∈ V^c, then

dD(y, h'(y))/dy = −2h''(y) ∫_y^d [h(x) − h'(y)(x − y) − h(y)](x − y)w(x) dx.
Obviously, h''(y) ≥ 0, and

∫_y^d [h(x) − h'(y)(x − y) − h(y)](x − y)w(x) dx ≤ ∫_y^d [h(x) − α_1(y)(x − y) − h(y)](x − y)w(x) dx = 0,

by the definition of α_1(y). It follows that dD(y, h'(y))/dy ≥ 0, and we do not increase D(y, h'(y)) by moving y to the left until either h'(y) = α_1(y) or y = a. Therefore it is enough to consider only y ∈ (a, b) satisfying Y(y) ≥ 0 and α = α_1(y).
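The orthogonality property used here — the least-squares slope α_1(y) makes the weighted residual integrate to zero against (x − y) — is easy to check numerically. The sketch below assumes the standard representation α_1(y) = ∫_y^d (x − y)[h(x) − h(y)]w(x) dx / ∫_y^d (x − y)² w(x) dx and uses an arbitrary illustrative convex h and weight w (both choices are assumptions, not taken from the paper):

```python
import numpy as np

def integrate(fx, x):
    # plain trapezoidal rule (avoids version-specific numpy helpers)
    return float(np.sum(0.5 * (fx[1:] + fx[:-1]) * np.diff(x)))

# alpha_1(y): slope of the least-squares line through (y, h(y)) fitted to h
# on (y, d) with weight w, i.e. the minimizer of (A.11) over the slope.
h = lambda x: x ** 3          # illustrative convex h (assumption)
w = lambda x: np.exp(-x)      # illustrative weight (assumption)
y, d = 0.3, 5.0
x = np.linspace(y, d, 200_000)
a1 = integrate((x - y) * (h(x) - h(y)) * w(x), x) / \
     integrate((x - y) ** 2 * w(x), x)

# Orthogonality: the residual integrated against (x - y)w(x) vanishes,
# which is exactly the comparison step used for the two integrals above.
resid = integrate((h(x) - h(y) - a1 * (x - y)) * (x - y) * w(x), x)
print(abs(resid) < 1e-8)
```

Replacing a1 by any larger slope makes the same integral negative, which is the inequality invoked in the proof.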
Then every pair (y, α_1(y)) determines an element of the convex cone of nondecreasing convex functions. The necessary condition for a function with such parameters to be the projection is Z(y) = 0 (cf. Lemma 1). We show that the function α_1(y)(x − y) + h(y), for y satisfying Z(y) = 0, is the best linear approximation of h in the interval (y, d). We easily determine the parameters

α_2(y) = [∫_y^d (x − y)h(x)w(x) dx ∫_y^d w(x) dx − ∫_y^d h(x)w(x) dx ∫_y^d (x − y)w(x) dx] / [∫_y^d (x − y)²w(x) dx ∫_y^d w(x) dx − (∫_y^d (x − y)w(x) dx)²],

β_2(y) = [∫_y^d h(x)w(x) dx ∫_y^d (x − y)²w(x) dx − ∫_y^d (x − y)h(x)w(x) dx ∫_y^d (x − y)w(x) dx] / [∫_y^d (x − y)²w(x) dx ∫_y^d w(x) dx − (∫_y^d (x − y)w(x) dx)²]

of the optimal linear approximation α_2(y)(x − y) + β_2(y) of h in (y, d). Gajek and Rychlik (1998, p. 173) showed that

Z(y) = ∫_y^d [h(x) − α_2(y)(x − y) − β_2(y)]w(x) dx + [β_2(y) − h(y)] { ∫_y^d w(x) dx − (∫_y^d (x − y)w(x) dx)² / ∫_y^d (x − y)²w(x) dx }.

By Lemma 1 and the fact that α_2(y)(x − y) + β_2(y) is the projection of h onto the convex cone of linear functions in (y, d), the first expression is equal to 0. By the Schwarz inequality, the expression in braces is greater than 0. It follows that the condition Z(y) = 0 is satisfied iff β_2(y) = h(y). Then α_1(y) = α_2(y). This proves our claim.
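These closed-form coefficients are simply the weighted least-squares line for h on (y, d); they can be cross-checked against a generic unweighted fit. The sketch below uses an arbitrary illustrative h and a constant weight (both assumptions made for the demonstration only):

```python
import numpy as np

def alpha2_beta2(h, w, y, d, n=200_001):
    # ratio-of-integrals expressions for the weighted LS line a*(x-y) + b
    x = np.linspace(y, d, n)
    dx = np.diff(x)
    integ = lambda f: float(np.sum(0.5 * (f[1:] + f[:-1]) * dx))
    W, Xw = integ(w(x)), integ((x - y) * w(x))
    X2w = integ((x - y) ** 2 * w(x))
    Hw, XHw = integ(h(x) * w(x)), integ((x - y) * h(x) * w(x))
    den = X2w * W - Xw ** 2          # positive by the Schwarz inequality
    return (XHw * W - Hw * Xw) / den, (Hw * X2w - XHw * Xw) / den

h = lambda x: np.exp(x)              # illustrative h (assumption)
w = lambda x: np.ones_like(x)        # illustrative constant weight (assumption)
y, d = 0.0, 1.0
a2, b2 = alpha2_beta2(h, w, y, d)

# Compare with an unconstrained least-squares fit on a dense grid.
x = np.linspace(y, d, 200_001)
a_fit, b_fit = np.polyfit(x - y, h(x), 1)
print(abs(a2 - a_fit) < 1e-3, abs(b2 - b_fit) < 1e-3)
```

With a non-constant w, the comparison would require a weighted fit (e.g. the `w` argument of `np.polyfit` takes square roots of the weights).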
Let h_y stand for the function that equals h in [a, y) and α_2(y)(x − y) + β_2(y) in [y, d). We show that the function y ↦ ‖h − h_y‖ is decreasing. Take any z > y. Putting h in [a, z] and h_y in (z, d), we get a function less distant from h than h_y. Clearly, the linear part of the function h_y cannot lie closer to h than h_z, the best linear approximation of h in [z, d). It follows that h_{y*}, where y* = sup{y ∈ Y}, is the best candidate for a projection of h onto C_W^* in the class of all functions h_y for y ∈ Y. Actually, it is the desired projection, because it is less distant from h than h_a, the best linear approximation of h in [a, d). If the set Y is empty, there is no candidate for a projection among the functions that are equal to h in some interval [a, y] and linear in (y, d). Hence in this case the projection must be a linear function of the form (2.5), with α_2 = α_2(a), β_2 = β_2(a). The restrictions on the parameters follow from the fact that the projection must be nondecreasing and integrate to 1. ∎
The following lemma is a simplified version of Lemma 4 in Gajek and Rychlik (1998).
LEMMA 5 (a) The formula

dD(y, α_1(y))/dy = 2Y(y)Z(y)   (A.12)

holds true.

(b) If V^+ = {y ∈ (a, b] : Y(y) > 0} = (a, v) and Z has a finite number of zeros, then Z is either positive, or negative, or changes its sign once from − to + in V^+.
Proof of Proposition 2 The proof is similar to that of Proposition 1 in Gajek and Rychlik (1998). We start with finding the projections P_U f_{j,k:n} of h = f_{j,k:n} onto the convex cone C_U. By the arguments of Section 1, E_F(T_{j,k:n} − μ) ≤ ‖P_U f_{j,k:n} − 1‖_U σ, and the bound is attained by the distribution F with centered quantile function F^{−1} − μ proportional to P_U f_{j,k:n} − 1.

Assume that n ≥ 3. If j = 2 and k < n, then by Lemma 2 we have a = b = 0, so the projection P_U f_{2,k:n} is a linear function satisfying (2.5). It remains to consider the cases 2 < j ≤ k ≤ n − 1 and 2 < j < k = n. As in the proof of Proposition 1, suppose first that P_U f_{j,k:n} is of the form (2.4). In order to verify whether y* exists and, if so, to find it, we calculate (2.2) and (2.3) for h = f_{j,k:n} and examine their signs. Applying the auxiliary formulas (A.2)–(A.5) we get
Y(y) = Ỹ(y) / [2(n + 1)(k − j + 1)(1 − y)³],   (A.13)

Z(y) = Z̃(y) / [4(n + 1)(k − j + 1)(1 − y)],   (A.14)

where Ỹ and Z̃ are combinations of Bernstein polynomials defined below. Since the denominators in (A.13) and (A.14) are positive, it is enough to examine the signs of Ỹ and Z̃. We start with analyzing

Ỹ(y) = Σ_{m=0}^{k−1} a_m B_{m,n+1}(y),

where

a_m = 3(k − j + 1)(k + j − 2m) > 0, m = 0, …, j − 3,
a_{j−2} = 3(k − j + 1)(k − j + 4) − 2(n − j + 3)!/(n − j)! < 0,
a_m = 3[(k − m + 1)(k − m) − (n − m + 1)(n − m)] < 0, m = j − 1, …, k − 2,
a_{k−1} = 2{3 + [(n − k + 2)!/(n − k)!](n − k − 3/2)},

which is equal to 0 for k = n − 1 or k = n and positive otherwise. By Lemma 4, Ỹ changes its sign once from + to − if k = n − 1 or n. Otherwise it is positive near 0 and 1, and possibly negative in an inner subinterval of (0, 1). In fact, Ỹ(b) < 0 in both cases, where b is the first inflexion point of f_{j,k:n}. Indeed, the line tangent to f_{j,k:n} at the point b lies entirely over f_{j,k:n} in (b, 1), and obviously α_1(b) < f'_{j,k:n}(b). Therefore Ỹ has a single root v in (0, b) and it is nonnegative in [0, v].
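A small script can sanity-check the sign pattern of the coefficients {a_m} as displayed above, including the vanishing of a_{k−1} for k = n − 1 and k = n; the formulas below merely restate that display, so they are only as reliable as its reconstruction from the garbled source:

```python
from math import factorial

def a_coeffs(j, k, n):
    # coefficients of Y~(y) = sum_m a_m B_{m,n+1}(y), for 2 < j <= k <= n
    a = [3 * (k - j + 1) * (k + j - 2 * m) for m in range(j - 2)]       # m = 0..j-3
    a.append(3 * (k - j + 1) * (k - j + 4)
             - 2 * factorial(n - j + 3) // factorial(n - j))            # a_{j-2}
    a += [3 * ((k - m + 1) * (k - m) - (n - m + 1) * (n - m))
          for m in range(j - 1, k - 1)]                                 # m = j-1..k-2
    a.append(2 * (3 + (factorial(n - k + 2) // factorial(n - k))
                  * (n - k - 1.5)))                                     # a_{k-1}
    return a

print(a_coeffs(3, 8, 8)[-1], a_coeffs(3, 7, 8)[-1])   # a_{k-1} = 0 for k = n, n-1
print(a_coeffs(3, 5, 8)[-1] > 0, a_coeffs(3, 5, 8)[1] < 0)
```

The leading coefficients stay positive, a_{j−2} through a_{k−2} are negative, and the last one changes status exactly at k = n − 1, matching the case split in the proof.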
Representing Z̃ as a combination of Bernstein polynomials, we obtain

Z̃(y) = Σ_{m=0}^{k−1} b_m B_{m,n+1}(y),

where

b_m = 4(k − j + 1)[n + 1 − (3(k + j) − 2m)/4], m = 0, …, j − 2,
b_m = 4nk − [n² + 3k² + (n − k)(2m + 1)], m = j − 1, …, k − 1.

Note that the sequence {b_m} is increasing for m = 0, …, j − 2. Therefore Z̃ is positive near 0 iff b_0 ≥ 0, i.e. for j + k ≤ (4/3)(n + 1). Then, by Lemma 5(b), Z is positive in (0, v). By Lemma 5(a), D(y, α_1(y)) is increasing, hence the best approximation of the form (2.4) is attained for y = 0. Clearly, it can be improved by the best linear approximation. This implies that P_U f_{j,k:n} is of the form (2.5) in this case. If, in addition, j + k ≤ n + 1, then α_2 ≤ 0 and P_U f_{j,k:n} = 1. Otherwise the projection is an increasing linear function with parameters given by (2.6) and (2.7).
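In the linear regime the projection's slope and intercept reduce to α_2 = 6[(j + k)/(n + 1) − 1] and β_2 = 4 − 3(j + k)/(n + 1), the values that reappear below; this can be verified numerically by least-squares fitting f_{j,k:n} written as an average of the beta densities of uniform order statistics — a sketch under that standard identification:

```python
import numpy as np
from math import comb

def f_jkn(x, j, k, n):
    # f_{j,k:n}: average of the densities of the j-th,...,k-th order statistics
    # of n uniforms, f_{i:n}(x) = n * C(n-1, i-1) * x**(i-1) * (1-x)**(n-i)
    return sum(n * comb(n - 1, i - 1) * x ** (i - 1) * (1 - x) ** (n - i)
               for i in range(j, k + 1)) / (k - j + 1)

j, k, n = 3, 6, 6                 # here n + 1 < j + k <= (4/3)(n + 1)
x = np.linspace(0.0, 1.0, 400_001)
slope, intercept = np.polyfit(x, f_jkn(x, j, k, n), 1)   # unweighted LS on (0, 1)

a2 = 6 * ((j + k) / (n + 1) - 1)  # closed-form slope
b2 = 4 - 3 * (j + k) / (n + 1)    # closed-form intercept
print(abs(slope - a2) < 1e-3, abs(intercept - b2) < 1e-3)
```

For this (j, k, n) the fitted slope 12/7 is positive and the intercept 1/7 is positive, so the projection is indeed an increasing linear function.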
Now we turn to the case j + k > (4/3)(n + 1). Since b_0 < 0, Z̃ is negative near 0. By Lemma 5(b), Z is negative or has exactly one zero in (0, v]. We show that the latter holds. Suppose, on the contrary, that Z̃ does not have any zeros in this interval. By Proposition 1, the projection is then a linear function with α_2 = 6[(j + k)/(n + 1) − 1] > 2 and β_2 = 4 − 3(j + k)/(n + 1) < 0. The latter inequality, by Remark 1, contradicts the statement that the projection is a linear function. Therefore the projection is a function of the form (2.4), with y* defined in the last statement of the proposition.

Summing up: if j + k ≤ n + 1, then P_U f_{j,k:n} is constant, equal to 1; if n + 1 < j + k ≤ (4/3)(n + 1), then it is an increasing linear function; and for j + k > (4/3)(n + 1) it is a function of the form (2.4). Our next objective is to evaluate B(j, k, n) = ‖P_U h − 1‖_U for the different forms of the function P_U h with h = f_{j,k:n}. Since ‖P_W h − 1‖²_W = ‖P_W h‖²_W − 1, it is enough to compute the latter norm and then subtract 1. In the first case we easily obtain that the bound is equal to 0. Elementary but tedious calculations lead us to the formulae (3.1) and (3.3). Using (1.6), we find the distributions attaining the respective bounds. ∎
Proof of Proposition 3 We proceed analogously to the proof of Proposition 2. We start with the case j = 2. By Lemma 2, b = a = 0 implies that the projection P_V f_{2,k:n}^V of f_{2,k:n}^V is a linear function of the form (2.5). Now we turn to the case j > 2. Suppose that the projection of h = f_{j,k:n}^V onto the cone C_V is of the form (2.4). We calculate (2.1), (2.2) and (2.3) for this given h.

Gajek and Rychlik (1998, p. 177) showed that for h = f_{i:n}^V we have

α_1(y) = ∫_y^∞ (x − y)[f_{i:n}^V(x) − f_{i:n}^V(y)]e^{−x} dx / ∫_y^∞ (x − y)²e^{−x} dx
       = (e^y/2) [ Σ_{m=0}^{i−1} S(i − m, n − m)C_{m,n}(y) − (n − i + 1)C_{i−1,n}(y) ].

Since h ↦ α_1(y) and h ↦ h'(y) are linear functionals, for h = f_{j,k:n}^V = (1/(k − j + 1)) Σ_{i=j}^k f_{i:n}^V we obtain

α_1(y) = [e^y / (2(k − j + 1))] Σ_{i=j}^k [ Σ_{m=0}^{i−1} S(i − m, n − m)C_{m,n}(y) − (n − i + 1)C_{i−1,n}(y) ]
and

h'(y) = [1/(k − j + 1)] Σ_{i=j}^k [n(n − i + 1)C_{i−2,n−1}(y) − n(n − i)C_{i−1,n−1}(y)].

Therefore Y(y) = α_1(y) − h'(y) can be represented as

Y(y) = [e^y / (2(k − j + 1))] Σ_{m=0}^{k−1} a_m C_{m,n}(y) = [e^y / (2(k − j + 1))] Ỹ(y),

where

Ỹ(y) = (k − j + 1) Σ_{m=0}^{j−1} T(j − m, k − m, n − m)C_{m,n}(y)
     − 2 Σ_{m=j−2}^{k−2} (n − m − 1)(n − m)C_{m,n}(y)
     + 2 Σ_{m=j−1}^{k−1} (n − m)(n − m − 3/2)C_{m,n}(y)
     + Σ_{m=j}^{k−1} (k − m)T(1, k − m, n − m)C_{m,n}(y).

Gajek and Rychlik (1998) showed that in the case j = k, Y(y) is +− for k = n and +−+ otherwise, and has exactly one zero v ∈ (0, b), where b is the first inflexion point of f_{j,k:n}^V. Below we analyze the case j < k. We have

a_m = (k − j + 1)T(j − m, k − m, n − m), m = 0, …, j − 3,
a_{j−2} = (k − j + 1)T(2, k − j + 2, n − j + 2) − 2(n − j + 1)(n − j + 2),
a_{j−1} = (k − j + 1)T(1, k − j + 1, n − j + 1) − n + j − 1,
a_m = (k − m)T(1, k − m, n − m) − n + m, m = j, …, k − 2,
a_{k−1} = 1/(n − k + 1) + 2(n − k + 1)(n − k − 1/2).

We easily see that a_m > 0 for m = 0, …, j − 3 and a_{j−2} < 0. If k = n, then a_m = 0 for m = j − 1, …, k − 1, and it follows from the VDP that Y(y) is +− in this case. If k < n, then a_m < 0 for m = j − 1, …, k − 2 and a_{k−1} > 0. Thus Y(y) is either +−+ or +. Using the same arguments as in the proof of Proposition 2, we show that Y(y) is negative at b. It follows that Y has exactly one zero v in (0, b) and it is nonnegative in [0, v].
Now we analyze the behavior of Z(y). Elementary computations show that

Z(y) = [1/(2(k − j + 1))] { Σ_{m=0}^{j−2} (k − j + 1)[2 − T(j − m, k − m, n − m)]C_{m,n}(y)
     + [(k − j + 1)(2 − T(1, k − j + 1, n − j + 1)) − n + j − 1]C_{j−1,n}(y)
     + Σ_{m=j}^{k−1} [(k − m)(2 − T(1, k − m, n − m)) − n + m]C_{m,n}(y) }
     = [1/(2(k − j + 1))] Σ_{m=0}^{k−1} b_m C_{m,n}(y).
By the arguments used in the proof of Proposition 2, the behavior of Z in (0, v) depends only on the first nonzero element of the sequence {b_m}, m = 0, …, k − 1. Note that the sequence is increasing for m = 0, …, j − 2. Hence, if b_0 ≥ 0, then Z is positive near 0 and has no zeros in (0, v). Similarly to the proof of Proposition 2, we show that the projection is a linear function of the form (2.5) in this case. Note that b_0 ≥ 0 iff T(j, k, n) ≤ 2. The optimal parameters of the linear projection are given by α_2 = T(j, k, n) − 1 and β_2 = 2 − T(j, k, n). It follows that for T(j, k, n) ≤ 1 the projection is constant, equal to 1. If b_0 < 0, which holds for T(j, k, n) > 2, the same arguments as we used in the proof of Proposition 2 show that Z has a unique zero y* in (0, v], and therefore P_V f_{j,k:n}^V is of the form (2.4). Note that Z̃(x) = 2(k − j + 1)(n + 1)Z(V^{−1}(x)) and z* = V(y*).

Summing up, P_V f_{j,k:n}^V is constant, equal to 1, for T(j, k, n) ≤ 1, an increasing linear function for 1 < T(j, k, n) ≤ 2, and a function of the form (2.4) for T(j, k, n) > 2. In order to obtain the final results it remains to calculate ‖P f_{j,k:n}^V‖²_V for the different forms of P f_{j,k:n}^V and then use (1.6) to provide formulae for the distributions achieving the respective bounds. ∎