Introduction · Identification · Estimation · Computation · Conclusion
Nonparametric Identification and Estimation of a Transformation Model
Hidehiko Ichimura and Sokbae Lee
University of Tokyo and Seoul National University
15 February, 2012
Ichimura and Lee Nonparametric Transformation Model
Outline
1. The Model and Motivation
2. Identification
3. Consistency
4. Asymptotic Distribution
5. Computation
6. Further Research
A Transformation Model · Related Literature
The model
• Assume X and Y are observed, where X is a vector of covariates and Y is the dependent variable, with (Y, X) generated by

  Λ0(Y) = m0(X) + U,

  where Λ0(·) is a strictly increasing unknown function, m0(·) is an unknown function, and U is an unobserved random variable, independent of X, with cumulative distribution function F0(·).

• The objective of this project is to identify and estimate the unknown functions Λ0, m0, and F0.
Related Literature (1)
• Box and Cox (1964) specify all three functions parametrically: m0(X) = X′β, U ~ N(0, σ²), and, for y > 0,

  y^(λ) = (y^λ − 1)/λ  if λ ≠ 0,
  y^(λ) = log y        if λ = 0.
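As a quick numerical illustration (ours, not part of the original slides), the Box–Cox family and its λ → 0 limit can be sketched as:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform for y > 0: (y^lam - 1)/lam if lam != 0, log(y) if lam == 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([0.5, 1.0, 2.0])
# As lam -> 0, (y^lam - 1)/lam converges to log y:
print(np.allclose(box_cox(y, 1e-8), box_cox(y, 0), atol=1e-6))  # True
```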
• Horowitz (1996), Ye and Duan (1997), and Chen (2002) specify m0(X) = X′β.

• Linton, Sperlich, and Van Keilegom (2008) specify m0(X) = G(m1(X1), . . . , mK(XK)), where G is a known function.
Related Literature (2)
• Jacho-Chavez, Lewbel, and Linton (2010) study the model

  r(x, z) = H(G(x) + F(z)),

  where H is a strictly monotonic function and r(x, z) is nonparametrically estimable, assuming continuous regressors.
Related Literature (3)
• The nonparametric transformation model includes the mixed proportional hazard model as a special case (Ridder, 1990). For the proportional hazard model of duration Y with unobserved heterogeneity v,

  λ(t|x, v) = λ0(t) exp[−(m0(x) + v)].

  Let T(t) = ∫_0^t λ0(s) ds. Then

  Λ0(Y) := log T(Y) = m0(X) + V + ε,

  where ε is independent of (X, V) and has the CDF 1 − exp(−exp(t)) (a Gompertz distribution).
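A quick simulation check (our addition): since T(Y)e^{−(m0+V)} is unit exponential, ε is the log of an Exp(1) draw, and its empirical CDF should match t ↦ 1 − exp(−exp(t)):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = np.log(rng.exponential(size=200_000))   # ε = log E with E ~ Exp(1)

# Empirical CDF of ε vs. the Gompertz CDF 1 - exp(-exp(t)):
for t in (-1.0, 0.0, 1.0):
    assert abs(np.mean(eps <= t) - (1 - np.exp(-np.exp(t)))) < 0.01
```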
Related Literature (4)
• Ekeland, Heckman, and Nesheim (2004) present a hedonic model which, under suitable restrictions on utility functions and production functions, gives rise to a nonparametric transformation model.

• For example, they assume a production function F(z, x, ε) with hedonic attribute z, observed and unobserved control variables x and ε, and a cost function C(z).
Related Literature (5)
The first-order condition is

  Fz(z, x, ε) = C′(z).

They assume the existence of a monotonic transformation ψ such that

  ψ(Fz(z, x, ε)) = τ(z) + M(η(x) + ε),

where M is also monotonic. Then

  ψ(C′(z)) = τ(z) + M(η(x) + ε),

so that

  M^{-1}(ψ(C′(z)) − τ(z)) = η(x) + ε.
Related Literature (6)

• Guerre, Perrigne, and Vuong (2009) study the identification of a private-value first-price auction model with risk-averse bidders. Writing λ(·) = U(·)/U′(·), denoting the number of bidders by n, the density of the equilibrium bid by g(·|n), and the α-quantile of bids by b(α|n), they show that

  λ^{-1}( α / [(n2 − 1) g(b(α|n2)|n2)] ) = b(α|n1) − b(α|n2) + λ^{-1}( α / [(n1 − 1) g(b(α|n1)|n1)] ).
Objectives

All previous studies approach identification via derivatives with respect to regressors, excluding discrete regressors.

In this paper we

• construct an identification result that does not depend on taking derivatives with respect to regressors,
• consider identification issues when regressors are discrete,
• based on the identification results, develop estimators for the functions Λ0, m0, and F0,
• develop a computational method for the estimator, and
• establish consistency and 1/√n-consistency and outline the distribution theory.

• Our study helps isolate the variation intrinsic to the parameter identification.
Identification with Continuous Regressors · Identification with Discrete Regressors
Point Identification (1)
• We first provide a point identification result when X includes a continuous regressor.

• Note that for any constants a, b, and c > 0, we have

  c[Λ0(Y) + b] = c[m0(X) + b + a] + c(U − a).

• Thus, the model is not identified without location and scale normalizations.

• Our location and scale normalization is achieved by assuming that F0(0) = 0.5, m0(x0) = 0, and m0(x1) = 1 for some (x0, x1) such that x1 ≠ x0.
Point Identification (2)
• Since Y = Λ0^{-1}[m0(x) + U],

  QY|X(α|x) = Λ0^{-1}[m0(x) + F0^{-1}(α)].

• As F0^{-1}(0.5) = 0, m0(x) = Λ0[QY|X(0.5|x)].

• As m0(x0) = 0, F0^{-1}(α) = Λ0[QY|X(α|x0)].

• Thus

  Λ0[QY|X(α|x)] = Λ0[QY|X(0.5|x)] + Λ0[QY|X(α|x0)].
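This identity can be checked numerically in a hypothetical design satisfying the normalizations (our choices: Λ0 = log, m0(x) = x so m0(x0) = 0 at x0 = 0, and U ~ N(0, 1) so F0(0) = 0.5), where the conditional quantiles are available in closed form:

```python
import math
from statistics import NormalDist

# Hypothetical DGP: Y = exp(x + U) with U ~ N(0,1), i.e. Λ0 = log, m0(x) = x, x0 = 0.
def Q(alpha, x):
    """Conditional alpha-quantile of Y given X = x."""
    return math.exp(x + NormalDist().inv_cdf(alpha))

Lam0 = math.log
for alpha in (0.1, 0.3, 0.7, 0.9):
    for x in (-1.0, 0.5, 2.0):
        lhs = Lam0(Q(alpha, x))
        rhs = Lam0(Q(0.5, x)) + Lam0(Q(alpha, 0.0))
        assert abs(lhs - rhs) < 1e-12     # the slide's identity holds exactly
```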
Point Identification (3)
• We seek sufficient conditions under which the unique solution (a.s.) to the following equation is Λ0 on a compact set [y1, y2] ⊃ [0, 1]:

  Λ[QY|X(α|x)] = Λ[QY|X(0.5|x)] + Λ[QY|X(α|x0)].

• Substituting the relationships above, we have

  Λ[Λ0^{-1}(m0(x) + F0^{-1}(α))] = Λ[Λ0^{-1}(m0(x))] + Λ[Λ0^{-1}(F0^{-1}(α))].
Point Identification (4)
• Examining this with s = m0(x), t = F0^{-1}(α), and T(z) := Λ[Λ0^{-1}(z)], we have

  T(s + t) = T(s) + T(t).

• When s and t vary over a common interval [y1, y2] and T is continuous, T(z) = Cz.

• Since Λ0(QY|X(α|x)) = m0(x) + F0^{-1}(α), the scale normalization implies Λ0(QY|X(0.5|x1)) = 1, and hence Λ(QY|X(0.5|x1)) = 1 as well. This implies T(1) = Λ(Λ0^{-1}(1)) = 1, so that C = 1 and thus T(z) = z.

• Therefore, Λ0 is identified on [y1, y2].
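The step from additivity plus continuity to linearity is Cauchy's functional equation; a compressed sketch of the argument (our addition) runs:

```latex
T(0) = T(0+0) = 2\,T(0) \;\Rightarrow\; T(0) = 0, \\
T(nz) = n\,T(z) \text{ for integers } n
  \;\Rightarrow\; T\!\left(\tfrac{p}{q}\,z\right) = \tfrac{p}{q}\,T(z)
  \text{ for rationals } \tfrac{p}{q}, \\
\text{so } T(q) = q\,T(1) \text{ on the rationals; continuity extends this to }
T(z) = C z \text{ with } C = T(1).
```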
Point Identification (5)
• Note that once Λ0(y) is identified, m0(x) and F0^{-1}(α) are identified by m0(x) = Λ0[QY|X(0.5|x)] and F0^{-1}(α) = Λ0[QY|X(α|x0)].

• Hence, m0(x) is identified for any x satisfying QY|X(0.5|x) ∈ [y1, y2].

• Likewise, F0^{-1}(α) is identified for any α satisfying QY|X(α|x0) ∈ [y1, y2].
Identification Theorem
• Suppose the distribution of Y conditional on X is given, with (Y, X) generated by the model specified above,

• Λ0 is a strictly increasing and continuous function,

• F0(0) = 0.5, m0(x0) = 0, and m0(x1) = 1 for two distinct points x0 and x1, and

• the marginal distributions of m0(X) and U are continuous and their joint support includes [0, 1]².

• Then there exists a non-empty set [y1, y2], a strict subset of the common support of QY|X(0.5|x) and QY|X(α|x0), over which Λ0 is identified. m0(x) is identified for any x satisfying QY|X(0.5|x) ∈ [y1, y2]. Likewise, F0^{-1}(α) is identified for any α satisfying QY|X(α|x0) ∈ [y1, y2].
Partial Identification (1)
• We next consider a discrete X whose distribution has a finite number of probability mass points.

• In this case s takes on a finite number of values, which include 0 and 1 by our normalization.

• Note that

  T(si + t) = T(si) + T(t)

  and

  T(sj + t) = T(sj) + T(t),

  so that

  T(si + t) − T(sj + t) = T(si) − T(sj).
Partial Identification (2)

• Differentiating both sides, we have

  T′(si + t) − T′(sj + t) = 0,

  or, writing ∆ij = sj − si,

  T′(t) = T′(t + ∆ij).

• Thus T′(z) is a periodic function with period ∆ij.

• Note also that since

  T′(t + ∆ij) = T′(t + ∆i′j′),

  we also have

  T′(t) = T′(t + ∆ij − ∆i′j′),

  which implies that T′(z) is a periodic function with period ∆ij − ∆i′j′.
Partial Identification (3)

• We consider the case in which there is a ∆ > 0 such that all the ∆ij, the differences among them, and so on, can each be written as an integer multiple of ∆.

• Clearly this is not necessarily the case, but it is also clear that it always holds if all the si are rational.

• We show that, in this case, we cannot point identify Λ0 but can provide a bound, and the bound becomes tight as ∆ becomes small.

• Let ψ be a non-constant function with period ∆ and ∫_0^∆ ψ(t) dt = 0.

• Then T′(s) = ψ(s) + A for some constant A, so that

  T(s) = ∫_0^s ψ(u) du + A·s + B

  for some constants A and B.
Partial Identification (4)
• Clearly B = 0, and T(1) = 1 together with the fact that the location normalization implies ∫_0^1 ψ(u) du = 0 gives A = 1, so that

  T(s) = ∫_0^s ψ(u) du + s.

• Recalling that T(s) = Λ(Λ0^{-1}(s)) and taking v = Λ0^{-1}(s),

  Λ(v) = Λ0(v) + ∫_0^{Λ0(v)} ψ(u) du.
Partial Identification (5)
• Let h(z) = ∫_0^z ψ(t) dt. Then

  h(z + ∆) = ∫_0^{z+∆} ψ(t) dt = h(z) + ∫_z^{z+∆} ψ(t) dt = h(z),

  so that h is also a function with period ∆, and

  Λ(u) = Λ0(u) + h(Λ0(u)).
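The failure of point identification can be made concrete (our illustration, with hypothetical values): if every value of s = m0(x) is an integer multiple of a period ∆, then T(z) = z + h(z) with any small-amplitude period-∆ function h satisfies every observable restriction, even though T is not the identity:

```python
import math

DELTA = 0.25                                             # assumed common period
h = lambda z: 0.05 * math.sin(2 * math.pi * z / DELTA)   # period DELTA, h(0) = 0
T = lambda z: z + h(z)                                   # alternative to T0(z) = z

s_values = [0.0, 0.25, 0.5, 1.0]      # discrete m0(X) support, includes 0 and 1
for s in s_values:                     # additivity T(s+t) = T(s) + T(t) holds at
    for i in range(51):                # every observed s and all t ...
        t = i / 50
        assert abs(T(s + t) - (T(s) + T(t))) < 1e-9
print(abs(T(0.1) - 0.1) > 0.01)        # True: ... yet T is not the identity
```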
Partial Identification (6)
• Differentiating Λ(u), we observe that

  Λ′(u) = Λ0′(u)[1 + h′(Λ0(u))],

  so that the minimum slope of h must be greater than −1.

• Thus the supremum value h can take is ∆.

• Thus the supremum deviation of Λ(u) from Λ0(u) is |∆|.
Estimation of Λ0 · Estimation of m0 and F0 · Consistency · Asymptotic Normality
Estimating Λ0 (1)

• To construct a root-n-consistent estimator of Λ0, without loss of generality we normalize m0 by

  ∫ wX(x) m0(x) dx = 0

  for some weight function wX that has compact support within the support of X and satisfies ∫ wX(x) dx = 1. Analogously, we normalize F0 by

  ∫ wα(a) F0^{-1}(a) da = 0

  for some weight function wα that has compact support within the support of α and satisfies ∫ wα(a) da = 1.
Estimating Λ0 (1)
• Then F0^{-1}(α) = ∫ wX(x) Λ0[QY|X(α|x)] dx and m0(x) = ∫ wα(a) Λ0[QY|X(a|x)] da, so that

  Λ0[QY|X(α|x)] = ∫ wα(a) Λ0[QY|X(a|x)] da + ∫ wX(u) Λ0[QY|X(α|u)] du.

• This equation is the basis for our estimator of Λ0.
Estimating Λ0 (2)
• Our estimation procedure for Λ0 consists of two steps.

• The first step is nonparametric estimation of QY|X(α|x).

• The second step fits the data by minimizing an objective function based on the identification result. We use the least-squares criterion.
Estimating Λ0 (3)
• We minimize a least-squares criterion function

  Mn(Λ) := ∫∫ wX(x) wα(α) { Λ[Q̂Y|X(α|x)] − ∫ wα(a) Λ[Q̂Y|X(a|x)] da − ∫ wX(u) Λ[Q̂Y|X(α|u)] du }² dx dα

  over a set Ln of possible functions for Λ, where

• Q̂Y|X(α|x) is the first-step estimator.
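A discretized version of this criterion (our sketch: uniform weights on symmetric grids stand in for wX and wα, and the true quantile function of a hypothetical DGP with Λ0 = log, m0(x) = x, U ~ N(0, 1) stands in for the first-step estimator) is zero at Λ0 and positive at a wrong Λ:

```python
import numpy as np
from statistics import NormalDist

def Mn(Lam, Qhat, x_grid, a_grid):
    """Discretized least-squares criterion; grid means replace the weighted integrals."""
    Q = Qhat(a_grid[None, :], x_grid[:, None])     # |x_grid| x |a_grid| quantiles
    L = Lam(Q)
    inner_a = L.mean(axis=1, keepdims=True)        # ∫ wα(a) Λ[Q(a|x)] da at each x
    inner_x = L.mean(axis=0, keepdims=True)        # ∫ wX(u) Λ[Q(α|u)] du at each α
    return float(np.mean((L - inner_a - inner_x) ** 2))

zq = np.vectorize(NormalDist().inv_cdf)
Qhat = lambda a, x: np.exp(x + zq(a))              # true quantiles of Y = exp(X + U)
x_grid = np.linspace(-1, 1, 21)                    # symmetric: ∫ wX m0 = 0 holds
a_grid = np.linspace(0.2, 0.8, 13)                 # symmetric about 0.5: ∫ wα F0^{-1} = 0

print(Mn(np.log, Qhat, x_grid, a_grid) < 1e-10)    # True: zero at the truth
print(Mn(np.sqrt, Qhat, x_grid, a_grid) > 1e-3)    # True: a wrong Λ is penalized
```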
Estimating m0 and F0
• m0 can be estimated using the normalization ∫ wα(a) F0^{-1}(a) da = 0:

  m̂(x) := ∫ wα(a) Λ̂[Q̂Y|X(a|x)] da.

• F0 can be estimated using the normalization ∫ wX(x) m0(x) dx = 0:

  F̂^{-1}(α) := ∫ wX(x) Λ̂[Q̂Y|X(α|x)] dx.
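In the same hypothetical setup as before (true quantiles in place of Q̂, Λ0 = log in place of Λ̂, uniform weights on symmetric grids; all our choices), these plug-in formulas recover m0 and F0^{-1}:

```python
import numpy as np
from statistics import NormalDist

zq = np.vectorize(NormalDist().inv_cdf)
Qhat = lambda a, x: np.exp(x + zq(a))       # first-step "estimate": here the truth
Lam_hat = np.log                            # second-step "estimate": here Λ0 itself
x_grid = np.linspace(-1, 1, 21)             # uniform wX on a symmetric grid
a_grid = np.linspace(0.2, 0.8, 13)          # uniform wα, symmetric about 0.5

m_hat = lambda x: float(np.mean(Lam_hat(Qhat(a_grid, x))))
Finv_hat = lambda a: float(np.mean(Lam_hat(Qhat(a, x_grid))))

print(round(m_hat(0.3), 6))                 # 0.3, i.e. m0(0.3) = 0.3
print(round(Finv_hat(0.8), 3))              # 0.842, i.e. the N(0,1) 0.8-quantile
```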
Lemma for Showing Consistency
• Denote the probability limit of the objective function by M(Λ, QY|X(α|x)), the support of (X, α) by S, and, for some L > 0, define

  L = { Λ : |Λ′| ≤ L,  ∫∫ wX(u) wα(a) Λ(QY|X(a|u)) da du = 0,  ∫∫ wX(u) wα(a) Λ²(QY|X(a|u)) da du = 1 }.

• Then there is a universal constant C such that for any Λ ∈ L,

  M(Λ, QY|X(α|x)) ≥ C sup_{(x,α)∈S} |Λ(QY|X(α|x)) − Λ0(QY|X(α|x))|².
Asymptotic Normality: Outline of the Proof I
• We assume that the first-order condition of the optimization problem is satisfied with equality, so that

  Λ̂(Q̂(α|x)) = ∫ wX(u) Λ̂(Q̂(α|u)) du + ∫ wα(a) Λ̂(Q̂(a|x)) da.

• Substituting t = Q̂(α|x), we obtain

  Λ̂(t) = ∫ wX(u) Λ̂(Q̂(Q̂^{-1}(t|x)|u)) du + ∫ wα(a) Λ̂(Q̂(a|x)) da.

• Then, by the location normalization,

  Λ̂(t) = ∫ wX(x) ∫ wX(u) Λ̂(Q̂(Q̂^{-1}(t|x)|u)) du dx.
Asymptotic Normality: Outline of the Proof II
• Define

  T0(Λ)(t) := ∫ wX(x) ∫ wX(u) Λ(Q0(Q0^{-1}(t|x)|u)) du dx,

  T̂(Λ)(t) := ∫ wX(x) ∫ wX(u) Λ(Q̂(Q̂^{-1}(t|x)|u)) du dx.

• Both T0 and T̂ are linear operators, and the latter can be seen as an empirical approximation to T0 using the nonparametrically estimated conditional quantile function.

• Note that Λ0 = T0Λ0, that is, (I − T0)Λ0 = 0. Since Λ0 is different from zero, (I − T0) cannot be invertible.
Asymptotic Normality: Outline of the Proof III

• However, we can write

  Λ̂ − Λ0 = T̂Λ̂ − T0Λ0
          = T0(Λ̂ − Λ0) + (T̂ − T0)Λ0 + (T̂ − T0)(Λ̂ − Λ0)
          = T0^{m+1}(Λ̂ − Λ0) + [I + T0 + · · · + T0^m](T̂ − T0)Λ0 + [I + T0 + · · · + T0^m](T̂ − T0)(Λ̂ − Λ0),

  where the last equality holds for any positive integer m.

• Note that, in view of the identification result, Null(I − T0) := {φ : T0φ = φ} has the form

  Null(I − T0) = {c0 + c1Λ0 : (c0, c1) ∈ R²}.
Asymptotic Normality: Outline of the Proof IV

• In general, ‖T0‖ = 1; however, in view of the identification result,

  sup{ ‖T0φ‖ : ‖φ‖ ≤ 1 and φ ∉ Null(I − T0) } < 1.

• Hence, if √n(Λ̂ − Λ0) ∉ Null(I − T0) with probability approaching one, then

  lim_{m→∞} plim_{n→∞} T0^{m+1}[√n(Λ̂ − Λ0)] = 0.

• Suppose that we can show that ‖T̂ − T0‖ = op(1).
Asymptotic Normality: Outline of the Proof V
• Then

  plim_{n→∞} (T̂ − T0)[√n(Λ̂ − Λ0)] = 0.

• Let

  R̂m := [I + T0 + · · · + T0^m](T̂ − T0)[√n(Λ̂ − Λ0)].

  Then plim_{n→∞} R̂m = 0 for any integer m ≥ 1.
Asymptotic Normality: Outline of the Proof VI

• Suppose that the following strong approximation holds:

  √n(T̂ − T0)Λ0 = G0 + op(1),

  where G0 is a centered Gaussian process with almost surely continuous sample paths. Then G0 ∉ Null(I − T0) with probability one, so that

  H := lim_{m→∞} [I + T0 + · · · + T0^m] G0

  is well defined and is also a centered Gaussian process. In summary,

  √n(Λ̂ − Λ0) →d H.
Computation
• The second stage requires constrained optimization since Λ is a strictly increasing function.

• We build on Ramsay (1998), who proposes a method for estimating an arbitrary twice-differentiable strictly monotone function.
Ramsay’s method (1)
• To describe Ramsay's (1998) method, consider the class of monotone functions

  F = { monotone f : ln(Df) is differentiable and D{ln(Df)} is Lebesgue square integrable },

  where the notation Df refers to the first-order derivative of f.

• Thus, each element of F is strictly increasing, and its first derivative is smooth and bounded almost everywhere.
Ramsay’s method (2)
• Using the same notation as Ramsay (1998), define the partial integration operator D^{-1} by

  D^{-1}f(t) = ∫_0^t f(s) ds.    (1)

• Theorem 1 of Ramsay (1998) states that every function f ∈ F can be represented as

  f(x) = C0 + C1 D^{-1}{exp(D^{-1}w)}(x),    (2)

  where w is a Lebesgue square integrable function and C0 and C1 are arbitrary constants.
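A numerical rendering of (2) (our sketch, with cumulative trapezoid sums standing in for D^{-1}) shows why the representation is convenient: it yields a strictly increasing f for any choice of w, so the monotonicity constraint never binds:

```python
import numpy as np

def ramsay(w_vals, t, C0=0.0, C1=1.0):
    """f = C0 + C1 * D^{-1}{exp(D^{-1} w)} on a grid t, with D^{-1}g(t) = ∫_0^t g."""
    def cumint(g):                     # cumulative trapezoid integral from t[0] = 0
        out = np.zeros_like(g)
        out[1:] = np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))
        return out
    return C0 + C1 * cumint(np.exp(cumint(w_vals)))

t = np.linspace(0.0, 1.0, 201)
assert np.allclose(ramsay(np.zeros_like(t), t), t)        # w = 0 gives f(t) = t
for w in (np.sin(7 * t), t**2 - 5.0, np.sign(t - 0.5)):   # arbitrary choices of w
    assert np.all(np.diff(ramsay(w, t)) > 0)              # f is strictly increasing
```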
Sieve Space (1)
• Assume that Λ0 belongs to F and that w can be written as w(t) = Σ_{k=1}^∞ ck φk(t) for some basis functions {φk : k = 1, 2, . . .}.
Sieve Space (2)

• We then consider a natural sieve space Ln by imposing the scale normalization C1 = 1 and the location normalization C0(c):

  Ln = { C0(c) + D^{-1}[ exp( Σ_{k=1}^{Kn} ck D^{-1}φk ) ] },

  where

  C0(c) = − ∫ wX(u) D^{-1}{ exp( Σ_{k=1}^{Kn} ck D^{-1}φk ) }( Q̂Y|X(0.5|u) ) du

  and c = (c1, . . . , cKn), for some Kn that converges to infinity as n → ∞.
The Second Step with Sieve Space (1)
• Note that the location normalization ensures that F̂^{-1}(0.5) = 0.

• Thus, for each n, minimizing the objective function can now be viewed as an unconstrained optimization problem over the Kn-dimensional vector c.

• Specifically, our second-stage estimation solves

  min_{Λ∈Ln} { Mn(Λ) + λ ∫_{y1}^{y2} wKn²(t) dt },    (3)

  where λ is a regularization parameter that converges to zero and wKn(t) = Σ_{k=1}^{Kn} ck φk(t).
The Second Step with Sieve Space (2)
• Then (3) can be solved by a standard numerical optimization algorithm, with the location normalization imposed on Λ.
Monte Carlo Simulation

We use the following model to conduct a simple Monte Carlo simulation:

  Λ0(Y) = X + U,

where X and U are both standard normal random variables and

  Λ0(t) = log t.
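Generating data from this design is straightforward (a sketch; the sample size and seed are our choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.standard_normal(n)
U = rng.standard_normal(n)
Y = np.exp(X + U)                # Λ0(Y) = log Y = X + U, with Λ0 = log, m0(x) = x

assert np.allclose(np.log(Y), X + U)       # the transformation inverts exactly
assert abs(np.mean(np.log(Y) - X)) < 0.2   # recovered U is centered near zero
```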
Table 1. The Finite-Sample Performance of the Estimator of Λ0

               New Estimator          CS      HJ      KS      YD
    y      BIAS     SD    RMSE      RMSE    RMSE    RMSE    RMSE
  0.240   0.004   0.173   0.173     0.094   0.124   0.102   0.104
  0.430   0.013   0.129   0.130     0.069   0.069   0.069   0.076
  0.620   0.024   0.094   0.097     0.051   0.046   0.048   0.058
  0.810   0.026   0.075   0.079     0.036   0.022   0.033   0.040
  4.817  -0.018   0.048   0.051     0.103   0.246   0.129   0.106
  8.634  -0.062   0.102   0.119     0.134   0.406   0.586   0.166
 12.451  -0.054   0.149   0.158     0.156   0.475   0.644   0.260
 16.268   0.000   0.200   0.200     0.181   0.541   1.014   0.372
 20.086   0.069   0.247   0.256     0.195   0.572   1.332   0.489

CS: Songnian Chen's estimator; HJ: Joel Horowitz's estimator; KS: Klein–Sherman's estimator; YD: Ye–Duan's estimator.
Summary
• We obtained new identification results, proposed a sample-analog estimation method for the nonparametric transformation model, and obtained some asymptotic distribution results.

• Things to be done:
  – Complete the asymptotic analysis for Λ0,
  – obtain results for estimation of m0 and F0,
  – conduct Monte Carlo experiments, and
  – use the results for an empirical illustration.