

updated: 12/15/00

Hayashi Econometrics : Answers to Selected Review Questions

Chapter 6

Section 6.1

1. Let $s_n \equiv \sum_{j=1}^{n} |\gamma_j|$. Then $s_m - s_n = \sum_{j=n+1}^{m} |\gamma_j|$ for $m > n$. Since $|s_m - s_n| \to 0$, the sequence $\{s_n\}$ is Cauchy, and hence is convergent.
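To make the Cauchy argument concrete, here is a quick numerical sketch. The AR(1)-style autocovariances $\gamma_j = \phi^j \gamma_0$ and the values of $\phi$ and $\gamma_0$ are my own illustration (the review question only assumes absolute summability): the partial sums $s_n$ approach a finite limit and the tail differences $|s_m - s_n|$ shrink.

```python
# Partial sums s_n = sum_{j=1}^n |gamma_j| for illustrative autocovariances
# gamma_j = phi**j * gamma_0 (not from the text; any absolutely summable
# sequence would do).
phi, gamma0 = 0.8, 1.0

def s(n):
    return sum(abs(phi) ** j * gamma0 for j in range(1, n + 1))

# Geometric-series limit: gamma0 * |phi| / (1 - |phi|) = 4 here.
limit = gamma0 * abs(phi) / (1 - abs(phi))
print(abs(s(200) - limit))       # tiny: the partial sums converge
print(abs(s(500) - s(400)))      # tiny: the sequence is Cauchy
```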

3. Proof that "$\beta(L) = \alpha(L)^{-1}\delta(L) \Rightarrow \alpha(L)\beta(L) = \delta(L)$":

$$\alpha(L)\beta(L) = \alpha(L)\alpha(L)^{-1}\delta(L) = \delta(L).$$

Proof that "$\alpha(L)\beta(L) = \delta(L) \Rightarrow \alpha(L) = \delta(L)\beta(L)^{-1}$":

$$\delta(L)\beta(L)^{-1} = \alpha(L)\beta(L)\beta(L)^{-1} = \alpha(L).$$

Proof that "$\alpha(L) = \delta(L)\beta(L)^{-1} \Rightarrow \alpha(L)\beta(L) = \delta(L)$":

$$\alpha(L)\beta(L) = \delta(L)\beta(L)^{-1}\beta(L) = \delta(L)\beta(L)\beta(L)^{-1} = \delta(L).$$

4. The absolute value of the roots is 4/3, which is greater than unity. So the stability condition is met.
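The stability condition requires all roots of the lag polynomial to lie outside the unit circle. As an illustration only (the book's specific polynomial is not reproduced here), the hypothetical AR(2) polynomial $\phi(z) = 1 - 1.125z + 0.5625z^2$ has a complex conjugate pair of roots of modulus exactly 4/3:

```python
import cmath

# Hypothetical lag polynomial phi(z) = 1 - 1.125 z + 0.5625 z^2, chosen so
# that its roots have modulus 4/3; this is NOT the polynomial from the text.
a, b, c = 0.5625, -1.125, 1.0  # coefficients of a*z^2 + b*z + c
disc = cmath.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

moduli = [abs(r) for r in roots]
print(moduli)  # both moduli equal 4/3 > 1, so the stability condition holds
```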

Section 6.2

1. By the projection formula (2.9.7), $E^*(y_t \mid 1, y_{t-1}) = c + \phi y_{t-1}$. The projection coefficients do not depend on $t$. The projection is not necessarily equal to $E(y_t \mid y_{t-1})$. $E^*(y_t \mid 1, y_{t-1}, y_{t-2}) = c + \phi y_{t-1}$. If $|\phi| > 1$, then $y_{t-1}$ is no longer orthogonal to $\varepsilon_t$, so we no longer have $E^*(y_t \mid 1, y_{t-1}) = c + \phi y_{t-1}$.
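The projection claim can be checked by Monte Carlo. A sketch under my own illustrative assumptions ($c$, $\phi$, the sample size, and the standard-normal innovations are not from the text): simulate a stationary AR(1) and fit the least-squares projection of $y_t$ on $(1, y_{t-1})$; the fitted coefficients approximate $(c, \phi)$.

```python
import random
import statistics

random.seed(0)
c, phi, T = 2.0, 0.5, 200_000  # illustrative parameters, |phi| < 1

# Simulate a stationary AR(1): y_t = c + phi*y_{t-1} + eps_t.
y = [c / (1 - phi)]  # start at the unconditional mean
for _ in range(T):
    y.append(c + phi * y[-1] + random.gauss(0, 1))

# Closed-form OLS of y_t on (1, y_{t-1}).
x, z = y[:-1], y[1:]
mx, mz = statistics.fmean(x), statistics.fmean(z)
phi_hat = sum((a - mx) * (b - mz) for a, b in zip(x, z)) / sum((a - mx) ** 2 for a in x)
c_hat = mz - phi_hat * mx
print(c_hat, phi_hat)  # close to (2.0, 0.5)
```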

3. If $\phi(1)$ were equal to 0, then $\phi(z) = 0$ would have a unit root, which violates the stationarity condition. To prove (b) of Proposition 6.4, take the expected value of both sides of (6.2.6) to obtain

$$E(y_t) - \phi_1 E(y_{t-1}) - \cdots - \phi_p E(y_{t-p}) = c.$$

Since $\{y_t\}$ is covariance-stationary, $E(y_t) = \cdots = E(y_{t-p}) = \mu$. So $(1 - \phi_1 - \cdots - \phi_p)\mu = c$.
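The resulting mean formula $\mu = c/(1 - \phi_1 - \cdots - \phi_p)$ can be sanity-checked numerically. The AR(2) parameters below are my own illustration (chosen to satisfy the stationarity condition): with the innovations set to zero, the deterministic recursion converges to $\mu$ from any starting values.

```python
# Unconditional mean of a stationary AR(p): mu = c / (1 - phi_1 - ... - phi_p).
# Illustrative AR(2) parameters, not from the text.
c, phis = 1.0, [0.5, 0.2]
mu = c / (1 - sum(phis))  # = 10/3

# With eps_t = 0, iterate y_t = c + phi_1*y_{t-1} + phi_2*y_{t-2};
# stationarity makes the iterates converge to mu.
y = [0.0, 0.0]
for _ in range(200):
    y.append(c + phis[0] * y[-1] + phis[1] * y[-2])
print(y[-1], mu)  # the two agree
```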

Section 6.3

4. The proof is the same as in the answer to Review Question 3 of Section 6.1, because for inverses we can still use the commutativity that $A(L)A(L)^{-1} = A(L)^{-1}A(L)$.

5. Multiplying both sides of the equation in the hint from the left by $A(L)^{-1}$, we obtain $B(L)[A(L)B(L)]^{-1} = A(L)^{-1}$. Multiplying both sides of this equation from the left by $B(L)^{-1}$, we obtain $[A(L)B(L)]^{-1} = B(L)^{-1}A(L)^{-1}$.


Section 6.5

1. Let $\mathbf{y} \equiv (y_n, \ldots, y_1)'$ and let $\mathbf{1}$ be the $n$-vector of ones, so that $\bar{y} = \mathbf{1}'\mathbf{y}/n$. Then

$$\operatorname{Var}(\sqrt{n}\,\bar{y}) = \operatorname{Var}(\mathbf{1}'\mathbf{y}/\sqrt{n}) = \mathbf{1}'\operatorname{Var}(\mathbf{y})\mathbf{1}/n.$$

By covariance-stationarity, $\operatorname{Var}(\mathbf{y}) = \operatorname{Var}((y_t, \ldots, y_{t-n+1})')$.
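The quadratic form $\mathbf{1}'\operatorname{Var}(\mathbf{y})\mathbf{1}/n$ just sums all entries of the $n \times n$ Toeplitz autocovariance matrix, which equals $\gamma_0 + 2\sum_{j=1}^{n-1}(1 - j/n)\gamma_j$. A sketch with illustrative AR(1) autocovariances (my own choice of $\phi$ and $n$, with unit innovation variance):

```python
# Var(sqrt(n)*ybar) = 1' Var(y) 1 / n, where Var(y) is the n x n Toeplitz
# matrix with (i, j) entry gamma_{|i-j|}. Summing its entries directly should
# match gamma_0 + 2 * sum_{j=1}^{n-1} (1 - j/n) * gamma_j.
phi, n = 0.6, 50
gamma = [phi ** j / (1 - phi ** 2) for j in range(n)]  # AR(1) autocovariances

quad_form = sum(gamma[abs(i - j)] for i in range(n) for j in range(n)) / n
weighted = gamma[0] + 2 * sum((1 - j / n) * gamma[j] for j in range(1, n))
print(quad_form, weighted)  # the two expressions agree
```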

3. $\lim_{j \to \infty} \gamma_j = 0$. So by Proposition 6.8, $\bar{y} \xrightarrow{m.s.} \mu$, which means that $\bar{y} \xrightarrow{p} \mu$.

Section 6.6

1. When $z_t = x_t$, the choice of $S$ doesn't matter. The efficient GMM estimator reduces to OLS.

2. The estimator is consistent because it is a GMM estimator. It is not efficient, though.

Section 6.7

2. $J = \hat{\varepsilon}'X(X'\widehat{\Omega}X)^{-1}X'\hat{\varepsilon}$, where $\hat{\varepsilon}$ is the vector of estimated residuals.

4. Let $\omega_{ij}$ be the $(i, j)$ element of $\widehat{\Omega}$. The truncated kernel-based estimator with a bandwidth of $q$ can be written as (6.7.5) with $\omega_{ij} = \hat{\varepsilon}_i\hat{\varepsilon}_j$ for $(i, j)$ such that $|i - j| \le q$ and $\omega_{ij} = 0$ otherwise. The Bartlett kernel-based estimator obtains if we set $\omega_{ij} = \frac{q - |i - j|}{q}\,\hat{\varepsilon}_i\hat{\varepsilon}_j$ for $(i, j)$ such that $|i - j| < q$ and $\omega_{ij} = 0$ otherwise.
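The two kernels differ only in the weight applied to $\varepsilon_i\varepsilon_j$ as a function of the lag distance $|i - j|$. A sketch of just the weight functions (the helper names are my own, not from the text): the truncated kernel weights all lags inside the bandwidth equally, while the Bartlett weights decline linearly to zero.

```python
# Weight w(d) applied to eps_i * eps_j at lag distance d = |i - j|,
# for bandwidth q. Helper names are illustrative.
def truncated_weight(d, q):
    # 1 inside the bandwidth, 0 outside
    return 1.0 if d <= q else 0.0

def bartlett_weight(d, q):
    # (q - d)/q for d < q, 0 otherwise
    return (q - d) / q if d < q else 0.0

q = 4
print([truncated_weight(d, q) for d in range(6)])  # [1.0, 1.0, 1.0, 1.0, 1.0, 0.0]
print([bartlett_weight(d, q) for d in range(6)])   # [1.0, 0.75, 0.5, 0.25, 0.0, 0.0]
```

Unlike the truncated kernel, the Bartlett weights guarantee a positive semidefinite estimate, which is why the Bartlett (Newey-West) form is the common choice in practice.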

5. $\operatorname{Avar}(\hat{\beta}_{OLS}) > \operatorname{Avar}(\hat{\beta}_{GLS})$ when, for example, $\rho_j = \phi^j$. This is consistent with the fact that OLS is efficient, because the orthogonality conditions exploited by GLS are different from those exploited by OLS.
