# Macroeconometrics

Teaching Notes and Exercises

Pau Roldan

New York University


Abstract

This document is a compilation of notes and exercises on basic topics in Macroeconometrics, which I collected as a student both at UPF and NYU. All the programs, provided in an Appendix, were written together with Ewout Verriest.

Contents

1 Invertibility and Equivalence
2 The Frequency Domain
3 The Hodrick-Prescott Filter
   3.1 Computing the Cycle
   3.2 Gain and Phase of the Filter
4 The Kalman Filter
   4.1 Preliminaries
   4.2 The Recursion
5 Log-Likelihood Estimation
   5.1 Estimation of an MA(1)
   5.2 Estimation of a VAR(2)
       5.2.1 Estimation
       5.2.2 Granger Causation
       5.2.3 Beveridge-Nelson Output Gap
   5.3 Estimation of an SVAR
       5.3.1 Identification via Long-Run Restrictions
       5.3.2 Implementation
6 Application: A New Keynesian Model
   6.1 Equivalent System of Expectational Difference Equations
   6.2 State-Space Form
   6.3 Numerical Implementation
   6.4 Simulating Data when $(x_t, \pi_t, i_t)$ Is Known
   6.5 Bayesian Estimation of the New Keynesian Model
       6.5.1 Priors
       6.5.2 Posteriors
A Appendix: MatLab Codes
   A.1 Code for Section 2
   A.2 Code for Section 3
   A.3 Code for Section 5.1
   A.4 Code for Section 5.2
   A.5 Code for Section 5.3
   A.6 Code for Section 6
   A.7 Code for Section 6.4
   A.8 Code for Section 6.5

Department of Economics, New York University, 19 West 4th Street, 6th Floor, New York, NY 10012. E-mail: [email protected]

## 1 Invertibility and Equivalence

Consider a second-order moving average (or simply, MA(2)) process:

$$x_t = B(L)\varepsilon_t, \qquad B(L) := 1 + \theta_1 L + \theta_2 L^2$$

where $B(L)$ is the moving-average lag polynomial, $L$ is the lag operator¹ and $\varepsilon_t \overset{iid}{\sim} \mathcal{N}(0, \sigma^2)$ is the innovation to the process, with the no-serial-correlation property $\mathbb{E}\{\varepsilon_t \varepsilon_s\} = 0$ for $t \neq s$ (that is, $\varepsilon_t$ is a white noise process).

**Invertibility** We want to derive conditions on $\theta_1$ and $\theta_2$ that allow us to invert the process, that is, to write the MA(2) as an AR($\infty$) process $\varepsilon_t = B(L)^{-1} x_t$, which reads

$$\varepsilon_t = \sum_{j=0}^{+\infty} \alpha_j x_{t-j}$$

for some coefficients $\{\alpha_j\}_{j \in \mathbb{Z}_+}$. The existence of $B(L)^{-1}$ means that $B(L)^{-1}$ is a non-explosive backward-looking lag polynomial. A moving-average process is said to be invertible if the innovations of the process lie in

the space spanned by current and past values of the observables, a property that is especially useful if we wish

to, for example, recover shocks to the system from past observations of the original series.

In this case, we use the following proposition:

**Proposition 1** *An MA(q) process $x_t = B(L)\varepsilon_t$ is invertible iff the roots of $B(z) = 0$ lie outside the unit circle.*

In our case, the characteristic equation is

$$1 + \theta_1 z + \theta_2 z^2 = 0 \qquad (1)$$

Factoring the moving average operator as $1 + \theta_1 z + \theta_2 z^2 = (1 - \lambda_1 z)(1 - \lambda_2 z)$, where $\lambda_1$ and $\lambda_2$ are the eigenvalues of the characteristic equation (the reciprocals of the roots), one can alternatively check for invertibility using that $|\lambda_i| < 1$ for both $i = 1, 2$. In our case, the roots of equation (1) are

$$z_1 = \frac{-\theta_1 + \sqrt{\theta_1^2 - 4\theta_2}}{2\theta_2}, \qquad z_2 = \frac{-\theta_1 - \sqrt{\theta_1^2 - 4\theta_2}}{2\theta_2}$$

and we need $|z_1| > 1$ and $|z_2| > 1$, namely $z_i \notin [-1, 1]$, for both $i = 1, 2$. To derive conditions, note that

$$B(z) = 1 + \theta_1 z + \theta_2 z^2 = (1 - \lambda_1 z)(1 - \lambda_2 z) = 1 - (\lambda_1 + \lambda_2) z + \lambda_1 \lambda_2 z^2$$

and therefore we must have

$$\theta_1 = -(\lambda_1 + \lambda_2), \qquad \theta_2 = \lambda_1 \lambda_2$$

The condition $|\lambda_1| < 1$ and $|\lambda_2| < 1$ therefore collapses to the requirement that $|\theta_2| = |\lambda_1 \lambda_2| < 1$, which is a necessary condition for invertibility. On the other hand, $|\theta_1| = |\lambda_1 + \lambda_2| \leq |\lambda_1| + |\lambda_2| < 2$, namely $|\theta_1| < 2$, is also a necessary condition for invertibility.

¹The lag operator is the operator defined by $L^j x_t = x_{t-j}$, for any $t, j \in \mathbb{N}$.
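The invertibility check reduces to computing the roots of the characteristic equation and verifying that they all lie outside the unit circle. A minimal Python sketch (the appendix codes in these notes are MATLAB; the coefficient values below are made up for illustration):

```python
# Invertibility check for an MA(2): x_t = (1 + theta1 L + theta2 L^2) eps_t.
# The process is invertible iff both roots of 1 + theta1 z + theta2 z^2 = 0
# lie outside the unit circle (equivalently, both eigenvalues lie inside it).
import numpy as np

def is_invertible_ma2(theta1, theta2):
    """Return True iff all roots of the MA polynomial are outside the unit circle."""
    # np.roots takes coefficients from highest to lowest degree:
    # theta2 * z^2 + theta1 * z + 1 = 0
    roots = np.roots([theta2, theta1, 1.0])
    return bool(np.all(np.abs(roots) > 1.0))

# Eigenvalues lambda = -0.2, -0.3, both inside the unit circle -> invertible
print(is_invertible_ma2(0.5, 0.06))   # True
# Roots are z = -0.5 and z = -2; one is inside the unit circle -> not invertible
print(is_invertible_ma2(2.5, 1.0))    # False
```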


**Equivalence** We say that a family of MA representations is observationally equivalent if they all have the same mean and autocovariance sequence. We can model this by saying that two processes are observationally equivalent if they share the same autocovariance-generating function. Define the autocovariance-generating function, $g_x(z)$, by

$$g_x(z) := \sum_{j=-\infty}^{+\infty} \gamma_j z^j$$

for a potentially complex scalar $z$, where $\gamma_j$ denotes the $j$-th autocovariance. For an MA(2) with moving-average lag polynomial $B(L)$, the covariance-generating function can be written as follows²

$$g_x(z) = \sigma^2 B(z) B(z^{-1})$$

Using $B(z) = (1 - \lambda_1 z)(1 - \lambda_2 z)$, we can get the family of observationally equivalent representations by flipping the roots in different ways. For example, we can flip $\lambda_1$ so that we can write the autocovariance-generating function as

$$\begin{aligned}
g_x(z) &= \sigma^2 (1 - \lambda_1 z)(1 - \lambda_2 z)(1 - \lambda_1 z^{-1})(1 - \lambda_2 z^{-1}) \\
&= \sigma^2 \lambda_1^2 (\lambda_1^{-1} - z)(1 - \lambda_2 z)(\lambda_1^{-1} - z^{-1})(1 - \lambda_2 z^{-1}) \\
&= \sigma^2 \lambda_1^2 (\lambda_1^{-1} z^{-1} - 1)(1 - \lambda_2 z)(\lambda_1^{-1} z - 1)(1 - \lambda_2 z^{-1}) \\
&= \sigma^2 \lambda_1^2 (1 - \lambda_1^{-1} z)(1 - \lambda_1^{-1} z^{-1})(1 - \lambda_2 z)(1 - \lambda_2 z^{-1})
\end{aligned}$$

where we have factored $\lambda_1$ out in the second line and multiplied by $z z^{-1}$ in the third line. Defining

$$\tilde{B}(L) := (1 - \lambda_1^{-1} L)(1 - \lambda_2 L), \qquad \tilde{\sigma}^2 := \sigma^2 \lambda_1^2$$

then the process $x_t = \tilde{B}(L)\tilde{\varepsilon}_t$ is observationally equivalent to $x_t = B(L)\varepsilon_t$, but is not an invertible process because $|\lambda_1^{-1}| > 1$.

Similarly, we can find two other observationally equivalent, non-invertible representations by either flipping $\lambda_2$ or flipping both roots instead. Doing the former, with a derivation similar to the one above, gives us the process

$$x_t = (1 - \lambda_1 L)(1 - \lambda_2^{-1} L)\hat{\varepsilon}_t, \qquad \hat{\sigma}^2 := \sigma^2 \lambda_2^2$$

Flipping both eigenvalues gives us the process

$$x_t = (1 - \lambda_1^{-1} L)(1 - \lambda_2^{-1} L)\bar{\varepsilon}_t, \qquad \bar{\sigma}^2 := \sigma^2 \lambda_1^2 \lambda_2^2$$

And, in all cases, the processes give us the same autocovariance-generating function (and therefore are observationally equivalent), but they are all non-invertible, since flipping a root $\lambda_i$ with $|\lambda_i| < 1$ produces a root $\lambda_i^{-1}$ with $|\lambda_i^{-1}| > 1$.

²The reference here is Hamilton, pages 154 and 155.


## 2 The Frequency Domain

In time-series macroeconomics we tend to study processes in the time domain, analyzing fluctuations that occur across time periods. Alternatively, one can study processes with respect to frequency rather than time: instead of analyzing how the process changes over time, the frequency-domain approach studies how much of the process lies within each given frequency band over a range of frequencies.

Stochastic processes can be converted between the time and frequency domains with so-called Fourier transforms, which translate a time series into a sum of waves, each of which represents a different frequency. Formally, for a sequence of complex numbers $\{x_t\}_{t \in \mathbb{Z}}$, the Fourier transform is a function $d_x^T(\omega)$ defined by

$$d_x^T(\omega) := \sum_{t=1}^{T} x_t e^{-i\omega t}$$
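The definition translates directly into code. A Python sketch (the appendix uses MATLAB; the data vector is made up, and the summation limits $t = 1, \dots, T$ are my reading of the formula above):

```python
# Finite Fourier transform d_x^T(omega) = sum_{t=1}^{T} x_t exp(-i*omega*t).
import numpy as np

def fourier_transform(x, omega):
    """Evaluate the finite Fourier transform of the series x at frequency omega."""
    t = np.arange(1, len(x) + 1)          # time index t = 1, ..., T
    return np.sum(x * np.exp(-1j * omega * t))

x = np.array([1.0, 2.0, 3.0, 4.0])        # made-up series
# At omega = 0 every wave weight is 1, so the transform is just the sum
print(fourier_transform(x, 0.0))          # (10+0j)
```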

The inverse Fourier transform then converts frequency back to a time function. In turn, the so-called population spectrum of a stochastic process $\{x_t : t \in \mathbb{R}\}$, denoted by $s_x(\omega)$, is the frequency-domain representation of the process. Formally, it is the representation of $\{x_t\}$ as a weighted sum of periodic functions that are orthogonal across different frequencies. In Economics, the reason we use this domain is that we typically want to determine how important cycles of different frequencies are in accounting for the behavior of a random variable, here $x_t$.

By the so-called Cramér representation of the series, $x_t$ can be decomposed into periodic components that are orthogonal across the different frequencies $\omega \in [-\pi, \pi]$, such that

$$x_t = \int_{-\pi}^{\pi} e^{i\omega t} dZ_x(\omega)$$

where $i := \sqrt{-1}$; $dZ_x(\omega)$ is a mean-zero, complex-valued, continuous random vector in frequencies with the property $\mathbb{E}[dZ_x(\omega)\overline{dZ_x(\omega)}] = s_x(\omega)\, d\omega$, where $s_x(\omega)$ is the population spectrum (defined below); and $\mathbb{E}[dZ_x(\omega)\overline{dZ_x(\lambda)}] = 0$ for all frequencies $\omega \neq \lambda$ (the upper bar denotes the conjugate transpose).
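Since $\mathbb{E}[dZ_x(\omega)\overline{dZ_x(\omega)}] = s_x(\omega)\, d\omega$, integrating the spectrum over $[-\pi, \pi]$ recovers the variance of the process. A Python sketch for the MA(2) of Section 1, assuming the common normalization $s_x(\omega) = \frac{1}{2\pi} g_x(e^{-i\omega})$ (as in Hamilton; the parameter values are made up):

```python
# Population spectrum of x_t = (1 + theta1 L + theta2 L^2) eps_t under the
# assumed convention s_x(w) = sigma^2/(2 pi) * |B(e^{-iw})|^2, and a check
# that it integrates to gamma_0 = sigma2 * (1 + theta1^2 + theta2^2).
import numpy as np

def ma2_spectrum(omega, theta1, theta2, sigma2):
    b = 1.0 + theta1 * np.exp(-1j * omega) + theta2 * np.exp(-2j * omega)
    return sigma2 / (2 * np.pi) * np.abs(b) ** 2

theta1, theta2, sigma2 = 0.5, 0.3, 1.0      # made-up parameter values
w = np.linspace(-np.pi, np.pi, 100001)
s = ma2_spectrum(w, theta1, theta2, sigma2)
dw = w[1] - w[0]
integral = dw * (s.sum() - 0.5 * (s[0] + s[-1]))   # trapezoid rule
print(integral, sigma2 * (1 + theta1**2 + theta2**2))  # the two agree (~1.34)
```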
