
Structural Macroeconometrics

Chapter 2. Approximating and Solving DSGE Models

David N. DeJong Chetan Dave

Empirical investigations involving DSGE models invariably require the completion of two preparatory stages. One stage involves preparation of the model to be analyzed; this is the focus of the current chapter. The other involves preparation of the data; this is the focus of Chapter 3.

Regarding the model-preparation stage, DSGE models typically include three components: a characterization of the environment in which decision makers reside; a set of decision rules that dictate their behavior; and a characterization of the uncertainty they face in making decisions. Collectively, these components take the form of a non-linear system of expectational difference equations. Such systems are not directly amenable to empirical analysis, but can be converted into empirically implementable systems through the completion of the general two-step process outlined in this chapter.

The first step involves the construction of a linear approximation of the model. Just as non-linear equations may be approximated linearly via the use of Taylor-series expansions, so too may non-linear systems of expectational difference equations. The second step involves the solution of the resulting linear approximation of the system. The solution is written in terms of variables expressed as deviations from steady state values, and is directly amenable to empirical implementation.

While this chapter is intended to be self-contained, far more detail is provided in the literature cited below. Here, the goal is to impart an intuitive understanding of the model-preparation stage, and to provide guidance regarding its implementation. In addition, we note that there are alternatives to the particular approaches to model approximation and solution presented in this chapter. A leading alternative to model approximation is provided by perturbation methods; for a textbook discussion see Judd (1998). And a leading alternative to the approaches to model solution presented here is based on the use of projection methods. This alternative solution technique is discussed in this text in Chapter 10; for additional textbook discussions, see Judd (1998), Adda and Cooper (2003), and Ljungqvist and Sargent (2004).

1 Linearization

1.1 Taylor Series Approximation

Consider the following $n$-equation system of non-linear difference equations:

$$\Gamma(x_{t+1}, x_t) = 0, \qquad (1)$$

where the $x$'s and $0$ are $n \times 1$ vectors. The parameters of the system are contained in the vector $\mu$. DSGE models are typically represented in terms of such a system, augmented to include sources of stochastic behavior. We abstract from the stochastic component of the model in the linearization stage, since models are typically designed to incorporate stochastic behavior directly into the linearized system (a modest example is provided in Section 2.2; detailed examples are provided in Chapter 5). Also, while expectational terms are typically included among the variables in $x$ (e.g., variables of the form $E_t(x_{t+j})$, where $E_t$ is the conditional expectations operator), these are not singled out at this point, as they receive no special treatment in the linearization stage.

Before proceeding, note that while (1) is written as a first-order system, higher-order specifications may be written as first-order systems by augmenting $x_t$ to include variables observed at different points in time. For example, the $p$th-order equation

$$\omega_{t+1} = \alpha_1 \omega_t + \alpha_2 \omega_{t-1} + \cdots + \alpha_p \omega_{t-p+1}$$

may be written in first-order form as

$$\begin{bmatrix} \omega_{t+1} \\ \omega_t \\ \vdots \\ \omega_{t-p+2} \end{bmatrix} - \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \cdots & \alpha_p \\ 1 & 0 & \cdots & \cdots & 0 \\ \vdots & \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} \omega_t \\ \omega_{t-1} \\ \vdots \\ \omega_{t-p+1} \end{bmatrix} = 0,$$

or, more compactly, as

$$x_{t+1} - \Phi x_t = 0, \qquad x_{t+1} = [\omega_{t+1}, \omega_t, \ldots, \omega_{t-p+2}]'.$$

Thus (1) is sufficiently general to characterize a system of arbitrary order.
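The augmentation step can be sketched in a few lines of Python (an illustrative sketch only; the companion-matrix layout follows the display above, and the coefficient values are made up):

```python
import numpy as np

def companion(alphas):
    """Companion matrix for the p-th order equation
    w_{t+1} = alpha_1 w_t + ... + alpha_p w_{t-p+1}."""
    p = len(alphas)
    Phi = np.zeros((p, p))
    Phi[0, :] = alphas            # first row carries the lag coefficients
    Phi[1:, :-1] = np.eye(p - 1)  # identity block shifts the lags down
    return Phi

# Example: w_{t+1} = 0.5 w_t + 0.2 w_{t-1}, started from w_t = 1, w_{t-1} = 0
Phi = companion([0.5, 0.2])
x = np.array([1.0, 0.0])          # stacked state x_t = [w_t, w_{t-1}]'
x_next = Phi @ x                  # x_{t+1} = [w_{t+1}, w_t]'
```

Advancing the stacked state one period reproduces the scalar recursion while keeping the system first order.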

The goal of the linearization step is to convert (1) into a linear system, which can then be solved using any of the procedures outlined below. Anticipating the notation that follows in Section 2.2, the form for the system we seek is given by

$$A x_{t+1} = B x_t. \qquad (2)$$

Denoting the steady state of the system as $\Gamma(\bar{x}) = 0$, where $\bar{x}$ is understood to be a function of $\mu$, linearization is accomplished via a first-order Taylor series approximation of (1) around its steady state, given by¹

$$0 \approx \Gamma(\bar{x}) + \frac{\partial \Gamma}{\partial x_t}(\bar{x}) \cdot (x_t - \bar{x}) + \frac{\partial \Gamma}{\partial x_{t+1}}(\bar{x}) \cdot (x_{t+1} - \bar{x}), \qquad (3)$$

where $(x_t - \bar{x})$ is $n \times 1$, and the $n \times n$ matrix $\frac{\partial \Gamma}{\partial x_t}(\bar{x})$ denotes the Jacobian of $\Gamma(x_{t+1}, x_t)$ with respect to $x_t$ evaluated at $\bar{x}$. That is, the $(i,j)$th element of $\frac{\partial \Gamma}{\partial x_t}(\bar{x})$ is the derivative of the $i$th equation in (1) with respect to the $j$th element of $x_t$. Defining $A = \frac{\partial \Gamma}{\partial x_{t+1}}(\bar{x})$ and $B = -\frac{\partial \Gamma}{\partial x_t}(\bar{x})$ yields (2), where variables are expressed as deviations from steady state values.

1.2 Logarithmic Approximations

It is often useful to work with log-linear approximations of (1), due to their ease of interpretation. For illustration, we begin with a simple example in which the system is $1 \times 1$, and can be written as

$$x_{t+1} = f(x_t).$$

Taking logs and using the identity $x_t = e^{\log x_t}$, the system becomes

$$\log x_{t+1} = \log\left[f(e^{\log x_t})\right].$$

Then approximating,

$$\log x_{t+1} \approx \log[f(\bar{x})] + \frac{f'(\bar{x})\,\bar{x}}{f(\bar{x})}\left(\log x_t - \log \bar{x}\right),$$

or, since $\log[f(\bar{x})] = \log \bar{x}$,

$$\log\left(\frac{x_{t+1}}{\bar{x}}\right) \approx \frac{f'(\bar{x})\,\bar{x}}{f(\bar{x})} \log\left(\frac{x_t}{\bar{x}}\right).$$

Note that $f'(\bar{x})\bar{x}/f(\bar{x})$ is the elasticity of $x_{t+1}$ with respect to $x_t$. Moreover, writing $x_t$ as $\bar{x} + \varepsilon_t$, where $\varepsilon_t$ denotes a small departure from steady state,

$$\log\left(\frac{x_t}{\bar{x}}\right) = \log\left(1 + \frac{\varepsilon_t}{\bar{x}}\right) \approx \frac{\varepsilon_t}{\bar{x}},$$

and thus $\log(x_t/\bar{x})$ is seen as expressing $x_t$ in terms of its percentage deviation from steady state.

¹ It is also possible to work with higher-order approximations; e.g., see Schmitt-Grohé and Uribe (2002).

Returning to the $n \times 1$ case, re-write (1) as

$$\Gamma_1(x_{t+1}, x_t) = \Gamma_2(x_{t+1}, x_t), \qquad (4)$$

since it is not possible to take logs of both sides of (1). Again using the identity $x_t = e^{\log x_t}$, taking logs of (4) and rearranging yields

$$\log \Gamma_1(e^{\log x_{t+1}}, e^{\log x_t}) - \log \Gamma_2(e^{\log x_{t+1}}, e^{\log x_t}) = 0. \qquad (5)$$

The first-order Taylor series approximation of this converted system yields the log-linear approximation we seek. The approximation for the first term is

$$\log \Gamma_1(x_{t+1}, x_t) \approx \log[\Gamma_1(\bar{x})] + \frac{\partial \log \Gamma_1}{\partial \log x_t}(\bar{x}) \cdot \left[\log\left(\frac{x_t}{\bar{x}}\right)\right] + \frac{\partial \log \Gamma_1}{\partial \log x_{t+1}}(\bar{x}) \cdot \left[\log\left(\frac{x_{t+1}}{\bar{x}}\right)\right], \qquad (6)$$

where $\frac{\partial \log \Gamma_1}{\partial \log x_t}(\bar{x})$ and $\frac{\partial \log \Gamma_1}{\partial \log x_{t+1}}(\bar{x})$ are $n \times n$ Jacobian matrices, and $\left[\log(x_t/\bar{x})\right]$ and $\left[\log(x_{t+1}/\bar{x})\right]$ are $n \times 1$ vectors. The approximation of the second term in (5) is analogous. Then combining the two approximations and rearranging yields (2), which takes the specific form

$$\left[\frac{\partial \log \Gamma_1}{\partial \log x_{t+1}}(\bar{x}) - \frac{\partial \log \Gamma_2}{\partial \log x_{t+1}}(\bar{x})\right] \cdot \left[\log\left(\frac{x_{t+1}}{\bar{x}}\right)\right] = -\left[\frac{\partial \log \Gamma_1}{\partial \log x_t}(\bar{x}) - \frac{\partial \log \Gamma_2}{\partial \log x_t}(\bar{x})\right] \cdot \left[\log\left(\frac{x_t}{\bar{x}}\right)\right]. \qquad (7)$$

The elements of $A$ and $B$ are now elasticities, and the variables of the system are expressed in terms of percentage deviations from steady state.

In Part II of the text we will discuss several empirical applications that involve the need to approximate (1) or (5) repeatedly for alternative values of $\mu$. In such cases, it is useful to automate the linearization stage via the use of a numerical gradient calculation procedure. We introduce this briefly here in the context of approximating (1); the approximation of (5) is analogous.

Gradient procedures are designed to construct the Jacobian matrices in (3) without analytical expressions for the required derivatives. Derivatives are instead calculated numerically, given the provision of three components by the user. The first two components are a specification of $\mu$ and a corresponding specification of $\bar{x}$. The third component is a procedure designed to return the $n \times 1$ vector of values $z$ generated by (1) for two cases. In the first case $x_{t+1}$ is treated as variable and $x_t$ is fixed at $\bar{x}$; in the second case $x_t$ is treated as variable and $x_{t+1}$ is fixed at $\bar{x}$. The gradient procedure delivers the Jacobian $\frac{\partial \Gamma}{\partial x_{t+1}}(\bar{x}) = A$ in the first case and $\frac{\partial \Gamma}{\partial x_t}(\bar{x}) = -B$ in the second case. Examples follow.
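A minimal Python sketch of such a gradient procedure follows (illustrative only; it is not the GAUSS implementation referenced in the text). The one-equation test system $\Gamma(x_{t+1}, x_t) = x_{t+1} - s\,x_t^{\alpha}$ and the values of $s$ and $\alpha$ are hypothetical:

```python
import numpy as np

def num_jacobians(gamma, xbar, h=1e-6):
    """Central finite-difference Jacobians of gamma(x_next, x_now) at the
    steady state xbar, returning A = dGamma/dx_{t+1} and B = -dGamma/dx_t."""
    n = len(xbar)
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        # vary x_{t+1}, holding x_t fixed at xbar
        A[:, j] = (gamma(xbar + e, xbar) - gamma(xbar - e, xbar)) / (2 * h)
        # vary x_t, holding x_{t+1} fixed at xbar
        B[:, j] = -(gamma(xbar, xbar + e) - gamma(xbar, xbar - e)) / (2 * h)
    return A, B

# Hypothetical one-equation system: x_{t+1} - s * x_t**alpha = 0
s, alpha = 0.2, 0.36
gamma = lambda xn, xc: np.array([xn[0] - s * xc[0]**alpha])
xbar = np.array([s**(1.0 / (1.0 - alpha))])  # steady state solves x = s x**alpha
A, B = num_jacobians(gamma, xbar)            # analytically, A = [[1]] and B = [[alpha]]
```

For this system the analytical Jacobians are $A = 1$ and $B = s\alpha \bar{x}^{\alpha-1} = \alpha$, so the numerical output can be checked directly.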

1.3 Examples

Consider the simple resource constraint

$$y_t = c_t + i_t,$$

indicating that output ($y_t$) can be either consumed ($c_t$) or invested ($i_t$). This equation is already linear. In the notation of (1) the equation appears as

$$y_t - c_t - i_t = 0,$$

and in terms of (3), with $x_t = [y_t \; c_t \; i_t]'$ and the equation representing the $i$th of the system, the $i$th row of $\frac{\partial \Gamma}{\partial x_t}(\bar{x})$ is $[1 \;\; -1 \;\; -1]$. In the notation of (5), the equation appears as

$$\log y_t - \log\left[\exp(\log c_t) + \exp(\log i_t)\right] = 0,$$

and in terms of (7), the $i$th row of the right-hand-side matrix is

$$\left[\frac{\partial \log \Gamma_1}{\partial \log x_t}(\bar{x}) - \frac{\partial \log \Gamma_2}{\partial \log x_t}(\bar{x})\right] = \left[1 \;\; -\frac{\bar{c}}{\bar{c}+\bar{i}} \;\; -\frac{\bar{i}}{\bar{c}+\bar{i}}\right]. \qquad (8)$$

Finally, to use a gradient procedure to accomplish log-linear approximation, the $i$th return of the system-evaluation procedure would be

$$z_i = \log y_t - \log\left[\exp(\log c_t) + \exp(\log i_t)\right].$$

As an additional example, consider the Cobb-Douglas production function

$$y_t = a_t k_t^{\alpha} n_t^{1-\alpha}, \qquad \alpha \in (0,1),$$

where output is produced by use of capital ($k_t$) and labor ($n_t$) and is subject to a technology or productivity shock ($a_t$). Linear approximation of this equation is left as an exercise. To accomplish log-linear approximation, taking logs of the equation and rearranging maps into the notation of (5) as

$$\log y_t - \log a_t - \alpha \log k_t - (1-\alpha)\log n_t = 0.$$

With $x_t = \left[\log\frac{y_t}{\bar{y}} \;\; \log\frac{a_t}{\bar{a}} \;\; \log\frac{k_t}{\bar{k}} \;\; \log\frac{n_t}{\bar{n}}\right]'$, the $i$th row of the right-hand-side matrix in (7) is

$$\left[\frac{\partial \log \Gamma_1}{\partial \log x_t}(\bar{x}) - \frac{\partial \log \Gamma_2}{\partial \log x_t}(\bar{x})\right] = \left[1 \;\; -1 \;\; -\alpha \;\; -(1-\alpha)\right]. \qquad (9)$$

And to use a gradient procedure to accomplish log-linear approximation, the $i$th return of the system-evaluation procedure would be

$$z_i = \log y_t - \log a_t - \alpha \log k_t - (1-\alpha)\log n_t.$$
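The elasticity row in (8) can be checked numerically by differentiating the log-form system evaluation with respect to the logged variables; the steady-state values below are hypothetical placeholders:

```python
import numpy as np

# System evaluation in logs: z = log y - log(exp(log c) + exp(log i)),
# as in the resource-constraint example.
def z(logx):
    logy, logc, logi = logx
    return logy - np.log(np.exp(logc) + np.exp(logi))

cbar, ibar = 0.8, 0.2                 # hypothetical steady-state values
ybar = cbar + ibar
logxbar = np.log(np.array([ybar, cbar, ibar]))

h = 1e-6                              # central finite differences in logs
row = np.array([(z(logxbar + h * e) - z(logxbar - h * e)) / (2 * h)
                for e in np.eye(3)])
# Analytically: row = [1, -cbar/(cbar+ibar), -ibar/(cbar+ibar)]
```

With these values the analytical row is $[1, -0.8, -0.2]$, which the finite-difference calculation reproduces to numerical precision.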

2 Solution Methods

Having approximated the model as in (2), we next seek a solution of the form

$$x_{t+1} = F x_t + G \nu_{t+1}. \qquad (10)$$

This solution represents the time series behavior of $\{x_t\}$ as a function of $\{\nu_t\}$, where $\nu_t$ is a vector of exogenous innovations, or as frequently referenced, structural shocks.

Here we present four popular approaches to the derivation of (10) from (2). Each approach involves an alternative way of expressing (2), and employs specialized notation. Before describing these approaches, we introduce an explicit example of (2), which we will map into the notation employed under each approach to aid with the exposition.

The example is a linearized stochastic version of Ramsey's (1928) optimal growth model:

$$\tilde{y}_{t+1} - \tilde{a}_{t+1} - \alpha \tilde{k}_{t+1} = 0 \qquad (11)$$
$$\tilde{y}_{t+1} - \phi_c \tilde{c}_{t+1} - \phi_i \tilde{i}_{t+1} = 0 \qquad (12)$$
$$\phi_{1c} E_t(\tilde{c}_{t+1}) + \phi_a E_t(\tilde{a}_{t+1}) + \phi_k E_t(\tilde{k}_{t+1}) + \phi_{2c} \tilde{c}_t = 0 \qquad (13)$$
$$\tilde{k}_{t+1} - \delta_k \tilde{k}_t - \delta_i \tilde{i}_t = 0 \qquad (14)$$
$$\tilde{a}_{t+1} - \rho \tilde{a}_t = \varepsilon_{t+1}. \qquad (15)$$

The variables $\{\tilde{y}_t, \tilde{c}_t, \tilde{i}_t, \tilde{k}_t, \tilde{a}_t\}$ represent output, consumption, investment, physical capital, and a productivity shock, all expressed as logged deviations from steady state values. The variable $\varepsilon_t$ is a serially uncorrelated stochastic process. The vector

$$\mu = [\alpha \;\; \phi_c \;\; \phi_i \;\; \phi_{1c} \;\; \phi_a \;\; \phi_k \;\; \phi_{2c} \;\; \delta_k \;\; \delta_i \;\; \rho]'$$

contains the "deep" parameters of the model.

Two modifications enable a mapping of the model into a specification resembling (2). First, the expectations operator $E_t(\cdot)$ is dropped from (13), introducing an expectational error into the modified equation; let this error be denoted as $\eta^c_{t+1}$. Next, the innovation term $\varepsilon_{t+1}$ in (15) must be accommodated. The resulting expression is

$$\underbrace{\begin{bmatrix} 1 & 0 & 0 & -\alpha & -1 \\ 0 & 0 & 0 & 0 & 1 \\ 1 & -\phi_c & -\phi_i & 0 & 0 \\ 0 & \phi_{1c} & 0 & \phi_k & \phi_a \\ 0 & 0 & 0 & 1 & 0 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} \tilde{y}_{t+1} \\ \tilde{c}_{t+1} \\ \tilde{i}_{t+1} \\ \tilde{k}_{t+1} \\ \tilde{a}_{t+1} \end{bmatrix}}_{x_{t+1}} = \underbrace{\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \rho \\ 0 & 0 & 0 & 0 & 0 \\ 0 & -\phi_{2c} & 0 & 0 & 0 \\ 0 & 0 & \delta_i & \delta_k & 0 \end{bmatrix}}_{B} \underbrace{\begin{bmatrix} \tilde{y}_t \\ \tilde{c}_t \\ \tilde{i}_t \\ \tilde{k}_t \\ \tilde{a}_t \end{bmatrix}}_{x_t} + \underbrace{\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}}_{C} \underbrace{\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \varepsilon_{t+1} \end{bmatrix}}_{\nu_{t+1}} + \underbrace{\begin{bmatrix} 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}}_{D} \underbrace{\begin{bmatrix} 0 \\ 0 \\ 0 \\ \eta^c_{t+1} \\ 0 \end{bmatrix}}_{\eta_{t+1}}. \qquad (16)$$
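As a concrete illustration, the matrices of (16) can be assembled directly. The parameter values below are hypothetical placeholders rather than calibrated values, and the symbol names follow the notation used here:

```python
import numpy as np

# Illustrative parameter values (hypothetical, not calibrated)
alpha, phi_c, phi_i = 0.36, 0.8, 0.2
phi_1c, phi_2c, phi_k, phi_a = -1.0, 1.0, 0.1, 0.5
delta_k, delta_i, rho = 0.9, 0.1, 0.8

# x_t = [y, c, i, k, a]' in logged deviations; rows ordered as in (16)
A = np.array([[1, 0,      0,      -alpha, -1],
              [0, 0,      0,       0,      1],
              [1, -phi_c, -phi_i,  0,      0],
              [0, phi_1c, 0,       phi_k,  phi_a],
              [0, 0,      0,       1,      0]], dtype=float)
B = np.array([[0, 0,       0,       0,       0],
              [0, 0,       0,       0,       rho],
              [0, 0,       0,       0,       0],
              [0, -phi_2c, 0,       0,       0],
              [0, 0,       delta_i, delta_k, 0]], dtype=float)
C = np.zeros((5, 5)); C[1, 4] = 1.0   # loads the innovation in (15)
D = np.zeros((5, 5)); D[3, 3] = 1.0   # loads the expectational error from (13)
```

Each solution method below starts from matrices of exactly this kind, built once for each candidate value of the deep-parameter vector.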

2.1 Blanchard and Kahn's Method

The first solution method we present was developed by Blanchard and Kahn (1980), and is applied to models written as

$$\begin{bmatrix} x_{1t+1} \\ E_t(x_{2t+1}) \end{bmatrix} = \tilde{A} \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} + E f_t, \qquad (17)$$

where the model variables have been divided into an $n_1 \times 1$ vector of endogenous predetermined variables $x_{1t}$ (defined as variables for which $E_t x_{1t+1} = x_{1t+1}$), and an $n_2 \times 1$ vector of endogenous non-predetermined variables $x_{2t}$. The $k \times 1$ vector $f_t$ contains exogenous forcing variables.

Following the approach of King and Watson (2002), a preliminary step is taken before casting a given model into the form (17). The step is referred to as a system reduction: it involves writing the model in terms of a subset of variables that are uniquely determined. In terms of the example, note that observations on $\tilde{a}_t$ and $\tilde{k}_t$ are sufficient for determining $\tilde{y}_t$ using (11), and that given $\tilde{y}_t$, the observation of either $\tilde{c}_t$ or $\tilde{i}_t$ is sufficient for determining both variables using (12). Thus we proceed in working directly with $\{\tilde{c}_t, \tilde{k}_t, \tilde{a}_t\}$ using (13)-(15), and recover $\{\tilde{y}_t, \tilde{i}_t\}$ as functions of $\{\tilde{c}_t, \tilde{k}_t, \tilde{a}_t\}$ using (11) and (12). Among $\{\tilde{c}_t, \tilde{k}_t, \tilde{a}_t\}$, $\tilde{k}_t$ is predetermined (given $\tilde{k}_t$ and $\tilde{i}_t$, $\tilde{k}_{t+1}$ is determined as in (14)); $\tilde{c}_t$ is endogenous but not predetermined (as indicated in (13), its time-$(t+1)$ realization is associated with an expectations error); and $\tilde{a}_t$ is an exogenous forcing variable. Thus in the notation of (17), we seek a specification of the model in the form

$$\begin{bmatrix} \tilde{k}_{t+1} \\ E_t(\tilde{c}_{t+1}) \end{bmatrix} = \tilde{A} \begin{bmatrix} \tilde{k}_t \\ \tilde{c}_t \end{bmatrix} + E \tilde{a}_t. \qquad (18)$$

To obtain this expression, let $\zeta_t = [\tilde{y}_t \;\; \tilde{i}_t]'$, $\xi_t = [\tilde{k}_t \;\; \tilde{c}_t]'$, and note that $E_t(\tilde{a}_{t+1}) = \rho \tilde{a}_t$. In terms of these variables, the model may be written as

$$\underbrace{\begin{bmatrix} 1 & 0 \\ 1 & -\phi_i \end{bmatrix}}_{\Gamma_0} \zeta_t = \underbrace{\begin{bmatrix} \alpha & 0 \\ 0 & \phi_c \end{bmatrix}}_{\Gamma_1} \xi_t + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{\Gamma_2} \tilde{a}_t \qquad (19)$$

$$\underbrace{\begin{bmatrix} \phi_k & \phi_{1c} \\ 1 & 0 \end{bmatrix}}_{\Gamma_3} E_t(\xi_{t+1}) = \underbrace{\begin{bmatrix} 0 & -\phi_{2c} \\ \delta_k & 0 \end{bmatrix}}_{\Gamma_4} \xi_t + \underbrace{\begin{bmatrix} 0 & 0 \\ 0 & \delta_i \end{bmatrix}}_{\Gamma_5} \zeta_t + \underbrace{\begin{bmatrix} -\phi_a \rho \\ 0 \end{bmatrix}}_{\Gamma_6} \tilde{a}_t. \qquad (20)$$

Next, substituting (19) into (20), which requires inversion of $\Gamma_0$, we obtain

$$\Gamma_3 E_t(\xi_{t+1}) = \left[\Gamma_4 + \Gamma_5 \Gamma_0^{-1} \Gamma_1\right] \xi_t + \left[\Gamma_6 + \Gamma_5 \Gamma_0^{-1} \Gamma_2\right] \tilde{a}_t. \qquad (21)$$

Finally, premultiplying (21) by $\Gamma_3^{-1}$ yields a specification in the form of (18); Blanchard and Kahn's solution method may now be implemented. Hereafter, we describe its implementation in terms of the notation employed in (17).

The method begins with a Jordan decomposition of $\tilde{A}$, yielding

$$\tilde{A} = \Lambda^{-1} J \Lambda, \qquad (22)$$

where the diagonal elements of $J$, consisting of the eigenvalues of $\tilde{A}$, are ordered in increasing absolute value in moving from left to right.² Thus $J$ may be written as

$$J = \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix}, \qquad (23)$$

where the eigenvalues in $J_1$ lie on or within the unit circle, and those in $J_2$ lie outside of the unit circle. $J_2$ is said to be unstable or explosive, since $J_2^n$ diverges as $n$ increases. The matrices $\Lambda$ and $E$ are partitioned conformably as

$$\Lambda = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix}, \qquad E = \begin{bmatrix} E_1 \\ E_2 \end{bmatrix}, \qquad (24)$$

where $\Lambda_{11}$ is conformable with $J_1$, etc. If the number of explosive eigenvalues is equal to the number of non-predetermined variables, the system is said to be saddle-path stable and a unique solution to the model exists. If the number of explosive eigenvalues exceeds the number of non-predetermined variables, no solution exists (and the system is said to be a source); in the opposite case, an infinity of solutions exists (and the system is said to be a sink).

Proceeding under the case of saddle-path stability, substitution for $\tilde{A}$ in (17) yields

$$\begin{bmatrix} x_{1t+1} \\ E_t(x_{2t+1}) \end{bmatrix} = \Lambda^{-1} J \Lambda \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} + \begin{bmatrix} E_1 \\ E_2 \end{bmatrix} f_t. \qquad (25)$$

² Eigenvalues of a square matrix $X$ are obtained from the solution of equations of the form $X e = \lambda e$, where $e$ is an eigenvector and $\lambda$ the associated eigenvalue. The GAUSS command eigv performs this decomposition.

Next, the system is pre-multiplied by $\Lambda$, yielding

$$\begin{bmatrix} \hat{x}_{1t+1} \\ E_t(\hat{x}_{2t+1}) \end{bmatrix} = \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix} \begin{bmatrix} \hat{x}_{1t} \\ \hat{x}_{2t} \end{bmatrix} + \begin{bmatrix} D_1 \\ D_2 \end{bmatrix} f_t, \qquad (26)$$

where

$$\begin{bmatrix} \hat{x}_{1t} \\ \hat{x}_{2t} \end{bmatrix} = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix} \begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} \qquad (27)$$

$$\begin{bmatrix} D_1 \\ D_2 \end{bmatrix} = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix} \begin{bmatrix} E_1 \\ E_2 \end{bmatrix}. \qquad (28)$$

This transformation effectively "de-couples" the system, so that the non-predetermined variables depend only upon the unstable eigenvalues of $\tilde{A}$ contained in $J_2$, as expressed in the lower part of (26).

Having de-coupled the system, we derive a solution for the non-predetermined variables by performing a forward iteration on the lower portion of (26). Using $f_{2t}$ to denote the portion of $f_t$ conformable with $D_2$, this is accomplished as follows. First, re-express the lower portion of (26) as

$$\hat{x}_{2t} = J_2^{-1} E_t(\hat{x}_{2t+1}) - J_2^{-1} D_2 f_{2t}. \qquad (29)$$

This implies an expression for $\hat{x}_{2t+1}$ of the form

$$\hat{x}_{2t+1} = J_2^{-1} E_{t+1}(\hat{x}_{2t+2}) - J_2^{-1} D_2 f_{2t+1}, \qquad (30)$$

which can be substituted into (29) to obtain

$$\hat{x}_{2t} = J_2^{-2} E_t(\hat{x}_{2t+2}) - J_2^{-2} D_2 E_t(f_{2t+1}) - J_2^{-1} D_2 f_{2t}. \qquad (31)$$

In writing (31) we have exploited the Law of Iterated Expectations, which holds that $E_t[E_{t+1}(x_{t+s})] = E_t(x_{t+s})$ for any $x_{t+s}$ (e.g., see Ljungqvist and Sargent, 2004). Since $J_2$ contains explosive eigenvalues, $J_2^{-n}$ disappears as $n$ approaches infinity; thus continuation of the iteration process yields

$$\hat{x}_{2t} = -\sum_{i=0}^{\infty} J_2^{-(i+1)} D_2 E_t(f_{2t+i}). \qquad (32)$$

Mapping this back into an expression for $x_{2t}$ using (27), we obtain

$$x_{2t} = -\Lambda_{22}^{-1} \Lambda_{21} x_{1t} - \Lambda_{22}^{-1} \sum_{i=0}^{\infty} J_2^{-(i+1)} D_2 E_t(f_{2t+i}). \qquad (33)$$

In the case of the example model presented above, $E_t(f_{2t+i}) = \rho^i \tilde{a}_t$, and thus (33) becomes

$$x_{2t} = -\Lambda_{22}^{-1} \Lambda_{21} x_{1t} - \Lambda_{22}^{-1} J_2^{-1} \left(I - \rho J_2^{-1}\right)^{-1} D_2 \tilde{a}_t. \qquad (34)$$

Finally, to solve the non-explosive portion of the system, begin by expanding the upper portion of (25):

$$x_{1t+1} = \tilde{A}_{11} x_{1t} + \tilde{A}_{12} x_{2t} + E_1 f_t, \qquad (35)$$

where $\tilde{A}_{11}$ and $\tilde{A}_{12}$ are partitions of $\Lambda^{-1} J \Lambda$ conformable with $x_{1t}$ and $x_{2t}$. Then substituting for $x_{2t}$ using (33) yields a solution for $x_{1t}$ of the form given by (10).

We conclude this subsection by highlighting two requirements of this solution method.

First, a model-specific system reduction is employed to obtain an expression of the model that consists of a subset of its variables. The variables in the subset are distinguished as being either predetermined or non-predetermined. Second, invertibility of the lead matrices $\Gamma_0$ and $\Gamma_3$ is required in order to obtain a specification of the model amenable for solution.
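The eigenvalue-counting logic behind the saddle-path condition can be sketched as follows (a simple illustration, not Blanchard and Kahn's own procedure; the $2 \times 2$ matrix below is hypothetical):

```python
import numpy as np

def bk_classification(A_tilde, n_nonpredetermined):
    """Compare the number of explosive eigenvalues of A_tilde with the
    number of non-predetermined variables (the Blanchard-Kahn condition)."""
    eigs = np.linalg.eigvals(A_tilde)
    n_explosive = int(np.sum(np.abs(eigs) > 1.0))
    if n_explosive == n_nonpredetermined:
        return "saddle-path stable"   # unique solution
    if n_explosive > n_nonpredetermined:
        return "source"               # no solution
    return "sink"                     # infinity of solutions

# Hypothetical system with eigenvalues 0.5 and 2.0, and one
# non-predetermined variable: one explosive root, so saddle-path stable.
A_tilde = np.array([[0.5, 0.0],
                    [0.0, 2.0]])
status = bk_classification(A_tilde, n_nonpredetermined=1)
```

The same count immediately flags the source and sink cases when the number of non-predetermined variables is mis-specified.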

Exercise 1. Write computer code for mapping the example model expressed in (11)-(15) into the form of the representation given in (17).

2.2 Sims's Method

Sims (2001) proposes a solution method applied to models expressed as

$$A x_{t+1} = B x_t + E + C \nu_{t+1} + D \eta_{t+1}, \qquad (36)$$

where $E$ is a matrix of constants.³ Relative to the notation we have employed above, $E$ is unnecessary because the variables in $x_t$ are expressed in terms of deviations from steady state values. Like Blanchard and Kahn's (1980) method, Sims's method involves a de-coupling of the system into explosive and non-explosive portions. However, rather than expressing variables in terms of expected values, expectations operators have been dropped, giving rise to the expectations errors contained in $\eta_{t+1}$. Also, while Blanchard and Kahn's method entails isolation of the forcing variables from $x_{t+1}$, these are included in $x_{t+1}$ under Sims's method; thus the appearance in the system of the vector of shocks to these variables, $\nu_{t+1}$. Third, Sims's method does not require an initial system-reduction step. Finally, it does not entail a distinction between predetermined and non-predetermined variables.

³ The programs available on Sims's website perform all of the steps of this procedure. The web address is http://www.princeton.edu/~sims/. The programs are written in Matlab; analogous code written in GAUSS is currently under construction.

Note from (16) that the example model has already been cast in the form of (36); thus we proceed directly to a characterization of the solution method. The first step employs a "QZ factorization" to decompose $A$ and $B$:

$$A = Q' \Lambda Z' \qquad (37)$$
$$B = Q' \Omega Z', \qquad (38)$$

where $(Q, Z)$ are unitary and $(\Lambda, \Omega)$ are upper triangular.⁴ Next, $(Q, Z, \Lambda, \Omega)$ are ordered such that, in absolute value, the generalized eigenvalues of $A$ and $B$ are organized in $\Lambda$ and $\Omega$ in increasing order moving from left to right, just as in Blanchard and Kahn's Jordan decomposition procedure.⁵ Having obtained the factorization, the original system is then pre-multiplied by $Q$, yielding the transformed system expressed in terms of $z_{t+1} = Z' x_{t+1}$:

$$\Lambda z_t = \Omega z_{t-1} + Q E + Q C \nu_t + Q D \eta_t, \qquad (39)$$

where we have lagged the system by one period in order to match the notation (and code) of Sims.

⁴ A unitary matrix $U$ satisfies $U'U = UU' = I$. If $Q$ and/or $Z$ contain complex values, the transpositions reflect complex conjugation; that is, each complex entry is replaced by its conjugate and then transposed.

⁵ Generalized eigenvalues of a matrix pair are obtained as solutions to equations of the form $A e = \lambda B e$, where $e$ is a generalized eigenvector and $\lambda$ the associated generalized eigenvalue. Sims's website also provides a program that orders the eigenvalues appropriately.

Next, as with Blanchard and Kahn's (1980) method, (39) is partitioned into explosive and non-explosive blocks:

$$\begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ 0 & \Lambda_{22} \end{bmatrix} \begin{bmatrix} z_{1t} \\ z_{2t} \end{bmatrix} = \begin{bmatrix} \Omega_{11} & \Omega_{12} \\ 0 & \Omega_{22} \end{bmatrix} \begin{bmatrix} z_{1t-1} \\ z_{2t-1} \end{bmatrix} + \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix} \left[E + C\nu_t + D\eta_t\right]. \qquad (40)$$

The explosive block (the lower equations) is solved as follows. Letting $w_t = Q(E + C\nu_t + D\eta_t)$ (partitioned conformably as $w_{1t}$ and $w_{2t}$), the lower block of (40) is given by

$$\Lambda_{22} z_{2t} = \Omega_{22} z_{2t-1} + w_{2t}. \qquad (41)$$

Leading (41) by one period and solving for $z_{2t}$ yields

$$z_{2t} = M z_{2t+1} - \Omega_{22}^{-1} w_{2t+1}, \qquad (42)$$

where $M = \Omega_{22}^{-1} \Lambda_{22}$. Then recursive substitution for $z_{2t+1}, z_{2t+2}, \ldots$ yields

$$z_{2t} = -\sum_{i=0}^{\infty} M^{i} \Omega_{22}^{-1} w_{2t+1+i}, \qquad (43)$$

since $\lim_{t \to \infty} M^t z_{2t} = 0$. Recalling that $w_t$ is defined as $w_t = Q(E + C\nu_t + D\eta_t)$, note that (43) expresses $z_{2t}$ as a function of future values of structural and expectational errors. But $z_{2t}$ is known at time $t$, and $E_t(\nu_{t+s}) = E_t(\eta_{t+s}) = 0$ for $s > 0$; thus (43) may be written as

$$z_{2t} = -\sum_{i=0}^{\infty} M^{i} \Omega_{22}^{-1} Q_2 E, \qquad (44)$$

where $Q_2 E$ is the lower portion of $QE$ conformable with $z_{2t}$.⁶ Noting that $\sum_{i=0}^{\infty} M^i = (I - M)^{-1}$, and that $(I-M)^{-1}\Omega_{22}^{-1} = \left[\Omega_{22}(I-M)\right]^{-1} = (\Omega_{22} - \Lambda_{22})^{-1}$, the solution for $z_{2t}$ is obtained as

$$z_{2t} = (\Lambda_{22} - \Omega_{22})^{-1} Q_2 E. \qquad (45)$$

Having solved for $z_{2t}$, the final step is to solve for $z_{1t}$ in (40). Note that the solution of $z_{1t}$ requires a solution for the expectations errors that appear in (40). As Sims notes, when a unique solution for the model exists, it will be the case that a systematic relationship exists between the expectations errors associated with $z_{1t}$ and $z_{2t}$; exploiting this relationship yields a straightforward means of solving for $z_{1t}$. The necessary and sufficient condition for uniqueness is given by the existence of a $k \times (n-k)$ matrix $\Phi$ that satisfies

$$Q_1 D = \Phi Q_2 D, \qquad (46)$$

which represents the systematic relationship between the expectations errors associated with $z_{1t}$ and $z_{2t}$ noted above. Given uniqueness, and thus the ability to calculate $\Phi$ as in (46), the solution of $z_{1t}$ proceeds with the pre-multiplication of (39) by $[I \;\; -\Phi]$, which yields

$$\left[\Lambda_{11} \;\; \Lambda_{12} - \Phi\Lambda_{22}\right] \begin{bmatrix} z_{1t} \\ z_{2t} \end{bmatrix} = \left[\Omega_{11} \;\; \Omega_{12} - \Phi\Omega_{22}\right] \begin{bmatrix} z_{1t-1} \\ z_{2t-1} \end{bmatrix} + \left[Q_1 - \Phi Q_2\right]\left[E + C\nu_t + D\eta_t\right]. \qquad (47)$$

Then due to (46), the loading factor for the expectational errors in (47) is zero, and thus the system may be written in the form

$$x_t = \Theta_E + \Theta_0 x_{t-1} + \Theta_1 \nu_t, \qquad (48)$$

where

$$H = Z \begin{bmatrix} \Lambda_{11}^{-1} & -\Lambda_{11}^{-1}(\Lambda_{12} - \Phi\Lambda_{22}) \\ 0 & I \end{bmatrix} \qquad (49)$$

$$\Theta_E = H \begin{bmatrix} Q_1 - \Phi Q_2 \\ (\Omega_{22} - \Lambda_{22})^{-1} Q_2 \end{bmatrix} E \qquad (50)$$

$$\Theta_0 = Z \begin{bmatrix} \Lambda_{11}^{-1}\Omega_{11} & \Lambda_{11}^{-1}(\Omega_{12} - \Phi\Omega_{22}) \\ 0 & 0 \end{bmatrix} Z' \qquad (51)$$

$$\Theta_1 = H \begin{bmatrix} Q_1 - \Phi Q_2 \\ 0 \end{bmatrix} C. \qquad (52)$$

⁶ Sims also considers the case in which the structural innovations $\nu_t$ are serially correlated, which leads to a generalization of (44).
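In modern numerical libraries the ordered QZ factorization is available directly; for instance, SciPy's `ordqz` computes the decomposition with eigenvalues inside the unit circle ordered first. The matrices below are arbitrary illustrations rather than the example model:

```python
import numpy as np
from scipy.linalg import ordqz

# Hypothetical matrices standing in for the pair to be decomposed.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Ordered QZ: A = Q S Z^H and B = Q T Z^H with S, T upper triangular,
# and generalized eigenvalues inside the unit circle sorted to the top left.
S, T, alpha_, beta_, Q, Z = ordqz(A, B, sort='iuc', output='complex')

# Generalized eigenvalues of the pair are alpha_/beta_.
gen_eigs = alpha_ / beta_
```

The leading block of the reordered factorization then corresponds to the stable portion of the system, which is the partition exploited in (40).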

Exercise 2. Using the code cited for this method, compute the solution for (11)-(15) for given values of $\mu$.

2.3 Klein's Method

Klein (2000) proposes a solution method that is a hybrid of those of Blanchard and Kahn (1980) and Sims (2001).⁷ The method is applied to systems written as

$$\tilde{A} E_t(x_{t+1}) = \tilde{B} x_t + E f_t, \qquad (53)$$

where the vector $f_t$ (of length $n_z$) has a zero-mean vector autoregressive (VAR) specification with autocorrelation matrix $\Phi$; additionally, $\tilde{A}$ may be singular.⁸

⁷ GAUSS and Matlab code that implement this solution method are available at http://www.ssc.uwo.ca/economics/faculty/klein/.

Like Blanchard and Kahn, Klein distinguishes between the predetermined and non-predetermined variables of the model. The former are contained in $x_{1t+1}$, the latter in $x_{2t+1}$: $E_t(x_{t+1}) = \left[x_{1t+1} \;\; E_t(x_{2t+1})\right]'$. The solution approach once again involves de-coupling the system into non-explosive and explosive components, and solving the two components in turn.

Returning to the example model expressed in (11)-(15), the form of the model amenable to the implementation of Klein's method is given by (21), repeated here for convenience:

$$\Gamma_3 E_t(\xi_{t+1}) = \left[\Gamma_4 + \Gamma_5 \Gamma_0^{-1} \Gamma_1\right] \xi_t + \left[\Gamma_6 + \Gamma_5 \Gamma_0^{-1} \Gamma_2\right] \tilde{a}_t. \qquad (54)$$

The main advantage of Klein's approach relative to Blanchard and Kahn's is that $\Gamma_3$ may be singular. To proceed with the description of Klein's approach, we revert to the notation employed in (53).

Klein's approach overcomes the potential non-invertibility of $\tilde{A}$ by implementing a complex generalized Schur decomposition of $\tilde{A}$ and $\tilde{B}$, in place of the QZ decomposition employed by Sims. In short, the Schur decomposition is a generalization of the QZ decomposition that allows for complex eigenvalues associated with $\tilde{A}$ and $\tilde{B}$. Given the decomposition of $\tilde{A}$ and $\tilde{B}$, Klein's method closely follows that of Blanchard and Kahn.

⁸ See Chapter 4 for a description of VAR models.

The Schur decompositions of $\tilde{A}$ and $\tilde{B}$ are given by

$$Q \tilde{A} Z = S \qquad (55)$$
$$Q \tilde{B} Z = T, \qquad (56)$$

where $(Q, Z)$ are unitary and $(S, T)$ are upper triangular matrices with diagonal elements containing the generalized eigenvalues of $\tilde{A}$ and $\tilde{B}$. Once again the eigenvalues are ordered in increasing value in moving from left to right. Partitioning $Z$ as

$$Z = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix}, \qquad (57)$$

$Z_{11}$ is $n_1 \times n_1$ and corresponds to the non-explosive eigenvalues of the system. Given saddle-path stability, this conforms with $x_1$, which contains the predetermined variables of the model.

Having obtained this decomposition, the next step in solving the system is to triangularize (53) as was done in working with the QZ decomposition. Begin by defining

$$z_t = Z^H x_t, \qquad (58)$$

where $Z^H$ refers to a Hermitian transpose.⁹ This transformed vector is divided into $n_1 \times 1$ stable ($s_t$) and $n_2 \times 1$ unstable ($u_t$) components. Then since $\tilde{A} = Q' S Z^H$ and $\tilde{B} = Q' T Z^H$, (53) may be written as

$$\begin{bmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{bmatrix} E_t \begin{bmatrix} s_{t+1} \\ u_{t+1} \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix} \begin{bmatrix} s_t \\ u_t \end{bmatrix} + \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix} E f_t; \qquad (59)$$

⁹ Given a matrix $\Delta$, if the lower triangular portion of $\Delta$ is the complex conjugate transpose of the upper triangular portion of $\Delta$, then $\Delta$ is denoted as Hermitian.

once again, the lower portion of (59) contains the unstable components of the system. Solving this component via forward iteration, we obtain¹⁰

$$u_t = M f_t \qquad (60)$$
$$\operatorname{vec}(M) = \left[(\Phi' \otimes S_{22}) - I_{n_z} \otimes T_{22}\right]^{-1} \operatorname{vec}(Q_2 E). \qquad (61)$$

This solution for the unstable component is then used to solve the stable component, yielding

$$s_{t+1} = S_{11}^{-1} T_{11} s_t + S_{11}^{-1}\left\{T_{12} M - S_{12} M \Phi + Q_1 E\right\} f_t - Z_{11}^{-1} Z_{12} M \varepsilon_{t+1}, \qquad (62)$$

where $\varepsilon_{t+1}$ is a serially uncorrelated stochastic process representing the innovations in the VAR specification for $f_{t+1}$. In the context of our example model, $f_t$ corresponds to $\tilde{a}_t$, the innovation to which is $\varepsilon_t$. In terms of the original variables, the solution is expressed as

$$x_{2t} = Z_{21} Z_{11}^{-1} x_{1t} + N f_t \qquad (63)$$
$$x_{1t+1} = Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} x_{1t} + L f_t \qquad (64)$$
$$N = (Z_{22} - Z_{21} Z_{11}^{-1} Z_{12}) M \qquad (65)$$
$$L = -Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} Z_{12} M + Z_{11} S_{11}^{-1}\left[T_{12} M - S_{12} M \Phi + Q_1 E\right] + Z_{12} M \Phi. \qquad (66)$$

This solution can be cast into the form of (10) as

$$x_{1t+1} = \left[Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1}\right] x_{1t} + \left[Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} N + L\right] f_t. \qquad (67)$$

¹⁰ The appearance of the vec operator accommodates the VAR specification for $f_t$. In the context of the example model, $\Phi'$ is replaced by the scalar $\rho$, and (61) becomes $M = (\rho S_{22} - T_{22})^{-1} Q_2 E$.

Exercise 3. Apply Klein's code to the example model presented in (11)-(15).

2.4 An Undetermined Coefficients Approach

Uhlig (1999) proposes a solution method based on the method of undetermined coefficients.¹¹ The method is applied to systems written as

$$0 = E_t\left[F x_{t+1} + G x_t + H x_{t-1} + L f_{t+1} + M f_t\right] \qquad (68)$$
$$f_{t+1} = N f_t + \nu_{t+1}, \qquad E_t(\nu_{t+1}) = 0. \qquad (69)$$

¹¹ Matlab code for implementing this solution method is available at http://www.wiwi.hu-berlin.de/wpol/html/toolkit.htm. GAUSS code is currently under construction.

With respect to the example model in (11)-(14), let $x_t = [\tilde{y}_t \;\; \tilde{c}_t \;\; \tilde{i}_t \;\; \tilde{k}_t]'$. Then lagging the first two equations, which are subject neither to structural shocks nor expectations errors, the matrices in (68) and (69) are given by

$$F = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & \phi_{1c} & 0 & \phi_k \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad G = \begin{bmatrix} 1 & 0 & 0 & -\alpha \\ 1 & -\phi_c & -\phi_i & 0 \\ 0 & \phi_{2c} & 0 & 0 \\ 0 & 0 & -\delta_i & -\delta_k \end{bmatrix}, \qquad (70)$$

$H = 0$, $L = [0 \;\; 0 \;\; \phi_a \;\; 0]'$, $M = [-1 \;\; 0 \;\; 0 \;\; 0]'$, and $N = \rho$.

Solutions to (68)-(69) take the form

$$x_t = P x_{t-1} + Q f_t. \qquad (71)$$

In deriving (71), we will confront the problem of solving matrix quadratic equations of the form

$$\Psi P^2 - \Gamma P - \Theta = 0 \qquad (72)$$

for the $m \times m$ matrix $P$. Thus we first describe the solution of such equations.

To begin, define

$$\Xi_{2m \times 2m} = \begin{bmatrix} \Gamma & \Theta \\ I_m & 0_{m \times m} \end{bmatrix}, \qquad \Delta_{2m \times 2m} = \begin{bmatrix} \Psi & 0_{m \times m} \\ 0_{m \times m} & I_m \end{bmatrix}. \qquad (73)$$

Given these matrices, let $s$ and $\lambda$ denote a generalized eigenvector and eigenvalue of $\Xi$ with respect to $\Delta$, and note that $s' = [\lambda x', x']$ for some $x \in \mathbb{R}^m$. Then the solution to the matrix quadratic is given by

$$P = \Omega \Lambda \Omega^{-1}, \qquad \Omega = [x_1, \ldots, x_m], \qquad \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_m), \qquad (74)$$

so long as the eigenvectors $(x_1, \ldots, x_m)$ associated with the $m$ eigenvalues contained in $\Lambda$ are linearly independent. The solution is stable if the generalized eigenvalues are all less than one in absolute value.
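A Python sketch of this solution of the matrix quadratic follows, assuming the quadratic is written as $\Psi P^2 - \Gamma P - \Theta = 0$ (the test equation and its coefficients below are made up for illustration):

```python
import numpy as np

def solve_matrix_quadratic(Psi, Gamma, Theta):
    """Solve Psi P^2 - Gamma P - Theta = 0 for the stable P via the
    eigendecomposition of Delta^{-1} Xi. Assumes Psi (hence Delta) is
    invertible; the singular case needs a generalized eigensolver."""
    m = Psi.shape[0]
    Xi = np.block([[Gamma, Theta],
                   [np.eye(m), np.zeros((m, m))]])
    Delta = np.block([[Psi, np.zeros((m, m))],
                      [np.zeros((m, m)), np.eye(m)]])
    lam, s = np.linalg.eig(np.linalg.solve(Delta, Xi))
    stable = np.argsort(np.abs(lam))[:m]   # keep the m smallest |lambda|
    Lam = np.diag(lam[stable])
    Omega = s[m:, stable]                  # lower block of s, since s = [lam*x; x]
    return (Omega @ Lam @ np.linalg.inv(Omega)).real

# Scalar check: p^2 - 1.5 p + 0.5 = 0 has roots 1.0 and 0.5;
# the stable root is 0.5.
P = solve_matrix_quadratic(np.array([[1.0]]),
                           np.array([[1.5]]),
                           np.array([[-0.5]]))
```

Selecting the $m$ smallest eigenvalues in absolute value corresponds to picking the stable solution when exactly $m$ stable generalized eigenvalues exist.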

Returning to the solution of the system in (68)-(69), the first step towards obtaining (71) is to combine these equations into a single equation. This is accomplished in two steps. First, write $x_t$ in (68) in terms of its relationship with $x_{t-1}$ given by (71), and do the same for $x_{t+1}$, where the relationship is given by

$$x_{t+1} = P^2 x_{t-1} + P Q f_t + Q f_{t+1}. \qquad (75)$$

Next, write $f_{t+1}$ in terms of its relationship with $f_t$ given by (69). Taking expectations of the resulting equation yields

$$0 = \left[F P^2 + G P + H\right] x_{t-1} + \left[(F P + G) Q + M + (F Q + L) N\right] f_t. \qquad (76)$$

Note that in order for (76) to hold, the coefficients on $x_{t-1}$ and $f_t$ must be zero. The first restriction implies that $P$ must satisfy the matrix quadratic equation

$$0 = F P^2 + G P + H, \qquad (77)$$

the solution of which is obtained as indicated in (73) and (74), with $\Psi = -F$, $\Gamma = G$, and $\Theta = H$. The second restriction requires the derivation of the $Q$ which satisfies

$$(F P + G) Q + M + (F Q + L) N = 0. \qquad (78)$$

The required $Q$ can be shown to be given by

$$\operatorname{vec}(Q) = V^{-1}\left[-\operatorname{vec}(L N + M)\right], \qquad (79)$$

where $V$ is defined as

$$V = N' \otimes F + I_k \otimes (F P + G). \qquad (80)$$

The solutions for $P$ and $Q$ will be unique so long as the matrix $P$ has stable eigenvalues.

As noted by Christiano (2002), this solution method is particularly convenient for working with models involving endogenous variables that have differing associated information sets. Such models can be cast in the form of (68)-(69), with the expectations operator $\bar{E}_t$ replacing $E_t$. In terms of calculating the expectation of an $n \times 1$ vector $X_t$, $\bar{E}_t$ is defined as

$$\bar{E}_t(X_t) = \begin{bmatrix} E(X_{1t}|\Omega_{1t}) \\ \vdots \\ E(X_{nt}|\Omega_{nt}) \end{bmatrix}, \qquad (81)$$

where $\Omega_{it}$ represents the information set available for formulating expectations over the $i$th element of $X_t$. Thus systems involving this form of heterogeneity may be accommodated using an expansion of the system (68)-(69) specified for a representative agent. The solution of the expanded system proceeds as indicated above; for details and extensions, see Christiano (2002).

Exercise 4. Apply Uhlig's code to the example model presented in (11)-(15).


References

[1] Adda, J. and R. Cooper. 2003. Dynamic Economics. MIT Press.

[2] Blanchard, O. J. and C. M. Kahn. 1980. "The Solution of Linear Difference Models Under Rational Expectations". Econometrica 48 (5): 1305-1311.

[3] Christiano, L. J. 2002. "Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients". Computational Economics 20: 21-55.

[4] Judd, K. 1998. Numerical Methods in Economics. MIT Press.

[5] King, R. G. and M. W. Watson. 2002. "System Reduction and Solution Algorithms for Solving Linear Difference Systems under Rational Expectations". Computational Economics 20: 57-86.

[6] Klein, P. 2000. "Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model". Journal of Economic Dynamics and Control 24: 1405-1423.

[7] Ljungqvist, L. and T. J. Sargent. 2004. Recursive Macroeconomic Theory. MIT Press.

[8] Ramsey, F. P. 1928. "A Mathematical Theory of Saving". Economic Journal 38: 543-559.

[9] Schmitt-Grohé, S. and M. Uribe. 2002. "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation to the Policy Function". NBER Technical Working Paper No. 282. Cambridge: NBER.

[10] Sims, C. A. 2001. "Solving Linear Rational Expectations Models". Computational Economics 20: 1-20.

[11] Uhlig, H. 1999. "A Toolkit for Analysing Nonlinear Dynamic Stochastic Models Easily". In Ramon Marimon and Andrew Scott (eds.), Computational Methods for the Study of Dynamic Economies. New York: Oxford University Press, 30-61.