Robustness and Model Misspecification


Lars Peter Hansen, University of Chicago

Thomas J. Sargent, NYU

Gauhar A. Turmuhambetova, U. Chicago

Noah Williams, Princeton University

Overall Aim of Research

• Structural models of dynamic economies require specifications of how people respond to risk and uncertainty.

• The typical approach assumes people confront risk using a model of rational expectations: decision-makers are endowed with the correct model to use in assessing risk. There is no scope for modelling error or approximation.

• Empirical evidence from asset markets suggests that decision-makers sometimes look highly risk averse.

• Our aim: to understand when a concern about model misspecification or robustness will imitate risk aversion in actions and prices that clear decentralized markets.

• Here: clarify decision theoretic foundations.

Robustness

Burl, Linear Optimal Control (1999): The control system engineer should be assured that a design will function acceptably before committing to implementation. Such assurance can be obtained by analyzing control system stability and performance with respect to a range of plant models that is expected to encompass the actual plant. This type of analysis is termed robustness analysis.

Huber (1981): ... as we defined robustness to mean insensitivity with regard to small deviations from assumptions, any quantitative measure of robustness must somehow be concerned with the maximum degradation of performance possible for an ε-deviation from the assumptions. The optimally robust procedure minimizes this degradation and hence will be a minimax procedure of some kind.

Uncertainty Aversion

• Ellsberg (1961) paradox: 2 urns, 100 balls, some red, some black.
Urn A: 50 red, 50 black; Urn B: unknown.
Bet 1: ball from urn A is black (AB). Bet 2: ball from urn A is red (AR). Similarly for BB, BR.
Observed empirically: AB ∼ AR > BB ∼ BR

• Gilboa and Schmeidler (1989): For urn B, the subject has too little information to form a prior and considers a set of priors as possible. Being uncertainty averse, the subject evaluates bets by minimal expected utility over priors in the set.

• Key: “model” = probability measure; a set of priors = a class of models/measures. A Lagrange multiplier theorem links the two formulations.

Four Problems

• Robust Control Problem, multiplier version: a benchmark control problem plus a penalized family of perturbations around it. James (1992), Anderson-Hansen-Sargent (2000).

• Robust Control Problem, constraint version: Petersen-James-Dupuis (2000). Perturbations index multiple priors: Gilboa-Schmeidler (1989) and Chen-Epstein (2001).

• Risk-Sensitive Control Problem: change preferences using a risk-sensitive recursion. Jacobson (1973), Whittle (1981), Hansen-Sargent (1995). Special case of Epstein-Zin (1989), Duffie-Epstein (1992).

• Bayesian Decision Problem: generate a new shock model; a dynamic counterpart to Blackwell-Girshick (1954), Chamberlain (2000).

Two Additional Tasks

• Study absolute continuity as a device to motivate perturbations in robust decision problems.

• Investigate the recursivity of the implied control problems and preference orders.

Quantitative Application

• Here we focus on the setup of the problem in a simple environment: a diffusion setting with full information.

• Cagetti, Hansen, Sargent, and Williams (2002): richer environment with partial information, jumps.

• Analyze a stochastic growth model with unobserved growth rate. Filter the data to predict the state.

• Robustness adds an additional motive for precautionary saving, which can be offset with impatience.

• Robustness adds to the market price of risk: small model uncertainty helps solve the equity premium puzzle.

• Time variation in the MPR: highest going in and out of recessions. Robustness lowers P/E ratios, but can’t explain volatility.

The Benchmark Model

• B_t, t ≥ 0: d-dimensional standard Brownian motion

• Underlying probability space (Ω, F, P)

• F_t: completion of the filtration generated by B_t

• c = {c_t}: progressively measurable control process

• Objective:

sup_{c∈C} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ]

subject to:

dx_t = µ(c_t, x_t)dt + σ(c_t, x_t)dB_t     (1)

• x_0 given, C: set of admissible control processes
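As a concrete sketch, the benchmark problem can be simulated by Euler-Maruyama discretization. The primitives below are illustrative assumptions, not taken from the slides: linear drift µ(c, x) = 0.1x − c, proportional volatility σ(c, x) = 0.2x, log utility, and an arbitrary feedback rule c_t = 0.3 x_t.

```python
import numpy as np

rng = np.random.default_rng(0)

delta = 0.05                          # discount rate
mu = lambda c, x: 0.1 * x - c         # hypothetical drift mu(c, x)
sigma = lambda c, x: 0.2 * x          # hypothetical volatility sigma(c, x)
U = lambda c, x: np.log(c)            # period utility (log, illustrative)

def discounted_utility(T=50.0, dt=0.01, x0=1.0):
    """One sample path of dx = mu dt + sigma dB (Euler-Maruyama) and the
    discounted objective integral, truncated at horizon T."""
    n = int(T / dt)
    x, total = x0, 0.0
    for i in range(n):
        c = 0.3 * x                   # assumed feedback control rule
        total += np.exp(-delta * i * dt) * U(c, x) * dt
        x += mu(c, x) * dt + sigma(c, x) * np.sqrt(dt) * rng.standard_normal()
        x = max(x, 1e-6)              # keep the state positive for log utility
    return total

est = np.mean([discounted_utility() for _ in range(20)])
print(est)
```

Averaging over paths approximates the expectation in the objective for this fixed control rule; solving the sup over rules is the control problem itself.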

Distorted Evolution

• Ito distortion: replace B_t with

B^h_t = B_t + ∫_0^t h_s ds

where B_t is a Brownian motion.

• Distortions hidden within the Brownian motion.

• B induces the Wiener measure Q, and B^h induces a different measure Q^h on the canonical space of d-dimensional continuous functions.

• Alternative measures Q^h indexed by alternative (progressively measurable) specifications of h_t.

• State evolution:

dx_t = µ(c_t, x_t)dt + σ(c_t, x_t)(h_t dt + dB_t)

Absolute Continuity

• Analyze perturbations Q^h “close” to Q.

• Q is absolutely continuous with respect to P if for all A ∈ F, P(A) = 0 implies Q(A) = 0.

• Q is locally absolutely continuous with respect to P if Q_t is absolutely continuous with respect to P_t for all t ≥ 0.

• Local absolute continuity ensures the change of measure (density) q_t = dQ_t/dP_t exists:

E^Q g_t = E[g_t q_t] for F_t-measurable g_t

• Key result: Q^h is locally absolutely continuous with respect to Q if and only if

B^h_t = B_t + ∫_0^t h_s ds  where  ∫_0^t |h_s|^2 ds < ∞ w.p.1

Digression

• Suppose that the Q distribution is multivariate normal with mean zero and covariance I.

• Suppose that the Q̃ distribution is multivariate normal with mean µ and covariance I.

• The relative density is

q = dQ̃/dQ = exp( x'µ − (1/2)µ'µ )

• Under Q̃, the logarithm has mean (1/2)µ'µ:

E^{Q̃} log q = (1/2)µ'µ
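The mean-shift calculation above is easy to check by Monte Carlo; the particular vector µ below is an arbitrary illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.5, -0.3, 1.0])        # illustrative mean shift

# Draw from Q~ = N(mu, I) and evaluate the log relative density
# log q = x'mu - mu'mu/2 of Q~ against Q = N(0, I).
x = rng.standard_normal((200_000, 3)) + mu
log_q = x @ mu - 0.5 * mu @ mu

print(log_q.mean())        # Monte Carlo estimate of E log q under Q~
print(0.5 * mu @ mu)       # theoretical value mu'mu/2 = 0.67
```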

Brownian Motion and Log Likelihood Ratios

• Diffusion distortion: ht = ψt(Bh)

dBht = ψt(Bh)dt + dBt

• Relative likelihood given data B:

qht = exp

[∫ t

0

ψu(B) · dBu − 12

∫ t

0

|ψu(B)|2du

].

• The Ito depiction of log-likelihood evolution under model Qh:

d log qht = ht · dBt +

12|ht|2dt
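The Ito depiction implies that, under the distorted model, the log-likelihood ratio drifts upward at rate |h_t|²/2. A quick Monte Carlo check, assuming a constant distortion h (an illustrative choice, under which the stochastic integral collapses to a terminal-value formula):

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.4, -0.2])       # assumed constant drift distortion
T = 2.0
paths = 200_000

# Under the benchmark Q, B_T ~ N(0, T I); the distorted path has
# terminal value B^h_T = B_T + h T.
B_T = np.sqrt(T) * rng.standard_normal((paths, 2))
Bh_T = B_T + h * T

# For constant h the likelihood-ratio formula collapses to
# log q^h_T = h . B^h_T - |h|^2 T / 2, evaluated on the distorted data.
log_q = Bh_T @ h - 0.5 * (h @ h) * T

print(log_q.mean())             # approx |h|^2 T / 2 = 0.2
```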

Relative Entropy

• Use relative entropy to measure discrepancy:

R(Q||P) = E^Q log (dQ/dP)

• Here we also look at measures over time. Use:

R(Q) = δ ∫_0^∞ exp(−δt) E^Q(log q_t) dt = ∫_0^∞ exp(−δt) E^Q( |h_t|^2 / 2 ) dt

• R(Q) ≥ 0. R(Q) = 0 ⇒ Q = P (ht ≡ 0).

• R(Q) convex in Q

• R(Q) < ∞ implies local absolute continuity
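The equality of the two expressions for R(Q) can be checked numerically for an assumed constant distortion, where E^Q log q_t = |h|² t / 2 and both sides equal |h|²/(2δ).

```python
import numpy as np

delta, h2 = 0.05, 0.2            # discount rate and |h|^2 (illustrative)
dt = 0.001
t = np.arange(0.0, 400.0, dt)    # truncate the infinite horizon
w = np.exp(-delta * t) * dt      # discounted integration weights

# For constant h, E^Q log q_t = |h|^2 t / 2, so the two expressions for
# R(Q) should agree, both equal to |h|^2 / (2 delta) = 2.0.
lhs = delta * np.sum(w * 0.5 * h2 * t)
rhs = np.sum(w * 0.5 * h2)

print(lhs, rhs)
```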

Two Control Problems

• Recall the state evolution:

dx_t = µ(c_t, x_t)dt + σ(c_t, x_t)(h_t dt + dB_t)     (2)

• H is the set of h with R(Q) < ∞.

• A multiplier robust control problem is:

J(θ) = sup_{c∈C} inf_{h∈H} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ] + θ R(Q)

subject to (2).

• A constraint robust control problem is:

J*(η) = sup_{c∈C} inf_{h∈H} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ]

subject to (2) and R(Q) ≤ η.

• The multiplier problem can have value −∞, so restrict θ ≥ θ̲, a lower bound on admissible penalties.

Constraint Implies Multiplier

Lagrangian (objective linear, R convex in Q):

sup_{c∈C} inf_{h∈H} sup_{θ≥0} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ] + θ [R(Q) − η]

Lagrange Multiplier Theorem:

sup_{c∈C} sup_{θ≥θ̲} inf_{h∈H} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ] + θ [R(Q) − η]

= sup_{θ≥θ̲} sup_{c∈C} inf_{h∈H} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ] + θ [R(Q) − η]

Say the sup is attained by θ*:

sup_{c∈C} inf_{h∈H} E [ ∫_0^∞ exp(−δt) U(c_t, x_t) dt ] + θ* R(Q)

Multiplier (often) Implies Constraint

• The problem with the converse is that c varies with η.

• It works when max-min equals min-max:

max_{c∈C} min_{h∈H} E ∫_0^∞ exp(−δt) [ U(c_t, x_t) + θ |h_t|^2 / 2 ] dt

= min_{h∈H} max_{c∈C} E ∫_0^∞ exp(−δt) [ U(c_t, x_t) + θ |h_t|^2 / 2 ] dt

subject to

dx_t = µ(c_t, x_t)dt + σ(c_t, x_t)(h_t dt + dB_t)

• Easier to verify with recursive counterpart.

Recursivity of the Multiplier Problem

• Under an Isaacs Condition the Markov and commitment game solutions coincide. Fleming and Souganidis (1989).

δV(x, θ) = max_{c∈C} min_h  U(c, x) + θ|h|^2/2 + µ(c, x) · V_x(x, θ) + σ(c, x)h · V_x(x, θ) + (1/2) tr[σ(c, x)' V_xx(x, θ) σ(c, x)]

= min_h max_{c∈C}  U(c, x) + θ|h|^2/2 + µ(c, x) · V_x(x, θ) + σ(c, x)h · V_x(x, θ) + (1/2) tr[σ(c, x)' V_xx(x, θ) σ(c, x)]

• Recursive methods may be employed to solve the decision problem as a two-player, zero-sum Markov game.
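The inner minimization over h in the HJB equation is quadratic and can be solved in closed form; this sketch shows how it delivers the quadratic penalty term that appears in the risk-sensitive HJB equation.

```latex
% First-order condition of  min_h { theta|h|^2/2 + sigma(c,x)h . V_x }:
\begin{aligned}
\theta h + \sigma(c,x)'V_x(x,\theta) = 0
 \;\;\Longrightarrow\;\;
h^* &= -\tfrac{1}{\theta}\,\sigma(c,x)'V_x(x,\theta), \\[4pt]
\tfrac{\theta}{2}|h^*|^2 + \sigma(c,x)h^*\cdot V_x(x,\theta)
 &= \tfrac{1}{2\theta}V_x'\sigma\sigma'V_x - \tfrac{1}{\theta}V_x'\sigma\sigma'V_x
  = -\tfrac{1}{2\theta}V_x(x,\theta)'\sigma\sigma'V_x(x,\theta).
\end{aligned}
```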

A Bayesian Interpretation

Consider a commitment game in which h_t goes first. Given a process h_t, the process c_t is chosen optimally.

Use the Markov solution c_t = φ_c(x_t) and h_t = φ_h(x_t).

Standard control problem with discounted objective:

E ∫_0^∞ exp(−δt) U(c_t, x_t) dt

and evolution:

dx_t = µ(c_t, x_t)dt + σ(c_t, x_t)[φ_h(X_t)dt + dB_t]

dX_t = µ*(X_t)dt + σ*(X_t)dB_t

where

µ*(x) = µ[φ_c(x), x] + σ*(x)φ_h(x)

σ*(x) = σ[φ_c(x), x]

Recursive, Risk-sensitive Control

• Risk-Sensitive Control Problem: make an adjustment to the continuation value function to enhance the role of risk. In discrete time, the value function V_t solves:

V_t = U(c_t, x_t) + β ( −θ log E_t exp(−V_{t+1}/θ) )

• Continuous-time counterpart:

δV(x, θ) = max_{c∈C}  U(c, x) + µ(c, x) · V_x(x, θ) + (1/2) tr[σσ' V_xx(x, θ)] − (1/2θ) V_x(x, θ)' σσ' V_x(x, θ)

• The optimized value of the Multiplier Robust Control Problem gives the same HJB equation.
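The discrete-time recursion is easy to explore numerically. Below, the continuation-value distribution is an illustrative assumption; the example shows that the adjustment −θ log E exp(−V/θ) sits below EV (it penalizes risk) and approaches EV as θ grows. For a normal V it equals EV − Var(V)/(2θ) exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative continuation values: V_{t+1} ~ N(1, 0.5^2).
V = rng.normal(1.0, 0.5, 1_000_000)

def risk_adjust(V, theta):
    """Risk-sensitive certainty equivalent -theta * log E exp(-V / theta)."""
    return -theta * np.log(np.mean(np.exp(-V / theta)))

for theta in (1.0, 10.0, 100.0):
    print(theta, risk_adjust(V, theta))

# For normal V the exact value is E V - Var(V)/(2 theta); e.g. at theta = 1
# it is 1 - 0.125 = 0.875, and as theta -> infinity it tends to E V = 1.0.
```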

Recursivity of the Constraint Problem

• A new state r and new control g with evolution:

dr_t = ( δr_t − |h_t|^2/2 + g_t · h_t ) dt + g_t · dB_t

• The state r_t is the continuation value for relative entropy. The control g_t is chosen by the minimizing agent.
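As a check on this interpretation: with a constant distortion h and g ≡ 0, the evolution equation becomes a deterministic ODE whose bounded (stationary) solution is exactly the discounted relative entropy of a constant-h model.

```latex
dr_t = \Bigl(\delta r_t - \tfrac{|h|^2}{2}\Bigr)dt
\quad\Longrightarrow\quad
r = \frac{|h|^2}{2\delta}
  = \int_0^\infty e^{-\delta t}\,\frac{|h|^2}{2}\,dt .
```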

HJB Equation:

δV*(x, r) = max_{c∈C} min_{h,g}  U(c, x) + [µ(c, x) + σ(c, x)h] · V*_x(x, r) + ( δr − |h|^2/2 + h · g ) V*_r(x, r)

+ (1/2) tr( [ σ(c, x)'  g ] [ V*_xx(x, r)  V*_xr(x, r) ; V*_rx(x, r)  V*_rr(x, r) ] [ σ(c, x) ; g' ] )

where [ σ(c, x) ; g' ] stacks σ(c, x) above the row vector g'.

Observations

• The added state is not needed for computation. The solution is of the form r(x).

• The control laws for h and c coincide with those from the multiplier problem.

• The solved h gives a contribution to factor risk prices. Consider a security whose return has local mean µ_r and loading σ_r on the Brownian increment dB. Then

µ_r − µ_f = σ_r λ

where the factor price is λ = λ_m + λ_u and µ_f is the risk-free rate.

• λ_m comes from the standard consumption-based model.

• The model uncertainty price is λ_u = −h.

Preference Orders

To construct the preference orderings, we use an endogenous state vector s_t:

ds_t = µ_s(s_t, c_t)dt     (3)

where this differential equation can be solved uniquely for s_t given s_0 and the process {c_u : 0 ≤ u < t}.

We define preferences to be additively separable in (s_t, c_t). Given s_0, form:

D(c) = ∫_0^∞ exp(−δt) u(s_t, c_t) dt

for s_t that solves the differential equation (3).

Preference Orders (continued)

One preference ordering uses the valuation function:

W*(c; η) = inf_{R(Q)≤η} E^Q D(c)

Consider c and c* that satisfy c_t = φ_t(B) and c*_t = φ*_t(B).

Def. c* ≽_η c if W*(c*; η) ≥ W*(c; η).

The other ordering uses the valuation function:

W(c; θ) = inf_Q { E^Q D(c) + θ R(Q) }

Def. c* ≽_θ c if W(c*; θ) ≥ W(c; θ).

This multiplier preference ordering coincides with a recursive, risk-sensitive preference ordering provided that θ > 0.

Recursivity

• Both preferences can be made recursive.

• The multiplier preference ordering coincides with a recursive, risk-sensitive preference ordering.

• The constraint preference order is made recursive by introducing a continuation entropy state variable.

• This formulation is not allowed by Chen-Epstein (2001) or Epstein-Schneider (2004).

• Constraint and multiplier preferences differ, although optimal choices coincide for matched η*, θ*.

• Can show c* ≽_{η*} c implies c* ≽_{θ*} c.

Illustration of Preferences

[Figure: indifference curves for the constraint and multiplier preferences together with a budget line; consumption in state 1 on the horizontal axis, consumption in state 2 on the vertical axis.]

1 period, 2 states. Log utility, states equally likely under the approximating model.
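The two-state multiplier valuation in this illustration can be computed directly. A sketch, assuming log utility, equally likely states under the approximating model, an illustrative consumption bundle, and θ = 2:

```python
import numpy as np

theta = 2.0
c = np.array([1.5, 0.8])      # illustrative consumption in states 1 and 2
U = np.log(c)                 # log utility
pi = 0.5                      # state 1 probability under the approximating model

# Multiplier valuation: min over distorted p of  E_p U + theta * KL(p || pi),
# solved here by grid search over the probability of state 1.
p = np.linspace(1e-6, 1 - 1e-6, 100_001)
kl = p * np.log(p / pi) + (1 - p) * np.log((1 - p) / (1 - pi))
W_grid = (p * U[0] + (1 - p) * U[1] + theta * kl).min()

# Closed form: the worst case is an exponential tilt of pi, giving
# W = -theta * log E_pi exp(-U / theta).
W_exact = -theta * np.log(pi * np.exp(-U[0] / theta)
                          + (1 - pi) * np.exp(-U[1] / theta))

print(W_grid, W_exact)        # should agree
```

The worst-case prior shifts probability toward the low-consumption state, so the valuation lies below expected utility under the approximating model.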

Conclusion

• Robustness is a specification of uncertainty aversion. It has interpretations as risk sensitivity (depends on unstructured uncertainty, full information) and as a Bayesian decision problem.

• The multiplier formulation is the most parsimonious, easiest to formulate, and of interest in its own right (Wang, 2001).

• Absolute continuity of distributions limits attention to drift distortions, with a close connection to relative entropy.

• The control problems and supporting preference orders are recursive and time-consistent once one allows for state variables.

Quantitative Application

[Figure: Log Technology Shock Process, 1955-2000. Log of the technology level.]

[Figure: Estimated Probability of Being in the Low Growth State, 1955-2000.]

Capital Accumulation

[Figure: capital/technology ratio, 1955-2000, for varying discount rates (ρ) and robustness parameters (θ): θ=∞, ρ=0.04; θ=4, ρ=0.04; θ=4, ρ=0.058.]

Market Prices of Risk and Uncertainty

[Figure: prices of uncertainty and prices of risk in Games I and II, plotted against the probability of being in the low state.]
