Recent Developments in the Econometrics of Program Evaluation


Journal of Economic Literature 2009, 47:1, 5–86. http://www.aeaweb.org/articles.php?doi=10.1257/jel.47.1.5

Recent Developments in the Econometrics of Program Evaluation

Guido W. Imbens and Jeffrey M. Wooldridge*

Many empirical questions in economics and other social sciences depend on causal effects of programs or policies. In the last two decades, much research has been done on the econometric and statistical analysis of such causal effects. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics, including labor economics, public finance, development economics, industrial organization, and other areas of empirical microeconomics. In this review, we discuss some of the recent developments. We focus primarily on practical issues for empirical researchers, as well as provide a historical overview of the area and give references to more technical research.

* Imbens: Harvard University and NBER. Wooldridge: Michigan State University. Financial support for this research was generously provided through NSF grants SES 0136789, 0452590 and 08. We are grateful for comments by Esther Duflo, Caroline Hoxby, Roger Gordon, Jonathan Beauchamp, Larry Katz, Eduardo Morales, and two anonymous referees.

1. Introduction

Many empirical questions in economics and other social sciences depend on causal effects of programs or policies. In the last two decades, much research has been done on the econometric and statistical analysis of such causal effects. This recent theoretical literature has built on, and combined features of, earlier work in both the statistics and econometrics literatures. It has by now reached a level of maturity that makes it an important tool in many areas of empirical research in economics and suitable for a review. In this article, we attempt to present such a review. We will focus on practical issues for empirical researchers, as well as provide a historical overview of the area and give references to more technical research. This review complements and extends other reviews and discussions, including those by Richard Blundell and Monica Costa Dias (2002), Guido W. Imbens (2004), and Joshua D. Angrist and Alan B. Krueger (1999), and the books by Paul R. Rosenbaum (1995), Judea Pearl (2000), Myoung-Jae Lee (2005a), Donald B. Rubin (2006), Marco Caliendo (2006), Angrist and Jörn-Steffen Pischke (2009), Howard S. Bloom (2005), Stephen L. Morgan and Christopher Winship (2007), Jeffrey M. Wooldridge (2002), and Imbens and Rubin (forthcoming). In addition, the reviews in James J. Heckman, Robert J. LaLonde, and Jeffrey A. Smith (1999), Heckman and Edward Vytlacil (2007a, 2007b), and Jaap H. Abbring and Heckman (2007) provide an excellent overview of the important theoretical work by Heckman and his coauthors in this area.

The central problem studied in this literature is that of evaluating the effect of the exposure of a set of units to a program, or treatment, on some outcome. In economic studies, the units are typically economic agents such as individuals, households, markets, firms, counties, states, or countries but, in other disciplines where evaluation methods are used, the units can be animals, plots of land, or physical objects. The treatments can be job search assistance programs, educational programs, vouchers, laws or regulations, medical drugs, environmental exposure, or technologies. A critical feature is that, in principle, each unit can be exposed to multiple levels of the treatment. Moreover, this literature is focused on settings with observations on units exposed, and not exposed, to the treatment, with the evaluation based on comparisons of units exposed and not exposed.1 For example, an individual may enroll or not in a training program, or he or she may receive or not receive a voucher, or be subject to a particular regulation or not. The object of interest is a comparison of the two outcomes for the same unit when exposed, and when not exposed, to the treatment. The problem is that we can at most observe one of these outcomes because the unit can be exposed to only one level of the treatment. Paul W. Holland (1986) refers to this as the fundamental problem of causal inference. In order to evaluate the effect of the treatment, we therefore always need to compare distinct units receiving the different levels of the treatment. Such a comparison can involve different physical units or the same physical unit at different times.

1 As opposed to studies where the causal effect of fundamentally new programs is predicted through direct identification of preferences and production functions.
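The fundamental problem can be stated compactly in the potential outcomes notation developed formally later in this review. As a sketch, for a binary treatment, let $W_i \in \{0, 1\}$ denote the treatment received by unit $i$, and let $Y_i(0)$ and $Y_i(1)$ denote the two potential outcomes. The observed outcome is then

\[
Y_i \;=\; Y_i(W_i) \;=\;
\begin{cases}
Y_i(0) & \text{if } W_i = 0, \\
Y_i(1) & \text{if } W_i = 1,
\end{cases}
\]

so the unit-level causal effect $Y_i(1) - Y_i(0)$ is never observed directly: only the potential outcome corresponding to the treatment actually received is recorded, and the other must be inferred from distinct units.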

The problem of evaluating the effect of a binary treatment or program is a well-studied problem with a long history in both econometrics and statistics. This is true both in the theoretical literature as well as in the more applied literature. The econometric literature goes back to early work by Orley Ashenfelter (1978) and subsequent work by Ashenfelter and David Card (1985), Heckman and Richard Robb (1985), LaLonde (1986), Thomas Fraker and Rebecca Maynard (1987), Card and Daniel G. Sullivan (1988), and Charles F. Manski (1990). Motivated primarily by applications to the evaluation of labor market programs in observational settings, the focus in the econometric literature is traditionally on endogeneity, or self-selection, issues. Individuals who choose to enroll in a training program are by definition different from those who choose not to enroll. These differences, if they influence the response, may invalidate causal comparisons of outcomes by treatment status, possibly even after adjusting for observed covariates. Consequently, many of the initial theoretical studies focused on the use of traditional econometric methods for dealing with endogeneity, such as fixed effect methods from panel data analyses, and instrumental variables methods. Subsequently, the econometrics literature has combined insights from the semiparametric literature to develop new estimators for a variety of settings, requiring fewer functional form and homogeneity assumptions.

The statistics literature starts from a different perspective. This literature originates in the analysis of randomized experiments by Ronald A. Fisher (1935) and Jerzy Splawa-Neyman (1990). From the early 1970s, Rubin (1973a, 1973b, 1974, 1977, 1978), in a series of papers, formulated the now dominant approach to the analysis of causal effects in observational studies. Rubin proposed the interpretation of causal statements as comparisons of so-called potential outcomes: pairs of outcomes defined for the same unit given different levels of exposure to the treatment, with the researcher only observing the potential outcome corresponding to the level of the treatment received. Models are developed for the pair of potential outcomes rather than solely for the observed outcome. Rubin's formulation of the evaluation problem, or the problem of causal inference, labeled the Rubin Causal Model (RCM) by Holland (1986), is by now standard in both the statistics and econometrics literature.

One of the attractions of the potential outcomes setup is that from the outset it allows for general heterogeneity in the effects of the treatment. Such heterogeneity is important in practice, and it is important theoretically as it is often the motivation for the endogeneity problems that concern economists. An additional advantage of the potential outcomes setup is that the parameters of interest can be defined, and the assumptions stated, without reference to particular statistical models.

Of particular importance in Rubin's approach is the relationship between treatment assignment and the potential outcomes. The simplest case for analysis is when assignment to treatment is randomized and, thus, independent of covariates as well as the potential outcomes. In such classical randomized experiments, it is straightforward to obtain estimators for the average effect of the treatment with attractive properties under repeated sampling, e.g., the difference in means by treatment status.
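To illustrate, in a completely randomized experiment with $N_1$ treated and $N_0$ control units, the difference-in-means estimator and its classical (Neyman) variance estimator take the form, in the notation sketched above,

\[
\hat{\tau} \;=\; \overline{Y}_1 - \overline{Y}_0
\;=\; \frac{1}{N_1} \sum_{i:\, W_i = 1} Y_i \;-\; \frac{1}{N_0} \sum_{i:\, W_i = 0} Y_i,
\qquad
\widehat{\mathbb{V}}(\hat{\tau}) \;=\; \frac{s_1^2}{N_1} + \frac{s_0^2}{N_0},
\]

where $s_w^2$ is the sample variance of the outcome among units with $W_i = w$. Randomization ensures that $\hat{\tau}$ is unbiased for the average treatment effect $\mathbb{E}[Y_i(1) - Y_i(0)]$, without functional form or homogeneity assumptions.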


Randomized experiments have been used in some areas in economics. In the 1970s, negative income tax experiments received widespread attention. In the late 1980s, following an influential paper by LaLonde (1986) that concluded econometric methods were unable to replicate experimental results, more emphasis was put on experimental evaluations of labor market programs, although more recently this emphasis seems to have weakened a bit. In the last couple of years, some of the most interesting experiments have been conducted in development economics (e.g., Edward Miguel and Michael Kremer 2004; Esther Duflo 2001; Angrist, Eric Bettinger, and Kremer 2006; Abhijit V. Banerjee et al. 2007) and behavioral economics (e.g., Marianne Bertrand and Sendhil Mullainathan 2004). Nevertheless, experimental evaluations remain relatively rare in economics.

More common is the case where economists analyze data from observational studies. Observational data generally create challenges in estimating causal effects but, in one important special case, variously referred to as unconfoundedness, exogeneity, ignorability, or selection on observables, questions regarding identification and estimation of the policy effects are fairly well understood. All these labels refer to some form of the assumption that adjusting treatment and control groups for differences in observed covariates, or pretreatment variables, removes all biases in comparisons between treated and control units. This case is of great practical relevance.
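One common formal statement of this assumption, in the notation used above with $X_i$ a vector of observed covariates (anticipating the precise definitions given later in the review), is

\[
W_i \;\perp\; \bigl( Y_i(0),\, Y_i(1) \bigr) \;\bigm|\; X_i,
\]

typically paired with an overlap condition, $0 < \Pr(W_i = 1 \mid X_i = x) < 1$ for all $x$. Together these imply that the average treatment effect is identified by comparing treated and control units with the same covariate values and averaging over the covariate distribution:

\[
\mathbb{E}\bigl[ Y_i(1) - Y_i(0) \bigr]
\;=\; \mathbb{E}\Bigl[ \, \mathbb{E}[\, Y_i \mid W_i = 1, X_i \,] - \mathbb{E}[\, Y_i \mid W_i = 0, X_i \,] \Bigr].
\]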