EDITORS’ NOTES

Decades ago, Suchman (1967) encouraged evaluators to apply Campbell and Stanley's (1963) writings on experiments, quasi-experiments, and validity to evaluation. Since that time, the Campbellian validity typology, as presented in Campbell and Stanley (1963), Cook and Campbell (1979), and Shadish, Cook, and Campbell (2002), has been prominent in much of the theory and practice of outcome evaluation. Despite its influence, the Campbellian validity typology and its associated methods have been criticized, sometimes generating heated debates on the typology's strengths and weaknesses for evaluation. For some readers such debates might form part of this issue's subtext; for others the issue should still be of interest, even to evaluators new to the field and unfamiliar with such debates. Validity frameworks are important. They can inform thinking about evaluation, guide evaluation practice, and facilitate future development of evaluation theory and methods.

This issue had its origins in a panel at the 2008 conference of the American Evaluation Association. Led by Huey T. Chen, the session focused on theory and practice as related to external validity in evaluation. The session was motivated in part by the sense that new directions, and perhaps increased attention to some old directions, are needed to reach meaningful conclusions about evaluation generalizability. But session presenters also addressed validity forms beyond external validity. In addition, as planning shifted from the conference session to this issue, newly added contributors planned to address issues other than external validity. As a result, after considering alternative framings, the issue evolved to its present theme: validity in the context of outcome evaluation.

The primary focus of most of the chapters is not on Campbell and colleagues' validity typology per se, but rather on its application in the context of outcome evaluation. According to the Program Evaluation Standards (Joint Committee on Standards for Educational Evaluation, 1994), four attributes are essential for evaluation practice: utility, feasibility, propriety, and accuracy. The Campbellian typology offers clear strengths in addressing accuracy. However, it is less suited to addressing issues of utility, propriety, and feasibility. Perhaps a worthwhile direction for developing a comprehensive validity perspective for evaluation is to build on the Campbellian typology in ways that better address issues related to all four attributes. This issue of New Directions for Evaluation is organized and developed in this spirit.

NEW DIRECTIONS FOR EVALUATION, no. 130, Summer 2011 © Wiley Periodicals, Inc., and the American Evaluation Association. Published online in Wiley Online Library (wileyonlinelibrary.com) • DOI: 10.1002/ev.360

Disclaimer: The findings and conclusions of this article are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention (CDC).


In general, we take the stance that we can further advance validity in outcome evaluation by revising or expanding the Campbellian typology. Chapter authors present multiple views on how to build on the Campbellian typology's contribution and suggest alternative validity frameworks or models to serve program evaluation better. We hope that these new perspectives will advance theory and practice regarding validity in evaluation as well as improve the quality and usefulness of outcome evaluations.

Chapter authors propose the following strategies for developing a new perspective on validity typology to advance validity in program evaluation.

Enhance External Validity

John Gargani and Stewart I. Donaldson, then Melvin M. Mark, focus on external validity. Gargani and Donaldson discuss limits of the Campbellian tradition regarding external validity. They argue that the external validity of an evaluation could be enhanced by better addressing questions of what works for whom, where, why, and when. Mark reviews several alternative framings of generalizability issues. Drawing on these alternatives, he provides potentially fruitful directions for enhancing external validity.

Enhance Precision by Reclassifying the Campbellian Typology

The chapters by Charles S. Reichardt and George Julnes offer conceptual revisions of the Campbellian typology. Reichardt describes what he sees as flaws in the four types of validity in Shadish et al. (2002). He also offers his own typology, which comprises four criteria: validity, precision, generalizability, and completeness. Julnes proposes a validity framework with three dimensions: representation (construct validity), causal inference (internal and external validity), and valuation. He argues for the conceptual and pragmatic merits of this framework.

Expand the Scope of the Typology

Ernest R. House discusses the Campbellian typology's limitations in dealing with the ethical challenges that evaluation increasingly faces. He notes an alarming phenomenon, visible in medical evaluations but increasingly worrisome in other areas of evaluation practice, whereby evaluation results become biased because of researchers' intentional and unintentional manipulation. House discusses strategies for dealing with this ethical problem, including how these ethics-related problems might be incorporated within the Campbellian validity tradition.

Jennifer C. Greene is one of the few contributors to this issue who is not affiliated with the Campbellian tradition. She provides a naturalistic viewpoint in examining limits of the Campbellian typology. She discusses different validity concepts and offers strategies for strengthening validity that are not primarily associated with the Campbellian tradition. At the same time, her comments are congenial to advances within the framework provided by Campbell and colleagues.

Huey T. Chen and Paul Garbe argue that outcome evaluation should address system-integration issues that go beyond the scope of goal attainment. The Campbellian typology's strength is goal-attainment assessment. To address both goal-attainment and system-integration issues, these authors propose a validity model with three categories: viable, effectual, and transferable. With this expanded typology, they propose a bottom-up approach that uses quantitative and qualitative methods to strengthen validity in an evaluation.

William R. Shadish, a collaborator of Campbell's who played a key role in expanding the Campbellian typology (Shadish et al., 2002), offers his perspective on the contributions of this issue. Other chapters in the issue discuss various aspects of the Campbellian typology, with the authors representing varying degrees of closeness to or distance from the tradition. Shadish speaks as an involved and interested representative of this tradition, which he upholds with vigor, thus providing balance to the perspectives in the issue. He clarifies and defends the work of Campbell and his colleagues, offers themes related to the issue topic, and comments on the rest of the chapters.

Shadish takes exception to many of the arguments in the other chapters, countering our view that the typology must be revised or expanded to serve program evaluation better. Our hope is that the interplay among the ideas in all of the chapters will provide readers with multiple viewpoints as well as stimulate future development in this important area. Don Campbell advocated a "disputatious community of scholars" to create self-correcting processes. He appended critiques of his papers by others to his own reprints. In this spirit, we include Shadish's comments and hope they will contribute to evaluators' thinking and practice regarding validity and outcome evaluation.

References

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171–246). Chicago, IL: Rand McNally. Also published as Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally. Since reprinted as Campbell, D. T., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin/Wadsworth.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago, IL: Rand McNally.

Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards (2nd ed.). Thousand Oaks, CA: Sage.


Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Suchman, E. A. (1967). Evaluation research. New York, NY: Russell Sage Foundation.

Huey T. Chen
Stewart I. Donaldson
Melvin M. Mark
Editors

HUEY T. CHEN is a senior evaluation scientist in the Air Pollution and Respiratory Health Branch at the Centers for Disease Control and Prevention (CDC).

STEWART I. DONALDSON is dean and professor of psychology at Claremont Graduate University.

MELVIN M. MARK is professor and head of psychology at Penn State University.

ADVANCING VALIDITY IN OUTCOME EVALUATION
