
Evaluation of Educational Programs and Policies

Course Number and Section

Spring 2015 Online

Instructor: Marcy Davis, Ph.D.
Phone: 410-516-6796
Office Hours (T): 2:00 – 4:00 pm, by phone or by appointment
Email: [email protected]
Credit Hours: 3
Class Time: Online

Course Description:

This course provides an overview of key elements and topics related to program and policy evaluation and research. Students will become familiar with types of evaluation and their purposes, including their role in research and development and in program improvement. The course will also cover evaluability assessment, logic models and program theory, fidelity of implementation, causal inference, threats to validity, and experimental and quasi-experimental designs.

Course Objectives:

By the end of this course, you will be able to:

1. Assess the evaluability of your project to identify areas of strength and weakness in its evaluation.

2. Identify key stakeholders and stakeholder groups and understand their role in the evaluation of your project.

3. Align the theory of treatment with logic models to provide justification and support for your proposed intervention and its likely outcomes.

4. Specify a justifiable and defensible evaluation design to assess the process and implementation of your proposed intervention.

5. Specify a justifiable and defensible evaluation design to assess the outcomes of your proposed intervention.

6. Develop a plan for enrolling and retaining subjects in your project.

PROGRAM GOALS ADDRESSED IN THIS COURSE

1. Contribute to the public discourse on improvement of education.

2. Engage in and promote evidence-based practices through the application of rigorous methodology.

3. Link education research to policy and practice.

PROGRAM COMPETENCIES ADDRESSED IN THIS COURSE

1. Design a program evaluation plan to evaluate the effectiveness of the proposed solution to the POP

Required Textbooks:

Rossi, P., Lipsey, M., & Freeman, H. (2004). Evaluation: A systematic approach. Thousand Oaks, CA: Sage.

Wholey, J., Hatry, H., & Newcomer, K. (2010). Handbook of practical program evaluation. San Francisco, CA: Jossey-Bass.

Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Assignments

Evaluation Overview: This assignment requires you to present your POP and proposed intervention succinctly and clearly. You are asked to make a beginning, tentative plan for evaluating your project based on the content of this session. You are not expected to have a final, fully specified evaluation plan; rather, the assignment requires you to make critical assessments about the potential evaluability of your project given the current state of your work. This work is expected to highlight gaps, weaknesses, and potential pathways for further development during the remainder of the course.

Stakeholder Analysis: Based on your Stakeholder Identification and Analysis worksheet and the feedback received from your peers, provide a one (1) page narrative assessment of the most important stakeholder for the successful implementation of your POP intervention and a one (1) page narrative assessment of the most important stakeholder for the successful evaluation of your POP intervention. Your assessment should summarize the key points embodied in the worksheet and should provide tentative plans for engaging and retaining stakeholders during the implementation and evaluation of your POP intervention.

Integrative Narrative of Theory of Treatment and Logic Model: In this assignment you are asked to integrate the current logic model for your POP intervention (completed in RMII) with the underlying theory of treatment of your intervention (see Leviton & Lipsey, 2007) in a narrative summary.

1. Using Leviton and Lipsey (2007), specify the theory of treatment for your POP intervention. Develop a figural model of this theory of treatment and discuss it in narrative form.

2. Compare your theory of treatment as specified and discussed in (1) to the logic model you developed in RMII for your POP intervention. How is your theory of treatment realized in your logic model? To what extent are the two models aligned? Where are they not in alignment?

3. Discuss how you need to revise your logic model in light of the theory of treatment. If you believe that your logic model does not need revision, provide support and justification for this position.

Participant Recruitment Protocol: Institutional Review Boards (IRBs) require that human subjects research protocols specify how the study will recruit participants. These protocols require you to provide details about the target population of participants and to specify the protocol for recruiting and enrolling participants. The Recruitment and Participants template provides the key questions asked on IRB protocols that pertain to these issues. Complete the required questions as succinctly and clearly as possible. IRB protocols are data collection documents and do not require lengthy exposition; they only require direct and truthful answers to the questions. Complete the Participant Recruitment Protocol Recruitment and Participants worksheet and upload any recruitment scripts and informed consent forms needed given the particulars of your intervention/evaluation. The Investigator’s Manual (http://web.jhu.edu/Homewood-IRB/images/forms/InvestMan.pdf) provides detailed discussion of the ethical requirements for participant recruitment and should be reviewed before and as you complete this assignment.

Process Evaluation Plan: This assignment asks you to put together the beginnings of a plan to conduct a process evaluation of your POP project. The intent is not for you to have a complete, finalized plan. Rather, successful completion of this assignment should provide you with a roadmap for conducting a process evaluation of your POP project and should serve as a point of departure for discussion with your advisor/mentor as you work on your project. Your final product should read as a coherent narrative discussion, not a collection of narrativized bullet points. As always, follow proper APA formatting.

Outcome Evaluation Plan: This assignment asks you to put together the beginnings of a plan to conduct an outcome evaluation of your POP project. The intent is not for you to have a complete, finalized plan. Rather, successful completion of this assignment should provide you with a roadmap for conducting an outcome evaluation of your POP project and should serve as a point of departure for discussion with your advisor/mentor as you work on your project. Your final product should read as a coherent narrative discussion, not a collection of narrativized bullet points. As always, follow proper APA formatting.

Grading

Assignment                                                      Percent of Grade   Due Date
Discussion and Participation                                    15%                Ongoing
Evaluation Overview                                             5%                 Noon, 2/9/15
Stakeholder Analysis Assessment                                 10%                Noon, 3/2/15
Integrative Narrative of Theory of Treatment and Logic Model    10%                Noon, 3/2/15
Participant Recruitment Protocol                                10%                Noon, 3/30/15
Process Evaluation Plan                                         25%                Noon, 3/30/15
Outcome Evaluation Plan                                         25%                5/9/15
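To make the grading arithmetic concrete, the short sketch below shows how the component weights above combine into a weighted final percentage and how that percentage maps to a letter grade using the cutoffs in the grading scale that follows. It is purely illustrative; the function name and example scores are hypothetical, and the instructor's actual grade calculation governs.

```python
# Illustrative sketch only (not an official grading tool): combines the
# component weights from the grading table above into a final percentage and
# maps it onto the letter-grade cutoffs in the grading scale below.
# The example scores are hypothetical.

WEIGHTS = {
    "Discussion and Participation": 0.15,
    "Evaluation Overview": 0.05,
    "Stakeholder Analysis Assessment": 0.10,
    "Integrative Narrative of Theory of Treatment and Logic Model": 0.10,
    "Participant Recruitment Protocol": 0.10,
    "Process Evaluation Plan": 0.25,
    "Outcome Evaluation Plan": 0.25,
}

# (minimum percentage, letter grade), checked from highest cutoff to lowest.
SCALE = [(94, "A"), (90, "A-"), (87, "B+"), (84, "B"), (80, "B-"),
         (77, "C+"), (74, "C"), (70, "C-"), (0, "F")]


def final_grade(scores):
    """Return the weighted course percentage and letter grade from component scores (0-100)."""
    total = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    letter = next(grade for cutoff, grade in SCALE if total >= cutoff)
    return round(total, 1), letter


# Hypothetical example: a student scoring 90 on every component earns an A-.
example_scores = {name: 90.0 for name in WEIGHTS}
print(final_grade(example_scores))  # (90.0, 'A-')
```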

Grading Scale

A = 94-100%   A- = 90-93%   B+ = 87-89%   B = 84-86%   B- = 80-83%
C+ = 77-79%   C = 74-76%    C- = 70-73%   F = 0-69%

The grades of D+, D, and D- are not awarded at the graduate level.

Course Outline:

Readings denoted with a * are supplemental; they are recommended but not required.

Session 1 (1/26 to 2/8): Introduction to Evaluation and Evaluability Assessment

Readings:

Leviton, L. C., Khan, L. K., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31(1), 213–233.

Newcomer, K., Hatry, H., & Wholey, J. (2010). Planning and designing useful evaluations. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 1-29). San Francisco: Jossey-Bass.

Rossi, P., Lipsey, M., & Freeman, H. (2004). An overview of program evaluation. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 1-30). Thousand Oaks, CA: Sage.

Strosberg, M. A., & Wholey, J. S. (1983). Evaluability assessment: From theory to practice in the Department of Health and Human Services. Public Administration Review, 43(1), 66–71.

Wholey, J. (2010). Exploratory evaluation. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 81-99). San Francisco: Jossey-Bass.

Assignments Due: Evaluation Overview (Due: Noon following end of session)

Session 2 (2/9 to 2/22): Stakeholders, Logic Models, and Program Theory

Readings:

Stakeholders

Brandon, P. R., & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involvement in program evaluation. American Journal of Evaluation, 35(1), 26–44. doi:10.1177/1098214013503699

Bryson, J., & Patton, M. (2010). Analyzing and engaging stakeholders. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 30-54). San Francisco: Jossey-Bass.

Bryson, J. M. (2004). What to do when stakeholders matter. Public Management Review, 6(1), 21–53. doi:10.1080/14719030410001675722

Bryson, J. M., Ackermann, F., & Eden, C. (2007). Putting the resource-based view of strategy and distinctive competencies to work in public organizations. Public Administration Review, 67(4), 702–717. doi:10.1111/j.1540-6210.2007.00754.x

Logic Models

Cooksy, L. J., Gill, P., & Kelly, P. A. (2001). The program logic model as an integrative framework for a multimethod evaluation. Evaluation and Program Planning, 24(2), 119–128.

McLaughlin, J., & Jordan, G. (2010). Using logic models. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 55-80). San Francisco: Jossey-Bass.

Program Theory

Leviton, L. C., & Lipsey, M. W. (2007). A big chapter about small theories: Theory as method: Small theories of treatments. New Directions for Evaluation, 2007(114), 27–62. doi:10.1002/ev.224

Rossi, P., Lipsey, M., & Freeman, H. (2004). Expressing and assessing program theory. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 133-168). Thousand Oaks, CA: Sage.

Session 3 (2/23 to 3/1): Work Week

Assignments Due: Stakeholder Analysis Assessment (Due: Noon following end of session); Integrative Narrative of Theory of Treatment and Logic Model (Due: Noon following end of session)

Session 4 (3/2 to 3/22): Evaluating Process and Fidelity of Implementation

Readings:

Cook, S., Godiwalla, S., Brooks, K., Powers, C., & John, P. (2010). Recruitment and retention of study participants. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 182-207). San Francisco: Jossey-Bass.

Rossi, P., Lipsey, M., & Freeman, H. (2004). Assessing and monitoring program process. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 169-202). Thousand Oaks, CA: Sage.

Fidelity of Implementation

Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256. doi:10.1093/her/18.2.237

Holliday, L. R. (2014). Using logic model mapping to evaluate program fidelity. Studies in Educational Evaluation, 42, 109–117. doi:10.1016/j.stueduc.2014.04.001

O'Donnell, C. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K-12 curriculum intervention research. Review of Educational Research, 78(1), 33-84.

Nelson, M. C., Cordray, D. S., Hulleman, C. S., Darrow, C. L., & Sommer, E. C. (2012). A procedure for assessing intervention fidelity in experiments testing educational and behavioral interventions. The Journal of Behavioral Health Services & Research, 39(4), 374–396. doi:10.1007/s11414-012-9295-x

Saunders, R. P., Evans, M. H., & Joshi, P. (2005). Developing a process-evaluation plan for assessing health promotion program implementation: A how-to guide. Health Promotion Practice, 6(2), 134–147. doi:10.1177/1524839904273387

Schulte, A., Easton, J., & Parker, J. (2009). Advances in treatment integrity research: Multidisciplinary perspectives on the conceptualization, measurement, and enhancement of treatment integrity. School Psychology Review, 38(4), 460–475.

Session 5 (3/23 to 3/29): Work Week

Assignments Due: Participant Recruitment Protocol (Due: Noon following end of session); Process Evaluation Plan (Due: Noon following end of session)

Session 6 (3/30 to 4/12): Thinking about Outcomes: Causality, Inference, Threats to Validity, and Power Analysis

Readings:

Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177. doi:10.1111/j.1750-8606.2008.00061.x

Lipsey, M. (1998). Design sensitivity: Statistical power for applied experimental research. In L. Bickman & D. J. Rog (Eds.), Handbook of applied social research methods (pp. 39-68). Thousand Oaks, CA: Sage.

Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K., Cole, M. W., Roberts, M., Anthony, K. S., & Busick, M. D. (2012). Translating the statistical representation of the effects of education interventions into more readily interpretable forms (NCSER 2013-3000). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education. Available at http://ies.ed.gov/ncser/pubs/20133000/

Shadish, W., Cook, T., & Campbell, D. (2002). Experiments and generalized causal inference. In Experimental and quasi-experimental designs for generalized causal inference (pp. 1-32). Boston, MA: Houghton Mifflin.

Shadish, W., Cook, T., & Campbell, D. (2002). Construct validity and external validity. In Experimental and quasi-experimental designs for generalized causal inference (pp. 33-63). Boston, MA: Houghton Mifflin.

Shadish, W., Cook, T., & Campbell, D. (2002). Statistical conclusion validity and internal validity. In Experimental and quasi-experimental designs for generalized causal inference (pp. 33-63). Boston, MA: Houghton Mifflin.

Stuart, E. A. (2007). Estimating causal effects using school-level data sets. Educational Researcher, 36(4), 187–198. doi:10.3102/0013189X07303396

Session 7 (4/13 to 5/3): Outcome Evaluation Designs

Readings:

Henry, G. (2010). Comparison group designs. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 125-143). San Francisco, CA: Jossey-Bass.

Rossi, P., Lipsey, M., & Freeman, H. (2004). Measuring and monitoring program outcomes. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 203-232). Thousand Oaks, CA: Sage.

Shadish, W., Cook, T., & Campbell, D. (2002). Quasi-experimental designs that use both control groups and pretests. In Experimental and quasi-experimental designs for generalized causal inference (pp. 135-170). Boston, MA: Houghton Mifflin.

*Shadish, W., Cook, T., & Campbell, D. (2002). Randomized experiments: Rationale, designs, and conditions conducive to doing them. In Experimental and quasi-experimental designs for generalized causal inference (pp. 246-278). Boston, MA: Houghton Mifflin.

Torgerson, C., Torgerson, D., & Taylor, C. (2010). Randomized controlled trials and nonrandomized designs. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 144-162). San Francisco, CA: Jossey-Bass.

Session 8 (5/4 to 5/9): Work Week

Assignments Due: Outcome Evaluation Plan

Grading Policies

1. Grading will be based on successful and timely completion of assignments. Students should reference the syllabus for a schedule of assignment due dates. One letter grade will be deducted for a late assignment, and no assignment will be accepted more than one week past the due date unless the student makes prior arrangements with the instructor.

2. Incomplete (I) grades are highly discouraged and, as a general rule, will not be given without an unavoidable and compelling reason. If an instructor grants an Incomplete (I), a “contract for completion” of course assignments must be developed by the student and approved by the instructor. Once the contract is mutually agreed upon, it will be signed by both the student and the instructor. Passing the class requires completion of all course requirements.

3. If an Incomplete is granted, a grade of “F” will replace the “I” on the student's academic transcript within four weeks after the start of the following semester. Please consult the Academic Policy Manual for more information (http://education.jhu.edu/media/files/AcademicPolicyManual2011-12final.pdf).

Participation

Active engagement is an essential component of the learning process. Participation in online courses includes active reading and discussion within online forums and activities during the week in which the class is engaged with the same content. Students are expected to log into the course, monitor course discussions, and engage as appropriate for the course several times a session (a session typically lasts one week). It is unlikely that students can fully engage with the knowledge construction within the online context if they log in only once or twice a week (e.g., only on weekends). Please notify the instructor if you are not able to participate in a session at the designated time. See the Evaluation and Grading section of this syllabus for the weighting assigned to course participation when determining the course grade.

Academic Conduct


The School of Education defines academic misconduct as any intentional or unintentional act that provides an unfair or improper advantage beyond a student’s own work, intellect, or effort, including but not limited to cheating, fabrication, plagiarism, unapproved multiple submissions, or helping others engage in misconduct. This includes the misuse of electronic media, text, print, images, speeches, and ideas. Any act that violates the spirit of authorship or gives undue advantage is a violation. Students are responsible for understanding what constitutes academic misconduct. (Please refer to the School of Education’s Academic Catalog for the current academic year (http://www.students.education.jhu.edu/catalog/) for more information on the School’s policies and procedures relating to academic conduct; see Academic and Student Conduct Policies under the Academic Policies section.)

Please note that student work may be submitted to an online plagiarism detection tool at the discretion of the course instructor. If student work is deemed plagiarized, the course instructor shall follow the policy and procedures governing academic misconduct as laid out in the School of Education’s Academic Catalog.

Policy on Academic Integrity

The School of Education has adopted a policy regarding academic integrity that reads in part:

The University reserves the right to dismiss at any time a student whose academic standing or general conduct is considered unsatisfactory…School of Education students assume an obligation to conduct themselves in a manner appropriate to the Johns Hopkins University’s mission as an institution of higher education and with accepted standards of ethical and professional conduct. Students must demonstrate personal integrity and honesty at all times in completing classroom assignments and examinations, in carrying out their fieldwork or other applied learning activities, and in their interactions with others. Students are obligated to refrain from acts they know or, under the circumstances, have reason to know will impair their integrity or the integrity of the University. Violations of academic integrity and ethical conduct include, but are not limited to cheating, plagiarism, unapproved multiple submissions, knowingly furnishing false or incomplete information to any agent of the University for inclusion in academic records, violation of the rights of human and animal subjects in research, and falsification, forgery, alteration, destruction, or misuse of the University seal and official documents. (For further information on what constitutes cheating, plagiarism, etc., please see Appendix B, Fostering an Academic Community Based on Integrity. For violations related to non-academic conduct matters, see Policies Governing Student Conduct.)

(Johns Hopkins University School of Education, 2010)

For more information regarding the Johns Hopkins University School of Education’s academic policy, view the Johns Hopkins University School of Education Academic Policy Manual: Academic Year 2010-2011 at http://education.jhu.edu/bin/c/y/academicpolicymanual2010-11.pdf.

Plagiarism

It is important to distinguish between plagiarism and the legitimate presentation of the work of others through quotations or paraphrasing. The Publication Manual of the American Psychological Association (2010) gives the following guidance:

Plagiarism (Principle 1.10). Researchers do not claim the words and ideas of another as their own; they give credit where credit is due (APA Ethics Code Standard 8.11, Plagiarism). Quotation marks should be used to indicate the exact words of another. Each time you paraphrase another author (i.e., summarize a passage or rearrange the order of a sentence and change some of the words), you need to credit the source in the text. The following paragraph is an example of how one might appropriately paraphrase some of the foregoing material in this section:

As stated in the sixth edition of the Publication Manual of the American Psychological Association (APA, 2010), the ethical principles of scientific publication are designed to ensure the integrity of scientific knowledge and to protect the intellectual property rights of others. As the Publication Manual explains, authors are expected to correct the record if they discover errors in their publications; they are also expected to give credit to others for their work when it is quoted or paraphrased.

The key element of this principle is that an author does not present the work of another as if it were his or her own work. This can extend to ideas as well as written words. (p. 349)

You should review the rules for quoting and paraphrasing the work of others that are given in sections 3.34-3.41 of the sixth edition of the APA Publication Manual.

Religious Observance Accommodation Policy

Religious holidays are valid reasons to be excused from participating in an online course on a particular day or days during a session. Students who are not able to participate on a particular day typically do not need to inform the instructor unless a specific assignment is due on that day. Please make alternative arrangements to submit an assignment on another day during the session. It is expected that students will complete all work within every session of the course.

Statement of Academic Continuity

Please note that in the event of serious consequences arising from extreme weather conditions, communicable health problems, or other extraordinary circumstances, the School of Education may change the normal academic schedule and/or make appropriate changes to course structure, format, and delivery. In the event such changes become necessary, information will be posted on the School of Education web site.

Accommodations for Students with Disabilities

If you are a student with a documented disability who requires an academic adjustment, auxiliary aid, or other similar accommodations, please contact Jennifer Eddinger in the Disability Services Office at 410-516-9734 or via email at [email protected]. For more information on the School of Education’s disability services, please visit the disability services website (http://www.students.education.jhu.edu/disability/).

Statement of Diversity and Inclusion

Johns Hopkins University is a community committed to sharing values of diversity and inclusion in order to achieve and sustain excellence. We believe excellence is best promoted by being a diverse group of students, faculty, and staff who are committed to creating a climate of mutual respect that is supportive of one another’s success. Through its curricula and clinical experiences, the School of Education purposefully supports the University’s goal of diversity and, in particular, works toward an ultimate outcome of best serving the needs of all students in K-12 schools and/or the community. Faculty and candidates are expected to demonstrate a commitment to diversity as it relates to planning, instruction, management, and assessment.

IDEA Course Evaluation

Please remember to complete the IDEA course evaluation for this course. These evaluations are an important tool in the School of Education’s ongoing efforts to improve instructional quality and strengthen its programs. The results of the IDEA course evaluations are kept anonymous; your instructor will only receive aggregated data and comments for the entire class. An email with a link to the online course evaluation form will be sent to your JHU email address toward the end of the course. Thereafter, you will be sent periodic email reminders until you complete the evaluation. Please remember to activate your JHU email account and to check it regularly. (Please note that it is the School of Education’s policy to send all faculty, staff, and student email communications to a JHU email address, rather than to personal or alternative work email addresses.) If you are having difficulty accessing the course evaluations, have not received an email notification about the course evaluation, or have any questions about the IDEA course evaluation process, please contact Liesl McNeal (410-516-9759; [email protected] or [email protected]).

Bibliography

Brandon, P. R., & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involvement in program evaluation. American Journal of Evaluation, 35(1), 26–44. doi:10.1177/1098214013503699

Bryson, J., & Patton, M. (2010). Analyzing and engaging stakeholders. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 30-54). San Francisco: Jossey-Bass.

Bryson, J. M. (2004). What to do when stakeholders matter. Public Management Review, 6(1), 21–53. doi:10.1080/14719030410001675722

Bryson, J. M., Ackermann, F., & Eden, C. (2007). Putting the resource-based view of strategy and distinctive competencies to work in public organizations. Public Administration Review, 67(4), 702–717. doi:10.1111/j.1540-6210.2007.00754.x

Cook, S., Godiwalla, S., Brooks, K., Powers, C., & John, P. (2010). Recruitment and retention of study participants. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 182-207). San Francisco: Jossey-Bass.

Cooksy, L. J., Gill, P., & Kelly, P. A. (2001). The program logic model as an integrative framework for a multimethod evaluation. Evaluation and Program Planning, 24(2), 119–128.

Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256. doi:10.1093/her/18.2.237

Henry, G. (2010). Comparison group designs. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 125-143). San Francisco, CA: Jossey-Bass.

Hill, C. J., Bloom, H. S., Black, A. R., & Lipsey, M. W. (2008). Empirical benchmarks for interpreting effect sizes in research. Child Development Perspectives, 2(3), 172–177. doi:10.1111/j.1750-8606.2008.00061.x

Holliday, L. R. (2014). Using logic model mapping to evaluate program fidelity. Studies in Educational Evaluation, 42, 109–117. doi:10.1016/j.stueduc.2014.04.001

Leviton, L. C., Khan, L. K., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31(1), 213–233.


Leviton, L. C., & Lipsey, M. W. (2007). A big chapter about small theories: Theory as method: Small theories of treatments. New Directions for Evaluation, 2007(114), 27–62. doi:10.1002/ev.224

Lipsey, M. (1998). Design sensitivity: Statistical power for applied experimental research. In L. Bickman & D. J. Rog (Eds.), Handbook of applied social research methods (pp. 39-68). Thousand Oaks, CA: Sage.

Lipsey, M. W., Puzio, K., Yun, C., Hebert, M. A., Steinka-Fry, K., Cole, M. W., Roberts, M., Anthony, K. S., & Busick, M. D. (2012). Translating the statistical representation of the effects of education interventions into more readily interpretable forms (NCSER 2013-3000). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education. Available at http://ies.ed.gov/ncser/pubs/20133000/

McLaughlin, J., & Jordan, G. (2010). Using logic models. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 55-80). San Francisco: Jossey-Bass.

Nelson, M. C., Cordray, D. S., Hulleman, C. S., Darrow, C. L., & Sommer, E. C. (2012). A procedure for assessing intervention fidelity in experiments testing educational and behavioral interventions. The Journal of Behavioral Health Services & Research, 39(4), 374–396. doi:10.1007/s11414-012-9295-x

Newcomer, K., Hatry, H., & Wholey, J. (2010). Planning and designing useful evaluations. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 1-29). San Francisco: Jossey-Bass.

O'Donnell, C. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K-12 curriculum intervention research. Review of Educational Research, 78(1), 33-84.

Rossi, P., Lipsey, M., & Freeman, H. (2004). An overview of program evaluation. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 1-30). Thousand Oaks, CA: Sage.

Rossi, P., Lipsey, M., & Freeman, H. (2004). Expressing and assessing program theory. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 133-168). Thousand Oaks, CA: Sage.

Rossi, P., Lipsey, M., & Freeman, H. (2004). Assessing and monitoring program process. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 169-202). Thousand Oaks, CA: Sage.


Rossi, P., Lipsey, M., & Freeman, H. (2004). Measuring and monitoring program outcomes. In P. Rossi, M. Lipsey, & H. Freeman (Eds.), Evaluation: A systematic approach (pp. 203-232). Thousand Oaks, CA: Sage.

Saunders, R. P., Evans, M. H., & Joshi, P. (2005). Developing a process-evaluation plan for assessing health promotion program implementation: A how-to guide. Health Promotion Practice, 6(2), 134–147. doi:10.1177/1524839904273387

Schulte, A., Easton, J., & Parker, J. (2009). Advances in treatment integrity research: Multidisciplinary perspectives on the conceptualization, measurement, and enhancement of treatment integrity. School Psychology Review, 38(4), 460–475.

Shadish, W., Cook, T., & Campbell, D. (2002). Experiments and generalized causal inference. In Experimental and quasi-experimental designs for generalized causal inference (pp. 1-32). Boston, MA: Houghton Mifflin.

Shadish, W., Cook, T., & Campbell, D. (2002). Construct validity and external validity. In Experimental and quasi-experimental designs for generalized causal inference (pp. 33-63). Boston, MA: Houghton Mifflin.

Shadish, W., Cook, T., & Campbell, D. (2002). Statistical conclusion validity and internal validity. In Experimental and quasi-experimental designs for generalized causal inference (pp. 33-63). Boston, MA: Houghton Mifflin.

Shadish, W., Cook, T., & Campbell, D. (2002). Quasi-experimental designs that use both control groups and pretests. In Experimental and quasi-experimental designs for generalized causal inference (pp. 135-170). Boston, MA: Houghton Mifflin.

Shadish, W., Cook, T., & Campbell, D. (2002). Randomized experiments: Rationale, designs, and conditions conducive to doing them. In Experimental and quasi-experimental designs for generalized causal inference (pp. 246-278). Boston, MA: Houghton Mifflin.

Strosberg, M. A., & Wholey, J. S. (1983). Evaluability assessment: From theory to practice in the Department of Health and Human Services. Public Administration Review, 43(1), 66–71.

Stuart, E. A. (2007). Estimating causal effects using school-level data sets. Educational Researcher, 36(4), 187–198. doi:10.3102/0013189X07303396

Torgerson, C., Torgerson, D., & Taylor, C. (2010). Randomized controlled trials and nonrandomized designs. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 144-162). San Francisco, CA: Jossey-Bass.

Wholey, J. (2010). Exploratory evaluation. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (pp. 81-99). San Francisco: Jossey-Bass.