Chapter 11 POLICY EVALUATION: DETERMINING IF POLICY WORKS




Page 1

Chapter 11
POLICY EVALUATION: DETERMINING IF POLICY WORKS

Page 2

Focus Questions
• Why should education leaders be knowledgeable about policy evaluations?
• How can one tell if a proposed or completed evaluation is of high quality?
• Why are evaluations always political?
• How can a leader facilitate the evaluation process?

Page 3

A NERVOUS-MAKING TOPIC

• Evaluation is an integral part of the professional lives of all educators. Teachers regularly evaluate students; principals regularly evaluate teachers; and, increasingly, administrators themselves are subjected to regular evaluation.

• Broader forms of institutional evaluation are also common. Accrediting teams visit schools and districts, observing, interviewing, and collecting data.

Page 4

• Many states have established indicator systems in order to gather information from school districts in dozens of performance areas, analyze and compare these figures, and then issue official reports on their findings.

• In some cases state departments of education (SDEs) even use evaluation results to categorize districts, perhaps labeling them excellent, effective, or deficient and attaching rewards and sanctions to those labels.

• An age of accountability such as our own is inevitably an age of evaluation.

• Not surprisingly, then, policies are often evaluated, too. In an ideal world, not only would all policies be thoroughly and fairly evaluated, but policy makers would also act on evaluation findings, modifying or terminating policies on that basis.

Page 5

• Of course, our world is far from ideal. As a result, many policies are never evaluated at all; some are evaluated poorly; and, although some others are evaluated carefully, no one ever acts on the findings.

• But sometimes, of course, the final stage of the policy process unfolds as it should: after an appropriate length of time, the new policy is carefully evaluated and then either maintained as it is, changed, or terminated.

• Like every other stage of the policy process, evaluation is difficult, in large part because it is political; and the major reason it is political is that it threatens people.

Page 6

DEFINITIONS ASSOCIATED WITH POLICY EVALUATION

• This section defines four key terms: (1) evaluation, (2) project, (3) program, and (4) stakeholder.

• Evaluation is "the systematic investigation of the worth or merit of an object" . A policy evaluation is a type of applied research in which the practices and rigorous standards of all research are used in a specific setting for a practical purpose: determining to what extent a policy is reaching its goals.

• Policies are often first put into effect through projects, which are "educational activities that are provided for a defined period of time."

Page 7

• When projects are institutionalized, they become programs: "educational activities that are provided on a continuing basis."

• Thus, most districts have an ongoing reading program or professional development program that reflects their basic policies in these areas. These programs are often based on earlier projects.

• In any evaluation, a number of stakeholders exist: "individuals or groups that may be involved in or affected by a program evaluation."

• In a school system, major stakeholders generally include teachers, administrators, support staff, students, parents, the teachers' union and other unions, school board members, and any important interest groups.

Page 8

The Professionalization of Evaluation
• About 1973, evaluation began to mature as a field and to emerge as a distinctive profession within education. One sign that the discipline was coming of age was the establishment of several professional journals, including Educational Evaluation and Policy Analysis and Evaluation Review.

• Another was the publication of numerous books, including textbooks, on evaluation. Many universities began to offer courses in evaluation, and a few even established graduate concentrations in it.

• Moreover, the federal government and several major universities established centers to conduct research and development in policy evaluation.

Page 9

CHARACTERISTICS OF POLICY EVALUATIONS

The Evaluation Process
• Whether a large national organization evaluates a policy in fifty states or a building principal evaluates a program in a single school does not matter: all policy evaluations follow the same general procedures.

Page 10

• 1. Determine the goals of the policy
• 2. Select indicators
• 3. Select or develop data-collection instruments
• 4. Collect data
• 5. Analyze and summarize data
• 6. Write the evaluation report
• 7. Respond to evaluators' recommendations

Figure 11.1 Basic steps in the policy evaluation process.

Page 11

• The first step is to determine as precisely as possible the goals or objectives of the policy. After all, it is legitimate to evaluate a policy only in relation to the goals that it is supposed to achieve.

• For example, if the objectives of a district's new award system are to improve student attendance and morale, then evaluators should assess only the extent to which those goals have been attained.

• The impact of the policy on other aspects of school activity, such as test scores, is irrelevant.

Page 12

• Next, evaluators must select indicators, which are measurements or signs that a goal has been reached.

• In evaluating the award system, for example, comparative attendance figures for five years and scores on a school climate survey administered at the beginning of each year of implementation might be selected as indicators.

• The third step in conducting an evaluation is to select or develop data-collection instruments. In assessing the award system, evaluators might develop a form for recording data from district attendance records for the last five years.

• They might also select a commercially available survey of school climate to administer to all students.

Page 13

• After the data have been collected, they must be analyzed. Numerical data (such as test scores) are usually analyzed statistically; means, ranges, and frequencies are calculated and used to determine if any differences are statistically significant.

• Such findings are often summarized in the form of graphs, trend lines, and tables. Verbal data (such as interview transcripts) are usually analyzed by identifying recurrent themes in the entire set of transcripts and coding them so that the occurrences of each theme can be located.

• Computer programs are available for analyzing both numerical and verbal data.
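
• To make the descriptive step concrete, the sketch below summarizes a small, invented set of attendance figures with the means, ranges, and frequencies mentioned above. It uses only Python's standard library, and every number in it is hypothetical rather than drawn from the chapter.

```python
# Minimal sketch of the descriptive step: summarizing hypothetical attendance
# data with a mean, a range, and a frequency count, as described above.
# All figures are invented for illustration; no real district data is implied.
from statistics import mean
from collections import Counter

# Hypothetical average daily attendance rates (percent) for five school years
attendance = {2019: 90.8, 2020: 91.2, 2021: 91.5, 2022: 92.6, 2023: 93.1}

rates = list(attendance.values())
print(f"Mean attendance rate: {mean(rates):.1f}%")
print(f"Range: {max(rates) - min(rates):.1f} percentage points")

# Frequency table: how many years fall into each whole-percent band
bands = Counter(int(rate) for rate in rates)
for band, count in sorted(bands.items()):
    print(f"{band}-{band + 1}%: {count} year(s)")
```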

Page 14

• After the data have been carefully analyzed, the evaluators write a report presenting their findings and making recommendations based upon them.

• For instance, an evaluation of the awards system discussed throughout this section might reveal that attendance has improved since the awards policy was implemented but that student morale has not.

• The evaluators might, then, recommend that the awards be continued but that another approach be tried to raise morale.

• After district leaders receive the report, they should use it to either modify the policy or terminate it.

Page 15

Criteria for Judging Evaluations
• Although evaluations are designed to judge the effectiveness of policies, they can be evaluated themselves. In fact, education leaders need to know how to assess the quality of a proposed or completed evaluation.

• If they have contracted for an evaluation to be conducted, they need to know if the recommended research design is sound.

• If they are using a completed evaluation, they need to be able to determine how seriously to take it.

Page 16

• There are four categories, which serve as general criteria: (1) usefulness, (2) feasibility, (3) propriety, and (4) accuracy.

• Usefulness. Interim and final evaluation reports should be useful. They should be written in clear language that the stakeholders can understand and should avoid overly technical words and incomprehensible statistics. If necessary, different versions should be prepared, expressed in the various languages of the stakeholders.

• All reports for all audiences should include adequate information about the policy under study and the general design of the evaluation. The major findings should be clearly related to practical situations, and all recommendations should be specific rather than general.

• Finally, reports should not be excessively long and should include an executive summary at the beginning.

Page 17

• Feasibility. The second criterion is feasibility: the evaluation must be doable without imposing unreasonable strains on the school or school district. An important aspect of feasibility is practicality.

• The evaluation should be designed so that it can be completed within the required time frame and implemented without unduly disturbing the professional responsibilities of the educators who are involved. An evaluation that requires that learning come to a halt for a week or two so that interviews can be conducted or tests administered is not practical.

• Another dimension of feasibility is political. As suggested at the beginning of this chapter, evaluations are "nervous making"; nervous people are more likely to try to block portions of the study in order to protect themselves or their jobs. This means that evaluations are always political. A good evaluator does not ignore this fact but rather plans the evaluation with it in mind.

Page 18

• One way to minimize the political controversy that often erupts around an evaluation is for the evaluators to meet with all the interest groups and stakeholders during the planning process to discuss their concerns and hear their suggestions.

• During the evaluation, leaders must also avoid favoring, or appearing to favor, one group over another in interviewing, observing, collecting data, and so on. Moreover, evaluators with good political skills are careful to maintain communication with all stakeholders throughout the process so that no one has any unpleasant surprises when the final report is issued.

• Finally, a feasible evaluation is financially responsible. It has a definite budget, which is sufficient to carry out the study but not exorbitant in relation to the environment.

Page 19

• Propriety. Issues of propriety are legal and ethical; a good evaluation must conform to accepted norms for research.

• First, evaluators selected should not be in a conflict of interest situation regarding the study. They should not have a personal, professional, or financial interest in the outcome of the evaluation, nor should they be close friends of any of the people who wish to commission the evaluation.

• Next, a written contract should be signed between the evaluator(s) and the organization commissioning the study, spelling out the purposes of the research, what it is to entail, and when it is to be completed.

Page 20

• Evaluators should respect the rights of human subjects as they conduct their work, maintain appropriate confidentiality, and inform each participant of the purposes of the research.

• Moreover, they should be careful to treat everyone courteously and to solicit a range of opinions, not allowing themselves to be blinded by individuals' status in the organization, race, gender, or age.

• The final report should fully disclose the findings of the evaluation, even if the findings include the discovery of financial fraud or other questionable behavior.

Page 21

• Accuracy. Finally, an evaluation should be accurate. In order to produce an accurate report, evaluators must first study the context in which the policy has been implemented, familiarizing themselves with its cultural and socioeconomic characteristics.

• In their report, they should provide enough detail about their sources of information that readers can determine the value of their information and thus of the conclusions based upon it. Data collection should be systematic rather than haphazard, drawing on a range of sources.

• In the report, evaluators should explain exactly how they reached their conclusions and specify the data upon which each conclusion is based.

Page 22

Purposes of Evaluations

• Summative Evaluation. Policy evaluations can serve at least four purposes. Many are summative; they assess the quality of a policy that has been in force for some time, especially when it has reached a critical juncture in its history.

• Alternatively, negative findings might mean that the policy is re-funded, but only on the condition that major changes be made in it. The primary purpose of summative evaluation, then, is to hold the implementers of a policy accountable, which is why the stakes are high when summative evaluations are conducted and why external evaluators are normally used.

Page 23

• Formative Evaluation. The purpose of a formative policy evaluation is to enable the implementers of a policy to make necessary changes throughout its life in order to improve it. As a result, formative evaluation is an ongoing, recurrent process. Although the evaluators write formal reports only at predetermined intervals, data are usually collected regularly.

• Because a formative evaluation is designed to help implementers make good decisions about what they are doing, it is not as threatening as a summative evaluation. And, because the stakes are not as high, sometimes internal evaluators are used.

Page 24

• Pseudo-evaluations. Unfortunately, policy evaluations are not always conducted in good faith. Often, what appears on the surface to be a bona fide evaluation is actually what has been called a pseudo-evaluation. Pseudo-evaluations are of two types.

• The first is "the politically controlled study" . Such an evaluation is politically motivated; its purpose is usually communicated, directly or indirectly, to the evaluators, who must then decide if their ethical standards permit them to continue with the project. In such a pseudo-evaluation, the data collection and dissemination of the final report are carefully controlled to create the desired impression of the policy.

• The desired conclusions about the policy may be either negative or positive, but they reflect not the truth about the policy's success but the outcome that those who commissioned the evaluation sought for political reasons.

Page 25

• The second type of pseudo-evaluation is the public relations evaluation. Its purpose is to "create a positive public image for a school district, program, or process." As with politically inspired pseudo-evaluations, those who commission a public relations study usually clearly indicate what the findings must be.

• In this case, the conclusions in the final report not only must be positive, but also must add luster to the public image that has already been created.

• In order to bring about this result, the commissioners of the evaluation carefully shape and select the data that they make available to the researchers, limiting where they can go, to whom they may talk, and what questions they may ask. Needless to say, either commissioning or participating in a pseudo-evaluation is unethical.

Page 26

Methodologies Used in Policy Evaluation

• Quantitative Methodologies. Quantitative research designs involve the collection and statistical analysis of numerical data. Many types of numerical data are available in schools and school districts.

• Figure 11.2 lists some of the most common. Quantitative policy evaluations are sometimes based on experimental or quasi-experimental designs that investigate the statistical differences between a group that participated in a program and a group that did not.
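
• As a rough illustration of such a comparison, the sketch below contrasts invented test scores for program participants and a comparison group and applies a t-test. The figures, the SciPy dependency, and the choice of test are assumptions made for this example, not details taken from the chapter.

```python
# Minimal sketch of a quasi-experimental comparison: test scores for students
# who took part in a program versus a comparison group that did not.
# The scores are invented, the t-test is only one of many tests an evaluator
# might choose, and the third-party SciPy library is assumed to be installed.
from statistics import mean
from scipy import stats

participants = [78, 85, 82, 90, 74, 88, 81, 79]       # hypothetical test scores
comparison_group = [72, 80, 75, 83, 70, 77, 74, 76]   # hypothetical test scores

print(f"Participant mean: {mean(participants):.1f}")
print(f"Comparison mean:  {mean(comparison_group):.1f}")

# Independent-samples t-test: is the difference larger than chance variation?
result = stats.ttest_ind(participants, comparison_group)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
if result.pvalue < 0.05:
    print("Difference is statistically significant at the .05 level")
else:
    print("Difference is not statistically significant")
```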

Page 27

• Quantitative evaluations have several advantages. Perhaps the most important is the high level of credibility of a well-constructed quantitative study. Most Americans respect findings that are presented in statistical terms or depicted in graphs.

• A second major advantage is that quantitative evaluations can often be carried out relatively quickly and at a relatively low cost.

• The major disadvantage of a quantitative evaluation is that because of its tight structure and precise research questions, it is not well suited for discovering unexpected facts about the policy under study.

Page 28

• Test scores
• Retention rates
• Attendance figures
• Dropout rates
• Per-pupil expenditure
• Teachers' salaries
• Teacher-pupil ratios
• Percentage of students on free and reduced lunch
• Enrollment figures
• Percentage of teachers with master's degrees

Figure 11.2 Examples of quantitative educational data.

Page 29

• Qualitative Methodologies. Although almost all early policy evaluations were quantitative, in recent decades qualitative approaches have become popular. Qualitative research designs involve the collection of verbal or pictorial data. Many types of such data are available in schools and school districts or can easily be generated by a researcher.

• Figure 11.3 lists some of the most commonly used. Qualitative research designs often involve collecting several types of data and comparing them, a process called triangulation.

Page 30

• For example, a qualitative evaluation team assessing a dropout prevention project might interview students and teachers, observe in classes, and analyze books and other materials associated with the program.

• When the policy under consideration is very new or is a pilot program, a qualitative study can yield valuable insights into a problem.

• Moreover, qualitative evaluations sometimes make unexpected findings that quantitative ones miss.
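
• The sketch below illustrates, in a very simplified way, how coded themes might be located across interview transcripts. Real qualitative coding is interpretive and normally done with dedicated software; the codebook, keywords, and transcript excerpts here are invented purely for illustration.

```python
# Minimal sketch of locating coded themes in interview transcripts.
# A toy keyword matcher stands in for real, interpretive qualitative coding;
# the codebook and transcripts below are hypothetical.
from collections import defaultdict

# Hypothetical codebook: theme -> keywords that signal it
codebook = {
    "attendance": ["attend", "absent", "skip"],
    "morale": ["morale", "proud", "discouraged"],
    "awards": ["award", "certificate", "recognition"],
}

# Hypothetical transcript excerpts
transcripts = {
    "teacher_01": "Students seem proud of the certificates, but some still skip class.",
    "student_07": "The awards are nice. I feel less discouraged about coming to school.",
}

occurrences = defaultdict(list)  # theme -> transcripts in which it appears
for source, text in transcripts.items():
    lowered = text.lower()
    for theme, keywords in codebook.items():
        if any(keyword in lowered for keyword in keywords):
            occurrences[theme].append(source)

for theme, sources in occurrences.items():
    print(f"{theme}: appears in {', '.join(sources)}")
```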

Page 31

• Transcripts of interviews
• Transcripts of focus group discussions
• Notes on observations
• Open-ended surveys
• Personal statements
• Diaries
• Minutes from meetings
• Official reports
• Legal documents
• Books and materials
• Photographs

Figure 11.3 Types of qualitative data.

Page 32

Indicators
• An indicator is "an individual or a composite statistic that relates to a basic construct in education and is useful in a policy context."

• As an essential step in any evaluation process, the evaluators must define the indicators they will use to determine to what extent the policy is achieving its objectives.

• Careful thought must be given to the selection of indicators because of the potentially pathological effects of inappropriate ones.
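
• As a simple illustration of a composite indicator, the sketch below combines three standardized component statistics into a single weighted score for each district. The components, weights, figures, and z-score standardization are all assumptions chosen for this example rather than a method prescribed by the chapter.

```python
# Minimal sketch of building a composite indicator from several component
# statistics. District figures, weights, and the z-score standardization
# are hypothetical choices for illustration only.
from statistics import mean, stdev

# Hypothetical district-level figures for three component indicators
components = {
    "attendance_rate": [91.0, 93.5, 89.8, 94.2],
    "graduation_rate": [82.0, 88.5, 79.0, 90.1],
    "per_pupil_spending": [9500, 11200, 8800, 10400],
}
weights = {"attendance_rate": 0.4, "graduation_rate": 0.4, "per_pupil_spending": 0.2}

def z_scores(values):
    """Standardize values so components on different scales are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

standardized = {name: z_scores(vals) for name, vals in components.items()}

# Composite for each district: weighted sum of its standardized components
num_districts = len(next(iter(components.values())))
for d in range(num_districts):
    composite = sum(weights[name] * standardized[name][d] for name in components)
    print(f"District {d + 1}: composite indicator = {composite:+.2f}")
```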

Page 33

FACILITATING MEANINGFUL POLICY EVALUATIONS

• Policy evaluations are inherently political, for three reasons. First, the programs and projects that they assess are the products of the political process.

• Second, evaluation reports influence what happens in the political arena, often affecting whether a policy is continued and how much funding it receives.

• Finally, the careers, professional reputations, and educational benefits of many individuals depend on the outcome of any evaluation.

Page 34

ACTING ON THE EVALUATION REPORT

• (1) Do nothing,
• (2) make minor modifications,
• (3) make major modifications, or
• (4) terminate the policy.

Page 35

TABLE 11.1 Major Methods for Modifying Policies

Method          Explanation
Replacement     A new program that has the same objectives is put in place of the old program.
Consolidation   Two or more entire programs or parts of programs are put together.
Splitting       One aspect of the program is removed and developed into a separate program or project.
Decrementing    A substantial cut in funding is imposed on the program by reducing the amount of money available to most components of the old program.

Page 36

• When the administration changes
• When the economy turns down
• When budget difficulties happen
• When other jobs are available within the organization
• When the old program can easily be replaced by the new one

Figure 11.4 Situations conducive to policy termination.

Page 37

Questions and activities for discussion
• 1.1 Discuss why people feel nervous about evaluations.
• 1.2 Find an evaluation that used quantitative methods and one that used qualitative ones. What methods of each type were used? Contrast the kinds of information yielded by each approach.

• 1.3 Find an evaluation report and evaluate it, using the criteria discussed in this chapter.

• 1.4 List the evaluation and monitoring techniques used by your SDE and identify the major indicators. Do they distort the behavior of your state’s educators in any way? How would you improve them?


Page 38

Case-study: The middle school proposal goes down in flames

• A school superintendent decided to evaluate the curriculum and organization of the district’s junior high schools (grades 7-9) and formed a panel of evaluators consisting of the elementary, junior high, and senior high school principals and a teacher at each level.

• They were requested to complete a written report for the superintendent within five weeks of commencing the evaluation.

Page 39

• The panel prepared the report for the superintendent based on their own knowledge and beliefs about the school system and its needs, supplemented by limited staff and student perceptions collected with a survey instrument.

• Included in the report were sections on academic achievement of local junior high school pupils, national trends in junior high school organization (stressing the advantages of the middle school concept), present and projected enrollments, gaps in the junior high school curriculum, and the physiological and social development of students in the junior high age group.

• The report recommended that the school system shift to a middle school organization, with grades 6, 7, and 8 in the middle schools, and grade 9 going to the senior high schools.

Page 40

• When the report was published, elementary and senior high parents were disturbed that specific concerns of theirs had not been addressed in the report. Elementary parents, especially, were upset about the prospect of the loss of the school leadership provided by sixth-grade students.

• They suggested, among other things, that many parents looked to sixth graders to escort their younger children safely to and from school. Senior high parents were alarmed at the potential overcrowding that would be brought about by the addition of grade-9 students.

Page 41

• Representatives of both parent groups complained to the school board. The board itself was irritated because the report did not assess the disadvantages and cost implications of the suggested reorganization and the advantages and disadvantages of other possible organizational changes.

• The board supported the parent groups and rejected the middle school concept.

Page 42

Questions:

• 1. Using the criteria provided in this chapter, what were the weaknesses of this evaluation?

• 2. If you had been a member of the school board, how would you have voted on the panel’s recommendation? Why?

• 3. What could the superintendent have done to encourage a more credible evaluation? What could the panel of evaluators have done?

• 4. Reading between the lines, how do you think that the politics of evaluation affected this situation?


Page 43

Questions:
• 1. What problems did the evaluation of the dual-language program at Harris reveal?
• 2. Was the evaluation summative or formative?
• 3. What players in this evaluation arena can you identify? What other players may be involved?
• 4. What tactics have the implementers used to discredit the evaluation?

Page 44

The End