

A random sample of American Evaluation Association (AEA) members were surveyed for their reactions to three case scenarios (informed consent, impartial reporting, and stakeholder involvement) in which an evaluator acts in a way that could be deemed ethically problematic. Significant disagreement among respondents was found for each of the scenarios, in terms of respondents' views of whether the evaluator had behaved unethically. Respondents' explanations of their judgments support the notion that general guidelines for professional behavior (such as AEA's Guiding Principles for Evaluators) can encompass sharply conflicting interpretations of how evaluators should behave in specific situations. Respondents employed in private business/consulting were less likely than those in other settings to believe that the scenarios portrayed unethical behavior by the evaluator, a finding that underscores the importance of taking contextual variables into account when analyzing evaluators' ethical perceptions. The need for increased dialogue among evaluators who represent varied perspectives on ethical issues is addressed.

    YOU GOT A PROBLEM WITH THAT?

Exploring Evaluators' Disagreements About Ethics

MICHAEL MORRIS
LYNETTE R. JACOBS

University of New Haven

AUTHORS' NOTE: An earlier version of this article was presented at the 1999 annual meeting of the American Evaluation Association in Orlando. Please address correspondence to Michael Morris, Department of Psychology, University of New Haven, West Haven, CT 06516.

EVALUATION REVIEW, Vol. 24 No. 4, August 2000, 384-406. © 2000 Sage Publications, Inc.

Ethical issues in evaluation have received increasing attention in recent years (e.g., Bennington 1999; Fitzpatrick and Morris 1999; House and Howe 1999; Mabry 1999; Morris 1999a; Newman and Brown 1996; Shadish et al. 1995; Wenger et al. 1999). Not surprisingly, one outcome of this attention has been a recognition of the diverse perspectives that evaluators bring to the domain of ethics. Indeed, considerable disagreement even appears to surround such basic questions as: What constitutes an ethical issue in evaluation?


For example, when summarizing their research on evaluation ethics, Newman and Brown (1996, 89) note, "We consistently found people whose generalized response was 'What? Ethics? What does ethics have to do with evaluation?' This came from experienced evaluators, long-term users of evaluation, evaluation interns, and faculty members teaching program evaluation." In a similar vein, Morris and Cohn (1993, 626) found that 35% of their sample of American Evaluation Association (AEA) members responded "no" when asked in a questionnaire, "In your work as a program evaluator, have you ever encountered an ethical problem or conflict to which you had to respond?" Finally, in an interview study whose goal was to identify and describe the ethical issues encountered by public-sector evaluators, Honea (1992, 317) found, "Ethics was not discussed during the practice of evaluation and ethical dilemmas were rarely, if ever, identified during the conduct of evaluation and policy analysis activities."

At a general level, these investigations, and the questions they raise, explore what Merton (1973, 269) describes as the normative structure of science, a structure expressed in the form of "prescriptions, proscriptions, preferences, and permissions . . . legitimized in terms of institutional values, and internalized by the scientist, thus fashioning his scientific conscience." The notion of a normative structure evokes images of unanimity. In practice, however, individuals can vary in their commitment to a given norm, and subgroups can differ in the specific norms they endorse within an overall normative context (see Rossi and Berk 1985). Anderson and Louis (1994), for example, found that U.S. doctoral students endorsed traditional scientific norms (universalism, communality, disinterestedness, and skepticism) more than did foreign-born students, whereas the opposite was true with respect to commitment to scientific counternorms (particularism, solitariness, self-interestedness, and dogmatism).

Our study examines the extent to which a normative framework characterizes evaluators' responses to a set of detailed scenarios drawn from professional practice. Given that the results previously cited suggest that evaluators vary in the degree to which they "interpret the challenges they face in ethical terms" (Morris 1999b, 16), we wished to explore the factors that might account for such differences.

Scenarios, because they are specific and concrete, are more likely than open-ended methods (e.g., Honea 1992; Morris and Cohn 1993) to generate uniform reference points for the application of respondents' opinions, beliefs, and values related to ethics. This, in turn, increases the likelihood that observed differences in respondents' views represent real, substantive differences that have practical implications, a conclusion that is harder to justify when disagreements pertain to issues presented in more abstract, theoretical terms.


Ethical scenarios have been used by other investigators, most notably Korenman et al. (1998), who employed them to identify research norms among National Science Foundation research grantees and research administrators in the areas of performance and reporting of research, appropriation of ideas, conflict of interest, and collegiality and sharing (see also Wenger et al. 1997, 1999). They found a high level of agreement between the two respondent groups in their views of the first three areas, suggesting the existence of an underlying normative structure within the scientific community in those domains.

Researchers who have focused specifically on ethics in evaluation have identified several factors that might influence an individual's tendency to perceive evaluation problems through an ethical lens. Morris and Cohn (1993) found that evaluators who reported that they had never encountered an ethical conflict in their work had conducted fewer evaluations, had devoted more of their time to internal evaluation, and were more likely to have been trained in the field of education than respondents who said that they had encountered such challenges. Also relevant in this context are two factors identified by Honea (1992) on the basis of her interviews with public-sector evaluators: allegiance to the role of objective scientist, and membership on an evaluation team. Honea believes that internalization of the scientist role, and participation in research teams, decreases the extent to which one sees ethical issues (as opposed to methodological or political ones) as salient in one's evaluation work. In the current investigation, we attempt to examine with greater directness and precision the role played by these and other factors in the perceptions of challenges that might be deemed ethical in nature.

    METHOD

    PARTICIPANTS

The population for the study consisted of the 3,167 individuals with U.S. addresses who were listed in the March 1999 database of AEA. We mailed a questionnaire to a random sample of 798 of these individuals. A small number of surveys (24) were returned due to incorrect addresses, reducing the original sample size to 774. Overall, we received 397 responses, which represents a return rate of 51%. Within this group, there were 6 individuals who indicated that they were not evaluators or evaluators-in-training, and thus, they did not think it was appropriate for them to complete the survey. Consequently, the data analyses reported here are based on a sample of 391.
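For readers who want to trace the sample arithmetic, the following minimal sketch reproduces the figures reported above (the variable names are ours):

    # Sample and response-rate arithmetic for the survey described above.
    mailed = 798            # questionnaires sent to randomly sampled AEA members
    undeliverable = 24      # returned because of incorrect addresses
    returned = 397          # completed questionnaires received
    non_evaluators = 6      # respondents who indicated the survey did not apply to them

    effective_sample = mailed - undeliverable        # 774
    return_rate = returned / effective_sample        # ~0.51, i.e., 51%
    analysis_n = returned - non_evaluators           # 391 cases used in the analyses

    print(f"effective sample = {effective_sample}, "
          f"return rate = {return_rate:.0%}, analysis n = {analysis_n}")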


SURVEY INSTRUMENT

The questionnaire contained three sections. Section A included three single-paragraph scenarios (see the appendix); in each one, an evaluator acts in a way that could be deemed ethically problematic. In the first scenario, the Revised Report, the evaluator alters a section of a final report in response to pressure from a stakeholder. In the second scenario (Advisory Group), an evaluator assembles a widely representative advisory group for a project but does not actively involve these stakeholders in the evaluation process. In the third scenario (Passive Consent), the evaluator decides to use passive rather than active consent when studying a school-based youth program, even though he or she realizes that some parents who oppose the research will simply forget to return the passive-consent form, while others who would have been opposed to the study will fail to read it in the first place. Because three scenarios can be sequenced in six different ways, there were six versions of the questionnaire, with each version (representing a different sequence) accounting for one sixth of the total number of surveys mailed.
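To make the counterbalancing concrete, here is a small sketch (our own illustration, not the authors' materials) that enumerates the six possible orderings and assigns them in rotation across the mailing:

    from itertools import permutations

    # The three scenario names come from the text; everything else is illustrative.
    scenarios = ["Revised Report", "Advisory Group", "Passive Consent"]

    # 3! = 6 possible orderings, one per questionnaire version.
    versions = list(permutations(scenarios))
    assert len(versions) == 6

    # Cycle through the versions so each ordering covers one sixth of the mailing.
    mailed = 798
    assignments = [versions[i % len(versions)] for i in range(mailed)]

    for version in versions:
        print(version, assignments.count(version))   # 133 questionnaires per version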

On a Likert-type scale, respondents indicated the extent to which they regarded the evaluator's actions in each scenario as ethically problematic (1 = they definitely are problematic, 2 = they probably are problematic, 3 = unsure, 4 = they probably are not problematic, 5 = they definitely are not problematic). A note at the beginning of the survey encouraged respondents to define "ethical" in terms of "issues of morality, that is, good and bad, right and wrong, duty and obligation." For each scenario, respondents explained, in an open-ended fashion, why they gave the answer they did to the Likert item. Finally, we asked respondents to predict (by assigning percentages to the five Likert categories) how the overall AEA membership would react to each scenario.

In the survey's second section, respondents rated on a 7-point scale the usefulness of four role-oriented labels (consultant, scientist, reporter, and facilitator) for describing the work that evaluators do (1 = not at all useful, 7 = extremely useful). Respondents also indicated whether they were familiar with AEA's Guiding Principles for Evaluators and how useful the principles were on a 5-point scale (1 = not at all useful, 5 = extremely useful). The final question asked for the respondent's overall political orientation (1 = very conservative, 7 = very liberal).

The third section of the questionnaire solicited background information. Respondents reported the number of years they had worked in evaluation, as well as the approximate number of evaluations they had conducted.


They also estimated the percentage of evaluations they had conducted in each of the following capacities: external evaluator, internal evaluator, member of an evaluation team, and solo practitioner. In addition, information was gathered on the respondents' highest degree, primary discipline, employment setting, and sex.

    RESULTS

    RESPONDENT CHARACTERISTICS

A majority of the respondents possessed a doctoral degree (54% Ph.D., 7% Ed.D.); 30% had a master's, 5% a bachelor's, and 5% were "other." The primary discipline of more than half the respondents was education (19%), psychology (17%), or evaluation (17%) (see Table 1).

The largest subgroup of respondents worked in a college or university (40%), with private business/consulting (19%) and nonprofit organizations (15%) representing the only other settings employing 10% or more of the sample (see Table 2). With respect to sex, 52% of the respondents were female, and 48% were male. This sex ratio differed significantly from that of the nonrespondent group, in which 60% were female and 40% were male, χ²(1, N = 779) = 4.52, p < .05. However, we found the respondents' sex to be unrelated to the key variable examined in this study (i.e., reactions to the three scenarios), and thus, we have little reason to believe that the different response rates for males and females affected our results in a substantive way.
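The sex-composition comparison can be reproduced approximately with a standard chi-square test of independence. In the sketch below, the cell counts are reconstructed from the rounded percentages and group sizes reported above, so the statistic will only approximate the reported χ²(1, N = 779) = 4.52:

    import numpy as np
    from scipy.stats import chi2_contingency

    # 2 x 2 table of sex by response status, rebuilt from the rounded percentages
    # in the text (approximate counts, not the authors' raw data).
    #                  female  male
    respondents     = [  203,   188]   # ~52% / 48% of n = 391
    nonrespondents  = [  233,   155]   # ~60% / 40% of n = 388

    table = np.array([respondents, nonrespondents])
    chi2, p, dof, expected = chi2_contingency(table)  # Yates-corrected for a 2 x 2 table

    print(f"chi2({dof}, N = {table.sum()}) = {chi2:.2f}, p = {p:.3f}")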

    EVALUATION EXPERIENCE

Respondents had worked in the evaluation field for an average of 12.5 years (SD = 9.2), with a range spanning from 0 to 52 years.


TABLE 1: Primary Discipline of Respondents (n = 390)

Discipline                                  Percentage
Education                                   19
Psychology                                  17
Evaluation                                  17
Public administration/political science     10
Research/statistics                          9
Sociology                                    7
Social work                                  4
Public health                                3
Other                                       14


More than half (53%) had conducted 11 or more evaluations (see Table 3). Both external and internal evaluators were well represented in the sample, as were team evaluators and solo practitioners (see Table 4). At the extremes, purely external evaluators accounted for 28% of the respondents, whereas purely internal ones accounted for 13%. Those who had only participated in team evaluations comprised 20% of the respondents, whereas those who always worked alone represented 7%.

Respondents' views of the scenarios are presented in Table 5. The evaluator's actions were seen as most troubling in the Passive Consent vignette, with 69% of the sample rating the use of passive consent in this situation as definitely or probably ethically problematic. Slightly more than 50% of the respondents regarded the behavior described in the Revised Report scenario as ethically problematic, whereas only 39% believed that the evaluator's failure to involve stakeholders actively in the Advisory Group scenario was problematic.

Respondents' predictions of how the AEA membership would react to the scenarios were strongly related to their own views of the vignettes (see Note 1).


TABLE 2: Employment Settings of Respondents (n = 391)

Setting                          Percentage
College/university               40
Private business/consulting      19
Nonprofit organization           15
Federal agency                    9
State agency                      5
Local agency                      3
School system                     3
Other                             6

TABLE 3: Evaluation Experience of Respondents (n = 390)

Number of Evaluations Conducted    Percentage
None                                4
1-5                                24
6-10                               19
11-19                              16
20 or more                         37


The more convinced a respondent was that the evaluator had behaved unethically in a given scenario, the larger the respondent's estimate of how many AEA members would view the evaluator's action as ethically problematic (see Table 6). For example, when respondents viewed the evaluator's actions as definitely or probably ethically problematic, their mean estimate of the percentage of the AEA membership that would share this view was 69%. In contrast, when respondents regarded the evaluator's behavior as not problematic (definitely or probably), they estimated that only 28% of AEA would consider the evaluator's actions to be ethically problematic (definitely or probably).

    CONTENT ANALYSIS

For the purpose of content analyzing the respondents' explanations of their answers, we grouped respondents into three categories for each scenario: those who thought the evaluator's actions were definitely or probably ethically problematic, those who were unsure whether the evaluator's actions were ethically problematic, and those who thought the evaluator's actions were definitely or probably not ethically problematic.


TABLE 4: Experience in External and Team Evaluations (n = 373-374)

Percentage of Evaluations       External     Team
Conducted in a Given Role       Evaluator    Evaluator
76-100                          46           43
50-75                           17           24
25-49                            8           12
0-24                            29           20

NOTE: Values in the two right-hand columns represent the percentage of respondents in a given role category. Figures for Team Evaluator do not total 100% due to rounding.

TABLE 5: Reactions to the Three Scenarios (in percentages; n = 391)

Were the Evaluator's Actions        Passive     Revised     Advisory
Ethically Problematic?              Consent     Report      Group
Definitely are problematic          44          23          19
Probably are problematic            25          28          20
Unsure                              11          17          12
Probably are not problematic        16          28          32
Definitely are not problematic       4           4          17


In this section, we focus on the explanations offered by the first and third groups; the "unsures" are omitted.

As might be expected, the specific issues raised by respondents differed across the three scenarios. Given the study's focus on ethics, we used the Guiding Principles for Evaluators (American Evaluation Association 1995), namely systematic inquiry, competence, integrity/honesty, respect for people, and responsibilities for general and public welfare, as a conceptual tool for categorizing these open-ended responses, once we had conducted an initial content analysis to identify specific themes in the explanations. The results indicate that a general principle (e.g., integrity/honesty) could support arguments both for and against the ethicality of the evaluator's actions in a given scenario (see Tables 7-9). For example, 30% of those who faulted the evaluator in the Advisory Group scenario maintained that extensive stakeholder participation is required for an accurate evaluation. In contrast, 11% of those who found the evaluator's actions in that scenario to be acceptable believed that such participation could jeopardize the evaluation's objectivity. Both of these arguments pertain most directly to the principle of systematic inquiry: "Evaluators should adhere to the highest appropriate technical standards in conducting their work . . . so as to increase the accuracy and credibility of the evaluation information they produce" (American Evaluation Association 1995, 22).

In other cases, a principle's ability to encompass conflicting arguments was related to respondents' interpretations of a lack of detail in the scenario.


TABLE 6: Predictions of American Evaluation Association (AEA) Members' Scenario Judgments as a Function of One's Own Judgments (n = 297-304)

                                                  Predicted AEA Judgment
Respondent's Judgment                             Problematic   Unsure   Not Problematic
Evaluator's behavior is definitely or
  probably ethically problematic                  69            12       19
Unsure                                            35            35       30
Evaluator's behavior is definitely or
  probably not ethically problematic              28            16       56

NOTE: Values represent respondents' mean predicted percentages of the AEA membership who would judge a scenario in a given way (all three scenarios combined).


Thus, 61% of those who objected to the evaluator's behavior in the Revised Report scenario assumed that the revision substantially altered the report, whereas 70% of those who did not object gave explanations that indicated that they did not share this assumption (see Table 8). In both instances, the relevant principle involves the integrity/honesty of the evaluation: "Evaluators should not misrepresent their procedures, data, or findings" (American Evaluation Association 1995, 23).

Finally, each scenario generated a number of open-ended responses that claimed that the situation depicted did not raise an ethical issue. Indeed, among those who saw the evaluator's actions as not problematic in the Advisory Group scenario, 50% thought the case involved a methodological or philosophical issue, not an ethical one.


TABLE 7: Respondents' Explanations: The Passive Consent Scenario

Percentage   Explanation [Relevant Guiding Principle]

Respondents who judged the evaluator's actions as definitely or probably ethically problematic (n = 271)
45   Evaluator is consciously violating informed consent by using passive consent despite his or her knowledge of its limitations in this situation [Respect for people]
22   Passive consent is not permitted under various legal or policy guidelines [Respect for people]
15   Passive consent is inappropriate in studies involving controversial or high-risk issues or vulnerable populations, such as minors [Respect for people]
 5   Using passive consent can lead to future problems for the study or the evaluator [NA]
 9   Other [NA]
 4   No explanation given [NA]

Respondents who judged the evaluator's actions as definitely or probably not ethically problematic (n = 76)
43   Passive consent is an ethically acceptable procedure for obtaining informed consent [Respect for people]
25   Passive consent is acceptable as long as it does not focus on controversial or sensitive issues or expose participants to significant harm [Respect for people]
16   Passive consent may be necessary to obtain a valid, representative sample [Systematic inquiry]
 4   This scenario does not raise an ethical issue [NA]
 5   Other [NA]
 7   No explanation given [NA]


The percentages of the not-problematic subgroup who believed this to be the case in the other two scenarios were much smaller (4%-5%).

    COMBINED SCENARIOS

The analyses in this section group respondents into three categories: those who responded "definitely problematic" or "probably problematic" to none of the three scenarios (8% of the sample), those who responded in this fashion to one or two of the scenarios (76% of the sample), and those who found the evaluator to be at fault (definitely or probably) in all three of the scenarios (16% of the sample).


TABLE 8: Respondents' Explanations: The Revised Report Scenario

Percentage   Explanation [Relevant Guiding Principle]

Respondents who judged the evaluator's actions as definitely or probably ethically problematic (n = 198)
61   Substantively altering a fair and balanced report undermines the accuracy, integrity, and scientific rigor of the evaluation [Integrity/honesty]
22   Deleting quotes is not an appropriate solution; however, it might be acceptable to modify the report in other ways [Integrity/honesty]
 4   Altering the report violates the evaluator's primary responsibility, which is to the foundation (the client) [Responsibilities for general and public welfare, integrity/honesty]
 9   Other [NA]
 4   No explanation given [NA]

Respondents who judged the evaluator's actions as definitely or probably not ethically problematic (n = 126)
70   As long as the report's key findings are not substantively altered with the quotes removed, the evaluator is behaving ethically [Integrity/honesty]
13   Evaluators have an ethical responsibility to be sensitive to the needs of programs and stakeholders as well as to the political consequences of their reports [Respect for people, responsibilities for general and public welfare]
 5   This scenario does not raise an ethical issue [NA]
 5   Other [NA]
 7   No explanation given [NA]


We used either one-way ANOVA or chi-square tests to examine the relationship of this variable to responses to questions dealing with evaluator role, the Guiding Principles for Evaluators, political orientation, evaluator experience, education, employment background, and sex (see Table 10).
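A rough sketch of this grouping-and-testing step, assuming each respondent's three scenario ratings are available as 1-5 Likert scores (the column names and the toy data below are ours, not the study's):

    import numpy as np
    import pandas as pd
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)

    # Toy data standing in for the survey file: 1-5 ratings for each scenario
    # (1 or 2 = "problematic") plus the 1-7 usefulness rating of the consultant role.
    df = pd.DataFrame({
        "passive_consent": rng.integers(1, 6, 391),
        "revised_report": rng.integers(1, 6, 391),
        "advisory_group": rng.integers(1, 6, 391),
        "consultant_useful": rng.integers(1, 8, 391),
    })

    # Count how many scenarios each respondent judged problematic, then collapse
    # the count into the three combined-scenario groups used in Table 10.
    scenario_cols = ["passive_consent", "revised_report", "advisory_group"]
    n_problematic = (df[scenario_cols] <= 2).sum(axis=1)
    df["group"] = pd.cut(n_problematic, bins=[-1, 0, 2, 3], labels=["0", "1-2", "3"])

    # One-way ANOVA of consultant-role usefulness across the three groups.
    samples = [g["consultant_useful"].to_numpy() for _, g in df.groupby("group", observed=True)]
    F, p = f_oneway(*samples)
    print(f"F(2, {len(df) - 3}) = {F:.2f}, p = {p:.3f}")

Pairwise follow-up comparisons of the kind reported in the table's subscripts could then be run with a Tukey procedure (e.g., statsmodels' pairwise_tukeyhsd).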

The only role for which a significant relationship was found was consultant: Viewing all three scenarios as problematic was negatively associated with believing that the consultant label is useful for describing the work of evaluators.


TABLE 9: Respondents' Explanations: The Advisory Group Scenario

Percentage   Explanation [Relevant Guiding Principle]

Respondents who judged the evaluator's actions as definitely or probably ethically problematic (n = 155)
30   Stakeholder participation generates input that is needed for an accurate evaluation [Systematic inquiry]
22   Stakeholder participation is a given in an ethical evaluation [Responsibilities for general and public welfare]
21   It is unethical to form an advisory group and then not use them as such [Integrity/honesty]
17   The usefulness or utilization of an evaluation is decreased if stakeholders are not meaningfully involved [Integrity/honesty]
 7   Other [NA]
 2   No explanation given [NA]

Respondents who judged the evaluator's actions as definitely or probably not ethically problematic (n = 126)
50   This scenario raises a methodological or philosophical issue, not an ethical one [NA]
15   The initial understanding between the evaluator and the stakeholders may not have provided for extensive stakeholder involvement [Integrity/honesty]
12   The advisory group does have the opportunity to provide some input into the evaluation [Integrity/honesty]
11   The evaluator is the expert; involving stakeholders in depth is not necessary and might even compromise the objectivity of the evaluation [Systematic inquiry]
 5   Other [NA]
 6   No explanation given [NA]


In contrast, viewing the scenarios as problematic was positively associated with finding the Guiding Principles for Evaluators to be useful. Other significant relationships included the following:

Respondents employed in private business/consulting were less likely than those in other settings to believe that the scenarios involved ethically problematic behavior on the evaluator's part.

Among those not employed in private business/consulting, length of evaluation experience (as measured in terms of both years and number of evaluations conducted) was negatively related to judging the evaluator's actions to be ethically problematic. No such relationship characterized those who were employed in private business/consulting.


TABLE 10: Relationship of Combined Scenarios to Key Variables

                                     Number of Problematic Scenarios
Variable                             0        1-2      3        χ²(df, n) or F(df)
Role (mean)
  Consultant                         5.74a    5.30a    4.77b    4.56 (2, 381)**
  Scientist                          3.97     3.83     3.76     n.s.
  Reporter                           3.10     3.46     3.40     n.s.
  Facilitator                        4.16     4.49     4.26     n.s.
Guiding Principles
  Useful (mean)                      2.94a    3.43b    3.83c    6.68 (2, 194)**
  Familiar (%)                       56       52       47       n.s.
Political views (mean)               5.35     4.92     5.03     n.s.
Experience (mean)
  Years (non-PBC)                    18.3a    12.6b    9.0c     7.74 (2, 311)***
  Number of evaluations (non-PBC)    4.1a     3.6a     2.9b     8.24 (2, 312)***
  Percentage external (PBC)          90.0a    81.0a    52.2b    4.82 (2, 70)**
  Percentage team                    65.3     59.7     62.5     n.s.
Degree (%)
  Doctoral                           77       65       52       6.24 (2, 372)*
  B.A./M.A.                          23       35       48
Employment (%)
  PBC                                37       18       14       8.07 (2, 391)**
  Non-PBC                            63       82       86
Sex (%)
  Male                               52       49       40       n.s.
  Female                             48       51       60

NOTE: PBC = private business/consulting; non-PBC = those employed in other settings. Means with different subscripts differ significantly at p < .05 or lower in the Tukey honestly significant difference comparison. Means for "Number of evaluations" refer to survey scale values, not actual number of evaluations conducted.
*p < .05. **p < .01. ***p < .001.



Among those employed in private business/consulting, respondents who viewed all of the scenarios as problematic devoted less of their time to external evaluation than those who judged none, or one or two, of the scenarios as problematic. This relationship was not found among those employed in other settings.

Holders of a doctoral degree were less likely than B.A. or M.A. respondents to see all of the scenarios as ethically problematic. However, this relationship is attributable to the greater evaluation experience of the former group, in terms of both years and number of evaluations conducted. When either of these experience indicators is held constant, the relationship between degree and one's score on the combined scenarios disappears (see the sketch below).
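One common way to probe a claim like this is to add the experience measure as a covariate in a regression of the combined-scenario score on degree. The sketch below is our own illustration with hypothetical column names and toy data; it treats the 0-3 count of problematic judgments as the outcome in an ordinary least squares model, which is a simplification of the analyses actually reported:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Toy stand-in for the survey file (all column names are hypothetical).
    df = pd.DataFrame({
        "n_problematic": rng.integers(0, 4, 391),          # 0-3 scenarios judged problematic
        "degree": rng.choice(["doctoral", "ba_ma"], 391),
        "years_experience": rng.gamma(2.0, 6.0, 391),
    })

    # Degree effect alone...
    unadjusted = smf.ols("n_problematic ~ C(degree)", data=df).fit()
    # ...and with years of experience held constant. If the degree coefficient shrinks
    # toward zero and loses significance once experience enters the model, the
    # degree/judgment relationship is plausibly carried by experience, as argued above.
    adjusted = smf.ols("n_problematic ~ C(degree) + years_experience", data=df).fit()

    print("unadjusted degree effect:", unadjusted.params["C(degree)[T.doctoral]"])
    print("adjusted degree effect:  ", adjusted.params["C(degree)[T.doctoral]"])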

    THE NOT-AN-ETHICAL-ISSUE SUBGROUP

Only the Advisory Group scenario produced enough open-ended explanations (106) of "I don't think [or I'm not sure] this is an ethical issue" to warrant further statistical analysis. When we compared this subgroup with the rest of the sample on the variables examined in the previous section (evaluator roles, Guiding Principles for Evaluators, etc.), no significant differences emerged.

    DISCUSSION

The results of this study shed light on two important, and related, questions in evaluation ethics. First, what issues do evaluators emphasize, and disagree about, when judging the ethicality of professionals' behavior in specific situations? Second, are there factors that operate at a more general level to increase or decrease the salience of ethical concerns in the eyes of evaluators? We will address both of these questions in this section.

    CONSENT, REPORTING, AND STAKEHOLDER PARTICIPATION

Perhaps the most striking finding pertaining to the individual scenarios is the lack of consensus that characterized the respondents' judgments of whether each of the hypothetical evaluators had behaved ethically.


Even in the Passive Consent scenario, in which agreement was highest, only 44% of the respondents believed that the evaluator's actions were definitely problematic.

In part, widespread disagreement may simply reflect the limitations of the type of scenario methodology employed in this study. A single-paragraph vignette inevitably leaves many details unspecified, and different respondents are likely to fill in the blanks with different assumptions, with some of these assumptions having implications for the ethical judgments rendered. Thus, as was previously mentioned, respondents to the Revised Report scenario varied in their views of how the evaluator's revisions would influence the report: Of those who found the evaluator's behavior ethically problematic, 61% cited the inappropriateness of substantively altering a fair report; of those who were unsure of the behavior's ethicality, 66% said they were unsure because they did not know if the revisions substantively altered a fair report; and among those who saw the evaluator's actions as not problematic, 70% assumed the revisions had not misrepresented the study's key findings.

Similar points could be made concerning the other two vignettes. In the Advisory Group scenario, it appears that respondents varied in their views of the understanding established between the evaluator and the stakeholders at the beginning of the project concerning the nature of the advisory relationship. And in the Passive Consent scenario, respondents differed in the extent to which they assumed that the school-based youth program involved high-risk issues. Had the scenarios been more explicit about these and other issues, it is likely that respondents would have displayed higher levels of agreement when judging the evaluator's behavior in each vignette.

Reducing disagreement is not synonymous with eliminating it, however. Even if the Revised Report scenario had contained actual copies of both the original and final reports, respondents would almost certainly have differed over whether the changes in the document represented substantive ones or not, resulting in conflicting conclusions about the ethicality of the evaluator's actions. The same principle applies to the other two scenarios: describing more fully the initial evaluator-stakeholder conversations in the Advisory Group vignette, and specifying the type of youth program in the Passive Consent scenario, does not guarantee that respondents would have agreed on the nature of the understanding in the former vignette, or the amount of risk involved in the latter one. Indeed, as Korenman et al. (1998, 47) observe, "Ambiguity is . . . typical of real-life behaviors as well as scenarios." With these considerations in mind, we are inclined to conclude that the level of disagreement among our respondents on the issues raised in the three scenarios is probably less than the reported percentages suggest but of considerable magnitude nonetheless.


In this context, it should be noted that scenarios can be designed to examine normative issues in a more comprehensive and fine-grained fashion than was the case in this study. Korenman et al. (1998), for example, used a fractional factorial approach (Rossi and Nock 1982) to construct multiple scenarios reflecting various levels of potential misconduct within general research domains. This strategy has much to offer future investigators of evaluation ethics.
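To make the factorial-survey idea concrete, here is a sketch of the general approach; the dimensions and levels are invented for illustration and are not those used by Korenman et al. (1998):

    import random
    from itertools import product

    # Hypothetical vignette dimensions; a real design would derive these from the
    # research domain under study (e.g., reporting, consent, authorship).
    dimensions = {
        "severity": ["minor", "moderate", "serious"],
        "intent": ["inadvertent", "deliberate"],
        "who_benefits": ["evaluator", "client", "program participants"],
        "disclosure": ["disclosed to stakeholders", "not disclosed"],
    }

    # Full factorial: every combination of levels (3 * 2 * 3 * 2 = 36 vignettes).
    full_factorial = list(product(*dimensions.values()))

    # Each respondent sees only a subset of the full factorial; a formal fractional
    # factorial would use a structured subset, while the factorial survey approach
    # often assigns randomly sampled vignettes.
    random.seed(0)
    subset = random.sample(full_factorial, k=9)

    for levels in subset:
        print(", ".join(f"{dim}: {level}" for dim, level in zip(dimensions, levels)))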

In our study, both the Passive Consent and Revised Report scenarios were apparently seen by nearly all respondents as encompassing ethical problems. Only 1% of the sample, when explaining their judgments of the evaluator's actions, claimed that these vignettes did not raise an ethical issue. In contrast, 19% of the respondents expressed such an opinion when discussing the Advisory Group scenario. Why the difference? The Passive Consent scenario deals with informed consent, traditionally a core topic in discussions of research ethics (e.g., Newman and Brown 1996, 147-49). Similarly, at the heart of the Revised Report scenario is the issue of impartial reporting of findings, a professional responsibility that researchers typically see as having major ethical significance (Korenman et al. 1998; Morris and Cohn 1993). The Advisory Group scenario, however, focuses on stakeholder involvement and empowerment, a domain that in the minds of many evaluators does not necessarily suggest a set of ethical imperatives. For example, Empowerment Evaluation (Fetterman, Kaftarian, and Wandersman 1996), an approach that strongly advocates stakeholder involvement, has been greeted with less than wholehearted endorsement by opinion leaders within the field (see Fetterman 1997; Patton 1997; Scriven 1997). Thus, it should not be surprising that of the respondents who did not see the evaluator's actions as unethical in this scenario, 50% indicated that they did not believe the problem involved was an ethical one. Representative comments from this subgroup included, "Involving stakeholders is a matter of use not ethics," "This may not be the smartest approach, but I don't find it an ethical dilemma," and "While not actively involving stakeholders is not good evaluation, I don't see it as morally wrong." As previously reported, these respondents did not significantly differ from the rest of the sample on any of the variables examined in the study.

When the explanations respondents offered for their ethical judgments of the three scenarios are viewed as a whole, the differences between them reflect a dynamic commonly found in controversy: conflicting views of whether a general principle or value is being upheld in a specific situation. Thus, respondents who found fault with the evaluator's behavior in the Passive Consent scenario usually thought that the spirit of informed consent (if not the letter of the law) had been violated by the evaluator.


In contrast, most of those who were ethically comfortable with this scenario indicated that they did not see the evaluator's actions threatening informed consent. Both groups of respondents would probably claim that their positions were consistent with the AEA Guiding Principle of Respect for People. Similarly, both the defenders and critics of the evaluator in the Revised Report scenario often professed their allegiance to the importance of not altering the substance of the final report and undoubtedly saw themselves upholding the integrity/honesty of the evaluation. And in the Advisory Group vignette, there was one subgroup of respondents who argued that an accurate evaluation required extensive stakeholder participation, whereas another subgroup claimed that such participation would threaten the evaluation's accuracy. Both groups would be likely to maintain that they were committed to systematic inquiry in the evaluation.

These findings underscore one of the limitations of any highly general set of principles for guiding professional behavior (e.g., House 1995; Mabry 1999; Rossi 1995). As Rossi (1995, 59) has observed of such principles, "I am certain that I can claim to subscribe to them. I am also certain that if I held very different views of evaluation, I would also be in compliance."

WHO FINDS FAULT, AND WHO DOESN'T?

Although in this study we failed to identify a distinctive respondent subgroup whose general orientation was an explicit one of not viewing evaluation problems through an ethical lens, we did succeed in generating a composite variable (Combined Scenarios) that may reflect a similar orientation operating at a more implicit level. Specifically, we distinguished between three groups of respondents: those who believed that the evaluator's behavior was ethically problematic in none of the scenarios, those who found it problematic in one or two of the scenarios, and those who faulted the evaluator in all three of the scenarios. In at least one crucial respect, perceiving an evaluator's actions as ethically blameless is much the same as perceiving the evaluator's behavior as not involving an ethical issue; in neither case is a judgment of moral wrongdoing rendered.

When respondents were subgrouped in this fashion, the differences that emerged between them were intriguing. Perhaps most telling was the role of primary employment setting. Respondents in private business/consulting were less likely than those in other settings to criticize ethically the evaluator's behavior. This finding underscores the importance of structural or contextual variables in understanding evaluators' ethical perceptions.


Evaluators in private business/consulting essentially work for themselves, a status that can be counted on to heighten one's sensitivity to the personal economic consequences of one's actions. Viewing an evaluator's behavior as ethically inappropriate usually implies that some other action should have been taken, an action that in many cases might not, at least in the short term, be in the evaluator's material self-interest. Thus, experiences in private business/consulting may predispose evaluators to be more tolerant, and understanding, of behavior that those in other settings might criticize ethically (see Note 2). The influence of role-oriented variables on ethical judgments relevant to evaluation has also been documented by Korenman et al. (1998), who found that National Science Foundation research grantees were more likely than administrators responsible for academic research integrity to perceive violations of collegiality and sharing of research products as unethical.

Viewing AEA's Guiding Principles for Evaluators as useful for thinking about the ethical issues you encounter in evaluation was positively related to believing that the evaluator's scenario behavior was unethical. It is unclear whether the perceived value of the Guiding Principles actually plays a causal role with respect to the ethical judgments participants rendered in the study. It may be that ethical issues have greater salience for some individuals than for others, and this salience causes the former group to find the Guiding Principles more useful, as well as to be more critical of evaluators' behavior with respect to ethics.

Interestingly, simply having knowledge of the Guiding Principles does not appear to be important in this regard: Those who responded that they were not familiar with the Guiding Principles (48% of the sample) were no less likely than those who were familiar with them to perceive unethical behavior in the scenarios. This finding lends support to the notion that a causal factor other than the Guiding Principles is responsible for the observed relationship between the Guiding Principles' subjective value and reactions to the scenarios.

Of the four roles examined in this study (consultant, scientist, reporter, and facilitator), respondents' ethical judgments were only related to their view of the consultant role: the more useful this role was perceived to be, the less likely the respondent was to view the evaluator's actions as ethically problematic. The nature of the consultation process may be key to interpreting this result. A consultant is typically defined as an expert who gives professional advice or services (Webster's Ninth New Collegiate Dictionary 1988). Inherent in this view is the notion that within their domain of expertise, the judgments rendered by consultants are worthy of respect and trust. Indeed, it is precisely for these judgments that consultants are hired by clients in the first place. Thus, respondents who highly value the consultant role for evaluation may be signaling, in part, a willingness to give evaluators the benefit of the doubt when scrutinizing their behavior in specific situations.


Such an orientation could lead to fewer accusations of unethical behavior than would otherwise be the case.

In this context, the failure to find a relationship between the perceived usefulness of the scientist role and views of unethical behavior deserves comment. This role was included in the survey to test, in an admittedly limited way, Honea's (1992) conclusion that internalization of the objective scientist role decreases the salience of ethical issues for evaluators. Our results do not support Honea's conclusion. It is possible, of course, that our operationalization of the scientist role was too limited to do justice to her conceptualization of it. It should also be noted that our study focused on respondents' perceptions of other evaluators' experiences (as represented by the scenarios) rather than their own. This difference in method between our study and hers, at least in part, may be responsible for the different results obtained. This factor may also help explain the lack of a relationship we found between involvement in team-oriented evaluations (a dimension deemed important by Honea) and perceptions of unethical behavior in the scenarios.

Among those currently employed in private business/consulting, respondents with more experience in external evaluation were less likely to see the scenarios as ethically problematic. This finding may reflect the fact that by definition, all evaluations conducted by those in private business/consulting are external in nature. Thus, within the private business/consulting subgroup, the percentage of external evaluations conducted probably serves as a rough proxy for how long a respondent has been in private business/consulting. Hence, this finding might be viewed as further evidence of the relationship between the private business/consulting role and ethical judgments.

Finally, we found that among those not employed in private business/consulting, evaluation experience was negatively associated with believing that the evaluator's actions in the scenarios were ethically problematic. In this regard, it should be noted that training programs in most disciplines include curricular components that address ethical issues in an explicit fashion. With this exposure relatively fresh in their minds, less experienced evaluators may be prone to set the bar higher for ethical decision making than more seasoned practitioners, who have had more encounters than the former group with the myriad factors that can constrain these decisions. As one respondent with 15 years of experience wrote when defending the evaluator in the Passive Consent scenario, "[The evaluator's actions are] not ethically problematic, just realistic. The benefit of the evaluation results justifies trying to get as good a sample as possible."

At first glance, this finding for experience might be viewed as contradicting the results of the Morris and Cohn (1993) study, in which experienced evaluators were more likely than the less experienced to report that they had encountered ethical conflicts in their work.


Once again, however, the different focuses of the two studies are key. The Morris and Cohn investigation targeted the respondents' experiences, whereas the current study examined the respondents' reactions to others' experiences. As one's evaluation experience grows over time, the number of opportunities one has to encounter an ethical problem grows as well, which is what Morris and Cohn found. In addition, Morris and Cohn did not ask respondents to pass ethical judgment on their own behavior, whereas in the research reported here, we did request that such judgments be rendered concerning the actions of the hypothetical evaluators.

    CONCLUSION

To those who would like to see evaluators speak with one voice on ethical matters, this study delivers two messages. The bad news is that one voice does not exist, at least on the scenarios we examined involving stakeholder involvement, reporting of results, and informed consent. This finding is especially noteworthy with respect to stakeholder involvement, where the results indicate that a significant percentage of evaluators do not even see this issue as an ethical one. Thus, it does not appear that the normative structure of evaluation currently encompasses stakeholder involvement as a moral, as opposed to a technical, concern (see Schaffner 1992).

The study's good news, based on respondents' explanations of their views, is that there is reason to believe that the more information evaluators have about a specific challenging situation, the more likely they are to agree on what the evaluator is ethically obligated to do. Although it may be true that the devil is in the details, it is also the case that the most meaningful common ground is likely to be found there rather than in more abstract discussions. As House (1995, 27) has put it, "Ethical problems are manifested only in particular concrete cases, and endorsement of general principles sometimes seems platitudinous or irrelevant."

Of course, even with a surfeit of details, significant disagreement is likely to remain in many instances. Applying general ethical principles and standards to a particular circumstance can leave a great deal of room for value-based interpretation and differences in prioritization, as is evident from arguments over whether scientific objectivity is enhanced or hindered by extensive stakeholder involvement, to cite just one example. Indeed, the ongoing nature of this argument is one reason why stakeholder involvement is less a part of the ethical canon of evaluation than either informed consent or impartial reporting.


Against this background, increased dialogue among evaluators who bring different orientations to ethical problems is likely to be valuable to the field, in terms of both theory and practice (see Mabry 1999; Morris 1999a). Our results suggest that evaluators in private business/consulting and those in other settings would particularly benefit from talking more with each other, as would new and experienced evaluators. As the number of empirical studies addressing practitioner and trainee perspectives on research ethics increases, our ability to identify the most fruitful arenas for discussion increases as well. For example, personal encounters with various types of unethical behavior (either as participant or observer) have been investigated (e.g., Hamilton 1992; Kalichman and Friedman 1992; Morris and Cohn 1993; Swazey, Anderson, and Lewis 1993), as have individuals' predictions of how they would respond to instances of misconduct (e.g., Wenger et al. 1999).

To the extent that evaluators assume that other evaluators share their opinions about ethical issues, a pattern that was strikingly evident in the present study, a fuller appreciation of the different ethical lenses that can be applied to evaluation awaits those who participate in conversations framed by the results emerging from these related lines of research.

APPENDIX

THE REVISED REPORT SCENARIO

An evaluator has recently shared the draft of a final report with the director of the program being evaluated. After reviewing the draft, the program director asks the evaluator to tone down one section of the report that describes some operational problems within the program. The director believes that the findings in this section, although accurate, are presented in a way that could cause readers to overlook the overall success of the program's implementation. (The evaluation's sponsor and primary client is a philanthropic foundation that is the major source of funding for the program.) The evaluator reexamines the draft and concludes that the findings on operational problems have been reported in a fair and balanced fashion. Nevertheless, the evaluator wishes to be responsive to the director's concerns. The evaluator revises the section in question, mainly by deleting a number of harshly worded quotes concerning operational difficulties that were voiced by interview and survey respondents.

    THE ADVISORY GROUP SCENARIO

An evaluator is conducting an impact study of an urban crime prevention program.


Key stakeholders include the following: the funding source (a local foundation), the community agency responsible for overseeing implementation of the program, the police department, the mayor's office, local merchants, neighborhood block watch groups, and several organizations specializing in youth services. The evaluator assembles an advisory group for the evaluation that includes representatives from all of these constituencies. As the project unfolds, the evaluator mainly uses the advisory group meetings to keep stakeholders informed of the evaluation's progress. The evaluator places very little emphasis on actively involving stakeholders in the process of conceptualizing the evaluation and how it should be carried out, or in interpreting the data. The evaluator's experience in doing research on crime prevention interventions significantly exceeds that of any of the stakeholders.

    THE PASSIVE CONSENT SCENARIO

In evaluating a school-based youth program, the evaluator has the choice of using either an active-consent or a passive-consent procedure to obtain parental permission. Active consent requires parents to sign and return a form if they wish to give permission for their child to participate in a study. In contrast, passive consent only requires them to sign and return a form if they do not want their child to participate. In general, it is much easier to achieve high participation rates with passive-consent approaches than with active-consent ones. In this particular situation, the evaluator is convinced that passive consent will generate a significantly higher participation rate than active consent and will be much less costly to implement as well. The evaluator believes that in part, this higher rate will result from the fact that some parents who oppose the study will simply forget to return the passive-consent form, whereas others who would have opposed the study will fail to read the form in the first place. The evaluator decides to use the passive consent procedure.

    NOTES

1. Between 22% and 24% of the sample (depending on the scenario) chose not to offer predictions, sometimes writing that they didn't have a clue as to what the correct percentages might be.

2. This does not necessarily mean that those in private business/consulting do not recognize ethical challenges when they occur. For example, Morris and Cohn (1993) found that respondents in private business/consulting were no less likely than other respondents to report that they had encountered ethical problems in their evaluation work.


REFERENCES

American Evaluation Association, Task Force on Guiding Principles for Evaluators. 1995. Guiding principles for evaluators. In Guiding principles for evaluators, edited by W. R. Shadish, D. L. Newman, M. A. Scheirer, and C. Wye, 19-26. New Directions for Program Evaluation, no. 66. San Francisco: Jossey-Bass.

Anderson, M. S., and K. S. Louis. 1994. The graduate student experience and subscription to the norms of science. Research in Higher Education 35:273-99.

Bennington, T. 1999. Ethical implications of computer-mediated instruction. In Information technologies in evaluation: Social, moral, epistemological, and practical implications, edited by G. Gay and T. L. Bennington, 87-103. New Directions for Evaluation, no. 84. San Francisco: Jossey-Bass.

Fetterman, D. 1997. Empowerment evaluation: A response to Patton and Scriven. Evaluation Practice 18:253-66.

Fetterman, D. M., S. J. Kaftarian, and A. Wandersman, eds. 1996. Empowerment evaluation: Knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage.

Fitzpatrick, J. L., and M. Morris, eds. 1999. Current and emerging ethical challenges in evaluation. New Directions for Evaluation, no. 82. San Francisco: Jossey-Bass.

Hamilton, D. 1992. In the trenches, doubts about scientific integrity. Science 255:1636.

Honea, G. E. 1992. Ethics and public sector evaluators: Nine case studies. Unpublished doctoral dissertation, University of Virginia.

House, E. R. 1995. Principled evaluation: A critique of the AEA Guiding Principles. In Guiding principles for evaluators, edited by W. R. Shadish, D. L. Newman, M. A. Scheirer, and C. Wye, 27-34. New Directions for Program Evaluation, no. 66. San Francisco: Jossey-Bass.

House, E. R., and K. R. Howe. 1999. Values in evaluation and social research. Thousand Oaks, CA: Sage.

Kalichman, M. W., and P. J. Friedman. 1992. A pilot study of biomedical trainees' perceptions concerning research ethics. Academic Medicine 67:769-75.

Korenman, S. G., R. Berk, N. S. Wenger, and V. Lew. 1998. Evaluation of the research norms of scientists and administrators responsible for academic research integrity. JAMA 279:41-7.

Mabry, L. 1999. Circumstantial ethics. American Journal of Evaluation 20:199-212.

Merton, R. K. 1973. The normative structure of science. In The sociology of science: Theoretical and empirical investigations, edited by N. W. Storer, 267-78. Chicago: University of Chicago Press.

Morris, M. 1999a. Ethical challenges. American Journal of Evaluation 20:113-22.

Morris, M. 1999b. Research on evaluation ethics: What have we learned and why is it important? In Current and emerging ethical challenges in evaluation, edited by J. L. Fitzpatrick and M. Morris, 15-24. New Directions for Evaluation, no. 82. San Francisco: Jossey-Bass.

Morris, M., and R. Cohn. 1993. Program evaluators and ethical challenges: A national survey. Evaluation Review 17:621-42.

Newman, D. L., and R. D. Brown. 1996. Applied ethics for program evaluation. Thousand Oaks, CA: Sage.

Patton, M. Q. 1997. Toward distinguishing empowerment evaluation and placing it in a larger context. Evaluation Practice 18:147-63.

Rossi, P. H. 1995. Doing good and getting it right. In Guiding principles for evaluators, edited by W. R. Shadish, D. L. Newman, M. A. Scheirer, and C. Wye, 55-9. San Francisco: Jossey-Bass.

Rossi, P. H., and R. A. Berk. 1985. Varieties of normative consensus. American Sociological Review 50:333-47.


Rossi, P. H., and S. L. Nock. 1982. Measuring social judgments: The factorial survey approach. Newbury Park, CA: Sage.

Schaffner, K. F. 1992. Ethics and the nature of empirical science. In Research fraud in the behavioral and biomedical sciences, edited by D. J. Miller and M. Hersen, 17-33. New York: John Wiley.

Scriven, M. 1997. Empowerment evaluation examined. Evaluation Practice 18:165-75.

Shadish, W. R., D. L. Newman, M. A. Scheirer, and C. Wye, eds. 1995. Guiding principles for evaluators. New Directions for Program Evaluation, no. 66. San Francisco: Jossey-Bass.

Swazey, J. P., M. S. Anderson, and K. S. Lewis. 1993. Ethical problems in academic research. American Scientist 81:542-53.

Wenger, N. S., S. G. Korenman, R. Berk, and S. Berry. 1997. The ethics of scientific research: An analysis of focus groups of scientists and institutional representatives. Journal of Investigative Medicine 45:371-80.

Wenger, N. S., S. G. Korenman, R. Berk, and H. Liu. 1999. Reporting unethical research behavior. Evaluation Review 23:553-70.

Michael Morris, Ph.D., is professor of psychology at the University of New Haven, where he also serves as director of graduate field training in community psychology. His research interests focus on ethical issues in program evaluation.

Lynette R. Jacobs received her M.A. in community psychology from the University of New Haven. She has conducted research in the area of collective identity and social activism and on issues of access to reproductive health services for women. Her other interests include community-supported agriculture, feminist identity, and social support.
