This article was downloaded by: [RMIT University] on 22 September 2013, at 00:59. Publisher: Routledge. Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.
Applied Environmental Education & Communication. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/ueec20
Needs Assessment for Participatory Evaluation of Environmental Education Programs. Mallory McDuff, Warren Wilson College, Asheville, North Carolina, USA. Published online: 29 Oct 2010.
To cite this article: Mallory McDuff (2002) Needs Assessment for Participatory Evaluation of Environmental Education Programs, Applied Environmental Education & Communication, 1:1, 25-36, DOI: 10.1080/15330150213990
To link to this article: http://dx.doi.org/10.1080/15330150213990
Applied Environmental Education and Communication 1:25–36 (2002). Copyright © 2002 Taylor & Francis. 1533-015X/02 $12.00 + .00
Address correspondence to: Mallory McDuff, P.O. Box 9000, Warren Wilson College, Asheville, NC 28815, USA; Tel. 828-771-3787; Fax 828-771-7081; [email protected]
Needs Assessment for Participatory Evaluation
of Environmental Education Programs
Mallory McDuff
Warren Wilson College, Asheville, North Carolina, USA
This study conducted a needs assessment prior to designing and implementing participatory evaluation at the Wildlife Clubs of Kenya. As the largest grassroots environmental education organization for youth in Africa, the Wildlife Clubs of Kenya (WCK) has involved more than 1 million youth and influenced a generation of Kenyan conservationists. This needs assessment examined past evaluation practices at WCK, perceptions of staff toward evaluation, and recommended strategies for building the capacity of WCK to conduct monitoring and evaluation. The needs assessment combined data from document review, participant observation, interviews, participatory rural appraisal, and institutional structures into a relevant training and participatory evaluation design that involved 120 stakeholders of WCK.
Environmental education (EE) programs operate under the concurrent constraints of limited financial resources and criticisms that EE is ineffective. An important linkage exists between these two constraints. In the face of attacks on EE (e.g., Sanera, 1995; Sanera & Shaw, 1999; Satchell, 1996), environmental educators must present concrete evidence of their effectiveness and impacts. Yet with limited resources, many EE programs perceive monitoring and evaluation as impractical given the costs of hiring external consultants.
The need for evaluation of EE programs is a call that has been repeated for the past three decades (Bennett, 1974; Bennett, 1988–1989; Doran, 1977; Jacobson & McDuff, 1997; Linke, 1981; O'Hearn, 1982; Thomas, 1989–1990). Despite this acute need, the majority of EE programs do not integrate ongoing evaluation into educational programming (Jacobson & McDuff, 1997; Norris & Jacobson, 1998). An overview of wildlife education programs found that few programs included any type of evaluation (Pomerantz & Blanchard, 1992). Likewise, an analysis of 56 tropical conservation education programs found that less than half used any type of evaluation (Norris & Jacobson, 1998).
The literature does reflect an increased focus on evaluations of EE programs since the 1970s (e.g., Jacobson, 1991; Monroe, Washburn, Goodale, & Wright, 1997; Rovira, 2000; Wood, 2001). Yet the majority of evaluations of EE programs are traditional evaluations designed and conducted by outside consultants or academic researchers rather than local stakeholders (e.g., Fleming, 1983; Hollweg, 1997; Middlestadt et al., 2001). In these evaluations, local stakeholders are involved as data sources through interviews and surveys, for example, but not as data collectors or evaluation designers.
While invaluable for the purposes of accountability, these traditional evaluations miss the opportunity for evaluation to guide sustainable program improvement. Typically, data collection ends when the researcher leaves the project site. Criticisms of traditional evaluation include (1) a narrow focus on evaluation questions derived by administrators or donors; (2) collection of data unresponsive to the needs of program participants; and (3) dissemination of results that are not used (Cousins & Earl, 1992; Eash, 1985; Weiss, 1983).
Effective evaluation should be a continuous process, aimed at measurement of impacts and program improvement (Jacobson, 1987). Yet the lack of ongoing evaluation in EE stems from multiple factors including: lack of staff time; perception of evaluation as a complex process requiring outside expertise; lack of knowledge and skills by staff; and lack of funding for evaluation (McDuff, 1999; Nowak, 1984; Thomas, 1989–1990; Wood & Wood, 1985).
Without evaluation, ineffective EE programs will continue with serious consequences to the environments these programs seek to protect. One evaluation of an EE program in Senegal and Gambia found that local residents were more confused regarding the conservation objectives of the government after implementation of the educational activities (IIED, 1994). Evaluation of conservation education programs in Malaysia and Brazil, however, resulted in concrete evidence of positive changes in attitudes, knowledge, and behaviors of students engaged in EE activities (Jacobson, 1987; Padua & Jacobson, 1993).
In contrast to traditional evaluation, participatory evaluation involves local stakeholders in problem identification, evaluation design, data collection, analysis, and use of results (Feuerstein, 1986; Jackson & Kassam, 1998). Stakeholders include those who affect or are affected by the policies, decisions, and actions of a program (Grimble & Chan, 1995). Table 1 presents differences between participatory and traditional evaluation. Participatory evaluation has experienced growth in fields such as sustainable development, health, and agriculture, but these strategies have rarely been applied to EE programs (McDuff, 1999).
To build the capacity of EE stakeholders to conduct evaluation, the EE community must invest resources in training. Weeks-Vagliani (1993) concludes that training staff in evaluation techniques would increase the probability of success in EE programs. Relevant training, however, must address prior attitudes, knowledge, and skills in evaluation, as well as the organizational context for evaluation.
Needs assessment provides one toolbox of methods for addressing these contextual variables prior to conducting a participatory evaluation. Needs are defined as a discrepancy or gap
TABLE 1
Differences between conventional and participatory evaluation

Issue                        Conventional                       Participatory
Epistemology                 objectivist world view;            constructivist world view;
                             knowledge as empirical             knowledge as dialectical, constructed
Involvement of stakeholders  passive                            interactive
Choice of problem            based on client's/donor's needs    based on immediate problem situation
Methodology                  quantitative or qualitative        qualitative and/or quantitative
Research design              fixed                              flexible
Assessment                   fewer measures/standardized        multiple measures/site specific
Role of evaluator            technician                         facilitator
Locus of control             external (evaluator)               internal (stakeholders)
Interests served             administrators/donors              all stakeholder groups
between a desired state of affairs (a goal) and the present state of affairs (Gagne, Briggs, & Wager, 1992). In this case, capacity building in evaluation represents the desired state of affairs. With respect to evaluation, a needs assessment provides methods for examining prior organizational experience with evaluation, existing institutional practices and vocabulary, attitudes of stakeholders toward evaluation, and opportunities and constraints for conducting evaluation.
Needs assessment techniques have been applied to the design and development of EE programs (Andrews, Camozzi, & Puntenney, 1994; Jacobson, 1995). However, needs assessment has not been used for capacity building in evaluation of EE programs. Needs assessments can utilize a variety of both qualitative and quantitative data collection tools (Table 2). The objectives of the needs assessment and available resources will determine the tools used.
For this study, a needs assessment for evaluation was conducted at the Wildlife Clubs of Kenya, which is the largest grassroots environmental education program for youth in Africa. The Wildlife Clubs of Kenya (WCK) began in 1968 with a meeting of Kenyan students from twelve secondary schools. WCK was the first EE program of its kind in Africa and served as a model for youth conservation organizations in Africa, India, and Asia (McDuff, 2000). Since its inception, WCK has involved more than one million youth in Kenya and influenced a generation of Kenyan conservationists (McDuff & Jacobson, 2000).
With a national headquarters in Nairobi, the organization has clubs in nine regions throughout Kenya (Figure 1). Elected groups of teachers called Action Groups coordinate activities in each region, with the help of four regional offices. In individual schools, the wildlife clubs are led by volunteer teachers and student members. Wildlife clubs in the schools conduct their own activities, such as trips to the national parks, community clean-ups, and research projects. The national and regional offices organize activities such as environmental rallies and art competitions for the clubs. WCK projects are supported by donors, the government, membership fees, and an endowment fund.
Despite its thirty-year history, WCK had not conducted a systematic evaluation of its impacts or effectiveness. The administration expressed interest in building the capacity of its staff, teachers, and other stakeholders to implement participatory evaluation. A needs assessment was chosen as a first step to facilitate the design of a relevant strategy for integrating evaluation into WCK programming. The objectives of the needs assessment were to examine: (1) past evaluation practices at WCK; (2) perceptions of staff toward evaluation; and (3) recommended strategies for building the capacity of the organization in
TABLE 2
Examples of available tools for data collection in needs assessment (adapted from Jacobson, 1995)

A Needs Assessment Toolbox
Document review
Survey
Interview
Participant observation
Content analysis
Participatory rural appraisal
Public meeting
Focus group
Workshops
FIGURE 1. The Wildlife Clubs of Kenya are divided into 9 regions served by 1 headquarters and 4 regional offices.
monitoring and evaluation. The results of the needs assessment then shaped the development of workshops in participatory evaluation and the design of an evaluation system at WCK (see McDuff, 2001; McDuff & Jacobson, 2000, for details of trainings and evaluation results).
METHODS
Four data collection methods were used during a two-month period at WCK: (1) document review of past project reports, minutes of staff meetings, and annual reports; (2) participant observation; (3) semi-structured interviews with all ten WCK education staff from both headquarters and regional offices; and (4) participatory rural appraisal (Slocum, Wichhart, Rocheleau, & Thomas-Slayter, 1995) tools including drawing, word association, and ranking. Lastly, the needs assessment used group brainstorming sessions during seven evaluation workshops with WCK stakeholders in a five-month period.
Document review of WCK reports and minutes of staff meetings revealed information about past evaluation practices. The objective of participant observation was to assess current methods of monitoring and evaluating program effectiveness at WCK. Participant observation included attending all staff meetings at headquarters, participating in all educational programs, visiting regional offices, and attending WCK workshops.
The semi-structured interviews explored staff perceptions of evaluation, perceived needs regarding program evaluation, and prior involvement in evaluation. The interview questions were field tested with two education interns at WCK. Each intern responded to the interview questions and then participated in a discussion about the data collection instrument, as suggested by Fowler (1993). Questions were revised based on the responses during the field test.
Participatory rural appraisal (PRA) tools of drawing, word association, and ranking were used with the ten education staff. Methods from PRA have been used extensively in natural resource management and community development to document perceptions and priorities at the grassroots level (Slocum et al., 1995). In the drawing exercises, staff were given paper and markers and asked to draw the first image that came to their mind when they thought of the word “evaluation.” Similarly, for the word associations they recorded words associated with the term “evaluation.”
The objective of the ranking exercise was to document the amount of time staff spent on monitoring and evaluation, as compared with other responsibilities. Ranking has been used as an effective method of quantifying variables in relation to one another (Davis-Case, 1990; Pretty et al., 1995; Slocum et al., 1995). During the ranking exercises, individual staff members received nine index cards representing the following tasks related to EE programs: writing proposals; locating funding; designing the program; pilot testing the program; implementing the program; coordinating logistics; documenting data to monitor program effectiveness; evaluating the program; and writing the final report. I asked staff to rank the cards according to the amount of time they spent on each task in their job.
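Mean ranks of this kind (reported later in Table 3) are straightforward to compute once each staff member's card order is recorded. The sketch below uses made-up rankings for three tasks from three hypothetical respondents, not the study's raw data.

```python
from statistics import mean

# Illustrative rankings (cards ranked 1-9; higher = more time spent).
# The study collected rankings from all ten education staff; these
# numbers are invented for demonstration only.
rankings = {
    "Coordinating logistics": [9, 8, 9],
    "Implementing the program": [8, 9, 7],
    "Evaluating impacts of the program": [1, 2, 1],
}

# Mean rank per task, sorted with the most time-consuming tasks first
mean_ranks = sorted(
    ((task, mean(scores)) for task, scores in rankings.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for task, score in mean_ranks:
    print(f"{task}: {score:.1f}")
```

Sorting by mean rank makes the relative neglect of evaluation tasks immediately visible, mirroring how the study's Table 3 is ordered.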
The objective of the group brainstorming exercises was for WCK stakeholders in seven workshops to assess reasons for evaluation, current methods of evaluation at WCK, and proposed changes in evaluation. The 120 participants chosen to represent the geographical diversity within WCK regions included all education staff (n=10), Action Group members (n=17), teachers who lead wildlife clubs at the school level (n=82), and community members involved in WCK projects (n=11).
RESULTS
Document review and participant observation
Document review and participant observation reflected that program implementation and
funding received much higher priority than monitoring and evaluation. Given limited financial resources, the focus of staff meetings and project reports was implementation of EE activities. Staff submitted annual project reports to donors, and all staff and Action Groups reported to headquarters at biannual meetings.
Information reported in project reports and meetings included the number of activities conducted and the number of participants. Both document review and participant observation revealed that the primary formal project “evaluations” consisted of brief site visits by donors to projects for the purposes of accountability.
Semi-structured interviews
The interviews with the education staff (N=10) revealed the following methods or indicators used to evaluate program effectiveness: number of participants (n=7); feedback from students (n=2); and number of registered WCK members (n=1). As one staff member noted, “We’re just doing activities without really measuring if our message is communicated. Instead, we judge our success by the number of students participating in an activity.”
None of the staff members had been required to conduct a systematic evaluation of WCK programs. Only two of the ten education staff reported that an outside consultant had evaluated their EE programs. “My project was evaluated by international consultants funding the project,” said one education staff member. “They asked us questions and collected data. They didn’t send me a report, but the head of the consulting team sent me a letter saying the evaluation was finished.”
Constraints to evaluating WCK programs included lack of knowledge and skills in evaluation (n=5), lack of financial resources (n=5), lack of time (n=2), and lack of human resources (n=3). Strategies noted by staff for overcoming these constraints were training in evaluation (n=7), integrating evaluation into programming (n=2), changing negative attitudes toward evaluation (n=2), and providing resources for evaluation (n=1). In the design of evaluation training, EE staff wanted to address the following content: hands-on practice in evaluation (n=6), effective evaluation methods (n=2), measurement of program impacts (n=1), and changing negative attitudes toward evaluation (n=1). (Note that staff could provide more than one response to these questions.)
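Because each respondent could name several constraints, tallies like those above can exceed the sample size. Coded interview responses of this kind aggregate naturally with a frequency count; the codes and responses below are illustrative, not the study's data.

```python
from collections import Counter

# Illustrative coded interview responses: each inner list holds the
# constraints one hypothetical respondent named, so totals may exceed N.
responses = [
    ["knowledge and skills", "funding"],
    ["knowledge and skills", "time"],
    ["funding", "human resources"],
]

# Flatten the per-respondent codes and count each constraint
constraint_counts = Counter(
    code for respondent in responses for code in respondent
)

for code, count in constraint_counts.most_common():
    print(f"{code} (n={count})")
```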
Staff viewed evaluation as a responsibility typically delegated to external consultants, primarily donors. Some staff described evaluation as a process of “investigation,” rather than program improvement. “The donors come to evaluate, and it’s a process of checking, inspection, and investigation,” said James Maina. “They leave, and we await the results like an examination. When they send us the report after a few months, we flip through it quickly to see what they thought about us.”
Other staff described the suspicion that evaluations evoke, especially if driven by donor criteria. “People don’t even like those words . . . monitoring and evaluation because they think, Am I under suspicion?” Jane Nguti said. “Is it for punishment? The first impression of evaluation is always negative.” One WCK staff member called monitoring and evaluation “the wicked stick of the donors.” “We are scared of monitoring and evaluation because we see it as fault-finding,” he said. “Evaluation becomes a tool for punishing us. We think we don’t have the skills to ask the right questions, so we don’t ask the questions at all.”
The interviews also revealed that staff viewed evaluation as an activity conducted at the end of a program, rather than a continuous process. Seven of the ten staff responded that the best time to conduct evaluation was at the end of a program. The interviews showed that traditional assessment tools used in conservation and development projects, such as log frame analysis (UNDP, 1997) and detailed planning matrices, were viewed with skepticism. “I once went to a conference where they told us to place our activities for WCK in boxes and connect multiple boxes with arrows,” said one staff member. “Here at WCK, we need something simple to help us with monitoring, not these complex tools.”
DRAWINGS, RANKINGS, AND WORD ASSOCIATIONS
The drawings by staff further revealed prior conceptions of evaluation as an activity conducted by outsiders or supervisors (Figure 2). The drawings showed supervisors admonishing staff, dollar signs for accountability, question marks, and superiors investigating inferiors.
The ranking exercise was completed by the ten education staff. A higher ranking revealed more time spent on that specific task. The tasks with the highest mean rank were coordinating logistics (7.8) and implementing the program (7.2), as seen in Table 3. Pilot testing the program (0.9) and evaluating impacts of the program (0.6) received the lowest rankings.
When asked to record words associated with the word evaluation, the 10 education staff noted the following terms: test, examination, review, pass or fail, questions and questionnaires, standards, achievement of goals, inspecting, fault-finding, judging, checking, checks and balances, intimidation, no smiling faces, and finished product.
BRAINSTORMING SESSIONS WITH WCK STAKEHOLDERS
The 120 stakeholders in the workshops responded in small group discussions to the questions of why conduct evaluation at WCK and what happens when we fail to evaluate. Reasons listed for conducting evaluation included to: assess achievement of goals and objectives; discover weaknesses and improve them; determine success and failure of projects; guide planning and decision making; maintain standards; pro-
TABLE 3
Results of ranking exercise: time spent on tasks by WCK staff (N=10). Ranking = 1–9; a higher rank indicates more time spent on the task.

Task in the Project Cycle                            Mean Rank by Staff (N=10)
Coordinating logistics of program                    7.8
Implementing program                                 7.2
Designing the program                                5.5
Writing the final report                             4.5
Documenting data to monitor program effectiveness    4.0
Writing proposals                                    3.5
Locating funding                                     2.1
Pilot testing the program                            0.9
Evaluating impacts of the program                    0.6
FIGURE 2. Prior conceptions of evaluation. Drawings by WCK staff.
vide documentation for donors and stakeholders; assess use of resources and funding; identify feedback; and expand effective programs.
Without evaluation, participants noted that WCK cannot measure achievement of objectives or identify failures and correct them. Other consequences were that lack of evaluation could lead to misuse of funds, loss of confidence by donors, incorrect assumptions of program success, difficulty planning for the future, lack of motivation, lack of feedback from stakeholders, and lack of justification for continuing programs.
Participants noted the following indicators or methods currently used to evaluate WCK programs: number of participants in activities; number of activities completed; number of registered WCK members; feedback from stakeholders; student interest; quality of presentations during student competitions and rallies; discussions during meetings; reports to WCK headquarters; visits to projects; and questionnaires.
Proposed changes to evaluation of WCK included: standardize methods for evaluation within WCK; provide incentives for evaluation; involve teachers and students; follow up on club projects; provide feedback and visits from headquarters; assess changes in attitudes and knowledge; hold teacher meetings; conduct continuous evaluation; form an evaluation committee to assess clubs; monitor clubs through correspondence; and assess impact of WCK activities on the local environment.
USING NEEDS ASSESSMENT RESULTS TO DESIGN EVALUATION TRAINING
The needs assessment reflected a lack of financial resources for evaluation at WCK. Thus, a training-of-trainers approach (Eitington, 1989) was used in order to reach the largest number of stakeholders. This strategy involves training a core group of trainers who then help to facilitate additional trainings. The ten education staff participated in the initial one-week training on participatory evaluation. These staff assisted in facilitating a one-week training for 17 teachers who are members of the WCK Action Groups that coordinate regional activities. The Action Group members then helped to organize five one-day workshops for teachers who lead wildlife clubs at the school level, as well as community members involved in WCK activities. As noted, a total of 120 WCK stakeholders received training in participatory evaluation through seven sessions (Figure 3).
FIGURE 3. Training of trainers in participatory evaluation.
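The cascade reaches the full stakeholder group in stages, and summing the group sizes reported in this article confirms the total of 120. A trivial tally (group labels are paraphrased from the text):

```python
# Group sizes for the training-of-trainers cascade, as reported in the text
cascade = {
    "education staff (initial one-week training)": 10,
    "Action Group teachers (second one-week training)": 17,
    "club-leading teachers (one-day workshops)": 82,
    "community members (one-day workshops)": 11,
}

total_trained = sum(cascade.values())
print(f"{total_trained} stakeholders trained across seven sessions")
```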
ADDRESSING PRIOR PERCEPTIONS OF EVALUATION
Given the prior perceptions of evaluation, the trainings were designed to demystify evaluation and show the role of evaluation in everyday life. Workshop participants performed role-plays of cooking and farming to illustrate evaluation in these real-life contexts. Using a framework called the evaluation matrix, participants identified evaluation questions, sources of evidence, and indicators for activities such as cooking (Table 4).
They then applied the evaluation matrix in fieldwork for the one-week workshops or case studies during the one-day trainings to the evalu-
TABLE 4
The evaluation matrix in everyday activities: example from cooking

PLANNING
  Evaluation question: Do I have enough ingredients for the meal?
  Sources of evidence: Check the kitchen
  Indicators/signs: All ingredients available

PROCESS
  Evaluation question: Does the meal taste good while cooking?
  Sources of evidence: Taste the meal; observe the cooking process
  Indicators/signs: Meal tastes good to the cook

PRODUCT
  Evaluation question: Did my guests enjoy the meal?
  Sources of evidence: Observe the guests; ask them questions
  Indicators/signs: Guests eat the meal; guests make positive comments
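The evaluation matrix pairs each program stage with a question, its sources of evidence, and its indicators; that structure can be modeled as a small record type. The class and field names below are our own, not terminology from WCK or the workshops.

```python
from dataclasses import dataclass, field

@dataclass
class MatrixRow:
    """One stage of an evaluation matrix: planning, process, or product."""
    stage: str
    question: str
    sources_of_evidence: list = field(default_factory=list)
    indicators: list = field(default_factory=list)

# The cooking example from Table 4, restated as matrix rows
cooking_matrix = [
    MatrixRow("planning", "Do I have enough ingredients for the meal?",
              ["Check the kitchen"], ["All ingredients available"]),
    MatrixRow("process", "Does the meal taste good while cooking?",
              ["Taste the meal", "Observe the cooking process"],
              ["Meal tastes good to the cook"]),
    MatrixRow("product", "Did my guests enjoy the meal?",
              ["Observe the guests", "Ask them questions"],
              ["Guests eat the meal", "Guests make positive comments"]),
]

for row in cooking_matrix:
    print(f"{row.stage.upper()}: {row.question}")
```

Keeping one row per stage makes it easy to build the same structure for any WCK program, as the nature trail example in Table 5 does.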
TABLE 5
Sample evaluation matrix for the WCK Nature Trail program

PLANNING
  Evaluation question: Are the resources adequate?
  Sources of evidence: Observation
  Indicators/signs: Support materials should be available and appropriate for level of students

PROCESS
  Evaluation question: How effective is the implementation of the nature trail?
  Sources of evidence: Observation; interviews; mapping; surveys
  Indicators/signs: Students taking notes, asking questions, acting enthusiastic; nature trail accessible, with species diversity and well-defined trails; facilitator audible, competent, using a variety of methods of communication

PRODUCT
  Evaluation questions: Were the objectives met? What are recommendations for improvement?
  Sources of evidence: Interviews; pre/post-mapping; surveys
  Indicators/signs: All objectives met: identify at least 3 habitats on the nature trail; identify at least 4 species of plants and/or animals; describe at least 4 plant adaptations; apply critical thinking skills to conservation in Kenya
ation of a WCK program. Working in teams, participants identified indicators and sources of evidence to evaluate specific WCK projects, such as a nature trail for students at the Nairobi WCK office (Table 5). This hands-on fieldwork addressed the need expressed during the interviews for practical experience in evaluation and helped to dispel the conception of evaluation led only by consultants or donors. Another perception revealed during the needs assessment was evaluation as an activity only conducted at the end of a program. The evaluation matrix designed specifically for the WCK trainings integrated evaluation in the planning, process, and product stages, as described by Jacobson (1991).
INTEGRATING EVALUATION INTO WCK PROGRAMMING
Lack of time, funds, and human resources represented potential constraints to participatory evaluation at WCK. The interviews also showed that staff would not accept a complex evaluation plan, but needed simple tools to integrate evaluation into WCK programs. In addition, constraints such as lack of time reflected the importance of providing concrete incentives for monitoring and evaluation at WCK.

These findings pointed to the need to build on existing institutional structures at WCK to promote the adoption of participatory evaluation. Given this need, workshop participants developed a monitoring and evaluation system called the WCK Incentive Program (McDuff, 2001). Using the evaluation matrix, participants identified indicators and sources of evidence for evaluating WCK at three levels: club, Action Group, and regional office. Indicators identified in the evaluation matrix would serve as criteria for rewarding outstanding performance in conservation within WCK through a competition. More importantly, the indicators would serve as baseline criteria for monitoring WCK activities (Table 6).
Action Groups, clubs, and regional offices would collect data on the indicators through the identified sources of evidence for the competition. The winners in the competition would receive prizes such as trips to the national parks and vouchers for wildlife books. Since 1968, club competitions have been an integral aspect of WCK programming, as shown by the document review. The competition met the multiple needs of providing incentives for evaluation, integrating evaluation into ongoing programming, and standardizing an evaluation system.
TABLE 6
Sample evaluation matrix for WCK offices for the WCK Incentive Program

REGIONAL OFFICES

PLANNING
  Evaluation question: How effective is the planning of your activities?
  Sources of evidence: Minutes; reports; record-keeping
  Indicators/signs: Stakeholders involved in planning; documented reports of activities/meetings

PROCESS
  Evaluation question: How effective is the implementation of your activities?
  Sources of evidence: Observation; interviews; photographs; membership records; reports; videos
  Indicators/signs: Renewal/recruitment of members; variety of activities conducted; innovative conservation projects; # of students participating in activities; # of contacts/meetings with clubs

PRODUCT
  Evaluation questions: What are the impacts of your activities? What are your constraints and recommendations for improvement?
  Sources of evidence: Interviews; mapping; photos; letters of reference; surveys
  Indicators/signs: Action taken on local environmental issues; changes in attitudes and behavior toward conservation; support from community and stakeholders
Table 7 presents the outline of sessions for the five-day workshop. The workshop topics included the role of evaluation in everyday life, evaluation design, data collection tools, analysis, and application of evaluation to an environmental education program. The last day of the workshop addressed the design of the monitoring and evaluation system, the WCK Incentive Program.
CONCLUSION
Conducting a participatory evaluation of WCK without a needs assessment could have resulted in training strategies and an evaluation design that did not fit the organizational context at WCK. The use of multiple tools in the needs assessment allowed for triangulation of data from diverse stakeholders. Triangulation is the combining of multiple data sources and methods to strengthen data collection and analysis (Wholey, Hatry, & Newcomer, 1994). The needs assessment combined data from document review, participant observation, semi-structured interviews, participatory rural appraisal, and brainstorming exercises to validate results from each tool and strengthen the relevance of the training and evaluation design.
As critics attack the efficacy of EE programs, the EE community must collect data using strategies such as needs assessments before designing both trainings and program evaluations. With relevant training, we improve the chances of sustaining evaluation of EE programs and enhancing accountability and efficiency. Given the increasing rates of habitat degradation worldwide, ongoing evaluation remains essential for documenting the impacts of EE programs on environmental stewardship.
Our constituencies in EE include diverse audiences such as schoolchildren, environmental activists, Chamber of Commerce members, government officials, college students, and retirees. Our critics range from conservative think tanks funding Michael Sanera to John Stossel interviewing children about overblown environmental claims for ABC News (Kurtz, 2001). In the EE community, we must move from reaction to criticism in the media to action that documents the impacts of our programs.
Environmental educators have made great strides in developing guidelines to promote quality professional development, curricula, and instruction in EE (e.g., NAAEE, 2000a, 2000b, 2000c). Educational reform programs such as the State Education and Environment Roundtable have integrated the environment as a context for learning in classrooms and schools nationwide (SEER, 2001). By involving stakeholders in evaluation, we can empower supporters of EE, improve the chances of funding our programs, and build measures of accountability
TABLE 7
Workshop schedule

Day 1: What is evaluation? The evaluation matrix: evaluation questions, sources of evidence, and indicators. The evaluation process.
Day 2: What is participatory evaluation? Introduction to fieldwork: planning your evaluation. Tools for evaluation: mapping, songs & art; interviews.
Day 3: Tools for evaluation (cont.): surveys; observation and record-keeping. Data analysis and reporting. Small group work to prepare for fieldwork.
Day 4: Fieldwork: evaluation of the WCK ecology program. Analysis of data from fieldwork. Preparation of presentations.
Day 5: Presentation of evaluation results. The incentive program: generating questions, sources of evidence, and indicators for WCK. Action plans for the incentive program.
NEEDS ASSESSMENT FOR PARTICIPATORY EVALUATION OF ENVIRONMENTAL EDUCATION PROGRAMS 35
while educating our critics. To this end, needs assessment for participatory evaluation provides a logical and concrete strategic move for the EE community.
REFERENCES
Andrews, E., Camozzi, A., and Puntenney, P. (Eds.). (1994). Action models in adult environmental education. Proceedings and summary of the 1991 NAAEE nonformal section workshop. Troy, OH: NAAEE.
Bennett, D. B. (1974). Evaluating environmental education programs. In J. A. Swan and W. B. Stapp (Eds.), Environmental education (pp. 113–164). New York: Halsted Press.
Bennett, D. B. (1988–1989). Four steps to evaluating environmental education learning experiences. Journal of Environmental Education, 20(2): 14–21.
Cousins, J. B., and Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4): 397–418.
Davis-Case, D. (1990). The community's toolbox: The ideas, methods, and tools for participatory assessment, monitoring, and evaluation in community forestry. Rome: Food and Agriculture Organization of the United Nations.
Doran, R. L. (1977). "State of the art" for measurement and evaluation of environmental objectives. Journal of Environmental Education, 9(1): 50–63.
Eash, M. J. (1985). Evaluation research and program evaluation: Retrospect and prospect. Educational Evaluation and Policy Analysis, 7(3): 237–238.
Eitington, J. E. (1989). The winning trainer (2nd ed.). Houston, TX: Gulf Publishing Company.
Feuerstein, M. T. (1986). Partners in evaluation: Evaluating development and community programmes with participants. Hong Kong: Macmillan Education.
Fleming, M. L. (1983). Project WILD evaluation final report of field test. Western Regional Environmental Education Council. ERIC Document Reproduction Service No. ED 245890.
Fowler, F. J. (1993). Survey research methods (2nd ed.). Newbury Park, CA: Sage.
Gagne, R. M., Briggs, L. J., and Wager, W. W. (1992). Principles of instructional design. Orlando, FL: Harcourt Brace Jovanovich.
Grimble, R., and Chan, M. (1995). Stakeholder analysis for natural resource management in developing countries: Some practical guidelines for making management more participatory and effective. Natural Resources Forum, 19(2): 113–124.
Hollweg, K. (1997). Are we making a difference? Lessons learned from VINE program evaluations. Washington, DC: North American Association for Environmental Education.
International Institute for Environment and Development. (1994). Whose Eden? An overview of community approaches to wildlife management. London: IIED.
Jacobson, S. K. (1987). Conservation education programs: Evaluate and improve them. Environmental Conservation, 14(3): 201–205.
Jacobson, S. K. (1991). Evaluation model for developing, implementing, and assessing conservation education programs: Examples from Belize and Costa Rica. Environmental Management, 15(2): 143–150.
Jacobson, S. K. (1995). Needs assessment techniques for environmental education. International Research in Geographical and Environmental Education, 4(1): 125–133.
Jacobson, S. K., and McDuff, M. D. (1997). Success factors and evaluation in conservation education programmes. International Research in Geographical and Environmental Education, 6(3): 1–18.
Jackson, E. T., and Kassam, Y. (1998). Knowledge shared: Participatory evaluation in development cooperation. West Hartford, CT: Kumarian Press.
Kurtz, H. (2001, June 26). Parents angered over kids' interview by John Stossel. Washington Post, p. C01.
Linke, R. (1981). Linke slams non-evaluation. Australian Association for Environmental Education Newsletter, 4(March): 1.
McDuff, M. D. (1999). A model for participatory evaluation of environmental education programs: Promoting success at the Wildlife Clubs of Kenya. Unpublished doctoral dissertation, University of Florida, Gainesville.
McDuff, M. D. (2000). Thirty years of environmental education in Africa: The role of the Wildlife Clubs of Kenya. Environmental Education Research, 6(4): 383–396.
McDuff, M. D. (2001). Building the capacity of grassroots conservation organizations to conduct participatory evaluation. Environmental Management, 27(5): 715–727.
McDuff, M. D., and Jacobson, S. K. (2000). Impacts and future directions of youth conservation organizations: Wildlife clubs in Africa. Wildlife Society Bulletin, 28(2): 414–425.
Middlestadt, S., Grieser, M., Hernandez, O., Tubaishat, K., Sanchack, J., Southwell, B., and Schwartz, R. (2001). Turning minds on and water off: Water conservation education in Jordanian schools. Journal of Environmental Education, 32(2): 37–45.
Monroe, M. C., Washburn, J., Goodale, T. L., and Wright, B. A. (1997). National parks education programs making a difference: Evaluating Partners, a parks as classrooms program. Washington, DC: National Park Foundation.
North American Association for Environmental Education. (2000a). Environmental education materials: Guidelines for excellence. Washington, DC: Author.
North American Association for Environmental Education. (2000b). Excellence in environmental education: Guidelines for learning (K–12). Washington, DC: Author.
North American Association for Environmental Education. (2000c). Guidelines for the initial preparation of environmental educators. Washington, DC: Author.
Norris, K., and Jacobson, S. K. (1998). A content analysis of tropical conservation education programs. Journal of Environmental Education, 15(4): 27–31.
Nowak, P. F. (1984). Direct evaluation: A management tool for program justification, evolution, and modification. Journal of Environmental Education, 15(4): 27–31.
O'Hearn, G. T. (1982). What is the purpose of evaluation? Journal of Environmental Education, 13(4): 1–3.
Padua, S. M., and Jacobson, S. K. (1993). A comprehensive approach to environmental education in Brazil. Journal of Environmental Education, 24(4): 29–36.
Pomerantz, G. A., and Blanchard, K. A. (1992). Successful communication and education strategies for wildlife conservation. In Wildlife Management Institute (Ed.), Transactions of the 57th North American Wildlife and Natural Resources Conference. Washington, DC: Wildlife Management Institute.
Pretty, J. N., Guijt, I., Scoones, I., and Thompson, J. (1995). A trainer's guide for participatory learning and action. London: IIED.
Rovira, M. (2000). Evaluating environmental education programmes: Some issues and problems. Environmental Education Research, 6(2): 143–155.
Sanera, M. (1995, Nov. 13). Battle over environment moves to the classroom. Los Angeles Times, p. A1.
Sanera, M., and Shaw, J. (1999). Facts not fear: Teaching children about the environment. Washington, DC: Regnery.
Satchell, M. (1996, June). Dangerous waters? Why environmental education is under attack in the nation's schools. U.S. News & World Report, 10: 63–64.
Slocum, R., Wichhart, L., Rocheleau, D., and Thomas-Slayter, B. (1995). Power, process, and participation: Tools for change. London: Intermediate Technology Publications.
State Education and Environment Roundtable. (2001, September). What is EIC? Basic concepts. Environment as an integrating context for learning. Presentation to the Environmental Educators of North Carolina Annual Conference, Hendersonville, NC.
Thomas, I. G. (1989–1990). Evaluating environmental education using case studies. Journal of Environmental Education, 21(2): 3–8.
United Nations Development Program. (1997). Who are the question makers? A participatory evaluation handbook. New York: UNDP, Office of Evaluation and Strategic Planning.
Weeks-Vagliani, W. (1993). Building on the women in development and family planning experience for community-based environmental education. In H. Schneider (Ed.), Environmental education: An approach to sustainable development (pp. 103–116). Paris: OECD.
Weiss, C. H. (1983). The stakeholder approach to evaluation: Origins and promise. In A. S. Bryk (Ed.), Stakeholder-based evaluation: New directions for program evaluation (pp. 3–14). San Francisco: Jossey-Bass.
Wholey, J. S., Hatry, H. P., and Newcomer, K. E. (1994). Handbook of practical program evaluation. San Francisco: Jossey-Bass.
Wood, B. B. (2001). Stake's countenance model: Evaluating an environmental education professional development course. Journal of Environmental Education, 32(2): 18–27.
Wood, D. W., and Wood, D. S. (1985). Conservation education: A planning guide. Peace Corps Manual M-23. Washington, DC: U.S. Peace Corps.