
Building environmental educators' evaluation capacity through distance education

M. Lynette Fleming a,*, Janice Easton b,1

a Research, Evaluation & Development Services, 11220 E. Stetson Place, Tucson, AZ 85749-9550, USA
b Department of Agricultural Education and Communication, University of Florida, Gainesville, FL 32511, USA

Evaluation and Program Planning 33 (2010) 172–177

ARTICLE INFO

Keywords: Environmental education; Evaluation capacity building; Teaching evaluation; Distance education

ABSTRACT

Evaluation capacity building (ECB) is seldom mentioned in the environmental education (EE) literature but, as the scarcity and poor quality of EE evaluations demonstrate, it is much needed. This article focuses on an online course, Applied Environmental Education Program Evaluation (AEEPE), which provides nonformal educators with an understanding of how evaluation can be used to improve their EE programs. The authors describe key aspects of the course and strategies for addressing the challenges they face in teaching AEEPE, such as reducing attrition, developing and maintaining a social learning environment online, and improving students' understanding of attribution and logic models. While the course equips environmental educators with the skills necessary to design and implement basic evaluations, there is less certainty that it contributes to generating demand for evaluation within organizations and the profession. The authors therefore call on national organizations and associations for help with increasing the demand for ECB in the EE community.

© 2009 Elsevier Ltd. All rights reserved.

Contents lists available at ScienceDirect

Evaluation and Program Planning

journal homepage: www.elsevier.com/locate/evalprogplan

1. Introduction

1.1. Evaluation capacity building: definitions

Although the term has been used in the literature for nearly 10 years, the meaning of "evaluation capacity building" (ECB) still lacks clarity. At the 2000 American Evaluation Association (AEA) conference, evaluation capacity was defined as "the ability to conduct an effective evaluation (i.e., one that meets accepted standards of the discipline)" (Milstein & Cotton, 2000). In her Presidential Address at the 2007 American Evaluation Association Conference, Hallie Preskill defined ECB as "the intentional work to continuously create and sustain overall organizational processes that make quality evaluation and its uses routine" (Preskill, 2008; Stockdill, Baizerman, & Compton, 2002, p. 14). Naccarella et al. (2007) conclude that most definitions of ECB consist of providing staff with skills and sufficient resources to conduct rigorous evaluations in organizations that foster a culture of support and allow evaluation to become part of routine practice.

Also helpful is the analogy described by McDonald, Rogers, and Kefford (2003), who stressed the supply and demand aspects of evaluation capacity, noting that most ECB efforts have focused primarily, or even exclusively and excessively, on supply: developing technical skills, tools, and resources to produce evaluations. The demand aspect requires an institutional understanding of and commitment to evaluation, and purposeful management to create a culture that values comprehensive information and nurtures skills such that evaluations are used and become sustainable over time.

* Corresponding author. Tel.: +1 520 749 4909. E-mail addresses: [email protected] (M.L. Fleming), jeaston@ufl.edu (J. Easton). 1 Tel.: +1 386 454 8871.
0149-7189/$ – see front matter © 2009 Elsevier Ltd. All rights reserved. doi:10.1016/j.evalprogplan.2009.07.007

In this paper we describe a distance education course that equips environmental educators and natural resource professionals to conduct evaluations at the local level within the mission and resources of their organizations. The course developers and instructors stress McDonald's supply aspect of evaluation capacity, while being mindful of the importance of the demand for relevant evaluations created by environmental education organizations, communities, and the environmental education profession.

1.2. ECB: conceptualizations for environmental education

Consumers demand that environmental education (EE) teaches, affirms, inspires, and even entertains them. The goal of EE within organizations is to supply programs, information, and resources that lead to decision-making and behaviors that promote the organization's environmental, educational, health, and other social missions. Environmental educators often work solo or in very small numbers in their organizations and must tackle a broad array of topics with a variety of audiences. For many, EE responsibilities are shared with several other unrelated job duties. EE is delivered to youth and adult audiences in formal school classrooms and in nonformal settings, such as museums, parks, zoos, aquaria, and natural science centers inside and outside of traditional classrooms. Environmental educators have very diverse backgrounds; some have never taken a course in education, EE, or evaluation. For many environmental educators, relevant professional development opportunities are rare.

In the environmental education community, the term "evaluation capacity building" is difficult to find; it is often subsumed within the broader environmental education capacity building (EECB) framework. Most definitions of EECB emphasize creating leaders and infrastructure at the state and local level to support comprehensive EE programs for lifelong learning for all ages (EETAP, 2008; NAAEE, 1998; NAAEE, 2008). For example, the Environmental Education and Training Partnership (EETAP) refers to EECB as "the process of developing the structures, funding, and programs needed to create and sustain comprehensive EE programs at the state and local level. It includes developing effective leaders, organizations, and partnerships" (EETAP, 2008). In its description of EECB, the North American Association for Environmental Education (NAAEE) considers evaluation part of EE programs as it relates to instruction, primarily as assessment of learning objectives and outcomes (NAAEE, 2008).

1.2.1. Need

Despite the lack of prominence for evaluation in the definitions of EECB, environmental educators recognize the importance of and need for evaluation capacity building. In 2002, EE leaders and program providers identified program evaluation and assessment skills as very important for individual training needs and for further building environmental education capacity (Stenstrup, 2002; Wells & Fleming, 2002). In their work on building evaluation capacity in nonformal education settings, Monroe et al. (2005, p. 69) describe the potential to build an evaluation culture in the nonformal education community as well as the need for "evaluative information to ensure continuous program improvement and optimize the use of limited funding and staff for maximum audience benefit."

Moreover, the NAAEE outlined a set of recommendations for designing and administering high quality nonformal EE programs in Nonformal Environmental Education Programs: Guidelines for Excellence (NAAEE, 2004b) and for developing professional environmental educators in the Guidelines for the Preparation and Professional Development of Environmental Educators (NAAEE, 2004a). These Guidelines emphasize needs assessment and program evaluation as important themes for quality EE programs, as well as knowledge of assessment and evaluation skills as key characteristics of professional environmental educators.

1.2.2. Status of EE evaluation

The first major EE evaluation efforts were guided by the evaluation requirements of outdoor and environmental education projects funded by Title III of the Elementary and Secondary Education Act (ESEA) of 1965. Although these early requirements have been criticized for misguided evaluation methods (Stufflebeam, 2004), some environmental educators continue to believe behavioral objectives, experimental designs, and standardized tests are the way to evaluate EE projects, even if the subsequent findings are not used and may be irrelevant to the field. Demand for gathering evaluative information has followed legislative and funding requirements since the 1960s.

The overall reviews of EE evaluation efforts are not favorable (Coyle, 2005; Fien, Scott, & Tilbury, 2001). Historically, the field of EE has lacked the ability to support and produce routine assessments and evaluations. While environmental educators are increasingly challenged to provide evidence of program effectiveness, there continues to be a lack of widespread rigorous program evaluation within the field, as well as a lack of confidence among professionals in their ability to use assessment and evaluation in their programs (Coyle, 2005). Although the majority of EE programs do not integrate an evaluation plan into their programming (McDuff, 2002), most, if not all, state and federal agencies require demonstrable accountability in EE programming, including program monitoring and formal evaluations to determine whether the intended program outcomes are being achieved (Wiltz, 2000). Interestingly enough, these same organizations are often unable to provide training or resources to support educators in developing skills and knowledge in the field of program evaluation, particularly in nonformal EE program evaluation (NEEAC, 1996). Other challenges to implementing EE evaluations are described throughout this volume.

2. Applied EE program evaluation

2.1. Addressing a need for ECB

In an attempt to address the need for evaluation in the field of EE, an online evaluation course, Applied Environmental Education Program Evaluation (AEEPE), was developed in 2004 to teach environmental educators how to design and implement evaluations of their own programs. AEEPE is based on a 4-day residential course, Education Program Evaluation, offered through the U.S. Fish and Wildlife Service's (USFWS) National Conservation Training Center (NCTC). The Environmental Education Training Partnership (EETAP) at the University of Wisconsin-Stevens Point (UW-SP), in collaboration with NCTC, hired the authors and current course instructors to develop AEEPE. The aim of the course is to build evaluation capacity among environmental educators. To achieve this aim, the course provides participants with an overview of program evaluation and an opportunity to practice skills designing and using evaluation tools for their EE and outreach programs. While AEEPE certainly addresses the supply side of McDonald's evaluation capacity building framework by building a cadre of internal evaluators, we are less confident that the course has impacted the demand for evaluation.

2.2. AEEPE course content

AEEPE is offered to environmental educators, natural resource professionals, and students through Desire2Learn (D2L), a web-based platform used by UW-SP that allows for interaction among participants through an asynchronous discussion board. Instructors and participants also utilize the chat function, the only synchronous capability used in the course. Since most of the learners are employed full-time and span several time zones, the asynchronous component is essential. Course participants have the option of taking the 12-week course as a non-credit workshop or registering through UW-SP for undergraduate or graduate credits. Workshop participants and students alike are advised that the course will require a minimum of 10 hours of work per week.

Together, seven units offer a basic framework that enables participants to evaluate an EE program at their place of work. Each unit has at least two assignments: one that is submitted to the instructor via a dropbox and another that is posted to the discussion board for peer review and comment. The first week of the course is designed to orient participants to the D2L platform, to their instructor, and to each other through activities that build a learning community that functions throughout the course. In Unit 1, participants receive a general overview of the types and purposes of evaluation, describe the local programs to which they plan to apply the AEEPE concepts, and confer with their stakeholders to develop evaluation questions. In Unit 2, participants develop measurable program objectives, a logic model, and an evaluation plan for their program, and are encouraged to discuss these assignments within their organization. Units 3–6 introduce participants to common evaluation instruments (observation forms, interview and focus group guides, questionnaires, and pre-post tests) and require the development of instruments that will gather data to answer their evaluation questions. The last unit introduces participants to qualitative and quantitative data analysis and outlines how to report evaluation results. Although most participants are new to data analysis, they calculate descriptive statistics and conduct a content analysis using data from a supplied Excel spreadsheet. They communicate their findings in a way they feel would be best for their stakeholders, resulting in executive summaries, PowerPoint presentations, press releases, and brochures. The culminating assignment for undergraduate and workshop participants requires a pilot test of at least one data gathering instrument developed during the course. Graduate students have the option of pilot-testing all instruments, developing a PowerPoint presentation of their evaluation for a conference, or writing an evaluation report. Additional course information is available at http://www.uwsp.edu/natres/eetap/.
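The kind of data-analysis exercise described above can be sketched in a few lines of Python. This is a minimal illustration only, not course material: the ratings and comments are hypothetical stand-ins for the data in the supplied Excel spreadsheet, and the keyword tally is just one simple way a content analysis might begin.

```python
from statistics import mean, median, stdev
from collections import Counter

# Hypothetical survey data standing in for the course's spreadsheet:
# 1-5 satisfaction ratings plus open-ended comments from a program.
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
comments = [
    "Loved the guided hike and learning about water quality",
    "The water quality testing was hands-on and fun",
    "Wish the hike were longer",
]

# Descriptive statistics of the kind participants report to stakeholders.
print(f"n={len(ratings)}  mean={mean(ratings):.2f}  "
      f"median={median(ratings)}  sd={stdev(ratings):.2f}")

# A simple content analysis: tally comments by theme using keywords,
# to see which program elements respondents mention most often.
themes = {"hike": "outdoor activity", "water quality": "monitoring"}
counts = Counter()
for comment in comments:
    text = comment.lower()
    for keyword, theme in themes.items():
        if keyword in text:
            counts[theme] += 1
print(counts.most_common())
```

In practice a participant would read the spreadsheet into such lists (or use a spreadsheet's built-in functions); the point is that the unit's calculations are within reach of newcomers to data analysis.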

2.3. AEEPE participants

2.3.1. Who are the participants?

Since the first AEEPE course was offered in fall semester 2004, 350 environmental educators and natural resource professionals from 46 states and 12 countries have registered to take AEEPE. Nearly 75% of the registrants are workshop participants. Students enrolled through UW-SP are typically seeking graduate degrees; however, many participants are located at other institutions where EE program evaluation courses are not offered. Three-quarters of all registrants have received a certificate of completion. The majority of participants are female (90%), between the ages of 24 and 67.

Most of the workshop participants are full-time employees working in federal and state agencies as well as nonprofit EE organizations. Through a partnership with EETAP, the USFWS offers scholarships to its education and outreach staff; thus in any given semester as many as five USFWS employees have participated, and to date 40 USFWS staff have taken AEEPE. Other federal agencies represented include the National Park Service and U.S. Forest Service, and to a lesser extent the National Oceanic and Atmospheric Administration and Bureau of Land Management. Those working at state agencies represent departments of natural resources and environmental protection as well as state parks. Participants from nonprofits such as the Audubon Society, Rainforest Alliance, botanical gardens, and zoos make up about a third of those enrolled. Nearly all the workshop participants conduct nonformal EE programs aimed at promoting environmentally responsible behaviors in children and adults.

2.3.2. Outcomes of participation in AEEPE

In an evaluation of AEEPE's short-term outcomes, Dillard (2006) found that AEEPE participants made significant gains in knowledge and confidence to conduct program evaluations after participating in the course. Participants improved their comprehension of evaluation and "gained confidence in applying evaluation skills in my work." In 2007, Kreis studied the extent to which the 2006 AEEPE course participants were implementing the program evaluation plans they developed during the course and the impact of subsequent evaluations on their programs. Of the 65 participants who completed the post-course follow-up, nearly three-quarters had started or completed a program evaluation within 1 year of the course and had used the results to improve their program. Most conducted formative evaluations that resulted in improvements to their program's content, delivery, short-term outcomes, and staff training (Kreis, 2007).

AEEPE participants continue to be overwhelmingly positive about the skills they develop in the course, as evidenced by comments like "I had little experience and formal knowledge of the process prior to taking the course. The applied approach helped me to develop personal skills while benefiting my organization. I feel that I learned not only the 'whats' and 'whys' of evaluation, but also the 'how-to's'" (Kreis, 2007).

Participants also make suggestions on their course evaluation forms and have encouraged the instructors to adjust assignments and course content, especially in the area of data management and analysis. Several participants have expressed an interest in learning more about evaluation and have requested that instructors include information about evaluating long-term impacts and create an online forum "in which to communicate with others across the country working on similar programs" (Kreis, 2007). Continuing to develop the supply side of the ECB framework, AEEPE instructors are looking into the possibility of using ongoing evaluation forums through My Environmental Education Evaluation Resource Assistant (MEERA, http://meera.snre.umich.edu/).

While some students had support from their organizations for conducting evaluations (the demand side of ECB), others struggled with the challenges addressed by Carleton-Hug and Hug (this edition). A lack of time to conduct evaluations ("The largest barrier we face in implementing an appropriate and meaningful evaluation plan is staff time") and institutional resistance ("Our organization wants outside evaluators (consultants), not in-house evaluation. I think it is lack of understanding, in addition to institutional tradition, to use outside evaluators") were common barriers to implementing their EE evaluations (Kreis, 2007).

Although ECB is not happening at the organization level in most cases, some organizations, for example the USFWS and St. Louis Zoo, have created the supply as well as the demand for evaluation by promoting and supporting evaluation training for their staff in face-to-face workshops and in AEEPE. These organizations have supported their EE staff by offering scholarships and encouraging participation in professional development opportunities. Building a critical mass of environmental educators with evaluation experience is one step in building evaluation capacity within the organization. As one participant noted, "This course has really benefited the organization I work with because they now are thinking how to incorporate the evaluation plan I created. The organization now talks and thinks more like a learning organization and is consistently thinking how to improve their programs and what tools they need to help them in improving their programs and meeting their goals" (Kreis, 2007).

2.4. Challenges

While AEEPE has successfully developed its participants' skills on the supply side of evaluation capacity building, challenges remain. The AEEPE instructors identified the following as the main challenges they face teaching EE program evaluation in an online format: reducing the attrition rate, creating a social learning environment, and helping students understand what outcomes can be attributed to their programs.

2.4.1. Reducing attrition

One of the main challenges for AEEPE, as well as for other online courses, is the attrition rate. The average attrition rate of AEEPE participants over the last 4 years is 24%, with a range from 5% to 33% per semester. This figure is consistent with Moody's (2004) estimate of a 20% attrition rate for online courses as compared to their face-to-face counterparts. Moody (2004) suggests that the pervasiveness of high attrition rates is problematic for any online course, since attrition rates are often considered a measure of the quality of program delivery by an institution. Identifying the reasons why AEEPE participants drop out has been a special concern for the instructors. The workshop participants themselves have cited family circumstances, illness, and job demands as the main reasons for withdrawal. To a lesser extent, participants noted that the course was more involved than they first thought, they had trouble managing their time, or they felt uncomfortable with the online format.

To mitigate the effects of a high attrition rate, the instructors have used a number of strategies aimed at increasing student engagement, including instructor-initiated emails before the course starts, virtual office hours, synchronous chats, galleries where students can post their work and give and get peer feedback, and a Cyber Cafe where students are able to post evaluation articles and ask general questions. Other strategies used to increase student engagement in AEEPE have focused on adapting course assignments so participants can apply the material to their own EE programs, as well as nurturing an online learning community where participants can share their programs and ideas with their classmates. In an effort to increase the quantity and quality of peer contact, grade points are assigned for participation on the discussion board. Participants are also divided into groups so that reading and posting to the discussion board is more manageable given their limited time. These changes may be helping us to decrease the attrition rate: in 2008 attrition averaged 15% over three AEEPE courses.

2.4.2. Creating a social learning environment

For EE professionals who do not have access to an educational institution, the AEEPE course provides an opportunity for learning and professional development that they would not otherwise have. While we feel that the online course has increased communication among professionals within the EE field about ways to evaluate their programs, the online environment is not always conducive to effective dialogue. As a result, some participants feel disconnected from their peers and the material. Strategies to increase social learning within a distance course require creative and innovative approaches. Over the last 4 years, the instructors have tried a number of strategies to increase the socialization of participants, thereby decreasing the sense of isolation and reducing attrition. We have found that by engaging their program stakeholders early in the process, course participants become more vested in the course, even to the extent of fostering a culture of evaluation in their organizations. More recent strategies include the addition of the Cyber Cafe, a discussion board where participants carry on professional discussions of EE and personal discussions of family and current events (such as discussing what is happening on the television series "Lost") as a way to encourage the social aspects of the virtual community. We have also employed synchronous discussions in lieu of some written assignments, and optional group work to complete assignments, such as for qualitative and quantitative data analysis. Students have been very positive about the data gathering tool galleries on the discussion board, where they post their observation forms, interview protocols, questionnaires, and scoring rubrics, and where, as part of the revision process, they give and receive feedback from their peers.

In addition, former and current course participants are encouraged to attend the NAAEE conferences, where they are able to meet one another and interact with the instructors. Currently under discussion is establishing an evaluation discussion forum for past AEEPE students, possibly through My Environmental Education Evaluation Resource Assistant (MEERA). Typically at least 3–5 participants per course request more detailed instructor feedback on new data gathering tools they develop or seek other types of assistance within the first year of taking the class. At least one participant per semester has communicated evaluation success stories within a year of the course. This extended contact shows the need for continued opportunities to build evaluation capacity among EE providers.

2.4.3. Understanding attribution

While participants choose to evaluate an array of EE programs (e.g., water quality, angling, recycling, exhibits, summer camps, and curricula) for an assortment of target audiences, we note that their evaluation needs and questions are relatively similar across the spectrum. The majority of participants begin AEEPE with little or no previous experience or coursework in evaluation. While most have administered some type of questionnaire as part of a program evaluation, they lack a conceptual understanding of evaluation as a systematic process.

In addition, most participants enter the course with only an implicit knowledge of how their programs work. They tend to attribute long-term outcomes, such as the improvement of the resources their organization is responsible for conserving, to short-duration programs. This misconception is due in part to the unique goal of EE: to create an environmentally literate public who demonstrate environmentally responsible behaviors. This, of course, is exceptionally difficult to measure and evaluate in a short period of time. Another frequently cited long-term outcome of their programs is increased standardized test scores, which is impossible given that in most states EE is not part of the formal education curriculum upon which standardized testing of students' skills and knowledge is built. Since these types of long-term outcomes are often part of the justification for an agency or organization's educational efforts, participants tend to believe that species protection or higher standardized test scores will result from a single field trip visit or a few classroom programs.

To address this challenge, participants are asked to develop a set of measurable program objectives, a logic model, and an evaluation plan in consultation with others from their organizations. Development of program logic models helps participants articulate why the elements of the program achieve specific outcomes. Relating program elements to realistic and attainable outcomes is eye-opening for many participants as they reconstruct their program theory of action. "The hardest and most helpful parts (of the course) were developing SMART objectives and a logic model, and creating the evaluation plan. . .they helped me really understand my program, what to evaluate, and to explain my program to others."

3. Lessons learned: implications for ECB and EECB

3.1. The supply side of ECB and EECB

We have found the AEEPE course to be an effective approach to professional development in evaluation, one that equips environmental educators with the confidence, knowledge, and skills needed to conduct evaluation in a routine and ongoing way and in a local context. AEEPE addresses the supply side of McDonald’s evaluation capacity building framework. Course outcomes include increasing the number of EE professionals who value evaluation and who understand the evaluation process, the development of evaluation tools, techniques of data analysis, and the importance of evaluation reporting. This increased capacity has helped environmental educators conduct evaluations that yield useful information for program improvement. By learning together to evaluate their programs, participants expand their knowledge of EE concepts, methods, guidelines, and what a comprehensive EE approach means, thus increasing the supply side of EE capacity. We believe high-quality evaluation findings will help organizations support and sustain EE.

Our challenges may provide some lessons, or serve as a warning, for others wishing to build evaluation or EE capacity online. The main challenges of the supply side as delivered by AEEPE, and the ways we have faced those challenges, include: (a) addressing attrition by encouraging discussions in the learning community, and instructor and peer support at key points during the course; (b) overcoming impediments to learning in an asynchronous online format by familiarizing participants with the online platform, promoting peer feedback, and encouraging socializing through chat, sharing pictures, biographies, stories, and off-line phone conversations; and (c) improving students’ understanding of attribution of program effects by focusing on short- and medium-term outcomes. Efforts to create an online forum for sharing among participants after the course ends are viewed as a critical next step for engaging and sustaining a learning community among environmental educators.

3.2. The demand side of ECB and EECB

While distance learning through AEEPE effectively builds the capacity of environmental educators in diverse settings to conduct evaluations of their own programs, the AEEPE instructors’ role is limited on the demand side of ECB. In the future we can help by assisting AEEPE evaluators in maintaining ties to their cohort and networking with other course graduates through an online community, but most of the demand effort must come from the leadership of others. Organizations and professional associations have the ability to create greater demand by facilitating the cultural changes necessary for evaluation to become a routine part of EE.

Most participants take AEEPE on their own initiative and on their own time because they are personally committed to the environment and to improving their EE programs and have not previously had the opportunity to learn about evaluation. Many have involved others within their organizations through their course assignments. Some participants have successfully created a culture within their organizations that demands evaluation of their programs and products. While those organizations may be few and far between, we do believe that we are creating a critical mass of EE professionals who value program evaluation. Some have led their organizations to create evaluation plans and use their evaluation findings, creating bottom-up and top-down support for the development of shared understandings of the program being evaluated. In addition, we see an evaluation demand being created within the profession as a result of EETAP’s leadership, NAAEE’s Guidelines for Excellence, EE certification, and top-down efforts at some organizations. This professional guidance is also creating a demand for more high-quality EE.

Efforts to move evaluation into a more prominent position in professional EE associations and organizations that conduct EE can lead to an understanding of the varied uses of evaluation findings and increase support for more EE program evaluation. As a start, NAAEE should consider expanding its website and adding a new evaluation commission to illustrate the value of evaluation and to nurture evaluation skills, support, and resources so that evaluation can become sustainable over time in the field of EE. When EE leaders within professional associations and organizations at the local, state, national, and international levels promote a culture of support for evaluation of EE programs, they begin to create the demand for program evaluation that McDonald, Rogers, and Kefford (2003) believe is missing from many ECB efforts.

4. Conclusions

Evaluation capacity building for environmental educators and evaluators must address both supply and demand aspects of evaluation. The supply side builds skills in individuals to produce evaluations, while the demand side builds an organization’s understanding and management to create a culture that values, promotes, and uses evaluation over time. The course, Applied Environmental Education Program Evaluation (AEEPE), primarily addresses the supply side of ECB; however, it also actively attempts to address the demand side of the equation by asking students to involve their organizations in their course assignments and by trying to build a sense of community and support among course participants. In the future, an online forum may be provided to facilitate communication after the course ends.

Our analysis of the AEEPE online course has caused us to question how the terms ‘‘evaluation capacity building’’ (ECB) and ‘‘environmental education capacity building’’ (EECB) are defined. We believe that these definitions must include the supply and demand aspects of capacity building. We believe that any ECB definition should stress equipping staff (or practitioners) with the knowledge, technical skills, and tools to produce quality evaluations that acknowledge local context, and that those evaluations should be logically planned to become part of routine practice within the mission and resources of an organization. Further, for any ECB definition to be useful in the EE community, the culture of organizational support should be broadly defined to include not only workplace culture but also professional associations that offer additional support for evaluation activities. There is a need for ECB within the EE profession since organizations in which EE occurs may not understand EE, let alone evaluation.

Definitions of EECB must clearly address the use of evaluation findings: both the instrumental use, where results are applied in a specific way to answer local evaluation questions that support decision-making, and the conceptual use, to generate new knowledge about EE’s goals and objectives (Beyer, 1997; Naccarella et al., 2007). In an attempt to clarify, we propose the following definition, which includes evaluation as part of EECB so that it may become routine practice: environmental education capacity building (EECB) refers to the process of developing the structures, funding, and programs needed to create and sustain comprehensive environmental education (EE) programs within a local context, using evaluation to further environmental education’s effectiveness in improving environmental literacy, decision-making, and behavior.

We present this case because we believe others may be able to use what we have learned to design online evaluation courses to improve the supply side of ECB within the context of a discipline. One lesson learned is that, through an online course, nonformal environmental educators can become part of a learning community and apply evaluative thinking to existing programs despite a wide range of EE goals among those programs. Another lesson is that the implementation of evaluation plans and use of findings can occur for many students within a year of taking the course, but that more time and organizational support is needed for others.

Our participants need support from their organizations and professional associations for evaluation as well as for their EE work. We believe that demand will increase the supply of EE professionals with quality evaluation skills. The main challenges we face as instructors in AEEPE will also be reduced with increased demand for skilled evaluators. A large part of addressing the demand side of ECB must come from organizations and from professional associations, which have the ability to facilitate the cultural changes necessary for evaluation to become a routine part of EE.

References

Beyer, J. M. (1997). Research utilization: Bridging the gap between communities. Journal of Management Inquiry, 6, 17–22.

Coyle, K. (2005). Environmental literacy in America: What ten years of NEETF/Roper research and related studies say about environmental literacy in the U.S. National Environmental Education Foundation. [Online]. Available: http://www.neefusa.org/pdf/ELR2005.pdf.

Dillard, J. (2006). The evaluation and revision of an online course entitled ‘‘Applied Environmental Education Program Evaluation’’. Unpublished master’s thesis, University of Wisconsin, Stevens Point.

EETAP (Environmental Education Training Partnership). (2008). Capacity building tools. [Online]. Available: http://eetap.org/html/capacity_building_tools.php.

Fien, J., Scott, W., & Tilbury, D. (2001). Education and conservation: Lessons from an evaluation. Environmental Education Research, 7(4), 379–395.

Kreis, R. (2007). The impact of an online course on environmental education program evaluation. Unpublished master’s thesis, University of Wisconsin, Stevens Point.

McDonald, B., Rogers, P., & Kefford, B. (2003). Teaching people to fish? Building the evaluation capability of public sector organizations. Evaluation, 9(1), 9–29.

McDuff, M. (2002). Needs assessment for participatory evaluation of environmental education programs. Applied Environmental Education and Communication, 1, 25–36.

Milstein, B., & Cotton, D. (2000). Defining concepts for the presidential strand on building evaluation capacity. Fairhaven, MA: American Evaluation Association.

Monroe, M. C., Fleming, M. L., Bowman, R. A., Zimmer, J. F., Marcinkowski, T., Washburn, J., et al. (2005). Evaluators as educators: Articulating program theory and building evaluation capacity. New Directions for Evaluation, 105, 57–71.

Moody, J. (2004). Distance education: Why are the attrition rates so high? The Quarterly Review of Distance Education, 5(3), 205–210.

Naccarella, L., Pirkis, J., Kohn, F., Morley, B., Burgess, P., & Blashki, G. (2007). Building evaluation capacity: Definitional and practical implications for an Australian case study. Evaluation and Program Planning, 30, 231–236.

NEEAC (National Environmental Education Advisory Council). (1996). Report assessing environmental education in the United States and the implementation of the National Environmental Education Act of 1990.

NAAEE (North American Association for Environmental Education). (1998). State-level capacity building for environmental education: Next steps: A concept paper for consideration by the EE community and its supporters. Stevens Point, WI: State Capacity Building Commission, National Environmental Education Advancement Project.

NAAEE (North American Association for Environmental Education). (2004a). Guidelines for the preparation and professional development of environmental educators. [Online]. Available: http://www.naaee.org/programs-and-initiatives/guidelines-for-excellence/materials-guidelines/educator-preparation.

NAAEE (North American Association for Environmental Education). (2004b). Nonformal environmental education programs: Guidelines for excellence. [Online]. Available: http://www.naaee.org/programs-and-initiatives/guidelines-for-excellence/materials-guidelines/nonformal-guidelines.

NAAEE (North American Association for Environmental Education). (2008). Capacity building. EELink. [Online]. Available: http://eelink.net/pages/Capacity+Building.

Preskill, H. (2008). Evaluation’s second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127–138.

Stenstrup, A. (2002). Environmental education liaison with state capacity building: National survey of Project Learning Tree, Project WET, Project WILD and World Wildlife Fund environmental education providers. [Online]. Available: http://www.plt.org/initiatives/education_liasion_sc_building.pdf.

Stockdill, S. H., Baizerman, M., & Compton, D. W. (2002). Toward a definition of ECB process: A conversation with the ECB literature. New Directions for Evaluation, 93, 7–25.

Stufflebeam, D. L. (2004). The 21st century CIPP model: Origins, development, and use. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences. Thousand Oaks, CA: Sage.

Wells, M., & Fleming, M. L. (2002). EETAP capacity building evaluation: A final project report. Environmental Education and Training Partnership, UW-Stevens Point. U.S. EPA Cooperative Agreement NT-82865901-2.

Wiltz, L. K. (2000). In Proceedings of the Teton Summit for program evaluation in nonformal environmental education.

M. Lynette Fleming, PhD, has spent more than 30 years designing, facilitating, and evaluating programs and materials for educators. She is currently conducting participatory evaluations of education programs; implementing evaluation workshops; teaching AEEPE; working with several states on certification of environmental educators; and assisting the North American Association for Environmental Education with accreditation of proficiency-based certification programs.

Janice Easton is a PhD candidate in the Department of Agricultural Education and Communication at the University of Florida, focusing on Extension program development. She is also an instructor of AEEPE and an independent consultant working with NOAA’s Office of Education to develop online and in-person program evaluation workshops for their education and outreach staff.