
Carman, J. G., & Fredericks, K. A. (2008). Nonprofits and evaluation: Empirical evidence from the field. In J. G. Carman & K. A. Fredericks (Eds.), Nonprofits and evaluation. New Directions for Evaluation, 119, 51–71.

Nonprofits and Evaluation: Empirical Evidence From the Field

Joanne G. Carman, Kimberly A. Fredericks

Abstract

The authors explore what evaluation looks like, in practice, among today's nonprofit organizations on the basis of their survey results. The types of evaluation activities nonprofit organizations are engaging in on a regular basis, as well as the types of data they are collecting and how they are using these data, are described. How nonprofits think about evaluation and a three-pronged typology, based on a factor analysis of the survey data, is presented. This analysis shows that nonprofit organizations tend to think about evaluation in three distinct ways: as a resource drain and distraction; as an external, promotional tool; and as a strategic management tool. The authors recommend how funders, evaluators, and nonprofit managers can change the way they think about evaluation and build upon the way they currently use evaluation to maximize its potential. © Wiley Periodicals, Inc.

During the last 15 years, nonprofits have faced increasing pressures from funders and other stakeholders to demonstrate their effectiveness and document program outcomes for accountability purposes. Because of devolution, today's nonprofit organizations have been charged with the responsibility of delivering many important public services: child care, counseling, job training, employment services, housing and community development, mental health services, and youth development services. Essentially, there is an exchange relationship between nonprofit organizations and funders. Government, foundations, and other funders give nonprofit organizations grants and contracts to deliver public services. In return, funders, as well as the broader community, expect that high-quality services will be delivered to those in need. Evaluation requirements of some form are typically part of these exchange relationships.

In this chapter, we explore what evaluation looks like in practice among today's nonprofit organizations. The chapter has four sections. First, we begin with a review of the empirical literature about the current program evaluation practices of nonprofit organizations, summarizing the major findings and highlighting existing gaps. Second, we present the findings from a mail survey that we conducted of nonprofit organizations in the state of Indiana. We describe the types of evaluation activities nonprofit organizations are engaging in on a regular basis, as well as the types of data they are collecting and how they are using these data. We also examine how nonprofits think about evaluation and present a three-pronged typology, based on a factor analysis of the survey data. This analysis shows that nonprofit organizations tend to think about evaluation in three distinct ways: (1) as a resource drain and distraction; (2) as an external, promotional tool; and (3) as a strategic management tool. Third, we use the experiences of three nonprofit organizations to illustrate these mind-sets. Finally, we offer recommendations about how funders, evaluators, and nonprofit managers can change the way they think about evaluation and build upon the way they are currently using evaluation to maximize its potential.

Literature Review

During the last 10 years, we have seen an increase in the empirical literature focused on evaluation practice within nonprofit organizations. Initially, this literature was focused on learning more about the evaluation requirements of foundations. In a survey of 170 foundations, McNelis and Bickel (1996) found that most foundations evaluate the programs they fund through self-reports by the grantees and monitoring by foundation staff. In a study of 128 foundations, Alie and Seita (1997) found that more than one-third conducted ongoing evaluation of the nonprofit organizations they fund. In a study of 21 foundations, Patrizi and McMullan (1999) found that two-thirds of the foundations had staff members assigned specifically to evaluation; more than half of the foundations reported evaluation activities had increased during the last five years. More recently, we have seen prominent organizations, such as the Center for Effective Philanthropy, the David and Lucile Packard Foundation, and the W. K. Kellogg Foundation, come out with guidelines and information about best practices for foundation evaluation (Braverman, Constantine, & Slater, 2004).



Survey research into this topic has also proliferated. One of the earlier studies was funded by the Aspen Institute's Nonprofit Sector Research Fund and the Robert Wood Johnson Foundation. From data gathered from 178 mail surveys, 40 interviews, and four in-depth profiles, Fine, Thayer, and Coghlan (1998) found that recent evaluation efforts completed by nonprofit organizations were focused on outcome measures, were conducted primarily for current funders, and relied on data gathered through a combination of quantitative and qualitative methods. In a mail survey of 241 national, state, and local nonprofit organizations, OMB Watch (1998) found that more than 80% of these nonprofit organizations reported being subjected to performance measurement requirements by government agencies, foundations, other private funders, their own management, or their own board of directors. In a mail survey of 91 human service agencies in Dallas, Hoefer (2000) found that three-quarters of the agencies had participated in an evaluation within the preceding 2 years, with the majority conducting the evaluation to ensure proper program implementation. Similarly, Morley, Vinson, and Hatry (2001) found that 83% of the 36 nonprofit organizations they studied regularly collected and analyzed outcome data. Carman (2004) and Carman and Millesen (2004) found similar results in surveys of nonprofit organizations in New York and Ohio. Additional surveys have looked at outcomes measurement among organizations receiving funding from the United Way (United Way of America, 2000, 2003).

Empirical literature about evaluation practice among nonprofit organizations has emerged from Canada as well. Some of the earlier work was based on case studies of specific nonprofit organizations; it examined the accountability relationship between nonprofits and funders (Cutt & Murray, 2000; Tassie, Murray, & Bragg, 1996; Tassie, Murray, & Cutt, 1998). A large-scale, joint project between the Canadian Centre for Philanthropy and the Centre for Voluntary Sector Research and Development at Carleton University was also launched to study this phenomenon, called the Voluntary Sector Evaluation Research Project (VSERP). As part of this project, a telephone survey was conducted with 1,965 nonprofit organizations and 322 funders in 2001, which found that the evaluation expectations among funders had increased during the last three years, yet less than half of the funders provided funding for evaluation activities. Most nonprofit organizations relied on internal self-evaluation, with just 8% reporting working with an external evaluator. The majority (76%) reported gathering output data, and 66% reported gathering outcome data (Imagine Canada, 2005).

While this growing literature is extremely valuable, we developed our research project in order to fill several important gaps. The first gap has to do with the level of detail gathered in previous research. Instead of asking general questions about evaluation practice, our research is very specific about the nuts and bolts of evaluation work (what types of data are collected, how data are collected, who is responsible, and how the work is paid for). The second gap is more conceptual. In previous research (Carman, 2004, 2007), we learned that nonprofit organizations think about evaluation in very broad terms. They see evaluation as being closely related to a wide range of reporting, monitoring, management, and regulatory activities. Our survey therefore recognizes this context. The third gap has to do with coverage. Although previous research has focused on "who is doing what," we still do not know much about how nonprofit organizations are using evaluation or what they think about the evaluation work they do. Our research addresses this as well.

Survey

Building on previous research (Carman, 2004, 2007; Carman & Millesen, 2004), we developed a survey and sent it to Indiana nonprofit organizations. Indiana was chosen because of its contrast in size, demographics, and overall state ideology with the previous state surveys. Additionally, Indiana nonprofits have had tremendous growth over the last several years, with the number of paid nonprofit workers increasing 5.1% between 2002 and 2004, while the work force as a whole decreased 0.2% (Gronbjerg, Lewis, & Campbell, 2007). These differences yielded another point of variance within this line of study.

The study survey was six pages in length and comprised 21 closed-ended questions related to evaluation practice. Specifically, we wanted to know:

• What types of evaluation activities are nonprofit organizations conducting on a regular basis?
• What types of evaluation data do nonprofit organizations collect on a regular basis?
• How do they collect data?
• Who has primary responsibility for collecting data?
• How are these evaluations typically funded?
• How do nonprofit organizations use the evaluation data they collect?
• What do they think about their evaluation efforts?

We sent our survey to a random sample of 340 nonprofit organizations in Indiana. We targeted our survey to just those organizations that provided human services: specifically, organizations providing social services, community development and housing services, and services to people with physical or developmental disabilities. We made this decision because these organizations rely heavily on government contracts and philanthropic grants, which typically have some type of monitoring, reporting, or evaluation requirements associated with them (Beam & Conlan, 2002; DeHoog & Salamon, 2002; Gronbjerg, 1993; Smith & Lipsky, 1993).

Because comprehensive and statewide lists of nonprofit organizations simply do not exist, we created a sampling frame from four sources. Our primary data source was data kept by the National Center for Charitable Statistics (NCCS) containing the 2004 IRS Form 990 data for nonprofit organizations in Indiana. Because this data source excludes religiously affiliated organizations, which are exempt from filing with the IRS, we supplemented this data source with listings for religiously affiliated organizations found in telephone directories, Internet searches, and a database furnished by the Indiana Humanities Council.

To ensure that each service field was well represented, a disproportionate random sample was selected to be surveyed: 105 organizations identified as providing primarily social services, 105 identified as providing primarily community development and housing services, and 105 identified as providing services to people with physical or developmental disabilities. Additionally, 25 of the religiously affiliated organizations in Indiana were randomly selected. A total of six surveys were returned as undeliverable. Two rounds of survey mailings, follow-up postcards, and e-mails and phone calls yielded the return of 189 surveys, for an overall response rate of 57% (189/334).
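To make the sampling design concrete, here is a minimal sketch (not the authors' actual procedure or code) of how a disproportionate stratified draw like the one described above could be taken with pandas; the file name, column name, and stratum labels are assumptions made for illustration.

```python
import pandas as pd

# Hypothetical sampling frame: one row per organization, with a
# "service_field" column labeling its primary service area.
frame = pd.read_csv("indiana_nonprofit_frame.csv")  # assumed file name

# Disproportionate stratified sample: fixed counts per stratum,
# regardless of each stratum's share of the frame.
targets = {
    "social services": 105,
    "community development and housing": 105,
    "disabilities services": 105,
    "religiously affiliated": 25,
}

sample = pd.concat(
    frame[frame["service_field"] == field].sample(n=n, random_state=42)
    for field, n in targets.items()
)

# Response rate as reported in the text: 189 returned of 334 deliverable surveys.
print(round(189 / 334 * 100))  # -> 57
```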

Sample

The sample responding to the survey was composed of 189 nonprofit organizations, all with 501(c)(3) tax status. Thirty-five percent provided primarily social services, 36% provided services to people with developmental or physical disabilities, and 29% provided primarily community development and housing services. The organizations ranged considerably in terms of age, annual operating budget, number of staff, and the number of people served. A closer look revealed some significant differences. First, social service organizations were more diverse in terms of their operating budget and tended to be older. Second, disabilities organizations were typically larger (in terms of both operating budgets and numbers of staff) and served fewer people, which makes sense given the specialized, labor-intensive nature of care they provide. Third, community development organizations were typically smaller (in terms of budget and staff) and founded more recently.

With regard to funding sources, 71% reported they received some type of government funding: federal government, state government, county or local government, or Medicaid. Forty-seven percent received funding from foundations, and 43% received funds from the United Way. Thirty-seven percent received funding from the private sector (banks or corporations). More than three quarters (76%) of the organizations received donations from individuals, through special event fundraising and direct individual contributions. One-half of the organizations raised funds from fees, sales, or dues. A closer look again revealed a few significant differences between the nonprofit organizations in different service fields. Most notably, more of the community development organizations received funds from banks and lending institutions (most likely for housing and commercial development). More of the social service organizations received funds from special event fundraising, individual donations, foundations, and the United Way. More of the developmental disabilities organizations received funds from local government sources and Medicaid.

Findings

The survey respondents were asked to characterize their organization's evaluation activities. Of the 189 organizations that responded to the survey, 90% reported they did at least some evaluation (see Figure 4.1). More specifically, 46% reported they made a concerted effort to evaluate most of their programs and organizational activities, and 26% reported they did some evaluation of their programs and organizational activities. Eighteen percent reported they went out of their way to evaluate all of their programs and organizational activities, while 5% reported doing very little evaluation of programs and organizational activities, and 5% reported they did not evaluate any of their programs and organizational activities.

[Figure 4.1. Characterization of Evaluation Efforts. Pie chart of responses: we go out of our way to evaluate all of our programs, 18%; we make a concerted effort to evaluate our programs, 46%; we do some evaluation, 26%; we do very little evaluation, 5%; we do not evaluate any of our programs, 5%.]

Activities. From previous research, we knew that nonprofit organizations tended to think about evaluation in very broad terms, encompassing a wide range of activities related to reporting, monitoring, management, and government regulations, along with evaluation and performance measurement (Carman, 2004, 2007). Therefore, we asked the survey respondents to tell us how often they engaged in 19 management, oversight, or evaluation activities. As shown in Table 4.1, most of the nonprofits reported they regularly engaged in various reporting and monitoring activities, while comparably fewer engaged in formal program evaluation. Specifically, just over half of the survey respondents (55%) reported they conducted formal evaluations of their programs on a regular basis. Forty-six percent reported using a performance measurement system on a regular basis. Twenty-three percent had designed program logic models.

Table 4.1. Management, Oversight, and Evaluation Activities (N = 189; percentage who report engaging in each activity regularly, with number of organizations in parentheses)

Reporting Activities
Produce reports for the board of directors: 94% (177)
Produce annual reports: 75% (141)
Produce reports for funders about program activities: 71% (135)
Produce reports for funders about financial expenditures: 70% (133)

Regulatory Activities
Conduct financial audits of your books: 86% (162)
Review program documentation (i.e., records, case notes): 71% (134)
Acquire official licenses to operate programs: 35% (66)
Participate in accreditation processes: 31% (58)

Monitoring Activities
Conduct performance reviews and evaluations of staff: 80% (151)
Conduct first-hand observations of program activities: 77% (145)
Monitor program implementation: 69% (131)
Experience site visits by funders or regulatory agencies: 55% (103)

Management Strategies
Assess whether you are meeting program goals and objectives: 67% (127)
Establish performance targets: 57% (107)
Engage in formal strategic planning processes: 47% (89)
Use a "balanced scorecard" management system: 5% (10)

Evaluation and Performance Measurement
Conduct formal program evaluations of your programs: 55% (103)
Use a performance measurement system: 46% (86)
Design program "logic models": 23% (43)

Data. The survey also asked the nonprofits to specify the types of evaluation data the organization gathered on a regular basis. Not surprisingly, 93% kept track of information about program expenditures, and 78% gathered information about other resource expenditures (i.e., staff and volunteer time, equipment and supplies). Eighty-nine percent gathered data about the number of people served by their programs. Seventy-seven percent of those organizations kept track of demographics of the people they served. Two-thirds (67%) of the organizations reported collecting information about consumer or participant satisfaction on a regular basis, and 62% gathered information about program outcomes or program results on a regular basis. Sixty-one percent reported gathering narrative or anecdotal data on a regular basis, and 60% gathered information about program activities or outputs on a regular basis. Just 10% gathered control or comparison data.

The majority (79%) of the nonprofits in the sample rely on written data collection tools. Seventy percent observe and record program activities. Fifty-nine percent conduct face-to-face interviews when they gather data. Half use mail surveys, 29% conduct focus groups, and 28% conduct telephone surveys. Just 4% use handheld computer systems (PDAs) for gathering data.

The majority of the nonprofit organizations (80%) reported that internal executive or management staff were responsible for gathering evaluation data. Another 4% reported having a dedicated internal evaluation staff member who was responsible for gathering evaluation data. Twelve percent reported that board members or board committees were primarily responsible for gathering evaluation data. Two percent reported that an external agency or funder was responsible for gathering evaluation data, and just two organizations reported that an external evaluator was the primary person responsible for gathering evaluation data.

When asked how they funded their evaluation activities, 63% of the organizations reported that they relied on internal operating funds. Twenty-nine percent reported there were no costs associated with their evaluation activities. Just 8% reported that funding for evaluation was included in grants or contracts. None of the organizations reported receiving a separate grant for evaluation.

Use. We also asked the survey respondents to tell us how they used the evaluation information that they collected. As shown in Figure 4.2, the most frequently reported use was to help make changes in existing programs (93%), followed by reporting to the board (82%) and helping to establish program goals or targets (75%). At least two-thirds of the organizations reported using evaluation data for strategic planning purposes (69%), to make decisions about staffing (68%), to help develop new programs (68%), and to report to funders (67%). Comparably fewer used the data for outreach or public relations purposes or to help them get new funding.

Attitudes. We asked survey respondents to tell us what they think about their evaluation efforts. Specifically, we asked them to tell us the extent to which they agreed with 14 statements, using a five-point scale: strongly agree (1), agree (2), not sure (3), disagree (4), and strongly disagree (5). As shown in Table 4.2, a factor analysis of these data (principal components, with a varimax rotation) showed that attitudes toward evaluation could be grouped according to three factors: (1) viewing evaluation as a resource drain and distraction; (2) viewing evaluation as an external, promotional tool; and (3) viewing evaluation as a strategic management tool (see the Appendix for the rotated component matrix). To better understand this range in attitudes, we have identified three organizations that exemplify each viewpoint.
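To show the mechanics behind this typology, the following is a minimal sketch (not the authors' code) of a principal components analysis with a varimax rotation, assuming the 14 attitude items sit in a pandas DataFrame named responses (one row per organization, items scored 1 to 5, loaded from a hypothetical CSV); the varimax helper is a standard textbook implementation.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (p items x k factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3 - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        criterion_new = s.sum()
        if criterion_new < criterion_old * (1 + tol):
            break
        criterion_old = criterion_new
    return loadings @ rotation

# "responses": one row per organization, one column per attitude item (scored 1-5).
responses = pd.read_csv("attitude_items.csv")           # hypothetical file
z = StandardScaler().fit_transform(responses)           # standardize the items

pca = PCA(n_components=3).fit(z)                        # extract three components
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)  # unrotated loadings
rotated = varimax(loadings)                             # varimax-rotated loadings

print(pd.DataFrame(rotated, index=responses.columns,
                   columns=["Factor 1", "Factor 2", "Factor 3"]).round(3))
```

The rotated loading matrix reported in the Appendix is the kind of output this sort of analysis produces, with each of the 14 items loading most strongly on one of the three factors named above.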

Case Example 1: Evaluation as a Resource Drain and Distraction

In preparation for our research, we conducted an interview and site visit with a nonprofit organization that offers domestic violence services, community enrichment programming, and child care. This organization is a local affiliate of an international women's organization; it has seven full-time staff members and approximately 25 part-time staff members. Most of their revenue comes from state and federal grants and contracts, with some fee-for-service income, foundation funding, United Way funding, corporate donations, and special event fundraising. Their annual operating budget is close to $800,000.

According to the executive director of this organization, the responsibility for pulling together evaluation data and information about the programs that the organization runs rests with her. When we asked her to describe the process, she replied, "You will love this. I run an organization, and short of the Excel program, everything else is hand-tallied." She went on to explain that her organization was very small. They do not receive any funding for evaluation, and funds for administration are very hard to come by. Data collection is done by the program staff, and she said that anybody coming in to be interviewed or hired needs to recognize that data collection and reporting to funders is unfortunately part of the job.


[Figure 4.2. Uses of Evaluation Data. Bar chart of the percentage of organizations reporting each use: to help make changes in existing programs, 93%; to report to the board, 82%; to help us establish program goals or targets, 75%; for strategic planning purposes, 69%; to help develop new programs, 68%; to make decisions about staffing, 68%; to report to funders, 67%; to make decisions about fiscal allocations, 63%; for outreach and public relations, 59%; to help us get new funding, 53%.]

Table 4.2. Factor Analysis of Attitudes

Evaluation as a Resource Drain and Distraction
• The amount of time and money we spend on program evaluation is not worth it
• Much of what we do for program evaluation is symbolic
• We simply don't have the knowledge or expertise to do quality program evaluation
• Spending time and resources on evaluation takes away from what we do best: provide services
• Program evaluation requirements are just hoops that our funders make us jump through

Evaluation as an External Promotional Tool
• Our funders are very interested in program evaluation
• We do program evaluation because our funders require it
• We do program evaluation because it helps us to promote ourselves to funders, other stakeholders, and the community
• We use our evaluation results to help us look good to funders and attract resources

Evaluation as a Strategic Management Tool
• Program evaluation helps us to improve the quality of services we deliver
• Program evaluation helps us to make strategic choices about our organization's future
• Program evaluation is an essential part of our strategic planning processes
• Program evaluation is an integral part of our management practices
• We often use the results from our program evaluation efforts to make organizational or programmatic decisions


When we asked her to characterize her evaluation efforts, she used the word "piecemeal" and explained that the evaluation efforts are program-specific and governed by the funders' requirements. She explained that although her organization tries to specify performance goals in grant applications, she has a hard time getting her staff to think in terms of outcomes. She went on to explain that even though she has sent a number of staff members to training that the United Way offers, the training has not been very helpful. Not only do the trainings "take staff away from the work that they should be doing," but staff also cannot relate to the examples used in the training. She explained that the last time her staff went to a United Way training session, the session was being held out of town, "which costs money," and the example the trainers used had to do with specifying outcomes for an alcoholism rehab facility. She said, "That's a great program, but we don't do that here." Another challenge she reported encountering is that some of her staff think outcomes measurement is "a waste of time" or "one more thing that is popular with management or popular with funders." She then asked us, "Do you remember total quality management?"

We asked the executive director to tell us more about her experiences with the United Way. She described the relationship as a "dual-edged sword." On the one hand, she was glad for the United Way's recognition that her organization is viable, vital, and valuable to the community. On the other hand, she perceived the budget process, reporting requirements, and blackouts in fundraising to be very onerous. She said, "We go through an incredible number of hoops, and sometimes I wonder if it is worth the money they give you." She said she does it "more for the seal of approval." She went on to say, "The United Way does wonderful things, especially in small communities where that dollar makes a big difference in terms of programming, but they put you through a lot more than any other funder has ever put us through."

We asked if the board of directors was involved in any evaluation efforts. She responded by explaining that her agency had just completed a strategic planning process, something the agency had been talking about doing for the past five years. The primary focus during the two-day retreat was on resolving physical plant issues, cultivating support for the staff, and identifying strategies for engaging young people. "The board," she said, "has not even gotten to evaluation." Evaluation is something that is staff- and funder-driven.

Finally, we asked the executive director about the types of resources her organization might need to improve its current evaluation efforts. At the top of her list was more training on how to do outcome measurement in her specific setting. This was followed by the wish that funders would work in partnership with her organization to help her come up with a way of doing evaluation that is realistic and feasible, while at the same time helping her organization not feel threatened or defensive.



Case Example 2: Evaluation as an External Promotional Tool

We conducted an interview and a site visit with a nonprofit organization that offers a range of services for youth, including individual and family counseling, juvenile justice programs, a teen parenting program, and substance abuse prevention and treatment. The organization has seven departments and operates community- and school-based programs at multiple sites in four counties. The organization serves almost 1,500 children, youths, and families, with an annual operating budget of close to $1 million. Eighty-five percent of the budget comes from federal, state, and local government sources; 10% from foundation grants; and 5% from donations and fees.

We began our visit by talking with one of the program directors, who explained that the funding environment for youth services appeared to be changing in recent years. "Funds are shifting in different directions right now; whereas before there was a lot of prevention money out there, it now seems like [funding] is going toward literacy and school-based programs." She went on to describe how the past year had been "really rough, knowing that we won't be refunded for some things." She pointed to a stack of papers on her desk and said, "This whole pile is grants that we are working on." When we asked if these were new grants she was applying for, she replied, "Yes. It is so much work. You spend three or four weeks working on a grant. You send it out and if you don't get it, you don't get it. . . . It seems to be getting harder. I don't want to be really negative, but this year has been very difficult."

We then asked one of the department heads to describe the evaluation practices of the organization. She replied, "It is pretty formal. It is that way because our funders require it." She explained that her organization relies on a range of data collection tools, including parent surveys, participant surveys, focus groups, and attendance lists. The organization tracks the grades of the program participants, as well as units of service delivered, the types of services delivered, to whom, and over how much time. For one of the family counseling programs, the organization has also set up a comparison group, offering families gift cards for groceries in exchange for filling out an eight-page family profile form and pre- and post-surveys.

She explained that the evaluation components for many of their programs are often negotiated between her organization and the funder: "We decide what the evaluation is going to look like and we submit as part of our proposal. Then, if [the federal government agency] doesn't like it, they will call us and say, 'We need you to fix this or tweak that' or, 'We are not quite sure about this.'"

The organization relies on an external evaluator from a local university for the evaluation of one of the family programs, but internal staff are responsible for gathering evaluation data for the rest of the programs "by hand, on forms." The completed forms are then forwarded to a program coordinator, who enters the data into a database, analyzes the data using SPSS, and writes up the reports for funders.



When asked to reflect on the process, the director replied, "It is a challenge, but I really have learned to appreciate the value of it this year." She explained that she used the data they collect to write new grants, write annual reports, and help her get more funding. "We are all about the money," she said. She went on to say:

[Our evaluation efforts] help me to be able to pitch a program. I can have statistics to back it up. That is really what my funders want to hear. . . . I think the reason why we do most of this [evaluation] is because we are required to by our funders. Over the last year or two years, have I seen the benefit because I can use the results? Yes. But, that is the main reason why so much thought goes into it.

As she reflected on this further, she said:

We are so busy implementing programs, developing new materials, doing recruitment, doing the budget, and all of this stuff, to actually have the time to be able to look at the data and see how it is working and why, and to be able to make the changes? That is a luxury around here. There is just not a lot of time.

She went on to say that recently there had been some interest in giving the executive director "little snippets about what our programs do," so he can "show the community what is going on and that we are heading in the right direction." In short, this is an organization that is conducting evaluation at the request of funders. Yet even though evaluation may be a necessary requirement, this organization has figured out that evaluation data can help in their efforts to seek additional funding and community support.

Case Example 3: Evaluation as a Strategic Management Tool

We conducted an interview and site visit with a nonprofit organization that provides community development and antipoverty services in a low-income community. Founded in 1966, this organization serves approximately 5,000 households per year and provides a range of services in the areas of child and family development, crisis intervention and support, housing, energy, and employment. The organization employs almost 200 people and has an annual operating budget of close to $6.5 million, which comes from more than 50 different funding sources. While the organization receives most of its funding from federal, state, and local government contracts, it also receives some United Way funding and foundation grants, as well as corporate and individual donations. When we talked with the executive director, she explained that, in recent years, like many nonprofit organizations, her organization has moved to more performance-based contracting, where "outcomes are the catchphrase."

Most of the evaluation that this organization conducts is internal evaluation, conducted by the management team and funded as part of "administrative overhead." Occasionally, however, the organization works with an external evaluator on a few federal programs that they run, such as Early Head Start and Head Start. The management team is very experienced and well trained. The executive director has a master's degree in public administration and has worked in the human service field for 35 years. The directors of each department have, on average, 15 years of experience.

Each director is responsible for setting up a data collection system within his or her department that collects all of the information he or she needs in order to produce the reports for the particular funding sources. This information is then gathered and entered in a master database, designed uniquely for this agency, where information about each client is maintained. From this database, reports can be generated: individually by client, individually by program, or in aggregate for the whole organization. Copies of all of the reports that are sent to the funders are sent to the executive director.
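The agency's database was custom-built and is not described in detail, but a simple relational layout along the lines of the hypothetical sketch below would support the by-client, by-program, and aggregate reports the executive director describes; all table and column names here are invented for illustration.

```python
import sqlite3

# Hypothetical schema: clients, programs, and service records that link them.
conn = sqlite3.connect("agency_master.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS clients (
    client_id    INTEGER PRIMARY KEY,
    name         TEXT,
    zip_code     TEXT
);
CREATE TABLE IF NOT EXISTS programs (
    program_id   INTEGER PRIMARY KEY,
    name         TEXT
);
CREATE TABLE IF NOT EXISTS services (
    service_id   INTEGER PRIMARY KEY,
    client_id    INTEGER REFERENCES clients(client_id),
    program_id   INTEGER REFERENCES programs(program_id),
    service_date TEXT,
    units        REAL,
    outcome_met  INTEGER  -- 1 if the specified outcome target was reached
);
""")

# Aggregate report for the whole organization: people served and units by program.
report = conn.execute("""
    SELECT p.name,
           COUNT(DISTINCT s.client_id) AS people_served,
           SUM(s.units)                AS units_delivered,
           AVG(s.outcome_met)          AS share_meeting_outcome
    FROM services s JOIN programs p USING (program_id)
    GROUP BY p.name
""").fetchall()
print(report)
```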

The board of directors also has a program planning and evaluation committee that meets regularly to review the performance contracts in order to assess how well the programs are doing relative to the performance targets that have been established. According to the executive director, the committee looks at all types of information for each program: the number of people being served, the demographics, the units of service delivered, and performance on specified outcome measures.

In addition, the organization conducts "client satisfaction surveys" every three years by relying on students from two of the local colleges to develop and administer the surveys. The organization not only uses this survey as an opportunity to assess client satisfaction in terms of the quality of the service (such as "How were you treated? Were you served promptly?") but also uses this as an opportunity to identify gaps in service within the community (by asking questions such as "Was this the service you needed?" and "What other services might you need that this agency is not currently providing and are not available elsewhere in the community?"). According to the executive director, this is because one of the goals of the organization is to identify those gaps in service, identify the obstacles that are keeping people from reaching self-sufficiency, and create new services. The organization also holds focus groups annually, where they bring in other service providers, local officials, and clients (separately in groups) to identify the strengths and weaknesses of the agency.

According to the executive director, "all of this then goes into our strategic planning process." The organization has a five-year strategic plan, which they review continuously. When we asked the executive director what her role was in all of this, she replied, "It is my job to see that it all gets done, to see that it happens, and to keep us on track. I also work very closely with the board, who reviews all of it." She explained:

I think that it is really important to our board of directors to see that we are accomplishing what we thought we would. This agency has been very proactive. If we come up with a program, and it is not getting results, then we end the program. We want to see people make changes in their lives. If what we are doing isn't helping them do that, then we need to be doing something different. Our resources are limited, so we need to do what works for folks.

She went on to say:

One of the toughest decisions that we had to make, as a board and an organization, was [when we had] a program called the [Program Name]. It was a program where low-income craft people could put their items on consignment in our shop. The program had been in operation for a number of years. The truth of the matter was it was costing us more to run the program than the money we were returning to crafters by selling their items. We would have been further ahead if we just wrote them a check. We really looked at this. [The program] wasn't doing what we had hoped. It wasn't providing significant amounts of money to help bring [the crafters] out of poverty. And so we said, "It is not working," and we closed the program. Evaluation helps us make those decisions. Without evaluation, we don't know how effective we are. We don't just want to be doing programs just to do them.

The executive director also explained that her agency used evaluation data in other ways too, which included helping to motivate staff by framing their work in terms of "These are the things we want to accomplish" and "This is the progress that we have made," and helping to motivate clients by saying, "Look, this is where you were a year ago on this scale; here is where you are now."

When we asked the executive director about the extent to which funders and other environmental forces affect what her organization does with respect to evaluation, she replied:

We would do this even if we didn't have to do it. As long as I am here, we would do this because it is a good management tool and we need to do it. We use the information to accommodate the funding sources by providing them information in their reports in the format that they feel they need. But we really see it as an internal management tool, rather than to satisfy funders.

She went on to say:

My philosophy personally is that the community, the foundation, or the state or the federal government, or the private donor has entrusted me with their money to do the best job we can possibly do. If we don't evaluate, we don't know that we are doing the best job that we can possibly do. We look at evaluation. We look at it probably more than [the funders] do. They want more numbers, and outputs or outcomes, where we also look at the quality of what we are providing. That is something that I really feel has been lost in the outcome-based funding. As long as the person reaches the next benchmark, that is all that is important to them. To me, there is a quality issue too.



Implications and Recommendations

The findings from this study have important implications for nonprofit managers, evaluators, and funders (see Table 4.3). One of the most important, if not obvious, observations is that most of the evaluation that nonprofits are doing is internal evaluation, where they are relying on executive or program staff. Very few organizations have the luxury of having separate funding, dedicated staff, or external evaluators for data collection and data analysis. Yet most make a sincere attempt to evaluate at least some of their programs and organizational activities, and they do so with considerable staffing and funding constraints. Given that funders are going to continue to be interested in evaluation and performance measurement, nonprofit managers should do what they can to earmark specific resources for improving their evaluation capacity, much in the same way that they do for strategic planning, fundraising, and marketing. Even small, systematic investments to upgrade computer hardware and software will pay off. The same can be said for investing in training and education for staff regarding key evaluation concepts and for cultivating an interest and experience in evaluation at the board level.

Table 4.3. Characteristics of the Viewpoints

Evaluation as a Resource Drain and Distraction
• Staff in need of training and education about key evaluation concepts and tools
• A perception that the current focus on outcomes and evaluation is just another fad
• A board in need of training about roles and responsibilities
• An organization with a range of capacity issues
• An organization in need of a low-cost, user-friendly way of gathering and tracking evaluation data

Evaluation as an External Promotional Tool
• Funders that are interested in evaluation results
• An organization experiencing some degree of financial uncertainty
• The capacity to carry out data collection and summarize the results
• A focus on using evaluation data for report writing (compared to making decisions)
• Recognition that evaluation data can be used to promote specific programs and the organization

Evaluation as a Strategic Management Tool
• A focus on using evaluation data to make decisions
• A strategic plan that is reviewed regularly
• A master database that tracks evaluation data, including demographics, outputs, and outcomes, and is able to generate meaningful reports
• An active program planning and evaluation committee as part of the board that reviews evaluation data regularly
• Regular efforts to gather client satisfaction data, gaps in services and community needs, and agency strengths and weaknesses

As we report elsewhere (Carman, 2007; Carman & Millesen, 2005), evaluation does not necessarily have to be expensive or intrusive. In fact, much of today's evaluation can be done using personal computers with software that is relatively inexpensive and widely available. One of the organizations profiled here had a database management system created specifically for them, but most nonprofit organizations should be able to use existing software to maintain evaluation data.

Other findings from this study also show how nonprofit managers can maximize the benefits of evaluation. For example, using evaluation data to inform strategic planning helped to keep one of the case study organizations on track to fulfilling its mission and goals. This organization also used evaluation to highlight and celebrate individual and organizational achievements. Another case study organization made a considerable effort to use the data they collect to improve the quality of their grant applications and market themselves to the community. Even the organization that was the most negative about evaluation recognized the importance of hiring employees who understand that collecting evaluation data for funders is part of the job.

Furthermore, there are different models of evaluation, and each has its own purpose. Not all evaluation has to be focused on "outcomes." From the experiences of the three organizations we profiled here, it seems that the recent attention and focus on outcomes may be overshadowing the many benefits that can be found with other types of evaluation and performance measurement activities, such as cost-benefit analysis, customer satisfaction surveys, and participatory evaluation.

Although these data also suggest that many nonprofits have limited interaction with external evaluators, we believe that evaluators can still play an important role by being thoughtful about their interactions with nonprofit organizations. While some evaluators clearly embrace and practice participatory evaluation, empowerment evaluation, and cooperative evaluation (Fetterman & Wandersman, 2004; O'Sullivan, 2004; Whitmore, 1999), evaluators should make the most of their interactions with nonprofit organizations and use them as opportunities for imparting training, technical assistance, and knowledge whenever possible.

The findings from this study also have a number of implications for funders. For example, given that some nonprofit organizations are still struggling with the logistical and technical aspects of program evaluation, funders are uniquely positioned to support nonprofit organizations in ways that help them to invest in computers, networks, data management software, and PDAs. We would suggest encouraging nonprofit organizations to allocate specific funds for these purposes within their grant applications or budget proposals.

Funders are also uniquely positioned to offer support for evaluation training that better meets the needs of today's nonprofits. Instead of offering group workshops for program staff about how to develop outcome measures (which are so common today), funders might want to consider providing more opportunities for training staff on how to use affordable computer software for database management. Then, once staff understand the different concepts behind data management (relational data, queries, report generation, and so on) and know how to create, analyze, and manage data, staff can go on to participate in various types of training that would help them learn more about key evaluation concepts, such as the difference between outputs and outcomes, specifying outcome indicators, and how to develop an evaluation system that meets their needs.

Internally, funders might also want to reexamine the types of data and reports that they require their grantees to collect and submit, with an eye toward moving away from simple reports that just "focus on the numbers." Instead, funders might consider asking grantees to demonstrate how they are using the data they are collecting in meaningful ways, such as helping them make specific program-related decisions or staffing decisions, incorporating data into their public outreach and public awareness campaigns, or helping secure and leverage additional funding by including this information in future grant proposals.

Externally, funders are uniquely positioned to reframe the idea that evaluation is the ultimate accountability tool. In fact, given our research, we believe that funders, as well as other stakeholders, should not be asking nonprofit organizations to conduct evaluation so they can simply demonstrate that they are doing good work. Rather, funders, as well as other stakeholders, should be asking nonprofit organizations to conduct evaluation so that they can do better work.


Appendix: Factor Analysis (Rotated Component Matrix)

Survey questions, with loadings on Components 1, 2, and 3:

1. Program evaluation helps us make strategic choices about our organization's future: .738, -.181, .254
2. Program evaluation is an essential part of our strategic planning processes: .794, -.325, .056
3. Program evaluation helps us improve the quality of services we deliver: .883, -.156, .033
4. We often use the results from our program evaluation efforts to make organizational or programmatic decisions: .779, -.282, .067
5. Program evaluation is an integral part of our management practices: .528, -.465, .312
6. The amount of time and money we spend on program evaluation is not worth it: -.278, .348, .056
7. Much of what we do for program evaluation is symbolic: -.183, .753, .044
8. We simply don't have the knowledge or expertise to do quality program evaluation: -.230, .750, -.290
9. Spending time and resources on evaluation takes away from what we do best: provide services: -.254, .760, .062
10. Program evaluation requirements are just hoops that our funders make us jump through: -.525, .571, .007
11. We do program evaluation because our funders require it: .054, .263, .708
12. We use our evaluation results to help us look good to funders and attract resources: .016, .075, .779
13. We do program evaluation because it helps us promote ourselves to funders, other stakeholders, and the community: .050, -.147, .678
14. Our funders are very interested in program evaluation: .191, -.233, .636

Note: Extraction method: principal component analysis. Rotation method: varimax with Kaiser normalization; rotation converged in five iterations.


References

Alie, R. E., & Seita, J. R. (1997). Who's using evaluation and how? New study gives insight. Nonprofit World, 15(5), 40–49.

Beam, D. R., & Conlan, T. J. (2002). Grants. In L. M. Salamon (Ed.), Tools of government (pp. 340–380). New York: Oxford University Press.

Braverman, M. T., Constantine, N. A., & Slater, J. K. (2004). Foundations and evaluation: Contexts and practices for effective philanthropy. San Francisco: Jossey-Bass.

Carman, J. G. (2004). Explaining program evaluation practice. Paper presented at the Southeastern Conference for Public Administration, held October 3 at the Hilton, University Place, Charlotte, NC.

Carman, J. G. (2007). Evaluation practice among community-based agencies: Research into the reality. American Journal of Evaluation, 28(1), 60–75.

Carman, J. G., & Millesen, J. L. (2004). Evaluation theory and practice: A report from the field. Paper presented at the 6th International Conference of the International Society for Third Sector Research, held July 12 at Ryerson and York University, Toronto, Canada.

Carman, J. G., & Millesen, J. L. (2005). Nonprofit program evaluation: Organizational challenges and resource needs. Journal of Volunteer Administration, 23(3), 36–43.

Cutt, J., & Murray, V. (2000). Accountability and effectiveness evaluation in non-profit organizations. New York: Routledge.

DeHoog, R. H., & Salamon, L. M. (2002). Purchase-of-service contracting. In L. M. Salamon (Ed.), Tools of government (pp. 319–339). New York: Oxford University Press.

Fetterman, D., & Wandersman, A. (Eds.). (2004). Empowerment evaluation principles in practice. New York: Guilford Press.

Fine, A. H., Thayer, C. E., & Coghlan, A. (1998). Program evaluation practice in the nonprofit sector. Washington, DC: Innovation Network.

Gronbjerg, K. A. (1993). Understanding nonprofit funding. San Francisco: Jossey-Bass.

Gronbjerg, K. A., Lewis, A., & Campbell, P. (2007). Indiana nonprofit employment: 2007 report. Retrieved October 29, 2007, from http://www.indiana.edu/~nonprof/results/inemploy/innonprofitemploy07.htm

Hoefer, R. (2000). Accountability in action? Program evaluation in nonprofit human service agencies. Nonprofit Management and Leadership, 11(2), 167–177.

Imagine Canada. (2005). Evaluation practices in Canadian voluntary organizations. Retrieved July 20, 2007, from http://www.nonprofitscan.ca/Files/vserp/vserp_fact_sheet.pdf

McNelis, R. H., & Bickel, W. E. (1996). Building formal knowledge bases: Understanding evaluation use in the foundation community. Evaluation Practice, 17(1), 19–41.

Morley, E., Vinson, E., & Hatry, H. P. (2001). Outcome measurement in nonprofit organizations: Current practices and recommendations. Waldorf, MD: Independent Sector.

OMB Watch. (1998). Measuring the measurers: A nonprofit assessment of the Government Performance and Results Act. Washington, DC: Author.

O'Sullivan, R. G. (2004). Practicing evaluation: A collaborative approach. Thousand Oaks, CA: Sage.

Patrizi, P., & McMullan, B. J. (1999). Realizing the potential of program evaluation. Foundation News and Commentary, 40(2), 30–35.

Smith, S. R., & Lipsky, M. (1993). Nonprofits for hire: The welfare state in the age of contracting. Cambridge, MA: Harvard University Press.

Tassie, B., Murray, V., & Bragg, D. (1996). Rationality and politics: What really goes on when funders evaluate the performance of fundees? Nonprofit and Voluntary Sector Quarterly, 25(3), 347–363.

Tassie, B., Murray, V., & Cutt, J. (1998). Evaluating social services: Fuzzy pictures of organizational effectiveness. International Journal of Voluntary and Nonprofit Organizations, 9(1), 59–79.

United Way of America. (2000). Agency experiences with outcome measurement. Retrieved July 21, 2007, from http://national.unitedway.org/files/pdf/outcomes/agencyom.pdf

United Way of America. (2003). Outcome measurement in national health & human service and accrediting organizations. Retrieved July 21, 2007, from http://national.unitedway.org/files/pdf/outcomes/natlorgsreportfinal.pdf

Whitmore, E. (Ed.). (1999). Understanding and practicing participatory evaluation. New Directions for Evaluation, no. 80. San Francisco: Jossey-Bass.

JOANNE G. CARMAN is an assistant professor of political science at the University of North Carolina at Charlotte, where she teaches in the Master of Public Administration program and serves as the advisor and coordinator for the Graduate Certificate in Nonprofit Management.

KIMBERLY A. FREDERICKS is an assistant professor of management and director of the Health Services Administration program at the Sage Colleges.
