Power, Politics, and Change: How International Actors Assess Local Context

JUNE 2010

Jenna Slotin, Vanessa Wyeth, and Paul Romita

INTERNATIONAL PEACE INSTITUTE



Cover Photo: Enrique Ter Horst (third from right), Special Representative of the Secretary-General for Haiti, looking at a map of Haiti with members of the Canadian Battalion of UN Stabilization Mission in Haiti, May 13, 1997. © UN Photo/Eskinder Debebe.

The views expressed in this paper represent those of the authors and not necessarily those of IPI. IPI welcomes consideration of a wide range of perspectives in the pursuit of a well-informed debate on critical policies and issues in international affairs.

IPI Publications

Adam Lupel, Editor
Ellie B. Hearne, Publications Officer

Suggested Citation:

Jenna Slotin, Vanessa Wyeth, and Paul Romita, "Power, Politics, and Change: How International Actors Assess Local Context," New York: International Peace Institute, June 2010.

© by International Peace Institute, 2010

All Rights Reserved

www.ipinst.org

ABOUT THE AUTHORS

JENNA SLOTIN is a Research Fellow working on peacebuilding and state fragility at the International Peace Institute (IPI).

VANESSA WYETH is a Senior Policy Analyst working on peacebuilding and state fragility at IPI.

PAUL ROMITA is a Policy Analyst working on peacebuilding and state fragility at IPI.

ACKNOWLEDGEMENTS

Research for the Understanding Local Context project has been generously supported by the Carnegie Corporation of New York and the United Kingdom's Department for International Development (DFID). The project was conducted as part of IPI's work on peacebuilding and state fragility, which fits within IPI's umbrella program, Coping with Crisis, Conflict, and Change. The latter is generously supported by the governments of Finland, Luxembourg, Norway, Spain, Sweden, Turkey, and the United Kingdom.

IPI thanks the many experts from various governmental and nongovernmental agencies and organizations—including the Australian Agency for International Development (AusAID), the Clingendael Institute, the European Commission, the Canadian International Development Agency, the Institute of Development Studies, the Overseas Development Institute, the Dutch Ministry of Foreign Affairs, the Swedish International Development Cooperation Agency, DFID, the UK Foreign and Commonwealth Office, the UK Stabilisation Unit, the UN Development Program, the US Agency for International Development, the US Office of the Coordinator for Reconstruction and Stabilization, and the World Bank—who generously shared their time and expertise through a series of in-depth interviews between December 2008 and July 2009. We are also very grateful to the participants in the June 11 and 12, 2009, expert workshop "Understanding Local Political Context: The Use of Assessment Tools for Conflict-Affected and Fragile States" in New York for sharing their time and knowledge. We would particularly like to thank Marco Mezzera, Babu Rahman, and Tjip Walker for their support and assistance from the earliest days of the project.

Thanks are also due to Mandy Gunton, Vanna Chan, Ellena Fotinatos, Joyce Pisarello, Liat Shetret, and Melissa Waits for their research efforts in the project. We also thank Francesco Mancini for supervising the project and contributing his insight and expertise. An especially large debt of gratitude goes to Josie Lianna Kaye, whose extensive research helped to underpin much of the content of this report.

CONTENTS

Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Approach and Methodology. . . . . . . . . . . . . . . . . . . . . 4

Background and Evolution of Assessment Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

CONFLICT ASSESSMENTS

GOVERNANCE ASSESSMENTS

ASSESSING STATE FRAGILITY

WHOLE-OF-GOVERNMENT APPROACHES AND THEIR RELATIONSHIP TO ASSESSMENTS

Extent of Influence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

CLARITY OF PURPOSE

TIMING AND TIMEFRAMES

INTERESTS AND INCENTIVES

PEOPLE AND COMPETENCIES

LINKAGE BETWEEN ASSESSMENT AND PLANNING

THE INTERAGENCY CONUNDRUM

Conclusion and Recommendations . . . . . . . . . . . . . . 17

Annex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

Executive Summary

In recent years, donor governments and international organizations such as the UN and the World Bank have developed a number of frameworks and tools to assess governance, conflict, and fragility. This report argues that there are multiple, and often contradictory, objectives underlying the development and use of such assessment tools. Underpinning this multiplicity of objectives are deep assumptions, many of which remain unstated. Different agencies tend to define the problem through their own institutional lens, and the assessment tools they create reflect these biases. As the Organisation for Economic Co-operation and Development—Development Assistance Committee's (OECD-DAC's) work on governance assessments has pointed out, the assumptions underlying governance assessment methodologies are usually not explicit, but tend to measure governance against existing norms in OECD countries. Similarly, the different approaches to conflict assessment adopted by major bilateral and multilateral actors demonstrate conceptual and intellectual differences in their understanding of the nature of conflict; the same may be said of various donors' approaches to assessing state fragility.

Overall, we found that experience with assessment tools has produced mixed results as far as impacts on decision making, planning, and programming are concerned. While the importance of producing good-quality analysis cannot be overstated, the extent of an assessment's influence is rarely, if ever, solely determined by the content or quality of the analysis. The use of assessments appears to be determined by five key factors:

1. Clarity of purpose: Assessments may serve multiple purposes simultaneously. Different agencies within donor governments and multilateral organizations (or even different departments within the same agency) may have varying perspectives on the purpose and objectives of an assessment, how it should be conducted, its target audience, and the use of its results. The key is to clearly establish the purpose and expectations of the assessment from the outset to ensure that the choice of tool and process is appropriate.

2. Timing and timeframes: Timing appears to be a significant determinant of whether and how the results of an assessment are used. There is a tension between effectively feeding into planning cycles and responding to changing circumstances on the ground to inform time-sensitive decision making. Whatever the need, if the assessment misses the window of influence, it is likely to have little impact.

3. Interests and incentives: Individual and institutional interests and incentives have a significant impact on how effectively assessments are conducted, as well as on how the results of the analysis they produce are used. The importance of obtaining buy-in from field-office and embassy staff is often noted as a major determinant of the impact of an assessment. If scope for dissent from, or change within, a given policy, strategy, or program is limited, then receptivity to the results of an assessment is also likely to be limited.

4. People and competencies: Certain skills and competencies appear to be particularly valuable in generating an assessment that can be easily understood and effectively used. A focus on these competencies—including a mix of expertise, communication, leadership, and facilitation skills—may be more important than the tool itself. External consultants are often used to conduct assessments, but they come with both benefits and drawbacks.

5. Linkage between assessment and planning: There is frequently inadequate attention paid to how assessment tools fit into broader strategic planning processes. Consequently, assessment processes are often one-off exercises, instead of efforts to collect and update analysis at regular intervals that can feed into planning cycles.

Where interagency or whole-of-government planning is the primary objective, practitioners tend to be agnostic about assessment methodology. In these cases, the emphasis is almost entirely on process—specifically, on how to use the information and analysis produced through an assessment to help different actors agree on a basic storyline of the situation. Here the goal seems to be "good enough" analysis and a basic level of agreement among the key players in order to provide the basis for a common strategy.

Based on these findings, this report offers the following conclusions and recommendations:

Be realistic about what assessments can accomplish: The use of assessments has to be situated within the broader universe of political analysis that informs decision making, much of which is done informally. If the aim is to strengthen international actors' understanding of local context, instruments such as formal assessment tools represent only one way to capture this type of knowledge, and should be supported by other methods. There is a tendency to think that, on the strength of better analysis, international actors will be able to design better interventions. However, good analysis does not always point to solutions. More often, a truly nuanced analysis reveals the limitations of donor options.

Ensure that assessments are linked more consistently to an overarching planning cycle: Ideally, assessments should inform planning and implementation, followed by robust monitoring and evaluation of impact, with the ability to make midcourse corrections or respond to new opportunities or constraints posed by in-country developments. Donors should develop clear protocols that set out how the results of an assessment should feed into planning or programming, what the appropriate link to monitoring and evaluation is, and how to disseminate the results of assessments to avoid their becoming one-off exercises. Fostering greater clarity about who the end users of assessments are, what their information needs are, and how to target and convey information so that it can be readily fed into planning and decision-making processes could also ensure that assessments are more effectively used.

Shift the focus from tools to developing a culture of analysis: International actors must guard against excessive focus on the tools themselves, to the neglect of ensuring that political analysis is streamlined throughout development-agency thinking. Over time, the focus needs to shift from the tools to promoting a culture of analysis. This has implications for the recruitment, training, and promotion of staff, as well as for the importance of cultivating multiple sources of information and analysis locally and internationally. The goal should be to promote an analytical culture in which staff are encouraged to "think politically" so that strategies, programs, and day-to-day implementation are regularly informed by contextual information. This means prioritizing country, as well as thematic, expertise.


Introduction

The last decade of theory and practice has yielded important lessons about international efforts to prevent conflict, build peace, and foster the development of effective, legitimate, and resilient states capable of meeting the needs and expectations of their populations. Above all, it is now commonly accepted that statebuilding and peacebuilding are deeply political, context-specific processes: to be effective, international responses to fragile situations must therefore grapple with local context.1 This means understanding several factors: historical trajectories of state formation; underlying drivers of conflict; the interaction of political and economic processes within the state; relationships among communities and between state and society; sources of legitimacy that the state may lay claim to (and competitors for those sources); informal means of distributing rights and resources and settling disputes; and capacities for peace that exist within and outside the state. It also means analyzing key actors and their values, interests, strategies, incentives, and relationships of power, and the impact that external influence can have on these dynamics.

International actors are increasingly aware that a clear-eyed understanding of the underlying dynamics in fragile situations can enhance their country strategy and programming, a sentiment reflected in the Organisation for Economic Co-operation and Development—Development Assistance Committee (OECD-DAC) admonition to "take context as the starting point."2 To this end, recent years have seen a proliferation of assessment tools and frameworks (in particular, conflict, governance, and fragility assessments) developed by donor governments and multilateral organizations such as the United Nations and the World Bank. These tools are intended, among other things, to identify the often intangible factors and relationships that drive political and economic behavior, as well as the points of friction, tension, and underlying grievances that could contribute to conflict.3 However, despite the innovative thinking that has gone into the development of these instruments, it is not clear what impact this type of analysis has had on international actors' strategies and programs.

In light of these trends, the International Peace Institute (IPI) undertook a project called Understanding Local Context, which aims to evaluate the conceptual frameworks, processes, and uses of assessment tools developed by major multilateral and bilateral actors, in order to inform work in conflict-affected and otherwise fragile environments. This builds on recent work conducted by the OECD-DAC's Network on Governance on donor uses of governance assessments, as well as other recent literature.4 The purpose of this project is twofold: first, we seek to analyze the ways in which international actors use conflict, governance, and/or fragility assessment tools to understand the local context in which they work, and the opportunities and challenges associated with these instruments. Second, we seek to investigate how these tools and the analysis they produce influence international actors' decisions, planning, and programs.

This report presents findings and general observations from the first phase of Understanding Local Context. The purpose of this phase was to conduct an informal analysis of conflict, governance, and fragility/stability assessment tools developed by bilateral and multilateral actors and to draw out common themes and challenges with regard to the evolution and use of these tools. It


1 This has been highlighted in work by IPI and others, including Charles T. Call and Elizabeth M. Cousens, "Ending Wars and Building Peace," Coping with Crisis Working Paper Series, New York: International Peace Academy, March 2007; Charles T. Call with Vanessa Wyeth, eds., Building States to Build Peace (Boulder, CO: Lynne Rienner, 2008); Bruce Jones and Rahul Chandran with Elizabeth Cousens, Jenna Slotin, and Jake Sherman, "From Fragility to Resilience: Concepts and Dilemmas of Statebuilding in Fragile States," research paper prepared for the OECD Fragile States Group, New York, March 2008; Roland Paris and Timothy Sisk, eds., The Contradictions of State Building: Confronting the Dilemmas of Post-War Peace Operations (London: Routledge, 2009).

2 OECD-DAC, "Principles for Good International Engagement in Fragile States and Situations," Paris, April 2007, available at www.oecd.org/dataoecd/61/45/38368714.pdf . Similarly, point 21a of the Accra Agenda for Action commits donors to "conduct joint assessments of governance and capacity and examine the causes of conflict, fragility and insecurity, engaging developing country authorities and other relevant stakeholders to the maximum extent possible." See Accra Agenda for Action, Accra, September 2008.

3 The development of assessment tools by governments and multilateral organizations has also been influenced by important efforts by nongovernmental organizations (such as the Collaborative for Development Action, CARE, and the Conflict Prevention and Post-Conflict Reconstruction Network) to develop conflict-analysis tools. See the Annex for a list of some of these tools.

4 OECD-DAC Network on Governance, "Survey of Donor Approaches to Governance Assessment," Paris: OECD-DAC, February 2008; OECD-DAC Network on Governance, "Donor Approaches to Governance Assessment Sourcebook," Paris: OECD-DAC, August 2008. See also Stefan Meyer, "Governance Assessments and Domestic Accountability: Feeding Domestic Debate and Changing Aid Practices," working paper, Fundación para las Relaciones Internacionales y el Diálogo Exterior (FRIDE), June 2009.

presents a first attempt to investigate the question of whether assessments actually affect decision making, planning, and programming. This report is organized in four parts. The first part describes our approach for the first phase of the project. Subsequently, in part two, we offer some background on the development of assessment tools and the interaction between the evolution of these tools and recent thinking on state fragility. The third part describes our findings regarding the extent to which formal assessments influence decision making, planning, and programming. We conclude with some final observations and recommendations.

Approach and Methodology

The inspiration for this project was the general conclusion that international interventions must be rooted in a nuanced understanding of local context. The project's aim was to better understand how donors operationalize this objective via the use of assessment tools and frameworks, and whether efforts to do so influence decision making, planning, and programming—and to what extent.

It is important to note that the term "assessment" is used by various actors to refer to several different types of exercises. Many formal governance, conflict, and fragility tools were originally developed to be used by a single agency or department. Yet "assessment" has also come to refer to interagency exercises that are perhaps more accurately characterized as assessment or planning processes. The many governance, conflict, and fragility assessment tools that have been developed also focus on different levels of analysis. At the global level, there are several initiatives that examine cross-country data to make comparisons and rank countries on a variety of indicators.5 At the national level, there are a variety of assessment tools that look at the overall country context to inform a general strategy or a specific program. Some of these can also be adapted to focus on a specific geographic area or sector within a country, while other tools have been specifically designed to assess needs and dynamics in a particular sector.6

This project builds on recent studies that seek to map and evaluate the content and use of a range of assessment tools, notably the 2008 study commissioned by the OECD-DAC on governance assessments, which represents perhaps the most comprehensive study across a range of qualitative and quantitative tools.7 UNDP and the German Development Institute (DIE) have also recently produced a "Users' Guide on Measuring Fragility," which provides a comparative analysis of the conceptual premises, methodologies, and possible uses of eleven cross-country fragility indices.8 Our objective was to complement this work with a focus on those tools that are used by bilateral and multilateral actors to derive a qualitative picture of country context and that speak to the drivers of state fragility as understood today.9 We found that this type of analysis was typically generated by qualitative conflict and governance assessments, a subset of which focuses on political-economy analysis, as well as by some newly adapted instruments that explicitly emphasize fragility.10

Initially, we mapped nine donors that have twenty-four tools among them, deliberately casting the net wide to capture a range of approaches.11 Subsequently, we pursued in-depth interviews with officers in the governments of the Netherlands, Sweden, the United Kingdom, and the United States, as well as independent experts who have


5 Examples of quantitative indices include internally developed ratings, such as the one used by USAID, and fragility indices developed by independent organizations such as the Fund for Peace's Failed State Index, George Mason University's Global Report on Conflict, Governance, and State Fragility, and Carleton University's Country Indicators for Foreign Policy. See the Annex for a more complete list of the fragility indices covered by initial desk research.

6 Examples of some sector-specific assessment tools include Transparency International's corruption ratings, Freedom House's democracy ratings, and sector-specific tools developed by donor governments, such as USAID's Anticorruption Assessment Framework, the US Interagency Security Sector Assessment Framework, and Germany's Security Sector Reform Assessment.

7 OECD-DAC Network on Governance, "Survey of Donor Approaches to Governance Assessment" and related documents.

8 UNDP and the German Development Institute (DIE), "Users' Guide on Measuring Fragility," UNDP/DIE, 2009.

9 For the purposes of this project, "bilateral and multilateral actors" refers to donor governments as well as the UN, the World Bank, and the European Commission. Assessment tools are typically developed and used by development agencies/departments within governments. However, recognizing the increasing prevalence of whole-of-government and integrated approaches, we also engaged officers in other parts of government.

10 Many of these conflict, governance, and fragility assessment tools draw on or otherwise incorporate some of the previously mentioned cross-country quantitative ratings and indices as part of their analyses.

11 This count includes the bilateral and multilateral actors' tools captured by the mapping. By way of background, we also looked at tools developed by nongovernmental organizations (NGOs) that have informed the development of tools used by donor governments and multilateral organizations. See the Annex for a full list of the tools covered by the initial mapping.

been involved in developing and applying various assessment tools.12 We chose to focus our in-depth analysis on the experience of these four donors because of the number of tools they have in use, the consistency with which they have pursued the development of assessment tools over time, their ongoing efforts to refine the tools, and their willingness to share experiences and lessons learned.

We focused on "broad" conflict, governance, and fragility/stability tools that are typically used to garner an overall understanding of country context, although they are also sometimes used to assess dynamics in a particular sector or geographic region. In the interests of time and space, we did not assess sector-specific tools (such as assessments of the security sector), although we recognize that many of these yield important information about context.

Following an initial round of interviews, IPI hosted a workshop in June 2009, which brought together twenty-four experts from donor governments, the United Nations, and independent research organizations with experience in designing and/or using assessment tools, as well as those who have been involved in using the analyses generated by these tools for decision making. The workshop offered a forum for a fruitful exchange of insights and further informed our analysis of international actors' efforts to grapple with local context. Most of the workshop participants also had experience with interagency or whole-of-government assessment processes, which enhanced the project's preliminary findings by highlighting several important lessons and observations related to the role of assessments in joined-up or integrated planning and decision making.

Background and Evolution of Assessment Tools

Conflict tools and governance tools have distinct origins with different underlying motivations. They tend to mirror the thinking on conflict or governance that was dominant at a given donor agency when the tools were developed. As thinking on state fragility has evolved, newer assessment tools have tended to reflect these emerging ideas and, in some cases, have served as vehicles to promote newer thinking on state fragility and alternative ways of approaching and understanding context.

CONFLICT ASSESSMENTS

The development of conflict assessment tools in the 1990s was motivated by a desire to understand local conflict dynamics and the effects of external action on those dynamics. This was spurred by the realization that "normal" development was not suited to conflict settings and in some cases was doing harm by feeding into or exacerbating tensions. As a World Bank report on conflict assessment noted, "the need for conflict analysis is underpinned by recognition that there is a strong link between effective development and the social and economic factors affecting the trajectory of conflicts."13 With the publication of early "do no harm" studies,14 development agencies began to acknowledge that since conflict is in part about the control of resources, injecting resources into a conflict country inevitably means involvement in the conflict.15 International actors began to see their interventions in conflict settings as both working on conflict—targeting and attempting to address the causes of armed conflict—and working in conflict—implementing assistance programs amid conditions of armed conflict.16 In order to improve both aspects of their interventions, conflict assessments were seen as essential to develop an understanding of conflict factors, actors, and dynamics, and to analyze the relationship between those dynamics and donor programming. Conducting conflict assessments, therefore, had two objectives: to better orient conflict programming in terms of prevention, mitigation, or reduction, and to make country and sector


12 We also interviewed officers in the Australian and Canadian governments who have been involved in deliberations on whether to develop formal assessment tools, but do not have such tools in use at the present time.

13 World Bank, "Effective Conflict Analysis: Overcoming Organizational Challenges," Report No. 36446-GLB, June 2006, p. 3.

14 Collaborative Learning Projects (CDA), "International Assistance and Conflict: An Exploration of Negative Impacts," Issue Paper, July 1994; Mary B. Anderson, Do No Harm: How Aid Can Support Peace—or War (Boulder, CO: Lynne Rienner, 1999).

15 Dan Smith, "Towards a Strategic Framework for Peacebuilding: Getting Their Act Together," Overview Report of the Joint Utstein Study of Peacebuilding, Oslo, January 2004.

16 Ibid.

programs conflict sensitive.17

Although they vary from donor to donor,18 conflict assessments tend to include analysis of structural and proximate causes of conflict and opportunities/capacities for peace (sometimes called drivers and mitigators of conflict). These tools have been heavily influenced by the vast academic scholarship produced in the 1990s on the so-called "root causes" of civil war, which could be roughly summarized in the debate between two competing interpretations—"greed" versus "grievance."19 In fact, several of these tools focus on the role of "greed and grievance" in fueling conflict, which emphasizes the capture of resources by government elites and nonstate actors (greed) and the sense of injustice experienced by sectors of the population (grievance) that believe they have been unfairly disenfranchised.20 Conflict assessments typically include analyses of actors, their interests and incentives, their access to resources, and the dynamics among them. Several of these tools, such as DFID's Strategic Conflict Assessment (SCA), USAID's Conflict Assessment Framework (CAF), Sida's Manual for Conflict Analysis, and UNDP's Conflict-Related Development Analysis, also include a final step that seeks to develop strategies or options for donor programming.

Some conflict assessments include an analysis of international responses and the way these responses interact with the dynamics of war and peace. For example, UNDP's Conflict-Related Development Analysis explores prospective international and regional strategies to manage security, political, economic, and social challenges in conflict-affected countries, in addition to analyzing possible national, subnational, and local approaches. Likewise, DFID's SCA assesses how international responses "interact with the dynamics of conflict" in the military/security, diplomatic, trade, immigration, and development spheres.21 Explicit in these frameworks is the assumption that external actors risk exacerbating conflict drivers if they do not have a good understanding of how their interventions may interact with local dynamics.22

Many of these tools are still in use and have been periodically updated since their inception. They are typically seen as analytical tools that can be applied to a particular sector, region, or country. However, they have been criticized for failing to respond to new research or thinking (e.g., missing the most recent findings on the role of inequalities in conflict) and for failing to reconcile competing arguments about the causes and nature of conflict.23 Conflict assessments are used variously as stand-alone exercises to inform country strategies, as a component of a larger assessment and planning process, or as a "lens" that is incorporated into other assessments by adding a series of steps to the assessment process or by including a conflict expert in an assessment team.

GOVERNANCE ASSESSMENTS

The development of governance assessment tools was sparked by a renewed interest in governance, as traditional arguments in favor of the role of the market and nonstate actors in economic growth gave way to the realization that poor development performance was due, at least in part, to the state's failure to provide an enabling environment for private actors and enterprises to flourish. This renewed interest had to be translated into policies and programming, and, for that, an assessment of the governance context in partner countries was required. Governance assessment tools were created by development agencies in parallel with the conflict assessments, and range from quantitative ratings of performance, such as the World Bank Country Policy and Institutional Assessment or the


17 In addition to providing information about context to inform conflict programming, conflict analysis is also used as an operational component of programs thataim to prevent or resolve conflict. In this respect, it is often used as a basis to promote dialogue and reconciliation among individuals and communities by helpingto develop a commonly accepted narrative of the conflict and its root causes. Given our project’s focus on international actors’ understanding of country context,this use of conflict analysis was not addressed by the study.

18 A range of governmental, multilateral, and nongovernmental actors use conflict assessments. However, our focus here is on donors’ use of these tools.19 See, for example, Paul Collier and Anke Hoeffler, “Greed and Grievance in Civil War,” Oxford Economic Papers 56, no. 4 (October 2004); Mats Berdal and David

Malone, eds., Greed and Grievance: Economic Agendas in Civil Wars (Boulder, CO: Lynne Rienner, 2000); and Ted Robert Gurr, “Containing Internal War in theTwenty-First Century,” in From Reaction to Conflict Prevention: Opportunities for the UN System, edited by Fen Osler Hampson and David Malone (Boulder, CO:Lynne Rienner, 2002), pp. 48-50.

20 See, for example, USAID, "Conducting a Conflict Assessment: A Framework for Strategy and Program Development," April 2005; DFID, "Conducting Conflict Assessments: Guidance Notes," January 2002; and Sida, "Manual for Conflict Analysis," January 2006.

21 DFID, "Conducting Conflict Assessments: Guidance Notes," p. 19.
22 Here the influence of Mary Anderson's "do no harm" approach is quite clear.
23 See, among others, Susan L. Woodward, "Do the Root Causes of Civil War Matter? On Using Knowledge to Improve Peacebuilding Interventions," Journal of Intervention and Statebuilding 1, no. 2 (June 2007).

US Millennium Challenge Corporation Scorecard, to qualitative analyses of the governance context. The latter are typically focused on political systems and public administration, and deal explicitly with corruption. Many of them also assess social governance issues including pro-poor spending and access to and effectiveness of service delivery.24 Like conflict analysis, some governance assessments map actors as well as their interests, incentives, and relationships. Within the group of qualitative-assessment tools,

the more traditional type of assessment frameworks focuses on how formal institutions are performing at a particular moment in time, often with embedded normative assumptions about what "functioning governance" means. As the OECD-DAC's work on governance assessments points out, assumptions underlying governance assessment methodologies are usually not explicit, but tend to measure governance against existing norms in OECD countries.25 Unlike conflict assessments, which tend to be conducted on an ad hoc basis, in response to a perceived need from headquarters or field officials, governance assessments are often mandatory for all countries to which a donor provides development assistance. This means that they are applied across a range of countries, from the more stable to those that are considered fragile or conflict-affected. It also means that while governance assessments provide an important basis for understanding country context, they are also often used explicitly as a platform for engagement and dialogue with partners on governance programming and reform. Although our project did not seek to evaluate

assessment tools in terms of how effective they are as a platform for engagement with partner governments, it is worth noting that the OECD's study of governance assessments has produced important findings regarding the role of partner governments

in assessments. The OECD-DAC Sourcebook notes that the majority of the tools included in the survey of governance assessments only involve partners to a limited degree.26 Other efforts seek to strengthen the capacity of governments to assess themselves, and citizens to assess their governments, such as International IDEA's State of Democracy assessment methodology.27 While the OECD-DAC acknowledges that international actors often have legitimate reasons for keeping assessments confidential, transparency is encouraged to the fullest extent possible. Moreover, due to the enormous burden that donors often place on partner governments by pursuing multiple separate assessments (one example being Zambia, where in 2008, ten different governance assessment processes were ongoing, not including the government's own annual report on the state of governance in the country),28 the OECD-DAC's findings recommend aligning with domestically driven governance assessments and/or pursuing greater harmonization among donors where assessments are meant to serve as a platform for dialogue on governance reform.29

ASSESSING STATE FRAGILITY30

In the last decade, a new focus has emerged on state fragility, initially spurred by post-9/11 concerns about weak states as "vectors" for terrorism and other global bads that threatened the interests and security of powerful Western countries.31 These concerns were paralleled by growing consensus, particularly in UN circles, on the centrality of the state for sustainable peacebuilding, and the need for effective and legitimate institutions to manage competition and conflict within society. The relevance of longstanding concerns about governance and the promotion of a governance agenda for development were central to this debate. Donors increasingly realized that so-called "fragile states" were lagging behind other low-income

JENNA SLOTIN, VANESSA WYETH, AND PAUL ROMITA 7

24 OECD-DAC Network on Governance, "Survey of Donor Approaches to Governance Assessment," Paris: OECD-DAC, February 2008.
25 See concept paper for OECD-DAC Network on Governance "Conference on Governance Assessment and Aid Effectiveness," London, February 20-21, 2008.
26 OECD-DAC Network on Governance, "Donor Approaches to Governance Assessments Sourcebook," p. 19.
27 See David Beetham, Edzia Carvalho, Todd Landman, and Stuart Weir, Assessing the Quality of Democracy: A Practical Guide (Stockholm: International IDEA, 2008).
28 OECD-DAC Network on Governance, "Survey of Donor Approaches to Governance Assessment," p. 17.
29 OECD-DAC Network on Governance, "Donor Approaches to Governance Assessments Sourcebook," p. 19.
30 This section draws on Vanessa Wyeth and Tim Sisk, "Rethinking Peacebuilding and Statebuilding in Fragile and Conflict-Affected Countries," Discussion Note for the OECD-DAC International Network on Conflict and Fragility, New York, June 2009.
31 See James Fearon and David Laitin, "Neotrusteeship and the Problem of Weak States," International Security 28, no. 4 (2004): 5-23. For a US-national-interest perspective, see Stephen D. Krasner and Carlos Pascual, "Addressing State Failure," Foreign Affairs 84, no. 4 (2005): 153ff., as well as the White House, "National Security Strategy of the United States of America," Washington, DC, 2002.

countries in progress on the Millennium Development Goals, and that normal development policies needed to be tailored to the unique challenges posed by state fragility. As fragile states have moved to the top of the

international policy agenda, bringing concerns about conflict and weak governance with them, thinking on fragility has evolved. A new focus on "statebuilding" emerged in the early 2000s, which initially tended to define the problem as one of weak state capacity, and emphasized building/strengthening institutional capacity in countries emerging from conflict.32 However, this argument did little to address the political nature of the challenges faced by conflict-affected and fragile states. More recent studies (notably work sponsored by the OECD-DAC) emphasize state-society relations, and locate fragility in the breakdown of the political process through which state and society negotiate mutual expectations and manage relationships of power.33 Increasingly, there is a focus on the notion of state "resilience" as the ability to cope with changes in capacity, effectiveness, or legitimacy, whether in the form of sudden shocks or crises, or through long-term erosion.34 This and subsequent work has placed the concept of legitimacy—as both a means to building state capacity and an end in itself—squarely at the center of the debate. As thinking on state fragility has evolved, violent

conflict has come to be seen simultaneously as a cause, symptom, and consequence of fragility, depending on the situation.35 A common undercurrent of state fragility is the vulnerability of the government to recurring crises of legitimacy and authority. Fragile states face heightened risk of conflict; many have experienced conflict in the recent past, whereas others may exhibit a breakdown in social cohesion and political processes to manage competition, putting them at

risk for violent conflict.36 Therefore, much of the thinking that has gone into understanding conflict, how to prevent it, and how to build peace in its wake has influenced the current agenda around state fragility. While these views on fragility have gained

considerable traction at a conceptual level,37 they pose significant challenges for practice, where a technocratic approach to delivering development assistance and an aversion to engaging in politics tend to prevail. Putting state-society relations at the center of the debate implies engaging with both formal and informal modes of governance at multiple levels of state and society. International actors generally have inadequate understanding of the various sources of legitimacy, the process by which states legitimate their authority, and pathways for strengthening state legitimacy in contexts where other actors and institutions (often informal, nonstate) compete with the state for legitimacy.38

In response to these challenges, a new generation of assessment tools using "political-economy analysis" has emerged. These tools explore the "underlying factors (including history, geography, sources of government revenue, deeply embedded social and economic structures) that shape formal and informal relationships between the state and organized groups in society, and thus the incentives that are driving politicians and policy makers and the potential pressures for or against progressive change."39 Although they are still commonly considered to be governance assessments, assessments based on political-economy analysis draw on some of the thinking that underlay earlier efforts at conflict analysis, combined with evolving thinking on state fragility. As described by one donor's internal guidance, they strive to "get beyond the façade of the state" and to grapple with the formal and informal relationships of power within society and between state and society.40 The most


32 See Roland Paris, At War's End: Building Peace After Civil Conflict (Cambridge, UK: Cambridge University Press, 2004); and Francis Fukuyama, State-Building: Governance and World Order in the 21st Century (Ithaca, NY: Cornell University Press, 2004).

33 See Jones, Chandran, et al., "From Fragility to Resilience"; and OECD-DAC, "Statebuilding in Situations of Fragility: Initial Findings," August 2008.
34 Ibid.
35 UNDP/DIE, "Users' Guide on Measuring Fragility."
36 Wyeth and Sisk, "Rethinking Peacebuilding and Statebuilding."
37 Following the finalization of "From Fragility to Resilience," which was an independently produced concept paper, OECD-DAC members produced a consensus document that summarizes their initial findings on statebuilding in fragile situations. See "State Building in Situations of Fragility, Initial Findings."
38 Eric Scheye, "The Statebuilding Misconception in the Fragile and Post-Conflict State," unpublished article, 2008.
39 Sue Unsworth, "Is Political Analysis Changing Donor Behavior?," September 2008, unpublished, p. 1.
40 See "Framework for Strategic Governance and Corruption Analysis (SGACA): Designing Strategic Responses Towards Good Governance," prepared by the Clingendael Institute for the Netherlands Ministry of Foreign Affairs, July 2008, p. 5.

prominent examples of assessment based on political-economy analysis include the UK's Drivers of Change Analysis, Sida's Power Analysis, and the Netherlands' Strategic Governance and Corruption Analysis (SGACA). In addition to being used to assess context, these

tools have also become vehicles to advance an alternative way of thinking about country dynamics and the role of external actors therein. Specifically, they are encouraging development-agency staff to see development through the lens of local actors' incentives for and against progressive change and to consider the realistic scope of external influence on those incentives.41 In this sense the tools are being used to promote a cultural shift within development agencies by fostering political-economy thinking among development practitioners who tend toward technocratic approaches. This highlights the interactive relationship between current thinking and assessment tools. The political-economy approach is increasingly seen as the most nuanced analytical approach to get at the diverse facets of fragility within a given country. However, it has also proved to be the most difficult analytical approach to translate into strategy development and operational guidance. The preoccupation with fragile states has also

motivated some donor governments to develop tools that explicitly focus on various dimensions of fragility, which in many cases means incorporating security concerns into more traditional governance analyses. The Stability Assessment Framework (SAF), developed for use by the Dutch government in 2005, reflected a greater concern with stability and security than previous conflict or governance tools, and built in a trend analysis to trace instability in a given country. Later, the Dutch saw the need to modify their Strategic Governance and Corruption Analysis (SGACA)—a tool that uses political-economy analysis—for fragile states. They did so by bringing elements of the SAF into the SGACA methodology. The result is their Fragile States Assessment Methodology (FSAM).42

In 2005 USAID also embarked on an effort to develop a Fragile States Assessment Framework (FSA). Intended to offer internal guidance to USAID for understanding fragility in selected countries, its purpose was to identify program responses within fragile states that would promote improvements in their governance and establish a foundation for their transformational development. Although field-tested in two countries, the FSA was never finalized. However, interviews indicate that elements of the analysis, particularly the focus on identifying the dynamics of fragility and resilience through the framework of effectiveness and legitimacy, are being incorporated into USAID's revised version of the Conflict Assessment Framework. Several interviewees noted that each of the tools

has been shaped to some extent by the particular political, bureaucratic, and conceptual prerogatives of the agencies that have developed them. Different agencies tend to define the problem through their own institutional lens, and the assessment tools they create reflect these biases. For instance, the different approaches to conflict assessment adopted by major bilateral and multilateral actors demonstrate conceptual and intellectual differences in their understanding of the nature of conflict; the same may be said for various donor governments' approaches to assessing state fragility.43 Seeing the evolution of donor agendas in this way provides some hints as to how conflict, governance, and fragility tools have evolved and influenced one another. It also sheds light on the conceptual frameworks that underpin international actors' efforts to grapple with country context, as well as the assumptions they bring to assessments and the expectations they have of these tools. The different pathways by which donors have developed these instruments, and the thinking that underpins these efforts, have important implications for the types of donor policies and programs that such analyses prescribe.


41 Unsworth, "Is Political Analysis Changing Donor Behavior?"
42 The tools used by the Dutch government have been developed by the Netherlands Institute for International Affairs–Clingendael.
43 For example, an analysis of conflict assessments conducted by five different donors in Sri Lanka between 2000 and 2006 reveals differing diagnoses of the nature of the conflict, resulting in very different prescriptions for donor responses. These ranged from a narrow focus on securing and implementing a ceasefire agreement, to private-sector development and creation of economic opportunities, to responses focused on addressing group and regional grievances. See Vanna Chan, Ellena Fotinatos, Joyce Pisarello, Liat Shetret, and Melissa Waits, "International Peace Institute SIPA Capstone Workshop: Assessing Post-Conflict and Fragile States–Evaluating Donor Frameworks: Final Report," unpublished, May 2009.

WHOLE-OF-GOVERNMENT APPROACHES AND THEIR RELATIONSHIP TO ASSESSMENTS

Many formal governance and conflict assessment tools were originally developed for use by a single entity (a bilateral development agency such as DFID or a multilateral development agency such as UNDP) to analyze a particular country situation and inform internal decisions related to the development of a new program or country strategy, adjustment of an existing program or strategy, or decisions about aid allocation. However, "assessment" has also come to refer to interagency exercises, either across ministries/departments of a government (whole of government), across entities within the UN system, or among several actors on the ground (national and international) in a given country. These exercises aim to promote a common understanding of the country context as a basis for joint or integrated decision making. They are perhaps more accurately termed planning or assessment processes (although they often carry the term assessment in their name, as in the UN-World Bank Post-Conflict Needs Assessment, or the US government's Inter-Agency Conflict Assessment), whereby the analysis of context is one part of a larger consensus-building and planning exercise. In recent years, many of the assessment tools that

were originally developed to feed into single-entity decision making are now sometimes used for the interagency purpose described above. This seems to be driven by two factors. First, thinking on international engagement in fragile situations has evolved toward an understanding that a joined-up political, security, and development strategy is required to respond effectively in these situations.44 Second is the realization that one of the major challenges of interagency planning is that political, security, and development actors have different institutional goals, cultures, and languages, and each brings its own perspective and understanding of the context to the table. Rather than waiting until the planning stage (when perspectives are fully formed) to bring these actors together, conducting joint

assessments aims to get everyone on the same page by breaking down actors' preconceived assumptions, thereby providing a basis for integrated decision making. This is perhaps most common within the UN system, where the political, security, humanitarian, and development pillars of the organization have been working to promote an integrated UN response in postconflict countries for several years.45 It has also taken place where bilateral donors have begun to adopt "whole-of-government" approaches in their engagement with fragile and conflict-affected countries.46 In addition, it is becoming more common with the promotion of "whole-of-system" approaches, where international and national actors seek to promote greater alignment between international efforts and national priorities, as well as greater harmonization and complementarity among national and international efforts in a particular country. Therefore, in addition to their analytical function, assessments are increasingly being used as a platform to foster more coherent engagement in fragile situations. In the course of our analysis, interviewees

uniformly agreed that assessments should not be ends in themselves. Yet, considerable time and resources have been invested in developing, implementing, and refining formal assessment tools. To what extent have they influenced the ultimate objective of fostering more context-sensitive external engagement in fragile situations?

Extent of Influence

Overall, we found that experience with assessment tools has produced mixed results as far as impacts on decision making, planning, and programming are concerned. The importance of producing good quality analysis cannot be overstated: a mix of qualitative and quantitative methods, as well as diverse sources of information, is essential to ensure as nuanced and rich an understanding of a situation as possible. Yet, the balance between a detailed and comprehensive assessment and one that produces usable analysis for decision making presents significant challenges. Moreover, content


44 See Stewart Patrick and Kaysie Brown, Greater than the Sum of its Parts? Assessing "Whole of Government" Approaches to Fragile States (New York: International Peace Academy, 2007).

45 For more on UN integration issues, see Espen Barth Eide, Anja Therese Kaspersen, Randolph Kent, and Karen von Hippel, "Report on Integrated Missions: Practical Perspectives and Recommendations," Oslo: Norwegian Institute of International Affairs, 2005; or Susanna P. Campbell and Anja T. Kaspersen, "The UN's Reforms: Confronting Integration Barriers," International Peacekeeping 15, no. 4 (2008): 470-485.

46 See Patrick and Brown, Greater than the Sum of its Parts?

cannot be divorced from process: the extent of an assessment's influence is rarely, if ever, solely determined by the content or quality of analysis. The use of assessments appears to be determined

by five key factors:
1. Clarity of purpose
2. Timing and timeframes
3. Interests and incentives
4. People and competencies
5. Linkage between assessment and planning
Where interagency planning is the primary

objective, practitioners tend to be agnostic about assessment methodology. In these cases, the emphasis is almost entirely on process—specifically, on how to use the information and analysis produced through an assessment to help different actors agree on a basic storyline of the situation. Here the goal seems to be "good enough" analysis and a basic level of agreement among the key players in order to provide the basis for a common strategy.

CLARITY OF PURPOSE

There are multiple, and often contradictory, objectives underlying the development and use of assessment tools. Different actors are often driven by different impulses; different entities within the same government (or even different departments of the same ministry) and different departments/agencies within multilateral organizations may have very different understandings of what the purpose and objectives of assessments are, who the audience should be, what they should cover, how they should be conducted, and how results should be used. Our research produced the following list of

purposes for which assessments have been designed and used:47

• Deciding whether or not to engage in a partner country, or to scale up (or down) existing levels of support;

• Reorienting or designing a country or sector strategy or program (or justifying an existing strategy or program);

• Developing more-realistic expectations of what aid might accomplish given the political, economic, social, and cultural constraints of a particular

country situation and the actor's own political and bureaucratic constraints;

• Stimulating internal dialogue among staff and fostering new ways of analyzing specific problems and modes of engagement;

• Avoiding the unintended consequences of external action and guarding against the risks of state capture and corruption;

• Making existing or planned aid programs more sensitive to drivers of conflict;

• Providing baseline analysis against which progress may be measured;

• Modeling or predicting the likelihood of instability;
• Informing decisions about aid allocation and funding modalities in light of fiduciary risk;

• Ensuring accountability and transparency in the use of aid resources;

• Stimulating a discussion about reform with the partner country; and

• Serving as a platform for interagency planning and consensus building.

Each purpose or combination of purposes will demand different kinds of information and analyses. Thus the content of assessments, as well as the process by which they are undertaken, will often be shaped by the purpose. As noted by interviewees, this can be a double-edged sword: there is a risk of missing important information if the assessment is too heavily focused on responding to a specific purpose. However, assessments that do not respond to the immediate decision-making needs of an organization also risk being disregarded. Challenges often arise when the purpose of an

assessment is not clearly established from the outset, leading to differing, and even competing, expectations of how the assessment should be used (this holds true whether between different offices/departments of the same agency, between headquarters and field offices, or between different ministries and departments across government). For example, some actors may be particularly concerned with getting an accurate assessment of corruption in order to determine fiduciary risk and, therefore, require that the assessment be kept confidential in order to ensure that it is not watered down. At the same time, other actors may see the


47 A similar list of purposes may be found in the OECD-DAC, “Donor Approaches to Governance Assessments,” Conference Report, 2008.

assessment as a basis for dialogue on reform with the partner government and consequently feel that the government's involvement in the assessment is essential to ensure buy-in and to build trust. (As noted above, recent work by the OECD-DAC acknowledges that, while transparency may be preferred, donors also have legitimate reasons for keeping assessments confidential.48) The key is to clearly establish the purpose and expectations of the assessment from the outset to ensure that the choice of tool and process is appropriate. That said, our interviews also indicated that resource and time constraints will inevitably force assessments to respond to multiple goals. The challenge then becomes one of making these goals explicit from the outset and drawing on multiple resources, sources of information, and tools to ensure the assessment process speaks to the various decision-making needs of the agency(ies) it is designed to support.

TIMING AND TIMEFRAMES

Timing appears to be a significant determinant of whether and how the results of an assessment are used. There is a tension between effectively feeding into planning cycles and responding to changing circumstances on the ground to inform time-sensitive decision making. Whatever the need, if the assessment misses the window of influence, it is likely to have little impact. Some assessments are mandatory and are linked

to regular planning cycles, such as the Dutch SGACA and DFID's Country Governance Assessment (CGA). Others are initiated on an ad hoc basis, triggered when a donor agency's headquarters, or, less frequently, field office senses the need to reevaluate its strategic approach and/or when the partner country has experienced critical political changes. In general, governance assessments are more likely to be mandatory, while conflict assessments are more likely to be ad hoc. In many cases, we found that while several tools

are meant to be linked to strategic planning processes or programming cycles, this linkage frequently does not occur as envisioned. The reasons for this discrepancy may vary: in some cases, assessments may be conducted as one-off events and the timing may not coincide with

decision-making processes. Formal mechanisms may not exist to feed analysis into planning, or to help translate analysis into policy options. In other cases, this disconnect may be due to high-level political decisions. For example, the implementation of the Dutch SGACA was originally carefully timed so that the results of the analysis would feed into the development of multiannual strategic plans. However, with the arrival of a new minister of foreign affairs, the planning timeline was pushed forward by a year, with the result that the vast majority of country plans had to be designed before the assessments were carried out. Whatever the reasons for this disconnect, the

consequences are predictably negative: findings may not be incorporated into relevant program initiatives, and analysis loses its direct relevance to decision makers who do not have the time to consider information that cannot be practically applied. When the time comes for the next programming cycle or strategic review, the analysis provided by an assessment not linked to these processes may be overlooked, or rendered obsolete. One interviewee emphasized the need to pinpoint the relevant "window of influence" in terms of headquarters or field-level decision making and ensure that assessments feed in at the appropriate time. At the same time, there is a tension between

timing assessments to influence programming cycles and the need for real-time guidance. Conflict-affected and fragile states present complex and volatile environments, where real-time events often overtake efforts to analyze them. There are tradeoffs between ensuring that findings are incorporated into programming cycles (thus dictating the timing of analyses), and conducting analyses at important key moments as and when they arise, such as peace processes, power shifts, elections, or other moments of particularly high tension or dramatic political change. On the one hand, analysis risks being untimely; on the other, actors can be left with findings that point to important political opportunities, but no way to translate them into programming. The challenge is in finding the optimal point between supply (in terms of funding cycles, incentives to engage, and


48 See OECD-DAC, “Donor Approaches to Governance Assessments, Guiding Principles for Enhanced Impact, Usage and Harmonization,” Paris, March 2009.

human and financial resources) and demand (such as historical moments and opportunities to engage more fruitfully). These considerations suggest that assessments should not be a "one-off" exercise, but rather a continuing activity, possibly synchronized with key events in the context under scrutiny. While a more resource-intensive and time-consuming analysis may produce a stronger final product, sometimes such a luxury is not available because the pace of events requires rapid decision making. International actors may be willing to invest these resources in countries of high importance, but these same countries are the ones in which political pressure to act is highest, and where international actors rarely have the luxury of time to wait for the results of analysis before devising a strategy for engagement. The need to manage this tension is reflected in the flexible timeframes that are allotted to various assessment processes. There is scope for considerable variation within the timeframe of some assessment processes, with obvious consequences for the depth and breadth of the analysis. For example, the recently developed US Inter-Agency Conflict Assessment Framework (ICAF) could take place over several weeks, but it could also be conducted in as little as a day and a half in response to a crisis. Likewise, while Sida's Manual for Conflict Analysis is usually undertaken in six to twelve weeks, it can also be carried out as a rapid desk study as circumstances require. The different time horizons of ministries,

agencies, and departments within a government or multilateral organization influence the kind of information that is sought from an assessment. While development agencies are oriented to digest longer and more detailed analyses to feed into year-long and often multiyear planning and programming, foreign ministry and peacekeeping staff articulated a need for quick and targeted analysis that can be translated easily into strategic and operational options. Political-economy analysis in particular has fallen victim to these differing expectations, with foreign-ministry staff arguing that its impact is limited because of the difficulty of translating it into concrete and immediate policy options.

INTERESTS AND INCENTIVES49

Regardless of the quality or purpose of an assessment, political interests can have a significant influence on the focus of assessments or the extent to which their results are considered. In addition, individual and institutional incentives are rarely aligned in support of assessments, which may demand more work from staff and challenge the status quo. A variety of interests and incentives appear to be

at play in assessment processes. Political prerogatives at the highest levels of government can influence the nature and use of the analysis produced through formal assessments. For instance, the emphasis of a particular tool's analysis may reflect ministerial or parliamentary objectives, as in the case of the Netherlands' SGACA, which has a strong focus on corruption because of parliamentary concerns related to misappropriated aid in partner countries. In other cases, a low premium has been placed on the analysis provided through formal assessments by ministers who do not entirely trust the judgment or skills of the bureaucracy working beneath them. The importance of obtaining buy-in of field-office/embassy staff—in both conducting an assessment and letting the resulting analysis influence programming and policy decisions—is often noted as a major determinant of the impact of an assessment. Securing buy-in can be challenging for many reasons. Participation in and attention to the results of assessments often require staff on the ground to make commitments of time and energy, requiring them to adapt their thinking and work responsibilities. Interviews indicated that field staff may believe that their participation in an assessment process takes them away from more pressing responsibilities, that the assessment exercise is being imposed upon them by headquarters, or that the analysis only confirms what they already know. Moreover, in an aid agency where the primary focus is on spending allocated funds, there are strong incentives in favor of sticking with a particular strategy or program direction. As one interviewee noted, political-economy analysis in particular can highlight risks and pose questions that may contradict development mandates and

JENNA SLOTIN, VANESSA WYETH, AND PAUL ROMITA 13

49 For more on interests and incentives underpinning the use of assessments, see Unsworth, “Is Political Analysis Changing Donor Behavior?”

relationships with partner countries. It mayrecommend changing, reducing, or diverting aplanned program in which there is considerablepersonal and institutional investment.Beyond the individual incentives to support

particular programs, there are strong bureaucratic,political, and institutional barriers to change withindevelopment agencies and strong incentives thatreinforce the status quo.50 There are inbuiltincentives in an organization that programsmillions of dollars of development assistance not toquestion the underlying assumptions on whichthose programs are based, and to demonstrate thatthe programs are in fact working. The findings ofan assessment can drastically challenge the statusquo, calling for much longer-term engagement thancurrent planning horizons foresee, demanding aserious rethinking of the way problems are beingapproached, or recommending a reevaluation ofthe national and local actors with whom to engage.Paradoxically, it may also call for internationalactors to recognize that their influence is limited,scale down their ambitions, and channel theirefforts to areas where they have the greatest chanceof making a difference. Some of these are decisionsthat can only be made at the highest policy levels. Ifscope for dissent from or change within a givenpolicy, strategy, or program is limited, thenreceptivity to the results of an assessment is alsolikely to be limited. Even where there is a stronginclination to respond to this analysis, interviewsindicated that it is difficult for practitioners tochange the way international assistance is deliveredwithout strong political backing.51

The partner country frequently has incentives and disincentives to participate in assessment processes. On the one hand, the partner government may support a process it believes will lead to enhanced development assistance, more funding for a particularly weak sector, or more broadly, policies and decisions that help it to mitigate conflict. On the other hand, such processes can be time consuming and place enormous burdens on partner country capacities, taking key government officials away from critical tasks where their services are at a premium. Assessments may also reveal frailties in governance or cleavages in society, and result in less-than-flattering appraisals that may weaken the government's position vis-à-vis its international partners. Recognizing these challenges, efforts within the OECD-DAC's Governance Network have produced five guiding principles to enhance the impact, usage, and harmonization of governance assessments. These include building on and strengthening nationally driven governance assessments as well as harmonizing donor assessments when the aim is to stimulate dialogue and governance reform.52

PEOPLE AND COMPETENCIES

People matter. Certain skills and competencies appear to be particularly valuable in generating an assessment that can be easily understood and effectively used. A focus on these competencies may be more important than the tool itself. External consultants are often used to conduct assessments, but they come with benefits and drawbacks.

Practitioners frequently point to a mix of skill sets and competencies that are valued in assessment processes, some of which are particularly pertinent for interagency assessments. In addition to valuing people with strong analytical skills, they generally point to the following types of personnel:

• Experts: At the most tangible level, most mention the importance of including experts: people with specialized sectoral, thematic, or country-specific knowledge, as well as experts in the tool or type of methodology being used. In fact, such specialized personnel are generally included as a matter of protocol in the composition of assessment teams.

• Translators: In order to ensure that the knowledge of experts is shared effectively throughout the group, it is also important that they (or others in the team who understand their work) can communicate it well, thus "translating" esoteric, subject-specific content into easily accessible information that can be used by the broader team. One interviewee also noted that the "translator" function is particularly valuable in an interagency setting where political, military, and development actors are likely to bring different cultures and mindsets to a particular issue.

14 POWER, POLITICS, AND CHANGE

50 For more on operational, institutional, and intellectual barriers to change, see ibid.
51 Some assessment frameworks explicitly engage with this paradox. For example, USAID's DG assessment includes a final filter in the assessment framework that examines the donor's own interests and institutional and political constraints.
52 See OECD-DAC, "Donor Approaches to Governance Assessments, Guiding Principles for Enhanced Impact."

• Leaders: An assessment team should include personnel with good leadership skills and appropriate decision-making authority, who can guide the process effectively and help ensure that the results of an analysis are taken seriously and acted upon. What seems crucial, however, is ensuring that there are not several sources of authority that risk clashing with one another and paralyzing the process.

• Facilitators: Where an assessment culminates in a workshop that is meant to help develop options and strategies for the country office/embassy (common practice with the Dutch SGACA as well as other actors' assessments), facilitation skills become particularly important. In the case of interagency assessments, team members with good facilitation skills can help to build consensus on difficult issues, negotiate compromises among divergent perspectives, and foster a cordial working environment.

This list describes broad types of skills needed for assessments, and need not be viewed as discrete categories of personnel in an assessment team. Experts can be good translators, and in general, a talented team member may fit into two or more of these categories simultaneously. However, it is important to realize that skill in one area does not necessarily denote skill in another, as was commonly noted among interviewees particularly with respect to facilitation skills.

Consultants—including international and/or locally based consultants—frequently play a significant role in assessment processes, as reflected in tools employed by the UK, the US, the Netherlands, and Sweden, among others. Heavy reliance on consultants has benefits and drawbacks. On the one hand, consultants may provide thematic and country-specific expertise and cultural sensitivity not otherwise readily available. They may also bring a fresh perspective to bear, and are often seen as more independent and less biased in their analysis than agency staff. The use of consultants is also intended to minimize any extra burden on agency staff, which might otherwise be taken away from their day-to-day activities to participate in or conduct an assessment. On the other hand, consultants lack first-hand institutional knowledge of the organization that has contracted them, which means they will be less familiar with the resource and political constraints that characterize the policy environment to which the assessment needs to respond. Using consultants also represents a lost opportunity to train a new cadre of staff in order to "embed" political thinking across an organization, and help ensure that assessments are living tools rather than one-off exercises, and may reinforce a tendency to prioritize thematic expertise rather than country knowledge.

The use of consultants may compound some of the problems of buy-in discussed above. To put it crudely, assessments produced by external consultants are sometimes dismissed because the consultants are regarded as "outsiders" who do not understand the agency for which the assessment was conducted. Another challenge is that external consultants may not have access to sensitive information that could greatly enhance the quality of the assessment. A number of interviewees noted that, while engaging local consultants in assessment processes can provide much-needed local knowledge and cultural sensitivity, it is important to balance their viewpoints with multiple local perspectives to guard against the possible biases of an individual who belongs to a certain socioeconomic, political, ethnic, geographic, or religious group. In this, as in any analytical study, triangulation of data and information remains essential to guarantee a rigorous final product.

As noted above, the emergence of political-economy analysis has increasingly placed an emphasis on changing the intellectual culture of development agencies, in effect encouraging staff to "think politically." One interviewee noted that political-economy analysis is less about tools and more about networks, people, and knowledge. At its core, this approach to assessments is a way of thinking about the problem; ideally, assessments should serve as a platform for bringing relevant stakeholders together to reorient policy, programming, and planning to take into account analysis of state-society relations and the incentives for and against progressive change in the partner country.53 Several interviewees noted that if the political-economy approach is to be fully mainstreamed in development agencies, assessments and planning processes will need to be complemented by a serious investment in recruiting and training staff that can integrate political thinking into their work. It also means investment in country, as well as thematic, expertise. A recent step in this direction is the development of a "How-to Note"54 for DFID staff on conducting political-economy analysis. Interviews also indicate that a similar guidance note is under development for staff in the Dutch Ministry of Foreign Affairs.

53 See Unsworth, "Is Political Analysis Changing Donor Behavior?"

LINKAGE BETWEEN ASSESSMENT AND PLANNING

Overall, we found that there has been a disproportionate emphasis on the development of assessment tools and their implementation, and insufficient attention to how the assessment fits into broader strategic planning processes. As a result, assessments are commonly one-off exercises, rather than efforts to gather and update analysis at regular intervals to feed into planning cycles.

Providing clear and concise analysis of country context is not enough to effectively shape planning, and without providing a roadmap to help translate analysis into policy and programming, assessments are often dismissed as little more than intellectual exercises. Proponents of assessment tools, particularly those based on political-economy analysis, argue that they were never meant to be a magic bullet, and that there needs to be an acceptance that the results will not meet simplistic explanations. This may be partly a problem of ensuring that objectives are made explicit, and ensuring that expectations of what an assessment is intended to deliver are clearly communicated to all stakeholders (e.g., headquarters and field staff, technical and political experts). But for agencies where the bulk of staff may be technical experts who do not traditionally think of their work in political terms, some mechanisms will inevitably be needed to help translate analysis into recommendations for country strategy and programming. Donors have struggled with how best to make this link.55

Thus far, evaluations of the ways assessments are being used generally indicate a bias toward deliverables, over and above the processes associated with undertaking them and implementing recommendations. This may be indicative of institutional priorities to produce measurable results, spend allocated resources and, sometimes, retroactively justify decisions. Especially in cases where assessments are conducted by external consultants, the extent to which they are utilized in program design and strategy seems to depend more on whose desk they land rather than on any systematic process for ensuring that stakeholders think collectively about the implications of the analysis for policies and programs. Some tools, such as the Dutch SGACA, include a one-to-two-day workshop that is intended to provide an opportunity for embassy staff to discuss the implications of the analysis for their programs and plans. Although this does provide a formal setting to discuss the results of the analysis and appears to sensitize staff, it does not guarantee that staff will be any more receptive to the results of the assessment. Even in cases where workshops are part of the process, interviews indicate that an assessment still has the greatest influence on country plans when field staff is convinced of its usefulness and when the timing of the assessment coincides with a new planning cycle.

The disconnect between assessment and planning is further compounded by a lack of clarity as to the end users of an assessment. Guidance documents typically describe end users of assessments in generic terms as "field" and/or "headquarters" staff, with bilateral and multilateral partners and the partner country sometimes also being listed as end users. As a result, it is often unclear how assessments are shared within donor bureaucracies in terms of format, routing, and prioritization of information. This means that there is a risk that analysis may not be adequately absorbed by or even circulated among key decision-making personnel, unless they make an effort to get hold of the information, believe that it is important enough to focus on, and are receptive to findings that may challenge or contradict their own thinking.

A related challenge is that the line between assessment and planning is often blurry and contested. This comes up predominantly in interagency planning processes where the division of roles may be unclear and different agencies have different expectations regarding the extent to which the assessment should point to planning options. The common use of external consultants to conduct assessments can also be problematic in this regard. Some interviewees expressed discomfort with external consultants participating in internal planning, leading to a division between an assessment exercise and the planning process it is meant to support.

54 UK Department for International Development (DFID), "Political Economy Analysis How-To Note," DFID practice paper, July 2009.
55 For example, USAID has prepared a series of "Conflict Toolkits" on thematic issues that offer a discussion of the relationship between each topic and conflict, and guidance in developing programs based on the result of a conflict assessment. Topics include peace processes; religion, conflict, and peacebuilding; livelihoods and conflict; women and conflict; etc.

THE INTERAGENCY CONUNDRUM

As we noted above, interagency assessments are becoming increasingly common as whole-of-government approaches and integrated or joined-up planning and implementation are promoted. Four of the five factors we have identified relate to process—i.e., how the assessment is undertaken. Our analysis suggests that process is even more important where an assessment is used to help different actors agree on a basic understanding of the situation as the basis for a common strategy. As such, the issues described above are particularly pertinent, and made even more complex, in interagency settings.

In addition, there are several other challenges and risks related to interagency assessments that emerged through our interviews. In some cases there is a lack of agreement as to which entities should be engaged in political analysis. While there is a growing recognition that development is fundamentally a political enterprise and that engagement in fragile situations is inherently political, there is still some resistance—both internal and external—to the idea of development actors engaging in this area.

Whole-of-government approaches are still in their infancy and continue to face basic problems of communication and information flow. Basic issues such as harmonized information-technology systems and clear, efficient protocols for dealing with classified information need to be addressed. Each agency will have lines that cannot be crossed, especially with regard to intelligence data, but these lines can be more easily managed if they are understood in advance.

Discussions at the experts' workshop highlighted that using assessments as a vehicle to promote whole-of-government or integrated decision making risks privileging the mechanics of the tool rather than the quality of information and analysis produced. Such processes may risk papering over important differences through interagency negotiation. Genuine debate and hard choices in terms of the prioritization and sequencing of interventions may lose out to interagency turf battles. This tendency also has important implications for the assessment team. Privileging the mechanics of the tool creates a tendency to staff assessment teams with individuals who are experienced in the use of the tool, rather than putting a premium on country knowledge or the other skills and competencies highlighted above.

Conclusion and Recommendations

In the last ten to fifteen years, international actors have continually refined their tools and approaches to address the challenge of understanding local context. From the earlier conflict and governance assessment tools to newer political-economy analysis and fragility tools, donors have sought new ways to understand the drivers and mitigators of conflict, and to uncover the underlying dynamics that drive relationships of power at multiple levels of state and society. Recognizing that context must be the starting point for all interventions, the drive to develop and refine assessment tools has been critical to fostering increased sensitivity to context.

However, despite considerable attention to and investment in assessment tools, our findings indicate that the extent to which the analysis they produce influences decision making, policy, or programming is mixed. The extent of assessments' impact appears to be determined by five key factors: clarity of purpose; timing and timeframes; interests and incentives; people and competencies; and the linkage between assessment and planning. These factors speak to the decision-making needs of policymakers and other high-level officials, the bureaucratic and political circumstances under which assessments are conducted and analysis received, and the process by which assessments are conducted. These findings point to a few broad recommendations that emerged through our interviews and in discussions at the experts' workshop.

1. Be realistic about what assessments can accomplish.

The use of assessments has to be situated within the broader universe of political analysis that informs decision making, much of which is done informally. It is worth asking whether shortcomings in international responses are really due to lack of knowledge about and understanding of the context, or due to other (primarily political) obstacles. Would improved analysis of context really translate into better decision making in conflict-affected and fragile environments, given all of the strategic priorities and political imperatives that drive decisions about international engagement and foreign aid? If the aim is to strengthen international actors' understanding of local context, instruments such as formal assessment tools represent only one way to capture this type of knowledge, and should be supported by other methods. Moreover, international actors are often criticized for employing an overly technocratic approach to conflict-affected and fragile states: it is important to ensure that political analysis in the form of assessments does not become another box to tick.

There is a tendency to think that, on the strength of better analysis, international actors will be able to design better interventions. However, good analysis does not always point to solutions. More often, a truly nuanced analysis reveals the limitations of donor options and helps policymakers realize how constrained they are. This is highlighted by the Dutch experience in conducting a SGACA in Uganda, where the analysis led the embassy to conclude that the previous multiannual strategic plan was both insufficiently critical of what was happening "behind the façade" in Uganda, and at the same time too ambitious. Instead, they concluded that their "circle of interest was much bigger than [their] circle of influence" and ended up limiting their focus to the two sectors where Dutch policy objectives aligned with those of the Ugandan government (education and justice).56

2. Ensure that assessments are linked more consistently to an overarching planning cycle.

The drive to understand context has produced many important developments in terms of assessment tools and processes. But this has come at the expense of systematic attention to planning cycles, and the role of assessments therein. Ideally, assessments should inform planning and implementation, followed by robust monitoring and evaluation of impact, with the ability to make midcourse corrections or respond to new opportunities or constraints posed by in-country developments. Although our findings indicate that some assessments are required as part of regular programming cycles, they often miss the mark due to inopportune timing. Even where assessments are linked to planning, there is a lack of mechanisms to revisit initial assessments when country strategies and programs are updated, or in later planning cycles. In some cases, as in a conflict assessment conducted in a crisis situation, it may not be possible to integrate findings into a formal planning process. However, this should be the exception rather than the rule. Too often linkages to planning processes do not occur because of lapses in foresight and poor management.

Effective presentation of material is essential to ensure that good analysis is fed into planning and decision-making processes. If material is not presented in a way that is "user friendly" or easily accessible to busy policymakers and practitioners who have multiple responsibilities and limited time, then the utility of the analysis is diminished. In many cases, assessments are considered "too academic" or analysis is presented in such a way that staff feels it cannot be easily translated into concrete options. Here, the practice of workshops as the final stage in the assessment process is key, so that those tasked with implementing aid programs are required to reflect on the findings of the assessment and implications for country strategy and programs. However, efforts should be made to ensure that the process is not perceived as overly headquarters-driven.

Donors should develop clear protocols that set out how the results of an assessment should feed into planning or programming, what is the appropriate link to monitoring and evaluation, and how to disseminate the results of assessments to avoid their becoming one-off exercises. Fostering greater clarity on who are the end users of assessments, their information needs, and how to target and convey information in a way that it can be readily fed into planning and decision-making processes could also ensure that assessments are more effectively used. This means that it may be necessary to present information differently in terms of length, format, and the focus of the analysis, depending on the end user.

56 Joop Hazenberg, "The SGACA Experience: Incentives, Interests and Raw Power—Making Development Aid More Realistic and Less Technical," in A Rich Menu for the Poor: Food for Thought on Effective Aid Policies, The Hague: Dutch Ministry of Foreign Affairs, 2009.

Interagency or whole-of-government planning processes (including assessment, planning, implementation, and monitoring and evaluation) are becoming more and more common in fragile situations. They suffer from many of the same challenges as single-agency assessments, but they also present unique obstacles where the goal is to find a common understanding of the situation and to devise a strategy that draws on the assets of each entity or actor. Our interviews suggest that a complete consensus is not realistic. In these cases, the fundamental challenge seems to be devising a process that draws on each actor's assets and perspectives and manages to build consensus around a basic understanding of the situation and the implications for a coherent and coordinated response. There is also a need to ensure that the assessment is linked to a dynamic planning process that can be modified as new information and analysis become available.

Whole-of-government processes are still in their infancy and are characterized by a great deal of experimentation and innovation. Our interviews and discussions indicated that donor governments are keen to reflect on their early experience, especially with respect to assessment and planning, and learn from others that are engaged in similar efforts. This seems a fruitful area for further research which could point to practical lessons and guidance based on donors' early experiences.

3. Shift the focus from tools to developing a culture of analysis.

International actors must guard against excessive focus on the tools themselves, to the neglect of ensuring that political analysis is streamlined throughout development-agency thinking. Over time, the focus needs to shift from the tools to promoting a culture of analysis. This has implications for recruitment and training of staff, as well as the importance of cultivating multiple sources of information and analysis locally and internationally. Drawing on the thinking inherent in political-economy analysis, practitioners could be trained and incentivized to gather and analyze information on a regular basis. The goal would be to promote an analytical culture, whereby staff is encouraged to "think politically" so that strategies, programs, and day-to-day implementation are regularly informed by contextual information.

Existing assessment tools will continue to be valuable as frameworks to guide analysis, especially in terms of understanding conflict factors and the dynamics of fragility and resilience, but the emphasis should shift from the mechanics of the tools to the way staff approach their work. A first step is to shift the focus to developing guidelines to assist practitioners in gathering knowledge, understanding changing political dynamics, and organizing and presenting their knowledge in a form that is helpful to decision makers.

In many cases this is already underway, either because individual supervisors have encouraged this kind of approach among their staff, or through the development of guidance, as highlighted above. However, promoting a culture of analysis requires much more systematic support and investment, including through:

• Developing guidelines for translating analysis into policy and programming;

• Training staff in political-economy analysis;

• Staffing-up in the field to ensure individual officers have the time to gather and analyze information regularly;

• Prioritizing country knowledge over thematic expertise, for example by making field rotations mandatory for promotion within the organization, or by extending the minimum time spent in overseas posts;

• Avoiding organizational stove-piping between analytical and operational staff;

• Encouraging rotations through different departments and agencies, for example through the use of secondments;


• Ensuring systematic information-sharing among development, diplomatic, and military (where appropriate) staff in the field and at headquarters, many of whom monitor country situations on a regular basis, but may not have a comprehensive picture of the situation; and

• Cultivating multiple sources of information locally and internationally, for example by supporting local think tanks, universities, or polling companies, as well as building a network of international experts with country and issue-specific knowledge that can be drawn upon regularly.

The advantage of this approach is that it could address some of the obstacles related to timing and incentives that limit the impact of assessments. Fostering a culture of analysis may reduce the need for formal assessment exercises, instead allowing staff to modify programs based on real-time analysis, as well as enabling them to feed into time-sensitive decision making.57 Formal assessments may still be required for a variety of reasons, but they could be made more flexible in terms of format and duration by drawing more readily on staff knowledge as well as local sources of analysis and information. By placing a premium on ongoing context analysis, agencies can help create incentives for staff to engage in analysis, participate in formal assessments when and if they are required, and be more open to considering the implications of the analysis produced by assessments. Enhanced opportunities for career advancement and greater financial compensation could incentivize staff to adapt their thinking and contribute to a normative shift in the culture of donor agencies. Promoting a culture of analysis would require commitment from the very top in the form of bureaucratic and political will to respond to new information, even when it suggests a significant departure from the status quo.

Overall, our findings indicate that donor experience with assessment tools has fostered increasing sensitivity to context. Successive iterations of conflict and governance assessment tools have produced increasingly nuanced frameworks for understanding the dynamics of fragility and resilience and their interaction with external interventions. However, the pendulum may have swung too far in favor of formal assessment tools. The development of these tools has overshadowed much-needed attention to how assessments feed into broader decision-making and planning processes, and the mechanics of assessment processes have been privileged at the expense of developing a culture of analysis and cultivating multiple sources of information and diagnostics. It may be time to allow the pendulum to swing back to the center by refocusing on developing a culture of political analysis and creating mechanisms to allow that analysis to feed into time-sensitive decision making and planning.


57 This may be easier to achieve in agencies with decentralized decision making, where field offices operate with relative autonomy in making programming decisions.


Annex: Initial Mapping of Assessment Tools

Assessment tools and frameworks used by bilateral and multilateral donors that were covered by initial desk research during October 2008 – January 2009:

European Commission
1. Check-list for Root Causes of Conflict
2. Conflict Prevention Assessment Framework

Germany
3. The Catalogue of Criteria
4. Conflict Analysis for Project Planning and Management

Netherlands
5. Stability Assessment Framework
6. Strategic Governance and Corruption Analysis
7. Fragile States Assessment Methodology

Sweden
8. The Manual for Conflict Analysis
9. Power Analysis

Switzerland
10. Key Questions for Context Analysis

United Kingdom
11. Strategic Conflict Assessment
12. Country Governance Analysis
13. Drivers of Change
14. Countries at Risk of Instability Framework

United Nations
15. UN Common Country Assessment
16. UN Common Inter-Agency Framework for Conflict Analysis
17. UN Strategic Assessment
18. UNDP Conflict-Related Development Analysis

United States
19. Conflict Assessment Framework
20. Democracy and Governance Strategic Assessment Framework
21. Fragile States Assessment Framework (not operationalized)
22. Inter-Agency Conflict Assessment Framework

World Bank
23. Conflict Analysis Framework
24. Post-Conflict Needs Assessment and Transitional Results Framework (with UNDP)

Assessment tools and frameworks developed by nongovernmental organizations and agencies:

CARE
25. Benefits/Harms Handbook

Collaborative for Development Action (CDA)
26. Do No Harm Framework for Analyzing the Impact of Assistance on Conflict

Conflict Prevention and Post-Conflict Reconstruction (CPR) Network
27. Peace and Conflict Impact Assessment

FEWER, International Alert, and Saferworld
28. Conflict Sensitive Approaches to Development, Humanitarian Assistance, and Peacebuilding: A Resource Pack

The following global fragility indices were also examined in a related subproject conducted in January – May 2009 in a workshop at Columbia University's School of International and Public Affairs (SIPA). Research was conducted by Vanna Chan, Ellena Fotinatos, Joyce Pisarello, Liat Shetret, and Melissa Waits, under the overall supervision of Ariel Lublin. Findings were delivered to IPI in an unpublished report: International Peace Institute SIPA Capstone Workshop: Assessing Post-Conflict and Fragile States – Evaluating Donor Frameworks: Final Report (May 2009):

1. Brookings, Index of State Weakness in the Developing World
2. Carleton University, Country Indicators for Foreign Policy (CIFP) Fragility Index
3. Fund for Peace, Failed State Index
4. George Mason University, State Fragility Index
5. World Bank, Country Policy and Institutional Assessment / International Development Association Resource Allocation Index (IRAI)


The INTERNATIONAL PEACE INSTITUTE (IPI) is an independent, international not-for-profit think tank with a staff representing more than twenty nationalities, located in New York across from United Nations headquarters. IPI is dedicated to promoting the prevention and settlement of conflicts between and within states by strengthening international peace and security institutions. To achieve its purpose, IPI employs a mix of policy research, convening, publishing, and outreach.

777 United Nations Plaza New York, NY 10017-3521 USA

TEL +1-212 687-4300 FAX +1-212 983-8246

www.ipinst.org