
Report

Seminar ‘Learning for the Future:

how to deal with the lessons learnt from MFS II?’ Thursday 14 April 2016 Location: New World Campus, Spaarneplein 2, The Hague (entry: 9.30h) 10.00-17.00 h

Report plenary sessions 14 April 2016

Introduction

Ellen Mangnus, chair of the day, opened the seminar, presented some facts and figures, and started

interviewing Bart Romijn. He values the MFS II Evaluation as a unique undertaking with lots of

lessons learnt. However, it is clear that we should never do this in the same way again: the top-down

given evaluation formats and requirements were not appropriate for the processes of social change.

The emphasis in the evaluation was too much on quantitative methods, and too little on sense

making. He emphasises that the ILA evaluation has been a positive exception because it gave more

attention to learning. The huge energy around learning, observed e.g. at the webinars and again today, is very positive. We also observe a need for future cooperation, e.g. concerning the evaluations per country.

For instance in Indonesia, 19 of the 25 Strategic Partnerships have planned an evaluation at the same

time.

Last, but not least: Investing in evaluations is only worthwhile if the results can and will be used.

After all, it is not about how good we are, but how good we want to be.

Looking back, what are the most striking lessons learnt from MFS II?

Harry Derksen about efficiency: We should give more attention to the cost-benefit relation, the question of what value we get for the money.

Karel Chambille about M&E: Ownership is crucial for learning. The MFS II evaluation negatively affected ownership and learning. On the other hand, looking into the academic world has contributed to learning about M&E methods.

Liliana Ruiz about capacity development: Is the North the one who knows and the South the one who learns? Or (how) can we join forces to find new solutions? Shared learning is really important.

Looking forward, how do we take up the challenges for the future?

A panel discussion with Danielle Hirsch (Director Both ENDS), Monique van ’t Hek (Director Plan

Netherlands), To Tjoelker (Head MO/DSO/ MoFA).

Some elements from the discussion:

Capacity Development:

- Most southern partners have a clear vision, strategy etc. and are much stronger than 20 years ago. That means that North-South relations need to change (DSO)

- CD should be mutual capacity development, based on cooperation and equality in the relationship (including addressing the inequality related to funding). Both ENDS is currently elaborating its vision on this topic.

- Equality does not mean that you will be the same. Each partner should use its added value.

We, as northern organisations for instance have a huge lobby force, which we should use to

influence our private sector.

- What, for example, could the Ministry do as partner in the strategic partnerships? How can

the Ministry / Embassies improve the enabling environment, in this case help to reclaim space for

civil society? How can (the government motivate) the private sector (to) take responsibility

for enlarging civic space? More and more governments refuse to subsidize their own

opponents.

- Considering the 5C’s: financing is a huge issue. Funding for International Lobby & Advocacy is

THE major problem for partners in the South. This was the main conclusion of the capacity

development analyses (Both ENDS).

- CD (and measuring it/ the 5C model) should be context specific, include power relations, and

legitimacy/ constituency. In general, constituency, the way you relate to the society in which

you live, deserves more attention. (DSO)

- The synthesis report does not give a clear picture of the results of CD. To get a clearer

picture, you need to go back to individual stories of organisations in the country reports.

(DSO)

- More cooperation is needed with the Dutch ‘Wetenschaps Agenda’ about basic questions of our work (long-term studies, more than scientists monitoring us; Both ENDS)

- Mutuality (and ownership of southern partners) has also been a concern in the ILA

evaluation. To be really partners is difficult, in particular to keep communication going

between partners in international lobby & advocacy. GPPAC has been a positive example,

where much emphasis is put on ownership of partners (Margit, wrapping up)

- CD is always geared towards organisations, but let's not forget that civil society, mobilising for change, is not only about organisations, but also includes e.g. individual activists.

PMEL:

From the inspiration wall: Goals don't need to be measurable to be of value.

- There is a need for a learning platform for the strategic partnerships (DSO)

- The pitch about M&E has already made clear that the issue of ownership is back again.

Ownership is important for learning, for the Dutch NGOs as well as for the southern partners.

The inception phase of the strategic partnerships should enhance southern ownership and

adapting the programmes to the context. (DSO)

- Other important issues are the timespan between baseline and endline measurement,

attribution and aggregation. (DSO)

- Having learnt from the experiences with MFS II, DSO has changed its approach towards its strategic partners: no more detailed top-down M&E requirements, but trust in the professional capacities of the partners and bottom-up development of M&E as an important principle (DSO).

- In an unpredictable context M&E should focus on outcomes (the desired change), instead of

outputs and activities.

Wrapping up session:

- What about mutual M&E in the strategic partnerships? It should be more prominent than what has been done in the partner satisfaction surveys. (public)

- And also: what about mutual learning (action learning, qualitative stories in line with ToC?)?

- M&E team and programme team should team up from the very beginning, prior to the

project implementation. A follow-up survey after 3-4 years, accepting that the project may

have changed in this timespan. (vd Gaag)

- Emphasis is put on assessing achievements of e.g. complex advocacy programs, but we need a better balance between assessing the achievements and communicating them. Margit sees a lack of capacities to put the spotlight on and sell our achievements. Building capacities for advocacy is a result in itself, a huge achievement (because it creates the conditions for inclusive development), but we don't see and don't communicate it as an achievement in itself. Telling this part of the story, in light of the ToC, can help enormously in communicating and selling our results. This capacity of telling parts of the story needs to be improved.

- Comparing the approach at MFS II and the strategic partnerships, Bart sees that DSO and IOB

have learnt a lot.

- Bart hopes that cooperation on evaluations will continue, but bottom-up driven and on a

smaller scale.

- A critical success factor for the future is (the capacity for) joint learning in a changing global context, with NGOs, but also with enterprises. We need a learning mindset, among other things to be able to cope with uncertainties. It is promising to see so much energy to learn (Bart)

- We hope to team up with WOTRO and science, for research covering a longer timespan.

(Bart)

Cooperation in alliances:

- At the start of MFS II it was not new to cooperate in alliances. Both ENDS for instance,

already had much experience in cooperation before MFS II.

- It should be clear in the cooperation whether you are equal or not (there is no in-between). Given that clarity, partners should be willing to play their role. (Both ENDS)

- It is clear that the lead in the alliance has more influence than other partners. (Both ENDS)

- If you have continued cooperation with your natural partners in MFS II, it was not a forced

marriage (Plan)

- Partners in an alliance should support each other (plenary wrapping up session)

Efficiency:

- Value is more than money. We should not take over the concepts of enterprises, which

follow market rules and exclude social value.

- Our way of thinking about efficiency is still inefficient. Is there an African way of thinking about efficiency? (see the interview in Vice Versa; LH)

- Measuring efficiency was more complicated than anticipated: getting researchers to move in the same direction, measuring quantitative changes in for instance CD or CS, and getting cost data and benchmarks. Knowledge on the best approach to efficiency should be further developed in a

mutual learning experience. Today was an opportunity, the beginning of a process (vd Gaag,

wrapping up).

- Coherence can also be seen as a way to improve efficiency. On 19 May Partos will publish a

document about coherence. We monitor the government on the coherence of their

policies, which may contribute to huge improvement of efficiency.

Closure:

The MFS II results have been made accessible in several ways:

- A leaflet with some basic facts and figures, and very short summaries of the themes of the

evaluation (2015)

- the e-book Advocacy for Development, Effectiveness, Monitoring and Evaluation (2015), by

Jennifer B. Barrett, Margit van Wessel and Dorothea Hilhorst.

- a booklet with some reflections about several conclusions from the country and synthesis reports. To be published in June 2016.

- Individual reports of organisations and themes will be made accessible on the Partos website

from June 2016 onwards

- And of course learning will continue in all organisations, platforms, alliances and groups,

hopefully supported by academics as well.

Reports workshops

1.1. Report M&E workshop: Impact: from burden to bedrock, Kellie Liket and Rens Rutten

In this session, 'Impact: From Burden to Bedrock' in the Monitoring and Evaluation track, an enthusiastic group of participants explored what could be done to create impact evaluations that are truly valuable to NGOs. They started by identifying the 'mistakes' made in the MFS II evaluation: the misalignment between the evaluation needs of the donor and those of the NGOs. These misalignments mostly centered on the projects selected for the evaluation and the formulation of the evaluation questions.

Some participants viewed the gap between donor requirements (evaluations producing results that function as input for its public accountability) and NGO organizational requirements (evaluations producing results that function as input for strategic learning) as irreconcilable. However, the workshop leaders, Rens Rutten from UTZ and Dr. Kellie Liket from Impact Centre Erasmus, tried to inspire the participants with concrete cases of how win-wins might be realized.

The departure point of all the examples raised lay in going beyond the program level of organizations. Caught in their program cycles, impact is reduced to a measurement issue and often comes as a rather burdensome afterthought. By the time a program has been developed and implementers have been contracted, many opportunities have already been missed to create an impact-focused program that one could actually evaluate. The M&E manager is then expected to function as a data magician, using her magic dust to turn the data that can still be collected within those boundaries into something that gives donors the impression that goals have been achieved.

If NGOs would take the time, resources and mental space to deeply reflect on the impact they want to achieve as an organization, they could take a stronger position when negotiating with their donor. Simavi, for example, is doing this. This results in an organization that knows where the evidence base for its programs is strong and where it is weak. It allows them to create a set of evaluation questions with which they pro-actively steer the requirements of the donor. This only deepens the

accountability that the donor can ensure to the public, as an obsession with reaching goals is

replaced with empirical insights into the extent to which a difference was truly made in the lives of

the beneficiaries with their tax money. Another example is Cordaid that has developed a so-called

Flourishing Community Index (FCI) to capture the needs and perceptions of members of local

communities, based on storytelling and quantitative data. The programs can thus be focused on the

main priorities of the target groups, and the FCI can also be used to measure changes over time. This

instrument can also be used to pro-actively communicate to donors on the kind of programs that are

needed to create an impact.

1.2. Monitoring and Evaluation: practical approaches to monitor social change

Thirty-five M&E practitioners participated in this workshop, more than half of them as M&E coordinators. We started with an exercise: what makes you 'warm' (happy, enthusiastic) and 'cold' (unhappy, frustrated) when thinking of 'Monitoring and Evaluation'? Many answers were related to learning: people become 'warm' when M&E is done with colleagues, interactively, and when learning is followed up with action. People become 'cold' when there is a lack of learning, disinterest among colleagues for M&E, or when M&E is done for donors, and more so when useless qualitative indicators have to be used. All answers are in the annex, clustered in groups.

Then we examined two approaches to M&E: Outcome Mapping (by MCNV) and Outcome Harvesting (by GPPAC). In both cases the presenters emphasized the need to adapt such methods to your own situation. The jargon of Outcome Mapping, for instance, is very confusing for CSOs and needs to be 'translated' into local users' language. MCNV uses OM for planning, monitoring and evaluation of developing the capacities of their local partners. It is a useful tool for interaction among program staff, with a focus on learning and use. In the case of GPPAC, OH is a useful tool to monitor and report on outcomes retrospectively and to document why a change was significant and how GPPAC contributed to the outcome. Apart from OM and OH there are other tools and approaches that can be used, such as storytelling, most significant change, sense making and write shops, in which reflection and learning are stimulated. Outcomes are about social and behavioural changes. The small steps in such changes are important, so monitor and reflect on 'small' and intermediate outcomes. The presenters concluded by stressing the importance of interaction, reflection and using monitoring information for strategic discussions and follow-up (future oriented).

1.3. Capacity development for lobbying and advocacy

Facilitated by Heinz Greijn (PARTOS) and Clara Bosco (CIVICUS). Approximately 20 participants

The aim of the workshop was to critically reflect on three assumptions that underlie the Dutch policy framework Dialogue and Dissent.

The discussions took place in dedicated sub-sessions of approximately 25 minutes each. Each session started with a provocative statement to trigger debate in small groups who then reported back to plenary.

Discussions revolved around:

Whether capacities of Southern partners require strengthening and in which ways;

The role and capability of Dutch NGOs and more widely INGOs to support southern partners in developing their capacities to influence policy;

The possible tensions in achieving results in L&A and at the same time also in capacity development for L&A.

The main conclusions:

Participants of strategic partners, from the North and from the South, each have their own specific areas of expertise which can be mobilised to strengthen each other's capacities;

The group cautioned against a “mechanical” application of the ToC approach at the expense of the quality of strategic thinking.

1.4. Report workshop CD: Advocacy Capacities (Margit, Eunike, Frauke)

While the Dutch government (DSO) is no longer beholden to the 5C model as a monitoring tool, there

needs to be some assessment of effectiveness in the Dialogue and Dissent programme. Capacities as

adapted in the e-book Advocacy for Development offer a direction as to what capacities are relevant

for lobbying and advocacy. However, these capacities would still require specification for different

organisations, programmes and contexts.

Going forward, for using an adapted form of the 5C model, most scope was seen for using it as a starting point for discussion and as a framework for learning. It can point to dimensions of capacity people may not have thought about, and can open their eyes to these less skills- and competency-based capabilities. Discussing capacity should be about learning about the relevance of, and changes needed with respect to, each of the Cs in order to contribute to the overall capacity. A practice-oriented possibility for engaging with capacity questions is to start out from organisational objectives and consider what capacities need to be furthered to attain these.

1.5. Workshop on Efficiency: Sense or Nonsense

Moderator: Harry Derksen

A remarkable conclusion in the evaluation report of the MFS II program (2011-2015) was that

efficiency or the cost/benefit ratio does not seem to be an issue of consideration for many

organizations. It proved to be very difficult to extract from the administrations of organizations the costs they spent on the implementation of their programs.

Piet de Lange, evaluator of the Policy and Operations Department (IOB) of the Ministry of Foreign

Affairs, summed up points of consideration in relation to the efficiency of development programs, such as: are appropriate inputs deployed at the lowest cost, are outputs produced within the timeframe and the budget, and how well is efficiency managed in the organization? A good reason to pay more attention to efficiency is that taxpayers' money is involved and people have the right to know whether their money is spent well. It was remarked, though, that technically it proved to be very difficult to 'measure' the efficiency of programs under MFS II.

Jappe Kok of Hivos remarked that the issue of efficiency is not easy to determine as it depends very

much on the kind of activity and the way in which impact can be measured. His conclusion was that

future programs and evaluations should have realistic ambitions.

Silvie Mensink from the social enterprise Akiki noted that (social) companies are also evaluated on their cost/benefit ratio. She believes that

The discussion then focused on the question whether or not programs can be compared in terms of efficiency. It was argued that some of the programs are too complex and too different to make a simple comparison. But others argued that the issue is not whether programs with lower costs should be favored; rather, one can learn a lot from the comparison with other programs within the same sector. The general consensus was that efficiency in development programs should receive more attention: penny wise, pound foolish!

2.1. M&E: Reporting Results in IATI – a peer learning workshop

Anne-Marie Heemskerk and Rolf Kleef

Organisations are getting ready to publish IATI data, but still face decisions on what to publish and how. This session was meant to learn from others. Some questions were practical: how to find organisation identifiers, which DAC sector codes to use, how to verify your own data before publishing? Bigger questions were around using IATI within a partnership. Who publishes outcomes and outputs? How to align the different indicators used by each organisation? Some partnerships develop a manual with choices and recommendations, and start with a minimal set. Most participants are still looking for ways to work on IATI with their staff, country offices, and partners. The workshop helped participants realise they are not alone with these questions, and to learn about the often pragmatic choices made by others. Many participants would like Partos to repeat such an exchange, and maybe have an online forum for peer support.
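To make the practical questions above more concrete, the following is a minimal sketch (not an official template) of assembling one IATI activity record with Python's standard library. The organisation identifier "NL-KVK-12345678", the activity suffix "PROJ-001" and the name "Example NGO" are made-up examples; sector code 15150 with vocabulary "1" refers to the OECD DAC 5-digit purpose codes discussed in the session.

```python
# Sketch: building a single IATI activity record with xml.etree.ElementTree.
# All identifiers below are hypothetical placeholders, not real registrations.
import xml.etree.ElementTree as ET

def build_activity(org_id: str, title: str, sector_code: str) -> ET.Element:
    """Assemble one <iati-activity> element with the basics discussed above."""
    activity = ET.Element("iati-activity")
    # The activity identifier must start with the reporting org's identifier
    ET.SubElement(activity, "iati-identifier").text = f"{org_id}-PROJ-001"
    reporting = ET.SubElement(activity, "reporting-org", ref=org_id, type="22")
    ET.SubElement(reporting, "narrative").text = "Example NGO"
    title_el = ET.SubElement(activity, "title")
    ET.SubElement(title_el, "narrative").text = title
    # vocabulary="1" marks the code as an OECD DAC 5-digit purpose code
    ET.SubElement(activity, "sector", vocabulary="1", code=sector_code)
    return activity

activities = ET.Element("iati-activities", version="2.03")
activities.append(
    build_activity("NL-KVK-12345678", "Civic space programme", "15150"))
xml_text = ET.tostring(activities, encoding="unicode")
print(xml_text)
```

A snippet like this is only the skeleton; the alignment questions raised in the workshop (who publishes outcomes, which indicators) still have to be settled per partnership before such records are meaningful.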

2.2. Monitoring and Evaluation: practical approaches to monitor social change

35 M&E and lobby & advocacy practitioners participated in this workshop. More than half are working with Theories of Change, amongst others for the Strategic Partnership grant scheme with the Dutch government.

Dieuwke Klaver of the Centre for Development Innovation of Wageningen UR set the stage,

explaining how Theories of Change can be helpful in monitoring and assessing complex social change

processes. She assessed two Oxfam campaigns financed with the MFS-II grant. Oxfam actively uses Theories of Change to elaborate the strategic directions of its campaigns towards impact. As an external evaluator of the joint ILA evaluation, Dieuwke elaborated how she assessed the Oxfam practice with ToC in three ways: (a) assessing the relevance of outcomes achieved in the light of the ToC; (b) testing the validity of the ToC; and (c) explaining outcomes achieved and assessing Oxfam's role in these.

NB: The PPT is available here: [link]

Co-facilitator Paul Kosterink of GPPAC kept it short: his organisation did not actively use its Theory of Change for the implementation of the MFS-II grant, because it represented GPPAC's organisational mission and vision rather than the story of change to realise the objectives of the MFS-II grant. A ToC should be formulated at the level of how an actor can bring about change. The actor can be an individual (a decision maker), an organisation, an institution, or society at large; and the ToC should of course include the own organisation's role in the elaborated changes.

Finally, we had half-hour discussions in sub-groups of 5 to 8 people, with the following questions to be addressed:

How are you (planning to) do M&E of Lobby and advocacy?

Group 1: (a) zoom out, (b) involve the implementing partner, (c) at least once a year, (d) explicate the whole assumption: 'if A --> B, because ...', (e) admit it is new for many of us, (f) outcome tracing can help to reflect, (g) evidence-based change or adaptation.

How are you (planning to) do M&E of Lobby and advocacy?

Group 2: (a) how far are we from the ToC change?, (b) relevant milestones, (c) qualitative versus quantitative, (d) team and personal journey on learning, (e) for big campaigning be specific; for lobbying behind closed doors, take notes! / keep an internal journal, (f) a reality check on what you've been doing, (g) share that with colleagues. Concluding question to address: 'what is the most significant change achieved?'

Do you aim to learn from M&E information? How?

Group 3: (a) learning does not happen automatically from M&E; you need to build it into your organisation (common practice), (b) the level of conclusions at ToC level is sometimes too general to really learn from.

2.3. CDI workshop ‘Evaluating capacity development with the 5Cs framework. Experiences from 4

countries’.

The complexity of the evaluation influenced its utility. Evaluators could not directly engage with the

synthesis team, nor with evaluation managers to fine-tune the evaluation to organisation needs.

However, integrating learning processes (e.g. workshops and self-assessments in teams) enhanced

evaluation utility.

Process tracing was found to be useful, particularly to get in-depth information on how selected

capacity changes have come about. This indicated clear links between MFS II and selected capacity

changes, while showing that internal and to a smaller extent external factors also played a role.

Recalling training effects was difficult for participants.

Capacity development is mainly provided as training – CFAs are grant makers and thematic experts

but not capacity developers. The 5Cs framework needs considerable explanation, is applied

differently by different organisations, and was therefore not valued by all workshop participants.

Context matters and capacity development needs to be tailored to the needs and situation of the

Southern Partner Organisation. Having good relationships and close communication helps.

2.4. Workshop CD: Partnership Learning Loop – Be the partner you like to see (Helga van Kampen, Rita Dieleman)

There are many assumptions about partnering being the key to success, as opposed to working alone. The question is when partnering brings added value and on what level: for society, for beneficiaries, for the partnership as a whole, for organisations, for individual employees? Partnerships don't equal relationships. A relationship might well be a transactional one, a service delivery agreement. If the relation doesn't suit the situation, a partnership becomes an (administrative) burden for everyone involved.

A partnership evolves over time. Sometimes there is a clear joint vision and strategy, with clear expectations on each side, sometimes not. Each partnership is unique and changes depending on the context, the phase it's in and the people involved. Expectations are usually not that explicitly on the table, especially in a phase and position where trust has to be built. There is a growing need to get more insight into the partnership process and its added value on different levels. Participants in the workshop felt it would be good to stimulate dialogue in an open atmosphere about the differences in perceptions. A tool such as the Partnership Learning Loop might be helpful to stimulate that dialogue.

2.5. Efficiency

Introduction by Wouter: Experience with evaluating efficiency is limited, and when it is evaluated it is usually not the strongest part of an evaluation. This is partly because it is not easy to do. Jacques van der Gaag:

A decision was made not to assess the efficiency of capacity development and civil society strengthening

Even for MDGs, there are so many different outcomes that it turned out to be difficult to measure efficiency

For efficiency a comparison of outcomes with inputs is needed, but the country teams couldn’t get their hands on the relevant cost data

Organisations should pay more attention to how much money goes out and on what it is spent

In order to know whether, for example, 100 dollars is a lot, you need benchmarks to compare with. However, benchmarks of similar programs are scarce in the literature.

The overall grading in the synthesis report seemed very high, but that was because some of the projects were far stronger than others and many were not scored.

Wouter points out that Jacques wrote the statement: ‘’This disturbing state of affairs needs to be addressed urgently’’.

A question from the audience was whether other aspects, such as time, were monitored. This was not the case.

Thea Hilhorst:

There were some issues with the efficiency part; we were also looking at how to deal with efficiency

In literature efficiency of L & A is almost non-existent

We wouldn't have been satisfied if we had decided not to measure efficiency

Benchmarks wouldn’t work, because there are too many indicators and not many clear outcomes so there is no way to compare. Therefore we came up with the theory of efficiency, equivalent to using a Theory of Change for evaluating the outcomes of a programme.

Firstly, we asked the alliances what they think efficiency is (how would they define efficiency?) and how they make decisions about money. Secondly, we checked whether they do what they say they do. Thirdly, we asked how and what they changed and what they learned concerning efficiency. Other examples of questions are: What are the indications? What is the formal side? What is the informal side? What are the dilemmas? How did they learn?

Mainly the Dutch side was included in the measurement of efficiency (not so much the south)

You can break down efficiency in different dimensions

They found out that alliances are actually very busy with efficiency. The question 'should we do it or not?' is often discussed.

Time is an important aspect of efficiency, but it is not simply adding up 8 hours times five days. People are often exploited to the max, and they let themselves be exploited. The consequence is many burn-outs. Maybe we should be speaking about being 'over-efficient' in this case.

We didn't try to get an objective measure. An expensive hotel may seem objectively not good at all, but it can be the only effective way to get highly placed people in the room, which may be needed to reach what you want.

Broadening the network for advocacy is also an efficiency question. Smaller networks are cheaper…

The theory of efficiency shows that there are a lot of ideas about efficiency, but they (NGOs/alliances) don't 'share' how much money they spend.

It is important to make your choices explicit. The theory of efficiency is a good tool to show to the donors.

Van der Gaag adds: When you talk about efficiency you talk about the ultimate goal, so this is hardly about real efficiency

Dewi:

Dewi shares a case of Cordaid about women’s peace and security

They wanted to come up with a strong advocacy tool (barometer)

Strengthening local women's understanding of how to influence policy

I have never thought about efficiency before. During the process we like to maintain:

o Valid and reliable data

o Local women need to be in charge

We questioned ourselves: is the L & A data valid to measure effectiveness?

Our strengths are: validity, legitimacy, 120 focus group discussions

Looking into efficiency shouldn't only be done within our own organisations but also from the perspective of the partners. Being efficient ourselves can in some cases have negative effects on the efficiency of partners. It is important to think about this beforehand.

Interesting statement: the definition of efficiency is the same for the ILA and synthesis teams, but the synthesis team is trying to measure actual efficiency, while the ILA team is focusing on the road towards efficiency.

Discussion: Questions that were addressed: Should we assess efficiency more and better? What approaches/methods should we adopt? Any other implications, agreements, ideas or solutions? Some of the issues discussed included:

Make expenses visible and transparent

Explain why you chose option A and not B. The explanation matters more than the actual costs.

Capturing guidelines for the theory of efficiency is possible, but context can have a huge influence on which guidelines would or would not work.

If you have no results in your project does this mean you were not efficient? Was it bad luck? Or did you make the wrong assumptions?

One of the participants says: a poorly developed TOC can lead to not being efficient. But after some discussion we think it is more about how you use and adapt your TOC during the process, and not per se about the starting model. If the TOC is not used well, this leads to not being efficient.

You can’t compare projects on efficiency the way it is done in the private sector. You can compare the price of cheese from two different companies, but you can’t compare two similar programmes that are set up in different contexts. Maybe there should be a different word for efficiency?

Wrap-up

Thea: Evaluating in an active way is important. A method that facilitates active engagement, and thereby use of the findings, has an advantage over methods that just provide a figure. We mainly talk about the costs for the donors, but we hardly do this at the level of local workers. We should also focus on the efforts (time) and costs for the local people.

Jacques: I learned that it is important not to approach efficiency as a matter of accountability only; it should be looked at as a process of learning. You can use the TOC to understand what results mean.

Evaluation of the seminar (Joke, Sophie, Annemiek, Lucia)

- The reason for the seminar could have received a bit more attention. The chair of the day did not pay much attention to the ‘facts and figures’ part, and moreover some people would have liked a short summary of the most important MFS II results. Given the ownership of this day, in hindsight Lucia should perhaps have done that briefly herself.

- During the closing session, not much energy came from the audience any more. Moreover, the chair had run out of questions. Next time, after a full day, better to do a more interactive closing, e.g. by having people answer questions with their mobile phones and showing the answers on screen, or something with a show of hands. It is also good to have a number of questions of your own at hand.

- The ‘wall of inspiration’ did not really come into its own. Next time we should stimulate people even more actively to post something there, e.g. by putting slips of paper on the chairs, by leaving a short briefing of max. 2-3 points for the organisers in the rooms where the workshops are given, or by attaching an even clearer goal to it (e.g. input for a 10-point agenda for the future).

- Next time, leave a short briefing of max. 2-3 points for the workshop organisers in the workshop rooms. In this case that would be to reserve the last 5 minutes for writing cards for the inspiration wall, plus an advance request to write a short report afterwards.

- Next time, perhaps also include a floor plan in the programme booklet, with directions to the break-out rooms.

- Next time, perhaps also put the times of the workshop sessions on the cards.

- Badges should be ready well in advance. This time it only just worked out, because of train delays.

- In some workshops the time planning was not quite right (Paul/Akke). This was partly because extra chairs had to be arranged first due to the large turnout. The time may also have been too short to cover two methods (outcome mapping and harvesting). After the workshop there was still a lot of confusion about, for example, what had to be harvested. Next time perhaps limit it to one method. Less is more. On the other hand, there were also people who were very inspired by this workshop and the way it was approached.

- New World Campus had too little bread for the lunch, and it took too long before more was made. Next time order more than 2 sandwiches (3 slices of bread stacked) per person? Check for how many people they now put on the bill……..

See also the separate report with the results of the participants’ answers to the evaluation questions.

Annex: programme Seminar ‘Learning for the Future:

how to deal with the lessons learnt from MFS II?’

Thursday 14 April 2016 Location: New World Campus, Spaarneplein 2, The Hague (entry: 9.30h) 10.00-17.00 h

PROGRAMME

09.30 – 10.00h registration (for workshops)
10.00 – 11.30h opening

- interview with Bart Romijn (Chair SGE/Director Partos) - looking back - what are the most striking lessons learnt from MFS II?

with Harry Derksen (SGE Board), Karel Chambille (Member SGE Internal Reference Group/Evaluation Manager Hivos) and Liliana Ruiz (Team Manager Program Quality and Capacity Strengthening Oxfam Novib)

- looking forward – how do we take up the challenges for the future? with Daniëlle Hirsch (Director Both ENDS), Monique van ’t Hek (Director Plan Netherlands), To Tjoelker (Head MO/DSO, MoFA).

11.30 – 13.00h workshops session 1
13.00 – 14.00h lunch
14.00 – 15.30h workshops session 2
15.30 – 16.00h break (coffee and tea)
16.00 – 17.00h wrap up and plans for the future

with contributions from: - Margit van Wessel (Project Leader MFS II ILA Evaluation/Assistant

Professor Wageningen UR), - Jacques van der Gaag (Member Synthesis Team MFS II Evaluation/Senior

Fellow Center for Universal Education), - Bart Romijn (Chair SGE/Director Partos) and others

17.00h drinks and snacks

The programme will be facilitated by Ellen Mangnus (Journalist, Vice Versa)

Please find below an overview of the workshops which will be given throughout the two workshop sessions of the program. You will have the opportunity to choose one workshop in each session. In both sessions, three areas will be covered: monitoring and evaluation, capacity development and efficiency.

Workshop Session 1 (11:30-13:00h)

1.1 Monitoring and evaluation: Impact: from burden to bedrock

Together we have allowed impact to become a burden to the development sector. In this workshop we explore lessons from case studies that have been able to use impact thinking and measurement in a meaningful way. When done right, impact assessments (analysis of the potential impact that can be achieved), impact evaluations (results measurements) and evidence-based programming can provide the foundation for effectiveness at all levels of the sector: formulating effective donor strategies, NGO strategies, NGO programmes and specific interventions. A focus on impact - when done right - allows your organisation to strip away the cant and get down to the bedrock of providing those in need with the opportunities to substantially improve their wellbeing.

Facilitated by: Kellie Liket (postdoctoral researcher, Impact Centre Erasmus, Erasmus University Rotterdam) and Rens Rutten (Member SGE Internal Reference Group/Monitoring & Evaluation Officer, UTZ)

1.2 Monitoring and evaluation: Practical approaches to monitor social change

Objective: inventory and exchange of practical approaches to plan, monitor and evaluate social change. In their MFS II programmes, the facilitators have practiced Outcome Mapping (MCNV) and Outcome Harvesting (GPPAC), respectively. Both organisations were subject to external evaluations; MCNV through an own evaluation and GPPAC as part of the Joint Evaluation on international lobbying and advocacy. The facilitators will introduce how Outcome Mapping and Outcome Harvesting work for their organisations and for the external evaluations. Main findings and challenges will be shared with the workshop participants and will be the starting point to investigate the existing experiences of participants. The investigation will lead – we expect – to new insights on how participants can improve their own PME practices. Note: we expect this workshop will be most relevant to practitioners of lobbying and advocacy, PME coordinators and external evaluators.

Facilitated by: Akke Schuurmans (Senior Policy Advisor, Medical Committee Netherlands Vietnam (MCNV)/Tea Programme) and Paul Kosterink (Coordinator Planning, Monitoring, Evaluation & Learning, Global Partnership for the Prevention of Armed Conflict (GPPAC))

1.3 Capacity Development: Critical reflection on the assumptions underlying capacity development for lobbying and advocacy

Do capacities of Southern partners still require strengthening anno 2016? Do Dutch CSOs have the capacity to support their Southern partners in developing their capacities to influence policy? Is it possible to achieve results in lobbying & advocacy and at the same time also in capacity development for lobbying & advocacy? In this workshop we share our experiences and critically reflect on these questions.

Facilitated by: Heinz Greijn (Consultant, Partos) and Clara Bosco (CIVICUS)

1.4 Capacity Development: Advocacy capacities

Many organisations are familiar with the 5C framework for capacity development, but to what extent does this framework need to be adapted for the analysis of advocacy capacity? During the workshop, we propose and explore some suggestions for adaptation. Do these suggestions do justice to the conceptualization of capacity in the 5C model? Are the identified capacities the proper ones, and are all relevant capacities covered? Do they do justice to the realities in the field? What should be added or adjusted? And more broadly: is an adapted version of the 5C framework useful for the development, monitoring and evaluation of advocacy capacity, or would another approach be more adequate? What are interesting alternatives?

Facilitated by: Eunike Spierings (Policy Officer Monitoring and Evaluation, ECDPM), Margit van Wessel (Project Leader MFS II ILA Evaluation/Assistant Professor, Wageningen UR) and Frauke de Weijer (independent consultant)

1.5 Efficiency: Sense or NonSense?

Does it make sense to weigh the costs against the benefits in projects of health, education or human rights, or do we think this is nonsense or even inappropriate to ask? The report of the MFS II Evaluation noticed a lack of interest in efficiency issues among both donors and recipients. In other sectors, however, cost/benefit ratios are essential in investment decision-making. In this workshop we explore the sense and nonsense of cost/benefit analysis from three different perspectives.

Facilitated by: Harry Derksen (Board Member SGE), Silvie Mensink (Managing Director, Akiki Finance Group), Ramon Lambert (Managing Director, Akiki Finance Group), Piet de Lange (Evaluator, IOB, Ministry of Foreign Affairs), Jappe Kok (Head Audit & Evaluation, Hivos)

Workshop Session 2 (14:00-15:30h)

2.1 Monitoring and Evaluation: Reporting results in IATI - a peer learning workshop

Since the 1st of January 2016, all programs funded by the Dutch MoFA must report their expenditures and their results by using IATI. These reporting guidelines were discussed during a Partos meeting in November. What solutions have organisations found in the meantime to share their results using IATI? How does this work for organisations in partnerships or alliances? This workshop is meant to share those experiences and identify possible solutions and issues that still need to be solved. It could be the starting point of a learning network on IATI publication on the basis of these MoFA guidelines.

Facilitated by: Anne-Marie Heemskerk (Manager Knowledge and Effectiveness, Partos) and Rolf Kleef (Consultant, Open for Change)

2.2 Monitoring and evaluation: How can Theories of Change be used in practice?

Objective: inventory and exchange of practical approaches in monitoring and measuring results in lobbying and advocacy. In the joint ‘International Lobby & Advocacy’ (ILA) evaluation of MFS II, the researchers from Wageningen UR used Theories of Change to answer the evaluation questions. Some of the evaluated alliances had already formulated their own Theories of Change; others had not, or did not actively use them. Dieuwke Klaver was one of the researchers and will share her experiences with the use of ToCs for evaluation purposes. Both facilitators will discuss in what different ways Alliances used (or did not use) ToCs, and how practitioners of lobbying and advocacy are involved. This will be the starting point to discuss with participants how they (plan to) monitor and measure results in lobbying and advocacy. Note: we expect this workshop will be most relevant to practitioners of lobbying and advocacy, PME coordinators and external evaluators.

Facilitated by: Dieuwke Klaver (Researcher MFS II Evaluation/Expert Governance & Rural Livelihoods, Wageningen UR Centre for Development Innovation) and Paul Kosterink (Coordinator Planning, Monitoring, Evaluation & Learning, Global Partnership for the Prevention of Armed Conflict (GPPAC))

2.3 Capacity Development: Evaluating capacity development with the 5c framework in MFS II – experiences from 4 countries

The 5 capabilities (5c) framework has been used for assessing the changes in (organisational) capacity development and the extent to which these changes could be attributed to Dutch support (MFS II). Wageningen University and Research centre, Centre for Development Innovation (CDI) was responsible for this evaluation in 4 out of the 8 selected countries: Ethiopia, India, Indonesia and Liberia. What did we learn from this in relation to capacity development? What does capacity development mean in practice? Is it useful to undertake capacity development? And what suggestions do we have for strengthening the capacity of our partners? This workshop will focus on the learning from the evaluators whilst engaging with the audience to draw on their experiences as well.

Facilitated by: Cecile Kusters (PME senior advisor, Wageningen UR, Centre for Development Innovation (CDI); coordinator of the MFS II capacity development evaluations for 4 countries) and Bram Peters (junior advisor CDI; responsible for the implementation of the 5c evaluation in Liberia)

2.4 Capacity Development: Partnership Learning Loop - Be the partner you like to see

An increasingly frequent question for partnership practitioners centers on how to increase the efficiency and effectiveness of collaborative efforts. How can they be improved in terms of working operations and effectiveness? And what is the added value for each partner, for the partnership as a whole and for the beneficiaries? In this interactive learning session about common challenges and core principles, we will share with you the Partnership Learning Loop, an online dialogue tool that assesses the different layers of complex partnerships. It provides insight into how partnerships function in reality and whether they respond to needs, and gives guidance on how to improve the collaboration.

Facilitated by: Helga van Kampen (Accredited Partnership Broker, Partnership Review, NewHow & Partnership Learning Loop) and Rita Dieleman (Accredited Partnership Broker, Evaluator, RidiConsultancy & Partnership Learning Loop)

2.5 Efficiency: How to measure efficiency – using a theory of efficiency versus using benchmarks

Efficiency can be measured in many different ways. Should the focus be on the choices that organisations make about their investments, or on objective unit costs compared with proper benchmarks? Two evaluators will present the approach they have used to analyse efficiency as well as their main conclusions and recommendations. In the workshop we will compare and contrast the different approaches to analyse efficiency and discuss how the recommendations can be used to improve our work.

Facilitated by: Wouter Rijneveld (Member SGE Internal Reference Group/Evaluator, Resultante), Thea Hilhorst (Researcher MFS II Evaluation – ILA Team/Professor of Humanitarian Aid and Reconstruction, ISS), Jacques van der Gaag (Researcher MFS II Evaluation – Synthesis Team/Senior Fellow Center for Universal Education) and Dewi Suralaga (Policy Advisor, Cordaid)

Annex: Evaluation questions seminar 14th April

1. How did you appreciate the overall programme and organisation of the seminar?

(rate each: low / medium / high)

- the overall programme contents
- the overall organisation
- the location
- the catering

Explanation……………

2. Which workshops did you attend and how would you rate these (0 = low appreciation; 10 = high appreciation)?

Workshop 1st session (your rate):

1.1. Monitoring & Evaluation: Impact: from burden to bedrock (Kellie Liket and Rens Rutten)
1.2. Monitoring & Evaluation: Practical approaches to monitor social change (Paul Kosterink and Akke Schuurmans)
1.3. Capacity Development: Critical reflection of the assumptions underlying capacity development for L&A (Heinz Greijn and Clara Bosco)
1.4. Capacity Development: Advocacy capacities (Eunike Spierings, Margit van Wessel, Frauke de Weijer)
1.5. Efficiency: Sense or Non-sense? (Harry Derksen, Silvie Mensink, Ramon Lambert, Piet de Lange, Jappe Kok)

Workshop 2nd session (your rate):

2.1. Monitoring & Evaluation: Reporting results in IATI (Anne-Marie Heemskerk and Rolf Kleef)
2.2. Monitoring & Evaluation: How can Theories of Change be used in practice? (Dieuwke Klaver and Paul Kosterink)
2.3. Capacity Development: Evaluating Capacity Development with the 5C framework in MFS II (Cecile Kusters and Bram Peters)
2.4. Capacity Development: Partnership Learning Loop – Be the partner you like to see (Helga van Kampen and Rita Dieleman)
2.5. Efficiency: How to measure efficiency – using a theory of efficiency versus using benchmarks (Wouter Rijneveld, Thea Hilhorst, Jacques van der Gaag, Dewi Suralaga)

Explanation rate workshop 1st session…………………

Explanation rate workshop 2nd session……………….

3. What did you like most about the seminar, and why?

4. What suggestions do you have for improvement, if a similar event is organised in the future?

5. What have you learnt from this seminar?

6. How are you going to follow up based on what you have learned?

7. Would you like to attend this kind of event more often? Yes/no

8. If yes, what should be the topic(s) for the next event?

9. If no, why not?

10. Do you have any other suggestions or remarks?

Thank you very much!