MEASURE Evaluation Phase IV – Midterm Performance Evaluation

FINAL REPORT

February 2018

This publication was produced for review by the United States Agency for International Development. It was prepared by Lynne Franco, Svetlana Negroustoueva, Kelsey Simmons, Elisa Knebel, Sabine Topolansky, and Sarah Lunsford of EnCompass LLC through the Policy, Planning, and Learning–Learning, Evaluation, and Research contract.



  • Abstract

    Purpose: The midterm performance evaluation of MEASURE Evaluation Phase IV seeks to inform technical programming and program management activities.

    Questions: The evaluation responds to three overarching questions: (1) Is the project meeting its stakeholders’ needs? (2) What are the benefits of a health sector–wide versus a health area–specific approach? (3) Are the tools developed useful?

    Methods: This evaluation used a mixed-methods approach. The team conducted in-person interviews with 117 stakeholders in Cote d’Ivoire, Mali, and Nigeria; 49 virtual or in-person interviews with internal and external stakeholders in Washington, D.C., and other countries; an online survey with 120 internal (U.S. Government) and external stakeholders; and a review of 104 project documents.

    Findings and Conclusions: MEASURE Evaluation has been successful in meeting many needs at country and global levels, but has been challenged to comprehensively and consistently meet all needs. Stakeholders perceive a unique role for the project in the USAID landscape in strengthening health information systems (HIS), conducting health impact evaluations, and building evaluation capacity. They view the project’s approach to technical assistance and capacity building as facilitating ownership and sustainability at the country level. Almost all stakeholders see the benefits of a health sector–wide approach to strengthening HIS; however, this is challenging for USAID, given the funding streams it has to manage. Stakeholders appreciate and use the many tools that the project has developed and adapted. Strengthening data quality and data use for decision making and facilitating interoperability of databases remain the most pressing HIS strengthening needs.

    Prepared for the United States Agency for International Development

    USAID Contract Number AID-OAA-TO-17-00015

    February 23, 2018

    Implemented by:

    EnCompass LLC

    1451 Rockville Pike, Suite 600

    Rockville, MD 20852

    Phone: +1 301-287-8700

    Fax: +1 301-685-3720

    www.encompassworld.com


  • MEASURE EVALUATION

    PHASE IV – MIDTERM PERFORMANCE

    EVALUATION FINAL REPORT

    Disclaimer

    The authors’ views expressed in this publication do not necessarily reflect the views of the United States Agency for International Development (USAID) or the United States Government.

  • Policy, Planning, and Learning–Learning, Evaluation, and Research

    This task order is implemented through the Policy, Planning, and Learning–Learning, Evaluation, and Research (PPL-LER) Indefinite Delivery, Indefinite Quantity contract, funded by the United States Agency for International Development (USAID). USAID’s Bureau for Policy, Planning, and Learning (PPL) awarded EnCompass LLC the 5-year PPL-LER contract to provide technical and advisory services for performance and impact evaluations, evaluation and performance monitoring capacity building, and performance monitoring activities at the mission, bureau, and agency-wide levels. PPL-LER task orders design and implement evaluation studies and assessments based on rigorous evidence sources (both quantitative and qualitative), develop and deliver evaluation and performance monitoring training, and provide technical assistance in performance monitoring for USAID development programs worldwide.

    Recommended Citation

    Franco, L., S. Negroustoueva, K. Simmons, E. Knebel, S. Topolansky, and S. Lunsford. 2018. MEASURE Evaluation Phase IV – Midterm Performance Evaluation: Final Report. Prepared for the United States Agency for International Development (USAID) and the MEASURE Evaluation project. Rockville, MD: EnCompass LLC.

    Acknowledgments

    The evaluation team appreciates the strong support provided by the USAID project management team and MEASURE Evaluation staff, and offers thanks to all who provided input to the interviews and surveys.

  • CONTENTS

    LIST OF EXHIBITS ............................................................................................................. ii

    ACRONYMS ....................................................................................................................... iv

    EXECUTIVE SUMMARY .................................................................................................... v

    INTRODUCTION ............................................................................................................... 1

    Background and Project Context .............................................................................................1

    Evaluation Questions ...............................................................................................................3

    EVALUATION DESIGN AND METHODOLOGY ............................................................. 4

    Sampling and Data Collection .................................................................................................4

    Data Analysis ...........................................................................................................................5

    Limitations ..............................................................................................................................6

    FINDINGS ........................................................................................................................... 7

    Is the Project Meeting the Needs of Its Stakeholders? ..............................................................7

    What Are the Benefits of a Health Sector–Wide versus a Health Area–Specific Approach to HIS Strengthening and Evaluation Capacity Building? ..........................................................19

    Are the Tools Developed Useful? ...........................................................................................21

    Remaining Needs ..................................................................................................................25

    CONCLUSIONS ................................................................................................................ 27

    RECOMMENDATIONS .................................................................................................... 30

    ANNEX 1: EVALUATION SCOPE OF WORK ................................................................. 34

    ANNEX 2: EVALUATION TEAM PROFILES ................................................................... 46

    ANNEX 3: DOCUMENTS REVIEWED ............................................................................ 49

    ANNEX 4: EVALUATION DESIGN AND METHODS .................................................... 55

    ANNEX 5: DATA COLLECTION TOOLS (QUALITATIVE INTERVIEW GUIDES) ..... 67

    ANNEX 6: DATA COLLECTION TOOLS (ONLINE SURVEY QUESTIONNAIRES) .... 99

    ANNEX 7: COTE D’IVOIRE BRIEF COUNTRY REPORT ............................................ 100

    ANNEX 8: MALI BRIEF COUNTRY REPORT ............................................................... 109

    ANNEX 9: NIGERIA BRIEF COUNTRY REPORT ........................................................ 115

    ANNEX 10: ONLINE SURVEY TABLES ......................................................................... 127

    ANNEX 11: CONFLICT OF INTEREST DISCLOSURES .............................................. 141

    February 2018 | MEASURE Evaluation Phase IV – Midterm Performance Evaluation: Final Report i

  • LIST OF EXHIBITS

    Exhibit 1: MEASURE Evaluation stakeholders ................................................................................ 1

    Exhibit 2: Distribution of MEASURE Evaluation funding, years 1–3, by type and source ............... 2

    Exhibit 3: Key evaluation questions .................................................................................................. 3

    Exhibit 4: Data collection methods by respondents reached ............................................................. 4

    Exhibit 5: Project performance ratings from USAID mission respondents who worked with MEASURE Evaluation on a five-point scale (10 missions) ....................................................... 7

    Exhibit 6: MEASURE Evaluation’s frequently cited technical successes from country visits ............. 8

    Exhibit 7: Summary of most-reported success and inhibiting factors in three countries .................... 9

    Exhibit 8: Ratings from USG respondents who worked with MEASURE Evaluation on the project’s performance in building capacity on a 5-point scale ............................................................... 10

    Exhibit 9: USG respondents’ satisfaction with MEASURE Evaluation on a 5-point scale .............. 11

    Exhibit 10: USG/Washington interviewees’ differing perceptions on strengths and weaknesses of MEASURE Evaluation ........................................................................................................... 12

    Exhibit 11: Country-level survey respondents’ ratings on benefits of participating in groups that MEASURE Evaluation led, with weighted average on a 5-point scale (n=43) ......................... 13

    Exhibit 12: Survey respondents’ satisfaction ratings with MEASURE Evaluation’s helpfulness to the groups in which it participated on a 3-point scale ................................................................... 14

    Exhibit 13: USAID missions’ reasons to work with MEASURE Evaluation (n=13) ....................... 15

    Exhibit 14: Types of evaluations and related activities in MEASURE Evaluation Phase IV (n=53). 15

    Exhibit 15: MEASURE Evaluation’s contributions to sustainability in three countries visited ........ 18

    Exhibit 16: USAID mission survey respondents’ perceptions of MEASURE Evaluation's encouragement of sustainability .............................................................................................. 19

    Exhibit 17: Country visit respondents’ most-cited tools with which MEASURE Evaluation is involved, in order of importance ............................................................................................ 22

    Exhibit 18: Comments about specific MEASURE Evaluation supported tools ............................... 23

    Exhibit 19: MEASURE resource downloads/hits: July 1, 2014–June 30, 2017 .............................. 25

    Exhibit 20: Survey respondents’ top five emerging HIS development needs for the near future (next 2–5 years) (n=99) ................................................................................................................... 26

    Exhibit 21: Survey respondents’ top emerging evaluation capacity development needs for the near future (next 2–5 years) (n=99) ................................................................................................ 27

    Exhibit 22: MEASURE Evaluation polarities ................................................................................. 28

    Exhibit 23: Evaluation design matrix .............................................................................................. 56

    Exhibit 24: Selected countries against USAID criteria .................................................................... 61

    Exhibit 25: Online survey sample by stakeholder and method ........................................................ 63

    Exhibit 26: Sample by stakeholder and method .............................................................................. 64

    Exhibit 27: MEASURE Evaluation Phase IV activities for years 1–3 ............................................ 101


  • Exhibit 28: Most useful tools, as cited by stakeholders ................................................................. 106

    Exhibit 29: MEASURE Phase IV activities ................................................................................... 109

    Exhibit 30: Most useful tools, as cited by stakeholders ................................................................. 113

    Exhibit 31: MEASURE Evaluation Phase IV activities ................................................................. 116

    Exhibit 32: Data collection breakdown by stakeholder group ....................................................... 117

    Exhibit 33: Staffing challenges under the HMIS portfolio ............................................................ 122

    Exhibit 34: Most useful tools, as cited by stakeholders ................................................................. 125

    Exhibit 35: Respondents who have participated in a technical working group or community of practice that MEASURE Evaluation has led (Question 3 – long survey) .............................. 128

    Exhibit 36: Respondents’ rating of effectiveness of technical working group or community of practice they participate in (Q4 – long survey) ..................................................................... 128

    Exhibit 37: Degree to which respondents felt participating in the technical working groups/communities of practice benefited them (Q5 – long survey) ..................................... 129

    Exhibit 38: Number of respondents participating in a technical working group or community of practice that MEASURE Evaluation is involved in (Q6 – long survey) ................................. 130

    Exhibit 39: Rating of the assistance MEASURE Evaluation provided to technical working groups (Q7 – long survey) ............................................................................................................... 131

    Exhibit 40: Tools respondents have worked with (Q9 – long survey) (n=55) ............................... 132

    Exhibit 41: Respondent rating of utility of MEASURE Evaluation tools (Q10 – long survey) ..... 133

    Exhibit 42: Respondent awareness of MEASURE Evaluation tools that have not been produced despite the need, or tools that did not prove to be useful (Q11 and 12 – long survey) .......... 134

    Exhibit 43: Respondents’ reasons for buying into MEASURE Evaluation (Q16 – long survey) .... 134

    Exhibit 44: Degree to which respondents feel MEASURE Evaluation actions encourage sustainability of efforts after the end of Phase IV (Q22 – long survey) .................................. 135

    Exhibit 45: Internal and external stakeholder respondents who said their organization plans to use or collaborate with MEASURE Evaluation in the future (Q25 – long survey) .......................... 136

    Exhibit 46: U.S. Government Washington-based respondents’ satisfaction with MEASURE Evaluation based on their experiences with the project (Q3 – short survey) .......................... 136

    Exhibit 47: Internal stakeholder respondent knowledge of USAID-funded projects that work in evaluation and/or health information system (Q17 – long survey; Q10 – short survey) ........ 137

    Exhibit 48: Respondents’ identification of three top emerging needs in the next 2–5 years for the development of HIS in the country or region they work with/in (Q23 – long survey; Q8 – short survey) ......................................................................................................................... 138

    Exhibit 49: Respondents’ identification of top three emerging needs in the next 2–5 years for the development of evaluation capacity in the country or region they work with/in (Q24 – long survey; Q9 – short survey) .................................................................................................... 139

    Exhibit 50: U.S. Government (internal stakeholder) rating of MEASURE Evaluation performance on HIS strengthening, use of information, and conducting evaluation (Q19 – long survey; Q4 – short survey) ...................................................................................................................... 140


  • ACRONYMS

    CDC  U.S. Centers for Disease Control and Prevention

    DATIM  Data for Accountability Transparency and Impact

    DCHA  Bureau for Democracy, Conflict, and Humanitarian Assistance

    DHIS 2  District Health Information Software 2

    DPRS  Department of Planning, Research and Statistics

    DQA  Data quality assessment

    DREAMS  Determined, Resilient, Empowered, AIDS-free, Mentored, and Safe

    eLMIS/eSIGL  Electronic drug logistics management information system

    FMWASD  Federal Ministry of Women Affairs and Social Development

    FP/RH  Family planning and reproductive health

    GEMNet-Health  Global Evaluation and Monitoring Network for Health

    GH Pro  Global Health Program Cycle Improvement Project

    HIS  Health information systems

    HMIS  Health management information system

    M&E  Monitoring and evaluation

    MCHN  Maternal and child health and nutrition

    MEASURE  Monitoring and Evaluation to Assess and Use Results

    MER  Monitoring, evaluation, and reporting

    MOH  Ministry of Health

    NCDC  Nigerian Center for Disease Control

    NMEP  National Malaria Elimination Program

    OGAC  Office of the Global AIDS Coordinator

    OVC  Orphans and vulnerable children

    PEPFAR  U.S. President’s Emergency Plan for AIDS Relief

    PLACE  Priorities for Local AIDS Control Efforts

    PMI  President’s Malaria Initiative

    PNOEV  National OVC Program

    PRISM  Performance of Routine Information System Management

    RDQA  Routine data quality assessment

    RDQA+G  Gender-Integrated Routine Data Quality Assessment

    RHIS  Routine Health Information System

    SIGDEP 2  Management Tool for Electronic Patient Files

    SOAR  Supporting Operational AIDS Research

    TB  Tuberculosis

    UNICEF  United Nations Children’s Fund

    USAID  United States Agency for International Development

    USG  United States Government

    WHO  World Health Organization

    WSS  Water Supply and Sanitation


  • EXECUTIVE SUMMARY

    BACKGROUND

    The Monitoring and Evaluation to Assess and Use Results (MEASURE) Evaluation Phase IV project is the USAID Bureau for Global Health’s flagship mechanism for strengthening health information systems (HIS) in developing countries. MEASURE Evaluation supports U.S. Government (USG) offices in Washington, D.C., and works in more than 30 countries. At the global level, the project supports the development of data systems to meet USAID’s monitoring and evaluation (M&E) needs. At the country level, it provides evaluation implementation, capacity building, technical assistance, information sharing, and knowledge management to strengthen country HIS and improve host-country capacity to manage HIS.

    PURPOSE

    In August 2017, USAID/Washington contracted EnCompass LLC to conduct a midterm performance evaluation of MEASURE Evaluation to examine how effective the project has been in meeting key stakeholders’ needs. The evaluation’s purpose is to inform technical programming and project management activities for the remainder of the current cooperative agreement and support the design and scope of future global HIS procurements.

    EVALUATION QUESTIONS AND METHODS

    The evaluation responds to three overarching questions and associated sub-questions:

    1. Is the project meeting the needs of its stakeholders? This question refers to needs related to HIS, evaluation, and learning for internal and external stakeholders, and seeks to examine the project’s comparative advantages and disadvantages in responding to needs, how well the project fits in the current landscape of USAID M&E and HIS mechanisms, and project processes to facilitate the sustainability of quality, performance, and use of HIS and evaluation-related work.

    2. What are the benefits of a health sector–wide versus a health area–specific approach? This question examines the key facilitators and barriers the project (and USAID) faces to implement a health sector–wide approach to strengthening country HIS and evaluation. It also explores the extent to which the presence of a health sector–wide portfolio, in addition to health area–specific portfolios, facilitates or hinders the project’s effectiveness in improving HIS performance and evaluation capacity building.

    3. Are the tools developed useful? This question examines the tools most frequently used and/or adapted at the country level; factors of success (and barriers) in tool development,


  • deployment, adaptation, and dissemination; and which tools might need continuous investment for future adaptation.

    A five-person international team, coordinated from the EnCompass office in Rockville, Maryland, carried out the evaluation, along with three local consultants in Cote d’Ivoire, Mali, and Nigeria, between August 2017 and February 2018.

    The team used a concurrent, mixed-methods approach that allowed for depth and breadth in data collection and triangulation during analysis and interpretation. The team collected data from internal stakeholders (staff from USAID/Washington, USAID missions, and other USG agencies) and external stakeholders (staff from MEASURE Evaluation, partner organizations of the project, and country governments). In sum, the team interviewed or held group sessions with 117 internal and external stakeholders in country visits to Nigeria, Cote d’Ivoire, and Mali; conducted virtual or in-person interviews with 49 internal and external stakeholders in Washington and globally; conducted an online survey with 120 internal and external stakeholder respondents from Washington and around the world; and reviewed 104 project documents.

    FINDINGS AND CONCLUSIONS

    Evaluation findings and conclusions are grouped under the three main evaluation questions, with a fourth section dedicated to remaining needs for HIS strengthening and evaluation capacity building. Findings, grouped under each conclusion, are based on the triangulation of data from country visits, global-level interviews, the online survey questionnaire, and a document review.

    IS THE PROJECT MEETING THE NEEDS OF ITS STAKEHOLDERS?

    Conclusion 1: With the wide range of internal and external stakeholders’ needs, MEASURE Evaluation has been challenged to comprehensively and consistently meet all of the needs at the country and global levels.

    Finding 1: At the country level, internal and external stakeholders in the majority of countries covered in this evaluation perceive that MEASURE Evaluation is meeting their needs, but the project faces challenges with specific activities in a subset of countries.

    Finding 2: Washington-based internal stakeholders varied in their perspectives of the extent to which MEASURE Evaluation is meeting their needs. Some offices perceived that the project has met their needs as a responsive partner that provides high-quality technical assistance, but others were not satisfied with the degree of technical leadership or quality they have received.

    Finding 3: All stakeholder groups appreciate MEASURE Evaluation’s collaborative working style and processes, particularly when applied to technical working groups and other collaboration platforms.

    Conclusion 2: MEASURE Evaluation is playing a unique role in the USAID landscape in HIS strengthening, conducting health impact evaluations, and building evaluation capacity in health, despite differing views among USG stakeholders on the role of evaluation in the project.


  • Finding 4: Internal and external stakeholders perceive that MEASURE Evaluation has a niche in the USAID landscape for HIS strengthening, and appreciate the project’s work in evaluation capacity building.

    Finding 5: Internal and external stakeholders perceive that MEASURE Evaluation has a reputation for and expertise in conducting evaluations, and a niche in the current USAID M&E landscape for conducting health impact evaluations.

    Finding 6: Although USAID/Washington perceives that MEASURE Evaluation is meeting its needs for gender M&E and HIS-related services, USAID missions show limited demand for gender-related support to HIS strengthening beyond incorporating the sex disaggregation of data.

    Conclusion 3: MEASURE Evaluation’s approach to technical assistance and capacity building has facilitated ownership among country-level stakeholders. Operationalization of the learning agenda and its principles is still at an early stage, but this focus is important for paving the way for more sustainable outcomes.

    Finding 7: To ensure sustainability, MEASURE Evaluation has facilitated broad stakeholder engagement, institutionalization of technical assistance, and local maintenance of the HIS. However, sustainability is challenged by ongoing needs for country-level financial and human resources, which most stakeholders perceive as outside the project’s scope and mandate.

    WHAT ARE THE BENEFITS OF A HEALTH SECTOR–WIDE VERSUS A HEALTH AREA–SPECIFIC APPROACH?

    Conclusion 4: Almost all stakeholders see the benefits of a health sector–wide approach to strengthening HIS, and USAID and MEASURE Evaluation have been able to leverage this approach with health area–specific approaches to achieve stronger results. However, a health sector–wide approach has been inherently challenging for USAID to manage, given its health area–specific organization and funding structure.

    Finding 8: Internal and external stakeholders perceive that a health sector–wide approach to HIS strengthening is highly beneficial. Yet, MEASURE Evaluation and USAID also recognize that it presents a challenge in managing demands from multiple funding streams, including health area–specific funds.

    Finding 9: At the country level, internal and external stakeholders see the combination of health sector–wide and health area–specific approaches as beneficial for creating a single HIS to meet the needs of all programs and stakeholders. MEASURE Evaluation has effectively leveraged multiple funding streams to support a health sector–wide HIS.

    ARE THE TOOLS DEVELOPED USEFUL?

    Conclusion 5: Many tools MEASURE Evaluation has supported, developed, and/or adapted at the country level are well-appreciated and used by their specific target audiences, both with and without project support. However, there is potential to serve a wider audience and room for broader dissemination and application, both globally and at subnational levels.


  • Finding 10: Country-level internal and external stakeholders perceive that tools and resources supported by MEASURE Evaluation Phase IV are useful, particularly those for data quality and use and for assessing HIS.

    Finding 11: In the countries visited, internal and external stakeholders perceive that MEASURE Evaluation’s participatory process for tool development and adaptation, coupled with its capacity building, have contributed to tool uptake at the country level.

    Finding 12: Internal stakeholders perceive that tools and resources are not disseminated widely enough, although project data show positive trends for exposure to and uptake of the tools.

    WHAT ARE THE REMAINING NEEDS?

    Conclusion 6: Strengthening data quality and data use for decision making and facilitating interoperability of databases remain the most pressing needs to address in order to continue progress in strengthening HIS and evaluation capacity building.

    Finding 13: Internal stakeholders’ suggestions for MEASURE Evaluation’s focus in the remainder of Phase IV emphasize completing current work and strengthening management, while external stakeholders focus on harmonization and sustainability.

    Finding 14: Internal and external stakeholders identified their top HIS strengthening priorities as improving data quality, strengthening analysis and use through capacity building, and focusing on the interoperability of databases, especially at the local level.

    Finding 15: In evaluation capacity building, global and country-level internal and external stakeholders identified improved capacity to use data for decision making and improved quality of evaluations as top priorities.

    RECOMMENDATIONS

    These recommendations reflect evidence emerging from the evaluation and the evaluation team’s interpretation of the findings and conclusions. All recommendations stem from the broad set of triangulated data.

    1. Given the short remaining timeline and reduced bureau-wide funds for the remainder of Phase IV, MEASURE Evaluation and USAID should streamline communication and set clear priorities and expectations for what must and can be accomplished in the time remaining.

    2. MEASURE Evaluation should continue to collaborate effectively with internal and external stakeholders to strengthen HIS and evaluation capacity building to ensure that Phase IV work is completed on time and with high quality. The project should prioritize finalizing the work on the learning agenda related to HIS strengthening and evaluations, and prepare dissemination plans to ensure awareness and use of these results by the bureau, missions, and external stakeholders.

    3. MEASURE Evaluation should continue to build on its Phase IV achievements in tool and resource development and adaptation, and USAID and the project should work jointly to raise awareness and disseminate more widely across USAID missions, particularly for health


  • area–specific and gender integration tools. At the country level, the project should focus its efforts on facilitating tool dissemination to increase the likelihood of use at subnational levels.

    4. USAID and MEASURE Evaluation should work on increasing demand for the gender-related M&E tools and resources for HIS work, including health area–specific tools, and expand work to better integrate gender considerations into HIS and evaluation data use as the project’s work on data quality and data use for decision making continues to deepen.


Exhibit 1: MEASURE Evaluation stakeholders

INTRODUCTION The Monitoring and Evaluation to Assess and Use Results (MEASURE) Evaluation Phase IV project is the flagship mechanism for strengthening health information systems (HIS) in developing countries at the USAID Bureau for Global Health. In mid-2017, USAID/Washington contracted EnCompass LLC to conduct a midterm performance evaluation of the project. A five-person international team, coordinated from the EnCompass headquarters in Rockville, Maryland, carried out the evaluation, along with three local consultants in Cote d'Ivoire, Mali, and Nigeria, between August 2017 and February 2018.

    BACKGROUND AND PROJECT CONTEXT MEASURE Evaluation seeks to empower institutions and people to identify, collect, analyze, and use technically sound information to improve global health and well-being. Its results framework (see Annex 1) includes (1) strengthened collection, analysis, and use of routine health data; (2) improved country-level capacity to manage HIS, resources, and staff; (3) methods, tools, and approaches improved and applied to address health information challenges and gaps; and (4) increased capacity for rigorous evaluations.

    MEASURE Evaluation is a Leader with Associates cooperative agreement with a 5-year period of performance (July 1, 2014, through June 28, 2019). It is implemented by the Carolina Population Center, University of North Carolina at Chapel Hill, in partnership with ICF International, John Snow, Inc., Management Sciences for Health, Palladium, and Tulane University School of Public Health and Tropical Medicine. It operates as an integrated, “bureau-wide” project, providing assistance across the Bureau for Global Health’s five technical offices1 and addressing all of the bureau’s health elements and focus areas. The project is managed from the Office of HIV/AIDS and has a management team representing all bureau offices.

    As Exhibit 1 illustrates, the project serves and works with many internal stakeholders—U.S. Government (USG) entities that invest resources in MEASURE Evaluation in exchange for project services—and external

1 Office of Health Systems, Office of HIV/AIDS, Office of Infectious Diseases, Office of Maternal and Child Health and Nutrition, and Office of Population and Reproductive Health.


Exhibit 2: Distribution of MEASURE Evaluation funding, years 1–3, by type and source

[Figure: pie chart. Core 27%; Special Initiative 24%; Field 49%; Other (value truncated in source)]

EVALUATION QUESTIONS USAID leadership laid out a set of evaluation questions in Request for Task Order Proposals NSOL-OAA-17-000079 (see Annex 1). The sub-questions, presented in Exhibit 3, reflect minor revisions based on input from the August 17, 2017, evaluation kick-off meeting and the September 7, 2017, evaluation design meeting attended by USAID staff, MEASURE Evaluation staff, and the EnCompass evaluation team.

Exhibit 3: Key evaluation questions

    1. Is the project meeting the needs of its stakeholders?

1a. To what extent is the project meeting the HIS, evaluation, and learning needs of key stakeholders?
1b. What do key stakeholders consider to be the project's comparative advantages and disadvantages in responding to their needs?
1c. To what extent does the project fit into the current landscape of USAID M&E and HIS mechanisms?
1d. What processes are in place to facilitate the sustainability of quality, performance, and use of HIS and evaluation-related work? What barriers to sustainability exist?

    2. What are the benefits of a health sector–wide versus a health area–specific approach to HIS strengthening and evaluation capacity building?

    2a. What are the key facilitators and barriers MEASURE Evaluation faces with respect to implementing a health sector–wide approach to strengthening country HIS and evaluation?

    2b. To what extent does the presence of a health sector–wide portfolio, in addition to health area–specific portfolios, facilitate or hinder the project’s effectiveness in strengthening the collection, analysis, and use of routine health data, improving country-level capacity to manage HIS resources and staff, and building evaluation capacity?

    3. Are the tools developed useful?

3a. Which tools are most frequently used and/or adapted at the country level, and how?
3b. What have been the success factors in terms of tool development, deployment, adaptation, and dissemination?
3c. What are the barriers to tool development, deployment, adaptation, and dissemination?
3d. What tools are likely to require continuous investment for future adaptation and use? Why?


EVALUATION DESIGN AND METHODOLOGY The EnCompass evaluation team (see Annex 2) used a concurrent, mixed-methods approach that allowed depth and breadth in data collection and triangulation during analysis and interpretation. The design included semi-structured interviews with internal and external stakeholders at the global level and USAID missions, and with a wide range of internal and external stakeholders during three country visits; an online survey targeting internal and external stakeholders at global and country levels; and extensive document review. Exhibit 4 gives an overview of samples for each method. See Annex 3 for the list of documents reviewed and Annex 4 for details about sampling, data collection, and limitations.

    Exhibit 4: Data collection methods by respondents reached

    SAMPLING AND DATA COLLECTION Data collection tools are presented in Annex 5 (qualitative tools) and Annex 6 (online questionnaire).

Country visits: The evaluation team conducted country visits to gain a 360-degree view of the project from all stakeholder perspectives (e.g., USAID, government, other USAID implementing partners, other donors, project staff). The USAID Management Team selected Cote d'Ivoire, Mali, and Nigeria for the visits, based on four criteria: diversity of portfolio, work across result areas, implementing partner in-country presence, and a USAID investment of more than $2 million. See Exhibit 24 in Annex 3 for more details on these criteria. During each 2-week visit, an international consultant and one local consultant conducted interviews, focus group discussions, observation, and debriefings. For brief reports with more details about the countries visited, see Annex 7 (Cote d'Ivoire), Annex 8 (Mali), and Annex 9 (Nigeria).

Semi-structured interviews: The sampling frame for global-level interviews was selected and prioritized in coordination with USAID, balancing the desire for wide representation with time and funding limitations. The sample included key USAID and other USG staff in Washington, USG implementing partners, and external partners. To expand exposure to USAID mission perspectives on other continents, the evaluation team conducted virtual interviews with USAID mission staff in Bangladesh and the Central America region. To gain an in-depth understanding of why missions have not bought into the project, the team reached out to several missions identified by USAID, but was able to speak with only one.

    Online survey: The sampling frame for the online survey (administered via SurveyMonkey) was developed in coordination with USAID, considering geographic reach and the desire for wide representation. Two versions of the online survey questionnaire were developed:

    A shorter, more open-ended version for USAID and other USG staff based in Washington. This was sent to 37 individuals (not interviewed), with a 38 percent response rate.

    A longer, more closed-ended version, in English and French, for USAID staff at missions with health portfolios (those buying into the project and those that had not) and members of global and country-level technical working groups or other collaboration platforms. The English version was distributed to 229 individuals, with a 45 percent response rate from USAID missions buying into the project (4 percent from those not buying in), and 40 percent for external stakeholders. The French version was distributed to 59 individuals, with a 27 percent response rate. The evaluation team also emailed the link to the main points of contact for each technical working group identified by the project, to share with members of their working groups; this resulted in 5 additional responses.

    DATA ANALYSIS Semi-structured interviews and focus group discussions: Transcripts of verbatim notes for all interviews and focus group discussions (country visits and phone/Skype/in-person interviews) were coded using an online qualitative data analysis program, Dedoose. Content analysis entailed a combination of deductive codes based on evaluation questions, followed by inductive coding drawing from the data.

    Online survey data: English and French responses were combined and analyzed in Microsoft Excel and in Stata 14. Sample sizes were too small to disaggregate for statistical analysis, but results are presented in Annex 10 by internal and external stakeholders.

Triangulation: The core evaluation team discussed and interpreted the emerging findings together as a means of validating findings and identifying and testing rival explanations for key themes. Triangulation of data from country-level interviews, other interviews, the online survey, and the document review led to mutually reinforcing findings, as well as divergent and sometimes conflicting ones. For each point of disagreement, the evaluation team re-analyzed the relevant data to explore factors that may have contributed to the disagreement and came to a reconciliation when possible. In triangulating the evaluation data, it became apparent that there were distinct perspectives on the project's ability to meet stakeholders' needs, based on respondents' roles and interactions with the project, indicating that the different data collection processes captured the multitude of opinions and ideas.

    LIMITATIONS This evaluation has several limitations, some that affect the interpretation of results and some that affected the evaluation process.

Long project history and broad project scope: MEASURE Evaluation is in its fourth 5-year implementation phase, with an extensive history and strong name recognition among stakeholders. Some stakeholders were unable to distinguish among the phases, attribute results appropriately to Phase IV, or speak knowledgeably about the full range of current activities. The evaluation team tried to address this issue by focusing questions on events and activities during Phase IV and by triangulating data across all sources.

    Small number of country visits: Resource and time limitations allowed only three country visits (of the more than 30 countries where the project works), and all on one continent. Nonetheless, the visits provided a balance of information from the range of stakeholder perspectives. Although data from the online survey had the potential to provide information from a similar range (USAID, government entities, other partners), it was not possible to control which stakeholders responded or to ensure the same balance of perspectives from each country. The sample for USAID missions across the evaluation methods covered 34 percent of missions buying into the project, but represents 73 percent of field support funding and 58 percent of missions using HIV funds for MEASURE Evaluation activities.

    Representation of USAID missions not buying in to the project in the sample: Data were limited from missions that had not bought into the project. It is important to understand how missions choose to use a mechanism such as MEASURE Evaluation. Other missions not buying in and not responding could have opinions that differ from the two that responded.

Online survey sampling and respondent bias: Response rates to the online survey were within common ranges for all groups, except among those not buying into the project. However, as with all passive survey administration, response rates are lower than for surveys administered live, and results are therefore subject to some respondent bias.

    Unintended bias toward HIS versus evaluation-related findings: The countries selected for visits offered less data on the project’s evaluation portfolio than the project’s HIS strengthening work, as none of these had significant evaluation portfolios. Virtual interviews were only able to include one mission with a large evaluation portfolio.

    Time and financial constraints: This project is large, both in geography and content areas. A longer evaluation time frame would have allowed greater reflection on the design and the possibility to reach a larger pool of respondents and gain an even broader picture of MEASURE Evaluation’s ability to meet stakeholders’ needs. The evaluation budget did not always match USAID’s desires for sample size for interviews.

    Team composition changes: Due to unforeseen circumstances, the original team lead had to depart the evaluation at the design stage. Replacement team members brought both evaluation and HIS strengthening expertise, but this shift was challenging in the short time frame.


Exhibit 5: Project performance ratings from USAID mission respondents who worked with MEASURE Evaluation, on a five-point scale (10 missions)

[Figure: stacked bar chart, rated very high to very low, for: conducting evaluations (n=10); collecting and using health information to make strategic decisions (n=10); managing health information systems (n=7)]

    FINDINGS The evaluation findings are grouped under the three main evaluation questions, with a fourth section dedicated to information related to future needs. Although there are no sub-headings for the evaluation sub-questions, all are addressed in this section.

    IS THE PROJECT MEETING THE NEEDS OF ITS STAKEHOLDERS? MEASURE Evaluation serves internal and external stakeholders at global and country levels. Its ability to meet this wide range of stakeholder needs has been mixed.

At the country level, internal and external stakeholders in the majority of countries covered in this evaluation perceive that MEASURE Evaluation is meeting their needs, but the project faces challenges with specific activities in a subset of countries.

    The project has been successful in many countries, but has had significant difficulties in at least part of its portfolio in others. Overall, data from the three country visits and interviews with other country missions indicate that the majority of internal stakeholders, and almost all external stakeholders, reported that the project was meeting their needs to a high degree.

    MEASURE works to fulfill all our needs. Until now, we have not needed to seek expertise outside of MEASURE. The technicians have always been at the highest level, and they are here with us (we are one team). This is extremely important. —Government, Mali

    MEASURE has been very supportive and a great partner. —USG, Guyana

Online survey data from an additional 10 missions indicate positive perceptions of the project's performance with regard to conducting evaluations, strengthening the collection and use of HIS data, and systems management. USAID mission respondents rated its performance as very high or above average (Exhibit 5).

    Internal and external stakeholders in the countries visited cited specific examples of ways the project has led or contributed to technical successes (Exhibit 6).

When mission-level USG respondents to the online survey were asked if they planned to use MEASURE Evaluation in the future, 50 percent (7) responded "yes," 50 percent (7) did not know, and none (0 percent) responded "no." Among those who did not know, several mentioned the uncertainty of future funding. Among external stakeholders, 81 percent (of 54) said yes, 15 percent said they did not know, and 4 percent said no.

Exhibit 6: MEASURE Evaluation's frequently cited technical successes from country visits

Mali:
• Based on the Performance of Routine Information System Management (PRISM) assessment, advocating with national actors and donors to adopt DHIS 2 as the HIS platform
• Providing technical support for DHIS 2 rollout, enabling early release of the 2016 Annual Statistics Report
• Adapting the RDQA tool, which enables identification of data gaps, visualization of trends, and real-time decision making
• Enabling availability of epidemiological data for decision making

Nigeria:
• Reactivating the National OVC Management Information System
• Leading the OVC Technical Working Group
• Conducting a PEPFAR OVC Outcome Monitoring Survey
• Conducting a secondary analysis of the Nigeria Malaria Indicator Survey
• Building Ministry of Health capacity through a malaria surveillance workshop
• Inaugurating the Health Data Governance Council
• Supporting the HMIS Technical Working Group

Cote d'Ivoire:
• Implementing DHIS 2
• Supporting rollout of the Electronic Drug Logistics Management Information System (eLMIS)
• Supporting the transition from the Electronic Medical Record for HIV Patients to the web-based SIGDEP 2
• Assessing health system performance using HMIS data
• Institutionalizing harmonized HMIS data quality assurance

    Despite primarily positive feedback from country-based stakeholders across data sources, there were instances of dissatisfaction. In Cote d’Ivoire, internal stakeholders noted that the project was sometimes slow in communication with USG implementing partners, and that it had a thinly stretched project team (see the country report in Annex 7). In Nigeria, several internal stakeholders were dissatisfied with specific (and sometimes key) activities, and some external stakeholders noted that staff turnover and insufficient numbers of in-country staff, as well as a shifting focus, have challenged the project in building relationships and ensuring continuity of work. Government partners in Nigeria stated that they felt the project was not transparent enough about funding priorities and was lacking in overall communication (see the country report in Annex 9).

    MEASURE should just be open and transparent. It is like having 10 things and they show us two, and when we ask for the remaining eight, they tell us their donor said this or that. —Government, Nigeria

Factors influencing whether stakeholder needs were met were consistent across the three countries visited. Where facilitating factors existed, MEASURE Evaluation tended to be successful in meeting stakeholder needs in that country. Where these factors did not exist, the project faced challenges. One critical factor for success is strong communication between various stakeholders and the project. For example, continuity of project staff in Mali and Cote d'Ivoire allowed for building relationships with government and other USG partners, as well as the mission, and this facilitated significant collaboration that made MEASURE Evaluation more successful. In Nigeria, the gap in funding contributed to loss of local staff and a minimal on-the-ground presence, which challenged relationship building and led to communication breakdowns among the USAID mission, the project, and the government in many instances. Exhibit 7 summarizes these factors, indicating which are within the project's sphere of influence and which are within its sphere of control.5

Exhibit 7: Summary of most-reported success and inhibiting factors in three countries

Success factors (project sphere of control):
• Continuity of project staff
• Long history, strong reputation
• Expertise of project staff
• Strong collaboration and coordination with donors, government, USAID, and others
• Commitment to government ownership
• Ability to provide key resources for national implementation, while mobilizing partners to fill gaps at subnational level

Success factors (sphere of influence):
• Government leadership and ownership
• USAID mission interest in and support for HIS

Inhibiting factors (project sphere of control):
• Insufficient project staff and staff turnover
• Slow response time
• Lack of leadership to provide technical guidance
• Lack of clear communication on work plans and timelines with government and partners

Inhibiting factors (sphere of influence):
• Limited government leadership and ownership for HIS
• Inconsistent USAID mission interest and support in HIS
• Poor internet connectivity and availability in certain parts of the country
• Incomplete coverage of computer and related equipment at facility level for data entry and analysis
• Poor interoperability across government systems

    Certain inhibiting factors that internal and external stakeholders reported were both outside of the project’s sphere of influence and outside its sphere of control: changing indicators from the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) and other centrally funded initiatives; short USAID project timelines, limiting the ability to see long-term results; turnover of USAID staff and government counterparts; and project funding limited to the national level versus the subnational level. For example, in all three countries visited, MEASURE Evaluation funding was earmarked only for national-level activities. Though the project used its influence to affect subnational work through coordination and mobilization of efforts across partners, work at the subnational level was ultimately funded and run by other donors and other USAID projects, sometimes leading to inadequate results beyond the project’s control.

    Washington-based internal stakeholders varied in their perspectives of the extent to which MEASURE Evaluation is meeting their needs. Some offices perceived that the project has met their needs as a responsive partner that provides high-quality technical assistance, but others were not satisfied with the degree of technical leadership or quality they have received.

    5 The concept of spheres of control and influence is from Covey’s Seven Habits of Highly Effective People (1992) and is the basis of outcome mapping in the evaluation field. The sphere of control includes those things in a system that a project can change or determine. The sphere of influence includes activities over which a project can have some degree of impact, but does not exercise full control.


Exhibit 8: Ratings from USG respondents who worked with MEASURE Evaluation on the project's performance in building capacity, on a 5-point scale

[Figure: stacked bar chart, rated very high to very low, for: conducting evaluations (n=9); collecting and using health information to make strategic decisions (n=11); managing health information systems (n=9)]

    In the online survey and interviews, Washington-based internal stakeholders provided a range of opinions on MEASURE Evaluation’s performance, speaking to their own direct needs, as well as those they heard from mission staff who communicate with them. Respondents from the Office of HIV/AIDS, the office that manages the project, more frequently reported dissatisfaction with project management and technical leadership, and expressed frustration with certain country-level activities.

    [The USG] was not pleased with the outcome [of the evaluation work]: it was not up to the standard. Speed, efficiency, and flexibility were not there. It needed innovativeness and design. The project’s approach was very cookie-cutter: they pushed back on criticisms, identified limitations without solutions. —USG, HIV

Interviewees working on health area–specific work, such as those in malaria, infectious diseases, and population and reproductive health, tended to be more satisfied, although they have narrower scopes and interactions with the project. In particular, the project's work with DATIM, which accounts for 12 percent of its funding overall (see Exhibit 2), was viewed positively.

    While I emphasized the things that could be strengthened, we do think we are getting exceptionally good value for money [for DATIM], and are very happy with the delivery side of things. —USG, HIV

    Overall, … for malaria-specific activities, provided by [project consortium partners], they have high marks in technical assistance in the country and core level. —USG, non-HIV

    MEASURE got rave reviews for [the] Global Development Lab (Liberia)—which [supported] them with Ebola funds … It was not a lot of money, but MEASURE’s work was well regarded, and well appreciated. They had good people to do the work. Everywhere I go, they are appreciated. —USG, non-HIV

    As Exhibit 8 illustrates, 65 percent of Washington-based USG survey respondents6 rated the project’s performance as above average or very high in building HIS and evaluation capacity.

    Satisfaction with specific MEASURE Evaluation activities reported from the online survey also showed a range of perceptions. A majority of USG respondents were satisfied, although a few individuals (working in HIV) were very unsatisfied with certain performance aspects (Exhibit 9, with more details available in Exhibit 46 in Annex 10).

    6 Individuals completing the online survey questionnaire were different from those interviewed; there was no overlap.


Exhibit 9: USG respondents' satisfaction with MEASURE Evaluation on a 5-point scale

[Figure: stacked bar chart, rated very satisfied to very unsatisfied, for: providing technical leadership (n=13); coordinating with other stakeholders (n=12); coordinating with USAID (n=12); leading technical working groups (n=9); delivering tools and products in a timely manner (n=13); delivering useful tools and products (n=12)]

    In interviews, USG respondents cited project weaknesses such as unresponsiveness to USAID feedback, lack of innovation, cost and time inefficiencies, and, as one respondent put it, what felt like a “constant tug-of-war.” Some USAID/Washington interviewees felt the project’s work was “too academic and theoretical” and that it “has not been able to keep up with developments in health informatics and HIS,” particularly related to rapidly evolving data systems for PEPFAR, tuberculosis (TB), and malaria.

    USG interviewees and online survey respondents acknowledged that some of the project’s challenges are due to its expansive scope and the varying needs and expectations of its many stakeholders (see Exhibit 1).

    [MEASURE Evaluation’s] mandate/scope and activities seems to span such a wide spectrum of activities that it’s hard to know what they truly excel in anymore, and there is a strong perception that they may have cast their net too wide and therefore have lost their technical edge in any specific area. —USG, Washington

    Exhibit 10 summarizes Washington-based USG interviewee perceptions of the project’s strengths and weaknesses. As with country-level performance, some factors emerge as both a strength (when it is perceived to exist) and a weakness (when it is perceived as not existing).


Exhibit 10: USG/Washington interviewees' differing perceptions on strengths and weaknesses of MEASURE Evaluation

Cost/Financial Management
Perceived weaknesses: very costly; slow and cumbersome financial reporting processes
Perceived strengths: good value for money

Staffing
Perceived weaknesses: inadequate use of implementing partners' technical experts; staffing shortages, especially of technical experts; reliance on U.S. consultants (vs. local experts)
Perceived strengths: consortium of diverse implementing partners with recognized technical expertise; in-country teams staffed with technical experts

Processes
Perceived weaknesses: inadequately responsive to USAID needs; poor coordination with in-country implementing partners; reactive versus proactive with internal stakeholders; some instances of poor coordination among consortium partners
Perceived strengths: adequately responsive to USAID and OGAC guidance; good coordination with in-country USG implementing partners; proactive with external stakeholders; wide geographic coverage providing opportunities for South-to-South and project-wide learning

Technical Assistance
Perceived weaknesses: not a thought leader; not able to be innovative and relevant; overly academic, not practical; insufficient capacity for analysis; impractical impact evaluations
Perceived strengths: high-quality technical assistance (management and technical skills); tools tailored to different contexts; rigorous evaluations and evaluation capacity building

Vision
Perceived weaknesses: overly activity-oriented; not visibly delivering on the learning agenda
Perceived strengths: support to promote country-level vision of strong HIS through DHIS 2

    All stakeholder groups appreciate MEASURE Evaluation’s collaborative working style and processes, particularly when applied to technical working groups and other collaboration platforms.

    Data from the country visits, interviews, and the online survey indicate most global and country stakeholders’ strong appreciation for MEASURE Evaluation’s collaborative working style and processes. Country-level stakeholders cited the project’s process for collaboration and coordination as fostering buy-in, engagement, and ownership in HIS strengthening efforts. Washington-based USG online survey respondents rated the project highest for its coordination with other stakeholders, with 67 percent “satisfied” or “very satisfied” (see Exhibit 9).

The majority of USAID mission interview respondents cited a variety of ways the project is coordinating effectively. In Mali (see Annex 8), internal and external stakeholders both noted that the project has created an environment of collaboration, in which government and other partners feel consulted and feel they can solve problems together. One interviewee remarked that the project "leads from behind," providing technical expertise and direction in supporting the routine HIS and DHIS 2, while allowing space for capacity strengthening and national actors to take ownership. Stakeholders in Mali also appreciated that although the project's resources for the DHIS 2 rollout focused at the national level, it coordinated with USAID and the Ministry of Health to mobilize other partners to fill gaps at the subnational level. In Nigeria (see Annex 9), internal and external stakeholders noted that the project's work in their orphans and vulnerable children (OVC) portfolio included collaborative work with government implementing partners to support government in the reactivation of the National OVC Management System.

Exhibit 11: Country-level survey respondents' ratings on benefits of participating in groups that MEASURE Evaluation led, with weighted average on a 5-point scale (n=43)

[Figure: stacked bar chart, rated strongly agree to strongly disagree, for: enhanced my job performance; enabled me to locate useful knowledge and resources; enabled me to get advice from others on technical issues; enabled me to learn about similar work of others; enabled me to learn about strengthening evaluation; enabled me to learn about strengthening HIS]

Respondents in all three countries reported seeing themselves as owning the system and receiving the support they needed from the project, as one respondent articulated:

    MEASURE set up the [DHIS 2] steering committee and the technical committee. I have never seen a partner that has generated so much excitement—people have been so motivated. For example, on equipment issues, UNICEF did not hesitate to commit. The partners shared the responsibilities for the supply of computers, etc. There was great collaboration among the partners, despite their legendary rivalry! —Government, Mali

Stakeholders across countries and at the global level agreed that the project’s facilitation of technical working groups, communities of practice, and other groups was beneficial. In the online survey, 45 country-level members of a global or country-level technical working group or community of practice agreed or strongly agreed with statements about the benefits of participating in these groups (Exhibit 11). Non-government respondents appeared more likely than host-government respondents to feel strongly about how they benefited.

Online survey respondents (almost exclusively country-level respondents) participating in a technical working group or other collaborating platform in which MEASURE Evaluation also participated were very positive about the degree to which the project helped these groups (Exhibit 12), particularly in providing or creating useful tools or resources and helping the groups reach their goals.


• Exhibit 12: Survey respondents’ satisfaction ratings with MEASURE Evaluation’s helpfulness to the groups in which it participated, on a 3-point scale

[Chart: ratings of “very helpful,” “moderately helpful,” or “not involved” for: providing or creating useful tools or resources (n=34); facilitating learning (n=33); convening stakeholders (n=33); disseminating knowledge (n=33); achieving the group’s goals (n=34).]

    Internal and external stakeholders perceive that MEASURE Evaluation has a niche in the USAID landscape for HIS strengthening, and appreciate the project’s work in evaluation capacity building.

    Many internal stakeholders and almost all external stakeholders at the country and global levels recognized the important HIS strengthening role of a project such as MEASURE Evaluation. Internal and external stakeholders highlighted the project’s contributions in HIS governance and leadership, HIS and data management, data quality, information products and dissemination, and HIS performance strengthening. When stakeholders were prompted in interviews and the online survey, few could think of an alternative project to perform the same role in HIS strengthening. Among USG online survey respondents, only 40 percent were familiar with the Digital Health Initiative, which also works in HIS strengthening.

    If you take out MEASURE Evaluation, I’m not sure any other person, agency, or group that has addressed that gap at all, so whatever progress they have seen, a large proportion of that can be attributed either directly or indirectly [to] MEASURE. —Country-level partner, Nigeria

    The MEASURE project has great expertise. The USAID team felt comfortable with this team, due to their technical expertise … [Other USAID mechanisms] do not have the same level and depth of technical expertise. —USG, Washington

In country-level interviews, internal and external stakeholders listed a number of reasons for the project’s comparative advantage in leading HIS strengthening: its reputation and expertise; its ability to build on successes, such as institutionalization of data quality assessments (DQAs) into government systems and of M&E curricula in universities; its global presence; the number of committed technical staff; and its knowledge of the country context.

    A mapping of 79 USAID-supported M&E mechanisms shows that MEASURE Evaluation is one of the few projects with a specific mandate for M&E capacity building of local and regional partners, and the only one in the impact evaluation arena for the health sector. Many internal and external stakeholders praised the project’s evaluation courses.

    The work with the GEMNet-Health [Global Evaluation and Monitoring Network for Health] team has helped me design a stronger evaluation proposal for submission. I am able to confidently teach in a five-day workshop on M&E fundamentals, and another workshop on impact evaluation. —External stakeholder, global


• Exhibit 13: USAID missions’ reasons to work with MEASURE Evaluation (n=13)

[Chart: strong reputation in evaluation capacity building (n=9); familiarity with past services and performance (n=8); strong reputation in HIS strengthening (n=7); ease of buy-in into mechanism (n=5); no better alternative mechanism, project, or…]

Exhibit 14: Types of evaluations and related activities in MEASURE Evaluation Phase IV (n=53)

[Pie chart: performance evaluation (outcome, process), 25%; impact evaluation (counterfactual), 23%; MER outcome monitoring survey,* 23%; other studies, surveys, assessments, 30%.]

    We have had several trainings on M&E by MEASURE. … The training was very, very useful, not only for me but everybody that attended; both facilitators and participants from different states … Capacity in evaluation globally has been a strength of the project, related [to] individual capacities and capacity building and transfer. —Government, Nigeria

    USAID mission online survey respondents most often cited MEASURE Evaluation’s strong reputation in evaluation capacity building and HIS strengthening and its past performance as reasons they chose to work with the project (Exhibit 13).

    Internal and external stakeholders perceive that MEASURE Evaluation has a reputation for and expertise in conducting evaluations, and a niche in the current USAID M&E landscape for conducting health impact evaluations.

    Conducting rigorous evaluation is MEASURE Evaluation Result Area 4. The Phase IV evaluation portfolio includes 53 evaluations and related activities, 32 percent of which are complete, 45 percent in implementation, and 23 percent in the planning or design phase. Exhibit 14 breaks out these activities by type.

    * The Monitoring, Evaluation, and Reporting (MER) OVC Essential Survey Indicators, required under PEPFAR MER guidance, provide a snapshot of project outcomes at a point in time and allow assessment of changes in outcomes among OVC project beneficiaries over time to provide evidence to the U.S. Congress on the key outcomes of OVC programming. See https://www.measureevaluation.org/resources/publications/sr-17-140/

Respondents both praised and criticized the project for its evaluation work. The online survey showed positive views of project performance related to conducting evaluations and related studies (see Exhibit 5 and Exhibit 8), with this aspect rated highest. Information from the country visits referred to performance evaluations, which are not representative of the project’s evaluation portfolio. In Nigeria, the PEPFAR OVC Monitoring Outcome Survey was appreciated for providing the mission with a baseline to assess its implementing partners working on OVC (see Annex 9). The project was also recognized for facilitating use of results from a previous malaria indicator assessment to inform the design of its malaria surveillance workshop.

    USAID online survey respondents were asked to note their awareness of projects offering evaluation services or research that could serve missions or the Bureau for Global Health. Aside from MEASURE Evaluation, respondents were most familiar with Supporting Operational AIDS Research (SOAR), bilateral or regional M&E platforms, the Global Health Program Cycle Improvement Project (GH Pro), and the Digital Health Initiative (see Annex 10 for more detail). USAID interviewees in Washington and at missions noted MEASURE Evaluation’s unique expertise in impact evaluation and their demand for impact evaluations conducted by the project.

    Nobody ever wanted to do an impact evaluation; now all bilateral projects ask MEASURE Evaluation to do an impact evaluation. The impact evaluation of [project X] gave [USAID] a lot of food for thought [on] what changes are needed … All impact evaluations are done through MEASURE. —USG, Bangladesh

    [T]he strength of MEASURE is that they do impact evaluation and we don’t have a lot of mechanisms that do that … It is something we need; it is a niche area. —USG, Washington

    However, some USG online survey respondents (Washington and missions) noted that there have been issues with the project’s evaluation work.

    In working with MEASURE Evaluation on a large-scale evaluation, it was surprising that the team did not take the initiative to understand the M&E system and indicators already in place for the intervention being delivered. What was more disappointing is that even after repeatedly highlighting this, the team still did not and has not made any real effort to understand these data and utilize these collected data to interpret the evaluation. —USG, Washington

    Although USAID/Washington perceives that MEASURE Evaluation is meeting its needs for gender M&E and HIS-related services, USAID missions show limited demand for gender-related support to HIS strengthening beyond incorporating the sex disaggregation of data.

MEASURE Evaluation has made a concerted effort to give a “gender sweep” to all its work on tool development and to integrate gender considerations into assessments and analysis. Most USAID/Washington respondents who were aware of the project’s work in gender concurred that it was highly attentive to gender concerns in developing tools and resources and in other routine work.

    For the most part they have well integrated [gender] … where gender makes sense, with PLACE [Priorities for Local AIDS Control Efforts] or evaluation tools, monitoring, developing indicators, guidance on M&E … they have done that very well. —USG, Washington

However, there seems to be limited country-level demand for gender integration work. Only 5 of the 32 countries where the project works have specified gender activities in their MEASURE Evaluation portfolios, although 42 percent of core (n=12) and 30 percent of special initiatives (n=10) involve gender activities. In the online survey, 13 percent of the 56 respondents had worked with the Guidelines for Integrating Gender into an M&E Framework and Systems Assessment. When asked about the project’s gender integration work, interviewees in the country visits primarily mentioned sex disaggregation as the way HIS activities addressed gender issues.

    In the online survey, when asked to list their top three emerging HIS development needs, only 13 percent of 98 internal and external stakeholder respondents noted an increased use of sex-disaggregated data from HIS, and only 17 percent cited improving national institutions’ capacity to demand and use evidence from equity-focused and gender-responsive evaluations for planning. Some Washington-based internal stakeholders working in non-HIV health areas noted that they did not see a clear application of gender to their work, or see a demand for it.

To ensure sustainability, MEASURE Evaluation has facilitated broad stakeholder engagement, institutionalization of technical assistance, and local maintenance of the HIS. However, sustainability is challenged by ongoing needs for country-level financial and human resources, which most stakeholders perceive as outside the project’s scope and mandate.

In USAID’s Local Systems Framework, sustainability is defined as “the ability of a local system to produce … valued results and its ability to be both resilient and adaptive in the face of changing circumstances,”7 and the agency’s website leads with a quote from Administrator Mark Green: “The purpose of foreign assistance is to end the need for its existence.”8

MEASURE Evaluation specifies four main project principles for sustainability, in line with these values and definitions.9 As the sample quotes in Exhibit 15 illustrate, the project’s processes and approaches align well with principles 1, 2, and 3, although there are concerns about the ongoing availability of host-country human and financial resources to maintain the system (principle 4).

    Data from the three country visits indicate that the project has been effective in ensuring that the government is in the driver’s seat for country-level HIS strengthening and galvanizing groups of actors to strengthen and monitor HIS strengthening efforts. Respondents across all three countries pointed to participatory processes for tool development and adaptation, and to the cadre of project-trained government counterparts who feel technically competent to manage and troubleshoot information systems to ensure HIS sustainability.

7 See https://www.usaid.gov/sites/default/files/documents/1870/LocalSystemsFramework.pdf, p. 5.

8 See https://www.usaid.gov/

9 These principles emerged from a 2015 meeting specifically convened to define sustainability for the project. The definition is available in MEASURE Evaluation P4 Key Operational Definitions (provided to the evaluation team on October 9, 2017), building from USAID’s Vision for Health Systems Strengthening: 2015–2019 (https://www.usaid.gov/sites/default/files/documents/1864/HSS-Vision.pdf).



  • Exhibit 15: MEASURE Evaluation’s contributions to sustainability in three countries visited

    Red text denotes principles not fully addressed.

Principle 1. Regional and national leadership with broad stakeholder engagement: The country government is the architect of the health sector response and the designer of its HIS with support from partners.

    During the DHIS setup workshops … there were additional indicators, but we always found a consensus of what to include, what to leave out. MEASURE, in these discussions, remained neutral and left it to the government to decide. —Government, Mali

Principle 2. Institutionalization and routinization: Technical assistance should be demand-driven and embedded in complex national systems through routine and regular processes managed by national actors.

    The experience with MEASURE Evaluation was made through the setting up of electronic tools (SIGDEP 2, eSIGL) … put in place with the collaboration of MEASURE Evaluation, but the [Directorate of Informatics and Health Information] has taken the lead and this allows a good transition with the delegation of tasks. —Government, Cote d’Ivoire

Principle 3. Process and direction: Local stakeholders are proactively and regularly engaged in a process to ensure the maintenance and improvement of the HIS.

Putting [DHIS 2] in place is not easy. Now that the system is in place … the National Health Directorate and the Center for Planning and Statistics have taken over the system, and they are able to follow the system and respond to ministry requests for information. In terms of achievements … [in] the coordination meetings … the analysis of strengths and weaknesses allows the partners and the government to solve the problems, and so everyone is aware. —External partner, Mali

Principle 4. Resource mobilization and management: The country commits to financing its own systems as donor financing and support of system operations, maintenance, and development diminish.

    I told them the best way is for partners to do advocacy to government to start supporting their own project. … If they keep bringing free money, we think if we don’t do it, that someone else will be there to do it for us. So, the best thing for partners to do is to go back and do advocacy. —Government, Nigeria

    Data from other USAID missions and the online survey indicate that the project is contributing to sustainability in its work to develop a culture of data use, build national capacity to maintain the system, and encourage ownership of the system (Exhibit 16). These data also indicate that the work is not finished.

    The project’s learning agenda focuses heavily on understanding the dynamics of developing and operating a sustainable HIS. This is a work in progress.

    I have scoured USAID literature for priorities and strengthening of HIS. There is not much at all … One sustainable thing we are leaving is that map of the HIS world and I am hoping that going forward, USAID will begin to formally adopt that as the map of where HIS needs to go. —MEASURE Evaluation

    However, the project faces some countervailing factors in being responsive to internal stakeholder needs and fostering sustainability. There is often a tension between USAID’s need for quick work and the time required to build ownership and sustainability.

    I don’t think they are doing a good job [internally]. They aren’t paying attention to country ownership—just reacting to mission requests … They don’t define the problem or try to educate the client, to say, “If you want us to do this, there are other things that need to be done, or increase the time duration for this activity. That isn’t the way it works.” … there are limitations but also opportunities to engage their client. —USG, Washington


• Exhibit 16: USAID mission survey respondents’ perceptions of MEASURE Evaluation’s encouragement of sustainability

[Chart: ratings from “great extent” to “not at all” for: contributed to development of a culture of data use at decentralized levels of the health system (n=7); built adequate capacity of national counterparts to update the HIS as needs evolve (n=6); implemented activities in ways that encourage true national-level ownership and leadership (n=6); built adequate capacity of national counterparts to conduct rigorous evaluations (n=4); facilitated coordination and collaboration with other partners supporting HIS strengthening (n=7); contributed to development of a national-level culture of data use (n=10); co-developed ongoing financing of recurrent HIS system costs (n=11).]

    Internal and external stakeholder concerns about the sustainability of HIS strengthening efforts at the country level focused heavily on factors related to meeting stakeholder needs, cited in Finding 1. These factors are generally outside of MEASURE Evaluation’s sphere of control and its (technical and geographic) mandate to support the national level, although sometimes within its sphere of influence: inadequate in-country resources (human, material, and financial, particularly at the implementation level for recurrent costs and ongoing activities); retention of government staff trained by the project; continual changes to indicators from external bodies, requiring significant adaptations that make it hard to stabilize the system; and the short duration of USAID projects or timelines, which sacrifices ownership for speed.

    [I]n the PEPFAR world, the expectations of what would be collected and used has become quite large and I think that keeping up with those demands has been difficult. Particularly in PEPFAR countries, developing systems for data collection is difficult. … All things considered, MEASURE has done quite a bit to build capacity, but they have had to do it on a shoe string. —USG, Washington

    [S]ustainability should also be a top plan you should have so that it can continue. [USAID] programs are usually 2 to 3 years … it doesn’t give us time to plan for takeover and continue running with it. It always ends at the time we are trying to gather momentum to take over. —Government, Nigeria

WHAT ARE THE BENEFITS OF A HEALTH SECTOR–WIDE VERSUS A HEALTH AREA–SPECIFIC APPROACH TO HIS STRENGTHENING AND EVALUATION CAPACITY BUILDING?

MEASURE Evaluation has three funding streams (see Exhibit 2): core funds, negotiated with the USAID Management Team; field support funds, received from and negotiated with country missions; and special initiative funds, in which an activity is defined by emerging USAID priorities inside or outside the Bureau for Global Health. A portion of core funds are “bureau-wide” (health sector–wide) funds, generated through a negotiated contribution across Bureau for Global Health offices and used to support health system strengthening efforts in HIS and evaluation. Bureau-wide funds make up about one-third of the project’s core funds and 9 percent of its overall funding. Most country-level funds address evaluation or HIS strengthening across mul