
Program Evaluation in Health Care

Vicky Stergiopoulos, MSc, MD, MHSc

Onil Bhattacharyya, MD, PhD

Purpose of Evaluation

The purposes of evaluation include accountability, informing decisions on resource allocation, and program and policy improvement.

Assessing relevance and performance

Rigorous evaluation can build a knowledge base of what works that is generalizable.

Treasury Board of Canada Secretariat. Policy on Evaluation. 2009

Types of Evaluation

- Evaluation of need
- Developmental evaluation
- Evaluation of implementation and process
- Evaluation of outcomes / effectiveness

The Realist Approach to Evaluation

“What works for whom and under what circumstances.”

“Realists do not conceive that programs work, rather it is the action of stakeholders that makes them work… the causal potential of an initiative takes the form of providing reasons and resources to enable program participants to change.”

Implications for involving stakeholders, including program recipients, program staff and funders, in program planning and evaluation

Pawson & Tilley, 1997

CATCH-ED: A Brief Case Management Intervention for Frequent ED Users

An adaptation of the Critical Time Intervention model in the Canadian context

Sponsored by the Toronto Central LHIN and the Toronto Mental Health and Addictions Acute Care Alliance

A multi-organizational intervention spanning:
- 6 General and 1 Specialty Hospitals
- 4 Community Mental Health and Addiction Agencies
- 4 Community Health Centers
- 1 Community Agency providing peer support

Evaluated at the Centre for Research on Inner City Health, St. Michael’s Hospital

CATCH-ED Integrated Care Framework

1. Proactive identification
- Proactive identification in the hospital (ED, inpatient)
- Proactive identification in the community

2. At-right-time contact and connection

3. Navigation and connection to service
- Low-barrier access transitional case management: specialized, mobile, responsive, with peer supports
- Social support and advocacy
- Connection to: alternatives to the ED when in crisis, primary and psychiatric care, MHA counseling, low-barrier individual/group-based services, and other determinants of health

4. Tailoring of care pathways and integration of care
- Coordination of care delivery: multi-disciplinary care team, integrated care plan, tailoring of care pathways
- Transition to longer-term supports as needed
- Low-barrier re-entry

5. Supporting structures and mechanisms
- Partnerships and protocols
- “When all else fails” processes
- Ongoing monitoring and evaluation
- Consistent, continuous communication

CATCH-ED Phases

Phase 1: Engagement and Goal-Setting (months 1-2)
- Meet in hospital whenever possible
- First contact within 24 hours; first meeting within 48 hours
- Rapport-building, engagement
- Rapid assessment of pressing needs, strengths, resources, and reasons for ED use
- Practical needs assistance
- Individualized, focused treatment/support plans
- 2-3 contacts per week; assertive outreach

Phase 2: Bridging to Community (months 2-4)
- Continued practical needs assistance
- Referrals to services
- Continued focus on only the most critical areas
- Strong emphasis on building and testing connections to longer-term supports
- Reduction in service intensity to 1-2 contacts per week

Phase 3: Transfer of Care (months 4-6)
- Transfer of care to new support network
- Focus on assessment of the strength and functioning of the support system
- Reduction in service intensity to <1 contact per week
- Once confident in the handoff, patients are discharged, with an open-door return policy

Steps in Evaluation – Step 1

Understanding the program and its components:
- What are the anticipated outcomes?
- What is the underlying program theory? Is it supported by an evidence base?
- What is the program's anticipated timeline of impact?

Steps in Evaluation- Step 2

Evaluation design: How does the complexity of the intervention affect the evaluation design?

Implementation of the evaluation – data collection and analysis.

Interpretation of findings, reporting, communicating to stakeholders.

Evaluation Design

Study design should follow study purpose or function:
- Developmental evaluation
- Implementation and process evaluation
- Outcome evaluation

Research oriented vs internal program oriented

“Not all forms of evaluation are helpful. Indeed many forms of evaluation are the enemy of social innovation.”

Patton, 2008

Different Contexts, Different Evaluation

Innovation context (developmental evaluation):
- The initiative is in development
- Evaluation is used to provide feedback on the creation of the initiative

Mature contexts:
- Formative: evaluation is used to help improve the initiative
- Implementation/process: evaluation examines whether the initiative is implemented as intended and/or meeting targets
- Outcome: evaluation is used to assess the impact of the initiative

Developmental Evaluation Niches

- Pre-formative
- Ongoing development of an existing model
- Adaptation to a new context
- Sudden change or crisis
- Major systems change

Developmental Evaluation Goals

- Framing the intervention
- Testing quick iterations
- Tracking developments
- Surfacing tough issues

Implementation and Process Evaluation

Evaluation involves checking the assumptions made while the program was being planned:
- The extent to which implementation has taken place
- The nature of the people being served
- The degree to which the program operates as expected
- Do people drop out of the program?

Examples of Process/Implementation Evaluation Questions

Are there inconsistencies between planned and actual implementation?

What is working /worked well in terms of program implementation?

What challenges and barriers have emerged as the programs have been implemented?

What factors have helped implementation?

Examples of Process/Implementation Evaluation Questions

What issues have arisen between stakeholder groups and how have they been resolved?

What do participants say is helpful and not helpful about the program?

What are the key factors in the program’s environment that are influencing program implementation? Structures, relationships, resources?

Getting Results

- Effective intervention, effective implementation: actual benefits
- Effective intervention, NOT effective implementation: inconsistent, not sustainable, poor outcomes
- NOT effective intervention, effective implementation: poor outcomes
- NOT effective intervention, NOT effective implementation: poor outcomes, sometimes harmful

Institute of Medicine, 2000; 2001; 2009

Outcome Evaluation

- What are the program results/effects?
- Is the program achieving its goals?
- Are program recipients performing well?
- What constitutes a successful outcome?

Selecting the Outcome Evaluation Design

Design options:
- Pre-experimental
- Quasi-experimental
- Experimental

Consider threats to validity:
- If a change occurs, can the program take credit? (internal validity)
- To whom can results apply? (external validity or generalizability)

Design Options I: Pre-experimental

- Single-group design (no control)
- Collect information at only one point in time and compare to the expected outcome without the program
- Collect information at two points in time (pre-post design)
- Less intrusive and expensive, less effort to complete
- More threats to internal validity
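To make the pre-post option concrete, here is a minimal analysis sketch in Python. The file and column names (program_outcomes.csv, ed_visits_pre, ed_visits_post) are illustrative assumptions, not part of any actual evaluation dataset.

# Minimal sketch of a single-group pre-post analysis.
# File name and columns (patient_id, ed_visits_pre, ed_visits_post) are assumptions for illustration.
import pandas as pd
from scipy import stats

df = pd.read_csv("program_outcomes.csv")  # hypothetical: one row per enrolled patient

# Paired comparison of ED visits before vs. after program enrolment
change = df["ed_visits_post"] - df["ed_visits_pre"]
t_stat, p_value = stats.ttest_rel(df["ed_visits_post"], df["ed_visits_pre"])

print(f"Mean change in ED visits: {change.mean():.2f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
# With no control group, regression to the mean and secular trends remain
# plausible alternative explanations (threats to internal validity).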

Design Options II: Quasi-experimental

Naturally occurring control conditions:
- Collecting information at additional times before and after the program
- Non-equivalent control group
- Observing other dependent variables
- Combining design types to increase internal validity

The main threat to internal validity is differences between the two groups (selection threat).
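As an illustration of how a non-equivalent control group design is commonly analysed, the sketch below fits a difference-in-differences model with statsmodels. The data layout and variable names (treated, post, ed_visits) are assumptions, and the parallel-trends requirement is noted in the comments.

# Difference-in-differences sketch for a non-equivalent control group design.
# File name and variables (treated, post, ed_visits) are assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long format: one row per person per period,
# treated = 1 for the program group, post = 1 for the post-program period.
df = pd.read_csv("quasi_experiment.csv")

# The treated:post interaction estimates the program effect, assuming the two
# groups would have followed parallel trends in the absence of the program.
model = smf.ols("ed_visits ~ treated * post", data=df).fit(cov_type="HC1")
print(model.summary())
# Selection threat: baseline differences between groups are only partly addressed;
# adding covariates or matching can strengthen the comparison.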

Design Options III: Experimental Designs

- Randomized controlled trials
- Objections to experiments:
  - “Don’t experiment on me!”
  - “We already know what’s best.”
  - “Experiments are just too much trouble.”
- When to conduct experiments:
  - When stakes are high
  - When there is controversy about program effects
  - When policy change is desired
  - When demand is high
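If an experiment is chosen, a practical early step is a sample-size calculation. The sketch below uses statsmodels with an assumed effect size of 0.3 standard deviations, which is purely illustrative and not drawn from CATCH-ED.

# Sample-size sketch for a two-arm randomized controlled trial.
# The effect size (Cohen's d = 0.3), alpha, and power are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.80)
print(f"Participants needed per arm: {n_per_arm:.0f}")
# Smaller expected effects, attrition, or clustering by site would all push the
# required sample size higher.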

Measurement and Data Collection

- What needs to be measured?
- What are the most appropriate indicators?
- How will we collect the data?
- What resources are required for data collection and analysis?

Data Sources for Evaluation

- Intended beneficiaries of the program: program participants, community indices
- Providers of services: program staff, program records
- Observers

Selecting Measures

Sources of data for evaluation: which sources should be used?

Good assessment procedures:
- Use multiple variables
- Use variables relevant to information needs
- Use valid and reliable measures
- Use measures that can detect change over time
- Use cost-effective measures

Quantitative and Qualitative Approaches

- Mixed methods provide richer information.
- Quantitative methods give breadth of understanding; qualitative methods provide depth of understanding.
- Together, the different methods help better explain whether, how, and why the intervention works in a given context.

Use of Qualitative Methods

Before a trial:
- To develop and refine the intervention
- To develop or select appropriate outcome measures
- To generate hypotheses for examination

During a trial:
- To examine whether the intervention was delivered as intended
- To identify key intervention ingredients
- To explore patients’ and providers’ experience of the intervention

After a trial:
- To explore reasons for findings
- To explain variations in effectiveness within the sample
- To examine the appropriateness of the underlying theory

Lewin et al, 2009

CATCH-ED Evaluation

- Before the trial: developmental evaluation
- During the trial: evaluation of process and implementation
  - Process measures
  - Narrative interviews and focus groups
  - Direct observation
- Outcome evaluation: TBD

Implementation Evaluation Findings

Barriers:
- Poor identification and referral processes
- Incomplete understanding of drivers of ED use
- Decentralized structure
- Long wait times for other services
- Training and technical assistance

Facilitators:
- Partnership with the local health integration network
- Agency commitment
- ED presence of case managers
- Training and technical assistance

Evaluation Questions During the Trial

- Who are the clients being served by the program? Demographic and clinical characteristics: survey questionnaires, program records
- How do they experience continuity of care and the working relationship/alliance with their case manager? Narrative interviews, survey questionnaires
- Is the intervention being delivered as intended? Direct observation, interviews, monthly reports by case managers
- What is the effectiveness of the program in decreasing ED use and improving health outcomes? RCT

Using CATCH-ED Program Records

- How often were patients seen?
- How many times were patients seen?
- Were patients referred to appropriate services?
- Was there a warm handoff to other services?
- What was the appropriateness and comprehensiveness of services offered?
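Questions like these can be answered by summarizing the case-management contact log. The sketch below assumes a hypothetical file with patient_id, contact_date, and referral_completed columns, since the actual CATCH-ED record structure is not described here.

# Sketch of summarizing program records to answer fidelity questions.
# File name and columns (patient_id, contact_date, referral_completed) are assumptions.
import pandas as pd

contacts = pd.read_csv("catch_ed_contact_log.csv", parse_dates=["contact_date"])

# How many times, and how often, was each patient seen?
per_patient = contacts.groupby("patient_id").agg(
    total_contacts=("contact_date", "count"),
    first_contact=("contact_date", "min"),
    last_contact=("contact_date", "max"),
)
weeks_enrolled = (per_patient["last_contact"] - per_patient["first_contact"]).dt.days / 7
per_patient["contacts_per_week"] = per_patient["total_contacts"] / weeks_enrolled.clip(lower=1)

# Share of patients with at least one completed referral (proxy for a warm handoff)
referral_rate = contacts.groupby("patient_id")["referral_completed"].max().mean()

print(per_patient[["total_contacts", "contacts_per_week"]].describe())
print(f"Patients with a completed referral: {referral_rate:.0%}")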

Your questions?

What are your main challenges?

MCP Group Coaching Sessions

Interested teams can participate in monthly coaching activities

Collaborative approach – teams can share learnings to support one another

Role of coaches: facilitate team meetings, provide feedback, connect to resources

Teams can help each other in implementing, evaluating and building capacity for integration of care for medically complex patients

Coaching Process

Interested teams will be contacted by a coach to determine topics for subsequent calls

Based on interests shared by multiple teams, the topic/focus of group coaching sessions will be decided

BRIDGES will subsequently send an invitation outlining the topic and objectives to interested MCP teams for a teleconference coaching session

Coaches will be available to facilitate coaching sessions for the duration of the initiative

MCP Preconference: Coaching Workshop

Coaching Session Facilitator:

Patricia O’Brien, Manager, Quality Improvement Program, Department of Family & Community Medicine, University of Toronto

Coaching Workshop Theme: Coaching for sustainability and spread
- The role of coaching support in encouraging improvement and change

Workshop Format:
- Overview of the coaching model
- Breakout sessions with coaches modelling best practices

References

Posavac EJ & Carey RG. Program Evaluation: Methods and Case Studies (6th Ed.). Prentice Hall, New Jersey, 2003.

Patton, MQ. Utilization Focused Evaluation (4th Ed). Thousand Oaks, CA: Sage, 2008.

Damschroder LJ et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science 2009;4:50.

Pawson R. Evidence-based policy: A realist perspective. Sage, 2006.

Pawson R & Tilley N. An introduction to scientific realist evaluations. In E. Chelimsky & WR Shadish (Eds.), Evaluation for the 21st Century: A Handbook. Thousand Oaks, CA: Sage, 1997, pp. 405-418.

Renger R & Titcomb A. A three step approach to teaching logic models. American Journal of Evaluation, 2002;23:493-503.

Treasury Board of Canada Secretariat. Policy on Evaluation. 2009 http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id-12309.

Lewin S, Glenton C, Oxman AD. BMJ 2009;339:b3496.