Using the Community Guide to Move the Research Agenda Forward

Peter A. Briss, MD, MPH

February 03, 2005

Why Evidence-Based Public Health?

Resources are tight … and getting tighter

Public health is more visible—therefore our decisions are more carefully examined

Increasing pressure to be accountable

Gaps between scientists and decision-makers—priorities, language, and approaches

Increasing pressure to embrace “evidence” methods

Evidence and Public Health Decision Making

Good news: Major efforts underway to assess the body of evidence for a wide range of public health interventions

More and more high-quality reviews available

But capacity not what it might be

Strong evidence on the effects of many policies/programs aimed at improving public health

But… Awareness and Use Are Not What They Might Be

Bad news: Many public health professionals are unaware of this evidence

Some who are aware of it don’t use it

Many existing disease control programs use interventions with insufficient evidence, while better-documented alternatives are available

Failing to use an effective intervention is a missed opportunity that can adversely affect mission fulfillment and public support

The Community Guide Seeks to Answer Many Important Questions:

Under what circumstances is an intervention appropriate?

Does it work?

How well?

For whom?

What does it cost?

Does it provide value?

Are there other barriers I need to know about?

So What Does One Do with This Kind of Information?

Know what to expect

Know which programs are more likely to be successful

Support decisions about research

What programs need additional research to support decisions?

What research is needed (e.g., formative, effectiveness, replication, or dissemination)?

Advise program planners and evaluators

Essential Information, But Only One Piece Of The Puzzle

Community assessment

Priority setting

Objective setting

Intervention selection

Implementation

Evaluation

Repeating the cycle

What’s Been Accomplished So Far?

171 findings to date

More in the pipeline…

Book publication in Jan 2005: Oxford University Press

People are beginning to use the Community Guide as a starting place to access evidence-based prevention advice

Beginning to see effects on practice, policy, research

What’s Been Published Relevant to Cancer Prevention?

Primary Prevention

Tobacco Use (2000, 2001, in preparation)

Physical Activity (2001, 2002, in preparation)

Skin Cancer Prevention (2003, 2004)

Improving Vaccination Coverage (1999, 2000, in press)

Improving processes of health care

Promoting Informed Decision Making (2004)

Culturally competent health care (2003, in preparation)

Population-based interventions for the detection of oral cancer (2001, 2002)

What’s Been Published Relevant to Cancer Prevention? (cont’d)

More on the Way

Early phase: Alcohol, Worksite Health Promotion

Midcourse: HIV Sexual Behavior, Nutrition

Late course: Obesity, Promoting Cancer Screening

There Are Only Two Outcomes of a Guide Review

Move practice forward

Move research forward

We Know Less About Moving Research Forward Than Practice

Collaboration between the Community Guide and the network is an evolving work in progress

Need for ongoing dialogue: What does the network need?

formats? detail? additional information?

We’d also like to get feedback from you that might influence our more general reviews or communications

Still Building the Airplane . . . While We’re Flying

We have only about 4 years of experience in trying to use the Community Guide to move research forward in a variety of areas, but I’ll talk generally about some potential uses. I recommend you also read chapter 12 in the book.

Effort Required to Establish a Community Guide Recommendation

Effort Required to Implement a Community Guide Recommendation

Research Phases: Health Promotion Programs (After NCI And NHLBI)

1. Basic research

2. Hypothesis development

3. Pilot applied studies: very small scale

4. Prototype studies: experimental or quasi-experimental; small scale

5. Efficacy trials: experimental; numbers sufficient for behavioral evaluation

6. Treatment effectiveness trials: experimental or quasi-experimental; with outcomes; standardized delivery; large scale, real world

7. Implementation effectiveness trials: as above (#6); several types of delivery

8. Demonstration studies: as above (#6); unrestricted population(s)

This Research-to-Practice View Is Useful But Incomplete

The world is not linear-sequential

No place to put synthesis steps

More consistent with “programs that work” models than with synthesis: can’t say much about characteristics that contribute to success or failure

Based primarily on science push and little on user pull

No place to put research that might follow demonstration of effectiveness

Perform Research Appropriate to the Stage of Progress of the Field

Define the problem

Identify targets of intervention

Develop theory-based interventions and taxonomy

Evaluate effectiveness

Perform Research Appropriate to the Stage of Progress of the Field (cont’d)

Consider:

Targeted replication research that answers important new questions

Whether applicability can be broadened and, if so, what is required

Targeted dissemination research

Other “post-effectiveness” research questions

Research and support for improving fit

Cost and cost effectiveness

Identification and reduction of implementation barriers

What else?

Testing/production/dissemination of “how to” materials

How Can Reviews Help to Inform Additional Research?

Identify what is already known and where the remaining gaps are:

Object is to move a field downstream

Hope is to help identify “low hanging fruit” to better complement work that has already been done

Identify opportunities to kill multiple birds with one stone

For example, replication research might be paired with work on economics or identification and reduction of barriers

A Case Story

There are now many examples of implementation of Community Guide and follow-up evaluative or research efforts

Designing new studies to add to what’s already known is harder than it appears

A Case Story

In 2000, the TF recommended client reminders as one of several client-oriented interventions to improve coverage with vaccines that are recommended for everyone in a particular age group (i.e., universally-recommended vaccines)

What Was The Evidence?

31 intervention arms of reminders used alone produced a median improvement in coverage of 8 pct pts (range –7 to 31 pct pts)

Intervention characteristics, populations, settings were diverse

What Else Did the Task Force Say?

Should be applicable to most adults and children in the US for whom universally recommended vaccines are applicable and in whom improvements in coverage are needed

Suggested a 4-step process for implementing recommended interventions:

Assess current intervention activities and needs

Assess barriers to vaccination

Select interventions that address local barriers: “Using additional interventions when coverage is already high or using additional interventions that are poorly matched to local problems are unlikely to result in important benefits”

Monitor progress and effects: Adequate implementation?

Periodically reassess and adjust


Client Reminders for Adult Flu Shots: Methods

Site: 3 Health Plans in CT

Sample and design: ~9500 high-risk adults, 18–64 yrs; 55% response rate; “Reminder” vs. Small Media

Data collection: Mail survey

Challenges: Implementation

Little formal or informal a priori assessment of locally important barriers to vaccination due to time and other constraints

The fit of this intervention to locally important problems was largely unknown

Client Reminders for Adult Flu Shots: Crude Results

Client Reminders for Adult Flu Shots: Additional Information

Most (55%) of the people who did not get vaccinated this time had never been vaccinated

Might require additional strategies

Previously vaccinated people who were not vaccinated most commonly reported access barriers for which a reminder might not be expected to provide substantial help

If This Was An Effectiveness Study

Change in coverage below the median but well within the reported range

If This Was A Replication Study

Were these results importantly different from what was expected?

If so, why?

Population (barriers, coverage)

Setting (IPA)

Intervention (type, implementation, something else?)

What we learn from this addition is harder to interpret than I might have expected

If This Was A Dissemination Study

Identification of several important implementation barriers:

Ensuring fit

Implementing a reminder in the way it was defined in the guide

We learned less about how to address the barriers

Opportunities for Improvement

Improved communications between guideline developers, scientists, implementers, and decision makers

Better positioning of recommendations as part of a portfolio of resources to support decision making

Better positioning of intervention selection as part but not all of comprehensive program planning

Probably broaden the range of questions that are addressed by “replication research”

Introduction to the Matrix

This Network Will Have A Balanced Portfolio of 4 Main Areas Of Study

“Nearly sufficient”

Replication

Dissemination

Evaluation

“Nearly Sufficient Evidence”

One or two well-done studies could provide sufficient evidence for a recommendation

Evidence of Effectiveness | Quality of Execution | Design Suitability | Number of Studies | Consistent | Effect Size

1. Strong | Good | Greatest | ≥ 2 | Yes | Sufficient
1. Strong | Good | Greatest or Moderate | ≥ 5 | Yes | Sufficient
1. Strong | Good or Fair | Greatest | ≥ 5 | Yes | Sufficient
1. Strong | Meet criteria for sufficient evidence | -- | -- | -- | Large

2. Sufficient | Good | Greatest | 1 | -- | Sufficient
2. Sufficient | Good or Fair | Greatest or Moderate | ≥ 3 | Yes | Sufficient
2. Sufficient | Good or Fair | Greatest, Moderate, or Least | ≥ 5 | Yes | Sufficient

3. Insufficient | Insufficient design or execution | -- | Too few | No | Small
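Read as a decision rule, the combinations in the table above can be expressed compactly. The following is a minimal, illustrative Python sketch of that logic, not part of the Community Guide or the Task Force's own methods; the class, field names, string labels, and example values are assumptions made only for this illustration.

```python
# Illustrative sketch of the body-of-evidence combinations in the table above.
# All names and labels are assumptions for this example, not Community Guide code.

from dataclasses import dataclass

@dataclass
class BodyOfEvidence:
    execution: str      # "good" or "fair" (quality of execution)
    design: str         # "greatest", "moderate", or "least" design suitability
    n_studies: int      # number of qualifying studies
    consistent: bool    # consistent findings across the body of evidence
    effect_size: str    # "sufficient", "large", or "small"

def classify(e: BodyOfEvidence) -> str:
    """Return "Strong", "Sufficient", or "Insufficient" per the table above."""
    ok_effect = e.effect_size in ("sufficient", "large")

    # Combinations yielding sufficient evidence of effectiveness
    sufficient = ok_effect and (
        (e.execution == "good" and e.design == "greatest" and e.n_studies >= 1)
        or (e.consistent and e.design in ("greatest", "moderate") and e.n_studies >= 3)
        or (e.consistent and e.n_studies >= 5)
    )

    # Combinations yielding strong evidence of effectiveness
    strong = ok_effect and e.consistent and (
        (e.execution == "good" and e.design == "greatest" and e.n_studies >= 2)
        or (e.execution == "good" and e.design in ("greatest", "moderate") and e.n_studies >= 5)
        or (e.design == "greatest" and e.n_studies >= 5)
    )
    # Meeting the sufficient-evidence criteria with a large effect size also counts as strong
    if sufficient and e.effect_size == "large":
        strong = True

    if strong:
        return "Strong"
    if sufficient:
        return "Sufficient"
    # Otherwise: insufficient design or execution, too few studies,
    # inconsistent findings, or small effect size
    return "Insufficient"

# Hypothetical example: 6 good-quality studies of greatest design suitability,
# consistent results, sufficient effect size
print(classify(BodyOfEvidence("good", "greatest", 6, True, "sufficient")))  # -> Strong
```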

Translating to Recommendations

Examples: “Nearly Sufficient”

Small numbers of studies trending positive

Few existing studies

Coded yellow

Likely Have More “Nearly Sufficient” Examples Than Can Be Immediately Funded

Likely to need additional priority setting criteria, e.g.:

Commonly done by programs (DCPC survey)

Already in the PLANET

“Hot topics”

“High stakes”

Controversial

Replication Research

Replicate recommended interventions in populations or community settings in which they have not been previously evaluated:

Underserved populations

Health departments and other cancer control partners

Consider whether you also want to evaluate particular intervention subtypes

Examples (Replication Research)

Some fundamental questions have been addressed rarely

B+C: Effectiveness among never-screened

CRC: Effectiveness in promoting screening other than FOBT

We Could Use Some Feedback

What applicability information would be most useful to you?

Types of information: Population, Setting, Intervention

Level of detail

We’re willing to pull more info if needed

Other Ways To Set Priorities

Commonly done by programs (DCPC survey)

Not yet in the PLANET

Data set missing a characteristic of setting or population that is essential from the perspective of the B+C program

How To Effectively Disseminate

Research on how to effectively disseminate or implement Guide-recommended community interventions within health departments or with community groups or other cancer control partners

Examples Relevant To Dissemination

Research that identifies and addresses barriers to implementation

Identification and sharing (e.g., on the PLANET) of useful “tools”

Other related research, e.g., on cost or cost effectiveness

Very little economics thus far except for reminders

Evaluations Of Recommended Interventions Already Implemented

Evaluate fidelity to recommended interventions

Determine, as much as possible, if they are as effective as might be expected

Examples (already implemented)

Surveys of programs about what they say they’re currently doing (or not doing)

Audits of what they’re actually doing (or not doing)

Checks of whether programs match what was recommended

Identification and sharing (e.g., on the PLANET) of useful “tools” (i.e., “how to” advice)

Potential Priorities for Evaluation of Already-Implemented Interventions

Recommended interventions that are commonly practiced (e.g., based on the DCPC survey)

Interventions that are not commonly practiced for which identification of sharable tools might help

Identification of barriers and ways to overcome them

Gaps between research and program, and between program and research, keep us from making the most of investments in each of these

This project has great potential to help narrow the gaps

Discussion Questions

Barbara K. Rimer, DrPH

What sets of recommendations are most relevant to the CPCRN? For example, is anything with cancer relevance potential grist for the Network?

Do we want to focus on certain categories of evidence, e.g. sufficient, strong or insufficient?

Should we focus on particular kinds of insufficient evidence, e.g. where there are unresolved issues (e.g., minorities)?

Should the Network make national selection in topic areas and apply nationally/locally?

How do we go from the national-level recommendations to regional or local implementation? For example, if we are to disseminate or replicate programs, do we give first priority to CPCRN member programs? Should they have appeared in PLANET? Do we want to recommend strongly that members register for PLANET when requested to do so?

What is the role of the Network in replication research?

Some types of research are inherently more difficult to fund than others, e.g. replication is difficult to get funded through NIH. Are these areas for potential support through SIPs?