
EDITORIAL

Putting Evidence into Practice

C. David Naylor, MD, DPhil

The modern physician swims in a river of clinical research evidence that is unprecedented in its depth, velocity, and turbulence. Perhaps as a result, many reports during the last two decades have documented puzzling interinstitutional and inter-regional variations in practice patterns (1–3). Some variation is acceptable in the absence of definitive evidence about best practices. But when process-of-care audits are undertaken to assess practice variations (4–6), clinical decisions do not always reflect even clear-cut research evidence.

As generators and disseminators of new knowledge, researchers and academic physicians might be expected to behave differently. It is indeed plausible that this sector of the medical profession would adopt new practices rapidly in response to evidence from randomized trials. However, in this issue of The American Journal of Medicine, Majumdar et al. put that belief to rest (7).

The authors have drawn on data from the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries (GUSTO)-1 study, which was conducted from 1990 to 1993 and involved 659 hospitals across North America (8). Of these hospitals, 22 also participated in the Survival and Ventricular Enlargement (SAVE) trial (9), which showed survival benefits from the use of an angiotensin-converting enzyme (ACE) inhibitor in patients with myocardial infarction. The primary SAVE report was published in 1992, but the results had been shared earlier with participating sites. Majumdar et al. hypothesized that GUSTO-1 participants with myocardial infarction who were hospitalized at the 22 SAVE sites would be more likely to receive ACE inhibitors at discharge than were those at the other 637 hospitals.

What then was the likelihood of ACE inhibitor use at discharge among patients at SAVE sites versus the rest? It was virtually identical whether considering all subjects (15% vs. 16%) or the subgroup of GUSTO-1 subjects with heart failure (30% for both sets of hospitals). Moreover, the authors controlled carefully for potential intersite differences with a fixed-effect hierarchical logistic regression that nested patients within their respective hospitals, before adjusting for all relevant patient-level variables.
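The idea of nesting patients within hospitals via fixed effects can be sketched in code. The following is an illustrative toy model only, not the authors' actual analysis: each hypothetical hospital gets its own intercept (a dummy variable) alongside a patient-level covariate, and the logistic model is fit by simple gradient ascent on synthetic data. The hospital count, sample sizes, and coefficient values are all invented for illustration.

```python
import math
import random

random.seed(0)

HOSPITALS = 3          # hypothetical number of sites
N_PER_HOSPITAL = 200   # hypothetical patients per site

# Synthetic data: outcome = ACE inhibitor prescribed at discharge (0/1),
# with one patient-level covariate (a heart-failure indicator).
rows = []
for h in range(HOSPITALS):
    base = [-1.5, -1.0, -2.0][h]          # hospital-specific intercept (invented)
    for _ in range(N_PER_HOSPITAL):
        hf = random.random() < 0.3        # heart-failure flag
        logit = base + (1.2 if hf else 0.0)
        p = 1 / (1 + math.exp(-logit))
        rows.append((h, 1.0 if hf else 0.0, 1 if random.random() < p else 0))

def fit(rows, n_hosp, lr=1.0, iters=4000):
    """Gradient ascent on the log-likelihood; returns (hospital intercepts, beta)."""
    alphas = [0.0] * n_hosp   # one fixed-effect intercept per hospital
    beta = 0.0                # patient-level coefficient
    n = len(rows)
    for _ in range(iters):
        g_alpha = [0.0] * n_hosp
        g_beta = 0.0
        for h, hf, y in rows:
            p = 1 / (1 + math.exp(-(alphas[h] + beta * hf)))
            g_alpha[h] += (y - p)          # score contribution for hospital h
            g_beta += (y - p) * hf         # score contribution for the covariate
        alphas = [a + lr * g / n for a, g in zip(alphas, g_alpha)]
        beta += lr * g_beta / n
    return alphas, beta

alphas, beta = fit(rows, HOSPITALS)
print("hospital intercepts:", [round(a, 2) for a in alphas])
print("heart-failure coefficient:", round(beta, 2))
```

The point of the nesting is visible in the fitted parameters: between-hospital differences are absorbed by the per-hospital intercepts, so the patient-level coefficient is estimated net of site effects, which is the spirit of the adjustment Majumdar et al. describe.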

What makes these findings dramatic is that large-scale trials such as SAVE are sometimes presumed to have positive “inoculation effects,” jump-starting the adoption of effective new treatments by introducing them to a large cohort of research-oriented practitioners in academic and community settings. Sadly, it now appears that clinicians who participate in a positive trial are as slow to adopt a highly beneficial treatment as are clinicians who do not participate.

The first consolation here is the lack of an external control group. One might surmise, or at least hope, that practitioners who did not participate in studies such as GUSTO-1 or SAVE were even slower to adopt ACE inhibitors. A second consolation is that the findings of Majumdar et al. are consistent with what we have learned about clinical decision making during the last three decades. At risk of caricature, one can sketch four phases in the evolution of conventional wisdom about the relation of clinicians to research evidence.

Phase 1 was the Era of Optimism. It presumed a model of passive diffusion (10). Evidence published in peer-reviewed journals would find its way into practice as clinicians filtered the medical literature and chose the most promising ideas for bedside application. Accordingly, academicians focused on training the next generation of physicians to be consumers of primary research studies, seeking, appraising, and applying evidence with machine-like efficiency.

Phase 2 was the Era of Innocence Lost and Regained. As one study after another showed a disjunction of evidence and action, there was widespread criticism of physicians for failing to keep abreast of the growing medical literature. There was also increasing recognition that it was not sufficient to recreate the physician as the über-consumer of clinical studies. Faith in professionalism, however, was restored through the practice guidelines movement based, in turn, on evidence synthesis and active dissemination (10,11). Meta-analyses, decision analyses, and practice guidelines would synthesize the best evidence available from multiple research studies and thereby define what a practitioner should do when confronted with a particular clinical situation. Endorsed by prestigious organizations, the resulting practice guidelines could be publicized widely and address the problem of information overload for clinicians.

Am J Med. 2002;113:161–163.

From the Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada. Requests for reprints should be addressed to C. David Naylor, MD, DPhil, University of Toronto, Medical Science Building, Room 2109, 1 Kings College Circle, Toronto, Ontario, Canada M5S 1A8, or [email protected].

©2002 by Excerpta Medica, Inc. All rights reserved. 0002-9343/02/$–see front matter. PII S0002-9343(02)01215-9

Phase 3, the Era of Industrialization, dawned once it became clear that practice guidelines were not consistently guiding practice. The new model was one of aggressive implementation rather than passive diffusion or even active dissemination (10). A veritable industry arose, focused on measuring and managing physician behavior. Although cost containment and risk management were potent drivers of the measurement-and-management movement, the industrialization of clinical quality also reflected the understandable preoccupation of the public, professional leaders, payers, and policymakers with substandard clinical decision making. Ultimately, in a brilliant countermovement, medical leaders co-opted the language of industrial quality gurus, emphasizing the need for change in management strategies that would inform and empower clinicians rather than deprofessionalize them (12).

Phase 4, the Era of Information Technology and Systems Engineering, is still with us. It presupposes that we can shape clinical decisions with information tools that will make it easier for clinicians and patients to assimilate the most relevant and recent evidence. However, this era is also paying attention to research evidence of a different sort: evidence about how to change physician behavior.

This evidence was usefully summarized in 1995 when Davis et al. (13) and Oxman et al. (14) conducted systematic reviews of all controlled studies of strategies designed to persuade physicians “to modify their practice performance by communicating clinical information.” Their overviews confirmed that one-off educational events or simple dissemination of educational materials had little effect on clinical decision making. Even audit and feedback had limited effect, unless undertaken on a continuous basis and coupled with feedback reminding clinicians of actions that should be taken in response to the audit results. Moreover, the evidence was compelling with respect to the sociology of behavior change. Information-based tools were more likely to have an effect when coupled with processes that recruited opinion leaders in the local practice environment, addressed barriers to change, built a consensus in the affected professional community, and rectified specific gaps in clinical knowledge. Tellingly, multifaceted interventions showed the strongest effects on processes or outcomes of care. These findings, reinforced by subsequent research reports (15,16), have galvanized the modern systems approach to getting evidence into practice, an approach in which sophisticated information technology is a necessary but not sufficient element.

Looking to the future, more research will be needed on how evidence intersects clinical practice. The classic overviews by Davis et al. (13) and Oxman et al. (14) depended in part on cross-study inferences, but those were nonrandomized comparisons with potential pitfalls. Factorial designs in studies of clinical behavior change have been uncommon, and it is seldom clear which elements in a multifactorial strategy are truly essential and effective. The potential role of patients as evidence vectors is intriguing but underexplored. Not least, it may be time for education researchers to study whether we should spend more time teaching medical students about health information technology and the sociology of change management, and less time inculcating them with the time-honored principles of critical appraisal.

In the interim, for clinicians depressed by this evolutionary chronicle, it may help to remember that our profession is by no means impervious to published evidence. Majumdar et al. (7) did find modest increases in the use of ACE inhibitors after publication of the SAVE study. Before publication, only 14% of GUSTO-1 participants received these drugs; after publication, the proportion rose to 18% (P <0.001). This is consistent with research suggesting that medical practices change slowly but steadily. Indeed, there are instances, such as the utilization of carotid endarterectomy in North America (17,18), where clinical research has had rapid and dramatic effects on practice patterns. If our colleagues in surgery can assimilate evidence rapidly, there is still hope for those of us who labor in the medical vineyards!
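Why a rise from 14% to 18% can be highly significant is easy to see with a back-of-envelope two-proportion z-test. The sample sizes below are hypothetical round numbers, not the actual GUSTO-1 denominators (those are in Majumdar et al.); the sketch only illustrates how a modest absolute difference yields a small P value when the cohorts are large.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical counts consistent with the reported 14% and 18%:
z = two_proportion_z(1400, 10000, 1800, 10000)
print(f"z = {z:.1f}")
```

Any |z| above about 3.29 corresponds to a two-sided P below 0.001, so with cohorts of this size the reported difference clears that threshold comfortably.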

REFERENCES

1. Wennberg J. Which rate is right? N Engl J Med. 1986;314:310–311.

2. Tu JV, Naylor CD, Kumar D, et al. Coronary artery bypass graft surgery in Ontario and New York State: which rate is right? Steering Committee of the Cardiac Care Network of Ontario. Ann Intern Med. 1997;126:13–19.

3. Di Salvo TT, Paul SD, Lloyd-Jones D, et al. Care of acute myocardial infarction by noninvasive and invasive cardiologists: procedure use, cost and outcome. J Am Coll Cardiol. 1996;27:262–269.

4. Naylor CD, Guyatt GH. Users’ guides to the medical literature. XI. How to use an article about a clinical utilization review. Evidence-Based Medicine Working Group. JAMA. 1996;275:1435–1439.

5. McAlister FA, Taylor L, Teo KK, et al. The treatment and prevention of coronary heart disease in Canada: do older patients receive efficacious therapies? The Clinical Quality Improvement Network (CQIN) Investigators. J Am Geriatr Soc. 1999;47:811–818.

6. Hemingway H, Crook AM, Feder G, et al. Underuse of coronary revascularization procedures in patients considered appropriate candidates for revascularization. N Engl J Med. 2001;344:645–654.

7. Majumdar SR, Chang W-C, Armstrong PW. Do the investigative sites that take part in a positive clinical trial translate that evidence into practice? Am J Med. 2002;113:140–145.

8. The GUSTO Investigators. An international randomized trial comparing four thrombolytic strategies for acute myocardial infarction. N Engl J Med. 1993;329:673–682.

9. Pfeffer MA, Braunwald E, Moye LA, et al. Effect of captopril on mortality and morbidity in patients with left ventricular dysfunction after myocardial infarction. N Engl J Med. 1992;327:669–677.

10. Lomas J. Retailing research: increasing the role of evidence in clinical services for childbirth. Milbank Q. 1993;71:439–475.

11. Naylor CD. Better care and better outcomes. The continuing challenge. JAMA. 1998;279:1392–1394.

12. Berwick DM. A primer on leading the improvement of systems. BMJ. 1996;312:619–622.

Putting Evidence into Practice/Naylor

162 August 1, 2002 THE AMERICAN JOURNAL OF MEDICINE • Volume 113


13. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance. A systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700–705.

14. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995;153:1423–1431.

15. Soumerai SB, McLaughlin TJ, Gurwitz JH, et al. Effect of local medical opinion leaders on quality of care for acute myocardial infarction: a randomized controlled trial. JAMA. 1998;279:1358–1363.

16. Bradley EH, Holmboe ES, Mattera JA, Roumanis SA, Radford MJ, Krumholz HM. A qualitative study of increasing beta-blocker use after myocardial infarction: why do some hospitals succeed? JAMA. 2001;285:2604–2611.

17. Simunovic M, To T, Johnston KW, Naylor CD. Trends and variations in the use of vascular surgery in Ontario. Can J Cardiol. 1996;12:249–253.

18. Tu JV, Hannan EL, Anderson GM, et al. The fall and rise of carotid endarterectomy in the United States and Canada. N Engl J Med. 1998;339:1441–1447.
