Forum

Improving CME: Using Participant Satisfaction Measures to Specify Educational Methods

JASON J. OLIVIERI, MPH; RODERICK P. REGALA, PHD

Imagine having developed a continuing medical education (CME) initiative to educate physicians on updated guidelines regarding high cholesterol in adults. This initiative consisted of didactic presentations and case-based discussions offered in 5 major US cities, followed by a Web-based enduring component to distill key points of the live meeting, address frequently asked questions, and provide access to supportive tools such as relevant literature and patient education materials. Case-based, follow-up survey data indicated that CME participants were significantly more likely to practice in accordance with the National Heart Lung and Blood Institute Cholesterol Guideline Update1 than a representative group of nonparticipants. These findings are consistent with research showing that interactive, multifaceted, and sequenced CME activities are generally more effective than traditional approaches to CME.2 Encouraged by these results, you consider writing a manuscript to describe this initiative and further the collective understanding of effective CME methods. But how would you describe this activity? Was it “interactive” because of the case discussion? Was it “multifaceted and sequenced” because of the follow-up enduring material with supportive tools? What do these terms mean?

A recent Cochrane review of CME effectiveness defined interactive CME as “role play, case discussion, or the opportunity to practice skills”3(p.7) and multifaceted as “including two or more interventions.”3(p.6) These descriptions are consistent with earlier reviews.2 Although such definitions

Disclosures: The authors report none.

Mr. Olivieri: Manager, Educational Strategy and Outcomes Services, Imedex, LLC; Dr. Regala: Associate Director, Educational Development, Medical Affairs, Imedex, LLC.

Correspondence: Jason J. Olivieri, Imedex, LLC, 11675 Rainwater Drive, Suite 600, Alpharetta, GA 30009; e-mail: [email protected].

© 2013 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education. • Published online in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/chp.21176

were sufficient for meta-analysis, they lack precision when describing an individual CME activity. Clear definitions are needed to categorize the varying degrees of “interaction” and “multifacetedness,” which can then be directly measured to more accurately understand their impact on CME effectiveness. In the Cochrane review, researchers attempted to capture educational intensity, incorporating the degree of both interaction and multifacetedness, in order to determine their combined effect on performance (Level 5 outcome) or patient health (Level 6 outcome).3 Their findings were inconclusive, which was partially attributed to inadequate descriptions of the included studies. An absence of clear definitions or standardization of terminology has been similarly noted as a limitation in other meta-analyses.4,5 These studies point toward a need to better define educational techniques and methods in order to assess how they influence outcomes. Interaction may be an important or even necessary component of an educational intervention, but how much (and what type of) interaction? Is using an audience response system (ARS) in a didactic presentation enough to make an activity interactive? How many and what type of ARS questions facilitate interaction? Can we increase the magnitude of effectiveness by adding more interactive components?

Moore et al6 describe a satisfaction (Level 2) outcome as the degree to which participant expectations regarding CME setting and delivery are met. The authors do not identify specific expectations; however, Shewchuk et al7 developed a Level 2 outcomes tool based on expectation disconfirmation theory to measure the performance of a CME activity against 10 quality indicators. Included among these quality indicators were measures of interactivity and multifacetedness; each was assessed using a 7-point scale. Although not the only method to measure these factors, this tool offers the advantage of having undergone some validation testing. Use of a validated instrument helps to ensure that the data collected support the intent of the assessment, as opposed to capturing information overly subject to the varying interpretations and biases of respondents. Moreover, use of a validated instrument would allow for greater consistency in measurement across
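To make the measurement idea concrete, the sketch below shows how 7-point ratings for quality indicators might be summarized across participants. The indicator names, the sample data, and the simple mean aggregation are illustrative assumptions for this commentary; they are not the actual Shewchuk et al instrument or its scoring method.

```python
# Hypothetical sketch: summarizing 7-point Likert ratings for two of the
# quality indicators a Level 2 (satisfaction) tool might include.
from statistics import mean

def summarize_indicator(ratings):
    """Return (mean rating rounded to 2 decimals, response count)."""
    if not ratings:
        return None, 0
    return round(mean(ratings), 2), len(ratings)

# Each dict is one participant's ratings (1-7); keys are assumed names.
responses = [
    {"interactivity": 6, "multifacetedness": 5},
    {"interactivity": 7, "multifacetedness": 4},
    {"interactivity": 5, "multifacetedness": 6},
]

for indicator in ("interactivity", "multifacetedness"):
    avg, n = summarize_indicator([r[indicator] for r in responses])
    print(f"{indicator}: mean={avg} (n={n})")
```

A shared summary format like this is what would make pooling Level 2 data across CME providers straightforward.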

JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, 33(2):146–147, 2013


Lessons for Practice

• Factors associated with effective CME are not well defined. These factors can be better understood through satisfaction (Level 2) assessment.

• Utilization of a validated Level 2 assessment tool would allow for greater consistency in measurement of factors associated with effective CME.

CME providers, which permits the data pooling necessary to expedite our understanding of CME characteristics such as interactivity and multifacetedness. Presently, very few validated instruments to assess Level 2 outcomes exist.8

While the tool developed by Shewchuk et al increases our ability to speak more uniformly about interactivity and multifacetedness in CME, it focuses on participant satisfaction with these factors rather than measuring their intensity. Accordingly, there is still a need for validated, criteria-based scales to directly measure CME interactivity and multifacetedness. Until such scales are developed, the Shewchuk et al tool can serve as a valuable proxy measure of these factors.

References

1. Martin SS, Metkus TS, Horne A, et al. Waiting for the National Cholesterol Education Program Adult Treatment Panel IV guidelines, and in the meantime, some challenges and recommendations. Am J Cardiol. 2012;110(2):307–313.

2. Mazmanian PE, Davis DA. Continuing medical education and the physician as a learner: guide to the evidence. JAMA. 2002;288(9):1057–1060.

3. Forsetlund L, Bjørndal A, Rashidian A, et al. Continuing education meetings and workshops: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;2:CD003030.

4. Marinopoulos SS, Dorman T, Ratanawongsa N, et al. Effectiveness of Continuing Medical Education. Evidence Report/Technology Assessment No. 149 (prepared by the Johns Hopkins Evidence-Based Practice Center under Contract No. 290-02-0018). AHRQ Publication No. 07-E006. Rockville, MD: Agency for Healthcare Research and Quality; 2007.

5. Marinopoulos SS, Baumann MH, American College of Chest Physicians Health and Science Policy Committee. Methods and definition of terms: effectiveness of continuing medical education: American College of Chest Physicians Evidence-Based Educational Guidelines. Chest. 2009;135(3 Suppl):17S–28S. doi:10.1378/chest.08-2514.

6. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1–15.

7. Shewchuk RM, Schmidt HJ, Benarous A, et al. A standardized approach to assessing physician expectations and perceptions of continuing medical education. J Contin Educ Health Prof. 2007;27(3):173–182.

8. Tian J, Atkinson NL, Portnoy B, Gold RS. A systematic review of evaluation in formal continuing medical education. J Contin Educ Health Prof. 2007;27(1):16–27.
