Page 1

Principles of Scientifically Credible Intervention Research: A Brief Review

Joel R. Levin, University of Arizona

Page 2

Focus of This Presentation

Considerations that researchers need to make regarding their methodologies, procedures, measures, and statistical analyses when investigating whether causal relationships exist between educational interventions and outcome measures.

Page 3

Review and Preview

The Punchline: “Scientifically Credible” Intervention Research

What is its hallmark in “conventional group” intervention research? Random assignment of participants to interventions.

And what about its hallmark(s) in single-case intervention research? Analogous methodological operations and strategies.

Stick around over the next few days and find out!

Page 4

Brief Review of Common Research Methodologies in Education

• Nonintervention Research
– Descriptive/observational/case studies (includes ethnographic research)
– Self-report/survey/questionnaire studies
– Correlational

• Intervention Research
– Nonexperimental (e.g., nonequivalent control group)
– Quasi-experimental
– Experimental

Note: Different degrees of plausibility/credibility about empirical and causal relationships are associated with these different methodologies.

Page 5

Selected Research Validities

• Internal validity ("randomization" = random assignment to treatments or interventions; needed to argue for cause-effect relationships)

• External validity (random selection = random sampling of participants; needed to generalize from samples to populations)

• Construct validity (the relationship between experimental operations and underlying psychological traits/processes)

• Implementation validity (treatment integrity and acceptability)

• Statistical-conclusion validity (assumptions associated with the statistical tests conducted)

Shadish, W. R., Cook, T. D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Page 6

An Important Distinction to Remember Throughout This Institute

Random Sampling/Selection of Participants vs.

Random Assignment/Randomization of Participants to Intervention Conditions

– The latter is needed to constitute an internally valid (scientifically credible) intervention study whereas the former is not.

Page 7

Three Critical Considerations in Prescribing Research-Based Educational Interventions

Research Criterion and Primary Function:

• Scientific Credibility (Internal Validity, Statistical-Conclusion Validity): Provides confidence that a study's outcomes are “truly” produced by the intervention

• Educational Creditability (Effect Size, Social Validity): Assesses the potential value of the intervention to society

• Evidence “Accretability” (Replicability/Generalizability, Construct Validity, External Validity): Establishes the dependability of the intervention and the scope of its applicability

Levin, J. R. (1994). Crafting educational intervention research that's both credible and creditable. Educational Psychology Review, 6, 231-243.

Levin, J. R. (2004). Random thoughts on the (in)credibility of educational-psychological intervention research. Educational Psychologist, 39, 173-184.

Page 8

Upcoming Attractions

Three actual intervention study examples, ranging from the sublime to the hilarious ‒ to hammer in research credibility concepts

Page 9

Intervention Study Example 1: On Strolls and Strollers

First consider a recent “stroll” study conducted by Aspinall, Mavros, Coyne, & Roe (2013)

Aspinall, P., Mavros, P., Coyne, R., & Roe, J. (2013). The urban brain: Analysing outdoor physical activity with mobile EEG. British Journal of Sports Medicine (Published Online First: 6 March 2013 doi:10.1136/bjsports-2012-091877)

Page 10

Aspinall et al. (2013) “Stroll” Study

Research Question: Do different stroll types (of the same duration and strenuousness) produce different types of emotional experience for the stroller?

As operationalized in this particular study: Does walking through urban green space reduce participants’ stress, arousal, and frustration, and increase meditation (as measured by their cortical EEGs), in comparison to walking through other types of environment?

Page 11

Stroll Study Participants and Procedure

12 University of Edinburgh students “travelled individually through [three zones in the city of Edinburgh] in the same sequence: 1, 2, and 3.”

• Zone 1 (a): “urban shopping street with many people, 19th century buildings, light traffic”
• Zone 2 (b): “path through…green space, bordering lawns, playing fields with trees”
• Zone 3 (c): “busy commercial district with heavy traffic, many pedestrians, high-noise levels”

The total walk took an average of 26 minutes, with participants’ EEG patterns recorded throughout.

Page 12

Page 13

Stroll Study Regression Analysis Results

• “Moving from zone 1 to 2 (from urban street to green space), [frustration, engagement or alertness and long-term excitement, as reflected by participants’ EEG patterns] became lower, whereas meditation became higher.”

• [Moving from zone 2 to zone 3 (from green space to the busy commercial district)], “engagement or alertness [as reflected by participants’ EEG patterns] is higher in zone 3.”

Page 14

Stroll Study Conclusions

• Authors: “Results confirm the research questions in showing systematic differences in processed EEG signals in different urban areas…”

• Newspaper writer: “Scientifically testing the theory that leafy settings calm urban brain waves.” “Results from a mobile EEG device found that a stroll amid the foliage can aid focus.” (Reynolds, 2013).

Reynolds, G. (2013, April 2). Brain fatigue goes green. New York Times, D5.

Page 15

On Strolls and Strollers: Example Intervention Study 2

Now contrast Aspinall et al.’s “stroll” study with a different kind of “stroller” study by Zeedyk (2008).

Zeedyk, M. S. (2008) What’s life in a baby buggy like?: The impact of buggy orientation on parent-infant interaction and infant stress. Report from the University of Dundee’s National Literacy Trust, Nov. 21, 2008 (Retrieved from http://www.literacytrust.org.uk/assets/0000/2531/Buggy_research.pdf)

Page 16

Background for the Study

Educational Issue: How to promote children’s language development?

Assumption: Parents talking to their young children is an important contributor to the children’s language development.

Page 17

And Talking About Talking…

Why might talking to baby be a societally creditable endeavor? Look to the findings in an important longitudinal study conducted by Hart & Risley (1995), summarized by Rosenberg (2013) and Rich (2013):

Hart, B., & Risley, T. R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Brookes.

Rich, M. (2013, May 30). In raising scores, 123 is easier than ABC. New York Times, A1, A3.

Rosenberg, T. (2013, Apr. 14). The power of talking to baby. New York Times Sunday Review, 8.

Page 18

As Rosenberg (2013) explains:

“[Hart and Risley studied] how parents of different socioeconomic backgrounds talked to their babies.

“Every month, the researchers visited the 42 families in the study and recorded an hour of parent-child interaction.

“They waited till the children were 9, and examined how they were doing in school.

“In the meantime, they transcribed and analyzed every word on the tapes – a process that took six years.”

Page 19

“The disparity [among SES groups in parent-to-child talk] was staggering.

“Children whose families were on welfare heard about 600 words per hour. Working-class children heard about 1,200 words per hour, and children from professional families heard 2,100 words.

“By age 3, a poor child would have heard 30 million fewer words in his home than a child from a professional family. And the disparity mattered: The greater the number of words children heard from their parents or caregivers before they were 3, the higher their IQ and the better they did in school. TV talk not only didn’t help, it was detrimental.”
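A quick back-of-the-envelope check helps make the "30 million words" figure concrete. The words-per-hour values below come from the quote above; the assumed exposure window (roughly 14 waking hours per day over the first three years) is an illustrative assumption of this review, not a figure reported by Hart and Risley.

```python
# Back-of-the-envelope check of the "30 million word" gap.
# Words-per-hour figures are from the Hart & Risley summary quoted above;
# the 14 waking hours/day exposure window is an illustrative assumption only.
WORDS_PER_HOUR = {"welfare": 600, "working_class": 1200, "professional": 2100}

HOURS_PER_DAY = 14      # assumed hours of potential exposure per day
DAYS = 3 * 365          # roughly the first three years of life

def cumulative_words(words_per_hour: int) -> int:
    """Total words heard over the assumed exposure window."""
    return words_per_hour * HOURS_PER_DAY * DAYS

gap = cumulative_words(WORDS_PER_HOUR["professional"]) - cumulative_words(WORDS_PER_HOUR["welfare"])
print(f"Estimated cumulative gap by age 3: {gap:,} words")
# With these assumptions the gap is about 23 million words; slightly longer
# windows (e.g., closer to age 4) push the estimate toward the cited 30 million.
```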

Page 20

In a 2008 study, researcher Meredith Rowe found that [in comparison to higher-SES caregivers, lower-SES caregivers], who tend to get advice from friends and family, were [less] aware that it was important to talk more to their babies [with caregiver knowledge assessed in terms of infants’ physical, social, linguistic, perceptual and cognitive development, along with principles related to early experience, social influences, atypical development and individual differences].

Rowe, M. L. (2008). Child-directed speech: Relation to socioeconomic status, knowledge of child development and child vocabulary skill. Journal of Child Language, 35, 185-205.

Page 21

Question

How do you make a baby buggy?

Page 22

Question Revisited

How do you make a baby buggy?

Page 23

Stroller Study

Educational Issue: How to promote children’s language development?

Assumption: Parents talking to their young children is an important contributor to the children’s language development.

Hypothesis: Children in forward-facing strollers receive less parent talk in comparison to those in toward-facing strollers

Page 24

Stroller Study Initial Observations

Naturalistic Data: Observations of 2700 families with the two types of stroller*

Results: Percentage of Talk Cases

Forward, 11%; Toward, 25%

* “You can observe a lot by just watching.” (Yogi Berra)
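For readers who want a feel for how far the 11% vs. 25% difference is from chance, here is a minimal two-proportion sketch. The split of the 2,700 observed families across stroller types is not reported in this excerpt, so the 50/50 split below is purely an illustrative assumption.

```python
import math

# Illustrative two-proportion comparison of the observed talk percentages.
# The equal split of the 2,700 observations across stroller types is assumed
# for illustration; the report excerpt does not give the actual split.
n_forward, n_toward = 1350, 1350
talk_forward = round(0.11 * n_forward)   # observed "talk" cases, forward-facing
talk_toward = round(0.25 * n_toward)     # observed "talk" cases, toward-facing

p_forward = talk_forward / n_forward
p_toward = talk_toward / n_toward
p_pooled = (talk_forward + talk_toward) / (n_forward + n_toward)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_forward + 1 / n_toward))
z = (p_toward - p_forward) / se
print(f"z = {z:.1f}")   # far beyond chance under these assumptions
# Caution: these are naturalistic (nonrandomized) observations, so a small
# p-value says nothing about causation; stroller choice may be confounded
# with family SES, outing type, and so on.
```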

Page 25

Stroller Study Follow-up Investigation

“Preliminary” Laboratory Study: 20 mothers using both types of strollers for 15 minutes, with 10 mothers randomly assigned to each stroller order (FT or TF).

Findings: Twice as much mother talk with the toward-facing strollers.

Bonus Finding: Babies in toward-facing strollers also laughed more.

Page 26

And What Would You Conclude on the Basis of the Evidence Presented in the Two Studies?

Page 27

Pre-Institute Qualifying Examination Question:

Which of the two studies do you find to be more scientifically credible, the stroll study or the stroller study?

More importantly, why?

Page 28

From Medical to Educational Intervention Research Incorporating Randomized Controlled Trials (RCTs)

• Medical Drug Research: Randomized Clinical Trials

• Public Health Intervention Research: Randomized Community/Cluster Trials

• Educational Intervention Research: Randomized Classroom Trials
– Stage 3 of Levin & O’Donnell’s (1999) Educational Intervention Research Model

Levin, J. R., & O'Donnell, A. M. (1999). What to do about educational research's credibility gaps? Issues in Education: Contributions from Educational Psychology, 5, 177-229.

Page 29

Page 30

Alternative Research Methodologies Gaining Scientific Respectability

• Nonequivalent control group designs based on comprehensive matching (Aickin, 2009; Shadish, 2011)
• Regression discontinuity analysis (Shadish, 2011)
• Prompted optional randomization trial? (Flory & Karlawish, 2012)
• Single-case time-series designs (Clay, 2010; Kratochwill & Levin, 2014)

Aickin, M. (2009, Jan. 1). A simulation study of the validity and efficiency of design-adaptive allocation to two groups in the regression situation. International Journal of Biostatistics, 5(1): Article 19. Retrieved from www.ncbi.nlm.nih.gov/pmc/articles/PMC2827888/

Clay, R. A. (2010, Sept.). Randomized clinical trials have their place, but critics argue that researchers would get better results if they also embraced other methodologies. Monitor on Psychology, 53-55.

Flory, J., & Karlawish, J. (2012). The prompted optional randomization trial: A new design for comparative effectiveness research. American Journal of Public Health, 102(12), e8-e10.

Kratochwill, T. R., & Levin, J. R. (Eds.). (2014). Single-case intervention research: Methodological and statistical advances. Washington, DC: American Psychological Association.

Shadish, W. R. (2011). Randomized controlled studies and alternative designs in outcome studies: Challenges and opportunities. Research on Social Work Practice, 21, 636-643.

Page 31

Whys and Wherefores of Single-Case Intervention Research?

“My only recommendation would be to spend some time describing the basic tenets of single case experimental design. What are the reasons why someone would choose to apply it? What are the unique contributions of this methodology? Basically, it would help us ‘sell’ these designs to potential group reviewers.”

Attendee of the 2011 IES-SponsoredSingle-Case Design-and-Analysis Institute

Page 32

Time Out For a Research “Case”

• Case studies
– observational/naturalistic
– no designed/planned intervention
– Freud, Piaget, Skinner; brain-injury cases; cases of phenomenal mnemonists
– recent efforts to change the “Rodney Dangerfield” perceptions of this mode of inquiry (e.g., Dattilio, Edwards, & Fishman, 2010)

• Case-based reasoning
– arguments and inferences derived from legal and medical cases
– now used in higher-education courses

• “Make a Case” studies
– the anecdotal evidence “research” currently on display in some of our professional organizations

Dattilio, F. M., Edwards, D. J. A., & Fishman, D. B. (2010). Case studies within a mixed methods paradigm: Toward a resolution of the alienation between researcher and practitioner in psychotherapy research. Psychotherapy Theory, Research, Practice, Training, 47, 427-441.

Page 33

Example (of What Nobody at This Institute Should Ever Do ‒ Except in Jest!) Single-Case Intervention Study #3:

Being Right or Being Happy: Pilot Study
Bruce Arroll, professor, Felicity Goodyear-Smith, professor, [...], and Timothy Kenealy, associate professor
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3898584/
BMJ. 2013; 347: f7398. Published online Dec 17, 2013. doi: 10.1136/bmj.f7398. PMCID: PMC3898584

Page 34

Introduction

“Three of the authors are general practitioners who see many patients and couples who lead unnecessarily stressful lives by wanting to be right rather than happy…This might be the first study to systematically assess whether it is better to be right than happy; a Medline search in May 2013 found no similar articles. Our null hypothesis was that it is better to be right than happy.”

Page 35

Participants, Setting, and Design

“To be eligible participants had to be part of a couple and willing to take part in the study. We carried out a parallel trial with one man and one woman in their own home. It was decided without consultation that the female participant would prefer to be right and the male, being somewhat passive, would prefer to be happy. The male was informed of the intervention while the female participant was not. The female participant was blind to the hypothesis being tested, other than being asked to record her quality of life.”

Page 36

Intervention

“The intervention was for the male to agree with his wife’s every opinion and request without complaint. Even if he believed the female participant was wrong, the male was to bow and scrape.”

Page 37

Main Outcome Measure

“We measured quality of life with a Likert score of 1 to 10 (10 being the best possible quality of life). Although our tool was unvalidated, it was thought to have face validity. It was justified on the grounds that brevity was essential, given that the intervention was administered in a potentially complex domestic environment.”

Page 38

Results

“Two participants were eligible and both (100%) were randomised. All participants received the treatment and were analysed for the primary outcome with an intention to treat analysis. Several baseline characteristics differed between the subjects (see Appendix).”

Page 39

Page 40

“The data safety monitoring committee stopped the study because of severe adverse outcomes after 12 days. By then the male participant found the female participant to be increasingly critical of everything he did. The situation had become intolerable by day 12. He sat on the end of their bed, made her a cup of tea, and said as much; explained the trial and then contacted the Data Safety Monitoring committee who terminated the trial immediately.”

Page 41

“There were three data points in the intervention group and two in the control group (the control participant had become hostile to recording her quality of life). The man’s quality of life score had fallen from 7 out of 10 at baseline to 3 at 12 days; the woman’s had increased slightly from 8 to 8.5 at six days (Figure).”

Page 42

Quality of Life, By Duration of Intervention

Page 43

“The difference between the two participants’ QOL scores over time is significantly different (P=0.004, calculated with a repeated measures generalised linear model). We should treat the results cautiously because we cannot discount causes other than treatment reducing the male participant’s score. It seems that being right, however, is a cause of happiness, and agreeing with what one disagrees with is a cause of unhappiness.”

Page 44

Author “Potential Confounding” Alert:

“We cannot discount that the difference in results might be caused by differences between the two treatment groups, which unfortunately we were unable to match by possible confounders such as sex. The harms were estimated as 100% as all participants who received the intervention reported a serious adverse event.”

Page 45

Discussion

“The results of this trial show that the availability of unbridled power adversely affects the quality of life of those on the receiving end.”

Page 46

Strengths and Weaknesses

“The study has some limitations. There was no trial registration, no ethics committee approval, no informed consent, no proper randomisation, no validated test instrument, and questionable statistical assessment. We used the eyeball technique for single patient trials which, as Sackett (2011) says, ‘more closely matches the way we think as clinicians.’”

Page 47

Generalisability

“Many people in the world live as couples, and we believe that it could be harmful for one partner to always have to agree with the other. However, more research is needed to see whether our results hold if it is the male who is always right.”

Page 48

Notes

“Contributors: All authors read drafts of the document and the final version. FGS came up with the title and overall concept, SM did the statistical analysis, and BA saw the need for a clinical trial and conducted the trial, wrote the first draft and organised the team. TK was on the data safety monitoring committee. Competing interest: None declared.”

References
1. Mathieu I. Emotional sobriety. Psychology Today 2011.
2. Friedman ML, Furberg CD, DeMets DL. Fundamentals of clinical trials. Springer-Science, 1998.
3. Sackett DL. Clinical trialist round 4. Why not do an N-of-1 RCT? Clin Trials 2011;8:350-2. [PubMed]

Page 49

A Case Study Addendum for the Curious Student (From Sackett, 2011)

“[I] began seeing patients in my new referral clinic in 1985. Even when my new diagnoses were confirmed and I was convinced that my patients were taking their new medicines, I frequently failed to relieve – or even improve – the disabling symptoms of their chronic illnesses; sometimes I even made them worse. Often there were no RCTs to guide their therapy; other times they presented as ‘non-responders’ to treatments that had been validated in RCTs.”

Page 50

“Worst of all, even when a patient improved during one of my uncontrolled ‘therapeutic trials’ of starting a new treatment or stopping an old one, I couldn’t tell whether their illness had simply improved on its own, whether their symptoms had ‘regressed toward the mean,’ whether it was simply a placebo effect, whether I was minimizing their on-going symptoms through hope, or whether they were minimizing them through charity. I simply had no way to objectively determine whether my uncontrolled ‘therapeutic trials’ were really helping individual patients in my practice.”

Page 51

“When I described my dilemma to a psychologist-statistician colleague, she pointed me to the psychology literature on ‘single-subject’ experimental designs where the units of randomization were times, not persons.*

*Barlow DH, Hersen M. Single Case Experimental Designs: Strategies for Studying Behavioural Change (2nd edn). Pergamon Press, New York, 1984.

Editor’s Wished-For Self-Serving Addition: “Of course, I subsequently attended the incredible IES-funded Single-Case Design and Analysis summer institute in Madison, Wisconsin, and the quality of my intervention research has never been the same.” [inferred conclusion left to the reader!]

Page 52

“Fascinated by this potential solution to my therapeutic dilemmas, I presented what I’d learned at a ‘Continuing Education Round’ in my other department at McMaster, Clinical Epidemiology and Biostatistics. A brilliant mentee of mine, Gordon Guyatt, shared my enthusiasm and we soon embarked on [a series of N-of-1 RCTs]. “Over our first 50 N-of-1 RCTs, we generated the guidelines in Table 1. Taking stock after 3 years, 39% of completed trials led to changing prior treatment plans, and 29% led to discontinuing prior ‘permanent’ treatments.”

Page 53

Table 1. Guidelines for performing an N-of-1 RCT

Is an N-of-1 RCT really indicated for your patient?
(1) Is the effectiveness of the treatment really in doubt?
(2) Will the treatment, if effective, be long-term?
(3) Is your patient eager to collaborate in designing and carrying out an N-of-1 RCT?

Is an N-of-1 RCT feasible in your patient?
(4) Does the treatment have a rapid onset?
(5) Does the treatment stop acting soon after it is discontinued?
(6) Is an optimal duration for a treatment period feasible?
(7) Can outcomes that are relevant and important to your patient be measured?
(8) Can sensible criteria for stopping the trial be established?
(9) Is an unblinded run-in period necessary?

Is this N-of-1 RCT feasible in your practice setting?
(10) Is there a pharmacist who could help you?
(11) Is help available for interpreting the data?

Is this N-of-1 RCT ethical?
(12) Is there free, informed consent?
(13) Can your patient withdraw from the trial without loss of care?
(14) Will the same degree of confidentiality apply as in other clinical situations?

Page 54

“Since we began, thousands of N-of-1 RCTs have been carried out by other clinicians and patients on treatments ranging from stimulants for children with ADHD to gabapentin for chronic neuropathic pain. Along the way, other and extended uses for them have been reported. For example, and because a series of N-of-1 RCTs of the same treatment for the same symptoms of the same condition sums to a conventional multiple crossover trial, some clinicians have – without the fuss and muss of a formal, full-scale RCT – gradually accumulated, over months or years, sufficient identical N-of-1 RCTs to be able to draw more generalizable conclusions about the efficacy of that treatment. Even senior members of our Society are encouraging us to incorporate N-of-1 RCTs into clinical practice.”
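The idea that a series of N-of-1 crossover trials "sums to" a conventional multiple-crossover trial is easy to sketch. The simulation below is this review's illustration, not the authors' procedure: each hypothetical patient alternates treatment and control periods, a per-patient effect is computed, and the per-patient effects are then pooled. All numbers are made up.

```python
import random
import statistics

random.seed(1)

def run_n_of_1(true_effect: float, periods: int = 6) -> float:
    """Simulate one N-of-1 RCT as paired treatment/control periods and
    return that patient's mean (treatment - control) symptom difference."""
    diffs = []
    for _ in range(periods // 2):
        control = random.gauss(5.0, 1.0)                   # control-period score
        treatment = random.gauss(5.0 + true_effect, 1.0)   # treatment-period score
        diffs.append(treatment - control)
    return statistics.mean(diffs)

# A small series of hypothetical patients, each with a different true response.
patient_true_effects = [1.2, 0.8, 0.0, 1.5, 0.6]
per_patient = [run_n_of_1(effect) for effect in patient_true_effects]

print("Per-patient estimated effects:", [round(d, 2) for d in per_patient])
print("Pooled effect across the series:", round(statistics.mean(per_patient), 2))
# Pooling the per-patient estimates is what turns a series of N-of-1 RCTs into
# something resembling a conventional multiple-crossover trial.
```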

Page 55

“And some teams of N-of-1 trialists have employed them to ask broader health care questions. For examples, Jeff Mahon’s team in London, Ontario reckoned that his series of 34 N-of-1 RCTs of theophylline didn’t do better than routine care for patients with irreversible chronic airflow limitation back in 1999 but in 2010, Paul Scuffham’s team in Brisbane, Australia reported that their series of 91 N-of-1 RCTs in osteoarthritis, neuropathic pain, and ADHD wound up saving money (partly through taking fewer expensive drugs but mostly through requesting fewer subsequent consultations). And I can’t resist reporting that Deborah Zucker and some Boston colleagues recently asked whether dialysis centers and N-of-1 RCTs aren’t ‘made for each other’.”

Page 56

“But for now, my suggestion is that you team up with your statistical colleague(s) and have a go by collaborating with individual patients in using this powerful strategy for determining the best treatments for their worst symptoms.

“You’d also better check with any friends you have at your REB before you begin, to see whether they regard clinicians collaborating with patients in order to identify their best treatments as mainstream high-quality care (and none of their business) or as an ethically risky undertaking requiring their prior permission.*

*“The clinician who is convinced that a certain treatment works will almost never find an ethicist in his path, whereas his colleague who wonders and doubts and wants to learn will stumble over piles of them.” (Medical ethics: should medicine turn the other cheek? Lancet 1990; 336: 846–7.)

Page 57

“By the way, an increasing number of journals are receptive to N-of-1 RCT submissions, either as case-reports or as more methodologically-oriented articles. In recognition of the importance of their publication, the CONSORT gang are developing guides for reporting them; stay tuned.”

Page 58

Single-Case Intervention Research: Preview of Things to Come in the Days Ahead

• Historically, also referred to as “single-subject,” “N = 1,” “interrupted time-series,” and currently “single-participant” research

• Includes planned researcher-administered interventions and comparison conditions

• Multiple measures across time are collected

• Applies both to individual participants and aggregates or “clusters” (e.g., pairs, small groups, classrooms, schools, communities)

• For purposes of assessing an intervention effect, each case serves as its own control. That is, the case’s series of outcome measures is taken prior to the intervention and compared with measurements taken during (and after) the intervention.

Page 59

Characteristics of Single-Case Intervention Research

• Adaptations of interrupted time-series designs that have the potential to provide rigorous evaluations of the effects resulting from experimental interventions

• Single-case designs involve repeated, systematic measurement of an outcome measure before, during, and after the implementation of an intervention.

• The ratio of outcome measures to the number of cases usually is large enough to distinguish single-case designs from other longitudinal designs (e.g., traditional pretest-posttest and general repeated-measures designs).

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from the What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Page 60

Characteristics of Single-Case Intervention Research

• The outcome measure is taken repeatedly within and across different levels of the researcher-manipulated intervention variable. These different levels are referred to as “phases” and minimally include a baseline (A) phase and an intervention (B) phase.

• A central goal of single-case intervention research is to determine whether there is a change in the outcome variable(s) following introduction of the intervention. Evidential support for an intervention effect is achieved through various forms of replication within the particular study.

Kratochwill et al. (2010)
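To make the within-case (phase-comparison) logic concrete before the design sessions, here is a minimal sketch of simulated AB single-case data and the simplest possible phase summary. It is illustrative only and is not one of the analysis procedures covered later in the Institute.

```python
import random
import statistics

random.seed(7)

# Simulated AB single-case data: repeated outcome measurements for one case,
# first in a baseline (A) phase and then in an intervention (B) phase.
baseline = [random.gauss(10, 1.5) for _ in range(8)]        # A phase
intervention = [random.gauss(14, 1.5) for _ in range(10)]   # B phase (true shift of +4)

# The simplest (and weakest) summary: compare the two phase means.
shift = statistics.mean(intervention) - statistics.mean(baseline)
print(f"Baseline mean:         {statistics.mean(baseline):.1f}")
print(f"Intervention mean:     {statistics.mean(intervention):.1f}")
print(f"Estimated level shift: {shift:.1f}")
# A bare A-B comparison like this cannot rule out history, maturation, or trend;
# credible single-case designs add within-study replication (e.g., ABAB
# reversals, multiple baselines) and, increasingly, randomization.
```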

Page 61

Intervention Research Evidence Standards

American Psychological Association’s Reporting Standards
American Psychologist, 2008, 63, pp. 839-851

The What Works Clearinghouse’s Standards
For traditional “group” intervention designs:
http://ies.ed.gov/ncee/wwc/references/idocviewer/doc.aspx?docid=19&tocid=1
For single-case intervention designs:
http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Page 62

Methodological Standards for Single-Case Research

Separate credibility standards should be applied for each type of intervention research, conventional group and single-case:

– Guidelines are in effect for conventional group RCTs (Moher, Schulz, & Altman, 2001)

– Standards have been developed for traditional single-case intervention designs based on individual cases (Kratochwill et al., 2010)

– The same needs to be done for both randomized single-case intervention designs and single-case designs in which the “cases” consist of aggregates or “clusters” (e.g., small groups, classrooms, schools, communities)

Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Annals of Internal Medicine, 134, 657–662.

Page 63

Recurring Themes Throughout This Institute

• Can/should single-case intervention designs be held to scientific-credibility standards similar to those applied to conventional “group” research?

• Can various randomization schemes be incorporated into single-case intervention designs to improve their scientific credibility? (A minimal illustrative sketch appears after these bullets.)

• If so, what are the resulting logistical and ethical consequences of charting a single-case RCT course?

• Cliffhanger Conclusion: As Canadian medical (and now reformed single-case) researcher David Sackett noted earlier: “Stay tuned” for more fun and excitement throughout the week!
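Returning to the randomization bullet above, here is a minimal illustrative sketch (this review's, not part of the Institute materials) of one commonly discussed scheme: randomly selecting the intervention start point in an AB design and evaluating the observed phase difference against the distribution generated by all admissible start points. All data are simulated.

```python
import random
import statistics

random.seed(3)

# Simulated outcome series for one case (20 sessions). The intervention truly
# begins at session index 11 and raises the level by about 3 points.
TRUE_START = 11
series = [random.gauss(10, 1.2) + (3 if t >= TRUE_START else 0) for t in range(20)]

def phase_mean_diff(data, start):
    """Mean of the B phase minus mean of the A phase, given a start point."""
    return statistics.mean(data[start:]) - statistics.mean(data[:start])

# Suppose the design specified that the intervention start point would be
# chosen at random from sessions 6 through 15; that set of admissible start
# points defines the randomization distribution.
possible_starts = range(6, 16)
observed = phase_mean_diff(series, TRUE_START)
distribution = [phase_mean_diff(series, s) for s in possible_starts]

# Randomization-test p-value: the proportion of admissible start points whose
# statistic is at least as large as the one for the actual start point.
p_value = sum(d >= observed for d in distribution) / len(distribution)
print(f"Observed phase difference: {observed:.2f}, randomization p = {p_value:.2f}")
```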