Bridging the Gap between Research-Supported Interventions and Everyday Social Work Practice: A New Approach


Allen Rubin

Social Work, Volume 59, Number 3, July 2014. doi: 10.1093/sw/swu023. © 2014 National Association of Social Workers.

This article describes a rationale for a focus on case studies that would provide a database of single-group pre–post mean effect sizes that could be analyzed to identify which service provision characteristics are associated with more desirable outcomes when interventions supported by randomized clinical trials are adapted in everyday practice settings. In addition, meta-analyses are proposed that would provide benchmarks that agency practitioners could compare with their mean effect size to inform their decisions about whether to continue, modify, or replace existing efforts to adopt or adapt a specific research-supported treatment. Social workers should be at the forefront of the recommended studies in light of the profession's emphasis on applied research in real-world settings and the prominence of social work practitioners in such settings.

KEY WORDS: bridging the gap; effect size; research-supported treatments

Calls to improve the extent to which social work practice is informed by research have been made by leaders in our profession ever since its earliest days (Richmond, 1917). Despite these calls, a wide gap between research and social work practice has persisted and has outlasted various reduction efforts made by NASW and the Council on Social Work Education (Kirk & Reid, 2002; Rubin, 1999; Rubin & Rosenblatt, 1979; Rubin & Zimbalist, 1979, 1981). These efforts notwithstanding, the gap remains a problem today despite the significant progress being made through research validating the efficacy of various interventions for problems of concern to social workers. This article recommends a new approach aimed at bridging that gap, to be used by social work researchers in collaboration with practitioners in everyday practice settings.

There are numerous interventions whose efficacy has received strong research support in randomized clinical trials (RCTs), including programs like the Incredible Years, Parent–Child Interaction Therapy, and Triple-P for child maltreatment (Rubin, 2012b); motivational interviewing, cognitive–behavioral skills training, and Seeking Safety for substance use disorders (Springer & Rubin, 2009); dialectical behavioral therapy for borderline personality disorder (Koons et al., 2001; Linehan et al., 2006; Verheul et al., 2003); interpersonal therapy and cognitive–behavioral therapy for depression (Springer, Rubin, & Beevers, 2011); prolonged exposure therapy (PET), cognitive processing therapy (CPT), and eye movement desensitization and reprocessing (EMDR) for adults with post-traumatic stress and other anxiety-related disorders (Rubin, 2009); trauma-focused cognitive–behavioral therapy (TF-CBT) for traumatized children (Rubin, 2012a); and psychoeducational family groups, assertive community treatment, supported employment, and critical time intervention for people with schizophrenia or their families (Trawver & Rubin, 2010).

Despite the strong research support that these interventions have received in various RCTs and meta-analyses, other studies have shown disappointing results regarding the extent to which these interventions are implemented appropriately and with successful outcomes in everyday practice (Embry & Biglan, 2008; Weisz, Ugueto, Herren, Afienko, & Rutt, 2011).
Possible explanations for these disappointing results pertain to the differences between service provision realities in everyday practice settings and the ideal service provision environments of RCTs. For example, clients in RCTs are more likely to complete treatment protocols, less likely to experience comorbidity of target issues, and less likely to be members of historically underrepresented minority groups than are clients seen in everyday social work practice. By the same token, practitioners in RCTs are likely to have better training and supervision, smaller and less diverse caseloads, and greater commitment and adherence to treatment protocols than their real-world counterparts. Agency factors, such as higher rates of practitioner turnover and fewer funds for service provision, may also play a role in these discrepancies (Briere & Scott, 2013).

FIDELITY, ADAPTATION, AND COMMON ELEMENTS

In response to the disappointing results obtained when research-supported treatments (RSTs) are implemented in real-world practice settings, some advocate for modifying RST treatment protocols to better fit the practice settings in which they are implemented. There is disagreement, however, regarding the extent to which agencies should have leeway in this process. Some stress the need for intervention fidelity, reasoning that any changes to the research-supported protocol render the intervention no longer empirically supported. Although that reasoning is scientifically correct, it is also scientifically correct to conclude that even if efforts were made to follow the treatment manual meticulously, the external validity of the RCT results cannot be assumed, in light of the disparities between the ideal service provision conditions of RCTs and the actual, less ideal conditions in everyday practice, especially regarding differences in clientele and practitioner characteristics. Thus, no matter how pristine a study's research design might be, its results cannot be assumed to generalize to people and settings whose characteristics are unlike those of the research study.

Consequently, there is scientific merit to the notion that it is acceptable to modify the RST to improve its fit for the setting, adapting it to make it more acceptable, less complex, and less costly. Although such modification means that the empirical support for the RST can no longer be assumed, the differences between the research conditions and the practice conditions mean that such support could not have been assumed in the first place, even if an attempt were made to adopt the intervention with impeccable fidelity. Studies have shown that even when practitioners in real-world practice settings attempt to adopt RSTs with fidelity, the outcomes are often disappointing as a result of the aforementioned external validity issues (Embry & Biglan, 2008; Weisz et al., 2011).

However, not all who endorse the adaptation approach agree as to how much modification is acceptable. Some advocate minimal modifications, limiting them to the elements of the RST deemed adaptable and not modifying any core elements deemed essential and indispensable (Galinsky, Fraser, Day, & Rothman, 2013; Sundell, Ferrer-Wreder, & Fraser, 2012). In contrast, others advocate for a common-elements approach, involving identification of a broad array of elements commonly shared by various RSTs.
Taking this approach, Chorpita, Becker, and Daleiden (2007) identified 23 common elements in RCTs of interventions for depression in girls between ages 10 and 12 (p. 648). Also taking this approach, Bender and Bright (2011) identified eight common elements in their review of 430 RCTs for disruptive girls who exhibited signs of traumatic stress. As a result of these findings, they recommended that practitioners working with such girls be trained in those eight common elements instead of in any particular RST. Based on an extensive literature review along with clinical experience, Briere and Scott (2013) also recommended a common-elements approach to treating traumatized clients. They contended that the wide range of symptoms and unique needs of each client require that clinicians combine empirically supported elements into a broad therapeutic approach that can be tailored to each client's specific clinical needs. The key components of this approach include an empathically attuned therapeutic relationship, psychoeducation, stress reduction or affect regulation training, development of a narrative about the traumatic event, memory processing, and increasing self-awareness and self-acceptance.

IMPLICATIONS FOR BRIDGING THE GAP BETWEEN RESEARCH AND PRACTICE

The disappointing outcomes of RSTs when implemented in real-world practice settings, combined with disagreement about how best to improve those outcomes, imply the need for studies that describe the conditions under which RSTs are adopted or adapted, successfully and unsuccessfully, in everyday practice. This is a premier opportunity for social work researchers to be at the forefront of this line of inquiry, given the profession's focus on applied research to solve real-world problems in a variety of practice settings. Similarly, social work tends to be distinguished from the disciplines most associated with RCTs by its work with the types of clients and settings that are usually not included in RCT research. There is a great need for these studies in light of disparities between RCT outcomes and the outcomes achieved when RSTs are implemented in everyday practice settings. Social work researchers opting to pursue this line of inquiry are likely to be aware of the barriers to using rigorous experimental or quasi-experimental designs in everyday social work practice settings. Such barriers may include administrator and practitioner resistance to the use of random assignment or other ways of avoiding selectivity biases in assigning clients to treatment and comparison groups, the costs of rigorous experimental and quasi-experimental studies, and the difficulty of procuring the adequate funding needed to carry out such studies. That last barrier is particularly significant in light of the ever-decreasing allocation of federal research funding, coupled with the fierce competition for major research grants from researchers outside of social work, who are often perceived as more prestigious or scientific in the eyes of many grant reviewers.

In light of these barriers, it is important to recognize the value of studies without control groups in a line of inquiry seeking to describe the conditions under which RSTs may be successfully adopted or adapted.
Because there is already ample RCT evidence permitting causal inferences about the efficacy of RSTs, the next priority need not be experimental or quasi-experimental designs focused on ruling out threats to internal validity (and thus permitting causal inferences about intervention efficacy). Instead, the focus should be on identifying the conditions under which these RSTs are being implemented successfully in real-world settings. It follows that if the research question is not one of efficacy, but rather one of identifying these conditions, then experimental and quasi-experimental designs are not the only routes to findings that offer useful implications to inform practice.

One might understandably ask what is meant by successful implementation, or argue that distinguishing successful from unsuccessful implementations requires comparing RST outcomes with the outcomes of control or comparison groups. Experimental and quasi-experimental studies that could realistically be carried out to make such comparisons would be ideal, but are they necessary? In light of the aforementioned barriers to the feasibility of such studies, it does not seem likely that a sufficient number of them could be completed in the foreseeable future to provide an ample database for determining the conditions under which RSTs are and are not implemented successfully in various practice settings.

An alternative, therefore, would be not to define success exclusively in terms of counterfactual designs, but also to define it in more descriptive and correlational terms. For example, suppose that meta-analyses of an RST with very strong research support have found that it reduces recidivism from an average of 30 percent to 15 percent. Suppose further that an agency adapts that RST to better fit its clientele and finds that its previous agencywide recidivism rate of 40 percent decreases to 25 percent. The agency would not need a control group to judge whether it is satisfied with the adaptation of that RST. Agencies and practitioners frequently make such judgments sans control groups. Furthermore, by observing that their recidivism rate decreased by nearly the same proportion as that of the well-supported RST, they would have no reason to suppose that a different intervention approach would be needed to better fit their agency.
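To make that benchmark comparison concrete, here is a minimal sketch in Python using the hypothetical rates from the example above (the function name is invented for this illustration and is not from the article):

    def recidivism_drop(before, after):
        """Return the drop in percentage points and the relative (proportional) reduction."""
        return (before - after) * 100, (before - after) / before

    # Meta-analytic benchmark for the RST: 30 percent -> 15 percent recidivism.
    benchmark_points, benchmark_relative = recidivism_drop(0.30, 0.15)

    # The agency's adaptation: an agencywide rate of 40 percent -> 25 percent.
    agency_points, agency_relative = recidivism_drop(0.40, 0.25)

    print(f"Benchmark: {benchmark_points:.0f}-point drop ({benchmark_relative:.1%} relative)")
    print(f"Agency:    {agency_points:.0f}-point drop ({agency_relative:.1%} relative)")
    # Output: a 15-point drop in both cases (50.0% versus 37.5% in relative terms),
    # the kind of side-by-side reading that could reassure the agency that its
    # adaptation is performing roughly on par with the benchmark.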
Moreover, suppose additional studies are carried out in other settings dealing with the same or closely related target populations, and the findings indicate that similar types of adaptations or service provision characteristics also result in comparable reductions in recidivism. If other studies dealing with similar target populations try different adaptations of the same RST and fail to achieve desirable reductions in recidivism, it could be determined that the particular adaptation was ineffective with that population or in that particular setting. These studies could also use mixed-methods designs to generate hypotheses about the service provision characteristics that might promote successful and unsuccessful adoptions or adaptations of RSTs.

Qualitative interviews with administrators, practitioners, and clients can also help in the identification of such characteristics, as can the examination of agency documents and case records. Although none of the studies could directly imply that in any particular setting the RST was the cause of the particular outcome, the studies as a whole would provide a database for inductively identifying the conditions under which a particular RST for a particular target population is associated with desirable and undesirable outcomes.

ROLE OF EFFECT SIZES

Practitioners who want their practice decisions to be informed by evidence should be familiar with, or become familiar with, effect size calculations. These statistics play a key role in meta-analyses that assess which intervention approaches have the strongest impacts on client outcomes for specific target issues. A prime function of these statistics is to compare the strength of intervention effects across studies that use different outcome measures with different quantitative meanings and ranges. Readers not yet familiar with effect size statistics may want to examine a practitioner-oriented research text such as that by Rubin and Bellamy (2012) to learn more about the importance of effect sizes in intervention research.

Here is an example of how effect sizes can be used to compare the impact of two related but distinct interventions. Study 1 finds that intervention A significantly reduces the mean number of times parents inappropriately spank their children during a one-week period from four to two. Study 2 finds that intervention B significantly improves scores on a 100-item true–false test of parenting knowledge from a mean of 30 correct answers to a mean of 40 correct answers. Which intervention had the stronger impact: Study 2, with the increase of 10 correct answers, or Study 1, with the decrease of two spankings? A...
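One way to answer that question is with a standardized mean difference, which divides each mean change by the standard deviation of its measure. Below is a minimal sketch in Python, assuming hypothetical standard deviations (the excerpt above does not supply them) and an invented function name:

    def standardized_effect_size(mean_pre, mean_post, sd):
        """Mean change expressed in standard deviation units (a Cohen's d-style statistic)."""
        return abs(mean_post - mean_pre) / sd

    # Study 1: mean weekly spankings fall from 4 to 2; assume SD = 1.5.
    d_study_1 = standardized_effect_size(4.0, 2.0, sd=1.5)     # about 1.33

    # Study 2: parenting-knowledge scores rise from 30 to 40; assume SD = 12.
    d_study_2 = standardized_effect_size(30.0, 40.0, sd=12.0)  # about 0.83

    print(f"Study 1: d = {d_study_1:.2f}")
    print(f"Study 2: d = {d_study_2:.2f}")
    # With these assumed standard deviations, the smaller raw change (two fewer
    # spankings) is the larger standardized effect, which is exactly why effect
    # sizes, rather than raw score changes, are used to compare across measures.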
