
Bridging the Gap between Research-Supported Interventions and Everyday Social Work Practice:

A New Approach

Allen Rubin

This article describes a rationale for a focus on case studies that would provide a database of single-group pre–post mean effect sizes that could be analyzed to identify which service provision characteristics are associated with more desirable outcomes when interventions supported by randomized clinical trials are adapted in everyday practice settings. In addition, meta-analyses are proposed that would provide benchmarks that agency practitioners could compare with their mean effect size to inform their decisions about whether to continue, modify, or replace existing efforts to adopt or adapt a specific research-supported treatment. Social workers should be at the forefront of the recommended studies in light of the profession's emphasis on applied research in real-world settings and the prominence of social work practitioners in such settings.

KEY WORDS: bridging the gap; effect size; research-supported treatments

Calls to improve the extent to which social work practice is informed by research have been made by leaders in our profession ever since its earliest days (Richmond, 1917). Despite these calls, a wide gap between research and social work practice has persisted and has outlasted various reduction efforts made by NASW and the Council on Social Work Education (Kirk & Reid, 2002; Rubin, 1999; Rubin & Rosenblatt, 1979; Rubin & Zimbalist, 1979, 1981). These efforts notwithstanding, the gap remains a problem today despite the significant progress being made through research validating the efficacy of various interventions for problems of concern to social workers. This article recommends a new approach aimed at bridging that gap to be used by social work researchers in collaboration with practitioners in everyday practice settings.

There are numerous interventions whose efficacy has received strong research support in randomized clinical trials (RCTs), including programs like the Incredible Years, Parent–Child Interaction Therapy, and Triple P for child maltreatment (Rubin, 2012b); motivational interviewing, cognitive–behavioral skills training, and Seeking Safety for substance use disorders (Springer & Rubin, 2009); dialectical behavior therapy for borderline personality disorder (Koons et al., 2001; Linehan et al., 2006; Verheul et al., 2003); interpersonal therapy and cognitive–behavioral therapy for depression (Springer, Rubin, & Beevers, 2011); prolonged exposure therapy (PET), cognitive processing therapy (CPT), and eye movement desensitization and reprocessing (EMDR) for adults with post-traumatic stress and other anxiety-related disorders (Rubin, 2009); trauma-focused cognitive–behavioral therapy (TFCBT) for traumatized children (Rubin, 2012a); and psychoeducational family groups, assertive community treatment, supported employment, and critical time intervention for people with schizophrenia or their families (Trawver & Rubin, 2010).

Despite the strong research support that these interventions have received in various RCTs and meta-analyses, other studies have shown disappointing results regarding the extent to which these interventions are implemented appropriately and with successful outcomes in everyday practice (Embry & Biglan, 2008; Weisz, Ugueto, Herren, Afienko, & Rutt, 2011). Possible explanations for these disappointing results pertain to the differences between service provision realities in everyday practice settings and the ideal service provision environments of RCTs. For example, clients in RCTs are more likely to complete treatment protocols, less likely to experience comorbidity of target issues, and less likely to be members of historically underrepresented minority groups than are clients seen in everyday social work practice. By the same token, practitioners in RCTs are likely to have better training and supervision, smaller and less diverse caseloads, and greater commitment and adherence to treatment protocols than their real-world counterparts. Agency factors, such as higher rates of practitioner turnover and fewer funds for service provision, may also play a role in these discrepancies (Briere & Scott, 2013).

FIDELITY, ADAPTATION, AND COMMON ELEMENTS

In response to the disappointing results obtained when research-supported treatments (RSTs) are implemented in real-world practice settings, some advocate for modifying RST treatment protocols to better fit the practice settings in which they are implemented. There is disagreement, however, regarding the extent to which agencies should have leeway in this process. Some stress the need for intervention fidelity, reasoning that any changes to the research-supported protocol render the intervention no longer empirically supported. Although that reasoning is scientifically correct, it is also scientifically correct to conclude that even if efforts were made to follow the treatment manual meticulously, the external validity of the RCT results cannot be assumed in light of the disparities between the ideal service provision conditions of RCTs and the actual, less ideal conditions in everyday practice, especially regarding differences in clientele and practitioner characteristics. Thus, no matter how pristine a study's research design might be, its results cannot be assumed to generalize to people and settings whose characteristics are unlike those of the research study.

Consequently, there is scientific merit to the notion that it is acceptable to modify the RST to improve its fit for the setting, adapting it to make it more acceptable, less complex, and less costly. Although such modification means that the empirical support for the RST can no longer be assumed, the differences between the research conditions and the practice conditions mean that such support could not have been assumed in the first place, even if an attempt were made to adopt the intervention with impeccable fidelity. Studies have shown that even when practitioners in real-world practice settings attempt to adopt RSTs with fidelity, the outcomes are often disappointing, as a result of the aforementioned external validity issues (Embry & Biglan, 2008; Weisz et al., 2011).

However, not all who endorse the adaptation approach agree as to how much modification is acceptable. Some advocate minimal modifications, limiting them to the elements of the RST deemed adaptable, and not modifying any core elements deemed essential and indispensable (Galinsky, Fraser, Day, & Rothman, 2013; Sundell, Ferrer-Wreder, & Fraser, 2012). In contrast, others advocate for a common-elements approach, involving identification of a broad array of elements commonly shared by various RSTs. Taking this approach, Chorpita, Becker, and Daleiden (2007) identified 23 common elements in RCTs of interventions for "depression in girls between ages 10 and 12" (p. 648). Also taking this approach, Bender and Bright (2011) identified eight common elements in their review of 430 RCTs for disruptive girls who exhibited signs of traumatic stress. As a result of these findings, they recommended that practitioners working with such girls be trained in those eight common elements instead of in any particular RST. Based on an extensive literature review along with clinical experience, Briere and Scott (2013) also recommended a common-elements approach to treating traumatized clients. They contended that the wide range of symptoms and unique needs of each client require that clinicians combine empirically supported elements into a broad therapeutic approach that can be tailored to each client's specific clinical needs. The key components of this approach include an empathically attuned therapeutic relationship, psychoeducation, stress reduction or affect regulation training, development of a narrative about the traumatic event, memory processing, and increasing self-awareness and self-acceptance.

IMPLICATIONS FOR BRIDGING THE GAP BETWEEN RESEARCH AND PRACTICE

The disappointing outcomes of RSTs when implemented in real-world practice settings, combined with disagreement about how best to improve those outcomes, imply the need for studies that describe the conditions under which RSTs are adopted or adapted, successfully and unsuccessfully, in everyday practice. This is a premier opportunity for social work researchers to be at the forefront of this line of inquiry, given the profession's focus on applied research to solve real-world problems in a variety of practice settings. Similarly, social work tends to be distinguished from disciplines most associated with RCTs by its work with the types of clients and types of settings that are usually not included in RCT research. There is a great need for these studies in light of disparities between RCT outcomes and the outcomes when RSTs are implemented in everyday practice settings. Social work researchers opting to pursue this line of inquiry are likely to be aware of the barriers to using rigorous experimental or quasi-experimental designs in everyday social work practice settings. Such barriers may include administrator and practitioner resistance to the use of random assignment or other ways of avoiding selectivity biases in assigning clients to treatment and comparison groups, the costs of rigorous experimental and quasi-experimental studies, and the difficulty of procuring adequate funding needed to carry out such studies. This is particularly important in light of the ever-decreasing allocation of federal research funding coupled with the fierce competition for major research grants coming from researchers outside of social work, who are often perceived as more prestigious or "scientific" in the eyes of many grant reviewers.

In light of these barriers, it is important to recognize the value of studies without control groups in a line of inquiry seeking to describe the conditions under which RSTs may be successfully adopted or adapted. Because there is already ample RCT evidence that permits causal inferences to be made about the efficacy of RSTs, the next priority need not be on experimental or quasi-experimental designs focused on ruling out threats to internal validity (and thus permitting causal inferences about intervention efficacy). Instead, the focus should be on identifying the conditions under which these RSTs are being implemented successfully in real-world settings. It follows that if the research question is not one of efficacy, but rather one of identifying these conditions, then experimental and quasi-experimental designs are not the only routes to findings that offer useful implications to inform practice.

One might understandably ask what is meant by successful implementation, or argue that distinguishing successful from unsuccessful implementations requires comparing RST outcomes with the outcomes of control or comparison groups. Experimental and quasi-experimental studies that can realistically be carried out to make such comparisons would be ideal, but are they necessary? In light of the aforementioned barriers to the feasibility of such studies, it does not seem likely that a sufficient number of them could be completed in the foreseeable future to provide an ample database for determining the conditions under which RSTs are and are not implemented successfully in various practice settings.

An alternative, therefore, would be not to define success exclusively in terms of counterfactual designs, but also to define it in more descriptive and correlational terms. For example, suppose that meta-analyses of an RST with very strong research support have found that it reduces recidivism from an average of 30 percent to 15 percent. Suppose further that an agency adapts that RST to better fit its clientele and finds that its previous agency-wide recidivism rate of 40 percent decreases to 25 percent. The agency would not need a control group to judge whether it is satisfied with the adaptation of that RST. Agencies and practitioners frequently make such judgments sans control groups. Furthermore, by observing that their recidivism rate decreased by nearly the same proportion as that of the well-supported RST, they would have no reason to suppose that a different intervention approach would be needed to better fit their agency.

Moreover, suppose additional studies are carried out in other settings dealing with the same or closely related target populations and that the findings support that similar types of adaptations or service provision characteristics also result in comparable reductions in recidivism. If other studies dealing with similar target populations try different adaptations of the same RST and fail to get desirable reductions in recidivism, it could be determined that the particular adaptation was ineffective with that population or in that particular setting. These studies could also use mixed-methods designs to generate hypotheses about the service provision characteristics that might promote successful and unsuccessful adoptions or adaptations of RSTs.

Qualitative interviews with administrators, practitioners, and clients can also help in the identification of such characteristics, as can the examination of agency documents and case records. Although none of the studies could directly imply that in any particular setting the RST was the cause of the particular outcome, the studies as a whole would provide a database for inductively identifying the conditions under which a particular RST for a particular target population is associated with desirable and undesirable outcomes.

ROLE OF EFFECT SIZES

Practitioners who want their practice decisions to be informed by evidence should be familiar with, or become familiar with, effect size calculations. These statistics play a key role in meta-analyses that assess which intervention approaches have the strongest impacts on client outcomes for specific target issues. A prime function of these statistics is to compare the strength of intervention effects across studies that use different outcome measures with different quantitative meanings and ranges. Readers not yet familiar with effect size statistics may want to examine a practitioner-oriented research text such as that by Rubin and Bellamy (2012) to learn more about the importance of effect sizes in intervention research.

Here is an example of how effect sizes can be used to compare the impact of two related but distinct interventions. Study 1 finds that intervention A significantly reduces the mean number of times parents inappropriately spank their children during a one-week period from four to two. Study 2 finds that intervention B significantly improves scores on a 100-item true–false test of parenting knowledge from a mean of 30 correct answers to a mean of 40 correct answers. Which intervention had the stronger impact: study 2 with the increase of 10 correct answers or study 1 with the decrease of two spankings? A commonly used effect size statistic, Cohen's d, divides the difference in means in each study by that study's standard deviation (Cohen, 1988). Studies that use outcome measures with a broader possible range of scores typically have correspondingly larger standard deviations (meaning that the average distance from the mean score will be greater with a broader range of possible numbers than it will be with a more restricted range).

Suppose the standard deviation in the spanking study is 1, and the standard deviation in the true–false test study is 20. Dividing each study's difference in means by its standard deviation results in a calculation of effect size and indicates that the spanking study had the stronger effect size (d = 2.0; 4 minus 2, divided by 1) and the true–false test study had the weaker effect size (d = 0.5; 40 minus 30, divided by 20). Thus, the recipients of intervention A improved by 2 standard deviations, whereas the recipients of intervention B improved by 0.5 standard deviation. In this way, Cohen's d standardizes, and thereby makes comparable, outcome measures with very different quantitative properties.
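To make the arithmetic concrete, the following minimal sketch (in Python, using only the hypothetical means and standard deviations from the two studies above) shows the calculation; the function name is illustrative and not drawn from any cited source.

```python
def cohens_d(pretest_mean, posttest_mean, sd):
    """Cohen's d: the difference between two means divided by the
    standard deviation (Cohen, 1988)."""
    return abs(posttest_mean - pretest_mean) / sd

# Study 1: weekly spankings drop from a mean of 4 to 2; SD = 1
d_study1 = cohens_d(4, 2, 1)      # 2.0

# Study 2: parenting-knowledge scores rise from a mean of 30 to 40; SD = 20
d_study2 = cohens_d(30, 40, 20)   # 0.5

print(d_study1, d_study2)         # intervention A shows the stronger effect
```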

Effect size statistics such as Cohen's d commonly are used as dependent variables in meta-analyses of various completed studies. In addition to being used to compare the strength of impact of different interventions, effect sizes also can be used in meta-analyses to examine associations between effect sizes and service provision characteristics among a database of studies regarding the same intervention. For example, these analyses could determine if certain adaptations of an RST treatment protocol to make it more culturally sensitive are associated with stronger effect sizes than others. Likewise, perhaps common-elements approaches to service provision for a particular target population or target problem might be significantly associated with stronger or weaker effect sizes than approaches that attempt to maximize adherence to the treatment manual for a particular RST.

Using effect sizes as dependent variables has long been an established method in meta-analyses that assess whether certain methodological attributes of studies are associated with strength of effect size and whether certain intervention approaches have significantly stronger effect sizes than alternative approaches (Bisson et al., 2007; Davidson & Parker, 2001; Seidler & Wagner, 2006; van Etten & Taylor, 1998). The novelty of the approach being recommended in this article is that the effect sizes of each individual study in a review of studies would be in reference to the treatment group only. Instead of using the difference in means between the experimental and control group as the numerator and then dividing by the pooled standard deviation, this approach uses the mean pre–post difference of the treatment group as the numerator and the pretest standard deviation as the denominator (Feingold, 2009; Kadel & Kip, 2012; Maier-Riehle & Zwingmann, 2000). The one-group effect size calculation is necessary in agency settings that do not have control groups and thus would only have a pre–post effect size for the clients that receive the RST. As each individual study is reported in the literature describing service provision variables in a particular setting that implemented a specific RST (or a particular combination of common elements) along with the effect size outcome in that setting, a foundation of data would be accumulated that would enable meta-analytic procedures to be used to identify associations between variations in effect sizes and variations in how RSTs (or common elements) are provided in real-world settings.
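As a rough illustration of the single-group calculation described above, here is a minimal Python sketch under assumed, hypothetical pretest and posttest scores; it simply divides the mean pre–post change by the pretest standard deviation (cf. Feingold, 2009; Maier-Riehle & Zwingmann, 2000).

```python
import statistics

def one_group_effect_size(pretest_scores, posttest_scores):
    """One-group pre-post effect size: mean change from pretest to posttest
    divided by the pretest standard deviation."""
    mean_change = statistics.mean(posttest_scores) - statistics.mean(pretest_scores)
    return abs(mean_change) / statistics.stdev(pretest_scores)

# Hypothetical symptom scores for the same clients before and after an
# adapted RST (lower scores indicate improvement).
pre = [52, 47, 60, 55, 49, 58, 51, 63]
post = [41, 38, 50, 40, 37, 49, 42, 52]

print(round(one_group_effect_size(pre, post), 2))
```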

DEVELOPING BENCHMARK EFFECT SIZES

Conducting meta-analyses of one-group case studies is not the only way that this approach can help bridge the gap between research and practice. Another way is by enabling individual agencies, or other types of practice settings, to compare their one-group effect size to the mean effect sizes found in existing meta-analyses of RCTs. Doing so, however, will require an intermediate line of inquiry, which I am currently pursuing. This ongoing research is examining the individual RCTs reported in various meta-analyses and calculating the separate effect sizes for experimental and control groups in each RCT. The need for the separate calculations is due to the fact that meta-analyses report average effect sizes for the difference between experimental and control conditions, whereas in the studies being recommended in this article, the agency would only have a pre–post effect size for the clients that receive the RST. The agency's effect size could be compared with both the mean experimental group effect size and the mean control group effect size found in the corresponding meta-analyses of RCT results. This comparison would help the agency judge whether the outcomes achieved by its clients adequately resembled the outcomes achieved by the RST recipients or whether they more closely resembled the outcomes of the RCT control group participants. The former would be grounds for continuing their provision of the RST "as is." The latter would support pursuing an alternative approach or intervention.

For example, I am currently engaged in an ongoing one-group study of an adaptation of problem-solving therapy (PST) in the treatment of postpartum depression. The preliminary findings show a one-group pre–post effect size of 1.07 (meaning that the posttest scores on average were approximately one standard deviation better than the pretest scores). The absence of a control group in that study normally would severely limit the implications of that finding for guiding practice. That is, without controls for threats to internal validity, the improvement quite plausibly could be attributed to things like contemporaneous events (history) or the passage of time (maturation).

To improve the value of that finding, I examined the individual studies covered in a meta-analysis of RCTs that assessed the efficacy of PST in reducing mental and physical health problems (Malouff, Thorsteinsson, & Schutte, 2007). Nine of the studies in that meta-analysis used level of depression as the dependent variable and provided data permitting the calculation of separate effect sizes for the randomly assigned PST treatment groups and the wait-list control groups. I calculated each separate effect size and found that (excluding one outlier) the mean effect sizes were 1.81 for the PST groups and 0.11 for the wait-list control groups. The utility of the 1.07 effect size in the ongoing study can be enhanced by comparing it with the mean effect sizes in the meta-analysis. This comparison would not permit any conclusive causal inferences; however, the fact that the study's effect size of 1.07 is much better than the mean control group effect size of 0.11 in the RCTs would provide a basis for continuing to rely on the adaptation. At the same time, the fact that the adaptation effect size falls considerably short of the 1.81 PST effect size in the meta-analysis suggests the need to consider possible ways to improve the adaptation.

I also am involved in a meta-analysis of RCTs to develop benchmarks regarding the following three empirically supported treatments for trauma symptoms: PET, CPT, and EMDR. The preliminary benchmark findings, in terms of the mean one-group pre–post effect size for each treatment and for the wait-list control groups, based on standardized self-report measures of trauma symptoms, are displayed in Table 1. Practitioners in an agency that has adopted or adapted one of these interventions can calculate their mean one-group pre–post effect size of self-reported trauma symptoms and then compare their mean with the corresponding values in Table 1.

For example, if they adopted or adapted PET, and their mean effect size for that adaptation was 1.50, it would provide grounds for optimism about the way they were providing that intervention, because their mean approximated the mean effect size (1.67) for recipients of PET in the RCTs and well exceeded the mean (0.46) of the wait-list controls. Conversely, if their mean effect size was found to be 0.50, that would be grounds for contemplating improving the way they were adapting PET or perhaps switching to a different intervention approach that might be a better fit for their setting.
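The benchmarking logic just described can be sketched in a few lines of Python. The helper below is a purely illustrative decision rule (a nearest-benchmark comparison), not one prescribed in this article; the benchmark values are the preliminary PET figures from Table 1.

```python
def interpret_against_benchmarks(agency_d, treatment_d, waitlist_d):
    """Non-causal comparison of an agency's one-group pre-post effect size
    with benchmark means derived from RCT meta-analyses (illustrative only)."""
    if abs(agency_d - treatment_d) <= abs(agency_d - waitlist_d):
        return "closer to the RCT treatment-group benchmark: grounds for continuing as is"
    return ("closer to the wait-list benchmark: grounds for improving the adaptation "
            "or switching to a different intervention approach")

# Preliminary PET benchmarks from Table 1: treatment mean = 1.67, wait-list mean = 0.46
print(interpret_against_benchmarks(1.50, 1.67, 0.46))  # the optimistic case described above
print(interpret_against_benchmarks(0.50, 1.67, 0.46))  # the discouraging case described above
```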


NEW APPROACH FOR SOCIAL WORK RESEARCH

What I am proposing, then, is a new approach for social work research. This approach might not appeal to researchers who already have been achieving success in procuring major research funding. Neither might it work for junior faculty members who must seek major external research funding as a prerequisite for achieving tenure.

For other researchers, including faculty members at all ranks as well as doctoral students seeking a doable yet valuable dissertation topic, this approach provides a way to do research that is highly useful to social work practice and is feasible, without requiring the long and often futile pursuit of elusive major funding. Although seeking major research funding can be a very rewarding quest for some and can yield studies of immense value to the field, the approach being proposed here can be equally valuable and rewarding, especially if obtaining major funding is not required as the only route to tenure and promotion. Moreover, this approach provides a way for faculty members in social work doctoral programs to provide their students with opportunities to do dissertations that are both feasible and valuable for building the practice knowledge base of our profession.

What makes this approach scientifically valid now is the progress made in the RCTs and meta-analyses that have provided the basis for causal inferences about the effectiveness of RSTs and thus implied the value of the descriptive and correlational line of inquiry explained in this article. Instead of investing all that goes into seeking major external funding, or instead of trying to persuade an agency to permit an experimental or an unbiased quasi-experimental study in its setting, the researcher need only find an agency that is (a) interested in adopting or adapting an existing RST or is already doing so, and (b) willing to permit an unbiased measure of pre–post per-client outcomes. Once an agreement is reached with that agency, and the researcher is assured of the agency's commitment to follow through with the research protocol, the researcher can implement a case study using the mixed methods described earlier to calculate the effect size for the RST recipients in that agency, coupled with a description of the following: (a) whether and how the RST was modified to adapt it to the setting; (b) whether a common-elements approach was taken and what combination of common elements was used; and (c) the agency, practitioner, and client variables that might have enhanced or hindered the proper implementation of the RST.
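To suggest what one entry in the proposed database of case studies might look like, here is a minimal Python sketch; all field names and example values are hypothetical illustrations of items (a) through (c) above, not a schema proposed in this article.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CaseStudyRecord:
    """One agency case study in the proposed effect size database (illustrative only)."""
    rst_name: str                          # which RST was adopted or adapted
    setting: str                           # type of agency or practice setting
    n_clients: int
    pre_post_effect_size: float            # one-group pre-post d for the RST recipients
    adaptations: List[str] = field(default_factory=list)    # (a) whether and how the RST was modified
    common_elements: Optional[List[str]] = None              # (b) common-elements combination, if any
    implementation_factors: List[str] = field(default_factory=list)  # (c) agency, practitioner, and client variables

# Hypothetical example entry
record = CaseStudyRecord(
    rst_name="PET",
    setting="community mental health agency",
    n_clients=25,
    pre_post_effect_size=1.1,
    adaptations=["shortened protocol", "translated client materials"],
    implementation_factors=["high practitioner turnover", "large caseloads"],
)
```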

Some might wonder about the potential for publication of these studies. However, journals already publish many pilot outcome studies without control groups or with comparison groups that are vulnerable to severe selectivity biases (Rubin & Parrish, 2007). Moreover, those pilot studies typically have not assessed the outcomes of already supported RSTs. I am not advocating for the implementation or submission of outcome studies with limited internal validity in general, but, rather, am only advocating for those that assess the outcomes of already supported RSTs when they are adapted or adopted in real-world agency settings.

Consider, for example, a scenario in which the administrator of a small agency serving primarily traumatized Latino and Latina youths invites a social work faculty member to conduct a study assessing the effect size of an adapted version of TFCBT being provided to its clients. Although TFCBT has strong research support, none of the various RCTs that have supported it have been done with primarily Hispanic clients (Rubin, 2012a). Suppose that the agency can offer a sample size of only 30 clients and that an RCT is infeasible. If the faculty member applies for a major research grant as a precondition for doing the study, the study probably will never get done. Not only would it be difficult to get funded for such a study, but by the time the grant proposal was written, submitted, rewritten, and then resubmitted, the agency's interest in such a study might have waned. Indeed, the agency administrator and some of its interested practitioners may have moved on to different positions elsewhere. Instead of pursuing major funding, the faculty member could implement the kind of study being proposed in this article, and thus contribute to an accumulating literature identifying the conditions under which adaptations of TFCBT are and are not associated with desirable effect sizes with Hispanic youths.

Table 1: Mean One-Group Pre–Post Effect Sizes on Standardized Self-Report Trauma Symptom Measures, by Treatment Condition

Treatment Condition                               Mean Effect Size    SD
Prolonged exposure therapy                        1.67                0.83
Cognitive processing therapy                      1.48                0.42
PET + CPT                                         1.53                0.55
Eye movement desensitization and reprocessing     1.80                1.08
Wait-list control                                 0.46                0.33

Note: This table is derived from the preliminary findings of an ongoing meta-analysis by the author. PET = prolonged exposure therapy; CPT = cognitive processing therapy.

When obtaining major research funding becomes the main or only route to academic success, pressure to obtain it might impede alternative research pursuits of significant value to the profession. For example, an assistant professor in a Research 1 university, in a recent e-mail to me, mentioned that her mentors have steered her away from research on agency priorities and needs because (according to her mentors) such research is, in general, not federally fundable (personal communication with J. Bellamy, associate professor, University of Denver, January 7, 2013). In light of this phenomenon, faculty members should be given another option to pursue promotion and tenure, such as by conducting the kinds of studies recommended in this article. However, this new direction is not being proposed just in connection to career advancement interests. This option also is relevant to senior faculty members who no longer seek promotion or who are not interested in pursuing major funding but who want to meaningfully advance the empirical base of practice. This article is not meant to discourage faculty members from conducting RCTs or from seeking the funding to do so. Instead, it is meant to support an alternative way in which they, as well as our doctoral students, can conduct valuable, practice-relevant studies.

CONCLUSION

In response to the disappointing outcomes obtained when RSTs are implemented in real-world practice settings, combined with the disagreements about how best to improve those outcomes, this article recommends an innovative approach for bridging the gap between research and practice via one-group case studies in everyday practice settings seeking to adopt or adapt an RST. The rationale for this approach is based partly on the fact that RSTs, by definition, have already had their causality amply supported in replicated RCTs and partly on the rarity with which experimental or unbiased quasi-experimental designs are feasible in real-world practice settings.

Because RCTs have been conducted under ideal service provision conditions typically unlike the service provision realities of everyday social work practice, studies are needed that would both (a) describe the conditions under which RSTs are being adopted or adapted in real-world practice settings, and (b) provide a database of effect sizes for meta-analyses seeking to identify which service provision characteristics are associated with more desirable outcomes. Also proposed are meta-analytic studies of separate experimental and control group mean pre–post effect sizes reported in RCTs. These separate mean effect sizes would provide benchmarks that agency practitioners could compare with their effect size to inform their decisions as to whether to continue, modify, or replace their ongoing efforts to adopt or adapt a specific RST.

The recommended approach would provide an alternative way for faculty members to collaborate with practitioners to conduct practice-relevant research, and thus to reduce the gap between research and practice. This approach to practice-relevant research may also result in increasing practitioner buy-in and expanding applicability of findings across various practice settings. This could all be done without having to seek major research funding. Although the pursuit of major research funding is not being discouraged, academicians should recognize the value of faculty and doctoral dissertation research taking the recommended approach.

REFERENCES

Bender, K., & Bright, C. L. (2011). A review of common elements of effective interventions for reducing disruptive behavior and traumatic stress among adolescent girls. Paper presented at the Annual Conference of the Society for Social Work and Research, Tampa, FL.

Bisson, J. L., Ehlers, A., Matthews, R., Pilling, S., Richards, D., & Turner, S. (2007). Psychological treatments for chronic post-traumatic stress disorder: Systematic review and meta-analysis. British Journal of Psychiatry, 190(2), 97–104.

Briere, J. N., & Scott, C. (2013). Principles of trauma therapy (2nd ed.). Thousand Oaks, CA: Sage Publications.

Chorpita, B. F., Becker, K. D., & Daleiden, E. L. (2007). Understanding the common elements of evidence-based practice: Misconceptions and clinical examples. Journal of the American Academy of Child and Adolescent Psychiatry, 46, 647–652.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. New York: Routledge.

Davidson, P. R., & Parker, K.C.H. (2001). Eye movement desensitization and reprocessing (EMDR): A meta-analysis. Journal of Consulting and Clinical Psychology, 69, 305–316.

Embry, D. D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11(3), 75–113.

Feingold, A. (2009). Effect sizes for growth-modeling analysis for controlled clinical trials in the same metric as for classical analysis. Psychological Methods, 14, 43–53.

Galinsky, M., Fraser, M. W., Day, S. H., & Rothman, J. M. (2013). A primer for the design of practice manuals: Four stages of development. Research on Social Work Practice, 23, 219–228.

Kadel, R. P., & Kip, K. E. (2012). A SAS macro to compute effect size (Cohen's d) and its confidence interval from raw survey data. Retrieved from http://Analytics.ncsu.edu/2012/SD-06.pdf

Kirk, S. A., & Reid, W. J. (2002). Science and social work: A critical appraisal. New York: Columbia University Press.

Koons, C. R., Robins, C. J., Tweed, J. L., Lynch, T. R., Gonzalez, A. M., Morse, J. Q., et al. (2001). Efficacy of dialectical behavior therapy in women veterans with borderline personality disorder. Behavior Therapy, 32(2), 371–390.

Linehan, M. M., Comtois, K. A., Murray, A. M., Brown, M. Z., Gallop, R. J., Heard, H. L., et al. (2006). Two-year randomized controlled trial and follow-up of dialectical behavior therapy vs. therapy by experts for suicidal behaviors and borderline personality disorder. Archives of General Psychiatry, 63, 757–766.

Maier-Riehle, B., & Zwingmann, C. (2000). Effect strength variation in the single group pre-post study design: A critical review. Die Rehabilitation, 39(4), 189–199. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11008276

Malouff, J. M., Thorsteinsson, E. B., & Schutte, N. S. (2007). The efficacy of problem solving therapy in reducing mental and physical health problems: A meta-analysis. Clinical Psychology Review, 27(1), 46–57.

Richmond, M. (1917). Social casework. New York: Russell Sage Foundation.

Rubin, A. (1999). Presidential editorial: A view from the summit. Research on Social Work Practice, 9(2), 142–147.

Rubin, A. (2009). Research providing the evidence base for the interventions in this volume. In A. Rubin & D. W. Springer (Eds.), Treatment of traumatized adults and children: Clinician's guide to evidence-based practice (pp. 423–429). Hoboken, NJ: John Wiley & Sons.

Rubin, A. (Ed.). (2012a). Programs and interventions for maltreated children and families at risk. Hoboken, NJ: John Wiley & Sons.

Rubin, A. (2012b). Trauma-focused cognitive behavioral therapy for children. In A. Rubin (Ed.), Programs and interventions for maltreated children and families at risk (pp. 123–140). Hoboken, NJ: John Wiley & Sons.

Rubin, A., & Bellamy, J. (2012). Practitioner's guide to using research for evidence-based practice (2nd ed.). Hoboken, NJ: John Wiley & Sons.

Rubin, A., & Parrish, D. (2007). Problematic phrases in the conclusions of published outcome studies: Implications for evidence-based practice. Research on Social Work Practice, 17, 334–347.

Rubin, A., & Rosenblatt, A. (Eds.). (1979). Sourcebook on research utilization. New York: Council on Social Work Education.

Rubin, A., & Zimbalist, S. E. (1979). Trends in the MSW research curriculum: A decade later. New York: Council on Social Work Education.

Rubin, A., & Zimbalist, S. E. (1981). Issues in the MSW research curriculum, 1968–1979. In S. Briar, H. Weissman, & A. Rubin (Eds.), Research utilization in social work education (pp. 6–16). New York: Council on Social Work Education.

Seidler, G. H., & Wagner, F. E. (2006). Comparing the efficacy of EMDR and exposure-focused cognitive–behavioral therapy in the treatment of PTSD: A meta-analytic study. Psychological Medicine, 1–8. Retrieved from http://www.myptsd.com/c/gallery/-pdf/1-26.pdf

Springer, D. W., & Rubin, A. (Eds.). (2009). Substance abuse treatment for youth and adults: Clinician's guide to evidence-based practice. Hoboken, NJ: John Wiley & Sons.

Springer, D. W., Rubin, A., & Beevers, C. G. (Eds.). (2011). Treatment of depression in adolescents and adults: Clinician's guide to evidence-based practice. Hoboken, NJ: John Wiley & Sons.

Sundell, K., Ferrer-Wreder, L., & Fraser, M. W. (2012). Going global: A model for evaluating empirically supported family-based interventions in new contexts. Evaluation & the Health Professions. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/23291390

Trawver, K., & Rubin, A. (2010). Research providing the evidence base for the interventions in this volume. In A. Rubin, D. W. Springer, & K. Trawver (Eds.), Psychosocial treatment of schizophrenia (pp. 341–348). Hoboken, NJ: John Wiley & Sons.

Van Etten, M., & Taylor, S. (1998). Comparative efficacy of treatments for posttraumatic stress disorder: A meta-analysis. Clinical Psychology and Psychotherapy, 5, 126–145.

Verheul, R., van den Bosch, L., Koeter, M. W., de Ridder, M. A., Stijnen, T., & van den Brink, W. (2003). Dialectical behaviour therapy for women with borderline personality disorder: 12-month, randomised clinical trial in the Netherlands. British Journal of Psychiatry, 182(2), 135–140.

Weisz, J. R., Ugueto, A. M., Herren, J., Afienko, S. R., & Rutt, C. (2011). Kernels vs. ears and other questions for a science of treatment dissemination. Clinical Psychology: Science and Practice, 18(1), 41–46.

Allen Rubin, PhD, holds the Jean Kantambu Latting College Professorship of Leadership and Social Change, University of Houston, 110HA Social Work Building, Room 342, Houston, TX 77204-4013; e-mail: [email protected].

Original manuscript received October 16, 2013
Final revision received February 5, 2014
Accepted March 13, 2014
Advance Access Publication June 16, 2014


