
BRIDGING THE GAP BETWEEN RESEARCH AND PRACTICE: A FRAMEWORK FOR BUILDING RESEARCH AGENDAS IN SCHOOL PSYCHOLOGY

T. CHRIS RILEY-TILLMAN

Temple University

SANDRA M. CHAFOULEAS

University of Connecticut

TANYA L. ECKERT

Syracuse University

CONSTANCE KELLEHER

Temple University

In this article, we discuss the history behind efforts to transfer school psychology research into practice and review the literature pertaining to treatment acceptability, participatory action research, organizational change, and generalization programming. We then present a model for the systematic programming of this transfer and propose a three-step framework that emphasizes multiple conceptual bases to transfer research into practice. This three-step framework includes creating usable knowledge, transferring usable knowledge, and supporting usable knowledge. It is our intention that the proposed framework will provide a starting point that can be informed by researchers and practitioners in the field of school psychology. We conclude with examples of our research efforts to systematically study the transfer of research into practice. © 2005 Wiley Periodicals, Inc.

As with many fields, school psychology is perceived as having two distinct yet related populations: practitioners and trainers/researchers. School psychology practitioners work in many settings including schools, hospitals, and private practices. In contrast, trainers/researchers are typically in university settings and focus on training individuals to use and conduct research regarding issues related to the field of school psychology. Although these roles can and do overlap to some extent, we propose that a disconnect prevents the field from reaching its full potential. That is, it is imperative that those who develop innovations concurrently work harder to influence those who use them in the field.

Discussion surrounding this disconnect is certainly not new (see Stoiber & Kratochwill, 2000); however, rather than continue the discussion, we suggest it is time for action. We contend that the primary impetus for such change must come from those who develop research agendas in school psychology. In this article, we discuss the history behind efforts to transfer school psychology research into practice and present a model for systematically programming for this transfer. We conclude with examples of our research efforts to systematically study the process of influence and transfer of knowledge as a catalyst for continued research in the area.

Bases of a Model for Transferring School Psychology Research into Practice

Although many topics in psychology and education have the potential to influence a model for transferring school psychology knowledge to practice, we have selected four areas to draw on in this article. In particular, the literature regarding treatment acceptability, participatory action research, organizational change, and generalization programming is reviewed given its direct relevance to our proposed framework for transferring school psychology research into practice.

This project was partially funded by a grant provided by the Society for the Study of School Psychology.
Correspondence to: T. Chris Riley-Tillman, Temple University (004–00), College of Education, Ritter Annex 265, Philadelphia, PA 19122–6091. E-mail: [email protected]

Psychology in the Schools, Vol. 42(5), 2005 © 2005 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/pits.20085

Treatment Acceptability

To date, one of the primary methods for examining the likelihood of transferring school psychology research into practice, particularly with school-based treatments, has occurred through the study of treatment acceptability. Treatment acceptability refers to the degree to which an individual perceives a treatment procedure to be fair, reasonable, appropriate, and unobtrusive (Kazdin, 1980). This construct is based on the hypothesis that if individuals find a treatment to be acceptable, they are more likely to use the treatment (Detrich, 1999). Thus, a great deal of research has focused on treatment acceptability as a way to predict usage and, in turn, effectiveness. In fact, existing models of factors related to treatment compliance and effectiveness are built around the examination of treatment acceptability. In this section, a brief review of those models is provided, followed by examples of empirical work examining them (see Eckert & Hintze, 2000, for a more complete review).

In 1985, Witt and Elliott developed one of the first models of treatment use, which incorporated the construct of acceptability (see Eckert & Hintze, 2000; Elliott, 1988, for reviews). This model linked treatment acceptability, treatment use, treatment integrity, and treatment effectiveness in a reciprocal fashion, giving each factor equal weight (Witt & Elliott, 1985). These authors hypothesized that treatment acceptability initiated the sequence and influenced the interrelationship among the remaining three factors. Reimers, Wacker, and Koeppel (1987) introduced a more complex and sequential model that added the construct of "treatment understanding" as an initial step toward treatment usage. Reimers and colleagues argued that if a good understanding of a treatment was developed, treatment acceptability and procedural compliance would follow, resulting in positive treatment outcomes and the maintenance of effects. A more recent model of treatment acceptability by Gresham and Lopez (1996) places social validity as the unifying element. Rather than focusing on the relationships among factors contributing to acceptability, this model considers treatment use and treatment integrity the primary outcome measures and the most concrete exemplars of treatment acceptability.

As noted by Eckert and Hintze (2000), although a number of treatment acceptability models have been proposed, the link between treatment acceptability and treatment usage has not been clearly supported. Whereas some investigators have suggested a positive relationship between acceptability and treatment integrity (Allinder & Oats, 1997), others have found none to exist (Peterson & McConnell, 1996; Sterling-Turner & Watson, 2002). In summary, the existing literature suggests an unclear relationship between these constructs. Perhaps the lack of support for a link between treatment acceptability and treatment usage is the result of other factors. For example, Odom, McConnell, and Chandler (1993) found feasibility to be a better predictor of intervention use than acceptability. Such findings suggest that although acceptability may be one important element in predicting treatment usage, other factors may also contribute to both predicted and actual usage of treatments.

In summary, although conceptual models regarding factors that lead to treatment use and effectiveness are available, there is limited empirical support for many of these models, particularly with regard to treatment acceptability. Despite this limitation, the models can be useful when researchers are both conducting research and planning for its transfer into practice. The models previously described provide an excellent starting point for understanding the complexity surrounding factors that influence treatment usage and suggest future directions for research.


Participatory Action Research

Participatory Action Research (PAR) provides another method that can be employed to promote the transfer of research into practice. PAR is a model of research designed to develop new technology (or innovation) that is considerate of the realities of practice (Meyer, Park, Grenot-Scheyer, Schwartz, & Harry, 1998). PAR occurs through ongoing collaboration between researchers and practitioners within the design and implementation phases of the research process (Turnbull, Friesen, & Ramirez, 1998). In theory, PAR should bring research closer to adoption because the treatment agents have been engaged in the process of designing and validating the application of treatment to the respective practice (Park, Meyer, & Goetz, 1998).

Researchers using PAR have demonstrated that by jointly developing procedures with stakeholders, the intervention is more likely to be culturally responsive, ecologically valid, and acceptable (Nastasi et al., 2000). For example, Dowrick and colleagues (2001) used a PAR framework to design a reading intervention program for children attending under-resourced, urban schools. These researchers found that the intervention designed through partnership among educators, community members, and university-based researchers was rated as responsive and culturally relevant. Further, given that the individuals responsible for implementing the intervention were invested in the schools and community, they also were committed to the maintenance of the intervention program. In another application of PAR, Salisbury, Wilson, and Palombaro (1998) found that a more collaborative and inclusive culture emerged as the participants worked together to resolve issues.

Despite the many positive aspects of PAR, there also are criticisms of, or cautions against, its wholesale use in treatment work. For example, PAR and other partnership models are often not well described, lack specific procedures for developing partnerships, and/or involve poorly defined constructs (Sheridan, Meegan, & Eagle, 2002). In addition, current PAR models have been criticized for their limited experimental control and generalizability. For example, in many instances, the outcomes associated with PAR models are provided in the form of case studies or anecdotal testimonies (Hughes, 2003). Further, because the focus of PAR is on partnership-based development of an intervention, it is frequently unclear whether internal control of the intervention is established (Hughes, 2003). Finally, a legitimate question arises as to whether practitioners are willing to enter this type of partnership, which is an essential assumption underlying all PAR models. If this commitment is not established, or if a more limited commitment is present, the PAR model may be compromised. Despite these limitations, however, the PAR literature provides important treatment considerations, such as ecological validity, acceptability, and cultural responsiveness, which need to be considered when transferring research into practice.

Organizational Change

A somewhat different research direction regarding the transfer of research into practice can be found in the literature on organizational change (Fullan, Miles, & Taylor, 1980; McDougal, Moody Clonan, & Martens, 2000). One relevant model of organizational change is a three-stage approach comprising (1) an entry and start-up, or initiation, phase; (2) an implementation support phase; and (3) an expansion phase. The first stage of this model involves monitoring organizational readiness, verifying administrative support for implementation, and establishing a clear commitment to both the process and the goal. At the conclusion of this stage, a clear commitment of resources and staff participation should exist. In the second stage, implementation support, the new knowledge is implemented in one setting with considerable support from consultants and administrators. This support includes both staff training and monitoring of the integrity of implementation. Further, in this stage, it is critical that staff have an opportunity to report their concerns to consultants and administration, as well as to have the new knowledge customized in a manner that makes it more usable in their setting while at the same time maintaining effectiveness (Rosenfield, 2000). The final stage, expansion, is highlighted by the institutionalization of the new research activity. Rosenfield described the signs of a new activity becoming institutionalized as including (1) a consistent line of funding in the operational budget, (2) the activity being implemented by staff, and (3) the activity being implemented in an additional setting aside from the initial setting.

In a study that used this model of organizational change, McDougal and colleagues (2000) examined the implementation of a prereferral intervention team process. In the initial stages, organizational readiness was monitored, and the clear goal of developing administrative support at both the district and building levels was attained. Throughout the initial implementation of the model, ongoing staff training was provided to maximize skill development. The final stage of institutionalization was noted by consistent commitment to the program by the district, ownership of the program by district staff, and a clear plan to implement the program in all elementary schools in the district. In summary, the authors of this study suggested that by following the three-stage process of organizational change, the implementation of a widespread prereferral intervention team was feasible, effective, and acceptable.

Generalization Programming

Generalization is a term that describes the occurrence of a relevant behavior across time, setting, and target in the absence of the conditions that promoted its acquisition (Stokes & Baer, 1977). Historically, researchers did not think of generalization as something that needed to be programmed. Rather, generalization was considered a natural occurrence after several pairings of stimuli or treatments. In other words, far more emphasis was placed on understanding how individuals could be trained to discriminate between stimuli rather than generalize across stimuli. In their seminal article, Stokes and Baer (1977) challenged this belief by providing the first organized list of generalization techniques. Stokes and Osnes (1989) later revised this list, specifying three general principles of generalization and 12 tactics. The first general principle, Exploit Current Functional Contingencies, includes the following four tactics: (1) contact natural consequences, (2) recruit natural consequences, (3) modify maladaptive consequences, and (4) reinforce occurrences of generalization. The second general principle, Train Diversely, includes the following four tactics: (1) use sufficient stimulus exemplars, (2) use sufficient response exemplars, (3) make antecedents less discriminable, and (4) make consequences less discriminable. The third general principle, Incorporate Functional Mediators, includes the following four tactics: (1) incorporate common salient physical stimuli, (2) incorporate common salient social stimuli, (3) incorporate self-mediated physical stimuli, and (4) incorporate self-mediated verbal and covert stimuli. In Table 1, these three general principles and the 12 associated tactics are listed, including examples of use in a school setting.

In summary, generalization programming has specific implications for bridging the gap between research and practice. These techniques have direct relevance for the development of professional workshops that focus specifically on increasing the likelihood of transfer from research (i.e., during the workshop training session) to applied practice (i.e., when practitioners return to their practice settings).

A Framework for Building Research Agendas in School Psychology

Although our knowledge base in the field of school psychology has grown dramatically over the past few decades, the application of that knowledge in practice remains unfulfilled. Thus, a primary goal for the field should be to consider how we can maximize the likelihood of generalizing that knowledge into practice. We suggest that it is time to move toward a more encompassing framework that integrates and uses multiple bases of literature to bridge the gap between research and practice. Based on our review of literature related to treatment acceptability, PAR, organizational change, and generalization programming, we propose a three-step framework for building research agendas in school psychology. An overview of the three-step framework, including building usable knowledge, transferring usable knowledge, and supporting usable knowledge, as well as the key features included within each step, is illustrated in Figure 1. Although we chose to review and include these three particular areas, it is anticipated that the framework can accommodate and/or be modified to include other literature bases. For example, the Evidence-Based Intervention Work Group (this issue) suggests that several literature bases, including Social Influence Theory, the Concerns Based Adoption Model, Functional Assessment, and the Organizational Analytic Model, have the potential to contribute to our proposed framework. Thus, we hope that our framework will provide a starting point that can be modified by both researchers and practitioners who find relevance in other areas of psychological theory. Further expansion of each aspect of our framework is discussed subsequently.

Table 1
Principles and Tactics of Generalization Programming That Assist in the Transfer of Research Into Practice

Train Diversely
  Use sufficient stimulus exemplars. (Transferring Usable Knowledge) Present a number of examples where the knowledge is relevant.
  Use sufficient response exemplars. (Transferring Usable Knowledge) Present a number of examples where the knowledge is relevant.
  Make antecedents less discriminable. (Transferring Usable Knowledge)
  Make consequences less discriminable. (Transferring Usable Knowledge) Present a number of possible consequences.

Incorporate Functional Mediators
  Incorporate common salient physical stimuli. (Transferring Usable Knowledge) Utilize materials in training that are also present in the natural environment, such as handouts or other reminders.
  Incorporate common salient social stimuli. (Transferring Usable Knowledge)
  Incorporate self-mediated physical stimuli. (Transferring Usable Knowledge) Train using materials that can be utilized in the natural environment, such as intervention scripts (Hiralall & Martens, 1998).
  Incorporate self-mediated verbal and covert stimuli. (Transferring Usable Knowledge)

Exploit Current Functional Contingencies
  Contact natural consequences. (Building Usable Knowledge; Supporting Usable Knowledge) Develop knowledge that is likely to contact reinforcement in the natural environment.
  Recruit natural consequences. (Supporting Usable Knowledge) When reinforcement exists in the natural environment, provide a means whereby that reinforcement can be accessed, such as performance feedback (Noell et al., 1997; Witt et al., 1997).
  Modify maladaptive consequences. (Building Usable Knowledge; Supporting Usable Knowledge) Make an innovation less time consuming, such as through Brief Experimental Analysis.
  Reinforce occurrences of generalization. (Supporting Usable Knowledge) Provide the mechanism for reinforcement of any occurrence of generalization over time, by a consultant or by individuals in the school system.

Note. Generalization strategies and tactics based on Stokes and Osnes (1989). The applicable stage(s) of the proposed model are given in parentheses.

Building Usable Knowledge

The first stage of the model, building usable knowledge, is considered essential in ensuring that new innovations are both practical and useful tools in a school environment.

In fact, Rosenfield (2000) coined the term building usable knowledge, emphasizing the importance of developing interventions that can be implemented with integrity and with positive outcomes. Thus, building usable knowledge emphasizes the need to conduct intervention research that is practical in schools. To do this, we need to develop interventions with high potential for generalization into school settings, disseminate information about the interventions, and monitor the effects of that work (i.e., actual generalization) on actual practice. Our suggestions for accomplishing these tasks, drawing from numerous works (e.g., Clonan, Chafouleas, McDougal, & Riley-Tillman, 2004; Elliott, Kratochwill, & Roach, 2003; Meyers & Nastasi, 1999), are presented next.

Knowledge should be presented in as simple a manner as possible. Although a researcher who spends years developing and verifying an innovation might view it as easily understandable, a practitioner must devote a number of professional and personal hours of study to understand its purpose, procedure, and applicability. Thus, we suggest that it is critical to convey knowledge in a simple manner to maximize the efficiency with which it can be learned. By efficiency, we refer to minimizing the amount of study needed for mastery of that knowledge. For example, Curriculum-Based Measurement (CBM) is one assessment technique that was specifically developed to provide school staff with an easy and effective tool for measuring student performance over time (Deno, 1985). One method of presenting knowledge in as simple a manner as possible is for researchers to highlight the essential or critical components of an innovation in the context of reviewing the nonessential aspects.

Critical components need to be identified. The critical components of an innovation need to be clearly understood and outlined prior to presentation. By critical components, we mean the specific elements of the intervention that are essential for change and that need to be operationalized (Elliott et al., 2003; Meyers & Nastasi, 1999). It should be assumed that when an innovation is used in practice it may be altered or "personalized" to some extent (Dane & Schneider, 1998). Given that some alterations are expected, it seems obvious that we should make it a priority to identify the critical components of an innovation (Schoenwald & Hoagwood, 2001). A number of researchers have observed that some components of a treatment may have a greater impact on outcomes than others (Gansle & McMahon, 1997; Noell, Gresham, & Gansle, 2002; Sterling-Turner & Watson, 2002).


If these key or essential features are highlighted, alterations can be made more systematically, allowing for personalization while at the same time retaining the essential features.

An example of the identification of critical components can be found in the research conducted on early literacy instruction. We know that explicit and systematic instruction in identifying and manipulating the sounds within a word (i.e., building phonemic awareness) is critical for good early literacy instruction. Providing these components is more important than the specific activities used during instruction. That is, the teacher has the flexibility to alter early literacy instruction as long as the critical components are retained. In this example, identification of the critical components would inform practitioners of the specific aspects of an intervention they can and cannot alter when adapting it to their practice. This upfront presentation of the critical components has two benefits. First, adapted knowledge will be more effective when critical components are retained. Second, the flexibility of the intervention should increase the acceptability and understanding of the knowledge by practitioners.

Figure 1. A systematic model for the transfer of research into practice.

Incorporate generalization programming. A major emphasis within our framework is building research that is sustainable in practice. That is, not only should knowledge be built, but the use of that knowledge must also be considered. As discussed previously, one way to promote long-term usage is to program for generalization across time (Stokes & Baer, 1977; Stokes & Osnes, 1989). Although generalization programming strategies are emphasized in the next two steps of our model, some consideration also is warranted during the step of building knowledge. In particular, exploiting current functional contingencies may be relevant. The key feature of this principle involves maximizing the amount of reinforcement and minimizing the amount of punishment experienced by the individual who is applying the innovation. Whereas it is possible to use some of these generalization programming techniques at the support stage, it is obviously much easier if the knowledge is quickly reinforcing rather than laborious and antagonizing.

Current practice should be considered. Although knowledge is typically created and then transferred into practice, a very relevant source of future knowledge comes from current practice. Research has suggested that teachers typically try two or three types of interventions before engaging in a consultative relationship (Ysseldyke, Pianta, Christenson, Wang, & Algozzine, 1983). For example, it has been suggested that potentially effective techniques are often used with improper frequency, insufficient intensity, or in improper situations (Riley-Tillman & Chafouleas, 2003). Given the complexities of a classroom, it is likely that, in many situations, effective intervention components are being implemented in an ineffective manner. Thus, one goal of researchers should be the systematic analysis of current practice to identify activities that could be improved with appropriate modification. Although this sort of analysis may uncover activities with no remediable features, others will likely be identified as effective, or as potentially effective with some alteration.

Transferring Usable Knowledge

Once a knowledge base has been built, it must be systematically transferred to a target population. The question then becomes how to determine the best means of transfer. Although the PAR model promotes the inclusion of school-based practitioners when building knowledge, we believe it unlikely that significant numbers of practitioners would be able to be involved in such an initiative. As a result, despite the utility of PAR for certain situations, it seems apparent that traditional methods of dissemination such as workshops, academic journals, and books will continue to serve as the primary methods for the transfer of research to practice. Of these three methods, only the professional development workshop has the potential to be a dynamic and interactive experience for school-based practitioners. In addition, this method provides reinforcement to many attendees who seek the continuing education credits required by many states. Thus, we suggest that the professional development workshop is an important avenue to pursue when promoting a change in behavior, and that we need to determine how to make this experience specifically linked to bridging the gap between research and practice. Having suggested the importance of the professional development workshop, we now offer suggestions for increasing its effectiveness.


Consider the mode of transfer. It is important to consider the specific goal of a workshop in relation to practice. The workshop model is designed to increase the knowledge and understanding of the practitioner, with the hypothesis that doing so will maximize the likelihood that the information presented will be used. To this end, it is critical that a professional development model be put in place that includes effective procedures for building knowledge about the content. For example, an excellent outline for designing effective professional development is provided by Garet and colleagues in the Eisenhower Program report entitled Designing Effective Professional Development (Elliott et al., 2003; Garet, Porter, Desimone, Birman, & Yoon, 2001). This study used a national sample of 1,027 teachers enrolled in continuing education workshops to analyze the factors that lead to effective professional development. The results of this analysis provided a six-factor model of effective professional development that includes: (1) a focus on content, (2) promotion of active learning, (3) coherence, (4) reform-type activity, (5) collective participation, and (6) sufficient duration. The authors noted that with appropriate resources, high-quality professional development for teachers was possible.

Incorporate generalization programming to maximize use across settings. The goal of a professional development workshop is not only to increase the knowledge of the participants but also to promote use and sustainability in practice. As a result, it is essential to consider sustainability at the time of training. As noted previously, the question of how to program for sustainability may be tied to the literature regarding how to program for generalization across time. Two of the general principles offered by Stokes and Osnes (1989), Train Diversely and Incorporate Functional Mediators, are relevant to consider at the time of training to promote generalization across time.

Train Diversely refers to training in a flexible manner to enhance the likelihood that the school psychologist will see opportunities to apply the new knowledge when reentering the natural environment. When a knowledge base is presented narrowly, it becomes less likely that practitioners will find a use for it in their daily work. Conversely, when specific and general applications of knowledge are presented along with diverse examples, the chance of finding an appropriate situation in which to use the new knowledge should increase.

The second principle to consider at the initial training stage is the Incorporation of Functional Mediators. Functional Mediators are aspects of the training session that can be taken into the natural environment, such as intervention materials and intervention scripts. During the workshop, disseminating materials such as intervention scripts (Hiralall & Martens, 1998) and teaching participants to use them establishes a link between training and the natural environment. Providing materials such as blank graphing sheets also minimizes the amount of start-up work required of practitioners. In summary, Functional Mediators serve two important roles: first, they function as a reminder in the natural environment, and second, they serve as a guide for best practice.

Use a participatory model to enhance the relevance of knowledge. Literature from PAR is also applicable to transferring usable knowledge. At the most basic level, PAR has demonstrated that by jointly developing knowledge with all stakeholders, the new knowledge can be enhanced in terms of ecological validity, acceptability, and cultural responsiveness (Nastasi et al., 2000). One of the lessons to be learned from the PAR model is that it is important to involve all stakeholders from the beginning of innovation development to ensure its relevance for all intended users. Although the benefit of this collaboration among researchers, practitioners, and all other stakeholder groups was first noted at the stage of building knowledge, the benefits of such partnership are also evident in later stages of programming for the transfer of research to practice.

Supporting Usable Knowledge

Although getting practitioners to consider an innovation, learn it, and initially use it in their practice is a difficult task, an even more significant task is sustaining use of the innovation. It is imperative that when practitioners use the knowledge in their environments, both they and their students experience the desired outcomes. That is, if we consider only student outcomes, we ignore issues pertaining to the person responsible for implementation, such as satisfaction during and after usage. If practitioners find that use of the innovation results in negative outcomes (e.g., resistance from an administrator, dissatisfaction with implementation), then regardless of its effectiveness, it is unlikely that they will continue to use the technology in the future.

Use generalization programming to maximize use across time. Again, we turn to the principles of programming for generalization across time as a way to address sustainability. One of these principles, Exploit Current Functional Contingencies, has implications for programming for long-term usage. In general, exploiting functional contingencies maximizes reinforcement for usage and minimizes punishment. Although this is useful for building knowledge, application of the principle is most critical immediately following training. Once new knowledge is built and a practitioner begins to use it, support in the natural environment should focus on three areas related to exploiting functional contingencies: (1) recruiting natural consequences, (2) modifying maladaptive consequences, and (3) reinforcing occurrences of generalization. Recruiting natural consequences occurs when reinforcement is systematically applied or highlighted after usage. Examples include performance feedback, in which the effectiveness of usage is provided directly to the user, and the recruitment of individuals to support and reinforce use of the new knowledge base (e.g., asking a principal or high-status teacher to praise a teacher for using a technique). Modifying maladaptive consequences refers to alteration of the technology aimed at minimizing any punitive result of usage; simply making a technology less time consuming is an excellent example. The third category of exploiting functional contingencies to consider at the support stage is the reinforcement of any sustained usage that occurs naturally. It is key to reinforce any sustained usage, even if the innovation is not used in the exact manner prescribed. As noted by Elliott and colleagues (2003), adaptations that occur in the natural environment may be essential to sustainability, and thus should be considered and supported. This perspective implies that any usage should be reinforced.

Promote the institutionalization of the knowledge. In the model of organizational change described previously (Fullan, Miles, & Taylor, 1980; McDougal et al., 2000), both the second stage, implementation, and the third stage, expansion, can inform the support of usable knowledge. The second stage of organizational change highlights significant support for implementation with regard to both training and helping users customize the new activity to maximize its effectiveness and usability. This focus on the need for both learning and personalizing the new knowledge is considered essential for making the knowledge relevant to the new setting. The third stage of organizational change takes this personalization one step further, making the new activity an established part of the institution in terms of both dissemination and consistent resource allocation. Although the organizational change literature has a different focus than generalization programming, it is clear that institutionalization of the new activity enhances the likelihood and utility of generalization programming strategies.

Learning From Implementation of the Framework

In summary, we have suggested that a model for the transfer of research to practice follows the steps of building, transferring, and sustaining the use of knowledge, and that specific attempts to plan for the generalization of knowledge be incorporated within each step. In this section, we highlight our attempts to incorporate these suggestions within two areas of research.

Brief experimental analysis. Our first example of systematically programming to bridge the gap comes from a series of studies that has begun to empirically examine the promotion of an assessment technology for reading fluency. This technology, brief experimental analysis (BEA), is novel to school-based practitioners, yet researchers have demonstrated positive support for it. BEA involves administering brief test conditions, one at a time, from a hierarchically ordered set of treatments (Daly, Martens, Hamler, Dool, & Eckert, 1999) to determine the treatment most likely to be successful. For reading fluency, the hierarchy of treatments has been based on previous empirical support for each intervention as well as ease of implementation (e.g., baseline, reward, repeated readings, listening passage preview, easier materials). Using BEA to assess reading fluency holds promise as an efficient and rational way to make decisions about intervention selection. BEA was selected as an example of a new technology that could be built, transferred, and sustained in applied settings for a number of reasons. First, an established empirical base has indicated BEA to be an effective method for determining a successful reading intervention (Jones & Wickstrom, 2002; Noell, Freeland, Witt, & Gansle, 2001; VanAuken, Chafouleas, Bradley, & Martens, 2002). Second, the foundation for building knowledge regarding BEA already exists, and the critical components have been clearly identified. That is, BEA procedures have been outlined in a way that maximizes the feasibility of use. Finally, given that the application of BEA to academic skills is a fairly new line of inquiry, it was assumed that BEA would be relatively unknown to the majority of practicing school psychologists.
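To make the probe-and-select logic concrete, the hierarchical decision rule described above can be sketched in code. This is a minimal illustration only, not a clinical protocol from the cited studies: the function name, the words-correct-per-minute values, and the fixed 10-word gain criterion are hypothetical choices made for the sketch.

```python
# Hypothetical sketch of BEA's hierarchical decision logic: brief test
# conditions are probed one at a time, in order, and the first condition
# that meaningfully improves oral reading fluency over baseline is selected.

def run_bea(baseline_wcpm, condition_results, gain_criterion=10):
    """Return the first (least intrusive) condition whose brief probe
    improves words correct per minute (WCPM) over baseline by at least
    `gain_criterion`; return None if no condition meets the criterion.

    condition_results: list of (condition_name, wcpm) pairs, ordered
    from least to most intrusive treatment.
    """
    for name, wcpm in condition_results:
        if wcpm - baseline_wcpm >= gain_criterion:
            return name
    return None  # no single condition met the criterion

# Hypothetical probe data for one student (WCPM per brief condition).
probes = [
    ("reward", 52),
    ("repeated readings", 68),
    ("listening passage preview", 74),
    ("easier materials", 81),
]
selected = run_bea(baseline_wcpm=50, condition_results=probes)
```

In an actual BEA, selection rests on visual analysis of brief probe data and replication rather than a fixed numeric cutoff; the sketch only shows the hierarchical, one-condition-at-a-time structure of the decision.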

As a first step toward building knowledge about BEA, we decided that an assessment of its acceptability among school psychologists would be important (Chafouleas, Riley-Tillman, & Eckert, 2003). Thus, we examined the acceptability of three different approaches to the assessment of reading (i.e., norm-referenced assessment, curriculum-based assessment, and brief experimental analysis), and assessed whether interactions existed among acceptability ratings and respondents' reports of their training in and use of the three approaches. We found that participating school psychologists rated curriculum-based assessment as more acceptable than either BEA or norm-referenced assessment. Perhaps more importantly, BEA was rated as the procedure with which school psychologists had the least experience. We further hypothesized that a lack of training and exposure was a likely reason for the lower acceptability ratings; thus, a logical next step in this line of investigation would involve training a population of practicing school psychologists in the purpose and use of BEA of reading while measuring acceptability and use.

In another study, this second step was taken by attempting to link training with reported acceptability and use (Riley-Tillman, Chafouleas, & Ducette, 2005). In this study, practicing school psychologists were surveyed about their knowledge and use of BEA prior to and immediately following attendance at a half-day professional development workshop. Within the workshop, we structured the presentation to include facets of effective professional development (e.g., didactic instruction plus practice and feedback) as well as principles of generalization programming (e.g., Training Diversely and Incorporating Self-Mediated Functional Mediators). For example, the concept of Training Diversely was incorporated into the workshop by: (1) emphasizing the effective and efficient aspects of BEA and how it might be incorporated into practitioners' current work, (2) using multiple case examples, and (3) having participants generate their own "real world" example and "take home" assignment. A manual was given to each workshop participant, increasing the likelihood that it would be used in the natural environment (i.e., a Self-Mediated Functional Mediator). Follow-up assessment of participant-reported acceptability and use occurred at approximately 3 and 6 months following the workshop. Results indicated significantly higher levels of acceptability following the 3-hour workshop, and high levels of acceptability remained at the 3-month follow-up. Interestingly, acceptability dropped at the 6-month follow-up. Ratings of reported usage followed a similar pattern, with significantly higher levels of reported usage at 3 months than at 6 months. These findings suggest that increasing practitioners' knowledge and acceptability results in short-term usage, but not long-term sustainability. This study therefore provides empirical support for the need to build and transfer knowledge as well as to program for sustainability: although the inclusion of generalization programming strategies during the workshop training may promote short-term reported usage, those strategies were not sufficient to support use in the natural environment over time.

A next step in this line of investigation involves further exploration of usage, moving beyond an all-or-none approach to include what adaptations might look like. Although the idea of expecting and embracing adaptations has been brought forth in the literature (Dane & Schneider, 1998; Elliott et al., 2003; Schoenwald & Hoagwood, 2001), limited empirical investigation of adaptation has occurred. Given the suggestion that adaptation is acceptable as long as the critical components of a technology are fully understood, it remains essential that actual adaptations in practice be examined (see Elliott et al., 2003). Further analysis of these adaptations, or intervention drift (Power, Blom-Hoffman, Clarke, Riley-Tillman, Kelleher, & Manz, 2005), should be conducted in terms of both effectiveness and acceptability, because such analysis is essential for monitoring the impact on treatment effectiveness and subsequent implementation efforts. In addition, researchers planning for implementation efforts to be sustained over the long term can benefit from information from practitioners regarding the adaptations they found useful.

Daily Behavior Report Cards

Our second example provides an illustration of the systematic analysis of a current practice. The area of investigation involved daily behavior report cards (DBRCs). A DBRC is a measure used to rate a specified behavior at least daily, with that information then shared with someone other than the rater (Chafouleas, Riley-Tillman, & McDougal, 2002). DBRCs have also been referred to as home school notes, good behavior notes, and behavior report cards. In a review by Chafouleas and colleagues (2002), it was suggested that DBRCs may be feasible, acceptable, effective in promoting positive student behavior, and a way to increase parent/teacher communication. However, despite these positive aspects and assumed popular use among practicing educators, a methodologically sound literature base regarding their strengths and weaknesses could not be found. Thus, rather than investigate the importation of a new technology into practice, we sought to build an empirical base around a technology currently used in education. Doing so would allow for an examination of the strengths and weaknesses of a current practice, which would then lead to determination of ways to improve its use. We began building the knowledge base regarding DBRCs through two concurrent venues, described next.

First, we launched a series of investigations examining the technical adequacy of the DBRC as a measure of behavior. Although support for the DBRC as an intervention tool is certainly evident, support for its use in monitoring behavior is limited. Thus, a first line of investigation includes examination of the technical characteristics (e.g., accuracy, reliability) of the DBRC as a measure of behavior. Studies comparing information obtained from DBRCs and direct observation across different raters have been initiated (see Chafouleas, McDougal, Riley-Tillman, Panahon, & Hilt, in press; Chafouleas, Riley-Tillman, Sassu, LaFrance, & Patwa, 2005). In addition, examination of the degree of training required to use a DBRC accurately has begun (see Chafouleas, McDougal, et al., in press). To date, results of this line of investigation have supported some positive characteristics of the DBRC as a potentially feasible supplement or complement to direct observation when measuring behaviors typically found in the school setting (e.g., on-task/off-task). Although there is certainly a need for further work in the area, the preliminary findings support continuing investigation.

Second, we initiated a series of studies examining educators' current perspectives about and use of DBRCs to provide some confirmation of assumptions about the popularity of these measures. In an initial study, information such as reported use, frequency of use, and reasons for use was collected from a national sample of educators (Chafouleas, Riley-Tillman, & Sassu, 2005). In addition, we assessed the acceptability of the DBRC as an intervention tool and as a way to measure behavior. Results indicated that over 60% of respondents reported using a tool like the DBRC to some degree, and participants rated the DBRC as highly acceptable. These results provided confirmatory support for continuing investigations into the capabilities of the DBRC as an assessment and intervention tool.

Related to this line of research, two subsequent steps have been considered. The first is a series of studies examining how to alter DBRCs to maximize their effective use, similar in nature to the previous discussion of BEA usage. Specifically, what are the critical components of DBRCs that must be in place for effective usage, and what factors can be altered or removed during implementation without a negative impact on effectiveness? This analysis of critical components follows the three-step usage model presented in this article in that analyzing a currently used activity provides a first step toward creation of knowledge about DBRCs. Considering that knowledge about DBRCs is already available to practitioners, researchers may now focus on identifying the specific features of the innovation that make it both usable and effective. A second area for future research involves how to circulate that information to practitioners: the analysis of a currently used innovation (e.g., the DBRC) becomes relevant only if the information gained is disseminated back to those who have the potential to use it.

Conclusion

The purpose of this article was to discuss the history behind efforts to transfer school psychology research into practice and to present a model for systematically programming that transfer. We proposed a three-step framework that emphasizes multiple conceptual bases for transferring research into practice, informed by four specific literature bases: treatment acceptability, participatory action research, generalization programming, and organizational change. It is our intention that the proposed framework provide a starting point that can be informed by researchers and practitioners in the field of school psychology. Furthermore, additional applications of this framework by researchers may further refine our attempts to develop a model that will bridge the gap between research and practice.

Two important limitations of this article should be noted. First, we selected a limited sample of literature to serve as the foundation for our three-stage framework; as a result, other relevant literature bases should be considered. It is our explicit hope that the three-stage framework will be expanded as others apply literature bases in which they identify relevance. Second, we acknowledge the need to address additional considerations that exist outside of the microsystem we discussed. For example, Fullan and Stiegelbauer (1991) suggested the importance of considering the features of the knowledge, the local context, and the larger influences that occur at the district, state, and federal levels. We would suggest that a more comprehensive framework, one that takes these external factors into consideration, is necessary. However, we consider our initial framework an important step toward helping researchers move closer to building, transferring, and sustaining the use of new knowledge in school psychology practice.

Bridging the Gap Between Research and Practice 471

References

Allinder, R.M., & Oats, R.G. (1997). Effects of acceptability on teachers' implementation of curriculum-based measurement and student achievement in mathematics computation. Remedial and Special Education, 18, 113–120.
Chafouleas, S.M., McDougal, J.L., Riley-Tillman, T.C., Panahon, C.J., & Hilt, A.M. (in press). What do Daily Behavior Report Cards (DBRCs) measure? An initial comparison of DBRCs with direct observations for off-task behavior. Psychology in the Schools.
Chafouleas, S.M., Riley-Tillman, T.C., & Eckert, T.L. (2003). A comparison of school psychologists' acceptability of norm-referenced, curriculum-based, and brief experimental analysis methods to assess reading. School Psychology Review, 32, 272–281.
Chafouleas, S.M., Riley-Tillman, T.C., & McDougal, J.L. (2002). Good, bad, or in-between: How does the daily behavior report card rate? Psychology in the Schools, 39, 157–169.
Chafouleas, S.M., Riley-Tillman, T.C., & Sassu, K.A. (2005). An investigation of the reported acceptability and usage of Daily Behavior Report Cards (DBRCs) by a national sample of teachers. Manuscript submitted for publication.
Chafouleas, S.M., Riley-Tillman, T.C., Sassu, K.A., LaFrance, M.J., & Patwa, S.S. (2005). Daily Behavior Report Cards (DBRCs): An investigation of consistency of data across raters. Manuscript submitted for publication.
Clonan, S.M., Chafouleas, S.M., McDougal, J., & Riley-Tillman, T.C. (2004). Positive psychology goes to school: Are we there yet? Psychology in the Schools, 41, 101–110.
Daly, E.J., Martens, B.K., Hamler, K.R., Dool, E.J., & Eckert, T.L. (1999). A brief experimental analysis for identifying instructional components needed to improve oral reading fluency. Journal of Applied Behavior Analysis, 32, 83–94.
Dane, A.V., & Schneider, B.H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23–45.
Deno, S.L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219–232.
Detrich, R. (1999). Increasing treatment fidelity by matching interventions to contextual variables within the educational setting. School Psychology Review, 28, 608–620.
Dowrick, P.W., Power, T.J., Manz, P.H., Ginsburg-Block, M., Leff, S.S., & Kim-Rupnow, S. (2001). Community responsiveness: Examples from under-resourced urban schools. Journal of Prevention and Intervention in the Community, 21, 71–90.
Eckert, T.L., & Hintze, J.M. (2000). Behavioral conceptions and applications of acceptability: Issues related to service delivery and research methodology. School Psychology Quarterly, 15, 123–148.
Elliott, S.N. (1988). Acceptability of behavioral treatments in educational settings. In J.C. Witt, S.N. Elliott, & F.M. Gresham (Eds.), Handbook of behavior therapy in education (pp. 121–150). New York: Plenum.
Elliott, S.N., Kratochwill, T.R., & Roach, T. (2003). Commentary: Implementing social-emotional and academic innovations: Reflections, reactions, and research. School Psychology Review, 32, 320–326.
Fullan, M., Miles, M.B., & Taylor, G. (1980). Organizational development in schools: The state of the art. Review of Educational Research, 50, 121–183.
Fullan, M.G., & Stiegelbauer, S. (1991). The new meaning of educational change. New York: Teachers College Press.
Gansle, K.A., & McMahon, C.M. (1997). Component integrity of teacher intervention management behavior using a student self-monitoring treatment: An experimental analysis. Journal of Behavioral Education, 7, 405–419.
Garet, M.S., Porter, A.C., Desimone, L., Birman, B.F., & Yoon, K.S. (2001). What makes professional development effective? Results from a national sample of teachers. American Educational Research Journal, 38, 915–946.
Gresham, F.M., & Lopez, M.F. (1996). Social validation: A unifying concept for school-based consultation research and practice. School Psychology Quarterly, 11, 204–227.
Hiralall, A.S., & Martens, B.K. (1998). Teaching classroom management skills to preschool staff: The effects of scripted instructional sequences on teacher and student behavior. School Psychology Quarterly, 13, 94–115.
Hughes, J.N. (2003). Commentary: Participatory action research leads to sustainable school and community improvement. School Psychology Review, 32, 38–43.
Jones, K.M., & Wickstrom, K.F. (2002). Done in sixty seconds: Further analysis of the brief assessment model for academic problems. School Psychology Review, 31, 554–568.
Kazdin, A.E. (1980). Acceptability of alternative treatments for deviant child behavior. Journal of Applied Behavior Analysis, 13, 259–273.
McDougal, J.L., Clonan, S.M., & Martens, B.K. (2000). Using organizational change procedures to promote the acceptability of prereferral intervention services: The school-based intervention team project. School Psychology Quarterly, 15, 149–171.
Meyer, L.H., Park, H., Grenot-Scheyer, M., Schwartz, I., & Harry, B. (1998). Participatory research: New approaches to the research to practice dilemma. The Journal of the Association for Persons with Severe Handicaps, 23, 165–177.
Meyers, J., & Nastasi, B.K. (1999). Primary prevention in school settings. In T. Gutkin & C. Reynolds (Eds.), The handbook of school psychology (3rd ed., pp. 764–799). New York: Wiley.
Nastasi, B.K., Varjas, K., Schensul, S.L., Silva, K.T., Schensul, J.J., & Ratnayake, P. (2000). The participatory intervention model: A framework for conceptualizing and promoting intervention acceptability. School Psychology Quarterly, 15, 207–232.
Noell, G.H., Freeland, J.T., Witt, J.C., & Gansle, K.A. (2001). Using brief assessments to identify effective interventions for individual students. Journal of School Psychology, 39, 335–355.
Noell, G.H., Gresham, F., & Gansle, K.A. (2002). Does treatment integrity matter? A preliminary investigation of instructional implementation and mathematics performance. Journal of Behavioral Education, 11, 51–67.
Odom, S.L., McConnell, S.R., & Chandler, L.K. (1993). Acceptability and feasibility of classroom-based social interaction interventions for young children with disabilities. Exceptional Children, 60, 226–236.
Park, H., Meyer, L., & Goetz, L. (1998). Introduction to the special series on participatory action research. The Journal of the Association for Persons with Severe Handicaps, 23, 163–164.
Peterson, C.A., & McConnell, S.R. (1996). Factors related to intervention integrity and child outcome in social skills interventions. Journal of Early Intervention, 20, 146–164.
Power, T.J., Blom-Hoffman, J., Clarke, A.T., Riley-Tillman, T.C., Kelleher, C., & Manz, P.H. (2005). Reconceptualizing intervention integrity: A partnership-based framework for linking research with practice. Psychology in the Schools, 42, 495–507.
Reimers, T.M., Wacker, D.P., & Koeppel, G. (1987). Acceptability of behavioral treatments: A review of the literature. School Psychology Review, 16, 212–227.
Riley-Tillman, T.C., & Chafouleas, S.M. (2003). Using interventions that exist in the natural environment to increase treatment integrity and social influence in consultation. Journal of Educational and Psychological Consultation, 14, 139–156.
Riley-Tillman, T.C., Chafouleas, S.M., & Ducette, J. (2005). School psychologists' acceptability and reported use of brief experimental analysis in reading: A six-month investigation of the effects of training. Manuscript submitted for publication.
Rosenfield, S. (2000). Crafting usable knowledge. American Psychologist, 55, 1347–1355.
Salisbury, C.L., Wilson, L.L., & Palombaro, M.M. (1998). Promoting inclusive schooling practices through practitioner directed inquiry. The Journal of the Association for Persons with Severe Handicaps, 23, 223–237.
Schoenwald, S.K., & Hoagwood, K. (2001). Effectiveness, transportability, and dissemination of interventions: What matters when? Psychiatric Services, 52, 1190–1197.
Sheridan, S.M., Meegan, S.P., & Eagle, J.W. (2002). Assessing the social context in initial conjoint behavioral consultation interviews: An exploratory analysis investigating processes and outcomes. School Psychology Quarterly, 17, 299–324.
Sterling-Turner, H.E., & Watson, T.S. (2002). An analog investigation of the relationship between treatment acceptability and treatment integrity. Journal of Behavioral Education, 11, 39–50.
Stoiber, K.C., & Kratochwill, T.R. (2000). Empirically supported interventions and school psychology: Rationale and methodological issues—Part 1. School Psychology Quarterly, 15, 75–105.
Stokes, T.F., & Baer, D.M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10, 349–367.
Stokes, T.F., & Osnes, P.G. (1989). An operant pursuit of generalization. Behavior Therapy, 20, 337–355.
Turnbull, A.P., Friesen, B.J., & Ramirez, C. (1998). Participatory action research as a model for conducting family research. The Journal of the Association for Persons with Severe Handicaps, 23, 178–188.
VanAuken, T., Chafouleas, S.M., Bradley, T.A., & Martens, B.K. (2002). Using brief experimental analysis to select oral reading interventions: An investigation of treatment utility. Journal of Behavioral Education, 11, 165–181.
Witt, J.C., & Elliott, S.N. (1985). Acceptability of classroom intervention strategies. In T.R. Kratochwill (Ed.), Advances in school psychology (pp. 251–288). Hillsdale, NJ: Erlbaum.
Ysseldyke, J.E., Pianta, B., Christenson, S., Wang, J., & Algozzine, B. (1983). An analysis of prereferral interventions. Psychology in the Schools, 20, 184–190.
