
A Meta-analysis of Prereferral Intervention Teams: Student and Systemic Outcomes

Matthew K. Burns and Todd Symington
Michigan Center for Assessment and Educational Data,

Central Michigan University, USA

Although prereferral intervention teams (PIT) are common in public schools, there is little and conflicting research to support them. The current article conducted an empirical meta-analysis of research on PITs by reviewing 72 articles. Nine of the articles matched the inclusion criteria for the study and 57 effect size (ES) coefficients were computed, which resulted in a mean ES of 1.10. The studies were further broken down by category of dependent variable (DV), and resulted in a mean ES of 1.15 for student outcomes and 0.90 for systemic outcomes. PITs that were implemented by university faculty resulted in a mean ES of 1.32, but field-based PITs resulted in a mean ES of only .54. Studies that used random assignment resulted in higher ES coefficients than those that used nonrandom assignment. Implications for research and cautious suggestions for practice are discussed. © 2002 Society for the Study of School Psychology. Published by Elsevier Science Ltd

Keywords: Meta-analysis, Prereferral Intervention, Outcomes.

During the 1980s, there was a movement that involved supporting general education as an alternative to special education (Rosenfield & Gravois, 1996). As a result, several prereferral intervention team (PIT) models were developed to better serve children without disabilities but who were difficult to teach. Various models were presented including Mainstream Assistance Teams (Fuchs, Fuchs, & Bahr, 1990a,b), Instructional Consultation Teams (Rosenfield & Gravois, 1996), Prereferral Intervention Teams (Graden, Casey, & Bonstrom, 1985) and Instructional Support Teams (Kovaleski, Tucker, & Duffy, 1995). Additional names suggested for teams from a PIT model include Teacher Assistance Teams, Teacher Support Teams, Student Assistance Teams (House & McInerney, 1996), Intervention Assistance Teams (Graden, 1989), and Child Study Teams (Moore, Fifield, Spira, & Scarlato, 1989). The current article includes in its definition of a PIT any multidisciplinary problem solving team (Fuchs & Fuchs, 1989; Graden et al.,

PII S0022-4405(02)00106-1

437

Received 13 February 2002; received in revised form 2 April 2002; accepted 15 May 2002.

Address correspondence and reprint requests to Matthew K. Burns, Michigan Center for Assessment and Educational Data, Central Michigan University, 208 Rowe Hall, Mount Pleasant, MI 48859, USA. Phone: (517) 774-3205; E-mail: [email protected]

Journal of School Psychology, Vol. 40, No. 5, pp. 437–447, 2002
Copyright © 2002 Society for the Study of School Psychology

Printed in the USA
0022-4405/02 $ – see front matter

1985; Meyers & Nastasi, 1998) that develops interventions to meet the needs of students in general education who are difficult to teach (Graden et al., 1985; Meyers, Valentino, Meyers, Boretti, & Brent, 1996). Although some minor differences exist between these different models, they all generally follow the steps described by Graden et al. (1985), which include (1) request for consultation, (2) consultation, (3) observation, (4) conference, and, if needed, (5) formal referral for special education eligibility.

Promising findings suggest that these PIT models are effective in reducing both the number of unnecessary special education referrals and placements, and consequently, unnecessary stigmatization due to labeling and separation of the child from general education (McNamara, 1998; McNamara & Hollinger, 1997). Furthermore, Nelson, Smith, Taylor, Dodd, and Reavis (1991) stated that prereferral intervention approaches appear to be having a positive effect on special education service delivery practices, the performance of students, the abilities and attitudes of teachers, and classification rates.

Despite the apparent support of the PIT models, there have been some concerns presented in the literature. Chalfant and Pysh (1989) cited frequent barriers to PIT implementation such as insufficient time, the lack of a useful intervention strategy, interference with the special education referral process, lack of readiness to properly initiate teams, and insufficient impact on student performance. Furthermore, Ross (1995) listed several obstacles to successful prereferral intervention implementation, including loss of funding from reduced student enrollment in special education classes, the cost of intervention assistance programs, loss of jobs, increased job responsibilities without increased compensation, a resistance to change, and poorly conceived plans.

There is to date only a relatively small body of research regarding the effectiveness of PIT models. Furthermore, the existing research suffers from serious methodological concerns such as small sample sizes and a lack of control groups (Nelson et al., 1991; Short & Talley, 1996). This empirical void prevents us from identifying the extent to which prereferral intervention programs can produce gains in academic skills, class management skills, consultation skills, and professional collaboration (Ross, 1995). In addition to having a relatively small set of data about PIT effectiveness, Safran and Safran (1996) suggested that the data we do have are not clear. For instance, they report promising findings regarding the reduction of referral rates to special education associated with university model programs or training (Fuchs et al., 1990b; Graden et al., 1985); however, initial reports from the field show only minimal effects on the number of identified students (Flugum & Reschly, 1994).

Despite these inconsistent data and a relatively small number of studies examining the effectiveness of the PIT model, use of prereferral programs has become widespread in public education. Kavale and Forness


(2000) suggested that empirical meta-analytic research is needed when making special education policy decisions, and although PITs generally fall under the umbrella of general education, they have a direct impact on the delivery of special education. Nelson et al. (1991) conducted a review of the literature and suggested that PITs were effective in reducing referrals to and initial placements into special education, and improving student outcomes. Furthermore, Nelson et al. pointed out several methodological flaws in PIT research, but their review was not empirical. The current study attempts to further examine the effectiveness of PIT models by conducting a pilot empirical meta-analysis of available research. Although Kavale and Forness suggested meta-analytic research to determine policy, the small number of studies limits the confidence in conclusions that can be made from PIT meta-analytic research. Therefore, the primary goal of the current paper is not to empirically investigate the effectiveness of the PIT approach, but to identify areas in need of future research.

METHOD

Data Collection

The PsycINFO, ERIC, and Education Abstracts databases were searched for articles on January 1, 2002 using the following terms: prereferral intervention team, mainstream assistance team, intervention assistance team, teacher assistance team, child study team, and instructional support team. After eliminating articles that did not directly address a PIT model, 72 articles and technical reports remained.

The pool was next narrowed by comparing each study to the following criteria.

(1) The study included some outcome measure for prereferral intervention teams as the dependent variable (DV).

(2) The study included at least one between-group comparison and/or at least one within-group comparison of the outcomes. Between-group analyses involved schools that had implemented a PIT model compared to those that had not, and within-group analyses examined pre- and post-implementation of a PIT.

(3) The study presented quantitative data that could be used to compute effect sizes. Means and standard deviations for both experimental and control groups, or pre- and post-implementation, were necessary. However, the study was also included if enough data were provided to compute the necessary means and standard deviations or if statistical analyses provided enough data to compute an effect size.

(4) The study was written in English.


Of the 72 articles, 35 were eliminated because they were not empirical articles, but were instead descriptions of implementation or various review articles. An additional 12 articles presented empirical data, but they described current practice rather than measures of outcomes. Variables examined in these studies included demographic data of children referred to a PIT, percentage of interventions implemented, problems that were referred, team membership, etc. Fifteen more articles were eliminated because the data used to measure and compare outcomes did not include means and standard deviations or tests of significance, leaving 10 articles that met all criteria. Finally, to assure that each study examined independent implementations of a PIT model, each article's description of participants was reviewed for redundancy with other works. One study (Fuchs et al., 1990a) was eliminated because the group of PITs examined included teams that were investigated in other articles, leaving nine articles available for review.

Categorization of Studies

Articles were divided into two general groups based on outcome measure. Nelson et al. (1991) suggested that the advantages of PITs include various student and systemic outcomes. Therefore, these categories were used to assign the studies to groups. Outcome measures that were placed into the student group included observations of time on task, student task completion, scores on behavior rating scales, and observations of target behavior. Systemic variables included referrals to special education, new placements in special education, percentage of referrals that were diagnosed with a disability, number of students retained in a grade, and an increase in consultative or counseling activity by school psychologists. Articles assigned to the student outcome group were further divided into dependent measures that were observed, such as time on task (Kovaleski, Gickling, Morrow, & Swank, 1999), and those that were rated by others, such as rating scales completed by the classroom teacher (e.g. Fuchs et al., 1990b).

Troia (1999) criticized educational research that did not utilize random assignment of participants, and described nonrandom assignment as a most serious methodological flaw. Therefore, data were grouped based on whether or not the study used random assignment to examine any potential effect in PIT research. Finally, each article was reviewed to determine if the PIT was university-based (trained and implemented for purposes of empirical investigation) or field-based, which included studies of previously existing PITs that were not established by university personnel.

Interrater consistency was assessed by having a second person code the studies and computing the percent agreement between the two coders. The number of studies that were placed into the same category by both coders was divided by the total number of studies, which resulted in 100% agreement, or a ratio of 1.0.
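The percent agreement computation described above can be sketched in a few lines; the category labels below are hypothetical illustrations, not the authors' actual coding sheet:

```python
# Interrater consistency as percent agreement: the number of studies both
# coders placed in the same category, divided by the total number of studies.
# The labels are illustrative only.
coder_a = ["student", "student", "systemic", "student", "systemic"]
coder_b = ["student", "student", "systemic", "student", "systemic"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a)
print(percent_agreement)  # 1.0, i.e., the 100% agreement reported in the text
```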


Further analysis by various demographic variables was attempted, but was not possible due to the homogeneous nature of the studies. For example, the grades in the various studies ranged from first to seventh, with no secondary school being included in any study. Although some of the participants in the studies were identified as children at risk for failure or difficult to teach, all participated in a general education program setting, and children diagnosed with a disability were excluded. Furthermore, only four of the studies gave information about the settings of the participants, such as inner city, metropolitan, or rural, and most referred to the referring difficulty as a target behavior without discussing whether that target behavior was an academic or behavioral outcome.

Effect Size Calculation

The statistic of critical interest to a meta-analysis is the effect size (ES; Cooper, Valentine, & Charlton, 2000). Cohen's (1988) d was computed for most studies by subtracting the mean of the control group from the mean of the experimental group, or the mean of the pre-test from the mean of the post-test. Next, this difference was divided by the pooled standard deviation of the two groups ((SD1 + SD2)/2) in order to express the difference between measurements in a common standard deviation (Cooper et al., 2000). Data from studies that used categorical outcomes (e.g. number of students referred to and/or placed into special education) were converted to binomial effect sizes using procedures outlined by Rosenthal (1991). Redundancy within studies was addressed by pooling effect sizes among data that were highly correlated within the same article. For example, if the study used teacher ratings as the dependent measure, but then recorded ratings on several highly correlated areas (e.g. subscales of the same behavior rating scale), then the individual effect sizes were not independent and a mean ES was computed.
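The ES computation described above can be sketched as follows. The means and standard deviations are illustrative values, not data from the reviewed studies, and the denominator is the averaged standard deviation ((SD1 + SD2)/2) as the article specifies:

```python
def cohens_d(mean_experimental, mean_control, sd_experimental, sd_control):
    """Cohen's d: the mean difference expressed in a common standard deviation,
    using the average of the two standard deviations as the denominator."""
    pooled_sd = (sd_experimental + sd_control) / 2
    return (mean_experimental - mean_control) / pooled_sd

# Hypothetical group means on some outcome measure.
d = cohens_d(mean_experimental=55.0, mean_control=50.0,
             sd_experimental=10.0, sd_control=10.0)
print(d)  # 0.5
```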

Outlier data were identified through the approach described by Tukey (1977) and recommended by Glass, McGaw, and Smith (1981). Coefficients that fell more than 1.5 times the interquartile range above the third quartile or more than 1.5 times the interquartile range below the first quartile were identified as outliers and were excluded from the data set. Seven coefficients met those criteria and were excluded. All seven were from studies that used systemic outcomes as the DV.
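Tukey's fences, as applied above, can be sketched as below. The quartile estimates use a simple median-of-halves convention (one of several quartile definitions), and the coefficients are illustrative, not the actual ES data:

```python
def tukey_outliers(values):
    """Return values falling outside Tukey's (1977) 1.5 * IQR fences."""
    ordered = sorted(values)
    n = len(ordered)

    def median(xs):
        m = len(xs) // 2
        return xs[m] if len(xs) % 2 else (xs[m - 1] + xs[m]) / 2

    q1 = median(ordered[: n // 2])          # first quartile (lower half)
    q3 = median(ordered[(n + 1) // 2 :])    # third quartile (upper half)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < low or v > high]

print(tukey_outliers([0.9, 1.0, 1.1, 1.2, 1.3, 7.8]))  # [7.8]
```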

RESULTS

Table 1 lists the mean effect sizes for each variable. Seven of the nine mean ES coefficients were at or above .90, with the highest being 1.43. Randomized studies had a mean coefficient that was more than twice as large as


that of nonrandomized studies. Furthermore, the mean ES for studies that used university-based teams was more than twice as large as the mean ES for those using field-based teams.

DISCUSSION

Meta-analytic researchers (Kavale & Forness, 2000; Kirk, 1996) suggest interpreting ES data by comparing them to Cohen's (1988) qualitative interpretations of effect sizes, which classify a coefficient of .20 as small, .50 as medium, and .80 as large. Based on these criteria, all but two of the computed coefficients fell within Cohen's large category and suggested that the PIT approach has a strong effect on desired outcomes. Although the current data suggest effectiveness, and data exist to support the cost-effectiveness of PIT models (Hartman & Fay, 1996; Noell & Gresham, 1993), recommendations for practice cannot be made from this meta-analysis given the disappointingly small number of studies that could be used to compute effect sizes.
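As a rough illustration, Cohen's benchmarks amount to a simple threshold lookup; the "negligible" label for values below .20 is our own shorthand, not Cohen's:

```python
def classify_effect_size(d):
    """Label an effect size using Cohen's (1988) .20/.50/.80 conventions."""
    d = abs(d)
    if d >= 0.80:
        return "large"
    if d >= 0.50:
        return "medium"
    if d >= 0.20:
        return "small"
    return "negligible"

print(classify_effect_size(1.10))  # large -- the overall mean ES reported here
```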

One potentially important finding specifically relevant to future research is the difference between the mean ES coefficients for randomized and nonrandomized studies. Randomized studies resulted in a mean ES coefficient that was well within Cohen's (1988) large effect range, but the .64 coefficient for nonrandomized studies suggests only a medium effect. This difference between the two coefficients could support the need for randomization in PIT research. Troia (1999) criticized research on phonemic awareness training because of a lack of randomization, among other points, to control for individual differences. Although the topic is quite different in that Troia examined reading studies in which the DV was a student outcome, his assumptions seem relevant to the current data as well.

Table 1
Mean Effect Sizes for Categories and Total

Variable                  N    Mean ES    SD
Student outcomes          45    1.15     .65
  Teacher ratings         14    1.36     .61
  Observed behavior       31    1.05     .66
Systemic outcomes         12     .90     .22
Design
  Random assignment       33    1.43     .49
  Nonrandom assignment    24     .64     .39
University-based          41    1.32     .51
Field-based               16     .54     .41
Total                     57    1.10     .60


Perhaps the most significant finding is the difference between ES coefficients for university-based (1.32) and field-based (.54) PITs. Safran and Safran (1996) suggested that positive results for PITs are found much more frequently for university-based teams, and only inconsistent results are reported from the field. The current results seem to support this conclusion. The lack of consistent positive findings from the field could be linked to a lack of consistent implementation of PITs. A survey of state departments of education (Carter & Sugai, 1989) suggested that although 34 states mandated a PIT approach, all left design and implementation to the local districts. Furthermore, Kovaleski et al. (1999) demonstrated the superior effectiveness of PITs implemented with high fidelity over schools that did not utilize consistent implementation of the model. Kovaleski (2002) suggested that PIT models can vary on several variables including format, assignment of staff, and training. Inconsistent implementation is a significant criticism of school consultation practice and research in general (Gutkin & Curtis, 1990), and may be a significant barrier to PIT research as well.

IMPLICATIONS FOR SCHOOL PSYCHOLOGY

A survey of school psychologists conducted by Costenbader, Swartz, and Petrix (1992) found a discrepancy between desired and actual time spent conducting consultation. The most commonly cited reason for school psychologists' inability to engage in more consultative activities is the amount of time spent conducting evaluations for special education eligibility (Costenbader et al., 1992; Gutkin & Curtis, 1990; Wilczynski, Mandal, & Fusilier, 2000). Current data are consistent with previous research and suggest that PITs are effective in reducing referrals to special education. Therefore, it would seem that participation on these teams would be a valuable use of a school psychologist's time in that PITs could serve as an avenue to reduce evaluations and increase time for other services. Additionally, school psychologists could use their expertise in collaboration, consultation, data-based decision making, and program evaluation (Ysseldyke et al., 1997) to help consistently establish and refine teams to best meet the desired student and systemic goals.

These data may be used in conjunction with other studies to cautiously make suggestions about practice, but the primary focus is to identify areas in need of additional research. First and foremost, more research is needed. Only 25 studies were located that presented data relevant to PIT model effectiveness, with only nine providing data usable for the current meta-analysis. Furthermore, only 12 effect sizes could be computed for systemic variables such as reducing special education referrals and placements, even though these are major goals of the PIT approach (Ross, 1995).


Having just identified the need for research using systemic dependent variables, it now seems appropriate to call for caution in using this approach. All seven of the outlying ES coefficients were from studies that used measures of systemic variables as the DV. The value and range of these seven coefficients (−3.18, 2.73, 2.95, 4.47, 5.61, 7.80, and 29.63) suggested great variability within the variable. Unfortunately, it is impossible to determine at this time if this variability is due to inconsistency within the variable or true effects. There did not seem to be any clear advantage of using systemic or student outcomes, or between measuring the student outcomes with teacher ratings or observations of behavior.

Because fidelity of implementation seems to be a potential difficulty for research and practice, research investigating the consistency and fidelity of implementation of field-based PITs is needed. Furthermore, research is needed to identify factors that lead to consistent implementation of field-based PITs.

Additional data about specific variables are needed as well. For example, all of the studies in the current meta-analysis took place in elementary or middle schools, and student participants all participated in general education. There are few data that examine the effectiveness of the PIT model for children with disabilities or from secondary schools. Analyses based on demographic variables such as setting of the school district were not possible because these data were either absent or homogeneous. It might be beneficial if future PIT researchers provided more information about the participants and conducted studies that compare prereferral intervention teams in various settings such as urban and rural school districts. Furthermore, a comparison between academic and behavioral outcomes may be beneficial, especially because data were not provided to adequately differentiate outcomes in the current study.

LIMITATIONS

As stated above, the studies involved used a relatively homogeneous participant pool, which is advantageous in some respects in that some criticize meta-analytic research for compiling data across samples (Sharpe, 1997). However, the homogeneity does limit the study's generalizability. Perhaps of equal importance to effectiveness is the question of implementation integrity, which was not assessed in the current study. A related note is that there may be differences between the teams being studied that were not accounted for in the current data. Finally, the current meta-analysis1 only

1 The following studies were included in the meta-analysis: Bahr et al., 1993; Burns, 1999; Fuchs & Fuchs, 1989; Fuchs et al., 1990b; Graden et al., 1983, 1985; Kovaleski et al., 1999; McKay & Sullivan, 1990; and Ponti et al., 1988.


included published studies and technical reports, which, given the positive bias of publications (Riniolo, 1997; Rosenthal, 1979), could have impacted the results.

REFERENCES

Bahr, M., Fuchs, D., Fuchs, L. S., Fernstrom, P., & Stecker, P. (1993). Effectiveness of student versus teacher monitoring during prereferral intervention. Exceptionality, 4, 17–30.

Burns, M. K. (1999). Effectiveness of special education personnel in the intervention assistance team model. The Journal of Educational Research, 92, 354–356.

Carter, J., & Sugai, G. (1989). Survey on prereferral practices: responses from state departments of education. Exceptional Children, 55, 298–302.

Chalfant, J. C., & Pysh, M. V. (1989). Teacher assistance teams: Five descriptive studies on 96 teams. RASE: Remedial & Special Education, 10, 49–58.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cooper, H. M., Valentine, J. C., & Charlton, K. (2000). The methodology of meta-analysis. In R. Gersten, E. P. Schiller, & S. Vaughn (Eds.), Contemporary special education research: syntheses of the knowledge base on critical instructional issues (pp. 263–280). Mahwah, NJ: Lawrence Erlbaum Associates.

Costenbader, V., Swartz, J., & Petrix, L. (1992). Consultation in the schools: the relationship between preservice training, perception of consultative skills, and actual time spent in consultation. School Psychology Review, 21, 95–108.

Flugum, K. R., & Reschly, D. J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32, 1–14.

Fuchs, D., & Fuchs, L. S. (1989). Exploring effective and efficient prereferral interventions: a component analysis of behavioral consultation. School Psychology Review, 18, 260–283.

Fuchs, D., Fuchs, L. S., & Bahr, M. W. (1990a). Mainstream assistance teams: a scientific basis for the art of consultation. Exceptional Children, 57, 128–139.

Fuchs, D., Fuchs, L. S., & Bahr, M. W. (1990b). Prereferral intervention: a prescriptive approach. Exceptional Children, 56, 493–513.

Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.

Graden, J. L. (1989). Redefining prereferral intervention as intervention assistance: collaboration between general and special education. Exceptional Children, 56, 227–231.

Graden, J. L., Casey, A., & Bonstrom, O. (1983). Pre-referral interventions: effects on referral rates and teacher attitudes. Minneapolis, MN: Institute for Research on Learning Disabilities, University of Minnesota (ERIC Document Reproduction Service No. ED244438).

Graden, J. L., Casey, A., & Bonstrom, O. (1985). Implementing a prereferral intervention system: II. The data. Exceptional Children, 51, 487–496.

Gutkin, T. B., & Curtis, M. J. (1990). School-based consultation: theory, techniques, and research. In T. B. Gutkin, & C. R. Reynolds (Eds.), The handbook of school psychology (2nd ed., pp. 577–611). New York: Wiley.

Hartman, W. T., & Fay, T. A. (1996). Cost-effectiveness of instructional support teams in Pennsylvania. Policy Paper Number 9. (ERIC Document Reproduction Service No. ED396471).

House, J. E., & McInerney, W. F. (1996). The school assistance center: an alternative model for the delivery of school psychological services. School Psychology International, 17, 115–124.

Kavale, K. A., & Forness, S. R. (2000). Policy decisions in special education: the role of meta-analysis. In R. Gersten, E. P. Schiller, & S. Vaughn (Eds.), Contemporary special education research: syntheses of the knowledge base on critical instructional issues (pp. 281–326). Mahwah, NJ: Lawrence Erlbaum Associates.

Kirk, R. E. (1996). Practical significance: a concept whose time has come. Educational and Psychological Measurement, 56, 746–759.

Kovaleski, J. F. (2002). Best practices in operating pre-referral intervention teams. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology (4th ed., pp. 645–656). Bethesda, MD: National Association of School Psychologists.

Kovaleski, J. F., Tucker, J. A., & Duffy, D. J. (1995). School reform through instructional support: The Pennsylvania Initiative (Part I). Communique, 23(8), (insert).

Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, P. (1999). High versus low implementation of instructional support teams: a case for maintaining program fidelity. Remedial and Special Education, 20, 170–183.

McKay, B., & Sullivan, J. (1990). Effective collaboration: The Student Assistance Team model. Paper presented at the Annual Convention of the Council for Exceptional Children, Toronto, Canada (ERIC Document Reproduction Service No. ED322695).

McNamara, K. (1998). Adoption of intervention-based assessment for special education: trends in case management variables. School Psychology International, 19, 251–266.

McNamara, K. M., & Hollinger, C. L. (1997). Intervention-based assessment: rates of evaluation and eligibility for specific learning disability classification. Psychological Reports, 81, 620–622.

Meyers, B., Valentino, C. T., Meyers, J., Boretti, M., & Brent, D. (1996). Implementing prereferral intervention teams as an approach to school based consultation in an urban school setting. Journal of Educational and Psychological Consultation, 7, 119–149.

Meyers, J., & Nastasi, B. (1998). Primary prevention as a framework for the delivery of psychological services in the schools. In T. Gutkin, & C. Reynolds (Eds.), The handbook of school psychology (3rd ed., pp. 764–799). New York: Wiley.

Moore, K. J., Fifield, M. B., Spira, D. A., & Scarlato, M. (1989). Child study team decision making in special education: improving the process. RASE: Remedial & Special Education, 10, 50–58.

Nelson, J. R., Smith, D. J., Taylor, L., Dodd, J. M., & Reavis, K. (1991). Prereferral intervention: a review of the research. Education and Treatment of Children, 14, 242–253.

Noell, G. H., & Gresham, F. M. (1993). Functional outcome analysis: do the benefits of consultation and prereferral intervention justify the costs? School Psychology Quarterly, 8, 200–226.

Ponti, C. R., Zins, J., & Graden, J. L. (1988). Implementing a consultation-based service delivery system to decrease referrals for special education: a case study of organizational considerations. School Psychology Review, 17, 89–100.

Riniolo, T. C. (1997). Publication bias: a computer-assisted demonstration of excluding nonsignificant results from research interpretation. Teaching of Psychology, 24, 279–282.

Rosenfield, S. A., & Gravois, T. A. (1996). Instructional consultation teams: collaborating for change. New York: Guilford.

Rosenthal, R. (1979). The "File Drawer Problem" and tolerance for null results. Psychological Bulletin, 86, 638–641.

Rosenthal, R. (1991). Meta-analytic procedures for social research (revised). Newbury Park, CA: Sage Publications.

Ross, R. P. (1995). Best practices in implementing intervention assistance teams. In A. Thomas, & J. Grimes (Eds.), Best practices in school psychology (3rd ed., pp. 227–237). Silver Spring, MD: National Association of School Psychologists.

Safran, S. P., & Safran, J. S. (1996). Intervention assistance programs and prereferral teams: directions for the twenty-first century. Remedial and Special Education, 17, 363–369.

Sharpe, D. (1997). Of apples and oranges, file drawers, and garbage: why validity issues in meta-analysis will not go away. Clinical Psychology Review, 17, 881–901.

Short, R. J., & Talley, R. C. (1996). Effects of teacher assistance teams on special education referrals in elementary schools. Psychological Reports, 79, 1431–1438.


Troia, G. A. (1999). Phonological awareness intervention research: a critical review of the experimental methodology. Reading Research Quarterly, 34, 28–52.

Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.

Wilczynski, S. M., Mandal, R. L., & Fusilier, I. (2000). Bridges and barriers in behavioral consultation. Psychology in the Schools, 37, 495–504.

Ysseldyke, J., Dawson, P., Lehr, C., Reschly, D., Reynolds, M., & Telzrow, C. (1997). School psychology: a blueprint for training and practice (2nd ed.). Bethesda, MD: National Association of School Psychologists.
