
School Psychology Review, 2007, Volume 36, No. 4, pp. 601-617

Identifying Students at Risk, Monitoring Performance, and Determining Eligibility Within Response to Intervention: Research on Educational Need and Benefit From Academic Intervention

Mark R. Shinn, PhD

National Louis University, Skokie, Illinois

Abstract. The new Individuals With Disabilities Education Improvement Act (2004) offers local education agencies the choice of using a student's response to intervention (RTI) as a major component to determine eligibility for special education under the category of specific learning disabilities (SLD). Using an RTI model, it is not expected that different students will be identified as SLD than those identified historically. For more than 25 years, accumulated evidence has strongly suggested that most students labeled SLD have been those students with severe educational needs (i.e., have performance discrepancies compared to students in their own communities), regardless of the stated eligibility criterion (e.g., ability-achievement discrepancies). What is unique about RTI is that educational need is a necessary but not sufficient requirement for SLD identification. Students also must not be responding to high-quality general education instruction (i.e., receiving educational benefit). This article first explores the scientific evidence for operationalizing the educational need component using curriculum-based measurement. Second, the use of curriculum-based measurement to assess RTI is presented not only within a special education eligibility process, but also for universal screening and progress monitoring for all students within a three-tier problem-solving model. The article concludes with important research questions within RTI.

Recent changes in federal special education law resulted in a dramatic reconceptualization of the process that educators could use to identify a student as eligible for special education under the category of specific learning disabilities (SLD). As discussed in more detail in previous articles in this special issue, local education agencies have been given the option of using a process to determine whether a student responded to high-quality, scientifically based intervention. What is often surprising to many practitioners is that the response to intervention (RTI) process was offered as a potential remedy to an eligibility process based primarily on computing the discrepancy between ability and achievement, and to a much lesser extent, on identifying a processing deficit.

Financial disclosure: Mark R. Shinn serves as a consultant to The Psychological Corporation in his position as chief scientist for AIMSweb. AIMSweb produces curriculum-based measurement testing materials and organizes and reports results in a three-tier model, including response to intervention.

Correspondence regarding this article should be addressed to Mark R. Shinn, National Louis University, Skokie, IL 60077; E-mail: [email protected]

Copyright 2007 by the National Association of School Psychologists, ISSN 0279-6015

In this article I will briefly review more than 20 years of research revealing that what has been considered a "traditional approach" (i.e., assessing the ability-achievement discrepancy) is an insufficient explanation for the millions of students who have been labeled as SLD. Using a deductive approach, significant numbers of students identified as SLD fail to meet the stated eligibility standards such that these approaches must be rejected as an explanation of school practices. I also will examine the research on school-based identification of students as SLD from an inductive approach that resulted in highly consistent SLD status markers. I then will detail how these last research findings can be translated into time-efficient and accurate SLD identification practices in an RTI, dual-discrepancy process (Fuchs, 2003; Fuchs, Fuchs, & Speece, 2002), where educational need and educational benefit become the focus of entitlement decision making. Finally, I will present the RTI process within a larger problem-solving framework of a three-tier model wherein all students' educational needs and educational benefit are assessed in an ongoing manner.

Deductive Research Fails to Confirm Consistent Use of SLD Eligibility Criteria

Because the ability-achievement discrepancy has been the cornerstone of SLD identification since passage of the Education for All Handicapped Children Act in 1975, there is a tendency for practitioners to believe this approach is both well accepted and scientifically based. However, as noted by Algozzine and Ysseldyke (1987) 20 years ago:

The term learning disability was scarcely off the breath of early pioneers when a profession began questioning its own integrity. Today, the proliferation of students classified as learning disabled (LD) has caused social, political, economic, and educational concerns that, in turn, have produced serious questioning of practices (p. 307).

As a point of fact, federal interest in SLD identification and other special education issues began shortly after passage of Education for All Handicapped Children Act regulations in 1977, when five learning disabilities research institutes were funded to investigate potential problems and provide viable solutions. One of the funded institutions, the University of Minnesota Institute for Research on Learning Disabilities, was charged with the study of the assessment and decision-making processes for students with SLD, including how schools operationalized the federal definition and the congruence between SLD eligibility criteria. James Ysseldyke was the principal investigator in this line of research.

In the early 1980s a number of studies emerged from Ysseldyke and the Institute for Research on Learning Disabilities that used a deductive approach to determine if the data collected and used by schools to identify students as SLD matched the severe discrepancy and/or processing criteria. I classify this type of research as "do as I say" research. For example, Ysseldyke, Algozzine, Shinn, and McGue (1982) found few reliable differences among students identified as SLD and low achievers (LA) in their ability-achievement discrepancies or processing performance among a variety of ability, achievement, and other measures. This study was described by Gresham, MacMillan, and Bocian (1996) as "perhaps the most influential and cited study in this debate" (p. 571). The scores of both groups in the Ysseldyke, Algozzine, Shinn et al. (1982) study were similar and overlapped more than 90%.

Other studies by this research group yielded similar conclusions (e.g., Algozzine, Ysseldyke, & Shinn, 1982; Ysseldyke, Algozzine, & Epps, 1983; Ysseldyke, Algozzine, Richey, & Graden, 1982) and were summarized by Ysseldyke, Thurlow, Graden, Wesson, Algozzine, and Deno (1983):

1. Schools collected lots of data as part of the SLD eligibility process that were related to the eligibility criteria (e.g., ability and achievement scores).

2. Schools failed to use eligibility criteria in a consistent and predictable way. As often as not, schools ignored the data.


3. Based on the testing data collected as part of the eligibility process, it was difficult to detect meaningful differences between students labeled SLD and LA.

4. Schools often identify students as SLD when they do not meet the stated eligibility criteria.

Over a period of more than 25 years, similar outcomes have been reported by other researchers (e.g., Cone, Wilson, Bradley, & Reese, 1985; Gresham, 2002; Gresham et al., 1996; MacMillan & Siperstein, 2002; Shepard & Smith, 1983; Wilson, 1985). More recently, Peterson and Shinn (2002) studied the match between the stated ability-achievement discrepancies of 48 fourth-grade students labeled SLD in high- and low-achieving communities in Minnesota. Using the state-prescribed criterion of a -1.76 standard deviation discrepancy, corrected for regression, 40% of students did not meet this discrepancy standard.
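The "corrected for regression" qualifier above can be illustrated with a short sketch. This is not Minnesota's actual formula; it is a generic regression-corrected discrepancy under assumed values (mean 100, SD 15 scales, and an illustrative ability-achievement correlation of .60): achievement is first predicted from ability, so that the shortfall is judged against expected achievement rather than against the ability score itself.

```python
def regression_corrected_discrepancy(iq, achievement, r_xy=0.6,
                                     mean=100.0, sd=15.0):
    """Generic regression-corrected ability-achievement discrepancy.

    Predicts achievement from ability via the correlation r_xy
    (regression toward the mean), then expresses the observed
    shortfall in residual-SD units. The correlation and scale
    values are illustrative assumptions, not the state's own.
    """
    z_iq = (iq - mean) / sd
    expected_achievement = mean + r_xy * z_iq * sd       # regressed prediction
    residual_sd = sd * (1 - r_xy ** 2) ** 0.5            # SD of achievement given ability
    return (achievement - expected_achievement) / residual_sd

# A hypothetical student with IQ 110 and achievement 80:
d = regression_corrected_discrepancy(110, 80)
print(round(d, 2))  # -2.17, beyond a -1.76 cut
```

Note how the correction matters: a naive comparison of the two standard scores (110 vs. 80) would overstate the discrepancy for above-average ability and understate it for below-average ability.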

Inductive Research Examines What Schools Do

One of the tempting conclusions from the "do as I say" research is that school SLD identification is a haphazard, almost capricious process because there has been no evidence of systematic application of explicit SLD eligibility criteria. Despite SLD criteria, students labeled SLD frequently do not meet them. Running parallel to this type of research has been inductive research that has tried to identify discernible patterns in assessment results. I classify this type of research as "do as I do" research.

The question has been, "Is there an identifiable and systematic pattern of performance in SLD students that adequately explains why schools label them this way?" One common observation within the "do as I say" research studies was that, not surprisingly, school-identified SLD students performed significantly less well on achievement measures than typically achieving students, especially in reading. Notably, however, those achievement differences also were apparent when comparing SLD students with their low-achieving counterparts. For example, in a reanalysis of the Ysseldyke, Algozzine, Shinn et al. (1982) results, Kavale, Fuchs, and Scruggs (1994) reported an effect size difference of -0.34 overall, considered by Cohen (1988) to be a medium effect size. Differences in Peabody Individual Achievement Test scores accounted for differences in approximately 80% of the cases of SLD and low-achieving students.

More obvious differences among SLD and LA students in reading achievement were noted in the Ysseldyke, Algozzine, Shinn et al. (1982) sample in a follow-up study that formed my dissertation (Shinn, 1981) and was published several years later (Shinn, Ysseldyke, Deno, & Tindal, 1986). When 71 of the original 99 students were compared on curriculum-based measurement (CBM) tests of general reading ability, the differences in the number of words read correctly in 1 min by SLD and LA students were compelling. When an effect size was computed, SLD students read -1.3 standard deviations lower.
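The effect sizes cited throughout this article are standardized mean differences. As a minimal sketch, with entirely hypothetical group means and SDs (not the study's raw data), Cohen's d with a pooled standard deviation would be computed like this:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d: difference between group means divided by
    the pooled standard deviation of the two groups."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical words-read-correctly-per-minute summaries:
d = cohens_d(mean_a=35.0, sd_a=12.0, n_a=36,   # students labeled SLD
             mean_b=52.0, sd_b=14.0, n_b=35)   # low-achieving comparison
print(round(d, 2))  # -1.31
```

A negative d, as reported in the studies above, means the first group (here, students labeled SLD) scored below the comparison group.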

Severe Low Achievement as an Explanation of "What Schools Do" to Identify SLD

After analyzing the research studies including students that school personnel actually labeled as SLD, severe achievement alone became a compelling explanation for identification practices that was tested systematically. Testing a hypothesis put forth by Gerber and Semmel (1984) that the special education process began by teachers referring the lowest performing students in their classrooms for potential eligibility, Shinn, Tindal, and Spira (1987) studied the reading performance of 570 students in Grades 2-6 referred for potential learning disabilities. Students were tested using reading CBM (R-CBM; Deno, 1989; Shinn, 1989, 1998) procedures, and their scores were compared to the oral reading scores of typical students in the community. Across grades, a consistent pattern was noted. The typical student referred for potential learning disabilities performed below the 5th percentile compared to general education peers. When converted to an effect size statistic, the difference in R-CBM scores ranged from -1.0 to -1.6 with a median effect size of -1.4. This effect size translates into a large effect (Cohen, 1988).

The highly consistent teacher referral practices led to two studies in the same school district regarding who was actually placed in special education for the category of SLD. The stated policy in the district was that SLD was defined as a severe ability-achievement discrepancy. However, both studies tested the hypothesis that what schools actually did was sort students by the severity of their achievement problems. Students with achievement problems that were not as severe were predicted to be students who were placed in remedial programs. Their scores would be lower compared to typical general education students, but not as low as students placed in special education as LD. It was predicted that students actually placed in special education, regardless of the policy of severe ability-achievement discrepancies, would consistently be the lowest-performing students.

At the level of a specific school, Shinn and Marston (1985) compared the R-CBM scores of 58 SLD students in Grades 4-6 to randomly sampled students in a remedial program (i.e., Title I) and general education. The median effect size difference between typically achieving students and students labeled SLD of -2.3 was consistent (range -2.3 to -2.4) and large. The median effect size difference between LA students in Title I and students labeled SLD of -1.1 also was consistent (range -1.0 to -1.3) and large.

At the level of a school district, Shinn, Tindal, Spira, and Marston (1987) compared the R-CBM scores of 638 SLD students in Grades 1-6 to 451 randomly sampled students in a remedial program (i.e., Title I) and 2337 general education students. The typical student labeled SLD performed at the 3rd percentile across grades. Performance of students in the Title I remedial program was also highly consistent with the prediction. Although these students were low, with a median of approximately the 16th percentile, their scores were markedly higher than students labeled LD. In this study, the median effect size difference between typically achieving students and students labeled SLD of -1.85 was consistent (range -1.1 at Grade 1 to -2.4 at Grade 4) and large. The median effect size difference between LA students in Title I and students labeled SLD of -0.95 also was consistent (range -0.2 at Grade 1 to -1.3 at Grade 6) and, with the exception of Grade 1, was large.

Studies by other authors confirmed these findings using other methods, populations, and measures. For example, Gresham et al. (1996) reported effect sizes of -0.64 between SLD and LA students in a sample of 152 second- through fourth-graders with what they described as a differentiation rate of 74%. Finally, a recent meta-analysis of 79 studies comparing SLD and low-achieving students (Fuchs, Fuchs, Mathes, Lipsey, & Eaton, 2000; Fuchs, Fuchs, Mathes, Lipsey, & Roberts, 2001) lends strong support to the defining feature of SLD as being severe low achievement. The largest and most consistent difference between school-identified SLD students and low-achieving students was extreme low achievement, with an effect size of 0.61. The authors reported that "73% of the LA population fell above the average reading score of the LD population" (p. 4).

Building Effective Identification Practices for RTI

If schools want to serve students with similar characteristics as they have been serving for more than 25 years, then it would make sense to identify students with severe achievement needs as potentially eligible for special education as SLD. Students labeled SLD must be significantly different from other students on the target academic skill domain (e.g., reading). This approach is consistent with a portion of the recommendation of the National Research Council (2002) in their report on disproportional placement in special education. They recommended, in part, that "eligibility ensues when a student exhibits large differences from typical levels of performance in one or more domain(s)" (p. 6). A hypothetical illustration of this achievement discrepancy is shown in Figure 1.

A severe discrepancy criterion (i.e., cutting score) is established relative to typical levels of achievement. Students below the criterion may be considered to have severe educational need (Shinn, 2005b, in press), the first component of a dual-discrepancy approach to response to intervention (Fuchs et al., 2002; Pericola Case, Speece, & Eddy Molloy, 2003; Speece, Pericola Case, & Eddy Molloy, 2003). To translate previous SLD identification research into SLD identification practice within RTI, two questions must be answered:

1. How large a discrepancy from "typical levels of achievement" is required?

2. What constitutes "typical achievement," and to whom should a student be compared to determine the magnitude of the discrepancy?

How Large Should the Discrepancy Be?

Given the observed reading achievement discrepancies in published studies between SLD students and typically achieving students, it seems logical that achievement at or below the 5th percentile could be defended as one that would identify students consistent with school-based SLD identification practices. For example, in Shinn, Tindal, Spira et al. (1987), the overarching characteristic of students placed in SLD programs was reading scores at or below the 5th percentile, with a median of the 3rd percentile across grades. This criterion also would align well with a projected cut score extrapolated from the Fuchs et al. (2001) meta-analysis comparing SLD with LA students. If a local education agency wanted to ensure that students with potentially severe reading problems were not missed (i.e., false negatives), then the criterion for a severe discrepancy could be adjusted upwards. A cut score of the 10th percentile would be consistent with the reading discrepancies observed when teachers refer students for potential learning disabilities services (Shinn, Tindal, & Spira, 1987).
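Applying a 5th- or 10th-percentile cut in practice amounts to computing a percentile rank against a norm sample. A minimal sketch, with a hypothetical local R-CBM norm sample (the scores, sample size, and cut point are all illustrative assumptions):

```python
def percentile_rank(score, norm_scores):
    """Percentile rank of a score within a norm sample:
    percent of the sample scoring below, counting ties as half."""
    below = sum(1 for s in norm_scores if s < score)
    ties = sum(1 for s in norm_scores if s == score)
    return 100.0 * (below + 0.5 * ties) / len(norm_scores)

# Hypothetical local norm sample (words read correctly per minute):
local_norms = [18, 25, 31, 40, 44, 47, 52, 55, 58, 60,
               62, 64, 66, 69, 71, 74, 77, 80, 85, 92]
student_score = 22
pr = percentile_rank(student_score, local_norms)
severe_need = pr <= 5          # 5th-percentile criterion discussed above
print(pr, severe_need)         # 5.0 True
```

A real norm table would of course be built from a much larger sample, often by grade and season of the school year; the point here is only the decision rule.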

[Figure 1 depicts a single distribution of achievement with its absolute achievement standard (i.e., the national mean) and a severe-discrepancy cutting score below it; students falling below the cutting score are identified as LD.]

Figure 1. A graphic representation of the absolute achievement discrepancy (AAD). LD = learning disabled. (From "Severe Discrepancy Models: Which Best Explains School Identification Practices for Learning Disabilities?" by K. M. Peterson and M. R. Shinn, 2002, School Psychology Review, 31, pp. 459-476. Reprinted with permission.)


What Defines Typical Achievement?

As potentially controversial as the achievement discrepancy alone may be, it pales in comparison to the issues raised about the normative comparison group. Unfortunately, neither the National Research Council recommendation (2002) nor the extant research resolves the question of what defines typical achievement sufficiently. The intuitive answer to the question would be that students identified as SLD be significantly different from typically developing peers from a national norm. This definition of a severe achievement discrepancy is reflected in Figure 1 and has been called an absolute achievement discrepancy (AAD) approach. See Peterson and Shinn (2002) and Shinn (2005b) for more detail.

The AAD approach is consistent with the published research on school-based SLD identification. As I have noted, research has established large achievement effect size differences among SLD students, LA students, and typically achieving students. A group of SLD students has been compared to LA and/or typically achieving students on a variety of measures, including tests that have national norms. However, the groups' scores were compared to each other, not compared to differences from a national norm.

My difficulty with the AAD model is that, although it may do a better job of explaining school-based identification practices than the ability-achievement discrepancy model, it, too, seems logically insufficient. Hypothetically, if a national norm were used to define typical achievement and thus the performance discrepancy, then large numbers of students would be identified as SLD in low-performing communities; few students would be identified as SLD in high-performing communities. I have noted concerns about this issue in previous works: "If the achievement-only discrepancy model were a satisfactory explanation for school decision-making practices, one would predict that there would be a relation between the severity of achievement problems in communities and states and the number of students identified as eligible for special education" (Shinn, 2005b, p. 229). High-performing school districts would have the lowest numbers of students identified as LD. Low-performing school districts would have the highest number of students identified as LD.

No such evidence has been presented in the research literature. As school psychology practitioners are well aware, there is an equal or often even higher proportion of students labeled SLD in high-performing communities as in low-performing communities. In addition, often when students move to high-performing communities from low-performing communities, they are referred for special education as SLD within a short period of time.

A Relative Achievement Discrepancy May Work Best

Only one published study has examined the role of what norm group defines typical achievement in explaining school-based SLD identification. This study, by Peterson and Shinn (2002), compared the classification accuracy of national achievement and local achievement norms with the mandated state ability-achievement discrepancy approach. Students identified as SLD by high- and low-achieving school districts were compared. As described earlier in this article, the ability-achievement discrepancy approach did not explain practice well in either school district. A national norm approach worked reasonably well for students in the low-performing school district. Using the Woodcock-Johnson Broad Reading Cluster (Woodcock & Johnson, 1989), approximately 80% of students with SLD from the low-achieving context were significantly below a national norm. However, a nationally normed definition of typical achievement worked very poorly in the high-performing school district. Only 22% of students with SLD from the high-achieving school district had severe achievement discrepancies from the national norm. Students placed in special education SLD programs in the high-achieving school district read more like typically achieving students in the low-achieving district. The SLD students in the high-performing district performed more than a standard deviation better than SLD students in the low-achieving district, whether assessed by the Woodcock-Johnson Broad Reading Cluster or R-CBM.

In the Peterson and Shinn (2002) study, when the criterion for a severe achievement discrepancy was a severe deviation from typical levels of achievement of students from the same community, nearly all students identified as SLD (85-95%) from both communities met this standard. This type of discrepancy has been called the relative achievement discrepancy (RAD; Peterson & Shinn, 2002) and is shown in Figure 2.

A RAD approach, where a performance discrepancy is computed compared to local peers, best explains the results in Peterson and Shinn (2002). This approach also arguably fits the data in previous research when SLD, LA, and typically achieving students are compared within the same communities. Notably, it provides a defensible explanation for what schools do and what practitioners observe when students' skill levels do not match those in their community. A full discussion of the role of the norm in defining typical achievement expectations is beyond this article, but the reader is referred to Deno (1989, 1995, 2002) and Shinn (2005b) for more information.
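The AAD/RAD contrast above can be made concrete with a sketch. All numbers are hypothetical: a national reference of mean 100, SD 15, and an assumed high-achieving local district with mean 112, SD 13. The same score can miss a severe-discrepancy cut against the national norm (AAD) yet exceed it against the local norm (RAD), which is exactly the high-achieving-district pattern described above.

```python
def discrepancy_in_sd_units(score, ref_mean, ref_sd):
    """How far a score falls from a reference mean, in that
    reference group's standard deviation units."""
    return (score - ref_mean) / ref_sd

score = 88  # hypothetical student in a high-achieving district

# Absolute achievement discrepancy (AAD): national reference
aad = discrepancy_in_sd_units(score, ref_mean=100, ref_sd=15)

# Relative achievement discrepancy (RAD): local reference
# (assumed high-achieving community norms)
rad = discrepancy_in_sd_units(score, ref_mean=112, ref_sd=13)

cut = -1.5  # illustrative severe-discrepancy cut, not a prescribed value
print(round(aad, 2), aad <= cut)   # -0.8 False  (not severe nationally)
print(round(rad, 2), rad <= cut)   # -1.85 True  (severe locally)
```

The design choice at issue in the article is precisely which reference group (`ref_mean`, `ref_sd`) the decision rule should use.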

Severe Educational Need Is Not Enough for RTI

Regardless of which norms are used to operationalize a performance discrepancy (i.e., severe educational need), the discrepancy typically is a necessary but not sufficient condition for eligibility determination in an RTI approach. Most commonly, a dual-discrepancy approach to RTI (Fuchs et al., 2002; Pericola Case et al., 2003) is used, whereby (a) severe educational need is combined with (b) the lack of educational benefit (i.e., lack of response) from high-quality intervention. This dual discrepancy was expressed cogently by the National Research Council (2002), who proposed that "eligibility ensues when a student exhibits large differences from typical levels of performance in one or more domain(s) and with evidence of insufficient response to high-quality interventions in the relevant domain(s) of functioning in school settings" (p. 6).

[Figure 2 depicts two achievement distributions, each with its own relative achievement standard (i.e., local mean) and severe-discrepancy cutting score: LD students from the low-achievement context fall below the low-achievement cutting score, and LD students from the high-achievement context fall below the (higher) high-achievement cutting score.]

Figure 2. A graphic representation of the relative achievement discrepancy (RAD). LD = learning disabled. (From "Severe Discrepancy Models: Which Best Explains School Identification Practices for Learning Disabilities?" by K. M. Peterson and M. R. Shinn, 2002, School Psychology Review, 31, pp. 459-476. Reprinted with permission.)

In a dual-discrepancy approach, students may have a severe performance discrepancy (e.g., below the 5th or 10th percentile compared to local or national peers) but may not be eligible for special education if they are progressing satisfactorily when provided with high-quality general education instruction. Of course, the second component of the dual-discrepancy approach requires progress monitoring. I would argue that the quality of the progress monitoring should be no less than the quality of the intervention; in other words, it too should be scientifically based.
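The dual-discrepancy logic above reduces to a conjunction of two criteria. A minimal sketch, with illustrative cut points (the 5th-percentile level cut and the expected growth rate are assumptions for the example, not values prescribed by the article):

```python
def dual_discrepancy_eligible(percentile_rank, slope_per_week,
                              need_cut=5.0, expected_slope=1.0):
    """Dual-discrepancy sketch: a student is a candidate for
    eligibility only when BOTH conditions hold:
      (a) severe educational need  - level below the percentile cut
      (b) lack of educational benefit - growth below the expected rate
    Cut points here are illustrative assumptions."""
    severe_need = percentile_rank <= need_cut
    inadequate_response = slope_per_week < expected_slope
    return severe_need and inadequate_response

# Below the 5th percentile but gaining 1.5 words/week: responding
print(dual_discrepancy_eligible(3.0, 1.5))   # False
# Below the 5th percentile and gaining 0.3 words/week: not responding
print(dual_discrepancy_eligible(3.0, 0.3))   # True
```

The first case is the point of the paragraph above: severe need alone does not make a student eligible if the student is benefiting from high-quality general education instruction.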

CBM Fits the Needs of RTI for Monitoring Response

Because the progress-monitoring component of RTI should employ tools that are scientifically based, it seems logical that CBM (Deno, 1985, 1986, 2003) would be the primary tool in an RTI process when there are concerns about a student's basic skills. CBM was developed more than 25 years ago through federally funded research to provide special education teachers with a way to write objective Individualized Education Program (IEP) goals and continuously monitor progress for those students who ultimately received special education. Given its documented utility in monitoring progress for students with disabilities (Deno, 2003; Fuchs & Fuchs, 1986b, 1999; Fuchs & Vaughn, 2005), it seems appropriate that it be used to monitor progress as part of the RTI disability determination process.

CBM (Deno, 1985, 1986, 1989; Fuchs & Deno, 1991; Shinn, 1989, 1998) is a set of standardized and validated short-duration tests 1-4 min long in reading (e.g., oral reading, maze), mathematics computation, mathematics applications, spelling, written expression, early literacy (e.g., Dynamic Indicators of Basic Early Literacy Skills, DIBELS; Good, Wallin, Simmons, Kame'enui, & Kaminski, 2002), and early numeracy (Clarke & Shinn, 2004).

The vast majority of the CBM research has been in the area of reading, whereby it has been demonstrated repeatedly that having students read aloud from text for 1 min and counting the number of words read correctly serves as a highly accurate and efficient measure of general reading ability (Deno, Mirkin, & Chiang, 1982; Fuchs, Fuchs, & Deno, 1982; Fuchs, Fuchs, & Maxwell, 1988; Shinn, Good, Knutson, Tilly, & Collins, 1992). R-CBM and members of the CBM "family" (e.g., DIBELS) have been reviewed and approved for use in progress monitoring by Reading First (Kame'enui, 2002) and the National Center on Student Progress Monitoring, a U.S. Department of Education, Office of Special Education Programs, project (see http://www.studentprogress.org). The Center's mission is to provide technical assistance to states and districts and disseminate information about progress-monitoring practices proven to work in different academic content areas, Grades K-5.

Using R-CBM, a student's response to intervention can be evaluated reliably in as little as 4-6 weeks, should a sufficient number of data points be collected (Fuchs & Vaughn, 2005; Shinn, Good, & Stein, 1989). An illustration of an RTI progress-monitoring graph for a second-grade student, Sam, is shown in Figure 3.

The solid line represents the expected rate of progress that was specified as representing adequate progress for Sam, a student referred for concerns about his discrepancy from peers and his lack of reading progress. The dashed line represents Sam's actual rate of progress when he was provided a modified reading program after the initial referral meeting. His actual rate of progress exceeded the expected rate of progress, and it was decided that the intervention was effective. He responded to the intervention.
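This kind of judgment can be sketched in code. The following is a minimal illustration only, not the procedure of any published tool: the probe scores, the 2-words-per-week aimline, and the function names are invented, with the actual rate of improvement estimated as a least-squares trend through the weekly scores.

```python
def ols_slope(weeks, scores):
    """Least-squares trend: rate of improvement (ROI) in words read
    correct (WRC) per week."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

def responded(actual_roi, expected_roi):
    """A student 'responds' when the actual trend meets or exceeds
    the expected rate of progress (the aimline)."""
    return actual_roi >= expected_roi

# Illustrative data: R-CBM probes collected twice weekly for 5 weeks.
weeks = [1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5]
scores = [30, 33, 34, 38, 40, 43, 44, 48, 50]  # WRC on each probe

expected_roi = 2.0                    # assumed aimline: gain 2 WRC per week
actual_roi = ols_slope(weeks, scores) # 5.0 WRC per week with these data

print("responded" if responded(actual_roi, expected_roi) else "did not respond")
```

With these invented numbers the trend exceeds the aimline, so, like Sam in Figure 3, the student would be judged to have responded.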

A second illustration of an RTI progress-monitoring graph for a second-grade student, Samantha, is shown in Figure 4. Her actual rate of progress was below the expected rate of progress, and it was decided that the intervention was ineffective. She did not respond to the intervention, and the team may decide to modify the intervention or pursue the need for an IEP.

CBM: Monitoring Progress in RTI and More

The documented usefulness of CBM to monitor progress accurately and efficiently makes it an appropriate choice for measuring a specific student's response to intervention as part of an eligibility process for learning disabilities. However, CBM provides schools with the ability to do more than monitor progress for students as part of RTI. Should a student be eligible for special education, it provides a "best-practice" approach for writing IEP goals and monitoring the effects of the special education intervention (Fuchs & Shinn, 1989; Shinn & Shinn, 2000). It also has been demonstrated to result in increased achievement of students with disabilities (Fuchs & Fuchs, 1986a; Fuchs & Fuchs, 2004). The use of the same progress-monitoring tool before and after special education placement provides a continuous database, increasing the likelihood of understanding the data for all educators and parents, and reducing the training needs for different assessment systems.

CBM brings two more value-added benefits to practice when it is included in an RTI process: the ability to (a) measure the performance discrepancy using a local normative approach, and (b) feasibly implement preventive three-tier basic skills progress monitoring.

Using CBM to Measure the Performance Discrepancy

From an RTI perspective, it seems as if any quality achievement test could be used to measure the achievement discrepancy from typical peers. However, if the RAD best explains what schools do in identifying students as SLD, then it seems as if whatever achievement test is employed should be normed locally.

Shortly after initial validation studies began showing the usefulness of CBM as a progress-monitoring tool in the early 1980s, schools began to use it as a norm-referenced test to determine the performance discrepancy with referred students as part of the eligibility determination process (Marston, Deno, & Tindal, 1984; Marston & Magnusson, 1985; Tindal, Wesson, Deno, Germann, & Mirkin, 1985). Local norms were used in this process and resulted in more accurate predictions of referral-to-placement outcomes (Marston et al., 1984). For more than 25 years, CBM has been an integral tool in eligibility determination in a problem-solving model, whereby it is the severe discrepancy from peers from the same

Figure 3. Response to intervention after 5 weeks for Sam, a Grade 2 student, tested in reading standard progress monitoring passages. (Copyright © 2006 Edformation, Inc. Reprinted with permission.)


community that defines and validates the problem. Illustrations of three potential performance discrepancies are shown in Figure 5.

To measure a performance discrepancy in reading, students typically read three passages, and the median score is graphed and compared to peers using a "box-and-whisker" chart. This process takes approximately 5 min. The box represents the range of typically achieving students (i.e., scores between the 25th and 75th percentiles). The lower whisker represents scores from the 25th to the 10th percentile. Conversely, scores in the upper whisker represent scores from the 75th to the 90th percentile. Scores below the lower whisker are below the 10th percentile and could meet a definition of a severe performance discrepancy as part of an RTI process. In the three examples juxtaposed in Figure 5, the first student would not have a significant performance discrepancy that would potentially require an RTI process. The student's reading scores are well above average. The second student may have a performance discrepancy that requires an intervention. This student may be an appropriate candidate for a Tier 2 intervention. The third student's R-CBM score is well below the 10th percentile and may be a good candidate for RTI.
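The percentile logic above can be sketched as a short program. This is an illustrative sketch only; the local-norm table and the 10th/25th percentile cut scores are hypothetical placeholders, not published norms:

```python
import statistics

def percentile_rank(local_scores, score):
    """Percent of local-norm scores at or below the student's score."""
    at_or_below = sum(1 for s in local_scores if s <= score)
    return 100.0 * at_or_below / len(local_scores)

def discrepancy_status(local_scores, three_passage_scores):
    """Median of three passages, classified against local norms
    (hypothetical cuts: 10th and 25th percentiles)."""
    median_wrc = statistics.median(three_passage_scores)
    pr = percentile_rank(local_scores, median_wrc)
    if pr < 10:
        return "severe performance discrepancy (candidate for RTI)"
    if pr < 25:
        return "below average (candidate for Tier 2 intervention)"
    return "no significant performance discrepancy"

# Stand-in local norms (a real table would come from districtwide
# benchmark testing of same-grade peers).
local_norms = list(range(10, 110))
student_probes = [18, 14, 16]  # WRC on three passages; median = 16

print(discrepancy_status(local_norms, student_probes))
```

With these invented figures the student's median falls below the 10th local percentile, matching the third student in Figure 5.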

A number of advantages appear obvious when CBM is used to measure the performance discrepancy component of the RTI dual-discrepancy process. First, the assessment process is accurate and time efficient. Second, the process has a lengthy history of use in school districts that employ a problem-solving model, which generally results in increased service delivery satisfaction for practitioners and parents (Ikeda et al., 2002; Tilly & Grimes, 1998). Third, it comprises a continuous assessment process that uses data collected in similar ways for different assessment purposes.

Using CBM in a Prevention Model to Assess RTI for All Students

An RTI process can be used merely as a replacement for ability-achievement discrepancies in the special education eligibility procedure. That is, students can be referred, tested, albeit on some different dimensions, and placed in special education. However, nearly all major RTI school-based initiatives (Batsche et al., 2005; Fuchs & Vaughn, 2005; Pericola Case et al., 2003; Vaughn & Fuchs, 2003) embed the RTI decision in a variation of a problem-solving model (Deno, 2002; Shinn, in press) that has the following components:

1. Use of a three-tier heuristic to design multiple levels of interventions with attention to evidence-based practices, and the creation of highly effective remedial interventions (e.g., Tier 2) to support the needs of at-risk students

Figure 4. Response to intervention after 5 weeks for Samantha, a Grade 2 student, tested in reading standard progress monitoring passages. (Copyright © 2006 Edformation, Inc. Reprinted with permission.)


2. A universal screening process to promote early identification and intervention and to reduce the need for individual student referrals

3. Use of scientifically based progress monitoring to assess RTI for all students

I prefer to distinguish this conception of RTI as an "rti" process whereby all students' progress is monitored to ensure they are benefiting from the instruction they are receiving, and if they are not, then changes in the intervention are made. I distinguish this "rti" process from the "RTI" eligibility process, which has the purpose of making a special education entitlement decision because the student has failed to respond within an "rti" process.

Integral to this rti conception is the use of CBM in a three-tier model in a way that combines universal screening and progress monitoring. A full discussion of the use of CBM in a three-tier model is beyond the scope of this article; the reader is referred to Shinn (in press) or Shinn, Shinn, Hamilton, and Clarke (2002) for more detail. In brief, all students in general education are tested three to four times per year (i.e., benchmark assessment), and an individual student's scores and rates of improvement are compared to those of peers. An illustration in reading is shown in Figure 6.

The second-grade student in Figure 6 was below average on a fall benchmark assessment, and although the student improved somewhat by the time of the winter benchmark testing, the achievement gap was growing. Based on this gap, an increasing performance discrepancy, the student received a remedial Tier 2 intervention in addition to core general education instruction. By the end of the year, at the spring benchmark, the student had improved significantly, and the performance gap was eliminated. The student "responded to the intervention."

By having all students' progress monitored, changes can be made early and the effects can be judged. In Figure 7, a student with a severe performance discrepancy was identified, and general education instruction was supplemented with an intensive Tier 2 remedial intervention. The intervention resulted in a faster rate of improvement than that of typical students, and the student reduced the gap. The student "responded to the intervention," and special education did not need to be considered.
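The benchmark logic in Figures 6 and 7 can be sketched as a gap trajectory. All numbers here are invented for illustration; in practice the peer figures would come from local norms:

```python
# Track the gap between a student's median WRC and the local peer
# median across fall, winter, and spring benchmarks (invented data).
benchmarks = ["fall", "winter", "spring"]
student_wrc = [18, 35, 60]   # hypothetical student medians
peer_median = [40, 55, 70]   # hypothetical local-norm medians

gaps = [p - s for p, s in zip(peer_median, student_wrc)]
for season, gap in zip(benchmarks, gaps):
    print(f"{season}: gap = {gap} WRC")

# A gap that shrinks at each successive benchmark suggests the
# student is responding to the Tier 2 intervention.
closing = all(later <= earlier for earlier, later in zip(gaps, gaps[1:]))
print("gap closing -> responded" if closing else "gap not closing")
```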

The Research Agenda: Social Validity and the Role of Rate of Improvement

This article began with a brief review of the literature on SLD identification, first from a "do as I say" perspective, which considered studies that examined whether school-identified SLD students matched a state or school district required criterion, typically the ability-achievement discrepancy. School-identified SLD students often failed to meet the stated eligibility criterion. For more detail, see MacMillan and Siperstein (2002). A brief review was then presented, trying to distill whether any consistent pattern in assessment results could be discerned from what schools actually did in the eligibility determination process. I called this the "do as I do" approach. What schools appeared to do when identifying students as SLD was to base this decision on severe low achievement, much lower than typically achieving students, and reliably and significantly lower than students whom schools considered low achieving. Further analysis of the features of what schools do in the SLD identification process posited that the achievement performance discrepancy is a relative one, where SLD students' discrepancies are severe compared to local community peers.

Figure 5. Results of reading curriculum-based measurement (R-CBM) testing comparing three Grade 2 students with different educational needs to local norms in the number of words read correctly. (Copyright © 2007 Edformation, Inc. Reprinted with permission.)

I then illustrated how schools use CBM, a scientifically based progress-monitoring tool, to operationalize a dual-discrepancy approach to RTI, consistent with the recommendations of groups like the National Research Council and leaders in our field. RTI is best conducted within a process of problem solving in a three-tier model where universal screening and progress monitoring are continual activities for all students.

That an RTI process with referred students and embedded within a three-tier model can be done is without question. It has been and is being done in schools nationwide. However, actual practice and research-based practice are not always the same. A number of pressing research questions need to be addressed. I have characterized these questions as ones of (a) social validity of specific learning disability, (b) challenges in assessing response to high-quality interventions, (c) technical issues in progress monitoring, (d) fidelity of implementation, and (e) generalizability.

Social Validity of Specific Learning Disability

The identification of a specific learning disability as a result of an RTI process with a dual-discrepancy approach has been and likely will continue to be a topic of intense debate. Many in the field appear to be comfortable with SLD as a solely "within-the-child" phenomenon, even if the associated assessment practices have psychometric deficiencies and/or there is consistent evidence that schools do not use the results as intended.

Figure 6. Benchmark assessment of a Grade 2 student showing response to intervention. R-CBM = reading curriculum-based measurement; WRC = words read correctly. (Copyright © 2007 Edformation, Inc. Reprinted with permission.)

An RAD approach whereby students labeled SLD are the lowest-achieving students relative to other students in their communities fits the obtained data and observed school practices of the past 25 years. But such a pragmatic conception of a "disability" is uncomfortable to many and threatens their world views. From a social validity perspective, more research is needed on this topic. I see at least three straightforward questions:

1. What threats do professionals (e.g., changing roles) and parents (e.g., loss of disability "uniqueness") see in an explicit needs-driven service delivery system?

2. What belief systems are challenged when services are weighted toward solving the problem instead of identifying the presumed cause of the problem?

3. How willing are consumers (e.g., school psychologists, teachers, parents, students) to endorse a service delivery system that emphasizes students' needs (versus labels) yet ensures the legal protections that some students and families need?

Figure 7. Results of Tier 2 intervention on the reading skills of a Grade 2 student. (Copyright © 2007 Edformation, Inc. Reprinted with permission.)

It is entirely plausible to me that we will continue to say one thing (i.e., a student "has" a disability) while doing another (i.e., serving students with severe educational needs relative to community expectations). I would like to understand why.

Challenges in Assessing Response to High-Quality Intervention

To be certain, the evidence has shown that historically, a severe RAD has been the defining feature of students whom schools label as SLD. This discrepancy is easy to measure and is important to teachers and parents. RTI is based on a dual discrepancy, however, and it is the discrepancy in response to high-quality intervention that presents two obvious challenges that must be addressed.

"Response" to date appears to be assessed best by calculating a rate of improvement (ROI) over a fixed period of time using CBM. The first RTI challenge is schools' limited capacity to measure response or ROI. Despite more than 25 years of research and practice with CBM as a frequent progress-monitoring tool, it remains underutilized in schools. It even remains underutilized for the purpose and the population of students for whom it was originally intended: monitoring progress toward IEP goals for students who receive special education. Without CBM, accurately assessing response to intervention will be difficult.

The problem of limited CBM use as a progress-monitoring tool does not lie with school psychologists. Shapiro, Angello, and Eckert (2004) reported that more than 90% of school psychology training programs instruct their students in its use and more than half of practitioners include CBM among their assessment tools. However, school psychologists are not the targeted audience for CBM progress-monitoring skills. We must understand the limited amount of progress-monitoring training at the preservice and in-service levels for teachers and administrators. What are the barriers to a wide range of teachers receiving scientifically based progress-monitoring training? How do schools ensure all their teachers have the capacity for assessing all students' response to intervention accurately and efficiently, including those considered for SLD eligibility?

The second RTI challenge is ensuring high-quality intervention. It is beyond the scope of this article to identify all the research needs here. However, we must continue to try to understand what constitutes high-quality interventions in educators' knowledge base and skill sets, as well as what barriers there may be to their use.

Because of these two challenges (i.e., limited CBM progress-monitoring skills, implementation of high-quality intervention), it is possible that this part of the dual discrepancy will be ignored in practice. The research agenda should include investigating how much attention and weight decision makers give to a student's ROI using scientifically based progress monitoring (e.g., CBM) above and beyond the achievement discrepancy. This question can be answered objectively by examining the explained variance in placement decisions contributed by the achievement discrepancy and ROI. Theoretically, ROI should add explained variance in the eligibility decision. Alternately, RTI individual case studies could be examined through systematic analyses of achievement discrepancy and ROI decision criteria. Although intellectually and practically appealing, it will be interesting to see whether both discrepancies contribute to the decisions schools make. There is precedent for this line of research in judging SLD students' response to reintegration in general education after being served in special education resource rooms (Shinn, Powell-Smith, Good, & Baker, 1997).
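The weight question can be made concrete with a toy version of the two decision rules. This sketch is hypothetical; the cut points (10th percentile, a 2-WRC-per-week aimline) and cases are invented, but it shows where ROI can add information beyond the achievement discrepancy alone:

```python
def eligible_single(percentile):
    """Decision rule using the achievement discrepancy only
    (hypothetical cut: below the 10th local percentile)."""
    return percentile < 10

def eligible_dual(percentile, actual_roi, expected_roi):
    """Dual-discrepancy rule: severe educational need AND an
    inadequate response to intervention."""
    return percentile < 10 and actual_roi < expected_roi

cases = [
    # (local percentile, actual ROI, expected ROI)
    (5, 0.5, 2.0),   # severe need, not responding
    (5, 2.5, 2.0),   # severe need, but responding
    (40, 0.5, 2.0),  # no severe need
]

for pct, roi, goal in cases:
    print(pct, roi, "single:", eligible_single(pct),
          "dual:", eligible_dual(pct, roi, goal))
```

Only the second case separates the rules: the single-discrepancy rule would identify the student while the dual-discrepancy rule would not, because the student is responding. The research question is how often real decisions track that difference.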

Technical Issues in Progress Monitoring

Although there are a number of what I consider "macro issues" (e.g., social validity of disability) in RTI, several research issues relate to the specific technical issues in progress monitoring. Three issues that are paramount are goal setting, sensitivity to improvement, and reliability of judgments of adequate progress. These issues, of course, are interrelated. RTI goal-setting practices are


short-term variations of long-term goal-setting strategies (e.g., annual IEP goals) such as growth rate standards (Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993) or "reducing-the-gap" normative approaches (Shinn, 2005a). For annual goals, a goal is set for a 25- to 32-week period, students are assessed one to two times per week, and judgments of actual progress are made based on a trend line drawn through a sufficient number of accumulated data points, preferably at least 7-10, compared to the expected rate of progress (Shinn et al., 1989). Notably, the decision about progress is relatively "low stakes": whether the student is progressing and whether the intervention needs modification at a given point in time. Judgment errors are, in a sense, self-correcting as additional progress data are collected and the need for change in intervention may become more apparent.
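The two goal-setting strategies named above can be contrasted in a short sketch. The weekly growth value and norm figures below are illustrative placeholders, not the published standards from Fuchs et al. (1993) or Shinn (2005a):

```python
def growth_rate_goal(baseline_wrc, weekly_growth, weeks):
    """Growth-rate standard: project an expected weekly WRC gain
    from the student's baseline over the goal period."""
    return baseline_wrc + weekly_growth * weeks

def reduce_gap_goal(baseline_wrc, peer_median_wrc, proportion):
    """'Reducing-the-gap' normative approach: set the goal by closing
    a chosen proportion of the gap to the local peer median."""
    gap = peer_median_wrc - baseline_wrc
    return baseline_wrc + proportion * gap

# Annual goal (30 weeks) for a student reading 20 WRC at baseline:
print(growth_rate_goal(20, 1.5, 30))  # hypothetical 1.5 WRC/week standard
print(reduce_gap_goal(20, 90, 0.5))   # close half the gap to peers
```

Either computation yields an end-of-period WRC target; the trend line through the student's probes is then judged against the aimline drawn from baseline to that target.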

In an RTI process, goals are typically set for 6-12 weeks and students are assessed one to two times per week. This limited time frame makes it potentially challenging to accumulate a sufficient number of data points for reliable judgments. With RTI, the decision about progress is also higher stakes and less self-correcting. Failure to respond potentially results in a special education label and placement where a more intensive intervention may be implemented, accompanied by well-known special education side effects such as stigmatization, reduced time with nondisabled peers, and so forth.

We would expect that long-term goal-setting and progress-monitoring strategies would generalize to RTI. However, we need to generate much more information regarding the nuts and bolts of the progress-monitoring practices that work best to produce quality decisions that result in special education. Among the important research topics are frequency of measurement, number of data points, goal ambitiousness, and reliability of the progress-monitoring judgments made, including those made by parents as well as educators. For example, in RTI goal setting, one of the methods is based on reducing the achievement gap from typically achieving students. How much of a reduced gap is seen as socially important? Would there be the same consensus regarding social importance among educators as with parents? Although we know that with appropriate training educators can make reliable judgments about progress (e.g., Shinn et al., 1997), we know very little about how parents reach conclusions about progress. More studies on how professionals and parents derive their goals and reach progress decisions are in great need.

Fidelity Issues

As with all school implementation issues, I look forward to empirical data on how accurately schools engage in an RTI process when progress-monitoring and intervention-fidelity data are to be collected and used. Time-efficient assessment is no guarantee that the data are collected and used with integrity. Fidelity of treatment implementation is critical not only as part of the research agenda, but also to ensure the integrity of the decisions made with students.

Generalizability

To date, nearly all published illustrations of RTI have been in reading. Some students also have deficits in other language arts areas (e.g., written expression, spelling) and in mathematics computation and applications. Whether the same assessment strategies for determining the severity of achievement discrepancies, the types of data collected, goal setting, and frequency of progress monitoring are applicable is an open question. Certainly there are quality progress-monitoring tools in these areas, but compared to reading, they are significantly underinvestigated in general and seriously so in the RTI process.

Finally, from a legal perspective, RTI is an eligibility option for local education agencies only for learning disabilities. Historically, the concepts of the dual discrepancy (i.e., educational need and the quality and intensity of intervention required for educational benefit) fleshed out explicitly in RTI for SLD are implicit for all students with severe educational and behavioral needs in a problem-solving model. A fundamental question becomes

whether the field of education can go further and apply these fundamental concepts to all students regardless of label. I look forward to seeing how the field addresses all these challenges and more in our attempt to improve outcomes for all students.

References

Algozzine, B., & Ysseldyke, J. E. (1987). Questioning discrepancies: Retaking the first step 20 years later. Learning Disability Quarterly, 10, 307-318.

Algozzine, B., Ysseldyke, J., & Shinn, M. R. (1982). Identifying children with learning disabilities: When is a discrepancy severe? Journal of School Psychology, 20, 299-305.

Batsche, G. M., Elliott, J., Graden, J., Grimes, J., Kovaleski, J. F., Prasse, D., et al. (2005). Response to intervention: Policy considerations and implementation. Alexandria, VA: National Association of State Directors of Special Education.

Clarke, B., & Shinn, M. R. (2004). The identification, development, and investigation of early mathematics curriculum-based measurement. School Psychology Review, 33, 234-248.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cone, T. E., Wilson, L. R., Bradley, C. M., & Reese, J. H. (1985). Characteristics of LD students in Iowa: An empirical investigation. Learning Disability Quarterly, 8, 211-220.

Deno, S. L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52, 219-232.

Deno, S. L. (1986). Formative evaluation of individual student programs: A new role for school psychologists. School Psychology Review, 15, 358-374.

Deno, S. L. (1989). Curriculum-based measurement and alternative special education services: A fundamental and direct relationship. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 1-17). New York: Guilford Press.

Deno, S. L. (1995). School psychologist as problem solver. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology III (pp. 471-484). Washington, DC: National Association of School Psychologists.

Deno, S. L. (2002). Problem solving as best practice. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 37-55). Bethesda, MD: National Association of School Psychologists.

Deno, S. L. (2003). Developments in curriculum-based measurement. The Journal of Special Education, 37, 184-192.

Deno, S. L., Mirkin, P., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49(1), 36-45.

Education for All Handicapped Children Act of 1975, Pub. L. No. 94-142.

Fuchs, L. S. (2003). Assessing intervention responsiveness: Conceptual and technical issues. Learning Disabilities Research & Practice, 18, 172-186.

Fuchs, L. S., & Deno, S. L. (1991). Paradigmatic distinctions between instructionally relevant measurement models. Exceptional Children, 57(6), 488-500.


Fuchs, L. S., & Richs, D. (1986a). Effects of systematicformative evaluation on student achievement: A meta-analysis. Exceptional Children. 53, 199-208.

Ftichs. L. S.. & Fuchs. D. (1986b). Effects of systematicfonnative evaluation: A meta-analysis. ExceptionalChildren. 5i(3). 199-208.

Fuchs. L. S.. & Fuchs. D. (1999). Monitoring studentprogress toward the development of reading compe-tence: A review of three forms of classroom-basedassessment. School Psychology Review, 2«(4), 659-671.

Fuchs, L. S., & Fuchs, D. (2004). What is scientificallybased research on progress monitoring? Washington,DC: National Center on Progress Monitoring, Ameri-can Institute for Reseiwch, Office of Special EducationPrograms.

Fuchs, L. S., Fuchs, D.. & Deno, S. L. (1982). Reliabilityand validity of curriculum-based informal reading in-ventories. Reading Research Quarterly, 18. 6-26.

Fuchs, L. S.. Fuchs. D., Hamiett, C. L.. Walz, L., &Germann. G. (1993). Formative evaluation of aca-demic progress: How much growth can we expect?School Psychology Review. 22{ 1), 27-48.

Fuchs, D., Fuchs, L. S., Mathes, P. G., Lipsey, M., &Eaton. S. (2000). A meta-analysis of reading differ-ences between iinderachievers with and without thelearning disabilities label: A brief report. LeamingDisabilities: A Muitidisciplinary Joumai, 10. 1-3.

Fuchs, D., Fuchs. L. S.. Mathes, P. G., Lipsey, M., &Roberts. P. H. (2001). Is "leaming disabilities" just afancy term for low achievement? A meta-analysis ofreading differences between low achievers with andwithout the label. Paper presented at the LeamingDisabilities Summit. Reuieved from http://www.nrcid.org/resources/ldsummit/fuchs/shtml November 1. 2007.

Fuchs, L. S., Fuchs, D.. & Maxwell, L. (1988). Thevalidity of informal reading comprehension measures.Remedial and Special Education. 9. 20-28.

Fuchs. L. S., Fuchs, D., & Speece, D. L. (2002). Treat-tnent validity as a unifying construct for identifyinglearning disabilities. Leaming Disability Quarterly.25(1). 33-45.

FUchs, L. S., & Shinn, M. R. (1989). Writing CBM IEPobjectives. In M. R. Shinn (Ed.). Curriculum-basedmeasurement: Asses.':ing .special children, (pp. 132-154). New York: Guilford Press.

Fuchs, L. S.. & Vaughn. S. R. (2005). Re.sponse to inter-vention as a framework for the identitication of leam-ing disabilities. Trainer's Forum: Periodical of theTrainer's of School Psychologists.

Gerber. M.. & Semmel. M. (1984). Teachers as imperfecttest: Reconceptualizing the referral process. Educa-tional Psychologist. 19. 137-148.

Good. R. H., Wallin. J. V.. Simmons, D. C , Kame'enui,E. J.. & Kaminski. R. A. (2002). System-wide percen-tile ra/iti for DIBELS benchmark assessment. (Tech-nical Report No. 9). Eugene, OR: University of Ore-gon.

Gresham, F. M. (2002). Misidentification of students intohigh incidence disability categories: Implications forappropriate education. Washington DC: President'sCommission on Excellence in Special Education.

Grcsham. F. M.. MacMillan. D. L.. & Bocian. K. M.(1996). Leaming disabilities, low achievement, andmild mental retardation: More alike than different?Journal of Learning Disabilities, 29(6), 570-58 L

Ikeda, M. J., Grimes, J., Tilly, W. D., K, Allison, R.,Kums. S.. Stumme, i.. et al. (2002). Implementing anintervention-based approach to .service delivery: Acase example. In M. R. Shinn, H. M. Walker, & G.Stoner (Eds.). Inlen'entionsfor academic and behaviorproblems: Preventive and remedial approaches (pp,71-88). Bethesda, MD; National Association ofSchool Psychologists.

Individuals with Disabilities Education Improvement Act.(2004). Pub. L. No. 108,446.

Kame'enui, E. J. (2002). Final report on analysis ofreading assejisment instruments for K-3. Eugene, OR:Institute for Educational Achievement.

Kavale, K. A.. Fuchs. D.. & Scruggs. T. E. (1994). Settingthe record straight on leaming disability and lowachievement: Implications for policymaking. LeamingDisabilities Research & Practice, 9(2), 70-77.

MacMillan. D. L., & Siperstein, G. N. (2001). Uamingdisabilities as operationally defined by scftoots. Paperpresented at the Leaming Disabilities Summit, re-trieved from http://www.iircld.org/resources/ldsummit/macmillan.shtml

Marston. D.. Deno, S. L., & Tindal, G. (1984). Eligibilityfor leaming disabilities services: A direct and repeatedmeasurement approach. Exceptional Children. 50,554-555.

Marston, D.. & Magnusson, D. (1985). Implementingcurriculum-based measurement in special and regulareducation settings. Exceptional Children. 52. 266-276.

National Research Council. (2002). Executive summary:Disproportionate representation of minority studentsin special education. Washington. DC: Author.

Pericola Case, L., Speece, D. L.. & Eddy Molloy, D.(2003). The validity of response-to-instmction para-digm to identify reading disabilities: A longitudinalanalysis of individual differences and context factors.School Psychology Review. 32. 557-582.

Peterson, K. M., & Shinn, M. R. (2002). Severe discrep-ancy models: Which best explains school identificationpractices for ieaming disabilities? School PsychologyReview. 31, 459-476.

Shapiro, E. S., Angello. L. M.. & Eckert. T. L. (2004). Hascurriculum-based assessment become a staple ofschool psychology practice? An update and extensionof knowledge, use. and attitudes fmm 1990 to 2000.School Psychology Review. 33. 249-257.

Shepard, L. A., & Smith. M. L. (1983). An evaluaUon ofthe identification of leaming disabled studenU in Col-orado. Leaming Disability Quarterly. 6. 115-127.

Shinn, M. R. (1981). <4 comparison of differences between.'itudents labeled teaming disabled and low achievingon measures of classroom perfonnance. Unpublisheddoctoral dissenation. University of Minnesota. Minne-apolis.

Shinn, M. R. (2005a). AIMSweb response to intervention (RTI): A standard protocol-based system for managing and reporting problem-solving outcomes. Eden Prairie, MN: Edformation, Inc.

Shinn, M. R. (2005b). Identifying and validating academic problems. In R. Brown-Chidsey (Ed.), Assessment for intervention: A problem-solving approach (pp. 219-246). New York: Guilford Press.

Shinn, M. R. (in press). Best practices in curriculum-based measurement and its use in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists.

Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford Press.

Shinn, M. R. (Ed.). (1998). Advanced applications of curriculum-based measurement. New York: Guilford Press.

Shinn, M. R., Good, R. H., Knutson, N., Tilly, W. D., & Collins, V. (1992). Curriculum-based reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 458-478.

Shinn, M. R., Good, R. H., & Stein, S. (1989). Summarizing trend in student achievement: A comparison of evaluative models. School Psychology Review, 18(3), 356-370.

Shinn, M. R., & Marston, D. (1985). Differentiating mildly handicapped, low-achieving, and regular education students: A curriculum-based approach. Remedial and Special Education, 6(2), 31-38.

Shinn, M. R., Powell-Smith, K. A., Good, R. H., III, & Baker, S. (1997). The effects of reintegration into general education reading instruction for students with mild disabilities. Exceptional Children, 64(1), 59-79.

Shinn, M. R., & Shinn, M. M. (2000). Writing and evaluating IEP goals and making appropriate revisions to ensure participation and progress in general curriculum. In C. F. Telzrow & M. Tankersley (Eds.), IDEA Amendments of 1997: Practice guidelines for school-based teams (pp. 351-381). Bethesda, MD: National Association of School Psychologists.

Shinn, M. R., Shinn, M. M., Hamilton, C., & Clarke, B. (2002). Using curriculum-based measurement to promote achievement in general education classrooms. In M. R. Shinn, G. Stoner, & H. M. Walker (Eds.), Interventions for academic and behavior problems: Preventive and remedial approaches (pp. 113-142). Bethesda, MD: National Association of School Psychologists.

Shinn, M. R., Tindal, G., & Spira, D. (1987). Special education referrals as an index of teacher tolerance: Are teachers imperfect tests? Exceptional Children, 54, 32-40.

Shinn, M. R., Tindal, G., Spira, D., & Marston, D. (1987). Practice of learning disabilities as social policy. Learning Disability Quarterly, 10(1), 17-28.

Shinn, M. R., Ysseldyke, J., Deno, S. L., & Tindal, G. (1986). A comparison of differences between students labeled learning disabled and low achieving on measures of classroom performance. Journal of Learning Disabilities, 19, 545-552.

Speece, D. L., Pericola Case, L., & Eddy Molloy, D. (2003). Responsiveness to general education instruction as the first gate to learning disabilities identification. Learning Disabilities Research & Practice, 18, 147-156.

Tilly, W. D., & Grimes, J. (1998). Curriculum-based measurement: One vehicle for systemic educational reform. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 32-88). New York: Guilford Press.

Tindal, G., Wesson, C., Deno, S. L., Germann, G., & Mirkin, P. (1985). The Pine County model for special education delivery: A data-based system. In T. Kratochwill (Ed.), Advances in school psychology (Vol. IV, pp. 223-250). Hillsdale, NJ: Lawrence Erlbaum Associates.

Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18, 137-146.

Wilson, L. (1985). Large-scale learning disability identification: The reprieve of a concept. Exceptional Children, 52, 44-51.

Woodcock, R. W.. & Johnson. M. B. (1989). Woodcock-Johnson Psycho-Educational Battery-Revised. Allen.TX: DLM teaching Resources.

Ysseldyke, J. E., Algozzine, B., & Epps, S. (1983). A logical and empirical analysis of current practices in classifying students as handicapped. Exceptional Children, 50, 160-166.

Ysseldyke, J. E., Algozzine, B., Richey, L., & Graden, J. (1982). Declaring students eligible for learning disabilities services: Why bother with the data? Learning Disability Quarterly, 5, 37-44.

Ysseldyke, J. E., Algozzine, B., Shinn, M. R., & McGue, M. (1982). Similarities and differences between low achievers and students labeled learning disabled. The Journal of Special Education, 16, 73-85.

Ysseldyke, J.. Thurlow. M., Graden. J.. Wesson. C , Al-gozzine, B., & Deno. S. L. (1983). Generalizationsfrom five years of research on assessment and deci-sion-making: The University of Minnesota Institute.Exceptional Education Quarterly. 4, 75-93.

Date Received: December 12, 2006
Date Accepted: September 19, 2007

Action Editor: James DiPerna

Mark R. Shinn, PhD, received his doctoral degree from the University of Minnesota in 1981. He currently is Professor of School Psychology at National Louis University in Skokie, Illinois, and serves as Project Director for IASPIRE North, a U.S. Department of Education and Illinois Department of Education Personnel Preparation Grant. IASPIRE was designed to support schools' implementation of a three-tier problem-solving model, including RTI. A recipient of the 2003 American Psychological Association's Jack Bardon Award, he continues his research and training efforts to support practitioners to use scientifically based progress-monitoring tools.
