
USING AN ABBREVIATED ASSESSMENT TO IDENTIFY EFFECTIVE ERROR-CORRECTION PROCEDURES FOR INDIVIDUAL LEARNERS

DURING DISCRETE-TRIAL INSTRUCTION

REGINA A. CARROLL

UNIVERSITY OF NEBRASKA MEDICAL CENTER’S MUNROE-MEYER INSTITUTE

JENNIFER OWSIANY AND JESSICA M. CHEATHAM

WEST VIRGINIA UNIVERSITY

Previous research comparing the effectiveness of error-correction procedures has involved lengthy assessments that may not be practical in applied settings. We used an abbreviated assessment to compare the effectiveness of five error-correction procedures for four children with autism spectrum disorder or a developmental delay. During the abbreviated assessment, we sampled participants' responding with each procedure and completed the assessment before participants reached our mastery criterion. Then, we used the results of the abbreviated assessment to predict the most efficient procedure for each participant. Next, we conducted validation assessments, comparing the number of sessions, trials, and time required for participants to master targets with each procedure. Results showed correspondence between the abbreviated assessment and validation assessments for two of four participants and partial correspondence for the other two participants. Findings suggest that a brief assessment may be a useful tool for identifying the most efficient error-correction procedure for individual learners.

Key words: assessment, autism spectrum disorder, discrete-trial instruction, error correction, skill acquisition

A number of procedures have been developed to reduce or prevent errors during discrete-trial instruction (DTI). Previous comparison studies have demonstrated that multiple error-correction procedures tend to be effective; however, the most efficient error-correction procedure typically varies across individual learners (Carroll, Joachim, St. Peter, & Robinson, 2015; Kodak et al., 2016; Rodgers & Iwata, 1991; Smith, Mruzek, Wheat, & Hughes, 2006; Worsdell et al., 2005). Thus, to maximize learning during DTI, it may be beneficial to conduct an initial assessment to identify the most efficient error-correction procedure for each learner.

McGhan and Lerman (2013) evaluated the use of a rapid error-correction assessment to identify the most efficient and least intrusive error-correction procedures for five children with autism spectrum disorder (ASD). During an initial assessment, the researchers compared the following four error-correction procedures: (a) an error statement (i.e., vocal feedback); (b) a model prompt (i.e., the therapist modeled the correct response, but did not require the participant to respond); (c) an active student response (i.e., the therapist modeled the correct response, and then required the participant to echo the correct response); and (d) directed rehearsal (i.e., the therapist modeled the correct response, and then re-presented the trial until the participant engaged in three correct unprompted responses). Across conditions, researchers evaluated efficiency

We would like to thank Chante' Adams, Taylor Fenter, Brad Joachim, and Jessica Morgan for their assistance with data collection.

Address correspondence to: Regina A. Carroll, University of Nebraska Medical Center's Munroe-Meyer Institute, Department of Psychology, 9012 Q Street, Omaha, NE 68127; Email: [email protected]

doi: 10.1002/jaba.460

JOURNAL OF APPLIED BEHAVIOR ANALYSIS 2018, 51, 482–501 NUMBER 3 (SUMMER)

© 2018 Society for the Experimental Analysis of Behavior


Page 2: Using an Abbreviated Assessment to Identify Effective

by measuring the total number of trials (including error-correction trials) required for participants to meet the mastery criterion. After the initial assessment, they conducted validation assessments comparing the most efficient error-correction procedure to less efficient ones. Results of the initial assessment corresponded to results of the validation assessments for four of the five participants. These findings suggest that an initial assessment may be a useful tool for identifying the most efficient and least intrusive error-correction procedure for individual learners.

Carroll et al. (2015) extended McGhan and Lerman's (2013) study by comparing the efficiency of additional error-correction procedures and by incorporating multiple measures to assess the efficiency of each procedure. Specifically, they evaluated (a) single-response repetition (the therapist modeled the correct response, and then required the participant to echo the correct response); (b) remove and re-present (the therapist implemented a brief time-out and then re-presented the trial with an immediate model of the correct response); (c) re-present until independent (the therapist modeled the correct response, gave the child an opportunity to echo the model, and then re-presented the trial until the participant responded correctly to the initial instruction); and (d) multiple-response repetition (the therapist re-presented the trial with an immediate model until the participant echoed that model five times). To evaluate the efficiency of each error-correction procedure, Carroll et al. measured the total trials (including error-correction trials), sessions, and training time required for participants to reach a prespecified mastery criterion. The results showed that all participants acquired the skills taught with two or more of the error-correction procedures; however, one procedure led to the most efficient skill acquisition, with some participants mastering targets in one condition in half the time required to master targets in another condition. Consistent with previous research, the most efficient error-correction procedure varied across participants. Additionally, with the exception of the multiple-response repetition condition, there was high correspondence between all three measures of efficiency.

Although previous studies have identified effective assessment procedures to evaluate the efficiency of different error-correction procedures, to date these assessments have been lengthy. For example, to compare the effectiveness and efficiency of four error-correction procedures and a control procedure, Carroll et al. (2015) conducted an average of 1,616 trials (range, 489-3,700) with an average of 8 hr of training time (range, 2-22 hr) across participants. In a recent study, Kodak et al. (2016) compared five error-correction procedures for children with ASD. They reported that the assessment took an average of 3 hr of training time (range, 2-6 hr); however, Kodak et al. excluded reinforcement time from this measure, so the actual time to complete the assessment was longer than 3 hr. McGhan and Lerman (2013) did not report the total training time required to conduct their assessment; however, in general, they conducted fewer training trials across participants when compared to other comparison studies. It is possible that their assessment may have required fewer trials because they taught only one or two target responses across conditions, whereas other researchers have taught three to twelve target responses across conditions (i.e., Carroll et al., 2015; Kodak et al., 2016). Additionally, McGhan and Lerman, as well as other researchers, have conducted training sessions until participants reached a prespecified mastery criterion for target responses in one or more of the error-correction conditions. Thus, the assessment still required a substantial number of training trials for some participants.

In applied settings, it may not be practical for educators to conduct extended comparisons of error-correction procedures; thus, additional research is needed to evaluate methods that quickly identify effective and efficient error-correction procedures for individual learners. In


all comparison studies to date, researchers have conducted training sessions until participants reach mastery-level responding for targets trained in one or more conditions. It is possible that collecting a sample of participants' correct responding and errors early in training, prior to reaching mastery-level responding, would be sufficient to predict the most efficient error-correction procedures for individual learners. For example, we examined the cumulative frequency of correct responses during the first 60 trials (five sessions) of training with each error-correction procedure for participants in the Carroll et al. (2015) study. The procedure that was associated with the highest frequency of correct responses during the first five training sessions corresponded with either the most efficient error-correction procedure (three participants) or the second most efficient error-correction procedure (two participants). We also examined the percentage of correct responding during the first five training sessions with each error-correction procedure for the 10 participants in the McGhan and Lerman (2013) and Kodak et al. (2016) studies. In general, we found that the procedure associated with the highest average percentage of correct responses during the first five training sessions corresponded to either the most efficient error-correction procedure (eight participants) or the second most efficient error-correction procedure (two participants). Findings from these previous comparison studies suggest that an assessment that samples responding during initial training sessions may be useful in predicting an effective and efficient error-correction procedure for individual learners.

The purpose of the current study was to extend previous research comparing the efficiency and effectiveness of error-correction procedures commonly used during DTI by evaluating the predictive validity of an abbreviated error-correction assessment. First, we compared the frequency of (a) correct responses, (b) errors, and (c) error-correction trials for up to five different error-correction procedures during an abbreviated assessment. Second, we conducted validation assessments, which consisted of extended comparisons of the error-correction procedures we evaluated during the abbreviated assessment. During the validation assessments, we compared the (a) percentage of correct responses, (b) number of sessions, (c) total training trials (including error-correction trials), and (d) duration of training required for participants to master new sets of targets trained with the different error-correction procedures.

METHOD

Participants, Setting, and Materials

Four children participated. Blake, Garrett, and Hannah had a diagnosis of ASD, and Bella had a diagnosis of a global developmental delay. Blake was a 5-year-9-month-old male who used full sentences to communicate. Blake had not received DTI prior to the start of this study. We conducted the Peabody Picture Vocabulary Test-4 (PPVT-4; Dunn & Dunn, 2007) and the Expressive Vocabulary Test-2 (EVT-2; Williams, 2007) with Blake, and his age-equivalent score on both assessments was 6.3 years. Garrett was a 4-year-10-month-old male who used full sentences to communicate. Garrett had been receiving DTI for 0.6 years prior to the start of this study, and his PPVT-4 results indicated an age-equivalent score of 4.0 years. We conducted the EVT-2 with Garrett when he was 5 years 5 months old, and his age-equivalent score was 4.9 years. Hannah was a 3-year-11-month-old female who had no vocal verbal behavior or alternative form of communication. Hannah had been receiving DTI for approximately 0.7 years, and she did not qualify for testing with the PPVT-4 or EVT-2. Bella was a 3-year-5-month-old female who communicated using one- to three-word phrases, and she had not received DTI prior to the start of this study. We conducted the PPVT-4 and EVT-2 with Bella when she was 3 years 11 months, and 4 years old, respectively. Her age-equivalent scores were 2.9 years on the PPVT-4 and 3.2 years on the EVT-2. We conducted all sessions in a private room within a university-based research laboratory. For each participant, we identified appropriate teaching targets based on the results of a language-based assessment (e.g., the Verbal Behavior Milestones Assessment and Placement Program; Sundberg, 2008) or their current educational goals.

Dependent Measures

The target responses were reading sight words for Blake and Garrett, matching associated pictures in a two-card array for Hannah, and labeling the functions of items for Bella (see Table 1). We collected data on (a) correct responses, defined as providing a predetermined vocal response (e.g., saying "like" after being shown the sight word) or placing a picture card on the correct comparison picture (Hannah only) within the allotted prompt delay; (b) prompted responses, defined as providing the correct vocal response following a model prompt or placing the picture card on the correct comparison following a physical prompt; (c) errors, defined as providing a vocal response other than the correct response or placing a picture card on the incorrect comparison picture within the specified prompt delay; and (d) no responses, defined as failing to respond within the specified prompt delay or saying, "I don't know."

During the abbreviated assessment, we measured the cumulative frequency of trials with correct responses and errors for each error-correction procedure. We also measured the cumulative frequency of error-correction trials for all procedures except the no-error-correction procedure. During the no-error-correction procedure, the therapist did not correct errors; thus, error-correction trials never occurred. The definition of an error-correction trial varied slightly across conditions (see Phase 1 procedure below); however, in general, we scored an error-correction trial every time the therapist had to model or physically guide (Hannah only) a correct response following an error or no response. We gave each error-correction procedure a score from 1 to 5 (with 1 being low and 5 being high) for each of our dependent measures (i.e., correct responses, errors, and error-correction trials). For example, the procedure that resulted in the highest frequency of correct responses received a score of 5, and the procedure that resulted in the lowest frequency of correct responses received a score of 1. We used reverse scoring for errors and error-correction trials. That is, the procedure that resulted in the lowest frequency of errors received a score of 5. Error-correction trials occurred during only four conditions (i.e., error-correction trials did not occur in the no-error-correction condition), so the procedure that resulted in the lowest frequency of error-correction trials received a score of 4. If two or more procedures resulted in the same value for a dependent measure, then each procedure was given the same score. We calculated the total percentage of points for each procedure by adding the total number of points a procedure received for each dependent measure, dividing it by the total number of points available for that procedure, and multiplying by 100. We evaluated five error-correction procedures for Blake, Garrett, and Bella. For these participants, the model, single-response repetition, re-present until independent, and multiple-response repetition procedures could receive a maximum of 14 points (i.e., up to 5 points for the most correct responses, 5 points for the fewest errors, and 4 points for the fewest error-correction trials). The no-error-correction procedure could receive a maximum of 10 points (i.e., 5 points for the most correct responses and 5 points for the fewest errors). We evaluated only four error-correction


procedures for Hannah, so the maximum number of points her procedures could receive was 12 (i.e., 4 points for the most correct responses, 4 points for the fewest errors, and 4 points for the fewest error-correction trials). We predicted that the procedure receiving the highest percentage of available points would lead to the quickest acquisition of skills during validation assessments.

During validation assessments, we converted each dependent measure to a percentage of trials by dividing the number of trials with an occurrence of a participant response by the total number of trials in a session and multiplying by 100. We also measured the total number of sessions, trials (including error-correction trials), and the total training time in a condition before reaching a predetermined mastery criterion for each participant. In order to measure total training time during each session, data collectors started a timer immediately prior to the presentation of the first trial (within 5 s) and stopped the timer immediately following the end of the reinforcement interval on the last trial (or following the removal of the target stimulus if a reinforcer was not provided on the last trial).
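The point-based ranking described above can be sketched in code. This is a minimal illustration, not the authors' software: the function names and the example frequencies are ours, and the paper does not specify how tied values affect the scores assigned to the remaining procedures (here, ties share a score and the next-best unique value receives the next score down).

```python
def assign_scores(freqs, higher_is_better=True, top_score=5):
    """Score procedures so the best value earns top_score; ties share a score."""
    # Unique values ordered best-first (e.g., for correct responses, higher
    # frequencies are better, so sort descending).
    best_first = sorted(set(freqs.values()), reverse=higher_is_better)
    value_score = {v: top_score - i for i, v in enumerate(best_first)}
    return {proc: value_score[v] for proc, v in freqs.items()}

# Hypothetical abbreviated-assessment frequencies for one participant.
corrects = {"Model": 22, "SRR": 30, "RI": 19, "MRR": 15, "NEC": 9}
errors = {"Model": 14, "SRR": 6, "RI": 17, "MRR": 21, "NEC": 27}
ec_trials = {"Model": 14, "SRR": 6, "RI": 17, "MRR": 21}  # NEC has none

points = {}
for scores in (assign_scores(corrects),                                  # most corrects -> 5
               assign_scores(errors, higher_is_better=False),            # fewest errors -> 5
               assign_scores(ec_trials, higher_is_better=False, top_score=4)):
    for proc, s in scores.items():
        points[proc] = points.get(proc, 0) + s

# Maximum available: 14 points for procedures with error-correction trials, 10 for NEC.
pct = {p: 100 * pts / (10 if p == "NEC" else 14) for p, pts in points.items()}
```

With these hypothetical frequencies, single-response repetition earns all 14 available points (100%), so it would be the predicted most efficient procedure for the validation assessment.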

Interobserver Agreement and Procedural Integrity

For each participant, a secondary observer independently collected data during an average of 68% (range, 33%-100%) of the total sessions for each participant response. We compared the primary and secondary observers' data on a trial-by-trial basis and calculated interobserver agreement (IOA) by taking the number of exact agreements in a session, dividing it by the total number of agreements plus disagreements, and converting the result to a percentage. We scored an agreement if both observers recorded the same participant response on a specific trial (e.g., both observers scored the participant's response as correct). We scored a disagreement if observers recorded different participant responses on a trial (e.g., the primary observer scored the response as a no response and the secondary observer scored the response as an error). We also compared observers' data on the duration of each session, and we scored an agreement if both observers recorded the same time within a 5-s window. The mean IOA scores were 99% (range, 92%-100%) for Blake, 98% (range, 86%-100%) for Garrett, 99% (range, 83%-100%) for Hannah, and 99% (range, 82%-100%) for Bella.

A secondary observer collected treatment-integrity data on specific therapist responses during an average of 68% (range, 33%-100%) of the total sessions for each participant. Across all conditions, the therapist's responses included (a) securing attending, defined as waiting to present the instruction until the child attended (i.e., made eye contact) to each target stimulus; (b) presenting the correct array, defined as placing pictures in the same order as indicated on the data sheet equal distance apart in front of the participant (Hannah only); and (c) presenting the correct instruction, defined as presenting the instruction exactly as it was worded in the teaching protocol. During baseline, correct therapist responses included (a) withholding a controlling prompt, defined as ending the trial following an error or no response within the allotted prompt delay; (b) withholding a reinforcer, defined as ending the trial following a correct response within the allotted prompt delay; and (c) interspersing reinforcement for mastered tasks, defined as presenting a mastered task following every one to three trials and providing praise and brief access to a high-preference item (determined by a daily preference assessment) for correct responses to mastered tasks. During training, correct therapist responses included (a) implementing error correction (e.g., presenting a model prompt and re-presenting a trial) following an incorrect response or no response, for which the definition varied depending on the experimental condition (specific definitions for each condition available from the first author upon request); and (b) delivering a reinforcer, defined as providing verbal praise and brief access to a high-preference item following a correct response.

Observers scored the implementation of each trial as correct (100% accuracy) or incorrect (less than 100% accuracy). We calculated treatment integrity for each session by taking the number of trials implemented correctly, dividing by the number of trials in a session, and converting the result to a percentage. Mean treatment integrity was 99% (range, 82%-100%) for all participants.

Table 1
Target Responses for Each Participant by Condition

Blake (ASD), sight words
  Initial: NEC: is, eat, come; Model: to, and, have; SRR: up, are, went; RI: we, you, that; MRR: on, but, like
  Set 1: NEC: in, was, down; SRR: so, get, find; RI: me, out, here; MRR: at, run, make
  Set 2: NEC: as, say, where; SRR: am, for, little; RI: by, with, must; MRR: my, can, under

Garrett (ASD), sight words
  Initial: NEC: on, but, like; Model: we, you, that; SRR: to, and, have; RI: up, are, went; MRR: is, eat, come
  Set 1: NEC: me, out, here; Model: do, saw, into; SRR: at, run, make; RI: in, was, down; MRR: so, get, find
  Set 2: NEC: it, say, where; Model: am, for, they; SRR: go, can, little; RI: be, with, must; MRR: my, new, under
  Set 3: NEC: after, did, let; Model: could, all, give; SRR: every, look, put; RI: some, will, has

Hannah (ASD), matching associated items
  Initial: Model: flower (tree), bowl (cup); SRR: bus (car), broccoli (carrot); RI: shirt (pants), cat (dog); MRR: chair (table), banana (apple)
  Set 1: Model: puck (football), drill (hammer); SRR: plane (helicopter), comb (shampoo); RI: turtle (fish), ear (nose); MRR: scissors (glue), pig (cow)
  Set 2: Model: horse (chicken), umbrella (boots); SRR: blanket (pillow), beach ball (sandcastle); RI: bracelet (earrings), crab (dolphin); MRR: swing (slide), violin (clarinet)

Bella (GDD), labeling functions of items
  Initial: NEC: write, jump, tie; Model: cut, wear, clean; SRR: sit, peel, dig; RI: build, eat, ride; MRR: see, pull, cook
  Set 1: NEC: push, climb, smell; Model: pet, listen, blow; RI: talk, swim, read; MRR: wash, drive, play
  Set 2: NEC: crack, sing, light; Model: buy, float, tune; RI: lick, sail, pour; MRR: sleep, frost, pound

Note. NEC = No Error Correction; SRR = Single-Response Repetition; RI = Re-present Until Independent; MRR = Multiple-Response Repetition; ASD = Autism Spectrum Disorder; GDD = Global Developmental Delay.
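The two percentage calculations above, trial-by-trial IOA and per-session treatment integrity, reduce to the same arithmetic. The sketch below is our own illustration (function names and the 12-trial example data are hypothetical, not from the study):

```python
def percent_exact_agreement(primary, secondary):
    """Trial-by-trial exact agreement between two observers, as a percentage."""
    if len(primary) != len(secondary):
        raise ValueError("Observers must score the same number of trials.")
    agreements = sum(p == s for p, s in zip(primary, secondary))
    return 100 * agreements / len(primary)

def percent_integrity(trials_correct):
    """Share of trials implemented with 100% accuracy, as a percentage."""
    return 100 * sum(trials_correct) / len(trials_correct)

# Hypothetical 12-trial session in which the observers disagree on one trial.
primary   = ["correct"] * 6 + ["error"] * 3 + ["no response"] * 3
secondary = ["correct"] * 6 + ["error"] * 2 + ["no response"] * 4
```

For this example session, agreement is 11 of 12 trials, about 91.7%, which would fall within the ranges the authors report.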

Color Preference Assessment

We paired a distinct color card with each condition to assist participants with discriminating among the contingencies associated with each condition. Prior to the start of training, we conducted a preference assessment with each participant to identify colors that were neither high- nor low-preference. We ensured that each participant could label each color or match each color (Hannah only) before conducting the preference assessment. We conducted three brief multiple-stimulus-without-replacement (MSWO; Carr, Nicolson, & Higbee, 2000) preference assessments with eight (Blake, Garrett, and Bella) or two sets of four (Hannah) different color cards. We identified color cards that each participant selected during approximately the same percentage of trials during the MSWO, and then randomly assigned those colors to each experimental condition.

Experimental Design and General Procedure

We used an adapted alternating treatments design (Sindelar, Rosenberg, & Wilson, 1985) to compare up to five error-correction


procedures during abbreviated and validation assessments. For each participant, we identified a total of 24 to 57 target responses to teach during the assessments. Once we identified the targets, we assigned three targets to each condition in the abbreviated and validation assessments. For Hannah, we assigned two target responses to each condition. We used a logical analysis method to equate the difficulty of targets assigned to each condition (Wolery, Gast, & Hammond, 2010). First, for participants with target responses that required a vocal response, we conducted an echoic assessment with each of the target responses. The purpose of the echoic assessment was to identify any words that the participant echoed inconsistently or with poor articulation. During the echoic assessment, a trial began with the therapist presenting a vocal model of a target response (e.g., "ride"). If the participant correctly echoed the vocal model, the therapist provided immediate praise (e.g., "Great job saying ride!"), and following approximately every two correct responses, the therapist provided brief access to a preferred item. None of the instructional stimuli were present (e.g., picture of a bike) during the echoic assessment. Each target was presented up to three times. We excluded targets that the participant had difficulty echoing (e.g., the response was segmented), echoed inconsistently, or that sounded too similar to another target. Next, we assigned targets with the same number of syllables to each condition. Finally, we made sure that we did not assign targets sharing similar stimulus properties (e.g., sound overlap) to the same conditions. For Hannah and Bella, we also took steps to equate the visual properties of the picture cards assigned to each condition. For example, we assigned targets of similar sizes, shapes, and colors to each condition, and we made sure that we did not assign pictures with similar visual properties (e.g., color) to the same condition.

Each session consisted of 12 trials, with each of the target stimuli presented four times (or six times for Hannah). Prior to the start of training, we conducted a minimum of two baseline sessions to ensure that the participant's correct responding was at or below chance level. At the start of each session, the therapist held up the color card associated with that condition, labeled the color, prompted the participant to label or touch the color card (Hannah only), and then placed the color card on the table in the participant's line of sight for the remainder of the session. At the start of a trial, the therapist presented a picture card, secured the participant's attention, and presented an instruction (e.g., "Read the word"). The participant had 5 s to respond. If the participant responded correctly, incorrectly, or did not respond within 5 s of the instruction, the therapist removed the picture card. The therapist did not provide differential consequences for correct or incorrect responses; however, previously mastered tasks were interspersed approximately every two trials, and the therapist delivered praise and a preferred item contingent on correct responses to the mastered task. We interspersed mastered tasks during baseline to promote participants' continued responding and appropriate session behavior.

We used a constant prompt-delay procedure across all conditions. At the start of training, the therapist used a 0-s prompt delay. The therapist secured the participant's attention, presented the instruction, and immediately delivered a prompt. After two consecutive sessions in which the participant engaged in prompted responses during at least 92% of trials, the delay from the instruction to the prompt was increased to a constant 2 s (Blake, Garrett, and Bella) or 5 s (Hannah). Correct responses that occurred within the programmed prompt delay resulted in immediate praise and 25-s access to a preferred edible or tangible item, identified via daily brief MSWO preference assessments. For Hannah, we provided both a preferred edible and tangible item following a correct response. If the participant


engaged in an incorrect response or did not respond within the prompt delay, the therapist's response varied depending on the different experimental conditions.
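The delay-fading rule above (a 0-s delay until two consecutive sessions with prompted responding on at least 92% of trials, then a constant terminal delay) can be sketched as a simple decision function. This is our illustration only; the function name and data format are hypothetical:

```python
def prompt_delay_for_next_session(pct_prompted_by_session, terminal_delay=2.0):
    """Return the prompt delay (in seconds) for the next session.

    Stays at 0.0 until two consecutive sessions reach >= 92% prompted
    responding; thereafter returns the constant terminal delay (2 s for
    Blake, Garrett, and Bella; 5 s for Hannah).
    """
    streak = 0
    for pct in pct_prompted_by_session:
        streak = streak + 1 if pct >= 92 else 0
        if streak >= 2:
            return terminal_delay  # criterion met; delay stays constant
    return 0.0
```

For example, a history of [100, 83, 100] has no two consecutive criterion sessions, so the delay remains 0 s; [100, 92] meets the criterion and moves to the terminal delay.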

PHASE 1: ABBREVIATED ASSESSMENT

Procedure

The purpose of the abbreviated assessment was to conduct a brief comparison of error-correction procedures to identify a procedure associated with the highest frequency of correct responses and the lowest frequency of errors and error-correction trials. We conducted a minimum of three (36 trials) and a maximum of five (60 trials) training sessions with each error-correction procedure during the abbreviated assessment. If participants responded correctly during 90% or more trials for one training session, then we terminated the abbreviated assessment before conducting 60 trials. Using this criterion, we conducted a total of 36 trials with each procedure for Blake, 48 trials for Garrett, and 60 trials for Hannah and Bella.

For the abbreviated assessment, we selected a sample of error-correction procedures that had been evaluated in recent comparison studies. Additionally, we included the error-correction procedures found to be most efficient for one or more participants from the Carroll et al. (2015) and McGhan and Lerman (2013) studies. We compared four (Hannah) or five (Blake, Garrett, and Bella) error-correction procedures during the abbreviated assessment. Hannah's target response was matching associated pictures in a two-card array, and she had a history of responding exclusively to one side of the array under conditions of differential reinforcement. To minimize the possibility of strengthening a side bias, we did not include the no-error-correction procedure in her abbreviated assessment.

No error correction (differential reinforcement only). If the participant engaged in an error or did not respond within the prompt delay, the therapist ended the trial. During this condition, we did not conduct error-correction trials.

Model. If the participant engaged in an error or did not respond within the prompt delay, the therapist provided a vocal model of the correct response and immediately ended the trial. For example, if the participant engaged in an incorrect response when presented with the sight word "like," then the therapist would say, "This is like," and end the trial. The therapist did not require the participant to echo the model, and if the participant did echo the therapist's model, the therapist did not provide any differential consequences. For Hannah, following an error or no response, the therapist modeled the correct matching response (i.e., placing the picture card on the correct comparison picture) while saying, "Match like this," and then ended the trial. During the model condition, we scored an error-correction trial each time the therapist modeled the correct response following an error or no response.

Single-response repetition. If the participant engaged in an error or did not respond within the prompt delay, then the therapist modeled the correct response and gave the participant 2 s to echo the correct response. If the participant correctly echoed the therapist's model, then the therapist provided general praise (e.g., "Good") and ended the trial. If the participant responded incorrectly or did not respond to the therapist's model within 2 s, then the therapist ended the trial.

For Hannah, following an error or no response, the therapist physically guided her to place the picture on the correct comparison while re-presenting the instruction, "Match." The therapist provided praise and a small edible item paired with 25-s access to a preferred tangible item following a prompted response. Once Hannah responded correctly to the initial instruction during 75% of trials for three consecutive sessions, the therapist provided only praise for prompted responses (i.e., we no longer provided an edible and tangible item) for the remainder of training sessions. We provided reinforcement for prompted responses for Hannah because in the past she tended to respond exclusively to one side of the array when we withheld reinforcement for prompted responses during initial training sessions. Similar to the model condition, we scored one error-correction trial each time the therapist modeled or physically guided the correct response following an error or no response.

Re-present until independent. If the participant engaged in an error or did not respond within the prompt delay, then the therapist provided a vocal model of the correct response. If the participant echoed the therapist's model, or 2 s had passed without a response, then the therapist re-presented the trial. The therapist continued to re-present the trial until the participant responded correctly to the instruction or until a total of 10 error-correction trials were presented without a correct response. If the participant responded correctly to the instruction on an error-correction trial, then the therapist presented general praise (e.g., "Right") and presented the next trial.

For Hannah, following an error or no response, the therapist physically guided the correct response while re-presenting the instruction "match." The therapist then re-presented the trial until she responded correctly to the instruction or 10 error-correction trials were presented without a correct response. If Hannah responded correctly to the instruction on an error-correction trial, then the therapist provided praise and a small edible item paired with 25-s access to a preferred tangible item. Once Hannah responded correctly to the initial instruction during 75% of trials for three consecutive sessions, the therapist provided praise only for correct responses following an error-correction trial. During the re-present until independent condition, we scored an error-correction trial each time the therapist modeled or physically guided the correct response following an error or no response on the initial trial, and each time the therapist had to re-present the trial.

Multiple-response repetition. If the participant engaged in an error or did not respond within the prompt delay, then the therapist repeated the instruction and immediately modeled the correct response. The therapist repeated the instruction with an immediate model of the correct response until the participant echoed the correct response during five error-correction trials. If the participant did not echo the therapist's model during one of the error-correction trials, then the therapist continued to present the instruction with an immediate model until the participant echoed the correct response a total of five times or until a total of 10 error-correction trials were presented without five correct responses. For Hannah, following an error or no response, the therapist physically guided the correct response while re-presenting the instruction "match" five times. Immediately after the fifth error-correction trial, the therapist provided praise and a small edible item paired with 25-s access to a preferred tangible item. Once Hannah responded correctly to the initial instruction during 75% of trials for three consecutive sessions, the therapist provided praise only for correct responses following an error-correction trial. During the multiple-response repetition condition, we scored a minimum of five error-correction trials following every trial with an error or no response.
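The consequence each condition arranges after an error (or no response) can be summarized by a rough simulation of how many error-correction trials get scored. This is an illustrative sketch under our own simplifying assumptions; the function name, string labels, and response-sequence abstraction are not from the study:

```python
def error_correction_trials(procedure, later_responses=()):
    """Approximate the number of error-correction trials scored after one
    initial error, per the contingencies described above. `later_responses`
    lists the learner's subsequent responses as "correct" or "error"."""
    if procedure == "no error correction":
        return 0  # the trial simply ends; no error-correction trials
    if procedure in ("model", "single-response repetition"):
        return 1  # one model (echo optional, or a single 2-s echo opportunity)
    if procedure == "re-present until independent":
        trials = 1  # the initial model or guided response counts as one trial
        for r in later_responses:
            trials += 1  # each re-presented trial is also scored
            if r == "correct" or trials >= 10:
                break
        return trials
    if procedure == "multiple-response repetition":
        trials = echoes = 0
        for r in later_responses:
            trials += 1
            echoes += r == "correct"
            if echoes == 5 or trials >= 10:  # five echoes required, capped at 10
                break
        return trials
    raise ValueError(procedure)
```

Note the minimum for multiple-response repetition is five, consistent with the text's statement that at least five error-correction trials followed every trial with an error.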

PHASE 2: VALIDATION ASSESSMENT

Procedure

During the validation assessments, we conducted an extended comparison of the error-correction procedures evaluated during the abbreviated assessment. Specifically, we conducted training sessions until participants reached a prespecified mastery criterion for target responses trained with one or more error-correction procedures. The purpose of the validation assessments was to test the predictive validity of the abbreviated assessment. We conducted two validation assessments for Blake, Hannah, and Bella, and three validation assessments for Garrett. We identified new sets of target responses for each participant, and assigned targets to each condition using a logical analysis (Wolery et al., 2010). During the validation assessments for Blake and Bella, we compared four of the five error-correction procedures from the abbreviated assessment (i.e., the two procedures that received the highest scores and the two procedures that received the lowest scores). During the validation assessments for Garrett, we compared all five error-correction procedures for the first two validation assessments, and during the third validation assessment, we compared four of the five error-correction procedures from the abbreviated assessment. We removed the multiple-response repetition procedure from Garrett's final validation assessment because he engaged in challenging behavior (e.g., aggression and noncompliance) on a high percentage of trials during the first two validation assessments. During the validation assessments for Hannah, we compared the same four error-correction procedures assessed during the abbreviated assessment.

We considered a set of target responses mastered once participants responded correctly during at least 92% of trials for two consecutive sessions. We applied an early-termination criterion if target responses in one condition were still in training after mastery of another set of target responses. That is, the therapist stopped conducting sessions in a condition that exceeded approximately two times the number of sessions required to master a set of targets in another condition. For Garrett only, the therapist stopped conducting sessions in a condition once it exceeded three times the number of sessions required to master a set of targets in another condition.
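The mastery and early-termination rules can be expressed compactly. The sketch below is our own illustration (names and data shapes are assumed), not code from the study:

```python
def condition_status(pct_correct_by_session, fastest_mastery=None, multiplier=2):
    """Classify one condition's training status. Mastery: correct
    responding on at least 92% of trials for two consecutive sessions.
    Early termination: the session count exceeds roughly `multiplier`
    times the sessions the fastest-mastered condition needed
    (multiplier=3 was used for Garrett)."""
    for i in range(1, len(pct_correct_by_session)):
        if min(pct_correct_by_session[i - 1], pct_correct_by_session[i]) >= 92:
            return "mastered", i + 1  # sessions to mastery
    n = len(pct_correct_by_session)
    if fastest_mastery is not None and n > multiplier * fastest_mastery:
        return "terminated", n
    return "in training", n
```

For example, `condition_status([50, 92, 95])` returns `("mastered", 3)`, while a condition still below criterion after more than twice the fastest condition's session count would be classified as terminated.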

RESULTS

The results for Blake are depicted in Figure 1. Blake engaged in the highest frequency of correct responses and the lowest frequency of errors during the multiple-response repetition condition. The single-response repetition condition required the fewest number of error-correction trials during the abbreviated assessment, and it was the condition associated with the second highest frequency of correct responses and second lowest frequency of errors. The results of Blake's abbreviated assessment suggested that the multiple-response repetition (86% of points) and single-response repetition procedures (86% of points) would be his most efficient error-correction procedures. The results of Blake's validation assessments were consistent with the abbreviated assessment. That is, Blake mastered the targets in the multiple-response repetition condition in the fewest number of sessions during both validation assessments. When assessing trials to mastery, Blake mastered the targets in the single-response repetition condition in the fewest number of trials and mastered the targets in the multiple-response repetition condition in the fewest minutes of training during both validation assessments. Blake did not master the targets taught with the no-error-correction procedure during either of the validation assessments.

Figure 2 shows the results for Garrett. The re-present until independent condition required the fewest number of error-correction trials, and Garrett engaged in the highest frequency of correct responses and lowest frequency of errors during that condition. The results of Garrett's abbreviated assessment suggested that the re-present until independent procedure (100% of points) would be his most efficient error-correction procedure. During Garrett's first validation assessment, he mastered the targets from the multiple-response repetition condition in the fewest number of sessions. He also


Figure 1. First panel shows the cumulative frequency of correct responses, errors, and error-correction trials (left panel) and the percentage of points and training time (right panel) for Blake during the abbreviated assessment. We did not include data from baseline sessions in the figures for the abbreviated assessment. Blake did not engage in any correct responses during baseline. Percentage of correct responses (left panel) and the frequency of training trials and duration of training to mastery (right panel) for Blake during the first (second panel) and second (third panel) validation assessments across the no-error-correction, single-response repetition, re-present until independent, and multiple-response repetition conditions. Data from the 0-s prompt-delay sessions are not depicted in the figures. *Indicates a condition terminated before a participant's correct responding reached mastery. Trials and training time are for sessions conducted before the early-termination criterion was met.


mastered the targets from the multiple-response repetition condition in the fewest number of trials and minutes of training. The multiple-response repetition condition was the procedure associated with the second highest frequency of correct responses and second lowest frequency of errors during the abbreviated assessment (64% of points). Garrett also mastered the targets from the single-response repetition condition; however, he did not master the targets from the other three conditions. Although Garrett did not master the targets from the re-present until independent condition during the first validation assessment, his correct responding reached 92% correct during four nonconsecutive sessions.

During Garrett's second validation assessment (Figure 2; third panel), he mastered the targets from the single-response repetition condition in the fewest number of sessions, followed by the re-present until independent condition. He mastered the targets from the single-response repetition condition following the fewest number of trials and minutes of training. Garrett did not master the targets from any other condition; however, correct responding during the multiple-response repetition condition reached 92% correct during two nonconsecutive sessions. During the second validation assessment, Garrett engaged in problem behavior (e.g., aggression and noncompliance) on a high percentage of trials (M = 43%; range, 8% to 92%) during the multiple-response repetition condition. Therefore, we did not include the multiple-response repetition procedure in his final validation assessment. During the third validation assessment, Garrett mastered the targets from the re-present until independent and single-response repetition conditions in the same number of sessions. He also mastered the targets from the model condition; however, it took substantially longer than the other two conditions. Garrett mastered the targets from the single-response repetition condition following the fewest number of trials and minutes of training. During all three validation assessments, Garrett never responded correctly during the no-error-correction condition. Overall, for Garrett there was less correspondence between the abbreviated and validation assessments when compared to Blake. The results of the abbreviated assessment for Garrett suggested that the re-present until independent procedure would be his most efficient error-correction procedure during the validation assessment. This procedure was found to be his second most efficient error-correction procedure during two of the three validation assessments.

Figure 3 shows the results for Hannah. Hannah engaged in the highest frequency of correct responses and lowest frequency of errors during the multiple-response repetition condition. The model condition required the fewest number of error-correction trials during the abbreviated assessment. The results of Hannah's abbreviated assessment suggested that the multiple-response repetition procedure (83% of points) would be her most efficient error-correction procedure. During the first validation assessment, Hannah mastered the targets in all conditions. However, she mastered the targets from the multiple-response repetition and single-response repetition conditions in the fewest number of sessions. Hannah mastered the targets in the single-response repetition condition in the fewest number of trials, and she mastered the targets in the multiple-response repetition condition in the fewest minutes of training. During Hannah's second validation assessment, she again mastered the targets in all conditions, but she mastered targets from the multiple-response repetition condition in the fewest number of sessions. Hannah mastered the targets in the re-present until independent condition in the fewest number of trials and again mastered the targets in the multiple-response repetition condition in the fewest minutes of training. The results of Hannah's validation assessments were


Figure 2. First panel shows the cumulative frequency of correct responses, errors, and error-correction trials (left panel) and the percentage of points and training time (right panel) for Garrett during the abbreviated assessment. We did not include data from baseline sessions in the figures for the abbreviated assessment. Garrett did not engage in any correct responses during baseline. Percentage of correct responses (left panel) and the frequency of training trials and duration of training to mastery (right panel) for Garrett during the first (second panel), second (third panel), and third (fourth panel) validation assessments across the no-error-correction, model, single-response repetition, re-present until independent, and multiple-response repetition conditions. Data from the 0-s prompt-delay sessions are not depicted in the figures. *Indicates a condition terminated before a participant's correct responding reached mastery. Trials and training time are for sessions conducted before the early-termination criterion was met.


Figure 3. First panel shows the cumulative frequency of correct responses, errors, and error-correction trials (left panel) and the percentage of points and training time (right panel) for Hannah during the abbreviated assessment. We did not include data from baseline sessions in the figures for the abbreviated assessment. Hannah's target response was matching associated pictures in a two-card array; thus, chance-level responding for Hannah in baseline was around 50% correct. Hannah responded correctly during an average of 48% (range, 33%-66%) of trials during baseline sessions for the abbreviated assessment. Percentage of correct responses (left panel) and the frequency of training trials and duration of training to mastery (right panel) for Hannah during the first (second panel) and second (third panel) validation assessments across the model, single-response repetition, re-present until independent, and multiple-response repetition conditions. Data from the 0-s prompt-delay sessions are not depicted in the figures.


consistent with the results of her abbreviated assessment when measuring sessions and minutes to mastery during the validation assessments.

The results for Bella are depicted in Figure 4. For Bella, the model condition was associated with the second highest frequency of correct responses, the lowest frequency of errors, and the lowest frequency of error-correction trials. Bella engaged in the highest frequency of correct responses during the re-present until independent condition, and that condition was also associated with the second lowest frequency of errors. The results of Bella's abbreviated assessment suggested that the model procedure (93% of points) would be her most efficient error-correction procedure, followed by the re-present until independent procedure (79% of points).

During the first validation assessment (Figure 4; second panel), Bella mastered the targets in the model condition in the fewest number of sessions. The model condition was also associated with the fewest number of trials and minutes of training to mastery. Bella also mastered the targets from the re-present until independent condition. She did not master the targets from the other two conditions; however, during the multiple-response repetition condition, Bella's correct responding reached 100% during one session. During Bella's second validation assessment, she only mastered the targets from the re-present until independent condition. The re-present until independent condition was associated with the highest frequency of correct responses and the second lowest frequency of errors during Bella's abbreviated assessment. Bella did not master the targets from any of the other conditions during the second validation assessment; however, she did respond correctly during 83% of trials for the last three training sessions in the model condition. The results of Bella's validation assessments showed partial correspondence with the results of her abbreviated assessment. The results of Bella's abbreviated assessment suggested that the model condition would be her most efficient error-correction procedure, followed by the re-present until independent procedure.

Overall, the results of the abbreviated assessment corresponded with the results of the validation assessments for two of the four participants (Blake and Hannah). We observed partial correspondence between the results of the abbreviated assessment and the validation assessments for the other two participants (Garrett and Bella). The procedure that resulted in the highest percentage of points during the abbreviated assessment was the most efficient procedure during six out of the nine validation assessments (67% accuracy) when measuring sessions to mastery, three out of the nine assessments (33% accuracy) when measuring trials to mastery, and five out of the nine assessments (56% accuracy) when measuring minutes to mastery. When the abbreviated assessment and validation assessments did not correspond, the procedure that resulted in the highest percentage of points during the abbreviated assessment was the second most efficient procedure during two out of the three validation assessments (67% accuracy) when measuring sessions to mastery, three out of the six assessments (50% accuracy) when measuring trials to mastery, and two out of the four assessments (50% accuracy) when measuring minutes to mastery.
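The accuracy values reported for the correspondence analysis are simple proportions of validation assessments. As a quick worked check (illustrative only, not part of the study's analysis):

```python
def pct(k, n):
    """Proportion k/n rounded to the nearest whole percent."""
    return round(100 * k / n)

# Most efficient procedure matched the abbreviated-assessment prediction in:
assert pct(6, 9) == 67  # 6 of 9 assessments, sessions to mastery
assert pct(3, 9) == 33  # 3 of 9 assessments, trials to mastery
assert pct(5, 9) == 56  # 5 of 9 assessments, minutes to mastery
```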

DISCUSSION

This study evaluated the predictive validity of an abbreviated assessment for identifying the most effective and efficient error-correction procedure for individual learners. The results showed high correspondence between the abbreviated assessment and one or both of the validation assessments for two of the four participants and partial correspondence for the other two participants. Blake and Hannah acquired the skills taught with the procedure predicted to be their most efficient error-correction procedure during the abbreviated assessment in the fewest number of training sessions during both validation assessments. Bella acquired the skills taught with the procedure predicted to be her most efficient error-correction procedure in the fewest number of sessions during her first validation assessment. During her second validation assessment, she acquired the skills taught with the procedure predicted to be her second most efficient error-correction procedure in the fewest number of sessions. For Garrett, the results of the abbreviated assessment were less consistent. The procedure predicted to be his most efficient error-correction procedure during the abbreviated assessment was found to be the second most efficient error-correction procedure during two of the three validation assessments.

Figure 4. First panel shows the cumulative frequency of correct responses, errors, and error-correction trials (left panel) and the percentage of points and training time (right panel) for Bella during the abbreviated assessment. We did not include data from baseline sessions in the figures for the abbreviated assessment. Bella did not engage in any correct responses during baseline. Percentage of correct responses (left panel) and the frequency of training trials and duration of training to mastery (right panel) for Bella during the first (second panel) and second (third panel) validation assessments across the no-error-correction, model, re-present until independent, and multiple-response repetition conditions. Data from the 0-s prompt-delay sessions are not depicted in the figures. *Indicates a condition terminated before a participant's correct responding reached mastery. Trials and training time are for sessions conducted before the early-termination criterion was met.

Our study extends the error-correction literature by evaluating the predictive validity of an abbreviated assessment of error-correction procedures. Previous studies comparing the efficiency of different error-correction procedures conducted training sessions until participants mastered skills from one or more of the instructional conditions. Thus, these studies conducted assessments that could be time intensive, which may not be practical for use by educators. In the current study, we conducted a set number of training sessions across conditions during the abbreviated assessment, which required on average 2.6 hr (range, 1.7 hr-3.6 hr) to complete. In comparison, the first validation assessment for each participant required on average 5.7 hr (range, 2.4 hr-9.4 hr) to complete. Thus, we were able to complete the abbreviated assessment in less than half the time that would have been required had we conducted training sessions until participants reached mastery-level responding. We observed the most time savings for Hannah, who was the participant with the fewest number of skills prior to the start of this study. Hannah's abbreviated assessment required 3.6 hr to complete, compared to her first validation assessment, which required

9.4 hr to complete. For early learners like Hannah, conducting abbreviated assessments has the potential to save valuable intervention time.

In addition to saving time by conducting a briefer initial assessment, the results of the validation assessments suggest that clinicians may also save valuable intervention time by using the procedure identified during the abbreviated assessment. Participants mastered the targets taught with the procedure predicted to be most efficient during the abbreviated assessment during seven out of the nine validation assessments. For these seven validation assessments, we compared the time required for participants to master the targets with the procedure identified during the abbreviated assessment and with the procedure that was found to be least efficient (i.e., took the longest duration of training for the participant to reach mastery) during the validation assessment. For Garrett, the re-present until independent procedure was predicted to be his most efficient error-correction procedure during the abbreviated assessment. During his second validation assessment he mastered the targets from the single-response repetition condition first (51 min of training), followed by the targets from the re-present until independent condition (97 min of training). Garrett did not master the targets from any of the other conditions. So, although the re-present until independent condition was more effective than three of the other error-correction procedures, we cannot calculate potential time savings because he did not master the targets from the other three conditions. For the remaining six assessments, it took participants on average an additional 39 min (range, 8 min-85 min) of training to master the targets taught with the least efficient error-correction procedure when compared to the procedure identified during the abbreviated assessment. Again, we observed the most time savings for Hannah. On average, it took her an additional 55 min of training to master the targets taught with the least efficient error-correction procedure when compared to the targets taught with the procedure identified during the abbreviated assessment (i.e., the multiple-response repetition procedure). These results should be interpreted with caution because for two of the nine validation assessments (Garrett Set 1 and Bella Set 2) we terminated training for targets taught with the procedure identified during the abbreviated assessment before the participants reached mastery-level responding. Thus, for these two sets, it is not clear whether using the procedure identified during the abbreviated assessment would have resulted in any time savings, because we did not conduct training until the participants mastered targets from all conditions.

Consistent with previous studies, we found

that the most efficient error-correction procedure varied across individual learners (e.g., Carroll et al., 2015; Rodgers & Iwata, 1991). The model (or demonstration) procedure was one of the most efficient error-correction strategies for the majority of the participants assessed by McGhan and Lerman (2013) and Kodak et al. (2016). In the current study, the model procedure was the most efficient error-correction procedure for only one of our four participants (Bella). Kodak et al. found that the model procedure was most efficient for those participants who echoed the therapist's model during error correction. During the model condition, Bella consistently echoed the therapist's model across the abbreviated assessment (M = 88% of error-correction trials) and validation assessments (M = 62% of error-correction trials). In comparison, Blake and Garrett were much less likely to echo the therapist's model. During the abbreviated assessment, Blake echoed during an average of 27% of error-correction trials. Garrett echoed during an average of 31% of error-correction trials during the abbreviated assessment, and an average of 23% of error-correction trials during the validation assessments. Although participants are not required to respond during error correction in the model condition, it is possible that the model procedure will be most efficient for those individuals who engage in an active response during error correction. Additional research is needed to assess the prerequisite skills necessary for an individual child to benefit from a particular error-correction procedure. For example, it may be appropriate for educators to include a model procedure in the abbreviated assessment only if a child consistently echoes responses modeled by a therapist. Participants in the current study frequently responded correctly during error-correction trials when an active response was required (e.g., single-response repetition); however, on occasion, a participant would fail to respond on an error-correction trial. For example, during the multiple-response repetition condition, the therapist occasionally had to model the correct response more than five times because the participant did not echo the correct response during one of the error-correction trials. Future research should assess the extent to which failing to engage in an active response when one is required influences the efficiency of an error-correction procedure.

We observed high correspondence among
all three measures of procedural efficiency (i.e., total sessions, trials [including error-correction trials], and training time to mastery). During six of the nine validation assessments, all three measures of efficiency corresponded for the most efficient error-correction procedure. That is, the procedure associated with the fewest sessions to mastery was also associated with the fewest training trials and the shortest training time. Two of our measures of efficiency (frequency of sessions and training time to mastery) corresponded for the most efficient error-correction procedure across all validation assessments. Consistent with the findings of Carroll et al. (2015) and Kodak et al. (2016), we typically observed low correspondence between our measures of efficiency for the multiple-response repetition procedure. Participants mastered
targets in the multiple-response repetition procedure during five validation assessments, and in three of those five assessments, the number of training trials did not correspond with the other efficiency measures. For example, for Hannah Set 2, the multiple-response repetition procedure was the most efficient procedure when measuring the total number of sessions and training time; however, it was the third most efficient procedure when measuring the total number of trials to mastery. During the multiple-response repetition procedure, following an error, the therapist re-presented the instruction and either immediately modeled the correct response or physically guided the correct response (Hannah only). Thus, participants' latency to respond during error-correction trials tended to be brief, which may account for the discrepancy we observed between the total trials and training time to mastery. The results of the current study suggest that measuring either sessions to mastery or minutes to mastery may provide the most sensitive measure of procedural efficiency.

We included three primary dependent measures during the abbreviated assessment: the frequency of correct responses, errors, and error-correction trials. When ranking the different error-correction procedures, we weighted each of these measures equally. It is possible that one or more of these variables is more likely to be predictive of the most efficient error-correction procedure. For example, if we had assessed only the frequency of correct responses, we would have made the same predictions about the most efficient error-correction procedure for all but one participant (Bella). However, we chose to include the frequency of errors as one of our primary dependent measures because there are potential benefits to minimizing errors when teaching new skills (see Mueller, Palkovic, & Maynard, 2007). We also included the frequency of error-correction trials because frequent error-correction trials may increase the aversiveness of teaching sessions. Prior to the start of this study, Garrett had a history of engaging in low-to-moderate levels of problem behavior (e.g., negative vocalizations, aggression) during DTI. In the current study, Garrett engaged in problem behavior during the highest percentage of trials in the condition associated with the highest frequency of error-correction trials. For example, during Garrett's abbreviated assessment, the therapist implemented 81 error-correction trials during the multiple-response repetition condition, and Garrett engaged in problem behavior during 44% of the trials for that condition, compared to an average of only 20% of the trials (range, 13% to 29%) for the other conditions. Nonetheless, for the participants in the current study, the abbreviated assessment ranking procedure did not result in any additional benefits over measuring the cumulative frequency of correct responses alone. Thus, additional research is needed to assess the variables most likely to influence the predictive validity of an abbreviated assessment. Future research may also evaluate the extent to which the frequency of errors and error-correction trials during abbreviated assessments influences clinicians' or learners' preferences for using different error-correction procedures. For example, when multiple procedures are effective for an individual learner, a clinician or the learner may prefer the procedure associated with fewer errors and error-correction trials.

There are some potential limitations to the current study. First, we did not control for participants' histories with different error-correction procedures prior to the start of this study. Two of the four participants (Blake and Bella) had not been exposed to DTI prior to participating in this study. Hannah and Garrett had been receiving DTI for fewer than 10 hr a week for less than a year and had previous experience with the re-present until independent error-correction procedure. For Garrett, the re-present until independent error-correction procedure was one of his most efficient procedures; however, for Hannah, the multiple-response repetition procedure (a novel error-correction procedure) was most efficient. Thus, previous experience with a particular error-correction procedure did not consistently predict which procedure would be most efficient. Second, for each participant we assessed the same type of skill across the abbreviated and validation assessments (e.g., reading sight words). It is possible that the efficiency of an error-correction procedure may vary across skill type. Additional research should assess the use of an abbreviated assessment for predicting the most efficient error-correction procedure across a range of different skills. Overall, the results of the current study suggest that an abbreviated assessment may be a useful tool for quickly identifying an efficient error-correction procedure for an individual learner. Future research should assess the predictive validity of an abbreviated assessment for identifying other variables likely to influence the efficiency of DTI (e.g., consequences for correct responses; see Joachim & Carroll, 2017).
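The equal-weight ranking procedure described in the discussion (ranking each error-correction procedure on correct responses, errors, and error-correction trials, then combining the ranks) can be sketched as follows. This is an illustrative sketch only, not the authors' analysis code; the condition names mirror those in the study, but all counts are hypothetical.

```python
# Illustrative sketch of an equal-weight ranking across three abbreviated-
# assessment measures. All data values below are hypothetical.

# For each procedure: (correct responses, errors, error-correction trials).
# More correct responses is better; fewer errors and fewer error-correction
# trials is better.
data = {
    "model": (22, 14, 14),
    "single_response_repetition": (30, 9, 9),
    "multiple_response_repetition": (26, 12, 81),
    "re_present_until_independent": (28, 10, 10),
}

def rank(values, reverse=False):
    """Map each observed value to its rank (1 = best); ties share a rank."""
    order = sorted(set(values), reverse=reverse)
    return {v: order.index(v) + 1 for v in values}

names = list(data)
r_correct = rank([data[n][0] for n in names], reverse=True)  # higher is better
r_errors = rank([data[n][1] for n in names])                 # lower is better
r_ec = rank([data[n][2] for n in names])                     # lower is better

# Equal weighting: sum the three ranks; the lowest total predicts the
# most efficient procedure.
totals = {
    n: r_correct[data[n][0]] + r_errors[data[n][1]] + r_ec[data[n][2]]
    for n in names
}
predicted = min(totals, key=totals.get)
print(predicted)
```

With these hypothetical counts, the single-response repetition condition ranks first on all three measures and would be the predicted procedure; weighting only correct responses would amount to using `r_correct` alone, as the discussion considers.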

REFERENCES

Carr, J. E., Nicolson, A. C., & Higbee, T. S. (2000). Evaluation of a brief multiple-stimulus preference assessment in a naturalistic context. Journal of Applied Behavior Analysis, 33, 353-357. https://doi.org/10.1901/jaba.2000.33-353

Carroll, R. A., Joachim, B. T., St. Peter, C. C., & Robinson, N. (2015). A comparison of error-correction procedures on skill acquisition during discrete-trial instruction. Journal of Applied Behavior Analysis, 48, 257-273. https://doi.org/10.1002/jaba.205

Dunn, L. M., & Dunn, D. M. (2007). Peabody Picture Vocabulary Test Manual (4th ed.). Minneapolis, MN: Pearson.

Joachim, B. J., & Carroll, R. A. (2017). A comparison of consequences for correct responses on skill acquisition during discrete trial instruction. Learning and Motivation. https://doi.org/10.1016/j.lmot.2017.01.002

Kodak, T., Campbell, V., Bergmann, S., LeBlanc, B., Kurtz-Nelson, E., Cariveau, T., … Mahon, J. (2016). Examination of efficacious, efficient, and socially valid error-correction procedures to teach sight words and prepositions to children with autism spectrum disorder. Journal of Applied Behavior Analysis, 49, 532-547. https://doi.org/10.1002/jaba.310

McGhan, A. C., & Lerman, D. C. (2013). An assessment of error-correction procedures for learners with autism. Journal of Applied Behavior Analysis, 46, 626-639. https://doi.org/10.1002/jaba.65

Mueller, M. M., Palkovic, C. M., & Maynard, C. S. (2007). Errorless learning: Review and practical application for teaching children with pervasive developmental disorders. Psychology in the Schools, 44, 691-700. https://doi.org/10.1002/pits.20258

Rodgers, T. A., & Iwata, B. A. (1991). An analysis of error-correction procedures during discrimination training. Journal of Applied Behavior Analysis, 24, 775-781. https://doi.org/10.1901/jaba.1991.24-775

Sindelar, P. T., Rosenberg, M. S., & Wilson, R. J. (1985). An adapted alternating treatments design for instructional research. Education and Treatment of Children, 8, 67-76.

Smith, T., Mruzek, D. W., Wheat, L. A., & Hughes, C. (2006). Error correction in discrimination training for children with autism. Behavioral Interventions, 21, 245-263. https://doi.org/10.1002/bin.223

Sundberg, M. L. (2008). Verbal behavior milestones assessment and placement program. Concord, CA: AVB Press.

Williams, K. T. (2007). Expressive vocabulary test (2nd ed.). Minneapolis, MN: Pearson.

Wolery, M., Gast, D. L., & Hammond, D. (2010). Comparative intervention designs. In D. L. Gast (Ed.), Single subject research methodology in behavioral sciences (pp. 329-381). New York, NY: Routledge/Taylor & Francis Group.

Worsdell, A. S., Iwata, B. A., Dozier, C. L., Johnson, A. D., Neidert, P. L., & Thomason, J. L. (2005). Analysis of response repetition as an error-correction strategy during sight-word reading. Journal of Applied Behavior Analysis, 38, 511-527. https://doi.org/10.1901/jaba.2005.115-04

Received July 22, 2016
Final acceptance April 20, 2018
Action Editor, Jennifer Austin
