

Psychology in the Schools, 1980, 17, 541-542.

EVALUATION OF SIMULATED DECISION MAKING: A RESPONSE TO YSSELDYKE, ALGOZZINE, REGAN, AND POTTER

DAN WRIGHT

Ralston Public Schools

A recent study challenged the ability of evaluation teams to select technically adequate data sources. Aspects of methodology and focus of inquiry restrict the interpretation of results. A more productive line of inquiry is suggested.

Results recently reported by Ysseldyke, Algozzine, Regan, and Potter (1980) apparently demonstrate the inability of educational personnel to consider the technical merit of the test instruments they select for use in making placement decisions. Unfortunately, several aspects of the methodology and focus of inquiry make the results less than instructive.

The major finding by Ysseldyke et al. was that, although the participants in their computer-simulated decision-making exercise initially tended to choose data sources that were adequate in terms of norms, reliability, and validity, their subsequent choices were of less adequate sources. However, one must recall that the authors themselves rated only 24% of the pool of instruments made available to participants as having adequate norms and validity, and only 41% as having adequate reliability. While this pool may indeed reflect a lamentable dearth of technically adequate instruments, it also guarantees that with each choice of a good data source, participants were left with a poorer pool from which to make subsequent selections. It is inevitable that the soundness of the group's choices would decline after the second or third choice. Further, participants were presented with bogus referral statements and with data on all instruments within the average range of performance; faced with unenlightening data and a bogus problem, participants likely felt considerable implied pressure to continue data collection beyond the point at which they might ordinarily have stopped. Having drawn a blank with the best instruments available, might not any practitioner be willing to examine more diverse data sources in an attempt to discover a possible foundation for the referral statement? One would wish to examine the accuracy of the group's diagnoses, choice of data sources notwithstanding.
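To make the pool-depletion argument concrete, the following minimal sketch assumes a hypothetical pool of 100 instruments, of which 24 are rated adequate (matching the 24% figure above), and a "best-first" selector who takes an adequate instrument whenever one remains. The pool size and the selection rule are illustrative assumptions, not details reported by Ysseldyke et al.

```python
# Illustrative sketch only: a hypothetical pool of 100 instruments,
# 24 of which are rated adequate (the 24% figure cited above).
# A best-first selector exhausts the adequate instruments, so the
# adequacy of the remaining pool declines with every selection.

pool_size = 100   # assumed pool size, for illustration
adequate = 24     # instruments rated adequate on norms and validity

for pick in range(1, 11):
    share = adequate / pool_size
    print(f"before pick {pick:2d}: {adequate:2d} of {pool_size:3d} "
          f"remaining instruments are adequate ({share:.0%})")
    if adequate > 0:   # best-first: take an adequate source while any remain
        adequate -= 1
    pool_size -= 1
```

Under these assumptions the share of adequate instruments falls from 24% at the first pick to roughly 16% by the tenth, and reaches zero by the twenty-fifth; later choices are necessarily drawn from an ever-poorer pool.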

One would also wish to examine the acuity of selection by certain subgroups of the sample, such as the 25 school psychologists. Were they any better at selection than the others? Ysseldyke et al. apparently constructed their sample to represent the diversity of personnel who meet in multidisciplinary teams (one wonders why no parents were represented). However, the authors neglect the fact that under PL 94-142 the team as a whole is not required to select, or even strictly to interpret, specific tests, but rather to share complementary perspectives in determining a child's needs. For math teachers, building principals, school nurses, and the like to develop expertise in selecting and interpreting individual tests is not only unnecessary but redundant; they need only be able to consider statements of need as contributed by those recognized under the law as competent to render them. That so wide a diversity of personnel was able to make the relatively sound initial choices reported seems, when considered in this fashion, quite encouraging.

Requests for reprints should be addressed to Dan Wright, Ralston Public Schools, 8545 Park Drive, Ralston, NE 68127.


The present reply is intended not to blunt the line of inquiry initiated by Ysseldyke et al., but only to suggest that its focus is somewhat misplaced. Undeniably, there still exists a body of abominably poor diagnostic instruments that are blithely used daily in placement decisions. A more productive inquiry might consider to what extent their use influences the recommendations brought by a variety of individual consultants to the multidisciplinary team.

REFERENCE

Ysseldyke, J. E., Algozzine, B., Regan, R., & Potter, M. Technical adequacy of tests used by professionals in simulated decision making. Psychology in the Schools, 1980, 17, 202-209.