

Psychology in the Schools, October 1981, Vol. 18, No. 4, 505-507

RIGHT FOR THE WRONG REASONS: A CLARIFICATION FOR YSSELDYKE, ALGOZZINE, REGAN, AND POTTER

DAN WRIGHT

Ralston Public Schools

Clarification is made of several points raised in a recent critique of a study by Ysseldyke, et al. Although the deceptive nature of the task presented to the subjects may have contributed to spuriously poor performance in simulated decision making, a more fundamental flaw is seen in the focus on individual rather than group performance in evaluating the efficacy of diagnostic teams. It is suggested that the authors’ original study does not provide adequate support for their otherwise commendable conclusions.

I greatly appreciate the response and elaboration provided by Ysseldyke, Algozzine, Regan, and Potter (1980b), and can hardly disagree with their conclusion on the need for more technically adequate assessment practices. However, my critique (Wright, 1980) of their original article (Ysseldyke, et al., 1980a) was either poorly articulated or partly misinterpreted by the authors. As succinctly as possible I will reiterate. As I understand the authors’ description of their computer-simulated decision-making exercise, (a) the selection of technically inadequate assessment instruments was literally a function of the number of instruments selected by participants, and (b) participants were given bogus referral statements that could not be accounted for by the normal data made available to them. The situation in which participants found themselves was conducive to protracted data gathering and false-positive placement decisions. It was, in effect though not intent, a setup. I am certain similar abuses occur in actual placement decisions; I am equally certain the simulated process does not demonstrate how or how often this happens. Any parallel with reality breaks down because participants were presented with and asked to reconcile intractably dissonant “facts.” All that was demonstrated beyond a reasonable doubt is how poorly subjects can perform when they are duped.

If the participants were led, to whatever extent, toward making false-positive placement decisions, they had no opportunity to make true-positive or false-negative decisions, or to demonstrate the quality of accompanying data-selection procedures. The information presented to the participants was not truly representative of the types of situations they encounter professionally, and they unknowingly were severely limited in the types of decisions they were able to make. The methodology appears to have elicited the worst possible performance.

I will concede that the original authors and I are all operating on subjective interpretations of the requirements of PL 94-142. I still maintain we neither could nor should attempt to make every member of every diagnostic team proficient in the selection and interpretation of assessment instruments. However, my immediate concern here, which I apologize for previously having raised obliquely at best, is that the authors have confused group with individual performance. If we are to consider the performance of diagnostic teams, then it is inappropriate to focus on individuals who may perform differently in isolation. The authors have not made obvious reference to a body of existing literature on group problem-solving and decision-making processes. For a summary of this area of the literature, one may refer to Secord and Backman (1974). In outlining potential differences of group vs. individual processes, Secord and Backman note:




The outcome of a task situation is affected by group factors if they bring about changes in any of the following:

(1) the resources available to the group, (2) the application of these resources to the task, and (3) the likelihood . . . that agreement about the proper approach will be achieved. (p. 367)

The presumed advantage in the first area, resources available in decision making, is precisely the reason that multidisciplinary diagnostic teams are mandated in PL 94-142. The second area, application of those resources to decision making, involves questions of procedural safeguards and professional competence, such as awareness of the psychometric properties of assessment instruments. It is my contention that if such competencies are to be assessed of team members, they should be assessed only of those individuals charged by the team with the responsibility of collecting psychometric information. If such competency is construed as being demanded of the team as a whole, then it is improper to focus investigation on component individuals; group performance in technical considerations conceivably could approach that which would otherwise be demonstrated by its most competent member (Lorge & Solomon, 1955). The third area, likelihood of reaching agreement, involves such true group attributes as affect, power, and communication and status structures. None of these effects can be inferred from the isolated performances of individuals. Since placement decisions are mandated as a function of the group, and are thus subject to all the above influences, it appears totally improper to draw conclusions from the placement questions asked of individuals.

The objections I have raised must not be interpreted as an impassioned defense of the status quo in assessment. I have certainly been witness to the widespread disregard of the psychometric properties of popular assessment instruments. And, in fact, much of the literature I would beg to be considered does not augur well for the efficacy of team deliberations. In summarizing a chapter on the topic of group problem solving by Kelley and Thibaut (1969), Secord and Backman note, in part:

Where a problem is such that the activities of group members distract or otherwise interfere with each other, and where the solution is not obvious so that resistance to the acceptance of the best answer occurs, groups can be expected to perform worse than their most proficient members working alone. (p. 366)

To this I can only add, “I’ve been there.” However, although I agree with the general conclusions stated by Ysseldyke, et al., I remain unconvinced that the study they have described offers adequate support. Just as psychometrically inadequate assessment information should not be used to support even a correct placement decision, so inadequate methodology should not be used to support even laudable conclusions.

REFERENCES

Kelley, H. H., & Thibaut, J. W. Group problem solving. In G. Lindzey & E. Aronson (Eds.), The handbook of social psychology (2nd ed.), Vol. 4. Reading, MA: Addison-Wesley, 1969.

Lorge, I., & Solomon, H. Two models of group behavior in the solution of eureka-type problems. Psychometrika, 1955, 20, 139-148.

Secord, P. F., & Backman, C. W. Social psychology. New York: McGraw-Hill, 1974.



Wright, D. Evaluation of simulated decision making: A response to Ysseldyke, Algozzine, Regan, and Potter. Psychology in the Schools, 1980, 17, 541-542.

Ysseldyke, J. E., Algozzine, B., Regan, R., & Potter, M. Technical adequacy of tests used by professionals in simulated decision making. Psychology in the Schools, 1980, 17, 202-209. (a)

Ysseldyke, J. E., Algozzine, B., Regan, R., & Potter, M. On “unenlightening data” and “bogus problems:” A response to Dan Wright. Psychology in the Schools, 1980, 17, 543-544. (b)