
Cognitive Learning During Surgical Residency: A Model for Curriculum Evaluation

ROBERT S. RHODES, M.D., MARCIA Z. WILE, PH.D., JERRY M. SHUCK, M.D., D.Sc., MARJIE L. PERSONS, M.D.

The program summary of the American Board of Surgery In-Service Training Exam (ABSITE) can be used to quantitate cognitive learning during a surgical residency and to identify areas of curricular weakness in a residency program. Knowledge on each question is categorized as high (known) or low (unknown) depending on the percentage of residents who answered correctly. Knowledge of Level 1 (entry) residents is then compared with that of Level 5 (exit) residents. Each ABSITE question can thus be categorized on entry versus exit as known-known, unknown-unknown, unknown-known, or known-unknown. Only about half of the knowledge unknown on entry appears to become known on exit. Very little knowledge known on entry becomes unknown on exit. Weaknesses in specific subject areas can be readily identified by ranking questions according to the number of exiting residents who answer incorrectly. Use of this technique to quantitate cognitive learning in a residency program may allow objective assessment of changes in curriculum.

MANY CHARACTERISTICS ARE REQUIRED of a good surgeon, and the acquisition of these characteristics is a measure of one's progress during surgical residency. The assessment of these characteristics is often subjective, however. Objective measures do exist for assessment of scientific and medical knowledge, the best known examples being the American Board of Surgery In-Service Training Exam (ABSITE) and the Qualifying Examination. The ABSITE "is designed to test the general level of knowledge which has been attained by residents regarding the fundamentals and the basic sciences related to Surgery."1 A good correlation has been noted between performance on the ABSITE and the Qualifying Examination.2,3 However, the objective measurement by ABSITE often correlates poorly with the subjective assessment of clinical competence.4

Presented in part at the Special Meeting of the Association of Program Directors in Surgery, Dallas, Texas, February 28, 1986.

Reprint requests and correspondence: Robert S. Rhodes, M.D., 2074 Abington Road, Cleveland, OH 44106.

Submitted for publication: September 10, 1986.

From the Department of Surgery and the Office of Education, Case Western Reserve University School of Medicine, Cleveland, Ohio

The ABSITE is primarily used to evaluate individual resident performance; there appears to have been little emphasis on the use of the ABSITE to evaluate curricular performance. A simple way to evaluate a residency program's curriculum would be to average the scores of all residents. These scores could be averaged as both total test scores and within each of the five subject areas: body as a whole; gastrointestinal; cardiovascular and respiratory; genitourinary, skin, special senses, musculoskeletal, and nervous systems; and endocrine, hematic, lymphatic, and breast. Unsatisfactory performance by residents as a group might then be a strong motivation to modify the residency curriculum. However, averaging individual resident information is only of limited use for identifying the specific modifications needed, since the scores within each of these categories do not specify strengths or weaknesses on subject matter within each category. For instance, it could be difficult to modify a curriculum simply by having identified a weakness in a subject area such as body as a whole. Such an analysis would only identify that information was not known. It would not specify whether the failure to learn occurred during medical school or residency.

This paper demonstrates a method by which the ABSITE Program Summary can be used to quantitate the effect of a curriculum on cognitive learning during surgical residency. The Program Summary lists each question in terms of a keyword phrase and indicates the number of residents at each level who answered that question incorrectly. For each question, knowledge of residents who have just entered the program is compared with the knowledge of residents who are about to complete the program. Application of this method to a large, midwestern, surgical


residency indicates that only approximately half of the cognitive knowledge assessed by the ABSITE is acquired during residency.

Methods

The model is based on a comparison of knowledge at entry to a surgical residency program versus exit from a surgical residency program; differences in knowledge between entry and exit reflect the impact of the curriculum. The operational definition of knowledge is the ability to correctly answer questions on the ABSITE. Entry knowledge is that of Level 1 residents. Since Level 1 residents have been in the surgical residency only about 6 months before taking the exam, it is assumed that most of their knowledge was learned before starting residency and was known on entry into the program. Level 5 residents are usually in their final year of residency and will be leaving the program within 6 months. Their knowledge is referred to as exit knowledge since they have probably acquired almost all of the knowledge they will gain during the residency. Knowledge of a given question at entry and/or exit can be classified as high (known) or low (unknown) based on the percentage of residents who answer correctly.

Based on the entry-exit and high-low schema, four groups are established: (1) the known-known group, in which knowledge was known at entry and at exit; (2) the unknown-unknown group, in which knowledge was not known at entry or at exit; (3) the unknown-known group, in which knowledge was not known at entry but was known on exit; and (4) the known-unknown group, in which knowledge was known on entry but not known on exit. This categorization is summarized in Table 1.

TABLE 1. Schema for Categorizing Responses to Questions Based on Levels of Knowledge at Entry and Exit

Entry Knowledge    Exit Knowledge    Group
High               High              Known-known
Low                Low               Unknown-unknown
Low                High              Unknown-known
High               Low               Known-unknown

The current report format of the ABSITE is shown in Table 2 and contains keyword phrases and their corresponding responses that exemplify the categories of response outlined above. The first response pattern is the known-known pattern. This pattern is exemplified by the question on the characteristics of carcinoembryonic antigen (CEA); it was answered correctly by essentially all residents at all levels. The next type of response is the unknown-unknown pattern. The question on the site of cancer associated with transplantation was answered correctly by few residents at any level and is an example of this type of response. The next pattern, unknown-known, suggests that the information was not taught and/or learned before residency but was acquired during residency. The question on the characteristics of sliding inguinal hernias was answered correctly by few residents at Level 1 but by most residents at Level 5 and is an example of this type of response. The fourth and final pattern is the known-unknown response. This type of response suggests that the information was taught and/or learned before residency but then either mistaught or forgotten and not reinforced during residency. It is exemplified by the question on physiology of the respiratory center. Most residents knew the information on entry into the program but less than half knew it on leaving the program.

The criteria for categorization into high or low knowledge can vary, depending on the choice of the program director and the number of residents in the program. The data in this study are from the 1985 ABSITE taken by surgical residents at the Case Western Reserve University (CWRU) Integrated Hospitals. There were six chief or Level 5 residents and 26 Level 1 residents at CWRU that year. Three different criteria for known or unknown were evaluated. When the 50% criterion was evaluated, 13 or more Level 1 residents had to answer a question correctly for entry knowledge to be considered high or known. Three or more Level 5 residents had to answer correctly for exit knowledge of a question to be considered high or known. The 67% criterion required 18 or more Level 1 residents and four or more Level 5 residents to answer correctly for entry and exit knowledge, respectively, to be classified as known. Similarly, the 83% criterion required 22 or more Level 1 residents and five or more Level 5 residents to answer correctly.
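To make the categorization concrete, the following is a minimal sketch in Python (ours, not part of the original report) of how Program Summary counts of incorrect answers could be sorted into the four response categories; the cohort sizes and cutoffs are those quoted above for the 1985 CWRU residents, and the example counts are taken from Table 2.

# Sketch (not from the paper): categorize ABSITE questions from Program Summary
# style data, i.e., the number of residents at each level answering incorrectly.

N_LEVEL1, N_LEVEL5 = 26, 6   # 1985 CWRU cohort sizes quoted in the text

def classify(incorrect_l1, incorrect_l5, need_l1=13, need_l5=3):
    """Categorize one question. need_l1/need_l5 are the minimum numbers of correct
    answers for knowledge to count as known (13 and 3 correspond to the 50%
    criterion; 18 and 4 to the 67% criterion; 22 and 5 to the 83% criterion)."""
    entry = "known" if (N_LEVEL1 - incorrect_l1) >= need_l1 else "unknown"
    exit_ = "known" if (N_LEVEL5 - incorrect_l5) >= need_l5 else "unknown"
    return f"{entry}-{exit_}"

# Incorrect-response counts (Levels 1 and 5) for the four example questions of Table 2.
examples = [("Charact CEA", 1, 0),
            ("Site ca assoc transplant", 26, 6),
            ("Charact sliding inguinal hernias", 25, 1),
            ("Physiol respiratory center", 9, 4)]
for phrase, l1_wrong, l5_wrong in examples:
    print(f"{phrase}: {classify(l1_wrong, l5_wrong)}")

Run with the 50% cutoffs, the four example questions fall into the known-known, unknown-unknown, unknown-known, and known-unknown categories, respectively, matching Table 2.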

Results

The responses by surgical residents at CWRU to each of the 203 questions of the 1985 ABSITE were classified into one of the above four response patterns. This process was

TABLE 2. Current Format of the Program Summary of the ABSITE. Keyword Phrases and Incorrect Response Figures are From the 1985 Exam Taken by Surgical Residents at CWRU

                                    No. of Residents Answering Incorrectly
Keyword Phrase                      Level 1   Level 2   Level 3   Level 4   Level 5
                                    N = 26    N = 14    N = 13    N = 6     N = 6
Charact CEA                            1         1         0        0         0
Site ca assoc transplant              26        13        10        5         6
Charact sliding inguinal hernias      25        11         9        5         1
Physiol respiratory center             9         8         8        3         4


TABLE 3. Distribution of Questions into the Four Response Categories*

                                Criterion
Level 1      Level 5       50%      67%      83%
Known        Known         48%      25%      11%
Unknown      Unknown       26%      30%      47%
Unknown      Known         22%      43%      41%
Known        Unknown        4%       2%       1%

* Data are the percentage of all 203 questions of the entire exam that fell in that category. Note the effect of the three different criteria for the percentage of residents answering a question correctly.

repeated using each of the three percentage criteria for categorizing knowledge as either known or unknown. The results are shown in Table 3. When 50% of residents with correct responses was the criterion for a question being known, then 52% of questions were known on entry (48% in the known-known category plus 4% in the known-unknown category). The distribution of responses between groups changed with the different criteria. As expected, the percentage of questions that were known on entry decreased as the percentage criterion for a question to be categorized as known increased. It is interesting that with an increasingly stringent criterion of knowledge, the questions tended to redistribute themselves equally between the unknown-unknown and the unknown-known groups. With the 50% criterion, 48% of questions were unknown at entry (26% plus 22%) and only 46% of these were known at exit. When 67% of correct responses was the criterion for a question to be known, then only 27% of questions were known on entry (25% plus 2%). Of the 73% of questions unknown at entry (30% plus 43%), only 59% were known at exit. When 83% of correct responses was the criterion for a question to be known, then only 12% of questions were known on entry (11% plus 1%). Of the 88% of questions unknown at entry (47% plus 41%), only 47% were known at exit. Thus, regardless of criterion, only between 46% and 59% of "unknown" cognitive knowledge tested by the ABSITE appeared to be learned during surgical residency. These percentages, if representative of all cognitive knowledge learned during the residency, suggest considerable room for improvement in the effectiveness of learning. The effectiveness of the educational process as assessed by this technique does not appear to be improved by changing the criterion. One consoling aspect of this analysis is that very little knowledge known on entry becomes unknown at exit.

An important question is whether this approach has

validity and reproducibility. Although difficult to answer with certainty, it is of great interest that the distribution of responses was very similar when the 1984 ABSITE was analyzed. Using the 50% criterion, the percentages of questions in the known-known, unknown-unknown, unknown-known, and known-unknown categories were 41%, 23%, 34%, and 2%, respectively. This consistency with the 1985 exam is remarkable since only 15% of questions were the same in both exams.

A key assumption of this type of analysis is that the

Level 1 (entry) residents had similar entry knowledge compared with the Level 5 (exit) residents when they entered the residency. If this assumption is not correct, the relative distribution of questions into the various response patterns might be altered. For instance, if the entry residents were relatively more knowledgeable than the exit residents, there would be an increase in the percentage of questions in the known-known category at the expense of the unknown-known category. If the entry residents were less knowledgeable than the exit residents, then the converse might apply. Among the CWRU residents, the mean nationwide percentile ± SEM was 56 ± 12 for exit residents compared with 41 ± 5 for the entry residents. Among the 26 entry residents were seven residents previously chosen to complete the 5-year general surgery residency; the remaining 19 entry residents had chosen other surgical specialties. The scores of the seven Level 1 general surgery residents were 38 ± 10 percentile. Therefore, in this study, differences in relative knowledge between entry and exit residents appear to favor the exit residents. Thus, increases in the percentage of questions in the unknown-known category might occur at the expense of the known-known category. The model estimated that only between 46% and 59% of "unknown" cognitive knowledge tested by the ABSITE is learned during residency. If the exit residents were relatively more knowledgeable, this may be a high estimate.

A further goal of curricular analysis is to identify specific

subject areas that need educational emphasis. One way is to subcategorize the questions into the subject groupings used for the ABSITE. Such an analysis, using the 67% criterion, is shown in Table 4. The known-known category is clustered between 21% and 32%, whereas the known-unknown category remains between 0% and 5%. Cognitive learning in the gastrointestinal subject area appears to be relatively strong in the CWRU program; 58% of the questions fall in the unknown-known category. On the other hand, learning in the subject areas body as a whole, the cardiovascular and respiratory systems, and the genitourinary, head and neck, musculoskeletal, etc., appears to be below the mean of the total exam.

The current ABSITE report format can also be used to

identify weak areas of knowledge by listing the keyword phrases in descending order according to the number of chief residents who answered incorrectly. This groups, and thus emphasizes, information that is not known on exit from the residency. With such an arrangement, it is easy to recognize patterns of similar subject areas. A partial list for the 1985 ABSITE for the Level 5 CWRU residents is


TABLE 4. Distribution of Questions into the Four Response Categories by Subject Area, Using the 67% Criterion

Level 1    Level 5     Total Exam   Whole Body    GI    CV & Resp   GU, etc.   Endo, etc.
                           (%)          (%)       (%)      (%)         (%)        (%)
Known      Known           25           27        21       24          32         24
Unknown    Unknown         30           33        19       32          36         30
Unknown    Known           43           38        58       39          32         46
Known      Unknown          2            2         2        5           0          0

GI = gastrointestinal. CV & Resp = cardiovascular and respiratory. GU, etc. = genitourinary, skin, special senses, musculoskeletal, and nervous systems. Endo, etc. = endocrine, hematic, lymphatic, and breast.

shown in Table 5. Patterns of weakness in transplantation, nutrition, and shock and fluid balance clearly emerge.
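The ranking behind Table 5 is easy to automate; the following is a brief sketch (ours, not from the original report) that sorts the Program Summary keyword phrases by the number of Level 5 residents answering incorrectly, using the Table 5 counts.

# Sketch (not from the paper): rank keyword phrases in descending order of the
# number of Level 5 (exiting) residents who answered incorrectly.
# Each row holds the incorrect-response counts for Levels 1 through 5 (Table 5).
rows = {
    "Site ca assoc transplant":       (26, 13, 10, 5, 6),
    "Amino acids in nutrition":       (20, 10, 11, 2, 6),
    "Guide fluid vol replacement":    (11,  8,  8, 4, 5),
    "DX hypovolemic shock":           (15, 13, 11, 3, 4),
    "DX trace metal deficiency":      (20, 12,  9, 3, 3),
    "RX complic TPN":                 (11,  5,  5, 3, 3),
    "Complic assoc renal transplant": ( 6,  5,  7, 2, 3),
}

# Sorting on the Level 5 count groups the weak exit-knowledge areas together
# (here transplantation, nutrition, and shock/fluid management).
for phrase, counts in sorted(rows.items(), key=lambda kv: kv[1][-1], reverse=True):
    print(f"{counts[-1]} of 6 chief residents incorrect: {phrase}")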

Discussion

Appropriate clinical skills, attitudes, and scientific/medical knowledge are attributes of a good surgeon, and the acquisition of these attributes is a measure of a resident's progress during surgical education. Data gathering, decision making, problem solving, and appropriate personal characteristics and interpersonal behavior may be the most important attributes of a surgeon. However, the relative importance of each of these attributes and the best way they are taught and/or learned is often a subject of debate. Resolution of the issue is difficult because assessment of these attributes is often subjective. Objective assessment of clinical skills such as data gathering, decision making, problem solving, and psychomotor activity, and the assessment of appropriate attitudes, are, at best, in their infancy. Schueneman et al.5 used visuospatial tests to predict operative skills. Although a positive correlation was noted between test performance and subsequent operative ability, experience suggests that poor operative skills are rarely the sole reason for determining a surgical resident to be inadequate.

The ABSITE and the Qualifying Examination attempt

to provide an objective assessment of the ability to recall scientific and medical knowledge. Several studies have reported positive correlations between scores on the ABSITE and the Qualifying Examination2,3 or other measures of cognitive knowledge.6 However, there appears to be little correlation between the measures of cognitive knowledge and the subjective assessment of the other important skills.4 Most surgical educators believe that neither measure alone should be the sole basis for assessment.

Although not the sole measure of a resident's development, the ABSITE has been primarily used to assess an individual's knowledge. This report describes a method for using the results of the ABSITE for evaluating learning within a surgical residency program. Hence, it provides a means of curricular evaluation. The method uses the information provided in the Program Summary report to quantitate learning in the program versus learning by an


individual. The response to each question is categorized as high (known) or low (unknown) at entry to and at exit from the surgical residency program. The criterion for high versus low knowledge is based on the percentage of residents at entry or exit who answer a question correctly. The impact of the residency curriculum can then be assessed by characterizing entry-exit knowledge on each question as known-known, unknown-unknown, unknown-known, and known-unknown.

The known-known pattern represents information

known by the residents early in the program, and this information was probably taught and/or learned before residency. Furthermore, the information was retained during residency. Since the information was known at both entry and exit from the residency, one could argue that acquisition of knowledge in this area was not a function of the surgical residency curriculum. Another possible interpretation of this type of response is that it is a poor question; medical knowledge is not required for the correct answer.

The unknown-unknown pattern suggests that the information was not taught and/or learned before or during residency. Alternatively, it could be a poor question, the correct answer being ambiguous. This response pattern should catch the program director's attention. The large number of incorrect responses by CWRU residents to the question on the site of cancer associated with transplantation (Table 2) suggests that the residents were not even guessing. In fact, they had been taught that the correct response was lymphoma when it actually was skin.

TABLE 5. Descending Rank Order List of Keyword Phrases According to Number of Level 5 Residents Answering Incorrectly

Keyword Phrase                    Level 1   Level 2   Level 3   Level 4   Level 5
                                  N = 26    N = 14    N = 13    N = 6     N = 6
Site ca assoc transplant             26        13        10        5         6
Amino acids in nutrition             20        10        11        2         6
Guide fluid vol replacement          11         8         8        4         5
DX hypovolemic shock                 15        13        11        3         4
DX trace metal deficiency            20        12         9        3         3
RX complic TPN                       11         5         5        3         3
Complic assoc renal transplant        6         5         7        2         3

The unknown-known pattern represents, at least in

part, a positive effect of the program. This pattern indicates information that was not taught and/or learned before residency but that was acquired during residency. There may be a number of subgroups within this pattern, the differences between the subgroups reflecting when the knowledge was acquired. The responses to the question on sliding hernias (Table 2) suggest that the information was not acquired by the majority of residents until the final year of residency. Other questions with this pattern suggested that the information was acquired by the second year of residency. This distinction is useful because it allows the program director to correlate learning with certain rotations, levels of responsibility, and/or other events in the curriculum.

The known-unknown pattern reflects a negative impact of the residency curriculum or simply represents material that has been forgotten.

The choice of a criterion for knowledge by a given program director will be influenced by how much the director wants the exam to be a driving force in the program. If high performance on the ABSITE is an important program objective, then a high criterion is likely to be chosen. On the other hand, a relatively low criterion might be chosen by a program director who feels that the measure of achievement on the ABSITE is a less important program objective. The choice of a criterion for high versus low knowledge in a given program will also depend on the number of exiting residents. If a program has only one chief resident, then high knowledge can have but one criterion, 100%. With two chief residents the choice of known criterion is either 50% or 100%. With a small number of residents, this type of analysis may be less meaningful in terms of categorizing response patterns. Combining Level 4 and Level 5 residents into a single group and then analyzing the patterns may be helpful in this situation. Since there are few, if any, programs with fewer entry residents than exiting residents, a small number of residents at entry is not likely to restrict analysis.
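As a small illustration of that constraint (ours, not a procedure proposed in the paper), the cutoffs actually available to a program are simply multiples of one over the number of exiting residents:

# Sketch: with n exiting residents, the only achievable "known" cutoffs are
# multiples of 1/n, so small programs have coarse criteria.
def achievable_criteria(n_exiting):
    return [round(100 * k / n_exiting) for k in range(1, n_exiting + 1)]

for n in (1, 2, 6):
    print(f"{n} exiting resident(s): {achievable_criteria(n)} percent")

With one chief resident the only criterion is 100%, with two it is 50% or 100%, and with six (as at CWRU) the choices include the 50%, 67%, and 83% criteria used above.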

It is important to note that, at least in the CWRU surgical residency program, the choice of criterion did not seem to have a major influence on the percentage of "unknown" cognitive knowledge acquired during residency. Roughly half of the knowledge assessed by the ABSITE that was unknown on entry to the residency program remained unknown on exit from the residency program. A significant correlation between both a resident's initial and final ABSITE scores and the Qualifying Examination score has been previously noted.2 Both of these observations are consistent with the notion that a significant proportion of tested cognitive surgical knowledge is acquired before

residency training. It is important to realize, however, that these findings do not necessarily quantitate the total extent of cognitive learning during residency, as it is difficult to write questions about commonplace situations.

Ranking the keyword phrases in descending order according to the number of incorrect responses by exiting residents is likely to prove useful in identifying subject areas that deserve greater emphasis. It has also been shown that ABSITE performance may depend on the maintenance of a general emphasis as well.7 Therefore, curricular modifications designed to focus on areas of weakness should not detract from continued emphasis on acquiring a broad base of knowledge.

This is an era of cost-consciousness with regard to health care expenditures; the financing of graduate medical education is no exception. It is only appropriate, then, that the effectiveness of the educational process be analyzed. Such analysis requires objective measures, for only then can the efficiency of different educational processes be compared. The model presented here proposes a method for assessing cognitive learning. The data analyzed are only for the CWRU residency program and suggest that there is considerable room for enhancing the amount of cognitive knowledge gained during the surgical residency. We believe that the data are not unique to CWRU and, in fact, are representative of a nationwide phenomenon. We believe this type of analysis will be helpful in establishing guidelines to evaluate and improve surgical as well as other types of curricula. The objective evaluation of surgical resident education will allow assessment of learning brought about by changes in curricula and/or faculty, and may enhance the effectiveness of the educational process.

Acknowledgment

The authors are extremely grateful to Mrs. Deatrae Fritz for her assistance in the preparation of the manuscript.

References

1. American Board of Surgery, Inc. Booklet of Information, 1986.
2. Garvin PJ, Kaminski DL. Significance of the in-training examination in a surgical residency program. Surgery 1984; 96:109-112.
3. Shetler PL. Observations on the American Board of Surgery In-Training Examination. Board results and conference attendance. Am J Surg 1982; 144:292-294.
4. Lazar HL, Deland EC, Tompkins RK. Clinical performance versus in-training examinations as measures of surgical competence. Surgery 1980; 87:357-362.
5. Schueneman AL, Pickleman J, Hesslein R, Freeark RJ. Neuropsychologic predictors of operative skill among general surgery residents. Surgery 1984; 96:288-295.
6. Erlandson EE, Calhoun JG, Barrack FM, et al. Resident selection: applicant selection criteria compared with performance. Surgery 1982; 92:270-275.
7. Dean RE, Hanni CL, Pyle MJ, Nicholas WR. Influence of programmed textbook review on American Board of Surgery In-Service Examination. Am Surg 1984; 50:345-349.
