
Spreadsheet Application for Radiology Resident Match Rank List¹

Felix S. Chew, MD, MBA², E. Ricardo Ochoa Jr, MD, Annemarie Relyea-Chew, JD³

Rationale and Objective. Enormous resources are spent on the selection process for radiology residency, yet agreement on criteria, process, and outcome remains elusive. The aim was to develop, implement, and evaluate a spreadsheet application that streamlined the creation of a match list for resident selection.

Materials and Methods. The Department of Radiology’s traditional procedure of screening, interviewing, and rating candidates was left intact because the selection committee could not reach consensus regarding possible improvements in the process or outcome measures. We developed a spreadsheet application that used the ratings of candidate desirability for our program from our preexisting process to create a tentative match list by adding a system of ordinal ranking and weighting. After 4 years of use, the fairness and usefulness of this application were evaluated by members of the selection committee by means of a survey.

Results. The spreadsheet application was used to create tentative match lists for 4 years. The system eliminated ties among candidates on the preliminary match list and met the expectations of a majority of the selection committee members for fairness, objectivity, efficiency, and results.

Conclusion. A spreadsheet application can streamline the process of resident selection without having to resolve underlying conflicts about selection criteria and outcome measures.

Key Words. Radiology residency; residency match; resident selection.

© AUR, 2005

Each fall, radiology residency programs across the United States receive hundreds of applications for a handful of residency training positions. An enormous amount of time and effort is devoted to evaluating the broad range of talents, achievements, and personalities available in this pool of increasingly competitive candidates, culminating in the creation of a rank order list. Numerous articles have described strategies for improving the selection process, yet there remains no discernible method for effectively predicting which applicants will perform well as residents. Furthermore, performance as a resident has not been correlated with career-long performance as a radiologist. There is no general agreement even on how to measure performance, either as a resident or as a practicing radiologist. It was in this light that a spreadsheet application was developed to streamline the resident selection process at our institution.

Acad Radiol 2005; 12:379–384

¹ From the Department of Radiology, Wake Forest University School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157-1088. Received November 4, 2004; revision received December 4; revision accepted December 4. Address correspondence to: F.S.C. e-mail: [email protected]

² Present address: Department of Radiology, University of Washington, Box 354755, 4245 Roosevelt Way NE, Seattle, WA 98105.

³ Present address: Harborview Injury Prevention & Research Center, 925 Ninth Street, Box 359960, Seattle, WA 98104.

doi:10.1016/j.acra.2004.12.018

The radiology residency program at Wake Forest University Baptist Medical Center (Winston-Salem, NC) has entry positions for 10 residents per year. These residents are selected through the National Resident Matching Program (NRMP) match process and begin their radiology training approximately 15 months after the match takes place.


Because the program receives applications from many more applicants than it can interview and it interviews many more applicants than it can match, the residency selection process has consumed an increasing amount of resources during the past few years without perceptible improvement in either process or outcome. Four years ago, a spreadsheet application was created to simplify the process of creating the match list. We describe that application and its evaluation.

MATERIALS AND METHODS

Selection Process

Screening

The residency application process begins in late summer, when fourth-year medical students and physicians seeking to enter the NRMP match for postgraduate training programs fill out their applications, solicit letters of recommendation, and select the programs to which they will apply. Applications can be entered into the Electronic Residency Application Service (ERAS) beginning September 1 for the match in March of the following calendar year. The resident selection process begins with retrieval of applications from ERAS, typically in early September and extending into mid-November, and occasionally later. At Wake Forest, each application is screened by two or more readers who individually assign one of the following ranks to each application that he or she reviews: interview (high priority), interview (normal priority), interview (low priority), hold for additional information, and do not interview (reject). The readers do not have rigid criteria, but consider such factors as medical school, class rank, Alpha Omega Alpha (AOA, national medical school honor society) membership, Step 1 and Step 2 United States Medical Licensing Examination (USMLE) test scores, deans’ letters, letters of recommendation, personal statement, research experience, awards, publications, additional academic degrees, life experience, geographic background, community service, and so forth. Approximately 80 applicants are invited for interviews, beginning with those rated high priority by all readers. Applicants who decline an invitation to present themselves for an interview are replaced by alternates.

Interviewing

Individual applicants are interviewed on a continuing basis during November through early February. At the time of interview, each applicant typically has four formal interviews: three with faculty members and one with a senior resident who generally is in his or her final year of residency training. At the conclusion of the day, each interviewer independently gives the applicant a score that reflects the full application, including interview performance, using whatever criteria that interviewer wishes to apply. Scores are given on a scale of zero to four and recorded on a scoring sheet (Fig 1). On this scale, four indicates an outstanding candidate, three indicates a very good candidate, two indicates an average candidate, one indicates a mediocre candidate, and zero is a veto that indicates the candidate should not be ranked. This portion of the system has been used for at least 20 years, the apparent length of the institutional memory. Although the legacy scoring sheet strongly discourages finer gradations than the five-point scale, with the advent of the spreadsheet application, fractions and decimals were allowed and even encouraged.

Ranking

Under the old system, the four scores of each candidate would be averaged, and that average would be used to create the preliminary rank order list. Once that rank order list was created, the residency selection committee would meet and create the final rank order list through a group process. This process had a number of shortcomings. Because each committee member was free to assign any score between zero and four using whatever criteria desired, some committee members gave a four to virtually all their interviewees, whereas other committee members rarely gave a four to any of their interviewees. Thus, an applicant’s average score might depend more on who was available to interview on his or her scheduled date than on criteria related to the actual applicant or the applicant’s credentials. In addition, the score from a committee member who interviewed a large number of applicants was given the same weight as one who interviewed a small number. Finally, the averaging process did not initially produce a list in which applicants were rank ordered because there were numerous ties. During the committee meeting at which the final ranks were determined, discussion often was chaotic because there were no agreed-on ranking criteria, only a few committee members interviewed any particular applicant, and some of those interviews took place several months before the meeting and were not remembered well by the participants (if remembered at all). Because the committee did not want to address these issues by altering the manner in which the interviews were conducted or scored or by having frequent meetings to create and maintain a running ranking of applicants, a spreadsheet application was created to address the issues of scoring, weighting, and ties.
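To make the tie problem concrete, the old averaging step can be restated in a few lines of Python. This is a sketch of ours, not the committee's actual worksheet, and the applicants and scores are hypothetical:

    from statistics import mean

    # Four coarse 0-4 scores per applicant; averages collide easily.
    old_scores = {
        "Applicant A": [4, 3, 4, 3],
        "Applicant B": [3, 4, 3, 4],  # same average as A and C: a three-way tie
        "Applicant C": [4, 4, 3, 3],
        "Applicant D": [4, 4, 4, 3],
    }

    averages = {name: mean(s) for name, s in old_scores.items()}
    for name, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {avg:.2f}")  # D: 3.75, then A, B, C locked at 3.50

On a five-point scale with four interviewers, only a handful of distinct averages are possible, so ties like the one above were routine.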

Spreadsheet Application

The spreadsheet application, implemented in Microsoft Excel (Microsoft Corp., Redmond, WA), was designed to address the issues of scoring variability from committee member to committee member, of giving equal weight to committee members regardless of how many applicants they interviewed, and of ties on the preliminary rank list. Raw inputs for the spreadsheet application included the name, medical school, and interview date of each interviewed applicant; each interviewer’s scores for each interviewee (on the original zero-to-four scale); and an ordinal rank list from each interviewer of the applicants he or she actually interviewed. A raw value of one was given to the highest ranked applicant; two, to the second highest ranked applicant; and so forth. In the case of ties, the average value for the positions occupied by the tied applicants was assigned to each applicant in the tie. For example, if three applicants were tied for the highest ranking, each applicant would be assigned a two and the next applicant, who would be the fourth highest ranked applicant, would receive a four. Ranking was relative and only within those candidates that the particular faculty member interviewed. An input rank, based on scores, was used for interviewers who returned scores, but not ranks. The information for each applicant was placed into a single row of the spreadsheet.

Figure 1. Applicant scoring sheet showing five-point scale and discouraging finer gradations; many interviewers used decimals and fractions anyway.

The spreadsheet calculated a number for each interviewee called the “score.” The score was an average of the raw scores given to the applicant by each of his or her interviewers, converted linearly to a scale from 0 to 100, with 100 being the best. The spreadsheet also calculated a second number for each interviewee, called the “rank,” by averaging the ranks from individual interviewers. This calculation converted the raw ordinal rankings from all interviewers into a score on a scale from 0 to 100, again with 100 being the best. The conversion adjusted for the number of applicants that each interviewer interviewed and scored: the raw rank was divided by one plus the number of interviews conducted, multiplied by 100, and the result subtracted from 100. Thus, if an interviewer ranked exactly one applicant, the rank from this interviewer would be 50 and would carry essentially no weight. However, if an interviewer ranked exactly three applicants, the highest ranked applicant would receive a rank of 75, the second would receive a rank of 50, and the last would receive a rank of 25. Thus, as the number of applicants interviewed increased, the weight given to that interviewer’s rankings also increased. Because each resident interviewed only a few candidates and most faculty members interviewed many candidates, residents were asked to pool their rankings by consensus to derive a single rank list whose weight would be on par with that of the faculty interviewers.
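Restated in code, the two conversions are only a few lines. The sketch below is ours, written in Python rather than the Excel formulas actually used; the function names and data layout are assumptions:

    from statistics import mean

    def score_component(raw_scores):
        """Average an applicant's 0-4 interviewer scores and rescale
        linearly to 0-100 (4.0 -> 100, the best)."""
        return mean(raw_scores) * 25.0

    def tie_averaged_ranks(ordinals):
        """Convert one interviewer's ordinal list into raw ranks.
        `ordinals` maps applicant -> ordinal value (1 = best; tied
        applicants share a value). Tied applicants receive the mean of
        the positions they jointly occupy, so three applicants tied at
        the top each get 2.0 and the next applicant gets 4.0."""
        ranks, position = {}, 1
        for value in sorted(set(ordinals.values())):
            group = [a for a, v in ordinals.items() if v == value]
            shared = mean(range(position, position + len(group)))
            for applicant in group:
                ranks[applicant] = shared
            position += len(group)
        return ranks

    def rank_component(raw_rank, n_interviewed):
        """Rescale a raw rank to 0-100: 100 - 100 * r / (n + 1).
        One interview -> 50 (essentially no weight); three interviews
        -> 75, 50, 25; more interviews spread the scale wider."""
        return 100.0 - 100.0 * raw_rank / (n_interviewed + 1)

    # Example: five interviews with a three-way tie for first place.
    ordinals = {"A": 1, "B": 1, "C": 1, "D": 2, "E": 3}
    raw = tie_averaged_ranks(ordinals)   # A, B, C -> 2.0; D -> 4.0; E -> 5.0
    scaled = {a: rank_component(r, len(ordinals)) for a, r in raw.items()}

The division by one plus the interview count is what makes an interviewer's influence grow with the number of applicants seen: a single ranked applicant sits at the scale's midpoint, contributing nothing to the ordering.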

A calculated number for each applicant, called the “index,” was used as the basis for the preliminary spreadsheet-calculated rank list. The index was the average of the score and the rank, with additional points for top and bottom rankings and bonus points. Top votes were ordinal rankings in the top 15% of an individual interviewer’s rankings. An interviewer must have ranked six or more applicants for his or her top candidate(s) to reach the 15th percentile and qualify for top votes. Each top vote added one point to the applicant’s index. Similarly, low votes were given to those ranked in the bottom 15% of an individual ranking. For each low vote that a candidate received, one point was subtracted from the index. The index had a scale of 0 to 100, but with top and low votes, it theoretically could fall outside those limits. To use the spreadsheet, raw inputs were simply keyed into the appropriate cells and the various calculations were made automatically. Applicants initially were sorted by interview date for convenience of data entry, but then they were sorted by index number in descending order.
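Continuing the same sketch, the index and the top- and low-vote tests might look as follows. Treating the 15% cutoff as raw rank divided by one plus the interview count (so that rank 1 of 6 gives 1/7, about 14.3%, just under the cutoff) is our reading of the text, not a formula quoted from the spreadsheet:

    def is_top_vote(raw_rank, n_interviewed, cutoff=0.15):
        """Top vote: the ordinal rank falls in the interviewer's top 15%.
        With five or fewer interviews even rank 1 (1/6 ~ 16.7%) misses
        the cutoff, matching the six-interview requirement in the text."""
        return raw_rank / (n_interviewed + 1) <= cutoff

    def is_low_vote(raw_rank, n_interviewed, cutoff=0.15):
        """Low vote: the ordinal rank falls in the interviewer's bottom 15%."""
        return raw_rank / (n_interviewed + 1) >= 1.0 - cutoff

    def index(score_pts, rank_pts, top_votes, low_votes, bonus=0.0):
        """Index = mean of the score and rank components, plus one point
        per top vote, minus one per low vote, plus any committee-assigned
        bonus points."""
        return (score_pts + rank_pts) / 2.0 + top_votes - low_votes + bonus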

When the committee met to create the final rank list, bonus points were assigned to candidates who made highly positive or highly negative impressions or whose preliminary rankings were not believed to accurately reflect their relative desirability for the program. Bonus points were added to or subtracted from the applicant’s index, thereby providing a means for indirect adjustment of the final rank order list. The number of bonus points added or subtracted was highly subjective and debated by the committee during the final ranking meeting.
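In spreadsheet terms, such an adjustment is an edit to a single cell followed by a re-sort. A toy illustration, with hypothetical applicants and indices:

    # Bonus points perturb one applicant's index; re-sorting then moves that
    # applicant without the committee debating every intervening position.
    indices = {"Applicant A": 81.4, "Applicant B": 80.9, "Applicant C": 80.2}
    indices["Applicant C"] += 1.5                 # committee awards bonus points
    final_order = sorted(indices, key=indices.get, reverse=True)
    # -> ["Applicant C", "Applicant A", "Applicant B"]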

Evaluation

A survey of faculty members of the resident selection committee was performed to ascertain their opinions regarding the resident selection process. The survey had five items to which committee members could respond using a five-point Likert scale. These items assessed understanding of how the ranking system works, satisfaction with the results of the process, and the degree to which the process met expectations for fairness, objectivity, and efficiency. Faculty members also were asked whether they would be willing to continue serving on the resident selection committee in the following year. Additional space was provided for comments.

RESULTS

The spreadsheet application has been used successfully for four consecutive resident selection seasons. Each year, it has produced a preliminary rank order list without ties, and the number of changes in rank during the final ranking meeting appears to have decreased since its inception. Results for the match in 2003 are listed in Table 1. The mean score for 77 applicants was 81.65, with a range of 53.33–100.00; the interviewers generally used only the upper range of the five-point scale. The mean rank was 49.95, as one would expect, with a broader range of 14.82–86.22, the interviewers being forced to use the entire scale. Finally, the mean index was 65.67, with a range of 30.08–96.11. The index was skewed to the upper range because it blended the score and rank. The flatness of the distribution was greater for the index (kurtosis = -0.84) and rank (kurtosis = -1.07) than for the score (kurtosis = -0.29), indicating that the new method dispersed the applicants further than our previous method, which relied only on scores (Fig 2).

Table 1
Results of 2003 Match List

              Index    Score     Rank
    Mean      65.67    81.65    49.95
    SD        16.35    11.39    19.73
    Kurtosis  -0.84    -0.29    -1.07
    Maximum   96.11   100.00    86.22
    Minimum   30.08    53.33    14.82

Note: N = 77.
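The summary statistics in Table 1 are straightforward to reproduce from the spreadsheet's index, score, and rank columns. The sketch below uses NumPy and SciPy; because the 77 actual index values are not published, a flat stand-in distribution is generated to illustrate the negative excess kurtosis:

    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    indices = rng.uniform(30, 96, size=77)   # stand-in for the 77 index values

    print(f"mean     {indices.mean():6.2f}")
    print(f"SD       {indices.std(ddof=1):6.2f}")
    print(f"kurtosis {kurtosis(indices, fisher=True):6.2f}")  # negative = flatter than normal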

A written survey of the 18 faculty members on the resident selection committee was conducted after the 2004 NRMP match results were known, the fourth year the spreadsheet application was used. The overall response rate was 94% (17/18 faculty members). Eighty-nine percent (16/18) of faculty members responded by completing all five Likert items, and one faculty member completed four of them. Fifty-six percent (10/18) of these faculty members also provided one or more written comments. One faculty member responded with written comments only. Seven survey forms did not include written comments. The majority of faculty members agreed that they understood how the ranking system works (72%); that the process met their expectations for fairness (83%), objectivity (83%), and efficiency (78%); and that they were satisfied with the results of the process (83%; Table 2). Ninety-four percent (17/18) indicated they were willing to continue helping to select residents in the next year.

Four of the written comments discussed abandoning the traditional zero-to-four scale in favor of a one-to-ten scale, and three comments referred to the complexity of the spreadsheet in a negative fashion. Giving greater weight to resident input was suggested twice. Two committee members questioned the fairness of the screening process for interviews, one of whom suggested that too much weight was given to numerical items in applications, and one referred to the need for standardizing the scores given by interviewers. One person commented on the time and effort required to conduct interviews, and one suggested a monthly ranking meeting to keep a running rank order list. Traditionally, we had a policy that any committee member was able to veto an applicant; one comment suggested the reverse, that members be able to place one applicant into the upper portion of the list.

Figure 2. Histogram of values for resident rank list. Using only a score, the method originally used to create the rank list, values are skewed to the right, with many tie scores. Using a combination of scores, ordinal ranks, top and low votes, and bonus points, values are more dispersed over the scale, making differences between similar applicants more conspicuous and eliminating ties.

Table 2
Results of Survey of Faculty Members on Resident Selection Committee

                                                      Agree
                                                      Strongly   Agree     Neutral   Disagree  Disagree Strongly
    I understand how our ranking system works         10 (59%)   3 (18%)   3 (18%)   0 (0%)    1 (6%)
    The process met my expectations for fairness      12 (71%)   3 (18%)   2 (12%)   0 (0%)    0 (0%)
    The process met my expectations for objectivity   10 (59%)   5 (29%)   2 (12%)   0 (0%)    0 (0%)
    The process met my expectations for efficiency    11 (65%)   3 (18%)   2 (12%)   1 (6%)    0 (0%)
    I am satisfied with the results of the process    11 (69%)   4 (25%)   1 (6%)    0 (0%)    0 (0%)

Note: For the first four questions, N = 17; for the final question, N = 16.

DISCUSSION

The purpose of the selection process is to recruit residents who best fit our program, given our applicant pool. For better or worse, multiple studies have shown the difficulty of predicting which applicants will prove to be the best residents. Faculty and current residents often disagree about which factors are most important in the successful selection of residents. It often is difficult for members of the resident selection committee to articulate which factors they consider most important and to explain how they judge the strength of those factors in any individual applicant (1). A survey conducted by Grantham (2) found significant differences among programs, both in the criteria believed to be important for selection and in the radiology resident selection process itself. It is clear that the interview is a key tool in this process (3), but it certainly is not an objective or necessarily a fair tool. For example, Boor et al. (4) showed that physical appearance may affect the interview evaluations and selection of female applicants to a medical residency, and professional demeanor may influence the evaluation and selection of both male and female applicants.

After the interviews are over and a rank list has been created, how well does the position on that rank list predict subsequent performance during radiology residency? According to the study by Adusumilli et al. (5), rank number and rank percentiles were not significantly correlated with evaluations of resident rotation performance in the fourth year of residency or with scores or percentiles on the American Board of Radiology (ABR) written examination. National board scores do not predict success on the ABR written and oral examinations (6,7). One study found that success on the ABR could be predicted by medical school success in preclinical courses, some clinical courses, and USMLE examination scores, but not by dean’s letters, letters of recommendation, AOA selection during the senior year, or medical school prestige (8).

We were unable to find published data regarding the relationship between the selection process for radiology residency and subsequent performance as a radiologist. The structure of training programs is such that data of this sort are unlikely ever to be available; thus, each program and each applicant has very little to rely on other than anecdotal information, experience, and intuition. Why then should we be so concerned with resident selection? Perhaps we should not be, but the fact remains that each program must create a rank order list from hundreds of applicants each year.

We sought simply to produce, in an efficient and effective manner, a rank order list of candidates who had been interviewed. Our criterion for efficiency was the time and difficulty invested in creating the list, and our criteria for effectiveness were the fairness of the process and the satisfactoriness of the outcome as perceived by members of the selection committee. Our initial process is similar to that described in the literature (9). When the committee met to discuss the final ranking of the candidates, a proposal to move an applicant up on the list often would lead to chaotic debate regarding the fates of those who ranked between the current and the desired positions of the candidate in question. Conversely, altering a candidate’s standing through the use of bonus points has caused much less controversy by focusing the debate on the merits of the candidate. Whereas the outcome may be the same as simply displacing others to move a particular applicant to a new position on the rank list, the abstraction of the bonus points focuses the committee’s attention on the goal of appropriately adjusting the index of the candidate in question, rather than on the rank order as a whole. A perception of fairness thereby is maintained. With the rank itself left to be decided by the spreadsheet index, less time and effort are required to arrive at a satisfactory match list. Spreadsheet software is highly flexible: it can be adapted easily to fit any number of applicants or faculty interviewers, and the various weights assigned to scores, ordinal ranks, top and low votes, and bonus points can be changed easily.
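For example, the fixed equal split between score and rank and the unit value of top and low votes could be exposed as adjustable weights. This generalization is our sketch, not a feature of the published spreadsheet:

    # Adjustable weights for the index inputs (values here mirror the
    # published scheme: equal score/rank split, unit top and low votes).
    WEIGHTS = {"score": 0.5, "rank": 0.5, "top_vote": 1.0, "low_vote": -1.0}

    def weighted_index(score_pts, rank_pts, top_votes, low_votes, bonus=0.0,
                       w=WEIGHTS):
        """Generalized index: tune the dict above to re-balance the inputs
        without restructuring the sheet."""
        return (w["score"] * score_pts + w["rank"] * rank_pts
                + w["top_vote"] * top_votes + w["low_vote"] * low_votes + bonus)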

The computer has been used to facilitate various aspects of the resident selection process, particularly tracking and administration (10) and the creation of provisional rankings, which serve as a starting point for committee discussions (11). With ERAS, additional aspects of the process have been computerized, namely the application form and supporting documents. We report using the computer to generate the match list itself.

REFERENCES

1. Longmaid HE III. Resident recruitment. Acad Radiol 2003; 10(suppl 1):S4–S9.

2. Grantham JR. Radiology resident selection: results of a survey. Invest Radiol 1993; 28:99–101.

3. Gong H Jr, Parker NH, Apgar FA, Shank C. Influence of the interview on ranking in the residency selection process. Med Educ 1984; 18:366–369.

4. Boor M, Wartman SA, Reuben DB. Relationship of physical appearance and professional demeanor to interview evaluations and rankings of medical residency applicants. J Psychol 1983; 113:61–65.

5. Adusumilli S, Cohan RH, Marshall KW, et al. How well does applicant rank order predict subsequent performance during radiology residency? Acad Radiol 2000; 7:635–640.

6. Gunderman RB, Jackson VP. Are NBME examination scores useful in selecting radiology residency candidates? Acad Radiol 2000; 7:603–606.

7. Wood PS, Smith WL, Altmaier EM, Tarico VS, Franken EA Jr. A prospective study of cognitive and noncognitive selection criteria as predictors of resident performance. Invest Radiol 1990; 25:855–859.

8. Boyse TD, Patterson SK, Cohan RH, et al. Does medical school performance predict radiology resident performance? Acad Radiol 2002; 9:437–445.

9. Curtis DJ, Riordan DD, Cruess DF, Brower AC. Selecting radiology resident candidates. Invest Radiol 1989; 24:324–330.

10. Frank MS, Sagel SS, Koehler RE. Computer facilitation of the resident selection process. Invest Radiol 1990; 25:1040–1043.

11. Pilon S, Tandberg D. Neural network and linear regression models in residency selection. Am J Emerg Med 1997; 15:361–364.