
In-training assessment: learning from practice


Marjan JB Govaerts, University of Maastricht, The Netherlands

IN-TRAINING ASSESSMENT (ITA): WHY SHOULD WE DO IT?

Training in the clinical setting is a vital part of medical education, as it guides a trainee's learning towards a standard of professional competence. Performance of authentic tasks in clinical practice typically requires the integration of knowledge, skills, judgement and attitudes – all indispensable for the development of professional competence. Although active participation in patient care can provide very powerful learning experiences, learning in practice does not occur automatically. Ericsson shows that significant improvement of performance is acquired only through the ongoing evaluation of performance and feedback.1 Without feedback, trainees will not be made aware of deficits, poor performance will go uncorrected, and good performance will not be reinforced.

The clinical setting is also the ideal environment for the assessment of what trainees actually do when confronted with tasks in the complex and stressful environment of a 'real life' situation. Although simulation technologies (for example, standardised patients or OSCEs) are important tools in the evaluation of competence, continuous assessment in day-to-day practice comes closest to measuring habitual performance in essential aspects of patient care, teamwork or continuous learning. This assessment is essential to ensure that, on completion of the training period, trainees have reached a level of competence that is suitable for the next level of responsibility and, eventually, independent practice.


Box 1. Comments from a student

'At the end of the clerkship I had an interview with one of the staff members – he had never seen me at work. He told me that my performance had been excellent! I myself felt that, overall, my performance wasn't that great. I have been busy with my clinical tasks and I suppose my performance was OK. However, no one ever told me how I was doing. I feel that my performance could have improved if there had been some feedback from time to time.'

Source: Anonymous student at the end of her surgery clerkship

Work based assessment

242 © Blackwell Publishing Ltd 2006. THE CLINICAL TEACHER 2006; 3: 242–247


This implies that in-training assessment programmes should both guide learning processes and provide credible information for selective decision-making (for the purposes of certification and accountability). Unfortunately, current in-training assessment (ITA) is not particularly effective at fulfilling these assessment functions. Good feedback and reflection are uncommon in most clinical settings,2,3 and many ITA practices lack reliability and validity. Too often, the assessment of a trainee's functioning is restricted to a general end-of-training evaluation by a supervisor. This is almost always non-specific and based on second- or third-hand information rather than on direct observation of the trainee.4

SO WHAT IS GOOD ITA?

Learning and assessment are intrinsically linked: practice without feedback and/or assessment does not result in significant improvement. Effective ITA strategies therefore comprise systematic and ongoing assessment of habitual performance in the setting of real-life clinical practice, with the twofold purpose of:

• giving feedback to the trainee to help improve performance, and

• providing credible and defensible information on quality of performance to support judgements and selective decision-making.

This makes the implementation of ITA in the dynamic clinical environment a difficult task. Several strategies for successful ITA have been described, however.5–8 They converge on the importance of these key features, which will be discussed below:

• Feedback and direct observation

• Documentation

• Broad sampling of cases

• Broad sampling of raters

• Explicit and relevant training objectives

• Safety and due process.

Feedback and direct observation

Feedback is at the heart of clinical training. In a recent study by Van der Hem et al., for example, all trainees indicated that observation of trainee performance followed by feedback was a very powerful stimulus for learning.9 Meaningful feedback needs to be based on first-hand observations of trainee performance. It is generally agreed that direct observation of vital clinical skills, such as history taking, physical examination or communication, is the only way to obtain valid information about safety and effectiveness of performance, and of respectful and compassionate behaviour in patient encounters.


Box 2. Supervisors' deliberations

Would I, as a patient, feel safe with this person? Would I let him or her take care of my children (my parent, my spouse)? These are the implicit questions that many clinical teachers ask themselves when supervising students or junior doctors in the clinical setting. How do they arrive at an answer to these questions? What do they expect from future colleagues? And what if the answer to their questions is negative?


There are good examples of guidelines for effective feedback in the literature (see Table 1). Essential elements are: focusing the feedback on specific behaviours; the identification of strengths; and recommendations for improvement.2,10,14 Despite the demands of practice and regular patient care, the clinical setting provides many opportunities for observation and different kinds of feedback.2,3,10 Brief feedback can be given while working with the student – for example, while performing a physical examination or discussing patient management. More formal feedback sessions may be scheduled following patient encounters, ward rounds or case presentations. Although these sessions usually take more time – 5–20 minutes – they should be scheduled regularly. The use of pre-structured evaluation forms (for example, a mini clinical evaluation exercise – mini-CEX) can be helpful to enhance the efficiency and effectiveness of formal feedback sessions, by focusing observation and feedback on crucial aspects of performance.11

Teachers often find it difficult to be non-judgemental in feedback, and to distinguish feedback from assessment. And, of course, useful feedback necessarily contains some evaluative components, such as remarks on strengths and weaknesses, otherwise suggestions for improvement cannot be given. On the other hand, credible and defensible assessments can only be made on the basis of progressive records of complete and specific feedback – providing an overview of the quality of day-to-day performance.

Documentation

Verbal feedback is important, but careful documentation of strengths and weaknesses in performance is essential. For trainees, documentation of descriptive feedback on evaluation forms is important to guide their learning (Figures 1 and 2).7 Such feedback is more effective than any evaluative checklist marks, because it takes into account individual learning goals and case-specific information. Documentation of positive feedback promotes motivation, and the acquisition of self-confidence. Records of performance weaknesses – related explicitly to the end-of-training objectives – will help to specify relevant learning goals. For selective purposes, this detailed information on trainee performance provides necessary evidence for competence in real-life practice. The importance of continuous documentation becomes apparent in the case of poorly performing trainees. Recent research by Dudek et al. has shown that lack of documentation is a major barrier to the provision of accurate (negative) performance ratings.12 Only the frequent documentation of performance will enable defensible decision-making for selective purposes.

For documentation to serve as a defensible ground for decision-making, it needs both accuracy and immediacy. If observation and documentation are separated, specific details are likely to be lost. As a consequence, provision of specific and useful feedback will be more difficult, and the reliability and validity of performance assessments will decrease. Of course, documentation takes time. Therefore, a number of time-efficient approaches have been developed to support data documentation at the time of performance. Generally, they make use of pre-structured evaluation forms or performance report cards, with global rating scales. Examples are daily performance record cards, or mini-CEX forms.5,11

Broad sampling of cases

Impressions of professional competence may result from a single observation, but reliable judgements require a greater number. Performance is highly case-specific:4 a trainee might perform very well on one case and very poorly on another. To achieve reliable judgements, many observations of performance, covering a broad range of cases and situations, are needed. But as time is precious, it is important to realise that it is not always necessary to observe a complete consultation. One could choose, for example, to focus on history taking in the first instance, on physical examination on another occasion, and on decision-making or patient education at a later date. Such a procedure, observing a broad range of different cases and spending less time on each one, is to be preferred to the comprehensive evaluation of a limited number of cases.

Table 1. Guidelines for effective feedback

Effective feedback should:2,14

1. Be complete, honest and supportive, identifying both strengths and weaknesses. Feedback on weaknesses should include suggestions for improvement and actions for follow-up

2. Use non-judgemental descriptions

3. Use well-defined goals and performance standards that represent end-of-clerkship objectives. Feedback should provide trainees with meaningful information about their progress

4. Be given at an appropriate time (as close to the observed behaviour as possible), and not come as a later surprise to the trainee. Feedback given unexpectedly, especially if negative, is almost always met by strong emotional reactions, thus minimising its effectiveness

5. Be specific, and focus on behaviours that are remediable. It is not always necessary to be comprehensive. More often it is better to give feedback in small segments, focusing on what is most essential at each specific point of training

6. Be based on first-hand data

7. Be given frequently, to provide many opportunities for remediation

Broad sampling of raters

Of course, raters are subjective, and this may be a threat to the reliability of the assessment. Raters may be stringent (so-called hawks) or lenient (so-called doves). Also, it is very hard, if not impossible, to train a hawk to become a dove and vice versa. Highly structured checklists may be intuitively attractive, but they do not provide a solution to this problem. Their use is time-consuming and ineffective, as they often lead to trivialisation and atomisation. Global ratings have proved to be at least as reliable as checklist ratings if they relate to direct observation and are given immediately after the observation of performance.4 A more effective solution is therefore to increase the number of raters. Patients, allied health professionals, nursing staff and medical staff at any level (including peers) can all provide useful feedback and meaningful information for performance evaluation.13 Increasing the number of raters not only provides opportunities to balance differences in leniency, but also ensures that all relevant aspects of performance are evaluated, from different perspectives.

Explicit and relevant training objectives

Effective feedback and assessment require objectives. Current ITA often covers a restricted range of competencies. Many essential competencies, such as collaboration, scholarship and practice management, are not considered in performance assessment. Ironically, most of these competencies can only be evaluated in the context of real-life professional practice, and ITA strategies must include the observation and assessment of these critical competencies. Although the unpredictable and uncontrollable environment of clinical practice prevents the detailed planning of learning and assessment, some kind of blueprinting may be helpful here. A blueprint specifies all relevant competencies or training objectives, corresponding tasks and behaviours. It identifies the clinical contexts in which learning and assessment will occur, and serves as a framework for adequate sampling across tasks and performance elements. Thus the blueprint may help clinical teachers and trainees to come to a shared understanding of training goals, expectations, responsibilities and desired levels of competence.

Safety and due process

In ITA, the roles of teacher and assessor are often combined: this is dangerous, as the latter may hamper the former. It is therefore essential that a safe learning climate is established.7,10 Strategies are needed that create a respectful environment, in which both teachers and trainees feel safe to engage in ITA practice. Special attention should then be paid to:5,7,8

1. Clear communication about mutual expectations. This facilitates feedback on essential aspects of performance and progress towards end-of-clerkship goals.

2. Clear communication about the ITA procedures, performance standards, roles and responsibilities in ITA. Familiarise trainees with assessment instruments.

3. The incorporation of self-assessment. Self-assessment stimulates trainees to reflect on feedback and increases active involvement in their own learning process. Self-assessment or self-feedback also promotes the acceptance of feedback from the supervisor.

4. The incorporation of ITA into the routine of daily practice. Make sure that formal feedback and performance assessments are offered frequently, thus reducing the stress inherent in infrequent (selective) assessments; and ensure that each assessment is followed by many opportunities for performance improvement.

5. Corroboration of normative statements with arguments – that is, notes on day-to-day performance evaluations and feedback. Statements about poor performance should never come as a surprise to the trainee.

6. Selective decision-making by a committee rather than by an individual staff member. Decision-making by committee enhances the credibility and defensibility of decisions, as they are based on information from different perspectives and offer a more in-depth reflection on performance.8

IN SUMMARY

Assessment of habitual performance in the workplace is an essential tool in making judgements about clinical competence, but more importantly, it guides trainees towards expected standards of practice performance. In-training assessment, therefore, is crucial in professional/medical training. Recent developments show that feasible, valid and reliable in-training assessment is possible, provided some crucial conditions are taken into account.

REFERENCES

1. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med 2004;79:S70–S81.

2. Ende J. Feedback in clinical medical education. JAMA 1983;250:777–781.

3. Branch WT, Paranjape A. Feedback and reflection: teaching methods for clinical settings. Acad Med 2002;77:1185–1188.

4. Van der Vleuten CPM, Scherpbier AJJA, Dolmans DHJM, Schuwirth LWT, Verwijnen GM, Wolfhagen HAP. Clerkship assessment assessed. Med Teach 2000;22:592–600.

5. Turnbull JT, MacFadyen J, Van Barneveld C, Norman G. Clinical work sampling: a new approach to the problem of in-training evaluation. J Gen Int Med 2000;15:556–561.

6. Daelmans HEM, Van der Hem-Stokroos HH, Hoogenboom RJI, Scherpbier AJJA, Stehouwer CDA, Van der Vleuten CPM. Feasibility and reliability of an in-training assessment programme in an undergraduate clerkship. Med Educ 2004;38:1270–1277.

7. Govaerts MJB, Van der Vleuten CPM, Schuwirth LWT, Muijtjens AMM. The use of observational diaries in in-training evaluation: student perceptions. Adv Health Sci Educ 2005;10:171–188.

8. Williams RG, Dunnington GL, Klamen DL. Forecasting residents' performance – partly cloudy. Acad Med 2005;80:415–422.

9. Van der Hem-Stokroos HH, Daelmans HEM, Van der Vleuten CPM, Haarman HJThM, Scherpbier AJJA. A qualitative study of constructive clinical learning experiences. Med Teach 2003;25:120–126.

10. Irby DM, Bowen JL. Time-efficient strategies for learning and performance. Clin Teach 2004;1:23–28.

11. Norcini JJ. The mini clinical evaluation exercise (mini-CEX). Clin Teach 2005;2:25–30.

12. Dudek NL, Marks MB, Regehr G. Failure to fail: the perspectives of clinical supervisors. Acad Med 2005;80:S84–S87.

13. Davies H, Archer J. Multi source feedback: development and practical aspects. Clin Teach 2005;2:77–81.

14. Rolfe I, McPherson J. Formative assessment: how am I doing? Lancet 1995;345:837–839.

Figure 1. Assessment and feedback form 1.



Figure 2. Assessment and feedback form 2.
