
Assessment, Feedback and Evaluation

Vinod Patel & John Morrissey

1

Learning outcomes

By the end of this session you will be able to :

• Define assessment, feedback and evaluation

• Discuss how these are related and how they differ

• Discuss the application of each in clinical education

• Begin to apply them in practice

2

Lesson Plan

• Definitions

• Assessment : theory & practice

• Tea break

• Feedback

• Evaluation : theory & practice

• Questions and close

3

Definitions ?

• Assessment ?

• Feedback

• Evaluation ?

4

Assessment : definition

“The processes and instruments applied to measure the learner’s achievements, normally after they have worked through a learning programme of one sort or another”

Mohanna K et al (2004) Teaching Made Easy – a manual for health professionals

5

Feedback : definition

“Specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance”

Van de Ridder JM et al (2008) Med Educ 42(2): 189

6

Evaluation : definition

“A systematic approach to the collection, analysis and interpretation of information about any aspect of the conceptualisation, design, implementation and utility of educational programmes”

Mohanna K et al (2004) Teaching Made Easy – a manual for health professionals

7

Assessment : Part 1

8

In this section:

• Purposes of assessment

• Miller’s pyramid

• The utility function

9

Why assess ?

10

Why assess ? 1 of 2

• To inform students of strengths and weaknesses.

• To ensure adequate progress has been made before students move to the next level.

• To provide certification of a standard of performance.

11

Why assess ? 2 of 2

• To indicate to students which parts of the curriculum are considered important.

• To select for a course or career.

• To motivate students in their studies.

• To measure the effectiveness of teaching and to identify weaknesses in the curriculum.

12

Summative

Formative

13

Clinical Education : Assessment Methods

• Written Assessments

• Observed clinical practice

• Others :

• Vivas

• Portfolios

• …

14

How a skill is acquired

• Cognitive phase

• Fixative phase

  • Practice

  • Feedback

• Autonomous phase

Fitts P & Posner M (1967) Human Performance

15

Miller's pyramid : Knows → Knows how → Shows how → Does

Miller GE (1990) Acad Med (Suppl) 65 : S63

16


Miller's pyramid mapped to assessment methods :

• Knows / Knows how : written exams (MCQ, short answer-reasoning)

• Shows how : OSCE, OSLER

• Does : clinical work observed (ACAT, CbD, mini-CEX)

Miller GE (1990) Acad Med (Suppl) 65 : S63

Question :

How can we tell whether these tests are any good or not ?

Answer :

We do the maths.

20

Utility function

• U = Utility

• R = Reliability

• V = Validity

• E = Educational impact

• A = Acceptability

• C = Cost

• W = Weight

U = wR·R × wV·V × wE·E × wA·A × wC·C

Van der Vleuten, CPM (1996) Advances in Health Sciences Education 1, 41-67.

21
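A purely illustrative calculation (the component scores and weights below are invented for this sketch, not taken from van der Vleuten) : with every weight set to 1 and each component scored on a 0–1 scale, say R = 0.8, V = 0.7, E = 0.6, A = 0.9 and C = 0.5, then

U = 0.8 × 0.7 × 0.6 × 0.9 × 0.5 ≈ 0.15

Because the model is multiplicative, a zero on any single component (for example a test that is wholly unacceptable to candidates, A = 0) drives the overall utility to zero, however strong the other components are.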

The Assessment Pentagram

• Validity

• Reliability

• Feasibility

• Acceptability

• Educational impact

22

Validity & reliability

• Validity : the extent to which the competence that the test claims to measure is actually being measured.

• Reliability : the extent to which a test yields reproducible results.

Schuwirth & van der Vleuten (2006) How to design a useful test : the principles of assessment

23

Validity : another definition

“The degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment.”

Messick (1994) Educational Researcher 23 : 13

24

Some causes of low validity

• Vague or misleading instructions to candidates.

• Inappropriate or overcomplicated wording.

• Too few test items.

• Insufficient time.

• Inappropriate content.

• Items too easy or too difficult.

McAleer (2005) Choosing Assessment Instruments

25

Some causes of low reliability

• Inadequate sampling.

• Lack of objectivity in scoring.

• Environmental factors.

• Processing errors.

• Classification errors.

• Generalisation errors.

• Examiner bias.

McAleer (2005) Choosing Assessment Instruments

26

Types of validity

• Face

• Predictive

• Concurrent

• Content

• Construct

27

[Bar chart : percentage of responses from “strongly disagree” to “strongly agree”, for ESA 1, ESA 2 and ESA 3]

“The examination fairly and accurately assessed my ability”

28

[Bar chart : percentage of responses from “strongly disagree” to “strongly agree”, for ESA 1, ESA 2 and ESA 3]

“The examination fairly and accurately assessed the candidates’ ability”

29

Problem :

Appearances can be deceptive.

30

Types of reliability

• Test-retest

• Equivalent forms

• Split-half

• Interrater and intrarater

31


The Assessment Pentagram

• Validity

• Reliability

• Feasibility

• Acceptability

• Educational impact

34

Utility function

• U = Utility

• R = Reliability

• V = Validity

• E = Educational impact

• A = Acceptability

• C = Cost

• W = Weight

U = wR·R × wV·V × wE·E × wA·A × wC·C

Van der Vleuten, CPM (1996) Advances in Health Sciences Education 1, 41-67.

35

Miller's pyramid : Knows → Knows how → Shows how → Does

Miller GE (1990) Acad Med (Suppl) 65 : S63

36

Miller's pyramid mapped to assessment methods :

• Knows / Knows how : written exams (MCQ, short answer-reasoning)

• Shows how : OSCE, OSLER

• Does : clinical work observed (ACAT, CbD, mini-CEX)

Miller GE (1990) Acad Med (Suppl) 65 : S63

FY Workplace Assessment

• Mini-CEX (from USA): Clinical Examination

• DOPS (developed by RCP): Direct Observation of Procedural Skills

• CBD (based on GMC performance procedures): Case-based Discussion

• MSF (from industry): Multi-Source Feedback

Carr (2006) Postgrad Med J 82: 576

38

Practical Exercise

39

Educational interventions

• How will we assess?

• How will we give feedback?

• How will we evaluate?

40

Educational interventions

• Communication skills for cancer specialists

• 2nd year medical speciality training

• Medical humanities SSM for medical students

• Masters-level pharmacology module

• Procedural skills for medical students

• Clinical Officers: ETATMBA

41

Communication skills

Learning outcome : To improve communication skills of HCPs working with individuals with cancer, e.g. with respect to breaking bad news, discussion of management plans, end of life care

Duration : 2 days

Students : Mostly specialist cancer nurses, including Macmillan nurses, also consultants and trainee medics. N = 30.

Teaching & learning : Mainly consultations with simulated patients

42

Speciality training

Learning outcome : To ensure trainees have reached the appropriate stage in the acquisition of the knowledge, skills and attitudes necessary for independent medical practice

Duration : 1 year

Students : Second-year GP trainees. N = 50.

Teaching & learning : Clinical apprenticeship, protected training days

43

Medical humanities SSM

Learning outcome : Through a reading of Middlemarch by George Eliot, to enhance students’ ability to reflect on medical practice and to enter imaginatively into the lives of others

Duration : 90-minute sessions weekly for 10 weeks

Students : Second-year medical students. N = 20.

Teaching & learning : Small group teaching and discussion

44

M-level pharmacology module

Learning outcome : To enhance knowledge and understanding of pharmacotherapies used in diabetes and its complications, and to develop the ability to apply this knowledge in clinical practice

Duration : 200 hours, i.e. 20 CATS points

Students : Mostly DSNs, a few GPs and endocrinology trainees, some from overseas. N = 20.

Teaching & learning : 20 hours directed learning – small group teaching and discussion – and 180 hours self-directed learning

45

Procedural skills

Learning outcome : To ensure newly-qualified doctors are competent in all the bedside and near-patient procedures listed in Tomorrow’s Doctors

Duration : 4 years

Students : Medical students. N = 200.

Teaching & learning : Small group teaching sessions distributed across three hospitals

46

Clinical Officers

Learning outcome : To…

Duration : x…

Students : y…

Teaching & learning : z…

47

The ideal assessment instrument :

• Totally valid.

• Perfectly reliable.

• Entirely feasible.

• Wholly acceptable.

• Huge educational impact.

48

Feedback : Part 2

49

Feedback : definition

“Specific information about the comparison between a trainee’s observed performance and a standard, given with the intent to improve the trainee’s performance”

Van de Ridder JM et al (2008) Med Educ 42(2): 189

50


In this section:

• Importance of feedback

• How to give feedback: models

• How to improve feedback

54

Experiential learning

Feedback

• Its value is self-evident : experiential learning cannot take place without it

• It is often infrequent, untimely, unhelpful or incomplete

• It is often not acted upon to improve performance

56

How to give feedback

• Establish appropriate interpersonal climate

• Use appropriate location

• Establish mutually agreed goals

• Elicit learner’s thoughts and feelings

• Reflect on observed behaviours

• Be non-judgmental

• Relate feedback to specific behaviours

• Offer right amount of feedback

• Offer suggestions for improvement

Hewson MG & Little ML (1998) J Gen Int Med 13 (2) : 111

57

Practical Exercise

58

Pendleton’s Rules

• Clarification of matters of fact

• Trainee identifies what went well

• Trainer identifies what went well

• Trainee discusses what they did not do well and how to improve

• Trainer identifies areas for improvement

• Agreement on areas for improvement and formulation of action plan

Pendleton D et al (1984) in The Consultation : an Approach to Learning and Teaching

60

Difficulties with Pendleton ?

• The strict format may inhibit spontaneous discussion.

• Unhelpful polarisation between “good points” and “bad points”

• Opening comments may seem predictable, insincere and merely a prelude to criticism.

Carr (2006) Postgrad Med J 82: 576

61

Chicago

• Review aims and objectives of the job at the start.

• Give interim feedback of a positive nature.

• Ask the learner to give a self-assessment of their progress.

• Give feedback on behaviours rather than personality.

• Give specific examples to illustrate your views.

• Suggest specific strategies to the learner to improve performance.

65

Improving feedback

• Recognise that we all need feedback to learn and improve

• Ask for feedback yourself and model this process for learners

• Inform learners that you expect them to ask for feedback

• Make feedback a routine activity

• Discuss the need for feedback with colleagues

Sargeant J & Mann K in Cantillon P & Wood D (eds) ABC of Learning and Teaching in Medicine 2nd edn (2010)

66

Evaluation : Part 3

67

Evaluation : definition

“A systematic approach to the collection, analysis and interpretation of information about any aspect of the conceptualisation, design, implementation and utility of educational programmes”

Mohanna K et al (2004) Teaching Made Easy – a manual for health professionals

68

In this section:

• Purposes of evaluation

• Data sources

• Kirkpatrick’s hierarchy

69

The Audit Cycle

Ask question(s) → Review literature → Set criteria & standards → Design audit → Collect data → Analyse data → Feed back findings → Action plan → Review standards → Re-audit

Wakley G & Chambers R (2005) Clinical Audit in Primary Care

70

Why evaluate ?

• To ensure teaching is meeting students’ needs.

• To identify areas where teaching can be improved.

• To inform the allocation of faculty resources.

• To provide feedback and encouragement to teachers.

• To support applications for promotion by teachers.

• To identify and articulate what is valued by medical schools.

• To facilitate development of the curriculum.

Morrison (2003) Br Med J 326 : 385

71

Evaluation

• Scale : micro ↔ macro

• Formative ↔ summative

• Internal ↔ external

• Can you evaluate an assessment ?

72

Evaluation : data sources

• Student ratings
• Peer ratings
• Self-ratings
• Assessment scores
• Expert ratings
• Student interviews
• Exit ratings
• Employer ratings
• Video recordings
• Administrator ratings
• Teacher scholarship
• Teacher awards
• Teaching portfolios

Based on : Berk RA (2006) Thirteen Strategies to Measure College Teaching

73


Teaching Observation

• A method of evaluating teaching

• Different models and purposes

• Three stages: pre-observation, observation, post-observation

• Form (instrument) for recording the information, observation and feedback

Siddiqui ZS, Jonas-Dwyer D & Carr SE (2007) Twelve tips for peer observation of teaching. Medical Teacher 29:297-300

76

Teaching Observation : purposes

• Evaluation : authority / summative

• Developmental : expert / formative

• Peer review : collaborative / mutual learning

77


Evaluation : triangulation

Self, peers and students

79

Practical Exercise

80

Kirkpatrick’s Hierarchy

Levels, from base to apex : Reaction → Learning → Behaviour → Results

Moving up the hierarchy, the behaviour assessed becomes more complex, more time elapses, measures are harder to make reliable and confounding factors increase.

Hutchinson (1999) Br Med J 318 : 1267

81

What’s the point ?

82


Issues with Kirkpatrick

• Is it a hierarchy ?

• Omissions : learning objectives ?

• Linkages between levels ?

• “Kirkpatrick plus”

Tamkin P et al (2002) Kirkpatrick and Beyond, IES

84

Conclusions

• There is no perfect assessment instrument

• Feedback to students is essential to experiential learning

• The ultimate purpose of evaluation is to improve clinical outcomes

85