Transforming Student Learning: Feedback and Criteria

Transforming student learning: why feedback matters

Dr Tansy Jessop
DOPA Development Day

8 September 2015

www.testa.ac.uk

What the research says…

• “Feedback is the single most important factor in student learning” (Hattie, 2009).

• “Not all feedback is good feedback” (Molloy and Boud 2013).

• Feedback needs to be prompt, detailed, specific, developmental and dialogic (Gibbs 2004; Nicol 2010).

Think of a time…

• …when you received really helpful feedback. Jot down a few thoughts about the context. What made it helpful?

• …when you received really unhelpful feedback. Jot down a few thoughts about the context. What made it unhelpful?

• Turn to the person next to you and share some thoughts about the feedback you have received.

Common myths about feedback

Myth 1: Let students only hear sweet music in the feedback

What students say…

• If you go for help you can say “I’m struggling. I don’t know where I’m going wrong...” and they just pacify you. “No, you’re doing fine. Carry on the way you’re going” and the next thing you know is that you’re not doing as well as you think because they’re not giving you constructive criticism, which is what you need.

• They just pacify really. I went for help and they just told me what I wanted to hear, not what I needed to know.

• The kinds of things where it says, you know, this line of argumentation is wrong, or this assumption you’re making is wrong or something, is actually useful.

Two minute pause

1. What quote, phrase or word resonates for you?

2. Any ideas for fixing the problem?

The perils of praise…

• Syntheses of studies on praise show that people do inferior work when enticed with praise (Kohn 1999; Dweck 2012).

• ‘Vanishing feedback’ where a lecturer neglects to raise an important performance issue for fear of eliciting a negative reaction from a student (Ende 1983).

Is the feedback sandwich a great meal?

I cushion the blow!

The hard truths are nicely disguised!

Me too - nice and soft!

Myth 2: The best feedback is impersonal and ignores the emotional impact on students

http://arts.brighton.ac.uk/projects/networks/issue-18-july-2012/the-art-group-crit.-how-do-you-make-a-firing-squad-less-scary

What students say…

• You’re so nervous that you’re going to get it back with all these red marks saying that it’s wrong.

• It’s always the negatives you remember, as we’ve all said. It’s always the negatives. We hardly ever pick out the really positive points because once you’ve seen the negative, the negatives can outweigh the positives.

• I feel physically sick handing in an assignment. I can’t sleep for days before because I panic that it’s not right and it’s so pathetic.

What the literature says…

• “Feedback is an inherently emotional business… emotions are a barrier and a stimulus” (Molloy et al. 2013).

• “The ‘telling’ mode of performance information exchange implies that the lecturer viewpoint cannot be contested” (ibid.).

• ‘Final vocabulary’ leaves the student with no room for manoeuvre (Boud 1995).

Pause

1. What is your experience of the emotional side of feedback?

2. How does it affect students’ use of your feedback?

3. Have you developed any mitigating strategies?

Myth 3: Performance is only in the DNA

What students say…

• It told you some of the problems but it doesn’t tell you how you can manage to fix that. It was, “Well, this is the problem.” I was like, “How do I fix it?” They said, “Well, some people are just not good at writing.”

• Sometimes they just scratch through a bit and then they don’t really say how you could change it. They say like ‘No’ or ‘Don’t put this in’ and you think ‘Well what do I put there? How do I change it?’ It’s quite soul destroying.

Here are some ways in which you can improve…

Some people are just not good at writing…

Myth 4: Feedback is ‘telling’

What students say…

• Sometimes they just scratch through a bit and then they don’t really say how you could change it. They say like ‘No’ or ‘Don’t put this in’ and you think ‘Well what do I put there? How do I change it?’ It’s quite soul destroying.

• Because they have to mark so many that our essay becomes lost in the sea that they have to mark.

• It was like ‘Who’s Holly?’ It’s that relationship where you’re just a student.

• Here they say ‘Oh yes, I don’t know who you are. Got too many to remember, don’t really care, I’ll mark you on your assignment’.

What students say…

• Oral feedback is much better, and more personal, but then it’s gone.

• We’ve had screencast and audio feedback. You can see tutors interacting with your piece, which is interesting. It really helped me in terms of structure, and also with the method.

• I liked the screen-casting. It was really good. And sometimes it’s better than going to the lecturer, because I don’t feel embarrassed and can keep going back to it.

• I’d much rather sit down and get into a discussion with someone because then if you don’t understand something you can still ask why or say you don’t understand.

• Getting feedback from other students in my class helps. I can relate to what they’re saying and take it on board. I’d just shut down if I was getting constant feedback from my lecturer.

It’s about educational paradigms…

Transmission Model

Social Constructivist model

Myth 5: There are right answers

An enigma wrapped in a riddle surrounded by a mystery

“This course has changed my whole outlook on life. Superbly taught!”

“This course is falsely taught and dishonest. You have cheated me of my tuition.”

“This has been the most sloppy, disorganised course I’ve ever taken. Of course I’ve made some improvement, but this has been due entirely to my own efforts!”

Perry (1981) in “The Modern American College”

The reliance on traditional instruction is not simply a choice made by individual faculty; students often prefer it. This resistance to active learning may have more to do with their epistemological development than a true preference for passivity. Entering freshmen are likely to use a right-or-wrong, black-or-white mental model. At this dualistic stage, students believe that the “right answer exists somewhere for every problem, and authorities know them. Right answers are to be memorized by hard work” (p. 79).

More of Perry

By confronting students with uncertainty, ambiguity, and conflicting perspectives, instructors help them develop more mature mental models that coincide with the problem-solving approaches used by experts. Authentic learning exercises expose the messiness of real-life decision making, where there may not be a right or a wrong answer per se, although one solution may be better or worse than others depending on the particular context. Such a nuanced understanding involves considerable reflective judgment, a valuable lifelong skill that goes well beyond the memorization of content.

Intellectual Development of Students

• Third year: Commitment (teacher as endorser)

• Second year: Relativism (teacher as enigma)

• First year: Dualism (teacher as expert)

References

Boud, D. and Molloy, E. (2013) Rethinking models of feedback for learning: the challenge of design. Assessment & Evaluation in Higher Education, 38(6), 698-712. http://dx.doi.org/10.1080/02602938.2012.691462

Boud, D. and Molloy, E. (2013) Feedback in Higher and Professional Education: Understanding it and doing it better. Abingdon: Routledge.

Gibbs, G. & Simpson, C. (2004) Conditions under which assessment supports students' learning. Learning and Teaching in Higher Education. 1(1): 3-31.

Hattie, J. (2007) The Power of Feedback. Review of Educational Research. 77(1) 81-112.

Hughes, G. (2014) Ipsative Assessment. Basingstoke. Palgrave MacMillan.

Jessop, T. and Maleckar, B. (2014) The influence of disciplinary assessment patterns on student learning: a comparative study. Studies in Higher Education. Published online 27 August 2014. http://www.tandfonline.com/doi/abs/10.1080/03075079.2014.943170

Jessop, T., El Hakim, Y. and Gibbs, G. (2014) The whole is greater than the sum of its parts: a large-scale study of students’ learning in response to different assessment patterns. Assessment and Evaluation in Higher Education, 39(1), 73-88.

Nicol, D. (2010) From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517.

Nicol, D. and McFarlane-Dick D. (2006) Formative Assessment and Self-Regulated Learning: A Model and Seven Principles of Good Feedback Practice. Studies in Higher Education. 31(2): 199-218.

Sadler, D.R. (1989) Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.

Transforming student learning: why marking matters

Dr Tansy Jessop
DOPA Development Day

2 September 2015

Your task

• Use the criteria from Goldsmiths to mark this piece of art.

• You have fifteen minutes to mark the picture and write comments.

• Criteria assume meaning only when used. Simply writing criteria and publishing them in handbooks or on websites has little or no impact on students’ learning (Rust, 2001).

Pairs: how do you mark?

• How did you mark the art work?

• How do you normally mark?

• How do you make judgements?

• What do you find most difficult?

What the literature says…

Marking is important. The grades we give students and the decisions we make about whether they pass or fail coursework and examinations are at the heart of our academic standards (Bloxham, Boyd and Orr 2011).

What the papers say…

https://www.timeshighereducation.co.uk/news/examiners-give-hugely-different-marks/2019946.article

QAA: a paradigm of accountability

• Learning outcomes
• Criteria-based learning
• Meticulous specification
• Written discourse
• Generic discourse (Woolf 2004)
• ‘Validating practices’ (Shay 2004)
• Transparent to staff and students
• Intended to reduce the arbitrariness of staff decisions (Sadler 2009)

The first problem: confusion with terms

Standards can be seen in the typical department or university grade descriptors, which specify what students must do in relation to generic criteria in order to achieve a particular grade. This distinguishes criteria, which are likely to be specific to a given assignment, from standards, which might apply across all work at the relevant level (Bloxham et al. 2011).

In summary:

• Criteria: attributes or properties useful for making judgements about assessments (from kriterion: a means for judging).

• Grade descriptor: explains what a student needs to demonstrate in order to achieve a certain grade or mark in an assessment.

• Standard: a particular degree or level of quality, meeting a threshold or minimum level, usually determined by an authority or group of scholars in a field.

Is there a right way to mark….?

Hang on, this is puzzling

Every lecturer is marking it differently, which confuses people.

We’ve got two tutors; one marks completely differently to the other and it’s pot luck which one you get.

They have different criteria, they build up their own criteria.

Q: If you could change one thing to improve, what would it be?
A: More consistent marking, more consistency across everything and that they would talk to each other.

It’s such a guessing game.... You don’t know what they expect from you.

What’s going wrong here?

• There are criteria, but I find them really strange. There’s “writing coherently, making sure the argument that you present is backed up with evidence”.

• I get the impression that they don't even look at the marking criteria. They read the essay and then they get a general impression, then they pluck a mark from the air.

• I don’t have any idea of why it got that mark.

But this is quite ‘normal’…

Differences between markers are not ‘error’, but rather the inescapable outcome of the multiplicity of perspectives that assessors bring with them (Shay 2005, 665).

There is a tension between ‘the scientific aspirations of assessment technologies to represent an objective reality and the unavoidable subjectivities injected by the human focus of these technologies’ (Broadfoot 2002, 157).

Yet profoundly worrying…

We are left with the haunting spectre of relativism – no universally fixed standards, no clear-cut criteria to which we can appeal, no escape from subjectivity. A retreat to relativism is, however, not an option. There is too much at stake for students, the academic community and society more broadly (Shay 2005, 674).

Grades matter (Sadler 2009).

It’s complicated

Divergent assessment dominates in HE

• Opportunities for students to demonstrate sophisticated cognitive abilities, integration of knowledge, complex problem-solving, critical opinion, lateral thinking and innovative action (160)

• Divergent works are typically complex; their quality can only be explained by reference to multiple criteria, possibly including some that are abstract in nature (Sadler 1983).

So do transparency and accountability

• Implicit, oral: ‘I just know’

• Explicit, written: ‘I justify’

• Co-creation and participation: active engagement by students

Having ‘an eye for a dog’

The Art and Science of Evaluation

Judging is both an art and a science. It is an art because the decisions with which a judge is constantly faced are very often based on considerations of an intangible nature that can be recognized only intuitively. It is also a science because without a sound knowledge of a dog’s points and anatomy, a judge cannot make a proper assessment of it whether it is standing or in motion.

Take them round please: the art of judging dogs (Horner, T. 1975).

Criteria problem 1: Analytic vs holistic

Analytic grading: separate qualitative judgements are made on each of the criteria. After the criterion-by-criterion judgements are made, they are combined, usually by way of a formula. The resulting aggregate is converted into a grade.

Holistic grading: the assessor progressively builds up a complex mental response to a student’s work, attending to particular aspects while allowing an appreciation of the work as a whole to emerge.

Criteria Problem 2: How many criteria are enough?

Criteria problem 3: Vive la difference?

Different lecturers use different sets of criteria. Selecting some excludes others: is one more legitimate than another? What effect would applying different criteria have? What signals does using different sets of criteria send to students?

• Does your department use the same criteria?
• Does it matter?
• Does it influence grades?

Criteria problem 4: Staff and students see criteria differently

Criteria problem 5: Staff do not always share the same understanding

What’s behind the marking mask?

• Values
• Interpretation
• Connoisseurship
• Tacit understanding
• Subjective readings
• Privatisation
• Exposing what we think

Marking as social practice

• Situated in the discipline – not just ‘an eye for a dog’

Far from being mere personal opinion or an arbitrary ‘taste’ or ‘gut-feel’, this subjective reading is a socially constituted, practical mastery (Shay 2005).

Marking as social practice

This highlights what is perhaps one of the great failings in our academic communities of practice, in which the typical technologies of our assessment and moderation systems – marking memorandum, double-marking, external examiners – privilege reliability. These technologies are not in themselves problematic. The problem is our failing to use these technologies as opportunities for dialogue about what we really value as assessors, individually and as communities of practice (Shay 2005).

Marking as social practice

Staff-staff
• Design of tasks
• Shared marking, calibration
• In-discipline dialogue

Staff-students-staff
• More process-oriented
• More discussion about complex tasks
• Dialogue about examples
• Co-creation/rewriting criteria

Students-students
• Peer review
• Dialogue about examples
• Developing self-evaluative skills

This counts towards your CPD

References

Bloxham, S., Boyd, P. and Orr, S. (2011) Mark my words: the role of assessment criteria in UK higher education practices. Studies in Higher Education, 36(6), 655-670.

O'Donovan, B., Price, M. and Rust, C. (2008) Developing student understanding of assessment standards: a nested hierarchy of approaches. Teaching in Higher Education, 13(2), 205-217. http://dx.doi.org/10.1080/13562510801923344

Sadler, D.R. (2009) Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159-179. http://dx.doi.org/10.1080/02602930801956059

Shay, S.B. (2005) The assessment of complex tasks: a double reading. Studies in Higher Education, 30, 663-679.

Woolf, H. (2004) Assessment criteria: reflections on current practices. Assessment and Evaluation in Higher Education, 24(4), 479-493.
