
Self-, peer-, and instructor-assessment from Bloom’s perspective


Page 1: Self-, peer-, and instructor-assessment from Bloom’s perspective

Self-, peer-, and instructor-assessment from Bloom’s perspective

Jose DUTRA Oliveira Neto
Luiz Antonio Titton
University of São Paulo, Brazil

AUGUST 8-12, 2015 • Chicago, IL
Monday, August 10, 2015 - 4:00 pm-5:30 pm

Page 2: Self-, peer-, and instructor-assessment from Bloom’s perspective

Summary

• Assessment
• Divergent results in the literature
• Research question
• Material & Methods
• Results
• Conclusion

Page 3: Self-, peer-, and instructor-assessment from Bloom’s perspective

Assessment & Bloom

• Assessment provides an accurate measure of student performance (Dietel, Herman, & Knuth, 1991).
• The assessment process begins with the identification of learning goals and measurable objectives (Martell & Calderon, 2005), as well as the use of specific traits that help define the objectives being measured (Walvoord & Anderson, 1998).
• These traits are frequently correlated with the developmental concepts articulated in Bloom’s Taxonomy of Educational Objectives, which provides a recognized set of hierarchical behaviors that can be measured as part of an assessment plan (Harich, Fraser, & Norby, 2005).

Bloom

Page 4: Self-, peer-, and instructor-assessment from Bloom’s perspective

Assessment

• Past
  • Instructor-based
  • Issues
• Now & Future
  • Self-, peer-assessment, & [instructor]?

Problem

Page 5: Self-, peer-, and instructor-assessment from Bloom’s perspective

Self-, peer-assessment: Potential advantages over teacher grading

• Reading another’s answers or simply spending time pondering another’s view may be enough for students to change their ideas or further develop their skills (Bloom & Krathwohl,1956; Boud, 1989).

• Grading can help to demystify testing. Students become more aware of their own strengths, progress, and gaps (Alexander, Schallert, & Hare, 1991; Black & Atkin, 1996).

• Peers can often spend more time and offer more detailed feedback than the teacher can provide (Weaver & Cotrell, 1986).

• Students develop a positive attitude toward tests as useful feedback rather than for “low grades as punishment for behavior unrelated to the attainment of instructional objectives”(Reed, 1996, p. 18; see also Sadler, 1989).

• Grading time is a key issue.
• Several courses evaluate and grade students using peer assessment (Kolowich, 2013).
• Feedback from peers is often more understandable and helpful than feedback from faculty (Topping, 1998).
• Students learn more from giving feedback than from receiving it (Cho & MacArthur, 2011).

Sadler & Good (2006)

Potential

Page 6: Self-, peer-, and instructor-assessment from Bloom’s perspective


Problem

• Divergent results among self-, peer- and instructor-assessment studies


Page 7: Self-, peer-, and instructor-assessment from Bloom’s perspective

Divergent results: reliability? validity?

Differences:
• Chang, Liang, & Chen (2013) – Self-assessment and teacher-assessment results were highly consistent, without significant differences. 72 senior high school students using a rubric.
• Chang et al. (2012) – 72 senior high school students. There were significant differences among the results of the three assessment methods: teacher-raters adopted the most rigorous scoring standards, while peer-raters tended to use the most lax standards.
• Chen (2010) – 37 students (14 undergraduate and 23 graduate). Lack of consistency between teacher-grading and student-grading.
• Napoles (2008) – 36 undergraduate music education majors. The instructor’s ratings were consistently the highest; peer ratings were consistently the lowest.
• Sadler & Good (2006) – Peer-grading under-graded and self-grading over-graded. Self-grading had higher agreement with teacher grades than peer-grading, using several measures.
• Phillips (2014) – Peer scores were close to instructor ratings.

Divergent results

The fact that students can accurately grade a test does not mean that the test is actually measuring what is intended (Sadler, 1989).

Why? Research is needed on: feedback, self-assessment, peer-assessment, and rubrics.
Page 8: Self-, peer-, and instructor-assessment from Bloom’s perspective

Self-assessment

• The student acting as an evaluator can be a powerful learning experience (O’Toole, 2013).
• The literature hypothesizes that self- and peer-grading can provide feedback that a student can use to gain further understanding (Sadler, 1989).

Introduction

Page 9: Self-, peer-, and instructor-assessment from Bloom’s perspective

Peer assessment

• Most students perceived that peer review has a positive impact on self-confidence, improves learning behaviours, and helps identify personal strengths and limitations (Theising et al., 2014).
• It was also observed that the involvement provided by peer review develops more cognitive target skills (Chang, Tseng, & Lou, 2012).
• Peer review improves students’ abilities to relate instructional objectives to assessment activities, to understand the criteria and procedures, and to identify the strengths and weaknesses of their own performance, improving their understanding of and confidence in the subject at hand and their future performance (Chen, 2010).
• Dialogue between peers is recommended for developing critical thinking, which may include judging the contributions that each gives to the other (Bonk & Smith, 1998).

Introduction

Page 10: Self-, peer-, and instructor-assessment from Bloom’s perspective


Rubric

• Assessment tool
• Anonymity
• More objective assessment
• Development of thinking skills
• Enables high-quality assessment
• Reliability is increased by using rubrics with unambiguous scales and a small number of categories (five or fewer) (Sadler, 1989).
• When students were simply handed a rubric and asked to use it voluntarily, but were given no training in its use, they ignored the rubric (Fairbrother, Black, & Gil, 1995).
• Rubrics help to standardize assessment, provide useful data, and articulate goals and objectives to learners. They are also particularly useful in assessing complex and subjective skills (Dodge & Pickette, 2001).

Introduction

Page 11: Self-, peer-, and instructor-assessment from Bloom’s perspective

Procedure / Model

Online assessment system
• Phase I – Abstract assignment
• Phase II – Evaluate and assign grades (self- & 2 peers)
• Extra points

Page 12: Self-, peer-, and instructor-assessment from Bloom’s perspective

Inspiration

• There is much research on this topic. Boud (1989) remarked on the poor quality of quantitative research dealing with self-assessment, citing low technical quality and results and methods that vary from study to study (Sadler & Good, 2006).
• No quantitative studies were found that attempted to measure the effect of student-grading on learning (Sadler & Good, 2006).
• How well do student grades match the teacher’s grades?
• A difference was found between the accuracy of grading of items that required lower-order and higher-order cognitive skills (Sadler & Good, 2006).
• Assessment with Bloom’s taxonomy (Buzzetto-More & Alade, 2006).

Introduction

Page 13: Self-, peer-, and instructor-assessment from Bloom’s perspective

Research question

• To what extent may Bloom’s taxonomy help to explain the differences between instructors’ grades and self- and peer-assessment?

Introduction

Page 14: Self-, peer-, and instructor-assessment from Bloom’s perspective

Material & Methods

• Sample of 98 undergraduate accounting students from USP.
• Introduction to research methodology course.
• Moodle LMS as the online assessment tool; we used a tool called “Workshop”.
• A model with anonymous instructor-, self-, and peer-assessment based on rubrics.

Page 15: Self-, peer-, and instructor-assessment from Bloom’s perspective

Model

Flow of the procedure:
1. Training session
2. Regular classes
3. Task – complete an abstract during a face-to-face class
4. Online assessment, anonymous and rubric-based: self-assessment, peer-assessment, and instructor-assessment
5. Compare the assessments by Bloom’s level

Material & Methods

Page 16: Self-, peer-, and instructor-assessment from Bloom’s perspective

Procedure using the Model

Task | Bloom domain
T1 – Identify abstract sections (highlight) | Knowledge – Level 1
T2 – Reorder sections to a standard | Knowledge – Level 1
T3 – Show the ability to identify missing sections | Comprehension – Level 2
T4 – Write the first missing section (context) – summarize | Comprehension – Level 2
T5 – Write the second missing section (gap) – summarize | Comprehension – Level 2

Student workflow:
1. Article reading; highlight abstract sections with different predefined colors
2. Reorder all sections to a standard
3. Identify missing sections in the abstract
4. Write the missing sections for the abstract
5. Add the new missing sections and include the previously existing sections
6. Re-write the abstract and post it on the Moodle platform

Material & Methods

Page 17: Self-, peer-, and instructor-assessment from Bloom’s perspective

Results – Post hoc Bonferroni test

Task | Result
T1 | Instructor-assessment < peer-assessment and self-assessment; the last two show no significant difference.
T2 | Instructor-assessment < peer-assessment and self-assessment; the last two show no significant difference.
T3 | Peer-assessment is significantly lower than self-assessment and instructor-assessment; the last two show no significant difference.
T4 | Instructor-assessment < peer-assessment and self-assessment; the last two show no significant difference.
T5 | Instructor-assessment < peer-assessment and self-assessment; the last two show no significant difference.

1st level – Knowledge (T1+T2): Instructor-assessment < peer-assessment and self-assessment; the last two show no significant difference (more convergence). Student–teacher agreement increased with tests requiring less judgment to grade.

2nd level – Comprehension (T3+T4+T5): Instructor-assessment < peer-assessment and self-assessment. Peer-assessment has significantly lower values than self-assessment (less convergence).


Results
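To make the Bloom-level grouping above concrete, here is a minimal Python sketch, with made-up grades and assumed column names, of how per-task grades could be averaged into Knowledge (T1+T2) and Comprehension (T3+T4+T5) scores per student and assessment method before comparison. It illustrates the grouping idea only; it is not the authors’ actual analysis script.

```python
import pandas as pd

# Hypothetical long-format grades: one row per student, assessment method, and task.
df = pd.DataFrame({
    "student": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    "method":  ["self"] * 5 + ["peer"] * 5,
    "task":    ["T1", "T2", "T3", "T4", "T5"] * 2,
    "grade":   [5, 4, 3, 4, 4, 4, 4, 2, 3, 3],
})

# Map each task to its Bloom level: T1+T2 = Knowledge, T3+T4+T5 = Comprehension.
bloom = {"T1": "Knowledge", "T2": "Knowledge",
         "T3": "Comprehension", "T4": "Comprehension", "T5": "Comprehension"}
df["bloom_level"] = df["task"].map(bloom)

# Average grade per student, method, and Bloom level.
level_scores = df.groupby(["student", "method", "bloom_level"], as_index=False)["grade"].mean()
print(level_scores)
```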

Page 18: Self-, peer-, and instructor-assessment from Bloom’s perspective

[Chart: grades by assessment method for each Bloom's taxonomy level – 1st level: Knowledge (T1+T2); 2nd level: Comprehension (T3+T4+T5)]

Results

Page 19: Self-, peer-, and instructor-assessment from Bloom’s perspective

Conclusion

• It was found that, by offering activities that involve comprehension, a higher level of Bloom’s taxonomy, the discrepancies between the assessments students made of their own work and those they received from their peers may not behave uniformly.
• Students always give higher grades than instructors do.
• The differences may indicate additional actions needed in the learning process.
• Anonymous assessment is important.
• A new student–faculty power relation.
• Developing a high-quality rubric is mandatory; using a rubric requires training.
• Using Bloom’s taxonomy may help to make results comparable or to generalize the models.
• Emerging view – assessment moving from “of learning / summative” to “for learning / formative” and “as learning” (Becker, 2013).
• Future studies:
  • To what extent does peer assessment contribute to academic performance?
  • Can student-grading substitute for teacher grades?
  • Design an experiment with two classes.

Page 20: Self-, peer-, and instructor-assessment from Bloom’s perspective

Thank you!
Prof. Dutra – USP

Learning technology [email protected]

Page 21: Self-, peer-, and instructor-assessment from Bloom’s perspective

Bloom’s taxonomy levels (standard descriptions):

• Knowledge – Exhibit memory of learned materials by recalling facts, terms, basic concepts, and answers.
• Comprehension – Demonstrate understanding of facts and ideas by organizing, comparing, translating, interpreting, giving descriptions, and stating the main ideas.
• Application – Use acquired knowledge; solve problems in new situations by applying acquired knowledge, facts, techniques, and rules.

Page 22: Self-, peer-, and instructor-assessment from Bloom’s perspective

Rubric header | Bloom’s level | Rubric options | Value

T1 – Correctly highlighted the abstract’s existing sections as in the template (Level 1):
  Did not – 1 | Wrong three sections or more – 2 | Wrong two sections – 3 | Wrong one section – 4 | Correctly marked all sections – 5

T2 – Reorder the abstract (Level 1):
  Did not – 1 | Wrong three sections or more – 2 | Wrong two sections – 3 | Wrong one section – 4 | Correctly reordered all sections – 5

T3 – Show the missing parts, context and gap (Level 2):
  Did not – 1 | Identified more than 2 missing sections – 2 | Identified one missing section (not correct) – 3 | Identified the 2 missing sections (correct) – 4

T4 – Quality of the missing section produced, context (Level 2):
  Did not produce the text – 1 | Produced the text, but it is not correct – 2 | Produced correctly, but the text is not clear – 3 | The text is clear and correct – 4

T5 – Quality of the missing section produced, gap (Level 2):
  Did not produce the text (gap) – 1 | Produced the text, but it is not correct – 2 | Produced correctly, but the text is not clear – 3 | The text is clear and correct – 4
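As an illustration, the rubric above could be encoded as a simple data structure so that each rater’s chosen option maps to a numeric value and a Bloom level. This is a minimal Python sketch with hypothetical names; the study used Moodle’s Workshop tool, not this code.

```python
# Hypothetical encoding of the rubric above: each task has a Bloom level
# and a mapping from the option chosen by the rater to its numeric value.
RUBRIC = {
    "T1": {"bloom_level": 1, "options": {
        "Did not": 1,
        "Wrong three sections or more": 2,
        "Wrong two sections": 3,
        "Wrong one section": 4,
        "Correctly marked all sections": 5}},
    "T2": {"bloom_level": 1, "options": {
        "Did not": 1,
        "Wrong three sections or more": 2,
        "Wrong two sections": 3,
        "Wrong one section": 4,
        "Correctly reordered all sections": 5}},
    "T3": {"bloom_level": 2, "options": {
        "Did not": 1,
        "Identified more than 2 missing sections": 2,
        "Identified one missing section (not correct)": 3,
        "Identified the 2 missing sections (correct)": 4}},
    "T4": {"bloom_level": 2, "options": {
        "Did not produce the text": 1,
        "Produced the text, but it is not correct": 2,
        "Produced correctly, but the text is not clear": 3,
        "The text is clear and correct": 4}},
    "T5": {"bloom_level": 2, "options": {
        "Did not produce the text (gap)": 1,
        "Produced the text, but it is not correct": 2,
        "Produced correctly, but the text is not clear": 3,
        "The text is clear and correct": 4}},
}

def score(task: str, option: str) -> tuple[int, int]:
    """Return (value, Bloom level) for the option chosen on a given task."""
    entry = RUBRIC[task]
    return entry["options"][option], entry["bloom_level"]

# Example: a rater marks T3 as "Identified the 2 missing sections (correct)".
print(score("T3", "Identified the 2 missing sections (correct)"))  # (4, 2)
```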

Page 23: Self-, peer-, and instructor-assessment from Bloom’s perspective

Data Analysis

• Data were analysed by comparing the three matched assessment methods (instructor-, peer-, and self-assessment) and the influence of the gender variable (between-subjects factor) using a mixed-model repeated-measures Analysis of Variance (ANOVA). When there was a difference between the assessment methods, we used the Bonferroni post hoc test. We adopted a significance level of p ≤ 0.05.
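For readers who want to reproduce this kind of analysis, the sketch below shows one way to run a mixed-design repeated-measures ANOVA with a Bonferroni post hoc test in Python. It assumes a recent version of the pingouin library and hypothetical file and column names; it is not the authors’ original script.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one grade per student and assessment method,
# with gender as the between-subjects factor (file and column names assumed).
df = pd.read_csv("grades_long.csv")  # columns: student, gender, method, grade

# Mixed-design repeated-measures ANOVA:
# 'method' (instructor/peer/self) is within-subjects, 'gender' is between-subjects.
aov = pg.mixed_anova(data=df, dv="grade", within="method",
                     between="gender", subject="student")
print(aov)

# Bonferroni-corrected pairwise comparisons among the assessment methods,
# using the p <= 0.05 significance level adopted in the study.
posthoc = pg.pairwise_tests(data=df, dv="grade", within="method",
                            subject="student", padjust="bonf")
print(posthoc[posthoc["p-corr"] <= 0.05])
```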