
Chapter 12 Summative Evaluation

By Reginald Smith

November 9, 2016


Summative Evaluation is defined as the design of evaluation studies and the collection of data to verify the effectiveness of instructional materials with target learners. According to Dick and Carey (1996), the expert judgement phase of summative evaluation is used to find out if either current or candidate instruction can meet an organization's identified instructional needs. The following activities are part of the expert judgement phase of summative evaluation when reviewing candidate instruction:

• evaluating the congruence between the organization's instructional needs and the candidate instruction
• evaluating the completeness and accuracy of the candidate instruction
• evaluating the instructional strategy contained in the candidate instruction
• evaluating the utility of the instruction
• determining current users' satisfaction with the instruction (p. 323)


The Purpose of Summative Evaluation

• Make "go-no-go" decisions
• Keep current materials?

The expert judgement phase has already been accomplished if the instruction was tailored to the identified needs of the organization, was systematically designed and developed, and has been through formative evaluation. However, the instruction must be subjected to expert judgement if the organization is unfamiliar with the instruction and its developmental history (Dick & Carey, 1996). Usually, expert judgement is used to select from the available instructional options in order to choose the one or two that are most promising for a field trial.


Field Trial

The purpose of the field trial phase of summative evaluation is to determine the effectiveness of the instruction with the target group in the intended setting (Dick & Carey, 1996). There are two parts to the field trial phase: outcomes analysis and management analysis. The outcomes analysis reviews the impact of the instruction on the learner, the job, and the organization. Management analysis assesses "instructor and supervisor attitudes related to learner performance, implementation feasibility, and costs" (p. 323).
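To make the two-part structure concrete, here is a minimal, hypothetical sketch of how field-trial results might be rolled up into a "go-no-go" recommendation. Dick and Carey describe the analyses conceptually and prescribe no particular scoring, so every field name and threshold below is invented for illustration only.

```python
# Hypothetical sketch: rolling field-trial results into a "go-no-go" signal.
# All field names and thresholds are invented; they are not from Dick & Carey.

from dataclasses import dataclass

@dataclass
class FieldTrialResults:
    # Outcomes analysis: impact on the learner, the job, and the organization
    pct_learners_meeting_goals: float   # e.g., 0.85 = 85% met performance goals
    pct_applying_on_job: float          # observed transfer to the job
    # Management analysis: attitudes, feasibility, and costs
    instructor_satisfaction: float      # mean rating on a 1-5 scale
    implementation_feasible: bool       # were recommended procedures workable?
    cost_per_learner: float             # dollars

def go_no_go(r: FieldTrialResults, budget_per_learner: float = 500.0) -> str:
    """Combine outcomes analysis and management analysis into one decision."""
    outcomes_ok = (r.pct_learners_meeting_goals >= 0.80
                   and r.pct_applying_on_job >= 0.70)
    management_ok = (r.instructor_satisfaction >= 3.5
                     and r.implementation_feasible
                     and r.cost_per_learner <= budget_per_learner)
    return "go" if outcomes_ok and management_ok else "no-go"

print(go_no_go(FieldTrialResults(0.85, 0.75, 4.2, True, 420.0)))  # -> go
```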


Outcomes Analysis

The focus is the outcome of the instruction.

• Are learners capable of applying what they learn to their jobs?
• Impact on the organization: are learners' attitudes helping the organization achieve a positive difference?

Management Analysis

• Are the instructor and manager attitudes satisfactory?
• Are the recommended implementation procedures feasible?
• Are the costs related to time, personnel, equipment, and resources reasonable?

Formative vs. Summative Evaluation

The purpose of the summative evaluation is to make "go-no-go" decisions:


• Keep the current materials? OR
• Look for something better suited to meet the organization's specific instructional needs?

Current User Analysis

Seek additional information about the candidate materials from organizations that are experienced in using them.

Field Trial Phase (Was the instruction effective?)

• Planning
• Preparing
• Implementing
• Summarizing and Analyzing Data



REFLECTION

Why is evaluation important? Possible reasons:

• I think without evaluation it cannot be said whether the designers have properly understood the users. Therefore, one would not know whether the whole design is intolerant of minor errors.
• It'll be difficult to know whether the systems cause disruption, frustration, unacceptable changes, or conflict in organisations.
• Also, one would not know how to improve the systems in order to fit the users' needs better.
• Without evaluation, alternative designs could not be compared, nor would one know whether systems cause a cognitive overload in users.
• It would also be hard to assess whether computer systems force users to perform tasks in undesirable ways. Thus, it would be hard to check conformance to a standard.
• Engineering towards a target (often expressed as some form of metric) would be hard to achieve.


I think it is very important for us to constantly evaluate the instructional strategy to ensure that it is staying the course with what the organization has in mind while remaining usable and feasible for the organization. I think summative evaluation does just that: it provides monitoring and helps develop the training for future events.

As future instructional designers, we need to design and conduct summative evaluations by looking at all four levels of Kirkpatrick's model. Kirkpatrick stated that Levels 3 and 4 (Behavior and Results) are important indicators of the training's value to the organization. Thus, we should not limit ourselves to only the first two levels (Reaction and Learning).
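As a rough sketch of what "looking at all four levels" could mean in practice, the snippet below pairs each Kirkpatrick level with a sample measure. The level names are Kirkpatrick's; the example measures are invented for illustration.

```python
# Hypothetical sketch: organizing summative-evaluation measures by
# Kirkpatrick level. Level names are Kirkpatrick's; the measures are
# illustrative assumptions, not prescribed by the model.

kirkpatrick_plan = {
    1: {"level": "Reaction", "measure": "post-course satisfaction survey"},
    2: {"level": "Learning", "measure": "pre/post-test score gain"},
    3: {"level": "Behavior", "measure": "on-the-job observation 90 days out"},
    4: {"level": "Results",  "measure": "change in organizational metrics"},
}

for n, entry in kirkpatrick_plan.items():
    print(f"Level {n} ({entry['level']}): {entry['measure']}")
```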


EXTENDED KNOWLEDGE

Smith and Ragan (1999) suggest determining the goals of the evaluation as the first step in a goal-based summative evaluation. The most important part of this stage is determining the questions that should be answered as a result of the evaluation. The client organization and/or funding agencies and other stakeholders should identify the questions, which will then guide the remainder of the summative evaluation. Questions might include:

• Does implementation of the instruction solve the problem identified in the needs assessment?
• Do the learners achieve the goals of the instruction?
• What are the costs of the instruction? What is the "return on investment" of the instruction? (p. 355)

Both the client and evaluator should agree on the questions before moving on to subsequent steps.
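As a rough illustration of the "return on investment" question above, here is a minimal worked example using the standard training-ROI formula, ROI (%) = (net program benefits / program costs) × 100. All of the dollar figures are invented for illustration.

```python
# Hypothetical worked example of training ROI.
#   ROI (%) = (net program benefits / program costs) * 100
# All numbers below are invented for illustration.

program_costs = 40_000.0       # design, delivery, and learner time
monetary_benefits = 70_000.0   # e.g., estimated value of reduced error rates
net_benefits = monetary_benefits - program_costs

roi_percent = net_benefits / program_costs * 100
print(f"ROI = {roi_percent:.0f}%")  # -> ROI = 75%
```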


Contrasting Summative and Formative Evaluations

Summative Evaluation

• Provides teachers and students with information concerning achievement
• Has a high point value
• The end goal is to compare student achievement at the end of an instructional unit against a set benchmark


Formative Evaluation

• Is used to check student progress as an instructional unit is occurring
• Guides the next steps in instruction by identifying other opportunities to aid in success
• Must be designed in a way to respond to students' needs

Types of Evaluation: Relative Complexity and Activities

Descriptive
• Relative complexity: simplest form; least expensive; conducted by project staff
• Activities: analysis of services; how they were operated; how the program was administered; resources consumed; characteristics of those impacted by the project; describe any outcomes

Operational
• Relative complexity: slightly more involved; low expense; conducted by project staff
• Activities: all of the descriptive evaluation activities; goals and objectives; describe project components (start-up, recruitment, partnerships, etc.); explain short-term and intermediate outcomes; explain project completion or institutionalization

Process
• Relative complexity: slightly more involved; moderate expense; conducted by a professional evaluator (may be staff or a consultant)
• Activities: focused on service delivery and administrative processes; suggests causal relationships between what was done and the outcomes; generalize your experiences more broadly by providing insights into effectiveness; look at the efficacy of the program in terms of outcomes or costs; investigate operational features against results

Outcomes
• Relative complexity: more complex; moderate expense; conducted by a professional evaluator (may be staff or a consultant)
• Activities: use exacting data collection and statistical methods for data analysis; requires database and analysis software; focuses on qualitative and quantitative analysis of data

Impact Study
• Relative complexity: long-term, involved; most expensive; requires a third-party evaluator
• Activities: often contains experimental and control groups; proves statistical significance; requires large sample sizes; long-term analysis of outcomes
More complex • Moderate expense • Conducted by professional evaluator (may be staff or consultant • Use exacting data collection and statistical methods for data analysis • Requires database and analysis software • Focuses on qualitative and quantitative analysis of data Impact Study • Long-term, involved • Most expensive • Requires third- party evaluator • Often contains experimental and control groups • Proves statistical significance • Requires large sample sizes • Long-term analysis of outcomes