
Program Evaluation

Week 3

Tonight

• Name Tags
• Your Q’s
• Quant. vs. Qual
• Paradigms activity
• Evaluation developments
• Context-Adaptive Model
• HW

Name Tags

• Write your first name in big, bold letters
• Bring your name tag to every class

Your Q’s

• Find a partner and share your questions with each other.

• We will share these as a class

Quant. vs. Qual

• The field of research and evaluation often categorizes projects/research into two types

• These two categories, often labeled qualitative or quantitative, share some similarities but have stark differences.

Activity 1

• Find a partner and then make one t-chart
• Then, categorize the following descriptors under qualitative or quantitative

• Mainstream research; subjective; product; positivistic; truths; interpretive; researcher involved; causal; objective; alternative research; top-down; bottom-up; grand theory; process; naturalistic; Truth; exploratory; researcher distanced; comparative; social construction; significant difference; confirmatory; particularization; generalization

Evaluation Purposes

• Summative – achievement; goals met
• Formative – processes; instructional recommendations

• Often more than one purpose

Paradigms Activity – Which paradigm is used?

• Look at these articles: which paradigm(s) is the researcher working from?

• How do the data collected reflect this paradigm?

Evaluation developments

• What changes in program evaluation have occurred over the past 40 years?
• Why have these changes taken place?
• What does it mean for us?

What changes in program evaluation have occurred over the past 40 years?

• From measuring performance outcomes to studying social processes

• Four Generations of Evaluation, p. 41
• Tyler’s Model, p. 20
• Experimental design, pp. 23-24
• Why experimental design?

Why this change?

• Parlett & Hamilton (1972). P. 32

• Illuminative evaluation p. 32-33, 34

• Constructivism p. 42

Present

• P. 71: “The attention to issues of language here parallels our understanding set out in this chapter: from initial positions of evaluation as the measurement of predicated program outcomes, and language as a straightforward representation of meanings and positions, each has matured into a complex account of social life and human experience.”

Context-Adaptive Model

• One way to help prepare evaluations
• Consideration of how context guides design
• 4 steps:
– Audience and goals
– Context and themes
– Selecting an approach
– Selecting a design

1. Audience and goals

• Who is requesting the evaluation?
• Who will be affected by its results?
• Why is the evaluation being conducted?

2. Context and Themes

• p. 19

3. Selecting an approach

• What type of evidence is required to make a convincing evaluation argument?

• P. 22 – the basic concepts that link the evidence needed and the paradigm include…

4. Selecting a design

• Is there a need for a large-scale evaluation beyond 1 classroom?

• Experimental or quasi-experimental?
• Is the primary focus on individual or group assessment?

Quantitative-Positivist Designs

• P. 23: three features of these designs
• Experimental design – random assignment and a control group
• Quasi-experimental – non-random assignment and a control group
• Pre-experimental – no control group, but pre vs. post with one group

Interpretivist Designs

• Emergent themes and categories
• Responsive model, illumination model, connoisseurship model

Mixed Designs

• Mixed strategies – using qualitative data to answer quantitative questions, or vice versa.

• Mixed designs or mixed model – using both qualitative and quantitative data in the same evaluation.

Link between Context and Design

• Look at vignette 1
• Determine the design selected, and then decide what contextual factors influenced that design choice

p. 39

• Activities 1 and 2

HW

• Read:
– a. Kiely & Rea-Dickins, Ch. 5, Evaluating Teachers’ Competence
– b. Kiely & Rea-Dickins, Ch. 6, Evaluating Language through Science

• Journal