Emily H. Wughalter, Ed.D. Measurement & Evaluation Spring 2010


Reliability is the consistency of a measurement from one trial, or testing occasion, to the next.

Validity is the degree to which a measure, or a score, measures what it purports (is supposed) to measure.

For a measure to be valid it must be reliable; however, a measure can be reliable without it being valid.

In classical test theory, an observed score is the sum of a true score and an error score:

Xobserved = Xtrue + Xerror
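The equation above can be sketched in a short simulation. This is a minimal illustration, not part of the original notes: the score distributions (mean 50, true-score SD 10, error SD 4) are hypothetical, chosen only to show that observed-score variance is true variance plus error variance, and that reliability is their ratio.

```python
import random

# Classical test theory sketch: each observed score is a stable true
# score plus random, independent measurement error.
random.seed(1)

true_scores = [random.gauss(50, 10) for _ in range(10_000)]  # stable ability
errors      = [random.gauss(0, 4) for _ in range(10_000)]    # measurement noise
observed    = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability = true-score variance / observed-score variance.
# With these hypothetical SDs it should land near 100 / 116 ~ 0.86.
reliability = variance(true_scores) / variance(observed)
print(round(reliability, 2))
```

As error variance grows relative to true-score variance, the ratio, and hence reliability, falls.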

Factors that affect reliability include:

- Testing environment should be favorable
- Testing environment should be well-organized
- Administrator should be competent to run a favorable testing environment
- Range of talent
- Motivation of the performers
- Good day vs. bad day
- Learning, forgetting, and fatigue
- Length of the test
- Test difficulty
- Ability of the test to discriminate
- Number of performers
- Nature of the measurement device
- Selection of scoring unit
- Precision
- Errors in measurement
- Number of trials
- Recording errors
- Classroom management
- Warm-up opportunity

Two ways to estimate reliability:

- Test-retest (stability reliability)
- Internal consistency
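Both estimates above can be computed with basic statistics. The sketch below uses hypothetical trial data (not from the notes): test-retest reliability is the Pearson correlation between day-1 and day-2 scores, and internal consistency is estimated here with Cronbach's alpha across trials, one standard choice among several.

```python
def pearson_r(x, y):
    # Pearson correlation: covariance divided by the product of SDs.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def cronbach_alpha(trials):
    """trials: one list of performer scores per trial (column-wise)."""
    k = len(trials)
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(scores) for scores in zip(*trials)]
    return k / (k - 1) * (1 - sum(var(t) for t in trials) / var(totals))

# Hypothetical data: six performers tested on two days (stability)...
day1 = [12, 15, 11, 18, 14, 16]
day2 = [13, 14, 12, 17, 15, 16]
print(round(pearson_r(day1, day2), 2))

# ...and four performers measured on three trials (internal consistency).
t1 = [12, 10, 18, 14]
t2 = [13, 11, 17, 15]
t3 = [12, 10, 19, 14]
print(round(cronbach_alpha([t1, t2, t3]), 2))
```

High values (near 1.0) on either coefficient indicate consistent measurement; which estimate is appropriate depends on whether the concern is stability over time or agreement among trials.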

Difference or change scores should be avoided because of ceiling and floor effects.

Difference scores are also highly unreliable.

Objectivity means interrater reliability, or consistency from one person (rater) to another.
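One simple estimate of objectivity is the correlation between two raters' scores for the same performers. The judges and scores below are hypothetical, used only to illustrate the calculation.

```python
# Two hypothetical judges score the same six performers.
judge_a = [8.5, 7.0, 9.0, 6.5, 8.0, 7.5]
judge_b = [8.0, 7.5, 9.0, 6.0, 8.5, 7.0]

n = len(judge_a)
ma, mb = sum(judge_a) / n, sum(judge_b) / n
cov = sum((a - ma) * (b - mb) for a, b in zip(judge_a, judge_b))
sa = sum((a - ma) ** 2 for a in judge_a) ** 0.5
sb = sum((b - mb) ** 2 for b in judge_b) ** 0.5

# Interrater reliability: correlation between the two judges.
r = cov / (sa * sb)
print(round(r, 2))  # close to 1.0 => high objectivity
```

A clear scoring rubric and rater training are the usual ways to push this coefficient toward 1.0.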

A criterion score is the score that will be used to represent an individual's performance.

- The criterion score may be the mean of a series of trials.
- The criterion score may be the best score from a series of trials.

When selecting a criterion measure, whether it is the best score, the mean of all trials, or the mean of the best 2 or 3 trials, a researcher must determine which of these candidates represents the most reliable and most valid score.
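The candidate criterion scores named above are easy to compute side by side. The trial data below are hypothetical, used only to show how the three candidates can differ for the same performer.

```python
# Hypothetical: five trials per performer; compute each candidate
# criterion score from the text (mean of all, best, mean of best 3).
trials = {
    "A": [14, 16, 15, 17, 13],
    "B": [10, 12, 11, 11, 10],
    "C": [18, 17, 19, 16, 18],
}

for name, scores in trials.items():
    mean_all   = sum(scores) / len(scores)
    best       = max(scores)
    best_three = sum(sorted(scores, reverse=True)[:3]) / 3
    print(name, round(mean_all, 1), best, round(best_three, 1))
```

The researcher would then check which of these candidates correlates best with a criterion (validity) and is most stable across testing occasions (reliability) before adopting it.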

Sources of measurement error include:

- Inconsistent scorers
- Inconsistent performance
- Inconsistent measures
- Failure to develop a good set of instructions
- Failure to follow testing procedures
