
Psychology 301: Chapters 10-11 & 13-15

Differences Between Two Means

Introduction to Analysis of Variance

Multiple Comparisons

Basic Experimental Designs for Two Groups

T Test for Independent Samples

T Test for Dependent Samples

Completely Randomized ANOVA

Randomized Block ANOVA

Basic Factorial Designs

Completely Randomized Factorial

Randomized Block Factorial

Split Plot Factorial

Independent Samples

Subjects inevitably differ from one another.

In between-subjects designs (a.k.a. independent, uncorrelated, or completely randomized designs), these differences among subjects are uncontrolled and are treated as error.

Subjects are randomly selected (N) and randomly assigned to groups (n1 or n2)

N is the total number of observations

N = n1 + n2

Dependent Samples

In within-subjects designs (a.k.a. dependent, correlated, randomized block designs), the same (or matched) subjects are tested in each condition.

N = the number of pairs of subjects/scores

Within-Subjects Designs

Differences between subjects can be measured and separated from error.

Removing variance due to differences between subjects from the error variance greatly increases the power of significance tests
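To make that contrast concrete, here is a minimal sketch (hypothetical data, not an example from the chapters) that analyzes the same paired scores both ways with scipy. Because the dependent test removes the large subject-to-subject differences from the error term, it typically yields a much larger t for the same treatment effect.

```python
# Illustrative sketch: the same paired data analyzed with an independent and a
# dependent t test. All numbers are made up for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
subject_effect = rng.normal(0, 10, size=20)           # large between-subject differences
cond_a = 50 + subject_effect + rng.normal(0, 2, 20)
cond_b = 53 + subject_effect + rng.normal(0, 2, 20)   # true treatment effect of about 3

# Between-subjects analysis: subject differences stay in the error term.
t_ind, p_ind = stats.ttest_ind(cond_a, cond_b)

# Within-subjects analysis: pairing removes the subject differences.
t_rel, p_rel = stats.ttest_rel(cond_a, cond_b)

print(f"independent t = {t_ind:.2f}, p = {p_ind:.4f}")
print(f"dependent   t = {t_rel:.2f}, p = {p_rel:.4f}")
```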

Assumptions Underlying the Use of t Tests

1. The errors are independent (Independence)

2. The errors are identically distributed (Homogeneity of Variance)

3. The errors are normally distributed (Normality)

The F Test and the Analysis of Variance

Which one to use

The t test can be employed only when there are just two means to be compared

The advantage of F is that it operates just as well for three groups, or four, or any number of groups as it does for just two groups
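As a quick check on the connection between the two tests, the sketch below (illustrative numbers only) runs scipy's two-sample t test and one-way ANOVA on the same two groups; with exactly two groups, F equals t squared and the two p values agree.

```python
# Illustrative sketch: with two groups, the one-way ANOVA F is the square of the
# independent-samples t (equal-variance version).
import numpy as np
from scipy import stats

group1 = np.array([4.0, 6.0, 5.0, 7.0, 6.0])
group2 = np.array([8.0, 9.0, 7.0, 10.0, 9.0])

t, p_t = stats.ttest_ind(group1, group2)   # pooled-variance two-sample t test
f, p_f = stats.f_oneway(group1, group2)    # one-way ANOVA on the same groups

print(f"t^2 = {t**2:.4f}, F = {f:.4f}")                # identical up to rounding
print(f"p from t = {p_t:.4f}, p from F = {p_f:.4f}")   # identical p values
```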

An Illustration

The major general purpose of the analysis of variance is to divide up the total variance of all the observations into a number of separate sources of variance that can be compared with one another for purposes of both effect-size estimations and significance testing.

An Illustration

Total sum of squares = between-conditions sum of squares + within-conditions sum of squares

Or

SStotal = SSbetween + SSwithin

REMEMBER: Because they are sums of squared deviations, they are never negative.

An Illustration

SStotal = Σ(Xij – M..)²

The sum of the squared deviations of all the scores from the grand mean

SSbetween = Σ nj(Mj – M..)²

The sum, over treatment levels, of each group's n times the squared deviation of that group's mean from the grand mean

SSwithin = Σ(Xij – Mj)²

The sum of the squared deviations of the scores in each treatment group from the mean of their group
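A minimal sketch of this partition, using made-up scores, appears below; it computes the three sums of squares directly from the definitions and confirms that the between and within pieces add up to the total.

```python
# Illustrative sketch: compute SS_total, SS_between, SS_within from scratch and
# check the partition SS_total = SS_between + SS_within. Data are hypothetical.
import numpy as np

groups = [np.array([2.0, 4.0, 3.0]),
          np.array([6.0, 8.0, 7.0]),
          np.array([5.0, 9.0, 7.0])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_total = ((all_scores - grand_mean) ** 2).sum()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

print(ss_total, ss_between + ss_within)   # the two totals agree
```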

An Illustration

Mean squares are obtained by dividing the sum of squares by the degrees of freedom

F is obtained by dividing the mean square between conditions by the mean square within conditions

The Table of Variances

Source Table – Shows the sources of variability in an analysis of variance

Source               SS   df    MS     F    eta     p
Between conditions   57    3   19.0   7.60  .86    .01
Within conditions    20    8    2.5
Total                77   11    7.0
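The arithmetic in this table can be reproduced directly from its SS and df entries, as in the short sketch below.

```python
# Reproducing the source table's arithmetic from its SS and df entries.
ss_between, df_between = 57, 3
ss_within, df_within = 20, 8

ms_between = ss_between / df_between                    # 19.0
ms_within = ss_within / df_within                       # 2.5
f = ms_between / ms_within                              # 7.60
eta = (ss_between / (ss_between + ss_within)) ** 0.5    # .86

print(ms_between, ms_within, f, round(eta, 2))
```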

The Variances

Mean squares are variances, i.e., the sum of the squared deviations of scores from their mean divided by their degrees of freedom

Variance between groups = estimate of treatment + error

Variance within groups = estimate of error

F = (estimate of treatment + error) / (estimate of error)

Distributions of F

F distributions all begin at zero and range upward to positive infinity

Critical values of F decrease as df within increases for any given df between
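The sketch below uses scipy's F distribution to illustrate the point for a fixed between-groups df of 3: the .05 critical value shrinks steadily as the within-groups df grows.

```python
# Critical values of F at alpha = .05 for df_between = 3 and increasing df_within.
from scipy import stats

df_between = 3
for df_within in (5, 10, 20, 60, 120):
    crit = stats.f.ppf(0.95, df_between, df_within)   # upper 5% cutoff of F(3, df_within)
    print(f"df_within = {df_within:3d}  F_crit = {crit:.2f}")
```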

Assumptions Underlying the Use of F Tests

1. The errors are independent (Independence)

2. The errors are identically distributed (Homogeneity of Variance)

3. The errors are normally distributed (Normality)

[Note that the assumptions for F are the same as for t]

Testing for Homogeneity of Variance

The assumption of homogeneity of variance underlies the use of the t and F tests

When groups are being compared, the t or F obtained will be more accurate if the variances of the populations from which our data were drawn are more nearly equal

If data are not homogeneous, we can often transform the data to make the variances more nearly equal before performing the statistical test or we can use a conservative test
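One common workflow, sketched below with hypothetical scores (the chapters do not prescribe a specific procedure), is to screen with Levene's test and, when the variances look unequal, either transform the scores or fall back to a conservative alternative such as Welch's t test, which does not pool the variances.

```python
# Illustrative sketch, assuming Levene's test as the screening step:
# if variances differ, either log-transform or use Welch's t test.
import numpy as np
from scipy import stats

group1 = np.array([3.0, 4.0, 5.0, 4.0, 6.0])
group2 = np.array([2.0, 9.0, 15.0, 6.0, 20.0])   # much larger spread

stat, p = stats.levene(group1, group2)
if p < 0.05:
    # Option 1: a variance-stabilizing transform (log works for positive, skewed data)
    t_log, p_log = stats.ttest_ind(np.log(group1), np.log(group2))
    # Option 2: Welch's t test, which does not assume equal variances
    t_welch, p_welch = stats.ttest_ind(group1, group2, equal_var=False)
    print(p_log, p_welch)
else:
    print(stats.ttest_ind(group1, group2))
```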

Description of a One-Way or Completely Randomized Design

One treatment with p≥2 treatment levels. The levels of the treatment can differ either quantitatively or qualitatively. When the experiment contains only two treatment levels, the design is indistinguishable from the t test for independent samples.

Random assignment of subjects to treatment levels, with each subject designated to receive only one level. Does not require equal n’s, although F is more robust when the sample n’s are equal.
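A minimal sketch of the random-assignment step (with a hypothetical subject count and number of levels) appears below: the subject indices are shuffled and dealt into p treatment groups.

```python
# Illustrative sketch of complete random assignment: N subjects shuffled and
# split into p treatment groups (equal n here, though the design does not require it).
import numpy as np

rng = np.random.default_rng(42)
N, p = 12, 3                              # 12 subjects, 3 treatment levels (hypothetical)
subjects = rng.permutation(N)             # subject indices in random order
groups = np.array_split(subjects, p)      # consecutive blocks become the groups

for level, members in enumerate(groups, start=1):
    print(f"treatment level {level}: subjects {members.tolist()}")
```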

After the F (Multiple Comparisons)

If the overall difference among means is significant (F is significant), then the greatest difference between the means is significant.

If we were to do all possible comparisons among the means when the F is not significant, we might still find some comparisons are significant.

Generally, the more tests of significance computed on data for which the null hypothesis is true, the more significant results will be obtained, i.e., the more Type I errors will be made.
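The inflation is easy to quantify when the tests are independent: the chance of at least one Type I error across k tests at alpha = .05 is 1 - (1 - .05)^k, as the short calculation below shows.

```python
# Familywise Type I error rate for k independent tests when every null is true.
alpha = 0.05
for k in (1, 3, 6, 10, 20):
    familywise = 1 - (1 - alpha) ** k     # P(at least one false rejection)
    print(f"{k:2d} tests: P(at least one Type I error) = {familywise:.2f}")
```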

Measures of Strength of Association and Effect Size

Omega squared and eta squared

Guidelines for interpreting (Cohen, 1988)

.01 is a small association

.06 is a medium association

.14 is a large association
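Both measures can be computed from the source table shown earlier: eta squared is SS_between / SS_total, and one common formula for omega squared (assumed here, since the chapters' exact formula is not reproduced) corrects for the error variance expected by chance.

```python
# Effect sizes computed from the earlier source table
# (SS_between = 57, df_between = 3, SS_within = 20, MS_within = 2.5).
ss_between, df_between = 57, 3
ss_within = 20
ss_total = ss_between + ss_within
ms_within = 2.5

eta_sq = ss_between / ss_total
omega_sq = (ss_between - df_between * ms_within) / (ss_total + ms_within)

print(round(eta_sq, 3), round(omega_sq, 3))   # both well past Cohen's .14 cutoff
```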

Outline of Designs - Basic

Subjects are completely randomly assigned to conditions

One treatment (IV) with only two levels

Independent t-test

One treatment (IV) with two or more levels

Completely Randomized ANOVA

Two or more treatments (IV) each with two or more levels

Completely Randomized Factorial ANOVA

Outline of Designs - Blocked

Subjects are matched or repeated measures are used

One treatment with only two levels

Dependent t-test

One treatment with two or more levels

Randomized Block ANOVA

Two or more treatments each with two or more levels

Randomized Block Factorial ANOVA

Outline of Designs - Mixed

Two or more treatments each with two or more levels

At least one treatment (IV) uses complete randomization and at least one treatment (IV) uses repeated measures

Split-Plot Factorial ANOVA