Crash Course in Statistics Data Analysis (with SPSS) July 2013 Dr. Jürg Schwarz [email protected] Neuroscience Center Zurich



Slide 2

Part 1: Program 10 July 2013: Morning Lessons (09.00 – 12.00)

◦ Some notes about…

- Type of Scales

- Distributions & Transformation of data

- Data trimming

◦ Exercises

- Self study about Boxplots

- Data transformation

- Check of Dataset


Slide 3

Part 2: Program 11 July 2013: Morning Lessons (09.00 – 12.00)

◦ Multivariate Analysis (Regression, ANOVA)

- Introduction to Regression Analysis

General Purpose

Key Steps

Simple Example

Testing of Requirements

Example of Multiple Regression

- Introduction to Analysis of Variance (ANOVA)

Simple Example: One-Way ANOVA

Example of Two-Way ANOVA

Types of ANOVA

Requirements


Slide 4

Part 2: Program 11 July 2013: Afternoon Lessons (13.00 – 16.00)

◦ Introduction to other multivariate methods (categorical/categorical – metric/metric)

- Methods

- Choice of method

- Example of discriminant analysis

◦ Exercises

- Regression Analysis

- Analysis of Variance (ANOVA)

◦ Remainder of the course

- Evaluation (Feedback form will be handed out and collected afterwards)

- Certificate of participation will be issued (Christof Luchsinger will attend at 15.30)


Slide 5

Table of Contents

Some notes about… ______________________________________________________________________________________ 9

Types of Scales ...................................................................................................................................................................................................... 9

Nominal scale ................................................................................................................................................................................................................... 10

Ordinal scale ..................................................................................................................................................................................................................... 11

Metric scales (interval and ratio scales) ............................................................................................................................................................................. 12

Hierarchy of scales ........................................................................................................................................................................................................... 13

Properties of scales .......................................................................................................................................................................................................... 14

Summary: Type of scales .................................................................................................................................................................................................. 15

Exercise in class: Scales ...................................................................................................................................................................................... 16

Distributions ......................................................................................................................................................................................................... 17

Measure of the shape of a distribution ............................................................................................................................................................................... 18

Transformation of data ......................................................................................................................................................................................... 20

Why transform data? ......................................................................................................................................................................................................... 20

Type of transformation ...................................................................................................................................................................................................... 20

Linear transformation ........................................................................................................................................................................................................ 21

Logarithmic transformation ................................................................................................................................................................................................ 22

Summary: Data transformation .......................................................................................................................................................................................... 25

Data trimming ....................................................................................................................................................................................................... 26

Finding outliers and extremes ........................................................................................................................................................................................... 26

Boxplot ............................................................................................................................................................................................................................. 27

Boxplot and error bars ....................................................................................................................................................................................................... 28

Q-Q plot ............................................................................................................................................................................................................................ 29

Example ........................................................................................................................................................................................................................... 33

Exercises 01: Log Transformation & Data Trimming ............................................................................................................................................. 34


Slide 6

Linear Regression _______________________________________________________________________________________ 35

Example ............................................................................................................................................................................................................... 35

General purpose of regression ............................................................................................................................................................................. 38

Key Steps in Regression Analysis ........................................................................................................................................................................ 39

Regression model ................................................................................................................................................................................................ 40

Linear model ..................................................................................................................................................................................................................... 40

Stochastic model............................................................................................................................................................................................................... 40

Gauss-Markov Theorem, Independence and Normal Distribution ......................................................................................................................... 42

Regression analysis with SPSS: Some detailed examples ................................................................................................................................... 43

Simple example (EXAMPLE02) ......................................................................................................................................................................................... 43

Step 1: Formulation of the model ....................................................................................................................................................................................... 43

Step 2: Estimation of the model ......................................................................................................................................................................................... 44

Step 3: Verification of the model ........................................................................................................................................................................................ 45

Step 4: Interpretation of the model .................................................................................................................................................................... 47

Back to Step 3: Verification of the model ........................................................................................................................................................................... 48

Step 5: Testing of assumptions ......................................................................................................................................................................................... 50

Violation of the homoscedasticity assumption .................................................................................................................................................................... 53

Multiple regression ............................................................................................................................................................................................... 54

Many similarities with simple Regression Analysis from above .......................................................................................................................................... 54

What is new? .................................................................................................................................................................................................................... 54

Multicollinearity ..................................................................................................................................................................................................... 55

Outline .............................................................................................................................................................................................................................. 55

How to identify multicollinearity ......................................................................................................................................................................................... 56

Some hints to deal with multicollinearity ............................................................................................................................................................................ 57

Multiple regression analysis with SPSS: Some detailed examples ....................................................................................................................... 58

Example of multiple regression (EXAMPLE04) .................................................................................................................................................................. 58

Formulation of the model................................................................................................................................................................................................... 58

SPSS Output regression analysis (EXAMPLE04) .............................................................................................................................................................. 59

Dummy coding of categorical variables ............................................................................................................................................................................. 61

Slide 7

Gender as dummy variable ............................................................................................................................................................................... 62

SPSS Output regression analysis (EXAMPLE04) .............................................................................................................................................................. 63

Example of multicollinearity ............................................................................................................................................................................................... 64

SPSS Output regression analysis (Example of multicollinearity) I ...................................................................................................................................... 65

Exercises 02: Regression ..................................................................................................................................................................................... 67

Analysis of Variance (ANOVA)_____________________________________________________________________________ 68

Example ............................................................................................................................................................................................................... 69

Key steps in analysis of variance .......................................................................................................................................................................... 73

Designs of ANOVA ............................................................................................................................................................................................... 74

Sum of Squares ................................................................................................................................................................................................... 75

Step-by-step ..................................................................................................................................................................................................................... 75

Basic idea of ANOVA ........................................................................................................................................................................................................ 76

Significance testing ........................................................................................................................................................................................................... 77

ANOVA with SPSS: A detailed example ............................................................................................................................................................... 78

Example of one-way ANOVA: Survey of nurse salaries (EXAMPLE05) .............................................................................................................................. 78

SPSS Output ANOVA (EXAMPLE05) – Tests of Between-Subjects Effects I ..................................................................................................................... 79

Including Partial Eta Squared ............................................................................................................................................................................................ 81

Two-Way ANOVA................................................................................................................................................................................................. 82

Interaction ......................................................................................................................................................................................................................... 83

Example of two-way ANOVA: Survey of nurse salary (EXAMPLE06) ................................................................................................................................. 85

Interaction ......................................................................................................................................................................................................................... 86

More on interaction ........................................................................................................................................................................................................... 88

Requirements of ANOVA ...................................................................................................................................................................................... 89

Exercises 03: ANOVA .......................................................................................................................................................................................... 90


Slide 8

Other multivariate Methods _______________________________________________________________________________ 91

Type of Multivariate Statistical Analysis ................................................................................................................................................................ 91

Methods for identifying structures / Methods for discovering structures .................................................................................................... 91

Choice of Method .............................................................................................................................................................................................................. 92

Tree of methods (also www.ats.ucla.edu/stat/mult_pkg/whatstat/default.htm, July 2012) ................................................................................................... 93

Example of multivariate Methods (categorical / metric) ......................................................................................................................................... 94

Linear discriminant analysis .............................................................................................................................................................................................. 94

Example of linear discriminant analysis ............................................................................................................................................................................. 95

Very short introduction to linear discriminant analysis ........................................................................................................................................................ 96

SPSS Output Discriminant analysis (EXAMPLE07) I ......................................................................................................................................................... 99


Slide 9

Some notes about…

Types of Scales

Items measure the value of attributes using a scale.

There are four scale types that are used to capture the attributes of measurement objects

(e.g., people): nominal, ordinal, interval, and ratio scales.

Example from a Health Survey for England 2003:

Measurement object | Attribute of Object   | Value of Attribute | Type of Scale
Person             | Sex                   | Male / Female      | Nominal   (categorical; SPSS: Nominal)
Person             | Attitude to health    | 1 to 5             | Ordinal   (categorical; SPSS: Ordinal)
Person             | Blood pressure        | Real number        | Interval  (metric; SPSS: Scale)
Person             | Net-Income            | Real number        | Ratio     (metric; SPSS: Scale)

Stevens S.S. (1946): On the Theory of Scales of Measurement; Science, Volume 103, Issue 2684, pp. 677-680


Slide 10

Nominal scale

Properties

◦ Consists of "Names" (categories). Names have no specific order.

◦ Must be measured with a unique (statistical) procedure.

Examples from the Health Survey for England 2003

◦ Sex is either male or female.

◦ Places where you are often exposed to other people’s smoke: At home, at work, ...

Assignment of numbers to each category (code can be arbitrary but must be unique)


Slide 11

Ordinal scale

Properties

◦ Consists of a series of values. Each category is associated with a number which represents the category’s order.

◦ A special kind of ordinal scale is the Likert scale (rating scale)

Example from the Health Survey for England 2003

◦ I've been feeling optimistic about the future: None of the time, Rarely, Some of the time …


Slide 12

Metric scales (interval and ratio scales)

Properties

Measures the exact value.

Exact measurements (metric scales) are preferred.

In SPSS metric scales are also known as "Scale".

Example from the Health Survey for England 2003

◦ Age

Assigns the actual measured value.


Slide 13

Hierarchy of scales

The nominal scale is the "lowest" while the ratio scale is the "highest".

A scale from a higher level can be used as the scale for a lower level, but not vice versa.

(Example: Based on age in years (ratio scale), a binary variable can be generated to capture

whether a respondent is a minor (nominal scale), but not vice versa.)

Possible statements | Example

Categorical

Nominal: Equality, inequality (=, ≠)
Example: Sex (male = 0, female = 1): male ≠ female

Ordinal: In addition, relation larger (>), smaller (<)
Example: Self-perception of health (1 = "very bad", … 5 = "very good"): 1 < 2 < 3 < 4 < 5.
But "very good" is neither five times better than "very bad" nor does "very good" have a distance of 4 to "very bad".

Metric (SPSS: "Scale")

Interval: In addition, comparison of differences
Example: Temperature in °C: difference between 20° and 15° = difference between 10° and 15°. But a temperature of 10° is not twice as warm as 5°. Compare with the Fahrenheit scale: 10 °C = 50 °F, 5 °C = 41 °F.

Ratio: In addition, comparison of ratios
Example: Income: $ 8,000 is twice as large as $ 4,000. There is a true zero point in this scale: $ 0. Division by 1000 is admissible.


Slide 14

Properties of scales

Level    | Determination of ...                                 | Statistics
Nominal  | equality or inequality: =, ≠                         | Mode            (categorical)
Ordinal  | greater, equal or less: >, <, =                      | Median          (categorical)
Interval | equality of differences: (x1 - x2) ≠ (x3 - x4)       | Arithmetic mean (metric)
Ratio    | equality of ratios: (x1 / x2) ≠ (x3 / x4)            | Geometric mean  (metric)

Level    | Possible transformation
Nominal  | one-to-one substitution: x1 ~ x2 <=> f(x1) ~ f(x2)   (categorical)
Ordinal  | monotonic increasing: x1 > x2 <=> f(x1) > f(x2)      (categorical)
Interval | positive linear: x' = a + b·x with b > 0             (metric)
Ratio    | positive proportional: x' = b·x with b > 0           (metric)
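The transformation rules above can be checked numerically: the median survives any monotonic increasing transform (which is why it is valid from the ordinal level up), while the mean only survives linear transformations. The following is an illustrative Python/NumPy sketch with made-up data, not part of the SPSS materials:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # hypothetical, right-skewed toy data

# The median commutes with any monotonic increasing transform (here: log) ...
print(np.log(np.median(x)) == np.median(np.log(x)))                # True

# ... but the mean does not commute with a non-linear transform.
print(np.isclose(np.log(np.mean(x)), np.mean(np.log(x))))          # False

# A positive linear transform x' = a + b*x (interval level) maps the mean along:
print(np.isclose(np.mean(5.0 + 2.0 * x), 5.0 + 2.0 * np.mean(x)))  # True
```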


Slide 15

Summary: Type of scales

Statistical analysis assumes that the variables have specific levels of measurement.

Variables that are measured on a nominal or ordinal scale are also called categorical variables.

Exact measurements on a metric scale are statistically preferable.

Why does it matter whether a variable is categorical or metric?

For example, it would not make sense to compute an average for gender.

In short, an average requires a variable to be metric.

Sometimes variables are "in between" ordinal and metric.

Example:

A Likert scale with "strongly agree", "agree", "neutral", "disagree" and "strongly disagree".

If it is unclear whether or not the intervals between each of these five values are the same, then

it is an ordinal and not a metric variable.

In order to calculate statistics, it is often assumed that the intervals are equally spaced.

Many circumstances require metric data to be grouped into categories.

Such ordinal categories are sometimes easier to understand than exact metric measurements.

In this process, however, valuable exact information is lost.


Slide 16

Exercise in class: Scales

1. Read "Summary: Type of Scales" above.

2. Which type of scale?

- Where do you live? (north / south / east / west)

- Size of T-shirt (XS, S, M, L, XL, XXL)

- 2.01 Compared with the health of others my age, my health is: very bad 1 2 3 4 5 very good (please mark one box per question)

- How much did you spend on food this week? _____ $

- Size of shoe in Europe


Slide 17

Distributions

Get a visual impression. Source: http://en.wikipedia.org (Date of access: July, 2013)

Normal

Widely used in statistics (statistical inference).

Poisson

Law of rare events (origin 1898: number of soldiers killed by horse-kicks each year).

Exponential

Queuing model (e.g. average time spent in a queue).

Pareto

Allocation of wealth among individuals of a society ("80-20 rule").


Slide 18

Measure of the shape of a distribution

Skewness (German: Schiefe)

A distribution is symmetric if it looks the same to the

left and right of the center point.

Skewness is a measure of the lack of symmetry.

Range of skewness

Negative values for the skewness indicate a distribution that is skewed left.

Positive values for the skewness indicate a distribution that is skewed right.

Kurtosis (German: Wölbung)

Kurtosis is a measure of how the distribution is shaped relative to a normal distribution.

A distribution with high kurtosis tends to have a distinct peak near the mean.

A distribution with low kurtosis tends to have a flat top near the mean.

Range of kurtosis

The standard normal distribution has a kurtosis of zero (SPSS reports excess kurtosis).

Positive values for the kurtosis indicate a "peaked" distribution.

Negative values for the kurtosis indicate a "flat" distribution.

Analyze > Descriptive Statistics > Frequencies...
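SPSS computes these shape statistics through the menu above. As an illustrative cross-check outside SPSS (a Python/SciPy sketch with simulated data, not from the course), skewness is near zero for symmetric data and clearly positive for a right-skewed tail; scipy's kurtosis, like SPSS, is excess kurtosis with the normal distribution at zero:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
symmetric = rng.normal(size=10_000)
right_skewed = rng.exponential(size=10_000)

# Skewness: near 0 for symmetric data, clearly positive for a right-skewed tail.
print(abs(stats.skew(symmetric)) < 0.1)      # True
print(stats.skew(right_skewed) > 1.0)        # True (population value is 2)

# Excess kurtosis: the normal distribution scores about 0.
print(abs(stats.kurtosis(symmetric)) < 0.2)  # True
print(stats.kurtosis(right_skewed) > 3.0)    # True (population value is 6)
```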


Slide 19

Example

Dataset "Data_07.sav" (Chernobyl fallout of radioactivity, measured in becquerel)

Distribution of original data is skewed right.

BQ has skewness 2.588 and kurtosis 7.552

Distinct peak near zero.

Logarithmic transformation

Compute lnbq = ln(bq).

freq bq lnbq.

Log transformed data is slightly skewed right.

LNBQ has skewness .224 and kurtosis -.778

More likely to show normal distribution.

Statistics

                          BQ      LNBQ
N        Valid            23      23
         Missing          0       0
Skewness                  2.588   .224
Std. Error of Skewness    .481    .481
Kurtosis                  7.552   -.778
Std. Error of Kurtosis    .935    .935
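The SPSS steps above (COMPUTE lnbq = LN(bq)., then FREQUENCIES) can be mirrored outside SPSS. The becquerel measurements themselves are not reproduced here, so this sketch simulates lognormal data instead (and uses a larger sample than n = 23 so the estimates are stable); all names are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
bq = rng.lognormal(mean=3.0, sigma=1.0, size=2_000)  # simulated stand-in data

lnbq = np.log(bq)  # the SPSS step: COMPUTE lnbq = LN(bq).

print(stats.skew(bq) > 1.0)          # True: original scale strongly skewed right
print(abs(stats.skew(lnbq)) < 0.2)   # True: roughly symmetric after the log
```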


Slide 20

Transformation of data

Why transform data?

1. Many statistical models require that the variables (in fact: the errors) are approximately normally distributed.

2. Linear least squares regression assumes that the relationship between two variables is linear. Often we can "straighten" a non-linear relationship by transforming the variables.

3. In some cases it can help you better examine a distribution.

When transformations fail to remedy these problems, another option is to use nonparametric methods, which make fewer assumptions about the data.

Type of transformation

◦ Linear Transformation

Does not change shape of distribution.

◦ Non-linear Transformation

Changes shape of distribution.


Slide 21

Linear transformation

A very useful linear transformation is standardization.

(z-transformation, also called "converting to z-scores" or "taking z-scores")

Transformation rule

z_i = (x_i − μ̂) / σ̂

μ̂ : mean of sample
σ̂ : standard deviation of sample

Original distribution will be transformed to one in which

the mean becomes 0 and

the standard deviation becomes 1

A z-score quantifies the original score in terms of

the number of standard deviations that the score is

from the mean of the distribution.

=> Use z-scores to filter outliers

Analyze > Descriptive Statistics > Descriptives...
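A minimal sketch of standardization and the outlier filter (Python/NumPy; the income values are made up for illustration):

```python
import numpy as np

income = np.array([52.0, 55.0, 49.0, 61.0, 58.0, 143.0])  # hypothetical values

# z-transformation with the sample mean and sample standard deviation
z = (income - income.mean()) / income.std(ddof=1)

print(abs(z.mean()) < 1e-12)           # True: mean of the z-scores is 0
print(np.isclose(z.std(ddof=1), 1.0))  # True: their standard deviation is 1

# Filter outliers, e.g. |z| > 2:
print(income[np.abs(z) > 2])           # only the extreme value 143 is flagged
```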


Slide 22

Logarithmic transformation

Works for data that are skewed right.

Works for data where residuals get bigger for bigger values of the dependent variable.

Such trends in the residuals occur often, if the error in the value of an

outcome variable is a percent of the value rather than an absolute value.

For the same percent error, a bigger value of the variable means a bigger absolute error,

so residuals are bigger too.

Taking logs "pulls in" the residuals for the bigger values.

log(Y*error) = log(Y) + log(error)

Transformation rule

f(x) = log(x);  x ≥ 1

f(x) = log(x + 1);  x ≥ 0
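The "pulling in" of residuals can be demonstrated numerically: with a multiplicative (percent) error, residual spread grows with the value, but becomes roughly constant after taking logs. A sketch with simulated data (Python/NumPy, not from the course):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 100.0, size=5_000)                 # "true" values
error = rng.lognormal(mean=0.0, sigma=0.2, size=5_000)  # multiplicative percent error
y = x * error                                           # observed values

# On the original scale the residual spread grows with the size of x ...
resid = y - x
print(np.std(resid[x > 80]) > 3 * np.std(resid[x < 20]))  # True

# ... after taking logs the error is additive with roughly constant spread.
log_resid = np.log(y) - np.log(x)  # = log(error)
print(np.isclose(np.std(log_resid[x > 80]),
                 np.std(log_resid[x < 20]), rtol=0.15))   # True
```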

[Scatterplot omitted: body size (in cm, 150–200) against weight (in kg, 40–100)]

Example: Body size against weight


Slide 23

Logarithmic transformation I

Symmetry

A logarithmic transformation reduces

positive skewness because it compresses

the upper tail of the distribution while

stretching out the lower trail. This is be-

cause the distances between 0.1 and 1, 1

and 10, 10 and 100, and 100 and 1000

are the same in the logarithmic scale.

This is illustrated by the histogram of

data simulated with salary (hourly wag-

es) in a sample of nurses*. In the origi-

nal scale, the data are long-tailed to the

right, but after a logarithmic transfor-

mation is applied, the distribution is

symmetric. The lines between the two

histograms connect original values with

their logarithms to demonstrate the

compression of the upper tail and

stretching of the lower tail.

*More to come in chapter "ANOVA".

[Histograms of the original and the log-transformed data omitted]


Slide 24

Logarithmic transformation II

[Histograms omitted: the original data are skewed right; after the transformation y = log10(x), the data are nearly normally distributed]


Slide 25

Summary: Data transformation

Linear transformation and logarithmic transformation as discussed above.

Other transformations

Root functions

f(x) = x^(1/2), x^(1/3);  x ≥ 0

usable for right-skewed distributions

Hyperbola function

f(x) = x^(−1);  x ≥ 1

usable for right-skewed distributions

Box-Cox transformation

f(x) = x^λ;  λ > 1

usable for left-skewed distributions

Probit & logit functions (cf. logistic regression)

f(p) = ln(p / (1 − p));  p ∈ (0, 1)

usable for proportions and percentages
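The Box-Cox family can also be fitted automatically: scipy's stats.boxcox estimates λ by maximum likelihood when no λ is given, and λ near 0 behaves like a log transformation. Note this illustrative sketch runs the family in the opposite direction to the slide's λ > 1 case, applying it to simulated right-skewed data (not from the course):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.lognormal(mean=2.0, sigma=0.8, size=2_000)  # right-skewed, strictly positive

# lambda is estimated by maximum likelihood when none is supplied
transformed, lam = stats.boxcox(x)

print(stats.skew(x) > 1.0)                 # True: clearly skewed right
print(abs(stats.skew(transformed)) < 0.3)  # True: roughly symmetric afterwards
print(abs(lam) < 0.3)                      # True: lognormal data -> lambda near 0
```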

Interpretation and usage

Interpretation is not always easy.

Transformation can influence results significantly.

Look at your data and decide if it makes sense in the context of your study.


Slide 26

Data trimming

Data trimming deals with

◦ Finding outliers and extremes in a data set.

◦ Dealing with outliers: Correction, deletion, discussion, robust estimation

◦ Dealing with missing values: Correction, treatment (SPSS), (also imputation)

◦ Transforming data if necessary (see chapter above).

Finding outliers and extremes

Get an overview over the dataset!

◦ What does the distribution look like?

◦ Are there any values that are not expected?

Methods?

◦ Use basic statistics: <Analyze> with <Frequencies>, <Explore> and <Descriptives…>

Outliers => e.g. z-scores beyond ±2 std. dev.; extremes => beyond ±3 std. dev.

◦ Use graphical techniques: Histogram, Boxplot, Q-Q plot, …

Outliers => e.g. as indicated in boxplot
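The z-score rule of thumb above can be sketched as follows (Python/NumPy with hypothetical readings; note how a single extreme value inflates the standard deviation and thereby dampens its own z-score):

```python
import numpy as np

data = np.array([4.9, 5.1, 5.0, 5.2, 4.8, 5.1, 7.9, 11.5])  # hypothetical readings

z = (data - data.mean()) / data.std(ddof=1)

outliers = data[(np.abs(z) > 2) & (np.abs(z) <= 3)]  # beyond 2 standard deviations
extremes = data[np.abs(z) > 3]                       # beyond 3 standard deviations

print(outliers)  # the 11.5 reading is flagged (its z-score is about 2.2)
print(extremes)  # empty: the outlier itself inflates the SD, masking extremeness
```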


Slide 27

Boxplot

A Boxplot displays the center (median), spread and outliers of a distribution.

See exercise for more details about whiskers, outliers etc.

[Boxplot of income: the "box" identifies the middle 50% of the dataset, the line inside is the median, whiskers extend from the box, and outliers are labeled with their case numbers]

Boxplots are an excellent tool for detecting and illustrating location and variation changes between different groups of data.

[Boxplots of income by education level (educ = 2–7), with outlier case numbers displayed]

Slide 28

Boxplot and error bars

Boxplot – keyword "median": overview of the data and illustration of the data distribution (range, skewness, outliers).

Error bars – keyword "mean": overview of the mean and confidence interval or standard error.

[Left: boxplots of income by education level (educ = 2–7) with outlier case numbers; right: error bars showing the 95% CI of income by education level]

Slide 29

Q-Q plot

The quantile-quantile (q-q) plot is a graphical technique for deciding if two samples come from

populations with the same distribution.

Quantile: the fraction (or percent) of data points below a given value.

For example, the 0.5 (or 50%) quantile is the position at which 50% of the data fall below and 50% fall above that value. In fact, the 50% quantile is the median.

[Figure: a sample distribution (simulated data) and a normal distribution, each with its 50% quantile marked]

Slide 30

In the q-q plot, the quantiles of the first sample are plotted against the quantiles of the second sample.

If the two sets come from populations with the same distribution, the points should fall approximately along a 45-degree reference line.

The greater the displacement from this reference line, the greater the evidence that the two data sets come from populations with different distributions.

Some advantages of the q-q plot are:

The sample sizes do not need to be equal.

Many distributional aspects can be simultaneously tested.
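The pairing of sample quantiles with theoretical quantiles can be sketched with the standard library alone; the plotting position (i + 0.5)/n used below is one common convention (SPSS offers similar variants), and the sample values are made up:

```python
from statistics import NormalDist

def qq_points(sample):
    """Pair each sorted sample value with a standard-normal quantile.

    Uses the plotting position (i + 0.5) / n."""
    n = len(sample)
    std = NormalDist()                       # standard normal N(0, 1)
    return [(std.inv_cdf((i + 0.5) / n), x)
            for i, x in enumerate(sorted(sample))]

# For roughly normal data the points lie near a straight line.
for theoretical, observed in qq_points([4.8, 5.1, 4.9, 5.3, 5.0, 5.2, 4.7, 5.4]):
    print(f"{theoretical:+.2f}  {observed:.1f}")
```

Plotting these pairs and the 45-degree reference line gives exactly the normal q-q plot discussed above.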

Difference between Q-Q plot and P-P plot

A q-q plot is better when assessing the goodness of fit in the tails of the distributions.

The normal q-q plot is more sensitive to deviations from normality in the tails of the distribution, whereas the normal p-p plot is more sensitive to deviations near the mean of the distribution.

Q-Q plot: Plots the quantiles of a variable's distribution against the quantiles of any of a number of test distributions.

P-P plot: Plots a variable's cumulative proportions against the cumulative proportions of any of a number of test distributions.


Slide 31

Quantiles of the first sample are set against the quantiles of the second sample.

[Figure: q-q plots of the sample distribution (simulated data) and of a normal distribution, each plotted against the standard normal distribution]

Slide 32

Example of q-q plot with simulated data

Normal vs. Standard Normal | Sample Distribution vs. Standard Normal

[Figure: SPSS histograms and normal q-q plots for two simulated samples of N = 4'066 each: SP1_s ("Normal", mean = 6.01, SD = 0.99) and SP2_s ("Sample Distribution", mean = 4.97, SD = 2.21). German axis labels: "Häufigkeit" = frequency, "Beobachteter Wert" = observed value, "Erwarteter Wert von Normal" = expected normal value]

Slide 33

Example

Dataset "Data_07.sav" (Tschernobyl fallout of radioactivity)

[Left: distribution of original data; right: distribution of log-transformed data]


Slide 34

Exercises 01: Log Transformation & Data Trimming

Resources => www.schwarzpartners.ch/ZNZ_2012 => Exercises Analysis => Exercise 01


Slide 35

Linear Regression

Example

Medical research: Dependence of age and systolic blood pressure

[Scatterplot: systolic blood pressure [mmHg] (140–240) plotted against age [years] (35–90)]

Dataset (EXAMPLE01.SAV)

Sample of n = 10 men

Variables for

◦ age (age)

◦ systolic blood pressure (pressure)

Typical questions

Is there a linear relation between

age and systolic blood pressure?

What is the predicted mean blood

pressure for men aged 67?


Slide 36

The questions

Question in everyday language:

Is there a linear relation between age and systolic blood pressure?

Research question:

What is the relation between age and systolic blood pressure?

What kind of model is best for showing the relation? Is regression analysis the right model?

Statistical question:

Forming hypothesis

H0: "No model" (= No overall model and no significant coefficients)

HA: "Model" (= Overall model and significant coefficients)

Can we reject H0?

The solution

Linear regression equation of age on systolic blood pressure

pressure = β0 + β1·age + u

pressure: dependent variable
age: independent variable
β0, β1: coefficients
u: error term


Slide 37

"How-to" in SPSS

Scales

Dependent variable: metric

Independent variable: metric

SPSS

Analyze → Regression → Linear...

Result

Significant linear model

Significant coefficient

pressure = 135.2 + 0.956·age

Predicted mean blood pressure

199.2 = 135.2 + 0.956·67

Typical statistical statement in a paper:

There is a linear relation between age and systolic blood pressure.

(Regression: F = 102.763, R2 = .93, p = .000).
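The predicted mean is just the fitted equation evaluated at age 67; a quick check with the coefficients as reported on the slide (which are themselves rounded):

```python
# Coefficients as reported on the slide (already rounded there)
b0, b1 = 135.2, 0.956
age = 67
pressure = b0 + b1 * age
print(round(pressure, 2))   # about 199.25, i.e. the 199.2 mmHg stated above
```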

[Scatterplot with fitted regression line: systolic blood pressure [mmHg] against age [years]]

Slide 38

General purpose of regression

◦ Cause analysis

Is there a relationship between the independent variable and the dependent variable?

Example

Is there a model that describes the dependence between blood pressure and age, or do these two variables just form a random pattern?

◦ Impact analysis

Assess the impact of a change in the independent variable on the value of the dependent variable.

Example

If age increases, blood pressure also increases: How strong is the impact? By how much will pressure increase with each additional year?

◦ Prediction

Predict the values of a dependent variable using new values for the independent variable.

Example

Which is the predicted mean systolic blood pressure of men aged 67?


Slide 39

Key Steps in Regression Analysis

1. Formulation of the model

◦ Common sense … (remember the example with storks and babies)

◦ Linearity of relationship plausible

◦ Not too many variables (Principle of parsimony: Simplest solution to a problem)

2. Estimation of the model

◦ Estimation of the model by means of OLS estimation (ordinary least squares)

◦ Decision on procedure: Enter, stepwise regression…

3. Verification of the model

◦ Is the model as a whole significant? (i.e. are the coefficients significant as a group?) → F-test

◦ Are the regression coefficients significant? → t-tests (should be performed only if F-test is significant)

◦ How much variation does the regression equation explain? → Coefficient of determination (adjusted R-squared)

4. Considering other aspects (for example multicollinearity)

5. Testing of assumptions (Gauss-Markov, independence and normal distribution)

6. Interpretation of the model and reporting
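Step 2 (OLS estimation) can be illustrated without SPSS. This sketch implements the textbook least-squares formulas on made-up age/blood-pressure values (not the course dataset):

```python
def ols_fit(x, y):
    """OLS slope, intercept and R-squared via the textbook formulas."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    b1 = sxy / sxx                     # slope
    b0 = my - b1 * mx                  # intercept
    ss_err = sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return b0, b1, 1 - ss_err / ss_tot

# Made-up age / blood-pressure values (not the course dataset)
age = [40, 45, 50, 55, 60, 65, 70, 75]
bp = [152, 158, 162, 170, 175, 178, 186, 190]
b0, b1, r2 = ols_fit(age, bp)
print(b0, b1, r2)
```

Verification (step 3) would then test these estimates with F- and t-tests, which is what the SPSS output on the following slides provides.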


Slide 40

Regression model

Linear model

The linear model describes y as a function of x:

y = β0 + β1·x   (equation of a straight line)

The variable y is a linear function of the variable x.

β0 (intercept)

The point where the line crosses the Y-axis. The value of the dependent variable when all of the

independent variables = 0.

β1 (slope)

The increase in the dependent variable per unit change in

the independent variable (also known as the "rise over the run")

Stochastic model

y = β0 + β1·x + u

The error term u comprises all factors (other than x) that affect y.

These factors are treated as being unobservable.

→ u stands for "unobserved"

[Figure: straight line with the slope β1 illustrated as rise over run]

More details about mathematics

in Christof Luchsinger's part


Slide 41

Stochastic model – Assumptions related to the error term

The error term u is (must be) …

◦ independent of the explanatory variable x

◦ normally distributed with mean 0 and variance σ²: u ~ N(0, σ²)

E(y) = β0 + β1·x

Wooldridge J. (2005). Introductory Econometrics: A Modern Approach. 3rd edition, South-Western College Pub. (Subsequent images have the same source.)

Slide 42

Gauss-Markov Theorem, Independence and Normal Distribution

Under the 5 Gauss-Markov assumptions the OLS estimator is the best, linear, unbiased estima-

tor of the true parameters βi, given the present sample.

→ The OLS estimator is BLUE

1. Linear in coefficients y = 0 + 1 x + u

2. Random sample of n observations {(xi ,yi ): i = 1,…,n}

3. Zero conditional mean:

The error u has an expected value of 0,

given any values of the explanatory variable

E(ux) = 0

4. Sample variation in explanatory variables.

The xi’s are not constant and not all the same.

x const

x1 x2 … xn

5. Homoscedasticity:

The error u has the same variance given any value of the

explanatory variable.

Var(ux) = 2

Independence and normal distribution of error u ~ Normal(0,2)

These assumptions need to be tested – among else by analyzing the residuals.

Based on: Wooldridge J. (2005). Introductory Econometrics: A Modern Approach. 3rd edition, South-Western.


Slide 43

Regression analysis with SPSS: Some detailed examples

Simple example (EXAMPLE02)

Dataset EXAMPLE02.SAV:

Sample of 99 men by body size and weight

Step 1: Formulation of the model

Regression equation of weight on size

weight = β0 + β1·size + u

weight: dependent variable
size: independent variable
β0, β1: coefficients
u: error term


Slide 44

Step 2: Estimation of the model

SPSS: Analyze → Regression → Linear…


Slide 45

Step 3: Verification of the model

SPSS Output (EXAMPLE02) – F-test

The null hypothesis (H0) to verify is that there is no effect on weight.

The alternative hypothesis (HA) is that this is not the case.

H0: β0 = β1 = 0

HA: at least one of the coefficients is not zero

Empirical F-value and the appropriate p-value are computed by SPSS.

In the example, we can reject H0 in favor of HA (Sig. < 0.05). This means that the estimated

model is not only a theoretical construct but one that exists and is statistically significant.


Slide 46

SPSS Output (EXAMPLE02) – t-test

The Coefficients table also provides a significance test for the independent variable.

The significance test evaluates the null hypothesis that the unstandardized regression coefficient for the predictor is zero while all other predictors' coefficients are fixed at zero.

H0: βi = 0, βj = 0, j ≠ i

HA: βi ≠ 0, βj = 0, j ≠ i

The t statistic for the size variable (β1) is associated with a p-value of .000 ("Sig."), indicating that the null hypothesis can be rejected. Thus, the coefficient is significantly different from zero.

This also holds for the constant (β0) with Sig. = .000.


Slide 47

Step 6. Interpretation of the model

SPSS Output (EXAMPLE02) – Regression coefficients

weighti = β0 + β1·sizei

weighti = −120.375 + 1.086·sizei

Unstandardized coefficients show the absolute change of the dependent variable weight if the independent variable size changes by one unit.

Note: The constant −120.375 has no specific meaning. It's just the intersection with the Y axis.


Slide 48

Back to Step 3: Verification of the model

SPSS Output (EXAMPLE02) – Coefficient of determination

[Figure: decomposition of the deviation of a data point from the sample mean into "Total = Regression + Error": yi = data point, ŷi = estimate (model), ȳ = sample mean. The error is also called the residual]

Slide 49

SPSS Output (EXAMPLE02) – Coefficient of determination I

Summing up distances:

SSTotal = SSRegression + SSError

Σi (yi − ȳ)² = Σi (ŷi − ȳ)² + Σi (yi − ŷi)²

R Square = SSRegression / SSTotal, with 0 ≤ R Square ≤ 1

R Square, the coefficient of determination, is .546.

In the example, about half the variation of weight is explained by the model (R2 = 54.6%).

In bivariate regression, R2 is equal to the squared value of the correlation coefficient of the two variables (rxy = .739, rxy² = .546).

The higher the R Square, the better the fit.


Slide 50

Step 5: Testing of assumptions

In the example, are the requirements of the Gauss-Markov theorem as well as the other assumptions met?

1. Is the model linear in coefficients? Yes, decision for regression model.

2. Is it a random sample? Yes, clinical study.

3. Do the residuals have an expected value of 0 for all values of x? (zero conditional mean) → Scatterplot of residuals

4. Is there variation in the explanatory variable? Yes, clinical study.

5. Do the residuals have constant variance for all values of x? (homoscedasticity) → Scatterplot of residuals

Are the residuals independent from one another? → Scatterplot of residuals → (consider Durbin-Watson)

Are the residuals normally distributed? → Histogram


Slide 51

Scatterplot of standardized predicted values of y vs. standardized residuals

3. Zero conditional mean: The mean values of the residuals do not differ visibly from 0 across the range of standardized estimated values. → OK

5. Homoscedasticity: The residual plot is trumpet-shaped; the residuals do not have constant variance. This Gauss-Markov requirement is violated. → There is heteroscedasticity.

Independence: There is no obvious pattern indicating that the residuals influence one another (for example a "wavelike" pattern). → OK


Slide 52

Histogram of standardized residuals

Normal distribution of residuals:

Distribution of the standardized residuals is more or less normal. → OK


Slide 53

Violation of the homoscedasticity assumption

How to diagnose heteroscedasticity

Informal methods:

◦ Look at the scatterplot of standardized predicted y-values vs. standardized residuals.

◦ Graph the data and look for patterns.

Formal methods (not pursued further in this course):

◦ Breusch-Pagan test / Cook-Weisberg test

◦ White test

Corrections

◦ Transformation of the variable: A possible correction in the case of EXAMPLE02 is a log transformation of the variable weight

◦ Use of robust standard errors (not implemented in SPSS)

◦ Use of Generalized Least Squares (GLS): The estimator is provided with information about the variance and covariance of the errors.

(The last two options are not pursued further in this course.)


Slide 54

Multiple regression

Many similarities with simple Regression Analysis from above

◦ Key steps in regression analysis

◦ General purpose of regression

◦ Mathematical model and stochastic model

◦ Ordinary least squares (OLS) estimates and Gauss-Markov theorem as well as independence and normal distribution of error

All of these concepts apply to multiple regression analysis as well.

What is new?

◦ Concept of multicollinearity

◦ Concept of stepwise conduction of regression analysis

◦ Dummy coding of categorical variables

◦ Adjustment of the coefficient of determination ("Adjusted R Square")


Slide 55

Multicollinearity

Outline

Multicollinearity means there is a strong correlation between two or more independent variables.

Perfect collinearity means a variable is a linear combination of other variables.

It is then impossible to obtain unique estimates of the coefficients because there is an infinite number of equally good combinations.

Perfect collinearity is rare in real-life data (unless you have made a mistake…).

Correlations, or even strong correlations, between variables are unavoidable.

SPSS detects perfect collinearity and eliminates one of the redundant variables.

Example: x1 and x2 are perfectly collinear → x1 is excluded automatically.


Slide 56

How to identify multicollinearity

If the correlation coefficients between pairs of variables are greater than |0.80|, the variables

should not be used at the same time.

An indicator for multicollinearity reported by SPSS is Tolerance.

◦ Tolerance reflects the percentage of unexplained variance in a variable, given the other independent variables. Tolerance informs about the degree of independence of an independent variable.

◦ Tolerance ranges from 0 (= multicollinear) to 1 (= independent).

◦ Rule of thumb (O'Brien 2007): Tolerance less than .10 → problem with multicollinearity

See the output from the example "Example of multicollinearity" on slide 64.

In addition, SPSS reports the Variance Inflation Factor (VIF), which is simply the inverse of the tolerance (1/tolerance). VIF ranges from 1 to infinity.
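For the special case of only two predictors, tolerance and VIF reduce to simple functions of the pairwise correlation. This sketch (with made-up experience values) shows how an added quadratic term triggers the tolerance < .10 rule of thumb:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def tolerance_and_vif(x1, x2):
    """Two-predictor case: R-squared of x1 regressed on x2 equals r**2,
    so tolerance = 1 - r**2 and VIF = 1 / tolerance."""
    tol = 1 - pearson_r(x1, x2) ** 2
    return tol, 1 / tol

experience = [1, 3, 5, 8, 12, 20, 25, 30]        # made-up values
experience_sq = [e ** 2 for e in experience]
tol, vif = tolerance_and_vif(experience, experience_sq)
print(tol, vif)   # tolerance well below .10 -> multicollinearity problem
```

With more than two predictors, SPSS computes the tolerance from the multiple R² of each predictor regressed on all the others, but the logic is the same.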


Slide 57

Symptoms of multicollinearity

When correlation is strong, the standard errors of the parameters become large.

◦ It is difficult or impossible to assess the relative importance of the variables

◦ The probability is increased that a good predictor will be found non-significant and rejected

◦ There might be large changes in parameter estimates when variables are added or removed (Stepwise regression)

◦ There might be coefficients with sign opposite of that expected

Multicollinearity is …

◦ a severe problem when the research purpose includes causal modeling

◦ less important where the research purpose is prediction since the predicted values of the dependent remain stable relative to each other

Some hints to deal with multicollinearity

◦ Ignore multicollinearity if prediction is the only goal

◦ Center the variables to reduce correlation with other variables (Centering data refers to subtracting the mean (or some other value) from all observations)

◦ Conduct partial least squares regression

◦ Compute principal components (by running a factor analysis) and use them as predictors

◦ With large sample sizes, the standard errors of the coefficients will be smaller


Slide 58

Multiple regression analysis with SPSS: Some detailed examples

Example of multiple regression (EXAMPLE04)

Dataset EXAMPLE04.SAV:

Sample of 198 men and women based on body size, weight and age

Formulation of the model

Regression of weight on size and age

weight = β0 + β1·size + β2·age + u

weight = dependent variable
size = independent variable
age = independent variable
β0, β1, β2 = coefficients
u = error term


Slide 59

SPSS Output regression analysis (EXAMPLE04)

Overall F-test: OK (F = 487.569, p = .000) (table not shown here)

weight = β0 + β1·size + β2·age + u

weight = −85.933 + .812·size + .356·age

Unstandardized B coefficients show the absolute change of the dependent variable weight if the independent variable size changes by one unit.

The Beta coefficients are the standardized regression coefficients.

Their relative absolute magnitudes reflect their relative importance in predicting weight.

Beta coefficients are only comparable within a model, not between. Moreover, they are highly

influenced by misspecification of the model.

Adding or subtracting variables in the equation will affect the size of the beta coefficients.


Slide 60

SPSS Output regression analysis (EXAMPLE04) I

R Square is influenced by the number of independent variables.

=> R Square increases with increasing number of variables.

Adjusted R Square = R Square − m·(1 − R Square) / (n − m − 1)

n = number of observations
m = number of independent variables
n − m − 1 = degrees of freedom (df)

Choose Adjusted R square for the reporting.
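The adjustment can be checked directly. In the function below, R² = .546 and n = 99 are borrowed from the bivariate example above; the comparison with m = 5 predictors is hypothetical:

```python
def adjusted_r_square(r_square, n, m):
    """Adjusted R-squared = R2 - m * (1 - R2) / (n - m - 1)."""
    return r_square - m * (1 - r_square) / (n - m - 1)

# R2 = .546 and n = 99 from the bivariate example; m = 5 is hypothetical
print(adjusted_r_square(0.546, 99, 1))
print(adjusted_r_square(0.546, 99, 5))   # same R2, more predictors -> lower
```

The penalty grows with every added predictor, which is why Adjusted R Square is the fairer figure to report.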


Slide 61

Dummy coding of categorical variables

In regression analysis, a dummy variable (also called indicator or binary variable) is one that

takes the values 0 or 1 to indicate the absence or presence of some categorical effect that may

be expected to shift the outcome.

For example, seasonal effects may be captured by creating dummy variables for each of the

seasons. Also gender effects may be treated with dummy coding.

The number of dummy variables is always one less than the number of categories.

Categorical variable season → dummy variables season_1 … season_4

If season = 1 (spring): season_1 = 1, season_2 = 0, season_3 = 0, season_4 = 0
If season = 2 (summer): season_1 = 0, season_2 = 1, season_3 = 0, season_4 = 0
If season = 3 (fall):   season_1 = 0, season_2 = 0, season_3 = 1, season_4 = 0
If season = 4 (winter): season_1 = 0, season_2 = 0, season_3 = 0, season_4 = 1

Categorical variable gender → dummy variables gender_1, gender_2

If gender = 1 (male):   gender_1 = 1, gender_2 = 0
If gender = 2 (female): gender_1 = 0, gender_2 = 1

SPSS syntax:

recode gender (1 = 1) (2 = 0) into gender_d.
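The coding scheme can be mimicked in a few lines of Python; dropping one dummy gives the "one less than the number of categories" version used in regression (season names as in the table, last category as the reference):

```python
def dummy_code(value, categories):
    """Return all indicator dummies plus the reference-coded version
    (one dummy fewer than categories, last category as reference)."""
    full = [1 if value == c else 0 for c in categories]
    return full, full[:-1]

seasons = ["spring", "summer", "fall", "winter"]
full, coded = dummy_code("fall", seasons)
print(full)    # [0, 0, 1, 0]
print(coded)   # [0, 0, 1] -> winter is the reference category
```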


Slide 62

Gender as dummy variable

Women and men have different

mean levels of size and weight.

=> introduce gender as independent dummy variable

=> recode gender (1 = 1) (2 = 0) into gender_d.

Mean values by gender:

gender      size    weight
men (1)     181.19  76.32
women (2)   170.08  63.95
Total       175.64  70.14


Slide 63

SPSS Output regression analysis (EXAMPLE04)

Overall F-test: OK (F = 553.586, p = .000) (table not shown here)

weight = −25.295 + .417·size + .476·age + 8.345·gender_d

"Switching" from women (gender_d = 0) to men (gender_d = 1) raises weight by 8.345 kg.

Model fits better (Adjusted R square .894 vs. .832) because of the "separation" of gender.


Slide 64

Example of multicollinearity

Human resources research in hospitals: Survey of nurse satisfaction and commitment

Dataset Sub-sample of n = 198 nurses

Regression model

salary = β0 + β1·age + β2·education + β3·experience + β4·experience² + u

Why a new variable experience²?

The experience effect on salary is disproportional for younger and older people.

The disproportionality can be described by a quadratic term.

"experience" and "experience²" are highly correlated!


Slide 65

SPSS Output regression analysis (Example of multicollinearity) I

Tolerance is very low for "experience" and "experience²".

One of the two variables might be eliminated from the model.

=> Use stepwise regression? Unfortunately, SPSS does not take multicollinearity into account.


Slide 66

SPSS Output regression analysis (Example of multicollinearity) II

Prefer this model, because a non-significant constant is difficult to handle.


Slide 67

Exercises 02: Regression

Resources => www.schwarzpartners.ch/ZNZ_2012 => Exercises Analysis => Exercise 02


Slide 69

Analysis of Variance (ANOVA)

Example

Human resources research in hospitals: Survey of nurse salaries

Nurse Salary [CHF/h] by Level of Experience:

Level of Experience   1     2     3     All
All                   36.-  38.-  42.-  39.-

Dataset (EXAMPLE05.sav)

Sub-sample of n = 96 nurses

Among other variables: work experience (3 levels) & salary (hourly wage CHF/h)

Typical Question

Does experience have an effect on the level of salary? Are the results just by chance? What is the relation between work experience and salary?

(The overall mean of 39.- is the grand mean.)


Slide 70

Boxplot

The boxplot indicates that salary may differ significantly depending on levels of experience.

- - - grand mean


Slide 71

Questions

Question in everyday language:

Does experience really have an effect on salary?

Research question:

What is the relation between work experience and salary?

What kind of model is suitable for the relation?

Is analysis of variance the right model?

Statistical question:

Forming hypothesis

H0: "No model" (= Not significant factors)

HA: "Model" (= Significant factors)

Can we reject H0?

Solution

Linear model with salary as the dependent variable (ygk = wage of nurse k in group g)

ygk = ȳ + αg + εgk

ȳ = grand mean
αg = effect of group g
εgk = random term


Slide 72

"How-to" in SPSS

Scales

Dependent Variable: metric

Independent Variables: categorical (called factors), metric (called covariates)

SPSS

Analyze → General Linear Model → Univariate...

Results

Total model significant ("Corrected Model": F(2, 93) = 46.193, p = .000). experien is significant.

Example interpretation:

There is a main effect of experience (levels 1, 2, 3) on the salary, F(2, 93) = 46.193, p = .000.

The value of Adjusted R Squared = .488 shows that 48.8% of the variation in salary around the

grand mean can be predicted by the model (here: experien).


Slide 73

Key steps in analysis of variance

1. Design of experiments

◦ ANOVA is typically used for analyzing the findings of experiments

◦ One-way ANOVA, repeated measures ANOVA, multi-factorial ANOVA (two or more factor analysis of variance), mixed ANOVA

2. Calculating differences and sum of squares

◦ Differences between group means, individual values and grand mean are squared and summed up. This leads to the fundamental equation of ANOVA.

◦ Test statistics for significance test is calculated from the means of the sums of squares.

3. Prerequisites

◦ Data is independent

◦ Normally distributed variables

◦ Homogeneity of variance between groups

4. Verification of the model and the factors

◦ Is the overall model significant? (F-test)? Are the factors significant?

◦ Are prerequisites met?

5. Checking measures

◦ Adjusted R squared / partial Eta squared



Slide 74

Designs of ANOVA

◦ One-way ANOVA: one factor analysis of variance

1 dependent variable and 1 independent factor

◦ Multi-factorial ANOVA: two or more factor analysis of variance

1 dependent variable and 2 or more independent factors

◦ MANOVA: multivariate analysis of variance

Extension of ANOVA used to include more than one dependent variable

◦ Repeated measures ANOVA

1 independent variable but measured repeatedly under different conditions

◦ ANCOVA: analysis of COVariance

Model includes a so called covariate (metric variable)

◦ MANCOVA: multivariate analysis of COVariances

◦ Mixed-design ANOVA possible (e.g. two-way ANOVA with repeated measures)


Slide 75

Sum of Squares

Step-by-step

Survey on hospital nurse salary: Salaries differ regarding the level of experience.

Guess: What if ȳ1 = ȳ2 = ȳ3?

[Figure, shown in three build-up stages: individual nurse salaries [CHF/h] plotted by level of experience (1, 2, 3). ȳ = 38.6 is the mean of all nurses' salaries; the group means are 35.9 (ȳ1), 41.6 and 42.7; y3i denotes the salary of the i-th nurse with experience level 3. Legend: A = part of the variation due to experience level, B = random part of the variation, A + B = total variation from the mean of all nurses]

Slide 76

Basic idea of ANOVA

Total sum of squared variation of salaries SStotal is separated into two parts (SS = Sum of Squares):

◦ SSbetween: part of the sum of squared variation due to groups ("between groups", treatments) (here: between levels of experience)

◦ SSwithin: part of the sum of squared variation due to randomness ("within groups", also SSerror) (here: within each experience group)

Fundamental equation of ANOVA:

Σg Σk (ygk − ȳ)² = Σg Kg·(ȳg − ȳ)² + Σg Σk (ygk − ȳg)²

SStotal = SSbetween + SSwithin

g: index for groups from 1 to G (here: G = 3 levels of experience)
k: index for individuals within each group from 1 to Kg (here: K1 = K2 = K3 = 32, Ktotal = K1 + K2 + K3 = 96 nurses)

If ȳ1 = ȳ2 = ȳ3, then SSbetween ≈ 0 and SStotal ≈ SSwithin.


Slide 77

Significance testing

Test statistic F for significance testing is computed from the means of the sums of squares:

MStotal = SStotal / (Ktotal − 1)   (mean of the total sum of squared variation)

MSb = SSb / (G − 1)   (mean of the squared variation between groups)

MSw = SSw / (Ktotal − G)   (mean of the squared variation within groups)

Significance testing for the global model

F = MSb / MSw

The F-test verifies the hypothesis that the group means are equal:

H0: ȳ1 = ȳ2 = ȳ3

HA: ȳi ≠ ȳj for at least one pair i ≠ j

If ȳ1 = ȳ2 = ȳ3, then MSb ≈ MSw.

F follows an F-distribution with (G − 1) and (Ktotal − G) degrees of freedom
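The computation of SSbetween, SSwithin, the mean squares and F fits into a short Python sketch; the three groups below are hypothetical hourly wages, not the EXAMPLE05 data:

```python
def one_way_anova(groups):
    """Sum-of-squares decomposition and F statistic for a one-way design."""
    values = [v for g in groups for v in g]
    grand = sum(values) / len(values)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    ms_between = ss_between / (len(groups) - 1)          # df = G - 1
    ms_within = ss_within / (len(values) - len(groups))  # df = K_total - G
    return ss_between, ss_within, ms_between / ms_within

# Hypothetical hourly wages for three experience levels (not EXAMPLE05)
salaries = [[35, 36, 37, 36], [38, 37, 39, 38], [41, 43, 42, 42]]
ssb, ssw, f = one_way_anova(salaries)
print(ssb, ssw, f)
```

A large F (here far above 1) arises when the between-group variation dominates the within-group variation; SPSS then supplies the p-value from the F-distribution with the stated degrees of freedom.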


Slide 78

ANOVA with SPSS: A detailed example

Example of one-way ANOVA: Survey of nurse salaries (EXAMPLE05)

SPSS: Analyze → General Linear Model → Univariate...


Slide 79

SPSS Output ANOVA (EXAMPLE05) – Tests of Between-Subjects Effects I

Significant ANOVA model (called "Corrected Model")

Significant constant (called "Intercept")

Significant variable experien

Example interpretation:

There is a main effect of experience (levels 1, 2, 3) on the salary (F(2, 93) = 46.193, p = .000).

The value of Adjusted R Squared = .488 shows that 48.8% of the variation in salary around the

grand mean can be predicted by the variable experien.


Slide 80

SPSS Output ANOVA (EXAMPLE05) – Tests of Between-Subjects Effects II

Allocation of sums of squares to terms in the SPSS output

experien is part of SSbetween.

In this case (one-way analysis), SSexperien = SSbetween.

[Annotated output rows: "Grand mean", SSbetween, SStotal, SSwithin (= SSerror)]


Slide 81

Including Partial Eta Squared

Partial Eta Squared compares the amount of variance explained by a particular factor (all other variables fixed) to the amount of variance that is not explained by any other factor in the model.

This means we are only considering variance that is not explained by other variables in the model. Partial η² indicates what percentage of this variance is explained by a variable.

Partial η² = SSEffect / (SSEffect + SSError)

Example: Experience explains 49.8% of the previously unexplained variance.

Note: The values of partial η² do not sum up to 100%! (↔ "partial")

In the case of one-way ANOVA: Partial η² is the proportion of the corrected total variance that is explained by the model (= R²).
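The formula is a one-liner. The sums of squares below are hypothetical, chosen so that the result comes out near the 49.8% quoted above:

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared = SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares, picked to match the 49.8% example above
print(partial_eta_squared(74.7, 75.3))
```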


Slide 82

Two-Way ANOVA

Human resources research: Survey of nurse salary

Nurse Salary [CHF/h] by Position and Level of Experience:

Position   Level 1  Level 2  Level 3  All
Office     35.-     37.-     39.-     37.-
Hospital   37.-     40.-     44.-     40.-
All        36.-     38.-     42.-     39.-

Now two factors are in the design:

◦ Level of experience

◦ Position

Typical Question

Do position and experience have an effect on salary?

What "interaction" exists between position and experience?


Slide 83

Interaction

Interaction means there is a dependency between experience and position.

The independent variables have a complex influence on the dependent variable (salary).

The complex influence is called interaction.

The independent variables do not explain all of the variation of the dependent variable.

Part of the variation is due to the interaction term.

An interaction means that the effect of a factor depends on the value of another factor.

[Diagram: experience (factor A) and position (factor B) each influence salary, as does their interaction (factor A × B)]


Slide 84

Sum of Squares

Again SStotal = SSbetween + SSwithin

with SSbetween = SSExperience + SSPosition + SSExperience × Position

follows SStotal = (SSExperience + SSPosition + SSExperience × Position) + SSwithin

where SSExperience × Position is the interaction of both factors simultaneously.

Total sum of variation (SSt)
= Sum of variation between groups (SSb) + Sum of variation within groups (SSw)

SSb splits further into
◦ Sum of variation due to factor A (SSA)
◦ Sum of variation due to factor B (SSB)
◦ Sum of variation due to the interaction A x B (SSAxB)
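The decomposition SStotal = SSA + SSB + SSAxB + SSwithin can be verified numerically. A minimal sketch with made-up balanced data (2 positions x 3 experience levels x 2 observations per cell; not the course dataset):

```python
# Made-up balanced two-way data: (position, experience) -> observations.
data = {
    ("Office", 1): [34, 36], ("Office", 2): [36, 38], ("Office", 3): [38, 40],
    ("Hospital", 1): [36, 38], ("Hospital", 2): [39, 41], ("Hospital", 3): [43, 45],
}

def mean(xs):
    return sum(xs) / len(xs)

all_vals = [y for cell in data.values() for y in cell]
grand = mean(all_vals)

pos_levels = sorted({p for p, e in data})
exp_levels = sorted({e for p, e in data})
pos_mean = {p: mean([y for (pp, e), ys in data.items() if pp == p for y in ys]) for p in pos_levels}
exp_mean = {e: mean([y for (p, ee), ys in data.items() if ee == e for y in ys]) for e in exp_levels}
cell_mean = {k: mean(v) for k, v in data.items()}

n_cell = 2                          # observations per cell (balanced design)
n_pos = n_cell * len(exp_levels)    # observations per position level
n_exp = n_cell * len(pos_levels)    # observations per experience level

ss_total = sum((y - grand) ** 2 for y in all_vals)
ss_A = sum(n_pos * (pos_mean[p] - grand) ** 2 for p in pos_levels)
ss_B = sum(n_exp * (exp_mean[e] - grand) ** 2 for e in exp_levels)
ss_AxB = sum(n_cell * (cell_mean[(p, e)] - pos_mean[p] - exp_mean[e] + grand) ** 2
             for p in pos_levels for e in exp_levels)
ss_within = sum((y - cell_mean[k]) ** 2 for k, ys in data.items() for y in ys)

# For a balanced design the decomposition holds exactly:
print(abs(ss_total - (ss_A + ss_B + ss_AxB + ss_within)) < 1e-9)  # → True
```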


Slide 85

Example of two-way ANOVA: Survey of nurse salary (EXAMPLE06)

SPSS: Analyze → General Linear Model → Univariate...


Slide 86

Interaction

The interaction term between fixed factors is included by default in SPSS ANOVA.

Example interpretation (among others):

There is also an interaction of experience and position on salary
(F(2, 90) = 34.606, p = .000).

The interaction term experien * position explains 29.7% of the variance.


Slide 87

Interaction I

Do different levels of experience influence the impact of different levels of position differently?

Yes: at experience levels 2 and 3, the influence of position is larger.

Simplified: Lines not parallel

Interpretation: Experience is more important in hospitals than in offices.

[Interaction plot: salary vs. experience, one line for office and one for hospital;
the lines are not parallel]


Slide 88

More on interaction

[Six interaction plots of salary vs. experien: each panel indicates whether a main effect of
experien, a main effect of position, and an interaction are present, and shows the
corresponding line pattern]


Slide 89

Requirements of ANOVA

0. Robustness

ANOVA is relatively robust against violations of prerequisites.

1. Sampling

Random sample, no treatment effects (more in Lecture 10)

A well designed study avoids violation of this assumption

2. Distribution of residuals

Residuals (= error) are normally distributed

Correction → transformation

3. Homogeneity of variances

Residuals (= error) have constant variance (more in Lecture 10)

Correction → weight variances

4. Balanced design

Same sample size in all groups

Correction → weight mean

SPSS automatically corrects for unbalanced designs with Type III sums of squares.
Syntax: /METHOD = SSTYPE(3)


Slide 90

Exercises 03: ANOVA

Resources => www.schwarzpartners.ch/ZNZ_2012 => Exercises Analysis => Exercise 03


Slide 91

Other multivariate Methods

Type of Multivariate Statistical Analysis

With regard to practical application, multivariate methods can be divided into two main groups:

Methods for identifying structures: Dependence Analysis (directed dependencies)

[Diagram: the independent variables (IV) "Price of product", "Quality of products" and
"Quality of customer service" each point to the dependent variable (DV) "Customer satisfaction"]

Also called dependence analysis because the methods are used to test direct
dependencies between variables. Variables are divided into independent
variables and dependent variable(s).

Methods for discovering structures: Interdependence Analysis (non-directed dependencies)

[Diagram: "Customer satisfaction", "Employee satisfaction" and "Motivation of employee"
are connected without direction]

Also called interdependence analysis because the methods are used to discover
dependencies between variables. This is especially the case with exploratory
data analysis (EDA).


Slide 92

Choice of Method

Methods for identifying structures

(Dependence Analysis)

Regression Analysis

Analysis of Variance (ANOVA)

Discriminant Analysis

Contingency Analysis

(Conjoint Analysis)

Methods for discovering structures

(Interdependence Analysis)

Factor Analysis

Cluster Analysis

Multidimensional Scaling (MDS)

                           Independent Variable (IV)
Dependent Variable (DV)    metric                   categorical
metric                     Regression analysis      Analysis of Variance (ANOVA)
categorical                Discriminant analysis    Contingency analysis
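The DV/IV choice table can be expressed as a small lookup. A minimal sketch in Python (the scale labels are taken from the table above; the function name is ours):

```python
# Method choice by (DV scale, IV scale), as in the table above.
METHOD = {
    ("metric", "metric"): "Regression analysis",
    ("metric", "categorical"): "Analysis of Variance (ANOVA)",
    ("categorical", "metric"): "Discriminant analysis",
    ("categorical", "categorical"): "Contingency analysis",
}

def choose_method(dv_scale, iv_scale):
    return METHOD[(dv_scale, iv_scale)]

print(choose_method("categorical", "metric"))  # → Discriminant analysis
```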


Slide 93

Tree of methods (also www.ats.ucla.edu/stat/mult_pkg/whatstat/default.htm, July 2012)

(See also www.methodenberatung.uzh.ch (in German))

Data Analysis
◦ Descriptive (univariate / bivariate)
◦ Inductive
  - Univariate / bivariate: Correlation, t-Test, χ² test of independence, χ² adjustment test
  - Multivariate
    Dependence analysis
      DV metric, IV metric → Regression
      DV metric, IV not metric → ANOVA (Conjoint)
      DV not metric, IV metric → Discriminant
      DV not metric, IV not metric → Contingency
    Interdependence analysis
      metric → Factor analysis, MDS
      not metric → Cluster analysis

DV = dependent variable IV = independent variable


Slide 94

Example of multivariate Methods (categorical / metric)

Linear discriminant analysis

Linear discriminant analysis (LDA) is used to find the linear combination of features which

best separates two or more groups in a sample.

The resulting combination may be used to classify groups in a sample.

(Example: Credit card debt, debt to income ratio, income => predict bankrupt risk of clients)

LDA is closely related to ANOVA and logistic regression analysis, which also attempt to express

one dependent variable as a linear combination of other variables.

LDA is an alternative to logistic regression, which is frequently used in place of LDA.

Logistic regression is preferred when data are not normal in distribution or group sizes

are very unequal.


Slide 95

Example of linear discriminant analysis

Data from measures of body length of

two subspecies of puma (South & North America)

[Scatter plot: x2 (100-140 cm) against x1 (150-250 cm) for the 20 animals listed below]

Species  x1   x2
1        191  131
1        185  134
1        200  137
1        173  127
1        171  118
1        160  118
1        188  134
1        186  129
1        174  131
1        163  115
2        186  107
2        211  122
2        201  114
2        242  131
2        184  108
2        211  118
2        217  122
2        223  127
2        208  125
2        199  124

Species 1 = North America, 2 = South America

x1 body length: nose to top of tail

x2 body length: nose to root of tail

Other names for puma

cougar

mountain lion

catamount

panther


Slide 96

Very short introduction to linear discriminant analysis

Dependent Variable (also called discriminant variable): categorical

◦ Puma example: type (two subspecies of puma)

Independent Variables: metric

◦ Puma example: x1 & x2 (different measures of body length)

Goal

Discrimination between groups

◦ Puma example: discrimination between the two subspecies

Estimate a function for discriminating between groups

Y_i = α + β1·x_i,1 + β2·x_i,2 + u_i

where
Y_i           discriminant variable
α, β1, β2     coefficients
x_i,1, x_i,2  measurements of body length
u_i           error term

Sketch of LDA


Slide 97

Data from measurements of body length of two subspecies of puma

[Two scatter plots of x2 (100-140 cm) against x1 (150-250 cm): the raw data, and the same
data with the discriminant direction that separates the two subspecies]


Slide 98

SPSS-Example of linear discriminant analysis (EXAMPLE07)

DISCRIMINANT

/GROUPS=species(1 2)

/VARIABLES=x1 x2

/ANALYSIS ALL

/PRIORS SIZE

/STATISTICS=MEAN STDDEV UNIVF BOXM COEFF RAW TABLE

/CLASSIFY=NONMISSING POOLED MEANSUB .


Slide 99

SPSS Output Discriminant analysis (EXAMPLE07) I

Both coefficients significant

Y_i = α + β1·x_i,1 + β2·x_i,2 + ε_i

Y_i = 4.588 + 0.131·x_i,1 - 0.243·x_i,2 + ε_i


Slide 100

[Plot: discriminant variable Y (range -5 to 5) for all 20 animals, grouped by subspecies (1, 2);
the two "found" pumas A (x1 = 175, x2 = 120) and B (x1 = 200, x2 = 110) are marked]

The two subspecies of pumas can be completely classified (100%).

See also the plot above, which is generated with

Y_i = 4.588 + 0.131·x_i,1 - 0.243·x_i,2 + ε_i

"Found" two pumas A & B:

x1 x2

A 175 120

B 200 110

What subspecies are they?

Use

Y_i = 4.588 + 0.131·x_i,1 - 0.243·x_i,2 + ε_i

to determine their subspecies.
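One way to carry this out: compute Y for A and B and assign each to the species whose mean Y score (group centroid) is closer. A minimal Python sketch using the coefficients and data from this example; the nearest-centroid rule here is a simplification of SPSS's full classification procedure.

```python
# Training data from the puma example: (species, x1, x2).
data = [
    (1, 191, 131), (1, 185, 134), (1, 200, 137), (1, 173, 127), (1, 171, 118),
    (1, 160, 118), (1, 188, 134), (1, 186, 129), (1, 174, 131), (1, 163, 115),
    (2, 186, 107), (2, 211, 122), (2, 201, 114), (2, 242, 131), (2, 184, 108),
    (2, 211, 118), (2, 217, 122), (2, 223, 127), (2, 208, 125), (2, 199, 124),
]

def Y(x1, x2):
    # Estimated discriminant function from the SPSS output.
    return 4.588 + 0.131 * x1 - 0.243 * x2

def centroid(species):
    # Group mean of Y (the group centroid on the discriminant axis).
    scores = [Y(x1, x2) for s, x1, x2 in data if s == species]
    return sum(scores) / len(scores)

c1, c2 = centroid(1), centroid(2)

def classify(x1, x2):
    # Assign to the species whose centroid is closer on the Y axis.
    y = Y(x1, x2)
    return 1 if abs(y - c1) < abs(y - c2) else 2

print(classify(175, 120))  # puma A → 1 (North America)
print(classify(200, 110))  # puma B → 2 (South America)
```

Running `classify` over the 20 training animals reproduces the 100% classification rate reported on the previous slide.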


Slide 101

Another example

Hence the word "Discrimination"

Wason Wanchakorn / AP


Slide 102

Notes: