
Page 1: Regression

Regression I: Simple Regression

Class 21

Page 2: Regression

Schedule for Remainder of Term

Dec. 1: Simple Regression Stats Take-Home Exercise assigned

Dec. 3: Multiple Regression

Dec. 8: Moderated Multiple Regression Quiz 3

Dec. 10: Moderated Multiple Regression Wrap-Up, Review

Dec. 15: Final Exam, Room 302, 1:30-4:30; Stats Take-Home Exercise Due

Page 3: Regression

Caveat on Regression Sequence

Regression is a complex, rich topic; simple and multiple regression alone could be a course in itself.

We can cover only a useful introduction in 3 classes.

Will cover:

Simple Regression: Does caffeine affect purchasing?

Multiple Regression: Do caffeine and income affect purchasing?

Moderated Multiple Regression: Does income moderate the effect of caffeine on purchasing?

If Time Permits: Diagnostic stats, outliers, influential cases, cross validation, regression plots, checking assumptions

Page 4: Regression

ANOVA vs. Regression

ANOVA: Do the means of Group A, Group B and Group C differ?

Categorical data only.

[Bar chart: Aggression (0-35) for Tennis fans, Football fans, and Hockey fans]

Regression: Does Variable X influence Outcome Y?

Continuous data and categorical data.

[Line graph: Aggression (0-12) as a function of Frustration: low, medium, high, very high, extreme]

Page 5: Regression

Regression vs. ANOVA as Vehicles for Analyzing Data

ANOVA: Sturdy, straightforward, robust to violations, easy to understand inner workings, but limited range of tasks.

Regression: Amazingly versatile, agile, super powerful, loaded with nuanced bells & whistles, but very sensitive to violations of assumptions. A bit more art.

Page 6: Regression

Functions of Regression

1. Establishing relations between variables

Do frustration and aggression co-vary?

2. Establishing causality between variables

Does frustration (at Time 1) predict aggression (at Time 2)?

3. Testing how multiple predictor variables relate to, or predict, an outcome variable.

Do frustration, social class, and family stress predict aggression? [additive effects]

4. Testing for unique effects of Variable A, controlling for other variables.

Does frustration predict aggression, beyond social class and family stress?

5. Test for moderating effects between predictors on outcomes.

Does frustration predict aggression, but mainly for people with low income? [interactive effect]

6. Forecasting/trend analyses

If incomes continue to decline in the future by X amount, aggression will increase by Y amount.

Page 7: Regression

The Palace Heist: A True-Regression Mystery

Sterling silver from the royal palace is missing. Why?

Facts gathered during investigation

A. General public given daily tours of palace
B. Reginald, the ADD butler, misplaces things
C. Prince Guido, the playboy heir, has gambling debts

Possible explanations?

A. Public is stealing the silver
B. Reginald is misplacing the silver
C. Guido is pawning the silver

Page 8: Regression

The Palace Heist: A True-Regression Mystery

Possible explanations:

A. Public is stealing silver
B. Reginald's ADD leads to misplaced silver
C. Guido is pawning silver

Is it just one of these explanations, or a combination of them? E.g., Public theft, alone, OR public theft plus Guido’s gambling?

If it is multiple causes, are they equally important or is one more important than another?

E.g., Crowd size has a significant effect on lost silver, but is less important than Guido’s debts.

Moderation: Do circumstances interact? E.g., does more silver get lost when Reginald's ADD is severe, but only when crowds are large?

Page 9: Regression

Regression Can Test Each of These Possibilities, And Can Do So Simultaneously

DATASET ACTIVATE DataSet1.
REGRESSION
  /DESCRIPTIVES MEAN STDDEV CORR SIG N
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA COLLIN TOL CHANGE
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT missing.silver
  /METHOD=ENTER crowds.size
  /METHOD=ENTER reginald.ADD
  /METHOD=ENTER guido.debts
  /METHOD=ENTER crowds.reginald.

The four METHOD=ENTER lines enter Variable 1, Variable 2, Variable 3, and the Variable 1 X Variable 2 interaction (crowds.reginald).

Page 10: Regression

Why Do Bullies Harass Other Students?

Investigation shows that bullies are often:

A. Reprimanded by teachers for unruly behavior
B. Have a lot of family stress

Possible explanations for bullies’ behavior?

A. Frustrated by teachers' reprimands, they take it out on others.
B. Family stress leads to frustration, which they take out on others.

Questions based on these possible explanations are:

Is it reprimands alone, or stress alone, or reprimands + stress?
Are reprimands important, after considering stress (and vice versa)?
Do reprimands matter only if there is family stress?

Page 11: Regression

Simple Regression

Features: Outcome, Intercept, Predictor, Error

Y = b0 + b1X + Error (residual)

Do bullies aggress more after being reprimanded?

Y = DV = Aggression

b0 = Intercept = average of DV before other variables are considered. How much is aggression due just to being a bully, regardless of other influences?

b1 = slope of IV = influence of IV on outcome. How much do reprimands cause bullies to aggress?

Page 12: Regression

Elements of Regression Equation

[Scatterplot: Aggression (1-4) vs. Reprimands (1-8)]

Y = b0 + b1X + Ɛ

Y = DV (aggression)

b0 = intercept; the average value of the DV BEFORE accounting for the IV. b0 = mean of DV WHEN IV = 0.

b1 = slope; the effect of IV on DV (effect of reprimands on aggression).

Coefficients = parameters; things that account for Y. b0 and b1 are coefficients.

Ɛ = error; changes in DV that are not due to coefficients.

Page 13: Regression

Translating Regression Equation Into Expected Outcomes

[Scatterplot: Aggression (1-6) vs. Reprimands (1-8), with regression line]

Y = 2 + 1.0X + Ɛ means that bullies will aggress 2 times a day, plus (1 * number of reprimands).

How many times will a bully aggress if he/she is reprimanded 3 times?

Y = 2 + 1.0(3) = 5

Regression allows one to predict how an individual will behave (e.g., aggress) due to certain causes (e.g., reprimands).

Page 14: Regression

Quick Review of New Terms

Y = b0 + b1X + Ɛ

Y is the: Outcome, aka DV

b0 is the: Intercept; average score when IV = 0

b1 is the: Slope, aka predictor, aka IV

Ɛ is the: Error, aka changes in DV not explained by IV

The coefficients include: Intercept and slope(s), b0 and b1

Does b0 = mean of the sample? NO! b0 is the expected score ONLY when slope = 0

If Y = 5.0 + 2.5X + Ɛ, what is Y when X = 2? Y = 5.0 + (2.5 * 2) = 10

Page 15: Regression

Regression in English*

The effect that days-per-week meditating has on SAT scores:
Y = 1080 + 25X in English? Students' SAT score is 1080 without meditation, and increases by 25 points for each additional day of weekly meditation.

The effect of Anxiety (in points) on threat-detection Reaction Time (in ms):
Y = 878 - 15X in English? Reaction time is 878 ms when the anxiety score = 0, and decreases by 15 ms for each 1-point increase on the anxiety measure.

The effect of parents' hours of out-loud reading on toddlers' weekly word acquisition:
Y = 35 + 8X in English? Toddlers speak 35 words when parents never read out loud, and acquire 8 words per week for every hour of out-loud reading.

* Fabricated outcomes

Page 16: Regression

Positive, Negative, and Null Regression Slopes

[Graph: lines illustrating positive, negative, and null slopes; the null slope shown is Y = 3 + 0X]

Page 17: Regression

Regression Tests "Models"

Model: A predicted TYPE of relationship between one or more IVs (predictors) and a DV (outcome).

Relationships can take various shapes:

Linear: Calories consumed and weight gained.

Curvilinear: Stress and performance

J-shaped: Insult and response intensity

Catastrophic or exponential: Number of words learned and language ability.

Page 18: Regression

Regression Tests How Well the Model “Fits” (Explains) the Obtained Data

Predicted Model: As reprimands increase, bullying will increase.

This is what kind of model? Linear

[Scatterplot: Aggression (1-4) vs. Reprimands (1-8), with an upward-sloping line]

Linear Regression asks: Do data describe a straight, sloped line? Do they confirm a linear model?

Page 19: Regression

Locating a "Best Fitting" Regression Line

[Scatterplot: Aggression (1-9) vs. Reprimands (1-12), with best-fitting line; one data point labeled "Individual Response"]

The line represents the "best fitting slope."
Disparate points represent residuals = deviations from the slope.
"Model fit" is based on the method of least squares.

Page 20: Regression

Method of Least Squares

Regression attempts to find the "best fitting" line to describe the data.

This is the line for which, on average, deviations (residuals) between actual responses (data points) and predicted responses (points on the regression line) are smallest.

Least squares refers to the "least squared differences" between data points and the line.

The method of least squares is the calculation done to determine the best-fitting line, using residuals.

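The least-squares computation itself is short. Below is a minimal sketch in Python (NumPy assumed; the reprimand/aggression numbers are invented for illustration) of the closed-form least-squares estimates:

    import numpy as np

    # Invented data: reprimands (X) and aggressive acts (Y) for 8 bullies
    X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    Y = np.array([2, 3, 3, 5, 4, 6, 7, 7], dtype=float)

    # Closed-form least-squares estimates:
    #   b1 = sum((X - mean_X) * (Y - mean_Y)) / sum((X - mean_X)^2)
    #   b0 = mean_Y - b1 * mean_X
    b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
    b0 = Y.mean() - b1 * X.mean()

    # Residuals: deviations of actual responses from the fitted line
    residuals = Y - (b0 + b1 * X)
    print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, "
          f"sum of squared residuals = {np.sum(residuals ** 2):.2f}")

No other line yields a smaller sum of squared residuals; that is what "best fitting" means here.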

Page 21: Regression

Error = Average Difference Between All Predicted Points (X88, Ŷ88) and Actual Points (X88, Y88)

[Scatterplot: Aggression (1-10) vs. Reprimands (1-12), showing Subject #88's actual response (Y88), the predicted response on the regression line (Ŷ88), and the gap between them labeled Ɛ88]

Note: "88" = Subject #88.

Deviation, i.e., error = predicted - actual.

Page 22: Regression

Regression Compares Slope to Mean

[Scatterplot: Aggression (1-10) vs. Reprimands (1-12)]

Null Hyp: Mean score of aggression is the best predictor; reprimands unimportant (b1 = 0).

Alt. Hyp: Reprimands explain aggression above and beyond the mean (b1 > 0).

Page 23: Regression

[Scatterplot: Aggression (1-10) vs. Reprimands (1-12), showing the null slope, the observed slope, and random slopes originating at random means]

Is the observed slope random or meaningful? That's the Regression question.

Page 24: Regression

Total Sum of Squares (SST)

[Scatterplot: Aggression (1-10) vs. Reprimands (1-12), with each score's deviation from the DV mean marked]

Total Sum of Squares (SST) = deviation of each score from the DV mean (assuming zero slope), square these deviations, then sum them.

Page 25: Regression

Residual Sum of Squares (SSR)

[Scatterplot: Aggression (1-10) vs. Reprimands (1-12), with each score's residual from the regression line marked]

Residual Sum of Squares (SSR) = take each residual from the regression line, square it, then sum all these squared residuals.

Page 26: Regression

The Regression Question

Does the model (e.g., the regression line) do a better job describing obtained data than the mean?

In other words,

Are residuals, on average, smaller around the model than around the mean?

Regression compares residuals around the mean to residuals around the model (e.g., line).

If the model residuals are smaller, the model "wins"; if they are not smaller, the model loses.

Page 27: Regression

Elements of Regression

Total Sum of Squares (SST) = Deviation of each score from the DV mean, square these deviations, then sum them.

Residual Sum of Squares (SSR) = Deviation of each score from the regression line, squared, then sum all these squared residuals.

Model Sum of Squares (SSM) = SST - SSR = the amount that the regression slope explains the outcome above and beyond the simple mean.

R2 = SSM / SST = proportion of variance explained by the predictor(s). Measures how much of the DV is predicted by the IV (or IVs).

R2 = (SST - SSR) / SST

NOTE: What happens to R2 when SSR is smaller? It gets bigger.
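These definitions reduce to a few lines of arithmetic. A minimal sketch (Python/NumPy, reusing the invented reprimand/aggression data from the least-squares sketch above):

    import numpy as np

    X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    Y = np.array([2, 3, 3, 5, 4, 6, 7, 7], dtype=float)

    # Least-squares fit, as in the earlier sketch
    b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
    b0 = Y.mean() - b1 * X.mean()
    Y_hat = b0 + b1 * X  # predicted scores

    SST = np.sum((Y - Y.mean()) ** 2)  # deviations from the DV mean, squared, summed
    SSR = np.sum((Y - Y_hat) ** 2)     # residuals from the line, squared, summed
    SSM = SST - SSR                    # what the model explains beyond the mean
    R2 = SSM / SST                     # proportion of variance explained

    print(f"SST = {SST:.2f}, SSR = {SSR:.2f}, SSM = {SSM:.2f}, R2 = {R2:.3f}")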

Page 28: Regression

Assessing Overall Model: The Regression F Test

In ANOVA: F = Treatment / Error = MSB / MSW

In Regression: F = Model / Residuals = MSM / MSR (aka slope line / random error around slope line)

MSM = SSM / df (model)
MSR = SSR / df (residual)

df (model) = number of predictors (betas, not counting intercept)

df (residual) = number of observations (i.e., subjects) minus estimates (i.e., all betas plus the intercept). If N = 20, then df = 20 - 2 = 18.

F in Regression measures whether overall model does better than chance at predicting outcome.
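The arithmetic, continued as a minimal sketch (Python; SciPy assumed to be available for the p-value, same invented data as above):

    import numpy as np
    from scipy import stats

    X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    Y = np.array([2, 3, 3, 5, 4, 6, 7, 7], dtype=float)
    b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
    b0 = Y.mean() - b1 * X.mean()
    SSR = np.sum((Y - (b0 + b1 * X)) ** 2)
    SSM = np.sum((Y - Y.mean()) ** 2) - SSR

    df_model = 1           # number of predictors
    df_resid = len(Y) - 2  # N minus (intercept + one slope); here 8 - 2 = 6
    F = (SSM / df_model) / (SSR / df_resid)  # MSM / MSR
    p = stats.f.sf(F, df_model, df_resid)    # right-tail p-value
    print(f"F({df_model}, {df_resid}) = {F:.2f}, p = {p:.4f}")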

Page 29: Regression

F Statistic in Regression

[SPSS ANOVA output table; the "Regression" row is the model, with its SSM and SSR]

Regression F = MSM / MSR

SSM df = No. of predictors (reprimands) = 1

SSR df = subjects - coefficients = 20 - (intercept, reprimands) = 18

Page 30: Regression

Assessing Individual Predictors

Is the predictor slope significant, i.e., does the IV predict the outcome?

b1 = slope of sole predictor in simple regression.

If b1 = 0 then change in predictor has zero influence on outcome.

If b1 > 0, then it has some influence. How much greater than 0 must b1 be in order to have significant influence?

t stat tests significance of b1 slope.

t = (b observed - b expected) / SEb, where b expected is the null-effect b, i.e., b = 0

So: t = b observed / SEb

t df = n – predictors – 1 = n - 2

Note: Predictors = betas
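A matching sketch for the slope's t test (Python; SciPy assumed, same invented data; SEb is computed from the standard simple-regression formula sqrt(MSR / sum((X - mean_X)^2))):

    import numpy as np
    from scipy import stats

    X = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    Y = np.array([2, 3, 3, 5, 4, 6, 7, 7], dtype=float)
    b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
    b0 = Y.mean() - b1 * X.mean()
    resid = Y - (b0 + b1 * X)

    df = len(Y) - 2                # n - predictors - 1
    MSR = np.sum(resid ** 2) / df  # mean squared residual
    SE_b1 = np.sqrt(MSR / np.sum((X - X.mean()) ** 2))  # std. error of the slope
    t = b1 / SE_b1                 # b observed / SEb (null b = 0)
    p = 2 * stats.t.sf(abs(t), df) # two-tailed p-value
    print(f"b1 = {b1:.2f}, t({df}) = {t:.2f}, p = {p:.4f}")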

Page 31: Regression

t Statistic in Regression

[SPSS Coefficients output table, showing each predictor's t and sig. of t]

B = slope; Std. Error = std. error of the slope; t = B / Std. Error

Beta = standardized B. Shows how many SDs the outcome changes per each SD change in the predictor.

Beta allows comparison of predictor strength between predictors.

Page 32: Regression

Interpreting Simple Regression

Overall F Test: Our model of reprimands having an effect on aggression is confirmed.

t Test: Reprimands lead to more aggression. In fact, for every 1 reprimand there is a .61 aggressive act, or roughly 1 aggressive act for every 2 reprimands.

Page 33: Regression

Key Indices of Regression

R2 = Proportion of variance the model explains

F = How well the model exceeds the mean in predicting the outcome

b = The influence of an individual predictor on the outcome

beta = b transformed into standardized units

t of b = Significance of b (b / std. error of b)

R = Degree to which the entire model correlates with the outcome

Page 34: Regression

Multiple Regression (MR)

Y = b0 + b1X1 + b2X2 + b3X3 + ... + bkXk + ε

Multiple regression (MR) can incorporate any number of predictors in model.

With 2 predictors the result is a "regression plane"; beyond that, it becomes increasingly difficult to visualize the result.

MR operates on same principles as simple regression.

Multiple R = correlation between observed Y and Y as predicted by total model (i.e., all predictors at once).

Page 35: Regression

Two Variables Produce "Regression Plane"

[3-D plot: Aggression as a function of Reprimands and Family Stress, forming a regression plane]

Page 36: Regression

Multiple Regression Example

Is aggression predicted by teacher reprimands and family stresses?

Y = b0 + b1X1 + b2X2 + ε

Y = Aggression

b0 = Intercept (being a bully, by itself)

b1 = family stress

b2 = reprimands

ε = error

Page 37: Regression

Elements of Multiple Regression

Total Sum of Squares (SST) = Deviation of each score from the DV mean, square these deviations, then sum them.

Residual Sum of Squares (SSR) = Each residual from total model (not simple line), squared, then sum all these squared residuals.

Model Sum of Squares (SSM) = SST – SSR = The amount that the total model explains result above and beyond the simple mean.

R2 = SSM / SST = Proportion of variance explained, by the total model.

Adjusted R2 = R2, but adjusted for having multiple predictors.

NOTE: The main difference between these values in multiple regression and simple regression is the use of the total model rather than a single slope. The math is much more complicated, but conceptually the same.

Page 38: Regression

Methods of Regression

Hierarchical:
1. Predictors selected based on theory or past work.
2. Predictors entered into the analysis in order of predicted importance, or by known influence.
3. New predictors are entered last, so that their unique contribution can be determined.

Forced Entry: All predictors forced into model simultaneously. No starting hypothesis re. relative importance of predictors.

Stepwise: Program automatically searches for the strongest predictor, then the second strongest, etc. Predictor 1 is best at explaining the outcome, accounting for, say, 40%; Predictor 2 is best at explaining the remaining 60%, etc. A controversial method.

In general, Hierarchical is most common and most accepted.

Avoid the "kitchen sink": limit the number of predictors to as few as possible, and to those that make theoretical sense.

Page 39: Regression

Sample Size in Regression

Simple rule: The more the better!

Field's Rule of Thumb: 15 cases per predictor.

Green’s Rule of Thumb:

Overall Model: 50 + 8k (k = #predictors)

Specific IV: 104 + k

Unsure which? Use the one requiring larger n
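For example, with a hypothetical k = 3 predictors, Green's overall-model rule calls for 50 + 8(3) = 74 cases, while the specific-IV rule calls for 104 + 3 = 107; a researcher unsure which applies would plan on n = 107.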

Page 40: Regression

Multiple Regression in SPSS

"OUTS" refers to variables excluded in, e.g., Model 1.
"NOORIGIN" means "do show the constant in the outcome report."
"CRITERIA" relates to Stepwise Regression only; refers to which IVs are kept in at Step 1, Step 2, etc.

REGRESSION
  /DESCRIPTIVES MEAN STDDEV CORR SIG N
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA CHANGE
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT aggression
  /METHOD=ENTER family stress
  /METHOD=ENTER reprimands.

Page 41: Regression

SPSS Regression Output: Descriptives

Page 42: Regression

SPSS Regression Output: Model Effects

R = Power of regression (same as correlation)

R2 = Amount of variance explained

Adj. R2 = Corrects for multiple predictors

R sq. change = Impact of each added model

Sig. F Change = Does the new model explain a significant amount of added variance?

Page 43: Regression

SPSS Regression Output: Predictor Effects

Page 44: Regression

Requirements and Assumptions (these apply to Simple and Multiple Regression)

Variable Types: Predictors must be quantitative or categorical (2 values only, i.e. dichotomous); Outcomes must be interval.

Non-Zero Variance: Predictors have variation in value.

No Perfect Multicollinearity: No perfect 1:1 (linear) relationship between 2 or more predictors (see the VIF sketch after this list).

Predictors uncorrelated to external variables: No hidden “third variable” confounds

Homoscedasticity: Variance at each level of predictor is constant.
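The multicollinearity check, at least, is easy to automate. A minimal sketch (Python; pandas and statsmodels assumed available, predictor names and data invented) that screens predictors using variance inflation factors (VIF):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    # Invented predictor data for illustration
    rng = np.random.default_rng(0)
    preds = pd.DataFrame({
        "reprimands": rng.normal(4, 2, 50),
        "family_stress": rng.normal(5, 2, 50),
    })

    Xmat = sm.add_constant(preds)  # include the intercept column
    for i, name in enumerate(Xmat.columns):
        if name != "const":
            print(name, variance_inflation_factor(Xmat.values, i))

A VIF near 1 suggests little multicollinearity; a perfect linear dependency between predictors sends the VIF toward infinity, which is exactly the "perfect multicollinearity" this assumption rules out.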

Page 45: Regression

Requirements and Assumptions (continued)

Independent Errors: Residuals for Sub. 1 do not determine residuals for Sub. 2.

Normally Distributed Errors: Residuals are random, and sum to zero (or close to zero).

Independence: All outcome values are independent from one another, i.e., each response comes from a subject who is uninfluenced by other subjects.

Linearity: The changes in outcome due to each predictor are described best by a straight line.

Page 46: Regression

Regression Assumes Errors are normally, independently, and identically Distributed at Every Level of the Predictor (X)

[Diagram: identical normal error distributions at predictor levels X1, X2, and X3]

Page 47: Regression

Homoscedasticity and Heteroscedasticity

Page 48: Regression

Assessing Homoscedasticity

Select: Plots
Enter: ZRESID for Y and ZPRED for X
Ideal outcome: Equal distribution across the chart
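The same ZRESID-by-ZPRED plot can be reproduced outside SPSS. A minimal sketch (Python; matplotlib assumed, with stand-in predicted values and residuals invented for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-ins for the predicted values and residuals of a fitted model
    rng = np.random.default_rng(0)
    Y_hat = rng.normal(5, 1, 100)
    resid = rng.normal(0, 1, 100)

    zpred = (Y_hat - Y_hat.mean()) / Y_hat.std()   # SPSS's ZPRED
    zresid = (resid - resid.mean()) / resid.std()  # SPSS's ZRESID

    plt.scatter(zpred, zresid)
    plt.axhline(0, linestyle="--")
    plt.xlabel("Standardized predicted value (ZPRED)")
    plt.ylabel("Standardized residual (ZRESID)")
    plt.title("Ideal: an even band, no funnel shape")
    plt.show()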

Page 49: Regression

Extreme Cases

[Scatterplot with several points deviating far from the regression line]

Cases that deviate greatly from the expected outcome (> ±2.5) can warp the regression.

First, identify outliers using Casewise Diagnostics option.

Then, correct outliers per outlier-correction options, which are:

1. Check for data entry error
2. Transform data
3. Recode as next highest/lowest value plus/minus 1
4. Delete
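Once standardized residuals are in hand, the identification step is short. A minimal sketch (Python/NumPy; the residuals are stand-ins, with one extreme case planted for illustration), using the ±2.5 criterion named above:

    import numpy as np

    rng = np.random.default_rng(0)
    zresid = rng.normal(0, 1, 100)  # stand-in standardized residuals
    zresid[7] = 3.1                 # planted extreme case

    cutoff = 2.5
    for case in np.flatnonzero(np.abs(zresid) > cutoff):
        print(f"Case {case}: standardized residual = {zresid[case]:.2f}")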

Page 50: Regression

Casewise Diagnostics Print-out in SPSS

[SPSS casewise diagnostics output, with one row flagged as a possible problem case]

Page 51: Regression

Casewise Diagnostics for Problem Cases Only

In "Statistics" Option, select Casewise Diagnostics

Select "outliers outside:" and type in how many Std. Dev. you regard as critical. Default = 3

[The output flags cases more than 3 SDs from the predicted value]

Page 52: Regression

What If Assumption(s) are Violated?

What is the problem with violating assumptions?

You can't generalize the obtained model from the test sample to the wider population.

Overall, not much can be done if assumptions are substantially violated (i.e., extreme heteroscedasticity, extreme auto-correlation, severe non-linearity).

Some options:

1. Heteroscedasticity: Transform raw data (square root, etc.)
2. Non-linearity: Attempt logistic regression

Page 53: Regression

A Word About Regression Assumptions and Diagnostics

Are these conditions complicated to understand? Yes

Are they laborious to check and correct? Yes

Do most researchers understand, monitor, and address these conditions? No

Even journal reviewers are often unschooled in diagnostics, or don't take the time to check them. Journal space discourages authors from discussing diagnostics. Some have called for more attention to this inattention, but without much action.

Should we do diagnostics? GIGO (garbage in, garbage out), and fundamental ethics.

Page 54: Regression

Reporting Hierarchical Multiple Regression

Table 1: Effects of Family Stress and Teacher Reprimands on Bullying

                B     SE B    β
Step 1
  Constant    -0.54   0.42
  Fam. Stress  0.74   0.11   .85 *
Step 2
  Constant     0.71   0.34
  Fam. Stress  0.57   0.10   .67 *
  Reprimands   0.33   0.10   .38 *

Note: R2 = .72 for Step 1, ΔR2 = .11 for Step 2 (p = .004); * p < .01
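For readers working outside SPSS, here is a minimal sketch of how such a two-step (hierarchical) model could be fit and the R2 change obtained (Python with statsmodels assumed; the data and variable names are invented and will not reproduce the values in Table 1):

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Invented data for illustration
    rng = np.random.default_rng(1)
    n = 40
    stress = rng.normal(5, 2, n)
    reprimands = rng.normal(4, 2, n)
    bullying = 0.6 * stress + 0.3 * reprimands + rng.normal(0, 1, n)
    df = pd.DataFrame({"stress": stress, "reprimands": reprimands,
                       "bullying": bullying})

    # Step 1: theory-favored predictor entered first
    m1 = sm.OLS(df["bullying"], sm.add_constant(df[["stress"]])).fit()
    # Step 2: new predictor entered last, so its unique contribution shows
    m2 = sm.OLS(df["bullying"],
                sm.add_constant(df[["stress", "reprimands"]])).fit()

    print(f"Step 1 R2 = {m1.rsquared:.2f}")
    print(f"Step 2 R2 = {m2.rsquared:.2f}, "
          f"delta R2 = {m2.rsquared - m1.rsquared:.2f}")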