
Tests of Significance for Regression & Correlation


Page 1: Tests of Significance for Regression & Correlation


b* will be used for the population parameter of the slope rather than β, because beta has another meaning with respect to regression coefficients (the standardized coefficient).

b is normally distributed about b* with a standard error of

$$s_b = \frac{s_{y \cdot x}}{s_x \sqrt{N-1}}$$

To test the null hypothesis that b* = 0, then

$$t = \frac{b - b^*}{s_b} = \frac{b}{\dfrac{s_{y \cdot x}}{s_x\sqrt{N-1}}} = \frac{b\, s_x \sqrt{N-1}}{s_{y \cdot x}}$$

When there is a single predictor variable, testing b is the same as testing that r is not equal to zero.

Distributed as N-2 df.

$$t = \frac{r\sqrt{N-2}}{\sqrt{1-r^2}}$$

Again, distributed with N - 2 df.

Page 2: Tests of Significance for Regression & Correlation

The difference between two independent slopes (like a t-test for two means)

If the null hypothesis is true (b1* = b2*), then the sampling distribution of b1-b2 is normal with a mean of 0 and a standard error of…

$$s_{b_1 - b_2} = \sqrt{s_{b_1}^2 + s_{b_2}^2}$$

Thus,

$$t = \frac{b_1 - b_2}{\sqrt{s_{b_1}^2 + s_{b_2}^2}}$$

and is distributed with N1 + N2 - 4 df.

Because we know…

$$s_b = \frac{s_{y \cdot x}}{s_x \sqrt{N-1}}$$

Therefore…

$$s_{b_1 - b_2} = \sqrt{\frac{s_{y \cdot x_1}^2}{s_{x_1}^2 (N_1 - 1)} + \frac{s_{y \cdot x_2}^2}{s_{x_2}^2 (N_2 - 1)}}$$
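A short sketch of this test under assumed summary statistics (the slopes, error SDs, predictor SDs, and sample sizes below are hypothetical):

```python
# Minimal sketch (hypothetical summaries): t for the difference between two
# independent slopes using the separate-variance standard error above.
import numpy as np

# Assumed group summaries: slope, error SD (s_y.x), predictor SD (s_x), sample size.
b1, s_yx1, s_x1, N1 = 1.20, 2.5, 3.0, 30
b2, s_yx2, s_x2, N2 = 0.80, 2.9, 2.7, 25

s_b1_sq = s_yx1**2 / (s_x1**2 * (N1 - 1))   # squared SE of b1
s_b2_sq = s_yx2**2 / (s_x2**2 * (N2 - 1))   # squared SE of b2

se_diff = np.sqrt(s_b1_sq + s_b2_sq)        # SE of b1 - b2
t = (b1 - b2) / se_diff
print(f"t = {t:.3f}, df = {N1 + N2 - 4}")
```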

Page 3: Tests of Significance for Regression & Correlation

Transformed, thus:

$$t = \frac{b_1 - b_2}{s_{b_1 - b_2}}$$

(If we assume homogeneity of error variance, then we can pool the two estimates.)

$$s_{y \cdot x}^2 = \frac{(N_1 - 2)\, s_{y \cdot x_1}^2 + (N_2 - 2)\, s_{y \cdot x_2}^2}{N_1 + N_2 - 4}$$

This can be substituted for the individual error variances in the above formula.

Distributed with N1 + N2 - 4 df.
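A sketch of the pooled version, reusing the same hypothetical summaries as above and assuming homogeneity of error variance:

```python
# Minimal sketch (same hypothetical summaries): pooling the two error variances
# before testing the slope difference.
import numpy as np

b1, s_yx1, s_x1, N1 = 1.20, 2.5, 3.0, 30
b2, s_yx2, s_x2, N2 = 0.80, 2.9, 2.7, 25

# Pooled residual variance, weighted by each regression's df (N - 2).
s_yx_pooled_sq = ((N1 - 2) * s_yx1**2 + (N2 - 2) * s_yx2**2) / (N1 + N2 - 4)

# Substitute the pooled variance for both individual error variances.
se_diff = np.sqrt(s_yx_pooled_sq / (s_x1**2 * (N1 - 1)) +
                  s_yx_pooled_sq / (s_x2**2 * (N2 - 1)))

t = (b1 - b2) / se_diff
print(f"t = {t:.3f}, df = {N1 + N2 - 4}")
```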

Page 4: Tests of Significance for Regression & Correlation

The difference between independent correlations

When ρ is not equal to zero, the sampling distribution of r is NOT normal; it becomes more skewed as ρ approaches 1.0, and the random error is not easily estimated.

The same is the case for $r_1 - r_2$.

Fisher’s solution is that we transform r into r':

$$r' = (0.5)\,\log_e\!\left(\frac{1 + r}{1 - r}\right)$$

Then r’ is approximately normally distributed and the standard error is…

$$s_{r'} = \frac{1}{\sqrt{N - 3}}$$

Sometimes called the z transformation.

$$z = \frac{r_1' - r_2'}{\sqrt{\dfrac{1}{N_1 - 3} + \dfrac{1}{N_2 - 3}}}$$

As a z score, the critical value is 1.96 (two-tailed, α = .05).
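A sketch of the r-to-r' transformation and the z test, with assumed correlations and sample sizes (the r and N values below are illustrative):

```python
# Minimal sketch (assumed values): Fisher's r-to-r' transform and the z test
# for two independent correlations.
import numpy as np

r1, N1 = 0.55, 50
r2, N2 = 0.30, 60

def fisher_rprime(r):
    # r' = 0.5 * log_e((1 + r) / (1 - r))
    return 0.5 * np.log((1 + r) / (1 - r))

z = (fisher_rprime(r1) - fisher_rprime(r2)) / np.sqrt(1 / (N1 - 3) + 1 / (N2 - 3))
print(f"z = {z:.3f}")   # compare against the critical value 1.96 (two-tailed)
```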

Page 5: Tests of Significance for Regression & Correlation

Test for difference between two related correlation coefficients

$$t = \frac{(r_{12} - r_{13})\sqrt{(N - 3)(1 + r_{23})}}{\sqrt{2\,(1 - r_{12}^2 - r_{13}^2 - r_{23}^2 + 2\, r_{12} r_{13} r_{23})}}$$

Note that to apply this test the correlation $r_{23}$ is required.

Distributed with N-3 df.

Because the two correlations are not independent, we must take this into account (remember the issue with ANOVA). In this case, specifically, we must incorporate a term that reflects the degree to which the two tests themselves are related.
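A sketch of this test with assumed values for r12, r13, r23, and N (all hypothetical):

```python
# Minimal sketch (assumed correlations): t test for two related correlations,
# r12 vs. r13, which share variable 1 and therefore require r23.
import numpy as np

r12, r13, r23 = 0.50, 0.35, 0.60
N = 100

num = (r12 - r13) * np.sqrt((N - 3) * (1 + r23))
den = np.sqrt(2 * (1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23))
t = num / den
print(f"t = {t:.3f}, df = {N - 3}")
```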