
Page 1: Evaluating the NYC Core Knowledge Early Literacy Pilot: Year 1 Report

September 22, 2009

HIGHLIGHTS

Research and Policy Support Group

FOR PRESS OFFICE – SEPTEMBER 16, 2009

Page 2: Summary of Findings

By all measures, Core Knowledge Reading (CKR) students made significantly greater gains in early literacy than peer students.

Compared to peers, kindergarteners taught with the CKR program made more progress in all areas of reading tested: spelling, phonemic awareness, decoding, and comprehension.

Surveys and case studies indicate overall high levels of administrator and teacher satisfaction with the CKR Pilot, while also offering guidance for year 2 implementation and evaluation work.

Administrators would recommend the program to others; teachers rate CKR more favorably than other programs.

Administrators report change in teacher practice: more data-driven instruction & teacher collaboration.

Teachers and administrators feel more positively about the Skills Strand than the Listening and Learning Strand, particularly regarding student engagement.

Teachers may need more support addressing needs of struggling readers with CKR & managing time to complete lessons.

Page 3: Methodology: A multi-method, longitudinal research design

Focus of the Evaluation

Hypothesis: Kindergarteners taught with the Core Knowledge Reading (CKR) Program will gain reading competencies at a faster rate than their peers.

YEAR 1:

• Literacy Assessments (at 10 CKR schools & 10 comparison schools): pre- and post-test of literacy skills; additional tests of literacy skills at the end of each year; periodic assessments throughout the year (DIBELS)

• Teacher and Administrator Surveys (at 10 CKR schools): assess satisfaction with and impact of CKR

• Case studies (at 3 CKR schools): classroom observations, administrator & teacher interviews
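The slides report statistically significant differences in gains but do not describe the underlying statistical model. As a rough illustration only (not the evaluation's actual analysis), the hypothesis can be read as a comparison of average fall-to-spring gain scores between the two groups of students; the sketch below uses invented placeholder data and a Welch two-sample t-test.

# Illustrative sketch only, not the evaluation's actual analysis code:
# compare average fall-to-spring gains for CKR vs. comparison students
# using invented placeholder scores and a Welch two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical Woodcock-Johnson-style scale scores, invented for illustration.
ckr_fall = rng.normal(100, 10, size=584)
ckr_spring = ckr_fall + rng.normal(12, 8, size=584)   # assumed ~12-point mean gain
comp_fall = rng.normal(100, 10, size=307)
comp_spring = comp_fall + rng.normal(2, 8, size=307)  # assumed ~2-point mean gain

ckr_gain = ckr_spring - ckr_fall
comp_gain = comp_spring - comp_fall

t_stat, p_value = stats.ttest_ind(ckr_gain, comp_gain, equal_var=False)
print(f"Mean gain, CKR: {ckr_gain.mean():.1f}; comparison: {comp_gain.mean():.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2g}")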

Page 4: Similar Demographics at CKR and Comparison Schools

[Bar chart: percent of students (school size shown in hundreds of students) at CKR schools (N = 584) and comparison schools (N = 307)* for ELL, special education, free/reduced lunch, Black/Hispanic, ELA Level 3/4, and school size.]

Note: These and other data were used to select comparison schools (data as of the 2007-08 school year).

* N = the number of students for whom both fall and spring data were available.

Page 5: Evaluation of Literacy Gains

Page 6: Different Literacy Domains: Greater Gains & Higher Spring Scores for CKR Students than Comparison Students in All Literacy Domains

CKR schools compared with comparison schools:

• Basic reading skills (W-J Letter Word Identification; W-J Brief Reading): CKR significantly greater gains and spring scores

• Oral reading comprehension (W-J Passage Comprehension): CKR significantly greater gains and spring scores

• Decoding (W-J Word Attack): CKR significantly higher spring scores

• Written spelling (W-J Spelling of Sounds): CKR significantly higher spring scores

• Oral reading comprehension, vocabulary, basic reading, decoding (Terra Nova Reading): CKR significantly higher spring scores

Page 7: 6X Greater Literacy Gains for CKR Students than Students at Demographically Similar Comparison Schools

Average fall-spring gain in scale score points, Woodcock-Johnson Brief Reading Test (CKR students significantly higher, p < .001):

• CKR schools: 11.9

• Comparison schools: 2.1
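The "6X" headline appears to reflect the ratio of these average gains: 11.9 / 2.1 ≈ 5.7, or roughly six times the comparison-school gain.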

Page 8: Significantly Higher End of Year Performance on Decoding and Spelling

Average spring scores in Woodcock-Johnson scale score points (CKR students significantly higher, p < .0001):

• Word Attack subtest: CKR schools 122.3, comparison schools 115.4

• Spelling of Sounds subtest: CKR schools 116.5, comparison schools 108.6

Page 9: At All Achievement Levels, Greater Literacy Gains for CKR Students than Students at Comparison Schools

Average fall-spring gain in scale score points, Woodcock-Johnson Brief Reading Test, by quartile of students' fall scores:

• Lowest 1/4 of fall scores: CKR schools 18.5, comparison schools 7.6

• 2nd 1/4: CKR schools 8.4, comparison schools 0.3

• 3rd 1/4: CKR schools 7.7, comparison schools 2.8

• Highest 1/4 of fall scores: CKR schools 11.7, comparison schools -1.2
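As a rough sketch of how this kind of quartile breakdown could be produced (again with invented placeholder data, not the study's own code or results), students can be binned by fall score and mean gains compared by group within each bin:

# Illustrative sketch only: bin students into quartiles of fall scores,
# then compare mean fall-to-spring gains by group within each quartile.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_ckr, n_comp = 584, 307

df = pd.DataFrame({
    "group": ["CKR"] * n_ckr + ["Comparison"] * n_comp,
    "fall": rng.normal(100, 10, size=n_ckr + n_comp),
})
# Invented gains that roughly echo the slide's pattern (larger gains for CKR).
df["gain"] = np.where(df["group"] == "CKR",
                      rng.normal(12, 8, size=len(df)),
                      rng.normal(2, 8, size=len(df)))

# Quartiles of fall scores, pooled across both groups (an assumption;
# the report does not say exactly how quartiles were defined).
df["fall_quartile"] = pd.qcut(df["fall"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

print(df.groupby(["fall_quartile", "group"], observed=True)["gain"].mean().round(1))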

Page 10: Significantly Higher Scores on End of Year Terra Nova Reading Test

Average spring scale score, Terra Nova Overall Reading Battery (CKR students significantly higher, p < .0001):

• CKR schools: 506.1

• Comparison schools: 495.7

Page 11: Spring Surveys & Case Studies

Page 12: Administrators Report Satisfaction with Program

• Administrators' overall satisfaction with CK Reading (n = 10): all ten administrators were very satisfied or somewhat satisfied (7 and 3).

• Will your Kindergarten classrooms be using the CK Reading program next year? (n = 9*): Yes 8, No 1

• Do you plan to purchase the CK Reading program when it becomes commercially available? (n = 9*): Yes 4, Not Sure 5

• Would you recommend the CK Reading curriculum to other administrators you know? (n = 9*): Yes 8, Not Sure 1

* One administrator did not respond to most of the survey questions.

Page 13: Teachers Report Satisfaction with Curriculum

• Teachers' overall satisfaction with CK Reading (n = 30) a: 86.7% very satisfied or somewhat satisfied (46.7% + 40.0% of respondents)

• Teachers' overall opinion of CK Reading compared with other kindergarten reading programs (n = 30) b: 66.7% much better or somewhat better (40.0% + 26.7% of respondents)

Teachers' views:

"The Skills Strand is really very good for the students. Their reading levels are higher this year than last year."

"At first, I felt that many teachers did not know if they agreed with teaching sounds before letter names. But by January, when teachers started to see their children reading, they became believers."

"The Skills Strand has exceeded my expectations. I think it is the best reading program I have ever used. We are thrilled with the results. I hope it is introduced into more schools. We plan to change the sequence of the Listening Strand."

"After seeing how well Core Knowledge Skills worked for teaching my children to read, I would have a hard time teaching any other way."

Number of teachers selecting the "neutral" response: question a, n = 3 (10%); question b, n = 4 (13.3%).

Page 14: Administrators Perceive Change in Teacher Methods

• Using assessment data to drive instruction (n = 9) a: 88.9% of administrators report teachers doing this much more or somewhat more than last year (77.8% + 11.1% of respondents)

• Discussing/sharing ideas on teaching strategies with other Kindergarten teachers (n = 9) b: 66.7% report much more or somewhat more than last year (33.3% + 33.3%)

Administrators' views:

"This year with Core Knowledge Reading, all of the teachers are communicating more, they discuss the pacing and delivery strategies."

"The CK pilot has honed the professional conversation."

"There was resistance and suspicion on the teachers' part in the beginning but they are ecstatic over the results—the children are reading!"

Number of administrators selecting the "about the same as last year" response: question a, n = 0; question b, n = 2 (22.2%).

Page 15: Evaluating the NYC Core Knowledge Early Literacy Pilot: Year 1 Report September 22, 2009

33.3 34.5

30

13.3 10.3

50

1515

53.3

30

40

20

13.3

56.7

Teachers Have Differing Views on Strands: Teachers Feel More Positively About Skills than Listening & Learning Strand

Goals of lessons are

clear(n = 30) a

I have enough time to

complete daily lesson(n = 30) c

Students find activities engaging

(n = 30) b

43.3%

96.7%

73.3%

Somewhat Agree

Strongly Agree

Goals of lessons are

clear(n = 30) d

I have enough time to

complete daily lesson(n = 29) f

Students find activities engaging

(n = 30) e

46.7%

80.0%

44.8%

Skills Strand Listening and Learning Strand

Percent of Respondents

Number of teachers selecting the “neutral” response: question a n = 0; question b n = 3 (10%); question c n = 7 (23.3%); question d n = 2 (6.7%); question e n = 6 (20%); question f n = 5 (17.2%).

Page 16: Teachers Compare CKR with Other Programs

Percent of respondents rating CK Reading much better or somewhat better than other kindergarten reading programs:

• Teaching decoding skills (n = 29) a: 72.4% (55.2% + 17.2%)

• Sequence of instruction (n = 28) b: 71.4% (47.0% + 25.0%)

• Teaching content/background knowledge (n = 29) c: 69.0% (48.3% + 20.7%)

• Comprehensiveness of program (n = 27) d: 62.9% (44.4% + 18.5%)

• Ability to engage students and spark enthusiasm for reading (n = 29) e: 55.2% (34.5% + 20.7%)

• Accommodations for different learning needs (n = 29) f: 41.3% (31.0% + 10.3%)

Number of teachers selecting the "about the same" response: question a, n = 3 (10.3%); question b, n = 2 (7.1%); question c, n = 2 (6.9%); question d, n = 7 (25.9%); question e, n = 6 (20.7%); question f, n = 4 (13.8%).