Teacher Evaluation and Performance Measurement
Doug Staiger, Dartmouth College

Slide 2: Not this.
Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The Widget Effect: Our National Failure to Acknowledge and Act on Differences in Teacher Effectiveness. New York: The New Teacher Project. [Chart: teachers rated Satisfactory (or equivalent) vs. Unsatisfactory (or equivalent).]

Slide 3: Not this. [image]

Slide 4: Transformative Feedback

Slide 5: Recent Work on Teacher Evaluation
- Efforts to identify effective teaching using achievement gains: work with Tom Kane and others in LAUSD, NYC, and Charlotte (www.dartmouth.edu/~dstaiger).
- Efforts to better identify effective teaching: the Measures of Effective Teaching (MET) Project (Bill & Melinda Gates Foundation, www.metproject.org) and the National Center for Teacher Effectiveness (NCTE) (US Department of Education, www.gse.harvard.edu/ncte).

Slide 6: The Measures of Effective Teaching Project
- Two school years: 2009-10 and 2010-11.
- Grades 4-8: ELA and Math.
- High school: ELA I, Algebra I, and Biology.
- [Map: participating teachers.]

Slide 7: The MET data are unique in three respects.
- In the variety of indicators tested: 5 instruments for classroom observations (FFT is used here), student surveys (the Tripod Survey), and value-added on state tests.
- In scale: 3,000 teachers; 22,500 observation scores (7,500 lesson videos x 3 scores); 900+ trained observers; 44,500 students completing surveys and supplemental assessments in year 1; 3,120 additional observations by principals/peer observers in Hillsborough County, FL.
- In the variety of student outcomes studied: gains on state math and ELA tests; gains on supplemental tests (BAM & SAT9 OE); student-reported outcomes (effort and enjoyment in class, grit).

Slide 8: What is Effective Teaching?
- It can be an inputs-based concept: observable actions or characteristics.
- It can be an outcomes-based concept: measured by student success.
- Ultimately, we care about impact on student outcomes. The current focus is on standardized exams, with interest in other outcomes (college, non-cognitive).

Slide 9: Multiple Measures of Teaching Effectiveness

Slide 10: Measure #1: Student Achievement Gains (Value Added)

Slide 11: Basics of Value Added Analysis
- Teacher value added compares actual student achievement at the end of the year to an expectation for each student. It is the difference between actual and expected achievement, averaged over all of a teacher's students.
- Expected achievement is the typical achievement of other students who looked similar at the start of the year: same prior-year test scores, same demographics and program participation, same characteristics of peers in the classroom or school.
- There are various flavors, and all work similarly: student growth percentiles; average change in score or percentile; expectations based on the prior-year test or a fall pre-test. (A sketch of the basic computation appears below.)
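As a rough illustration of the computation described in Slide 11, the following minimal sketch regresses end-of-year scores on a prior-year score and one demographic indicator to form each student's expected score, then averages the actual-minus-expected residuals by teacher. The column names and the simple OLS specification are illustrative assumptions, not the model used in MET or LAUSD.

```python
# Minimal value-added sketch: expected score from prior achievement and
# demographics; teacher effect = mean (actual - expected) over students.
# Column names ("score", "prior_score", "frl", "teacher") are hypothetical.
import numpy as np
import pandas as pd

def value_added(df: pd.DataFrame) -> pd.Series:
    X = np.column_stack([
        np.ones(len(df)),               # intercept
        df["prior_score"].to_numpy(),   # same prior-year test score
        df["frl"].to_numpy(),           # e.g., a 0/1 demographic indicator
    ])
    y = df["score"].to_numpy()
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS expectation
    residual = y - X @ beta                       # actual minus expected
    return pd.Series(residual, index=df.index).groupby(df["teacher"]).mean()
```

In practice, most value-added systems also shrink these classroom averages toward the overall mean (empirical Bayes), so that teachers with few students are not ranked on noise.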
Slide 12: There are Large Differences in Teacher Effects on Student Achievement Gains
- Most evidence comes from value-added analysis, but randomized experiments yield similar findings.
- A huge literature on teacher effects on achievement finds large, persistent variation across teachers that is difficult to predict at hire and only partially predictable after hire. Teachers improve only in their first few years of teaching, and effectiveness is not related to most determinants of pay (certification, degrees, experience beyond the first few years).

Slide 13: Large Variation in Value Added of LAUSD Teachers is Not Related to Teacher Certification [chart]

Slide 14: Variation in Value Added of LAUSD Teachers is Related to Prior Performance [chart]

Slide 15: Why Not Just Hire Good Teachers?
- "Wise selection is the best means of improving the school system, and the greatest lack of economy exists wherever teachers have been poorly chosen." (Frank Pierrepont Graves, NYS Commissioner, 1932)
- Unfortunately, this is easier said than done. Decades of work on type of certification, graduate education, exam scores, GPA, college selectivity, and TFA have found only (very) small, positive effects on student outcomes.

Slide 16: Large Variation in Value Added of NYC Teachers is Not Related to Recruitment Channel [chart]

Slide 17: Of Course, Teacher Impact on State Test Scores is Not All We Care About
- Impact depends on the design and content of the test; test scores are proximate measures. But recent evidence suggests they capture long-run impact on student learning and other outcomes.
- Test scores are also only one dimension of performance; non-cognitive skills (grit, dependability, ...) matter too.

Slide 18: Value Added is Controversial
- "We need to find a way to measure classroom success and teacher effectiveness. Pretending that student outcomes are not part of the equation is like pretending that professional basketball has nothing to do with the score." (Arne Duncan, 2009)
- "There is no way that any of this current data could actually, fairly, honestly or with any integrity be used to isolate the contributions of an individual teacher." (Randi Weingarten, 2008)

Slide 19: What we learned from MET: Value-added measures
- Value-added identified teachers who caused students to learn more on state tests following random assignment.
- The same teachers also caused students to learn more on supplemental assessments and to enjoy class more.
- Low year-to-year correlations in value-added (and other performance measures) understate year-to-career correlations.

Slides 20-22: [charts]

Slide 23: Measure #2: Classroom Observations

Slide 24: Classroom Observation Using Digital Video

Slide 25: Helping Districts Test Their Own New Classroom Observations
- Access to the validation engine: the SEA/LEA chooses a rubric and trains raters; the MET Project delivers sample videos; the SEA/LEA's ratings are then used to predict value added and gauge reliability.

Slide 26: Two Cross-Subject Observation Instruments

Instrument | Developer | Origin | Instructional Focus | Structure | Scoring
Framework for Teaching (FFT) | Charlotte Danielson | Outgrowth of ETS's PRAXIS III licensing exam | Constructivism; intellectual engagement | 4 domains; 22 components (MET uses 8 components*) | 4 points
Classroom Assessment Scoring System (CLASS) | Robert Pianta, Univ. of Virginia | Tool for research on early childhood development | Teacher-student interactions | 3 domains; 12 dimensions | 7 points

*Not flexibility & responsiveness or organization of physical space.

Slide 27: FFT competencies scored
- Classroom environment: creating an environment of respect and rapport; establishing a culture of learning; managing classroom procedures; managing student behavior.
- Instruction: communicating with students; using questioning and discussion techniques; engaging students in learning; using assessments in instruction.

Slide 28: Math Observation Instruments

Instrument | Developer | Origin | Instructional Focus | Structure | Scoring
Mathematical Quality of Instruction (MQI) | Heather Hill, Harvard | Outgrowth of a written test of math teaching knowledge | Math errors and imprecision | 6 overall elements of instruction | 3 points
UTEACH Observation Protocol (UTOP) | Michael Marder, Univ. of Texas-Austin | Teacher prep program for math & science majors | Values different modes, from direct instruction to inquiry-based | 4 sections; 22 subsections | 5 points

Slide 29: ELA Observation Instrument

Instrument | Developer | Origin | Instructional Focus | Structure | Scoring
Protocol for Language Arts Teaching Observations (PLATO) | Pam Grossman, Stanford | Research on effective middle-grade ELA instruction | Modeling, explicit teaching of strategies, guided practice | 13 elements (6 included in the MET study) | 4 points

Slide 30: What we learned from MET: Classroom observations
- Observation scores were correlated with a teacher's value-added (.15-.27).
- Different instruments were highly correlated with each other (although subject-specific instruments were distinct from the general-pedagogical instruments).
- Reliability requires certified observers and more than one observer per teacher (because rater judgments differ).
- Principals rate their own teachers higher than other observers do, but their rankings are similar.
- When teachers select their own videos, scores are higher, but rankings remain the same.

Slide 31: Four Steps to High-Quality Classroom Observations

Slide 32: Step 1: Define Expectations. Framework for Teaching (Danielson). [Chart: actual scores for 7,500 lessons.]

Slide 33: Step 2: Ensure Accuracy of Observers

Slide 34: Step 3: Monitor Reliability

Slide 35: Use more than one observer. [Chart: scoring one more lesson raises reliability by +.07; adding one more observer raises it by +.16.] (A sketch of the underlying averaging logic follows.)
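The gains from scoring more lessons and adding observers follow the standard logic of averaging noisy ratings: the more independent scores are averaged, the more rater and lesson noise cancels. Below is a minimal sketch of that logic using the Spearman-Brown projection; the single-observation reliability of 0.35 is an illustrative assumption, not a MET estimate.

```python
# Spearman-Brown projection: reliability of the average of k parallel
# measurements, given the reliability r1 of a single measurement.
def spearman_brown(r1: float, k: int) -> float:
    return k * r1 / (1 + (k - 1) * r1)

r1 = 0.35  # assumed reliability of one observer scoring one lesson
for k in (1, 2, 3, 4):
    print(f"average of {k} scores: reliability {spearman_brown(r1, k):.2f}")
```

In the MET data, a second observer helped more than a second lesson (+.16 vs. +.07), consistent with rater-specific judgment being a large part of the noise: a new observer averages it out, while the same observer scoring another lesson does not.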
Slide 36: Step 4: Verify Alignment with Outcomes. Teachers with higher observation scores had students who learned more. [chart]

Slide 37: Measure #3: What do students say?

Slides 38-42: Students Distinguish Between Teachers. [Charts: percent of students in each classroom agreeing with Tripod survey items.]

Slide 43: What we learned from MET: Student surveys
- Surveys are a low-cost way to cover untested grades and subjects.
- Student surveys are related to teacher value-added (.15-.25).
- Student surveys are the most reliable measures we tested.

Slide 44: Multiple Measures. The dynamic trio: classroom observations, student feedback, and student achievement gains.

Slide 45: Three Criteria
- Predictive power: which measure most accurately identifies teachers likely to have large gains when working with another group of students?
- Reliability: which measures are most stable from section to section or year to year for a given teacher?
- Potential for diagnostic insight: which have the potential to help a teacher see areas of practice needing improvement? (We've not tested this yet.)

Slide 46: The measures have different strengths and weaknesses. [table]

Slide 47: The Reliability and Predictive Power of Measures of Teaching: Combining Measures Improved Reliability as well as Predictive Power
[Chart: difference in math value-added, top 25% vs. bottom 25% (.05 to .25), against reliability (0 to .7), for observation alone (FFT), student survey alone, VA alone, a combination with equal weights, and a combination with criterion weights.]
Note: For the equally weighted combination, we assigned a weight of .33 to each of the three measures. The criterion weights were chosen to maximize the ability to predict a teacher's value-added with other students; the next MET report will explore different weighting schemes. Reliability is based on one course section and two observations (Table 16 of the research report). (A sketch of the composite appears below.)
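A composite like those in Slide 47 is just a weighted sum of the three measures after each is standardized. The sketch below shows the equal-weight case; the input arrays are made up, and criterion weights would instead be regression coefficients chosen to best predict value-added measured with a teacher's other students.

```python
# Minimal composite-measure sketch: standardize each measure, then form a
# weighted sum. The input values are made up for illustration.
import numpy as np

def composite(observation, survey, value_added, weights=(1/3, 1/3, 1/3)):
    z = lambda x: (x - x.mean()) / x.std()  # put measures on a common scale
    measures = (observation, survey, value_added)
    return sum(w * z(np.asarray(m, dtype=float)) for w, m in zip(weights, measures))

obs = np.array([2.8, 3.1, 2.5, 3.4])      # FFT observation scores
tripod = np.array([3.9, 4.2, 3.5, 4.4])   # student survey scores
va = np.array([0.05, 0.10, -0.08, 0.12])  # value-added estimates
print(composite(obs, tripod, va))         # equal-weight composite per teacher
```

Standardizing first matters: the three measures sit on very different scales, so a raw weighted sum would be dominated by whichever measure happens to have the largest variance.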
Slide 48: What we learned from MET: Combining measures
- The teachers identified as more effective caused students to learn more following random assignment.
- Combining value-added with student surveys and classroom observations produces two benefits: increased reliability, and increased correlation with other outcomes such as value-added on supplemental assessments and happiness in class.
- Weighting value-added below .33, though, lowered correlation with other outcomes and lowered reliability.

Slide 49: Can the measures be used for high stakes?
High-stakes decisions are being made now, with little or no data. No information is perfect, but better information should lead to better decisions and fewer mistakes.
- Scenario 1: Teacher. You have been teaching biology for 10 years and want to improve your practice. What weaknesses should you focus on, and how will you know if you're making progress?
- Scenario 2: Principal. A probationary teacher in your school is approaching the end of their 2nd year. If you retain the teacher, he or she automatically earns tenure under the collective bargaining agreement. Should you grant tenure (or recruit a new novice teacher)?
- Scenario 3: Superintendent. Your district is considering offering coaching opportunities and higher pay to a subset of your teachers. Should you (i) allocate those slots on the basis of seniority, or (ii) ensure that only excellent instructors are coaches? How would you measure effectiveness fairly?

Slide 50: No information is perfect, but better information means better decisions. How does the combined measure compare to existing measures: master's degrees, years of experience, classroom observations alone?

Slide 51: Compared to What? Compared to MA degrees and years of experience, the combined measure identifies larger differences on state tests. [chart]

Slide 52: Compared to What? ... and on low-stakes assessments. [chart]

Slide 53: Compared to What? ... as well as on student-reported outcomes. [chart]

Slide 54: The Value of Going Beyond Classroom Observation. [Chart comparing observations alone; observations + student perceptions; and observations + student perceptions + VA on state tests.]

Slide 55: Compared to Classroom Observations Alone, the Combined Measure Identifies Larger Differences (Math Value Added). [Chart: average math value-added in a teacher's other class (-.2 to .3) by percentile rank on FFT (0 to 100), ranking teachers three ways: FFT only; FFT and Tripod; FFT, Tripod, and value-added.]

Slide 56: Improving Teaching: What are Districts Doing?

Slide 57: Robust evaluation systems themselves improve teaching outcomes. Source: Eric S. Taylor and John H. Tyler, "Can Teacher Evaluation Improve Teaching?" Education Next, Fall 2012.

Slide 58: Teacher Effectiveness Continues to Improve in Better Environments. Source: Matthew A. Kraft and John P. Papay, "Can Professional Environments in Schools Promote Teacher Development? Explaining Heterogeneity in Returns to Teaching Experience," January 2013 (on the NCTE website).

Slide 59: The Best Foot Forward Project
1. Teachers record their own lessons: one lesson every 2 weeks, submitting 5 lessons over the course of the year, viewed by principals and content experts.
2. Observers view and discuss the videos with teachers: observers are trained to use video for feedback and to identify discrete, coachable changes.
3. Teachers can share videos with each other.
4. Students provide anonymous feedback.

Slide 60: Next Up: Dashboard for Tracking Teacher Evaluations and Benchmarking Performance
1. Distribution of observation scores: what are the differences in scores, and are the differences between schools, districts, grades, and subjects larger than might have occurred by chance? (A sketch for this question follows the list.)
2. Observations and value-added: what are the relationships among the different measures? Do they differ by district, school, grade level, or subject? Are they weaker or stronger than we observed in MET?
3. Reliability: how does each measure vary from school to school and year to year?
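Question 1 on the dashboard is essentially an analysis-of-variance question: is the variation in observation scores between schools (or districts, grades, subjects) large relative to the variation within them? Here is a minimal sketch using a one-way ANOVA; the score data are made up.

```python
# Minimal sketch for dashboard question 1: are between-school differences in
# observation scores larger than chance? One-way ANOVA; the data are made up.
from scipy.stats import f_oneway

school_a = [2.8, 3.1, 2.9, 3.3]
school_b = [2.4, 2.6, 2.7, 2.5]
school_c = [3.0, 3.2, 2.9, 3.1]

stat, p = f_oneway(school_a, school_b, school_c)
print(f"F = {stat:.2f}, p = {p:.3f}")  # a small p suggests real differences
```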
Slide 61: Useful Resources
Available at http://www.metproject.org/resources.php:
- Student surveys: the Tripod survey and the "Asking Students about Teaching" practitioner brief.
- Roster validation: a report by Battelle for Kids on ways to allow teachers to verify the students in their class, "Identifying The Importance of Accurately Linking Instruction to Students to Determine Teacher Effectiveness."
- Software for certifying observers using pre-scored videos: the certification engine from Empirical Education.
Available at http://www.gse.harvard.edu/ncte/resources/default.php:
- Classroom observation: links to FFT, CLASS, etc., and webinars with six organizations currently supporting classroom observations.
Additional sites with useful resources:
- TNTP: http://tntp.org/ideas-and-innovations
- Pearson: http://educatoreffectiveness.pearsonassessments.com/