Chapter 12: Evaluating Instruction
Radford University, EDUC 615, Group 7, Spring 2011


Page 1: Chapter 12 Evaluating Instruction

Chapter 12: Evaluating Instruction

Radford University, EDUC 615

Group 7, Spring 2011

Page 2: Chapter 12 Evaluating Instruction

ASSESSING INSTRUCTION

Evaluation of instruction, evaluation of teacher performance, and evaluation of the curriculum are all interrelated. Instruction is evaluated through student achievement, so evaluation of instruction is evaluation of the effectiveness of the instructor.

Examples of instruction evaluation questions:
1. Do test items relate to the objectives?
2. Are the instructional objectives clear?
3. Does the teacher present the material clearly?

Page 3: Chapter 12 Evaluating Instruction

ASSESSING INSTRUCTION

Evaluation of instruction is also evaluation of the curriculum:
• Curriculum evaluation reveals how well students achieve in the areas that are assessed.
• Curriculum evaluation indicates whether the content has been adequately covered.

Page 4: Chapter 12 Evaluating Instruction

ASSESSING INSTRUCTION

Several curriculum dimensions or concerns must be evaluated in addition to student achievement. Sample questions:
1. Was the subject matter the right choice to begin with?
2. Was the content relevant?
3. Does it meet student and social needs?
4. Are the profession and the public satisfied with it?
5. Was the content selected wisely?

Page 5: Chapter 12 Evaluating Instruction

AN ERA OF ASSESSMENT

Evaluation, assessment, measurement, testing, and accountability are all words frequently used in both public and professional circles; evaluation and assessment are often used interchangeably. Measurement and testing are both ways of gathering assessment data:
• Measurement - the means of determining the degree of achievement of a particular competency.
• Testing - the use of instruments for measuring achievement.
• Accountability - the state of being liable for students' academic progress.

Page 6: Chapter 12 Evaluating Instruction

AN ERA OF ASSESSMENT

The United States is now in an era of assessment.
• Edward L. Thorndike conceptualized the first standardized tests; the GRE, SAT, and ACT are examples of well-known standardized tests in America.
• The No Child Left Behind Act of 2001 allowed individual states to set academic standards, enforced through "high-stakes testing": examinations whose results can carry negative consequences, such as grade retention and failure to graduate from high school.

Page 7: Chapter 12 Evaluating Instruction

AN ERA OF ASSESSMENT

Numerous educators dislike testing, both standardized and non-standardized; they feel these tests impose a predetermined curriculum and are destructive to students' self-concepts. Henry Giroux stated that "testing has become the new ideological weapon in developing standardized curricula, a weapon that ignores how schools can serve populations of students that differ vastly with respect to cultural diversity, academic and economic resources and classroom opportunities." The National Center for Fair and Open Testing advocated replacing standardized testing with multiple forms of "high quality classroom assessments that reflect the various ways children really learn."

Page 8: Chapter 12 Evaluating Instruction
Page 9: Chapter 12 Evaluating Instruction

Stages of Planning for Evaluation (continued)

Three Phases of Evaluation
• Preassessment - takes place before instruction; allows teachers to determine whether students have the prerequisite skills needed.
• Formative evaluation - takes place during instruction; consists of formal and informal techniques; enables teachers to monitor their instruction so that they may keep on course.
• Summative evaluation - takes place after instruction; its major purpose is to determine whether the students have mastered the instruction; an effective teacher uses the results to revise his or her methods and program.

Page 10: Chapter 12 Evaluating Instruction

NORM-REFERENCED MEASUREMENT AND CRITERION-REFERENCED MEASUREMENT

• Norm-referenced and criterion-referenced measurement are two different types of measurement used by instructors.

• Norm-Referenced Measurement: With norm-referenced measurement, a student's individual performance is compared to the performance of the other students who took the test. Standardized tests are a commonly used example of norm-referenced measurement. When classroom teachers use this type of measurement, students are graded in relation to the performance of the group on that test: grades are distributed around a middle grade, with a certain number of students falling above and a certain number falling below it.

• Criterion-Referenced Measurement: This type of measurement considers the individual performance of the student. Each student's success depends on his or her mastery of the objectives being tested. Under this type of measurement, students are graded on their own achievement, regardless of how the rest of the class has scored.

Page 11: Chapter 12 Evaluating Instruction

NORM-REFERENCED MEASUREMENT AND CRITERION-REFERENCED MEASUREMENT

• Comparison of the Two Types of Measurement: W. James Popham identified the "most fundamental difference" between these two measurements as "the nature of the interpretation that is used to make sense out of students' test performance." The Instructional Model discussed in the text leans toward criterion-referenced measurement. The text notes that norm-referenced measurement can be used in the classroom, but only in certain circumstances; it should not be used to ensure that some students receive lower grades so as to produce an even distribution of grades.
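The contrast between the two interpretations can be sketched with a small, purely hypothetical example (the student names, scores, and the 80-point mastery cutoff are all invented for illustration): the same raw scores are read one way against the group and another way against a fixed criterion.

```python
# Hypothetical example: the same five raw scores interpreted two ways.
from statistics import mean, stdev

scores = {"Ana": 72, "Ben": 85, "Cal": 64, "Dee": 91, "Eli": 78}

# Norm-referenced interpretation: each student is judged relative to the
# group, here as a z-score (distance from the class mean in standard
# deviations). A student's standing changes if the class changes.
mu, sigma = mean(scores.values()), stdev(scores.values())
z_scores = {name: round((s - mu) / sigma, 2) for name, s in scores.items()}

# Criterion-referenced interpretation: each student is judged against a
# fixed mastery cutoff, regardless of how classmates performed.
MASTERY_CUTOFF = 80  # invented threshold for illustration
mastery = {name: s >= MASTERY_CUTOFF for name, s in scores.items()}
```

Under the norm-referenced reading, Eli sits exactly at the class average; under the criterion-referenced reading, only Ben and Dee demonstrate mastery, and that judgment would not change if the rest of the class scored differently.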

 

Page 12: Chapter 12 Evaluating Instruction

EVALUATION IN THREE DOMAINS

• Psychomotor Domain: "Objectives in this domain are best evaluated by actual performance of the skill being taught." (Oliva 380) To pass a test in this domain, the student must meet certain performance criteria. See: http://www.assessment.uconn.edu/docs/LearningTaxonomy_Psychomotor.pdf

• Cognitive Domain: Evaluation in this domain most often takes the form of written tests. The three main instructional levels of this domain are fact, understanding, and application. Instruction in this domain is normally direct instruction (lecture), and evaluation uses subjective and objective test questions. "The teacher should seek to evaluate, when appropriate, student achievement in all six levels of the Bloom taxonomy of the cognitive domain, using both essay and objective test items." (Oliva 382) See: Bloom's Taxonomy

• Affective Domain: "Student achievement in the affective domain is difficult and sometimes impossible to assess. Attitudes, values, and feelings can be deliberately concealed; learners have the right to hide their personal feelings and beliefs, if they choose." (Oliva 384) See: Krathwohl's Taxonomy

Page 13: Chapter 12 Evaluating Instruction

PERFORMANCE-BASED ASSESSMENT

Ideas to remember concerning assessment:
• Students can demonstrate achievement both during and at the end of instruction through means other than typical examinations.
• A skilled instructor can tell a good deal about pupils' success just by observing their classroom performance.
• Alternative techniques of evaluation (other than examinations) include student logs, reports, essays, notebooks, simulations, demonstrations, construction activities, self-evaluation, and portfolios. (Oliva 385-386)

Page 14: Chapter 12 Evaluating Instruction

Alternative Assessments

Popular on the current scene of alternative assessments is the use of portfolios as a form of measurement.
o Portfolios are systematic collections of student work over time; the collections help students and teachers assess growth and progress over time.
o It is essential that students develop a sense of ownership of their portfolios so they can measure their progress and see where they need to continue working in order to be successful.
o Portfolios can exemplify achievement in all three domains of learning: cognitive, affective, and psychomotor. Examples of work may include individual classwork assignments, reports, poems, letters, reading logs, and audiotape recordings. (Oliva 386)

Page 15: Chapter 12 Evaluating Instruction

Virginia Alternative Assessments

Below are links to information concerning the current alternative assessments administered in Virginia:
o Alternative and alternate assessment administrator's manual: http://www.doe.virginia.gov/testing/alternative_assessments/administrators_manual_2010_11.pdf
o Virginia Alternate Assessment Program (VAAP): http://www.doe.virginia.gov/testing/alternative_assessments/vaap_va_alt_assessment_prog/index.shtml
o Virginia Grade Level Alternative (VGLA): http://www.doe.virginia.gov/testing/alternative_assessments/vgla_va_grade_level_alt/index.shtml
o Virginia Substitute Evaluation Program (VSEP): http://www.doe.virginia.gov/testing/alternative_assessments/vsep_va_substitute_eval_prog/index.shtml
o Virginia Modified Achievement Standards Test (VMAST): http://www.doe.virginia.gov/testing/alternative_assessments/vmast_va_mod_achievement_stds_test/index.shtml

Page 16: Chapter 12 Evaluating Instruction

Alternative Assessments

• Alternative assessment measures may begin to include practices that could reduce or eliminate homework and change grading practices.
• Marzano took the position that "a single grade or a percentage score is not a good way to report achievement in any subject area, because it simply cannot present the level of detailed feedback necessary for effective learning."
• Alternative assessments may change the more traditional forms of classroom assessment, but they are unlikely, in the foreseeable future, to replace the use of standardized and teacher-made tests of student achievement. (Oliva 387)

Page 17: Chapter 12 Evaluating Instruction

Four rules of assessment by Popham

Teachers seeking to improve their classroom assessment skills might follow these four rules of assessment by Popham:
• Use only a modest number of major classroom tests, but make sure these tests measure learner outcomes of indisputable importance.
• Use diverse types of classroom assessments to clarify the nature of any learning outcome you seek.
• Make students' responses to classroom assessments central to your instructional decision making.
• Regularly assess educationally significant student affect, but only to make inferences about groups of students, not individual students. (Oliva 387-388)

Page 18: Chapter 12 Evaluating Instruction

ASSESSMENT INITIATIVES FROM BEYOND THE CLASSROOM

• Assessment on a scale broader than the classroom should be considered when visiting the topic of instruction evaluation; we will now consider district, state, national, and international assessments.

DISTRICT ASSESSMENTS
• Beginning in the 1980s, many districts began district-wide assessment in response to demands for accountability and increased "criticism over both real and perceived deficiencies."
• These assessments are often administered at the end of each marking period. (Oliva 388)

Page 19: Chapter 12 Evaluating Instruction

STATE ASSESSMENTS
• In the past twenty years, states have been subject to more attention with regard to assessment.
  o State legislators and state departments of education were motivated to establish minimum standards for test performance, partly out of disappointment with national and international assessment results and partly as a means of holding teachers and administrators accountable.
  o The No Child Left Behind Act of 2001 prompted states to develop and administer state assessments.
  o An exception to this movement is Nebraska's School-based Teacher-led Assessment Reporting System (STARS), which allows the use of district-designed assessments in place of state tests. Nebraska's system consists of a portfolio of classroom tests, district tests, a state writing exam, and one nationally recognized test, and was approved under NCLB in 2006. (Oliva 389)

Page 20: Chapter 12 Evaluating Instruction

NATIONAL ASSESSMENTS
• SAT
  o Scores on the verbal portion of the SAT dropped from 1952 to 1982; scores on the math portion declined before 1991. Since then (1982 for verbal, 1991 for math), scores on both portions have risen.
  o The rise in scores may be attributed to improvements in curriculum, instruction, and technology.
  o In early 2005, the SAT was lengthened: analogy questions were dropped and a writing test was added.
  o Forty-eight percent of high school graduates took the SAT in 2006; thirty-eight percent of those tested were minority students.
  o The National Center for Fair and Open Testing cites inaccuracy, bias, and susceptibility to coaching as flaws in the SAT. (Oliva 390)

Page 21: Chapter 12 Evaluating Instruction

• ACT
  o Forty percent of all seniors graduating in 2006 took the American College Testing Program test; thirty-six percent of these graduates chose to take the optional writing test.
  o Between 1970 and 1982, ACT scores declined; scores have generally increased since 1984.
• National Assessment of Educational Progress (NAEP)
  o Operated with federal funds since it began.
  o Known as the "Nation's Report Card."
  o Policies are set by the National Assessment Governing Board (NAGB), comprised of twenty-six members appointed by the U.S. Secretary of Education.
  o Testing areas include the arts, civics, geography, U.S. history, math, reading, science, writing, and knowledge and skills in using the computer.
  o Foreign language and world history tests are in development. (Oliva 390-391)

Page 22: Chapter 12 Evaluating Instruction

o NAEP tests from 40,000 to 150,000 students in 900 to 2,500 schools, depending on the number of testing disciplines.
o "Report cards" showing national and state results are issued to the public after each assessment.
o National scores of students in grades four, eight, and twelve are reported by gender, race/ethnicity, region of the country, parents' education, type of school, type of location, and free/reduced-price school lunch program participation.
o Scores on NAEP assessments have been inconsistent since NAEP began testing in 1969, although gaps between the scores of whites and of other racial/ethnic groups have narrowed over the past twenty years.
o Data are reported only for groups (as opposed to individual schools) to prevent embarrassment to particular schools, although some districts choose to release this information.
o Opponents of national assessments argue that testing will result in the development of a national curriculum.
o There is also concern that NAEP testing will fulfill the role of "auditor" with regard to NCLB and state assessments. (Oliva 391-392)

Page 23: Chapter 12 Evaluating Instruction

International Assessments

• Two associations conduct the international assessments discussed here:
  o International Assessment of Educational Progress (IAEP)
  o International Association for the Evaluation of Educational Achievement (IEA)

  

Page 24: Chapter 12 Evaluating Instruction

International Assessment of Educational Progress (IAEP)
• The first IAEP took place in 1988.
• Conducted by the Educational Testing Service.
• Funded by the U.S. Department of Education and the National Science Foundation.
• Five countries were involved: Ireland, Korea, Spain, the United Kingdom, and the United States, along with four Canadian provinces.
• Assessed proficiency in math and science among thirteen-year-olds.
• Results: the United States' averages in math and science were below those of the other countries, showing that U.S. students' achievement is below a desirable level.

Page 25: Chapter 12 Evaluating Instruction

Second IAEP Assessment
• Took place in 1991 and once again assessed science and math.
• Included fourteen countries in the assessment of nine-year-olds and twenty countries in the assessment of thirteen-year-olds.
• Results:
  o Korea and Taiwan rated at the top in both math and science.
  o The United States came in third among nine-year-olds and ranked seventh among thirteen-year-olds.

Page 26: Chapter 12 Evaluating Instruction

International Association for the Evaluation of Educational Achievement (IEA)
• Funded by the U.S. Office of Education and the Ford Foundation.
• Studies have covered achievement in math, science, literature, reading comprehension, foreign languages (English and French), and civic education.
• Surveyed around 250,000 students and 50,000 teachers in 22 countries.

Page 27: Chapter 12 Evaluating Instruction

International Mathematics Studies

• First International Mathematics Study
  o Conducted in 1964.
  o Surveyed more than 130,000 students in more than 5,000 schools across 12 countries.
• Second International Mathematics Study
  o Funded by the National Science Foundation and the U.S. Department of Education.
  o Conducted in 1981-1982.
  o Studied 12,000 eighth and twelfth graders enrolled in college-preparatory programs across 20 countries.

Page 28: Chapter 12 Evaluating Instruction

Third International Mathematics and Science Study (TIMSS)
• Conducted in 1995 and coordinated by the TIMSS and PIRLS International Study Center at Boston College's Lynch School of Education.
• Tested over 500,000 students in 41 countries.
• Results:
  o U.S. fourth graders were above the international average in science; only Korea outscored them.
  o U.S. fourth graders scored above average in math.
  o U.S. eighth graders were above average in science but below average in mathematics.
  o U.S. twelfth graders were below international averages in both science and math.

Page 29: Chapter 12 Evaluating Instruction

Third International Mathematics and Science Study-Repeat (TIMSS-R)
• Conducted in 1999; tested eighth graders in 38 countries.
• Found math and science scores were lower on this 1999 test than on the 1995 test.
• Results:
  o In math, U.S. eighth graders outperformed 17 nations and scored lower than 14 nations.
  o In science, U.S. eighth graders outperformed 18 nations and scored lower than 14 nations.

 

Page 30: Chapter 12 Evaluating Instruction

Third International Mathematics and Science Study (TIMSS 2003)
• Tested fourth and eighth graders in math and science; over 40 countries were involved.
• Results for mathematics:
  o U.S. students showed improvement from 1995 to 2003.
  o Singapore's students came out on top among both fourth and eighth graders.
  o Hong Kong SAR, Japan, and Taipei followed Singapore among fourth graders.
  o Korea, Hong Kong SAR, and Taipei followed Singapore among eighth graders.

Page 31: Chapter 12 Evaluating Instruction

Progress in International Reading Literacy Studies (PIRLS)
• 1991 study results:
  o 32 countries were involved.
  o U.S. nine-year-olds ranked near the top on the International Association for the Evaluation of Educational Achievement study of reading literacy.
  o U.S. fourteen-year-olds took second place, falling right behind France.

Page 32: Chapter 12 Evaluating Instruction

PIRLS 2001
• Tested fourth graders in 34 countries.
• Results:
  o U.S. fourth graders took ninth place, scored above the international average on the combined literacy scale, and outperformed 23 of the 34 countries.
  o Sweden was first, the Netherlands second, and England third.

Page 33: Chapter 12 Evaluating Instruction

PIRLS 2006 and 2007

• 2006: In the Intel Science and Engineering Fairs, administered by Science Service, American high school students won the top Intel Foundation Young Scientist Awards.
• 2007: A survey conducted by Roper Public Affairs for the National Geographic Society found that young Americans struggle with geographic literacy.

Page 34: Chapter 12 Evaluating Instruction

Difficulties in Comparing Students Across the International Spectrum
• Differences among nations that can affect scores:
  o Curricula
  o Instructional strategies
  o Political and social conditions
  o Length of the school year
  o Time allocated to studies in school and at home
  o Number of pupils per teacher
  o Motivation of students
  o Dedication of parents to education
  o Traditions

Page 35: Chapter 12 Evaluating Instruction

REALITY STATEMENTS

Group Seven has a strong understanding of how school administrators affect the instructional programs of their schools. We understand that assessment is a continuous process: it not only comes at the end of instruction but can also happen before instruction begins. When evaluating instructional programs, we mostly think of standards-based tests; however, we understand that many factors come into play. We must consider the community in which we teach, our students' behaviors, their various learning styles, the available resources, and the required curriculum that we teach. We understand that evaluation is a continuous process of learning for both students and instructors.

Page 36: Chapter 12 Evaluating Instruction

In order to promote the success of each student we must:
• Continue to use all forms of assessment (preassessment, formative assessment, and summative assessment) to guide our classroom instruction.
• Continue to understand the three domains of evaluation (psychomotor, cognitive, and affective).
• Continue to use a variety of assessment techniques in our classrooms.
• Continue to develop and provide opportunities for our students to be self-directed, lifelong learners.
• Continue to administer district assessments, such as benchmark testing, and state assessments (SOLs) as prescribed by our individual divisions and the Commonwealth of Virginia.

Page 37: Chapter 12 Evaluating Instruction

In order to promote the success of each student we must:
• Continue to have a clear understanding of the stated learning objectives.
• Continue to provide new and innovative ways to teach the learning objectives. (Teaching 21st Century Students Video)
• Continue to assess student achievement and provide data-driven instruction to our students.
• Continue to understand that data from written tests are not the only data we use to drive instruction.
• Continue to understand that although we may not agree with all required testing, we must use the data to provide a better instructional environment for our students.

 

Page 38: Chapter 12 Evaluating Instruction

GROUP PARTICIPATION

• Assessing Instruction, An Era of Assessment, and Reality Statements - Melissa Ray
• Stages of Planning for Evaluation and Reality Statements - Victoria Florey
• Evaluation in Three Domains and Reality Statements - Trudy Cobler and Christian Miller
• Norm-Referenced Measurement and Criterion-Referenced Measurement and Reality Statements - Sarah Mercer
• Assessment Initiatives from Beyond the Classroom and Reality Statements - Kelly Russell and Jerad Ward
• Alternative Assessment and Reality Statements - Melissa Roark