Exam Analysis: The Difference Between Testing & Assessment and Why it’s Relevant


To highlight the differences between testing and assessment, let’s take three fun angles: a) the academic, b) the old-schooler, and c) the progressive.

First, the academic:

Professional assessment folk get it. Assessments include a broad range of tools to collect evidence, varying widely from a student’s final exam to an alumni satisfaction survey. ‘Assessment’ is all-encompassing. By contrast, tests, exams, and quizzes (collectively “tests” for simplicity) are a focused type of assessment that directly measures some attribute of a student’s knowledge. Oversimplifying yet again, tests can be subjective or objective. Subjective tests (e.g. a presentation) need to be graded by a human. Objective tests are one of the few highly scalable assessment tools available (e.g. a multiple-choice online exam is easy to give to 300 students). The good news: writing objective tests is very easy; the bad news: writing a good test takes practice. Because assessment is so complicated, three things happen: 1) it is relegated to a highly trained unit that is isolated from the daily lives of faculty, 2) very smart people can’t get a top-down solution implemented, and 3) limited resources go toward solving the problem since there are limited results to show.
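
As an aside, item-level statistics are exactly the kind of feedback that objective tests make cheap to compute at scale. Below is a minimal sketch in Python, assuming a simple 0/1 score matrix; the function names, the sample data, and the classical 27% upper/lower split are illustrative choices, not anything prescribed by this article:

    # Minimal item-analysis sketch for an objective (multiple-choice) exam.
    # Hypothetical data: a 0/1 score matrix, one row per student, one column per item.

    def item_difficulty(scores):
        # Proportion of students answering each item correctly.
        n_students, n_items = len(scores), len(scores[0])
        return [sum(row[i] for row in scores) / n_students for i in range(n_items)]

    def item_discrimination(scores):
        # Upper-lower index: correct rate among the top ~27% of students
        # minus that of the bottom ~27%. Near zero (or negative) flags a weak item.
        ranked = sorted(scores, key=sum, reverse=True)
        k = max(1, round(len(ranked) * 0.27))
        top, bottom = ranked[:k], ranked[-k:]
        n_items = len(scores[0])
        return [sum(r[i] for r in top) / k - sum(r[i] for r in bottom) / k
                for i in range(n_items)]

    # Six students, three items (1 = correct, 0 = incorrect).
    scores = [[1, 1, 0], [1, 1, 1], [1, 0, 0],
              [0, 1, 0], [1, 0, 1], [0, 0, 0]]
    print(item_difficulty(scores))      # roughly [0.67, 0.5, 0.33]
    print(item_discrimination(scores))  # per-item gap between strong and weak students

Flagging items with low or negative discrimination is precisely the “writing a good test takes practice” feedback that faculty can act on immediately.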

Now the old-school folk:

Exam analysis and assessments tend to focus on broad institutional metrics and are very important to the success of an institution. These types of assessments generally happen every year or two (or ten), and often depend on accreditation cycles. As such, a broad committee needs assessment evidence but has little time or money to gather it. So, naturally, the committee relies on sampling or indirect measures; think of student satisfaction surveys or professor evaluations. Testing, on the other hand, is a necessary process best left to faculty to muddle through during their courses. Testing is messy, hard to control, and mostly the domain of fiercely independent faculty. Occasionally, faculty will (begrudgingly, since they get little value out of the exercise) provide some testing data for a broader assessment or accreditation push. Assessments are sometimes considered separate from, or even above, in-course testing.

Finally, the progressive view:

A good assessment program needs a good testing program embedded in it. More importantly, students and faculty are the MOST important stakeholders and consumers of assessment information. They want and need testing feedback, in as close to real time as possible, to tweak their pedagogy and the tests themselves. Faculty, assessment offices, student services, and students in general should be using the same testing data to make decisions and take action as a team. The reality is that faculty want students to learn, and students are bored with the test-it-and-forget-it approach. Progressives understand that the high-level metrics that often fall into the institutional assessment bucket (retention, graduation, employability, satisfaction, etc.) all measure the distant past. Faculty find themselves rolling their eyes at long accreditation cycles because teaching, learning, and testing happen every day; relying on lagging metrics is like driving with only the rear-view mirror, and faculty need feedback now. In short, progressive leaders in academia increasingly build testing feedback loops back into strategic assessments; they realize that direct assessments (including tests) are by far the best source of actionable feedback to fundamentally improve institutional effectiveness…starting with the most important stakeholders.

An article by our CEO, Daniel Muzquiz. Daniel is currently the CEO of ExamSoft and responsible for guiding the company’s overall strategy. In addition, Daniel is a founding partner at Phoenix Strategy Investments, a private equity firm. Daniel holds a BS in Mechanical Engineering from the University of Texas, where he was selected for the Pi Tau Sigma Honor Society, and an MBA from Harvard Business School.

ExamSoft offers a turnkey solution for computer-based testing, making exam creation, delivery, scoring, and analysis an easy and reliable process. ExamSoft has served the testing needs of prominent academic, certification, and licensing institutions for more than 13 years. More information on computer-based exams is available at http://www.examsoft.com