CNM Handbook for Outcomes Assessment

Assessment You Can Use!

Central New Mexico Community College

Ursula Waln, Senior Director of Outcomes and Assessment

2016-18 Edition



CNM Handbook for Outcomes Assessment

Contents

Overview
Purpose of this Handbook
Why CNM Conducts Formal Learning Outcomes Assessment
What Constitutes a Good Institutional Assessment Process?
Table 1: Institutional Effectiveness Goals
Introduction to the CNM Student Outcomes Assessment Process
The Greater Context of CNM Learning Outcomes Assessment
Figure 1: CNM Student Learning Outcomes Assessment Context Map
Table 2: Matrix for Continual Improvement in Student Learning Outcomes
Figure 2: Alignment of Student Learning Outcomes to CNM Mission, Vision, and Values
Course SLOs
Program and Discipline SLOs
General Education SLOs
Connections between Accreditation & Assessment
Institutional Accreditation
Program Accreditation
Developing Student Learning Outcome Statements
Table 3: Sample Program SLOs
Taxonomies and Models
Figure 3: Bloom’s Taxonomy of the Cognitive Domain
Figure 4: Bloom’s Taxonomies of the Affective and Psychomotor Domains
Table 4: Action Verbs for Creating Learning Outcome Statements
Figure 5: Webb’s Depth of Knowledge Model
Table 5: Marzano’s Taxonomy
Rewriting Compounded Outcomes
Designing Assessment Projects
Why Measurement Matters
Assessment Cycle Plans
Alignment of Course and Program SLOs
Figure 6: The CNM Assessment Process
Table 6: A Model for SLO Mapping
Table 7: Sample Rubric for Innovative Thinking in the Transformation of Ideas into Art Forms
Developing an Assessment Focus at the Course Level
Planning an Assessment Approach at the Course Level
Table 8: Common Assessment Measures
Table 9: Descriptors Related to Assessment Measures
Table 10: Sample Likert-Scale Items
Using Descriptive Rubrics to Make Assessment Coherent
Figure 7: Using Rubrics to Pool Findings from Diverse Assessment Approaches
Rubric Design
Table 11: Sample Rubric for Assessment of Decision-Making Skill
Using Formative Assessment to Reinforce Learning
Collecting Evidence of Learning
Embedded Assessment
Classroom Assessment Techniques (CATs)
Collecting Evidence beyond the Classroom
Sampling Student Learning Outcomes
IRB and Classroom Research
Analyzing, Interpreting, and Applying Assessment Findings
Analyzing Findings from Individual Measures
Interpreting Findings
Applying Findings at the Course Level
Pooling and Applying Program Findings
Glossary
References
Appendix 1: CNM Assessment Cycle Plan Form
Appendix 2: Guide for Completion of Cycle Plan
Appendix 3: Opt-In Cycle Plan Feedback Rubric
Appendix 4: CNM Annual Assessment Report Form
Appendix 5: Guide for Completion of Assessment Report
Appendix 6: Opt-In Assessment Report Feedback Rubric
Appendix 7: Rubric for Assessing Learning Outcomes Assessment Processes
Appendix 8: NMHED General Education Core Course Transfer Module Competencies

Overview

Purpose of this Handbook

This handbook was designed to assist CNM faculty and program leaders in assessing student learning outcomes and applying the findings to optimize student learning.

Why CNM Conducts Formal Learning Outcomes Assessment

CNM is dedicated to ensuring that all academic courses and curricula meet the highest standards of relevance and excellence. Thus, we are collectively committed to conducting ongoing, systematic assessment of student learning outcomes across all areas of study. CNM’s assessment processes inform decisions at the course, program, and institutional levels. The resulting evidence-based changes help ensure that the education CNM students receive remains first-rate and up-to-date.

Assessment of student learning progress toward achievement of expected course outcomes is a natural part of the instructional process. However, the results of such assessment, in the absence of a formal, structured assessment process, may or may not factor into decisions for change. Having an intentional, shared approach that connects in-course assessment to broader program outcomes, documents and applies the findings, and follows up on the impact of changes benefits the college, programs, instructors, and students.

As a publicly funded institution, CNM has a degree of responsibility to demonstrate accountability for the use of taxpayer funds. As an educational institution, CNM needs to be able to demonstrate achievement of the learning outcomes it claims to strive for or, if failing to do so, the ability to change accordingly. As an institution accredited by the Higher Learning Commission (HLC), CNM must be able to demonstrate that it regularly assesses student outcomes and systematically applies the information obtained from that assessment to continually improve student learning. Accreditation not only confirms institutional integrity, but it also enables CNM to offer students Financial Aid, courses that transfer readily to other institutions, and degrees that are recognized by employers as valid. Most importantly, however, as an institution that strives to be its best, CNM benefits from the ability of its faculty and administrators to make well-informed decisions.

Programs are improved through genuine appraisal of student learning when that appraisal is used to implement well-considered enhancements. Assessment can help perpetuate successful practices, address obstacles to students’ success, and encourage innovative strategies. Often, when a program’s faculty begins to develop assessment methodologies related to program outcomes, the outcome statements themselves get refined and better defined, and the program components become more coherently integrated. Evidence obtained through assessment can also substantiate programs’ requests for external support, such as project funding, student services, professional development, etc.

For faculty, assessment often leads to ideas for innovative instructional approaches and/or better coordination of program efforts. Connecting classroom assessment to program assessment helps instructors clarify how their teaching contributes to program outcomes and complements the instructional activities of their colleagues. Active faculty engagement in program assessment develops breadth of perspective, which in turn facilitates greater intentionality in classroom instruction, clearer communication of expectations, and more objective evaluation of students’ progress.

Moreover, faculty who focus on observing and improving student learning, as opposed to observing and improving teaching, develop greater effectiveness in helping students to change their study habits and to become more cognizant of their own and others’ thinking processes.

In addition, the CNM assessment process provides ample opportunity for instructors to receive recognition for successes, mentor their colleagues, assume leadership roles, and provide valuable college service.

Ultimately, students are the greatest beneficiaries of the CNM assessment process. Students receive a more coherent education when their courses are delivered by a faculty that keeps program goals in mind, is attentive to students’ learning needs, and is open to opportunities for improvement. Participation in assessment, particularly when instructors discuss the process in class, helps students become more aware of the strengths and weaknesses in their own approach to learning. In turn, students are able to better understand how to maximize their study efforts; assume responsibility for their own learning; and become independent, lifelong learners. And, students benefit from receiving continually improved instruction through top-notch programs at an accredited and highly esteemed institution.

What Constitutes a Good Institutional Assessment Process?

The State University of New York’s Council on Assessment has developed a rubric (2015) that captures best practices in assessment. The goals identified in the SUNY rubric, which are consistent with what institutional accreditation review teams tend to look for, are listed in Table 1. The rubric is available at http://www.sunyassess.org/uploads/1/0/4/0/10408119/scoa_institutional_effectiveness_rubric_2.0_.pdf.

Table 1: Institutional Effectiveness Goals

Design

· Plan: The institution has a formal assessment plan that documents an organized, sustained assessment process covering all major administrative units, student support services, and academic programs.

· Outcomes: Measurable outcomes have been articulated for the institution as a whole and within functional areas/units, including for courses and programs and nonacademic units.

· Alignment: More specific subordinate outcomes (e.g., course) are aligned with broader, higher-level outcomes (e.g., programs) within units, and these are aligned with the institutional mission, goals, and values.

Implementation

· Resources: Financial, human, technical, and/or physical resources are adequate to support assessment.

· Culture: All members of the faculty and staff are involved in assessment activities.

· Data Focus: Data from multiple sources and measures are considered in assessment.

· Sustainability: Assessment is conducted regularly, consistently, and in a manner that is sustainable over the long term.

· Monitoring: Mechanisms are in place to systematically monitor the implementation of the assessment plan.

Impact

· Communication: Assessment results are readily available to all parties with an interest in them.

· Strategic Planning and Budgeting: Assessment data are routinely considered in strategic planning and budgeting.

· Closing the Loop: Assessment data have been used for institutional improvement.

Source: The SUNY Council on Assessment

Introduction to the CNM Student Outcomes Assessment Process

CNM’s faculty-led Student Academic Assessment Committee (SAAC) drives program assessment. Each of CNM’s six schools has two voting faculty representatives on the committee who bring their school’s perspectives to the table and help coordinate the school’s assessment reporting. In addition, SAAC includes four ex officio members, one each from the College Curriculum Committee (CCC), the Cooperative for Teaching and Learning (CTL), the Deans Council, and the Office of Planning and Institutional Effectiveness (OPIE). The OPIE member is the Senior Director of Outcomes and Assessment, whose role is facilitative.

SAAC has two primary responsibilities: 1) providing a consistent process for documenting and reporting outcomes results and actions taken as a result of assessment, and 2) facilitating a periodic review of the learning outcomes associated with the CNM General Education Core Curriculum.

SAAC faculty representatives have put in place a learning assessment process that provides consistency and facilitates ongoing improvement while respecting disciplinary differences, faculty expertise, and academic freedom. This process calls for all certificate and degree programs, general education areas, and discipline areas to participate in what is referred to for the sake of brevity as program assessment. The goal is assessment of student learning within programs, not assessment of programs themselves (a subtle but important distinction).

The faculty of each program/area develops and maintains its own assessment cycle plan, outlining when and how all of the program’s student learning outcomes (SLOs) will be assessed over the course of five years. SAAC asks that the cycle plan, which can be revised as needed, address at least one SLO per year. And, SAAC strongly recommends assessing each SLO for two consecutive years so that changes can be made on the basis of the first year’s findings, and the impact of those changes can be assessed during the second year. At the end of the five-year cycle, a new 5-year assessment cycle plan is due.

Program faculty may use any combination of course-level and/or program-level assessment approaches they deem appropriate to evaluate students’ achievement of the program-level learning outcomes. For breadth and depth, multiple approaches involving multiple measures are encouraged.

A separate annual assessment reporting form (see Appendix 4) is used to summarize the prior year’s assessment findings and describe changes planned on the basis of the findings. This form can be prepared by any designated representative within the school. Ideally, findings from multiple measures in multiple courses are collaboratively interpreted by the program faculty in a group meeting prior to completion of the report.

For public access, SAAC posts assessment reports, meeting minutes, and other information at http://www.cnm.edu/depts/academic-affairs/saac. For access by full-time faculty, SAAC posts internal documents in a Learning Assessment folder on the K: drive.

In addition, the Senior Director of Outcomes and Assessment publishes a monthly faculty e-newsletter called The Loop, offers faculty workshops, and is available to assist individual faculty members and/or programs with their specific assessment needs.

The Greater Context of CNM Learning Outcomes Assessment

CNM learning outcomes assessment (a.k.a., program assessment) does not operate in isolation. The diagram in Figure 1 illustrates the placement of program assessment among interrelated processes that together inform institutional planning and budgeting decisions in support of improved student outcomes.

Figure 1: CNM Student Learning Outcomes Assessment Context Map

The CNM General Education Core Curriculum is a group of broad categories of educational focus, called “areas” (such as Communications), each with an associated set of student learning outcomes and a list of courses that address those outcomes. For example, Mathematics is one of the CNM Gen Ed areas. “Use and solve various kinds of equations” is one of the Mathematics student learning outcomes. And, MATH 1315, College Algebra, is a course that may be taken to meet the CNM Mathematics Gen Ed requirement.

Similarly, the New Mexico General Education Core Course Transfer Curriculum is a group of areas, each with an associated set of student learning outcomes – which the New Mexico Higher Education Department (NMHED) refers to as “competencies” – and a list of courses that address those outcomes. CNM currently has over 120 of its own Gen Ed courses included in the State’s transfer core. This is highly beneficial for CNM’s students because these courses are guaranteed to transfer between New Mexico postsecondary public institutions. As part of the agreement to have these courses included in the transfer core, CNM is responsible for continuous assessment in these courses and annual reporting to verify that students achieve the State’s competencies. See the General Education SLOs section below for more information on general education learning outcomes.

Specific course outcomes within the programs and disciplines provide the basis for program and discipline outcomes assessment. Program statistics (enrollment and completion rates, next-level outcomes, etc.) also inform program-level assessment. Learning assessment, in turn, informs instructional approaches, curriculum development/revision, planning, and budgeting.

While learning outcomes assessment is separate and distinct from program review, assessment does inform the program review process through its influence on programming, curricular, instructional, and funding decisions. Program review is a viability study that looks primarily at program statistics (such as enrollment, retention, and graduation rates) and curricular and job market factors. Because assessment focuses on student learning rather than on demonstrating program effectiveness, its relationship to program viability studies remains indirect and informative; this allows CNM faculty to apply assessment in an unguarded manner, to explore uncertain terrain, and to acknowledge findings openly. Keeping the primary focus of assessment on improvement, rather than on program security, helps to foster an ethos of inquiry, scholarly analysis, and civil academic discourse around assessment.

Along with program assessment, a variety of other assessment processes contribute synergistically to ongoing improvement in student learning outcomes at CNM. Table 2 describes some of the key assessment processes that occur regularly, helping to foster the kind of institutional excellence that benefits CNM students.

Table 2: Matrix for Continual Improvement in Student Learning Outcomes

For a larger, more legible version of Table 2, see the document “Ongoing Improvement Matrix” at K:/Learning Assessment/Assessment Process.

As illustrated in Figure 2 on the following page, outcomes assessment at CNM is integrally aligned to the college’s mission, vision, and values. CNM’s mission to “Be a leader in education and training” points the way, and course instruction provides the foundation for realization of that vision.

In the process of achieving course learning outcomes, students develop competencies specific to their programs of study. Degree-seeking students also develop general education competencies. The CNM assessment reporting process focuses on program-level and Gen Ed learning outcomes, as informed by course-level assessment findings and, where appropriate, program-level assessment findings.

Figure 2: Alignment of Student Learning Outcomes to CNM Mission, Vision, and Values

Course SLOs

At CNM, student learning outcome statements (which describe what competencies should be achieved upon completion of instruction) and the student learning outcomes themselves (what is actually achieved) are both referred to as SLOs. Most CNM courses have student learning outcome statements listed in the college’s curriculum management system. (Exceptions include special topics offerings and individualized courses.) The course outcome statements are directly aligned to program outcome statements, general education outcome statements, and/or discipline outcome statements. While course-specific student learning outcomes are the focus of course-level assessment, the alignments to overarching outcome statements make it possible to apply course-level findings in assessment of the overarching outcomes (i.e., program, Gen Ed, or discipline outcomes).

Program and Discipline SLOs

At CNM, the word program usually refers to areas of study that culminate in degrees (AA, AS, or AAS) or certificates. The designation of discipline is typically reserved for areas of study that do not lead to degrees or certificates, such as the College Success Experience, Developmental Education, English as a Second Language, and GED Preparation. Discipline, however, does not refer to general education, which has its own designation, with component general education areas. Note, however, that the word program is frequently used as shorthand to refer to degree, certificate, general education, and discipline areas as a group – as in program assessment.

Each program and discipline has student learning outcome statements (SLOs) that represent the culmination of the component course SLOs. Program and discipline SLOs are periodically reviewed through program assessment and/or, where relevant, through program accreditation review processes and updated as needed.

General Education SLOs

With the exception of Computer Literacy, each distribution area of the CNM General Education Core Curriculum has associated with it a group of courses from which students typically can choose (though some programs require specific Gen Ed courses). Computer Literacy has only one course option (though some programs waive that course under agreements to develop the related SLOs through program instruction). Each of the courses within a Gen Ed area is expected to develop all of that area’s SLOs. And, each Gen Ed course should be included at least once in the area’s 5-year assessment cycle plan.

The vast majority of CNM’s general education courses have been approved by the New Mexico Higher Education Department (NMHED) for inclusion in the New Mexico Core Course Transfer Curriculum (a.k.a., the State’s transfer core). Inclusion in this core guarantees transfer toward general education requirements at any other public institution of higher education in New Mexico. To obtain and maintain inclusion in the State’s transfer core, CNM agrees to assess and report on student achievement of associated competencies in each course (see Appendix 8). However, because the NMHED permits institutions to apply their own internal assessment schedules toward the State-level reporting, it is not necessary that CNM report assessment findings every year for every course in the State’s transfer core. CNM Gen Ed areas may apply the CNM 5-year assessment cycle plan, so long as every area course is included in the assessment reporting process at some point during the 5-year cycle.

During the 2014-15 academic year, CNM conducted a review of its own General Education Core Curriculum and decided to adopt the competencies from the State’s General Education Core Course Transfer Curriculum as its own for the areas of Communications, Mathematics, Laboratory Science, Social and Behavioral Sciences, and Humanities and Fine Arts. CNM opted to retain the additional Gen Ed area of Computer Literacy. The former areas of Critical Thinking and Life Skills/Teamwork, which were embedded in program reporting but were not directly assessed, were removed from the CNM Gen Ed core. These changes take effect with the 2016-2018 catalog; for the reporting of 2014-15 assessment findings in October 2016, reporters have the option of using either the former student learning outcomes or the new ones.

Shortly after CNM made the decision to adopt the State’s competencies, NMHED Secretary Damron announced plans to revise the State’s transfer core. Those revisions are expected to be decided upon by October of 2016. Normally, SAAC would conduct a review of CNM’s Gen Ed outcomes once every six years, with the next review due in 2021. However, due to the revision of the State’s transfer core, SAAC plans to conduct another review of the CNM Gen Ed outcomes in 2017 for the 2018-2020 catalog.

As noted, general education assessment cycle plans should include assessment within all area courses that serve as Gen Ed options. Because this can mean conducting assessment in a significant number of courses, it is recommended that sampling techniques be employed to collect evidence that is representative without overburdening the faculty. See Collecting Evidence of Learning for further information on sampling techniques.
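One common approach is simple random sampling of student work from each participating section. The following is a minimal sketch of that idea, not a prescribed CNM procedure; the section names, artifact IDs, and sample size are hypothetical:

```python
# Hypothetical sketch: draw a simple random sample of student artifacts from
# each participating section so no instructor has to score every submission.
import random

random.seed(2016)  # fixed seed so the draw can be reproduced if needed

artifacts_by_section = {
    "Section 01": [f"S01-{n:02d}" for n in range(1, 26)],  # 25 artifact IDs
    "Section 02": [f"S02-{n:02d}" for n in range(1, 31)],  # 30 artifact IDs
}

SAMPLE_SIZE = 8  # artifacts to score per section (illustrative)

sample = {
    section: sorted(random.sample(ids, min(SAMPLE_SIZE, len(ids))))
    for section, ids in artifacts_by_section.items()
}

for section, ids in sample.items():
    print(section, ids)
```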

An alternative Gen Ed reporting form that uses the NMHED format and allows input by individual instructors is available in an Access database at K:/Learning Assessment/NMHED Reporting Database.

CNM’s General Education Core Curriculum has two separate but related sets of areas and outcomes: one for Associate of Arts (AA) and Associate of Science (AS) degrees, and one for Associate of Applied Science (AAS) degrees. For AA and AS degrees, 35 credits are required; for AAS degrees, 15 credits are required. Following are the current student learning outcome statements associated with each area:

CNM General Education SLOs for AA and AS Degrees

Communications

1. Analyze and evaluate oral and written communication in terms of situation, audience, purpose, aesthetics, and diverse points of view.

2. Express a primary purpose in a compelling statement and order supporting points logically and convincingly.

3. Use effective rhetorical strategies to persuade, inform, and engage.

4. Employ writing and/or speaking processes such as planning, collaborating, organizing, composing, revising, and editing to create presentations using correct diction, syntax, grammar, and mechanics.

5. Integrate research correctly and ethically from credible sources to support the primary purpose of a communication.

6. Engage in reasoned civic discourse while recognizing the distinctions among opinions, fact, and inferences.

Mathematics

1. Construct and analyze graphs and/or data sets.

2. Use and solve various kinds of equations.

3. Understand and write mathematical explanations using appropriate definitions and symbols.

4. Demonstrate problem solving skills within the context of mathematical applications.

Lab Sciences

1. Describe the process of scientific inquiry.

2. Solve problems scientifically.

3. Communicate scientific information.

4. Apply quantitative analysis to scientific problems.

5. Apply scientific thinking to real world problems.

Social/Behavioral Sciences

1. Identify, describe and explain human behaviors and how they are influenced by social structures, institutions, and processes within the contexts of complex and diverse communities.

2. Articulate how beliefs, assumptions, and values are influenced by factors such as politics, geography, economics, culture, biology, history, and social institutions.

3. Describe ongoing reciprocal interactions among self, society, and the environment.

4. Apply the knowledge base of the social and behavioral sciences to identify, describe, explain, and critically evaluate relevant issues, ethical dilemmas, and arguments.

Humanities & Fine Arts

1. Analyze and critically interpret significant primary texts and/or works of art (this includes fine art, literature, music, theatre, & film).

2. Compare art forms, modes of thought and expression, and processes across a range of historical periods and/or structures (such as political, geographic, economic, social, cultural, religious, and intellectual).

3. Recognize and articulate the diversity of human experience across a range of historical periods and/or cultural perspectives.

4. Draw on historical and/or cultural perspectives to evaluate any or all of the following: contemporary problems/issues, contemporary modes of expression, and contemporary thought.

Computer Literacy

1. Demonstrate knowledge of basic computer technology and tools

2. Use software applications to produce, format, analyze and report information to communicate and/or to solve problems

3. Use internet tools to effectively acquire desired information

4. Demonstrate the ability to create and use various forms of electronic communication adhering to contextually appropriate etiquette

5. Demonstrate the ability to create, name, organize, save and retrieve data and/or information in an electronic file management system

CNM General Education SLOs for AAS Degrees

Communications

1. Analyze and evaluate oral and written communication in terms of situation, audience, purpose, aesthetics, and diverse points of view.

2. Express a primary purpose in a compelling statement and order supporting points logically and convincingly.

3. Integrate research correctly and ethically from credible sources to support the primary purpose of a communication.

4. Engage in reasoned civic discourse while recognizing the distinctions among opinions, fact, and inferences.

Mathematics

1. Use and solve various kinds of equations.

2. Understand and write mathematical explanations using appropriate definitions and symbols.

3. Demonstrate problem solving skills within the context of mathematical applications.

Human Relations

1. Describe how the socio-cultural context affects behavior and how behavior affects the socio-cultural context

2. Identify how individual perspectives and predispositions impact others in social, workplace and global settings

Computer Literacy

1. Demonstrate knowledge of basic computer technology and tools

2. Use software applications to produce, format, analyze and report information to communicate and/or to solve problems

3. Use internet tools to effectively acquire desired information

4. Demonstrate the ability to create and use various forms of electronic communication adhering to contextually appropriate etiquette

5. Demonstrate the ability to create, name, organize, save and retrieve data and/or information in an electronic file management system

Connections between Accreditation & Assessment

Institutional Accreditation

CNM is accredited by the Higher Learning Commission (HLC), one of six regional institutional accreditors in the U.S. Information regarding the accreditation processes and criteria is available at www.hlcommission.org. HLC accreditation criteria that have particular relevance to assessment of student outcomes are listed below:

3.A. The institution’s degree programs are appropriate to higher education.

2. The institution articulates and differentiates learning goals for its undergraduate, graduate, post-baccalaureate, post-graduate, and certificate programs.

3. The institution’s program quality and learning goals are consistent across all modes of delivery and all locations (on the main campus, at additional locations, by distance delivery, as dual credit, through contractual or consortial arrangements, or any other modality).

3.B. The institution demonstrates that the exercise of intellectual inquiry and the acquisition, application, and integration of broad learning and skills are integral to its educational programs.

2. The institution articulates the purposes, content, and intended learning outcomes of its undergraduate general education requirements. The program of general education is grounded in a philosophy or framework developed by the institution or adopted from an established framework. It imparts broad knowledge and intellectual concepts to students and develops skills and attitudes that the institution believes every college-educated person should possess.

3. Every degree program offered by the institution engages students in collecting, analyzing, and communicating information; in mastering modes of inquiry or creative work; and in developing skills adaptable to changing environments.

3.E. The institution fulfills the claims it makes for an enriched educational environment.

2. The institution demonstrates any claims it makes about contributions to its students’ educational experience by virtue of aspects of its mission, such as research, community engagement, service learning, religious or spiritual purpose, and economic development.

4.B. The institution demonstrates a commitment to educational achievement and improvement through ongoing assessment of student learning.

1. The institution has clearly stated goals for student learning and effective processes for assessment of student learning and achievement of learning goals.

2. The institution assesses achievement of the learning outcomes that it claims for its curricular and co-curricular programs.

3. The institution uses the information gained from assessment to improve student learning.

4. The institution’s processes and methodologies to assess student learning reflect good practice, including the substantial participation of faculty and other instructional staff members.

5.C. The institution engages in systematic and integrated planning.

2. The institution links its processes for assessment of student learning, evaluation of operations, planning, and budgeting.

Program Accreditation

Many of CNM’s technical and professional programs are accredited by field-specific accreditation bodies. Maintaining program accreditation contributes to program quality by ensuring that instructional practices reflect current best practices and industry standards. Program accreditation is essentially a certification of instructional competency and degree credibility. Because in-depth program assessment is typically a major component of accreditation reporting, the CNM assessment report accommodates carry-over of assessment summaries from accreditation reports. In other words, reporters are encouraged to minimize unnecessary duplication of reporting efforts by copying and pasting write-ups done for accreditation purposes into the corresponding sections of the CNM assessment report form.

Developing Student Learning Outcome Statements

Student learning outcome statements (SLOs) identify the primary competencies the student should be able to demonstrate once the learning has been achieved. SLOs derive from and reflect your teaching goals, so it is important to start with clearly articulated teaching goals. What do you want to accomplish? From that, you can develop SLOs that identify what your students need to learn to do. To be used for assessment, SLO statements need to be measurable. For this reason, the current convention is to use phrases that begin with action verbs. Each phrase completes an introductory statement that includes “the student will be able to.” For example:

Table 3: Sample Program SLOs

Upon completion of this program, the student will be able to:

· Explain the central importance of metabolic pathways in cellular function.

· Use mathematical methods to model biological systems.

· Integrate concepts drawn from both cellular and organismal biology with explanations of evolutionary adaptation.

· Develop experimental models that support theoretical concepts.

· Perform laboratory observations using appropriate instruments.

· Critique alternative explanations of biological phenomena with regard to evidence and scientific reasoning.

At the course level as well as the program level, SLOs should focus on competencies that are applicable beyond college. Rather than address specific topics or activities in which the students will be expected to engage, identify the take-aways (no more than 10) that will help students succeed later in employment and/or in life.

Tips:

· Focus on take-away competencies, not just activities or scores.

· Focus on student behavior/action versus mental or affective processes.

· Choose verbs that reflect the desired level of sophistication.

· Avoid compound components unless their synthesis is needed.

The verbs used in outcome statements carry important messages about the type of skill expected. Therefore, much emphasis in SLO writing is placed upon selection of action verbs.

Cognitive processes such as understanding and affective processes such as appreciation are difficult to measure. So, if your goal is to have students understand something, ask yourself how they can demonstrate that understanding. Will they explain it, paraphrase it, select it, or do something else? Is understanding all you want, or do you also want students to apply their understanding in some way?

Similarly, if your goal is to develop student appreciation for something, how will students demonstrate that appreciation? They could tell you they appreciate it, but how would you know they weren’t just saying what they thought you wanted to hear? Perhaps if they were to describe it, defend it, contrast it with something else, critique it, or use it to create something new, then you might be better able to conclude that they appreciate it.

Taxonomies and Models

Benjamin Bloom’s Taxonomy of the Cognitive Domain is often used as a guide for writing SLOs. The taxonomy is typically depicted as a layered triangle, and many consider it to represent a progression in sophistication of skills, with foundational skills at the bottom and “higher level” skills at the top. However, the model also suggests that mastery of the foundational skills is necessary in order to reach the higher rungs. Being considered foundational, therefore, does not mean a skill level is unimportant.

Bloom originally presented his model in 1956 (Taxonomy). A former student of Bloom’s, Lorin Anderson, along with others, revised the taxonomy in 2000, flipping the top two levels and substituting verbs for Bloom’s nouns (A Taxonomy). A side-by-side comparison is provided in Figure 3. Both versions are commonly called “Bloom’s Taxonomy.”

It is worth noting that Benjamin Bloom also developed taxonomies for the affective and psychomotor domains, and Anderson et al. also developed revised versions of those. The two are shown side by side in Figure 4.

As mentioned, Bloom’s taxonomies imply that the ability to perform the behaviors at the top depends upon having attained prerequisite knowledge and skills through the behaviors in the lower rungs. Not all agree with this concept, however, and the taxonomies have been used by many to discourage instruction directed at the lower-level skills.

Despite debate over their validity, Bloom’s taxonomies can be useful references in selecting verbs for outcome statements, describing stages of outcome development, identifying appropriate assessment methods, and interpreting assessment information.

Building on Bloom’s revised taxonomy, Table 4 on the following page presents a handy reference that was created and shared by the Institutional Research Office at Oregon State University. The document lists for SLO writers a variety of action verbs corresponding to each level of the cognitive domain taxonomy.

You may find it useful to think of Bloom’s revised taxonomy as comprising cognitive processes, and the associated outcome verbs in Table 4 as representing more observable, measurable behaviors.

Table 4: Action Verbs for Creating Learning Outcome Statements

Other models have been developed as alternatives to Bloom’s taxonomies. While all such conceptual models have their shortcomings, the schema they present may prove useful in the selection of appropriate SLO verbs.

For example, Norman Webb’s Depth of Knowledge, first published in 2002 and later illustrated as shown in Figure 5 (from te@chthought), identifies four knowledge levels: Recall, Skills, Strategic Thinking, and Extended Thinking (Depth-of-Knowledge). Webb’s model is widely referenced in relation to the Common Core Standards Initiative for kindergarten through 12th grade.

Figure 5: Webb’s Depth of Knowledge Model

In 2008, Robert Marzano and John Kendall published a “new taxonomy” that reframed Bloom’s domains as Information, Mental Processes, and Psychomotor Procedures and described six levels of processing information: Retrieval, Comprehension, Analysis, Knowledge Utilization, Meta-Cognitive System, and Self System (Designing & Assessing Educational Objectives). In Marzano’s Taxonomy, shown in Table 5, the four lower levels are itemized with associated verbs. This image was shared by the Adams County School District of Colorado (wiki.adams):

Table 5: Marzano’s Taxonomy

In a recent publication, Clifford Adelman of the Institute for Higher Education Policy advocated for the use of operational verbs in outcome statements, defining these as “actions that are directly observed in external contexts and subject to judgment” (2015). The following reference is derived from the article.

For effective learning outcomes, select verbs that:

1. Describe student acquisition and preparation of tools, materials, and texts of various types

access, acquire, collect, accumulate, extract, gather, locate, obtain, retrieve

2. Indicate what students do to certify information, materials, texts, etc.

cite, document, record, reference, source (v)

3. Indicate the modes of student characterization of the objects of knowledge or materials of production, performance, exhibit

categorize, classify, define, describe, determine, frame, identify, prioritize, specify

4. Describe what students do in processing data and allied information

calculate, determine, estimate, manipulate, measure, solve, test

5. Describe the ways in which students format data, information, materials

arrange, assemble, collate, organize, sort

6. Describe what students do in explaining a position, creation, set of observations, or a text

articulate, clarify, explicate, illustrate, interpret, outline, translate, elaborate, elucidate

7. Fall under the cognitive activities we group under “analyze”

compare, contrast, differentiate, distinguish, formulate, map, match, equate

8. Describe what students do when they “inquire”

examine, experiment, explore, hypothesize, investigate, research, test

9. Describe what students do when they combine ideas, materials, observations

assimilate, consolidate, merge, connect, integrate, link, synthesize, summarize

10. Describe what students do in various forms of “making”

build, compose, construct, craft, create, design, develop, generate, model, shape, simulate

11. Describe the various ways in which students utilize the materials of learning

apply, carry out, conduct, demonstrate, employ, implement, perform, produce, use

12. Describe various executive functions students perform

operate, administer, control, coordinate, engage, lead, maintain, manage, navigate, optimize, plan

13. Describe forms of deliberative activity in which students engage

argue, challenge, debate, defend, justify, resolve, dispute, advocate, persuade

14. Indicate how students valuate objects, experiences, texts, productions, etc.

audit, appraise, assess, evaluate, judge, rank

15. Reference the types of communication in which we ask students to engage:

report, edit, encode/decode, pantomime (v), map, display, draw/ diagram

16. Indicate what students do in groups, related to modes of communication

collaborate, contribute, negotiate, feed back

17. Describe what students do in rethinking or reconstructing

accommodate, adapt, adjust, improve, modify, refine, reflect, review

Rewriting Compounded Outcomes

Often in the process of developing assessment plans, people realize their outcome statements are not quite as clear as they could be. One common discovery is that the outcome statement actually describes more than one outcome. While there is no rule against compounding multiple outcomes into one statement, doing so provides less clarity for students regarding their performance expectations and makes assessing the outcomes more complicated. Therefore, if compounded outcomes actually represent separate behaviors, it may be preferable to either rewrite them or create separate statements to independently represent the desired behaviors.

If a higher-level behavior/skill essentially subsumes the others, the lower-level functions can be dropped from the statement. This is a good revision option if a student would have to demonstrate the foundational skill(s) in order to achieve the higher level of performance. For example, compare the following:

· Identify, evaluate, apply, and correctly reference external resources.

· Use external resources appropriately.

To use external resources appropriately, a student must identify potential resources, evaluate them for relevance and reliability, apply them, and correctly reference them. Therefore, the second statement is more direct and inclusive than the first. One could reasonably argue that the more detailed, step-by-step description communicates more explicitly to students what they must do, but the second statement requires a more holistic integration of the steps, communicating the expectation that the steps will be synthesized into an outcome that is more significant than the sum of its components. Synthesis (especially when it involves evaluation) is a high-order cognitive function.

To identify compound outcomes, look for structures such as items listed in a series and/or coordinating conjunctions. Let’s look at two examples, noting the compound structures and the coordinating conjunctions that join them:

· Integrate concepts drawn from both cellular and organismal biology with explanations of evolutionary adaptation.

· Use scientific reasoning to develop hypotheses and evaluate contradictory explanations of social phenomena.

In the first example above, the behavior called for is singular but requires that the student draw upon two different concepts simultaneously to make sense of a third. This is a fairly sophisticated cognitive behavior involving synthesis. The second example above describes two separate behaviors, both of which involve scientific reasoning. One way to decide whether to split apart such statements is to consider whether both components could be assessed together. In the first example above, assessment not only could be done using a single demonstration of proficiency but probably should be done this way. For the second example, however, assessment would require looking at two different outcomes separately. Therefore, that statement might be better rewritten as two:

· Develop hypotheses based in scientific reasoning.

· Evaluate contradictory explanations of social phenomena through reasoned, scientific analysis.

Designing Assessment Projects

Why Measurement Matters

Assessment projects have two primary purposes:

1. To gauge student progress toward specific outcomes

2. To provide insight regarding ways to facilitate the learning process

Measurement approaches that provide summative success data (such as percentages of students passing an exam or grade distributions) may be fine if your aim is to demonstrate achievement of a certain acceptable threshold of proficiency. However, such measures alone often fail to provide useful insight into what happened along the way that either helped or impeded the learning process. In the absence of such insights, assessment reporting can become a stale routine of fulfilling a responsibility – something to be completed as quickly and painlessly as possible. At its worst, assessment is viewed as 1) an evaluation of faculty teaching that presumes there’s something wrong and it needs to be fixed, and 2) a bothersome process of jumping through hoops to satisfy external demands for accountability. However, when assessment is focused on student learning rather than on instruction, the presumption is not that there’s something wrong with the teaching, but rather that there’s always room for improvement in student achievement of desired learning outcomes.

When faculty embrace assessment as a tool to get information about student learning dynamics, they are more likely to select measurement approaches that yield information about how the students learn, where and why gaps in learning occur, and how students respond to different teaching approaches. Faculty can then apply this information toward helping students make their learning more efficient and effective.

So, if you are already assessing student learning in your class, what is to be gained from putting that assessment through a formal reporting process? For one thing, the purposeful application of classroom assessment techniques with the intention of discovering something new fosters breadth of perspective that can otherwise be difficult to maintain. The process makes faculty more alert to students’ learning dynamics, inspires new instructional approaches, and promotes a commitment to professional growth.

Misunderstandings about what constitutes an acceptable assessment measure often underlie frustration with assessment processes. CNM’s assessment process accommodates diverse and creative approaches. The day-to-day classroom assessment techniques that faculty already use to monitor students’ progress can serve as excellent measurement choices. The alignment of course SLOs to program SLOs makes it feasible for faculty to collaboratively apply their classroom assessment techniques toward the broader assessment of their program, even though they may all be using different measures and assessing different aspects of the program SLO. When faculty collectively strive to more deeply understand the conditions under which students in a program learn best, a broader picture of program dynamics emerges. In such a scenario, opportunities to better facilitate learning outcomes can more easily be identified and implemented, leading to greater program efficiency, effectiveness, and coherence.

Assessment Cycle Plans

The Student Academic Assessment Committee (SAAC) asks program faculty to submit plans every five years outlining when and how they will assess their program SLOs over the course of the next five years. For newly approved programs/general education areas/discipline areas, the following must be completed by the 15th of the following October:

· Enter student learning outcome statements (SLOs) in the college’s curriculum management system.

· Develop and submit to SAAC a 5-year assessment cycle plan (using the form provided by SAAC).

At least one outcome should be assessed each year, and all outcomes should be assessed within the 5-year cycle. SAAC strongly recommends that each outcome be assessed for at least two consecutive years. Cycle plans for general education areas should include all courses listed for the area within the CNM General Education Core Curriculum. And, assessment within courses having multiple sections should include all sections, whenever feasible.

All programs/areas are encouraged to assess individual SLOs across a variety of settings, use a variety of assessment techniques (multiple measures), and employ sampling methods as needed to minimize the burden on the faculty. (See Collecting Evidence of Learning.)

Choosing an assessment approach begins with considering what it is the program faculty wants to know. The assessment cycle plan should be designed to collect the information needed to answer the faculty’s questions. Much educational assessment begins with one or more of the following questions:

1. Are students meeting the necessary standards?

Standards-based assessment

2. How do these students compare to their peers?

Benchmark comparisons

3. How much is instruction impacting student learning?

Value-added assessment

4. Are changes making a difference?

Investigative assessment

5. Is student learning potential being maximized?

Process-oriented assessment

Determining whether students are meeting standards usually involves summative measures, such as final exams. Standards-based assessment relies on prior establishment of a target level of achievement, based on external performance standards or internal decisions about what constitutes ‘good enough.’ Outcomes are usually interpreted in terms of the percentage of students meeting expected proficiency levels.

Finding out how students compare to their peers typically involves comparing test or survey outcomes with those of other colleges. Benchmark comparisons, based on normative group data, may be used when available. Less formal comparisons may involve looking at summary data from specific institutions, programs, or groups.

Exploring how an instructional program or course is impacting student learning, often termed ‘value-added assessment,’ is about seeing how much students are learning compared to how much they knew coming in. This approach typically involves either longitudinal or cross-sectional comparisons, using the same measure for both formative and summative assessment. In longitudinal analyses, students are assessed at the beginning and end of an instructional unit or program (given a pre-test and a post-test, for example). The results can then be compared on a student-by-student basis for calculation of mean gains. In cross-sectional analyses, different groups of students are given the same assessment at different points in the educational process. For example, entering and exiting students are asked to fill out the same questionnaire or take the same test. Mean results of the two groups are then compared. Cross-sectional comparisons can save time because the formative and summative assessments can be conducted concurrently. However, they can be complicated by demographic and/or experiential differences between the groups.
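As a minimal illustration of these calculations, consider the following sketch; the student identifiers and scores are invented, not CNM data:

```python
# Hypothetical sketch of a longitudinal value-added calculation:
# each student has matched pre-test and post-test scores on the same measure.
from statistics import mean

scores = {
    "student_a": (55, 78),  # (pre-test, post-test)
    "student_b": (62, 70),
    "student_c": (48, 81),
}

# Per-student gain, then the mean gain across students.
gains = {name: post - pre for name, (pre, post) in scores.items()}
print(f"Mean gain: {mean(gains.values()):.1f} points")

# A cross-sectional comparison instead compares the group means of entering
# and exiting students assessed with the same instrument at the same time.
entering = [50, 58, 61, 47]
exiting = [72, 80, 69, 75]
print(f"Exiting minus entering mean: {mean(exiting) - mean(entering):.1f} points")
```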

Studying the effects of instructional and/or curricular changes usually involves comparing results obtained following a change to those obtained with the same measure but different students prior to the change. This is essentially an experimental approach, though it may be relatively informal. (See the CNM Handbook for Educational Assessment.)

Exploring whether student learning is being maximized involves process-oriented assessment strategies. To delve into the dynamics of the educational process, a variety of measures may be used at formative and/or summative stages. The goal is to gain insights into where and why student learning is successful and where and why it breaks down.

The five assessment questions presented above for program consideration can also be used independently by faculty at the course level. Because it is built upon the alignment of course and program SLOs, as discussed in the following section, the CNM assessment model allows faculty to use different approaches within their own classes based on what they want to know. Putting together findings from varied approaches to conduct a program-level analysis, versus having everyone use the same approach, yields a more complete portrait of learning outcomes.

Alignment of Course and Program SLOs

As illustrated in Figure 6, the process of program assessment revolves around the program’s student learning outcome statements. Although some assessment techniques (such as employer surveys, alumni surveys, and licensing exams) serve as external measures of program SLOs, the internal assessment process often begins with individual instructors working with course SLOs that have been aligned to the program SLOs.

The alignment between course and program SLOs allows flexibility in how course-level assessment is approached. When faculty assess their students’ progress toward their course SLOs, they also assess their students’ progress toward the overarching program SLOs. Therefore, each instructor can use whatever classroom assessment techniques work best for his/her purposes and still contribute relevant assessment findings to a collective body of evidence. If instructors are encouraged to ask their own questions about their students’ learning and conduct their own assessment in relation to their course SLOs, not only will course-related assessment be useful for program reporting, but it will also be relevant to the individual instructor.

Depending upon how the program decides to approach assessment, faculty may be able to use whatever they are already doing to assess their students’ learning. Or, they might even consider using the assessment process as an opportunity to conduct action research. Academic freedom in assessment at the course level can be promoted by putting in place clearly articulated program-level outcomes and defining criteria associated with those outcomes. Descriptive rubrics (whether holistic or analytic), rating scales, and checklists can be useful tools for clarifying what learning achievement looks like at the program level. Once a central reference is in place, faculty can more easily relate their course-level findings to the program-level expectations.

Figure 6: The CNM Assessment Process

Clear alignment between course and program SLOs, therefore, is a prerequisite for using in-class assessment to support program assessment. Questions of assessment aside, most people would agree that having every course SLO (in courses comprising a program) contribute in some way to students’ development of at least one program SLO strengthens the overall integrity and coherence of the program. To map (a.k.a. crosswalk) course SLOs to program SLOs, consider using a matrix such as the one below. The notations in the cells where the rows and columns intersect represent existence of alignment.

Table 6: A Model for SLO Mapping

Program SLOs (columns): Program SLO #1, Program SLO #2, Program SLO #3

Course SLO #1: Mastered

Course SLO #2: Reinforced

Course SLO #3: Introduced; Reinforced

Course SLO #4: Mastered
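A curriculum map like Table 6 can also be kept in a simple electronic form so that coverage gaps are easy to spot. The following is a hypothetical sketch; the alignments and levels shown are illustrative, not an actual CNM map:

```python
# Hypothetical curriculum map: for each course SLO, the program SLO(s) it
# supports and the level of alignment (Introduced, Reinforced, or Mastered).
curriculum_map = {
    "Course SLO #1": {"Program SLO #1": "Mastered"},
    "Course SLO #2": {"Program SLO #2": "Reinforced"},
    "Course SLO #3": {"Program SLO #1": "Introduced", "Program SLO #3": "Reinforced"},
    "Course SLO #4": {"Program SLO #3": "Mastered"},
}

program_slos = {"Program SLO #1", "Program SLO #2", "Program SLO #3"}

# Flag any program SLO that no course SLO is aligned to.
covered = {p for alignments in curriculum_map.values() for p in alignments}
uncovered = program_slos - covered
print("Program SLOs with no aligned course SLO:", sorted(uncovered) or "none")
```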

The more clearly defined (and agreed upon by program faculty) the program SLOs are, the easier it is to align course SLOs to them and the more useful the pooling of assessment findings will be. For this reason, schools are encouraged to carefully analyze their program SLOs and consider whether any of them consist of several component skills (often referred to as elements). If so, identification of course SLO alignments (and ultimately the entire assessment process) will be easier if each of the component skills is identified and separately described. Consider developing a rubric, normed rating scale, or checklist for each such program SLO, clearly representing and/or describing what achievement of the outcome looks like.

For example, the SLO “Demonstrate innovative thinking in the transformation of ideas into forms” contains two component skills: innovative thinking and transforming ideas. Each of these elements needs to be described separately before the broader learning outcome can be clearly understood and holistically assessed. Some course SLOs may align more directly to development of innovative thinking while others may align more directly to the transformation of ideas into forms.

The advantage of using a descriptive rubric for this sort of SLO deconstruction is that you can define several progressive levels of achievement (formative stages) and clearly state what the student behavior looks like at each level. Faculty can then draw connections between their own course SLOs (and assessment findings) and the stages of development associated with the program SLO.

Following is a sample rubric, adapted from the AAC&U Creative Thinking VALUE Rubric (available at www.aacu.org). See the section Using Descriptive Rubrics to Make Assessment Coherent for more information on developing and getting the most out of rubrics.

Table 7: Sample Rubric for Innovative Thinking in the Transformation of Ideas into Art Forms

Innovative Thinking
· Advanced (3): Extends a novel or unique idea, question, format, or product to create new knowledge or a new application of knowledge.
· Intermediate (2): Creates a novel or unique idea, question, format, or product.
· Beginning (1): Reformulates available ideas.

Transforming Ideas
· Advanced (3): Transforms ideas or solutions into entirely new forms.
· Intermediate (2): Connects ideas in novel ways that create a coherent whole.
· Beginning (1): Recognizes existing connections among ideas or solutions.

Developing an Assessment Focus at the Course Level

Assessment becomes meaningful when it meets a relevant need, for example when it:

· Starts with a question you care about.

· Can confirm or disprove what you think.

· Can shed light on something you want to better understand.

· Can reveal whether one method works better than another.

· Can be of consequence to your future plans or those of your colleagues.

· Has implications for the future of the profession and/or broader society.

Before you can formulate a coherent course-level approach to assessment, it is necessary to connect your broad teaching goals with specific, assessable activities. Begin by asking yourself what you and your students do to frame the learning that leads to the desired outcome. Sometimes identifying specific activities that directly contribute to the outcome can be a challenge, but doing so is important for assessment to proceed.

Once you have connected specific activities with the outcome, decide what you want to know about your students’ learning. What is your goal in conducting the assessment? What are you curious about? See the five assessment questions listed in the Assessment Cycle Plans section above for ideas you can apply here as well.

A natural inclination is to focus assessment on problem areas. However, it is often more productive to focus on what you think is working than to seek to confirm what is not working. Here are some reasons why this is so:

1. Exploring the dynamics behind effective processes may offer insights that have application to problem areas. Stated another way: understanding conditions under which students learn best can help you identify obstacles to student learning and suggest ideas for how these can be addressed (either within or outside of the program).

2. Students who participate in assessment that confirms their achievements gain awareness of what they are doing effectively, which helps them develop generalizable success strategies. This benefit is enhanced when faculty discuss the assessment process and its outcomes with students.

3. Exploring successes reinforces instructor motivation and makes the assessment process more encouraging, rather than discouraging.

4. The process of gathering evidence of success and demonstrating the application of insights gained promotes your professional development and supports your program in meeting public accountability expectations.

5. Sharing discoveries regarding successful practices could contribute significantly to your professional field.

However, an important distinction exists between assessing student learning and assessing courses. Also, encouragement to explore successful practices should not be misconstrued as encouragement to use the assessment reporting process to defend one's effectiveness as an instructor. To be useful, assessment needs to do more than confirm that a majority of students at the end of a course can demonstrate a particular learning outcome. While this may be evidence of success, it reveals little about what contributed to the success. Assessment that explores successful practices needs to delve into what is working, how and why it is working, for whom and when it works best, and under what conditions.

Here are some questions that might help generate some ideas:

· Which of the course SLOs related to the program SLO(s) scheduled for assessment is most interesting or relevant to you?

· Is there anything you do that you think contributes especially effectively to development of the course outcome?

· Have you recently tried anything new that you might want to assess?

· Have students commented on particular aspects of the course?

· Have students’ course evaluations pointed to any particular strengths?

· Are there any external influences (such as industry standards, employer feedback, etc.) that point to strategies of importance?

Again, please see the Assessment Cycle Plans section for more information on formulating assessment questions and identifying appropriate assessment approaches. The information there is applicable at the course level as well.

Planning an Assessment Approach at the Course Level

How can you best measure students’ achievement toward the specific outcome AND gain insights that will help you improve the learning process? The choice of an appropriate measurement technique is highly context specific. However, if you are willing to consider a variety of options and you understand the advantages and disadvantages of each, you will be well prepared to make the selection.

Please keep the following in mind:

· Assessment does not have to be connected with grading. While grading is a form of assessment, there is no need to limit assessment to activities resulting in grades. Sometimes, removal from the grading process can facilitate collection of much more revealing and interesting evidence.

· Assessment does not have to involve every student equally. Sometimes sampling methods make manageable an assessment that would otherwise be unreasonably burdensome to carry out. For example, you may not have time to interview every student in a class, but if you interview a random sample of students, you may be able to obtain essentially the same results in far less time. If you have an assignment you grade using course criteria and want to apply it to program-level criteria, you may be able to use a sample rather than re-evaluate every student's work. (A brief sampling sketch follows this list.)

· Assessment does not have to be summative. Summative assessment occurs at the end of the learning process to determine retrospectively how much or how well students learned. Formative assessment, by contrast, occurs during the developmental phases of learning. Of the two, formative assessment often provides the greater opportunity for insight into students' learning dynamics. It also enables you to identify gaps in learning along the way, while there is still time to intervene, rather than at the end, when it is too late. Consider using both formative and summative assessment.

· It is not necessary that all program faculty use a common assessment approach. A diverse array of assessment approaches can be conducted concurrently by different faculty teaching a wide range of courses within a program and assessing the same outcome. The key is group agreement on how the outcome manifests, i.e., what it looks like when students have achieved the learning outcome and what criteria are used to assess it. This can be accomplished with a precisely worded SLO, a list of SLO component skills, descriptive rubrics (see Using Descriptive Rubrics to Make Assessment Coherent below), descriptions from industry standards, normed rating scales, checklists, etc. Once a shared vision of the outcome has been established, all means of assessment, no matter how diverse, will address the same end.

· Assessment does not have to meet the standards of publishable research. Unless you hope to publish your findings in a peer-reviewed journal, your classroom assessment need not be flawless to be useful. In this regard, assessment can be an opportunity for 'action research.'

· Some assessment approaches may need IRB approval. See IRB and Classroom Research regarding research involving human subjects. And, for further information, see the CNM Handbook for Educational Research.

· Assessment interpretation does not need to be limited to the planned assessment techniques. It is reasonable to bring all pertinent information to bear when trying to understand strengths and gaps in student learning. Remember, the whole point of assessment is to gain useful insights.

· Assessment does not have to be an add-on. Most of the instructional approaches faculty use in their day-to-day teaching lend themselves to assessment of program outcomes. Often, turning an instructional approach into a valuable source of program-level assessment information is just a matter of documenting the findings.
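
As noted in the sampling bullet above, a simple random sample can stand in for the whole class when interviewing every student or re-scoring every artifact is impractical. The following is a minimal sketch, assuming a hypothetical roster of 28 students; the roster, seed, and sample size are illustrative only.

```python
import random

# Hypothetical roster; in practice this would come from the section's enrollment list.
roster = [f"Student {i:02d}" for i in range(1, 29)]   # 28 students

random.seed(42)                      # optional: makes the draw reproducible for documentation
sample = random.sample(roster, k=8)  # simple random sample of 8 students

print("Selected for interviews or program-level re-scoring:")
for name in sample:
    print(" -", name)
```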

It may be helpful to think of assessment techniques within the five broad categories identified in Table 8.

Table 8: Common Assessment Measures

Written Tests: misconception checks, preconception checks, pre- or post-tests, pop quizzes, quizzes, standardized exams

Document/Artifact Analysis: artwork, displays, electronic presentations, exhibitions, homework, journals, portfolios, projects, publications, research, videos, writing assignments

Process Observations: auditions, classroom circulation, demonstrations, enactments, experiments, field notes, performances, process checklists, simulations, speeches, tick marking, trial runs

Interviews: calling on students, case studies, focus groups, formal interviews, in-class discussions, informal interviews, oral exams, out-of-class discussions, study-group discussions

Surveys: alumni surveys, clicker questions, employer surveys, feedback requests, show of hands, student surveys

The lists of techniques above are by no means comprehensive. Plug in your own classroom assessment techniques wherever they seem to fit best. The purpose of categorizing techniques this way is to illustrate both their common characteristics and the variety of options available. Yes, interviews and surveys are legitimate assessment techniques; you need not limit yourself to one paradigm. Consider using techniques you may not have previously thought sufficiently valid, and you may begin to see that much of what you are already doing can be used as is, or adapted, for program-level assessment. You may also begin to have more fun with assessment and find the process more relevant to your professional interests.

Some concepts to help you select a measure that will yield the type of evidence you want are presented in Table 9 below. Each pair of descriptors represents a continuum along which assessment measures can be positioned, depending upon the context in which they are used.

Table 9: Descriptors Related to Assessment Measures

Direct: Student products or performances that demonstrate specific learning has taken place (WCU, 2009)
Indirect: Implications that student learning has taken place (may be in the form of student feedback or third-party input)

Objective: Not influenced by personal perceptions; impartial, unbiased
Subjective: Based on or influenced by personal perceptions

Quantitative: Expressible in terms of quantity; directly, numerically measurable
Qualitative: Expressible in terms of quality; how good something is

Empirical: Based on experience or experiment
Anecdotal: Based on accounts of particular incidents

Note that objectivity and subjectivity can apply to the type of evidence collected or the interpretation of the evidence. Empirical evidence tends to be more objective than anecdotal evidence. And, interpretation of outcomes with clearly identifiable criteria tends to be more objective than interpretation requiring judgment.

Written tests with multiple-choice, true-false, matching, single-response short-answer/fill-in-the-blank, and/or mathematical-calculation questions are typically direct measures and tend to yield objective, quantitative data. Responses to objective test questions are either right or wrong; therefore, objective test questions remain an assessment staple because the evidence they provide is generally viewed as scientifically valid.

On the other hand, assessing short-answer and essay questions is more accurately described as a form of document analysis. When document/artifact analyses and process observations are conducted using objective standards (such as checklists or well-defined rubrics), these methods can also yield relatively direct, quasi-quantitative evidence. However, the more observations and analyses are filtered through the subjective lens of personal or professional judgment, the more qualitative the evidence. For example, consider the rating of performances by panels of judges. If trained judges are looking for specific criteria that either are or are not present (as with a checklist), the assessment is fairly objective. But, if the judges evaluate the performance based on knowledge of how far each individual performer has progressed, aesthetic impressions, or other qualitative criteria, the assessment is more subjective.

Objectivity is highly prized in U.S. culture. Nonetheless, some subject matter requires a degree of subjectivity for assessment to hit the mark. A work of art, a musical composition, a poem, a short story, or a theatrical performance that contains all the requisite components but shows no creativity, grace, or finesse and fails to make an emotional or aesthetic impression does not demonstrate the same level of achievement as one that creates an impressive gestalt. Not everything worth measuring is objective and quantitative.

Interviews and surveys typically yield indirect, subjective, qualitative, anecdotal evidence and can nonetheless be extremely useful. Soliciting feedback from students, field supervisors, employers, etc., can provide insights into student learning processes and outcomes that are otherwise inaccessible to instructors.

Note that qualitative information is often translated to numerical form for ease of analysis and interpretation. This does not make it quantitative. A common example is the use of Likert scales (named after their originator, Rensis Likert), which typically ask respondents to indicate evaluation or agreement by rating items on a scale. Typically, each rating is associated with a number, but the numbers are symbols of qualitative categories, not direct measurements.

Table 10: Sample Likert-Scale Items

· Please rate the clarity of this handbook: Poor (1), Fair (2), Good (3), Great (4)

· Assessment is fun! Disagree (1), Somewhat Disagree (2), Somewhat Agree (3), Agree (4)
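
Because the numeric codes above stand for ordered categories rather than true measurements, frequency counts and the median category are generally safer summaries than an arithmetic mean. Below is a minimal sketch using hypothetical responses to the "Assessment is fun!" item; the response values are invented for illustration.

```python
from collections import Counter
from statistics import median

# Hypothetical responses coded 1-4 (Disagree ... Agree); ordinal categories, not measurements.
labels = {1: "Disagree", 2: "Somewhat Disagree", 3: "Somewhat Agree", 4: "Agree"}
responses = [4, 3, 3, 2, 4, 4, 3, 1, 3, 4, 2, 3]

counts = Counter(responses)
print("Frequency distribution:")
for code in sorted(labels):
    print(f"  {labels[code]:>18}: {counts.get(code, 0)}")

# Report the median category (rounded down if the median falls between two categories).
print("Median response:", labels[int(median(responses))])
```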

In contrast, direct numerical measures such as salary, GPA, age, height, weight, and percentage of correct responses yield quantitative data.

Using Descriptive Rubrics to Make Assessment Coherent

Descriptive rubrics are tools for making evaluation that is inherently subjective more objective. Descriptive rubrics are scoring matrices that differ from rating scales in that they provide specific descriptions of what the outcome looks like at different stages of sophistication. They serve three primary purposes:

1. To communicate learning outcome expectations (to students and/or to faculty).

2. To facilitate fairness and consistency in an instructor’s evaluation of multiple students’ work.

3. To facilitate consistency in ratings among multiple evaluators.

Used in class, descriptive rubrics can help students better understand what the instructor is looking for and better understand the feedback received from the instructor. They can be connected with individual assignments, course SLOs, and ad hoc assessment activities. Used at the program level, rubrics can help program faculty relate their course-level assessment findings to program-level learning outcomes.

Rubrics are particularly useful in qualitative assessment of student work. When grading large numbers of assignments, even the fairest of instructors can be prone to grading fatigue and/or the tendency to subtly shift expectations based on cumulative impressions of the group’s performance capabilities. Using a rubric provides a framework for reference that keeps scoring more consistent.

Additionally, instructors who are all using the same assessment approach in different sections of a course can use a rubric to ensure that they are all assessing outcomes with essentially the same level of rigor. However, to be effective as norming instruments, rubrics need to have distinct, unambiguously defined performance levels.

A rubric used to assess multiple competencies can be referred to as non-linear, whereas a rubric used to assess several criteria related to a single, broad competency can be described as linear (Tomei, p. 2). Linear rubrics are perhaps most relevant to the program assessment process. A linear rubric that describes progressive levels of achievement within several elements of a broad program SLO can serve as a unifying and norming instrument across the entire program. This is an important point: such a rubric can be used to make sense of disparate findings from a wide variety of assessments carried out in a wide variety of courses, so long as all relate to the same program SLO. Each instructor can reference some portion of a shared rubric in relating his or her classroom assessment findings to the program SLO. It is not necessary that every assessment address every element of the rubric.

The Venn diagram presented in Figure 7 illustrates how a rubric can serve as a unifying tool in program assessment that involves a variety of course assessment techniques. To make the most of this model, faculty need to come together at the end of an assessment period, pool their findings, and collaboratively interpret the totality of the information. Collectively, the faculty can piece the findings together, like a jigsaw puzzle, to get a broader picture of the program's learning dynamics in relation to the common SLO.

For example, students in entry-level courses might be expected to demonstrate beginning-level skills, especially during formative assessment. However, if students in a capstone course are still demonstrating beginning-level skills, then the faculty, alerted to this, can collectively explore the cross-curricular learning processes and identify strategies for intervention. The information gleaned through the various assessment techniques, since it all relates to the same outcome, provides clues. And, additional anecdotes may help fill in any gaps. As previously noted, assessment interpretation does not need to be limited to the planned assessment techniques.
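
As a hedged illustration of this pooling, suppose each instructor reports how many of their assessed students landed at each level of one shared rubric element (using the Beginning/Intermediate/Advanced levels from Table 7). The course names and counts below are hypothetical; the point is only to show how pooled ratings can reveal a cross-curricular pattern such as the capstone concern described above.

```python
# Hypothetical counts of students at each level of one shared rubric element,
# as reported by instructors teaching different courses in the same program.
findings = [
    {"course": "Entry-level course", "Beginning": 14, "Intermediate": 6,  "Advanced": 1},
    {"course": "Mid-program course", "Beginning": 5,  "Intermediate": 12, "Advanced": 4},
    {"course": "Capstone course",    "Beginning": 2,  "Intermediate": 9,  "Advanced": 10},
]

levels = ["Beginning", "Intermediate", "Advanced"]
print(f"{'Course':<20}" + "".join(f"{lvl:>14}" for lvl in levels))
for row in findings:
    total = sum(row[lvl] for lvl in levels)
    cells = "".join(f"{row[lvl] / total:>14.0%}" for lvl in levels)
    print(f"{row['course']:<20}" + cells)
```

If the hypothetical capstone row still showed mostly Beginning-level ratings, that would be the signal, described above, for the faculty to explore the cross-curricular learning process together.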

Rubric Design

To be effective, descriptive rubrics need to focus on competencies, not tasks, and they need to validly describe demonstration of competency at various levels.

Rubric design is highly flexible. You can include as many levels of performance as you want (most people use 3-5) and use whatever performance descriptors you like. To give you some ideas, below are some possible descriptor sets:

· Beginner, Amateur, Professional

· Beginning, Emerging, Developing, Proficient, Exemplary

· Below Expectations, Satisfactory, Exemplary

· Benchmark, Milestone, Capstone

· Needs Improvement, Acceptable, Exceeds Expectations

· Needs Work, Acceptable, Excellent

· Neophyte, Learner, Artist

· Novice, Apprentice, Expert

· Novice, Intermediate, Advanced

· Rudimentary, Skilled, Accomplished

· Undeveloped, Developing, Developed, Advanced

Table 11: Sample Rubric for Assessment of Decision-Making Skill

 

Identifies decisions to be made
· Novice (0): Recognizes a general need for action
· Developing (2): Owns the need to decide upon a course of action
· Advanced (4): Clearly defines decisions and their context and importance

Explores alternatives
· Novice (0): Overlooks apparent alternatives
· Developing (2): Considers the most obvious alternatives
· Advanced (4): Fully explores options, including complex solutions

Anticipates consequences
· Novice (0): Considers only desired consequences
· Developing (2): Identifies the most likely consequences
· Advanced (4): Analyzes likelihood of differing outcomes

Weighs pros and cons
· Novice (0): Identifies only the most obvious pros and cons
· Developing (2): Recognizes pros and cons and compares length of lists
· Advanced (4): Weighs pros and cons based on values/goals/ethics

Chooses a course of action
· Novice (0): Does not choose own course of action
· Developing (2): Chooses a course based on external influences
· Advanced (4): Expresses logical rationale in choosing a course

Follows through
· Novice (0): Does not follow through
· Developing (2): Acts on decision
· Advanced (4): Acts on decision, observes outcome, and reflects upon process

Scores of 1 and 3 (columns with numerical values but no descriptors) may be assigned when a student's demonstration falls between the described levels. Criterion scores are recorded in a Score column and summed for a TOTAL.

Performance levels can be arranged from low to high or from high to low. As shown in the sample rubric in Table 11, blank columns (given numerical values but no descriptors) can be inserted between described performance levels for use when demonstration of learning falls somewhere in between the descriptions. However, some argue that well-written rubrics are clear enough that no in-between or overlap can occur.

A recommendation for description writing is to start with the most advanced level of competency (unless that level exceeds expectations) and work down to the lowest level.

Tip: The process of agreeing on outcome descriptions can help clarify goals and values shared among program faculty. To facilitate collaborative development of the descriptions in a program-level rubric, start with a brainstorming session. Then, list and group ideas to identify emerging themes.
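
For an analytic rubric such as the decision-making sample in Table 11, each criterion is scored separately and the criterion scores can then be totaled or averaged. The sketch below uses hypothetical scores on the rubric's 0-4 scale; it illustrates only the arithmetic, not a prescribed scoring tool.

```python
# Hypothetical scores (0-4) for one student's work, one score per rubric criterion.
scores = {
    "Identifies decisions to be made": 3,
    "Explores alternatives": 2,
    "Anticipates consequences": 2,
    "Weighs pros and cons": 4,
    "Chooses a course of action": 3,
    "Follows through": 1,
}

total = sum(scores.values())
average = total / len(scores)

for criterion, score in scores.items():
    print(f"{criterion:<34} {score}")
print(f"{'TOTAL':<34} {total}  (average {average:.1f} out of 4)")
```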

Using Formative Assessment to Reinforce Learning

As contrasted with summative assessment, the purpose of which is to evaluate student learning at the end of a learning process, formative assessment aims to monitor student learning in progress. In and of itself, formative assessment does not necessarily reinforce learning. However, if intentionally used as an instructional technique, formative assessment provides timely feedback that can help both the students and the instructor adjust what they are doing to move more directly toward the learning goals. The key is using the assessment both to provide feedback to students and to redirect one’s teaching approach. The feedback students receive affirms their learning progress (providing positive reinforcement) and also lets them know where they have not yet met the outcome goals. And, understanding how well students are grasping the learning and where the learning is breaking down helps the instructor take appropriate steps to intervene and get the students back on track, which also reinforces learning.

To inform future teaching strategies, it is not so much the identification of gaps in the learning process that is beneficial as the identification of ways to correct those gaps. Together, the methods used to redirect the learning process and the instructor's subsequent observation of how well those methods worked can lead to insights with implications for future instructional approaches. To this end, a focus on strengths, with the intention of identifying successful teaching strategies, is likely to be more productive than a focus on weaknesses in teaching and/or learning.

Possible responses to formative assessment include, but certainly are not limited to:

· Clarifying the learning goals.

· Reinforcing concept scaffolding.

· Providing additional examples and/or practice.

· Having students help other students learn (share their understanding).

· Modeling and/or discussing techniques and/or strategies.

· Teaching metacognitive processes.

· Reframing the learning within the epistemology of the discipline (i.e., showing how it fits the nature and limitations of knowledge within the discipline and/or the methods and processes used within the discipline).

Using formative assessment to facilitate classroom discussions can help students learn to view their own learning process with greater breadth of perspective and objectivity. With practice, students can learn to switch back and forth between learning and observing their own learning process. Conducting formative assessments and talking with students about the results can help students develop metacognitive skills. Learning to critically evaluate their own progress and motives as learners encourages students to accept responsibility for their own learning outcomes. And, seeing their progress along the way helps students develop an intrinsic appreciation for the learning process. Nothing motivates like realizing one is becoming better at something.

Measurement approaches used in formative assessment are not inherently different from those used in summative assessment. However, because formative assessments are typically less formal and carry less (or no) point value for grading, instructors are more apt to be creative in using them. See the section Classroom Assessment Techniques (CATs) for fifty activities that are commonly used as formative assessment processes.

One might rightly question whether the use of formative assessments provides the data necessary to demonstrate programs’ achievement of their SLOs. After all, as a publicly funded institution, CNM does need to demonstrate some level of program success to satisfy needs for public accountability. If only one assessment approach were used to assess student progress toward a program SLO, then for accountability purposes, it would be desirable for that assessment approach to be summative (to show that the outcomes were achieved to an acceptable level). However, the CNM assessment model allows (but does not require) programs to draw upon multiple measures from multiple courses for a more comprehensive picture of the learning dynamics leading up to and including the achievement of any given program SLO. Viewed together, findings from formative assessments across multiple courses, all helping to develop a shared program SLO, will typically provide more actionable information than will a single summative assessment. When formative and summative findings are analyzed in combination, the potential for actionable insights is magnified.

For example, if a program uses an external licensing exam as the sole indicator of student progress toward a particular outcome and 90% of students who take the exam pass it, we may deduce that the program is doing an acceptable job of developing the learning outcome. However, we receive little information that can be used to inform a plan of action. If we focus on how the results are less than perfect, we might ask what happened with the 10% who did not pass. Where along the way did their learning break down? Even if we know that the questions they missed pertained to the target SLO and know what the questions were, we may have to rely on anecdotal information (such as the students’ prior course performance, faculty knowledge of extenuating circumstances, etc.) to understand why those students missed those questions. Alternatively, if we focus on the performance of the successful students, we might ask what helped these students to be successful. Again, in the absence of other assessment information, the answers probably lie in anecdotal observations. And, while there is nothing wrong with bringing anecdotal information to bear on the interpretation of assessment findings, the conclusions so derived typically are not generalizable.

In this example, the faculty could attribute the outcomes to factors beyond the control of the program and conclude that no changes need to be made. That would be fine if all that mattered were demonstration of a certain threshold of success for public accountability purposes. However, CNM as an institution values the impact of its education on students’ lives and the community. Public accountability is not the only reason (nor even the most important) for conducting assessment. When optimizing student learning is the primary goal, there is always room for improvement in learning outcomes. Standing alone, a summative assessment with results that represent anything less than 100% success, therefore, suggests the need for a plan of action, even if the action is merely to recommend changes in college practices, supportive services, etc., to address the factors that are beyond the control of the program.

If the program faculty in this example were to consider the findings of formative assessments conducted in several different courses alongside the licensing exam results, the combined evidence might provide insight into the development of the component skills that comprise the broader program SLO. A holistic look at the learning process might reveal patterns or tendencies, and the faculty might identify successful practices that could be implemented more broadly, or earlier, to reinforce student learning in additional ways.

Collecting Evidence of Learning

Evidence of student learning can be as informal as an opinion poll or as formal as the data collected in a randomized controlled trial. Because evidence of learning can take so many different forms, there is no single method for collecting it. Perhaps the most important consideration for program assessment purposes is that the evidence be documented.