Data Collection and Closing the Loop:
Assessment’s Third and Fourth Steps
Office of Institutional Assessment & Effectiveness
SUNY Oneonta
Spring 2011
Important Assessment “Basics”
Establishing congruence among institutional goals, programmatic and course objectives, learning opportunities, and assessments
Linkages to disciplinary (and, as appropriate, accreditation/certification) standards
Using a variety of measures, both quantitative and qualitative, in search of convergence
Value of course-embedded assessment
Course- vs. program-level assessment
Course- vs. Program-Level Assessment
The focus of SUNY Oneonta assessment planning is on programmatic student learning objectives
  Not about assessment of individual students or faculty
  Rather, the question is: to what extent are students achieving programmatic objectives?
Data collection will still, for the most part, take place in the context of the classroom (i.e., course-embedded assessment)
However, the program must have a process in place for compiling and aggregating data across courses and course sections, as appropriate
What You’ve Done So Far
1. Development of programmatic student learning objectives
   Including discipline-appropriate as well as college-wide expectations for student learning
   Covering cognitive, behavioral, and attitudinal characteristics as appropriate
2. Curriculum mapping
   Determining the extent to which learning objectives correspond to curricular experiences
   Reviewing rationale for program requirements and structure
   Exploring potential for developing “assessment database,” leading directly to Step 3
Sample Curriculum Map – Linking Step 2 to Step 3
Courses (rows) × SLOs 1–7 (columns); each cell lists the assessment(s) used:
  Introductory Course:  E;  E
  History/Theories:     E;  P;  P
  Methods:              E, L;  E, L;  L
  Required Course 1:    E;  E;  E, P;  P
  Required Course 2:    P;  P;  P
  Required Course 3:    E;  E;  P
  Required Course 4:    I, PO;  I, PO
  Capstone:             PO;  PO;  PO;  PO;  PO
Assessment Key: P = Paper, E = Exam, PO = Portfolio, O = Oral Presentation, L = Lab Assignment, I = Internship
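One way to act on the “assessment database” idea from Step 2 is to keep the curriculum map in machine-readable form. The Python sketch below is not part of the original presentation; the course names and assessment codes come from the sample map above, but the specific SLO numbers attached to each assessment are invented for illustration.

    # Hypothetical sketch: the curriculum map kept as data so it can seed an
    # "assessment database." Courses and codes mirror the sample map above,
    # but the SLO numbers assigned to each assessment are invented.
    curriculum_map = {
        # course:              {SLO number: [assessment codes]}
        "Introductory Course": {1: ["E"], 2: ["E"]},
        "Methods":             {2: ["E", "L"], 3: ["E", "L"], 4: ["L"]},
        "Capstone":            {1: ["PO"], 2: ["PO"], 3: ["PO"], 4: ["PO"], 5: ["PO"]},
    }

    # Example query: which courses collect evidence for SLO 2, and with what?
    for course, slos in curriculum_map.items():
        if 2 in slos:
            print(course, "->", ", ".join(slos[2]))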
Collecting Assessment Data:
Assessment’s Third Step
Finding Evidence that Students are Achieving Programmatic Goals
Important Preliminary Activities
Reach consensus as a faculty on what constitutes good assessment practice
  No point in collecting meaningless data!
Develop strategies for assuring that the measures to be used are of sufficient quality
  Review by a person or group other than the faculty member who developed the measure
  Use of a checklist that demonstrates how the measure meets good-practice criteria developed by program faculty
Decide how the issue of “different sections” will be addressed
  Will the same measures be used? If not, how will comparability be assured?
Assuring Quality of Plan: Questions to Ask
Are assessment measures direct?
  Student perceptions of the program are valuable, but cannot be the only indicator of learning
Is there logical correspondence between the measure(s) and the learning objective(s) being assessed?
Is there a process for establishing reliable scoring of qualitative measures? (One possible check is sketched below.)
Are data being collected from a range of courses across the program (i.e., are they representative)?
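One possible way to address the reliable-scoring question above (a sketch, not something prescribed in the presentation) is to have two raters independently score the same sample of student work with the program’s rubric and compare their ratings. The Python example below uses invented ratings and computes simple percent agreement and Cohen’s kappa.

    # Hypothetical illustration: two raters scoring the same six student artifacts
    # with a shared rubric; the ratings are invented example data.
    from collections import Counter

    rater_a = ["meets", "exceeds", "approaches", "meets", "meets", "does not meet"]
    rater_b = ["meets", "meets", "approaches", "meets", "exceeds", "does not meet"]
    n = len(rater_a)

    # Observed agreement: share of artifacts both raters placed in the same category.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Agreement expected by chance, from each rater's category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))

    # Cohen's kappa corrects observed agreement for chance agreement.
    kappa = (observed - expected) / (1 - expected)
    print(f"Percent agreement: {observed:.0%}; Cohen's kappa: {kappa:.2f}")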
Suggestions for Maximizing Value of Assessment Data
Use a variety of assessment measures
  Quantitative and qualitative
  Course-embedded and “stand-alone” measures (e.g., ETS Major Field Tests, CLA results)
Use benchmarking as appropriate and available
Ultimately, convergence of assessment results is ideal (i.e., triangulation)
Establish a reasonable schedule for collecting assessment data on an ongoing basis (i.e., approximately 1/3 of learning objectives per year)
Suggestions for Maximizing Value of Assessment Data (cont.)
For each learning objective, collect assessment data from a variety of courses at different levels as much as possible
  Helps assure results aren’t “idiosyncratic” to one course or faculty member
  Can provide insight into extent to which students are “developing” (cross-sectionally, anyway)
Also Consider the Following:
The value of a capstone experience for collecting assessment data
“Double dipping” (i.e., using the same evaluative strategies and criteria to assign grades and produce programmatic assessment data)
Working closely with other faculty in developing measures, especially when teaching courses with multiple sections
  Do measures have to be the same?
  No, but the more different they are, the harder it will be to compile data and reach meaningful conclusions
From Learning Objectives to Assessment Criteria
Once measures are selected, establish clear and measurable a priori “success” indicators
For each measure, determine what constitutes meeting and not meeting standards
While these definitions may vary across faculty, programs will need to use the same categories for results (e.g., exceeding, meeting, approaching, not meeting standards; see the sketch below)
Otherwise, reaching conclusions about “program effectiveness” will be difficult
Again, the more faculty collaborate with each other in establishing standards, the easier it will be to organize results and reach meaningful conclusions
Ultimately, it’s a programmatic decision
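As a hedged illustration of turning a priori “success” indicators into the shared reporting categories, the sketch below maps raw percent scores on one measure to categories. The cutoff values and scores are invented; each program would set its own standards.

    # Hypothetical sketch: mapping raw percent scores on one measure to the shared
    # reporting categories. Cutoffs and scores are invented examples only.
    CUTOFFS = [
        (90, "exceeding standards"),
        (75, "meeting standards"),
        (60, "approaching standards"),
        (0, "not meeting standards"),
    ]

    def categorize(score: float) -> str:
        """Return the reporting category for a raw percent score."""
        for minimum, category in CUTOFFS:
            if score >= minimum:
                return category
        return "not meeting standards"

    # Invented scores from one course-embedded measure tied to a single objective.
    scores = [95, 82, 58, 74, 61]
    print([categorize(s) for s in scores])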
Post-Assessment Considerations
Once data are collected, they must be organized and maintained in a single place
  An Excel spreadsheet will work just fine
They will also have to be compiled in some fashion, although the form this takes will depend on the program’s approach
  One possibility: examine, for each learning objective, the overall percentage of students who met or failed to meet standards (using averages); see the sketch below
  Or: break these percentages down by course level
Ultimately, some systematic organization and categorization of assessment results is necessary in order to move on to Step 4
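A minimal sketch of the compilation step, assuming results are exported from the program’s spreadsheet with one row per student result; the column names and records below are invented for illustration, and pandas is just one convenient tool for the percentage roll-ups described above.

    # Hypothetical sketch: compiling course-embedded results with pandas.
    # Column names and records are invented; a program's spreadsheet would differ.
    import pandas as pd

    # One row per student result on a measure tied to a learning objective.
    records = pd.DataFrame([
        {"objective": "SLO 1", "course_level": 100, "category": "meeting"},
        {"objective": "SLO 1", "course_level": 300, "category": "exceeding"},
        {"objective": "SLO 1", "course_level": 300, "category": "not meeting"},
        {"objective": "SLO 2", "course_level": 200, "category": "approaching"},
        {"objective": "SLO 2", "course_level": 400, "category": "meeting"},
    ])

    # Overall percentage of students at or above "meeting standards" per objective.
    records["met"] = records["category"].isin(["meeting", "exceeding"])
    print(records.groupby("objective")["met"].mean().mul(100).round(1))

    # The same percentages broken down by course level.
    print(records.groupby(["objective", "course_level"])["met"].mean().mul(100).round(1))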
Closing the Loop: Assessment’s Fourth Step
Using Assessment Data to Improve Programs, Teaching, and Learning
Now That You’ve Gone to All This Trouble…
The only good reason to do assessment is to use the results to inform practice
Can and should happen at the individual faculty level, but in the context of program assessment, the following needs to happen:
Provision of compiled, aggregated data to faculty for review and consideration
Group discussion of those data
DOCUMENTATION of the assessment process and results, conclusions reached by faculty, and actions to be taken (more about this later)
What Should be the Focus of the Closing the Loop Process?
Identification of “patterns of evidence” as revealed by the assessment data
  How are data consistent? Do students at different course levels perform similarly?
    Eventually, it will be possible to look at this issue over time
  How are they distinctive? Do students perform better on some objectives than others?
Comparison of expected to actual results
  What expectations were confirmed? What came as a complete surprise?
  What are possible explanations for the surprise?
What Should be the Focus of the Closing the Loop Process? (cont.)
The decision as to whether assessment results are “acceptable” to faculty in the program
What strengths (and weaknesses) are revealed? What explains the strengths and weaknesses?
Do they make sense, given results of curriculum mapping process and other information (e.g., staffing patterns, course offerings)?
And, most important, what should (and can) the program do to improve areas of weakness?
The process also provides an ideal opportunity to make changes in the assessment process itself, as well as in programmatic objectives, for the next assessment round
Some Possible Ways to Close the Assessment Loop
Faculty, staff, and student development activities
Program policies, practices, and procedures
Curricular reform
Learning opportunities
A Final Issue: The Importance of Documenting Assessment
Increasing requirements related to record-keeping on assessment and on actions taken based on assessment results
  Frequently, actions that are taken don’t “match” results
Documentation need not be highly formal, and in fact can be done effectively in tabular form for each objective, to include:
  Summary of results
  Brief description of strengths and weaknesses revealed by the data
  Planned revisions to make improvements as appropriate
  Planned revisions to the assessment process itself
Provides a record that can then be referred to in later assessment rounds and a way of monitoring progress over time
Developing an Assessment Plan:
Some Important Dates
May 3, 2010: Submission of Step 1 (Establishing Objectives) of college guidelines
December 1, 2010: Submission of Step 2 (Activities & Strategies) of guidelines
June 1, 2011: Submission of Steps 3 (Assessment) and 4 (Closing the Loop) [plans only]
2011-12 academic year: First round of data collection
APAC Members
Paul French
Josh Hammonds
Michael Koch
Richard Lee
Patrice Macaluso
William Proulx
Anuradhaa Shastri
Bill Wilkerson (Chair)
Patty Francis (ex officio)