PROVIDING STUDENTS WITH FEEDBACK FOR CONTINUOUS IMPROVEMENT

Stephen J. Elliott, Ph.D.¹, Christine R. Blomeke², and Michael D. Frick³

¹ Stephen J. Elliott, Ph.D., Associate Professor, Purdue University, Department of Industrial Technology, 401 N. Grant Street, West Lafayette, IN 47907-2021, USA, [email protected]
² Christine R. Blomeke, Graduate Researcher, Purdue University, Department of Industrial Technology, 401 N. Grant Street, West Lafayette, IN 47907-2021, USA, [email protected]
³ Michael D. Frick, Undergraduate Researcher, Purdue University, Department of Industrial Technology, 401 N. Grant Street, West Lafayette, IN 47907-2021, USA, [email protected]

Abstract ⎯ This paper describes the introduction of a feedback system for undergraduate students in a junior-level automatic identification and data capture class. Students have highlighted that, in general, feedback from some classes in the curriculum has been slow. This issue has been made a continuous improvement action item as part of the department's ABET continuous improvement program. The implemented solution was to provide students with personalized feedback regarding their understanding of the lecture learning objectives prior to undertaking the lab practicals. Student performance was measured through quizzes and exams to indicate whether the new system of feedback contributed to higher scores. The authors compared scores from the Spring 2008 (control) and Fall 2008 (experimental) semesters. The results show that for higher-order learning activities, the personalized feedback contributed to statistically significantly higher scores.

Index Terms ⎯ biometrics, education, continuous improvement

INTRODUCTION

According to Chickering and Gamson (1999), there are seven principles for good practice in undergraduate education. These include student-faculty contact, cooperation amongst students, the encouragement of active learning, providing students with prompt feedback, emphasis on time on task, the communication of high expectations, and the inclusion of a diverse set of talents and ways of learning. With respect to good feedback, Chickering and Gamson state that “in classes, students need frequent opportunities to perform and receive suggestions for improvement” (page 1). This is sometimes a challenge in large lecture courses; in this class there are 80 students per semester, and dialog is limited. The literature notes several ways to improve feedback to students, especially with the adoption of classroom technology such as Audience Response Systems, commonly known as clickers (Caldwell, 2007). Other communication tools available to instructors include e-mail, web pages, forum postings, and instant messenger discussions, typically delivered through educational course management systems such as Blackboard and Moodle. According to Newlin and Wang (2002), instructors can provide personalized, question-by-question feedback to each student in a timely fashion. Newlin and Wang identified, even anecdotally, that this feedback gives students greater feelings of support and increased instructor contact and availability.

CONTINUOUS IMPROVEMENT PROGRAM

Continuous improvement is undertaken in many different guises, from “water-cooler discussions” through to a rigorous and measurable evaluation program. Continuous improvement in an academic environment is done for a number of reasons, as outlined in Temponi (2005). There are several steps to continuous improvement, shown in Figure 1 below. They include gathering data from various sources (inputs), which can include faculty, student feedback, industrial advisory boards, alumni feedback, employer feedback, internal communications within the university environment, ABET accreditation requirements, and information from conferences, journals, etc., which identify issues for further study.

In this case, the issue was identified through surveys of current students as well as senior exit interviews. Once the issue had been identified, a subgroup of faculty examined how to provide feedback to students. Given the tools available to faculty in the classroom, the authors decided to implement a continuous feedback loop based initially on student input in the classroom, through the use of a clicker. This concept was already well established in the literature as a way of engaging students in such an environment. What was not clear was how to provide resources to students using the data collected in class, as the report functionality in the clicker software was limited.


FIGURE 1: CONTINUOUS IMPROVEMENT PROCESS

DESCRIPTION OF THE COURSE

The course under study is a junior-level course in automatic identification and data capture, covering technologies such as bar codes, card technologies, radio frequency identification, and biometrics. The course has the following objectives:

• Outline the various forms of AIDC technologies
• Differentiate AIDC technologies based on specific scenarios
• Formulate AIDC solutions and recommendations using statistical knowledge or mathematical ability
• Demonstrate the integration of AIDC equipment
• Critique case studies related to the deployment of AIDC technologies

The course is taught as a combination of face-to-face lectures (two per week), usually followed by a two-hour laboratory experience taught by graduate teaching assistants. The lecture component consists of a period of time for attendance, followed by an explanation of the agenda. Students also receive handouts and related information about the course, which includes a list of objectives that the instructor is covering for either the lecture or module. The laboratory experience typically consists of students providing an overview of case studies or applications of the technology related to their major, followed by one or more hands-on activities, and then a quiz. The course utilizes the Blackboard Course Management System. Each topic is broken up into learning modules, and each learning module has the same outline: PDF copies of the PowerPoint lectures, Adobe Presenter summary lectures, case studies that supplement the material, and online video case studies. Students also use the E-Instruction classroom response system clickers. Last semester, these were used for attendance purposes. This semester, the clickers were used as feedback tools to gain real-time information from the students on their understanding of a particular concept (see below).

MOTIVATION

There were several motivations for the study. The primary one, from the instructor's viewpoint, was to determine whether students were understanding the material. The second was to provide students with feedback on their understanding of the objectives. The third was to investigate whether there was any increase in their knowledge relative to the previous semester's performance.

Providing feedback to the student is an important component of learning. A recent survey of our department's senior undergraduate students' expectations indicated that feedback was important. When asked whether they received feedback from their instructors regarding their written assignments, 50% of the students agreed, 7.1% disagreed, 35.7% strongly disagreed, and the remainder did not respond. When asked about feedback with respect to verbal communication skills, 7.1% strongly agreed, 35.7% agreed, 21.4% were unsure, and 35.7% disagreed. This highlighted that there was some scope to improve feedback to students and, at the same time, to gather empirical evidence on whether providing feedback actually impacted their performance.

METHODOLOGY

There were a couple of iterations to the feedback loop, primarily as a result of understanding the technology and how to implement it in a classroom environment. Initially, the instructor outlined the lecture objectives at the start of the class, followed by the instruction; at the end of the class, the students would complete the survey using the clickers. Figure 2 shows the view that the students would see in class.

FIGURE 2: OBJECTIVE QUESTION DISPLAYED TO STUDENTS


This did not appear to be a good use of either the clickers or the instruction time. Completing 4 or 5 questions took almost 5 minutes, which sometimes caused the class to run long. Furthermore, there was no additional feedback to the student, just to the instructor, on where the students' knowledge was at any one point in time. The second iteration was to give students a list of objectives at the beginning of the class and then initiate the clicker system, which would run in the background while the slides were being projected. As the clickers were in “student-managed” mode, students could answer the questions whenever they wanted during the lecture, freeing up instruction time and keeping them engaged with the material. The data from the students was then downloaded and analyzed; any objective that was not understood would then be covered again in the next lecture. Depending on the topic at hand, sometimes the instructor would break the objectives into two sets, so he could display the results of the class responses to the students and answer any questions. Upon reflection, it appeared that the feedback was still class-specific, displaying the cumulative results of the students' understanding. So whilst feedback was now instantaneous, it was generic to the class.
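The analysis step amounts to tallying the responses per objective and flagging those that fall below an understanding threshold for re-coverage. A minimal sketch, assuming the clicker export can be reduced to (objective, understood) pairs and using an illustrative 70% threshold (both the data shape and the threshold are assumptions, not the paper's actual values):

```python
from collections import defaultdict

def objectives_to_review(responses, threshold=0.70):
    # responses: iterable of (objective_id, understood) pairs, one per
    # student answer, as reduced from the clicker export (assumed shape).
    # threshold: fraction of "understood" answers below which an
    # objective is re-covered in the next lecture (illustrative value).
    counts = defaultdict(lambda: [0, 0])  # objective_id -> [understood, total]
    for objective_id, understood in responses:
        counts[objective_id][1] += 1
        if understood:
            counts[objective_id][0] += 1
    return sorted(obj for obj, (yes, total) in counts.items()
                  if yes / total < threshold)

# Example: objective 3 (1 of 3 understood) falls below the threshold.
sample = [(1, True), (1, True), (2, True), (2, True),
          (3, False), (3, False), (3, True)]
print(objectives_to_review(sample))  # -> [3]
```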

DEVELOPMENT OF PERSONALIZED STUDY GUIDES

The third iteration of the feedback concept used in the class was to give feedback to the individual student. This was non-trivial. The CPS tool does provide some reporting functionality, namely a series of export functions and reports based on students' understanding, but it was limited to providing students with their answer and the correct response, as opposed to any additional information, such as pointers to resources where the student could find out more about the particular topic. Using the CPS database, personalized study guides were created and then emailed to the students, to be used as targeted revision of specific objectives. The lectures were on Wednesday and Friday, with the lab component occurring on Monday or Tuesday. In class on Friday, students were given a survey on their understanding of the objectives of the module; they would need to understand these objectives before lab on Monday or Tuesday of the following week. At the end of the lecture, the instructor uploaded the CPS data into Blackboard and at the same time exported it to a CSV file. This was then imported into a MySQL Community Edition database.
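A minimal sketch of that import step, assuming the CSV export carries one row per student answer; the file name, column names, and connection details below are illustrative, not the actual CPS export format:

```python
import csv
import mysql.connector  # MySQL Connector/Python

# Connection details are placeholders for illustration.
conn = mysql.connector.connect(host="localhost", user="instructor",
                               password="secret", database="feedback")
cur = conn.cursor()

# Assumed CSV layout: student_id, quiz_id, question_no, answer.
with open("cps_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        cur.execute(
            "INSERT INTO student_quiz (student_id, quiz_id, question_no, answer) "
            "VALUES (%s, %s, %s, %s)",
            (row["student_id"], row["quiz_id"], row["question_no"], row["answer"]),
        )

conn.commit()
conn.close()
```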

FIGURE 3: FLOWCHART

The database consisted of five tables: 'objective', 'quiz', 'quiz_objective', 'student', and 'student_quiz'. The 'objective', 'quiz', and 'student' tables are all simple tables. The 'objective' table contains an objective ID and the objective itself. Similarly, the 'quiz' table contains a quiz ID along with a quiz name, and the 'student' table contains the student's first and last name. The 'quiz_objective' table links the ID of an objective to a particular question on a quiz (that is, a quiz ID and question number), along with a field where additional information may be provided to assist the student with the question at hand. Lastly, the 'student_quiz' table links the student to a quiz question along with their answer to the question. The report was then run, and the individual reports emailed to the students.
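That report step can be sketched as a join across those tables followed by a templated email. The join logic follows the table descriptions above, but the 'correct_answer' column on 'quiz_objective', the 'email' column on 'student', the mail host, and the addresses are assumptions added so the sketch is self-contained:

```python
import smtplib
from email.mime.text import MIMEText
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="instructor",
                               password="secret", database="feedback")
cur = conn.cursor()

# For each incorrect answer, pull the objective text and the extra
# resources stored in quiz_objective. The correct_answer and email
# columns are assumed extensions of the schema described above.
cur.execute("""
    SELECT st.email, o.objective_text, qo.additional_info
    FROM student_quiz sq
    JOIN student st        ON st.student_id  = sq.student_id
    JOIN quiz_objective qo ON qo.quiz_id     = sq.quiz_id
                          AND qo.question_no = sq.question_no
    JOIN objective o       ON o.objective_id = qo.objective_id
    WHERE sq.answer <> qo.correct_answer
""")

# Group the missed objectives per student.
guides = {}
for addr, objective, resources in cur.fetchall():
    guides.setdefault(addr, []).append(f"- {objective}\n  See: {resources}")

server = smtplib.SMTP("smtp.example.edu")  # mail host is a placeholder
for addr, items in guides.items():
    msg = MIMEText("Objectives to review before lab:\n\n" + "\n".join(items))
    msg["Subject"] = "Personalized study guide"
    msg["From"] = "[email protected]"
    msg["To"] = addr
    server.sendmail(msg["From"], [addr], msg.as_string())
server.quit()
conn.close()
```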

RESULTS

There were two aspects to the results. The first was the students' opinions on this endeavor. The second was to examine whether the adoption of clickers tied to personalized study guides had an impact on student learning.

The first question asked whether the printouts of the objectives helped the student understand the agenda for the class/module. 44 students responded positively (78%), 5 negatively, and 7 did not respond. The second question asked whether the modules should be posted in Blackboard (the course management system). 91% of students agreed that they wanted the objectives posted online as well as handed out in class. With respect to the clickers used in class, the question asked of the students was “does using the clicker in class help you keep focused in class?” 30 students responded favorably (54%), the remainder negatively. When asked whether using the clicker in class helped retention of knowledge, 25% of the students thought that it did, and 46% did not believe that the clicker helped them retain the knowledge. When asked whether the personalized study guides helped, 82% of the students responded favorably, 12% responded negatively, and the remainder did not respond.

The second part of the study was to examine whether there was any incremental increase in scores for those modules where a personalized study guide was distributed. In order to evaluate the scores, the preceding semester's scores were used as a baseline.

The pre-test is an evaluation of the students' prior knowledge of the course material. We also use this to establish whether there is any significant difference in knowledge over previous semesters. For the purposes of this article, the authors compared the Spring and Fall semesters of 2008. The scores of the pre-test from one semester to another should not be statistically different, and in this case an ANOVA showed no significant difference, with a p-value of .736.
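As a worked illustration of this kind of comparison, a one-way ANOVA on two groups (equivalent to a two-sample t-test) can be run with SciPy; the score arrays below are placeholders, not the actual class data:

```python
from scipy import stats

# Placeholder pre-test scores; the real Spring/Fall 2008 data are not
# reproduced here.
spring_2008 = [62, 55, 70, 66, 58, 61, 64]
fall_2008   = [60, 57, 68, 65, 59, 63, 66]

f_stat, p_value = stats.f_oneway(spring_2008, fall_2008)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A p-value well above 0.05 (the paper reports p = .736) indicates the
# two semesters started with comparable prior knowledge.
```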

The second module of the semester is relatively simple and introduces students to their first bar code symbology, POSTNET. For this module, there are only a few objectives, including definitions and some calculations. There was no difference between the two groups (p = 0.953).

For the next module, the students were introduced to linear bar codes. There were a number of objectives to be completed, but again the scores across the two semesters were practically identical (p=0.670). Students did receive an individualized study guide for this module after class. The study guide would have been issued prior to the quizzes.

The stacked barcode module involves a higher level of understanding, building on the concepts covered in the previous modules. The content is more challenging, as is the lab activity. The students were questioned on their understanding of the objectives in the class prior to the lab activity using the CPS pad, and then a personalized study guide was issued to the students. The results of an ANOVA showed a statistically significant difference between the two groups (p < 0.001). Interestingly, the range of the two groups was the same, but there was a 7-point difference in the means.

The matrix barcode module also involved a higher level of understanding of the concepts, although the lab is not as challenging as the stacked symbology lab. Students were again polled during class about their knowledge of the content, and an individualized study guide was generated. Again, there was a statistically significant difference between the two groups, with a p-value of 0.007.

The next module was the midterm; again, the results of the midterm were statistically different between the two groups (p = 0.001).

After the exam, an additional module was surveyed. This consisted of basic information about contact memory and card technologies. Again, students were issued a personalized study guide. There was no statistically significant difference in the scores (p=0.222).

CONCLUSIONS

Once the objectives had been entered into the system, the time taken to provide feedback to students was significantly reduced. It was interesting that a few (about three) students wanted a study guide for all of the objectives, even when they indicated that they understood them all. It is also interesting to note that for the simpler modules the study guide did not significantly impact student learning, although the mean scores were slightly higher. The use of clickers in the classroom enabled targeted discussion of topics that the group as a whole did not understand; this feedback from the students might not have been forthcoming quite as easily without the clicker. The next steps will be to integrate the clicker into the lab practicals, using it to start discussions in order to gain more interaction in the lab environment. In some cases, the clicker might replace the bubble sheet, again providing more information to the student.

REFERENCES

Caldwell, J. E. (2007). Clickers in the Large Classroom: Current Research and Best-Practice Tips. CBE Life Sci Educ, 6(1), 9-20. doi: 10.1187/cbe.06-12-0205.

Chickering, A. W., & Gamson, Z. F. (1999). Development and Adaptations of the Seven Principles for Good Practice in Undergraduate Education. New Directions for Teaching and Learning, 1999(80), 75-81. doi: 10.1002/tl.8006.

Newlin, M. H., & Wang, A. Y. (2002). Integrating Technology and Pedagogy: Web Instruction and Seven Principles of Undergraduate Education. Teaching of Psychology, 29(4), 325. doi: 10.1207/S15328023TOP2904_15.

Temponi, C. (2005). Continuous improvement framework: implications for academia. Quality Assurance in Education, 13(1), 17-35.
