
Common Areas for Improvement in Physical Science Units that have Critically Low Student Satisfaction

Angela Carbone, OPVC-L&T, Monash University, Melbourne, Australia, [email protected]

Jason Ceddia, OPVC-L&T, Monash University, Melbourne, Australia, [email protected]

Abstract-Course and teaching evaluation questionnaires (CTEQs) have become standard practice in Australian universities to evaluate teaching and student experiences. National results have shown that disciplines in the physical sciences (Information Technology, Engineering and Science) generally perform poorly on CTEQs compared to other disciplines. In this paper, a thematic analysis of students' feedback was undertaken for units in the physical science cluster that students rated as needing critical attention. These units were delivered in semester 2, 2010 at Monash University. The analysis revealed that the top three concerns for students across the physical science disciplines are lecturer presentation, lecture content and unit organisation. These findings are consistent with those from over 35 years ago, despite changes in student demographics, advances in learning theory and the use of new technologies in the delivery of course content.

Keywords - Computers and Education, ICT Education, education quality in ICT, teaching strategy, thematic analysis.

I. INTRODUCTION

The quality of tertiary students' course and teaching experience has become an important item on the Australian government's agenda for higher education. In 2011 the Australian government established a panel to set a variety of tertiary provider standards, including standards for learning and teaching. These standards will be monitored by the new Tertiary Education Quality and Standards Agency (TEQSA) to ensure that students receive the best quality experience. Universities will be rewarded based on their achievement of these standards.

Consequently, course and teaching evaluation questionnaires (CTEQs) have become standard practice in Australian universities to evaluate teaching and student experiences. An instrument widely used to measure students’ perceptions of course and teaching quality is the Course Experience Questionnaire (CEQ) [1]. The questionnaire collects students’ views on the quality of their courses. It uses items across the following five areas: Good Teaching, Clear Goals, Appropriate Workload, Appropriate Assessment and Generic Skills. CEQ scores provide reliable indicators of teaching strengths and weaknesses but do not generate a complete measure of teaching quality. As teaching is a highly complex cognitive activity, multiple sources of data are required to provide a comprehensive evaluation [2], [3] and [4].

Berk evaluated twelve sources of evidence for measuring teaching effectiveness and found that student ratings were one of multiple sources of evidence:

"Student ratings is a necessary source of evidence of teaching effectiveness for both formative and summative decisions, but not a sufficient source for the latter. Considering all of the polemics over its value, it is still an essential component of any faculty evaluation system" (p. 50) [5].

Brookfield provides four lenses to engage teachers in critical reflection on practice [6]. His lenses provide different perspectives on one's teaching: systematic self-reflection, reflecting on student feedback, engaging in peer observation, and learning from the scholarly literature. The 'student lens' requires teachers to engage with student feedback and become more responsive teachers. Brookfield suggests that student evaluation of teaching data can facilitate reflection on teaching, thus supporting teacher and course development and, presumably, as a consequence, the enhancement of student learning.

A study by Stein et al. that investigated tertiary teachers' perceptions of student evaluations in three New Zealand institutions showed that, overall, teachers have a generally positive disposition toward student evaluations [7]. Furthermore, Lefevere showed in her study that lecturers who were specifically asked to alter their presentation content in light of student feedback showed a greater one-year change in scores than the course average [8].

The challenge, then, is to develop an understanding of which aspects of physical science units students perceive as needing critical attention or improvement. This paper reports on a thematic analysis [9] of the unit evaluation qualitative comments for units in the physical sciences that were rated as having low overall quality. The explicit research question is:

From the students’ perspective, what are the common areas in physical science units that are most in need of improvement?

The findings from this analysis offer empirical evidence to physical science lecturers about areas to consider when planning their next unit offering, with a view to improving the student experience. They could also offer insights into shaping teaching preparation programmes.

2013 Learning and Teaching in Computing and Engineering

978-0-7695-4960-6/13 $26.00 © 2013 Crown Copyright

DOI 10.1109/LaTiCE.2013.12



II. BACKGROUND

A. What is Effective Teaching?

The literature suggests that there is no universal criterion of a good unit or good teaching [10]. This has led to studies that have investigated "what is effective teaching?" and "how should it be measured?" [11]. Others have examined different techniques to gather students' perceptions to better understand teaching effectiveness [12]. A recent study by McCabe and Layne found

"..students and faculty define effective teaching very differently. From a faculty perspective, an effective teacher should love the subject and be able to present it in multiple ways. From a student perspective, an effective teacher should be funny, interesting, and able to relate to students." [13].

Conversely, Galbraith et al. found that while student evaluations provide little or no support for validity as a general indicator of teaching effectiveness or student learning,

"..generally students and faculty have similar, albeit not identical, views about what results in effective university level instruction." (p. 354) [14].

A large systematic synthesis of research that considered college students' views on teaching was undertaken in 1976 by Feldman [12]. Primarily these studies dealt with undergraduate students at North American and Canadian colleges and universities. In some of these studies, students were asked to describe their ideal teacher. In others, students were requested to indicate the characteristics that they felt were especially important to good teaching. And in others, students were asked to describe the best teachers they had had. A synthesis of these studies resulted in an array of characteristics of ideal and best college teachers. Feldman concluded that stimulation of interest, clarity (understandableness), knowledge of subject matter, the instructor's preparation for (and organization of) the class, and the instructor's enthusiasm for the subject matter or for teaching were common characteristics of effective teaching [12].

Over twenty years later, Fisher et al. [15] constructed cognitive models of good lecturing. They ran separate focus groups with academics and students from four faculties (Arts, Health Science, Applied Science, and Business) to discuss what they thought were the characteristics of an effective lecture [15]. A total of 21 criteria were identified, and the two most important criteria rated by students were the pace of the lecture for note taking and the public-speaking skills of the lecturer. Another study, undertaken by Patrick and Smart, asked students to identify qualities of effective teachers [11]. They found teacher effectiveness to be multi-dimensional in nature, comprising three factors: respect for students, ability to challenge students, and organisational and presentation skills.

It could be argued that the relevance of [12] is diminished given that 36 years have elapsed and the learning environment and demographics of students today are much different from what they were then. Fry describes the changing student demographics and the Australian university environment as follows: "Universities and other educational providers were driven to e-learning by the change from 'learner-earners' to 'earner-learners' as more students seek part-time study and life-long learning (Alexander, 2000). Universities also seek e-learning solutions in order to maintain institutional market position in a time of evolving knowledge, evaporation of public subsidy, and rise of new providers and alliances" (p. 235) [16].

The issue of technology use in the delivery of course material is also a confounding factor. Smith et al. found that while Australian universities are moving towards multiple delivery modes of teaching material, the traditional face-to-face method is still the dominant form, even though it is supplemented with online materials [17]. Moreover, meta-analysis studies have shown that the use of technology per se has very little impact on teaching effectiveness and student learning [18], [19]. Tamim et al. report an effect size of 0.33 in answering the question "does computer technology use effect student achievement in formal face to face classrooms as compared to classrooms that do not use technology" (p. 4) [18]. This is still less than the threshold of 0.4 that Hattie suggests for a meaningful intervention effect size [20].

B. How Do You Measure Effective Teaching?

Kember and Leung [21] recognise that there are alternative models of good teaching that can be used to frame instruments implemented across a broad range of disciplines. Chen has proposed a method for uncovering low-quality survey responses and shows that certain individual and circumstantial measures may increase the likelihood of low-quality responses [22]. Galbraith et al. also claim that having effective instructors does not guarantee learning outcomes are met; they conclude

"the most effective instructors are within the middle percentiles of student course ratings, while instructors receiving ratings in the top quintile or the bottom quintile are associated with significantly lower levels of student achievement."(pg 353)[14] National results have shown that ratings of disciplines in the

physical sciences (Information Technology, Engineering and Science) generally perform poorly on the CTEQs compared to other disciplines.[23]. At the university level each university has its own course and teaching evaluation instrument that gather feedback from students. These are usually called Student Evaluation of Teaching (SET) or Student Evaluation of Unit (SEU) or Student Evaluation of Teaching and Units (SETU).

There is a general advocacy of the common use of well-designed questionnaires with multi-factor structures corresponding to the identified facets of effective teaching. At Monash University, the Student Evaluation of Teaching and Unit (SETU) instrument is used to capture students' perceptions of aspects of a unit and its teaching delivery. There are five standard university-wide (UW) items (questions) in the unit evaluation component that are consistent across all faculties. These are:
UW-Item 1. The unit enabled me to achieve its learning objectives
UW-Item 2. I found the unit to be intellectually stimulating
UW-Item 3. The learning resources in this unit supported my studies
UW-Item 4. The feedback I received in this unit was helpful
UW-Item 5. Overall I was satisfied with the quality of this unit

Responses to these questions use a 5-point Likert scale. Students are also able to provide qualitative comments in response to two open-ended questions, along with specific information about an academic's teaching. The two open-ended questions are:
1. What were the best aspects of the unit?
2. What aspects of this unit are in most need of improvement?

Reports generated from the analysis of the closed-question responses for all units are publicly accessible by Monash staff and students [24], whereas responses to the open-ended questions are accessible only to academic staff and their superiors. At the end of each teaching semester, all ten of Monash's faculties undertake to evaluate all their units using the SETU instrument. Faculties use this data to help them identify units that are meeting students' expectations and needs, as well as units that require improvement. Pears discusses a number of perspectives in the measurement of 'quality', such as viewing the student as a customer, or employer satisfaction with graduates [25].


However, for simplicity, Monash University focuses on UW-Item 5 (reporting overall satisfaction), providing university managers with a quick way of monitoring the aggregate performance of a unit. Using UW-Item 5 as the key question, a "traffic light" indicator was developed to interpret the results. Table 1 explains the meaning of the four components of the indicator.

TABLE 1. MONASH UNIVERSITY UNIT QUALITY INDICATORS

Colour Code | Meaning                    | Unit Measure                 | Characteristics of unit response
Purple      | Outstanding                | Median ≥ 4.7                 | Majority of responses are "strongly agree"
Green       | Meeting aspirations        | Median between 3.6 and 4.69  | Responses are generally above "neutral"; the great majority are "agree" or "strongly agree"
Orange      | Need to improve            | Median between 3.01 and 3.59 | Responses are generally "neutral" or bimodal with no clear trend
Red         | Needing critical attention | Median ≤ 3.0                 | Responses generally below "neutral"; majority "disagree" or "strongly disagree"
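The thresholds in Table 1 lend themselves to a simple computation. The sketch below is illustrative only (the paper does not describe Monash's actual reporting implementation); it assumes responses to UW-Item 5 are coded 1-5 on the Likert scale and uses Python's statistics.median.

```python
from statistics import median

def unit_quality_indicator(responses):
    """Map a unit's UW-Item 5 Likert responses (coded 1-5) onto the
    Table 1 traffic-light bands. Illustrative sketch only."""
    m = median(responses)
    if m >= 4.7:
        return "Purple (outstanding)"
    elif m >= 3.6:        # 3.6 - 4.69
        return "Green (meeting aspirations)"
    elif m > 3.0:         # 3.01 - 3.59
        return "Orange (need to improve)"
    else:                 # median <= 3.0
        return "Red (needing critical attention)"

# A unit whose responses sit mostly below "neutral" lands in the critical band.
print(unit_quality_indicator([2, 3, 2, 4, 1, 3, 2]))  # Red (needing critical attention)
```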

These boundaries have been arbitrarily set by Monash University. There is also a teaching evaluation component of the SETU survey which contains specific information about an academic's teaching. Responses to the teaching evaluation questions are NOT publicly accessible, and only the academics concerned are allowed to view their results. Unlike the unit evaluation instrument, the teaching instrument is administered on a voluntary basis at the discretion of the academic.

III. RESEARCH METHOD

The aim of this paper is to develop an understanding of common areas for improvement for units needing critical attention in the Physical Science cluster. A thematic analysis [9] was adopted to analyse the student comments to the following open-ended question of the unit evaluation component of SETU:

"What aspects of this unit are most in need of improvement?"

Comments from the teaching instrument component of SETU were not sought for two reasons. First, it is not compulsory for academics to undertake a teaching evaluation. Second, analysis of these comments would require the researchers to seek permission from the lecturer, which would then disclose their identity to the researchers. In an earlier study, Carbone and Ceddia reported on the analysis of students' comments for units in the Faculty of Information Technology [26]. Those findings are compared here with those of the Engineering and Science faculties. The remainder of this section presents a brief overview of the data collection method, the approach to analysis and the limitations of the study.

A. Data Collection

Human ethics exemption (Ref: CF11/0658 – 2011000311) was granted before commencement of the project to analyse the semester 2, 2010 unit evaluation qualitative comments for the units needing critical attention. The unit evaluation survey was administered by the Monash Quality Unit, which then provided the project team with comment files that were de-identified by the removal of campus, staff and unit identifiers.

B. Approach to Analysis

The categories used to code the student comments were arrived at by the two researchers independently reading through all the comments for the unit with the most comments and then listing common themes. The main categories were straightforward to identify as they were effectively 'keywords' in the comment. For example, a comment may begin with "The lecturer was...", indicating that this comment belongs to the 'lecturer' category. This process was repeated for a further two units, those with the second and third most comments. Comparison of coding showed little disagreement between the researchers.

For each category, the researchers identified a set of attributes or sub-categories. The sub-categories emerged by considering the comments related to a category, for example 'lecturer', and then listing themes. The 'lecturer' and 'lecture' categories differ in that 'lecturer' relates to items like presentation style, apparent knowledge of the subject matter in answering audience questions, and availability to students. 'Lecture' refers to the content of the actual lecture as gauged by how much material was presented, the logical flow of the material and the originality of the material. More details can be found in [26].
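To make the coding scheme concrete, the sketch below shows the category-to-sub-category structure that emerged for ICT (the lists mirror Table 3) together with a naive keyword cue for assigning a comment to a main category. The coding in the study was performed manually by the two researchers; the matching function here is purely illustrative and is not the authors' procedure.

```python
# ICT categories and sub-categories as listed in Table 3. The study's coding
# was done manually; this matcher only illustrates how a keyword cue such as
# "The lecturer was..." signals the 'lecturer' category.
ICT_CATEGORIES = {
    "lecturer":   ["knowledge", "presentation", "support", "organisation", "response time"],
    "lecture":    ["structure", "access", "content", "challenge", "quantity"],
    "tutorials":  ["type of activity", "clarity", "alignment", "available", "length", "scheduling"],
    "assessment": ["marking", "alignment", "specification"],
    "tutors":     ["knowledge", "presentation style", "support", "response time"],
    "off campus": ["support", "availability"],
    "lms":        ["ease of use", "quantity", "accuracy"],
    "resources":  ["relevance", "quantity", "availability"],
}

def suggest_main_category(comment):
    """Return the first main category whose keyword appears in the comment."""
    text = comment.lower()
    for category in ICT_CATEGORIES:
        if category in text:
            return category
    return None

print(suggest_main_category("The lecturer was reading straight off the slides"))  # lecturer
```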

This analysis was applied to the ICT units first, and then repeated for the Engineering and Science units. Tables 3, 6 and 9 show the categories and sub-categories for ICT, Engineering and Science respectively.

C. Limitations

There were two major limitations of the analysis process. First, when interpreting the data, units with a large student cohort could distort perceptions across the respective faculties. This means that the rankings shown in Tables 3, 6 and 9 do not necessarily apply to all units in the respective faculties. Tables 4, 7 and 10 show the category rankings for the units with the most comments for ICT, Engineering and Science respectively.

Second, when the Monash Quality Unit computes the response rates, a student's response is included even if the student only responds to the University Wide Items 1-5 (the quantitative questions) and leaves the qualitative questions blank. For example, in one of the ICT units there were 72 enrolments with 22 responses, giving a response rate of 30.6%. However, there were only 13 actual qualitative comments, giving a response rate of 18.1%; these 13 qualitative comments gave rise to 35 category/sub-category comments. For Engineering, five of the nine units had enrolments greater than or equal to 80, and these had response rates of between 34% and 80%. For Science, five of the nine units had enrolments greater than or equal to 30, and these had response rates of between 18% and 67%. Obtaining and publishing more detailed statistics, without breaching ethical constraints, is an area for further investigation.
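The two response rates quoted for that ICT unit are simple ratios over the same enrolment figure; a minimal check of the arithmetic:

```python
# Worked example for the ICT unit quoted above (72 enrolments).
enrolments = 72
survey_responses = 22       # students who answered at least the closed items
qualitative_comments = 13   # students who also answered the open-ended question

print(f"Reported response rate:    {survey_responses / enrolments:.1%}")      # 30.6%
print(f"Qualitative response rate: {qualitative_comments / enrolments:.1%}")  # 18.1%
```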

IV. RESULTS

The qualitative responses to Monash's unit evaluation questionnaire were examined for thirteen ICT units, nine Engineering units and nine Science units. These were the units that were rated as needing critical attention. Results from each faculty are presented and then consolidated. Due to space limitations the breakdown of the main categories by unit is not listed. As mentioned above, the total comments listed are not equal to the total number of students, as one student may have commented on multiple categories.


A. Information and Communication Technology (ICT)

Table 2 contains the number of student comments for the 13 ICT units needing critical attention. The ICT units are labeled 1 to 13. Units ICT1 and ICT2 had the most comments; there were 281 comments in total.

TABLE 2. COMMENTS PER ICT UNIT – TOTAL COMMENTS 281

ICT Unit              |  1 |  2 | 3 | 4 |  5 | 6 | 7 |  8 |  9 | 10 | 11 | 12 | 13
Comment freq per unit | 50 | 62 | 6 | 7 | 28 | 1 | 9 | 22 | 13 |  2 | 35 | 12 | 33

Table 3 lists the eight main categories that emerged from the analysis process. The main categories for ICT are lecturer, lecture, tutorials, assessment, tutors, off-campus learning, LMS and resources. Each of the main categories contained a set of sub-categories or attributes. These categories and sub-categories have been described briefly in Section III-B and are defined in detail in [26]. In Tables 3, 6 and 9, the 'Freq' column lists the number of times the category was mentioned in a comment and the '%' column is the category frequency divided by the total comments. Similarly, the 'Sub Cat Freq' is the number of times the sub-category was mentioned in a comment and the 'Sub Cat %' is the sub-category frequency divided by the 'Freq' total for the category.
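As a worked check of these column definitions, using the Lecturer row from Table 3:

```python
# Worked example of the Table 3 column definitions, using the ICT "Lecturer"
# category: 48 category mentions out of 281 total comments, of which 20
# mentions concern the "Presentation" sub-category.
total_comments = 281
lecturer_freq = 48
presentation_freq = 20

category_pct = 100 * lecturer_freq / total_comments    # 'Freq' / total comments
sub_cat_pct = 100 * presentation_freq / lecturer_freq  # 'Sub Cat Freq' / category 'Freq'

print(f"Lecturer category %:         {category_pct:.1f}")  # 17.1, as in Table 3
print(f"Lecturer-presentation sub %: {sub_cat_pct:.1f}")   # 41.7, as in Table 3
```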

The top three sub-categories, derived from the sub-category comment frequency, are highlighted in Table 3. They are lecture-content (51), assessment-specification (25), lecturer-presentation (20) and assessment-marking (20). The sub-categories are highlighted instead of categories as the sub-category offers greater granularity on areas of concern. Note that there are actually four sub-categories listed, as both lecturer-presentation and assessment-marking had 20 comments each.

As mentioned in Section III-C, the priority ordering of the sub-categories is influenced by the unit with the most comments. For ICT this is Unit 2, with 62 comments. Table 4 shows the top four concerns for Unit 2 as lecture content, lecturer presentation, tutorial alignment and assessment alignment. In Tables 4, 7 and 10, the column "% of category comments" indicates how much this unit influenced the category attribute overall. For example, in Table 4, fourteen of the 51 comments related to lecture content (i.e. 27.5%) come from Unit 2; 10 of the 20 comments related to lecturer presentation (i.e. 50%) also come from Unit 2.

B. Engineering

There were nine Engineering units identified as needing critical attention, labeled 1 to 9 in Table 5. Units ENG2 and ENG3 had the most comments; there were 327 comments in total.

TABLE 3. CATEGORIES AND SUBCATEGORIES FROM ICT DATA

Main Category (Freq, %) | Sub Category (Sub Cat Freq, Sub Cat %)
Lecturer (48, 17.1)     | Knowledge (4, 8.3); Presentation (20, 41.7); Support (13, 27.1); Organisation (10, 20.8); Response time (1, 2.1)
Lecture (80, 28.4)      | Structure (15, 18.8); Access (2, 2.5); Content (51, 63.8); Challenge (5, 6.3); Quantity (7, 8.8)
Tutorials (55, 19.6)    | Type of activity (12, 21.8); Clarity (7, 12.7); Alignment (16, 29.1); Available (16, 29.1); Length (1, 1.8); Scheduling (3, 5.5)
Assessment (53, 18.9)   | Marking (20, 37.7); Alignment (8, 15.1); Specification (25, 47.2)
Tutors (14, 5.0)        | Knowledge (1, 7.1); Presentation style (2, 14.3); Support (10, 71.4); Response time (1, 7.1)
Off Campus (7, 2.5)     | Support (6, 85.7); Availability (1, 14.3)
LMS (15, 5.3)           | Ease of use (4, 25.0); Quantity (1, 6.3); Accuracy (10, 62.5)
Resources (9, 3.2)      | Relevance (5, 55.6); Quantity (1, 11.1); Availability (3, 33.3)

TABLE 4. TOP 4 SUB CATEGORY ATTRIBUTES FOR ICT UNIT 2

Category description        | Unit 2 category frequency | Overall category frequency | % of category comments
Lecture-content             | 14                        | 51                         | 27.5
Lecturer-presentation style | 10                        | 20                         | 50.0
Tutorial-alignment          |  7                        | 16                         | 43.7
Assessment-alignment        |  4                        |  8                         | 50.0

TABLE 5. COMMENTS PER ENGINEERING UNIT – TOTAL 327

ENG Unit              |  1 |  2 |   3 |  4 | 5 |  6 |  7 |  8 | 9
Comment freq per unit | 38 | 87 | 106 | 21 | 3 | 18 | 34 | 17 | 3


TABLE 6. CATEGORIES AND SUBCATEGORIES FROM ENGINEERING DATA

Main Category (Freq, %) | Sub Category (Sub Cat Freq, Sub Cat %)
Lecturer (90, 27.52)    | Approachability (5, 5.6); Knowledge (19, 21.1); Organisation (5, 5.6); Presentation (50, 55.6); Support (11, 12.2)
Lecture (62, 18.96)     | Content (43, 69.4); Organisation (16, 25.8); Timetable (3, 4.8)
Tutorials (22, 6.73)    | Absence (2, 9.1); Activity (3, 13.6); Alignment (5, 22.7); Clarity (3, 13.6); Content (4, 18.2); Organisation (3, 13.6); Relevance (2, 9.1)
Assessment (69, 21.10)  | Alignment (11, 15.9); Feedback (6, 8.7); Groupwork (4, 5.8); Marking (9, 13.0); Organisation (2, 2.9); Specification (21, 30.4); Timing (6, 8.7); Weighting (10, 14.5)
Tutors (30, 9.17)       | Knowledge (10, 33.3); Support (20, 66.7)
LMS (8, 2.45)           | Accuracy (2, 25.0); Quality (1, 12.5); Quantity (5, 62.5)
Resources (7, 2.14)     | Quality (2, 28.6); Quantity (5, 71.4)
Unit (34, 10.40)        | Content (8, 23.5); Organisation (22, 64.7); Quality (1, 2.9); Relevance (3, 8.8)
Lab (5, 1.53)           | Activity (1, 20.0); Alignment (3, 60.0); Timing (1, 20.0)

There were nine main categories that emerged from the analysis process. As with ICT, each of the main categories contained a set of sub-categories or attributes. These are listed in Table 6, with the top three sub-categories highlighted: lecturer-presentation (50), lecture-content (43) and unit-organisation (22).

Engineering Unit 3 had the most comments of all the Engineering units. Table 7 lists the top four sub-categories for ENG3; they are lecture-content, lecturer-knowledge, lecturer-presentation and, equal fourth, assessment-specification and unit-organisation. This unit has a strong influence on the sub-category priorities in Table 6, as indicated by the "% of category comments" column in Table 7.

TABLE 7. TOP 4 SUBCATEGORIES FOR ENGINEERING UNIT 3

Category description     | Unit 3 category frequency | Overall category frequency | % of category comments
Lecture-content          | 19                        | 20                         | 95.0
Lecturer-knowledge       | 11                        | 12                         | 91.7
Lecturer-presentation    | 10                        | 11                         | 90.9
Assessment-specification |  8                        |  9                         | 88.9
Unit-organisation        |  8                        |  9                         | 88.9

C. Science

There were nine Science units identified as needing critical attention, labeled 1 to 9 in Table 8. Units SCI4 and SCI7 had the most comments. The Science units had only 148 comments, far fewer than the number of comments made in ICT and Engineering.

TABLE 8. COMMENTS PER SCIENCE UNIT – TOTAL COMMENTS 148

SCI Unit              |  1 | 2 | 3 |  4 | 5 |  6 |  7 |  8 | 9
Comment freq per unit | 24 | 4 | 4 | 30 | 9 | 10 | 41 | 23 | 3

TABLE 9. CATEGORIES AND SUBCATEGORIES FROM THE SCIENCE DATA

Main Category (Freq, %) | Sub Category (Sub Cat Freq, Sub Cat %)
Lecturer (36, 24.3)     | Knowledge (2, 5.6); Organisation (2, 5.6); Presentation (27, 75.0); Support (5, 13.9)
Lecture (11, 7.4)       | Content (3, 27.3); Organisation (6, 54.5); Timetable (2, 18.2)
Tutorial (12, 8.1)      | Absence (4, 33.3); Alignment (7, 58.3); Content (1, 8.3)
Assessment (34, 23.0)   | Feedback (17, 50.0); Marking (2, 5.9); Organisation (1, 2.9); Timing (7, 20.6); Weighting (7, 20.6)
LMS (4, 2.7)            | Quantity (4, 100.0)
Resources (5, 3.4)      | Availability (5, 100.0)
Unit (26, 17.6)         | Content (6, 23.1); Organisation (17, 65.4); Quality (3, 11.5)
Lab (5, 3.4)            | Activity (2, 40.0); Alignment (3, 60.0)
OffCampus (15, 10.1)    | Recordings/availability (6, 40.0); Support (9, 60.0)


Some categories and/or sub-categories that appeared in ICT and Engineering did not emerge in the Science data. For example, the Tutor category appears in both ICT and Engineering but does not appear in Science. There were ten main categories that emerged from the analysis process. Each of the main categories contained a set of sub-categories. These are listed in Table 9. Some Science units were delivered in off-campus mode and this category re-emerged; the Unit and Lab categories remained. The top three sub-categories are highlighted: lecturer-presentation (27); equal second, unit-organisation (17) and assessment-feedback (17); and off-campus support (9).

Science Unit 7 had the most comments of all the Science units. Table 10 lists the top four sub-categories for SCI7; they are unit-organisation, lecturer-presentation, assessment-feedback and unit-content.

TABLE 10. TOP 4 SUBCATEGORIES FOR SCIENCE UNIT 7

Category description               | Unit 7 category frequency | Overall category frequency | % of category comments
Unit-organisation                  | 10                        | 17                         | 58.8
Lecturer-presentation style/engage |  7                        | 27                         | 25.9
Assessment-feedback                |  3                        | 17                         | 17.6
Unit-content                       |  3                        |  6                         | 50.0

V. DISCUSSION

The top three sub-categories per faculty are presented in Table 11. The table shows only the unique sub-categories; the fact that a sub-category applies to a faculty is shown by an X in the corresponding cell under the faculty column. Note that ICT and Science have four Xs as two of their sub-categories were ranked equally. While this list ignores the absolute rankings of sub-categories, it does list the areas of common concern to students across the faculties.

TABLE 11. TOP SUBCATEGORIES IN THE THREE FACULTIES

Sub Category             | ICT | Engineering | Science
Lecture Content          |  X  |      X      |
Assessment Specification |  X  |             |
Lecturer Presentation    |  X  |      X      |    X
Assignment Marking       |  X  |             |
Unit organisation        |     |      X      |    X
Assessment Feedback      |     |             |    X
OffCampus support        |     |             |    X

Lecturer presentation is the area of most concern as it applies to all three faculties. This is followed jointly by lecture content and unit organisation as they apply to two of the three faculties. The remaining issues are in the top three for one faculty only.

A. Lecturer-presentation

Lecturer-presentation is the area needing most improvement across the Physical Science units. This sub-category relates to the lack of engaging teaching methods, for example reading directly from overhead slides, not being audible, or appearing confused by the material being presented. Typical student comments were:
• The lecturer spent the entire semester reading equations directly from a powerpoint presentation with an impenetrable accent. (ENG)
• The lecturer just read off the slides every lecture with very dry content. (ENG)
• the lecturer need to speak loudly in class. Sometimes we can't hear him DESPITE the use of microphone. (SCI)
• the lecturer should speak louder and make the unit more interesting. (SCI)
• The lectures were incredibly dull and presented poorly. (ICT)
• THE TEACHING! We just sit in class without any proper guidelines. They expect us to learn from somewhere and just come in and do exercises. (ICT)

Lecturer presentation closely aligns with the findings of [12], [15] and [11], who found similar characteristics of effective teaching. The fact that these findings were arrived at with a time difference of 36 years suggests that, even though the use of technology in teaching to support student learning is now common practice, students still perceive the teaching skills of academics to be extremely important. The minimal impact of technology is supported by [18] and [19].

B. Lecture-content

Lecture-content was a common concern for ICT and Engineering. This sub-category related to the relevance of the material to real-world scenarios and whether the material was current. Typical student comments were:
• The overall content of the course was very "ideal situation" theory and not real world practicalities. (ICT)
• The content seems to be outdated. (ICT)
• I think simpler slides would be better, and more lecturing about stuff relevant to the course (i.e. no derivations that last two lectures which isn't even examinable). (ENG)
• structure of the lectures, seemed equation heavy why very little actual application of the equations. some equations seemed pointless and were given no meaning. (ENG)

This can be related to [12] as reflecting the lecturer's enthusiasm for the subject matter: keeping abreast of current developments in the field and making the content relevant to the students.

C. Unit-organisation

Unit organisation was a common area for improvement for Science and Engineering. This sub-category relates to factors such as the number of lecturers for a unit, the alignment of the unit handbook with the lecture content, and the due dates of assignments with respect to the unit exam date. Typical student comments were:
• Unit poorly organised. Difficult to guage what we are supposed to know. (SCI)
• I felt that the unit seemed very unplanned; there were always some last minute hiccups for practicals. (SCI)
• Materials were not very well organised. Students were confused as to the direction the unit was going. (ENG)
• The disorganisation of the unit meant I had no motivation to put the extra time into it that it needed, as it is a difficult unit. (ENG)

As with lecturer presentation and lecture content, technology is of limited value if the unit is poorly organised. A disorganised unit can cause students to disengage and lose direction.


Again, the lecturer's ability to organise the material is still very relevant today, just as it was when listed by Feldman [12] and Patrick and Smart [11].

D. Other important concerns

Assessment specification and marking appeared in the top three sub-categories for the ICT faculty, and assessment feedback and off-campus learning appeared in the top three sub-categories for the Science faculty. Typical student comments made under these categories are provided below.

Assessment-specification (ICT)

Assessment specification related to the clarity with which assignments were written, the submission process and the handling of changes of requirements. Typical comments included:
• The first assignment was unclear and a disaster. The requirements of this assignment were changed closely to the due date. Because of the change of requirements many students were at a disadvantage. (ICT)
• Assignments had a lot of ambiguity which left students questioning whether they were completing them correctly or not. Conflicting views were expressed on the forums as well, meaning that students were unable to gain full clarity. (ICT)

Monash ICT has a policy of vetting exam papers to minimise ambiguity of question wording and ensure an appropriate level of difficulty. This process could be extended to assignment specifications using the same rationale as for exams.

Assessment-marking (ICT)

Assessment marking related to the consistency of marking, the quality of feedback, and the clarity of marking criteria. Typical comments included:
• the marking system in this unit is very disappointing and the feedback is terrible. For most assignments; they have not even stated what is done wrong, but just given a grade. (ICT)
• I felt that the submitted tasks should have been graded and feedback given throughout the semester; as opposed to what is happening which is that we get graded right at the end for all the work at once. (ICT)

A proposal currently under consideration is to make assessment marking rubrics available to students at the same time as the assignment specification, so as to help students align their expectations with their outcomes.

Assessment-feedback (SCI)

Assessment feedback related to the timeliness of the feedback; when it was given late it was of little use to students to incorporate into future work. Typical comments included:
• Comment: pracs to be marked in time for students. (SCI)
• Get more staff so that pracs/assignments can be assessed in a timely manner. (SCI)

OffCampus (SCI)

Off-campus support related to the amount of support for distance education students and the availability of resources that captured what occurred in the lecture. Typical comments included:
• Lack of info and support for long distance EDU students during tutorial Its takes too long to get answers or clarification on the tutorial. (SCI)
• No communication or assistance for off-campus students. (SCI)

As [11] have found, "respect for students" is very important. Responding to student queries quickly shows this respect.

VI. CONCLUSION AND FUTURE WORK

This study set out to develop an understanding of the aspects that contribute to poor student satisfaction ratings for units belonging to Monash's Physical Science cluster that were most in need of improvement. This was achieved by undertaking a thematic analysis of the semester 2, 2010 unit evaluation qualitative comments for ICT, Engineering and Science units that the students perceived as needing critical attention. The analysis revealed that some common themes emerge across all three disciplines. Analysis of teaching evaluation qualitative comments was not feasible, as these instruments are not mandatory and the results are strictly confidential.

Examining unit evaluation student survey responses showed that a broad area of concern for ICT students was the lecture, whereas for Engineering and Science students it was the lecturer. As subcategories offer greater granularity on areas of concern, the top concerns across the three disciplines are lecturer presentation, lecture content and unit organization. Our study validates the work of [11] and [12] in today’s environment and adds the above three attributes to the qualities of an effective teacher.

The implications for the institutions are in the development of teacher preparation programs. The presentation skills of the lecturer and their ability to organize a unit are extremely important to students. There has been a rapid uptake of Virtual Learning Environments at tertiary institutions to assist with the delivery of unit content. While this offers the student flexibility of access to unit material, it does nothing to ensure unit quality.

Like most Australian tertiary institutions, Monash University has a graduate certificate in higher education, called the Graduate Certificate of Academic Practice (GCAP). This certificate includes units such as "Principles of Effective Teaching" that are meant to address the issue of how to teach well. All newly appointed lecturing staff at Monash University are required to complete this certificate as part of their three-year probation period, ensuring that new teaching staff are exposed to the ideas behind good teaching.

Another program at Monash University is the Peer Assisted Teaching Scheme (PATS), described in [27]. Here, less experienced teaching staff enter into a mentoring relationship with experienced staff who have a reputation for excellent teaching. The scheme can also be used to improve unit delivery by having a peer review of unit objectives and subsequent unit content and delivery.

Lecture-content is also one of the areas most in need of improvement, and the GCAP and PATS can offer strategies for unit improvement. There are many tricks and tips academics are using to re-invent the lecture so that it is organised, covers the same or more content, and is challenging for the students. As a direct outcome of this research, a checklist listing these categories and sub-categories will be provided to lecturers participating in PATS or the GCAP. This checklist should help focus lecturers' unit preparation effort on the areas most likely to be of concern to students.

Future work is to analyse comments from the Humanities cluster. Neumann and Neumann [23] have suggested that the Humanities rate better than the Sciences. While our data cannot determine this directly, as the units that come within the scope of this investigation are all in the 'critical zone', it will be interesting to see whether the same categories emerge as being of concern to the students. At this stage it is unclear, but perhaps academics in Humanities units have better developed presentation skills, use innovative teaching approaches that engage students in the lecture, develop content that is more relevant, situated and authentic, and are better at organising a unit in its totality.


ACKNOWLEDGEMENTS

This work has been supported by the Australian Learning and Teaching Council Teaching Fellowship Program, extension grant.

REFERENCES

[1] P. Ramsden, Learning to Teach in Higher Education. London: RoutledgeFalmer, 2003.
[2] J. McKenzie, S. Sheely, and K. Trigwell, "Drawing on experience: an holistic approach to student evaluation of courses," Assessment and Evaluation in Higher Education, vol. 23, pp. 153-163, 1998.
[3] A. Saroyan and C. Amundsen, "Evaluating university teaching: Time to take stock," Assessment and Evaluation in Higher Education, vol. 26, pp. 337-349, 2001.
[4] M. Byrne and B. Flood, "Assessing the teaching quality of accounting programmes: An evaluation of the Course Experience Questionnaire," Assessment & Evaluation in Higher Education, vol. 28, pp. 135-145, Apr 2003.
[5] R. A. Berk, "Survey of 12 strategies to measure teaching effectiveness," International Journal of Teaching and Learning in Higher Education, vol. 17, pp. 48-62, 2005.
[6] S. Brookfield, Becoming a Critically Reflective Teacher. San Francisco, CA: Jossey-Bass, 1995.
[7] S. Stein, J. Kennedy, T. Harris, S. Terry, L. Deaker, and D. Spiller, "Student evaluations of teaching: Perceptions determining teacher behaviours," HERDSA conference presentation, 2012.
[8] K. Lefevere, "Course evaluation: does student feedback improve future teaching?", HERDSA conference presentation, 2012.
[9] V. Braun and V. Clarke, "Using thematic analysis in psychology," Qualitative Research in Psychology, vol. 3, pp. 77-101, 2006.
[10] H. W. Marsh and J. Overall, "Validity of students' evaluations of teaching effectiveness: Cognitive and affective criteria," Journal of Educational Psychology, vol. 72, pp. 468-475, Aug 1980.
[11] J. Patrick and R. M. Smart, "An empirical evaluation of teacher effectiveness: the emergence of three critical factors," Assessment & Evaluation in Higher Education, vol. 23, pp. 165-178, 1998.
[12] K. A. Feldman, "The superior college teacher from the students' view," Research in Higher Education, vol. 5, pp. 243-288, 1976.
[13] K. A. McCabe and L. S. Layne, "Tomorrow's Professor Msg. #1170: The Role of Student Evaluations in Tenure and Promotion," 4 July 2012. Available: http://cgi.stanford.edu/~dept-ctl/tomprof/posting.php?ID=1170
[14] C. Galbraith, G. Merrill, and D. Kline, "Are student evaluations of teaching effectiveness valid for measuring student learning outcomes in business related classes? A neural network and Bayesian analyses," Research in Higher Education, vol. 53, pp. 353-374, 2012.
[15] A. T. Fisher, J. G. Alder, and M. W. Avasalu, "Lecturing performance appraisal criteria: Staff and student differences," Australian Journal of Education, vol. 42, pp. 153-168, 1998.
[16] K. Fry, "E-learning markets and providers: some issues and prospects," Education + Training, vol. 43, pp. 233-239, 2001.
[17] A. Smith, P. Ling, and D. Hill, "The adoption of multiple modes of delivery in Australian universities," Journal of University Teaching and Learning Practice, vol. 3, pp. 67-81, 2006.
[18] R. M. Tamim, R. M. Bernard, E. Borokhovski, P. C. Abrami, and R. F. Schmid, "What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study," Review of Educational Research, first published online January 10, 2011, doi:10.3102/0034654310393361.
[19] B. Means, Y. Toyama, R. Murphy, M. Bakia, and K. Jones, Evaluation of Evidence-based Practices in Online Learning: A Meta-analysis and Review of Online-learning Studies, 2009. Available: http://www2.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf
[20] J. Hattie, Visible Learning: A Synthesis of Over 800 Meta-analyses Relating to Achievement. London, UK: Routledge, 2009.
[21] D. Kember and D. Y. P. Leung, "Disciplinary differences in student ratings of teaching quality," Research in Higher Education, vol. 52, pp. 278-299, May 2011.
[22] P.-S. D. Chen, "Finding quality responses: The problem of low-quality survey responses and its impact on accountability measures," Research in Higher Education, vol. 52, pp. 659-674, 2011.
[23] L. Neumann and Y. Neumann, "Determinants of students' instructional evaluation: A comparison of four levels of academic areas," Journal of Educational Research, vol. 78, pp. 152-158, 1985.
[24] Monash University, SETU survey results (accessed August 2011). Available: http://opq.monash.edu.au/us/surveys/unit-evaluations/distribution-administration.html
[25] A. N. Pears, "Does quality assurance enhance the quality of computing education?", in Twelfth Australasian Computing Education Conference (ACE 2010), Brisbane, Australia, pp. 9-14, 2010.
[26] A. Carbone and J. Ceddia, "Common areas for improvement in ICT units that have critically low student satisfaction," presented at ACE2012, the Fourteenth Australasian Computing Education Conference, Melbourne, Australia, 2012.
[27] A. Carbone, "Building peer assistance capacity in faculties to improve student satisfaction of units," presented at the Higher Education Research and Development Society of Australasia (HERDSA) conference, Gold Coast, Queensland, Australia, 2011.
