Development of a questionnaire for assessing students' perceptions of the teaching and learning environment and its use in quality assurance


While evaluation at the level of the instructor or course must be almost universal, evaluating the teaching involved in degrees, programs, majors or departments is far less prevalent and has received a great deal less attention in the literature. This situation is somewhat surprising in that the curriculum for a degree is more than the sum of the teaching in the constituent courses. Universities have procedures for approving the plans for overall degrees and list degree awards in their calendars. Surely, then, the entity itself needs evaluation at least as much as the parts which make it up.

If evaluation is confined to individual courses, it could be argued that the important teaching and learning environment becomes neglected. Fraser (1998) has conceptualised a learning environment as:

    'Learning environment' refers to the social, psychological and pedagogical contexts in which learning occurs and which affect student achievement and attitudes. (Fraser 1998, p. 3)

Standard course evaluation questionnaires do not normally encompass this broad construct. Indeed, many have been criticised for being based on teacher-centred models of education (Centra 1993; d'Appollonia and Abrami 1997; Kolitch and Dean 1999; McKeachie 1997). Yet the teaching and learning environment has been shown to be important in the development of the generic competencies that are so vital to today's graduates (Kember and Leung 2005a; Leung and Kember 2006).

One reasonably common form of evaluation of a learning environment focuses on the student experience. Tinto (1975, 1987) theorised that integrating students into the social and academic fabric of college life was an important determinant of student persistence. Consequently, surveys have been developed to examine the degree to which students enjoy a range of campus experiences of the type which might foster academic and social integration.

The most commonly used survey of this type is probably the College Student Experience Questionnaire (Pace and Kuh 2007), which has been widely administered (Kuh 1999; Kuh et al. 2000). The instrument asks about the degree to which students have experienced a wide range of activities, such as using the library, interacting with staff, taking part in cultural activities, making use of campus facilities, attending societies and making student acquaintances.

Surveys of this type are usually administered at the college or campus level. Programs to promote integration, such as first-year orientation, are commonly run by school or college offices. Many of the experiences surveyed are not program-specific. While the instruments have proved valuable for the purpose for which they were designed, they would not give useful feedback to specific departments for evaluation at the degree or program level.

The Course Experience Questionnaire (CEQ; Ramsden 1991) appears to be the most widely used instrument designed to assess teaching effectiveness at the level of the whole degree or program in higher education institutes. The CEQ has been employed in the annual national graduate survey in Australian higher education since 1993, and a similar national survey in the UK is currently being developed.

The CEQ was primarily designed as a performance indicator for teaching effectiveness which assesses a range of student perceptions related to teaching and learning. The older version of the CEQ consisted of 30 items with the five scales of Good Teaching, Generic Skills, Clear Goals and Standards, Appropriate Workload and Appropriate Assessment, together with an item for Overall Satisfaction. In an extensive study by Wilson and Lizzio (1997) with three large samples of Australian graduates, a shortened version of the CEQ with 23 items measuring the same six dimensions was developed. The 23-item CEQ was shown to have moderate to high reliability for the five scales, as well as acceptable construct validity. The CEQ also produced expected correlations with students' learning outcomes, which established its external validity.

The obvious limitation of the CEQ is that it consists of just five scales and an item for overall satisfaction. This implies that the teaching and learning environment of a degree can be characterised by just five factors, or that feedback to departments would be restricted to five variables. This is possibly why the CEQ has been used to produce ranking lists: such a restricted number of scales can give little diagnostic feedback capable of identifying strengths or weaknesses which can be remedied.

The CEQ has one overall satisfaction item and one scale for all generic skills, which leaves just four scales for teaching and learning. Even at the instructor level, teaching is recognised as a multidimensional skill (Marsh 1987) and well-designed teaching evaluation instruments have multiple scales (see Marsh 1987 for examples). Therefore, the CEQ must be limited in its ability to adequately characterise even the pedagogical context of a learning environment. It is not clear that it makes any provision at all for the social and psychological contexts included in Fraser's (1998) definition. A further limitation is the inclusion of all generic skills within one scale, which implies that all are of the same nature.

Development of a teaching and learning environment questionnaire

The university at which we previously worked recognised the need for an evaluation instrument which was functional at the degree or program level. The design of this was guided by two principles.

1. It needed to be sufficiently diagnostic to identify strengths and weaknesses, making it possible for feedback to lead to an action plan for improvement.
2. The instrument needed to be consistent with research into teaching and learning environments and recognise the breadth of constructs encompassed within an environment.

Both principles necessitated the inclusion of a range of scales that represented the breadth of constructs implicit in a learning environment and the ability to provide diagnostic feedback on teaching and curriculum design. A constraint is that students are reluctant to complete questionnaires which they consider overly long. The resultant questionnaire, therefore, needed to include vital constructs without appearing daunting to complete.

Generic capabilities

Another design decision was associated with the desire to make the questionnaire outcomes-based. For it to be used across all disciplines, it could not evaluate discipline-specific outcomes. However, there is general agreement that a university education should result in graduates developing a range of generic capabilities, which equip them for lifelong learning.

To determine the generic capabilities to include in the instrument, each of the faculties was asked to determine the capabilities important for their graduates. The resulting lists were examined and a set of common capabilities emerged. While the terminology was not necessarily consistent, there was broad agreement that all graduates needed to develop the capabilities in the list below. As a result, the initial version of the questionnaire contained a scale seeking perceptions of the development of each of the following generic capabilities:

• Critical Thinking
• Creative Thinking
• Ability to Pursue Lifelong Learning
• Adaptability
• Problem Solving
• Communication Skills
• Interpersonal Skills
• Discipline Knowledge
• Career Relevance.

Learning environment

The remainder of the questionnaire needed to be characteristic of a teaching and learning environment. The initial version concentrated on variables found in the literature on the student experience. There were two reasons for this. Firstly, the development of the questionnaire was part of a project known as Evaluating the Part-Time Student Experience. Secondly, teaching was already heavily evaluated through regular course and teaching evaluation. Accordingly, there were just four scales specific to teaching:

• Active Learning
• Teaching for Understanding
• Teacher–Student Interaction
• Assistance from Teaching Staff.

Field-testing of initial version

The initial version of the questionnaire was used as a graduate survey. It was administered to all those who graduated from taught courses at a university in Hong Kong in 2001. The university offered discrete programs for full- and part-time students.

Some rather surprising results were obtained, which have been reported in detail (Kember and Leung 2005a). The part-time students perceived higher levels of capability development than full-time students on all but one of the nine generic capability scales. The result was unexpected in that, while the literature was not definitive about the mechanism for capability development, there was speculation that generic capabilities are nurtured through immersion in a stimulating campus environment. However, part-time students spent markedly less time on campus than their full-time counterparts.

After a number of possible explanations had been eliminated, only one seemed to remain: the principal mechanism for capability development actually was through active learning experiences which required students to practise the capability in question. The part-time courses were mostly taught through 3-h evening classes. Even the most teacher-centred lecturers tended to eschew purely didactic teaching to keep students awake. The part-time students, therefore, experienced a more interactive teaching and learning environment than their full-time counterparts.

As a test of the initial version of the questionnaire, the generic capabilities part had proved successful. The scales for the capabilities had acceptable psychometric properties (Kember and Leung 2005a) and gave useful feedback on the development of capabilities which were accepted as important learning outcomes across a wide range of degrees. The results seemed to suggest that there needed to be greater emphasis on aspects of teaching and learning. To avoid lengthening the instrument, new scales could replace ones concerned with the campus environment, which had not correlated strongly with capability development.

Final version

Subsequent development of the questionnaire took place at another university in Hong Kong, in conjunction with a quality assurance initiative known as the Student Engagement Project. The project aims to give feedback to departments about their undergraduate degrees in alternate years. First- and third-year students are sampled. The students have the option of completing the questionnaire on paper or through the internet, and English and Chinese versions are available.

There have been two versions of the instrument. The one used for the first 2 years was modified in the light of feedback. Only the final version is described here. The instrument still retains the same two parts included in the original version.

1. A section seeking feedback on students' perceptions of the development, during their degree, of a set of generic capabilities.
2. A section seeking feedback on perceptions of the quality of elements of the teaching and learning environment.

The set of capabilities has not changed greatly from that in the original version. The minor changes resulted from the need to gain approval for the instrument in a comprehensive university. There now seems to be virtually universal acceptance of the need for graduates to develop generic competencies, and the set listed below has proved acceptable across the university. There are differences between programs as to the relative importance of the competencies but, as the feedback is considered indicative, it can be interpreted with these weightings in mind. The capabilities included in the final version are:

• Critical Thinking
• Creative Thinking
• Self-managed Learning
• Adaptability
• Problem Solving
• Communication Skills
• Interpersonal Skills and Groupwork
• Computer Literacy.

The elements included in the teaching and learning environment have been adapted to include those shown to have a major impact on the development of capabilities (Kember and Leung 2005a, b; Leung and Kember 2005a, b, 2006) and others which give useful feedback to departments about teaching and curriculum design. By including this range of scales, the questionnaire encompasses a wide range of facets of a teaching and learning environment, so as to make it consistent with Fraser's (1998) holistic definition. The scales now included are:

• Active Learning
• Teaching for Understanding
• Feedback to Assist Learning
• Assessment
• Relationship between Teachers and Students
• Workload
• Relationship with Other Students
• Cooperative Learning
• Coherence of Curriculum.

A listing of scales and their items is given in the Appendix. All but one of the scales have two items; the remaining scale (Assessment) has three. This was a deliberate design decision. It permits a range of constructs to be included in the instrument, thus reflecting the breadth of the construct of a teaching and learning environment. The shortness of the scales, though, keeps the questionnaire to a reasonable length. The paper version fits on two A4 sides. The trade-off with short scales is the difficulty of constructing reliable scales which adequately represent the construct. Evidence of satisfactory reliability and validity of the scales is given in the following sections.

The questionnaire uses a five-point Likert scale with responses ranging from Strongly Agree to Strongly Disagree. The instrument also includes two open-ended questions:

• What are the best aspects of your program?
• Which aspects are most in need of improvement?

Reliability

Data for the tests of reliability and validity were obtained from a sample of 2,879 students from the 21 programs surveyed in 2005. The return rate was 53.2% and 1,530 useable questionnaires were available for analysis.

The reliabilities of the scales in the questionnaire were checked using Cronbach α values (Table 1).

Table 1 Cronbach α coefficients for scales in the questionnaire

Scale                                         No. of items   α
Capability
  Critical thinking                           2              0.79
  Creative thinking                           2              0.58
  Self-managed learning                       2              0.73
  Adaptability                                2              0.65
  Problem solving                             2              0.73
  Communication skills                        2              0.64
  Interpersonal skills and groupwork          2              0.64
  Computer literacy                           2              0.70
Teaching and learning environment
  Active learning                             2              0.65
  Teaching for understanding                  2              0.77
  Feedback to assist learning                 2              0.68
  Assessment                                  3              0.71
  Relationship between teachers and students  2              0.70
  Workload                                    2              0.82
  Relationship with other students            2              0.65
  Cooperative learning                        2              0.74
  Coherence of curriculum                     2              0.79
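As an aside for readers who wish to run the same check on their own data, Cronbach's α is straightforward to compute directly. The following is a minimal sketch in Python; the item names and responses are hypothetical, since the paper does not publish item-level data.

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a scale whose items are the columns of `items`.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses to the two Critical Thinking items,
# coded 1 (Strongly Disagree) to 5 (Strongly Agree).
responses = pd.DataFrame({
    "critical_thinking_1": [4, 5, 3, 4, 2, 5, 4, 3],
    "critical_thinking_2": [4, 4, 3, 5, 2, 4, 4, 2],
})
print(f"alpha = {cronbach_alpha(responses):.2f}")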

Validity

Validity of scales or instruments is harder to establish than reliability. The procedure involved in deriving the set of capabilities to be included in the original instrument provided a measure of construct validity in establishing them as a viable set of generic attributes which graduates could be expected to have developed. Further confirmation of construct validity comes from the observation that a similar set of attributes was adopted by a second, comprehensive and research-intensive university, whereas the original was a former polytechnic concentrating on applied professional courses.

The content validity of the items in all scales, as measures of the intended constructs, was ensured by their examination by expert panels. The formulations were then checked through focus-group discussions with small groups of typical students. The construct validity of the instrument was also established through a series of psychometric tests (Leung 2005).

There are two strands of evidence that indicate that the scales included in the teaching and learning environment part of the instrument are a valid characterisation of an environment. Firstly, the results from the surveys have been found to provide readily interpretable feedback for diagnosing strengths and weaknesses, as discussed in the next main section. Because evidence from all of the scales has resulted in recommendations for curriculum revisions, they must assess relevant elements of a teaching environment.

Model of capability development

A further demonstration of construct validity comes from using structural equation modelling (SEM) to establish that the elements included in the teaching and learning environment make a significant contribution to nurturing the set of generic capabilities included in the questionnaire. Anderson (1987) suggested that external validity of a construct can be established through examining the set of relationships of the construct with other constructs in a nomological network.

SEM has the ability to test hypothesised models of the causal relationships between measured variables. In this case, the hypothesised model is of the teaching and learning environment influencing the development of a set of generic capabilities. The model is shown in Fig. 1. The diagram of the model includes the standardised coefficients obtained from the testing of the model. Although these are not part of the hypothesised model, showing them avoids duplication of the diagram.

The scales from the questionnaire appear in the model as indicators, which are represented in SEM models as rectangles. It will be noted that two scales from the questionnaire do not appear as variables in the model. Because computer literacy is somewhat different in nature to the other generic capabilities, it did not correlate well with them. Students' perceptions of workload relate only weakly to hours worked but are influenced by their perceptions of other elements of their learning environment (Kember 2004). Therefore, workload can be modelled as being weakly influenced by most of the variables in this characterisation of a teaching and learning environment (Kember and Leung 2006), which makes its inclusion in the model problematic.

In addition to the variables corresponding to the questionnaire scales, the model includes five latent variables, shown in ovals. On the teaching and learning environment side of the model, the three latent variables of teaching, teacher–student relationship and student–student relationship can be envisaged as higher-order factors that group distinct facets of the environment. Similarly, on the capabilities side of the model, the two latent variables are intellectual and working together. Again, these represent logical groupings of the capabilities.
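To make the structure of such a model concrete, it can be expressed in lavaan-style syntax and fitted with a package such as semopy in Python. The sketch below is illustrative only: the scale-to-factor groupings are our reading of the model description, not necessarily those of the published model, and the variable names are hypothetical column names for per-student scale scores.

import pandas as pd
from semopy import Model  # third-party SEM package

# Illustrative lavaan-style specification; the published model's exact
# indicator-to-factor assignments may differ from this reading of Fig. 1.
MODEL_DESC = """
# Teaching and learning environment: three higher-order factors
teaching        =~ active_learning + teaching_for_understanding
teacher_student =~ feedback_to_assist_learning + relationship_teachers_students
student_student =~ relationship_other_students + cooperative_learning

# Generic capabilities: two higher-order factors
intellectual     =~ critical_thinking + creative_thinking + problem_solving
working_together =~ communication_skills + interpersonal_groupwork

# Structural paths: environment influences capability development
intellectual     ~ teaching + teacher_student + student_student
working_together ~ teaching + teacher_student + student_student
"""

def fit_capability_model(scale_scores: pd.DataFrame):
    """Fit the hypothesised model to per-student scale scores (one column per scale)."""
    model = Model(MODEL_DESC)
    model.fit(scale_scores)
    return model.inspect()  # table of parameter estimates and p-values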

SEM models are tested by examining the goodness of fit of the model against the data collected to measure the variables included in the model. Assessment of model fit was based on multiple criteria, including both absolute misfit and relative fit indexes. The absolute misfit indexes included the Root Mean Square Error of Approximation (RMSEA; Browne and Cudeck 1993) and the Standardised Root Mean Squared Residual (SRMR; Bentler 1995). The relative goodness-of-fit index used in the study was the Comparative Fit Index (CFI; Bentler 1990). A model with RMSEA < 0.08, SRMR < 0.08 and CFI > 0.95 is considered an excellent fit to the data.
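For reference, the definitions of RMSEA and CFI as usually given in the SEM literature (conventions vary slightly, e.g. N versus N - 1 in the RMSEA denominator) are as follows, where \chi^2_M and df_M are the fitted model's chi-square statistic and degrees of freedom, \chi^2_B and df_B are those of the baseline (independence) model, and N is the sample size:

\mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2_M - df_M}{df_M\,(N - 1)},\; 0\right)},
\qquad
\mathrm{CFI} = 1 - \frac{\max\left(\chi^2_M - df_M,\; 0\right)}{\max\left(\chi^2_B - df_B,\; \chi^2_M - df_M,\; 0\right)}

The SRMR is the square root of the mean of the squared standardised residuals, that is, of the discrepancies between the observed and model-implied correlations.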

The goodness-of-fit indexes for this model were RMSEA = 0.057 (90% CI = 0.052, 0.062), SRMR = 0.042 and CFI = 0.943, which indicates an excellent fit to the observed data. The standardised parameter estimates of the model are shown in Fig. 1. All of the paths between the latent variables were statistically significant, except the one from teaching to working together. Moreover, the directions of all the paths were positive, as anticipated.

Fig. 1 The SEM model showing how the teaching and learning environment influences capability development. Latent variables are shown as ovals and measured variables are shown as rectangles. All paths are statistically significant at the 5% level. Error terms are omitted in the figure for simplicity


The excellent fit of the model to the data from the questionnaire provides further evidence of the validity of the questionnaire. Its design is based on a hypothesis that the teaching and learning environment, as characterised by the variables included in the questionnaire, has a significant impact on the development of the set of generic capabilities that were selected as being important for graduates. The excellent fit of the model to data collected with the questionnaire from a large sample of students confirms that the questionnaire is based upon a valid overall model.

Diagnostic power for quality assurance

It has been fairly common to rely solely on psychometric data to establish the credentials of questionnaires. Where an instrument is used for quality assurance or enhancement purposes, though, we feel that its diagnostic power is also important. If an instrument is to provide help in improving the quality of teaching and learning, it must be able to identify strengths and weaknesses.

To do this, it must have a factor structure that is representative of the multidimensional nature of a teaching and learning environment and includes elements that are important for effective environments. Marsh (1987) has established that the same principle applies to teaching evaluation questionnaires, which is why well-designed instruments have multiple factors, based on a model of effective teaching. Unfortunately, there are many poorly designed instruments which have just one overall factor and, therefore, are only capable of ranking teachers rather than providing diagnostic feedback.

Use of the questionnaire in a quality assurance initiative

For 4 years, the results from the surveys have been used as part of a quality enhancement initiative. Each undergraduate program is surveyed in alternate years. Feedback is supplied to departments by using a profile of the form shown in Fig. 2. The departments receive a profile in this form for responses in year 1 and year 3. They are also provided with a complete set of responses to the two open-ended questions.

The feedback profile includes frequencies and means for each scale for the current and previous administrations of the instrument. Each scale score for the program is compared to the overall university mean for all surveyed programs. The comparison is shown graphically in the column at the right of the feedback sheet. This gives z-scores, which are the number of standard deviations from the mean. The feedback profile shown in Fig. 2 is illustrative of the diagnostic power of the instrument. The z-scores indicate that the department has a generally good record in teaching, with means for all but three scales being higher than the university mean. The marked variations in the z-scores, though, indicate that it is possible to identify relative strengths and weaknesses.
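A sketch of how such a profile column can be computed: given a table of scale means per program, a program's z-score on a scale is its deviation from the university-wide mean in standard-deviation units. The paper does not state which standard deviation is used; the Python sketch below assumes the spread of program-level means, and all program names and numbers are invented for illustration.

import pandas as pd

def profile_z_scores(program_means: pd.DataFrame, program: str) -> pd.Series:
    """z-scores of one program's scale means against all surveyed programs.

    `program_means` is indexed by program, with one column per scale.
    Assumes the SD is taken across program-level means; the paper only says
    z-scores are 'the number of standard deviations from the mean'.
    """
    mu = program_means.mean(axis=0)          # university-wide mean per scale
    sd = program_means.std(axis=0, ddof=1)   # SD across programs per scale
    return (program_means.loc[program] - mu) / sd

# Invented means for three programs on two scales.
means = pd.DataFrame(
    {"active_learning": [3.2, 3.6, 3.0], "workload": [2.5, 3.1, 2.8]},
    index=["Program A", "Program B", "Program C"],
)
print(profile_z_scores(means, "Program A"))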

Shortly after the feedback profiles are released to departments, an appointment is made for one of the two professors in the Centre for Learning Enhancement and Research (CLEAR) to meet with the department chair and other members of the department with responsibilities for teaching coordination. At this meeting, there is a discussion of feedback profiles, the listings of qualitative comments and other available feedback. Data are treated as being indicative, rather than absolute, and the need for intelligent interpretation in the particular context of the program is recognised. The meetings, therefore, take the form of interactive dialogues. The aims are to identify strengths to be built upon and challenges for which remedial action plans can be suggested. In the initial stages, reactions to the initiative varied. In some quarters, there was suspicion because CLEAR was a newly formed unit and departments were not used to receiving feedback at the program level. Acceptance has grown as the feedback has been found to be useful. An indication of the level of acceptance is that a growing number of departments are seeking additional information, including annual surveys, additional years besides year 1 and year 3, and additional competencies specific to disciplines or professions.

Case study

A case study is now presented which shows how the collection of diagnostic feedback through evaluation can inform and facilitate the enhancement of teaching and learning quality. The case is the program with the third-year feedback profile shown in Fig. 2. Below is an extract from the meeting notes from the 2003 round of the survey, showing how some of the recommendations resulted in an improved teaching and learning environment.

Fig. 2 Feedback profile for the case-study program (panels: Development of capabilities and Teaching and learning environment; for each scale, response frequencies from Strongly Agree to Strongly Disagree, scale means, and z-scores on a -2 to 2 axis)


Extract from meeting summary 2003

Areas which could need further attention:

• Inconsistency appears in students' perceptions of low interpersonal and groupwork skills, despite their having been involved in a large number of group projects;
• A perception of a low level of problem solving and career preparation;
• Heavy workload.

Suggestions for ways forward were made during the meeting:

1. The compulsory first-year courses, such as X and Y, could be examined to see if practical applications could be stressed more, in order for them to become a means of developing critical thinking and problem-solving ability.
2. The new module which introduces Z, with guest speakers sharing their experience in their profession, seems likely to improve problem solving and career preparation.
3. The nature of group projects could be examined to see why they appear to be having negative effects on student relationships and cooperative learning. Possibly the introduction of peer assessment of contributions to the group might help.

Comparison of mean scores between 2003 and 2005 enabled the department to monitor changes. In this case, it can be seen that issues highlighted in the 2003 meeting notes have been acted upon in the way suggested and that significant improvements have resulted. This has been noted in the extract from the 2005 meeting notes.

Extract from meeting summary 2005

Compared to the 2003 data, there were significant increases in the ratings for problem solving for both first and third years. This was attributed to teachers stressing applications to solve typical problems. This improvement is worth communicating to the Department for wider application.

There were also major improvements in student–student relationships and cooperative learning. This was attributed to better handling of group projects and the wider adoption of the assessment of individual contributions to group projects. It could be worthwhile to explain to students each time the purpose and anticipated outcomes of projects, as they are a time-consuming, but very worthwhile, form of learning.

The improvements in the identified aspects of teaching and learning are shown graphically in Fig. 3, which compares z-scores for 2003 and 2005 for the scales corresponding to the issues identified in the 2003 summary. It can be seen that there have been significant improvements.

Conclusion

There are numerous questionnaires for evaluating teaching and courses, but instruments suitable for evaluating teaching and learning in programs, degrees, majors or departments are comparatively scarce. At this level, a major influence on the achievement of learning outcomes is the teaching and learning environment. An instrument suitable for program-level quality assurance or enhancement purposes, therefore, needs to provide measures of a well-characterised teaching and learning environment. The questionnaire described in this article has nine scales for assessing a teaching and learning environment. Four are subsumed under a higher-order teaching variable, two are concerned with teacher–student relationships, and two focus on student–student relationships. The remaining variable is workload. Together, the resulting conceptualisation of the teaching and learning environment corresponds closely to the holistic definition by Fraser (1998).

The instrument also measures student perceptions of the development of eight generic capabilities. These were included as there is wide agreement that they are important learning outcomes for graduates. Reliabilities of the scales in the instrument were established using the Cronbach α coefficient. The validity of construct selection and scale and item design was initially addressed through consultation with panels with appropriate expertise. The validity of the whole instrument was then verified by testing, with SEM, a model inherent in the design of the questionnaire. There was a very good fit to the collected data for a model of the teaching and learning environment being influential in the development of the set of generic capabilities.

The instrument also has good diagnostic properties, evidenced by its ability to identify relative strengths and weaknesses in a program. This is an important characteristic of any questionnaire which is to be used to guide quality enhancement. The instrument, therefore, is recommended for wider application, as programs are not evaluated as frequently as teaching or courses, and rarely by instruments designed to be diagnostic of the elements which constitute a teaching and learning environment.

Acknowledgments This research was partially funded by grants from the University Grants Committee of Hong Kong.

Fig. 3 Change in z-scores for five scales (Workload, Cooperative learning, Relationship with other students, Interpersonal skills and groupwork, Problem solving) for Year 3 and Year 1 students: 2003 vs 2005


Appendix (continued)

Assessment
23. The program uses a variety of assessment methods.
24. To do well in assessment in this program you need to have good analytical skills.
25. The assessment tested our understanding of key concepts in this program.

Relationship between teachers and students
26. The communication between teaching staff and students is good.
27. I find teaching staff helpful when asked questions.

Workload
28. I manage to complete the requirements of the program without feeling unduly stressed.
29. The amount of work we are expected to do is quite reasonable.

Relationship with other students
30. I feel a strong sense of belonging to my class group.
31. I frequently work together with others in my classes.

Cooperative learning
32. I have frequently discussed ideas from courses with other students out-of-class.
33. I have found that discussing course material with other students outside classes has helped me to reach a better understanding of the material.

Coherence of curriculum
34. I can see how courses fitted together to make a coherent program of study for my major.
35. The program of study for my major was well integrated.

References

Anderson, J. C. (1987). An approach for confirmatory measurement and structural equation modeling of organizational properties. Management Science, 33, 525–541. doi:10.1287/mnsc.33.4.525.
Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246. doi:10.1037/0033-2909.107.2.238.
Bentler, P. M. (1995). EQS structural equations program manual. Encino, CA: Multivariate Software.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136–161). Newbury Park, CA: Sage.
Centra, J. (1993). Reflective faculty evaluation. San Francisco: Jossey-Bass.
d'Appollonia, S., & Abrami, P. C. (1997). Navigating student ratings of instruction. American Psychologist, 52, 1198–1208.
Fraser, B. J. (1998). The birth of a new journal: Editor's introduction. Learning Environments Research, 1, 1–5.
Kember, D. (2004). Interpreting student workload and the factors which shape students' perceptions of their workload. Studies in Higher Education, 29, 165–184.
Kember, D., & Leung, D. Y. P. (2005a). The influence of active learning experiences on the development of graduate capabilities. Studies in Higher Education, 30, 157–172.
Kember, D., & Leung, D. Y. P. (2005b). The influence of the teaching and learning environment on the development of generic capabilities needed for a knowledge-based society. Learning Environments Research, 8, 245–266.
Kember, D., & Leung, D. Y. P. (2006). Characterising a teaching and learning environment conducive to making demands on students while not making their workload excessive. Studies in Higher Education, 31, 185–198.
Kolitch, E., & Dean, A. V. (1999). Student ratings of instruction in the USA: Hidden assumptions and missing conceptions about 'good' teaching. Studies in Higher Education, 24, 27–42.
Kuh, G. D. (1999). How are we doing? Tracking the quality of the undergraduate experience, 1960s to the present. Review of Higher Education, 22(2), 99–119.
Kuh, G. D., Hu, S., & Vesper, N. (2000). "They shall be known by what they do": An activities-based typology of college students. Journal of College Student Development, 41(2), 228–244.
Leung, D. Y. P. (2005). Development of a questionnaire for assessing students' perception of teaching and learning environment. In T. Hudson, M. Abramson, T. Acland, J. Braham, M. Hill, A. Lines, M. Mclinden, D. Saunders, & J. Storan (Eds.), Towards a global understanding of lifelong learning: Making a difference (pp. 284–293). London: Forum for Access and Continuing Education.
Leung, D. Y. P., & Kember, D. (2005a). Comparability of data gathered from evaluation questionnaires on paper and through the internet. Research in Higher Education, 46, 571–591.
Leung, D. Y. P., & Kember, D. (2005b). The influence of the part-time study experience on the development of generic capabilities. Journal of Further and Higher Education, 29, 91–101.
Leung, D. Y. P., & Kember, D. (2006). The influence of teaching approach and teacher–student interaction on the development of graduate capabilities. Structural Equation Modeling, 13, 164–186.
Marsh, H. W. (1987). Students' evaluations of university teaching: Research findings, methodological issues, and directions for future research. International Journal of Educational Research, 11, 255–388.
McKeachie, W. (1997). Student ratings: The validity of use. American Psychologist, 52, 1218–1225.
Pace, C. R., & Kuh, G. D. (2007). The college student experiences questionnaire assessment program. Retrieved October 29, 2008, from http://cseq.iub.edu/cseq_generalinfo.cfm.
Ramsden, P. (1991). A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education, 16, 129–150.
Seldin, P. (1993, July 21). The use and abuse of student ratings of instruction. The Chronicle of Higher Education, p. A-40.
Tinto, V. (1975). Drop-out from higher education: A theoretical synthesis of recent research. Review of Educational Research, 45(1), 89–125.
Tinto, V. (1987). Leaving college. Chicago: University of Chicago Press.
Wilson, K. L., & Lizzio, A. (1997). The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education, 22, 33–53.