Monitoring Learning within the Laboratory-Based Learning Environment
Journal of Learning and Teaching
2012 Series: Paper 3
Carla Gallagher, Sport and Exercise Science
Abstract
Monitoring student learning is a major challenge for faculty
teaching within the laboratory environment and can present the
teacher with difficult situations. Although the laboratory-based
environment is designed to encourage more active student
participation, students often find it difficult to establish the
link between theoretical knowledge and laboratory-based practice,
leaving them confused and apathetic. A means of monitoring
learning within this environment is therefore essential, as it
raises the question: are students really learning? This paper
explores effective methodologies that could be employed to
monitor learning within the laboratory environment.
Introduction
As part of the assessment for a postgraduate certificate in learning
and teaching (PGCert), I was asked to reflect upon my teaching
and identify an area in which I felt my teaching itself and/or
student learning could be enhanced. As it was my first semester
of teaching, I was given teaching hours on a first-year
undergraduate module titled ‘Bioenergetics of the Exercising
Human’. My role was to teach students the key concepts and skills
they need within the laboratory environment, skills which they
will use continuously throughout the rest of their Sport and
Exercise Science degree.
During my teaching hours it slowly became apparent to me that,
although the module is well designed, it has one major flaw:
within the laboratory environment, learning progress is poorly
monitored. With the exception of two laboratory reports, both
summative assessments, one written mid-semester and one at the
conclusion of the module, no other assessments take place to
monitor student progress. This therefore poses two key questions:
how do we know that students are learning from laboratory style
methods? And how do we determine whether or not students
are making the link between theory and practice? Formatively
assessing students can play an important part in monitoring
student learning. This paper discusses the need to monitor
learning, and why it is becoming increasingly important within
higher education; it then reviews methods which can be used to
enhance this process.
Literature Review
The laboratory-based environment is designed to encourage
diagnostic and communication skills, in addition to providing a
team approach to learning (Black and Smith, 2004). Providing the
learner with a ‘hands on experience’ is known to enrich the learning
process by reducing the amount of time spent simply observing
and visually learning from the teacher, encouraging active student
participation. It has also been suggested that learning by ‘doing’ is
not only effective but leads to greater understanding and retention
of information (Randall and Burkholder, 1990) due to a direct
action-to-reaction response (Feldmann and Hofinger, 1997).
Sport and exercise physiology, a discipline of sports medicine
that involves the study of the body’s response to physical stress,
has employed the laboratory environment for many years. It allows
the learner to engage in activities such as study
design, data collection, analysis and interpretation, in order to
gain a greater understanding of basic principles used within this
discipline. However, although it is suggested that this environment
leads to greater retention of information, often learning is poorly
monitored, therefore the teacher is left unaware of the student
learning process and progress.
It has been reported that often students exploring theory through
laboratory experiments experience difficulty in integrating their
understanding of concepts gained in the lecture with physical
phenomena observed in the laboratory (Nakhleh, 1994). Friedler
and Tamir (1990) report that students seem to experience four
major difficulties in carrying out laboratory work: (1) an inadequate
understanding of the basic concepts underlying the lab, (2) an
inability to relate their observations to their theoretical knowledge,
(3) an inability to order their observations so that irrelevant details
are filtered out, and (4) weak links and even gaps in their knowledge
which slow down students’ understanding or even mislead
them. In addition to the findings of Friedler and Tamir (1990),
other researchers have explored the difficulties faced within the
laboratory environment, reporting that students who struggle to
decide how to focus their observations (Nakhleh and Krajcik,
1993), and those who routinise procedures without gaining a
corresponding understanding, develop gaps within their knowledge
structure (Mulder and Verdonk, 1984).
Moreover, students often need to establish the purpose of an
experiment and understand the teacher’s reasons for carrying out
a particular task in order to conceptualise the information
provided. It has been reported that within the laboratory
environment students focus too much attention on completing a
task (Hart et al., 2000) and relatively little on understanding
the experiment.
Research examining the purposes of, uses of, and learning from
laboratories has drawn important conclusions. Johnstone and Wham (1982)
noted that laboratory work often cognitively overloads students
with too much information to recall. Furthermore, Hodson (1990)
described laboratory work as often being dull and teacher-
directed, and highlighted the fact that students often failed to
relate the laboratory work to other aspects of their learning. Thus,
it is becoming increasingly apparent from the literature that there
is a need to monitor learning within the laboratory environment
to ensure that links are being made between theory and practice.
This paper will explore methodologies that could be employed in
order to monitor learning within this highly active, rich environment.
Verbal presentations, peer assessment, formative assessments,
response cards, and concept mapping will all be discussed; these
methods should stimulate creativity and actively involve students
within the learning process.
Verbal Presentations and Peer Assessment
Verbal presentations are a method of information delivery, in which
an individual or group explains the content of a topic to an audience
or learner. In academic environments verbal presentations are
often formative, taking many forms, including talks, seminars
or research proposals. Nevertheless, within higher education
presentations are frequently used as a method of summative
assessment as opposed to formative assessment. However, in
recent years it has been suggested that verbal presentations
be integrated into the classroom environment as a means of
monitoring learning. To engage students more actively within the
laboratory environment, Black and Smith (2004) asked students
to review each laboratory in a presentation style. Their
research had three main aims; (1) to encourage active student
participation, (2) to provide opportunity for the students to learn
a topic in greater depth, and (3) to give the students a chance to
give a verbal presentation. From this research they found that
students not only accepted the opportunity to review previous
laboratory classes, but that the reviews were valued by most students.
In addition, students felt that they gained experience, benefited
from the presentation, and most importantly verbal presentations
allowed students to understand course content in more depth. As
a result, it would appear that presentations can be a useful tool for
establishing themes and concepts throughout a semester.
Another benefit arising from the use of verbal presentations is the
opportunity for peer assessment to take place. Peer assessment
is defined as the process through which groups or individuals
rate their peers (Falchikov, 1995) and can take a formative or
summative approach (Dochy et al. 1999). Peer review as part of a
formative assessment, the primary function of which is to provide
information for learners about their progress, not only helps
develop students’ own skills of reflection (Somervell, 1993),
but also develops attitudes of responsibility towards other group
members (Burnett and Cavaye, 1980). Research investigating the
advantages of peer assessment has reported that students are
very positive towards the process (Dochy et al., 1999; Orsmond
et al. 1996). Keaten and Richardson (1993) also affirmed that
peer assessment fostered an appreciation for the individuals’
performance within the group and interpersonal relationships
in the classroom. On the other hand, research by Brindley and
Scoffield (1998) reported that students viewed peer assessment
as a biased process; further to this a number of students doubted
the ability of other students to interpret the criteria and assess
work. It would therefore appear more appropriate for this method
to be used for general feedback than for it to play a major role
in the assessment process.
Overall, verbal presentations are a valuable way to assess and
monitor learning throughout a course module. Also, presenting
under formative conditions allows students to gain confidence
in discussing course material and ideas to a wider audience.
Moreover, engaging students in the peer assessment process
allows individuals to foster skills of personal judgement (Magin
and Helmore, 2001) and gain a greater insight into how grading is
obtained by the teacher. Additionally, peer assessment of verbal
presentations could highlight to the teacher aspects of module
content which students are finding complex.
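The bias concern raised by Brindley and Scoffield (1998) suggests a simple moderation step: average peer scores per criterion and flag criteria where peers disagree widely, so the teacher can check those marks by hand. A minimal sketch in Python; the criteria, ratings, and the disagreement threshold are invented for illustration:

```python
from statistics import mean, stdev

# Hypothetical peer ratings (1-5) for one presentation, by criterion.
ratings = {
    "clarity": [4, 5, 4, 3, 4],
    "theory-to-lab link": [2, 5, 3, 2, 4],
    "delivery": [4, 4, 5, 4, 4],
}

def summarise(ratings):
    """Average each criterion; flag wide peer disagreement for the
    teacher to moderate (an assumed threshold of 1.0 is used here)."""
    summary = {}
    for criterion, scores in ratings.items():
        spread = stdev(scores)
        summary[criterion] = {
            "mean": round(mean(scores), 2),
            "spread": round(spread, 2),
            "review": spread > 1.0,  # peers disagree -> check for bias
        }
    return summary

for criterion, s in summarise(ratings).items():
    flag = " (check for bias)" if s["review"] else ""
    print(f"{criterion}: mean {s['mean']}, spread {s['spread']}{flag}")
```

The flagged criteria could then feed into the general feedback discussed above, rather than directly into grades.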
Formative Assessments
Formative assessment refers to assessment that is specifically
intended to generate feedback on performance to improve
and accelerate learning (Sadler, 1998). In general, within higher
education, formative assessment and feedback should be used
to empower students as self-regulated learners (Nicol and
Macfarlane-Dick, 2006). In addition to self-regulated learning,
formative assessments enable students to actively monitor their
own learning alongside identifying a number of different learning
processes, for example, setting of and orientation towards learning
goals, the strategies used to achieve goals, the management of
resources, the effort exerted, and reactions to external feedback
(Nicol and Macfarlane-Dick, 2006). From the teacher’s viewpoint,
formative assessments are used to provide the learner with
external feedback, for example, praise or constructive criticism.
Providing external feedback allows the learner to identify and
generate internal feedback, as they monitor their engagement
with learning activities, and tasks, and assess progress towards
goals, e.g. module aims (Butler and Winne, 1995). In support, Black
and Wiliam (1998) concluded from a review of 250 studies that
this generation of feedback consistently resulted in learning and
achievement gains across all content areas, knowledge and skill
types, and levels of education, suggesting that using formative
assessments as a tool to provide external feedback is effective.
On the other hand, research studying the effectiveness of
assessing students in this way has criticised the method for
being too controlled by the teacher. Boud (2000) stated that if
formative assessment is exclusively in the hands of teachers,
then it is difficult to see how students can become empowered
and develop the self-regulation skills needed to prepare them
for learning outside university and throughout life. Therefore,
it is suggested that self-assessment methods are developed
within the learning environment (e.g. student generated formative
assessment questions). In support of self-assessment methods,
Zimmerman and Schunk (2001) reported that learners who are
more self-regulated are more effective learners: they are more
persistent, resourceful, confident and higher achieving.
In short, it is important that formative assessments are used as
a tool for monitoring the progress of students on an individual
basis; however, teachers should be aware that this method is more
effective when students also self-regulate their learning.
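The self-assessment idea above (e.g. student-generated formative questions) can be sketched as a tiny self-marked quiz that returns immediate feedback; the questions, answers and exact-match marking rule here are purely illustrative:

```python
# Hypothetical student-generated formative quiz (content invented).
quiz = [
    ("Primary fuel system for a very short sprint?", "atp-pcr"),
    ("Where in the cell does glycolysis occur?", "cytoplasm"),
]

def mark(quiz, responses):
    """Mark each response so the learner gets the immediate external
    feedback from which internal feedback can be generated."""
    feedback = []
    for (question, answer), response in zip(quiz, responses):
        correct = response.strip().lower() == answer
        feedback.append((question, correct))
    return feedback

responses = ["ATP-PCr", "mitochondria"]
for question, correct in mark(quiz, responses):
    print(("correct:" if correct else "revise:"), question)
```

Because the students write the questions themselves, the assessment is not exclusively in the teacher’s hands, which is the self-regulation point Boud (2000) makes.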
Response Cards
Response cards are cards, signs, or items that are simultaneously
held up by all students in the class to display their responses to
questions or problems presented by the teacher (Heward et al.
1996; Gardner et al. 1994) and can be presented in different forms
i.e. pre-printed and write-on. Response cards are invaluable
because teachers can easily see the responses of individual
students and can then monitor, on a one-to-one basis, those who
appear to be progressing more slowly. The
response card method also allows the teacher to steer away from
conventional methods, which only enable one student to answer a
proposed question at any one time (Brophy and Evertson, 1976),
leading to wider classroom participation (Narayan et al. 1990;
Gardner et al. 1994).
Furthermore, it is well established that conventional methods often
result in more frequent responses from high-achieving students and
almost no responses from low-achieving students (Maheady et
al., 1991). A study by Gardner and colleagues (1994) reported the
frequency of student response was fourteen times higher with
response cards compared with hand-raising, suggesting that
response cards lead to increased student participation within
the learning environment. In addition to increased participation,
response cards are also an effective tool for the teacher. As
previously mentioned, response cards allow the teacher to gain
immediate feedback, and maintain close, continual contact
with relevant outcome data they need to make well informed,
instructional decisions (Bushell and Baer, 1994). Moreover, Gardner
and colleagues (1994) reported response cards to be a valuable
tool in the teaching of basic scientific concepts and definitions.
In sum, response cards are an appealing tool for use within higher
education. Prevailing evidence would suggest that this method
leads to increased active student participation, which makes it
an attractive idea for the laboratory environment. Moreover, as
science education requires students to retain a multitude of facts
and definitions, response cards provide a quick, easy and effective
way of monitoring progress within groups of students.
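As an illustration of how quickly simultaneous card responses can be checked, the following sketch (class, question and answers all invented) tallies every answer at once and lists the students to follow up with one-to-one:

```python
from collections import Counter

# Hypothetical write-on response-card answers to one question,
# held up simultaneously by the whole class (names are invented).
question = "Which energy system dominates a 30-second sprint?"
answers = {
    "Ana": "glycolytic", "Ben": "glycolytic", "Cal": "aerobic",
    "Dee": "glycolytic", "Eli": "ATP-PCr", "Fay": "glycolytic",
}
correct = "glycolytic"

# Tally every response at once -- the advantage over hand-raising,
# where only one student answers at a time.
tally = Counter(answers.values())
follow_up = sorted(name for name, a in answers.items() if a != correct)

print("Responses:", dict(tally))
print("Follow up one-to-one with:", ", ".join(follow_up))
```

The same tally works with electronic voting systems; the pedagogical point is the whole-class, immediate snapshot rather than the medium.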
Concept Mapping
Concept mapping is a meta-cognitive tool which visually represents
knowledge as a hierarchical framework of concepts and concept
relationships (Iuli and Helleden, 2004). More importantly the
process of concept mapping makes it necessary for learners to
find matching ‘knots’, core terms and connections between core
terms, which accounts for deep understanding of theory (Vohle,
2009). Since their development by Novak and colleagues in the
1970s, concept maps have been widely utilised, most notably
within science education. Concept mapping has been shown to be useful
for (1) providing a summary of a person’s existing knowledge,
(2) revealing gaps in understanding, (3) designing curricula
and instructional materials, (4) assessing student learning, (5)
facilitating communication and arriving at shared understandings
among groups of individuals and (6) understanding the processes
by which scientists construct new knowledge (Iuli and Helleden,
2004; Mintzes et al., 1998, 2000; Novak, 1998; Novak and Gowin, 1984).
In the laboratory environment concept mapping should not be
underestimated, and can be a useful tool either pre- or post-
laboratory classes. It provides students with an opportunity to
reflect on the meaning of experiments which have taken place
and on how the observations made in the laboratory relate to the
concepts they learn within the class or previous lectures (Nakhleh,
1994). Concept mapping also enables students to make links
between various concepts. Additionally, asking students to
develop a concept map allows the teacher to identify gaps in
students’ knowledge and recognise where students are finding it
difficult to make theoretical links.
Conclusion
It is becoming increasingly important to monitor student learning
within the laboratory environment to ensure that students
are establishing the link between theoretical knowledge and
laboratory-based practice. Without continuous monitoring of
learning, students will continue to develop gaps in their
knowledge. Verbal presentations and formative assessments
provide an informal opportunity for students to demonstrate their
understanding of a single topic, at the same time as developing
greater communication skills. Additionally, verbal presentations
allow the opportunity for peer assessment to take place. Concept
mapping and response cards are also beneficial ways of integrating
groups of students, developing teamwork skills as well as allowing
the teacher to monitor individual learners. Finally, implementing
the monitoring of learning within the laboratory environment is
likely to have many beneficial outcomes: teachers will better
understand their students and their learning progression, and
students should become more motivated to learn within this highly
demanding environment.
References
BLACK, V.H. AND SMITH, P.R. (2004), Increasing active student
participation in histology, The Anatomical Record Part B, 278B,
pp. 14-17.
BLACK, P. AND WILIAM, D. (1998), Assessment and classroom
learning, Assessment in Education, 5, pp. 7-74.
BOUD, D. (2000), Sustainable assessment: rethinking assessment
for the learning society, Studies in Continuing Education, 22,
pp. 151-167.
BRINDLEY, C. AND SCOFFIELD, S. (1998), Peer assessment in
undergraduate programmes, Teaching in Higher Education, 3, pp.
79-90.
BROPHY, J.E. AND EVERTSON, C.M. (1976), Learning from
Teaching: A Developmental Approach, Boston: Allyn and Bacon.
BURNETT, W. AND CAVAYE, G. (1980), Peer assessment by fifth
year students of surgery, Assessment in Higher Education, 2, pp.
273-278.
BUSHELL, D. AND BAER, D.M. (1994), Measurably superior
instruction means close, continual contact with the relevant
outcome data: revolutionary!, in Gardner, R., Sainato, D.M.,
Cooper, J.O., Heron, T.E, Heward, W.L., Eshleman, J. and Grossi,
T.A (Eds.), Behaviour Analysis in Education: Focus on Measurably
Superior Instruction, Monterey, CA.
BUTLER, D.L. AND WINNE, P.H. (1995), Feedback and self-
regulated learning, a theoretical synthesis, Review of Educational
Research, 65, pp. 245-281.
DOCHY, F., SEGERS, M. AND SLUIJSMANS, D. (1999), The use
of self-, peer and co-assessment in higher education: a review,
Studies in Higher Education, 24, pp. 331-350.
FALCHIKOV, N. (1995), Peer feedback marking: developing peer
assessment, Assessment and Evaluation in Higher Education, 32,
pp. 175-187.
HARRIS, H. AND BRETAG, T. (2003), Reflective and collaborative
teaching practice: working towards quality student learning
outcomes, Quality in Higher Education, 9, pp. 179-185.
FELDMANN, L.J. AND HOFINGER, R.J. (1997), Active
participation by sophomore students in the design of experiments,
paper presented at the Frontiers in Education Conference,
Pittsburgh, PA.
FRIEDLER, Y. AND TAMIR, P. (Eds.) (1990), The Student Laboratory
and the Science Curriculum, London: Routledge.
GARDNER, R., HEWARD, W.L. AND GROSSI, T.A. (1994),
Effects of response cards on student participation and academic
achievement: a systematic replication with inner-city students
during whole-class science instruction, Journal of Applied
Behaviour Analysis, 27, pp. 63-71.
HART, C., MULHALL, P., BERRY, A., LOUGHRAN, J. AND
GUNSTONE, R. (2000), What is the purpose of this experiment?
Or can students learn something from doing experiments?, Journal
of Research in Science Teaching, 37, pp. 655-675.
HEWARD, W.L., GARDNER, R., CAVANAUGH, R.A., COURSON,
F.H., GROSSI, T.A. AND BARBETTA, P.M. (1996), Everyone
participates in this class: using response cards to increase active
student participation, Teaching Exceptional Children, 28, pp. 4-10.
HODSON, D. (1990), A critical look at practical work in school
science, School Science Review, 71, pp. 33-40.
IULI, R.J. AND HELLEDEN, G. (2004), Using concept maps as a
research tool in science education research, paper presented at
the Proceedings of the First International Conference on Concept
Mapping, Pamplona, Spain.
JOHNSTONE, A.H. AND WHAM, A.J.B. (1982), The demands of
practical work, Education in Chemistry, 19, pp. 71-73.
KEATEN, J.A. AND RICHARDSON, M.E. (1993), A field
investigation of peer assessment as part of the student group
grading process, paper presented at the Western Speech
Communication Association Convention, Albuquerque, NM.
MAGIN, D. AND HELMORE, P. (2001), Peer and teacher
assessments of oral presentation skills: how reliable are they?,
Studies in Higher Education, 26, pp. 287-298.
MAHEADY, L., MALLETTE, B., HARPER, G.F. AND SACA, K.
(1991), Heads together: A peer-mediated option for improving
the academic achievement of heterogeneous learning groups,
Remedial and Special Education, 12, pp. 25-33.
MINTZES, J.J., WANDERSEE, J.H. AND NOVAK, J.D.
(Eds.)(1998), Teaching Science for Understanding: A Human
Constructivist View, San Diego: Academic Press.
MINTZES, J.J., WANDERSEE, J.H. AND NOVAK, J.D. (Eds.)
(2000), Assessing Science Understanding: A Human Constructivist
View, San Diego: Academic Press.
MULDER, T. AND VERDONK, A.H. (1984), A behavioural analysis
of the laboratory learning process, redesigning a teaching unit on
recrystallization, Journal of Chemical Education, 61, pp. 451-453.
NAKHLEH, M.B. AND KRAJCIK, J.S. (1993), A protocol analysis
of the influence of technology on students’ actions, verbal
commentary, and thought processes during the performance of
acid-base titrations, Journal of Research in Science Teaching, 30,
pp. 1149-1168.
NAKHLEH, M.B. (1994), Chemical education research in the
laboratory environment. How can research uncover what students
are learning?, Symposium: What is Research in Chemistry
Education, 71, pp. 201-205.
NARAYAN, J.S., HEWARD, W.L., GARDNER, R., COURSON,
F.H. AND OMNESS, C. (1990), Using response cards to increase
student participation in an elementary classroom, Journal of
Applied Behaviour Analysis, 23, pp. 483-490.
NICOL, D.J. AND MACFARLANE-DICK, D. (2006), Formative
assessment and self-regulated learning: a model and seven
principles of good feedback practice, Studies in Higher Education,
31, pp. 199-218.
NOVAK, J.D. (1998), Learning, Creating, and Using Knowledge,
Concept Maps as Facilitative Tools in Schools and Corporations,
Mahwah, NJ: Lawrence Erlbaum Associates.
NOVAK, J. D. AND GOWIN, D. B. (1984), Learning How to Learn,
New York: Cambridge University Press.
ORSMOND, P., MERRY, S. AND REILING, K. (1996), The
importance of marking criteria in the use of peer assessment,
Assessment and Evaluation in Higher Education, 21, pp. 239-249.
RANDALL, W.C. AND BURKHOLDER, T. (1990), Hands-on
laboratory experience in teaching-learning physiology, Advances
in Physiology Education, 259, pp. S4-S7.
SADLER, D.R. (1998), Formative assessment: revisiting the territory,
Assessment in Education, 5, pp. 77-84.
SOMERVELL, H. (1993), Issues in assessment, enterprise and higher
education: the case for self-, peer and collaborative assessment,
Assessment and Evaluation in Higher Education, 18, pp. 221-233.
VOHLE, F. (2009), Cognitive tools 2.0 in Trainer Education,
International Journal of Sports Science and Coaching, 4, pp. 583-593.
ZIMMERMAN, B.J. AND SCHUNK, D.H. (2001), Self-regulated
Learning and Academic Achievement: Theoretical Perspectives,
London: Routledge.