Available online at www.sciencedirect.com
Currents in Pharmacy Teaching and Learning 5 (2013) 311–320
Research
Pilot of peer assessment within experiential teaching and learning
Craig D. Cox, PharmD, BCPS,a,* Michael J. Peeters, PharmD, MEd, BCPS,b Brad L. Stanford, PharmD, BCOP,c Charles F. Seifert, PharmD, FCCP, BCPSa
a Texas Tech University Health Sciences Center School of Pharmacy, Lubbock, TX
b University of Toledo College of Pharmacy & Pharmaceutical Sciences, Toledo, OH
c Genentech BioOncology, Medical Affairs, Wolfforth, TX
Abstract
Objectives: The objectives of this study were as follows: (1) to pilot test an instrument for peer assessment of experiential
teaching, (2) to compare peer evaluations from faculty with student evaluations of their preceptor (faculty), and (3) to
determine the impact of qualitative, formative peer assessment on faculty’s experiential teaching.
Methods: Faculty at Texas Tech University Health Sciences Center School of Pharmacy implemented a new peer assessment
instrument focused on assessing experiential teaching. For eleven quantitative evaluation questions, inter-rater reliability was
compared between faculty and student assessments. Student evaluations from 2003–2004 and 2010–2011 were compared to
determine if preceptor performance improved.
Results: Eight faculty members participated in this pilot. Comparing peer evaluations and student evaluations of faculty, a
median intraclass correlation of 0.85 suggested redundancy. Five of eight faculty members remained seven years later, and
three of five found this assessment helpful and reported making changes to their teaching. Among these faculty members,
preceptor performance improvements appeared strongest.
Conclusion: A peer assessment of experiential teaching was developed and implemented. Aside from evaluation, formative
peer assessment seemed important in fostering feedback for faculty in their development.
© 2013 Elsevier Inc. All rights reserved.
Keywords: Experiential education; Peer teaching assessment; Pharmacy education; Faculty development
Introduction
Teaching can be assessed through multiple sources
including self-reflection, students, and peers. Each has its
advantages and disadvantages as a teaching evaluation
source. Peer assessment through teaching observation has
become increasingly used in pharmacy colleges/schools
within the United States. In a recent survey, 66% of
institutions stated they use a form of peer assessment of
classroom teaching, which is up more than 50% from ten
years earlier.1 However, just as classroom assessment can
be formative, summative, or a blend of both, so too can peer
teaching assessment. While past research
has focused on summative peer teaching assessments to
evaluate faculty for purposes of merit raises, promotion,
and/or tenure, these goals seem misplaced. Experts in
faculty development agree that formative assessment would
be the preferred method of peer teaching assessment, while
these summative evaluations have substantial concerns.2–8
Important issues include poor inter-rater consistency with
any evaluation instrument based on only a single or few
observations (i.e., entire process reliability), and a true
ability to observe learning resulting from specific teaching
methods in a limited classroom observation time (i.e.,
validity). Formative assessments are not used directly for
evaluation and have received much less focus in the medical
literature. However, these assessments provide constructive
feedback aimed at improving teaching effectiveness and can
help foster teaching development with resulting improvements.
http://www.pharmacyteaching.com
1877-1297/13/$ – see front matter © 2013 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.cptl.2013.02.003
* Corresponding author: Craig D. Cox, PharmD, BCPS, Texas
Tech University Health Sciences Center School of Pharmacy,
Pharmacy Practice, 3601 4th Street, Suite 1B201, Lubbock, TX
79430.
E-mail: [email protected]
Literature demonstrating the advancement of student
learning involving teacher coaching by peers is more
widespread in K-12 education; however, it is also described
in university cohorts. Historically, little change has resulted
from course-based teacher development, but in a random-
ized trial comparing coursework, on-site peer coaching, and
a control group not receiving any professional development,
coaching fostered statistically significant changes by the
teacher in students’ learning environment while neither
coursework nor control did this.9 Coaching appeared to
help foster change. While more scant, university experi-
ences with peer coaching have been reported as well.10,11
Facilitating positive changes in students’ learning environ-
ment (whether K-12 or university teaching) should be
central to teacher development, and coaching seems encour-
aging at facilitating change.
Several pharmacy institutions have developed instru-
ments that appear to be used for both formative and
summative assessments, while documenting strengths and
limitations with each of their processes.12–15 These instru-
ments have been limited to classroom teaching and, to our
knowledge, there is a dearth of instruments or evidence
available with preceptor peer evaluation in pharmacy
experiential education. In the same recent survey described
above, only 18% of US pharmacy colleges/schools used a
form of preceptor peer assessment with their advanced
pharmacy practice experiences (APPEs).1 With more than
three of five pharmacy colleges/schools using classroom
peer observation and less than one of five colleges/schools
using preceptor peer assessment, there appears to be a gap
in evidence for change resulting from preceptor develop-
ment in pharmacy experiential education. For this reason,
we set out to (1) develop an instrument for peer assessment
of experiential teaching, (2) compare quantitative evaluation
information from peer faculty assessments and concurrent
student evaluations, and (3) assess student experiences after
this formative peer assessment to see if it had a positive
effect on faculty’s experiential teaching ability.
Methods
The Texas Tech University Health Sciences Center
Institutional Review Board approved this pilot study.
Students and faculty were aware that their responses would
be used as a quality assurance method within the Experi-
ential Programs Office, but they were not directly informed
of evaluation use for this study specifically.
Instrument development
Content validity of our developed instrument began with
eleven clinician faculty volunteers in the Adult Medicine
division at our Texas Tech University Health Sciences
Center School of Pharmacy (TTUHSC SOP). Several
brainstorming sessions were held during which faculty
identified qualities of an effective teacher and also included
a literature review (Fig. 1).16–19 These qualities were
divided into three main categories—clinical teaching, infor-
mal discussion sessions, and general teaching qualities.
Clinical teaching was defined as the time spent with
students in a patient care location, involving other health
care professionals in either an inpatient or ambulatory
setting. Informal discussion sessions were defined as time
spent between a preceptor and student(s), where they
discuss rotation patients, disease states, and/or drug thera-
pies, though they are physically outside of that clinical
environment. Finally, general teaching qualities were those
elements that were determined to be important to the overall
student–teacher relationship.
Questions were developed to assess each of these core
areas. Some of the questions were used for formative
purposes only and peers were asked to describe the rationale
for each of their answers in this section (Fig. 2). In addition,
using a 5-point Likert-type scale, eleven close-ended ques-
tions were placed on the instrument (Fig. 3). Six of these
questions explicitly focused on evaluation of preceptor
while five were focused on evaluation of learning environ-
ment (i.e., practice site). These questions were duplicates of
required evaluation items from the forms that students
already complete at the conclusion of each pharmacy
practice experience (PPE) to assess their preceptor. PPEs
included both introductory and advanced pharmacy practice
experiences. The instrument underwent several revisions
prior to reaching its final form (Appendix).
The quantitative information from the instrument was
collected to compare peer faculty and student evaluations.
Of note, no students or faculty received a formal orientation
on how to interpret the individual evaluation items on the
assessment tools. Although a few students provided spora-
dic comments in their evaluations, these written comments
were not considered during this study.
Peer assessment process
Once the instrument was developed, Adult Medicine
faculty met again to discuss a preferred implementation
strategy. Following multiple meetings, it was determined
that the peer assessment process would be voluntary.
Fig. 1. Ideal preceptor qualities16–19: role model; facilitator; enthusiastic; organizational skills; expert clinician; consultant; communication skills; creativity/innovation; encourages critical thinking/problem solving.
1. Does the preceptor possess an enthusiasm for teaching? YES or NO. Please describe.
2. Does the preceptor emphasize the importance of the process of problem solving, or are they only critical of the students'/residents' knowledge base? (Do they facilitate critical thinking skills and/or facilitate development and application of knowledge?) Please describe.
3. Is the preceptor well organized? (Has a daily/weekly schedule that students/residents follow, or are they very erratic and/or spontaneous in their activities?) Please describe.
4. Is the preceptor clinically competent? (Not easy to assess, but may be able to comment based on rapport with other team members on rounds/clinics.) Please describe.
5. Is the preceptor seen as a positive role model for the student/resident? (Is there respect for the preceptor by the student/resident; does the student/resident look up to the preceptor, seek out advice, or ask many questions?) Please describe.
6. Does the preceptor exhibit good communication skills? (In other words, are they able to easily convey their thoughts to other health care professionals on rounds/clinics and to their students/residents in patient discussions?) Please describe.
7. Is the preceptor accessible by the student/resident? (Is the student/resident able to contact the preceptor if questions arise?) Please describe.
8. Does the preceptor maintain a good balance between supervising students/residents and allowing them to work/learn on their own? (Does the preceptor do all the talking, or is the student/resident actively involved? Does the preceptor make students/residents look up every answer to the questions they ask, do they simply give the answers to all questions that students/residents ask, or is there a good balance?) Please describe.
9. Does the preceptor use innovative or creative methods in their teaching? (Have games, trivia days, clinical pearls/drugs of the day?) Please describe.
Fig. 2. Experiential peer teaching instrument (Formative Comment section).
*Using a Likert-type scale (1 = Poor, 2 = Fair, 3 = Good, 4 = Excellent, 5 = Outstanding)
1. The site provides the opportunity to see a wide variety of patients and provide patient care
2. The relationship of the pharmacists with other health care professionals at the site promoted integrated healthcare
3. The availability of necessary references and equipment at the site were appropriate for student needs
4. The overall atmosphere at the site enhanced the learning experience of students during the rotation
5. The level of interaction with patients and other health care professionals during the rotation was adequate
6. The preceptor created an environment that was conducive to learning
7. The preceptor shared their knowledge and ability and integrated practice evidence-based medicine with patient-specific factors
8. The level of time, energy, and commitment the preceptor made to the educational experience was beneficial
9. The feedback and help provided by the preceptor to students on this rotation was both constructive and effective
10. The level of supervision provided, during this rotation, by the primary preceptor was beneficial
11. The primary preceptor was a positive role model and mentor during this rotation
Fig. 3. Items for experiential peer teaching instrument (Ratings section) and Student Evaluation form.
Faculty who chose to participate would not be required to
submit the completed instrument to his/her Department
Chair or other administration for review unless they chose
to do so. The intent was that resulting feedback would
facilitate formative teaching development.
Word-of-mouth was used to find Adult Medicine faculty
interested in participating in this instrument’s development
and implementation. Interested faculty contacted their
colleagues and arranged for a visit. This visit was designed
to take place in one day, with the reviewer observing the
chosen faculty member in all aspects of the pharmacy
practice experience including practice activities (i.e., round-
ing or clinic) and informal discussion sessions with stu-
dents. After each peer assessment was complete, the
reviewer provided immediate verbal feedback to the faculty
member. At a later date, the reviewer sent the completed
instrument to the faculty member for consideration. If
questions regarding the instrument arose, a follow-up
meeting was scheduled between colleagues to discuss.
Peer evaluation follow-up
Seven years later (after 2010–2011, allowing sufficient
time to observe longer-term change, such as in learner
perceptions, and to avoid an immediate post-intervention
change arising from a Hawthorne effect among faculty),
participating faculty who still remained at TTUHSC SOP
were asked (a) whether or not they felt this peer review had
been helpful and (b) if they attempted any subsequent
changes to their experiential teaching. None of these faculty
members who remained at the institution reported having an
additional formal peer review of their experiential teaching
performed since that initial pilot assessment. For this
investigation, a positive perspective on the process was
defined as affirmatively responding to both questions.
Statistical analysis
First, student and peer evaluations for academic year
2003–2004 were analyzed. Because reliability is a key concern within any
summative evaluation20 and is not a fixed instrument property
across all evaluation uses,21 we analyzed reliability. For internal
consistency of responses for each student or faculty, we used
Cronbach’s alpha. For inter-rater reliability of evaluation
responses between peer faculty and students, an intraclass
correlation (ICC) was used. Figure 3 shows the eleven items
for both internal consistency and inter-rater analyses.
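These two reliability statistics can be computed directly from a score matrix. The sketch below is illustrative only: the sample data and function names are hypothetical, and the specific ICC formulation shown (a one-way random-effects ICC(1) treating peer and averaged student ratings as two "raters" of the same items) is an assumption, since the paper does not specify which ICC variant was used.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

def icc1(x, y):
    """One-way random-effects ICC(1) for two sets of ratings of the
    same targets (e.g., peer vs. mean student rating per item)."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical ratings: 4 respondents x 3 items on the 5-point scale.
ratings = [[4, 5, 4], [3, 4, 3], [5, 5, 4], [4, 4, 4]]
alpha = cronbach_alpha(ratings)

# Hypothetical per-item peer vs. student ratings for one faculty member.
peer = [4, 5, 3, 4, 5]
student = [4, 4, 3, 4, 5]
print(round(float(alpha), 2), round(float(icc1(peer, student)), 2))  # → 0.86 0.83
```

As in Table 2, an ICC near 1 indicates that the two rating sources order and score the items very similarly.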
Second, changes in student evaluations between years
were used to assess preceptor development. This could only
be accomplished with participating faculty continually
employed at TTUHSC SOP. Comparing with initial
2003–2004 student evaluations, participating faculty
remaining in 2010–2011 were asked whether they felt the
peer assessment had been helpful. For this small group of
faculty, a Many-Facet Rasch Measurement model (Facets,
Chicago, IL) was used to integrate the numerous student
evaluation responses into a single-number preceptor meas-
ure.22,23 Item, student, and faculty model fit were evaluated
in the Rasch model according to accepted ranges.22,23 Of
added help for this small sample, the Rasch model is helpful
in providing initial instrument construct validity and further
reliability evidence.24,25 In this small pilot, the number of
faculty was too small for any other more common quanti-
tative statistical analysis, so resulting Rasch measures could
only be visually compared for trends between the student
evaluations from years 2003–2004 and 2010–2011.
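For readers unfamiliar with Many-Facet Rasch Measurement, a rating-scale (Andrich-type) specification is one plausible form of such a model; the exact facet structure used in the Facets analysis is not stated in this paper, so the following is a sketch, not the study's confirmed model:

```latex
\ln\!\left(\frac{P_{njik}}{P_{nji(k-1)}}\right) = B_j - S_n - D_i - F_k
```

Here $B_j$ is the measure for preceptor $j$ (the single-number summary reported later in Table 3), $S_n$ the severity of student rater $n$, $D_i$ the endorsement difficulty of evaluation item $i$, and $F_k$ the threshold between rating categories $k-1$ and $k$, all expressed in logits.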
Results
Twenty-two percent of pharmacy faculty (n = 8)
participated in this pilot study. All were fully funded
faculty members of the Adult Medicine division at
TTUHSC SOP and each had participated in both the
development and implementation of the peer evaluation
instrument. Experiential teaching accounted for more than
half of each participating faculty member’s teaching
responsibilities. Most faculty were non-tenure track (75%);
all but one were at the assistant professor level, with a
median of four years of experience. At TTUHSC SOP, non-tenure
track Adult Medicine faculty precept an average of
18 students each year, while tenure track faculty precept an
average of six students each year. Faculty members helped
select their evaluator pragmatically, with evaluator and
evaluee both from the same campus. In only one instance did
a faculty member also observe the same peer who had
performed their assessment.
During the PPEs in which faculty peer assessments were
completed during the 2003–2004 academic year, a total of 20
students were enrolled (seven IPPEs and thirteen APPEs).
Faculty members were assessed only once during the year.
The number of students on rotation with each individual
faculty member ranged from one to four students. These
students were either on an inpatient clinical skills introduc-
tory pharmacy practice experience (IPPE), or an adult
medicine, critical care, or oncology advanced pharmacy
practice experience (APPE). At their respective facilities,
internal medicine and critical care unit practice settings
were similar among campuses for all PPEs. Internal consistency
of the student evaluation forms for all these PPEs and
of the peer faculty assessment form is listed in Table 1.
comparing peer faculty ratings to student ratings, the ICC
for inter-rater reliability showed similar ratings with a
median of 0.85 (Table 2). Parts of the peer assessment
form (Fig. 2 and Appendix) enabled constructive feedback
for each observed preceptor and were completed in all cases.
For conciseness, the qualitative written comment feedback
has not been reported herein.
Five of the eight participating faculty members remained
at the institution seven years after the peer assessment pilot
was done. Of these five faculty members, three found the
peer assessment process helpful and reported making
changes to their students’ learning environment based on
the peer feedback. Student evaluations of these faculty from
2003–2004 were compared to the 2010–2011 evaluations to
see if there was any improvement. A total of 70 students
were included in the analysis in 2003–2004 (22 IPPE and
48 APPE) and 58 students in 2010–2011 (21 IPPE and 37
APPE). Rasch measures for these academic years are in
Table 3. Only rough differences can be observed from this
small pilot, but measures increased in two of the three faculty
members who reported the process helpful and who
mentioned making changes to their teaching, while not in
any of those who failed to see a benefit in this process. No
specific reasons were provided by the two individuals as to
why they did not find the process beneficial. Future
implementation of peer assessment process among a larger
group of faculty may provide additional insight into this.
With the Rasch-modeled data, the fit of the data for items
was within acceptable limits. Of note, when looking at the
scale's category probability curves in the Rasch model, the
original instrument's 5-point Likert-type scale did not
function properly. However, collapsing the scale to four
points (by combining categories 1 and 2) allowed it to
function adequately, and the instrument's overall
reliability improved appreciably as well.
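The category-collapsing step described above can be illustrated with a small sketch; the ratings shown are hypothetical, not study data:

```python
import numpy as np

# Hypothetical 5-point Likert-type ratings (1 = Poor ... 5 = Outstanding).
ratings = np.array([
    [2, 3, 4, 5],
    [1, 3, 3, 4],
    [2, 4, 4, 5],
])

# Collapse to a 4-point scale by merging categories 1 and 2:
# 1,2 -> 1; 3 -> 2; 4 -> 3; 5 -> 4.
collapsed = np.clip(ratings - 1, 1, 4)
print(collapsed.tolist())  # → [[1, 2, 3, 4], [1, 2, 2, 3], [1, 3, 3, 4]]
```

Merging sparsely used adjacent categories in this way is a standard remedy when Rasch category probability curves show a category that is never the most probable response.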
Discussion
To our knowledge, this pilot study offers preliminary
results of peer observation in an experiential teaching area
that seems devoid of other published evidence. Several
processes for assessment of teaching effectiveness exist
outside of the experiential setting.2–8,26,27 Student
evaluations remain the most widely accepted format within
those processes; however, along with student evaluation
strengths, there are noted limitations. A notable strength is
that students observe teachers in class or on a practice
experience every day over the length of a semester or
rotation and seem to be in a good position to give feedback
on their perceptions of learning when evaluating their
preceptors. On the other hand, students very likely do not
have a sufficient educational background to adequately
assess many other areas of pedagogy, or have perceptions
that correlate very well with actual student learning.28,29 In
addition, Kidd and Latif performed a study of more than
5000 pharmacy students and found a strong positive
correlation between mean course evaluation scores and the
students' actual or expected grades.30 This suggests a
potential bias: students who receive (or expect) higher grades
may evaluate their instructors more positively.
These should be notable concerns as
institutions continue to build on student evaluations in
developing peer assessment programs toward more holistic
assessments of faculty teaching.
Despite these concerns, some experts argue that peer
assessments add nothing new to information gained from
student evaluations. A 1989 meta-analysis by Feldman of
14 studies comparing peer and student evaluations of
classroom teaching found an overall correlation of 0.55.31
This suggests that when students and faculty peers assess
the same instruction, limited new information is learned.
The correlation of 0.85 in our study suggests even more
redundancy among student and peer evaluations in the
experiential setting. It seems prudent that different questions
should be asked of faculty and students and that faculty
questions should address areas beyond what students could
evaluate. Thus, common student and faculty quantitative
questions that appeared on our peer assessment tool may not
Table 1
Reliability (internal consistency) by Cronbach's alpha
Only student evaluations: 0.82
Only peer faculty instruments: 0.71
All faculty and student evaluations: 0.80
Note: >0.7 is favorable.22
Table 2
Intraclass correlation (ICC) for each assessed faculty member
comparing faculty peer and student evaluations
Faculty 1 0.68
Faculty 2 0.71
Faculty 3 0.84
Faculty 4 0.85
Faculty 5 0.49
Faculty 6 1.00
Faculty 7 0.86
Faculty 8 0.85
Median ICC 0.85
Note: ICC ranges from 0 to 1; values closer to 1 indicate greater consistency between raters.
Table 3
Peer observation helpfulness and trends in Rasch measures from student evaluations

           Found helpful?  2003–2004 measurea  2010–2011 measurea  Difference between yearsa
Faculty 1  Yes             1.45                0.36                Lowered 1.09 units
Faculty 2  Yes             0.68                1.62                Increased 0.94 units
Faculty 3  Yes             0.32                1.65                Increased 1.33 units
Faculty 4  No              1.71                0.01                Lowered 1.70 units
Faculty 5  No              0.40                0.39                Essentially same

Note: The model standard error of measurement was 0.38, so we have 95% confidence that any increase or decrease beyond 0.76 is beyond error.
a In logits (logarithm–odds units).
be necessary; rather the focus should be on the qualitative
(formative) components of the assessment (Appendix).
Please note that inter-rater reliability is only a portion of
overall process reliability and should not alone define
reliability for an entire peer assessment process.
Limited evidence exists in peer assessment of teaching
during pharmacy practice experiences, and it is clear that
appropriate methods for peer assessment are still being
defined. One concern with having faculty members assess
each of their colleagues is the amount of time needed for
this process, potentially affecting a faculty member’s
productivity in other areas. It is also recognized that not all
pharmacy faculty members are routinely trained in
pedagogy, and this may affect their ability to accurately
evaluate their colleagues, especially if teaching experience
is limited. At our institution, we have faculty members
distributed over multiple campuses, making any peer
assessment process more difficult to facilitate. In our study,
we found it helpful to have faculty at each campus,
regardless of practice discipline, assess other same-campus
faculty (even if in a different practice setting); for example,
an oncology practitioner could help an Adult Medicine
colleague. One advantage of having multiple faculty
members across various practice areas participating
in this process could be that one would gain
perspectives from individuals outside of their specific focus
of pharmacy practice (i.e., other faculty in geriatrics,
pediatrics, or ambulatory care). Since the sole purpose of
this instrument was to assess one’s precepting ability, this
method may prove more beneficial than more exclusive
intra-disciplinary assessments.
The Accreditation Council for Pharmacy Education's
(ACPE) Standard 26 on Faculty Development includes
Guideline 26.2, which requires colleges/schools of pharmacy
to use a form of peer assessment for teaching faculty.32 Pharmacy
experiential education currently makes up approximately
one third of PharmD curricula as more recent ACPE
standards placed an increasing emphasis on this area. In
addition, faculty members themselves are motivated to
improve their teaching and often desire more ways to assess
their abilities aside from student evaluations. Based on this
pilot’s ICC median, it seems that faculty peer assessment
adds little to student evaluations. However formative, open-
ended feedback (i.e., coaching) can be helpful in assisting
teachers to make changes that hopefully benefit future
students. While it did not appear helpful for every faculty
member, for those with an optimistic perspective of the
process, it did appear helpful toward future student expe-
riences. Additionally, faculty development experts suggest
that with the limited time of most faculty members, peer
observation may not be helpful for summative assessment.2–8
For process reliabilities, summative assessments would
require multiple faculty peers visiting preceptors on multiple
occasions rather than a single-day visit. Although multiple
visits would allow a peer to assess their colleague more
reliably for evaluative decisions, this does not seem
practical given faculty time constraints for teaching, patient
care, scholarship, and service. Herein we suggest that
formative assessment may be helpful and does not carry the
same reliability implications as summative assessments.
For evaluation (i.e., summative assessment), and in line
with experts in faculty development, we would suggest
evaluating a portfolio including student evaluations, formative
peer faculty assessment (i.e., coaching) with a self-reflection
based on the feedback, and peer evaluation of handouts/
supplements for students. This multi-faceted approach would
also require much less faculty time than the numerous
teaching observations that would be needed for sufficient
reliability. Although not an objective of our study, these
findings appear to support the notion that peer assessment
may be most appropriately used as formative feedback and
not directly for evaluation as others might suggest.26,27
With development and implementation of our assess-
ment, we found several issues that needed consideration.
While very limited evidence exists in experiential teaching,
lessons generalized from peer review in the classroom
setting proved helpful.12–15 For our project's success, we
needed buy-in from the faculty involved. To achieve this,
faculty were involved from day one: they provided insight
into the assessment process, helped develop our assessment
instrument, and helped determine how the information
would be used. If peer review with qualitative feedback is
used for formative assessment purposes, then reliability of
the instrument and overall process is less critical.4 While
reliability is essential for making performance decisions
such as merit, promotion, and tenure, other assessment
properties become more applicable with formative assess-
ment, such as validity, feasibility, and educational impact.33
With formative assessment, faculty desire and participation
are paramount. In fact, the more input a faculty member has
in developing the instrument, the more likely they are to
implement improvements suggested by peer review.13 It
seems that faculty desire may have played a role in the
differences seen in this pilot study between faculty who felt
this peer observation pilot was helpful versus those that did
not. While not every student evaluation change was
positive, motivated faculty who suggested the process was
helpful appeared to make more positive change, while
faculty reporting the process was not helpful and who did
not report making changes had no or negative changes on
future student evaluations.
Limitations of this pilot study should be noted. First,
there were a small number of participants from a single
pharmacy school, and the same individuals contributed to
both the development and implementation of the peer
assessment, which may have affected their interpretation of
the instrument. Second, faculty members who served as
peer reviewers and students were not formally trained prior
to implementation of this piloted process and while instru-
ment internal consistency was acceptable, feedback and
priority of certain content may have differed among
observers. Third, to date the instrument has yet to be
validated in another cohort from any other college/school of
pharmacy. Its generalizability seems uncertain as a result.
Fourth, only patient care rotations were analyzed. Finally,
participation was voluntary, and so it is possible that only the
"best" faculty preceptors may have participated in this pilot
investigation since they are often the ones interested in
enhancing their teaching. Even with these limitations, the
study gives insights into peer assessment with experiential
teaching and learning. Notably, while 18% of schools report
performing peer evaluation of their APPE preceptors,1 to our
knowledge there is a scarcity of literature documenting this
process. The number of institutions that have developed
and implemented an assessment of faculty in IPPEs also
remains unknown. The fewer hours sometimes provided
with IPPEs, as compared to APPEs, may greatly limit
opportunities for peer observation among IPPE preceptors.
Further research should examine this. Following
initial implementation of our process in 2004, we made this
formative assessment opportunity available to all pharmacy
practice divisions on all of our institution’s campuses.
Although modifications to the instrument (Appendix) may
be necessary to eliminate redundancy of quantitative ques-
tions asked of both our students and peers as noted in this
pilot study, we hope this could provide a starting point for
institutions to initiate dialog toward developing a process for
assessing their faculty's experiential teaching. As of today, it
remains an optional activity at our institution; however,
some faculty members have utilized it in their promotion
dossiers to support their clinical teaching.
Conclusion
In this pilot study, we successfully created and imple-
mented peer assessment of experiential teaching among
Adult Medicine faculty members at our institution. Com-
parison of quantitative data from both peer faculty and
student evaluations suggested little quantitative information
was gained through a resource-intensive process of peer
evaluation when compared to what student evaluations were
already providing. However, since the instrument also
included qualitative information for formative purposes,
some faculty found this formative feedback beneficial and it
appeared to stimulate some change. As colleges/schools of
pharmacy work toward peer assessment of faculty, we are not
aware of other published peer assessment evidence that
specifically examines pharmacy experiential education or
demonstrates a peer assessment program's improvement in student
learning outcomes.
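The quantitative comparison behind this conclusion rested on intraclass correlation between paired peer and student ratings. As a rough illustration only (the ratings below are hypothetical, not the study's data, and the study's own analysis code is not reproduced here), a two-way random, absolute-agreement ICC(2,1) for paired raters can be sketched in plain Python:

```python
# Illustrative sketch of ICC(2,1) (two-way random effects, absolute
# agreement, single rater) for paired peer vs. student ratings.
# All rating values here are hypothetical examples.

def icc_2_1(ratings):
    """ratings: list of rows, one row per rated item, one column per rater."""
    n = len(ratings)          # items (e.g., evaluation questions)
    k = len(ratings[0])       # raters (e.g., peer and student)
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    # Two-way ANOVA decomposition of the sums of squares
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                  # mean square, items
    msc = ss_cols / (k - 1)                  # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired 1-5 ratings on eleven evaluation questions
pairs = [[4, 4], [5, 5], [3, 4], [4, 4], [5, 4],
         [4, 4], [3, 3], [5, 5], [4, 5], [4, 4], [5, 5]]
print(round(icc_2_1(pairs), 2))
```

A high ICC between the two rater groups, as in this study, indicates that the peer ratings largely duplicate information already available from student evaluations.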
Acknowledgments
We would like to thank the following faculty members
for their involvement in assisting in the development and/or
implementation of the peer assessment tool described in this
manuscript: Drs. Sara Brouse, Krystal K. Haase, Ronda L.
Akins, Venita L. Bowie, Anthony J. Busti, Sachin Shah,
Ronald Hall, and Brian Burleson.
Appendix. Experiential Teaching Evaluation Form
Preceptor Name: ______ Course #: ______ Date: ______
Evaluator: ______ Site: ______ Students#: P3__ P4__ Resident ___
**All of the information below should be filled out by the individual performing the evaluation. To obtain this
information, the evaluator should ask the evaluee the questions listed below.**
Overall Rating of the Preceptor based on their Experiential Teaching (Circle one)
1-Poor 2-Fair 3-Good 4-Excellent 5-Outstanding
I. Background Information
1. How long have they been at their current practice site? ______ (Months/Years)
2. Does their practice involve rounding with a medical team (consisting of residents, interns, attending physician, students,
etc)? YES or NO. If Yes, what is the average size of their team?______
3. Does their practice involve working in a specific clinic(s)? YES or NO. If Yes, please list clinic(s) they are involved with.
______, ______, ______, ______
4. What is the average number of patients on the team that they round on?
a. 5 or less  b. 6–10  c. 10–20  d. >20  e. N/A (do not round)
5. What is the average number of patients they see in their clinic(s) each week?
a. 5 or less  b. 6–10  c. 10–20  d. >20  e. N/A (no clinics)
6. How many hours do they estimate they spend on clinical responsibilities (rounding, clinics) on a weekly basis?
a. 5 or less  b. 6–10  c. 10–20  d. >20  e. N/A
7. Does the preceptor routinely go over site objectives/expectations prior to each rotation with his/her students/residents?
YES or NO
C.D. Cox et al. / Currents in Pharmacy Teaching and Learning 5 (2013) 311–320 317
II. Clinical Evaluation (interaction with students/residents on rounds and/or in the clinics)
Overall Rating of the Preceptor based on their Clinical Practice (Circle one)
1-Poor 2-Fair 3-Good 4-Excellent 5-Outstanding
Please answer the following questions (using the above scale, if unable to assess put N/A).
____ The site provides the opportunity to see a wide variety of patients and provide patient care
____ The relationship of the pharmacists with other health care professionals at the site promoted integrated healthcare
____ The availability of necessary references and equipment at the site was appropriate for student needs
____ The overall atmosphere at the site enhanced the learning experience of students during this rotation
____ The level of interaction with patients and other health care professionals during the rotation was adequate
____ The organization of the rotation materials provided a clear overview of the experience
1. How many days a week do they round with their students/residents? 1 2 3 4 5
2. Does the preceptor allow for student/resident contribution on rounds? Consider the following:
� Team dynamics, number of patients, post-call day, new team, how long student has been with team, 1st rotation or last
rotation for student, etc.
3. Does the preceptor have good rapport with team? Please describe. (Important to take above items into consideration.)
4. Does the preceptor teach students/residents one on one while rounding with physicians? Please describe. (For example,
Are there separate group discussions between preceptor and students aside from discussion with team? If yes, are these
discussions distracting to the rounding process, or seen as a supplement to the learning process?)
III. Discussion Times (interaction with students/residents in office/conference room etc.)
Overall Rating of the Preceptor based on their Student/Resident Discussions (Circle one)
1-Poor 2-Fair 3-Good 4-Excellent 5-Outstanding
1. How many days a week do they meet with their students/residents apart from time spent on rounds and/or in clinics?
1–2 days  2–3 days  3–4 days  4–5 days  >5 days
2. How much time per day do they spend in these discussions?
0 to 1 hr  1 to 2 hrs  2 to 3 hrs  >3 hrs
3. If they have P3/P4 students or residents on rotation at the same time, do they meet with them? (INDIVIDUALLY or
TOGETHER)
4. Briefly describe discussion sessions with students/residents. Consider the following:
� Does the preceptor lecture to students/residents
� Lead a group discussion
� Are discussions based solely on patients, disease states, or a combination of the two?
� Does the preceptor or the student(s)/resident(s) do most of the talking?
IV. Preceptor Qualities:
Overall Rating of the Preceptor based on important preceptor qualities (Circle one)
1-Poor 2-Fair 3-Good 4-Excellent 5-Outstanding
Please answer the following questions (use the scale above, if unable to assess put N/A):
____ The preceptor created an environment that was conducive to learning
____ The preceptor shared their knowledge and ability and integrated evidence-based medicine practice with patient-specific factors
____ The level of time, energy and commitment the preceptor made to the educational experience was beneficial
____ The feedback and help provided by the preceptor to students on this rotation was both constructive and effective
____ The level of supervision provided, during this rotation, by the primary preceptor was beneficial
____ The primary preceptor was a positive role model and mentor during this rotation
1. Does the preceptor possess an enthusiasm for teaching? YES or NO. Please describe.
2. Does the preceptor emphasize the importance of the process of problem solving, or are they only critical of the students'/
residents' knowledge base? (Do they facilitate critical thinking skills and/or facilitate development and application of
knowledge?) Please describe.
3. Is the preceptor well organized? (Has daily/weekly schedule that students/residents follow, or are they very erratic and/or
spontaneous in their activities?). Please describe.
4. Is the preceptor clinically competent? (Not easy to assess, but may be able to comment based on rapport with other team
members on rounds/clinics). Please describe.
5. Is the preceptor seen as a positive role model for the student/resident? (Is there respect for the preceptor by the student/
resident, does the student/resident look up to the preceptor, seek out advice or ask many questions?) Please describe.
6. Does the preceptor exhibit good communication skills? (In other words, are they able to easily convey their thoughts to
other health care professionals on rounds/clinics and to their students/residents in patient discussions?) Please describe.
7. Is the preceptor accessible by the student/resident? (Is the student/resident able to contact the preceptor if questions arise?)
Please describe.
8. Does the preceptor maintain a good balance between supervising students/residents and also allowing them to work/learn
on their own? (Does preceptor do all the talking or is the student/resident actively involved, does the preceptor make
students/residents look up every answer to the questions they ask or do they simply give the answers to all questions that
students/residents ask, or is there a good balance?). Please describe.
9. Does the preceptor use innovative or creative methods in their teaching? (Have games, trivia days, clinical pearls/drugs of
the day?) Please describe.
References
1. Barnett CW, Matthews HW. Teaching evaluation practices in
colleges and schools of pharmacy. Am J Pharm Educ.
2009;73(6): Article 103.
2. Cohen PA, McKeachie WJ. The role of colleagues in the
evaluation of college teaching. Improv Coll Univ Teach.
1980;28(4):147–154.
3. Centra JA. Formative and summative evaluation: parody or
paradox? New Dir Teach Learn. 1987;31:47–55.
4. Weimer M. Colleagues as collaborators. In: Weimer M, ed.
Inspired College Teaching. San Francisco, CA: Jossey-Bass;
2010:105–110.
5. Weimer MG, Kerns MM, Parrett JL. Instructional observation:
caveats, concerns, and ways to compensate. Stud Higher Educ.
1988;13(3):285–293.
6. Berk RA. Teaching portfolios used for high-stakes decisions:
you have technical issues! In: Berk RA, ed. How to Find and
Support Tomorrow's Teachers. Amherst, MA: National Evaluation
Systems; 2002:45–56.
7. Aleamoni LM. Some practical approaches for faculty and
administrators. New Dir Teach Learn. 1997;31:75–78.
8. Berk RA. Survey of 12 strategies to measure teaching
effectiveness. Int J Teach Learn Higher Educ. 2005;17:
48–62.
9. Neuman SB, Wright TS. Promoting language and literacy
development for early childhood educators: a mixed-methods
study of coursework and coaching. Elementary School J.
2010;111(1):63–86.
10. Skinner ME, Welch FC. Peer coaching for better teaching. Coll
Teach. 1996;44:153–156.
11. Scott V, Miner C. Peer coaching: implications for teaching and
program improvement. Transformative Dialogues: Teach
Learn J. 2008;1(3):1–2.
12. Davis TS. Peer observation: a faculty initiative. Curr Pharm
Teach Learn. 2011;3(2):106–115.
13. Schultz KK, Latif D. The planning and implementation of a
Faculty Peer Review Teaching Project. Am J Pharm Educ.
2006;70(2): Article 32.
14. Trujillo JM, DiVall MV, Barr J, et al. Development of a Peer
Teaching Assessment Program and a Peer Observation
and Evaluation Tool. Am J Pharm Educ. 2008;72(6):
Article 147.
15. Wellein MG, Ragucci KR, Lapointe M. A peer review process for
Classroom Teaching. Am J Pharm Educ. 2009;73(5): Article 79.
16. Kleffner JH. Becoming an effective preceptor. TTUHSC
School of Pharmacy [CE program]. June 2010.
17. Goertzen J, Stewart M, Weston W. Effective teaching behav-
iours of rural family medicine preceptors. CMAJ. 1995;153:
161–168.
18. Parsell G, Bligh J. Recent perspectives on clinical teaching.
Med Educ. 2001;35(4):409–414.
19. Bardella IJ, Janosky J, Elnicki DM, et al. Observed versus
reported precepting skills: teaching behaviours in a community
ambulatory clerkship. Med Educ. 2005;39(10):1036–1044.
20. Downing SM. Reliability: on the reproducibility of assessment
data. Med Educ. 2004;38(9):1006–1012.
21. Zibrowski EM, Myers K, Norman G, Goldszmidt MA. Relying
on others’ reliability: challenges in clinical teaching assess-
ment. Teach Learn Med. 2011;23(1):21–27.
22. Bond TG, Fox CM. Applying the Rasch Model, 2nd ed.
Mahwah, NJ: Lawrence Erlbaum Associates, Publishers; 2007.
23. Linacre JM. Many-Facet Rasch Measurement. Chicago, IL:
MESA Press; 1994.
24. Bond TG. Validity and assessment: a Rasch measurement
perspective. Metodologia Cien Comportamiento [Methodol
Behav Sci]. 2003;5(2):179–194.
25. Smith EV. Evidence for the reliability and validity of measure-
ment interpretation: a Rasch measurement perspective. J Appl
Meas. 2001;2(3):281–311.
26. Arreola RA. Developing a Comprehensive Faculty Evaluation
System. 3rd ed. Bolton, MA: Anker; 2007.
27. Centra JA. The use of the teaching portfolio and student
evaluations for summative evaluation. J Higher Educ.
1994;65(5):555–570.
28. DiPiro JT. Student learning; perception versus reality. Am
J Pharm Educ. 2010;74(4): Article 63.
29. Naughton CA, Friesner DL. Comparison of pharmacy students'
perceived and actual knowledge using the Pharmacy Curricular
Outcomes Assessment. Am J Pharm Educ. 2012;76(4): Article 63.
30. Kidd RS, Latif DA. Student evaluations: are they valid measures
of course effectiveness? Am J Pharm Educ. 2004;68(3): Article 61.
31. Feldman KA. The association between student ratings of
specific instructional dimensions and student achievement:
refining and extending the synthesis of data from multisection
validity studies. Res Higher Educ J. 1989;30(6):583–645.
32. Accreditation Council for Pharmacy Education. Accreditation
standards and guidelines for the professional program in
pharmacy leading to the Doctor of Pharmacy degree.
https://www.acpe-accredit.org/deans/standards.asp.
Accessed February 9, 2013.
33. Van der Vleuten CPM. The assessment of professional com-
petence: developments, research and practical implications.
Adv Health Sci Educ Theory Pract. 1996;1:41–67.