Evaluating Computer Supported Collaborative Learning (CSCL): A Graduate-Level Educational Technology Web-Delivered Course Offering
By
Joanne G. Stuckey
A research report submitted to the Department of Curriculum and Instruction, Instructional Technology Program, in partial fulfillment of the
requirements for the Ph.D., Mid-Program Review
College of Education
University of Texas, Austin
Table of Contents
LIST OF FIGURES

INTRODUCTION
STATEMENT OF THE PROBLEM
WEB-DELIVERED COURSE BACKGROUND INFORMATION
  Course Re-visioning
  Course Development Team: Design and Delivery of the Course Materials & Assessment Database
  E-Sherpas: Course Support Team
  Constructivist Teaching and Learning
  Cooperative and Collaborative Learning
WEB-DELIVERED CSCL COURSE
  Course Participants
  Course Newsletters
  Course Webcasts
  Course Modules
    Module One: Syllabus
    Module Two: Building a Collaborative Team
    Module Three: Strategies for Collaborative Writing
    Module Four: Synchronous Online Collaborative Learning
    Module Five: Collaborative Web-based Inquiry Skills
    Module Six: Collaborative Planning and Implementation of Larger Knowledge-Building Projects
    Module Seven: Designing an Online Collaborative Learning Project
RESEARCH FOCUS: THEORETICAL ORIENTATION

REVIEW OF RELATED LITERATURE
EVALUATION OF HIGHER EDUCATION DISTANCE LEARNING ENVIRONMENTS
  Defining Distance Education
WEB-DELIVERED LEARNING ENVIRONMENTS
  Web-Delivered/Online Education
EDUCATIONAL PROGRAM EVALUATION
  Effective Educational Environments
  Educational Program Evaluation in the US
EVALUATION RESEARCH
  Assessment and Evaluation
  Summative Evaluation and Formative Evaluation
PROGRAM EVALUATION STANDARDS
  Accuracy Standards
  Feasibility Standards
  Propriety Standards
  Utility Standards
EVALUATION: DEFINITIONS AND APPROACHES
  Evaluation Definitions
  Evaluation Approaches

EVALUATION DESIGN AND METHODOLOGY
EVALUATION DESIGN: APPROACH AND DEFINITION
EVALUATION METHODOLOGY
  The Human Instrument
  Direct Observation
  Participant Observation
  Records and Documents
EVALUATION PROCEDURES
  Stakeholder Identification
  Discovering Concerns
EVALUATION STANDARDS AND CRITERIA
  Standard 1: Instructional Design & Learning Materials
  Standard 2: Learning Objectives & Outcomes
  Standard 3: Course Tools, Technology, and Learner Support
  Standard 4: Pedagogical Strategies
ESTABLISHING TRUST IN THE RESEARCH
  Truth Value
  Applicability
  Neutrality

ANALYSIS OF DATA
ANALYSIS: INSTRUCTIONAL DESIGN AND LEARNING MATERIALS
  Standard 1: Instructional Design & Learning Materials
  Summary: Instructional Design and Learning Materials
ANALYSIS: LEARNING OBJECTIVES & OUTCOMES
  Standard 2: Learning Objectives & Outcomes
  Summary: Learning Objectives and Outcomes
ANALYSIS: COURSE TOOLS, TECHNOLOGY, AND LEARNER SUPPORT
  Standard 3: Course Tools, Technology, and Learner Support
  Summary: Course Tools, Technology, and Learner Support
ANALYSIS: PEDAGOGICAL STRATEGIES
  Standard 4: Pedagogical Strategies
  Summary: Pedagogical Strategies

SUMMARY AND IMPLICATIONS
SUMMARY
  Course Strengths
  Course Weaknesses
  Unresolved Issues and Concerns
IMPLICATIONS
  Future Research
  Limitations of the Study

APPENDICES
APPENDIX A: PSEUDONYMS
APPENDIX B: PERSON AS INSTRUMENT
  Overview and Personal Philosophy
  Educational Values and Beliefs
  Relevant Experience
  Formative Evaluation Research Project
APPENDIX C: CSCL COURSE MODEL
  Course Overview
APPENDIX D: EVALUATION MATRICES
  CSCL Evaluation Matrix: Student Perceptions & Group Culture
  CSCL Evaluation Matrix: Student Perceptions & Group Culture continued
  CSCL Evaluation Matrix: Pedagogical Elements
  CSCL Evaluation Matrix: Leadership
  CSCL Evaluation Matrix: Hardware & Software
APPENDIX E: SHERPA LOG
APPENDIX F: SAMPLE FIELD NOTES
  Meeting 2/4/2000
APPENDIX G: SAMPLE REFLEXIVE JOURNAL
APPENDIX H: POST-COURSE SURVEY
APPENDIX I: SAMPLE NEWSLETTER

References
List of Figures
Figure 1: CSCL Technology, Inc. Logo
Figure 3: How do you evaluate each of the course modules?
Figure 4: Average weekly course-related hours
Figure 5: CSCL course general evaluation
Figure 6: Do you agree with the following statements?
Introduction
STATEMENT OF THE PROBLEM
The proliferation of Web-delivered courses and programs is paired with
skepticism, fears, and concerns about the effectiveness and quality of distance
learning courses and programs (Cyrs, 1997; Khan, 1997, 2001; Moore & Thompson, 1990; Quigley & Clark, 1990; Moore & Kearsley, 1996; Schank, 1999). Social
pressures are demanding accountability for student learning (Hill, 1997; Schank,
1999; Noone & Swenson, 2001; Popham, 2001) and many educators fear that
without face-to-face classroom interactions, students taking courses via distance
education are not receiving instruction that is equal in quality to what they would
receive in traditional classrooms.
Currently, many instructors of face-to-face courses in higher education are
enhancing their courses with the use of Web-based materials, or are delivering
their courses online. Being online means being in direct communication with a
remote computer or computer system, which enables communication and the
transfer and exchange of information (Chute, Thompson & Hancock, 1999).
Barron (1998) defines a Web-enhanced course as a campus-based course that
makes use of the World Wide Web (WWW or Web) and a Web-delivered course
as one where all course activities take place on the Web.
Instructors in higher education environments often do not have fiscal or
personnel resources available for the development and field-testing of activities
and materials of courses or programs before they implement them with students.
The lack of resources, paired with institutional demands to develop both
traditional face-to-face and distance learning higher education courses quickly,
often results in the course and program participants inadvertently serving as the
preliminary or beta-test “subjects” for these courses and programs. Compounding
these problems is the fact that many institutions of higher education are utilizing
only end-of-course instructor surveys and/or final exams for evaluating their
programs.
Most often, institutions of higher education do not release the results of
end-of-course instructor surveys until well after the instructors have scored the
final exams and finalized course participants’ grades. While the end-of-course
surveys and tests are inexpensive and easy to administer, they fail to provide
timely information that instructors can utilize to improve courses or programs
for students while they are participating in the course or program (Marzano, 2000;
Dick, 1992). Nonetheless, institutions of higher education, almost exclusively,
have utilized outcomes-based data sources such as end-of-course surveys and
exams for the evaluation of instructional programs since 1960 (Pinar, Reynolds,
Slattery & Taubman, 1995).
Many contemporary educators are looking for new and better ways to
evaluate and improve distance-learning environments (Belanger & Jordan, 2000;
Khan, 1997, 2001; Simonson, 1997). Currently, few educational researchers have
utilized non-obtrusive data sources and embedded constructivist forms of student
assessment, distinct and different from objectivist end-of-course tests, for
improving and evaluating constructivist distance learning environments
(Belanger & Jordan, 2000; Khan, 1997, 2001; Phipps & Merisotis, 1999; Simonson,
1997).
It is my intent to develop and field-test a non-obtrusive data-driven
constructivist evaluation approach for the formative evaluation of a graduate-
level, Web-delivered course offering. The key research question guiding this inquiry is: What are the strengths and weaknesses of the Web-delivered course?
This research will detail the selection of an appropriate theoretical orientation and
contemporary evaluation approach, which will serve as the basis for conducting
the formative evaluation of a unique and new constructivist Web-delivered
distance-learning environment. In this report, I will discuss the methodology,
data analysis, findings, and recommendations that emerge from the utilization
and modification of the selected evaluation approach. It is the responsibility of
the readers of this report to determine the transferability, if any, of this
information to their own learning environments.
WEB-DELIVERED COURSE BACKGROUND INFORMATION
Course Re-visioning
The transformation, or re-visioning, of a computer-supported collaborative learning course from a Web-enhanced, face-to-face course, previously offered three times at a large research university in the United States, to a Web-delivered course, along with my sustained, intense, and long-term observation of the course environment, began in January of 2000, when I obtained the instructor’s permission to join his Computer Supported Collaborative Learning (CSCL) course re-visioning team as a participant-observer.
CSCL is an emergent learning paradigm that focuses on the utilization of information and communication technologies (ICT) for the mediation and support of collaborative learning activities such as problem-based learning, group decision making, or peer tutoring (Koschmann, 1996). Michael Patton (1990) defined a paradigm as "a world view, a general perspective, a way of breaking down the complexity of the real world" (p. 37).
Collaborative Course Re-Visioning Team
The course instructor suggested that my first step as participant-observer
would be to arrange to meet with the media coordinator for the online version of
the CSCL course to get an overview of plans for the course. During this meeting,
I learned that the approach for designing the online version of the course would be
constructivist, incorporating problem-based learning and strategies for adult
learners. The course instructor recognized that moving a face-to-face course
online involves more than creating Web pages and uploading face-to-face
materials.
The media coordinator stated that the primary goal of the revised Web-
delivered course would be to provide course participants with comprehensive and
intensive experiences in online collaborative learning while helping them to
understand, create, and reflect upon computer-supported collaborative learning
environments. She also explained that the first decision for the newly formed
instructional design team would be to come up with a thematic metaphor to make
the course experiences realistic for the participants. During this meeting, the
coordinator suggested that I contact the instructor’s secretary and schedule regular
weekly meetings for the instructional design team and myself, which I did.
The instructional design team, which the instructor formed, consisted of
several graduate students including the course media coordinator, three students
from his Advanced Instructional Systems Design course, and me. The course
instructional design team met face-to-face twelve times during the five-month
period from January through May of 2000, audiotaping each meeting.
I attended and participated in ten of these design team meetings.
During the re-visioning process, the instructor and his instructional design
team were cognizant that some of the materials and methods utilized in the Web-
enhanced face-to-face course would be applicable to the Web-delivered version.
However, they also acknowledged that a complete transformation of the course
was requisite for creating a meaningful and productive online knowledge-building
community. As participant-observer I was closely observing and participating in
the development of the program in order, “to make a comprehensive statement of
what the program is observed to be" (Stake, 1973, p. 4).
Course Development Team: Design and Delivery of the Course Materials & Assessment Database
The instructional design and the concurrent development of course
instructional materials and Web site occurred in a recursive cycle. The CSCL
course development team consisted of five graduate students. Two of these
graduate students were also members of the course instructional design team. The
course development team began by brainstorming ideas that would enhance the
CSCL course by enabling interactive and real-time assessment and evaluation.
The capabilities of the existing courseware and groupware did not allow for
uncomplicated modification of the systems. The challenge for the development
team was to devise a method that would enable seamless integration of the
assessment database into the context of existing courseware and groupware
systems.
The team developed criteria for selecting a database tool that could track
the online assessment activities. The desirable attributes included HTML support,
relational capacity, simple administration, security, portability, and cost
effectiveness. After evaluating several alternatives, the development team chose
FileMaker Pro, a relational database system. Another capability of this software is a markup language that extends the database’s functionality by interpreting a set of proprietary, database-specific tags within Web pages. As the development team began to share their intentions of using FileMaker Pro at the
university, they discovered a network of colleagues using the software for
tracking intra-departmental data. Having a network of people using the tool on
campus proved invaluable later in their assessment tool development because they
were able to get timely answers to questions about setup, administration, and web
publishing of data.
The instructor gave one of his graduate students, employed as a
professional software developer, the task of extending the capabilities of the
existing courseware and groupware in order to collect assessment data. The
problem facing the database developer was how to devise a method that would
vector course participants from the context of the systems that supported the
content to the assessment Web pages, which existed on a separate system, and
then back to the content. The database designer utilized assessment rubrics
created by the instructional design team and, based on these rubrics, he
created the data entry pages. The development team worked collaboratively with
the instructional design team to impart a common look and feel for the course
content and the assessment database.
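The report does not reproduce the developer’s implementation of this round trip. As a minimal sketch, assuming a Python environment with the Flask micro-framework, the pattern can be illustrated as follows: the courseware links to a data entry page on the separate assessment system and passes along a return URL, and the assessment system redirects the participant back to the course content once the form is submitted. All names here (assessment_app, the /assessment/<rubric_id> route, the return_url parameter, and the save_rating helper) are hypothetical illustrations, not the development team’s actual design.

    # Hypothetical sketch of the "vectoring" described above: a courseware page
    # links out to this separate assessment system with a return URL, and the
    # participant is redirected back to the content after submitting the form.
    from flask import Flask, redirect, render_template_string, request

    assessment_app = Flask(__name__)

    FORM = """
    <form method="post">
      <label>Rubric rating: <input name="rating"></label>
      <input type="hidden" name="return_url" value="{{ return_url }}">
      <button type="submit">Submit</button>
    </form>
    """

    def save_rating(rubric_id, rating):
        # Stand-in for the write to the assessment database.
        print(f"rubric {rubric_id}: {rating}")

    @assessment_app.route("/assessment/<rubric_id>", methods=["GET", "POST"])
    def assessment(rubric_id):
        if request.method == "POST":
            save_rating(rubric_id, request.form["rating"])
            # Vector the participant back to the courseware page they came from.
            return redirect(request.form["return_url"])
        # GET: render the data entry page, remembering where to return.
        return render_template_string(FORM, return_url=request.args.get("return_url", "/"))

A courseware page would then link to, for example, /assessment/module2?return_url=<address of the current content page>, which keeps the assessment pages on their own system while preserving the participant’s place in the course content.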
E-Sherpas: Course Support Team
The instructor and his course re-visioning team carefully considered
strategies to meet the unique support needs of online learning teams. A key
design decision was the inclusion of alternative forms of encouragement and
interaction, via a course support team, to compensate for support that the
instructor previously afforded in face-to-face classroom interactions.
The course instructor coined the term e-Sherpa for the Web-delivered
course support-team role. Sherpas are a people of Tibetan descent, skilled in mountain climbing, who live on the high slopes of the Himalayas. The function of a Sherpa on a mountain-climbing expedition is to familiarize the members of the expedition with the local terrain while helping them to carry their loads. Likewise,
in the Web-delivered course, the e-Sherpas served to help the course participants
navigate the online course terrain. The role of the e-Sherpa in the Web-delivered
course differed significantly from the traditional and more authoritative roles of
an online conference moderator or facilitator (Salmon, 2000). The course e-
Sherpas, much like Tibetan Sherpas, were not the leaders of the teams, nor did
they have authority over the teams. Rather, their role in the web-delivered course
was simply to help the online teams carry their loads (Williams, Lee & Adams,
2000, p. 393) and be successful in accomplishing their learning goals.
Constructivist Teaching and Learning
Traditional or objectivist curriculum-driven instructional systems hold that there is one real world and that instruction and assessments can be designed for an average or generalized learner (Bednar, Cunningham, Duffy, & Perry, 1992). These two objectivist assumptions have led to a third: that an instructional designer can select objectives and strategies that will bring about predictable changes in the skills and knowledge of these average students.
In traditional objectivist systems, the focus is on information processing and
precise storage and regurgitation of externally defined information. Assessment
of student learning in the traditional systems is a matter of observing and
measuring or testing student performance against the predetermined instructional
objectives (Dick & Carey, 1985; Gentry, 1994).
These objectivist curriculum-driven instructional systems have served well
in teaching basic knowledge and skills in well-structured knowledge domains
(Winn, 1992). However, higher-order learning skills and the mastery of advanced knowledge in ill-structured domains encompass a great deal of what instructors teach in academic environments (Spiro et al., 1992), including Web-delivered courses. According to Spiro et al. (1992), examples of ill-structured knowledge domains include medicine, history, and literary interpretation, while mathematics is an example of a well-structured domain. The major difference between a well-structured domain and an ill-structured domain is that the ill-structured knowledge domain involves multiple and complex conceptual structures, schemas, perspectives, and across-case irregularities (Spiro et al., 1992), while the knowledge structures, schemas, and perspectives in well-structured domains tend to be singular.
In well-structured domains, there is often across-case regularity. For
example, educators consider the domain of traditionally taught mathematics to be
well structured. Most traditional math teachers would agree and teach their
students that two plus two equals four. However, in the ill-structured domain,
history, most teachers would not teach their students a single reason for the
September 11, 2001, attack on America and the destruction of the World Trade Center towers in New York. Rather, they might present and discuss multiple
perspectives related to the roots of terrorism in the modern world. Well-
structured knowledge domains, such as math, can have aspects of ill-structuredness in advanced levels of study (Spiro et al., 1992) or when instructors
reject traditional teaching methods in favor of contemporary constructivist
pedagogy.
In contrast, constructivists view reality as a constantly problematic and changing knowledge-building process, one that relies upon a vastly different set of assumptions than do traditional objectivist systems (Gagné, 1985; Gagné & Glaser, 1987). Constructivists, unlike objectivists, believe that no one world is more real than another world (Jonassen, 1992). The concept of a global or average learner is foreign to constructivists (Winn, 1992). Constructivist learning environments are dynamic, encouraging the exploration of multiple pathways and perceiving learners as constructing their own knowledge through meaningful
interactions with the world. Constructivists believe that learners interpret what
they learn individually and have different individual learning outcomes (Jonassen,
1992). Constructivist environments often focus on developing reflexive
knowledge or metacognitive knowledge, an awareness and knowledge about
one’s own thinking and learning processes (Pressley & McCormick, 1995) and
“skills of reflexivity, not remembering” (Bednar et al., 1992, p. 24).
To foster reflexivity, constructivists design instruction to reside in
authentic tasks, “those that have real-world relevance and utility, that integrate
those tasks across the curriculum, that provide appropriate levels of complexity,
and that allow students to select appropriate levels of difficulty or involvement”
(Jonassen, 1996, p. 271). Many educators who are using collaborative or
constructivist learning strategies concur that the assessment of student learning in
constructivist environments is a complex process that involves assessing knowledge construction in real-world contexts that encourage multiple
perspectives and viewpoints while engaging students in authentic learning tasks
(Jonassen, 1992; Cunningham, 1992; Spiro, Feltovich, Jacobson & Coulson,
1992).
This complexity and diversity in constructivist learning environments calls for new methods and a generative, ongoing process for improving and modifying the many Web-delivered courses and programs, such as the Web-delivered CSCL course, which is the subject of this research. Traditional forms of student assessment, such as multiple-choice tests with their corresponding correct or incorrect answers, are not suited to constructivist learning environments like the CSCL course, which are focused on higher-order thinking skills, multiple outcomes, and advanced knowledge-building in ill-structured domains.
The course instructor and his instructional design team utilized a
collaborative team-based approach to re-vision a face-to-face course for Web-
delivery. The philosophical basis that guided the creation of this Web-delivered learning environment was constructivist. The intent of the course
instructor and his instructional design team was to have course participants learn
about cooperative and collaborative theories and trends while they actively
participate in, design, conduct, and evaluate online computer-supported
cooperative and collaborative learning strategies and activities, which I will
discuss next.
Cooperative and Collaborative Learning
These two strategies, cooperative and collaborative learning, are distinctly
different from the traditional curriculum-based approaches, which focus on
individual learning and competition. Varied and diverse conceptualizations of the
terms cooperative and collaborative are evident in modern discourse (Dillenbourg
& Schneider, 1995; Panitz, 1996; Harris, 2002). The instructional design team
devoted one section of the CSCL course materials and several assignments to
increasing course participants' understanding of the diverse definitions and
implications of both cooperative and collaborative learning approaches. The
online course materials explicated the following viewpoint, which I will utilize in
this report to distinguish these two approaches:
One argument is that they differ in the degree to which control resides with the instructor or the group. In this view a cooperative learning
environment is one in which the teacher largely controls the goals, tasks, processes, and rewards of the group. A collaborative learning environment would be one in which the group exerts far greater autonomy in the choice of its goals, tasks, roles and processes.
Harris (2002) utilized two scenarios and an analogy of two children playing in a sandbox to distinguish cooperation “between” (p. 58) students and collaboration “among” (p. 58) students. The first scenario represents cooperation and paints an image of two children playing peacefully side by side, each building their own sandcastle. The second scenario involves the “same two children in the same sandbox working together on a single sandcastle” (Harris, 2002, p. 58). Harris questions whether cooperation or collaboration is
more challenging, and concludes, “…the more we have to negotiate with others
(students and/or teachers) what we are and will be doing during a learning
activity, the more challenging the activity is to conduct” (2002, p. 58).
To facilitate collaboration among course participants, the instructional
design team utilized the metaphor of a new high-tech company specializing in designing online collaborative learning and work environments, to provide an authentic context for the course learning activities. Course participants began the
course in the given role of a new staff member of this high tech company and
were expected to work together, using and generating information for solving real
world problems, in line with the constructivist emphasis on situating cognitive
learning activities in realistic contexts (Brown, Collins, & Duguid, 1989; Duffy &
Jonassen, 1992).
WEB-DELIVERED CSCL COURSE
The instructor and his course development team created a graphical user interface to provide a realistic constructivist environment for course communications. They utilized the various folders depicted in this graphical interface to organize communications and provide specific locations for working on, sharing, and submitting course assignments. While exploring, utilizing, and developing their knowledge and skills related to computer-supported collaborative learning, course participants were challenged to exhibit the corporate ideals detailed in the logo (Figure 1), which is displayed in the ground-floor commons area of the online “virtual” architectural design of the course’s CSCL Technology, Inc., building: Communication, Community, Teamwork, Curiosity, and Learning.

Figure 1: CSCL Technology, Inc. Logo
Course Participants
Thirty-six participants enrolled in the Web-delivered CSCL course; four dropped the course at the beginning of the semester due to personal or scheduling conflicts. All of the
remaining thirty-two course participants completed the course and earned “A”
level graduate credit for the Web-delivered course. Five of thirty-two course
participants were male (16%) and twenty-seven course participants (84%) were
female.
Twelve of the course participants enrolled through the instructor’s home
campus and twenty enrolled through a distance-learning program at a separate
component of the same institution of higher education, resulting in one course
with thirty-two participants enrolled through two different higher education
institutions in two different locations. I will refer to the respective course
participants enrolled through the instructor’s home campus as on-campus course
participants and those enrolled through the separate component institution, as off-
campus course participants in this research. I am utilizing pseudonyms for the
course participants, suites, instructor, and course support team in this research
report (see Appendix A for a complete listing of course participants and their
pseudonyms).
Course Newsletters
The instructor published a course newsletter approximately every two
weeks during the semester. The course newsletters, Web pages with embedded text, images, audio, and movies, provided items such as the instructor’s welcome message, team assignments, and the latest course news, updates, instructions, and
modifications. The CSCL course newsletters were archived and available
throughout the semester for participants when they logged into their secure
commercial courseware environment.
Course Webcasts
An innovative design feature of the constructivist Web-delivered CSCL
course is the enhancement of the course with face-to-face Webcasts: one-way audio-video broadcasts conducted live at the instructor’s home campus and simultaneously broadcast online. The broadcasts were archived on videotape for subsequent re-broadcast for course participants who were unable to be online or present during the scheduled Webcast times.
The on-campus course participants were physically present, when their
circumstances permitted, during the monthly face-to-face course Webcasts, while
the off-campus course participants and course participants from both groups who
could not be physically present attended these face-to-face meetings via their
Internet connected computers. Course participants who were unable to be
physically present during course face-to-face meetings also had the option of
telephoning the instructor during the meetings and/or participating in text-based
chats, which the course support team projected on a large viewing screen during
these meetings.
The course instructor and course materials challenged course participants,
as new staff members in a Web-based “virtual” company, CSCL Technology,
Inc., to work together as they learned about designing and evaluating computer
supported collaborative learning projects. The instructor and his instructional
design team organized the CSCL course into seven different modules or units for
Web-delivery. Each module progressively builds on the preceding modules and
includes individual and group assignments.
Course Modules
Module One: Syllabus
Before the first day of class, course participants received course materials
and instructions on CD-ROM. The materials for module one include a “welcome” video message from the instructor and the course syllabus, which details the course objectives, competencies, schedule, conferencing skills, and technical requirements, and provides selected written recommendations and readings related to teamwork and collaboration. The course materials detailed the following course objectives and competencies:
Specific objectives of the course are for students to:
• Understand the theoretical foundations of collaborative learning and CSCL
• Experience and demonstrate skill in working as an active and contributing member of an online collaborative team and knowledge-building community
• Understand the benefits and limitations of learning in Web-based environments
• Understand processes and strategies for building online learning teams and the dynamics of group work
• Design, implement, and evaluate online collaborative learning activities using the Web-based CSCL technologies provided in the course
• Find, analyze, and share knowledge of online CSCL resources, tools, and research articles
• Explore, demonstrate, and critique a CSCL tool not included in the course
• Reflect upon your experiences in participating in online collaborative teams and designing CSCL projects
• Critically evaluate research studies and applications of technology-supported learning and teaching environments

Course competencies:
• Plan, conduct, and evaluate technology-supported learning activities that meet quality standards
• Demonstrate skill in working actively and continuously as a member of an online collaborative project team
• Effectively use the CSCL tools in course modules and at least one CSCL tool of own choosing
• Articulate theoretical foundations of CSCL
• Critically evaluate strengths and weaknesses of CSCL tools and learning environments
A rubric included in the course materials for module one details the four performance elements for collaboration in the course: group goals, interpersonal skills, group maintenance, and roles within a group. The rubric specifies criteria for attainment of each performance element at four different levels of proficiency: Novice, Intermediate, Proficient, and Expert.
The culminating activity for this module involved participants in an online survey,
“Are Online Learning and This Course for Me?”
Module Two: Building a Collaborative Team
The instructor, during this module, introduced course participants to the
mission of CSCL Technology, Inc., and the course collaborative online
environment and tools. He then assigned them, in pairs, to one of sixteen
different virtual offices, and grouped each office with three other virtual offices,
into five CSCL Technology, Inc., virtual office suites. Suite “mates” selected a
name for their five respective suites: WebCity, Web Wonders, CollabCrew,
InTune, and Harmony. The objectives for module two were for course participants to understand “key concepts of cooperative and collaborative learning, strategies and processes for building a virtual team, basic roles and responsibilities of being a member of a virtual team, and how to install and use basic features of an online collaborative learning tool.” During module two, to
facilitate team building, course materials directed participants to submit personal
demographic data and a photo for the online course “Staff Directory,” and
introduce themselves by a short written introduction posted in the threaded
discussion message folder labeled, “Introducing Yourself.” The course materials
and communications then asked each course participant to learn more about
another course participant via e-mail. The culmination of this introductory
activity was each course participant’s posting of a new discussion message to
introduce their assigned course participant, and their selection of a “Topic of
Personal Interest” for “Module Three: Strategies for Collaborative Writing.”
Module Three: Strategies for Collaborative Writing
The objectives for this module included “increasing course participants’
understanding of how to: plan and organize a collaborative learning task,
understand strategies and techniques for collaboratively authoring a document,
use online tools for collaborative writing, understand current research related to
collaborative writing, work effectively as a member of a collaborative learning
team.” In this module, course participants collaboratively researched a topic in order to produce a knowledge-building artifact, a “Topic Paper,” for the “CSCL, Inc., Company Handbook.”
Module Four: Synchronous Online Collaborative Learning
The syllabus detailed participant objectives for this module, which included learning about the purposes, characteristics, and development of educational MOOs; learning how to download and use MOO client software and basic MOO commands; and serving as a guide to other team members or learners on a tour of a participant-selected MOO site.
Module Five: Collaborative Web-based Inquiry Skills
The instructor designed the readings and assignments in this section to
help participants become familiar with the purpose, structure, and design of WebQuests. Course participants were encouraged to use at least one other tool, such as Zebu, for facilitating the development of Web-based learning resources. The key goal of this module was strengthening participants’ skills in working as collaborative team members while designing a WebQuest that met stated criteria.
Module Six: Collaborative Planning and Implementation of Larger Knowledge-building Projects
The instructor centered module six activities on participants developing a “White Paper” and then detailed the objectives of this module, which
included enabling participants to: “work effectively as a member of an online
collaborative learning team in the completion of the complex knowledge
construction project,” and to acquaint them with the challenges of working on
such projects under tight time constraints. Brainstorming and concept-mapping
tools were utilized to organize, schedule, and manage the tasks within a larger
scale project.
Module Seven: Designing an Online Collaborative Learning Project
The instructor, as is often the case in emergent courses or programs,
included too many activities for the allotted time period, and consequently
readjusted the course schedule and culminated the course with module five.
The original objectives for this module were for participants to: “plan,
design, develop, conduct, evaluate, and document an online collaborative learning
activity” of personal choice while demonstrating knowledge of design strategies
for online learning and a CSCL tool or environment.
RESEARCH FOCUS: THEORETICAL ORIENTATION
I began this research with a defined focus and ultimate goal of conducting
a formative evaluation of the Web-delivered CSCL course and the notion that a
long-term and sustained ethnographic participant-observation of the process of
moving a face-to-face course online could help to shed light on the problems and
issues faced by course designers and participants. According to Emerson, Fretz &
Shaw, the ethnographic participant-observer,
Participates in the daily routines of this setting, develops ongoing relations with the people in it, and observes all the while what is going on. Indeed, the term ‘participant-observation’ is often used to characterize this basic research approach. But, second, the ethnographer writes down in regular, systematic ways what she observes and learns while participating…Thus the researcher creates an accumulating written record of these observations and experiences (1995, p. 1).
I examined, in conjunction with participant-observation, traditional and
contemporary evaluation paradigms, models and approaches in an attempt to
understand how to structure and improve evaluations of constructivist online
learning environments, which focus on the unique needs of the course participants
and the needs of a knowledge-driven society. The culmination of the exploration
was the selection of an appropriate evaluation approach, which I utilized to conduct
a formative evaluation research study focused on the first implementation of the
Web-delivered CSCL course.
Review of Related Literature
THE EVALUATION OF HIGHER EDUCATION DISTANCE LEARNING ENVIRONMENTS
Educational institutions are experiencing revolutionary and at times
radical changes due to the convergence of powerful new information and
instructional technologies. The challenges associated with change are evident in
this statement from University of Michigan President Emeritus, James J.
Duderstadt,
There is no question that the need for learning institutions such as colleges and universities will become increasingly important in a knowledge-driven future. The real question is not whether higher education will be transformed but rather how and by whom. It is my belief that the challenge of change before us should be viewed not as a threat but as an opportunity for a renewal, perhaps even a renaissance in higher education (Duderstadt, 1998, pp. 1-2).
Indicative of this renaissance or rebirth in higher education is the coining
of new terminology such as the virtual university, a metaphor that is used to
describe modern electronic teaching, learning, and research environments that
utilize new technologies for educational purposes (Schank, 1999).
The Web-delivered CSCL course is subsumed in this vast and rapidly
expanding terrain of courses offered by virtual universities. During the final
decade of the 20th century in the United States many institutions of higher
learning employed some form of distance education with 78% of public four year
programs offering distance learning opportunities and 87% of institutions with
over 10,000 students offering distance learning options in 1999. Most of the
growth occurring in higher education distance learning involved the use of
computer-based technology and online delivery of courses (US Department of
Education, 1997, 1999).
Defining Distance Education
A common understanding of terminology is crucial to advancement in
any field (Clark & Clark, 1977). Analysis of distance education has been
“characterized by confusion over terminology and by lack of precision on what
areas of education were being discussed or what was being excluded” (Keegan,
1996, p. 23). Many terms have been used to describe distance education
including “‘correspondence study’, ‘home study’, ‘external studies’, ‘independent study’, ‘teaching at a distance’, ‘off-campus study’, ‘open learning’…” (Keegan, 1996, p. 23). With so many terms describing distance education, one wonders where the term came from and what the connotations of its different uses are.
The English term distance education is derived from the following terms:
German “Fernunterricht,” French “télé-enseignement,” and Spanish “educación a distancia,” and predates the use of the term “independent study” (Moore, 1996, p. 24). Distance education has been used as a generic term for the field of education that included a range of teaching and learning strategies used by “correspondence colleges, open universities, distance departments of conventional colleges or universities and distance training units of corporate providers” (Keegan, 1996, p. 34). “In the United States the term ‘distance learning’ has come
to be used as a global term for the use of electronic technologies in distance
education” (Keegan, 1996, p. 37). Keegan clarified why he chose to use the term
distance education, “Distance teaching and distance learning are each only half
the [educational] process we are seeking to describe” (1996, p. 37). “Distance
education is a suitable term to bring together both the teaching and learning
elements of this field of education” (Keegan, 1996, p. 38).
Although distance education has many forms and has been defined in
various ways, most definitions acknowledge that the terminology refers to an
approach to teaching and learning that utilizes learning resources available outside
the conventional face-to-face classroom and that time and/or space separate the
learners from the provider and possibly other students (Cyrs, 1997; Moore &
Kearsley, 1996). Moore and Kearsley (1996) chose a “working definition” (p. 2) of distance education, which will serve as the definition of distance education in this research report:
Distance education is planned learning that normally occurs in a different place from teaching and as a result requires special techniques of course design, special instructional techniques, special methods of communication by electronic and other technology, as well as special organization and administrative arrangements (p. 2).
The availability of virtual systems, which allow instructors to deliver
face-to-face conventional education at a distance, has enriched distance education
(Keegan, 1996). Keegan explained that the “virtual university,” (1996, p. 9) is
“based on (electronically) teaching face-to-face at a distance” (p. 9). He explained, “The theoretical analyses of virtual education, however, have not yet been addressed by the literature: is it a subset of distance education or to be regarded as a separate field of educational endeavor?” (1996, p. 9).
As the twenty-first century approaches, conventional or face-to-face teaching in schools, colleges, and universities continues to prosper, but this mode of instruction is increasingly being complemented by correspondence courses; audio, video, and computer technologies; and Web-delivered course offerings. Our world is in the midst of a social transition into a post-industrial society as our economy has shifted from material- and labor-intensive products and processes to knowledge-intensive products and services. A radically new system for creating wealth has evolved that depends upon the creation and application of new knowledge. We are at the dawn of an age of knowledge in which the key strategic resource necessary for prosperity has become knowledge itself, that is, educated people and their ideas. (Duderstadt, p. 145)
The CSCL course is a unique Web-delivered learning environment,
simultaneously offered through a virtual university program and a traditional
university program.
WEB-DELIVERED LEARNING ENVIRONMENTS
During the period from 1995 to 1998, distance learning courses and programs doubled, and Web-delivered or Internet-based courses increased by 32% (US Department of Education, 1999). The growth in Web-delivered education
has been fostered by the affordability and advancement of digital transmission
technologies, which include powerful and reasonably priced home computer
systems and a rising number of homes with Internet access. Collectively, these
trends create a ripe new online habitat for teaching, learning, and research in
Web-delivered learning environments (Web-based Education Commission, 2000).
The Higher Educational Council (HEC) conducted a meta-analysis of distance education research, and their primary criticism of distance education research was that there was a lack of quality studies (Phipps & Merisotis, 1999). A large number of the studies have shown that distance courses are not as effective as conventional courses (Phipps & Merisotis, 1999). The HEC report expounded a need to conduct in-depth case studies of university-based Web-based graduate programs, with emphasis on understanding the needs, desires, expectations, hopes, dreams, and frustrations of the program stakeholders.
Thomas Russell, director emeritus of instructional telecommunications at
North Carolina State University, examined research studies looking for evidence
that distance learning is superior to classroom instruction. Dr. Russell found,
after reviewing over four hundred studies, that no matter what media or methods were used, the results of the studies showed “no significant difference” in learning.
Other research comparing distance education to conventional instruction indicates
that teaching and studying at a distance can be as effective as traditional forms of instruction if there are meaningful student-to-student interactions, the methods and technologies are selected to match the instructional tasks, and timely teacher-to-student feedback is provided (Moore & Thompson, 1990; Verduin & Clark, 1991; Bachman, 1995; Task Force on Distance Education, 1992).
Institutions of higher education are racing to provide diverse educational
opportunities and are making substantial investments in new technologies that
support distance education. These institutions are spending more of their allotted
tax dollars for technology-enhanced instruction. With increased funding, come
increased accountability (Web-based Education Commission, 2000) and the need
to ensure excellence in communications and information technology initiatives
(Somekh, 2001).
Web-Delivered/Online Education
The Web is the online environment or “habitat” for the CSCL course. The
first version of the World Wide Web (WWW), or Web, was run in 1990 and made
available on the Internet by Tim Berners-Lee and his colleagues in the summer of
1991 (Crossman, 1997). Chute, Thompson, and Hancock (1999) defined the Web as:
A virtual library of video, audio, and textual data and information is stored on the computers of the Internet. These data are accessible to anyone with a modem, a personal computer, a way of connecting to the Internet (through a private or public Internet Service Provider), and a computer application program or ‘software’ called a browser designed to allow a person to explore Web resources (p. 221).
Porter (1997) defines the Web as a system that allows access to this information on sites all over the world using a standard, common interface to organize and search for information, and Driscoll (1998) states that the WWW refers to a subset of the Internet through which people can exchange data and communications. While the terms WWW, Internet, and online have been defined differently by different individuals, in common usage these terms are often confused and used interchangeably (e.g., McGreal, 1997). The Internet
originated in 1969 as a U.S. Department of Defense project. This project was
taken over in 1986 by the National Science Foundation, which upgraded the
Internet in the United States with high-speed, long-distance data lines (Barron,
1999). Barron differentiates between the Internet and the WWW:
The Internet is a worldwide telecommunications system that provides connectivity for thousands of other, smaller networks; therefore, the Internet is often referred to as a network of networks. The World Wide Web (first developed in 1991) connects these resources through hypermedia, so you can jump immediately from one document or resource to another with an arrow key or a click of a mouse button (1999).
Online teaching and learning has also been
described using a variety of terms with entire books devoted to the “Web-based”
medium of instruction (Khan, 1997, 2001; Hall, 1997; Driscoll, 1998). A
description of one type of online education, “Web-based Instruction” (WBI), was offered by Khan:
Web-based instruction (WBI) is a hypermedia-based instructional program, which utilizes the attributes and resources of the World Wide Web to create a meaningful learning environment where learning is fostered and supported (1997, p. 6).
Relan and Gillani’s definition of WBI is more concerned with strategies and paradigms:
We define WBI as the application of a repertoire of cognitively oriented instructional strategies implemented within a constructivist (Lebow, 1993; Perkins, 1991) and collaborative learning environment, utilizing the attributes and resources of the World Wide Web (1997, p. 43).
No matter what we call them or how various Web-delivered learning
environments are designed, one thing is apparent: they are here to stay. In 1999,
the United States Congress established The Web-based Education Commission
(WBEC) to make recommendations for utilizing the educational promise of the
Internet for pre-K learners through postsecondary education in the 21st Century.
Members of the Commission met with hundreds of experts in education, business,
and technology, and obtained input from hearings and through e-Testimony. The
Commission’s final report praised the Internet as an instructional tool and presented a consensus of the committee’s findings, concluding with this statement:
The question is no longer if the Internet can be used to transform learning in new and powerful ways. The Commission has found it can. Nor is the question should we invest the time, the energy, and the money necessary to fulfill its promise in defining and shaping new learning opportunity. The Commission believes that we should. We all have a role to play. It is time we collectively move the power of the Internet for learning from promise to practice (Web-based Education Commission, 2000, p. 134).
As we enter the 21st century, new terms such as “virtual,” defined in the Encyclopedia of Educational Technology as “computer-generated existence” (Hoffman, 2001), and “e-learning,” defined by Cisco as “Internet-enabled learning” (2001), are being utilized to describe electronically delivered distance education. Cisco claims that “E-learning will be the great equalizer in the next century. By eliminating barriers of time, distance, and socio-economic status, individuals can now take charge of their own lifelong learning” (2001). Two types of communication, synchronous and asynchronous, were utilized by the
instructor of the CSCL Web-delivered course environment. Synchronous
communication and asynchronous communication are terms that have been
utilized to define two basic ways of thinking about the delivery of distance
learning environments and the degree to which a course or program is bounded by
location and/or time.
Synchronous and Asynchronous Communications
Synchronous communication is a term used to describe simultaneous
group learning experiences where all parties of the educational endeavor
participate at the same time. Another term used to describe synchronous
communication is real time. Participants in distance learning environments can
achieve real-time communication via interactive audio or audio-
videoconferencing from a classroom to one or more remote classrooms. These
synchronous events require that students attend at a specified time and place.
Synchronous communication can also be achieved by the use of television,
computer based online chat rooms, and Web-based videoconferences in which
students communicate at the same time but from different locations (Connick,
1999). The course instructor utilized three forms of synchronous communication
in the Web-delivered course: chats, telephone conversations, and Webcasts.
Asynchronous communication, often utilized by instructors of online
courses, provides flexibility to students and teachers by allowing them to
participate at different times and from different locations (Connick, 1999).
Harasim, Hiltz, Teles, and Turoff (1997) cite advantages of asynchronous networked communications such as the ease of linking with international counterparts and control over time and pace of participation. Harasim et al. suggest that the quality of interactions in online courses is enhanced by asynchronous communication due to “increased opportunities to reflect on the message being received or being composed” (1997, p. 273). They point out that one major
advantage of online or networked education is the opportunity to participate
“actively and frequently [which] is not possible in the time-dependent face-to-face
classroom” (p. 273). The asynchronous course tools utilized by the instructor of
the Web-delivered course included e-mail and threaded discussions. What else
makes online instruction different from conventional instruction?
Online Teaching and Learning
Learners are receptive to distant asynchronous and interactive learning
environments because many of them are struggling to balance the responsibilities
of home, work and school. The majority of distance learners are over 25 years of
age, approximately 60% are women, and most have completed some education
beyond high school. These students find the ability to learn at times and places
convenient to them is well suited to their educational and training needs (Connick,
1999). Some advocates of distance education, including Chute et al. (1999), believe that distance learning courses are ideal for academic environments as well as work environments and see the expansion of distance learning via the WWW as an answer to preparing workers for a lifetime of learning in this technologically driven world:
As technology explodes and reshapes the workplace, today’s job skills will become obsolete, as tomorrow’s jobs require a completely new set of worker skills. All of these changes require changes in the way workers are trained or re-trained (1999, p. 3).
Distance learning may not be appropriate for every student, yet at the
beginning of the 21st century, students in record numbers are flocking to enroll in
online courses and programs. These Web-delivered programs are expanding
exponentially and becoming an integral part of the curriculum at many institutions
of higher education (US Department of Education, 1999) and workplace training
environments (Chute et al., 1999). This increase in the number of online students
is also increasing the need for faculty members who are willing to teach online
courses. Some educators view online teaching as a cultural change (Cini and
Bilic, 1999, p. 38) for faculty. They stress that faculty who move to online
teaching need to re-conceptualize their ideas about what is effective teaching and
learning.
Ragan (1999) examined the differences between the roles of instructors
and students in the conventional classroom as compared to these same roles in
distance educational settings. He concluded that the role differences are
negligible, but he also stressed that the design and development of instruction is
paramount in both environments. Ragan posits that these new standards are
forcing educators to re-evaluate teaching and learning:
Within both the distance education and general education framework, new standards are being defined based on a student-centered curriculum, increased interactive learning, integration of technology into the educational system, and collaborative study activities. Core to these changes is an examination of the fundamental principles of what constitutes quality instructional interaction. Without a firm understanding of these principles, decisions are made based on the merits of the technology or methodologies without consideration of the long-term and potential benefit to the student (1999, paragraph 2).
Kemp (2000) talked about the constant pressure on schools to attain
“successful” (p. xv) student results on standardized tests and stated that he
supports systematic attention to comprehensive methods for improving school
programs. The term “educational accountability” (p. xv) is being used by
educational critics more often according to Kemp (2000), which leads to the
following two questions: What does constitute “effective” teaching and learning
and how does one determine and evaluate “effectiveness?”
EDUCATIONAL PROGRAM EVALUATION
Kemp (2000) attributes the rising interest in accountability and
educational program evaluation to the following four factors:
• Rising educational requirements for good jobs while preparing a diverse and flexible workforce
• Limited degree of acceptable performance of graduates from many schools
• Need to control instructional costs while increasing learning effectiveness and efficiency.
• Spread of school choice, which is giving a growing number of parents the option of selecting the best schools for their children to attend (Preface, p. xv).
Pinar explained that educational program evaluators used educational indicators to report the strengths and weaknesses of the education system nationwide (Pinar et al., 1995).
Effective Educational Environments
The term educational indicator is used to summarize the condition of the
educational system and parallels existing terms such as economic indicators or
social indicators (Pinar et al., 1995). Variables that affect educational indicators, such as the nature of the material to be learned, the needs of the learners, the needs of the learning community, and the needs of society, are intertwined and must be unraveled to describe “effectiveness” in specific learning contexts, whether those contexts are conventional classrooms or online classrooms.
Peter Cooper (1993) discussed the constructivist paradigm and suggested that evaluation of constructivist environments would be “less founded upon criterion-referenced tests” (p. 17). He stressed the need to design alternative forms of evaluation “to account for multiple goals” (Cooper, 1993, p. 17) in constructivist environments. Cooper posited that evaluation of constructivist environments would need to be complex and include “both objective and subjective components” (p. 17).
Just as there is not one “right” definition of educational evaluation, there
are no one-size-fits-all answers to the question of what makes an “effective”
educational environment. Cooper (1993) charted the development of designed
instruction and suggested that the implementation and evaluation of designed
instruction limits researchers to available technology paradigms. Patton (1978)
defined a paradigm as “a world view, a general perspective, a way of breaking down the complexity of the real world” (1978, p. 203). Both Patton and Cooper
saw paradigms as deeply embedded and normative, telling the practitioner how to
act, “without the necessity of long existential or epistemological consideration”
(Patton, 1978, p. 203). Patton viewed this movement of the practitioner towards
action as the strength of paradigms. However, Patton also saw this as a weakness,
“in that the very reason for action is hidden in the unquestioned assumptions of
the paradigm” (1978, p. 203).
The current emphasis on standards-based curricula and benchmarks, points of reference that serve as standards and are developed to ensure quality in educational environments (Phipps & Merisotis, 2000; Carr & Harris, 2001; Popham, 2001), is one effort to define what constitutes “effectiveness” in contemporary educational domains and environments. Just as Cooper (1993) suggested, these standards are often complex and include both objective and subjective components.
Evaluation can provide valuable information for discovering and articulating the problems that students face in educational environments. Patton (1990) suggested that a “qualitative-naturalistic evaluation approach” (p. 53) is especially appropriate for developing, innovative, or changing programs where the focus is on program improvement, facilitating more effective implementation, and exploring a variety of effects on participants (p. 53). Patton viewed this approach “as dynamic and developing, with ‘treatments’ changing in subtle but important ways as staff learn, as clients move in and out, and as conditions of delivery are altered” (p. 52). Nixon (1990) viewed evaluation as a dynamic process that functions to discover and articulate the problems of program clients.
Evaluation can also serve as a valuable resource for improving courses and
programs (Elliott, 1991; Scriven, 1967). Gathering and sharing information
that illuminates problems and issues faced by participants in a specific course or
program may also help define what constitutes “effective” teaching and learning
in that environment.
Educational Program Evaluation in the US
Michael Scriven’s 1967 essay, “The Methodology of Evaluation,” marked the beginning of contemporary educational program evaluation in the United States (Popham, 1975). The growth of the field of educational program evaluation in the United States is often associated with the 1960s Kennedy
administration sponsorship of the “curriculum reform movement” (Pinar,
Reynolds, Slattery & Taubman, 1995, p. 734). The Elementary and Secondary
Education Act (ESEA) of 1965 brought a flood of funding to public schools for
diverse categorical areas such as education innovation, media and materials,
bilingual education, etc., but with this increased funding also came cries for
accountability. Legislation followed in 1971 that required all ESEA projects to be
evaluated by the government. From 1985 to the present, researchers’ methodical review of evaluation reports has given rise to a growing consensus that the earlier studies were generally poorly designed and of little practical value (Borich, 2000, p. 17).
Mary Thorpe, research coordinator for the British Open University from 1975 to 1980, stated, “Evaluation is the collection, analysis and interpretation of information about any aspect of a programme of education and training, as part of the recognized process of judging its effectiveness” (1988, p. 5). Madaus and Kellaghan claim that Ralph Tyler’s 1949 definition of evaluation has had “considerable influence” (1992, p. 120): “The process of evaluation is essentially the process of determining to what extent the educational objectives are actually being realized by the program of curriculum and instruction” (Tyler, 1949, pp. 105–106). Kemp (2000) claims that we can measure program effectiveness by asking a single question: “To what degree did students achieve competency with the learning objectives?” (p. 84).
Clearly, many diverse definitions of evaluation have emerged in different educational contexts (Madaus, Scriven & Stufflebeam, 1983; Stufflebeam & Shinkfield, 1985; Worthen & Sanders, 1987; Murphy & Torrance, 1987; English, 1988; Weiss, 1989). The Joint Committee on Standards for Educational Evaluation (1981) defined evaluation as “the systematic investigation of the worth or merit of some object (program, project, or materials)” (p. 12). Guba and Lincoln saw no possible way to define what evaluation really is: “…There is no ‘right’ way to define evaluation…. We take definitions of evaluation to be human mental constructions, whose correspondence to some ‘reality’ is not and cannot be an issue” (1989, p. 21). Others, such as Nixon (1990) and Donmoyer (1990, 1991), view evaluation as a process of negotiating meaning, as did Berman (1986) when she argued that focusing on measurement “has frequently ruled out much teaching of human experience that is not measurable” (p. 45).
EVALUATION RESEARCH
Evaluation has been distinguished from “…evaluation research, a term popularized in the late 1960s and early 1970s, beginning with Suchman’s 1967 book Evaluative Research” (Worthen, Sanders & Fitzpatrick, 1997, p. 5). Worthen and Sanders (1973) took the position that while evaluation and evaluation research have a great deal in common, they are entirely separate and different. They maintained that the goal of evaluation must always be to determine whether the phenomenon under investigation has “greater value than the competitors or sufficient value of itself that it should be maintained” (p. 26). In line with this reasoning, they differentiated research from evaluation, explaining that research is aimed at obtaining generalizable knowledge while evaluation is aimed at making judgments. This objectivist notion about generalizable knowledge is directly related to the long-entrenched scientific paradigm, which many social science researchers have rejected (Scheurich, 1997). According to Worthen, the distinction in the late 1960s and early 1970s between evaluation and evaluation research had to do with methodological issues: evaluation researchers employed rigorous social science research methodology, while evaluators conducted evaluations with other methods (Worthen et al., 1997).
The development of professional standards for evaluators in the 1980s fueled the growth and development of evaluation as a professional field (Borich, 2000). These standards, and the shifts in thinking behind them, have contributed to the growth of evaluation as a field of inquiry as well as to the professionalism of the field.
Evaluations conducted following rigorous social-science methodology also
contribute to the advancement and professionalism of the field. I do recognize
that various constraints may make formal evaluation research studies impractical,
if not impossible, in many situations. The decision whether to use formal or
informal methodology is individual and dependent on a variety of factors such as
who or what the evaluation is for, what the goals and purposes of the evaluation
are, local constraints, and organizational and fiscal limitations of the evaluation
research environment.
Stake’s (1973) and Guba and Lincoln’s (1981) rejection of the scientific
paradigm, or world-view, in favor of social science methodologies for evaluation
research, changed the face of educational evaluation in the United States. Lincoln
and Guba (1985) defended a different but equally rigorous, research paradigm for
the social sciences based on “constructivist methodology, ” (Guba & Lincoln,
1988, p. 45) and the “naturalistic” (Lincoln & Guba, 1985, p. 7) paradigm. The
evaluation approach which Lincoln and Guba espoused and labeled
“constructivist” (Stufflebeam, 2001) utilizes naturalistic methodologies. The rise
of naturalistic methodologies was an influential factor in the shift of educational
evaluation practice and theory from objectivist methodology and towards
constructivist methodology (Guba & Lincoln, 1988). The constructivist
evaluation approach does not reject using quantitative measures in evaluation
research. However, the focus is on utilizing the judgment of the evaluator and
qualitative methodology (Patton, 1990; Guba & Lincoln, 1981, 1989) rather than
on quantitative methodology.
The naturalistic paradigm rejects the notion of utilizing representative populations and generalizing evaluation findings, offering instead transferability, which depends on the degree of similarity between the sending and receiving contexts (Lincoln & Guba, 1985). Since the evaluator is the sending context, or sender, when she produces an evaluation report, and the sender can only know the sending context (Lincoln & Guba, 1981, 1985), evaluation research conducted within this paradigm does not attempt to generalize findings to other situations. Stake’s (1973) and Guba and Lincoln’s (1981) rejection of the notion that evaluations must produce generalizable knowledge to be effective has moved many evaluators to operate under different assumptions than the scientific-objectivists.
Assessment and Evaluation
Since the 1960s, curriculum evaluation, as well as assessment in school classrooms, has experienced tremendous growth (Herman, Aschbacher & Winters, 1992; Madaus & Kellaghan, 1992). Marzano (2000) defines the term assessment as “vehicles for gathering information about student’s achievement or behavior” (p. 12) and the term evaluation as “the process of making judgments about the level of student’s understanding or performance” (p. 13). It
has been common practice to use the terms evaluation and assessment
interchangeably (Carr & Harris, 2001).
Pinar et al. (1995) however, do not view evaluation and assessment as
interchangeable terms. “Evaluation is the broad category while assessment is
subsumed within it. Within assessment is measurement, the most narrow form or
subset of evaluation” (p. 732). Thorpe (1988) also differentiates evaluation from
assessment: “One of the terms evaluation is not synonymous with is assessment,
which is the procedure of assigning value to the learning achieved during and at
the end of a course” (1988, p. 6).
I choose to differentiate the term assessment from the term evaluation in this research report for the sake of clarity. I will use the term assessment to refer to indicators or measures of student learning, which can take many different forms, from the traditional pen-and-paper test to more modern notions of authentic
assessment (Herman et al., 1992, p. 2; Marzano, 2000; Carr & Harris, 2001, p.
175). I will use the term evaluation to indicate research conducted to improve or
judge the merit and/or worth of an educational program.
Summative Evaluation and Formative Evaluation
Summative Evaluation
Scriven (1967) introduced the distinction between formative and summative evaluation with “…the former category suggesting evaluation intended to inform revisions in practice and the latter suggesting judgments, say, for personnel files” (Pinar et al., 1995). Hiltz (1994) viewed summative evaluation as a way to justify the implementation of an instructional technology.
Flagg stated that summative evaluation arose from a concern with assessing both intended and unintended impacts of a program and defined summative evaluation as a kind of evaluation that is “…usually performed independently of the project; and it is not intended for the program developers but instead for program consumers, purchasers, funders and so on. Summative evaluation may yield formative information, but that is not its goal” (1990, p. 6).
Guba and Lincoln (1981) took the stance that both formative and summative evaluations can be used either to refine and improve programs or to judge their merit, as is evident in the following statement: “But in fact the
dimensions of merit/worth and of formative/summative are orthogonal;
evaluations of merit can be either formative or summative just as can evaluations
of worth” (Guba & Lincoln, 1981, p. 49).
While Guba and Lincoln’s orthogonal view of evaluation is consistent with constructivist orientations, evaluators have traditionally utilized summative evaluations to evaluate and judge the merit or worth of an educational program. Since the goals and objectives of constructivist environments are not generalizable, summative evaluation as defined by Scriven is not a viable solution for improving these programs. The evaluation focus for the implementation of the Web-delivered course was formative, focused on identifying strengths and weaknesses of the course to help guide modifications of the course for current and future offerings.
Formative Evaluation
The purpose of formative evaluation of an educational program is to help a designer improve the program or product during the early stages of development (Flagg, 1990; Northwest Educational Research Laboratory, 2001). Michael Scriven introduced the term formative evaluation in 1967, using it to refer to the “outcome
evaluation of an intermediate stage in the development of a teaching instrument”
(p. 51). Flagg utilized a broadened application of the term formative evaluation:
“to cover any kind of feedback from target students or professional experts that is
intended to improve the product during design, production, and initial
implementation” (1990, p. 5).
Flagg’s definition of formative evaluation is the definition utilized for this research because it fits best with the instructional design team’s desire
to improve the course for current and future offerings. Flagg identified three
types of formative evaluations: pre-production formative evaluation, production
formative evaluation, and implementation formative evaluation.
Pre-production formative evaluation involves collecting information to guide design-phase decisions and could include testing of software and hardware interfaces, types of interactivity, feedback procedures, and levels of user control. Production formative evaluation is utilized to assure the effectiveness of the program and to make decisions about program completion; it involves gathering data on items such as user friendliness, comprehensibility, and learning, using parts of the program or close approximations of the program with target groups and/or experts (Flagg, 1990). Implementation formative evaluation, or field-testing, is utilized to determine the effectiveness of the program under normal use conditions while it is still possible to change the program (Flagg, 1990).
The first offering of the Web-delivered course was also the “field test” of the course: neither time nor resources permitted the course designers to gather data on the course or to “try out” the course before implementation. The evaluation of the Web-delivered course is, therefore, a formative implementation evaluation. The Web-delivered CSCL course is an “emergent environment” (Edwards, 1987, p. 27) in which the goals and objectives of the course shift depending on the information needs of the course participants. As such, constructivist courses are never finished products suitable for traditional outcomes-based summative evaluations. The generative nature of constructivist learning environments makes them amenable to an ongoing and generative cycle of course improvement.
Formative evaluation strategies are well suited to those needs and
purposes. Before conducting the formative evaluation of the Web-delivered
CSCL course, I studied the following standards for evaluations, endorsed by the American Evaluation Association, to ensure that I understood the accepted
standards and criteria utilized to judge the accuracy, feasibility, propriety, and
utility of educational evaluations.
PROGRAM EVALUATION STANDARDS
Several different organizations contributed to the 1994 publication of the “Program Evaluation Standards” and its predecessor, the 1981 “Standards for Evaluations of Educational Programs, Projects and Materials.” In 1975, a committee with members from the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education launched a project to develop standards to help ensure useful, feasible, ethical, and sound evaluations of educational programs, projects, and materials. This project resulted in the publication of the 1981 “Standards for Evaluations of Educational Programs, Projects, and Materials” (Joint Committee on Standards for Educational Evaluation, 1994).
The sole responsibility for these standards was vested in a Joint Committee (Joint Committee on Standards for Educational Evaluation, 1994), and the twelve organizations that sponsored the formulation of the standards had no history of working together. The many different points of view resulted in serious disagreements about the topics to address with the new standards. The committee eventually resolved these issues and disagreements and decided to limit the scope of the 1981 standards to evaluations of educational materials, projects, and programs. The committee decided in 1989 that the 1981 standards could be applied in a broader context and conducted an extensive review process, which resulted in the following four categories of standards in 1994:
• Accuracy Standards
• Feasibility Standards
• Propriety Standards
• Utility Standards
(Joint Committee on Standards for Educational Evaluation, 1994).
Accuracy Standards
The twelve Accuracy Standards are intended to ensure that an evaluation will reveal and convey technically adequate information about the features that determine the worth or merit of the program being evaluated. The Accuracy Standards address such topics as context analysis, in which the program is examined in enough detail that likely influences on the program can be identified, and the appropriate and systematic analysis of quantitative and qualitative information. These standards also address utilizing defensible information sources, described in enough detail that the adequacy of the information can be assessed; valid information, gathered with carefully chosen, developed, and implemented procedures; and reliable information, where the information-gathering procedures are chosen, developed, and implemented so that the information obtained is sufficiently reliable (Joint Committee on Standards for Educational Evaluation, 1994).
Feasibility Standards
The three Feasibility Standards are intended to ensure that an evaluation will be realistic, prudent, diplomatic, and frugal. These standards involve practical procedures that keep disruptions to a minimum, planning that involves various interest groups to ensure their cooperation, and efficient and cost-effective resource expenditures (Joint Committee on Standards for Educational Evaluation, 1994).
Propriety Standards
The eight Propriety Standards are intended to ensure that an evaluation will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results. These standards involve such items as due concern for the rights of human subjects, formal written evaluation agreements, complete and fair assessment, fiscal responsibility, and full disclosure of findings (Joint Committee on Standards for Educational Evaluation, 1994).
Utility Standards
The seven Utility Standards are intended to ensure that an evaluation meets the information needs of intended users. This area includes items such as researcher credibility, stakeholder and value identification, report timeliness and dissemination, and evaluation impact (Joint Committee on Standards for Educational Evaluation, 1994).
The readers of this report can utilize these four categories of standards to judge the
accuracy, feasibility, propriety, and utility of the formative evaluation of the Web-
delivered course.
EVALUATION: DEFINITIONS AND APPROACHES
Flagg (1990) defined a program in the context of educational evaluation as “any replicable educational materials for electronic technologies such as television and microcomputers” (p. 3). Worthen and Sanders define evaluation “simply” (1997, p. 5) as determining the worth or merit of whatever is being evaluated: “Said more expansively, evaluation is the identification, clarification, and application of defensible criteria to determine an evaluation object’s value (worth or merit), quality, utility, effectiveness, or significance in relation to these criteria” (Worthen & Sanders, 1997, p. 5). Michael Patton (1997) defines program evaluation as an art of creating the right design, one “that is appropriate for a specific situation and particular policymaking context” (p. 249).
Evaluation Definitions
I began this research with a basic understanding of formative evaluation
and summative evaluation in the context of instructional design. What I
perceived as a deficit was my lack of in-depth knowledge of traditional evaluation
theory and the roots of curriculum evaluation.
In January of the spring 2000 semester, I joined the Web-delivered course re-visioning team as a component of directed research with my doctoral advisor, an expert in the fields of instructional technology, instructional design, computer-
supported collaborative learning, and distance education. My first objective was
to explore and learn about traditional evaluation theory.
Consequently, I enrolled in and completed the first of three graduate-level evaluation courses, Evaluation Models and Techniques, during the spring semester of 2000.
The course instructor, Gary Borich, Ph.D., has published widely and has extensive
theoretical knowledge of educational program evaluation, combined with vast
practical experience in evaluating educational programs. Dr. Borich exposed me
to a broad range of evaluation concepts, readings, strategies, and evaluation
approaches (Borich, 2000) and to four definitions of evaluation, which represent
four distinct theoretical orientations for evaluation research (Borich & Jemelka,
1981). These theoretical orientations (decision-oriented, applied-research, value-oriented, and systems-oriented) can be utilized to provide decision makers with
information about the effectiveness of an instructional program (Borich &
Jemelka, 1981). I subsequently enrolled in a two-semester evaluation research program during the fall of 2000 and spring of 2001.
Decision-Oriented Definition
The decision-oriented school of evaluation views evaluation as a process of determining decision areas of concern, deciding which information is most appropriate to address those concerns, and then collecting, analyzing, and summarizing information in order to provide decision makers with information for selecting among alternatives (Borich, 2000, p. 35). Stufflebeam’s Context, Input, Process, and Product (CIPP) evaluation model, which divides evaluation into four distinct strategies, is representative of this school of thought. Context evaluation focuses on specifying the operational context of a program and identifying the needs and problems of program clients. Input evaluations focus on identifying and assessing the capabilities of the system. Process evaluations focus on the identification of defects and the documentation of program activities. Product evaluations relate outcome information to objectives and to context, input, and process information (Borich, 2000, p. 35).
I did not select the decision-oriented definition as a theoretical basis for the formative evaluation of the Web-delivered course because of two distinct limitations. The first limitation is the tendency of this definition to focus on concrete short-term objectives, which are easily measured, and to ignore longer-term and higher-order objectives, which are the primary focus of the Web-delivered course. Second, this approach emphasizes objectivist measures such as test scores. This emphasis on testing gives the illusion that highly precise modes of quantification, and the data behind the scores, are what matter, rather than the judgment criteria by which the researcher interprets the data. The instructor individualized learning and projects for Web-delivered course participants, requiring individualized judgment criteria for each participant.
Applied Research Definition
The applied-research school of evaluation, which is not widely recognized other than for its limitations, focuses on establishing causal connections between instructional program experiences and outcomes. Three components make up the core of this definition: inputs (such as participant characteristics), the program (participant activities), and outcomes (target skills and abilities measured at program completion) (Borich, 2000).
I did not select this definition as a theoretical basis for the formative
evaluation of the Web-delivered course for two reasons. First, this definition
relies on experimental design, which was not feasible for this formative
evaluation because of the emergent nature and constructivist orientation of the
Web-delivered course. Second, experimental designs typically provide data for making judgments only after the program ends, precluding the use of program data for the continuous refinement of ongoing instruction, a primary goal of
the formative evaluation of the Web-delivered course.
The Systems-Oriented Definition
While the applied research definition narrowly focuses on three components, the systems-oriented definition posits that complex events cannot be understood by reducing them to their individual elements. “Rather, instructional programs are viewed as organismic entities that evolve and decay, depending on their relationships to other programs” (Borich, 2000, p. 39). The focus of the systems orientation is on studying whole programs and the interrelations within a program (Borich, 2000), for example, research to determine whether the goals and objectives of a particular course in an educational program align with the goals and objectives of the entire program.
I did not select the systems-oriented definition as a theoretical basis for the
formative evaluation of the Web-delivered course because this orientation and its
attendant methodology are aimed at testing the relative worth of different
programs within the same system (Borich, 2000).
Value-Oriented Definition
The value-oriented definition of evaluation focuses on value judgments
that evaluators make when evaluating instructional programs and “describes the
act of judging merit or worth as central to the role of the evaluator” (Borich, 2000,
p. 37). This perspective begins with the premise that the evaluator seldom knows all of the criteria that she or others will utilize to make judgments about the program. The supposition of the value-oriented theoretical basis for evaluation is that, since the criteria for making judgments are not existent or evident, the explication of criteria is a key component and important role of the evaluator. Van Gigch cited Louch (1966), who captured the world of the value-oriented evaluator as one in which:
…Behavior cannot be explained by a methodology borrowed from the physical sciences…what is needed… is not measurement, experiment,
prediction, formal argument but appraisal, detailed description, reflection, and rhetoric…Human action is a matter of appraising the rightness or appropriateness of what is attempted or achieved by men in each set of circumstances. Its affinities are with morality rather than the causal or statistical accounts appropriate to the space-time frameworks of the physical sciences. Its methods are akin to the deliberations and judgments in the law rather than the hypotheses and experiments of physics (Van Gigch, 1978, p. 220).
I selected the value-oriented definition as a theoretical basis for the formative evaluation of the Web-delivered course because this orientation and its attendant methodology of appraisal, detailed description, reflection, and rhetoric are well suited to the purpose of the formative evaluation and its focus on the breadth of the course experience for the course participants.
The value-oriented definition stresses the value judgments made in the
evaluation of the Web-delivered course and describes the act of judging worth or
merit as central to the role of the evaluator. This approach suggests that evaluations should determine who is benefiting from a program and whether anyone is being shortchanged by it (Borich, 2000), and implies that the evaluator must
justify and evaluate the Web-delivered course in terms of the values and needs of
the CSCL course participants.
The first premise of the value-oriented evaluation definition is that the evaluator must define the criteria that she will utilize to judge or determine the
strengths and weaknesses of the Web-delivered course. The second premise is
that the evaluator will collect data and analyze it utilizing these criteria.
Understanding the value-oriented theoretical orientation of the evaluator helps the
readers of this evaluation report to understand the context and values inherent in
the evaluation. For the evaluator, selecting an evaluation approach to guide the
design, data collection and analysis of the evaluation is crucial (Borich, 2000).
Evaluation Approaches
I examined diverse and conflicting classifications of evaluation
approaches (Borich, 2000; Stufflebeam, Madaus, & Kellaghan, 2000; Worthen &
Sanders, 1987; Worthen, Sanders, & Fitzpatrick, 1997) as part of the process of
selecting an approach to guide the design of the formative evaluation of the Web-
delivered course, and the concurrent data collection and analysis. I also examined
a broad variety of evaluation models or approaches (Borich, 2000; Guba &
Lincoln, 1981; House, 1983; Jonassen, 1992; Stake, 1974, 1973; Stufflebeam,
1971, 2001; Worthen, Sanders and Fitzpatrick, 1997).
Based on these explorations, I selected a taxonomy, or classification framework, to help explain the vast variety of evaluation models or approaches. Stufflebeam, Madaus, and Kellaghan (2000) described three distinct categories of evaluation models or approaches: Questions/Methods-Oriented Evaluation Models, Improvement/Accountability Models, and Social Agenda-Directed (Advocacy) Models. Stufflebeam (2001) further explicated and extended this taxonomy. His extended taxonomy will be utilized in this research report as a tool for organizing and describing evaluation approaches, and it will serve as the basis for situating the approach I selected for the formative evaluation of the Web-delivered course among the range of contemporary evaluation approaches.
Stufflebeam (2001) based his taxonomy on prior assessments of program evaluation approaches, including works by Stake (1974), Hastings (1976), Guba (1977, 1990), House (1983), Scriven (1991, 1994), and Madaus, Scriven & Stufflebeam (1983). I selected Stufflebeam’s taxonomy out of the many I examined because it is contemporary and has the unique feature of being the only classification that looks backward, to examine the evaluation approaches utilized in the previous century, and forward, to judge these approaches and ultimately recommend the best of them for continued use in the 21st century.
Stufflebeam (2001) grouped twenty-two of the most commonly used and
recognized evaluation models, or what he prefers to call “approaches” (2001, p. 9),
into four classifications or categories of evaluation approaches:
• Improvement/accountability approaches
• Pseudoevaluations
• Questions/methods-oriented approaches
• Social agenda/advocacy approaches.
After grouping the twenty-two evaluation approaches under the preceding four classifications, Stufflebeam (2001) utilized the following ten descriptors to characterize each of the twenty-two approaches:
1. Advance organizers, that is, main cues that evaluators use to set up a study
2. Main purpose(s) served
3. Sources of questions addressed
4. Questions that are characteristic of each type of study
5. Methods typically employed
6. Persons who pioneered in conceptualizing each study type
7. Other persons who have extended development and use of each study type
8. Key considerations in determining when to use each approach
9. Strengths of the approach
10. Weaknesses of the approach (Stufflebeam, 2001, p. 11)
Stufflebeam assessed each of the twenty-two evaluation approaches,
utilizing a metaevaluation checklist, which he based on the four categories of Program Evaluation Standards previously described. This metaevaluation checklist is
available on the Web (Stufflebeam, 2001a).
Two approaches, “Public Relations-Inspired Studies” and “Politically Controlled Studies,” were grouped under the classification of “Pseudoevaluations” and judged by Stufflebeam to be “objectionable” (Stufflebeam, 2001, p. 13) because they “…do not seek truth but instead acquire and broadcast information that provides a favorable, though often false impression of a program” (p. 13). The twenty remaining approaches Stufflebeam judged to be “legitimate” (p. 7).
From these twenty, Stufflebeam selected nine approaches and deemed them viable “for continued use in the 21st century” (2001, p. 7). Stufflebeam acknowledges some caveats to his appraisals, such as the fact that he bases the assessments solely on his own judgments. He also acknowledges the conflict of interest involved in including an approach that he himself developed. He posits that in spite of these limitations his analyses might be helpful “to evaluators and evaluation students at least in the form of working hypotheses to be tested” (p. 12).
Gary Henry, co-editor of the American Evaluation Association’s New Directions for Evaluation book series, notes some obvious omissions and possible questions that may surface in relation to Stufflebeam’s classifications and judgments. However, Henry concurs that Stufflebeam’s approach “…reflects practical wisdom. Choose the alternatives from among those which are most likely to turn out good evaluations, work on those that could be made right, and discard those which are unlikely to be improved sufficiently to confidently be used to assess merit and worth” (Stufflebeam, 2001, p. 4).
Stufflebeam’s (2001) classification of approaches spans a wide expanse of philosophical orientations and details varied approaches that are applicable to a broad range of evaluation purposes. After the elimination of the category of Pseudoevaluations, which I discussed earlier, Stufflebeam’s (2001) three remaining classifications of evaluation approaches included the Improvement/Accountability Approaches, the Questions/Methods Approaches, and the Social Agenda/Advocacy Approaches. These are the same three classifications that Stufflebeam, Madaus, and Kellaghan (2000) developed earlier.
In the following section, I will briefly describe each of the three classifications. Next, I will detail my reasons for selecting or not selecting an approach from each classification for the evaluation of the Web-delivered course. Third, I will detail the category that fits best with my theoretical orientation and with the information needs of the stakeholders of the formative evaluation. Fourth, I will summarize the best (Stufflebeam, 2001) approaches in the selected category and detail my reasons for selecting or not selecting each approach. I will conclude this review of literature with a description of the evaluation approach that I selected to guide the formative evaluation of the Web-delivered course.
Improvement/Accountability Approaches
These approaches focus on helping consumers to judge the merit and
worth of competing programs. They are objectivist, assuming “…an
underlying reality in seeking definitive, unequivocal answers to the evaluation
questions” (Stufflebeam, 2001, p. 42). These approaches are thorough with an
emphasis on serving decisions, improving programs, and addressing the needs of
program stakeholders. These approaches typically utilize both qualitative and
quantitative assessment methods (Stufflebeam, 2001).
I was not seeking to judge the merit and worth of competing programs. I
found the Improvement/Accountability approaches to be unsuitable for the
evaluation of the Web-delivered course because the objectivist and systems-
oriented nature of these approaches did not mesh with the constructivist
orientations of the stakeholders of the Web-delivered course evaluation or with
the value-oriented theoretical orientation that I selected.
Questions/Methods-Oriented Approaches
Stufflebeam grouped the questions-oriented and methods-oriented approaches into one classification because both are quasi-evaluation studies that narrow the focus of an evaluation, seeking to answer a limited number of well-defined questions rather than aiming to broadly assess a program’s merit or worth. The questions-oriented approach can employ a wide range of methods to answer a narrowly defined set of questions, which are derived from a variety of sources such as the objectives of the program or the program funder’s accountability requirements (Stufflebeam, 2001).
Some of these approaches emphasize technical quality and utilize specific methods such as an experimental design, a standardized test, or a specific program theory. Evaluators who are committed to employing mixed qualitative and quantitative methods initiate the second type of methods-oriented approach. The major weakness of both approaches is that their focus is usually too narrow to assess the merit or worth of a program (Stufflebeam, 2001).
The strongest disadvantage of the methods-oriented approach is its political volatility, since it attributes accountability for failures and successes to individual teachers or schools. Another key disadvantage of this heavily quantitative approach is that, while its analyses are complex and powerful, they explore only a limited set of outcome variables. Critics of this approach contend that multifarious causal factors bear on student performance and question whether a “measurement and analysis system can fairly fix responsibility for the academic progress of an individual and collections of students to the level of teachers” (Stufflebeam, 2001, p. 24).
I selected none of the Questions/Methods-Oriented approaches for the formative evaluation of the Web-delivered course because the transformed version of the course is still subject to modification; hence, I focused the formative evaluation on examining the breadth of the course experience for participants in order to determine course strengths and weaknesses. The objectivist focus of the Questions/Methods-Oriented approaches and their emphasis on conducting quasi-evaluation or experimental studies, which narrow the focus of an evaluation and seek to answer a limited number of well-defined questions, did not satisfy the needs or purposes of the exploratory formative evaluation of the constructivist Web-delivered course.
Social Agenda/Advocacy Approaches
Social Agenda/Advocacy approaches favor a constructivist orientation and the use of qualitative methods: “For the most part, they eschew the possibility of finding right or best answers and reflect the philosophy of postmodernism, with its attendant stress on cultural pluralism, moral relativity, and multiple realities” (Stufflebeam, 2001, p. 62). Social Agenda/Advocacy approaches are often oriented towards affirmative action and towards increasing access to educational and social opportunities and services for the disenfranchised. These approaches seek to employ the perspectives of program stakeholders as well as experts when investigating, characterizing, and judging programs, and all of the approaches are constructivist in orientation (Stufflebeam, 2001).
The constructivist orientations of the Web-delivered course and the course instructional design team, paired with the need for me to represent the perspectives and values of the course participants, were the two key factors that led me to choose a Social Agenda/Advocacy evaluation approach from among the following approaches.
Constructivist Approach
The constructivist approach to evaluation is heavily philosophical; it employs a subjectivist epistemology and rejects the notion of any one, ultimate reality. Constructivist evaluators are authorized, and almost expected, to maneuver the evaluation to emancipate and empower disenfranchised people. “The main purpose of the approach is to determine and make sense of the variety of constructions that exist or emerge among stakeholders” (Stufflebeam, 2001, p. 72). Guba (1981), the developer of constructivist evaluation, was heavily influenced by Stake’s writings on responsive evaluation in the early 1970s. Lincoln and Guba (1985) and Guba and Lincoln (1989) pioneered constructivist program evaluation.
Questions for constructivist evaluations are co-created by the evaluator and participants through interactions and negotiations. The evaluator plans schedules of discussions with all stakeholders. The resulting questions may not cover the range of issues necessary to determine how to improve the program, nor are the questions to be studied ever fixed (Stufflebeam, 2001).
The methodology of constructivist evaluations is at first divergent: the evaluator collects and describes the individual constructions of stakeholders on each evaluation question or issue. The evaluator then helps the participants to examine all the constructions and urges them to move towards a consensus. Constructivist evaluations never end, because there are no ultimate answers; there is always more to learn (Stufflebeam, 2001).
All interested parties need to support and cooperate with the approach and reach agreement on what the approach can or cannot deliver. The strength of the approach is that it seeks to involve the full range of stakeholders directly and fully discloses the whole evaluation process and its findings. Because effective change comes from within, an approach that engages stakeholders in the evaluation in this way can be an effective change agent (Stufflebeam, 2001).
The weakness of the approach is that it assumes all participants are informed and have a deep and abiding interest in, and desire to participate in, the evaluation. Even with deep and abiding interest, the participation of all parties is difficult to sustain. The constructivist evaluation process also depends on the honesty and full disclosure of participants; a further weakness of this approach is that participants may not want to reveal, or be honest about, their innermost thoughts and feelings (Stufflebeam, 2001).
I did not select the constructivist approach for the formative evaluation of the Web-delivered course because it does not agree with the value-oriented theoretical framework I selected; I believe it is unrealistically utopian to expect all stakeholders to have an abiding interest in improving the Web-delivered course. Many of the course participants, a key stakeholder group, are working adults who are taking the Web-based version of the course because of the flexibility afforded by Web-based distance education and the fact that this mode of educational delivery fits within the framework of their busy schedules. Not every course participant is informed about instructional design and program management and development, nor can the course participants be expected to invest their already limited time in improving the course as they participate in it. The other key stakeholders, the instructor and course support team, lead busy and active lives and cannot be expected to participate fully in the formative evaluation of the course while performing their assigned duties in relation to the course.
Deliberative Democratic Approach
The main purpose of the deliberative democratic approach is to use democratic processes for evaluating programs. Ernest House originated this new entry into program evaluation in 1998, and House and Howe further developed it in 2000 (Stufflebeam, 2001). The deliberative democratic approach “…envisions program evaluation as a principled, influential social institution, contributing to democratization through the issuing of reliable and valid claims” (Stufflebeam, 2001, p. 74). The deliberative democratic evaluator determines the evaluation questions that she will address through dialogue, deliberation, and debate with all interested stakeholders.
Dialogue, deliberation, and inclusion are all important at every stage of a deliberative democratic evaluation. Deliberative democratic evaluation methods
range from debates and discussions to surveys. The approach is most applicable
when there is adequate funding and the sponsoring agent is willing to give up
power to democratically allow participation of a wide range of stakeholders who
are willing to engage in open, meaningful and honest interactions (Stufflebeam,
2001).
The key strength of this approach is that it seeks to be just in incorporating
the views of all interested parties while at the same time allowing the evaluator
the express right to rule out unethical or faulty inputs. While the evaluator must
be receptive to input from all parties, she does not leave the conclusions to a
majority vote of stakeholders who may be uninformed or have conflicts of interest
(Stufflebeam, 2001). While the ideals of the deliberative democratic approach are admirable, House and Howe, the developers of the approach, both acknowledge that it cannot be applied realistically and fully at this time (Stufflebeam, 2001). For this reason, I did not select it.
Utilization-Focused Approach
The utilization-focused approach is aimed at making the evaluation useful
and utilizes a collaborative process with a targeted group of priority users. The
utilization-focused evaluator must be a highly competent and confident evaluator
who can proceed to choose and apply methods that will help the targeted users to
obtain and apply evaluation findings. This approach works best with a select
representative group of users. Utilization-focused evaluations concentrate on the
differences they make in influencing decisions and improving programs rather
than on technical elegance (Stufflebeam, 2001).
The utilization-focused evaluator works closely with the target users to
determine the evaluation questions. Foci for utilization-focused evaluations might, for example, include outcomes, impact, and cost-benefit analysis. The
utilization-focused evaluator can creatively employ whatever methods she deems
relevant for answering the evaluation questions. The methodology evolves in
response to ongoing deliberations between evaluator and the targeted users
(Stufflebeam, 2001).
A key strength of this approach is the involvement of the users of the evaluation. Stufflebeam quotes Patton: “The evaluator is training users in use, preparing the groundwork for use, and reinforcing the intended utility of the evaluation” (Stufflebeam, 2001, p. 79). This approach may be unproductive if there is a turnover of involved users, which could derail or delay the entire evaluation process. The approach is also subject to corruption by the user group and may serve only the interests of the targeted user group, who may have conflicts of interest. It may also limit the evaluation to a small subset of important questions (Stufflebeam, 2001).
I did not select this approach because I am a novice evaluator, not a highly competent and confident one. Moreover, the involvement of course participants in the formative evaluation of the course was not practical, as the design team wanted the course participants’ energies to be focused on achieving course goals and the formative evaluation to be unobtrusive.
The Client-Centered Responsive Evaluation Approach
The goal of the responsive evaluator is to carry on a continuous search for key questions and standards, providing the stakeholders with useful information as it becomes available in an attempt to help them see things from alternative viewpoints. The approach values the collection and reporting of multiple and conflicting perspectives on the value of a program’s format, operations, and achievements (Stufflebeam, 2001).
The client-centered/responsive approach is process-oriented, aimed at serving a variety of purposes such as helping people in a local setting to gain a perspective on the program’s full countenance or to understand how various groups view the program’s strengths and weaknesses. The approach is relativistic, seeking no final and authoritative conclusions and instead giving preferential treatment to subjective information by seeking to examine the full countenance of a program. It is also morally relativistic, positing that for any given set of findings there are multiple, equally plausible interpretations. The responsive approach rejects objectivist evaluation, instead taking a postmodern view that there are no best answers or values (Stufflebeam, 2001).
The responsive approach supports local autonomy by helping people who are involved in the program to evaluate it and use the evaluation for program improvement. Therefore, the evaluator must continuously interact with, and seek the support of, the diverse clients who develop, support, administer, or directly operate the program under study (Stufflebeam, 2001).
The questions to be addressed in responsive evaluations come from
practitioner and beneficiary groups in the local environment with more specific
evaluation questions emerging as the study unfolds. The emergent questions are
based on the continuing interactions of the evaluator with the stakeholders.
Designs for responsive evaluations are “…open-ended and emergent, building to narrative description…” (Stufflebeam, 2001, p. 69). Responsive evaluators delve
deeply into stakeholders’ interests and examine the program’s background,
rationale, processes, and outcomes (Stufflebeam, 2001).
Depending on the purpose of the evaluation, the responsive approach
legitimizes a range of different methods and encourages evaluators to employ
redundancy in their data-collecting activities. Responsive evaluators focus on
observing, collecting, and processing the opinions and judgments of the full range
of program stakeholders. This approach uses the information sources and
techniques that are relevant to portraying the program’s complex and multiple
realities and is intent on communicating the complexity of the program even if the
results instill doubt and make decisions more difficult (Stufflebeam, 2001).
The key strength of the responsive approach is that it involves action-
research by helping people to conduct their own evaluations and use the results to
improve their understandings, decisions, and actions. I based the choice of a
responsive approach for this formative implementation evaluation on the desire of
the course instructor and the collaborative instructional design team to make the
formative evaluation as unobtrusive as possible. The emergent nature of the course’s constructivist and collaborative learning environment allows for shifts in course goals, objectives, and activities, and a responsive evaluation approach is well suited to accommodating these types of shifts, as the design of a responsive evaluation is “relatively open-ended and emergent” (Stufflebeam, 2001, p. 69), dependent on the context of the evaluation.
The responsive approach is compatible with my value-oriented theoretical definition because the responsive approach is flexible enough to serve as a guide for the formative evaluation while I observe the course and define the value-oriented criteria that I will utilize to determine the strengths
and weaknesses of the course. The instructor expects participants in the course to
travel down varied pathways having multiple experiences and holding varied
perceptions. Diversity is encouraged and respected. The responsive approach
focuses on the needs of the key stakeholders and on making the evaluation
information useful and educational. The responsive approach allows flexibility in
design, methodology, execution, and reporting during the formative-
implementation stage of course development, when the staff needs help monitoring the program and no one is sure what problems will arise (Stake, 1973).
Evaluation Design and Methodology
Identifying the strengths and weaknesses of the Web-delivered CSCL
course required a formative evaluation approach that could unobtrusively examine
course participants’ perceptions of the course experience and their fellow course
participants, while at the same time taking into account the instructor’s and course support team’s educational expertise and their perceptions about the course
participants’ attainment of course goals and objectives.
EVALUATION DESIGN: APPROACH AND DEFINITION
I conducted this value-oriented, qualitative, formative evaluation research from an interpretivist paradigm, utilizing the events and themes of Robert Stake’s client-centered/responsive evaluation approach as guidelines. Erickson defined interpretive research as “the study of the immediate and local meanings of social actions for the actors involved in them” (Borg & Gall, 1996, p. 29). The value-
oriented definition of evaluation implies that I will justify the identification of
course strengths and weaknesses, and my recommendations for course
improvement, in terms of the concerns and issues of the course participants.
Stake’s client-centered/responsive approach is process-oriented and aimed at serving a variety of purposes, such as helping people in a local setting to gain perspectives on the program’s full countenance or to understand how various groups view the program’s strengths and weaknesses. The responsive approach views the course under study as a dynamic and developing environment that changes in subtle and significant ways as the instructor and instructional design team learn; as course participants enter the learning environment, progress, and move on; and as materials, modes, and methods of delivery are altered. Stake (1975) developed a heuristic diagram [see Figure 2] which depicts the twelve events that he utilized to describe the process for conducting a responsive evaluation. The approach encourages the collection of data through multiple means and sources, including both quantitative and qualitative data, “…so that it effectively represents the perceived status of the program, however complex” (Stake, 1973). To satisfy these requirements, I collected both qualitative and quantitative data through multiple means and sources.
Stake’s responsive approach and its emergent constructivist design
allowed me to focus on the events, as they occurred, to address the issues and
concerns of the key stakeholders of this formative evaluation. Stake (1975)
developed a heuristic diagram, [see Figure 2] which depicts the twelve events that
he utilized to describe the process for conducting a responsive evaluation. I
utilized these events as guidelines for the design of this responsive formative
evaluation. Stake explained that the twelve events in an evaluation do not
necessarily occur in a clockwise fashion, but rather can occur in any order,
simultaneously or at different times. Events or stages of the research may be
repeated as many times as necessary during an evaluation to be responsive to the
needs of the evaluation (Stake, 1975).
EVALUATION METHODOLOGY
The Human Instrument
The major data collection instrument in this research is the human instrument, the researcher herself. Naturalistic inquiry identifies the importance of the human instrument in conducting qualitative research (Erlandson, Harris, Skipper & Allen, 1993). The importance of the human instrument leads to the writing of a “Person as Instrument Statement” (see Appendix B) before embarking on the research process. I utilized the “Person as Instrument Statement” to clarify my preconceived ideas and biases related to the research topic. In my person as instrument statement, I discuss my feelings and experiences with technology and computer-mediated communication (CMC). The course instructor and I believe evaluations should not interfere with the goals and objectives of education programs or the naturally unfolding learning processes of the program participants. The utilization of direct participant observation and record and document analysis, all unobtrusive data sources, satisfied this requirement.
Direct Observation
Lincoln and Guba (1985) suggested that direct observation as a data collection method “provides here-and-now experience in depth” (p. 273). Guba and Lincoln (1981) summarized the basic methodological arguments for observation. Observation (particularly participant observation):
• Maximizes the inquirer’s ability to grasp motives, beliefs, concerns, interests, unconscious behaviors, customs…
• Allows the inquirer to see the world as his subjects see it, to live in their time frames, to capture the phenomenon in and on its own terms, and to grasp the culture in its own natural, ongoing environment.
• Provides the inquirer with access to the emotional reactions of the group introspectively; that is, in a real sense, it permits the observer to use [herself] as a data source.
• Allows the observer to build on tacit knowledge, both [her] own and that of members of the group (Guba & Lincoln, 1981, p. 163).
Participant Observation
Observer roles in qualitative research vary along a continuum from complete observer to complete participant. The complete observer “maintains a posture of detachment from the setting being studied,” while the complete participant “studies a setting in which she is already a member or becomes converted to genuine membership during the course of the research” (Borg & Gall, 1996, p. 345).
As participant-observer, I conducted preliminary qualitative data collection and analysis utilizing a recursive, collaborative process that involved the course instructor and support team members in discussions where we compared our observations and perceptions of the course and participants’ reactions to the course goals, objectives, materials, and activities during weekly telephone conference calls. The sources that I tapped, as human instrument, through direct observation included both human and nonhuman sources. The human sources included the course instructor, course instructional design and support team members, and the course participants. The nonhuman sources were course records and documents.
Records and Documents
Due to the vast quantity of course-related documents and records, the first
trade-off in this research was deciding which documents and records to collect for
later in-depth analysis. A record, for the purposes of this research, is “any written or
recorded statement prepared by or for an individual or organization for the
purpose of attesting to an event or providing an accounting” (Lincoln & Guba,
1985, p. 277). Records collected in this research include: the course syllabus,
newsletters, and other Web-based course materials; files related to participant
assessment such as grade files, assessment rubrics, and assessment database files;
meeting minutes; surveys; and knowledge building products (course assignments)
produced by participants.
The term document in this research is “used to denote any written or recorded material other than a record that was not prepared specifically in response to a request from the inquirer” (Lincoln & Guba, 1985, p. 277). Documents collected in this research include: informal threaded discussion messages, e-mail messages, chat transcripts, and videotapes of course Webcasts.
Lincoln and Guba (1985) cite many reasons to use documents and records in
qualitative research,
• They are, first of all, almost always available, on a low-cost (mostly investigation time) or free basis.
• Second, they are a stable source of information, both in the sense that they may accurately reflect situations that occurred at some time in the past and that they can be analyzed and reanalyzed without undergoing changes in the interim.
• Third, they are a rich source of information, contextually relevant [especially in an online environment where the majority of interactions take place in textual form] and grounded in the contexts they represent. Their richness includes the fact that they appear in the natural language of that setting.
• Fourth, they are often legally unassailable, representing, especially in the case of records, formal statements that satisfy some accountability requirement.
• Finally, they are, unlike human respondents, non-reactive, although the reader should not fail to note that what emanates from a documentary or records analysis still represents a kind of interaction, that between the sources and the analyzing investigator (Lincoln & Guba, 1985, pp. 276-277).
Evaluators often look only for the program effects specified by the program designers in the program objectives, and in so doing may select only data that reflect those effects. Such data collection may have a bias in favor of individual or group interests (Borich, 2000). In this research, I utilized a recursive and responsive formative evaluation approach and minimized bias by seeking input from all of the key stakeholders. The phases of this research correspond with the following recursively occurring events in Stake’s responsive approach:
1. Identification of stakeholders
2. Discovering concerns
3. Conceptualizing problems
4. Identifying data needs
5. Thematizing
6. Validating
7. Winnowing information
I based all of my decisions on close observation of course activities and communications; weekly conversations with program staff and the peer debriefing team; and recursive examination and analysis of course records and documents. From this recursive process, the seven key evaluation stages listed above emerged in this research.
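Because these events are recursive rather than strictly sequential, the process can be pictured, very loosely, as repeated passes over the seven phases until a pass surfaces nothing new. The following Python sketch is purely illustrative and is my own rendering, not part of Stake’s approach; the collect_findings callback stands in for the human instrument’s work during each phase:

    # Illustrative only: responsive-evaluation phases revisited in repeated
    # passes (in practice they may occur in any order or simultaneously).
    PHASES = [
        "identify stakeholders",
        "discover concerns",
        "conceptualize problems",
        "identify data needs",
        "thematize",
        "validate",
        "winnow information",
    ]

    def run_responsive_evaluation(collect_findings, max_passes=10):
        """Repeat the phases until a full pass yields no new findings."""
        findings = set()
        for _ in range(max_passes):
            new = set()
            for phase in PHASES:
                # collect_findings stands in for observation, document
                # analysis, and peer debriefing performed during this phase.
                new |= collect_findings(phase, frozenset(findings))
            if new <= findings:  # nothing new emerged; stop iterating
                return findings
            findings |= new
        return findings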
EVALUATION PROCEDURES
Stakeholder Identification
During the first phase of this research, I identified four groups as having a
key investment in the formative evaluation of the Web-delivered CSCL course:
1. The course instructor
2. The professional staff and graduate students who worked on the course
instructional design, development, and technical teams
3. The course support team: e-Sherpas
4. The course participants
Discovering Concerns
My role as participant-observer on the instructional design team was overt
and natural. The instructor and the graduate students on the instructional design
team were aware that I had joined the instructional design team as a participant-
observer. I informed the team that my focus was on embedding formative
evaluation strategies into the design of the course for subsequent formative
evaluation of the Web-delivered course.
Course Modeling
Utilizing the responsive themes for the formative evaluation of the course, I identified the scope of the course and course activities through “program modeling” (Borich, 2000, p. 197). According to Borich (2000), program modeling can help the evaluator to answer questions about the “legitimacy, representativeness, and appropriateness of a program’s objectives” (p. 197). Questions about program legitimacy, such as “Can the course logically be expected to improve the collaborative skills of course participants?”, allowed me to determine whether program designers had logically aligned the objectives and desired outcomes of the program (Borich, 2000).
Questions about the representativeness of program objectives involved me in questioning whether the course objectives represented all of the things the course was doing. As noted earlier, evaluators who look only for the effects specified in the program objectives risk biasing data collection in favor of individual or group interests (Borich, 2000).

The value-oriented evaluator ties the issues and values of program participants to the evaluation standards, which evaluators can use to determine the strengths and weaknesses of programs (Borich, 2000). This suggests that evaluations should determine who is benefiting from a program and whether anyone is being shortchanged by it. Questions about appropriateness involved me in the activity of matching the needs and wants of course participants with the course objectives. To assist this matching process I created a visual model [see Appendix C] of the course, which included the course components, objectives, and assignments. The visual model allowed me to identify the scope of the program and to overview program activities, a step that corresponds to several key events in the responsive approach: discovering concerns, conceptualizing problems, identifying data needs, and thematizing.
The course model also helped the instructor and course support teams to understand the interrelations and flow of course components, activities, and assignments. The instructor and instructional design team utilized the course model to help pinpoint the location and nature of problems the course participants were having with the organization of course materials, and to correct them during course implementation.
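Such a model need not be elaborate; it can be kept as simple structured data. The sketch below shows one way to encode components and flag objectives that no activity addresses, which is the representativeness question discussed above. The module names and assignments come from the course, but the objective labels are my own invented stand-ins:

    # A minimal, hypothetical encoding of a course model: each module lists
    # the objectives it claims to serve and the activities it contains.
    course_model = {
        "Module 3: Strategies for Collaborative Writing": {
            "objectives": ["collaborative writing"],
            "activities": [("topic paper", ["collaborative writing"])],
        },
        "Module 6: Larger Knowledge-Building Projects": {
            "objectives": ["team leadership", "knowledge building"],
            "activities": [("white paper", ["knowledge building"])],
        },
    }

    def unaddressed_objectives(model):
        """Return objectives that no activity in their module claims to serve."""
        gaps = []
        for module, spec in model.items():
            covered = {obj for _, objs in spec["activities"] for obj in objs}
            for objective in spec["objectives"]:
                if objective not in covered:
                    gaps.append((module, objective))
        return gaps

    print(unaddressed_objectives(course_model))
    # -> [('Module 6: Larger Knowledge-Building Projects', 'team leadership')]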
Formative Implementation Evaluation
I began my observations and examination of course records and
documents during the collaborative course re-visioning process. Later, during
course implementation, I continued my observations and recursive analysis of
course records and documents by logging into the Web-based course
environments, reading both informal and formal course records and documents.
Conceptualizing Problems and Identifying Data Needs
The primary focus of the course goals, objectives, activities, and
assignments was on the development of high-performance learning teams. I
collected the concerns of the first group of stakeholders, which included the
instructor, the instructional design team, and the e-Sherpas, via observation and
analysis of selected course documents. I combined their concerns into four
evaluation matrices [see Appendix D] and classified the concerns into the
following four categories:
• Course Participant Perceptions & Group Culture
• Pedagogical Elements & Sherpas
• Leadership
• Hardware & Software
Data collection and analysis were constant, and the recursive process was ongoing, which allowed ensuing evaluation activities to emerge from the ongoing data analysis. As the course progressed, I also examined course participants’ knowledge-building products (course assignments) in relation to these four categories. My observations and document collection continued throughout course implementation. I was “present” during Web-delivered course activities, weekly telephone conference calls, and monthly online Webcasts. During all stages of the study I interspersed close observation of course activities and communications, frequent meetings with the instructor and his course support teams, and the gathering and analysis of course documents and records as data sources for identifying key areas of stakeholder concern.
Thematizing, Validating, and Winnowing Information
As noted earlier, the first trade-off in this research was deciding which documents and records to collect for later in-depth analysis. Based on my preliminary observations and record and document analysis, I selected a combination of both formal and informal course documents for in-depth analysis. The emphasis of this exploratory formative evaluation was on capturing the breadth of the course participants’ experiences, rather than on conducting in-depth case studies of selected course participants.
Purposive Sampling
As the course progressed, correspondence became voluminous, and I progressively narrowed my focus to selected documents that allowed me to overview program activities, discover concerns, and conceptualize issues and problems. I utilized the identified concerns of evaluation stakeholders to select course documents for in-depth post-course analysis.
During course implementation, my participation took a back seat and observation achieved primacy. Several course participants tried to enlist my help with questions about course assignments or activities. Each time, I redirected these requests to the course instructor or
e-Sherpas, explaining to the course participants that my role as the “Course
Revision Editor” was to carefully observe the course and to stay focused on
accumulating data and evidence for improving future versions of the course. This
overt role of “Course Revision Editor” allowed me to be “present” while keeping
a very low profile during course activities and avoid being intrusive or interfering
with the naturally unfolding course processes.
I utilized the four categories of concerns to guide preliminary data collection and analysis, which included determining the document sources that I would utilize, in an unobtrusive fashion, to address these four categories of concerns expressed by the evaluation stakeholders. Due to the vast volume of online correspondence in the course and the limitations of the human instrument, it was logistically impossible for me to read every course document or record during course implementation; for this reason I utilized the purposive document sampling discussed earlier. Nor was it possible for me to be physically present during every live course Webcast. The course technical support teams produced a VHS videotape of each Webcast and re-broadcast it later for course participants who were not able to attend, via technology or physical presence, at the scheduled time. I utilized these videotapes to view Webcasts that I was unable to attend.
I gathered formative evaluation data from weekly telephone conference calls with the course instructor and the e-Sherpas, which afforded this group of stakeholders a venue to express their concerns to the course instructor while discussing the concerns of course participants that they had observed during the preceding week. Early during course implementation, the e-Sherpas decided among themselves to write course-related concerns, issues, and suggestions for improvement in a weekly “Sherpa Log” [see Appendix E]. The e-Sherpas made entries in the logs for a period of four weeks; after that, they no longer found time to make entries.
I utilized naturalistic methodologies in this constructivist
study and consequently focused on meeting tests of rigor, which are addressed in
four basic concerns: Truth Value, Applicability, Consistency, and Neutrality
(Guba & Lincoln, 1981).
EVALUATION STANDARDS AND CRITERIA
“Concerns and issues provide a much wider focus for evaluation than do
the behavioral objectives that are the primary focus of some quantitative
approaches to evaluation” (Borg & Gall, 1996, p. 704). Borg and Gall (1996)
define a concern as, “any matter about which stakeholders feel threatened, or any
claim they want to substantiate,” and an issue as, “any point of contention among
stakeholders” (p. 704). I developed the standards for this formative evaluation
based on the collective concerns and issues of the stakeholders of this evaluation.
Once the standards were developed, I utilized criteria developed by the American
Council on Education for the “comprehensive and fair review of distance learning
programs, courses, and modules” (ACE, 2001, p. 5), to develop course-specific
evaluation criteria for each of these standards:
Standard 1: Instructional Design & Learning Materials
Standard 2: Learning Objectives & Outcomes
Standard 3: Course Tools, Technology & Learner Support
Standard 4: Pedagogical Strategies
Standard 1: Instructional Design & Learning Materials
The organization of the course, instructional goals, objectives, learning materials, activities, and assessment strategies provide comprehensive coverage of the course content and are presented in a manner that is consistent with the participants’ prior knowledge and training.
A. Instructional Design: Evaluation Criteria
Course goals, objectives, expectations, activities, and assessment methods
provide flexible opportunities for interaction, participation, and leadership skill
development for learners, encourage multiple outcomes, and are appropriate to the
goals, objectives, and the technologies utilized.
B. Organization and Presentation of Learning Materials: Evaluation Criteria
The organization and presentation of the learning materials is clear to
learners, allows learner choice, and facilitates the achievement of course goals
and objectives.
Standard 2: Learning Objectives & Outcomes
The course learning objectives should be clear and support positive
cognitive outcomes for course participants. I will use the following criteria as
evidence of standard two:
A. Learning Objectives: Evaluation Criteria
The learning goals and objectives of the course are clearly defined and
simply stated, using language that the course participants understand. The course
structure and administration organizes learning activities around course goals and
objectives and assesses learner progress in relation to these goals and objectives.
B. Learning Outcomes: Evaluation Criteria
Course participants express positive feelings about the course experience. The course provides many opportunities for participants to achieve individual, cooperative, and collaborative learning goals.
Standard 3: Course Tools, Technology, and Learner Support
Adequate tools and technical infrastructure support the course participants
and their achievement of course and personal goals. I will use the following
criteria as evidence of standard three:
A. Tools: Evaluation Criteria
The course structure, instructor, and support teams allow consistent and equal opportunities for course participants to utilize a variety of tools, systems, software, and hardware.
B. Technology: Evaluation Criteria
The course materials, instructor, and support teams provide clear and sufficient instructions for participants on how to use the course technological resources, and encourage them to use and explore provided and new technologies to accomplish course, individual, and team goals.
C. Learner Support: Evaluation Criteria
The course instructor and course support teams provide learner support and feedback which is ample and timely, enhances instruction, and motivates participants.
Standard 4: Pedagogical Strategies
The pedagogical strategies utilized by the instructor and support team
facilitate critical thinking, cooperation, collaboration, and the real life application
of course skills. I will use the following criteria as evidence of standard four:
A. Collaborative Strategies and Activities: Evaluation Criteria
Course pedagogical strategies encourage course participants to formulate and modify hypotheses; apply what they are learning outside of the course environment; and take into account the time needed to complete course activities and to collaborate on knowledge-building processes and products.
B. Cooperation in Groups: Evaluation Criteria
The course pedagogical strategies facilitate the communication of ideas, feelings, and experiences; respect for others; and cooperation among course participants.
ESTABLISHING TRUST IN THE RESEARCH
I utilized procedures to ensure the rigor and trustworthiness of this formative evaluation research and kept documentation of each of the procedures while the study was ongoing. Examples of these procedures are included in the appendices.
Truth Value
In the scientific, or pre-ordinate, research paradigm, truth value is equated with internal validity, “The extent to which observed differences on the dependent variable in an experiment are the result of the independent variable, not some uncontrolled extraneous variable or variables” (Ary, Jacobs & Razavieh, 1996). The corresponding naturalistic term for this aspect of rigor is credibility, which, Erlandson, Harris, Skipper, and Allen (1993) note, “is essentially its ability to communicate the various constructions of reality in a setting back to the persons who hold them in a form that will be affirmed by them” (p. 40).
To ensure the credibility of my findings, and to minimize possible distortions that may have resulted from my presence, I engaged in sustained, long-term engagement with the stakeholders of this evaluation, all the while examining course communications and recording my observations in field notes [see Appendix F]. Lincoln and Guba (1985) later utilized the term “prolonged engagement” (p. 301) in addressing this aspect of rigor. Eisner (1979) tells us that it “is important for someone functioning as an educational critic to have an extended contact with an educational situation…to be able to recognize events or characteristics that are atypical. One needs sufficient time in a situation to know which qualities characterize it and which do not” (p. 218).
To address possible distortions that could arise from my involvement with
the stakeholders I utilized weekly debriefing by a team of disinterested peers, my
peer-debriefing team, and a reflexive journal where I recorded thoughts,
decisions, questions and insights related to the research [see Appendix G: Sample
Reflexive Journal]. My peer debriefing team consisted of four other doctoral students and me. I share a bond of trust and a long-term history of confidentiality with all of the peer-debriefing team members. Each of my peers has expertise in the field of education and has utilized, or is familiar with, naturalistic methodologies. Each team member also has knowledge of evaluation theory and practice, and is aware of the procedures and details of this research project.
During this research, the peer-debriefing team reviewed data generation
techniques, procedures, and data analysis, which included confirming or
disconfirming emergent themes, and provided editing suggestions for this final
report.
To address distortions that could arise from the employment of data-gathering techniques, I carefully recorded data and continually scrutinized the data for internal and external consistency. I utilized “structural corroboration” (Eisner, 1979, p. 215) and the technique of “triangulation” (Guba & Lincoln, 1981, p. 107) to address truth value in this research. Eisner first utilized the term structural corroboration to describe,
corroboration to describe,
“…A process for gathering data or information and using it to establish
links that eventually create a whole that is supported by the bits of
evidence that constitute it. Evidence is structurally corroborative when
pieces of evidence validate each other, the story holds up, the pieces fit, it
makes sense, and the facts are consistent” (Eisner, 1979, p. 215).
Lincoln and Guba (1985) later explained that structural corroboration, or triangulation of data sources, is a matter of crucial importance in naturalistic studies. They stressed that the evaluator needs to take steps to validate each new piece of information in a research study against at least one other source. In this research, I utilized structural corroboration by validating information in one document, such as a peer evaluation, with information in a second document source. “A naturalistic study involves an inseparable relationship between data collection and data analysis. An assumption of the naturalistic researcher is that the human instrument is capable of ongoing fine tuning in order to generate the most fertile array of data” (Erlandson et al., 1993). To fine-tune data collection and analysis, I used the constant comparative method of unitizing the data and assigning categories (Lincoln & Guba, 1985) to analyze the data gathered during this study. I utilized printed documents, records, and a multi-functional qualitative analysis software system for the development, support, and management of qualitative data analysis in this project.
First, I prepared the documents for importation into the software system. Next, the constant comparative process of unitizing the data and assigning categories began. I unitized and coded data into categories of stakeholder concerns, and defined and redefined these categories in a recursive process as I imported each new document into the software system. I unitized all data while recursively reviewing previous documents and revising the emerging categories of concerns and issues accordingly. After the categories were firmly established, I arranged and examined the coded topics, recursively refining the emergent categories or themes.
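The mechanics of this cycle can be illustrated with a toy sketch. The category names below are the four concern categories identified earlier, but the keyword lists and the sentence-level unitizing rule are invented stand-ins for the human instrument’s judgment, which no simple matcher reproduces:

    # Toy illustration of constant-comparative coding: split documents into
    # units, assign each unit to a category when it "fits," and set the rest
    # aside for recursive review. Keywords are invented stand-ins for judgment.
    CATEGORY_KEYWORDS = {
        "Course Participant Perceptions & Group Culture": {"feel", "frustrat", "team"},
        "Pedagogical Elements & Sherpas": {"assignment", "instruction", "module"},
        "Leadership": {"leader", "plt"},
        "Hardware & Software": {"browser", "software", "moo", "chat"},
    }

    def unitize(document):
        """Treat each sentence as one unit of meaning (a crude approximation)."""
        return [s.strip() for s in document.split(".") if s.strip()]

    def code_units(units):
        coded = {category: [] for category in CATEGORY_KEYWORDS}
        uncategorized = []
        for unit in units:
            text = unit.lower()
            hits = [c for c, kws in CATEGORY_KEYWORDS.items()
                    if any(kw in text for kw in kws)]
            if hits:
                coded[hits[0]].append(unit)   # first match wins in this toy version
            else:
                uncategorized.append(unit)    # would trigger category revision
        return coded, uncategorized

    coded, leftover = code_units(unitize(
        "Is it just me. Or are there way too many assignments."))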
After the data generation and the initial unitizing of data were completed, the instructor, course support teams, peer debriefing team, and I reviewed the data and categories. Then I grouped together the issues or problems discussed at greater length by the majority of evaluation stakeholders, the instructor, the support teams, and the peer debriefing team. The teams and I reviewed these categories, and six key categories or themes of concern emerged: Instructional Design; Learning Objectives; Course Tools; Technology; Learner Support; and Pedagogical Strategies.

I analyzed the selected course documents and records looking for evidence of course strengths and weaknesses related to these six categories of concern, which I grouped under the four evaluation standards. I then recursively coded and arranged the course participants’ comments related to these standards and winnowed and formatted this information for audience use. This reconstructive process is the foundation for establishing the credibility of the study.
Applicability
Scientific or pre-ordinate research equates applicability with external validity, “The extent to which the findings of a particular study can be generalized to other subjects, other settings, and/or other operational definitions of the variables” (Ary et al., 1996). Guba and Lincoln utilized a corresponding naturalistic term, “fittingness” (1981, p. 104), and cautioned, “For many purposes the question of whether the findings of one evaluation might be applicable in some other setting is meaningless” (p. 115). They cautioned that an emphasis on a priori control of factors or conditions to achieve “…high internal validity-may seriously affect external validity, because then the findings can, at best, be said to be generalizable only to other, similarly controlled situations” (p. 116). Lincoln
and Guba (1985) later utilized the term transferability, or the application of the
findings to other contexts and other individuals, to address this aspect of rigor.
Several researchers have asserted “that instead of making generalization the ruling consideration in inquiry, researchers should place emphasis on careful description” (Cronbach, 1975; Guba & Lincoln, 1981, p. 118). I am responsible for providing detailed descriptions, or “thick description” (Guba & Lincoln, 1981, p. 119), of the course context, data collection, data analysis, and reporting procedures, which allow readers to draw inferences and to apply those inferences to other contexts if they so desire. Transferability of the findings of this report is the responsibility of the reader.
Neutrality
Scientific or pre-ordinate research equates neutrality with objectivity; Guba and Lincoln utilized a corresponding naturalistic term, confirmability (1981, p. 104), or the ability to determine that the findings and interpretations which emerge from the generated data represent the perspectives of the study participants rather than the projections or expectations of the researcher. They saw this as the “most thorny” (p. 124) issue that can be raised with respect to using naturalistic methods in evaluation research,
“For how can inquiry be objective if it simply ‘emerges’; if it has no careful controls laid down a priori; if the observations to be made or the data to be recorded are not specified in advance; and if on the admission of its practitioners, there exist multiple realities capable of being plumbed to different depths at different times by different investigators” (Guba & Lincoln, 1981, p. 124).
The trustworthiness criterion of confirmability can be satisfied through documentation of the human instrument. Guba and Lincoln (1981) used an illustration suggesting that what one person experiences is not necessarily unreliable, just as what a number of people experience is not necessarily reliable, and explained that the difficulty mentioned above is due to “the meaning that is ascribed to the term objectivity,” rather than “the innate characteristic of naturalistic inquiry” (p. 124).

I utilized triangulation of data sources and the reflexive journal [see Appendix G] to develop a “human instrument” audit trail to ensure the confirmability of my research results. However, one issue remained for me to address: what measure or measures would I use to determine the Web-delivered course’s strengths and weaknesses? I chose to answer this question by “conceptualizing issues and problems” (Worthen & Sanders, 1987, p. 136) through the development of standards and criteria to guide my determination of the strengths and weaknesses of the course. I utilized the developed standards and criteria to guide data analysis, which I will discuss next.
Analysis of Data
I developed the course-specific standards and underlying criteria for this
formative evaluation based on the concerns and issues expressed by the key
stakeholders and utilized these standards and criteria to guide data analysis and
the formulation of judgments about the strengths and weaknesses of the Web-
delivered CSCL course. I recorded and qualitatively analyzed observational data and course documents and records, and I report the results here in relation to these standards.
ANALYSIS: INSTRUCTIONAL DESIGN AND LEARNING MATERIALS
Data utilized in analyzing the course participants’ perceptions of the course design and learning materials included my observations, course participants’ reflective writings, Café CSCL threaded discussion messages, participants’ final course grades, and the post-course survey. Twenty-one anonymous respondents, or approximately sixty-six percent of course participants, responded to the post-course survey [see Appendix H: Post-Course Survey].
Results from these five data sources indicate that the Web-delivered course met
all of the criteria for this standard:
Standard 1: Instructional Design & Learning Materials
The instructional design, organization of the course, learning materials, objectives, and assessment of students should provide comprehensive coverage of the course content and should be presented in a manner that is consistent with the course participants’ prior knowledge and training.

A. Instructional Design: Evaluation Criteria
Course goals, objectives, expectations, activities, and assessment methods provide flexible opportunities for interaction, participation, and leadership skill development for learners, encourage multiple outcomes, and are appropriate to the goals, objectives, and the technologies utilized.
A. Instructional Design: Evaluation
My observations, analysis of the post-course survey, peer reviews, product
reviews, and Café CSCL threaded discussion messages indicate that most of the
course participants achieved the goals and objectives of the course and their own
personal learning goals. All course participants received “A” level graduate
credit.
The intent of this qualitative research is not to make generalizations. However, since I used closed-ended and open-ended survey items as data sources, it is important to acknowledge that some problems with surveys can be attributed to mistakes, or even negligence, while many others are unavoidable and can only be minimized rather than eliminated altogether. For example, non-response was inevitable on the voluntary post-course survey because some of the course participants (around 34%) chose not to participate. Such survey problems can lead to bias, the tendency for findings to be inexact in projecting from the sample to what is happening in the whole population, or to a less predictable effect, variance, which may cause a projection to be low one time and high the next, or vice versa. Both qualitative and quantitative methods have strengths and weaknesses.
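To make the nonresponse figures concrete: with the 32 enrolled participants and 21 respondents reported in this analysis, a short computation shows both the response rate and the worst-case bounds that nonresponse places on any proportion estimated from the survey. The satisfied count of 16 below is a hypothetical placeholder, not a survey result:

    # Response rate for the voluntary post-course survey, and the worst-case
    # bounds nonresponse places on an estimated proportion.
    enrolled, respondents = 32, 21
    response_rate = respondents / enrolled          # 0.656 -> about 66%
    nonresponse = 1 - response_rate                 # about 34%

    satisfied_in_sample = 16                        # hypothetical count
    p_hat = satisfied_in_sample / respondents       # sample proportion, ~76%

    # If every nonrespondent were (un)satisfied, the true proportion lies between:
    lower = satisfied_in_sample / enrolled                               # 50%
    upper = (satisfied_in_sample + (enrolled - respondents)) / enrolled  # 84%

    print(f"response rate {response_rate:.0%}, estimate {p_hat:.0%}, "
          f"bounds [{lower:.0%}, {upper:.0%}]")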
The debunking of positivism after World War II and the ascendance of constructivism resulted in what social scientists have referred to as “paradigm wars.” However, the so-called “wars” have given way to pragmatism (Tashakkori & Teddlie, 1998), especially in evaluation research (Patton, 1990; Borg & Gall, 1996). Howe’s (1988) concept of pragmatism stressed the compatibility of qualitative and quantitative methods. Brewer and Hunter made essentially the same point:
However, the pragmatism of employing multiple research methods to study the same general problem by posing different specific questions has some pragmatic implications for social theory. Rather than being wed to a particular theoretical style…one might instead combine methods that would encourage or even require integration of different theoretical perspectives. (1989, p. 74).
Qualitative and quantitative methods are both useful in evaluation research when the researcher understands and accounts for the strengths and weaknesses of each method. In this exploratory research, I used both qualitative and quantitative methods to responsively and unobtrusively evaluate the newly developed Web-based CSCL learning environment.
Analysis of the thirty-two course participants’ module three through six reflections, which were posted in the course assignments folder, indicates that the course materials were comprehensive and helped participants to achieve the course objectives as well as their own learning outcomes. Annie, an off-campus participant who works full-time as a school district technology coordinator while she completes a graduate degree in English as a Second Language (ESL), reflected about what she learned while working collaboratively with other course participants on the “topic paper” during module three,
I learned great techniques for using a collaborative document, which includes how to share thoughts with teammates and organize the documents to maximize understanding. Through my office and the entire suite’s research, I learned about the tools, strategies, advantages, and disadvantages of collaborative writing.
The Web-based format of the course provided flexible opportunities for
interaction and participation and these opportunities allowed participants to utilize
course technological resources to work at places and times that were convenient
for them. Dr. Resta informed course participants early in the semester that the
formative evaluation of the course would be ongoing and solicited course
participant input throughout the semester. Learner choice and multiple outcomes were integral to course activities, for example, the collaborative writing activities in modules three and six, where participants found, analyzed, and shared knowledge of online course resources, tools, and research articles. Dr. Resta e-
mailed the course participants, after he had finalized course grades, and
encouraged them to critically analyze the strengths and weaknesses of the course
by participating in the voluntary and anonymous post-course survey.
Post-course survey respondents indicated that participation in the course
and social contacts with fellow course participants were important to them, and
that the course had lived up to their expectations. One post-course survey item
asked the respondents to use the following Likert scale to evaluate each of the
course modules (see Figure 3: Evaluation of Course Modules),
1 = Very Poor    2 = Poor    3 = Average    4 = Good    5 = Excellent
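As a minimal illustration of how responses on this scale were summarized, the computation below uses the distribution of Web-page ratings reported later under this standard (seven Excellent, twelve Good, two Average); mapping the mean back to a scale label is my own convention, not the survey’s:

    # Mean of Likert ratings for the course Web pages (distribution reported
    # under Standard 1B: 7 Excellent, 12 Good, 2 Average).
    counts = {5: 7, 4: 12, 3: 2}            # rating -> number of respondents
    n = sum(counts.values())                 # 21 respondents
    mean = sum(rating * k for rating, k in counts.items()) / n   # 89/21 ~= 4.24

    labels = {1: "Very Poor", 2: "Poor", 3: "Average", 4: "Good", 5: "Excellent"}
    print(f"mean {mean:.2f} -> {labels[round(mean)]}")   # mean 4.24 -> Good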
Many of the post-course survey respondents rated the course modules “better than average” (see Figure 3); however, not every course participant was satisfied with the instructional design and learning materials. The open-ended comment sections on the post-course survey provide insight and help to clarify and extend respondents’ perceptions of the course expressed on the closed-ended Likert-scale post-course survey items, such as this comment about the course modules: “The modules were a helpful resource and provided useful information. It was helpful to be able to refer back to them as needed.”
The last open-ended post-course survey item asked course participants: Do you have any further remarks on how we might improve the course? (see Appendix H: Item 153). Sixteen of the twenty-one survey respondents (76%) entered a response. Two of these sixteen respondents (13%) indicated that they could not think of a way to improve the course. One survey respondent offered this suggestion for course improvement,
I would rather have explored and reflected upon some of the established collaborative learning strategies as they could be interpreted in an online environment. Instead, what my suite learned was a way of doing collaborative work, and we used it over-and-over. It proved practical for turning out the assignments, but it did not give me first hand understanding of a variety of ways that I could structure collaborative learning through technological means with my students.
This comment indicates that members of the respondent’s suite developed
comfortable ways of working together. Perhaps this respondent’s suite did not
fully explore ways of structuring collaborative learning, but I believe this was a
group limitation and not related to the instruction or course materials. My
observations indicate that the instructor and course materials encouraged the suite
teams to explore various roles and strategies for collaborative knowledge-building
activities.
A key suite-team task during module six, “Collaborative Planning and
Implementation of Larger Knowledge-Building Projects,” was the selection of a
suite representative for the Project Leadership Team (PLT). The course instructor
built extra assessment points into the course structure to compensate course participants for the additional time and effort they expended when performing PLT roles. One survey respondent was “distracted” by
the course assessment practices,
According to the instructor, the original intent of the course was to give the learner experience in several collaborative methods. However, what he actually emphasized were group leadership and the accumulation of points. This was very distracting to me and took away from the overall learning experience.
Another post-course survey respondent (see Appendix H: Item 13), who did have the opportunity to serve in a leadership role, indicated satisfaction with the course leadership roles,

I have become a little more outgoing because of having to take on “leadership roles” in this course. I have a quiet personality normally, but this course did not allow me to sit back and learn or even be a follower. This was good for me.

The course materials and instructor did encourage course participants to assume leadership roles throughout the course; however, the leadership roles were limited to selected leaders for various course activities. Hence, leadership practice occurred more extensively for the students who served as leaders of activities and as members of the PLT than for other course participants.
The course materials and instructor provided resources and software tools
for course participants to help them with their collaborative tasks. The instructor
also made provision for learner choice, as the following exchange, which took place in the course Café threaded discussion folder, illustrates,
Dr. Resta, Do we have to use Inspiration for the mapping task, or can we use another tool that we are more familiar with? -Rene
Hi Rene, The use of Inspiration is optional. If you have access to another tool that you would prefer to use to develop your concept map, please feel free to use it. -Dr. Resta
One post-course survey respondent expressed general satisfaction with the
course but dissatisfaction with the learner choices the instructor and course
materials afforded,
I am, overall, satisfied with the course and content. However, I believe that things got a bit out of hand toward the end of the course, leading to an atmosphere of frustration rather than learning. This was not the developer's fault, but rather letting the class dictate the delivery of instruction.
Eighty-four percent of the course participants were female, working full time, and attending school part-time. Several of the course participants indicated that they felt overwhelmed at times by the quantity and intensity of course assignments. Sandy, an on-campus course participant and a faculty member at a local community college, took the course because she was “interested in seeing how collaborative learning works in a class setting.” She posted this threaded discussion message in Café CSCL,
Is it just me? Or are there way too many assignments? I am still trying to figure out what to do about the evaluations from the topic paper. In the meantime, the MOO thing is due on Monday and then we have to figure out the Webquest assignment by Friday of next week. (This added to my other class work and my work schedule!)
Laurie, an on-campus course participant, telecommutes from home and works as a software developer, “creating, maintaining, and supporting database software for the crop insurance industry,” while she is earning a graduate degree in education. Laurie took the course to “acquire the necessary skills in order to work in a collaborative group.” Laurie replied to Sandy’s threaded discussion
message,
I empathize with you about the amount of work. I too will be struggling to get the reviews and the MOO assignment completed by Monday. Anyone else having problems? Good luck! -Laurie
Tracy, an elementary school teacher with nine years of experience
teaching first graders, wants “to be a contributing member of a team that designs
projects, analyzes information, and explores new ideas.” She responded to
Laurie’s message with this threaded Café CSCL message,
I too am slightly overwhelmed with MOO and now the Webquest assignment 5.1, which is due on October 20. Is the October 20 deadline a typo?
The previous comments indicate that some of the course participants found the requirements of the course to be demanding and time-intensive, which may be positive or negative depending on the participants’ individual and collaborative goals. The survey respondents varied in the time they spent working on course-related activities and assignments (see Figure 4). On-campus survey respondents indicated that they spent fewer hours per week working on course assignments than did the off-campus participants. The differences in the average weekly hours that participants spent on course-related activities might be due to a variety of factors, such as motivation, prior knowledge, previous experience in online learning, work responsibilities, or a multitude of other variables. Further research could help to shed light on the extreme variance among course participants in relation to time spent on course-related activities (see Figure 4: Survey Respondents Average Weekly Course-Related Hours).
The course instructional materials and instructor allowed learner choice, the materials were comprehensive, and they facilitated the achievement of course goals and objectives. However, the organization and presentation of the learning materials were not clear or satisfactory at all times to all of the course participants.
B. Learning Materials: Evaluation Criteria
The organization and presentation of course learning materials is clear to learners, allows learner choice, and facilitates the achievement of course goals and objectives.
Learning Materials: Evaluation
The mean rating of course Web pages by the post-course survey respondents was “Good” (see Appendix H: Item 90). However, the variance among respondents on this item portrays their evaluation of the course Web pages more clearly than the mean. Seven of the twenty-one survey respondents rated the course Web pages as “Excellent,” twelve rated them as “Good,” and two rated them as “Average.” One respondent expressed the following opinion about the course learning materials and activities,
I think some of the activities could have been better organized. I think we spent too much time in activities such as voting and therefore we spend less time for activities such as writing papers.
Utilizing an emergent and generative responsive formative evaluation process, the instructor, course support team, and I discussed participant input during the support team’s weekly telephone conference calls and in on-campus face-to-face meetings. While this comment indicated to us that this post-course survey respondent might have preferred more emphasis on written assignments and less emphasis on team-building activities, the course instructor explained that he had intentionally focused instructional activities around building learning teams and considered that most important. In this case, we determined that perhaps this survey respondent expressed this opinion because the steps and processes for writing a paper were more familiar to this participant, and to other members of her suite team, than the steps and processes for building high-performance learning teams were. For this reason, no course modifications resulted from this specific feedback; however, the instructor “listened” to participant feedback personally and through interacting with members of his course support teams. Some of the participant feedback generated viable course improvements.
For example, several course participants were confused about the instructions for
peer and product assessments during the semester. One collaborative course
activity required the suite teams to work together assessing the white papers of the
other course suite teams. Jaye, a former mental health counselor from India,
works on campus twenty hours per week as a graduate research assistant. Jaye was confused by the directions for peer evaluation in module three,
This is so confusing. I just presumed we were going to evaluate each other’s papers as a group. One suite evaluating others, but now I see that the instructions are not clear. Will somebody please clarify this?
The organization of learning materials and instructions for completing course activities were not clear to all learners at all times; however, I observed that course participants were able to resolve problems or questions with the course materials quickly. For example, the instructor and course support team clarified the instructions for the assignment that Jaye commented on during this generative and responsive formative evaluation, because they “listened” to student feedback, discussed the implications of the feedback, and responded, when appropriate, with course modifications.
However, as I have indicated, not all student feedback generated course modifications. The course support team, the instructor, and I discussed course participants’ concerns, which were “voiced” in communications such as postings in the Café, comments made during course Webcasts, and participants’ reflections. After these weekly telephone conferences and the monthly Webcasts, the instructor made course modifications and adjusted assignments to accommodate the needs and requests of course participants, which he and the course support team had garnered from the above-mentioned sources, but only when such needs and requests did not invalidate or interfere with participants’ attainment of course goals and objectives.
Some of the course participants expressed frustration or confusion with course materials, deadlines, or tools in Café CSCL. Karen, an off-campus course participant and grandmother of two, is currently in her twenty-ninth year of
teaching secondary school. Karen directed this threaded discussion message to
Louise, an on-campus course participant, and former ballet dancer. Louise
currently works full-time for a local arts commission while she completes a
master’s degree in adult education,
Louise, I feel like I am on a merry-go-round that is stuck in high gear. I know just how you feel, too many assignments being due at relatively the same time. Add to that unclear instructions, plus things that don't function, and you cannot get things done, and frustration sets into the madness. -Karen
Laurie believes that “collaborative learning via the internet is the wave of
the future.” She posted this message in Café CSCL shortly after Karen’s,
“However, it was not until I started reading the postings of others ABOUT the
class that I finally realized that I am not the only person freaking out here.”
Bridgett enrolled through the local campus as a doctoral student. She works full time as a web course developer and information architect for a local online learning company, and she replied to Laurie’s post,
Yeah, I guess I agree with everyone... I like the idea of having a place to get to know each other about stuff that isn't directly related to the class but on the other hand, a Café would be the logical place to vent and indeed, it was.
Working on collaborative projects can be frustrating for experienced collaborators and even more so for novices. Many of the course participants “vented” their frustrations in threaded discussion messages in Café CSCL, as Bridgett indicated. My observations indicate that the informal Café discussion area was a positive course provision because it relieved participant anxiety and diffused tension during the semester while serving as a venue for participants to discuss course-related concerns and issues in an informal setting without fear of retribution.
Summary: Instructional Design and Learning Materials
Course goals, objectives, expectations, activities, and assessment methods provided flexible opportunities for interaction and participation, encouraged multiple outcomes, and were appropriate to the goals, objectives, and the technologies utilized. The organization and presentation of learning materials facilitated the achievement of course goals and objectives, but the organization and/or presentation of these materials were not clear at all times to all course participants. However, participants were able to resolve most problems or questions with course materials quickly by utilizing the many course communication tools, which included the instructor’s telephone number and e-mail address for questions that needed immediate answers, or answers outside of the course structure. Within the course structure, communication tools included chat, threaded discussion folders, and monthly Webcasts. All of these communication tools served as vehicles of communication among the instructor, course support teams, course participants, and me. A few course participants felt overwhelmed by the course activities, roles, and tools, or never got comfortable with the independence and learner control that the course structure and instructor provided. Leadership skill development and practice occurred more extensively for the students who served as members of the PLT during module six; however, all course participants were exposed to, and encouraged to take on, leadership roles by the course materials, instructor, and e-Sherpas.
The instructor and his support teams clarified and modified the course
design and learning materials during this formative evaluation, based on our
observations, course participant input, and course modeling. Modifications of the
instructional design and learning materials, which the instructor made for the
second implementation of the course based on this formative input, will be
discussed in a future research report. The instructional design, organization of the
course, learning materials, objectives, and assessment of students provided comprehensive coverage of the course content and were presented in a manner consistent with the technologies utilized in the course and the course participants’ prior knowledge and training. Course instructional materials were comprehensive
and facilitated the achievement of the course objectives and learning outcomes,
which I will discuss next.
ANALYSIS: LEARNING OBJECTIVES & OUTCOMES
Data utilized in analyzing the course learning objectives and outcomes
included: my observations, the course syllabus and Web-based materials,
reflective writings of course participants, Café CSCL threaded discussion
messages, and the post-course survey. Results from these five data sources
indicate that the Web-delivered CSCL course met the criteria for this standard:
Standard 2: Learning Objectives & Outcomes
The course learning objectives are clear and support positive cognitive outcomes for course participants.
A. Learning Objectives: Evaluation Criteria
The learning goals and objectives of the course are clearly defined and simply stated, using language that the course participants understand.
Learning Objectives and Outcomes Evaluation
The course materials clearly stated the course learning goals and
objectives and defined course requirements. I found no evidence of course
participants who did not understand the goals and objectives of the course as they
were detailed in the course materials. My observations and analysis of course
documents and records indicate that the course instructor and instructional
materials organized learning activities around the course goals and objectives, and
assessed learner progress in relation to these goals and objectives. However, the
instructor also took individual and team goals and objectives into account and
adjusted course objectives, activities, and course participant assessment, when
needed.
The course Webcasts, private e-mail and telephone messages to the instructor, and course participants’ postings to the Café CSCL threaded discussion folder were utilized by the course instructor and support teams to identify and adjust course activities and deadlines based on concerns and issues expressed by participants during course implementation. Bart, an off-campus participant, has
been a medical administrator for the past two decades and is completing a
program to become a Certified Knowledge Manager and Certified Knowledge
Environmental Engineer. His reflective writing after completing the module five Webquest activity provides participant insight into two issues faced in both
109
online and face-to-face programs which intensively utilize collaborative learning
experiences,
Our suite was technologically light. Only two of us had any experience in doing educational Web pages. The others preferred to work on writing curriculum or doing research. Our suite divided itself along the lines of what we already knew and could contribute to the project rather than a truly collaborative attempt to learn something new and as a result, there was little acquisition of new knowledge and skills. This is a constant problem with group work in graduate school. Usually the project is rushed, as this one was, and then groups divide the work based on strengths. Unless there is sufficient time to process information and build on it in a group project, very little real learning happens because everyone contributes based on previous experience. Allowing at least two weeks for this project would be minimum.
Twelve of the course participants expressed either or both of these
concerns in their reflective writings: feeling rushed, and/or division of labor based
on prior knowledge. The instructor did not require participants to complete
module seven activities because comments from many of the course participants
indicated that prior course activities required more time for participants to
complete than he and his instructional design team had allowed for. The instructor
and the course support team realized that they would have to eliminate some
planned activities to avoid overstressing course participants. The instructor
modified the course, utilizing the responsive formative evaluation process to
address the issues and concerns of course participants while the course was
ongoing, and he plans to adjust the objectives and activities of the course for the
next offering to allow course participants to complete module seven. Further
research into the ways that course participants divide work among themselves
may shed light on how we can modify the objectives and structure of collaborative activities to encourage participants to explore new roles and to
develop new skills.
The course Webcasts gave the participants time to present their
collaborative projects and the opportunity to “voice” their course-related issues and concerns with the instructor and course support-team members. The generative process of ongoing formative evaluation and course re-visioning also included weekly telephone conferences with the course instructor, support team, and me. These conference calls facilitated communication between the instructor and the course support team as they worked together to assist course participants in developing high-performance collaborative knowledge-building
teams.
During the weekly conference calls, the instructor, course support team,
and I discussed the strengths and weaknesses of the course, made decisions about
course modifications, and prepared the agendas for monthly Webcasts. The
weekly feedback from the course support team helped the instructor to adjust and
improve the course for participants during the first semester of Web-based
implementation, and provided course improvement data for the second
implementation of the Web-delivered course, which I will describe and discuss in
a future research report.
B. Learning Outcomes: Evaluation Criteria
Course participants express positive feelings about the course experience. The course provides many opportunities for participants to achieve individual, cooperative, and collaborative learning goals.
Learning Outcomes Evaluation
Many course participants expressed positive feelings about the course in
their reflective writings, postings to Café CSCL, and anonymously on the post-
course survey. Post-course survey respondents ranked the course overall as
“Good” on a Likert scale that ranged from Poor to Excellent. The on-campus students ranked the course slightly higher than did the off-campus students [see Figure 4: CSCL course general evaluation; the rating scale ran 1 = Poor, 2 = Below Average, 3 = Average, 4 = Good, 5 = Excellent].
Course participants’ comments on the post-course survey indicate that they learned about the theoretical background of CSCL and team roles, and that they successfully developed learning teams,
I learned a lot about online collaboration and how this work approach can make one more of a constructivist.
Collaboration with people in general was incredibly useful. I do not think it made a difference to me what their disciplines were, just that we were all brainstorming together.
I really enjoyed the class and feel I learned a great deal about collaborative learning...I had such a wonderful suite that I felt we would stick together and work through any difficulties we might encounter.
Course participants’ comments in Café CSCL threaded discussion
messages and reflective writings indicate that the course provided many
opportunities for participants to achieve course goals, individual learning goals,
and cooperative and collaborative learning goals. Katie, an instructional
technology consultant for a regional educational service center, is an off-campus
participant who hopes to use what she learns about collaborative online learning,
web-based inquiry, and online projects to enhance the staff development sessions
she conducts. Katie commented about the course,
It was a bit of a whirlwind and a bit like trial by fire, but my initiation into the world of CSCL was a success. I learned to be explicitly clear…to give others 24 hours to reply to e-mail because they may not check e-mail as often as I do, and I learned to manage the information overload problem when collecting research data.
Chris is an on-campus course participant who is seeking a graduate degree
in library science. Chris is married and has a nine-month-old baby boy. She
is working part-time as a library assistant. Chris had this to say about the course,
By participating in a computer supported collaborative writing process I learned a great deal about the necessity of clear concise and timely communication among suite members…Collaborative writing truly does utilize the strengths of group members.
Tracy, as I mentioned earlier, is an off-campus participant working as a
first-grade teacher. Tracy enjoys reading, gardening, and spending time with her family. She is earning a graduate degree in early childhood education while working full-time, and she had this to say about the development of her suite
learning team,
I found that through time and chat discussions, the team became more cohesive and provided support to other members when necessary…At different times during the process of writing this paper, we relied on each other’s talents in order to produce the best work possible.
Jim is an off-campus course participant and former United States Air
Force instructor who is currently working full time for the U.S. Department of
Defense as a civilian instructor and curriculum developer while he earns a
graduate degree in educational technology. Jim made this comment about the
collaboration among participants in the course,
I learned that getting everyone on the same page could be difficult. Chats and assignments have different meanings to everyone…I learned that if one person is not available or pulling their weight, it can be rough going for the rest.
My observations indicate that course participants experienced positive
cognitive and social growth during the semester. The instructor’s assignment of
an “A” course grade for each course participant indicates that he was satisfied that all of the course participants had achieved the goals and objectives of the course as well as their individual goals and objectives. Participants’ reflective writings
serve as a record of each participant’s social and cognitive growth in the course.
Milan, an off-campus course participant, has worked as a computer specialist for the past nine years and before that as a high school biology and chemistry teacher. Milan made this statement in his course reflection written after his
completion of module three,
I realized that I don’t learn as much content information working this way as I do working alone. I can read what others have written and understand it, but I don’t really know it thoroughly, or care about it the way I would if I had organized, written, and revised my own text. There’s not much of a sense of accomplishment or mastery, just relief it’s over and gratitude toward everyone who helped the group succeed.
Milan’s reflective writing, after the fifth course module, indicates that at
this time, near the end of the course, he was no longer just relieved when a course
computer-supported collaborative learning activity ended. By then Milan had moved on to searching for ways, in collaborative knowledge building, to “do it better”:
Groups should have or get more working knowledge of various cooperative/collaborative configurations that might be appropriate in a Webquest, and use a composing document that is accessible and stable.
Many of the course participants felt troubled by publicly evaluating their
peers’ contributions to the group work and processes. Scott, an on-campus course
participant, is currently a full-time international student from Korea. He is
working on a graduate degree in adult and organizational learning and described
his feelings about the course peer evaluations in this Café CSCL threaded
discussion message,
In a true office setting, people collaborate constantly but they do not evaluate each other after every task. How awful would that be? How could we work with each other if we knew each of our co-workers was judging our efforts and would turn in an evaluation to the supervisor as a basis for measuring our worth? What kind of relationships could we have with our co-workers in this kind of atmosphere?
Jaye replied to Scott’s message in the Café CSCL threaded discussion
folder,
My sentiments exactly. This framework beats the collaborative purpose and goal either way you look at it, evaluating teammates works against
team building. I think I might be more comfortable with team grading (grade for the team as opposed to individuals on the team) by Dr. Resta.
The conflicts about the peer reviews occurred when the course participants
logged into the course database to examine their peer evaluations, after module
three, and found the identities of their peer reviewers revealed. This release of reviewer identity was not the original intent of the course instructor; it resulted from a miscommunication between the instructional design team and the course database programmer. When the instructor discovered that reviewers’ identities were being divulged, he raised the issue, discussed the peer-assessment process with participants, and gave them the chance to change the peer evaluation process.
The instructor used the October 13, 2000, course newsletter to clarify
misunderstandings about the assessment of peers, and the October 25th newsletter
to set the stage for the October 30th Webcast. The instructor outlined three peer-review issues for the voting process in the October 25 course newsletter:
(1) To keep the peer evaluations the same with identities of all parties
revealed
(2) To use only the comment section of the peer-review forms, or both the points and comments sections
(3) To determine who would have access to the peer-review information: only the instructor; the instructor and the subject of the peer review; or the instructor and all course participants
During this Webcast, the instructor gave participants the option of
changing the peer evaluation process by a majority vote, and Sam, a professional software engineer, doctoral student, and course database programmer, related how peer and software-quality reviews are institutionalized work-world practices that by necessity reveal reviewers’ identities. Results of the course
participants’ majority vote: twenty-three participants voted to use the form with
both points and comments sections, and nine participants voted to use only the
comments sections of the peer review forms. Twenty-three participants voted for
the instructor and the peer-reviewed student to have access to the review and nine
students wanted only the instructor to view the comments. The concerns and issues that surfaced in relation to course peer-assessment practices generated meaningful dialog between course participants and the instructor. The negotiation process and majority vote encouraged learner choice and involvement. While the chain of events that led up to the conflict was not planned, the negotiation process proved so successful that the instructor has incorporated it into his future online course offerings. Post-course survey respondents expressed positive feelings about the course when asked to respond to the following survey items (see Figure 5),
Do you agree with the following statements?
11. Participating in the course was very important to me.
12. Social contacts with fellow participants were very important to me.
13. The course lived up to my expectations.
14. I am very satisfied with the course.
15. I have learned a lot in the course.
16. I enjoyed the course.
17. The course was very stimulating to me.
18. My cooperative/collaborative skills have improved.
19. I would like to participate in other online courses.
20. By participating in computer-supported collaborative learning activities, I developed new perspectives on learning.
21. Collaborating with participants from other fields was useful.
(Scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Undecided, 4 = Agree, 5 = Strongly Agree)
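Although the original analysis reports responses to these items only as verbal tallies, a weighted mean is one compact way to summarize a five-point Likert distribution. The short Python sketch below is illustrative only and is not part of the evaluation itself; the tally it uses is the Webcast-satisfaction count reported later in this section (see Appendix H: Item 107).

    # Illustrative sketch (not part of the original analysis): summarizing a
    # five-point Likert tally with a weighted mean.

    def likert_mean(tally: dict[int, int]) -> float:
        """Weighted mean of a Likert tally mapping scale value -> response count."""
        total_responses = sum(tally.values())
        weighted_sum = sum(value * count for value, count in tally.items())
        return weighted_sum / total_responses

    # Webcast satisfaction (Appendix H: Item 107): 4 strongly agreed, 11 agreed,
    # 3 were undecided, 1 disagreed, 2 strongly disagreed (n = 21).
    webcast_satisfaction = {5: 4, 4: 11, 3: 3, 2: 1, 1: 2}
    print(round(likert_mean(webcast_satisfaction), 2))  # 3.67, leaning toward "Agree"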
One anonymous survey respondent commented,
I strongly agree with all of the above statements; although this class did present several challenges, both technological as well as human, I found them to be quite similar to real life challenges I have experienced at work in collaborating on projects with a global team. One of the only drawbacks was that I did not have the opportunity to stretch my knowledge in the area of using web tools to develop collaborative learning environments due to time constraints.
Another respondent commented,
I believe that online courses are getting more-and-more important and used. I will definitely take other online courses even though I can think of a few things to improve this course.
Summary: Learning Objectives and Outcomes
The course materials stated the course learning goals and objectives and
clearly defined the procedures and requirements. The course activities and
materials provided many resources and opportunities for participants to achieve
individual, cooperative, and collaborative learning goals. The course instructor
and the course learning materials organized learning activities around these goals
and objectives and assessed learner progress in relation to these goals and the course participants’ individual and collaborative goals. Many of the course
participants expressed positive feelings about the course experience.
ANALYSIS: COURSE TOOLS, TECHNOLOGY, AND LEARNER SUPPORT
Data utilized in analyzing the course tools, technology, and learner support
included: my observations, reflective writings of course participants, Café CSCL
threaded discussion messages, and the post-course survey. Results from these
four data sources indicate that the Web-delivered CSCL course met all of the
criteria for this standard:
Standard 3: Course Tools, Technology, and Learner Support
Adequate tools and technical infrastructure supported the course participants and their achievement of course and personal goals.
A. Tools: Evaluation Criteria
The course structure, instructor, and support teams allow consistent and equal opportunities for course participants to utilize a variety of tools, systems, software, and hardware.
Tools: Evaluation
Course participants’ anonymous comments on the post-course survey
indicate a level of satisfaction with the course tools. One wrote,
I feel I learned a great deal about collaborating with a virtual team including how to communicate effectively using electronic tools and to adapt to differing schedules and technological experience and access.
The other survey respondent wrote,
The different tools used during the course were very beneficial to my learning and work-related activities. I valued the chance to use a variety of methods when participating in online learning.
Descriptions of three key course tools and analysis of collected data
related to these tools follow.
Course Communication Client
The instructor utilized FirstClass groupware as the primary course
communication environment. The FirstClass groupware client software for either IBM-compatible or Macintosh computers was a free download for course participants, with the instructor’s local college bearing the cost of the needed infrastructure: a full-time technician, hardware, software, backup systems, and other incidental costs for this groupware.
The instructor used the FirstClass platform for all of the course
communications except course Web pages and newsletters, which he published in
WebCT groupware for the on-campus course participants and VirtualU
groupware for off-campus participants. The course participants did not need to
download software to access WebCT and VirtualU groupware as these platforms
utilize either of the popular Internet browsers. The use of three different groupware platforms was necessitated by the different groupware provisions available at the two enrolling institutions and by the instructor’s preference for the communication features of FirstClass groupware, which include text-based e-mail, voice messaging, chat, threaded discussions, document sharing, calendaring, and Web-page hosting.
Two different post-course survey items asked participants to evaluate
features of the FirstClass environment. Eight survey respondents rated the
FirstClass Conferencing area as “Excellent,” ten respondents rated the
conferencing as “Good,” and three respondents rated the area as “Average” (see
Appendix H: Survey Item 92). Eleven of the respondents rated the FirstClass
Chat features as “Excellent,” nine rated the features as “Good,” and one
participant rated the features as “Average” (see Appendix H: Item 92).
Annie, the off-campus participant who works full-time as a school district
technology coordinator, is “looking forward to learning more about distance
learning and innovative ways to teach the students in our fast growing district. As
always, I hope to learn from our diverse group, a variety of experiences in
technology in education.” She reflected on using FirstClass collaborative
documents for the “topic paper,” after her team had completed module three
activities, “This project has given me the opportunity to participate in a valuable
asynchronous tool, which allows many people to work collaboratively even
though they are at different locations.” Several of the post-course survey
respondents made anonymous comments about the course tools and technologies,
The course could be improved by providing all of the course lesson information within one courseware environment so you would not have to continuously "flip" between instructions in one courseware’s Web pages, and the communication activities taking place in the other courseware environment.
Teach people more about the course tools at the beginning of the course. Some people did not know how to join the class chat even on the last day of class!
I feel I learned a great deal about collaborating with a virtual team including how to communicate effectively using electronic tools and adapting to differing schedules and technological experience and access.
Newsletters
The course instructor and support teams published eleven newsletters
during course implementation. The post-course survey asked respondents to rate
their satisfaction with the CSCL Newsletter. Seven of the respondents strongly
agreed that they were satisfied with the newsletters as an element of support in the
course, ten agreed, one was undecided, one disagreed, and two strongly disagreed
(see Appendix H: Item 103). One survey respondent commented about the course
newsletter, “The newsletter was helpful.” The instructor “advertised” and
published the agendas for the course monthly one-way Internet audio-video
broadcasts, or Webcasts, via the course newsletters. The newsletters served an important course function by providing the latest course news, assignment clarifications, course modifications, and Webcast agendas.
Webcasts
The course Webcasts, one-way audio-video broadcasts conducted live at
the instructor’s home campus and simultaneously broadcast online, involved the
on-campus course participants, instructor, and course support teams in face-to-
face meetings on campus, and the off-campus course participants via telephone
and Web-based chat conversations. Getting access to Internet connected
computers and making time for course activities such as the Webcasts was a
challenge for some of the course participants. Chris, who is studying library and
information science, and many of the off-campus learners who participated in the
Webcasts via Web-based chat sessions, expressed dissatisfaction with the
resources available to the “chatters” for “voicing” their concerns and issues
during Webcasts. They had two choices. They could either make a long-distance
telephone call to the instructor during the Webcast or type text-based chat
messages.
The instructor and course participants did not see and hear the “chatters”
as often or completely as they did the on-campus participants who attended and
presented during the local face-to-face course Webcasts. Several of the on-
campus participants shared the Webcast broadcast information with friends and
family members who could “tune in” via the Internet to see and hear them online.
The off-campus students could view the Webcasts but they did not have the
luxury of being both “seen” and “heard.” However, several remote participants
seemed to enjoy telephoning the instructor with their questions and comments and
being “heard” this way during course Webcasts.
I chose to attend course Webcasts remotely, through my Internet
connected computer, in order to experience what the off-campus participants
experienced during course Webcasts. I quickly discovered that both the chat and
broadcast going on simultaneously during course Webcasts created a situation
where I could keep up with the chat or the broadcast, but not both at the same
time. The quality of the Webcast video, audio, or both was poor through my home dial-up modem connection. However, I had no problems viewing the Webcasts at the local campus or via my home cable modem connection. Local course support-team members served as chat instructors, interacting with the text-based chat participants, or chatters, on a computer in the Webcast broadcast location, while projecting the chat messages on a projection screen that was visible during course Webcasts.
Chatters’ messages were impossible for me to keep up with during course Webcasts, and I observed that the local course participants who presented during the face-to-face Webcasts did not have time to read the chat messages while they were participating in the Webcasts. I became aware of the off-campus course participants’ concerns and issues related to course Webcasts through subsequent analysis of chat messages, re-viewing of the course Webcast videotapes, and course participants’ responses on the post-course survey. Jack made this comment about the course Webcasts in a Café CSCL threaded discussion message,
The poor quality is probably the fault of current technology and limited bandwidth. I went to a bigger and faster [computer] at a copy place and the audio and video was great…I think a lot of the problems have to do with older computers. I am optimistic that online courses will only get better. I still
feel excited when the Webcast happens - like I've gotten to be in class with all of you and it feels productive. I have no complaints as long as I can hear what is being said.
The post-course survey asked participants “On the basis of your
experience in this course, how would you evaluate the Webcast as a learning
activity?” (see Appendix H: Items 114-123). Most of the respondents found the
Webcast to be “somewhat” enjoyable, stimulating, exciting, pleasant, interesting,
easy, and satisfying. However, some of the survey respondents did not think that the Webcasts were efficient or clear.
A post-course survey item asked respondents to rank how satisfied they
were with the Webcasts as an element of support in the course (see Appendix H: Item 107). Four survey respondents strongly agreed that they were satisfied with the Webcasts, eleven agreed, three were undecided, one disagreed, and two strongly disagreed. The off-campus participants who “attended” the Webcasts through text-based chat sessions seemed to be least satisfied with the Webcasts,
as these three “chatters” voiced anonymously on the post-course survey,
During our suite's presentation, we were not given a chance to comment. During the presentation, there was a lot of superfluous conversation going on by fellow chatters and my suitemate's comments as well as mine were ignored…In addition, Dr. Resta never gave the chatters from our suite a chance to comment (from what I could hear anyway).
I think the instructor should also ask other chatters to refrain from conversation during another group's presentation. It is not that I had anything groundbreaking to say, but I did have ideas that I would have liked to discuss. An important aspect of any class is to allow free and constant exchange of ideas...I felt that this was not possible during last night's Webcast.
The Webcasts were hard to participate in online. What bothered me is when people chatted when the instructor was talking or a presentation was taking place…I found that the second to last Webcast (November) was
much better at giving the online participants a chance to voice their opinions and concerns. As the people in the class have an opportunity to discuss coursework, the same opportunity must be given to those in chat.
The chat instructors worked diligently to voice the concerns and issues of the chatters during the Webcasts and became more skilled at doing so over the course of the semester, as the last comment indicates. However, I observed frustration on the part of both the chat instructors and chat participants when they were not able to understand, keep up with, or have input into all of the Webcast activities. The instructor and instructional design team are searching for ways to modify the course Webcasts for the next course offering so that the issues and concerns of all course participants can be “heard.”
B. Technology: Evaluation Criteria
The course materials, instructor, and support teams provide sufficient, easy-to-understand instructions for participants on how to use the course technological resources, and encourage them to use and explore both provided and new technologies to accomplish course, individual, and team goals.
Technology Evaluation
The course syllabus spelled out minimum and optimal technical requirements for PC and Macintosh computers, detailed connectivity requirements, and provided instructions for the course tools. The instructions on how to use course technological resources and tools were sufficient and easy for some of the course participants to understand. One item on the post-course survey asked the participants if the course technical specifications were easy to understand (see Appendix G: Item 36). Four survey respondents “strongly agreed” that the course technical specifications were easy for them to understand, eleven “agreed,” three were “undecided,” and two “disagreed.” Nine survey respondents “strongly agreed” that they were satisfied with the technical assistance they received in the course, five “agreed,” five were “undecided,” and two “disagreed” (see Appendix H: Item 106).
Courseware Technical Support
I observed that the course communication platform, FirstClass, has a
responsive local network administrator who chats with or e-mails course
participants about their technical questions and requests. The FirstClass network
administrator read and responded to participant’s postings about technical aspects
of FirstClass in the informal threaded discussion folder, Café CSCL. The post-
course survey asked respondents if the instructions for the FirstClass environment
were easy to understand (see Appendix H: Item 38). Seven survey respondents
strongly agreed that the instructions for FirstClass were easy to understand, seven
agreed, three were undecided, and four disagreed. The survey respondents’ ratings of the FirstClass conference-area instructions indicate that some of the respondents did not find the instructions for the FirstClass groupware simple to understand; however, the survey item did not distinguish between the embedded FirstClass instructions provided by the software company and those of the network administrator. Staff of the two component institutions provided either WebCT groupware or VirtualU courseware support. I did not collect data related specifically to the participants’ perceptions of these two groupware platforms on the post-course survey, and the respondents indicated that they used the FirstClass environment more often than the other two course tools (see Appendix H: Items 9-11).
C. Learner Support: Evaluation Criteria
The course instructor and e-Sherpas provide learner support and feedback that is ample and timely, enhances instruction, and motivates participants.
Learner Support Evaluation
Course Instructor
The course instructor and support teams enhanced instruction and motivated participants with feedback; however, course participants did not always view this feedback as ample or timely. Rene posted this comment in the Café,
I read a message posted by Tracy on the 15th and I would have to agree with her. I also feel as though I might be missing something somewhere along the way, so my paranoia forces me to double check other mail, announcements, the Café, etc. Is there some way we can get FEEDBACK on what we've submitted so that we'll know if we're up to par?
Sandy, who works as a faculty member at a local community college,
posted this comment in the Café,
I have read so many repeated messages almost begging for help in clearing up some confusion or problem. It would be so much nicer to GET more timely, consistent feedback and assistance to lessen the levels of techno-stress and frustration.
Many of the post-course survey respondents expressed satisfaction with
instructor mentoring and assistance (see Appendix H: Item 104). Four strongly
agreed that they were satisfied with instructor mentoring and assistance, ten
agreed, one was undecided, one disagreed, and two strongly disagreed. One of the
survey respondents commented about the instructor mentoring and assistance,
“On an individual basis, this was nonexistent, but I never asked for individual
help so maybe this was my fault.”
E-Sherpas
Obstacles to successful online collaborative learning, such as technical problems, getting lost, and isolation from the course (Graham, Scarborough, & Goodwin, 1999; Wegerif, 1998), are compounded by the fact that it is almost impossible for online instructors to read all of the messages generated by course participants on a daily basis. Providing immediate feedback or guidance for every problem that occurs during an online course is often not humanly possible for one instructor; however, online support teams can help instructors overcome obstacles to online learning. Post-course survey respondents had varied
opinions about the roles of the course e-Sherpas,
I would like to see a more facilitative role for Sherpas, in terms of helping in the initial team building at the start of the course.
I think they should be more active and participative.
I expected the Sherpa to be more of a mentor and resource for guiding us through projects and clarifying instructions.
Some of the post-course survey respondents viewed the course e-Sherpas
as helpers,
Our Sherpa offered encouragement to our suite members.
The Sherpa helped us to get started (become familiar) with the course, how it was set up, and what we would be doing.
Other post-course survey respondents did not find the e-Sherpas to be all
that helpful,
Our Sherpa disappeared in the middle of the process, and didn’t give us much feedback when she was in.
My Sherpa didn’t help me that much, but Judy, the Sherpa of another suite, was very helpful in answering questions for people.
E-Sherpas for the CSCL course served as mediators between the instructor and students and worked with the instructor to clarify their roles and functions as e-Sherpas in relation to the course participants via the Sherpa Logs (see Appendix E: Sherpa Log), e-mail, and weekly telephone conferences. They served as encouragers, utilizing available resources to help course participants understand assignments and deadlines, and they also helped course participants to understand their respective roles and responsibilities in the course. Todd, a public school administrator and doctoral student in educational administration, was the
e-Sherpa assigned to the suite that named their team WebCity. Todd detailed his
understanding of the roles and functions of the course e-Sherpas in the Sherpa
Log dated October 9, 2000 (see Appendix E):
Counting heads: make sure all members are participating;
Clarifying directions: clear any confusion for assignment or other matters;
Guiding operational and instruction process: walk along with members and make sure a smooth learning process;
Increase on-line comfort: ease out the tense or stress followed by technical obstacles or course work difficulties;
Dealing with problems: give suggestions to solutions of problems members raised;
Help confidence building: give members a hand by positive reinforcement and encouragement;
Checking progress: monitor the progress through team participation and involvement, ensure the group is on-task, and focused;
As a group cheerleader or a therapist: sympathetic, supportive, and encouraging;
Inspire in-depth discussions: questioning or facilitating in-depth, meaningful discussions.
The e-Sherpas monitored course participants’ collaboration, progress on assignments, and feelings about the course, sharing suggestions for modifications of course materials and the results of their observations with the instructor through the regular telephone conferences and e-mail. However, many of the post-course survey respondents indicated confusion or misunderstandings about the role of the course e-Sherpas,
If they are supposed to be facilitating more information needs to be shared as to their role and what to expect from them.
Sherpas should provide feedback to the suites as to their progress in the course. In other words, tell us what our grades are on the projects, etc.
The role of the Sherpa should be more facilitative, supportive and clearly defined. The suite members should have a better idea of what the Sherpa is there for.
The course e-Sherpas helped some of the course participants solve their
technical problems and clarified instructions for assignments. One post-course
survey respondent commented,
Our Sherpa took care of technical glitches and clarified certain assignment instructions.
One of the e-Sherpas, Todd, noticed that his assigned suite, WebCity,
really missed his online presence when he was not around,
I was out of pocket a couple of days last week and I can really feel the effect in my group. The "Where's Todd?" comment appeared. I guess that is a good thing - they seemed to need my feedback - however short it may have been.
The results of the analysis of the e-Sherpa role suggest that the concept
and use of a non-authoritative help person, the e-Sherpa, can be an effective and
scalable support system for online learning teams (Resta, Lee, & Williams, 2001).
However, the majority of course participants had nothing in their prior traditional
academic experiences that enabled them to immediately understand the non-authoritative support-person or e-Sherpa role. At first, the course participants expected the e-Sherpas to direct their learning teams or to help their teams to complete the course work. The course participants’ shared misconceptions about the roles of the e-Sherpas helped the course instructor and the instructional design team realize that it is imperative, at the beginning of the course, to provide course participants with a description of the exact functions, roles, and responsibilities of the e-Sherpa in relation to their virtual teams.
Summary: Course Tools, Technology, and Learner Support
The course allowed consistent and equal opportunities for course
participants to utilize a variety of tools, systems, software, and hardware. The
course instructor provided learner support to help the course participants understand their respective roles and responsibilities in the course, as well as the instructor’s and support-team members’ roles and functions. The materials, instructor, and support teams provided instructions on how to use the technological resources, and encouraged the use of the provided technologies and the exploration of new technologies to accomplish individual, team, and course goals. However, at certain times, these instructions were insufficient or hard for participants to understand. It would be helpful for the instructor to reiterate and further develop course participants’ understanding of the e-Sherpa role and
continue to provide ongoing training and support for the e-Sherpas as they
perform the important and complex tasks that their roles demand.
ANALYSIS: PEDAGOGICAL STRATEGIES
Data utilized in analyzing the course pedagogical strategies included: my
observations, reflective writings of course participants, Café CSCL threaded
discussion messages, and the post-course survey. Results from these four data
sources indicate that the Web-delivered CSCL course met the criteria for this
standard:
Standard 4: Pedagogical Strategies
The pedagogical strategies utilized by the instructor and support team facilitate critical thinking, cooperation, collaboration, and the real-life application of course skills.
A. Collaborative Strategies and Activities: Evaluation Criteria
Course pedagogical strategies encourage course participants to: formulate and modify hypotheses; apply what they are learning outside of the course environment; and take into account the time needed to complete course activities and to collaborate on knowledge-building processes and products.
Collaborative Strategies and Activities: Evaluation
The course pedagogical elements encouraged participants to formulate and
modify hypotheses and to respect alternative viewpoints and ways of organizing
knowledge-building processes and products. Bess is an off-campus participant
who has two college-age sons. She works full time with a school district in her
area as a "district technology helping teacher" assigned to six different elementary
schools. Bess presently wants to gain information that she “can share with other teachers that will apply to their students and classrooms in the area of technology.” Bess indicated that she understood alternative viewpoints and ways
of organizing collaborative activities,
Because we have all worked together on all of the projects during this class, I was excited about another group project. I was interested in see[ing] how the organization and communication would work with so many people…Taking the course completely online gives me a sense of being able to work with anyone.
Bart has focused on “planning and operations improvement across the enterprise” in his role as a medical consultant. His post-module six reflective writing indicates that Bart recognized, understood, and evaluated alternative viewpoints during collaborative coursework activities,
A major benefit of this larger collaborative project is that it certainly spreads the workload among more resources. Mathematically, you are able to get more work done in less time. I see two potential problem areas here. The communication and coordination work multiplies, and therefore you have more opportunity for miscommunication that can delay or derail the project.
The course pedagogical elements encouraged participants to apply what
they learned in the course in new and different ways, outside of the course
environment. One anonymous post-course survey respondent indicated that s/he
was immediately able to use some of what s/he was learning and practicing in the
course,
I particularly enjoyed the MOO and Webquest units. In fact, I am in the process of developing a Webquest workshop for teachers at my school, which I intend to teach them through the Tapped-In Moo, which I was introduced to in this course. This is extremely exciting because I plan for them to "meet" me without them leaving their classroom.
Another survey respondent stated, “I really enjoyed the course.” This
respondent also extended and applied what s/he learned in the course, outside of
the course environment, “I sent all of my suite mates Christmas cards expressing
how much I enjoyed working with them this semester.” Participants were
encouraged to take into account the time needed to design instructional activities, participate in collaborative knowledge-building processes, and generate collaborative knowledge products, which serve as relics of these activities and
processes. Chris is “quite interested in exploring how virtual environments can
and do help people share information and ideas from across wide distances.” Her
post-module three reflective writing indicated that she learned about the intensive
demands of collaborative work during the first major course assignment,
The amount of time it takes to make relatively simple decisions is greatly lengthened by necessary agreement among suite members. One must be patient and willing to discuss both mundane and theoretical issues in detail and never to discourage such conversation.
Sandy, a community college instructor, works in a learning lab doing “hands-on, in the trenches work with students using computers as part of their coursework.” Her post-module five reflective writing provides insight into
Sandy’s understandings related to the intensive time demands of collaborative
online learning activities:
I think the assignment failed as a collaborative exercise for two reasons. First, was the lack of time to develop the assignment…In writing this type of assignment it is important to understand that messaging and replies require additional time. What could be accomplished in a few hours during face-to-face interaction requires much longer online. Even the chat feature needs time to set up and often not everyone in the suite is available at the same time.
B. Cooperation in Groups: Evaluation Criteria
The course pedagogical strategies facilitate the communication of ideas, feelings, and experiences; respect for others; and cooperation among course participants.
Cooperation in Groups: Evaluation
The course pedagogical strategies facilitated respect for others, cooperation, and collaboration. Assessment of course participants’ contributions to group work was a key pedagogical element that set the stage for cooperation and collaboration among course participants, and the instructor’s utilization of peer and product assessments did serve to enhance that cooperation and collaboration.
The course syllabus detailed the course grading policy and the fact that the
instructor would base participants’ course grades on a combination of scores from their individual input and their collaborative scores on different course tasks, utilizing the following 0-300 point scale:

Grade   Points
A       260-300
B       225-259
C       180-224
D       150-179
F       149 or less
Individual input was worth 86 points out of the 300 possible points. The
individual input score consisted of the instructor’s assessment of each course
participant’s portfolio, in which the participants included excerpts from their best contributions to online discussions; specific product contributions to the team projects; and reflections, which were the individual course participants’ reflective writings about their work and project processes. The group/collaborative work was worth a total of 214 points for course participants and 226 points for course participants who performed the PLT roles during module six. The instructor required all course participants to assess their fellow course participants’ contributions on course modules three through six. Dr. Resta derived each course participant’s peer evaluation score from the average of the peer evaluation scores that their teammates gave them. By utilizing peer evaluation, the instructor and his
instructional design team placed emphasis on the key goals of the course: cooperation and collaboration among course participants. The peer evaluation scores indicate that course participants varied in their assessment of fellow participants’ cooperation and contributions to collaborative team and knowledge-building activities. However, most of the
course participants evaluated each other’s contributions in a positive light. One
course participant’s anonymous comment on the post-course survey indicates
satisfaction with cooperation and collaboration in the course, “The collaborative
projects were very worthwhile and gave us a chance to experience the process.”
Milan’s post-module three reflective writing explains what he learned about collaborative work,
I learned that working in a collaborative group has the advantage of dividing the burden of work. I was sick with the flu for the first half of the project…if I had been responsible for the entire paper, I wouldn’t have had the mental or physical resources to do it on time. In the group, I could manage to do [my part], and when another member had to be away, we covered for her too.
Summary: Pedagogical Strategies
Data utilized in analyzing the course pedagogical strategies included
course documents, records, and my observations. Results from these data sources indicate that the pedagogical strategies utilized in the course facilitated
respect for others, encouraged cooperation, collaboration, and involved
participants in applying what they learned in the classroom to their work and
personal environments.
Summary and Implications
SUMMARY
I utilized themes and events of Stake’s Responsive Evaluation Model to
conduct an exploratory value-oriented, qualitative, formative evaluation of a
Web-delivered, graduate-level computer supported collaborative learning course.
Stake’s responsive model offered: use of naturalistic, qualitative methods;
emergent and flexible evaluation design, methodology and implementation; and a
process-oriented approach aimed at understanding how the course participants
viewed the strengths and weaknesses of the course.
My observations and analysis of course-related documents and records were frequent, sustained, and long-term, and included a recursive review of course
participants’ assignments, group projects, course grades, Webcast video
broadcasts, chat transcripts, and threaded discussion messages. The responsive
approach allowed me to closely interact with the key stakeholders of this
evaluation throughout the entire formative-implementation evaluation process,
and led me to observations about the strengths and weaknesses of the course,
which follow:
Course Strengths
The Web-based format of the course met the distance learning needs of the
course participants. The course goals, objectives, expectations, activities, and assessment methods provided flexible opportunities for interaction and participation, and encouraged multiple outcomes among course participants. The organization of course learning materials facilitated the achievement of course goals and objectives and provided course participants with many tools to help them communicate with the instructor, course-support team members, and each other,
as they worked together collaboratively building “high-performance” learning
teams.
During this formative evaluation many course participants showed
evidence of applying what they had learned in the course, outside the course, in
their own educational or work environments, and many expressed positive
feelings about the course experience. The generative and ongoing formative
evaluation process, which the instructional design team embedded in the Web-
delivered course structure, afforded course participant input. The course
participants eagerly offered their suggestions for improving the course while it
was ongoing, and post-course, on the survey, which indicates that if we ask course
and program participants for their input, they can lend valuable insight into
improving educational courses and programs for them.
The course materials clearly stated the course learning goals and
objectives and provided many resources and opportunities for participants to achieve individual, cooperative, and collaborative learning goals. The Web-delivered course allowed consistent and equal opportunities for course participants to utilize a variety of tools, systems, software, and hardware. Ample
learner support was provided for course technological resources and the course
instructor and support teams helped course participants to understand their
respective roles in the course. The course pedagogical strategies encouraged
participants to formulate and modify hypotheses and course participants learned
to take into account the time needed for participation in collaborative knowledge-
building activities.
The course pedagogical elements facilitated respect for others and
cooperation and collaboration among course participants, the course instructor
and course support teams. The course instructor’s participation in ongoing
training and weekly communications with course support-team members was a
successful support and training strategy and may be integrated into future online
courses.
The peer and product assessment process served to enhance cooperation among course participants, and the collaborative resolution of course participants’ problems turned out to be an unplanned but successful course activity involving participants and the instructor in collaborative negotiations. The negotiation process for peer and product assessment, which the instructor and the course participants collaboratively developed during course implementation, was so successful that it will be utilized in the instructor’s future online courses.
Course Weaknesses
The organization and presentation of course learning materials and/or activities were not always clear to all course participants. However, course participants were able to resolve questions and problems with course materials quickly by using the many course communication tools, such as chats, e-mail, telephone, voice messages, threaded discussions, and Webcasts. Modifications to future course materials will include reorganization based on participant suggestions to assist them in following the scope and sequence of
course activities and assignments.
Bannan and Milheim (1997) describe the difficult issues involved in
designing and defining “effective” Web-delivered learning environments,
The World Wide Web is becoming a major source for educational material delivered to learners who prefer (or are required) to learn apart from a traditional classroom. While the educational potential of this medium is just beginning to be realized, its utilization will certainly increase over time as larger numbers of educators and learners see the significant value in this type of instruction. However, while there is tremendous potential for this type of learning, there is also a significant need to describe these Web-based courses in terms of their overall instructional design characteristics, rather than defining each course only by the specific content it provides. Without this organizational process, courses will be perceived and categorized based primarily on their subject material, rather than the instructional strategies and tactics used for the delivery of the educational material (p. 381).
The course participants’ shared misconceptions about the course instructional support strategy, which involved utilizing course-support team members to assist the instructor, helped the course instructor and instructional
design team to realize the importance of providing course participants with a
description of the exact functions, roles, and responsibilities of the course
instructor, and course support team members, in relation to their collaborative
knowledge-building teams.
Some of the off-campus participants could not view the Webcasts, and they did not have the luxury of being both “seen” and “heard” during these one-way video broadcasts. The instructor and instructional design team are searching for ways to modify the course Webcasts for the next course offering so that the issues and concerns of all course participants can be “heard.” Two-way audio-video technologies are becoming more affordable and effective. In the future, it may be possible to incorporate remote participants’ input through personal Internet audio/video cameras, once the bandwidth, access, and accessibility issues that impact Web-delivered learning environments are resolved.
Unresolved Issues and Concerns
A few course participants expressed, at various times, that they felt overwhelmed by course activities, assignments, roles, or tools, and some of the course participants never got comfortable with the independence and/or learner control that the Web-delivered CSCL constructivist-learning environment afforded. Future research could lend insight into these issues and seek to identify course strengths and weaknesses in relation to the issues and concerns
of program stakeholders.
IMPLICATIONS
The lessons to be learned from the findings of this formative evaluation of the Web-delivered CSCL course are decisions that are the responsibility of the reader. However, the findings may suggest possible considerations in other distance-learning contexts. Documentation of the common
themes that emerged from the value-oriented formative evaluation of the CSCL
Web-delivered course may provide useful insights for those who develop or
instruct Web-delivered courses. I do not intend for the findings of this research to
be generalizable to other situations. However, the emergent themes may provide
direction “for the investigation of others” (Erlandson et al., 1993, p. 45).
This research demonstrates that educators can design, improve, adjust, and
modify a Web-delivered course to meet the needs of course participants while the
course is ongoing. This internal and generative process of ongoing formative evaluation, utilizing themes of Stake’s Responsive Model for the evaluation of the Web-delivered CSCL course, may be applicable to other emergent educational courses and programs. James Duderstadt explained,
Our world is in the midst of a social transition into a post-industrial society as our economy has shifted from material- and labor-intensive products and processes to knowledge-intensive products and services. A radically new system for creating wealth has evolved that depends upon the creation and application of new knowledge. We are at the dawn of an age of knowledge in which the key strategic resource necessary for prosperity has become knowledge itself, that is, educated people and their ideas…Unlike natural resources such as iron and oil that have driven earlier economic transformations, knowledge is inexhaustible. The more it is used, the more it multiplies and expands. But knowledge is not available to all. It can be absorbed and applied only by the educated mind. Hence, as our society becomes ever more knowledge-intensive, it becomes ever more dependent upon those social institutions, such as the university, that create knowledge, educate people, and provide them with knowledge and learning resources (Duderstadt, 2000, p. 145).
Contemporary state and federal educational program evaluations are often
intimately tied to social institutions or politicians that “sponsor their own evaluations to generate evidence favoring their cause” (Borg & Gall, 1996, p. 681). I believe that evaluation researchers need to redirect the contemporary focus of state and federal educational evaluations away from objectivist methodologies and judgments, with their associated reliance on high-stakes testing programs, which, in the words of James Popham,
“…are doing serious educational harm to children. Because of unsound high-stakes testing programs, many students are receiving educational experiences that are far less effective than they would have been if such programs had never been born” (2001, p. 1).
Data regarding instruction delivered primarily over the Internet remain scarce. However, a constructivist, recursive, responsive formative evaluation approach such as the one described in this research can be used to monitor student characteristics, and the results of such research may help course designers and online instructors determine what produces favorable outcomes. Politicians certainly do not know more about the business of education than professional educators do.
Professional educators in both face-to-face and distance-learning environments can learn more about student assessment and course and program evaluation in order to become proficient at assessing their students’ learning and evaluating their own learning environments. Educators can utilize a generative and formative process such as the one described in this research to improve their own educational programs, or they can continue to allow politicians to test and judge with instruments that leave students washed back and forth on the waves of political reform and educational change. I believe that, in a small way, this research has demonstrated that educators can stake out and claim educational evaluation as their own territory. James Popham derides what he describes as “scoreboard-induced motivation”:
“Now, in 2001, there’s no question that a score-boosting sweepstakes has enveloped the nation. Who has been tasked with boosting students’ test scores? Teachers and administrators, of course. … U.S. educators have been thrown into a score-boosting game they cannot win. More accurately, the score-boosting game cannot be won without doing educational damage to the children in our public schools” (2001, p. 12).
Professional educational evaluators can help instructors and administrators learn how to conduct evaluation research that focuses on improving their own educational programs for the students they serve. There is no "one best way" to
go about the work of creating an online course—or “one best way” for outsiders
to help administrators and instructors create and improve their face-to-face
courses. Rudyard Kipling clearly spelled out the beauty and “effectiveness” of
human variance in this line from his poem, In the Neolithic Age, "There are nine
and sixty ways of constructing tribal lays, and every single one of them is right."
One “right” way to “construct” constructivist educational course and program evaluation is for educational administrators and instructors to evaluate their own courses and programs utilizing embedded “alternative” constructivist assessment inputs, such as those utilized in this exploratory formative evaluation: informal threaded discussion messages, course participants’ reflections, audio/video broadcasts, telephone conversations, e-mail, and chat sessions. One “right” way to help them learn to do this self-evaluation is to ask them questions such as: What learning goals and objectives do you have for your students? How will you know when they have achieved these learning goals and objectives? Another way to help them is to ask: What in your present program or course is and isn’t working as well as you’d like? What advice or information do you need to “fix” this problem?
Some educators are strangers within their own gates when it comes to the assessment of student learning and the evaluation of courses and programs. We must step forward and take responsibility. Like Popham,
I do not believe America’s educators are the guiltless victims of an evil imposed by wrong-thinking policy makers. I think the education profession itself is fundamentally at fault. We allowed students’ test scores to become the indicator of our effectiveness. We failed to halt the profound misuse of standardized achievement tests to judge our educational quality. We let this happen to ourselves. And more grievously, we let it happen to the children we are supposed to be educating. Shame on us. (2001, pp. 12-13)
There are many different methods and strategies available for evaluating
the courses and programs that we develop or implement, and as Kipling would
say, “every single one of them is right.”
Future Research
Future research could seek to:

• Further identify, describe, and evaluate indicators of “high performance” among members of collaborative groups and organizations.
• Shed light on the extreme variance among course participants in relation to weekly time spent on course-related activities.
• Help us understand the ways that course participants divide cooperative and collaborative work among themselves.
• Shed light on how course and program designers can modify the objectives and structure of collaborative activities to encourage participants to explore new roles and to develop new skills.
• Further understand collaborative team member roles.
• Find strategies and methods for providing ongoing training and support for support-team members as they perform the important and complex tasks that their roles demand.
• Paint thick, rich descriptions of individual course participants’ perceptions of course activities, materials, tools, technologies, pedagogical strategies, and learner support through interviews and in-depth case studies.
• Examine future offerings of the Web-delivered CSCL course to determine what impact, if any, the generative formative implementation evaluation process embedded in the course structure has on future course structure, activities, support services, and student-assessment practices.
Limitations of the Study
Major weaknesses of the responsive approach, which I selected as the basis for this formative evaluation, as noted by Stufflebeam (2001), include:

Vulnerability regarding external credibility, since people in the local setting, in effect, [have] considerable control over the evaluation of their work. Similarly, evaluators working so closely with stakeholders may lose their independent perspectives. The approach is not very amenable to reporting clear findings in time to meet decision or accountability deadlines (p. 71).
Patton (1990) states, “there are no perfect research designs” (p. 162). He discusses limitations or “trade-offs” (Patton, 1990, p. 165) that are applicable to this research as well: “These tradeoffs are necessitated by limited resources, limited time, and limits on the human ability to grasp the complex nature of social reality” (p. 162). During this research, the instructor and media coordinator did exert considerable control over the evaluation of their work, including but not limited to access to course records and documents. I utilized peer debriefing,
field notes, and reflexive journaling to minimize the impact of these controls on
this research. A second “trade-off” during this research was that the human instrument, the researcher, was constrained by personal and time limitations.
A third trade-off related to this research is what Patton calls the “breadth versus depth” trade-off (1990, p. 165). This trade-off involves deciding whether it is more advantageous to study one or a few questions in great depth or to study many questions in less depth (Patton, 1990). I focused on breadth, rather than depth, with two broad formative evaluation questions: What are the stakeholders’ perceptions of course strengths? What are their perceptions of course weaknesses?
Patton describes a trade-off of choosing between quantitative methods, which “require the use of a standardized approach so that the experiences of people are limited to certain predetermined response categories” (1990, p. 165), and qualitative methods, which facilitate in-depth research on selected issues. The formative evaluation of the course utilized both qualitative and quantitative data sources, but due to time constraints and the desire to explore the course experience of course participants, this research focused on qualitative methodology.
“The breadth versus depth trade-off is applicable not only in comparing quantitative and qualitative methods; the same trade-off applies within qualitative methods” (Patton, 1990, p. 165). One trade-off in qualitative methodology applied to this study: “Qualitative methods used in evaluation studies typically consist of three kinds of data collection: in-depth, open-ended interviews, direct observation, and written documents” (Patton, 1990, p. 10). The course instructor and design team determined that interviewing course participants during course implementation would distract them and possibly interfere with their attainment of course learning goals. Due to this, and to time and resource limitations, interviewing was not utilized as a research strategy during the formative evaluation of the graduate-level Web-delivered CSCL course.
Appendix A: Pseudonyms
COURSE PARTICIPANTS

ID # | PSEUDONYM | NAME | SUITE | OFFICE | SHERPA | PLT
01 | Marie (F) | Leslie Bailey | S1 | 1 | S1_S | X
02 | Rene (F) | Kristin Scott | S1 | 1 | S1_S |
03 | Scott (F) | Julia Ruggeri | S1 | 2 | S1_S |
04 | Becky (F) | Anjana Singhal | S1 | 2 | S1_S | X
05 | Tina (F) | Edmara Cavalcanti | S1 | 3 | S1_S |
06 | Megan (F) | Raquel Brown | S1 | 3 | S1_S |
07 | Annie (F) | Karie Lawrence | S2 | 1 | S2_S |
08 | Sandy (M) | Sang-Seub Lee | S2 | 1 | S2_S |
09 | Milan (F) | Phyllis Miller | S2 | 1 | S2_S |
10 | Susie (F) | Natalia Hernandez | S2 | 2 | S2_S |
11 | Chris (F) | Anne Hoeksema | S2 | 2 | S2_S | X
12 | Michelle (F) | Joy Jefferies | S2 | 3 | S2_S |
13 | Tracy (F) | Hilee Kelm | S2 | 3 | S2_S |
14 | Karen (F) | Carmen Gomez | S3 | 1 | S3_S |
15 | Laurie (F) | Candi Rathbone | S3 | 1 | S3_S |
16 | Bess (F) | Rena Andrus | S3 | 2 | S3_S |
17 | Jaye (F) | Madhuri Kumar | S3 | 2 | S3_S | X
18 | Louise (F) | Juliet Cadenhead (Kate) | S3 | 3 | S3_S |
19 | Jack (M) | Jeffrey Getchell | S3 | 3 | S3_S |
20 | Katie (F) | Nancy Donaldson | S4 | 1 | S4_S | X
21 | Laura (F) | Jane Flores | S4 | 1 | S4_S |
22 | Richard (M) | David Johnston | S4 | 2 | S4_S |
23 | Bridgett (F) | Temi Rose | S4 | 2 | S4_S |
24 | Diane (F) | Yu-Lu Hsiung | S4 | 3 | S4_S |
25 | Lou (F) | Leann Walker | S4 | 3 | S4_S |
26 | Rita (F) | Kathryn Lee | S5 | 1 | S4_S |
27 | Donna (F) | Karon Tarver | S5 | 1 | S5_S |
28 | Theresa (F) | Jennifer Drumm | S5 | 2 | S5_S |
29 | Susan (F) | Jung-Min Ko | S5 | 2 | S5_S |
30 | Bart (M) | Robert Skaggs | S5 | 2 | S5_S |
31 | Jim (M) | Guadalupe Briseno | S5 | 3 | S5_S |
32 | Mary Lee (F) | Theresa Jones | S5 | 3 | S5_S | X

QUALITATIVE SOFTWARE DOCUMENT HEADERS (COURSE PARTICIPANTS): ID Number_Suite Number_Office Number. EXAMPLE: 01_S1_1
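For readers who wish to replicate this document-naming scheme in their own qualitative analysis software, a minimal sketch follows (in Python; the helper names are hypothetical and were not part of the original study) showing how such headers could be composed and parsed:

```python
# Minimal sketch (hypothetical helper names): composing and parsing the
# qualitative-software document headers described above, which follow the
# pattern <ID Number>_<Suite Number>_<Office Number>, e.g., "01_S1_1".

def make_header(participant_id: int, suite: int, office: int) -> str:
    """Compose a document header such as '01_S1_1'."""
    return f"{participant_id:02d}_S{suite}_{office}"

def parse_header(header: str) -> dict:
    """Split a header back into participant ID, suite, and office."""
    pid, suite, office = header.split("_")
    return {"id": int(pid), "suite": suite, "office": int(office)}

print(make_header(1, 1, 1))     # -> 01_S1_1
print(parse_header("01_S1_1"))  # -> {'id': 1, 'suite': 'S1', 'office': 1}
```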
COURSE INSTRUCTOR AND IMPLEMENTATION SUPPORT TEAM

CODE | PSEUDONYM | NAME | COURSE ROLE(S)
C_1 | Dr. Resta | Resta | Instructional Design (ID), Instructor
D_1 | Sam | Adams | Development, Database Programmer
S1_S | Todd | Knezek | e-Sherpa
S2_S | Janelle | Wang | ID, Development, e-Sherpa
S3_S | Jason | Lee | ID, Development, e-Sherpa
S4_S | Judy | Jackson | e-Sherpa
S5_S | Olive | Awalt | ID, Media Coordinator, e-Sherpa

COURSE SUITES

ID # | PSEUDONYM | NAME | # OF OFFICES | # OF COURSE PARTICIPANTS
S1 | WebCity | Symphony | 3 | 6 (0 male)
S2 | Web Wonders | Visionaries | 3 | 7 (1 male)
S3 | CollabCrew | SynchronCity | 3 | 6 (1 male)
S4 | InTune | Cyberia | 3 | 6 (1 male)
S5 | Harmony | Netizens | 3 | 7 (2 male)
Appendix B: Person as Instrument
Overview and Personal Philosophy
All of my present being is a reflection of my past thoughts and actions. I
truly believe that we define humanity through our thoughts and actions. I became
interested in philosophy while I was in high school. James Allen, a favorite writer
of mine, wrote,
Man as mind is subject to change. He is not something ‘made’ and finally completed, but has within him the capacity for progress. By the universal law of evolution, he has become what he is, and is becoming that which he will be. His being is modified by every thought that he thinks. Every experience affects his character. Every effort he makes changes his mentality. Herein is the secret of man’s degradation, and his power and salvation if he but utilizes this law of change in the right choice of thought. (1971, pp. 18-19)
I also believe that we humans are tossed and thrown with the waves and flow of the natural universe and have little that we can control in life. However, the control of our thoughts and actions is a dimension of the natural world where we exercise and experience some small measure of control and, subsequently, feelings of empowerment. In line with this notion, I believe that all research is autobiographical, and as a researcher, I am keenly aware that:
Much of theory-work begins with an effort to make sense of one’s experience and is initiated by an effort to resolve unresolved experiences. Here, the problem is not to validate what has been observed or to produce new observations, but rather to locate and interpret the meaning of what one has lived... Theory making, then, is often an effort to cope with a threat to something in which the theorist himself is deeply and personally implicated and which he holds dear (Gouldner, 1970, p. 484).
Gouldner, A. (1970). The coming crisis of western sociology. New York: Avon Books.
I expect honesty in my relations and interactions with fellow humans and
think that this is a reciprocal process. I am sharing my beliefs, values, attitudes,
experiences, dreams and expectations in an attempt to create a sense of openness,
trust and shared cognition.
Educational Values and Beliefs
Speaking from an eclectic set of educational values, I feel the essence of education is growth, mediated by reason and intuition. The goal of education is to provide a framework of knowledge and skills that helps foster uniqueness, individual growth, and personal and communal responsibility within the environment of life itself. I define the role of the “effective” teacher as one of guiding, instructing, and advising students. The qualities an effective teacher must possess include self-knowledge, self-discipline, and a love of truth and learning. The students’ own interests should largely determine what they learn, with the guidance of the teacher as a disciplinarian intent on fostering self-discipline; as a mediator between the adult world and the world of the child; and as a presenter of principles and values, encouraging thought and examination. I feel the preferred teaching method is project-oriented, with demonstrations, lectures, readings, discussions, and dialogue intermingled. Communication encompasses educational experience, and the exciting tool of online communication adds depth to the palette. I have seen the possibilities that utilizing technology affords educators and students.
Relevant Experience
My oldest daughter moved to high school in 1992, and I tagged along by transferring into what seemed to be a dream-come-true job as a high school Think-Write computer-lab teacher. Think-Write was a state project that aimed to improve language skills in marginal students. I inherited a large computer classroom that contained sixteen Apple IIes, and I found that I loved working with computers and students. When the Think-Write Project and ability grouping were phased out in 1992, I was ripe and excited to be assigned to design and run a computerized writing lab utilizing the 36 Apple IIes left from the remnants of the project at our high school. I figuratively struggled, kicked, pinched, and fought to design, implement, and improve instructional technology in El Paso Independent School District (EPISD), a Texas “Big Five” school district, and felt that I could go no further without learning more, so I went on to obtain a master’s degree in Educational Management and Development with an emphasis in educational technology.
During 1993, my full-time position as director of the Andress High School Writing Lab, plus two part-time jobs, one as a technology trainer for the district and the other as an Instructional Technology Advisement Team (ITAT) member, kept me busy. I took on a third part-time job as technical writer and editor for Coleman Research Corporation, working on the command computer support systems project, in order to meet college expenses for myself and my oldest daughter. I wanted to focus on the management of educational technology because I had found little support or appreciation for instructional technology in my school district. I wanted to change that and improve the situation for other teachers, and that is why I moved forward to earn my M.A. in Educational Management and Development.
I first connected to the Internet in 1994; however, using computers for educational purposes was not new to me. I had been fiddling with computer-assisted instruction (CAI) since 1983, first working as a learning facilitator for the Center for Learning Assistance at New Mexico State University while I was completing my bachelor’s degree in secondary education, and then, upon graduation, teaching middle school language arts in El Paso, TX, where I co-sponsored a middle school computer club during my first four years of teaching. During this time period, I also started the first computerized middle school newspaper in the district.
That first connection to the Internet came during my master’s work at New Mexico State University (NMSU) in 1994, and today I am still awed at the exploding technological changes and the personal self-efficacy that information power affords or denies individuals in our society. The class that introduced me to online communications at NMSU was Dr. Derlin’s Management Technology course. In that class, I entered a new and exciting world of Mosaic, electronic mail, TIN, news groups, LISTSERVs, the World Wide Web, SAS, and UNIX machines. Information technology services at NMSU set up a gateway that allowed me to telnet directly through the Texas Education Network (TENET) to NMSU without paying long-distance phone charges, which was crucial for a single mother working as a schoolteacher.
I was working two jobs to support my children while attending college, so I had little time for face-to-face socialization, but I found much personal support and positive communication online. Browsing around the Internet using Mosaic
became my favorite pastime. I was entranced. The world of self-directed
learning seemed to be finally open to me and within my grasp. This time was also
memorable to me because my oldest daughter left home to attend the University
of New Mexico in Albuquerque, NM. During her first year away from home we
spent a great deal of time in computer labs and so our adventures with online
communications began together. I spent time online that first year, not only
communicating with my daughter, but also with my sister, who was so excited after she came to visit me that she bought a computer so we could meet online.
When my former students found out that I had an e-mail address, I began to get e-
mail from many of them. I am thrilled today to still get updates and hear from my
former students and old high school pals through e-mail and chat. I realize that I
would not be hearing from them if not for these fast, easy, and inexpensive means
of electronic communication.
By the time I left El Paso and moved to Austin, I had built a successful computerized writing lab that, due to my efforts, had evolved into a multi-purpose computer lab serving the entire campus of 2,300+ students, but it had been a constant struggle. During those times I was busy working full time as director and lead teacher of the Andress High School Writing Lab. In addition, I was performing the duties of a computer technician and lab manager. I designed, installed, and administered a network file server, which engulfed me in a web of rapidly advancing technology with no training or support. I had worked amid a backdrop of resentment from many of my peers, all the while desperately fighting to hang on to the technology train while most of the teachers I was working with were fighting to jump off the same train. Many of my fellow teachers did not understand the need for a computer lab and/or believed that all students could do with computers was play games. Several resented the fact that I had obtained funding to hire teachers to work in the lab before and after school hours, and disapproved of the fact that my program involved giving high school English elective credit to students who served as teaching assistants in the computer lab. It all looked like just too much fun and too easy to many of them. I must admit I was having fun, but easy was far from the truth. However, in spite of the naysayers, many of my fellow educators began to hang out in the computer lab and, like me, became entranced with technology and its possibilities. A former fellow teacher completed a master’s degree in educational technology. She claims that her interactions in the computer lab motivated her to specialize in the area of educational technology.
My experiences as a doctoral student in Curriculum and Instruction’s Instructional Technology Program at the University of Texas (UT) at Austin, and my work as Director of the Instructional Media Lab for the Chemistry and Biochemistry Department at UT, have only added to my conviction that technology can move education and educators to new levels of intellectual and personal growth. During the spring of 1998, I improved my communication skills in the course Internet-Based Telecommunications, which the instructor conducted mainly online. In this course, I learned much about myself and about communicating with others. The quality and quantity of time spent online allowed the students in that class to know each other and the instructor in depth. The interpersonal knowledge that developed among students in this class created more empathy and interchange than occurred in most of my face-to-face courses. Many of the friendships formed online in that course moved to face-to-face friendships and have endured and are still growing.
Formative Evaluation Research Project
I expect to find in this research that prospective and practicing educators have had, and are having, positive interpersonal and intellectual growth during, and resulting from, participation in the online activities of this Web-delivered course. I feel very wide-eyed and open to learning from and in this new online medium of interchange, but I also understand that online Web-delivered learning environments are not right for all learners. I hope that the outcome of this project will encourage other educators to explore the many options that online collaborative knowledge-building activities can afford. I am willing to discover what unfolds.
I will conduct this research from a postmodern perspective and a constructivist paradigm in an attempt to define and understand the complex social issues involved in delivering and participating in Web-delivered higher education environments. The ontology of this research is relativism and the locally and specifically constructed realities of the key stakeholders of this formative evaluation. I selected a subjectivist epistemology and interpretivist methods for this research because the needs, feelings, and interpersonal relationships that develop amid online activities and communications are well suited to the multiple perspectives, paradigms, strategies, and methods of qualitative research. This selected perspective, paradigm, and set of strategies and methods are well suited to the purpose of this research, which is focused on the understanding and reconstruction of the stakeholders’ perspectives related to the strengths and weaknesses of the online course.
Appendix C: CSCL Course Model
Course Overview
This overview model of the CSCL course presents, from left to right: the core course competencies in light blue; the core course components for each module (modules 1-7) in yellow; the objectives for each module in green; and the assignments for each module in tan. The original overview was printed on large-format poster paper for legibility. The overview is depicted here to provide an overall sense of the modular structure and flow of the course. In the pages that follow, a legible representation of each of the seven modules is provided.
CSCL Evaluation Matrix: Student Perceptions & Group Culture (continued)

Data collection methods: Case Studies (CS) | Document Review (DR) | Focus Groups (FG) | Interviews (I) | Observations (O) | Surveys/Questionnaires/Checklists (S)

Student Perceptions and Attitudes Toward the Course

Evaluation Question | CS | DR | FG | I | O | S
What are the student attitudes toward the course and their perceptions of the online discussions and collaborative decision-making built into the course? | X | Reflections, Webcasts | X | | X | End-of-Course Survey

Course Effectiveness in Establishing a Group Culture for Collaborative Learning

Evaluation Question | CS | DR | FG | I | O | S
How did members of the class within each project group work together? | X | Reflections, Webcasts | X | | X | End-of-Course Survey
How did the group culture affect group collaboration, dynamics, and processes in different modules and projects of the course? | X | Reflections, Webcasts | | | X |
How did the group culture affect members’ communication and decision-making? | X | Reflections, Webcasts | | | X |
How do students perceive the online group culture and collaboration compared to face-to-face interactions? | | Reflections, Webcasts | X | | X | Post-course survey
CSCL Evaluation Matrix: Pedagogical Elements

Data collection methods: Case Studies (CS) | Document Review (DR) | Focus Groups (FG) | Interviews (I) | Observations (O) | Surveys/Questionnaires/Checklists (S)

Effectiveness of Pedagogical Elements of the Course

Evaluation Question | CS | DR | FG | I | O | S
How effective are the pedagogical strategies in this online course in promoting collaborative learning communities? | X | Reflections, Webcasts | X | | X | Post-course survey
What are the learning experiences of the students related to the collaborative strategies used in the course? | X | Reflections, Webcasts | X | | X | Post-course survey

The Use of Online Mentors (called e-Sherpas) to Support Students in the Course

Evaluation Question | CS | DR | FG | I | O | S
What were the strategies, activities, and roles of the e-Sherpas in supporting students in the course? | X | Reflections, Webcasts | X | | X | Post-course survey
How do students perceive the e-Sherpas’ role? | X | Reflections, Webcasts | X | | X | Post-course survey
What strategies do students think were effective and helpful for their learning? | X | Reflections, Webcasts | X | | X | Post-course survey
CSCL Evaluation Matrix: Leadership

Data collection methods: Case Studies (CS) | Document Review (DR) | Focus Groups (FG) | Interviews (I) | Observations (O) | Surveys/Questionnaires/Checklists (S)

Effectiveness of Course in Developing Leadership in Learning Teams

Evaluation Question | CS | DR | FG | I | O | S
Is the course effective in building leadership within the online learning teams? | X | Reflections, Webcasts | X | | X | Post-course survey
Do early leaders maintain leadership roles throughout the course in formal or informal ways? | X | Reflections, Webcasts | X | | X | Post-course survey
Do the leaders maintain a consistent leadership style? If so, what kind? | X | Reflections, Webcasts | X | | X | Post-course survey
How do leaders encourage participation? | | Reflections, Webcasts | X | | X | Post-course survey
Are the online leadership acts within the course comparable to real-time, in-class leadership acts? | X | Reflections, Webcasts | X | | X | Post-course survey
CSCL Evaluation Matrix: Hardware & Software

Data collection methods: Case Studies (CS) | Document Review (DR) | Focus Groups (FG) | Interviews (I) | Observations (O) | Surveys/Questionnaires/Checklists (S)

Hardware and Software Tools Utilized in the Course

Evaluation Question | CS | DR | FG | I | O | S
What are the perceptions of students related to technological variations within the course? | X | Reflections, Webcasts | X | | X | Post-course survey
What are the hardware and software matrices within the course that best support meaningful collaborative learning? | X | Reflections, Webcasts | X | | X | Post-course survey
What are the technology factors or problems that have adversely affected online collaborative learning in the course? | X | Reflections, Webcasts | X | | X | Post-course survey
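For analyses that tally which data sources address which evaluation questions, a matrix like the ones above can be represented as a simple mapping. The sketch below (Python; an illustration only, with one row drawn from the hardware and software matrix, not software from the original study) shows one possible structure:

```python
# Illustrative sketch only: one row of the CSCL evaluation matrix expressed
# as a mapping from an evaluation question to its marked data collection
# methods. The original matrix was a planning table, not software.
from collections import Counter

evaluation_matrix = {
    "What are the technology factors or problems that have adversely "
    "affected online collaborative learning in the course?": [
        "case studies",
        "document review (reflections, webcasts)",
        "observations",
        "post-course survey",
    ],
}

# Tally how often each method is marked across all questions in the matrix.
method_counts = Counter(
    method for methods in evaluation_matrix.values() for method in methods
)
print(method_counts.most_common())
```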
Appendix E: Sherpa Log
Sherpa Teleconference Meeting Nov. 6, 2000, 1:00 p.m. CALL 471-5099
Issues to discuss...
Progress Report Overall: The feedback from them about how things are functioning at this site is good overall. Suite two is moving along, and members’ participation is good. The group’s working relationship is getting better day by day. Higher cohesiveness is seen in this group over time.
Issues/Confusions
Some of our members faced difficulties when they were at the Jointplanning.com site. I also faced that. Error messages were found when jumping from page to page. A member from another suite also asked me about it when I was online. It was better later on. Some technical setbacks for them.
Quote: “They were having difficulty in Joining.com and were getting an
error message also. So it sounds like we were not the only one.”
Course revision:
A member mentioned that it would have been helpful if they had been clear about their roles at the beginning of the semester; they would have worked better.
Even though the overview of the whole course is built into the modules, many people worked through the projects in a linear way (whatever comes up, they work on it). They did not preview and get the overall picture and understanding, so confusion showed over time.
A member suggested that the tools could be introduced early in the semester and that each group could pick one theme and become the expert on that subject.
One member said that their individual interests were not fully fulfilled. There was too much pressure from the outset, and they didn’t have time to build any relationships. One member said that it does not help “doing fake assignments.”
Problems/Conflicts/Solutions
Members within the suite are sharing and inviting each other to chat and ask questions. They help each other solve problems. The site has some problems sometimes, but not all the time, so some people got through. Also, there are some navigation drawbacks that created some confusion. Overall, the problems are solved.
Successes/Examples
The Web Wonders worked well on the WebQuest project. I joined a few of their chats while they were working together on it. I think they worked quite well together. The leadership was defined, and group members supported and reminded each other even though they were not leaders. They did not cross the border but gave suggestions and offered opinions.
Sherpa Journal
People want to get things done for the semester and are starting to worry about how to collaborate with each other on projects over the holidays.
Personal research project: (Questions I would like other sherpas to respond to, support, or give ideas about.)
Sherpa meeting notes this week.
Think about the agenda for the next Webcast session.
Clarify procedures about Module 6 to the group when necessary.
Consider instructional design for revision of the course.
Vote results will be posted.
Next steps/actions
Sherpa roles... Questions were asked: What can/cannot a sherpa do for us? Specifics should be defined early in the semester. How often should a sherpa be present in our group meetings and chats?
1. Counting heads: make sure all members are participating.
2. Clarifying directions: clear up any confusion about assignments or other matters.
I have to admit that I am also a learner on the jointplanning.com site, and I couldn’t answer their questions when they were at the site. I felt embarrassed when I had to face them in the chat. I had not been trained ahead of them. Trying to figure out every module myself is also a challenge.
3. Guiding the operational and instructional process: walk along with members and ensure a smooth learning process.
4. Increasing online comfort: ease the tension or stress that follows technical obstacles or course-work difficulties.
5. Dealing with problems: give suggestions for solutions to problems that members raise.
6. Helping build confidence: give members a hand through positive reinforcement and encouragement.
7. Checking progress: monitor progress through team participation and involvement; ensure the group is on-task and focused.
8. Acting as a group cheerleader or therapist: be sympathetic, supportive, and encouraging.
Appendix F: Sample Field Notes
*Meeting 2/4/2000
*Purpose of meeting: To discuss Joanne’s Directed Research Project, “An Ethnographic Participant-Observer Case Study of the Design, Development, and Implementation, with a Focus on the Evaluation, of a Graduate-Level Online CSCL Course.” During this meeting the group decided it would be appropriate to audio record this and future meetings in order to capture a thick and rich description of the instructional design, development, implementation, and evaluation cycle of moving a face-to-face course online. Dr. Resta and Olive had a meeting prior to this one, on February 3rd, and brainstormed some ideas for the course. Olive agreed to write a summary of the information from that meeting for the group, from notes on the dry-erase board in Dr. Resta’s office, and to record any meetings that Joanne was unable to attend.
Meeting summary (from audio tape), 2/4/00:
*Olive brought up the fact that there was a need to come up with an authentic metaphor for the CSCL course, such as had been done in the Planning and Management course with the Mustang ISD metaphor, since all the activities and assignments would hinge on the metaphor.
*Dr. Resta said a problem with that might be the fact that the Planning and Management course was better structured than the CSCL course, which made an umbrella or overriding metaphor more difficult.
*Olive then stressed the need to consider the students in the course as stakeholders. The primary audience for the course is educational technology students who are participating in the online M.Ed. degree program off-campus and are primarily educators working in the field. The second group of student stakeholders is the on-campus students, who are pursuing master’s or doctoral degrees and represent a more diverse group, including education and industry.
*Olive suggested the metaphor of a consulting firm and suggested the activity of teams answering RFPs and writing proposals using CSC.
*Joanne mentioned that she and Olive had discussed including a survey at the beginning of the course to gather demographic data about the course participants and to determine their entry skills related to CSCL. She also suggested that perhaps a module or modules to help students become more effective distance learners might be incorporated, and noted that one observation she made at the North American Web Conference in Canada was that more of this seems to be taking place in the online programs in Canada. The suggestion was made that perhaps students in the AISD course could design some modules. Reference: The Distance Learner’s Guide, 1999, ISBN 0-13-939513-X, Prentice Hall, Upper Saddle River, NJ 07458 (http://www.prenhall.com/dlguide).
*Dr. Resta stressed that it was important to give students the feeling of going into a virtual space, a comfortable environment, and said, “The closer we can come to making that environment a place, the more successful we will be in creating that environment.”
*Joanne discussed using Khan’s WBL framework and a blend of program evaluation models, such as Stufflebeam’s CIPP or Provus’s Discrepancy Evaluation Model (Dr. Borich suggested a blending of models). She pointed out that the April 1999 Institute for Higher Education Policy report prepared for the NEA and the American Federation of Teachers had identified gaps in distance education research, and these might be useful to look at for research on this course.
Two key points from this study:
1. The research does not include a theoretical or conceptual framework.
2. There is a lack of research about the different learning styles of students and how they relate to the use of specific technologies.
*Joanne suggested we might include some type of cognitive or learning style inventory at the beginning of the course.
*Olive said that she felt that the model was more aimed at the evaluation of the framework of a course and its context, and that it is not aimed at measuring whether learning is taking place.
*Dr. Resta said that models like the Discrepancy Evaluation Model are classic and are aimed more at broad program contexts. He then suggested that we should put Khan’s framework to the test by using it and providing him feedback on what we think works and doesn’t work.
Dr. Resta began with the need to document the process and opened with the first aspect of Khan’s model:
*The Pedagogical
1.1 Content (type and accuracy of course content): CSCL does not have a well-structured knowledge domain compared with math, for instance.
1.2 Goals/Objectives: How do we know that someone has done a high-quality collaborative learning activity?
*Olive suggested that we might use Khan’s framework as some of the criteria for evaluating the course framework and use our own criteria for learning activities. She then suggested that we might also use appropriate parts of Khan’s framework in evaluating activities.
*As Dr. Resta had another meeting, we adjourned. Olive made copies of the WBL chapter for everyone. Dr. Resta suggested we all go through the model taking notes so that we can provide Dr. Khan with feedback, since he has asked for that. He suggested that perhaps Irving could join us for our next meeting.
Meeting adjourned at 11:30 AM; the group agreed to meet again on 2/18/00 at 10:00 PM.
Appendix G: Sample Reflexive Journal
Reflexive Journal: CSCL Course 2000

Date: 8/2
Reasons, questions, reactions: Dr. Resta cancelled the meeting today, as he was called to California. I have volunteered to work as a team assistant for the course and to help prepare materials for training the team assistants. Nothing on that from Dr. Resta. I missed the debriefing meeting yesterday at Kerby Lane because of work obligations that needed immediate attention.
Decisions made/actions taken: Meeting rescheduled for 8/9 with Dr. Resta. Sent apology to peer group for missing the meeting; today we will discuss the upcoming implementation and the events and stages of responsive evaluation.

Date: 8/9
Reasons, questions, reactions: Met with Dr. Resta and Olive. I will help with the design of the rubrics for the students to evaluate each other and the rubrics to evaluate the projects that the groups work on in the course. Began chemical therapy for Hep C on Wednesday. I went to the doctor yesterday and had lost seven pounds overnight; he said I was setting a record for response to therapy.
Decisions made/actions taken: Work on design of rubrics for course modules. We will need to have all material up by the 18th for the other campus. We will correspond and work together via FirstClass to complete the materials.

Date: 8/15
Reasons, questions, reactions: Dr. Resta e-mailed changes on course materials and comments on the rubrics I have been working on. Module two did not have an intro and objectives, so he attached an html file with those; the title of that module has been changed to Building a Collaborative Learning Team. Module three did not have an intro and objectives, so he attached an html file with those also; he changed the title of module three to Strategies for Collaborative Writing. Module 4, WebQuest Collaboration: Dr. Resta suggested that I physically separate the leadership and team evaluations, because the evaluation “may seem a little daunting to the students” the way I had combined them on one form.
Decisions made/actions taken: Revise rubric for module five and work on rubrics for modules 6 & 7.
Appendix H: Post-Course Survey
The purpose of this post-course survey
We are asking you to help us revise and improve the course by sharing your experiences and opinions of the course you just completed. Please answer all of the questions carefully. This evaluation survey, along with course records, will help us to determine where the course is successful and where changes or improvements need to be made. It will take you about 20 minutes to fill out the questionnaire. Most of the questions can be answered by simply marking an answer. If you have any questions about this survey, please feel free to contact me: Dr. Resta
Use of the Course Tools
How often did you use the following communication or course tools? (Scale: 1 = Not at all … 5 = Very often)

Communication Tools | Mean | Std. Dev.
4. Chat | 4.476 | 0.649
5. Email | 4.810 | 0.327
6. Glossary | 2.143 | 0.871
7. Handbook | 2.952 | 0.916
8. Resources | 3.333 | 1.111

Course Tools | Mean | Std. Dev.
9. FirstClass | 5.000 | 0.000
10. VCampus | 3.524 | 1.397
11. WebCT | 3.211 | 1.357
Do you agree with the following statements? (Scale: 1 = Strongly disagree, 2 = Disagree, 3 = Undecided, 4 = Agree, 5 = Strongly agree)

Statement | Mean | Std. Dev.
14. Participating in the course was very important to me. | 4.524 | 0.544
15. Social contacts with fellow participants were very important to me. | 4.238 | 0.580
16. The course lived up to my expectations. | 3.714 | 0.789
17. I am very satisfied with the course. | 3.952 | 0.739
18. I have learned a lot in the course. | 4.286 | 0.816
19. I enjoyed the course. | 4.048 | 0.635
20. The course was very stimulating. | 4.333 | 0.635
Your Opinion of the Course Modules
How do you evaluate each of the course modules? (Scale: 1 = Very poor, 2 = Poor, 3 = Average, 4 = Good, 5 = Excellent)

Module | Mean | Std. Dev.
26. Module 1 - Syllabus | 3.810 | 0.653
27. Module 2 - Building a Collaborative Team | 3.905 | 0.603
28. Module 3 - Strategies for Collaborative Writing | 4.143 | 0.653
29. Module 4 - Synchronous Online Collaborative Learning (The MOO) | 3.619 | 0.853
30. Module 5 - Collaborative Web-based Inquiry Skills: Developing a WebQuest | 3.952 | 0.921
31. Module 6 - Collaborative Planning and Implementation of a Larger Knowledge-building Project | 3.857 | 0.857
32. Comments:

*The complete 153-item post-course HTML survey is available upon request.
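The means and standard deviations reported in the tables above can be reproduced from raw Likert responses in a few lines; below is a minimal Python sketch (the response values are invented for illustration and are not actual course data):

```python
# Minimal sketch: computing the mean and sample standard deviation of one
# 5-point Likert survey item, as reported in the tables above.
# The responses below are invented for illustration; they are not course data.
from statistics import mean, stdev

responses = [5, 4, 5, 4, 3, 5, 4]  # hypothetical ratings for a single item

print(f"Mean: {mean(responses):.3f}")        # arithmetic mean
print(f"Std. Dev.: {stdev(responses):.3f}")  # sample standard deviation (n - 1)
```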
References
ACE College Credit Recommendation Service. (2001). Distance learning evaluation guide. Washington, DC: American Council on Education.
Ary, D., Jacobs, L.C., & Razavieh, A. (1996). Introduction to research in education. Orlando, FL: Harcourt Brace College Publishers.
Bachman, H.F. (1995). The on-line classroom for adult learners: An examination of teaching style and gender equity. Doctoral dissertation, Virginia Polytechnic State University.
Bannan, B., & Milheim, W. (1997). Existing Web-based instruction courses and their design. In B. H. Khan (Ed.), Web-based instruction (pp. 381-387). Englewood Cliffs, NJ: Educational Technology Publications.
Barron, A.E. (1999). The Internet: Ideas, activities, resources. Retrieved November 12, 2001, from http://fcit.usf.edu/internet/default.htm
Barron, A. E. (1998). Designing Web-based training. British Journal of Educational Technology, 29(4), 1-16.
Bednar, A., Cunningham, D., Duffy, T., & Perry, J. (1992). Theory into practice: How do we link? In Duffy, T., & Jonassen, D., (Eds.), Constructivism and the technology of instruction: A conversation (pp. 17-34). Hillsdale, NJ: Lawrence Erlbaum.
Belanger, F. & Jordan, D. (2000). Evaluation and implementation of distance learning: Technologies, tools and techniques. Hershey, PA: Idea Group Publishing.
Berman, L. (1986). Perception, paradox, and passion: Curriculum for community. Theory Into Practice, 25(1), 41-45.
Borich, G. (2000). Program evaluation: models and techniques. A bound volume of course materials for EDP Evaluation Models and Techniques, University of Texas, Austin.
Borich, G., & Jemelka, R. (1981). Definitions of program evaluation and their relation to instructional design. In Borich, G. (2000). Program evaluation: models and techniques (pp. 35-43). University of Texas, Austin.
Brewer, J., & Hunter, A. (1989). Multimethod research: A synthesis of styles. Newbury Park, CA: Sage.
Brown, J.S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18, 32-42.
Carr, J.F. & Harris, D. E. (2001). Succeeding with standards: linking curriculum, assessment, and action planning. Alexandria, VA: Association for Supervision and Curriculum Development.
Chute, A., Thompson, M., & Hancock, B. (1999). The McGraw-Hill handbook of distance learning: An implementation guide for trainers and human resources professionals. New York: McGraw-Hill.
Cini, M., & Bilic, B. (1999). On-line teaching: Moving from risk to challenge. Syllabus, 12(10), 38-40.
Cisco. (2001). E-learning. Retrieved November 12, 2001, from http://www.cisco.com/warp/public/10/wwtraining/elearning/
Clark, H.H., & Clark, E.V. (1977). Psychology & language. San Diego, CA: Harcourt Brace Jovanovich.
Connick, G. (Ed.). (1999). The distance learner's guide. Upper Saddle River, NJ: Prentice Hall.
Cooper, P.A. (1993). Paradigm shifts in designed instruction: From behaviorism to cognitivism to constructivism. Educational Technology 33 (5) 12-19.
Crossman, D. (1997). The evolution of the World Wide Web as an emerging instructional technology tool. In B. Khan (Ed.) Web-based instruction (pp. 19-24). Englewood Cliffs, NJ: Educational Technology Publications.
Cunningham, D. (1992). Assessing constructions and constructing assessments: A dialogue. In Duffy, T., & Jonassen, D., (Eds.), Constructivism and the technology of instruction: A conversation (35-44). Hillsdale, NJ: Lawrence Erlbaum.
Cyrs, T.E. (Ed.). (1997). Teaching and learning at a distance: what it takes to effectively design, deliver, and evaluate programs. New Directions for Teaching and Learning, 71. San Francisco, CA: Jossey-Bass.
Dick, W. (1992). An instructional designer’s view of constructivism. In Duffy, T., & Jonassen, D., (Eds.), Constructivism and the technology of instruction: A conversation (91-98). Hillsdale, NJ: Lawrence Erlbaum.
Dick, W., & Carey, L (1985). The systematic design of instruction (2nd ed). Glenview, IL: Scott Foresman.
Dillenbourg, P., & Schneider, D. (1995). Collaborative learning and the Internet. Retrieved November 12, 2001, from http://tecfa.unige.ch/tecfa/research/CMC/colla/iccai95_1.html
Donmoyer, R. (1990). Curriculum evaluation and the negotiation of meaning. Language Arts, 67(3), 274-286.

Donmoyer, R. (1991). Postpositivist evaluation: Give me a for instance. Educational Administration Quarterly, 27(3), 265-296.

Driscoll, M. (1998). Web-based training: Using technology to design adult learning experiences. San Francisco, CA: Jossey-Bass Pfeiffer.
Duderstadt, J. (1998). Can colleges and universities survive in the information age? In Katz, R., (Ed.), Dancing with the devil: Information technology and the new competition in higher education (pp. 1-25). San Francisco, CA: Jossey Bass.
Duderstadt, J. (2000). Between a rock and a hard place: Leading the university during an era of rapid change. In Devlin, M., Larson, R., Meyerson, J., (Eds.), The Internet and the university (pp. 143-158). (EDUCAUSE PUB5005) Retrieved February 11, 2001, from http://www.educause.edu/internetforum/2000/
Duffy, T. & Jonassen, D. (1992). Constructivism: New implications for instructional technology. In Duffy, T., & Jonassen, D., (Eds.), Constructivism and the technology of instruction: A conversation (pp. 1-15). Hillsdale, NJ: Lawrence Erlbaum.
Edwards, B. (1987). Conceptual and methodological issues in evaluating emergent programs. Evaluation and Program Planning, 10, 27-33
Elliott, J. (1991). Changing contexts for educational evaluation: The challenge for methodology. Studies in Educational Evaluation 17, 215-238.
Emerson, R., Fretz, R., & Shaw, L. (1995). Writing ethnographic fieldnotes. Chicago: IL: University of Chicago Press.
English, F. (1988). Curriculum auditing. Lancaster, PA: Technomic.
Erlandson, D., Harris, E., Skipper, B., & Allen, S. (1993). Doing naturalistic inquiry: A guide to methods. Thousand Oaks, CA: Sage Publications.
Flagg, B.N. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Lawrence Erlbaum.
Gagné, E. D. (1985). The cognitive psychology of school learning. Boston, NY: Little Brown.
Gagné, R. M., & Glaser, R. (1987). Foundations in learning research. In R. M. Gagné (Ed.), Instructional technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum.
Gentry, C. G. (1994). Introduction to instructional development. Belmont, CA: Wadsworth.
Graham, M., & Scarborough, H. (1999). Implementing computer mediated communication in an undergraduate course: A practical experience. Journal of Asynchronous Learning Networks, 3(1), 32-45.
Guba, E.G. (1977). Educational evaluation: The state of the art. Keynote address at the annual meeting of the Evaluation Network, St. Louis, MO.
Guba, E.G., Lincoln, Y.S. (1981). Effective evaluation: improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco, CA: Jossey-Bass.
Guba, E.G., Lincoln, Y.S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.
Guba, E.G. & Lincoln, Y. (2000). Evaluation models: viewpoints on educational and human services evaluation. In Stufflebeam, Madaus, Kellaghan (Eds.). Evaluation models: Viewpoints on educational and human services evaluation (2nd ed., pp. 363-381). Boston, MA: Kluwer.
Hall, B. (1997). Web-based training cookbook. New York, NY: John Wiley & Sons.
Harasim, L., Hiltz, S.R., Teles, L., & Turoff, M. (1997) Learning networks: A field guide to teaching and learning online. Cambridge, MA: MIT Press.
Harris, J. (2002). Wherefore art thou, telecollaboration? Learning & Leading with Technology, 29 (6), 54-59.
Hasting, T. (1976). A portrayal of the changing evaluation scene. Keynote speech at the annual meeting of the Evaluation Network, St. Louis, MO.
Herman, J.L., Aschbacher, P.R., & Winters, L. (1992). A practical guide to alternative assessment. Alexandria, VA: Association for Supervision and Curriculum Development.
Hill, R. P. (1997). The future of business education’s functional areas. Marketing Educator, 16 (Winter).
Hoffman, B. (ed.). (1994-2001). The encyclopedia of educational technology. Retrieved November 12, 2001, from San Diego State University, Department of Educational Technology: http://coe.sdsu.edu/eet/
House, E.R. (Ed.). (1983). Assumptions underlying evaluation models. In G. F. Madaus, M. Scriven, & D.L. Stufflebeam (Eds.), Evaluation models. Boston, MA: Kluwer-Nijhoff.
Howe, K.R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher, 17, 10-16.
Institute for Higher Education Policy. (2000). Quality on the line: Benchmarks for success in internet-based distance education. Retrieved November 12, 2001, from http://www.ihep.com/Publications.php?parm=Pubs/PubLookup.php
Johnson, D.W., & Johnson, R. T. (1996). Cooperation and the use of technology. In D.H. Jonassen (Ed.). Handbook of research for educational communications and technology (pp. 1017-1044). New York: Simon and Schuster Macmillan.
Jonassen, D.H. (1992). Evaluating constructivist learning. In T. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction (pp. 137-148). Hillsdale, NJ: Lawrence Erlbaum.
Jonassen, D.H. (1996). Computers in the classroom: Mindtools for critical thinking. Englewood Cliffs, NJ: Prentice-Hall.
Joint Committee on Standards for Educational Evaluation (1981). Standards for evaluation of educational programs, projects and materials. New York: McGraw Hill.
Joint Committee on Standards for Educational Evaluation (1994). The program evaluation standards: How to assess evaluations of educational programs. Thousand Oaks, CA: Sage.
Keegan, D. (1990). Open learning: concepts and costs, successes and failures. In Atkinson, R. and McBeath, C. (Eds.). Open learning and new technology (pp. 230-243). Perth: ASET/Murdoch University.
Keegan, D. (1996). Foundations of distance education (3rd ed). London: Routledge.
Kemp, J. (2000). An interactive guidebook for designing education in the 21st century. Bloomington, IN: TECHNOS Press.
Khan, B. (Ed.). (1997). Web-based instruction. Englewood Cliffs, NJ: Educational Technology Publications.
Khan, B. (Ed.). (2001). Web-based training. Englewood Cliffs, NJ: Educational Technology Publications.
Koschmann, T. (1996). CSCL, theory and practice of an emerging paradigm. Mahwah, NJ: Lawrence Erlbaum Associates.
Lebow, D. (1993). Constructivist values for instructional systems design: Five principles toward a new mindset. Educational Technology Research and Development, 41(3): 4-16.
Lincoln, Y.S., Guba, E.G. (1985). Naturalistic Inquiry. Newbury Park, CA: Sage.
Madaus, G., & Kellaghan, T. (1992). Curriculum evaluation and assessment. In P. Jackson (Ed.), Handbook of research on curriculum (pp. 119-154). New York: Macmillan.
Madaus, G., Scriven, M. & Stufflebeam, D. (1983). Evaluation models: Viewpoints on educational and human service evaluation. Boston, MA: Kluwer-Nijhoff.
Marzano, R. (2000). Transforming classroom grading. Alexandria, VA: Association for Supervision and Curriculum Development.
McGreal, R. (1997). The Internet: A learning environment. In Cyrs, T. (Ed.), Teaching and learning at a distance: what it takes to effectively design, deliver, and evaluate programs, New Directions for Teaching and Learning, 71. (pp.67-74). San Francisco, CA: Jossey-Bass.
Moore, M., & Thompson, M., with Quigley, A., Clark, G., & Goff, G. (1990). The effects of distance learning: A summary of the literature (Research Monograph No. 2). University Park, PA: The Pennsylvania State University, American Center for the Study of Distance Education. (ERIC Document Reproduction Service No. ED 330 321).
Moore, M. G, & Kearsley, G. (1996). Distance education: A systems view. San Francisco, CA: Wadsworth Publishing Company.
Murphy, R., & Torrance, H. (Eds.). (1987). Evaluating education: Issues and methods. London, England: Harper & Row.
Nixon, J. (1990). Curriculum evaluation: Old and new paradigms. In N. Entwistle (Ed.), Handbook of educational ideas and practices (pp 637-647). New York: Routledge.
Noone, L.P., Swenson, C. (2001). Five dirty little secrets in higher education. EDUCAUSE Review 36, (6). Retrieved December 2, 2001, from http://www.educause.edu/ir/library/pdf/erm0161.pdf
Northwest Regional Educational Laboratory (2001). Program evaluation. Retrieved November 12, 2001, from http://www.nwrel.org/evaluation/index.shtml
Panitz, T. (1996). A definition of collaborative vs. cooperative learning. Retrieved November 12, 2001, from http://www.lgu.ac.uk/deliberations/collab.learning/panitz2.html
Patton, M. (1990). Qualitative evaluation and research methods. (2nd ed.). Newbury Park, CA: Sage.
Patton, M. (1997). Utilization focused evaluation. Thousand Oaks, CA: Sage.
Perkins, D.N. (1991). Technology meets constructivism: Do they make a marriage? Educational Technology, 13:18-23.
Phipps, R., & Merisotis, J. (1999). What's the difference? A review of contemporary research on the effectiveness of distance learning in higher education. The Institute for Higher Education Policy. Retrieved November 12, 2001, from http://www.ihep.com/Publications.php?parm=Pubs/PubLookup.php
Phipps, R., & Merisotis, J. (2000) Quality on the line. The Institute for Higher Education Policy. Retrieved November 12, 2001, from http://www.ihep.com/Publications.php?parm=Pubs/PubLookup.php
Pinar, W., Reynolds, W. M., Slattery, P., & Taubman, P. (1995). Understanding curriculum: An introduction to the study of historical and contemporary curriculum discourses. New York: Peter Lang.
Popham, J. (1975). Educational evaluation. Englewood Cliffs, NJ: Prentice Hall.
Popham, W. J. (2001). The truth about testing: An educator’s call to action. Alexandria, VA: ASCD.
Porter, L. (1997). Creating the virtual classroom: Distance learning with the Internet. New York: John Wiley & Sons, Inc.
Pressley, M., & McCormick, C. (1995). Advanced educational psychology: For educators, researchers, and policymakers. New York, NY: HarperCollins College Publishers.
Ragan, L. (1999). Good teaching is good teaching: An emerging set of guiding principles and practices for the design and development of distance education. Cause/Effect, 22 (1). Retrieved November 12, 2001, from http://www.educause.edu/ir/library/html/cem9915.html
Relan, A., & Gillani, B.B. (1997). Web-based instruction and the traditional classroom: similarities and differences. In B. Khan (Ed.), Web-based Instruction (pp. 41-58). Englewood Cliffs, NJ: Educational Technology Publications.
Resta, P., Lee, D., Williams, J. (2001). E-Sherpas: Strategies for Supporting Web-Based Knowledge Building Communities. Proceeding of EDUCAUSE, 2001, Conference. Retrieved January 15, 2002, from http://www.educause.edu/ir/library/pdf/EDU0156.pdf
Russell, T. L. (2001). No significant difference phenomenon. Retrieved November 12, 2001, from http://teleeducation.nb.ca/nosignificantdifference/
Schank, R. (1999). Measurement, course design, and the rise of the virtual university (Tech. Rep. No. 74). Evanston, IL: Northwestern University, The Institute for the Learning Sciences.

Scheurich, J. J. (1997). Research method in the postmodern. Bristol, PA: Falmer Press.

Schwandt, T. (1997). Qualitative inquiry: A dictionary of terms. Thousand Oaks, CA: Sage Publications.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagné, & M. Scriven (Eds.), Perspectives of curriculum evaluation (pp. 39-83). Chicago, IL: Rand McNally.
Scriven, M. (1991). Evaluation thesaurus. Newbury Park, CA: Sage.
Scriven, M. (1994). Evaluation as a discipline. Studies in Educational Evaluation, 20(1), 147-166.
Simonson, M. (1997). Evaluating teaching and learning at a distance. In T. Cyrs (Ed.), Teaching and learning at a distance: What it takes to effectively design, deliver, and evaluate programs (New Directions for Teaching and Learning, No. 71, pp. 87-94). San Francisco, CA: Jossey-Bass.
Somekh, B. (2001). The role of evaluation in ensuring excellence in communication and information technology initiatives. Education, Communication & Information, 1(1), 75-101.
Spiro, R., Feltovich, P., Jacobson, M., & Coulson, R. (1992). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. In T. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 57-75). Hillsdale, NJ: Lawrence Erlbaum.
Stake, R. (1973). Program evaluation, particularly responsive evaluation. Keynote presentation at the New Trends in Evaluation conference, Göteborg, Sweden. Retrieved November 12, 2001, from http://www.wmich.edu/evalctr/pubs/ops/ops05.html
Stufflebeam, D. (1971). Educational evaluation and decision-making. In B. Worthen & J. Sanders, Educational evaluation: Theory and practice (pp. 128-150). Worthington, OH: Charles A. Jones.
Stufflebeam, D., & Shinkfield, A. (1985). Systematic evaluation. Boston, MA: Kluwer-Nijhoff.
Stufflebeam, D., Madaus, G., & Kellaghan, T. (Eds.). (2000). Evaluation models: Viewpoints on educational and human services evaluation. Boston, MA: Kluwer Academic Publishers.
Stufflebeam, D. (2001). Evaluation models. In G. T. Henry & J. C. Greene (Eds.), New Directions for Evaluation, 89. San Francisco, CA: Jossey-Bass.
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Applied Social Research Methods Series, Vol. 46). Thousand Oaks, CA: Sage.
Task Force on Distance Education. (1992). Report of the task force on distance education. Retrieved November 12, 2001, from The Pennsylvania State University Web site: http://www.cde.psu.edu/DE/DE_TF.html
Thorpe, M. (1988). Evaluating open and distance learning. Essex: Longman Group.
Tyler, R.W. (1949). Basic principles of curriculum and instruction. Chicago, IL: University of Chicago Press.
U.S. Department of Education, National Center for Education Statistics. (1997). Distance education in higher education institutions (NCES 98-062). Retrieved November 12, 2001, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=98062
U.S. Department of Education, National Center for Education Statistics. (1999). Distance education at postsecondary education institutions: 1997-98 (NCES 2000-013). Retrieved November 12, 2001, from http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2000013
Van Gigch, J. (1978). Applied general systems theory. New York, NY: Harper and Row Publishers.
Verduin, J., & Clark, R. (1991). Distance education: The foundations of effective practice. San Francisco, CA: Jossey-Bass Publishers.
Web-based Education Commission. (2000). The power of the Internet for learning: Moving from promise to practice. Retrieved November 12, 2001, from http://www.ed.gov/offices/AC/WBEC/FinalReport/
Wegerif, R. (1998). The social dimension of asynchronous learning networks [Electronic version]. Journal of Asynchronous Learning Networks, 2(1), 34-49.
Weiss, J. (1989). Evaluation as subversive activity. In G. Milburn, I. Goodson, & R. Clark (Eds.), Re-interpreting curriculum research: Images and arguments (pp. 121-131). London, Ontario, Canada: The Althouse Press.
Williams, J., Lee, D., & Adams, W. (2001). Formative evaluation and the generative process of designing an online environment. Proceedings of the 17th Annual Conference on Distance Teaching and Learning (pp. 393-398). Madison, WI: University of Wisconsin System.
Winn, W. (1992). The assumptions of constructivism and instructional design. In T. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction: A conversation (pp. 177-182). Hillsdale, NJ: Lawrence Erlbaum.
Worthen, B. R., & Sanders, J. R. (1973). Educational evaluation: Theory and practice. Worthington, OH: Charles A. Jones.
Worthen, B.R., & Sanders, J.R. (1987). Educational evaluation: Alternative approaches and practical guidelines. New York: Longman Publishers.
Worthen, B. R., Sanders, J. R., & Fitzpatrick, J. L. (1997). Program evaluation: Alternative approaches and practical guidelines (2nd ed.). White Plains, NY: Longman Publishers.