
Learning Objectives Feature for the Instructional Module Development System

Ketaki Andhare Department of Engineering Arizona State University

[email protected]

Abstract - The road to effective science, technology, engineering and mathematics (STEM) instruction starts with a well-conceived and constructed plan or curriculum. STEM educators, who typically come from STEM backgrounds and have little or no STEM education training, can benefit from the use of an information technology (IT) tool that guides them through the complex task of designing an instructional module (i-mod), i.e., a single course that spans a specified duration of time. The next generation of the World Wide Web (WWW), called the Semantic Web, promises to further improve productivity by providing meaning and intelligence to the vast data currently present on the web. The Instructional Module Development (IMoD) system, currently under development, presents a web-based framework for representing an i-mod and scaffolds users through the design process. This framework uses Semantic Web technologies to automate aspects of the complex decision making processes that are needed for designing an i-mod. One of the main components of the IMoD system is the Learning Objectives Feature (LOF). The LOF consists of an interface and the backend intelligence needed to help the user (the instructor) create specific and measurable learning objectives and link them with other aspects of the i-mod design. This paper presents the LOF.

Keywords - instructional module, learning objectives, ontology, semantic web, web application

I. INTRODUCTION

A. Motivation

To ensure that future generations of engineering, science and other technological practitioners are equipped with the knowledge and skills required to develop innovative solutions to societal challenges, effective courses or i-mods that incorporate the best pedagogical and assessment practices must be developed and delivered. Tertiary-level STEM educators tend to have little or no STEM education training. Their approaches to learning, instruction, and assessment mimic the experiences they were exposed to as students and are not necessarily informed by scholarship in the area of how people learn. The road to effective STEM instruction starts with a well-conceived and constructed plan. An information technology (IT) tool that can guide STEM educators through the complex task of instructional module development, provide relevant information about research-based pedagogical and assessment principles, and automate aspects of the complex decision making involved in the design process would be of great value. Although IT tools such as Electronic Performance Support Systems (EPSSs), Knowledge Management Systems (KMSs), and repositories have been used to support some parts of the i-mod design and development process, none of them currently provides all of these features [1].

B. Problem Statement

The Instructional Module Development (IMoD) system presents a web-based framework for representing an i-mod and scaffolds users through the design process, in a fashion similar to the way TurboTax, a tax preparation software package, provides step-by-step guidance, on request, to users completing tax returns. This framework uses Semantic Web technologies to automate aspects of the complex decision making processes that are needed for designing an i-mod. The architecture of the IMoD system resembles the structural model of an i-mod [2]; that is, it consists of five main components – Context [19], Learning Objectives, Content [20], Pedagogy and Assessments. This paper discusses the tool being developed for the Learning Objectives component, known as the Learning Objectives Feature (LOF). The LOF forms the heart of the IMoD system and is built on top of a semantic web ontology that represents the learning objectives (LOs) associated with an i-mod. An ontology formally represents knowledge as a set of concepts and their relationships in a domain [3]. The LOF provides an


interface to enable the user to enter learning objective data. This data is captured and processed into metadata that is used by the backend intelligence to create and link the LOs with data in the other components of the IMoD.

II. BACKGROUND

This section describes the literature review that was undertaken before the LOF was designed. The system has been designed such that its foundation is based on research in areas such as curriculum and instruction design [2-6,8-9] and how people learn [7]. The aim of this tool is to make this research accessible to instructors so that they can follow the best practices while designing their instruction.

A. Instructional Design

According to Pellegrino, curriculum can be defined as the knowledge and skills in subject matter areas that teachers teach and students are supposed to learn [4]. The curriculum also describes the scope of the content that is to be taught in a particular subject area. It also usually provides an order in which the content is to be taught to be most effective. Pellegrino defines a triad that is the center of an educational enterprise. This triad consists of the Curriculum, the Instruction (learning activities used to teach the content) and the Assessments. Pellegrino believes that this triad should be centered on a subject domain and that all three components of the triad should be in alignment with each other in order to achieve effective teaching. In order to achieve alignment in the triad elements, Pellegrino believes that it is important to understand the principles behind how people learn. An important aspect in the process of learning is being able to develop a “foundation of factual knowledge” and then understand and apply these facts in a practical environment [4]. This concept is in line with the taxonomy of learning domains developed by Benjamin Bloom in 1956. Bloom identified different levels of learning within the cognitive domain [5]. Two more domains were later added to this classification – the affective domain and the psychomotor domain. As shown in Fig. 1, these domains are further divided and arranged from the simplest learning activity to more complex activities. Bloom’s taxonomy was later modified by Anderson and Krathwohl to bring it up to date with the 21st century [6].

Fig. 1. Learning domains and learning activities (complex to simple)

Another important principle of learning is the environment in which the learning takes place. A learning environment can be classified into four types: learner-centered, knowledge-centered, assessment-centered and community-centered [7]. In a learner-centered environment, the focus is on the student, and teaching is adapted to the skills of the learner. In a knowledge-centered environment, the focus is on the content being taught and in an assessment-centered environment, feedback is important to adapt the learning to what the student has learned. Community-centered environments think of the classroom or the learning environment as a community and learning techniques are adapted to fit this ideology.

Another approach to developing a curriculum is known as the Backward-Design principle introduced by Wiggins and McTighe in their book Understanding by Design [2]. Usually instructors design a course based on the content presented in a textbook. Wiggins and McTighe advocate the opposite. They believe that identifying the objectives to be achieved during the course is a more effective starting point. One should “start with the end—the desired results (goals or standards)—and then derive the curriculum from the evidence of learning (performances) called for by the standard and the teaching needed to equip students to perform” [2]. To backward design a curriculum, the desired results are identified, and then assessments are designed to verify that these results have been achieved. The learning experiences and instruction are then formulated around the desired results and the assessments.

The IMoD system draws inspiration from the work done by Bloom, Anderson and Krathwohl in the area of learning taxonomies to design the ontological model and from Wiggins and McTighe to design the structure of an i-mod.


B. Learning Objectives

As shown by Wiggins and McTighe, the desired results form a crucial part of the instructional module design process. The desired results or the objectives are the starting point for developing an effective and successful i-mod. An objective is a way in which an instructor can inform others of what he/she intends for the students to achieve. As defined by Robert Mager, an objective is related to outcomes instead of the process followed for achieving those outcomes and the outcome is specific and measurable [8].

According to Dr. Dee Fink, learning goals or objectives should be defined after identifying the context of learning. Fink has formulated a Taxonomy of Significant Learning that classifies learning activities [9]. This taxonomy contains six sub-categories – foundational knowledge, application, integration, human dimension, caring and learning how to learn. In order to define the objectives, the instructor should think about which of these sub-categories the objective belongs to. These sub-categories are interactive: each is able to stimulate any of the others.

Another way of formulating an objective is by following the structure defined by Mager. An objective can have three characteristics: Performance – what the learner should be able to do; Conditions – the conditions under which the learner should be able to do it; and Criterion – how well it must be done [8]. The Performance characteristic states what the learner is expected to do during the course of learning in order to display competence. This performance is to be carried out under certain Conditions, which should also be specified in the objective in order to add clarity. Since the objective is an outcome that is to be attained, there has to be a way in which acceptability can be determined. This is the Criterion, which should be specified to describe what performance is acceptable. It is easy to clutter an objective with unnecessary information, such as descriptions of the procedures to be followed for the learning instruction or the target audience; Mager suggests that it is better to leave these out of an objective.
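As a concrete illustration, Mager's three-part structure can be sketched in a few lines of JavaScript. The function and field names here are illustrative placeholders, not part of the IMoD implementation:

```javascript
// Illustrative sketch of Mager's objective structure: an objective is
// assembled from a Performance (required), plus an optional Condition
// and Criterion. All names here are hypothetical, not IMoD code.
function composeObjective({ condition, performance, criterion }) {
  const parts = [];
  if (condition) parts.push(condition + ',');
  parts.push('students should be able to ' + performance);
  if (criterion) parts.push(criterion);
  return parts.join(' ') + '.';
}

const objective = composeObjective({
  condition: 'Given a Software Requirements Specification document',
  performance: 'analyze the quality of the software requirements',
  criterion: 'with 95% accuracy'
});
```

Calling composeObjective with only a performance still yields a well-formed objective, mirroring Mager's point that the Performance is the essential part while the Condition and Criterion refine it.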

The IMoD project follows the instructional objective format defined by Mager and builds upon it to help the instructor develop clear, specific objectives.

C. Existing Tools that Support Instructional Design

This section briefly describes the different e-learning and curriculum development tools that are available via the World Wide Web and how they differ from the IMoD system.

1) Curriculum Design Tools:

The National Engineering Education Delivery System (NEEDS) is a digital library of learning resources in the engineering domain [10]. These resources can be accessed by users (both students and teachers) through a web interface. Users can upload, download, search for and comment on the available resources to meet their educational needs.

Connexions is also a digital library of educational resources. It provides a repository of educational content that can be accessed over the World Wide Web. Instructors or authors can create and upload content in the form of modules of varying sizes. A group of modules forms a “collection” of knowledge that can be downloaded and used by students [11].

National Science Digital Library (NSDL) provides a broad set of tools and services that help the education community to “organize, manage, and disseminate digital educational content to advance STEM teaching and learning” [12]. These tools can be used by developers as stand-alone applications or can be partnered with NSDL.

The Understanding by Design tool [13] is based on Wiggins’s and McTighe’s Backward Design principle [2]. This tool helps instructors to design a course or a unit based on the Backward Design principle. It provides a user interface where an instructor can enter the desired results, design assessments to evaluate whether these results have been achieved and then plan instruction for the course.

While most of the existing tools help in the sharing and management of educational resources, the IMoD tool helps instructors to develop i-mods based on the best design principles that have arisen out of extensive research in the fields of education and instruction. It helps both experts and novices to learn to develop effective instruction without needing prior knowledge of the best practices to be followed.


2) Ontology-based Tools:

Content Automated Design and Development Integrated Editor (CADDIE) is an e-learning tool that employs ontologies and semantic web technologies to support instructional design and provide personalized learning services to its users [14]. The learner using this tool is first profiled; based on the profile, the tool then identifies the best strategies for presenting resources so that the learner can learn in the most effective manner.

The Intelligent Web Teacher (IWT) is a research project based on semantic web technologies that also provides personalized learning services. It uses a semantic web ontology to represent “Domain Concepts” and the relations between them. A Domain Concept is a concept that belongs to some educational domain and can be described by an educational resource or “Learning Object” [14].

LOMster is a tool that has been developed to “share and reuse” educational resources and is based on peer-to-peer technology [15]. The tool is based on the Learning Object Metadata (LOM) standard developed by the IEEE LTSC (Learning Technology Standards Committee). The LOM standard specifies the format of the metadata that is used to describe a learning object. The tool allows users to add learning objects/resources to the system, generates metadata for the added content and also provides the ability to share these resources with peers connected in the network.

The ontology built for the Learning Objectives Feature contains classes that define the structure of a learning objective and also the structure of the metadata that is generated for the LOs. The ontology also holds the relations between the different classes.

D. Semantic Web Technologies

1) Semantic Web Ontology: At the heart of semantic web technologies lie ontologies. Ontologies provide a formal representation of knowledge by specifying knowledge as concepts belonging to a domain. An ontology also provides relationships between the different concepts that have been specified. Ontologies are usually defined by creating a hierarchy of classes to represent data entities and linking these classes by creating relationships between them. The languages used to specify ontologies are called ontology languages. Some of the XML-based ontology languages are DAML+OIL (DARPA Agent Markup Language + Ontology Inference Layer), RDF (Resource Description Framework), OWL (Web Ontology Language), RDF Schema and SHOE (Simple HTML Ontology Extensions).

2) Resource Description Framework (RDF): The Resource Description Framework, as the name suggests, is a language for describing web resources. It is used for representing information, especially metadata, about web resources [16]. RDF is designed to be machine-readable so that it can be used in software applications for intelligent processing of information.
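RDF's underlying data model is a set of subject–predicate–object statements (triples). A minimal sketch of this model, with invented identifiers rather than an actual IMoD graph:

```javascript
// Minimal sketch of RDF's triple model. Each statement links a subject
// to an object via a predicate; the identifiers below are invented for
// illustration only.
const triples = [
  ['#LO1', 'rdf:type',        '#Learning_Objective'],
  ['#LO1', '#hasPerformance', '#P1'],
  ['#P1',  'rdf:type',        '#Performance']
];

// Because statements are uniform, software can query them mechanically,
// which is what "machine-readable" buys us in practice:
function objectsOf(subject, predicate) {
  return triples
    .filter(([s, p]) => s === subject && p === predicate)
    .map(([, , o]) => o);
}
```

For example, objectsOf('#LO1', '#hasPerformance') returns the performance node linked to the objective, without any natural-language parsing.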

3) Web Ontology Language (OWL): The Web Ontology Language is a markup language that is used for publishing and sharing ontologies [17]. OWL is built upon RDF, and an ontology created in OWL is actually an RDF graph. Individuals with common characteristics can be grouped together to form a class. OWL provides different types of class descriptions that can be used to describe an OWL class. OWL also provides two types of properties: object properties and data properties. Object properties are used to link individuals to other individuals, while data properties are used to link individuals to data values.

4) Protégé: Protégé is an open-source ontology editor [18]. Protégé provides a Protégé-OWL editor that allows users to create semantic web ontologies in W3C’s Web Ontology Language.

III. LOF FRAMEWORK

A. IMoD Structure

The IMoD tool helps users create new i-mods, store and manage already defined i-mods, and share them if needed. The structure of the i-mods also facilitates re-use of existing i-mods and sharing of i-mods among users.

In order to understand the structure of the Learning Objectives Feature (LOF), it is important to understand the structure of the IMoD system. The IMoD system comprises five components – Context [19], Learning Objectives Feature, Content [20], Pedagogy and Assessments. The Context, Content and Learning Objectives Feature of the IMoD system have been implemented, while the Pedagogy and Assessments components are under development. The Context component holds information about the instructor, the i-mod being designed and the schedule that will be followed by


the i-mod. The Content component contains information about the educational content of an i-mod, i.e., what content is to be taught as part of this i-mod. The Learning Objectives component or the LOF stores all the learning objectives to be achieved. Pedagogy would help to define the learning activities or instructional activities that the instructor would utilize to convey the subject matter, i.e., content. Assessments would contain all the assessment pieces designed for the objectives defined in the LOF. Each of the components would be linked to one another to provide a more comprehensive and integrated i-mod. For example, an LO specifies the Content that supports it. It could also be linked to Assessments that would help to evaluate whether or not the LO was attained. Content could be linked to the Pedagogy that would specify how this content is to be delivered to the audience.
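One way the cross-component links described above could be represented is with ID references between components. The following is a hypothetical sketch of that idea, not the actual IMoD data model or schema:

```javascript
// Hypothetical sketch of cross-component links in an i-mod: a learning
// objective references the Content that supports it and the Assessments
// that evaluate it; Content references the Pedagogy that delivers it.
// Field names and IDs are invented for illustration.
const imod = {
  learningObjectives: [{ id: 'LO1', contentIds: ['C1'], assessmentIds: ['A1'] }],
  content:            [{ id: 'C1', topic: 'Software requirements', pedagogyIds: ['PED1'] }],
  assessments:        [{ id: 'A1', kind: 'quiz' }],
  pedagogy:           [{ id: 'PED1', activity: 'lecture' }]
};

// Resolve the content topics that support a given learning objective.
function contentFor(loId) {
  const lo = imod.learningObjectives.find(o => o.id === loId);
  return lo.contentIds.map(cid => imod.content.find(c => c.id === cid).topic);
}
```

Following the references in the other direction (e.g., from an assessment back to its objective) would work the same way, which is what makes the integrated i-mod navigable from any component.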

B. LOF Structure

As Wiggins and McTighe argued, the desired results form the linchpin on which the i-mod rests. IMoD follows this understanding and provides a Learning Objectives Feature component that allows an instructor to define a learning objective following a modified version of the format defined by Robert Mager. Bloom’s revised taxonomy by Anderson and Krathwohl is then used to build upon the objective and add more meaning to it. The model of an LO is shown in Figs. 2 and 3.

Fig. 2. The learning objective structure for the Learning Objectives Feature of the IMoD

The Learning Objective consists of four parts – Condition, Performance, Content and Criteria. These parts are entered by the user while stating the LO. The back end analyzes the input and generates metadata for the LO, i.e., adds more meaningful information to it. The learning domain and domain category of the Performance part are based on Bloom’s revised taxonomy, as illustrated in Fig. 3. The learning domain can be cognitive, affective or psychomotor, while the domain categories are the learning activities that form the learning domains.

Consider a learning objective defined as: Given a Software Requirements Specification (SRS) document of a Capstone project, students should be able to analyze the quality (identify incorrect requirements) of the software requirements in the document with 95% accuracy. In this example, the Condition is ‘Given a Software Requirements Specification document’ and the target audience is the student. This Condition is performance-based and has been explicitly provided by the user. If the condition is not specified, then the LOF annotates the LO as containing a generic condition and picks the target audience from the Context component of the IMoD system. The Performance is the act of analyzing the quality. The use of the learning action ‘analyze’ indicates that the learning objective belongs to the cognitive learning domain, with domain category ‘analyzing’. This is higher-order learning, and the action word ‘analyze’ is covert, or non-observable. To make this action overt, or explicit, an indicator – ‘identify incorrect requirements’ – has been added to the performance. The system also allows the user to enter the content area that the performance belongs to; for example, the content area in this instance could be ‘Software requirements’. If the content area has already been entered into the system by the user, then it is picked up from the database and displayed to the user in the learning objectives component. The percentage of accuracy with which the student is supposed to carry out the performance forms the Criteria part of the LO and is of the ‘accuracy’ type. This example is illustrated in Fig. 4.
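The covert/overt handling in this worked example can be sketched as a lookup over action words. The mapping below is assumed sample data for illustration, not the IMoD system's actual action-word list:

```javascript
// Sketch of annotating a performance from its action word. The mapping
// is sample data; the real IMoD system stores action words, with their
// domain, category and covert/overt status, in its database.
const actionWords = {
  analyze: { domain: 'Cognitive', category: 'Analyzing',   covert: true },
  list:    { domain: 'Cognitive', category: 'Remembering', covert: false }
};

function annotatePerformance(actionWord, indicator) {
  const entry = actionWords[actionWord];
  if (!entry) return null; // unrecognized action word
  const result = { ...entry, indicator: indicator || null };
  // A covert (non-observable) action word needs an overt indicator.
  if (entry.covert && !indicator) {
    result.warning = 'Covert action word: add an observable indicator.';
  }
  return result;
}
```

With this sketch, annotatePerformance('analyze', 'identify incorrect requirements') yields the cognitive/analyzing metadata from the example above, while omitting the indicator flags the objective as incomplete.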

Fig. 3. Learning domains and domain categories (Blue boxes indicate Learning Domains and Green boxes indicate Domain Categories)


Fig. 4. An example of a complete learning objective

C. Ontological Model

The ontology has five main classes: Learning Objective, Learning Domain, Domain Category, Objective Components and Content. These classes are further divided into sub-classes, as shown in Fig. 5. The metadata is used to associate information about one class with another class; this association is made using object properties that define relations between the classes. The class hierarchy is shown in Fig. 6. Table I shows how the ‘Criteria’ class and its sub-class ‘Accuracy’ are defined in OWL. Table II shows the OWL code snippet for the object property ‘hasPerformance’ and its sub-property ‘LearningDomain’.

TABLE I
CODE SNIPPET FOR AN OWL CLASS AND SUB-CLASS

Class:

<owl:Class rdf:about="#Criteria">
  <rdfs:subClassOf rdf:resource="#Objective_Components"/>
</owl:Class>

Sub-class:

<owl:Class rdf:ID="Accuracy">
  <rdfs:subClassOf rdf:resource="#Criteria"/>
</owl:Class>

TABLE II
CODE SNIPPET FOR AN OWL OBJECT PROPERTY AND SUB-PROPERTY

Property:

<owl:ObjectProperty rdf:ID="hasPerformance">
  <rdfs:subPropertyOf>
    <owl:ObjectProperty rdf:about="http://www.w3.org/2002/07/owl#topObjectProperty"/>
  </rdfs:subPropertyOf>
  <rdfs:domain rdf:resource="#Learning_Objective"/>
  <rdfs:range rdf:resource="#Performance"/>
</owl:ObjectProperty>

Sub-property:

<owl:ObjectProperty rdf:about="#LearningDomain">
  <rdfs:subPropertyOf rdf:resource="#hasPerformance"/>
  <rdfs:range rdf:resource="#Affective_Domain"/>
  <rdfs:range rdf:resource="#Cognitive_Domain"/>
  <rdfs:range rdf:resource="#Psychomotor_Domain"/>
  <rdfs:domain rdf:resource="#Performance"/>
</owl:ObjectProperty>

The Learning Domain class has the three sub-classes shown in Fig. 3: Cognitive Domain, Affective Domain and Psychomotor Domain. The Domain Category class is further divided into the 18 categories shown in Fig. 3.

Consider a learning objective LO1 that has performance P, a generic condition GCO and criteria C. P belongs to the learning domain D and domain category DC, and the condition GCO has target audience TA and i-mod type TY. LO1 also specifies the content that P belongs to as CON. Fig. 7 shows how classes are associated with one another, based on the properties being used, with respect to LO1. The circles in Fig. 7 indicate the classes in the ontology and the links between the classes illustrate the relations (object properties) between them.
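The spokes of this example can be written out as explicit links. Only hasPerformance is named in the ontology excerpt above; the remaining relation names below are placeholders invented for illustration:

```javascript
// The LO1 example as (subject, relation, object) links. 'hasPerformance'
// appears in the ontology; the other relation names are placeholders.
const links = [
  ['LO1', 'hasPerformance',    'P'],
  ['LO1', 'hasCondition',      'GCO'],
  ['LO1', 'hasCriteria',       'C'],
  ['P',   'hasLearningDomain', 'D'],
  ['P',   'hasDomainCategory', 'DC'],
  ['P',   'hasContent',        'CON'],
  ['GCO', 'hasTargetAudience', 'TA'],
  ['GCO', 'hasIModType',       'TY']
];

// Walking the outgoing links from a node reproduces the spokes of Fig. 7.
function outgoing(node) {
  return links.filter(([s]) => s === node).map(([, rel, o]) => rel + ' -> ' + o);
}
```

For instance, outgoing('P') lists the learning domain, domain category and content attached to the performance, exactly the associations the figure illustrates.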

Fig. 5. Ontological model (class hierarchy)


Fig. 6. Object properties

Fig. 7. Ontological model of a learning objective

IV. IMPLEMENTATION

The LOF comprises four distinct components: the web client, the server, the database and the backend application. The following technologies were used to implement these components.

ExtJS 4.0 is a pure JavaScript application framework that provides libraries for developing web applications in JavaScript. The LOF web interface was built using these libraries.

PHP 5.3.8 is a server scripting language that facilitates the development of dynamic web pages. PHP embedded with ExtJS code was used to render the user interface and to interact with the database.

MySQL 5.0.8 is the database used to store all IMoD data. The PHP and MySQL database are deployed on an Apache 2.2 server using XAMPP. XAMPP is a free and open-source web server stack that runs on multiple platforms, including Windows, Linux, Solaris and Mac OS X. It consists of an Apache HTTP server, a MySQL database and interpreters for PHP and Perl.

Java and the Protégé OWL API are used to create the standalone application that provides the backend functionality, mainly the Evaluate feature. This Java application uses the OWL API to read and load the semantic ontology. The OWL API also provides ‘reasoner’ classes that help in identifying relationships in the ontology.

A. Server and Database

The programming language used to provide the server functionality is PHP; the web client interacts with the database through PHP.

The tables in the database that are relevant for the LOF are:

1) learningobjectives: this table stores the learning objectives along with their IModID. It stores the IModID, condition, performance, indicator, learning domain, domain category, content, criteria, criteria type and the complete learning objective, along with a learning objective ID.

2) learningdomains: this table stores the learning domains along with their domain IDs.

3) domaincategories: this table stores category names along with the domain ID of the domain they belong to and a category ID.

4) actionwords: this table stores all the action words that can be used to describe a performance along with a category ID to identify the domain it belongs to. The action words are also classified as being covert or overt.

5) help: this table stores all the help content that is displayed in the help panel when the user is creating a learning objective. The help content is associated with the appropriate UI element ID.

Fig. 8 shows the relationships between the tables used by the LOF.

B. Web Client

The IMoD system is a web-based tool that uses semantic web technologies to add meaning to user input. The web client has been implemented using a JavaScript framework called ExtJS 4. ExtJS provides JavaScript libraries for developing interactive web applications using the Document Object Model (DOM), Asynchronous JavaScript and XML (AJAX), etc.


Fig. 8. Relationships between the different tables used by the LOF

The user interface is designed as a collection of tabs, with each tab being a separate component of the IMoD system. There are tabs for Context, Learning Objectives, Content, Assessments and Pedagogy. The main components of the Learning Objectives tab are:

1) Learning Objectives Description panel: an accordion panel that houses four child panels. The accordion layout manages multiple panels in an expandable fashion, with only one panel open at a time. Table III shows the piece of code that creates this panel.

TABLE III EXTJS CODE FOR CREATING AN ACCORDION PANEL

// create the main panel that will hold details about the Learning Objective
var mainLearningObjectivePanel = Ext.create('Ext.form.Panel', {
    title: 'LEARNING OBJECTIVE DESCRIPTION',
    region: 'center',
    layout: 'accordion',
    xtype: 'panel',
    autoScroll: true,
    height: Ext.getBody().getViewSize().height * 0.3,
    margins: '5 0 0 0',
    id: 'learningObjectiveDescription'
});

The Learning Objectives Description panel contains the following 4 panels:

Condition: The condition panel provides a text area for users to enter the condition for a learning objective.

Fig. 9. Condition panel of the LOF interface

Performance: The performance panel contains 2 combo boxes and 2 text areas. The combo boxes are drop down lists, one for selecting the learning domain and the other for the domain category. The combo boxes are populated by querying the database for the domain names and the category names. The text areas are for the user to enter the performance and the indicator.

Fig. 10. Performance panel of the LOF interface

Content: The content panel provides the user with an editable combo box for entering a new content topic. If the user has already saved any content topics in the contents tab for that particular i-mod, then this combo box will be populated with those pre-existing content topics. These topics are pulled from the ‘content’ table of the database.

Fig. 11. Content panel of the LOF interface


Criteria: The criteria panel allows the user to select a criterion type, such as accuracy, speed, quality or quantity, from a drop down combo box. A text area lets the user enter a criterion for the learning objective.

Fig. 12. Criteria panel of the LOF interface

2) Toolbar: The toolbar contains four buttons: View All Learning Objectives, Create New Learning Objective, Evaluate Learning Objectives and Save Learning Objective.

Fig. 13. Toolbar of the LOF interface

View Learning Objectives: Clicking on the View button shows a list of all the learning objectives that have been created and saved by the user for that particular i-mod.

Create New Learning Objective: When this button is clicked, it clears all the text fields so that the user can start creating a fresh learning objective.

Save Learning Objective: When this button is clicked, the learning objective entered by the user in the Learning Objective Description panel is saved to the database. A function gathers the data from the UI and passes it to a PHP file, which runs a query that inserts the learning objective data into the ‘learningobjectives’ table of the IMoD database. Along with the learning objective, the content from the content text area is picked up and inserted into the ‘content’ table. Save also performs the concatenation of the entire learning objective: the various parts entered by the user are put together and displayed in the panel at the bottom of the page.

Evaluate Learning Objective: On clicking the Evaluate button, an xml file is generated that contains the learning objective that the user has entered. This file is provided as an input to the application that forms the back end of the LOF, as discussed in the next section.

3) Learning Objective Display panel: When the Save button is clicked, the various parts of the learning objective are put together along with other information to form a complete sentence that describes the LO. The audience that the i-mod targets is obtained from the Context tab and added to the learning objective.

For example, if the user enters the condition as ‘Given a Software Requirements Document’, the performance as ‘analyze’, the indicator as ‘identifying incorrect requirements’, the content as ‘Software requirements’ and the criteria as ‘with 95% accuracy’, the complete learning objective formed would be: ‘Given a Software Requirements Document, LowerDivision student should be able to analyze Software requirements with 95% accuracy. (demonstrated by - identifying incorrect requirements)’. Here ‘LowerDivision student’ indicates the target audience specified in the Context tab.
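The concatenation performed on Save can be sketched as follows; the method and variable names are illustrative (the real assembly happens in the web client and PHP layer, not in Java):

```java
// Sketch of how the complete learning objective sentence is assembled on Save.
// Names are illustrative; the production system does this in the ExtJS/PHP layer.
static String assembleLearningObjective(String condition, String audience,
        String performance, String content, String criteria, String indicator) {
    return condition + ", " + audience + " student should be able to "
            + performance + " " + content + " " + criteria
            + ". (demonstrated by - " + indicator + ")";
}
```

With the example inputs above, this produces exactly the complete learning objective shown in the text.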

4) Help panel: Along with these UI components, another important component is the Help panel. As the user enters data into the different panels to create a learning objective, the help panel displays relevant information, tips and examples to help the user formulate the learning objective. The content of the help panel changes based on what the user is currently focused on. If the user is in the Performance panel viewing the drop down list of domain categories, the help panel shows information about the different categories to help the user make an informed choice; if the user clicks the text area of the Criteria section, help on the Criteria piece is provided.

When a UI element has focus, an event is triggered that calls a function to handle the appropriate response. Based on which UI element has focus, the appropriate help text is retrieved from the ‘help’ table of the database and displayed in the help panel as shown in Fig. 14. If focus is on the performance text area then the help panel also displays a table that contains action words that the user can use. If the user clicks on one of the words in the list, then the performance text area gets populated with that word. This list of words can also be sorted in ascending or descending order as shown in Fig. 15.
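The retrieval step of this focus-driven help can be pictured as a simple keyed lookup; the element IDs and help strings below are illustrative stand-ins for rows of the ‘help’ table:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the help lookup: map a focused UI element's ID to its help text,
// as the LOF does against the 'help' table. IDs and strings are illustrative.
static final Map<String, String> HELP = new HashMap<>();
static {
    HELP.put("conditionTextArea", "Describe the circumstances under which the performance occurs.");
    HELP.put("criteriaTextArea", "State how well the learner must perform.");
}

static String helpFor(String uiElementId) {
    return HELP.getOrDefault(uiElementId, "No help available for this element.");
}
```

In the real system the focus event fires in the browser and the text is fetched from MySQL via PHP; only the mapping idea is shown here.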


Fig. 14. Help panel of the LOF interface

Fig. 15. Help panel for the performance text area

The Help panel provides assistance to the user and can be opened or closed at the user’s convenience. If the user does not want the help panel to show, then the panel can be collapsed to a side and will not be visible any longer. Clicking on the arrow on the top right section of the panel will reopen it.

C. Backend Application

The back end is responsible for connecting the ontology to the user interface and providing the intelligence needed by the tool for processing user data. The back end algorithm analyzes the data entered by the user, compares it to the ontology and generates metadata for it. This metadata is stored along with the user data to make connections and associations between the different components of an i-mod. The generated metadata also helps the user by providing suggestions or tips that make the LO more effective.

The backend contains certain algorithms that form a key piece of the LOF. These algorithms are responsible for validating a learning objective once it has been created by the user, and come into play when the user clicks the Evaluate button. One of the strengths of the IMoD system is that once the user has designed an i-mod, the user can evaluate it for completeness, correctness and alignment with the other components. These validations are based on the research that forms the backbone of the system. The algorithm follows certain rules to ensure that a learning objective is complete and correct. The basic steps followed during validation are outlined in Table IV.

TABLE IV
PSEUDO-CODE FOR THE ALGORITHM THAT GENERATES PERFORMANCE METADATA

1. Get user input and create an ontology instance document.
2. Parse the ontology using an OWL parser.
3. Extract and load semantic relations from the ontology. Identify the performance component of the LO.
4. Compare user data with the extracted relations to derive the learning domain and domain category.
5. Add the derived metadata for the learning domain and the domain category to the ontology instance document.

The steps shown in Table IV are implemented as follows.

1) When the user clicks on the Evaluate button, an instance document is created in the form of an XML document. The user input from the screen is obtained via PHP and saved to an XML file.
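The instance document can be pictured as a small XML file with one tag per non-empty component. A minimal sketch (tag names follow the examples in the Appendix; the real file is produced by PHP):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of instance-document generation: one tag per non-null component.
// Tag names follow the Appendix examples; the real file is written by PHP.
static String buildInstanceDocument(Map<String, String> components) {
    StringBuilder xml = new StringBuilder("<learningObjective>");
    for (Map.Entry<String, String> e : components.entrySet()) {
        if (e.getValue() != null && !e.getValue().isEmpty()) {
            xml.append("<").append(e.getKey()).append(">")
               .append(e.getValue())
               .append("</").append(e.getKey()).append(">");
        }
    }
    return xml.append("</learningObjective>").toString();
}
```

Note that a blank component simply produces no tag, which is what the structure-validation step later detects.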

2) The back end of the LOF is programmed in Java using the Protégé OWL API. This API provides interfaces which in turn allow access to an OWL model, its classes, properties and individuals. The API allows us to work directly with the classes and properties that have been defined in the learning objective ontology.

3) The application that implements the backend consumes the XML file generated in the first step and parses the document. At the same time the Protégé OWL API is used to read and parse the OWL file that contains the ontology. An OWL model is created from this parsed ontology, which can then be used to extract classes, subclasses, properties, etc. from the ontology. Table V shows part of the code that does this.

4) Once the OWL model has been created, classes and properties can be extracted from it and compared to the data in the instance document. Based on the results of the comparison, another XML document is created.

The algorithms that form the evaluate piece perform the following actions: determine the structure of the learning objective, validate the learning domain and domain category, validate a performance as being covert or overt, and validate the priority of the content based on the domain category.

Based on the instance document that contains user input, the OWL model is used to determine if any of the components of the learning objective are missing. In case any are missing, a warning is added to the XML generated at the end.
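This structure check can be sketched as follows. The set of required components is an assumption based on the LO parts described in this paper; in the real system the check runs against the OWL model rather than a fixed list:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the structure validation: emit a warning for each required LO
// component absent from the instance document. The required set is an
// assumption drawn from the LO parts described in this paper.
static List<String> checkStructure(Map<String, String> instance) {
    String[] required = {"condition", "performance", "content", "criteria"};
    List<String> warnings = new ArrayList<>();
    for (String part : required) {
        String value = instance.get(part);
        if (value == null || value.isEmpty()) {
            warnings.add("Warning: missing component '" + part + "'");
        }
    }
    return warnings;
}
```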

When a user selects the learning domain and domain category, it is important to validate their correctness. The performance part of the learning objective is parsed and the action word is determined using a text search and matching algorithm, the Rabin-Karp algorithm. Based on the action word identified by this algorithm, its corresponding learning domain and domain categories are obtained. If the learning domain and the domain category match the ones in the instance document, then the domain and the category entered by the user are correct.
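Rabin-Karp locates a pattern in text by comparing rolling hashes and confirming candidate positions character by character. A minimal sketch (the base and modulus constants are illustrative choices, not the ones used by the LOF):

```java
// Minimal Rabin-Karp substring search: returns the index of the first
// occurrence of pattern in text, or -1. BASE and MOD are illustrative.
static int rabinKarpSearch(String text, String pattern) {
    final int BASE = 256, MOD = 101;
    int n = text.length(), m = pattern.length();
    if (m == 0 || m > n) return m == 0 ? 0 : -1;
    int h = 1;                               // BASE^(m-1) mod MOD
    for (int i = 0; i < m - 1; i++) h = (h * BASE) % MOD;
    int pHash = 0, tHash = 0;
    for (int i = 0; i < m; i++) {
        pHash = (BASE * pHash + pattern.charAt(i)) % MOD;
        tHash = (BASE * tHash + text.charAt(i)) % MOD;
    }
    for (int i = 0; i <= n - m; i++) {
        // hash match is only a candidate; verify characters to rule out collisions
        if (pHash == tHash && text.regionMatches(i, pattern, 0, m)) return i;
        if (i < n - m) {                     // roll the window hash forward
            tHash = (BASE * (tHash - text.charAt(i) * h) + text.charAt(i + m)) % MOD;
            if (tHash < 0) tHash += MOD;
        }
    }
    return -1;
}
```

In the LOF's use, the pattern would be a candidate action word and the text the performance phrase entered by the user.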

A performance can be either covert or overt, that is, implicit or explicit. A covert action word can be made overt by adding an indicator to it. If the performance in the instance document is of the covert type and is missing the indicator, then a warning is provided to the user that brings his/her attention to it.
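This rule can be sketched in a few lines; the covert flag comes from the ‘actionwords’ table, and the method name is illustrative:

```java
// Sketch of the covert/overt validation: a covert action word with no
// indicator triggers a warning. The covert flag per action word comes
// from the 'actionwords' table; this method name is illustrative.
static String checkIndicator(boolean isCovert, String indicator) {
    if (isCovert && (indicator == null || indicator.isEmpty())) {
        return "Warning: covert performance needs an indicator to make it overt";
    }
    return null; // no warning needed
}
```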

When content is specified in the content tab, a priority is provided for the content topic: ‘Good to be familiar with’, ‘Important to know or understand’, or ‘Enduring understanding’. If a content topic provides for enduring knowledge or understanding, then the learning objective that uses this content should belong to a higher order domain category like ‘Evaluating’. On the other hand, if a topic is important to know, then a lower order domain category like ‘Applying’ would be satisfactory. This algorithm of the evaluate function ensures that the content is in line with the learning objective.
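The priority-to-category alignment can be sketched as a lookup. The mapping below follows this section and Appendix test case 14; it is a sketch, not the production rule set:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the content/LO alignment check. The priority-to-category
// mapping mirrors the rules stated in this section and Appendix test 14.
static boolean isAligned(String priority, String domainCategory) {
    List<String> allowed;
    switch (priority) {
        case "Enduring understanding":
            allowed = Arrays.asList("Analyzing", "Evaluating", "Creating"); break;
        case "Important to know or understand":
            allowed = Arrays.asList("Applying", "Understanding"); break;
        case "Good to be familiar with":
            allowed = Arrays.asList("Remembering"); break;
        default:
            return true; // unknown priority: nothing to check
    }
    return allowed.contains(domainCategory);
}
```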

5) After the evaluation has taken place, the results of the evaluation are added to another XML file. This document is sent back to the web application so that the LOF can display the results of the evaluation to the user.

TABLE V
CODE FOR PARSING AN OWL ONTOLOGY

private void evaluate(OWLModel owlModel, ProtegeReasoner reasoner) {
    if (reasoner == null) {
        System.out.println("reasoner is null.");
        return;
    }
    try {
        // verify that the Performance component exists
        OWLNamedClass performanceClass = owlModel.getOWLNamedClass("Performance");
        if (performanceClass != null) {
            System.out.println("performance class found = " + performanceClass.getName());
            OWLObjectProperty hasContent = owlModel.getOWLObjectProperty("hasContent");
            if (hasContent != null) {
                String prop = hasContent.getName();
                Iterator iter = loComponents.iterator();
                // ... (remainder of the original listing truncated)
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}


V. VALIDATION AND TESTING

A. Validation

The IMoD system was developed under the supervision of Dr. Srividya Bansal and Dr. Odesma Dalrymple, both Assistant Professors in the CTI Department of Engineering at Arizona State University, who verified the tool. The intent of verification was to confirm whether the product of a given development phase satisfies the requirements imposed at the start of that phase.

B. Testing

The LOF has been tested and found to meet the requirements. The types of testing carried out on the product were functional testing and cross-platform testing. The test cases for the functional testing are available in the Appendix. These tests were carried out on Google Chrome, Internet Explorer 9 and Mozilla Firefox 3.6.

VI. LIMITATIONS AND SUMMARY

This implementation can be extended to include all the components that form the IMoD system. The ontology built during this project can be extended to include the data needed by the Content, Pedagogy and Assessments components to form a comprehensive whole, and to make strong associations between all the components. The aim is to fully utilize semantic web technologies to integrate all the components of the IMoD system and reveal the relations between the components to the user, so that designing instruction becomes simpler and more convenient, and the quality of instruction improves. Some of the limitations of the system are:

• The standalone application that provides the evaluation functionality should be a web service that integrates seamlessly with the web application.

• If an action word entered by the user is not present in the database, then it is not recognized by the evaluate feature. Future work would involve implementing the ability to add user defined action words to the database.

• The system as of now does not allow sharing of i-mods between different users.

• The results of evaluating a learning objective are available only in an XML file and cannot yet be displayed in the web application.

However, the Learning Objectives Feature provides a component of the IMoD system that is based on the best design practices for instruction and curriculum design. It also makes good use of semantic technology to create interconnections between the different components and to evaluate the design. Content that has been created is available while creating a learning objective, and vice versa. Relevant information from the Context is also available in the learning objectives space. The evaluation function relies on these interconnections to make more sense of a learning objective as a whole. The LOF also provides the user with help at each step, ensuring that there is no discrepancy between the user's understanding and the implementation of the system.

ACKNOWLEDGEMENTS

I would like to express my gratitude to all those who have helped me through the conception and realization of this project. I would like to thank Dr. Odesma Dalrymple and Dr. Srividya Bansal for their invaluable guidance and encouragement throughout the project. I would also like to extend my gratitude towards Dr. Timothy Lindquist for serving as a committee member. I would like to thank Shashank Balasubramanian, John Houghtelin, Vishnu Menon, Vrushali Moghe and Michael Sheppard for their assistance in the development of this project.

REFERENCES

1. McKenney, S., Nieveen, N., & Strijker, A. (2008). Information Technology Tools for Curriculum Development. Springer International Handbooks of Education.

2. Wiggins, G., & McTighe, J. (1998). What is Backward Design? In G. Wiggins, & J. McTighe, Understanding by Design.

3. Antoniou, G., & van Harmelen, F. (2004). A Semantic Web Primer. MIT Press.

4. Pellegrino, J. W. (2006). Rethinking and Redesigning Curriculum, Instruction and Assessment: What Contemporary Research and Theory Suggests. Chicago: National Center on Education and the Economy.

5. Bloom, B., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals, by a committee of college and university examiners. Handbook 1: Cognitive Domain. New York: Longman.

6. Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching and assessing: A revision of Bloom's Taxonomy of educational objectives: Complete edition. New York: Longman.

7. Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How People Learn: Brain, Mind, Experience, and School. Washington, D.C.: National Academy Press.

8. Mager, R. F. (1997). Objectives. In R. F. Mager, Preparing Instructional Objectives (pp. 1-12). Atlanta, Georgia: CEP Press.

9. Fink, D. (2003). Initial Design Phase: Building Strong Primary Components. In D. Fink, A Self-Directed Guide to Designing Courses for Significant Learning (pp. 4-23). San Francisco: Jossey-Bass.

10. A Digital Library for Engineering Education. (n.d.). Retrieved from A Digital Library for Engineering Education: http://www.needs.org/needs/

11. Connexions. (n.d.). Retrieved from Connexions: http://cnx.org/

12. National Science Digital Library. (n.d.). Retrieved from National Science Digital Library: http://nsdl.org/

13. Understanding by Design Exchange. (n.d.). Retrieved from Understanding by Design Exchange: http://www.ubdexchange.org/

14. Adorni, G., Battigelli, S., Brondo, D., Capuano, N., Coccoli, M., Miranda, S., et al. (2010). CADDIE and IWT: Two different ontology-based approaches to Anytime, Anywhere and Anybody Learning. Journal of e-Learning and Knowledge Society , 53-66.

15. Ternier, S., Duval, E., & Vandepitte, P. (2002). LOMster: Peer-to-Peer Learning Object Metadata. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications.

16. RDF Primer. (2004, February 10). Retrieved from World Wide Web Consortium: http://www.w3.org/TR/rdf-primer

17. Bechhofer, S., van Harmelen, F., Hendler, J., Horrocks, I., McGuinness, D. L., Patel-Schneider, P. F., et al. (2004, February 10). OWL Web Ontology Language Reference . Retrieved from World Wide Web Consortium: http://www.w3.org/TR/owl-ref

18. What is Protege. (n.d.). Retrieved from Protege: http://protege.stanford.edu/overview/

19. Moghe, V. (2012). User Interface for Web-based Instructional Module Development (IMoD) System. Arizona State University.

20. Menon, V. (2012). Content Feature for Web-based Instructional Module Development (IMoD) System. Arizona State University.

APPENDIX

1. Test case for verifying the UI of the Learning Objectives tab

1. Click on the Learning Objectives tab. Expected: the Learning Objectives tab should open.
2. Verify the toolbar. Expected: the toolbar should show the View, Evaluate and Save learning objectives buttons.
3. Click on the expand sign corresponding to the Condition panel. Expected: the Condition panel should open; no other panel in the accordion should be open; the panel should contain a text area.
4. Click on the expand sign corresponding to the Performance panel. Expected: the Performance panel should open; no other panel in the accordion should be open; there should be 2 combo boxes and 2 text areas in the panel.
5. Click on the expand sign corresponding to the Content panel. Expected: the Content panel should open; no other panel in the accordion should be open; there should be a combo box in the panel.
6. Click on the expand sign corresponding to the Criteria panel. Expected: the Criteria panel should open; no other panel in the accordion should be open; there should be 1 combo box and 1 text area in the panel.

2. Test case for verifying Condition panel

1. Click on the Condition panel to open it. Expected: the Condition panel should open.
2. Click in the text area for the Condition. Expected: the help panel should display information relevant to the condition.
3. Enter a condition into the text area. Expected: the condition should show in the text area.
4. Close the panel and re-open it. Expected: the condition that was entered should be present in the text area.

3. Test case for verifying Performance panel


1. Click on the Performance panel to open it. Expected: the Performance panel should open.
2. Open the Learning Domain drop down list. Expected: the drop down should contain Affective, Cognitive and Psychomotor.
3. Open the Domain Category drop down list. Expected: the drop down should contain all the domain categories.
4. Click in the text area for the Performance. Expected: the help panel should display information relevant to performance, and should also contain a list of action words.
5. Click on an action word in the list. Expected: the word should show up in the performance text area.
6. Click in the text area for the indicator. Expected: the help panel should display information relevant to the indicator.
7. Enter a performance and an indicator in the respective text areas. Expected: the text should be displayed correctly.

4. Test case for verifying Content panel

1. Click on the Content panel to open it. Expected: the Content panel should open.
2. Click on the drop down to open it. Expected: the drop down should contain all the content topics for that i-mod.
3. Click on the drop down to make it editable. Expected: the drop down should allow the user to enter new text.
4. Close the panel and re-open it. Expected: the content that was entered should be present.

5. Test case for verifying the Criteria panel

1. Click on the Criteria panel to open it. Expected: the Criteria panel should open.
2. Click on the Criteria type drop down to open it. Expected: the drop down should contain Accuracy, Speed, Quality and Quantity.
3. Click on the drop down to make it editable. Expected: the drop down should NOT allow the user to enter new text.
4. Enter text in the Criteria text area. Expected: the text should be displayed correctly.

6. Test case for verifying the Help panel

1. Click on the Learning Objectives tab. Expected: the tab should contain a help panel on the right.
2. Click on any of the UI elements on the page. Expected: the help panel should display relevant text.
3. Click on the Performance text area. Expected: the help panel should also contain a list of action words.
4. Click on the >> button on the top right of the help panel. Expected: the panel should close by collapsing to the right.
5. Click on the << button on the top right of the closed panel. Expected: the panel should open and display the correct information.

7. Test case for verifying Save Learning objective

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Select values in the combo boxes. Expected: the selections should remain in the combo boxes.
3. Click on the Save Learning Objective button in the toolbar. Expected: the learning objective should get saved to the database.
4. Click on the View Learning Objectives button. Expected: the grid should contain the learning objective that was just entered.

8. Test case for verifying Save LO when performance field is blank

1. Enter text into all the text areas EXCEPT performance. Expected: the text should remain in the text areas.
2. Select values in the combo boxes. Expected: the selections should remain in the combo boxes.
3. Click on the Save Learning Objective button in the toolbar. Expected: an alert should show up: “Learning objective not saved. Please enter a performance.”
4. Click on View to verify that the LO is not saved. Expected: the LO should not be present in the View list.

9. Test case for verifying concatenated and complete learning objective on Save

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Select values in the combo boxes. Expected: the selections should remain in the combo boxes.
3. Click on the Save Learning Objective button in the toolbar. Expected: the Learning Objective panel at the bottom should display the entire learning objective that was just saved.

10. Test case for verifying Evaluate

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Select values in the combo boxes. Expected: the selections should remain in the combo boxes.
3. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null. Example: if the content combo box was populated with text, then a tag for content should be present in the file: <content>entered text</content>

11. Test case for verifying the evaluation function that validates the structure of the learning objective as compared to the ontology

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Select values in the combo boxes. Expected: the selections should remain in the combo boxes.
3. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null.
4. Provide this file as input to the Java evaluate application and run the application. Expected: the output should be an XML file that contains no warnings and validates the presence of all the elements.
5. Enter text into the text areas, leaving one text area blank, and click on the Evaluate button. Expected: an XML file should be generated that does not contain a tag for the element whose text area was left blank. E.g., if the criteria text area was left blank, the XML file should not contain <criteria> </criteria>.
6. Provide this file as input to the Java evaluate application and run the application. Expected: the output should be an XML file that has a warning about the missing component.

12. Test case for verifying the evaluation function that validates the learning domain and the domain category.

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Select Cognitive as the learning domain and Analyzing as the domain category. Enter ‘analyze’ in the performance text area. Expected: the selection should be displayed.
3. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null.
4. Provide this file as input to the Java evaluate application and run the application. Expected: the output should be an XML file that has no warnings regarding the learning domain and domain category.
5. Select Cognitive as the learning domain and Analyzing as the domain category. Enter ‘produce’ in the performance text area. Expected: the selection should be displayed.
6. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null.
7. Provide this file as input to the Java evaluate application and run the application. Expected: the output should be an XML file that has warnings about the learning domain and domain category, indicating that the domain category for the performance ‘produce’ is incorrect and should instead be ‘Creating’.

13. Test case for verifying the evaluation function that checks for the indicator

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Select Cognitive as the learning domain and Evaluating as the domain category. Ensure that an indicator is entered. Expected: the selection should be displayed.
3. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null.
4. Provide this file as input to the Java evaluate application and run the application. Expected: the output should be an XML file that has no warnings regarding the indicator.
5. Select Cognitive as the learning domain and Evaluating as the domain category. Ensure that the indicator field is blank. Expected: the selection should be displayed.
6. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null.
7. Provide this file as input to the Java evaluate application and run the application. Expected: the output should be an XML file that has warnings about the absence of an indicator; for a higher order domain category, a missing indicator should issue a warning to the user.

14. Test case for verifying the evaluation function that checks the content and learning objective compatibility

1. Enter text into all the text areas. Expected: the text should remain in the text areas.
2. Make a selection in the learning domain and domain category fields. Select a pre-existing content topic. Expected: the selection should be displayed.
3. Click on the Evaluate button. Expected: an XML file should get generated with tags for those elements that were not null.
4. Provide this file as input to the Java evaluate application and run the application. Expected: if the content’s priority is ‘Enduring understanding’, the domain category should be Analyzing, Evaluating or Creating; if the priority is ‘Important to know’, the category should be Applying or Understanding; if the priority is ‘Good to be familiar with’, the category should be Remembering.