
Computers & Education 53 (2009) 355–367


Automatic detection of learning styles for an e-learning system

Ebru Özpolat*, Gözde B. Akar
Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara 06531, Turkey

Article info

Article history: Received 2 September 2008; received in revised form 13 February 2009; accepted 16 February 2009.

Keywords: Applications in subject areas; Interactive learning environments; Cooperative/collaborative learning


Abstract

A desirable characteristic for an e-learning system is to provide the learner with the most appropriate information based on his requirements and preferences. This can be achieved by capturing and utilizing the learner model. Learner models can be extracted based on personality factors like learning styles, behavioral factors like the user's browsing history, and knowledge factors like the user's prior knowledge. In this paper, we address the problem of extracting the learner model based on the Felder–Silverman learning style model. The target learners are those studying basic science. Using the NBTree classification algorithm in conjunction with the Binary Relevance classifier, the learners are classified based on their interests. Then, learners' learning styles are detected using these classification results. Experiments are also conducted to evaluate the performance of the proposed automated learner modeling approach. The results show that the match ratio between the learner's learning style obtained using the proposed learner model and that obtained by the questionnaires traditionally used for learning style assessment is consistent for most of the dimensions of the Felder–Silverman learning style model.


1. Introduction

Today's information technology is also changing the state of the art in education. There is a tremendous trend towards e-learning to access the huge amount of information available worldwide. In order to make the best use of these e-learning systems, adaptivity and personalization, i.e. support for learner diversity and individual needs, are a must. Most of the existing studies achieve adaptive personalization using learner/user modeling. A learner model can be described as a combination of personality factors, behavioral factors and knowledge factors (Gu & Sumner, 2006; Tian, Zheng, Gong, Du, & Li, 2007). Furthermore, as a part of personality factors, learners have different learning styles, i.e. they learn in different ways. According to Felder and Silverman (1988), learners who have learning material that fits their learning styles learn more effectively and progress better. Due to these facts, learner modeling based on learning styles has attracted a lot of attention in the literature (Cha, Kim, Lee, Jang, & Yoon, 2006; Graf, Viola, & Kinshuk, 2007a; García, Amandi, Schiaffino, & Campo, 2007).

There are various methods in the literature for automatic learner modeling. These methods differ based on the attributes they use (personality factors, behavioral factors, etc.), the underlying technique (Bayesian networks, decision trees, etc.) and the underlying infrastructure (Learning Management Systems (LMSs), special user interfaces, etc.). Most of these methods are specific to an underlying LMS, using the participation of the learner in forums, chats, mail systems, and so on. In addition, there is a lack of automatic learner models that diagnose the learner's learning style using only the content of the data objects selected by the learner, rather than the learner's other behavior observed over time. Such an automatic learner model design brings independence from the underlying background, such as an LMS. As a result, simplicity and easy integrability of such a learner model into an LMS, or just into a web-based search, can be obtained. In this paper, we propose such an automatic learner modeling method.

In the proposed approach, the learner's interest is initially collected explicitly using generic queries. Then, the learner profile is constructed using a conversion unit based on keyword mapping. The learner model is built by processing the learner profile through a clustering unit and then a decision unit.

The experimental results show that the match ratio between the learner's learning characteristics obtained using the proposed learner model and those obtained by the questionnaires traditionally used for learning style assessment is high for most of the dimensions of learning style.


The rest of this paper is organized as follows. In Section 2, the literature review focuses on various approaches to learner/user modeling and learning style detection. In Section 3, background on learning styles is presented. In Section 4, the proposed learner modeling framework is introduced and an application scenario is presented. In Section 5, experimental results are given. Finally, conclusions are given in Section 6.

2. Literature review

In the literature, there are different techniques used in learner modeling, such as rule-based methods (Dolog & Nejdl, 2003; Graf et al., 2007a), fuzzy logic (Georgiou & Makry, 2004; Xu, Wang, & Su, 2002), case-based reasoning (Peña, Narzo, & de la Rosa, 2002), Bayesian networks (García et al., 2007), belief networks (Reye, 2004) and decision trees (Cha et al., 2006).

These works cover different aspects of the learner's behavior and knowledge depending on the application that uses the learner model. These aspects can be learner goals and plans, capabilities, learning style, attitudes and/or knowledge or beliefs. Our work can be placed among the automatic learner modeling approaches based on learning styles, like Cha et al. (2006), García et al. (2007) and Graf et al. (2007a). Regarding learner modeling based on learning styles, Papanikolaou and Grigoriadou (2004) concentrated on the critical issues influencing the design of adaptation based on learning style information in Adaptive Educational Hypermedia Systems. Further, adaptive systems such as MASPLANG (Peña et al., 2002) and CS383 (Carver, Howard, & Lane, 1999) detect learning styles using questionnaires. Gomes et al. (2007) describe a powerful learning management system (dotLRN), a worldwide open-source software package used for supporting e-learning and digital communities. DotLRN provides learner modeling that reflects the data gathered from learners, including their learning styles via the Felder test (Felder & Soloman, 1997), IMS-QTI questionnaire results, and their active and passive interactions (Gomes et al., 2007).

Graf and Kinshuk (2006b) propose an approach to detect learning styles in an LMS based on the behavior of learners during an on-line course. Additionally, Graf and Kinshuk (2006a) provide a practical example by extending the open-source LMS Moodle with the proposed learning style detector tool. Further, Graf et al. (2007a) propose an automatic student modeling approach for LMSs to detect learning style preferences according to the Felder–Silverman learning style model (FSLSM) (Felder & Silverman, 1988). The proposed approach is based on patterns related to features implemented in most LMSs and used by teachers and course developers, such as content objects, outlines, self-assessment tests, exercises, examples, discussion forums and the time spent on visited content objects. The relevance of these patterns to each dimension of learning style is then described, and using a simple rule-based method, students' behavior is related to their preference for semantic groups (Graf et al., 2007a).

Cha et al. (2006) propose an intelligent learning system with a specific user interface based on the FSLSM. Using this interface, learning styles are detected from learner behavior patterns on the interface using a decision tree approach.

García et al. (2007) evaluated Bayesian networks for detecting the learning style of a student in a web-based education system based on his behavior.

On the other hand, Gomes et al. (2007) discuss how information on learning styles can be used in the design of a computer-based programming environment.

For an extended review of learner/user modeling, the reader is referred to Brusilovsky and Peylo (2003).

3. Learning styles

Learner characteristics and requirements play an important role in the educational domain. Therefore, learning styles have been treated with great importance in the literature since early times (Martin, 1986). In the literature, there are mainly five learning style models that are the subject of studies in the engineering and science education literature. These are the Myers–Briggs type indicator (MBTI) (Pittenger, 1993), Kolb's model (Kolb, 1984; Kolb, Boyatzis, & Mainemelis, 2000), the Felder and Silverman learning style model (FSLSM) (Felder & Brent, 2005; Felder & Silverman, 1988), the Herrmann Brain Dominance Instrument (HBDI) (Felder & Brent, 2005; Herrmann, 1989) and the Dunn and Dunn model (Coffield, Moseley, Hall, & Ecclestone, 2004; Dunn, 2003). Among these learning style models, in our proposed work we use the FSLSM, which was constructed from experiences in engineering education. The reason for selecting the FSLSM is that it is more suitable for applications covering basic science. Furthermore, in adaptive educational systems that incorporate learning styles, the FSLSM is one of the models used most often in recent times, and some researchers even argue that it is the most appropriate model, since it describes learning style in more detail (Carver et al., 1999; Kuljis & Liu, 2005).

The FSLSM has four dimensions to distinguish between preferences of the learner, where each dimension has two scales. These dimensions are as follows (Felder, 1993):

• Learners perceive information either by sensing, via physical sensations and obvious facts, or by intuition, via theoretical, abstract approaches and memories (Sensing/Intuitive).
• Learners like learning either using visual materials and illustrations, or using verbal material such as spoken words or narrative texts (Visual/Verbal).
• Learners learn either actively, via experiments and collaboration, or reflectively, by themselves and without trying things out (Active/Reflective).
• Learners grasp a concept either sequentially, by following it step by step, or globally, by starting from the overall picture of the concept and then going into details (Sequential/Global).

Regarding the assessment of this learning style model, the Index of Learning Styles (ILS) (Felder & Soloman, 1997), a forty-four item forced-choice questionnaire, was developed in 1991 by Richard Felder and Barbara Soloman to assess preferences on the four dimensions of the FSLSM (Felder & Brent, 2005). This questionnaire is available free on the internet. Further, studies verifying the reliability and validity of the instrument are summarized in Felder and Spurlin (2005).


4. System architecture

In this section, we present the proposed system for learning style extraction based on the FSLSM. The general overview of our proposed automatic learner model is illustrated in Fig. 1.

The learner first submits a generic query for the topic he is interested in. From the returned data, the learner selects the items that best fulfill his requirements. We will refer to these selections as the learner-selected data objects (LSDOs) from now on. Then, using the conversion unit given in Fig. 2, the learner profile table is obtained. This table is then processed by the clustering unit, which is based on NBTree classification used in conjunction with the Binary Relevance classifier. The training of the clustering unit has to be done offline, using a train data set reflecting different learning styles.

The clustering unit assigns labels to each row of the learner profile table using the dimensions of the FSLSM. After that, using the decision unit, the learning style is detected and hence the learner model is obtained from the dominant scale in each dimension of the FSLSM.

The above-mentioned process is carried out for each learner only at the first logon to the system, as an offline process. In the consecutive logons of the same learner, his learner model, based on his learning characteristics, is used to return the most effective and suitable results for his submitted generic query. Moreover, taking into account that the learning style behavior of a learner changes slowly over time, applying the proposed approach at certain time intervals allows the changes in the learner's learning characteristics to be followed. Hence, a dynamically updated learner model can be obtained.
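To make the data flow in Fig. 1 concrete, the following minimal Python sketch shows how the three units could be composed; the function names and type aliases are hypothetical placeholders rather than code from the actual system.

```python
# Minimal composition sketch of the pipeline in Fig. 1 (hypothetical names, not the authors' code).
from typing import Callable, Dict, List

ProfileRow = Dict[str, str]   # attribute name -> attribute value (one learner profile table row)
Label = str                   # an FSLSM label such as "VIS-NEU"


def build_learner_model(
    lsdos: List[str],
    conversion_unit: Callable[[str], ProfileRow],
    clustering_unit: Callable[[List[ProfileRow]], List[Label]],
    decision_unit: Callable[[List[Label]], Dict[str, str]],
) -> Dict[str, str]:
    """Turn learner-selected data objects (LSDOs) into a learner model."""
    profile_table = [conversion_unit(doc) for doc in lsdos]  # Section 4.2: LSDO -> profile row
    row_labels = clustering_unit(profile_table)              # Section 4.3: one label per row
    return decision_unit(row_labels)                         # Section 4.4: dominant scale per dimension
```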

In the following subsections, we will explain each part of the proposed system in detail.

4.1. Profile table

The profile table consists of four columns corresponding to attributes, i.e. the metadata names. The attributes "LearningResourceType" and "RequiredDegreeOfCollaboration" are IEEE LOM (2002) metadata fields, and the other two attributes are extended fields. These fields are added to derive characteristics representing each dimension of the selected learning style model. Each row of the table takes predetermined values of these attributes, given in Table 1. The length of this table is constant, such that it covers all the possible combinations of the attribute values. Each learner profile table can be considered a sub-part of this profile table.
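Since the profile table is defined as all possible combinations of the Table 1 attribute values, its fixed length follows directly from the Cartesian product of those value lists (6 × 3 × 2 × 2 = 72 rows). A minimal sketch, with the Table 1 values transcribed as Python lists:

```python
from itertools import product

# Predetermined attribute values from Table 1.
LEARNING_RESOURCE_TYPE = ["exercise", "simulation-demo", "visual resources",
                          "experiment", "written text", "lecture"]
REQUIRED_DEGREE_OF_COLLABORATION = ["individual work", "team work",
                                    "with the assistance of the teacher"]
TYPE_OF_COVERAGE = ["theoretical-concepts, discover relationships",
                    "practical-known methods, applicable to real world"]
ART_OF_PRESENTING_CONTENT = ["sequentially, step by step",
                             "globally-with the big picture of subject"]

# The full profile table: every possible combination of attribute values (6*3*2*2 = 72 rows).
PROFILE_TABLE = [
    {"LearningResourceType": lrt, "RequiredDegreeOfCollaboration": rdoc,
     "TypeOfCoverage": toc, "TheArtOfPresentingContent": taopc}
    for lrt, rdoc, toc, taopc in product(LEARNING_RESOURCE_TYPE,
                                         REQUIRED_DEGREE_OF_COLLABORATION,
                                         TYPE_OF_COVERAGE,
                                         ART_OF_PRESENTING_CONTENT)
]
assert len(PROFILE_TABLE) == 72  # each learner profile table is a subset of these rows
```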

[Fig. 1. The general overview of the proposed automatic learner model module: the learner submits a query and selects data objects; the learner-selected data objects pass through the conversion unit to form the learner profile table, which the clustering unit labels to produce the input for the decision unit, whose output is the learner model.]

[Fig. 2. Conversion unit. Stage 1: domain-specific keyword finder (e.g. diagram/figure, outline, real world, pair work). Stage 2: keyword-to-attribute-value mapper onto the attributes LearningResourceType (LRT), RequiredDegreeOfCollaboration (RDOC), TypeOfCoverage (TOC) and TheArtOfPresentingContent (TAOPC), e.g. LRT = visual resource, TAOPC = sequentially, step by step, RDOC = team work, TOC = practical-known methods, applicable to real world. Stage 3: combining the attribute values and selecting the corresponding row of the learner profile table.]

Table 1. Predetermined attribute values.
- Learning resource type: exercise / simulation-demo / visual resources / experiment / written text / lecture
- Required degree of collaboration: individual work / team work / with the assistance of the teacher
- Type of coverage: theoretical-concepts, discover relationships / practical-known methods, applicable to real world
- The art of presenting content: sequentially, step by step / globally-with the big picture of subject


4.2. Conversion unit

The aim of the conversion unit is to construct the learner profile table from the LSDOs. For this, only the LSDOs are used, rather than learner behavior observed over time. The reason is to keep the automatic learner modeling approach independent of the underlying background, such as an LMS.

This conversion unit is domain specific and composed of three stages: (1) domain-specific keyword finder, (2) keyword-to-attribute-value mapper, and (3) learner profile table construction. The keyword finder is based on the dominant characteristics of each dimension (Felder & Brent, 2005; Felder & Silverman, 1988), the semantic grouping of each dimension (Graf, Viola, Kinshuk, & Leo, 2007c) and the LSDO properties (containing mostly drawings or textual data, subsections, etc.). The keywords presented in Table 2 can be extended based on a further detailed pedagogical and cognitive study.

Once the keywords are found, they are mapped to the corresponding attribute values given in Table 2. Finally, the attribute values are combined to select the corresponding row in the profile table. The collection of these rows forms the learner profile table.

Learner profile table construction from LSDOs using the conversion unit is illustrated with an example in Fig. 2. At stage 1, domain-specific keywords are found. Then, at stage 2, these keywords are mapped to the corresponding attribute values. After that, at stage 3, these attribute-value pairs are combined and the corresponding learner profile table row is constructed.
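A compact sketch of the three conversion stages is given below; the keyword-to-attribute-value map only covers the keywords appearing in the Fig. 2 example and Section 4.5, so it is illustrative rather than the authors' complete mapping.

```python
# Illustrative sketch of the three-stage conversion unit (Fig. 2).
# The keyword-to-attribute-value map below only covers the Fig. 2 example keywords;
# the authors' full mapping is not reproduced here.
KEYWORD_TO_ATTRIBUTE = {
    "figure":      ("LearningResourceType", "visual resources"),
    "diagram":     ("LearningResourceType", "visual resources"),
    "video":       ("LearningResourceType", "visual resources"),
    "outline":     ("TheArtOfPresentingContent", "sequentially, step by step"),
    "pair work":   ("RequiredDegreeOfCollaboration", "team work"),
    "interactive": ("RequiredDegreeOfCollaboration", "team work"),
    "real world":  ("TypeOfCoverage", "practical-known methods, applicable to real world"),
}


def convert_lsdo(text: str) -> dict:
    """Stage 1: find domain-specific keywords; Stage 2: map them to attribute
    values; Stage 3: combine the values into one profile table row."""
    text = text.lower()
    row = {}
    for keyword, (attribute, value) in KEYWORD_TO_ATTRIBUTE.items():
        if keyword in text:
            row[attribute] = value  # later keywords overwrite earlier ones; a full
                                    # implementation would resolve such conflicts explicitly
    return row


# Example: a document containing the keywords video, figure, outline and interactive.
print(convert_lsdo("An interactive video with a figure and a short outline."))
```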

Table 2. A sample of the relevance of keywords for groups of learning styles.
- Global (G): overall, overview, totally, abstract, whole
- Sequential (Seq): sequential, outline, first, second, flowchart, detail, subsections, parts
- Active (A): experimentally, pair work, usually, ordinary, interactive
- Reflective (R): observation, theory, theorem, challenges, alone work
- Intuitive (I): theoretically, in principle
- Sensing (S): practically, in real world applications, experimental data results
- Visual (Vis): simulations, figure, graph, chart, *.jpg, *.gif, *.bmap, *.avi, table, diagram, schema, video
- Verbal (Ver): forum, discussion board, text


It should be noted that, for learners with different learning style dimensions, the same profile table rows, i.e. the same combinations of keywords, can exist. This means the attributes of learner profile rows are dependent: the same attribute value can be used in evaluating different learning style dimensions. Hence, we permit multi-labeling in the proposed approach. This is one of the differences of the proposed automatic learner modeling approach. In Cha et al. (2006) and García et al. (2007), the relevant patterns described for each learning style dimension are disjoint from each other; in other words, a pattern is not used for any two different learning style dimensions.

4.3. Clustering unit

The aim of the clustering unit is to assign labels to each row of the learner profile table using the dimensions of the FSLSM. The clustering is done based on the predetermined classes obtained from a train data set by NBTree classification in conjunction with the Binary Relevance classifier.

For preparing the train data set, we collect LSDOs from learners with different learning styles. Then, using the conversion unit, we get the learner profile table of each learner. Furthermore, we obtain the ILS results for each learner at the same time. For the train data set, we label each row of the learner profile table with the corresponding ILS result. Hence, the train data set is composed of correctly labeled rows. However, as stated previously, learners with different learning styles can have common LSDOs, which causes two rows with the same content to have different labels. As a result, we get multi-label classification input. In the literature, multi-label classification methods are used to handle multiple labels (Boutell, Luo, Shen, & Brown, 2004; Zhang & Zhou, 2005).

These methods can mainly be grouped into two categories: problem transformation methods and algorithm adaptation methods (Tsoumakas & Katakis, 2007a). Problem transformation methods transform the multi-label classification problem into one or more single-label classification or regression problems, whereas algorithm adaptation methods extend specific learning algorithms in order to handle multi-label data directly. The main problem transformation methods are the Label Powerset (LP), the Binary Relevance classifier (BR), and problem transformation method 6 (PT-6) (Tsoumakas & Katakis, 2007a).

In LP, a new single label is used for each different set of labels occurring in the multi-labeled samples.

The BR method learns $|L|$ binary classifiers $H_l : X \rightarrow \{l, \lnot l\}$, where $l$ represents each different label in $L$, $H_l$ represents a binary classifier and $X$ represents a transformed multi-labeled sample. It transforms the original data set into $|L|$ data sets, where each of them contains all examples of the original data set, labeled as $l$ if the label set of the original example contained $l$, and as $\lnot l$ otherwise.

In PT-6, each example $x$ with a multi-label set $Y$ is decomposed into $|L|$ examples $(x, l, Y[l])$ for all $l \in L$, where $Y[l] = 1$ if $l \in Y$, and $Y[l] = -1$ otherwise.
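As an illustration of the BR transformation actually used later in the clustering unit, the following sketch reproduces the two-row example of Table 3 and derives the |L| binary data sets of Tables 5–8; LP and PT-6 can be written analogously.

```python
# Binary Relevance (BR) transformation: one binary data set per label.
# Example input corresponds to Table 3 (two profile rows with their label sets).
LABELS = ["VIS-NEU", "GLO-INT-NEU", "REF-NEU", "SEQ-NEU"]
dataset = [
    ("row 1", {"VIS-NEU", "GLO-INT-NEU"}),   # features abstracted to a row id
    ("row 2", {"REF-NEU", "SEQ-NEU"}),
]


def binary_relevance(dataset, labels):
    """Return |L| single-label data sets; in data set l an example is labelled
    'l' if its original label set contains l and 'not-l' otherwise (Tables 5-8)."""
    transformed = {}
    for label in labels:
        transformed[label] = [
            (x, label if label in y else f"not-{label}") for x, y in dataset
        ]
    return transformed


for label, rows in binary_relevance(dataset, LABELS).items():
    print(label, rows)
```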

An example of each of the methods is given in Tables 4–9, where the original multi-label input is given in Table 3.

The last step of the classification algorithm is to classify the single-labeled rows. If the attributes of these labeled rows are not totally independent (which is the case in our problem), it is common to use NB or NBTree, a hybrid of Naive Bayes (NB) classifiers and decision tree classifiers (Kohavi, 1996). The NB algorithm estimates the probability of each class using Bayes' rule. The NBTree algorithm, on the other hand, is a hybrid algorithm: it is similar to classical decision trees, except that the leaf nodes are Naive Bayes categorizers instead of nodes predicting a single class. The NBTree algorithm is appropriate when many attributes are relevant for classification and the attributes are not necessarily independent. Regarding decision tree induction, ID3 and C4.5 are the basic algorithms, where C4.5 is an extension of ID3 that also covers unavailable values, continuous attribute value ranges, pruning of decision trees, rule derivation, and so on (Kohavi, 1996).

Hence, the proposed clustering unit is composed of a classification algorithm in conjunction with a multi-label classification method. In the literature, it has been shown that different classification algorithms in conjunction with different multi-label classification methods yield different performance results depending on the properties of the experimental data (Tsoumakas & Katakis, 2007a). Due to this, in order to choose the most reliable classification for our domain, we evaluate different classification algorithms with different multi-label classification methods using the cross-validation technique and obtain performance results based on evaluation measures such as hamming loss, precision, accuracy and recall (Tsoumakas & Katakis, 2007a). Evaluation measures can be categorized into two groups: example-based evaluation measures and label-based evaluation measures (Tsoumakas & Vlahavas, 2007b). Example-based evaluation measures (e.g. hamming loss, accuracy, precision, and recall) use the difference between the actual and predicted label sets for each example and average the results over all examples. Label-based evaluation measures (e.g. accuracy, precision, and recall) calculate a binary evaluation separately for each label and micro/macro average the results across all labels (Tsoumakas & Vlahavas, 2007b).

The example-based evaluation measures used in the performance results are defined in the following equations. In all equations, $D$ is the multi-label data set consisting of $|D|$ multi-label examples $(x_i, Y_i)$, $i = 1, \ldots, |D|$, with $Y_i \subseteq L$. $H$ is a multi-label classifier and $Z_i = H(x_i)$ is the set of labels predicted by $H$ for example $x_i$.

Table 3. Original multi-label data set.

| Row   | VIS–NEU | GLO–INT–NEU | REF–NEU | SEQ–NEU |
|-------|---------|-------------|---------|---------|
| Row 1 | X       | X           |         |         |
| Row 2 |         |             | X       | X       |

Table 4. Transformed data set with LP.

| Row   | (VIS–NEU and GLO–INT–NEU) | (REF–NEU and SEQ–NEU) |
|-------|---------------------------|-----------------------|
| Row 1 | X                         |                       |
| Row 2 |                           | X                     |

Table 5. Transformed data set with BR for label VIS–NEU.

| Row   | VIS–NEU | ¬VIS–NEU |
|-------|---------|----------|
| Row 1 | X       |          |
| Row 2 |         | X        |

Table 6. Transformed data set with BR for label GLO–INT–NEU.

| Row   | GLO–INT–NEU | ¬GLO–INT–NEU |
|-------|-------------|--------------|
| Row 1 | X           |              |
| Row 2 |             | X            |

Table 7. Transformed data set with BR for label REF–NEU.

| Row   | REF–NEU | ¬REF–NEU |
|-------|---------|----------|
| Row 1 |         | X        |
| Row 2 | X       |          |

Table 8. Transformed data set with BR for label SEQ–NEU.

| Row   | SEQ–NEU | ¬SEQ–NEU |
|-------|---------|----------|
| Row 1 |         | X        |
| Row 2 | X       |          |

Table 9. Transformed data set with PT-6.

| Row   | l           | Y[l] |
|-------|-------------|------|
| Row 1 | VIS–NEU     | 1    |
| Row 1 | GLO–INT–NEU | 1    |
| Row 1 | REF–NEU     | −1   |
| Row 1 | SEQ–NEU     | −1   |
| Row 2 | VIS–NEU     | −1   |
| Row 2 | GLO–INT–NEU | −1   |
| Row 2 | REF–NEU     | 1    |
| Row 2 | SEQ–NEU     | 1    |


Hamming loss is defined as presented in Eq. (1):

$$\mathrm{HammingLoss}(H, D) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{|Y_i \, \Delta \, Z_i|}{|L|} \qquad (1)$$

where $\Delta$ stands for the symmetric difference of two sets and corresponds to the XOR operation in Boolean logic (Schapire & Singer, 2000). The remaining measures are defined in Godbole and Sarawagi (2004) as given in Eqs. (2)–(4).

Accuracy is defined as presented in Eq. (2):

$$\mathrm{Accuracy}(H, D) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{|Y_i \cap Z_i|}{|Y_i \cup Z_i|} \qquad (2)$$

Precision is defined as presented in Eq. (3):

$$\mathrm{Precision}(H, D) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{|Y_i \cap Z_i|}{|Z_i|} \qquad (3)$$

Recall is defined as presented in Eq. (4):

$$\mathrm{Recall}(H, D) = \frac{1}{|D|} \sum_{i=1}^{|D|} \frac{|Y_i \cap Z_i|}{|Y_i|} \qquad (4)$$
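Eqs. (1)–(4) translate directly into set operations on the actual and predicted label sets; the sketch below is a plain transcription evaluated on toy label sets (not the paper's data).

```python
# Example-based multi-label evaluation measures, Eqs. (1)-(4).
def hamming_loss(Y, Z, num_labels):
    return sum(len(y ^ z) / num_labels for y, z in zip(Y, Z)) / len(Y)

def accuracy(Y, Z):
    return sum(len(y & z) / len(y | z) for y, z in zip(Y, Z)) / len(Y)

def precision(Y, Z):
    return sum(len(y & z) / len(z) for y, z in zip(Y, Z)) / len(Y)

def recall(Y, Z):
    return sum(len(y & z) / len(y) for y, z in zip(Y, Z)) / len(Y)

# Toy example: actual label sets Y_i and predicted label sets Z_i for two rows.
Y = [{"VIS-NEU", "GLO-INT-NEU"}, {"REF-NEU", "SEQ-NEU"}]
Z = [{"VIS-NEU"},                {"REF-NEU", "SEQ-NEU", "NEU"}]
print(hamming_loss(Y, Z, num_labels=10), accuracy(Y, Z), precision(Y, Z), recall(Y, Z))
```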

Among the performance results, the configuration with maximum accuracy and precision and minimum hamming loss should be selected. In order to choose the algorithm for our clustering unit, we cluster our train data set using three different multi-label classification methods (LP, BR, and PT-6) with four different classification algorithms (NBTree, NB, ID3, and C4.5).

In obtaining the results, we used our train data set, where the correct class labels are assigned to each row from the ILS results. The class labels represent the four dimensions of the FSLSM. The class labels are presented with the following abbreviations: REF (Reflective), ACT (Active), INT (Intuitive), SEN (Sensing), VIS (Visual), VER (Verbal), SEQ (Sequential), GLO (Global), and NEU (Neutral/Balanced). NEU is used only if the learner is balanced for a dimension. For example, if the learner is strong on the Reflective scale of the processing dimension and strong on the Visual scale of the input dimension and balanced in the remaining two dimensions, then this person is labeled as REF–VIS–NEU. The properties of our train data set are presented in Table 10. In Table 10, label cardinality represents the average number of labels of the multi-labeled rows and label density is the label cardinality divided by the number of labels. Different multi-label data sets with the same label cardinality but different label densities can cause different performance results in multi-label classification methods. Therefore, we present the properties of the train data set.

Then, we applied the different multi-label classification methods to our train data set and obtained a different transformed data set for each multi-label classification method (LP, BR, PT-6). After that, for each of the transformed data sets, we ran the different classification algorithms (ID3, C4.5, NB, and NBTree) separately and obtained the predicted label sets. We evaluated the performance with Eqs. (1)–(4) using the predicted and assigned labels for each case, and the results are shown in Tables 11 and 12.

Table 11 presents the performance results based on example-based evaluation and Table 12 presents the performance results based on label-based evaluation using microaveraging. Microaveraging is done by collecting the decisions for all labels, computing the contingency table and evaluating it.

By evaluating Tables 11 and 12, we get the maximum accuracy for the Binary Relevance classifier with the NBTree classifier. Also, for this case, the hamming loss is the minimum and the precision is the maximum among all configurations. As a result, based on cross-validation with our train data set, we use the NBTree classification algorithm in conjunction with the Binary Relevance classifier inside the clustering unit.
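A possible realization of such a clustering unit with off-the-shelf tools is sketched below. NBTree (Kohavi, 1996) is not available in scikit-learn, so a plain decision tree is used as a stand-in for each per-label classifier; MultiOutputClassifier with a binary indicator target is exactly the Binary Relevance scheme, and the tiny training set shown is invented for illustration.

```python
# Sketch of a Binary-Relevance clustering unit. NBTree (Kohavi, 1996) is not
# available in scikit-learn, so a plain decision tree stands in for it here.
from sklearn.multioutput import MultiOutputClassifier     # one binary classifier per label = BR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer, OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Tiny illustrative training set: profile-table rows (4 discrete attributes)
# and their multi-label FSLSM annotations derived from ILS results.
X_train = [
    ["visual resources", "individual work", "theoretical", "globally"],
    ["visual resources", "team work",       "practical",   "sequentially"],
    ["written text",     "individual work", "theoretical", "sequentially"],
]
y_train = [{"VIS-NEU", "GLO-INT-NEU"}, {"VIS-NEU", "SEQ-NEU"}, {"REF-NEU", "SEQ-NEU"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(y_train)                            # label sets -> binary indicator matrix

clustering_unit = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),               # discrete attributes -> binary features
    MultiOutputClassifier(DecisionTreeClassifier()),      # Binary Relevance over the labels
)
clustering_unit.fit(X_train, Y)

# Run time: predict a label set for each row of an unseen learner profile table.
test_rows = [["visual resources", "team work", "practical", "sequentially"]]
predicted = mlb.inverse_transform(clustering_unit.predict(test_rows))
print(predicted)
```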

Once the clustering unit has been trained as described above as an offline process, at run time it works as follows for the test data. First, the LSDOs from a learner with unknown learning style are converted to learner profile rows. Then, these learner profile rows are classified using the clustering unit, and for each learner profile row the predicted label set is obtained.

4.4. Decision unit

The output of the decision unit is the learner model. The learner model is presented with four dimensions, where each dimension represents the learner's most dominant characteristic for that dimension. The learner model is obtained using the decision unit as follows.

Table 10. Properties of the train data set.

| Rows before multi-labeling | Multi-labeled rows | Numeric attributes | Discrete attributes | Labels | Label density | Label cardinality |
|---|---|---|---|---|---|---|
| 244 | 68 | 0 | 4 | 10 | 0.359 | 3.59 |

Table 11. Performance results of the train data set with different classification algorithms in conjunction with multi-label classification methods – example-based evaluation.

| Measure (example-based) | LP ID3 | LP C4.5 | LP NB | LP NBTree | BR ID3 | BR C4.5 | BR NB | BR NBTree | PT-6 ID3 | PT-6 C4.5 | PT-6 NB | PT-6 NBTree |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Hamming loss | 0.206 | 0.245 | 0.270 | 0.249 | 0.188 | 0.168 | 0.232 | 0.153 | 0.187 | 0.168 | 0.282 | 0.172 |
| Accuracy | 0.572 | 0.514 | 0.446 | 0.488 | 0.591 | 0.612 | 0.504 | 0.633 | 0.594 | 0.615 | 0.407 | 0.594 |
| Precision | 0.727 | 0.686 | 0.673 | 0.689 | 0.715 | 0.835 | 0.725 | 0.855 | 0.715 | 0.832 | 0.628 | 0.827 |
| Recall | 0.720 | 0.648 | 0.557 | 0.605 | 0.746 | 0.709 | 0.625 | 0.694 | 0.749 | 0.715 | 0.578 | 0.665 |

Table 12. Performance results of the train data set with different classification algorithms in conjunction with multi-label classification methods – label-based evaluation (microaveraging).

| Measure (label-based, microaveraging) | LP ID3 | LP C4.5 | LP NB | LP NBTree | BR ID3 | BR C4.5 | BR NB | BR NBTree | PT-6 ID3 | PT-6 C4.5 | PT-6 NB | PT-6 NBTree |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 0.794 | 0.755 | 0.730 | 0.751 | 0.812 | 0.832 | 0.768 | 0.847 | 0.813 | 0.832 | 0.718 | 0.828 |
| Precision | 0.715 | 0.671 | 0.667 | 0.682 | 0.738 | 0.849 | 0.727 | 0.876 | 0.738 | 0.845 | 0.623 | 0.851 |
| Recall | 0.712 | 0.633 | 0.536 | 0.595 | 0.755 | 0.671 | 0.598 | 0.694 | 0.758 | 0.679 | 0.592 | 0.645 |

Table 13. Rule table for learning style diagnosis. X and Y denote the two strong scales of a dimension, NEU the balanced case, ε the predetermined threshold and Δ the difference of scores; min(#) is the scale with the lowest total score, which is determined and ignored first. Each rule maps to the diagnosed learning style dimension on the right.
- min(#) = NEU, Δ = |#X − #Y| > ε and #X > #Y → X
- min(#) = NEU, Δ = |#X − #Y| > ε and #Y > #X → Y
- min(#) = NEU, Δ = |#X − #Y| ≤ ε → NEU
- min(#) = X, Δ = |#Y − #NEU| ≤ ε → Y
- min(#) = X, Δ = |#Y − #NEU| > ε and #NEU > #Y → NEU
- min(#) = X, Δ = |#Y − #NEU| > ε and #Y > #NEU → Y
- min(#) = Y, Δ = |#X − #NEU| ≤ ε → X
- min(#) = Y, Δ = |#X − #NEU| > ε and #NEU > #X → NEU
- min(#) = Y, Δ = |#X − #NEU| > ε and #X > #NEU → X

Table 14. Learner profile.

| Learning resource type | Required degree of collaboration | Type of coverage | The art of presenting content |
|---|---|---|---|
| Visual resource | Individual work | Theoretical-concepts, discover relationships | Globally-with the big picture of subject |
| Visual resource | Team work | Practical-known methods, applicable to real world | Sequentially, step by step |

Table 15. Output of the clustering unit.

| Row of tested learner | Predicted label set | Assigned predicted label with the highest confidence |
|---|---|---|
| 1 | {NEU, SEQ–NEU, VIS–NEU, REF–SEN–VIS–NEU} | VIS–NEU |
| 2 | {NEU, VIS–NEU} | VIS–NEU |


After classifying all the learner profile table rows, a set of possible labels for each row, as stated in the previous section, is obtained. Among them, the label with the highest confidence is assigned to the respective row, as shown by the example in Table 15. Then, the total score for each scale is calculated using the class label counts.

The learner is modeled according to his learning style using these total scores. Each learning style dimension is determined separately using rules over the total scores of the corresponding dimension. The rules for diagnosing the learning style are given in Table 13, denoting the two strong scales of a dimension as X and Y, the balanced case as NEU, the predetermined threshold as ε and the difference of scores as Δ.

In Table 13, the threshold is ε = ε_predetermined_reference · R(compared total counts), where ε_predetermined_reference = Δ_predetermined_reference / R_predetermined_reference(compared total counts). The predetermined reference is selected among the train data set rows according to the minimum difference value. The minimum difference value is defined as the difference between the total scores of either the two strong scales, or of a strong scale and the balanced case, of a dimension.

As stated before, we suggest using this learner model in the consecutive logons of the same learner to the system. Hence, the learner model can be used to filter the results of a submitted generic query, such that the learner gets the query results that are more related and appropriate to his learning characteristics. Such filtering has to be done in an application-specific way, since the interdependency between the attributes and the learning style dimensions has to be known.
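Read operationally, Table 13 says: discard the scale with the lowest total score, then choose between the two remaining scales using the threshold ε, preferring the surviving strong scale when the difference is within ε. A transcription for a single dimension is sketched below (the counts and ε in the example call are made up for illustration).

```python
# Decision rules of Table 13 for one FSLSM dimension.
# x / y are the two strong scales (e.g. "VIS" / "VER"), "NEU" is balanced.
def diagnose_dimension(counts: dict, x: str, y: str, epsilon: float) -> str:
    """counts maps each of x, y and 'NEU' to its total score from the clustering unit."""
    lowest = min(counts, key=counts.get)          # determine and ignore min(#)
    a, b = [s for s in (x, y, "NEU") if s != lowest]
    delta = abs(counts[a] - counts[b])
    if lowest == "NEU":
        # choose between the two strong scales
        return max(a, b, key=counts.get) if delta > epsilon else "NEU"
    # one strong scale was eliminated; prefer the remaining strong scale within the threshold
    strong = x if lowest == y else y
    if delta <= epsilon:
        return strong
    return max(a, b, key=counts.get)


# Illustrative call (the counts and epsilon are hypothetical, not taken from the paper):
print(diagnose_dimension({"VIS": 7, "VER": 1, "NEU": 4}, "VIS", "VER", epsilon=2.0))  # -> VIS
```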

4.5. Application scenario of learner modeling

In this section, we illustrate the usability of the proposed automatic learner model with a web-based search engine. The usage of web-based search engines for educational purposes has also been studied by PoSTech!, Seyedarabi, Peterson, and Keenoy (2005), Seyedarabi (2006) and Seyedarabi (2008). The Personalized Search Tool for Teachers, called PoSTech, is used as a front-end to search engines and allows teachers to specify their searches according to their own pedagogical vocabulary. This tool is designed for teachers as a mediator between the educational information provided by search engines and students having different requirements.

Fig. 3. Query submission and data object selection.


The scenario can be described as follows. The learner requires information about "simultaneous linear equations" and submits a query with the keywords "simultaneous linear equations" to the system, as given in Fig. 3.

Among the returned result set, the learner selects the data objects, the LSDOs. Thus, we get the LSDOs as input for the proposed automatic learner modeling module. If the learner selects a document with mostly visual data, our conversion unit starts to work on this document to find the domain-specific keywords. It finds the following keywords: video, figure, outline, and interactive. Then, in the next part of the conversion unit, these keywords are mapped to the related attribute values. In the last part of the conversion unit, these attribute values are combined and the corresponding profile table row is selected. This selected row will be a row of the learner profile table.

In this scenario, the LSDOs correspond to the learner profile table shown in Table 14.

In the next step, for the clustering unit, we assume that we have built our NBTree classifier in conjunction with the Binary Relevance classifier on the previously prepared train data set as an offline preprocess. Then, we classify the learner profile table rows using the clustering unit and obtain the predicted label sets. We then assign the predicted label with the highest confidence to each row. The predicted label sets and the assigned labels are illustrated in Table 15.

Hence, we get the total scores for each scale of each dimension of the learning style of the learner. Using these total count values, we can model the learner according to his dominant scale in each dimension of the learning style. We can conclude that this learner is dominantly visual and balanced in the remaining dimensions.

Additionally, regarding the usage of the learner model in filtering the user's future queries, Figs. 4 and 5 illustrate the comparison of the "simultaneous linear equations" search results with and without a learner model in the background for a visual user. Fig. 4 shows the results for a general search engine, Google, and Fig. 5 shows the results for the Warwick search engine (Warwick search engine). The Warwick search service is developed at Warwick University, UK, and can be used to find the on-line e-learning material available on campus.

5. Experimental results

To evaluate the learner model results obtained with the proposed approach, i.e. the correctness of identifying learning styles based on the FSLSM by using our proposed automatic learner model, we selected the ILS (Felder & Soloman, 1997) results as reference, since the ILS questionnaire is an often used and well-investigated instrument to identify learning styles based on the FSLSM. Felder and Spurlin (2005) summarize studies concentrating on the analysis of the response data of the ILS questionnaire regarding the distribution of preferences for each dimension, and also on verifying the reliability and validity of the instrument. Although there are a few studies (Graf, Viola, & Kinshuk, 2007b; Van Zwanenberg, Wilkinson, & Anderson, 2000) where open issues arose, like weak reliability and validity and also dependencies between some learning styles, Felder and Spurlin (2005) concluded that the ILS questionnaire is a reliable and valid instrument and is suitable for identifying learning styles according to the FSLSM.

The ILS is a 44-item forced-choice questionnaire developed for identifying learning styles based on the FSLSM. As explained before, each learner has a preference on a scale for each of the four dimensions. The strength of the preference on a scale of each dimension is described with a score value ranging between 1 and 11 in steps of ±2 (Felder & Brent, 2005; Graf et al., 2007b). This range arises from the 11 forced-choice items associated with each dimension. Each answer counts either +1 (answer a) or −1 (answer b). Answer a corresponds to a preference for the first scale of each dimension (Active, Sensing, Visual, or Sequential) and answer b to the second scale of each dimension (Reflective, Intuitive, Verbal, or Global).

Fig. 4. Comparison of search results with and without a learner model over the Google search engine.

Fig. 5. Comparison of search results with and without a learner model over the Warwick search engine.

Furthermore, as stated on the questionnaire results page on the web, scores are presented on only one scale of a dimension, and the score values have the following meanings: if the score value ranges between 1 and 3, the learner is balanced on the two scales of this dimension; if the score value ranges between 5 and 7, the learner has a moderate tendency towards the scored scale of the dimension; if the score value ranges between 9 and 11, the learner has a strong preference for the scored scale of the dimension.

In our experimental results, we used three possibilities for each dimension: the two strong scales of the dimension (corresponding to the ILS range 5–11) and balanced, i.e. neutral (corresponding to the ILS range 1–3). Accordingly, the four dimensions of learning style are evaluated as follows: the perception dimension as (SEN/INT/NEU), the input dimension as (VIS/VER/NEU), the understanding dimension as (GLO/SEQ/NEU), and the processing dimension as (ACT/REF/NEU).
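This three-way coding can be reproduced directly from an ILS result; a minimal sketch, assuming the ILS result is given as the reported scale together with its odd-valued strength (as on the ILS results page):

```python
# Discretisation of an ILS score into the three classes used in the experiments.
# The input is the scale the score is reported on and its strength (1, 3, 5, 7, 9 or 11).
def ils_to_class(scale: str, strength: int) -> str:
    if strength <= 3:          # ILS range 1-3: balanced on the two scales
        return "NEU"
    return scale               # ILS range 5-11: preference on the reported scale

print(ils_to_class("VIS", 7), ils_to_class("SEQ", 3))   # -> VIS NEU
```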

Regarding the experiments with the proposed approach, the train data set is prepared from LSDOs collected from 10 graduate students. These 10 students are selected such that each of them has a different learning style, in order to enrich the train data set. There are 244 labeled rows in the train data set. The properties of the train data set are illustrated in Table 10. Common selections by different students cause some rows to belong to multiple labels. Therefore, we convert these 244 labeled rows to 68 multi-labeled rows, where the labels are NEU, REF–NEU, SEN–VIS–NEU, INT–VIS–NEU, SEQ–NEU, VIS–NEU, REF–SEQ–NEU, GLO–INT–NEU, GLO–ACT–VIS–NEU, and SEN–REF–VIS–NEU.
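The collapse from 244 single-labeled rows to 68 multi-labeled rows amounts to grouping identical profile rows and taking the union of their labels; a minimal sketch with invented rows:

```python
# Collapse single-labeled profile rows into multi-labeled rows: identical rows
# selected by learners with different learning styles receive the union of their labels.
from collections import defaultdict

def to_multilabel(rows):
    """rows: iterable of (profile_row_tuple, label). Returns {row: set_of_labels}."""
    merged = defaultdict(set)
    for row, label in rows:
        merged[row].add(label)
    return dict(merged)

single_labeled = [
    (("visual resources", "team work", "practical", "sequentially"), "VIS-NEU"),
    (("visual resources", "team work", "practical", "sequentially"), "SEQ-NEU"),
    (("written text", "individual work", "theoretical", "sequentially"), "REF-NEU"),
]
print(to_multilabel(single_labeled))   # two distinct rows, one carrying two labels
```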

Using the performance results of the train data set obtained with the different classification algorithms in conjunction with the multi-label classification methods, as shown in Tables 11 and 12, we build our selected classifier with the train data set, i.e. the NBTree classifier in conjunction with BR.

[Fig. 6. Match ratio of the ILS results and our proposed learner model results: the number of users whose results match on only 1, any 2, any 3 and all 4 dimensions of the learning style model is 3, 9, 13 and 5, respectively.]


For the test part, a group of 30 graduate students, corresponding to 599 unlabeled rows, is used. For each learner in the test data set, we processed the learner profile rows through the clustering unit to obtain the decision unit input. Hence, we get the predicted label set for these unlabeled rows using the classifier of the clustering unit. Then, we assigned the predicted class label with the highest confidence value to each unlabeled row. Thus, the input of the learner model is ready.

In Table 16, we present the comparison of the ILS questionnaire result and the learning style diagnosed via our learner model for each tested learner. Mismatches are the dimensions for which the ILS result and the learner model result differ.

Using the information in Table 16, the following statistics can be derived, as presented in Figs. 6 and 7.

Fig. 6 shows the match ratio by presenting the relation between the number of dimensions of the learning style model for which the ILS and our learner model results are the same, and the number of learners. Looking at Fig. 6, it can be concluded that, in the worst case, for three learners out of 30 only one dimension of the learning style is found to be the same as in the ILS results. On the other hand, for five learners out of 30, all dimensions of the learning style are found to be the same as in the ILS results.

Fig. 7 shows the number of misses and matches for each dimension of the learning style model. Looking at Fig. 7, it can be concluded that, for this experimental learner group, most misses occur in the input dimension, i.e. its success ratio is 53.3%. On the other hand, the processing dimension has a success ratio of 70%, and the remaining two dimensions, namely the understanding and perception dimensions, have the same success ratio of 73.3%.
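The per-dimension success ratios reported above are simple match counts over the 30 ILS/learner-model pairs of Table 16; for example, computed as in the following sketch (shown on invented data, not the paper's):

```python
# Per-dimension match ratio between ILS results and learner-model results (cf. Fig. 7).
def match_ratio(ils_results, model_results):
    matches = sum(1 for a, b in zip(ils_results, model_results) if a == b)
    return matches / len(ils_results)

# Toy illustration with five learners on the input dimension (not the paper's data):
ils   = ["VIS", "NEU", "VIS", "VER", "NEU"]
model = ["VIS", "VIS", "VIS", "NEU", "NEU"]
print(f"{match_ratio(ils, model):.0%}")   # -> 60%
```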

[Fig. 7. Miss ratio of the dimensions of the learning style model: the numbers of matches/misses are 22/8 for ACT/REF/NEU, 21/9 for SEN/INT/NEU, 21/9 for SEQ/GLO/NEU and 16/14 for VIS/VER/NEU.]

Table 16. Comparison of ILS results to learner model results. Perception = SEN/INT/NEU, Input = VIS/VER/NEU, Understanding = GLO/SEQ/NEU, Processing = ACT/REF/NEU; for each dimension, the ILS result is followed by the learner model result.

| Tested learner | Rows in profile table | Perception ILS | Perception model | Input ILS | Input model | Understanding ILS | Understanding model | Processing ILS | Processing model |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 36 | NEU | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 2 | 34 | NEU | NEU | NEU | VIS | NEU | NEU | REF | NEU |
| 3 | 36 | NEU | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 4 | 47 | NEU | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 5 | 37 | INT | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 6 | 20 | SEN | NEU | VIS | VIS | NEU | NEU | REF | NEU |
| 7 | 14 | NEU | NEU | VIS | VIS | GLO | NEU | ACT | NEU |
| 8 | 6 | NEU | NEU | NEU | VIS | SEQ | NEU | REF | NEU |
| 9 | 6 | SEN | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 10 | 8 | INT | NEU | NEU | VIS | GLO | NEU | NEU | NEU |
| 11 | 18 | NEU | NEU | NEU | NEU | NEU | NEU | NEU | NEU |
| 12 | 9 | NEU | NEU | NEU | VIS | SEQ | NEU | NEU | NEU |
| 13 | 12 | NEU | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 14 | 17 | SEN | NEU | VIS | VIS | NEU | NEU | REF | NEU |
| 15 | 6 | NEU | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 16 | 20 | NEU | NEU | NEU | VIS | SEQ | NEU | NEU | NEU |
| 17 | 15 | NEU | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 18 | 22 | SEN | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 19 | 26 | NEU | NEU | NEU | NEU | NEU | NEU | ACT | NEU |
| 20 | 19 | NEU | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 21 | 10 | NEU | NEU | VIS | VIS | GLO | NEU | NEU | NEU |
| 22 | 8 | NEU | NEU | VIS | VIS | NEU | NEU | ACT | NEU |
| 23 | 28 | INT | NEU | VER | NEU | NEU | NEU | NEU | NEU |
| 24 | 32 | NEU | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 25 | 10 | NEU | NEU | NEU | VIS | SEQ | NEU | REF | NEU |
| 26 | 15 | NEU | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 27 | 22 | INT | NEU | VIS | VIS | NEU | NEU | NEU | NEU |
| 28 | 18 | SEN | NEU | VIS | VIS | GLO | NEU | NEU | NEU |
| 29 | 24 | NEU | NEU | NEU | VIS | NEU | NEU | NEU | NEU |
| 30 | 16 | NEU | NEU | NEU | NEU | SEQ | NEU | NEU | NEU |

Table 17. Comparison results.

| | Perception dimension (SEN/INT) (%) | Understanding dimension (GLO/SEQ) (%) | Processing dimension (ACT/REF) (%) | Input dimension (VIS/VER) (%) |
|---|---|---|---|---|
| Our proposed work (25 tested learners) | 73.3 | 73.3 | 70 | 53.3 |
| García et al. (2007) (27 tested learners) | 77 | 63 | 58 | Considered not suitable to evaluate |
| Cha et al. (2006) (25 test data sets) | 77 | 63.4 | 56.7 | 84 |


In order to put these success ratios for this experimental group into perspective, we compare our FSLSM learning style diagnosis results with the results of other works available in the literature; the comparison is presented in Table 17.

6. Conclusion and future work

In this paper, we propose an automatic learner modeling approach based on diagnosing and classifying learning styles by NBTree classification used in conjunction with the Binary Relevance classifier. The proposed learner model uses only the content of the data objects selected by the learner, not the learner's other behavior observed over time. Such an automatic learner model design brings independence from the underlying background, such as an LMS. As a result, simplicity and easy integrability of such a learner model into an LMS, or just into a web-based search, can be obtained. The experimental results show that the match ratio between the learner's learning characteristics obtained using the proposed learner model and those obtained by the questionnaires traditionally used for learning style assessment is high for most of the dimensions of learning style.

To extend this kind of learner model, the keyword mapping part and the profile table structure can be enhanced depending on the application area.

References

Boutell, M. R., Luo, J., Shen, X., & Brown, C. M. (2004). Learning multi-label scene classification. Pattern Recognition, 37(9), 1757–1771.
Brusilovsky, P., & Peylo, C. (2003). Adaptive and intelligent web-based educational systems. International Journal of Artificial Intelligence in Education, 13(2–4, Special Issue on Adaptive and Intelligent Web-based Educational Systems), 159–172.
Carver, C. A., Howard, R. A., & Lane, W. D. (1999). Addressing different learning styles through course hypermedia. IEEE Transactions on Education, 42(1), 33–38.
Cha, H. J., Kim, Y. S., Lee, J. H., Jang, Y. M., & Yoon, T. B. (2006). Learning styles diagnosis based on user interface behaviors for the customization of learning interfaces in an intelligent tutoring system. In Intelligent tutoring systems (pp. 513–524). Berlin: Springer.
Coffield, F., Moseley, D., Hall, E., & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning. A systematic and critical review. London: Learning and Skills Research Centre. Available from: <http://www.lsneducation.org.uk/research/reports/>.
Dolog, P., & Nejdl, W. (2003). Personalisation in Elena: How to cope with personalisation in distributed e-learning networks. In Proceedings of the international conference on worldwide coherent workforce, satisfied users – new services for scientific information.
dotLRN. Available from: <http://dotlrn.org/>.
Dunn, R. (2003). The Dunn and Dunn learning style model and its theoretical cornerstone. In R. Dunn & S. Griggs (Eds.), Synthesis of the Dunn and Dunn learning styles model research: Who, what, when, where and so what (pp. 1–6). New York: St. John's University.
Felder, R. M. (1993). Reaching the second tier: Learning and teaching styles in college science education. College Science Teaching, 23(5), 286–290.
Felder, R. M., & Brent, R. (2005). Understanding student differences. Journal of Engineering Education, 94(1), 57–72.
Felder, R., & Silverman, L. (1988). Learning and teaching styles in engineering education. Journal of Engineering Education, 78(7), 674–681.
Felder, R. M., & Soloman, B. A. (1997). Index of learning styles questionnaire. Available from: <http://www.engr.ncsu.edu/learningstyles/ilsweb.html>.
Felder, R. M., & Spurlin, J. (2005). Applications, reliability and validity of the index of learning styles. International Journal of Engineering Education, 21(1), 103–112.
García, P., Amandi, A., Schiaffino, S., & Campo, M. (2007). Evaluating Bayesian networks' precision for detecting students' learning styles. Computers and Education, 49(3), 794–808.
Georgiou, D. A., & Makry, D. (2004). A learner's style and profile recognition via fuzzy cognitive map. In Proceedings of the 4th IEEE international conference on advanced learning technologies (ICALT'04) (pp. 36–40).
Godbole, S., & Sarawagi, S. (2004). Discriminative methods for multi-labeled classification. In Proceedings of the 8th Pacific-Asia conference on knowledge discovery and data mining (PAKDD 2004).
Gomes, A., Santos, A., Carmo, L., & Mendes, A. J. (2007). Learning styles in an e-learning tool. In International conference on engineering education (ICCE 2007).
Graf, S., & Kinshuk (2006a). Enabling learning management systems to identify learning styles. In Proceedings of the international conference on interactive computer aided learning (ICL 06).
Graf, S., & Kinshuk (2006b). An approach for detecting learning styles in learning management systems. In Proceedings of the international conference on advanced learning technologies (ICALT 06) (pp. 161–163).
Graf, S., Viola, S. R., & Kinshuk (2007a). Automatic student modelling for detecting learning style preferences in learning management systems. In Proceedings of the IADIS international conference on cognition and exploratory learning in digital age (CELDA 2007) (pp. 172–179).
Graf, S., Viola, S. R., & Kinshuk (2007b). Detecting learners' profiles based on the index of learning styles data. In Proceedings of the international workshop on intelligent and adaptive web-based educational systems (IAWES 2007) (pp. 233–238).
Graf, S., Viola, S. R., Kinshuk, & Leo, T. (2007c). In-depth analysis of the Felder–Silverman learning style dimensions. Journal of Research on Technology in Education, 40(1), 79–93.
Gu, Q., & Sumner, T. (2006). Support personalization in distributed e-learning systems through learner modeling. Information and Communication Technologies, 2(1), 610–615.
Herrmann, N. (1989). The creative brain. Lake Lure, North Carolina, USA: The Ned Herrmann Group.
IEEE LOM. (2002). Draft standard for learning object metadata (IEEE P1484.12.1).
IMS-QTI. The IMS question and test interoperability (QTI) specification. Available from: <http://www.imsglobal.org/question/>.
Kohavi, R. (1996). Scaling up the accuracy of Naive-Bayes classifiers: A decision-tree hybrid. In Proceedings of the second international conference on knowledge discovery and data mining (KDD) (pp. 202–207).
Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. NJ: Prentice Hall.
Kolb, D. A., Boyatzis, R. E., & Mainemelis, C. (2000). Experiential learning theory: Previous research and new directions. In R. J. Sternberg & L. F. Zhang (Eds.), Perspectives on cognitive, learning, and thinking styles. NJ: Lawrence Erlbaum.
Kuljis, J., & Liu, F. (2005). A comparison of learning style theories on the suitability for e-learning. In Proceedings of the conference on web technologies, applications, and services (pp. 191–197). ACTA Press.
Martin, F. (1986). Phenomenography – research approach to investigating different understandings of reality. Journal of Thought, 21, 28–49.
Papanikolaou, K., & Grigoriadou, M. (2004). Accommodating learning style characteristics in adaptive educational hypermedia. In Workshop on individual differences in adaptive hypermedia, organised at AH2004, Part I (pp. 77–86).
Peña, C., Narzo, J., & de la Rosa, J. (2002). Intelligent agents in a teaching and learning environment on the web. In Proceedings of the international conference on advanced learning technologies.
Pittenger, D. J. (1993). The utility of the Myers–Briggs type indicator. Review of Educational Research, 63, 467–488.
PoSTech! Available from: <http://www.postech.me.uk/>.
Reye, J. (2004). Student modelling based on belief networks. International Journal of Artificial Intelligence in Education, 14(1), 63–96.
Schapire, R. E., & Singer, Y. (2000). BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2–3), 135–168.
Seyedarabi, F. (2006). The missing link: How search engines can support the informational needs of teachers. ACM Magazine, 6 April 2006.
Seyedarabi, F. (2008). Teaching teachers to Google: Development of a personalised search tool for teachers' online searching (research in progress). In K. McFerrin et al. (Eds.), Proceedings of society for information technology and teacher education international conference 2008 (pp. 3414–3419). Chesapeake, VA: AACE.
Seyedarabi, F., Peterson, D., & Keenoy, K. (2005). Personalised search tool for teachers – PoSTech! I-manager's Journal of Educational Technology, Teacher's Use of Technology for Creative Learning Environment, 1(2), 38–49.
Tian, F., Zheng, Q., Gong, Z., Du, J., & Li, R. (2007). Personalized learning strategies in an intelligent e-learning environment. International Conference on Computer Supported Cooperative Work in Design, 11, 973–978.
Tsoumakas, G., & Katakis, I. (2007a). Multi-label classification: An overview. International Journal of Data Warehousing and Mining, 3(3), 1–13.
Tsoumakas, G., & Vlahavas, I. (2007b). Random k-labelsets: An ensemble method for multilabel classification. In Proceedings of the 18th European conference on machine learning (ECML 2007) (pp. 406–417).
Van Zwanenberg, N., Wilkinson, L. J., & Anderson, A. (2000). Felder and Silverman's index of learning styles and Honey and Mumford's learning styles questionnaire: How do they compare and do they predict academic performance? Educational Psychology, 20(3), 365–380.
Warwick search engine. Available from: <http://www2.warwick.ac.uk/services/its/servicessupport/web/search/>.
Xu, D., Wang, H., & Su, K. (2002). Intelligent student profiling with fuzzy models. In 35th Annual Hawaii international conference on system sciences (HICSS'02).
Zhang, M., & Zhou, Z. (2005). A k-nearest neighbor based algorithm for multi-label classification. IEEE International Conference on Granular Computing, 2, 718–721.