
A Tool for Evaluating Social Media Enhanced Learning Environments

Kirsi Silius, Anne-Maritta Tervakari, Olli Pirttilä, Jukka Paukkeri and Teemu Mäkelä Intelligent Information Systems Laboratory, Department of Mathematics

Tampere University of Technology Tampere, Finland

[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract—The evaluation tool WeSQu was developed to help evaluators pay attention to critical quality factors of social media enhanced learning environments and to provide guidelines that support the implementation of high-quality educational web services. The WeSQu (earlier ARVO) tool is a web-based evaluation tool developed in 2003 (and updated in 2008 and 2013) at the Intelligent Information Systems Laboratory (IISLab), Tampere University of Technology (TUT), in cooperation with the Finnish Virtual University (FVU) to help improve web-based learning. In 2013, hypermedia experts at IISLab evaluated the renewed WeSQu tool in order to determine 1) whether the background theories behind the original evaluation questions and the questions themselves are still relevant, and 2) whether the improved and newly added functionalities can support reliable evaluation.

Keywords—quality evaluation; evaluation tools; social media; learning environment

I. INTRODUCTION

The development of online learning platforms has occurred rapidly. Only a decade ago, we used learning management systems or virtual learning environments for administrating and delivering online courses. Nowadays, however, we concentrate on developing increasingly personal and mobile social media enhanced learning environments, openly accessible on the Internet, that support students in communicating, collaborating, and learning from each other. New technical solutions are typically taken into use before they are completely mature, which may not satisfy users and may pose challenges. Social media enhanced environments and other web-based solutions used for educational purposes should be useful, usable, and accessible, with robust technological solutions. Earlier research showed that almost three out of four students considered ease of use to be the most important criterion. Usability of the service is important, especially in the educational context, in order to allow the user to concentrate on learning instead of struggling with the interface. Learning environments should also be “pedagogically usable” and support different types of learners by providing various learning contexts according to selected pedagogical objectives [1], [2].

WeSQu is available at http://iislab.ee.tut.fi/wesqu

II. SOCIAL MEDIA ENHANCED LEARNING ENVIRONMENT

Social media and Web 2.0 tools are often utilized in the context of higher education nowadays. At their best, they offer versatile features that support a learning community in studying and teaching. Social media enhanced learning environments typically allow students to contribute content, exchange opinions, and create communities for different needs. Students can usually 1) construct a public or semi-public profile within a bounded system; 2) tag content with keywords, comment on it, and like or dislike it; 3) update their status; 4) send private messages to other users; and 5) request other users within the system to be their friends. In community groups, students can collaboratively chat, write news, manage events, write blogs, post and edit wiki pages, and share resources (e.g., files, images, videos), to name a few [3]. At the same time, the role of the teacher has changed from that of a knowledge distributor to that of a facilitator of self-directed learning. Therefore, instructional design and the use of social media functionalities should concentrate on facilitating the learning process in a meaningful way and on providing clear added value [4].

The majority of social media enhanced learning environments do not have appropriate tools for facilitating the learning of the right students at the right time; thus, the new trend seems to be to visualize user data and content in order to make students’ actions visible to teachers. Visualizations can reveal meaningful insights, display information that was previously invisible, and emphasize similarities and deviations in the data. Visualizations can be useful tools for teachers to identify active and passive students in web-based learning environments. For students, visualizations can be valuable for evaluating or planning their own studies and for completing assignments. Well-designed visualizations of user data can provide teachers with valuable insight into student activity and participation and, thus, help evaluate the quality of a course’s instructional design [5].

The use of visualizations as a learning tool sets new requirements for teachers. The integration of social media into online learning environments introduces new quality factors, which are not always apparent. In order to design and utilize good visualizations as effective learning and teaching tools, one must be able to evaluate their quality and usefulness reliably. The quality of social media enhanced learning environments is a complex and multidimensional phenomenon affected by different factors, such as usability, accessibility, reliability of information, visual design, security and privacy, and navigation support. As a result, quality evaluation becomes challenging. One evaluator (typically a teacher) cannot be an expert in all the fields of science needed for serious evaluation.

The evaluation tool WeSQu (earlier, ARVO) was developed to help evaluators pay attention to critical quality factors of social media enhanced learning environments and to provide guidelines that support the implementation of high-quality educational web services. WeSQu is a web-based evaluation tool developed in 2003 (and updated in 2008 and 2013) at the Hypermedia Laboratory (currently the Intelligent Information Systems Laboratory, IISLab) of Tampere University of Technology (TUT), in cooperation with the Finnish Virtual University (FVU), to help improve web-based learning [2].

However, the quality factors of web services are in a state of perpetual transformation. In particular, the rapid rate of technological development shortens the relevance life cycle of technical quality factors. In addition, new ways of utilizing social media enhanced learning environments for educational purposes introduce new quality requirements that demand attention. In 2013, IISLab established that the quality factors and the evaluation system needed revision. The evaluation material was examined, and key development points for the system and quality factors were identified. The WeSQu tool then went through a development cycle in which new quality factors for evaluating the quality of content and user data visualization were added, and the functionalities and user interface of the tool were improved.

In autumn of 2013, hypermedia experts at IISLab evaluated the renewed WeSQu tool in order to determine 1) whether the background theories behind the original evaluation questions and the questions themselves were still relevant and 2) whether the improved functionalities and newly added functionalities could support reliable evaluation.

III. WESQU 2.0 EVALUATION TOOL

The WeSQu tool currently consists of 12 quality factors and over 600 questions designed to help evaluate the quality of educational websites and traditional web services. The quality factors — including evaluation questions and corresponding development guidelines — were developed on the basis of a background theory drawing on research in human-computer interaction, psychology, and pedagogy, as well as on evaluation research, which has its roots in the theory of the usefulness of computer systems. In order to develop the background theory, a literature survey was conducted to determine the features that form the quality factors. The survey consisted mainly of scientific publications found through the ACM Portal, IEEE Xplore, and Scirus databases, as well as Finnish university library databases. The results of the survey were analyzed, and the factors and questions for evaluating special features were formed through theme analysis.

The latest version of the WeSQu tool is a bilingual system, with Finnish and English as possible language choices. The user can switch languages seamlessly, even during the evaluation process. Making the system bilingual broadened the pool of potential participants for crowdsourcing new content and revising existing content. During the translation process, the content was also checked for outdated information. The development of WeSQu’s functional features focused on making the evaluation process less of a strain for the user and on improving and diversifying the presentation of the evaluation results. This goal was achieved by making the user interface more explicit and easier to use and by structuring the evaluation process and material into a consistent and understandable form.
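To make the bilingual behavior concrete, the following is a minimal sketch in Python of how content keyed by language-neutral identifiers allows switching languages mid-evaluation without touching stored answers. The names and the sample question are hypothetical; WeSQu’s actual implementation is not published in the paper.

```python
# Hypothetical sketch of bilingual content lookup: every evaluation question
# is keyed by a language-neutral identifier, so switching the UI language
# mid-evaluation only changes which translation is rendered; answers stay
# keyed to question identifiers and are therefore unaffected.
TRANSLATIONS = {
    "q_accessibility_alt_text": {  # invented sample question
        "en": "Do all informative images have text alternatives?",
        "fi": "Onko kaikilla informatiivisilla kuvilla tekstivastine?",
    },
}

def render_question(question_id: str, language: str, fallback: str = "en") -> str:
    """Return the question text in the requested language, falling back if missing."""
    variants = TRANSLATIONS.get(question_id, {})
    return variants.get(language) or variants.get(fallback, question_id)

print(render_question("q_accessibility_alt_text", "fi"))  # Finnish UI
print(render_question("q_accessibility_alt_text", "en"))  # after a seamless switch
```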

IV. STRUCTURE OF QUALITY AND SUPPORTING EVALUATION PROCESSES

The WeSQu tool includes functionalities that support distributed evaluation. Carrying out a comprehensive evaluation of a website is a laborious task; distributing the workload therefore eases the strain on the evaluators. In the WeSQu tool, an evaluator can start an evaluation and then share a link to it with other registered WeSQu users. Those users can then take part in the evaluation, making it a collective process in which, for example, responsibility is divided by assigning each evaluator one or more specific quality factors and their questions. WeSQu also saves progress automatically during the evaluation, allowing each evaluator the flexibility to work according to his/her own schedule and preferred order. To decrease the strain on the user, the evaluation is carried out one basis of evaluation at a time (Fig. 1). The user can switch between quality factors and their bases of evaluation seamlessly, without having to progress through the evaluation in a set order.

Fig. 1. The evaluation process using the WeSQu tool
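The distributed-evaluation workflow can be illustrated with a small sketch. The data model below is hypothetical (the class and field names are ours, not WeSQu’s); it only shows the idea of an owner sharing an evaluation with other registered users and of every answer being stored as it is given, so evaluators can proceed in any order and on their own schedule.

```python
from dataclasses import dataclass, field

# Hypothetical data model for a shared evaluation: the owner starts it, other
# registered users join via a shared link, and each answer is saved the moment
# it is given (the autosave point), so no fixed answering order is required.
@dataclass
class Evaluation:
    url_under_evaluation: str
    owner: str
    participants: set = field(default_factory=set)
    answers: dict = field(default_factory=dict)  # (evaluator, question_id) -> score

    def share_with(self, user: str) -> None:
        """Grant a registered user access, as if they followed the shared link."""
        self.participants.add(user)

    def answer(self, evaluator: str, question_id: str, score: int) -> None:
        """Record one answer; storing it immediately acts as the autosave."""
        if evaluator != self.owner and evaluator not in self.participants:
            raise PermissionError(f"{evaluator} has not joined this evaluation")
        self.answers[(evaluator, question_id)] = score

ev = Evaluation("http://example.org/course", owner="teacher")
ev.share_with("expert_1")          # divide responsibility among evaluators
ev.answer("expert_1", "usability_q1", 4)
```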


Fig. 2. The structure of the quality of the evaluated web service

Along with the latest development round and the implementation of the new WeSQu, IISLab clarified and simplified the terms used in the evaluation tool to better suit the needs of a wider audience. In the previous version, the terms used to describe the structure of website quality were quite technical and often required expert knowledge of heuristic evaluation and web service quality. The quality of an evaluated website was structured into different sections that contained different criteria, each with its own set of questions. The structure and terms were revised to be more familiar and logical (Fig. 2). In the new version, the quality of an evaluated website consists of different quality factors that are defined by bases of evaluation, which in turn contain evaluation questions.

Fig. 3. Presenting the mean values of evaluation results and the state of the evaluation

Each quality factor comprises a number of bases of evaluation, each with its own evaluation questions. For example, the “accessibility” quality factor contains bases of evaluation such as “textual information” and “forms and dynamic pages.” Every evaluation question has a development hint or guideline attached to it, which the evaluator can view during the evaluation in order to gain better insight into the issue at hand. This feature helps an inexperienced evaluator evaluate reliably and get the most out of the evaluation process. The answers given are automatically scored on a scale from one to five. From these answers, a mean value is calculated for each basis of evaluation and its corresponding quality factor, as shown in Fig. 3.
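The scoring just described can be sketched as follows. The code assumes, as one plausible reading of the text, that a quality factor’s mean is the mean of its bases’ means; the factor name and score values are illustrative, not WeSQu data.

```python
from statistics import mean

# Hypothetical structure mirroring the paper's hierarchy: quality factors are
# defined by bases of evaluation, which contain questions scored 1..5.
evaluation = {
    "accessibility": {                       # quality factor
        "textual information": [4, 5, 3],    # basis of evaluation -> scores
        "forms and dynamic pages": [2, 4],
    },
}

def basis_means(factor: dict) -> dict:
    """Mean score for each basis of evaluation within a quality factor."""
    return {basis: mean(scores) for basis, scores in factor.items() if scores}

def factor_mean(factor: dict) -> float:
    """Mean of the basis means, i.e. the quality factor's overall score."""
    return mean(basis_means(factor).values())

print(basis_means(evaluation["accessibility"]))           # per-basis means
print(round(factor_mean(evaluation["accessibility"]), 2)) # 3.5 for these values
```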

V. ILLUSTRATING AND ANALYZING THE EVALUATION RESULTS

After the user has answered the questions, the results of the evaluation are presented. The results report consists of an overall summary of the evaluation results, visualizations that illustrate the results and the evaluation process, and a list of weak areas that need development. The results can be analyzed quite thoroughly using the versatile filtering and analysis functionalities that the WeSQu tool offers. As illustrated in Fig. 4, the overall summary shows the evaluator the individual score of a certain question or the mean value of a specific basis of evaluation or quality factor. In addition, the user can see the answers he/she gave and the notes he/she made during the evaluation process. Each question also has a development hint attached to it, which the evaluator can view by clicking the info button next to the question.

Fig. 4. Evaluation results showing scores, development hints, and the evaluator’s answers and notes.

Because WeSQu is such an extensive evaluation tool, IISLab wanted to make the evaluation process as transparent as possible. For example, transparency was increased by adding visualizations to illustrate the evaluation results and the evaluation process itself. A visualization is a useful way to show data in an effective and attractive manner. When designing visualizations, psychological issues such as human pattern recognition, perception, preattentive processing, and visual recall should be taken into account [6], [7], [8]. Visualizations can reveal meaningful insights. They can show information that has previously been invisible and emphasize similarities and deviations in the data. Using visualizations, one can obtain meaningful data about an evaluator’s actions during the evaluation process and possibly even estimate the reliability of the evaluation, for example on the basis of the time required to complete it.

Two visualizations with different purposes have been developed for the WeSQu tool. Both visualizations represent the current results of the evaluation process, as well as the relationships among quality factors, bases of evaluation, and evaluation questions. In distributed evaluation, both visualizations also identify which evaluators answered each question.

The first visualization provides an overview of the evaluation as a whole. It clearly shows the areas with the best or worst scores and the corresponding evaluators (Fig. 5). The score of each answer is mapped to the color of the element representing it: the deeper the green, the higher the score, and an answer scoring zero points is shown in white. When more than one evaluator has answered the same question, the average score is shown. Hovering over an element with the cursor provides additional information about it, and clicking the element shows the comments added to the question and the related development hints. The visualization has two layouts, and in both, each evaluation question is mapped to its own element. The first layout, a circular chart, is better for showing a small number of answered questions, while the second, a bar chart, is better when the number of answered questions is much higher. When the layout is changed, the transition is animated to preserve the user’s understanding of the visualization. For teachers, the visualization is useful for identifying the areas of the learning environment that are not well designed and for obtaining development tips on how to improve those areas.

Fig. 5. A visualization showing evaluation questions, answer scores, comments, and the overall structure of the evaluation.
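The score-to-color mapping described for Fig. 5 can be approximated as below. Only the endpoints come from the text (white at zero, deeper green for higher averages); the exact green value and the linear interpolation are assumptions, as the paper does not specify WeSQu’s palette.

```python
# Hypothetical score-to-color mapping: a zero score renders white, and the
# green deepens linearly as the (average) score approaches the maximum.
MAX_SCORE = 5.0

def score_to_rgb(score: float) -> tuple:
    """Linearly interpolate from white (score 0) toward a saturated green."""
    t = max(0.0, min(score / MAX_SCORE, 1.0))
    white, green = (255, 255, 255), (0, 128, 0)  # assumed endpoint colors
    return tuple(round(w + (g - w) * t) for w, g in zip(white, green))

def element_color(scores: list) -> tuple:
    """Multiple evaluators answering the same question are shown as an average."""
    return score_to_rgb(sum(scores) / len(scores)) if scores else score_to_rgb(0)

print(score_to_rgb(0))        # (255, 255, 255) -> white for a zero score
print(element_color([3, 5]))  # deeper green for an average score of 4
```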

The purpose of the second visualization is to provide a deeper analysis of the evaluators’ activities and the progress of the evaluation (Fig. 6). The visualization has several synchronized views with filtering functionality and different points of view: the evaluation status of each evaluator, the time distribution of the answers, answers by score, answers by day of the week, answers by hour of the day, quality factors, and bases of evaluation. When one of these views is filtered, the others are updated synchronously. Fig. 6 shows how the same data is also plotted into a table that can be filtered and sorted by several properties. For teachers, the visualization shows the transparency and coverage of the evaluation process and brings out the quality factors of the learning environment that have not yet been evaluated.

Fig. 6. A visualization for analyzing the progress of an evaluation and the evaluators' activities.
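The synchronized, filterable views can be sketched with a crossfilter-style pattern: all views are derived from one shared set of answer records, so a filter applied in any dimension re-derives the rest. The record fields below mirror the views listed above; the data values are invented, and this is only one plausible way to implement the behavior the paper describes.

```python
from collections import Counter

# Invented answer records; each record carries the dimensions that the
# paper's views slice by (score, weekday, hour of day, quality factor).
answers = [
    {"evaluator": "expert_1", "score": 4, "weekday": "Mon", "hour": 10, "factor": "usability"},
    {"evaluator": "expert_2", "score": 2, "weekday": "Tue", "hour": 14, "factor": "accessibility"},
    {"evaluator": "expert_1", "score": 5, "weekday": "Mon", "hour": 11, "factor": "usability"},
]

def filter_records(records, **criteria):
    """Keep only records matching every given field=value criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

def views(records):
    """Re-derive all views from one record set, so they stay synchronized."""
    return {dim: Counter(r[dim] for r in records)
            for dim in ("score", "weekday", "hour", "factor")}

# Filtering by one evaluator updates every other view at once.
print(views(filter_records(answers, evaluator="expert_1")))
```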

VI. EVALUATION OF WESQU

In the autumn of 2013, IISLab conducted research to evaluate the quality of the WeSQu tool itself. Eight hypermedia experts were asked to evaluate the currency of the evaluation questions and the development hints/guidelines of one or two quality factors. They were also asked to give their opinions regarding the usefulness of the WeSQu tool.

First, the 12 quality factors were divided among the experts for further analysis. Each expert was asked to go through a quality factor’s evaluation questions and the corresponding development hints/guidelines. They were asked to consider whether the questions and hints/guidelines were still relevant and current, whether the questions were valid (i.e., how well the questions measured what they were supposed to measure), and whether the questions were reliable (i.e., whether the questions produced stable and consistent results). They were also asked to make suggestions on how to improve the evaluation questions and the development hints/guidelines related to them.

After analyzing the evaluation questions and development hints/guidelines, the experts were asked to evaluate the usefulness and quality of the WeSQu evaluation tool from four perspectives: 1) the tool in general, 2) the report of evaluation results, 3) the visualization of answers, and 4) the visualization presenting the analysis of answers (Table I). They rated their opinions using a seven-level bipolar rating scale (a semantic differential) with contrasting adjectives at each end. The scales were scored from one to seven, negative to positive (i.e., favorable). Six of the rating scales were designed to measure the pragmatic quality of the visualizations: technical-human, complicated-simple, impractical-practical, cumbersome-straightforward, confusing-clearly structured, and useless-useful. Five were designed to measure hedonic quality: discouraging-motivating, dull-captivating, unpleasant-pleasant, boring-interesting, and individual-communal. The bipolar rating scale is an ordinal scale, which means that averages and standard deviations have no strictly valid interpretation, but they can still offer useful information about the phenomenon under study [9]. Through open-ended questions in the survey, the experts were asked to give arguments for their answers and suggestions on how to improve the WeSQu tool. The answers to the open-ended questions were analyzed using content analysis, a method for analyzing qualitative data in order to identify themes, patterns, similarities, and differences among answers.

TABLE I. PRAGMATIC QUALITY WAS RATED HIGHER THAN HEDONIC QUALITY, EXCEPT FOR THE VISUALIZATION OF ANSWERS.

Scale items                      Q.1   Q.2   Q.3   Q.4
technical - human                6.0   5.6   4.3   5.7
complicated - simple             6.1   5.8   4.7   5.9
impractical - practical          6.0   5.4   4.3   5.3
cumbersome - straightforward     5.7   6.1   4.9   5.6
confusing - clearly structured   5.5   5.9   4.2   5.1
useless - useful                 5.1   5.4   5.1   5.4
Pragmatic quality (mean)         5.7   5.7   4.6   5.5
discouraging - motivating        4.4   4.4   4.6   5.1
dull - captivating               5.4   5.1   5.1   5.6
unpleasant - pleasant            5.3   4.8   5.6   5.4
boring - interesting             4.1   4.3   4.7   4.7
individual - communal            6.1   6.0   4.9   5.6
Hedonic quality (mean)           5.1   4.9   5.0   5.3

(Q.1 = the tool in general, Q.2 = the report of evaluation results, Q.3 = the visualization of answers, Q.4 = the visualization presenting the analysis of answers.)
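The per-item means reported in Table I can be reproduced in principle as follows. The ratings below are invented for illustration (the experts’ raw data is not given in the paper); the sketch only shows how item means and the pragmatic/hedonic group means are aggregated, keeping in mind the ordinal-scale caveat above.

```python
from statistics import mean, stdev

# Invented ratings: each of the eight experts rates a bipolar item from
# 1 (negative pole) to 7 (positive pole) on the semantic differential.
ratings = {
    "complicated - simple": [6, 7, 5, 6, 7, 6, 5, 7],
    "useless - useful":     [5, 5, 6, 4, 6, 5, 5, 5],
}

for item, scores in ratings.items():
    print(f"{item:25s} mean={mean(scores):.1f} sd={stdev(scores):.1f}")

# A group score (e.g. "pragmatic quality") aggregates its items' means;
# here only two of the six pragmatic items are included, for brevity.
pragmatic = mean(mean(s) for s in ratings.values())
print(f"pragmatic quality (over these items): {pragmatic:.1f}")
```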

According to the results, the WeSQu tool provides good support for the evaluation process and a good basic background for evaluating the quality of different types of web services. It also helps teachers pay attention to important facts about the social media service they are planning to use. However, the main challenge is the currency of the evaluation questions and hints/guidelines, which seem to require frequent updating.

The research findings indicated that some of the quality areas assessed in the tool are in constant transition and therefore become outdated. The rate of technological development is fast, and the principles and issues of web service quality evolve with it. For example, the quality factors concerning technical implementation and the use of media elements contained questions that no longer represented the current state of technological solutions. The WeSQu tool contains a large number of evaluation questions and corresponding development hints, making it very difficult to keep the content current if the responsibility lies only with the administrator of the tool. Therefore, on the basis of these findings, IISLab decided to incorporate crowdsourcing functionality into the WeSQu tool in order to keep the content current and of high quality at all times.

WeSQu currently offers a feedback form that the user can use to provide feedback, correct possible mistakes that he/she finds, or suggest new quality factors, bases of evaluation, or evaluation questions to be added to the official set of evaluation material. Further crowdsourcing ideas were also identified for future development cycles, including requesting changes and reporting outdated information directly during the evaluation process. By offering a quick way to submit feedback at the moment a possible error is noticed, the threshold for taking action is lowered. The user could also build his/her own quality factors for personal evaluation use, thereby making the WeSQu tool a versatile platform for virtually any type of heuristic evaluation. If the user then feels that the quality factor would be useful in common use, he/she could, with the click of a button, offer it to be revised by administrative staff and added to the official WeSQu tool. This option also lowers the threshold for users to collaborate in the upkeep of WeSQu’s content.
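The proposed crowdsourcing workflow, in which user-created content stays personal until it is offered for administrative review, might look like the sketch below; the states and method names are hypothetical, not WeSQu’s actual API.

```python
from dataclasses import dataclass

# Hypothetical moderation workflow for crowdsourced content: a user-built
# quality factor or question stays private until administrative staff review
# it, matching the paper's point that content is monitored before publishing.
@dataclass
class Suggestion:
    author: str
    kind: str      # "quality factor", "basis of evaluation", or "question"
    text: str
    status: str = "private"  # private -> submitted -> official / rejected

    def offer_for_review(self) -> None:
        """The 'click of a button' that offers content for the official set."""
        self.status = "submitted"

    def review(self, accept: bool) -> None:
        """Administrative staff either publish the content or reject it."""
        self.status = "official" if accept else "rejected"

s = Suggestion("teacher", "question", "Does the service expose an activity feed?")
s.offer_for_review()
s.review(accept=True)
print(s.status)  # official
```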

VII. CONCLUSION

The need for a large-scale evaluation tool such as WeSQu can be argued for several reasons. The development of learning platforms has been fast. Nowadays, social media and Web 2.0 tools are often utilized in the context of higher education. At their best, they offer versatile features that support a learning community in studying and teaching. Typically, social media enhanced learning environments allow students to contribute content, exchange opinions, and create communities for different needs. Technological development has made new types of equipment available, such as iPads and smartphones, and both their use and the ways in which they are used have increased greatly. The abundance of technological solutions, the different ways of using web services, and the diverse collection of learning environments pose new challenges for reliable quality evaluation.

Comprehensive quality evaluation is a laborious and resource-consuming process. When the WeSQu tool is used in the evaluation process, it is not necessary to be an expert in implementing an educational web service in order to evaluate one. WeSQu makes it easy for teachers who are not familiar with the special vocabulary, or who are not knowledgeable about web service implementation, to evaluate educational web services.

Based on the work conducted on the new WeSQu tool, the project group plans to design, in future development cycles, a visualization showing the timeline, relevance, and transparency of collaborative content creation. Such a visualization could be useful for showing the version history of collaboratively created content, making clearly visible the timelines of added or modified quality factors, bases of evaluation, and questions. A second idea was to use specific screening questions, before the actual evaluation starts, to filter out questions irrelevant to the social media enhanced learning environment context. With this type of filtering, the teacher can decide whether to answer only the most relevant questions or all of the questions, including those that go beyond the learning environment context.
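The proposed pre-filtering step could work roughly as sketched below: a few screening answers mark whole bases of evaluation as not applicable, and only the remaining questions are shown to the evaluator. The question texts and basis names are invented for illustration.

```python
# Invented evaluation questions tagged with the basis of evaluation they
# belong to; screening answers decide which bases apply to the service.
QUESTIONS = [
    {"id": "q1", "basis": "wiki pages", "text": "Can wiki edits be reverted?"},
    {"id": "q2", "basis": "navigation", "text": "Is the current location visible?"},
]

def relevant_questions(questions, screening_answers):
    """Drop questions whose basis the evaluator marked as not applicable."""
    excluded = {basis for basis, applies in screening_answers.items() if not applies}
    return [q for q in questions if q["basis"] not in excluded]

# The service under evaluation has no wiki, so wiki questions are filtered out
# before the actual evaluation starts.
print(relevant_questions(QUESTIONS, {"wiki pages": False, "navigation": True}))
```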

With the help of the renewed evaluation tool, teachers can execute an evaluation in collaboration with multiple evaluators and compile the results into one single report. The evaluation process is also much easier and faster with the WeSQu tool than with, for example, heuristics lists, even if the evaluator is an experienced user of the educational web service in question. Overall, the WeSQu tool provides a good basic background for evaluating the quality of different types of educational web services and helps teachers pay attention to important facts about the social media service they are planning to use.

To keep the content relevant, it was decided that crowdsourcing methods would be used. In this case, crowdsourcing is the practice of obtaining needed content through contributions from a large group of volunteers: the goal is that new or updated content can come from any user, not just from a previously defined group of researchers. This practice will allow anyone with theoretical knowledge of learning platforms, content production, technical implementation, or web service design to be a content producer for the WeSQu tool. On the other hand, maintaining high-quality content rules out publishing contributions before they have been reviewed. Based on these identified development areas, IISLab decided that the WeSQu tool needs additional functions to support the upkeep of evaluation material through crowdsourcing in the future.

REFERENCES

[1] K. Silius, M. Kailanto, and A.-M. Tervakari, “Evaluating the quality of social media in an educational context,” International Journal of Emerging Technologies in Learning (iJET), vol. 6, no. 3, pp. 21-27, 2011. Available online: http://online-journals.org/i-jet/article/view/1732

[2] K. Silius and A.-M. Tervakari, “An evaluation of the usefulness of web-based learning environments. The evaluation tool into the portal of Finnish Virtual University,” in V. Peñarrocha et al. (Eds.), Proceedings of mENU 2003 - International Conference on University Networks and E-learning, 8-9 May 2003, Valencia, Spain [CD-ROM]. ISBN 84-9705-3l69-9.

[3] K. Silius, T. Miilumäki, J. Huhtamäki, T. Tebest, J. Meriläinen, and S. Pohjolainen, “Students’ motivations for social media enhanced studying and learning,” Knowledge Management & E-Learning: An International Journal (KM&EL), vol. 2, no. 1, pp. 51-67, 2010. Available online: http://www.kmel-journal.org/ojs/index.php/online-publication/article/view/55/39

[4] E. Kyndt, F. Dochy, and H. Nijs, “Learning conditions for non-formal and informal workplace learning,” Journal of Workplace Learning, vol. 21, no. 5, pp. 369-383, 2009.

[5] K. Silius, A.-M. Tervakari, and M. Kailanto, “Visualizations of user data in a social media enhanced web-based environment in higher education,” International Journal of Emerging Technologies in Learning (iJET), vol. 8, special issue 2: “EDUCON2013,” August 2013. Available online: http://dx.doi.org/10.3991/ijet.v8iS2.2740

[6] J. Hullman, E. Adar, and P. Shah, “Benefitting InfoVis with visual difficulties,” IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, 2011. Available online: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6064986

[7] J. Heer, M. Bostock, and V. Ogievetsky, “A tour through the visualization zoo,” Communications of the ACM, vol. 53, no. 6, pp. 59-67, 2010. Available online: http://dl.acm.org/citation.cfm?id=1743567

[8] R. Burkhard, “Learning from architects: the difference between knowledge visualization and information visualization,” in Proceedings of the Eighth International Conference on Information Visualisation (IV 2004), 2004. Available online: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1320194

[9] M. Hassenzahl, “The thing and I: understanding the relationship between user and product,” in M. Blythe, C. Overbeeke, A. F. Monk, and P. C. Wright (Eds.), Funology: From Usability to Enjoyment, Dordrecht: Kluwer Academic Publishers, pp. 31-42, 2003.
