
Article

User experience evaluation of human representation in collaborative virtual environments

Economou, Daphne, Doumanis, Ioannis, Argyriou, Lemonia and Georgalas, Nektarios

Available at http://clok.uclan.ac.uk/23440/

Economou, Daphne, Doumanis, Ioannis ORCID: 0000-0002-4898-7209, Argyriou, Lemonia and Georgalas, Nektarios (2017) User experience evaluation of human representation in collaborative virtual environments. Personal and Ubiquitous Computing, 21 (6). pp. 989-1001. ISSN 1617-4909

It is advisable to refer to the publisher's version if you intend to cite from the work. http://dx.doi.org/10.1007/s00779-017-1075-4

For more information about UCLan's research in this area go to http://www.uclan.ac.uk/researchgroups/ and search for <name of research Group>.

For information about Research generally at UCLan please go to http://www.uclan.ac.uk/research/

All outputs in CLoK are protected by Intellectual Property Rights law, including Copyright law. Copyright, IPR and Moral Rights for the works on this site are retained by the individual authors and/or other copyright owners. Terms and conditions for use of this material are defined in http://clok.uclan.ac.uk/policies/

CLoK, Central Lancashire online Knowledge, www.clok.uclan.ac.uk


Pers Ubiquit Comput (2017) 21:989–1001
DOI 10.1007/s00779-017-1075-4

ORIGINAL ARTICLE

User experience evaluation of human representation in collaborative virtual environments

Daphne Economou 1 · Ioannis Doumanis 2 · Lemonia Argyriou 1 · Nektarios Georgalas 3

Published online: 29 August 2017
© The Author(s) 2017. This article is an open access publication

Abstract Human embodiment/representation in virtual environments (VEs), similarly to the human body in real life, is endowed with multimodal input/output capabilities that convey multiform messages enabling communication, interaction and collaboration in VEs. This paper assesses how effectively different types of virtual human (VH) artefacts enable smooth communication and interaction in VEs. With special focus on the REal and Virtual Engagement In Realistic Immersive Environments (REVERIE) multi-modal immersive system prototype, a research project funded by the European Commission Seventh Framework Programme (FP7/2007-2013), the paper evaluates the effectiveness of REVERIE VH representation on the foregoing issues based on two specifically designed use cases and through the lens of a set of design guidelines generated by previous extensive empirical user-centred research. The impact of REVERIE VH representations on the quality of user experience (UX) is evaluated through field trials. The output of the current study proposes directions for improving human representation in collaborative virtual environments (CVEs) as an extrapolation of lessons learned by the evaluation of REVERIE VH representation.

Keywords Virtual environments · Virtual humans · Avatars · Design guidelines · Immersion · Presence · Communication · Interaction · Collaboration

✉ Daphne Economou
[email protected]

Ioannis Doumanis
[email protected]

Lemonia Argyriou
[email protected]

Nektarios Georgalas
[email protected]

1 Department of Computer Science, Faculty of Science and Technology, University of Westminster, London, UK

2 CTVC Ltd, London, UK

3 BT Intel Co-lab, British Telecom, Ipswich, UK

1 Introduction

Human embodiment/representation within collaborative virtual environments (CVEs), to which we will refer in the rest of the paper as virtual humans (VHs), is very important if one considers the role of the human body in real life. The human body

• Provides immediate and continuous information about user presence, identity, attention [1], activity status, availability [2] and mood

• Sets a social distance between conversants (e.g. their actual location) [3] which helps in regulating interaction [4]

• Helps manage a smooth sequential exchange between parties by supporting speech with non-verbal communication [5]

Non-verbal communication or bodily communication "takes place whenever one person influences another by means of facial expression, tone of voice, or any other channels (except linguistic)" [6] (involving body language, hand gesture, gaze and facial expressions, or any combination thereof). VHs in VEs play the same role as the human body does in real life, and thus, they have been recognized as key elements for human interaction and communication in CVEs [7, 8].

Another important function of VHs is to help humans immerse themselves in the mediated environment. Immersion in this context refers to the degree to which the VHs help to create a sensation of being spatially located in the CVE, helping users feel as if they are actually "there" and participate in the virtual world. This feeling of spatial presence has been studied extensively in psychology, with numerous theories developed as a result [9].

Current virtual reality (VR) systems and frameworks enable various types of human representation. However, VHs are still far from fully encompassing the required attributes in order to realistically represent humans in VR and support immersion. In this paper, we review the effectiveness of the VHs supported by REal and Virtual Engagement In Realistic Immersive Environments (REVERIE), a VR system prototype developed under the European Community's FP7 [10], in terms of addressing issues related to virtual presence, communication and interaction in CVEs.

Empirical user-centred research, discussed further in Section 5, led to the creation of a list of design guidelines for VHs in CVEs [11, 12], shown in the first and second columns of Table 1. The application of those design guidelines in the design of human embodiment in CVEs ensures smooth remote communication, interaction and collaboration. Those design guidelines are used to evaluate how effectively the different types of REVERIE VHs support smooth communication, interaction and collaboration in CVEs. Design guidelines provide a direct means of translating detailed design specifications into actual implementation and a way for designers to determine the consequences of their design decisions. They are also a great tool that can drive technological challenges by suggesting directions in which the underlying VH and CVE technology should be developed to support designers implementing their decisions. Despite the study being conducted some time ago, the design guidelines are still relevant and valid for current CVEs.

The rest of the paper is structured as follows. Section 2 reviews state-of-the-art VR systems in terms of the realistic human representation supported in VR. Section 3 is an introduction to the REVERIE project, demonstrating the different types of VHs supported. Section 4 presents the user scenarios and field trials based on which REVERIE VH representations are evaluated. Those field trial results are used in Section 5 to compare how well REVERIE VHs address the aforementioned set of VH design guidelines related to virtual presence in CVEs. Section 6 provides a discussion of the importance of VH representation in CVEs. Finally, Section 7 closes with conclusions and future directions.

2 State of the art of virtual human representation

Until recently, most VH representation platforms enabled human representation within CVEs as animated 3D avatars. An avatar is a 3D graphical representation of a user's character that can take any form (cartoon-like, animal-like or anthropomorphic). In addition, avatars in most CVEs are controlled by the keyboard and mouse, limiting user actions and interactivity.

Second Life [13], OpenSim [14] and Active Worlds [15] were the first online VR platforms that allowed users to create their own avatars and use them to explore the virtual world or interact with other avatars, places or objects. By means of these avatars, users can meet other VR residents, socialize, participate in individual and group activities, build, create, shop and trade virtual property and services with one another.

Sansar [16] is a VR platform created by Linden Labs as the official successor of Second Life. Sansar aims to democratize VR content creation (including avatars) by empowering people to easily create, share and monetize their content without requiring engineering resources. Although the platform has not been officially released yet, it already features several user-generated worlds of impressive beauty and detail. However, in terms of user representation, Sansar seems to follow the same approach as Second Life. Users have access to a library of female and male avatars which they can fully customize (e.g., change their skin colour or hair). Using a custom avatar, they can explore VR worlds and communicate with other users by either text or voice. Apart from supporting the latest Oculus Rift headset, Sansar does not (at least in its current version) support any other multimodal technologies (e.g., facial expression mapping or full-body avatar puppeting) to increase immersion in the VR environments.

High Fidelity [17] aims to improve human representation and realistic interaction in virtual worlds compared to all of the above VR platforms by using sophisticated motion capture techniques that can mirror the user's body and head movement, plus facial expressions, onto their avatar and allow controlling the avatar's arms and torso in order to interact as naturally as possible in the virtual world. The platform supports a range of devices and inputs for greater immersion and control (e.g. Oculus Rift, Leap Motion controller and Microsoft Kinect). Although High Fidelity is a promising virtual reality platform, it does not support the latest generation of VH representation called "replicants", which refers to a dynamic full-body 3D reconstruction of the user. Replication technology uses the latest generation of motion capture sensors (e.g. Microsoft Kinect) to create a photorealistic representation of the human user in the CVE.

None of the previous VR platforms tracks or represents the user's body in real scale with natural motion, due to a lack of data about its position and orientation in the world. The result is that the user's body is not visibly a part of the environment, which risks damaging the user's immersion. SomaVR is a platform that performs an in-depth analysis of the data provided by both a Kinect V2 and an HTC Vive and creates a virtual body for the user that moves and acts as their own and can be perceived from a natural first-person perspective [18]. SomaVR aims to enable players to feel physically grounded as their virtual body replicates their own. However, SomaVR does not support facial expressions.

Table 1 Mapping of how REVERIE VHs address user-centred VH design guidelines (DGs) that support communication, interaction and collaboration in CVEs, resulting in enhanced immersion (– shows that the DG has not been met, ✓ shows that the DG has been met)

| DG | Design guideline | Avatars | Puppeted avatars | Human replicants | ECA |
|------|------------------|---------|------------------|------------------|-----|
| DG1 | VHs should support realistic or aesthetically pleasing representation of the user | – | ✓ | – | – |
| DG2 | VHs should support unique representation | ✓ | ✓ | ✓ | ✓ |
| DG3 | VHs should convey the user's role in the CVE (e.g. student, teacher, other) | ✓ | ✓ | ✓ | ✓ |
| DG4 | VHs should support customizable behaviour | – | – | – | – |
| DG5 | VHs should convey the user's viewpoint | ✓ | ✓ | ✓ | ✓ |
| DG6 | User viewpoints should be easily directed to see an active participant or a speaker even when they are out of the other users' viewpoints | – | – | – | ✓ |
| DG7 | An active participant needs to be identified even when their VH is out of other users' viewpoints | – | – | – | ✓ |
| DG8 | A tool should be provided for users to lock onto the active VH and follow it automatically | – | – | – | ✓ |
| DG9 | A VH should be easily associated with its communication | – | ✓ | ✓ | ✓ |
| DG10 | The speaker needs to be identified even when their VH is out of other users' viewpoints | – | – | – | ✓ |
| DG11 | VHs should convey the user's intention to take a turn or offer a turn even when not in other users' viewpoints | – | – | – | – |
| DG12 | Private communication and interaction should be supported | – | – | – | – |
| DG13 | VHs should show when the user is involved in private communication and whether or not others could join in | – | – | ✓ | ✓ |
| DG14 | VHs should reveal the user's action point | – | ✓ | ✓ | – |
| DG15 | Users need to be provided with real-time cues about their own actions | – | ✓ | ✓ | ✓ |
| DG16 | VHs should convey explicitly the user's process of activity and state of mind | – | – | ✓ | ✓ |
| DG17 | The expert should be in control of novice user behaviour | – | – | ✓ | ✓ |
| DG18 | The expert should have control over an individual user's viewpoint | – | – | – | ✓ |
| DG19 | The expert should be in control of the communication tools | ✓ | ✓ | ✓ | ✓ |
| DG20 | The expert should be able to take control of objects in the CVE | – | – | – | – |
| DG21 | The expert should be aware of and have control over private communication of novice users | – | – | – | – |
| DG22 | The expert should be aware of and have control over private interactions of novice users | – | – | – | – |
| DG23 | The expert should have an episodic memory of novice user mistakes | – | – | – | – |

Fig. 1 RAAT tool preliminary testing

3 REVERIE technologies for online human representation and interaction

REVERIE is a multimodal and multimedia system prototype that offers a wide range of VH representations and input control mechanisms. The REVERIE framework enables users to meet and share experiences in an immersive VE by using cutting-edge technologies for 3D data acquisition and processing, networking and real-time rendering.

Similarly to the platforms discussed above, REVERIE's standard VH representation is an avatar. However, the platform also enables humans to be represented in the virtual world by real-time realistic full-body 3D reconstructions, enabling the creation of more immersive experiences [19]. The system also supports affective user representation in the virtual worlds by employing various modules that analyse user behaviour in the real world. There are modules to analyse body gestures, facial expressions and speech, and modules to analyse emotional and social aspects.

Finally, REVERIE's virtual worlds can be cohabited by humans (in the form of avatars or replicants) as well as autonomous agents. Thus, REVERIE is considered a system that currently provides a more holistic approach in terms of human representation in VR. The different VH representations that REVERIE supports are presented below.

The REVERIE VR system prototype features both human-to-human and human-to-agent interaction. Users can enter the VE represented by conventional avatars, puppeted avatars or real-time dynamically reconstructed users (replicants). They can adapt the basic features of VHs, but they can also create photorealistic 3D representations of themselves to interact with other users and embodied conversational agents (ECAs) in VEs.

3.1 Avatars

REVERIE provides an avatar authoring tool (RAAT) [20] that allows the creation of bespoke VHs that closely match the facial appearance of the user they represent, see Fig. 1. Users are given the option to use RAAT to create their own lookalike personalized avatar simply by allowing the tool to take a single snapshot of their face using their device's webcam. This personalized lookalike avatar that resembles the user's appearance is their VH representation for navigating and interacting with others in the virtual scenes offered by REVERIE.
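The internals of RAAT are not detailed here; the following is a minimal sketch, assuming OpenCV, of the kind of capture step such a tool might perform: grab one webcam frame, detect the face and crop it so it can later be mapped onto the avatar's head mesh. All function and file names are illustrative only.

```python
# Hypothetical sketch of the webcam snapshot step behind a lookalike-avatar
# tool such as RAAT (assumed workflow, not the actual REVERIE implementation).
import cv2

def capture_face_texture(output_path="face_texture.png"):
    """Grab one webcam frame, detect the largest face and save the crop."""
    cap = cv2.VideoCapture(0)            # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the webcam")

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise RuntimeError("No face detected in the snapshot")

    # Keep the largest detection and crop it; a real tool would then map this
    # crop onto the UV layout of the avatar's head mesh.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    cv2.imwrite(output_path, frame[y:y + h, x:x + w])
    return output_path
```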

3.2 Puppeted avatars

Puppeteering or avateering VHs refers to the process of mapping a user's natural motion and live performance to a VH's deforming control elements in order to realistically reproduce the activity during rendering cycles [21]. REVERIE supports puppeted avatars adopting two different types of technology:

• Kinect Body-Puppeted avatars, which use a single-sensor device to implement advanced algorithmic solutions that enable user activity analysis, full-body reconstruction, avatar puppeting and scene navigation, see Fig. 2

Fig. 2 Kinect Body-Puppeted avatar shown in the 3D Hangout environment using the Kinect gesture-driven navigation module via Kinect skeleton resource sharing offered by the Shared Skeleton common module

• Webcam Face-Puppeted avatars, which are based on modules that perform facial detection and feature tracking by point extraction from a single, front-facing web camera connected to the system, while the deformation and rendering of the character's face mesh geometry happen in a separate component

In the first option, the user's virtual body representation is controlled by skeleton-based tracking with the use of a Kinect depth-sensing camera. The virtual body moves in the virtual scene according to the user's movements in the real world. The second option uses a simple web camera to capture the user's facial expressions, which are then mapped to the avatar's face, replicating in this way the user's emotions on their virtual representation. Both options are integrated in the REVERIE platform, allowing users to select the most suitable one for various use cases. In combination with the RAAT tool, these options offer an efficient way of realistic human representation in VEs.
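As an illustration of the general idea behind skeleton-based puppeting (a simplified sketch under stated assumptions, not the REVERIE module itself), tracked joint positions from a Kinect-style skeleton stream can be converted into per-bone rotations that drive the avatar rig on every frame. The bone names and rest-pose inputs below are assumptions for the example.

```python
# Minimal sketch of skeleton-driven avatar puppeting (assumed approach, not
# the REVERIE implementation): convert tracked joint positions into per-bone
# rotations that can be applied to an avatar rig each frame.
import numpy as np

# Illustrative subset of a Kinect-style skeleton: bone -> (parent joint, child joint)
BONES = {
    "upper_arm_r": ("shoulder_r", "elbow_r"),
    "forearm_r": ("elbow_r", "wrist_r"),
}

def rotation_between(v_from, v_to):
    """Quaternion (w, x, y, z) rotating unit vector v_from onto v_to."""
    v_from = v_from / np.linalg.norm(v_from)
    v_to = v_to / np.linalg.norm(v_to)
    axis = np.cross(v_from, v_to)
    w = 1.0 + float(np.dot(v_from, v_to))
    q = np.array([w, *axis])
    n = np.linalg.norm(q)
    return q / n if n > 1e-8 else np.array([0.0, 0.0, 1.0, 0.0])  # 180-degree flip

def bone_rotations(joints, rest_dirs):
    """joints: {name: xyz position}; rest_dirs: {bone: rest-pose bone direction}."""
    rotations = {}
    for bone, (parent, child) in BONES.items():
        current_dir = np.asarray(joints[child]) - np.asarray(joints[parent])
        rotations[bone] = rotation_between(rest_dirs[bone], current_dir)
    return rotations  # apply these to the avatar rig on every tracking frame
```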

3.3 Human replicants

The most realistic human representation is achieved in REVERIE by means of "replicants". By replicants, we refer to a dynamic full-body 3D reconstruction of the user. The REVERIE visual "Capturing" module is responsible for capturing a user during a live session with the use of multiple depth-sensing devices (Kinects) and dynamically reconstructing a 3D representation of the user (including both 3D geometry and texture) in real time. The user moves inside a restricted area, surrounded by at least three Kinect devices, while standing in front of the display interacting with other participants. The "replicant" reconstruction is coded in real time and transmitted in order to be visualized on other users' displays along with the elements of the shared virtual world, see Fig. 3. Alexiadis et al. [22] discuss in detail the reconstruction module's pipeline for capturing and 3D reconstruction of replicants.

Fig. 3 REVERIE virtual hangout scenario session with replicants
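To give a flavour of the geometric core of multi-sensor capture, the sketch below (an assumed simplification, not the pipeline of Alexiadis et al. [22], which additionally meshes, textures and compresses the data) transforms each sensor's point cloud into a common world frame using its calibrated extrinsic matrix and merges the clouds.

```python
# Sketch of the geometric core of multi-Kinect capture (assumed simplification):
# bring each sensor's point cloud into a shared world frame and merge them.
import numpy as np

def to_world(points, extrinsic):
    """points: (N, 3) array in sensor coordinates; extrinsic: 4x4 sensor-to-world."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ extrinsic.T)[:, :3]

def fuse_point_clouds(clouds, extrinsics):
    """Merge per-sensor clouds captured at the same instant into one cloud."""
    return np.vstack([to_world(c, e) for c, e in zip(clouds, extrinsics)])

# Example (hypothetical variables): three calibrated sensors around the capture area.
# clouds = [kinect_0_points, kinect_1_points, kinect_2_points]
# replicant_cloud = fuse_point_clouds(clouds, calibration_matrices)
```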

3.4 Interaction with the autonomous agent

REVERIE emphasizes non-verbal, social and emotional perception, autonomous interaction and behavioural recognition features [23]. The system can capture the user's facial expression, gaze and voice, and interacts with them through an ECA's body, face gestures and voice commands. The agent exhibits audiovisual listener feedback and takes the user's feedback into account in real time. The agent pursues different dialogue strategies depending on the user's state, interprets the user's non-verbal behaviour and adapts its own behaviour accordingly. To direct the participants in the VE to areas where they should focus, REVERIE uses the "Follow-Me" module. The Follow-Me module controls all user avatars' navigation in a scene by automatically assigning them destinations according to an ECA's path and the virtual scene structure. The users' facial expressions are also captured in real time through their webcam and adapted to their character's representation, allowing the analysis of the users' attention and emotional status throughout the whole experience. This is controlled by the "Human Affect Analysis Module" and allows users who take the role of instructors in the virtual activity to be aware of the emotions and feelings of the users they need to guide during a virtual experience.
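The following is an illustrative sketch of the idea behind a "Follow-Me"-style controller, under the assumption that the guiding agent walks a waypoint path and each participant avatar is nudged toward an offset destination near the agent each frame. Class and parameter names are hypothetical, not REVERIE's API.

```python
# Illustrative sketch of a "Follow-Me"-style navigation controller (assumed
# behaviour, not the REVERIE module): the guiding agent advances along a path
# and each participant avatar is stepped toward an offset near the agent.
import numpy as np

class FollowMeController:
    def __init__(self, agent_path, offsets, speed=1.5):
        self.agent_path = [np.asarray(p, dtype=float) for p in agent_path]
        self.offsets = [np.asarray(o, dtype=float) for o in offsets]  # per avatar
        self.speed = speed      # metres per second
        self.waypoint = 0

    def update(self, agent_pos, avatar_positions, dt):
        """Advance the agent's waypoint and step every avatar toward it."""
        target = self.agent_path[self.waypoint]
        if np.linalg.norm(target - agent_pos) < 0.5 and \
                self.waypoint < len(self.agent_path) - 1:
            self.waypoint += 1
            target = self.agent_path[self.waypoint]

        new_positions = []
        for pos, offset in zip(avatar_positions, self.offsets):
            destination = target + offset          # stand near, not on, the agent
            direction = destination - pos
            distance = np.linalg.norm(direction)
            step = min(self.speed * dt, distance)
            new_positions.append(pos if distance < 1e-6
                                 else pos + direction / distance * step)
        return target, new_positions
```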

4 Use cases and field studies

This section discusses two use cases that drove the development of REVERIE [24] and two field studies that have been conducted to evaluate the following:

• The impact of REVERIE's immersive and multimodal communication features

• The cognitive accessibility of educational tasks completed with the use of the platform

• The quality of the user experience (UX)

The quality of REVERIE VHs is then reviewed under the lens of a list of VH design guidelines that should be met to aid smooth interaction, communication, collaboration and control in CVEs (see Section 5).

4.1 Use case 1

The first use case (UC1) shows how REVERIE can be used in educational environments with emphasis on social networking and learning. This is accomplished through the following:

• The integration of social networking services
• The provision of tools for creating personalized lookalike avatars
• Navigation support services for avoiding collisions
• Spatial audio adaptation techniques
• Real-time facial animation adaptation for avatar representation
• Artificial intelligence techniques for responding to the users' emotional status

UC1 consists of two educational scenarios. Scenario 1 (Sc1) is a guided tour in the virtual European Parliament followed by a debate. Students log in to REVERIE using their personal account credentials and create their own personalized avatar by accessing the RAAT tool (see Section 3.1). Then, they are transferred to the virtual parliament scene, where an automated explanatory tour takes place guided by an autonomous agent (see Section 3.4). When the tour is over, a virtual debate takes place and each student presents their view on a topic (e.g. multiculturalism), which is streamed as video over the internet and rendered on a 3D virtual projector in the parliament scene. Students rate their preferred presentations using the rating widget (on a 5-point scale) and the results are shown to everyone through an overlay list display.
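The rating mechanism described here reduces to simple score aggregation; the sketch below is an illustration of that kind of logic only (hypothetical names, not REVERIE's implementation): collect 5-point ratings per presenter and produce a ranked list for the overlay display.

```python
# Minimal sketch of the aggregation behind a 5-point rating widget with an
# overlay leaderboard (illustrative only; not REVERIE's implementation).
from collections import defaultdict
from statistics import mean

ratings = defaultdict(list)   # presenter -> list of 1..5 scores

def rate(presenter, score):
    if not 1 <= score <= 5:
        raise ValueError("Ratings are on a 5-point scale")
    ratings[presenter].append(score)

def overlay_list():
    """Return (presenter, average, votes) rows sorted for the overlay display."""
    rows = [(p, round(mean(s), 2), len(s)) for p, s in ratings.items()]
    return sorted(rows, key=lambda row: row[1], reverse=True)

# Example: rate("student_a", 5); rate("student_a", 4); rate("student_b", 3)
# overlay_list() -> [("student_a", 4.5, 2), ("student_b", 3.0, 1)]
```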

Scenario 2 (Sc2) is a search-and-find game in a 3D virtual gallery. Students log in to REVERIE, create their own avatar as in Sc1 and join a virtual gallery. Each student is assigned a card containing information about an object in the virtual gallery, based on which they have to locate this object by exploring the virtual gallery on their own. Then, they have to give a presentation about the object they have found. As in Sc1, students rate their preferred presentations.

In both scenarios, communication between users in the VE is multimodal, supporting the use of spatial audio streaming and avatar gestures.

4.2 UC1 field trial

The UC1 field trial was conducted in a lab at Queen Mary University, London, with a setup resembling an actual classroom environment, see Fig. 4.

Fig. 4 REVERIE UC1 field trial setting. S student, T teacher, A assistant

In particular, a series of desks was put in a rectangular shape, dividing the space into three areas: two sides allocated to students (represented with an S in Fig. 4) and the top to the teacher(s) (represented with a T in Fig. 4) associated with each group. Four researchers were present in the lab (represented with an A in Fig. 4) to record each session and to provide the necessary support (technical and logistical) for the successful completion of each session. Participants used high-specification desktop computers, a standard wireless mouse and a Bluetooth headset attached to a computer to complete the assigned educational tasks. Web cameras were attached to the computers to enable multimodal communication (e.g. head nods) and to detect the students' attention. The computers were connected to a LAN (local area network) with access to the WWW.

In total, 52 participants took part in this study, six of whom were used in an initial pilot to ensure that the main study would run smoothly. The remaining 46 participants (students and teachers) were assigned randomly to the study conditions.

Participants included a mix of male and female students between 11 and 18 years old. All participants had varying familiarity with video games and social networking media portals (e.g. Facebook, Twitter). Participants took part in groups of up to six members each and were given a short training session at the beginning of the study to familiarize themselves with the use of REVERIE. Participants were also given a printed GUI map for REVERIE in case they would still not feel comfortable with its use after the training session. After this training, the teachers were asked to follow the lesson plan that consisted of the following:

• A starter activity, which was a group discussion designed to introduce and give students a chance to reflect on the topic of the educational activities

• An individual activity, where the students got the opportunity to present their views (with arguments for and/or against) on the topic of the educational activities using REVERIE and get feedback

• A group activity, where the students had to work closely with their classmates to come up with an answer on the topic of the educational activity using the REVERIE VE

Each session was followed by an interview of the students discussing their experience using REVERIE.

4.3 Use case 2

The second use case (UC2) of REVERIE features a novel 3D tele-immersion system that can stream 3D meshes (including high-resolution scans of real users) in real time, which can be fully integrated into virtual worlds. Participants in a group of four had to enter a "virtual hangout" and chat and collaborate in the CVE to complete the assigned task. Participants were represented in the VE with replicants who were captured with three Kinects. This was done to investigate the impact of the level of realism on the quality of the UX and task completion. The user represented by the replicant was given a step-by-step manual on how to create two objects using Lego Mega Blocks. Using verbal and non-verbal means of communication, the replicant had to show the rest of the group the objects he/she created using the Lego Mega Blocks. The rest of the group had to replicate the shapes on a notepad, using words to describe their various features (e.g. colour and shape). The user represented by the replicant commented on the accuracy of the drawings and the whole process was repeated with the second object.

4.4 UC2 field trial

The evaluation of the scenario was carried out in the laboratory of Dublin City University. Thirty-one participants were recruited internally for the study, with a variety of computing and educational backgrounds and with mixed gender and age. The following three dimensions of UX were deemed important:

• The usability of the UC2 prototype: the degree to which participants are able to complete assigned tasks with effectiveness, efficiency and satisfaction

• The user engagement: the extent to which the user's experience makes the entertainment prototypes desirable to use for longer and more frequently

• The user acceptance: the degree to which the prototypes can handle the tasks for which they were designed [25]

Field study user experience feedback for both UCs was collected by video recording user testing sessions, asking the users to complete questionnaires and conducting interviews after each session. Video recordings included a screengrab of individual users' monitors documenting their actions using REVERIE, plus a recording of the room in which the activity took place (see Fig. 4). The latter recordings provided data related to user communication outside the system and to cases where human intervention was required by human helpers attending the session. In addition, the data included communication logs between all users using REVERIE communication tools. Interviews were video recorded and transcribed, detailing user reactions to a set of questions related to the users' impressions of REVERIE's user interface, mode of communication (text vs audio), feeling of immersion, VH representation, control over the activity, navigation and gamification. This resulted in a rich qualitative dataset. All recorded data was analysed by expert usability evaluators following formal methods (e.g. analysing the sense of participants' presence in the virtual world by measuring their attention, emotional engagement and overall sentiment) [26], aiming to extract repetitive patterns in user response that could lead to quantifying user response and forming more generalizable remarks. The evaluation of the data collected from the field trials is described in Section 5 below, demonstrating the results on the quality of REVERIE VHs.

5 Evaluation of REVERIE VHs

This section reviews how well REVERIE VHs address the VH framework of design guidelines that warrant smooth social communication and interaction in CVEs and, in essence, support immersion. Those VH design guidelines derived from previous empirical research, which is outlined in Section 5.1 below, and are listed in columns 1–2 of Table 1. The data used for this analysis stems from the REVERIE tools used to implement UC1 and UC2 and from reviewing the quantitative and qualitative data collected from the field trials (see Section 4). Table 1 columns 3–6 depict which design guidelines are met by the three different REVERIE VH representation artefacts (avatars, puppeted avatars and replicants) and by the autonomous agent (shown as ECA).

The following sections provide information about the methodology based on which the VH framework of design guidelines was derived and the justification of how each design guideline is met by the REVERIE VHs.

5.1 Elicitation of VH design guidelines

The derivation of design guidelines that ensure smooth interaction in CVEs was the output of a user-centred, iterative, multiphased approach [12] involving 60 users overall. The novel aspects of this approach are that [12]

• It uses a "real-world" application as a case study

The advantage of using a real-world application is that problems arising in such a situation can determine the success or the failure of the system according to real user and application needs

• It breaks the problem into a series of phases of increasing sophistication

Increased sophistication is achieved by an incremental increase of the user population, the use of more "mature" technologies and the conduct of ethnographic studies of face-to-face user actions to determine requirements that CVEs should support. Increased sophistication helps overcome the difficulty of isolating the vast number of factors involved in the situation when generating the required results and evaluating their validity.

• It follows a rigorous method of analysing rich qualitative data

The approach organizes and manages rich qualitative data and enables the extraction of quantitative values, which helps derive design guidelines and technology requirements regarding the use of VHs in CVEs for learning [27].

The users who took part in this study were engaged in an educational activity: learning the rules of a game, the ancient Egyptian game Senet, and finally playing the game within a bespoke CVE [28] using the Deva large-scale distributed virtual reality system [29]. The rules of the game were provided by a user who took the role of an instructor (played by a researcher) and simulated the preferred behaviour of an autonomous expert agent. The study provides an understanding of interactivity and social communication issues that arise in a collaborative environment, and it creates a set of design guidelines related to the CVE environment, objects contained in the CVE and VH features, behaviours and controls. In this paper, only the guidelines related to the VHs are considered. Although the context of the study was learning, the design guidelines are generic and apply to a wide range of CVEs.

5.2 Aesthetically pleasing, realistic representation

This section assesses how REVERIE VHs addressed the DG related to the aesthetics of VH representation in VEs.

• DG1: VHs should support realistic or aesthetically pleasing representation of the user

UC2 stressed the importance of high-quality VH representation in a VE to add realism to the activity. Users stated that the non-realistic appearance of all types of avatars was distracting. Specifically for avatars and puppeted avatars, users identified VH features such as unrealistic skin tone, emotionless facial expressions, bad lip synchronization and a lack of non-verbal gestures (randomized gestures that were not realistic) as particularly distracting.

Replicants increased the level of satisfaction with user representation in the VE as they provided an actual representation/clone of the user's body in VR. A representative quote follows: "The moments that you could actually see the replicant, well the quality was very good and looked real". However, users felt odd in cases where only half of their body was represented in the VE or parts of their body were disappearing due to lagging.

5.3 Identity

This section reviews how REVERIE VHs addressed DGs related to representing the user's identity in VEs.

• DG2: VHs should support unique representation

UC1 showed that the REVERIE RAAT (see Section 3.1) and avatar puppeting satisfy the requirement for unique VH representation, which is classed by users as very important for two reasons: it closely matches users' personality, and it helps in user identification/recognition by others in the VE. The users expressed the requirement for a bigger list of clothes and accessories to closely and more accurately represent the user's natural appearance and convey their identity, personality and uniqueness. They stated that although such accessories do not seriously contribute to the main activities in VR, they do improve the realism of the task.

Replicants meticulously match the requirement for accurate representation of users in the VE as they provide an exact reconstruction of the users.

• DG3: VHs should convey the user's role in the CVE (e.g. student, teacher, other)

This DG was fully met by REVERIE by providing models/emblems via the RAAT to represent users with specific roles.

• DG4: VHs should support customizable behaviour

Puppeted avatar and replicant behaviour is driven by the user. Editing of agents' behaviour/discourse and verbal or non-verbal responses is not supported by REVERIE.

5.4 Users' focus of attention

This section looks at how REVERIE VHs addressed DGs related to the user's involvement in activities that take place in the VE and their level of engagement.

• DG5: VHs should convey the user's viewpoint
• DG6: An active participant needs to be identified even when their VH is out of other users' viewpoints

In both DG5 and DG6, the avatars' positioning and direction of gaze indicate the user's focus of attention in the VE. The Human Affect Analysis Module (see Section 3.4) provides information about a user's emotional status (only in UC1). Based on this module, the user can indirectly conclude who is paying attention.

• DG7: User viewpoints should be easily directed to see an active participant or a speaker even when they are out of other users' viewpoints
• DG8: A tool should be provided for users to lock onto the active VH and follow it automatically

UC1 showed that DG7 and DG8 are met by the REVERIE navigation system due to the Follow-Me module (see Section 3.4). This module affects the navigation of a group of participants by a system-controlled autonomous agent that directs user attention where it should be focused. However, users found losing control of their viewpoint intrusive and they expressed discomfort with the way this happened.

5.5 Communication and turn taking

This section evaluates how REVERIE VHs addressed DGs related to enhancing discourse in CVEs.

• DG9: A VH should be easily associated with its communication
• DG10: The speaker needs to be identified even when their VH is out of other users' viewpoints

UC1 showed that REVERIE addresses DG9 and DG10 by means of the lip synchronization and Webcam Face Puppeting module. The latter allows users to control the movements of their avatar's face through direct mapping of the character's face mesh geometry to a number of tracked feature points on the user's face. When the users are distant or hidden outside one's viewpoint, REVERIE provides a list displaying the speaker and the users that request a turn to talk.

• DG11: VHs should convey the user's intention to take a turn or offer a turn even when not being in other users' viewpoints

UC1 shows that REVERIE supports a turn-taking protocol and meets DG11. This is achieved by providing a tool that shows who is talking and who wants to take a turn in two ways: by pressing a button that makes the user's VH hand rise to claim a turn and by showing an indication in a list that displays the speaker and who wants to take a turn. UC1 and UC2 showed that puppeted avatars and replicants convey the user's intention to discourse as they closely map user facial expressions.

5.6 Private communication and interaction

This section studies how REVERIE VHs addressed DGs related to private communication and interaction in CVEs.

• DG12: Private communication and interaction should be supported
• DG13: VHs should show when the user is involved in private communication and whether or not others could join in

Regarding DG12 and DG13, UC1 showed that private communication and interaction are not supported by REVERIE. However, user testing revealed the importance of extending the platform to meet this requirement. In the field trials of UC1, private communication would allow teams to talk/interact with each other before taking part in a debate or a public presentation. Teachers taking part in the study mentioned that private communication would benefit educational purposes, as it would allow them to deal with group or individual user questions.

5.7 User status

This section looks at how the affective features of REVERIE VHs addressed DGs related to the user's status of interaction with others or objects in the VE and state of mind.

• DG14: VHs should reveal the user's action point
• DG15: Users need to be provided with real-time cues about their own actions

DG14 and DG15 are about associating object manipulation with the VHs performing those actions. REVERIE avatars are restricted in revealing information about the VH walking, looking in certain directions or being seated. The animations they support are restricted to random verbal movements and not to object manipulation in the VE. In contrast, puppeted avatars fulfill those guidelines as they are capable of reconstructing an animation in the VE that copies the user's action from real life. Replicants fully address those design guidelines as they provide an exact 3D reconstruction of the user and possibly of objects that the user may be carrying/manipulating (including both 3D geometry and texture) in real time. However, puppeted avatars' and replicants' movement in the VE is restricted to a small physical space within the range that can be covered by the Kinect. Supporting different viewpoints in the VE allows users to see their own human representation performing an action in the VE.

• DG16: VHs should convey explicitly the user's process of activity and state of mind

Two points are covered by this DG: capturing the user's process of activity, such as starting and completing moving/manipulating an object or walking or navigating to reach a point; and state of mind, meaning engagement and focus of attention. REVERIE avatars convey the process of movement (reaching a place), while puppeted avatars and replicants fully convey a process of activity. The Gaze Direction User Engagement component of the REVERIE Human Affect Analysis Module in UC1 allows users with the role of a teacher to monitor whether the student users are focused on the activity (whether they look at the screen) and their emotional state (happy, neutral or unhappy).

5.8 Control

This section looks at how DGs related to control in an educational CVE, or any other environment where similar conditions of practical management apply, are addressed by REVERIE VH representation. In this design guidelines group, by expert we refer to users with the role of a teacher/instructor, and by novice we refer to users that learn within the VE.

• DG17: The expert should be in control of novice user behaviour

DG17 implies the existence of tools that allow monitoring user behaviour and being able to intervene to aid user interaction in the VE. Such a design solution would be beneficial in any pedagogical environment. The facial puppeting capabilities provided by REVERIE meet DG17 as well, as they inform a user with the role of a teacher/instructor about student users' engagement via the Human Affect Analysis Module, based on which teachers can intervene to attract students' attention.

• DG18: The expert should have control over an individual user's viewpoint

The Human Affect Analysis Module informs expert users about other users' emotional status. This helps an expert user to change behaviour in order to attract disengaged users. The Human Affect Analysis Module and the "Follow-Me" module (see Section 3.4) allow the autonomous agent to attract user attention by approaching users that appear disengaged, clapping in front of them to attract their attention and guiding them to an area of interest. Such a solution was characterized as very intrusive by the student users in UC1. However, teacher users stated that for educational purposes, this feature is necessary.

• DG19: The expert should be in control of the communication tools

REVERIE partially meets DG19 as controlling students' communication is restricted to muting their VH. They do not have any control over other VHs though.

• DG20: The expert should be able to take control of objects in the CVE

DG20 is about providing tools that allow expert users to be in control of objects contained in the VE and of other VHs' behaviour. None of those requirements is currently met by REVERIE.

• DG21: The expert should be aware of and have control over private communication of novice users
• DG22: The expert should be aware of and have control over private interactions of novice users

Design guidelines 21 and 22 are not met by REVERIE as the system does not support private communication or interaction. Satisfying those requirements would be beneficial in any pedagogical environment.

• DG23: The expert should have an episodic memory of novice user mistakes

DG23 stresses the need for a tool that keeps a history of frequent mistakes. This implies providing a tool for setting a series of actions, a set of properties (right/wrong) and keeping track of a user's progress of task completion in a VE. Satisfying DG23 would increase the expert's promptness in assisting novice users. REVERIE partially meets DG23 by providing a tool that allows recording user actions in the VE that could be viewed by an expert user in order to provide feedback and guide other users. However, this is difficult to do in real time and for a large number of users.

6 Discussion and directions of future work

In this section, we discuss the results of the evaluation of REVERIE VH representation based on how effectively they addressed:

• The needs of the use cases that have been created to evaluate REVERIE's technological features in real-world scenarios (see Section 4)

• The user-centred VH DGs outlined in Table 1

The results are discussed according to a list of features that VH representation should fulfill in virtual platforms that enable remote synchronous human interchange to successfully support smooth interaction, communication and collaboration, with primary focus on REVERIE. Also, this discussion leads to general remarks on directions for future development of REVERIE which are directly applicable to the CVEs discussed in the state-of-the-art section on VH representation (see Section 2). The discussion of the list of VH features is grouped as follows:

• Aesthetically pleasing and realistic representation is important in a VE in order to aid realism in the activity

REVERIE replicants fully support realistic representation of users in a VE, while puppeted avatars support it closely enough. To fully address this requirement, future work should focus on representing real-life user facial expressions and the performance of puppeted avatars more realistically and on perfecting the real-life image reconstruction of replicants.

• Identity is important to effectively identify user roles and represent user personalities

REVERIE replicants meticulously meet the need for accurate representation of users in the VE. Avatar puppeting that maps the image of the user to the avatar face helps closely match the natural user appearance, while the REVERIE RAAT allows the customization of user avatars to closely and accurately match the user's natural appearance. The RAAT tool could be extended with a greater list of actors, clothes, hair styles and accessories to match the user requirement for a more personalized representation.

• User focus of attention and status of activity are essential prerequisites in initiating and following up an activity on the basis of associating VHs with the actions they are engaged in within a VE

The REVERIE avatars' positioning in the VE indicates the users' focus of attention, while replicants and puppeted avatars adequately reveal the action which is performed by a user in the VE. However, the field trials indicated that more work needs to be done in order to improve the quality of both sets of VH representation, replicants and puppeted avatars, to avoid breaking of the models and motion, which results in a non-realistic representation of users and user actions.

The REVERIE Human Affect Analysis Module (see Section 3.4) indicates users' focus of attention and addresses problems of the constrained sense of actions performed in a CVE, which is imposed in general by the restricted human visual field and spatial audio in VR. However, the field trials showed that although the way the autonomous agent is designed to attract and refocus user attention in REVERIE (see DG18) was appreciated by teachers that need tools to enforce control in a pedagogical environment, it was generally rather intrusive for the rest of the users.

• Communication and turn taking is supported adequately in REVERIE by lip synchronization, avatar puppeting and relevant animations that identify the speaker and express willingness to take a turn, for example raising one's avatar's hand to express interest in taking a turn.

• Private communication and interaction is not supported by REVERIE. UC1 field trials revealed the importance of extending the system to meet this requirement, particularly to support pedagogical purposes that deal with assisting novice users to become more active.

• Control implies the need to record and take control of other users' communication and actions in the VE in order to effectively assist the activity.

Such a requirement is particularly valuable in pedagogical environments where control over trainees needs to be applied to effectively assist educational requirements. This is met in REVERIE through the services provided by the Follow-Me module, the Human Affect Analysis Module and the design of the graphical user interface that allows the teacher user to control who can be heard or not in the CVE. Control also implies the need to keep track of actions/mistakes and to be able to intervene and assist the activity accordingly. This implies that the system provides a tool for setting a series of actions, a set of behaviours (right/wrong) and for tracking the user's progress of activities in a VE. Satisfying such a requirement would increase the expert's control over novice users' progress and comprehension and it would enhance the teacher's promptness in assisting them. This would be particularly beneficial in any pedagogical environment or any VE where users need to follow specific tasks and routines. REVERIE partially meets this requirement by providing a tool that allows recording user actions in the VE that could be viewed by an expert user in order to provide feedback and guide other users. This solution might be effective for feedback following an activity, not in real time, and for a small number of users taking part. Otherwise, automation of the process is required.

7 Conclusions

The evaluation of VH representation in the REVERIE environment showed that it partially satisfies the design guidelines for realistic human representation. We believe the next steps in developing CVEs should cater for all the main features REVERIE currently provides or that are recommended in the discussion in Section 6 for further improvement of the platform. In summary, these sets of features should include the following:

• Successful indication of user focus of attention and status of activity
• Effective support of turn taking
• Detection of user status of engagement in the interactions
• Reaction in various ways by showing appropriate behaviours (gestures, gaze, speech) in response
• Tools of control over user actions

Further VH features relating to smooth interaction, communication and collaboration in CVEs should be considered for development towards the following:

• Improving 3D reconstruction techniques for the creation of realistic VH representation
• Integrating AI tools for analysing user behaviour and engagement that will grant control to expert users when required
• Recording and being in control of communication and user actions

• Supporting private communication and interaction

In this paper, we discussed the importance of human representation in VEs to foster communication, interaction and collaboration and, as a result, aid the feeling of presence and immersion in CVEs. We demonstrated a list of features that VHs should encompass in order to become more affective and support immersion in a VE. Those can be summarized as realistic lookalike representation and behaviour, the integration of communication control tools and tools for providing feedback on users' attention. For these features to be met, certain technological breakthroughs will have to be made. Realistic VH representation that reaches human-eye level of detail requires technological advances in compression algorithms, available internet bandwidth and the resolution of 3D capturing devices. Successful puppeting of VHs requires a hybrid multimodal control scheme which integrates optical systems (e.g. a Kinect device) with wearable sensors such as wireless/wearable inertial measurement units (WIMUs), which have been extensively used in REVERIE [30]. Such a hybrid system could provide accurate information for both the positioning and the posture of the human user, which should translate to better puppeting of their avatar (or replicant). Similarly, a system which fuses optical and wearable biometric sensors could also provide finer-grained information about the user's emotional state to the artificial intelligence system and improve the VH's affective response. Currently, wearable sensors are big and cumbersome and are likely to be considered intrusive by most users. However, in the future, these sensors are expected to become a seamless part of human clothing so as to help in simulating reality and advancing intuitive human interaction in CVEs.
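One common way to combine an optical tracker with a wearable inertial unit, sketched below purely as an illustration of the general fusion idea (not the hybrid scheme used in REVERIE [30]), is a complementary filter: fast but drifting gyro integration is continuously corrected by the slower, drift-free optical estimate, and the gyro alone carries the estimate through optical occlusions.

```python
# Sketch of one standard way to fuse optical and wearable inertial (WIMU) data:
# a complementary filter on a single orientation angle. Illustrative only.
def complementary_filter(angle, gyro_rate, optical_angle, dt, alpha=0.98):
    """
    angle:         current fused estimate (radians)
    gyro_rate:     angular velocity from the WIMU gyroscope (rad/s)
    optical_angle: absolute orientation from the optical tracker (radians),
                   or None when the tracked limb is occluded from the camera
    """
    predicted = angle + gyro_rate * dt           # fast, but drifts over time
    if optical_angle is None:
        return predicted                         # ride out occlusions on the gyro
    # Blend: trust the gyro short-term, the drift-free optical signal long-term.
    return alpha * predicted + (1.0 - alpha) * optical_angle

# Example update loop at 100 Hz (hypothetical sensor_stream source):
# angle = 0.0
# for gyro_rate, optical_angle in sensor_stream:
#     angle = complementary_filter(angle, gyro_rate, optical_angle, dt=0.01)
```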

Acknowledgements The research that led to this paper was supported in part by the European Commission under the Contract FP7-ICT-287723 REVERIE.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

References

1. Kahneman D (1973) Attention and effort. Prentice Hall Inc, NJ
2. Goodwin C (1986) Gestures as a resource for the organisation of mutual orientation. Semiotica 62(1/2):29–49
3. Becker B, Mark G (1998) Social conversation in collaborative virtual environments. In: Snowdon D, Churchill E (eds) Proceedings of the collaborative virtual environments (CVE'98). University of Manchester, pp 47–56
4. Ekman P, Friesen W (1978) Facial action coding system. Consulting Psychologists Press, Palo Alto
5. Sacks H (1992) Lectures on conversation. Blackwell, Cambridge
6. Argyle M (1988) Bodily communication, 2nd edn. Methuen, London
7. Benford SD, Bowers JM, Fahlen LE, Greenhalgh CM, Snowdon DN (1995) User embodiment in collaborative virtual environments. In: Proceedings of the ACM conference on human factors in computing systems (CHI'95). ACM/SIGCHI, Denver, pp 242–249
8. Kilteni K, Groten R, Slater M (2012) The sense of embodiment in virtual reality. Presence: Teleoperators and Virt Environ 21:373–387
9. Slater M, Usoh M (1993) Presence in immersive virtual environments. In: Proceedings of the IEEE conference—virtual reality annual international symposium (VRAIS'93), IEEE neural networks council. IEEE Computer Society, Seattle, pp 90–96
10. http://cordis.europa.eu/project/rcn/100323_en.html. Accessed 12 July 2017
11. Economou D (2001) The role of virtual actors in collaborative virtual environments for learning. PhD Thesis, Department of Computing and Mathematics, Manchester Metropolitan University, Manchester
12. Economou D, Pettifer SR (2005) Towards a user-centred method for studying CVEs for learning. In: Developing future interactive systems. Idea Group Publishing, Hershey, pp 269–301. ISBN 1591404126
13. http://secondlife.com/. Accessed 12 July 2017
14. http://opensimulator.org/wiki/Main_Page. Accessed 12 July 2017
15. https://www.activeworlds.com/. Accessed 12 July 2017
16. https://www.sansar.com/. Accessed 12 July 2017
17. https://www.highfidelity.io/. Accessed 12 July 2017
18. Fririksson FA, Kristjansson HS, Sigursson DA, Thue D, Vilhjalmsson HH (2016) Become your avatar: fast skeletal reconstruction from sparse data for fully-tracked VR. In: Proceedings of the 26th international conference on artificial reality and telexistence and the 21st eurographics symposium on virtual environments: posters and demos. Eurographics Assoc, Little Rock, Arkansas
19. Fechteler P, Hilsmann A, Eisert P, Broeck SV, Stevens C, Wall J, Sanna M, Mauro DA, Kuijk F, Mekuria R, Cesar P, Monaghan D, O'Connor NE, Daras P, Alexiadis D, Zahariadis T (2013) A framework for realistic 3D tele-immersion. In: MIRAGE '13: Proceedings of the 6th international conference on computer vision / computer graphics collaboration techniques and applications, Article No. 12. ACM, New York, pp 1–8
20. Apostolakis KC, Daras P (2013) RAAT—the REVERIE avatar authoring tool. In: 18th International conference on digital signal processing (DSP). IEEE
21. Apostolakis KC, Daras P (2015) Natural user interfaces for virtual character full body and facial animation in immersive virtual worlds. In: International conference on augmented and virtual reality. Springer International Publishing, pp 371–383
22. Alexiadis DS, Zarpalas D, Daras P (2013) Real-time, realistic, full 3-D reconstruction of moving humans from multiple Kinect streams. IEEE T Multimed 15(2):339–358
23. Kuijk F, Apostolakis KC, Daras P, Ravenet B, Wei H, Monaghan DS (2015) Autonomous agents and avatars in REVERIE's virtual environment. In: International conference on 3D Web Technology. Heraklion
24. Wall J, Izquierdo E, Argyriou L, Monaghan DS, O'Connor NE, Poulakos S, Mekuria R (2014) REVERIE: natural human interaction in virtual immersive environments. In: 2014 IEEE International conference on image processing (ICIP). IEEE, pp 2165–2167
25. Davis FD, Venkatesh V (2004) Toward preprototype user acceptance testing of new information systems: implications for software project management. IEEE Trans Eng Manag 51(1):31–46
26. Silverman D (2006) Interpreting qualitative data: methods for analyzing talk, text and interaction. Sage
27. Economou D, Mitchell W, Boyle T (2000) Requirements elicitation for virtual actors in collaborative learning environments. Comput Educ 34(3-4):225–239
28. Economou D, Mitchell WL, Pettifer SR (2002) Problem driven CVE technology development. J Comput Netw Appl (JCNA) Elsevier Science 25(3):1–20
29. Pettifer SR, Cook J, Marsh J, West A (2000) Deva 3: architecture for a large-scale distributed virtual reality system. In: Proceedings of ACM symposium in virtual reality software and technology. ACM Press, pp 33–39
30. Valtazanos A, Arvind DK, Ramamoorthy S (2013) Using wearable inertial sensors for posture and position tracking in unconstrained environments through learned translation manifolds. In: Proceedings of the 12th international conference on Information processing in sensor networks. ACM, Philadelphia, pp 241–252