25 May, 2004
Encounters With Virtual Humans (Plus a Little About UCL)
Vinoba Vinayagamoorthy [[email protected]]
Virtual Environments and Computer Graphics, University College London
Prologue
Surrey graduate (1997 - 2001)
Information Systems Engineering
Industrial placement with Philips Semiconductors Ltd
Final-year project: Interactive Figure Animation using Inverse Kinematics, with Dr. Adrian Hilton
At present with UCL, working on virtual humans and their believability
What is realism? Is it to do with geometry, or is it more to do with behaviour?
Virtual humans endowed with realistic behaviour models evoke better levels of presence, co-presence and believability…
Main Works at VECG, UCL
Virtual reality (VR), global illumination, real-time rendering
VR systems and networking, image-based rendering
Understanding the human factors of VR
Avatars, agents, humanoid simulation, crowd simulation
Virtual clothing (Prometheus)
Collaborative virtual environments, augmented/mixed reality
UCL: Recent Projects
Equator (mixed reality): www.cs.ucl.ac.uk/research/equator/
Virtual light field (VLF): www.cs.ucl.ac.uk/research/vr/Projects/VLF/index.htm
PRESENCIA: www.cs.ucl.ac.uk/research/vr/Projects/Presencia/
VR in psychotherapy for social phobias: www.cs.ucl.ac.uk/research/vr/Projects/SocialPhobias/
COVEN (useful papers): www.cs.ucl.ac.uk/research/vr/coven/index.html
CREATE, PureForm, EnvESci, Acting, Internet2…
Ultimate Goal: UCL
A theory of virtual reality: to make it 'work' in a given application context, and with given resources,
what is the best approach to take? What are the best algorithms, interaction and rendering styles to use?…
Where Do I Fit in???
Full-time RA (with Dr. Anthony Steed)
Avatars and agents
Also, I get to play with people’s heads ☺
Part-time PhD student (with Prof. Mel Slater)
Behavioural Animation of Virtual Humans
In other words, make virtual humans “work”
Funded by the EPSRC project Equator
An IRC project involving eight universities over a six-year period
www.equator.ac.uk
www.cs.ucl.ac.uk/research/equator/
Resources Available
Trimension ReaCTor (CAVE-like) immersive system
Head-mounted displays
Haptics device: PHANToM
Other systems from partner universities:
Wide-area ceiling tracker system from UNC-CH
Haptics device from MIT
Lots of willing subjects
Psychology researchers:
UCL's Psychology department
The Institute of Psychiatry
Internet2
Collaboration with UNC-CH & MIT
Aim: investigate the extent to which people in physically remote locations could collaborate within a shared VE to carry out a joint task
Task: two people (separated by the Atlantic) collaboratively carrying a “stretcher” through a virtual maze
Problems:
Lack of weight and force feedback (i.e. no “piano movers’ problem”)
Very simplistic avatar representations
“Odd” method of navigating through the VE
Subjects had to be trained
Internet2 (Contd.)
Procedure:
5-8 minutes in the virtual environment
Subjects: get their partner at UNC to do the task
Confederates at UNC acted out two emotional states: happy or depressed
Subjective questionnaires
• Mortensen J., Vinayagamoorthy V., Slater M., Steed A., Whitton M. C., and Lok B., "Collaboration in Tele-Immersive Environments", Eighth Eurographics Workshop on Virtual Environments, pp. 93-101, (May 2002)
Images from Internet2
Lessons from Internet2
The higher the level of subjective task performance, the higher the co-presence score
The higher the subject's self-assessed contribution, and the more they wished to meet the other person, the greater the degree of co-presence
Subjects were able to recognise emotional states correctly (more so for the depressed state)
Only tone of voice, partner attitude and avatar behaviour were available as cues
This despite network lag, lost messages, simple avatars and a bad audio connection
Social Paranoia Experiments
People with persecutory ideas have a tendency to distort experience by misconstruing the neutral or friendly actions of others as hostile or contemptuous
Virtual task: form impressions of five virtual agents in a virtual library room
Each agent responded to the subject's distance from it
The avatars were otherwise pre-programmed
Lessons from Social Experiments
Confirmed that VR could induce persecutory ideas in people who have them in real life
Due to the “realness” of the environment and the behaviour of the avatars
VR may prove to be a valuable methodology for the understanding of persecutory ideation
Other phobias are being studied at present
• Freeman D., Slater M., Bebbington P., Kuipers E., Garety P., Fowler D., Read C., Met A., Jordan J., and Vinayagamoorthy V., "Can Virtual Reality be used to Investigate Persecutory Ideation?", The Journal of Nervous and Mental Disease, (2003)
Visual vs. Behaviour Realism in Avatars
The behaviour investigated was eye gaze in dyadic situations
A threefold investigation:
Independent of head tracking, does eye-gaze simulation affect participants in a virtual dyad?
How effectively does such a simulation work in an immersive environment?
What is the combined impact of eye-gaze realism and visual appearance on the perceived quality of communication within a dyad?
The Study
Two participants negotiating for 10 minutes
Role playing: baker and mayor
More about the Study
Experiment: 2x2 factorial design
Two levels of visual realism
Two levels of eye-gaze behaviour realism:
Inferred model (from observational data)
Random model
Accompanying body animations
Gender-matched participants (strangers)
Technical Ordeal
The avatars were made H-Anim compliant (www.h-anim.org)
The photo-simplistic avatar was created following the proportions artists use when drawing the human figure, e.g. human body ≡ 8 x length of the head
The photo-realistic avatars were “ripped” apart and re-assembled with “moveable” eyes
An independent behaviour model was considered desirable
Implemented in DIVE (www.sics.se/dive) as an extension plugin
Behaviour Animation
Avatars were controlled by the user
The right hand and head were tracked in both systems
Participants could speak to each other
The virtual body was invisible to the ReaCTor user but visible to the HMD user
Advantage: animations could be made visually consistent, as opposed to an accurate mapping of user actions
Behaviour Animation
Right-hand tracking data: the avatar’s right arm was bent accordingly
Head tracker data:
The avatar was moved along the X-Z plane
The avatar was made to squat if the user bent down
Any head gestures on the user’s part were mapped onto the corresponding avatar
The presence of speech from a participant was used to regulate the eye animations of their avatar:
Look at the partner more when listening
While speaking, the eyes are more dynamic
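The tracker-to-avatar mapping described above can be sketched roughly as follows. This is a minimal illustration: the function and field names and the squat threshold are hypothetical, and the actual system worked on DIVE avatar structures rather than plain dictionaries.

```python
# Illustrative sketch of the tracker-to-avatar mapping (hypothetical
# names and thresholds; not the original DIVE plugin code).

SQUAT_THRESHOLD = 0.4  # metres below standing head height (assumed value)

def update_avatar(avatar, head_pos, head_rot, hand_pos,
                  standing_height, speaking):
    # Head tracker: translate the avatar along the X-Z (floor) plane.
    avatar["position"] = (head_pos[0], 0.0, head_pos[2])
    # Squat when the user's head drops well below standing height.
    avatar["squatting"] = (standing_height - head_pos[1]) > SQUAT_THRESHOLD
    # Head gestures (nods, shakes) map directly onto the avatar's head.
    avatar["head_rotation"] = head_rot
    # Right-hand tracker: the arm is bent so the wrist meets the sensor.
    avatar["right_arm_target"] = hand_pos
    # Speech detection switches the eye-animation regime.
    avatar["gaze_mode"] = "speaking" if speaking else "listening"
    return avatar
```

The point of this structure is that the avatar is driven by a handful of tracker readings per frame, with everything else (arm bend, squat pose, eye movement) synthesised to look visually consistent.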
Behaviour Model
Four machines, synchronised to the microsecond
Eye-Gaze behaviour Model
Saccade magnitude: random (exponential distribution)
Saccade velocity: a function of magnitude
Saccade direction (8 directions): each with an associated probability
Vertical/horizontal movements occur at twice the frequency of diagonal ones
No difference in occurrence between vertical and horizontal movements
Inter-saccadic duration: look at the partner more (longer) when listening
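A minimal sketch of such a gaze model follows. All numeric constants here are placeholders: the real parameters were inferred from observational data, and only the distribution shapes and direction weightings are taken from the description above.

```python
import random

# Sketch of the eye-gaze model described above (placeholder constants).
# Eight directions; horizontal/vertical moves are twice as likely as
# diagonals, with no bias among the four cardinals.
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
DIRECTION_WEIGHTS = [2, 1, 2, 1, 2, 1, 2, 1]

def saccade_magnitude(mean_deg=4.0):
    """Magnitude drawn from an exponential distribution (degrees)."""
    return random.expovariate(1.0 / mean_deg)

def saccade_velocity(magnitude_deg):
    """Velocity as a function of magnitude; the linear constant is
    illustrative, not the one used in the study."""
    return 30.0 * magnitude_deg  # deg/s

def saccade_direction():
    """One of 8 directions, weighted as described above."""
    return random.choices(DIRECTIONS, weights=DIRECTION_WEIGHTS, k=1)[0]

def intersaccadic_interval(listening):
    """Longer fixations (looking at the partner) while listening;
    more dynamic eyes while speaking. Mean values are placeholders."""
    mean_s = 2.0 if listening else 0.8
    return random.expovariate(1.0 / mean_s)

def next_saccade(listening):
    """Sample one saccade plus the fixation that follows it."""
    mag = saccade_magnitude()
    return {
        "magnitude": mag,
        "velocity": saccade_velocity(mag),
        "direction": saccade_direction(),
        "hold": intersaccadic_interval(listening),
    }
```

An animation loop would call `next_saccade` each time the previous fixation expires, switching the `listening` flag with the speech detector.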
Our virtual aides
More nice images
• Garau M., Slater M., Vinayagamoorthy V., Brogni A., Steed A., and Sasse M. A., "The Impact of Avatar Realism and Eye Gaze Control on the Perceived Quality of Communication in a Shared Immersive Virtual Environment", SIGCHI, (April 2003)
• Vinayagamoorthy V., Garau M., Steed A., and Slater M., "Design of a Behaviour Model for Avatars: Practice and Experience", (submitted to Computer Graphics Forum)
Results from the eye-gaze experiments
The impact of the gaze behaviour model differs depending on which type of avatar is used:
For the lower-realism avatar, the (more realistic) inferred-gaze behaviour reduces face-to-face effectiveness
For the higher-realism avatar, the (more realistic) inferred-gaze behaviour increases this effectiveness
Surprising, but may be explained by the “mellowness” of the inferred gaze model
Conclusion, with respect to face-to-face effectiveness:
Low-fidelity appearance demands low-fidelity behaviour
Higher-fidelity appearance demands a more realistic behaviour model (w.r.t. eye gaze)
Results from the eye-gaze experiments
Co-presence and partner-evaluation variables show the same strong interaction effect as face-to-face effectiveness
The exception is involvement, for which there is no significant effect of either avatar or gaze type
Consistent with the findings of prior work: participants were paying attention to the task of negotiation
During the experiment, it was noticed that subjects maintained a sense of personal distance for both types of avatar
Results from the eye-gaze experiments
The baker had a lower face-to-face response count than the mayor
Role is important (consistent with prior work)
The type of interface (CAVE or HMD) did not have a significant effect on responses
Even though the refresh rate of the displays was very low
Older people were more likely to rate their experience as being like a face-to-face interaction
Issues with work so far
Tired of ripping apart, re-assembling and pre-programming avatars to satisfy the needs of every set of experiments
It should be about the experience of the study, not the technicalities
For starters: wouldn’t it be great to have an avatar that behaves in a way appropriate to a study?
A sad avatar (emotion-controlled)
An old avatar (age-controlled)
An outgoing avatar (personality-controlled)
Interpersonal attitudes, gender behaviour, etc…
Thoughts about work so far
Solution: come up with a separate behaviour animation model that can be plugged into an existing virtual corpse to get a lively humanoid
Problem: allocated PhD time at UCL ≡ 3-4 years
Re-thought solution:
Do it for one particular behaviour: body posture
Will the humanoid’s level of photo-realism affect the user’s experience? (eye-gaze experiments)
One particular high-level control variable: emotion
We already have results showing that emotions can be conveyed across networks in an immersive virtual environment (Internet2)
Do it for an agent, NOT an avatar
Research status so far:
Plenty of information on emotion models exists
Not that much on body postures
Some work in the area:
The Thalmanns and their teams: MIRALab (University of Geneva) & the VRlab (Swiss Federal Institute of Technology)
Norman Badler’s team: University of Pennsylvania
Justine Cassell, Cynthia Breazeal, Bruce Blumberg, Hannes Högni Vilhjálmsson, …: MIT Media Lab
Ken Perlin: NYU Media Research Lab
Roel Vertegaal: Human Media Lab
First problem: it is useless simply to tell a virtual human that it should droop when it is sad
Need a viable model of:
The different types of emotions
How emotions are linked to each other
Define a continuous model if possible
Include the concept of moods, since emotions decay over time?
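One possible shape for such a model, purely as an illustration: a continuous emotion state that decays exponentially toward a slower-moving mood baseline. The valence/arousal dimensions, the half-life, and the clamping ranges are all assumptions for this sketch, not part of the original work.

```python
import math

class EmotionState:
    """Continuous emotion with exponential decay toward a mood baseline.
    Dimensions and constants are illustrative assumptions."""

    def __init__(self, mood_valence=0.0, mood_arousal=0.0, half_life_s=10.0):
        self.valence = mood_valence   # -1 (e.g. sad) .. +1 (e.g. happy)
        self.arousal = mood_arousal   #  0 (calm)     ..  1 (excited)
        self.mood = (mood_valence, mood_arousal)  # slow-moving baseline
        self.decay = math.log(2) / half_life_s    # decay rate from half-life

    def event(self, d_valence, d_arousal):
        """An emotional event perturbs the state instantaneously."""
        self.valence = max(-1.0, min(1.0, self.valence + d_valence))
        self.arousal = max(0.0, min(1.0, self.arousal + d_arousal))

    def tick(self, dt):
        """Emotion decays exponentially back toward the mood baseline."""
        k = math.exp(-self.decay * dt)
        self.valence = self.mood[0] + (self.valence - self.mood[0]) * k
        self.arousal = self.mood[1] + (self.arousal - self.mood[1]) * k
```

A posture generator could then read `valence` and `arousal` every frame, so that a sad event droops the body for a while and the effect fades unless the mood itself is depressed.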
Posture & Emotional States
Second problem: no research (computational models) on posture found so far
Need a model of the different types of stable human body postures (about a thousand)
Need to filter out and categorise the relevant postures:
Not all postures are continuous
Observational studies to figure out sensible postures?
Constraints?
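As an illustration of one possible starting point for categorising postures (not the author's method): represent a posture as a vector of joint angles over a few H-Anim joints, and assign an observed posture to the nearest labelled exemplar. The joint list, angle encoding, and distance metric are all assumptions.

```python
import math

# Hypothetical posture descriptor: one flexion angle (radians) per joint.
# Joint names follow H-Anim conventions; the selection is arbitrary.
JOINTS = ["vl5", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow",
          "l_hip", "r_hip", "l_knee", "r_knee"]

def distance(p, q):
    """Euclidean distance between two posture vectors."""
    return math.sqrt(sum((p[j] - q[j]) ** 2 for j in JOINTS))

def categorise(posture, exemplars):
    """Label an observed posture by its nearest labelled exemplar."""
    return min(exemplars, key=lambda label: distance(posture, exemplars[label]))
```

Exemplars could come from the observational studies mentioned above; constraints (e.g. which joint configurations are stable) would prune the exemplar set.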
Posture & Emotional States
Final problem:
Attempt to define a computational model linking human body postures and emotions…
Test the model, and if it works, further work:
Explore levels of behaviour realism w.r.t. posture against the visual realism of the avatars
Extend the model to work in avatars
Posture & Emotional States
“The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows!”
- Frank Zappa
Finally…
• Vinoba Vinayagamoorthy (supervisors: Mel Slater, Anthony Steed), “Emotional Personification of Humanoids in Immersive Virtual Environments”, In Proceedings of the Equator Doctoral Colloquium, (October 2002)
• www.cs.ucl.ac.uk/research/equator
• www.cs.ucl.ac.uk/staff/V.Vinayagamoorthy