HCI Lecture 12
Special Issues II: Agency
Hiroshi Shimodaira

Key points:
– The computer as an agent
– Reasons to engage anthropomorphism
– Factors affecting perceived agency:
• appearance
• natural interactions (verbal and nonverbal)
• expression of emotion and personality
• behavioural competence
• social competence
– Potential problems
Introduction
! Human-computer interaction as a subject grew out of human-machine interaction, where the machine is a (complex) tool
! But there are many ways in which interacting with a computer is like interacting with an intelligent agent (such as another human)
(Diagram: the human works on a task through the computer as a tool, versus delegating to the computer as an agent)
Computers as agents
Compare the computer to someone acting as your business agent:
! Carries out complex tasks from high-level instructions
! Communicates with other similar agents, or acts as our intermediary in communicating with other people
! Actively updates and informs us about the state of the world
! …
! Has a degree of autonomy to follow its own goals
Computers as agents
To interact with complex systems (such as other humans) we often take an “intentional stance” (Dennett, 1987):
! We assume the system has internal beliefs, desires and intentions
! We explain and predict its behaviour by reference to a (rational) capacity to act so as to achieve its goals:
– “The thermostat is trying to hold the temperature at 21 degrees”
– “She didn’t come to the lecture today because she wanted to finish her assignment – unless she’s done, she won’t go out tonight”
– “Microsoft Word is trying to make me write a letter, I’d better tell it that I’m not.”
! We can think of this as an anthropomorphic conceptual model (a minimal code sketch follows below)
– Anthropomorphism is a natural tendency to attribute agency
– It provides a social metaphor for how to interact
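To make the intentional-stance reading concrete, here is a minimal belief-desire-intention (BDI) sketch built around Dennett's thermostat example. The class, field names and thresholds are invented for illustration; the lecture does not specify any implementation.

```python
# Minimal BDI-style agent sketch (hypothetical, for illustration only).
# The intentional stance predicts behaviour from exactly these three
# ingredients: beliefs, desires and intentions.

class ThermostatAgent:
    """A thermostat described intentionally: it 'wants' 21 degrees."""

    def __init__(self, target=21.0):
        self.beliefs = {"room_temp": None}      # what it takes to be true
        self.desires = {"target_temp": target}  # what it wants
        self.intention = None                   # what it commits to doing

    def perceive(self, room_temp):
        self.beliefs["room_temp"] = room_temp

    def deliberate(self):
        # Pick an intention that serves the desire, given the beliefs.
        temp = self.beliefs["room_temp"]
        target = self.desires["target_temp"]
        if temp is None:
            self.intention = "wait"
        elif temp < target - 0.5:
            self.intention = "heat"
        elif temp > target + 0.5:
            self.intention = "cool"
        else:
            self.intention = "idle"
        return self.intention

agent = ThermostatAgent()
agent.perceive(18.0)
print(agent.deliberate())  # "heat": it is "trying" to reach 21 degrees
```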
Computers as agents
! Nass et al. (1995) studied human perceptions of a computer being used in a co-operative task
! “Even the most superficial manipulations are sufficient to produce personality, with powerful effects”:
– The ‘dominant’ computer went first and issued imperatives
– The ‘submissive’ computer went second and made suggestions
– Users preferred interaction with their own personality type, and rated the competence of the computer accordingly (see the sketch below)
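A hypothetical sketch of how superficial such a manipulation can be: the same content rendered in a 'dominant' or 'submissive' surface style. The wordings below are invented for illustration and are not taken from the study materials.

```python
# Render the same suggestion with a 'dominant' or 'submissive'
# surface style, echoing the manipulation Nass et al. describe.
# Phrasings are invented for illustration.

def render_move(content, personality):
    if personality == "dominant":
        # Imperative phrasing; in the study the dominant computer
        # also took the first turn.
        return content[0].upper() + content[1:] + "."
    if personality == "submissive":
        # Tentative, suggestion-like phrasing; took the second turn.
        return "Perhaps we could " + content + "?"
    raise ValueError(f"unknown personality: {personality}")

print(render_move("rank the water supply first", "dominant"))
# -> "Rank the water supply first."
print(render_move("rank the water supply first", "submissive"))
# -> "Perhaps we could rank the water supply first?"
```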
Reasons to engage anthropomorphism
! We can try to reinforce the tendency to anthropomorphise by designing the interface to match this conceptual model.
! Why?
– Humans are already expert at social interaction
• Already know the ‘language’ -> effortless communication
• Feel competent working in this domain, which reduces anxiety
– Can grab attention, seem more fun
– Provides emotional engagement
• ‘Affective’ computing
“Interface agents draw their strength from the naturalness of the living organism metaphor in terms of both cognitive accessibility and communication style” (Laurel, quoted in Isbister, 1995)
Factors affecting perceived agency
What aspects of the interface can we use to increase the sense of interaction with an agent?
! Appearance
! Natural interaction cues such as gaze, gesture, body language
! High-level dialogue (both content and fluency)
! Expression (and perception) of emotions
! Exhibiting distinctive personality
! Displaying behavioural competence
! Showing social competence
Many of these suggest interaction in the form of an animated talking head (see last lecture), but there are many alternatives.
N.B. This list is adapted from Fong et al. (2003) regarding requirements for social robots, but bears a strong similarity to how humans informally assess intelligence in other humans – see Isbister (1995)
Appearance
! Anthropomorphic
Asimo
ReplieeQ2
Appearance
! Zoomorphic
Genghis
Necoro
Appearance
! Caricature
eMuu
Papero
Appearance
! Functional
Cardea
Roomba
Appearance
! Anthropomorphic
! Zoomorphic
! Caricature
! Functional
(Diagram: these four appearance types span a trade-off between naturalness of interaction and clarity of capabilities)
Natural interaction cues
! Produce recognisable gestures and expressions
! Track human movements
– Recognise and follow bodies
– Look at hand movements
– Orient to faces (see the sketch after this list)
– Maintain gaze
! Shared attention – follow a pointing finger or gaze direction
! Recognise gestures or activities
! Imitation
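As one concrete illustration of these cues, a minimal face-orienting loop, assuming OpenCV's stock Haar-cascade detector and a webcam; the lecture does not prescribe any particular toolchain, and a real agent would feed the computed offset to its gaze or head motors.

```python
# Sketch: orient toward the largest detected face (assumes OpenCV).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                     # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Largest face = nearest person; compute how far to turn.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        offset = (x + w / 2) - frame.shape[1] / 2   # +ve: face to the right
        print(f"turn {'right' if offset > 0 else 'left'} by {abs(offset):.0f}px")
    if cv2.waitKey(1) == 27:                  # Esc to quit
        break
cap.release()
```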
Natural interaction cues
! ‘SPARKY’ the “anti-workstation” (Scheeff et al., 2000)
! Used information from cartooning to choose suitable cues
! Tested in a museum environment
! Most engaging traits were following, eye contact and mimicking
! Need for body awareness (surroundings and touch) to interact appropriately
! ‘Pseudo-speech’ was just confusing
Dialogue
! Requires speech production and speech understanding
– Both are still problematic in natural situations
– See the previous lecture on Speech Interfaces
! Also requires:
– Appropriate responses (e.g. executing requests)
– Appropriate turn-taking and flow of conversation, implying real-time responsiveness (a minimal sketch follows below)
! If speech is present, it will dominate the interaction
– May be better to have no speech than bad speech
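A minimal sketch of the turn-taking requirement: the agent claims the floor only after the user has been briefly silent, and yields immediately on barge-in. The class and the threshold value are hypothetical.

```python
# Turn-taking sketch: claim the floor after a short silence, yield on
# user barge-in. The threshold is an invented, plausible value.
import time

SILENCE_TO_TAKE_TURN = 0.7   # seconds of user silence before speaking

class TurnTaker:
    def __init__(self):
        self.last_user_voice = time.monotonic()
        self.agent_speaking = False

    def on_user_voice_activity(self):
        # Called by a voice-activity detector whenever the user speaks.
        self.last_user_voice = time.monotonic()
        if self.agent_speaking:
            self.agent_speaking = False   # barge-in: yield the floor

    def maybe_take_turn(self):
        # Poll in the main loop; True means start speech output now.
        silent_for = time.monotonic() - self.last_user_voice
        if not self.agent_speaking and silent_for >= SILENCE_TO_TAKE_TURN:
            self.agent_speaking = True
            return True
        return False
```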
Emotions
! Directly readable and universal social cues
! Will facilitate believability and engagement
! It is hard to explain or predict behaviour of people/agents who don’t show emotion
! Useful to provide feedback about the system state, e.g. goals achieved or failed:
– May be used directly in control, i.e. the system response is modulated by its ‘emotional state’ (sketched in code below)
! To interact, the agent should also be able to recognise the user’s emotional state (difficult)
! See Pantic & Rothkrantz (2003) for a review of methods
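A sketch of emotion used directly in control, as described above: a valence/arousal state is updated by task events, decays back toward neutral, and is read out as a user-visible expression. All names and constants are invented for illustration.

```python
# Emotional state as a control signal (hypothetical sketch).

class EmotionState:
    def __init__(self):
        self.valence = 0.0   # -1 (negative) .. +1 (positive)
        self.arousal = 0.0   #  0 (calm)     ..  1 (excited)

    def on_event(self, goal_achieved):
        # Task outcomes push valence up or down and raise arousal.
        delta = 0.4 if goal_achieved else -0.4
        self.valence = max(-1.0, min(1.0, self.valence + delta))
        self.arousal = min(1.0, self.arousal + 0.3)

    def decay(self, dt):
        # Drift back toward neutral so expressions stay transient.
        self.valence *= 0.9 ** dt
        self.arousal *= 0.8 ** dt

    def expression(self):
        # The readout that modulates the agent's face/voice/response.
        if self.arousal < 0.2:
            return "neutral"
        return "happy" if self.valence > 0 else "frustrated"

mood = EmotionState()
mood.on_event(goal_achieved=False)
print(mood.expression())   # "frustrated": readable system-state feedback
```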
Emotions
! Kismet (Breazeal, 2002)
http://www.ai.mit.edu/projects/sociable/videos.html
Personality
! “Big 5”-dimensional description
– Extraversion, agreeableness, conscientiousness, neuroticism, openness (see the sketch below)
! Usually involves an interaction of features, e.g. appearance, type of language, typical behaviour, amount and type of emotion expressed, etc.
! Can project a ‘non-human’ personality, e.g. of a pet, cartoon or robot
! Kiesler & Goetz (2002) use willingness to ascribe personality as a measure of success in engaging an anthropomorphic mental model
! However, some unexpected results – people co-operate less with a more friendly robot
(Image: an on-screen assistant announcing “Glad to be of service!”)
! Has the potential to irritate as much as help (cf. our reaction to people we suspect of ‘faking’ emotions)
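A hypothetical sketch of projecting a Big-5 profile onto surface behaviour: a trait vector steering language style. The trait-to-style mappings are invented for illustration.

```python
# Big-5 trait vector steering surface style (invented mappings).
from dataclasses import dataclass

@dataclass
class BigFive:
    extraversion: float       # each trait in 0..1
    agreeableness: float
    conscientiousness: float
    neuroticism: float
    openness: float

def style_response(text, p):
    if p.extraversion > 0.7:
        text += "!"                       # chattier, more exclamatory
    if p.agreeableness > 0.7:
        text = "Sure thing. " + text      # warmer framing
    if p.conscientiousness > 0.7:
        text += " I have double-checked this."
    return text

pet_like = BigFive(0.9, 0.9, 0.2, 0.1, 0.6)   # a 'non-human' profile
print(style_response("Here are your files", pet_like))
# -> "Sure thing. Here are your files!"
```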
Personality (cont.)
http://perso.telecom-paristech.fr/~pelachau/Greta/clips/public/2011/SEMAINE_example_interaction_with_Spike.avi
Conversational virtual agent “Spike” (from the SEMAINE project)
Behavioural competence
! Possible in specialised domains, hard in truly social ones
! Have to have real competence; can’t just simulate it
– Necessary to establish trust
! Less critical in an entertainment context
– No specific expectations
! Behaviour should appear (or be) intentional, i.e. goal-driven
– Characterised by the ability to overcome obstacles, plan ahead, etc.
! Human-like behavioural competence implies Artificial Intelligence
– Not necessarily achieved the same way as natural intelligence
– But deep similarity may be required to support convincing natural interaction (cf. the Turing Test)
– Arguably, this may also require a similar embodiment: immersion in the same physical and social environment
Social competence
! Requires a user model
– At minimum, classify the user’s intentions or capabilities so as to respond appropriately
– A richer model of the user’s beliefs, goals and desires, and how they change over time, will provide richer interaction
– Examples in adaptive help and tutoring systems:
• E.g. Anderson & Gluck (2001)
• Assumes learning is the acquisition of appropriate production rules for a particular domain
• The system attempts to model which rules in a particular domain the user is still missing or holds an incorrect version of
• Then supplies problems and examples aimed at illustrating the correct rule (a sketch follows below)
! For genuine social interaction, the computer or robot must itself be capable of recognising agency
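A sketch of this kind of production-rule user model: track how confident we are that the learner has each rule, and target the weakest one next. The mastery update here is a deliberately crude stand-in for real model tracing, and all names are hypothetical.

```python
# Crude production-rule user model for a tutoring system (sketch).

class RuleModel:
    def __init__(self, rules):
        # P(rule mastered), initialised pessimistically.
        self.mastery = {r: 0.2 for r in rules}

    def observe(self, rule, correct):
        # Nudge the mastery estimate toward the latest evidence.
        p = self.mastery[rule]
        self.mastery[rule] = p + 0.3 * ((1.0 if correct else 0.0) - p)

    def next_target(self):
        # Pose a problem exercising the rule we least believe the
        # user has acquired.
        return min(self.mastery, key=self.mastery.get)

model = RuleModel(["carry-in-addition", "borrow-in-subtraction"])
model.observe("carry-in-addition", correct=True)
print(model.next_target())   # -> "borrow-in-subtraction"
```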
Potential problems
“A common underlying assumption is that humans prefer to interact with machines the way that they interact with people” (Fong et al., 2003)
! We may want a master-slave rather than a peer-to-peer relationship with our technology
! Apparent agency can create false expectations
– Usability is badly compromised when they are not fulfilled
! Human-like but not quite human systems may fall into the ‘uncanny valley’ (Mori, 1970)
References
Dennett, D. C. (1987) The Intentional Stance. MIT Press.
Nass, C., Moon, Y., Fogg, B. J., Reeves, B. and Dryer, D. C. (1995) Can computer personalities be human personalities? International Journal of Human-Computer Studies, 43(2):223-239.
Isbister, K. (1995) Perceived intelligence and the design of computer characters. Master’s thesis, Dept. of Communication, Stanford University. http://www.katherineinterface.com/lccpaper.html
Fong, T., Nourbakhsh, I. and Dautenhahn, K. (2003) A survey of socially interactive robots. Robotics and Autonomous Systems, 42:143-166.
Scheeff, M. et al. (2000) Experiences with Sparky, a social robot. In Proceedings of the Workshop on Interactive Robotics and Entertainment (WIRE-2000).
Pantic, M. and Rothkrantz, L. J. M. (2003) Toward an affect-sensitive multimodal human-computer interaction. Proceedings of the IEEE, 91(9):1370-1390.
Breazeal, C. (2002) Designing Sociable Robots. MIT Press.
Kiesler, S. and Goetz, J. (2002) Mental models and cooperation with robotic assistants. In Proc. CHI 2002.
Anderson, J. R. and Gluck, K. (2001) What role do cognitive architectures play in intelligent tutoring systems? In D. Klahr and S. M. Carver (eds.) Cognition & Instruction: Twenty-five Years of Progress, 227-262. Erlbaum.
! See also: Dix et al., sections 1.5, 3.9, 4.2.13, 11.4