
Texting with Human-like Conversational Agents: Designing for Anthropomorphism

Anna-Maria Seeger¹, Jella Pfeiffer², Armin Heinzl³

¹ University of Mannheim, Germany, [email protected]
² Justus Liebig University Giessen, Germany, [email protected]
³ University of Mannheim, Germany, [email protected]

* Paper accepted for publication in Journal of the Association for Information Systems. The Association for Information Systems owns the copyright. Use of the manuscript for profit is not allowed.

Abstract

Conversational agents (CAs) are natural language user interfaces that emulate human-to-human communication. Because of this emulation, research into CAs is inseparably linked to questions about anthropomorphism—the attribution of human qualities, including consciousness, intentions, and emotions, to nonhuman agents. Past research has demonstrated that anthropomorphism affects human perception and behavior in human-computer interactions, for example by increasing trust and connectedness or stimulating social response behavior. Based on the psychological theory of anthropomorphism and related research on computer interface design, we develop a theoretical framework for designing anthropomorphic CAs. We identify three groups of factors to stimulate anthropomorphism: technology design-related, task-related, and individual factors. Our findings from an online experiment support the derived framework but also reveal novel, yet counterintuitive, insights. In particular, we demonstrate that not all combinations of anthropomorphic technology design cues increase perceived anthropomorphism. For example, we find that using only nonverbal cues harms anthropomorphism; however, this effect turns positive when nonverbal cues are complemented with verbal or human identity cues. We also find that whether CAs complete computer-like or human-like tasks and individuals' disposition to anthropomorphize greatly affect perceived anthropomorphism. This work advances our understanding of anthropomorphism and makes the theory of anthropomorphism applicable to our discipline. We advise on the direction research and practice should take to find the right spot in anthropomorphic CA design.

Keywords: Conversational Agents, Chatbots, Anthropomorphism, Interface Design


Introduction

In the past few years, conversational agents (CAs)—also called conversational interfaces or chatbots—have played a dominant role in the pursuit of innovative and ground-breaking human-computer interactions by digital giants such as Google, Facebook, Amazon, and WeChat (McCarthy, 2019). Although the first CA was developed in the 1960s (Weizenbaum, 1966), recent technological advances in artificial intelligence, machine learning, and natural language processing have revived attention to conversational interfaces and their implementation in research and practice. Indeed, the market for CAs is expected to more than triple by 2024 (Marketsandmarkets, 2019). Important use contexts for CAs include customer service, e-commerce, business workflows, the Internet of Things, and personal health and fitness (Araujo, 2018; Gnewuch, Morana, & Maedche, 2017). In sum, CAs are expected to create substantial business value in current and future digital ecosystems.

CAs are "user interfaces that mimic human-to-human communication using natural language processing, machine learning, and/or artificial intelligence" (Schuetzler, Giboney, Grimes, & Nunamaker, 2018, p. 94). In essence, CAs enable users to interact with information systems (IS) in the same way they would with a human interaction partner via text messaging or telecommunication applications. Therefore, a major advantage of CAs is that users can interact with them naturally and intuitively (Chattaraman, Kwon, Gilbert, & Ross, 2019). This natural language interface is CAs' distinct characteristic (Araujo, 2018), which enables a human-like interactional experience (Pickard, Schuetzler, Valacich, & Wood, 2017; Schuetzler, Grimes, Giboney, & Buckman, 2014). Providing an object with observable human-like features and behaviors, such as human language, can induce anthropomorphism—the attribution of human qualities, including consciousness, intentions, and emotions, to nonhuman agents or objects (Epley, Waytz, & Cacioppo, 2007). Accordingly, CAs' emulation of human-likeness inseparably links them to questions around anthropomorphism.

Meaningful and generalizable research on CA design, perception, and use benefits greatly from knowledge of how to stimulate anthropomorphism because this psychological phenomenon has several important consequences for human perception and behavior. For example, according to psychological research, anthropomorphism leads to interpretations about an interaction partner's intentions and motives (Waytz, Gray, Epley, & Wegner, 2010b). In turn, these interpretations make interactional events feel more meaningful and important (Guthrie, 2013). On a related note, anthropomorphism has positive effects on the relationship with a nonhuman interaction partner in terms of trust (De Visser et al., 2016) and connectedness (Tam, Lee, & Chao, 2013). Besides these perceptional consequences, anthropomorphism also stimulates social response behavior in the perceiver (Epley et al., 2007; Waytz et al., 2010b). For example, anthropomorphism affects self-disclosure behavior in that individuals interacting with a highly anthropomorphized interaction partner disclose less undesirable information (Schuetzler et al., 2018). Overall, the degree to which a CA is anthropomorphized has several implications for the ongoing interactional relationship; thus, anthropomorphism is a critical factor requiring careful consideration during the design phase.

Extant research in IS and human-computer interaction (HCI) has focused on these effects of anthropomorphic design in terms of perceptional (Benbasat, Dimoka, Pavlou, & Qiu, 2010; Nowak & Rauh, 2005; Qiu & Benbasat, 2009) and behavioral (Burgoon et al., 1999; Gong, 2008; Riedl, Mohr, Kenning, Davis, & Heekeren, 2014; 2011; Schuetzler et al., 2014; 2018) outcome variables. However, this body of research employs varying and inconsistent perspectives on what constitutes anthropomorphic design and what evokes anthropomorphism. We will demonstrate that designing anthropomorphic interactions with CAs is no trivial matter—a theory-grounded and empirically validated knowledge base is required to understand which CA design elements can stimulate perceptions of anthropomorphism, and how.

To establish this knowledge base, we build on Epley et al.'s (2007) psychological theory of anthropomorphism,¹ which has to date received limited attention in IS and HCI research. This theory identifies cognitive, motivational, and dispositional determinants of anthropomorphism (Epley et al., 2007; Waytz et al., 2010a). While some studies have considered the role of the cognitive determinant in investigating anthropomorphic features in technology design (Niu, Terken, & Eggen, 2018; Yuan & Dennis, 2017; Yuan, Dennis, & Potter, 2016), the other two determinants of anthropomorphism have, to the best of our knowledge, not been similarly considered before.

In addition to the need to include motivational and dispositional determinants, our review of studies on anthropomorphic technology design also revealed the need to further systematize our knowledge of anthropomorphism's cognitive determinant. According to the theory of anthropomorphism, perceiving human-like features in technology design is supposed to activate users' knowledge of humans when they assess a technological artifact. This cognitive process results in perceptions of anthropomorphism (Epley et al., 2007). However, several studies manipulate the anthropomorphic design of a technological artifact without validating the related effect on anthropomorphic perceptions (e.g., Burgoon et al., 2016; Cowell & Stanney, 2005; Nunamaker, Derrick, Elkins, Burgoon, & Patton, 2011; Qiu & Benbasat, 2009; Verhagen, van Nes, Feldberg, & van Dolen, 2014). Other studies report contradictory results: for example, in terms of anthropomorphic design's effect on socially desirable behavior, some researchers find a significant effect of human-like visual cues (Kiesler, Powers, Fussell, & Torrey, 2008; Sproull, Kiesler, Walker, & Waters, 1996), while others find such an effect only for communicative but not for human-like visual cues (Sah & Peng, 2015; Schuetzler et al., 2018). Moreover, a CA offers design possibilities different to those of a humanoid robot (Kiesler et al., 2008; Trovato et al., 2015) or a product in a webshop (Yuan et al., 2016; Cyr, Head, Larios, & Pan, 2009). Therefore, research findings on the anthropomorphic design of related technological artifacts cannot readily be applied to CA design.

¹ Hereafter, the term "theory of anthropomorphism" refers to Epley et al.'s (2007) eponymous theory.

This paper builds on the theory of anthropomorphism (Epley et al., 2007) to identify the right spot for anthropomorphic CA design and to account for the complexity of anthropomorphic perception. We derive three groups of antecedents from the three determinants of anthropomorphism mentioned above: design-related (cognitive), task-related (motivational), and individual (dispositional) factors. We integrate these three elements into a comprehensive framework of anthropomorphic CA design. Because such comprehensive knowledge of the mechanisms leading users to anthropomorphize CAs has been lacking, our goal is to fill this gap. Therefore, this paper seeks to answer the following research question:

RQ: Can cognitive, dispositional, and motivational factors be used to understand users' perceptions of anthropomorphism in CAs?

To address this question, we develop a theoretical design framework and evaluate it with an online experiment. From a theoretical perspective, our framework integrates technological as well as task- and user-related factors to explain perceptions of anthropomorphism in human-CA interactions. This knowledge provides IS researchers and practitioners with the necessary guidance to achieve varying degrees of anthropomorphism. Prior research has focused primarily on specific technological features' impact on users' perception of anthropomorphism (e.g., Pütten et al., 2010; Sah & Peng, 2015; Schuetzler et al., 2014); we contribute to the literature by drawing on the theory of anthropomorphism to extend our understanding of anthropomorphism in human-CA interactions. Conversely, our work also contributes to the theory of anthropomorphism by translating its three determinants into factors for designing anthropomorphic interactions with CAs.

Further, our empirical evaluation provides important insight into the complex mechanisms that lead humans to anthropomorphize CAs. For example, we find that increasing the number of anthropomorphic design elements does not necessarily result in higher perceptions of anthropomorphism. For specific cues, we even find a negative effect on perceptions of anthropomorphism. We discuss the potential reasons for these counterintuitive findings in detail and illustrate their relevance for research and practice. Our analysis also shows that CAs are not equally anthropomorphized across tasks and individuals. The data analysis broadly supports the deduced theoretical framework but also provides opportunities to further pursue questions of anthropomorphic design in human-CA interactions.

Theoretical Background

In developing our theoretical framework, we mainly draw on two research streams. First, our work relates to studies investigating design-related questions on CAs and related technological artifacts. Second, knowledge from psychological research on anthropomorphism provides the theoretical foundation for our framework. This section provides the necessary foundation to differentiate between CAs and related technological artifacts before we introduce the theory of anthropomorphism.

Conversational Agents and Related Technological Artifacts

The mode of communication with a CA can be text- or voice-based (Gnewuch et al., 2017). This paper focuses on text-based CAs because of their prevalence in real life (Araujo, 2018). CA research is not isolated; it is connected to studies on related technological artifacts. In our view, the most important of these artifacts are embodied conversational agents and avatars. For a well-defined understanding of CAs, we need to emphasize the similarities and differences between these technological artifacts. Table 1 and Figure 1 provide an overview of these artifacts and how they are related.

Table 1. Distinction between Conversational Agents and Related Technological Artifacts

| Name | Definition | IS studies |
|---|---|---|
| Conversational Agent | "systems that mimic human-to-human communication using natural language processing, machine learning, and/or artificial intelligence" (Schuetzler et al., 2018, p. 94) | Gnewuch et al., 2017, 2018; Schuetzler et al., 2014, 2018 |
| Embodied Conversational Agent | "virtual, three-dimensional human characters that are displayed on computer screens … [and] interact with people through natural speech" (Derrick et al., 2011, p. 72) | Derrick et al., 2011; Nunamaker et al., 2011; Pickard et al., 2017 |
| Avatar | "computer generated visual representations of people or bots" (Nowak & Rauh, 2005, p. 153) | Davis et al., 2009; Riedl et al., 2014; Suh et al., 2011 |

Figure 1. Venn Diagram of Conversational Agents and Related Technological Artifacts

Embodied conversational agent (ECA), initially described by Cassell, Bickmore, and others in the early 2000s (Bickmore & Cassell, 2005; Cassell & Bickmore, 2000; Cassell et al., 1999), refers to a virtually represented human that can interact with users through body gestures, facial expressions, and human speech. Inspired by the "metaphor of face-to-face conversation" (Cassell et al., 1999, p. 520), the objective of ECAs is to provide a complete representation of a human interaction partner. Accordingly, a defining characteristic of ECAs is their ability to interact with users via natural language dialogue. This common characteristic of CAs and ECAs explains why the body of ECA research is highly relevant to our research. Notably, however, ECA research specifically emphasizes the physical, three-dimensional appearance that enables the agent to mirror human interactions through kinetic movements, such as head nods, smiles, and postures (Derrick, Jenkins, & Nunamaker, 2011; Nunamaker et al., 2011). Because of their visual embodiment, these agents require a virtual space, which usually is available on websites (Ben Mimoun & Poncin, 2015), in virtual reality (Lee & Chen, 2012), or on specifically developed information technology artifacts whose single purpose is to enable the user-agent interaction (Derrick et al., 2011; Nunamaker et al., 2011). By contrast, CAs are designed without such a dynamic visual representation and are typically available through social media, messaging applications, or chat windows (Araujo, 2018). Often, CAs are also available via chat windows on company websites as a touchpoint for customer service inquiries (Pfeiffer, 2020; Verhagen et al., 2014). To sum up, CAs, unlike ECAs, either have no visual representation or have only a static profile image. Thus, CAs are disembodied and cannot rely on bodily behavior when interacting with users (Araujo, 2018). Therefore, extant research on the anthropomorphic design of ECAs is only partly applicable to CA design.

Avatars refer to computer-generated graphical representations of humans in virtual environments (Holzwarth, Janiszewski, & Neumann, 2006). Such representations can take the form of static images (Riedl et al., 2014) or fully animated characters (Davis et al., 2009; K.-S. Suh, Kim, & Suh, 2011). Unlike ECAs, avatars are not defined by the ability to communicate with users via natural speech (Nunamaker et al., 2011). Static avatar images can be used as a profile image for CAs (Schuetzler et al., 2018), while fully animated avatars can be used in ECAs (Nunamaker et al., 2011).


Theory of Anthropomorphism

Anthropomorphism refers to the psychological phenomenon of "attributing human characteristics to the nonhuman" (Guthrie, 1993, p. 62). In essence, anthropomorphizing a nonhuman object means assigning unobservable and uniquely human mental capacities, such as intentionality and emotions, to the object (Epley et al., 2008b; Gray, Gray, & Wegner, 2007). Research in social cognition also refers to anthropomorphism as mind perception (Waytz et al., 2010b). In introducing the theory of anthropomorphism, Epley, Waytz, and Cacioppo (2007, 2008) describe anthropomorphism as the outcome of an inductive inference process. Because knowledge of human characteristics is highly accessible to human beings, their reasoning about unknown nonhuman objects spontaneously draws on this accessible knowledge. Therefore, anthropomorphism is a plausible and reasonable way of making sense of things and events (Guthrie, 1993). Consequently, anthropomorphizing a nonhuman object testifies not to erroneous perception, but rather to the human mental ability to assess the world. Further, anthropomorphizing also happens when a person recognizes an object as nonhuman but sees human characteristics in some of its appearance or behavior (Guthrie, 1993).


Figure 2. Process of Anthropomorphic Perception

Epley and colleagues differentiate between the cognitive, motivational, and dispositional determinants of anthropomorphism (Epley et al., 2007, 2008; Waytz et al., 2010a). In assessing an unknown nonhuman object, eliciting agent knowledge represents the cognitive determinant of anthropomorphism (Epley et al., 2007). If the perceived object seems similar to the self or to other human beings, it becomes more likely that the perceiver will activate highly accessible knowledge about humans (i.e., elicited agent knowledge) to assess the object (Epley et al., 2007; Waytz et al., 2010c). Therefore, the target object's characteristics influence the probability of anthropomorphism through the cognitive mechanism of elicited agent knowledge (Waytz et al., 2010b). Specifically, the more the perceived object resembles a human in terms of observable features and behavior, the higher the probability that humans will anthropomorphize it (Epley et al., 2007). Perceiving anthropomorphic design cues is a form of cognitive information processing. Previous studies have found that the human-like movement of geometric objects (Tremoulet & Feldman, 2000) and robots (Kiesler et al., 2008) stimulates perceptions of anthropomorphism.

The theory of anthropomorphism identifies two motivational determinants—sociality and effectance motivation—that influence the probability of anthropomorphizing a nonhuman object in different situations (Epley et al., 2007). Sociality motivation relates to humans' basic need to be socially related to other humans. In situations that stimulate a desire to be socially connected or that increase feelings of loneliness, people are more likely to anthropomorphize (Epley, Akalis, Waytz, & Cacioppo, 2008a; Epley et al., 2008b). Effectance motivation relates to humans' basic need to understand and control their environment. Where a person's knowledge of a novel nonhuman agent (e.g., a CA) is limited, but the person has to rely on the agent for a specific task, anthropomorphizing the agent in order to reduce perceptions of uncertainty and increase feelings of confidence is particularly likely (Epley et al., 2007). In contrast to the cognitive determinant, these motivational mechanisms are best described as drive states triggered by feelings of deprivation in terms of social connection (sociality motivation) or control (effectance motivation) (Epley et al., 2007). The motivational and cognitive influences on anthropomorphism are independent, as they are based on separate psychological mechanisms (Epley et al., 2007).

Lastly, individual dispositions have been identified as an important determinant of anthropomorphism (Waytz et al., 2010a). This determinant was not explicitly considered in Epley et al.'s (2007) initial description of the theory of anthropomorphism, but Waytz, Epley, and Cacioppo (2010a) subsequently added it to complement the theory. The manifold sources of different individual dispositions include education, culture, previous experiences, cognitive reasoning style, and norms (Epley et al., 2007). For example, people from a culture associated with high uncertainty avoidance (e.g., Japan) are more prone to anthropomorphize novel agents than people from a culture associated with low uncertainty avoidance (e.g., the USA) (de Waal, 2003; Epley et al., 2007). Similarly, individuals who, when confronted with new situations, tend to engage in effortful cognitive processing (i.e., those with a high need for cognition) are less prone to anthropomorphize than individuals with a low need for cognition (Epley et al., 2007).

To summarize, the theory of anthropomorphism provides us with relevant knowledge of anthropomorphism determinants related to either the object being perceived (cognitive) or the perceiver (motivational and dispositional). Figure 2 gives an overview of the determinants involved in the anthropomorphizing process.

Theory of Anthropomorphism and User-centered Technology Design

We argue that the three determinants of anthropomorphism introduced above have to be considered in the design of anthropomorphic HCI. To interpret this psychological knowledge from an HCI perspective, we use Benyon's (2014) framework for user-centered HCI design involving people, activities, contexts, and technologies (PACT), which map onto the three identified determinants. The technology factor of the PACT framework, which concerns designing a technology with observable features and interactive behavior, maps onto the cognitive determinant of anthropomorphism. The theory of anthropomorphism states that eliciting anthropomorphic knowledge is stimulated if the object being perceived has a human-like appearance and behavior (Waytz et al., 2010c). Accordingly, technology design characteristics that simulate human features prompt anthropomorphism. The PACT factors activities and contexts highlight that technology design needs to correspond with the specific tasks users need a technology to perform in a given context. These factors best relate to the two motivational determinants (i.e., sociality and effectance) (Epley et al., 2008a) because for some tasks, but not others, anthropomorphizing an interaction partner can respond to the user's motivational needs. The people factor of the PACT framework advises that we consider individual differences related to psychological and social variables. This factor corresponds with the role of dispositional anthropomorphism (Waytz et al., 2010a), that is, individuals' tendency to anthropomorphize. This mapping of the PACT framework onto the three determinants of anthropomorphism provides a basis for developing our design framework.

Developing a Framework for the Design of Anthropomorphic Conversational Agents

Figure 3 illustrates our theoretical framework for designing and researching anthropomorphic CAs. The three factors are derived from the three determinants of anthropomorphism and are structured according to the PACT framework. In the following, we integrate relevant literature from related studies to detail how each factor relates to perceptions of anthropomorphism in CA interactions. Based on this literature analysis, we formulate our research hypotheses.

Figure 3. Framework of Anthropomorphic Conversational Agent Design
[The figure shows three factor groups influencing perceived anthropomorphism: technology (human identity, verbal, and nonverbal design; cognitive determinant), task (human-like vs. computer-like; motivational determinant), and human (dispositional anthropomorphism; dispositional determinant).]

Technology: Anthropomorphic Design of CAs

To understand how the design of CAs can stimulate anthropomorphism, we reviewed studies that focus on how human-like computer interfaces are designed or perceived. For the review, we selected studies that focus on design elements clearly defined as anthropomorphic. Appendix A gives details of our search and selection criteria.

The theory of anthropomorphism predicts a positive relationship between the degree to which nonhuman agents' observable features appear human-like and the perception of anthropomorphism (Epley et al., 2007). Human resemblance can be manifested in the nonhuman entity's appearance or behavior (Waytz et al., 2010b). In line with the theory of anthropomorphism, we distinguish between observable features and behavior. The observable features of a target object are stable characteristics that resemble humanness (Epley et al., 2007). We categorize identified design cues that convey such stable information of humanness as the design dimension of human identity. Observable behavior, by contrast, is more dynamic and interaction-related (Epley et al., 2007). Such observable behavior of CAs is limited to their communication behavior. Following communication theory (Mehrabian, 1972), we distinguish between the two dimensions of verbal and nonverbal human-like communication behavior. These two behavioral dimensions play a special role, as they convey a CA's intelligence.

Psychological research (Brody, 2004; Salovey & Mayer, 1990) identifies two forms of intelligence: cognitive and emotional intelligence. The former represents the performance-related ability to solve problems in different contexts (Brody, 2004), while the latter represents the ability to understand and react to others' feelings and emotions (Salovey & Mayer, 1990). Intelligence is generally considered a human capacity and thus is important in designing anthropomorphic CAs. For this reason, we relate specific verbal and nonverbal cues to these notions of intelligence whenever appropriate. In all, we group the identified anthropomorphic design cues into three design dimensions: human identity, verbal, and nonverbal cues. Tables 2 and 3 give an overview of the identified cues and reviewed studies, respectively.


Table 2. Identified Anthropomorphic Cues

| Dimension | Anthropomorphic Cue | Example | Reference |
|---|---|---|---|
| Human Identity | Human-like visual representation | Images of humanoid robots, human-like avatars, real human faces | Berry et al., 2005; Gong, 2008; Qiu & Benbasat, 2009; Riedl et al., 2011; Yuan et al., 2016 |
| Human Identity | Demographic information | Gender, age, name, ethnicity | Cowell & Stanney, 2005; Qiu & Benbasat, 2010; Benbasat et al., 2010; Nunamaker et al., 2011 |
| Verbal | Social dialogue | Greeting rituals ("Hi, how are you?"), anecdotes, non-task-related questions ("How is the weather in Berlin?") | Bickmore & Cassell, 2001; Bickmore & Picard, 2005; Chattaraman et al., 2019 |
| Verbal | Emotional expressions | Apologies ("I am sorry"), congratulations ("Well done!"), concerns ("I am worried") | Al-Natour, Benbasat, & Cenfetelli, 2010; De Visser et al., 2017 |
| Verbal | Verbal style | Self-references ("I", "me"), active voice, variability of syntax and words | Chattaraman et al., 2019; Sah & Peng, 2015 |
| Verbal | Context-sensitive responses | Adjust responses to individual user input | Knijnenburg & Willemsen, 2016; Schuetzler et al., 2014 |
| Nonverbal | Emoticons | 😊, 😉, 😕, 😣, 👍 | Derks, Bos, & Grumbkow, 2008; Walther & D'Addario, 2001 |
| Nonverbal | Temporal cues | Delayed responses to signal writing or thinking | Feine, Gnewuch, Morana, & Maedche, 2019; Gnewuch, Morana, Adam, & Maedche, 2018 |
| Nonverbal | Turn-taking gestures | Blinking dots, "is typing" indicator | De Visser et al., 2016; Gnewuch, Morana, Adam, & Maedche, 2018 |

The human identity dimension includes stable cues that typically help to identify a human being in a computer-mediated interactional context (Donath, 1998; Lampe, Ellison, & Steinfield, 2007). Human-like visual representations and demographic information constitute this dimension (see Table 2). The anthropomorphic cue most used in the reviewed studies is human-like visual representation. The theory of anthropomorphism argues that "the extent to which a nonhuman agent's observable features look humanlike" (Epley et al., 2007, p. 869) increases perceptions of anthropomorphism in the perceiver. More precisely, nonhuman agents' morphological similarity stimulates the cognitive determinant of anthropomorphism: the perceiver is more likely to use their knowledge of human agents to assess nonhuman agents that appear human-like.
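To make this dimension concrete, the following minimal sketch shows one way human identity cues could be attached to a text-based CA as a static persona configuration. It is illustrative only: the class, field names, and example values are hypothetical additions for this sketch, not an implementation from the reviewed studies.

```python
# Hypothetical illustration (not from the reviewed studies): the human identity
# dimension as a static persona configuration for a text-based CA.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class HumanIdentity:
    name: Optional[str] = None           # demographic cue: human name
    gender: Optional[str] = None         # demographic cue
    age: Optional[int] = None            # demographic cue
    avatar_image: Optional[str] = None   # human-like visual representation (static profile image)

    def has_cues(self) -> bool:
        """True if at least one human identity cue is present."""
        return any(v is not None for v in (self.name, self.gender, self.age, self.avatar_image))

# Treatment condition with human identity cues vs. a cue-free control:
persona = HumanIdentity(name="Anna", gender="female", age=29, avatar_image="anna.png")
control = HumanIdentity()
```

Because these cues are static, they can be varied independently of what the CA says, which is exactly what isolating their effect in an experiment requires.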

The reviewed studies investigating human identity cues as independent variables and anthropomorphism as a dependent variable (see Table 3) provide insight into these cues' actual effects. For example, Araujo (2018) gives the experimental CA a human name in the anthropomorphic condition, which produces higher perceptions of anthropomorphism than the nonanthropomorphic version of the CA. However, the anthropomorphic CA was simultaneously enhanced with human-like verbal cues, so it is not clear whether the adopted human identity, the verbal cues, or their interaction caused participants' anthropomorphic response. Other studies have the same issue of not differentiating the effects of different cue types (see De Visser et al., 2016; Kiesler et al., 2008; Waytz et al., 2014). However, three of the reviewed studies investigated human identity cues independently of other cue types, with mixed findings regarding anthropomorphic responses (Niu et al., 2018; Sah & Peng, 2015; Yuan & Dennis, 2017). While Niu et al. (2018) and Yuan and Dennis (2017) find a significant positive effect of human identity cues on perceived anthropomorphism, Sah and Peng (2015) do not find such a significant effect of the same cues. Still, one should note that these three studies do not focus on CAs; they investigate the effect of anthropomorphic design on different technological artifacts (see Table 3). Sah and Peng (2015), for instance, investigate the anthropomorphic design of a static health website, which, regarding the interactional experience, is not comparable to a CA. Therefore, these findings' applicability to CA design is limited.

Overall, previous studies provide initial justification to argue that human identity cues stimulate perceived anthropomorphism in CA users. Because human identity cues increase the human-like appearance of CAs, we follow Epley et al. (2007) in arguing that this morphological similarity eases the use of anthropomorphic knowledge when people interact with CAs. Hence, we formulate the following hypothesis:

Hypothesis 1a: CA designs with human identity cues evoke higher perceptions of anthropomorphism than designs without human identity cues.

Cues that emulate human-like communicative behavior have been categorized into the two dimensions of verbal and nonverbal cues. Following Isbister and Nass (2000), the dimension of verbal cues includes the choice of words and sentences as well as the way in which a speaker refers to herself or himself ("I") and others. Correspondingly, manipulating verbal cues does not affect a conversation's task-oriented content, but does affect how the conversation unfolds. We identify several verbal strategies that make agents appear more human-like. One such strategy is the use of social dialogue. Social dialogue refers to non-task-oriented talk that seeks to establish an emotional relationship in a conversation—commonly known as small talk (Bickmore & Cassell, 2005). A related verbal strategy is the use of emotional expressions (De Visser et al., 2017). In addition, context-sensitive use of emotional expressions can signal emotional intelligence (Salovey & Mayer, 1990). While social dialogue and emotional expressions represent verbal cues that are added to the task-oriented content, we also identified several verbal strategies that change only the verbal style of a computer agent. Human-like verbal style was increased by introducing self-reference (Sah & Peng, 2015), active voice (Hess et al., 2009; Sah & Peng, 2015), coherent responses (Burgoon et al., 2016), and variability of words and syntax (Schuetzler et al., 2014). Finally, varied and context-sensitive responses were used to increase the human-likeness of an agent's verbal behavior (Knijnenburg & Willemsen, 2016; Schuetzler et al., 2014). According to Laurel (1997), such responsive behavior is characteristic of human cognitive intelligence because it shows the agent can understand a context-sensitive problem and respond appropriately. In sum, research has identified several verbal strategies to enhance CAs' human-likeness (see Table 2).
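As an illustration, the following sketch layers these verbal cues onto an otherwise unchanged task-oriented answer. The phrasings, function names, and the simple turn flags are hypothetical examples introduced here; the point is only that verbal cues wrap the task content without altering it, mirroring the distinction drawn above.

```python
# Hypothetical illustration (not from the reviewed studies): layering verbal
# cues from Table 2 onto the same task-oriented content.
import random

GREETINGS = ["Hi, how are you?", "Hello, nice to hear from you!"]   # social dialogue
APOLOGIES = ["I am sorry to hear that.", "Oh no, I apologize!"]     # emotional expressions
ACKNOWLEDGEMENTS = ["Sure, I can help with that.",                  # variability of syntax,
                    "Of course, let me look into that for you."]    # self-reference ("I")

def verbal_cue_response(task_answer: str, user_is_upset: bool, first_turn: bool) -> str:
    """Wrap identical task-oriented content in human-like verbal behavior."""
    parts = []
    if first_turn:
        parts.append(random.choice(GREETINGS))       # social dialogue (small talk)
    if user_is_upset:
        parts.append(random.choice(APOLOGIES))       # context-sensitive emotional expression
    parts.append(random.choice(ACKNOWLEDGEMENTS))    # varied, active-voice phrasing
    parts.append(task_answer)                        # task content itself is unchanged
    return " ".join(parts)

def plain_response(task_answer: str) -> str:
    """Control condition: static, impersonal wording without verbal cues."""
    return "Request processed. " + task_answer
```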

The dimension of nonverbal cues can be defined as "all potentially informative behaviors that are not purely linguistic in content" (Hall & Knapp, 2013, p. 6). This dimension includes visible cues such as facial expressions; head, hand, and body movements; and interpersonal gaze. Several studies enable ECAs to perform nonverbal behaviors, such as hand gestures, eye contact, and facial expressions (Berry, Butler, & de Rosis, 2005; Bickmore & Picard, 2005; Bickmore & Cassell, 2005; Cowell & Stanney, 2005; Nunamaker et al., 2011), which do not apply to CAs (see section Conversational Agents and Related Technological Artifacts). However, there are ways of adapting nonverbal behavior to CAs' design space. For example, emoticons are nonverbal signs that convey emotional expressions in text-based computer-mediated communication (Derks, Bos, & Grumbkow, 2008; Walther & D'Addario, 2001). Extant IS research on computer-mediated text-based human-to-human communication has established the role of emoticons in eliciting emotional and social responses (Brown, Fuller, & Thatcher, 2016; Wang, Zhao, Qiu, & Zhu, 2014). Enabling a CA to use these nonverbal cues to react appropriately to user input also conveys emotional intelligence (Salovey & Mayer, 1990). Another nonverbal code system in computer-mediated communication is temporal cues (Walther & Tidwell, 1995): the time it takes to receive a reply from a CA represents another instrument of anthropomorphic CA design (Feine, Gnewuch, Morana, & Maedche, 2019; Gnewuch, Morana, Adam, & Maedche, 2018). Finally, nonverbal turn-taking cues can also be implemented in text-based CAs (De Visser et al., 2016). See Table 2 for examples of the identified nonverbal cues.
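The sketch below shows how these three nonverbal cues could be wired into a CA's reply step. `send` and `show_typing_indicator` stand in for whatever the hosting chat platform provides, and the length-based delay is a hypothetical stand-in for the dynamic response delays studied by Gnewuch et al. (2018); none of this is taken from the reviewed implementations.

```python
# Hypothetical illustration (not from the reviewed studies): adding the three
# nonverbal cue types (turn-taking, temporal, emoticon) to a CA's reply step.
import time
from typing import Callable

def send_with_nonverbal_cues(send: Callable[[str], None],
                             show_typing_indicator: Callable[[bool], None],
                             reply: str,
                             positive_sentiment: bool) -> None:
    show_typing_indicator(True)               # turn-taking cue: "is typing" indicator
    time.sleep(min(0.05 * len(reply), 5.0))   # temporal cue: delay grows with reply length
    show_typing_indicator(False)
    if positive_sentiment:
        reply += " 😊"                        # emoticon as a nonverbal emotional expression
    send(reply)
```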

The human-likeness of CAs' observable behavior can be enhanced by the above-mentioned verbal and nonverbal cues. According to the theory of anthropomorphism, human-like observable behavior also stimulates the cognitive determinant elicited agent knowledge (Epley et al., 2007). This means that perceivers are more likely to draw on their knowledge of humans when assessing nonhuman interaction partners that exhibit typical human behavior. Analogous to the human identity cues dimension, several studies have investigated the effect of verbal and/or nonverbal cues on anthropomorphism (see Table 3). For example, De Visser et al. (2016) allowed an experimental agent to give participants empathic feedback. In line with the theory of anthropomorphism, they find that participants anthropomorphize the experimental interaction partner more readily when the latter uses such verbal cues. However, their computer agent was simultaneously enhanced with human identity cues. Thus, once again, it is unclear which cue type caused the positive effect on anthropomorphism. The same ambiguity applies to the studies by Araujo (2018), Kiesler et al. (2008), and Waytz et al. (2014). By contrast, Schuetzler et al. (2014) focus solely on verbal cues. More precisely, their anthropomorphic CA was able to respond in varying ways to user input, while the nonanthropomorphic CA always used the same static responses. The authors find that the anthropomorphic CA produced higher perceptions of anthropomorphism. Relatedly, Sah and Peng (2015) reveal that using personalized language on a website positively influences perceptions of anthropomorphism. As the theory of anthropomorphism predicts, these two studies show that verbal cues can have a positive effect on perceptions of anthropomorphism. None of the reviewed studies provides evidence that nonverbal cues—independently of other cues—positively affect perceived anthropomorphism. However, similar to verbal cues, nonverbal cues make a CA's communicative behavior more human-like. Epley et al. (2007) argue that the human-like behavior—similar to the human-like appearance—of a nonhuman agent can stimulate anthropomorphism through the cognitive determinant elicited agent knowledge. Therefore, we argue that both forms of communicative behavior increase the perceiver's use of anthropomorphic knowledge in interaction with CAs. Accordingly, we formulate the following hypotheses:

Hypothesis 1b: CA designs with verbal cues evoke higher perceptions of anthropomorphism than designs without verbal cues.

Hypothesis 1c: CA designs with nonverbal cues evoke higher perceptions of anthropomorphism than designs without nonverbal cues.

Table 3. Overview of Reviewed Studies

| Reference | Area | Anthropomorphized Object | Dependent Variable(s) | Verbal Cues | Nonverbal Cues | Human Identity Cues |
|---|---|---|---|---|---|---|
| Al-Natour et al. (2010) | IS | Recommendation Agent | Trust | x | | |
| Araujo (2018) | HCI | CA | Anthropomorphism, Social Presence, Company Perception | x | | x |
| Benbasat et al. (2010) | IS | Recommendation Agent | Social Presence | | | x |
| Bickmore & Cassell (2001) | HCI | ECA | Likability, Helpfulness | x | | |
| Bickmore & Picard (2005) | HCI | ECA | Likability, Trust | x | x | x |
| Burgoon et al. (1999) | HCI | ECA | Interaction Involvement, Mutuality, Social Judgment, Decision Quality | x | | x |
| Burgoon et al. (2016) | HCI | ECA | Communication Process, Social Judgment, Influence | x | | x |
| Cowell & Stanney (2005) | HCI | ECA | Likability, Trust | | x | x |
| Chattaraman et al. (2019) | HCI | CA | Trust, Usefulness, Information Overload, Ease of Use, Interactivity | x | | |
| Cyr et al. (2009) | IS | Website | Social Presence, Trust, Image Appeal | | | x |
| Diederich, Janßen-Müller, Brendel, & Morana (2019) | IS | CA | Humanness, Social Presence, Empathy, Satisfaction | x | x | x |
| Diederich, Lichtenberg, Brendel, & Trang (2019) | IS | CA | Humanness, Persuasiveness | x | x | x |
| De Visser et al. (2016) | PSY | Avatar | Trust, Anthropomorphism | x | x | x |
| Gong (2008) | HCI | Recommendation Agent | Social Response | | | x |
| Gnewuch et al. (2018) | HCI | CA | Humanness, Social Presence, Satisfaction | x | x | |
| Hess et al. (2009) | IS | Recommendation Agent | Social Presence | x | | x |
| Kiesler et al. (2008) | HCI | ECA | Information Disclosure, Social Influence, Anthropomorphism | x | x | x |
| Knijnenburg & Willemsen (2016) | HCI | CA | Social Response, Performance | x | | x |
| Niu et al. (2018) | HCI | Driving Agent | Anthropomorphism, Trust, Liking | | | x |
| Nowak & Biocca (2003) | HCI | Avatar | Social Presence | | | x |
| Nunamaker et al. (2011) | IS | ECA | Trust, Expertise, Likability | | x | x |
| Pickard et al. (2017) | HCI | ECA | Similarity, Likability, Information Disclosure | | | x |
| Qiu & Benbasat (2009) | IS | Recommendation Agent | Social Presence | | | x |
| Qiu & Benbasat (2010) | IS | Recommendation Agent | Social Presence, Enjoyment, Usefulness | | | x |
| Riedl et al. (2014) | IS | Avatar | Mentalizing, Trusting Behavior | | | x |
| Sah & Peng (2015) | HCI | Website | Anthropomorphism | x | | x |
| Schuetzler et al. (2014) | IS | CA | Anthropomorphism | x | | |
| Schuetzler et al. (2018) | IS | CA | Socially Desirable Behavior | x | | x |
| Sproull et al. (1996) | HCI | CA | Socially Desirable Behavior | | | x |
| Verhagen et al. (2014) | HCI | ECA | Social Presence, Personalization, Satisfaction | x | | x |
| Von der Pütten et al. (2010) | HCI | ECA | Social Presence | | x | x |
| Waytz, Heafner, & Epley (2014) | PSY | Driving Agent | Anthropomorphism, Trust | x | | x |
| Wölfl, Feste, & Peters (2019) | IS | Webshop | Anthropomorphism | x | | |
| Yuan et al. (2016) | IS | Product in Webshop | Willingness to Pay | | | x |
| Yuan & Dennis (2017) | IS | Product in Webshop | Anthropomorphism, Willingness to Pay | | | x |

Note: IS = Information Systems, HCI = Human-computer Interaction, PSY = Psychology; ECA = Embodied Conversational Agent, CA = Conversational Agent.

Interaction of Anthropomorphic Design Dimensions

Although several studies have considered more than one anthropomorphic design dimension (see Table 3), their assumptions about interaction effects are only implicit. For example, none of the reviewed studies relies on nonverbal cues as the sole cue of anthropomorphic design. Rather, such cues are combined with verbal or human identity cues (Bickmore & Picard, 2005; Cowell & Stanney, 2005; Nunamaker et al., 2011). Similarly, other studies intuitively combine verbal and human identity cues in their anthropomorphic designs (Araujo, 2018; Knijnenburg & Willemsen, 2016). Overall, researchers' intuitions seem to suggest that combining several design dimensions has a positive effect on users' anthropomorphism perceptions. Still, the theoretical basis for the design dimensions' potential interaction effect remains unclear.

We find one theoretical perspective on this relationship in the uncanny valley effect (Mori, 1970). Originally described in robotics research, this effect captures how subtle imperfections in the anthropomorphic appearance of nonhuman agents can cause users to focus more on the apparently nonhuman details (MacDorman et al., 2009). Mori (1970) illustrates this effect with a curve that describes how an increase in an object's observable human-likeness relates to perceived likability. Likability in that context denotes the "feeling of being in the presence of another human being—the moment when you feel in synchrony with someone other than yourself and experience a 'meeting of minds'" (Hsu, 2012), which is closely related to anthropomorphism. As the object becomes more human-like, its likability increases until it drops at a relatively high level of human-likeness, producing the typical valley pattern. Mori (1970) likens this effect to a prosthetic hand that is perceived to be human until one touches it and realizes it is cold and unhuman. Researchers repeatedly find that a mismatch between the nonhuman and human-like characteristics of a target object causes ambiguity, which leads to the object being assessed as less human-like, thus causing an uncanny response (Burleigh, Schoenherr, & Lacroix, 2013; MacDorman et al., 2009; Saygin, Chaminade, Ishiguro, Driver, & Frith, 2012; Seyama & Nagayama, 2007; Wiese & Weis, 2020). Relatedly, findings show that less human-like interaction partners can stimulate higher likability than nearly perfect human replicas because they do not cause a perceptional conflict (Mathur & Reichling, 2016). In sum, this literature argues that successful anthropomorphic designs are not simply those with a higher number of anthropomorphic cues; rather, successful designs ensure consistency of the overall human-like manifestation.

Anthropological and communication research offer another theoretical perspective on how anthropomorphic design dimensions are related. Communication research considers human nonverbal and verbal communication as an inseparable unit in which verbal communication is supported by nonverbal cues (Argyle, 1969; Knapp et al., 2014). Recipients find a mismatch between nonverbal and verbal cues disturbing and confusing (Argyle, 1969). Moreover, human nonverbal communication emerged from primate communication, while verbal communication is a uniquely human aspect of human cognition (Burling et al., 1993; Knapp, Hall, & Horgan, 2014). Therefore, combining nonverbal cues with human identity cues or verbal cues might be required to stimulate anthropomorphism. Based on these considerations, using nonverbal cues without other matching cues possibly produces a disturbing image of a human-like interaction partner.

The suggestion that target objects highly similar to real human beings activate the cognitive determinant elicited agent knowledge (Epley et al., 2007) contradicts the literature on the uncanny valley effect, according to which increasing the human-likeness of the target object is not necessarily the best approach. Therefore, designers should ensure a consistent balance in their human-like designs. Further, anthropological and communication research argues that nonverbal cues need to be accompanied by other cues to create a human-like design. In sum, the discussed theoretical perspectives support the idea that the effects of different anthropomorphic design dimensions are interrelated. However, conflicting perspectives on the benefits of combining all or only some anthropomorphic design dimensions preclude the possibility of formulating a clear prediction on potential interaction effects. To address this ambiguity, we state the following exploratory research question:

Exploratory RQ: Are there interaction effects among anthropomorphic design dimensions?
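Answering this exploratory question empirically requires crossing the three design dimensions so that each cue appears both alone and in combination with the others. The short sketch below, a hypothetical illustration rather than the paper's actual experimental code, enumerates the resulting design cells.

```python
# Hypothetical illustration (not the paper's experimental code): enumerating
# the 2 x 2 x 2 combinations of the three anthropomorphic design dimensions.
from itertools import product

CUE_DIMENSIONS = ("human_identity", "verbal", "nonverbal")

conditions = [dict(zip(CUE_DIMENSIONS, levels))
              for levels in product((False, True), repeat=3)]

# Eight cells, from the cue-free control (all False) to the fully
# anthropomorphic design (all True). Contrasting, e.g., nonverbal cues alone
# against nonverbal cues combined with verbal or human identity cues is what
# isolates the interaction terms among the dimensions.
assert len(conditions) == 8
```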

Task: Task-specific Effects on Users’ Motivation to Anthropomorphize CAs

The theory of anthropomorphism postulates that humans' motivation to anthropomorphize nonhuman objects varies across tasks and situations. More specifically, depending on the particular task, humans need to be socially related to a nonhuman interaction partner (sociality motivation) and/or to understand and control it (effectance motivation) (Epley et al., 2008b). Perceptions of anthropomorphism, in turn, satisfy both sociality and effectance motivations. Empirical evidence supports the relationship between these two motivational forces and anthropomorphism. For sociality motivation, both Epley et al. (2008a) and Eyssel and Reich (2013) manipulated feelings of loneliness, and in turn sociality motivation, in an experimental study. Participants with high sociality motivation showed significantly higher perceptions of anthropomorphism towards several objects (Epley et al., 2008a) and robots (Eyssel & Reich, 2013) than those who were less sociality motivated. For effectance motivation, social cognition experiments have repeatedly used different types of tasks to evoke anthropomorphism (for a review, see Mar, 2011). For example, Fletcher et al. (1995) evoked anthropomorphism by asking participants to read stories that require anthropomorphic imagination to be properly understood. Rilling, Dagenais, Goldsmith, Glenn, and Pagnoni (2008) asked participants in a neuro-imaging experiment to engage in a social task that required anthropomorphic understanding for successful completion. Their results show that brain areas associated with anthropomorphic perception are more active during tasks that require anthropomorphic sense-making to complete the task effectively. This body of experimental research demonstrates that sociality and effectance motivation can be activated in a specific task context to stimulate anthropomorphic perceptions. We posit that this also applies to the design of anthropomorphic CA interactions because CAs are utilized in a variety of tasks that differ concerning the extent to which sociality and effectance motivation are activated.

In 1994, Don Norman anticipated that computer agents would "take over human tasks, and [would] interact with people in human-like ways" (Norman, 1994, p. 68). He suggested that the tasks these agents perform for human beings would range from providing assistance and guidance (e.g., selecting and booking a hotel) to mastering tasks humans could not carry out without a computer agent. Indeed, today's ecosystem of CAs covers a wide variety of tasks and use contexts (see Table 4 for an overview). Combining Norman's anticipation with our evaluation of existing CAs, we conclude that CA tasks can be differentiated along a continuum ranging from human-like to computer-like. Human-like tasks include ones that humans typically complete in interaction with a human partner, such as learning a new sport. Such tasks require humans to engage in anthropomorphic perceptions to make sense of the interaction (e.g., empathy after failure). Computer-like tasks include ones that humans typically perform with computer assistance, such as online banking transfers. Such tasks do not require that humans engage in anthropomorphic perceptions to make sense of the interaction.

Table 4. Overview of Application Domains for Conversational Agents

| Conversational Agent Use Context | Example CA Tasks | Real-world Examples |
|---|---|---|
| E-commerce | Understand customer needs, provide advice on products and services. | eBay ShopBot, Whole Foods Market, Lidl UK, Sephora, SnapTravel, Lufthansa, HLX.com |
| E-banking | Conduct bank transfers, provide account information. | Mastercard, k2.pl, American Express |
| Customer Service | Answer questions about contract details, handle complaints. | Coca-Cola US, Windstream, British Gas, Geico, KLM, Mercedes, ARAG |
| Tutoring/Teaching | Teach new lessons (e.g., language, mathematics), motivate to learn, and keep engaged. | Duolingo, Grapheme, Ovoto |
| Health/Fitness Coaching | Listen to users' health/fitness issues and goals, provide guidance, motivate. | Woebot.io, Babylon Health, Insomnobot-3000, HelloAva, Health Tap, Atlas |
| Internet of Things (IoT) | Control and adjust settings of remote devices, provide information on the status of devices. | Thington, action.ai, Netatmo, Neato |
| Business Operations | Conduct transactions in or retrieve data from enterprise systems (e.g., ERP, CRM). | Kore.ai, Growthbot by HubSpot, Unit4, Pegg |
| Business Services | Interview job candidates, provide assistance for employees (e.g., pay slips, leave requests, training). | Allianz Careers, ABIe, HubBot by Telekom, Sixt Jobbot, Kore.ai |

Lankton and McKnight (2015) provide a similar distinction regarding the perceived human-

likeness of technological artifacts. According to them, technologies that substitute human

experts are perceived as more human-like than technologies that do not substitute human

experts. Following this perspective, and based on the diversity of tasks CAs can perform, we

argue that the distinction between human-likeness and computer-likeness should be made

on the task level. Moreover, classifying a task as one of the two types can change over time.

As automation has steadily developed (see, e.g., Brynjolfsson & McAfee, 2016; Frey & Osborne, 2017), tasks that were once classified as human-like have gradually shifted and are now seen as computer-like, as in banking transfers and purchase recommendations. Further, people tend to

classify tasks differently along the continuum, based on their individual experiences.

According to Epley et al. (2007), the motivational determinant sociality motivation stimulates

anthropomorphism in situations in which individuals feel the drive to be socially connected to

others. Previous studies have found that situationally activated sociality motivation stimulates

perceptions of anthropomorphism regarding different target objects such as robots (Eyssel &

Reich, 2013; Wang, 2017) and consumer products (Chen, Sengupta, & Adaval, 2018). Re-

search has further shown that interacting with an ECA satisfies users’ need for social contact

(Krämer, Lucas, Schmitt, & Gratch, 2018). We argue that, based on humans’ drive to be so-

cially connected, interactions in which the CA performs a human-like task trigger perceptions of anthropomorphism. A user who interacts with a CA to ask for advice on health issues,

for instance, is accustomed to talking to a human being (e.g., physician, therapist). There-

fore, we argue that this type of task stimulates anthropomorphism because the user requires

a social component in the interaction (e.g., empathic feedback). Similarly, users who interact

with a customer service CA when lodging a complaint seek a human-like response in the interac-

tion (e.g., an apology). By contrast, a computer-like task does not trigger a social relatedness

need in users. For example, users interacting with a CA to control a remote device or to re-

trieve data from a database do not have social expectations of the agent; rather they need a

convenient way to complete the task. Because the human component is an integral part of

human-like CA tasks but not of computer-like tasks, we argue that, based on the motivational

determinant sociality motivation, users’ perceptions of anthropomorphism are higher in inter-

actions with CAs that perform human-like tasks.

The second motivational determinant of anthropomorphism, effectance motivation, is stimu-

lated by tasks that provide “incentives associated with accurately understanding or predicting

the behavior of a nonhuman agent” (Epley et al., 2007, p. 872). Previous studies have shown


that effectance motivation is stimulated when a technological interaction partner performs

tasks in a way that is difficult to understand and anticipate (Eyssel, Kuchenbrandt, &

Bobinger, 2011; Waytz et al., 2010c). Because humans seek to understand and control their

environment (White, 1959), completing tasks in nontransparent ways incentivizes anthropo-

morphizing the particular technological gadget (Epley et al., 2007). Other studies have shown

that effectance motivation can also trigger anthropomorphism when completing the task re-

quires anthropomorphic sense-making (Fletcher et al., 1995; Rilling et al., 2008). According-

ly, we argue that the incentives to anthropomorphize are higher in interactions with CAs per-

forming human-like tasks because, in these cases, users feel that anthropomorphism will

produce meaningful interactions. For example, a user might believe that a computer cannot

understand physical or mental health issues that are uniquely human. Therefore, anthropo-

morphizing the agent can help inspire confidence in talking to it about these human-like top-

ics. Successfully completing such a task requires the user to feel confident in the interac-

tion. Consequently, the effectance motivation to anthropomorphize is high. As Norman

(1994) points out, the need to feel confident and in control is high when machines take on

human-like tasks. Otherwise, humans feel intimidated by the novel interaction partner and do

not know how to engage with it in a meaningful way. By contrast, users’ effectance motiva-

tion to anthropomorphize a CA performing a computer-like task is not as salient because

there is no incentive to anthropomorphize. For example, users of a CA that controls remote

home devices do not benefit from increasing their understanding of the agents’ behavior

through anthropomorphism because completing the task successfully happens independently

of their anthropomorphic perception of the CA.

Overall, we argue that the distinction between human-like and computer-like tasks is im-

portant for studies interested in anthropomorphic CA interactions because sociality and effectance motivation increase the tendency to anthropomorphize CAs that perform human-like

tasks. In line with our argumentation, we formulate the following hypothesis:

Hypothesis 2: CAs that perform human-like tasks elicit higher levels of perceived anthro-

pomorphism than CAs that perform computer-like tasks.

Human: Individual Differences and Perceptions of Anthropomorphism

Not all users respond to the same computer interface in the same way, as HCI researchers

have known for decades (Aykin, 1989; Egan, 1988). The manifold causes of these individual

differences include physical and cognitive abilities, personality traits, demographics, devel-

opmental states, previous experiences, and cultural background (Egan, 1988). Analogously,

not all humans respond to the same object with perceptions of anthropomorphism (Cullen,

Kanai, Bahrami, & Rees, 2014; Waytz et al., 2010a). Instead, every individual has a stable

tendency to anthropomorphize nonhuman objects (Epley et al., 2007). Previous studies in

psychology (Caruso, Waytz, & Epley, 2010; Tam, 2014; Waytz et al., 2014; Willard & No-

renzayan, 2013) and ergonomics (Chin, Sims, Clark, & Lopez, 2004; Chin et al., 2005) have

used dispositional anthropomorphism to understand the phenomenon of anthropomorphism

and its consequences for human perception and behavior. However, anthropomorphism

studies in HCI have not recognized this dispositional determinant yet. Because the psycho-

logical body of research suggests this individual disposition as a relevant determinant of an-

thropomorphism, we expect to observe differences in the extent to which individuals anthro-

pomorphize CAs, even when the CA’s anthropomorphic design and task are kept stable.

Therefore, we formulate the following hypothesis:

Hypothesis 3: The higher the level of dispositional anthropomorphism, the higher the users’

perceptions of anthropomorphism.

Study Design


We conducted an online experiment through the panel company clickworker to validate our

theoretical framework empirically, using a vignette technique. Vignettes are short descrip-

tions of actual objects, situations, or persons given to stimulate participants’ “beliefs, atti-

tudes, judgments, knowledge, or intended behavior with respect to the presented vignette

scenarios” (Atzmüller & Steiner, 2010, p. 129). In the context of HCI research, vignettes

simulate actual interactions used to standardize and control the experimental interaction

across all participants (Robert, Dennis, & Hung, 2009). This technique has previously been

used to study users’ perceptions of and attitudes to technology (e.g., Howell, Roberts, &

Mancini, 2018; Vance, Lowry, & Eggett, 2015). Similar to our approach, Robert et al. (2009)

presented simulated text-based interactions to experimentally analyze participants’ perceptu-

al responses. Other studies focused on anthropomorphic design have also applied the vignette technique to evaluate users’ responses to technological artifacts (Eyssel et al., 2011; Pak, McLaughlin, & Bass, 2014). We used a full factorial design to examine the effect of the independent variables (task type, anthropomorphic design, disposition to anthropomorphize) on the dependent variable (perceived anthropomorphism). All combinations of the two independent variables, task type (human-like or computer-like) and anthropomorphic design (verbal, nonverbal, human identity), were examined.

disposition to anthropomorphize, cannot be manipulated but represents a quasi-experimental

variable. This resulted in 16 different CA designs, one for each condition. We used a be-

tween-subject design, randomly assigning participants to the 16 treatment groups (see Table

5).

Table 5. Treatment Groups

Task Type | NV+V+HI | NV+V | NV+HI | V+HI | NV | HI | V | None
Computer-like | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
Human-like | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16

NV=Nonverbal; V=Verbal; HI=Human Identity
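To make the condition structure concrete, the following minimal Python sketch enumerates the 16 between-subject conditions implied by the full factorial design and randomly assigns a participant to one of them (variable names and encoding are ours, for illustration only, not the study’s actual assignment code):

```python
from itertools import product
import random

# The 16 conditions arise from 2 task types x 2^3 on/off combinations of
# the three anthropomorphic design dimensions (nonverbal, verbal, human
# identity). Names and encoding are illustrative assumptions.
conditions = [
    {"task": task, "nonverbal": nv, "verbal": v, "human_identity": hi}
    for task, nv, v, hi in product(
        ("computer-like", "human-like"), (0, 1), (0, 1), (0, 1)
    )
]
assert len(conditions) == 16

# Between-subject design: each arriving participant is randomly assigned
# to exactly one treatment group.
assigned_condition = random.choice(conditions)
print(assigned_condition)
```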


Task Type

We conducted two pretests to identify human-like and computer-like CA tasks that stimulate

effectance and sociality motivation. For the first pretest, we described typical tasks of seven

major CA classes (e-commerce, customer service, tutoring/teaching, health/fitness, Internet

of Things, business operations, and business services; see Table 4) to 95 participants (61

females; age: M=37.9, SD=12.24). The pretest was conducted through the panel company

Prolific. We asked each participant to rate 21 tasks (three for each class) on the extent to

which the task is more human- or more computer-like (1= “very human-like”, 7= “very com-

puter-like”). We briefed participants to rate a task as more human-like (computer-like) if they

believed the type of task is typically performed by a human being (computer program). This

method of categorizing objects along a human-computer continuum was adapted from Tou-

ré-Tillery and McGill (2015). The pretest indicated that the most human-like task is conducted

by a health agent who listens to a user and assists in handling stress and discomfort

(M=1.97, SD=1.41). The most computer-like task, by contrast, was done by a smart home

agent (Internet of Things class) that controls remote devices and provides statistics about

usage (M=6.07, SD=1.22). A paired sample t-test confirmed a significant group difference

(t(94)=-20.768, p<.001). In the further analysis, we focused on these two tasks that were rat-

ed to be the most human-like and computer-like, respectively.
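For readers who wish to retrace this pretest analysis, a minimal Python sketch of the paired-sample t-test follows; since the raw responses are not reproduced here, it uses simulated stand-in ratings generated from the reported means and standard deviations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated stand-in ratings on the 1 = "very human-like" to
# 7 = "very computer-like" scale (reported means/SDs, not the raw data).
health_task = np.clip(rng.normal(1.97, 1.41, 95).round(), 1, 7)
smart_home_task = np.clip(rng.normal(6.07, 1.22, 95).round(), 1, 7)

# Paired-sample t-test: every participant rated both tasks.
t_stat, p_value = stats.ttest_rel(health_task, smart_home_task)
print(f"t({health_task.size - 1}) = {t_stat:.3f}, p = {p_value:.3g}")
```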

Next, we conducted a second pretest to ensure that the human-like task stimulates sociality

and effectance motivation more than the computer-like task. We recruited 118 participants

(40 females; age: M=37.9, SD=12.01) through the panel company clickworker. Participants

were randomly assigned to read a description of one of the two tasks. Afterwards, we asked

each participant to rate their level of sociality and effectance motivation using established

seven-point Likert scales (Cheek & Buss (1981) for sociality motivation; Eyssel et al. (2011)

and Waytz et al. (2010) for effectance motivation; see Appendix B). Two independent sample


t-tests confirmed that the 60 participants who assessed the human-like CA task (sociality: M=4.95, SD=1.28; effectance: M=5.61, SD=.84), compared to the 58 participants who assessed the computer-like CA task (sociality: M=4.01, SD=1.46; effectance: M=5.26, SD=.96), reported significantly higher levels of sociality (t(116)=-3.73, p<.001) and effectance motivation (t(116)=-2.003, p<.05).
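This group comparison corresponds to an independent-samples t-test with pooled variance (df = 60 + 58 - 2 = 116); a minimal sketch with simulated stand-in scores follows (the effectance test is analogous):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated stand-in sociality-motivation scores per condition,
# generated from the reported means/SDs (not the raw study data).
human_like = np.clip(rng.normal(4.95, 1.28, 60), 1, 7)
computer_like = np.clip(rng.normal(4.01, 1.46, 58), 1, 7)

# Independent-samples t-test with pooled variance (equal_var=True),
# matching the reported df of 116.
t_stat, p_value = stats.ttest_ind(computer_like, human_like)
print(f"t(116) = {t_stat:.3f}, p = {p_value:.4f}")
```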

Participants and Procedure

For the main experiment, we invited native German speakers residing in Germany to take

part in our online experiment because understanding anthropomorphic notions in verbal be-

havior requires advanced language competence. We carried out a language test to further

ensure that all participants had the required language skills. After agreeing to the experi-

ment’s terms and conditions, participants were briefed on CAs, ensuring their general under-
standing of the technology. We also informed them that the CA was a computer program and not

a human interaction partner. Depending on the task context, participants were informed of the

tasks of either a health agent (human-like task) or a smart home agent (computer-like task)

(see Appendix C). Next, participants saw one screenshot of a typical text-based dialogue be-

tween a user and the respective CA and were asked to carefully read the presented dialogue

(see Figure 4). Each user observed one of the 16 possible CA designs. Subsequently, partic-

ipants were asked to rate the CA design with respect to perceived anthropomorphism. Finally,

participants completed a questionnaire including measures for dispositional anthropomor-

phism, previous experience with CAs, and demographic information. A total of 539 individuals

participated in the experiment. Of the 539 participants, 21 failed the check for necessary lan-

guage skills, four withdrew from the experiment, and 18 failed an attention check. After re-

moving these participants from our sample, we analyzed the remaining 496 individuals’ data.


Anthropomorphic Design

Based on our literature review, we operationalized human identity and verbal and nonverbal

cues as follows: as human identity cues, we gave the CAs a human name (“Laura”) and a

human profile image; CAs without human identity cues were simply referred to as a health

agent or a smart home agent, and they had no profile image. As anthropomorphic verbal

cues, CAs used self-referencing pronouns (e.g., “I” and “me”) and emotional expressions

(e.g., “Oh, I am sorry”) in communicating with the user. In the non-anthropomorphic condition,

CAs used an impersonal style and emotionally neutral language. For anthropomorphic non-

verbal cues, an “is typing” indicator was implemented, which enabled the CA to convey tem-

poral cues of nonverbal behavior (Gnewuch et al., 2018). The indicator signals that the agent

needs time to formulate a response and that it is engaging in turn-taking behavior. Moreover,

the CA used emoticons as a digital substitute for nonverbal behavior supporting the content of

the message. While emotional expressions and emoticons signal emotional intelligence, we

deliberately decided not to manipulate cognitive intelligence because this would have manipu-

lated the CA’s performance (Brody, 2004). In terms of the CA’s responsiveness and coher-

ence, we therefore kept cognitive intelligence stable across conditions. We kept the text

length equal across groups (see Figure 4 for the English translation of the stimulus material).

Figure 4 illustrates two of the anthropomorphic designs for the health agent (see Appendix D

for the smart home agent). Following Hauser, Ellsworth, and Gonzalez (2018), we used a

separate sample to check that our manipulations of the design dimensions were effective.

This approach ensured that the manipulation check did not affect our experimental measure-

ment of anthropomorphism. The panel company, clickworker, invited 481 individuals to partic-

ipate in our manipulation check. Participants were selected based on the same criteria used

in the main experiment. By checking the participants’ IDs, we ensured that no one took part in

both the manipulation check and the main experiment. The experimental procedure corre-

sponded with the main experiment except that directly after presenting the stimulus, we asked


participants whether they had recognized the experimental CA’s nonverbal, verbal, and hu-
man identity cues. The manipulation-check study ended at this point. We evaluated this categori-

cal data with Pearson’s chi-squared test. Participants in a nonverbal design treatment group

reliably identified these cues (78.60%), as opposed to participants in a treatment group with-

out these cues (17.46% erroneously stated that the CA had nonverbal cues) (χ2(1)= 180.25,

p<.001). The same holds for the verbal (χ2(1)=106.59, p<.001, 91.32% vs. 48.12%) and hu-

man identity (χ2(1)=298.96, p<.001, 97.94% vs. 20.59%) design manipulation.
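As an illustration, the nonverbal manipulation check can be retraced with Pearson’s chi-squared test on a 2x2 contingency table; the cell counts below are approximated from the reported percentages under the assumption (ours) that the 481 participants were split roughly evenly across the with-cue and without-cue groups:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: treatment with vs. without nonverbal cues; columns: participant
# reported "nonverbal cues present" vs. "absent". Counts are rebuilt from
# the reported 78.60% and 17.46% rates with an assumed 243/238 group split.
with_cues = np.round(np.array([0.7860, 1 - 0.7860]) * 243)
without_cues = np.round(np.array([0.1746, 1 - 0.1746]) * 238)
table = np.vstack([with_cues, without_cues])

# correction=False yields the uncorrected Pearson statistic.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3g}")  # close to the reported 180.25
```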

Figure 4. Designs of Experimental Health Agent (left: health agent with all types of cues highlighted; right: health agent with no cues)


Measures

We measured the dependent variable perceived anthropomorphism with a seven-item scale

taken from the literature (Waytz et al., 2010c), which has been used in related studies (Kim &

McGill, 2011; Touré-Tillery & McGill, 2015; Waytz et al., 2014). Participants were asked to

report the extent to which they agree with the following statements: The CA has (1) a mind of

its own, (2) intentions, (3) a free will, (4) consciousness, (5) desires, (6) beliefs, and (7) the

ability to experience emotions.

To measure the quasi-experimental variable dispositional anthropomorphism, we deployed

the individual differences in anthropomorphism questionnaire (IDAQ) (Waytz et al., 2010a).

According to the IDAQ, dispositional anthropomorphism is composed of three subdimensions

that reflect three commonly anthropomorphized classes of entities: natural entities, nonhu-

man animals, and technology (Waytz et al., 2010a). Each of these subdimensions was

measured with five items. All responses were recorded on a seven-point Likert scale. Appen-

dix E provides the measurement items and details on the validity of our measurement in-

struments.
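A minimal sketch of how such multi-item Likert scales are typically scored, using simulated stand-in item responses (the column names pa1..pa7 and idaq1..idaq15 are illustrative, not the study’s variable names):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated seven-point Likert responses: 7 perceived-anthropomorphism
# items and 15 IDAQ items (three subdimensions x five items).
cols = [f"pa{i}" for i in range(1, 8)] + [f"idaq{i}" for i in range(1, 16)]
df = pd.DataFrame(rng.integers(1, 8, size=(496, len(cols))), columns=cols)

# Scale scores as item means.
df["perceived_anthro"] = df[[f"pa{i}" for i in range(1, 8)]].mean(axis=1)
df["idaq"] = df[[f"idaq{i}" for i in range(1, 16)]].mean(axis=1)

# Dispositional anthropomorphism enters the later regressions as a z-score.
df["idaq_z"] = (df["idaq"] - df["idaq"].mean()) / df["idaq"].std()
```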

Data Analysis and Results

Of the 496 participants, 60.4% had never used a CA before. Of those who had previously interacted with a CA, 26% used CAs frequently (at least once a month), and 33.53% had used a CA only once before. The average age of the participants was 36.7 (SD=12.62), and there were 233 (46.98%) females.

perceptions of anthropomorphism (see Appendix F, Table A5). Therefore, we only included

gender in our further analysis. Descriptive statistics are provided in Table 6. A post hoc power analysis via G*Power (Faul, Erdfelder, Buchner, & Lang, 2009), based on the observed effect size of f²=.036, indicates a power of .94. Thus, the achieved power exceeds the conventional power level


of .80 (Cohen, Cohen, West, & Aiken, 2003), so that our sample size can be considered

large enough to detect significant effects.
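The reported value can be retraced via the noncentral F distribution, the same machinery G*Power uses for multiple regression. The degrees-of-freedom choices below are our assumption: taking f² as the incremental effect of Model 2’s four interaction terms reported in the next subsection ((.1693 - .1393) / (1 - .1693) ≈ .036), a numerator df of 4 reproduces a power close to .94:

```python
from scipy.stats import f as f_dist, ncf

# Post hoc power for a multiple-regression F-test via the noncentral F
# distribution. The df choices are our assumption (4 tested interaction
# terms, Model 2 with 10 predictors); f2 and N are taken from the text.
f2, n_obs, df_num, n_predictors, alpha = 0.036, 496, 4, 10, 0.05
df_den = n_obs - n_predictors - 1
lam = f2 * n_obs                         # noncentrality parameter lambda
f_crit = f_dist.ppf(1 - alpha, df_num, df_den)
power = 1 - ncf.cdf(f_crit, df_num, df_den, lam)
print(f"post hoc power = {power:.2f}")   # approximately .94
```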

To test in a single common model whether the three different factors (technology, task, indi-

vidual) in our framework influence perceived anthropomorphism, we used an OLS multiple

regression analysis. Robust standard errors were used in all regressions to account for het-

eroskedasticity, based on the Breusch-Pagan test (Cohen, Cohen, West, & Aiken, 2003). We

expected minimal issues with multicollinearity because of the full factorial experimental de-

sign (design cues, task type), including only one quasi-experimental variable (dispositional

anthropomorphism) and one control variable (gender) that could have been correlated with

the other independent variables. Indeed, all the variance inflation factor values (VIFs) ranged

between 1.00 and 6.94 (see Appendix F, Table A6), thus falling well below the cutoff of 10

(Hair, Anderson, Tatham, & Black, 1995). In Model 1, our analysis checked whether each

design dimension, task type, and dispositional anthropomorphism had a significant main ef-

fect on perceived anthropomorphism. Therefore, the dependent variable perceived anthro-

pomorphism was regressed on the three dummy-coded independent design variables (non-

verbal, verbal, and human identity), task type (dummy variable with 0=computer-like,

1=human-like), dispositional anthropomorphism (standardized as z-score), and gender (dummy variable with 0=female, 1=male).

Table 6. Descriptive Statistics

Variable | Mean | s.d. | N | Min. | Max.
Perceived Anthropomorphism | 2.58 | 1.47 | 496 | 1 | 7
Dispositional Anthropomorphism | 3.06 | .83 | 496 | 1 | 6.4

Perceived Anthropomorphism by Treatment | Mean | s.d. | N | Min. | Max.
Design
None | 2.53 | 1.32 | 66 | 1 | 6.29
Nonverbal | 2.1 | 1.33 | 61 | 1 | 6
Verbal | 2.67 | 1.51 | 62 | 1 | 7
Human Identity | 2.56 | 1.44 | 65 | 1 | 6
Nonverbal x Verbal | 2.85 | 1.49 | 64 | 1 | 5.86
Nonverbal x Human Identity | 2.85 | 1.51 | 59 | 1 | 6
Verbal x Human Identity | 2.47 | 1.54 | 58 | 1 | 6.57
Nonverbal x Verbal x Human Identity | 2.6 | 1.6 | 61 | 1 | 6
Task Type
Human-like | 2.91 | 1.56 | 237 | 1 | 7
Computer-like | 2.28 | 1.32 | 259 | 1 | 6.57

Model 1 explains a significant proportion of the

variance in perceived anthropomorphism (R2=.1393, F(6, 489) = 14.33, p<.001). The regres-

sion revealed no significant main effects for the three design dimensions (H1a-c) but a signif-

icant effect of task type (H2: b=.638, t(489)=5.10, p<.001), dispositional anthropomorphism

(H3: b=.433, t(489)=6.51, p<.001), and gender (b=.3, t(489)=2.42, p=.016). Perceptions of

anthropomorphism, according to Hypothesis 2, are higher in interactions with CAs that per-

form human-like tasks (see also Appendix G 5). Specifically, perceived anthropomorphism

increases by .638 units on the seven-point Likert scale if the task is human-like. Moreover, as

Hypothesis 3 predicted, dispositional anthropomorphism has a significant impact on users’

perceived anthropomorphism in a CA. More precisely, a one standard deviation increase in

dispositional anthropomorphism relates to an increase in perceived anthropomorphism by

.433 units. To identify the three anthropomorphic design dimensions’ potential interaction

effects (exploratory research question), we used an OLS regression that allows for two-way

and three-way interactions. In Model 2, the dependent variable perceived anthropomorphism

was regressed on the three dummy-coded independent design variables (nonverbal, verbal,

and human identity), along with their interaction terms, task type, dispositional anthropomor-

phism, and gender. This model explains a significant proportion of the variance in perceived

anthropomorphism (R2=.1693, F(10, 485)=11.99, p<.001). The outcome of a likelihood ratio

test shows that Model 2 results in a statistically significant improvement of the model fit

(χ2(4)=17.55, p=.0015). Table 7 summarizes the regression results for both Model 1 and Model 2.
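A minimal sketch of this two-model analysis in Python with statsmodels follows. It runs on simulated stand-in data (only the task-type and disposition effects are built in), so the coefficients will not reproduce Table 7; the point is the model specification, the heteroskedasticity-robust errors, and the diagnostics:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy import stats

rng = np.random.default_rng(1)
n = 496
# Simulated stand-in data; variable names mirror the design.
df = pd.DataFrame({
    "nonverbal": rng.integers(0, 2, n),
    "verbal": rng.integers(0, 2, n),
    "human_identity": rng.integers(0, 2, n),
    "task_type": rng.integers(0, 2, n),   # 0=computer-like, 1=human-like
    "idaq_z": rng.normal(0, 1, n),        # dispositional anthropomorphism
    "gender": rng.integers(0, 2, n),      # 0=female, 1=male
})
df["perceived_anthro"] = (
    2.0 + 0.64 * df["task_type"] + 0.43 * df["idaq_z"] + rng.normal(0, 1.3, n)
).clip(1, 7)

# Model 1: main effects only, with heteroskedasticity-robust (HC) errors.
m1 = smf.ols(
    "perceived_anthro ~ nonverbal + verbal + human_identity"
    " + task_type + idaq_z + gender",
    data=df,
).fit(cov_type="HC3")

# Breusch-Pagan test on the residuals motivates the robust errors.
bp_lm, bp_p, _, _ = het_breuschpagan(m1.resid, m1.model.exog)

# Variance inflation factors for the Model 1 regressors (skip intercept).
vifs = [variance_inflation_factor(m1.model.exog, i)
        for i in range(1, m1.model.exog.shape[1])]

# Model 2: adds all two- and three-way interactions of the design cues.
m2 = smf.ols(
    "perceived_anthro ~ nonverbal * verbal * human_identity"
    " + task_type + idaq_z + gender",
    data=df,
).fit(cov_type="HC3")

# Likelihood ratio test of the nested models (4 added terms).
lr = 2 * (m2.llf - m1.llf)
lr_p = stats.chi2.sf(lr, df=m2.df_model - m1.df_model)
print(f"BP p = {bp_p:.3f}; LR chi2(4) = {lr:.2f}, p = {lr_p:.4f}")
```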


Table 7. Regression Results for Perceived Anthropomorphism

Term | Model 1 estimate (robust std. error) | p-value | Model 2 estimate (robust std. error) | p-value
First-order terms
Nonverbal | .037 (.123) | .767 | -.568 (.199) | .004**
Verbal | .134 (.123) | .276 | .105 (.211) | .621
Human Identity | .12 (.124) | .331 | .068 (.217) | .753
Second-order terms
Nonverbal x Verbal | - | - | .798 (.311) | .011*
Verbal x Human Identity | - | - | -.308 (.335) | .359
Nonverbal x Human Identity | - | - | .871 (.321) | .007**
Third-order terms
Nonverbal x Verbal x Human Identity | - | - | -.892 (.491) | .070
Task Type (0=computer-like, 1=human-like) | .638 (.125) | .000*** | .636 (.49) | .000***
Dispositional Anthropomorphism (z-score) | .433 (.066) | .000*** | .461 (.065) | .000***
Control: Gender (0=female, 1=male) | .3 (.124) | .016* | .317 (.125) | .011*
Constant | 1.973 (.137) | .000*** | 2.075 (.154) | .000***
R2 (model p-value) | .1393 (.000***) | | .1693 (.000***)
Adjusted R2 | .1289 | | .1521
Observations | 496 | | 496

* p<0.05, ** p<0.01, *** p<0.001

Following Aiken and West (1991), we first analyzed the conditional interaction effect of the

first-order terms. Each of the three coefficients estimates the effect of the respective design

dimension on perceived anthropomorphism when the other two dimensions equal 0 (Aiken &

West, 1991). Those designs that include either solely human identity or solely verbal cues,

and therefore no cues from the remaining two design dimensions, do not have a significant

influence on perceived anthropomorphism. Moreover, designs that include nonverbal cues

and no cues from the other two design dimensions have a significant negative effect on per-

ceived anthropomorphism (b= -.568, t(485)= -2.86, p=.004). On the seven-point Likert scale,

the responses are estimated to decrease by 0.568 when only nonverbal cues are added,

compared to not adding any cue. Next, we analyzed the second-order terms’ conditional interaction effect. They describe each two-way interaction term’s effect on perceived anthro-

pomorphism when the design dimension that is not included in the interaction term equals 0

(Aiken & West, 1991). We find that the combinations of (i) nonverbal and verbal cues

(b=.798, t(485)=2.57, p=.011) and (ii) nonverbal and human identity cues (b=.871,

t(485)=2.71, p=.007) have a significant positive effect on perceived anthropomorphism. Look-

ing at the predictive margins, this means that when nonverbal cues are combined with verbal

or human identity cues and no other cues are added the responses increase by .334 (com-

bined with verbal) and .371 (combined with human identity), compared to a design without

any cues. Thus, the data indicates an effect of complementarity between nonverbal cues and

verbal cues, as well as between nonverbal cues and human identity cues. This means the

sole use of nonverbal cues harms anthropomorphism. This negative effect vanishes if non-

verbal cues are complemented by verbal or human identity cues. Figure 5 plots these two

conditional interaction effects. Simple slope tests on these two-way interactions show that the

effect of nonverbal cues on perceived anthropomorphism is negative and significant for de-

signs without verbal and human identity cues (b= -.569, p=.005), and positive, albeit non-

significant, for designs with verbal (b=.230, p=.342) or human identity cues (b=.303, p=.23).

Finally, we analyzed the third-order interaction term, finding that the three-way interaction

has a negative coefficient and is not significant (see Table 7). This indicates that designs combining all three design dimensions do not receive the highest ratings of perceived anthropomorphism. According to our findings and in line with our analysis above, hypotheses

1a-c are not supported; however, we find interesting interaction effects of pairwise combina-

tions of cues in response to our exploratory research question. Model 2 reveals a significant

effect of task type (b=.636, t(485)=5.16, p<.001) and dispositional anthropomorphism

(b=.461, t(485)=7.06, p<.001), which supports hypotheses 2 and 3. Table 8 summarizes the

hypotheses tests’ results.
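Continuing the statsmodels sketch above (assuming m2, the fitted Model 2, is still in scope), the conditional effects and simple slopes are linear combinations of coefficients and can be tested directly; with dummy coding, the slope of nonverbal at verbal = 1 and human identity = 0 is b_nonverbal + b_nonverbal:verbal, mirroring -.568 + .798 = .230:

```python
# Simple slopes of the nonverbal dimension at fixed levels of the other
# two dummy-coded dimensions, via t-tests on coefficient combinations
# (continues the Model 2 sketch above; m2 must be in scope).
print(m2.t_test("nonverbal = 0"))                             # verbal=0, HI=0
print(m2.t_test("nonverbal + nonverbal:verbal = 0"))          # verbal=1, HI=0
print(m2.t_test("nonverbal + nonverbal:human_identity = 0"))  # verbal=0, HI=1
```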


Figure 5. Significant Two-way Interactions
[Two panels plot predicted perceived anthropomorphism (y-axis, 1.0 to 3.0) against the nonverbal design dimension (x-axis, 0 vs. 1). Left panel: conditional two-way interaction when Human Identity = 0, with separate lines for Verbal = 0 and Verbal = 1. Right panel: conditional two-way interaction when Verbal = 0, with separate lines for Human Identity = 0 and Human Identity = 1.]

Table 8. Summary of Hypotheses

Hypothesis | Description | Supported?
H1a | Main effect: human identity cues → anthropomorphism | NO
H1b | Main effect: verbal cues → anthropomorphism | NO
H1c | Main effect: nonverbal cues → anthropomorphism | NO
Exploratory RQ | Are there interaction effects? |
 | Second-order term: nonverbal x verbal | YES
 | Second-order term: verbal x human identity | NO
 | Second-order term: nonverbal x human identity | YES
 | Third-order term: nonverbal x verbal x human identity | NO
H2 | Main effect: task type → anthropomorphism | YES
H3 | Main effect: dispositional anthropomorphism → anthropomorphism | YES


Discussion and Contributions

The empirical evaluation supports the proposed framework for anthropomorphic CA design.

In line with the theory of anthropomorphism (Epley et al., 2007), we find that (i) cognitive fac-

tors evoked by anthropomorphic technology design, (ii) motivational factors resulting from the

task type, and (iii) dispositional factors that depend on the individual user do influence the

human perception of anthropomorphism. Notably, we provide evidence that considering in-

teraction effects between anthropomorphic technology design dimensions is important to

stimulate anthropomorphism. More precisely, we find that nonverbal cues have a negative

effect on perceptions of anthropomorphism if they are not complemented with verbal or hu-

man identity cues. Below, we discuss the implications for theory and practice.

Our work contributes to the theory of anthropomorphism and advances our discipline’s un-

derstanding of this phenomenon by relating the three anthropomorphism determinants to the

design of anthropomorphic interaction with CAs. Concerning the cognitive determinant, the

theory of anthropomorphism suggests that making objects more human-like increases the

probability of anthropomorphism (Epley et al., 2007) but provides no further guidance on how

to achieve this level of human-likeness. By relating the three anthropomorphic design dimen-

sions for CAs to our literature analysis, we systemize disparate techniques of anthropo-

morphic technology design (see Table 3) and provide a unifying knowledge base of anthro-

pomorphic CA cues (see Table 2). While recent work has focused on classifying such an-

thropomorphic design features (Feine et al., 2019; Wagner & Schramm-Klein, 2019), our

work goes one step further to empirically validate such designs’ hypothesized effects. Alt-

hough the theory suggests a positive relationship between the identified anthropomorphic

design dimensions and anthropomorphism, our empirical analysis reveals this relationship is

more complex.


First, we do not find simple main effects of each of the anthropomorphic design dimensions.

Previous studies found a positive effect of human identity cues on perceived anthropomor-

phism (Araujo, 2018; De Visser et al., 2016; Niu, Terken, & Eggen, 2018; Waytz et al., 2014; Yuan & Dennis, 2017). However, only Araujo (2018) investigated CA design. The other

studies focused on website design, driving agents, and online products (see Table 3). Fur-

ther, Araujo’s (2018) CA was simultaneously given anthropomorphic verbal cues. Thus, it is

not clear whether each cue type or their interaction caused the positive effect on anthropo-

morphism. Our study is the first to provide insight into the effects of the three design dimensions independently of each other. That we do not find simple main effects is novel and might explain why a body of research manipulated several design dimensions without disentangling the effects of each dimension.

Second, we show that the sole use of nonverbal cues harms anthropomorphism, but that this

effect turns positive if verbal or human identity cues complement nonverbal cues. Previous

studies have intuitively combined nonverbal cues with cues from the other design dimensions

without justifying this combination (De Visser et al., 2016; Nunamaker et al., 2011; Verhagen

et al., 2014; Von der Pütten et al., 2010). CAs enhanced by nonverbal cues alone could trig-

ger skepticism about their human-likeness because without complementary verbal or human

identity cues, the nonverbal cues provide a disturbing image of a human interaction partner.

According to anthropological and communication research, nonverbal cues are special be-

cause (i) they are not uniquely human (Burling et al., 1993; Knapp et al., 2014) and (ii) they

typically support the verbal part of an interaction (Argyle, 1969; Knapp et al, 2014). Conse-

quently, this paper reveals that nonverbal cues need to be complemented with human identi-

ty cues or verbal cues to have a positive effect on perceived anthropomorphism.

Finally, the simultaneous use of all cue types did not result in the highest perceptions of an-

thropomorphism. This counterintuitive finding might be a manifestation of the uncanny valley


effect, according to which perceptions of anthropomorphism drop when a relatively high level

of human-likeness is attained (Mori, 1970). Since all participants in our study had been in-

formed that they would be interacting with a CA, the simultaneous use of all design dimen-

sions might have signaled a degree of humanness that did not fit with users’ understanding

of the CA as a computer program. Consequently, this mismatch violated the deep-rooted

understanding of the target object as nonhuman and might have caused an uncanny re-

sponse (Burleigh et al., 2013; MacDorman et al., 2009; Saygin et al., 2012). Research on the

uncanny valley effect has largely focused on studying it with reference to robots and avatars

(Burleigh et al., 2013), while recent studies on its occurrence in interactions with text-based

CAs are still inconclusive (Ciechanowski, Przegalinska, Magnuski, & Gloor, 2019; Skjuve,

Haugstveit, Følstad, & Brandtzaeg, 2019). Our study provides initial support for a possible

uncanny valley effect in purely text-based interactions with CAs. In sum, our study advances

the knowledge on the cognitive determinant of anthropomorphism by showing that specific

interactions of design cues should be considered to stimulate anthropomorphism. Moreover,

we provide explanations for these findings by integrating anthropological, communication,

and robotics research.

Further, our work advances the knowledge on the motivational determinant of anthropomor-

phism by relating sociality and effectance motivation to the type of task a technology per-

forms. While psychological theory suggests that situational variables can influence humans’

motivation to anthropomorphize (Epley et al., 2008b), the distinction between human and

nonhuman tasks has not yet been empirically investigated. We demonstrate that human-like

CA tasks (e.g., providing health advice) stimulate higher levels of perceived anthropomor-

phism than computer-like CA tasks (e.g., controlling remote devices) because humans feel

an urge to be socially related to their interaction partner (sociality motivation) and require

anthropomorphic perception to make sense of the interaction (effectance motivation). Thus,


we add to extant work that has shown how situational variables (e.g., loneliness (Eyssel &

Reich, 2013; Wang, 2017), unpredictability (Eyssel et al., 2011; Waytz et al., 2010c)) stimu-

late anthropomorphism in HCI through sociality and effectance motivation. In addition, differ-

entiating human-like from computer-like CA tasks calls on HCI researchers to attend to the

specific task type and to consider it in theorizing about anthropomorphism. Specifically,

the task type could explain why previous work investigating computer-like tasks has failed to

stimulate anthropomorphism (e.g., querying information on a website (Sah & Peng, 2015)).

By contrast, other studies have found significant positive effects of weak anthropomorphic

designs (i.e., designs that included verbal or human identity cues only) on anthropomorphism

in investigating human-like tasks, such as talking about emotions (Schuetzler et al., 2014),

driving a car (Waytz et al., 2016), or communicating with customer services (Araujo, 2018).

Overall, the task type and related motivational forces provide a possible explanation for these

mixed findings and need to be considered in future studies on anthropomorphism in HCI.

Regarding the dispositional determinant of anthropomorphism, we demonstrate that disen-

tangling users’ individual tendencies to anthropomorphize from design- or task-related effects

is important to predict anthropomorphism. While psychological studies have repeatedly

shown that this determinant is an important predictor of perceived anthropomorphism (Caru-

so, Waytz, & Epley, 2010; Tam, 2014; Waytz et al., 2014; Willard & Norenzayan, 2013), IS

studies on anthropomorphism so far have disregarded dispositional anthropomorphism’s

role. Just as dispositional trust (McKnight, Choudhury, & Kacmar, 2002) is regularly included in IS studies on trust (e.g., Lankton & McKnight, 2015; Nicolaou & McKnight, 2006; Wang & Benbasat, 2008), we offer strong evidence that individual tendencies to anthropomorphize should

be included more frequently in studies on anthropomorphism. Low levels of dispositional an-

thropomorphism might provide an additional explanation for failed anthropomorphic technol-

ogy design in extant work (Sah & Peng, 2015; Wölfl et al., 2019; Yuan & Dennis, 2017).


Overall, this study provides a theoretical basis for researching questions related to anthro-

pomorphism in various types of human-CA interactions. The advent of conversational inter-

faces changes how we interact with computer programs in both private and professional use

contexts (Goasduff, 2019; Marketsandmarkets, 2019). Anthropomorphism is inseparably

linked to these conversational interfaces (Araujo, 2018; Epley et al., 2007; Pfeuffer, Benlian,

Gimpel, & Hinz, 2019) and can have positive and negative effects on human perception and

behavior. On the one hand, research shows that anthropomorphism stimulates user trust (De

Visser et al., 2016), connectedness (Tam, Lee, & Chao, 2013), and social response behav-

ior (Epley et al., 2007; Waytz et al., 2010b). On the other hand, a growing body of literature

shows the negative consequences of anthropomorphism, exposing how it can result in inap-

propriate expectations (Culley & Madhavan, 2013), unwillingness to share intimate infor-

mation (Kiesler et al., 2008; Złotowski, Proudfoot, Yogeeswaran, & Bartneck, 2015), privacy

concerns (Sohn, 2019), undesirable emotional attachment (Robert, 2017), moral ascriptions

(Waytz et al. 2010b), and discriminatory behaviors (Giger et al., 2019). Given these conse-

quences, researchers have to be aware and in control of the factors that stimulate anthropo-

morphism in CA interactions. The introduced framework provides direction for the systematic

design and analysis of anthropomorphic interactions with CAs and their positive and negative

consequences.

This study’s findings are also relevant for practitioners. Owing to advancements in natural

language processing, CAs perform a variety of tasks on various websites and platforms in

private and business use contexts (Araujo, 2018; Feine et al., 2019). Due to the positive and

negative consequences of anthropomorphism we have named, designers need to conscious-

ly decide whether they want to promote or mitigate anthropomorphism in the user-CA interaction. Our study shows that unreflectively using human-like cues in CA designs presumably does not stimulate anthropomorphism in the perceiver, and carelessly implementing nonverbal cues can even prevent anthropomorphism. Moreover, CAs that perform human-like

tasks (e.g., health counseling, customer services) are more readily anthropomorphized even

when designers deliberately do not add any human-like design cues. This can be an unde-

sired effect if designers want users to share intimate information (Kiesler et al., 2008; Zło-

towski et al., 2015). In sum, we equip designers with novel knowledge and guidance on the

effects design cues, task types, and individual differences have on anthropomorphism.

Limitations and Future Research

One limitation of this study is its focus on text-based CAs, although CAs can generally have a text and/or a voice interface. For voice-based CAs, additional dimensions of anthropomorphic

cues that reflect speech-related characteristics (e.g., paralinguistic attributes) are relevant in

stimulating anthropomorphism (Schroeder & Epley, 2016). Future research should extend

our study of anthropomorphism to include all types of CAs. Relevant research questions

should identify vocal and paralinguistic anthropomorphic cues and address how these relate

to the proposed theoretical framework.

A second limitation results from our experimental use of simulated interactions. The vignette

technique represents an established experimental procedure to study perceptions, attitudes,

and judgments (Atzmüller & Steiner, 2010). Nevertheless, vignettes cannot substitute for the

investigation of actual behavior (Riedl et al., 2014). Therefore, future research should in-

crease our framework’s generalizability by investigating actual interaction behavior with CAs.

Relatedly, future studies should investigate the effects of anthropomorphism on important

outcome variables. For example, do differing levels of anthropomorphic CA design influence

user performance, satisfaction, or decision quality? How do these effects change across task

types?

Another potential limitation of the current study is the time-dependent categorization of hu-

man-like and computer-like tasks. As described earlier, advances in artificial intelligence enable computer programs to take over more and more tasks previously carried out by human

experts (Frey & Osborne, 2017). Along with this development, users’ perceptions of what

constitutes a human-like versus computer-like task might also evolve. Accordingly, our cate-

gorization might require periodic reassessment and adjustment in the future.

The last limitation involves the collected anthropomorphic cues classified into the three di-

mensions of human identity, verbal, and nonverbal cues. Our collection is based on a review

of existing studies focused on human-like interface design; yet, additional cues might exist

that can be classified into these three design dimensions. Equally, our experimental manipu-

lation of the design dimensions is limited to the cues we identified. Besides our thorough se-

lection of cues from extant work, renewed efforts to identify and investigate additional cues

would allow further verification of our framework.

Conclusion

Computer agents that interact with users via human language are at the forefront of the era

of conversational HCI. This emulation of human-to-human communication makes research

into CAs inseparable from questions regarding anthropomorphism. We integrate the cogni-

tive, motivational, and dispositional determinants of anthropomorphism in a comprehensive

theoretical framework for anthropomorphic CA design that advances IS knowledge of anthro-

pomorphism and makes the theory of anthropomorphism (Epley et al., 2007) applicable to

our discipline. Our experiment shows that anthropomorphic CA design is not trivial: not all

anthropomorphic designs achieve the hypothesized effect. We provide possible explanations

for the unexpected results and identify several opportunities for future research. Our theoreti-

cal and empirical findings inform both researchers and practitioners about the complex rela-

tionship between anthropomorphic design and users’ perceptions of anthropomorphism.


Acknowledgements

The authors are grateful to the senior editor, Prof. Radhika Santhanam, and the team of re-

viewers who provided constructive feedback throughout the entire review process. This work

was partially supported by the research alliance ForDigital (fordigital.org), an initiative en-

couraged and funded by the Federal State of Baden-Württemberg, Germany.

References

Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage.

Al-Natour, S., Benbasat, I., & Cenfetelli, R. (2010). Trustworthy Virtual Advisors and Enjoyable Interactions: Designing for Expressiveness and Transparency. In Proceedings of the 18th European Conference on Information Systems (ECIS), Regensburg, Germany (pp. 1-12).

Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183-189.

Argyle, M. (1969). Social interaction. London: Methuen.

Atzmüller, C., & Steiner, P. M. (2010). Experimental vignette studies in survey research. Methodology, 6(3), 128-138.

Aykin, N. M. (1989). Individual differences in human-computer interaction. Computers & Industrial Engineering, 17(1-4), 614-619.

Ben Mimoun, M. S., & Poncin, I. (2015). A valued agent: How ECAs affect website customers’ satisfaction and behaviors. Journal of Retailing and Consumer Services, 26, 70-82.

Benbasat, I., Dimoka, A., Pavlou, P. A., & Qiu, L. (2010). Incorporating Social Presence in the Design of the Anthropomorphic Interface of Recommendation Agents: Insights from an fMRI Study. In Proceedings of the 31st International Conference on Information Systems (ICIS), St. Louis (pp. 1-22).

Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 72-101.

Benyon, D. (2014). Designing Interactive Systems: A comprehensive guide to HCI, UX and interaction design. Essex: Pearson Education Limited.

Berry, D. C., Butler, L. T., & de Rosis, F. (2005). Evaluating a realistic agent in an advice-giving task. International Journal of Human-Computer Studies, 63, 304-327.

Bickmore, T. W., & Picard, R. W. (2005). Establishing and maintaining long-term human-computer relationships. ACM Transactions on Computer-Human Interaction (TOCHI), 12(2), 293-327.

Bickmore, T., & Cassell, J. (2001). Relational agents. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’01, New York (pp. 396-403).


Bickmore, T., & Cassell, J. (2005). Social dialogue with embodied conversational agents. In Advances in Natural Multimodal Dialogue Systems (pp. 23-54). Dordrecht: Springer.

Brody, N. (2004). What Cognitive Intelligence Is and What Emotional Intelligence Is Not. Psychological Inquiry, 15(3), 234-238.

Brown, S. A., Fuller, R., & Thatcher, S. M. B. (2016). Impression Formation and Durability in Mediated Communication. Journal of the Association for Information Systems, 17(9), 614-647.

Brynjolfsson, E., & McAfee, A. (2016). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York: W.W. Norton & Company.

Burgoon, J. K., Bonito, J. A., Bengtsson, B., Artemio Ramirez, S. J. S., Dunbar, N. E., & Miczo, N. (1999). Testing the Interactivity Model: Communication Processes, Partner Assessments, and the Quality of Collaborative Work. Journal of Management Information Systems, 16(3), 33-56.

Burgoon, J. K., Bonito, J. A., Lowry, P. B., Humpherys, S. L., Moody, G. D., Gaskin, J. E., & Giboney, J. S. (2016). Application of Expectancy Violations Theory to communication with and judgments about embodied agents during a decision-making task. Journal of Human Computer Studies, 91, 24-36.

Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Computers in Human Behavior, 29(3), 759-771.

Burling, R., Armstrong, D. F., Blount, B. G., Callaghan, C. A., Foster, M. L., King, B. J., ... & Wallman, J. (1993). Primate calls, human language, and nonverbal communication [and comments and reply]. Current Anthropology, 34(1), 25-53.

Caruso, E. M., Waytz, A., & Epley, N. (2010). The intentional mind and the hot hand: Perceiving intentions makes streaks seem likely to continue. Cognition, 116(1), 149-153.

Cassell, J. (2000). Embodied conversational agents. Cambridge: MIT Press.

Cassell, J., & Bickmore, T. (2000). External manifestations of trustworthiness in the interface. Communications of the ACM, 43(12), 50-56.

Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjálmsson, H., & Yan, H. (1999). Embodiment in conversational interfaces: Rea. Presented at CHI, Pittsburgh, PA, USA, New York (pp. 520-527).

Chattaraman, V., Kwon, W.-S., Gilbert, J. E., & Ross, K. (2019). Should AI-based, conversational digital assistants employ social- or task-oriented interaction style? A task-competency and reciprocity perspective for older adults. Computers in Human Behavior, 90, 315-330.

Cheek, J. M., & Buss, A. H. (1981). Shyness and sociability. Journal of Personality and Social Psychology, 41, 330-339.

Chen, F., Sengupta, J., & Adaval, R. (2018). Does Endowing a Product with Life Make One Feel More Alive? The Effect of Product Anthropomorphism on Consumer Vitality. Journal of the Association for Consumer Research, 3(4), 503-513.

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human-chatbot interaction. Future Generation Computer Systems, 92, 539-548.


Chin, M. G., Sims, V. K., Clark, B., & Lopez, G. R. (2004). Measuring Individual Differences in Anthropomorphism toward Machines and Animals. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 48(11), 1252-1255.

Chin, M. G., Yordon, R. E., Clark, B. R., Ballion, T., Dolezal, M. J., Shumaker, R., & Finkelstein, N. (2005). Developing an Anthropomorphic Tendencies Scale. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 49(13), 1266-1268.

Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences. New York: Psychology Press.

Cowell, A. J., & Stanney, K. M. (2005). Manipulation of non-verbal interaction style and demographic embodiment to increase anthropomorphic computer character credibility. International Journal of Human-Computer Studies, 62(2), 281-306.

Crossler, R., & Posey, C. (2017). Robbing Peter to pay Paul: Surrendering privacy for security’s sake in an identity ecosystem. Journal of the Association for Information Systems, 18(7), 487-515.

Cullen, H., Kanai, R., Bahrami, B., & Rees, G. (2014). Individual differences in anthropomorphic attributions and human brain structure. Social Cognitive and Affective Neuroscience, 9(9), 1276-1280.

Culley, K. E., & Madhavan, P. (2013). A note of caution regarding anthropomorphism in HCI agents, Computers in Human Behavior, 29(3), 577-579.

Cyr, D., Head, M., Larios, H., & Pan, B. (2009). Exploring human images in website design: a multi-method approach. MIS Quarterly, 33(3), 539-566.

Davis, A., Murphy, J., Owens, D., Khazanchi, D., & Zigurs, I. (2009). Avatars, People, and Virtual Worlds: Foundations for Research in Metaverses. Journal of the Association for Information Systems, 10(2), 90-117.

de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A. B., McKnight, P. E., Krueger, F., & Parasuraman, R. (2016). Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22(3), 331-349.

de Waal, F. B. M. (2003). Silent invasion: Imanishi’s primatology and cultural bias in science. Animal Cognition, 6(4), 293-299.

Derks, D., Bos, A. E. R., & von Grumbkow, J. (2008). Emoticons in Computer-Mediated Communication: Social Motives and Social Context. CyberPsychology & Behavior, 11(1), 99-101.

Derrick, D. C., Jenkins, J. L., & Nunamaker, J. (2011). Design Principles for Special Purpose, Embodied, Conversational Intelligence with Environmental Sensors (SPECIES) Agents, AIS Transactions on Human-Computer Interactions, 3(2), 1-21.

Diederich, S., Janssen-Müller, M., Brendel, A. B., & Morana, S. (2019). Emulating Empathetic Behavior in Online Service Encounters with Sentiment-Adaptive Responses: Insights from an Experiment with a Conversational Agent. In Proceedings of the Fortieth International Conference on Information Systems, Munich (pp. 1-17).

Diederich, S., Lichtenberg, S., Brendel, A. B., & Trang, S. (2019). Promoting Sustainable Mobility Beliefs with Persuasive and Anthropomorphic Design: Insights from an Experiment with a Conversational Agent. In Proceedings of the Fortieth International Conference on Information Systems, Munich (pp. 1-17).

Donath, J. S. (1998). Identity and Deception in the Virtual Community. In M. Smith & P. Kollock (Eds.), Communities in Cyberspace (pp. 146-168). London: SAGE Publications Ltd.

Egan, D. E. (1988). Individual differences in human-computer interaction. In Handbook of human-computer interaction (pp. 543-568). Amsterdam: Elsevier.

Epley, N., Akalis, S., Waytz, A., & Cacioppo, J. T. (2008a). Creating Social Connection Through Inferential Reproduction. Psychological Science, 19(2), 114-120.

Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886.

Epley, N., Waytz, A., Akalis, S., & Cacioppo, J. T. (2008b). When We Need A Human: Motivational Determinants of Anthropomorphism. Social Cognition, 26(2), 143-155.

Eyssel, F., Kuchenbrandt, D., & Bobinger, S. (2011). Effects of anticipated human-robot interaction and predictability of robot behavior on perceptions of anthropomorphism. In Proceedings of the 6th international conference on Human-robot interaction (pp. 61-68).

Eyssel, F., & Reich, N. (2013). Loneliness makes the heart grow fonder (of robots): On the effects of loneliness on psychological anthropomorphism. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 121-122).

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41, 1149-1160.

Feine, J., Gnewuch, U., Morana, S., & Maedche, A. (2019). A Taxonomy of Social Cues for Conversational Agents. International Journal of Human-Computer Studies, 132, 138-161.

Fletcher, P.C., Happe, F., Frith, U., Baker, S.C., Dolan, R.J., Frackowiak, R.S.J., & Frith, C.D. (1995). Other minds in the brain: a functional imaging study of “theory of mind” in story comprehension. Cognition, 57(2), 109-128.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254-280.

Gefen, D., & Straub, D. (2005). A practical guide to factorial validity using PLS-Graph: Tutorial and annotated example. Communications of the Association for Information Systems, 16(1), 91-109.

Giger, J. C., Piçarra, N., Alves Oliveira, P., Oliveira, R., & Arriaga, P. (2019). Humanization of robots: Is it really such a good idea? Human Behavior and Emerging Technologies, 1(2), 111-123.

Gnewuch, U., Morana, S., Adam, M. T. P., & Maedche, A. (2018). Faster Is Not Always Better: Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction. In Proceedings of the 26th European Conference on Information Systems (ECIS), Portsmouth, United Kingdom (pp. 23-28).

Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards Designing Cooperative and Social Conversational Agents for Customer Service. In Proceedings of the 38th International Conference on Information Systems (ICIS), Seoul (pp. 1-13).

Goasduff, L. (2019). Chatbots Will Appeal to Modern Workers. Retrieved March 10, 2020, from https://www.gartner.com/smarterwithgartner/chatbots-will-appeal-to-modern-workers/

Gong, L. (2008). How social is social responses to computers? The function of the degree of anthropomorphism in computer representations. Computers in Human Behavior, 24(4), 1494-1509.

Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of Mind Perception. Science, 315(5812), 619.

Guthrie, S. E. (2013). Anthropomorphism. In A. L. C. Runehov & L. Oviedo (Eds.), Encyclopedia of Sciences and Religions. Dordrecht: Springer.

Guthrie, S. E. (1993). Faces in the clouds: A new theory of religion. New York: Oxford University Press.

Hair, J. F. Jr., Anderson, R. E., Tatham, R. L., & Black, W. C. (1995). Multivariate Data Analysis (3rd ed.). New York, NY: Macmillan.

Hall, J. A., & Knapp, M. L. (2013). Nonverbal communication. Berlin: Walter de Gruyter.

Hauser, D. J., Ellsworth, P., & Gonzalez, R. (2018). Are manipulation checks necessary? Frontiers in Psychology, 9, 998.

Hess, T., Fuller, M., & Campbell, D. (2009). Designing Interfaces with Social Presence: Using Vividness and Extraversion to Create Social Recommendation Agents. Journal of the Association for Information Systems, 10(12), 889-919.

Holzwarth, M., Janiszewski, C., & Neumann, M. M. (2006). The Influence of Avatars on Online Consumer Shopping Behavior. Journal of Marketing, 70(4), 19-36.

Howell, J. A., Roberts, L. D., & Mancini, V. O. (2018). Learning analytics messages: Impact of grade, sender, comparative information and message style on student affect and academic resilience. Computers in Human Behavior, 89, 8-15.

Hsu, J. (2012). Robotics' Uncanny Valley gets new translation. Livescience. Retrieved March 11, 2020, from https://www.livescience.com/20909-robotics-uncanny-valley-translation.html

Isbister, K., & Nass, C. (2000). Consistency of personality in interactive characters: Verbal cues, non-verbal cues, and user characteristics. International Journal of Human-Computer Studies, 53(2), 251-267.

Kiesler, S., Powers, A., Fussell, S. R., & Torrey, C. (2008). Anthropomorphic Interactions with a Robot and Robot-like Agent. Social Cognition, 26(2), 169-181.

Kim, S., & McGill, A. L. (2011). Gaming with Mr. Slot or Gaming the Slot Machine? Power, Anthropomorphism, and Risk Perception. Journal of Consumer Research, 38(1), 94-107.

Knapp, M. L., Hall, J. A., & Horgan, T. G. (2014). Nonverbal communication in human interaction. Boston: Wadsworth.

Knijnenburg, B. P., & Willemsen, M. C. (2016). Inferring Capabilities of Intelligent Agents from Their External Traits. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(4), Article 28.

Krämer, N. C., Lucas, G., Schmitt, L., & Gratch, J. (2018). Social snacking with a virtual agent - On the interrelation of need to belong and effects of social responsiveness when interacting with artificial entities. International Journal of Human-Computer Studies, 109, 112-121.

Lampe, C. A. C., Ellison, N., & Steinfield, C. (2007). A familiar face(book): Profile elements as signals in an online social network. In Proceedings of CHI 2007, San Jose, CA, USA (pp. 435-444).

Lankton, N. K., & McKnight, D. H. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 880-918.

Laurel, B. (1997). Interface agents: Metaphors with character. In B. Friedman (Ed.), Human Values and the Design of Computer Technology (pp. 207-219). Cambridge: CSLI Publications.

Lee, Y., & Chen, A. N. K. (2012). Usability Design and Psychological Ownership of a Virtual World. Journal of Management Information Systems, 28(3), 269-308.

MacDorman, K. F., Green, R. D., Ho, C. C., & Koch, C. T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in Human Behavior, 25(3), 695-710.

Mar, R. A. (2011). The Neural Bases of Social Cognition and Story Comprehension. Annual Review of Psychology, 62(1), 103-134.

Marketsandmarkets (2019). Conversational AI Market by Component (Platform and Services), Type (IVA and Chatbots), Technology, Application (Customer Support, Personal Assistant, and Customer Engagement and Retention), Deployment Mode, Vertical, and Region - Global Forecast to 2024. Retrieved March 10, 2020, from https://www.marketsandmarkets.com/Market-Reports/conversational-ai-market-49043506.html

Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, 22-32.

McCarthy, P. X. (2019). Conversational AI — A New Wave Of Voice-Enabled Computing. Retrieved March 10, 2020, from https://www.forbes.com/sites/paulxmccarthy/2019/12/17/conversational-ai---a-new-wave-of-voice-enabled-computing/#f90e51b1a109

McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and Validating Trust Measures for e-Commerce: An Integrative Typology. Information Systems Research, 13(3), 334-359.

Mehrabian, A. (1972). Nonverbal communication. New York: Transaction Publishers.

Mori, M. (1970). The uncanny valley. Energy, 7(4), 33-35.

Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are Social Actors. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 72-78).

Nicolaou, A. I., & McKnight, D. H. (2006). Perceived Information Quality in Data Exchanges: Effects on Risk, Trust, and Intention to Use. Information Systems Research, 17(4), 332-351.

Niu, D., Terken, J., & Eggen, B. (2018). Anthropomorphizing information to enhance trust in autonomous vehicles. Human Factors and Ergonomics in Manufacturing & Service Industries, 28(6), 352-359.

Norman, D. A. (1994). How might people interact with agents. Communications of the ACM, 37(7), 68-71.

Nowak, K. L., & Rauh, C. (2005). The Influence of the Avatar on Online Perceptions of Anthropomorphism, Androgyny, Credibility, Homophily, and Attraction. Journal of Computer-Mediated Communication, 11(1), 153-178.

Nunamaker, J. F., Jr., Derrick, D. C., Elkins, A. C., Burgoon, J. K., & Patton, M. W. (2011). Embodied Conversational Agent-Based Kiosk for Automated Interviewing. Journal of Management Information Systems, 28(1), 17-48.

Pak, R., McLaughlin, A. C., & Bass, B. (2014). A multi-level analysis of the effects of age and gender stereotypes on trust in anthropomorphic technology by younger and older adults. Ergonomics, 57(9), 1277-1289.

Pfeiffer, J. (2020). Interview with Omer Biran and Michael Halfmann on "Conversational Agents at SAP". Business & Information Systems Engineering, 62(3), 249-252.

Pfeuffer, N., Benlian, A., Gimpel, H., & Hinz, O. (2019). Anthropomorphic Information Systems. Business & Information Systems Engineering, 61(4), 523-533.

Pickard, M., Schuetzler, R., Valacich, J., & Wood, D. A. (2017). Next-Generation Accounting Interviewing: A Comparison of Human and Embodied Conversational Agents (ECAs) as Interviewers. SSRN Electronic Journal.

von der Pütten, A. M., Krämer, N. C., Gratch, J., & Kang, S.-H. (2010). "It doesn't matter what you are!" Explaining social effects of agents and avatars. Computers in Human Behavior, 26(6), 1641-1650.

Qiu, L., & Benbasat, I. (2009). Evaluating Anthropomorphic Product Recommendation Agents: A Social Relationship Perspective to Designing Information Systems. Journal of Management Information Systems, 25(4), 145-182.

Qiu, L., & Benbasat, I. (2010). A study of demographic embodiments of product recommendation agents in electronic commerce. International Journal of Human-Computer Studies, 68(10), 669-688.

Reeves, B., & Nass, C. (1997). The media equation: how people treat computers, television, and new media like real people and places. New York: Cambridge University Press.

Riedl, R., Mohr, P. N. C., Kenning, P. H., Davis, F. D., & Heekeren, H. R. (2014). Trusting Humans and Avatars: A Brain Imaging Study Based on Evolution Theory. Journal of Management Information Systems, 30(4), 83-114.

Riedl, R., Mohr, P., Kenning, P., Davis, F., & Heekeren, H. (2011). Trusting humans and avatars: Behavioral and neural evidence. In Proceedings of the Thirty Second International Conference on Information Systems (ICIS), Shanghai, China (pp. 1-23).

Rilling, J. K., Dagenais, J. E., Goldsmith, D. R., Glenn, A. L., & Pagnoni, G. (2008). Social cognitive neural networks during in-group and out-group interactions. Neuroimage, 41(4), 1447-1461.

Robert, L. P. (2017). The Growing Problem of Humanizing Robots. International Robotics & Automation Journal, 3(1), 1-2.

Robert, L. P., Dennis, A. R., & Hung, Y.-T. C. (2009). Individual swift trust and knowledge-based trust in face-to-face and virtual team members. Journal of Management Information Systems, 26(2), 241-279.

Sah, Y. J., & Peng, W. (2015). Effects of visual and linguistic anthropomorphic cues on social perception, self-awareness, and information disclosure in a health website. Computers in Human Behavior, 45, 392-401.

Salovey, P., & Mayer, J. D. (1990). Emotional Intelligence. Imagination, Cognition and Personality, 9(3), 185-211.

Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid ro-bot actions. Social Cognitive and Affective Neuroscience, 7(4), 413-422.

Schroeder, J., & Epley, N. (2016). Mistaking minds and machines: How speech affects dehumanization and anthropomorphism. Journal of Experimental Psychology: General, 145(11), 1427-1437.

Schuetzler, R. M., Giboney, J. S., Grimes, G. M., & Nunamaker, J. F. (2018). The Influence of Conversational Agent Embodiment and Conversational Relevance on Socially Desirable Responding. Decision Support Systems, 114, 94-102.

Schuetzler, R. M., Grimes, M., Giboney, J. S., & Buckman, J. (2014). Facilitating natural conversational agent interactions: Lessons from a deception experiment. In Proceedings of the Thirty Fifth International Conference on Information Systems, Auckland (pp. 1-16).

Seyama, J., & Nagayama, R. S. (2007). The Uncanny Valley: Effect of Realism on the Impression of Artificial Human Faces. Presence: Teleoperators and Virtual Environments, 16(4), 337-351.

Short, J., Williams, E., & Christie, B. (1976). The Social Psychology of Telecommunications. London: John Wiley & Sons.

Skjuve, M., Haugstveit, I. M., Følstad, A., & Brandtzaeg, P. B. (2019). Help! Is My Chatbot Falling Into The Uncanny Valley? An Empirical Study Of User Experience In Human-Chatbot Interaction. Human Technology, 15(1).

Sohn, S. (2019). Can Conversational User Interfaces Be Harmful? The Undesirable Effects on Privacy Concern. In Proceedings of the Fortieth International Conference on Information Systems, Munich (pp. 1-9).

Sproull, L., Subramani, M., Kiesler, S., Walker, J., & Waters, K. (1996). When the Interface Is a Face. Human-Computer Interaction, 11(2), 97-124.

Suh, K.-S., Kim, H., & Suh, E. K. (2011). What If Your Avatar Looks Like You? Dual-Congruity Perspectives for Avatar Use. MIS Quarterly, 35(3), 711-729.

Tam, K.-P. (2014). Anthropomorphism of Nature and Efficacy in Coping with the Environmental Crisis. Social Cognition, 32(3), 276-296.

Tam, K.-P., Lee, S.-L., & Chao, M. M. (2013). Saving Mr. Nature: Anthropomorphism enhances connectedness to and protectiveness toward nature. Journal of Experimental Social Psychology, 49(3), 514-521.

Tanriverdi, H. (2006). Performance Effects of Information Technology Synergies in Multibusiness Firms. MIS Quarterly, 30(1), 57-77.

Touré-Tillery, M., & McGill, A. L. (2015). Who or What to Believe: Trust and the Differential Persuasiveness of Human and Anthropomorphized Messengers. Journal of Marketing, 79(4), 94-110.

Tremoulet, P. D., & Feldman, J. (2000). Perception of Animacy from the Motion of a Single Object. Perception, 29(8), 943-951.

Trovato, G., Ramos, J. G., Azevedo, H., Moroni, A., Magossi, S., Ishii, H., Simmons, R., & Takanishi, A. (2015). Designing a receptionist robot: Effect of voice and appearance on anthropomorphism. In 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 235-240).

Vance, A., Lowry, P. B., & Eggett, D. (2015). Increasing Accountability Through User-Interface Design Artifacts: A New Approach to Addressing the Problem of Access-Policy Violations. MIS Quarterly, 39(2), 345-366.

Verhagen, T., van Nes, J., Feldberg, F., & van Dolen, W. (2014). Virtual Customer Service Agents: Using Social Presence and Personalization to Shape Online Service Encounters. Journal of Computer-Mediated Communication, 19(3), 529-545.

Wagner, K., & Schramm-Klein, H. (2019). Alexa, Are You Human? Investigating Anthropomorphism of Digital Voice Assistants - A Qualitative Approach. In Proceedings of the Fortieth International Conference on Information Systems, Munich (pp. 1-17).

Walther, J. B., & D'Addario, K. P. (2001). The Impacts of Emoticons on Message Interpretation in Computer-Mediated Communication. Social Science Computer Review, 19(3), 324-347.

Walther, J. B., & Tidwell, L. C. (1995). Nonverbal cues in computer-mediated communication, and the effect of chronemics on relational communication. Journal of Organizational Computing, 5(4), 355-378.

Wang, W. (2017). Smartphones as Social Actors? Social dispositional factors in assessing anthropomorphism. Computers in Human Behavior, 68, 334-344.

Wang, W., & Benbasat, I. (2008). Attributions of Trust in Decision Support Technologies: A Study of Recommendation Agents for E-Commerce. Journal of Management Information Systems, 24(4), 249-273.

Wang, W., Zhao, Y., Qiu, L., & Zhu, Y. (2014). Effects of Emoticons on the Acceptance of Negative Feedback in Computer-Mediated Communication. Journal of the Association for Information Systems, 15(8), 454-483.

Waytz, A., Cacioppo, J., & Epley, N. (2010a). Who Sees Human? The Stability and Importance of Individual Differences in Anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232.

Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010b). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383-388.

Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117.

Waytz, A., Morewedge, C. K., Epley, N., Monteleone, G., Gao, J.-H., & Cacioppo, J. T. (2010c). Making sense by making sentient: Effectance motivation increases anthropomorphism. Journal of Personality and Social Psychology, 99(3), 410-435.

Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36-45.

Wiese, E., & Weis, P. P. (2020). It matters to me if you are human - Examining categorical perception in human and nonhuman agents. International Journal of Human-Computer Studies, 133, 1-12.

White, R. W. (1959). Motivation reconsidered: The concept of competence. Psychological review, 66(5), 297-333.

Willard, A. K., & Norenzayan, A. (2013). Cognitive biases explain religious belief, paranormal belief, and belief in life’s purpose. Cognition, 129(2), 379-391.

Wölfl, S., Feste, J. M., & Peters, L. D. K. (2019). Is Somebody There? Anthropomorphic Website Design and Intention to Purchase from Online Stores. In Proceedings of the Twenty-fifth Americas Conference on Information Systems, Cancun (pp. 1-10).

Yuan, L., & Dennis, A. (2017). Interacting Like Humans? Understanding the Effect of Anthropomorphism on Consumer's Willingness to Pay in Online Auctions. In Proceedings of the Hawaii International Conference on System Sciences, Waikoloa, HI, USA (pp. 537-546).

Yuan, L., Dennis, A., & Potter, R. (2016). Interacting Like Humans? Understanding the Neurophysiological Processes of Anthropomorphism and Consumer's Willingness to Pay in Online Auctions. In Proceedings of the Thirty Seventh International Conference on Information Systems, Dublin, Ireland (pp. 1-19).

Złotowski, J., Proudfoot, D., Yogeeswaran, K., & Bartneck, C. (2015). Anthropomorphism: Opportunities and Challenges in Human-Robot Interaction. International Journal of Social Robotics, 1-14.

About the Authors

Anna-Maria Seeger is a PhD candidate in Information Systems at the University of Mannheim. She received a master's degree in business administration from the University of Mannheim and acquired practical experience as a business analytics consultant at SAP. Her research interests include human-computer interaction and e-commerce, with a special focus on conversational technologies. Her work has been presented at international conferences, including the International Conference on Information Systems.

Jella Pfeiffer is Professor for Digitalization, E-Business and Operations Management at the Department of Economics and Business Studies, Gießen University. Before that, she worked as a postdoctoral researcher at the Karlsruhe Institute of Technology and was manager of the Karlsruhe Decision & Design Lab (KD²Lab). Her research on decision support systems in e-commerce and VR-commerce and her work on experimental research have been published in Information Systems Research, Journal of Management Information Systems, Journal of the Association for Information Systems, Journal of Behavioral Decision Making, European Journal of Operational Research, and others.

Armin Heinzl is a Professor and Chair of Business Administration and Information Systems at the University of Mannheim, Germany. His research and teaching interests include human-computer interaction, digital innovation, digital platform ecosystems, and healthcare IT. He has held visiting positions at Harvard, Berkeley, Irvine, ESSEC, LSE, and USI Lugano. His papers have appeared in Information Systems Research, MIS Quarterly, Journal of the Association for Information Systems, Information Systems Journal, IEEE Transactions on Software Engineering, IEEE Transactions on Engineering Management, Business and Information Systems Engineering (BISE), Information Systems Frontiers, and others.

Appendix A: Literature Review

To identify relevant studies, we first used a keyword search. The keyword search string was as follows: ("design") AND ("user interface" OR "conversational agent" OR "avatar" OR "interface" OR "embodied conversational agent" OR "embodied agent" OR "recommendation agent") AND ("anthropomorphi*" OR "human-like" OR "humanlikeness" OR "humanlike" OR "human-likeness" OR "social cues"). Additional relevant papers were identified based on forward and backward search. To ensure that all relevant IS studies were identified, we manually searched the IS senior scholars' basket of journals. Additionally, we searched the Web of Science and ABI/Inform databases to identify relevant peer-reviewed journal papers. We did not set any date restrictions. To select appropriate research, we defined the following selection criteria: (1) empirical study and (2) anthropomorphic manipulation of the design of a computer interface. We excluded studies on humanoid robots in order to ensure relevance for our study. We identified 35 studies in journals and proceedings from the fields of IS, HCI, and Psychology.
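A minimal Python sketch (ours, not part of the original review protocol; all names are illustrative and the substring check stands in for database-specific wildcard syntax) of how this boolean string can be applied when pre-screening exported titles and abstracts:

    # Term groups from the search string above; groups are AND-connected,
    # terms within a group are OR-connected.
    DESIGN_TERMS = ["design"]
    INTERFACE_TERMS = ["user interface", "conversational agent", "avatar", "interface",
                       "embodied conversational agent", "embodied agent",
                       "recommendation agent"]
    ANTHRO_TERMS = ["anthropomorphi",  # substring match emulates anthropomorphi*
                    "human-like", "humanlikeness", "humanlike",
                    "human-likeness", "social cues"]

    def matches_search_string(text: str) -> bool:
        """True if the text satisfies all three AND-connected term groups."""
        t = text.lower()
        return (any(term in t for term in DESIGN_TERMS)
                and any(term in t for term in INTERFACE_TERMS)
                and any(term in t for term in ANTHRO_TERMS))

    # Example: a record that hits all three groups is retained for screening.
    print(matches_search_string("Designing anthropomorphic user interfaces"))  # True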

Appendix B: Pre-Test Items for Sociality and Effectance Motivation

Sociality Motivation (Adapted from Cheek & Buss, 1981), Cronbach's α = .96 (seven-point Likert scale from (1) strongly disagree to (7) strongly agree)

1. I would prefer to be with another person when executing this task.

2. I would welcome the opportunity to perform this task in the presence of a human.

3. I would prefer to solve this task in collaboration with another person.

4. I would find it more stimulating to execute this task working with another person.

5. I'd be unhappy if I were prevented from performing this task with others around.

Effectance Motivation (Adapted from Eyssel et al., 2011; Waytz et al., 2010c), Cronbach's α = .86 (seven-point Likert scale from (1) not important at all to (7) very important)

How important do you consider it…

1. …to be able to understand the conversational agent's behavior?

2. …to be able to predict the conversational agent's future behavior?

3. …that the conversational agent will follow your instructions?

4. …to be able to control the conversational agent's behavior?

Table A2. Factor Loadings and Cross-Loadings

            SOCIALITY    EFFECTANCE
    SOC1    0.911        0.003
    SOC2    0.786        0.092
    SOC3    0.943        0.082
    SOC4    0.934        0.048
    SOC5    0.939        0.11
    EFF1    0.177        0.79
    EFF2    0.038        0.826
    EFF3    -0.014       0.819
    EFF4    0.043        0.661

Table A1. Mean, Standard Deviation (SD), Composite Reliability (CR), Cronbach Alpha (CA), Average Variance Extracted (AVE), Correlations

    Latent Construct    Mean    SD      CR     CA     AVE    1       2
    Sociality           4.49    1.44    .96    .95    .82    .90a
    Effectance          5.44    .96     .86    .78    .60    .16     .77

a The square root of the AVE is shown on the diagonal.
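As a reading aid, the reported Cronbach's alpha values follow from the standard formula over raw item responses. A minimal Python sketch (ours; the response matrix below is made up for illustration):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
        k = items.shape[1]                              # number of items in the scale
        item_variances = items.var(axis=0, ddof=1)      # sample variance of each item
        total_variance = items.sum(axis=1).var(ddof=1)  # variance of the sum score
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    # Example: five sociality items for three hypothetical respondents.
    responses = np.array([[6, 5, 6, 6, 5],
                          [2, 3, 2, 2, 3],
                          [5, 5, 4, 5, 4]])
    print(round(cronbach_alpha(responses), 2))  # 0.97 for this toy data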

Appendix C: Task Scenarios

In the study, the following task scenarios were presented in German to all participants.

Human-like task scenario: Health Agent (translated into English)

Please read the following dialogue between a user and a conversational agent carefully. This conversational agent provides users with health information and advice based on its understanding of their psychological and physical health state. Put yourself in the position of the user (dialogue section highlighted in blue) who talks to the health agent (dialogue section highlighted in gray). The health agent's task is to understand the health issue of the user and to provide him with advice to address the issue.

Computer-like task scenario: Smart Home Agent (translated into English)

Please read the following dialogue between a user and a conversational agent carefully. This conversational agent can control connected devices in a user's home and provide information about usage and consumption of these devices. Put yourself in the position of the user (dialogue section highlighted in blue) who talks to the smart home agent (dialogue section highlighted in gray). The smart home agent's task is to control the connected heating system and to provide information about usage and consumption.

Appendix D: Anthropomorphic Design of the Smart Home Agent

We illustrate two designs of the smart home agent with either i) no anthropomorphic cues or ii) all anthropomorphic cues. The remaining designs are combinations of these two versions. All stimulus material was presented in German to the participants. In the German version, text length was equal across groups. The authors translated the stimulus material into English.

Left: Agent with all types of cues highlighted. Right: Agent with no cues.

Appendix E: Measurement Items, Convergent and Discriminant Validity

Perceived Anthropomorphism (seven-point Likert scale from (1) strongly disagree to (7) strongly agree)

I think the agent has...

1. a mind on its own

2. intentions

3. free will

4. consciousness

5. desires

6. beliefs

7. the ability to experience emotions

Dispositional Anthropomorphism (seven-point Likert scale from (1) strongly disagree to (7) strongly agree)

To what extent do you agree with the following statements?

1. Technologies and machines (e.g., computers, cars, television sets) have intentions.

2. A fish has free will.
3. A mountain has free will.
4. A television set experiences emotions.
5. A robot has consciousness.
6. Cows have intentions.
7. A car has free will.
8. The ocean has consciousness.
9. A computer has a mind of its own.
10. A cheetah experiences emotions.
11. The environment experiences emotions.
12. An insect has a mind of its own.
13. A tree has a mind of its own.
14. The wind has intentions.
15. A reptile has consciousness.

We assessed the validity of our measurement instruments by examining the convergent and discriminant validity. The instruments used exhibit Cronbach alphas and composite reliabilities well above the suggested minimum of .70. In addition, the values of the average variance extracted (AVE) are greater than .50 (see Table A3). In line with the criterion of Fornell and Larcker (1981), we conclude that our instruments exhibit an adequate level of convergent validity. For discriminant validity, we first assessed the item loadings and cross-loadings. Measurement items should load highly on the construct they are designed to measure and not highly on other constructs (Gefen & Straub, 2005). All factor loadings and cross-loadings in Table A4 loaded higher on the assigned theoretical construct than on any other factor. A second criterion to establish discriminant validity requires that the square root of the AVE should be larger than any correlation with another construct (Fornell & Larcker, 1981). This criterion is also satisfied (see Table A3). Thus, we conclude that our measures exhibit an adequate level of discriminant validity.
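To make the two criteria concrete, the following Python sketch (ours; the loadings and the correlation are taken from Tables A3 and A4 purely for illustration) computes CR and AVE from standardized loadings and applies the Fornell-Larcker test:

    import numpy as np

    def composite_reliability(loadings) -> float:
        """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
        l = np.asarray(loadings)
        return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

    def average_variance_extracted(loadings) -> float:
        """AVE = mean of the squared standardized loadings."""
        l = np.asarray(loadings)
        return (l ** 2).mean()

    # Perceived anthropomorphism loadings (ANT1-ANT7) from Table A4.
    ant = [0.860, 0.745, 0.883, 0.915, 0.837, 0.854, 0.863]
    ave_ant = average_variance_extracted(ant)
    print(round(composite_reliability(ant), 2), round(ave_ant, 2))  # .95 and .73, as in Table A3

    # Fornell-Larcker: sqrt(AVE) must exceed the construct's largest correlation (.44 in Table A3).
    print(np.sqrt(ave_ant) > 0.44)  # True, so discriminant validity holds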

We also assessed the appropriateness of dispositional anthropomorphism as a superordinate construct consisting of the three subdimensions natural entities, nonhuman animals, and technology. We followed two approaches used by studies with similar constructs (e.g., Crossler & Posey, 2017; Benbasat & Wang, 2005; Tanriverdi, 2006). First, we found that all subdimensions of dispositional anthropomorphism were significantly intercorrelated with p<0.01 (see Table A3). Second, we assessed whether each subdimension was significantly related to dispositional anthropomorphism. We found significant loadings for the three subdimensions of natural entities (.861, p<.001), nonhuman animals (.518, p<.001), and technology (.796, p<.001). In line with Waytz et al. (2010a), we thus conclude that dispositional anthropomorphism is a superordinate construct that reflects humans' stable individual tendencies to anthropomorphize nonhuman entities.

Table A4. Factor Loadings and Cross-Loadings

             NE        NHA       TECH      ANT
    NE1      0.811     0.160     0.518     0.236
    NE2      0.815     0.241     0.341     0.166
    NE3      0.749     0.317     0.286     0.136
    NE4      0.780     0.398     0.351     0.180
    NE5      0.795     0.155     0.451     0.296
    NHA1     0.161     0.697     -0.056    -0.037
    NHA2     0.290     0.846     0.148     0.072
    NHA3     0.270     0.842     0.086     0.032
    NHA4     0.253     0.811     0.104     0.055
    NHA5     0.249     0.713     0.111     -0.054
    TECH1    0.387     0.077     0.806     0.396
    TECH2    0.416     0.011     0.790     0.295
    TECH3    0.343     0.153     0.778     0.393
    TECH4    0.465     0.051     0.837     0.300
    TECH5    0.376     0.170     0.794     0.391
    ANT1     0.239     0.088     0.411     0.860
    ANT2     0.181     0.057     0.299     0.745
    ANT3     0.249     -0.022    0.452     0.883
    ANT4     0.268     0.056     0.418     0.915
    ANT5     0.221     -0.037    0.352     0.837
    ANT6     0.192     0.012     0.333     0.854
    ANT7     0.163     -0.019    0.335     0.863

NE = DISANT: Natural Entities; NHA = DISANT: Nonhuman Animals; TECH = DISANT: Technology; ANT = Perceived Anthropomorphism.

Table A3. Mean, Standard Deviation (SD), Composite Reliability (CR), Cronbach Alpha (CA), Average Variance Extracted (AVE), Correlations

    Latent Construct              Mean    SD      CR     CA     AVE    1        2        3        4
    DISANTa: Natural Entities     2.38    1.27    .89    .85    .63    .79b
    DISANTa: Nonhuman Animals     5.11    1.26    .89    .84    .62    .32**    .79
    DISANTa: Technology           1.67    .85     .90    .86    .64    .50**    .12**    .80
    Perceived Anthropomorphism    2.58    1.47    .95    .94    .73    .26**    .03      .44**    .85

a Dispositional anthropomorphism (DISANT)
b The square root of the AVE is shown on the diagonal.
** indicates significant correlations at the .01 level.

Appendix F

Table A5. Regression Results for Perceived Anthropomorphism with all Controls

                                                     Model 1                    Model 2
                                                     Estimate (SE)    p-value   Estimate (SE)    p-value
    Nonverbal                                        .040 (.124)      .745      -.572 (.201)     .005**
    Verbal                                           .133 (.123)      .281      .106 (.212)      .618
    Human Identity                                   .127 (.125)      .309      .07 (.22)        .751
    Nonverbal x Verbal                               -                -         .806 (.312)      .010*
    Verbal x Human Identity                          -                -         -.298 (.335)     .374
    Nonverbal x Human Identity                       -                -         .899 (.322)      .005**
    Nonverbal x Verbal x Human Identity              -                -         -.936 (.489)     .056
    Task Type (0 = computer-like, 1 = human-like)    .642 (.125)      .000***   .641 (.123)      .000***
    Dispositional Anthropomorphism                   .415 (.071)      .000***   .442 (.069)      .000***
    Gender (0 = female, 1 = male)                    .297 (.124)      .017*     .315 (.124)      .012*
    Age                                              -.004 (.005)     .432      -.004 (.005)     .440
    Previous Experience (0 = no, 1 = yes)            .096 (.139)      .489      .133 (.137)      .332
    Constant                                         2.09 (.248)      .000***   2.075 (.154)     .000***
    R2                                               .1417            .000***   .1725            .000***
    Adjusted R2                                      .1276                      .1519
    Observations                                     496                        496

* p<0.05, ** p<0.01, *** p<0.001. Estimates are linear model estimates with robust standard errors (SE).
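For readers who want to re-estimate the models, Model 2 is a linear regression with all two- and three-way interactions of the cue dummies plus the controls. A Python sketch (ours; the column names and the HC3 covariance choice are assumptions, since the paper only states that robust standard errors were used):

    import statsmodels.formula.api as smf

    # df is assumed to hold one row per participant, with 0/1 dummies for the
    # cue conditions and task type, plus the scale scores and controls.
    model2 = smf.ols(
        "perceived_anthro ~ nonverbal * verbal * human_identity"  # expands to all interactions
        " + task_type + disp_anthro + gender + age + experience",
        data=df,
    ).fit(cov_type="HC3")  # heteroskedasticity-robust standard errors
    print(model2.summary())

Dropping the interaction terms from the formula (i.e., nonverbal + verbal + human_identity) yields Model 1.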

Table A6. Variance Inflation Factor

Model 1
    Variable                                         VIF     Tolerance (1/VIF)
    Nonverbal                                        1.00    1.00
    Verbal                                           1.01    0.99
    Human Identity                                   1.01    0.99
    Task Type (0 = computer-like, 1 = human-like)    1.01    0.99
    Dispositional Anthropomorphism                   1.01    0.99
    Gender (0 = female, 1 = male)                    1.03    0.97
    Mean VIF                                         1.01

Model 2
    Variable                                         VIF     Tolerance (1/VIF)
    Nonverbal                                        3.94    0.25
    Verbal                                           3.89    0.26
    Human Identity                                   3.79    0.26
    Nonverbal x Verbal                               5.95    0.17
    Verbal x Human Identity                          5.78    0.17
    Nonverbal x Human Identity                       5.85    0.17
    Nonverbal x Verbal x Human Identity              6.94    0.14
    Task Type (0 = computer-like, 1 = human-like)    1.01    0.99
    Dispositional Anthropomorphism                   1.03    0.97
    Gender (0 = female, 1 = male)                    1.04    0.96
    Mean VIF                                         3.92
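The VIF and tolerance values in Table A6 can be reproduced with standard tooling. A sketch (ours; same hypothetical column names and df as in the regression sketch above):

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    predictors = ["nonverbal", "verbal", "human_identity",
                  "task_type", "disp_anthro", "gender"]   # Model 1 predictors
    X = sm.add_constant(df[predictors])                   # constant needed for correct VIFs
    vif = pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=predictors,                                 # skip index 0 (the constant)
    )
    print(pd.DataFrame({"VIF": vif.round(2), "Tolerance": (1 / vif).round(2)}))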

Appendix G

Figure A1. Perceived Anthropomorphism by Task Type

[Boxplot: y-axis shows perceived anthropomorphism on the seven-point scale (1-7); x-axis contrasts the computer-like and human-like task conditions.]