
Testing the Interactivity Model: Communication Processes, Partner Assessments, and the Quality of Collaborative Work

JUDEE K. BURGOON, JOSEPH A. BONITO, BJORN BENGTSSON, ARTEMIO RAMIREZ, JR., NORAH E. DUNBAR, AND NATHAN MICZO

JUDEE K. BURGOON is Professor of Communication and Director of the Center for the Management of Information at the University of Arizona. She is the author or coauthor of seven books and nearly two hundred articles and chapters on topics related to interpersonal and nonverbal communication, deception and credibility, mass media, and new communication technologies. Her current research focuses on interactivity, adaptation, and attunement in communication.

JOSEPH A. BONITO is Assistant Professor of Communication at the University of Arizona. His articles have appeared in Communication Yearbook and Computers in Human Behavior. His research interests include computer-mediated communication, small-group processes, message production processes, and research methods.

BJORN BENGTSSON is a graduate student of computing science and cognitive science in the Department of Computer Sciences, Umeå University, Sweden. His research deals with virtual communication—a cross-disciplinary area involving computer-mediated communication, human-computer interaction, and virtual reality.

ARTEMIO RAMIREZ, JR., is a doctoral candidate in the Department of Communication at the University of Arizona. His interests include interpersonal communication and new technologies.

NORAH E. DUNBAR is a doctoral candidate in the Department of Communication at the University of Arizona. Her research interests include interpersonal communication, relational conflict, and nonverbal communication in mediated contexts. She is currently working on her dissertation on power in marital conflicts.

NATHAN MICZO is a doctoral candidate in the Department of Communication at the University of Arizona. His research interests include interpersonal communication.

ABSTRACT: A major consideration in designing and adopting new communication technologies is their impact on communication processes and outcomes. One way to understand this impact is according to the principle of interpersonal interactivity. Findings from two investigations are reported here that address how properties of task-related communication conducted with differing interfaces relate to perceptions of interaction partners and the outcomes of their collaborative work. Study 1 manipulated the interface affordances of mediation, contingency, and modality richness. Study 2 examined the affordance of mediation. Results show that interfaces that promote higher mutuality and involvement lead to more favorable perceptions of partners' credibility and attraction, and those perceptions are systematically related to higher-quality decisions and more influence. Discussion focuses on the relation between user perceptions, design features, and task outcomes in human-computer interaction and computer-mediated communication.

Journal of Management Information Systems / Winter 1999-2000, Vol. 16, No. 3, pp. 33-56.
© 2000 M.E. Sharpe, Inc. 0742-1222 / 2000 $9.50 + 0.00.

KEY WORDS AND PHRASES: collaborative work, communication interfaces, computer-mediated communication, decision making, human-computer interaction, interactivity.

RAPIDLY DEVELOPING TECHNOLOGY NOW AFFORDS INDIVIDUALS, ORGANIZATIONS, and institutions of learning a cornucopia of options for communication and information exchange. Computer-mediated communication (CMC) is becoming ubiquitous in such text-based forms as e-mail and computerized group support systems as well as in multimedia forms such as Microsoft's NetMeeting, audio- and videoconferencing. Augmenting these is human-computer interaction (HCI), in which computer interfaces, which come in a variety of guises (e.g., the Microsoft "office assistant"), conduct part or all of a transaction with individuals. Through technological advances in voice recognition software, voice synthesis, and computer animation, interactions with these "intelligent" computer agents are becoming increasingly anthropomorphic so that they, too, resemble interactions with humans.

The accelerating adoption of these technologies has important ramifications for communication in work and educational contexts. Two central issues are (1) how various CMC and HCI interfaces affect the character of the communication process itself among users and (2) how resultant communication patterns affect such desirable outcomes as positive interpersonal relationships among collaborators, accurate information exchange, and high-quality collaborative work. These issues are addressed here via the principle of interpersonal interactivity (hereafter referred to simply as interactivity). This principle, elaborated momentarily, posits the (deceptively) simple proposition that:

Human communication processes and outcomes vary systematically with the degree of interactivity that is afforded and/or experienced.

To investigate the nature and consequences of interactivity, we have begun to decompose interactivity into its relevant properties and to test experimentally the individual and collective impacts of those properties. Reported here are two initial studies, one manipulating the properties of contingency, mediation, and modality richness in HCI and one manipulating mediation in CMC. The interface manipulations are reported elsewhere [3, 4, 8]. Our primary objective in this companion report was to ascertain how experiential qualities of interactive communication during collaborative tasks relate to users' judgments about partners and to task outcomes. We demonstrate that the degree to which users perceive high interactivity—reflected in indicators of interaction involvement and a multidimensional construct called mutuality—is strongly related to how positively users judge their partner's credibility and attractiveness. In turn, we show that these assessments strongly relate to partner influence, decision quality, and accurate information exchange. The analyses offer insights into why various interfaces sometimes facilitate and sometimes impair group communication and accomplishments. Further, consistent with the principle that media and communication formats should be matched to such situational demands as the tasks to be performed, the group's well-being, and support for individual group members [16, 17], results offer guidance as to what facets of communication processes contribute to achieving desired outcomes.

Structural Affordances and Experiential Properties of Interactivity

SOME CHAMPIONS OF NEW TECHNOLOGIES ARGUE THAT HCI AND CMC will eventually supplant face-to-face (FtF) communication in the workplace. Others are convinced that some level of FtF interaction must be retained, especially if tasks are complex, involve statistical information or sophisticated inferences and judgments, or depend on trusting and solid interpersonal relationships (see, e.g., [19, 30]). We believe that at issue is not FtF interaction per se but the properties of interactivity that are associated with FtF interaction. Our theoretical goal therefore is to decompose interactivity into its constituent properties.

We begin with the assumption that interactivity is value-neutral: that is, the presence of these qualities can be either beneficial or detrimental. For example, limited interactivity might undercut trust, enable undue influence by mediated sources, or produce more understanding and rapport among participants [34, 35]. High interactivity might be distracting, promote ready acceptance of dubious information, or facilitate idea generation. Deciding which communication formats—HCI, CMC, FtF—are most or least advantageous under what circumstances and why depends, then, on understanding the role of interactivity.

Although the meaning of interactivity may seem self-evident, the term has been applied to widely divergent phenomena. One way to conceptualize interactivity is according to the structural properties, or affordances, that are present in a given communication format. Some of these distinctive features have emerged from analyses of various electronic media (e.g., [17, 25, 33, 36]). We began our analysis instead by identifying intrinsic properties of FtF communication that might be retained, supplemented, amplified, or suppressed in HCI and CMC formats (see [4, 13] for further elaboration). We theorize that these affordances individually and/or collectively account for observed differences in cognitions, communication, and outcomes across mediated and nonmediated, human-human and human-computer interaction.

The various analyses of affordances can be integrated into an extensive set of properties. Chief among them are the following:


1. Participation (the extent to which senders and receivers are actively engaged in the interaction as opposed to giving monologues, passively observing, or lurking),
2. Mediation (whether the communication format is mediated or not),
3. Contingency (the extent to which one person's queries, responses, and comments are dependent on the prior ones of the cointeractant),
4. Media and information richness (whether the format utilizes one or more modalities such as text, audio, visual, or touch, and the extent to which it supports symbol variety to present "rich" or "poor" social information),
5. Geographic propinquity (whether users are physically colocated or distributed),
6. Synchronicity (whether interaction is same-time, which permits immediate bidirectional feedback, or asynchronous, which permits rehearsability and editability),
7. Identification (the extent to which participants are fully identified, partially identified, or anonymous),
8. Parallelism (whether the format permits concurrent communication and multiple addressees, as in the case of electronic brainstorming, or only permits serial messages), and
9. Anthropomorphism (the degree to which the interface simulates or incorporates humanlike characteristics).

A second way to conceptualize interactivity is according to the qualitative experiences that users equate with interactivity. These properties are what make a mode of communicating or information exchange look and feel interactive. These qualities can be conceptualized as the mediating communication processes through which structural properties exert their impact. Our theorizing about what properties create the experience of interactivity is in its early stages, but there are three that we believe are salient:

1. Interaction involvement (the degree to which users perceive they are cognitively, affectively, and behaviorally engaged in the interaction),
2. Mutuality (the extent to which users perceive and create a sense of relational connection, interdependence, coordination, and understanding with one another), and
3. Individuation (the extent to which users perceive they have a rich, detailed impression of the other's identity and personalizing information).

This report centers on the first two properties of involvement and mutuality. In the context of communication, interaction involvement concerns the extent to which users experience high cognitive, sensory, visceral, and motor engagement in an interaction—that is, the interaction creates a sense of presence, of "here and now." Nonverbally, involvement may be expressed through such behaviors as close proximity, frequent eye contact, facial and vocal expressivity, coordinated conversation, attentiveness, fluent speech, and moderate relaxation. Verbally, it may be expressed through linguistic choices and personal disclosures [7, 15].


Mutuality also has multiple facets. Among the essential preconditions for communication to occur are a mutual other-orientation and belief in shared background (see [26]). Psychologically, mutuality may take such forms as feeling connected and similar to others. Cognitively and affectively, it may take the form of group members perceiving they share common meanings, are responsive to one another, and are understood. Behaviorally, it may take the form of users' actions being interdependent and mutually influential and their verbal and nonverbal communication within and across episodes being coordinated and harmonized with others.

An important objective in many interpersonal and group interactions is that collaborators regard one another as competent, reliable, trustworthy, attractive, and useful contributors [2]. To the extent that greater involvement and mutuality lead to favorable judgments of credibility and attractiveness, they may also promote such outcomes as greater productivity, better decision making, and more accurate understanding of messages that are exchanged. Hence, involvement and mutuality may have an indirect impact on other group outcomes by virtue of their direct impact on perceptions of group members. Figure 1 illustrates this hypothesized set of relationships.

In our initial HCI experimental test [4], we hypothesized that FtF interaction and highly anthropomorphic computer interfaces would be more influential. Instead, we found the opposite: Computer agents using only text or text and voice (rather than still images or animated faces) achieved higher decision quality and were more influential than human partners. In other words, creating highly anthropomorphic features yielded less influence and lower quality decisions. On social judgments, however, FtF interaction created more favorable perceptions of the partner's credibility than did the computer conditions, on average. These counterintuitive results led us to undertake the additional analyses reported here. Specifically, we tested the model in Figure 1 by correlating (1) users' self-reported experiences of involvement and mutuality, (2) judgments of partner and/or format credibility, attractiveness, and utility, and (3) the outcomes of influence, decision quality, and accuracy.

Relationship of Involvement and Mutuality to Partner/Format Assessments

IF PARTICIPANT INVOLVEMENT AND MUTUALITY ARE NECESSARY (but not necessarily sufficient) preconditions for effective communication and group outcomes during collaborative work, then interfaces that enable greater involvement and mutuality between collaborators should promote attraction and credibility. Conversely, interfaces that are distracting, that create a sense of distance and detachment between participants, or that make users feel "unplugged" from the interaction may lead them to judge the partner or interface more harshly. Research on FtF interaction, for example, has shown that senders communicating under a high-participation format were perceived as more involved, dominant, pleasant, and believable [10], and participative receivers achieved greater understanding than did eavesdropping observers [24]. These findings imply that involvement and mutuality relate systematically to social judgments and outcomes.

Figure 1. Hypothesized relationships among experiential properties of interactivity, social judgments of partners/format, and interaction outcomes (structural affordances → processes: involvement, mutuality → social judgments: task attraction, credibility, utility → outcomes: decision quality, accuracy).

It may seem peculiar to think of judging computers or computer agents in the same manner as humans. But computers, by virtue of their participation, functionality, and appearance, may be responded to in fundamentally social ways by users and are subject to the same kinds of communication evaluations that are commonly reserved for humans (e.g., [23, 28, 29]). They may be thought of, for example, as more or less competent, engaging, or influential. Mass media research has demonstrated that people ascribe high credibility to the media (e.g., [20]). It may be that mediated communication forms in general are accorded a positivity bias such that users assume whatever is delivered through a mechanical or electronic medium has already been authenticated and so is, de facto, regarded as credible. There is also evidence of credibility rising as a medium becomes "richer" in the sensory channels that are engaged and the amount of social information it supplies [31].

Relationship of Partner/Format Assessments to Task Outcomes

A FAMILIAR THREAD IN COMMUNICATION RESEARCH IS THAT INTERACTION OUTCOMES are often a function of the credibility and attractiveness of the source [9]. Credible and attractive sources are usually more persuasive. By the same token, users routinely base their satisfaction with new technologies on how useful and user-friendly they are. When the interface is a surrogate human, as is sometimes the case in HCI, then the utility judgment becomes basically another social judgment, much as one would judge how helpful a human partner is in achieving one's work objectives. The prediction that greater credibility and attraction foster influence, accuracy, and decision quality, although straightforward when applied to humans, may seem a bit strained when applied to computer agents. However, if one thinks about such applications as search engines, web agents, filtering devices, and decision support systems (i.e., the kinds of tools that will surely become commonplace very soon), then the application may seem more plausible. For many kinds of communication transactions, users may neither need nor expect highly contingent interaction or the sense of social presence that multiple modalities are intended to afford [25]. Thus, it stands to reason that interfaces that are perceived as more useful, attractive, or credible should produce better outcomes on collaborative tasks. If this is true, then the higher decision quality, influence, accuracy of understanding, and credibility that Bengtsson et al. [4] found in text- and voice-based versions of HCI should be attributable partly to these mediating assessments. Conversely, formats or interfaces that are perceived as less user-friendly, believable, or attractive should be associated with lower-quality outcomes.

Casting possible doubt on this prediction is the further Bengtsson et al. finding [4], in which the HCI conditions were, on average, more influential but also less credible than FtF interaction on judgments of truthfulness, sociability, and dynamism. Among the HCI interfaces, the text-only condition, rather than more anthropomorphic ones, received the most consistently favorable ratings. This raises the possibility that credibility might be unrelated or negatively related to interaction outcomes in the special case of HCI. If so, when selecting or designing interfaces, might one achieve gains on influence and decision quality at the expense of positive regard for coactors, or vice versa?

The answer to this question resides in the correlations among the credibility and attraction dimensions and the other outcome measures. We reasoned that there might be enough variability within and between HCI conditions that, despite the mean ratings between conditions not comporting with the predicted association between credibility and attraction on the one hand, and influence, decision quality, and accuracy on the other, those individuals who judged the partner or interface to be competent, trustworthy, dependable, and useful might be more influenced by their partner than those who held their partner or the interface in low regard. Put differently, analyses conducted at the individual level, via correlations, might reveal patterns that had failed to emerge at the between-groups level due to heterogeneity within and across the HCI modalities. We also anticipated that the gain in statistical power achieved by examining data from all seven conditions might pay dividends in terms of achieving more statistically reliable findings.

Experimental Tests: Study 1

Method

Participants and Confederates

Details of the method and experimental conditions are given more fully in [4]. In brief, participants (N = 70) were male undergraduate students in the social sciences at Umeå University, Sweden, who were paid approximately $10 (100 Swedish crowns) to engage in a study of alternative problem-solving methods. Participants were randomly assigned to one of seven experimental conditions in which they conducted a decision-making task with a human or computer partner. Two male confederates served as the human partners.


Experimental Conditions and Procedures

The experimental conditions varied the "humanness" of the partners with whom participants were paired. In the five computer conditions, which ranged from a text-only interface to a human-like image with synthesized speech and matching lip synchronization, the computer output was preprogrammed. In the remaining two FtF conditions, human confederates presented the same scripted responses as in the HCI conditions but either adhered strictly to the script (noncontingent interaction) or interacted "freely" (i.e., in a normal, contingent turn-taking fashion) while still introducing the same information supplied in the script.

The task—the Desert Survival Problem—asked participants to imagine that their jeep has crashed in the Kuwaiti Desert, with no sign of potable water but some salvageable items from the wreckage. They then rank-ordered twelve items for their survival value (e.g., a gun, matches, a flashlight, a magnetic compass). This task was selected because it allows a fair amount of experimental control while still approximating features of normal conversation. Unlike some tasks that would stretch credulity among computer-savvy participants, it is also amenable to use in both HCI and CMC contexts (as well as immersive virtual environments, where additional extensions of the work are being conducted). This same task was used in Study 2.

In Study 1, participants first completed their own rankings of the twelve items, then interacted with their partner (computer or human). During this task discussion, participants and confederates (human or computer) alternated giving their rankings and reasons on each of the twelve items. Unbeknownst to the participants, confederate rankings and reasons were based on ones arrived at previously by groups of experts. Participants then reranked the items and completed a survey instrument that queried them about their partner and the interaction itself. The instructions, description of the Desert Survival Problem, initial rankings, postrankings, and all other postmeasures were collected on the World Wide Web via a Macintosh computer.

Interaction Measures

To measure interaction involvement, participants rated perceived involvement with two Likert-format items taken from Burgoon and Hale's [11] Relational Communication Scale (coefficient alpha reliability = 0.71). To capture the range of possible perceptions that might correspond to mutuality, participants also rated partners on receptivity and similarity subscales from the RCS (reliabilities = 0.64 and 0.87, respectively). To these measures were added Aron, Aron, and Smollan's [1] pictorial instrument, which uses seven increasingly overlapping circles to depict degrees of perceived connectedness, and fifteen items from Cahn and Shulman's [14] Feelings of Understanding/Misunderstanding Scale (reliability = 0.80). To further facilitate interpretation, these measures were also all averaged together to create an overall mutuality score.

Social Judgments and Outcome Measures

Assessments of the partner and format were created from several previously validated measures, including utility and liking scales from [28]; credibility and dominance measures found in [12, 32]; and an attraction measure reported in [27]. In the original Bengtsson et al. study, these scales were utilized to measure utility, six dimensions of credibility, and task attraction. However, to reduce multicollinearity and increase interpretability, we used principal components factor analysis with varimax rotation to reduce these data to a smaller subset of composite measures. The composites were created by averaging the unit-weighted items. The factor analysis on the semantic differential items produced five measures—dominance, expertise, dependability, sociability, and trust—with respective reliabilities of 0.66, 0.82, 0.87, 0.74, and 0.71. Dominance comprised the qualities of dominant, confident, and energetic. Expertise incorporated such attributes as expertise, competence, and experience. Dependability combined elements of the character and competence dimensions of credibility with format qualities related to utility; it included such attributes as reliable, helpful, clever, useful, intelligent, and efficient. Sociability combined being sociable and friendly. Trust was measured by such attributes as truthful, trustworthy, high character, and very credible. Task attraction was measured separately because it utilized a Likert format. Its items asked whether the partner showed the participant different ways to view the situation and approached the task with professionalism, and whether the participant was satisfied with the partner's contribution and enjoyed working with the partner.
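For readers who want to see the composite-building step in computational terms, here is a minimal sketch. The item names and ratings are hypothetical, and the sketch covers only the unit-weighted averaging and a coefficient alpha check, not the principal components extraction or varimax rotation used in the study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha for a set of Likert-type items (rows = respondents)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical semantic-differential ratings; columns are items loading on one factor.
ratings = pd.DataFrame({
    "reliable": [5, 6, 4, 7], "helpful": [6, 6, 5, 7],
    "useful":   [5, 7, 4, 6], "efficient": [6, 5, 5, 7],
})

# Unit-weighted composite: the simple mean of the items assigned to the factor.
dependability = ratings.mean(axis=1)

print("alpha =", round(cronbach_alpha(ratings), 2))
print("composite scores:", dependability.round(2).tolist())
```

Coefficient alpha values in the neighborhood of those reported above (0.66 to 0.87) are what justify collapsing each item set into a single composite score.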

Decision quality, absolute influence, and relative influence were calculated from participants' rankings of the salvaged items prior to interaction ("prerankings") and following the interaction ("postrankings"). Decision quality was measured as the mean absolute discrepancy between participant and confederate (expert) rankings on the twelve items. A small score thus indicates high decision quality, that is, close correspondence to the expert rankings. Influence was computed as the distance participants moved toward the confederate's position by calculating differences between (a) each person's preranking and partner preranking and (b) each person's postranking and partner postranking. A large score reflects a substantial shift.
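As a concrete illustration of these two ranking-based scores, consider the sketch below. The ranking vectors are invented, and treating influence as the drop in mean absolute distance from pre- to postdiscussion is our reading of the description above rather than the authors' exact scoring code.

```python
import numpy as np

# Hypothetical rankings of the twelve salvaged items (1 = most important).
expert = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12])   # confederate/expert ranking
pre    = np.array([3, 1, 5, 2, 6, 4, 8, 7, 12, 9, 10, 11])   # participant, before discussion
post   = np.array([2, 1, 4, 3, 5, 6, 7, 8, 10, 9, 11, 12])   # participant, after discussion

# Decision quality: mean absolute discrepancy from the expert ranking
# (smaller = closer to the "best" solution).
decision_quality = np.abs(post - expert).mean()

# Influence: how far the participant moved toward the partner's position,
# i.e., prediscussion distance minus postdiscussion distance (larger = bigger shift).
influence = np.abs(pre - expert).mean() - np.abs(post - expert).mean()

print(f"decision quality = {decision_quality:.2f}, influence = {influence:.2f}")
```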

Three measures were utilized to tap actual understanding. Accuracy of recall was assessed by asking participants to record the partner's ranking for the three top-ranked and three bottom-ranked items. It was scored as the number of correct matches. Deviations from partner ranking (the absolute distance from the actual partner ranking) were also calculated to capture close but not perfect matches. A small distance indicated high accuracy. Content understanding was measured by asking participants to paraphrase what they believed to be their partner's position and reasoning on the six middle-ranked items (selected because they might be less obvious and less biased by subjects' own preferences). These responses were rated by two independent coders (interrater reliability r = 0.92). Individual ratings were averaged to produce a single score.
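A rough sketch of how these three accuracy scores could be tallied follows; the item labels, recalled rankings, and coder ratings are all hypothetical stand-ins for the study's data.

```python
import numpy as np

# Partner's actual rankings for the three top- and three bottom-ranked items (hypothetical).
actual   = {"water": 1, "mirror": 2, "raincoat": 3, "compass": 10, "salt": 11, "book": 12}
recalled = {"water": 1, "mirror": 3, "raincoat": 2, "compass": 10, "salt": 12, "book": 11}

# Accuracy of recall: number of exact matches with the partner's ranking.
total_correct = sum(recalled[item] == actual[item] for item in actual)

# Deviation from partner ranking: mean absolute distance, which credits near misses
# (smaller = more accurate).
deviation = np.mean([abs(recalled[item] - actual[item]) for item in actual])

# Content understanding: average of two coders' ratings of the participant's
# paraphrases of the six middle-ranked items.
coder_ratings = np.array([[5, 4], [6, 6], [4, 5], [5, 5], [6, 5], [4, 4]])  # rows = items
content_understanding = coder_ratings.mean()

print(total_correct, round(deviation, 2), round(content_understanding, 2))
```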

Results

Correlation Analysis

Table 1 presents the correlations among involvement, mutuality, and partner/format assessments.


Table 1. Correlations of Involvement and Mutuality with Credibility, Utility, and Attraction Judgments, Study 1

                      Involvement   Mutuality:   Mutuality:    Mutuality:    Mutuality:
                                    Similarity   Perceived     Feeling       Connectedness
                                                 Receptivity   Understood
Credibility/utility
  Dominance           0.249*        0.212*       -0.180        -0.108        -0.050
  Expertise           0.291**       0.027         0.104         0.165        -0.073
  Dependability       0.446**       0.413**       0.086         0.230*        0.180
  Sociability         0.175         0.226*        0.229*        0.291**       0.258*
  Trust               0.440**       0.258*        0.027         0.142         0.011
Task attraction       0.375**       0.341**       0.252*        0.180         0.170

* p < 0.05; ** p < 0.01.

The correlations between involvement and social judgments strongly supported predictions both in terms of the number of significant correlations and the magnitudes of the correlations. Involvement correlated positively with dominance, expertise, dependability, trust, and task attraction (i.e., all dimensions except sociability). Thus, partners perceived as more involved were judged as more credible and attractive to work with. Mutuality was also positively associated with credibility and attraction. The more participants felt their partners were similar to them, the more they rated the partner as reliable, useful, friendly, dominant, trustworthy, and attractive to work with. The more they saw the partner as receptive and understanding, the more they judged the partner as friendly, dependable, and/or task-attractive. And the more they felt connected to the partner, the more they liked the partner.

Table 2 presents the correlations among all of the foregoing measures and the outcome variables. Involvement and mutuality had more limited direct effects on outcome measures. Involvement was unrelated to decision quality, influence, or accuracy of understanding. Of the mutuality measures, similarity was positively associated with decision quality: Those who felt more similar to their partner selected rankings closer to those advocated by their partner (which were the expert rankings). Receptivity was also associated with understanding but negatively so—that is, those who perceived their partner as most receptive to their own viewpoints actually had the least accurate recall of the content of the partner's arguments. Mutuality, then, had a positive impact when it took the form of the participant being influenced by the partner to adopt the expert-based rankings, but it had a detrimental impact when it took the form of participants feeling their own inexpert judgments held sway with their partner.

Comparatively, outcomes were far more affected by credibility, utility, and attraction (which were themselves affected by the qualities of the communication process). The strongest relationship was with the combined dependability and utility measure. Participants achieved the highest decision quality, the most accurate recall of partner's rankings (i.e., the smallest deviations from correct answers), and the most accurate understanding of the content of partner's messages when they interacted with a partner or interface that they judged to be useful and reliable.


Table 2. Correlations of Involvement, Mutuality, and Social Judgments with Outcome Measures, Study 1

                        Relative    Decision quality   Accuracy:         Accuracy:       Accuracy:
                        influence   (deviation from    deviation from    total correct   content
                                    best solution)     correct answers   answers         understanding
Involvement             0.079       -0.043             -0.083            0.059           0.144
Mutuality
  Similarity            -0.027      -0.229*            -0.048            0.090           0.146
  Perceived receptivity 0.148       -0.099             0.051             0.011           -0.221*
  Feeling understood    -0.095      -0.100             -0.131            -0.179          0.093
  Connectedness         -0.102      0.036              0.176             -0.135          0.026
Credibility/utility
  Dominance             0.046       -0.010             -0.093            -0.108          0.054
  Expertise             0.183       -0.098             -0.339**          -0.054          0.419**
  Dependability         0.211*      -0.229*            -0.296**          0.210           0.218*
  Sociability           0.060       -0.196             -0.099            0.205           -0.003
  Trust                 0.083       -0.115             -0.284*           -0.040          0.252*
Task attraction         0.449**     -0.508**           -0.065            0.069           0.040

* p < 0.05; ** p < 0.01.

In addition, participants were more accurate in processing and understanding the information exchanged if the partner or interface was viewed as trustworthy and expert. Finally, participants who found the partner or computer agent more attractive to work with were more influenced by the partner and moved closer to the partner's recommended rankings.

Overall, judgments related to utility and dependability were particularly valuable in discriminating more from less successful task outcomes. Judgments related to expertise, trust, and task attractiveness also were associated with outcomes, making these measures helpful in predicting which interfaces were most beneficial. By contrast, judgments related to the sociability or dominance of the interface or partner had no association with outcomes. In other words, whether or not the interface was judged as friendly and dominant made no difference to how well the participant processed information, was influenced, or arrived at a high-quality decision.

Comparisons Among Conditions

Given that mutuality had some relationship to outcome measures and both involvement and mutuality related to social judgments, it becomes useful, first, to consider which interfaces are most or least likely to promote involvement, mutuality, and resultant positive social judgments. Insights can be gleaned from the mean ratings for the seven experimental conditions, which are displayed for all variables in Table 3. It should be noted that high scores for influence indicate that participants shifted their attitudes in favor of the position advocated by the partner; low scores on decision quality represent smaller distances from the "best" decision; and low scores on deviations from the correct answers represent higher accuracy.


Table 3. Means for All Measures by Condition, Study 1

                           Text    Text+   Text+voice  Voice+     Text+voice  FtF, non-    FtF,
                           only    voice   +still      animation  +animation  contingent   contingent
Involvement                5.35    4.75    5.45        5.60       5.10        5.40         4.70
Mutuality
  Similarity               3.47    3.90    3.53        3.83       3.97        3.40         3.80
  Receptivity              2.70    2.90    2.00        2.70       2.90        2.50         3.80
  Feeling understood       0.62    0.74    0.47        1.09       1.19        0.62         1.55
  Connectedness            1.80    2.10    1.80        2.40       3.00        2.40         3.20
Credibility
  Dominance                5.23    4.47    4.63        4.85       5.10        4.65         4.93
  Expertise                4.62    4.32    4.55        4.72       4.58        4.08         5.05
  Dependability            4.84    4.68    4.95        4.99       5.36        4.85         5.04
  Sociability              4.60    4.05    3.75        4.05       4.10        4.65         5.10
  Trust                    4.78    4.73    4.85        4.90       5.20        4.83         5.13
Task attraction            4.68    4.96    4.61        4.88       5.29        4.04         4.40
Influence
  Relative influence       0.20    0.26    0.24        0.21       0.23        0.22         0.12
Decision quality           3.14    2.87    3.23        3.24       3.15        3.34         3.54
Accuracy
  Deviation from correct   5.70    5.10    5.20        5.80       5.10                     4.30
  Total correct answers    3.00    3.00    3.40        3.50       2.80                     3.30
  Content recall           4.80    5.00    5.10        5.18       4.88                     5.08

Statistical tests parallel to those conducted in [4] were conducted on the involvement, mutuality, and new social judgment measures. Although the means for involvement are suggestive of participants finding interactions with computer partners equipped with voice and animation as the most involving, statistical tests failed to establish significant differences among conditions. Contrast tests conducted between contingent FtF and the combined HCI conditions did, however, produce significant differences (with one-tailed probability levels) for three measures: feeling understood, t(63) = 2.17, p = 0.03; perceived receptivity, t(63) = 2.59, p = 0.01; connectedness, t(63) = 1.73, p = 0.045. FtF interaction created more sense of receptivity, connection, and being understood than the HCI conditions. Among the HCI conditions, adding animation also increased mutuality on two of the same measures: feeling understood, t(63) = 1.88, p = 0.03, and perceived connectedness, t(63) = 1.73, p = 0.045. By contrast, the addition of the still image reduced rather than enhanced perceived receptivity, t(26.5) = -1.86, p < 0.10. Thus, the addition of dynamic, humanlike qualities to the interface created greater connectedness and felt understanding with the partner, but simply adding a fixed facial image (as might occur with avatars) detracted from mutuality. Among the FtF conditions, the loss of contingency significantly reduced mutuality on two measures: feeling understood, t(63) = 2.16, p = 0.03; receptivity, t(63) = 2.24, p = 0.01.
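To make the logic of these planned contrasts concrete, a minimal sketch follows. The group scores are simulated rather than taken from the study, equal group sizes are assumed, and the contrast pits the contingent FtF condition against the average of the five HCI conditions using the pooled within-group error term, which is one standard way of running such a test.

```python
import numpy as np

def planned_contrast(groups, weights):
    """t statistic for a contrast on group means, using the pooled (ANOVA) error term."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_error = ns.sum() - len(groups)           # N - k degrees of freedom
    mse = ss_within / df_error                  # pooled within-group mean square
    contrast = (weights * means).sum()
    se = np.sqrt(mse * (weights ** 2 / ns).sum())
    return contrast / se, df_error

rng = np.random.default_rng(0)
# Simulated "feeling understood" scores: five HCI groups plus one contingent FtF group.
groups = [rng.normal(0.8, 0.6, 10) for _ in range(5)] + [rng.normal(1.5, 0.6, 10)]

# Contrast weights: contingent FtF (+1) versus the five HCI conditions (-1/5 each).
weights = np.array([-0.2] * 5 + [1.0])
t, df = planned_contrast(groups, weights)
print(f"t({df}) = {t:.2f}")
```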

In sum, mutuality (but not involvement) was greater with contingent FtF interaction and with dynamic, anthropomorphic computer interfaces; it was jeopardized by reducing contingent responses between partners or by interjecting a fixed image.

As for social judgments, the strong associations between outcome measures and the social judgments of dependability/utility, expertise, trust, and task attraction make the analyses on these measures most instructive. Of these dimensions, only expertise yielded statistically significant differences. Contingent FtF interactions, which earned the highest rating on this measure, conferred greater expertise on partners than did noncontingent responding, t(63) = 2.21, p = 0.03. Other suggestive differences are also worth noting for future research because they might become statistically significant with a larger sample size and more powerful tests. Within the HCI conditions, the highest ratings on dependability/utility and trust went to the text+voice+animation interface, which was also rated highest on task attraction. Lowest ratings tended to be given to the text+voice interface, the text+voice+still image interface, or the noncontingent FtF partner, but no single interface was consistently the lowest.

Although sociability and dominance were not strongly associated with outcomes, they did differ by condition. On average, humans engaged in contingent FtF interaction were seen as far more sociable than computer agents, t(63) = 2.82, p < 0.01, and adding voice to text made the computer partner seem less dominant, t(63) = -1.77, p = 0.04. Put differently, computer agents suffered relative to humans in terms of being friendly, and they were seen as most dominant in the text-only interface (followed closely by the text+voice+animation condition).

Discussion

This first study was undertaken to see if interrelationships among the interactive qualities of involvement and mutuality, social judgments, and task outcomes would provide insights into the findings from [4]. It will be recalled that Bengtsson et al. showed that structural properties of interactivity exert direct impact on interaction outcomes. Participants were more influenced by computer agents than human partners, yet humans scored higher than computer partners on select credibility dimensions. Although not statistically significant, there were also indications of higher decision quality with the HCI interfaces, especially the least anthropomorphic ones, but more accurate recall with human interaction. These results implied that different processes were at work in accounting for influence and decision quality as opposed to credibility and recall and that these processes might not be working uniformly within each interface, thus making it important to delve into the processes themselves.

The current results begin to explain those processes. They reveal that influence, decision quality, and information processing accuracy are partly a function of the experiential properties of interactivity (i.e., involvement and mutuality) and partly a function of the social judgments that are activated. Consistent with the model advanced here, properties of interactivity largely exert indirect rather than direct influence on task outcomes by affecting social judgments. Achieving desired outcomes, then, depends to some extent on the level of interactivity that participants experience as well as the structural properties that various interfaces afford.

Consider first the impact of involvement. If partners were viewed as interested and engaged, they were more likely to be judged favorably on dependability, expertise, dominance, trustworthiness, and task attraction. Higher perceptions of dependability in turn were associated with more influence, better decisions, and higher accuracy; favorable judgments of expertise were associated with more accuracy; and favorable judgments of task attractiveness were associated with more influence and better decisions. Interfaces that promote involvement, then, are likely to be ones that generate not only more favorable social judgments but also better task outcomes.

Which interfaces do so? Curiously, none emerged as consistently superior or inferior, although the voice+animation condition earned the highest rating. It is possible that the novelty of interacting with a computer agent, combined with its perceived dynamism, made this condition especially involving. But apart from the specific interfaces tested here, the very strong relationship of involvement to social judgments and task outcomes signifies the importance of this aspect of interaction being taken into consideration in designing and selecting interfaces.

Next, consider the role of mutuality, which was indexed by four different measures. The first is similarity. The more participants viewed themselves as similar to their partner, the more they rated the partner and/or interface as attractive, useful, and credible on all measures except expertise. This relationship can also be viewed in the converse: The more dissimilarity that participants perceived, the less attractive, credible, and useful they found the partner or interface. This suggests that anthropomorphism at some level may be beneficial when the objective is to maximize favorable social judgments and to derive the task benefits that these judgments promulgate. However, if the main concern is conveying expertise, then similarity is an unnecessary consideration. As for which interfaces would be most or least likely to foster similarity, there was more variability within than between conditions, so that no definitive claims can be made about the specific interfaces used here. Some of the perception of similarity may be a function of individual differences rather than systematically related to interface affordances. Still, the significance of similarity in promoting or inhibiting key social judgments makes further exploration of similarity worthwhile.

The second measure of mutuality, receptivity, had weaker associations with social judgments. It only correlated positively with sociability and task attraction and actually correlated negatively with accurate understanding. Since sociability was unrelated to task outcomes, and the goal is usually to achieve more rather than less understanding, one might conclude that giving the impression of receptivity to the partner is inconsequential or even undesirable. Such a conclusion would be warranted unless one was concerned with generating positive long-term relationships and high morale among users. Under such circumstances, creating perceptions of friendliness among users might be the objective in itself, in which case receptivity could be valuable. Which interfaces created the most perceived receptivity? Contingent face-to-face interaction. Thus, FtF communication might be indispensable for purposes of creating and maintaining social relationships. The least receptivity was fostered by the noncontingent FtF condition. If this finding is generalized to mediated forms of interaction, it suggests that noncontingent formats such as broadcast messages or noncontingent computer agents may be least helpful in promoting good interpersonal relationships.

The third measure, feeling understood, was positively associated with dependability and sociability. Like similarity, interfaces that promote this form of mutuality will gain the benefits of influence, decision quality, and accuracy that come with increased credibility and utility. Given that feeling understood was highest in the contingent FtF and animated HCI conditions and lowest under noncontingent FtF interaction, interfaces most likely to engender feelings of being understood are those that are contingent, anthropomorphic, and/or unmediated.

The final measure, connectedness, correlated only with sociability and hence might be dismissed as less important than the other aspects of mutuality. However, the fact that it was a single-item measure with a restricted range may have limited the ability of statistical tests to find strong associations. A sense of connection with the partner was strongest under contingent FtF interaction and in HCI conditions that included the animated face and synthesized voice, that is, those that were more anthropomorphic. Achieving humanlike qualities in the interface, then, has benefits in terms of enhancing mutuality and positive views of the partner as sociable and friendly.

As for the relationships of the new social judgment composite measures to task outcomes, the correlation results revealed that dependability, expertise, trust, and task attraction had the strongest associations with task outcomes. The implication is that these judgments about partners and interfaces are especially key in predicting outcomes such as influence, decision quality, and accuracy of information processing. Creating circumstances in which these judgments are maximized should have payoffs in terms of maximizing desired outcomes. Conversely, information and communication formats that undermine these judgments should have adverse effects on task outcomes.

One somewhat inexplicable finding was that judgments of expertise were unrelated to influence; they were only related to accuracy. This might seem to challenge our argument that the reason computer agents are more influential than human partners is that users attribute more credibility to media and computers. We are not prepared to abandon this explanation just because our measure of expertise failed to predict influence. For one, it is possible that the wording of our expertise measure was insufficient to capture the authoritative status that we believe users ascribe to mediated communication forms. The adjectives used in our measure—experienced, expert, competent, insightful, responsible—may have been inadequate to capture the sense of the computer agent as infallible and being the ultimate authority. Or the wording may have seemed peculiar to apply to computers. Anecdotal reports from our laboratory assistants indicated that some users had difficulty applying such judgments to computers in similar tasks. This suggests that these scale items may not have been the ideal choices for judging computers as opposed to humans.

Alternatively, these facets of computer credibility may have been better captured in the dependability measure, which included such attributes as intelligent, reliable, and useful. One might almost think of these as "quality assurance" characteristics, hence our choice of the label "dependability." These may be the characteristics that make computers highly credible. And the dependability measure was the one that predicted all three outcomes of influence, decision quality, and accurate information processing. This set of associations thus offers confirmation for our theorizing that users treat computer agents as having the most accurate, valid information, which may account for why users defer to the judgments computers offer. As noted in [4], this process of deferring to computers may be beneficial when the objective is to have users place high trust in the information they receive via mediated delivery systems. The downside is that users may fail to make critical judgments of information and arguments, leading to poor and potentially disastrous decision making. To the extent that humans mindlessly accept whatever information is presented via computer agents, the design and implementation of these technologies must be undertaken with great discretion and caution.

No single investigation should ever be regarded as definitive, especially when dealing with novel technologies tested in single encounters. The next investigation therefore was intended to replicate the same task and measures but under conditions of CMC between human partners.

Experimental Tests: Study 2

STUDY 2 WAS DESIGNED TO EXAMINE HUMAN-HUMAN INTERACTION under mediated and nonmediated conditions. Because the noncontingent FtF condition in Study 1 had created interaction patterns that were unnatural and unrealistic, it was eliminated in Study 2. In its place, we created an offset control group in which two naive participants conducted the same task face-to-face. This condition was intended to serve as a benchmark for how users would conduct and experience this task when permitted to interact freely in the absence of experimental controls on the communication process itself. The other two conditions represented contingent, unmediated FtF interaction and contingent CMC among colocated participants, respectively. Like computerized group support systems, the latter condition retained the proximity of FtF interaction but introduced mediation in the form of text-based interaction. Participants sat beside one another but could not talk during completion of the task.

Participants and Confederates

Participants (N = 68; 34 males, 34 females) were undergraduate students recruited from organizational communication courses that are largely populated by business and public administration students. They were compensated with class credit in exchange for their participation. Confederates were one male and one female undergraduate student in communication who were of similar age and attractiveness. The confederates received extensive training and conducted numerous practice sessions to ensure that they maintained consistency in verbal and nonverbal performance between themselves and across sessions.


Experimental Conditions and Procedures

The experimental task, instructions, and measures were identical to those from Study 1, except that the accuracy data were not obtained due to a programming malfunction in the sequencing of the Web pages. All task instructions, the description of the task, and pre- and postinteraction questionnaires were delivered via an IBM computer. Upon arrival at the experimental site, participants entered a waiting room where they filled out consent forms, were given preliminary information regarding the experiment, and were introduced to a same-sex partner. Participants were randomly assigned to (1) the face-to-face control group, in which two naive participants conducted the task orally and face-to-face, with no restrictions placed on the content, pace, or length of interaction; (2) contingent face-to-face interaction, in which participants were paired with a same-sex confederate who followed the script as closely as possible but was given the latitude to respond to questions, concerns, or digressions initiated by the participant; or (3) computer-mediated communication, in which participants were paired with a confederate and conducted the discussion via a synchronous Windows-based online chat program. Confederates followed the script closely but were allowed to respond quickly and relevantly to contingencies initiated by the participant.

Participants and partners were seated at a table in front of a small computer terminal and keyboard. Computers and chairs were positioned obliquely toward one another so that interactants could see one another easily but were still visible frontally through the one-way mirror (to enable videotaping of their interactions, with their consent). The computers were angled so that neither the confederate nor the participant could see each other's terminal. Interactants read the Desert Survival Problem and conducted initial rankings on their respective computers (see Study 1), then discussed the problem face-to-face or via the chat program. Following discussion of all twelve items, participants completed the Web-based questionnaires; confederates feigned responding so as to conceal their true roles. Participants were then debriefed and thanked for their participation.

Interaction and Partner Assessments and Outcome Measures

The same measures were used as in Study 1, with these exceptions: The utility items asked specifically about the partner's helpfulness and so were clearly associated with the partner rather than the format, and recall data were not collected, so there was no measure of accuracy. Factor analysis was again used to form composite measures. The resultant measures and their reliabilities were: involvement, 0.71; similarity, 0.93; feeling understood, 0.93; receptivity, 0.55; dominance, 0.70; dependability, 0.91; expertise, 0.73; trust, 0.80; sociability, 0.84; task attraction, 0.82. Relational connectedness was a single-item measure.

Results

Correlational Analyses

Table 4 shows the correlations among the interactivity variables of involvement and mutuality on the one hand and social judgments of credibility and attraction on the other. As predicted, involvement and mutuality were positively correlated with social judgments. Greater involvement corresponded to more favorable judgments of the partner's expertise, dependability, sociability, dominance, and task attractiveness (but not trust). Greater mutuality corresponded to more favorable judgments of the partner's expertise, dependability, trustworthiness, sociability, and task attractiveness (but not dominance). Table 5 shows that the primary variables directly affecting the outcome measures of influence and decision quality were dominance, expertise, and task attraction. Partners perceived as more dominant, expert, and attractive to work with were the most influential and moved the participant toward the "best" decision. Also, higher involvement contributed to being influential. Thus, the model we proposed was supported. Involvement and mutuality exerted strong impact on social judgments and ensuing task outcomes. The experiential aspects of interactivity—involvement and mutuality—were less likely to affect outcomes directly than indirectly, although involvement directly affected influence.

Comparisons Among Conditions

To compare text-based CMC with FtF communication, planned comparisons were conducted between the experimental groups and the control group and between the FtF and CMC experimental conditions on all measures. Means appear in Table 6. In addition to the previous finding of the experimental groups achieving more influence and high-quality decisions than the control group [8], experimental partners were judged as more dominant than control group partners, t(64) = 2.04, p = 0.04. The comparison between the CMC and FtF experimental conditions produced no significant differences on involvement, mutuality, and credibility but did produce one on task attraction, t(64) = -1.99, p = 0.02. CMC earned higher ratings. Thus, not only did the CMC condition not suffer on interpersonal judgments related to involvement, mutuality, and credibility, it gained ground on task-related perceptions.

Discussion

The second study extended the first by examining the experiential properties of interactivity in a CMC context. Much like Study 1, examination of correlations revealed that important features of interactivity, including mutuality and involvement, operate as expected across CMC and FtF conditions. Social judgments were, in turn, positively related to task outcomes; more credible and more involved partners were more persuasive. Finally, comparisons between FtF and CMC conditions revealed that only task attraction was significantly different, with the CMC condition rated higher than FtF interaction.


Table 4. Correlations of Involvement and Mutuality with Credibility and Attraction Measures, Study 2

                    Involvement   Mutuality:   Mutuality:      Mutuality:    Mutuality: Feeling   Overall
                                  Similarity   Connectedness   Receptivity   Understood           Mutuality
Dominance           0.391**       0.028        0.169           0.100         0.149                0.119
Dependability       0.300**       0.533**      0.485**         0.398**       0.461**              0.604**
Expertise           0.414**       0.384**      0.232**         0.313**       0.409**              0.461**
Sociability         0.289**       0.304**      0.394**         0.378**       0.366**              0.418**
Trust               0.125         0.412**      0.423**         0.176         0.304**              0.450**
Task attraction     0.455**       0.070        0.093           0.533**       0.249*               0.354**

* p < 0.05; ** p < 0.01.

Table 5. Correlations of Involvement, Mutuality, and Partner Assessments with Influence and Decision Quality, Study 2

                          Influence    Decision quality
Involvement                0.211*        -0.051
Mutuality
  Similarity              -0.078          0.003
  Connectedness           -0.089          0.004
  Receptivity              0.070         -0.015
  Feeling understood      -0.021         -0.031
  Overall mutuality        0.011         -0.041
Credibility/utility
  Dominance                0.261*        -0.275*
  Dependability            0.106         -0.150
  Expertise                0.262*        -0.207*
  Sociability              0.000         -0.042
  Trust                   -0.025          0.011
Task attraction            0.367**       -0.266*

* p < 0.05; ** p < 0.01.

Social judgments were, in turn, positively related to task outcomes; more credible and more involved partners were more persuasive. Finally, comparisons between FtF and CMC conditions revealed that only task attraction was significantly different, with the CMC condition rated higher than FtF interaction.

One might speculate that the physical proximity of partners offset any negatives associated with being unable to talk. Also, the novelty of the CMC condition may have contributed to its attraction; participants are unlikely to have experience with proximal, synchronous chat situations. But this is not unimportant when one considers that task attraction is positively associated with several features of mutuality, which in turn are related to task outcomes. These results are suggestive if one's interest is in first increasing mutuality among participants. Since task attraction might be greater for novel modalities, one way to create a sense of mutuality is to have participants first interact via an unfamiliar medium.


Table 6. Means and Standard Deviations for All Measures, by Experimental Condition, Study 2

                          Face-to-face,           Face-to-face,            Face-to-face,
                          unmediated              unmediated               mediated text
                          control group           experimental group       experimental group
                          Mean      S.D.          Mean      S.D.           Mean      S.D.
Involvement               5.84      1.10          5.67      1.13           5.78      1.46
Mutuality
  similarity              4.13      1.40          3.96      1.52           4.12      1.28
  connectedness           4.00      1.46          3.40      1.64           4.10      1.74
  receptivity             5.67      0.91          5.38      1.16           5.37      1.44
  feeling understood      2.13      1.06          1.75      1.24           1.72      1.30
  overall mutuality       4.34      0.80          3.96      1.19           4.19      0.96
Credibility
  dominance               4.51      1.13          4.93      0.70           5.03      0.73
  dependability           5.45      0.83          5.45      0.73           5.59      1.01
  expertise               5.05      0.70          5.04      0.67           5.29      0.82
  sociability             6.05      0.81          5.70      0.92           6.08      0.80
  trust                   5.45      0.81          5.20      0.64           5.57      0.72
Task attraction           5.04      1.09          4.91      1.43           5.73      1.24
Influence                 0.05      0.25          0.25      0.19           0.23      0.38
Decision quality          3.53      1.35          2.96      1.34           2.98      1.50

Once mutuality is established, other kinds of media may then be incorporated, ones that are more conducive to producing desirable instrumental outcomes. Of course, interfaces such as computerized group systems that can be used with colocated participants may be one of the rare options in which mediation still entails other properties of interactivity.

The preceding speculations aside, these results bolster the model of interactivity as applied to mediated communication. It is clear that many of the processes associated with communication in general work in similar ways across media. The interesting question concerns the comparability of mediated and FtF interaction (excluding the control group). Each seems to provide the affordances associated with interactivity in relatively equal ways. This is likely due to the fact that the experimental conditions had much in common. Communication in each was synchronous and proximal and provided a sense of rhythm to the interaction (in the sense that one could hear the speed and force with which his or her colleague typed). In fact, the main difference between the two conditions was that participants could speak in one and could not in the other. The conclusion is that speech (and some of its relevant nonverbal characteristics) is not always necessary to provide optimal conditions for interactivity as long as other features of FtF interaction are maintained.

This finding is important for systems that support real-time interaction. Group support systems, for example, are designed specifically to facilitate efficient group interaction by removing factors responsible for "process loss," thus forcing the group to consider ideas, issues, and solutions rather than the people who produce them. Doing so removes, in a sense, some aspect of the "human moment" that Hallowell [19] argues is, and ought to remain, an important feature of interaction. However, because the interaction is proximal, synchronous, and rhythmic, group support systems may well preserve features of interactivity that in turn allow members to be influential and to build and maintain positive social relationships.

General Discussion and Summary

NEW TECHNOLOGICAL INNOVATIONS, WHILE OPENING UP ENTIRELY NEW FORMS and arenas for communication, carry potential risks of misunderstandings, distrust, and poor decision making if used without regard to their suitability to different goals and tasks or their impact on interpersonal relationships between users. Conversely, they may achieve unanticipated benefits if users creatively adapt them to meet their own and their organization's objectives.

The pair of studies presented here offers some insights into these issues by examining how aspects of interactivity relate to user perceptions and how those perceptions relate to task outcomes. As designers build systems that amplify or attenuate specific characteristics of interactivity, they will be well advised to consider which properties achieve outcomes that are most desirable for both providers and users. For example, anthropomorphic interfaces and interfaces that support mutuality would have an advantage if successful task achievement depends on users collaborating to pool and judge critical information. Such interfaces would seem appropriate for building trust and strong interpersonal relationships because respondents tend to rate their partners and interactions higher when more human features are provided. To date, these interfaces are still an unachieved goal except in the futuristic prophecies of MIT computer scientist Michael Dertouzos [18] and the virtual butler in Apple Computer's Knowledge Navigator video, but advancements in computer technology undoubtedly will increase the capability to introduce more anthropomorphic features.

In other cases, such as obtaining standard information from an online investment company, there may be no real gain in designing an interface to appear more like a human. The challenge for interface designers is to resist the temptation of burdening users with an overloaded interface not matched to the desired task or outcome.

Future research ought to focus on two areas. The first concerns mapping specific features of interfaces and software for supporting mediated communication onto characteristics of interactivity. Our attempts to do so for HCI did not manifest themselves in increased assessments of partners, at least not as predicted and certainly not linearly. What might those features be? First, and most obviously for HCI, is to make the interface more human in appearance and behavior than the software and graphics used here. Less obviously, making the interface appear more "intelligent," in the sense that it is able to respond relevantly and cogently to interactional contingencies, may realize the kinds of gains in interactivity that were hypothesized here.


Suchman's [32] work is important in this respect; she detailed features of interaction in general and then showed how humans adapted to interaction when anticipated moves were not forthcoming from the machine. Reducing the need to adapt might amplify interactivity and stimulate the processes associated with task outcomes.

Recent thinking about social interaction research has led to models that recognize the possibility of multiple sources of influence on relevant behavioral or attitudinal outcomes. Work by Kenny and his associates [21, 22] has been particularly instrumental in noting that a person's own behavior and cognitions are affected not only by his or her previous behavioral or cognitive states but also by a partner's states. The former is referred to as the "actor effect," whereas the latter is called the "partner effect," and the two are often not the same. For example, in a study of group participation, Bonito [6] found that evaluations of one's own participation were differentially affected by the substantive contributions of self and others, such that there was a positive actor effect (i.e., self-assessments of participation increased the more self contributed substantively to discussion) but no partner effect: contributions by others had no effect on self-assessments of participation. However, when considering participation ratings of others by self, there was a negative actor effect and a positive partner effect, indicating that ratings of others were positively associated with the amount of others' contributions and negatively associated with the amount of self's contributions.

We believe such models can be fruitfully applied to CMC, particularly when considering partner effects. An interesting empirical question is whether perspective changes as a function of interface design. It is possible that the actor's perspective will not change but the partner's will, depending on the medium (e.g., proximal or not, synchronous or asynchronous, or featuring an avatar). If this is the case, actor effects ought to be consistent across media, but partner effects ought to change accordingly as a function of how the interaction is affected by the affordances of interactivity. CMC in some respects changes the ground rules for interaction; the ways in which communicators influence or affect each other ought to be similarly affected.
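
To make the actor/partner distinction concrete, the sketch below fits a simple dyadic regression in the spirit of the models cited above [21, 22]: each person's self-rating is predicted from his or her own contribution (actor effect) and the partner's contribution (partner effect), with a random intercept for each dyad. The variable names and data file are hypothetical assumptions; this is not the analysis reported in [6].

    # Minimal sketch: separating actor and partner effects in dyadic data.
    # Each row is one person; "dyad" identifies the pair. Names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("dyads.csv")
    model = smf.mixedlm("self_rating ~ own_contrib + partner_contrib",
                        data=df, groups=df["dyad"])
    result = model.fit()
    # own_contrib coefficient     -> actor effect
    # partner_contrib coefficient -> partner effect
    print(result.summary())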

Notes

Acknowledgments: Portions of this research were supported by funding from the U.S. Army Research Institute (Contract #DASW01-98-K-009). The views, opinions, and/or findings in this report are those of the authors and should not be construed as an official Department of the Army position, policy, or decision.

1. The text-only condition utilized separate windows on the screen for presenting the computer's and the participant's responses. In the conditions employing vocal cues, text-to-speech synthesis software developed by KTH was used. In the animation conditions, an animated face was presented that moved its lips and facial skin in synchrony with the speech and occasionally blinked its eyes. The animation software used in this study was developed by Jonas Beskow and Magnus Lundeberg at KTH (Royal Institute of Technology), Stockholm, Sweden, and is further described in [5]. The computer image was named "Holger." The same image was used in the still-image condition. In the noncontingent FtF condition, which is described in [3] as the scripted condition, the male confederates followed the script explicitly, which often meant disregarding questions and comments by the subjects, causing the dialogue to become rather unnatural, although confederates were still permitted to use the same kinds of gestures and backchannel cues (e.g., head nods) used in the other FtF condition. In the contingent condition (labeled "unscripted" in the other report), confederates were allowed to use their own words and adapt to subjects' questions, while still trying to convey the same information contained in the script.

2. Where variances for dependent measures were not equal across all groups, unequal-variance t-tests were employed.

3. The low reliability on receptivity would normally warrant eliminating it from analyses, as measures with low reliabilities typically fail to produce significant results, but it was retained here for purposes of parallelism with Study 1.

REFERENCES

1. Aron, A.; Aron, E.N.; and Smollan, D. Inclusion of other in the self scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63 (1992), 596-612.

2. Bales, R.F. Personality and Interpersonal Behavior. New York: Holt, Rinehart, & Winston, 1970.

3. Bengtsson, B.; Burgoon, J.K.; Cederberg, C.; Bonito, J.; and Lundberg, M. Virtual communication and the impact of anthropomorphic interfaces. Paper presented to the Second Swedish Symposium on Multimodal Communication, Lund, 1998.

4. Bengtsson, B.; Burgoon, J.K.; Cederberg, C.; Bonito, J.; and Lundberg, M. The impact of anthropomorphic interfaces on influence, understanding, and credibility. Proceedings of the Thirty-Second Hawaii International Conference on System Sciences, Maui, 1999.

5. Beskow, J. Rule-based audiovisual speech. Master's thesis, Royal Institute of Technology, Stockholm, Sweden, 1995.

6. Bonito, J.A. Collecting free-response data via computers: effects of technology on responses to hypothetical scenarios. Computers in Human Behavior, 15, 2 (1999), 195-211.

7. Burgoon, J.K. Nonverbal signals. In M.L. Knapp and G.R. Miller (eds.), Handbook of Interpersonal Communication, 2d ed. Beverly Hills, CA: Sage, 1994, pp. 344-390.

8. Burgoon, J.K.; Bengtsson, B.; Bonito, J.; Ramirez, A.; and Dunbar, N.E. Designing interfaces to maximize the quality of collaborative work. Proceedings of the Hawaii International Conference on System Sciences, Maui, 1999.

9. Burgoon, J.K.; Birk, T.; and Pfau, M. Nonverbal behaviors, persuasion, and credibility. Human Communication Research, 17 (1990), 140-169.

10. Burgoon, J.K.; Buller, D.B.; Floyd, K.; and Viprakasit, R. Does participation affect deception success? Paper presented to the annual meeting of the International Communication Association, Jerusalem, 1998.

11. Burgoon, J.K., and Hale, J.L. Validation and measurement of the fundamental themes of relational communication. Communication Monographs, 54, 1 (1987), 19-41.

12. Burgoon, J.K.; Johnson, M.L.; and Koch, P.T. The nature and measurement of interpersonal dominance. Communication Monographs, 65 (1998), 308-335.

13. Burgoon, J.K.; Walther, J.B.; and Baesler, E.J. Interpretations, evaluations, and consequences of interpersonal touch. Human Communication Research, 19, 2 (1992), 237-263.

14. Cahn, D.D., and Shulman, G.M. The perceived understanding instrument. Communication Research Reports, 1 (1984), 122-125.

15. Coker, D.A., and Burgoon, J.K. The nature of conversational involvement and nonverbal encoding patterns. Human Communication Research, 13, 4 (1987), 463-494.

16. Daft, R.L., and Lengel, R.H. Organizational information requirements, media richness and structural design. Management Science, 32 (1986), 554-571.

17. Dennis, A., and Valacich, J.S. Rethinking media richness: toward a theory of media synchronicity. Proceedings of the Hawaii International Conference on System Sciences, Maui, 1999.

18. Dertouzos, M.L. What Will Be: How the New World of Information Will Change Our Lives. San Francisco: Harper-Edge, 1997.

19. Hallowell, E.M. The human moment at work. Harvard Business Review, 77 (1999), 58-64.

20. Hart, R.P. Seducing America: How Television Charms the Modern Voter. New York: Oxford University Press, 1994.

21. Kashy, D.A., and Kenny, D.A. The analysis of data from dyads and groups. In H.T. Reis and C.M. Judd (eds.), Handbook of Research Methods in Social Psychology. New York: Cambridge University Press, in press.

22. Kenny, D.A. Models of nonindependence in dyadic research. Journal of Social and Personal Relationships, 13 (1996), 279-294.

23. Kiesler, S.; Sproull, L.; and Waters, K. A prisoner's dilemma experiment on cooperation with people and human-like computers. Journal of Personality and Social Psychology, 70 (1996), 47-65.

24. Krauss, R.M., and Fussell, S.R. Mutual knowledge and communication effectiveness. In J. Galegher, R. Kraut, and C. Egido (eds.), Intellectual Teamwork. Hillsdale, NJ: Erlbaum, 1990, pp. 111-145.

25. Lombard, M., and Ditton, T.B. At the heart of it all: the concept of presence. Journal of Computer-Mediated Communication, 3 (1997).

26. Markova, I.; Graumann, C.F.; and Foppa, K. Mutualities in Dialogue. Cambridge: Cambridge University Press, 1995.

27. McCroskey, J.C., and McCain, T.A. The measurement of interpersonal attraction. Speech Monographs, 41 (1974), 261-266.

28. Moon, Y., and Nass, C. How "real" are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research, 23, 6 (1996), 651-674.

29. Nass, C.; Fogg, B.J.; and Moon, Y. Can computers be teammates? International Journal of Human-Computer Studies, 45 (1996), 669-678.

30. Nohria, N., and Eccles, R.G. Networks and Organizations: Structure, Form, and Action. Boston: Harvard Business School Press, 1992.

31. Perrolle, J. Computers and Social Change: Information, Property, and Power. Belmont, CA: Wadsworth, 1987.

32. Suchman, L. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge: Cambridge University Press, 1987.

33. Valacich, J.S.; Paranka, D.; and Nunamaker, J.F. Communication concurrency and the new media: a new dimension for media richness. Communication Research, 20, 2 (1993), 249-276.

34. Walther, J.B. Computer-mediated communication: impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23, 1 (February 1996), 3-43.

35. Walther, J.B., and Burgoon, J.K. Relational communication in computer-mediated interaction. Human Communication Research, 19, 1 (September 1992), 50-88.

36. Williams, F.; Rice, R.; and Rogers, E. Research Methods and the New Media. New York: Free Press, 1988.