Distributed Research Teams: Meeting Asynchronously in Virtual Space

Lia Adams, Lori Toomey, and Elizabeth Churchill
FX Palo Alto Laboratory, Inc.; 3400 Hillview Ave., Bldg. 4; Palo Alto, CA 94304
{adams, toomey, churchill}@pal.xerox.com

Abstract

As computer networks improve, more social and work interactions are carried out "virtually" by geographically separated group members. In this paper we discuss the design of a tool, PAVE, to support remote work interactions among colleagues in different time zones. PAVE extends a 2D graphical MOO and supports synchronous and asynchronous interactions. PAVE logs and indexes activities in the space. This capture facility enables playback and augmentation of meeting interactions by non-collocated group members. Thus, members can participate asynchronously in meetings they could not attend in real time, not just review them.

1. Introduction

In recent years, improvements in computer hardware, software and network infrastructures have increased digitally based interactions between geographically separated individuals. Interactions between members of these virtual communities include socializing, game playing, information sharing, sales and advertising, leisure pursuits, and collaborative work.

These on-going developments have created new working group possibilities and the emergence of digitally based "virtual communities" [12, 13]. In this paper we consider the design of tools that support such distributed work interactions. More specifically, we discuss the design of PAVE, a tool developed at FX Palo Alto Laboratory to support research meetings between non-collocated research team members. Our research focus has been to support both synchronous and asynchronous activities, enabling spatially and temporally separated collaborators to work together on projects. In the next sections we describe the working group activities we aim to support, and we review existing tools for distant collaboration. Then we describe the design of PAVE and illustrate its use with a detailed description of a meeting scenario.

2. Research Meetings

The focus of our research is on the support of small research team group meetings. Within these meetings, the primary function is to share information about project-related activities; this information sharing enables members to make decisions about project actions and goals. The groups in question have a heterarchical structure, and information flow tends to follow a connected network communication structure.

Ordinarily, such meetings occur in real time between collocated colleagues. It is clear that physical proximity enables fine-grained postural, gestural and speech interactions, and the sharing of artifacts. It also offers a natural frame for interjection, graphical and anecdotal illustration, and thus for participation in decision-making processes. Physical proximity therefore also supports feelings of participation and group membership.

Increasingly, however, our research meetings are made up of geographically separated colleagues who work in different time zones. Clearly, being geographically separated can interfere with spontaneous interjections and with sharing documents and other artifacts; group members are unable to easily follow discussion on a topic without the aid of tools like telephone conferencing and video-conferencing. When the group members must work asynchronously, real-time involvement in discussion is not possible. Thus, when asynchronous collaboration is the only option, but participation is desired, the challenge is to provide sufficient context and detail of the decision-making processes to inform group members and enable involvement and meaningful comment. Much work has already been done to address the problems of informing and involving distributed co-workers, as we summarize below.
Proceedings of the 32nd Hawaii International Conference on System Sciences - 1999. 0-7695-0001-3/99 $10.00 (c) 1999 IEEE.

3. Requirements and Related Work

In order to support our target group activity, i.e., meetings of non-collocated colleagues in different time zones, we need elements from the domains of real-time collaboration support, asynchronous groupware, and meeting capture and replay.

3.1. Supporting synchronous work

A number of tools provide awareness of the activities of other (remote) group members and enable real-time conversation and activity sharing, e.g., videoconferencing and desktop conferencing systems, media spaces, chat systems, and virtual spaces [1]. The expense and overhead of videoconferencing generally limits use of such systems to infrequent, formally scheduled events. Desktop conferencing systems support more lightweight interactions, and feature text and/or audio (and occasionally video) for communication, mechanisms for awareness, and shared tools (e.g., whiteboards and text editors) for creating and manipulating artifacts.

Some desktop conferencing systems, such as TeamRooms [26] and its successor TeamWave [29], use a "place" metaphor (i.e., there are various rooms in which people meet, some designed for specific purposes), and artifacts within those rooms are persistent across sessions. TeamRooms is primarily a text-based system, with graphic additions. Participants can communicate textually, share views of documents, and leave notes for one another. The latter is the only form of asynchronous communication. TeamWave provides the infrastructure for on-line interaction, supplemented with tools such as electronic whiteboards that mimic the tools collocated workers have at their disposal. In essence, TeamRooms tries to create a digital version of the traditional office with its opportunities and affordances for both formal and informal communication.

Other environments, such as the RoundTable product from ForeFront [27], use a meeting metaphor, and are geared specifically towards supporting formal meetings among distributed participants. In addition to text-based chat, participants may share images, documents, World Wide Web locations, audio, and video. Rather than gather together physically, each user connects to a server from his own office at an agreed upon time.

Providing a different twist on on-line meetings are systems such as DOLPHIN [19] and the Electronic Meeting Room (EMR) at the University of Hawaii, Manoa [9]. These systems are based in physical meeting rooms in which participants contribute via a keyboard and/or pen-based interface. Proponents claim that meetings are shorter because participants are able to work in parallel and "talk" (type) simultaneously.

Chat systems and place-based virtual spaces known as MUDs (Multi-User Dimensions) and MOOs (MUDs, Object-Oriented) were originally created for social purposes and games [3]. In simple chat systems, users converse with one another by typing text. MOOs and MUDs also feature textual conversations, but these exchanges occur in a place-based context, and users can extend the space and create artifacts that define a particular place (e.g., creating virtual books and tables for a library room).

More recently, MUDs and MOOs such as The Palace [22] have become graphical spaces, augmenting text chat by the immensely popular use of avatars (a visual representation such as a graphic or photo of oneself) and other images or props. Still, the focus has remained primarily social. Notable exceptions are the Jupiter system from Xerox PARC [4], Jupiter's successor PlaceWare [24], and Electric Communities [8]. Jupiter supplemented a text MOO with audio, video and interactive artifacts in order to make it a richer environment for professional work.

The popularity and sense of community created by these spaces suggest that some of their features (e.g., the place metaphor and the use of avatars) may be important elements in creating an inviting environment in which participants engage in collaborative work.

Excluding video-conferencing, all of these systems share some advantages in addition to just supporting communication between (possibly) non-collocated users. Because interaction takes place textually, it is simple to produce a written record of the proceedings. It is also possible for participants to remain anonymous if they wish. Finally, because these systems have a much lower overhead for participants versus traditional video-conferencing, they are practical for a variety of daily interactions.

These systems also share some shortcomings. Despite the claims of some desktop and video-conferencing systems, none of these technologies provides the broad spectrum of communication possible in face-to-face meetings. A key ingredient missing is the ability to transmit and sense cues such as turn-taking and acknowledgment of the floor that are often conveyed with body language and/or subtle facial expressions. These conventions are further hampered by the latency created by the input of textual utterances. Another drawback is the maturity of the available tools to support work, e.g., shared editors, whiteboards, drawing tools, etc. Rarely do the tools provided in these systems approach the sophistication we readily find in their single user counterparts. Ideally, we would like to be able to incorporate the tools users are accustomed to having available on their desktop into these shared spaces. Finally, with the exception of leaving behind notes or other shared artifacts, these systems are still limited to synchronous interaction.

3.2. Supporting asynchronous group work

Because many kinds of workplace communication cannot be done synchronously, groupware and other tools for computer-supported cooperative work have been developed for supporting asynchronous workplace activities. Tools and suites such as Lotus Notes [17] provide strong support for asynchronous communication and coordination.

The most common tools for asynchronous communication are electronic mail, newsgroups, and groupware tools like Lotus Notes. These tools not only allow a single user to dispatch information to a large number of people, but also support multi-user discussion such as brainstorming and other cooperative activity. Such systems preserve documents but do not often preserve the context in which those documents were created.

Both email and newsgroups create new documents at every communication; an extended discussion in either medium depends heavily on extensive quoting of earlier notes, to provide context for new additions. Other groupware tools focus on creating, updating, and sharing persistent documents. When co-workers collaborate asynchronously on a shared document, a coordination mechanism is needed to alert them to new work-related documents and versions, and to keep one user's updates from interfering with another's, whether they are made in disjoint or overlapping periods of time. Systems that do version control, such as RCS [30] for program source code, and Documentum [5] and DocuShare [6] for general documents, are common examples of coordination mechanisms. Other CSCW tools that manage documents may use a version-control system as part of their infrastructure, even if the end-user does not explicitly do version control operations.

More recently, asynchronous tools are incorporating features popular in synchronous tools, such as awareness and integration of graphical media and audio and video. Awareness in particular is a valuable addition to asynchronous collaboration, since awareness of co-workers increases the likelihood of collaborating with them [25]. The Piazza System [14] embeds awareness of collaborators into artifacts themselves, with pictorial icons attached to document viewers, for example. While Piazza provides awareness of synchronous use, the idea of attaching awareness to artifacts was extended to asynchronous use in Timewarp [7], where the association of collaborators with the artifacts they have manipulated can be reviewed through a historical lens. Edwards and Mynatt describe the Timewarp system (which combines versioning and time-based browsing with asynchronous coordinated resource sharing) as neither synchronous nor asynchronous collaboration; they call this hybrid model autonomous collaboration.

3.2.1. Replay. An emerging area in asynchronous computer-supported cooperative work is replay. Xerox PARC's meeting capture and salvage work [21] takes a real-time, synchronous meeting, captures the audio, the text recorded by a human note-taker, and drawings on a Tivoli [23] whiteboard, and saves them so that they can be revisited later. Any of the media can be replayed in coordination with the others. From a synchronous group activity, they produce a multi-media artifact that can later be reviewed, browsed, or studied asynchronously. Both PARC's WhereWereWe architecture [20] and work by Manohar [18] support this kind of multimedia replay.

3.3. Summary of requirements

Thus, to support virtual meetings between spatially and temporally separated colleagues, we require functionality drawn from the domains of real-time communication and collaboration, asynchronous groupware tools, and meeting capture. We require that our system be low-overhead to use, and that it support replay of the various media used in a meeting. In addition to straightforward replay, we need to be able to reconstruct decision-making processes and the context in which they occurred. This reconstruction should address the purpose of a meeting, namely to generate understanding of decisions, not just to present a list of items decided upon.

To apply the strengths of asynchronous collaborative tools to our virtual space, we must ensure that the tools that users ordinarily depend on to create and access persistent work documents are still available to them. The virtual space cannot effectively supplant these tools; instead, our goal is for the virtual space to coordinate other meeting documents. In common with real-time collaboration, we need the features of awareness and the ability to create and manipulate shared artifacts.

We hope that meetings in virtual space will share the benefits of other technology-mediated meetings, in promoting more equal participation by members [10].

4. PAVE: A virtual meeting room

In this section we describe the design and use scenario of PAVE (PAL Virtual Environment), a system intended to address the requirements specified above. PAVE is implemented on The Palace, a 2D graphical MOO with a client-server architecture. A client-server architecture was required for our prototype, so that we could instrument the central server, modifying it to capture all events from all clients and assign them a sequential ordering.

Our space, PAVE, is a 2-dimensional graphical MOO, structured as a collection of rooms connected by virtual doorways. A room in PAVE consists of a set of background graphics, in front of which users appear as customizable avatars that can carry props (graphical objects). Rooms and users can each contain event-driven scripts activated by movement to a spot in a room, in response to utterances, in response to arrivals and departures, or through other events. Scripts can play sound effects, add props to a room, execute painting commands, and drive a user's web browser to a particular URL.
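The event-driven scripting model described above can be sketched as follows. This is a minimal illustrative model only, not The Palace's actual scripting interface: the Room class, the event names, and the handler signature are our assumptions.

```python
# Toy model of event-driven room scripts: handlers keyed by event type.
# (Illustrative sketch; not The Palace's real scripting API.)
class Room:
    def __init__(self, name):
        self.name = name
        self.handlers = {}   # event type -> list of script callbacks
        self.props = []      # graphical objects currently in the room

    def on(self, event_type, script):
        """Attach a script to an event type (arrival, utterance, ...)."""
        self.handlers.setdefault(event_type, []).append(script)

    def fire(self, event_type, **details):
        """Dispatch an event to every script registered for it."""
        for script in self.handlers.get(event_type, []):
            script(self, **details)

meeting_room = Room("conference")
# A script reacting to an arrival, much as a room might play a sound
# effect or add a prop when a user enters.
meeting_room.on("arrival", lambda room, user: room.props.append(f"welcome-{user}"))
meeting_room.fire("arrival", user="adams")
print(meeting_room.props)  # ['welcome-adams']
```

The same dispatch shape would cover the other triggers the paper lists (movement to a spot, utterances, departures), each as its own event type.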

Avatars are simple GIF images; users can switch rapidly between many different avatars to change expression. All visitors to the same room can communicate textually with one another and can draw in the same space; each of them sees the same third-person viewpoint of the room, the contents, and the other people. Thus, the room constitutes a shared conversational space [15]. Users can play prerecorded sounds but cannot converse using live audio.

To share documents and other non-conversational information, users can exchange information through client-side non-WYSIWIS (What-you-see-is-what-I-see) web browsers.

In addition to these features for synchronous communication that PAVE inherits from The Palace, PAVE's server is instrumented to be able to capture all events that occur: text chat, artifact creation, movement, avatar changes, drawing, and document sharing. PAVE's server also permits users to issue WYSIWIS browser-loading commands.
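The server-side capture just described might be modeled as in the following sketch. The class and field names here are hypothetical stand-ins; the paper does not publish PAVE's actual data structures.

```python
import time
from dataclasses import dataclass

# Event types PAVE's server is described as capturing.
EVENT_TYPES = {"chat", "artifact", "movement", "avatar_change", "drawing", "document"}

@dataclass
class Event:
    seq: int          # sequential ordering assigned by the central server
    kind: str         # one of EVENT_TYPES
    user: str
    payload: str
    timestamp: float  # wall-clock time of the event

class CaptureServer:
    """Toy stand-in for PAVE's instrumented central server."""
    def __init__(self):
        self.log = []
        self._next_seq = 1

    def record(self, kind, user, payload):
        assert kind in EVENT_TYPES
        ev = Event(self._next_seq, kind, user, payload, time.time())
        self._next_seq += 1
        self.log.append(ev)
        return ev

server = CaptureServer()
server.record("chat", "adams", "Shall we start with the agenda?")
server.record("movement", "toomey", "seat-2")
print([(e.seq, e.kind) for e in server.log])  # [(1, 'chat'), (2, 'movement')]
```

Because every client event passes through the one server, a single monotonically increasing sequence number is enough to totally order the first session, which is what later makes replay straightforward.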

Figure 1: The meeting begins.

4.1. Synchronous meeting

We describe a scenario for using PAVE to conduct an asynchronous meeting within a distributed work group. Consider a project team, of which some members are located in Palo Alto and others are located in Tokyo, who will meet in virtual space to conduct a weekly project status meeting. The meeting will occur in two phases, as described below. The participants have an agenda for the meeting in the form of an HTML page. The California participants start the meeting at 10 AM Pacific Time, which is 2 AM in Tokyo.

They conduct their part of the meeting, sharing their status items, marking up a diagram under discussion, and updating the agenda to include new items or clarifications. Once the live (synchronous) part of their meeting ends, it has been captured and saved by the PAVE server. The participants then move to the patio in the virtual space to continue informal discussion that is not part of the meeting, and thus, is not recorded. They are now available for periodic informal exchanges throughout the day, or for participation in other captured meetings.

Figure 1 shows the virtual meeting room with the two California participants. As the images show, the room is a photograph of an actual meeting room familiar to all the group members. Research has indicated that using visual and spatial metaphors not only supports the use of appropriate activities but can aid a sense of "being there" [2, 28]. A primary function of metaphor is to provide a partial understanding of one kind of experience in terms of another kind of experience, providing a common ground for understanding situations and for selecting appropriate actions, thus offering a shared scaffold for activities [16].

There are a number of props in the room with obvious physical analogues. Group members are represented as static photographic avatars; these avatars are static gesturally, but may be moved anywhere in the represented room. To move avatars, group members can simply point and click where they would like their avatar to be moved. There are certain "hot spots" in the conference room; when an avatar is placed on these spots it is automatically transported into another room. In PAVE, there is a separate whiteboard room that a user can enter by positioning their avatar over the picture of the whiteboard at the edge of the meeting room. Here, all group members can draw illustrative diagrams. Because such items are persistent in this space, it is possible for group members attending later to see and annotate anything on the whiteboard.

Participants communicate by typing text into the gray text area along the bottom of the screen. When a person types text into the box, the text appears as speech in the form of a cartoon balloon, as shown in Figure 1. The participants are thus able to discuss issues in real time. Since long utterances create large balloons that remain on the screen a long time (potentially blocking part of the scene), we have noticed that frequent users get in the habit of breaking long remarks into smaller segments by introducing carriage returns. The size of the font used in balloons, as well as the length of time a balloon remains on the screen, is configurable by the user. If a user misses seeing a balloon in real time, or doesn't notice which utterance was made before another, PAVE keeps a complete scrollable log of all dialogue.

In addition, any participant can give commands from within the virtual space to bring up web pages in all present participants' browsers. If a group holding a meeting finds it too chaotic for all participants to have this ability, these commands can easily be restricted to the facilitator(s) only.

Figure 2: Screen shot showing PAVE, text log, web browser with meeting document

Figure 2 is a typical screen shot, showing the PAVE window, the text log window below it, and a separate web browser loaded with the agenda for the meeting. To conduct the meeting, one person takes the role of facilitator, stepping through the agenda items. The agenda is used by the facilitator to structure the activities but also offers a structure for later participants to follow in replay and in annotation. The system itself takes the role of recorder, logging all activities. During the meeting, a number of documents are introduced on topics of interest to the research group. When these are introduced, they can be seen by all participants in a separate browser that appears next to the conference room window. By launching web pages in this way, group members can share information they have produced or external documents that are deemed relevant.

The two California group members go through the agenda, discussing each topic, and updating the agenda as they proceed. For example, one agenda item is to discuss an upcoming conference; the two add a question to the agenda asking which members of the group want to attend.

When they finish their meeting, they conclude the PAVE session, and the capture process terminates.

4.2. Asynchronous meeting

Several hours later, when the members of the project team in Japan come in to work, they gather in the virtual space and commence the second phase of the asynchronous meeting. At 1 PM Tokyo time, the members open the captured meeting and begin their participation.

The new participants add their names to the existing list of attendees on the agenda web page. After they give a command to commence replay, each line of dialogue typed in the earlier meeting is replayed as a dialogue balloon. Avatars of the original speakers appear in the positions where they first spoke each line, but their avatars appear in black and white to distinguish them from the currently live participants. Figure 3 shows this new phase of the meeting, with two color avatars of the Tokyo participants, and two black-and-white avatars of the previous participants.

The participants in Tokyo review the remarks made during the earlier phase of the meeting, adding their own remarks, possibly asking or answering questions, and updating the agenda. The Tokyo participants can interject remarks at any time during the replay; each remark is added to the log, along with information on when it occurred.
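The interleaving of replayed and newly interjected remarks could be modeled roughly as follows. The session tag and the rendering format are assumptions for illustration; PAVE renders past speakers as black-and-white avatars rather than as text tags.

```python
# Sketch: merging replayed session-1 events with remarks interjected in
# session 2. Past speakers render as black-and-white avatars, live ones
# in color (rendering is reduced to a text tag here for illustration).
def render(event, current_session):
    style = "color" if event["session"] == current_session else "b&w"
    return f'({style}) {event["user"]}: {event["text"]}'

session1 = [
    {"session": 1, "user": "adams",  "text": "Item 1: project status."},
    {"session": 1, "user": "toomey", "text": "The demo is on track."},
]
# A Tokyo participant interjects after the first replayed remark.
interjection = {"session": 2, "user": "tanaka", "text": "Can we see the demo?"}

replayed = [session1[0], interjection, session1[1]]
for ev in replayed:
    print(render(ev, current_session=2))
```

Keeping a session field on every logged event is what lets a later viewer distinguish live contributors from replayed ones while still presenting a single merged stream.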

As documents are introduced or updated in the earlier version of the meeting, the new participants' browsers display them. For example, in this scenario, the agenda is updated to show the current item (using a change of typeface, see Figure 4), to clarify old items (the time and place of a training class), and to show new information (such as the names of later participants).

4.2.1. Effectiveness. Instead of merely seeing a summary of the meeting held in California, with a list of decisions arrived at during the meeting, the Tokyo participants can replay the full meeting, see how those decisions were arrived at, and add their own comments and information.

Figure 3: The second phase of the meeting.

They can talk and respond not only to the previously captured remarks, but also to one another in their joint replay session. Any participant may issue a command to continue replay, but as is the case with browser-loading commands, this ability can be restricted to a moderator if chaos results. The replay and augmentation of the meeting is a collaborative activity, just as the earlier phase was. The contributions of each participant are seen in the context of the project meeting.

People who revisit the meeting after this second phase will see the entire composite meeting, including black and white avatars for all previous participants, and color avatars for those new participants meeting in real time. (Avatars can be shown with session numbers attached, to make obvious which previous participants were present at the same time.) The composite meeting scene shows all the participants present as a team. Thus, the meeting representation, while allowing viewers to determine who was present when, presents each participant as a team member, rather than distinguishing people as original contributors versus responders.

4.2.2. Implementation. As mentioned above, instrumenting the server to capture all events in the virtual space is the key to supporting asynchronous meetings.

Figure 4: The updated agenda.

All events are time-stamped, numbered, and associated with a session.

The time-stamp captures the actual date and time of the event, while the event number indicates position in the sequence of events. In the initial session, event numbers are integers beginning with 1. During subsequent playback sessions, event numbers are assigned using a hierarchical scheme (e.g., 1.1) to indicate insertion between events from earlier sessions. For replay views (discussed in the next sections), users specify which ordering of events they want to use. The real clock time-stamps provide an absolute chronological order, while the event numbers provide a "logical" order that interweaves events from multiple sessions.
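A minimal sketch of this hierarchical numbering, assuming event numbers are dotted strings compared as tuples of integers; the helper names are ours, not PAVE's:

```python
# Hypothetical sketch of PAVE's hierarchical event numbering.
# Session-1 events get integer numbers 1, 2, 3, ...; a remark interjected
# during replay between events 2 and 3 gets 2.1, the next one 2.2, etc.

def logical_key(number: str):
    """Turn an event number like '2.1' into a sortable tuple (2, 1)."""
    return tuple(int(part) for part in number.split("."))

def interject(after: str, already_inserted: int) -> str:
    """Number for a new event inserted after event `after`."""
    return f"{after}.{already_inserted + 1}"

original = ["1", "2", "3"]
new = [interject("2", 0), interject("2", 1)]   # -> "2.1", "2.2"
merged = sorted(original + new, key=logical_key)
print(merged)  # ['1', '2', '2.1', '2.2', '3']
```

Tuple comparison gives exactly the logical order described: (2,) sorts before (2, 1), which sorts before (3,), so interjections land between the session-1 events they follow.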

The capture of external events, including document updates, which is built on a version control facility, enables decisions to be unraveled to note all disagreements and how resolution was achieved.

Participants at each site can continue a meeting in later phases, using tools that report when a meeting contains new content and direct them to the new material without having to replay previously-seen material. (This feature is a familiar one in groupware and newsgroup tools, and will be essential for communication through asynchronous augmentation.)
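The "what's new" reporting could work roughly like newsgroup unread marks: a per-user high-water mark over the event log. A hypothetical sketch (not PAVE's code):

```python
class NewContentTracker:
    """Report only the meeting events a user has not yet reviewed."""

    def __init__(self):
        self.seen = {}  # user name -> count of events already reviewed

    def whats_new(self, user, events):
        start = self.seen.get(user, 0)
        fresh = events[start:]           # material added since the last visit
        self.seen[user] = len(events)    # advance the high-water mark
        return fresh
```

On a first visit everything is new; on a return visit the user is directed straight to the material added in the meantime.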

4.3. Accessing aspects of the meeting

We offer participants a number of ways of accessing important parts of the meeting. These are the Replay view, the Text view, and the MultiView.

4.3.1. The Replay view. The asynchronous meeting scenario described above uses the Replay view, consisting of the replay and annotation of the whole meeting by stepping through each scene. The Replay view includes the meeting room in PAVE, the text log, and the web browser documents (displayed in a separate web browser). In the Replay view, later participants view the dialogue in its original order, interjecting their own remarks, hearing and playing sound cues, seeing and adding to drawings, viewing and updating auxiliary documents as they were introduced into the meeting, and creating new documents at any point in the replayed time-stream. A simple time line is shown at the bottom of the PAVE window showing how much of the meeting has already been played back. In Figure 3, this time line is shown in green.
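The stepping behaviour just described can be sketched as a generator that yields each captured event together with the fraction of the meeting already played back, which is the quantity a progress time line would display. This is an illustrative reconstruction, not PAVE's implementation.

```python
def replay(events):
    """Step through captured events in order, reporting progress.

    Yields (event, fraction_played) pairs; the fraction is what a
    time line at the bottom of the window would draw.
    """
    total = len(events)
    for position, event in enumerate(events, start=1):
        yield event, position / total
```

A client can pause between yields to let a later participant interject a remark, which would itself be captured as a new sub-numbered event.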

The Replay view offers an opportunity for non-collocated group members to participate in a meeting with coworkers, experiencing the same state of the virtual meeting room, the audio cues, and shared documents as they would have if they had attended the meeting synchronously with their coworkers. By playing back events using the logical order or time-stamps, all remarks are seen in their original context.

4.3.2. Text view. While the Replay view offers a full presentation of a meeting, the Text view is designed for quicker perusal. In the Text view, the log of all text uttered in the virtual space is the primary presentation of the meeting contents. For each remark made or sound recorded in the log, the Text view shows the speaker, the time-stamp, and a time-line number showing in which thread the remark was made. (A new meeting happening on-line defines thread 1; each subsequent visit, by a group member or members, defines the next higher thread number.)

The Text view supports rapid scanning of the text log, including forward and backward search for keywords. It offers a quick way to peruse the text through skimming, keyword search, and scrolling through the meeting.

If the user sees something of interest, they can stop skimming and start participating. From any line in the Text view, the reader can follow links to the other views of the meeting. The views appear in the state they were in at the time the line of text was uttered. From this view, the viewer can navigate from the textual content of the meeting (the words of the participants) to its context: what words had already been said, awareness of who was present, the appearance of all avatars, and the state of shared artifacts. Thus, the Text view presents a fairly compact representation of a meeting, while allowing reader navigation to richer media.
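Following a link from a text line to the views "in the state they were in" amounts to replaying the event log up to that line. A sketch of that reconstruction, with invented event fields (the paper does not describe PAVE's event schema):

```python
def state_at(events, line_index):
    """Reconstruct who was present and what had been said up to
    (and including) a given line of the text log."""
    present, said = set(), []
    for event in events[: line_index + 1]:
        if event["kind"] == "enter":
            present.add(event["who"])
        elif event["kind"] == "leave":
            present.discard(event["who"])
        elif event["kind"] == "say":
            said.append((event["who"], event["text"]))
    return present, said
```

The same pass could accumulate avatar appearance and shared-artifact state; only presence and dialogue are shown here to keep the sketch short.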

Figure 5: Text view of the asynchronous meeting.

4.3.3. MultiView: threaded activity lines. Finally, we are developing a viewpoint that shows several graphical views onto the meeting activities; we call this the MultiView. The MultiView appears in a separate web page as a scrollable web frame. A prototype MultiView interface is shown in Figure 6. This interface is similar to that offered by Ginsberg and Ahuja [11]. In their interface they offer a visual history or record of distributed multimedia collaborations by showing meta-information visualizations of virtual meetings.

Figure 6: Prototype of MultiView.

Our MultiView meeting reconstruction interface utilizes information about the activities of “live” participants by considering activity types going through the central server. The MultiView is then automatically created. The MultiView interface shows four contemporaneous meeting cues in rows, indexed by time-stamp. Rows are created on the basis of when events occur: the events are text speech (i.e., when the server distributes text speech to the clients), when shared web pages are called up, when drawing occurs in the Whiteboard Room, and when sounds are played. As all meeting activities are logged through our central server, it is easy to graphically reconstruct the “state of the world” at any given time. Again, this is similar to the model described by Ginsberg and Ahuja, in that their view is constructed on the basis of information about which multimedia channels are operative.

In the MultiView, the meeting activity cues are represented graphically, and sounds that were played are shown by the command issued to play the sound. Rows are ordered in time sequence; thus row 1 occurred before row 2, and so on. The first column indicates the time at which events occurred. The second column shows the room itself and shows who was present and speaking, as well as the time for the participants (note that times shown are based on the geographical location of the original meeting participants). The third column offers a view onto the web document that was being discussed at the time. The fourth column offers a view onto the state of the whiteboard at the time of the discussion. The fifth column shows what sound effects were heard during the represented time “snapshot”. Annotations appear as highlighted comments in the sixth column of the MultiView. Annotations can be added at any time; a time-stamp is automatically added for each, as well as the initials of the user that left the annotation. Finally, there is an arrow pointer that monitors where in the meeting the playback onset is currently set.
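Building these rows from the central log is essentially a bucketing pass over the time-stamped events. A rough sketch follows; the cue-column names and the one-minute slot width are assumptions for illustration, not details taken from the paper.

```python
from collections import defaultdict

CUE_COLUMNS = ("speech", "web", "whiteboard", "sound")  # hypothetical names

def build_rows(events, slot_seconds=60):
    """Group (timestamp, cue, payload) events into time-ordered rows,
    one cell per cue type, indexed by the start of each time slot."""
    rows = defaultdict(lambda: {cue: [] for cue in CUE_COLUMNS})
    for timestamp, cue, payload in events:
        slot = int(timestamp // slot_seconds) * slot_seconds
        rows[slot][cue].append(payload)
    # Earlier rows come first, matching the row-1-before-row-2 ordering.
    return [(slot, rows[slot]) for slot in sorted(rows)]
```

Each returned row corresponds to one time “snapshot”, with the per-cue cells supplying the room, web-document, whiteboard, and sound columns.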

By selecting an image, the meeting replay can be restarted at the point in time when the events were occurring, and can be reviewed and/or annotated. As the MultiView interface is in a separate web page, it remains visible while the meeting is played back. As events are played, the indicator in the far right column moves through the column to indicate where the playback currently is. To stop playback, the user simply clicks on the arrow and playback freezes (as in freeze frame on a video). The arrow is static if no playback is active but moves when playback restarts. The user can drag the arrow to particular rows to continue the playback.

We believe this viewpoint enables viewers to select segments of the meeting to play back on the basis of their own interests. This viewpoint also gives a rich yet easily browsable overview of the meeting activities.

5. Summary

In the above section we described a meeting scenario illustrating how PAVE can support both synchronous and asynchronous activities in a virtual space.

Rather than reading a simple textual summary of what took place in a meeting, interested parties may play back an actual meeting and observe the decision-making process. This playback includes seeing each participant “speak” and change expressions, and watching the surrounding context of documents and whiteboard drawings update as the meeting progresses. Users may insert themselves into the meeting by adding their own contributions, which provides a sense of participation, not just observation. These additions to the meeting are in turn captured, and can be included in subsequent playbacks.





The Replay view, Text view, and MultiView support different goals for navigating through a single meeting, letting the user participate in, review, or recall meeting events.

Thus far, we have developed only a prototype version of our design to let us experiment with various ways to create and navigate our captured meetings. Implementing a robust version of this system depends on two kinds of infrastructure: a version-control system for storing separate versions of documents and meeting state over time, and a time-line mechanism. Even in our prototype version, preliminary data indicates that the size of storage required to capture a substantial meeting in PAVE is quite a bit less than the comparable space required by video.

A time-line mechanism such as TimeWarp’s [7] would provide the flexibility to permit timelines to diverge and be rejoined. (Timelines that diverge could arise when distributed groups each revisit a meeting and update it with new interactions simultaneously.) Currently, our simple client-server architecture forces all updates to go through a central server, making it impossible for timelines to diverge. But in a more truly distributed implementation, in which separate groups may update a meeting locally and then periodically synchronize with a remote site, timeline divergence is quite possible.
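One way a rejoin could reuse the hierarchical event numbers described earlier: both branches share the original events, each branch's offline insertions carry sub-numbers, and a merge is then a sort by logical number with the session as tie-break. This is a sketch under those assumptions, not a description of TimeWarp [7] or of any implemented PAVE facility.

```python
def rejoin(*branches):
    """Merge event tuples (logical_number, session, text) from
    timelines that diverged while separate sites updated offline."""
    merged = set().union(*branches)  # shared original events collapse to one copy
    return sorted(
        merged,
        key=lambda e: ([int(part) for part in e[0].split(".")], e[1]),
    )
```

Two sites that each appended a reply numbered 1.1 end up with both replies placed after event 1, ordered by session.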

6. Conclusions

In this paper we have shown how a two-dimensional graphical MOO can be used to support interactions between group members who are spatially and temporally separated. The prototype system, PAVE, captures a complete log of all interactions and shared documents. Given this automatic logging and indexing, and the facility for augmenting captured events, the meeting as a whole is preserved in a way that supports contextually rich meeting reconstruction. The rich hyper-textual log exceeds what can be conveyed through traditional meeting documents like the agenda and minutes. Further, the multiple replay views and the facility for augmentation offer greater flexibility in reviewing and replaying meeting processes than is possible through capturing and replaying meetings on videotape. We argue that this flexibility in how a meeting can be read or revisited offers a strong sense of involvement and participation.

7. Acknowledgements

The authors would like to thank Steve Smoliar for encouraging us to write this paper. We would also like to thank Les Nelson and Eleanor Rieffel for taking part in numerous meetings in our virtual space. We thank Sara Bly and Ellen O’Connor for their encouragement and valuable comments. We thank Polle Zellweger for her helpful comments on an earlier version of this material.

8. References

[1] Bly, Sara A., Harrison, S.R. and Irwin, Susan. Media Spaces: Bringing People Together in a Video, Audio and Computing Environment. Communications of the ACM 36, 1 (January 1993), 28-47.

[2] Carroll, J.M., Mack, R.L. and Kellogg, W.A. Interface Metaphors and User Interface Design. In Handbook of Human-Computer Interaction, M. Helander (Ed.), Elsevier Science Publishers, 67-85.

[3] Curtis, Pavel. Mudding: Social Phenomena in Text-Based Virtual Realities, in Proceedings of the Directions and Implications of Advanced Computing (DIAC’92) Symposium, (Berkeley CA, 1992).

[4] Curtis, Pavel and Nichols, David. MUDs Grow Up: Social Virtual Reality in the Real World, in Proceedings of the Third International Conference on Cyberspace, (Austin TX, 1993).

[5] Documentum web site: www.documentum.com.

[6] DocuShare home page: www.xerox.com/products/docushare.

[7] Edwards, W. Keith and Mynatt, Elizabeth D. Timewarp: Techniques for Autonomous Collaboration, in Proceedings of CHI 97, (Atlanta GA, March 1997), ACM Press, 218-225.

[8] Electric Communities web site: www.communities.com.

[9] The Electronic Meeting Room web page: www.cba.hawaii.edu/emr/home.htm.

[10] Fulk, J. and Collins-Jarvis, L. Wired Meetings: Technological Mediation of Organizational Gatherings. In New Handbook of Organizational Communication, F. Jablin & L. Putnam (Eds.), Newbury Park: Sage, 1998.

[11] Ginsberg, A. and Ahuja, S. Automating Envisionment of Virtual Meeting Room Histories, in Proceedings of Multimedia ’95, (San Francisco, November 1995), ACM Press, 65-75.

[12] Graham, Stephen and Marvin, Simon. Telecommunications and the City: Electronic Spaces, Urban Places, Routledge, London.

[13] Harasim, L. Global Networks: Computers and International Communication, London and Cambridge, Mass: MIT Press, 1993.

[14] Isaacs, E., Tang, J. and Morris, T. Piazza: A Desktop Environment Supporting Impromptu and Planned Interactions, in Proceedings of the 1996 Conference on Computer Supported Cooperative Work, (Cambridge MA, November 1996), ACM Press, 315-324.

[15] Kendon, A. Conducting Interaction: Patterns of Behavior in Focused Encounters, Cambridge University Press, 1990.

[16] Lakoff, G. and Johnson, M. Metaphors We Live By, The University of Chicago Press, 1980.

[17] Lotus Notes web site: www2.lotus.com.

[18] Manohar, N. and Prakash, A. The Session Capture and Replay Paradigm for Asynchronous Collaboration, in Proceedings of CHI 95, (Denver CO, May 1995), ACM Press, 218-225.

[19] Mark, Gloria, Haake, Jorg and Streitz, Norbert. Hypermedia Structures and the Division of Labor in Meeting Room Collaboration, in Proceedings of the 1996 Conference on Computer Supported Cooperative Work, (Cambridge MA, November 1996), ACM Press, 170-179.
