
The Conference Assistant: Combining Context-Awareness with Wearable Computing

Anind K. Dey, Daniel Salber, Gregory D. Abowd
GVU Center, College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
+1 404 894 7512
{anind, salber, abowd}@cc.gatech.edu

Masayasu Futakawa
Hitachi Research Laboratory
7-1-1 Omika-cho, Hitachi-shi, Ibaraki-ken, 319-1221, Japan

[email protected]

http://www.cc.gatech.edu/fce/contexttoolkit

Abstract

We describe the Conference Assistant, a prototype mobile, context-aware application that assists conference attendees. We discuss the strong relationship between context-awareness and wearable computing and apply this relationship in the Conference Assistant. The application uses a wide variety of context and enhances user interactions with both the environment and other users. We describe how the application is used and the context-aware architecture on which it is based.

1. Introduction

In human-human interaction, a great deal of information is conveyed without explicit communication, but rather by using cues. These shared cues, or context, help to facilitate grounding between participants in an interaction [3]. We define context to be any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or physical or computational object.

In human-computer interaction, there is very little shared context between the human and the computer. Context in human-computer interaction includes any relevant information about the entities in the interaction between the user and computer, including the user and computer themselves. By improving computers' access to context, we increase the richness of communication in human-computer interaction and make it possible to produce more useful computational services. We define applications that use context to provide task-relevant information and/or services to a user to be context-aware.
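The definition above can be made concrete as a small data record. This is a minimal sketch of our own; the paper prescribes no schema, and all field names here are illustrative.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class ContextRecord:
    """Information characterizing the situation of one entity
    (a person, place, or physical or computational object)."""
    entity: str                  # kind of entity, e.g. "person" or "room"
    identity: str                # who or what the information is about
    location: str                # where the entity is
    time: float                  # when the observation was made
    activity: str                # what the entity is doing
    extras: Dict[str, Any] = field(default_factory=dict)

record = ContextRecord(
    entity="person", identity="attendee-42",
    location="Room A", time=0.0, activity="viewing presentation")
```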

Context rapidly changes in situations where the user is mobile. The changing context can be used to adapt the user interface of an application, providing relevant services and information to the user. While context is important to mobile computing in general, it is of particular interest to wearable computing. This is evident from the number of papers dealing with context-awareness in previous Symposia on Wearable Computers.

Rhodes [13] presented a list of defining characteristics for wearable computers. In each of these features, context plays an important role.

Portable while operational: A wearable computer is capable of being used while the user is mobile. When a user is mobile, her context is much more dynamic. She is moving through new physical spaces, encountering new objects and people. The services and information she requires will change based on these new entities.

Hands-free use: A wearable computer is intended to be operated with minimal use of the hands, relying on speech input or one-handed chording keyboards and joysticks. Limiting the use of traditional input mechanisms (and somewhat limiting the use of explicit input) increases the need to obtain implicitly sensed contextual information.

Sensors: To enhance explicit user input, a wearable computer should use sensors to collect information about the user's surrounding environment. Rhodes intended that the sensors be worn on the body, but the real goal is for the sensed information to be available to the wearable computer. This means that sensors can be not only on the body, but also in the environment, as long as the wearable computer has a method for obtaining the sensed environmental information.

Proactive: A wearable computer should act on its user's behalf even when the user is not explicitly using it. This is the essence of context-aware computing: the computer analyzes the user's context and makes task-relevant information and services available to the user, interrupting the user when appropriate.

Always on: A wearable computer is always on. This is important for context-aware computing because the wearable computer should continuously monitor the user's situation or context so that it can adapt and respond appropriately. It is able to provide useful services to the user at any time.

From a survey on context-aware computing [6], we found that most context-aware applications use a minimal variety of context. In general, the use of context is limited to identity and location, neglecting both time and activity. Complex context-aware applications are difficult to build. By complex, we mean applications that not only deal with a wide variety of context, but also take into account the contexts of multiple people or entities and both real-time and historical context, use a single piece of context for multiple purposes, and support interactions between multiple users, mobile and wearable computers, and the environment. This family of applications is hard to implement because there has been little support for thinking about and designing them.

This paper describes the design of a complex context-aware application that addresses these issues. We present the Conference Assistant, a context-aware application for assisting conference attendees and presenters. We demonstrate how context is used to aid users and describe how the application was built. In particular, we present some concepts that make it easier to design complex context-aware applications.

2. The Conference Assistant

In this section, we present the Conference Assistant, a complex prototype application that addresses the deficiencies we pointed out in previous context-aware applications. The Conference Assistant is a context-aware application intended for use by conference attendees. We describe the conference domain, show why it is appropriate for context-awareness and wearable computing, and provide a scenario of use. We then discuss the context used in this application and how it was used to benefit the user. We end with a discussion of the types of context-aware features the Conference Assistant supports.

2.1. Conference Domain

The Conference Assistant was designed to assist people attending a conference. We chose the conference domain because conferences are very dynamic and involve an interesting variety of context. A conference attendee is likely to have similar interests as other attendees. There is a great deal of concurrent activity at large conferences, including paper presentations, demonstrations, special interest group meetings, etc., at which a large amount of information is presented. We built the Conference Assistant to help users decide which activities to attend, to provide awareness of the activities of colleagues, to enhance interactions between users and the environment, to assist users in taking notes on presentations, and to aid in the retrieval of conference information after the conference concludes.

A wearable computer is very appropriate for this application. The Conference Assistant uses a wide variety of context: time, identity, location, and activity. It promotes interaction between simultaneous users of the application and has a large degree of interaction with the user's surrounding environment. Revisiting Rhodes' list of wearable computer characteristics, we can show how the domain is applicable for wearable computing.

Portable while operational: During a conference, a user is mobile, moving between presentation and demonstration spaces, with rapidly changing context.

Hands-free use: During a conference, hands should be free to take notes, rather than interacting with a computer to collect context information.

Sensors: Sensors in the environment can provide useful information about the conference to the user, including presentation information and activities of colleagues.

Proactive and Always on: In a conference, a user wants to pay attention to what is being presented while maintaining an awareness of other activities. A wearable computer can provide this awareness without explicit user requests.

2.2. User Scenario

Now that we have demonstrated the utility of context-awareness and wearable computing in the conference domain, we present a user scenario for the Conference Assistant. A user is attending a conference. When she arrives at the conference, she registers, providing her contact information (mailing address, phone number, and email address), a list of research interests, and a list of colleagues who are also attending the conference. In return, she receives a copy of the conference proceedings and an application, the Conference Assistant, to run on her wearable computer. When she starts the application, it automatically displays a copy of the conference schedule, showing the multiple tracks of the conference, including both paper tracks and demonstration tracks. On the schedule (Figure 1), certain papers and demonstrations are highlighted (light gray) to indicate that they may be of particular interest to the user.


Figure 1. Screenshot of the augmented schedule, with suggested papers and demos highlighted (light-colored boxes) in the three (horizontal) tracks.

The user takes the advice of the application and walks towards the room of a suggested paper presentation. When she enters the room, the Conference Assistant automatically displays the name of the presenter and the title of the presentation. It also indicates whether audio and/or video of the presentation are being recorded. This affects the user's behavior: she takes fewer or more notes depending on the extent of the recording available. The presenter is using a combination of PowerPoint and Web pages for his presentation. A thumbnail of the current slide or Web page is displayed on the wearable computer display. The Conference Assistant allows the user to create notes of her own to "attach" to the current slide or Web page (Figure 2). As the presentation proceeds, the application displays updated slide or Web page information. The user takes notes on the presented information using the Conference Assistant. The presentation ends and the presenter opens the floor for questions. The user has a question about the presenter's tenth slide. She uses the application to control the presenter's display, bringing up the tenth slide and allowing everyone in the room to view the slide in question. She uses the displayed slide as a reference and asks her question. She adds her notes on the answer to her previous notes on this slide.

Figure 2. Screenshot of the Conference Assistant note-taking interface.

Figure 3. Screenshot of the partial schedule showing the location and interest level of colleagues. Symbols indicate interest level.

After the presentation, the user looks back at the conference schedule display and notices that the Conference Assistant has suggested a demonstration to see based on her interests. She walks to the room where the demonstrations are being held. As she walks past demonstrations in search of the one she is interested in, the application displays the name of each demonstrator and the corresponding demonstration. She arrives at the demonstration she is interested in. The application displays any PowerPoint slides or Web pages that the demonstrator uses during the demonstration. The demonstration turns out not to be relevant to the user, and she indicates her level of interest to the application. She looks at the conference schedule and notices that her colleagues are in other presentations (Figure 3). A colleague has indicated a high level of interest in a particular presentation, so she decides to leave the current demonstration and attend that presentation. The user continues to use the Conference Assistant throughout the conference for taking notes on both demonstrations and paper presentations.

Figure 4. Screenshots of the retrieval application: query interface and timeline annotated with events (4a), and captured slideshow and recorded audio/video (4b).

She returns home after the conference and wants to retrieve some information about a particular presentation. The user executes a retrieval application provided by the conference. The application shows her a timeline of the conference schedule with the presentation and demonstration tracks (Figure 4a). The application uses a feature known as context-based retrieval [9]. It provides a query interface that allows the user to populate the timeline with various events: her arrival and departure from different rooms, when she asked a question, when other people asked questions or were present, when a presentation used a particular keyword, or when audio or video were recorded. By selecting an event on the timeline (Figure 4a), the user can view (Figure 4b) the slide or Web page presented at the time of the event, audio and/or video recorded during the presentation of the slide, and any personal notes she may have taken on the presented information. She can then continue to view the presentation, moving back and forth between the presented slides and Web pages.

In a similar fashion, a presenter can use a third application to retrieve information about his/her presentation. The application displays a timeline of the presentation, populated with events marking when different slides were presented, when audience members arrived at and left the presentation (and their identities), the identities of audience members who asked questions, and the slides relevant to the questions. The interface is similar to that shown in Figure 4. The presenter can 'relive' the presentation by playing back the audio and/or video and moving between presentation slides and Web pages.

2.3. Use of Context

The Conference Assistant uses a wide variety of context to provide both services and information to users. We first describe the context used in real time to assist a conference attendee during a conference, and then describe the historical context used after the conference by a conference attendee and a presenter.

When the user is attending the conference, the application first uses information about what is being presented at the conference and her personal interests to determine which presentations might be of particular interest to her. The application uses her location, the activity (presentation of a Web page or slide) in that location, and the presentation details (presenter, presentation title, whether audio/video is being recorded) to determine what information to present to her. The text from the slides is saved for the user, allowing her to concentrate on what is being said rather than spending time copying down the slides. The context of the presentation (the presentation activity has concluded, and the number and title of the slide in question) facilitates the user's asking of a question. The context is used to control the presenter's display, changing to the particular slide about which the user had a question.
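The paper does not specify how interests are matched to presentations; a minimal sketch, assuming a simple keyword overlap between the user's registered interests and per-presentation keyword lists, might look like this (all data shapes are our own):

```python
# Flag presentations whose keywords overlap the user's registered
# interests -- a stand-in for the Recommender's matching step.
def suggest(presentations, interests):
    wanted = {i.lower() for i in interests}
    return [p["title"] for p in presentations
            if wanted & {k.lower() for k in p["keywords"]}]

schedule = [
    {"title": "Wearable Sensors", "keywords": ["sensors", "wearables"]},
    {"title": "Agent Economics", "keywords": ["agents", "markets"]},
]
print(suggest(schedule, ["Sensors", "HCI"]))  # → ['Wearable Sensors']
```

The matched titles would then be highlighted on the schedule display (Figure 1).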

The list of colleagues provided during registration allows the application to present other relevant information to the user. This includes both the locations of colleagues and their interest levels in the presentations they are currently viewing. This information is used for two purposes during a conference. First, knowing where other colleagues are helps an attendee decide which presentations to see herself. For example, if two interesting presentations occur simultaneously and a colleague is attending one of them (and can provide information about it later), the user can choose to attend the other presentation. Second, as described in the user scenario, when a user is attending a presentation that is not relevant or interesting to her, she can use the context of her colleagues to decide which presentation to move to. This is a form of social or collaborative information filtering [15].

After the conference, the retrieval application uses the conference context to retrieve information about the conference. The context includes public context, such as the times when presentations started and stopped, whether audio/video was captured at each presentation, the names of the presenters, the presentations and the rooms in which they occurred, and any keywords the presentations mentioned. It also includes the user's personal context, such as the times at which she entered and exited a room, the rooms themselves, when she asked a question, and what presentation and slide or Web page the question was about. The application also uses the context of other people, including their presence at particular presentations and any questions they asked. The user can use any of this context information to retrieve the appropriate slide or Web page and any recorded audio/video associated with the context.

After the conference, a presenter can also use the conference context to obtain information relevant to his/her presentation. The presenter can obtain information about who was present for the presentation, the times at which each slide or Web page was visited, and who asked questions and about which slides. Using this information, along with the text captured from each slide and any audio/video recorded, the presenter can play back the entire presentation and question session.

2.4. Context-aware Features

Pascoe [11] introduced a set of four context-aware capabilities that applications can support. We present each capability and show how the Conference Assistant supports it.

Contextual sensing: A system detects context and simply presents it to the user, augmenting the user's sensory system. The Conference Assistant presents the user's current location, the name of the current presentation and presenter, the location of colleagues, and colleagues' levels of interest in their presentations.

Contextual adaptation: A system uses context to adapt its behavior instead of providing a uniform interface in all situations. When a new presentation slide or Web page is presented, the Conference Assistant saves the user's notes from the previous slide and creates an empty textbox in which notes on the new current slide can be entered.

Contextual resource discovery: A system can locate and use resources that share part or all of its context. When a user enters a presentation/demonstration area, the Conference Assistant creates a temporary binding to the presentation server in the local environment. The shared context is location. This binding allows the application to obtain changes to the local presentation/demonstration.

Contextual augmentation: A system augments the environment with additional information, associating digital data with the current context. All the notes that a user makes on presented information are augmented with contextual information (location, presentation title and presenter, and time). This augmentation supports retrieval of the notes using context-based retrieval techniques.

The Conference Assistant exploits all four of the context-aware capabilities presented by Pascoe. These capabilities are used to provide substantial benefits to both conference attendees and presenters.
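Two of these capabilities, contextual adaptation and contextual augmentation, can be illustrated together in a short sketch. This is not the actual implementation; the class and key names are ours, and the trigger is the slide-change event described above: the current note is saved (adaptation) and stamped with its context (augmentation).

```python
import time

class NoteTaker:
    """Saves the current note and opens a fresh one on each slide change."""
    def __init__(self, user, location):
        self.user, self.location = user, location
        self.current_slide, self.buffer, self.saved = None, "", []

    def on_slide_change(self, slide_title, presenter):
        if self.current_slide is not None:
            # Contextual augmentation: stamp the note with the
            # context in which it was written.
            self.saved.append({
                "text": self.buffer, "slide": self.current_slide,
                "presenter": presenter, "location": self.location,
                "time": time.time(), "user": self.user})
        # Contextual adaptation: empty textbox for the new slide.
        self.current_slide, self.buffer = slide_title, ""
```

The saved context fields are exactly what the retrieval application would later index on.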

3. Application Design

In this section, we describe the design of the Conference Assistant. We illustrate the software architecture of the application, as well as the context-aware architecture on which it was built. We discuss the concepts the architecture supports that make it easier to build and evolve context-aware applications. Finally, we describe the hardware used to deploy the application.

3.1. Software

The Conference Assistant is a complex context-aware application. It uses a wide variety of context, supporting both interaction between a single user and the environment and interaction between multiple users. This application would have been difficult to build without a great deal of underlying support. It was built on top of an architecture designed to support context-aware applications [5]. We first briefly describe this architecture and then show how it was used to build the Conference Assistant.

3.1.1. Context Architecture

In previous work [5,14], we presented an architecture and toolkit that we designed and implemented to support the building of context-aware applications. We briefly discuss the components of the architecture and its merits.

The architecture consists of three types of components: widgets, servers, and interpreters. They implement the concepts necessary for easing the development of context-aware applications. Figure 5 shows the relationship between the context components and applications.

For more information on the context architecture, see http://www.cc.gatech.edu/fce/contexttoolkit

Context widgets encapsulate information about a single piece of context, such as location or activity. They provide a uniform interface to components or applications that use the context, hiding the details of the underlying context-sensing mechanism(s).

Figure 5. Relationship between applications and the context architecture. Arrows indicate data flow.
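The widget concept can be sketched as follows. This is a minimal illustration of our own, not the toolkit's actual API: a widget exposes a uniform subscribe/notify interface, and the sensing mechanism behind it stays hidden from subscribers.

```python
class ContextWidget:
    """Uniform interface over an arbitrary context-sensing mechanism."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def _sensor_event(self, data):
        # Called by whatever sensor backs this widget; subscribers
        # never see the sensing details, only the context data.
        for cb in self._subscribers:
            cb(data)

class LocationWidget(ContextWidget):
    def tag_sighted(self, user, room):   # e.g. driven by an RF-tag sighting
        self._sensor_event({"user": user, "location": room})
```

An application subscribed to a `LocationWidget` receives the same callback whether the underlying sensor is an RF tag, a badge reader, or something else.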

A context server is very similar to a widget, in that it supports the same set of features as a widget. The difference is that a server aggregates multiple pieces of context. In fact, it is responsible for all the context about a particular entity (person, place, or object). Aggregation facilitates the access of context by applications that are interested in multiple pieces of context about a single entity.
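The aggregation idea can be sketched as below; again this is illustrative only, with a minimal stand-in widget, and is not the toolkit's real interface. The server subscribes to several widgets and keeps the latest context about one entity.

```python
class _Widget:                      # minimal stand-in for a context widget
    def __init__(self):
        self._subs = []
    def subscribe(self, cb):
        self._subs.append(cb)
    def fire(self, data):
        for cb in self._subs:
            cb(data)

class UserServer:
    """Aggregates every piece of context about a single user."""
    def __init__(self, user_id, widgets):
        self.user_id = user_id
        self.state = {}                 # latest context, keyed by attribute
        for w in widgets:
            w.subscribe(self._update)   # one subscription per widget

    def _update(self, data):
        data = dict(data)
        if data.pop("user", None) == self.user_id:
            self.state.update(data)     # e.g. {"location": "Room A"}

    def poll(self, attribute):
        return self.state.get(attribute)

loc_widget, memo_widget = _Widget(), _Widget()
server = UserServer("attendee-42", [loc_widget, memo_widget])
loc_widget.fire({"user": "attendee-42", "location": "Room A"})
print(server.poll("location"))  # → Room A
```

Applications then ask one server for everything about the user ("one-stop shopping") instead of tracking each widget themselves.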

A context interpreter is used to abstract or interpret context. For example, a context widget may provide location context in the form of latitude and longitude, but an application may require the location in the form of a street name. A context interpreter may be used to provide this abstraction.
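The latitude/longitude example above might be sketched as follows. The coordinate table and tolerance are invented purely for illustration; a real interpreter would consult map data.

```python
# Hypothetical lookup table mapping coordinates to street names.
STREETS = {
    (33.776, -84.399): "Atlantic Drive",
    (33.777, -84.396): "Ferst Drive",
}

def interpret_location(lat, lon, places=STREETS, tolerance=0.001):
    """Abstract low-level (lat, lon) context into a street name."""
    for (plat, plon), street in places.items():
        if abs(lat - plat) <= tolerance and abs(lon - plon) <= tolerance:
            return street
    return None

print(interpret_location(33.7761, -84.3992))  # → Atlantic Drive
```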

Context components are instantiated and executed independently of each other, in separate threads and on separate computing devices. The architecture makes the distribution of the context components transparent to context-aware applications, mediating all communications between applications and components. It supports communications using the HyperText Transfer Protocol (HTTP) for both the sending and receiving of messages. The language used for sending data is the eXtensible Markup Language (XML). XML and HTTP were chosen because they support lightweight integration of distributed components and facilitate access to the architecture on heterogeneous platforms with multiple programming languages. The only requirements on devices using the architecture are that they support ASCII parsing and TCP/IP. These minimal requirements are particularly important for mobile and wearable computers, for which communications support tends to be limited.
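A subscription message in this style might be built as below. The element names are invented for illustration; the paper specifies only that XML is carried over HTTP, not the message schema.

```python
import xml.etree.ElementTree as ET

def subscription_message(callback_host, attribute):
    """Build a hypothetical XML subscription request for a widget."""
    msg = ET.Element("subscribe")
    ET.SubElement(msg, "callback").text = callback_host
    ET.SubElement(msg, "attribute").text = attribute
    return ET.tostring(msg, encoding="unicode")

body = subscription_message("wearable-1.cc.gatech.edu:7000", "location")
# The body would then be POSTed to the widget's HTTP port; any device
# that can parse ASCII and speak TCP/IP can participate.
print(body)
```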

The context architecture promotes three main concepts for building context-aware applications: separation of context sensing from context use, aggregation, and abstraction. It relieves application developers from having to deal with how to sense and access context information, letting them concentrate instead on how to use the context. It provides simplifying abstractions like aggregation and abstraction to make it easier for applications to obtain the context they require. Aggregation provides "one-stop shopping" for context about an entity, allowing application designers to think in terms of high-level information rather than low-level details. The architecture makes it easy to add the use of context to existing applications that don't use context and to evolve applications that already use context. In addition, the architecture makes context-aware applications resistant to changes in the context-sensing layer. It encapsulates changes and their impact, so applications do not need to be modified.

3.1.2. Software Design

The Conference Assistant was built using the context architecture just described. Table 1 lists all the context components used, and Figure 6 presents a snapshot of the architecture when a user is attending a conference.

During registration, a User Server is created for the user. It is responsible for aggregating all the context information about the user and acts as the application's interface to the user's personal context information. It subscribes to information about the user from the public Registration Widget, the user's Memo Widget, and the Location Widget in each presentation space. The Memo Widget captures the user's notes and also any relevant context (relevant slide, time, and presenter identity).

Table 1. Architecture components and responsibilities: S = Server, W = Widget, I = Interpreter

Component         Responsibility
Registration (W)  Acquires contact info, interests, and colleagues
Memo (W)          Acquires user's notes and relevant presentation info
Recommender (I)   Locates interesting presentations
User (S)          Aggregates all information about the user
Question (W)      Acquires audience questions and relevant presentation info
Location (W)      Acquires arrivals/departures of users
Content (W)       Monitors PowerPoint or Web page presentation, capturing content changes
Recording (W)     Detects whether audio/video is recorded
Presentation (S)  Aggregates all information about a presentation

There is a Presentation Server for each physical location where presentations/demos are occurring. A Presentation Server is responsible for aggregating all the context information about the local presentation and acts as the application's interface to the public presentation information. It subscribes to the widgets in the local environment, including the Content Widget, Location Widget, Recording Widget, and Question Widget.

Figure 6. Conference Assistant capture architecture.

When an audience member asks a question using the Conference Assistant, the Question Widget captures the context (relevant slide, location, time, and audience member identity) and notifies the local Presentation Server of the event. The server stores the information and also uses it to access a service provided by the Content Widget, displaying the slide or Web page relevant to the question.

The Conference Assistant does not communicate with any widget directly, but instead communicates only with the user's User Server, the User Servers belonging to each colleague, and the local Presentation Server. It subscribes to the user's User Server for changes in location and interests. It subscribes to the colleagues' User Servers for changes in location and interest level. It also subscribes to the local Presentation Server for changes in the presentation slide or Web page when the user enters a presentation space, and unsubscribes when the user leaves.
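The subscribe-on-enter/unsubscribe-on-leave pattern can be sketched as below. The classes here are simple stand-ins of our own, not the real architecture; the point is that the application swaps its Presentation Server subscription as the user's location changes.

```python
class PresentationServer:
    """Stand-in for the per-room server the application subscribes to."""
    def __init__(self, room):
        self.room, self._subs = room, []
    def subscribe(self, cb):
        self._subs.append(cb)
    def unsubscribe(self, cb):
        self._subs.remove(cb)
    def slide_changed(self, slide):
        for cb in self._subs:
            cb(slide)

class ConferenceAssistant:
    def __init__(self):
        self.current_server, self.current_slide = None, None

    def on_location_change(self, new_server):
        # Leaving a room: drop the old presentation subscription.
        if self.current_server is not None:
            self.current_server.unsubscribe(self.on_slide)
        # Entering a room: bind to the local presentation server.
        self.current_server = new_server
        if new_server is not None:
            new_server.subscribe(self.on_slide)

    def on_slide(self, slide):
        self.current_slide = slide   # update the thumbnail display
```

After a move, slide changes in the old room no longer reach the application; only the local room's presentation does.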

In the conference attendee's retrieval application, all the necessary information has been stored in the user's User Server and the public Presentation Servers. The architecture for this application is much simpler, with the retrieval application communicating only with the user's User Server and each Presentation Server. As shown in Figure 4, the application allows the user to retrieve slides (and the entire presentation, including any audio/video) using context via a query interface. If personal context is used as the index into the conference information, the application polls the User Server for the times and locations at which a particular event occurred (the user entered or left a location, or asked a question). This information can then be used to poll the correct Presentation Server for the related presentation information. If public context is used as the index, the application polls all the Presentation Servers for the times at which a particular event occurred (use of a keyword, or the presence of or a question by a certain person). As in the previous case, this information is then used to poll the relevant Presentation Servers for the related presentation information.
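The personal-context path described above can be sketched as a two-step lookup: find the matching events in the user's record, then use their time and room to index into the presentation records. The data shapes are assumptions of ours, not the toolkit's.

```python
def retrieve_by_personal_context(user_events, presentation_servers, event_type):
    """Find slides shown when a given personal event occurred."""
    hits = []
    for ev in user_events:
        if ev["type"] == event_type:              # e.g. "asked_question"
            for ps in presentation_servers:
                if ps["room"] == ev["room"]:      # correct Presentation Server
                    hits += [s for s in ps["slides"]
                             if s["start"] <= ev["time"] < s["end"]]
    return hits

events = [{"type": "asked_question", "room": "A", "time": 12}]
servers = [{"room": "A",
            "slides": [{"title": "Slide 10", "start": 10, "end": 20}]}]
print(retrieve_by_personal_context(events, servers, "asked_question"))
```

The public-context path is the same matching step, but run against all Presentation Servers without first consulting the User Server.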

In the presenter's retrieval application, all the necessary information has been stored in the public Presentation Server used during the relevant presentation. The architecture for this application is simple as well, with the retrieval application communicating only with the relevant Presentation Server. As shown in Figure 4, the application allows the user to replay the entire presentation and question session, or view particular points in the presentation using context-based retrieval. Context includes the arrival and departure of particular audience members, transitions between slides and/or Web pages, and when questions were asked and by whom.

3.2. Hardware

The Conference Assistant application has been executed on a variety of different platforms, including laptops running Windows 95/98 and Hewlett-Packard 620LX WinCE devices. It has not actually been run on a wearable computer, but there is no reason why it could not be. The only requirements are constant network access and a graphical display. For communications with the context architecture, we use Proxim's RangeLAN2 1.6 Mbps wireless LAN for WinCE devices and RadioLAN's 10BaseRadio 10 Mbps wireless LAN.

The retrieval applications run on desktop machines, under both the Windows 95/98 and Solaris operating systems.

The context components were executed on a number of different computers running different operating systems, including PowerMac G3s running MacOS, Intel Pentiums running Windows 95 and Windows NT, and SPARC 10s running Solaris.

To sense the identity and location of conference attendees and presenters, we use PinPoint Corporation's 3D-iD™ Local Positioning System. This system uses radio frequency (RF) tags with unique identities and multiple antennas to locate users in a physical space. Although it can provide location information at a resolution of 6 feet, we used coarser-grained information to determine when users entered a room.
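The coarsening step might be sketched as below: fine-grained (x, y) tag readings are reduced to room-level presence. The room boundaries and units here are invented for illustration; the real system's coordinate model may differ.

```python
# Hypothetical axis-aligned room extents: (x0, y0, x1, y1) in feet.
ROOMS = {
    "Ballroom A": (0, 0, 30, 20),
    "Demo Hall": (30, 0, 60, 20),
}

def room_of(x, y, rooms=ROOMS):
    """Reduce a fine-grained tag position to room-level location."""
    for name, (x0, y0, x1, y1) in rooms.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

print(room_of(42.5, 7.0))  # → Demo Hall
```

A change in `room_of` between consecutive tag sightings is what would drive the Location Widget's arrival/departure events.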

4. Related Work

In this section, we discuss other work relevant to the Conference Assistant, in the areas of conference assistants, context-aware tour guides, note taking, and context-based retrieval.

There has been little work in the area of context-awareness in a conference setting [10]. In the Mobile Assistant Project, Nishibe et al. deployed 100 handheld computers with cell phones at the International Conference on Multiagent Systems (ICMAS '96). The system provided conference schedule and tourist information. Social filtering, using the queries of other conference attendees, was used to determine relevant tourist information. The system supported community activity by allowing attendees to search for others with similar interests. Context was limited to "virtual information" such as personal interests, not taking into account "real information" like location.

Somewhat similar to a conference assistant is a tour guide. Both applications provide relevant information about the user's current context. The context-aware tour guide is, perhaps, the canonical context-aware application, and it has been the focus of much effort by groups doing context-aware research. Feiner et al. developed a tour guide for the Columbia University campus that combined augmented reality with mobile computing [7]. Fels et al. and Long et al. built tour guide applications for visitors attending open houses at their respective laboratories [8,2]. These systems use static configurations and cannot deal with changes to tours at runtime. In contrast, the context-aware architecture used in the Conference Assistant is able to make runtime changes transparent to the application.

There have been a number of systems that support individual users in taking notes on presentations. The Classroom 2000 project used an augmented classroom that captured audio, video, Web slides visited, and whiteboard activity to make the student note-taking activity easier [1]. The NotePals system aggregated the notes from several note-takers to provide a single group record of a presentation [4]. Stifelman built an augmented paper notebook that allowed access to the audio of a presentation during review [17]. The context most extensively used in these applications is time. The Conference Assistant expands the range of context used.

One of the most important projects in context-based retrieval was the Forget-me-not system from Lamming and Flynn [9]. It kept a record of a person's activity throughout the day in a diary format, allowing retrieval of the activity information based on context. Rhodes' wearable Remembrance Agent used context information about notes a user wrote, such as co-located people, location, and time, to automatically retrieve the notes that most closely matched the user's current context [13]. Rekimoto et al. used an augmented reality system to attach notes to objects or locations [12]. When users approached those objects or locations, the note was retrieved. This is similar to the Locust Swarm project by Starner et al. [16], which allowed the attachment and retrieval of notes from infrared-based location tags.
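The retrieval style used by these systems can be illustrated with a minimal sketch: notes are tagged with the context in which they were written, and a query ranks them by overlap with the current context. All names, weights, and the scoring scheme below are illustrative assumptions, not the actual Remembrance Agent algorithm.

```python
# Hypothetical sketch of context-based note retrieval: each note records the
# context in which it was written (location, co-located people, time), and
# retrieval ranks notes by similarity to the user's current context.
from dataclasses import dataclass

@dataclass
class Note:
    text: str
    location: str = ""
    people: frozenset = frozenset()
    hour: int = 0  # hour of day the note was written

def context_score(note, location, people, hour):
    """Score a note by how closely its recorded context matches the current one."""
    score = 0.0
    if note.location == location:
        score += 2.0                    # same place is a strong cue
    score += len(note.people & people)  # one point per co-located person in common
    if abs(note.hour - hour) <= 1:
        score += 0.5                    # written at about the same time of day
    return score

def retrieve(notes, location, people, hour, k=2):
    """Return the k notes whose context best matches the current context."""
    ranked = sorted(notes, key=lambda n: context_score(n, location, people, hour),
                    reverse=True)
    return [n.text for n in ranked[:k]]

notes = [
    Note("demo of the tour guide", "demo room", frozenset({"alice"}), 14),
    Note("keynote summary", "main hall", frozenset({"bob"}), 9),
    Note("lunch discussion", "cafe", frozenset({"alice", "bob"}), 12),
]
print(retrieve(notes, "demo room", frozenset({"alice"}), 15))
```

The essential point, common to Forget-me-not and the Remembrance Agent, is that the index key is context rather than explicit user-assigned keywords.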


5. Conclusions and Future Application Work

We have presented the Conference Assistant, a prototype mobile, context-aware application for assisting conference attendees in choosing presentations to attend, taking notes, and retrieving those notes. We discussed the important relationship between context-awareness and wearable computing and demonstrated this relationship in the Conference Assistant. We showed how the Conference Assistant makes use of a wide variety of context, both personal and environmental, and how it enhances user interactions with both the environment and other users. We discussed the important concepts that our architecture supports that make it easier to build and modify complex context-aware applications: separation of context sensing from context use, aggregation, and abstraction.

The Conference Assistant is currently a prototype application running in our laboratory. We would like to deploy the application at an actual conference. This would require us to provide many handheld devices (in case the conference attendees do not have their own wearable computers), a wireless LAN, and an indoor positioning system. This would allow us to perform a realistic evaluation of the application.

There are also additional features that we would like to add to the Conference Assistant. The first is improved assistance for demonstrations. Currently, the application does not treat paper presentations any differently from demonstrations. We would like to enhance the application when a demonstration is being given by providing additional information about the demonstration, including relevant Web pages, research papers, and videos.

Currently, the Conference Assistant only uses information about PowerPoint slides and Web pages being presented. We would like to extend this to other presentation packages and media. This will require no change to the application, but will require the development of additional context widgets to capture the presentations and relevant updates.

Other features to add deal with access to information about the user's colleagues. Presently, at registration, users indicate the colleagues about whom they would like to receive information. From a privacy point of view, this is the opposite of how we should be approaching the problem: at registration, users should instead indicate who is allowed to access their information (location and level of interest). This allows users to manage their own information. A related feature is to allow users to access their colleagues' notes with the retrieval application. This would augment the user's own notes on a presentation and would be a source of notes for presentations that the user did not attend.
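The inverted, privacy-friendly registration model can be sketched as an allow-list that each user controls: a query for a user's context succeeds only if the requester has been granted access. The class and method names below are illustrative assumptions, not part of the Conference Assistant implementation.

```python
# Hypothetical sketch of user-managed access to personal context: the data
# owner maintains an allow-list, and queries from anyone else are refused.
class UserContext:
    def __init__(self, name):
        self.name = name
        self.allowed = set()   # requesters this user has explicitly permitted
        self._location = None
        self._interest = None

    def allow(self, requester):
        # At registration, the user (not the requester) grants access.
        self.allowed.add(requester)

    def update(self, location, interest):
        self._location = location
        self._interest = interest

    def query(self, requester):
        """Release location and level of interest only to permitted colleagues."""
        if requester not in self.allowed:
            return None        # access denied; the user stays in control
        return {"location": self._location, "interest": self._interest}

alice = UserContext("alice")
alice.allow("bob")
alice.update(location="Demo Room A", interest="high")
print(alice.query("bob"))    # bob was granted access at registration
print(alice.query("carol"))  # carol was not, so nothing is released
```

The design choice is that the check happens at the owner's side of the query, so no central service ever redistributes context the user has not released.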

A final feature to add to the Conference Assistant is an interface that supports serendipitous retrieval of information relevant to the current presentation, much like the Remembrance Agent [13]. Potential information to retrieve includes conference- and field-relevant information.

We would like to add a third retrieval application for conference organizers. This application would allow them to view anonymized information about the number of people at various presentations and demonstrations and the average amount of time attendees spent at each.

Acknowledgements

We would like to acknowledge the support of the Future Computing Environments research group for contributing to the ideas of the Conference Assistant. We would like to thank Thad Starner for his comments on this work. This work was supported in part by NSF CAREER Grant #9703384, NSF ESS Grant EIA-9806822, a Motorola UPR, and a Hitachi grant.

References

[1] G.D. Abowd et al., "Investigating the capture, integration and access problem of ubiquitous computing in an educational setting", in Proceedings of CHI '98, April 1998, pp. 440-447.

[2] G.D. Abowd, C.G. Atkeson, J. Hong, S. Long, R. Kooper and M. Pinkerton, "Cyberguide: A mobile context-aware tour guide", ACM Wireless Networks, 3(5), 1997, pp. 421-433.

[3] H.H. Clark and S.E. Brennan, "Grounding in communication", in L.B. Resnick, J. Levine and S.D. Teasley (Eds.), Perspectives on Socially Shared Cognition, Washington, DC, 1991.

[4] R. Davis et al., "NotePals: Lightweight note sharing by the group, for the group", in Proceedings of CHI '99, May 1999, pp. 338-345.

[5] A.K. Dey, D. Salber, M. Futakawa and G. Abowd, "An architecture to support context-aware applications", submitted to UIST '99.

[6] A.K. Dey and G.D. Abowd, "Towards an understanding of context and context-awareness", submitted to HUC '99.

[7] S. Feiner, B. MacIntyre, T. Hollerer and A. Webster, "A Touring Machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment", in Proceedings of the 1st International Symposium on Wearable Computers, October 1997, pp. 74-81.

[8] S. Fels et al., "Progress of C-MAP: A context-aware mobile assistant", in Proceedings of the AAAI 1998 Spring Symposium on Intelligent Environments, Technical Report SS-98-02, March 1998, pp. 60-67.

[9] M. Lamming and M. Flynn, "Forget-me-not: Intimate computing in support of human memory", in Proceedings of FRIEND21: International Symposium on Next Generation Human Interfaces, 1994, pp. 125-128.

[10] Y. Nishibe et al., "Mobile digital assistants for community support", AI Magazine, 19(2), Summer 1998, pp. 31-49.


[11] J. Pascoe, "Adding generic contextual capabilities to wearable computers", in Proceedings of the 2nd International Symposium on Wearable Computers, October 1998, pp. 92-99.

[12] J. Rekimoto, Y. Ayatsuka and K. Hayashi, "Augment-able reality: Situated communications through physical and digital spaces", in Proceedings of the 2nd International Symposium on Wearable Computers, October 1998, pp. 68-75.

[13] B. Rhodes, "The Wearable Remembrance Agent: A system for augmented memory", in Proceedings of the 1st International Symposium on Wearable Computers, October 1997, pp. 123-128.

[14] D. Salber, A.K. Dey and G.D. Abowd, "The Context Toolkit: Aiding the development of context-enabled applications", in Proceedings of CHI '99, May 1999, pp. 434-441.

[15] U. Shardanand and P. Maes, "Social information filtering: Algorithms for automating 'word of mouth'", in Proceedings of CHI '95, May 1995, pp. 210-217.

[16] T. Starner, D. Kirsch and S. Assefa, "The Locust Swarm: An environmentally-powered, networkless location and messaging system", in Proceedings of the 1st International Symposium on Wearable Computers, October 1997, pp. 169-170.

[17] L.J. Stifelman, "Augmenting real-world objects: A paper-based audio notebook", in Proceedings of CHI '96, April 1996, pp. 199-200.