
Interacting with Digital Tabletops

36 September/October 2006 Published by the IEEE Computer Society 0272-1716/06/$20.00 © 2006 IEEE

The term display suggests a device used solely to output visual information, untouched for fear of occluding or dirtying the screen. In contrast, a surface is free of this burden—it's part of the physical environment and invites touch. By superimposing input and visual output spaces onto surfaces, we can merge both ideas, creating touchable, interactive surfaces. Such surfaces have numerous uses; one exciting example is a horizontal, interactive, computationally augmented tabletop.

Compared with traditional displays, interactive tables provide three potential benefits. First, because the table is both the display and direct input device, it can take as input natural hand gestures and intuitive manipulations. Such inputs can improve the fluidity and reduce the cognitive load of user–content interactions. Second, by leveraging people's tendency to gather around a table for face-to-face interactions, a horizontal tabletop surface provides opportunities for building and enhancing colocated collaborative environments. Third, large tabletop surfaces have a spacious work area that can positively influence working styles and group dynamics. Users can also employ the surfaces' larger visual field as an external physical memory (thereby extending their working memory capacity); it can further serve as an external cognitive medium for new forms of visual representation and direct manipulation.

Over the past few years, we've sought to exploit direct-touch surfaces' advantages and affordances. To this end, we've designed, implemented, and studied a variety of tabletop user interfaces, interaction techniques, and usage scenarios. We've also empirically evaluated our work and obtained preliminary findings on:

■ how people use a story-sharing table with digital photos;
■ how nonspeech audio feedback affects multiuser interaction on tabletops; and
■ how group size affects different aspects of multiuser tabletop collaboration.

Here, we explore tabletop advantages by examining the techniques we've developed to leverage those advantages. In addition to presenting six basic challenges we've encountered in our efforts, we discuss the experiences gained and lessons learned on this research journey.

Usability challenges
Direct-touch tabletops are a new interaction form factor, so researchers don't yet fully understand the user interfaces and interaction techniques appropriate for their widespread use. Existing research surveys include Scott and colleagues, who summarize tabletop systems and design approaches,1 and Kruger and colleagues, who cover orientation approaches on a traditional meeting table.2

Tables are commonly found in homes, offices, command-and-control centers, cafés, design centers, showrooms, waiting areas, and entertainment centers. As such, they provide a convenient physical setting for people to examine documents, lay out and navigate maps, sketch design ideas, and carry out tasks that require face-to-face collaboration. In contrast, digital documents are still commonly used only on desktop/laptop computers, vertical plasma or projected displays, and handheld devices. Making such documents available on direct-touch interactive tabletop surfaces involves several design and usability challenges, including tabletop content orientation, occlusion and reach, gestural interaction, legacy application support, group interaction, and walk-up/walk-away use issues.

Informing the Design of Direct-Touch Tabletops

Chia Shen, Kathy Ryall, Clifton Forlines, and Alan Esenther, Mitsubishi Electric Research Labs
Frédéric D. Vernier, University of Paris
Katherine Everitt, University of Washington
Mike Wu and Daniel Wigdor, University of Toronto
Meredith Ringel Morris, Stanford University
Mark Hancock and Edward Tse, University of Calgary

Tables provide a large and natural interface for supporting direct manipulation of visual content for human-to-human interactions. Such surfaces also support collaboration, coordination, and parallel problem solving. However, the direct-touch table metaphor also presents considerable challenges, including the need for input methods that transcend traditional mouse- and keyboard-based designs.

Tabletop content orientation
In contrast to computer monitors or projected displays, people seated around digital tabletops don't share a common perspective on information. That is, information presented right-side up to one participant might be upside down for another. Content orientation has implications for group social dynamics,2 readability,3 and performance.4 Because content-orientation solutions let researchers evaluate other tabletop applications and interaction techniques, it's an extensively studied issue.

To address content orientation, we developed the DiamondSpin Tabletop Toolkit,5 a set of interaction techniques. Figure 1 shows one popular layout technique in which the toolkit constrains documents to always face the tabletop's closest outside edge. People can use a single-finger touch and slide to move documents, which automatically turn to face them or the users they're passing documents to. A separate corner widget lets users perform arbitrary orientation, in which the constraint of documents always facing the tabletop's outside edge is removed. DiamondSpin also provides two other document orientation options:

■ a lazy-Susan tabletop background that lets users rotate all documents together, and
■ a magnetizer that reorients all the documents to face the same direction.

There are many possible document orientation and movement schemes. We classified and compared five different rotation and translation techniques for objects displayed on a direct-touch digital tabletop display.6 We then analyzed their suitability for interactive tabletops given their respective input and output degrees of freedom, as well as their consistency, completeness, GUI integration for conventional window-based documents, and support for coordination and communication. Our comparative usability analysis results indicate that DiamondSpin's polar-coordinate-based orientation and translation schemes are especially effective for usage scenarios that require consistency and GUI integration.
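The polar idea can be summarized with a small computation. The sketch below is our own illustration (not DiamondSpin code): a document's rotation is derived from its polar angle about the table's center, so its top always points inward and its bottom faces the nearest outside edge.

```python
import math

def outward_facing_rotation(x, y, cx, cy):
    """Rotation (radians, counterclockwise, y axis up) for a document at
    (x, y) on a table centered at (cx, cy), chosen so the document's top
    edge points toward the table center and its bottom faces outward."""
    theta = math.atan2(y - cy, x - cx)   # polar angle of the document
    # A rotation of 0 means upright (top pointing in the +y direction);
    # pointing the top at the center means rotating by theta + pi/2.
    return (theta + math.pi / 2) % (2 * math.pi)
```

A document directly below the center gets rotation 0, so it sits upright for a user seated at that edge; as the document slides around the table it turns continuously to face whoever is nearest.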

Occlusion and reach
When users interact with displayed information through direct touch, they might visually obscure the information immediately below their hand, arm, or stylus. Furthermore, the tabletop's large workspace makes many display regions either uncomfortable to work in or completely unreachable. To contend with these issues, we developed three techniques: Context-Rooted Rotatable Draggables (CoR2Ds),7 ExpressiveTouch puppetry,8 and occlusion-aware visual feedback.

Context-Rooted Rotatable Draggables. We designed CoR2D interactive popups for multiuser direct-touch tabletop environments. As Figure 2 shows, translucent, colored swaths visually root CoR2Ds to objects. Users can employ CoR2Ds to issue commands or display information. Users can freely move, rotate, and reorient CoR2Ds on a tabletop display surface using their fingers or hands; pointing devices (such as mice); or marking devices (such as a stylus or light pen).

CoR2Ds address five key interaction issues in tabletop systems: occlusion, reach, establishing context on a cluttered display, readability, and concurrent, coordinated multiuser interaction. Also, multiple people can use a single CoR2D or a pair of CoR2Ds to cooperatively complete a task (as in Figures 2a through 2e). For example, one person can drag a CoR2D to a different part of the display surface while another user operates on it, thus facilitating multiuser operations. CoR2Ds let operators and operands function across a visual distance, eliminating on-object occlusion. This also lets users operate objects at a distance without losing the visual cue of the objects' menus or tools, which might be across the display and partially hidden among display clutter.

1 Content orientation in DiamondSpin Tabletop Toolkit. (a) Users meet around a multitouch, multiuser interactive digital table. (b) A bird's-eye view of a DiamondSpin application, which constrains documents to face the tabletop's outside edge.

IEEE Computer Graphics and Applications 37

Puppetry. In our ExpressiveTouch puppetry technique, users can apply operations—such as copy and paste8—on a distant document. This can be important when target objects are obscured from view. In a copy-and-paste operation, for example, users select objects by touching a document region. Then, while still touching the table, users can slide their hands away from the document to transition to indirectly adjusting the selection box's location and size. As Figure 3a shows, four visual lines provide feedback to indicate the relationships between the control and display regions. This is a visually tethered, indirect distant operation that solves the occlusion problem on direct-touch surfaces. Our technique lets users comfortably control documents from various locations. The technique also mitigates physical interference, letting multiple people simultaneously use the same document from different sides of a table (see Figure 3b).


2 Concurrent interaction on a large tabletop. (a) Two users employ CoR2Ds in a poster design scenario on an interactive tabletop. (b) A user double taps inside an empty poster document on the tabletop to launch its associated CoR2Ds. (c) A user double taps the USB reader icon, launching its associated CoR2Ds. (d) and (e) Users stretch and overlap “Current poster” and “Poster.ppt” from two separate CoR2Ds, completing the task of copying the PowerPoint file from the USB reader into the poster document on the table. This operation can be carried out bimanually by one user or by two separate users.

3 Bimanual, multitool gestural interaction. (a) The tool decouples control and display spaces. (b) Two users simultaneously perform copy-and-paste operations on the same image object.


Occlusion-aware visual feedback. When users' fingers are larger than the target object—say, a menu item or a button—it's difficult for them to hit the target of the selection. Compounding this is the issue of occlusion; traditional windows, icons, menus, and pointing device (WIMP) systems generally offer in-place visual feedback on an action's target pixel. On a direct-touch interface, this feedback will always be occluded by the user's hand, thus removing much-needed feedback on target selection accuracy. Thus, traditional visual feedback—such as highlighting or drop shadows—isn't always effective in direct-touch tabletop interactions.

To address this, we developed occlusion-aware visual feedback. Our solution provides both in-place and visible feedback on a direct-touch interface. Figure 4 shows an example: when the user successfully selects the target, it enlarges. This change in visual state clearly indicates activation and offers in-place feedback without occlusion.

Gestural interaction
In a GUI, users must manage a plethora of tools and interaction methods, typically using only a keyboard and mouse as input devices. Direct-touch surfaces that permit fluid, bimanual input could provide a more natural mapping between input and commands. For this to happen, we must first address several major questions:

■ How should gestures map to various system functions?
■ Should gestures map to the most common tasks or the most complex?
■ Does each command require its own gesture—and, if so, how many gestures can we reasonably expect users to learn?
■ How can hand gestures coexist with familiar point-based, mouse-like interaction?

Designers must develop gestures that address their applications' specific needs. However, guidelines for introducing new gestural commands into an application can help designers avoid overly complicated systems. Here, we offer guidelines for gesture reuse within tabletop applications that begin to address some of these research questions.

Registration and gesture reuse. In our solutions, each gesture operation begins with gesture registration,8 which sets the context for all subsequent interactions. A gesture registration action can be a hand posture, a simple touch, a dwell action, or a specific number of finger taps. Registration occurs when the system recognizes a distinctive gesture registration action on the table.

Registration lets users reuse gestures during other gesture phases. The same hand movements can thus produce different results, depending on which gesture action the user employs in the registration phase. For example, our desktop publishing application uses gesture registration to change stylus action modes. In normal operation, the stylus moves documents around the table and behaves like a mouse. However, if the user places two fingers on a document—as if to hold it in place—the stylus behaves like a pen, letting the user mark up and annotate the document. Through gesture registration, users can map the same stylus movements to multiple system commands. Gesture registration combined with gesture reuse is a powerful idea that lets designers define a variety of gestures using a small set of building blocks.
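As a rough sketch of this pattern (our illustration; the action and mode names are hypothetical), the registration action recognized at touch-down selects how every later movement in the gesture is interpreted:

```python
# Registration action -> interaction mode; one initial action sets the
# context for the rest of the gesture (names are illustrative).
MODE_BY_REGISTRATION = {
    "stylus_touch": "move",         # normal operation: stylus moves documents
    "two_finger_hold": "annotate",  # holding the page turns the stylus into a pen
    "dwell": "menu",
}

class GestureSession:
    def __init__(self, registration_action):
        self.mode = MODE_BY_REGISTRATION.get(registration_action, "move")

    def handle_motion(self, dx, dy):
        # Identical motions are reused: their meaning depends on the mode
        # established during the registration phase.
        if self.mode == "move":
            return ("translate_document", dx, dy)
        if self.mode == "annotate":
            return ("ink_stroke", dx, dy)
        return ("no_op", dx, dy)
```

The same `handle_motion` deltas yield document translation in one mode and ink strokes in another, which is the essence of gesture reuse.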

Modal Spaces and gesture reuse. Our Modal Spaces solution9 enhances conventional modal interfaces to permit reuse of gestures for different commands. It also clearly indicates the system's mode and lets users seamlessly change modes.

4 Occlusion-aware visual feedback. (a) When a user's finger lands on a DiamondSpin object's Resize widget (left side of image), (b) the widget's size temporarily enlarges and grows transparent, to provide visual feedback and indicate its ephemeral nature, respectively.

Modal Spaces divides the table into multiple workspaces, called modal regions. The system recognizes commands based on the target object, the user's gestural input, and the table's input region. As Figure 5 shows, location mediates user input, and documents respond differently to the same gestures depending on where users execute them. In one modal region, for example, a touch might open a popup menu; in another, it might launch a stroke operation.
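A minimal sketch of this region-based dispatch (the four region names follow Figure 5; the coordinates and command names are our own illustration):

```python
# The table surface partitioned into modal regions (illustrative bounds).
REGIONS = {
    "cutting":    (0,   0,   500,  500),   # (x0, y0, x1, y1)
    "resize":     (500, 0,   1000, 500),
    "annotation": (0,   500, 500,  1000),
    "layout":     (500, 500, 1000, 1000),
}

# (region, gesture) -> command: the same gesture means different
# things in different regions.
COMMANDS = {
    ("cutting", "stroke"):    "cut_along_path",
    ("resize", "stroke"):     "scale_document",
    ("annotation", "stroke"): "ink_annotation",
    ("layout", "touch"):      "open_popup_menu",
}

def region_at(x, y):
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def dispatch(x, y, gesture):
    return COMMANDS.get((region_at(x, y), gesture), "no_op")
```

An identical stroke cuts in one quadrant and scales in another; only the location, not the gesture vocabulary, changes.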

Legacy application support
As the tabletop interfaces field matures, developers will likely design more applications from scratch to take full advantage of multiuser horizontal workspace characteristics. Still, legacy applications are widely deployed and many are indispensable for real-world tasks. A digital tabletop environment must therefore address issues related to using preexisting applications on a horizontal workspace. We've developed solutions here in two areas: one enables mouse emulation on touch surfaces; the other enables bimanual gestures for mouse-based applications.

Mouse support for touch-based interactions. To support existing mouse-based applications, we need a finger-touch mechanism that emulates a computer mouse. This entails several issues. How does a user indicate mouse dragging versus mouse hovering (moving the mouse without pressing any buttons)? How does the user right-click? We also face a finger-resolution issue. A fingertip has a relatively large area, so how does the user specify a particular pixel, especially when his or her fingertip is obscuring the mouse cursor? Finally, traditional desktop applications assume (and support) only a single mouse. What happens if multiple users touch at the same time?

Early work attempted to mitigate the effects of visual occlusion and limited pointing accuracy by introducing a fixed offset between touch location and cursor position. This approach breaks the direct-touch input paradigm. In contrast, our solution detects multiple concurrent touches from the same user,10 allowing the user's hand posture to define the offset more logically. When a user touches the table with one finger, the left mouse button is activated to simulate dragging. When a user touches with two fingers at once, the mouse cursor jumps to the center point between the touches; no mouse button is activated. Once in contact with the table, moving either or both fingers moves the mouse. As Figure 6 shows, this precision-hover mode gives users an unobscured view of the precise mouse cursor location between their fingers. This two-fingered control provides precision unobtainable with single-finger input. Unlike with tool tips and image rollovers, our method lets users move the mouse without activating mouse buttons. While in precision-hover mode, tapping with a third finger in between the first two will toggle the left mouse button up or down. Users can thus fluidly switch between dragging and moving, without inadvertently moving the mouse cursor.

This technique is natural and intuitive if users employ the thumb and middle finger of one hand for the first two touches, and use the index finger to toggle the left mouse button. They press the right mouse button by placing one finger down at the desired location and then quickly tapping anywhere else with a second finger. Users can then either drag with the right mouse button held down, or—to generate a right click—let go with the first finger, too. We can use variations of this basic technique to support other mouse buttons. We can, for example, support multiple users by letting the first toucher win: the system ignores subsequent touches by other users until the first toucher stops touching the table. Also, we underprojected the display inside the touch surface just enough so that it's easy to use the technique in display corners, too. Our initial user experiences with these touch-based mouse-emulation schemes have shown encouraging user acceptance and fast learnability.
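The finger-count logic above can be sketched as a small state machine (our own illustration, not the system's code): one finger drags, two fingers hover with the cursor at their visible midpoint, and a brief third-finger tap toggles the left button without moving the cursor.

```python
class PrecisionHover:
    """Sketch of two-finger precision-hover mouse emulation."""

    def __init__(self):
        self.cursor = (0.0, 0.0)
        self.left_down = False
        self._fingers = 0

    def update(self, touches):
        n = len(touches)
        if n == 1:
            # Direct mode: a single touch drags with the left button held.
            self.cursor = touches[0]
            self.left_down = True
        elif n == 2:
            if self._fingers != 2:
                self.left_down = False  # entering hover mode: buttons up
            (x1, y1), (x2, y2) = touches
            # Cursor sits between the fingers, unoccluded by either.
            self.cursor = ((x1 + x2) / 2, (y1 + y2) / 2)
        elif n == 0:
            self.left_down = False      # all fingers lifted
        self._fingers = n

    def third_finger_tap(self):
        # A brief tap between the two fingers toggles dragging vs. moving
        # without disturbing the cursor position.
        self.left_down = not self.left_down
```

After a tap engages the button, moving both fingers drags while the cursor stays centered between them, matching the Figure 6 sequence.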

Gestural interactions for mouse-based applications. Meaningful gestures on a tabletop can improve group awareness in ways that simple mouse emulation can't achieve. For example, using a whole hand to pan a map might be more intuitive than selecting a pan mode and then panning with a single finger.

5 Modal Spaces for an image editing application. (a) The touch-sensitive tabletop surface has four modal spaces: cutting, resize, annotation, and layout. (b) A before-and-after look at the effects of three annotations: bounding box cut, shrinking, and clearing.

To allow multiuser gestural interaction with desktop applications, we map between gestures that a gesture engine recognizes and the keyboard and mouse events that legacy applications expect.11 Using the Microsoft SendInput API, we can map a single gestural act to a series of keyboard and mouse events to provide a more natural mapping between tabletop gestures and legacy input. Turn-taking protocols let us manage system responses when multiple people gesture simultaneously. As a result, legacy applications appear to directly understand freehand gestures from multiple people. We've also integrated support for speech input using the same techniques, creating multimodal tabletop applications.
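The mapping idea can be sketched platform-neutrally (gesture and event names here are hypothetical; on Windows the sink would wrap the SendInput API, but any callable that injects events works for illustration):

```python
# Hypothetical mapping from one recognized gesture to the sequence of
# synthetic keyboard/mouse events a legacy application expects.
GESTURE_TO_EVENTS = {
    "hand_pan": [("mouse_down", "middle"),        # a common pan binding
                 ("mouse_move", "delta"),
                 ("mouse_up", "middle")],
    "two_finger_stretch": [("key_down", "ctrl"),  # a common zoom binding
                           ("wheel", "delta"),
                           ("key_up", "ctrl")],
}

def inject(gesture, sink):
    """Replay a gesture's event sequence into an injection sink
    (e.g., a wrapper around SendInput), returning the event count."""
    events = GESTURE_TO_EVENTS.get(gesture, [])
    for event in events:
        sink(event)
    return len(events)
```

One gestural act thus expands into several low-level events, so the legacy application behaves as if it understood the gesture directly.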

Group interaction techniques
Colocated, multiuser activities present many new challenges for UI designers. Among the problems: How can multiple users access conventional, single-user menubars and toolbars? How can multiple users simultaneously explore detailed image or geospatial data without interfering with the global context view?

Conventional menubar access. Menubars and tools are popular UI widgets, and we must facilitate both shared and personal use of them in a group setting. In DiamondSpin, we provide four types of menubar usage patterns:

■ Draggable for sharing. With a single finger movement, a user can slide a group menubar along the tabletop's edge to any parking position. Multiple users can thus share a single menubar, passing it among themselves.
■ Lockable for private use. On a multitouch, multiuser direct-touch tabletop, it's easy for one user to select items from another user's menubar. Consequently, we built into the menubar a touch-to-lock user-controlled option that lets users selectively prevent all other colocated users from operating their personal menubars.
■ Replicated for convenience. Users can replicate their private menubars and give copies to other collaborators at the table. Figure 1b shows a usage scenario in which the user's replicated menubar appears on both the top and bottom edges of the tabletop. This is a convenient usage pattern, but it requires social protocols to mitigate conflicting menu operations among colocated users.
■ Subset for restricted access. Finally, rather than duplicating an entire menubar, users can replicate and distribute a menubar subset to other users. Making limited menubar actions available could be useful in situations with one power user, as in a teacher–student setting. The teacher's menubar might have full functionality, for example, while the students' menubars contain a smaller set of options.

6 Using precision-hover mode to transition between moving the mouse and drawing. (a) The user makes contact with the display. (b) The mouse remains centered between the user's thumb and middle finger as he moves his fingers and drags the mouse. (c) The user taps briefly with the index finger to toggle the left mouse button's state, engaging the drawing function. (d) The cursor remains between his two fingers, but by moving his thumb and middle finger, the user engages the left mouse button and the interface draws a line.

Group exploration of geospatial data. High-resolution satellite photographs, maps, and blueprints are often presented from a bird's-eye view so that intelligence analysts, architects, and city planners can gather around the rolled-out paper documents. Such experts are accustomed to viewing documents from a variety of perspectives as they work face-to-face around a table. In contrast, groups that work with high-resolution digital files on a tabletop display are hindered by the surfaces' inability to display a document's full resolution.

Tiling displays or projectors offer a promising solution, but they're currently prohibitively expensive. The single-user solution of zooming in and panning around the document is inappropriate in groups, as members might want to see a detailed view of different document regions at the same time. Furthermore, people often get lost in a data set when using pan and zoom interfaces, which sacrifice larger context when they zoom in for detailed views.

Our DTLens12 is a multiuser tabletop tool that lets groups explore high-resolution spatial data without the panning and zooming drawbacks. As Figure 7 shows, DTLens gives each group member an independent, zoom-in-context fisheye lens that he or she manipulates through multihand gestures performed directly on the document. Any group member can thus reach onto the table, grab a document region, and stretch the area to reveal more detail. By allowing simultaneous exploration of document details, DTLens lets group members move naturally from collaborative to independent activities as they work face to face around the tabletop display. DTLens also gives users a consistent interaction set for lens operations, thus minimizing tool switching during spatial data exploration.
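A zoom-in-context lens of this kind can be illustrated with a standard Sarkar-Brown-style fisheye distortion; the sketch below is our own illustration of the general idea, not DTLens's implementation.

```python
import math

def fisheye(px, py, fx, fy, radius, m):
    """Fisheye distortion: points inside the lens are displaced outward
    so the focus (fx, fy) appears magnified roughly (m + 1)x, while the
    lens rim maps to itself, preserving the surrounding context."""
    dx, dy = px - fx, py - fy
    d = math.hypot(dx, dy)
    if d >= radius or d == 0.0:
        return (px, py)               # outside the lens, or at the focus
    nd = d / radius                   # normalized distance in [0, 1)
    nd_mag = ((m + 1.0) * nd) / (m * nd + 1.0)   # magnified distance
    s = nd_mag * radius / d
    return (fx + dx * s, fy + dy * s)
```

Because the rim maps to itself, the detail view blends into the undistorted document, which is what lets several users open lenses over different regions without losing the global context.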

Walk-up and walk-away usage issues
With traditional tables—in airports, cafés, and conference rooms, for example—people often spontaneously approach a table and collaborate with people already seated. In such scenarios, people generally bring their own material and documents. Therefore, if we're to develop walk-up, kiosk-like digital tabletops, we must consider appropriate user interfaces. Although researchers have actively explored how to share personal data on public vertical displays, a key difference between shared use of a tabletop and that of a vertical display is that when people sit around a table, a particular table region is in their immediate physical proximity. Moreover, such regions are not visually equivalent for all users on all sides of a table. These physical and perceptual properties make these individual areas ideal choices for private work spaces.13 To this end, we developed a user interface design solution called UbiTable.14

7 Exploring a satellite image using DTLens on a multitouch tabletop. (a) Using both hands, the user launches the DTLens tool and (b) stretches open the view. (c) The user presses his hands to tack the lens in place, which (d) frees his hands for other work.

As Figure 8 shows, UbiTable lets users dynamically connect personal laptops, cameras, and USB devices to an interactive tabletop so they can fluidly share, manipulate, exchange, and mark up data. At the same time, each user maintains explicit control over his or her document's accessibility. We divided the UbiTable tabletop into two regions—personal and public—that have distinct access control and document interactibility properties. Personal region documents are visible to all users, but can only be manipulated and moved by the document owner. Public region documents are accessible to all users, but only the owner can move them into a personal region. This gives the owner explicit control over how the document is distributed to other meeting participants. Documents that are displayed on the tabletop use colored borders to provide feedback information for owners and users. Green and pink borders, for example, indicate personal document regions, while gray borders indicate public document regions. In addition to the public and personal areas, UbiTable designates personal devices, such as laptops, as private regions for users' data. It therefore offers three information-sharing levels: public, personal, and private.
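The three sharing levels reduce to a few access rules, sketched below (our illustration; UbiTable's actual policy mechanism may differ):

```python
# Sharing levels: private (on the owner's device), personal (on the
# table, visible to all but manipulable only by the owner), and public
# (open to everyone, though only the owner may pull a document back
# into a personal region).

def can_view(doc, user):
    if doc["region"] == "private":
        return user == doc["owner"]   # stays on the owner's laptop
    return True                       # personal and public: visible to all

def can_manipulate(doc, user):
    if doc["region"] == "public":
        return True                   # anyone at the table may operate on it
    return user == doc["owner"]       # personal/private: owner only

def can_move_to_personal(doc, user):
    # Only the owner may promote a document into a personal region.
    return user == doc["owner"]
```

The asymmetry between viewing and manipulating is what gives owners explicit control over how a document propagates to other participants.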

Evaluations, experiences, and reflections

We've learned many lessons and gained many insights in developing, using, and testing our solutions. Our experiences fall into three general categories that we've observed across numerous application prototypes and evaluation sessions: orientation side effects, input precision, and nonspeech audio feedback for group interaction.

Orientation side effects

Providing interface-level support for flexible document orientation and positioning on a large tabletop has emerged as an important foundation for our work. The DiamondSpin toolkit has supported many research prototypes and applications. In one project,4 we evaluated and analyzed various orientation techniques' performance and differences. Our findings suggest that a more objectively precise technique doesn't necessarily translate into high qualitative ratings from users. Indeed, each technique seems to have a different feel for users, related to interaction fluidity, how a technique behaves under the user's touch, and the technique's perceived naturalness.

We also observed two noticeable operational side effects of rotating tabletop documents that are independent of the rotation and translation methods. First, while users want the ability to reorient documents to suit tabletop collaboration, some orientations can severely affect a document's resize range. This problem occurs when a document is rotated out of alignment with a rectangular or square tabletop's canonical edges, as seen in three out of the four documents in Figure 1b. Second, text readability can degrade when a document is rotated at an angle with respect to the canonical Cartesian x and y axes, due to nonantialiased rendering of rotated text.
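To see why rotation constrains resizing, note that a w-by-h document rotated by angle t has an axis-aligned footprint of (w|cos t| + h|sin t|) by (w|sin t| + h|cos t|), which grows as the document turns away from the table's edges. The sketch below models the resize limit under the simplifying assumption that the footprint must stay within the table bounds; it is our own illustration, not DiamondSpin's actual constraint logic.

```python
import math

def max_resize_scale(doc_w, doc_h, angle_deg, table_w, table_h):
    """Largest uniform scale at which a document rotated by angle_deg
    still fits the table, assuming its axis-aligned footprint must
    remain within the table bounds."""
    t = math.radians(angle_deg)
    c, s = abs(math.cos(t)), abs(math.sin(t))
    foot_w = doc_w * c + doc_h * s  # footprint width after rotation
    foot_h = doc_w * s + doc_h * c  # footprint height after rotation
    return min(table_w / foot_w, table_h / foot_h)
```

For a square document rotated 45 degrees, the footprint grows by a factor of the square root of 2, so the maximum scale drops by roughly 29 percent relative to the axis-aligned case.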

To solve these usability problems, we built a Table-for-N feature into the current DiamondSpin version. When a user rotates a document, Table-for-N automatically snaps the document's bottom edge to align parallel with one of the tabletop's N edges. This function is convenient when several tabletop users are working independently on documents or images.

8 Walk-up options for work sharing. (a) Walk-up usage of a conventional table. (b) Walk-up usage of UbiTable, which offers users a private region for laptop displays, and personal or public regions on the tabletop. (c) The UbiTable interface. Green and pink borders indicate personal areas, blue portals are for data movement to and from private regions (laptops or other personal devices), and the center gray area is for document sharing.
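The snapping rule reduces to rounding a document's rotation to the nearest of the table's N edge orientations. This one-liner is our own simplification of Table-for-N's behavior, not its actual implementation:

```python
def snap_to_edge(angle_deg: float, n_edges: int = 4) -> float:
    """Snap a document's rotation to the nearest edge orientation of
    an N-sided table (candidate orientations every 360/N degrees)."""
    step = 360.0 / n_edges
    return (round(angle_deg / step) * step) % 360.0
```

On a rectangular table (N = 4), a document rotated to 85 degrees snaps to 90; on a triangular table (N = 3), candidates fall every 120 degrees.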

Input precision

Although many touch-interactive surfaces provide high input resolution at single-pixel or finer precision, we've observed (along with other researchers) that pixel-accurate interaction is difficult with direct touch. This imprecision particularly manifests as jitter—a shift in touched or selected pixels between input frames. Jitter is a problem in at least two cases.

First, some operations require repeated interactions, as when users must double or triple tap on the tabletop (with a single finger, multiple fingers, or the whole hand). However, such consecutive taps don't necessarily land on the exact same pixel region of the tabletop display. To solve this, we've built in a larger activation region for retouch. That is, when an interaction requires a user to repeatedly tap on the tabletop, the exact pixel area touched on the second or third landing has a tolerance. For single-finger double or triple taps, for example, we find a tolerance of 0.5 cm works quite well.
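A minimal sketch of this tolerance check follows; the function name and the px_per_cm display density are our own assumptions for illustration, not values from our system.

```python
import math

def is_retouch(first_tap, second_tap, tolerance_cm=0.5, px_per_cm=40.0):
    """Treat a follow-up tap as part of a double/triple tap when it
    lands within the tolerance radius of the first touch point.
    Taps are (x, y) pixel coordinates; px_per_cm is an assumed
    display density used to convert the 0.5 cm tolerance to pixels."""
    dx = second_tap[0] - first_tap[0]
    dy = second_tap[1] - first_tap[1]
    return math.hypot(dx, dy) <= tolerance_cm * px_per_cm
```

With this check, a second tap a few pixels away from the first still completes the double tap, while a distant tap starts a new interaction.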

Other researchers in touch-interaction precision have proposed to solve this problem through indirect (offset) or relative pointing. However, in our view, it's important to maintain direct full-handed touch and the bimanual interaction paradigm on a horizontal tabletop surface as much as possible.

Second, jitter is an issue when an operation's effect occurs as the user lifts his or her hand. During our user study for bimanual gesture design,8 for example, we noticed that some participants were troubled by their accuracy in selecting an image region during copy-and-paste (see Figure 3). Often, as participants lifted their hands to complete the paste operation, the pasted image was slightly shifted in one or both dimensions.

To stabilize this jitter, we improved our gesture termination algorithm: it now looks back a few time frames from the instant the user lifts his or her hand off the table. Specifically, the algorithm looks for a time window in which the user's hand has maintained a stable selection posture for a few frames. We take that window's selection box as the user's intended region for copy and paste. This improvement has offered fairly satisfactory performance.
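The look-back idea can be sketched as scanning the frames before lift-off for a short run of near-identical selection boxes. This is our own reconstruction of the approach, with hypothetical parameter values; the actual algorithm's window sizes and tolerances may differ.

```python
def stable_selection(frames, lookback=10, window=3, tol=2.0):
    """From the selection boxes recorded before lift-off, find a run
    of `window` frames that agree within `tol` pixels, searching
    backward from the lift-off instant; fall back to the last frame.
    Each frame is an (x0, y0, x1, y1) selection box."""
    recent = frames[-lookback:]

    def close(a, b):
        return all(abs(u - v) <= tol for u, v in zip(a, b))

    # Walk backward so we find the stable run nearest to lift-off.
    for i in range(len(recent) - window, -1, -1):
        run = recent[i:i + window]
        if all(close(run[0], box) for box in run[1:]):
            return run[0]  # the box held stable during this window
    return frames[-1]      # no stable window found; use the raw lift-off box
```

When the hand shifts slightly during lift-off, the function ignores the final jittery frames and returns the box the user had been holding steady.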

Nonspeech audio feedback

In a multiuser, interactive tabletop setting, the table serves as both a shared display and a shared input device. Because the display is visual, designers often focus on presenting information to users through the visual channel. Although visual feedback is the primary communication modality in this environment, we believe auditory feedback can also serve an important role. However, what that role is and how it might enhance users' experiences and assist in application building is not yet clear. In particular, simultaneous actions by multiple users can both increase efficiency and create interference.

In an initial UbiTable user study,14 we found that users were confused when the tool offered auditory feedback to indicate operational errors. Because an identical system beep sounded for both users, the common reaction was: "Who was that sound for?" In collaborative settings, users often work in parallel on individual tasks and might want to be aware of their peers' actions. Users might, for example, wish to know when another user accesses or manipulates screen objects outside their visual attention. Auditory feedback can be useful in these circumstances. While redundant auditory feedback can increase group awareness in some cases, it can also hinder individual users' performance. Conversely, sounds that are useful for an individual might overload the larger group's auditory channels. It's essential to consider this tradeoff when designing colocated, collaborative applications.

Our study included two experiments using nonspeech audio on an interactive multitouch, multiuser tabletop surface.15 In the first experiment, we investigated two categories of reactive auditory feedback: affirmative sounds that confirm user actions and negative sounds that indicate errors. Our results show that affirmative auditory feedback might improve a user's awareness of group activity at the expense of awareness of the user's own activity. Negative auditory feedback might also improve group awareness, but simultaneously increase the perception of errors for both the group and the individual.

In the second experiment, we compared two methods of associating sounds with individuals in a colocated environment. Specifically, we compared localized sound, where each user has his or her own speaker, with coded sound, where users share a speaker but the sound's waveform is varied so that a different sound plays for each user. Results of this experiment reinforce the first experiment's finding: a tension exists between group awareness and individual focus. User feedback suggests that users can more easily identify who caused a localized or coded sound, and that either option lets them more easily focus on their individual work. In general, these two experiments show that, depending on its presentation, auditory feedback in colocated collaborative applications can support either individual work or group awareness, but not both simultaneously.
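One way to realize coded sound is to give each user an identifying pitch for the shared event beep. This is purely our own sketch of the idea; the study's actual waveform coding and the user-to-pitch mapping below are assumptions.

```python
import math

# Hypothetical per-user pitch mapping for coded sound; the study's
# actual implementation may vary the waveform differently.
USER_PITCH_HZ = {"alice": 440.0, "bob": 660.0}

def coded_beep(user, duration_s=0.1, sample_rate=8000):
    """Synthesize the event beep for `user`: the same short tone
    envelope for everyone, but at that user's identifying pitch,
    returned as a list of samples in [-1, 1]."""
    freq = USER_PITCH_HZ[user]
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n_samples)]
```

Because the beeps share timing and loudness but differ in pitch, listeners can attribute a sound to a user without each user needing a dedicated speaker.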

Conclusion

Interactive, direct-touch digital tables are an emerging form factor with largely immature user interface design. Our research results, along with those of other researchers, set forth interaction techniques, user experiences, and design considerations that we'll continue to expand as we exploit and explore the advantages of interactive tabletop systems.

Among our lessons so far is that, whenever we demonstrated our tabletop systems to actual users or potential customers, the most compelling moments were when the tables interoperated with vertical displays and other devices. This observation agrees with our previous finding that group interactions require supplemental vertical displays.16

While the digital table provides a compelling focal point for group activity, we recognize the opportunity to augment it with additional computational resources and surfaces. We've thus begun exploring multisurface, table-centric interaction,17,18 wherein all interaction occurs at the table, while ancillary surfaces provide coordinated, multiview visualization and display space. This differs from previous interactive room research, in which displays and devices are generally independent.

There are many outstanding research issues in this area. In our view, two of the most fundamental open questions are: Does a large tabletop provide spatial and perceptual cognitive advantages in helping users accomplish their tasks? If so, under what circumstances does this cognitive assistance occur, and when does it break down? To fully examine these questions, we must step back and

■ analyze basic human perception and cognition;
■ evaluate not only our design artifact, but the cognitive prosthesis it might enable; and
■ envision how people might use this externalization device to better represent, visualize, and express their ideas.

Our future research will focus on these issues. ■

References

1. S.D. Scott et al., "System Guidelines for Co-located, Collaborative Work on a Tabletop Display," Proc. European Conf. Computer-Supported Cooperative Work (ECSCW), Kluwer Press, 2003, pp. 129-178.

2. R. Kruger et al., "Roles of Orientation in Tabletop Collaboration: Comprehension, Coordination and Communication," J. Computer Supported Cooperative Work, vol. 13, nos. 5-6, Kluwer Press, 2004, pp. 501-537.

3. D. Wigdor and R. Balakrishnan, "Empirical Investigation into the Effect of Orientation on Text Readability in Tabletop Displays," Proc. 9th European Conf. Computer-Supported Cooperative Work (ECSCW), Kluwer Press, 2005, pp. 205-224.

4. C. Forlines et al., "Under My Finger: Human Factors in Pushing and Rotating Documents Across the Table," Proc. Int'l Conf. Human–Computer Interaction (Interact), LNCS 3585, Springer Verlag, 2005, pp. 994-997.

5. C. Shen et al., "DiamondSpin: An Extensible Toolkit for Around-the-Table Interaction," Proc. Int'l Conf. Human Factors in Computing Systems (CHI), ACM Press, 2004, pp. 167-174.

6. M.S. Hancock et al., "Rotation and Translation Mechanisms for Tabletop Interaction," Proc. 1st IEEE Int'l Workshop Horizontal Interactive Human–Computer Systems (TableTop), IEEE CS Press, 2006, pp. 79-78.

7. C. Shen et al., "CoR2D: Context-Rooted Rotatable Draggables for Tabletop Interaction," Extended Abstracts, Int'l Conf. Human Factors in Computing Systems (CHI), ACM Press, 2005, pp. 1781-1784.

8. M. Wu et al., "Gesture Registration, Relaxation, and Reuse for Multi-Point Direct-Touch Surfaces," Proc. 1st IEEE Int'l Workshop Horizontal Interactive Human–Computer Systems (TableTop), IEEE CS Press, 2006, pp. 183-190.

9. K. Everitt et al., "Modal Spaces: Spatial Multiplexing to Mediate Direct-Touch Input on Large Displays," Extended Abstracts, Int'l Conf. Human Factors in Computing Systems (CHI), ACM Press, 2005, pp. 1359-1362.

10. A. Esenther and K. Ryall, "Fluid DTMouse: Better Mouse Support for Touch-Based Interactions," Proc. Working Conf. Advanced Visual Interfaces (AVI), ACM Press, 2006, pp. 112-115.

11. E. Tse, "Enabling Interaction with Single User Applications through Speech and Gestures on a Multi-User Tabletop," Proc. Working Conf. Advanced Visual Interfaces (AVI), ACM Press, 2006, pp. 336-343.

12. C. Forlines and C. Shen, "DTLens: Multi-User Tabletop Spatial Data Exploration," Proc. 18th Ann. ACM Symp. User Interface Software and Technology (UIST), ACM Press, 2005, pp. 119-122.

13. S.D. Scott et al., "Territoriality in Collaborative Tabletop Workspaces," Proc. ACM Conf. Computer-Supported Cooperative Work (CSCW), ACM Press, 2004, pp. 294-303.

14. C. Shen, K. Everitt, and K. Ryall, "UbiTable: Impromptu Face-to-Face Collaboration on Horizontal Interactive Surfaces," Proc. 5th Int'l Conf. Ubiquitous Computing (UbiComp), LNCS 2864, Springer Verlag, 2003, pp. 281-288.

15. M.S. Hancock et al., "Exploring Non-Speech Auditory Feedback at an Interactive Multi-User Tabletop," Proc. 2005 Conf. Graphics Interface, Canadian Human–Computer Communications Soc., 2005, pp. 41-50.

16. K. Ryall et al., "Exploring the Effects of Group Size and Table Size on Interactions with Tabletop Shared-Display Groupware," Proc. ACM Conf. Computer-Supported Cooperative Work (CSCW), ACM Press, 2004, pp. 284-293.

17. D. Wigdor et al., "Table-Centric Interactive Spaces for Real-Time Collaboration," Proc. Working Conf. Advanced Visual Interfaces (AVI), ACM Press, 2006, pp. 103-107.

18. C. Forlines et al., "Multi-User, Multi-Display Interaction with a Single-User, Single-Display Geospatial Application," to appear in Proc. ACM Symp. User Interface Software and Technology (UIST), ACM Press, 2006.

Chia Shen is a senior research scientist at Mitsubishi Electric Research Laboratories. Her research interests include human–computer interfaces and interactions with nonconventional visual surfaces, including tables and walls. Shen has a PhD in computer science from the University of Massachusetts at Amherst. Contact her at [email protected].

Kathy Ryall is on the principal technical staff at Mitsubishi Electric Research Laboratories. Her research interests include human–computer interaction, computer-supported cooperative work, and information visualization. Ryall has a PhD in computer science from Harvard University and is a senior member of the IEEE. Contact her at [email protected].


Clifton Forlines is a researcher at MERL. His research interests include the design and evaluation of novel user interfaces, digital video presentation, collaborative tabletops, multiuser and multidisplay workspaces, and using handheld projectors for augmented reality. Forlines has an MS in human–computer interaction from Carnegie Mellon University. Contact him at [email protected].

Alan Esenther is a user interface software developer in the Technology Lab at Mitsubishi Electric Research Laboratories. His research interests include human–computer interaction; finding the right digital tool for the job; and making multiuser, multitouch, and large touch-surface interactions and development easier. Esenther has an MS in computer engineering from Boston University. Contact him at [email protected].

Frédéric D. Vernier is an assistant professor in the Department of Computer Science, University of Paris Sud in France, where he works in the LIMSI-CNRS laboratory. His research interests include human–computer interaction and interactive visualization. Vernier has a PhD in computer science from the University of Grenoble. Contact him at [email protected].

Katherine Everitt is a PhD student in computer science at the University of Washington. Her research interests include tabletop interfaces, computer-supported cooperative work, and multimodal interfaces. Everitt has an MS in computer science from the University of California, Berkeley. Contact her at [email protected].

Mike Wu is a PhD candidate in computer science in the Dynamic Graphics Project lab at the University of Toronto, where he is a Health Care, Technology, and Place Fellow. His research interests include assistive technologies and tabletop user interfaces. Wu has an MSc in computer science from the University of Toronto. Contact him at [email protected].

Daniel Wigdor is a PhD candidate in the Dynamic Graphics Project lab at the University of Toronto and a long-term intern at Mitsubishi Electric Research Laboratories. His research interests include multisurface interactive spaces, shared tabletop interaction, and text entry on mobile devices. Wigdor has an MSc from the University of Toronto. Contact him at [email protected].

Meredith Ringel Morris is a researcher in the Adaptive Systems and Interaction group at Microsoft Research. Her research interests include human–computer interaction, computer-supported cooperative work, and educational technologies. Morris has a PhD in computer science from Stanford University. Contact her at [email protected].

Mark Hancock is a PhD student in computer science at the University of Calgary. His research interests include tabletop display interaction, including 3D interaction. Hancock has an MSc in computer science from the University of British Columbia. Contact him at [email protected].

Edward Tse is a doctoral student at the University of Calgary specializing in natural interaction over large digital displays. His research interests include supporting users' speech and gestures when collaborating on large surfaces. Tse has an MSc in human–computer interaction from the University of Calgary. Contact him at [email protected].

