Tailored Controls: Creating Personalized Tangible User Interfaces from Paper

Vincent Becker, Department of Computer Science, ETH Zurich, Switzerland

[email protected]

Sandro Kalbermatter, Department of Computer Science, ETH Zurich, Switzerland

[email protected]

Simon Mayer, University of St.Gallen and ETH Zurich, Switzerland

[email protected]

Gábor Sörös, Nokia Bell Labs

Budapest, Hungary
[email protected]

Figure 1. We present a concept and a prototype of a system for creating personalized user interfaces from paper. Users can cut out preferred shapes from paper and add interaction functionalities to them. The paper shapes and the user's fingers are tracked by an RGBD camera mounted above the interaction surface (Figure 1a). Our processing pipeline recognizes and measures manipulations of the shapes (Figure 1b), such as movements and touches, which can be mapped to certain functions according to the assigned interactions, e.g. for controlling a toy rover as illustrated in Figure 1c.

ABSTRACT
User interfaces rarely adapt to the specific user preferences or the task at hand. We present a method that allows users to quickly and inexpensively create personalized interfaces from plain paper. Users can cut out shapes and assign control functions to these paper snippets via a simple configuration interface. After configuration, control takes place entirely through the manipulation of the paper shapes, providing the experience of a tailored tangible user interface. The shapes and assignments can be dynamically changed during use. Our system is based on markerless tracking of the user's fingers and the paper shapes on a surface using an RGBD camera mounted above the interaction space, which is the only hardware sensor required. Our approach and system are backed up by two studies in which we determined what shapes and interaction abstractions users prefer, and verified that users can indeed employ our system to build real applications with paper snippet interfaces.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
ISS '19, November 10–13, 2019, Daejeon, Republic of Korea.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6891-9/19/11 ...$15.00.
http://dx.doi.org/10.1145/3343055.3359700

Author Keywords
Personalized user interfaces; Tangible interfaces; User interface framework

CCS Concepts
• Human-centered computing → Human computer interaction (HCI); Haptic devices; User studies;

INTRODUCTION
Tangible user interfaces, such as buttons and knobs, enjoy widespread use for controlling appliances: they naturally arose as controls of purely mechanical devices before the emergence of digital products, and virtual implementations of the interaction patterns they express are ubiquitous in graphical user interfaces (GUIs) for software as well. In fact, with the spread of displays in the past decades, more and more interactions have been transferred from tangible mechanical controls or remote controls to GUIs. These have the advantage of being able to display changing content and are therefore highly flexible and widely applicable. However, GUIs often do not induce any haptic sensation, which humans innately prefer over passive interfaces [42], and moreover GUIs usually do not provide common physical interactions humans are used to, such as easily moving, adding, or removing elements. This has given rise to the creation and proliferation of tangible user interfaces


(TUIs) in the past two decades, with the aim of making interactions more natural and binding virtual elements and functions to real objects. TUIs, which can take almost any physical form, are also conducive to the fundamental goal of ubiquitous computing: making computers (and their controllers) disappear physically and mentally into users' environments [37].

However, TUIs usually cannot adapt to a specific user nor to the task the user intends to carry out, simply because they are hardware devices and are thus constrained to – and by – their physical form. Consequently, many tangible interfaces that we use in our daily lives (e.g., traditional TV remote controls) are often built to provide interactors for all functions of the controlled device, even if these functions are used only rarely – or never. This can be overcome with interfaces that are able to change their shape during interaction [11, 21]. However, this currently requires complex and expensive electro-mechanical systems.

An intriguing solution for adaptive user interfaces is to let users themselves create and assemble their personalized user interfaces from inexpensive everyday materials (e.g., paper, cardboard, or playdough), and to enable them to link these TUIs to device functions. This would allow users to create exactly the interfaces they require, at their desired level of complexity.

Such an approach enables various applications. First, it benefits the process of user interface prototyping, where paper snippets are commonly used, by making it possible to add actual functionality to the paper shapes and thus to rapidly test designed interfaces – this enables us to bridge the "ideation" and "implementation" steps in the design process, which are typically separate [4]. Furthermore, the interfaces can easily be reconfigured, thereby enabling quicker iterations in the design process and an earlier realization of potentially erroneous approaches, as the UI does not have to be implemented first, which entails high development cost.

Second, an approach like this could be used in various real applications, for example in the domain of building control, where it would permit the creation of passive control interfaces for lights, blinds, AC, and other devices that are attached to surfaces and walls without requiring infrastructure such as power and control lines. This would provide an installation that can be easily reconfigured, which becomes necessary as building automation companies are searching for ways to flexibly rearrange the spaces we live and work in, in response to user preferences.

Third, the development towards the increasingly automated operation of industrial equipment entails less frequent direct interactions of workers with machines on industrial shop floors. In the wake of this process, adaptive, tactical user interfaces would enable us to avoid cluttering workshops with UIs that are operated infrequently (and might not even be suitable for a particular user). Rather, they would enable workers to bring their own interfaces with them, quickly assemble and configure them, and use them to interact with machines throughout the production space.

Contributions
This paper presents a system that allows users to create their own, personalized user interfaces by cutting out arbitrary paper snippets and assembling them on a suitable surface (e.g., a table). We track both the shapes and the user's fingers with an RGBD (colour and depth) camera without the need to employ any markers, enabling a natural control of devices and permitting a wide range of interaction abstractions. In addition, our system allows for dynamically adding and removing paper snippets from the interface.

We determined the types of interactions that such a system should enable through an elicitation study with 20 participants, and deduced 25 abstract interactions, out of which the 13 most common ones were implemented in our prototype – these interaction abstractions cover more than 92% of all observed interactions from the elicitation study. When creating an interface using our system, users cut out paper shapes and assign interaction abstractions to them; being tracked by the RGBD camera, these shapes respond to corresponding user interactions by emitting events that carry the relevant interaction information (e.g., the distance a shape was moved; the angle it was rotated; the duration it was tapped, etc.). These events are subsequently passed to any (networked) consumer where they trigger application functionality – this can be configured using a common dataflow programming language. We evaluated our complete prototype in another user study, which showed that participants were able to use our system to successfully create fully functioning interfaces for several different applications, such as controlling a sound system.

RELATED WORK
Already in 1995, Fitzmaurice et al. [10] presented Bricks, an approach that connects physical bricks, laid out on a display surface, to virtual graphical counterparts depicted on the display. The virtual shapes can be manipulated, e.g. moved or rotated, by manipulating the corresponding bricks. Shortly after, Ishii et al. [17] presented their vision of Tangible Bits, aiming at a coupling of digital information to physical objects and thereby making this information tangible. This coupling should also allow the manipulation of the digital state by the manipulation of the objects. Our approach fits this concept well by coupling components made from simple and inexpensive material, in our case paper shapes created by users themselves, to digital control components.

Dedicated tangible interfaces
Complex tangible interfaces that feature high representation capabilities are enabled by the interface changing its shape [11]. This may be combined with an augmented reality (AR) application [24] to add a tangible representation of the AR content. Shape-changing tangible interfaces can also be used to combine different elements such as sliders or knobs in a single component [21]. Studies on shape-changing interfaces can be found in [7, 22]. These interfaces provide an interesting approach in the area of dynamic and adaptable interfaces; however, they also entail relatively complex hardware and high cost. We intend to follow the opposite direction. Users should be enabled to create their own personalized interfaces from


simple, inexpensive material, and should be able to easily change and recreate interfaces themselves.

Printable circuits
One possibility for using paper for interaction elements is to print electronic circuits onto it. A number of projects have recently explored combining paper with electronics. For example, Instant Inkjet Circuits [18] and Printem [5] are simply created by a modified printer. The projects Sketching in circuits [30], PaperPulse [32], CodeCollage [31], and Light-ItUp [14] specifically explore how users combine various interactive paper electronics elements. The recent Pulp Nonfiction [41] extends regular paper with capacitive touch and pen sensing. Some paper electronics are even available in commercial products such as Chibitronics (https://chibitronics.com/) or DynamicLand (https://dynamicland.org/). The fundamental difference to our approach is that the paper, on which the circuits were printed, glued, or stuck, has to stay in a single piece, i.e. the user cannot create independent interaction elements which can be moved or reassembled; as such, the paper circuits act more as printouts of graphical user interfaces than tangible interfaces.

Using everyday objects as interfaces
In addition to the creation of dedicated tangible user interfaces as mentioned above, arbitrary objects in the user's environment can also be used as tangible interfaces. Pohl et al. [29] investigated the space of everyday objects that could be useful as tangible controllers and found that there is both a diverse set of objects available and potential use for them. This concept is employed in several works in order to assign different functions to different objects, e.g. in painting applications [12, 34]. Often, a workspace is instrumented with overhead cameras to track simple objects and their movements and map them to control functions; for example, iCon [6] repurposes physical objects to control other objects by augmenting them with a trackable paper label. Corsten et al. [8] do the same without markers, using an overhead depth camera for pose estimation. Funk et al. [13] present an augmented workspace in which everyday objects can be combined into personalized control widgets. The idea of using objects as tangible interfaces is also used in mobile settings by employing tablets [1, 15, 16] or head-mounted AR displays [2]. In contrast to visual sensing, project Zanzibar [35] contributes a portable rubber mat with embedded NFC readers that recognize various objects placed on top. This mat can be used to create reconfigurable tangible controls similar to our system.

Similar to our work, using simple objects in the user's environment also has the goal of using readily available components as TUIs at no (or hardly any) extra cost. Nevertheless, instead of using arbitrary objects, we intend to let users design their own interfaces – according to their personal preferences and abilities. Furthermore, our interfaces do not interfere with the normal use of objects (e.g., drinking from a mug that is also used as a "dial" TUI).


Reconfigurable interfaces from simple materials
Besides using existing physical objects as interfaces, there are other approaches which allow personalized interfaces and reconfiguration as in ours. Kelly et al. [19] recently presented ARcadia, which is intended for rapid prototyping. Control elements such as buttons, wheels, or sliders can be cut out from paper or cardboard, arranged, and then assigned a certain function in a browser application. Fiducial markers are attached to the shapes in order to recognize and track them using a webcam, e.g. a laptop camera. The laptop can be used to configure the function assignments at the same time, as ARcadia requires an additional input device for this task. Using markers has the advantage of being able to track the shapes very accurately and of giving the shapes a unique identity; however, the user has to provide the markers for every shape in the interface construction process, e.g. by printing them. In contrast to ARcadia, our intention was to make the TUI creation process as simple as possible. Hence, we avoid the use of markers and directly track the shapes and the user's fingers. We furthermore enable a broader range of interactions, as we (1) track the fingers separately from the markers, thereby allowing users to simply tap on shapes – or swipe over them – instead of having to cover an entire marker, and (2) study which interaction behavior is employed by participants in an elicitation study, resulting in a wider set of possible interactions which goes beyond simple movements and rotations.

The VoodooSketch [3] system allows users to draw user interface elements on light-reflective paper that are recognized by an overhead camera, as in our setting. Our system provides similar flexibility and ease of use; the main differences lie in creating the widgets via drawing rather than cutting and in the selection of supported materials.

The work of Olberding et al. [28] relies on printable circuits as mentioned above but furthermore allows cutting interaction elements from the circuit-augmented paper. They presented a special wiring topology for multitouch sensors that makes them robust to cutting, so users can define and cut personalized UI elements. An advantage of our approach is that no special material is required. The sensing part of our system, an RGBD camera, can be reused, and any kind of paper can be used for cutting out shapes. Moreover, with the camera tracking, our approach allows the position and the movement of the widgets to be taken into account when defining controls.

Object tracking
An important component of these approaches is the recognition and tracking of the elements, which may be printed circuits or real-world objects, as mentioned above. There are various technologies for tracking user-defined elements on a surface, all having their advantages and disadvantages. With cameras, for example, ShadowTracking [9] recognizes silhouette shadows from above a well-lit touch screen, while VoodooSketch [3] and RetroDepth [20] apply special retro-reflective surfaces. DIRECT [39] combines RGBD and IR input. In our approach, we do not only track the position of the finger, but also its orientation. Instead of vision, other modalities such as radar sensing [40] have also been used. Voelker et al. [36] presented Passive Untouched Capacitive Widgets that can be


Figure 2. An example of a paper interface: a trackbar.

tracked on top of unmodified capacitive touch screens. SLAP Widgets [38] provide tangible transparent silicone elements that can be reconfigured on the fly as various UI elements. The actual functions are projected onto the blank elements with an overhead projector. The elements are recognized from the bottom via IR-reflective identifiers. In comparison with our approach, the elements cannot be defined by the user in a simple manner.

ELICITATION STUDY
Technically, our approach is based on visually tracking user-cut paper shapes. The manipulations users perform (i.e. movements, rotations, touches, etc.) are then mapped to specific value changes or discrete actions. For example, a user might create a paper circle intending it to be a knob he/she can use to control the volume of a sound system. The user should also be able to assemble basic shapes into groups, which then represent an interface element; e.g. one could use two rectangles and a circle to create a trackbar as shown in Figure 2. In order to infer this mapping from a certain shape or group, we either require a fixed set of shapes and groups with a fixed mapping, which we can automatically recognize, or a set of mappings the user can assign to the shapes. The first case has the advantage of allowing the user to directly employ a shape or group without having to assign an interaction behavior to it, but would limit him/her to the fixed set of known shapes. This observation is closely related to the concept of affordance, introduced by Donald Norman [26], which describes the interaction possibilities a human perceives when encountering an object or another thing in his/her environment, e.g. that one can throw a ball or push a button. We are specifically interested in which affordances different paper shapes or groups have.

Our initial hypothesis was that there would be a common ground on the affordances of certain shapes among users. To validate this hypothesis, we carried out an elicitation study to investigate which shapes users would cut out for specific use cases. We invited 20 participants (seven female, average age: 29.1 years, ranging from 15 to 69 years) and asked them to imagine different scenarios in which they had to control a set of appliances using paper shapes they cut out themselves as controls. We included the four scenarios listed below with the corresponding appliances or functionality which had to be controlled.

1. Office: blinds (up/down), temperature, ventilation, a light bulb (on/off), an LED with adjustable brightness and color temperature, a doorbell.

2. Movie remote control: play/pause, fast-forward, rewind, volume, back button, ok button, directions for selection cursor.

Figure 3. Two examples of interfaces participants designed. Left: An interface for controlling a smart workplace, similar to what one might expect. Right: A complex interface for controlling a movie.

3. Drone: 3D movement (translation in each direction), rotation (horizontal rotation), taking a picture.

4. Car: air temperature, driver's seat temperature, driving modes (eco, comfort, sport), driver's seat adjustment (backrest tilt, seat pitch, seat height).

We provided the participants with sheets of paper in different colors and a pair of scissors, and let them cut out the shapes and groups they believed were best suited to provide the specific control functionality.

In contrast to our assumption, the shapes and assembled groups the participants created varied a lot. While some interfaces were implemented similar to our expectations (and to common everyday interfaces), many of them would not be understandable for another person without further explanation (cf. Figure 3). Often, the interfaces were relatively complex, because a single shape would be assigned a range of different functions. We furthermore did not find a consistent use of differently colored paper.

What we did find are common abstractions of interaction elements (or interaction patterns). The interaction elements may not have the same appearance (i.e., shape), but the same behavior. For example, many participants used buttons which could be tapped and which all related to the same underlying behavior despite being different in appearance. Hence, we decided not to implement an automatic recognition of shapes together with an automatic assignment of the interaction behavior, but to implement a system which allows the user to create an arbitrary shape and assign the intended behavior him-/herself from this set of identified common abstractions. The fact that all participants thought that their personal interface made sense with respect to the given task emphasizes the strength of the underlying idea, i.e., that the interfaces can be fully personalized to each individual's preferences.

Touchets
From the elicitation study, we inferred a set of 25 abstract interactions, which we refer to as touchets. A touchet represents a behavior a certain shape or a group of shapes should exhibit, e.g. a "finger slider" should allow a piece of paper to become a slider manipulated by the movement of the finger on top of it – note that this concept is independent of what the shape actually looks like. We found five categories of touchets, the two largest ones being buttons and sliders. In the following, we will shortly describe each identified touchet. In our implementation, touchets propagate their manipulation


by returning events, which we also explain here. For some of the touchets we added a picture displaying some elements that actually occurred in the study, with visualizations showing how a shape may be moved or touched; however, these are merely examples of shapes or groups that could be assigned the corresponding touchet.

Button Touchets
1. Button (B): A simple button which returns a "touched" event when the user's finger touches it.
2. Hold Button (HB): A button which can distinguish between a short and a long press. It also returns an event with the duration of being pressed.
3. Positional Button (PB): A positional button may contain several clickable zones, which are defined by the user.
4. Positional Hold Button (PHB): The same as a positional button, but with hold buttons instead of buttons.

Finger Sliders
Finger sliders react to a user moving his/her finger above them, i.e. they track the position of the finger relative to the slider. They return this position as a relative value whenever the finger is moved. The finger is always kept close to the slider.

5. Finger Slider (FS): A 1D finger slider with a straight axis.
6. 2D Finger Slider (2FS): A 2D finger slider that reacts to the movement of the finger in two directions.
7. Curved Finger Slider (CFS): A 1D finger slider, however, with a curved axis.
8. Special Finger Slider (SFS): A finger slider with an arbitrary shape.
9. Swipe Button (SB): A finger slider that returns the direction of the swipe.

Shape Sliders
Shape sliders are similar to finger sliders with the difference that the value is not set by the finger directly, but by moving an indicator shape relative to a reference shape.

10. Slider (S): A 1D slider with an indicator shape and a reference shape. It returns the relative position of the indicator to the main axis of the reference shape, which is defined by the user by identifying the two endpoints of the axis.
11. Circular Slider (CS): For circular sliders, the indicator shape can be rotated around the reference shape. The touchet returns the current rotation angle.
12. Unbounded Slider (US): A 1D slider without an upper bound. The returned value is the distance of the indicator shape to the reference.
13. 2D Unbounded Slider (2US): An unbounded slider that returns the distances from the reference shape on two orthogonal axes.
14. Trackbar (T): A trackbar consists of three shapes, two representing the lower and upper bounds, and the third being the indicator. It returns the relative position w.r.t. the bounds.
15. 2D Trackbar (2T): A trackbar where the indicator can be moved on a 2D area bounded horizontally and vertically by four reference shapes. It returns the corresponding relative position for both axes.
16. Initial Position Offset Slider (IS): A single shape that returns the current distance from the position it was first touched along a single axis.
17. 2D Initial Position Offset Slider (2IS): The same as above, only returning the distance from the initial position on two orthogonal axes.
18. Rotation (R): A single shape that reports the current rotation angle relative to the position it was in at the beginning of being touched and relative to its initial position on the table.
19. Angle (A): Returns the current angle between two shapes.

Hand Touchets
These interactions all concern the movement of the user's hand directly, independent of the paper shapes. Hence, they do not fit into our concept of interacting with paper shapes and were also relatively rare. However, we list them here for completeness.

20. Hand Horizontal Position (HP): Returns the 2D position of the hand in the plane parallel to the interaction surface.
21. Hand Rotation (HR): Returns the rotation of the hand in the plane parallel to the surface around the center of the hand.
22. Hand Height (HH): Returns the height of the hand from the surface.
23. Hand Tilt (HT): Returns the two angles of a virtual plane represented by the hand relative to the surface.

Special Touchets
24. Existence (E): The existence touchet consists of a single shape which returns a boolean value referring to whether it is currently placed on the surface or not.
25. Containment (C): Consisting of two shapes, it returns a boolean value referring to whether the smaller shape is currently contained within the larger one or not.

Touchets provide a powerful abstraction from the actual shapes and thereby enable users to personalize their TUIs. In addition, it is possible to assign multiple touchets to the same shape or group, which will then react to more interactions. Hence, our


touchets approach allows users to model and create complex interfaces. Besides, this approach has the benefit that we do not have to classify the shapes, but only track each shape and group and its touchet assignment.
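To make the abstraction concrete, the sketch below (our illustration under assumed variable names, not the authors' implementation) shows how the value of a Trackbar (T) touchet could be computed from the tracked centroids of its three shapes: the indicator is projected onto the axis between the lower- and upper-bound shapes and normalized to [0, 1].

```python
import numpy as np

def trackbar_value(lower, upper, indicator):
    """Hypothetical Trackbar (T) touchet: relative position of the indicator
    shape between the lower- and upper-bound shapes (2D centroids in pixels)."""
    lower, upper, indicator = np.asarray(lower), np.asarray(upper), np.asarray(indicator)
    axis = upper - lower                                       # main axis defined by the two bounds
    t = np.dot(indicator - lower, axis) / np.dot(axis, axis)   # orthogonal projection onto the axis
    return float(np.clip(t, 0.0, 1.0))                         # relative position w.r.t. the bounds

# Example: an indicator halfway between the bounds yields 0.5
print(trackbar_value((100, 200), (300, 200), (200, 190)))
```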

Frequency of Touchet Occurrence
The touchets presented above cover the complete set of interaction abstractions observed in the study. When using our approach, one has to assign the touchet functionality to the cut-out paper shapes. Providing the user with a large set of touchets might lead to confusion and make it difficult to choose the appropriate touchet. Hence, we counted the frequency of occurrence of touchet instances in the study in order to select the most popular touchets for our prototype system. In total, we found 702 touchet instances. Table 1 shows the frequency of each touchet. Only nine touchets account for over 90% of occurrences. Based on the frequencies, we decided to include all touchets with a count of at least five in our implementation. However, we excluded "hand height" (HH), as it does not fit into our concept of paper interaction. Another touchet we omit is "existence" (E), as shapes may not have a unique appearance and it is hence difficult to visually track a shape which is taken away from the surface and reused later. The 13 selected touchets cover over 92% of occurrences; we hence provide a nearly complete set of interaction elements without overwhelming the user.

TAILORED CONTROLS

Method Overview
We first describe the typical process of using user-defined paper snippets for controlling appliances. (The use of our prototype is furthermore showcased in a video available at https://youtu.be/p_wS6BhTpQQ.) After the user has cut out and arranged his/her shapes, he/she has to assign touchets to the shapes to add functionality, i.e. responsive behavior, to them. This is done by placing the finger in the corner (shown as a blue square in Figure 1b), which opens a menu to select a touchet. For some touchets, the user may have to select several shapes, e.g. for a slider, an indicator and a referential element are required. Similarly, one can assign pre-defined actions to the shapes by opening another menu using the green square. Actions are tags which can be used by the attached application to distinguish which shape was touched (as there might be several buttons, for example). From then on, the touchet instance emits the corresponding event carrying the relevant information about its state and the action tag whenever it is manipulated by the user. These events can then be used in any connected application, as explained below. Our implementation reacts to all changes immediately, e.g. movements of the reference shapes of a trackbar touchet instance change the corresponding bounds, which directly also changes the trackbar's value.
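As an illustration only (the paper does not prescribe an exact event format), a touchet event forwarded to the application layer might carry the touchet type, the user-assigned action tag, and the measured value:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

@dataclass
class TouchetEvent:
    """Hypothetical payload of a touchet event; the field names are our own."""
    touchet: str                 # touchet type, e.g. "B", "HB", "T", "R"
    action: str                  # user-assigned action tag, e.g. "volume"
    value: Optional[float]       # e.g. slider position, rotation angle, press duration
    timestamp: float = field(default_factory=time.time)

# Example: a trackbar assigned the "volume" action reports a new relative position
event = TouchetEvent(touchet="T", action="volume", value=0.42)
```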

From the touchets we selected for our prototype, we can derive two main requirements: tracking the shapes and groups which have touchets assigned to them, and tracking the 3D position of the hand, especially of the fingers. Here, we restrict the finger tracking to a single stretched-out finger. A way to fulfil these requirements using only a single hardware sensor is to

use an RGBD camera mounted above the interaction surface (cf. Figure 1a). The depth information is particularly important to detect when the finger touches the surface or a shape. An overview of our processing pipeline is depicted in Figure 4. On the depth image, we perform a color-independent hand segmentation based on depth thresholding. In the hand segment, we find a first estimate of a stretched-out finger. We then track the fingertip using an optical flow algorithm, which allows us to obtain the changes in position of key points on the fingertip between each pair of consecutive frames. Knowing the 2D position of the fingertip, we can obtain the finger's height from the depth image as well and thereby deduce whether the user is currently touching the surface or a shape on the surface. Based on the RGB image, we track all the shapes in the field of view of the camera. As soon as a shape is placed on the surface, it is assigned a unique identifier. Each shape is tracked continuously by a shape tracking algorithm we designed for this purpose, which returns an estimate of the translation and rotation a shape has undergone between two frames. This is a challenging problem, as a significant part of a shape may be occluded by the user's hand. Finally, we combine the information about the finger and shape positions and thereby can detect whether the user is touching a shape, swiping over it, or moving it. If any of the actions performed matches the touchet specification assigned to the touched shape, an event is forwarded to a server, from where this information can be relayed to other applications. For example, a "button" touchet will not react to rotation, but a "rotation" touchet will.

For configuring the propagation of events, we use Node-RED (https://nodered.org/), a programming tool which allows connecting and programming hardware devices, APIs, and online services in a GUI editor. Using Node-RED, users can completely customize their own application. For frequent use, the Node-RED parts can certainly also be pre-implemented, so that the user can simply connect the touchet events to these pre-configured actions.
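A minimal sketch of the forwarding step, assuming the events are posted as JSON to a hypothetical Node-RED 'http in' endpoint at /touchet (the actual transport in our prototype may differ):

```python
import json
import urllib.request

NODE_RED_URL = "http://localhost:1880/touchet"   # hypothetical 'http in' node endpoint

def forward_event(event: dict) -> None:
    """Send a touchet event to Node-RED as JSON (illustrative sketch only)."""
    request = urllib.request.Request(
        NODE_RED_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)              # the Node-RED flow decides how to route it

# Example: a button tagged "play_pause" was tapped
forward_event({"touchet": "B", "action": "play_pause", "value": None})
```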

Setup
For the RGBD camera, we use an Intel RealSense D435, which we mount 40 cm above a table as shown in Figure 1a. The camera monitors a rectangular area of about 43 cm by 32 cm. The camera stream is processed by a desktop computer with an Intel i7-4790K 4x4 GHz CPU and 16 GB of RAM. All the processing runs on the CPU. The source code of the system can be downloaded from https://github.com/vincentbecker/TailoredControls.

Image Preprocessing
Before detecting the hands and shapes, we first preprocess the color and depth images from the camera. The RGB image is merely aligned with the depth image so that the coordinate systems correspond to each other. The depth image requires further processing due to noise: we first apply a temporal filter to reduce the noise. Additionally, to remove spurious errors in the depth image where the values are clearly too high, we employ a hole filling algorithm, which takes the neighboring pixels of such a hole into account. Both algorithms are available in the RealSense library.


Table 1. The frequency of occurrence of each touchet in the elicitation study, sorted by frequency: B: 233, HB: 95, R: 73, PB: 66, S: 46, PHB: 38, E: 33, US: 30, T: 24, FS: 14, 2T: 8, 2FS: 7, SB: 7, CS: 5, HH: 5, IS: 4, A: 3, 2IS: 2, HT: 2, 2US: 2, C: 1, CFS: 1, SFS: 1, HP: 1, HR: 1.

Figure 4. Our processing pipeline, mainly consisting of finger and shape tracking based on the images from the depth camera. (Pipeline stages: hand & fingertip detection and hand & finger tracking on the depth image; shape detection and tracking on the greyscaled RGB image; touch detection; manipulation tracking; event propagation, e.g. to Node-RED.)

Another problem is that the camera plane is usually not perfectly parallel to the interaction surface, resulting in some areas of the surface being perceived as further away, which interferes with the hand and finger detection on the surface. In order to prevent this from happening, we take a snapshot of the depth values when starting the system and henceforth subtract these from the currently recorded image. Consequently, all points on the surface have the same depth, which facilitates further processing.
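The preprocessing could look roughly as follows with the RealSense SDK (pyrealsense2) and NumPy; the stream resolution, filter settings, and variable names are assumptions for illustration, not the exact configuration of our prototype.

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

align = rs.align(rs.stream.depth)        # align the RGB image to the depth coordinate system
temporal = rs.temporal_filter()          # reduce temporal depth noise
hole_filling = rs.hole_filling_filter()  # fill spurious holes from neighboring pixels

background = None                        # depth snapshot of the empty surface, taken at startup

def preprocess():
    """Return the height above the surface (in depth units) and the aligned color image."""
    global background
    frames = align.process(pipeline.wait_for_frames())
    depth_frame = hole_filling.process(temporal.process(frames.get_depth_frame()))
    depth = np.asanyarray(depth_frame.get_data()).astype(np.int32)
    color = np.asanyarray(frames.get_color_frame().get_data())
    if background is None:
        background = depth.copy()
    # subtracting the snapshot puts all surface points at height 0, regardless of camera tilt
    height = np.clip(background - depth, 0, None)
    return height, color
```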

Finger Detection and Tracking
One main requirement is the ability to track the hand and fingers in order to detect when the user is touching and manipulating the shapes. In an optimal case, we would know the pose of the complete hand. For this purpose, we tested several hand pose estimation algorithms such as [27], but none showed satisfying accuracy despite having high computational requirements. Hence, we restricted the way users may manipulate shapes to the most common one, touching them with a single fingertip, and designed an efficient tracking algorithm for this case. Furthermore, we require the contours of the hand to know when shapes are occluded. We assume that there are no objects in the field of view of the camera apart from the paper shapes and the hand. Hence, we can distinguish the hand by applying thresholding to the depth image, as the shapes are flat. We threshold the image at a level of 1 mm to obtain a binary mask of the hand and apply a contour finding algorithm. From all the resulting contours, we take the largest one, as we assume the hand and arm to be the only larger 3D object in the scene. Thereby, we obtain a rough estimate of the hand and arm position. To find the fingers, we search for defects in the convex hull around the hand. The fingertips are the corners of the convex polygon. Unfortunately, even after preprocessing the depth image, there is still a significant amount of noise, which deteriorates the accuracy of the fingertip position. Hence, we do not only search for the fingertip in the 1 mm-thresholded binary mask, but also in higher masks.

Figure 5. The resulting corner points of the finger finding algorithm applied on different depth levels.

We thereby obtain several corner points along the top of the fingers, as illustrated in Figure 5. Afterwards, we apply a closing morphology on the corner points in order to merge nearby points. Fingertips then appear as the ends of point clusters.

We assume that the relevant fingertip for touching is the point furthest away from the arm. We can find the arm as the part of the body which intersects the image frame.
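A rough OpenCV sketch of this detection step is given below; the threshold levels and the assumption that the arm enters from the bottom image border are illustrative choices, and the closing morphology that merges nearby corner points is omitted for brevity.

```python
import cv2
import numpy as np

def detect_fingertip(height):
    """Estimate the fingertip from the background-subtracted depth image
    (height above the surface, roughly in mm). Illustrative sketch only."""
    candidates = []
    # search at several threshold levels to be robust against depth noise
    for level in (1, 3, 5):
        mask = (height > level).astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        hand = max(contours, key=cv2.contourArea)   # hand and arm are the largest 3D object
        hull = cv2.convexHull(hand)                 # fingertip candidates are hull corners
        candidates.extend(point[0] for point in hull)
    if not candidates:
        return None
    points = np.array(candidates)
    # assuming the arm intersects the bottom image border, the fingertip is the
    # candidate furthest away from it, i.e. the point with the smallest y coordinate
    return tuple(points[np.argmin(points[:, 1])])
```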

The detection step described above by itself merely provides the current location of the fingertip, but not its movement over time, which is relevant to recognize and track interactions with the shapes. In order to actually track the finger, we apply the Lucas-Kanade optical flow algorithm [25] to the detected finger region in the greyscaled RGB stream. The algorithm is able to find salient keypoints in the region and track their position across multiple frames. We apply it as soon as a finger is detected in the images. As multiple points are tracked, we can not only calculate the translation of the finger across frames, but also its rotation, by matching the two corresponding point sets from two consecutive frames. This is done by computing the best-fitting rigid transformation [33]. With the 2D location of the fingertip we can directly extract its height above the surface from the depth map and deduce whether the user is touching a shape or not.
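This tracking step could be sketched as follows; the Lucas-Kanade parameters and the use of cv2.estimateAffinePartial2D for the best-fitting rigid transformation are assumptions for illustration.

```python
import cv2
import numpy as np

def track_finger(prev_gray, gray, prev_points):
    """Track finger keypoints between two greyscale frames and estimate the
    finger's translation and rotation (sketch only). prev_points is an Nx1x2
    float32 array, e.g. from cv2.goodFeaturesToTrack(prev_gray, 30, 0.01, 3,
    mask=finger_mask)."""
    next_points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_points, None, winSize=(21, 21), maxLevel=2)
    good_prev = prev_points[status.ravel() == 1]
    good_next = next_points[status.ravel() == 1]
    # best-fitting rotation + translation (+ scale) between the two point sets
    matrix, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
    if matrix is None:
        return good_next, np.zeros(2), 0.0
    translation = matrix[:, 2]
    rotation = np.degrees(np.arctan2(matrix[1, 0], matrix[0, 0]))
    return good_next, translation, rotation
```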


Figure 6. The binary shape masks with the detected hand region.

Evaluation of Fingertip Detection
In order to evaluate the accuracy of the fingertip detection, we recorded several sequences during which a participant moved their fingers below the camera. We then manually annotated the position of the fingertip in order to compare it to our estimation. In total, we annotated 1137 frames from six participants (note that our detection method works independently of skin color). On average, the error was 15 pixels, which corresponds to approximately 1 cm; however, this average was influenced by a few large errors which occur when the hand is entering the field of view of the camera. The median error was 8.6 pixels, which is accurate enough given shape sizes of a few centimeters.

Shape Tracking
Detecting and tracking the shapes is performed purely on the RGB image, as the depth image does not contain any information about the flat paper shapes. We assume the interaction surface to be unicolored with low saturation, e.g. a standard table. To detect the shapes, we convert the RGB image to the HSV color space and threshold the saturation channel (assuming the shapes have a different color than the table). In the resulting binary mask, we find the contours. From these shape candidates, we exclude those in the area covered by the hand contour, as illustrated in Figure 6. Whenever a shape is occluded by the hand or arm, we try to match reappearing shapes by their position to recover the shape identity.
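A possible OpenCV sketch of the detection step; the saturation threshold and the minimum contour area are assumed values.

```python
import cv2
import numpy as np

def detect_shapes(bgr, hand_mask, sat_threshold=60, min_area=300):
    """Find contours of colored paper shapes on a low-saturation surface,
    excluding everything under the hand/arm. Illustrative sketch only."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    shape_mask = (hsv[:, :, 1] > sat_threshold).astype(np.uint8) * 255
    shape_mask[hand_mask > 0] = 0        # exclude the area covered by the hand contour
    contours, _ = cv2.findContours(shape_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```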

Tracking Manipulations of Shapes
Knowing the position of the fingertip and the shapes, we can detect when a user touches a shape. To fulfil the necessary requirements to recognize all possible touchet interactions, we have to detect for how long a shape is touched and whether it is swiped upon, as well as the translation and rotation of the shape in case of a movement. A swipe can be distinguished from moving the shape by the fact that only the finger, but not the shape itself, moves. For movements of shapes, we track the change in translation and rotation from a shape's appearance for every frame since the shape was touched, as this might induce a parameter value change which we want to be able to follow continuously. Since the shape is occluded by the finger (and the finger position might also shift during the movement), applying a direct transformation calculation through a best-fit algorithm as for the finger tracking is not possible. Instead, we take a snapshot of the shape in the moment it is first touched and calculate the translation between the currently detected shape and the snapshot. For the rotation, we iteratively test for the best fit employing an efficient ternary search. The search

uses the rotation estimate of the finger as a starting point; however, as the finger position and rotation might change independently of the shape during the movement, this can only be used as a first indication. The measure for a fit is the area of intersection between the detected shape and the snapshot. Note that we can also track the rotation of circular shapes, as a circle is partly occluded by the finger during interaction, resulting in a perceived shape that has a "hole" at the location of the finger, which allows the tracking of the rotation.
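The rotation fit can be illustrated as a ternary search over the rotation angle that maximizes the overlap between the current shape mask and the rotated snapshot; the ±45° search window and the 0.5° tolerance are our assumptions.

```python
import cv2
import numpy as np

def overlap(snapshot_mask, current_mask, angle, center):
    """Area of intersection between the current shape mask and the rotated snapshot."""
    h, w = snapshot_mask.shape
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(snapshot_mask, rotation, (w, h))
    return int(np.count_nonzero((rotated > 0) & (current_mask > 0)))

def fit_rotation(snapshot_mask, current_mask, center, hint, window=45.0, tol=0.5):
    """Ternary search for the rotation angle (degrees) around the finger-based
    estimate `hint` that best aligns the snapshot with the detected shape."""
    lo, hi = hint - window, hint + window
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if overlap(snapshot_mask, current_mask, m1, center) < \
           overlap(snapshot_mask, current_mask, m2, center):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0
```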

Evaluation of Shape Tracking
To evaluate the shape tracking accuracy, we recorded several sequences while moving a square shape at different speeds, also including rotations. For 1,529 frames, we manually annotated the positions of the four corners of the square and compared them to the estimate calculated by our tracking algorithm. We calculated the average displacement of a corner point. The mean error over all the frames was 6 pixels (0.4 cm), which is accurate enough for our purpose.

USER STUDY
To evaluate the entire process of creating a TUI with our prototype, we invited six new participants to a second study (two females, 18 to 28 years old). As we planned to let the participants build a complete application from scratch, including the connection to and implementation of the Node-RED flow, we only invited computer science students with programming knowledge. We asked them to implement interfaces and the Node-RED flows for the three following applications:

1. Driving a little toy rover as shown in Figure 1c, which can drive forwards, backwards, and rotate to the left and right.

2. Creating a music playback control based on an MPD (https://www.musicpd.org/) wrapper we provided. The interface should control volume, play and pause, and switching to the next or previous song.

3. Creating an interface for controlling a slide show. In order to do so, we provided a wrapper in Node-RED to access xdotool (https://www.semicomplete.com/projects/xdotool/), which allows emulating keyboard presses. The interface should allow starting and stopping a slide show, as well as moving to the next and previous slides.

The order of implementing each of these applications was different for each participant. Three days before the study, we gave the participants basic information on the required components and asked them to get familiar with MPD and xdotool. During the study, we provided the participants with an overview of the available touchets besides the task descriptions. We set no time limit and let the participants try any interface they wanted. After the study, we asked them several questions concerning our prototype and let them fill in a User Experience Questionnaire (UEQ) [23] (http://www.ueq-online.org/) score sheet in order to obtain a standardized measure of the user experience. The UEQ includes 26 items on a seven-point scale ranging from -3 to 3. The output of the UEQ analysis are scores in the following six dimensions: attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty.


Figure 7. The average results of the UEQ for the dimensions attractiveness, perspicuity, efficiency, dependability, stimulation, and novelty.

All participants successfully completed the tasks. On average, a participant spent 2 hours and 18 minutes on all three tasks. Note that this includes all necessary parts, i.e., the design of the paper interface and the assignment of the touchets, as well as the Node-RED implementation. As participants spent most of the time on the Node-RED implementation, this latter part could also be pre-implemented for certain real-world applications, speeding up the use of our method significantly, e.g., for most of our tasks, from the order of half an hour to an hour down to a few minutes. As we let participants try out interfaces as they wished, they did not optimize for time. One participant enjoyed using the system and optimizing his interface so much that he stayed for five hours. On the other hand, another participant created an entire interface in 11 minutes. Furthermore, the average task time significantly decreased from 70 minutes for the first task to 32 minutes for the second.

The resulting scores for the six dimensions of the UEQ are shown in Figure 7. For all dimensions, the scores are positive, indicating that participants generally enjoyed using our prototype. Most noticeable are the scores for perspicuity, stimulation, and novelty, which imply that the system was easy to learn and to understand, that the participants were motivated and excited to use the approach, and that they believed it to be a creative new method for designing interfaces. The worst characteristic is efficiency, because our prototype cannot track fast movements of the finger due to significant motion blur in the depth stream, i.e., sometimes participants had to repeat their interactions. Moreover, several participants mentioned the relatively long time it takes to configure the necessary Node-RED flows, especially for inexperienced users. This most likely also influenced the attractiveness score. However, the Node-RED part is only used to forward the interaction events and could be replaced by a different mechanism, or the flows could be pre-implemented for common devices to be controlled.

As part of a survey that followed the study, participants confirmed that our method was easy to understand and that most of them enjoyed using the prototype, especially to try out different interfaces. They also suggested different scenarios in which Tailored Controls could be used, such as for rapid prototyping, for interfaces in changing environments, or for controlling home devices.

DISCUSSION
Our technical evaluation and the results from the presented user study show that we created a functioning prototype with appropriate performance (e.g., regarding shape detection) which was positively received by participants. This is in spite of the fact that our prototype has several limitations, which are discussed below.

Despite these limitations, we were able to show that the touchet taxonomy we derived from the results of our elicitation study can indeed be used for the implementation of personalized TUIs. With our approach, we progress significantly beyond the state of the art compared to other current systems that enable dynamic TUIs: in addition to not requiring fiducial markers, our system enables more natural interactions due to the explicit finger tracking functionality, and it provides users with a wide range of interaction abstractions to select from when creating TUIs.

We built our prototype with the intention to demonstrate the feasibility of our approach and to evaluate it in the user study. However, we believe there is a multitude of different application scenarios where a system such as ours could prove useful, as already mentioned in the introduction.

First, our approach can be used for the rapid prototyping of functional TUIs – paper prototyping is a common technique for interface design; however, such interfaces are usually not functional and the effects of using them have to be imagined by prototypers. With our system, it is possible to immediately test and experience functional interfaces, which we believe would spur creativity. This is backed by our user study, where participants were very motivated to try out different interaction elements, in particular because our system does not restrict users to predefined generic paper shapes. Consequently, Tailored Controls can bridge the gap between the "ideation" and "implementation" stages in the design thinking process [4], enabling the designer to be more creative and faster by providing direct feedback about how a functionally implemented interface reacts.

Second, our system can be used in the context of dynamically changing workspaces, e.g., to provide ad-hoc interfaces to devices that are used only infrequently – we imagine this to be of use especially in crowded workspaces (e.g., in industrial workshops), where it allows users to carry interfaces with them and deploy them tactically, rather than attaching TUIs to the machines they intend to control. To this end, we want to emphasize that the presented system and our prototype are not limited to paper alone, but can be used with other (sufficiently flat) materials.

Third, our approach enables a very high degree of personalization in terms of the TUIs that can be created by users. It thus enables individualized user interfaces, for instance in the context of accessibility constraints: with our approach, it is possible to create interfaces that are optimally suited for individual disabled persons, given their specific type of disability and context constraints.


Limitations and Future Work
Our approach has several limitations, both from a theoretical and a practical perspective, which we intend to address in the future. Our current implementation of the finger tracking algorithm only allows the use of a single finger for manipulations, and there is no multi-touch support. Furthermore, it is difficult to track very fast movements of the fingertip, as fast movements create sequences of blurry images in the depth stream. Finally, the interaction space is currently limited to a relatively small flat area. We envision that these problems can be solved in the future by better hand pose estimation algorithms on the one hand and by improved hardware on the other. Our approach could directly be adapted to such improvements. A further issue is that our system requires an RGBD camera to be available to monitor the scene, i.e. the approach is most applicable for fixed workspaces. With future improvements, we however expect the area which may be covered to grow.

On a philosophical level, the use of flat paper shapes may not be considered tangible in a strict sense. However, we view tangibility as the physical, material presence of the elements used, i.e. that they can be touched, moved, or reconfigured without interacting with a digital proxy device that displays the user interface. We believe that already this property of the paper shapes is beneficial for the interaction process, as it enables the interactions to take place in the real world, a space the user is naturally accustomed to. In terms of taking advantage of the full properties of paper, one could experiment with other forms paper interfaces may take, such as folding or crushing the paper, which would further enhance the tangible experience. Nevertheless, none of our participants in the elicitation study produced any of these interfaces, nor did anyone mention that this could be a possibility, even though we did not restrict them to only cutting out paper shapes. Furthermore, one could include different types of paper or materials, such as fabric, which is an interesting aspect of future work. We partly already included this in the study by providing differently colored paper; however, this property was rarely incorporated in the interface designs by the participants. Further, we do believe our approach could naturally be used with other (flat) materials as well, such as fabric or even play dough, which would enable an even easier reconfiguration of interfaces. This could also solve practical environmental issues paper shapes might have, for example that they are easily blown away by wind or a draft.

CONCLUSION
We presented the Tailored Controls approach, which enables the simple creation of personalized TUIs that are made from plain paper but can be connected to virtually any application. In a first user study with 20 participants, we found 25 interaction abstractions implicitly employed by the participants. We built a functional prototype and implemented 13 of the most common of these abstractions. Our system tracks a user's fingertip as well as the paper shapes and is thereby able to detect interaction events, including tapping, rotating, swiping, sliding, and many more. Our prototype allows the simple creation of user interfaces for diverse applications, as we showed in a second user study, and demonstrates the feasibility of our initial concept. By using a set of abstractions that covers a wide range of potential interaction elements, we give users the freedom to create the paper shapes they want and afterwards add corresponding functionality themselves, instead of having to choose from a predefined set of generic elements. The only hardware required is an RGBD camera mounted above the interaction surface.
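To illustrate the kind of shape-to-function assignment such a configuration produces, the following sketch uses hypothetical names and callbacks (not the prototype's actual code): two paper snippets are bound to a rotation and a tap abstraction, and recognized interaction events are dispatched to application-side handlers.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

# Hypothetical sketch of the configuration step: each paper shape is assigned an
# interaction abstraction (e.g., "tap", "rotate") and an application callback.
# Event names, shape IDs, and callbacks are illustrative, not the prototype's API.

@dataclass
class ShapeConfig:
    bindings: Dict[Tuple[str, str], Callable] = field(default_factory=dict)

    def assign(self, shape_id: str, abstraction: str, callback: Callable) -> None:
        """Bind an interaction abstraction on a given paper shape to a function."""
        self.bindings[(shape_id, abstraction)] = callback

    def dispatch(self, shape_id: str, abstraction: str, value=None) -> None:
        """Forward a recognized interaction event to the assigned function, if any."""
        handler = self.bindings.get((shape_id, abstraction))
        if handler is not None:
            handler(value)

# Example usage with made-up application functions:
config = ShapeConfig()
config.assign("round_snippet", "rotate", lambda angle: print(f"set volume by {angle:.1f} deg"))
config.assign("square_snippet", "tap", lambda _: print("toggle lamp"))

# The tracking pipeline would emit events like these:
config.dispatch("round_snippet", "rotate", value=15.0)
config.dispatch("square_snippet", "tap")
```

The indirection through such a binding table is what allows shapes and assignments to be changed dynamically during use without touching the application.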

Tailored Controls demonstrates the feasibility of inexpensive reconfigurable TUIs, and one main contribution is to enable further research in the direction of customizable, personalized TUIs. Immediate applications of our approach include the rapid prototyping of functional TUIs, the creation of tactical interfaces for sporadic interactions, and the generation of individualized TUIs that are optimal for concrete individual contexts, for instance for the benefit of disabled persons. In addition, triggered by the expected proliferation of RGBD cameras, we expect our approach to enable novel interaction methods in domestic and public environments, as well as in mobile scenarios.

ACKNOWLEDGEMENTS
We would like to thank all study participants for their time and effort and the anonymous reviewers for their constructive feedback.
