Mixed Reality for Supporting Office Devices Troubleshooting

Frederic Roulland, Stefania Castellani, Pascal Valobra, Victor Ciriza, Jacki O’Neill, and Ye Deng1

Xerox Research Centre Europe2

ABSTRACT

In this paper we describe the Mixed Reality system that we are developing to facilitate a real-world application, that of collaborative remote troubleshooting of broken office devices. The architecture of the system is centered on a 3D virtual representation of the device augmented with status data of the actual device coming from its internal sensors. The purpose of this paper is to illustrate how this approach supports the interactions required by the remote collaborative troubleshooting activity whilst taking into account technical constraints that come from a real-world application. We believe it constitutes an interesting opportunity for using Mixed Reality in this domain.

KEYWORDS: device troubleshooting, Mixed Reality, 3D modeling, collaborative systems.

INDEX TERMS: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems—Artificial, augmented and virtual realities; H.5.2 [Information Interfaces and Presentation]: User Interfaces—Training, help, and documentation

1 INTRODUCTION

The work presented in this paper is part of a research program originally motivated by real-world problems in the domain of collaborative remote troubleshooting of broken office copiers and printers. A key characteristic of this situation is that the people on-site actually carrying out the troubleshooting work are the users of the copier and are rarely expert in its inner workings. Users have access to a number of troubleshooting resources, from an online knowledge base to a call centre staffed by experts. However, ethnographic studies we conducted (see e.g. [15]) revealed a number of flaws in the existing troubleshooting support. This inspired us to design a collaborative system based on a shared virtual representation of the ailing device, which captures both the device status and the problem to be solved [3]. The system enables the device user (customer) to call the call centre and interact with a remote troubleshooter directly through the device itself. The shared representation primarily consists of a 3D model of the ailing device and a number of means of interacting with it adapted to the customers’ and troubleshooters’ roles in the troubleshooting task. This virtual representation of the device in its current state, which appears on both the troubleshooters’ and customers’ interfaces, will be the main focus of this paper.

Despite the fact that the system was developed in a very specific context of application and for a particular type of device, the approach that we have adopted could be of interest in a much wider context and opens new opportunities for the use of MR systems and 3D-based representations in support of particular real-world remote collaboration applications.

2 RELATED WORK

A number of systems have been developed in an attempt to help remotely situated people to work together when this work involves physical objects in the local environment of one or more of the participants ([7], [9], [11], [12]). Such systems aim to re-create aspects of face-to-face interaction around remote objects and tend to use video as the medium for bringing local objects to the remote site. Our research has taken an alternative approach to design in these circumstances. Rather than treating face-to-face interaction as a starting point, we began by examining a situation in which remote interaction around objects already occurs.

Other work focuses on the enhancement of remote expert-customer troubleshooting settings. The method illustrated in [5] is close to our work in that the customer can establish from the printer a phone/data line connection with a troubleshooter and use voice communication and remote diagnostics. “Session data logs” can be used to share data read from the device and inform the troubleshooter. However, since the primary interaction channel is still the phone, troubleshooting is likely to continue to suffer from many of the problems related to the audio channel, such as the need to verbalize instructions, direct users through space, and describe device parts in non-technical language.

To overcome this lack of information due to the dislocation of the expert, some methods involving video-based Augmented Reality (AR) have been proposed. For instance, Friedrich [6] describes an AR system allowing a mobile on-site user to be instructed or to access documentation via an AR headset, in order to carry out device maintenance. This method provides help according to situational factors (e.g. the user's position), but it does not encapsulate information about the current status of the device. In the same area of instructing the user from a remote location, there is work on pointing, defining actions, etc. Bauer et al. [2] describe an AR telepointer for supporting mobile, non-expert fieldworkers undertaking technical activities with the aid of remote experts. Non-experts wear a Head-Mounted Display (HMD) with a camera, audio, and a wireless communication link. The video captures what the non-expert sees and relays it to a PC-based system. The expert is able to view the video information and guide the non-expert's activities through audio instruction and an overlaid ‘pointer’ which the non-expert can see in their HMD. This is close to our scenario of use, offers similar ostensive capability, and bears some similarity to our notion of a shared representation. However, all these AR approaches are video-based and require an HMD. The use of an HMD in a troubleshooting and maintenance context can be envisaged with a good level of confidence to support the work of professionally-trained operators, such as service engineers or mechanics ([10], [17]), but we believe that for users in an office environment it is both too costly and too difficult to master to be practical. The use of a 3D representation in the way we suggest does not require any additional equipment for displaying the instructions on the device or for collecting device information and communicating it to the remote expert.
Therefore, a 3D approach constitutes an alternative to video that is more affordable and consumes less network bandwidth. This approach could even provide some advantages when sensing the actual status of the device. Video-based approaches [4] rely on sensing the external, visible properties of a device and treat it as a passive object. Exploiting the internal sensor data of the device itself, and inferring its visible state from this data, avoids the occlusion and capture-precision issues which could make the device status difficult to detect visually at times.

1 Currently working at ESI Group ([email protected])
2 [email protected]

IEEE Virtual Reality 2011, 19-23 March, Singapore. 978-1-4577-0038-5/11/$26.00 ©2011 IEEE

Our work also illustrates how we can support new types of interactions through the definition of richer device models combining 3D representations with semantic representations. On the one hand, semantic representations of devices such as the one described in [21] have in the past been used primarily for remote administration purposes. On the other hand, 3D representations of devices, individuals, and environments are used in various enterprise applications, for which the term “Serious Games” [1] is used. The educational and off-line simulation aspects have been the main focus of these types of applications.

Robotics, 3D representations and remote operations in hazardous environments such as nuclear plants, surgery, and space and underwater operations have been the subject of several works, such as the ROTEX [13] and Magritte [14] projects. The complexity of the tasks, the security requirements, and the magnitude of the economic investment made in the equipment for which these systems are designed imply the use of high-performance, proprietary sensors, communication protocols, and 3D rendering engines. In our case, where cost is a major constraint, we use low-cost, open-source or commercially available web-based 3D tools in order to represent the status of a real device and remotely guide the interactions.

3 SYSTEM OVERVIEW

Our collaborative troubleshooting system enables a customer experiencing a problem with a device to call the call centre directly from the device and interact with a remote troubleshooter through the device itself. Customers and troubleshooters are provided with a shared representation of both the device status and the problem to be solved [3]. The shared virtual representation mainly consists of a 3D model of the ailing device, and a number of means of interacting with it adapted to the customers’ and troubleshooters’ roles in the troubleshooting task. It is presented to the customer on the device itself and to the troubleshooter on his terminal. The representation is linked to the device itself, such that actions on the device are shown on the representation, e.g. if a customer opens a door, that door will appear open on the representation and the troubleshooter will see it. This is enabled through the sensors that reside on the devices.

Reciprocal viewpoints are supported, and remote troubleshooters and customers are able to coordinate and co-orient around the representation of the device. Troubleshooters interact with the representation through control buttons, e.g. to indicate a part of the device such as a door, select an action the customer should perform, e.g. lifting a handle and sliding a toner cartridge out of the machine, and so on. When the customer performs actions on the device which trigger sensors, they are revealed on the shared representation; thus the troubleshooter can infer, for example, whether the customer is following all the instructions he is providing and whether he is doing so correctly.

Interaction with the virtual representation of the device can be mainly through three different modes supporting the various requirements of troubleshooting: synchronous, step-by-step, and simulation. The default mode of interaction proposed to users (customers and troubleshooters) is to have the two screens synchronized with the current status of the device. For example, if the front door of the device is open, this is shown on both users’ interfaces. Using this mode both users can build a common understanding of the problem through a synchronous investigation of the current situation. The troubleshooter drives the navigation and can zoom or move the viewpoint around the device model. Both the customer – using his finger on the device touch screen – and the troubleshooter can use a shared pointer in order to point to specific areas or parts of the device during the discussion. As shown in Figure 1, areas pointed to by the troubleshooter are made visible on the customer interface and vice versa.

Figure 1. Area pointed to by a troubleshooter visible on the customer interface.
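To make the shared-pointer mechanism concrete, the following Python sketch shows one way pointer events could be propagated between the two views. All class and field names are hypothetical; the actual prototype exchanges such messages over RTMP between Flash clients rather than in-process.

```python
# Hypothetical sketch of shared-pointer propagation between the two views.
from dataclasses import dataclass, field

@dataclass
class PointerEvent:
    source: str      # "customer" or "troubleshooter"
    part_id: str     # device part being pointed at
    position: tuple  # (x, y, z) in model coordinates

@dataclass
class SharedView:
    name: str
    pointers: dict = field(default_factory=dict)

    def receive(self, event: PointerEvent) -> None:
        # A view renders only remote pointers; it does not echo its own.
        if event.source != self.name:
            self.pointers[event.source] = (event.part_id, event.position)

def broadcast(event: PointerEvent, views: list) -> None:
    # In the real system this would travel over the messaging server.
    for v in views:
        v.receive(event)

customer = SharedView("customer")
troubleshooter = SharedView("troubleshooter")
# The troubleshooter points at the front door; the customer view shows it.
broadcast(PointerEvent("troubleshooter", "front_door", (0.2, 0.5, 0.1)),
          [customer, troubleshooter])
```

The same broadcast path works symmetrically when the customer points with a finger on the touch screen.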

The virtual device representation can be used by the remote troubleshooter to drive the customer through the troubleshooting operations. The troubleshooter will select the part to be operated and choose the action to be performed on this part. Figure 2 shows an example of such an interaction, where the troubleshooter demonstrates how to remove the cleaning unit using the contextual menu popping up on top of the cleaning unit 3D model. This selection displays an animation of the operation to be performed on the customer’s interface. The troubleshooter’s view is frozen until the operation has been completed on the actual device, whereupon the system returns to the synchronous visualization mode and shows the new device status.

Figure 2. An example of an action being demonstrated.
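The freeze-until-confirmed behaviour described above can be sketched as a small state machine. This is an illustrative reconstruction, not the prototype's code; the method and part names are invented for the example.

```python
# Illustrative state sketch of the step-by-step mode: the troubleshooter's
# view freezes after demonstrating an action and unfreezes only when the
# device sensors confirm the customer performed the operation.
class StepByStepSession:
    def __init__(self):
        self.mode = "synchronous"
        self.pending = None  # (part_id, action) awaiting sensor confirmation

    def demonstrate(self, part_id: str, action: str) -> str:
        # Troubleshooter picks a part and an action; the customer's
        # interface plays the corresponding animation.
        self.pending = (part_id, action)
        self.mode = "frozen"
        return f"animate {action} on {part_id}"

    def on_sensor_event(self, part_id: str, action: str) -> None:
        # Sensor event from the device confirming the operation was done.
        if self.pending == (part_id, action):
            self.pending = None
            self.mode = "synchronous"  # back to live visualization

s = StepByStepSession()
s.demonstrate("cleaning_unit", "remove")
assert s.mode == "frozen"
s.on_sensor_event("cleaning_unit", "remove")
assert s.mode == "synchronous"
```

Gating the mode change on the sensor event is what lets the troubleshooter infer that the instruction was actually carried out, rather than trusting a verbal report.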

Lastly, the virtual representation can be disconnected from the physical device status and used as a simulation tool. This is particularly useful for the remote troubleshooter, who is not near an actual device, since the simulation can act as an aide-mémoire enabling exploration of different aspects of the device. In our system, the remote troubleshooter can switch to the disconnected mode at any time and explore the model independently from the rest of the system status. During this period, the customer is put on hold and does not have access to the device representation. The 3D representation will be automatically re-synchronized with the actual device status when switching back to the default mode.

As already mentioned, our design has been primarily driven by the findings from an ethnographic study, together with the practical and technical requirements and limitations of the application domain. The collaboration space provided by our system can also be nicely characterized using the conceptual model proposed in [8]. According to this model, our system provides a “transitional collaborative space”, illustrated in Figure 3. The main space of interaction for the users is an augmented virtual space where the customer and the troubleshooter share the same egocentric viewpoint that is controlled by the troubleshooter. In addition, each user can switch back and forth to a dedicated perspective. The customer will switch to the real space in order to operate the device, and the remote troubleshooter will switch to a virtual environment disconnected from the device status in order to simulate operations and prepare a sequence of instructions to be given to the customer through the augmented virtual space. Our augmented virtual space can be seen as the reference environment enabling the work done in the real environment by the customer to be linked with the work scheduled in the virtual environment by the troubleshooter. We believe it constitutes an acceptable alternative to a standard collaboration using AR.

Figure 3. Transitional collaborative model.

4 VIRTUAL DEVICE MODELING APPROACH

The shared virtual representation of the device in the collaborative troubleshooting system is central to the users’ interactions. It has to include information on the parts of the device that can be accessed and manipulated by a customer and to represent the information on the current device status that might be relevant to the users during troubleshooting. It also acts both as a 3D virtualization and as a live representation of the actual device. As illustrated in Figure 4, we represent the device through a combination of models.

Figure 4. Components of the virtual device model.

Each model is specialized in some particular facet which needs to be modeled, but all are interconnected to create a coherent representation. Through this modular approach a device virtual representation can be constructed through the composition of various resources that can be developed and maintained independently: the device status model is provided by the device manufacturer, the troubleshooting model is defined by a troubleshooting expert, and the 3D model is produced by a graphic designer. This independence enables an efficient process to be established to obtain rich device models for all the various device types which might require collaborative troubleshooting.

The device status model is built on top of existing standards developed by device manufacturers in order to model devices and their status. These standards are primarily used to develop applications for remote administration and monitoring of devices. In our case the devices are compliant with the semantic printer model developed by the IEEE Print Working Group (PWG) [21] and with the Web Services for Devices (WSD) specifications [20]. This model describes the status of the device and its services for general-purpose usage, and we extracted from it the information required by our system on the configuration and status of the sub-components of the device.

The conceptual model is an application-specific model which considers only attributes of the device that are relevant to troubleshooting and maintenance activities. More precisely, it considers the parts of the device and their related attributes and actions that are visible and operable by the device’s users. This model constitutes a conceptual view of the device, independent both of the way it is actually implemented and of the way it is represented in 3D. Each part of the device is defined as a finite state automaton where each transition corresponds to an action that a device user can perform on the part, e.g. open or close a tray. Each part is associated with both a unique identifier and a label that can be used to display the name of the part to the user of the collaborative troubleshooting system. Finally, interoperability constraints can be defined for each transition of the automata in order to model physical constraints between parts. For instance, an internal locker of the device cannot be unlocked until the relevant door has been opened.
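The part-as-automaton idea, including the cross-part constraints, can be sketched as follows. This is a hypothetical reconstruction under our own naming; the paper does not specify the model's encoding.

```python
# Sketch of the conceptual model: each device part is a finite state
# automaton; constraints tie an action on one part to the state of another
# (e.g. a locker can only be unlocked once the front door is open).
class Part:
    def __init__(self, part_id, label, initial, transitions, constraints=None):
        self.part_id, self.label, self.state = part_id, label, initial
        self.transitions = transitions        # {(state, action): new_state}
        self.constraints = constraints or {}  # {action: (other_part, state)}

    def can(self, action, parts) -> bool:
        if (self.state, action) not in self.transitions:
            return False
        dep = self.constraints.get(action)
        return dep is None or parts[dep[0]].state == dep[1]

    def apply(self, action, parts) -> None:
        if not self.can(action, parts):
            raise ValueError(f"{action} not allowed on {self.label}")
        self.state = self.transitions[(self.state, action)]

door = Part("front_door", "Front Door", "closed",
            {("closed", "open"): "open", ("open", "close"): "closed"})
locker = Part("locker", "Internal Locker", "locked",
              {("locked", "unlock"): "unlocked"},
              constraints={"unlock": ("front_door", "open")})
parts = {"front_door": door, "locker": locker}

assert not locker.can("unlock", parts)  # door still closed
door.apply("open", parts)
locker.apply("unlock", parts)           # now physically possible
```

The unique identifiers (`part_id`) are what the mappings described below use to tie this model to the device status events and to the 3D geometry.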

The 3D model contains the geometry and textures of the device components. It is implemented using the COLLADA format [18], a standard XML encoding for 3D scenes. In order to comply with the low footprint requirement on the device, the geometric complexity is limited to a few thousand polygons. However, this limitation would be too restrictive to represent all the actionable parts of the device in sufficient detail to efficiently support the troubleshooting activity. Consequently, we have adopted a hierarchical approach. A main 3D model represents the entire device with few details on its sub-components. Additional detailed models of the sub-components are loaded during the session when operations need to be performed within a sub-component.

In addition, visibility constraints can be defined between components to optimize the 3D rendering, e.g. some parts may become visible only when a door is opened. These constraints optimize the number of elements loaded in the rendering engine to only the ones that will be visible in the current device status.
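The hierarchical loading and visibility pruning described above might look like the following sketch; the manager class and rule format are our own illustration, not the prototype's API.

```python
# Sketch of hierarchical model loading with visibility constraints: the
# coarse whole-device model is always resident, detailed sub-component
# models are fetched on demand, and visibility rules keep the renderer
# loaded only with parts that can actually be seen in the current state.
class ModelManager:
    def __init__(self, visibility_rules):
        self.loaded = {"main"}                    # coarse device model
        self.visibility_rules = visibility_rules  # {part: (gate_part, state)}

    def load_detail(self, component: str) -> None:
        # Called when an operation targets a sub-component.
        self.loaded.add(component)

    def visible_parts(self, device_state: dict, all_parts: list) -> list:
        out = []
        for p in all_parts:
            gate = self.visibility_rules.get(p)
            # A part with no rule is always visible; otherwise its gating
            # part must be in the required state (e.g. door open).
            if gate is None or device_state.get(gate[0]) == gate[1]:
                out.append(p)
        return out

mgr = ModelManager({"cleaning_unit": ("front_door", "open")})
assert mgr.visible_parts({"front_door": "closed"},
                         ["chassis", "cleaning_unit"]) == ["chassis"]
```

Evaluating visibility from device state, rather than from the camera, is what lets the renderer stay within the few-thousand-polygon budget.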

The status-conceptual mapping is used to synchronize our specific semantic representation of the device with the status of the actual device. It ensures a complete decoupling between the complex, general-purpose semantic model, provided by the remote administration services of the device, and our domain-specific semantic model of the device for troubleshooting. In particular, status information of the device components, received as “condition changed” events, is mapped to the corresponding parts and actions of our semantic model. For instance, if a customer opens a tray on the device, a corresponding notification will be received by the system and this event will be mapped to the trigger of an “open” transition in the automaton corresponding to this tray in the semantic model.
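A minimal sketch of this event translation is given below. The table keys and condition names are illustrative placeholders, not actual PWG identifiers.

```python
# Sketch of the status-conceptual mapping: generic "condition changed"
# events from the device's administration interface are translated into
# transitions of the troubleshooting model.
STATUS_TO_CONCEPT = {
    # (device subunit, condition): (part_id, action) -- illustrative names
    ("tray-1", "MediaTrayOpen"): ("tray_1", "open"),
    ("tray-1", "MediaTrayClosed"): ("tray_1", "close"),
    ("cover-front", "CoverOpen"): ("front_door", "open"),
}

def on_condition_changed(subunit, condition, fire_transition):
    mapping = STATUS_TO_CONCEPT.get((subunit, condition))
    if mapping is None:
        return None  # event irrelevant to troubleshooting; ignored
    part_id, action = mapping
    fire_transition(part_id, action)  # drive the part's automaton
    return mapping

fired = []
on_condition_changed("tray-1", "MediaTrayOpen",
                     lambda p, a: fired.append((p, a)))
```

Because the mapping table is the only place where the manufacturer's vocabulary appears, the troubleshooting model stays decoupled from the administration standard.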

The 3D-conceptual mapping augments our 3D models with the semantic information contained in the conceptual model. Each part defined in the conceptual model is associated with a 3D model and a geometry object within that model, and the actions defined for this part are translated into transformations of the geometry object.
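The binding between conceptual parts and geometry might be expressed as in the sketch below; the file, node, and transform names are hypothetical, and the real system animates COLLADA scene nodes rather than returning dictionaries.

```python
# Sketch of the 3D-conceptual mapping: each conceptual part is bound to a
# geometry node in a COLLADA model, and each action becomes a transform
# applied to that node (illustrative names throughout).
GEOMETRY_BINDINGS = {
    # part_id: (model file, node name in the COLLADA scene)
    "front_door": ("device_main.dae", "node_front_door"),
}
ACTION_TRANSFORMS = {
    # (part_id, action): transformation applied to the node
    ("front_door", "open"): ("rotate_x", -90.0),
    ("front_door", "close"): ("rotate_x", 0.0),
}

def render_action(part_id: str, action: str) -> dict:
    model, node = GEOMETRY_BINDINGS[part_id]
    kind, amount = ACTION_TRANSFORMS[(part_id, action)]
    return {"model": model, "node": node,
            "transform": kind, "amount": amount}
```

Keeping the binding in data rather than code means a graphic designer can rework the 3D model without touching the conceptual model, which is the independence argued for above.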

5 CURRENT STATUS AND NEXT STEPS

The proposed system is an auxiliary feature of the device used only when a customer is experiencing a problem. It must therefore be integrated with the device without affecting normal usage conditions and must maximize reuse of existing capabilities to avoid extra cost. We have developed a fully working prototype leveraging Adobe® Flash® technology and the Papervision3D™ 2.1 library [19]. The synchronization between the two views uses the Real Time Messaging Protocol (RTMP), enabling real-time transmission of messages using an Adobe LiveCycle server. This ensures fast and low-bandwidth communication, since the Adobe Action Message Format (AMF), a binary protocol, is used and the messages sent between the views are reduced to the minimal information, like viewpoint and device positions. We believe that this prototype is compliant with the constraints defined by the application domain. We have conducted a qualitative usability test whose results are reported in [16]. The results of this test strongly indicated that the collaborative system that we have designed and implemented provides a usable solution for the provision of remote troubleshooting assistance. Several suggestions for improvement of the level of detail displayed in the virtual representation and of the controls available to the users were collected; they are currently being used to refine the prototype implementation.
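To illustrate why the synchronization traffic stays small, the sketch below shows minimal message payloads of the kind described above. The field names are invented for the example, and JSON stands in for AMF, which is a binary encoding of equivalent structures.

```python
# Sketch of minimal synchronization payloads: only the viewpoint and
# changed part states travel between the two views (illustrative fields;
# the prototype encodes such messages in binary AMF over RTMP).
import json

def viewpoint_message(yaw: float, pitch: float, zoom: float) -> dict:
    return {"type": "viewpoint", "yaw": yaw, "pitch": pitch, "zoom": zoom}

def part_state_message(part_id: str, state: str) -> dict:
    return {"type": "part_state", "part": part_id, "state": state}

msg = viewpoint_message(45.0, 10.0, 1.5)
encoded = json.dumps(msg)  # AMF would be smaller still, being binary
# Each update is a few dozen bytes, far below what video streaming needs.
assert len(encoded) < 100
```

Sending deltas of this size is what makes the 3D approach viable on the modest network links available at customer sites.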

This work has also opened some interesting paths toward the use of virtual representations of devices. With the increasing diffusion of connected electronic devices, we believe that our modeling approach could be applied to a variety of work and home equipment. We will therefore investigate how our approach can be applied to other domains of troubleshooting and other types of devices in order to further generalize the work developed for this prototype.

REFERENCES

[1] C. Abt. Serious Games. New York: The Viking Press, 1970.

[2] M. Bauer, G. Kortuem, and Z. Segall. Where Are You Pointing At? A Study of Remote Collaboration in a Wearable Videoconference System. Proceedings 3rd International Symposium on Wearable Computers (ISWC’99), (18-19 October 1999, San Francisco, California, USA), IEEE Computer Society, pp. 151--158, 1999.

[3] S. Castellani, A. Grasso, J. O’Neill, and F. Roulland. Designing Technology as an Embedded Resource for Troubleshooting. Journal of Computer Supported Cooperative Work (JCSCW), vol. 18, no. 2-3, pp. 199--227, 2009.

[4] S. G. Deshpande, J. C. Thomas, and M. D. Baker. Interactive multimedia for remote diagnostics and maintenance of a multifunctional peripheral. US-Patent, 7,149,936, 2006.

[5] C. W. Edmunds, D. Auman, K. R. Mathers, C. S. Lippolis, A. M. Lorenzo, and C.-L. Goldstein. Simultaneous voice and data communication for diagnostic procedures in a printing or copying machine. US-Patent, 6,665,085, 2003.

[6] W. Friedrich. ARVIKA-Augmented Reality for Development, Production and Services. Proceedings International Symposium on Mixed and Augmented Reality (ISMAR’02), (Darmstadt, Germany, September 30 - October 01 2002), pp. 3--4, IEEE Computer Society, 2002.

[7] S. R. Fussell, R. E. Kraut, and J. Siegel. Coordination of communication: effects of shared visual context on collaborative work. Proceedings Computer Supported Collaborative Work (CSCW 2000), (Philadelphia, Pennsylvania, USA), pp. 21--30, ACM, 2000.

[8] R. Grasset, P. Lamb, and M. Billinghurst. Evaluation of Mixed-Space Collaboration. Proceedings International Symposium on Mixed and Augmented Reality (ISMAR’05), (Vienna, Austria, October 05-08, 2005), pp. 90--99, IEEE Computer Society, 2005.

[9] C. Gutwin and R. Penner. Improving interpretation of remote gestures with telepointer traces. Proceedings Computer Supported Collaborative Work (CSCW’02), (New Orleans, Louisiana, USA, November 16--20, 2002), pp. 49--57, ACM, 2002.

[10] S. Henderson and S. Feiner. Evaluating the Benefits of Augmented Reality for Task Localization in Maintenance of an Armored Personnel Carrier Turret. Proceedings of IEEE International Symposium on Mixed and Augmented Reality (ISMAR ’09), (Orlando, Florida, USA, October 19-22, 2009), pp. 135--144, 2009.

[11] R. E. Kraut, M. D. Miller, and J. Siegel. Collaboration in performance of physical tasks: Effects on outcomes and communication. In M. S. Ackerman, editor, Proceedings Computer Supported Collaborative Work (CSCW’96), (Boston, Massachusetts, USA, November 16--20, 1996), pp. 57--66, ACM, 1996.

[12] H. Kuzuoka, S. Oyama, K. Yamazaki, K. Suzuki, and M. Mitsuishi. GestureMan: A mobile robot that embodies a remote instructor’s actions. Proceedings Computer Supported Collaborative Work (CSCW 2000), (Philadelphia, Pennsylvania, USA), pp. 155--162, ACM, 2000.

[13] G. Hirzinger, B. Brunner, J. Dietrich, and J. Heindl. ROTEX-the first remotely controlled robot in space. Proceedings of IEEE International Conference on Robotics and Automation, pp 2604--2611, vol.3, 1994.

[14] C. Leroux, M. Guerrand, C. Leroy, Y. Méasson, and B. Boukarri. MAGRITTE: a graphic supervisor for remote handling interventions. ESA Workshop on Advanced Space Technologies for Robotics and Automation, pp 471--478, 2004.

[15] J. O’Neill, S. Castellani, A. Grasso, P. Tolmie, and F. Roulland. Representations can be good enough. In H. Gellersen, K. Schmidt, M. Beaudouin-Lafon, and W. Mackay, editors, Proceedings 9th European Conference on Computer Supported Collaborative Work (ECSCW ’05), (Paris, France, 18-22 September 2005), pp. 267--286, Springer, 2005.

[16] J. O’Neill, S. Castellani, F. Roulland, N. Hairon, C. Juliano, and L. Dai. From Ethnographic Study to Mixed Reality: A Remote Collaborative Troubleshooting System. To appear in Proceedings Computer Supported Collaborative Work (CSCW 2011), (Hangzhou, China, 19-23 March 2011).

[17] J. Platonov, H. Heibel, P. Meier, and B. Grollmann. A mobile markerless AR system for maintenance and repair. Proceedings of IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR’06), (Santa Barbara, USA, October 22-25, 2006), pp. 105--108, 2006.

[18] “COLLADA - Digital Asset and FX Exchange Schema”, http://collada.org, last accessed in May 2010.

[19] Papervision3D™, http://blog.papervision3d.org, last accessed in May 2010.

[20] “Web Services for Devices Print Service Schema”, http://msdn.microsoft.com/en-us/library/ff563758.aspx, last accessed in May 2010.

[21] “The Printer Working Group” (PWG), http://www.pwg.org, last accessed in May 2010.
