
User Study on a Position- and Direction-aware Museum Guide using 3-D Maps and Animated Instructions

Takashi Okuma1), Masakatsu Kourogi1), Kouichi Shichida1) 2), and Takeshi Kurata1)

1) Center for Service Research, AIST 2) University of Tsukuba

ABSTRACT In this paper, we present a science museum guide system and a user study of the system and the services it provides. By developing and evaluating the museum guide system, we investigated information services based on a user's location and direction in indoor environments that have a three-dimensional structure. We conducted a subjective comparison of virtual viewpoint controls for 3-D map presentation, a subjective evaluation of how animated instructions using 3-D CG can enhance the real exhibits, and a subjective evaluation of the guide system and its services. Participants gave a high rating to the combination of the "automatic tracking" view, which displays the map just around the user at a relatively large apparent scale, and the "automatic rotation" view, which rotates the map so that the user's walking direction points to the top of the display, although some participants reported that they wanted to confirm where they were on the entire floor using the "bird's-eye" view as necessary. In addition, we found that the animated instructions possibly enhanced the appeal of the real exhibits. In the subjective evaluations, the guide system was rated as necessary and useful. However, because the evaluation results show that the participants tended to view the display more than the real exhibits, the guide system might reduce the opportunities to truly experience the real exhibits. Additional Keywords: wearable mixed reality system, museum guide, GIS.

1 INTRODUCTION Presenting content based on a user's situation is becoming a requirement of information systems. For example, services that provide maps and information about attractions, stores, and so on, based on a user's position, are now available on cell phones; consequently, many consumers are aware of their usefulness. In addition, to make it easier for users to relate the real environment to the additional information, services that present maps with three-dimensional expressions based on the user's direction have become available.

So far, the measurement systems underlying these services have limited them to outdoor environments. However, services based on a user's position and direction are also desirable in indoor environments with a complicated three-dimensional structure, such as shopping malls and museums. Because such services have many potential indoor applications, there is growing demand for them.

By developing and evaluating a museum guide system, we have been investigating effective information services based on a user’s position and direction in an indoor environment [1, 2].

In this paper, we describe a user study of our guide system and the guide service provided by the system. This study was conducted in February 2008. We report the knowledge acquired

from the evaluation results, especially regarding virtual viewpoint control for displaying three-dimensional maps, enhancement of real exhibits using virtual content, and usability as a guide system.

2 RELATED RESEARCH In the field of augmented reality (AR), technologies provide information to users through an intuitive correspondence between the real environment and a virtual environment by "placing" virtual information at appropriate positions in the real environment. To implement an AR system, we first set a coordinate system for placing the information in the real environment. Next, we measure the user's viewing position and orientation in that coordinate system. Then, we visualize the information that can be seen from that position and orientation and display it to the user with devices such as see-through head-mounted displays (HMDs) [3, 4]. Early AR systems required installing large-scale sensing devices in the environment. Because measurement accuracy was regarded as the most important factor, the measurement range was limited to a small space. Since then, AR systems have evolved into wearable AR systems that use long-range measurement devices. Feiner et al. developed the Touring Machine, which guided people across a university campus by displaying information such as building names in the real environment based on GPS; it was the first information presentation system using wearable AR technology [5]. Information presentation based on AR technology has the advantage that information can be placed and displayed anywhere within the area in which the user's viewing position and orientation can be measured. Although overlaying the virtual environment onto the real environment is intuitive, it requires very accurate measurements. Therefore, there are few practical information presentation systems using AR technology.

Meanwhile, technologies that realize position-based information presentation by embedding sensors at every place where information is required have been investigated in the field of ubiquitous computing [6]. Fitzmaurice proposed the Chameleon system, which presents information on a handheld display based on the position of the display, and developed a prototype using a palmtop monitor and a position/orientation measurement system [7]. Nishimura et al. developed the CoBIT system, which consists of transmitters that encode audio information into light signals and receivers that receive the light signals and decode them with a solar cell; in other words, they realized audio information presentation based on the user's position by placing transmitters wherever information presentation is required [8]. Schmalstieg et al. developed a handheld AR framework named Studierstube ES [9] and reported two types of position-based museum games developed as applications on the framework. Studierstube ES relied on fiducials placed at information presentation points called hotspots.

Compared with normal AR-type image presentation, image presentation methods that show the view from a virtual viewpoint on a handheld display would increase the cognitive load of relating the real and virtual environments. However, this type of presentation is more practical for two reasons: 1) the measurement target is shifted from the user's actual viewpoint to a virtual viewpoint on the handheld device, so calibration of

Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, 305-8568 Japan e-mail: [email protected]

The first Korea-Japan workshop on Mixed Reality (KJMR08) in JEJU, S. Korea (2008)

sensing devices is easier; and 2) a handheld display is more widespread than a head-mounted display. The disadvantage is that hotspots must be placed sparsely because of the cost of installing and maintaining sensing devices or fiducials in practical use.

3 MUSEUM GUIDE SYSTEM In this section, we introduce the Science Museum (Kitanomaru Park, Tokyo, Japan), where we conducted the user study. We then describe the map and guide content and the hardware and software configurations of our guide system.

3.1 Science Museum The Science Museum has various exhibit facilities, including twenty-one exhibition rooms. Each room has a theme related to science and technology. The appeal of the Science Museum is that it has many hands-on exhibits. However, in a questionnaire survey conducted approximately six months before our experiment, some exhibition rooms were as much as 10 times more popular than others, so some exhibition rooms could benefit from improved content.

The exhibit areas on the second to fifth floors range from 2,500 to 2,700 m² and have distinctive floor layouts, such as the shape of a starfish. Because the stairs are at the center of each floor, visitors have to take roundabout routes, which makes it difficult for them not only to grasp their current position but also to find the route to their chosen exhibits.

Several Wi-Fi access points are installed on each floor. A few active RFID tag readers, which we describe later, are also installed on the ceilings near the stairs and escalators of each floor.

3.2 Content The content provided by the guide system can be grouped into the following three types.
I. Three-dimensional models of each floor (3-D map content, KML files).
II. The shortest route to a sub-goal, such as a particular exhibition room (route content, shown as connected arrows on the 3-D map content).
III. Exhibit introduction content:
A) Still images including a photo, descriptive text, and approximately fifteen seconds of voice data summarizing each exhibit (still image content, placed at fifty-three spots covering every exhibition room; Adobe Shockwave Flash files).
B) Animations using 3-D computer graphics to explain how to experience the exhibits (animation content, placed at six spots in the "Opt" exhibition room, which was ranked low in popularity in a previous survey; Adobe Shockwave Flash files). See Figure 1.
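To make the data side of this concrete, the sketch below shows one hypothetical way the exhibit introduction spots of type III could be recorded; the actual system keeps this information in the PostgreSQL database used by the content management subsystem (Section 3.4.2), and the field names and values here are ours, not the system's.

```python
# Hypothetical records for exhibit introduction spots (illustrative values only).
# The map mode compares each spot's position and direction with the user's
# estimated position and direction to decide what to display (Section 3.4.2).
CONTENT_SPOTS = [
    {"id": "opt-01", "floor": 3, "x": 12.5, "y": 40.2, "direction_deg": 90.0,
     "kind": "animation", "file": "opt_01.swf"},       # 3-D CG "how to experience" clip
    {"id": "room-a-07", "floor": 2, "x": 5.0, "y": 18.7, "direction_deg": 270.0,
     "kind": "still_image", "file": "room_a_07.swf",   # photo + text + ~15 s of voice
     "voice_seconds": 15},
]
```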

3.3 Hardware Configuration As shown in Figure 2, participants in our user study wore at the waist a self-contained sensor module, consisting of a three-axis accelerometer, gyroscope sensors, magnetometers, and a Bluetooth module, together with an active RFID tag operating at 300 MHz. In addition, they either held a handheld PC (SONY VAIO-U) or hung it from a strap around the neck.

Based on the results of our pilot user study [1] and other related studies, we found that a monocular HMD and a handheld display each have positive and negative characteristics. In this user study, we used only handheld displays because of the ease of sharing information among multiple visitors and the safety of children.

3.4 Software Configuration Our experimental system consists of a positioning subsystem and a content management subsystem. The positioning subsystem measures and estimates the user’s position and

direction using data from the sensor module and the active RFID readers. The content management subsystem controls the content displayed on Google Earth, which is used as the user interface platform, based on the position and direction from the positioning subsystem, or on data manually corrected by the user when the user judges the measurement error to be large. In this section, we describe the details of these subsystems.

Figure 2: Hardware configuration (sensor module and active RFID tag worn by the user, a handheld PC connected to the sensor module via Bluetooth, and a server PC reached over Wi-Fi/LAN).

Figure 3: Software configuration of the positioning subsystem (sensor data distributor, dead-reckoning module, map-matching module with a map database, stairs/escalator/elevator detector module, RFID adjustment module with an RFID reader position database, and a location/orientation data fusion module that outputs the position/orientation and accepts position/orientation adjustments over TCP/IP sockets).

Figure 1: An example of the animation content.


3.4.1 Positioning Subsystem The positioning subsystem was implemented as the integration of 1) detection of motions such as walking, going up and down stairs, and taking an escalator; 2) estimation of the walking direction; 3) dead-reckoning based on the results of 1) and 2); 4) map-matching; and 5) positioning with the active RFID [10, 11]. For 1) and 2), data are obtained from the three-axis accelerometer, gyroscope sensors, and magnetometers built into the sensor module described in Section 3.3 (see Figure 3).
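As a minimal sketch of how components 1) to 3) fit together, assuming a step-event interface and a per-step heading estimate (the stride length stands in for the personal parameter calibrated in Section 4.3), dead-reckoning might look like the following; map-matching and the RFID-based corrections described next would then adjust the accumulated estimate.

```python
import math

class DeadReckoner:
    """Minimal pedestrian dead-reckoning sketch (illustrative, not the actual module)."""

    def __init__(self, x0, y0, stride_m=0.7):
        self.x, self.y = x0, y0
        self.stride_m = stride_m  # personal parameter calibrated per user (Section 4.3)

    def on_step(self, heading_rad):
        """Advance one stride in the estimated walking direction.

        heading_rad is the walking direction estimated from the gyroscopes and
        magnetometers; floor changes on stairs and escalators are handled by a
        separate detector and are not modeled here."""
        self.x += self.stride_m * math.cos(heading_rad)
        self.y += self.stride_m * math.sin(heading_rad)
        return self.x, self.y
```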

The motion detection, dead-reckoning, and map-matching are executed on the handheld PC. A web service running on the server PC returns, for a given tag ID, information about the RFID tag and the position of the reader that detected it. We use this service over the Wi-Fi network to obtain the position measured with the active RFID.
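The pattern of that lookup might resemble the sketch below; the endpoint path and response fields are hypothetical, and only the overall flow (send the detected tag ID over Wi-Fi, receive the known position of the reader that saw it) reflects the description above.

```python
import json
import urllib.parse
import urllib.request

def lookup_rfid_position(server_host, tag_id):
    """Ask the server-side lookup which reader detected the given tag and where
    that reader is installed (hypothetical endpoint and field names). The
    returned position can then be used to correct the dead-reckoned estimate
    near stairs and escalators."""
    query = urllib.parse.urlencode({"tag_id": tag_id})
    url = f"http://{server_host}/rfid_lookup.php?{query}"
    with urllib.request.urlopen(url, timeout=2.0) as resp:
        record = json.load(resp)
    return record["x"], record["y"], record["floor"]
```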

3.4.2 Content Management Subsystem The content management subsystem displays content and instructions to the user based on the user's context. The subsystem is implemented through the collaboration of KML files, web services based on PHP scripts, a PostgreSQL database, Adobe Shockwave Flash files, and a Windows Forms program that uses the Google Earth COM API (see Figure 4).

User interfaces of the guide system can be broadly classified into the following three modes; the content management subsystem handles the displayed content and the input values of each mode. [Map mode] The subsystem provides navigation by visualizing the user's current position and direction on the 3-D map and updating the route based on the position. It also updates the virtual viewpoint of Google Earth and the visualized position and orientation every 500 milliseconds. Whenever it updates the visualized position, it queries the database for exhibit introduction content around that position and decides whether any of it should be displayed, based on the positions and directions of the content and the user. If such content exists, the system changes to the exhibit introduction mode. Deviation from the route is checked every 5 seconds; when the subsystem determines that the user has deviated from the route, the route is recalculated based on the current position. Figure 5 shows a captured display image in the map mode.
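A rough sketch of the per-update check in the map mode is given below; the distance and angle thresholds are illustrative placeholders, since the paper specifies only that a simple threshold-based method over the positions and directions of the user and the content is used.

```python
import math

# Illustrative thresholds; the actual values used by the system are not given.
TRIGGER_DISTANCE_M = 3.0
TRIGGER_ANGLE_RAD = math.radians(60)

def content_to_show(user_x, user_y, user_heading_rad, content_spots):
    """Return the first exhibit introduction spot the user is near and roughly
    facing, or None. A sketch of the check run on every 0.5 s update; when a
    spot is returned, the system would switch to the exhibit introduction mode."""
    for spot in content_spots:  # e.g. records like those sketched in Section 3.2
        dx, dy = spot["x"] - user_x, spot["y"] - user_y
        if math.hypot(dx, dy) > TRIGGER_DISTANCE_M:
            continue
        bearing = math.atan2(dy, dx)
        # Smallest difference between the bearing to the spot and the user's
        # heading, wrapped to [-pi, pi].
        diff = abs((bearing - user_heading_rad + math.pi) % (2 * math.pi) - math.pi)
        if diff <= TRIGGER_ANGLE_RAD:
            return spot
    return None
```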

Figure 4: Software configuration of the content management subsystem (Google Earth with a control KML; a main PHP script for logging, virtual camera control, mode management, sub-goal arrival detection, content display requests, and 3-D map updates, called every 0.5 seconds; a PHP script for recommendation route updates, i.e., course-out detection and route updating, called every 5 seconds; a GEController for 3-D position input and Flash display requests; PHP scripts for requests and mouse-event handling; a title Flash for user name registration, calibration parameter measurement, and sub-goal selection; a control panel Flash for position correction mode, floor change, and view control mode change requests; the content Flash files; the positioning system; and a PostgreSQL database).

Figure 5: A still image of the map mode.

Figure 6: A still image of the exhibit introduction mode.

Figure 7: A still image of the manual position correction.


[Exhibit introduction mode] The subsystem displays the exhibit introduction content. To display an Adobe Shockwave Flash file, it uses Flash Player 9 embedded in an HTML document shown in a balloon attached to a placemark, and it uses the Google Earth COM API to open and close the balloons. After the exhibit introduction content has been displayed, the system changes back to the map mode. Figure 6 shows a display image of the exhibit introduction mode. Technically, it would be more intuitive for users to see the animation embedded in the virtual objects on the 3-D map. However, in the user study we displayed Flash files showing CG animations rendered with the same model used in the 3-D map, because of the rendering ability and the limited virtual viewpoint control of Google Earth. [Manual position and direction correction mode] When the automatic position correction of the positioning subsystem does not work well because of imperfect implementation, and the user thinks the measurement error is large, the user inputs the current position and direction with the following steps, which send feedback to the positioning subsystem. Step 1. While the system is in the map mode, the user clicks to show a menu of operations and selects the "position and direction correction" button to change the system to this mode. Step 2. The user inputs the current position by clicking the corresponding point on the 3-D map. To prevent structures such as walls from occluding the current position, the virtual viewing direction is set to look straight down. While the system is in this step, instruction text is displayed (see Figure 7). Step 3. After inputting the current position, the user turns to the direction indicated by an arrow on the 3-D map and clicks the display to finish this mode. The system then changes back to the map mode.
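The feedback sent by Steps 2 and 3 might be folded back into the positioning subsystem along the lines of the sketch below (hypothetical interface; it reuses the DeadReckoner sketch from Section 3.4.1 and assumes the sensed heading is available at the moment the user confirms the indicated direction).

```python
def apply_manual_correction(reckoner, clicked_x, clicked_y,
                            indicated_heading_rad, sensed_heading_rad):
    """Sketch of Steps 2-3 of the manual correction mode (hypothetical interface).

    The point clicked on the 3-D map overwrites the dead-reckoned position, and
    once the user has turned to face the on-map arrow and clicked, the gap
    between the indicated direction and the currently sensed heading gives a
    correction to apply to subsequent heading estimates."""
    reckoner.x, reckoner.y = clicked_x, clicked_y
    heading_offset = indicated_heading_rad - sensed_heading_rad
    return heading_offset
```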

4 USER STUDY In this section, we describe the user study conducted at the Science Museum over four days (February 24 (Sun.) to February 27 (Wed.), 2008). Although this user study had several purposes, in this paper we focus primarily on the evaluation results and findings concerning how to set the viewing position and direction for showing the 3-D map, including the visualized current position, direction, and route; how much the animation content enhances the appeal of the real exhibits; and how consumers accept this type of guide system and its services.

4.1 Virtual Viewpoint Control The guide system employs a 3-D map display method similar to that used in popular car navigation systems and personal navigation devices. By showing a 3-D map as a virtualized real environment, together with an icon that indicates the position and direction of the user on the map, the system offers the user visual cues for relating the real environment to the map and the exhibit introduction content.

Therefore, the user's cognitive load with this method could be higher than that of handheld AR. However, the method has the following advantages: 1) consumers are already familiar with this kind of display method, 2) there is no visual displacement between a virtual environment such as the 3-D

Table 1. Comparison of the real and virtual exhibits

Content            Presence                               Cost   Renewal    Control of popularity
Real exhibits      High                                   High   Difficult  Static
Virtual exhibits   Improved by mixed reality technology   Low    Easy       Dynamic

Figure 8: Comparison of four types of viewpoint control: (a) AA: automatic tracking + automatic rotation, (b) AF: automatic tracking + fixed direction, (c) BA: bird's eye + automatic rotation, (d) BF: bird's eye + fixed direction.


map and the virtual content, and 3) as described later, it is possible to control the display of the 3-D map based on the measurement uncertainty.

Virtual viewpoint controls are very important for this type of 3-D map display method. In this user study, we focus on how to control the apparent scale of the map, that is, the distance between the current position and the virtual viewpoint, and how to present the walking direction. So we prepared the following four types of virtual viewpoint control methods (AA / AF / BA / BF) and conducted a subjective evaluation.

AA: automatic tracking + automatic rotation (Figure 8-(a))
AF: automatic tracking + fixed direction (Figure 8-(b))
BA: bird's eye + automatic rotation (Figure 8-(c))
BF: bird's eye + fixed direction (Figure 8-(d))

To provide visual cues such as the shapes and textures of the structures around the user, especially those in front of the user, the "automatic tracking + automatic rotation" view shows the current user position at the center of the display with a relatively large apparent scale, and rotates the viewing direction so that the user's walking direction points toward the top of the display. In addition, to keep the current position inside the display area even when the visualized position on the map differs from the actual position because of measurement error, and to convey the measurement uncertainty implicitly, the apparent scale is updated automatically based on the measurement uncertainty reported by the positioning subsystem.

In the "bird's eye" view, to enable users to easily grasp their position relative to the entire floor, the apparent scale is fixed so that the entire floor fits in the display area.

Finally, for users who prefer to recognize their position in terms of absolute direction, north is fixed toward the top of the display in the "fixed direction" view.
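For illustration, the four modes could be mapped onto virtual camera parameters roughly as follows; the specific constants (base range, uncertainty scaling, floor margin) are placeholders and not the values used in the actual system.

```python
def camera_parameters(mode, user_x, user_y, walking_heading_deg,
                      uncertainty_m, floor_center_xy, floor_radius_m):
    """Sketch of how the AA/AF/BA/BF modes might set the virtual camera.

    mode is "AA", "AF", "BA", or "BF": the first letter selects automatic
    tracking vs. bird's eye, the second automatic rotation vs. fixed direction."""
    tracking = mode[0] == "A"
    rotation = mode[1] == "A"

    if tracking:
        # Automatic tracking: look at the user, widening the apparent scale as
        # the measurement uncertainty grows so the true position tends to stay
        # on screen even when the estimate drifts.
        center = (user_x, user_y)
        view_range_m = 30.0 + 3.0 * uncertainty_m
    else:
        # Bird's eye: fixed scale chosen so the entire floor fits in the display.
        center = floor_center_xy
        view_range_m = 2.2 * floor_radius_m

    # Automatic rotation puts the walking direction at the top of the display;
    # fixed direction keeps north (0 degrees) at the top.
    heading_deg = walking_heading_deg if rotation else 0.0
    return {"center": center, "range_m": view_range_m, "heading_deg": heading_deg}
```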

4.2 Exhibit Introduction Content As mentioned in Section 3.2, the "Opt" exhibition room rated low in popularity in an earlier survey conducted by the Science Museum. We exchanged comments with some docents of the Science Museum and found that the exhibits in the "Opt" room are actually very interesting if visitors understand how to experience them, and that visitors engage with the exhibits when a docent explains them appropriately.

Table 1 shows a comparison of the real exhibits and virtual exhibits that include the exhibit introduction content. The real and virtual exhibits can complement each other. Because we value the experience with the real exhibits of the Science Museum, our intent is to improve them using the virtual content. To do this, we reviewed the docents’ comments to create animation content that shows how to experience the exhibits without showing what will happen.

The still image content, which gives an overview of each exhibit, was placed in the other exhibition rooms. In this user study, we investigated how the exhibit introduction content, especially the animation content, affects the Quality of Experience (QoE).

4.3 Participants, Procedure, and Experimental Task We recruited the participants from visitors to the Science Museum on each day. We had a total of twenty-three participants: five females and eighteen males. Thirteen participants were elementary school students, five were in the 20-29 age group, three in the 30-39 age group, one in the 40-49 age group, and one in the 50-59 age group, so feedback was obtained from various generations. We gave book coupons as an incentive to participate in the study.

Before each participant started an experimental task, he or she performed a calibration of a personal parameter, that is, a dynamic stride estimation, and set a course for the task. In this user study, a course with three sub-goals (exhibits) was set by

Figure 9: Questionnaire results for Q1, "Which sub-goal was the most interesting?" (number of votes for the "Opt" room and the average number of votes for the other sub-goals, by rank).

Figure 10: Average evaluation values for Q2, "Was the 3-D map easy to understand?" (1: difficult <-> 7: easy), for the AA, AF, BA, and BF views.

Figure 11: Average evaluation values for Q3, "Was it easy to correspond your position on the map with the actual position?" (1: difficult <-> 7: easy), for the AA, AF, BA, and BF views.


the participant, except that the first sub-goal was automatically set to the "Opt" exhibition room, which had the animation content.

During the user study, participants followed the displayed route, which showed the shortest path to the next sub-goal, and experienced the real exhibits and the guide system. When a participant reached each sub-goal, the virtual viewpoint control was switched automatically to one of the four types. The order in which participants experienced the types was randomized to cancel order effects. After the task, participants answered a questionnaire and were interviewed.

In the user study, we assigned a support person to each participant to ensure the participant’s safety, to record the participant’s activity, and to maintain the system. Each support person stayed behind the participant to record a video log using a video camera.

5 QUESTIONNAIRE RESULTS Rigorous statistical analysis of the questionnaire results is left to future work, but we provide a brief report of the results here. Figures 9 to 16 show the questionnaire results for a total of twelve questions. Q1 asks which of the three sub-goals was the most interesting, and Q2-Q12 are answered on a 7-point scale.

The result of Q1 (Figure 9) shows that more participants chose the "Opt" room as the most interesting of the three sub-goals than any other sub-goal.

The results of Q2 and Q3 (Figures 10 and 11), which concern the virtual viewpoint control, show a tendency for the "automatic tracking" view to be easier to understand than the "bird's eye" view, and for the "automatic rotation" view to be easier to understand than the "fixed direction" view. In addition, the average value of the "automatic tracking + automatic rotation" (AA) view barely exceeds the criterion value of 4.

Q4 and Q5 (Figure 12) compare the animation content and the still image content. The animation content was evaluated as easier to understand than the still image content, even though it had no voice data.

Q6 (Figure 13) asked whether participants followed the route. The result shows that participants followed the route, which we attribute to our instruction to do so and to the fact that they had selected the sub-goals themselves.

In Q7 (Figure 14), participants evaluated how easy it was to find the first, second, and third sub-goals. For comparison under the same condition, we show only the average evaluation values for the route to the "Opt" room, grouped by viewpoint control mode. All viewpoint controls have average values above the criterion value of 4. In addition, the AA and BF views tended to be rated highly.

We asked about the necessity and usefulness of the guide system in Q9, and about the necessity of guidance and explanation by docents in Q10. From these results (Figure 15), we found that participants tended to rate the guide system as necessary and useful, and that explanation and navigation by docents were also considered necessary. However, we also found that participants looked at the display more often than at the real exhibits (Q8, Figure 13).

In Q11 and Q12 (Figure 16), we asked whether the participants felt the guide system was cumbersome for experiencing the hands-on exhibits and for engaging in conversation. The results do not indicate that the system was cumbersome.

6 DISCUSSION In this section, considering comments from the participants, we discuss the virtual viewpoint controls, the exhibit introduction content, and the usability of the guide system.

Figure 12: Average evaluation values for Q4, "Were the explanations by animation in the 'Opt' room easy to understand?" and Q5, "Were the explanations easy to understand?" (1: difficult <-> 7: easy).

Figure 13: Average evaluation values for Q6, "Did you follow the displayed route?" (1: never <-> 7: always) and Q8, "Which did you see more often?" (1: display <-> 7: exhibits).

Figure 14: Average evaluation values for Q7, "Were you able to find the sub-goals easily?" (from the start point to the "Opt" room only; 1: difficult <-> 7: easy), for the AA, AF, BA, and BF views.

Figure 15: Average evaluation values for Q9, "Did you feel a necessity or usefulness of the guide system?" and Q10, "Do you still need an explanation from docents?" (1: negative <-> 7: positive).

Figure 16: Average evaluation values for Q11, "Did you feel the guide system was cumbersome?" and Q12, "Did the system encumber your conversation?" (1: cumbersome <-> 7: not cumbersome).


6.1 Virtual Viewpoint Control Overall, the 3-D map and the route were evaluated as easy to understand. Although there were some cases in which the measurement error was large, the system generally showed reasonable accuracy. The view mode that displays the map at a relatively large apparent scale was rated the best. However, the apparent scale control based on the measurement uncertainty may have contributed to this result, so we need to evaluate it in additional experiments.

The "bird's-eye" view was rated lower than the "automatic tracking" view. We think the display size and resolution affected this result, because there were many comments such as "it was too small to see." Meanwhile, there were also comments such as "when I got lost, I wanted to confirm where I was on the entire floor using the 'bird's-eye' view as necessary," and "I prefer the large-scale map, but I wanted the system to always display the next sub-goal on the screen." These comments likely explain why, in the result of Q7, the "bird's eye + fixed direction" view was rated as highly as the "automatic tracking + automatic rotation" view for finding the sub-goals easily. We have to consider how to bring the advantages of both the "automatic tracking" and "bird's eye" views into our system within the limits of the display size and resolution.

6.2 Exhibit Introduction Content As mentioned above, the "Opt" exhibition room was rated higher than average in this user study, even though the room had been rated low by previous visitors. We need an additional investigation to see whether the appeal of the animation content itself raised the rating or whether the animation content raised the appeal of the "Opt" room. In this regard, we received an interesting comment, "This was the first time that I could understand what the exhibits in the 'Opt' room intend to do," from a participant who had visited the Science Museum several times. This comment suggests that the virtual content may have enhanced the real exhibits.

Meanwhile, a participant reported that in a place where content was placed with high density, some content was displayed at unexpected times, which made it annoying to view the map. Our guide system decided when to play content using a simple threshold-based method on the user's position and direction and the position and direction of the content, so that users could see all the content. Thus, in the high-density area, the exhibit introduction content was played continuously. We should make the system play content at more appropriate times, based on the estimated measurement error and the user's activity logs.

6.3 Usability of the Guide System As we described, participants tended to evaluate the guide system as necessary and useful. Even though we didn’t force them to use the guide system for a long time, the participants used it for an average of approximately 35 minutes. Moreover, we observed that they enjoyed the experience for a relatively long time. Collectively, many reported that they enjoyed it. Many of these favorable comments came from the 3rd or 4th grade students, who enjoyed it as much as playing a video game.

In the evaluation results, participants indicated that they viewed the display more often than the real exhibits. This result could be influenced by the fact that they watched the displayed route while walking to the next sub-goal. In addition, in this kind of "intervention research," it is difficult to eliminate the effect of the system's novelty. However, if we find the same tendency when the participants are in the exhibition rooms in a future user study, there is some possibility that the guide system takes away

some opportunities to experience the real exhibits; in that case, we need to reconsider how this kind of guide system should function.

We found that the participants who preferred explanations and navigational help from the docents wanted to understand the content of the real exhibits more deeply through interactive communications with the docents. So, it would be valuable to consider additional functions for communicating with off-site docents.

A woman in the 60-69 age group was an early participant in the user study, but she left because the icon showing the user's position and direction and the text on the display were too small for her to see. We have to consider a universal design for the GUI based on these experiences and comments from the participants. Moreover, there were many comments that the handheld PC was too large and heavy as a terminal device. Thus, we have to consider providing the AR guide services on smaller devices such as cell phones, personal navigation devices, and portable game devices, while balancing display size with usability.

7 CONCLUSION The objective of the Science Museum is to provide services that help visitors gain scientific knowledge through experiences and observations of exhibits. Our guide system is expected to improve the services by providing navigation help for visitors and by showing animation content for enhancing their appreciation and understanding of the exhibits. In addition, the guide system itself can possibly serve as a hands-on exhibit.

We recorded various activity logs, such as walking trajectories and videos, for activity analyses of the participants. It is expected that analyses of the visitors’ activity logs can contribute to designing more attractive exhibits, developing new courses that help visitors learn existing exhibits more effectively, and improving the guide system. Therefore, we are working on a reliving system that makes content from the huge amount of activity logs, and allows users to query and browse the content intuitively [2].

As discussed above, we have to consider not only the services provided by the guide system for effective learning, but also improvement of the entire service based on the analysis of the visitors' activity logs. Realizing "service innovation management" in practical operation, using positioning and context-awareness, augmented reality, and human-computer interaction technologies to improve the QoE, is an important issue for future work.

REFERENCES [1] T. Okuma, M. Kourogi, N. Sakata, and T. Kurata: "A Pilot User

Study on 3-D Museum Guide with Route Recommendation Using a Sustainable Positioning System,” In Proc. International Conference on Control, Automation and Systems 2007 (ICCAS2007) in Seoul, KOREA, pp.749-753 (2007).

[2] T. Okuma, M. Kourogi, N. Sakata, and T. Kurata: “Reliving Museum Visiting Experiences on-and-off the Spot,” In Proc. The Sixth International Symposium on Mixed and Augmented Reality (ISMAR2007) in Nara, JAPAN, pp. 279-280 (2007).

[3] R. T. Azuma: “A survey of augmented reality,” Presence, vol.6, No.4, pp.355-385 (1997).

[4] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and B. MacIntyre: “Recent Advances in Augmented Reality,” IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp. 34-47 (2001).

[5] S. Feiner, B. MacIntyre, T. Höllerer, and A. Webster: "A Touring Machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment," In Proc. First IEEE


International Symposium on Wearable Computers (ISWC ’97) in Cambridge, MA, pp. 208-217 (1997).

[6] M. Weiser: “The Computer for the 21st Century,” Scientific American, Vol. 265, No. 3, pp.94-104 (1991).

[7] G. W. Fitzmaurice: “Situated Information Spaces and Spatially Aware Palmtop Computers,” Communications of the ACM, Vol.36, No. 7, pp. 38-49 (1993).

[8] T. Nishimura, H. Itoh, Y. Nakamura, Y. Yamamoto, and H. Nakashima: “ A Compact Battery-Less Information Terminal for Real World Interaction,” In Proc. PERVASIVE 2004, Springer LNCS 3001, pp.124-139 (2004).

[9] D. Schmalstieg and D. Wagner: “Experiences with Handheld Augmented Reality,” In Proc. The Sixth International Symposium on Mixed and Augmented Reality (ISMAR2007) in Nara, JAPAN, pp. 3-15 (2007).

[10] M. Kourogi and T. Kurata: “Personal positioning based on walking locomotion analysis with self-contained sensors and a wearable camera,” In Proc. The Second International Symposium on Mixed and Augmented Reality (ISMAR2003), pp. 103–112, (2003).

[11] M. Kourogi, N. Sakata, T. Okuma, and T. Kurata: "Indoor/Outdoor Pedestrian Navigation with an Embedded GPS/RFID/Self-contained Sensor System," In Proc. ICAT2006, pp.1310-1321 (2006).