
ARCHEOGUIDE application scenarios


Project Acronym: ARCHEOGUIDE

Project Title: Augmented Reality-based Cultural Heritage On-site GUIDE

Contract Number: IST-1999-11306

Starting Date: 01-01-2000 Ending Date: 01-06-2002

Deliverable Number: D04

Title of the Deliverable: ARCHEOGUIDE application scenarios

Task/WP related to the Deliverable: WP1, Task 1.1, 1.2, 1.3

Type (Internal, Restricted, Public): Public

Author(s): Consortium

Partner(s) Contributing: All

Contractual Date of Delivery to the CEC: 30-06-2000

Actual Date of Delivery to the CEC: 27-06-2000

Project Coordinator

Company name: INTRACOM S.A.

Name of representative: Nikos Ioannidis

Address: 19.5 Km. Markopoulou Ave., Peania, 19002 Greece

Phone number: +30 (1) 6860349

Fax number: +30 (1) 6860312

E-mail: [email protected]

Web Site Address: archeoguide.intranet.gr


Table Of Contents

1. INTRODUCTION
   1.1. PURPOSE
   1.2. GENERAL OBJECTIVES OF THE PROJECT
   1.3. SYSTEM CONTEXT
   1.4. OVERVIEW OF THE REPORT
2. ACTORS, SCENARIOS AND COMPONENTS
   2.1. ACTORS IN THE ARCHEOGUIDE PROJECT
   2.2. APPLICATION SCENARIOS
        2.2.1. On-Site Virtual Guide Application Scenario
        2.2.2. Off-Site Virtual Guide (remote information retrieval) Application Scenario
        2.2.3. Content Creation Application Scenario
        2.2.4. Virtual Human Representations Application Scenario
   2.3. FUNCTIONAL COMPONENTS
3. GENERAL REQUIREMENTS
   3.1. SITE VISITORS
   3.2. CONTENT CREATORS
   3.3. ARCHEOLOGISTS
   3.4. SCIENTISTS
   3.5. SITE MANAGERS
   3.6. SYSTEM ADMINISTRATORS
   3.7. TABLE OF GENERAL REQUIREMENTS
4. SPECIFIC REQUIREMENTS
   4.1. ON-SITE VIRTUAL GUIDE
   4.2. OFF-SITE VIRTUAL GUIDE
   4.3. CONTENT CREATION
5. TECHNICAL CONSTRAINTS / MEASURABLE OBJECTIVES
   5.1. MOBILE UNIT
        5.1.1. Memory / Processing power
        5.1.2. Wireless LAN bandwidth / Hard disk space
        5.1.3. Rendering / Visualization
        5.1.4. Audio output/input
        5.1.5. Connectors / Slots
        5.1.6. Data formats
   5.2. SERVER HARDWARE
        5.2.1. Multi-processor system
        5.2.2. Memory capacity
        5.2.3. Network Cards
        5.2.4. Video Card
        5.2.5. Hard Disk
   5.3. MEASURABLE OBJECTIVES
6. CONCLUSIONS
7. APPENDIX A – SURVEY OF RELATED APPLICATIONS AND TECHNOLOGIES
   7.1. HARDWARE/SOFTWARE AND SYSTEM SURVEY
        7.1.1. Wearable computers / Notebooks
        7.1.2. HMDs / Displays
        7.1.3. Interaction Devices
        7.1.4. Communication infrastructure
        7.1.5. Virtual Human Representation
   7.2. TRACKING SYSTEMS
        7.2.1. Introduction
        7.2.2. Different approaches on tracking
        7.2.3. Some Advantages and Disadvantages of typical Tracking Systems
        7.2.4. Example for a possible Hybrid Tracking Technology
        7.2.5. Sources
        7.2.6. Survey of GPS-related technologies
        7.2.7. GPS use on Tracking Systems
   7.3. DATABASE FOR CONTENT REPRESENTATION CONSIDERATIONS
        7.3.1. Survey of related standards
        7.3.2. The Core Data Standard for Archaeological Sites and Monuments
        7.3.3. The Dublin Core: Metadata on Archaeology
        7.3.4. Internet Survey of Sites on DB Standards on Archaeology
        7.3.5. Geographic Information Systems
        7.3.6. Projects and References
   7.4. PROTOCOLS / (DE FACTO) STANDARDS
        7.4.1. MPEG-4
        7.4.2. XML
        7.4.3. VRML
        7.4.4. X3D eXtensible 3D
        7.4.5. PYTHON
        7.4.6. UML – Unified Modeling Language
        7.4.7. 3DML
        7.4.8. Integration of Animation
8. APPENDIX B - SURVEY OF RELATED PROJECTS
   8.1. EC-PROJECTS
        8.1.1. AQUARELLE
        8.1.2. ENREVI
        8.1.3. PISTE
        8.1.4. STARMATE
        8.1.5. TOURBOT
   8.2. NATIONAL PROJECTS
        8.2.1. ARVIKA
   8.3. OTHER PROJECTS
        8.3.1. Columbia University
        8.3.2. Carnegie Mellon University
        8.3.3. Georgia Tech
        8.3.4. University of California, Berkeley
        8.3.5. University of North Carolina
        8.3.6. Massachusetts Institute of Technology
        8.3.7. University of Toronto
        8.3.8. HRL Laboratories
        8.3.9. US Navy
        8.3.10. IBM Research
        8.3.11. Rockwell Science Center
        8.3.12. ATR
        8.3.13. Sony CSL
        8.3.14. Keio University
        8.3.15. University of South Australia
        8.3.16. LORIA
        8.3.17. INRIA
        8.3.18. TU Wien
        8.3.19. Fraunhofer-IGD
        8.3.20. EML (European Media Lab)
        8.3.21. Conferences
9. APPENDIX C - QUESTIONNAIRES
   9.1. EXPERTS' QUESTIONNAIRE
   9.2. VISITORS' QUESTIONNAIRE


Executive Summary

The main purpose of this deliverable is to identify the actors involved in the ARCHEOGUIDE project, to describe their requirements in detail, and to describe the application scenarios of the system in detail. The report also identifies the various functional modules of the system and their interactions. It is important to stress that the requirements described in this document will form the basis on which the whole project will be designed and built. In this report we also provide a detailed survey of the state of the art in enabling technologies and related applications.


1. Introduction

1.1. Purpose

The main purpose of this deliverable is to identify the actors involved in the ARCHEOGUIDE project, to describe their requirements in detail, and to describe the application scenarios of the system in detail. The report also identifies the various functional modules of the system and their interactions. It is important to stress that the requirements described in this document will form the basis on which the whole project will be designed and built. In this report we also provide a detailed survey of the state of the art in enabling technologies and related applications.

1.2. General Objectives of the Project

The project has multiple goals, which can be broadly divided into two categories: technological and business oriented. Overall, ARCHEOGUIDE aims at building a system that will provide an electronic personalized guide to cultural heritage sites; it will also provide virtual reconstruction of the monuments of the site, thereby dramatically enhancing both the user experience and the readability of the monuments. Another product of this effort will be a database of digitized information about the site, together with appropriate authoring tools for populating and/or modifying the database, to be used by scientists and archeologists via the Internet. The technological objectives of the project include the following:

- Research and development of an accurate position tracking system suitable for outdoors augmented reality.
- Development of new intuitive human-computer interaction techniques applicable in outdoor environments.
- Testing the effectiveness of augmented reality techniques and wearable computers in providing personalized access to cultural information.

The technological objectives of the project require intensive research into many diverse new areas. In the field of Augmented Reality, no position tracking system is currently suitable for outdoor use, nor are there currently available techniques with the required precision for object registration (i.e. placing the virtual objects in their correct spot). In the field of intelligent agent technologies, research is needed to understand the behavior of the user from their various multi-modal inputs. It is in these scientific and technological areas that the project will advance the state of the art through extensive research.

The business objectives of the project include:

- Attracting more people to cultural heritage sites through the lure of high technology.
- Triggering interest in private funding for expanding various modules of the system and deploying them to other sectors or sites.

1.3. System Context

The ARCHEOGUIDE system applications will be deployed in several contexts. Within the context of an on-site personalized tour guide, the system will act as a guide for highly customizable guided tours that may be as intrusive as the visitors desire, offering augmented reality enhancements to the site. In the context of digitization, organization and storage of multimedia material about cultural heritage sites, it can also serve as a suite of tools that enables remote access to information about the sites. It can also serve as a virtual reality world that can be distributed on CD-ROM (or over the Internet) to allow remote "virtual" visitors to explore the site (thus enhancing its visibility).

1.4. Overview of the Report

The rest of this report is organized as follows. In section 2.1 the actors involved in the project are identified. We describe the uses of the system by each actor in a number of application scenarios in section 2.2; in that section we make use of UML use cases to describe the functions of the system from the users' point of view. An outline of the functional components of the system is given in section 2.3. The general requirements of each actor are described in section 3. We then present more specific requirements dictated either by the application scenarios or by technological constraints (section 4). In section 5 we present the technical constraints and measurable objectives implied by the user requirements. Finally, we present concluding remarks in section 6. In Appendix A we include a survey of the state of the art in enabling technologies and related applications, and in Appendix B a survey of related projects. In Appendix C we provide the questionnaires that were used to discover the users' requirements.


2. Actors, Scenarios and Components

2.1. Actors in the ARCHEOGUIDE Project

The actors involved in this project are divided broadly among users, archeological and cultural researchers, and administrators. User actors are divided among site visitors (end users) and content creators. Archeological and cultural researchers are divided among archeologists, who are responsible for the (physical) restoration of the site, and scientists, who are responsible for the interpretation and readability of the monuments of the site. Administrators are responsible for the proper operation and functionality of the system and the site in general; they are divided between site managers and system administrators. In the following table we describe the responsibilities of each actor next to their UML symbol. UML (Unified Modeling Language) is a graphical language for modeling and developing software-intensive systems.

Site Manager

Site managers are principally concerned with the maintenance of the site, its overall visibility to the visitors and the world.

Scientist

Scientists responsible for interpretation and readability of remains of a site essentially provide the necessary input to the content creators about the models to be used for the virtual objects. They are also responsible for validating the audio/textual information contained in the database of the system regarding the site and the monuments therein.

Archeologists

Archeologists responsible for the restoration of the site may use the system to record and document their progress, and also to visualize the desired outcome of the whole restoration process (and demonstrate in the physical space of the site the results of the process before it is complete.)

Content Creator

Content creators are the users of the system responsible for creating content about a site. They are also responsible for creating user profiles according to various characteristics of the users, and for linking different audio or textual information about objects or areas to user profiles.


Multimedia Systems and Tools

These are the external tools and systems that the user must use in order to prepare the required multimedia content.

Examples of such tools are: 3D Studio, Alias / Wavefront, Cinema 4D, Photoshop, Illustrator, Soundforge, etc.

Visitor

Site visitors are the end users; they use the system as a personalized guide that provides a virtual reconstruction of the monuments of the site.

System Administrator

System administrators are responsible for the installation of the ARCHEOGUIDE system in the site, including installation of all necessary hardware and software equipment. They are also responsible for the seamless and normal operation of the system.

2.2. Application Scenarios

2.2.1. On-Site Virtual Guide Application Scenario

General Description

This is the major application scenario for the ARCHEOGUIDE system. The actors in this scenario are the on-site visitors and the system administrators. The steps that need to be followed by the actors for an on-site visit using the system are the following:

1. Equipment preparation (system administrator)
2. User authorization (system administrator, on-site visitor)
3. User profile selection (on-site visitor)
4. Guided tour (on-site visitor)
5. Exit (on-site visitor, system administrator)

Graphically, the next picture shows the flow of events in this scenario.


In step 1, the system administrator ensures that equipment is available for the visitor to wear (a Mobile Unit, MU), that its hardware and software are up and running, that the system server is up and running, and that the wireless network is operational. The system administrator then puts the visitor on hold until the number of ARCHEOGUIDE users in the site is below the maximum supported.

In step 2, the system administrator asks the user to agree to the operational requirements of the system (e.g. the user must agree not to intentionally damage the equipment). Once the agreement is made, and the other requirements (such as appropriate lighting conditions or the number of concurrent users of the system) are met, the system administrator hands a MU over to the visitor and helps them wear it.

In step 3, the user enters information by answering questions the system (the wearable computer) asks them; this information is entered through selections from pull-down menus appearing on the user's screen (traditional user interfaces). The nature of this information is described in the specific requirements for the on-site virtual guide application scenario. For the time being, suffice it to say that this information remains anonymous (the system does not ask the user for their name, social security number or other sensitive information, so that the user does not feel uncomfortable). The wearable computer records this information and submits it via the wireless LAN to the site server, and the site server displays a set of guided tour previews. The visitor may (or may not) select one of the predefined guided tours. A preview of a guided tour is a list of the titles of the information to be presented to the visitor, in time order, together with information such as the total time of the tour. The user may even modify a selected tour by requesting that more information about selected monuments be added. It is important to mention that the user profile selected at this point is only a starting point and adapts dynamically during the visit according to the user's actions (requests for more or less information from the system).

Step 4 is the main step in the scenario. The system guides the visitor in the site in the following manner. The system knows the exact position and orientation of the visitor through advanced position and orientation tracking, and it therefore gives the visitor navigational instructions about how to enter the site. Once the visitor is in the site, the system retrieves any general information about the site that fits the visitor's profile and starts the presentation. If the user has selected a tour, the system presents the scheduled information. If the system detects a deviation of the user's path from the expected path of the guided tour, it interrupts the presentation momentarily and provides navigational instructions to the visitor about where the nearest area of interest is. As soon as the user enters an area about which the system has information to present, it starts the presentation. The presentations are multimedia presentations: the system displays virtual objects that are reconstructions of the remains of the monuments in the site, and delivers audio and/or textual information about the area and the objects in it according to the visitor's profile. The visitor may at any time interrupt this presentation by selecting an object (or area) and requesting more or all available information about their selection. The system responds by presenting the requested information and then waits for the user to resume the tour. Alternatively, the user may request that the system stop the delivery of information; in that case, the whole tour (if one is selected) is modified to exclude information that was linked to the interrupted information. If no tour is selected, once the system has finished delivering general information about the current area, it starts delivering information about the selected objects in the area. Information, in other words, is presented from the general (site, then area) to the more specific (objects in the area, then objects within other objects). Finally, after having delivered information about each object in the current area, the system provides navigational directions (audio and/or visual, in the form of a map on the user's display) towards the next programmed area.

Step 5 is the final step in the scenario. The visitor is informed that the tour has finished (or the visitor informs the system of their desire to end the tour) and the system provides navigational directions to the exit. At the exit, the system administrator ensures that the equipment used by the visitor has not been damaged (or else takes the appropriate actions). The visitor optionally provides feedback about the system's operation. The visitor then departs from the site.
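
The general-to-specific ordering described above (site, then area, then objects, then objects within objects) amounts to a recursive traversal of the content hierarchy. The following is only an illustrative sketch in Python; ContentNode, presentations and deliver are hypothetical names introduced here for explanation, not part of the actual ARCHEOGUIDE design.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ContentNode:
    name: str                                                  # e.g. "site", "area", "temple"
    presentations: List[str] = field(default_factory=list)     # items matching the visitor's profile
    children: List["ContentNode"] = field(default_factory=list)

def present_node(node: ContentNode, deliver) -> None:
    # Deliver the general information attached to this node first...
    for item in node.presentations:
        deliver(f"{node.name}: {item}")
    # ...then recurse into the objects it contains (objects within objects
    # follow the same general-to-specific rule).
    for child in node.children:
        present_node(child, deliver)

# Example: site area containing two monuments
temple = ContentNode("Temple", ["history of the temple"])
stadium = ContentNode("Stadium", ["the ancient games"])
area = ContentNode("Sacred Area", ["overview of the area"], [temple, stadium])
present_node(area, deliver=print)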

Use Case Diagrams

In the following we will describe the system using the UML Use Case Model. Use Cases describe the behavior of a system from a user’s standpoint by using actions and reactions. They allow the definition of the system’s boundary, and the relationships between the system and the environment. Use cases address a lack of support in the early object-oriented methods, such as OMT-1 and Booch’91, which did not supply any technique for determining requirements. A Use Case corresponds to a specific kind of system use. It is an image of a system’s functionality, which is triggered in response to the stimulation of an external actor. The complete use case diagram of the On-site tour is illustrated below:

[Use case diagram: On-Site Virtual Guide. Actors: Visitor, System Administrator. Use cases: Enter Personal Information, Select Tour, Track Position, Present AudioVisual Information, Display All Virtual Objects, Erase All Virtual Objects, Interrupt Audio Information, Skip Info, Resume, Select Object, Query Database (General Query, Scientific Query), Update Visitor Profile, Authorize Use, Authorize Use On-site.]


The use cases that are included in the above diagram are described as follows:

1. Enter Personal Information


USE CASE Enter Personal Information

ACTORS Visitor

DESCRIPTION This use case starts when the user picks up the Mobile Unit.

The system asks them to enter certain information about themselves using pull-down menus, so as to form a personalized tour for them. The system asks them to enter information about their preferred language for communication (possible choices should include Greek, English, French, German) by displaying the flag of each country. The visitor must pick (only) one language.

The system reacts by selecting the chosen language for the rest of the interactions with the users.

Then, the system asks them to enter information about their age (which age group they belong to such as child, young adult, adult). The visitor chooses only one category.

Then, the system asks them about their education background. Possible choices are elementary, high school, university, and graduate school. User may only choose one option.

Then the system asks them about their expertise in archaeology. Possible choices are novice, amateur, expert, professional archaeologist. The visitor may select only one option.

Then the system asks them about their interests. Possible options include archaeology, history, culture, science, sports, dance, architecture, and sculpture. The visitor ranks zero or more areas of their interests on a scale from 0 to 10 (10 implying highest interest).


The system asks them to confirm their choices.

The visitor confirms choice or moves back to previous choices (using the "back" button) to change settings.

The mobile unit transmits this information to the site server, which then proceeds to prepare a tour for the visitor. The use case ends.

Exceptions: The user at any point (before confirming choices) pushes the "back" button. The system displays the previous menu of choices, and their current choice. The user may now select another option from this menu.
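
The anonymous information gathered in this use case can be thought of as a small profile record that the mobile unit sends to the site server. The sketch below is a minimal illustration in Python; the field names and value sets are taken from the menu choices above, but the record layout itself is an assumption for illustration, not the project's actual data format.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class VisitorProfile:
    # All values come from pull-down menus; the visitor remains anonymous.
    language: str          # "Greek", "English", "French" or "German"
    age_group: str         # "child", "young adult", "adult"
    education: str         # "elementary", "high school", "university", "graduate school"
    expertise: str         # "novice", "amateur", "expert", "professional archaeologist"
    interests: Dict[str, int] = field(default_factory=dict)   # interest name -> rank 0..10

profile = VisitorProfile(
    language="English",
    age_group="adult",
    education="university",
    expertise="amateur",
    interests={"archaeology": 8, "sports": 3},
)
# The mobile unit would serialize such a record and transmit it over the
# wireless LAN so that the site server can prepare candidate tours.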

2. Query Database


USE CASE Query Database

ACTORS Visitor

DESCRIPTION This use case initiates when the user requests more information about objects, materials or any other related subjects about the site.

Exceptions: The user has not selected any object in the site. In this case, the Query Database use case displays an error message "No object Selected".

2.1. General Query


USE CASE General Query

ACTORS Visitor

DESCRIPTION The system provides a list with all the information regarding the selected object.

It is possible that there are many related topics about that object. In this case the user can narrow the list down by applying some restrictions.


The options used to limit the list of information are:

Area

Category of monument: temple, statue etc.

Material: marble, ceramic etc.

Object

Periods of time (dates/years), by which the user selects the historic time period

Version in the case where there exists more than one version regarding the monument's manufacturing process

Category/type of information reflecting the user's special interests: general, history, sports, arts, music, etc.

Exceptions: None.

2.2. Scientific Query


USE CASE Scientific Query

ACTORS Visitor

DESCRIPTION This use case begins as soon as a site manager, an archaeologist or a scientist carries out a query requesting to retrieve information residing in the database.

In this case the user selects the table from the list of tables in the database and then performs a query construction following the steps of attribute selection and criteria selection.

The user selects a table from a list of the tables in the database. The system responds by presenting a list of the attributes contained in the selected table.

The user then adds clauses, selecting for each clause an attribute and criteria, to form the query.

The system responds by returning the results of the query (but not executing any visual or audio objects).


Exceptions: None.
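
The table / attribute / criteria construction described above can be sketched as a simple query builder. This is only an illustrative sketch in Python; the table and attribute names ("monument", "material", "period") are invented for the example and do not reflect the actual CIDOC-based schema.

from typing import List, Tuple

def build_query(table: str, clauses: List[Tuple[str, str, object]]) -> Tuple[str, list]:
    # Each clause is (attribute, operator, value); clauses are ANDed together.
    where = " AND ".join(f"{attr} {op} ?" for attr, op, _ in clauses)
    params = [value for _, _, value in clauses]
    sql = f"SELECT * FROM {table}"
    if where:
        sql += " WHERE " + where
    return sql, params

sql, params = build_query("monument", [("material", "=", "marble"),
                                       ("period", "=", "classical")])
# sql    -> "SELECT * FROM monument WHERE material = ? AND period = ?"
# params -> ["marble", "classical"]
# Only textual results are returned; no audio or visual objects are executed.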

3. Erase All Virtual Objects


USE CASE Erase All Virtual Objects

ACTORS Visitor

DESCRIPTION The use case is initiated when the visitor selects the option "Erase All Virtual Objects" from the HMD.

The system responds by erasing any virtual objects displayed on the user's HMD, but continuing the audio presentation of the tour.

The use case may also be initiated automatically by the Track Position use case.

4. Present Audiovisual Information


USE CASE Present Audiovisual Information

ACTORS Visitor

DESCRIPTION The use case starts as soon as the visitor enters an area for which there is a scheduled presentation.

The visitor enters such an area.

The system displays all the virtual objects that were selected by the tour and the virtual objects pre-selected by the visitor.

The system starts an audio presentation of the part of the tour related to this area.

Exceptions: The user selects an object from a menu of all available visible objects.

The system displays the selected object.


Include (Interrupt Audio Information).

The system then prompts the visitor for a continuation of the tour.

Exceptions: The visitor interrupts the audio presentation using the Interrupt Audio Presentation use case.

4.1. Display All Virtual Objects


USE CASE Display All Virtual Objects

ACTORS Visitor

DESCRIPTION The use case starts when the visitor selects the option "Display All Virtual Objects".

The system responds by drawing all virtual objects.

Include (Interrupt Audio Information).

5. Interrupt Audio Information


USE CASE Interrupt Audio Information

ACTORS Visitor

DESCRIPTION The use case starts when the visitor selects the "Interrupt Audio Presentation" option from their wearable computer.

The use case is also initiated by the "Select Object" use case automatically.

The system responds by interrupting the audio presentation.

Exceptions: If the use case was not manually initiated, and the information presented is about the selected object, nothing happens.

6. Select Object


USE CASE Select Object

ACTORS Visitor

DESCRIPTION The use case initiates when the user wishes to select an object (which can be as general as the whole site, or as specific as an individual wav file containing a piece of information about a column of a temple).

The user navigates through a tree-like structure of folders: the root folder is entitled "site" and contains information about the whole site, folders for the areas which comprise the site, and scripts containing guided tours.

7. Skip Info


USE CASE Skip Info

ACTORS Visitor

DESCRIPTION The use case is initiated when the visitor wishes to skip the audio/textual information they are receiving.

The system responds with a path of the hierarchy of the information being received for example:

Site.

Area.

Temple.

Temple roof.

Script k.

Paragraph about the builder of the roof.

The user selects the level from which they wish to skip the information presentation.


8. Resume


USE CASE Resume

ACTORS Visitor

DESCRIPTION The use case is available only after the user has interrupted the flow of the presentation.

The system resumes from the beginning of the most recently interrupted presentation.

9. Select Tour


USE CASE Select Tour

ACTORS Visitor

DESCRIPTION The use case is only available before the visitor enters the site.

The user may select one of a number of predefined tours, which are displayed on the wearable computer's screen.

Each tour is categorized as appropriate for a number of user profiles, which are all displayed to the user as well as the title of each tour (e.g. short tour emphasizing games or long tour with scientific details)

The system displays a preview of the tour as a drawing on the map, and as a list of information metadata (i.e. title) of every piece of information to be presented during the tour.

The user may even edit a selected predefined tour to include certain objects that were not originally included in the tour.

10. Track Position


USE CASE Track Position

ACTORS Visitor

DESCRIPTION This use case starts as soon as the visitor starts the mobile unit.

The visitor moves in the physical cultural heritage site.

The system tracks the position of the visitor continuously using a GPS.

The output of the GPS is used to determine the identifiable landmarks near the visitor, so that the image-based position system can search for them.

The system determines the position and orientation of the visitor and sends this information to the mobile unit for presentation control (correct registration and rendering of the objects).

The system displays on the screen of the wearable computer the position of the user on a 2D map.

The use case ends only when the visitor ends the visit.

Exceptions: The image-based system fails to determine the identifiable landmarks within the visitor's eyesight. In this case, the system sends an appropriate message to the mobile unit's presentation control system for further actions to be taken (such as interrupting any virtual object drawing).

Exceptions: The GPS fails to determine the visitor's approximate position. The system sends an appropriate message to the mobile unit's presentation control system for further actions to be taken (such as interrupting virtual object drawing, and notifying visitor about loss of position tracking).


Exceptions: The user approaches an area outside the covered places. The system issues a warning that the user is moving towards an off-limits area.
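
The coarse-to-fine loop implied by this use case (the GPS yields an approximate position, which narrows down the landmarks the image-based tracker should look for, and failure of either stage suspends virtual object drawing) is sketched below. This is only an illustration in Python; the landmark names, coordinates, radius and callback are assumptions made for the example, not the project's tracking design.

from math import dist  # Python 3.8+

LANDMARKS = {                         # landmark id -> known site coordinates (x, y)
    "column_03": (120.0, 45.0),
    "altar_01": (98.5, 60.2),
}

def nearby_landmarks(gps_fix, radius=30.0):
    # Use the coarse GPS fix to shortlist landmarks the camera should search for.
    return [name for name, pos in LANDMARKS.items() if dist(gps_fix, pos) <= radius]

def track(gps_fix, refine_with_camera):
    # gps_fix: approximate (x, y) position, or None if the GPS has no fix.
    # refine_with_camera: callback that matches the shortlisted landmarks in the
    # camera image and returns (position, orientation), or None if none is visible.
    if gps_fix is None:
        return None, "GPS lost: suspend virtual object drawing and notify the visitor"
    pose = refine_with_camera(nearby_landmarks(gps_fix))
    if pose is None:
        return None, "no landmark visible: suspend virtual object drawing"
    return pose, "ok"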

11. Update Visitor Profile


USE CASE Update Visitor Profile

ACTORS Visitor

DESCRIPTION The visitor may initiate the use case manually at any time, when they wish to modify their profile.

Using this use case, the user may modify their profile by changing the level of interest in any of the interest areas supported by the system.

As a side effect, if the user has selected a tour at the beginning of their visit, the system stops the scheduled presentations of the tour. It now selects a new set of objects to present to the visitor according to their new profile.

Exceptions: The system may decide automatically to initiate this use case when it detects a consistent pattern of actions from the user that indicates that their profile needs to be changed.
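
The automatic adaptation hinted at in the exception above can be sketched as a simple adjustment of interest weights: repeated "skip" actions lower a topic's weight, repeated requests for more information raise it. The thresholds, step size and function name below are illustrative assumptions, not the project's actual adaptation algorithm.

def adapt_interests(interests: dict, topic: str, action: str, step: int = 1) -> dict:
    # interests: interest name -> rank 0..10 (as entered in Enter Personal Information)
    updated = dict(interests)
    current = updated.get(topic, 5)
    if action == "skip":
        updated[topic] = max(0, current - step)
    elif action == "more_info":
        updated[topic] = min(10, current + step)
    return updated

profile = {"sports": 6, "architecture": 4}
profile = adapt_interests(profile, "sports", "skip")            # sports drops to 5
profile = adapt_interests(profile, "architecture", "more_info")  # architecture rises to 5
# Whenever a weight changes, the scheduled tour is recomputed against the new profile.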


13. Authorize Use


USE CASE Authorize Use

ACTORS Visitor

DESCRIPTION This use case is initiated when the user wishes to use the system. The system verifies that user is allowed to do so and that all necessary resources are available.

Exceptions: If the user does not have permission to use the system, the system logs them out.

Exceptions: If the resources needed are not available (e.g. not enough wearable computers) the system puts them on hold.

13.1. Authorize Use On-site


USE CASE Authorize Use On-site

ACTORS Visitor, System Administrator

DESCRIPTION This use case is initiated when a visitor arrives at the site and wishes to use the system.

The system administrator checks to see that there is equipment available (an operational Mobile Unit).

Then they check that all conditions for use of the system are satisfied; such conditions include appropriate weather conditions, lighting conditions, and the number of concurrent users of the system.

Exceptions: At least one condition is not met. In this case, the use case starts over again waiting until all conditions are met.

Normally, all conditions are met; the system administrator then proceeds to obtain an "agreement of use" from the visitor, which states the terms of use of the system and the responsibilities of all parties. Normally, the visitor signs the agreement and then proceeds to use the system (using the Run Mobile Unit use case). The use case has ended.

Exceptions: The user does not accept the "agreement of use" in which case the use case ends immediately.


2.2.2. Off-Site Virtual Guide (remote information retrieval) Application Scenario

General Description

This scenario enables scientists, archeologists and other qualified persons to retrieve multimedia information about the site without being present at the site. The scenario is made possible by the digitization of information about the site in a CIDOC-compliant database and, equally importantly, by the existence (via the content creation application scenario) of a virtual representation of the site (a Virtual Reality world, described in VRML), complete with geographic information (geodesics and other cartographic information) and representations of all static objects in the site (trees, large stones, buildings, etc.).

The actors of this scenario are therefore the off-site visitors of the system. The steps in this scenario are the following:

1. System authorization
2. Login and personalized tour creation
3. Off-site information tour
4. Exit

The figure below shows the flow of events of this scenario.

[Flow diagram (Visitor / System): Visitor Login with Authorize Use; Profile Selection with Tours Display; Tour Selection/Modification with Schedule Presentation of Information and Query DBMS; Move to Next Area with Present Audio/Visual Information; Move to Different Area with Update Scheduled Presentation of Information and Present Audio/Visual Information; Exit.]

In step 1 the system recognizes the off-site visitor as a valid user. In the case of remote access to the site server via internet, the system asks the visitor for a valid name and password in order to verify that they are indeed authorized users of the system.


Step 2 checks the system's databases for an existing user profile (in this scenario the user is not anonymous, since authorization is required in step 1). If the system finds one, it retrieves the user's profile. Otherwise, it is the first time the user logs in to the system, and the system asks the visitor for various pieces of information in order to create a dynamically adaptable profile for them. Once this step is finished, the user may proceed to step 3.

In step 3, the user starts exploring the virtual reality world contained in the system's database. The visitor has two options: a guided tour or a free-roaming tour. In both cases the system behaves as in step 4 of the on-site guide application scenario, but with different user interfaces, since the visitor has no HMD, cameras or microphones with which to interact with the computer, but only a keyboard and mouse.

Step 4 logs the user out upon explicit request, or after the virtual tour has ended. The system keeps a record of all users' activities so as to statistically process the categorization of the various pieces of information about the objects in the database. (For example, if more than 90% of the visitors who were young adults with no athletic interests interrupted the information about an object deemed appropriate for young adults with any interests and for adults with athletic interests, the system should restrict the appropriate categories of this piece of information to young adults or adults with athletic interests.)
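
The statistical recategorization rule in the example above can be sketched as follows. The 90% threshold comes from the text; the data layout, names and example figures are assumptions made purely for illustration.

def restrict_categories(stats, threshold=0.9):
    # stats: info_id -> {category: (times_interrupted, times_presented)}
    # Returns, for each piece of information, the categories whose visitors
    # did NOT overwhelmingly interrupt it.
    kept = {}
    for info_id, per_category in stats.items():
        kept[info_id] = [
            category
            for category, (interrupted, presented) in per_category.items()
            if presented == 0 or interrupted / presented < threshold
        ]
    return kept

stats = {"discus_info": {"young adult, no sports": (46, 50), "adult, sports": (2, 40)}}
print(restrict_categories(stats))
# -> {'discus_info': ['adult, sports']}   (92% of the first group interrupted it)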

Use Case Diagrams

The complete use case diagram of the Off-site tour follows:

[Use case diagram: Off-Site Virtual Guide. Actor: Visitor. Use cases: Enter Personal Information, Select Tour, Move to Different Area, Present AudioVisual Information, Display All Virtual Objects, Erase All Virtual Objects, Interrupt Audio Information, Skip Info, Resume, Select Object, Query Database (General Query, Scientific Query), Update Visitor Profile, Authorize Use, Authorize Off-site Use.]


System behavior is almost the same as in the case of the on-site tour, so most of the use cases are the same. Two use cases are added: "Move to Different Area" and "Authorize Off-site Use":

1. Move to Different Area


USE CASE Move to Different Area

ACTORS Visitor

DESCRIPTION This use case initiates when a user during the Off-site tour wishes to move to a different area.

The system responds by providing a list of all areas available for an off-site visit, and the user selects the one they prefer.

Exceptions: None

2. Authorize Off-site Use


USE CASE Authorize Off-site Use

ACTORS Visitor

DESCRIPTION The use case initiates when the user wishes to run the system remotely (using a CD-ROM or accessing the server through the Internet).

The system asks the user to enter their login name and password.

The user enters this info, and then the system verifies the validity of the user.

Exceptions: The user login and password pair is not valid.

The system refuses the connection.


2.2.3. Content Creation Application Scenario

General Description

The Authoring Tool is the part of the application that is responsible for the creation and maintenance of the Site Information Server Database. The actors involved are:

- Site managers, responsible for the parameterization and non-periodic maintenance of the database
- Archeologists and scientists, responsible for the scientific data of the database
- Content creators, responsible for the creation of end-user multimedia information, user profiles, scripts, etc.

The Authoring Tool will be used periodically by various actors who may not have sufficient time for learning a complex tool. It is therefore necessary to provide a simple, concise and easy-to-use interface that will enable casual users to use the system with minimal training. The involved actors can perform the following actions:

1. Site Setup
2. Site Documentation
3. Content Creation

[Flow diagram (User / System): Site Setup — Parameterize Site, Set Tracking Information, Update Site Information Server. Site Documentation — Edit Artifact, Edit Scientific Information, Update Site Information Server, Query Database, Select New Object, Exit.]

The figure above shows the flow of events for actions 1 and 2.

1. Site Setup


The site manager is responsible for the initial setup and any subsequent changes to the system parameterization. This way the site manager can customize the ARCHEOGUIDE system to the specific needs of their respective site.

The Tracking Information required for the site includes all the information related to GIS, fiducial points, site geometry and topology etc.

2. Site Documentation

The documentation of archaeological sites and monuments plays an essential role in promoting the understanding, conservation and preservation of the archaeological heritage. Within Europe, a wide range of recording methods are employed in the compilation of inventories, often within a national framework. We have therefore decided to base our database on the CIDOC Core Data Standard for Archaeological Sites and Monuments. There are several advantages to the implementation of the CIDOC standard:

1. Many scientists may already be familiar with the CIDOC standard
2. The interoperability of the database with other international systems is guaranteed
3. The soundness and completeness of the database is guaranteed

Archeologists will use the CIDOC-compatible database to thoroughly document the existing archeological site. This database shall be continuously maintained and shall provide a scientific reference for ongoing archeological work. However, the CIDOC standard is more an inventory of the site in question than a complete documentation system. In order to fully document all aspects of the site and the works in progress, we allow users to augment the CIDOC database with any kind of multimedia information that is relevant to the CIDOC entries, in the general form of scientific information. Last but not least, users of the system can perform various queries on the complete database, be that simple hierarchical viewing of the database contents or complex retrieval of objects from the database. The results can be viewed and, of course, further processed.


3. Content Creation

[Figure 1: Content Creation Flow (User / System) — Prepare Multimedia Objects; Query Database / Select Documentation Object; Create Personalized Information Object, Update Site Information Server; Create Personalized Tour Script, Update Site Information Server; Exit.]

Archeologists have created a complete and scientifically sound documentation of the archeological site. However, this database is suitable only for expert usage (e.g. by other scientists). It is therefore necessary that Content Creators enrich this database with multimedia information suitable for the on-site guide functionality required by site visitors. In the following we describe in a little more detail the steps needed to populate the database (and the tools used).

Multimedia Objects

This multimedia information (including Virtual Objects) must be prepared with suitable tools, e.g.:

1. VRML models with 3D Studio MAX
2. Audio sequences with Sonic Foundry
3. Video sequences with Adobe Premiere

Personalized Information Objects

The Multimedia Objects (MO) must now be integrated into the database created above. To accomplish this, the multimedia objects must be attached to their logical parent in the database. Furthermore, the content creator will have to provide additional information about the nature of the object, e.g.:

- targeted audience (age / education / etc.)
- language
- estimated presentation duration
- author / creation date

Thus the content creator will have created a new Personalized Information Object (PIO).

In order to support coherent pieces of information, as required in the on-site application scenario, the content creator must create Personalized Tour Scripts (PTS). A PTS is simply a pre-selected sequence of PIO which:

- is targeted at a specific audience
- has a specified flow and timeline
- has an estimated duration
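
A PIO and a PTS, as described above, can be sketched as two small records: the PIO attaches a prepared multimedia file to its logical parent in the documentation database together with audience metadata, and the PTS is an ordered sequence of PIOs. The field names below are illustrative assumptions, not the final database schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PersonalizedInformationObject:
    parent_entry: str      # documentation entry the object is attached to
    media_file: str        # e.g. "temple_reconstruction.wrl", "roof_history.wav"
    audience: str          # targeted audience (age / education / expertise)
    language: str
    duration_s: int        # estimated presentation duration in seconds
    author: str
    created: str           # creation date

@dataclass
class PersonalizedTourScript:
    title: str
    audience: str
    items: List[PersonalizedInformationObject] = field(default_factory=list)

    def estimated_duration_s(self) -> int:
        # The estimated duration of a PTS follows from its ordered PIOs.
        return sum(pio.duration_s for pio in self.items)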

Use Case Diagrams

The complete diagram for the site setup and content creation related use cases is presented here.

[Use case diagram: Content Creation. Actors: Site Manager, Archeologists, Scientist, Content Creator, Multimedia Systems and Tools. Use cases: Parameterize Site; Set Tracking Information (fiducial points); Add/Edit new artifact; Add/Edit scientific information; Query Database with Hierarchical Selection and Complex Query; Prepare Multimedia Objects; Create/edit Information Object; Create/edit Personalized Tour Script; Categorize Object.]

1. Parameterise Site


USE CASE Parameterise Site

ACTORS Site Manager

DESCRIPTION In this use case the site manager is presented with a list of all the configurable parameters in the system.

The user can now tailor the Archeoguide system by changing the parameters to match the needs of the specific site.

The list of configurable parameters includes: supported languages, areas of interest, parameterisation of the equipment used (camera characteristics), wearable network configuration and more.

The site manager is also responsible for the GIS information contained in the database.

2. Set tracking information (fiducial points)


USE CASE Set Tracking Information (fiducial points)

ACTORS Site Manager

DESCRIPTION This use case starts with the site manager placing fiducial points into the archaeological site. The points will be recognized by the user's camera to identify the exact position and orientation of the user.

The exact position of the fiducial points, expressed in the coordinate system used by the GIS, has to be entered into the system database for the fiducial points to be correctly recognized.


This is a prerequisite for the tracking system to work correctly.
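
The fiducial point registry entered by the site manager can be pictured as a small table keyed by marker identifier and holding the surveyed GIS coordinates that the image-based tracker matches against camera detections. The sketch below is illustrative only; the field names and coordinate values are assumptions, not the project's actual database layout.

from dataclasses import dataclass

@dataclass
class FiducialPoint:
    point_id: str
    easting: float      # coordinates in the GIS coordinate system used for the site
    northing: float
    elevation: float

fiducial_registry = [
    FiducialPoint("F01", 28765.4, 41230.1, 12.7),
    FiducialPoint("F02", 28790.2, 41255.8, 13.1),
]
# The tracking system looks a detected marker up by point_id and combines the
# stored coordinates with the camera image to recover position and orientation.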

3. Add / Edit new Artifact


USE CASE Add / Edit new Artefact

ACTORS Scientist, Archaeologist

DESCRIPTION In this use case the responsible actor creates and maintains the site documentation.

This use case begins with the user selecting the parent object for the new object to be created.

For each monument / artefact of the site, the user must create a new entry in the site documentation. These entries are hierarchically organized.

Then the user can populate the newly created entry with various attributes, i.e. location information, time reference, GIS data, etc.

A basic set of attributes is mandatory for every entry, whilst other attributes are optional and are only populated when suitable.

Logical objects (e.g. areas) are also allowed.


4. Add / edit scientific information


USE CASE Add / edit scientific information

ACTORS Scientist, Archaeologist

DESCRIPTION To fully document an archaeological site, more data is needed than provided for by the CIDOC standard.

This case begins with the selection of a monument or an artefact from the site documentation database by the user.

The user can now augment the object with additional multimedia information (text / audio / video / 3D / etc) as needed.

The information entered at this stage is meant to complete the site documentation and NOT for presentation to visitors using the system.


5. Query Database


USE CASE Query Database

ACTORS Scientist, Archaeologist, Content Creator

DESCRIPTION This use case initiates when the user requests more information about objects, materials or any other related subjects about the site.

Exceptions: The user has not selected any object in the site. In this case, the Query Database use case displays an error message "No object Selected".

5.1. Hierarchical Selection

USE CASE Hierarchical Selection

ACTORS Scientist, Archaeologist, Content Creator

DESCRIPTION All the data contained in the database (site documentation, scientific information, Personalized Information Objects (PIO), Personalized Tour Scripts (PTS)) is presented to the user in a unified view.

This view is hierarchically based, so that the user can 'open' and 'close' parts of the site hierarchy, thus 'exploring' it.

Using a simple 'point-and-click' interface, the user can quickly locate the desired object in the database and select it.

5.2. Complex Query

USE CASE Complex Query

ACTORS Scientist, Archaeologist, Content Creator

DESCRIPTION In this use case a user wishes to locate an object or a range of objects in the database.

He can do so by entering a series of attributes about the desired item.

The system will locate the items and allow the user to examine them more closely.
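
A minimal sketch of such an attribute-based lookup, assuming each database entry carries a dictionary of attribute values (the data and field names are illustrative, not the system's schema):

    def complex_query(entries, **criteria):
        """Return all entries whose attributes match every given criterion."""
        results = []
        for entry in entries:
            attributes = entry.get("attributes", {})
            if all(attributes.get(k) == v for k, v in criteria.items()):
                results.append(entry)
        return results

    entries = [
        {"name": "Votive figurine", "attributes": {"material": "gold", "period": "Hellenistic"}},
        {"name": "Column drum", "attributes": {"material": "marble", "period": "Classical"}},
    ]
    # Example: locate all gold objects of the Hellenistic era.
    print([e["name"] for e in complex_query(entries, material="gold", period="Hellenistic")])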

6. Prepare Multimedia Objects

[Use case diagram: actor Content Creator; use case Prepare Multimedia Objects, supported by Multimedia Systems and Tools]

USE CASE Prepare Multimedia Objects

ACTORS Content Creator

DESCRIPTION In this use case, the content creator prepares the content that will be presented to visitors.

This multimedia information (including Virtual Objects) must be prepared with suitable tools, e.g.:

1. VRML models with 3D Studio MAX

2. Audio sequences with Sonic Foundry

3. Video sequences with Adobe Premiere

By using suitable tools the content creator(s) will prepare the required data files for later integration into the database.

7. Create / Edit Information Object

[Use case diagram: actor Content Creator; use case Create / Edit Information Object, which <<uses>> Hierarchical Selection and Categorize Object]

USE CASE Create / Edit Information Object

ACTORS Content Creator

DESCRIPTION This use case starts with the user selecting an object (monument / artefact) from the site documentation database that he wishes to augment with additional information for the site visitors.

The multimedia data with which he wishes to augment the object has been prepared in a separate use case.

After adding the multimedia data to the object, the user has to categorize the object.

The content creator has now created a new Personalized Information Object (PIO).

8. Create / Edit Personalized Tour Script

[Use case diagram: actor Content Creator; use case Create / Edit Personalized Tour Script, which <<uses>> Hierarchical Selection, Categorize Object and Create / Edit Information Object]

USE CASE Create / Edit Personalized Tour Script

ACTORS Content Creator

DESCRIPTION In this use case the user wishes to create a sequence of information objects to be presented to a visitor, much in the same way a writer prepares a script. This sequence is called a Personalized Tour Script (PTS).

The user must select the database object for which he wishes to create his PTS.

Then he is given a list of all the available Personalized Information Objects (PIO) for this object, as well as all the available PTS for any direct children of this object.

He can now select and order a mixture of PIO according to his wishes.

The newly created PTS must be also categorized. It is now available, under the selected object, for presentation to the user.

9. Categorize Object

USE CASE Categorize Object

ACTORS Content Creator

DESCRIPTION In this use case the user has to designate the targeted audience for some database object.

In order to accomplish this, the user must assign a number of attributes to the object.

Examples of these attributes would be:

1. Age {young, young adult, adult}

2. Education {elementary, high school, university}

These attributes (and the possible selections) are defined by the site manager.
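
As an illustration of how such audience attributes might later be matched against a visitor profile (the matching rule is the one spelled out under requirement URC8 further below; the attribute values here are only examples):

    # Audience attributes assigned to a database object by the content creator.
    # An attribute left as None matches every visitor (cf. URC8 below).
    object_attributes = {
        "age": {"young adult", "adult"},
        "education": {"high school", "university"},
        "interests": None,
    }

    visitor_profile = {"age": "adult", "education": "university", "interests": "history"}

    def appropriate_for(attributes, profile):
        """True if every defined attribute admits the corresponding profile value."""
        return all(profile.get(name) in allowed
                   for name, allowed in attributes.items()
                   if allowed is not None)

    print(appropriate_for(object_attributes, visitor_profile))   # -> True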

2.2.3. Virtual Human Representations Application Scenario

General Description

To enrich the tour and make the virtual scenes more dynamic, virtual human representations will be created and inserted. There are several ways in which a standard VRML avatar might be used; we present three that can be implemented in Archeoguide. All of them work in either a single-user or multi-user environment.

1. Simple, Repetitive Key-frame Animations

If the main objective is to use an avatar that performs some repetitive action, such as a "background character" or "extra" that gives some life to the VRML scene, the procedure is simple: the key-frame animation sequence is converted into a series of interpolator nodes, which are driven by a TimeSensor with its loop field enabled (a minimal sketch of such a looping animation follows this list). Conceivably, avatars could share animations; for example, a number of athletes in different parts of the stadium might be driven by the same key-frame animation sequence, which means that the sequence data has to be downloaded only once. These avatars and their animations would work equally well in either a single-user or multi-user environment.

2. Multiple Actions, Triggered by Sensors

Some applications may require multiple actions performed by the same avatar, i.e. different animations associated with the same avatar and executed at different times. In this case, multiple key-frame animation sequences are created for the same avatar file. The outputs of all the interpolators are connected to the avatar, but each sequence is driven by an independent TimeSensor, triggered by automatic or user events. To understand how this might work, consider an athlete warming up (one animation associated with the avatar); upon a sound cue, for example, he begins the race (another animation associated with the avatar). In this example, a sound sensor triggers the change and the different animations are executed.

3. Multiple Concurrent Animations for the Same Avatar

Another possible situation is the performance of multiple behaviours (animations) simultaneously. For example, if there is an animation sequence for running and another for jumping, the avatar can execute them at the same time; this simply involves triggering more than one TimeSensor simultaneously. Obviously, problems will arise if multiple animation sequences try to control the same joints or body segments; in that case, special care must be taken at the implementation phase.
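
A minimal sketch of the first technique: the helper below (our own illustration, not a project tool) emits a VRML97 fragment in which a looping TimeSensor drives an OrientationInterpolator whose output is ROUTEd to an existing, named Transform of the avatar:

    def looping_rotation_vrml(name, target, keys, key_values, cycle_seconds):
        """Emit VRML97 text for a repetitive key-frame rotation of one avatar joint."""
        key_str = " ".join(f"{k:.2f}" for k in keys)
        value_str = ", ".join(" ".join(str(c) for c in v) for v in key_values)
        return (
            f"DEF {name}Timer TimeSensor {{ cycleInterval {cycle_seconds} loop TRUE }}\n"
            f"DEF {name}Rot OrientationInterpolator {{\n"
            f"  key [ {key_str} ]\n"
            f"  keyValue [ {value_str} ]\n"
            f"}}\n"
            f"ROUTE {name}Timer.fraction_changed TO {name}Rot.set_fraction\n"
            f"ROUTE {name}Rot.value_changed TO {target}.set_rotation\n"
        )

    # Additional ROUTE statements from the same interpolator to other Transforms
    # would let several 'background' athletes share one animation sequence.
    print(looping_rotation_vrml("Wave", "AthleteArm",
                                [0.0, 0.5, 1.0],
                                [(0, 0, 1, 0), (0, 0, 1, 1.57), (0, 0, 1, 0)],
                                4.0))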

2.3. Functional Components

The picture below shows the various functional components of the ARCHEOGUIDE system and their interactions.

These components are described in more detail below.

Client component: it comprises the following modules:

User Input. This module is responsible for providing to the visitor a friendly and easy-to-learn graphical user interface (GUI) that allows them to interact with the system. Novel multi-modal techniques such as gesture recognition and speech processing will be combined with traditional pull-down menus that will be displayed on the user’s HMD or screen of the wearable computer. All these user interfaces allow the visitor to select, retrieve or interrupt information about the area in which they are located.

Client Database. This module is responsible for storing the 3D models (VRML files) comprising the site together with the site cartography and other geographic information. It also stores audio/textual information about the site to be presented to the visitor upon request.

Position Tracking. This module tracks the position of the user continuously with very high accuracy.

Presentation Control. This is the module responsible for selecting the virtual objects to be rendered in the user’s display, for selecting the appropriate audio information to be presented, for ordering this information, and for handling various latencies in the system. It is also responsible for dynamically adapting the original presentation plan (which is a skeleton of an ordered chain of information material to be presented at various times during the visitor’s tour) to include or exclude information.

Multimedia Presentation. This is the functional component responsible for rendering and displaying the 3D models of the selected virtual objects, for playing the audio information selected (when instructed by the presentation control), and for displaying any textual information for navigational help or for user interface purposes (menus on the user's display).
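
As a purely illustrative sketch of the kind of adaptation performed by the Presentation Control module described above (the pruning rule, item names and durations are assumptions, not the module's actual algorithm), an ordered presentation plan could be trimmed to the visitor's available time as follows:

    from dataclasses import dataclass

    @dataclass
    class PlanItem:
        title: str
        duration_s: float      # presentation time of this piece of information
        importance: float      # higher = more relevant to the visitor profile

    def adapt_plan(plan, available_s):
        """Drop the least important items until the ordered plan fits the time budget."""
        items = list(plan)
        while items and sum(i.duration_s for i in items) > available_s:
            items.remove(min(items, key=lambda i: i.importance))
        return items            # the original presentation order is preserved

    plan = [PlanItem("Area overview", 60, 0.9),
            PlanItem("Column details", 90, 0.4),
            PlanItem("Excavation history", 120, 0.6)]
    print([i.title for i in adapt_plan(plan, available_s=200)])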

The client component therefore must at least include the following hardware equipment in the on-site guided tour application scenario:

Head Mounted Display (HMD).
Global Positioning System (GPS) for approximate position tracking.
Camera attached to the HMD and interface to the client computer for image-based position and orientation tracking with high accuracy.
Microphone and interface to the client computer for voice input.
Wearable computer (laptop) with keyboard, trackball, fast hard disk, and LCD display for traditional user interfacing and processing.
Batteries for electric power.

Server Component: it consists of the following components:

Content Creation (authoring) tools. They are responsible for populating the site information server's database, which will be used to populate the client databases as well.

Server Database. It is responsible for storing the whole site cartography (together with locations and details of the fiducial points), the content created by content creators for the documentation about the site (according to the ICOM-CIDOC standard), and audio-visual information about the site to be used in the guided tours.

Adaptive Presentation Modifications. This component is responsible for tracking user behavior and automatically updating the scheduled information presentation (expanding and/or reducing the audio information scheduled for presentation to the visitor.)

The server component is implemented on a high-end server computer. It requires high processing capacity for handling multiple user requests for audio (and, less frequently, visual) information and for computing any changes in the planned guided tours in real-time by tracking each user's behaviour, as well as enough network bandwidth on the server side to transmit the results.

Network Component. The network component is responsible for the wireless transport of client requests to the server side and the subsequent response of the server. The CORBA architecture standard will be used, since it allows a uniform and systematic way of defining and using the interfaces of the system's various components and encapsulates network complexities within it. The network requires certain hardware equipment: access points (AP) will be installed in various places in the site so as to cover most of the site area in a wireless fashion. Fast Ethernet cables will be used for AP interconnection with the server computer.

3. General Requirements

3.1. Site Visitors

First we shall describe general requirements stemming from the site visitor actors of the system. Currently, cultural heritage site visitors may only get acquainted with the site's various aspects through the guidance of a group guide. Such guides usually have only limited knowledge of the history of the site, the architectural details of the monuments, the site's role in the history of the surrounding area or of the civilization that created it, etc. Detailed knowledge about the excavations in the site or the methodologies used to study it is not available, since such information requires scientific expertise. Furthermore, human guides usually memorize a certain amount of information about some of the most important elements in a site and "lecture" the visitors, allowing them little interaction, since the tour always includes more than one person. The nature of these tours forces the visitors to follow a pre-specified path that may not correspond to their individual preferences; in such cases the visitor loses interest in the site very fast, and either mechanically follows the group or abandons the tour altogether.

The proposed system provides personalized tours enhanced with multimedia information presentations that offer the potential for a much higher level of entertainment and education. To achieve these goals, however, the system must act reliably as a personalized source of "exciting" information about the site. The visitors therefore broadly divide their requirements into the following categories:

a. Personalized and Context-related Information Delivery: the system should have a good idea of the interests and background of the visitor and deliver information relevant to their surroundings that they will appreciate. Within this category are issues of the quality of the virtual reconstructions of the site's remains (image resolution, object registration, etc.)

b. Navigation & Orientation Assistance: this category includes all site visitor requirements for assistance from the system in order for the user to locate objects or places of interest to them in the site (or guide them to the exit or the parking lot.)

c. Tours: it includes requirements regarding the behavior of the system while operating in a mode that resembles a single-person tour by an expert human guide.

d. User Interaction: this category describes the visitors’ requirements regarding the various interfaces through which they will interact with the system. It includes traditional as well as novel user interface methods since the equipment with which the visitors are equipped allows for much more intuitive user interaction than what is available in standard desktop computers.

3.2. Content Creators

The role of the content creators is to create the content that is to be delivered to the end users, i.e. the site visitors, as well as the archeological and cultural researchers. They are responsible for populating the central database with multimedia information and organizing it in appropriate hierarchies. Their requirements can be divided into the following categories:

a. Authoring Tools for (multimedia) Content: this category describes requirements about the tools needed to create, organize, link and store in the system's databases multimedia content about the site.

b. User Interface: the category describes their requirements about their interaction with the tools in order to quickly organize (create, link, store) the information they prepare about the site.

3.3. Archeologists

The role of the archeologists responsible for the restoration of a site (or parts of it) is a combination of content creator and end user. Their requirements stem from their needs to create visual content, i.e. virtual models of the monuments to be restored, to store these models as different versions of information about the same object, and to attach to each such version appropriate information about the process that will be used to create it. They also act as site visitors while they use the system to inspect the models they have created.

3.4. Scientists

This category of users is responsible for the readability of the monuments. Since they provide input to the content creators (and may in fact be themselves content creators) their requirements overlap with those of content creators. However, they also add some new requirements in one particular application scenario, that of remote access to the system’s database (through intranets or the web for example.) Such access enhances the readability of the monuments.

In the following table we describe in detail the requirements gathered from each actor of the ARCHEOGUIDE system. The table describes requirements from these actors that are common to the application scenarios of the system.

3.5. Site Managers

As mentioned earlier, site managers are those user actors of the system that are responsible for its overall smooth operation in a way that is beneficial for the site. They require the minimization of the system's "intrusion" in the site. This single requirement is in fact of paramount importance, because in order to achieve accurate position tracking several fiducial points have to be identified in the field of view of the user. Artificial landmarks (in the form of small labels) serve as easily identifiable fiducial points, and the site managers' requirements deal with the minimal introduction of such labels in the site. They also require minimizing the impact of the equipment (server computers, network access points, cables between access points and server etc.) on the environment.

3.6. System Administrators

The system administrators require tools for the easy installation and monitoring of the system. They also need tools for issue recording (reporting bugs to be fixed in subsequent releases of the system).

3.7. Table of General Requirements

Code Origin Description Comments Related

URG1 Site Visitors Multi-user capability

The system should support many visitors simultaneously.

URG2 Image quality The quality of the virtual objects displayed should be as high as possible when the visitor is standing still. At least simpler versions of the objects should be displayed while the user is moving.

URG3 Object registration

While the user is standing still, the virtual static objects to be drawn must be placed in their correct place and they should not appear to be moving.

URG4 Context related information delivery

The audio information presented to the visitor should be relevant to their surroundings i.e. only about objects in their vicinity or general information about their surroundings (unless explicitly requested by the user.)

URG5 Personalized information delivery

The audio information delivered to the visitor should be appropriate to them according to their characteristics or else it should have been directly requested by the user.

URG6 Information presentation structure

System should display upon request the groups to which the currently presented information belongs.

URG7 Information categorization for personalized delivery

The information in the system's databases should be categorized as appropriate for audiences matching the following attributes:

1. Age (child, young adult, adult)

2. Education (high school, university, graduate, expert archeologist)

3. Interests (general, reading, history, archeology, paleontology, science, athletics, fine arts); each interest has an associated level (0-10)

4. Expertise (novice, amateur, expert, professional archeologist)

Each piece of information may be valid for many values of an attribute.

URG8 Multilingual information delivery

The system should deliver information in a language selected by the user (the system should provide support for the most common languages).

URG9 Audio/visual navigational help

The system gives directions to user about how to move to certain places when asked. It also provides directions about how to move to the next area of interest when it has presented all currently relevant information to the visitor.

URG10 Tours The system should provide guided tours selected by the user. It should also allow him to move freely around the site and ask information about real or virtual objects.

URG11 Real-time adaptive behavior

The system should adapt to the visitor’s behavior during a guided tour and change the scheduled presentation as appropriate.

Related: URG10

URG12 Ease of Use The system should be very easy to use and should not require any learning curve.

URG13 Speech controls

The system understands simple voice commands by the user: select <object name>, Stop info, Erase [<object>|all], Refresh, More [about <object>], Where is [<object>|exit…]. The language available for speech processing should be English.

URG14 Point & click The system provides a traditional point & click user-interface available through the user’s wearable computer.

URG15 Real-time audio response

The system should respond correctly to unambiguous user input in real-time. It should offer help when the user enters wrong commands.

URG16 Content Creators

Authoring tool for virtual objects.

The system should provide a toolkit for creating or modifying virtual objects.

URG17 Authoring tool for audio/textual information.

The system should provide a toolkit for entering audio/textual information, organizing it in hierarchical structures, linking it to objects in the site and storing it to the database.

URG18 Site cartography

The system database should contain all the (static) objects of the site together with their exact position. Such objects are trees, monuments etc. It should also contain site geodesic information so that virtual objects can be accurately registered.

URG19 Intuitive user interfaces

The authoring tools should be easy to learn with no learning curve.

URG20 Scientists Remote access

System should allow remote access to its databases for retrieval of information.

URG21 Site Managers

Minimize system’s intrusion to the site.

No artificial landmarks on monuments. Minimum artificial landmarks in other places (garbage bins, trees etc.) No cables or large antennas should be visible.

URG22 System Admin.

Easy-to-use administration tools

Installation scripts for the system. Monitoring tools to check server status, mobile units’ status and network status.

4. Specific Requirements

In the following we describe specific requirements from each actor as they appear in each of the application scenarios outlined before. Those requirements that overlap in most application scenarios have been already classified as general requirements and described above.

4.1. On-site Virtual Guide

As already mentioned, this is the main application scenario of the ARCHEOGUIDE system, which serves as an on-site guide for cultural heritage sites offering augmented reality enhancements to the overall visitor experience. There are two possibilities for this scenario. In the first, the visitor picks up a mobile unit, enters personal information such as background and interests in the wearable computer and chooses the guided tour option. Then they proceed to follow a guided tour where the system delivers information to them, in a coherent order, about some objects in their vicinity, and then directs them to the next area in the site for which information is available. Finally, there is the possibility of a free-roaming tour where the visitor (equipped with the wearable computer) moves around freely in the site. The system displays virtual objects that are of interest to them, and delivers audio information about an area or an object after a direct request from the user.

Code Origin Description Comments Related

URS1 Site Visitors Lightweight mobile unit

The mobile unit (computer, HMD, earphone, camera, batteries etc.) should be lightweight so that typical visitors can carry it for more than 2 hours without getting tired.

URS2 Mobile unit autonomy.

The rechargeable batteries supporting the MU should allow 2 hours of autonomous operation.

URS3 Wireless infrastructure coverage

The wireless coverage of the area should be such so that user can roam freely in the whole area. When network connectivity is lost, user should be made aware of it. For such disconnected areas, all information should reside permanently in the MU databases so that user can still be guided by the system in it (through approximate GPS-based tracking.)

URS4 Image visualization rate

System should be as fast as possible in terms of frames per second it can support depending on the hardware etc. (goal is about 12 frames per second) even while user is moving.

URS5 Visitor speed System should be able to register virtual objects and render them in real-time in as good resolution as possible for user speeds up to 3 km/hour. The resolution should be the maximum that still allows correct registration and rendering at 12 frames per second.

Related: URS4

URS6 Audio delivery

As soon as user enters a new area and system has finished delivery of previous audio information, the system starts delivering audio information about the area and the objects in the area in a priority order (most important objects first.)

URS7 Occlusion handling

Real static objects (e.g. trees, small hills) that are between a visitor and a virtual object to be drawn should be modeled and recorded in the database so that they can be displayed giving the user a view of the scene with minimal occlusion errors.

URS8 Selection of virtual objects to be displayed

All objects of interest to the visitor should be displayed when their location is within 30m of the user.

Related: URS7

URS9 Taking photos

The user should be able to take a snapshot of their viewpoint and have the server print out a copy.

URS10 Manual selection of important topics or objects

System should provide to the user the option to select a number of topics or objects from the database about which to receive all available information. The time needed for such information about each object should be displayed to assist the user in their decisions.

URS11 Guided tours Information to be included for delivery should be selected based on indicated user time availability. The system takes into account user profile and any special requests for presentation, computes the time needed for the tour, and then discards information to be delivered based on information importance until tour time matches user’s available time. System should still allow user to obtain more information upon request. Information importance is computed on the fly.

URS12 Coherency and non-redundancy of information.

Information should never be repeated unless explicitly asked by the user. Information delivery should follow a coherent order within the guided tour.

URS13 Archeologists & Scientists

Multiple versions of the same virtual object supported and version control

In order to accurately record the progress of a restoration process, the system should be capable of recording and displaying according to user commands, different versions of the same virtual object (representing different periods or different points in a restoration process). The system should provide a sense of version control in such cases.

URS14 User interface for display of multiple versions

System should provide a menu with options about versions of selected virtual object and display in highest resolution possible the user selection.

Related: URS13

4.2. Off-site Virtual Guide

A subset of the functionality of the system, namely the system's databases together with the traditional user interfaces of the system and the content browsing tools, can be used to provide an off-site guide in a virtual world that is a recreation of the site. This virtual world will be created as a standard virtual reality application on a moderately powered workstation (no augmented reality problems are present here). This world can be explored in the same way that visitors in the real world can explore the site with the superimposed synthetic images. Such functionality is very useful for scientists located in remote locations doing research about the site who wish to find what information is available in digital format. The multimedia capabilities of the system (which can be accessed remotely through the web assuming certain authorization clearance, or through CD-ROMs on which the whole software with its content will reside) will enhance the site's visibility dramatically and facilitate scientific research.

Code Origin Description Comments Related

URR1 Site Managers Secure access

Content about the site should be accessed only by authorized persons.

URR2 Scientists & Archeologists

Multiple versions of the same virtual object supported and version control

In order to accurately record the progress of a restoration process, the system should be capable of recording and displaying according to user commands, different versions of the same virtual object (representing different periods or different points in a restoration process). The system should provide a sense of version control in such cases.

Related: URS13

URR3 Image Quality

The image quality of the displayed virtual world should be the highest possible for refresh rates of up to 15 frames per second.

URR4 User Interface for display of multiple versions

System should provide a menu with options about versions of selected virtual object and display in highest resolution possible the user selection.

Related: URS13

URR5 User Interface for navigation and content retrieval in the site.

System should provide an easy to learn user interface for navigating in the virtual world and accessing information about its objects. Emphasis is on traditional user interfaces such as point & click to highlight, select, edit objects and/or retrieve information about them.

4.3. Content Creation

The content creation application scenario defines the approach that is required in order to create (and maintain) the database required for the Site Information Server. As indicated in the corresponding scenario, the database consists of several distinct but interconnected modules:

Database Module and Responsible Actor:

Operating parameters and site meta-information (e.g. fiducial points): Site Manager
Scientific Site Documentation: Archeologists
Multimedia content linked to the site documentation: Content Creators
Guided Tour scenarios: Content Creators

Before we present in detail the user requirements gathered from the actors of the content creation application scenario, it is worth explaining in a little more detail the organization and structure of the information that ARCHEOGUIDE will record and present. The ARCHEOGUIDE system will follow closely the CIDOC core data standard for archeological sites and monuments. This standard greatly facilitates the tasks that the experts in the field (archeologists and other scientists) face in their day-to-day operations. It helps them retrieve and store information about monuments and archeological sites in a fast and meaningful way. In this way we can ensure a sound theoretical model for the site documentation that will help avoid wasting resources. Furthermore, adherence to an international standard will greatly facilitate the exploitation of the system at an international level, as well as data exchange or integration with other systems.

The theoretical framework about the structure of the information about cultural heritage sites in this standard is a set of four elements:

Archeological items
Archeological groups
Physical spaces
Physical groups

Archeological items are the fundamental pieces of archeology. They may be individual artifacts or ecofacts, components of sites or monuments (such as remains of walls) or even complete monuments, depending on the desired level of granularity for the data. Archeological items may be logically grouped to form archeological groups. An item may be part of zero or more groups, and such groups may be based on different logic (an item may belong to the group of artifacts from the Hellenistic era, and also belong to the group of items made of gold). Finally, archeological items occupy physical spaces, which in turn may be grouped into various groups based on different logical criteria.
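
Purely as an illustration of these four elements and their many-to-many relations (the class names are ours, chosen for the sketch, and are not CIDOC terminology):

    from dataclasses import dataclass, field

    @dataclass
    class PhysicalSpace:
        name: str

    @dataclass
    class ArcheologicalItem:
        name: str
        occupies: list = field(default_factory=list)    # PhysicalSpace instances

    @dataclass
    class ArcheologicalGroup:        # logical grouping, any criterion
        name: str
        members: list = field(default_factory=list)

    @dataclass
    class PhysicalGroup:             # grouping of physical spaces
        name: str
        spaces: list = field(default_factory=list)

    trench = PhysicalSpace("Trench B, layer 3")
    cup = ArcheologicalItem("Votive cup", occupies=[trench])
    # The same item may belong to several groups, each based on a different logic.
    hellenistic = ArcheologicalGroup("Hellenistic artifacts", members=[cup])
    gold_items = ArcheologicalGroup("Items made of gold", members=[cup])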

Next we present the minimum structuring of information required by the ICOM-CIDOC standard (the extensions required by the content creators of ARCHEOGUIDE are described in detail in the table below).

1. Names & References

1.1 Ref. Number

1.2 Name of Monument or Site
1.3 Date of Compilation
1.4 Date of Last Update
1.5 Ref. Originator
1.6 Cross References to Related Records (set)
   1.6.1 Ref. Number
   1.6.2 Relationship Qualifier
   1.6.3 Ref. Originator
1.7 Cross References to Archeological Collections and Artifacts (set)
   1.7.1 Ref. Number
   1.7.2 Ref. Originator
1.8 Cross References to Documentation (set)
   1.8.1 Ref. Number
   1.8.2 Doc. Type
   1.8.3 Originator of Reference
1.9 Cross References to Archeological Events (set)
   1.9.1 Ref. Number
   1.9.2 Event Type
   1.9.3 Start Date
   1.9.4 End Date
   1.9.5 Ref. Originator

2. Location
2.1 Administrative Location (set)
   2.1.1 Country or Nation
   2.1.2 Geo-political Unit
   2.1.3 Administrative sub-division
2.2 Site Location
2.3 Address
   2.3.1 Name for Address Purposes
   2.3.2 Street Number
   2.3.3 Street Name
   2.3.4 Locality
   2.3.5 City
   2.3.6 Zip code
2.4 Cadastral Reference/Land Unit
2.5 Cartographic Reference
   2.5.1 Cartographic Identifier
   2.5.2 Spatial Ref. System
   2.5.3 Topology
   2.5.4 Qualifier
   2.5.5 Sequence Number
   2.5.6 X, Y, and Z co-ordinates

3. Type
3.1 Monument or Site Type
3.2 Monument or Site Category

4. Dating
4.1 Cultural Period
4.2 Century
4.3 Date Range
   4.3.1 From Date
   4.3.2 To Date
4.4 Scientific & Absolute Dates
   4.4.1 Date
   4.4.2 Method

5. Physical Condition
5.1 Condition
5.2 Date Condition Assessed

6. Designation/Protection Status
6.1 Designation/Protection Type
6.2 Date of Designation/Protection
6.3 Ref. Number
6.4 Ref. Originator

In the next diagram we show the logical structure of the database containing information to be presented to the site visitors. The site is divided into areas (and connections via roads, bridges etc.), within which there are a number of objects (monuments, houses, trees, rocks and other physical items), each of which may contain a number of other objects. An object may also be contained in a number of other objects. Finally, the objects in the database may be grouped in categories (e.g. stone tools), in which case an object may be contained in more than one category. Each object, area, site, and category has attached information (available in many languages) that has descriptors, or attributes, indicating the categories of users for which its presentation is appropriate.

Finally, we present the user requirements for this scenario in a tabular format.

Code Origin Description Comments Related

URC1 Site Managers Site Customization

Site Managers should be able to influence (through the Authoring Tool) all the operating parameters and the site metadata required to customize the ARCHEOGUIDE system to their specific site and its respective requirements.

URC2 Archeologists Information Organization about the site to follow ICOM-CIDOC standards

The organization and structure of the database should follow closely the Draft International Core Data Standard for Archeological Sites and Monuments.

URC3 Site Documentation

Archeologists (and other scientific experts) should be able to thoroughly document the site (including places, monuments, museum exhibits, etc).

URC4 Site Exploration

The Authoring tool should provide an interface simple yet powerful enough, so that scientists (e.g. visiting archeologists) can make use of the same tool to explore the realm of scientific information that has been stored in the database.

URC5 Content Creators

Multimedia and 3D tools

Many kinds of multimedia and 3D (VRML) information can be added to the system. Therefore Content Creators need to have access to a large and appropriate set of tools that will assist them in the task of preparing the multimedia / 3D information for inclusion in the system

URC6 Multimedia / 3D linking

The multimedia / 3d content has to be embedded in the database and associated with the appropriate parts of the site documentation

URC7 Content Access

The content creator should be able to view and (again, if possible) edit the content they manage. Especially for 3D models, this means that the ability to view them in 3D is highly desirable. This should be possible in a manner following closely the documentation system of the site.

URC8 Multimedia / 3D information categorization

Audio/textual information to be presented to the visitors should contain attributes indicating appropriate categories of users. Null attributes match all categories. A piece of information will be included for presentation to the user if the values of its attributes match all the attributes of the user profile.

Related: URG7

URC9 Guided Tours The content creator should be able to prepare sequences of database objects that shall constitute guided tours for some targeted audience. It should be easy to create, view and review the generated sequences (guided tours).

Related: URC8

URC10 Information Coherency

The information delivered to the visitor should be coherent for all possible visitor profiles.

Related: URS12

URC11 Multilanguage support

The system should facilitate (wherever possible) the content creator in translating a series of objects, or even a complete tour, into some other language.

URC12 Authoring tools for creating & editing visual models

Toolkit should provide what-you-see-is-what-you-get functionality; produce optimized VRML code of manageable size for inclusion in the wearable computer’s database.

URC13 Content browsing tool for audio/text information

Toolkit should allow the user to select from an organized hierarchy of information. Toolkit should automatically detect inconsistencies of information and prompt the user in the following cases:

1. A change of information about an object is not cascaded to all supported languages.

2. Information entered about an object that is part of another object matches broader user profiles than its parent objects.

5. Technical Constraints / Measurable Objectives

In the following we describe the technical constraints the system components must follow in order to satisfy the user requirements and provide all the services with acceptable quality and performance. We further list the measurable objectives that are implied by the user requirements.

5.1. Mobile Unit

5.1.1. Memory / Processing power

The mobile unit is the central component of the Archeoguide system. Therefore, a number of different tasks have to be performed by this computer, e.g.:

Interaction with the user
Tour scheduling
Position and orientation tracking
Rendering of 3D models
Decompression and visualization of MPEG video streams
Decompression and playback of MP3 audio streams

Most of these tasks need a lot of resources in terms of processing time and memory. To provide high quality and fast feedback to user requests, the mobile unit should be equipped with a fast processor and plenty of memory.

5.1.2. Wireless LAN bandwidth / Hard disk space

Audio and video streams need a lot of network bandwidth. Most probably, with the existing WLAN technologies it will not be possible to transfer video on the fly with sufficient quality. Therefore, we need to put as much multimedia data as possible onto the built-in hard disk of the mobile unit. Caching strategies can be used to store the most frequently and most recently requested data on this disk. Prefetching of the data most probably needed next can be implemented by analysis of the tour schedule and by knowledge of the distribution of points of interest over the historical site. In conclusion, we need a combination of high WLAN bandwidth and plenty of hard disk space to provide the user with high quality multimedia data and fast feedback to user requests.
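
A minimal sketch of the caching idea for the mobile unit's disk, assuming a least-recently-used eviction policy and a fixed size budget (both are assumptions for this sketch, not the project's chosen strategy):

    from collections import OrderedDict

    class MediaCache:
        """Keeps recently requested multimedia files on the mobile unit's hard disk."""
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.entries = OrderedDict()              # file name -> size in bytes

        def fetch(self, name, size, download):
            if name in self.entries:                  # cache hit: mark as recently used
                self.entries.move_to_end(name)
                return
            download(name)                            # e.g. transfer the file over the WLAN
            self.entries[name] = size
            while sum(self.entries.values()) > self.capacity:
                self.entries.popitem(last=False)      # evict the least recently used file

    cache = MediaCache(capacity_bytes=2_000_000_000)
    cache.fetch("temple_reconstruction.mpg", 350_000_000, download=lambda name: None)

Prefetching would simply call fetch() ahead of time for the items that the tour schedule predicts will be needed next.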

5.1.3. Rendering / Visualization

The mobile computer has to render the virtual 3D models used to augment the real scene. Furthermore, the computer has to decompress and display all video streams. Because both tasks need a lot of processing power, it is advisable to have a 3D acceleration card that is capable of rendering 3D models and handling MPEG video streams in hardware. The VR toolkit "Avalon" used for rendering is based on OpenGL. This implies that an OpenGL driver must be available for the 3D accelerator. For decompression of MPEG video streams, we will use the DirectX (DirectShow) application programming interface. Therefore, a DirectX driver must be available in order to use any hardware decompression features of the accelerator card.

5.1.4. Audio output/input

The mobile computer has to handle simultaneous audio input and output, i.e. it needs a suitable sound card. Audio input is needed to control the Archeoguide system by speech. Audio output will be used to play acoustical information. Because audio data may be compressed by data reduction techniques like MP3, the computer has to do decompression work. It is advisable to have a sound card that can do this decompression in hardware. For all audio operations, we will use the DirectX (DirectSound and DirectShow) application programming interface. Therefore, a DirectX driver must be available for the sound card in order to use any hardware acceleration features like mixing of multiple audio streams or decoding of MP3 streams.

5.1.5. Connectors / Slots

The Archeoguide system consists of many different hardware components. Most of these components have to be connected to the mobile unit:

Video cameras: There are several ways to connect video cameras to a computer, e.g. FBAS and S-Video connectors, USB and IEEE 1394 ports, and PCMCIA slots.

Tracking systems: For position and orientation tracking, we will use a hybrid tracking system consisting of GPS, video and inertial trackers. These trackers get attached to the computer via RS232 and USB ports or PCMCIA slots.

Connectors for human interface devices like keyboards, mice and joysticks. These devices need PS/2, RS232, game or USB ports.

VGA connector to connect the head mounted display.
A PCMCIA slot for the wireless LAN card.
Connectors for power supply, external CD-ROM drives etc.

5.1.6. Data formats

The following data formats will be used by the rendering system:

VRML97 for 3D models
MPEG for video streams
WAV and MP3 for audio streams

5.2. Server Hardware

5.2.1. Multi-processor system

Tightly coupled processing units (shared memory) are not a must, since there are no algorithms that must distribute processing to the various processors and perform a lot of fork-join operations; in fact, each user request can be handled as a single-threaded process on a single processor. However, the need for large amounts of available memory (to edit perhaps large multimedia objects, and to respond quickly to database requests) is a stronger argument for choosing a shared-memory multiprocessor. Systems in this category include the IBM SP/2 multi-processor, which is a network of distributed-memory computers (running AIX) connected via a high-speed network switch, with a shared-memory simulator running on top to allow shared-memory applications to run. Other true shared-memory multiprocessors are available from Sun Microsystems (more than 16 high-performing processors, and a total shared memory easily exceeding 4 GB), SGI, Intel (with top-performing Merced processors) and others.

5.2.2. Memory capacity

The main memory available should exceed 4 GB for fast handling of database requests.

5.2.3. Network Cards

The network cards installed on the computer should allow the highest possible bandwidth on the server side. The network cards should allow wireless access as well as standard Ethernet connectivity.

5.2.4. Video Card

The highest-performing AGP video card should be chosen, as it will be needed if rendering software is to run on the machine.

5.2.5. Hard Disk

Today's hard disks offer a lot of space at very low prices. Therefore we can easily select, for example, a RAID with many hundreds of GB of storage and access times in the order of 9 ms or less.

5.3. Measurable Objectives

1. The position tracking should enable the end user to move naturally and not mechanically.

2. Localisation of artificial landmarks capability.

3. Localisation of expected natural landmarks (buildings, mountains) capability.

4. Virtual Image Visualisation Rate. It must be greater than 12 frames per second when the user is not moving and greater than 8 frames per second when the user is moving.

5. Virtual Image Visualisation Quality. The system should be able to render at least 500 polygons per frame.

6. Maximum user speed supported (in Km/h) in order to achieve real-time vision-based position and orientation tracking, real-time rendering and combination of the real scene with additional data. The system should support the speed of a normal walking person, i.e. 3-5 Km/h.

7. Maximum Speed of user-orientation changes supported in order to achieve real-time vision-based position and orientation tracking and real-time combination of real scene with additional data.

8. Immediate system response to user interactions. It is calculated as the number of msec required from the user request until the system response. The interaction of the application with the end user has very strict performance requirements. In particular, no more than a 5-second response time is acceptable in 90% of cases.

9. Reliability of voice recognition. The percentage of wrong system responses to voice user commands should be less than 1%.

10. Reliability of gesture recognition. The percentage of wrong system responses to user commands given through gestures should be less than 5%.

11. Overall accuracy of system responses. The erroneous answers of the ARCHEOGUIDE system should be less than 1%.

12. User Visual Comfort. There are a variety of issues that affect physical comfort. One is the weight of the ARCHEOGUIDE Mobile Unit (which includes the nose weight from the eyeglasses). Commercial HMDs range in weight from 8 ounces to 80 ounces; for safety reasons, HMDs used by young children should be as light as possible.

13. The ARCHEOGUIDE Mobile Unit should be appropriate for numerous groups of people, such as: a) the elderly, b) kids, c) people with special needs and d) ordinary people.

14. The ARCHEOGUIDE Mobile Unit should be usable by specific groups of people, such as people with implanted technical devices (heart pacemakers etc.).

15. The ARCHEOGUIDE Mobile Units to have zero interference between them.

16. The ARCHEOGUIDE Mobile Unit to be autonomous for at least 2 hours through the use of rechargeable batteries.

17. The ARCHEOGUIDE Mobile Unit rechargeable batteries to be either recyclable or to have the minimum negative environmental impact.

18. The ARCHEOGUIDE Mobile Unit to have zero impact on human health, based on current research results.

19. The visualization display to be usable by specific groups of people, such as people wearing eyeglasses.

20. The display to support high colour depth and high resolution.

21. The ARCHEOGUIDE Mobile Unit, and especially the display units, to provide to the end user a visualization of the information content without losing the feeling of the real world. The use of see-through HMDs would guarantee satisfaction of this requirement.

22. The head mounted display should offer wide field-of-view (at least 45 degrees horizontal), long eye relief (3/4"), and large exit pupils (at least 12mm).

23. The Site Information Server system to be developed in such a way as to be independent of the underlying hardware solutions. The Information Server will communicate with the underlying hardware through a well-defined application interface.

24. The Information Database Management unit to support either local or remote database updating through the network.

25. The updating of the Information server to be done either locally or remotely through a user-friendly front-end application. (Content creator)

26. The Information Server to be easily upgradeable for future additions.

27. The signs needed for position tracking will be selected to have as little impact as possible on the site and on the environment.

6. Conclusions

This deliverable detailed the user requirements gathered from the various actors of the ARCHEOGUIDE system. We structured the presentation of these requirements around the three application scenarios of ARCHEOGUIDE. It is important to explicitly state that these requirements may be refined in the future as a better understanding of the system is obtained. Furthermore, certain quantified requirements were made possible after consulting with the technology providers of the project, who contributed the numbers needed for quantifying a requirement whenever possible.

7. Appendix A – Survey of Related Applications and Technologies

7.1. Hardware/Software and System Survey

7.1.1. Wearable computers / Notebooks

The central component of the Archeoguide mobile unit is a computer. The hardware equipment of this computer depends on the following tasks that have to be performed by it:

Interaction with the user: The computer has to handle all interactions with the user. Interaction means not only sophisticated technologies like speech and gesture recognition, but also conventional interaction by keyboard, mouse, joystick and graphical user interface.

Tour scheduling: Based on the user profile and the order in which the user visits points of interest, the computer has to calculate which information to show. For example, if the user is especially interested in statues, he will mainly be presented with information about statues on the historical site. Furthermore, given a position of the user on the site by the tracking system, the computer has to determine which points of interest are at this position and which multimedia data are available (a small lookup sketch follows this list).

Position and orientation tracking: To seamlessly integrate virtual objects into real scenes, the Archeoguide system has to know the viewing position and orientation of the user. Furthermore, the kind of information that is shown to the user depends on his position on the historical site. For example, if he is standing in front of the Zeus temple, he wants to get information about this temple. For the Archeoguide system, we will use a hybrid tracking system consisting of several different tracking technologies. The wearable computer has to calculate the position and orientation based on the signals provided by these trackers. To connect the trackers to the computer, we need suitable connectors at the computer, for example RS232 ports, USB ports or PCMCIA slots.

Audio input and output: The computer has to handle simultaneous audio input and output, i.e. it needs a suitable sound card. Audio input is needed to control the Archeoguide system by speech. Audio output will be used to play acoustical information. Because audio data may be compressed by data reduction techniques like MP3, the computer has to do decompression work. It would be desirable to have a soundcard that can do this decompression by hardware.

Graphical input: For video based tracking and gesture recognition, we need to connect one or more cameras to the computer. This means that the computer must have connectors to get the video images for image processing, for example FBAS connectors, S-Video connectors, USB ports, IEEE 1394 ports, or PCMCIA slots.

Graphical output: The computer has to render the virtual 3D models used to augment the real scene. Furthermore, the computer has to decompress and display all video streams. Because both tasks need a lot of processing power, it would be desirable to have a 3D acceleration card that is capable of rendering 3D models and handling MPEG video streams by hardware.
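
A small sketch of the point-of-interest lookup mentioned under "Tour scheduling" above: given the tracked position, return the points of interest within a fixed radius, ranked by the weight the visitor profile gives to each topic (the coordinates, radius and profile weights are illustrative assumptions):

    import math

    points_of_interest = [
        {"name": "Temple of Zeus", "pos": (120.0, 45.0), "topic": "architecture"},
        {"name": "Stadium entrance", "pos": (310.0, 80.0), "topic": "athletics"},
    ]

    def nearby_pois(user_pos, radius_m=30.0, interests=None):
        """Return POIs within radius_m of the tracked position, most interesting first."""
        interests = interests or {}
        hits = []
        for poi in points_of_interest:
            dx = poi["pos"][0] - user_pos[0]
            dy = poi["pos"][1] - user_pos[1]
            if math.hypot(dx, dy) <= radius_m:
                hits.append((interests.get(poi["topic"], 0.0), poi))
        return [poi for _, poi in sorted(hits, key=lambda pair: -pair[0])]

    print(nearby_pois((115.0, 40.0), interests={"architecture": 0.8, "athletics": 0.3}))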

For the Archeoguide system, we have the choice between two types of computers: wearable computers and notebooks. Wearable computers are especially designed for AR applications. Notebooks are cheap (compared to wearable computers) and widespread. On the following pages, an overview is given of wearable computers and notebooks that are available for the Archeoguide system.

Xybernaut Mobile Assistant IV
Manufacturer: Xybernaut (http://www.xybernaut.com/)
Model: MA IV
Processor: Pentium MMX Processor 200 or 233 MHz
Cache: 512 KB level 2
Operating System: Windows 9x, NT
Graphics: NeoMagic MagicGraph 128ZV with 1 MB Video RAM
Digital Audio: 16 bit playback and record
HD: 2.1 or 4.3 GB
RAM: 32, 64, 96 or 128 MB
PC card slots: 2 Type I or II, one Type III
Interfaces: Head mounted or flat panel display, USB, Power, Port Replicator
HMD: Mono colour display (640x480), microphone, ear piece speakers, video camera (optional, colour)
Dimensions: 11.7 x 19 x 6.3 cm
Weight: 795 g
Price: $6,971

VIA II PC
Manufacturer: VIA (http://www.flexipc.com/)
Model: II PC
Processor: Cyrix MediaGX 166 MHz
Operating System: Windows 98
Graphics: SVGA
HD: 3.2 GB
RAM: 64 MB
PC card slots: 2 Type I or II, one Type III
Interfaces: RS232 serial, USB, Power, Port Replicator
Dimensions: Computer assembly: 9.75" (L) x 3.125" (W) x 1.25" (H)
Weight: Computer: 21 oz.; 50 Watt-hour battery: 16 oz.; belt assembly: 5 oz.

Interactive Solutions Mentis
Manufacturer: Interactive Solutions (http://www.info-isi.com/)
Model: Mentis
Processor: Pentium MMX Processor 166 MHz
Cache: 256 or 512 KB level 2
Operating System: Windows 9x, NT
Graphics: VGA, SVGA, XGA for CRT or LCD, 16 bit colour depth
Digital Audio: 16 bit playback and record
HD: 6.4 GB
RAM: 128 MB
PC card slots: 2 Type I or II, one Type III
Interfaces: external LCD, serial port, parallel port, PS/2 keyboard and mouse
HMD: Mono greyscale display (640x480), microphone, ear speakers
Dimensions: Processing unit: 7.5" x 5.5" x 1"; battery pack: 7.5" x 5.5" x 0.75"

Phoenix Falcon
Manufacturer: Phoenix Group (http://www.ivpgi.com/)
Model: Falcon
Processor: Pentium MMX Processor 233 MHz
Operating System: Windows 9x, NT
Graphics: 2 MB Video RAM, truecolor
Digital Audio: 16 bit playback and record
HD: 6 GB
RAM: 16 to 64 MB
PC card slots: 2 Type I or II, one Type III
Interfaces: serial port, parallel port, PS/2 keyboard and mouse, video; includes speaker and microphone
Display: 8.4" LCD, 640x480, daylight readable, touch screen
Dimensions: 11.99" W x 8.72" H x 3.46" D
Weight: 8 lbs. with battery

FieldWorks 7000
Manufacturer: FieldWorks (http://www.field-works.com/)
Model: 7000
Processor: Pentium MMX Processor 233 MHz
Cache: 512 KB level 2
Graphics: SVGA, 24 bit colour depth, 2 MB Video RAM
Digital Audio: 16 bit playback and record
HD: 2.1 GB (optional 6.4 GB)
RAM: 32 MB (optional up to 128 MB)
PC card slots: 1 Type I-IV slot
Interfaces: 2 serial ports, 1 parallel port, PS/2 keyboard and mouse, Port Replicator
Display: 10.4", 800x600, colour

Matrox 4Sight
Manufacturer: Matrox (http://www.matrox.com/)
Model: 4Sight
Processor: Cyrix MediaGX 166 MHz
Operating System: Windows NT
Graphics: 1280x1024 (8 bit colour depth), 1024x768 (16 bit colour depth)
Digital Audio: 16 bit playback and record
HD: 8 GB (optional), 2-144 MB flash disk
RAM: 32, 64, or 128 MB
PC card slots: 2 Type I or II, one Type III
Interfaces: 2 serial ports, 1 parallel port, PS/2 keyboard and mouse, VGA (standard and TFT flat panel), video, 3 IEEE 1394 (FireWire, iLink) ports, Network
Dimensions: 20.8 x 7.5 x 18.4 cm

Sony picture books with integrated camera
Manufacturer: Sony (http://www.ita.sel.sony.com/products/pc/notebook/)
Models: PCG-C1XS / PCG-C1X
Processor: Pentium II Processor 400 MHz (PCG-C1XS) / 266 MHz MMX (PCG-C1X)
Cache Memory: 256 KB integrated on-die level 2 (PCG-C1XS) / 256 KB MultiBank DRAM (PCG-C1X)
Operating System: Windows 98
HD: 12 GB (PCG-C1XS) / 4.3 GB (PCG-C1X)
Built-in Camera (CCD, resolution, lens): 1/6" CCD, 410,000 pixels, f = 2.8 mm / F 2.8 (PCG-C1XS); 1/6" CCD, 270,000 pixels, f = 2.8 mm / F 2.8 (PCG-C1X)
LCD: 8.9" XGA-width (1024 x 480) TFT with XWIDE display technology
Graphics: NeoMagic MagicMedia 256AV with 2.5 MB Video RAM, 128 bit accelerator and MPEG playback accelerator
Digital Audio: 16 bit playback and record
RAM: 64 MB, expandable to 128 MB
Modem: V.90 56K modem (PC-Card)
PC card slots: Supports one Type II card; Zoomed Video and CardBus support
Infrared: Supports 4 Mbps, 1.1 Mbps, and 115 kbps IrDA standard
Other Interfaces: VGA output, USB interface, i.LINK (IEEE-1394) interface, RJ-11 phone jack, audio in, headphone output, microphone
Supplied Accessories: USB floppy disk drive, standard Lithium-Ion battery, AC adapter, VGA monitor adapter, phone cable
Optional Accessories (e.g.): External 4x4x20x max. CD-RW drive (PCGA-CDRW51), External 16X max. CD-ROM drive (PCGA-CD51)
Price: $2,299.99 ESP (U.S. Retail) (PCG-C1XS) / $1,799.99 ESP (U.S. Retail) (PCG-C1X)


VAIO Z505 SuperSlim Pro Notebooks
Manufacturer: Sony (http://www.ita.sel.sony.com/products/pc/notebook/)
Model: PCG-Z505HSK / PCG-Z505HS (differing values below are given in this order)
Processor: Intel Pentium III Processor 500 MHz
Cache Memory: 256 KB integrated on-die level 2
Operating System: Microsoft Windows 2000 Professional pre-installed / Microsoft Windows 98 Second Edition
HD: 12.0 GB
LCD: 12.1" XGA active matrix (1024 x 768)
Graphics: NeoMagic MagicMedia 256AV with 2.5 MB video RAM and 128-bit accelerator, MPEG playback acceleration
Digital Audio: 16 bit playback and record
RAM: 128 MB SDRAM (expandable to 256 MB max.)
Modem: Integrated V.90 56K modem
Battery: Lithium-Ion battery; supports standard Lithium-Ion battery (supplied) or triple-capacity Lithium-Ion battery (optional)
PC card slots: Supports one Type II card, CardBus support
Infrared: Supports 4 Mbps, 1.1 Mbps, and 115 kbps IrDA standard
Other Interfaces (on main unit): Port replicator interface, USB (2), i.LINK (IEEE 1394) S400 interface, audio in, headphone out, RJ-11 phone jack, Memory Stick, RJ-45 10Base-T/100Base-TX Ethernet
Other Interfaces (on port replicator, supplied): Parallel, serial, VGA output, i.LINK (IEEE 1394) S400, USB
Supplied Accessories: External USB floppy disk drive, standard Lithium-Ion battery, AC adapter, port replicator
Optional Accessories (e.g.): External 4x4x20x speed (max) CD-RW drive (PCGA-CDRW51), external 16x speed (max) CD-ROM drive (PCGA-CD51), additional standard battery (PCGA-BPZ51), triple-capacity battery (PCGA-BPZ52), USB mouse (PCGA-UMS1), monitor adapter (PCGA-DA5), Memory Stick media: 4 MB (MSA-4A), 8 MB (MSA-8A), 16 MB (MSA-16A), 32 MB (MSA-32A), 64 MB (MSA-64A)
Price: $3,199.99 ESP (U.S. Retail) / $2,999.99 ESP (U.S. Retail)


VAIO XG Notebooks
Manufacturer: Sony (http://www.ita.sel.sony.com/products/pc/notebook/)
Model: PCG-XG19 | PCG-XG18 | PCG-XG9 (differing values below follow this column order)
Processor: Intel Pentium III Processor 650 MHz feat. Intel SpeedStep technology | Intel Pentium III Processor 500 MHz
Operating System: Microsoft Windows 98 Second Edition
HD: 18.1 GB
LCD: 14.1" XGA (1024 x 768) TFT screen | 13.3" XGA (1024 x 768) TFT screen
Graphics: NeoMagic MagicMedia 256XL+ with 6.0 MB VRAM, 128-bit accelerator and MPEG playback acceleration
Digital Audio: Hardware MIDI, 3D Surround, Sound Blaster Pro compatible, Yamaha PCI audio accelerator
RAM: 128 MB SDRAM (expandable to 256 MB max.)
CD-RW Drive: 4X CD-R write, 4X CD-RW write, 20X CD-ROM | - | -
DVD-ROM Drive: 4X max. removable DVD-ROM drive with DVD movie playback capability
Modem: Integrated V.90 56K modem
Battery: Lithium-Ion battery, supports dual battery operation
PC card slots: Supports 2 Type II cards or one Type III card, CardBus support
Infrared: Supports 4 Mbps, 1.1 Mbps and 115 kbps IrDA standard
Other Interfaces: VGA monitor, USB, RJ-11 phone jack, i.LINK (IEEE 1394) S400 interface, mic-in, headphone, port replicator
Supplied Accessories (e.g.): Rechargeable battery, CD-RW drive, DVD-ROM drive, floppy drive | Rechargeable battery, AC adapter, weight saver
Optional Accessories (e.g.): 10 GB hard drive (PCGA-HDX10), XG Dock, additional battery, USB mouse | CD-RW drive (PCGA-CDRWX1), privacy screen (PCGA-FLX1), 10 GB hard drive, XG Dock (PCGA-PSX1), additional battery (PCGA-BP71), USB mouse (PCGA-UMS1)
Benefits: Hot-swappable multi-bay for CD-RW, DVD-ROM, floppy, additional hard drive or additional battery
Price: $3,999.99 ESP (U.S. Retail) | $3,499.99 ESP (U.S. Retail) | $3,299.99 ESP (U.S. Retail)


IBM-Camcorder

Global viewage

IBM's WorldBoard puts entire libraries of travel information in the palm of your hand. Looking through the GPS-assisted viewer, select a wireless network channel; the real-world things you see will be overlaid with text and graphics. Get tips on Madrid's restaurants from the Michelin Guide, or read the travel notes your friend took when he vacationed in Morocco. In your own neck of the woods, take a virtual tour of places you never have time to visit. When you're ready to sound off on what you've seen, just plug the viewer into your PDA and mark up the world your own way.(Wired Jan. 2000, page 60)

* Available 2002. WorldBoard: $400.

Source: IBM, http://www.research.ibm.com/


Video Capturing Cards

For video tracking (one of the tracking solutions planned for Archeoguide) and video-based gesture recognition it is necessary to connect video cameras to the computer. There are several different ways to do this:

USB: USB is a serial bus for peripheral hardware components which is available on most modern PCs. There are a few web cams that have USB connectors. Because the bandwidth of USB is quite small, this solution yields low frame rates, low resolutions and low quality.

IEEE 1394 (FireWire/iLink): IEEE 1394 is a fast serial bus. It is quite new and not yet widespread. Apple iMacs and notebooks, as well as camcorders from Sony, are already equipped with IEEE 1394 connectors.

Video capturing cards: Most cameras provide the video signal as an analog FBAS (composite) or S-Video signal. Special video capturing cards are required to digitize these signals. Because wearable computers and notebooks do not have slots for PCI cards, PCMCIA cards have to be used. Of course, the capturing results of PCMCIA cards have lower video quality (in terms of frame rate and resolution) compared to PCI cards. Only a few video capturing cards are available for PCMCIA slots.
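To make the bandwidth limitation concrete, the following rough estimate (a Python sketch; the assumption of roughly 8 Mbit/s usable out of USB 1.1's nominal 12 Mbit/s is illustrative and depends on camera and driver) shows the upper bound on uncompressed frame rates at common resolutions:

    # Rough feasibility estimate for uncompressed video over USB 1.1.
    # Assumption: ~8 Mbit/s of the nominal 12 Mbit/s is usable for the stream.
    USABLE_BANDWIDTH_BPS = 8_000_000  # bits per second (assumed)

    def max_frame_rate(width: int, height: int, bits_per_pixel: int) -> float:
        """Upper bound on uncompressed frames per second for the given format."""
        bits_per_frame = width * height * bits_per_pixel
        return USABLE_BANDWIDTH_BPS / bits_per_frame

    for w, h in [(640, 480), (320, 240), (160, 120)]:
        print(f"{w}x{h} @ 24 bpp: at most ~{max_frame_rate(w, h, 24):.1f} fps")

At 640x480 and 24 bits per pixel this is barely one frame per second, which is why USB cameras of this generation rely on low resolutions and compression.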

Manufacturer: Nogatech (http://www.nogatech.com/)
Model: CaptureVision
Operating Systems supported: Windows 9x
Video Display standard: 320 x 240
Video Standards supported: NTSC / PAL
Max. resolution: 640 x 480
Video Inputs: 1 composite video (or S-video with special add-on cable)
Software included: Video Capture Software for Windows 9x
Requirements: Pentium notebook with 16 MB of RAM, SVGA 640x480 display with a minimum of 256 colours, Microsoft Windows 95/98
Price range: $98.95 - $129.95


7.1.2. HMDs / Displays

Introduction

Head-mounted displays will be used in the Archeoguide project to display all graphical information to the user. They consist of video or computer displays mounted directly in front of the user's eyes. HMDs make it possible to combine the real and the virtual world; they provide the user with the impression of being immersed in a more or less simulated three-dimensional environment.

Classification of HMDs

A general method of classifying HMDs is by whether they are occluded or see-through displays. An occluded (or inclusive, closed-view) display is one where only the image produced by the display is visible to the viewer; standard closed-view HMDs do not allow any direct view of the real world. A see-through (or augmented vision) display is one where the viewer sees both the image produced by the display and the ambient scene.

Another method of classifying HMDs is by how many images are presented and to which eyes. The three classes are monocular, biocular, and binocular. A monocular display presents one image to one eye. A biocular display presents one image to both eyes (both eyes see the same image). Finally, a binocular display presents different images to each eye. Only with a binocular display can true stereoscopic images be presented.

An optical see-through HMD is a device that allows the real scenery to be overlaid with virtual, computer-generated objects. Ideally, it would appear to the user that the real and the virtual worlds coexist. Most existing optical see-through HMDs reduce the amount of light from the real world, so they act like a pair of sunglasses when the power is cut off. Optical see-through HMDs work by placing optical combiners in front of the user's eyes. These combiners are partially transmissive, so that the user can look directly through them to see the real world. The combiners are also partially reflective, so that the user sees virtual images bounced off the combiners from head-mounted monitors. The result is a combination of the real world and a virtual world drawn by the monitors. In a see-through HMD with narrow field-of-view displays the optical combiners add virtually no distortion, so the user's view of the real world is not warped. Optical see-through HMDs do not degrade the user's view of the real world; only the graphic images are shown at the resolution of the display device. A basic problem with optical see-through is that the virtual objects do not completely obscure the real objects, because the optical combiners allow light from both virtual and real sources. No existing optical see-through HMD is able to block incoming light selectively, which makes the virtual objects appear ghostlike and semitransparent. This appearance damages the illusion of reality.

Video see-through HMDs work by combining a closed-view HMD with one or two head-mounted video cameras. The video cameras provide the user's view of the real world. This video is combined with graphic images created by a scene generator, blending the real and the virtual image. The result is sent to the monitors in front of the user's eyes in the closed-view HMD. Video blending limits the resolution of what the user sees, both real and virtual, to that of the display devices; with current displays the resolution is far from the resolving power of the fovea. If the power is cut off, the user is effectively blind, which is a safety concern in some applications. With video see-through, the user's eyes effectively reside at the position of the video cameras, which creates an offset between the cameras and the real eyes. This difference between the locations of the cameras and the eyes introduces displacements between what the user sees and what he expects to see. This can be avoided through the use of mirrors that create another set of optical paths mimicking the paths directly into the user's eyes; using those paths, the cameras see what the user's eyes would normally see without the HMD. Video see-through is far more flexible about how it merges the real and the virtual images. Since both the real and the virtual are available in digital form, video see-through compositors can, on a pixel-by-pixel basis, take the real image, the virtual image, or some blend between the two to simulate transparency. Due to this flexibility, video see-through techniques make it possible to produce more compelling environments.


[Figure: Optical See-Through HMD Design]


With video see-through it is possible to minimize the delay or temporal mismatch when blending the virtual into the real world, by delaying the video of the real world to match the delay of the virtual image stream. Video blending also provides a good source for tracking the user's head location: the digitized image of the real scene.
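As an illustration of the pixel-by-pixel compositing flexibility described above, the sketch below (Python with NumPy, synthetic frame data) blends a camera frame and a rendered virtual frame using a per-pixel alpha mask; it is a schematic of the idea, not the compositing pipeline of any particular HMD.

    import numpy as np

    def composite(real_frame, virtual_frame, alpha):
        """Blend a rendered virtual frame over a camera frame.

        real_frame, virtual_frame: HxWx3 uint8 images of identical size.
        alpha: HxW float array in [0, 1]; 1 = fully virtual, 0 = fully real.
        Intermediate values simulate transparency on a per-pixel basis.
        """
        a = alpha[..., None]                      # broadcast over colour channels
        blended = (a * virtual_frame.astype(np.float32)
                   + (1.0 - a) * real_frame.astype(np.float32))
        return blended.clip(0, 255).astype(np.uint8)

    # Synthetic 480x640 camera frame and a virtual frame that is only drawn
    # where the renderer produced geometry.
    real = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    virtual = np.zeros_like(real)
    virtual[100:200, 100:300] = (200, 180, 120)   # a "virtual wall"
    alpha = np.zeros((480, 640), dtype=np.float32)
    alpha[100:200, 100:300] = 0.7                 # semi-transparent overlay
    out = composite(real, virtual, alpha)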

Sources

“A Survey of Augmented Reality” by Ronald T. Azuma (http://www.cs.unc.edu/~azuma/)

“Wide-Area-tracking” (http://www.cs.unc.edu/~tracker/)

“A Virtual Retinal Display For Augmenting Ambient Visual Environments”, Chapter 4: Survey of Helmet Mounted Displays (http://www.hitl.washington.edu/publications/tidwell/ch4.html)

http://extremecomputing.com/xcom/inhmdres.html


Survey of HMDs / Displays

See-Through-Displays, Hand-Held
Manufacturer: Virtual Research Systems, Inc. (http://www.virtualresearch.com/) / N-Vision Inc. (http://www.nvis.com/)
Model: Window VR / VB 30 (differing values below are given in this order)
Display Options: 15" XGA (1024 x 768), 17" SXGA (1280 x 1024), 21" (1600 x 1200) / Pair of high-resolution (1.5 million pixels) 0.7-inch TFT LCD screens, providing a virtual image size similar to a 30-inch display at a distance of 1.2 m, incorporating Sony's LDI-100 series displays
Tracking Options: 3 DOF orientation tracking, 6 DOF position & orientation tracking / Tracking sensors are easily mounted internally, making them unobtrusive to the user
Navigation Options: Handgrip buttons emulate keyboard strokes, SpaceBall buttons, joystick buttons, touch screen / Integrated mouse-compatible buttons are conveniently positioned on the right topside of the unit; uses for the buttons include zoom control, toggling symbology, and motion control in virtual environments
Inputs (Window VR): Power: display and position sensor, universal supply (100-240 VAC / 50-60 Hz); Serial: position sensor, navigation buttons (SpaceBall emulation), touch screen; Video: 15" XGA flat panel display, 1024 x 768 @ up to 75 Hz, or 17" SXGA flat panel display, 1280 x 1024 @ up to 75 Hz
Inputs (VB 30): Power: -; Serial: -; Video: NTSC (PAL available); RGB: VGA / SVGA / Macintosh
Weight: - / 8 oz (display), 9.2 oz (video electronics)
Options: Counterbalance structure, choice of standard floor-standing structure or compact drop-ceiling mounted structure, durable handgrips with fully configurable buttons, trigger and hat switch / Stereo, AirTron (wireless video), see-through
Price: 15,900 € / -


InterTrax / i-glasses Tracked PC-VR Displays
Manufacturer: InterSense Inc. (http://www.isense.com/IS900/index.html/)
Tracked Solution: InterSense PC-VR 2 / InterSense PC-VR / InterSense VR-LC (differing values below follow this order)
Personal Display: i-glasses x2 / i-glasses / i-glasses LC
Field of View: 30 degrees each eye
Immersion Visor: Not required / Not required / Yes
Heads up: Optional
Panels: 2 full colour 0.7" LCDs / 2 full colour 0.7" LCDs / 2 full colour 0.7" LCDs
Resolution: 360,000 pixels/LCD / 360,000 pixels/LCD / 180,000 pixels/LCD
3D-Capable: Yes
Interface: 1 NTSC / PAL RCA
Scan Converter (for operation with PC): Included / Included / Optional extra
InterTrax Tracker: InterTrax 30 / InterTrax 30 / InterTrax 20
Benefits: Cost effective; trackers provide distortion-free tracking
Price: - / - / -


HMDs, See-Through-Displays
Manufacturer: Sony (http://www.vrt.de/products/sony/glasstron.asp, http://www.est-kl.com/; no information from Sony found) for the LDI models; N-Vision Inc. (http://www.nvis.com/dv.htm/, http://www.est-kl.com/) for the Datavisor models
Model: LDI-100BE / LDI-D100BE | LDI-50BE / LDI-D50BE | Datavisor HiRes | Datavisor 80 (differing values below follow this column order)
Displays: 2 x 0.7" LCDs | 2 x 0.7" LCDs | 1" CRT | 1" CRT
Resolution: 832 x 3 x 624 | 800 x 225 | - | -
Pixels: 1.55 million | 180,000 | - | -
Field of View: 28° (h) | 30° (h) | Monocular FOV (diameter): 52°, maximum FOV (horizontal): 78° | Monocular FOV (diameter): 80°, maximum FOV (horizontal): 120°
See-Through-Mode: LCD shutter | Mechanical shutter | Optional | Optional
Diffuser: Yes | - | -
Headphone: In-ear stereo | - | -
Controls: Colour, hue, brightness | - | -
Input: Any PAL and VGA source | Any PAL source | Video formats: 640x480 to 1280x1024 (multisync) | Video formats: 640x480 to 1280x1024 (multisync)
Weight: 120 g | 180 g | 56 oz | 64 oz
Options: Stereo-capable (LDI-D100BE), see-through, motion tracking
Disadvantages: Not stereo-capable
Price: 4,150.00 € | 2,070.00 € | 47,000 € – 58,300 € | 110,000 € – 120,000 €


HMDs, ProView Family
Manufacturer: Kaiser Electro Optics Inc. (http://www.keo.com/ProView_family.html)
Model: PV 30 | PV40ST / PV50ST | XL40STm / XL50STm | PV 60 | XL 35 / XL 50 (values below follow this column order)
Field of View: 30° | 40° / 50° | 40° / 50° | 60° | 35° / 50°
Resolution / Eye: 2.25 arcmin | 3.4 arcmin | 2.2 arcmin | 4.5 arcmin | 1.6 / 2.3 arcmin
Overlap: 100% | 100% / 24% | 100% / 25% | 100% | 100%
Brightness: 25 ft-lamberts | >6 ft-lamberts | 20 ft-lamberts | 25 ft-lamberts | 5-50 ft-lamberts (adjustable)
Colour: Full-colour | Full-colour | Monochrome | Full-colour | Full-colour
Contrast: 25:1 | 20:1 | 25:1 | 25:1 | 40:1
Transmission: Occluded | 24% | 24% | Occluded | Occluded
Optics: Plastic | Reflective | Reflective | Plastic | Plastic
Eye Relief: 50 mm | 30 mm | 30 mm | 25 mm | 25 mm
Weight: 25 oz | 28 oz | 28 oz | 28 oz | 35 oz
Stereo / Mono Video Inputs: 1 or 2 (all models)
Display Type: AMLCD, full-colour TFT high-speed polysilicon
Head Tracker Capability: Accommodates Polhemus, Ascension, or InterSense trackers
Video Inputs: One or two VGA or XGA, H & V-TTL, analog 0.7 V P-P, 75 ohms, 60 Hz video inputs; autosense for stereoscopic or monoscopic operation; internal and external sync
Vertical Scan Rate: 60.0 Hz
Cable Length: 20 ft. from HMD to control electronics
Options: See-Through | See-Through
Price: 5,000 € | 49,750 € | 49,750 € | 6,000 € | 14,950 €


HMDs
Manufacturer: ? (http://www.vrealities.com/hmd.html)
Model: VFX1 / VFX3D (differing values below are given in this order)
Display Options (resolution, FOV): 0.7" dual active-matrix colour LCDs, 789 x 230 (181,470 pixels), 45° / 2 colour 0.7-inch LCDs, 360,000 pixels/LCD, 35° diagonal
Tracking Options (DOF): Virtual Orientation System (VOS) head tracker, 360° yaw, ±70° pitch and ±70° roll / VOS (Virtual Orientation System) tracking; pitch & roll sensitivity ±70°, ~±0.05° (12 bit); yaw sensitivity 360°, ~±0.1° (12 bit)
Inputs (VFX1): Power: -; Serial: ACCESS.bus host protocol, accepts up to 125 I/O devices; Video: 16-bit ISA slot, interfaces with standard VGA feature connector and industry standard
Inputs (VFX3D): Power: UL-approved wall mount, CE and European supported; Serial: RS-232, standard 9-pin RS-232; Video: VGA, standard 15-pin VGA, active pass-through connection
Weight: 3 oz / -
Price: $595.00 / $1700.00


HMDs
Manufacturer: Tek Gear (http://www.tekgear.ca/)
Model: M1 / M2 (differing values below are given in this order)
Display Format: Monocular
Video Resolution: 76,800 pixels (320 x 240) / 480,000 pixels (800 x 600)
Pixel Density: 1,700 lines per inch / -
Field of View: 16° horizontal / 22° horizontal, 16.6° vertical
See-Through-Mode: No / Yes
Contrast: 80:1 / 50:1
Colour Depth: Greyscale / Unlimited
Visual Luminance: Up to 20 ft-lamberts, user adjustable / TBD
Video Rate: 60 Hz or 72 Hz / 60 Hz
Controls: Picture position (up, down, left, right), picture property (brightness, contrast) / Brightness, image flip
Input Signal: VGA or NTSC composite / 18-bit digital RGB, GVIF interface
Weight (display and head assembly; electronic driver package): Less than 4 oz.; less than 6 oz. / 210 g
Price: Single unit $500.00 / -


Miniature Displays
Manufacturer: MicroOptical Corporation (http://www.microopticalcorp.com/)
Model: Integrated Eyeglass Display / ClipOn Display
About: Integrated Eyeglass Display: MicroOptical's Integrated Eyeglass Displays include a concealed electronic display. When the user wears the glasses and turns the display on, an image of a video or computer screen appears at a distance of several feet; a focus adjustment allows the user to place the image at a comfortable distance. ClipOn Display: Portable display that attaches to ordinary eyeglasses or safety glasses.
Display Format: 320 x 240, colour, 60 Hz refresh rate
Optics: Left-eye version is available now, right-eye versions by special order
FOV: Approximately 11° horizontal
Focus Range: Pre-set to 1 m, other distances available upon request
Eyeglass Frame Size: Clip-on fits most wire-frame glasses, plano glasses supplied
Video Input: Standard VGA, female DB-15 connector, and standard NTSC, RCA plug
Weight: 62 g / 33 g
Contains: Integrated Eyeglass Display: The Integrated EyeGlasses contain see-around display optics and VGA and NTSC conversion electronics; the housing of the conversion electronics is separated from the display by a 4-foot cable. ClipOn Display: The C-1 Clip-on field test kit contains one Invisible Monitor Clip-on information display with see-around display optics, articulating mounting arm, and VGA and NTSC conversion electronics; the housing of the conversion electronics is separated from the display by a 4-foot cable.
Benefits: The technology can be applied to both lenses in the eyeglasses to form a 3D image. MicroOptical is not currently developing stereo glasses, but may do so in the future.
Price: Upon request


Microdisplay Systems
Manufacturer: Liteye Microdisplay Systems (http://www.liteye.com/index.html)
Model: Liteye / Liteye 300 (differing values below are given in this order)
FOV: ~18.5° diagonal / 47° diagonal, 40° horizontal
Colour: Full colour / Amber or green greyscale
Resolution: 320 x 240, colour sequential / 640 x 480
Contrast Ratio: 100:1 min., 200:1 typical / 500:1
Brightness: 22 ft-L at the centre of the display with 0° viewing angle / 170 Cd/m² (50 fL)
Input Signal: NTSC composite video / VGA and video (PAL, US)
Weight: 1.5 oz / 1.5 oz
Benefits: Connectable to helmets, David Clark headsets, hard hats, baseball caps, ...; provides an image to the user equivalent to a direct-view display which would have to be more than 5.25 inches diagonal larger in size; clear elevation of the FOV
Configurations pictured: OEM package; with Sennheiser 410-6 headset; Dave Clark mount


Wearable Display System
Manufacturer: Microvision (http://www.mvis.com/1-pis2000.htm)
About: Microvision's wearable display system consists of a high-performance head-mounted display with a belt-worn PC
Resolution: VGA: 640 x 400 pixels; SVGA: 800 x 600 pixels*
Luminance: 1-480 fL at the eye
Greyscale: 64 user-discernable grey shades*
Dimming Ratio: 1000:1
FOV: 30° horizontal, 22° vertical
Display Colour: Monochrome red (635 nm)
Image Refresh Rate: 60 Hz
Interface Requirements: SVGA*, VGA, NTSC*, PAL*
HMD Weight: 1.5 lb. (657 g)

* available mid-late 2000

[Figures: monocular head-worn retinal scanning display; wearable system; how it works]

7.1.3. Interaction Devices

User interaction in Archeoguide should be based on user-friendly, multi-modal interaction techniques that allow the user to interact with the system through gestures and speech.

Interaction by speech

Introduction

Various speech recognition systems that can be run on an ordinary PC are available. Usual hardware requirements are a PC with Pentium II processor performance, 128 MB RAM and 200 MB of hard disk space. Most systems require a sound card capable of full-duplex operation, such as the Creative SoundBlaster 16 and compatibles. Some systems can acquire speech input through USB microphones or headsets. A common feature is the extendibility of the base dictionaries, which contain approximately 300,000 words. Software development kits are available for most systems. The response time is system-dependent and is approximately 0.2 s. Problems can arise from language/dialect dependence and speaker dependence, because the applications have to be trained to recognize the user's commands correctly. Some systems provide a small set of speaker-independent commands.
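Independently of any particular vendor SDK (whose APIs differ), the sketch below illustrates in Python how a small, speaker-independent command set could be mapped onto application actions once a recognizer has returned a text hypothesis; the command names and keywords are hypothetical.

    from typing import Optional

    # Hypothetical keyword sets and application commands, for illustration only.
    COMMANDS = {
        ("show", "temple"): "SHOW_RECONSTRUCTION",
        ("next", "site"): "NAVIGATE_NEXT",
        ("more", "information"): "SHOW_INFO",
        ("repeat",): "REPEAT_AUDIO",
    }

    def interpret(hypothesis: str) -> Optional[str]:
        """Map a recognizer's text hypothesis to an application command.

        Matching is deliberately loose (keyword spotting) so that small
        recognition errors or extra words do not break the interaction.
        """
        words = set(hypothesis.lower().split())
        for keywords, command in COMMANDS.items():
            if all(k in words for k in keywords):
                return command
        return None

    print(interpret("please show me the temple"))   # -> SHOW_RECONSTRUCTION
    print(interpret("what was that, repeat it"))    # -> REPEAT_AUDIO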

Comparison of speech recognition and synthesis software

Product: IBM ViaVoice / Dragon NaturallySpeaking / Lernout & Hauspie VoiceXpress

Features:
- IBM ViaVoice: active vocabulary expandable and customizable; SDK available
- Dragon NaturallySpeaking: active vocabulary expandable and customizable; SDK available
- Lernout & Hauspie VoiceXpress: active vocabulary expandable and customizable

Requirements:
- IBM ViaVoice: Pentium 233 MHz processor, 64 MB RAM, Windows 9x/NT
- Dragon NaturallySpeaking: Pentium 300 MHz processor, 128 MB RAM, Windows 9x/NT
- Lernout & Hauspie VoiceXpress: Pentium II processor, 64 MB RAM, Windows 9x/NT


Needed interaction-devices

To interact with a speech recognition and synthesis system, the user needs earphones and a microphone close to the head. A common way to serve both needs is to combine the two devices into a headset, which is worn like ordinary earphones; the only difference is the extension that carries the microphone. Another possibility is a small microphone clip worn close to the collar. Since most AR applications come with a head-mounted display, and some HMDs are already equipped with earphones, these can be used as well. Most headsets are connected through cables, but some are equipped with a wireless transmitter and receiver, which is more convenient.

Interaction through gestures

The recognition of the user's gestures should be realized in a way that does not force the user to wear special gloves or other special devices. One already realized system is the Leeds Hand Tracker, developed by Tony Heap at the University of Leeds (http://www.scs.leeds.ac.uk/vision/proj/ajh/tracker.html). The system provides 2D information about the user's hand, which is held in front of a colour video camera; in the current application, a mouse cursor can be moved on the computer screen. Current tasks of the EMBASSI project also deal with gesture recognition.
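The sketch below illustrates in Python how the 2D hand position delivered by such a tracker could drive a screen cursor; the normalised input format, screen size and smoothing factor are assumptions made for the example, not part of the Leeds tracker interface.

    class CursorMapper:
        """Map normalised 2D hand positions (0..1, 0..1), as a vision-based
        hand tracker might deliver them, to smoothed screen cursor coordinates."""

        def __init__(self, width=800, height=600, smoothing=0.3):
            self.width, self.height = width, height
            self.smoothing = smoothing           # 0 = frozen, 1 = no smoothing
            self.cx, self.cy = width / 2, height / 2

        def update(self, hand_x, hand_y):
            """Feed one tracker sample; returns the new (x, y) cursor position."""
            target_x = hand_x * (self.width - 1)
            target_y = hand_y * (self.height - 1)
            # Exponential smoothing damps camera jitter between frames.
            self.cx += self.smoothing * (target_x - self.cx)
            self.cy += self.smoothing * (target_y - self.cy)
            return int(round(self.cx)), int(round(self.cy))

    mapper = CursorMapper()
    print(mapper.update(0.5, 0.5))   # roughly the screen centre
    print(mapper.update(0.8, 0.2))   # cursor moves towards the upper right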

Keyboards

A keyboard is attractive as an input device because it allows a full range of textual input. Usual keyboards are unattractive in a wearable context because of their size and the cumbersomeness of their use: the keyboard has to be worn somewhere and then it has to be positioned for input. This conflict has given rise to alternative keyboard devices.

The Twiddler is a one-handed “chorded” keyboard that has been commercially available for quite some time. A chording keyboard is one where combinations of keys are pressed together to indicate particular letters. On the positive side, a chording keyboard uses only one hand for input and requires no surface to mount it on (it can be held in the hand). It also offers reasonable speed (50 words per minute is achievable), is inexpensive, requires little power and bandwidth, and is compatible with existing software. On the negative side, the one-handed input means that it cannot be used for applications where the user must have both hands totally free at all times. There is a learning curve for the device, and it is only suitable for textual input; there is no pointing capability inherent in the device.
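The chording principle can be illustrated with a few lines of Python: a table maps sets of simultaneously pressed keys to characters. The chord table below is invented for the example and is not the actual Twiddler layout.

    # Illustrative chord table -- not the real Twiddler layout.
    CHORDS = {
        frozenset({"K1"}): "e",
        frozenset({"K2"}): "t",
        frozenset({"K1", "K2"}): "a",
        frozenset({"K1", "K3"}): "o",
        frozenset({"K2", "K3"}): "n",
        frozenset({"K1", "K2", "K3"}): " ",
    }

    def decode_chord(pressed_keys) -> str:
        """Return the character assigned to a set of simultaneously pressed keys."""
        return CHORDS.get(frozenset(pressed_keys), "")

    chords = [{"K2"}, {"K1"}, {"K2", "K3"}, {"K1", "K2", "K3"}, {"K1", "K2"}]
    print("".join(decode_chord(c) for c in chords))   # -> "ten a"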


An alternative has been realized in the Xybernaut wrist-worn keyboard, a small-size QWERTY keyboard that can be worn on the wrist of one arm while the other hand is free to operate it.

Pointing devices

Both of the input devices we have discussed thus far, speech and the chording keyboard, have no ability to do pointing. As alluded to in the speech discussion, the ability to point to a position on a screen is important for all direct-manipulation interfaces and, more importantly for wearable use, for all applications where there is a figure of interest or a map on the screen. Pointing devices can be a joystick, joypad or touch pad, together with one or more selection buttons.

A possible solution might be a touch panel combined with a display, as realized by the Xybernaut touch-panel display, a wrist-worn device with a 640x480 pixel colour display. Pointing devices are intuitive, allow random access and positional input, and are compatible with desktop interfaces. They are widely available and could provide a virtual keyboard by showing a representation of a keyboard on the screen and pointing to the desired keys. The interfaces that currently utilize pointing devices are resource intensive; they are inexact for precise coordinate specification and they are slow when used to provide a virtual keyboard.

Other input devices

A Palm Pilot has been utilized as an input device in the VASE Lab Wearable Computing Project; see http://vasewww.essex.ac.uk/projects/wearable/ and http://palmpilot.3com.com/.

One layered cloth-based keypad

http://www.media.mit.edu/~rehmi/cloth/

Keyboards can also be made in a single layer of fabric using capacitive sensing [Baxter97], where an array of embroidered or silk-screened electrodes makes up the points of contact. A finger's contact with an electrode can be sensed by measuring the increase in the electrode's total capacitance. It is worth noting that this can be done with a single bi-directional digital I/O pin per electrode and a leakage resistor sewn in highly resistive yarn. Capacitive sensing arrays can also be used to tell how well a piece of clothing fits the wearer, because the signal varies with pressure.

The keypad shown here has been mass-produced using ordinary embroidery techniques and mildly conductive thread. The result is a keypad that is flexible, durable and responsive to touch. A printed circuit board supports the components necessary to do capacitive sensing and to output key-press events as a serial data stream. The circuit board makes contact with the electrodes at the circular pads at the bottom of the electrode pattern. In a test application, 50 denim jackets were embroidered in this pattern. Some of these jackets are equipped with miniature MIDI synthesizers controlled by the keypad. The responsiveness of the keyboard to touch and its timing were found by several users to be excellent.


Sources

http://www.cs.cmu.edu/afs/cs.cmu.edu/project/vuman/www/boeing/hci.html

http://vasewww.essex.ac.uk/projects/wearable/


7.1.4. Communication infrastructure

Wireless communication infrastructure

Wireless technology frees the ARCHEOGUIDE clients from a limiting umbilical cord to the network. A wireless front end connects the client to the wired network over a wireless link. This approach frees clients from cables and wires, enabling mobility and letting users decide how and where they want to work instead of having the workspace define how and where users work. Several proposed wireless standards promote the growth of wireless connectivity. The IEEE 802.11 standard and HIPERLAN target professional wireless LAN applications. Another standard, Bluetooth, has more relaxed specifications, targeting cost-conscious consumer markets. Finally, the OFDM-based WLAN being developed by IMEC, offering bit rates up to 155 Mbit/s, illustrates the increased research and development interest in high bit rate wireless communication systems.

IEEE 802.11

Among the 2.4 GHz standards, the 802.11 business wireless LAN standard offers a secure and robust platform for connectivity. In July 1997 the IEEE adopted the 802.11 standard, supporting 1 and 2 Mbps data rates in the 2.4 GHz band with frequency hopping spread spectrum and direct sequence spread spectrum techniques. The standard supports both asynchronous data and time-bounded traffic such as voice. The point coordination function, a protocol scheme similar to time division multiple access, supports the time-bounded traffic by guaranteeing that a client owns a certain amount of bandwidth. Two new extensions were released in December 1999, meaning that the extensions are probably stable enough. The extensions continue to use the same MAC layer as before but define new radio (physical) layers.

Task Group A (802.11a) jumps from 2.4 to 5 GHz, uses an orthogonal frequency division multiplexing (OFDM) scheme, and requires mandatory support of 6, 12 and 24 Mbps data rates. It also provides optional "turbo" modes as fast as 54 Mbps to allow for future extension as technology progresses to enable the multimedia network and its diverse types of traffic. Additionally, Task Group A has set a goal of making the physical layer the same as the European Telecommunications Standards Institute (ETSI) broadband radio access network, to allow both standards to use the same radio and drive down costs. Task Group B (802.11b), designed to deliver Ethernet speeds, uses a complementary code keying waveform, the result of a compromise between schemes that Harris and Lucent presented. The standard specifies the 2.4 GHz band, and the data rate is negotiable from 1 or 2 Mbps at 400 ft to 11 Mbps at 150 ft at 20 dBm. Major challenges for designs operating at 11 Mbps include modularizing and shrinking the radio, as well as antenna issues, such as whether to use an onboard or an integrated antenna, how to achieve actual coverage of 360°, and whether to use multiple antennas (antenna diversity) to improve the chance of getting a signal with low interference.

HIPERLAN/2

HIPERLAN/2 is an all-new high-performance radio technology, specifically suited for operation in LAN environments. HIPERLAN/2 is being developed by ETSI, and a final specification is due in the first six months of 2000. HIPERLAN/2 provides high-speed (25 Mbit/s typical data rate) communication between portable computing devices and broadband IP, ATM and UMTS networks, and is capable of supporting multimedia applications. The typical operating environment is indoors. Restricted user mobility is supported within the local service area; wide area mobility (e.g. roaming) may be supported by standards outside the scope of the BRAN project. HIPERLAN/2 operates in the unlicensed 5 GHz frequency band (5.150-5.300 GHz), which has been specifically allocated to wireless LANs. A HIPERLAN/2 network typically has a topology as depicted in Figure 7-2 below. The Mobile Terminals (MT) communicate with the Access Points (AP) over an air interface as defined by the HIPERLAN/2 standard. There is also a direct mode of communication between two MTs, which is still in an early phase of development and is not further described in this version of the document. The user of the MT may move around freely in the HIPERLAN/2 network, which will ensure that the user and the MT get the best possible transmission performance. An MT, after association has been performed (which can be viewed as a login), communicates with only one AP at each point in time. The APs ensure that the radio network is automatically configured, taking into account changes in the radio network topology, i.e. there is no need for manual frequency planning.


Figure 7-2: A typical HIPERLAN/2 network

HIPERLAN/2 has a very high transmission rate, which at the physical layer extends up to 54 Mbit/s and on layer 3 up to 25 Mbit/s. To achieve this, HIPERLAN/2 makes use of the modulation method OFDM to transmit the analogue signals. OFDM is very efficient in time-dispersive environments, e.g. within offices, where the transmitted radio signals are reflected from many points, leading to different propagation times before they eventually reach the receiver. Above the physical layer, the Medium Access Control (MAC) protocol is all-new and implements a form of dynamic time-division duplex to allow for the most efficient utilization of radio resources.

Bluetooth

The original conception of Bluetooth targeted point-to-point short-range links for voice applications, such as cell phones to PDAs, or a hands-free automobile adapter kit that eliminates the need for cables and adapter sockets. Its functions have expanded to connecting both the personal network/work space and distinct personal networks to each other. Additionally, Bluetooth devices could form an ad hoc network with multiple devices exchanging information or relaying information to other devices to extend the 10 m range. For example, a company's private branch exchange could route calls to users' desk phones or cell phones through a Bluetooth network of access points or gateways throughout the office. Version 1.0 of the specification was completed in June 1999. To preserve interoperability of products, the Bluetooth partners have agreed to release no products before finalization of the standard, so currently many companies are announcing developer kits.

7.1.5. Virtual Human Representation

The problem of creating virtual humans in a 3D environment addresses two issues: modelling the geometry of the human and simulating its behaviours. We will not describe the modelling process here; a useful text on the subject can be found in [1]. Badler [2] and Thalmann [3] present an overview of this problem.

Defining the behaviour of a human character includes its animation to execute “human like” movements [4]. Methods for animating virtual characters can be roughly divided into two approaches: motion capture based or simulation based. Motion capture is typically done off-line, with the data recorded and played back at runtime. The drawbacks of motion capture include the relatively high cost of capturing data, the lack of flexibility of pre-recording, and the fact that the motion must be performed by someone [4]. Simulation relies on the computation of a mathematical model to generate the model's motion. This can be done on-line or off-line. If the model is too heavy in computational terms, the animation should be calculated off-line and pre-recorded. Otherwise, the animation can be computed in real time, enabling some control over the movement parameters. It is relatively easy to adapt a simulation to include variations in animation parameters, such as limb length or walking speed.

In Archeoguide, using the Olympia trial site as an example, we can enrich the Olympic stadium model with running athletes and change the speed of each runner (realistic simulation) while keeping the same kind of animation. To implement the simulation, we will follow the H-Anim [5] specification.


H-Anim is intended to be a standard representation for human characters. H-Anim compatible software applications have been identified and tested to model and animate the avatars: Poser 4.0, Avatar Studio and Cosmo Worlds.
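A minimal sketch of the simulation-based approach is given below (Python): the same stepping routine animates several runners whose speed parameter differs, independently of any particular H-Anim body model. The track length and speeds are illustrative assumptions.

    class Runner:
        """Parametric animation of one athlete along a straight running track.

        The same stepping routine is reused for every runner; only the speed
        parameter changes the resulting motion."""

        def __init__(self, speed_mps, track_length_m=192.0):
            self.speed = speed_mps               # metres per second (assumed)
            self.track_length = track_length_m   # roughly the Olympia stadion
            self.distance = 0.0                  # metres covered along the track

        def step(self, dt):
            """Advance the runner by dt seconds; returns its position on the track."""
            self.distance = (self.distance + self.speed * dt) % self.track_length
            return self.distance

    runners = [Runner(speed_mps=s) for s in (6.0, 7.5, 9.0)]
    for frame in range(3):                       # three animation frames at 25 fps
        print([round(r.step(1 / 25), 3) for r in runners])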

References

[1] Peter Ratner. 3-D Human Modelling and Animation. John Wiley and Sons, 1998.
[2] Norman I. Badler, Cary B. Phillips, Bonnie Lynn Webber. Simulating Humans. Oxford University Press, 1993.
[3] Nadia Magnenat Thalmann, Daniel Thalmann, editors. Interactive Computer Animation. Prentice Hall, 1996.
[4] Mike Wray and Vincent Belrose. Avatars in Living Space. In Proceedings of VRML 99, the Fourth Symposium on the Virtual Reality Modelling Language, 1999.
[5] H-ANIM: Specification for a Standard VRML Avatar, version 1.0, 1998. On-line: http://ece.uwaterloo.ca:80/~h-anim/spec.html
[8] VRML Specification. On-line: http://www.vrml.org


7.2. Tracking systems

7.2.1. Introduction

Effective augmented reality systems require accurate, long-range sensors and trackers that report the locations of the user and the surrounding objects in the environment. Commercial trackers are aimed at the needs of virtual environments and motion-capture applications, whereas augmented reality demands much stricter accuracy and larger working volumes. Up to now, not many trackers provide real-time tracking at long ranges. An AR system needs trackers that are accurate to around one millimetre and a tiny fraction of a degree across the entire working range of the tracker. Few trackers are able to fulfil these specifications, and every technology has its weaknesses.

For Archeoguide, tracking is necessary for two reasons. First, the information provided to the user depends on his position on the historical site: for example, if he is standing in front of the ruin of the Zeus temple, he wants to see the virtual model of the Zeus temple. Second, for seamless integration of the virtual images into the real scene, the Archeoguide system needs to know the exact head position and viewing direction of the user.

7.2.2. Different approaches to tracking

A number of different tracking schemes have been developed using a variety of means, including ultrasonic waves, magnetic fields, scanning lasers and encoded radio. They all require a transmitter and a receiver unit, with one in an accurately known position and the other on the entity being tracked, and some means of getting data from one to the other. All have serious drawbacks in one or more areas as far as AR is concerned, e.g. accuracy, resolution, responsiveness, interference, clear line of sight, scalability for multiple entities and wide coverage.

GPS (Global Positioning System)

GPS is often proposed as a cheap means for accurate tracking. It is funded and controlled by the U.S. Department of Defense (DOD). This worldwide radio-navigation system is formed from a constellation of 24 satellites and their ground stations. These “man-made stars” are used as reference points to calculate positions accurate to a matter of meters. The satellites transmit specially coded signals, which lead to degraded measurements for “civil” users. An advanced form of GPS, Differential GPS, involves the cooperation of two receivers: one that is stationary and another that roves around making position measurements. The stationary receiver is the key: it ties all the satellite measurements to a solid local reference. DGPS allows measurements better than a centimetre, provided that many measurements are integrated, so this accuracy is not available in real time. The satellite signals are processed in a GPS receiver, enabling the receiver to compute position, velocity and time; four GPS satellite signals are used to compute a position in three dimensions plus the time offset of the receiver clock. GPS receivers have been miniaturized to just a few integrated circuits and so are becoming very economical, which makes the technology accessible to virtually everyone. GPS might be useful as one part of a long-range tracker for AR systems. DGPS is getting close to the necessary accuracy, at least for body tracking, but it generally lacks the necessary responsiveness and the orientation tracking needed for head tracking. Also, GPS only works reliably outdoors, in the open, away from buildings and not under tree cover.
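For illustration, the sketch below (Python with NumPy, synthetic data) shows the core computation a GPS receiver performs: solving for a 3D position and the receiver clock bias from four or more pseudoranges by iterative least squares. Real receivers apply many further corrections (satellite clocks, ionosphere, troposphere) that are omitted here.

    import numpy as np

    C = 299_792_458.0   # speed of light, m/s

    def solve_position(sat_positions, pseudoranges, iterations=10):
        """Estimate receiver position (m) and clock bias (s) from >= 4 pseudoranges."""
        sat = np.asarray(sat_positions, dtype=float)    # (N, 3) satellite positions
        rho = np.asarray(pseudoranges, dtype=float)     # (N,) measured pseudoranges
        x, b = np.zeros(3), 0.0                         # start at the Earth's centre
        for _ in range(iterations):
            ranges = np.linalg.norm(sat - x, axis=1)
            residual = rho - (ranges + C * b)
            # Jacobian of the pseudorange with respect to (x, y, z, b)
            J = np.hstack([-(sat - x) / ranges[:, None],
                           np.full((len(sat), 1), C)])
            delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
            x, b = x + delta[:3], b + delta[3]
        return x, b

    # Synthetic check: four satellites, a known receiver position and clock bias.
    true_pos, true_bias = np.array([1.9e6, 4.8e6, 3.7e6]), 5e-6
    sats = np.array([[15.6e6, 7.5e6, 19.9e6], [18.6e6, 2.2e6, 18.4e6],
                     [17.6e6, 14.4e6, 12.6e6], [19.1e6, 13.8e6, 8.6e6]])
    meas = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias
    pos, bias = solve_position(sats, meas)
    print(np.round(pos), round(bias * 1e6, 3), "microseconds")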


Ultrasonic Tracking

Technology

A typical setup of an ultrasonic tracker is as follows:
1. An array of ultrasonic transmitting beacons permanently mounted on the ceiling.
2. Three ultrasonic receivers to pick up sound waves and measure time-of-flight or phase-coherence.
3. An infrared triggering device for activating the beacons to determine the start of time-of-flight or phase-coherence.
4. A computer.

Ultrasonic trackers utilize high-frequency sound waves to track objects either by triangulation of several transmitters (time-of-flight method) or by measuring the signal's phase difference between transmitter and receiver (phase-coherence method):

Time-of-flight method

The “time-of-flight” method of ultrasonic tracking uses the speed of sound through air to calculate the distance between the transmitter of an ultrasonic pulse and the receiver of that pulse. This method requires the tracker to coordinate the activation of a transponder beacon at a known location so that the time between transmission and reception of the ultrasonic sound waves can be measured accurately. One method of activation is to emit infrared (IR) trigger codes from the ultrasonic range-finder module (receiver) to activate the transponder beacons one at a time. As a beacon senses its unique IR code, it responds by emitting the ultrasonic sound waves.


The picture depicts the IR trigger code activation, followed by the transmission of ultrasonic sound waves from a beacon on the ceiling.


The ultrasonic range-finder module measures the time taken to receive the ultrasonic sound wave after emitting the IR code, and uses the speed of sound to convert that time into distance. Since the speed of light is almost instantaneous by comparison and the latency of the transponder beacon is constant, the distance can be measured accurately. The picture on the left shows the triangulation of three ultrasonic transmissions from three beacons: the position marked “X” is the position derived from all three beacons, whereas the positions marked “O” do not agree with at least one beacon. The distances from two other transponder beacons are then determined using the same method, and the position (3 DOF: x, y and z) of the receiver can be calculated by triangulation from three known beacon positions. The transponder beacons have to be set up in a geometrical configuration that allows the triangulation to be determined without ambiguity; as a rule of thumb they are normally positioned at each corner of the room. 6 DOF can be obtained by having at least three receivers on the same object; the positions of the receivers then provide the orientation information of the object. That is, if the object were tilted to its side, the receivers would have different height and lateral positions. Roll, pitch and yaw can be derived from these receivers' positions, as their relative distances from each other are known.
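A minimal sketch of the time-of-flight computation is shown below (Python with NumPy): measured times of flight are converted into distances with the speed of sound, and the receiver position is recovered by linearised least squares from beacons at known positions. The beacon layout and speed of sound are illustrative assumptions; note that exactly coplanar ceiling beacons leave a mirror ambiguity, which is why the example mounts them at slightly different heights.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C (varies with temperature)

    def trilaterate(beacons, times_of_flight):
        """Recover a 3D receiver position from time-of-flight measurements to
        beacons at known positions, via linearised least squares."""
        b = np.asarray(beacons, dtype=float)
        d = SPEED_OF_SOUND * np.asarray(times_of_flight, dtype=float)
        # Subtracting the first equation removes the quadratic terms:
        #   2 (b_i - b_0) . x = |b_i|^2 - |b_0|^2 - d_i^2 + d_0^2
        A = 2.0 * (b[1:] - b[0])
        rhs = (np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2)
               - d[1:] ** 2 + d[0] ** 2)
        x, *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return x

    # Ceiling beacons (metres) at slightly different heights; receiver at (1, 2, 1.5).
    beacons = [(0, 0, 3.0), (4, 0, 2.8), (0, 4, 3.2), (4, 4, 2.6)]
    receiver = np.array([1.0, 2.0, 1.5])
    tof = [np.linalg.norm(np.array(p) - receiver) / SPEED_OF_SOUND for p in beacons]
    print(np.round(trilaterate(beacons, tof), 3))   # -> [1.  2.  1.5]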

Phase-coherence method

The phase-coherence method uses the signal phase difference to determine position. Since sound travels as a sinusoidal waveform, the signal phase angle for a fixed receiver and transmitter position always remains constant. As the receiver moves away from the transmitter, the signal phase angle changes because the sound waves need to travel further to reach the receiver. This change in the signal phase angle can be converted into a change in distance from the transmitter, since the ultrasonic signal wavelength is known. However, if the object being tracked moves farther than one half of the signal wavelength in any direction during the period of one update, errors will result in the position determination. This is because the signal waveform repeats itself after each wavelength, and for a change of half a wavelength between updates the computer cannot determine whether the object moved towards or away from the transmitter, as both positions would yield the same phase angle. Since phase-coherent tracking is an incremental form of position determination, small errors in position determination accumulate into larger errors over time (drift errors). The picture on the right shows that the measured phase angles are similar at the two locations marked “O” and “X”; the incorrect point marked “X” is more than half a wavelength from the initial point.
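The following small sketch (Python; a 40 kHz carrier and a fixed speed of sound are assumed) converts a measured change in phase angle into an incremental change in distance, and makes explicit why motion of more than half a wavelength between updates is ambiguous.

    SPEED_OF_SOUND = 343.0                    # m/s, assumed constant here
    FREQUENCY = 40_000.0                      # 40 kHz carrier
    WAVELENGTH = SPEED_OF_SOUND / FREQUENCY   # ~8.6 mm

    def phase_to_distance_change(prev_phase_deg, new_phase_deg):
        """Convert the change in measured phase angle between two updates into a
        change in transmitter-receiver distance (metres).

        The difference is wrapped into (-180, 180] degrees, which is exactly why
        the object must not move more than half a wavelength between updates:
        larger motions alias onto the wrong distance (and direction)."""
        delta = (new_phase_deg - prev_phase_deg + 180.0) % 360.0 - 180.0
        return (delta / 360.0) * WAVELENGTH

    distance = 1.000                                    # metres, known start
    for prev, new in [(0.0, 90.0), (90.0, 250.0)]:
        distance += phase_to_distance_change(prev, new)
        print(f"estimated distance: {distance * 1000:.2f} mm")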

Inertial Tracking

Technology

A typical setup of an inertial tracker is as follows:
1. An accelerometer and gyroscope to measure the linear and angular rates.
2. A computer.


The inertial tracker uses an accelerometer and a gyroscope to measure the linear acceleration and angular rates. This tracker is truly sourceless and is unaffected by almost any environmental interference. Although comparatively cheap, fast and accurate, gyroscopic trackers suffer from reading drift, typically around 3° per minute. The principles behind the linear accelerometer and the gyroscope are as follows:

Linear Accelerometer

In order to measure the acceleration along the three axes of an object simultaneously, it is necessary to have three accelerometer mechanisms, each mounted perpendicular to one of the axes. The moving parts are made very small and light to reduce their moment of inertia, and a hairspring on each spindle takes up any backlash. The motion of the three springs, one corresponding to each axis, records the acceleration.

Gyroscope

Inertial tracking uses the principle that any spinning object possesses gyroscopic characteristics. The central mechanism of the gyroscope is a wheel similar to a bicycle wheel: its outer rim has a heavy mass and rotates at high speed on very low-friction bearings. When it is rotating normally, it resists changes in direction. The inertial tracker uses the gyroscope's characteristic of rigidity in space to determine orientation. Rigidity in space means that the gyroscope resists turning: when it is gimballed (free to move in 1, 2 or 3 dimensions), any surface attached to the gyro assembly will remain rigid in space even though the case of the gyro turns. The inertial tracker uses this property to determine changes of heading, pitch and roll.

The general principle of inertial tracking is to measure the accelerations on masses (accelerometers) and the orientation of spinning masses (gyroscopes). Integrated circuit technology has advanced to the point where these sensors are small enough to be used in human position tracking. Linear accelerometer output needs to be integrated twice to derive position, while angular rate output needs to be integrated once to determine orientation. Integration causes the derived positions and orientations to be sensitive to drift, and they have to be re-calibrated periodically. Gyroscopic information can directly give relative orientation, but it is also subject to drift.
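The integration chain described above can be sketched in a few lines of Python with NumPy: the gyroscope rates are integrated once into an orientation, the orientation is used to remove gravity from the accelerometer signal, and the result is integrated twice into a position. The data and the first-order attitude update are illustrative only; a real strap-down implementation would integrate the attitude more carefully and would still drift without external correction.

    import numpy as np

    def dead_reckon(gyro_rates, accels, dt, g=np.array([0.0, 0.0, -9.81])):
        """Strap-down dead reckoning sketch.

        gyro_rates: (N, 3) angular rates in rad/s (body frame).
        accels:     (N, 3) specific force in m/s^2 (body frame)."""
        R = np.eye(3)                      # body-to-world rotation
        v, p = np.zeros(3), np.zeros(3)    # velocity and position, world frame
        for w, a in zip(gyro_rates, accels):
            # First-order orientation update: R <- R (I + [w]_x dt)
            wx = np.array([[0, -w[2], w[1]],
                           [w[2], 0, -w[0]],
                           [-w[1], w[0], 0]])
            R = R @ (np.eye(3) + wx * dt)
            a_world = R @ a + g            # remove gravity in the world frame
            v += a_world * dt              # integrate once  -> velocity
            p += v * dt                    # integrate twice -> position
        return R, v, p

    # One second of synthetic data: the unit is level, measures gravity on its
    # z axis and accelerates at 1 m/s^2 along x.
    N, dt = 100, 0.01
    gyro = np.zeros((N, 3))
    acc = np.tile([1.0, 0.0, 9.81], (N, 1))
    R, v, p = dead_reckon(gyro, acc, dt)
    print(np.round(v, 3), np.round(p, 3))  # v ~ [1, 0, 0], p ~ [0.5, 0, 0]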


7.2.3. Some Advantages and Disadvantages of typical Tracking Systems

Mechanical Tracking

Advantages:
- Good accuracy.

Disadvantages:
- Tethers the user; burdensome, cumbersome.

Ultrasonic Tracking

Advantages:
- Immune to electrical and magnetic interference. Electrical and magnetic fields do not affect ultrasonic sound waves; therefore, the environment does not need to be freed of electrical cables, monitors, computers, and ferrous metal fixtures.
- Tracking is not affected by nearby objects. Ultrasonic sound waves do not change their properties or waveforms when other objects are in the vicinity of the tracked object.
- Compact and lightweight receivers can easily be embedded in head-mounted displays and other devices. Lightweight microphones (receivers) can be easily carried or held by a person.
- No moving parts to interfere with freedom of movement. Unlike mechanical trackers, ultrasonic trackers do not need any body-based linkage to perform tracking.

Disadvantages:
- Ultrasonic trackers suffer from noise and are difficult to make accurate at long ranges because of variations in the ambient temperature.
- Restricted workspace volume. The ultrasonic tracker requires transponder beacons to be permanently mounted in the vicinity of the tracking area. This requirement restricts the tracker's effective workspace volume and limits the effective tracking range.
- Must have a direct line of sight from the transponder beacons to the receiver. High frequencies are directional in signal wave propagation, which is susceptible to masking or shadowing problems.
- Time-of-flight trackers have a low update rate and low resolution. This could be resolved with shorter wavelengths, but these suffer heavy atmospheric attenuation; attenuation increases rapidly at about 50-60 kHz, and generally the frequency used is 40 kHz tone pulses with a wavelength of 7 mm.
- Phase-coherence trackers are subject to error accumulation over time. If the object being tracked moves farther than one half of the signal wavelength in any direction during the period of one update, errors will result in the position determination.
- Affected by temperature and pressure changes and by the humidity level of the work environment. This introduces constant time-delay errors in transponder beacons and receivers due to electronics drift.
- Interference from echoes and other noise in the environment. Ultrasonic tracking is sensitive to reflection and interference from ambient noise sources. An echo from a hard surface has 90% reflectivity to ultrasonic waves; this reflection makes the object appear farther than its actual position for time-of-flight trackers and introduces phase-shift errors for phase-coherence trackers.
- Suffers from other error sources that may be present, for instance errors in the beacon positions and transducer-angle-related errors.
- Requires 3 receivers on the same object to obtain 6 DOF, which limits the number of receivers available to track the entire human body.

Magnetic Tracking

Advantages:
- Robust; places minimal constraint on user motion.

Disadvantages:
- Lack of accuracy.


- Affected by metallic objects.
- Calibration of the magnetic tracker is difficult and impractical.

Optical Tracking

Advantages:
- Optical technologies show the most promise because of trends toward high-resolution digital cameras, real-time photogrammetric techniques and structured light sources that result in more signal strength at long distances.
- Good accuracy, no constraint on user motion.

Disadvantages:
- Distortion and calibration problems.
- Not adequate for outdoor applications.

Inertial Tracking

Advantages:
- Relatively new technology: inertial tracking offers the greatest promise for achieving the necessary responsiveness, resolution and accuracy in position and especially orientation tracking.
- No need for a transmitting source. The lack of an artificial source means that inertial tracking methods do not suffer from signal or metallic object interference or shadowing.
- Freedom to move around. Transmitting body tracking data via wireless means would also eliminate tethers; the user would be free to move around in the real world with no restrictions. Also, the use of advanced micro-machined inertial sensors and application-specific integrated circuits would minimize the encumbrance of the user.
- Can be used in a large working volume. An inertial tracker can be used in large workspaces because it does not need a transmitting source to perform tracking and there is no hardware or cabling between computer and tracker.
- Low latency. An inertial tracker is able to derive position and orientation changes instantaneously by performing double integration and integration directly on the linear accelerometer and gyroscope outputs respectively.

Disadvantages:
- Sensitive to drift (up to 3° per min) and sensor bias. The one major drawback of inertial sensors is their tendency to drift (to accumulate measurement errors over time). Techniques are available which minimize the effects of drift; the most widely used is a combination of angular rate and linear acceleration sensors, which compensates for drift using the earth's magnetic and gravitational fields as a directional reference.
- The gravity field distorts the output. The effect of gravity on the linear accelerometer and gyroscope induces an erroneous downward acceleration force on the tracking device.
- Accelerometers suffer inaccuracies due to non-linear effects of beam bending. The relationship between input and output in acceleration sensors will not always be a linear one. This may be due to imperfections in the manufacturing process or properties of the material itself, and if the resulting 2nd or 3rd order relationship can be determined, error correction is often possible. Another common error is temperature non-linearity, a property of materials whose physical characteristics change with temperature.
- The gyroscope is stable, as it is force-balanced, but sensitive to vibration. It must periodically be returned to the home position for offset correction, or used in conjunction with another position sensor.
- Not accurate for slow position changes. Slowly changing positions may be difficult to detect with a linear accelerometer.
- Hysteresis. Hysteresis is the tendency of a sensor's components to maintain their perturbed-state characteristics after the perturbation is removed. An example of hysteresis within an accelerometer is the presence of residual deflection/strain within the sensor's spring after acceleration has been applied and then removed. In the presence of hysteresis, an accelerometer will not be able to repeat its null position successfully, which leads to an unstable bias.


Vision Tracking

Vision tracking is a passive technique that relies on computer vision to determine the position and orientation of an entity; one approach is to use video cameras installed in the area to track the users.

Advantages:
- Real-time tracking data transmittal.

Disadvantages:
- Users must remain within a line-of-sight distance.
- Vision-based trackers have stability problems that arise from their assumptions about the working environment and the user's movements.

7.2.4. Example of a possible Hybrid Tracking Technology

A typical setup of a hybrid (ultrasonic and inertial) tracker is as follows:
1. An array of ultrasonic transmitting beacons permanently mounted on the ceiling.
2. Three ultrasonic receivers to pick up sound waves and measure time-of-flight or phase-coherence.
3. An infrared triggering device for activating the beacons to determine time-of-flight or phase-coherence.
4. An accelerometer and gyroscope to measure the linear and angular rates.
5. A computer.

The term "hybrid" means a mixture or composite of technologies. Generally, a hybrid tracker uses the strengths of one type of tracker to compensate for the weaknesses of another. In this example, a hybrid of ultrasonic and inertial tracking technologies is used to achieve better accuracy and lower latency; the key technology in hybrid tracking is the Kalman filter, a sensor-fusion algorithm.

An ultrasonic tracker typically has fairly high latency, marginal accuracy, moderate noise levels, shadowing problems and limited range. The primary reason for most of these problems is the dependence on a transmitted source to determine orientation and position information. This source may be transmitted by body-based beacons or received by body-based sensors; either way, limited range, shadowing problems and susceptibility to interference make such a system unfit for tracking multiple users in a large working volume. The largely sourceless nature of inertial trackers makes possible a full-body tracking system that avoids the problems associated with ultrasonic tracking technology. In addition, an inertial tracker provides position and orientation information at a much lower latency than other tracking technologies.

Kalman Filter

The ultrasonic tracker provides changes to position information. In order to derive 6 DOF position and orientation, all these sensor data have to be fused together by a Kalman filter.

The angular rate obtained from the gyroscope is integrated once to derive orientation. This orientation is also used to transform the acceleration obtained from the accelerometer into the reference coordinate frame; the effect of gravity on the linear accelerometer is canceled, and the result is integrated twice to obtain position.
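A minimal dead-reckoning sketch of this step is given below. It is illustrative only (the small-angle Euler update, the sample period and all variable names are assumptions, not the ARCHEOGUIDE design); Python/NumPy is used purely for compactness.

    import numpy as np

    def euler_to_matrix(roll, pitch, yaw):
        # body-to-world rotation built from the integrated Euler angles
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def dead_reckon(gyro_rates, accels, dt):
        """gyro_rates, accels: (N, 3) arrays of angular rate (rad/s) and specific
        force (m/s^2) in the sensor frame; dt: sample period in seconds."""
        gravity = np.array([0.0, 0.0, -9.81])      # world frame, z up (assumed)
        roll = pitch = yaw = 0.0
        velocity = np.zeros(3)
        position = np.zeros(3)
        for w, f in zip(gyro_rates, accels):
            # single integration of the gyroscope -> orientation
            roll, pitch, yaw = roll + w[0] * dt, pitch + w[1] * dt, yaw + w[2] * dt
            R = euler_to_matrix(roll, pitch, yaw)
            # rotate the measured specific force to the world frame and cancel gravity
            acceleration = R @ f + gravity
            # double integration -> velocity, then position
            velocity += acceleration * dt
            position += velocity * dt
        return position, (roll, pitch, yaw)

Without an external reference this estimate drifts, which is exactly why the ultrasonic correction described next is needed.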


The ultrasonic tracker activates each beacon in sequence, and the acoustic range measurement determines the distance from the activated beacon either by the time-of-flight or by the phase-coherence method. Since the tracker already knows its approximate position and orientation from the inertial tracker, it is able to determine whether the distance measured by the ultrasonic tracker is acceptable. The accepted position and orientation from the ultrasonic tracker are then used by the extended Kalman filter to make small adjustments that correct the integration and double-integration modules. This provides the offset correction for the drift error of the inertial sensor.
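The acceptance test and drift correction can be pictured with the much simplified stand-in below; it is not a full extended Kalman filter, and the gate and gain values are purely assumed.

    import numpy as np

    def correct_with_beacon(position_est, beacon_pos, measured_range,
                            gate=0.30, gain=0.2):
        """position_est, beacon_pos: 3-vectors (m); measured_range: acoustic
        time-of-flight range to the activated beacon (m)."""
        predicted = np.linalg.norm(position_est - beacon_pos)
        innovation = measured_range - predicted
        if abs(innovation) > gate:
            # range inconsistent with the inertial estimate: reject it
            # (e.g. shadowing or multipath)
            return position_est
        direction = (position_est - beacon_pos) / predicted
        # nudge the estimate along the beacon line of sight by a fraction of the
        # error; in the real system this correction comes from the Kalman filter
        return position_est + gain * innovation * direction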

Advantages:
- Improved update rates, resolution and immunity to ultrasonic interference. Position tracking is performed by accelerometer and gyroscope with ultrasonic drift correction, not just pure time-of-flight or phase-coherence measurement.
- Can predict angular motion up to 50 ms in the future, which compensates for graphics rendering delays and further contributes to eliminating simulator lag.
- Fast response: update rates of up to 150 Hz with extremely low latency.
- Distortion-free: inertial sensing technology is not susceptible to electromagnetic interference.

Disadvantages:
- Restricted workspace volume. Although the inertial tracker could continue to provide position and orientation information outside the workspace volume, the drift associated with it would accumulate over time.
- Must have a direct line of sight from the emitter to the detector. This weakness is associated with ultrasonic tracking; the tracker still functions, with reduced accuracy, when it loses its line of sight.
- Affected by temperature and pressure changes and by the humidity level of the working environment. These introduce constant time-delay errors in transponder beacons and receivers due to electronics drift.
- Requires 3 receivers on the same object to obtain 6 DOF. Since 3 receivers are needed to obtain 6 DOF, this limits the number of receivers available to track the entire human body.

Future Directions

Future tracking systems may be hybrids of existing technologies, because combined approaches can cover the weaknesses of conventional systems by compensating for each other's disadvantages. Such a combination of several techniques may offer greater robustness against the usual errors. Hybrid tracking provides a dramatic improvement in stabilization (with respect to user motion) and thus more accurate registration, which finally results in better alignment of the real and the virtual objects.


7.2.5. Sources

"A Survey of Augmented Reality" by Ronald T. Azuma (http://www.cs.unc.edu/~azuma/)
"Experiences and Observations in Applying Augmented Reality to Live Training" (http://www.augsim.com/vwsim99/vwsim99.html)
"Wide-Area Tracking" (http://www.cs.unc.edu/~tracker/)
"GPS" (http://www.trimble.com/gps/index.htm)
"GPS Overview" (http://www.utexas.edu/depts/grg/gcraft/notes/gps/gps.html)
"Augmented Reality Overview" (http://www.cs.unc.edu/~us/hybrid.html)
"Ultrasonic, Inertia and Hybrid Tracking Technologies" (http://www.cs.nps.navy.mil/people/faculty/capps/4473/projects/chang2/Full.htm#HybridTrackingProducts)
"Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking" (http://www.cs.unc.edu/~us/chen/paper.html)
"Acquisition, Tracking, and Pointing VI, Abstracts" (http://www.spie.org/web/abstracts/1600/1697.html)
"Hybrid Gyroscopic / Electromagnetic Tracking System" (http://www.ndirect.co.uk/~vr-systems/research.htm)


7.2.6. Survey of GPS-related technologies

GPS and Archeoguide

The Archeoguide project is evaluating the use of GPS technology in two different instances: the survey work needed to "prepare" a given site for a local Archeoguide installation, and the tracking of the "raw" position of a user walking around the site. This paper is mainly concerned with the second application of GPS technology, where "raw" currently means anything between 100 meters and 1 centimeter of precision, remembering that final corrections for the augmented-reality reconstructions will be handled directly by other position references such as landmarks.

Global Positioning System (GPS)

GPS is a satellite-based global navigation system created and operated by the United States Department of Defense (DOD). Originally intended solely to enhance military defense capabilities, GPS has expanded to provide highly accurate position and timing information for many civilian applications. An in-depth study of GPS is required to fully understand how it works, but its main characteristics can be highlighted as follows:

1. 24 satellites in six orbital paths circle the Earth twice each day at an inclination angle of approximately 55 degrees to the equator; they are called the GPS-Navstar constellation.

2. This constellation of satellites continuously transmits coded positional and timing information over two different radio frequencies in the 1500 Megahertz range.

3. GPS receivers with antennas located in a position to clearly view a minimum number of satellites (at least 4 for a 3D fix) pick up these signals and use the coded information to calculate a position in an earth centered coordinate system.

4. GPS receivers determine position by calculating the time it takes for the radio signals transmitted from each satellite to reach the user. Time, in the Distance = Speed x Time equation, is determined using an ingenious code-matching technique within the GPS receiver, which operates by correlating a locally generated code with the one generated by the satellite (each satellite in the constellation has a unique code). With time determined, and with the satellite's position computed from the coded navigation message, a little trigonometry lets the receiver determine its location in space (a least-squares sketch of this computation follows the list).

5. Further receiver processing relates the 3D position to a standard model of the Earth's surface, and user coordinates and altitude are computed and displayed. The typical geodetic reference used for these computations is WGS-84.
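As an illustration of steps 4 and 5 (not part of the system design; all names and values are assumed), the classical iterative least-squares solution of receiver position and clock bias from four or more pseudoranges looks roughly as follows:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def solve_position(sat_positions, pseudoranges, iterations=10):
        """sat_positions: (N, 3) ECEF metres from the navigation message;
        pseudoranges: (N,) metres; N >= 4."""
        x = np.zeros(4)                          # [X, Y, Z, clock-bias * c]
        for _ in range(iterations):
            ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
            residuals = pseudoranges - (ranges + x[3])
            # geometry matrix: unit vectors satellite->receiver plus a clock column
            H = np.hstack([(x[:3] - sat_positions) / ranges[:, None],
                           np.ones((len(ranges), 1))])
            dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
            x += dx
        return x[:3], x[3] / C                   # position (m) and clock bias (s)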

GPS is the navigation system of choice for today and many years to come. While GPS is clearly the most accurate worldwide all-weather navigation system yet developed, it still can exhibit significant errors. Position accuracy depends on the receiver’s ability to accurately calculate the time it takes for each satellite signal to travel to earth. This is where the problem lies. There are primarily five sources of errors, which can affect the receiver’s calculation. These errors consist of:

1. Ionosphere and troposphere delays on the radio signal;
2. Signal multi-path interference;
3. Receiver clock biases;
4. Orbital position errors, also known as ephemeris errors, of the satellite's exact location;
5. Intentional degradation of the satellite signal by the DOD. This intentional degradation of the signal is known as "Selective Availability (SA)" and is intended to prevent adversaries from exploiting highly accurate GPS signals and using them against the United States or its allies.

Selective Availability accounts for the majority of the error budget. The combination of these errors, in conjunction with poor satellite geometry, can limit GPS accuracy to 100 meters 95% of the time and up to 300 meters 5% of the time. These performances are usually well exceeded by most commercial receivers; it is therefore typical for a standard, modern GPS receiver to show accuracy better than 40-50 meters, even if it is only guaranteed to the usual 100-meter accuracy.

It is to be noted that, due to the geometry of the user and the visible satellites, the altitude error is usually worse than the lat/lon coordinate error. In fact the generic GDOP (Geometric Dilution Of Precision) parameter used to express the "goodness" of a given user/satellite configuration can be separated into two different parameters, HDOP (Horizontal Dilution Of Precision) and VDOP (Vertical Dilution Of Precision), to better highlight this fact.
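For illustration, the DOP figures follow directly from the user/satellite geometry: build the matrix of unit line-of-sight vectors (plus a clock column) and inspect the diagonal of (H^T H)^-1. The sketch below is a generic textbook computation with assumed inputs, not Archeoguide-specific code.

    import numpy as np

    def dilution_of_precision(sat_positions, user_position):
        """sat_positions: (N, 3); user_position: (3,); both in the same frame."""
        los = sat_positions - user_position
        unit = los / np.linalg.norm(los, axis=1, keepdims=True)
        H = np.hstack([unit, np.ones((len(unit), 1))])   # clock-bias column
        Q = np.linalg.inv(H.T @ H)
        hdop = np.sqrt(Q[0, 0] + Q[1, 1])                # horizontal
        vdop = np.sqrt(Q[2, 2])                          # vertical
        gdop = np.sqrt(np.trace(Q))                      # geometric (incl. clock)
        return hdop, vdop, gdop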


Thanks to advances in microelectronics, GPS receivers are in general getting "smarter" and more "powerful", to the point that multi-satellite simultaneous receiving techniques (8 or more satellites) or dual-frequency receiving techniques (exploiting both GPS transmission frequencies) can be used to improve the accuracy of position fixes. Recent technology has concentrated on small GPS receivers and on receivers for automotive applications, where peculiar conditions known as "urban canyons" put satellite-tracking algorithms and fast re-acquisition times to a serious test. Both developments will be exploited in Archeoguide, where dimensions are an issue and where the presence of obstacles, such as artifacts and trees with dense foliage, must be taken into account.

Standard GPS receiver technologies cannot, however, bring accuracy down to the 10-meter, or better, level. There are two other technologies that enable a civilian user to obtain better accuracy in position fixes:
- Differential GPS (DGPS) technology, which is used to correct most of the system errors;
- Carrier Phase and Real-Time Kinematic (RTK) technologies, which make use of different, more complex receiving techniques to improve precision under certain operating conditions.

Another technique, fairly new and peculiar, has recently emerged on the market thanks to the requirement of embedding GPS receivers in mobile telephones; it is known as server-aided GPS. All these technologies are discussed below.

Differential GPS (DGPS) Techniques

The idea behind all differential positioning is to correct bias errors at one location with bias errors measured at a known position. A reference receiver, or base station, computes corrections for each satellite signal. Because individual pseudo-ranges must be corrected prior to the formation of a navigation solution, DGPS implementations require software in the reference receiver that can track all satellites in view and form individual pseudo-range corrections for each satellite. The error data for each tracked satellite, or corrections, are formatted into a correction message and transmitted to the users' GPS receivers. The correction message format follows the standard established by the Radio Technical Commission for Maritime Services, Special Committee 104 (RTCM-SC104). These differential corrections are then applied by the receiver in its GPS calculations, thus removing most of the satellite signal error and improving accuracy. The level of accuracy obtained is a function of the GPS receiver.

It must be remembered that applying a simple position correction from the reference receiver to the remote receiver has limited effect at useful ranges, because both receivers would have to use the same set of satellites in their navigation solutions and have identical GDOP terms (not possible at different locations) in order to be identically affected by bias errors. That is why the corrections are effectively related to measurements made for each satellite in view in the user area.

Transmission of correction messages can employ different carriers and needs a repetition interval usually ranging from ten seconds to a few minutes. Increasing the repetition interval to more than one minute can worsen the degradation in accuracy of the corrected solution due to latency problems, while reducing it to the order of seconds is of little use, because the precision of the system is in any case limited by other factors. Generally speaking, to remove Selective Availability (and other bias errors), differential corrections should be computed at the reference station and applied at the remote receiver at an update rate that is less than the correlation time of SA; suggested DGPS update intervals are usually less than twenty seconds.

DGPS removes common-mode errors, i.e. those errors common to both the reference and remote receivers (not multipath or receiver noise). Errors are more nearly common when the receivers are close together (less than 100 km). Differential position accuracies ranging from 1 to 10 meters are possible with DGPS based on basic receivers (single frequency, standard positioning service). Most commercial portable and OEM board receivers now offer an RTCM input port for differential corrections. Prices for such receivers are usually below $500, while a reference station costs around $3,000-5,000. It should be noted that the need for a continuous radio link between the reference station and all the rover stations can be implicitly satisfied by the presence of a wireless radio LAN; in this respect we recommend that the software planning include a specification for assigning a TCP port to receive RTCM connections from the server.
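The DGPS principle can be summarized by the toy functions below (names and data are assumed, and the RTCM message formatting itself is omitted): the reference station turns its known position into a per-satellite pseudorange correction, and the rover applies that correction before computing its own fix.

    import numpy as np

    def reference_corrections(sat_positions, measured_pseudoranges, station_position):
        """Per-satellite correction: geometric range from the surveyed station
        position minus the pseudorange the station actually measured."""
        geometric = np.linalg.norm(sat_positions - station_position, axis=1)
        return geometric - measured_pseudoranges

    def apply_corrections(rover_pseudoranges, corrections):
        # the corrected pseudoranges are then fed to the normal position solution
        return rover_pseudoranges + corrections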

Carrier Phase and RTK Techniques

Carrier-phase tracking of GPS signals has resulted in a revolution in land surveying. A line of sight along the ground is no longer necessary for precise positioning; positions can be measured up to 30 km from a reference point without intermediate points. This use of GPS requires specially equipped carrier-tracking receivers. The L1 and/or L2 carrier signals are used in carrier-phase surveying (L1 and L2 being the two operating frequencies of the GPS satellites; normal, i.e. low-cost, receivers use only L1). L1 carrier cycles have a wavelength of 19 centimeters; if tracked and measured, these carrier signals can provide ranging measurements with relative accuracies of millimeters under special circumstances.

Tracking carrier-phase signals provides no time-of-transmission information. The carrier signals, while modulated with time-tagged binary codes, carry no time-tags that distinguish one cycle from another. The measurements used in carrier-phase tracking are differences in carrier-phase cycles and fractions of cycles over time. At least two receivers track the carrier signals at the same time.


Ionospheric delay differences at the two receivers must be small enough to ensure that carrier-phase cycles are properly accounted for; this usually requires that the two receivers be within about 30 km of each other.

Carrier phase is tracked at both receivers and the changes in tracked phase are recorded over time in both receivers. All carrier-phase tracking is differential, requiring both a reference and a remote receiver tracking carrier phases at the same time. Unless the reference and remote receivers use L1-L2 differences to measure the ionospheric delay, they must be close enough to ensure that the ionospheric delay difference is less than a carrier wavelength. Using L1-L2 ionospheric measurements and long measurement-averaging periods, relative positions of fixed sites can be determined over baselines of hundreds of kilometers.

Phase-difference changes at the two receivers are reduced in software to differences in three position dimensions between the reference station and the remote receiver. High-accuracy range-difference measurements with sub-centimeter accuracy are possible; problems result from the difficulty of tracking carrier signals in noise or while the receiver moves.

Post-processed static carrier-phase surveying can provide 1-5 cm relative positioning within 30 km of the reference receiver, with measurement times of 15 minutes for short baselines (10 km) and one hour for long baselines (30 km). Rapid-static or fast-static surveying can provide 4-10 cm accuracies with 1 km baselines and 15 minutes of recording time. Pure carrier-phase technology is therefore evidently not useful for Archeoguide purposes; however, given the reduced size of archaeological sites, carrier-phase technology can be considered thanks to RTK.

Real-Time Kinematic (RTK) surveying techniques can provide centimeter measurements in real time over 10 km baselines, tracking five or more satellites with real-time radio links between the reference and remote receivers. RTK-enabled receivers for OEM applications that could be used for Archeoguide purposes are currently available. It should be highlighted, however, that part of the technology that enables position accuracies in the order of centimeters lies in the antennas used both for the reference station and for the rover: these are specialized designs intended to minimize multi-path (reflection) interference, and they are not only expensive but also larger in dimensions.
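A toy illustration of the carrier-phase observable discussed above (values and names assumed): the receiver measures fractional and accumulated cycles of the ~19 cm L1 carrier, the unknown integer ambiguity must be resolved separately, and between-receiver differences cancel the errors common to reference and rover.

    L1_WAVELENGTH = 0.1903  # metres (~19 cm, as noted above)

    def carrier_phase_range(phase_cycles, integer_ambiguity):
        """Range implied by the measured carrier phase once the integer number of
        whole cycles (the ambiguity) has been fixed."""
        return L1_WAVELENGTH * (phase_cycles + integer_ambiguity)

    def single_difference(phase_reference, phase_rover):
        """Between-receiver phase difference (in cycles) to the same satellite;
        cancels satellite clock error and, over short baselines, most atmospheric delay."""
        return L1_WAVELENGTH * (phase_rover - phase_reference)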

Server-aided GPS

This technique has been pioneered by SiRF Technologies, one of the most innovative companies in the world of GPS. Its concepts are rather involved, but in simple terms it aims at drastically reducing the dimensions and power consumption of GPS receivers by linking them to a centralized server, which reduces the need for on-board processing power. The minimized receiver tunes the available satellites (whose ID codes are provided by the server, which "knows" a rough estimate, within ~50 km, of the user's position) and computes a line of position from each of them. These raw data are sent to the server, which can therefore compute the correct position of the user, applying, if necessary, all the differential corrections needed.

The SnapTrack system developed by SiRF has been devised for application in cellular telephones, where there is a strong need to localize a user in order to provide services that depend on his exact position. Many additional "tricks" are blended into SnapTrack to contain the position error within 4 meters for 70% of the time. What is important is that these results are obtained with a very small, power-saving GPS receiver with a minimal antenna.

This technology must be considered carefully whenever there is a continuous radio link between the user and a centralized server and, more particularly, when the user's position is needed more by the server than by the user.

GPS and Selective Availability

During the preparation of this paper the US Government, which acts on the GPS Navstar system through its Department of Defense, stopped the intentional degradation of the C/A-code GPS signal (the one used for normal civil purposes) and brought the Standard Positioning Service to its maximum precision. This decision is considered to be permanent and in line with statements made by the US White House on previous occasions.

The immediate results are a real bonus for the civilian market. A commercial low-cost receiver can now achieve a position fix within a radius of 15 meters from the real location, for most of the time, without any differential technique. Altitude precision is also greatly improved, by at least an order of magnitude, bringing the error within 20-30 meters. Particular conditions, such as an optimal geometry of the constellation and the use of high-performance antennas and advanced receiver algorithms, can bring this error within a radius of 15 meters for most of the time.

These results will have a great impact on Archeoguide, as on any other civilian application, since they could improve the estimate of the user's raw position to the point that differential correction could be avoided, with a consequent (relative) reduction in costs.

It is to be noted, however, that due to the intrinsic errors of the single-frequency approach to GPS receivers, the use of differential correction in these more benign conditions will not produce improvements in accuracy of the same order of magnitude.


In fact the main target of differential correction was to reduce the effects of Selective Availability; with this gone, the correction can still produce useful results, but with an improvement of only about half an order of magnitude. So if a receiver with correction could achieve 5 meters of accuracy in optimum conditions, the error will now be limited to 1-2 meters. In practical terms, we do not believe it will be possible to achieve sub-meter accuracies with standard receivers and basic differential corrections. On the other hand, Carrier Phase and RTK will not produce better results than before, since they are based on techniques that are independent of Selective Availability.

Implications for Archeoguide

In practical terms we have the following options (in parentheses are values corrected for the new GPS situation without SA):

- "Low" accuracy service: accuracy within 100 meters (now within 20 meters!); low-cost, low-power, small receivers.
- DGPS system: accuracy from 5 to 15 meters (now from 2 to 8 meters?); still low-cost receivers; requires a reference station with a good antenna.
- DGPS system: accuracy from 1 to 5 meters (now from 1 to 3 meters?); higher-cost receivers; requires a reference station with a good antenna, and the receiver also needs a good antenna.
- SnapTrack system: accuracy estimated from 3 to 10 meters (now from 1.5 to 8 meters?); very low-power, very small receiver; costs to be evaluated; requires server functions.
- RTK system: accuracy in the order of centimeters; high-cost, higher-power and larger receivers, plus a high-cost reference station; the user GPS also needs a bulkier antenna.

To select the right GPS technology for Archeoguide we shall first define the generic requirements for the raw positioning of the users. As soon as those are known, a quick market survey will yield possible commercial candidates for providing test receivers. Therefore, despite the fact that this paper was meant to produce a list of possible candidates, we elected to postpone this part of the survey in order to first obtain recommendations about user raw-positioning requirements. A comprehensive, selected list of companies producing OEM GPS receiver modules follows:

o Ashtec / Magellan Corporation
o Canadian Marconi
o Dassault Sercel
o Furuno Electric
o Garmin International
o Javad Positioning Systems
o Leica Geosystems
o Motorola
o Novatel
o Racal Survey
o Rockwell Semiconductor
o Rokar International
o Starlink
o SIRF Technology
o Snaptrack
o Trimble Navigation
o Blox AG

Companies in bold should be initially considered for Archeoguide purposes. A&C 2000 already has experience with small, compact receivers for embedded applications, including the Trimble SV3 (not so small) and SV6 GPS receivers plus the Trimble PCMCIA GPS receiver (discontinued), the Ashtec G8 (credit-card-sized receiver), and the Motorola OnCore 6. In this period we are conducting a long-term evaluation of the GPS system in its new state, and we will provide the Archeoguide partners with valid results for this class of GPS receivers.

Warning: Archeoguide shall evaluate the possibility that interference from the S-band wireless link could degrade GPS receiver performance or even impair its operation. This interference could be minimized by careful positioning of the antennas of both systems.


Applicable standards

NMEA 0183 - National Marine Electronics Association
RTCM SC104 - Radio Technical Commission for Maritime Services

References

This paper has been produced with the help of Geomedia, the Italian newsletter of geomatics.
GPS'99 - A commercial market analysis - Forward Concepts
GPS World's BIG Book of GPS 2000 - GPS World magazine
http://www.utexas.edu/depts/grg/gcraft/notes/gps/gps.html
http://www.starlinkdgps.com/
http://www.navtechgps.com/


7.2.7. GPS use on Tracking Systems

Purpose

The main purpose of using a GPS system as a device for the tracking system is directly related to the determination of the spatial position of a person walking in the site, in terms of the x, y, z coordinates referred to the general reference system and of the three angles ω, φ, κ of the view axis.

The Space Resection method (used with landmarks)

The determination of the human point of view and the view-axis direction is usually addressed by a "space resection" method. The problem of space resection involves the determination of the spatial position of a camera (considered coincident with the human eye); its solution usually also yields the rotation angles of the optical axis of the camera (ω, φ, κ). Thus, space resection is a method of determining the six exterior-orientation parameters (three angles and three coordinates) of the camera. One of the most used solutions (the Church method) requires three control points imaged by the camera, and a unique solution is obtained by assuming that no geometric distortion exists at the three image points. The formulation can be extended to account for image distortion; four or more control points are then needed to determine the most probable solution by the method of least squares.

Common solutions, such as Church's, are based on the fundamental condition that the angle subtended at the camera station by any two points on the object is equal to the angle subtended at the camera station by the images of the two points on the camera plate. Three control points form a space pyramid with its apex at the exposure center O; similarly, the three image points also form a space pyramid at the exposure center O. Since the interior orientation of the camera is usually known (in photogrammetry it is determined by calibration methods), the space pyramid formed by the three image points is geometrically defined. The solution to the problem generally follows a successive-iteration procedure: an initial position of the exposure center O and an initial attitude of the camera are assumed, the spatial discrepancies between the two space pyramids are computed, and corrections are applied to the assumed parameters. The procedure is repeated until the two pyramids coincide.

From this it follows that the closer the initial values are to the real values, the faster good results are achieved. The possibility of obtaining an accurate initial value for the camera position (x, y, z) is therefore of high value in this phase.
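The following condensed sketch (not the Church method itself; the focal length, control-point data and simple collinearity model are all assumed) shows the iterative idea: starting from approximate exterior-orientation values, for instance a GPS-derived camera position, the six parameters are refined until the projected control points match their measured image positions.

    import numpy as np
    from scipy.optimize import least_squares

    def rotation(omega, phi, kappa):
        co, so = np.cos(omega), np.sin(omega)
        cp, sp = np.cos(phi), np.sin(phi)
        ck, sk = np.cos(kappa), np.sin(kappa)
        Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def project(params, points, focal):
        # simple collinearity model: world points -> image coordinates
        x0, y0, z0, omega, phi, kappa = params
        cam = (rotation(omega, phi, kappa).T @ (points - np.array([x0, y0, z0])).T).T
        return -focal * cam[:, :2] / cam[:, 2:3]

    def space_resection(control_xyz, image_xy, focal, initial):
        """initial: approximate (X, Y, Z, omega, phi, kappa), e.g. X, Y, Z from GPS."""
        control_xyz = np.asarray(control_xyz, float)
        image_xy = np.asarray(image_xy, float)
        residuals = lambda p: (project(p, control_xyz, focal) - image_xy).ravel()
        return least_squares(residuals, initial).x

The better the `initial` guess (which is where GPS comes in, as discussed next), the fewer iterations the refinement needs.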

[Figure: exposure center O (x, y, z) with rotation angles ω, φ, κ of the view axis in the X, Y, Z reference system]

GPS as a way to get initial values in Space Resection method

The GPS system can be seen as a good way to provide such initial approximate values. To use this system we have to consider that:
- GPS coordinates are given in a world geodetic system (WGS84);
- the actual accuracy can be about 10 meters without differential systems.

For the first point we have to consider the coordinate transformation from WGS84 to the local reference system. This requires some surveying to be carried out directly on the site where Archeoguide will be installed. In order to set up the site, the following steps are required (a minimal transformation sketch follows the two steps):

1. Definition of at least two Reference Points to be determined in two different reference systems: WGS84 and the local "Archeoguide Reference System".

2. Calibration and testing of the algorithms used for the coordinate transformation, with special regard to the height (Z coordinate), taking into account the local geoid height.
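A minimal sketch of how the two surveyed reference points could be used (assuming a small, essentially planar site and WGS84 positions already projected to a metric grid; all names are illustrative): the two common points fix a planar similarity transformation, and heights are shifted by the locally measured geoid/height offset.

    import numpy as np

    def fit_similarity(p_wgs, p_local):
        """p_wgs, p_local: (2, 2) arrays holding the two reference points
        in each system (easting/northing in metres)."""
        p_wgs, p_local = np.asarray(p_wgs, float), np.asarray(p_local, float)
        d_wgs, d_loc = p_wgs[1] - p_wgs[0], p_local[1] - p_local[0]
        scale = np.linalg.norm(d_loc) / np.linalg.norm(d_wgs)
        angle = np.arctan2(d_loc[1], d_loc[0]) - np.arctan2(d_wgs[1], d_wgs[0])
        c, s = np.cos(angle), np.sin(angle)
        R = scale * np.array([[c, -s], [s, c]])
        t = p_local[0] - R @ p_wgs[0]
        return R, t

    def wgs_to_local(point_xy, R, t, height=None, geoid_offset=0.0):
        """Transform a planimetric point and, optionally, correct its height."""
        xy_local = R @ np.asarray(point_xy, float) + t
        z_local = None if height is None else height - geoid_offset
        return xy_local, z_local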

For the second point we have to consider the important decision taken by the U.S.A. to discontinue Selective Availability on the GPS signals, so that the final precision is increased by about an order of magnitude.

Actual precision of GPS (as of 4 May 2000)

Experiments carried out in these days show that with a simple navigation-grade GPS receiver it is possible to obtain an absolute accuracy on the WGS84 grid reference of ~10 m. The following diagram shows the change in accuracy from 2 May 2000; it is a plot of GPS navigational errors through the SA (Selective Availability) transition prepared by Rob Conley of Overlook Systems for U.S. Space Command in Colorado Springs, Colorado. The data were measured using a Trimble SV6 receiver. The following images compare the accuracy of GPS with and without Selective Availability (SA); each plot shows the positional scatter of 6.5 hours of data (0730 to 1400 UTC) taken at one of the Continuously Operating Reference Stations (CORS) operated by the U.S. Coast Guard at Hartsville, Tennessee. On May 2, 2000, SA was no longer present. The plots show that with SA 95% of the points fall within a radius of 44.2 meters; without SA, 95% of the points fall within a radius of 4.2 meters.
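The 95% radius quoted for the CORS data is simply the 95th percentile of the horizontal error of the logged fixes; a tiny helper of the kind below (illustrative only, names assumed) is enough to reproduce the same statistic for Archeoguide's own receiver tests.

    import numpy as np

    def r95(fixes_xy, truth_xy):
        """fixes_xy: (N, 2) logged positions; truth_xy: the surveyed true position."""
        errors = np.linalg.norm(np.asarray(fixes_xy) - np.asarray(truth_xy), axis=1)
        return float(np.percentile(errors, 95))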


The need for Differential GPS

Even if this new, increased accuracy might suggest that a Differential Reference Station is less necessary, there are always at least two reasons to implement it:
- the improved, sub-metric accuracy;
- a coordinate system referenced to an "Archeoguide Reference Point".

Regarding the second item, it should be underlined that when the Reference Station is used, the final coordinates of the walking visitor do not need to be transformed, because they are already calculated in the correct system referred to the site.


Head / Motion Tracking
Manufacturer: InterSense Inc. (http://www.isense.com/precision.htm)
Models: IS-300 Orientation Tracker, IS-300 Pro, IS-600 Mark 2

Technique:
- IS-300 / IS-300 Pro: Inertial Measurement Unit (IMU) which senses angular rate of rotation, gravity and earth magnetic-field components along three perpendicular axes. The angular rates are integrated to obtain the orientation (yaw, pitch, roll) of the sensor; gravitometer and compass measurements are used to prevent the accumulation of gyroscopic drift.
- IS-600 Mark 2: IMU; ultrasonic time-of-flight distance measurements are used to obtain a starting position and to correct any drift in the inertial position and orientation tracking.

Degrees of freedom (DOF): yaw, pitch, roll

Specifications (IS-300 / IS-300 Pro / IS-600 Mark 2):
- Angular range: all orientations
- Max. angular rate: 1200°/sec
- Angular resolution: 0.02° RMS
- Angular accuracy: - / - / 0.25° RMS
- Dynamic accuracy: 1° RMS / 3° RMS / n/a
- Static accuracy: 3° RMS / 1° RMS / n/a
- Number of sensors: 1 / 4 / up to 4
- Prediction: none / 0-50 ms / 0-50 ms
- Max. update rate: up to 150 Hz / up to 500 Hz / up to 150 Hz
- Interface: RS-232C with selectable baud rates to 115,200
- Protocol: compatible with industry-standard protocol
- Price: 3,970 € / 7,935 € / 13,685 €

Properties of IS-300 / IS-300 Pro (http://www.isense.com/is300.html): jitter-free, fast response, unlimited range (the IS-300 is completely sourceless, which means no set-up, no line-of-sight constraints and a virtually unlimited operating range); the signal processor is small enough to wear on a belt for tetherless applications. Through the IMU and signal processing, the sloshy response common to inclinometers and the accumulation of drift error that plagues ordinary gyroscopes are virtually eliminated.

[Figures: IS-300 / IS-300 Pro connection diagram; IS-600 Mark 2 connection diagram]

The inertial sensing technology is not susceptible to electromagnetic interference, so the system can be used even in noisy, metal-cluttered environments. Motion prediction: the IS-300 Pro can predict motion up to 50 ms in the future, which compensates for graphics rendering delays and further contributes to eliminating simulator lag.

Properties of IS-600 Mark 2 (http://www.isense.com/is600.html): all advantages of the IS-300 and IS-300 Pro. The IS-600 Mark 2 obtains its primary motion sensing using a miniature solid-state inertial measurement unit (IMU or InertiaCube) which senses angular rate of rotation and linear acceleration along three perpendicular axes. The angular rates are integrated to obtain the orientation (yaw, pitch and roll) of the sensor, while the ultrasonic time-of-flight distance measurements are used to keep track of changes in position (x, y, z). Superior accuracy and robustness: the IS-600 Mark 2 uses InterSense's proprietary sensor-fusion algorithms to obtain superior orientation accuracy. Position tracking is performed by accelerometry with ultrasonic drift correction, not just the pure time-of-flight trilateration used by others; this results in vastly improved update rates, resolution and immunity to ultrasonic interference.

Four operating modes:
- GEOS mode: Gyroscopic Earth-Stabilized Orientation Sensing for smooth, sourceless 3-DOF orientation tracking with update rates up to 500 Hz.
- PULSAR mode: PULSed Acoustic Ranging provides wireless 3-DOF ultrasonic position tracking.
- DUAL mode: 6-DOF orientation and position tracking; the sensors operate independently of each other.
- FUSION mode: 6-DOF orientation and position tracking, using sensor-fusion algorithms to combine inertial and ultrasonic measurements.

400 ft² tracking capability: the system allows up to four X-Bars, each of which can cover a 10' x 10' area. Installation flexibility: the X-Bar is modular in design with detachable ReceiverPods, which allow the user to suit unique configuration needs such as the inside of flight simulators; shorter X-Bars are available.

Wide-Area Tracking / Multi-Point Tracking: IS-900 Product Family
Manufacturer: InterSense Inc. (http://www.isense.com/IS900/index.html)
Model: IS-900
Technique: combination of inertial and ultrasonic tracking
DOF: 6 (per station)
Resolution: position (X/Y/Z) 4 mm RMS; angular 0.2° (pitch/roll), 0.4° (yaw) RMS
Maximum update rate: 180 Hz (1 station), 180 Hz (2 stations), 120 Hz (3 stations), 90 Hz (4 stations)
Genlock options: NTSC; TTL (25-120 Hz); programmable internal sync
Prediction: 50 ms
Latency: 4-10 ms
Interface: RS-232 up to 115.2 kbaud; optional: Ethernet (Q1 00)
Protocol: industry-standard protocols; compatible with IS-300 / IS-600
Standard components: SensorFusion base unit; SoniStrips (long/short); auto-mapping station; wand or stylus station; CrystalEyes station
Options: wand station; stylus station; CrystalEyes station; 4-port expansion hub (12 SoniStrips); SoniStrip expansion packs
Future options: generic station; wrist station; HMD station
Cable length: 9 m (standard), 6 m extension (optional)
Price: -


Properties of the IS-900 product family: the IS-900CT is a miniaturized system that tracks camera orientation and position with very high accuracy over a virtually infinitely scalable range. The system uses InterSense's new Constellation expandable motion-tracking technology to give producers unrestrained use of 6-degree-of-freedom (DOF) camera motion at a far lower cost than sensitized cranes, dollies and tripods. For the first time, virtual-set and film cameras can be moved and tracked as easily as traditional cameras; the television and film industries can utilize the boundary-expanding capabilities of computer-generated virtual sets and special effects and still maintain the creative control of camera position they are accustomed to. With precise, real-time, 6-DOF, multi-point tracking, the Constellation design of the IS-900 offers users the ability to track head, hands and objects over a virtually unlimited range, in an easily installed and expandable system. Using InterSense's proprietary SensorFusion software to combine the latest advances in inertial and ultrasonic tracking, the IS-900 delivers tracking resolution in the millimeter range for position and below 0.1° for orientation. From CAVEs, PowerWalls and ImmersaDesks to rooms and buildings, the IS-900 product line offers a precision motion-tracking solution for a wide range of applications.

System overview: each IS-900 system is comprised of a SensorFusion base unit, a series of pre-assembled ultrasonic SoniStrips that can cover up to 900 m² (LAT only), and multiple omni-directional tracked stations. Standard IS-900 stations include a comfortable, lightweight 6-DOF stylus with 2 buttons, an ergonomically designed handheld wand with 5 buttons and a 2-axis proportional joystick, and a tracking station customized to attach seamlessly to the StereoGraphics CrystalEyes 3D shutter glasses.


Motion Tracking, Head / Object Tracking
Manufacturer: Ascension (http://www.ascension-tech.com/graphic.htm)
Models: LaserBIRD, 3D-BIRD
Technique: optical
DOF: 6 (position & orientation) / 3 (orientation)
Scanner field of view: +42° horizontal, +55° vertical / -
Measurement rate: 90 measurements/second / up to 160 measurements/second
Interface: RS-232 or USB / RS-232
Sensor position (LaserBIRD): operating distance 0.25 m to 2 m; accuracy 0.35 mm to 1.0 mm RMS; resolution 0.04 mm to 0.1 mm
Sensor angles (LaserBIRD): range 60° azimuth/elevation, 180° roll, or 180° azimuth, 85° elevation and roll; accuracy 0.15° to 0.5° RMS; resolution 0.02° to 0.05°
Sensor angles (3D-BIRD): range ±180° azimuth & roll, ±90° elevation; accuracy 4° RMS; resolution 0.2°
Sensor cable: - / 15 feet, extendable to 30 feet
Benefits (LaserBIRD): highly accurate performance in almost any environment; no metallic distortion or acoustic interference; highest precision of all optical trackers; compact scanner does not disrupt the workspace; unaffected by ambient light; easy interface to the host computer
Benefits (3D-BIRD): fast orientation-angle tracking without range limitations or line-of-sight restrictions; easy integration and interface without an external electronics unit; all-attitude tracking without head-pointing limits; works in metallic environments with virtually no distortion or interference
Price: 13,900 € / 1,450 €

[Figures: LaserBIRD connection diagram; 3D-BIRD connection diagram]

Head / Hand / Body Tracking
Manufacturer: Ascension (http://www.ascension-tech.com/graphic.htm)
Models: Flock of Birds, PcBird, MotionStar
Technique: magnetic
DOF: 6
Translation range: 4' (10' optional) in any direction; 10 feet in any direction; 16 feet with dual transmitter
Angular range: 180° azimuth & roll; 90° elevation
Static accuracy (position): 0.07 inch RMS; 0.3 inch RMS at 5-ft range, 0.6 inch RMS at 10-ft range
Static accuracy (orientation): 0.5° RMS; 0.5° at 5-ft range, 1.0° at 10-ft range
Static resolution (position): 0.02 inch @ 12"; 0.03 inch at 5-ft range, 0.10 inch at 10-ft range
Static resolution (orientation): 0.1° @ 12"; 0.1° RMS at 5-ft range, 0.2° at 10-ft range
Update rate: up to 144 Hz
Outputs: X, Y, Z positional coordinates and orientation angles, rotation matrix or quaternions
Interface: RS-232C with selectable baud rates up to 115,200 and RS-485 with selectable baud rates up to 500,000 / ISA bus / Ethernet, RS-232C, RS-485, SCSI
Format: binary
Modes: point or stream
Disadvantages: metal objects and stray magnetic fields in the operating volume will degrade performance
Benefits: no distortions due to conductive materials; unrestricted tracking without line-of-sight restrictions; real-time interaction with virtual images; fast, dynamic performance without degradation; cost-effective; magnetic DC sensors to overcome blocking delays; up to 90 sensors at the same time; no metallic distortion; also available wireless, with restricted functions/properties; "long-range" operation; easy expandability; no calibrations necessary; CRT sync to neutralize CRT noise
Price: Flock of Birds 2,610 € (2,320 € for Extended Range Transmitter ERT); PcBird 2,397 €; MotionStar 9,300 € (50,000 € wireless)


Head / Hand / Instrument Tracking
Manufacturer: Polhemus (http://www.polhemus.com/ourprod.htm)
Models: FASTRAK, ISOTRAK II, STAR*TRAK
Technique: electromagnetic / magnetic / electromagnetic
DOF: 6 / n/a / n/a
Position coverage:
- FASTRAK: the system provides the specified performance when the receivers are within 30 inches of the transmitter; operation over a range of up to 10 feet is possible with slightly reduced performance.
- ISOTRAK II: the system provides the specified performance when the receivers are within 30" of the transmitter; operation over a range of up to 5 feet is possible with slightly reduced performance.
- STAR*TRAK: with Long Ranger transmitter, from 27 inches to 15 feet separation between transmitter and receiver; with Super Nova transmitter, from 5 to 25 feet separation between transmitter and receiver.
Latency: 4 ms / 20 ms (without software filtering) / -
Update rate: 120 Hz, divided by the number of receivers / 60 Hz, divided by the number of receivers / 120 Hz per receiver
Static accuracy: 0.03" RMS for X/Y/Z position, 0.15° RMS for receiver orientation / 0.1" RMS for X/Y/Z position, 0.75° RMS for receiver orientation / position ±1 inch per receiver, orientation ±2° (peak) per receiver
Resolution: 0.0002 inches per inch of transmitter/receiver separation, 0.025° orientation / 0.0015" per inch of transmitter/receiver separation, 0.1° orientation / 0.4 inch and 0.75°, one receiver relative to another
Range: up to 10 feet with standard transmitter, up to 30 feet with "Long Ranger" transmitter / max. 5 feet / up to 330 feet (wireless)
Angular coverage: the receivers are all-attitude / - / -
Outputs: position in Cartesian coordinates (inches or centimeters); orientation in direction cosines, Euler angles or quaternions
Benefits: placed on the object to be tracked, miniature receivers sense the transmitted magnetic field; STAR*TRAK can simultaneously accept inputs from 32 receivers per system; "on-the-spot" direction; also available as a wired version (cheaper).
Disadvantages: large metallic objects, such as desks or cabinets, located near the transmitter or receiver may adversely affect the performance of the system.
Price: 5,850 € / 2,750 € / 63,000 € (with TRAKBELT, 8 receivers)


Eye Tracking Systems
Manufacturer: Applied Science Laboratories (http://www.a-s-l.com/products.htm)
Models: Model 210, Model 501, Model 504, Mobile 501 (values below are given in this order)

About:
- Model 210: limbus (iris-sclera boundary) and eyelid tracker capable of measuring horizontal and vertical eye movements.
- Model 501: complete eye-tracking system for use in situations where the subject can wear lightweight, head-mounted optics and must have unrestricted head movement; the system is designed to measure a subject's eye line of gaze with respect to the head.
- Model 504: video eye-tracking system for use in situations where head-mounted optics are not desirable and the stimulus presented to the subject is limited to a single surface.
- Mobile 501: mobile eye-tracking system; the control unit, VCR and video transmitter are worn by the subject in a modified backpack, and a power belt is worn around the subject's waist.

Technique: optical
Sampling (output) rate: 1000 Hz / 50 Hz or 60 Hz, 120 Hz and 240 Hz (optional) / 50 Hz or 60 Hz / 50 Hz or 60 Hz, 120 Hz and 240 Hz (optional)
Measurement principle: differential reflectivity / bright pupil-corneal reflection / bright pupil-corneal reflection / bright pupil-corneal reflection
Accuracy: 1° horizontally, 2° vertically / 0.5°-1.0° visual angle / 0.5°-1.0° visual angle / 0.5°-1.0° visual angle
Precision / resolution: 0.25° horizontally, 1° vertically / resolution 0.25° visual angle
Head movement: unlimited / unlimited / one square foot / unlimited
Visual range: 30° horizontally, 30° vertically / 50° horizontally, 40° vertically / 50° horizontally, 40° vertically / 50° horizontally, 40° vertically
Scope of supply: control unit, sensor array / control unit, head-mounted optics / control unit, remote optics / control unit, head-mounted optics, power belt, video transmitter (1000 ft range)
Generated data: vertical and horizontal eye position, event marks, 16 bits of user-generated data, elapsed time / time, X/Y eye-position coordinates, pupil diameter / time, X/Y eye-position coordinates, pupil diameter / data analysis is through the scene-camera image with superimposed cursor recorded on video tape
Price: -


3D Tracking System
Manufacturer: Origin Instruments Corporation (http://www.orin.com)
Model: DynaSight Sensor
Technique: optical
Operating wave band: near infrared
DOF: 6
Field of regard: 75° azimuth x 75° elevation
Latency: 9 to 28 milliseconds (operating-mode dependent)
Lock-on delay: 0.3 sec typical
Update rate: 65 Hz maximum for passive targets, 100 Hz maximum for active targets
Absolute accuracy: 2 mm cross-range typical, 8 mm down-range typical
Resolution: 0.1 mm cross-range typical, 0.4 mm down-range typical
Operating range: 0.1 to 1.5 m for 7 mm passive target; 0.3 to 4 m for 25 mm passive target; 0.6 to 5 m for 50 mm passive target; up to 12 m for active targets
Outputs (default data format): 8 bytes per measurement update; X, Y, Z in 16-bit two's-complement format; 0.05 mm per least significant bit
Options and accessories: international power option; high-gain passive targets; omni-directional passive targets; active target adapter; active target arrays

[Figures: DynaSight Sensor; Active Target Adapter]


XEROX Parctab
Manufacturer: XEROX (http://www.parc.xerox.com/parc-go.html)

Mobile unit:
- Dimensions: height 7.8 cm, width 10.5 cm, depth 2.4 cm
- Weight: 215 g
- Battery: 12 h continuous, or 2 weeks at 10 min/h x 8 h/day x 5 days/week
- Screen: size 6.2 x 4.5 cm, 128x64 pixels, text 21 chars x 8 lines
- Touch screen: passive-stylus touch technology, resolution 1:128 (X) and 1:64 (Y)
- Button inputs: three finger push switches
- IR communication: wavelength 880 nm, data rate 19.2k baud, protocol CSMA
- Processor / memory: 12 MHz Signetics 87C524/528; memory: V 1.0: 8k, V 2.0: 128k
- External ports: I2C bus expansion, battery recharge port

Base-station transceiver:
- IR communication: wavelength 880 nm, data rate variable 9.6k, 19.2k, 38.4k pulse-position modulation, protocol CSMA
- Interface: 38.4k baud serial connection up to 30 m in length (2-pair telecom cable); serial-line daisy-chain capability for 10 units (RJ11 4/6 jacks); external 20-pin peripheral connector

The system is intended for use in an office setting where people carry one, or a few devices, and interact with a few to a few dozen devices. Our research is concerned with people and their interactions in a world of terminals, printers, hosts, devices, information, and other people.

[Figures: XEROX Parctab; Parctab "Deathstar" base station; PARCTAB in an office setting]

Inertial Sensors
Manufacturer: Systron Donner Inertial Division (http://www.systron.com/)
Models: GyroChip Model QRS11, BEI GyroChip II, BEI GyroChip HORIZON, BEI GyroChip Model AQRS

Sensing element: micromachined piece of crystalline quartz (no moving parts) / monolithic quartz angular rotation sensor / vibrating quartz gyroscope

Specifications (performance):
- GyroChip Model QRS11: standard ranges ±50, 100, 200, 500, 1000°/sec; bandwidth (-90°) >60 Hz; threshold/resolution 0.004°/sec; output noise (DC to 100 Hz) 0.01°/sec/Hz
- BEI GyroChip II: standard ranges ±50, 100, 200, 500, 1000°/sec; bandwidth >50 Hz; threshold/resolution n/a; output noise 0.05°/sec/Hz (0.02°/sec/Hz)
- BEI GyroChip HORIZON: standard range ±90°/sec; bandwidth >18 Hz; threshold/resolution n/a; output noise 0.025°/sec/Hz
- BEI GyroChip Model AQRS: standard ranges ±64, 75°/sec; bandwidth >50 Hz; threshold/resolution n/a; output noise 0.025°/sec/Hz

Applications (for example):
- GyroChip Model QRS11: stabilization, guidance, optical line-of-sight systems, inertial/GPS navigation systems
- BEI GyroChip II: platform stabilization, GPS augmentation, camera stabilization, robotics
- BEI GyroChip HORIZON: antenna stabilization systems, GPS augmentation, vehicle location and navigation systems, precision farming
- BEI GyroChip Model AQRS: yaw stability control, vehicle navigation/location, adaptive cruise control, rollover detection

Features:
- GyroChip Model QRS11: high-performance inertial sensor, internal electronics, long operating life, fast start-up
- BEI GyroChip II: solid-state, DC input / high-level DC output, compact lightweight design, internal power regulation, POWER SAVE mode (+12 Vdc version), high reliability
- BEI GyroChip HORIZON: high reliability, low cost, micromachined sensor, DC input / DC output operation, internal power regulation, low drift
- BEI GyroChip Model AQRS: low cost, rugged, high reliability, micromachined sensor, ratiometric output signal, reverse-voltage and over-voltage protection, continuous built-in test (CBIT™)


Multi-Axis Inertial Sensor Systems
Manufacturer: Systron Donner Inertial Division (http://www.systron.com/)
Model: BEI MotionPak
What it is: a "solid-state" six-degree-of-freedom inertial sensing system used for measuring linear accelerations and angular rates in instrumentation and control applications; a highly reliable, compact and fully self-contained motion-measurement package.
Sensing elements: three orthogonally mounted "solid-state" micromachined quartz angular rate sensors, and three high-performance linear servo accelerometers.

Specifications (performance):
- Rate channels: standard ranges ±50, 100, 200, 500°/sec; bandwidth (-90°) >60 Hz; output noise (DC to 100 Hz) 0.01°/sec/Hz
- Acceleration channels: standard ranges 1, 2, 3, 5, 10 g; bandwidth >300 Hz; output noise 7.0 mV

[Figure: MotionPak theory of operation]

Compass
Manufacturer: Precision Navigation (http://www.precisionnav.com/)
Model: TCM2-20, -50, -80
What it is: electronic compass module; provides compass heading, pitch, roll, 3-axis magnetometer output and temperature; no mechanical parts.
Sensing elements: magneto-inductive sensors

Specifications (TCM2-20, -50, -80):
- Heading information: accuracy when level ±0.5° RMS, ±1.0° RMS, ±2.5° RMS; accuracy when tilted ±1° RMS, ±1.5° RMS, ±2.5° to ±3.5° RMS; resolution 0.1°; repeatability ±0.1°, ±0.3°, ±0.6°
- Tilt information: accuracy ±0.2°, ±0.4°, ±0.5°; resolution 0.1°, 0.3°, 0.5°; repeatability ±0.2°, ±0.3°, ±0.75°; range ±20°, ±50°, ±80°
Interfaces: digital (RS232C, NMEA 0183) and analog
Size: 2.50" x 2.00" x 1.25"
Price: $699 - $1,199


7.3. Database for content representation considerations

7.3.1. Survey of related standards

Formal Standards

- Art and Architecture Thesaurus: The Getty Information Institute produced this useful thesaurus in 1990 and has recently made it searchable over the World Wide Web. It is an invaluable tool for the standardised description of material culture, architecture and art in the Western world from prehistory to the present. Vocabulary is controlled through a hierarchical structure of broader/narrower terms, synonym control and other helpful tools.
- British Archaeological Thesaurus: Written by Cherry Lavell and published by the Council for British Archaeology in 1989, this was one of the earliest thesauri for British archaeology. It remains a useful companion to back issues of the British and Irish Archaeological Bibliography (and its predecessors, British Archaeological Bibliography and British Archaeological Abstracts), but is now out of print.
- International Guidelines for Museum Object Information: CIDOC Information Categories: CIDOC, the International Documentation Committee of the International Council of Museums, has developed these guidelines about what information should be recorded for museum objects, how it should be recorded, and the terminology with which this information should be recorded. This standard is particularly useful for archaeologists working in a museum setting.
- "Istituto Centrale per il Catalogo e la Documentazione", Italian Ministry of Culture, DB Standard for Archaeological Sites and Items: Italian standard for the recording of archaeological sites and related items. Includes thesauri.
- Management of Archaeological Projects (MAP2): English Heritage's guide to the management of all phases of archaeological projects. Includes guidelines for planning, fieldwork, assessment of potential, analysis, report preparation and archiving.
- Rules for the Construction of Personal, Place and Corporate Names: The National Council on Archives' 1997 guide to the recording of name information in archives. These rules include guidance on the use of non-current place names and other issues of relevance to the archaeological community.
- Social History and Industrial Classification, 2nd edition (SHIC2): The MDA's standard for classifying the subject matter of museum collections. A simple subset of SHIC2 is provided by the less formal Simple Subject Headings system, although SHIC2 should be used where possible.
- SPECTRUM: The UK Documentation Standard: Created by the MDA in 1994 as a standard for documenting museum collections, its use is required for registration with the Museums and Galleries Commission. A condensed version called SPECTRUM Essentials is available. The MDA provides a team of subject specialists, including an archaeologist, who are available to advise about the use of SPECTRUM.
- Thesaurus of Archaeological Site Types: This 1992 thesaurus has been superseded by the RCHME Thesaurus of Monument Types.
- Thesaurus of Building Materials: A Standard for Use in Architectural and Archaeological Records: An unpublished standard from the Royal Commission on the Historical Monuments of England, this thesaurus deals explicitly with the materials of which buildings are composed: animal, vegetable or mineral. Like the Thesaurus of Monument Types, terms are organized hierarchically and terminology control is provided. Users can nominate terms for inclusion by returning a form within the thesaurus to the RCHME Data Standards Unit.
- Thesaurus of Monument Types: A Data Standard for Use in Archaeological and Architectural Records: Produced by the Royal Commission on the Historical Monuments of England (RCHME) in 1995, this is a standard for use with both archaeological and architectural information, and it is actively updated by the Data Standards Unit at the RCHME. Its purpose is to standardize the terms used to describe archaeological sites or standing buildings by, for example, listing terms hierarchically, relating the levels of this hierarchy to one another, and indicating preferred terms in the case of synonyms. For example, the hierarchical structure means that RELIGIOUS monuments include the subset of MONASTERYs, with a further sub-division into BENEDICTINE MONASTERY or CISTERCIAN MONASTERY depending on the particular monument being described. Synonyms are dealt with by pointing the user to a preferred term (e.g. for 'tribunal' use COURT HOUSE). This is one of the most widely used documentation standards in UK archaeology, and a useful source of subject terms.
- Towards an Accessible Archaeological Archive. The Transfer of Archaeological Archives to Museums: Guidelines for Use in England, Northern Ireland, Scotland and Wales: The Society of Museum Archaeologists' 1995 guide, edited by Janet Owen, provides detailed information about all aspects of preparing an archive for deposit in a museum. It does not cover digital archiving explicitly.


Draft Standards

- Aerial Photography and Remote Sensing Guide to Good Practice: The Archaeology Data Service is scheduled to produce this guide in January 1998. Written by a working party of A/P and remote-sensing specialists and widely peer-reviewed, this guide deals with all aspects of creating, documenting and archiving images and interpretations. The creation of Dublin Core-style metadata records for these data is also described, and examples are provided.
- The Aquarelle project is in the process of developing several standards of relevance to archaeologists and museum specialists. The project is run by a consortium of technical partners and national heritage managers in England, France, Greece and Italy. Of particular interest is their work with SGML and multilingual thesauri.
- Archaeological Object Name Thesaurus: The MDA is in the final review process for this new archaeological standard. The thesaurus has been developed to provide guidance and common principles for the recording of object names within the archaeological profession and related disciplines, and to provide an interface with other national and international standards. Its goal is to encourage the use of, and access to, collections, archives and record systems, and to facilitate co-operation and data exchange between all individuals and institutions involved in the retrieval, research and curation of archaeological objects.
- European Bronze Age Monuments: A Multi-lingual Glossary of Archaeological Terminology: Created in 1995 by a multinational working party for the Council of Europe, this multilingual thesaurus pilot project is designed to assist in the recording of Bronze Age archaeological sites and monuments in Danish, Dutch, English and French. Though the terms within the glossary are listed in all four of these languages, the glossary itself is currently available only in English and French. When this glossary makes its public debut, it should be a very helpful resource for prehistorians throughout Europe.
- Geophysics Guide to Good Practice: Another guide in a series by the Archaeology Data Service, this standard is scheduled for publication in January 1998. Written in close collaboration with the creators of the English Heritage AML Geophysics Database, this volume complements the existing English Heritage guide to geophysical surveying. Topics covered include georeferencing survey grids, data formats, data documentation and the archiving of survey data.
- International Core Data Standard for Archaeological Sites and Monuments (Draft): Produced in 1995 by CIDOC, the International Documentation Committee of the International Council of Museums, this document guides the user in documenting archaeological sites and monuments. The goal of this standard is to facilitate the international exchange of information by encouraging standardized approaches to database structure. Useful information about naming, describing, cross-referencing and spatially referencing sites and monuments is provided, with working examples from Denmark, England, France and the Netherlands. Contributors come from these countries and from Albania, Canada, Poland, Romania, Russia and the United States.
- SMR'97 (Draft): Currently being developed by the Association of Local Government Archaeological Officers (ALGAO), English Heritage and the Royal Commission on the Historical Monuments of England, this data standard is designed to unify the structure of Sites and Monuments Records databases throughout England.

Informal Standards

Geophysical Survey Database — This database, created by the Ancient Monuments Laboratory of English Heritage, is a well-crafted recording structure for geophysical survey data. Terminology is controlled in all fields describing the survey techniques or the archaeological site itself. The RCHME Thesaurus of Monument Types, in particular, is incorporated to control subject and period terms.

IFA codes — The Institute of Field Archaeologists (IFA) provides a Code of Conduct with which members are meant to comply. This Code, together with other IFA guidelines, lays down minimum standards for work in many areas of archaeological endeavour, including the deposition of archival material.

Simple Subject Headings — Designed by the MDA for use in small museums, this tool helps control terminology used to classify museum collections. Four broad categories are included: community life, domestic and family life, personal life, and working life. This is a simplified subset of the MDA's Social History and Industrial Classification.

Other Relevant Standards

There are a number of specialized standards which are useful in archaeology, but which are too specialized to address here. These include standards for recording coastal and underwater archaeology sites (e.g. the National Register of Historic Vessels, the NMR/SMR Maritime Type Standards, the Shipwreck Identification Thesaurus by A. M. Elkerton, the Scottish Institute of Maritime Studies draft Thesaurus of Vessel Types, and the University of Wales Department of Maritime Studies and International Transport Archaic and Alternative Port and Place Names lists). A number of standards have been developed for other archaeological specialties, including industrial archaeology (the Association for Industrial Archaeology's Index Record for Industrial Sites), military archaeology (the National Army Museum's Thesaurus for Cataloguing Military Collections), and urban archaeology (the RCHME and English Heritage Data Standard for Urban Archaeological Databases). The Archaeology Data Service will rely on comments from specialists in these areas for guidance in adapting these standards for generic resource discovery purposes. Architectural recording is an area closely related to archaeological recording, which has developed its own standard practices. Examples of documentation standards include the Council of Europe's Core Data Index to Historic Buildings and Monuments of the Architectural Heritage and the Getty Information Institute's Guide to the Description of Architectural Drawings. While there is a great deal of overlap between these standards and those listed above, we suspect that a large number of standards specifically designed for architectural recording exist and that the Archaeology Data Service is not yet aware of the full range.

Summary of Archaeological/Museums Data Standards

ADS (1997) Aerial Photography and Remote Sensing Working Party draft metadata specifications
ADS/English Heritage (1997) Geophysics Working Party draft metadata specifications
Association for Industrial Archaeology (1993) Index Record for Industrial Sites (IRIS)
British Museum MAGUS system structure and in-house thesauri
CAS recording/coding forms (occasionally used as a loose and informal standard)
CBA (1989) British Archaeological Thesaurus by C. Lavell
CHIN (1994) Archaeological Sites Data Dictionary
CHIN (1993) Humanities Data Dictionary
CIDOC (1995) Archaeological Sites Working Group draft Sites and Monuments Standards
CIDOC (1995) International Guidelines for Museum Object Information: CIDOC Information Categories
Council of Europe (1995) draft European Bronze Age Monuments Multi-lingual Glossary
Council of Europe (1995) Core Data Index to Historic Buildings and Monuments of the Architectural Heritage
DOE (1983) Ancient Monuments Records Manual and County Sites and Monuments Records
English Heritage's Geophysics Database
English Heritage's MAP II
European Commission's Research on Advanced Communications for Europe's Remote Access to Museums Archive Project
Getty Art History Information Program (1990) Art and Architecture Thesaurus
Getty Information Institute (1997) draft Thesaurus of Geographic Names
IAE/CIPEG/ICOM Multilingual Egyptological Thesaurus
International Taxonomic Databases Working Group for plant taxonomy
MDA (in progress) draft Archaeological Object Name Thesaurus
MDA (1994) Social History and Industrial Classification (SHIC) subject classification for museum collections
MDA (1994) SPECTRUM
MGC (1992) Standards in the museum care of archaeological collections 1991 (draft by C. Paine)
MOL Multimimsy data structure and associated authority lists
MOLAS recording/coding forms (occasionally used as a loose and informal standard)
NASA's Center for Aerospace Information Thesaurus
National Army Museum (1993) Thesaurus for Cataloguing Military Collections
National Register of Historic Vessels
NMR/SMR Maritime Type Standards
PORMR (1992) Shipwreck Identification Thesaurus by A M Elkerton
RCAHMS's Canmore and other database structures
RCAHMW's ENDEX database structure
RCHME's MONARCH database structure
RCHME's Listed Building Database structure
RCHME (1986) Thesaurus of Archaeological Terms
RCHME (1992) Thesaurus of Archaeological Site Types
RCHME (1995) Thesaurus of Monument Types
RCHME (1996) Thesaurus of Building Materials
RCHME and ACAO (1993) Data Standard for the Extended National Archaeological Record
RCHME and English Heritage (1993) Data Standard for Urban Archaeological Databases
RCHME/ALGAO/English Heritage (1997) draft SMR Data Standard
Royal Botanic Gardens Vascular Plant Families and Genera List
Scottish Institute of Maritime Studies (1993) draft Thesaurus of Vessel Types
Smithsonian Institution's World List of Insect Families
SPECIES 2000 index of world-wide species for joint networked operation
University of Wales Department of Maritime Studies and International Transport (1993) Archaic and Alternative Port and Place Names

Communication Standards

SGML DTDs: Aquarelle, ICPSR Data Documentation Initiative, CIMI
WHOIS++
Z39.50 Profiles: CEOS, CIMI (for project CHIO), CIP, ESRC's Global Environmental Change Data Network, Global Locator Service for Environmental Information, ISITE

Metadata Standards

Dublin Core (DC) and the Warwick Framework
FGDC Content Standard for Digital Geospatial Metadata (FGDC)
MAchine Readable Catalogue (MARC)
NASA's Global Change Master Directory Interchange Format (DIF)
Resource Organization and Discovery in Subject-Based Services (ROADS)


7.3.2. The Core Data Standard for Archaeological Sites and Monuments

The following are the fields used in this Standard to describe an archaeological site or monument.

Reference number: The number or combination of characters which uniquely identifies each monument or site recorded by the organization, e.g. 615649.

Name of monument or site: A free-text field which records the name or names by which a monument or site was or is known, e.g. Stonehenge, Little Big Horn, Field system at West Cliff.

Date of compilation and date of last update: This sub-section records the date of compilation and last update.

Date of compilation: The date the core data record was created. The ISO standard for date is recommended, e.g. 1986-06-22.

Date of last update: This date will be modified whenever the record is updated. The ISO standard for date is recommended, e.g. 1993-07-12.

Originator of reference: Name of the individual or organization responsible for curating the monument or site record. This information is useful in establishing the provenance of the record when data is exchanged between recording organizations, e.g. National Heritage Conservation Commission.

Cross reference to related records of monuments or sites: This sub-section enables cross-referencing to related records, for example relating a record to its wider complex record, e.g. a house within a settlement. It is optional and can be repeated.

Cross reference to documentation: This sub-section enables cross-referencing to the published and unpublished documentation associated with the site or monument. It is optional and can be repeated.

Cross reference to archaeological events: This sub-section makes it possible to relate, for example, records of archaeological excavations or surveys to those of the monument or site. Where multiple events have occurred at a monument or site, e.g. a survey followed by excavation, separate entries in this sub-section should be completed. It is an optional sub-section which can be repeated.

Location: This is a mandatory section which locates the monument or site. Any combination of the sub-sections defined below may be employed to identify the location of the monument or site. More than one type of sub-section may be used to more closely define the location or to make otherwise ambiguous locations more precise. It should be noted that at least one sub-section must be used but that no individual sub-section is mandatory.

Administrative Location
Site location
Address
Cadastral reference/Land Unit
Cartographic reference

Type: An entry is mandatory and must be linked to an entry in Section 4 (Dating), e.g. Villa/Roman. Controlled vocabulary is necessary and should include "unknown". This section can be repeated to accommodate changes in type at a monument or site through time.

Monument or site type
Monument or site category

Dating: This is a mandatory section allowing for precise dating when it is known, or date ranges or periods when it is imprecise. This section can be repeated. Sub-section 3.4.1 is mandatory, but one or more of the optional sub-sections that follow it may be employed to more closely define the dating.


Cultural period
Century
Date range
Scientific and absolute dates

Physical Condition: This section is used to record the physical condition of the monument or site and the date of assessment. It is optional and can be repeated. It may be useful for the continued assessment of the management of the monument or site to maintain entries in this section over time. This will enable damage or deterioration to be logged. It may also be necessary to include additional fields to record management details, depending on the functions of the recording organization.

Condition
Date condition assessed

Designation/Protection Status: This is an optional section allowing for a statement on whether the monument or site is designated or protected and if so the type of designation or protection and the date at which it was granted. This section can be repeated.

Type of designation or protection
Date of designation or protection
Reference number
Originator of reference

Archaeological Summary: This optional section enables a brief free-text description of the monument or site.
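To make the structure above more concrete, the short sketch below maps the main sections of the Core Data Standard onto a simple record type in Python. The class, the field names and the example values (a record for the Temple of Hera at Olympia) are illustrative assumptions for this document, not part of the CIDOC standard itself.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CoreDataRecord:
    """Illustrative mapping of the CIDOC Core Data Standard sections onto a record."""
    reference_number: str                       # mandatory unique identifier, e.g. "615649"
    name: str                                   # name(s) by which the monument or site is known
    date_of_compilation: str                    # ISO date, e.g. "1986-06-22"
    date_of_last_update: str                    # ISO date, e.g. "1993-07-12"
    originator_of_reference: str                # curating organization
    location: List[str]                         # at least one location sub-section must be used
    site_type: List[str]                        # controlled vocabulary, linked to dating
    dating: List[str]                           # cultural period, century, date range, ...
    condition: Optional[str] = None             # optional physical condition assessment
    designation: Optional[str] = None           # optional protection status
    archaeological_summary: Optional[str] = ""  # optional free-text description
    cross_references: List[str] = field(default_factory=list)  # optional, repeatable

# Hypothetical example record for a site at ancient Olympia
record = CoreDataRecord(
    reference_number="615649",
    name="Temple of Hera, Olympia",
    date_of_compilation="2000-06-01",
    date_of_last_update="2000-06-15",
    originator_of_reference="National Heritage Conservation Commission",
    location=["Administrative location: Elis, Greece"],
    site_type=["Temple/Archaic"],
    dating=["Cultural period: Archaic"],
)
print(record.reference_number, record.name)
```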


7.3.3. The Dublin Core: Metadata on Archaeology

The Dublin Core Metadata Element Set, or Dublin Core, is a set of descriptive elements used to provide a simple means of describing networked electronic information resources in order to aid more effective discovery and retrieval. The fifteen elements listed below, identified here by the prefix DC, are to be used for the description of archaeological sites and objects.

1. DC.title: Title (Label: TITLE)

"The name given to the resource by the CREATOR or PUBLISHER." DC.creator2  Author or Creator Label: CREATOR

"The person(s) or organization(s) primarily responsible for the intellectual content of the resource. For example, authors in the case of written documents, artists, photographers, or illustrators in the case of visual resources."

3. DC.subject: Subject and Keywords (Label: SUBJECT)

"The topic of the resource, or keywords or phrases that describe the subject or content of the resource. The intent of the specification of this element is to promote the use of controlled vocabularies and keywords. This element might well include scheme—qualified classification data (for example, Library of Congress Classification Numbers or Dewey Decimal numbers) or scheme—qualified controlled vocabularies (such as MEdical Subject Headings or Art and Architecture Thesaurus descriptors) as well."

4. DC.description: Description (Label: DESCRIPTION)

"A textual description of the content of the resource, including abstracts in the case of document—like objects or content descriptions in the case of visual resources. Future metadata collections might well include computational content description (spectral analysis of a visual resource, for example) that may not be embeddable in current network systems. In such a case this field might contain a link to such a description rather than the description itself."

5. DC.publisher: Publisher (Label: PUBLISHER)

"The entity responsible for making the resource available in its present form, such as a publisher, a university department, or a corporate entity. The intent of specifying this field is to identify the entity that provides access to the resource."

6. DC.contributors: Other Contributors (Label: CONTRIBUTORS)

"Person(s) or organization(s) in addition to those specified in the CREATOR element who have made significant intellectual contributions to the resource but whose contribution is secondary to the individuals or entities specified in the CREATOR element (for example, editors, transcribers, illustrators, and convenors)."

7. DC.date: Date (Label: DATE)

"The date the resource was made available in its present form. The recommended best practice is an 8-digit number in the form YYYYMMDD as defined by ANSI X3.30—1985. In this scheme, the date element for the day this is written would be 19961203, or December 3, 1996. Many other schema are possible, but if used, they should be identified in an unambiguous manner."

8. DC.type: Resource Type (Label: TYPE)


"The category of the resource, such as home page, novel, poem, working paper, technical report, essay, dictionary. It is expected that RESOURCE TYPE will be chosen from an enumerated list of types. A preliminary set of such types can be found at the following URL:

http://www.roads.lut.ac.uk/Metadata/DC-ObjectTypes.html

9. DC.format: Format (Label: FORMAT)

"The data representation of the resource, such as text/html, ASCII, Postscript file, executable application, or JPEG image. The intent of specifying this element is to provide information necessary to allow people or machines to make decisions about the usability of the encoded data (what hardware and software might be required to display or execute it, for example). As with RESOURCE TYPE, FORMAT will be assigned from enumerated lists such as registered Internet Media Types (MIME types). In principle, formats can include physical media such as books, serials, or other non-electronic media."

10. DC.identifier: Resource Identifier (Label: IDENTIFIER)

"String or number used to uniquely identify the resource. Examples for networked resources include URLs and URNs (when implemented). Other globally-unique identifiers, such as International Standard Book Numbers (ISBN) or other formal names would also be candidates for this element."

11. DC.source: Source (Label: SOURCE)

"The work, either print or electronic, from which this resource is derived, if applicable. For example, an html encoding of a Shakespearean sonnet might identify the paper version of the sonnet from which the electronic version was transcribed."

12. DC.language: Language (Label: LANGUAGE)

"Language(s) of the intellectual content of the resource. Where practical, the content of this field should coincide with the Z39.53 three character codes for written languages. See: http://www.sil.org/sgml/nisoLang3-1994.html"

13. DC.relation: Relation (Label: RELATION)

"Relationship to other resources. The intent of specifying this element is to provide a means to express relationships among resources that have formal relationships to others, but exist as discrete resources themselves. For example, images in a document, chapters in a book, or items in a collection. A formal specification of RELATION is currently under development. Users and developers should understand that use of this element should be currently considered experimental."

14. DC.coverage: Coverage (Label: COVERAGE)

"The spatial locations and temporal durations characteristic of the resource. Formal specification of COVERAGE is currently under development. Users and developers should understand that use of this element should be currently considered experimental."

15. DC.rights: Rights Management (Label: RIGHTS)

"The content of this element is intended to be a link (a URL or other suitable URI as appropriate) to a copyright notice, a rights-management statement, or perhaps a server that would provide such information in a dynamic way. The intent of specifying this field is to allow providers a means to associate terms and conditions or copyright statements with a resource or collection of resources. No assumptions should be made by users if such a field is empty or not present."


7.3.4. Internet Survey of Sites on DB Standards on Archaeology

For each site surveyed, the title, URL, description, and author (where given) are listed below.

1. The International Committee for Documentation of the International Council of Museums (CIDOC)
   URL: http://www.cidoc.icom.org
   Description: Home page of CIDOC, an international committee with over 750 members in 60 countries.

2. A European Core Data Standard for Archaeological Sites and Monuments
   URL: http://www.natmus.min.dk/cidoc/archsite/coredata/e_cds.htm
   Description: There are a few words of introduction to the Core Data Standard and some paragraphs about the aim of the Core Data Standard and its seven main sections, about the future plans and the reference of the project.
   Author: CIDOC: Dominique Guillot & Henrik Jarl Hansen

3. Electronic Communication on Diverse Data - The Role of the CIDOC Reference Model
   URL: http://www.geneva-city.ch/musinfo/cidoc/oomodel/CRMRole.html
   Description: It's an introduction to the Core Data Standard.
   Author: Dr. M. Doerr, ICS-FORTH, Crete, Greece - N. Crofts, DSI, Geneva, Switzerland

4. CIDOC Core Data Standard for Archaeological Sites and Monuments: Appendix. Existing Data Models
   URL: http://www.natmus.min.dk/cidoc/archsite/coredata/archapp1.htm
   Description: This appendix contains four examples of national archaeological record data models (Denmark, England, France, Netherlands).
   Author: CIDOC

5. CIDOC Core Data Standard for Archaeological Sites and Monuments: Implementing the Core Data Standard
   URL: http://www.natmus.min.dk/cidoc/archsite/coredata/archimp1.htm
   Author: CIDOC

6. CIDOC Core Data Standard for Archaeological Sites and Monuments: 3. Core Data Standard
   URL: http://www.natmus.min.dk/cidoc/archsite/coredata/archst0.htm
   Description: From this page you can move to the pages about the definitions of the sections, sub-sections and fields proposed for the Core Data Standard.
   Author: CIDOC

7. Data Example of the CIDOC Reference Model - Epitaphios GE34604
   URL: http://www.geneva-city.ch/musinfo/cidoc/oomodel/epitaphios.htm
   Description: There is an example of how data appear under the CIDOC Reference Model.
   Author: Martin Doerr, ICS-FORTH, Crete, Greece - Ifigenia Dionissiadou, Benaki Museum, Athens, Greece

8. CIDOC Conceptual Reference Model Information Groups
   URL: http://www.geneva-city.ch/musinfo/cidoc/oomodel/info_groups.html
   Description: It's a long section with an introduction to the Conceptual Reference Model and the scheme of the database with a description of the information.
   Author: Ifigenia Dionissiadou - Martin Doerr - Pat Reed - Nick Crofts

9. ICCD Standard - Standard catalografici
   URL: http://www.iccd.beniculturali.it/standard/index.html
   Description: It's an index of the Italian Core Data Standard.
   Author: ICCD, Ministero per i beni e le attività culturali

10. A Model for the Description of Archaeological Archives
   URL: http://www.eng-h.gov.uk/archives/
   Description: This document describes the model used by the Centre for Archaeology to describe its archaeological archives. The model comprises: a model of the conceptual organisation of the archive; a description of the component classes of material; a definition of the data elements required to describe each archival entity.
   Author: English Heritage

11. Computers Archaeology databases - Glasgow
   URL: http://www.gla.ac.uk/Acad/Archaeology/staff/jwh/computing/compdata.html
   Description: It's a list of 23 clickable links.
   Author: University of Glasgow

12. Archaeology Data Service - Standard in Archaeology - Data Documentation and Content Standard - Spatial Standard
   URL: http://ads.ahds.ac.uk/project/userinfo/standards.html#draft
   Description: It's a list of clickable links from the ADS (Archaeology Data Service). The aim of the ADS is to collect, describe, catalogue, preserve, and provide user support for digital resources that are created as a product of archaeological research. The ADS also has a responsibility for promoting standards and guidelines for best practice in the creation, description, preservation and use of spatial information across the AHDS as a whole. For those classes of archaeological data where there do exist archival bodies, the role of the ADS will be to collaborate with the appropriate national and local agencies to promote greater use of existing services.

13. Archaeology Data Service - Guidelines for Cataloguing Datasets with the ADS, Version 1
   URL: http://ads.ahds.ac.uk/ahds/project/userinfo/catalogue.html#kinds
   Description: This site is about cataloguing datasets with the Archaeology Data Service. Its sections are: I. An Invitation to Catalogue Data with the ADS; II. Is it more appropriate to catalogue datasets or deposit data with the ADS?; III. What kinds of datasets can be included in the ADS catalogue?; IV. Five reasons to catalogue datasets with the ADS; V. Does the ADS catalogue commercial datasets?; VI. What will cataloguing with the ADS cost?; VII. What information is contained in the ADS catalogue?; VIII. Creating catalogue records to the ADS; IX. Submitting catalogue records to the ADS; X. Contacting the ADS.

14. Archaeological Holdings Search System
   URL: http://ads.ahds.ac.uk/catalogue/
   Description: It's an on-line catalogue. To begin searching, simply enter a word or phrase about the archaeology of the British Isles.
   Author: Archaeology Data Service, Department of Archaeology, University of York, King's Manor, York YO1 7EP, UK. Phone +44 (0)1904 433 954, fax +44 (0)1904 433 939, e-mail [email protected]

15. A Manual and Data Standard for Monument Inventories
   URL: http://www.rchme.gov.uk/midas/index.html
   Description: MIDAS has been written for anyone who is thinking about creating a new inventory of monuments. It is also for those who already maintain a monument inventory and want to be certain that their records will stand the test of time by adopting a common standard. MIDAS contains a list of the information you need to record, and how to record it, in order to compile an inventory of monuments.
   Author: FISHEN (Forum on Information Standards in Heritage, England)


16. Monument Class Descriptions - English Heritage
   URLs: http://www.eng-h.gov.uk/mpp/mcd/intro2.htm, http://www.eng-h.gov.uk/mpp/mcd/jump3top.htm, http://www.eng-h.gov.uk/mpp/mcd/mcdtop1.htm
   Description: It's an on-line database with a group of circa 225 Monument Class Descriptions. Each entry sets out a synthesis of current knowledge about the monument class concerned (definition, date, general description, …). You can make a selection from one of the pick-list boxes.
   Author: English Heritage

17. National Archaeological Database - Archaeology & Ethnography Program - Database, GIS, Applications
   URL: http://www.cast.uark.edu/products/NADB/
   Description: It's a USA bibliographic inventory (query on-line); there are texts about Native American… Thematic maps (GIS) about USA archaeology.
   Author: National Park Service (sponsor); Center for Advanced Spatial Technologies, University of Arkansas

18. Archaeology Data Service Standards Relevant to Archaeologists
   URL: http://ads.ahds.ac.uk/ahds/project/metadata/wrkshp1_ann5_2.html
   Description: There is a list of standards relevant to archaeologists (not a clickable list).
   Author: Alicia L. Wise

19. English Heritage web site
   URL: http://www.english-heritage.org.uk/
   Description: Homepage.

20. The English Heritage Geophysical Survey Database
   URLs: http://www.eng-h.gov.uk/SDB/, http://www.eng-h.gov.uk/sdb-cgi/wow/sdb.dbquery_form
   Description: This Geophysical Survey Database aims to provide an on-line index of the archaeological geophysical surveys undertaken by the Archaeometry Branch of the Ancient Monuments Laboratory. There is a clickable map of the Logical Data Structure of the Geophysical Survey Database: clicking on any of the tables (yellow rectangles) in the figure takes you to a page describing the structure and fields of that table. Five maps (whole of Great Britain, South East of England, South West, Midlands, North) depict the distribution of archaeological geophysical surveys recorded in the database. Clicking on a map will query the database to produce a list of surveys located within a 10 km square centred on the point specified (it is a GIS). There is also a page with a large form allowing the database to be queried on a number of the more useful parameters.

21. Introduction to Multimedia in Museums
   URL: http://www.rkd.nl/pblctns/mmwg/home.htm
   Author: Getty Information Institute - ICOM


7.3.5. Geographic Information Systems

Introduction

A Geographic Information System (GIS) is an organized collection of computer hardware, software, geographic data, and personnel designed to efficiently capture, store, update, manipulate, analyze, and display all forms of geographically referenced information. In other words, it is a computer system for capturing, storing, checking, integrating, manipulating, analyzing and displaying data related to positions on the Earth's surface. Typically, a Geographical Information System (or Spatial Information System) is used for handling maps of one kind or another. These might be represented as several different layers, where each layer holds data about a particular kind of feature. Each feature is linked to a position on the graphical image of a map.

Oracle8 Spatial Cartridge - Features Overview

Oracle8 Spatial Cartridge is an integral component of Oracle's Network Computing Architecture.

Product Summary

Oracle8 Spatial Cartridge solves two problems: how to efficiently store, access, and manage both spatial and attribute data in a single database, and how to improve performance for very large databases holding hundreds of gigabytes of spatial data. Spatial data is stored using the standard Oracle number data type for the data values, and the raw data type for the index. This allows for any mix of standard Oracle8 tables and spatial data tables, without needing to employ different access methods or languages to retrieve data: all you need is SQL.

Geometric Representation Of Data

The Oracle8 Spatial Cartridge data model is a hierarchical structure consisting of elements, geometries, and layers, which correspond to representations of spatial data. Layers are composed of geometries, which in turn are made up of elements. Oracle8 Spatial Cartridge supports three basic geometric forms that represent spatial data:

- Points and point clusters: Points can represent locations of buildings, fire hydrants, utility poles, oil rigs, boxcars, or tagged wildlife.
- Lines and line strings: Lines can represent roads, railroad lines, utility lines, or fault lines.
- Polygons and complex polygons with holes: Polygons can represent outlines of cities, districts, flood plains, or oil and gas fields. A polygon with a hole can geographically represent a parcel of land surrounding a patch of wetlands.

Spatial data is modeled in layers, all defined by the same object space and coordinate system. For example, the spatial representation of a city might include separate layers for outlines of political districts or socio-economic neighborhoods, every business and domestic location, and the maze of water, gas, sewer, and electrical lines. Because all these layers share a common object space, they can be related through their spatial locations.

Spatial Indexing

Oracle8 Spatial Cartridge introduces spatial indexing to relational databases. The object space is subject to quad-tree decomposition and the results define an exclusive and exhaustive cover of every element stored in a Spatial Cartridge layer. Such covers are sometimes referred to as "tiles." Oracle8 Spatial Cartridge can use either fixed- or variable-sized tiles to cover geometry. As spatial data is added to your database, the spatial index subdivides ("tessellates") the covering tiles into multiple tiles, preserving the spatial organization of the data. Database designers can specify the number of times a geometry should be tessellated to optimize the coverage with smaller and smaller fixed-size tiles. Designers can also choose to use a fixed number of tiles, which causes the geometry to tessellate into variable-sized covering tiles. Using either smaller fixed-size tiles or more variable-size tiles provides a better fit of the tiles to geometry represented by the data. The Spatial index is a table with index entries stored using the raw data type. You can create a standard Oracle b-tree index on this table for improved performance.
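The following sketch illustrates the idea of covering a geometry with quad-tree tiles, as described above. It is not Oracle's implementation; it is a simplified model, assuming a square unit object space and using a geometry's bounding box, that subdivides the space a fixed number of times and keeps the tiles that intersect the geometry.

```python
def covering_tiles(bbox, space=(0.0, 0.0, 1.0, 1.0), depth=3):
    """Return the fixed-size quad-tree tiles (at the given depth) that intersect bbox.

    bbox and space are (xmin, ymin, xmax, ymax); depth is the number of times the
    object space is subdivided, so smaller tiles give a tighter fit to the geometry.
    """
    xmin, ymin, xmax, ymax = space
    if bbox[0] > xmax or bbox[2] < xmin or bbox[1] > ymax or bbox[3] < ymin:
        return []                      # this tile does not intersect the geometry at all
    if depth == 0:
        return [space]                 # smallest tile size reached: keep this covering tile
    xmid, ymid = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    quadrants = [
        (xmin, ymin, xmid, ymid), (xmid, ymin, xmax, ymid),
        (xmin, ymid, xmid, ymax), (xmid, ymid, xmax, ymax),
    ]
    tiles = []
    for quadrant in quadrants:
        tiles.extend(covering_tiles(bbox, quadrant, depth - 1))
    return tiles

# Bounding box of a hypothetical monument footprint within a unit object space:
print(covering_tiles((0.40, 0.40, 0.55, 0.45), depth=3))
```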


Spatial Queries

Oracle8 Spatial Cartridge uses a two-tier query model to resolve spatial queries and spatial joins. Two distinct operations are performed in order to resolve queries: a primary and a secondary filter operation. This technique significantly reduces load and query processing overhead and assures excellent scalability for access performance. Performance is more a factor of the amount of data you actually retrieve, rather than the total size of the database. The primary filter permits fast selection of a small number of candidate records to pass along to the secondary filter. The primary filter uses approximations in order to reduce computational complexity. The secondary filter applies exact computational geometry to the result set of the primary filter. These exact computations yield the final answer to a query. The secondary filter operations are computationally more intense, but they are only applied to the relatively small result set from the primary filter. Queries can be spatially constrained, as defined by an "area of interest" chosen by the user. Eliminating data outside the area of interest from consideration during queries ensures optimum performance levels. For example, a typical primary filter operation would be to compare the index covering tiles of a user-defined query window to the index entries of data stored in the database. The results of the primary filter are dependent on the size and fit of the covering tiles. The secondary filter would use the Spatial Cartridge relational functions to determine the exact result set desired such as those entries completely inside or outside the query area.
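The primary/secondary filter idea can be sketched as follows. This is an illustration of the principle rather than the Spatial Cartridge API: the primary filter works on rectangular approximations, the secondary filter applies a more exact (here deliberately simple) geometric test, and the layer and feature names are invented for the example.

```python
def bbox_intersects(a, b):
    """Primary filter: cheap test on rectangular approximations (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def exact_overlap(geometry, window):
    """Secondary filter: more expensive test; here, any vertex inside the query window."""
    return any(window[0] <= x <= window[2] and window[1] <= y <= window[3]
               for x, y in geometry["vertices"])

def spatial_query(layer, window):
    """Two-tier query: bounding-box candidates first, exact geometry only on that subset."""
    candidates = [g for g in layer if bbox_intersects(g["bbox"], window)]   # primary filter
    return [g["id"] for g in candidates if exact_overlap(g, window)]        # secondary filter

# Hypothetical layer of site features and an "area of interest" query window:
layer = [
    {"id": "wall-1",  "bbox": (0, 0, 2, 1), "vertices": [(0, 0), (2, 0), (2, 1)]},
    {"id": "altar-7", "bbox": (5, 5, 6, 6), "vertices": [(5, 5), (6, 6)]},
]
print(spatial_query(layer, window=(1, 0, 3, 2)))   # -> ['wall-1']
```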

Optimized VLDB Performance Through Data Partitioning

Partitioning is a method of dividing tables into individual segments of table space, called partitions, that are easier to manage than a single large table. Oracle8 Spatial Cartridge supports Oracle8 range partitioning for the administration of large data sets. For example, in a GIS application, spatial data for each state or province could be stored in a separate partition of a large table. Data related to a particular state would be physically clustered for faster storage and retrieval. In addition, Oracle8 Spatial Cartridge supports a table-based partitioning method especially appropriate for multidimensional point data such as bathymetric soundings or stellar cartography. The point-data partitioning technique takes advantage of the spatial index structure to sort and store point data in multiple partitions that subdivide dynamically and automatically when required. Oracle8 Spatial Cartridge maintains the spatial organization of data throughout the database, grouping data that is dimensionally related. When a table containing point data for a specific region of space becomes too dense to maintain fast access times, it subdivides into multiple partitions based on its spatial organization. Because Oracle8 Spatial Cartridge does this automatically, the database designer only needs to supply a best estimate of a comfortable maximum partition size for a particular application.

Availability And Release Information

Oracle8 Spatial Cartridge is currently sold as an option to the Oracle8 Server, and is fully compatible with other Oracle8 options, including the procedural and parallel query options. Users with existing Oracle environments can add this new technology while fully preserving their investment in relational technology.


7.3.6. Projects and References

ARCHTERRA project (INCO-COPERNICUS: 977054 ARCHTERRA BLACKBOARD), with which we should establish contacts. Available from: http://odin.let.rug.nl/~leusen/archterra/

Miller, P., 1996, An application of Dublin Core from the Archaeology Data Service. Available from: http://ads.ahds.ac.uk/project/metadata/dublin.html

CIDOC, 1995, Draft International Core Data Standard for Archaeological Sites and Monuments. ICOM-CIDOC. Available from: http://www.natmus.min.dk/cidoc/archsite/coredata/arch1.htm

FGDC, 1994, Content Standards for Digital Geospatial Metadata. Federal Geographic Data Committee. Available from: http://geochange.er.usgs.gov/pub/tools/metadata/standard/metadata.html

OpenGIS Consortium Home Page. Open GIS Consortium. Available from: http://www.opengis.org/homepage.html

The Aquarelle Home Page. Available from: http://aqua.inria.fr/

Gill, T., Grout, C. & Smith, L., 1997, Visual Arts, Museums and Cultural Heritage Information Standards: a domain-specific view of relevant standards for networked information discovery. Visual Arts Data Service. Available from: http://vads.ahds.ac.uk/standards.html

Wise, A. & Miller, P., 1997, Why Metadata Matters in Archaeology. Internet Archaeology. Available from: http://intarch.ac.uk/journal/issue2/wise_index.html


7.4. Protocols / (de facto) standards

7.4.1. MPEG-4

Introduction

MPEG-4 is an ISO/IEC standard developed by the Moving Picture Experts Group Committee. It provides standardized technological elements enabling the integration of the production, distribution and content access paradigms offered by the World Wide Web, Digital Television and Interactive Graphics Applications. It provides a set of tools enabling multimedia authors to create content in an Object-Oriented fashion that allows for far greater flexibility and reusability than was possible before. It also provides ways to manage authors’ rights. For network service providers MPEG-4 offers transparent information, which is interpreted and translated into the appropriate physical signaling messages of each network. And finally, for the end-users, it provides the ability to interact with the received content (within the constraints set forth by the author of the content) while avoiding the risk of proprietary formats and players.

MPEG-4 Objects for Content Creation, Manipulation and Transmission

The key to obtaining high levels of content reusability, flexibility and interaction is the notion of objects. An audio-visual scene is composed of several Audio-Visual Objects (AVOs) organized in a hierarchical fashion: a general AVO is composed of several AVOs organized in a tree structure, and at the leaves of this structure we find primitive AVOs such as a 2D fixed background or the voice of a person. MPEG-4 defines efficient coded representations for each such primitive object so that transmission of content may achieve as high a throughput as possible. The standard provides the tools and semantics for the synthesis of a composite AVO from primitive AVOs as well as for the synthesis of a scene from general AVOs. The standard also allows for AVO manipulation within a scene, such as dragging an object or erasing it. The composition of AVOs can be very useful when, for example, binding together a visual object with an audio clip describing it. AV object data are transmitted in one or more so-called elementary streams, which are multiplexed and synchronized in appropriate ways. Each stream is characterized by the quality of service (QoS) it requests from the network. An Access Unit Layer uses appropriate multiplexers to make the transport of these data streams possible over various heterogeneous networks.
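As an illustration of this hierarchical composition, the sketch below builds a small scene tree of audio-visual objects and walks it. The class name and the example scene (a reconstructed temple with a narrating voice over a fixed background) are assumptions for this illustration and do not reflect MPEG-4 systems syntax.

```python
class AVObject:
    """A node in a scene tree: a primitive AVO (leaf) or a composite AVO (with children)."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = list(children or [])

    def compose(self, indent=0):
        """Walk the tree and print how the scene is composed from its parts."""
        print(" " * indent + self.name)
        for child in self.children:
            child.compose(indent + 2)

# Hypothetical scene: a reconstructed temple with a narrating voice over a fixed background.
scene = AVObject("scene", [
    AVObject("2D fixed background"),
    AVObject("temple reconstruction", [
        AVObject("3D model stream"),
        AVObject("texture stream"),
    ]),
    AVObject("narrator", [
        AVObject("voice (audio stream)"),
    ]),
])
scene.compose()
```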

Main Advantages of the MPEG-4 Standard for the ARCHEOGUIDE System.

The flexibility, reusability and interaction levels allowed by the MPEG-4 standard are strong points arguing for its use in the ARCHEOGUIDE infrastructure. In particular, it is a natural fit for an environment where virtual multimedia objects are created, given various modes of manipulation, and eventually transmitted over a network to the end user for interaction. For example, the standard specifically provides for end-user operations such as navigating through a scene or selecting the desired language among a set of languages. The emphasis on efficiency, both in optimizing network latencies when transferring the encoded objects from the server to the client and in optimizing the encoding/decoding process to allow video operation at frame rates of about 15 Hz, also argues strongly for the use of MPEG-4 in the ARCHEOGUIDE system.

7.4.2. XML

Introduction

XML is a standardized method for putting structured data, such as spreadsheets, address books, configuration parameters, financial transactions, or technical drawings, into a single text file. Programs that produce such data often also store it on disk, for which they can use either a binary format or a text format. Text format has the advantage of allowing inspection of the data without the program that produced it. XML is a set of rules for designing text formats for such data, in a way that produces files that are easy to generate and read (by a computer), that are unambiguous, and that avoid common pitfalls, such as lack of extensibility, lack of support for internationalization/localization, and platform dependency. A data object is an XML document if it is well formed. A textual object is well formed if it meets the well-formedness constraints set forth by the standard. These constraints roughly speaking require that the structure of the textual object follows the XML syntactic rules and is structured as a tree (i.e. each document contains one or more elements, with exactly one element forming the root of the tree and every other element being appropriately nested in another element, forming a child of that element). Following the HTML model, the XML standard uses tags (words bracketed by '<' and '>') and attributes (in the form attribute-name="attribute-value"). However, the semantics of each such tag or attribute in XML (unlike HTML) is left to be determined by the application that reads the XML file. A set of tools has been designed around XML to allow creation and manipulation of XML files. These include XLink (for handling hyperlinks), XSL (for expressing style sheets), etc. (http://www.w3.org/XML/)
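As a small illustration of these rules, the sketch below parses a well-formed XML fragment describing a point of interest and reads back its tags and attributes. The element and attribute names are invented for the example and do not represent an agreed ARCHEOGUIDE schema.

```python
from xml.dom.minidom import parseString

# A well-formed document: a single root element, properly nested children,
# tags in <...> form and attributes as name="value".
document = """<poi id="temple-of-hera">
  <title lang="en">Temple of Hera</title>
  <description>Archaic Doric temple at the sanctuary of Olympia.</description>
  <audio href="hera_en.mp3"/>
</poi>"""

dom = parseString(document)                 # raises ExpatError if the text is not well formed
root = dom.documentElement
print(root.tagName, root.getAttribute("id"))
for node in root.getElementsByTagName("title"):
    print(node.getAttribute("lang"), node.firstChild.data)
```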

Pros and Cons for the Use of XML in the ARCHEOGUIDE System

XML presents a unified way of structuring documents, and as such it can be used by site managers and/or site guides to structure the explanatory material that a visitor hears at the sites where ARCHEOGUIDE is installed, as they take the tour designed for them by the ARCHEOGUIDE site manager. Note, however, that XML may add overhead to the content creation process through the learning curve it imposes on site managers. Transmitting whole XML files to the mobile units, where XML applications would manipulate the documents sent over (e.g. play an audio stream consisting of the XML document transmitted), is another possibility, but it places a high CPU overhead on mobile units that are already burdened with displaying high-quality graphics in real time.

Integrating XML and VRML

Although originally designed for different problem domains, the two technologies of VRML and XML have much to offer each other, and there are a variety of areas where tighter integration between the two can provide powerful benefits. Some work in this area already exists, for example the Visual XML proposal for using XML and VRML to represent and display structured information spaces. The referenced paper investigates additional potential areas of synergy and suggests useful next steps. (http://www.vrml.org/WorkingGroups/dbwork/vrmlxml.html)

XML Tools

Tools offering generalized XML support are available from many vendors. Companies such as ArborText, Inso, Interleaf and SoftQuad have tools for authoring, editing and publishing. Chrystal Software, POET and Object Design provide storage for XML data. Microstar and Extensibility have tools for generating DTDs and XML schemas. Other vendors, such as Frontier.UserLand.Com, Sequoia Software and Vignette Corp., have products available today based on XML for database publishing, content management, and data management. A wide variety of other middleware applications are expected to be developed in the coming months that translate information currently stored in databases into XML for delivery to the desktop. In addition, a rich set of authoring, schema design, and application development tools is expected to support XML, as well as databases that store and emit XML directly. Data format-specific tools such as wizards will need to be developed as new vocabularies are defined.

Like Web development tools, there will be XML tools suited for authors as well as developers. The programming tools generally take the form of visualization tools and software code libraries that authors can use to create and manipulate XML content. Software libraries usually come first. For example, Internet Explorer 5.0 supports the W3C Document Object Model (DOM), a language-neutral interface for XML which can be used from C, Java or scripts, that tools developers can build on to create high-level XML visualization tools or other XML development tools. Equipped with the ability to parse XML documents, programmers can start building high-level tools that enable authors (and users) to create, edit, browse and search XML documents. These tools range from general-purpose editors conversant in any XML vocabulary to vocabulary-specific applications. In the future, many application categories such as databases, messaging, collaboration, and productivity applications will incorporate support for new XML vocabularies as they are defined. This will enable interoperability within an application category, as well as across application categories, to allow address information in a customer database to be easily shared with a PIM application or e-mail client.

ADEPT Editor 7.0 from ArborText, Inc. – Allows authors to write text, place graphics and create books, manuals, catalogues, encyclopedias, and similar types of information. (http://www.arbortext.com/)

Astoria from Chrystal Software – An authoring support system used in complex technical and content publishing environments. (http://www.chrystal.com/)

Balise from AIS Software – An XML/SGML application programming environment used to build information exchange and translation systems between various forms of structured information storage and representation. (http://www.balise.com/)

DynaBase from Inso – An integrated content management and dynamic publishing system for teams who develop and operate large dynamic Web sites. (http://www.inso.com/)

Frontier from UserLand Software – A powerful cross-platform Web scripting environment built around an object database whose structure mirrors the structure of XML. (http://www.scripting.com/frontier5/)

Near and Far Designer and Ælfred from Microstar – These tools provide XML DTD authoring and Java-based XML parsing. (http://www.microstar.com/products/)

Object Design products for XML from Object Design – Object Design offers tools to store and manage XML data, including ObjectStore, PSE, an XML parser for Java, and a Document Object Manager. (http://www.odi.com/)

POET Content Management Suite from POET – Includes the POET Object Server and the SGML/XML parser for interpreting SGML/XML documents and storing them in the database. It also includes the SGML/XML Navigator for viewing and administrating SGML/XML documents, and the POET Content Management Suite programmer's API for building custom SGML/XML applications. (http://www.poet.com/)

Real-time XML Editor from archiTag – The Real-time XML Editor is an experimental validating XML editor written in DHTML for Internet Explorer 5. (http://architag.com/xmlu/play/)

Tango Enterprise Version 3.1 "Generation X" from EveryWare Development Corp. – Tango Enterprise Generation X automatically generates dynamic XML documents on the fly from business logic and the values of changing business data in SQL or ODBC databases. It is particularly useful for building intranet knowledge management systems and Web sales force automation systems. (http://www.everyware.com/)

Web Automation Toolkit from webMethods, Inc. – A development environment that enables companies to connect applications to existing Web sites and utilize Web protocols to integrate business applications directly over the Web, using XML for data exchange. (http://www.webMethods.com/)

webMethods B2B Suite from webMethods, Inc. – The B2B Integration Server and the B2B Developer enable business-to-business application integration between companies and their customers, partners, and suppliers, using XML for data exchange. Existing HTML-based Web sites can be leveraged via webMethods Web Automation technology. (http://www.webMethods.com/)

XML <PRO> from Vervet Logic – An XML editor that combines the power of XML with an intuitive user interface that allows users to easily create and edit XML-based documents, regardless of previous experience with SGML. (http://www.vervet.com/)

XML Spy – A shareware XML and DTD editor for Microsoft Windows 95, Microsoft Windows 98, and Microsoft Windows NT. (http://www.everyware.com/)

SigmaLink from Step – SigmaLink is a document management and editorial working environment with strong emphasis on workflow functionality. SigmaLink has full support for SGML and XML, and its document management and workflow capabilities can be used with non-SGML information objects, such as word processor documents, graphics and multimedia components. For SGML documents, special functionality is provided to exploit the additional structure. A variety of link types (object-to-content, content-to-object, content-to-content) permit the creation of enterprise-wide knowledge webs comprising multiple document types. Automatic decomposition and combination of information objects based on element hierarchies provide flexible collaborative authoring.


SigmaLink is written in Java, using Oracle or Sybase as the database. Verity provides the full-text search engine, with Balise for SGML/XML transformation and Netscape as the HTTP server. Communication between the client and the server is based on HTTP, allowing clients to use standard intranet and extranet connections for cost-effective worldwide collaboration. An offline mode makes it possible to work with downloaded information objects (checked-out and editable, or read-only) without maintaining an expensive connection and overloading the server. (http://www.stepuk.com/products/prod_sig.asp)

XMetaL from Step – XMetaL is a powerful XML and SGML authoring tool that creates documents that conform to arbitrary DTDs, both ASCII and compiled. It features three easy-to-master editing views, a rich array of powerful authoring aids, and a host of other tools to ease digital content creation. XMetaL features:

Three Editing Views – XMetaL's three editing views deliver unprecedented flexibility to content contributors. In Normal view, authors are presented with a familiar word processor-like interface. In Plain Text, experienced users operate in a detailed text-oriented view, complete with inline tags and attributes. In Tags On view, authors receive the best of both worlds: a word processor-like view with collapsible tags for immediate access to all elements and attributes.

Advanced Authoring Aids – All XMetaL editing views provide a variety of authoring aids to facilitate the creation of digital content. The context-sensitive Attribute Inspector and Element List are available in all views to show valid markup options at the current point in the document. Users can even drag and drop text, URLs, images and more, directly onto the Attribute Inspector for inclusion in a document. Hover Tag tips display all attribute values when in Normal view. A customizable “followed by” feature automatically inserts required elements when the author presses “enter”. Mini templates make authoring an easy task as replaceable text can be inserted into your document. XMetaL also features keyboard macros, a spell checker and thesaurus, as well as an advanced search and replace engine.

Resource Manager – XmetaL’s Resource Manager is an extensible, drag-and-drop object management system that allows users to easily manage boilerplate text, images, document fragments, logos, macros and more, whether they’re on a hard drive, a network or the Internet.

Database Import Wizard – With XmetaL’s Database Import Wizard, any ODBC data source can be accessed. XMetaL allows queries to be saved and re-used, and it supports complex queries with table joins. Database content can be inserted in your document as CALS or HTML tables or as XML.

Advanced Table Support – XMetaL provides in line table editing for both CALS and HTML tables. Users can quickly and easily add, delete and modify tables using the Table toolbar. Easily resize table cells, assign attributes and view changes on-screen.

(http://www.stepuk.com/products/prod_xme.asp)

Parlance Document Manager by XyEnterprise – Parlance Content Manager (PCM) enables users and integrators to manage dynamic XML components from desktop to the Internet. Parlance offers a unique set of tools for developing and extending XML solutions including a powerful Application Programming Interface (API), integrated workflow and support for leading XML and SGML tools. PCM manages the life cycle of creating, maintaining, and delivering content throughout the enterprise. (http://www.XyEnterprise.com/XYE/products/prodindex.html)

TARGET 2000 from Progressive Information Technologies – TARGET 2000 offers a user-friendly, relational database publishing system with object-oriented features. TARGET 2000 is Oracle-based, with an intuitive Windows 95 graphical user interface. SGML formatting ensures enhanced information adaptability for multiple outlets, including traditional print, web publishing, and CD-ROMs. (http://www.target2000.com/)

7.4.3. VRML

Purpose

The Virtual Reality Modeling Language (VRML) is a file format for describing interactive 3D objects and worlds. VRML is designed to be used on the Internet, intranets, and local client systems. VRML is also intended to be a universal interchange format for integrated 3D graphics and multimedia. VRML may be used in a variety of application areas such as engineering and scientific visualization, multimedia presentations, entertainment and educational titles, web pages, and shared virtual worlds.


Design Criteria

VRML has been designed to fulfill the following requirements:

Authorability: Enable the development of computer programs capable of creating, editing, and maintaining VRML files, as well as automatic translation programs for converting other commonly used 3D file formats into VRML files.
Composability: Provide the ability to use and combine dynamic 3D objects within a VRML world and thus allow re-usability.
Extensibility: Provide the ability to add new object types not explicitly defined in VRML.
Be capable of implementation on a wide range of systems.
Performance: Emphasize scalable, interactive performance on a wide variety of computing platforms.
Scalability: Enable arbitrarily large dynamic 3D worlds.

Characteristics of VRML

VRML is capable of representing static and animated dynamic 3D and multimedia objects with hyperlinks to other media such as text, sounds, movies, and images. VRML browsers, as well as authoring tools for the creation of VRML files, are widely available for many different platforms. VRML supports an extensibility model that allows new dynamic 3D objects to be defined, allowing application communities to develop interoperable extensions to the base standard. There are mappings between VRML objects and commonly used 3D application programmer interface (API) features.
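Because VRML worlds are plain text files, even a short script can generate one. The sketch below writes a minimal VRML97 scene containing a single box; the file name, dimensions and colour are arbitrary choices made only to show the flavour of the format.

```python
# Minimal sketch: generate a VRML97 world with one box standing on the ground plane.
vrml_scene = """#VRML V2.0 utf8
# A single column-like box, 1 m wide and 4 m tall
Transform {
  translation 0 2 0
  children Shape {
    appearance Appearance {
      material Material { diffuseColor 0.8 0.8 0.7 }
    }
    geometry Box { size 1 4 1 }
  }
}
"""

with open("column.wrl", "w") as world_file:   # .wrl is the conventional VRML extension
    world_file.write(vrml_scene)
print("Wrote", len(vrml_scene), "bytes of VRML to column.wrl")
```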

Source

VRML97 Specification: http://www.vrml.org/

7.4.4. X3D (eXtensible 3D)

X3D stands for Extensible 3D. It is a next-generation, extensible, 3D graphics specification that extends the capabilities of VRML 97. (VRML-NG, for VRML Next Generation, was an early name for what is now X3D.) The name X3D was chosen to indicate the integration of XML. (http://www.web3d.org/x3d.html) X3D is a next-generation version of VRML 97. X3D will be fully backward compatible with VRML 97. This will be achieved by having a VRML 97 profile for X3D that provides all of the functionality of a standard VRML 97 browser. VRML 97 is an ISO standard. ISO standards are periodically updated to reflect progress and change in the standardized technology, but you can't replace a standard wholesale. VRML 97 is not going away. VRML 97 content will be convertible to X3D.

X3D Tools

X3D-Edit is an Extensible 3D (X3D) graphics file editor that uses the X3D Document Type Definition (DTD) in combination with Sun's Java, IBM's Xeena XML editor, and an editor profile configuration file. X3D-Edit enables simple error-free editing, authoring and validation of X3D or VRML scene-graph files.

Features:

- Intuitive user interface
- Always creates well-formed scene graphs; nodes only fit where allowed
- Validates X3D scenes for VRML 97 profile and Core profile
- Platform independence using Java
- Tool tips and hints help you learn how VRML/X3D scene graphs really work
- Automatically translate into VRML and launch browser to view results
- Extensible Style Sheet (XSL) translation: X3dToVrml97.xsl and X3dToHtml.xsl
- Design testing & evaluation of MultiTexture extension nodes included

Status: X3D-Edit is being used to develop and test the Extensible 3D (X3D) Document Type Definition (DTD) tagset. X3D-Edit also exercises various X3D graphics rendering and translation implementations. (http://www.web3d.org/x3d.html)

OpenWorlds – OpenWorlds is a C++ toolkit for integrating X3D/VRML 97 into any new or existing application. It supports the OpenGL, Optimizer, Performer, and Fahrenheit graphics APIs, and a variety of hardware platforms. For these platforms, OpenWorlds offers VRML browsers (Horizon) and VRML geometry loaders (Merchant), built using the OpenWorlds toolkit. (http://www.openworlds.com/)

Cybelius – an essential X3D contributor working on an X3D-capable authoring tool. (http://www.cybelius.com/)

7.4.5. Python

Python is an interpreted, interactive, object-oriented programming language. It incorporates modules, exceptions, dynamic typing, very high-level dynamic data types, and classes. Python combines remarkable power with very clear syntax. It has interfaces to many system calls and libraries, as well as to various window systems, and is extensible in C or C++. It is also usable as an extension language for applications that need a programmable interface. Finally, Python is portable: it runs on many brands of UNIX, on the Mac, and on PCs under MS-DOS, Windows, Windows NT, and OS/2. Python is used in many situations where a great deal of dynamism, ease of use, power, and flexibility are required.

In the area of basic text manipulation, core Python (without any non-core extensions) is easy to use and roughly as fast as just about any language, and this makes Python good for many system administration tasks, for CGI programming, and for other application areas that manipulate text and strings.

When augmented with standard extensions (such as PIL, COM, Numeric, oracledb, kjbuckets, tkinter, win32api, etc.) or special-purpose extensions (that you write, perhaps using helper tools such as SWIG, or using object protocols such as ILU/CORBA or COM), Python becomes a very convenient "glue" or "steering" language that helps make heterogeneous collections of unrelated software packages work together. For example, by combining Numeric with oracledb you can help your SQL database do statistical analysis, or even Fourier transforms. One of the features that makes Python excel in the "glue language" role is Python's simple, usable, and powerful C language runtime API. Many developers also use Python extensively as a graphical user interface development aid. (http://www.python.org/)
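A brief example of the "glue" role described above: the sketch combines two standard-library modules to extract numeric values from a small text report and summarise them. The report format and the values are invented for the illustration.

```python
# Minimal sketch of Python as a glue/steering language: combine the standard
# re and statistics modules to extract and summarise numbers from a text report.
import re
import statistics

report = """tracking error: 0.42 m
tracking error: 0.38 m
tracking error: 0.57 m"""            # hypothetical log excerpt from a field test

errors = [float(value) for value in re.findall(r"tracking error: ([0-9.]+)", report)]
print("samples:", len(errors))
print("mean error:", round(statistics.mean(errors), 2), "m")
```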

7.4.6. UML – Unified Modeling Language

(http://www.platinum.com/corp/uml/uml.htm#1)

UML Tools

Rational Rose Data Modeler

Features

Object-Relational Mapping gives you the ability to track the migration of an object model to a data model, providing a way to gain a deep understanding of the relationships between the application and database and continue to keep them both up-to-date based on changes made during the development process.

Schema Generation automatically creates database schema from a data model. The schema can be generated directly against the database or saved as a script file for future implementation. The schema includes tables, columns, constraints, and more.
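To make the schema-generation idea concrete, here is a toy sketch (not Rational Rose output; the table and column definitions are invented for illustration) that derives CREATE TABLE statements from a simple in-memory data model and could be saved as a script file:

# Toy data model: table name -> {column name: SQL type}.  Purely illustrative.
MODEL = {
    "Monument": {"id": "INTEGER PRIMARY KEY", "name": "VARCHAR(80)", "period": "VARCHAR(40)"},
    "Tour":     {"id": "INTEGER PRIMARY KEY", "monument_id": "INTEGER", "duration_min": "INTEGER"},
}

def generate_schema(model):
    """Render the data model as a SQL script, one CREATE TABLE per table."""
    statements = []
    for table, columns in model.items():
        cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in columns.items())
        statements.append(f"CREATE TABLE {table} (\n  {cols}\n);")
    return "\n\n".join(statements)

print(generate_schema(MODEL))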

Round-trip Engineering allows a user to create a data model based on the database structures or create a database based on the model.

Comprehensive Database Support for Oracle 7.x & 8.x, MS SQL Server 6.x & 7.x, IBM DB2 MVS 5 & 6 and IBM DB2 UDB 5 & 6. There is SQL-92 support for those who have relational databases that aren’t listed.

(http://www.rational.com/products/rose/prodinfo/datamodel.jtmpl)

Together Enterprise

Together Enterprise provides a comprehensive, enterprise-wide "backbone" for developing Java-based solutions in Internet time.

Features

Simultaneous round-trip engineering for Java and C++
Major UML diagrams plus Coad Object Models
Fast, robust documentation generation (now with optional command-line launching)
Extensive configurability of reverse engineering and code generation
Server-side support of team-wide configuration settings
Rational Rose import/export
Open API for extensibility
Proven performance on multiple OS platforms

(http://www.togethersoft.com/together/togetherE.html)

7.4.7. 3DML

3DML Tools:

Flatland Rover – Rover is the browser extension that displays rich-media Web content created with 3DML. 3DML describes full 3D environments, which can include streaming video, animation, sound, 2D graphics, text, and hyperlinks. 3DML is as easy to learn as HTML. Environments are created by arranging pre-rendered building blocks; no 3D modeling or programming is necessary.
(http://www.flatland.com/)

Visual InterDev 6.0 – Visual InterDev is the team-based development environment for designing, building, and debugging data-driven Web applications.
(http://msdn.microsoft.com/vinterdev)

7.4.8. Integration of Animation

Humanoid Animation Group

The H-Anim group's objective is to specify a way of defining interchangeable humanoids and animations in standard VRML 2.0 without extensions. Animations include limb movements, facial expressions and lip synchronization with sound. The goal is to allow people to author humanoids and animations independently. The ARCHEOGUIDE consortium can use this specification to model characters to enrich the sites.
(http://ece.uwaterloo.ca/~h-anim/)
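The key idea behind interchangeable humanoids is a named joint hierarchy that animations can address independently of any particular model. The following is a minimal sketch of that idea in Python (a heavily simplified skeleton using H-Anim-style joint names; it is not the H-Anim node set itself):

class Joint:
    """A named joint in a simplified humanoid skeleton."""
    def __init__(self, name, children=None):
        self.name = name
        self.rotation = (0.0, 0.0, 0.0)   # placeholder Euler angles
        self.children = children or []

    def find(self, name):
        """Return the joint with the given name, searching this subtree."""
        if self.name == name:
            return self
        for child in self.children:
            found = child.find(name)
            if found:
                return found
        return None

# A very reduced skeleton using H-Anim-style joint names.
humanoid = Joint("HumanoidRoot", [
    Joint("sacroiliac", [Joint("l_hip", [Joint("l_knee")]),
                         Joint("r_hip", [Joint("r_knee")])]),
    Joint("vl5", [Joint("skullbase")]),
])

# An animation keyframe only needs joint names, not the model's geometry,
# so the same clip can drive any humanoid exposing those joints.
keyframe = {"l_knee": (0.8, 0.0, 0.0), "r_hip": (-0.3, 0.0, 0.0)}
for joint_name, angles in keyframe.items():
    humanoid.find(joint_name).rotation = angles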

Development Tools

Spazz3d 2.2 – Virtock Technologies – 3D web authoring and animation tool, VRML animation editor.
Import: Web3d/VRML animation. Export: VRML 97 World.
(http://www.spazz3d.com/)

Poser 4.0 – Metacreations – 3D-character animation and design tool for digital artists and animators.
Import/Export: H-Anim, VRML 2.
(http://www.metacreations.com/products/poser4/)

Blaxxun avatar studio 1.0 – Blaxxun Interactive – Build 3D characters that can be used in 3D virtual environments. Avatars created with Avatar Studio are based on the VRML language standard.
(http://www.blaxxun.com/)

Cinema 4D XL – Modeling, animation, rendering.
Import/Export: VRML 1, VRML 2.
(http://www.maxon.de/)

SkeletonBuilder 1.1 – SkeletonBuilder 1.1 allows you to create Holodesk Communicator avatars with common mid-end 3-D modellers, including 3DStudio Max and Alias|Wavefront Maya. SkeletonBuilder substantially automates the process of creating H-Anim humanoids, and provides wrappers for humanoids which add support for displacers (mesh deformations) in Communicator.
(http://developer.holodesk.com/developer/tools/skeletonbuilderweb/index.htm)

U-Man

U-Man is the Human Model in EO. It features behavior-based motion and motion capture integration, as well as a specialized editor to adjust the morphology, create behaviors, and modify the jointed models. The model uses FreeForm Deformations (FFD) to model the skin of the manikin. The manikin avoids collisions with other objects when choosing a trajectory. Human modeling is an essential tool for ergonomic analyses in Virtual Reality. U-Man is an array of technologies that model the behavior of the human body in Virtual Reality. U-Man has various manikin versions that can be used for various tasks, from a simple skeleton for motion capture playback to a detailed inverse kinematics implementation of the human body. Behaviors are supported in the model, both pre-defined and developer-defined. For visibility studies the model includes aids to assess the field of view and to test whether objects are within the current field of view. The model has a vicarious experience module that lets the user experience the world as if he were in the shoes of another human being.
(http://www.syseca.thomson-csf.com/simulation/)

Other proposals

Details on the Body description from the MPEG-4 proposal
(http://www.epfl.ch/~boulic/mpeg4_snhc_bidy/dof.html/)
Norman Badler's high-level overview of virtual humans
(http://www.cis.upenn.edu/~badler/vhpaper/vhlong/vhlong.html)
Sandy Ressler's collection of joint information
(http://www.itl.nist.gov/iaui/ovrt/projects/vrml/h-anim/jointInfo.html)
Mitra's proposal for key frame animation
(http://earth.path.net/mitra/h-anim/)
Cindy's proposal for adding accessories to an avatar
(http://www.ballreich.net/vrml/h-anim/acc_anim.htm)

8. APPENDIX B - Survey of related projects

8.1.EC-Projects

8.1.1. AQUARELLE

Introduction

The main objectives of the AQUARELLE project were to use Internet and DBMS technology to create a system that allows curators, urban and regional planners, publishers and researchers to create, store and manipulate cultural heritage information related to Europe. It uses the WWW as the platform on which it builds its services. In particular, AQUARELLE comprises a number of (possibly heterogeneous) geographically distributed databases, called archive and folder servers, each of which provides cultural information in terms of hypermedia documents, images, and other static data. In this sense, AQUARELLE is essentially a web-based distributed information system that facilitates the dissemination and management of cultural-heritage-related material, with an enhanced retrieval system (an improvement over other existing Internet search engines).

Comparison of AQUARELLE and ARCHEOGUIDE systems

AQUARELLE's relation to ARCHEOGUIDE is essentially limited to their intersecting highest-level contributions to society: they both aim at enhancing a user's ability to access and evaluate cultural information. However, the technology behind the two projects is completely different. AQUARELLE uses Web technology coupled with an OODBMS to allow users to more easily create cultural information and link their documents to other material managed by the system. ARCHEOGUIDE aims at developing AR technology based on image-based positioning and tracking so as to provide the visitors of a cultural site with a virtual reconstruction of it superimposed on the real world. Such a visual spectacle allows the spectator to fully appreciate the history of the place. It also pushes the frontiers of the state of the art in the related technologies and their integration.
http://www.cordis.lu/ist/98vienna/xaquarelle.htm

8.1.2. ENREVI

Introduction

The ENREVI project proposes new developments for the enhancement of a video sequence captured in a real scene with real-time rendering of 3D objects. Emphasis is put both on real time and on economically affordable solutions. Research and development objectives include:
Fundamental research that will provide tools for tracking in unknown environments.
A software real-time engine that can render 3D synthetic images on affordable hardware platforms.
A format to generate, store, and retrieve data in accordance with existing and future standards.
A new generation of chroma-key product, software configurable.
Integration of all the different modules in a real-life environment, under the specification and control of the end-user.

Comparison of ENREVI and ARCHEOGUIDE systems

ENREVI and ARCHEOGUIDE both use tracking technologies to register video images and virtual 3D objects. However, the hardware and performance requirements of the ENREVI system are not comparable with those of the ARCHEOGUIDE system, which runs on a client-server architecture and provides user-adaptive, tour-guiding mobile computing, combining wireless communication, hybrid tracking, animation, and interaction.
http://www.cordis.lu/ist/projects/99-11185.htm

8.1.3. PISTE

Introduction

The PISTE project is creating a system, addressing the needs of broadcasters and home viewers, which will transform TV watching into an immersive interactive experience during the coverage of sports events. The main objectives of PISTE are:
The provision of tools based on digital video processing, 3D-visualisation and animation techniques, and a novel virtual scene modeling language to broadcasters, for creating augmented reality views of the events.
The development of tools for the encoding and playback of rich interactive multimedia content in MPEG-4.

The assessment of MPEG-4 for broadcasting such content over a DVB infrastructure and its presentation on set-top equipment.

The specification of requirements for the implementation of MPEG-4 playback on consumer electronics equipment.

The assessment of the DVB infrastructure in supporting large scale Virtual Environments through the use of MPEG-4.

The assessment of the system through experiments with the involvement of real actors.

Comparison of PISTE and ARCHEOGUIDE systems

The relation of PISTE to ARCHEOGUIDE is limited to the registration and augmentation of images. The application areas, as well as the hardware and performance requirements, of the PISTE project are quite different from those of ARCHEOGUIDE, which runs on a client-server architecture and provides user-adaptive, tour-guiding mobile computing, combining wireless communication, hybrid tracking, animation, and interaction.
http://piste.intranet.gr/Introduction.htm

8.1.4. STARMATE

Introduction

The STARMATE project (SysTem Using Augmented Reality for Maintenance, Assembly, Training and Education) aims at specifying, designing, developing, and demonstrating a product dedicated to computer-guided maintenance of complex mechanical elements. The system provides user assistance for carrying out assembly/disassembly and maintenance procedures, and workforce training in those procedures. The system relies on augmented reality technology to provide more flexibility in working methods while preserving user mobility in contexts where access to conventional documentation is cumbersome. It improves the user-friendliness of the work environment and allows the user to access full documentation and manuals directly registered to his working environment. Visual and audio augmentation is used to guide the user through the right procedure to apply. The system is controlled through both speech and a pointing device. Application areas targeted by the product are optronics, aeronautics construction, and nuclear maintenance.

Comparison of STARMATE and ARCHEOGUIDE systems

STARMATE and ARCHEOGUIDE both use augmented reality techniques and mobile computing devices to assist the user. Since the application areas (cultural heritage and assembly) are situated in quite different environments, the first focusing on an outdoor scenario and the second working indoors, different strategies and techniques for modeling, registration, tracking, communication, infrastructure, etc. result.
http://www.cordis.lu/ist/projects/99-10202.htm

8.1.5. TOURBOT

Introduction

TOURBOT is the name given to an EC project whose objective was the development of an interactive TOUr-guide RoBOT that allows individual access to a museum's exhibits and cultural heritage over the Internet. At the heart of the system is a robot equipped with vision capabilities (video cameras plus position tracking and navigation capabilities through image-based or range-finder technology). The robot is connected to the Internet and accepts orders from remote Internet users to move around the museum, thus giving distant users the ability to take a look at the museum's exhibits. At the same time, the robot is able to guide on-site museum visitors, providing group or personalized tours. The TOURBOT system therefore provides remote as well as on-site visitors with a guide that allows its users access to a large amount of additional information. The TOURBOT project was developed by a consortium of organizations: the Foundation for Research and Technology-Hellas, the University of Bonn, the Foundation of the Hellenic World, THEON mobile platforms S.A., the Deutsche Museum at Bonn, the Byzantine & Christian Museum of Athens and the Albert Ludwigs University.

Comparison of TOURBOT and ARCHEOGUIDE

Both projects aim at improving the accessibility and promotion of cultural heritage. TOURBOT implements this goal by giving remote users the ability to interact with a robot that sends them, over the Internet, a video stream from a camera attached to it. Through the use of advanced vision and other techniques the robot has very advanced navigation capabilities that allow it to move from area to area and provide extra information about particular exhibits. The use of vision to compute an exact position, and consequently to retrieve information related to objects in the line of sight of the user, is common to both projects. Further, both systems employ a client/server distributed architecture where clients interact with the server over a network to obtain more information about an object in their (perhaps virtual) vicinity. There are other lessons to be learned from TOURBOT; the robot in TOURBOT operates in a human-populated environment, which requires the development of very high-quality navigation and position tracking mechanisms in the midst of a dynamic and changing environment. The same problems (perhaps even exacerbated by the much higher demands for position accuracy needed for correct registration of the virtual objects) are present in the ARCHEOGUIDE operating environment. Finally, both projects emphasize the theme of personalized information retrieval, i.e. the ability of the users to interact with the system in order to obtain the information that they are interested in. Such interaction is key to the ultimate user acceptance of the system.
http://www.cordis.lu/ist/projects/99-12643.htm

8.2.National projects

8.2.1. ARVIKA

Objectives

The lead project ARVIKA, sponsored by the German BMBF (ministry of education and research) and supervised by the DLR (German Aerospace Center), uses augmented reality (AR) technologies to research and create a user-oriented and system-driven support of operation procedures. It focuses on the development, production, and service of complex technical products and systems. The project ideas are realized in various application areas of German industry, such as automobile manufacture and aircraft construction, mechanical engineering and system development.

Application areas and procedures

The main application-related topics of ARVIKA aim to verify augmented reality in the development, production and service cycles of the products being used. This also includes the servicing of the machines and systems required for the production environment. The project concentrates on the following areas: development (automobile and aircraft); production (automobile manufacture and aircraft construction); service (system techniques, in this case power stations and the tools and machines required for production). This covers the major application areas of AR, avoids duplicate developments, and enables a profound, application-oriented verification of this novel technique.

The project phases go along with a user-centered system design based on scientific methods. All applications in this research are based on AR base technologies supporting both the high-end/power applications in the development process and the low-end activity of the skilled worker using belt-worn equipment in the real production and service environment. This is realized by an open platform that allows for different performance grades and especially for true wearability. The project is geared to support market requirements in production, manufacturing, and service-oriented information and communication technologies to be used by skilled workers, technicians and engineers. The whole project runs for four years (7/1999 – 6/2003) and is divided into two parts. For each part of the project, prototypes will be provided.
http://www.arvika.de/www/e/home/home.htm

8.3.Other projects

A huge number of projects developed in industry, universities, and other institutions are related to ARCHEOGUIDE components such as cultural heritage, mobile computing, virtual and augmented reality, or wireless communication. They touch on the representation and reconstruction of ancient artifacts and historical areas, web and CAVE visualizations of (and interactions with) the resulting 3D models, user interface design for mobile devices, indoor and outdoor registration of the user, fast rendering of 3D scenes, image-based rendering, segmentation, matching, tracking devices and algorithms, network communication, databases, etc. It is impossible to list all of them, so the following projects focus primarily on work related to the main aspects of ARCHEOGUIDE: wearable/mobile computing and augmented reality.

8.3.1. Columbia University

(http://www.cs.columbia.edu/)

MARS: Mobile Augmented Reality System (Touring Machine)

Keywords: Augmented reality, outdoor, guidance, mobile computing, wireless communication, hand-held
Researchers: S. Feiner, T. Höllerer et al.
The research is aimed at exploring the synergy of augmented reality, in which 3D displays are used to overlay a synthesized world on top of the real world, and mobile computing, in which increasingly small and inexpensive computing devices and wireless networking allow users to have access to computing facilities while roaming the real world. They make combined use of different display technologies ranging from head-worn to hand-held to palm-top to best support a mobile user. The Touring Machine, a backpack-based system, uses centimeter-level GPS position tracking and inertial orientation tracking to track the user's head as they walk around the campus, allowing the overlay of information on a see-through head-worn display. The indoor systems stress the integration of different kinds of displays (head-worn, hand-held, desk-top, and wall-sized) into a coherent multi-user interface.
http://www.cs.columbia.edu/graphics/projects/mars/

ARC: Augmented Reality for Construction

Keywords: Augmented Reality, construction, tracking
Researchers: S. Feiner, A. Webster et al.
The augmented reality construction system is designed to guide workers through the assembly of a spaceframe structure, to ensure that each member is properly placed and fastened. The system includes a head-worn display with integral headphones and orientation tracker. Position tracking is provided by an Origin Instruments DynaSight optical radar tracker, which tracks small LED targets on the head-worn display. The user interface also includes a hand-held barcode reader, which has an optical target mounted on it, and is also tracked by the DynaSight.
http://www.cs.columbia.edu/graphics/projects/arc/

KARMA: Knowledge-based Augmented Reality for Maintenance Assistance

Keywords: Augmented Reality, maintenance
Researchers: S. Feiner, B. MacIntyre et al.
KARMA is a prototype system that uses a see-through head-mounted display to explain simple end-user maintenance for a laser printer. Several Logitech 3D trackers are attached to key components of a printer, allowing the system to monitor their position and orientation. The virtual world is intended to complement the real world on which it is overlaid.
http://www.cs.columbia.edu/graphics/projects/karma/karma.html

Architectural Anatomy

Keywords: Augmented Reality, architecture

Researchers: S. Feiner, T. Webster et al.
The prototype application overlays a graphical representation of portions of a building's structural systems over a user's view of the room in which they are standing. The overlaid virtual world typically shows the outlines of the concrete joists, beams, and columns surrounding the room.
http://www.cs.columbia.edu/graphics/projects/archAnatomy/architecturalAnatomy.html

8.3.2. Carnegie Mellon University

(http://www.cmu.edu)

Z-Key Project

Keywords: real time, stereo vision, image keying, mixed reality
Researchers: T. Kanade, K. Oda, A. Yoshida, M. Tanaka, H. Kano
The project focuses on image keying, a method which merges two images: a real camera image and a virtual scene. In contrast to the standard chroma-keying used in the TV industry, the real and virtual objects can be placed into a synthesized image according to their local and global positional relationship. The depth information of the real image is extracted by using CMU's stereo machine.
http://www.cs.cmu.edu/afs/cs/project/stereo-machine/www/z-key.html
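The core of z-keying can be stated very compactly: for every pixel, show whichever of the real and virtual images is closer to the camera. A toy sketch of that per-pixel rule (synthetic 2x2 images, invented for illustration; this is not CMU's implementation):

import numpy as np

def z_key(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Composite real and virtual RGB images by per-pixel depth comparison."""
    closer_real = (real_depth <= virtual_depth)[..., np.newaxis]
    return np.where(closer_real, real_rgb, virtual_rgb)

# Tiny synthetic example: a black "camera" image and a white "virtual" image,
# each 2x2 pixels, with per-pixel depths in metres.
real = np.zeros((2, 2, 3), dtype=np.uint8)
virtual = np.full((2, 2, 3), 255, dtype=np.uint8)
real_depth = np.array([[1.0, 3.0], [2.0, 0.5]])
virtual_depth = np.array([[2.0, 1.0], [2.5, 1.5]])

composite = z_key(real, real_depth, virtual, virtual_depth)
print(composite[..., 0])   # 0 where the real pixel is closer, 255 otherwise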

Magic Eye Project

Keywords: computer vision, registration, tracking, augmented reality, image overlay, real time
Researchers: M. Uenohara, T. Kanade
Real-time object registration enables an image to be overlaid consistently onto objects even while the object or the viewer is moving. The video image of a patient's body is used as input for object registration. Reliable real-time object registration at frame rate (30 Hz) is realized by a combination of techniques, including template-matching-based feature detection, feature correspondence by geometric constraints, and pose calculation of objects from feature positions in the image.
http://www.cs.cmu.edu/afs/cs/user/mue/www/magiceye.html

8.3.3. Georgia Tech

(http://www.cc.gatech.edu)

EPSS: Electronic Performance Support System

Keywords: Wearable, manufacturing, wireless communication, HMD, speech recognition
The Georgia Tech Research Institute team in the Multimedia in Manufacturing Education laboratory is developing an electronic performance support system (EPSS) that workers can use on the job, anywhere in a plant. The user wears the computer on an adjustable belt. With the wireless network, the user can enter and receive information wherever the user goes. The head-mounted display allows the user to look at information on the computer display without blocking the user's forward view of the real world. The microphone and voice recognition software allow the user to control the computer while keeping the user's hands free to perform other tasks. The wearable computer system provides a wide variety of information to the user, including text, graphics, photographs, videos, narration, and sounds.
http://wearables.gatech.edu/EPSS.asp

CyberGuide

Keywords: Mobile computing, Newton, outdoor, GPS
The CyberGuide project focuses on how portable computers can assist in exploring physical spaces and cyberspaces, developing handheld intelligent tour guides to demonstrate future computing environments.

The CyberGuide project's overall system objective is to create a robust, intuitive, real-time positioning system that provides direction, scalable vector mapping, the ability to recognize and input places of interest, to take digital pictures at these places of interest, to provide a Web diary of places visited, and to provide e-mail and Web access.
http://www.cc.gatech.edu/fce/cyberguide/index.html

8.3.4. University of California, Berkeley

(http://www.berkeley.edu/)

InfoPad

Keywords: Mobile computing, wireless communication, speech recognition, handwriting recognition
The goal of the InfoPad research project is to develop the hardware, software and mobile network support which will allow ubiquitous, wireless access to real-time multimedia data from high-speed networks using an inexpensive, portable terminal. It is, in effect, a "network terminal", but with the additional capability of portability.
http://infopad.EECS.Berkeley.EDU/infopad/

8.3.5. University of North Carolina

(http://www.unc.edu)

Ultrasound guided breast biopsy

Keywords: Registration, HMD, augmented reality, ultrasound
The UNC research group is working to develop and operate a system that allows a physician to see directly inside a patient, using augmented reality (AR). This project uses ultrasound echography imaging, laparoscopic range imaging, a video see-through head-mounted display (HMD), and a high-performance graphics computer to create live images that combine computer-generated imagery with the live video image of a patient. An AR system displaying live ultrasound data or laparoscopic range data in real time and properly registered to the part of the patient that is being scanned could be a powerful and intuitive tool that could be used to assist and to guide the physician during various types of ultrasound-guided and laparoscopic procedures.
http://www.cs.unc.edu/~us/

Wide-area tracking

Keywords: Tracking, HMD, indoor
Head-mounted displays (HMDs) and head-tracked stereoscopic displays provide the user with the impression of being immersed in a simulated three-dimensional environment. To achieve this effect, the computer must constantly receive precise information about the position and orientation of the user's head and must rapidly adjust the displayed image(s) to reflect the changing head locations. This position and orientation information comes from a tracking system. The UNC Tracker Research Group brought its latest wide-area ceiling tracker online. The system uses relatively inexpensive ceiling panels housing LEDs, a miniature camera cluster called a HiBall, and the single-constraint-at-a-time (SCAAT) algorithm, which converts individual LED sightings into position and orientation data.
http://www.cs.unc.edu/~tracker/

8.3.6. Massachusetts Institute of Technology

(http://www.mit.edu)

DyPERS: Dynamic Personal Enhanced Reality System

Keywords: Augmented reality, wearable, real-time vision
DyPERS uses augmented reality and computer vision to autonomously retrieve 'media memories' based on associations with real objects the user encounters. These are evoked as audio and video clips relevant for the user and overlaid on top of real objects the user encounters. The system utilizes an adaptive, audio-visual learning system on a tetherless wearable computer. The user's visual and auditory scene is stored in real time by the system (upon request) and is then associated (by user input) with a snapshot of a visual object. The object acts as a key such that when the real-time vision system detects its presence in the scene again, DyPERS plays back the appropriate audio-visual sequence.
http://vismod.www.media.mit.edu/vismod/demos/dypers/

WearCam

Combining mobile multimedia with wearable computing and wireless communications gives rise to a new form of connectivity, through an antenna mounted on the tallest building in the city. This connectivity extends across the MIT campus and nearby cities as well. One obvious application is a personal safety device where connectivity through the Internet could allow friends and relatives to look out for one another, forming a "Safety Net". This "global village" would be far preferable to relying on more Orwellian methods of crime reduction such as pole-top cameras mounted throughout the city. Another application of WearCam is the Personal Visual Assistant, or the visual memory prosthetic.
http://www.wearcam.org/

Others

Keywords: Wearable, augmented reality
http://wearables.www.media.mit.edu/projects/wearables/
http://wearables.www.media.mit.edu/projects/wearables/augmented-reality.html
http://vismod.www.media.mit.edu/vismod/demos/smartroom/ive.html

8.3.7. University of Toronto

(http://www.utoronto.ca/)

ARGOS: Virtual Pointer

Keywords: stereoscopic displays, augmented reality, remote manipulation, teleoperation
Researchers: P. Milgram, D. Drascic et al.
The ARGOS (Augmented Reality through Graphic Overlays on Stereovideo) system is a tool for enhancing human-telerobot interaction and, more generally, also for image enhancement, simulation, sensor fusion, and virtual reality.
http://gypsy.rose.utoronto.ca/people/david_dir/POINTER/POINTER.html

8.3.8. HRL Laboratories

(http://www.hrl.com)

Mobile Interaction

Keywords: Augmented Reality, outdoor
Researchers: R. Azuma et al.
Users who are on the move need new approaches based on natural human dialogue with the computer and augmented reality display of information, enabling hands-free access and rapid understanding of the information. Whether it is a soldier on the digital battlefield or a driver in the networked car, humans need access to information within the context of their environment and according to their natural and intuitive modes of interaction. HRL works on the development of mobile augmented reality and is interested in the use of dialogue-based systems. These two technology areas represent an unlimited potential for extending the ability of people to interact with information and with others.
http://www.hrl.com/isl/isl_home.shtml

8.3.9. US Navy

(http://ait.nrl.navy.mil/)

BARS: Battlefield Augmented Reality System

Keywords: augmented reality, outdoor, urban environment, user interface, registration
This project examines how three-dimensional strategic and tactical information can be transferred between a command center and individual warfighters who are operating in an urban environment. It is a multi-disciplinary research project that encompasses a number of research and technical issues. These include:
the design of novel user interfaces;
the design of new interaction methods;
the development of an interactive, scalable three-dimensional environment;
tracking and registration systems of sufficient accuracy;
the development of a prototype demonstration system.
http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html

8.3.10. IBM Research

(http://www.research.ibm.com)

WorldBoard MUSEware: Mobile Computers for Museum Visitors and Museum Staff

Keywords: Mobile computing, wireless communication, barcode
The MUSEware product line is intended to provide mobile support to museum visitors and staff. This system utilizes a wireless network within a building to deliver content and triangulate location. Barcode scanners are used to track inventory or to retrieve information on exhibits for visitors.

WorldBoard WILDware Outdoor Learning Tool: Mobile Computers for Park Visitors and Park Staff

The WILDware product line is intended to provide mobile support to park visitors and staff. This system utilizes a wireless network within an outdoor setting (i.e., park, wildlife refuge, environmental education lab) to deliver content and triangulate location. Barcode scanners are used to track assets, or to retrieve information on locations or objects encountered by visitors. Sensors and probes can be attached to the device for scientific research.
http://www.worldboard.org/

8.3.11. Rockwell Science Center

(http://www.rsc.rockwell.com/)

Virtual Reality Activities at RSC

Keywords: Virtual reality, augmented reality, wireless communication, speech recognition, user interaction
They have developed a testbed application to integrate advanced user-interface technologies in the command post of the future. This testbed application is loosely based on a Real-Time Strategy (RTS) game from which they borrowed the graphics. The testbed demonstrates the following technologies: wireless handheld control, speech recognition, panning audio, WINS (Wireless Integrated Network Sensor) node interaction (sensor output display, and sensitivity control using speech), and graphics-tablet-based interaction.
http://hci.rsc.rockwell.com/

8.3.12. ATR

(http://www.mic.atr.co.jp/)

C-Map: Context-aware Mobile Assistant Project

Keywords: mobile computer, guide agent, profiling, communication
C-MAP is an attempt to build a personal mobile assistant that provides visitors touring exhibitions with information based on their locations and individual interests. ATR has prototyped the first version of the mobile assistant. A personal guide agent with a life-like animated character on a mobile computer guides users using exhibition maps, which are personalized depending on their physical and mental contexts. The project also includes services for facilitating new encounters and information sharing among visitors and exhibitors who have shared interests during/after the exhibition tours.
http://www.mic.atr.co.jp/organization/dept2/index.e.html
http://www.mic.atr.co.jp/dept2/c-map/index-jp.html (in Japanese)

8.3.13. Sony CSL

(http://www.csl.sony.co.jp/)

CyberCode: Designing Augmented Reality Environments with Visual Tags

Keywords: Augmented Reality, indoor, hand-held, barcode
Researchers: J. Rekimoto et al.
CyberCode is a visual tagging system based on 2D-barcode technology. The tags can be recognized by the low-cost cameras of mobile devices, and the system can also be used to determine the 3D position of the tagged object as well as its ID number.
(http://www.csl.sony.co.jp/person/rekimoto.html)

8.3.14. Keio University

(http://www.keio.ac.jp/)

AugPen: Augmented Pen

Keywords: Augmented Reality, human-computer interaction, pen tablet
The system called "AugPen" uses a pen tablet and an overlay technique. In this system, the user can register the target area on physical documents by drawing on the physical desktop onto which the computer images are projected. The obtained data is seamlessly transferred to the spreadsheet application. They also propose a method of recognizing the situation of the physical documents by means of surrounding physical objects like "paper clips", and they have developed an "Augmented Clip" device, which is able to predict the user's action of turning the page of the documents. Some application systems are demonstrated, such as an English-Japanese translation aid and a correction support system.
http://www.chi.mag.keio.ac.jp/activities/AugPen/index.shtml

8.3.15. University of South Australia

(http://www.cis.unisa.edu.au/)
Keywords: Wearable computer, outdoor, GPS, navigation
A system for outdoor augmented reality navigation with wearable computers.
http://www.cis.unisa.edu.au/~ciswp/siemens/tsld001.htm

8.3.16. LORIA

(http://www.loria.fr/)

The bridges of Paris

Keywords: Augmented Reality, urban environment, outdoor, offline registration
Registration for harmonious integration of real world and computer-generated objects.
http://www.loria.fr/~gsimon/CCVCG99/sld007.htm

8.3.17. INRIA

(http://www.inria.fr/)
Augmented Reality in Syntim: Mixing virtual objects and real world
http://www-syntim.inria.fr/syntim/analyse/video-eng.html

8.3.18. TU Wien

(http://www.cg.tuwien.ac.at/)

Studierstube: Collaborative augmented reality

Keywords: Augmented reality, HMD, collaborative work
At the heart of the Studierstube system, the Technical University Vienna is using collaborative augmented reality to embed computer-generated images into the real work environment. Augmented reality uses see-through head-mounted displays to overlay computer graphics onto a user's view of the real world. However, support has also been added for other display techniques such as the Virtual Table or stereo projection wall to allow for other forms of bringing computer graphics into the work environment. By allowing multiple users to share the same virtual environment, the Studierstube project enables computer supported cooperative work.
http://www.cg.tuwien.ac.at/research/vr/studierstube/

8.3.19. Fraunhofer-IGD

Dunhuang Cave

Keywords: cultural heritage, modeling, visualization, tracking
The goal of the project is to use computer technology to help preserve, restore and propagate the caves of Dunhuang, China. Due to sand, humidity, time, and tourists, parts of the cave site have been destroyed in the past. In cooperation between Zhejiang University in Hangzhou (China) and the Fraunhofer IGD (Germany), a virtual reality presentation of the historical places has been developed. After modeling the geometry and textures, the VR technology gives tourists and researchers a photo-realistic impression of the beauty of the caves inside a five-sided CAVE projection.
http://www.igd.fhg.de/igd-a4/index.html

MOBIS

Keywords: handheld computers, exhibition guide, beacon-based location
Based on the concepts of situation-aware mobile assistance, the Fraunhofer Institute for Computer Graphics in Rostock has developed the generic mobile visitor information system "MOBIS", which relieves the visitors from the constraints of conventional guided tours. Available on PalmPilot and Windows CE handheld computers, this mobile assistant acts as a personal guide, helping the visitor to actively and independently explore the fair, museum or exhibition. Using location- and object-sensing devices, MOBIS automatically calls up information on exhibits as the visitor is approaching them. There is no need for looking in a catalogue or studying a bulky map: it knows the visitor's location and what can be found around him.

The MOBIS visitor information system uses a beacon-based location tracking system for determining the user's position. Throughout the exhibition, electronic beacons are installed at "interesting locations" – at exhibits, meeting points, way crossings, etc. Each beacon emits a unique signal that can be picked up by the mobile assistant when in range of the specific beacon. Once the mobile assistant receives a beacon signal, it knows its own location with a precision that is equivalent to the beacon range.
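The beacon scheme described above amounts to a simple lookup: each beacon identifier maps to a known location, so hearing a beacon localizes the visitor to within that beacon's range. A minimal sketch of the idea (the beacon identifiers and location names are invented for illustration; this is not the MOBIS implementation):

# Assumed beacon registry: beacon ID -> location description and nominal range.
BEACONS = {
    "beacon-07": {"location": "Temple model, hall 2", "range_m": 5.0},
    "beacon-12": {"location": "Stadium entrance",     "range_m": 8.0},
}

def locate(heard_ids):
    """Return the known locations for the beacon IDs currently received."""
    return [BEACONS[b]["location"] for b in heard_ids if b in BEACONS]

# A device hearing beacon-12 knows it is within ~8 m of the stadium entrance.
print(locate(["beacon-12", "unknown-id"]))   # ['Stadium entrance']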

8.3.20. EML (European Media Lab)

(http://www.eml.org/)

Deep Map: Virtual Tourist

Keywords: GIS, database, tourist guide, reconstruction, speech recognition, cultural heritage
EML develops the prototype of a digital personal mobile tourist guide which integrates research from various areas of computer science: geo-information systems, databases, natural language processing, intelligent user interfaces, knowledge representation, and more. The objective of DEEP MAP is to develop a multidimensional geographic information system (GIS) for a novel intelligent electronic tourist guide. Deep Map combines and extends geographic information systems and databases for storing geographical, historical, and touristically relevant data on buildings, sights, points of interest, history and stories related to the city of Heidelberg. The system will be made accessible through the WWW and is the basis for a user-adaptive, mobile system capable of proposing individualized tours through Heidelberg in space and time.

A speech recognition system will be created that converts the tourist's inquiries into a syntactic and semantic representation. This representation is processed by the spatial cognition engine and transformed into a plan of actions, which is communicated to the tourist via the language generation and articulation module. The three-dimensional modeling of cities in their existing (actual) and former (historic) states will be carried out employing the current state of the art for turning the given building stock into a 3D model. Three individual areas of research have been identified as relevant for the Deep Map framework: the creation of interactive 3D models, the construction of 3D models from 2D representations, and the automatic extraction of information from 3D images.
http://www.eml.org/research/deepmap/deepmap.html

8.3.21. Conferences

The following conferences are relevant for Augmented Reality:
CHI (Conference on Computer-Human Interaction)
CCVCG (Confluence of Computer Vision and Computer Graphics)
EGVE (Eurographics Workshop on Virtual Environments)
HCI (Conference on Human-Computer Interface)
MICCAI (International Conference on Medical Image Computing and Computer-Assisted Intervention)
ISMR (International Symposium on Mixed Reality)
ISWC (International Symposium on Wearable Computers) – http://iswc.gatech.edu/
IWAR (IEEE Workshop on Augmented Reality) – http://hci.rsc.rockwell.com/iwar/
SIGGRAPH (Special Interest Group on Computer Graphics) – http://www.siggraph.org/
VRAIS / VR (Virtual Reality Annual International Symposium / Virtual Reality) – http://www.caip.rutgers.edu/vr2000/
VRST (ACM Symposium on Virtual Reality Software Technology)

9. Appendix C - Questionnaires

Aiming at a better understanding of the users' requirements, we constructed the following questionnaires. These questionnaires were actually completed by some visitors at the archeological site of ancient Olympia and helped us to design the application scenarios according to their suggestions and special observations.

9.1.Experts’ Questionnaire

ARCHEOGUIDE
Augmented Reality-based Cultural Heritage On-site GUIDE

Name:
Property:

System Description

The ARCHEOGUIDE project is targeting a wide range of users, including visitors, cultural site managers, researchers and content creators (or curators). The basic idea of this system is the development of a personalized guidance information system for cultural heritage sites that will improve the experience of visiting a site using augmented reality methods. Through this system, ancient monuments will be virtually reconstructed in the visitors' eyesight.

There are two approaches concerning visits to cultural heritage sites. In the traditional approach, visitors go to a cultural heritage site and optionally follow a tour by a human expert guide. In the second – virtual reality – approach, the user/visitor is able to interact with a virtual environment created on a computer that simulates the cultural heritage site, using computer-controlled devices to move around and to retrieve different types of multimedia information (audiovisual as well as textual).

The ARCHEOGUIDE project's innovation relies on the fact that the visitor won't be isolated from the physical environment. Instead, they will be able to move around the archeological site receiving multimedia information and viewing 3D representations of various monuments of the site. This is accomplished by superimposing the virtual environment on the real one (augmented reality system).

Visitors will be supplied with an HMD (see-through Head-Mounted Display), headphones, and a mobile computer. A tracking system will determine the visitor's actual position and orientation. Based on the visitor's position and personal interests, the system will present audiovisual information aiming at guiding them through the site and allowing them to gain as much information as they wish regarding the archeological site.

Questionnaire

A. General Information – Description of a virtual guidance system

1. What kind of information is currently most often requested by the visitors?

2. How much improvement do you believe an augmented reality system could offer in a cultural site tour? Such a system would require you to wear some computer-controlled eye-glasses and would allow you to view a 3D reconstruction of some of the ruins of the site.

None whatsoever

Enough

Significant

Comments

3. Do you believe that guided tours improve your overall experience from the site visit?

Yes

No

Comments

4. In an augmented reality tour system like the one described, do you find the existence of a human guide important?

Yes

No

Comments

B. System demands

5. In an augmented reality tour system it may be necessary to have artificial objects in the site (black & white labels of dimensions 8cm x 8cm), which will help the system to identify the visitor's position and orientation in the area. Where would be an appropriate place to "stick" such labels? You may check zero or any number of boxes.

Trees

Garbage bins

Stones (big, immovable)

Other places: explain

6. How many of those artificial objects can exist in an area where a 3D virtual reconstruction of an object should occur, without disturbing the site?

Ten

Five

Two

None

Comments

C. Multimedia Information Organizing

7. How helpful would a personalized guide presenting audio/textual information be during an augmented reality visit to an archeological site?

None whatsoever

Enough

Significant

Comments

8. Around what criteria should the system build a personalized tour?

Visitor’s time availability

Level of detail requested

Other

9. What information about the visitor should the system know in order to build a personalized tour? You may check zero or more than one box.

Age (categories like child, young adult, adult etc.)

Education level (elementary, high-school, university, graduate etc.)

Interests (specific categories like sports, history, archeology, science, dance etc)

Time availability

Expertise (general audience, archeologist, expert in site history etc.)

Language spoken

Other, please explain:

10. How should the system organize any audio/textual information accompanying the visual information of the 3D models? In other words, how should the system organize any audio/textual information so that it can act as a personalized guide for the tour?

In the first model, the system matches information in the system that has been designated as appropriate for the user's profile, and builds a guided tour. It directs the user towards areas of interest. For each area it presents first general information about the area, and then describes in more detail (if available) each of the artifacts in the area. The visitor may at any point interrupt the flow of information, or ask for more information about an object. This implies that there may be many different tours that are dynamically created by the computer according to the visitor's profile. The system needs to know, for each piece of information, for which categories of visitors it is appropriate for presentation.

In a second model, each visitor is classified as a member of a set of predefined categories about which predefined tours exist. The system then presents the information from the tour in the exact predefined order in which it was created. This implies a small static number of tours that are not dynamically created by the computer but are pre-compiled by the content creator or curator. The system does not need to know, however, for each piece of information, the exact categories of visitors it is most appropriate for.

Other, please explain.

11. How should the system organize the audio/visual information for the site? One approach is to have descriptions about the site, and the areas and objects it contains. Then for each area, to have descriptions about the area and the objects it contains. Then for each object, to have descriptions about it and the parts which it contains, and so on for each part that contains other sub-parts. Should the system allow one area to overlap with other areas? Should one object belong to more than one other object (e.g. a statue belonging to more than one temple)? At what level of detail should the system contain audio/textual information about the objects in the site? I.e. should the system have specific separate information about the roof of the temple of Zeus or the joins of the columns of some other temple so that it can present this information to the visitors?

D. Presentation of multimedia/ audiovisual information

12. In an augmented reality tour system in what way would you prefer the tour to take place?

Free tour in the site with the capability of requesting more information on a subject according to the user's demands

Directed/predefined tour and presentation of 3d and audiovisual information

Directed tour with the capability of interfering during the presentation of multimedia information (repeat, pause, search etc.)

E. 3d representation of objects

13. Do you think it would be useful for the system to present the monument in different periods of time?

Yes

No

Comments

14. Sometimes, different groups of archeologists do not agree on the same ruin reconstruction proposal.

a) How helpful would it be to have several proposals for the same ruin or part of a ruin?

None whatsoever

Enough

Significant

b) How important would it be to register the expert visitor's opinion, in order to decide?

None whatsoever

Enough

Significant

15. Do you think it is important to include "live characters" (human representations) in the 3D models in order to enrich the environment (e.g. Greek athletes in the stadium)?

Yes, very important

No, only the architecture is important

Depending on the category of the visitors (archeologists, students, ...)

Comments

16. Assume there is an obstacle in front of the monument of which you are receiving a 3d representation. The representation will normally appear in front of any obstacle. Would that situation bother you? (Occlusion problem)

Yes

No

17. When would you prefer to see a 3d representation of an object?

At a distance of 20m or closer to the object of interest

All the time during the tour; I'd like to see a 3d representation of all objects that should be in my eyesight

Depending on my choice

9.2.Visitors’ Questionnaire

ARCHEOGUIDE
Augmented Reality-based Cultural Heritage On-site GUIDE

Name:
Property:

System Description

The ARCHEOGUIDE project is targeting a wide range of users, including visitors, cultural site managers, researchers and content creators. The basic idea of this system is the development of a personalized guidance information system for cultural heritage sites that will improve the experience of visiting a site using augmented reality methods. Through this system, ancient monuments will be virtually reconstructed in the visitors' eyesight, as they were at the time they were built.

There are two approaches concerning visits to cultural heritage sites. In the traditional approach, visitors go to a cultural heritage site and optionally follow a tour by a human expert guide. In the second approach, the user/visitor is able to interact with a virtual environment created on a computer (virtual reality) that simulates the cultural heritage site, using a mouse or a keyboard to move around and to retrieve different types of information (textual or audiovisual).

The ARCHEOGUIDE project's innovation relies on the fact that the visitor won't be isolated from the physical environment. Instead, he will be able to move around the archeological site receiving multimedia information and viewing a 3d representation of the site. This is accomplished by superimposing the virtual environment on the real one (augmented reality system). Visitors will be supplied with an HMD (see-through Head-Mounted Display), headphones, and a mobile information system. A tracking system will determine the visitor's actual position in the site. Based on the visitor's position and personal interests, the system will present audiovisual information aiming at guiding him through the site and allowing him to gain as much information as he wishes regarding the archeological site.

Questionnaire

A. General Information – Description of a virtual guidance system

14. What type of presentations do you believe should be supported by the system?

3D presentation (augmented onto real objects – augmented reality system)

3D presentation (without augmentation – virtual reality system)

Audio output

Text documents (annotated to real objects)

Text documents (without annotation)

Video

Images

Other:

15. How much improvement do you believe an augmented reality system could offer in a cultural site tour?

Not at all

Enough

Significant

Other

16. In an augmented reality tour system like the one described, do you find the existence of a human guide (on-site person) important?

Yes, because:

No, because:

B. System demands

17. How many kilograms would be acceptable for you to carry with you?

18. In an augmented reality tour system it is possible to have artificial objects in the site (labels of dimensions 8cm x 8cm), which will help the system to identify the visitor's position in the area. Would you be disturbed by the existence of such objects?

Yes

No

Comments

19. How many of those artificial objects can exist in the area without disturbing you?

Ten

Five

Two

None

C. Multimedia Information Organizing

20. How helpful would a personalized guide be during an augmented reality visit to an archeological site?

Not at all

Enough

Significant

Comment

21. What kind of information should the system use to build a personalized tour?

Time remaining

Degree of detail

Other

D. Presentation of multimedia/ audiovisual information

22. In an augmented reality tour system in what way would you prefer the tour to take place?

Free tour in the site with the capability of requesting more information on a subject according to the user's demands

Directed/predefined tour and presentation of 3d and audiovisual information

Directed tour with the capability of interfering during the presentation of multimedia information (repeat, pause, search etc.)

a) If your last answer was not a free tour, should the system indicate the way to the next interesting point?

yes

no

b) If yes, how should the system do that?

using a sound message

displaying a text message

showing graphical information (arrows, ...)

using land marks

23. What options should the system provide during the tour?

Help about the system services and the way it works

More information concerning a specific subject

Pause the flow of audiovisual information regarding an object or monument

Repeat audiovisual information

Choose an object and gain information about it (among those in your eyesight)

Other

 24. What kind of display would you prefer?

Hand held device and display

Head mounted display with "belt pack" computer

25. On a 1-10 scale, give the degree to which you would prefer to interact with the system using the following devices:

Computer device (mouse, keyboard etc.)

Voice recognition for simple commands

3D-pointing device

26. While you are moving around (as opposed to while you are standing still), would you prefer to see 3d representations of objects even if they are of low quality and possibly unstable?

Yes

No

E. 3d representation of objects

27. Do you think it would be useful for the system to present the monument in different periods of time?

Yes

No

28. Assume there is an obstacle in front of the monument of which you are receiving a 3d representation. The representation will normally appear in front of any obstacle. Would that situation bother you? (Occlusion problem)

Yes

No

29. When would you prefer to see a 3d representation of an object?

At a distance of 20m or closer to the object of interest

All the time during the tour; I'd like to see a 3d representation of all objects that should be in my eyesight

Depending on my choice

30. What information is important to receive from the system (please rank 1-10)?

Original shape of the buildings

Change of the buildings over time

Demonstration of historical competitions

Visual explanations

Audio explanations

Textual explanations

Others 

31. What level of visual granularity would be acceptable?

I care to see only the shape of the monuments

I care to see advanced details of the monuments

I would like to be able to see every little detail of the monuments
