
Pervasive Computing for Interactive Virtual Heritage


The rapid evolution of pervasive computing technologies enables bringing virtual heritage applications to a new level of user participation. A platform for interactive virtual heritage applications integrates a high-end virtual reality system with wireless, connected portable and wearable computers, facilitating and enhancing user navigation, visualization control, and peer-to-peer information exchange.

Virtual heritage (VH) technology has developed rapidly in the last decade.1 Its evolution is propelled by the coming of age of several enabling technologies, namely high-speed computer networks and the Internet, high-performance computer graphics hardware and software, and improved data-collection and model-construction devices and applications. The diffusion of VH applications can be strategic for the economic development of many countries, such as Greece and Italy, with an unrivaled historical tradition. VH technology can make their cultural wealth accessible to the worldwide public, with obvious positive consequences on the development of tourism and further valorization of historical resources.

VH evolution has gone through several phases. In its basic form, it’s limited to simple Web-based interfaces to historical sites, museums, and collections. More advanced, second-generation VH applications exploit virtual reality (VR) technology to give users immersive experiences, such as archaeological site navigation, time and space travel, and ancient artifact reconstruction in 3D. In this context, our work focuses on the third generation of VH applications, which aims to provide an immersive and interactive user experience. Interactive VH has a long tradition, but its development has been hampered by the limited availability, poor usability, and high cost of VR interface devices used in its early embodiments.

Pervasive computing technologies have created the conditions for overcoming the limitations of early interactive VH prototypes. Our work exploits widely available portable and wearable computing platforms (PDAs and cellular phones) and software environments (Java) to create highly interactive VH applications based on popular, low-cost, wireless terminals with highly optimized interfaces. In other words, we enable the VH application user to interact in the VR environment with his or her own palmtop or mobile phone, without having to become familiar with a new I/O interface.

Furthermore, we improve interactivity by enabling not only single-user interaction with a virtual environment but also multiple-user, peer-to-peer, and many-to-many interaction in the same environment. This is the first step toward collaborative use of VH content. The applications of these ideas and technologies are numerous, including virtual museums, virtual archaeology laboratories, group exploration of virtual historical sites, games, and entertainment.

Use of ubiquitous computing and wireless technologies for interaction in immersive virtual environments (IVEs) poses many challenges in terms of real-time performance. VH applications need high interactivity, in terms of the subjective quality of system response observed by users, to preserve their sense of immersion and presence. Thus, it’s important to study system reaction time to user-generated events to ensure adequate interactivity. This is not trivial when using mobile systems; wireless devices are often limited in processing resources and in network bandwidth and latency.

In this work, we describe a software and hardware infrastructure based on off-the-shelf components to support collaborative interaction in VH. We can enrich VH applications by displaying multimedia data and sharing them with other users. Furthermore, usability increases when providing effective mobile user interfaces that target a wide range of single users and groups.

Our aim is not to offer a completely new experience but to develop an appropriate system solution for a real set of multimedia requirements, using commodity components. We describe a prototype VH application, namely a PDA-controlled navigation interface (see Figure 1), which

IEEE MultiMedia, July–September 2005. 1070-986X/05/$20.00 © 2005 IEEE. Published by the IEEE Computer Society.


Elisabetta Farella, Davide Brunelli, Luca Benini, and Bruno Riccò

University of Bologna

Maria Elena Bonfigli, CINECA Interuniversity Supercomputing Center

Feature Article


is also a collaborative analysis and exploration tool for the tombs on the ancient Appian Way.2

These applications provide a realistic test bed for performance analysis of the basic interaction functions implemented in our framework. Our quantitative analysis demonstrates that performance levels are adequate for navigation and interaction among a limited number of users. The “Background and State of the Art” sidebar describes work by other researchers.

System requirements

Our primary objective is to manage interaction and communication in a collaborative virtual environment shared by archaeologists and cultural heritage experts (but also architects, virtual model designers, art experts, computer scientists, and so on). In other words, we do not limit ourselves to simple interactions, as in the case of a visitor to a VH site. We view the VH site as an active workplace, where users have ample degrees of freedom to proactively interact with the environment and to cooperate with others. To achieve this objective, we need to satisfy three fundamental technical requirements: sense of presence, multiclient cooperation, and multimedia data support.


Figure 1. Example of our integrated system.

Background and State of the Art

Research in innovative technologies aimed at presenting cultural heritage sites with immersive virtual reality (IVR) is active. As a representative example, researchers at the Fraunhofer Institute applied innovative visual metaphors in the IVR presentation of the cathedral of Siena.1 An avatar leads the visitor through the cathedral, reconstructed in a passive immersive environment. Alternatively, the visitor can choose views and notable areas through a GUI that mimics an active book with multiple pages and selection buttons. One of the most important results of virtual heritage (VH) is the presentation of contents not available otherwise. In the Nuovo Museo Elettronico della città di Bologna (Nu.M.E.) project, users are provided with simple and efficient tools to navigate a virtual world in space and in time.2

At the Foundation of the Hellenic World, VR is employed to create immersive, interactive, and photorealistic experiences for the general public.3 The VR equipment used in this exhibition was chosen with the goal of offering technologies accessible to people with different backgrounds. The main limitation of this type of application is interactivity, because access is provided via a stationary console (for example, Behr et al.1 use a Windows PC with a touch screen). This allows only a small group of visitors to simultaneously interact with the virtual world, while others can have only a passive experience. Moreover, VR equipment and interaction devices are expensive, cumbersome, limited by multiple cables, and not user friendly.

To overcome the limitations in interactivity in the VH environment, several researchers have explored pervasive computing technologies.4 The wireless network infrastructure provides an untethered connection between the I/O devices carried by the user and the server infrastructure (for rendering and visualization) of the virtual environment. Furthermore, pervasive computing devices are becoming widely available (notably PDAs and mobile phones), and users are accustomed to their interfaces. Therefore, the objective is to exploit ubiquitous computing to enhance usability and interactivity in VR.

Research efforts in this area date back to 1993, when Fitzmaurice et al. verified spatial awareness and different interaction metaphors for a tracked, small, handheld monitor acting as a palmtop computer.5 Other work has implemented virtual handheld interaction devices, that is, physical devices for which a representation is present in the virtual environment. An example, presented in 1995, is the Virtual Tricorder, which maps a physical input device, the Logitech Flymouse, to a visual model.6

In 1999, the Haptic Augmented Reality Paddle (HARP) project implemented a handheld window through a physical paddle in the virtual environment.7 This technique is based on tracking a small object and hiding it under a virtual representation. This provides tactile and force feedback, and the user has the illusion of holding a handheld interaction device with the computing power of a high-performance graphics workstation. This technique was necessary because past handheld devices didn’t have much computing power or storage capability. This solution isn’t the best one, however, because interaction through virtual devices is uncomfortable. These devices can be nonintuitive and also invasive, occluding the user’s view of the virtual world.

The rapid evolution of technology has made possible some recent practical combinations of VR and real handheld devices.



Sense of presence

Sense of presence is the main contribution of using IVR techniques. In fact, VR, with its interactive and experiential characteristics, can simulate real environments with various degrees of realism. The qualifying characteristic that really distinguishes VR from real-time 3D graphics is that it provides the illusion of being present inside a virtual world, through immersivity and interaction. Moreover, the integration of immersivity in VH applications enables users to better understand and examine 3D models. This is particularly important in the case of reconstructions of objects and environments no longer available or accessible in the real world. To achieve effective immersivity and sense of presence, we must satisfy some subrequirements:

❚ provide a projection system supporting stereoscopic vision as well as software and hardware for real-time processing,

❚ enable major interaction tasks, and

❚ introduce physical devices that don’t divert from the sense of presence.

Stereoscopic vision and real-time processing. Rooms such as Cave Automatic Virtual Environments (CAVEs) and virtual theaters are the most common projection setups used for IVE visualization. Alternatively, binocular omniorientational monitors (BOOMs) and head-mounted displays (HMDs) are used; these technologies project the virtual world directly into binoculars or onto the lenses of a helmet worn by the user. The advantage of projection rooms and theaters over BOOMs and HMDs is that they don’t isolate users or prevent them from viewing their own bodies and remaining conscious of the real surroundings together with the virtual one. Moreover, both systems allow multiple users to share the virtual experience at the


This is well illustrated by Watsen, Darken, and Capps, who used a 3Com Palm Pilot to interact with a CAVE-like environment.8

Another example is the Java-Based Interface to the Virtual Environment (JAIVE) project, which exploits a handheld tablet and wireless networking. It uses Java and accommodates custom-designed interfaces.9 Finally, at Siggraph 2002, researchers presented two interesting projects in this direction. First, Virtual Reality Applications Center researchers presented Tweek, a middleware tool providing an extensible Java GUI that interacts with VR applications.10 This system can run on desktop and palmtop computers, in projection-based VR systems, or in a 3D virtual space. The second work concerns the ARS Box and the Palmist project, a PC-based VR system combined with a CAVE-like environment, where a handheld PC works as its interaction interface.11 This solution offers a low-cost system and at the same time enhances interaction and content presentation through separate management of three projection screens from a single device, a handheld PC. A limitation is that the interface isn’t platform independent, and only one person at a time can experience interactivity. The latter is also the major limitation of Tweek.

References

1. J. Behr et al., “The Digital Cathedral of Siena—Innovative Concepts for Interactive and Immersive Presentation of Cultural Heritage Sites,” Proc. Int’l Cultural Heritage Informatics Meeting (ICHIM01), 2001, pp. 57-71.

2. M.E. Bonfigli and A. Guidazzoli, “A WWW Virtual Museum for Improving the Knowledge of the History of a City,” Virtual Reality in Archaeology, J.A. Barcelo, M. Forte, and D.H. Sanders, eds., ArcheoPress, 2000, pp. 956-961.

3. A. Gaitatzes, D. Christopoulos, and M. Roussou, “Reviving the Past: Cultural Heritage Meets Virtual Reality,” Proc. Virtual Reality, Archaeology and Cultural Heritage, ACM Press, 2001, pp. 103-110.

4. M. Weiser, “The Computer for the 21st Century,” Scientific Am., vol. 265, no. 3, 1991, pp. 94-104.

5. G.W. Fitzmaurice, S. Zhai, and M.H. Chignell, “Virtual Reality for Palmtop Computers,” ACM Trans. Information Systems, vol. 11, no. 3, 1993, pp. 197-218.

6. M.M. Wloka and E. Greenfield, “The Virtual Tricorder: A Uniform Interface to Virtual Reality,” Proc. 8th Ann. Symp. User Interface Software and Technology (UIST), ACM Press, 1995, pp. 39-40.

7. R.W. Lindeman, J.L. Silbert, and J.K. Hahn, “Hand-Held Windows: Towards Effective 2D Interaction in Immersive Virtual Environments,” Proc. IEEE Virtual Reality Ann. Int’l Symp. (VRAIS 99), IEEE CS Press, 1999, pp. 205-212.

8. K. Watsen, D.P. Darken, and W.V. Capps, “A Handheld Computer as an Interaction Device to a Virtual Environment,” Proc. 3rd Int’l Immersive Projection Technology Workshop (IPT), 1999, pp. 51-57.

9. L. Hill and C. Cruz-Neira, “Palmtop Interaction Methods for Immersive Projection Technology Systems,” Proc. 4th Int’l Immersive Projection Technology Workshop (IPT), 2000, pp. 1-5.

10. P. Hartling, A. Bierbaum, and C. Cruz-Neira, “Virtual Reality Interfaces Using Tweek,” Proc. Siggraph, ACM Press, 2002, p. 278.

11. H. Hörtner et al., “ARS BOX with Palmist—Advanced VR-System Based on Commodity Hardware,” Proc. Siggraph, ACM Press, 2002, p. 64.



same time. These characteristics make CAVEs and virtual theaters suitable solutions for collaborative experiences.

A CAVE is a cube composed of display screens completely surrounding the viewer; the user can explore the virtual world by moving around.3 A virtual theater is a large-screen system that fills the field of view of users seated as in a real theater. In both cases, the illusion of immersion is obtained by wearing stereo glasses and providing the viewers with multimodal devices (trackers, gloves, and so on) to interact with virtual objects in the scene, which are calculated and projected in real time.

Real-time constraints are satisfied through high-performance graphics libraries and VR toolkits, and through graphics workstations with dedicated hardware components that relieve the main CPU of many tasks. Usually a graphics subsystem works in parallel as part of a pipeline, interfacing with the CPU. This subsystem generates the actual data to be rendered, providing geometry operations: transforming 3D objects, translating them from 3D world-space coordinates into 2D screen-space coordinates, and subdividing polygons into triangles. The subsystem also handles buffering and rasterization: converting the polygonal scene representation to a pixel representation, drawing screen-space primitives into the frame buffer, performing screen-space shading, and executing per-pixel operations. These operations guarantee real-time rendering of complex 3D scenes. To display realistic scenes, the ability to perform fast texture-mapping functions is crucial. A state-of-the-art multipipe graphics subsystem can drive a single screen with a sustained performance of up to 250 to 300 million triangles per second and 7 to 8 billion pixels per second.
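To make the geometry stage concrete, the following minimal sketch projects a camera-space 3D point into 2D screen-space pixel coordinates. It is purely illustrative (our own simplification, not the article's pipeline): real graphics subsystems use 4x4 homogeneous matrices and hardware rasterization, but the core perspective divide is the same.

```java
// Minimal sketch of the geometry stage: a camera-space 3D point is
// projected onto a 2D screen via a perspective divide. Illustrative
// only; hardware pipelines generalize this with 4x4 matrices.
class PerspectiveProjector {
    final double focal;       // distance from eye to projection plane
    final int width, height;  // screen resolution in pixels

    PerspectiveProjector(double focal, int width, int height) {
        this.focal = focal;
        this.width = width;
        this.height = height;
    }

    // Returns {x, y} pixel coordinates for a camera-space point
    // (z > 0 lies in front of the camera).
    int[] project(double x, double y, double z) {
        double sx = focal * x / z;  // perspective divide
        double sy = focal * y / z;
        // Map plane coordinates to pixels: origin at screen center,
        // y axis pointing down as is conventional for screens.
        int px = (int) Math.round(width / 2.0 + sx);
        int py = (int) Math.round(height / 2.0 - sy);
        return new int[] { px, py };
    }
}
```

Rasterization then fills the triangles whose vertices have been mapped this way, which is exactly the per-pixel work the article attributes to the graphics subsystem.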

Interaction tasks. Navigation, object selection and manipulation, and system status changes all refer to interaction with the IVE. At the same time, these tasks enhance the sense of presence and immersion inside the virtual world. Obviously, we must respect real-time constraints, and interaction techniques should not affect navigation smoothness or the virtual world’s immediate reaction to user requests.

Navigation is one of the basic interaction tasks in an IVE. It lets users move from one place to another (travel, or motion) and supports the mental or cognitive process of guiding movement (wayfinding), which can be more difficult in a virtual world than in the real world. Navigation thus aggregates a cognitive element, wayfinding, and a motor element, travel.4 We can implement travel through passive transport (for example, tapping the place to reach on a map causes the consequent motion toward the target) or through active transport that guides the motion directly (for example, clicking an arrow button for each step in a certain direction). We can also support maneuvering, a subset of motion relating to adjusting position, orientation, or perspective even when the observer is not traveling, through buttons that modify the angle-of-view orientation (heading, pitch, and roll). We could also let users select the speed of movement. Navigation extends to moving not only in space but also through time. This is a powerful feature in VH applications, where the chance to visit the same environment through the ages, or to see an artwork change over time from its original state to its restoration, adds value to the reconstruction.

Object selection and manipulation are, respectively, the specification of virtual objects and the specification of virtual object properties. For example, our interface provides object selection, rotation, and zooming in and out.

System controls refer to the ability to change the system state or the interaction mode. For example, our implementation provides the ability to change the drawing mode (from wireframe to solid, with or without textures).
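The three interaction tasks reduce to a small command vocabulary that a handheld client can send to the rendering server. The following Java sketch is a hypothetical encoding of ours (the names and wire format are not the article's protocol) showing how travel, maneuvering, and a draw-mode system control could be represented:

```java
// Hypothetical encoding of the interaction tasks described above:
// travel, maneuvering (heading/pitch/roll), and system control
// (draw-mode changes). Names and wire format are illustrative.
class InteractionCommand {
    enum Kind { TRAVEL, MANEUVER, DRAW_MODE }
    enum DrawMode { WIREFRAME, SOLID, SOLID_TEXTURED }

    final Kind kind;
    final double dx, dy, dz;            // travel step (world units)
    final double heading, pitch, roll;  // maneuvering deltas (degrees)
    final DrawMode mode;                // system-control payload

    private InteractionCommand(Kind k, double dx, double dy, double dz,
                               double h, double p, double r, DrawMode m) {
        kind = k; this.dx = dx; this.dy = dy; this.dz = dz;
        heading = h; pitch = p; roll = r; mode = m;
    }

    static InteractionCommand travel(double dx, double dy, double dz) {
        return new InteractionCommand(Kind.TRAVEL, dx, dy, dz, 0, 0, 0, null);
    }

    static InteractionCommand maneuver(double h, double p, double r) {
        return new InteractionCommand(Kind.MANEUVER, 0, 0, 0, h, p, r, null);
    }

    static InteractionCommand drawMode(DrawMode m) {
        return new InteractionCommand(Kind.DRAW_MODE, 0, 0, 0, 0, 0, 0, m);
    }

    // Compact line-based serialization, cheap to build and parse on a
    // 2005-era PDA with limited processing resources.
    String serialize() {
        switch (kind) {
            case TRAVEL:   return "T " + dx + " " + dy + " " + dz;
            case MANEUVER: return "M " + heading + " " + pitch + " " + roll;
            default:       return "D " + mode;
        }
    }
}
```

Keeping messages this small matters because, as noted above, the wireless link's latency directly affects the perceived smoothness of navigation.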

Pervasive computing help. While VR offers an increasing number of possibilities for experiencing new methods of interaction, it’s currently limited by highly specialized user terminals. This has negative consequences on learning time and usability, because interaction tools and devices require active attention, diverting it from the VR contents. Moreover, specialized devices (such as gloves, HMDs, BOOMs, and 3D joysticks) often feel invasive, cumbersome, and limiting because they’re usually wired to stationary appliances. Furthermore, advanced VR technologies are expensive and cannot benefit from the consumer market’s effort in optimizing product ergonomics and usability.

We can instead obtain important benefits by introducing pervasive computing techniques, which address these kinds of problems. For this reason, in our system we use wireless connectivity to achieve freedom of


movement for the user. At the same time, personal devices (such as PDAs and, in the future, other wireless handheld devices such as cellular phones) are perfect candidates to support advanced interfaces. The main advantage of using PDAs is that they’re already in widespread use, with the prospect of further market diffusion.5 The convergence of PDAs and mobile phones will result in more natural communication among people. Users will be able to send text, images, and other multimedia data; surf the Web; listen to music; and watch video through single devices that represent natural, transparent interfaces to digital multimedia data.

From this point of view, PDAs are suitable devices for our purposes because, compared to mobile phones or pagers, they’re more powerful and manageable for collecting, organizing, interacting with, and managing multimedia data. Another requirement is to design the system for platform independence, to support different kinds of handheld devices with different hardware capabilities, thus enabling users to exploit their own personal devices.

Multiclient cooperation

Supporting concurrent multiuser interaction in networked environments, as in a virtual theater, and providing exchange and collection of multimedia data can be of great interest in edutainment, entertainment, and collaborative work. In the VH field, it’s particularly useful for formulating reconstruction hypotheses and for exchanging multimedia data among people. These features imply the development of peer-to-peer interaction among clients and of scheduling protocols to organize the priority in controlling the VR.

When several people, whether physically close or distant, share the same workspace, they have two main needs: to be aware of other users and their actions, and to actively interact with them. Awareness of others, even when they’re not actively communicating with each other, is essential in shared virtual environments where only the view of one user at a time can be projected. In particular, it’s important to represent each user’s current position in the system and to make available information on what another user is doing, whether or not that user is currently controlling the system. Cooperating requires communication and exchange of multimedia data among clients, for example, opening private communication channels and exchanging or sharing data such as voice, sounds, music, video, graphics files, charts, and images.
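The awareness requirement above can be sketched as a small registry that each client keeps up to date from peers' broadcast messages. This is our own illustrative sketch, with hypothetical field and method names, not the article's implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the awareness mechanism: each client publishes its map
// position and current activity, and every peer records the last known
// state of the others. Names and fields are illustrative assumptions.
class AwarenessRegistry {
    static class PeerState {
        final double x, y;        // position on the shared 2D map
        final String activity;    // e.g. "navigating", "annotating"
        final boolean hasControl; // currently controlling the camera?

        PeerState(double x, double y, String activity, boolean hasControl) {
            this.x = x; this.y = y;
            this.activity = activity;
            this.hasControl = hasControl;
        }
    }

    private final Map<String, PeerState> peers = new HashMap<>();

    // Called whenever an update message arrives from a peer.
    void update(String peerId, PeerState s) { peers.put(peerId, s); }

    // The GUI queries this to draw peer icons on the 2D map.
    PeerState stateOf(String peerId) { return peers.get(peerId); }

    // Identify the current master (camera controller), if any.
    String master() {
        for (Map.Entry<String, PeerState> e : peers.entrySet())
            if (e.getValue().hasControl) return e.getKey();
        return null;
    }
}
```

A registry like this is what lets each user's map show where the other clients are and who currently holds control, satisfying the awareness need even when no one is actively communicating.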

Multimedia data support

Another requirement in a productive work environment is easy access to tools for taking notes, sending and receiving email, or browsing the Internet. In fact, a multitude of different media is required for more effective collaboration, because each way to present and organize data has its own special benefits. In other words, we need a portable terminal that runs an immersive application to allow friendly, active participation and to visualize and insert multimedia data and comments. We’d like such a device to support establishing relationships between data and models, collecting and retrieving data, and making more accurate analyses. Moreover, it’s useful for these terminals to provide users with access to network databases available on the Internet.

System implementation

We developed our hardware and software architecture in accordance with the requirements described in the previous section. The system is essentially composed of a server and user terminals (see Figure 2). The server infrastructure supports the visualization of the VH world, namely,

❚ high-resolution, wide-angle stereoscopic visualization;

❚ high-throughput rendering of still and motion pictures; and


Figure 2. General architecture components.


❚ accurate modeling of the environment with many levels of detail (obtained from modeling tools).

Client support is provided by pervasive computing devices with the following characteristics:

❚ low weight and a suitable form factor for portability and/or wearability,

❚ untethered connection for complete freedom of movement,

❚ low cost and wide availability, and

❚ highly familiar and rich user interfaces (including audio and video).

A wireless network connects the client and server infrastructures, providing adequate bandwidth and latency. In our case, latency should be in the tens-of-milliseconds range to achieve the sense of immersion, while bandwidth should be in the megabits-per-second range to support streaming multimedia data transfer (especially for video). Lower bandwidth (tens of Kbytes per second) can be tolerated in a reduced-functionality environment, where video streaming isn’t supported on the clients. Fortunately, the market offers technologies and devices meeting the required hardware and software specifications.

Targeted scenarios

On the software side, our system is a multiclient-server application that provides three possible usages: single user, many to many, and peer to peer.

For single users, the current implementation provides each user entering the system with stereoscopic visualization, IVE interaction and navigation controls, and heterogeneous data storage and organization. The server infrastructure provides the stereoscopic visualization; it is composed of an immersive projection screen; real-time rendering libraries; and 3D high-definition models of environments, buildings, and objects. The system also lets the user interact at a basic level with the IVE through a 2D GUI, which is nonintrusive because it’s displayed on the mobile terminal, apart from the 3D visualization.

Usability, a main requirement in VH because of users’ possibly heterogeneous backgrounds, is provided via a touch screen, a pen, and a simple GUI on the palmtop computer. Navigation controls include clickable arrows and zoom controls implemented through buttons, plus other features selectable through easy-to-use menus. Figure 3 shows a way to select objects or highlight object details through menu items.

A 2D map representing the 3D world achieves many goals. First, in a wide IVE, for example one representing an ancient built-up area, a 2D map provides a quick overview of the virtual environment and its objects, helping users acquire spatial knowledge, a fundamental requirement for wayfinding. Through the map, users visualize their position and orientation in the virtual world and which regions can be reached from the current position. Moreover, by outlining objects or viewpoints on the map, the virtual world modeler can guide users to the most detailed parts of the IVE, or the archaeologist can suggest the most historically relevant parts of the reconstruction. Clicking on an object or a viewpoint represented on the map can have many effects: selecting the object, changing the point of view, or displaying multimedia data related to the object on the PDA.
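Translating a tap on the PDA's 2D map into a position in the virtual world is, at its simplest, a linear mapping between the two coordinate systems. The sketch below is an illustrative assumption of ours (not the article's code), assuming the map is an axis-aligned top-down view of a rectangular world region:

```java
// Sketch of how a tap on the PDA's 2D map can be translated into
// virtual-world ground-plane coordinates. Assumes (illustratively)
// that the map is an axis-aligned top-down view of a rectangle.
class MapToWorld {
    final double worldMinX, worldMinZ, worldW, worldH; // covered region
    final int mapW, mapH;                              // map pixel size

    MapToWorld(double minX, double minZ, double w, double h,
               int mapW, int mapH) {
        worldMinX = minX; worldMinZ = minZ;
        worldW = w; worldH = h;
        this.mapW = mapW; this.mapH = mapH;
    }

    // Converts a tapped pixel to ground-plane world coordinates (x, z).
    double[] tapToWorld(int px, int py) {
        double x = worldMinX + worldW * px / (double) mapW;
        double z = worldMinZ + worldH * py / (double) mapH;
        return new double[] { x, z };
    }
}
```

The resulting world position can then drive any of the effects listed above: selecting the nearest object, moving the viewpoint there, or fetching the related multimedia data for display on the PDA.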


Figure 3. Examples of the GUI. (a–b) Ancient Appian Way Project showing menus that highlight object details, arrows and buttons to manipulate the object, and a 2D map to select models. (c) PDA GUI for navigation inside the Curia Iulia (created by the University of California, Los Angeles, Cultural VR Lab) representing user positioning through viewpoint.

Authorized licensed use limited to: Universita degli Studi di Bologna. Downloaded on July 08,2010 at 18:26:55 UTC from IEEE Xplore. Restrictions apply.

PDA use enables implementation of advanced features such as the ability to

❚ take notes in a text editor;

❚ access the Internet through a wireless connection; and

❚ show text, videos, images, and sounds related to the 3D models.

This is extremely useful in the cultural heritage field. For example, archaeologists can use a PDA to directly insert and catalog notes and physical evidence in situ during the excavation process. Then, through the same device, they can manage the virtual experience, facilitating the process of establishing relationships between data and models, collecting and retrieving data, and performing better archaeological analysis. For example, while analyzing a 3D model of an artifact on the main display, archaeologists can compare it with a short movie of the real artifact directly on the PDA without occluding the 3D model visualization.

For many-to-many interaction, we designed the system considering multiclient presence and cooperation inside the IVE. During the process leading to the final 3D models of the virtual reconstruction of cultural heritage materials, we integrated many fields of expertise to avoid the risk of producing misleading VH. The recreated VEs must be connected with all the background research material collected by the people involved in the virtual reconstruction activity, helping to establish cooperation among archaeologists, computer scientists, historians, humanists, and so on. Thus, the resulting models are not only realistic, but also historically accurate.

In our system users can select an object, rotate it, and zoom in and out. While manipulating objects, they can show and comment on the reconstruction details to exchange ideas and suggest corrections or add-ons. All types of data and materials can be displayed on each person's PDA without disturbing the immersive shared projection. Only one user at a time controls the camera's point of view because the scene is shared. The same rule applies to object manipulation. The client who controls the camera isn't necessarily the one who is interacting with a certain object. In this case, the set of possible interactions is limited to changes in attributes; for example, the user can highlight the object or change its color, but can't zoom in or out, a typical task needing camera control. Thus, the first person who gains camera or object control maintains it until he or she explicitly releases it. Each user is provided with a 2D map of the positions of all clients connected to the system, letting them track the status of each client's activities and know who is the master. Clicking on a master icon opens a dialog box to request control. The master can then either accept or reject the request.
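The control policy described above, where the first requester becomes master and keeps control until explicit release, can be sketched in Java as follows (class and method names are illustrative, not the system's actual code):

```java
import java.util.HashMap;
import java.util.Map;

class ControlManager {
    // resource name -> current master's client id (no entry = resource is free)
    private final Map<String, Integer> masters = new HashMap<>();

    // First client to request a free resource becomes its master.
    synchronized boolean request(String resource, int clientId) {
        Integer owner = masters.get(resource);
        if (owner == null) { masters.put(resource, clientId); return true; }
        return owner == clientId; // already the master
    }

    // Control is kept until the master explicitly releases it.
    synchronized boolean release(String resource, int clientId) {
        Integer owner = masters.get(resource);
        if (owner != null && owner == clientId) { masters.remove(resource); return true; }
        return false;
    }

    synchronized Integer masterOf(String resource) { return masters.get(resource); }
}
```

In the real system, a request for a busy resource would trigger the dialog box on the master's PDA instead of simply failing.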

In a team of collaborators using peer-to-peer communication, and particularly in a multidisciplinary application field such as VH, user subgroups or single users may need information not useful or interesting to the rest of the group. Hence, we gave our system the ability to establish a private communication channel, implemented through a whiteboard displayed on the PDAs of the users involved in the communication (see Figure 4). As stated previously, PDAs use nonintrusive interfaces that don't divert users from the visualization. The user who decides to open a private channel with a peer sends his or her request to the other client by, for example, clicking on the other's representation on the 2D map. Depending on the answer, the communication is or isn't established.

Design
The system design takes into account platform heterogeneity (typical of a distributed system), independence from visualization software, and network communication efficiency. Concerning the first characteristic, we wrote the various application components at the highest level in Java and exploited distributed computing libraries. Java, in fact, guarantees platform independence. We can benefit in the future from cutting-edge programming solutions for small devices coming from the Java development community.

With reference to independence from visualization software, the system performs rendering by calling graphical and visualization libraries.

IEEE MultiMedia

Figure 4. (a) Peer-to-peer interaction. (b) Whiteboard displayed in two different client workspaces.

To achieve independence from the specific set of libraries, the application associates a numerical tag with each interaction task (navigation controls, object manipulation, and so on). The tag identifies the remote procedure to call, independently of the procedure's implementation. Thus, changing the graphical library requires only minor changes to the code.
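A minimal sketch of such tag-based dispatch (hypothetical names; the actual remote procedures belong to the graphics layer) might look like:

```java
import java.util.HashMap;
import java.util.Map;

class TagDispatcher {
    // A handler hides the concrete graphical library behind a uniform call.
    interface Handler { String run(float arg); }

    private final Map<Integer, Handler> table = new HashMap<>();

    void register(int tag, Handler h) { table.put(tag, h); }

    // The client transmits only (tag, argument); the server looks up the
    // procedure, so the wire protocol never names a library function.
    String dispatch(int tag, float arg) {
        Handler h = table.get(tag);
        if (h == null) throw new IllegalArgumentException("unknown tag " + tag);
        return h.run(arg);
    }
}
```

Swapping the graphics library then amounts to re-registering handlers; the tags and the protocol stay unchanged.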

As a third point, targeting good network performance is critical in real-time environments, where unforeseen delays, packet loss, and so on can affect networking. Our system uses TCP/IP for transmitting packets reliably. Thanks to the tags, communication among system components is efficient because it's reduced to the exchange of a limited number of bytes for each interaction request. Our tests confirm that appropriate timing is achieved. Moreover, to achieve efficient network communication, we performed many optimizations of communication patterns. An example is the technique applied for managing consistency maintenance among the clients and with the server. Verification showed that having all clients poll the server for updates is faster than having the server send update events to each client as new events occur. In the latter solution the update is a sequential task, while in the former it's concurrent.

We divided our distributed system's architecture into modules, and the communication among them is mainly based on Java Remote Method Invocation (RMI).

We organized the server architecture (see Figure 5a) into the following modules:

❚ scheduler;

❚ avatar manager, which provides updates of the status of users interacting inside the VE; and

❚ theater manager, which is employed in the virtual theater scene update procedure and invokes the graphical APIs needed to modify the VE.

The scheduler is in charge of accepting new clients into the system. The avatar manager manages the avatars, periodically updating their status. When a new visitor subscribes, the server instantiates an avatar, assigning a unique identifier to it and giving the client and avatar a reference to each other. The theater manager module is in charge of retrieving the position and orientation of each avatar and passes this information, through the Java Native Interface (JNI), to the high-performance graphical APIs (for example, Multigen-Paradigm Vega and SGI OpenGL Performer) needed to perform actions, or to other C/C++ functions specifically created to modify the scene and the application's graphical state.
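The subscription step can be sketched as follows (illustrative Java, not the server's actual code): the scheduler instantiates an avatar per client and hands back a unique identifier so client and avatar can reference each other.

```java
import java.util.HashMap;
import java.util.Map;

class Scheduler {
    static class Avatar {
        final int id;
        double x, y, heading; // position and orientation in the VE
        Avatar(int id) { this.id = id; }
    }

    private int nextId = 0;
    private final Map<Integer, Avatar> avatars = new HashMap<>();

    // Accept a new client: instantiate an avatar with a unique identifier
    // and return it so the client can keep a reference.
    synchronized Avatar subscribe() {
        Avatar a = new Avatar(nextId++);
        avatars.put(a.id, a);
        return a;
    }

    // Used, for example, by a theater-manager-like module to read each
    // avatar's position and orientation before updating the scene.
    synchronized Avatar lookup(int id) { return avatars.get(id); }
}
```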

We organized the client architecture (see Figure 5b) into the following core modules:

❚ screen manager,

❚ event manager, and

❚ communication manager.

We implemented the screen manager to update and handle the GUI on the mobile platform (for example, the PDA). The user interacts directly with the GUI, generating events. The events are passed to the event manager, which translates them into actions for both the communication manager and the screen manager. In fact, the GUI reconfigures itself depending on the general system state, for example, displaying the correct part of the 2D map in which the user is navigating or the icons that indicate other users, or enabling only the controls given to a certain user at a specific moment. This reconfiguration is often needed, especially when the server sends data to refresh the local user copy. At the same time, the communication manager interacts with the event manager. The communication manager communicates with the resident objects on the server through remote procedure calls. In practice, the communication manager is responsible for calling the procedures and functions corresponding to the events that a user generates. Usually the communication manager handles the remote reference of the user's own avatar and recovers the theater manager's remote stub when the user wants to obtain virtual theater control.

Figure 5. (a) Server and (b) client architecture.

Testing the system: A case study
We've applied the IVH system presented here to many VH scenarios. One such scenario is the reconstruction of eight tombs along the Ancient Appian Way.2 We exploited this particular implementation to test the system in terms of usability and performance. The implementation provides an immersive virtual environment for collaborative work6 in the cultural heritage and archaeology fields featuring these advanced characteristics:

❚ shared visualization in an immersive environment,

❚ peer-to-peer communication,

❚ availability of all needed multimedia data, and

❚ user friendliness to help nonexpert users.

In a typical virtual visit along the Ancient Appian Way (see Figure 1) through the PDA, the user can navigate a 2D map, select the tombs (see Figure 3a), and manipulate them. The user can switch the visualization among different reconstructions (see Figure 6a), while the system contextually displays text (see Figure 6b), images, and other multimedia data on the PDA. Moreover, the user can exchange data with other users (see Figure 4b). Each client can identify the virtual position on the 2D map of the other users along the Appian Way. Clicking on a colored dot, which represents another user, opens a private communication channel with that person (for example, a whiteboard session as in Figure 4b). While the user interacts with VR models through our GUI, many different applications can be loaded concurrently and independently on the PDA, for example, to retrieve, display, and modify data collected and stored during the excavation.

Performance assessment
One of the main requirements for an effective VR system is real-time performance. Real-time interaction and rendering increase the sense of immersion and presence inside the VE. Since the part of the system responsible for real-time rendering is a commercial platform for professional use, real-time graphics performance is fully achieved. Thus, our tests are aimed at verifying the communication among system components to guarantee efficient multiclient and peer-to-peer interaction in small user groups. In fact, our system isn't targeted at applications needing fast interaction over heavily loaded networks, such as multiplayer networked games. Because RMI has nonnegligible overhead with a standard TCP/IP connection,7 we tested the average time needed by RMI to call a method on a remote host through the wireless network to verify whether the system matches the requirements.
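The measurement itself reduces to timing repeated invocations; a sketch follows (illustrative only, with the RMI stub stood in by a generic callable):

```java
class RoundTripTimer {
    // Stand-in for a remote method; in the actual test this would be a
    // method on a stub obtained from the RMI registry, so the loop would
    // measure the full wireless round trip including marshalling.
    interface RemoteCall { void invoke(); }

    // Mean wall-clock time (in milliseconds) over n invocations.
    static double meanMillis(RemoteCall call, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) call.invoke();
        return (System.nanoTime() - start) / 1e6 / n;
    }
}
```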

System hardware setup
The interactive, multiuser system is a combination of two main parts. The CINECA Virtual Theater (see http://www.cineca.it) consists of

❚ an SGI Onyx2 engine with three graphic pipelines, each with 64 Mbytes of texture memory and 8 MIPS R10000 processors;

❚ a cylindrical screen;

❚ three projectors;

❚ active stereo shutter glasses; and

❚ consoles to control the visualization.

The second part is the infrastructure for connecting the portable terminals to the network (the base station, wired to the CINECA's LAN, and the network interface cards) and the portable terminals themselves. The software's server side runs on the system's server part (the CINECA Virtual Theater based on the SGI Reality Center) while the client software runs on the wireless pocket PCs.

Figure 6. (a) Menu to switch among different reconstructions for the same tomb on the PDA and the corresponding visualization in the virtual theater's screen. (b) Contextual display of text.

Specifically, we chose the Compaq iPAQ as the portable terminal. The model on which we tested our application has a 320 × 240 resolution thin-film transistor color screen, 32 Mbytes of RAM, and 16 Mbytes of ROM, and runs Windows CE 3.0. At the start of this project, its 206-MHz Intel StrongARM 32-bit RISC was the fastest processor available on the market. iPAQs are expandable with the hardware needed to connect through various communication interfaces (such as IEEE 802.11b, General Packet Radio Service, Global System for Mobile Communications, and Bluetooth).

We equipped the terminals with a Cisco wireless card to exploit the IEEE 802.11b standard wireless LAN interface working at 11 Mbits per second. This choice suits our application, which is meant for an indoor environment, and the infrastructure costs were reasonable (that is, no additional costs during connection time).

System software setup
On the server side, we loaded the Java Runtime Plug-in v1.3.1 for SGI IRIX on top of the IRIX 6.5.14 operating system. The Java application calls the high-performance multiprocess rendering graphics libraries (Vega, Performer, and OpenGL) through JNI, which links to an intermediate C layer. On the client side, the Java Virtual Machine we used is Jeode Runtime from Insignia Solutions, optimized for the Windows CE platform. This JVM supports Java Abstract Window Toolkit classes and the RMI and JNI technologies. Windows CE provides specific APIs for writing code for PDA devices. We benefit from this by using JNI in our Java application to call the Windows CE API to concurrently run another application, such as an Internet browser, launching it directly from our system (by selecting an item on a menu in our GUI).

Experimental results
We ran experimental tests to validate the system in terms of client-side usability and interaction, the place at which the system is most limited in hardware and software resources. In this implementation, a client interacts with the shared IVE by sending requests or events to a server. Some events, such as those related to the awareness of other clients, have to be distributed to all clients. Our first test studied this processing activity: essentially, we examined how the server manages the distribution of an increasing number of events per second as the number of clients goes up. The scenario we considered was the movement of a client on the 2D map in the GUI (on the PDA). In this case, the server manages the event to display the motion on all the clients' 2D maps. The data packet sent on the network is the set of coordinates indicating the new position. We considered two procedures for managing the process:

❚ Sequential version. Any update event generates a thread on the server, which is in charge of sequentially contacting every client to refresh its status.

❚ Concurrent version. Any update event changes each client's status on the server. On each client, a dedicated thread concurrently queries the server for status updates.

Both solutions provide an acceptable delay due to the small size of the packet sent (see Figure 7). Our system implements the second solution because it provides better overall performance. In addition, the plot shows that the concurrent implementation maintains an update time of less than 100 milliseconds with more than 15 users. This is a satisfactory delay for an interactive application. In general, a recommended timing of less than 150 to 200 ms must be achieved when a system provides visual and auditory feedback.8
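A toy latency model (our own illustration, not the measured data) shows why the concurrent variant scales better: with sequential push, one server thread contacts the n clients one after another, so cost grows with n, while with client-side polling each client fetches its copy in parallel:

```java
class UpdateCost {
    // Sequential push: one server thread contacts clients one after another.
    static double sequentialMillis(int clients, double perCallMillis) {
        return clients * perCallMillis;
    }

    // Concurrent polling: each client's dedicated thread queries the server
    // in parallel, so per-update latency stays near one call's cost
    // (ignoring server-side contention in this simple model).
    static double concurrentMillis(int clients, double perCallMillis) {
        return perCallMillis;
    }
}
```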

Figure 7. Concurrent (1) versus sequential (2) solutions.

We performed our second test, designed to assess wireless network performance, by calling a huge number of dummy functions (mainly to send nonsignificant packets back and forth). We ran the test on a number of different platforms to compare the results and verify system portability (see Figure 8). A mean system response time of under 50 ms is maintained when the packets exchanged are less than 10 Kbytes, and under 250 ms for packets of 80 to 100 Kbytes. These results are more than acceptable. We found unusual behavior in the form of a significant drop in performance when using packets of around 1,000 bytes on the PDA. Our hypothesis is that this is an artifact of the pocket PC JVM and might be due to marshalling inefficiency.

A final test attempted to evaluate what happens when multiple peer-to-peer communication channels are in use. In particular, the test measured the mean time needed to send an array of Java point type elements (indicative size was 9 bytes × array size) from one client to the others. We varied the number of clients and the array dimensions. Each client tries to write concurrently to the others 50 times.

Figures 9 and 10 show the results of tests performed with one, three, five, and ten clients attempting to send between 30 and 1,000 points. This test differs from the one relating to Figure 7, as the size of the data packages varies and the communication is between clients. According to some authors, end-to-end delay should not exceed 1,000 ms, and a delay of less than 200 ms is recommended for tasks needing tight synchronization.9 The results indicate that the mean time becomes excessively long when the number of clients is greater than three to five and the data packages contain more than 70 to 100 points. This is still an acceptable result in VH workgroups, but is likely to be a major limitation to adding concurrent features to the PDA-based part of the application. The current system uses Java RMI, and we might obtain performance improvements by adopting other communication paradigms, as RMI is a high-abstraction-level programming paradigm and limits protocol-level optimizations.

Figure 8. Mean time versus array size. Platforms tested: wireless notebook running Windows 2000, wireless iPAQ, wired desktop PC running Windows NT.

Figure 9. Mean time as a function of number of clients for differing sizes of the array sent.

Figure 10. Mean time as a function of size of the array sent for one, three, five, and ten clients.

In conclusion, quantitative results demonstrate that the system is adequate for interaction with VH for small user groups (four to twenty), depending on the usage of the peer-to-peer interaction features, which are the most limiting in terms of performance. Therefore, we've achieved our objectives for our system. This evaluation is confirmed by user feedback collected informally during demonstration sessions at CINECA, where the virtual theater based on SGI Reality Center technologies and the PDA setup described previously is used. Users attending the demonstration sessions had different technological backgrounds, ranging from extensive computer literacy to no technical experience. All of them considered the PDA and its GUI easy to use as a wireless interaction device and appreciated the connections offered between content on the PDA and the 3D world. Some users provided suggestions for future development of the system or possible features to add to the GUI. Future work will address a formal usability evaluation study.

Future work
Future work will explore different directions. First, we're defining a set of APIs to simplify the process of application development and to improve multiuser communication. Furthermore, we'll develop enhanced multiuser features to augment peer awareness of other peers and peer-to-peer exchange of multimedia data. We plan to add the ability to configure each client's status, letting clients choose the degree of awareness that others have of them. We have many technical and social implications to consider regarding awareness concerns. We'll combine this work with optimization of the system's performance.

We’re already exploring multimodal interac-tion to combine the benefits of more than onemodality of I/O and to study the challenges ofintegrating different interaction devices.Currently, we’re integrating novel sensors intoour system, starting with accelerometers to exper-iment with gesture interfaces. MM

Acknowledgments
We thank Marco Gaiani (Politecnico di Milano), scientific director of the Ancient Appian Way 3D Web Virtual GIS project, who supported our research by giving us models and contents on which we based our case study, and Bernard Frischer, head of the Cultural VR Laboratory at the University of California, Los Angeles, for letting us use the Curia Iulia model.

References
1. A.C. Addison, "Virtual Heritage—Technology in the Service of Culture," Proc. Conf. Virtual Reality, Archaeology, and Cultural Heritage, ACM Press, 2001, pp. 343-354.
2. M. Gaiani, "VR as a Tool for Architectural and Archaeological Restoration: The Ancient Appian Way 3D Web Virtual GIS," Proc. 7th Int'l Conf. Virtual Systems and Multimedia (VSMM), IEEE CS Press, 2001, pp. 86-95.
3. C. Cruz-Neira, D.J. Sandin, and T.A. DeFanti, "Surround Screen Projection Based Virtual Reality: The Design and Implementation of the CAVE," Proc. ACM Siggraph, ACM Press, 1993, pp. 135-142.
4. R.P. Darken and B. Peterson, "Spatial Orientation, Wayfinding and Representation," Handbook of Virtual Environments, K. Stanney, ed., Lawrence Erlbaum Assoc., 2002.
5. M. Weiser, "The Computer for the 21st Century," Scientific Am., vol. 265, no. 3, 1991, pp. 94-104.
6. Comm. of the ACM, special issue on collaborative virtual design environments, vol. 44, no. 12, 2001, pp. 40-67.
7. S. Campadello et al., "Wireless Java RMI," Proc. 4th Int'l Enterprise Distributed Object Computing Conf. (EDOC), IEEE CS Press, 2000, pp. 114-123.
8. A.J. Dix et al., Human–Computer Interaction, 3rd ed., Pearson Education, 2003; http://www.hcibook.com/e3.
9. J.C. Oliveira et al., "Java Multimedia Telecollaboration," IEEE MultiMedia, vol. 10, no. 3, 2003, pp. 18-29.

Elisabetta Farella is a PhD student at the Department of Electronics, Computer Science and Systems at the University of Bologna, Italy. She is also a research consultant at the CINECA Supercomputing Center. Her research interests include VR systems, cultural heritage, pervasive computing, and human–computer interaction. Farella has a BS in electrical engineering from the University of Bologna.

Davide Brunelli is a PhD student at the Department of Electronics, Computer Science and Systems at the University of Bologna. His research interests include ubiquitous computing oriented to cooperation through mobile devices and the development of graphic applications for portable systems. Brunelli has a BS in electrical engineering from the University of Bologna.

Luca Benini is an associate professor at the Department of Electronics, Computer Science and Systems at the University of Bologna. He also holds visiting researcher positions at Stanford University and Hewlett-Packard Laboratories, Palo Alto, California. His research interests include all aspects of computer-aided design of digital circuits, with special emphasis on low-power applications, and the design of portable systems. Benini has an MS and a PhD in electrical engineering from Stanford University. He is an IEEE senior member.


Bruno Riccò is a full professor at the University of Bologna. His research interests include solid-state devices, microelectronics, integrated circuit design, evaluation and testing, multimedia systems, and human–computer interaction. Riccò has a PhD from the University of Cambridge, UK. He is an IEEE Fellow.

Maria Elena Bonfigli is a technical researcher at the CINECA Supercomputing Center. Her research interests include VR and human–computer interaction applied to cultural heritage. Bonfigli has a BS in computer science and a PhD in history and computing from the University of Bologna.


Readers may contact Elisabetta Farella at the Dept. of Electronics, Computer Science and Systems, University of Bologna, Viale Risorgimento, 2, 40136, Bologna, Italy; [email protected].
