Immersive VR for Scientific Visualization: A Progress Report

Andries van Dam, Andrew S. Forsberg, David H. Laidlaw, Joseph J. LaViola, Jr., and Rosemary M. Simpson
Brown University

Immersive virtual reality can provide powerful techniques for scientific visualization. The research agenda for the technology sketched here offers a progress report, a hope, and a call to action.

Immersive virtual reality (IVR) has the potential to be a powerful tool for the visualization of burgeoning scientific data sets and models. In this article we sketch a research agenda for the hardware and software technology underlying IVR for scientific visualization. In contrast to Brooks’ excellent survey last year,1 which reported on the state of IVR and provided concrete examples of its production use, this article is somewhat speculative. We don’t present solutions but rather a progress report, a hope, and a call to action, to help scientists cope with a major crisis that threatens to impede their progress.

Brooks’ examples show that the technology has only recently started to mature—in his words, it “barely works.” IVR is used for walkthroughs of buildings and other structures, virtual prototyping (vehicles such as cars, tractors, and airplanes), medical applications (surgical visualization, planning, and training), “experiences” applied as clinical therapy (reliving Vietnam experiences to treat post-traumatic stress disorder, treating agoraphobia), and entertainment. Building on Brooks’ work, here we concentrate on why scientific visualization is also a good application area for IVR.

First we’ll briefly review scientific visualization as a means of understanding models and data, then discuss the problem of exploding data set size, both from sensors and from simulation runs, and the consequent demand for new approaches. We see IVR as part of the solution: as a richer visualization and interaction environment, it can potentially enhance the scientist’s ability to manipulate the levels of abstraction necessary for multi-terabyte and petabyte data sets and to formulate hypotheses to guide very long simulation runs. In short, IVR has the potential to facilitate a more balanced human-computer partnership that maximizes bandwidth to the brain by more fully engaging the human sensorium.

We argue that IVR remains in a primitive state of development and is, in the case of CAVEs and tiled projection displays, very expensive and therefore not in routine use. (We use the term cave to denote both the original CAVE developed at the University of Illinois’ Electronic Visualization Laboratory2 and CAVE-style derivatives.) Evolving hardware and software technology may, however, enable IVR to become as ubiquitous as 3D graphics workstations—once exotic and very expensive—are today.

Finally, we describe a research agenda, first for the technologies that enable IVR and then for the use of IVR for scientific visualization. Punctuating the discussion are sidebars giving examples of scientific IVR work currently under way at Brown University that addresses some of the research challenges, as well as other sidebars on data set size growth and IVR interaction metaphors.

What is IVR?
By immersive virtual reality we mean technology that gives the user the psychophysical experience of being surrounded by a virtual, that is, computer-generated, environment. This experience is elicited with a combination of hardware, software, and interaction devices. Immersion is typically produced by a stereo 3D visual display, which uses head tracking to create a human-centric rather than a computer-determined point of view. Two common forms of IVR use head-mounted displays (HMDs), which have small display screens in front of the user’s eyes, and caves, which are specially constructed rooms with projections on multiple walls and possibly floor and/or ceiling. Forms of IVR differ along a number of dimensions, such as user mobility and field of view, which we discuss briefly when talking about the tradeoffs that exist in IVR technology for scientific visualization.
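To make the head-tracked, human-centric viewpoint concrete, here is a minimal sketch (Python with NumPy; the tracker pose format, the helper names, and the 0.064 m interpupillary distance are our illustrative assumptions, not part of any system the article describes) of how per-eye camera positions and a view matrix might be rebuilt every frame. A real HMD or cave would additionally need per-eye off-axis projection matrices.

```python
# Hedged sketch: deriving head-tracked stereo eye poses from a 6DOF
# tracker sample. All names and constants here are illustrative.
import numpy as np

IPD = 0.064  # interpupillary distance in meters (typical adult value)

def eye_positions(head_pos, head_rot):
    """head_pos: (3,) tracked position; head_rot: (3,3) orientation matrix.
    Returns left/right eye positions offset along the head's right axis."""
    right_axis = head_rot[:, 0]          # head-frame x axis in world coords
    half = 0.5 * IPD * right_axis
    return head_pos - half, head_pos + half

def look_at(eye, target, up):
    """Standard right-handed view matrix, rebuilt each frame so the scene
    is rendered from the user's actual eye point, not a fixed camera."""
    f = target - eye; f = f / np.linalg.norm(f)
    s = np.cross(f, up); s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m
```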

Closely related to the sensation of immersion is the sensation of presence—usually loosely described as the feeling of “being there”—which gives a sense of the reality of objects in a scene and the user’s presence with those objects. Immersion and presence are enhanced by a wider field of view than is available on a desktop display, which leverages peripheral vision when working with 3D information. This helps provide situational awareness and context, aids spatial judgments, and enhances navigation and locomotion. The presentation may be further enhanced by aural rendering of spatial 3D sound and by haptic (touch, force) rendering to create representations of geometries and surface material properties.

Interaction is provided through a variety of spatial input devices, most providing at least six degrees of freedom (DOF) based on tracker technology. Such devices include 3D mice, various kinds of wands with buttons for pointing and selecting, data gloves that sense joint angles, and pinch gloves that sense contacts. Both types of gloves provide position and gesture recognition. Additional sensory modalities may be engaged with speech recognizers and haptic input and feedback.

IVR aims to create a rich, highly responsive environment, one that engages as many of our senses as possible. Realism—mimicking the physical world as faithfully as possible—is often a goal, but for experiencing many environments, it need not be. We can view IVR as the technology that currently lies at an extreme on a spectrum of display technologies and corresponding interaction technologies. This spectrum starts with keyboard-driven, text-only displays and proceeds through 2D graphics with keyboard and mouse to 3D desktop graphics with 3D interaction devices to IVR. Thus, IVR can be seen as a natural extension of existing computing environments. As we argue later, however, it’s more appropriately seen as a substantially new medium that differs more from conventional desktop 3D environments than those environments differ from 2D desktop environments. Conventional desktop 3D displays give one the sensation of looking through a window into a miniature world on the other side of the screen, with all the separation that sensation implies, whereas IVR makes it possible to become immersed in and interact with life-sized scenes.

Once mastered, post-WIMP (that is, post-windows, -icons, -menus, -pointing) multimodal interaction,3 such as simultaneous speech and hand input, provides a far richer, potentially more natural way of interacting with a synthetic environment than do mouse and keyboard. Fish Tank VR on a monitor,4 workbenches,5 and single-wall projection displays,6 all with head-tracked stereo, provide semi-immersive VR environments—between desktop 3D and fully immersive VR.

IVR for scientific visualization
We believe that IVR is a rich way of interacting with virtual environments (VEs). It holds great promise for scientists, mathematicians, and engineers who rely on scientific visualization to grapple with increasingly complex problems that produce correspondingly larger and more complex models and data sets. These data sets often describe complicated 3D structures or can be visualized with derived 3D abstractions (such as isosurfaces) possessing complicated geometry. We contend that people can more readily explore and understand these complex structures with the kinesthetic feedback gained by peering around at them from within, walking around them to see them from different aspects, or handling them.
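As a concrete illustration of deriving a 3D abstraction such as an isosurface from raw volume data, the sketch below uses marching cubes via scikit-image; the library choice and the synthetic scalar field are our assumptions, not tools the authors name.

```python
# Hedged sketch: extracting an isosurface mesh from a scalar volume.
# The toy field below stands in for real simulation or sensor output.
import numpy as np
from skimage import measure

x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = np.sin(4 * x) * np.cos(4 * y) + z        # stand-in scalar field

# Vertices and triangles of the level-0.5 surface, ready for a renderer.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```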

Scientific visualization isn’t an end in itself, but a component of many scientific tasks that typically involve some combination of interpretation and manipulation of scientific data and/or models. To aid understanding, the scientist visualizes the data to look for patterns, features, relationships, anomalies, and the like. Visualization should be thought of as task driven rather than data driven.

Indeed, it’s useful to think of simulation and visualization as an extension of the centuries-old scientific method of formulating a hypothesis, then performing experiments to validate or reject it. Scientists now use simulation and visualization as an alternate means of observation, creating hypotheses and testing the results of simulation runs against data from physical experiments. Large simulation runs may use visualization as a completely separate postprocess or may interlace visualization and parameter setting with re-running the simulation, in a mode called interactive steering,7 in which the user monitors and influences the computation’s progress.
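A minimal sketch of the interactive-steering loop just described, assuming a simple queue-based protocol of our own invention between the interaction thread and the solver (the article does not prescribe any particular API):

```python
# Hedged sketch of interactive steering: the simulation checks for user
# parameter changes between timesteps and emits data for visualization.
import queue

param_updates = queue.Queue()   # filled by the UI / IVR interaction thread

def run_steered_simulation(state, params, n_steps, visualize):
    for step in range(n_steps):
        while not param_updates.empty():     # apply steering input, if any
            name, value = param_updates.get()
            params[name] = value
        state = advance(state, params)       # one timestep of the solver
        if step % params.get("viz_interval", 10) == 0:
            visualize(state, step)           # monitor progress in the VE
    return state

def advance(state, params):
    """Placeholder for the real numerical kernel."""
    return state + params.get("dt", 0.01)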

Unfortunately, our ability to simulate or use increasingly numerous and refined sensors to produce ever larger data sets outstrips our ability to understand the data, and there’s compelling evidence that the gap is widening. Hence, we look for ways to make human-in-the-loop visualization more powerful. IVR has begun to serve as one such power tool.

We use the term scientific visualization and choose examples primarily from science and technology, but much of the discussion would apply equally well to closely related areas of information visualization, used for commercial and organizational data, and to concept visualization. Our discussions of archaeology (see the sidebar “Archave”) and color theory (see the sidebar “Color Museum”) IVR systems give some sample domains and approaches.

Why scientific visualization?
Visualization is essential in interpreting data for many scientific problems. Other tools such as statistical analysis may present only a global or an extremely localized partial result. Statistical techniques and monitoring of individual points or regions in data sets expected to be of interest prove useful for learning the effect of a simulation, but these techniques generally cannot explain the effect.

Visualization is such a powerful technique because it exploits the dominance of the human visual channel (more than 50 percent of our neurons are devoted to vision). While computers excel at simulations, data filtering, and data reduction, humans are experts at using their highly developed pattern-recognition skills to look through the results for regions of interest, features, and anomalies. Compared to programs, humans are especially good at seeing unexpected and unanticipated emergent properties.


Sidebar: Archave
Daniel Acevedo Feliz and Eileen Vote

Members of the Computer Science Department at Brown University have been collaborating with archaeologists from the Anthropology and Old World Art and Archaeology departments to develop Archave, a system that uses the virtual environment of a cave as an interface for archaeological research and analysis.

VR in archaeology
Archaeologists and historians develop applications to reconstruct and document archaeological sites. The resulting models are displayed in virtual environments in museums, on the Internet, or in video kiosks at excavation sites. Recently, a number of projects have been tested in IVR environments such as caves and VR theaters.1 Although archaeologists have used VR and IVR primarily for visualization since Paul Reilly introduced the concept of virtual archaeology in 1990, interest is increasing in using VR to improve techniques for discovering new knowledge and helping archaeologists perform analysis rather than simply presenting existing knowledge.2 One proposed area for application of IVR is in the presentation and analysis of three-dimensionally referenced excavation data.

Archaeological method
The database for the Great Temple excavation in Petra, Jordan (see Figure A), contains more than 200,000 entries recorded since 1993. Following standard archaeological practice, artifacts recovered from the excavation site are recorded with precise 3D characteristics. All artifacts are also recorded in the site database in their relative positions by loci or excavation layer and excavation trench, with a number of feature attributes such as object type (bone, pottery, coin, metal, sculpture), use, color, size, key features, and date. This method ensures that all site data is precisely recorded for an accurate record of the disturbance caused by the excavation and for the analysis that occurs after the excavation is complete. Unfortunately, the full potential of spatially defined archaeological data is rarely realized, in part because archaeologists find it difficult to analyze the geometric characteristics of the artifacts and spatial relationships with other elements of the site.3

A Aerial of the Great Temple site in Petra, Jordan. (Photo courtesy of A.A.W. Joukowsky)

Current problems in analysis
As the excavation proceeds, there’s a strong need to correlate all the objects in order to observe patterns within the data set and perform standard analysis. Methods for this type of analysis vary widely depending on excavation site features, dig strategy, and data obtained. A quantitative analysis of all materials grouped and sorted in various ways presented in The Great Temple five-year report (1998) showed statistics about the percentages of different artifacts and their find locations, such as pottery by phase, pottery by area, and frequency of occurrence of pottery by area.4 This type of analysis can help in a variety of statistical analyses using fairly comprehensive information from the database. It can also let the archaeologist quantify obvious patterns within the data set.

Unfortunately, many factors cannot be represented well with a traditional database approach and in reports generated from it. Specifically, these methods cannot integrate important graphical information from the maps and plans, and specific attribute data, location, and relational data among artifacts and site features.

Besides obvious conclusions that can be drawn when objects are correlated spatially, combinations of artifacts when viewed by a trained eye in their original spatial configurations can yield important and unlikely discoveries. Lock and Harris suggested that “vast quantities of locational and thematic information can be stored in a single map and yet, because the eye is a very effective image processor, visual analysis and information retrieval can be rapid.”3

Although processing information visually would seem a more intuitive and thus effective way of processing 3D data, the idea hasn’t yet been proven. More graphical methods of analysis have been explored in geographic information systems (GIS) that overlay multiple types of 2D graphic representations of data such as maps, plans, and raster images with associated attribute data in an attempt to present relationships among spatial data. However, many feel that it’s not clear that GIS systems are sophisticated enough to provide a thorough description of height relationships. As Clarke observed, “The spatial relationships between the artifacts, other artifacts, site features, other sites, landscape elements and environmental aspects present a formidable matrix of alternative individual categorizations and cross-combinations to be searched for information.”5

A new method
The Archave system displays all the components of the excavation with recorded artifacts and features in a cave environment. Like the excavation site, the virtual site is divided into the grid of excavation trenches excavated over the past seven years (see Figure B). Each trench is modeled so that the user can look at the relative layers or loci the excavator established during the removal of debris in that trench. As the user dictates, information about artifacts can be viewed throughout the site (see Figures C1 and C2).


We believe that the system makes it easier to associate objects in all three dimensions, so it can accommodate objects that cannot be related to each other in 2D or even 3D map-based GIS. In addition, multiple data types such as pottery concentrations, coin finds, bone, sculpture, architecture, etc. can be visualized together to test for patterns and latent relationships between variables.

Users have commented that they feel more comfortable using the system in a cave because it allows them to access the data at a natural, life-size scale. The immersion provided by the cave gives the users improved accessibility to the objects they need to identify and study, and a very flexible interface for exploring the data at different levels of detail, going smoothly from close-range analysis to site-wide overviews. More importantly, the wide field of view provided by the cave’s three walls and floor lets the user assimilate and compare a larger section of data at once. For example, a user working at close range in a trench has full visual access to neighboring trenches or other parts of the site. Therefore, it’s possible to assimilate and compare more information at one time. This becomes crucial when users look for patterns or try to match elements throughout the site or between trenches.

Conclusion
The Archave system lets the user establish new hypotheses and conclusions about the archaeological record because data can be processed comprehensively in its natural 3D format. However, along with the ability to visually process a coherent, multidimensional data sample comes the need for an intuitive and flexible environment. IVR provides the user with the ability to access the system in an environment similar to the conditions that a working archaeologist encounters on site.

As stated in the beginning, current tendencies to implement VR for reconstruction and visualization need not be the only use for this technology. The standard archaeological method provides a rich record in which high-level analysis can occur, and IVR can provide a significant test-bed for advanced forms of analysis not heretofore available.

References
1. B. Frischer et al., “Virtual Reality and Ancient Rome: The UCLA Cultural VR Lab’s Santa Maria Maggiore Project,” Virtual Reality in Archaeology, J.A. Barcelo, M. Forte, and D. Sanders, eds., BAR International Series 843, Oxford, 2000, pp. 155-162.
2. P. Miller and J. Richards, “The Good, the Bad, and the Downright Misleading: Archaeological Adoption of Computer Visualization,” Computer Applications in Archaeology, J. Huggett and N. Ryan, eds., British Archaeological Reports (Int. Series, 600), Oxford, 1994, pp. 19-22.
3. G. Lock and T. Harris, “Visualizing Spatial Data: The Importance of Geographic Information Systems,” Archaeology in the Information Age: A Global Perspective, P. Reilly and S. Rahtz, eds., Routledge, London, 1992, pp. 81-96.
4. M.S. Joukowsky, Petra Great Temple: Volume I, Brown University Excavations 1993-1997, E.A. Johnson Company, USA, 1998, p. 245.
5. D.L. Clarke, “A Provisional Model of an Iron Age Society and its Settlement System,” Models in Archaeology, D.L. Clarke, ed., Methuen, London, 1972, pp. 801-869.

B Color-coded excavation trenches from the past seven years of the dig at the Great Temple in Petra, Jordan.

C Visualization of excavation data in one of the trenches inside the Great Temple. (1) The texture maps show two parameters: the color saturation indicates the concentration of pottery, and the density of the texture indicates the concentration of bone—here only a significant difference in bone concentration exists. (2) Making the trench semitransparent reveals special finds in the exact location in which they were found inside the volume.

Much scientific computing uses parallel computers, benefiting from the productive synergy of using parallel hardware and software to do computation and parallel human wetware to interpret the results. Computational steering—inappropriate for massive, lengthy production runs—often proves useful for smaller-scale problems with coarser spatiotemporal resolution or for test cases to help set up parameters for production runs.

Computer-based scientific visualization exploiting human pattern recognition is scarcely a new idea. It started with real-time oscilloscope data plotting and offline paper plotting in the 1950s. Science and engineering applications were the first big customers of the expensive interactive 2D vector displays commercialized in the 1960s. Graphing packages of various kinds were designed for both offline and interactive viewing then as well. In the mid-1980s considerably higher-level interactive visualization packages such as Mathematica and AVS leveraged the power of raster graphics and modern user interfaces.


Sidebar: Color Museum
Anne Morgan Spalter

IVR can enable students to interact with ideas in new ways. In an IVR environment, students can engage in simulations of real-world environments that are inaccessible due to financial or time constraints (high-end chemistry laboratories, for instance) or that cannot be experienced, such as the inside of a volcano or the inside of an atom. IVRs also enable students to interact with visualizations of abstract ideas, such as mathematical equations or elements of color theory. The hands-on, investigative learning most natural in IVR offers an excellent way to train new scientists and engineers. In addition, because the environment is computer-generated, it’s an ideal future platform for individual and collaborative distance-learning efforts.

We’ve used our cave to teach elements of color theory.1 Color theory is often highly abstract and multidimensional, making it difficult to explain well with static diagrams or even 3D real-world models. The desktop environment provides valuable flexibility in the study of color (for example, making it easy to modify colors rapidly), but doesn’t address the difficulty of understanding the 3D nature of color spaces and the complex interactions between lights and colored objects in a real-world setting. In the cave, users of our Museum of Color can view fully 3D color spaces from any point of view and use a variety of interaction and visualization techniques (for example, a rain of disks colored by their location in a space and an interactive cutting plane) to explore individual spaces and compare spaces with one another (see Figure D). In addition, viewers can enter the spaces and become fully immersed in them, seeing them from the inside out.

We believe that the experience of entering a color space in an IVR differs fundamentally from examining 3D desktop models and that the experience will help users develop a better understanding of color structures and the relative merits of different spaces. For example, our color space comparison exhibit lets the user move a cutting plane in Munsell space and see it mapped into both RGB and HSV spaces. The plane is defined by gradients of constant hue, saturation, and value, and thus is flat in the perceptually based Munsell space. In RGB, and especially HSV, however, the plane deforms, at times quite radically, demonstrating the nonlinearities of those color spaces. Although we could show a single example of such a comparison in a picture (see Figure E), actual use of this technique in the cave lets users actively explore different areas of the spaces and experience their changing degrees of nonlinearity.
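The plane-deformation idea can be illustrated with a few lines of standard-library Python. Lacking a standard Munsell converter, this hedged sketch uses HSV as a stand-in source space: a plane of constant value is flat in HSV but maps to a curved, non-planar surface in RGB, exposing the kind of nonlinearity the exhibit lets users explore.

```python
# Hedged stand-in for the exhibit's space-to-space mapping (HSV -> RGB;
# the museum itself compares Munsell against RGB and HSV).
import colorsys

V = 0.7                                  # constant value: a flat plane in HSV
for h in (0.0, 0.25, 0.5, 0.75):         # hue around the ring
    for s in (0.2, 0.6, 1.0):            # saturation outward
        r, g, b = colorsys.hsv_to_rgb(h, s, V)
        print(f"HSV({h:.2f},{s:.1f},{V}) -> RGB({r:.2f},{g:.2f},{b:.2f})")
```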

In other interactive exhibits in the museum, users can experiment with the effects of additive and subtractive mixing by shining colored lights on paintings and 3D objects. This provides a hands-on approach impossible with desktop software while offering a more easily controlled and more varied environment than practical in a real-life laboratory. Future plans include more exhibits (such as one on color scale that shows the user why choosing a color from a little swatch for one’s walls is often misleading), as well as user testing of the pedagogy and interaction techniques.

References
1. A.M. Spalter et al., “Interaction in an IVR Museum of Color,” Proc. ACM Siggraph 2000 Education Program, ACM Press, New York, 2000, pp. 41-44.

D Falling disks are colored according to their changing positions within the color space.

E A plane of constant perceived value is flat in Munsell space (right) but warped in HSV space (left).

The landmark 1987 National Science Foundation report “Visualization in Scientific Computing”7 stressed the importance of interactive scientific visualization, especially for large-scale problems, and reminded us of Hamming’s famous dictum: “The purpose of computing is insight, not numbers.” The authors’ observation that “Today’s data sources [simulations and sensors] are such fire hoses of information that all we can do is gather and warehouse the numbers they generate” is unfortunately as true today as it was then.

Why use IVR for scientific visualization?
Several factors prompt the use of IVR for scientific visualization. IVR also shows potential to surpass other forms of desktop-based visualization.

Exponential growth of data set size
Moore’s Law for computer processing power and similar improvements in storage, network, and sensing device performance continue to give us ever-greater capacity to collect and compute raw data. Unfortunately, computational requirements and data set size in science research are growing faster than Moore’s Law. Thus, the gap between what we can gather or create and what we can properly analyze and interpret is widening (see the sidebar “Examples of Data Set Size”). In the limit, the real issue is nature’s overwhelming complexity. Galaxy or plasma simulations, for example, are seven-dimensional problems, so doubling resolution can increase computation by a factor of 2^7 = 128. It’s extremely difficult to make headway against problems this hard, and there are hundreds of comparable complexity.

Problems and proposed solutions
The 1998 DOE report “Data and Visualization Corridors”8 proposed three technology roadmaps to address the crisis: data manipulation; visualization and graphics; and interaction, shared spaces, and collaboration. This document and subsequent reports show that systems are unbalanced today and that our ability to produce and collect data far outweighs our ability to visualize or work with it. The main bottleneck continues to be the ability to visualize the output and gain insight.

While the raw polygon performance of graphics cards may be on a faster track than Moore’s Law, visualization environments aren’t improving commensurately. In graphics and visualization, the key barriers to achieving really effective visualizations are underpowered hardware, underdeveloped software, inadequate visual idioms/encoding representations and interaction metaphors not based on a deep understanding of human capabilities, and disproportionately little funding for visualization. We address some of these problems as research issues in the sections below.

The accelerating data crisis demands new approaches short term and long term, new forms of abstraction, and new tools. Short term, Moore’s Law, visualization clusters (parallel approaches), tiled displays (increased image fidelity), and IVR should help the most. Long term, artificial-intelligence-based techniques will cull, organize, and summarize raw data prior to IVR viewing, while ensuring that the links to the original data remain. These techniques will support adjustable detail-and-context views to let researchers zoom in on specific areas while maintaining the context of the larger data set.


Sidebar: Examples of Data Set Size
Andrew Forsberg

At the Department of Energy’s Accelerated Strategic Computing Initiative (ASCI, http://www.llnl.gov/asci/), a wide range of applications that generate huge amounts of data are run to gain understanding through simulations. Table A gives an overview of the size of current and anticipated ASCI simulations. These estimates are based on an actual run of a “multi-physics code.” The very largest ASCI simulation runs, known as hero calculations, produce even larger data sets that generate an order of magnitude more data than typical runs. Developing techniques to manage and visualize these data sets is a formidable problem being actively researched at both DOE laboratories and universities.

The National Center for Atmospheric Research (NCAR, http://www.ncar.ucar.edu/ncar/index.html) studies data volumes associated with earth system and related simulation. Climate, weather, and general turbulence are of particular interest. Climate simulations generally produce about 1 TByte of raw data per 100-year run. Running ensemble runs—several simulations with small differences—is important and multiplies the amount of data that must be analyzed.

Monthly time dumps are currently used; in the future, these time dumps may be hourly for certain studies, increasing the data size by many orders of magnitude. Hurricane simulations at 1 km resolution and sampled every 15 minutes may produce as much as 3 TBytes of raw data. Unlike some simulations, all this data (in terms of both time and space) must be visualized for many variables. In addition, geophysical and astrophysical turbulence has been a particularly active and fruitful research area at NCAR, but researchers are limited by both computational and analytical resources.

One current effort in astrophysical turbulence runs at 2.5 km resolution, and even with what is considered a crude and insufficient time sampling, produces net data volumes of about 0.25 TBytes. Given more resources, researchers could use a finer time sampling, add more variables, and conduct several runs for comparison with one another. This would result in a final data set size of about 6 TBytes. As soon as it’s practical to do so, researchers will double the resolution of the simulation, yielding a 50-TByte data set. And if it were possible, researchers would benefit from running at four times the resolution.

Sensors that collect data produce data sets on the order of petabytes today. For example, the compact muon solenoid detector experiment on CERN’s Large Hadron Collider (http://cern.web.cern.ch/CERN/LHC/pgs/general/detectors.html) will collect about a petabyte of physics data per year. A more visualizable data set is CACR’s collection of all-sky surveys known as the Digital Sky (http://www.cacr.caltech.edu/SDA/DigiSky/whatis.html); this is starting out at tens of terabytes and will grow.

Another example comes from developmental biology. Using multispectral, time-lapse video microscopy, it’s now possible to acquire volume images showing where and when a gene is expressed in developing avian embryos. To accomplish this, a given gene is modified so that when expressed it produces not only the protein it represents, but also an additional marker protein. The marker can be imaged in a live avian embryo as it develops, producing a time-varying volume image.

The acquired data sets are large. Changes recorded every 90 minutes—a moderate timestep in the scale of development, corresponding to the formation of one somite in a developing embryo—over the first four days of development roughly correspond to the first trimester of human development. A single acquisition can measure expression of three genes in a volume of 512 × 768 × 150 spatial points for 64 time steps, producing 22 gigabytes of data. As many as 10 images are necessary to cover an entire embryo. Images of the 100,000 genes expressed in avians would exceed a petabyte. The real complexity of the problem, and its true potential, lies in correlating hundreds or thousands of these images in order to understand the many different proteins working in concert. IVR has the potential to help solve this problem by showing the multivalued data simultaneously so that the human visual system, arguably the best pattern-finding system known, can search for these correlations.

Table A. Data output from one ASCI code.*

                                               FY00         FY02         FY04
Sizing requirements                            4 TFLOP      30 TFLOP     100 TFLOP
Number of zones (locations where material
  properties such as pressure, temperature,
  chemical species, stress tensor, and so
  on are tracked)                              25 million   200 million  1 billion
Number of material properties per zone         10-50        10-50        10-50
Small visualization file (such as description
  of mesh, one zonal, one nodal)               2 GBytes     12 GBytes    50 GBytes
Large plot/restart file size (with all
  physics variables saved)                     60 GBytes    450 GBytes   1,500 GBytes
Average length of run                          20.5 days    20-40 days   20-40 days
Number of visualization files per run          100 small,   200 small,   200 small,
                                               180 large    180 large    180 large
Visualization data set size per major run      6.4 TBytes   84 TBytes    280 TBytes

*Data has been scaled linearly to FY02 and FY04 using data from a recent run. Data courtesy of Terri Quinn, Lawrence Livermore National Laboratory.


IVR versus 3D desktop-based visualization
Visualization that leverages our human pattern-recognition ability can be a powerful tool for understanding, and any technique that lets the user “see more” enhances the experience. Complex 3D or higher-dimensional and possibly time-varying data especially benefit from interactive exploration to see more. One way of seeing more is to use greater levels of abstraction/encoding in the data. However, for a given data representation, the more the eye can rapidly take in, the better.

IVR allows much more use of peripheral vision to provide global context. It permits more natural and thus quicker exploration of three- and higher-dimensional data.9 Additionally, body-centric judgments about 3D spatial relations come more easily,10 as can recognition and understanding of 3D structures.11 It’s easier to do such tasks when 3D depth perception is enhanced by stereo and motion parallax (via head tracking).

In a sense, using IVR’s kinesthetic depth perception to visualize phenomena represents the life-size interactive generalization of stereo pairs in textbooks. Bryson made the case that real-time exploration is a desirable capability for scientific visualization and that IVR greatly facilitates exploration capabilities.11 In particular, a proper IVR environment provides a rich set of spatial and depth cues, and rapid interaction cycles that enable probing volumes of data. Such rapid interaction cycles rely on multimodal interaction techniques and the high performance of typical IVR systems. While any of IVR’s input devices and interaction techniques could, in principle, work in a desktop system, IVR seems to encourage more adventurous use of interaction devices and techniques. (See the sidebar “Interaction in Virtual Reality: Categories and Metaphors.”)

Other interesting differences separate IVR and conventional desktop environments. Current research at Carnegie Mellon University and the University of Virginia shows that users make the same kinds of mistakes in spatial judgments in the virtual world that they do in the real world (such as overestimating height-width differences of objects), which isn’t the case in 3D desktop graphics. Also, certain kinds of hand-eye coordination tasks, such as rotating small objects, are easier in IVR. Typical times for rotating objects to match a target orientation using a virtual trackball or arcball at the desktop12 fall in the range of 17 to 27 seconds, but having the user’s hand and the virtual object collocated in 3D space for optimal hand-eye coordination can reduce this time by an order of magnitude to around two seconds.13
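For reference, here is a minimal sketch of the desktop virtual trackball (arcball) used in those timing comparisons, after Shoemake's classic formulation; the normalization and edge-clamping details are our assumptions.

```python
# Hedged sketch of an arcball: 2D mouse points are lifted onto a unit
# sphere and a drag becomes the rotation carrying one point to the other.
import numpy as np

def to_sphere(x, y):
    """Map normalized screen coords in [-1,1]^2 onto the unit sphere."""
    d2 = x * x + y * y
    if d2 <= 1.0:
        return np.array([x, y, np.sqrt(1.0 - d2)])
    p = np.array([x, y, 0.0])
    return p / np.linalg.norm(p)          # outside the ball: clamp to edge

def drag_rotation(p0, p1):
    """Quaternion (w, x, y, z) rotating sphere point p0 onto p1."""
    a, b = to_sphere(*p0), to_sphere(*p1)
    return np.concatenate(([a @ b], np.cross(a, b)))
```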

All these phenomena exemplify how the IVR experience comes closer to our real-world experience than does 3D desktop graphics. Indeed, IVR produces an undeniable qualitative difference. Looking at a picture of the Grand Canyon, however large, differs fundamentally from being there. Again, IVR is more like the real world than any photograph, or any conventional “through the window” graphics system, could be.


Sidebar: Interaction in Virtual Reality: Categories and Metaphors
Joseph J. LaViola, Jr.

When discussing 3D user interfaces for IVR, it’s important to break down different interaction tasks into categories so as to provide a framework for the design of new interaction metaphors. In contrast to 2D WIMP (windows, icons, menus, pointing) interfaces, which have a commonality in their structure and appearance, only a few standard sets of sophisticated interface guidelines, classifications, and metaphors for 3D user interfaces1 have emerged due, in part, to the complex nature of the interaction space and the additional degrees of freedom. However, 3D user interaction can be roughly classified into navigation, selection and manipulation, and application control.

Navigation
Navigation can be classified into three distinct categories. Exploration is navigation without any explicit target (that is, the user simply explores the environment). Search tasks involve moving through the environment to a particular location. Finally, maneuvering tasks are characterized by short, high-precision movements usually done to position users better for performing other tasks. The navigation task itself is broken up into a motor component called travel, the movement of the viewpoint from place to place, and a cognitive component called wayfinding, the process of developing spatial knowledge and awareness of the surrounding space.2 For example, with large scientific data sets, users must be able to move from place to place and make spatial correlations between different parts of the data visualization.

Most travel interaction metaphors fall into one of five categories:

■ Physical movement: using the motion of the user’s body to travel through the environment. Examples include walking, riding a stationary bicycle, or walking in place on a virtual conveyer belt.
■ Manual viewpoint manipulation: the user’s hand motions effect travel. For example, in Multigen’s SmartScene navigation, users grab points in space and pull themselves along as if holding a virtual rope.
■ Steering: the continuous specification of the direction of motion. Examples include gaze-directed steering, in which the user’s head orientation determines the direction of travel, or two-handed flying, in which the direction of flight is determined by the vector between the user’s two hands and the speed is proportional to the user’s hand separation3 (see the sketch after this list).
■ Target-based travel: the user specifies the destination and the application handles the actual movement. An example of this type of travel is teleportation, in which the user immediately jumps to the new location.
■ Route planning: the user specifies a path and the application moves the user along that path. An example is drawing a path on a map of the space or actual environment to plan a route.
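A minimal sketch of the two-handed flying metaphor from the list above (Python with NumPy; the gain and dead-zone constants are our illustrative assumptions):

```python
# Hedged sketch: flight direction is the vector between the tracked
# hands; speed scales with their separation.
import numpy as np

SPEED_GAIN = 2.0    # meters/sec of flight per meter of hand separation
DEAD_ZONE = 0.1     # ignore tiny separations so the user can hover

def flying_velocity(left_hand, right_hand):
    """left_hand, right_hand: (3,) tracked positions. Returns a velocity."""
    v = right_hand - left_hand
    dist = np.linalg.norm(v)
    if dist < DEAD_ZONE:
        return np.zeros(3)
    return SPEED_GAIN * dist * (v / dist)   # direction * speed

# Per frame: viewpoint += flying_velocity(lh, rh) * dt
```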

Selection and manipulation
A number of metaphors have been developed for selecting, positioning, and rotating objects. The classical approach provides the user with a virtual hand or 3D cursor whose movements correspond to the physical movement of the hand tracker. This metaphor simulates real-world interaction but is problematic because users can pick up and manipulate objects only within the area of reach.

One way around this problem is to use ray-casting or hand-extension metaphors such as the Go-Go technique4 for object selection and manipulation. The Go-Go technique interactively “grows” the user’s arm using a nonlinear mapping. Thus the user can reach and manipulate both near and distant objects. Ray-casting metaphors are based on shooting a ray from the virtual hand into the scene. When the ray intersects an object, the user can select and manipulate it.
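The Go-Go mapping can be stated compactly. The sketch below follows the published piecewise form (identity within a threshold distance D, quadratic extension beyond it, after Poupyrev et al.); the constants are illustrative assumptions.

```python
# Hedged sketch of the Go-Go nonlinear arm extension.
import numpy as np

D = 0.5   # meters: threshold where nonlinear gain starts (~2/3 arm length)
K = 6.0   # gain of the quadratic term

def gogo_hand(chest_pos, real_hand_pos):
    """Map the tracked hand position to the (possibly extended) virtual hand."""
    offset = real_hand_pos - chest_pos
    r = np.linalg.norm(offset)
    if r < D:
        return real_hand_pos                 # 1:1 within comfortable reach
    r_virtual = r + K * (r - D) ** 2         # nonlinear extension beyond D
    return chest_pos + offset * (r_virtual / r)
```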

With simple ray casting the user may find it difficult to select very small or distant objects. Variants of ray casting developed to handle this problem include the spotlight technique5 and aperture-based selection.6 Another approach within the metaphor of ray casting is the image-plane family of interaction techniques,7 which bring 2D image-plane object selection and manipulation, commonly found in 3D desktop applications, to virtual environments.
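A minimal sketch of basic ray-casting selection, using bounding spheres as a stand-in scene representation (our assumption; production systems intersect actual geometry):

```python
# Hedged sketch: shoot a ray from the tracked hand along its pointing
# direction and pick the nearest object whose bounding sphere it hits.
import numpy as np

def pick(ray_origin, ray_dir, objects):
    """objects: list of (name, center (3,), radius). Returns nearest hit or None."""
    best, best_t = None, np.inf
    d = ray_dir / np.linalg.norm(ray_dir)
    for name, center, radius in objects:
        oc = center - ray_origin
        t = oc @ d                        # closest approach along the ray
        if t < 0:
            continue                      # object is behind the hand
        miss2 = oc @ oc - t * t           # squared distance from ray to center
        if miss2 <= radius * radius and t < best_t:
            best, best_t = name, t
    return best
```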

Instead of having users reach out into the virtual world to select and manipulate objects, another metaphor for this interaction task is to bring the virtual world closer to the user. One of the first examples of this approach, the 3DM immersive modeler,8 lets users grow and shrink themselves to manipulate objects at different scales. In another approach, the World-In-Miniature (WIM) technique9 provides users with a handheld copy of the virtual world. The user can indirectly manipulate the virtual objects in the world by interacting directly with their representations in the WIM.
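One way to think about the WIM is as a similarity transform between the full-scale world and the handheld copy; a hedged sketch under that assumption (the paper itself does not prescribe this representation):

```python
# Hedged sketch: the miniature is the world under a rotate/scale/translate
# transform M, so moving a proxy in the WIM moves the real object via M^-1.
import numpy as np

def make_wim_transform(hand_pose, scale):
    """hand_pose: 4x4 world pose of the miniature's anchor; scale: e.g. 0.01."""
    S = np.diag([scale, scale, scale, 1.0])
    return hand_pose @ S                   # maps world coords into the WIM

def wim_to_world(point_wim, M):
    """Map a manipulated proxy position back into full-scale world coords."""
    p = np.append(point_wim, 1.0)
    return (np.linalg.inv(M) @ p)[:3]
```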

In addition to direct manipulation of objects, users can control objects indirectly with 3D widgets.10 They extend familiar 2D widgets of WIMP interaction and are combinations of geometry and behavior. Some are general, such as transformer widgets with handles to constrain the translation, rotation, and scale of an object. Others are task-specific, such as the rake emitting colored streamlines used in computational fluid dynamics (shown to the left of the user’s left hand in Figure F). They’re used to modify parameters associated with an object and to invoke particular operations.

F A user interacting with a scientific data set. He uses a multimodal interface combining hand gesture and speech to change modes and application state, such as creating and controlling visualization widgets, and starting and stopping recorded animations.
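A sketch of what a rake widget computes under the hood: streamlines integrated through a sampled velocity field. The toy field, the simple Euler integrator, and the SciPy interpolation are our assumptions; real systems use higher-order integrators on the actual CFD grid.

```python
# Hedged sketch: seed points along a user-positioned line segment, each
# traced through an interpolated velocity field.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

axis = np.linspace(0.0, 1.0, 32)
vel = np.random.default_rng(0).normal(size=(32, 32, 32, 3))  # toy (u,v,w) field
field = RegularGridInterpolator((axis, axis, axis), vel,
                                bounds_error=False, fill_value=0.0)

def streamline(seed, dt=0.01, n_steps=200):
    """Trace one streamline from `seed` with simple Euler steps."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        v = field(pts[-1])[0]             # interpolated velocity at the point
        pts.append(pts[-1] + dt * v)
    return np.array(pts)

# A rake: several seeds spaced along a line the user positions in the VE.
rake = [streamline([0.5, 0.1 + 0.08 * i, 0.5]) for i in range(10)]
```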


A third way of selecting and manipulating virtual objects is to create physical props, or phicons,11 that act as proxies for virtual objects, giving users haptic feedback and a cognitive link between the virtual object and its intended use.

A number of other techniques and metaphors have been developed for selection and manipulation in virtual environments. For references see Poupyrev and Kruijff’s annotated bibliography of 3D user interfaces of the 20th century, available on the Web at http://www.mic.atr.co.jp/~poup/3dui/3duibib.htm.

One of the areas we must continue to explore is how to combine the various interaction styles to enrich user interaction. For example, physical props can help to increase the functionality of virtual widgets and vice versa. The Virtual Tricorder12 (see Figure G), which uses a physical prop and has a corresponding virtual widget, is a good example of this combination. These types of hybrid interface approaches help reduce the user’s cognitive load by providing familiar transitions between tools and their functionality.

Application control
Application control tasks change the state of the application, the mode of interaction, or parameter adjustment, and usually include the selection of an element from a set. There are four different categories of application control techniques: graphical menus, voice commands, gestural interaction, and tools (virtual objects with an implicit function or mode). Example application control tools include the Virtual Tricorder and rake widgets mentioned above.

These application control techniques can be combined in a number of different ways to create hybrid interfaces. For example (see Figure G), combining gestural and voice input provides users with multimodal interaction, which has the potential of being a more natural and intuitive interface. Although application control is a part of most VR applications, it hasn’t been studied in a structured way, and evaluations of the different techniques are sparse.

The future
Within the last couple of years, we’ve seen a significant slowdown in the number of novel IVR interaction techniques appearing in the literature, largely because it’s becoming more and more difficult to develop them. Among the new directions to pursue in 3D user interface research should be more evaluations of already existing techniques to see which ones work best for which applications and IVR environments. Despite our many different 3D interaction metaphors, we lack an application-based structure to tell us where these metaphors are best used.

Another direction to consider is extending 3D interaction techniques with artificial intelligence. For example, with increasing data set size, AI techniques will become essential in feature extraction and detection so that users can visualize these massive data sets. Another example is using machine-learning algorithms so that the application can detect various user patterns that could aid users in interaction tasks. Incorporating AI into 3D interaction has the potential to spawn new sets of 3D interface techniques for VR applications that would otherwise not be possible.

References
1. D. Bowman et al., 3D User Interface Design: Fundamental Techniques, Theory, and Practice, Course #36, Siggraph 2000, ACM, New York, July 2000.
2. K. Hinckley et al., “A Survey of Design Issues in Spatial Input,” Proc. ACM UIST 94, ACM Press, New York, 1994, pp. 213-222.
3. M. Mine, “Moving Objects in Space: Exploiting Proprioception in Virtual Environment Interaction,” Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, 1997, pp. 19-26.
4. I. Poupyrev et al., “The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR,” Proc. ACM User Interface Software and Technology (UIST) 96, ACM Press, New York, 1996, pp. 79-80.
5. J. Liang and M. Green, “JDCAD: A Highly Interactive 3D Modeling System,” Computers & Graphics, Vol. 18, No. 4, 1994, pp. 499-506.
6. A.S. Forsberg, K.P. Herndon, and R.C. Zeleznik, “Aperture-Based Selection for Immersive Virtual Environments,” Proc. User Interface Software and Technology (UIST) 96, ACM Press, New York, 1996, pp. 95-96.
7. J.S. Pierce et al., “Image Plane Interaction Techniques in 3D Immersive Environments,” Proc. 1997 ACM I3D (Interactive 3D Graphics), ACM Press, New York, 1997, pp. 39-43.
8. J. Butterworth et al., “3DM: A Three Dimensional Modeler Using a Head-Mounted Display,” Proc. ACM Symp. on Interactive 3D Graphics (I3D), ACM Press, New York, 1992, pp. 135-138.
9. R. Stoakley, M. Conway, and R. Pausch, “Virtual Reality on a WIM: Interactive Worlds in Miniature,” Proc. ACM Computer-Human Interaction (CHI) 95, ACM Press, New York, 1995, pp. 265-272.
10. K.P. Herndon and T. Meyer, “3D Widgets for Exploratory Scientific Visualization,” Proc. ACM User Interface Software and Technology (UIST) 94, ACM Press, New York, 1994, pp. 69-70.
11. H. Ishii and B. Ullmer, “Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms,” Proc. ACM Computer-Human Interaction (CHI) 97, ACM Press, New York, 1997, pp. 234-241.
12. M. Wloka and E. Greenfield, “The Virtual Tricorder: A Uniform Interface for Virtual Reality,” Proc. ACM User Interface Software and Technology (UIST) 95, ACM Press, New York, 1995, pp. 39-40.

G The display geometry of the Virtual Tricorder closely reflects the geometry of the 6DOF Logitech FlyMouse, enhanced with transparent menus.

IVR is used in scientific visualization in two sorts of problems: human-scale and non-human-scale problems. The case for using IVR is more obvious for the former, as Brooks described,1 citing vehicle operation, vehicle design, and architectural design. For example, an architectural walkthrough will, in general, be more effective in an IVR environment than in a desktop environment because humans have a lifetime of experience in navigating through, making spatial judgments in, and manipulating 3D physical environments. Ergonomic validation tasks, like checking viewable and reachable cockpit instrumentation and control placement, can be performed more quickly and efficiently in a virtual prototyping environment than with labor-intensive physical prototyping. Bryson’s pioneering work on the virtual wind tunnel lets researchers “experience” fluid flow over a life-sized replica of the space shuttle.14 Finally, in complex 3D environments such as oil refineries, orientation and navigation seem easier with IVR—the simulated environment stays fixed while your body moves—than with desktop environments, where the mouse or joystick makes the VE rotate around you.

While some problems (such as visualizing numerical simulation of arterial blood flow) aren’t naturally human-scale, they can be cast into a human-scale setting. There they can arouse the normal human reactions to 3D environments. For example, in arterial flow (see the “Artery” sidebar), when our users enter the artery, they think of it as a pipe—a familiar object at their own macro scale. By entering the artery and viewing the 3D vorticity geometry in 3D, they can make better decisions about which viewpoints and 2D projections are most meaningful. A similar example, which Brooks described,1 is the University of North Carolina nanomanipulator project, in which the humans and their interactions are scaled down to the nanoscale.

The key question for non-human-scale problems is whether the added richness of life-size immersive display allows faster, easier, or more accurate perceptions and judgment. So far, not enough controlled studies have been done to answer this question definitively. Anecdotal evidence indicates that it’s easier, for example, to do molecular docking15 for drug design in IVR than on the desktop.

In addition to varying perspectives of scale in data sets with inherent physical geometries, we often face data that have no inherent geometry (such as flow field data) and perhaps even no physical scale (such as data describing statistical phenomena). The 3D abstractions through which we visualize these data sets often present very irregular structure with complicated geometries and topologies. Just as IVR allows better navigation through complicated architectural-scale structures, we believe that IVR will be a better environment for conception, navigation, and exploration in any visualization of complex 3D structure. Indeed, what’s often far more difficult in nonimmersive interactive visualizations is to gain rapid insight through simple head movement and natural multimodal interaction. As Haase et al. pointed out in 1994, “[IVR can provide] easy-to-understand presentations, and more intuitive interaction with data” and “rather than relying almost exclusively on human cognitive capabilities, [IVR applications for analysis of complex data] engage the powerful human perceptual mechanisms directly.”16

Despite the lack of conclusive evidence that today’s IVR “is better” for scientific visualization, we remain optimistic that in due time it will improve sufficiently in cost, performance, and comfort to become the medium of choice for many visualization tasks.

Research challenges
First we summarize the field’s current state. Next we explore some IVR research challenges: display hardware, rendering (including parallelism in rendering), haptics, interaction, telecollaboration, software standards and interoperability, and user studies. Finally, we cover the challenges for scientific and information visualization.

Where is IVR today?
Thanks to Moore’s Law and clever rethinking of graphics architecture at both the hardware and algorithm levels, 3D graphics hardware has seen much progress recently. Commodity chips and cards, such as those in NVidia’s GeForce2 Ultra and Sony’s PlayStation-2, provide an astonishing improvement over previous generations of even high-end workstations. These advances are made possible by graphics processor designs larger than those of microprocessors and produced on a timetable even more aggressive. One could construct a very decent personal IVR system from a PC with such a high-end card, a high-quality lightweight HMD, and robust, high-resolution, large-volume trackers for head tracking and interaction devices. Of these three components, tracker technology is actually the most problematic, given today’s primitive state of the art.

Meanwhile, the number of IVR installations, including semi-immersive workbenches plus fully immersive caves and HMD-based environments, is steadily increasing. Wall-sized single or tiled displays offer an

Sidebar: Artery
Andrew Forsberg

In collaboration with Spencer Sherwin of Imperial College, researchers at Brown University are studying blood flow in arterial branches.1 Understanding flow features and transport properties within arterial branches is integral to modeling the onset and development of diseases such as arteriosclerosis.

We’re currently examining the geometries of arterial bypass grafts, which are surgically constructed bypasses around a blockage in an artery. They have an upstream (proximal) junction where the bypass vessel attaches to the original (host) artery and a downstream (distal) junction where the bypass vessel is reattached after the blockage. The disease occurs most frequently at this downstream junction, therefore this geometry is of greatest interest (see Figure H).

These flows are typically unsteady and have no clearly defined symmetries. The fields we're interested in can be expressed in terms of vector (velocity) and scalar fields. Interpreting this type of data requires understanding the forces on a fluid element due to local pressure and viscous stresses. In general, these forces aren't collinear or aligned with any preferred Cartesian direction. The use of traditional 2D visualization can therefore be limiting, especially when considering a geometrically complicated situation with no planes of symmetry.

It’s useful to consider the coherent structure identification as a wholeto get a general picture. The scale of this structure is typically of theorder of the artery’s diameter. The physical scales of the problem arebounded from above by the geometry diameter and from below bythe viscosity length scale (the length at which structures disappear).These coherent structures typically occur along the local primary axis ofthe vessels. Therefore, at the junction of the vessels we have two sets offlow structures in different planes. At higher flow speeds smaller flowfeatures can also occur due to the nonlinear nature of the flow as thetransition to turbulence takes place.

Output data is often so large it cannot be visualized. In their work on suppressing turbulence, Du and Karniadakis2 processed only a small percentage of this data, typically in terms of statistics such as the mean value at one point.

H Diagram of an arterial graft. (Labels: Artery, Bypass Graft, Occluded Region, Original Flow Direction.)


These advances are made possible by graphics processor designs larger than those of microprocessors, produced on an even more aggressive timetable. One could construct a very decent personal IVR system from a PC with such a high-end card, a high-quality lightweight HMD, and robust, high-resolution, large-volume trackers for head tracking and interaction devices. Of these three components, tracker technology is actually the most problematic, given today's primitive state of the art.

Meanwhile, the number of IVR installations, including semi-immersive workbenches plus fully immersive caves and HMD-based environments, is steadily increasing. Wall-sized single or tiled displays offer an increasingly popular alternative to IVR, particularly for group viewing.


However, it's difficult to relate these point-wise quantities to the flow structures to construct theories: the statistics can show the effect, but not the cause. Examining the detailed small-scale flow structures can lead to discovery of the cause.

The challenge is how to interpret the dynamics at the junction. One problem is that, when viewed from the outside, some of the structures block each other; thus viewing from inside the junction can let us understand how each flow feature interacts with others. Note that viewing a time-varying isosurface alone will not be sufficient to understand the whole flow pattern.

Figure I shows snapshots of the user visualizing the arterial flow data. In Figure I, part 2, the user is positioned just downstream from the bifurcation, facing the bypass graft from which three streamlines emanate. The bifurcation partially occludes the graft passage, and to the right of the bifurcation is the occluded region (see the corresponding features in Figure H). The user holds a wand to control a virtual menu of options ("Streamline" is the current selection) and to create and adjust the parameters of visualization widgets.

IVR may help in understanding the artery data in several ways. Most immediately, viewing the 3D data from the inside is easier in IVR than in desktop environments. IVR's fundamental attributes (such as a wide field of view, a larger number of pixels, and head-tracked stereo) contribute to enhanced 3D viewing. In an immersive environment you can stand at a point such as the intersection of the bypass graft and consider how flow features such as rotation and straining occurring in different physical planes can interact and exchange information. Multimodal user interfaces enable users to control the visualization process rapidly and more naturally than desktop WIMP interfaces. Group interaction is also a benefit of IVR.

Of course, problems remain to be solved. For example, we desperately need to increase the fidelity (both in terms of spatio-temporal resolution and overall aesthetic quality) of the visualization while maintaining interactive frame rates. When more than one person views the data, they can have difficulties communicating because each person has a different point of view. Consequently, very natural and common tasks like pointing at features are deceptively difficult to implement (despite some research to enhance multi-user interaction3).

References
1. A.S. Forsberg et al., "Immersive Virtual Reality for Visualizing Flow through an Artery," Proc. IEEE Visualization 2000, in publication.
2. Y. Du and G.E. Karniadakis, "Suppressing Wall Turbulence by Means of a Transverse Traveling Wave," Science, Vol. 288, No. 5469, 19 May 2000, pp. 1230-1234.
3. M. Agrawala et al., "The Two-User Responsive Workbench: Support for Collaboration through Individual Views of a Shared Space," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, 1997, pp. 327-332.

I (1) A view within the artery looking downstream from the bifurcation. Shear-stress values on the artery wall are shown using a standard color mapping technique in which blue represents lower values and red represents higher values. Regions of low shear stress tend to correlate with locations of future lesions. (2) A view within the artery looking upstream at the bifurcation. The user holds a wand that controls a virtual menu and has created three streamline widgets and a particle advection widget. A texture map on the artery wall helps the user perceive the 3D geometry.


IVR environments augment traditional design studios and "wet labs" in such areas as biochemistry and drug design. Lin et al.17 reported increasing use in the geosciences. Supercomputer centers, such as those at the National Center for Supercomputing Applications and the Cornell Theory Center, provide production use of their IVR facilities. Thus it's fair to say that, slowly but surely, scientists are becoming acquainted with IVR in general and multimodal post-WIMP interaction in particular.

On the downside, while money does go into immersive environments, scientists and their management continue to treat investments in all visualization technologies as secondary to investments in computation and data handling.

Challenges in IVR

The following list of research issues in IVR is a partial one; space and time constraints prevent a more detailed listing. As is often the case with systems-level research areas, there's no way to partition the issues neatly into categories such as hardware and software. Also, latency—a key concern for IVR—affects essentially all issues.

Furthermore, many of the issues outlined apply equally to 3D desktop graphics. Indeed, IVR can be considered as merely the most extreme point on the spectrum for all these research issues; solutions will come from researchers and commercial vendors not focused uniquely on IVR. A telling example is the Sony GScube, shown at Siggraph 2000, which demonstrated very high-end performance using a special-purpose scalable graphics solution based on 16 PlayStation-2 cards. The carefully hand-tuned demo showed 140 ants (from the PDI movie AntZ), made up of around 7,000 polygons each, running in real time at 60 Hz at HDTV resolution, effectively over 1M polygons per frame at 1920 × 1080 resolution. A Sony representative quoted about 60M triangles per second, with peak rates of roughly 300M triangles per second. Load-balancing algorithms helped improve the performance of the 16 PlayStation-2 cards and additional graphics hardware.

While 2D graphics is mature, with the most progress occurring in novel applications, we've reached no such maturity in 3D desktop graphics, let alone in IVR. This immaturity is manifested at all levels, from hardware through software, interaction technology, and applications. Progress will have to be dramatic rather than incremental to make IVR a generally available productive environment. This is especially true if our hope that IVR will become a standard work environment is to be realized.

Improve display technologies. Hardware display systems for IVR applications have two important and interrelated components. The first is the technology underlying how the light we see gets produced; the second is the type and geometrical form of surface on which this light gets displayed.

Invent new light production technologies. A number of different methods exist for producing the light displayed on a surface. While CRTs—and increasingly LCD panels and projectors—are the workhorses for graphics today, newer light-producing techniques are still being invented. Texas Instruments' Digital Micromirror Device (DMD) technology is available in digital light projectors. Even more exotic technology from Silicon Light, which uses Grating Light Valve technology, will soon handle theater projection of digital films. There's hope that this kind of technology may be commoditized for personal displays.

Jenmar Visual Systems' BlackScreen technology (used in the ActiveSpaces telecollaboration project of Argonne Laboratories) captures image light into a matrix of optical beads, which focus it and pass it through a black layer into a clear substrate. From there it passes relatively uniformly into the viewing area. This screen material presents a black level undegraded by ambient light, making it ideal for use with high-luminosity projection sources and nonplanar tiled displays such as caves.

A very different approach to light production, the Virtual Retinal Display (VRD), projects light directly onto the retina.18 The VRD was developed at the Human Interface Technology (HIT) Lab in 1991 and is now being commercially offered by Microvision in a VGA form factor with 640 × 480 resolution. Because the laser must shine directly onto the retina, visual registration is lost if the eye wanders. Autostereoscopic displays such as that described by Perlin et al.19 are less obtrusive than stereo displays, which require shutter glasses. Light-emitting polymers hold the promise of display surfaces of arbitrary size and even curved shape. The large, static, digital holograms of cars displayed at Siggraph 2000 demonstrated an impressive milestone toward the real-time digital holography we can expect in the far future.

Create and evaluate new display surfaces. Unfortunately, no "one size fits all" display surface exists for IVR applications. Rather, many different kinds offer advantages and disadvantages. Choosing the appropriate display surface depends on the application, tasks required, target audience, financial and human resources available, and so on. In addition to Fish Tank VR, workbenches, caves, HMDs, and PowerWalls, new ways of displaying light continue to emerge.

Tiled display surfaces, which combine many display surfaces and light-producing devices, are very popular for visualization applications. Tiled displays offer greater image fidelity than other immersive and desktop displays today due to an increased number of pixels displayed (for example, 6.4K × 3K in Argonne's 15-projector configuration, in contrast to typical display resolution of 1280 × 1024) over an area that fills most of a user's, or often a group of users', field of view.20 For a thorough discussion of tiled wall displays, see CG&A's special issue on large displays.6

Not surprisingly, practical drawbacks of IVR display systems are their cost and space requirements. This problem plays a significant role in inhibiting the production use of IVR by scientists. However, semi-immersive personal IVR displays such as CMU's CUBE (Computer-driven Upper Body Environment) are emerging. In addition, the VisionStation from Elumens and Fakespace Systems' conCAVE are hemispheric personal displays that use image warping to compensate for the nonplanar display topology.

Understanding which IVR display surfaces best suit which application areas occupies researchers today. For example, head-tracked stereo displays typically provide a one-person immersive experience, since current projection hardware can only generate accurate stereo images for one person. (Some proposed strategies time-multiplex the output of a single projector,21 but exhibit problems such as decreased frame rate and darker images.) A non-head-tracked display, such as a PowerWall, which takes some advantage of peripheral vision, proves much better for group viewing. Given the still-primitive state of IVR, scientists—not surprisingly—generally choose a higher-resolution, nonimmersive, single-wall display over a much lower-resolution immersive display. Our optimism about the use of IVR for scientific visualization is bolstered by the belief that the same high resolution will eventually be available for IVR.

Improve immersion in multiprojector environments. Although large-scale display systems, such as multiprojector tiled and dome-based displays, show promise in providing more pixels to the user's visual field, a number of technological and research challenges remain to be addressed. For example, even though cave technology is more than eight years old, the seams between display walls still have visual discontinuities that can break the illusion of immersion. Large-scale tiled wall and dome displays also have problems with seams. Making images seamless across display surfaces and multiple projectors requires sophisticated image blending algorithms.22

We must also continue to explore methods for maintaining projector calibration and alignment, and color and luminosity matching. In addition, with front-projected displays (typical for domes), the user may occlude the projected images when performing various spatial 3D interaction tasks.

Finally, large-scale displays also require higher resolution than is currently possible. To match human acuity, we need to display at least 200 pixels per inch in the circular region with a 0.16 to 0.31 inch radius roughly 18 inches distant in the gaze direction (the region of foveal vision); lower resolution could be displayed outside this region (for example, on portions of the display more distant from the viewer position, and outside the cone of foveal gaze).

A simple calculation assuming a limiting discernible resolution of an arc-minute (typical for daylight grating discrimination23) yields a requirement for a conventional desktop display of 2400 × 1920 pixels—achieved, for example, by IBM's Roentgen active-matrix LCD display. However, for a 10-foot-diameter cave environment, in which today's typical projection systems evenly distribute pixels on a display surface and users can in principle come as close to the walls as they do to a normal monitor, each wall must have 23,000 × 23,000 pixels to achieve the same resolution. Variable-resolution display technology could use many fewer pixels because the pixels could be positioned to best accommodate the human eye.
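The arithmetic behind these figures is easy to reproduce. The short sketch below is our construction; the 12.5 × 10 inch panel size and the 18-inch viewing distance are assumptions chosen to match the numbers above.

    #include <cmath>
    #include <cstdio>

    // Pixels needed along one dimension so that adjacent pixels subtend
    // no more than one arc-minute at viewing distance viewDistIn (inches).
    double pixelsNeeded(double sizeIn, double viewDistIn) {
        const double kArcMinRad = 3.14159265358979 / (180.0 * 60.0);
        double pitchIn = viewDistIn * std::tan(kArcMinRad);   // max pixel pitch
        return sizeIn / pitchIn;
    }

    int main() {
        // ~12.5 x 10 inch desktop panel viewed from 18 inches:
        std::printf("desktop:   %.0f x %.0f\n",
                    pixelsNeeded(12.5, 18.0), pixelsNeeded(10.0, 18.0));   // ~2400 x 1920
        // 10-foot (120 inch) cave wall, user as close as 18 inches:
        std::printf("cave wall: %.0f x %.0f\n",
                    pixelsNeeded(120.0, 18.0), pixelsNeeded(120.0, 18.0)); // ~23,000 each way
    }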

Human visual acuity also varies with the task. For example, our visual acuity increases dramatically with stereo vision (so-called hyperacuity). For discriminating the relative depth of two lines in stereo, our acuity is 10 to 20 times finer than the value quoted for daylight grating discrimination,23 resulting in a 10-to-20-fold increase in the numbers quoted here, or a 100-to-400-fold increase in the total number of pixels needed (though careful and sophisticated antialiasing could permit coarser resolutions).

Develop office IVR. The history of computing has shown that only a few early adopters will flock to a new technology as long as it remains expensive and fragile, and requires using a lab away from one's normal working environment. IVR will not become a normal component of the scientist's work environment until it literally becomes indistinguishable from that environment.

UNC's ambitious Office of the Future project24 and its various near-term and longer-term variations are designed to bring IVR to the office in a powerful and yet affordable way. The system is based on a unified application of computer vision and computer graphics. It combines and builds on the notions of the cave, tiled display systems, and image-based modeling. The basic idea is to use real-time computer vision techniques to dynamically extract per-pixel depth and reflectance information for the visible surfaces in the office, including walls, furniture, objects, and people, and then to project images on the surfaces or interpret changes in the surfaces.

To accomplish the simultaneous scene capture and display, computer-controlled cameras and projectors replace ceiling lights. By simultaneously projecting images and monitoring geometry and reflectivity of the designated display surfaces, we can dynamically adjust for geometric, intensity, and resolution variations resulting from irregular and dynamic display geometries, and from overlapping images.

The projectors work in two modes: scene extraction (in coordination with the cameras) and normal display. In the scene extraction mode, 3D objects within each camera's view are extracted using imperceptible structured light techniques, which hide projected patterns used for scene capture through a combination of time-division multiplexing and light cancellation techniques. In display mode, the projectors display high-resolution images on designated display surfaces.25 Scene capture can also be achieved passively and in real time26 using a cluster of cameras and view-independent acquisition algorithms based on stereo matching.

Ultimately, such an office system will lead to more compelling and useful systems for shared telepresence and telecollaboration between distant individuals. It will enable rich experiences and interaction not possible with the through-the-window paradigm.

Improve rendering performance and flexibility. Although display processors have become fast, we still need additional orders of magnitude in performance. This problem increases in the context of tiled displays on a single wall or multiple walls—we have megapolygons-per-second rates, while we need gigapolygons-per-second, not to mention texels, voxels, and other display primitives.


A question pertaining to both hardware and software is how rendering can take advantage of the full capabilities of the human visual and cognitive systems. Examples of this concept include rendering with greater precision where the eye is focusing and with less detail in the peripheral vision, and rendering so as to emphasize the cues most important for perception and discrimination.27

The field of rendering remains in flux as new techniques such as perceptually based rendering, volumetric rendering, image-based rendering, and nonphotorealistic rendering28 join our lexicon. (Volume-rendering hardware—particularly important in scientific visualization—is specialized now in dedicated chips like Mitsubishi's VolumePro, whose output goes into a conventional polygon graphics pipeline as texture maps.) Current systems don't integrate all these rendering techniques completely. (For example, the Visualization ToolKit can integrate 2D, 3D polygonal, volumetric, and texture-based approaches, but not interactively for all situations, such as when translucent geometric data is used.29) Integrating all these techniques into a common framework is difficult, both at the API level, so that the application programmer can mix and match as suits the occasion, and at the system level, where the various types of data must be efficiently rendered and merged. Especially at the hardware level, this will require considerable redesign of the conventional polygon-centric graphics pipeline. Furthermore, visual rendering needs to be coordinated with audio rendering and haptic rendering.

Use parallelism to speed up rendering. Parallel rendering aims to speed up the rendering process by decomposing the problem into multiple pieces that can execute in parallel. We need to learn how to make scalable graphics systems by ganging together commodity processors and graphics components in the same way as high-performance parallel computers are built by ganging together commodity processors. Projects at Stanford, Princeton, Argonne National Lab, and other labs use this strategy to create tiled displays.6 Some groups are also experimenting with special-purpose hardware such as compositors. For example, Stanford has built the experimental, distributed frame-buffer architecture Lightning2.

Parallel rendering is a less well studied problem than parallel numerical computing, except for the embarrassingly parallel problem of ray tracing, which has been well studied.30 While multiple parallel rendering approaches are known (object or image space partitioning,31 for example), vendors haven't committed to any standard scalable parallel approach. Furthermore, the typical goal of parallel computation is to increase batch throughput, while the goal for IVR is to maximize interactivity. For IVR this is accomplished by minimizing latency and maximizing frame rate (discussed later).

Another way of looking at it is that scientific computing is most interested in asymptotic performance, whereas IVR is most interested in worst-case performance on a subsecond time scale. Parallel rendering has much in common with parallel databases (for example, efficient distribution and transaction mechanisms are needed), but is focused at the hardware level (for example, the memory subsystem and the rendering pipeline).

Whitman31 proposed the following criteria for evaluating parallel graphics display algorithms:

■ granularity of task sizes
■ nature of algorithm decomposition into parallel tasks
■ use of parallelism in the display algorithm without significant loss of coherence
■ load balancing of tasks
■ distribution and access of data through the communication network
■ scalability of the algorithm on larger machines

An important requirement is making a parallel rendering system easy to use—that is, isolating the application programmers from the complexities of driving a parallel rendering system. Hereld et al.32 stated that "the ultimate usability of tiled display systems will depend on the ease with which applications can be developed that achieve adequate performance." They suggested that existing applications should run without modification, that the details of rendering should be hidden from the user, and that new applications should have simple mechanisms to exploit advanced properties of tiled displays.

WireGL33 is an example of a system that makes it easy for the user and application programmer to leverage scalable rendering systems. Specifically, an OpenGL program needs no modification to run on a tiled display. The distributed OpenGL (DOGL) system at Brown University is a similar, although less optimized, library—it requires minor modification to an OpenGL program, but can drive a tiled, head-tracked, stereo cave display. It uses MPI, a message-passing interface common in parallel scientific computing applications. These and other systems work by intercepting normal (nonparallel) OpenGL calls and multicasting them, or channeling them to specific graphics processors.
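The intercept-and-multicast idea is easy to sketch. The toy program below is our construction, not WireGL or DOGL code: a fake command record stands in for an intercepted OpenGL call, and MPI_Bcast (MPI being what DOGL uses) replays the stream on every render node.

    #include <mpi.h>
    #include <cstdio>

    // A wire-format record standing in for one intercepted OpenGL call.
    struct GLCommand { int opcode; float args[4]; };

    // Application-side stub: instead of calling into the GL driver,
    // broadcast the command to every render node (rank 0 is the app).
    void emit(const GLCommand& c) {
        GLCommand copy = c;
        MPI_Bcast(&copy, sizeof copy, MPI_BYTE, 0, MPI_COMM_WORLD);
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {                       // "application" node
            emit({1, {0.f, 0.f, 0.f, 1.f}});   // e.g., a glVertex-like call
            emit({0, {}});                     // end-of-frame marker
        } else {                               // render nodes
            for (;;) {
                GLCommand c;
                MPI_Bcast(&c, sizeof c, MPI_BYTE, 0, MPI_COMM_WORLD);
                if (c.opcode == 0) break;      // end of frame
                // A real system would hand c to the local GL pipe after
                // applying this tile's viewport/frustum offset.
                std::printf("tile %d replays opcode %d\n", rank, c.opcode);
            }
        }
        MPI_Finalize();
    }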

WireGL includes optimizations to minimize network traffic for a tiled display and, for most applications, provides scalable output resolution with minimal performance impact. The rendering subsystem relies not only on parallelism, but also on traditional techniques for geometry simplification, such as view-dependent culling. (We discuss geometry simplification techniques later.)

To run a sequential OpenGL program written for a conventional 3D display device without modification requires that the interceptor deal with the complexities of managing head tracking and stereo. In particular, this requires transformations that conflict with those in the original OpenGL code. Furthermore, the application programmer must provide any additional interaction devices needed. Retaining the simplicity of the WireGL approach (running an OpenGL application in an IVR environment without modification) while extending its capabilities to a wider range of display types is an open problem.

Make haptics useful for interaction and scientific visualization. Haptic output devices need significant improvements before they can become generally useful. Commodity haptic devices, including the Phantom, the Wingman Force Feedback Mouse, and a number of gaming devices such as the Microsoft Sidewinder Force Feedback joystick and wheel, deliver six degrees of freedom at best; handling objects requires many more degrees of freedom to distribute forces more widely, such as over a whole hand.

Probably the most compelling use of force feedback today (besides in games and certain kinds of surgical training) comes from UNC's nanomanipulator project, where a natural mapping takes place between the atomic forces on the tip of the probe and what the Phantom can provide.

In addition to force displays that emphasize the force itself, the other major kind of haptic display is tactile.34

Tactile feedback has been simulated with vibrating components such as Virtual Technologies' CyberTouch glove, pin arrays, and robot-controlled systems that present physical surfaces at the appropriate locations, like those in Tachi's lab at the University of Tokyo (http://www.star.t.u-tokyo.ac.jp/), but these techniques are either experimental or unable to span a sufficient range of tactile sensations.

Earlier molecular docking experiments showed that a robot-arm force-feedback device could significantly speed up the exploration process.35 Feeling forces, as in electromagnetic or gravitational fields, can also aid visualization.35 Haptics used in a problem-solving environment for scientific computing called SCIRun lets the user feel vector fields.36

Because of their small working volume, today's commodity haptic rendering devices better suit Fish Tank VR than walk-around environments like the cave, let alone larger spaces instrumented with wide-area trackers. There are three basic approaches to haptics in such larger spaces:

■ make a larger ground- or ceiling-based exoskeleton with a larger working area,
■ put a desktop system such as a Phantom on a pedestal and move it around, or
■ ground the forces in a backpack worn by the user,34 for example to apply forces by pulling on wires attached to the fingers.37

None of these options is ideal. The first can be expensive and involve lesser fidelity and a greater possibility of hitting bystanders in the device's working volume. The second is clumsy and may cause visual occlusion problems, and the third makes the user carry around significant extra weight.

Besides distributed and mobile haptics, we list two additional research problems:

■ Developing real-time haptic rendering algorithms for complex geometry. Forces must be computed and created at high rates (on the order of 1,000 Hz) to maintain a realistic sensation of even the simplest solid objects; a minimal servo-loop sketch follows this list.
■ Actually using haptics for interaction, not just doing haptic rendering. Miller and Zeleznik described a recent example of adding haptics to the 2D user interface,38 but only a much smaller effort has begun on the general problem of what should be displayed haptically in general 3D environments, in addition to the surfaces of objects.39
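To make the first problem concrete, the following minimal servo loop is our construction; the probe reading and the 800 N/m wall stiffness are hypothetical stand-ins for what a device SDK would supply. It shows the roughly 1,000-Hz update structure that haptic rendering must sustain.

    #include <chrono>
    #include <thread>
    #include <cstdio>

    // Minimal 1-kHz haptic servo loop (a sketch; a real device SDK would
    // supply the probe position and accept the commanded force).
    int main() {
        using clock = std::chrono::steady_clock;
        const auto period = std::chrono::microseconds(1000);   // 1,000 Hz
        const double kWall = 800.0;    // wall stiffness, N/m (hypothetical)
        double probeZ = -0.002;        // stand-in for a tracked probe, meters

        auto next = clock::now();
        for (int tick = 0; tick < 1000; ++tick) {              // one second
            // Penalty force: push back only while the probe penetrates
            // the virtual wall at z = 0.
            double force = (probeZ < 0.0) ? -kWall * probeZ : 0.0;
            if (tick % 250 == 0)
                std::printf("t=%dms force=%.2fN\n", tick, force);
            next += period;
            std::this_thread::sleep_until(next);               // hold the rate
        }
    }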

Make interaction comfortable, fast, and effective. Several options tackle the problem of making interaction better: improving the devices, minimizing system latency, maximizing frame rate, and scaling interaction techniques. We consider each in turn.

Improve interaction device technology. Any input device provides a way for humans to communicate with computer applications. A major distinction can be made, however, between traditional desktop input devices, such as the keyboard and mouse, and post-WIMP input devices for IVR applications. In general, traditional desktop input devices have both a level of predictability that users trust and good spatiotemporal resolution, accuracy, and repeatability. For example, when a user manipulates a mouse, hand motions typically correspond directly to cursor movement, and the control-to-display ratio (the ratio of physical mouse movement to mouse pointer movement) is set in a useful mapping from mouse to display.
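As a minimal illustration of the control-to-display ratio, the sketch below (our construction, with hypothetical gains) maps physical mouse counts to pointer pixels; desktop systems typically make the gain velocity-dependent ("pointer acceleration").

    #include <cstdio>

    // Control-to-display mapping: physical mouse counts -> pointer pixels.
    // A fixed gain is the simplest useful mapping.
    struct CDMapper {
        double gain;                                  // display px per mouse count
        double map(double mouseDelta) const { return gain * mouseDelta; }
    };

    int main() {
        CDMapper slow{0.5}, fast{2.0};                // hypothetical gains
        std::printf("10 counts -> %.1f px (slow), %.1f px (fast)\n",
                    slow.map(10), fast.map(10));
    }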

In contrast, many IVR input devices, such as 3D trackers, often exhibit chaotic behavior. They lack good spatiotemporal resolution, range, accuracy, and repeatability, not to mention their problems with noise, ergonomic comfort, and even safety (in the case of haptics). For example, a 2D mouse has good accuracy and repeatability regardless of whether the mouse pointer is at the center or at the edges of the screen, but a 3D mouse has adequate accuracy and repeatability only in the center of its tracking range, deteriorating as it moves towards the boundaries of this range. IVR input devices thus frequently have a level of unpredictability that makes them frustrating and difficult to use. Although the HiBall Tracker,40 originally developed at UNC and now available commercially from 3rdTech, has extremely low latency and high accuracy and update rates, it's too big for anything other than head tracking. Indeed, miniaturization of tracking devices is another important technological challenge.

Position and orientation tracking is a vital input technology for IVR applications because it lets users get the correct viewing perspective in response to head motion. In most IVR applications, the user's head and either one or both hands are tracked. However, to go beyond traditional IVR interaction techniques, we need to track very precisely other parts of the body such as the fingers (not just fingertips), the feet, pressure on the floor that changes as a user's weight shifts, gaze direction, and the user's center of mass. With the standard tracking solutions, the user would potentially have not just two or three tethers, but perhaps 10 to 15, which is completely unacceptable.

Another example of this tethering problem is the current IVR configuration at the Brown University Virtual Environment Navigation Lab. An HMD and wide-area tracking system (40 × 40 feet) enables scientists to perform IVR-based navigation, action, and perception experiments. An assistant must accompany the human participant to manage the tethered cables for HMD video and tracking. Although wireless tracking solutions are commercially available, for instance the Polhemus Star Trak, they have a limited working volume and require the user to somehow carry signal-transmitting electronics.

An alternate method of tracking employs computer vision-based techniques so that users can interact without wearing cumbersome devices. This type of tracking is commonly found in perceptual user interfaces (PUIs), which work towards the most natural and convenient interaction possible by making the interface aware of user identity, position, gaze, gesture, and, in the future, even affect and intent.41 Vision-based tracking has a number of drawbacks, however. Most notably, even with multiple cameras, occlusion is a major obstacle. For example, it's difficult to track finger positions when the hands are at certain orientations relative to the user's body. The form factor of an IVR environment can also play a role in vision-based tracking—consider using camera-based tracking in a six-sided cave without obstructing the visual projection. Vision-based tracking systems are commercially available (for example, the MotionAnalysis motion capture system), but they're mostly used in motion-capture applications for animation and video games. Like the wireless tracking technologies from Polhemus and Ascension, these vision-based systems are expensive, and they also require users to wear a body suit.

In addition to researching robust, accurate, and unobtrusive tracking that scientists can use easily, we must continue to develop other new and innovative interaction devices. Many input devices are designed as general-purpose devices, but, as with displays, one size doesn't fit all. We need to develop specific devices for specific tasks that leverage a user's learned skills. For example, flight simulators don't use instrumented gloves that sense joint angles or finger contact with a virtual cockpit, but instead recreate the exact physical interface of a real cockpit, which is manipulated with the hands. Physical hand-held props are also useful devices for some task- and application-specific interactions (see the sidebar "Interaction in Virtual Reality" for specific examples).

Another task-specific device is the Cubic Mouse,42 an input device designed for viewing and interacting with volumetric data sets. Intended to let users specify 3D coordinates intuitively, the device consists of a box with three perpendicular rods passing through the center and buttons for additional input. The rods represent the x-, y-, and z-axes of a given coordinate system. Pushing and pulling the rods specifies constrained motion along the corresponding axes. Embedded within the device is a 6DOF tracking sensor, which permits the rods to be continually aligned with a coordinate system located in a virtual world.

Other forms of interaction device technology still need improvement as well. For example, speech recognizers remain underdeveloped and suffer from accuracy and speed problems across a range of users. Motion platforms such as treadmills and terrain simulators are expensive, clumsy, and not yet satisfactory.

A main interface goal in IVR applications is to develop natural, human-like interaction. Multimodal interfaces, which combine different modalities such as speech and gesture, are one possible route to human-to-human style interaction. To create robust and powerful multimodal systems, we need more research on how multiple sensory channels can augment each other. In particular, we need better sensor fusion "unification" algorithms43 that can improve probabilistic recognition algorithms through mutual reinforcement.
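A minimal sketch of such mutual reinforcement (our construction, assuming conditionally independent recognizers so their scores can simply be multiplied, naive-Bayes style; the commands and scores are hypothetical):

    #include <cstdio>
    #include <map>
    #include <string>

    // Multiply per-modality likelihoods for each candidate command, then
    // renormalize. An ambiguous speech result can be disambiguated
    // ("reinforced") by a concurrent gesture, and vice versa.
    int main() {
        std::map<std::string, double> speech  = {{"delete", 0.45}, {"select", 0.55}};
        std::map<std::string, double> gesture = {{"delete", 0.80}, {"select", 0.20}};

        std::map<std::string, double> fused;
        double norm = 0.0;
        for (auto& [cmd, ps] : speech) {
            fused[cmd] = ps * gesture[cmd];
            norm += fused[cmd];
        }
        for (auto& [cmd, p] : fused)
            std::printf("%s: %.2f\n", cmd.c_str(), p / norm);
        // Speech alone favors "select"; the fused evidence favors "delete".
    }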

Minimize system latency. According to Brooks,1 "end-to-end system latency is still the most serious technical shortcoming of today's VR systems." IVR imposes far more stringent demands on end-to-end latency (including both command specification and completion) than a desktop environment. In the latter, the major concern is task performance, and the general rule of thumb is that between 0.1- and 1.0-second latency is acceptable.44 In IVR we're concerned not only with task performance but also with retaining the illusion of IVR—and more importantly, not causing fatigue, let alone cybersickness. It's useful to think of a hierarchy of latency bounds (a rough end-to-end budget check against these bounds is sketched after the list):

■ Some human subjects can notice latencies as small as 16 to 33 ms.45
■ Latency doesn't typically result in cybersickness until it exceeds a task- and user-dependent threshold. According to Kennedy et al., any delay over 35 ms can cause cue conflicts (such as visual/vestibular mismatch).46 Note that motion sickness and other discomforts related to cybersickness can occur even in zero-latency environments (like the real world, which effectively has no visual latency) because cue conflicts can result from factors other than latency.
■ For hand-eye motor control in navigation and object selection and manipulation, latency must remain less than 100 to 125 ms.46 For visual display, latency requirements are more stringent for head tracking than for tracking other body parts such as hands and feet. The HiBall tracking system40 uses Kalman filter-based prediction-correction algorithms to increase accuracy and to reduce latency.
■ Permissible latency until the result of an operation appears (as opposed to feedback while performing it) is a matter of the user's expectations and patience. For example, a response to a data query ranging from subseconds to a few seconds may be adequate. Conversely, if time-series data produced by sampling either simulation or observational data plays back too rapidly, the user may become confused. The problem can worsen if the original sampling rate is too low and temporal aliasing occurs. (For the purpose of discussion, we assume the time to render a query's result doesn't affect frame rate—not always the case in practice.)
■ Latency affects user performance nonlinearly; when latencies exceed 200 ms, user interaction strategies tend to change to "move and wait" from more continuous and fluid control.47
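The budget check promised above: this sketch (our construction; every per-stage delay is a hypothetical illustration) tallies the pipeline and compares the end-to-end total against the 35-ms cue-conflict and 100-ms hand-eye bounds cited in the list.

    #include <cstdio>

    // Sum illustrative per-stage delays and compare against the task
    // thresholds cited in the text.
    int main() {
        struct Stage { const char* name; double ms; };
        Stage pipeline[] = {
            {"tracker sampling + filtering", 8.0},
            {"application + simulation",     10.0},
            {"rendering (one 60-Hz frame)",  16.7},
            {"display scanout",              8.0},
        };
        double total = 0.0;
        for (auto& s : pipeline) {
            total += s.ms;
            std::printf("%-30s %5.1f ms\n", s.name, s.ms);
        }
        std::printf("end-to-end: %.1f ms\n", total);          // 42.7 ms here
        std::printf("cue-conflict bound (35 ms): %s\n", total <= 35.0  ? "met" : "missed");
        std::printf("hand-eye bound (100 ms):    %s\n", total <= 100.0 ? "met" : "missed");
        // No single stage is fatal, but the sum already exceeds 35 ms:
        // "the death of a thousand cuts."
    }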

In IVR the dominant criterion is that the overall system latency must fall below the cybersickness threshold. Latency per se doesn't cause cybersickness. However, in the context of IVR, where the coupling between body tracking (especially of the head) and display is so tight, latency can cause cybersickness. In addition, minimizing latency is important to make it easier for users to transition from performing tasks at the cognitive level to the perceptual level (that is, using muscle memory to perform tasks). Worse, in the context of variable latency, it's even more difficult to make this transition. It's still a significant research problem to identify and especially to minimize all the possible causes of latency at the tracking device, operating-system, application, and rendering levels48 for a local system, let alone for one whose components are networked, as discussed next.

Unfortunately, latency thus poses a system-wide problem in which no single cause is fatal in itself but the combination produces the equivalent of "the death of a thousand cuts." Singhal and Zyda49 pointed out that latency is one of the biggest problems with today's Internet, yet has received relatively little attention. Networks introduce uncontrollable, indeed potentially unpredictable, time delays. The causes are numerous, including travel time through routers to and from the network, through the network hardware, operating system, and into the application.

Research in modern network protocols deals with minimizing and guaranteeing maximum network latency, and other aspects of quality-of-service management, but clearly a lower bound exists on network latency due to the speed of light. Since latency in network transmission is inevitable, we need strategies for making the user experience acceptable. Some strategies for masking network latency include using separate time frames and filters for each participant with periodic resynchronization when more accurate information becomes available,50 and visual effects allowing immediate interaction with proxies at each location and subsequent resynchronization.51

Unfortunately, the networking research community isn't well aware of the needs of the real-time graphics community, let alone the far more demanding IVR community. Conversely, the graphics community is insufficiently aware of ongoing developments in networking. An urgent need exists to bring these communities together to come up with acceptable algorithms. However, no matter how good these networking algorithms turn out to be, some IVR tasks are clearly inappropriate for today's networks. Head tracking, for example, isn't a candidate for distributed processing.

Maximize frame rate to meet application needs. Closely related to latency is frame rate—the number of distinct frames generated per second. While most sources quote a rate of 10 Hz as the minimum for IVR, the game community has long known that for fluid interactivity and minimal fatigue 60 Hz is a more acceptable minimum. Worse, given the complexity of scenes that VR users want to generate and the fact that stereo automatically doubles rendering requirements, attaining an acceptable frame rate for real smoothness in interactivity will be an ongoing battle, even with faster hardware. While developers often switch to using lower resolutions in stereo to compensate for the extra computation time required to render left and right eye images, this is hardly a solution, since it compromises the already-inadequate resolution to gain frame rate.

The comparison with the requirements of video games also doesn't take into account the inordinate investment of time by game designers in constructing and tuning their geometry and behavior models to achieve high frame rates, an investment typically impossible for IVR applications. Nonetheless, some of the geometry simplification techniques used in the game community can transfer to IVR, such as level-of-detail management and view-dependent culling. These techniques were, in fact, first developed for IVR walkthroughs. We review them briefly below.

A distinction separates simulation frame update rate and visualization frame update rate. Many scientific simulations will probably never be fast enough to overwhelm the human visual system. However, animated sequences derived from a sequence of simulation time steps may need to be slowed down so that viewers don't miss important details. (Think of watching a slow-motion replay of a close play in a baseball game because the normal—or fastest—playing speed is inappropriate.)

Vogler and Metaxas52 showed that applications with a lot of motion (such as decoding American Sign Language) require more than a 60-Hz update frame rate to capture all the information. It's unlikely that many scientific visualization situations will require such a stringent update rate, since typically the scientist can control the dynamic visualization's speed (slow-motion playback or fast-forward). Of course, actually providing these capabilities in real time may be a formidable problem, given the large size of some data sets.

Finally, the frame rate and system latency often change over time in IVR applications, which can play a significant role in user performance. Watson et al.53 showed that for an average frame rate of 10 fps, 40 percent fluctuations in the frame rate about the mean can degrade user performance. However, the same experiment showed that for an average frame rate of 20 fps, no significant degradation occurred.

Scale up interaction techniques with data and model size. To combat the accelerating data crisis, we need not only scalable graphics and massive data storage and management systems, but also scalable interaction techniques so that users can view and manipulate the data. Existing interaction techniques don't scale well when users visualize massive data sets. For example, the state-of-the-art interaction techniques for selection, manipulation, and application control described in the sidebar "Interaction in Virtual Reality" are mostly proof-of-concept techniques designed for and tested only with small sets of objects. Techniques that work well with 100 objects in the scene won't necessarily work with 10,000, let alone 100,000. Consider, for example, selecting one out of 100 objects versus selecting one out of 100,000 objects. If culling techniques are used for object selection, some objects of interest may not even be visible, and some kind of global context map may be needed. Unfortunately, IVR remains in an early stage of development, and performance limitations are one of the main reasons that our interaction techniques are based on small problems. We must either modify existing interaction techniques or develop novel techniques for handling massive data sets.

In considering how we interact with data during an exploration, we must consider all the various steps in a typical process: sensing or computing the raw data, culling a subset to visualize, computing the presentation (the visualization itself), and the interaction techniques themselves. Below we discuss only the last three steps. A more in-depth discussion of managing gigabyte data sets in real time appears elsewhere.54 Here we discuss various techniques for cutting down the size of the data set, automatic feature extraction techniques, a class of algorithms known as time-critical computing that compute on a given time budget to guarantee a specified frame rate, and the high-performance demands of computing the visualization itself.

A standard approach to interacting with massive data sets is first to subset, cull, and summarize, then to use small-scale interaction techniques with the more manageable extracts from the original data. Probably our best examples of this type of approach are vehicle and building walkthroughs. These require a huge amount of effort in real-time culling and geometry simplification, but the interaction is fairly constrained. While the underlying geometry of, say, a large airplane or submarine may be defined in terms of multiple billions of polygons, only 100,000 to a million polygons may be visible from a particular point of view.

After semantically filtering the data (for example, displaying only HVAC or hydraulic subsystems), we must use a number of approximation techniques to best present the remaining geometry. Nearby geometry may be rendered using geometric simplification, typified by Hoppe's progressive meshes,55 while distant geometry may be approximated using a variety of image-based techniques, such as texture mapping and image warping. Objects or scenes with a high degree of fine detail approaching one polygon per pixel may benefit from point cloud or other volumetric rendering techniques. UNC's Massive Model Rendering System offers an excellent example of a system combining many of these techniques.56 Rendering may be further accelerated through hardware support for compression of the texture or geometric data57 to reduce both memory and bandwidth needs.

Similarly, a scientific data set can be reduced in size to allow visualization algorithms to operate on a relevant subset of the full data set if additional information can be added to the data structure. Two techniques for reducing the data set on which visualization algorithms operate are

■ precomputation of data ranges via indices so that pieces can quickly be identified at runtime, and
■ spatial partitioning for view-dependent culling (a minimal sketch follows this list).
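The culling sketch promised above is our construction: a small bounding-volume hierarchy whose nodes record only index ranges into the data, so the cells under a culled subtree never need to be fetched. The single-plane visibility test stands in for a full six-plane frustum test.

    #include <cstdio>
    #include <vector>

    struct AABB { float lo[3], hi[3]; };
    // Each node covers a contiguous range of data cells; child = -1 marks a leaf.
    struct Node { AABB box; int firstCell, cellCount; int child[2]; };

    // Stand-in visibility test: keep boxes reaching past x = frustumMinX.
    bool visible(const AABB& b, float frustumMinX) { return b.hi[0] >= frustumMinX; }

    void collect(const std::vector<Node>& tree, int n, float fx, std::vector<int>& out) {
        const Node& node = tree[n];
        if (!visible(node.box, fx)) return;          // whole subtree culled
        if (node.child[0] < 0) { out.push_back(n); return; }
        collect(tree, node.child[0], fx, out);
        collect(tree, node.child[1], fx, out);
    }

    int main() {
        std::vector<Node> tree = {
            {{{0,0,0},{10,1,1}}, 0, 1000, {1, 2}},   // root covers cells 0..999
            {{{0,0,0},{ 5,1,1}}, 0,  500, {-1,-1}},  // left half
            {{{5,0,0},{10,1,1}}, 500,500, {-1,-1}},  // right half
        };
        std::vector<int> vis;
        collect(tree, 0, 6.0f, vis);                 // frustum sees only x >= 6
        std::printf("visible leaves: %zu\n", vis.size());   // -> 1 of 2
    }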

Data compression and multiresolution techniques are also commonly used for reducing the size of data sets. However, many compression techniques are lossy in nature; that is, the original data cannot be reconstructed from the compressed format. Although lossy techniques have been applied to scientific data, many scientists, especially in the medical field, remain skeptical—they fear losing potentially important information.

Bryson et al. pointed out that there's little point in interactively exploring a data set when a precise description of a feature can be used to extract it algorithmically.54 Feature extraction tends to reduce data size because lower-level data is transformed into higher-level information. Some examples of higher-level features that can be automatically detected in computational fluid dynamics data are vortex cores,58 shocks,59 flow separation and attachment,60,61 and recirculation.62 We need many more techniques like these to help shift the burden of pattern recognition from human to machine and move toward a more productive human-machine partnership.

Time-critical computing (TCC) algorithms also prove useful for interacting with large-scale data sets.63 TCC techniques64,65,54 guarantee some result within a time budget while meeting user-specified constraints. A scheduling algorithm balances the cost and benefit of alternatives from the parameter space (such as time budget per object, algorithm sophistication, level of detail, and so forth) to provide the best result in a given time. TCC suits many situations; here we discuss how visualization techniques can benefit from TCC.

Two important challenges face TCC: determining the time budgets for the various visualizations—a type of scheduling problem—and deciding which visualization algorithms with which parameter settings best meet that budget. In traditional 3D graphics, time budgets often depend on the position and size of an object on the screen.63 In scientific visualization applications, the traditional approach doesn't work, since it can take substantial time to compute the visualization object's position and size on the screen, thereby defeating the purpose of the time-critical approach. A way around this problem is to have equal time budgets for all visualization objects, possibly augmented by additional information such as explicit object priorities provided by the user.
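A minimal sketch of this equal-budget-plus-priorities policy (our construction; the object names, per-level costs, and the 50-ms frame budget are hypothetical):

    #include <cstdio>
    #include <vector>

    // Split the frame's compute budget across visualization objects in
    // proportion to user-assigned priority, then let each object pick the
    // richest quality level it can afford.
    struct VisObject {
        const char* name;
        double priority;                 // user-supplied weight
        double costPerLevel;             // ms per extra quality level (assumed)
    };

    int main() {
        const double frameBudgetMs = 50.0;           // e.g., aiming near 20 fps
        std::vector<VisObject> objs = {
            {"isosurface",  2.0, 8.0},
            {"streamlines", 1.0, 3.0},
            {"cut plane",   1.0, 1.5},
        };
        double wsum = 0;
        for (auto& o : objs) wsum += o.priority;
        for (auto& o : objs) {
            double share = frameBudgetMs * o.priority / wsum;
            int level = static_cast<int>(share / o.costPerLevel);  // affordable quality
            std::printf("%-11s budget %5.1f ms -> quality level %d\n",
                        o.name, share, level);
        }
    }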

Accuracy versus speed tradeoffs can also help in computing visualization objects within the time budget. Of course, sacrificing accuracy to meet the time budget introduces errors. Unfortunately, we have an insufficient understanding of quality error metrics that would allow us to demonstrate that no significant scientific information has been lost. Metrics such as image quality used for traditional graphics don't transfer to scientific data.54

Transforming data into a visualization takes visualization algorithms that depend on multiple resources. In many cases, the time required to create visualizations creates a significant bottleneck. For example, an analysis of the resources used by visualization algorithms reveals that many techniques (such as streamlines, isosurfaces, and more sophisticated algorithms) can result in severe demands on computation and data access that can't be handled in real time. These resources include many components—raw computation, memory, communication, data queries, and display capabilities, among others.

To help illustrate the situation, consider part of Bryson's 1996 analysis11 based on just the floating-point operation capabilities of a typical 1996 high-performance workstation. He showed that you could expect 25 streamlines to be computed in one tenth of a second. Bryson's assumptions for this result were instantaneous data set access, second-order Runge-Kutta integration, about 200 floating-point operations per integration step of a streamline, a 20-megaflop machine, and that most systems deliver about half their rated performance. Today, the same calculation scaled up by Moore's Law would yield about 130 streamlines. However, when you consider the cost of data access and today's very large data sets, these performance figures drop substantially to perhaps tens of streamlines, or even fractions of a single streamline, depending on the data complexity and algorithms used. We need a balanced, high-performance computational system in conjunction with efficient algorithms, with both designed to support interactive exploration.
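The arithmetic is easy to check. In the sketch below (our construction) the 200-step streamline length is inferred so that Bryson's quoted 25 streamlines comes out, and the Moore's Law scaling assumes performance doubling every 18 months over four years, which lands in the same ballpark as the roughly 130 streamlines quoted above.

    #include <cmath>
    #include <cstdio>

    // Reproduce Bryson's 1996 streamline budget and scale it forward.
    int main() {
        const double ratedFlops   = 20e6;    // 20-megaflop workstation
        const double efficiency   = 0.5;     // ~half of rated performance
        const double flopsPerStep = 200.0;   // 2nd-order Runge-Kutta step
        const double stepsPerLine = 200.0;   // inferred streamline length
        const double budgetSec    = 0.1;     // one tenth of a second

        double lines1996 = ratedFlops * efficiency * budgetSec
                           / (flopsPerStep * stepsPerLine);     // -> 25
        double mooreGain = std::pow(2.0, 4.0 / 1.5);            // 1996 -> 2000
        std::printf("1996: %.0f streamlines; 2000: ~%.0f\n",
                    lines1996, lines1996 * mooreGain);          // 25, ~160
    }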

Many approaches to large data management for visualizing large data sets operate on the output of a simulation. An alternative approach would investigate how ideas used to produce the data (that is, ideas in the simulation community) might be leveraged by the visualization community. At Brown University we're exploring the use of spectral element methods66 as a data structure for both simulation and visualization. Their natural hierarchical and continuous representation is indeed a desirable trait for visualization of large data sets. Nonetheless, we see significant research issues in discovering how best to use them in visualization, minimizing the loss of important detail and accuracy, and visualizing high-order representations.

Spectral elements combine finite elements, which provide discretization flexibility on complex geometries, and spectral methods using higher-order basis functions, which provide excellent convergence properties. A key benefit of spectral elements is that they support two forms of refinement: changing the size and number of elements, and changing the order of polynomial expansion in a region. Both can occur while the simulation is running. This attribute can be leveraged to perform simulation steering and hierarchical visualization. As in the use of wavelets, you could choose the necessary level of summarization or zoom into a region of interest; the sketch below illustrates order truncation as such a knob. Thus you could change the refinement levels to increase resolution only where needed while running a simulation.
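As a minimal illustration of order-of-expansion (p-type) summarization, the sketch below (our construction, with hypothetical modal coefficients) evaluates a 1D Legendre expansion truncated to successively higher orders; lower orders give a cheaper, smoother summary of the same element.

    #include <cstdio>

    // Legendre polynomial by the usual three-term recurrence.
    double legendre(int n, double x) {
        double p0 = 1.0, p1 = x;
        if (n == 0) return p0;
        for (int k = 2; k <= n; ++k) {
            double pk = ((2*k - 1) * x * p1 - (k - 1) * p0) / k;
            p0 = p1; p1 = pk;
        }
        return p1;
    }

    // Evaluate a modal expansion truncated to order p: lower p yields a
    // coarser but cheaper summary of the element's data.
    double evalTruncated(const double* coeff, int pMax, int p, double x) {
        double sum = 0.0;
        for (int k = 0; k <= p && k <= pMax; ++k) sum += coeff[k] * legendre(k, x);
        return sum;
    }

    int main() {
        const double coeff[] = {0.8, 0.5, 0.3, 0.12, 0.05};  // hypothetical modes
        const int orders[] = {1, 2, 4};
        for (int p : orders)
            std::printf("order %d: f(0.7) = %.4f\n", p,
                        evalTruncated(coeff, 4, p, 0.7));
    }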

Using scalable interfaces means not just interacting with more objects, but also interacting with them over a longer period of time. Small enough problems or a reduced-complexity version of a large problem (with a coarser spatio-temporal resolution) may run in real time, but larger simulations won't be able to engage the user continuously. One interesting aspect of the system proposed by de St. Germain et al.67 is the "detachable" user interface, which lets the user connect to a long-running simulation to monitor or steer it. This type of interface allows the user to dynamically connect to and disconnect from computations for massive batch jobs. This system also lets multiple scientists steer disjoint parts of the simulation simultaneously.

Facilitate telecollaboration. The problems of interaction multiply when multiple participants interact with a shared data set and with one another over a network. Such collaboration—increasingly a hallmark of modern science and engineering—is poorly supported by existing software. The considerable literature in computer-supported collaborative work primarily concerns various forms of teleconferencing, shared 2D whiteboards, and shared productivity applications; very little literature deals with the special problems of 3D, let alone of teleimmersion, which remains embryonic. Singhal and Zyda49 detailed many of the network-related challenges involved in this area.

An interesting question is how participants share an immersive 3D space, especially one that may incorporate sections of their physical offices acquired and reconstructed via real-time vision algorithms.26 A special issue in representing remote scenes is the handling of participant avatars—an active area of research, particularly for the subcase of facial capture. Participants want to see not just their collaborators, but especially their manipulation of the data objects. This may involve avatars for hands, virtual manipulation widgets, explanatory voice and audio, and so on. As in 2D, issues of "floor control" arise and are exacerbated by network-induced latency. Collaborative telehaptics present extremely difficult problems, especially because of latency considerations, and have scarcely been broached.

Other interesting questions involve how to handle annotation and metadata so that they're available and yet don't clutter up the scene, and how to record visualization sessions for later playback and study. In addition to the common interaction space, participants must also have their own private spaces where they can take notes, sketch, and so on, shielded from other participants. In some instances, small subgroups of participants will want the ability to share their private spaces.

An excellent start on many of these questions appears in Haase's work,16 and in the Cave6D and TIDE (Tele-Immersive Data Explorer)68 projects at EVL. In particular, TIDE has demonstrated distributed interactive exploration and visualization of large scientific data sets. It combines a centralized collaboration and data-storage model with multiple processes designed to allow researchers all over the world to collaborate in an interactive and immersive environment. In Sawant et al.68 the researchers identified annotation of data sets, persistence (to allow asynchronous sessions), time-dependent data visualization, multiple simultaneous sessions, and visualization comparison as immediate research targets.

In a telecollaboration setting with geographically distributed participants, one strategy Bryson and others adopt is that architectures must not move raw data, but instead move extracts. Extracts are the transmitted visual representation of a structure computed from a separate computation "close to the raw data." In other words, raw data doesn't move from the source (such as a running simulation, processor memory, or disk); rather, the result of a visualization algorithm is transmitted to one or more recipients. The extracts strategy also provides a mechanism for decoupling the computation of visualization objects and rendering, which is essentially a requirement for interactive visualization of very large data sets.


The visualization server the US Department of Energy (DOE) expects to site at Lawrence Livermore, attached to the 14 teraop "ASCI White" machine, offers an example of the use of extracts. This server will compute extracts for all users, but do rendering only for local users. One option being explored for remote visualization is to do some form of partial rendering, such as producing RGBZ images or multilayered images that can be warped in a final rendering step at the remote site. Such image-based rendering techniques can effectively mask the latency of the wide-area network and provide smooth real-time rotation of objects, even if the server or network can handle only a few frames per second.
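A sketch of the RGBZ idea follows. The pinhole camera model, the function name, and the forward-splat warp are illustrative assumptions (depth is taken as positive everywhere); a production warper would also fill holes and composite multiple layers.

import numpy as np

def warp_rgbz(rgb, depth, K, delta_pose):
    """Reproject an RGBZ image into a nearby viewpoint.
    rgb: (H, W, 3) color, depth: (H, W) positive depths, K: 3x3 intrinsics,
    delta_pose: 4x4 rigid transform from the old to the new camera frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Unproject each pixel to a 3D point in the old camera frame.
    pts = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    pts = pts * depth.reshape(1, -1)
    pts = delta_pose @ np.vstack([pts, np.ones((1, pts.shape[1]))])
    # Reproject into the new view.
    proj = K @ pts[:3]
    uu = (proj[0] / proj[2]).round().astype(int)
    vv = (proj[1] / proj[2]).round().astype(int)
    out = np.zeros_like(rgb)
    ok = (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H) & (proj[2] > 0)
    out[vv[ok], uu[ok]] = rgb.reshape(-1, 3)[ok]   # forward splat; holes remain
    return out

Between the few frames per second the server delivers, the client can rerun the warp with the latest head pose, which is what keeps rotation smooth despite network latency.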

Cope with lack of standards and interoperability. A Tower of Babel problem afflicts not just graphics but especially the virtual environments community—a situation about which Jaron Lanier, a pioneer of modern IVR, is vocal. Lanier laments that the job of building virtual environments, already hard enough, is made even harder by the lack of interoperability—no mechanism yet exists by which virtual worlds built in different environments can interoperate.

Dozens of IVR development systems exist or are currently under development at the OpenGL and scene-graph level. Unlike the early 1990s, when the most high-performance and robust software was available only commercially and ran on expensive workstations, currently multiple open-source efforts are available for many platforms. Nevertheless, the existence of so many options addressing similar problems hinders progress. In particular, code reuse and interoperability prove very difficult.

This is a general graphics problem, not just an IVR problem. The field has been plagued since its inception by the lack of a single, standard, cross-platform graphics library that would permit interoperability of graphics application programs. Among the proto-standards for interactive 3D graphics are the oldest and most established SGI-led OGL, as well as newer packages such as Microsoft's DirectX, Sun's Java3D, Sense 8's World ToolKit, W3C's X3D (previously called VRML), and UVA/CMU's Alice. Only a few of these support IVR, and many only the bare essentials, such as projection matrices for a single-pipe display engine. Multiple efforts may be the only path in the short term to find a best of breed, but ultimately a concerted effort is needed.

Interoperability is especially important in immersive telecollaboration, in which collaborators share a virtual space populated potentially by their own application objects and avatars. The problem isn't just one of sharing geometry; it also concerns interoperability of behaviors and especially interaction techniques, intrinsically very difficult problems.

One simple but partial solution conceived by Lanier and implemented under sponsorship of Advanced Network & Services, whose teleimmersion project he directs, is Scene Graph as Bus (SGAB).69 This framework permits applications written to different scene graphs to interoperate over a network. This approach also captures behavior and interaction because object attribute modifications such as transformation changes are observed and distributed on the bus.
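The toy code below illustrates the pattern, not the actual SGAB implementation: attribute writes on a local scene-graph node are observed and republished on a bus, and a remote peer applies them to its own, independently built scene graph. All names are our assumptions.

class Bus:
    """Distributes attribute-change events to every connected scene graph."""
    def __init__(self):
        self.listeners = []
    def publish(self, node_id, attr, value):
        for listener in self.listeners:
            listener(node_id, attr, value)

class Node:
    """A scene-graph node whose attribute writes are observed and put on the bus."""
    def __init__(self, node_id, bus):
        object.__setattr__(self, "_meta", (node_id, bus))   # skip publishing setup
        object.__setattr__(self, "transform", None)
    def __setattr__(self, attr, value):
        object.__setattr__(self, attr, value)
        node_id, bus = self._meta
        bus.publish(node_id, attr, value)                   # observed modification

bus = Bus()
# A remote peer mirrors changes into its own, independently designed structure.
remote_graph = {}
bus.listeners.append(
    lambda nid, a, v: remote_graph.setdefault(nid, {}).update({a: v}))

local = Node("molecule-42", bus)
local.transform = "rotate 30 degrees about y"   # change is distributed on the bus
print(remote_graph)                             # the replica has the new transform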

Other systems have been built to allow interoperability of 3D applications, but most assume homogeneous client software70 or translate only between two scene graphs (without the generality of SGAB). The Distributed Interactive Simulation (DIS)71 and High-Level Architecture (HLA)72 standards, for example, make possible cooperation between heterogeneous clients, but only as long as they follow a set of network protocols. The SGAB approach instead attempts to bridge the informational gap between independently designed stand-alone systems, with minimal or no modification to those systems. More research needs to be done to create interoperability mechanisms as long as no single networked, component-based development and runtime environment standard is in sight.

For scientific visualization in IVR, rendering and interaction libraries are only the beginning. A whole layer of more discipline-specific software must be built on top. A number of such scientific visualization toolkits exist or are under development (such as VTK,73 SCIRun, EnSight Gold, AVS, Open DX, Open RM Scene Graph). As with lower-level libraries, no interoperability exists between these separate software development and delivery platforms, and users must choose one or another. In some cases, conversion routines may be used to link individual packages, but this solution becomes inefficient as problem size increases.

One group beginning to address the problem, the Common Component Architecture Forum (http://www.acl.lanl.gov/cca-forum), plans to define a common-component software architecture approach to allow more interchanging of modules. While not yet focused on visualization, the group has explored how the same approaches can apply to visualization software architectures.

Develop a new design discipline and validation methodology. IVR differs sufficiently from conventional 3D desktop graphics in terms both of output and interaction that it must be considered not just an extension of what has come before but a medium in its own right. Furthermore, as a new medium it requires new metrics and evaluation methodologies to determine which techniques are effective and why.

Create a new design discipline for a fundamentally new medium. In IVR, all the conventional tasks (navigation—travel for the motor component and wayfinding for the cognitive component—object identification, selection, manipulation, and so on) typically aren't simple extensions of their counterparts for the 2D desktop—they require their own idioms. Programming for the new medium is correspondingly far more complex and requires a grounding in computer, display, audio, and possibly haptics hardware and interaction device technology, as well as component-based software engineering and domain-specific application programming.

In addition, while IVR may provide an experience remarkably similar to the real world, it differs from the real world in significant ways:


■ Cybersickness is similar to but not the same as motion sickness, and its causes and cures are different.

■ Avatars don’t look or behave like human beings(extending the message of the classic cartoon “On theInternet, no one knows you’re a dog”).

■ The laws of Newtonian physics may be augmented or even overturned by the laws of cartoon physics.

■ Interaction isn’t the same. It’s both less (for example,haptic feedback is wholly inadequate ) and more,through the suitable use of magic (for example, nav-igation and object manipulation can mimic, extend,or replace comparable real-world actions).

■ Application designers may juxtapose real and fictional entities.

In short, IVR creates a computer-generated world with all the power and limitations that implies.

Furthermore, because so much of effective IVR deals with impedance-matching the human sensorium, designers must have a far better understanding of perceptual, cognitive, and even social science (such as small group interaction for telecollaboration) than the traditionally trained computer scientist or engineer. It's especially important to know what humans are and aren't good at in executing motor, perceptual, and cognitive tasks. For example, evidence indicates that humans have one visual system for pattern recognition and a separate one for the hand-eye coordination involved in motor skills. Interactive visualization techniques must be designed to take into account, if not take advantage of, these separate processing channels.

Finally, since IVR focuses on providing a rich, multisensory experience with its own idioms, we can learn much from other design and communication disciplines that aim to create such experiences. These include print media (books and magazines), entertainment (theater, film and video, and computer games), and design (architectural, user interface, and Web page).

Prove immersion’s effectiveness for scientific visualiza-tion applications and tasks. One of the most importantresearch challenges is proving, for certain types of sci-entific visualization applications and tasks, that IVR pro-vides a better medium for scientific visualization thantraditional nonimmersive approaches. Unfortunately,this is a daunting task, and very little formal experi-mentation has tested whether IVR is effective for scien-tific visualization.

Application-level experimentation is extremely difficult to do for a number of reasons. Experimental subjects must be both appropriately familiar with the domain and the tasks to be performed, and willing to devote sufficient time to the experiment. Simple, single-parameter studies not related to application tasks aren't necessarily meaningful. It's also difficult to define meaningful tasks useful to multiple application areas. From a technological standpoint, the hardware and software may still be too primitive and flaky to establish proper controls for the experiments. Finally, the cost in manpower and money of doing controlled experiments is large, and funding for such studies is lacking.

Although some anecdotal evidence from informal user evaluations and demos supports IVR's effectiveness for certain tasks and applications in scientific visualization, the literature contains only a few examples of quantified user studies in these areas, with mixed results. For example, Arns et al.74 showed that users perform better on statistical data visualization tasks, such as cluster analysis, in an IVR environment than on a traditional desktop. However, users had a more difficult time performing interaction tasks in an IVR environment. Salzman et al.75 showed that students were better able to define electrostatics concepts after their lessons in an IVR application than in a traditional 2D desktop learning environment. However, retention tests for these concepts after a five-month period showed no significant difference between the desktop and IVR environments. Lin et al.17 showed that users preferred visualizations of geoscientific data in an IVR environment but also found the IVR interaction techniques less effective than desktop techniques. Although this study presented statistical results based on questionnaires, it collected no speed or accuracy data, and only a handful of the human participants actually used the system during the experiment.

Hix et al.76 showed how to use heuristic, formative, and summative evaluations to iteratively design a battlefield visualization VE. Although they didn't try to show that the VE was better than traditional approaches to battlefield visualization, their work was significant in being among the first on developing 3D user interfaces using a structured, user-centered design approach. Salzman et al.75 used similar user-centered design approaches for their IVR application development.

Challenges in scientific (and information) visualization

To address the challenges facing scientific and information visualization, we can pull from the knowledge in other fields, specifically art, perceptual psychology, and artificial intelligence.

Develop art-motivated, multidimensional, visual representations. The size and complexity of scientific data sets continues to increase, making more critical the need for visual representations that can encode more information simultaneously. In particular, inherently multivalued data, like the examples in the sidebar "Art-Motivated Textures for Displaying Multivalued Data," can in many cases be viewed only one value at a time. Often, correlations among the many values in such data are best discovered through human exploration because of our visual system's expertise in finding visual patterns and anomalies.

Experience from art and perceptual psychology has the potential to inspire new, more effective, visual representations. Over several centuries, artists have evolved a tradition of techniques to create visual representations for particular communication goals. Inspiration from painting, sculpture, drawing, and graphic design all show potential applicability. The 2D painting-motivated example in the "Art-Motivated Textures" sidebar offers one example. That work is being extended to show multivalued data on surfaces in 3D. These methods use relatively small, discrete, iconic "brush strokes" layered over one another to represent up to nine values at each point in an image of a 2D data set.77,78
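As a rough indication of how such an encoding might be organized, the sketch below assigns each data value at a grid point to a different visual attribute of strokes on different layers. The particular assignments and all names are our illustration, not the exact mapping of Kirby et al.77

import math

def strokes_for_point(x, y, velocity, vorticity, strain, charge, current):
    """Return one stroke descriptor per layer for a single grid point."""
    vx, vy = velocity
    return [
        # Layer 1 (underpainting): hue encodes the sign and size of turbulent
        # charge; saturation encodes vorticity magnitude.
        {"layer": 1, "x": x, "y": y, "shape": "ellipse",
         "hue": 0.66 if charge < 0 else 0.0,
         "saturation": min(1.0, abs(vorticity)),
         "size": 4 + 2 * abs(charge), "alpha": 0.6},
        # Layer 2: arrow strokes follow the velocity field; length encodes
        # speed and width encodes the rate of strain.
        {"layer": 2, "x": x, "y": y, "shape": "arrow",
         "angle": math.atan2(vy, vx),
         "length": 6 * math.hypot(vx, vy),
         "width": 1 + 2 * strain, "alpha": 0.9},
        # Layer 3: sparse transparent ticks whose density encodes turbulent
        # current, legible mainly on close viewing.
        {"layer": 3, "x": x, "y": y, "shape": "tick",
         "density": abs(current), "alpha": 0.3},
    ]

# One point of a multivalued flow field (made-up numbers).
print(strokes_for_point(10, 20, velocity=(0.8, 0.2),
                        vorticity=0.1, strain=0.4, charge=-0.7, current=0.3))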

Applying these ideas to map multivalued data onto surfaces in 3D comes up against some barriers. First, parts of surfaces may face away from a viewpoint or be obscured by other surfaces.


Art-Motivated Textures for Displaying Multivalued Data
David H. Laidlaw

Concepts from oil painting and other arts can be applied to enhance information representation in scientific visualizations. This sidebar describes some examples developed in 2D. I'm currently extending this work to curved surfaces in 3D immersive environments and facing many of the challenges described in the article.

Examples of methods developed for displaying multivalued 2D fluid flow images1 appear in Figure J, and 2D slices of tensor-valued biological images2 in Figure K. While the images don't look like oil paintings, concepts motivated by the study of painting, art, and art history were directly applied in creating them. Further ideas gathered from the centuries of artists' experience have the potential to revolutionize visualization.

The images are composited from layers of small icons analogous to brush strokes. The many potential visual attributes of such strokes—size, shape, orientation, colors, texture, density, and so on—can all represent components of the data. The use of multiple partially transparent layers further increases the information capacity of the medium. In a sense, an image can become several images when viewed from different distances. The approach can also introduce a temporal aspect into still images, using visual cues that become visible more or less quickly to direct a viewer through the temporal cognitive process of understanding the relationships among the data components.

Figure J, created through a collaboration with R. Michael Kirby and H. Marmanis at Brown, shows simulated 2D flow around a cylinder at Reynolds number 100. The quantities displayed include two newly derived hydrodynamic quantities—turbulent current and turbulent charge—as well as three traditional flow quantities—velocity, vorticity, and rate of strain. Visualizing all values simultaneously gives a context for relating the different flow quantities to one another in a search for new physical insights.

Figure K, created through a collaboration with Eric Ahrens, Russ Jacobs, and David Kremers at Caltech, shows a section through a mouse spinal cord. At each point a measurement of the rate of diffusion of water gives clues about the microstructure of the underlying tissues. The image simultaneously displays the six interrelated values that make up the tensor at each point.

Extending these ideas to display information on surfaces in 3D has the potential to increase the visual content of images in IVR.

References
1. R.M. Kirby, H. Marmanis, and D.H. Laidlaw, "Visualizing Multivalued Data from 2D Incompressible Flows Using Concepts from Painting," Proc. IEEE Visualization 99, ACM Press, New York, Oct. 1999, pp. 333-340.
2. D.H. Laidlaw, K.W. Fleischer, and A.H. Barr, "Partial-Volume Bayesian Classification of Material Mixtures in MR Volume Data Using Voxel Histograms," IEEE Trans. on Medical Imaging, Vol. 17, No. 1, Feb. 1998, pp. 74-86.


Figure J. Using art-motivated methods to display multivalued 2D fluid flow around a cylinder at Reynolds number 100. Shown at each point are velocity, vorticity, the rate-of-strain tensor, turbulent charge, and turbulent current.

Figure K. 3D tensor-valued data from a slice of a mouse spinal cord. The visualization methods show six interrelated data values at each point of the slice.


In an interactive environment, moving an object around can alleviate this problem, but doesn't eliminate it. Second, and perhaps more fundamental, the human visual system can misinterpret the visual properties that represent data values.

Many visual cues can be used to map data. Some of the most obvious are color and texture. Within texture, density, opacity, and contrast can often be distinguished independently. At a finer level of detail, texture can consist of more detailed shapes that can convey information. What makes the problem complex is that the human visual system takes cues about the shape and motion of objects from changes in the texture and color of surfaces. For example, the shading of an object gives cues about its shape. Therefore, data values mapped onto the brightness of a surface may be misinterpreted as an indication of its shape.

Just as brightness cues from shading are wrongly interpreted as shape information, the visual system uses the appearance and motion of texture to infer shape and motion properties of an object. Consider a surface covered with a uniform texture. The more oblique the view of the surface, the more compressed the texture appears. The human visual system is tuned to interpret that change in the density of the texture as an indication of the angle of the surface. Thus any data value mapped onto the density of a texture may be misinterpreted as an indication of its orientation. Texture is also analyzed visually to infer the motion of an object. These shape and motion cues are important both for understanding objects and for navigating through a virtual world, so confounding their interpretation by mapping data values to them carries a risk.

The visual system already "knows" how to interpret the visual attributes that we "know" how to map our data onto. Unfortunately, the interpretation doesn't match our intent, and the results are ambiguous. Avoiding these ambiguities requires an understanding of perception and perceptual cues as well as how the cues combine and when they can be discounted. Because stereo and motion are the primary cues for shape, perhaps shading can be overloaded with a different interpretation. Only on a task-by-task basis can hypotheses like this be evaluated.
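One simple tactic consistent with this caution, offered here as our own illustration rather than a technique prescribed in the text, is to encode a data value in hue at roughly constant lightness, reserving brightness for shading. The sketch below uses the standard-library colorsys module; the function names and the particular hue range are assumptions.

import colorsys

def isoluminant_color(t, lightness=0.6):
    """Map t in [0, 1] to an RGB color that varies only in hue, holding HLS
    lightness fixed (roughly isoluminant) so brightness stays uncontaminated."""
    hue = 0.66 * (1.0 - t)              # blue at t=0 through red at t=1
    return colorsys.hls_to_rgb(hue, lightness, 0.9)

def shaded_pixel(data_value, lambert_shading):
    """Hue carries the data; brightness carries the (Lambertian) shading."""
    r, g, b = isoluminant_color(data_value)
    return (r * lambert_shading, g * lambert_shading, b * lambert_shading)

print(shaded_pixel(0.25, lambert_shading=0.8))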

A third barrier to representing multivalued data with textures is the difficulty of defining and rendering meaningful textures on arbitrary surfaces. Interrante79 and others have made some excellent progress here, but many unsolved problems in representing details of surface appearance efficiently remain.

Learn from perceptual psychology. Perceptual psychology has the potential to yield important knowledge for scientific visualization problems. Historically, the two disciplines of art history and perceptual psychology have approached the human visual system from different perspectives. Art history provides a phenomenological view of art—painting X evokes response Y—but doesn't consider the perceptual and cognitive processes underlying the responses. Perceptual psychology investigates how humans understand those visual representations. A gap separates art and perceptual psychology: no one knows how humans combine the visual inputs they receive to arrive at their responses to art.

Shape, shading, edges, color, texture, motion, and interaction are all components of an interactive visualization. How do they interact? And how can they most effectively be deployed for a particular scientific task? We need to look to perceptual psychology for lessons on the effectiveness of our visualization methods. Evaluating this is difficult, because not only are the goals difficult to define and codify, but tests that evaluate them meaningfully are difficult to design and execute. These evaluations are akin to evaluating how the human perceptual system works. Visual psychophysics can help to understand how an observer interprets and acts upon visual information,80 as well as how an observer combines different sources of information into coherent percepts.81

The experience of perceptual psychologists in designing experiments has much to offer, and the study of perception provides several clear directions for data visualization. First, a better understanding of how visual information is perceived will allow the creation of more effective displays. Second, modeling how different types of information combine to create structures will allow the presentation of complex multivalued data. Third, psychophysical methods can be used to assess objectively the effectiveness of different visualization techniques.

Perceptual psychology can also help us to understand limitations of IVR environments and the impacts of those limits. For example, cybersickness is believed to arise from conflicting perceptual cues—our visual system perceives us as moving in one way, but the vestibular system in our inner ears senses no motion. IVR systems today include many causes of discrepancies like this. One is the time lag in visual feedback due to input, rendering, and output latencies. A second is the disagreement between where our eyes focus (on the IVR projection surface) and where a virtual object is located (usually not on the surface).

Visualized data almost always includes some amount of error. IVR's stringent update and latency constraints force the issue of accuracy tradeoffs. A thorough understanding of error is critical to a theory of techniques for scientific visualization. Pang et al.82 reported that the common underlying problem is visually mapping data and uncertainty together into a holistic view. There is an inherent difficulty in defining, characterizing, and controlling the uncertainty in the visualization process, coupled with a lack of methods that effectively present uncertainty and data.
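As one deliberately simple instance of mapping data and uncertainty together into a single view, the sketch below lets the data value pick a hue while the uncertainty desaturates and fades the color, so unreliable regions literally look washed out. The specific mapping is our illustration, not one drawn from Pang et al.'s report.

import colorsys

def value_with_uncertainty(value, uncertainty):
    """Map value and uncertainty (both in [0, 1]) to an RGBA color: the value
    picks the hue, the uncertainty grays and fades it."""
    hue = 0.66 * (1.0 - value)            # blue = low value, red = high
    saturation = 1.0 - uncertainty        # uncertain regions go gray
    r, g, b = colorsys.hls_to_rgb(hue, 0.5, saturation)
    alpha = 1.0 - 0.7 * uncertainty       # and become more transparent
    return (r, g, b, alpha)

print(value_with_uncertainty(0.9, 0.1))   # confident high value: vivid
print(value_with_uncertainty(0.9, 0.8))   # same value, uncertain: washed out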

Transcend human limitations with scalable AI techniques. As technology improves, displays will approach the limits of human visual bandwidth, and data sets will become so large that even with the best resolution and tools we can never look at more than a tiny fraction of them. Even after we finish leveraging and impedance-matching the human visual system optimally, we're still going to hit a brick wall. We need intelligent software to perform human-like pattern recognition tasks on massive data sets to winnow the potential areas of interest for further human study. We lump such intelligent software under the title of AI. While many researchers remain skeptical of AI's track record, enthusiasts such as Ray Kurzweil83 feel that the expected exponential increase in computational power and algorithms together will make significant advances possible—a bet that's clearly fueling data mining projects, for example.

As Don Middleton of the National Center for Atmospheric Research (NCAR) wrote in an e-mail,

It's not clear the commercial marketplace or the community efforts currently under way will address the visualization and analysis requirements of the largest problems—the terascale problems. It's my own belief that by mid-decade we'll need to be looking very seriously at quasi-intelligent and autonomous analysis agents that can sift through the data volumes on behalf of the researcher.

These kinds of techniques are precisely the focus of the Intelligent Data Understanding component of NASA's new Intelligent Systems Program (http://ic.arc.nasa.gov/ic/nra/).

Scaling, which has always been a problem in adapting AI techniques to real-world needs, is a particularly crucial issue here, especially if we consider the consequences of latency. Thus, we must not only develop the techniques, but also ways of evaluating their scalability to terabyte and petabyte data sets.
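To make the scaling concern concrete, here is a sketch of the flavor of agent involved: a single constant-memory streaming pass that scores fixed-size chunks of an enormous stream and keeps only the most anomalous regions for human inspection. The z-score statistic is a deliberately naive stand-in for real pattern recognition, and the names and thresholds are our assumptions; the point is the O(n) scan, which is what terabyte and petabyte data sets demand.

import heapq, math, random

def winnow(stream, chunk_size=1024, keep=10):
    """Return the `keep` chunks whose means deviate most from the running mean."""
    count = mean = m2 = 0.0
    top, buf, offset = [], [], 0
    for x in stream:
        buf.append(x)
        count += 1
        delta = x - mean
        mean += delta / count                  # Welford's online moments
        m2 += delta * (x - mean)
        if len(buf) == chunk_size:
            std = math.sqrt(m2 / count) or 1.0
            score = abs(sum(buf) / chunk_size - mean) / std
            heapq.heappush(top, (score, offset))
            if len(top) > keep:
                heapq.heappop(top)             # drop the least interesting chunk
            buf, offset = [], offset + chunk_size
    return sorted(top, reverse=True)           # most anomalous regions first

# A million samples of noise with a buried anomaly; the agent surfaces it.
data = (random.gauss(0, 1) + (5 if 500_000 < i < 501_000 else 0)
        for i in range(1_000_000))
print(winnow(data)[:3])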

ConclusionIt’s generally accepted that visualization is key to

insight and understanding of complex data and modelsbecause it leverages the highest-bandwidth channel tothe brain. What’s less generally accepted, because therehas been so much less experience with it, is that IVR cansignificantly improve our visualization abilities overwhat can be done with ordinary desktop computing. IVRisn’t “just better” 3D graphics, any more than 3D is justbetter 2D graphics. Rather, IVR can let us “see” (that is,form a conception of and understand) things we couldnot see with desktop 3D graphics.

We need to push any means available for making visualization more powerful, because the gap between the size and complexity of data sets we can compute or sense and those we can effectively visualize is increasing at an alarming rate. IVR's potential to display larger and more complex data, to interact more naturally with that data, and possibly to reveal new patterns in the data through the use of our intrinsic 3D perception, navigation, and manipulation skills, is tantalizing. We anticipate increasing interest in both the research agenda and in production uses of IVR for visualization.

However, many barriers block rapid progress. Technology for IVR remains primitive and expensive, and investment for visualization—let alone IVR visualization—both for R&D and for deployment, lags far behind investment in computation and data gathering. Scientific computing facilities typically spend 10 percent or less of their hardware budget on visualization systems. Nonetheless, we see considerable cause for optimism, given the success stories and partial success stories available and the inexorable improvements in hardware, software, and interaction technology. Those of us in the field, sensing the potential, wonder whether we are roughly at the 1903 Kitty Hawk stage of powered flight, with the equivalent of modern jet-airplane travel and same-day package delivery inevitably to come, or whether we are deluded by the late-1930s popular science prediction of a helicopter in every garage.

We do believe that we're seeing the slow and somewhat painful birth of a new medium, although we're far from being at "Stage 3" (see The Three Stages of a New Medium, http://www.alice.org/stage3/whystage3.html) of using the new medium to its fullest potential, that is, with idioms unique to it rather than imitating the old ones. Much research is needed to develop visualization and interaction techniques that take proper advantage of the IVR medium. The learning curve for the new medium is far steeper than for 3D, let alone 2D graphics, at least in part because the technology is so much more complex and our lack of knowledge of human perception, cognition, and manipulation skills is so much greater a limitation.

The research agenda for progress in using IVR for scientific visualization is long and provides interesting challenges for researchers in many fields, especially in interdisciplinary problems. The agenda includes the traditional research areas of dramatically improving device technology, producing scalable high-performance graphics architectures from commodity parts, and developing ways of significantly reducing end-to-end latency along with coping strategies for unavoidable latency. Among the interesting newer research problems are

■ how to do computational steering of exploratory computations and monitoring of production runs, especially in IVR

■ how to use art-inspired visualization techniques for showing large multivalued data sets

■ how to use our growing knowledge of perceptual, cognitive, and social science to construct effective and comfortable IVR environments and visualizations

■ how to do application-oriented user studies that show under what circumstances and for what reasons IVR is or isn't effective

We hope that this article will stimulate serious interest in the research agenda we have highlighted here. Also, we hope that a review of this kind done in a decade will start by listing important scientific discoveries or designs that would not have happened without the production use of IVR-based scientific visualization as an integral part of the discovery or design process. ■

Acknowledgments
This article was truly a group effort, with many colleagues helping in substantive ways. Some contributed sidebars, others submitted detailed comments and suggested fixes that we have shamelessly but gratefully incorporated. Yet others answered time-critical technical questions with very helpful responses. Any sins of omission or commission are our responsibility.

First, we wish to thank John van Rosendale, with whom Andy van Dam collaborated to produce the keynote addresses that John gave at IEEE Visualization 99 and that Andy gave at IEEE VR 2000. These talks formed the basis for this article. Other major contributors whose text we used literally or paraphrased included Steve Bryson, Sam Fulcomer, Chris Johnson, Mike Kirby, Tim Miller, and Dave Mizell.

Next, we thank those who provided useful feedback or technical information: Steve Ellis, Steve Feiner, Jim Foley, David Johnson, George Karniadakis, Jaron Lanier, Don Middleton, Jeff Pierce, Terri Quinn, Spencer Sherwin, and Colin Ware. Katrina Avery, Nancy Hays, and Debbie van Dam greatly improved the readability of the article.

Finally, we thank our sponsors: NSF, NIH, the ANS National Tele-Immersion Initiative, DOE's ASCI program, IBM, Microsoft Research, and Sun Microsystems.

References
1. F.P. Brooks, Jr., "What's Real About Virtual Reality," IEEE Computer Graphics and Applications, Vol. 19, No. 6, Nov./Dec. 1999, pp. 16-27.
2. C. Cruz-Neira, D.J. Sandin, and T.A. DeFanti, "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE," Proc. ACM Siggraph 93, Ann. Conf. Series, ACM Press, New York, 1993, pp. 135-142.
3. A. van Dam, "Post-Wimp User Interfaces: The Human Connection," Comm. ACM, Vol. 40, No. 2, 1997, pp. 63-67.
4. C. Ware, K. Arthur, and K.S. Booth, "Fish Tank Virtual Reality," Proc. InterCHI 93, ACM and IFIP, ACM Press, New York, 1993, pp. 37-42.
5. W. Krüger and B. Fröhlich, "The Responsive Workbench," IEEE Computer Graphics and Applications, Vol. 14, No. 3, May 1994, pp. 12-15.
6. "Special Issue on Large Wall Displays," IEEE Computer Graphics and Applications, Vol. 20, No. 4, July/Aug. 2000.
7. B.H. McCormick, T.A. DeFanti, and M.D. Brown, Visualization in Scientific Computing, Report of the NSF Advisory Panel on Graphics, Image Processing and Workstations, 1987.
8. Data and Visualization Corridors: Report on the 1998 DVC Workshop Series, CACR-164, P. Smith and J. van Rosendale, eds., Center for Advanced Computing Research, California Institute of Technology, Pasadena, Calif., 1998.
9. J. Leigh, C.A. Vasilakis, and T.A. DeFanti, "Virtual Reality in Computational Neuroscience," Proc. Conf. on Applications of Virtual Reality, R. Earnshaw, ed., British Computer Society, Wiltshire, UK, June 1994.
10. R. Pausch, D.R. Proffitt, and G. Williams, "Quantifying Immersion in Virtual Reality," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, Aug. 1997, pp. 13-18.
11. S. Bryson, "Virtual Reality in Scientific Visualization," Comm. ACM, Vol. 39, No. 5, May 1996, pp. 62-71.
12. M. Chen et al., "A Study in Interactive 3D Rotation Using 2D Control Devices," Computer Graphics, Vol. 22, No. 4, Aug. 1988, pp. 121-129.
13. C. Ware and J. Rose, "Rotating Virtual Objects with Real Handles," ACM Trans. CHI, Vol. 6, No. 2, 1999, pp. 162-180.
14. S. Bryson and C. Levit, "The Virtual Windtunnel," IEEE Computer Graphics and Applications, Vol. 12, No. 4, July/Aug. 1992, pp. 25-34.
15. H. Haase, J. Strassner, and F. Dai, "VR Techniques for the Investigation of Molecule Data," J. Computers and Graphics, Vol. 20, No. 2, Elsevier Science, New York, 1996, pp. 207-217.
16. H. Haase et al., "How Scientific Visualization Can Benefit from Virtual Environments," CWI Quarterly, Vol. 7, No. 2, 1994, pp. 159-174.
17. C.-R. Lin, H.R. Nelson, Jr., and R.B. Loftin, "Interaction with Geoscience Data in an Immersive Environment," Proc. IEEE Virtual Reality 2000, IEEE Computer Society Press, Los Alamitos, Calif., 2000, pp. 55-62.
18. M. Tidwell et al., "The Virtual Retinal Display—A Retinal Scanning Imaging System," Proc. Virtual Reality World 95, Mecklermedia, Westport, Conn., 1995, pp. 325-333.
19. K. Perlin, S. Paxia, and J.S. Kollin, "An Autostereoscopic Display," Proc. Siggraph 2000, Ann. Conf. Series, ACM Press, New York, 2000, pp. 319-326.
20. M. Hereld et al., "Developing Tiled Projection Display Systems," Proc. IPT 2000 (Immersive Projection Technology Workshop), University of Iowa, Ames, Iowa, 2000.
21. M. Agrawala et al., "The Two-User Responsive Workbench: Support for Collaboration through Individual Views of a Shared Space," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, 1997, pp. 327-332.
22. R. Raskar et al., "Multi-Projector Displays Using Camera-Based Registration," Proc. IEEE Visualization 99, ACM Press, New York, Oct. 1999.
23. Virtual Reality: Scientific and Technological Challenges, N.I. Durlach and A.S. Mavor, eds., National Academy Press, Washington, D.C., 1995.
24. R. Raskar et al., "The Office of the Future: A Unified Approach to Image-Based Modeling and Spatially Immersive Displays," Proc. ACM Siggraph 98, Ann. Conf. Series, ACM Press, New York, 1998, pp. 179-188.
25. G. Bishop and G. Welch, "Working in the Office of the 'Real Soon Now'," IEEE Computer Graphics and Applications, Vol. 20, No. 4, July/Aug. 2000, pp. 88-95.
26. K. Daniilidis et al., "Real-Time 3D Tele-immersion," The Confluence of Vision and Graphics, A. Leonardis et al., eds., Kluwer Academic Publishers, Dordrecht, The Netherlands, 2000, pp. 215-314.
27. J.A. Ferwerda et al., "A Model of Visual Masking for Computer Graphics," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, 1997, pp. 143-152.
28. L. Markosian et al., "Real-Time Nonphotorealistic Rendering," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, Aug. 1997, pp. 415-420.
29. W.J. Schroeder, L.S. Avila, and W. Hoffman, "Visualizing with VTK: A Tutorial," IEEE Computer Graphics and Applications, Vol. 20, No. 5, Sept./Oct. 2000, pp. 20-28.
30. S. Parker et al., "Interactive Ray Tracing for Volume Visualization," IEEE Trans. Visualization and Computer Graphics, July-Sept. 1999, pp. 238-250.
31. S. Whitman, Multiprocessor Methods for Computer Graphics Rendering, Jones and Bartlett Publishers, Boston, 1992.
32. M. Hereld, I.R. Judson, and R.L. Stevens, "Tutorial: Introduction to Building Projection-Based Tiled Display Systems," IEEE Computer Graphics and Applications, Vol. 20, No. 4, July/Aug. 2000, pp. 22-28.


33. G. Humphreys et al., "Distributed Rendering for Scalable Displays," to appear in Proc. IEEE Supercomputing 2000, IEEE Computer Society Press, Los Alamitos, Calif., 2000.
34. G.C. Burdea, Force and Touch Feedback for Virtual Reality, John Wiley & Sons, New York, 1996.
35. F.P. Brooks, Jr., et al., "Project Grope—Haptic Displays for Scientific Visualization," Proc. ACM Siggraph 90, ACM Press, New York, Aug. 1990, pp. 177-185.
36. L.J.K. Durbeck et al., "SCIRun Haptic Display for Scientific Visualization," Proc. Third Phantom User's Group Workshop, MIT RLE Report TR624, Massachusetts Institute of Technology, Cambridge, Mass., 1998.
37. M. Hirose et al., "Development of Wearable Force Display (HapticGear) for Immersive Projection Displays," Proc. IEEE Virtual Reality 99 Conf., IEEE Computer Society Press, Los Alamitos, Calif., 1999, p. 79.
38. T. Miller and R. Zeleznik, "An Insidious Haptic Invasion: Adding Force Feedback to the X Desktop," Proc. ACM User Interface Software and Technology (UIST) 98, ACM Press, New York, 1998, pp. 59-64.
39. T. Anderson, "Flight—A 3D Human-Computer Interface and Application Development Environment," Proc. Second Phantom User's Group Workshop, MIT RLE TR618, Massachusetts Institute of Technology, Cambridge, Mass., Oct. 1997.
40. G. Welch et al., "The HiBall Tracker: High-Performance Wide-Area Tracking for Virtual and Augmented Environments," Proc. ACM Symp. Virtual Reality Software and Technology (VRST), ACM Press, New York, Dec. 1999.
41. "Perceptual User Interfaces," M. Turk and G. Robertson, eds., special issue of Comm. ACM, Vol. 43, No. 3, Mar. 2000.
42. B. Fröhlich et al., "Cubic-Mouse-Based Interaction in Virtual Environments," IEEE Computer Graphics and Applications, Vol. 20, No. 4, July/Aug. 2000, pp. 12-15.
43. P.R. Cohen et al., "QuickSet: Multimodal Interaction for Distributed Applications," Proc. ACM Multimedia 97, ACM Press, New York, 1997, pp. 31-40.
44. J. Nielsen, Usability Engineering, Academic Press, San Diego, 1993.
45. S.R. Ellis et al., "Discrimination of Changes of Latency during Voluntary Hand Movement of Virtual Objects," Proc. Human Factors and Ergonomics Society (HFES) 99, Human Factors and Ergonomics Society, Santa Monica, Calif., 1999, pp. 1182-1186.
46. R.S. Kennedy et al., "A Simulator Sickness Questionnaire (SSQ): An Enhanced Method for Quantifying Simulator Sickness," Int'l J. Aviation Psychology, Vol. 3, 1993, pp. 203-220.
47. T.B. Sheridan and W.R. Ferrell, "Remote Manipulative Control with Transmission Delay," IEEE Trans. Human Factors in Electronics (HFE), Vol. 4, No. 1, 1963, pp. 25-29.
48. M. Wloka, "Lag in Multiprocessor VR," Presence: Teleoperators and Virtual Environments, Vol. 4, No. 1, Winter 1995, pp. 50-63.
49. S. Singhal and M. Zyda, Networked Virtual Environments: Design and Implementation, Addison Wesley, Reading, Mass., 1999.
50. P.M. Sharkey, M.D. Ryan, and D.J. Roberts, "A Local Perception Filter for Distributed Virtual Environments," Proc. IEEE Virtual Reality Ann. Int'l Symp. (VRAIS) 98, IEEE Computer Society Press, Los Alamitos, Calif., Mar. 1998, pp. 242-249.
51. D.B. Conner and L.S. Holden, "Providing a Low-Latency User Experience in a High-Latency Application," Proc. ACM Symp. on Interactive 3D Graphics 97, ACM Press, New York, Apr. 1997, pp. 45-48.
52. C. Vogler and D. Metaxas, "Toward Scalability in ASL Recognition: Breaking Down Signs into Phonemes," Proc. Gesture Workshop 99, Springer, Berlin, Mar. 1999.
53. B. Watson et al., Evaluation of Frame Time Variation on VR Task Performance, Tech. Report 96-17, College of Computing, Georgia Institute of Technology, Atlanta, Ga., 1996.
54. S. Bryson et al., "Visually Exploring Gigabyte Data Sets in Real Time," Comm. ACM, Vol. 42, No. 8, Aug. 1999, pp. 83-90.
55. H. Hoppe, "View-Dependent Refinement of Progressive Meshes," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, Aug. 1997, pp. 189-198.
56. D. Aliaga et al., "MMR: An Integrated Massive Model Rendering System Using Geometric and Image-Based Acceleration," Proc. ACM Symp. on Interactive 3D Graphics (I3D), ACM Press, New York, Apr. 1999, pp. 199-206.
57. M. Deering, "Geometry Compression," Proc. ACM Siggraph 95, Ann. Conf. Series, ACM Press, New York, 1995, pp. 13-20.
58. D. Sujudi and R. Haimes, "Identification of Swirling Flow in 3D Vector Fields," Proc. AIAA (Am. Inst. of Aeronautics and Astronautics) Computational Fluid Dynamics Conf., American Institute of Aeronautics and Astronautics, Reston, Va., June 1995, pp. 151-158.
59. P. Walatka et al., Plot3D User's Manual, NASA Tech. Memo 101067, Moffett Field, Calif., 1992.
60. J. Helman and L. Hesselink, "Visualizing Vector Field Topology in Fluid Flows," IEEE Computer Graphics and Applications, Vol. 11, No. 3, May/June 1991, pp. 36-40.
61. D. Kenwright, "Automatic Detection of Open and Closed Separation and Attachment Lines," Proc. IEEE Visualization 98, ACM Press, New York, Oct. 1998, pp. 151-158.
62. R. Haimes, "Using Residence Time for the Extraction of Recirculation Regions," Proc. 14th AIAA (Am. Inst. of Aeronautics and Astronautics) Computational Fluid Dynamics Conf., Paper No. 97-3291, American Institute of Aeronautics and Astronautics, Reston, Va., June 1999.
63. S. Bryson and S. Johan, "Time Management, Simultaneity, and Time-Critical Computation in Interactive Unsteady Visualization Environments," Proc. IEEE Visualization 96, ACM Press, New York, 1996, pp. 255-261.
64. T. Funkhouser and C. Séquin, "Adaptive Display Algorithm for Interactive Frame Rates during Visualization of Complex Virtual Environments," Proc. ACM Siggraph 93, Ann. Conf. Series, ACM Press, New York, 1993, pp. 247-254.
65. T. Meyer and A. Globus, Direct Manipulation of Isosurfaces and Cutting Planes in Virtual Environments, Tech. Report 93-54, Computer Science Dept., Brown University, Providence, R.I., Dec. 1993.
66. G.E. Karniadakis and S.J. Sherwin, Spectral Element Methods for CFD, Oxford University Press, Oxford, UK, 1999.
67. J.D. de St. Germain et al., "Uintah: A Massively Parallel Problem Solving Environment," Proc. HPDC 00, Ninth IEEE Int'l Symp. on High Performance and Distributed Computing, IEEE Computer Society Press, Los Alamitos, Calif., Aug. 2000, pp. 33-41.
68. N. Sawant et al., "The Tele-Immersive Data Explorer: A Distributed Architecture for Collaborative Interactive Visualization of Large Data Sets," Proc. IPT 2000 (Immersive Projection Technology Workshop), University of Iowa, Ames, Iowa, June 2000.

69. B. Zeleznik et al., "Scene-Graph-As-Bus: Collaboration Between Heterogeneous Stand-alone 3D Graphical Applications," Proc. Eurographics 2000, Blackwell, Cambridge, UK, pp. 91-98.
70. G. Hesina et al., "Distributed Open Inventor: A Practical Approach to Distributed 3D Graphics," Proc. ACM Virtual Reality Software and Technology (VRST) 99, ACM Press, New York, Dec. 1999.
71. Institute of Electrical and Electronics Engineers, International Standard, ANSI/IEEE Standard 1278-1993, "Standard for Information Technology, Protocols for Distributed Interactive Simulation," IEEE Press, Piscataway, N.J., Mar. 1993.
72. F. Kuhl, R. Weatherly, and J. Dahmann, Creating Computer Simulation Systems, Prentice Hall, Upper Saddle River, N.J., 1999.
73. The Visualization Toolkit: An Object-Oriented Approach to 3D Graphics, Prentice Hall, Upper Saddle River, N.J., 1997.
74. L. Arns, D. Cook, and C. Cruz-Neira, "The Benefits of Statistical Visualization in an Immersive Environment," Proc. IEEE Virtual Reality 99, IEEE Computer Society Press, Los Alamitos, Calif., Mar. 1999, pp. 88-95.
75. M.C. Salzman et al., "ScienceSpace: Lessons for Designing Immersive Virtual Realities," Proc. ACM Computer-Human Interaction (CHI) 96, ACM Press, New York, 1996, pp. 89-90.
76. D. Hix et al., "User-Centered Design and Evaluation of a Real-Time Battlefield Visualization Virtual Environment," Proc. IEEE Virtual Reality 99, IEEE Computer Society Press, Los Alamitos, Calif., 1999, pp. 96-103.
77. R.M. Kirby, H. Marmanis, and D.H. Laidlaw, "Visualizing Multivalued Data from 2D Incompressible Flows Using Concepts from Painting," Proc. IEEE Visualization 99, ACM Press, New York, Oct. 1999, pp. 333-340.
78. D.H. Laidlaw, K.W. Fleischer, and A.H. Barr, "Partial-Volume Bayesian Classification of Material Mixtures in MR Volume Data Using Voxel Histograms," IEEE Trans. on Medical Imaging, Vol. 17, No. 1, Feb. 1998, pp. 74-86.
79. V. Interrante, "Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution," Proc. ACM Siggraph 97, Ann. Conf. Series, ACM Press, New York, 1997, pp. 109-116.
80. W.H. Warren and D.J. Hannon, "Direction of Self-Motion Is Perceived from Optical Flow," Nature, Vol. 335, No. 10, 1988, pp. 162-163.
81. M.S. Landy and J.A. Movshon, Computational Models of Visual Processing, MIT Press, Cambridge, Mass., 1991.
82. A. Pang, C.M. Wittenbrink, and S.K. Lodha, Approaches to Uncertainty Visualization, Tech. Report UCSC-CRL-96-21, Computer Science Dept., University of California, Santa Cruz, 1996.
83. R. Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Viking, New York, 1999.


Andries van Dam is Thomas J. Watson, Jr., University Professor of Technology and Education, and professor of computer science at Brown University, where he co-founded the Computer Science Department and served as its first chairman. His research has concerned computer graphics, text processing, and hypermedia systems.

van Dam co-founded ACM Siggraph and sits on the technical advisory boards of several startups and Microsoft Research. He became an IEEE Fellow and an ACM Fellow in 1994. He received honorary PhDs from Darmstadt Technical University, Germany, in 1995 and from Swarthmore College in 1996. He was inducted into the National Academy of Engineering in 1996 and became an American Academy of Arts and Sciences Fellow in 2000. See his curriculum vitae for a complete list of his awards and publications: http://www.cs.brown.edu/people/avd/long_cv.html.

Contact van Dam at Brown University, Dept. of Computer Science, Box 1910, Providence, RI 02912, [email protected].

Andrew Forsberg is a research staff member in the Brown University Graphics Group. He works primarily on developing 2D and 3D user interfaces and on applications of virtual reality. He received his Masters degree from Brown in 1996.

David Laidlaw is the Robert Stephen Assistant Professor in the Computer Science Department at Brown University. His research centers on applications of visualization, modeling, computer graphics, and computer science to other scientific disciplines. He received his PhD in computer science from the California Institute of Technology, where he also did postdoctoral work in the Division of Biology.

Joseph J. LaViola, Jr., is a PhD student in computer science in the Computer Graphics Group and a Masters student in applied mathematics at Brown University. He also runs JJL Interface Consultants, which specializes in interfaces for virtual reality applications. His primary research interests are multimodal interaction in virtual environments and user interface evaluation. He received his ScM in computer science from Brown in 1999.

Rosemary Michelle Simpson is a resources coordinator for the Exploratories project in the Computer Science Department at Brown University, and webmaster for several graphics and hypertext Web sites. She received a BA in history from the University of Massachusetts.