
Automation and Visualization in Geographic Immersive Virtual Environments

Thomas J. Pingel, Northern Illinois University
Keith C. Clarke, University of California, Santa Barbara

AutoCarto 2012 International Research Symposium
September 16-20, 2012

Columbus, Ohio

Central Research Question:

How can we, in an automatable way, produce an immersive geographic virtual environment that will assist in the interpretation, analysis, and understanding of specific, local events?

Outline

• Project overview
• Code base
• Terrain generation from LiDAR
• Acquisition of audio and video for model overlay

Immersive Geographic Virtual Environments

• Immersive: “any virtual reality representation in which the user views his or her environment from a perspective view, and can freely move around in that environment”

• Multiple Psychologies of Space (Montello, 1993)
  – Figural, Vista, Environmental, Geographical

• Representing Environmental (or Geographical) spaces as Figural (or Vista) Objects while retaining some of the cognitive elements of each.

• Emphasis on representing places in a model that can be both manipulated as an object and experienced as a place.

Related Work

• Google’s Earth and Street View
  – Microsoft & Apple
  – No ability to alter the terrain
  – Universality
• Virtual Tübingen
  – Designed for spatial cognition testing
  – 200 structures, 0.5 x 0.15 km
• Our study area
  – 3.25 x 1.6 km
  – ~2000 structures

Image from Virtual Tübingen

Video Game Community

• Immense budgets and revenues
  – $65 billion annually
• Many perspectives
  – First Person Shooters
  – World of Warcraft
  – But few environment & object perspectives
• Highly structured environments

Code Base – X3D

• XML successor to VRML (and GeoVRML)
• Native geo support
• Native video texturing and spatialized audio
• Royalty free
• Browsers can typically read other 3D formats (e.g., COLLADA)
• Good input device integration
  – SpaceMouse
  – Microsoft Kinect
  – Wiimotes
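
As a rough illustration of how such scenes can be produced in an automatable way, the sketch below (my own minimal example, not the project's actual pipeline; node and field names follow the X3D specification as commonly documented) writes a gridded terrain out as an X3D ElevationGrid that any conforming browser can display.

import numpy as np

def write_x3d_terrain(dsm, cell_size, path="terrain.x3d"):
    """Write a 2-D elevation array (meters) as an X3D ElevationGrid."""
    rows, cols = dsm.shape
    heights = " ".join(f"{z:.2f}" for z in dsm.ravel())
    document = f"""<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Immersive" version="3.2">
  <Scene>
    <Shape>
      <Appearance><Material diffuseColor="0.6 0.6 0.6"/></Appearance>
      <ElevationGrid xDimension="{cols}" zDimension="{rows}"
                     xSpacing="{cell_size}" zSpacing="{cell_size}"
                     height="{heights}"/>
    </Shape>
  </Scene>
</X3D>"""
    with open(path, "w") as f:
        f.write(document)

# Example: a flat 10 x 10 test grid at 1 m spacing
write_x3d_terrain(np.zeros((10, 10)), cell_size=1.0)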

X3D Development: Avalon & X3DOM

• Integration of next-gen specs in Avalon
  – Instantreality.org
• Integration with HTML5 via X3DOM
  – X3dom.org
• Full rendering within the browser
  – No add-ins required

Terrain generation

• LiDAR
  – Cheap
  – Highly accurate
  – Portable
  – But needs processing (see the gridding sketch after this list)
• Assumption of little available geodata
  – Ground cues can be very valuable in street network ID
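
One illustrative processing step, sketched below under the assumption of a simple (N, 3) array of x, y, z returns (again my own example, not the project's code), is gridding the raw point cloud into a Digital Surface Model by keeping the highest return in each cell.

import numpy as np

def points_to_dsm(points, cell_size=1.0):
    """Grid an (N, 3) array of x, y, z LiDAR returns into a DSM."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = ((x - x.min()) / cell_size).astype(int)
    rows = ((y.max() - y) / cell_size).astype(int)
    dsm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, elev in zip(rows, cols, z):
        if np.isnan(dsm[r, c]) or elev > dsm[r, c]:
            dsm[r, c] = elev  # keep the highest return in each cell
    return dsm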

Point cloud of building and surrounding area

Terrain Extraction is Important

Davidson Library sits approximately 6 meters above the ground due to a terrain layer error.

Terrain Extraction: The Simple Morphological Filter (SMRF)

• Emphasizes reducing Earth-as-Object error

• Still very good at reducing Object-as-Earth error

• Lowest total error rate of any published algorithm tested against the ISPRS dataset

• tpingel.org/code
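
The full implementation is available at tpingel.org/code; the fragment below is only a condensed sketch in the spirit of a progressive morphological filter, using a growing opening window and a slope-scaled elevation threshold, and omits the inpainting and provisional-surface steps of the published algorithm.

import numpy as np
from scipy import ndimage

def ground_mask(zmin, cell_size=1.0, slope=0.15, max_window=18.0):
    """zmin: per-cell minimum-elevation surface; returns True where ground."""
    surface = zmin.copy()
    is_object = np.zeros(zmin.shape, dtype=bool)
    for radius in range(1, int(max_window / cell_size) + 1):
        window = 2 * radius + 1
        opened = ndimage.grey_opening(surface, size=(window, window))
        threshold = slope * radius * cell_size  # relief tolerated at this scale
        is_object |= (surface - opened) > threshold
        surface = opened
    return ~is_object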

LiDAR Visualization (Bonemaps)

• Image-like visualization of Digital Surface Model

• No registration errors
• Slope-based intensity mapping, with compensation for “cognitive slope”

• Higher contrast than hillshade

• Appropriate for mixed environments
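
A bare-bones version of slope-based intensity mapping might look like the sketch below; the exaggeration factor is only a stand-in for the talk's "cognitive slope" compensation, not its actual formulation.

import numpy as np

def slope_shade(dsm, cell_size=1.0, exaggeration=2.5):
    """Map terrain slope to grayscale intensity (1 = flat, 0 = vertical)."""
    dzdy, dzdx = np.gradient(dsm, cell_size)
    slope_deg = np.degrees(np.arctan(exaggeration * np.hypot(dzdx, dzdy)))
    return 1.0 - np.clip(slope_deg / 90.0, 0.0, 1.0)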

SMRF + Bonemaps at El Pilar, Guatemala

Digital Surface Model

SMRF + Bonemaps at El Pilar, Guatemala

SMRF-derived terrain layer

Video Overlay

• Aerostat-based video capture

• Smartphone capture and relay

• Native video texturing in X3D
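
X3D's standard MovieTexture node carries the video; the snippet below emits an illustrative fragment that drapes a clip over a simple quad. The file name and geometry are placeholders, not the project's actual overlay.

overlay = """<Shape>
  <Appearance>
    <MovieTexture url='"overlay.mp4"' loop="true"/>
  </Appearance>
  <IndexedFaceSet coordIndex="0 1 2 3 -1">
    <Coordinate point="0 0 0  50 0 0  50 0 50  0 0 50"/>
  </IndexedFaceSet>
</Shape>
"""
with open("video_overlay_fragment.x3d", "w") as f:
    f.write(overlay)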

Acknowledgements

• IC Postdoc for funding the project.
• Alan Glennon and Kitty Courier for kite photography expertise.
• William McBride for SMRF algorithm development and aerostat design.
