
A Fresh Look at Immersive Volume Rendering: Challenges and Capabilities

Nicholas F. Polys∗ (Virginia Polytechnic Institute and State University), Sebastian Ullrich (SenseGraphics AB), Daniel Evestedt (SenseGraphics AB), Andrew D. Wood (Virginia Polytechnic Institute and State University), and Michael Aratow (Web3D Consortium)

Figure 1: Examples of Immersive Volume Rendering with the open X3D and H3D platform; stereo display of cell data in both segmented and isosurface styles (left) and volume rendering combined with haptic input (right).

ABSTRACT

In this paper, we begin from the premise that for science to progress, experiments must be repeatable. Similarly, for enterprises in a knowledge-driven economy, such as clinical medicine, the portable and repeatable presentation of visualization views is paramount. Yet both science and capitalism also rely on the innovation of researchers to explore and test new variables and models. We consider how this natural tension plays out with respect to immersive volume rendering and the open standards and open source movement. We first discuss the challenges of volume rendering in immersive environments and then describe how the ISO standard X3D meets these. Then, we provide examples with an implementation of these specifications in the open source toolkit H3D and conclude that extensibility and repeatability are not mutually exclusive. Finally, we reflect on the implications for future research programs in this area.

1 INTRODUCTION

The last 15+ years of computer graphics and applications have proven the value of volume rendering techniques for scientific visualization in general and for interactive rendering applications across domains: from medical imaging to geophysics to transportation security. There is a healthy community of researchers working on volume rendering models and algorithms; their efforts are largely focused on different approaches to illumination, rendering techniques and the handling of large datasets [7]. However, one issue that is seldom addressed is the reproducibility of volume renderings. There are many reasons why reproducibility is hard—from hardware and software architectures to network protocols to the growth of data

∗e-mail: [email protected]

types and sizes, it is difficult to re-implement and re-create even 'common' volume rendering algorithms on a shifting, babelized and proprietary technological landscape.

In the knowledge enterprise, however, it is 'mission-critical' that data becomes verifiable information and that this information is interoperable between systems and portable across platforms. We use the term 'enterprise' broadly to describe any organization (research, clinical, industrial) where the access of diversified workers to data must be scalable and efficient. First, let us consider the following examples from academia, as a scientific research enterprise: a perceptual study about the value of a new volume rendering algorithm for stereo depth discrimination, and a 3DUI study about surgical planning with volumetric data. In both of these cases, the scientific results from the study are only valid insofar as they are repeatable: that is, able to be independently implemented, run and evaluated. Several use cases are apparent in the IEEE VR 2010 Workshop on Medical Virtual Environments [9], for example. The parameters generating the stimuli (views) include camera and object transformations, lighting equations, volume ray-casting algorithms, mesh and isosurface material properties, clipping planes, etc.; all must share a conceptual interoperability and perceptual equivalence regardless of the proprietary application, rendering library or display hardware used.

Now, let us discuss an example from the clinical world. The use of clinical imaging modalities, especially CT, MRI, PET and ultrasound, has been steadily increasing over the last decade [12]. The CT and MRI modalities are producing ever more image slices per study due to a parallel increase in hardware capabilities. Consider a patient who receives a CT exam at hospital A due to a malignancy. The image is reconstructed as a 3D volume, segmented and marked up by the radiologist (and may even have a camera animation if the exam involved a virtual colonoscopy [4]). The patient pursues a second opinion and brings a copy of the studies to a specialist who practices at hospital B, which has a different vendor than hospital A for viewing medical images. Again, as in the previous example from academia, the parameters generating the


markups and camera animations must demonstrate interoperability and perceptual equivalence, or those from the initial radiologist will be lost. This could lead to a delay of care as the specialist at hospital B tries to communicate with her counterpart at hospital A, and/or a repeat (and redundant) imaging study, leading to increased costs and possibly increased radiation exposure.

Despite the multitude of abstractions and interfaces for interactive volume rendering, we claim a premium on the requirement for a cross-platform representation of interactive rendering parameters and the accessibility of volume rendered views by a greatest common denominator—international standards and open source toolkits (see Figure 1). The ISO standard Extensible 3D (X3D), for example, is a truly platform-independent scene graph with declarative and imperative representations of immersive virtual environments and their behaviors (e.g., [11]). This value is especially important for enterprises that require interoperability among multiple applications, portability across World Wide Web devices and the durability of their volumetric presentations and environments (e.g., [10]).

In the case of durability, we ask: "How durable is this presentation—can it avoid bit-rot and still be accessible in 30+ years?" Unfortunately, this is not an empty rhetorical challenge. Consider designing a contemporary human factors study of presence and simulator sickness that could be compared with SGI flight simulator results from the 1990s. Such a task would be nearly impossible; those virtual environments are frozen in human memory, never to run again. Virtual environments and virtual reality will have their greatest practical impact when they can mature to a common, durable conformance level. Innovators and entrepreneurs can extend and add value in an ecosystem on top of that base platform. Not only is this durability an imperative requirement for scientific progress, but also for the efficient realization of government, citizens' rights and the public interest.

2 CHALLENGES

Let us first identify and briefly describe the technological components and challenges of immersion and volume rendering; we categorize the challenges into three groups: data, software and hardware.

Data: support for diverse sources, which could be numbered stacks of image files, DICOM data, or several other formats. We use Nearly-Raw Raster Data (NRRD) because many tools and libraries support this simple format for segmentations; some data formats, like DICOM with JPEG 2000, require royalties from software implementers.

Software: interoperability with diverse processing pipelines for volumetric data, depending on its sources and its data distribution. Typically, these pipelines involve: the choice of appropriate transfer functions and render styles to visualize relevant data or regions of interest; the segmentation, or grouping, of voxels to apply different rendering styles; and the surfacing of internal structures. Depending on the runtime and application, the surfacing may be an explicitly-derived mesh of vertices and faces or it may be a visual effect of rendering an isocontour along a given voxel threshold.

Hardware: accessibility for diverse platforms from hand-helds to multi-core clusters, and visual display systems with stereoscopy and multiple screens, as well as support for tracking systems (for user-centered projections) and high-DOF interaction devices (haptic or tracked devices).

Figure 2: X3D volume visualizations using H3D: at left, cell image visualization with segmentations from multi-channel microscopy; at right, the Parapandorina microfossil with segmentations used to create Isosurface masks.

3 CAPABILITIES

3.1 Extensible 3D (X3D)

The International Organization for Standardization (ISO) standard for 3D graphics over the Internet is Extensible 3D (X3D), which is maintained and developed by the Web3D Consortium (http://www.web3d.org). The open and royalty-free ISO standard scene graph has evolved through 15 years of hardware development and software boom-busts and still remains the greatest common denominator for communicating real-time interactive 3D scenes over the web. With dozens of implementations across industries and operating systems, X3D specifies this scene graph in layers of functionality, known as 'Profiles', and several encodings, including binary and XML. The Extensible 3D scene graph, X3D (ISO/IEC 19775), provides an expressive and durable platform to describe and deliver interactive rich-media 3D application views.

Working above any specific rendering library, X3D provides a powerful set of abstractions to compose meshes, appearances, lighting, animations, viewpoints, navigations and interactions. In addition to encoding the scene graph in XML, UTF-8 and binary formats, the X3D standard also specifies the Application Programming Interface (API), known as the Scene Access Interface (SAI). The standard includes SAI language bindings for ECMAScript and Java; toolkits have demonstrated several other bindings including C++, Python and VBScript.

With a proven node set and transformation hierarchy, complex environments can be built and visualized at web enterprise scale. As a scene graph run-time, X3D also specifies a 'Behavior Graph', which describes the routing of events between nodes for animation and interaction. Several types of sensor nodes can be instanced to enable their siblings with pointing and dragging as well as environmental (visibility, proximity, collision) events. X3D applications are today being driven by a variety of user interface paradigms and hardware, from laptops to multi-touch tables and from mobile devices to immersive VR installations.
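As a concrete illustration of the behavior graph, consider the following minimal sketch in the X3D XML encoding: a TouchSensor enables its sibling geometry, and ROUTEs wire its output through a TimeSensor and an interpolator to spin the model. All DEF names and field values here are illustrative, not drawn from the paper's applications.

```xml
<!-- Behavior graph sketch: clicking the box starts a 4-second spin.
     DEF names are illustrative. -->
<Transform DEF='MODEL'>
  <Shape>
    <Appearance><Material diffuseColor='0.8 0.2 0.2'/></Appearance>
    <Box size='0.1 0.1 0.1'/>
  </Shape>
  <!-- A sensor node affects its sibling geometry -->
  <TouchSensor DEF='TOUCH' description='click to spin'/>
</Transform>
<TimeSensor DEF='CLOCK' cycleInterval='4'/>
<OrientationInterpolator DEF='SPIN' key='0 0.5 1'
    keyValue='0 1 0 0, 0 1 0 3.14159, 0 1 0 6.28318'/>
<!-- Event routing: touch time starts the clock; the clock fraction drives
     the interpolator; the interpolated rotation drives the transform -->
<ROUTE fromNode='TOUCH' fromField='touchTime' toNode='CLOCK' toField='startTime'/>
<ROUTE fromNode='CLOCK' fromField='fraction_changed' toNode='SPIN' toField='set_fraction'/>
<ROUTE fromNode='SPIN' fromField='value_changed' toNode='MODEL' toField='set_rotation'/>
```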

The newest revision of X3D, version 3.3 (2012), includes new components and functionality for reproducible, platform-independent volume rendering. X3D's volume rendering capabilities were originally designed to improve the accessibility of 3D reconstructions of CT, MRI, PET or ultrasound presentations outside the radiology suite [5]. The expressive and composable rendering styles of X3D provide support for the visual differentiation of structures and systems in the volumes (segmentations), and the registration / fusion of two or more segmented volumes (blending). Figure 2 shows example screenshots from VT's X3D volume visualizations.

The new ISO specification Extensible 3D (X3D) adds significant volume rendering capabilities with support for different types of volume data and a broad selection of render styles.


3.1.1 Volume data

• The VolumeData node specifies a simple non-segmented volume data set that uses a single rendering style node for the complete volume.

• The IsoSurfaceVolumeData node specifies one or more surfaces extracted from a voxel data set. A surface is defined as the boundary between regions in the volume where the voxel values are larger than a given value (the iso value) on one side of the boundary and smaller on the other side, and where the gradient magnitude is larger than a predefined threshold.

• The SegmentedVolumeData node specifies a segmented volume data set that allows different rendering styles to be assigned per segment identifier. (A minimal example of these data nodes follows this list.)
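A minimal sketch of the first of these nodes in the X3D XML encoding appears below. The data file name is a placeholder, and the component statements simply request the full VolumeRendering and Texturing3D components, which the Immersive profile does not include by default.

```xml
<!-- Minimal volume scene sketch; 'brain.nrrd' is a placeholder file. -->
<X3D profile='Immersive' version='3.3'>
  <head>
    <component name='VolumeRendering' level='4'/>
    <component name='Texturing3D' level='2'/>
  </head>
  <Scene>
    <!-- dimensions are in metres; OpacityMapVolumeStyle is the
         default render style if none is specified -->
    <VolumeData dimensions='0.18 0.22 0.18'>
      <ImageTexture3D containerField='voxels' url='"brain.nrrd"'/>
    </VolumeData>
  </Scene>
</X3D>
```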

3.1.2 Render styles

• The OpacityMapVolumeStyle specifies that the associated volumetric data is to be rendered with opacity mapped through a transfer function texture. This is the default rendering style if no other is defined for the volume data (see Figure 3, left).

• The BoundaryEnhancementVolumeStyle node provides boundary enhancement, where faster-changing gradients (surface normals) are darker than slower-changing gradients. Thus, regions of different density are made more visible relative to parts that are of relatively constant density.

• The CartoonVolumeStyle generates a non-photorealistic rendering and uses two colours that are rendered in a series of distinct flat-shaded sections based on the local surface normal's closeness to the average normal, with no gradients in between (see Figure 4).

• The EdgeEnhancementVolumeStyle node specifies edge enhancement by darkening voxels based on their orientation relative to the view direction (see Figure 5).

• The ProjectionVolumeStyle node uses the voxel data directly, generating output colour from the values of the voxel data along the viewing rays from the eye point (see Figure 3, right).

• The ShadedVolumeStyle node applies the Blinn-Phong illumination model [1, 8] to volume rendering, similar to the model used for polygonal surfaces.

• The SilhouetteEnhancementVolumeStyle specifies that the associated volumetric data shall be rendered with silhouette enhancement: the basic volume is enhanced by darkening voxels based on their orientation relative to the view direction.

• The ToneMappedVolumeStyle node specifies that the associated volumetric data is to be rendered using the Gooch shading model [3] of two-toned warm/cool colouring. Two colours are defined, a warm colour and a cool colour; the renderer shades between them based on the orientation of the voxel relative to the user (see Figure 6).

• The BlendedVolumeStyle combines the rendering of two voxel data sets into one by blending the values according to a weight function.

• Finally, the ComposedVolumeStyle node is a special rendering style node that allows compositing multiple rendering styles together into a single rendering pass (see Figure 6, and the sketch after this list). ProjectionVolumeStyle is the only style mentioned above that is not composable.
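The following sketch composes two of the styles above in a single pass, in the spirit of Figure 6; the colour and threshold values shown are illustrative, and the data file name is again a placeholder.

```xml
<!-- Compositing render styles: tone mapping plus edge darkening.
     Field values are illustrative. -->
<VolumeData dimensions='0.18 0.22 0.18'>
  <ImageTexture3D containerField='voxels' url='"brain.nrrd"'/>
  <ComposedVolumeStyle containerField='renderStyle'>
    <ToneMappedVolumeStyle coolColor='0.2 0.2 0.8 0' warmColor='1 0.6 0.2 0'/>
    <EdgeEnhancementVolumeStyle edgeColor='0 0 0 1' gradientThreshold='0.4'/>
  </ComposedVolumeStyle>
</VolumeData>
```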

Figure 3: Brain data set visualized with different render styles: OpacityMapVolumeStyle (left), which is the default style, and ProjectionVolumeStyle with a Maximum Intensity Projection (MIP) algorithm applied (right).

Figure 4: Default volume style on left and CartoonVolumeStyle on right.

Figure 5: Default volume style on left and EdgeEnhancementVolumeStyle on right.

Figure 6: Segmented volume data using a combination of OpacityMapVolumeStyle and ToneMappedVolumeStyle.


Figure 7: X3D volume visualizations using H3D in an interactive presentation, merging different image modalities in an X3D scene graph.

Figure 8: Alternative view of X3D volume visualizations using H3D in an interactive presentation, merging different image modalities in an X3D scene graph.

3.1.3 Scene graph interaction

For minimum conformance, sensor nodes that require interaction with the geometry (e.g., TouchSensor) shall provide intersection information based on the volume's bounds. An implementation may optionally provide real intersection information by casting rays into the volume space and reporting the first non-transparent voxel hit.

Navigation and collision detection carry a similar minimum conformance requirement of using the bounds of the volume. In addition, an implementation may allow greater precision with non-opaque voxels in a similar manner to the sensor interactions.
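The following sketch illustrates this minimum-conformance behavior: a TouchSensor placed as a sibling of a volume reports hits against the volume's bounds (or, in richer implementations, against the first non-transparent voxel), and the hit point is routed to move a marker. DEF names and file names are illustrative.

```xml
<!-- Bounds-based picking on a volume; DEF names are illustrative. -->
<Group>
  <VolumeData DEF='VOL' dimensions='0.18 0.22 0.18'>
    <ImageTexture3D containerField='voxels' url='"brain.nrrd"'/>
  </VolumeData>
  <TouchSensor DEF='VOL_TOUCH' description='probe the volume'/>
</Group>
<!-- A small marker that follows the reported intersection point -->
<Transform DEF='PROBE'>
  <Shape><Sphere radius='0.005'/></Shape>
</Transform>
<ROUTE fromNode='VOL_TOUCH' fromField='hitPoint_changed'
       toNode='PROBE' toField='set_translation'/>
```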

Figures 7 and 8 show examples of compositions of various scene graph elements with polygon-based visualisations seamlessly integrated with volume rendering.

3.2 H3D

H3DAPI is an open source haptics software development platform that uses the open standards OpenGL and X3D with haptics in one unified scene graph to handle both graphics and haptics. H3DAPI is released under the GNU GPL license, with options for commercial licensing. It is cross-platform (e.g., Windows, OSX and Linux) and haptic device independent (e.g., supporting CHAI3D but also proprietary APIs like SensAble's OpenHaptics). It enables audio integration as well as stereography on supported displays (e.g., ranging from quad-buffered stereo modes and spanned/tiled desktops to anaglyphic rendering). Recently, HDMI 1.3 3D-Stereo frame packing was added, which allows, for example, stereo output on consumer display devices like the head mounted display HD1 from Sony. Unlike most other scene graph APIs, H3DAPI is designed chiefly to support a rapid development process. By combining X3D, C++ and the scripting language Python, H3DAPI offers three ways of programming applications, providing the best of both worlds—execution speed where performance is critical, and development speed where performance is less critical. Most importantly in the context of this paper, H3D combines those features with the volume rendering extension of the X3D standard.
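A sketch of this hybrid model, assuming H3D's PythonScript extension node and a hypothetical script file: the declarative parts of the scene stay in X3D, while event or per-frame logic lives in Python.

```xml
<!-- H3D-style hybrid scene: declarative X3D plus scripted behavior.
     PythonScript is an H3D extension node; 'adjust_style.py' is a
     hypothetical script that would tune the render style at runtime. -->
<Scene>
  <VolumeData DEF='VOL' dimensions='0.18 0.22 0.18'>
    <ImageTexture3D containerField='voxels' url='"brain.nrrd"'/>
  </VolumeData>
  <PythonScript url='"adjust_style.py"'/>
</Scene>
```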

4 EVALUATION

4.1 Data

Figures 1 and 2 demonstrate X3D volume visualization across a number of crucial use cases including Microscopy, Paleobiology, Custom Prostheses, Surgical Planning, Informed Consent, Anatomy Education, and Non-invasive Sensing. Volumetric data may come from a variety of modalities, but as the number and resolution of scans increases, there are two chief concerns: the utility of image archives (in electronic medical records, for example) and the scalability of our current data models. In regards to the first, X3D contains a rich metadata capability, enabling the cross-referencing and fusion of image, scene graph and ontology data; as part of the original TATRC (http://www.tatrc.org) funded X3D project, we demonstrated the integration of the Foundational Model of Anatomy (FMA) and SNOMED ontologies with interactive X3D volume renderings, enabling knowledge-based interactive 3D applications [6]. The second issue (big data) clearly impacts the software and hardware challenges (cluster rendering), but also the data and file system format. Dougherty et al. [2] propose HDF5 as a binary filesystem container to handle the current explosion of image data archives.
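As a sketch of this metadata capability, the standard MetadataSet and MetadataString nodes can attach ontology references to a segmented volume. The ontology codes and file names below are placeholders, not real FMA or SNOMED identifiers.

```xml
<!-- Cross-referencing a segmented volume with ontology terms.
     Codes shown are placeholders, not real FMA/SNOMED identifiers. -->
<SegmentedVolumeData dimensions='0.18 0.22 0.18'>
  <MetadataSet containerField='metadata' name='segment-annotations'>
    <MetadataString containerField='value' name='FMA'    value='"FMA:0000000"'/>
    <MetadataString containerField='value' name='SNOMED' value='"000000000"'/>
  </MetadataSet>
  <ImageTexture3D containerField='voxels' url='"study.nrrd"'/>
  <ImageTexture3D containerField='segmentIdentifiers' url='"study-labels.nrrd"'/>
  <OpacityMapVolumeStyle containerField='renderStyle'/>
</SegmentedVolumeData>
```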

4.2 Software

The ability to process and expressively present volumetric data is a key requirement and rich in its aspects. For example, we make extensive use of ImageJ (http://imagej.nih.gov/ij) and Teem (http://teem.sourceforge.net) to load volumetric image data. For open processing pipelines, Seg3D (http://seg3d.org) and ITK-SNAP (http://itksnap.org) are robust tools supporting several interactive and algorithmic methods for segmentation. ParaView (http://paraview.org) and VisIt (https://wci.llnl.gov/codes/visit) can leverage clusters for large-scale scientific volume rendering. Voreen (http://voreen.org) uses a visual programming model to build pipelines and advanced visual effects in its renderings. DICOM still does not describe 3D presentation states, but its models of segmentations and surfaces align with X3D's geometry types.

The X3D scene graph provides the formal structure to unify several data types from resources and services across the web and portray them as interactive scenes including lights, cameras, polygons, volumes, animations and scripting. H3D (http://h3dapi.org) and InstantReality (http://instantreality.org) are two platforms that support X3D volume rendering; at Web3D and SIGGRAPH 2012, VicomTech and Fraunhofer IGD demonstrated X3D volume rendering natively in HTML5 browsers with X3DOM (http://x3dom.org) and WebGL.

The concept of sensors and event routing to fields in X3D allows for a very flexible implementation of interaction with a growing range of input devices. The open source implementation H3D, with its API based on X3D, is also a prominent example with medical simulators, featuring complex interaction schemes quickly prototyped in Python and then implemented as C++ custom nodes. Nevertheless, in future work the standard will be extended to further interaction concepts, such as advanced haptic rendering and soft tissue response. The Web3D Consortium has a User Interface Working Group that welcomes further research and development to address the challenges in interaction design and paths to support and standardize new interaction metaphors via X3D.

The combination of open standards and capable open-source tools available today is certainly a capable common denominator to deliver and interchange virtual environments. Both standards and toolkits are key ingredients for repeatability; but what about extension? Within a standard X3D runtime, there are formal methods to extend the scene graph with custom nodes, known as Prototypes (sketched below). With the open toolkits, developers have the additional option of getting 'under the hood' of the scene graph and adding their own classes and modifying existing ones. Thus new information and interaction designs can be implemented and evaluated.
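A sketch of the Prototype mechanism: a reusable volume node that exposes only a data URL and an edge colour. The prototype name, fields and file names are illustrative.

```xml
<!-- A Prototype wrapping a volume with edge enhancement; the name,
     fields and file names are illustrative. -->
<ProtoDeclare name='EnhancedVolume'>
  <ProtoInterface>
    <field accessType='initializeOnly' type='MFString' name='dataUrl' value='"brain.nrrd"'/>
    <field accessType='inputOutput' type='SFColorRGBA' name='edgeColor' value='0 0 0 1'/>
  </ProtoInterface>
  <ProtoBody>
    <VolumeData dimensions='0.18 0.22 0.18'>
      <ImageTexture3D containerField='voxels'>
        <IS><connect nodeField='url' protoField='dataUrl'/></IS>
      </ImageTexture3D>
      <EdgeEnhancementVolumeStyle containerField='renderStyle'>
        <IS><connect nodeField='edgeColor' protoField='edgeColor'/></IS>
      </EdgeEnhancementVolumeStyle>
    </VolumeData>
  </ProtoBody>
</ProtoDeclare>
<!-- Instances can then be declared like built-in nodes -->
<ProtoInstance name='EnhancedVolume'>
  <fieldValue name='dataUrl' value='"microscopy.nrrd"'/>
</ProtoInstance>
```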

4.3 Hardware

While there are many different hardware combinations possible, we will briefly mention a few suggestions for a moderate budget. With the progress in home entertainment, 3D display systems are becoming much more affordable. Recently, we added support for HDMI 1.3 3D-Stereo frame packing in H3D. This allows, for example, immersive stereo output on consumer display devices like the head mounted display HD1 from Sony. For an even lower price, the Oculus Rift HMD (http://www.oculusvr.com) should become available soon. As mentioned before, H3D supports a whole range of input/haptic devices. Good entry-level systems are the Novint Falcon (http://www.novint.com), which is limited to 3DOF input, or the SensAble Phantom Omni device (http://www.sensable.com), with 6DOF input. Both devices support 3DOF force feedback. Other immersive input options without haptic feedback are the Microsoft Kinect system (http://www.xbox.com/KINECT) or the forthcoming desktop equivalent, the Leap Motion Controller (https://leapmotion.com/).

5 DISCUSSION

X3D has proven to be a robust and extensible presentation standard to improve interactive visual access to volume and medical image data and clinical ontologies. Focusing on X3D volume rendering adoption, conformance and extension can bring the full power of X3D to bear, improving efficiency and outcomes. For example, the creation of open workflows and mobile HTML5 clients for 3D display of medical imaging data will stimulate innovation and lower the barrier to entry for clinicians, students and the layman alike.

This new ISO standardization has more far-reaching effects in that the imaging data is used not only for diagnosis and treatment, but as source material in physiological and surgical simulators (e.g., [13]). The surgical simulation field has traditionally been a mosaic of parallel efforts that produce cutting-edge simulations of specific organ systems, specific organs or general tissue dynamics, which are isolated products that cannot communicate with each other. To align these efforts so that the 'best of breed' organ- or system-specific models can be brought together, and so that we can begin to realize a truly computational representation of a human being, standards such as X3D for 3D representation of medical imaging data must be in place. Additional extensions to this standard (such as [14]) will enable simulators to conform to approved specifications and allow more valid comparisons between them when attempting to judge their fidelity and utility.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the contributions of the international standards community, especially the Web3D Consortium, in building a rich platform for volume rendering. Also, thanks to Abhijit A. Gurjarpadhye for his X3D cell image visualizations. This work was supported in part by US TATRC (W81XWH-06-1-0096) and Virginia Tech Advanced Research Computing.

REFERENCES

[1] J. F. Blinn. Models of light reflection for computer synthesized pictures. ACM SIGGRAPH Computer Graphics, 11(2):192–198, 1977.

[2] M. T. Dougherty, M. J. Folk, E. Zadok, H. J. Bernstein, F. C. Bernstein, K. W. Eliceiri, W. Benger, and C. Best. Unifying biological image formats with HDF5. Communications of the ACM, 52(10):42–47, 2009.

[3] A. Gooch, B. Gooch, P. Shirley, and E. Cohen. A non-photorealistic lighting model for automatic technical illustration. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '98, pages 447–452, New York, NY, USA, 1998. ACM.

[4] L. Hong, A. Kaufman, Y.-C. Wei, A. Viswambharan, M. Wax, and Z. Liang. 3D virtual colonoscopy. In Proceedings of Biomedical Visualization '95, pages 26–32, 1995.

[5] N. W. John. Design and implementation of medical training simulators. Virtual Reality, 12(4):269–279, 2008.

[6] N. W. John, M. Aratow, J. Couch, D. Evestedt, A. D. Hudson, N. Polys, R. F. Puk, A. Ray, K. Victor, and Q. Wang. MedX3D: standards enabled desktop medical 3D. Studies in Health Technology and Informatics, 132:189–194, 2008.

[7] D. Jönsson, E. Sundén, A. Ynnerman, and T. Ropinski. Interactive volume rendering with volumetric illumination. In Eurographics 2012 - State of the Art Reports, pages 53–74, 2012.

[8] B. T. Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311–317, 1975.

[9] N. Polys and N. John. IEEE VR 2010 Workshop on Medical Virtual Environments. http://www.hpv.cs.bangor.ac.uk/vr10-med, 2010.

[10] N. Polys and A. Wood. New platforms for health hypermedia. Issues in Information Systems, 13(1):40–50, 2012.

[11] N. F. Polys, D. Brutzman, A. Steed, and J. Behr. Future standards for immersive VR: report on the IEEE Virtual Reality 2007 workshop. IEEE Computer Graphics and Applications, 28(2):94–99, 2008.

[12] R. Smith-Bindman, D. L. Miglioretti, E. Johnson, C. Lee, H. S. Feigelson, M. Flynn, R. T. Greenlee, R. L. Kruger, M. C. Hornbrook, D. Roblin, L. I. Solberg, N. Vanneman, S. Weinmann, and A. E. Williams. Use of diagnostic imaging studies and associated radiation exposure for patients enrolled in large integrated health care systems, 1996–2010. JAMA, 307(22):2400–2409, June 2012.

[13] S. Ullrich and T. Kuhlen. Haptic palpation for medical simulation in virtual environments. IEEE Transactions on Visualization and Computer Graphics, 18(4):617–625, April 2012.

[14] S. Ullrich, T. Kuhlen, N. F. Polys, D. Evestedt, M. Aratow, and N. W. John. Quantizing the void: extending Web3D for space-filling haptic meshes. Studies in Health Technology and Informatics, 163:670–676, 2011.