



- STUDIENARBEIT -

Integration of a

Hardware Volume Renderer

into a Virtual Reality Application

carried out at
VRVis, Zentrum für Virtual Reality und Visualisierung

in cooperation with the
Institut für Computervisualistik
Universität Koblenz-Landau

Supervisors: Dr. Katja Bühler, Dr. Anton Fuhrmann

Examiner: Prof. Dr.-Ing. Stefan Müller

submitted by

Andrea Kratz, Matr. Nr. 200210191

A-1040 Wien, Favoritenstr. 25/2/18

October 2005


Abstract

This work presents a system which offers a visualization of volume data within a virtual environment. It deals with the integration of high-quality hardware-based volume rendering into the virtual environment Studierstube.

In diagnosis, radiotherapy, surgery and treatment planning, knowledge of the relative placement and size of anatomical structures is indispensable. Therefore, volumetric visualization of medical data sets acquired by Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) has become more and more important. Usually the two-dimensional MRI or CT images have to be mentally assembled to get an idea of the real three-dimensional object. Volume rendering, in contrast, enables the visualization of the whole 3D object - even of inner structures. A virtual environment additionally provides a truly three-dimensional, and therefore natural, view of the data, which could simplify their interpretation significantly. Furthermore, 3D interaction methods should provide a natural way of interacting with the data.

When talking about virtual reality, highly interactive frame rates (at least 10 fps) have to be achieved - an especially great challenge in the combination of volume and stereo rendering. For a seamless combination of the libraries, the views of both have to be synchronized. In addition, occlusions between the volume and the OpenGL geometry have to be considered to create a sense of space, which is essential for intuitive interaction.

A system will be presented that realizes a nearly seamless integration of perspective direct volume rendering into virtual reality while achieving about 11 fps for a dataset of size 256 x 256 x 128 and an image resolution of 1024 x 768. Furthermore, the interaction methods that were implemented will be introduced. This work ends with still open problems, like the user interface, additional direct manipulation widgets and the question of how to integrate transfer function design into such a VR system.


Kurzfassung

This Studienarbeit presents a system that enables the visualization of volume data within the polygonal rendering of a virtual environment. It describes the integration of high-quality hardware-based volume rendering into the virtual environment Studierstube.

In diagnosis, surgical planning and radiotherapy, knowledge of the size and spatial relation of structures to each other is indispensable. For this reason, the volumetric visualization of medical data sets (e.g. MRI or CT scans) is becoming increasingly important. Normally, the two-dimensional MRI or CT slice images have to be mentally assembled into a three-dimensional image to get an idea of the underlying object. In contrast, volume rendering techniques enable a true three-dimensional visualization of the object - even of inner structures. A virtual environment additionally provides a stereoscopic, and therefore natural, view of the volume, which can considerably ease the interpretation of the data. In addition, three-dimensional interaction methods are intended to allow a natural handling of the data.

To justify the term virtual reality, interactive frame rates (at least 10 fps) have to be achieved - a great challenge, especially in the combination of volume and stereo rendering. For a nearly seamless integration of the volume renderer into the polygonal rendering of Studierstube, the volume and the OpenGL geometry must occlude each other. Otherwise no sense of space would arise, which is indispensable for intuitive interaction.

A system is presented that realizes an integration of direct volume rendering into a virtual environment. Frame rates of up to 11 fps can be achieved for a dataset of size 256 x 256 x 128 and an image resolution of 1024 x 768. Furthermore, the implemented interaction methods are presented. This Studienarbeit closes with questions that remain open, such as the design of the user interface, further widgets for manipulating the volume and, finally, the question of how transfer function design can be integrated into such a VR system.


Contents

1 Introduction
  1.1 Motivation
  1.2 Problem Statements and Objectives
  1.3 Structure

2 Related Work
  2.1 Hardware Volume Renderer
  2.2 Studierstube
    2.2.1 Personal Interaction Panel
    2.2.2 Contexts and Applications
  2.3 Coin3D

3 Fundamentals
  3.1 Scene Graphs
  3.2 Stereo Rendering
  3.3 What is Virtual Reality?
  3.4 Virtual Reality in Medicine
  3.5 Volume Rendering - An Overview
    3.5.1 Volume Rendering Pipeline
    3.5.2 The Volume Rendering Integral
    3.5.3 Basic GPU Ray Casting

4 Integration
  4.1 Viewing
  4.2 Image transfer
    4.2.1 Pixel Buffers
    4.2.2 Implementation Details
  4.3 Intersections between OpenGL geometry and volume data
    4.3.1 Vertex and Fragment Programs
    4.3.2 Framebuffer Objects
    4.3.3 Implementation Details

5 VR Interaction with Volume Data
  5.1 Concept
  5.2 Navigation
  5.3 Clipping
  5.4 Slice View
  5.5 Lighting
  5.6 Transfer Functions

6 Results
  6.1 Integration
  6.2 Interaction
    6.2.1 Navigation
    6.2.2 Clipping Cube Widget
    6.2.3 Slice View Widget
    6.2.4 Lighting Widget
  6.3 Performance

7 Conclusion and Future Work
  7.1 Conclusion
  7.2 Future Work

Acknowledgments

Bibliography


Chapter 1

Introduction

The main objective of this work is the integration of high-quality hardware-based volume rendering into virtual reality (VR). It documents the implementation process as well as the development of 3D interaction and navigation methods, based on the Studierstube framework. Furthermore, a brief introduction to the main topics - virtual reality, volume rendering and virtual reality in medicine - is given.

1.1 Motivation

Why is it useful to combine volume rendering and virtual reality?

Usually, radiologists and surgeons have to mentally assemble the two-dimensional MRI or CT images to get an idea of the real three-dimensional object. Volume rendering, in contrast, enables the visualization of the 3D object, even of inner structures. Further, a stereoscopic view of such a visualization can facilitate its interpretation significantly.

Providing the user with a truly three-dimensional, and therefore natural, view of the data would make it easier for medical students to understand and gain knowledge of anatomical structures, e.g. their size and relative placement.

When using a virtual environment as a user interface, the user becomes an active participant within the virtual world [Dam et al. '00] instead of an external observer, as head and hand tracking ensure that the user becomes the application's center. Three-dimensional interaction methods should provide a natural way of handling the data.

The combination with virtual reality is especially useful in the field of volume rendering, because it enables direct interaction with and manipulation of the volume in a natural way.

1.2 Problem Statements and Objectives

This section first summarizes the problem at large. Secondly, the problems remaining for this work are outlined.

Medical visualization always has to deal with a trade-off between image quality and efficiency. A high level of interactivity in virtual environments is


absolutely essential. Only if the user is able to navigate, select, pick, move and manipulate [Dam et al. '00] the virtual object in a natural way can we speak of VR. It is clear that this can only be achieved in real time. The quality of the images is also very important, as it is the basis for any kind of diagnosis. Furthermore, it is obvious that a perspective projection is needed, because the visualization should facilitate the evaluation of the relative placement and scale of anatomical structures. The volume renderer developed at the VRVis research center meets all these requirements: it is able to render high-quality images of volume data sets in a perspective view in real time.

As mentioned in 1.1, a stereoscopic view of volume-rendered images can facilitate their interpretation significantly. Therefore, the objective of this work is to integrate volume rendering into the VR framework Studierstube. This leads to the following tasks. To achieve a seamless combination of the two libraries, the views of both systems have to be synchronized. In addition, the objects rendered by the different libraries have to occlude each other; otherwise, the user would never gain a sense of space. These intersections are also essential for an intuitive interaction with the volume, which should be as natural as possible.

The result of this work is an application that realizes such a seamless integration, including correct intersections between the rendered volume and the OpenGL geometry. Achieving interactive frame rates in combination with direct volume rendering and a stereoscopic view is not an easy task, as the whole scene has to be rendered twice. Nevertheless, about 11 fps could be achieved for a dataset of size 256 x 256 x 128 and an image resolution of 1024 x 768 on an NVIDIA Quadro FX 3400/4400 graphics card. However, loading a dataset of size 512 x 512 x 333 leads to a dramatic loss of performance: a maximum of 2 fps could be achieved, which is not feasible for use in a virtual environment. Navigation and interaction are hardly possible under these conditions - one cannot speak of virtual reality or intuitive navigation. In that case, the HVR framework offers the possibility to change the image resolution. For example, one may reduce the resolution during interaction (like clipping using the clipping cube widget) and switch back to the high resolution afterwards to view the clipped volume in its best quality again.

1.3 Structure

Initially, related work is presented in chapter 2. Before getting into the main part - the process of integration - the relevant fundamentals are introduced in chapter 3. Special attention is paid to volume rendering and GPU ray casting. The end of that chapter is devoted to an introduction to VR in medicine. Chapter 4 finally documents the integration; the primary concern is occlusion between OpenGL geometry and the rendered volume. Next, the different interaction and navigation methods are explained in chapter 5. Chapter 6 focuses on the results, and chapter 7 ends with conclusions and possible future work.


Chapter 2

Related Work

Several existing systems have been combined to accomplish this work. This chapter introduces the HVR framework used for volume rendering and the Studierstube virtual environment. In addition, Coin3D - a scene graph API for building three-dimensional scenes - is presented, because it forms the foundation of the Studierstube API.

2.1 Hardware Volume Renderer

The Hardware Volume Renderer (HVR), developed at the VRVis Research Center over the past three years, combines high-quality rendering with real-time performance. It is a rendering framework that provides different rendering modes (isosurface rendering, direct volume rendering) and different rendering techniques, for example two-level volume rendering [Hadwiger et al. '03]. To distinguish individual objects of interest contained in one single dataset, two-level volume rendering combines multiple rendering techniques (such as isosurface, direct volume and illustrative rendering). Furthermore, the HVR framework provides a completely hardware-based segmentation.

Figure 2.1: The HVR framework is capable of rendering such high-quality images at about 12 fps on an NVIDIA GeForce 6 graphics card. The image shows a rendering of a CT scan of a human head of size 512 x 512 x 333 using a rendering mode that combines isosurface and direct volume rendering.


2.2 Studierstube

Studierstube is an enhanced virtual reality system that supports various input and output devices. As discussed in 3.3, this is extremely useful, because almost every VR application needs a different hardware setup due to different requirements. Furthermore, Studierstube can be considered a framework that eases the creation of VR applications.

The Studierstube API is written in C++ and built on top of Coin3D. It implements basic interaction methods like navigating and manipulating objects, as well as 2D interaction elements like sliders and buttons. One of the basic ideas of Studierstube was the embedding of computer-generated images into a real work environment [Schmalstieg et al. '02], which is controlled by the Personal Interaction Panel (PIP).

2.2.1 Personal Interaction Panel

The Personal Interaction Panel (PIP) (see figure 2.2) consists of a pen and a panel. The user controls the application by holding the panel in his off-hand and the pen in his primary hand [Schmalstieg et al. '02]. The panel as well as the pen are equipped with - in our case - optical trackers and augmented with a so-called pip sheet, the application's user interface. The pip sheet is represented by a scene graph storing widgets for application control. Virtual objects are either controlled by these widgets (changing modes, manipulating parameters) or directly by using the pen.

Figure 2.2: The personal interaction panel consists of a pen and a panel.


2.2.2 Contexts and Applications

Every Studierstube application is a subclass of the foundation class SoContextKit. An application is a context (see figure 2.3) represented by a single scene graph consisting of Inventor nodes (introduced in section 2.3). The application's root node in turn is embedded in the scene graph that builds the Studierstube framework. A context consists of a 3D window associated with a user-specified scene graph (client area) and a pip sheet. Both window and pip sheet geometry should be passed to the application using a loader file, as shown in figure 2.4.

The application code itself mainly consists of callback functions, some calling OpenGL code and others responding to certain 3D events (pen movement, button press). Applications are compiled as shared libraries (DLLs) and can therefore be loaded into the framework at runtime.


Figure 2.3: A context is implemented as a node and consists of a 3D window, a user-specified scene graph (the application) and at least one pip sheet.


#Inventor V2.1 ascii

DEF VRSQUARED SoApplicationKit {
    classLoader SoClassLoader {
        className "VRS_SoKit"
        fileName  "../apps/vrsquared/vrsquared"
    }
    SoContextKit VRS_SoKit {
        userID 0

        # pip sheet
        templatePipSheet SoPipSheetKit {
            pipParts ( PIP_BODY | SHEET | SHEET_TABS )
            autoScaling FALSE
            sheets SoSwitch {
                File { name "sheets/VRS_MainSheet.iv" }
            }
        }
        clonePipSheet FALSE

        # 3D window
        windowGroup Group {
            SoWindowKit {
                title "VRsquared"
                state MAXIMIZED
            }
        }
    } # end of SoContextKit
} # end of SoApplicationKit

Figure 2.4: Example inventor file (*.iv) representing the class loader.


2.3 Coin3D

Coin3D [Motion AS '05] is a C++ class library built on top of OpenGL. It is used to build and manipulate three-dimensional scenes. Coin3D implements Open Inventor (OIV), a high-level 3D API. Like OIV, it is based on the idea of scene graphs consisting of nodes. Nodes in turn contain information about 3D objects, for example their shape and material. By connecting nodes to each other, one can create a hierarchical structure that represents a three-dimensional scene - the scene graph. A simple example is given in figure 2.5.

The basic Inventor node types are [Wernecke '94]:

• Shape Nodes: represent three-dimensional geometric objects (sphere, cube).

• Transformation Nodes: affect the objects' movement.

• Property Nodes: affect the scene's appearance (material, light).

• Group Nodes: act as containers, enclosing multiple nodes within a scene graph.

Furthermore, Coin3D is not only used to create 3D scenes, but also to implement 3D interaction methods and animations. Engines, for example, are used to create animations by linking fields of different nodes.
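To illustrate these node types, the following minimal Coin3D sketch builds a scene graph similar to the one in figure 2.5; the concrete material and translation values are arbitrary and serve only as an example.

#include <Inventor/SoDB.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoMaterial.h>
#include <Inventor/nodes/SoTranslation.h>
#include <Inventor/nodes/SoCone.h>
#include <Inventor/nodes/SoSphere.h>

SoSeparator* buildFigureScene()
{
    SoDB::init();                              // initialize the Coin3D database (once per application)

    SoSeparator* root = new SoSeparator;       // group node: the scene graph root
    root->ref();

    SoMaterial* material = new SoMaterial;     // property node: affects all following shapes
    material->diffuseColor.setValue( 0.8f, 0.2f, 0.2f );
    root->addChild( material );

    SoSeparator* figure1 = new SoSeparator;    // group node enclosing the first figure
    SoTranslation* move1 = new SoTranslation;  // transformation node
    move1->translation.setValue( -2.0f, 0.0f, 0.0f );
    figure1->addChild( move1 );
    figure1->addChild( new SoCone );           // shape node
    root->addChild( figure1 );

    SoSeparator* figure2 = new SoSeparator;    // second figure, translated to the right
    SoTranslation* move2 = new SoTranslation;
    move2->translation.setValue( 2.0f, 0.0f, 0.0f );
    figure2->addChild( move2 );
    figure2->addChild( new SoSphere );
    root->addChild( figure2 );

    return root;
}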


Figure 2.5: Simple scene graph that draws two figures, as they might be used in a game.


Chapter 3

Fundamentals

This chapter introduces the fundamentals needed for the following chapters. Firstly, a definition of scene graphs is given. Secondly, it is explained how the illusion of stereopsis is created. Thirdly, section 3.3 focuses on the definition of VR. In section 3.4 the impact of virtual reality in medicine is discussed. Section 3.5 is devoted to the field of volume rendering. After presenting different algorithms, it is stated why we decided to use a ray casting approach, which is finally introduced in more detail in 3.5.3. As the topic of this work is not volume rendering itself, but the integration of volume rendering into VR, only a brief introduction is given. The interested reader may refer to [Engel et al. '04], an excellent tutorial on real-time volume graphics. Additionally, [Scharsach '05b] and [Scharsach '05a] give a detailed introduction to GPU ray casting.

3.1 Scene Graphs

As a reminder, Coin3D uses scene graphs to store data (e.g. geometries, textures), and every Studierstube application is represented as a scene graph.

A scene graph is a data structure represented as a directed acyclic graph (DAG). It stores a 3D scene and makes it possible to define the position and orientation of 3D objects (i.e. nodes) relative to each other. All the information about the scene is stored in nodes (lighting, transformations, camera position and orientation, textures, etc.).

The basic idea behind scene graphs is efficiency. They not only offer a more efficient rendering of large scenes but also minimize state changes [Muller '04a]. By traversing the graph from left to right and top to bottom, a traversal state is carried along and interpreted as OpenGL calls. In doing so, each node inherits attributes of its parent node(s), and therefore state changes are minimized.

A node in a DAG can have multiple parent nodes (it is not a tree), so it is possible to have references between nodes, which would not be possible in a tree. Such references are useful when connecting fields of different nodes. For instance, one may connect the field of a frame rate timer with a text node to display the frames per second.
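The multiple-parent property can be demonstrated with a small Coin3D sketch: a single shape node is added to two group nodes, so the geometry is stored once but rendered under each parent's transformation. The translation values are arbitrary example values.

#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoTranslation.h>
#include <Inventor/nodes/SoSphere.h>

SoSeparator* buildSharedNodeScene()
{
    SoSeparator* root = new SoSeparator;
    root->ref();

    SoSphere* sharedSphere = new SoSphere;     // this node will get two parents

    SoSeparator* left = new SoSeparator;       // first parent
    SoTranslation* moveLeft = new SoTranslation;
    moveLeft->translation.setValue( -1.0f, 0.0f, 0.0f );
    left->addChild( moveLeft );
    left->addChild( sharedSphere );

    SoSeparator* right = new SoSeparator;      // second parent referencing the same node
    SoTranslation* moveRight = new SoTranslation;
    moveRight->translation.setValue( 1.0f, 0.0f, 0.0f );
    right->addChild( moveRight );
    right->addChild( sharedSphere );           // no copy is made; the graph is a DAG, not a tree

    root->addChild( left );
    root->addChild( right );
    return root;
}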

We can also use scene graphs to reduce visibility checks and therefore render only the relevant parts of a scene. As an example, imagine a game composed of multiple rooms.


The scene graph contains all the rooms, the character, the lights, the camera(s), and so on. To reduce visibility tests, the whole scene is separated into multiple bounding volume (BV) hierarchies, each applied to a group node (e.g. a room, whose children in turn are doors and chairs). If the character is located in room A, and A is the parent node of a couple of other rooms that lie behind A, the whole subtree beneath A does not have to be rendered, or even checked for visibility. In particular, the chairs and doors within an occluded room do not have to be rendered at all.

A scene graph at least consists of [Muller '04a] (remember the basic Inventor nodes introduced in 2.3):

• Groups

• Transformations

• Geometries (including their material or texture), and

• a Render() method

3.2 Stereo Rendering

Figure 3.1: Schematic representation of Off-axis stereo rendering. Image taken from [Bourke ’99].

The distance between the two human eyes leads to two different images perceived by the left and the right eye, respectively. Stereoscopic systems use this effect to create a perception of depth.

Studierstube provides the class SoStereoCameraKit, consisting of two SoOffAxisCameras (modeling the two eyes) that are able to render off-axis. The distance between the cameras' viewpoints has to match the distance between the left and right eye.

The off-axis method, depicted in figure 3.1, uses two non-symmetric frusta.¹ This ensures that the projection plane is identical for both eyes; otherwise the eyes would converge, which would lead to a wrong stereo effect.

¹ When using OpenGL, glFrustum() is needed, because gluPerspective() is limited to symmetric frusta.


Recapitulating, the quality of the stereo effect depends on the distance of the two cameras to the projection plane and on the distance between the left and right camera. If either distance is too large, a so-called hyperstereo effect appears, which means that the user has the feeling of his eyes being too far apart. This may be perceived as unpleasant.
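A minimal OpenGL sketch of the off-axis setup is given below; the parameters eyeSep, focalDist, fovY, aspect, zNear and zFar are assumed symbolic values (Studierstube itself uses SoOffAxisCameras for this, so the code only illustrates the underlying frustum computation). Each eye's frustum is shifted horizontally so that both eyes share the same projection plane.

#include <GL/gl.h>
#include <cmath>

// side = -1 for the left eye, +1 for the right eye; fovY is given in radians.
void setOffAxisFrustum( int side, double eyeSep, double focalDist,
                        double fovY, double aspect, double zNear, double zFar )
{
    double top   = zNear * tan( fovY * 0.5 );
    double right = top * aspect;
    // horizontal shift of the frustum, scaled to the near plane
    double shift = 0.5 * eyeSep * zNear / focalDist;

    glMatrixMode( GL_PROJECTION );
    glLoadIdentity();
    glFrustum( -right - side * shift, right - side * shift,   // asymmetric left/right planes
               -top, top, zNear, zFar );

    glMatrixMode( GL_MODELVIEW );
    glLoadIdentity();
    // move the camera by half the eye separation
    glTranslated( -side * 0.5 * eyeSep, 0.0, 0.0 );
}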

3.3 What is Virtual Reality?

”[an] experience [. . . ] in which the user is effectively immersed in a responsivevirtual world”. Frederick P. Brooks, Jr.

The main properties defining the term virtual reality are [Muller ’04b]:

• Immersion: the user should feel surrounded by - and therefore as part of - the virtual world.

• Interactivity: the environment should respond in a natural way to human interactions. Therefore the application has to be highly interactive.

• Multi-modal Interaction

VR is mainly a technology. The ability to create a feeling of immersion depends largely on the different input and output devices, each suited to different requirements. Without head tracking, for example, it would not be possible to determine the user's point of view; the user would therefore not have the feeling of being the application's center, which would make him feel less immersed.

Interaction is provided by different input devices. The first VR input device was the data glove. Although one may assume that navigation with such a glove is very intuitive, data gloves have a low accuracy and may therefore be cumbersome in use. Other input devices are space mice, game pads or even locomotion devices, where the user is standing on a kind of treadmill. Meanwhile, there also exist input devices with haptic feedback (mostly gloves). The last input device that should be mentioned is a 3D pointer, which can be compared to a 2D mouse. The pen used in this project is such a device, offering six degrees of freedom (DOF), so it can be moved arbitrarily. It is equipped with optical trackers and two buttons (like a usual PC mouse). Additionally, the passive tactile feedback experienced when the pen touches the panel [Schmalstieg et al. '02] creates a feeling of presence.


Figure 3.2: Polarization glasses used for passive projection (left) and shutter glasses used for active projection (right).

For three-dimensional display, head-mounted displays (HMDs) can be used. They create a good feeling of immersion because of their large field of view (FOV). One of their disadvantages is that the real environment cannot be perceived by the user, so it is difficult to manipulate real input devices.

For this work a kind of workbench in combination with shutter glasses (see figure 3.2) was available. A workbench is an active projection screen. Contrary to passive projection, only one projector is used. The images for the left and right eye are displayed alternately (shuttered), while the other eye is blacked out. A disadvantage of using shutter glasses is the appearance of ghosting effects. An advantage of this kind of technology in combination with head tracking is that the user is able to view the virtual table (see figure 3.3) from arbitrary directions.

Figure 3.3: The virtual table equipped with optical trackers.

Figure 3.4 shows a VR setup using passive back projection, for which two projectors are needed. The two images for the left and right eye are displayed simultaneously. The user wears polarization glasses that separate the two images again. Such displays are well suited for collaborative work, as many people can sit in front of the projection


screen. Further, the glasses used for this kind of projection are cheaper than, for example, shutter glasses.

Figure 3.4: Projection Screen with back projection.

The kind of projection can also differ between front projection (projector in front of the projection screen) and back projection (projector behind the screen). The main drawback of front projection is that the person in front of the screen casts shadows. The penalty of back projection is the requirement of space, which is often not available. However, the quality of the projected images is much better with back projection.

3.4 Virtual Reality in Medicine

Generally, the idea of medical visualization is to help clinicians in their daily work. Medical reconstructions, for example, serve to simulate unusual interventions. They aid communication between colleagues, especially across disciplines, when radiologists and surgeons are working together.

In the domain of medical visualization in virtual reality, one has to distinguish between two main application areas [Riva '03]. Surgeons, for example, want a realistic presentation of virtual objects. Psychologists, in contrast, take advantage of the user being the application's center: as the user is an active participant within the virtual world, this can help him learn how to handle difficult, or even fearful, situations.

Virtual environments can be used in surgery. For example, a simulation could help in training and planning a surgical procedure. Augmented reality can be used in open surgery, helping to navigate the surgical instruments, which could reduce the time needed for the operation. Our main interest is in education: the system should support students' understanding of basic anatomy. Exploring the bones and organs by "flying around, behind, or even inside them" [Dam et al. '00] will lead to a deeper understanding.

A big problem of medical visualization and VR in medicine is validation. Before such applications can be used in clinical routine, they have to be validated. For this reason, a use in education can be very helpful. Another difficulty is that clinicians have to accept the


new technique. An early use in education would familiarize medical doctors with such new technologies already during their studies and could therefore lead to more acceptance.

3.5 Volume Rendering - An Overview


Figure 3.5: Classification of different algorithms used for volume rendering.

Volume rendering is a technique for visualizing three-dimensional scalar data. The main application area is certainly in medicine, where volume data is obtained from CT or MRI images.² Both produce stacks of two-dimensional images (x-y planes) along the z-axis. Usually the viewer has to put these slices together mentally, viewing the images one after another, to gain an idea of the real three-dimensional object. Volume rendering offers the possibility to look at the data in 3D from an arbitrary point of view.

There is a wide range of algorithms used for volume rendering, depicted in figure 3.5. One has to distinguish between indirect and direct methods. Indirect methods describe a surface and cannot be considered 'real' volume rendering. The marching cubes algorithm, first introduced in [Lorensen, Cline '87], is such an indirect algorithm for rendering isosurfaces. The idea is to pass over each cell of the grid, determining whether a surface crosses this cell or not. If such a transition is found, a geometric approximation to that surface is constructed. The extracted surface is rendered by a polygon rendering algorithm. Therefore, indirect methods can be considered surface rendering. In contrast, direct volume rendering (DVR) visualizes the 3D dataset directly, without any intermediate steps like a surface representation. DVR techniques assume an optical model [Engel et al. '04], which describes the behavior of light passing through the volume (various optical models are described in [Max '95]). Based on the optical model, a transfer

² All following chapters assume the kind of volume visualization used in medical applications.


function assigns a color and an opacity value to each voxel. Direct methods can in turn be separated into object-order and image-order approaches.³

Object-order approaches rely on the object itself. The final pixel color is accumulated by iterating through all parts of the object (generally voxels). This image composition step determines the contribution of each voxel (or slice) to the final image.

Image-order approaches rely on the image that is finally to be generated. For each pixel in the resulting image, a ray is cast through the volume, and the values along that ray are gathered and then blended. Maximum intensity projection (MIP) uses the maximum value gathered along a ray to determine the color of a pixel. The integration along a ray is turned into a search along the ray [Shirley '02], which is obviously more efficient but not as informative.

This work uses GPU-based ray casting, because it offers very good image quality and additionally enables a perspective view of the dataset. Furthermore, the HVR volume rendering library used here is capable of rendering these high-quality images in real time. Therefore there is no reason to switch to object-order approaches like splatting, which are assumed to be faster, but at the cost of image quality. Basic GPU ray casting is explained in more detail in section 3.5.3.

3.5.1 Volume Rendering Pipeline


Figure 3.6: Basic volume rendering pipeline, introduced by [Levoy ’88].

As mentioned before, volume rendering deals with the visualization of 3D scalar data. Figure 3.6 shows a basic volume rendering pipeline, which will now be discussed in more detail. First, the scalar data has to be obtained by image acquisition (e.g. CT, MRI). Second, it is prepared for volume rendering by sampling the initial two-dimensional slices. This

³ Domain-based algorithms are only mentioned here and are not considered in this work.


step may also include some preprocessing for image enhancement (e.g. noise reduction, contrast enhancement, edge enhancement), which can be useful for further operations, for instance segmentation. The sampling results in voxel values, each describing an absorption coefficient or, alternatively, a volume density. Third, the prepared values are used as input to the classification step,⁴ where a transfer function assigns color and opacity to each voxel. Next, these values are resampled, for example along a ray that is cast into the array of prepared values from the user's point of view (ray casting). The resampled color and opacity values are merged with each other and with the white background by compositing, in back-to-front (see 3.1) or front-to-back (see 3.2) order. The result of the compositing step, which can also be regarded as image blending, is a single color C. The compositing calculation is a linear interpolation between the accumulated colors. C_dst, α_dst and C_src, α_src are the color and opacity values of the color buffer and the incoming fragment, respectively [Kruger, Westermann '03].

C_{dst} = (1 - \alpha_{src})\, C_{dst} + \alpha_{src} C_{src} \qquad (3.1)

C_{dst} = C_{dst} + (1 - \alpha_{dst})\, \alpha_{src} C_{src} \qquad (3.2)

where

\alpha_{dst} = \alpha_{dst} + (1 - \alpha_{dst})\, \alpha_{src} \qquad (3.3)
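Expressed as code, one front-to-back compositing step (equations 3.2 and 3.3) could look like the following minimal sketch; accum is the color and opacity accumulated so far along the ray and src the classified value of the current sample.

struct RGBA { float r, g, b, a; };

// One front-to-back compositing step, cf. equations 3.2 and 3.3.
void compositeFrontToBack( RGBA& accum, const RGBA& src )
{
    float weight = ( 1.0f - accum.a ) * src.a;   // (1 - alpha_dst) * alpha_src
    accum.r += weight * src.r;
    accum.g += weight * src.g;
    accum.b += weight * src.b;
    accum.a += weight;                           // alpha_dst update (3.3)
}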

3.5.2 The Volume Rendering Integral

How is light transported through the volume?

DVR techniques all assume an optical model (in general, the emission-absorption model, shown in figure 3.7) which describes how light is transported through the volume. The amount of radiant energy reaching the observer's eye is calculated by solving the volume rendering integral [Engel et al. '04]. This process can be regarded as the calculation of how much of the light's energy remains when it reaches the viewpoint. As mentioned

Figure 3.7: The amount of radiant energy reaching the observer’s eye has to be calculated. Imagetaken from [Engel et al. ’04]

above, in image-based approaches like ray casting, a ray is cast through the volume for each pixel in the resulting image. Such a viewing ray is parametrized by the distance t to the viewpoint and denoted by \vec{x}(t). A scalar value (i.e. volume density) at a sample point t is denoted by s(\vec{x}(t)). This value is mapped to its physical quantities, which describe the

⁴ Classification can be done either before resampling (pre-classification) or afterwards (post-classification). See [Engel et al. '04] for more information.


emission c(t) and absorption κ(t) of light at that point (classification). These mappings are denoted as functions of the eye distance t.

c(t) := c(s(\vec{x}(t))) \qquad (3.4)

\kappa(t) := \kappa(s(\vec{x}(t))) \qquad (3.5)

In short, emission can be characterized as color, and absorption as opacity. The absorption along the ray is specified by the optical depth.

\tau(d_1, d_2) := \int_{d_1}^{d_2} \kappa(t)\, dt \qquad (3.6)

The outcome of this is the volume rendering integral.

C =

∞∫0

c(t) · e−τ(0,t)dt (3.7)
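In practice the integral cannot be solved analytically for arbitrary transfer functions; it is evaluated numerically by sampling the ray with a step size Δt and accumulating the samples with the compositing equations of section 3.5.1. A common discretization (a sketch following the derivation in [Engel et al. '04]) is

C \approx \sum_{i=0}^{n} C_i \prod_{j=0}^{i-1} (1 - \alpha_j),
\qquad
\alpha_j \approx 1 - e^{-\kappa(j \cdot \Delta t)\, \Delta t},
\qquad
C_i \approx c(i \cdot \Delta t)\, \Delta t

Front-to-back compositing, as used in ray casting, evaluates this sum incrementally and allows the traversal to stop early once the accumulated opacity approaches one.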

3.5.3 Basic GPU Ray Casting

Ray casting can be considered the "most direct" numerical method for evaluating the volume rendering integral [Engel et al. '04]. This section gives a brief introduction to the basic GPU approach, as proposed by [Kruger, Westermann '03].

The basis of this DVR algorithm is a 3D texture storing the prepared values after the first sampling process. Then a bounding box for the dataset is generated, in which the positions (i.e. 3D texture coordinates) are color coded. The algorithm itself can be divided into three main passes (see figure 3.10).

First, a ray has to be set up. For this, the front faces of the bounding box, which represent the rays' starting positions, are rendered. Figure 3.8 depicts the front and back faces. By subtracting the color of the front faces from the color of the back faces at the

Figure 3.8: Front (left) and back (right) faces of the bounding geometry, color coding the currentposition. Image taken from [Scharsach ’05b].

current pixel position, a 2D RGBA direction texture (see figure 3.9) is obtained. It stores the normalized viewing vector as well as its initial length (before normalization); the latter is stored in the alpha channel. Second, the ray has to be traversed by stepping along the viewing vector with a customized step size while accumulating optical properties. The


Figure 3.9: Direction texture storing the normalized viewing vector and its initial length. Imagetaken from [Scharsach ’05b].

viewing vector is retrieved by a texture lookup into the direction texture at the current pixel position. The resulting color (emission) and opacity (absorption) values are stored in a separate compositing texture.

Third, it has to be checked whether the ray can be terminated or not. The traversal is stopped when the ray has left the volume bounding box, which is the case when the traversed distance is greater than the alpha value stored in the direction texture, or when a constant opacity threshold is reached, which reveals that all volume elements further away from the viewpoint are occluded anyway (early ray termination). Finally, the compositing texture is blended back to the screen.
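To summarize the three passes, the following CPU-side C++ sketch casts a single ray. It only illustrates the structure of the algorithm - the actual implementation runs in a fragment program on the GPU - and sampleVolume() and classify() are hypothetical placeholders for the trilinear 3D-texture lookup and the transfer-function classification.

#include <cmath>

struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Placeholders (assumptions), standing in for the texture lookup and classification.
static float sampleVolume( const Vec3& ) { return 0.5f; }
static RGBA  classify( float s )         { RGBA c = { s, s, s, 0.05f }; return c; }

// Casts one ray from the front-face to the back-face position of the bounding
// box (both in 3D texture coordinates) and composites the samples front-to-back.
RGBA castRay( const Vec3& frontFace, const Vec3& backFace, float stepSize )
{
    // 1) ray setup: direction vector and its initial length
    Vec3  d      = { backFace.x - frontFace.x,
                     backFace.y - frontFace.y,
                     backFace.z - frontFace.z };
    float length = std::sqrt( d.x * d.x + d.y * d.y + d.z * d.z );
    Vec3  dir    = { d.x / length, d.y / length, d.z / length };

    RGBA accum = { 0.0f, 0.0f, 0.0f, 0.0f };

    // 2) ray traversal and compositing
    for ( float t = 0.0f; t < length; t += stepSize )
    {
        Vec3 pos = { frontFace.x + t * dir.x,
                     frontFace.y + t * dir.y,
                     frontFace.z + t * dir.z };
        RGBA src = classify( sampleVolume( pos ) );

        float weight = ( 1.0f - accum.a ) * src.a;
        accum.r += weight * src.r;
        accum.g += weight * src.g;
        accum.b += weight * src.b;
        accum.a += weight;

        // 3) early ray termination: remaining samples are occluded anyway
        if ( accum.a > 0.95f )
            break;
    }
    return accum;
}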

An overview of the algorithm is given in figure 3.10.


Figure 3.10: Schematic representation of the basic GPU ray casting algorithm.


Chapter 4

Integration

This chapter documents the integration of the volume rendering library (HVR) into the VR framework Studierstube. It can be separated into three main steps. First, the views of both systems have to be synchronized. Second, the volume rendered images have to be rendered into Studierstube's window. Finally, correct intersections between the volume and the OpenGL geometry had to be considered to create a sense of space.

4.1 Viewing

Viewing, modeling and projection are defined by Coin3D and Studierstube, respectively. Therefore the modelview and projection matrices within the volume renderer have to be set accordingly. Before the scene can be drawn, the following steps [Shreiner et al. '05] have to be performed.¹

1. Modeling transformation

2. Viewing transformation

3. Projection transformation

4. Viewport transformation

vm = \begin{pmatrix} r_x & u_x & v_x & p_x \\ r_y & u_y & v_y & p_y \\ r_z & u_z & v_z & p_z \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (4.1)

where

• r := right vector (denotes the direction to the right of the camera)

• u := up vector (denotes, where is up)

• v := view vector (denotes the camera’s line of sight)

• p := position (denotes the camera's position, the translation part of vm)

¹ When thinking in OpenGL calls, the viewing transformation commands have to precede the modeling transformations, so that the latter take effect on the 3D objects first.


In Studierstube, viewing and modeling are separate steps, both obtained from tracking data. The viewing matrix describes the user's position, his line of sight and where up is. The default settings are: camera (eye point) at the origin, view vector pointing along the negative z-axis and up vector pointing along the positive y-axis. The modeling matrix includes rotation, scaling and translation of the virtual object. Here, modeling transformations are performed by the movement of the pen (data obtained from hand tracking), while viewing transformations are caused by the movement of the user's head (data obtained from head tracking).

In the HVR framework, viewing and modeling are multiplied into one single matrix. The whole modelview matrix sent to the renderer every frame is calculated by (modeling transformations first)

mv = pen_{rot} \cdot pen_{trans} \cdot vm \qquad (4.2)

where pen_{rot} and pen_{trans} represent the relative movement of the pen.
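Using Coin3D's SbMatrix class, this composition can be written as the following sketch; penRot and penTrans are assumed names for the matrices derived from the pen tracking data and are not the actual variables of the implementation.

#include <Inventor/SbLinear.h>

// Compose the modelview matrix sent to the HVR renderer, cf. equation 4.2.
SbMatrix composeModelview( const SbMatrix& penRot,
                           const SbMatrix& penTrans,
                           const SbMatrix& vm )
{
    SbMatrix mv = penRot;
    mv.multRight( penTrans );   // mv = penRot * penTrans
    mv.multRight( vm );         // mv = penRot * penTrans * vm
    return mv;
}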

Coin3D provides function calls to retrieve the viewing as well as the projection matrix, so they do not have to be calculated by hand.

// render_state saves the current OpenGL state
SoGLViewingMatrixElement::getResetMatrix( render_state );
SoProjectionMatrixElement::get( render_state );

SoGLViewingMatrixElement is used to store the current viewing matrix (analogously, SoProjectionMatrixElement stores the projection matrix), which is equivalent to the inverse camera matrix. The camera matrix in turn is built from the current camera node (e.g. SoPerspectiveCamera) and any transformations that precede the camera node in the scene graph.

As mentioned in section 2.2.2, an application based on the Studierstube framework is represented as a scene graph. Therefore a callback node is added to the graph, embedding the needed OpenGL function calls. The callback function responsible for the viewing transformations described in this chapter is shown in figure 4.1.


void VRS_SoKit::display_CB( void* data, SoAction* action )
{
    VRS_SoKit* self = (VRS_SoKit*)data;
    VRS_Interface vrsInterface = VRS_Interface::getInstance();
    VRS_RenderVolume vrsRenderVolume = VRS_RenderVolume::getInstance();

    if ( action->getTypeId() == SoGLRenderAction::getClassTypeId() )
    {
        SoGLRenderAction* glRender = (SoGLRenderAction*)action;
        SoState* renderState = glRender->getState();

        const SbMatrix vm =
            SoGLViewingMatrixElement::getResetMatrix( renderState );
        const SbMatrix pm =
            SoProjectionMatrixElement::get( renderState );

        vrsInterface.setProjectionMatrix( pm );
        vrsInterface.setViewingMatrix( vm );

        // get current viewport size
        SbVec2s viewportSize =
            self->mViewer->getViewportRegion().getViewportSizePixels();
        short width, height;
        viewportSize.getValue( width, height );

        if ( self->isReshapeNeeded( width, height ) )
        {
            vrsRenderVolume.setViewportSize( (int)width, (int)height );
            vrsRenderVolume.reshape( (int)width, (int)height, pm );
        }
        vrsRenderVolume.display();
    }
}

Figure 4.1: Callback function of the node that triggers the volume rendering.


4.2 Image transfer

To render the volume within the polygonal rendering of Studierstube, the whole volume rendering is performed off-screen using pixel buffers (pBuffers). Afterwards, the pBuffer is bound as a dynamic texture, which is mapped onto a screen-filling quad.

4.2.1 Pixel Buffers

A pBuffer [Graphics '05] is a non-visible rendering buffer having the same properties as an on-screen buffer (color, stencil, depth, accumulation bits). It is used, for instance, to generate intermediate images (e.g. shadow maps, projective textures). We use it to perform the volume rendering in an additional buffer, so that it is separated from the polygonal rendering. A pBuffer is created using the OpenGL extension WGL_ARB_pbuffer. To determine an appropriate pixel format, WGL_ARB_pixel_format is needed. To bind a pBuffer as a dynamic texture, the HVR framework uses the WGL_ARB_render_texture extension.

4.2.2 Implementation Details

The steps needed to perform the whole off-screen rendering are:

1. Create pBuffer (once)

2. Bind pBuffer

3. Draw

4. Bind pBuffer to texture

5. Unbind pBuffer from texture

6. Unbind pBuffer

7. Destroy pBuffer (once)

The HVR framework provides all of these functions. All that is left to do is to configure the pBuffer (width, height, bit properties) once and then call the functions responsible for drawing the volume. Figure 4.2 shows a part of the implementation. The display function is called within the callback node introduced above.


void VRS_RenderVolume::display()
{
    if( mIsFirstCall ) {
        // creates a valid pBuffer and context
        pBufferConfig();
    }

    // bind pBuffer associated with current context
    HVR_BindPbuffer( mContext );
    setProjection();
    // draw
    HVR_Display();
    // bind pBuffer to texture
    HVR_BindPbufferTexture( GL_TEXTURE0_ARB, mContext );
    GL_ActiveTexture( GL_TEXTURE0_ARB );
    glEnable( GL_TEXTURE_2D );
    drawScreenFillingQuad();
    glDisable( GL_TEXTURE_2D );
    // unbind pBuffer from texture
    HVR_UnbindPbufferTexture( mContext );
    // unbind pBuffer
    HVR_UnbindPbuffer( TRUE );

    if( mIsLastCall ) {
        HVR_DestroyPbuffer();
    }
}

Figure 4.2: Pseudo code performing the off-screen rendering and finally the volume rendering, using the functions provided by the HVR framework.
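The drawScreenFillingQuad() call in figure 4.2 is plain OpenGL. A minimal sketch of what such a function might do is given below - it is an assumption, not the original implementation. With identity projection and modelview matrices the quad covers the whole viewport in normalized device coordinates, and the bound pBuffer texture is stretched over it.

#include <GL/gl.h>

// Draw a quad covering the whole viewport, textured with the volume image.
// Assumes the pBuffer (or FBO) texture is already bound and enabled.
void drawScreenFillingQuad()
{
    glMatrixMode( GL_PROJECTION );
    glPushMatrix();
    glLoadIdentity();                      // clip space: x, y in [-1, 1]
    glMatrixMode( GL_MODELVIEW );
    glPushMatrix();
    glLoadIdentity();

    glBegin( GL_QUADS );
    glTexCoord2f( 0.0f, 0.0f ); glVertex2f( -1.0f, -1.0f );
    glTexCoord2f( 1.0f, 0.0f ); glVertex2f(  1.0f, -1.0f );
    glTexCoord2f( 1.0f, 1.0f ); glVertex2f(  1.0f,  1.0f );
    glTexCoord2f( 0.0f, 1.0f ); glVertex2f( -1.0f,  1.0f );
    glEnd();

    glPopMatrix();                         // restore modelview
    glMatrixMode( GL_PROJECTION );
    glPopMatrix();                         // restore projection
    glMatrixMode( GL_MODELVIEW );
}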


4.3 Intersections between OpenGL geometry and volumedata

A very important - and finally the most interesting - part of the integration was achieving correct intersections between the volume and the OpenGL geometry. We achieved this by rendering a depth image of the geometry (see figure 4.3), which is sent to the renderer every frame.

Figure 4.3: Geometry depth images.

A fragment shader then calculates the position of the geometry in volume coordinates based on the z-values. Terminating the rays at those positions leads to a clipped volume, as shown in figure 4.4 (left).

Figure 4.4: Terminating the rays at the geometry’s position leads to a clipped volume.Blending the geometry back again leads to correct occlusions between both.

The OpenGL extensions GL_ARB_vertex_program and GL_ARB_fragment_program provide an assembly language for programming graphics hardware. To perform the off-screen rendering, EXT_framebuffer_object was used.


4.3.1 Vertex and Fragment Programs

Vertex and fragment programs replace parts of the fixed-function OpenGL pipeline. Vertex programs replace the modelview-projection part. They are executed once for each vertex. Their tasks are computing the vertex' position within the canonical view volume (clip space), its color and its texture coordinate. Fragment programs replace the end of the pipeline. They are executed once for each fragment created by the rasterization step. Their tasks are texture sampling and computing the fragment's color and depth value. Figure 4.5 depicts the functionality of a fragment program. We use a fragment program to calculate the geometry's position and to override the current z-value.


Figure 4.5: Schematic representation of a fragment program.

4.3.2 Framebuffer Objects

To create a depth image of the geometry, we have to redirect the rendering of the scene into a framebuffer object instead of rendering into the "real" framebuffer (i.e. the window). The new EXT_framebuffer_object [Graphics '05] extension is a great alternative to using pBuffers. It allows the content of the framebuffer to be read directly as a texture.

A framebuffer object consists of several renderbuffer objects (logical buffers). A framebuffer-attachable image represents one of the logical buffers (color, depth, stencil) and serves as a rendering destination. Framebuffer-attachable images are nothing more than 2D arrays of pixels, so they can be either textures or off-screen buffers (renderbuffers). They have to be attached to a so-called attachment point - a process similar to texture binding - before drawing. Figure 4.6 depicts the architecture of a framebuffer object and figure 4.7 gives an example of how to initialize it to render into a color and a depth texture.



Figure 4.6: Framebuffer Object Architecture.

Figure 4.7 shows how to initialize such a framebuffer object and its attachable images, which serve as rendering destinations.


void init_offscreen_rendering()
{
    GLuint fbo, color_tex, depth_tex = 0;

    // create fbo
    glGenFramebuffersEXT( 1, &fbo );

    // create color texture, that will contain the whole scene
    glGenTextures( 1, &color_tex );
    glBindTexture( GL_TEXTURE_2D, color_tex );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8,
                  screen_width, screen_height,
                  0, GL_RGBA, GL_FLOAT, NULL );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

    // create depth texture
    glGenTextures( 1, &depth_tex );
    glBindTexture( GL_TEXTURE_2D, depth_tex );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                  screen_width, screen_height,
                  0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

    // bind framebuffer object
    glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, fbo );

    // attach color texture to color attachment point of current fbo
    glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT,
                               GL_COLOR_ATTACHMENT0_EXT,
                               GL_TEXTURE_2D, color_tex, 0 );

    // attach depth texture to depth attachment point of current fbo
    glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT,
                               GL_DEPTH_ATTACHMENT_EXT,
                               GL_TEXTURE_2D, depth_tex, 0 );

    // unbind framebuffer object
    glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
}

Figure 4.7: Initializing the framebuffer object and its attachable images.


4.3.3 Implementation Details

The rendering is split into two passes in order to redirect it into the framebuffer object (fbo). After the fbo is bound, the rendering is performed off-screen, just as in the case of pBuffers. The depth buffer is rendered into a depth texture and the color buffer into a color texture, as specified during the initialization of the fbo (see figure 4.7). The depth texture is then sent to the volume renderer for further calculations. Finally, after unbinding the fbo, the color texture is mapped onto a screen-filling quad. To achieve an early termination of the rays when they hit the geometry, the geometry positions have to be transformed into volume coordinates.
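
The following minimal sketch summarizes the two passes. It assumes that the handles fbo, color_tex and depth_tex created in figure 4.7 are kept; the helper functions render_scene_geometry(), render_volume() and draw_screen_filling_quad() are placeholders for the Studierstube scene traversal, the HVR ray caster and the final textured quad - they are not part of either API.

// pass 1: redirect the polygonal scene into the fbo; color and depth
// end up in the attached textures (see figure 4.7)
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, fbo );
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
render_scene_geometry();                 // placeholder: polygonal scene

// pass 2: back to the window; the depth texture is handed to the volume
// renderer for ray termination, the color texture is mapped onto a
// screen-filling quad to blend the geometry back
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );
render_volume( depth_tex );              // placeholder: HVR ray casting
draw_screen_filling_quad( color_tex );   // placeholder: blend geometry back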

Figure 4.8: Geometry positions in volume coordinates

Projective textures are used, for example, to compute shadow maps. In that case, a texture is determined based on information about the scene.

(s, t, r, q)^T = M_VP · (x, y, z, 1)^T                                    (4.3)

where M_VP denotes the modelview-projection matrix. In our case, the (depth) texture is already given. Therefore, information about the scene is determined based on the texture, which means that we have to invert the equation given above:

(x, y, z, 1)^T = (M_VP)^{-1} · (s, t, r, q)^T                             (4.4)

where M_VP denotes the current modelview-projection matrix, introduced in section 4.1. Therefore, M_VP has to be set before the fragment shader is enabled:

glMatrixMode( GL_MATRIX0_ARB );   // select program matrix 0
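
One possible way to do this is sketched below: the current projection and modelview matrices are read back and their product is loaded into program matrix 0, whose inverse is then tracked by the fragment program. The variable names are placeholders; the actual application may obtain the matrices differently.

GLfloat proj[ 16 ], model[ 16 ];
glGetFloatv( GL_PROJECTION_MATRIX, proj );
glGetFloatv( GL_MODELVIEW_MATRIX,  model );

glMatrixMode( GL_MATRIX0_ARB );   // select program matrix 0
glLoadMatrixf( proj );            // program matrix 0 = projection * modelview
glMultMatrixf( model );
glMatrixMode( GL_MODELVIEW );     // restore the usual matrix mode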


Its inverse can then be accessed within the fragment shader after the following declaration:

PARAM mvp_inv[] = { state.matrix.program[ 0 ].inverse };

Additionally, the generation of the direction texture, introduced in section 3.5.3, has to be modified (see figure 4.9), because the stop positions are now given by the positions of the geometry in texture coordinates, as shown in figure 4.8.

TEMP R0;
TEMP R1;
TEMP R2;

TEX R0, unit_tc, texture[ 0 ], 2D;  # Get front face position
ADD R0, geom_pos, -R0;              # Store direction vector in R0

DP3 R1.x, R0, R0;                   # normalize direction vector
RSQ R1.x, R1.x;                     # 1/sqrt(<R0,R0>)
RCP R2.y, R1.x;                     # R2.y = length(R0)
MUL R1.xyz, R1.x, R0;               # R1 = normalized direction vector

# the direction texture stores the normalized direction vector;
# the initial length of the direction vector (before normalization)
# is stored in the alpha channel
MOV result.color.xyz, R1;
MOV result.color.w, R2.y;

Figure 4.9: Direction texture generation in the fragment shader. The stop positions are substituted by the geometry positions.


Chapter 5

VR Interaction with Volume Data

As mentioned in section 3.4, our main interest is in medicine, especially in medical education. The system should also help medical doctors in diagnosis and surgical planning. It could aid collaborative work among surgeons and radiologists and facilitate communication, even between patients and medical doctors. While a traditional slice view obtained from image acquisition devices such as a CT scanner can mostly only be interpreted by radiologists, the volume rendered image can also be interpreted by non-experts. In education, the system should help medical students to understand anatomical structures and their placement.

5.1 Concept

To create such a system, we have to keep the following questions in mind:

1. Which VR setup is the most convenient?

2. What kind of interaction do we have to provide?

To answer the first question, imagine a meeting between medical doctors. Their time is short, because there is a lot of work to do, and the room they are sitting in might be restricted in space. Passive stereo is certainly better for collaborative work, as explained in chapter 3.3, but what kind of projection should be used? Due to the limited space, a front projection probably has to be provided, although a back projection (projector behind the screen) creates better images. Luckily, as Studierstube supports various input and output devices, this can be decided depending on the particular situation.

To answer the second question, it is important to remember the requirements of VR. The user should be able to explore the data in the most natural way, so interaction and navigation have to be intuitive. For interaction, the possibility to cut away parts and to look into the volume is very important; hence we have to provide clipping. The traditional slice view still has to be offered, because clinicians are used to that kind of visualization. It is also important to aid navigation and interaction by highlighting objects, to give the user immediate feedback. We also have to keep the traditional ergonomic problems of VR in mind. This is, for example, the reason why we plan to leave the design of transfer functions in 2D and load them at runtime. However, section 5.6 will present a very interesting approach to transfer function design that could be used in VR.


The basic interaction elements realized in this work are navigation and selection using the pen. We also provide two-handed interaction when presenting the slice view next to the volume rendered image. The system is controlled using widgets (sliders and buttons) on the pip. Finally, user comfort should be kept in mind. Working at a virtual table or in front of a projection screen can be exhausting, so it might be important to provide a possibility to sit down.

5.2 Navigation

The most obvious direct interaction method is navigation. The user is able to move the volume by positioning the pen inside the volume's bounding box and then pressing the primary button on the pen (comparable to the left mouse button). The object follows all motions and rotations of the pen as long as the user holds the button down.
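
A common way to realize this kind of grabbing - sketched here as the general technique, not necessarily Studierstube's actual implementation - is to store the relative offset between pen and object at the moment the button is pressed and to reapply it every frame:

O = P^{-1} · V      (on button press)
V = P · O           (while the button is held)

where P is the current pen transform and V the volume's transform. Because the offset O stays fixed, the object keeps its pose relative to the pen and thus follows all of its motions and rotations.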

5.3 Clipping

Clipping means cutting graphical elements at user-specified lines, curves, or axis-aligned and arbitrarily aligned planes. Clipping decides which parts of an object (or a scene) are within a clipping area and therefore visible - and which are not. The volume rendering library used in this work supports axis-aligned clipping, which is implemented analogously to OpenGL clipping. Figure 5.1 shows how clipping is enabled in the HVR framework.

// enable one of the six clipping planes
enableClippingPlane( plane, true );
// get current plane equation
float* eq = getClippingPlaneEquation( plane );
// set d, the position of the corner which has been moved
eq[ 3 ] = val;
// set current equation
setClippingPlaneEquation( plane, eq );

Figure 5.1: Enabling clipping in the HVR framework.

The equation argument points to the four coefficients eq = (a, b, c, d) of the implicit plane equation [Shreiner et al. '05]

ax + by + cz + d = 0 (5.1)

that defines the plane in world coordinates, with (a, b, c) being the plane's normalized normal. The coefficients are saved in the vector eq (see figure 5.1). Generally, the plane equation determines where a point p = (p_x, p_y, p_z)^T lies relative to the plane: it lies on the plane if a p_x + b p_y + c p_z + d = 0, in the direction of the normal if a p_x + b p_y + c p_z + d > 0, and in the direction of the inverse normal if a p_x + b p_y + c p_z + d < 0. The outcome of this is the clipping equation.

ax + by + cz + d >= 0 (5.2)
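
As an illustration, with assumed conventions that need not match those of the HVR framework: for the max_X face of the clipping cube, let the inward-pointing unit normal be (a, b, c) = (-1, 0, 0) and let the moved corner set d = max_X. The clipping equation then reads

-p_x + max_X >= 0,   i.e.   p_x <= max_X,

so exactly the points on the inner side of that face remain visible.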


The points (x_e, y_e, z_e, w_e)^T in eye coordinates that fulfill the following equation

(a, b, c, d) · M^{-1} · (x_e, y_e, z_e, w_e)^T >= 0                       (5.3)

are visible and therefore drawn. M^{-1} denotes the inverse of the current modelview matrix.

In this work, clipping is realized in terms of a clipping cube (see figure 5.2), each face representing one of the six clipping planes. The clipping cube is represented by a bounding box with invisible spheres at the cube's corners. When the pen is moved into one of the spheres, the bounding box is highlighted and the user knows that he is able to move the clipping planes. Dragging a sphere resizes the cube while keeping it axis-aligned, and the sphere's current position defines the value of d. The outcome is a clipping equation that is sent to the HVR framework, as shown in figure 5.1.


Figure 5.2: Clipping Cube


Figure 5.3: Clipping Cube Widget.


5.4 Slice View

The slices are provided in a 3D texture by the volume renderer. To display one slice, the appropriate texture coordinates have to be calculated. The slices are rendered on the pip, which enables two-handed interaction. By moving a slider on a three-dimensional coordinate system (fig. 5.4, left image), the user creates a cut-away view of the volume and simultaneously views the corresponding slice (fig. 5.4, right image).
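
A minimal sketch of how such a slice could be drawn, assuming an axis-aligned z-slice taken directly from the 3D texture; the variable names (volume_tex, slice, depth) are placeholders, and this is plain OpenGL rather than the Open Inventor widget code actually used on the pip.

// select slice number 'slice' (0 .. depth-1); the r coordinate
// addresses the slice at its voxel center
float r = ( slice + 0.5f ) / (float) depth;

glEnable( GL_TEXTURE_3D );
glBindTexture( GL_TEXTURE_3D, volume_tex );
glBegin( GL_QUADS );
    glTexCoord3f( 0.0f, 0.0f, r );  glVertex2f( 0.0f, 0.0f );
    glTexCoord3f( 1.0f, 0.0f, r );  glVertex2f( 1.0f, 0.0f );
    glTexCoord3f( 1.0f, 1.0f, r );  glVertex2f( 1.0f, 1.0f );
    glTexCoord3f( 0.0f, 1.0f, r );  glVertex2f( 0.0f, 1.0f );
glEnd();
glDisable( GL_TEXTURE_3D );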

Figure 5.4: Slice View Widget. Clipped volume (left) and corresponding slice view (right).

5.5 Lighting

The user is able to change the light's direction and intensity. A sphere indicates the light source. It can be transformed in the same way as the volume object, by moving the pen inside it and pushing the primary button. The distance between the sphere (light source) and the center of the volume's bounding box defines the intensity, while the sphere's position defines the light direction (see fig. 5.5).
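
A sketch of how the sphere position could be mapped to the lighting parameters; the function and variable names are placeholders, and the concrete intensity mapping used by the HVR framework is an assumption here.

#include <math.h>

// Map the sphere position to light direction and intensity.
// sphere_pos and bbox_center are 3D points; max_distance is an assumed
// normalization factor.
void sphere_to_light( const float sphere_pos[ 3 ], const float bbox_center[ 3 ],
                      float max_distance, float dir[ 3 ], float* intensity )
{
    // direction: from the bounding-box center towards the sphere
    dir[ 0 ] = sphere_pos[ 0 ] - bbox_center[ 0 ];
    dir[ 1 ] = sphere_pos[ 1 ] - bbox_center[ 1 ];
    dir[ 2 ] = sphere_pos[ 2 ] - bbox_center[ 2 ];

    float dist = sqrtf( dir[ 0 ] * dir[ 0 ] + dir[ 1 ] * dir[ 1 ] + dir[ 2 ] * dir[ 2 ] );
    dir[ 0 ] /= dist;  dir[ 1 ] /= dist;  dir[ 2 ] /= dist;

    // intensity: defined by the distance; here simply scaled to [0, 1]
    // by an assumed maximum distance (the actual mapping is not specified)
    *intensity = dist / max_distance;
}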

Figure 5.5: The distance between the sphere and the center of the volume's bounding box defines the intensity of the light. The sphere's position defines the direction of the light.


5.6 Transfer Functions

The idea of transfer functions is to transform a scalar value f(x, y, z) into optical properties (color and opacity):

T(f(x, y, z)) = {R, G, B, α}                                              (5.4)
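
In the simplest, one-dimensional case this amounts to a lookup table indexed by the scalar value. The following sketch assumes a hypothetical table of 256 RGBA entries; multi-dimensional transfer functions extend the lookup with further data properties such as the gradient magnitude.

// Apply a one-dimensional transfer function: 256 RGBA entries,
// indexed by the scalar data value (hypothetical table layout).
void apply_transfer_function( const float tf[ 256 ][ 4 ],
                              unsigned char value, float rgba[ 4 ] )
{
    rgba[ 0 ] = tf[ value ][ 0 ];   // R
    rgba[ 1 ] = tf[ value ][ 1 ];   // G
    rgba[ 2 ] = tf[ value ][ 2 ];   // B
    rgba[ 3 ] = tf[ value ][ 3 ];   // alpha (opacity)
}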

[Kniss et al. '01] introduced an approach using direct manipulation widgets to manipulate multi-dimensional transfer functions [Kniss et al. '02]. They present an interaction technique called dual-domain interaction. The main idea is to reverse the traditional workflow - moving control points and then observing the changes in the resulting image - and instead manipulate the transfer function by direct interaction in the spatial domain. Dual-domain interaction means interacting in the spatial and the transfer function domain simultaneously. Classification widgets decide how much a color and opacity contribute to the final transfer function; they can be translated, scaled and resized during the design process. Figure 5.6 shows such a classification widget.


Figure 5.6: Transfer function and classification widget. Image taken from [Kniss et al. '01] and revised.


A possible workflow, as shown in figure 5.7, could be:

1. The user moves the probe (comparable with pen) into the volume.

2. The corresponding values (data value, first and second derivative) are displayed in the transfer function domain.

3. The region of interest is then set to high opacity.

4. All corresponding data values are set in the transfer function, so that the user is able to observe changes in the spatial domain.

5. If the transfer function fits all requirements, the user can save the result.

6. If it does not, the user can continue, either by moving the probe to explore further changes or by fine-tuning the results in the transfer function domain.

Figure 5.7: Schematic representation of dual-domain interaction. Image taken from [Kniss et al. '01].

Such an approach would be imaginable in a virtual environment. The idea of exploration fits perfectly into the principles of VR and might facilitate the understanding of transfer functions, too. [Kniss et al. '04] also deals with volume rendering within immersive environments. Their idea for overcoming the difficulty of transfer function design in VR is to split the process into two independent tasks: classification and assignment of optical properties. Classification is performed in a preprocessing step on a normal desktop PC. Then the classified data is loaded into the virtual environment together with the volume dataset. Finally, optical properties are defined using a color picker to assign color and a rotary knob to assign opacity to a material.


Chapter 6

Results

This chapter will summarize the results of this work.

6.1 Integration

The integration comprised the steps needed to combine both libraries. Apart from the synchronization of the viewing matrix, a seamless integration of volume rendering into the polygonal rendering of Studierstube was achieved by realizing occlusions between the volume rendered image and the OpenGL geometry.

6.2 Interaction

The second part of this work dealt with the development of 3D interaction methods, based on the Studierstube framework.

6.2.1 Navigation

Navigation is the most obvious direct interaction method, as mentioned in chapter 5. The user is able to move the object in arbitrary directions by moving the pen inside the volume while pressing the pen's primary button. Unfortunately, even this most basic interaction method turned out to be cumbersome for some users. A user study showed that most people have difficulties getting along with the three-dimensional view, although it is more natural than the two-dimensional view offered by traditional output devices.

6.2.2 Clipping Cube Widget

Clipping is definitely the most important interaction technique in our application. We realized axis-aligned clipping in terms of a clipping cube, each face representing one of the six clipping planes. The clipping cube widget turned out to be very intuitive because it offers the user direct control. An extension to this widget could be arbitrary clipping using the pip as cutting plane: the user would move the pip in the real world while causing a cut-away view in the virtual world.


6.2.3 Slice View Widget

We also provide a traditional slice view because medical doctors are familiar with this kind of visualization. Unfortunately, the brightness of the virtual table results in a low contrast, which is problematic for the slice view as it does not offer the quality medical doctors are used to.

6.2.4 Lighting Widget

The light source can be transformed by moving a sphere. The sphere's position defines the light's direction and its intensity. The lighting widget is quite intuitive; its usefulness for the application, however, remains open to discussion.

6.3 Performance

A system was developed that realizes direct volume rendering in a perspective stereoscopic view while achieving about 11 fps for a dataset of size 256 x 256 x 128 and an image resolution of 1024 x 768¹ (see fig. 6.1). However, loading a dataset of size 512 x 512 x 333 leads to a dramatic loss of performance. A maximum of 2 fps could be achieved, which is not feasible for use in a virtual environment. Navigation and interaction are hardly possible under these conditions - one cannot speak of virtual reality or intuitive navigation. In that case, the HVR framework offers the possibility to change the image resolution. For example, one may reduce the resolution during interaction (such as clipping with the clipping cube widget) and switch back to the high resolution afterwards to view the clipped volume in its best quality again.

Figure 6.1: The image shows a direct volume rendering of a CT scan of a human hand, projected onto a stereoscopic display. Interaction with such a dataset of size 256 x 256 x 128 and an image resolution of 1024 x 768 can be performed at interactive frame rates.

¹ The screen resolution needed for a demonstration on the virtual table.


The table below summarizes the frame rates measured on the following system:

• single processor PC, AMD Athlon 64 3800

• equipped with an NVIDIA Quadro FX 3400/4400 graphics card

• memory size 256 MB

• running Windows XP

• on a virtual table

Dataset                   Rendering Mode   Mono [fps]   Stereo [fps]
Hand (256 x 256 x 128)    ISO              20           11
Head (512 x 512 x 333)    ISO              7            2
Hand (256 x 256 x 128)    DVR              17           10
Head (256 x 256 x 128)    DVR              5            1


Chapter 7

Conclusion and Future Work

This chapter gives a short summary of the project presented in this work, draws a conclusion and outlines possible future work.

7.1 Conclusion

This work presented the integration of high-quality hardware based volume rendering into a virtual reality application, as well as navigation, interaction and direct manipulation methods. Additionally, the related topics virtual reality, VR in medicine and volume rendering were introduced.

The integration of volume rendering into the polygonal rendering of Studierstube was realized by performing the entire volume rendering off-screen, rendering into a dynamic texture that is mapped onto a screen-filling quad. Occlusions between volume and OpenGL geometry were achieved by rendering a depth image of the geometry and then calculating the geometry positions in volume coordinates based on this depth image. Terminating the rays at those positions leads to a clipped volume, so all that is left to do is to blend the geometry back.

Navigation and interaction are based on the Studierstube API. We provide axis-aligned clipping, intuitive lighting by directly moving the light source, a traditional slice view and, finally, the possibility to load predefined transfer functions. Navigation is aided by visual feedback.

At the moment, the application can be used for presentation purposes, but it is far from being usable in a clinical environment. The main problems are still interactivity and usability. Higher frame rates could surely be achieved using newer graphics cards, but this does not really solve our problem. Combining the HVR framework with Studierstube was the only way to reach the results presented here within the scope of a student research project; however, it is not the most efficient solution. A better approach would be to extend the volume renderer for use on a projection screen with intuitive interaction methods. The system should also work in combination with a normal PC, so that the user is able to navigate with a traditional PC mouse, too. Creating transfer functions and changing rendering modes are equally important.


7.2 Future Work

The realized application builds the foundation for plenty of future work. Typical VR navigation and interaction methods, like 3D magic lenses [Viega et al. '96] or worlds-in-miniature [Stoakley et al. '95], turned out not to be useful for this kind of application, which is intended for the medical domain. Therefore, upcoming work should concentrate more on the visualization than on VR interaction methods. It will probably also focus on a user-friendly user interface.

Another interesting aspect could be the evaluation of different 3D rotation techniques. For instance, the work of [Bade et al. '05] focuses on 3D rotation especially in the field of medical applications. Their work is based on the question why medical doctors still prefer to read CT or MRI data slice by slice, although 3D visualizations offer great potential for medical applications and could ease the clinical workflow. Their assumption is that one reason for this lies in unintuitive 3D rotation techniques. The rotation technique implemented in the Open Inventor toolkit and used in this work had the worst results in their evaluation.

The HVR framework also enables the exploration of the dataset's inside by performing a fly-through. This functionality, originally developed for virtual endoscopy applications, should be integrated as well. In a virtual environment, the possibility of moving the viewpoint into the three-dimensional dataset could be a very interesting feature - imagine a user moving his own head into the virtual one.

In the previous chapter we mentioned that 3D navigation can be cumbersome if the user is not used to stereoscopic visualization and VR input devices. Therefore, an application for collaborative work that combines a VR setup and a normal desktop PC seems to be the best solution. It would still provide an informative three-dimensional view but also traditional input devices. The user would have the possibility to manipulate the volume in whichever domain (VR or the desktop PC) he prefers. Such a combination is also very important for creating appropriate transfer functions; fine-tuning can still be done in VR. Furthermore, the application should be optimized for use on the projection screen, because it is better suited for collaborative work than a workbench.


Acknowledgments

This work has been carried out as part of the basic research on medical visualization and the FemInVis project at the VRVis Research Center for Virtual Reality and Visualization in Vienna (http://www.vrvis.at), which is funded in part by the KPlus program of the Austrian government. The medical data sets are courtesy of Tiani Medgraph and the Department of Neurosurgery at the Medical University Vienna.

First of all, I would like to express my appreciation to those who contributed to the development of the volume renderer I was allowed to work with.

Foremost, I would like to thank Markus Hadwiger and Rainer Splechtna, who both supported me during the implementation phase. I want to thank Markus for his help in realizing the integration, for his patience and for raising my interest in projective geometry :-)! I thank Rainer for his help in understanding Studierstube and especially for helping me finish long debugging sessions.

I would further like to thank Anton Fuhrmann for helping me with the entire VR setup, together with Rainer Splechtna.

Katja Bühler gave me the opportunity to write this work at the VRVis, for which I thank her. I am also grateful for the possibility to work in the medical visualization group as a student assistant, which has made (and still makes) my stay in Vienna considerably easier. Further, I want to thank her for reviewing this work, and for making my attendance at Eurographics in Dublin as a student volunteer possible.

I thank the VRVis for providing me with a workplace and, especially, a computer.

Last but not least, I want to thank Laura Fritz, Florian Schulze and Pascal Sproedt for proof-reading.

Finally, I thank Stefan Mueller for raising my interest in computer graphics and for motivating me to take the step of writing this work externally.


Bibliography

Ragnar Bade, Felix Ritter, Bernhard Preim (2005). Usability Comparison of Mouse-Based Interaction Techniques for Predictable 3D Rotation. In Smart Graphics, pp. 138–150, 2005.

Paul Bourke (1999). Calculating Stereo Pairs, 1999. See http://astronomy.swin.edu.au/~pbourke/stereographics/stereorender/.

Andries van Dam, Andrew S. Forsberg, David H. Laidlaw, Joseph J. LaViola Jr., Rosemary Michelle Simpson (2000). Immersive VR for Scientific Visualization: A Progress Report. IEEE Computer Graphics and Applications, 20(6):26–52.

Klaus Engel, Markus Hadwiger, Joe M. Kniss, Aaron E. Lefohn, Christof Rezk-Salama, Daniel Weiskopf (2004). Real-Time Volume Graphics. Course Notes for Course #28 at SIGGRAPH 2004, 2004.

Silicon Graphics (2005). OpenGL Extension Registry, 2005. See http://www.oss.sgi.com/projects/ogl-sample/registry/.

Markus Hadwiger, Christoph Berger, Helwig Hauser (2003). High-Quality Two-Level Volume Rendering of Segmented Data Sets on Consumer Graphics Hardware. In IEEE Visualization, pp. 301–308, 2003.

Joe Kniss, Gordon Kindlmann, Charles Hansen (2001). Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. In VIS '01: Proceedings of the Conference on Visualization '01, pp. 255–262, Washington, DC, USA, 2001. IEEE Computer Society.

Joe Kniss, Gordon L. Kindlmann, Charles D. Hansen (2002). Multidimensional Transfer Functions for Interactive Volume Rendering. IEEE Trans. Vis. Comput. Graph., 8(3):270–285.

Joe Kniss, Jürgen P. Schulze, Uwe Wössner, Peter Winkler, Ulrich Lang, Charles D. Hansen (2004). Medical Applications of Multi-field Volume Rendering and VR Techniques. In VisSym, pp. 249–254, 350, 2004.

Jens Krüger, Rüdiger Westermann (2003). Acceleration Techniques for GPU-based Volume Rendering. In IEEE Visualization, pp. 287–292, 2003.

Marc Levoy (1988). Display of Surfaces From Volume Data. IEEE Computer Graphics and Applications, 8(3):29–37.

W. E. Lorensen, H. E. Cline (1987). Marching cubes: A high resolution 3D surface construction algorithm. In Proc. SIGGRAPH, pp. 163–169, 1987.

Nelson L. Max (1995). Optical Models for Direct Volume Rendering. IEEE Trans. Vis. Comput. Graph., 1(2):99–108.

Prof. Dr. Müller (2003/2004a). Material for the lectures 'Computergraphik' 1 and 2, 2003/2004. See http://www.uni-koblenz.de/FB4/Institutes/ICV/AGMueller/Teaching/.

Prof. Dr. Müller (2004b). Material for the lecture 'Virtuelle Realität und Augmented Reality', 2004. See http://www.uni-koblenz.de/FB4/Institutes/ICV/AGMueller/Teaching/.

Systems in Motion AS (2005). Coin3D, 2005. See http://www.coin3d.org/.

G. Riva (2003). Applications of Virtual Environments in Medicine. Methods of Information in Medicine, 42(5):524–534.

Henning Scharsach (2005a). Advanced GPU Raycasting. In Proceedings of CESCG 2005, 2005.

Henning Scharsach (2005b). Advanced Raycasting for Virtual Endoscopy on Consumer Graphics Hardware, April 2005.

Dieter Schmalstieg, Anton L. Fuhrmann, Gerd Hesina, Zsolt Szalavári, L. Miguel Encarnação, Michael Gervautz, Werner Purgathofer (2002). The Studierstube Augmented Reality Project. Presence, 11(1):33–54.

Peter Shirley (2002). Fundamentals of Computer Graphics. AK Peters, Ltd.

Dave Shreiner, Mason Woo, Jackie Neider, Tom Davis (2005). OpenGL programming guide: the official guide to learning OpenGL, version 2. Addison-Wesley, Reading, MA, USA, fifth edition.

R. Stoakley, M. Conway, R. Pausch (1995). Virtual Reality on a WIM: Interactive Worlds in Miniature. In CHI '95, pp. 265–272, 1995.

John Viega, Matthew J. Conway, George Williams, Randy Pausch (1996). 3D Magic Lenses. In Proceedings of the ACM Symposium on User Interface Software and Technology, Papers: Information Visualization, pp. 51–58, 1996.

Josie Wernecke (1994). The Inventor Mentor. Addison-Wesley, Reading, Massachusetts.