
Real-time Ray Tracing

B123

June 11, 2009


Det Teknisk-Naturvidenskabelige Basisår

Computer Science

Strandvejen 12-14

Telephone 96 35 97 31

Fax 98 13 63 93

http://tnb.aau.dk

Title:

Real-time Ray Tracing

Theme:

Reality and Models

Project period: P2, spring semester 2009

Project group: B123

Participants: Lars Østergaard, Casper Jensen, Mads Carlsen, Rasmus Abildgaard, Thanh Long Truong

Advisors: Line Juhl, Heather L. Baca-Greif

Circulation: 9
Pages: 85
Appendices: 4
Project finished: June 11, 2009

Synopsis:

This report is about real-time ray tracing. Ray tracing is a technique that models the way light interacts with surfaces in order to create realistic graphics. We start by investigating how the film and gaming industries make use of realistic graphics and how current developments may encourage the use of ray tracing for these applications. The next part of the report covers computer graphics theory, with an introduction to two rendering techniques called rasterization and ray tracing. In addition, the mathematical foundation of ray tracing is covered. We then move on to describe lighting, materials, shadows and Monte Carlo ray tracing. In the following part we implement our own ray tracer and discuss the results. Then we discuss an interview with a senior software developer at Pixar to find out how the film industry benefits from ray tracing. Finally, we discuss how ray tracing can be made as fast as possible, so as to make it a real-time rendering solution.

The contents of this report are freely available. Publication (with source reference) is permitted, provided consent is granted by the authors.

Rasmus Møller Abildgaard Mads Vestergaard Carlsen Casper Jensen

Thanh Long Truong Lars Kærlund Østergaard


Preface

This project is written by five Computer Science students from Aalborg University. The report covers work carried out over a three-month period during the spring semester of 2009.

This report discusses the theoretical and implementational aspects of a working ray tracer. Alongside the report, a CD has been included. This CD contains the source code of the ray tracer, which documents and demonstrates a working implementation. Note that for the program to run, Microsoft .NET Framework 3.5 must be installed on the computer. The program is written in C#, and the source code is organized into several subfolders whose files are named after their namespaces.

References are gathered in a bibliography at the end of the report. Figures and tables are numbered per chapter: the first figure in Chapter 3 is numbered 3.1, the second 3.2, and so forth. Figures and tables have explanatory captions, located underneath them. References to the quotations used in the report are given by brackets containing a number and, if available, the relevant page numbers of the source. An example of this could be [17]. Such a reference leads to the bibliography, where book entries contain author, title, publisher, date and, if available, edition and address, while Internet web sites contain author, title, URL, and date.


Contents

1 Problem Analysis
  1.1 The Entertainment Industry
  1.2 Rendering Methods and Their Applications
  1.3 Why Ray Tracing is Interesting Today
  1.4 The Challenges of Ray Tracing

2 Problem Statement

3 Computer Graphics
  3.1 Rasterization
  3.2 Ray Tracing
  3.3 Mathematical Foundation for Ray Tracing
    3.3.1 Cartesian Coordinate System
    3.3.2 Vectors
    3.3.3 Barycentric Coordinates
    3.3.4 Local Coordinate System
    3.3.5 Ray Tracing Primitives
  3.4 Random Sampling
  3.5 Pruning Intersection Tests

4 Advanced Effects
  4.1 Lighting
    4.1.1 Color and Spectra
    4.1.2 Light Emission
  4.2 Material
    4.2.1 Perfect Specular Reflection
    4.2.2 Perfect Diffuse Reflection
    4.2.3 Perfect Specular Transmission
    4.2.4 Perfect Diffuse Transmission
    4.2.5 Phong Reflection Model
  4.3 Shadows
    4.3.1 Simple Shadows
  4.4 Monte Carlo Integration
    4.4.1 One-Dimensional Continuous Probability Density Functions
    4.4.2 One-Dimensional Expected Value
    4.4.3 Multi-Dimensional Random Variables
    4.4.4 Estimated Means
    4.4.5 The Monte Carlo Integral
    4.4.6 Solving the Transport Equation

5 Complexity Theory

6 Implementation
  6.1 Specifications
  6.2 Construction
    6.2.1 Class Diagram
    6.2.2 The RTracer Class
    6.2.3 The Render Method
    6.2.4 Recursive Ray Tracing
    6.2.5 Direct Lighting
    6.2.6 Materials
    6.2.7 Texture Mapping
    6.2.8 .NET Bitmap Optimization
  6.3 Results
    6.3.1 Performance Analysis
  6.4 Discussion

7 The Film Industry on Real-time Ray Tracing
  7.1 Why Interview?
  7.2 Interview with Per Christensen
  7.3 Interview Perspective

8 Conclusion

9 Discussion and Reflection

Appendices

A Correspondence with Per Christensen, Pixar

Bibliography


List of Figures

1.1 Killzone 2 (2009) on the Playstation 3 console and Empire: Total War (2009) for the PC.
1.2 The full-CGI Monsters Vs. Aliens (2009) and WALL•E (2008).

3.1 The rasterizer applies textures and various shading information to the scene, here seen with lighting only and as the full scene, from the upcoming Starcraft 2.
3.2 Ray traced image.
3.3 The eye, view window, and world.
3.4 Rays bouncing from the light source to the eye.
3.5 Tracing a new ray from each ray-object intersection.
3.6 Cartesian coordinates and points in R3.
3.7 Vector of the form r = xi + yj + zk.
3.8 A 2D triangle with vertices a, b and c in a barycentric coordinate system.
3.9 β as a signed scaled distance, and calculating α, β and γ as the areas of the subtriangles.
3.10 Right- and left-handed coordinate systems.
3.11 Ray/sphere intersection.
3.12 Checking if the ray is inside the sphere.
3.13 Closest approach on the ray.
3.14 Geometry of sphere intersection.
3.15 Depth of field is often used for readability and a cinematic feel in CGI.
3.16 Regular and jittered sampling with 1, 4, and 16 samples/pixel.
3.17 A quadtree and its tree representation.
3.18 A k-d tree in two dimensions and its tree representation. Regions are subdivided by either a vertical or a horizontal line.

4.1 A photograph of a room showing a light source and various objects [2, p. 100].
4.2 Description of a wavelength.
4.3 The spectrum of electromagnetic waves ranges from low-frequency radio waves to high-frequency gamma rays. Only a small portion of the spectrum, representing wavelengths of roughly 400–700 nanometers, is visible to the human eye [22].
4.4 Attaching a spectrum to a ray, describing the light travelling along that ray. The spectrum is given by points on an intensity versus wavelength plot [11, p. 125].
4.5 The geometry of reflection.
4.6 A ruler in a glass of water [11, p. 135].
4.7 The geometry of refraction.
4.8 The index of refraction of fused quartz.
4.9 The Phong Reflection Model [24].
4.10 Shadow terminology: light source, occluder, receiver, shadow, umbra, and penumbra.
4.11 Left: an area light can be approximated by some number of point lights; four of the nine points are visible to p so it is in the penumbra. Right: a random point on the light is chosen for the shadow ray, and it has some chance of hitting the light or not.
4.12 The geometry of a parallelogram light specified by a corner point and two edge vectors.
4.13 The geometry for the transport equation in its directional form.

5.1 The dashed part of f(x) meets the condition f(x) < Cg(x) [29].

6.1 Class diagram of materials, textures and shapes.
6.2 Class diagram containing the rest of the noteworthy classes within the project.
6.3 A. 360.000 primary rays, 1 sample/pixel, time: 00:03:851. B. 5.760.000 primary rays, 16 samples/pixel, time: 01:02:431. C. 36.000.000 primary rays, 100 samples/pixel, time: 06:16:723.
6.4 A. 360.000 primary rays, 1 sample/pixel, time: 00:05:712. B. 5.760.000 primary rays, 16 samples/pixel, time: 01:20:342. C. 36.000.000 primary rays, 100 samples/pixel, time: 09:00:481. D. 144.000.000 primary rays, 200 samples/pixel, time: 35:19:853.
6.5 A. 1.440.000 primary rays, 16 samples/pixel, time: 00:31:776. B. 9.000.000 primary rays, 100 samples/pixel, time: 03:13:846. C. 36.000.000 primary rays, 200 samples/pixel, time: 13:14:008.
6.6 Functions interpolated from test results.

7.1 For realistic and accurate reflections, ray tracing was an effective choice. Here seen from the Cars short spin-off, Mater and the Ghostlight.


Chapter 1

Problem Analysis

Computer graphics is an area no longer associated only with simple images, such as plain two-dimensional diagrams and minimalistic user interfaces. It has evolved into sophisticated three-dimensional graphics, depicting ever more detailed and realistic environments. During the last ten years the interest in computer graphics has grown tremendously, and technology is being pushed to ever higher levels. Advancements in affordable hardware have allowed consumers to play computer games with ever more impressive graphics, and many of the latest blockbuster movies capture movie-goers' attention and fascination with dazzling computer-generated effects and scenes.

As a consequence of the increased demand for highly detailed computer-generated imagery (CGI), developers and film makers seek better and more efficient ways of creating these images. In this pursuit, a popular method used in CGI, named rasterization, is falling short in a number of areas. A rendering method called ray tracing, which was once too computationally costly, has again come into consideration for next-generation computer games. Given current hardware trends, computational power is increasing by the year, making it more feasible to incorporate ray tracing techniques into films and real-time applications, such as video games, where users expect the generated graphics to update seamlessly as the environment is explored.

In this section we take a short look at the entertainment industry and introduce rasterization and ray tracing. We cover the differences between these rendering methods and how each of them applies to the two industries. Furthermore, we examine the challenges associated with real-time ray tracing.

1.1 The Entertainment Industry

In modern society entertainment is an enormous industry, and the film and gaming industries in particular represent a large portion of it. The U.S. game industry alone grew to $21.3 billion in 2008 according to the NPD Group, nearly a 19% increase from the $18 billion of 2007 [20].

The U.S. box office alone brought in $9.63 billion in revenue in 2007, a 5.4% increase from 2006. If the 2006 numbers are any indication, this figure amounts to only approximately 20% of the film industry's total revenue. In the same year the worldwide box office reached a historic high of $26.72 billion, compared to $25.47 billion in 2006, a 4.9% increase. With these numbers in mind it is apparent that consumers spend an increasing amount of money on entertainment. According to Veronis Suhler Stevenson, an investment and research firm, this spending averages $743.16 per person per year on media, which includes video games, box office and home video (as well as cable and satellite television, recorded music, consumer internet, consumer books, and mobile). [7; 23]


3D Computer Graphics and the Entertainment Industry

Computer-generated 3D graphics are an important part of the gaming and film industries. They are the defining component of many modern video games, and they allow film makers to enhance motion pictures with 3D characters and effects.

Ongoing advancements in this field have allowed film makers and game developers to create ever more realistic and immersive films and games. New rendering techniques and levels of detail have allowed film effects and video games to mature into very convincing representations of reality.

In terms of film making, one of the first extensive usages of 3D CGI (computer-generated imagery) was in the well-known Star Wars Episode IV: A New Hope from 1977. Since then CGI has been used increasingly in film productions, and some films are now created entirely with CGI. The first fully computer-animated film was Toy Story in 1995. Among the most recent fully computer-animated films are Bolt (2008), Star Wars: The Clone Wars (2008) and WALL•E (2008) [8].

Why Realistic Graphics Are Important in Films and Games

Looking at figures 1.1 and 1.2, which show games and films from 2008 and 2009, it is apparent that films and computer games display ever higher levels of detail and look increasingly realistic.

Figure 1.1: Killzone 2 (2009) on the Playstation 3 console and Empire: Total War (2009) for the PC.


Figure 1.2: The Full CGI Monsters Vs. Aliens (2009) and WALL•E (2008).

High-performance computers and console systems have enabled developers to add more detail and effects to their graphics engines. An important question arises in this context: why are realistic graphics important? The main answer is immersion. The better games are at capturing the player's senses and attention, the better the experience.

It is important to note that some researchers have shown that graphics alone are not sufficient for immersive gameplay. Realistic graphics can have quite the opposite effect if other aspects of the game are lacking. For instance, animations, physics and sound also need to be of a certain quality. If the game is not balanced in these aspects, the discrepancies can distract the player and therefore lower the degree of immersion.

So, when the goal is to create a realistic game, many details must be taken into consideration to make it believable. When it comes to CGI films, studios have a specific art style and a direction in which they take the film. Because of this, CGI films are not necessarily realistic, and this can be intentional, depending on the genre of the film and the intention of the film makers. The two films shown in figure 1.2 are not completely realistic, yet they still display high-quality graphics with real-world lighting effects. [37]

1.2 Rendering Methods and Their Applications

Computer-generated graphics can be created using various techniques, depending on the goal. The most notable are rasterization and ray tracing. The two techniques take different approaches to the way geometry is treated and how effects are applied.

Ray tracing was made popular in a paper published by Dr. Turner Whitted in 1980, which introduced the recursive ray tracing algorithm. Dr. Whitted is a senior researcher at Microsoft, holds a Ph.D. in Electrical Engineering, and was the first to use the concept of ray tracing for global illumination in computer graphics. Ray tracing is based on an elegant and simple algorithm that makes it relatively easy to render shadows and specular surfaces. [36; 17]

Ray tracing is usually used for rendering tasks which require high levels of detail and realism, such as 3D renderings of models and film scenes, where realistic phenomena such as lighting, reflection, refraction and caustics are important.


Because of the complexity of algorithms using a ray tracing-based methodology, realistic rendering is usually limited to offline rendering tasks. An example of a task for ray tracing is rendering a 3D animation in a film, where rendering time is a less important issue. Due to the algorithmic complexity constraints associated with ray tracing, it has generally not been regarded as a valid alternative to rasterization for rendering computer games.

Rasterization is therefore the universally used method for rendering 3D models and environments in computer games. This method is used because it allows game developers to create reasonably realistic animations of objects, environments and effects at interactive frame rates. Generally speaking, fast rendering is essential in 3D computer games, since low frame rates lower the player experience or can even render a game utterly unplayable. Therefore developers employ various techniques to maintain acceptable frame rates whilst simulating real-world physics and effects, like animation, lighting, textures and shadows. These techniques can be considered necessary compromises to acquire specific degrees of realism in a game while still preserving gameplay.

How Rasterization and Ray Tracing Are Different

The ray tracing and rasterization algorithms essentially boil down to solving the visibility problem, which involves determining which surfaces are hidden from the viewer. First we will look at how ray tracing handles this task and afterwards how it is solved by rasterization.

In basic terms, ray tracing simulates the way light waves (photons) behave by 'shooting' a given number of rays through each pixel and eventually averaging the correct color for each pixel by means of different calculations. However, this is done in a backwards manner, by determining which photons contribute to the final image. So, instead of following photons from a light source to the objects they hit and finally to the visible pixel, it is done in reverse. This is much less expensive in terms of computation than simulating every single photon. Because forward ray tracing is so expensive, the term ray tracing is most commonly associated with backwards ray tracing.

Ray tracing deals with shadows by using a special type of ray called a shadow ray or “shadow feeler”. This essentially boils down to firing new rays from a given intersection point in the direction of every light source in the scene. If such a ray intersects an opaque object before reaching the light source, the point from which the ray originated is in shadow with respect to that light source. Shadows can be made more realistic by increasing the number of samples per pixel, thus enabling soft shadows.

This can be done by treating each pixel as a finite square instead of as a single point on the screen. Rays are then fired into the center of each pixel and the intensities of neighboring rays are compared. If the intensities of these rays diverge by more than some pre-defined threshold, further rays are shot into the pixels. The intensities are then averaged once more to establish the final color of the current pixel. [11]
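
To make the shadow-feeler idea above concrete, here is a minimal C# sketch for a single point light. The intersectScene delegate, which returns the distance to the nearest opaque hit (or null for a miss), is a hypothetical stand-in for a real scene query; none of this is taken from the report's own implementation.

    using System;
    using System.Numerics;

    static class ShadowExample
    {
        // Fire a ray from the hit point towards the light and report shadow if any
        // opaque object lies closer than the light source itself.
        public static bool IsInShadow(Vector3 hitPoint, Vector3 lightPos,
                                      Func<Vector3, Vector3, float?> intersectScene)
        {
            Vector3 toLight = lightPos - hitPoint;
            float distanceToLight = toLight.Length();
            Vector3 dir = Vector3.Normalize(toLight);
            // Nudge the origin slightly along the ray so the surface does not shadow itself.
            Vector3 origin = hitPoint + 1e-4f * dir;
            float? hit = intersectScene(origin, dir);
            return hit.HasValue && hit.Value < distanceToLight;
        }
    }

With several lights, the same test is simply repeated once per light source, as described above.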

Ray tracing solves the problem of hidden surface removal implicitly by locating the nearest surface encountered by each camera ray. In effect this is the same as sorting through all geometry in a per-pixel fashion.

The raster pipeline processes pixels based on the geometry visible at a given moment. The first step of this process is to determine which geometry is visible and what is hidden from the viewing angle. As soon as the raster pipeline has established which geometry is visible, which is a computationally demanding part of the rendering process, the remaining geometry is examined in its smallest units, which are triangles. Subsequently, lighting and shading are applied in order to define the color of each triangle. Usually, textures are also added on top of each triangle.


The lighting between the vertices of each triangle can be interpolated (estimated). During post-processing a shader can be used to apply transformations to each pixel on the screen. [15]

Rasterization can be done in a stream-like manner, where each triangle is sent and processed separately. The advantage of this is that it is not required to access the entire scene at once. Yet, this makes it very difficult to apply global effects such as reflections, refraction, shadows and complex illumination.

To sum up, rasterization involves sending all geometry and discarding what is not visible to the viewer. In contrast, with ray tracing geometry is processed on demand, pixel by pixel. Ray tracing enables this by means of built-in occlusion culling.

Scene Complexity and Rendering Performance

Once a scene's complexity surpasses a certain level, ray tracing methods are more efficient than raster techniques. In this context, complexity represents the number of scene primitives, consisting of polygons, present in a particular scene. The reason ray tracing methods outperform rasterization beyond a given degree of complexity lies in how the algorithms scale. Rendering time scales linearly with image resolution for both methods. In terms of scene complexity, however, rasterization scales linearly, while ray tracing can attain logarithmic scaling by using acceleration techniques such as spatial data structures, like octrees or bounding volumes.
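
In rough asymptotic terms (our own shorthand, not a formal result taken from the cited sources), for an image of p pixels and a scene of n primitives:

    rasterization: rendering time on the order of O(n + p)
    ray tracing with an acceleration structure: rendering time on the order of O(p · log n)

so at a fixed resolution the cost of ray tracing grows only logarithmically as primitives are added, which is why it eventually overtakes rasterization on sufficiently complex scenes.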

Moreover, there is the notion of ray packets, which exploits locality by testing bundles of rays at a time, thus reducing the amount of computation required. Furthermore, it is possible to optimize ray tracing by taking advantage of modern CPU (Central Processing Unit) features, such as the cache, memory bus, pipeline and also SIMD (Single Instruction, Multiple Data) instructions. The SIMD concept is a method of improving performance when a large amount of uniform data needs to have the same instruction performed on it. Basically, this enables the CPU to perform the same operation on numerous pieces of data in parallel. [39; 33][2, p. 415-416]

1.3 Why Ray Tracing is Interesting Today

When interactive computer graphics arrived twenty years ago, the requirements for computer graphics were very different from those of today. Scenes were relatively simple, containing up to a few hundred triangles. At this scale the logarithmic complexity and the high initial cost of ray tracing simply could not compete with rasterization and its linear complexity in terms of primitives in the scene.

In addition, occlusion culling was not a concern in such simple scenes, since overdraw was minimal. Overdraw occurs when two or more polygons cover a pixel.

Shading was also rather simple, since per-pixel shading was not even used. With such basic requirements and with limited hardware resources available, ray tracing would have been the wrong choice at the time.

However, these requirements have changed vastly. There is an increasing need for complex shading, and rasterization now performs all tasks in floating point, just like ray tracing. Many operations are performed by shaders on a per-pixel basis on triangles taking up no more than a few pixels. In these cases interpolation and incremental operations do not work. [38]

Modern rasterization-based rendering systems provide some support for realistic lighting and complex illumination. This support usually comes through the z-buffer, a method of organizing image depth, i.e. a solution to the visibility problem mentioned in section 1.2.


Yet this results in a number of drawbacks, such as decreased artist and programmer productivity, system complexity and severe restrictions on the types of scenes that can be rendered. [12]

Moreover, for actions like zooming into models, rasterization hits resolution limitations for some effects such as reflection, whereas ray tracing allows unrestricted magnification without loss of quality.

Rasterization uses environment maps to simulate reflections. An environment map is basically an image of the surroundings with respect to a given object. With an environment map it is not possible to create inter-reflections, like one would see in a reflective shape such as a torus or a saddle surface (paraboloid). This effectively limits its usage to mostly convex objects, such as a sphere or an ellipsoid. Another drawback is that flat surfaces are generally not compatible with environment maps. The main issue with a flat surface is that the rays which reflect off of it typically differ only slightly in direction, which causes a small part of the environment map to be mapped onto a rather large surface. [2; 18]

Image quality can be improved with ray tracing by using sampling techniques, such as stochastic sampling, which can eliminate unsightly aliasing (jagged edges on objects) and enables many realistic effects, like soft shadows, motion blur, translucency, gloss, depth of field, refraction and caustics, all of which are very hard to accomplish with rasterization. [11]

Ray Tracing and Parallelism

Ray tracing can easily be parallelized because each ray is evaluated independently of all other rays. This can be exploited by dividing an image into a number of portions and assigning each of these to an individual processor or a different computer. By adding more processors to assist in rendering it is possible to obtain linear performance increases. [39] As of today, most commodity PCs and laptops come with dual- and quad-core processors. This development paves the way for parallel computing for tasks such as ray tracing.
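
As an illustration of how naturally this maps to a multicore CPU, the sketch below distributes image rows across cores with .NET's Parallel.For (part of the Task Parallel Library, which is newer than the .NET 3.5 used for the report's program). The tracePixel delegate stands in for the actual per-pixel ray tracing routine and is an assumption made for the example, not the report's code.

    using System;
    using System.Threading.Tasks;

    static class ParallelRender
    {
        // Each primary ray is independent, so rows can be rendered on different cores
        // without locking; results are written to disjoint parts of the image array.
        public static int[,] Render(int width, int height, Func<int, int, int> tracePixel)
        {
            var image = new int[height, width];
            Parallel.For(0, height, y =>
            {
                for (int x = 0; x < width; x++)
                    image[y, x] = tracePixel(x, y);   // e.g. a packed ARGB value
            });
            return image;
        }
    }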

Increased Processing Power and Dedicated Hardware Developments

The processing power available in PCs is constantly on the rise. Gordon E. Moore, co-founder of Intel in 1968, is commonly known for his predictions for the semiconductor industry, widely known as "Moore's Law". Moore's law states that the number of transistors that can fit on a minimum-cost square inch of silicon has doubled every 12 months since 1959; since then, the rate has slowed to every 18-24 months. This trend essentially allows for increasingly powerful chips at decreasing cost. The prediction is expected to hold until around 2015. [16]

The number of transistors per chip depends on a few important factors: the size of the smallest transistor that current equipment is able to etch into the silicon wafer, the size of the silicon wafer, the average number of defective chips per square inch of silicon, and the costs associated with manufacturing multiple components (i.e. packaging costs and the cost of integrating multiple chip components onto a printed circuit board). Chipmakers focus on all of these factors in order to bring consumers higher levels of integration at lower prices. Higher integration means adding more functionality onto the same chip, because of increased transistor density. For example, modern processors have integrated SIMD (section 1.2) and floating-point hardware, which used to be located on a separate chip. In addition, processors now operate at ever increasing clock frequencies, which allow them to perform ever more calculations per second. [34]


Given this trend and the fact that multicore processors have hit the market, the prospect of introducing ray tracing to gaming seems more realistic than it did, say, 10 years ago. Intel and the lesser known Caustic Graphics are currently working on new graphics processing units which add support for ray tracing, with the Larrabee chip and the CausticRT platform, respectively.

The Larrabee chip can support ray tracing by offering an x86-based (x86 refers to the common instruction set architecture of commercial CPUs) multicore system with a special instruction set and wide SIMD extensions. The chip also allows more programmability than currently available graphics cards. [30]

CausticRT is an in-development platform consisting of a special-purpose ray tracing library, or API (Application Programming Interface), named CausticGL, and a PCI Express card called CausticOne with 2 GB of memory. The platform organizes scattered (incoherent) light rays into a data flow which exploits the available processing power optimally. CausticRT aims to establish the CausticGL API as an industry-standard acceleration interface for developing ray tracing-based applications. This means that the API alone will be able to fall back onto existing multi-threaded Intel CPUs or other hardware, like GPUs, if they are capable of handling ray tracing efficiently. CausticRT is solely a ray tracing acceleration platform and not a renderer, so it is up to the developer to implement the renderer, which in turn can benefit from the accelerated platform.

In their latest fully ray traced demo, containing 5 million triangles, they have been able to operate at 3-5 frames per second. At the same time the user is able to interactively manipulate the camera view, all geometry and shaders, while maintaining the frame rate. [13]

The platform only supports triangles as geometric primitives, since most other shapes can be approximated by enough triangles. This differs from the original concept of ray tracing, where almost any shape can be rendered, provided it is possible to plug its equation into the ray tracer. The current limits of the platform are around 100 million vertices without paging with the CausticOne card, and in excess of 400 million vertices with the upcoming CausticTwo card, which will have paging support for larger datasets. Paging is the concept of transferring data between main memory and a supporting memory store, like a hard disk. [14]

In brief, Larrabee essentially allows acceleration of ray tracing by means of a special-purpose processor, which is also designed to support rasterization, whereas CausticRT delivers a software and graphics card acceleration solution designed exclusively for ray tracing. Dedicated hardware and software support for ray tracing, such as the aforementioned products, plays a critical role in making it a real-time rendering alternative to rasterization.

1.4 The Challenges of Ray Tracing

For ray tracing to become serious competition to rasterization in games, a minimum performance level of 300 million rays per second would be needed. This assumes a minimum resolution of 1024x1024 pixels running at 30 fps (frames per second) with 10 rays per pixel. To obtain high-quality anti-aliasing, an adaptive supersampling approach with 8 samples per pixel may be included; if so, the number of rays required may increase by a factor of roughly two.
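
As a quick check of that figure using the numbers just given (our arithmetic):

    1024 × 1024 pixels × 30 frames/s × 10 rays/pixel = 314,572,800 rays/s ≈ 300 million rays per second,

and with the adaptive anti-aliasing mentioned above the requirement roughly doubles, to around 600 million rays per second.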

Currently, 10 million rays per second can be achieved with a single CPU for static scenes; for dynamic scenes this number is halved. In terms of the minimum requirements, this means software performance would have to improve by a factor of 30 for static scenes and 60 for dynamic scenes. With anti-aliasing, twice the performance is needed. [10]

It is also important to note that there are tasks for which ray tracing may not be the answer, such as wireframe models and camera rays. Furthermore, memory bandwidth bottlenecks limit the efficiency of ray tracing if the scene being rendered does not fit into the available memory, so it is also necessary to address this matter [6].


Ray tracing has another limitation that needs to be addressed for it to maintain its sub-linear complexity in animation applications. When an object is moved or transformed, the acceleration data structure needs to be updated in order to quickly cull the scene and thus preserve maximum performance. Updating the acceleration structure is hard to carry out rapidly, since it is often rather costly. The cost of building the acceleration structure is at best linear in the number of objects it needs to hold; therefore it is not desirable to rebuild it for every frame.

It is worth noting, though, that several researchers have proposed methods of dealing with this matter. One approach involves keeping dynamic (i.e. moving) objects out of the acceleration structure and checking each of these separately for every ray. Yet this method is only practicable if the number of dynamic objects is low. Another method consists of a dynamic acceleration structure based on hierarchical grids, in which larger objects are stored in coarser volumes of the hierarchy. This allows the acceleration structure to be updated in constant time. [39]

The primary challenge is to develop a highly parallel ray tracing system which exploits ray coherence, has acceptable speed for animation, and supports efficient random memory access. In addition, there is a need for high-performance, parallel and scalable hardware. In time, a combination of specialized hardware and specific improvements to the algorithm could allow game developers to create complex scenes and realistic global effects with less effort, and likewise enable film makers to decrease rendering times significantly, consequently shortening production times.


Chapter 2

Problem Statement

How can real-time ray tracing become more prevalent in the film industry, and which challenges arise in this context? In relation to this question we ask the following sub-questions:

• How does ray tracing work?

• How can we design a ray tracer?

• Which advantages does ray tracing hold for the film industry?

• What can be done to speed up ray tracing to attain real-time rendering?

We will answer these questions by investigating the theory of ray tracing. In addition, we will attempt to create our own ray tracer as a proof of concept and test its performance. Finally, we will examine some of the latest research and interview experts in the field.


Chapter 3

Computer Graphics

To render three-dimensional images, various complicated processes must be executed. A proper method is needed to feed data to the machine in a way that it can be understood and shown on a raster display.

In this report we mainly deal with ray tracing, but the standard for real-time rendering today is rasterization. Rasterization displays graphics by coloring in triangles defined by a collection of vertices, whereas in ray tracing a ray is shot for each individual pixel and the path of that ray, along with various offset rays and reflection rays, is determined.

3.1 Rasterization

Rasterization is the standard technique used in the industry today for real-time rendering. It draws the various elements of 3D scenes to a raster display; in modern games this is done with the use of the two major APIs, OpenGL and DirectX.

Rasterization is the process of computing the various elements and maps of a scene into pixels: at the most basic level, the rasterizer renders three-dimensional vertices onto a two-dimensional raster display, such as a computer monitor. Developers are able to deliver graphics that are impressive by today's standards using different effects such as shaders, which alter the way certain elements of the scene are rendered.

Rasterization was developed for speed and is still the preferred choice over ray tracing, as standard desktop computers still do not quite offer the computational power for ray tracing to match rasterization in speed; meanwhile, rasterization has continued to grow and is as powerful a choice as ever. The rendering process of rasterization makes use of triangles as the modeling primitive for defining the images displayed.

These triangles are defined by a collection of interconnected vertices, which can be tagged with certain information, such as color; this color information is then interpolated across the triangles defined by the vertices. [31]

Graphics Pipeline

Developers today are able to deliver advanced graphics in real time through the use of rasterization, mostly due to the task-specific hardware that has emerged, namely the GPU. For rasterization-based rendering, the standard today is the raster pipeline. A pipeline is, as the term suggests, a process in which the data output of one stage serves as the input to the next stage. This is where the GPU offloads the CPU through the graphics rendering pipeline, and it is with this raster pipeline that games and other interactive 3D settings are rendered today.


Figure 3.1: The rasterizer applies textures and various shading information to the scene, here seen with lighting only and as the full scene, from the upcoming Starcraft 2.

The graphics pipeline most commonly used can be divided into three stages: the application, geometry and rasterizer stages. [2] In the application stage the developer has full control of the behavior of the process; however, it is not divided into substages like the later stages of the pipeline. In the application stage various calculations and algorithms are applied, and user input is handled. In recent years, with the introduction of multi-core processors, this stage has seen much improvement by performing its calculations across several processor cores, usually 2 or 4. Most importantly, the application stage prepares geometry data for input to the geometry stage.

The geometry stage prepares the geometry for the rasterizer stage, performing various operations such as the model and view transforms and vertex shading. Additionally, clipping is handled in this stage: clipping creates new vertices for objects that are partially off-screen, thereby passing only the information about the on-screen parts to the rasterizer.

The vertices from the previous stages, together with their associated shading information, are passed to the rasterizer stage. The rasterizer effectively computes and sets the color of the picture elements that cover the objects in question. During rasterization any per-pixel shading computation is performed; the per-pixel shading is ultimately computed on the GPU. Importantly, texturing is also handled in this stage. [2]

Z-Buffer

During rasterization a hardware process called z-buffering (depth buffering) is used. The z-buffer algorithm is used to determine visibility in the scene. The z-buffer stores a z-value for each pixel rendered, relative to the position of the camera. When a primitive is being rendered to the pixels on the display, the z-value of that primitive is compared to the z-value already stored for that pixel. If the z-value is greater than the current value stored in the z-buffer, the primitive is not rendered, and the z-value is left at the lesser value.
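
A minimal C# sketch of that depth test is shown below. The buffer layout and packed color format are our own illustrative assumptions, not a description of actual GPU hardware, and the z-buffer is assumed to have been initialized to float.MaxValue (everything infinitely far away).

    static class DepthBufferExample
    {
        // Write a fragment at (x, y) with depth z only if it is nearer to the camera
        // than whatever is already stored for that pixel (smaller z means nearer here).
        public static bool TryWrite(float[,] zBuffer, int[,] colorBuffer,
                                    int x, int y, float z, int color)
        {
            if (z >= zBuffer[y, x])
                return false;            // a nearer primitive already covers this pixel
            zBuffer[y, x] = z;           // keep the lesser (nearer) depth value
            colorBuffer[y, x] = color;
            return true;
        }
    }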

The point of the z-buffer is to render only the pixels closest to the camera, thereby spending computing power only on the objects that are closest to, and actually visible to, the viewer.


Through this graphics pipeline, developers are able to customize their graphics engines in every way imaginable, mainly offering vast choices of scalability, upwards and most importantly downwards: certain effects may be disabled or rendered at a lower quality, effectively making the engine scalable to lower ranges of hardware and thereby reaching a larger audience, building a larger customer base.


3.2 Ray Tracing

In this chapter we deal with ray tracing theory. Ray tracing is a complex method, but at its most basic it is a technique for rendering three-dimensional graphics with light interactions. This means that the technique makes it possible to create pictures full of mirrors, shadows, and transparent surfaces with absolutely stunning results. Ray tracing is based on the idea that you can model reflection and refraction by recursively following the path that light takes as it bounces through an environment.

Figure 3.2: Ray traced image.

Before explaining how ray tracing actually works, we should agree on some basic terminology. When creating any sort of 3D computer graphics, it is necessary to create a list of objects that you want your software to render. These objects are part of a world, so when we talk about “looking at the world”, we are just referring to the ray tracer drawing the objects from a given viewpoint, not to any philosophical point of view. This viewpoint is called the eye or camera in graphics. The difference between a camera in film and the camera in graphics is that the film is placed behind the aperture, whereas in graphics the view window is in front of the eye. This means that in computer graphics each pixel of the final image is produced by simulated light rays that hit the view window on their path towards the eye, while in film the color of each point is caused by a ray of light passing through the aperture and hitting the film.

Our final goal is to find the color of each point on the view window. When creating an image, it is preferable to subdivide the view window into small squares, where each square corresponds to one pixel in the final image. To create an image at a resolution of 640 × 400, it is necessary to break up the view window into a grid of 640 squares across and 400 squares down. The main task is to assign a color to each square; this is what ray tracing does.
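
As a small illustration of this pixel grid, the following C# sketch maps pixel (i, j) of a w × h image to a ray from the eye through the centre of the corresponding square on the view window. The particular camera layout (eye at the origin, view window on the plane z = -1) is an assumption made for the example, not the report's camera model.

    using System.Numerics;

    static class PinholeCamera
    {
        // Returns the origin and normalized direction of the primary ray for pixel (i, j).
        public static (Vector3 origin, Vector3 direction) PrimaryRay(int i, int j, int w, int h)
        {
            Vector3 eye = Vector3.Zero;                   // the eye sits at the origin
            float aspect = (float)w / h;
            // The +0.5 offsets put the sample in the middle of each pixel square.
            float x = ((i + 0.5f) / w * 2f - 1f) * aspect;
            float y = 1f - (j + 0.5f) / h * 2f;           // pixel rows grow downwards
            Vector3 onWindow = new Vector3(x, y, -1f);    // view window lies at z = -1
            return (eye, Vector3.Normalize(onWindow - eye));
        }
    }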

Saying that a picture is ray traced means that the picture is traced through the scene: ray tracing tries to imitate the paths that light rays travel as they bounce around within the environment. The goal of ray tracing is to find the color of each light ray that strikes the view window before reaching the eye. However, this is also the disadvantage of ray tracing, because calculating every single ray involves a significant amount of computational waste, since not all rays will end up reaching the eye.

In the ray tracing process we could begin at the light source, where we first have to decide how many rays we are dealing with. Then we have to decide in what direction every single ray is going. This raises an issue: which rays do we have to follow? There are infinitely many directions in which a light ray can travel. Some of the rays we trace will reach the eye directly, others will bounce around and eventually reach the eye, and the rest will probably never reach the eye at all.


Figure 3.3: The eye, view window, and world.

For the rays that never reach the eye, the effort of tracing them is a total waste. As figure 3.4 shows, a lot of rays bounce around, many of them never reach the eye, and the time spent tracing them is wasted.

Figure 3.4: Rays bouncing from the light source to the eye.

In order to avoid this wasted effort, we solve the problem by only tracing those rays that are guaranteed to hit the view window and reach the eye. At first this seems difficult, because any given ray can bounce around the room many times before it reaches the eye. The solution lies in performing ray tracing in a backwards manner: if we start tracing the rays from the eye instead of from the light source, we avoid the wasted effort. Now consider any point on the view window whose color we are trying to determine. Its color is given by the color of the light ray that passes through that point on the view window and reaches the eye.


The two rays are identical except for their directions. Viewing figure 3.5, one can imagine the direction of the arrows reversed and see that the backwards method does the same thing as the original method, except that it does not waste any effort on rays that never reach the eye.

This is how ray tracing works in computer graphics. For every pixel on the view window, we define a ray that extends from the eye through that point. We follow these rays out into the scene and observe where they hit the different objects. The final color is defined by the colors of the objects a ray hits throughout the scene.

Due to the large number of bounces before the final ray hits the eye (or, going backwards, hits the light), we need to define a limit on the number of bounces allowed. Every time a ray hits an object, we follow a new ray from that point of intersection directly towards the light source.

We see two rays in figure 3.5, a and b, which intersect the purple sphere. To determine the color at a, we follow the new ray a', directly towards the light source. As the figure shows, b will be shadowed because the ray b' towards the light source is blocked by the sphere itself. The ray a' would also have been shadowed if another object had blocked it.

Figure 3.5: Tracing a new ray from each ray-object intersection.
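
The recursive process described above and illustrated in figure 3.5, with a bounce limit and a shadow ray fired at every intersection, can be sketched in C# roughly as follows. The delegates for the nearest-hit query and the local shading are hypothetical stand-ins; the report's actual recursive ray tracer is described in Chapter 6.

    using System;
    using System.Numerics;

    static class RecursiveTracer
    {
        const int MaxDepth = 5;   // bounce limit; the exact value is a tuning choice

        // Shade the nearest hit (the shadow test is assumed to live inside shadeLocal)
        // and follow one mirrored ray, stopping once the bounce limit is reached.
        public static Vector3 Trace(
            Vector3 origin, Vector3 dir, int depth,
            Func<Vector3, Vector3, (bool hit, Vector3 point, Vector3 normal)> nearestHit,
            Func<Vector3, Vector3, Vector3> shadeLocal,   // (point, normal) -> colour
            float reflectivity)
        {
            if (depth > MaxDepth)
                return Vector3.Zero;                      // give up and return black

            var (hit, point, normal) = nearestHit(origin, dir);
            if (!hit)
                return Vector3.Zero;                      // the ray left the scene

            Vector3 colour = shadeLocal(point, normal);   // direct light plus shadow ray

            // Mirror the incoming direction about the surface normal and recurse.
            Vector3 reflected = Vector3.Reflect(dir, normal);
            return colour + reflectivity *
                   Trace(point + 1e-4f * reflected, reflected, depth + 1,
                         nearestHit, shadeLocal, reflectivity);
        }
    }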


3.3 Mathematical Foundation for Ray Tracing

In this section we will cover some of the basic mathematics needed to understand ray tracing.

• Cartesian coordinate systems

– Points in Cartesian coordinate systems

– Shapes in Cartesian coordinate systems

• Barycentric coordinates

• Ray-tracing Primitives

– Sphere

– Plane

– Triangle

All of this is needed to build a ray tracer and to understand the functions behind it.

3.3.1 Cartesian Coordinate System

The Cartesian coordinate system is used in mathematics to describe each point in a two-dimensional plane or three-dimensional space uniquely. This is done by giving each point a set of numeric values. In the plane (R2), two values are used to determine the point's position along the x-axis and the y-axis; in space (R3), a third axis, the z-axis, is introduced. The axes in the two- and three-dimensional coordinate systems are commonly defined as mutually orthogonal (each at a right angle to the others).

Figure 3.6: Cartesian coordinates and points in R3


Points in the Cartesian Coordinate System

A point in the Cartesian system is written as (x, y) in R2 and (x, y, z) in R3; the point P(4.5, −3.6) would then lie 4.5 units along the x-axis and −3.6 units along the y-axis. The point where the axes intersect is called the origin, O, and is given the value (0, 0) in R2 and (0, 0, 0) in R3, respectively. An example is shown in figure 3.6.

Geometric Shapes

Geometric shapes such as curves or circles can also be described in the Cartesian system, using algebraic equations satisfied by the points lying on the shape. Other shapes, like squares or triangles, cannot be described by a single equation but can instead be described by a collection of points that, when connected, form the shape; such forms are generally called polygons.

3.3.2 Vectors

A vector is defined as an arrow from the origin to a point, and is simply written as a = (x, y) in R2 or b = (x, y, z) in R3. It can also be written as a row vector, [x  y  z], or as a column vector, with the same components stacked vertically.

Another way to express a vector is by introducing a set of basis vectors:

e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).

A vector going from the origin to the point P(a, b, c) can then be expressed as

P = a·e1 + b·e2 + c·e3,

as shown in figure 3.7.

Figure 3.7: A vector of the form r = xi + yj + zk
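
Purely as an illustration (our own example, using .NET's System.Numerics types rather than anything from the report), the basis decomposition above corresponds directly to:

    using System.Numerics;

    static class BasisExample
    {
        // Builds the vector to P(a, b, c) from the standard basis: P = a·e1 + b·e2 + c·e3.
        public static Vector3 FromBasis(float a, float b, float c)
        {
            Vector3 e1 = Vector3.UnitX;   // (1, 0, 0)
            Vector3 e2 = Vector3.UnitY;   // (0, 1, 0)
            Vector3 e3 = Vector3.UnitZ;   // (0, 0, 1)
            return a * e1 + b * e2 + c * e3;
        }
    }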


3.3.3 Barycentric Coordinates

In mathematics, barycentric coordinates are coordinates defined by the vertices of a simplex (an object with n + 1 vertices, where n is the dimension, e.g. a triangle in two dimensions). In barycentric coordinates the axes do not have to be orthogonal, as they do in Cartesian coordinates. An example of a barycentric coordinate system is shown in figure 3.8.

Figure 3.8: A 2D triangle with vertices a, b and c in a barycentric coordinate system

If we look at a barycentric coordinate system with origin in a, and with points b and c each lying along one of the two axes, any point in the system can be written as

p = a + β(b − a) + γ(c − a),

or, reordered,

p = (1 − β − γ)a + βb + γc.

(1 − β − γ) is often defined as a new variable α to improve the symmetry, giving

p(α, β, γ) = αa + βb + γc,  with α + β + γ = 1.

A particularly nice feature of barycentric coordinates is that a point p is inside the triangle defined by the three vertices a, b and c if and only if

0 < α < 1,  0 < β < 1,  0 < γ < 1.

If one of the variables is 0 and the other two are between 0 and 1, the point is on one of the edges; if two of the variables are 0 and the last is 1, the point is at one of the vertices.

Cartesian to Barycentric Coordinates

The conversion from Cartesian coordinates to barycentric coordinates is done by finding the three variables α, β and γ, each of which is the signed, scaled distance from the point to the edge opposite the corresponding vertex, as shown in figure 3.9:

β = ((y_a − y_c)x + (x_c − x_a)y + x_a·y_c − x_c·y_a) / ((y_a − y_c)x_b + (x_c − x_a)y_b + x_a·y_c − x_c·y_a).


γ is found in a similar way:

$$\gamma = \frac{(y_a - y_b)x + (x_b - x_a)y + x_a y_b - x_b y_a}{(y_a - y_b)x_c + (x_b - x_a)y_c + x_a y_b - x_b y_a}.$$

The last variable α could be found in the same way, but using $\alpha = 1 - \beta - \gamma$ is both easier and faster.

Another way to compute the barycentric coordinates is to find the areas of the subtriangles, each defined by one of the three edges and the point for which we want the barycentric coordinates. This is done as follows:

α = Aa/A.

β = Ab/A.

γ = Ac/A.

where A is the area of the triangle; note that $A = A_a + A_b + A_c$. This rule still holds for points outside the triangle as long as the areas are signed. [31]
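As a small illustration of the area formulation, the following C# sketch computes α, β and γ for a 2D point from signed sub-triangle areas. System.Numerics is used for the 2D vector type; the class and method names are our own and not taken from the accompanying source code.

using System.Numerics;

static class Barycentric
{
    // Signed area of the 2D triangle (p, q, r); positive when the vertices
    // are given in counter-clockwise order.
    static float SignedArea(Vector2 p, Vector2 q, Vector2 r) =>
        0.5f * ((q.X - p.X) * (r.Y - p.Y) - (r.X - p.X) * (q.Y - p.Y));

    // Returns (alpha, beta, gamma) for point p with respect to triangle (a, b, c),
    // using the signed sub-triangle areas A_a, A_b, A_c divided by the full area A.
    public static (float Alpha, float Beta, float Gamma) FromAreas(
        Vector2 a, Vector2 b, Vector2 c, Vector2 p)
    {
        float area  = SignedArea(a, b, c);
        float alpha = SignedArea(p, b, c) / area; // sub-triangle opposite vertex a
        float beta  = SignedArea(a, p, c) / area; // sub-triangle opposite vertex b
        float gamma = SignedArea(a, b, p) / area; // sub-triangle opposite vertex c
        return (alpha, beta, gamma);              // alpha + beta + gamma == 1
    }
}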

Figure 3.9: β as a signed scaled distance and calculating α, β and γ as the area of the subtriangles


3.3.4 Local Coordinate System

In one of the later sections we will discuss the shadow phenomenon. For that, we will expand our understanding of the three-dimensional coordinate system, since it becomes useful when handling shadows in 3D graphics. An essential aspect of the geometric structure of a 3D graphics system is a compact method to store and utilize descriptions of local coordinate systems. Local coordinate systems are used in the definition of the various components of a model describing the geometry and other characteristics of the scene, much as the local coordinates used on a plan are used in describing the design of a real object. For instance, the coordinates on the plan for a complete aircraft will necessarily be much different from the coordinates used on the plan for the airplane's wheel assembly.

The common representation of 3D coordinates in mathematics and engineering is a right-handed coordinate system. This gives a natural organization with respect to the display screen, with the x-coordinate measuring horizontal distance across the screen, the y-coordinate measuring vertical distance up the screen, and the z-coordinate providing the third spatial dimension as distance in front of the screen.
However, in the early development of computer graphics, coordinate systems were often left-handed; in screen space, the difference is that the positive z or depth coordinate is measured into the screen. Figure 3.10 shows the ordering of right-handed and left-handed coordinate systems. A local coordinate system can be positioned and oriented anywhere in space and is not usually aligned with the screen.

Figure 3.10: Right- and left-handed coordinate systems.

A local coordinate system is usually defined in terms of a small set of intuitive geometric operations, the affine transformations:

• Translation: a change in the position of the origin of the local system.

• Scaling: a change in the scale of measurement in the local system.

• Rotation: a change in the orientation of the local system.

• Shear: transformation from an orthogonal coordinate system to a nonorthogonal system.

However, we will not give an in-depth discussion of the affine transformations in this report, but focus on basic transformations.

The basic geometric unit is the 3D point, which is typically represented in a 3D graphics system as a 3-vector and stored as an array of three elements, representing the x, y, and z components of the point. Orientation vectors, like surface normals and directions in space, are also represented by 3-vectors. Thus, the point (x, y, z) is given by the vector


$$\begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$

A local coordinate system is usually specified by a four-dimensional (4D) homogeneous transformation matrix of the form

$$M = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

which specifies a transformation from the local coordinate system to its reference coordinate system. In other words, applying the transformation M to a point specified in the local coordinate system will yield the same point specified in the reference coordinate system. Another way of thinking of the same transformation matrix M is that, when applied to the reference coordinate system, it aligns it with, and scales it to, the local coordinate system. The 4D homogeneous form of the transformation matrix M allows the unification of translation with scaling, rotation, and shear in a single matrix representation.

The transformation implied by matrix M is implemented by a three-step process. Assuming that 3D geometric points are represented as column vectors in the local coordinate system, they are transformed into the reference coordinate system by

1. Extending the 3D point p into a 4-vector $\vec{v}$ in homogeneous space by giving it a fourth, or w, coordinate of 1:

$$p = \begin{bmatrix} x \\ y \\ z \end{bmatrix} \Longrightarrow \vec{v} = \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

2. Premultiplying this extended vector by the matrix M, yielding a transformed 4-vector $\vec{v}\,'$:

$$M\vec{v} = \vec{v}\,' = \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix}$$

3. Converting the resulting 4-vector $\vec{v}\,'$ into the transformed 3D point p' by discarding its w-coordinate:

$$\vec{v}\,' = \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} \Longrightarrow p' = \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix}$$

Inspection of matrix M will show that it is defined to always send the original w-coordinate to itself, thus making the third step legitimate.
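The three steps can be written out directly in code. The following C# sketch applies a 4×4 homogeneous matrix, stored as m[row, column] with bottom row (0 0 0 1), to a 3D point; the names are illustrative only and not taken from the accompanying source code.

static class Homogeneous
{
    // Transforms the local-space point (x, y, z) into the reference coordinate
    // system using the 4x4 matrix m (m[row, column], bottom row = 0 0 0 1).
    public static double[] TransformPoint(double[,] m, double x, double y, double z)
    {
        // Step 1: extend the 3D point to a homogeneous 4-vector with w = 1.
        double[] v = { x, y, z, 1.0 };
        double[] result = new double[4];

        // Step 2: premultiply by the matrix, v' = M * v.
        for (int row = 0; row < 4; row++)
            for (int col = 0; col < 4; col++)
                result[row] += m[row, col] * v[col];

        // Step 3: discard the w-coordinate (the bottom row of M keeps it at 1).
        return new[] { result[0], result[1], result[2] };
    }
}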


Unfortunately, we did not take full advantage of transforms in our ray tracer. However, we are aware of their benefits: with them it is convenient to position, reshape, and animate objects, lights, and cameras. They also ensure that all computations are carried out in the same coordinate system, and make it possible to project objects onto a plane in different ways [35, section 35].

3.3.5 Ray Tracing Primitives

In this section we will describe some of the basic ray-tracing primitives and how to perform intersection tests on them. The primitives we will describe are:

• The Ray

– Our basic ray, the object all our primitives will be checked for intersection with.

• Spheres

• Planes

• Triangles

The Ray

The mathematical representation for a ray is defined in the following equation (3.1):

$$\vec{R}_{origin} \equiv \vec{R}_0 \equiv (x_0, y_0, z_0), \qquad \vec{R}_{direction} \equiv \vec{R}_d \equiv (x_d, y_d, z_d),$$
$$x_d^2 + y_d^2 + z_d^2 = 1 \ \text{(normalized)},$$
$$\vec{R}(t) = \vec{R}_0 + \vec{R}_d \cdot t, \quad t > 0. \qquad (3.1)$$

A ray consists of an origin point and a propagation direction; it is a 3D parametric line. Here the parameter is called t, and the reason why it is necessary to require t > 0 is that if t < 0, $\vec{R}(t)$ would be behind the origin, which can be thought of as a camera lens or eye. This also means that small values of t are closer than larger values, i.e. if $t_1 < t_2$, then $\vec{R}(t_1)$ is closer to the camera lens than $\vec{R}(t_2)$. With this in mind it is possible to determine the nearest objects intersected by a ray in front of the camera lens. Also note that the direction vector does not have to be normalized, but by normalizing it, t represents the distance from the origin. [31]
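A ray of this form maps directly onto a small data type. The following C# sketch (using System.Numerics for the vector type; the type and member names are our own, not necessarily those of the accompanying source code) stores an origin and a normalized direction and evaluates equation (3.1):

using System.Numerics;

public struct Ray
{
    public Vector3 Origin;     // R0
    public Vector3 Direction;  // Rd, normalized so that t is a distance

    public Ray(Vector3 origin, Vector3 direction)
    {
        Origin = origin;
        Direction = Vector3.Normalize(direction);
    }

    // R(t) = R0 + Rd * t, only meaningful for t > 0.
    public Vector3 PointAt(float t) => Origin + Direction * t;
}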

Sphere

A sphere is defined in Cartesian coordinates by:

Sphere's center $\equiv \vec{S}_c \equiv (x_c, y_c, z_c)$, sphere's radius $\equiv S_r$. The sphere's surface is the set of points $(x_s, y_s, z_s)$

$$\text{where: } (x_s - x_c)^2 + (y_s - y_c)^2 + (z_s - z_c)^2 = S_r^2. \qquad (3.2)$$

$$\text{origin to center: } \vec{OC} = \vec{S}_c - \vec{R}_0. \qquad (3.3)$$


Figure 3.11: Ray/Sphere Intersection

The surface of the sphere is defined by an implicit equation, so points on the surface cannot be calculated directly. Instead, each point can be tested by inserting it into (3.2); if equality holds, the point is on the surface. To solve the intersection between the ray and the sphere, the equation of the ray is substituted into the sphere equation, and the result is solved for t. This is done by expressing the ray from (3.1) as the set of points (x, y, z):

$$x = x_0 + x_d \cdot t, \quad y = y_0 + y_d \cdot t, \quad z = z_0 + z_d \cdot t. \qquad (3.4)$$

And by substituting (3.4) into (3.2) we get:

$$(x_0 + x_d \cdot t - x_c)^2 + (y_0 + y_d \cdot t - y_c)^2 + (z_0 + z_d \cdot t - z_c)^2 = S_r^2.$$

In terms of t this simplifies to:

$$A \cdot t^2 + B \cdot t + C = 0.$$

where

$$A = x_d^2 + y_d^2 + z_d^2 = 1,$$
$$B = 2 \cdot (x_d \cdot (x_0 - x_c) + y_d \cdot (y_0 - y_c) + z_d \cdot (z_0 - z_c)),$$
$$C = (x_0 - x_c)^2 + (y_0 - y_c)^2 + (z_0 - z_c)^2 - S_r^2.$$

The equation is quadratic, and the solution for t is (with A = 1):

$$t_0 = \frac{-B - \sqrt{B^2 - 4C}}{2}, \qquad t_1 = \frac{-B + \sqrt{B^2 - 4C}}{2}.$$

When the discriminant is negative, the ray misses the sphere. Since t > 0, the roots $t_0$ and $t_1$ are examined. The smallest positive root is the closest; if no such root exists, the ray misses the sphere.


Once the distance t is found, the actual intersection point is:

$$\vec{r}_{intersect} \equiv \vec{r}_i = (x_i, y_i, z_i) = (x_0 + x_d \cdot t,\; y_0 + y_d \cdot t,\; z_0 + z_d \cdot t). \qquad (3.5)$$

And the unit vector normal to the sphere is simply:

$$\vec{r}_{normal} \equiv \vec{r}_n = \left( \frac{x_i - x_c}{S_r},\; \frac{y_i - y_c}{S_r},\; \frac{z_i - z_c}{S_r} \right). \qquad (3.6)$$

[11]
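The algebraic solution above can be sketched in C# as follows. System.Numerics is used for brevity, the names are illustrative, and the direction is assumed to be normalized so that A = 1.

using System;
using System.Numerics;

static class SphereIntersection
{
    // Algebraic ray/sphere test. origin/dir follow equation (3.1) with dir normalized.
    // Returns true and the closest positive t when the ray hits the sphere.
    public static bool Hit(Vector3 origin, Vector3 dir,
                           Vector3 center, float radius, out float t)
    {
        t = 0f;
        Vector3 oc = origin - center;
        float b = 2f * Vector3.Dot(dir, oc);                // B
        float c = oc.LengthSquared() - radius * radius;     // C
        float discriminant = b * b - 4f * c;
        if (discriminant < 0f) return false;                // ray misses the sphere

        float sqrtD = MathF.Sqrt(discriminant);
        float t0 = (-b - sqrtD) / 2f;                       // smaller root
        float t1 = (-b + sqrtD) / 2f;                       // larger root
        t = t0 > 0f ? t0 : t1;                              // smallest positive root
        return t > 0f;
    }

    // Intersection point (3.5) and unit normal (3.6).
    public static Vector3 PointAt(Vector3 origin, Vector3 dir, float t) => origin + dir * t;
    public static Vector3 NormalAt(Vector3 point, Vector3 center, float radius) =>
        (point - center) / radius;
}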

Sphere Optimizations

The intersection algorithm described above is simple and easy to understand, but it is not very efficient. To optimize it, some knowledge about how calculations are performed on a computer is needed. For example, a square root generally takes 15-30 times longer than a multiplication, and similarly a division often takes longer than multiplying by the reciprocal (e.g. $\frac{3}{4}$ becomes $3 \cdot 0.25$). Another thing to observe is that the calculation can often be cut short. A number of tests make it possible to determine whether the sphere is hit at all, so that only the necessary calculations are performed. By studying the geometry of the sphere, some other properties become apparent: if the ray origin is inside the sphere, the ray always hits the sphere, and if the closest approach to the sphere is negative, the sphere is behind the ray origin and is thereby missed. Based on this, another approach would be:

(1) Find out whether the ray's origin is outside the sphere.
(2) Find the closest approach of the ray to the sphere's center.
(3) If the ray is outside the sphere and points away from it, the ray must miss the sphere.
(4) Else, find the squared distance from the closest approach to the sphere surface.
(5) If the value is negative, the ray misses the sphere.
(6) Else, find the distance between the ray origin and the sphere.
(7) Calculate the intersection point.
(8) Calculate the normal at the intersection point.

This breaks the equations from the previous section up into smaller pieces and introduces two "breakers": if (3) or (5) is true, the ray misses the sphere and no further calculations are needed. To explain the strategy better, note that the ray from (3.1) and the sphere from (3.2) and (3.3) are used. First, find out whether the ray origin is inside the sphere:

$$\text{Origin to center vector} \equiv \vec{OC} = \vec{S}_c - \vec{R}_0,$$
$$\text{length squared of } \vec{OC} \equiv L^2_{OC} = \vec{OC} \bullet \vec{OC}.$$

If $L^2_{OC} < S_r^2$ the ray origin is inside the sphere and the ray must hit it; if $L^2_{OC} > S_r^2$ the origin is outside and the ray might miss the sphere. The two cases are shown in figure 3.12.

The next step is to calculate the distance from the origin to the point on the ray closest to the sphere's center:

$$\text{Closest approach along the ray} \equiv t_{ca} = \vec{OC} \bullet \vec{R}_d.$$

If $t_{ca} < 0$ the center of the sphere is behind the ray origin. This is not important if the ray origin is inside the sphere, but if the ray origin is outside the sphere it means the ray misses the sphere. The two cases are shown in figure 3.13.


Figure 3.12: Checking if the ray is inside the sphere

Figure 3.13: Closest approach on the ray

Once the closest approach distance is calculated, the distance from this point to the sphere's surface is determined. This distance is:

$$\text{half chord distance squared} \equiv t^2_{hc} = S_r^2 - D^2, \qquad (3.7)$$

where D is the distance from the ray's closest approach point to the sphere's center, calculated by the Pythagorean theorem:

$$D^2 = L^2_{OC} - t^2_{ca}.$$

Substituted into (3.7):

$$t^2_{hc} = S_r^2 - L^2_{OC} + t^2_{ca}.$$

This is shown in figure 3.14. If $t^2_{hc} < 0$ the ray misses the sphere; this can of course only happen if the ray origin is outside the sphere.

At this point, everything needed to calculate the actual intersection point's distance along the ray has been found, and t is

$$t = t_{ca} - \sqrt{t^2_{hc}} \quad \text{for rays originating outside the sphere,} \qquad (3.8)$$
$$t = t_{ca} + \sqrt{t^2_{hc}} \quad \text{for rays originating inside or on the sphere.}$$

All that is needed now is to calculate the actual intersection point and the sphere normal at that point, which is done using (3.5) and (3.6). [11]
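A C# sketch of the optimized, geometric version, following steps (1)-(8), could look like this (illustrative names; the direction is assumed to be normalized):

using System;
using System.Numerics;

static class SphereIntersectionOptimized
{
    // Geometric ray/sphere test following steps (1)-(8); dir must be normalized.
    public static bool Hit(Vector3 origin, Vector3 dir,
                           Vector3 center, float radius, out float t)
    {
        t = 0f;
        Vector3 oc = center - origin;                     // OC = Sc - R0
        float l2oc = Vector3.Dot(oc, oc);                 // squared length of OC
        bool inside = l2oc <= radius * radius;            // (1) origin inside or on the sphere?

        float tca = Vector3.Dot(oc, dir);                 // (2) closest approach along the ray
        if (!inside && tca < 0f) return false;            // (3) sphere behind the ray: miss

        float t2hc = radius * radius - l2oc + tca * tca;  // (4) half-chord distance squared
        if (t2hc < 0f) return false;                      // (5) ray passes beside the sphere

        // (6) distance to the surface, equation (3.8).
        t = inside ? tca + MathF.Sqrt(t2hc) : tca - MathF.Sqrt(t2hc);
        return true;
        // (7)/(8): the hit point and normal then follow from equations (3.5) and (3.6).
    }
}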


Figure 3.14: Geometry of sphere intersection

Plane

A two-dimensional plane, in three-dimensional Cartesian coordinates, is defined by

$$\text{The plane} \equiv A \cdot x + B \cdot y + C \cdot z + D = 0, \qquad (3.9)$$
$$\text{where } A^2 + B^2 + C^2 = 1.$$

The unit vector normal of the plane is

~Pnormal ≡ ~Pn = (A,B,C).

and the distance from the coordinate system's origin is D; the sign of D is determined by which side of the plane the system origin is located on. The distance from the ray origin to the plane is found by substituting (3.4) into the equation for the plane:

A · (x0 + xd · t) +B · (y0 + yd · t) + C · (z0 + zd · t) +D = 0.

solved for t gives

$$t = \frac{-(A \cdot x_0 + B \cdot y_0 + C \cdot z_0 + D)}{A \cdot x_d + B \cdot y_d + C \cdot z_d}. \qquad (3.10)$$

In vector notation

$$t = \frac{-(\vec{P}_n \bullet \vec{R}_0 + D)}{\vec{P}_n \bullet \vec{R}_d}.$$

To use (3.10) more efficiently, first calculate the denominator

vd = ~Pn • ~Rd = A · xd +B · yd + C · zd.

If $v_d = 0$ the ray is parallel with the plane and no intersection occurs. Admittedly, the ray could lie in the plane, but in practice this case is irrelevant, as hitting the plane edge-on has no effect on rendering. If the ray passes this test ($v_d \neq 0$), calculate the second dot product

v0 = −(~Pn • ~R0 +D) = −(A · x0 +B · y0 + C · z0 +D).

Now calculate the ratio of the dot products

$$t = \frac{v_0}{v_d}.$$

If $t < 0$ the intersection occurs behind the ray origin, and no actual intersection happens. Else, calculate the intersection point:

ri = (xi, yi, zi) = (x0 + xd · t, y0 + yd · t, z0 + zd · t).

Usually, the desired plane normal is the one facing the ray, and so the sign of the normal may be adjusted depending on its relationship with the ray direction $\vec{R}_d$:

if $\vec{P}_n \bullet \vec{R}_d < 0$ (same as $v_d < 0$) then
    $\vec{P}_n = \vec{P}_n$
else
    $\vec{P}_n = -\vec{P}_n$
end if

[11]
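A C# sketch of the plane test, following equation (3.10) and the normal-flipping rule above (illustrative names; the plane is given by its unit normal (A, B, C) and the value D):

using System.Numerics;

static class PlaneIntersection
{
    // Ray/plane test for the plane A*x + B*y + C*z + D = 0 with unit normal (A, B, C).
    public static bool Hit(Vector3 origin, Vector3 dir,
                           Vector3 normal, float d, out float t, out Vector3 shadingNormal)
    {
        t = 0f;
        shadingNormal = normal;

        float vd = Vector3.Dot(normal, dir);       // denominator of (3.10)
        if (vd == 0f) return false;                // ray parallel to the plane

        float v0 = -(Vector3.Dot(normal, origin) + d);
        t = v0 / vd;
        if (t < 0f) return false;                  // intersection behind the ray origin

        if (vd > 0f) shadingNormal = -normal;      // flip the normal so it faces the ray
        return true;
    }
}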

Triangle

There are many methods for calculating ray-triangle intersections; we have chosen a method using barycentric coordinates, because it requires no long-term storage besides the vertices of the triangle. The parametric equation can be written in vector form as described in section 3.3.3. If the vertices of the triangle are $\vec{a}$, $\vec{b}$ and $\vec{c}$, then the intersection occurs when

$$\vec{R}_0 + t \cdot \vec{R}_d = \vec{a} + \beta(\vec{b} - \vec{a}) + \gamma(\vec{c} - \vec{a}). \qquad (3.11)$$

The hit point p will then be at $\vec{R}_{origin} + t \cdot \vec{R}_{direction}$, and we know from section 3.3.3 that the ray intersects the triangle if and only if

$$0 < \beta < 1, \qquad 0 < \gamma < 1, \qquad \beta + \gamma < 1.$$

To solve for t, β and γ in equation 3.11, we expand it into three equations for its three coordinates

$$x_0 + t \cdot x_d = x_a + \beta(x_b - x_a) + \gamma(x_c - x_a),$$
$$y_0 + t \cdot y_d = y_a + \beta(y_b - y_a) + \gamma(y_c - y_a),$$
$$z_0 + t \cdot z_d = z_a + \beta(z_b - z_a) + \gamma(z_c - z_a).$$


This can be rewritten as the linear system

$$\begin{bmatrix} x_a - x_b & x_a - x_c & x_d \\ y_a - y_b & y_a - y_c & y_d \\ z_a - z_b & z_a - z_c & z_d \end{bmatrix} \begin{bmatrix} \beta \\ \gamma \\ t \end{bmatrix} = \begin{bmatrix} x_a - x_0 \\ y_a - y_0 \\ z_a - z_0 \end{bmatrix}.$$

The fastest way to solve this 3×3 system is by using Cramer's rule, which gives us

$$\beta = \frac{\begin{vmatrix} x_a - x_0 & x_a - x_c & x_d \\ y_a - y_0 & y_a - y_c & y_d \\ z_a - z_0 & z_a - z_c & z_d \end{vmatrix}}{|A|}, \qquad
\gamma = \frac{\begin{vmatrix} x_a - x_b & x_a - x_0 & x_d \\ y_a - y_b & y_a - y_0 & y_d \\ z_a - z_b & z_a - z_0 & z_d \end{vmatrix}}{|A|}, \qquad
t = \frac{\begin{vmatrix} x_a - x_b & x_a - x_c & x_a - x_0 \\ y_a - y_b & y_a - y_c & y_a - y_0 \\ z_a - z_b & z_a - z_c & z_a - z_0 \end{vmatrix}}{|A|},$$

where A is the matrix

$$A = \begin{bmatrix} x_a - x_b & x_a - x_c & x_d \\ y_a - y_b & y_a - y_c & y_d \\ z_a - z_b & z_a - z_c & z_d \end{bmatrix}$$

and |A| denotes the determinant of A. The three 3×3 determinants share subexpressions that can be exploited. Looking at the linear system with dummy variables,

$$\begin{bmatrix} a & d & g \\ b & e & h \\ c & f & i \end{bmatrix} \begin{bmatrix} \beta \\ \gamma \\ t \end{bmatrix} = \begin{bmatrix} j \\ k \\ l \end{bmatrix},$$

Cramer's rule gives

$$\beta = \frac{j(ei - hf) + k(gf - di) + l(dh - eg)}{M},$$
$$\gamma = \frac{i(ak - jb) + h(jc - al) + g(bl - kc)}{M},$$
$$t = -\frac{f(ak - jb) + e(jc - al) + d(bl - kc)}{M},$$

where

$$M = a(ei - hf) + b(gf - di) + c(dh - eg).$$

(Note the minus sign in the expression for t; it follows from expanding the determinant in the numerator of t above.)

Note that some calculations occur more than once, e.g. $(gf - di)$; these can be stored for efficiency. The algorithm for calculating the intersection between the ray and a triangle has some conditions which allow us to terminate early, giving


compute β
if (β < 0) or (β > 1) then
    return False
end if
compute γ
if (γ < 0) or (γ > 1 − β) then
    return False
end if
return True

[31]
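The Cramer's rule solution with early termination can be sketched in C# as follows (illustrative names; the reusable 2×2 sub-determinants are stored as suggested above, and the sign of t follows the corrected expression):

using System.Numerics;

static class TriangleIntersection
{
    // Ray/triangle test solving equation (3.11) with Cramer's rule.
    public static bool Hit(Vector3 origin, Vector3 dir,
                           Vector3 va, Vector3 vb, Vector3 vc, out float t)
    {
        t = 0f;
        // Entries of the 3x3 matrix and right-hand side, using the dummy-variable names.
        float a = va.X - vb.X, b = va.Y - vb.Y, c = va.Z - vb.Z;
        float d = va.X - vc.X, e = va.Y - vc.Y, f = va.Z - vc.Z;
        float g = dir.X,       h = dir.Y,       i = dir.Z;
        float j = va.X - origin.X, k = va.Y - origin.Y, l = va.Z - origin.Z;

        // Reused 2x2 sub-determinants.
        float eiMinusHf = e * i - h * f;
        float gfMinusDi = g * f - d * i;
        float dhMinusEg = d * h - e * g;
        float m = a * eiMinusHf + b * gfMinusDi + c * dhMinusEg;  // determinant |A|
        if (m == 0f) return false;                                // ray parallel to triangle

        float beta = (j * eiMinusHf + k * gfMinusDi + l * dhMinusEg) / m;
        if (beta < 0f || beta > 1f) return false;                 // early termination

        float akMinusJb = a * k - j * b;
        float jcMinusAl = j * c - a * l;
        float blMinusKc = b * l - k * c;
        float gamma = (i * akMinusJb + h * jcMinusAl + g * blMinusKc) / m;
        if (gamma < 0f || gamma > 1f - beta) return false;        // early termination

        t = -(f * akMinusJb + e * jcMinusAl + d * blMinusKc) / m; // minus sign from the determinant
        return t > 0f;
    }
}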


3.4 Random Sampling

A standard ray tracer renders things "too" perfectly in comparison to the real world: reflections are more "perfect", edges and boundaries are sharper, and all objects are in perfect focus. All this must be taken into account if a renderer is being used to approximate reality. Sometimes we want the shadows to be soft or the reflections to be blurry, as in brushed metal. Sampling is necessary to handle effects like anti-aliasing, soft shadows and depth of field.

As described in section 4.3.1, shadows are handled differently in standard ray tracing, because the light is treated as a point instead of as an area.

The soft-focus effect seen in photographs can be created by collecting the light over an area rather than at a single point. This is called depth of field. The camera lens receives light from a cone of directions whose apex lies at the distance where everything is in focus. We place the "window" we are sampling on a plane where everything is in focus, and the camera lens at the eye. The plane where everything is in focus is called the focus plane, and the distance to it is chosen by the user. This emulates a normal camera, where the user chooses the focus distance by adjusting the camera, for example to make cinematic scenes more readable.

Figure 3.15: Depth of field is often used for readability and a cinematic feel in CGI

For the picture to appear realistic, just as with a normal camera, the camera lens should be modelled as a disk; however, the result from using a square lens will be very similar, and it lets the sampler approximate reality in a simpler way. The side length of this lens is chosen, and random samples are taken on it.

Distributed Ray Tracing

It is possible to adjust image quality by implementing a jittered sampling technique, which essentially enables one to specify how many view rays are fired into each pixel. A jittered sampling technique can be employed to avoid the artifacts and patterns which can arise from regular sampling and random sampling. This technique is also known as supersampling. The algorithm for supersampling is described in the following.

for each pixel (x, y) do
    c := 0                                              ▷ c has default color
    for i := 0 to n − 1 do
        for j := 0 to n − 1 do
            c := c + raytrace(x + (i + ξ)/n, y + (j + ξ)/n)    ▷ ray trace at a random point
        end for
    end for
    c_xy := c/n²                                        ▷ c_xy has the average color
end for

Each pixel is divided into a grid of n² squares, as illustrated in figure 3.16. For each square a random sample is taken by firing a ray through a random point; ξ is a fresh uniform random number in the range [0, 1) for each sample. [31, p. 230-231]
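The supersampling loop translates directly into C#. The sketch below is a minimal illustration only; the raytrace delegate, the scalar colour type and the names are placeholders, not taken from the accompanying source code.

using System;

static class Supersampling
{
    static readonly Random Rng = new Random();

    // Averages n*n jittered samples for the pixel at (x, y). 'raytrace' is assumed
    // to map a sub-pixel coordinate to a colour value (a single float for brevity).
    public static float SamplePixel(int x, int y, int n, Func<float, float, float> raytrace)
    {
        float c = 0f;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
            {
                float xi1 = (float)Rng.NextDouble();   // ξ in [0, 1)
                float xi2 = (float)Rng.NextDouble();
                c += raytrace(x + (i + xi1) / n, y + (j + xi2) / n);
            }
        return c / (n * n);                            // average colour of the pixel
    }
}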

Figure 3.16: Regular and jittered sampling with 1, 4, and 16 samples/pixel.

An example of using this technique: set n = 4 and the resolution to 400 × 400 pixels. In this case 16 view rays will be fired into each pixel, which amounts to a total of 2,560,000 view rays per image. With recursive ray tracing, each of these view rays may also spawn a given number of rays to take lighting and materials into account, further increasing the total number of rays to be evaluated.

This technique allows us to achieve a number of additional effects, such as depth of field and motion blur. Depth of field can be accomplished by jittering the camera/eye position by some ξ. Motion blur can be achieved in a similar manner, where the position of a shape in motion is jittered backwards. Supersampling is an effective technique, since it allows us to mix and match various effects with ease.


3.5 Pruning Intersection Tests

In the following we will explain some of the pruning methods which can be used to accelerate ray tracing by minimizing the bottleneck of intersection testing in a simple implementation of classical ray tracing, where a linearly scaling brute-force algorithm is used.

Bounding Volume Hierarchies

One pruning strategy for ray tracing is the usage of bounding volume hierarchies, which wefrom now on will refer to as BVHs.

The incorporation of a BVH is a way to improve the ray tracer by avoiding the costly operation of calculating ray intersections with every object in the scene. This can be compared to speeding up a linear search algorithm by replacing it with binary search. There is a number of alternative acceleration strategies to the BVH. The BVH is commonly used because it is relatively easy to implement and it works well. It offers logarithmic scaling, which is a substantial improvement over linear intersection testing.

A central part of the BVH is the idea of computing the intersection of a ray and a bounding volume, which is simpler to test for intersections than the enclosed object. The bounding volume can be defined as a sphere, an ellipsoid, or, in this case, a box or parallelepiped. A bounding box is simply a three-dimensional box which holds an object, such as a sphere or a collection of smaller bounding boxes. Since the hierarchy consists of boxes, it is referred to as a hierarchical bounding box (HBB).

This technique involves inserting a number of bounding boxes within a larger bounding box. The benefit of this is that the number of intersection tests is reduced further, since it is possible to disregard many objects from additional computation by a single intersection test of their parent box. The hierarchy is created by iterating this routine, which results in a recursively defined tree.

Bounding Boxes

The main difference between the use of bounding boxes and brute-force ray intersection tests is that it is unnecessary to know where the ray intersects the box, only whether it intersects the box or not. Once this is established, the bounded object is either tested for ray intersection or, if the box is missed, skipped.

However, this increases the number of calculations for rays that hit the bounding box. Yet in a standard scene, the bulk of rays only come near a small proportion of the objects. This is why the result is still a major net improvement in efficiency.

Bounding boxes can be implemented in two ways. They can either be axis-aligned bounding boxes (AABBs), with edges parallel to the x-, y- and z-axes, or they can be placed in an arbitrary orientation; these are called oriented bounding boxes or OBBs. Intersection testing against OBBs is more complex than with AABBs, but in some cases they can enclose the object more tightly. OBBs are also more flexible, since they can move with the object as it changes orientation.
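The ray/box test itself is not detailed in this report; a common choice for AABBs is the so-called slab method. The following C# sketch shows that approach under the assumption of a box given by its minimum and maximum corners (illustrative names; System.Numerics is used for brevity, and degenerate cases where a direction component is exactly zero on a slab boundary are ignored).

using System;
using System.Numerics;

static class Aabb
{
    // Slab test: returns whether the ray hits the axis-aligned box [min, max];
    // the exact hit point is not needed, only a yes/no answer.
    public static bool Hit(Vector3 origin, Vector3 dir, Vector3 min, Vector3 max)
    {
        float[] o  = { origin.X, origin.Y, origin.Z };
        float[] d  = { dir.X, dir.Y, dir.Z };
        float[] lo = { min.X, min.Y, min.Z };
        float[] hi = { max.X, max.Y, max.Z };

        float tMin = 0f, tMax = float.PositiveInfinity;
        for (int axis = 0; axis < 3; axis++)
        {
            float inv = 1f / d[axis];                  // +/- infinity when d == 0 is acceptable
            float t0 = (lo[axis] - o[axis]) * inv;
            float t1 = (hi[axis] - o[axis]) * inv;
            if (inv < 0f) (t0, t1) = (t1, t0);         // keep t0 <= t1

            tMin = MathF.Max(tMin, t0);
            tMax = MathF.Min(tMax, t1);
            if (tMax < tMin) return false;             // slab intervals do not overlap: miss
        }
        return true;
    }
}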

Space Subdivision

Another approach to optimizing intersection tests consists of partitioning space into regions. Each object in the scene lies in one or more regions. When a ray is tested for intersection with all of the objects in the scene, each region the ray intersects is traversed separately. For each region the ray intersects, ray intersection tests are performed for each object overlapping the region.

There are different methods of performing spatial subdivision. Among these are quadtrees, octrees, k-d trees, and BSP trees.

Figure 3.17: A quadtree and its tree representation.

A quadtree, as shown in figure 3.17, involves partitioning 2-space hierarchically into square regions. The root of the quadtree is an axis-aligned square covering the entire scene. This region is then divided into four subsquares by dividing it in half horizontally and vertically. These four subsquares may then be divided into four additional subsquares, and so forth, recursively. The number of subdivisions performed usually depends on the number of objects that intersect the region. The data structure of the quadtree is, not surprisingly, a tree, which can be traversed recursively. Quadtrees can be generalized to three dimensions; these are called octrees. Here each node is a cube, which can be split into eight subcubes.

Figure 3.18: A k-d tree in two dimensions and its tree representation. Regions are subdivided by either a vertical or a horizontal line.

A k-d tree, seen in figure 3.18, is a generalization of the quadtree and partitions space of any dimension. Here axis-aligned planes are used. The root of the k-d tree is a rectangular box containing the entire scene. Every nonleaf node in the tree has two children, where each child is a subregion. Each subregion is defined by selecting an axis and dividing the node with a plane perpendicular to that axis. The usual way of choosing the two subregions is by picking a vertex from an object in the region covered by the node. Then an axis is chosen and the region is split into two subregions based on the coordinate of the vertex for that axis. In two-dimensional space this implies choosing a vertical or horizontal line through the vertex.

k-d trees have an advantage over quadtrees in the ability to intelligently choose the vertex and axis, in order to attempt to split the region into subregions that partition objects into sets of roughly equal size.

A BSP tree, which stands for binary space partitioning tree, is a generalization of the k-d tree. With BSP trees (BSPs), regions can be partitioned by an arbitrary plane (polygon-aligned) instead of just axis-aligned planes. Usually the dividing plane is picked so that it encompasses one of the faces of the objects in the region. A useful property of BSPs is that they can be traversed in such a way that the geometrical objects of the tree can be sorted front-to-back from any viewing point. This sorting is approximate for axis-aligned BSPs (k-d trees) and accurate for polygon-aligned BSPs. With this property BSPs stand in contrast to BVHs, which usually do not include any kind of sorting.

[32; 11; 4; 2]

Any of the pruning techniques discussed in this section could be implemented in a ray tracer. Each of them has its strengths and weaknesses, whether it be traversal speed or the overhead associated with initializing and updating the data structure. The latter is very costly and can be done at best with linear complexity. [25]


Chapter 4

Advanced Effects

In this chapter we will look at some of the essential effects that help to generate realistic images. There will first be a discussion about light rays and how they carry light information, including color. Then we will see how light interacts with objects and materials, such as through absorption or scattering. There will also be a short overview of homogeneous transformation matrices, which serves as an introduction to the shadow phenomenon and how to add soft shadows to the lighting routine. Last, an explanation of Monte Carlo integration will be covered.

4.1 Lighting

In order to generate realistic images such as the one in Figure 4.1, it is important to understand how light behaves at the surfaces of objects. Certain phenomena are taken into account:

• Light is emitted by the sun or other sources (natural or artificial).

• Light interacts with objects in the scene; a part is absorbed and a part is scattered andpropagates in new directions.

• Finally, light is absorbed by a sensor (human eye, electronic sensor, or film).

In Figure 4.1 there is evidence of all three phenomena. Light is emitted from the lamp and travels directly to the objects in the room. The objects' surfaces absorb some of it and scatter the rest into new directions. The light not absorbed continues to move through the environment, encountering other objects. A tiny portion of the light traveling through the scene enters the sensor used to capture the image; in this case the electronic sensor of a digital camera.

However, the nature of light is complicated and still not completely understood. Many theories have been proposed through the ages to describe the physical properties of light. The physics of light is currently explained using several different models based on the historic developments in the understanding of light. Light is most commonly modeled as geometric rays, electromagnetic waves, or photons (quantum particles with some wave properties).

The techniques of ray tracing are based mostly on the particle model, which essentially says that a light ray is a straight path of particles. There are certain aspects that the particle model cannot explain, and it is neither really complete nor correct, but the model is good enough for rendering purposes. As mentioned, the basic particle of light is called a photon, and it can be thought of as a little sphere flying through space. The photon is not just moving in a straight line; it is also 'vibrating'. It turns out that with every photon there can be associated a particular frequency of vibration. An alternative way of describing the photon's


Figure 4.1: A photograph of a room showing a light source and various objects [2, p. 100].

vibration is with a measure called its wavelength. The frequency and wavelength are very closely related.

Imagine that the photon is vibrating in some fixed pattern (say up and down like a sine curve, see figure 4.2) as it moves forward in space. Say the photon is at point A, just beginning to move downwards in its cycle. The photon moves forward in space, vibrating down and then up as it goes. After some time it will eventually finish its up-and-down cycle and begin to move down again. The point where it begins to repeat its cycle is marked B. If the photon is moving forward at a constant speed, then each time it crosses the same amount of space as the distance from A to B, it will also go through one complete cycle of its vibration. This distance is the wavelength of the photon.

Figure 4.2: Description of a wavelength.

If the frequency is increased, the photon will complete its cycle in less time, and so it will cross less space before it begins to repeat. So when the frequency goes up, the wavelength goes down. It is also known that the speed of light is constant in a given medium. The observations can be summarized with the equation

$$\lambda = \frac{c}{f},$$

where λ is the wavelength (in meters), f is the frequency (in cycles per second), and c is the speed of light (in a vacuum, $c \approx 3.00 \cdot 10^8\ \mathrm{m\,s^{-1}}$).

In some situations it will be convenient to speak of the frequency of a photon; in other situations it will be more natural to speak of its wavelength. Keep in mind that both terms describe the same thing in different ways. It is also useful to know that the energy of a photon is directly related to its frequency, that is

Q = hf,

where Q is the energy in joules, and h is Planck's constant ($h \approx 6.63 \cdot 10^{-34}\ \mathrm{J \cdot s}$). To summarize, it turns out that there is a direct correlation between the frequency (and thus the energy) of a photon that strikes your eye and the color you see in response. The individual frequencies of different photons are also what give rise to the perceived colors of everyday objects that do not radiate light themselves, but only reflect it [11, p. 121].

4.1.1 Color and Spectra

When looking at an ordinary 'white' light bulb, it seems to emit 'white light'. But that is not entirely true, nor is it possible for a rainbow to have a white band. Because white is not a pure spectral color, no single vibrating photon can give the impression of white light. Instead, the impression of white arises when photons of many different colors strike the same region of the eye nearly simultaneously. The eye blends together all these colors, giving the impression of a single, white light.

Figure 4.3: The spectrum of electromagnetic waves ranges from low-frequency radio waves to high-frequency gamma rays. Only a small portion of the spectrum, representing wavelengths of roughly 400–700 nanometers, is visible to the human eye [22].

In other words, the white light bulb is generating photons at many different frequencies, and it is interesting and useful to know just how many photons of each frequency are being generated by a given light source. It is possible to set up a measuring instrument to count the average number of photons at each visible wavelength over some period of time, and then plot the results. Such an intensity versus wavelength plot is often abbreviated simply as a spectrum, see figure 4.3. Thus, when discussing a photon in some situation, imagine it is about a whole fleet of photons, arriving pretty much at the same time and along the same ray. Note that amplitude is the power of a signal; the greater the amplitude, the greater the energy carried.

One convenient way to represent all this information is to associate a spectrum with a ray, see Figure 4.4. In this model the spectrum summarizes all the photons travelling along that ray, and the convenience of having all that information in one place makes the abstraction useful. However, a problem with this model is that it cannot model refraction very well.

When a light ray passes between two media, it usually changes direction by an amount dependent on wavelength. If a single ray is used to model all the visible wavelengths simultaneously,


Figure 4.4: Attaching a spectrum to a ray, describing the light travelling along that ray. The spectrum is given by points on an intensity versus wavelength plot [11, p. 125].

then there is no single direction that is going to work correctly. So a better way to go is to assign a particular single wavelength to each ray. That means that if the purpose is to know the amplitude of many wavelengths of light leaving a surface, it is necessary to use many rays, one for each wavelength of interest [11, p. 124].

4.1.2 Light Emission

The intensity of a given light source is often given as the wattage, W (another term for joules per second), of the source. The radiant flux, Φ or P, of the light source is equal to the number of joules per second emitted:

$$\Phi = \frac{dQ}{dt}.$$

In principle, it is possible to measure the radiant flux of a light source by adding up the energies of the photons it emits in a one-second time period. Irradiance, E, is the density of radiant flux with respect to area. For a small point light source with radiant flux Φ that emits light uniformly in all directions, we can compute the irradiance, E, at a surface as

$$E_x = \frac{\Phi}{4\pi r^2} \cos(\theta), \qquad (4.1)$$

where r is the distance from x to the light source, θ is the angle between the surface normal and the direction to the light source, and $4\pi r^2$ is the surface area of a sphere.

Irradiance, as mentioned, is defined with respect to a surface, which may be an imaginary surface in space, but is most often the surface of an object. In photometry, the corresponding quantity, illuminance, is what is measured by a light meter. As shown in equation 4.1, irradiance E is used to measure light flowing into a surface, and exitance M (also called radiosity or radiant exitance) is used to measure light flowing out of a surface. The general term radiant flux density is sometimes used to refer to both irradiance and exitance.
The previous equation is intuitive: imagine a small source sending photons in all directions, where the density of the photons decreases with the distance to the source. The rate at which the photon density decreases is proportional to the surface area of a sphere at the same distance (one can think of each batch of emitted photons as sitting on an expanding sphere). The cosine factor is due to the surface orientation: a surface facing the source will receive more photons per area than a surface that is oriented differently [17, p. 17].
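Equation (4.1) is straightforward to evaluate in code. The following C# sketch computes the irradiance at a surface point from a point light; the names are illustrative, System.Numerics is used for the vector type, and back-facing surfaces are clamped to zero.

using System;
using System.Numerics;

static class PointLight
{
    // Irradiance at a surface point from an isotropic point light, equation (4.1):
    // E = Phi * cos(theta) / (4 * pi * r^2).
    public static float Irradiance(float radiantFlux, Vector3 lightPos,
                                   Vector3 surfacePoint, Vector3 surfaceNormal)
    {
        Vector3 toLight = lightPos - surfacePoint;
        float r2 = toLight.LengthSquared();                        // r^2
        float cosTheta = Vector3.Dot(Vector3.Normalize(toLight),
                                     Vector3.Normalize(surfaceNormal));
        return MathF.Max(cosTheta, 0f) * radiantFlux / (4f * MathF.PI * r2);
    }
}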

4.2 Material

In the context of a 3D graphics system, a material is an attribute of a geometric object that provides a description of how the surface of the object will appear when viewed from a particular direction under a particular illumination. In physical terms, what we need to define here is how a surface reflects or transmits light as a function of incident angle, reflection or refraction angle, and wavelength. A function providing these relationships is known as the material's bidirectional reflectance distribution function, or BRDF.

The BRDF is an approximation of the bidirectional scattering surface distribution function, BSSDF, which we will not discuss in this report. In short, the BSSDF is costly to evaluate, since it is eight-dimensional. The BRDF assumes that light is reflected at the location where it strikes a surface, which reduces it to a six-dimensional function and allows a number of simplifications. The BSSDF has certain advantages, such as handling subsurface scattering. In practical terms, for computer graphics applications it is usually enough to approximate the BRDF for a material with a collection of parameters and maps. A usual material specification system will provide parameters for the specification of a material's color, specular reflectance factor, diffuse reflectance factor, transmissivity, and refraction index [17, p. 18-20].
This means we can further divide the interaction of light and a surface into four classes: specular reflection, diffuse reflection, specular transmission, and diffuse transmission.

Ambient Light

With that in mind, there is another aspect which is useful when generating effects: the phenomenon of ambient light, which has a constant intensity for all points in a scene. The intensity of the reflected ambient light is direction-independent, thus ambient light is equally distributed over the surface of an object. Ambient light can be thought of as light with no immediate source. In reality, ambient light reflection is an approximation of the indirect light caused by light reflecting in all directions off numerous surfaces. However, this indirect lighting is not simulated in the ray tracing algorithm, seeing as it is extremely expensive computationally. Thus, ambient light is included to give a very crude approximation of this phenomenon. Ambient light will make objects visible, but does not cause specular highlights or shadows to be cast.

4.2.1 Perfect Specular Reflection

Specular reflection happens when light strikes a smooth surface, typically a metallic surface or a smooth dielectric surface (such as glass or water). Note that this section will discuss only perfect specular reflection; even though there are no perfectly specular surfaces, this ideal model will prove to be very useful. As an example, most of what is seen in a mirror is specular reflection of the incoming light. The highlights on a shiny surface are also an example of specular reflection.

In other words, light from the light source strikes the top of the object's surface and then bounces off, so it is barely subject to absorption and re-radiation (absorbed radiation that is emitted again by the object). Figure 4.5 shows a photon arriving at a hard, flat surface and bouncing off. The angle between the surface normal, marked $\vec{N}$, and the direction of the incoming (or incident) ray, marked $\vec{I}$, is called the angle of incidence, which is denoted $\theta_i$.

Figure 4.5: The geometry of reflection.

The angle between the surface normal and the reflected ray, marked $\vec{R}$, is called the angle of reflection, denoted $\theta_r$. Given $\vec{N}$ and $\vec{I}$, it is possible to find $\vec{R}$. Two physical laws help to find an expression for $\vec{R}$. The first is that the incident ray, the surface normal, and the reflected ray all lie in the same plane. The second is that the angle of incidence is equal to the angle of reflection.

An algebraic solution could be [11, p. 132]

~R = α~I + β ~N,

θi = θr. (4.2)

From Figure 4.5 we get $\cos(\theta_i) = -\vec{I} \cdot \vec{N}$ (observe that the direction of $\vec{I}$ must be reversed to obtain the acute angle labelled $\theta_i$ in the figure; $\vec{I}$, $\vec{N}$ and $\vec{R}$ are taken to be unit vectors). Also note that $\cos(\theta_r) = \vec{N} \cdot \vec{R}$. So equation 4.2 can be rewritten as

$$\cos(\theta_i) = \cos(\theta_r)$$
$$-\vec{I} \cdot \vec{N} = \vec{N} \cdot \vec{R} = \vec{N} \cdot (\alpha\vec{I} + \beta\vec{N}) = \alpha(\vec{N} \cdot \vec{I}) + \beta(\vec{N} \cdot \vec{N}) = \alpha(\vec{N} \cdot \vec{I}) + \beta.$$

The last step is done by recalling that since $|\vec{N}| = 1$, $\vec{N} \cdot \vec{N} = 1$. If α is arbitrarily set to 1, then:

β = −2( ~N · ~I).

So the complete formula for the direction of a specularly reflected ray is:

~R = ~I − 2( ~N · ~I) ~N.
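As a small illustration, the reflection formula can be written as a one-line C# helper (System.Numerics is used for brevity; the names are our own):

using System.Numerics;

static class Reflection
{
    // Direction of a perfectly specularly reflected ray: R = I - 2 (N . I) N,
    // where I is the incident direction and N the unit surface normal.
    public static Vector3 Reflect(Vector3 incident, Vector3 normal) =>
        incident - 2f * Vector3.Dot(normal, incident) * normal;
    // System.Numerics also provides Vector3.Reflect(incident, normal) with the same result.
}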


4.2.2 Perfect Diffuse Reflection

The nice, clean situation of specular reflection discussed above usually holds only for hard, shiny surfaces. A surface with diffuse reflection is characterized by light being reflected in all directions when it strikes the surface. This type of reflection typically occurs on rough surfaces. It turns out that diffusely reflected light interacts with the surface: when a photon is absorbed by an atom of the surface, the photon may be turned into heat or it may eventually be re-radiated. If the photon is re-radiated, the direction cannot be determined. Although any given photon will go in only one direction, many photons over the course of time will tend to go in all possible directions. The upshot is that perfectly diffuse reflected light is reflected away from the surface in all directions with equal intensity.

The geometry taken into account is the angle between the incident light vector and the surface normal, because the amount of light reaching the surface is proportional to the cosine of that angle:

~L · ~N = |~L| · | ~N |cos(θ)

where ~L is here marked as the incident light.

That is the Lambertian reflection model of perfectly diffuse reflection, and as mentioned, it is as idealized a model as perfectly specular reflection [11, p. 133]. In computer graphics, the intensity of the reflection is calculated by taking the dot product of the surface's normalized normal vector, $\vec{N}$, and a normalized light ray, $\vec{L}$, pointing from the surface to the light source. This number is then multiplied by the color of the surface, c, and the intensity of the light hitting the surface, $I_l$:

Id = (~L · ~N)c · Il,

where $I_d$ is the intensity of the diffusely reflected light (surface brightness). The intensity will be highest if the normal vector points in the same direction as the light vector ($\cos(0) = 1$; the surface is perpendicular to the direction of the light). The intensity will be lowest if the normal vector is perpendicular to the light vector ($\cos(\pi/2) = 0$; the surface runs parallel with the direction of the light).
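The Lambertian term maps directly to code. The following C# sketch is a minimal illustration, with colours and intensities represented as RGB vectors; the names are illustrative only.

using System;
using System.Numerics;

static class Diffuse
{
    // Lambertian term: Id = (L . N) * c * Il, where L points from the surface to the
    // light, N is the surface normal, c the surface colour and Il the light intensity.
    public static Vector3 Shade(Vector3 toLight, Vector3 normal,
                                Vector3 surfaceColor, Vector3 lightIntensity)
    {
        float nDotL = MathF.Max(Vector3.Dot(Vector3.Normalize(toLight),
                                            Vector3.Normalize(normal)), 0f);
        return nDotL * surfaceColor * lightIntensity;   // componentwise RGB product
    }
}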

4.2.3 Perfect Specular Transmission

In a transparent object, light can arrive from behind the object's surface and pass through, contributing to the light leaving the surface. For example, consider Figure 4.6, which shows a ruler in a glass of water. The appearance of the bent ruler is due to the bending of the light rays as they pass from the water to the glass, and then from the glass to the air. It also means that it is not necessary that the media on both sides of the object be the same.

To properly handle transmitted light, we need to handle the bending of the light as it crosses the boundary (or interface) between the media. This bending is called transmission or refraction. It is important to note that each medium has an index of refraction, which describes the speed of light in that medium compared to the speed of light in a vacuum. To determine how the light bends when crossing media, the indices of refraction of the two materials and the angle of the incident light are compared.

Figure 4.7 shows an incoming light ray (in this case marked $\vec{I}$) striking a surface with normal $\vec{N}$. The incident light makes an angle $\theta_i$ (the angle of incidence) with the surface normal. The transmitted light, $\vec{T}$, makes an angle $\theta_t$ (the angle of refraction) with the reversed normal.


Figure 4.6: A ruler in a glass of water [11, p. 135].

The vectors $\vec{I}$, $\vec{N}$, and $\vec{T}$ again all lie in the same plane. The equation relating the angles of the incident and transmitted light is called Snell's Law:

$$\frac{\sin\theta_1}{\sin\theta_2} = \eta_{21} = \frac{\eta_2}{\eta_1},$$

where $\eta_1$ is the index of refraction of medium 1 with respect to vacuum, $\eta_2$ is the index of refraction of medium 2 with respect to vacuum, and $\eta_{21}$ is the index of refraction of medium 2 with respect to medium 1.
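Snell's law only relates the two angles; to trace a refracted ray one also needs the transmitted direction. The following C# sketch uses the standard vector form that can be derived from Snell's law (the derivation is not given in this report); the names are illustrative, System.Numerics is used for brevity, and total internal reflection is reported by returning false.

using System;
using System.Numerics;

static class Refraction
{
    // Transmitted direction through an interface, derived from Snell's law.
    // 'incident' and 'normal' are unit vectors, with the normal facing the incoming ray;
    // eta1/eta2 are the indices of refraction on the incident/transmitted side.
    public static bool Refract(Vector3 incident, Vector3 normal,
                               float eta1, float eta2, out Vector3 transmitted)
    {
        float eta = eta1 / eta2;
        float cosI = -Vector3.Dot(normal, incident);       // cos(theta_i)
        float sinT2 = eta * eta * (1f - cosI * cosI);       // sin^2(theta_t), from Snell's law
        if (sinT2 > 1f)                                     // total internal reflection
        {
            transmitted = Vector3.Zero;
            return false;
        }
        float cosT = MathF.Sqrt(1f - sinT2);                // cos(theta_t)
        transmitted = eta * incident + (eta * cosI - cosT) * normal;
        return true;
    }
}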

Figure 4.7: The geometry of refraction.

Table 4.1 shows some indices of refraction, and it turns out that the index of refraction depends on the wavelength of the incoming light. This is why a prism separates incoming light into a spectrum: the different wavelengths are refracted by different amounts. Figure 4.8 shows the index of refraction as a function of wavelength for fused quartz, which is a noncrystalline form of silicon dioxide (SiO2), also called silica [11, p. 134].

4.2.4 Perfect Diffuse Transmission

In the case where the medium supports perfect specular transmission, the light passes right through without interference. This is an ideal situation that is rarely realized in practice; fine crystal usually comes close. On the other hand, a medium that has many small particles that interfere with the travelling photons gives rise to diffuse transmission. One example of such a


Material            Index of refraction
Amber               1.54
Cubic Zirconia      2.15
Diamond             2.417
Emerald             1.57
Fused Quartz        1.46
Garnet              1.73 to 1.89
Glass               1.5
Ice                 1.309
Ruby                1.77
Sapphire            1.77
Sodium Chloride     1.53
Water               1.333

Table 4.1: Sample indices of refraction

Figure 4.8: The index of refraction of fused quartz.

material is translucent plastic; it allows light to pass, and colors it along the way, but it is not possible to clearly see anything on the other side of the plastic.

Certainly the diffuse transmission part is easily satisfied by many materials, and a perfectly diffuse transparent medium would scatter light evenly in all directions as it passes through, just as a perfectly diffuse reflective surface scatters light in all directions as it is reflected [11, p. 141].

4.2.5 Phong Reflection Model

Phong reflection is an empirical model of local illumination. It describes the way a surface reflects light as a combination of the diffuse reflection of rough surfaces with the specular reflection of shiny surfaces. It is based on Bui Tuong Phong's informal observation that shiny surfaces have small, intense specular highlights, while dull surfaces have large highlights that fall off more gradually. The reflection model also includes an ambient term to account for the small amount of light that is scattered about the entire scene.

For each light source in the scene, we define the components $i_s$ and $i_d$ as the intensities (often as RGB values) of the specular and diffuse components of the light source respectively. A single term $i_a$ controls the ambient lighting; it is sometimes computed as a sum of contributions from all light sources.

43

Page 51: Real-time Ray Tracing › downloads › raytracing.pdf · 2019-06-19 · This report is about real-time ray tracing. Ray tracing is a technique, which models the way light interacts

Figure 4.9: The Phong Reflection Model [24].

For each material in the scene, we define:

$k_s$: specular reflection constant, the ratio of reflection of the specular term of incoming light,
$k_d$: diffuse reflection constant, the ratio of reflection of the diffuse term of incoming light (Lambertian reflectance),
$k_a$: ambient reflection constant, the ratio of reflection of the ambient term present in all points in the scene rendered, and
$\alpha$: a shininess constant for the material, which is larger for surfaces that are smoother and more mirror-like. When this constant is large, the specular highlight is small.

A specular highlight is the bright spot of light that appears on shiny objects when illuminated. Specular highlights are important in 3D computer graphics, as they provide a strong visual cue for the shape of an object and its location with respect to light sources in the scene.

Further, $\vec{L}$ is defined as the direction vector from the point on the surface toward each light source, $\vec{N}$ as the normal at this point on the surface, $\vec{R}$ as the direction that a perfectly reflected ray of light would take from this point on the surface, and $\vec{V}$ as the direction pointing towards the viewer (such as a virtual camera).

$$I_p = k_a i_a + \sum_{\text{lights}} \left( k_d (\vec{L} \cdot \vec{N})\, i_d + k_s (\vec{R} \cdot \vec{V})^{\alpha} i_s \right),$$

where $I_p$ is the shade value at a point p on a surface. The diffuse term is not affected by the viewer direction $\vec{V}$. The specular term is large only when the viewer direction $\vec{V}$ is aligned with the reflection direction $\vec{R}$. Their alignment is measured by raising the cosine of the angle between them to the power α. The cosine of the angle between the normalized vectors $\vec{R}$ and $\vec{V}$ is equal to their dot product. When α is large, in the case of a nearly mirror-like reflection, the specular highlight will be small, because any viewpoint not aligned with the reflection will have a cosine less than one, which rapidly approaches zero when raised to a high power.

When colors are represented as RGB values, this equation is typically calculated separately for the R, G and B intensities.
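A direct C# transcription of the Phong model for a single light source might look as follows (a minimal sketch; names are illustrative, colours are RGB vectors, and the reflection direction is computed from L and N as described above):

using System;
using System.Numerics;

static class Phong
{
    // Phong shade value at a point for one light source:
    // Ip = ka*ia + kd*(L.N)*id + ks*(R.V)^alpha*is, summed over lights if there are more.
    public static Vector3 Shade(Vector3 n, Vector3 l, Vector3 v,
                                Vector3 ka, Vector3 kd, Vector3 ks, float alpha,
                                Vector3 ia, Vector3 id, Vector3 ispec)
    {
        n = Vector3.Normalize(n);
        l = Vector3.Normalize(l);
        v = Vector3.Normalize(v);

        float nDotL = MathF.Max(Vector3.Dot(n, l), 0f);
        Vector3 r = 2f * nDotL * n - l;                 // mirror direction of L about N
        float rDotV = MathF.Max(Vector3.Dot(r, v), 0f);

        return ka * ia + kd * nDotL * id + ks * MathF.Pow(rDotV, alpha) * ispec;
    }
}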

4.3 Shadows

Shadows are important elements in creating realistic images and in providing the user with visual cues about object placement. For that there are various shadow techniques, which can usually be mixed as desired in order to maintain quality while still being efficient. To illustrate the shadow terminology, see Figure 4.10. The terminology is as follows: occluders are objects that cast shadows onto receivers. Point light sources generate only fully shadowed regions, sometimes called hard shadows. If area or volume light sources are used, soft shadows are produced. Each shadow can then have a fully shadowed region, called the umbra,


and a partially shadowed region, called the penumbra [2, p. 331].

Soft shadows are recognized by their soft shadow edges, which also means soft shadows are generally preferable, because the soft edges let the viewer know that the shadow is indeed a shadow. Hard-edged shadows usually look less realistic and can sometimes be misinterpreted as actual geometric features, such as a crease in a surface. Without shadows as a visual cue, scenes are often unconvincing and more difficult to perceive.

Figure 4.10: Shadow terminology: light source, occluder, receiver, shadow, umbra, and penumbra.

4.3.1 Simple Shadows

In a simple scene with a few widely spaced objects and a ground plane, a projection from the light source can be used to create shadows on the ground plane. It is also assumed that the plane is y = 0. Blinn [3] developed this shadowing technique, which handles two possibilities: light sources at infinity and locally positioned light sources.

Light Sources At Infinity

If the light source is at infinity, the rays of light entering the scene are parallel. If $\vec{L}$ represents the vector pointing in the direction the light is traveling, the points on the object cast their shadow onto the ground plane along this direction. The parametric equation of a line (which represents three equations for the x, y, and z components of each point) between the light source direction $\vec{L}$ and a location on the object $\vec{P}$ is given by $S(t) = \vec{P} - t \cdot \vec{L}$. The ground plane location where the object location $\vec{P}$ casts its shadow is found by setting the equation for the y-coordinate to 0 and then solving for t, because when casting shadows onto the ground plane at y = 0, the y-value for all of the shadow points will be 0.

This means that t will have a value of $y_p/y_l$. If $y_l$ is 0, the light is being cast horizontally, so there is no shadow in the ground plane. The value of t can then be used to calculate the x and z locations where the object location $\vec{P}$ casts its shadow. Equation 4.3 shows the complete calculation of the shadow location when the calculation of t is substituted back into the parametric equations.

$$x_s = x_p - \frac{y_p}{y_l} \cdot x_l = x_p - \frac{x_l}{y_l} \cdot y_p, \qquad z_s = z_p - \frac{y_p}{y_l} \cdot z_l = z_p - \frac{z_l}{y_l} \cdot y_p. \qquad (4.3)$$


A matrix form of (4.3) can be made as well. This matrix is a parallel projection matrix into the y = 0 plane:

$$\begin{bmatrix} x_s \\ 0 \\ z_s \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & -x_l/y_l & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -z_l/y_l & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}.$$

Blinn describes a two-step process. First, the scene is rendered as usual, using the parameters specified for the objects. Next, the shadow matrix is multiplied into the current transformation matrix, and the objects are rendered again [21, p. 277].
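For a directional light, equation (4.3) can also be applied point by point instead of as a matrix. The following C# sketch projects a single point onto the ground plane y = 0 (illustrative names; the caller must ensure the light direction has a non-zero y component):

using System.Numerics;

static class PlanarShadow
{
    // Projects point p onto the ground plane y = 0 along a directional light,
    // equation (4.3). 'lightDir' is the direction the light travels (lightDir.Y != 0).
    public static Vector3 ProjectDirectional(Vector3 p, Vector3 lightDir)
    {
        float t = p.Y / lightDir.Y;              // parameter where S(t) = P - t*L hits y = 0
        return new Vector3(p.X - t * lightDir.X, 0f, p.Z - t * lightDir.Z);
    }
}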

Local Light Sources

When a light source has a specific location instead of being at infinity, the process is still similar. A parametric equation for a line is still used, but with the light source as a location L instead of a vector. This makes the parametric equation $S(t) = P - t \cdot (P - L)$. Projecting into the y = 0 plane and solving for t gives $t = y_p/(y_p - y_l)$. The values of the projection onto the ground plane are given by equations 4.4:

$$x_s = \frac{x_l \cdot y_p - x_p \cdot y_l}{y_p - y_l}, \qquad z_s = \frac{z_l \cdot y_p - z_p \cdot y_l}{y_p - y_l}. \qquad (4.4)$$

A matrix form of equations 4.4 can be made as well, where $x_s$ and $z_s$ are found by dividing $x'$ and $z'$ by $(y_p - y_l)$. Specifically, $x_s = x'/(y_p - y_l)$ and $z_s = z'/(y_p - y_l)$. The matrix is a perspective projection matrix into the y = 0 plane.

x′

0z′

yp − yl

=

−yl xl 0 00 0 0 00 zl −yl 00 1 0 −yl

·xpypzp1

.Once this is determined, the rest of the process for local light sources is the same as that

for light sources at infinity [21, p. 279].
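The corresponding sketch for a local light source at position l evaluates Equation 4.4 (again our own illustration, under the same Vector3 assumption):

// Hypothetical helper: projects the point p onto the plane y = 0 for a light
// located at l, following Equation 4.4. Returns false when the point and the
// light are at the same height (y_p = y_l), where the projection is undefined.
static bool ProjectShadowLocal(Vector3 p, Vector3 l, out Vector3 shadow)
{
    shadow = new Vector3();
    double denom = p.Y - l.Y;
    if (denom == 0.0)
        return false;

    shadow.X = (l.X * p.Y - p.X * l.Y) / denom;   // x_s = (x_l*y_p - x_p*y_l)/(y_p - y_l)
    shadow.Y = 0.0;
    shadow.Z = (l.Z * p.Y - p.Z * l.Y) / denom;   // z_s = (z_l*y_p - z_p*y_l)/(y_p - y_l)
    return true;
}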

Ray Tracing Shadows

One of the many advantages of ray tracing is that generating shadows becomes simple, because all of the elements necessary to calculate the shadow are already part of the ray tracing process. When the point on an object where a ray strikes is identified, whether or not this point is in shadow can be determined by the use of a shadow feeler. The shadow feeler is a ray that begins at the surface point and ends at the light source, and it checks whether the light is visible from the surface point. This ray is used with the intersection calculations for each of the objects. If the shadow feeler intersects any object, that object blocks the light source, and the light source should not be used in any local illumination calculations.

Where this simple approach just ignores a light source that is blocked by any object, a more refined approach is to look at the properties of the blocking object. If that object is transparent, it does not really block the light; it only reduces its intensity. Depending on that object's index of refraction, it might bend the light so that it casts a highlight in a different place. If an object is not completely transparent but rather translucent, only part of the light from the light source will be blocked from the point being shaded. A more sophisticated method is therefore to reduce the light source intensity by the transparency of any objects between the current point and the light source.

A shadow feeler needs to be generated for every light source and then tested against every object. As the number of light sources and objects increases, this can become a computationally expensive task [21, p. 322].

Soft Shadows

The key to implementing soft shadows is to somehow account for the light being an area rather than a point. An easy way to do this is to approximate the light with a distributed set of N point lights, each with one Nth of the intensity of the base light. This concept is illustrated at the left of Figure 4.11, where nine lights are used. However, there are two potential problems with this technique. First, typically dozens of point lights are needed to achieve visually smooth results, which slows down the program a great deal. The second problem is that the shadows have sharp transitions inside the penumbra, which means the colors of the penumbra will appear unnatural [31, p. 231].

Figure 4.11: Left: an area light can be approximated by some number of point lights; four of the nine points are visible to p so it is in the penumbra. Right: a random point on the light is chosen for the shadow ray, and it has some chance of hitting the light or not.

Instead of representing the area light as a discrete number of point sources, it is represented as an infinite number of them, and one is chosen at random for each viewing ray. This amounts to choosing a random point on the light for any surface point being lit, as shown at the right of Figure 4.11. If the light is a parallelogram specified by a corner point c and two edge vectors a and b, then a random point r is chosen as

\[
r = c + \xi_1 a + \xi_2 b,
\]

where ξ_1 and ξ_2 are uniform random numbers in the range [0, 1). Figure 4.12 shows the geometry of such a light.

Figure 4.12: The geometry of a parallelogram light specified by a corner point and two edge vectors.

However, it is not preferable to always have the ray through the upper left-hand corner of the pixel generate a shadow ray to the upper left-hand corner of the light. Therefore, the points on the light are jittered as well, so that the samples are scrambled. In that way the pixel samples and the light samples are each themselves jittered, but there is no correlation between pixel samples and light samples. A good way to accomplish this is to generate two distinct sets of N = n² jittered samples and pass the samples into the light source routine:

for each pixel (i, j) do
    c = 0
    generate N = n² jittered 2D points and store in array r[ ]
    generate N = n² jittered 2D points and store in array s[ ]
    shuffle the points in array s[ ]
    for p = 0 to N - 1 do
        c = c + ray-color(i + r[p].x(), j + r[p].y(), s[p])
    end for
    c_ij = c/N
end for

This shuffle eliminates any coherence between arrays r and s. The shadow routine will just use the 2D random point stored in s[p] rather than calling the random number generator. A shuffle routine for an array indexed from 0 to N - 1 is:

for i = N - 1 downto 1 do
    choose a random integer j between 0 and i inclusive
    swap array elements i and j
end for
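The following self-contained C# sketch (our own illustration, using plain arrays rather than the classes from Chapter 6) generates the two jittered sample sets and performs the shuffle described above:

using System;

static class JitteredSampling
{
    // Generates N = n*n jittered 2D points: one uniformly random point
    // inside each cell of an n x n grid over the unit square.
    static double[][] GenerateJittered(int n, Random rand)
    {
        var samples = new double[n * n][];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                samples[i * n + j] = new[]
                {
                    (i + rand.NextDouble()) / n,
                    (j + rand.NextDouble()) / n
                };
        return samples;
    }

    // Shuffles an array in place: for i = N-1 down to 1, swap element i
    // with a randomly chosen element at index 0..i (inclusive).
    static void Shuffle<T>(T[] array, Random rand)
    {
        for (int i = array.Length - 1; i >= 1; i--)
        {
            int j = rand.Next(i + 1);   // random integer in [0, i]
            T tmp = array[i];
            array[i] = array[j];
            array[j] = tmp;
        }
    }

    static void Main()
    {
        var rand = new Random();
        int n = 4;                                   // 16 samples per pixel
        double[][] r = GenerateJittered(n, rand);    // pixel samples
        double[][] s = GenerateJittered(n, rand);    // light samples
        Shuffle(s, rand);                            // decorrelate r and s
        Console.WriteLine("First light sample: ({0:F3}, {1:F3})", s[0][0], s[0][1]);
    }
}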


4.4 Monte Carlo Integration

To implement realistic-looking lighting effects and soft shadows in a ray tracer, a method called Monte Carlo integration can be used. This approach is generally known as Monte Carlo ray tracing. The accuracy of the method is controlled at the pixel level, meaning more samples yield better results. The major problem with Monte Carlo ray tracing is the noise that is generated in rendered images. This noise can be reduced to an acceptable level by increasing the number of samples. However, the convergence of the method is relatively slow, requiring many samples to reduce the noise, or variance, to a satisfactory level. Variance can also be reduced by other techniques, such as the stratified (jittered) sampling described earlier.

Monte Carlo integration is useful for global illumination, where we are often met with multi-dimensional integrals of discontinuous functions; in this case the integrands are light fields. These integration problems cannot be solved efficiently with regular quadrature rules.

Before we approximate integrals, we need some tools to construct and characterize random samples. This is the domain of applied continuous probability.

4.4.1 One-Dimensional Continuous Probability Density Functions

A continuous random variable x is a scalar or vector quantity that 'randomly' takes on a value on the real line R = (-∞, ∞). The behavior of x is entirely described by the distribution of values it takes. This distribution can be quantitatively described by the probability density function (p.d.f.), p(x), associated with x, denoted x ∼ p(x). The probability that x assumes a value in [a, b] is given by the integral:

\[
\text{Probability}(x \in [a, b]) = \int_a^b p(x)\, dx.
\]

The density p has two characteristics:

\[
p(x) \geq 0, \tag{4.5}
\]
\[
\int_{-\infty}^{\infty} p(x)\, dx = 1. \tag{4.6}
\]

Equation 4.5 states that the density is nonnegative, and Equation 4.6 states that the total probability of x taking a value anywhere in R is 1.
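As a simple example (ours, not taken from the sources above), the canonical uniform random variable ξ used for the jittered samples in Section 4.3 has the density

\[
p(\xi) =
\begin{cases}
1 & \text{if } 0 \le \xi < 1, \\
0 & \text{otherwise,}
\end{cases}
\qquad\text{so that}\qquad
\text{Probability}(\xi \in [a, b]) = \int_a^b 1\, d\xi = b - a
\]

for 0 ≤ a ≤ b < 1, which satisfies both Equation 4.5 and Equation 4.6.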

4.4.2 One-dimensional Expected Value

The average value that a real function f(x) of a random variable x with underlying p.d.f. p will assume is called its expected value, E(f(x)):

\[
E(f(x)) = \int f(x)\, p(x)\, dx.
\]

The expected value of a one-dimensional random variable itself can be calculated by letting f(x) = x. The expected value has an interesting and useful property: the expected value of the sum of two random variables is the sum of the expected values of those variables:

E(x+ y) = E(x) + E(y),


for random variables x and y. Because functions of random variables are themselves random variables, this linearity of expectation applies to them as well:

\[
E(f(x) + g(y)) = E(f(x)) + E(g(y)).
\]

This linearity holds whether or not the random variables are correlated; in particular, it holds for independent random variables, which are always uncorrelated.

4.4.3 Multi-Dimensional Random Variables

The discussion of random variables and their expected values extends naturally to multi-dimensional spaces. Most graphics problems are set in such higher-dimensional spaces; for example, many lighting problems are phrased on the surface of a hemisphere. However, if a measure µ is defined on the space the random variables occupy, everything is very similar to the one-dimensional case. Suppose the space S has associated measure µ; for example, S is the surface of a sphere and µ measures area. It is possible to define a p.d.f. p : S → R, and if x is a random variable with x ∼ p, then the probability that x takes on a value in some region S_i ⊂ S is given by the integral:

\[
\text{Probability}(x \in S_i) = \int_{S_i} p(x)\, d\mu,
\]

where Probability(event) is the probability that the event is true, which means the integral is the probability that x takes on a value in the region S_i. Note that in graphics, S is often an area (dµ = dA = dx dy) or a set of directions (points on a unit sphere: dµ = dω = sin θ dθ dφ).

4.4.4 Estimated Means

Many problems involve sums of independent random variables x_i, where the variables share a common density p. Such variables are said to be independent identically distributed (iid) random variables. When the sum is divided by the number of variables, we get an estimate of E(x):

\[
E(x) \approx \frac{1}{N} \sum_{i=1}^{N} x_i.
\]

As N increases, the expected error of this estimate decreases. We want N to be large enough that we have confidence that the estimate is 'close enough'.

4.4.5 The Monte Carlo Integral

As discussed earlier, given a function f : S → R and a random variable x ∼ p, it is possible to approximate the expected value of f(x) by a sum:

\[
E(f(x)) = \int_{x \in S} f(x)\, p(x)\, d\mu \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i). \tag{4.7}
\]

Since the expected value can be expressed as an integral, the integral is also approximated by the sum. However, we would usually rather approximate the integral of a single function g than of a product fp; this can be done by substituting g = fp as the integrand:

\[
\int_{x \in S} g(x)\, d\mu \approx \frac{1}{N} \sum_{i=1}^{N} \frac{g(x_i)}{p(x_i)}.
\]


Note that for this formula to be valid, p must be positive wherever g is nonzero. The approximation becomes more accurate by increasing the number of samples, N, and by keeping the variance of g/p low (i.e., g and p have similar shape). However, the convergence of Monte Carlo integration is slow. Still, this method provides superior convergence over other methods for high-dimensional integrals, such as those we come across in global illumination.
Equation 4.7 also shows a fundamental problem with Monte Carlo integration: diminishing returns. Because the variance of the estimate is proportional to 1/N, the standard deviation is proportional to 1/√N. Since the error in the estimate behaves like the standard deviation, it is necessary to quadruple N to halve the error [32, p. 145] [17, p. 153].
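To make the estimator concrete, the short C# program below (our own illustration, not part of the ray tracer) approximates the integral of g(x) = x² over [0, 1), whose exact value is 1/3, using the uniform density p(x) = 1; quadrupling N roughly halves the error, as stated above.

using System;

static class MonteCarloDemo
{
    // Estimates the integral of g over [0, 1) with N uniform samples,
    // i.e. (1/N) * sum of g(x_i) / p(x_i) with p(x) = 1.
    static double Estimate(Func<double, double> g, int N, Random rand)
    {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += g(rand.NextDouble());
        return sum / N;
    }

    static void Main()
    {
        var rand = new Random(42);
        Func<double, double> g = x => x * x;   // exact integral on [0, 1) is 1/3

        foreach (int N in new[] { 100, 400, 1600, 6400 })
        {
            double estimate = Estimate(g, N, rand);
            Console.WriteLine("N = {0,5}: estimate = {1:F5}, error = {2:F5}",
                              N, estimate, Math.Abs(estimate - 1.0 / 3.0));
        }
    }
}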

4.4.6 Solving the Transport Equation

Monte Carlo integration proves useful when we want to approximate the light transport equation in a ray tracer.

Figure 4.13: The geometry for the transport equation in its directional form.

The transport equation is defined as

\[
L_s(\vec{k}_0) = \int_{\text{all } \vec{k}_i} \rho(\vec{k}_i, \vec{k}_0)\, L_f(\vec{k}_i) \cos\theta_i \, d\sigma_i, \tag{4.8}
\]

where k_i is the ingoing direction, k_0 is the outgoing direction, and dσ_i is the solid angle measure of the ingoing direction. L_s is the outgoing radiance of the surface at a given point and L_f is the incoming radiance at that point. Note that the solid angle relates the raw stream of photons (the flux) to the intensity of light, and it is almost exclusively the integration variable of choice in Monte Carlo ray tracing when the incoming radiance is integrated. The Monte Carlo formula for one sample is

\[
\int_{x \in S} g(x)\, d\mu \approx \frac{g(x_0)}{p(x_0)}, \tag{4.9}
\]

where x_0 is a random variable with underlying density p. Since the transport equation is an integral, it can be approximated with Monte Carlo integration. Plugging Equation 4.8 into Equation 4.9 yields

\[
L_s(\vec{k}_0) \approx \frac{\rho(\vec{k}_i, \vec{k}_0)\, L_f(\vec{k}_i) \cos\theta_i}{p(\vec{k}_i)},
\]


where the direction k_i is chosen randomly with density p. This formula is simplest when we use p to cancel out terms in the numerator:

\[
p(\vec{k}_i) = \frac{\rho(\vec{k}_i, \vec{k}_0) \cos\theta_i}{R(\vec{k}_0)},
\]

where R is the directional hemispherical reflectance. That particular choice of p yields

\[
L_s(\vec{k}_0) \approx R(\vec{k}_0)\, L_f(\vec{k}_i). \tag{4.10}
\]

Equation 4.10 has an intuitive form: the radiance of a surface is the fraction of energy reflected, multiplied by the color seen in the direction in which the viewing ray scatters [32, p. 170].
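As a concrete check (our own example, not taken from [32]), consider a Lambertian surface with constant BRDF ρ = R/π and cosine-weighted sampling, p(k_i) = cos θ_i / π. The one-sample estimate then becomes

\[
L_s(\vec{k}_0) \approx \frac{(R/\pi)\, L_f(\vec{k}_i)\cos\theta_i}{\cos\theta_i/\pi} = R\, L_f(\vec{k}_i),
\]

which is exactly the form of Equation 4.10.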


Chapter 5

Complexity Theory

In this chapter we will discuss and demonstrate the analysis of algorithmic complexity using Big-O notation. Three-dimensional computer graphics involves several algorithms for performing various data operations, and Big-O notation gives us a way to analyze the efficiency of each of them: it determines the complexity of an algorithm, that is, how the number of operations grows with an input of size n.

Because Big-O notation characterizes the growth of functions, it is a useful way to investigate the efficiency of algorithms regardless of software or hardware performance. The analysis estimates the maximum number of operations an algorithm executes as the input grows, so we can measure not only the efficiency of an algorithm in general but also its efficiency for a specific task; for example, the aforementioned Z-buffer has O(n) complexity, where n is the number of primitives.

We define our functions f and g from the real numbers or from the integers. We say that f(x) is O(g(x)) if there exist constants (called witnesses) C and k such that:

|f(x)| ≤ C|g(x)| when x > k.

Thereby we can determine whether f(x) is O(g(x)) by finding at least one pair of witnesses C and k so that the above is fulfilled [29].

We introduce Ω(g(x)) and Θ(g(x)) as additions to Big-O: Ω provides a lower bound and Θ provides both an upper and a lower bound on the growth of the function. Put differently, we can say that:

• f(x) is O(g(x)): The growth of f(x) is no more than g(x).

• f(x) is Ω(g(x)) : The growth of f(x) ≥ g(x).

• f(x) is Θ(g(x)) : f(x) is O(g(x)) and f(x) is Ω(g(x)).

In the pursuit of solving tasks with algorithms, we need to establish whether or not the algorithm in question provides a satisfactory solution to the problem at hand. Not only can different algorithms differ greatly in efficiency, they must, of course, always provide the correct answer to the problem. When analyzing algorithms, several types of complexity apply.


Figure 5.1: The dashed part of f(x) meets the condition f(x) < Cg(x) [29]

One may deal with space complexity, an analysis used to determine the amount of memory used by an algorithm. In this section, however, we discuss time complexity in terms of Big-O notation.

Time Complexity

Time complexity compares the number of operations an algorithm needs to perform a task. It is an effective way to compare algorithms across systems, as we do not have to take factors such as clock rate or memory speed into consideration. The operation used to measure time complexity can be essentially any basic operation, such as a comparison of integers or an addition of integers.

As an example, we can examine the time complexity of a very basic algorithm that searches for a specific integer in a list of arbitrary integers, a linear search algorithm, as shown below.


procedure LinearSearch(x : integer, a_1, a_2, ..., a_n : distinct integers)
    i := 1
    while i ≤ n and x ≠ a_i do
        i := i + 1
    end while
    if i ≤ n then
        location := i
    else
        location := 0
    end if
end procedure

In this case two comparisons are performed for each entry in the list: one to establish whether the end of the list has been reached, and one comparing the element x to the current integer a_i. One additional comparison is made after the loop to determine the end result.

From this we can assert that if x, our search term, is in the list at position i (that is, x = a_i), the number of comparisons performed is 2i + 1. However, if x is not in the list, more comparisons are performed: 2n comparisons are done to determine that x is not any a_i, another comparison is needed to exit the loop, and finally the last comparison determines the result. At worst, the number of comparisons used by the linear search algorithm is therefore 2n + 2. To establish that 2n + 2 is O(n), we need witnesses C and k. We recall that f(x) is O(g(x)) if |f(x)| ≤ C|g(x)| when x > k. Choosing C = 3 and k = 2 works, since

2n + 2 ≤ 3n when n ≥ 2, hence 2n + 2 is O(n).

The type of complexity we are dealing with here is worst-case complexity, that is, the number of operations the algorithm needs to guarantee a solution to the problem.

It should be noted that an algorithm with time complexity O(n) scales linearly with its input. For describing the growth of functions, and thereby of algorithms, Big-O notation has proved itself a useful way to characterize time complexity.
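For reference, a C# transcription of the linear search above could look as follows (our own version; the index is returned 1-based to match the pseudocode):

// Returns the 1-based position of x in the array a, or 0 if x is not present.
// In the worst case (x not in the list) the total number of comparisons is
// 2n + 2, matching the analysis above.
static int LinearSearch(int x, int[] a)
{
    int i = 0;
    while (i < a.Length && x != a[i])
        i++;
    return (i < a.Length) ? i + 1 : 0;
}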


Chapter 6

Implementation

The purpose of this chapter is to document the requirements and choices we have made for the ray tracer. It also includes a short section about the construction process with a few code examples. We have omitted ray intersection code samples from this chapter, since they are covered in Section 3.3. Finally, we will examine the final piece of software and discuss the results.

6.1 Specifications

Our goal is to create a ray tracer which supports lighting, texture mapping, different camera angles and soft shadows.

Development Environment

We have chosen C# (.NET) as our programming language. The justification for this choice comes from several considerations. First of all, an excellent IDE (Integrated Development Environment), Microsoft Visual Studio 2008 (VS2008), is available. In addition, the .NET Framework includes a large class library and automated garbage collection, which simplify many common programming tasks and prevent memory leaks, respectively. This, together with the debugger in VS2008, allows us to shorten the development process considerably. Finally, we also think it would be interesting to see how a ray tracer running on managed code performs, since most of the implementations we have come across use unmanaged C++.

Primitives

The ray tracer needs to support the following primitives: spheres, planes and triangles. One could argue that a plane could consist of two triangles, though we see an advantage in also having the plane available. There are quite a few other interesting shapes which could add to the ray tracer, like the torus or the cylinder. Still, we consider the first three primitives the most essential. By making the cut here, we can focus on getting materials and texture mapping right for these.

Effects

In terms of realism, our focus will be directed at lighting and rendering good-looking soft shadows. We will also support different types of materials: diffuse, phong metal, diffuse-specular, dielectric and luminaire materials. Texture maps will be enabled for spheres, with a choice between bilinear and bicubic interpolation, which enables us to avoid jagged textures. Image quality can be adjusted by an implementation of the jittered sampling algorithm shown in Section 3.4. In addition, the effect of depth of field can be created by jittering the camera position in the sampling routine.

Optimization

Our primary goal is to support the aforementioned features; if there is time for it, we will look into optimizing the rendering time of the ray tracer. This is because we believe it is important that the ray tracer supports realistic effects, since this is one of the main arguments for ray tracing. So, basically, we have a set goal for the degree of realism and treat optimization as a secondary concern.

In connection with optimization, we considered using adaptive bounding-box hierarchies (BVHs) or uniform spatial subdivision (Section 3.5). If we were to choose, it would be the first strategy, since it is a well-known, robust and simple way to reduce ray-object intersection complexity to a sub-linear level. What remains is to plug in the functionality and create a bounding-box class that allows ray-box intersection; a sketch of such a test is given below.
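The sketch uses the standard slab method and is our own illustration of what the bounding-box class could contain; the Vector3 and Ray member names are assumptions based on the listings later in this chapter.

// Hypothetical axis-aligned bounding box with a slab-based ray intersection
// test; Min and Max are the two opposite corners of the box.
class BoundingBox
{
    public Vector3 Min, Max;

    // Returns true if the ray hits the box for some t in [tMin, tMax].
    public bool Hit(Ray ray, double tMin, double tMax)
    {
        double[] origin    = { ray.Origin.X, ray.Origin.Y, ray.Origin.Z };
        double[] direction = { ray.Direction.X, ray.Direction.Y, ray.Direction.Z };
        double[] min       = { Min.X, Min.Y, Min.Z };
        double[] max       = { Max.X, Max.Y, Max.Z };

        // Test the three axis-aligned slabs in turn and shrink the interval.
        for (int axis = 0; axis < 3; axis++)
        {
            double invD = 1.0 / direction[axis];
            double t0 = (min[axis] - origin[axis]) * invD;
            double t1 = (max[axis] - origin[axis]) * invD;
            if (invD < 0.0) { double tmp = t0; t0 = t1; t1 = tmp; }

            tMin = t0 > tMin ? t0 : tMin;
            tMax = t1 < tMax ? t1 : tMax;
            if (tMax <= tMin)
                return false;   // the slabs do not overlap: the ray misses
        }
        return true;
    }
}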

Additional Features

We may also include support for shape instancing by means of matrix transformations, so that it is possible to rotate and otherwise transform shapes.

6.2 Construction

6.2.1 Class Diagram

In this section two diagrams are listed, which illustrate the relations between the different classes in the source code.

Materials, Textures and Shapes

Figure 6.1: Class diagram of materials, textures and shapes.

As shown in Figure 6.1, all materials descend from the Material class, which contains a number of virtual methods that can be overridden by inherited classes. The use of polymorphism allows us to treat materials in a general way. If, for instance, a specific material does not override a certain method, the method call will default to the base class.


Textures inherit from the abstract class Texture, which simply encapsulates an abstract method that returns the RGB color at the specified point. The responsibility of returning the color at the given coordinate set is handled solely by the concrete inherited texture.

Lastly, shapes also inherit from a common abstract class, namely Shape. This class is also simple, since it only contains three abstract boolean methods, used for regular and shadow ray hit testing and for randomly sampling light sources.

The Ray Tracer

Figure 6.2 shows the additional classes used in the ray tracer. The classes that directly handle rendering are RayTracerForm, which contains the user interface, and RTracer, which encapsulates the core ray tracing functionality.

Figure 6.2: Class diagram containing the rest of the noteworthy classes within the project.

6.2.2 The RTracer Class

The RTracer constructor takes a width and a height parameter to define the image size, along with the number of samples per pixel, which allows adjustment of image quality. The background color and ambient light can also be set.

It is possible to specify the maximum number of secondary rays, i.e. diffuse and specular rays, which can be spawned by the primary rays. This allows us to configure how many times reflected light will be allowed to bounce off the surfaces it hits. As long as the original ray intersects with a surface and the maximum depth of each type of secondary ray has not been exceeded, new rays are generated recursively.

The following code samples show methods from the RTracer class.

6.2.3 The Render Method

The ray tracer has a main rendering method called Render(), which is shown in Listing 6.1. This method iterates through the pixels left to right, from top to bottom. It is also here that the multi-sampling takes place. The random number generator returns a number in the range [0, 1).

Also note that we can simulate depth of field here by jittering the camera position for each sample (line 11). This makes objects far away from the viewing plane appear blurred and out of focus.

Listing 6.1: The Render Method

 1 public void Render()
 2 {
 3     for (int y = 0; y < screenHeight; y++) {
 4         for (int x = 0; x < screenWidth; x++) {
 5             // Sampling
 6             RGB pixelColor = new RGB();
 7             for (int i = 0; i < numSamples; i++) {
 8                 for (int j = 0; j < numSamples; j++) {
 9                     // If depth of field is enabled, jitter.
10                     if (UseDepthOfField)
11                         Scene.Camera.Jitter(new Vector3
12                         {
13                             X = Scene.Camera.EyePosition.X + rand.NextDouble() * jitter.X,
14                             Y = Scene.Camera.EyePosition.Y + rand.NextDouble() * jitter.Y,
15                             Z = Scene.Camera.EyePosition.Z + rand.NextDouble() * jitter.Z
16                         });
17
18                     pixelColor += ColorTrace(
19                         new Ray
20                         {
21                             Origin = Scene.Camera.Pos,
22                             Direction = GetPoint(
23                                 x + (i + rand.NextDouble()) / numSamples,
24                                 y + (j + rand.NextDouble()) / numSamples,
25                                 Scene.Camera)
26                         }, 0, 0);
27                 }
28             }
29             // Reset the camera for each pixel
30             if (UseDepthOfField)
31                 Scene.Camera.Jitter(new Vector3
32                 {
33                     X = Scene.Camera.EyePosition.X,
34                     Y = Scene.Camera.EyePosition.Y,
35                     Z = Scene.Camera.EyePosition.Z
36                 });
37
38             pixelColor = pixelColor / (numSamples * numSamples);
39             SetPixel(x, y, pixelColor.ToDrawingColor());
40         }
41     }
42 }

6.2.4 Recursive Ray Tracing

Listing 6.2 shows the ColorTrace() method, which is invoked for each sample. The two parameters supplied to the Color() method after the ray parameter in line 3, tMin and tMax, are used to indicate the minimum and maximum values allowed for t. tMin is set to a low number and not zero due to numerical precision problems when computing intersections, which can manifest themselves as visual artifacts.

The two vectors hold arbitrary values, which are used to randomly sample light sources and create reflections.

Listing 6.2: The Color Method

1 private RGB ColorTrace(Ray ray, int depth, int specularDepth)
2 {
3     return Color(ray, 0.00001, 100000, 0,
4         new Vector2(rand.NextDouble(), rand.NextDouble()),
5         new Vector2(rand.NextDouble(), rand.NextDouble()),
6         depth, specularDepth, true);
7 }

The recursive ray tracing function in our implementation is called Color(); the code is included in Listing 6.3. This method is called from the ColorTrace() method shown earlier. Recursion is used to simulate how a ray travels in space, hits a surface and is reflected off in a new direction, until the maximum number of bounces has been reached or the ray does not hit anything.

In lines 14, 18 and 21 the material's EmittedRadiance(), AmbientResponse() and DiffuseDirection() methods are called, which all in turn contribute to the final color. Note that in the latter method a vector vOut is returned. This vector represents the new reflected vector, which is used for scattering.

The variable brdfScale is used in lines 28 and 32 to scale how much the direct lights and the specular scattering contribute to the final color.

The final result from this method is the RGB color. If nothing is intersected, the default background color is returned (line 38).

Listing 6.3: The Color Method

 1 RGB Color(Ray ray, double tMin, double tMax, float time, Vector2 sSeed, Vector2 rSeed, int diffuseDepth, int specularDepth, bool countEmittedLight)
 2 {
 3     HitRecord record = new HitRecord { UVW = new ONB() };
 4     bool cel = false;
 5     RGB color = new RGB(0.0, 0.0, 0.0);
 6
 7     double brdfScale = 0.0;
 8     Vector3 vOut = null;
 9     RGB rColor = new RGB();
10
11     if (Intersect(ref record, ref tMin, ref tMax, ray)) {
12         // Find the emitted radiance
13         if (countEmittedLight) {
14             color += record.Material.EmittedRadiance(record.UVW, -ray.Direction, record.TexPoint, record.UV);
15         }
16
17         // Find ambient light
18         color += Scene.AmbientColor * record.Material.AmbientResponse(record.UVW, ray.Direction, record.Point, record.UV);
19
20         // trace reflected ray
21         if (record.Material.DiffuseDirection(ray.Direction, record, ref rSeed, ref rColor, ref cel, ref brdfScale, ref vOut)) {
22             // diffuse scattering
23             Ray newRay = new Ray { Origin = record.Point, Direction = vOut };
24             if (!countEmittedLight) {
25                 if (diffuseDepth < maxDiffuseDepth) {
26                     color += rColor * Color(newRay, 0.03, double.MaxValue, time, sSeed, rSeed, diffuseDepth + 1, specularDepth, cel);
27                 }
28                 color += brdfScale * DirectLight(ray, 0, ref sSeed, ref record);
29             } else {
30                 // specular scattering
31                 if (specularDepth < maxSpecularDepth) {
32                     color += brdfScale * rColor * Color(newRay, 0.05, double.MaxValue, time, sSeed, rSeed, diffuseDepth, specularDepth + 1, cel);
33                 }
34             }
35         }
36         return color;
37     } else {
38         return Scene.BackgroundColor;
39     }
40 }

6.2.5 Direct Lighting

The DirectLight() method in Listing 6.4 handles light sources. In this method every light source is iterated. For each light, a random point on its surface is sampled. A shadow ray is then fired from the original intersection to the previously found point on the light source. Subsequently, it is checked whether the angle between the surface normal and the shadow ray is larger than zero and whether any object is intersected by the shadow ray.

If this check holds, a call is made to the ExplicitBRDF() method, where the BRDF is calculated. The angle between the light source's normal and the negative direction of the shadow ray is then checked; if it is greater than zero, the light will contribute to the final color.

In line 34 the ShadowHit() method is included. This is where all shapes are iterated and tested for intersection with shadow rays. A boolean true is returned if the number of intersections is greater than zero.

Listing 6.4: The DirectLight and ShadowHit Methods

 1 RGB DirectLight(Ray ray, float time, ref Vector2 seed, ref HitRecord record)
 2 {
 3     RGB result = new RGB(0.0, 0.0, 0.0);
 4     if (Scene.Lights == null)
 5         return result;
 6
 7     foreach (Shape light in Scene.Lights) {
 8         Vector3 onLight = null;
 9         double pdf = 0; // pdf is the probability density
10         RGB emitted = null;
11         Vector3 normal = null;
12
13         if (light.RandomPoint(record.Point, seed, time, ref onLight, ref normal, ref pdf, ref emitted)) {
14             // Instantiate a shadow ray
15             Ray sr = new Ray { Origin = record.Point, Direction = onLight - record.Point };
16             double distance = sr.Direction.Length;
17             double cosine0 = Vector3.Dot(sr.Direction, record.UVW.W) / distance;
18             if (cosine0 > 0.0 && !ShadowHit(sr, 0.00001, 0.9999, time)) {
19                 RGB brdf = null;
20                 if (record.Material.ExplicitBRDF(record.UVW, -ray.Direction, sr.Direction, record.TexPoint, record.UV, ref brdf)) {
21                     double cosine1 = Vector3.Dot(-sr.Direction, normal) / distance;
22
23                     if (cosine1 > 0.0) {
24                         result += brdf * emitted * (cosine0 * cosine1 / (Math.Pow(distance, 2) * pdf));
25                     }
26                 }
27             }
28         }
29     }
30     return result;
31 }
32
33 // Intersects each shape with a shadow ray
34 bool ShadowHit(Ray shadowRay, double tMin, double tMax, float time)
35 {
36     return Scene.Shapes.Count(shape => shape.ShadowHit(shadowRay, tMin, tMax, time) == true) > 0;
37 }

6.2.6 Materials

Materials inherit from a class called Material, as seen in Figure 6.1. A material has a number of overridable methods, namely EmittedRadiance, AmbientResponse, ExplicitBRDF and DiffuseDirection.

The EmittedRadiance() method returns the color of the light emitted from a certain point on the object, and the AmbientResponse() method returns the color of the point on the object we are looking at. The ExplicitBRDF() method returns the color of the light reflected from the point on the object. The last method, DiffuseDirection(), returns a new reflected vector and the color of the point.

The first two methods return a default RGB value of (0, 0, 0), which does not contribute to the final color. The two other methods return false by default.
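A simplified sketch of what this base class could look like is given below. The parameter types are inferred from the calls in Listings 6.3 and 6.4 and should be treated as assumptions; the actual source code on the CD is more detailed.

// Illustrative sketch of the Material base class: every method has a
// harmless default, so a concrete material only overrides what it needs.
abstract class Material
{
    // Light emitted from the surface point; black by default.
    public virtual RGB EmittedRadiance(ONB uvw, Vector3 direction, Vector3 texPoint, Vector2 uv)
    {
        return new RGB(0.0, 0.0, 0.0);
    }

    // Response to ambient light; black by default.
    public virtual RGB AmbientResponse(ONB uvw, Vector3 direction, Vector3 point, Vector2 uv)
    {
        return new RGB(0.0, 0.0, 0.0);
    }

    // Explicit BRDF evaluation; false means no direct-light contribution.
    public virtual bool ExplicitBRDF(ONB uvw, Vector3 vIn, Vector3 vLight, Vector3 texPoint, Vector2 uv, ref RGB brdf)
    {
        return false;
    }

    // Chooses a scatter direction for recursive rays; false means no scattering.
    public virtual bool DiffuseDirection(Vector3 vIn, HitRecord record, ref Vector2 seed, ref RGB color, ref bool cel, ref double brdfScale, ref Vector3 vOut)
    {
        return false;
    }
}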

Our ray tracer has five types of materials:

DiffuseMaterial

DiffuseMaterial is used for objects with only diffuse reflection. DiffuseMaterial takes only one argument, a Texture.

AmbientResponse() is found by taking the color of the Texture at the given point, and ExplicitBRDF() is found by taking one over the color of the Texture at the given point.

DiffuseDirection() is found by scrambling a 2D vector and then determining the diffuse direction from it. The color returned is again found by taking the color of the Texture at the given point.

DiffuseSpecularMaterial

DiffuseSpecularMaterial is used for objects with both diffuse and specular reflection. DiffuseSpecularMaterial takes three arguments: two Materials, DiffMaterial and SpecMaterial, and a double, R0.

AmbientResponse(), ExplicitBRDF() and DiffuseDirection() are all determined from one of the two materials. ExplicitBRDF() is always determined from the DiffMaterial material, while AmbientResponse() and DiffuseDirection() are determined from either one or the other, where R0 is the factor deciding which one to choose.

PhongMetalMaterial

PhongMetalMaterial is used for metal-like objects. PhongMetalMaterial takes two arguments, Texture and PhongExp.

AmbientResponse() is found using the Texture, and DiffuseDirection() is found by taking Texture and PhongExp and using them in the Phong reflection model.


DielectricMaterial

DielectricMaterial is used for glass objects. DielectricMaterial takes two arguments: a double, nt, and extinction, which is of the RGB class.

DielectricMaterial only overrides one of the four material methods, DiffuseDirection(), which it uses to determine whether an incoming light ray is reflected or refracted in the glass object.

LuminaireMaterial

LuminaireMaterial is the material used for light sources. LuminaireMaterial takes four arguments.

The first argument, texture, is used to determine the color of EmittedRadiance(); it can be either a solid color or a texture. In the case where it is a color, the same color of radiance is emitted in all directions of the light source. If the argument is a texture, the emitted light in a given direction is determined from the texture mapped onto the light source, and the color of the light will vary across the light source, like a paper lantern. The next two arguments, multiplier and phongExp, are used to determine the intensity of EmittedRadiance(). The last argument, material, is used to determine AmbientResponse(), ExplicitBRDF() and DiffuseDirection().

6.2.7 Texture Mapping

The ImageTexture class enables us to map textures onto objects, provided we supply its Value() method with the corresponding UV coordinates. The class contains an enum which allows us to flag which interpolation method we want to use. As mentioned earlier, we have provided three methods of texture map interpolation; a sketch of the bilinear variant is given below.
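The sketch samples a raster of RGB values at fractional (u, v) coordinates. It is our own helper, assuming the report's RGB type supports addition and scaling by a double; the actual ImageTexture.Value() implementation on the CD may differ.

// Hypothetical bilinear lookup: u and v lie in [0, 1), and the raster is
// indexed as raster[x, y] with dimensions width x height.
static RGB SampleBilinear(RGB[,] raster, double u, double v)
{
    int width = raster.GetLength(0);
    int height = raster.GetLength(1);

    // Continuous pixel coordinates and the four surrounding texels.
    double x = u * (width - 1);
    double y = v * (height - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = System.Math.Min(x0 + 1, width - 1);
    int y1 = System.Math.Min(y0 + 1, height - 1);

    // Fractional offsets inside the texel cell.
    double fx = x - x0;
    double fy = y - y0;

    // Interpolate horizontally on the two rows, then vertically between them.
    RGB top    = (1 - fx) * raster[x0, y0] + fx * raster[x1, y0];
    RGB bottom = (1 - fx) * raster[x0, y1] + fx * raster[x1, y1];
    return (1 - fy) * top + fy * bottom;
}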

6.2.8 .NET Bitmap Optimization

During development it became apparent that there was a bottleneck in the code: using the standard Bitmap class, available in the .NET Framework, was far too inefficient. Therefore, we had to create a wrapper class which allows us to get and set pixel values of a bitmap more efficiently.

This FastBitmap class makes use of pointers to access memory directly and locks the bitmap object from being accessed while in use. This minimizes the overhead from repeatedly locking and unlocking that occurs when calling the Bitmap class's GetPixel and SetPixel methods.

Bitmaps are required when instantiating texture maps to the Picture class's raster array and when outputting the final bitmap to the screen. After using the FastBitmap class for this, we saw significant speedups in rendering time. A minimal sketch of such a wrapper is shown below.
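The sketch is our own condensed version, assuming a 32-bit ARGB bitmap; the FastBitmap class on the CD is more complete.

using System.Drawing;
using System.Drawing.Imaging;

// Minimal LockBits-based pixel access; requires compiling with /unsafe.
sealed class FastBitmap : System.IDisposable
{
    private readonly Bitmap bitmap;
    private readonly BitmapData data;

    public FastBitmap(Bitmap bitmap)
    {
        this.bitmap = bitmap;
        // Lock the whole bitmap once, instead of locking per GetPixel/SetPixel call.
        data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                               ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    }

    public unsafe void SetPixel(int x, int y, Color color)
    {
        byte* pixel = (byte*)data.Scan0 + y * data.Stride + x * 4;
        pixel[0] = color.B;   // BGRA byte order in memory
        pixel[1] = color.G;
        pixel[2] = color.R;
        pixel[3] = color.A;
    }

    public unsafe Color GetPixel(int x, int y)
    {
        byte* pixel = (byte*)data.Scan0 + y * data.Stride + x * 4;
        return Color.FromArgb(pixel[3], pixel[2], pixel[1], pixel[0]);
    }

    public void Dispose()
    {
        bitmap.UnlockBits(data);   // release the lock when done
    }
}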

6.3 Results

In this section we include a few screenshots demonstrating various scenes with different settings. Rendering times are given in the format minutes:seconds:milliseconds.

Scene 1

The scene, displayed in Figure 6.3, contains one textured sphere (with bicubic interpolation), a plane and four spherical spot lights above the sphere. The plane uses a diffuse material and the sphere uses a diffuse and specular material. The multiplier and phong exponent of each light source are set to 5 and 1, respectively, with a phong metal material.

The three tests shown in Figure 6.3 were run with different settings at a resolution of 600 × 600 on a laptop. The system specifications are: 1.83 GHz Core 2 Duo, 2 GB RAM, Windows XP. In each case the diffuse and specular depth limits were set to 4 and the ambient lighting term was set to a dark slate blue color.

Figure 6.3: A. 360.000 primary rays, 1 sample/pixel, time: 00:03:851. B. 5.760.000 primary rays, 16 samples/pixel, time: 01:02:431. C. 36.000.000 primary rays, 100 samples/pixel, time: 06:16:723.

The variance in image A is clearly visible, with fuzzy-looking edges and noticeable random noise in the form of specks of light hitting the plane.

By increasing the number of samples per pixel to 16 in image B, we see a significant reduction in variance, with sharply defined edges and more consistent color changes. We have eliminated the most undesirable dots which were scattered on the plane in the first image. However, noise is still rather visible where the light colors the sphere and plane.

In image C, noise is hardly visible. The spot lights and the shadow projected by the sphere have soft edges.

Scene 2

In this scene, shown in Figure 6.4, we demonstrate the dielectric material. The ambient light was set to a dark blue color and the background to a sky blue color. All shapes use a simple texture with a solid color, with either diffuse, specular or phong metal materials. The resolution and system remain unchanged from the previously discussed scene.

Image A is somewhat crude, with an obviously high degree of variance. Notice the completely black pixels distributed primarily in the shadows. These black pixels would most likely not be as noticeable in a darker scene, such as scene 1.

In the next image all shapes are sharply defined, yet the scene is still littered with noise. Image C contains much less noise than image B, since more than 6 times as many samples have been used to render the image.

Finally, image D displays an image where twice as many samples were taken compared to image C. Noise is much harder to make out in this image, but at the cost of extended rendering time.


Figure 6.4: A. 360.000 primary rays, 1 sample/pixel, time: 00:05:712. B. 5.760.000 primary rays, 16 samples/pixel, time: 01:20:342. C. 36.000.000 primary rays, 100 samples/pixel, time: 09:00:481. D. 144.000.000 primary rays, 200 samples/pixel, time: 35:19:853.

Scene 3 - Depth of Field

Figure 6.5 demonstrates the use of depth of field. The scene contains 9 primitives and 4 light sources at a resolution of 300 × 300. The ambient light was set to zero and the background to a light grey color. For each sampled camera position, the random number ξ was multiplied by 1/20. This resulted in a moderately blurred image. By multiplying ξ with a larger number, the amount of blurring increases.

This effect requires many samples for it to look correct. Image C in Figure 6.5 still displays noticeable levels of noise in the reflective sphere in the upper right-hand corner, despite the fact that 200 samples/pixel were used.


Figure 6.5: A. 1.440.000 primary rays, 16 samples/pixel, time: 00:31:776. B. 9.000.000 primary rays, 100 samples/pixel, time: 03:13:846. C. 36.000.000 primary rays, 200 samples/pixel, time: 13:14:008.

6.3.1 Performance Analysis

In this section we analyse the performance of the ray tracer. Figure 6.6 shows how the rendering time increases as a function of the number of samples. The functions are third-order polynomials, interpolated from the data collected from the previously discussed test scenes. When we look at Table 6.1 and Figure 6.6, we see that scene 2 grows faster than scene 1, which contains fewer primitives. Scene 3, with the same number of primitives as scene 2, does not grow as fast as the other two, since its resolution was half as large in each dimension. This kind of growth is not surprising, since sampling involves partitioning each pixel into n² subpixels.

Scene   Primitives   Lights   Resolution
1       2            4        600 × 600
2       9            4        600 × 600
3       9            4        300 × 300

Table 6.1: Scene details


Figure 6.6: Functions interpolated from test results

Ray-Shape Intersection

In this section we analyze the time complexity of the intersection procedure, shown in the listing below. Suppose our scene contains n shapes, which we wish to test for intersection for a given pixel. For this we invoke the Intersect method, which contains a single foreach loop that performs one test per shape. The procedure always performs n tests, since every shape is tested in order to find the nearest intersection, regardless of whether hit has already been set to true. The complexity of this procedure is therefore O(n), which is equivalent to linear time complexity.

 1 bool Intersect(ref HitRecord record, ref double tMin, ref double tMax, Ray ray)
 2 {
 3     bool hit = false;
 4     foreach (Shape shape in Scene.Shapes) {
 5         if (shape.Hit(ray, tMin, tMax, 0, ref record)) {
 6             tMax = record.T;
 7             hit = true;
 8         }
 9     }
10     return hit;
11 }

In the end our ray tracer has a complexity closer to O(I · n), where I is the number of samples used. This is due to the fact that a ray is traced for each sample, and each such ray iterates through all shapes in the Intersect() method.

6.4 Discussion

Our ray tracer is able to render the primitives we set out to include. Spheres support both solid textures and bitmap textures. Unfortunately, we did not manage to implement texture mapping correctly for the other primitives.

We can enhance realism, in the form of lighting and soft shadows, with the BRDF lighting model, Monte Carlo sampled light sources, and by increasing the number of samples per pixel.

It is clear that the frame rates delivered by our ray tracer are nowhere near real-time, unless we reduce sampling greatly, cut down the number of lights and decrease the resolution significantly. For that reason the ray tracer, in its current state, is not able to deliver high-quality images at interactive frame rates at reasonable resolutions, such as 800 × 600. We are able to render our test scenes at about 5-7 fps if we reduce the resolution to 100 × 100 pixels and use only one sample per pixel.

Had we implemented one of the pruning techniques discussed in Section 3.5, such as BVHs, intersection testing could have been reduced to logarithmic complexity.


Chapter 7

The Film Industry on Real-time Ray Tracing

To investigate in what manner the film industry uses ray tracing, and whether there could be a use for real-time ray tracing, we decided to contact experts in the 3D computer graphics field. We will also expand on the reason why we made an effort to get an interview, and how it is valuable in this report.

We contacted three different people in the computer graphics community, namely Daniel Pohl, Tomas Akenine-Moller and Per Christensen. Daniel Pohl is the lead developer of Quake 3: Raytraced, which builds upon the OpenRT API. Tomas Akenine-Moller is a professor in computer science with specialization in computer graphics at the Department of Computer Science, Lund University, Sweden, and a co-author of [2]. Per Christensen is a rendering software developer at Pixar in Seattle [28; 1; 5].

7.1 Why Interview?

There are many reasons to seek an interview with an external contact: they may offer great theoretical insight, have a different point of view which we have not considered, and more. Most importantly for us, the interview offers insight into the more practical applications of ray tracing, and whether or not it would be of any interest to the industry today. The external contact is neutral in regards to our project and offers unbiased opinions in relation to our expectations and preconceptions.

Additionally, each external contact may represent a different part of the field. Akenine-Moller could offer great insight into the application of ray tracing from a theoretical viewpoint, while Per Christensen has a clear view of one industry's take on just that theory. In this specific context an interview is qualitative in nature, as opposed to a quantitative questionnaire. The objectivity of a scientific interview can, according to some, be disputed. A scientific interview, however, can be objective in the sense of arithmetical intersubjectivity [19].

Ray tracing versus rasterization is an ongoing debate, and there are many different opinions in the field of computer graphics. The main purpose of seeking an interview with these experts is to acquire opinions on the future of real-time ray tracing. Unfortunately, we were only able to get a response from a single contact, namely Mr. Christensen. Nonetheless, we are very satisfied with our findings and how they have aided in broadening the discussion of real-time ray tracing.


7.2 Interview with Per Christensen

Pixar Animation Studios is an Academy Award®-winning computer animation studio, which creates computer-animated feature films. In partnership with Walt Disney Pictures, Pixar created and produced Toy Story (1995), A Bug's Life (1998), Toy Story 2 (1999), Monsters, Inc. (2001), Finding Nemo (2003), The Incredibles (2004), Cars (2006) and WALL•E (2008) [26].

We asked Mr. Christensen about the challenges associated with real-time ray tracing, which tasks ray tracing is better suited for than rasterization, and what role ray tracing could play in the future; the interview in its entirety is available in appendix A. Mr. Christensen says:

"Real-time ray-tracing for complex scenes has become (or will soon become) feasible with the new processors such as Larrabee from Intel. The ray-tracing vs. rasterization debate has raged for a few years already, and probably won't be resolved anytime soon. There's a lot of industry dollars at stake as well, with nVidia and Intel trying to out-do each other. Here at Pixar we use a hybrid approach: rasterization of directly visible geometry, and on-demand ray tracing for shadows, reflections, ambient occlusion, etc."

In short, Pixar uses a hybrid rendering technology to get the advantages of both methods at the same time. This is an interesting concept and a relevant point in the discussion of ray tracing in the film industry.

Mr. Christensen begins by revealing that Pixar uses an in-house developed algorithm for their rendering tasks, called Reyes, which is an acronym for Renders Everything You Ever Saw. The Reyes algorithm is used in Pixar's rendering software, PhotoRealistic RenderMan (PRMan) [27]. He points out that the main benefits of using this method are independence of shading, memory efficiency, and shader-free fast motion blur and depth of field.

Next, he indicates that if non-hybrid rasterization were used, they would encounter resolution limitations with reflection maps and shadow maps (A; Question 3), and that ambient occlusion is very awkward to compute, in that it requires many shadow maps from various directions. A shadow map is a technique which can be used in the Z-buffer algorithm to cast shadows. The technique works by comparing the location of each primitive on a per-pixel basis against the shadow map. This means that the quality of the shadows depends on the pixel resolution of the shadow map, as well as the numerical precision of the Z-buffer [2, p. 348-350]; otherwise we might see shadows with jagged edges, as seen in some games today.

According to Mr. Christensen, the main advantages of ray tracing in relation to creating CGI films are that it is easy to obtain sharp or soft shadows, accurate reflections and ambient occlusion (appendix A). Ambient occlusion is a shading method which enables higher realism by replicating the attenuation of light due to occlusion.

We proceeded to ask Mr. Christensen whether real-time ray tracing could hold any benefits for the CGI industry. To this he answered that anything which can speed up rendering is valuable; however, he goes on to explain that the real bottleneck is not the raw speed of the ray tracing process, but rather evaluating the shaders at the points the rays hit. He goes on to say that even with the time of ray tracing reduced to zero, the rendering time of most Pixar movies would most likely only be reduced by 10-20%.

It is not the impression of Mr. Christensen that there would be any benefits in replacing the current hybrid approach with a brute-force ray tracing approach, since the bottleneck of movie rendering lies in the shading, due to the complex computational tasks at each specific point in these highly complicated scenes. However, he does not dismiss the possibility of ray tracing being useful for real-time applications, e.g. games.

We proceeded to ask Mr. Christensen what the advantage of Pixar's current hybrid approach is. The answer lies not so much in rendering speed as in image quality. Where rasterization falls short, ray tracing shines through, offering precise shadows, accurate reflections, and ambient occlusion. Ray tracing, however, is not effective at rendering depth of field and motion blur. It is not his belief that we are currently going through a transitional phase, since there is no point in using ray tracing when rasterization can compute most of the same tasks at greater speed. Ultimately, rasterization is harder to implement, and using ray tracing would simplify the rendering software a great deal - but simplification is not always the way to go.

The rendering algorithms that Pixar uses, and their software in general, are versatile. The main point of their rendering philosophy is to use ray tracing where appropriate; ray tracing was used extensively in the movie 'Cars', as Mr. Christensen elaborates in the paper he refers to at the end of the interview in appendix A.

Figure 7.1: For realistic and accurate reflections, ray tracing was an effective choice. Here seen in the Cars short spin-off, Mater and the Ghostlight.

The movie industry will take any hardware speedups at hand; most exciting is task-specific multi-threaded hardware, which can offer serious speed for the rendering process and make the creative process a great deal simpler.

Specifically, Mr. Christensen mentions the Larrabee chip (A; Question 9). Larrabee is a new chip currently in development at Intel®, specifically aimed at programmability [30]. The goal of this processing unit is to bring the power of GPUs and the programmability of CPUs together; the architecture allows the graphics pipeline to be fully programmable.

Task-specific hardware allows for great speed improvements in any rendering process. Essentially, the processing unit grants greater control of execution, and developers can perform optimizations on any process, where the current GPU model is limited by its fixed technical applications. Most importantly, through this programmability it is possible to speed up tasks such as shading, or any task in the graphics pipeline, by multi-threading computational tasks.

7.3 Interview Perspective

While the movie industry is ultimately interested in offering the best image quality, they are willing to sacrifice rendering time for a better image - such as using ray tracing for shadowing and complicated reflections. However, certain effects such as depth of field, which are quite important for cinematic effect, are achieved much more simply and effectively through rasterization.

Real-time ray tracing could have a future in the gaming industry, and possibly in other real-time applications, due to the nature of rendering in real-time. When rendering in real-time we only wish to stay above a certain performance threshold, at an output of minimum 30 frames per second [2], and preferably 60. Seeing how the movie industry spends several hours on each individual frame, it is apparent that they are willing to sacrifice time in the rendering process, but will speed up the rendering process in any way possible through hardware improvements. For real-time rendering, hardware will hopefully reach a point where it is able to sustain 60 frames per second while at the same time offering photorealistic imaging in real-time, and not be as dependent on scene complexity as the rasterization approach in games is today. A great candidate for introducing more task-specific hardware is Intel's Larrabee chip, and in regards to ray tracing this programmability is just what would be needed to provide more computational power to specific tasks. This is a great improvement, as any task may be multi-threaded, something which is not possible with today's GPUs.

Ultimately, the movie industry will want the best of both worlds, and is happy to take any speedups in its rendering process it can get. But in the end, speed will probably not influence their choice of ray tracing versus rasterization in their hybrid approach. There is probably greater improvement to be had in shading speed.


Chapter 8

Conclusion

In this report we have covered ray tracing, a method which can be used to create realistic 3D graphics in films and video games. We have investigated the theoretical and technical aspects of the method, as well as its real-world applications in the film industry. We set out to find out how ray tracing could become more prevalent in the film industry.

We attempted to answer this question by interviewing Per Christensen, a rendering software developer at the 3D animation film-making company Pixar, in Seattle. We interviewed him to get a view of ray tracing as of today, and to gather expectations for the future from the perspective of an industry which makes use of the method. We also coded a functioning ray tracer in order to understand the mechanics of ray tracing and to gain insight into some of today's limitations associated with it. In the following, we cover our findings with respect to the four initial sub-questions.

In relation to how ray tracing works, we found that ray tracing is a technique which enables developers to produce photorealistic images with relative ease. It works by sending rays into a scene from the camera or "eye", and when a ray encounters an object, the color of that specific pixel is computed. The final color of the pixel is influenced by the material and reflectivity of the object, while the intensity of the color is determined by other shadows and reflections in the scene. The quality of the generated images depends on the number of rays fired through each pixel, and on how many times a ray may bounce.
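As a concrete illustration of this process, the self-contained C# sketch below sends one primary ray through each "pixel" of a small image, intersects it with a single hard-coded sphere, and shades the hit point with a simple diffuse term, printing the result as ASCII art. It is only a minimal sketch; the scene, constants and names are invented for illustration and are not taken from our own ray tracer.

using System;

// Minimal sketch: one ray per pixel, intersected with a single sphere and
// shaded with a simple diffuse term. None of this is code from our ray tracer.
class MiniTraceSketch
{
    static void Main()
    {
        double[] center = { 0, 0, -3 };                   // sphere centre
        double radius = 1.0;
        double[] light = Norm(new double[] { -1, 1, 1 }); // direction towards the light
        const int W = 60, H = 30;
        string shades = " .:-=+*#%@";                     // darkest to brightest

        for (int y = 0; y < H; y++)
        {
            for (int x = 0; x < W; x++)
            {
                // Primary ray from the "eye" at the origin through the current pixel.
                double px = (x - W / 2.0) / H;
                double py = (H / 2.0 - y) / H;
                double[] d = Norm(new double[] { px, py, -1 });

                // Ray/sphere intersection (ray origin at (0,0,0), unit direction d).
                double b = Dot(d, center);
                double disc = b * b - Dot(center, center) + radius * radius;
                double brightness = 0.0;                  // background stays dark
                if (disc >= 0 && b - Math.Sqrt(disc) > 0)
                {
                    double t = b - Math.Sqrt(disc);
                    double[] hit = { d[0] * t, d[1] * t, d[2] * t };
                    double[] n = Norm(new double[] {
                        hit[0] - center[0], hit[1] - center[1], hit[2] - center[2] });
                    brightness = Math.Max(0.15, Dot(n, light)); // simple diffuse shading
                }
                Console.Write(shades[(int)(brightness * (shades.Length - 1) + 0.5)]);
            }
            Console.WriteLine();
        }
    }

    static double Dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

    static double[] Norm(double[] v)
    {
        double len = Math.Sqrt(Dot(v, v));
        return new double[] { v[0]/len, v[1]/len, v[2]/len };
    }
}

Firing more rays per pixel and letting rays bounce recursively is what turns a sketch like this into a full ray tracer, at a corresponding cost in rendering time.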

When designing our own ray tracer, we chose C# (.NET) as our programming language. Even though this language does not offer the best performance, it presents us with valuable debugging tools, which aided us in the process. Our ray tracer includes the primitives triangles, planes, and spheres, which are some of the most basic primitives in computer graphics. The materials of these objects can have different properties, such as glass, metal or other materials with different types of reflection; those reflections may be specular or diffuse. The ray tracer also features texture mapping, allowing us to wrap a texture around spheres instead of just giving them a solid color. We make use of Monte Carlo integration, which allows us to approximate the influence of realistic lighting. We have also implemented distributed ray tracing, in order to approach photo-realistic images.
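To give an idea of how the texture mapping mentioned above can work on a sphere, the sketch below converts a hit point on a unit sphere to spherical coordinates and from those to (u, v) texture coordinates. This is a common approach rather than a listing from our implementation; the names and the example point are purely illustrative.

using System;

// Sketch of sphere texture mapping: hit point -> spherical coordinates -> (u, v).
class SphereUvSketch
{
    // Maps a point on the unit sphere (centred at the origin) to u, v in [0, 1].
    static void SphereToUv(double x, double y, double z, out double u, out double v)
    {
        double theta = Math.Acos(y);      // polar angle, 0..pi
        double phi = Math.Atan2(z, x);    // azimuth, -pi..pi
        u = (phi + Math.PI) / (2 * Math.PI);
        v = theta / Math.PI;
    }

    static void Main()
    {
        double u, v;
        SphereToUv(0, 1, 0, out u, out v); // the "north pole" of the sphere
        Console.WriteLine("u = {0:F2}, v = {1:F2}", u, v);
        // A renderer would now sample the texture image, for example:
        // texel = texture[(int)(u * width), (int)(v * height)]
    }
}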

After conducting the interview, it became apparent that the benefits of ray tracing lie in its ability to offer precise shadows, soft shadows, accurate reflections and ambient occlusion. Ray tracing enables the film industry to achieve photo-realistic CGI films. It is our impression that the film industry is somewhat pragmatic when it comes to rendering, in the sense that they mix and match ray tracing and rasterization where it makes sense, instead of relying on a single method, which has its own shortcomings when used by itself. So, from the interview we have learned that the film industry has neither an urgent need for real-time ray tracing, nor a belief that it will replace rasterization completely in film making.

The process of designing our own ray tracer exemplifies the great deal of computational time it takes, but ray tracers can be sped up by algorithmic improvements, such as pruning intersection tests (a cheap bounding-volume test is sketched below), and especially by hardware improvements. Regarding hardware, we see more task-specific hardware emerging, with Intel's Larrabee chip and the CausticRT ray tracing platform. Larrabee will enable programmability in computational hardware, and this may offer speedups for the ray tracing process. CausticRT promises a fully-fledged ray tracing foundation with both software and hardware designed for accelerating ray tracing. Hardware support for ray tracing is a crucial requirement for ray tracing to come anywhere near the frame rates needed for it to be regarded as real-time on commodity computers.
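As an example of the pruning idea, the sketch below tests a cheap bounding sphere before any exact (and more expensive) intersection tests are run for an object; if the ray misses the bound, the object can be skipped entirely. The geometry and names are illustrative assumptions, not code from our ray tracer.

using System;

// Sketch: a conservative bounding-sphere test used to prune exact intersection tests.
class PruningSketch
{
    // Returns true if a ray from 'origin' along unit vector 'dir' can hit the
    // sphere of radius r centred at c.
    static bool HitsBoundingSphere(double[] origin, double[] dir, double[] c, double r)
    {
        double[] oc = { c[0] - origin[0], c[1] - origin[1], c[2] - origin[2] };
        double b = oc[0]*dir[0] + oc[1]*dir[1] + oc[2]*dir[2];      // projection onto the ray
        double d2 = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - b*b;  // squared distance to the ray
        return d2 <= r * r && b > -r; // close enough, and not entirely behind the origin
    }

    static void Main()
    {
        double[] origin = { 0, 0, 0 };
        double[] dir = { 0, 0, -1 };          // looking down the negative z-axis
        double[] boundCenter = { 5, 0, -10 }; // object far off to the side
        double r = 1.0;

        if (!HitsBoundingSphere(origin, dir, boundCenter, r))
            Console.WriteLine("Bounding sphere missed - exact intersection tests skipped.");
        else
            Console.WriteLine("Bounding sphere hit - run the exact tests for this object.");
    }
}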

From our point of view, we have not been able to get a single defining answer to how the future looks for ray tracing. Yet, we believe that ray tracing could be used in applications other than films, such as video games, provided that the crucial hardware support becomes available and that the gaming industry is willing to adopt a ray tracing based approach to graphics programming. Perhaps the gaming industry will embrace a hybrid rendering strategy, such as the one used at Pixar, if it makes more sense than relying solely on ray tracing. There is a lot of money invested in the hardware and gaming industries, and current rasterization-based graphics cards will not disappear overnight. There has to be an economic incentive for hardware manufacturers as well as game publishers to invest in ray tracing-based technology, or any new technology for that matter.


Chapter 9

Discussion and Reflection

To make ray tracing more prevalent in the film industry, it has to be efficient enough to compete with rasterization, and as Mr. Christensen said, ray tracing has several advantages over rasterization. However, he also pointed out that rasterization can do certain tasks faster than ray tracing. Still, the real bottleneck in movie rendering is the shader evaluations, not the intersection testing. Therefore, simply accelerating ray-shape intersection is not the biggest issue; it is part of a wider set of issues.

Another aspect in the discussion of ray tracing versus rasterization, which Mr. Christensen revealed, is the opportunity to use a hybrid rendering strategy that takes the best parts from both methods: fast motion blur and depth of field from the Reyes rasterization algorithm, and accurate shadows, reflections, and ambient occlusion from ray tracing. With that in mind, he does not believe that ray tracing will replace rasterization in the near future, but Pixar will take any hardware speedups that are available and are very pleased with the competition between Intel and nVidia, which pushes new generations of CPUs and GPUs. In addition, multi-threaded hardware such as Larrabee will support the parallel nature of ray tracing.

Improvements to the Ray Tracer

Our goal was to code a ray tracer capable of rendering lighting, texture mapping, different camera angles and soft shadows; speed was not in focus. The initial goals have been accomplished, but as a result, the trade-off for allowing these effects has been a sacrifice in speed, ultimately slowing the rendering process significantly in order for all of these effects to work properly. As seen, the rendering process was often extended to several minutes. However, even though the ray tracer is rather slow, we do not regard this as a defeating problem. First of all, we had no expectations of outdoing the top-notch ray tracers available today. Secondly, we were only able to get limited results with the hardware we had available: recall that all rendering was done on a run-of-the-mill dual-core laptop.

With that being said, we have created a ray tracer which generates very pleasing results. Put another way, it is acceptable for a process to take a long time if that is the only way to get acceptable results. Our ray tracer has lived up to our initial expectations, and the approach of coding a ray tracer from scratch has had great theoretical value. We met most of our initial demands for the ray tracer.

Still, there are some obvious improvements we could make to the program. To begin with, we could employ one of the pruning strategies we talked about earlier. We could also make the program easier to use: in its current state we have to re-compile if we want to make changes to the scene we are rendering. A much better solution would be to describe the scenes in a separate file, which could be loaded and parsed by the ray tracer, as sketched below.
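The sketch below shows one possible, deliberately tiny text format and a loader for it: one object per line, parsed at start-up so that the scene can be changed without recompiling. Both the format and the SphereDescription type are hypothetical; our ray tracer does not currently read such files.

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

// Sketch of a tiny scene-file format: "sphere <x> <y> <z> <radius> <material>".
class SceneFileSketch
{
    class SphereDescription
    {
        public double X, Y, Z, Radius;
        public string Material;
    }

    static List<SphereDescription> Load(string path)
    {
        var spheres = new List<SphereDescription>();
        foreach (string line in File.ReadAllLines(path))
        {
            string trimmed = line.Trim();
            if (trimmed.Length == 0 || trimmed.StartsWith("#")) continue; // blank lines and comments

            string[] parts = trimmed.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
            if (parts[0] != "sphere" || parts.Length < 6) continue;       // unknown entry type

            spheres.Add(new SphereDescription
            {
                X = double.Parse(parts[1], CultureInfo.InvariantCulture),
                Y = double.Parse(parts[2], CultureInfo.InvariantCulture),
                Z = double.Parse(parts[3], CultureInfo.InvariantCulture),
                Radius = double.Parse(parts[4], CultureInfo.InvariantCulture),
                Material = parts[5]
            });
        }
        return spheres;
    }

    static void Main()
    {
        File.WriteAllText("scene.txt", "# example scene\nsphere 0 0 -3 1 glass\n");
        foreach (var s in Load("scene.txt"))
            Console.WriteLine("Sphere at ({0}, {1}, {2}), r={3}, material={4}",
                              s.X, s.Y, s.Z, s.Radius, s.Material);
    }
}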


Single precision floating-point numbers (floats) have been used in most of the literature about ray tracing we have come across, whereas we have used double precision numbers (doubles), since they are used by default by the math library in the .NET Framework. We could just as easily have used floats throughout the application, but that would also have required type casting in the various parts of the code that use math functions such as power, cosine/sine, exp, etc. Even so, more numerical precision is usually a good thing if it is needed to get the rendered images right.
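The casting issue is illustrated by the small snippet below: the methods on System.Math take and return double, so float-based code must cast the results back, while double-based code can use them directly.

using System;

// Illustration of the float/double casting issue around System.Math.
class PrecisionSketch
{
    static void Main()
    {
        double dAngle = 0.5;
        double dCos = Math.Cos(dAngle);        // natural fit: everything is double

        float fAngle = 0.5f;
        float fCos = (float)Math.Cos(fAngle);  // float code must cast the double result back

        Console.WriteLine("double: {0}  float: {1}", dCos, fCos);
    }
}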

If we want to squeeze more performance out of the code, we could port the program to the open-source .NET alternative, Mono, which supports SIMD extensions (discussed in 1.2). With these we could perform vector operations more efficiently, using fewer CPU cycles for each calculation. In addition, we would not have to make any changes to our code besides the vector classes, since Mono uses the same namespaces and, sure enough, supports C#.
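A minimal sketch of what this could look like, assuming the Mono.Simd library (and a reference to Mono.Simd.dll) is available, is shown below; on the Mono runtime, a Vector4f addition like this is intended to map to a single SIMD instruction instead of several scalar additions.

// Assumes Mono.Simd.dll is referenced and the program runs on the Mono runtime.
using System;
using Mono.Simd;

class SimdSketch
{
    static void Main()
    {
        // The w components are unused padding when treating these as 3D vectors.
        Vector4f a = new Vector4f(1f, 2f, 3f, 0f);
        Vector4f b = new Vector4f(4f, 5f, 6f, 0f);

        Vector4f sum = a + b; // one SIMD add instead of per-component adds
        Console.WriteLine("({0}, {1}, {2})", sum.X, sum.Y, sum.Z);
    }
}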

It might seem a bit misleading that we have defended the fact that rendering good images with our ray tracer is time-consuming, and then go on to discuss various optimization techniques. But these improvements are relatively straightforward to implement, and it is certainly pleasant to be able to generate the same images in less time.


Appendices


Appendix A

Correspondence with Per Christensen, Pixar

B123 → [email protected]

Dear Per Christensen,

We are a group of Computer Science students from Aalborg University, Denmark. We are currently working on a project about realtime ray tracing, which approaches the subject of ray tracing from the angle that films and video games have become increasingly visually detailed and complex over the last decade. Our goal is to find out how the film and gaming industry could benefit from realtime ray tracing.

With this in mind, our reason for writing is to find out if you would be willing to answer a few questions regarding the subject of raytracing. The questions we would like to ask you will be related to challenges associated with realtime ray tracing, which tasks ray tracing is better than rasterization for, and what role you see ray tracing playing in the future.

If you have the time to answer a few short questions, we would like to set up a 15-20 min. Skype interview with you at your convenience.

Thank you for your time.

We look forward to your response,

Sincerely,

Long Thanh, Lars Oestergaard, Casper Jensen, Mads Carlsen and Rasmus Abildgaard
Aalborg University
Denmark

[email protected] → B123

Hi guys,

These are indeed interesting questions! Glad to hear that Aalborg U has a group interested in these issues. Real-time ray-tracing for complex scenes has become (or will soon become) feasible with the new processors such as Larrabee from Intel. The ray-tracing vs. rasterization debate has raged for a few years already, and probably won't be resolved anytime soon. There's a lot of industry dollars at stake as well, with nVidia and Intel trying to out-do each other. Here at Pixar we use a hybrid approach: rasterization of directly visible geometry, and on-demand ray tracing for shadows, reflections, ambient occlusion, etc.

If you don't mind, I'd actually prefer to answer your questions via e-mail. Then I'll have a bit more time to think about the answers and we don't have to schedule a time that works for us all. We can do this in English or Danish – whichever you prefer.

Venlig hilsen,

– Per

B123 → [email protected]

Original e-mail omitted. Included in the following reply.

[email protected] → B123

Hi again,

Thank you for the quick response! Your hybrid approach sounds very interesting.

E-mail is absolutely fine with us. We prefer keeping the questions in English, since that is the language we are using in our report.

Before you answer any questions, we have to ask you if we have your permission to print the answers in our report. This report will only be made available to our advisors and on the University intranet.

Yes, that’s fine.

If you are OK with this, please feel free to answer as many of the questions below as you want to:

Here are our questions:

1. What is your background in computer graphics?

I got a M.Sc. (civilingenior) at DTU in Lyngby. As part of my studies there, I took a CG class and did a project ("independent study") on ray tracing. Then I went to University of Washington to do my Ph.D. studies in computer science. Among other subjects, I took classes in general computer graphics, computational geometry, etc. I did my Ph.D. thesis on a finite-element global illumination method similar to the well-known radiosity method, but able to handle glossy reflections as well. I was very lucky to be mentored by two great advisors: David Salesin and Tony DeRose.

2. Could you elaborate on some of the rendering tasks you carry out at Pixar with rasterization and why this method is used?


The rasterization method we use is the Reyes algorithm which was developed at Pixar many years ago. The greatest advantages of that method are:

- The rasterization is independent of the shading (computing the colors at each point) which is very important since the shaders are so complex that they take most of the time.
- Due to the tile-based rendering, only a small part of the geometry needs to be stored in memory at any given time. (The fully tessellated geometry would often use more memory than what's available on the computer, so it's a good thing that it doesn't all have to be in memory at the same time.)
- Another big advantage (over ray tracing) is that it is a very fast method to get motion blur and depth-of-field (which is used in almost all frames of every movie) without having to run the shaders more.

3. In relation to this, what are the main issues/limitations of rasterization when creating 3D animations?

If pure (non-hybrid) rasterization is used, shadows have to be computed with shadow maps which can have resolution limitations. Reflections have to be computed with reflection maps which don't look right if the reflected object is very close to the reflecting object (or, even worse, if they are the same object, ie. self-interreflections). And ambient occlusion is very cumbersome to compute – it requires a large number of shadow maps from many different directions.

4. What are the advantages of ray tracing in relation to creating CGI films?

It's an easy way to get sharp shadows, soft shadows, accurate reflections, and ambient occlusion.

5. Could real-time ray tracing add any value, in terms of production time and costs, to your industry?

Yes, anything that speeds up rendering is valuable. But it is important to keep in mind that the real bottleneck is not the raw ray tracing speed, but evaluating the shaders at the points where the rays hit. Even if raw ray tracing (ie. the ray-object intersection tests) took zero time, the total rendering times for Pixar movies would only be reduced by less than 50%. Probably closer to 10-20%.

6. Given the current hardware research going on, do you believe that ray tracing will replace rasterization in the near future?

For some applications, and perhaps in games, but not in movie production. What people in the research community sometimes forget is that it is the *shading* that is the bottleneck in movie rendering. The shading is so expensive because computing the color at a single surface point typically requires dozens of texture map lookups, expensive procedural computations such as noise, dozens of light source evaluations, and so on. The best (fastest) rendering algorithm is the one that will generate the desired images with the least number of shader evaluations.

7. Since you use a hybrid rendering strategy, could you expand on what the benefits are of using this?

The hybrid rendering strategy gives us the best of both worlds: fast motion blur and depth-of-field from the Reyes rasterization algorithm, and accurate shadows, reflections, and ambient occlusion from ray tracing where needed.


8. Will hybrid rendering be the most likely outcome or is it an indication that we are going through a transitional phase?

That's the big debate. We believe that ray-tracing will not replace rasterization – there is no point in using rays for camera rays when rasterization can do the job better/faster. On the other hand: from a pure software engineering point of view, it is very tempting to do everything with brute-force ray tracing. It would certainly simplify our rendering software a lot.

9. Which outcome would probably be the most favorable for the 3D animation industry?

We'll take any hardware speedups we can get. We're very excited about multi-threaded hardware such as Larrabee where both the shading, rasterization, and ray tracing can be sped up by multi-threading. We're fortunate that the competition between e.g. Intel and nVidia pushes new generations of CPUs and GPUs that are suitable for our needs.

Once again, thank you very much for your time!

You are very welcome. I hope this helps. You can find elaboration on some of these points in my paper "Ray tracing for the Movie 'Cars'": www.seanet.com/~myandper/abstract/rt06.htm. There's a great book about the Reyes rendering algorithm called "Advanced RenderMan" by Apodaca and Gritz.

Good luck! (Feel free to send me a link to your project once it’s finished.)

– Per


Bibliography

[1] Tomas Akenine-Moller. The Home Page of Tomas Akenine-Moller. http://www.cs.lth.se/home/Tomas_Akenine_Moller/, 2009.

[2] Tomas Akenine-Moller, Eric Haines, and Naty Hoffman. Real-time Rendering. A. K. Peters, Ltd., 880 Worcester Street, Suite 230, Wellesley, MA 02482, 2008.

[3] Jim Blinn. Me and My (Fake) Shadow. IEEE Computer Graphics and Applications, 8, 1988.

[4] Samuel R. Buss. 3-D Computer Graphics: A Mathematical Introduction with OpenGL. Cambridge University Press, 2003.

[5] Per H. Christensen. Per H. Christensen. http://www.seanet.com/~myandper/per.htm, 2009.

[6] Per H. Christensen, Julian Fong, David M. Laur, and Dana Batali. Ray Tracing for the Movie 'Cars'. Proceedings of the IEEE Symposium on Interactive Ray Tracing 2006, September 2006.

[7] Hugh D'Andrade. Hollywood's Record Year Shows MPAA's Piracy Folly. http://www.eff.org/deeplinks/2008/03/hollywoods-record-year-shows-mpaas-piracy-folly, March 2008.

[8] Tim Dirks. Milestones in Film History: Greatest Visual and Special Effects and Computer-Generated Imagery (CGI). http://www.filmsite.org/visualeffects9.html, 2007.

[9] Dominic Filion and Rob McNaughton. Starcraft II Effects and Techniques. Advances in Real-Time Rendering in 3D Graphics and Games Course – SIGGRAPH 2008, 2008.

[10] Heiko Friedrich, Johannes Gunther, Andreas Dietrich, Michael Scherbaum, Hans-Peter Seidel, and Philipp Slusallek. Exploring the Use of Ray Tracing for Future Games. The Association for Computing Machinery, 2006.

[11] Andrew S. Glassner. An Introduction to Ray Tracing. Academic Press, 1989.

[12] Venkatraman Govindaraju, Peter Djeu, Karthikeyan Sankaralingam, Mary Vernon, and William R. Mark. Toward a multicore architecture for real-time ray-tracing. Annual IEEE/ACM International Symposium on Microarchitecture, 2008.

[13] Caustic Graphics. Caustic Graphics - Raytracing Dynamic Geometry. http://www.vimeo.com/4202946, 2009.

[14] Caustic Graphics. Caustic RT. http://www.caustic.com/caustic-rt_intro.php, 2009.


[15] Intel. Ray Tracing Goes Mainstream. techresearch.intel.com/UserFiles/en-us/Image/TS-docs/whitepapers/RayTracingGoesMainstream_061507.pdf, 2007.

[16] Intel. Intel Executive Biography - Gordon E. Moore. http://www.intel.com/pressroom/kits/bios/moore.htm, May 2009.

[17] Henrik Wann Jensen. Realistic Image Synthesis Using Photon Mapping. A. K. Peters, Ltd.

[18] Jingyi Yu, Jason Yang, and Leonard McMillan. Real-Time Reflection Mapping with Parallax. In Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, pages 133–138, Washington, DC, 2005. Massachusetts Institute of Technology, Computer Graphics Group.

[19] Steinar Kvale. InterView. Hans Reitzels Forlag, 1997.

[20] Matt Matthews. NPD: Behind the Numbers. http://www.gamasutra.com/view/feature/3906/npd_behind_the_numbers_december_.php, December 2008.

[21] Jeffrey J. McConnell. Computer Graphics: theory into practice. Jones and Bartlett Publishers, 40 Tall Pine Drive, Sudbury, MA 01776, 2005.

[22] Merriam-Webster. Electromagnetic wave spectrum. http://student.britannica.com/eb/art-70892/The-spectrum-of-electromagnetic-waves-ranges-from-low-frequency-radio, May 2009.

[23] MPAA. Entertainment Industry Market Statistics. http://mpaa.org/USEntertainmentIndustryMarketStats.pdf, 2007.

[24] Ken Perlin. Phong shading algorithm. http://www.mrl.nyu.edu/~perlin/courses/fall2005ugrad/phong.html, November 2005.

[25] Matt Pharr and Greg Humphreys. Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann, 2004.

[26] Pixar. Pixar corporate overview. http://www.pixar.com/companyinfo/about_us/overview.htm, 2009.

[27] Pixar. Pixar’s RenderMan. http://renderman.pixar.com/products/tools/renderman.html, 2009.

[28] Daniel Pohl. Ray Tracing In 3D Egoshooters. http://graphics.cs.uni-sb.de/~sidapohl/egoshooter/, 2009.

[29] Kenneth H. Rosen. Discrete Mathematics and Its Applications. McGraw Hill, 2007.

[30] Larry Seiler, Doug Carmean, Eric Sprangle, Tom Forsyth, Michael Abrash, Pradeep Dubey, Stephen Junkins, Adam Lake, Jeremy Sugerman, Robert Cavin, Roger Espasa, Ed Grochowski, Toni Juan, and Pat Hanrahan. Larrabee: A many-core x86 architecture for visual computing. ACM Trans. Graph. 27, 3, Article 18, August 2008.

[31] Peter Shirley. Fundamentals of Computer Graphics. A. K. Peters, Ltd., 2005.

[32] Peter Shirley and R. Keith Morley. Realistic Ray Tracing. A. K. Peters, Ltd., 2nd edition, 2003.


[33] Jon Stokes. SIMD Architectures. http://arstechnica.com/old/content/2000/03/simd.ars, March 2000.

[34] Jon Stokes. Understanding Moore’s Law. http://arstechnica.com/hardware/news/2008/09/moore.ars, September 27 2008.

[35] Allen B. Tucker. Computer Science Handbook: second edition. Chapman and Hall/CRC, 2004.

[36] NC State University. Alumni - Department of Electrical and Computer Engineering. http://www.ece.ncsu.edu/alumni/jtwhitte, May 2009.

[37] Richard Wages, Stefan M. Grunvogel, and Benno Grutzmacher. How Realistic Is Realism? Considerations on the Aesthetics of Computer Games. 2004.

[38] Ingo Wald. Realtime Ray Tracing and Interactive Global Illumination. PhD thesis, Saarland University, 2004.

[39] Ingo Wald and Philipp Slusallek. EUROGRAPHICS '01 STAR - State of the Art in Interactive Ray Tracing. 2001.
