GLOBAL ILLUMINATION: PHOTONS EVERYWHERE
Dragan [email protected]
PROBLEMS OF COMPUTER GRAPHICS
Generate digital imagery so that it looks real. Only two problems:
1. materials
   - BSDF
     - BRDF (diffuse, glossy, specular reflections)
     - BTDF (refraction & transmission)
     - BSSRDF (subsurface scattering)
   - emission
2. camera
   - resolution + FOV
   - lens flare
   - aberrations
   - bokeh / DOF
   - HDR & tonemapping
   - bloom & glow
   - motion blur
   - anti-aliasing
   - film grain
   - ...
materials: describing how photons interact with surfaces and volumes (aka object appearance)
camera: describing how photons are gathered and displayed
Materials emit photons, scatter them and absorb them. That's what's happening in the scene. The final output of all these effects is captured by the camera system. And that's it!
GLOBAL ILLUMINATION
- GI is a consequence of how photons are scattered around the scene
- GI is an effect, i.e. it doesn't exist per se and is dependent on the scene
- in CG terminology, GI is a set of algorithms that compute (ir)radiance for any given point in space, in the spherical domain
- that computed irradiance is then combined with the material properties at that particular point in space for the final calculation of the radiance
- radiance is used as the input to the camera system
- global illumination sub-effects: shadows, ambient occlusion, color bleed/indirect illumination, caustics, volumetric lighting
GI doesn't actually exist per se. For example, camera effects exist independently of the actual scene and light: they take input, modify it, and output. Materials also exist independently of the actual setup: they take input, modify it, and output new values. GI, on the other hand, is very dependent on the scene itself. GI is a consequence of the scene (the scene's geometry and the materials applied to it). Therefore, GI is an effect. In real-time graphics, we simulate effects.

Path tracing is not a simulation; it is an evaluation of the processes that happen in the scene (emit, BRDF modify, bounce). But in real-time graphics, GI is one of the consequences of the light transport. In reality, GI is the process itself, but we want to simplify it and simulate it, and therefore we've created different effects that can be separated and evaluated independently (shadows, AO, indirect illumination...), and they are all in fact parts of an overall bigger thing called global illumination.

GI is a set of algorithms that calculate how much and what kind of light arrives at a certain point in the scene. You can think of it as irradiance calculation. It combines effects such as direct lighting, ambient occlusion, indirect illumination, shadowing, caustics... GI algorithms are approaches for effectively and efficiently computing irradiance in a spherical domain, for every point in space. GI algorithms can include all the mentioned effects (shadows, AO, indirect illumination...), or only some of them, but they are used to efficiently compute irradiance values that will be used, together with the material properties, for the evaluation of the radiance. That radiance is the final output of the rendering, just before being processed by the camera.
shadows: check if the surface is lit directly
ambient occlusion: check how occluded the surface is and how hard it is for the light to reach that point in space
color bleed / indirect illumination: is the reflected light strong enough that even diffuse surfaces bleed their color onto the surroundings (non-emitters behave like light sources)
caustics: is enough of the light reflected/refracted to create interesting bright patterns
volumetric lighting: how does participating media interact with the light
global illumination
- describes how light is scattered around the scene, how light is transported through the scene
- what interesting visual effects start appearing because of such light transport
[Example image captions: sh (shadows); ao; sh + ind. illum.; sh; sh + vol. + ind. illum.; sh + caustics + ind. illum. + ao; sh + ind. illum. + ao]
Examples. In CG, we always simulate some effects, while in reality they all happen simultaneously. We are just trying to mimic nature the best we can.
FORMULATION OF THE PROBLEM
Analytically calculate or approximate the irradiance over the sphere, for a certain point in space, in a converged state.
- how much each point [A] contributes to every other point [B] in the scene
- how much [A->B] influences point [A]
- how much does that influence [B] back
- ...
recursive, but it can converge and reach a certain equilibrium:
[A->B]
[[A->B]->A]
[[[A->B]->A]->B]
...
[all light bounces]
What is GI trying to solve? The infinite light bounces that happen through the scene eventually converge to some stable state. We try to calculate, or at least approximate, the values of irradiance or radiance in such a converged state.
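The convergence of the recursive [A->B]->[A]->[B] exchange above can be sketched numerically. This is a toy example with two diffuse patches; the values (emission, reflectance, form factor) are made up for illustration and are not from any real renderer:

```python
# Toy "scene": two diffuse patches, A and B, facing each other.
# Only patch A emits; each bounce, a patch re-emits a fraction of
# the light it receives from the other patch.
emission = [1.0, 0.0]        # patch A emits, patch B does not
reflectance = [0.5, 0.8]     # fraction of incident light re-emitted
form = 0.3                   # fraction of one patch's light reaching the other

l_a, l_b = emission
for bounce in range(100):
    # each iteration adds one more bounce of indirect light:
    # A->B, then B->A, then A->B again, ...
    l_a, l_b = (emission[0] + reflectance[0] * form * l_b,
                emission[1] + reflectance[1] * form * l_a)

# the series converges because reflectance * form < 1 (energy is lost
# every bounce), settling into the stable equilibrium described above
print(l_a, l_b)
```

Because energy is lost at every bounce, the infinite recursion is a convergent geometric series, which is exactly why a "converged state" exists for GI algorithms to approximate.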
ALGORITHMS
- pathtracing
- radiosity
- photon mapping
- RSM (reflective shadow maps)
- instant radiosity
- irradiance volumes
- LPV (light propagation volumes)
- deferred radiance transfer volumes
- SVOGI (sparse voxel octree GI, voxel cone tracing)
- RRF (radiance regression function)
- SSDO (screen-space directional occlusion)
- deep G-buffer
- surfels
PRT and SH
There are even more techniques and algorithms than those mentioned here, but these are probably the major ones. There are also screen-space techniques like SSDO (screen-space directional occlusion) and Deep G-buffer, and some offline techniques that use geometric approximation, like surfels.
For most of the techniques, we will just briefly cover the algorithm. This talk is not intended to give an in-depth explanation of the techniques, but rather to provide an insight into what different approaches are used, how people were thinking about the problem, and what strategies were then developed.
SSDO: https://people.mpi-inf.mpg.de/~ritschel/Papers/SSDO.pdf
Deep G-buffer: http://graphics.cs.williams.edu/papers/DeepGBuffer14/
Surfels: http://graphics.pixar.com/library/PointBasedGlobalIlluminationForMovieProduction/
PATHTRACING
- sample the hemisphere over the point with Monte Carlo
- for every sample, do the same thing recursively at the hit point
for each surface/light-path interaction, we evaluate the incoming light against the BSDF of the material
- straightforward implementation of light bounces
- very computationally expensive, not real-time
- very good results, ground truth
- all effects
The most straightforward simulation of photon interaction with the scene. For each direction, trace the light path the photon would take. Sample and gather over the hemisphere and do that recursively.
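The hemisphere sampling step can be sketched in isolation. This is a minimal Monte Carlo estimator (my own illustrative names, `sample_hemisphere` and `estimate_irradiance`), assuming a constant incoming radiance so the exact answer is known: the irradiance integral of L·cosθ over the hemisphere equals π·L.

```python
import math, random

def sample_hemisphere():
    """Uniformly sample a direction on the unit hemisphere (z >= 0)."""
    z = random.random()                 # cos(theta), uniform in [0, 1)
    phi = 2.0 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_irradiance(radiance, n_samples=200_000):
    """Monte Carlo estimate of E = integral of L(w) * cos(theta) dw
    over the hemisphere. With constant L the exact value is pi * L."""
    total = 0.0
    pdf = 1.0 / (2.0 * math.pi)         # uniform hemisphere pdf
    for _ in range(n_samples):
        wx, wy, wz = sample_hemisphere()
        total += radiance * wz / pdf    # wz is cos(theta)
    return total / n_samples

random.seed(1)
print(estimate_irradiance(1.0))  # should be close to math.pi
```

In a real path tracer, the constant radiance is replaced by a recursive call: at each sampled direction, trace a ray, evaluate the BSDF at the hit point, and repeat.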
RADIOSITY
- for each surface element (patch), calculate how well it can see all other patches (view factor)
- progressively recalculate the illumination of a certain patch from all other patches
- start with direct illumination injected and iterate until convergence (or good enough)
- not real-time
- only diffuse reflections
- can be precomputed and it is viewpoint-independent
Don't sample, but evaluate entire surface-to-surface view properties. Progressively iterate, where each step recalculates the illumination of a patch coming from all the other patches. Only diffuse surfaces (usually). Used for architectural visualization. Usually not real-time.
Patch == smooth, gradually changing piece of surface
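The view-factor computation mentioned above can be sketched with the classic point-to-point approximation for small patches: F ≈ cosθ1·cosθ2·A2 / (π·r²). The function name is mine, and visibility is ignored; a real solver would also ray-cast between the patches:

```python
import math

def point_to_patch_form_factor(p1, n1, p2, n2, area2):
    """Approximate form factor from a differential patch at p1 (normal
    n1) to a small patch at p2 (normal n2, area area2):
        F ~= cos(theta1) * cos(theta2) * A2 / (pi * r^2)
    Visibility between the patches is NOT tested in this sketch."""
    d = [b - a for a, b in zip(p1, p2)]
    r2 = sum(c * c for c in d)
    r = math.sqrt(r2)
    w = [c / r for c in d]                          # unit direction p1 -> p2
    cos1 = max(0.0, sum(a * b for a, b in zip(n1, w)))
    cos2 = max(0.0, -sum(a * b for a, b in zip(n2, w)))
    return cos1 * cos2 * area2 / (math.pi * r2)

# two unit-area patches 2 units apart, directly facing each other
f = point_to_patch_form_factor((0, 0, 0), (0, 0, 1),
                               (0, 0, 2), (0, 0, -1), 1.0)
print(f)  # 1 * 1 * 1 / (pi * 4)
```

Once the form factors between all patch pairs are known, the progressive iteration from the slide repeatedly redistributes each patch's radiosity to every other patch until the values stop changing.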
REFLECTIVE SHADOW MAPS (RSM)
- generate the RSM from the light's perspective: depth, position, normal, flux
- sample the RSM to approximate lighting
the idea is used in other more popular algorithms
Paper: http://www.vis.uni-stuttgart.de/~dachsbcn/download/rsm.pdf
RSM and instant radiosity: http://www.bpeers.com/blog/?itemid=517
INSTANT RADIOSITY
- ray trace from the light source into the scene
- for each hit, generate a VPL and render the scene with it
- gather the results
- mix between sampling and radiosity
- not real-time
Instead of sampling the illumination arriving from other points like in pathtracing, spread the light from a surface onto other surfaces. For each raytracing hit, we generate a light and spread it into the scene (render the scene with that light). We then sum the contribution of each hit to get the final result, which approximates how the light is spread/distributed throughout the scene.
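The gathering step can be sketched as a sum over virtual point lights. This is my own minimal version (`vpl_shade` is an illustrative name): visibility tests are omitted, and the clamping real implementations apply against the 1/r² singularity is reduced to a simple epsilon:

```python
import math

def vpl_shade(point, normal, vpls):
    """Sum the diffuse contribution of virtual point lights (VPLs)
    at a shading point. Each VPL is (position, normal, flux).
    Sketch only: no shadow rays, crude singularity clamp."""
    total = 0.0
    for vpos, vnormal, flux in vpls:
        d = [b - a for a, b in zip(point, vpos)]
        r2 = max(1e-4, sum(c * c for c in d))    # avoid 1/r^2 blow-up
        r = math.sqrt(r2)
        w = [c / r for c in d]                   # shading point -> VPL
        cos_p = max(0.0, sum(a * b for a, b in zip(normal, w)))
        cos_v = max(0.0, -sum(a * b for a, b in zip(vnormal, w)))
        total += flux * cos_p * cos_v / r2
    return total

# one VPL directly above the shading point, both facing each other
vpls = [((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), 1.0)]
lit = vpl_shade((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), vpls)
print(lit)
```

On a GPU this sum is exactly what a deferred renderer evaluates per pixel, one pass (or one batched pass) per VPL, which is why instant radiosity pairs so naturally with deferred shading in the v2 variant below.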
Paper: http://www.cs.cornell.edu/courses/cs6630/2012sp/slides/Boyadzhiev-Matzen-InstantRadiosity.pdf
Instant radiosity: http://www.bpeers.com/blog/?itemid=517
INSTANT RADIOSITY v2
- don't raytrace, but instead use RSM
- use the RSM to approximate where to place VPLs
- deferred render with many lights
Instead of raytracing into the scene, use the RSM to generate VPLs. We can utilize the GPU to find all the VPLs, and then use a deferred pipeline to efficiently render the many lights that will eventually approximate the indirect illumination.
PHOTON MAPPING
- shoot photons from the light source into the scene
- gather nearby photons to calculate approximate radiance
- good results
- good for caustics
- not real-time
Somewhat the inverse of pathtracing. Shoot photons from the light source and bounce them around the scene. When sampling the hemisphere for a certain hit, use the local neighbors of the sample to get a better approximation of the lighting coming from that particular direction.
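The "gather nearby photons" step is a density estimate: sum the power of the k nearest stored photons and divide by the area of the disc that contains them. A brute-force 2D toy sketch (no kd-tree, which a real photon map would use; the function name is mine):

```python
import math

def radiance_estimate(photons, x, k=3):
    """Photon-map style density estimate at point x: summed power of
    the k nearest photons divided by the disc area pi * r^2, where r
    is the distance to the farthest of those k photons.
    Photons are (position, power) pairs in 2D; brute-force search."""
    by_dist = sorted(photons,
                     key=lambda ph: (ph[0][0] - x[0]) ** 2 + (ph[0][1] - x[1]) ** 2)
    nearest = by_dist[:k]
    r2 = max((p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2 for p in nearest)
    power = sum(p[1] for p in nearest)
    return power / (math.pi * r2)

# three photons clustered near the origin, one far away
photons = [((0.0, 0.1), 1.0), ((0.1, 0.0), 1.0),
           ((0.0, -0.2), 1.0), ((5.0, 5.0), 1.0)]
estimate = radiance_estimate(photons, (0.0, 0.0))
print(estimate)
```

The estimate is high where photons cluster, which is exactly why the technique captures caustics well: refracted photons pile up in small regions and the density estimate lights them brightly.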
SPHERICAL HARMONICS
- spherical Fourier decomposition
- Legendre basis functions that can be added together to represent a function in the spherical domain
Quick and dirty intro to spherical harmonics: what a cosine wave is to Fourier analysis, that's what the Legendre polynomials are to spherical harmonics. We use basis functions that get multiplied with calculated coefficients to get an approximation of the spherical function. More coefficients, more accuracy in our approximation. If we need just low-frequency data from the spherical function, a few coefficients are enough.
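The projection onto coefficients can be sketched with the first two SH bands (the constants are the standard real-SH normalization factors; the function names and the Monte Carlo integration are my own illustrative choices):

```python
import math, random

def sh_basis(d):
    """First four real spherical harmonics (bands 0 and 1),
    evaluated at unit direction d = (x, y, z)."""
    x, y, z = d
    return [0.282095,            # Y_0^0
            0.488603 * y,        # Y_1^-1
            0.488603 * z,        # Y_1^0
            0.488603 * x]        # Y_1^1

def project(f, n=100_000):
    """Monte Carlo projection of a spherical function f(direction)
    onto the 4 basis functions: c_i = integral of f(w) * Y_i(w) dw."""
    random.seed(0)
    coeffs = [0.0] * 4
    for _ in range(n):
        # uniform random direction on the sphere
        z = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        d = (r * math.cos(phi), r * math.sin(phi), z)
        for i, y in enumerate(sh_basis(d)):
            coeffs[i] += f(d) * y
    area = 4.0 * math.pi
    return [c * area / n for c in coeffs]

# project f(d) = max(0, z), a "light from above" function; the 4
# coefficients keep only its low frequencies (coeffs 1 and 3 ~ 0)
coeffs = project(lambda d: max(0.0, d[2]))
print(coeffs)
```

Reconstructing the function is just the dot product of the stored coefficients with `sh_basis(d)`, which is why SH is so popular for compact irradiance storage in irradiance volumes and PRT.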