
SIX TUTS ON LIGHT AND SHADE by Florian Wild


SUNNY AFTERNOON - part 1


This tutorial series is intended to be used with mental ray for Autodesk Maya 8.5.

“Happiness is like the sun: There must be a little shade if man is to be comfortable.” - Let's start our exercise with this little quote by Otto Ludwig.

Welcome to the first part of this six-part tutorial series, discussing possibly the most challenging kind of 3D environment: interiors. mental ray (for Maya) users typically get cold feet and sweating fingers when it comes to this “close combat”; the royal league of environment lighting. That fear is unfounded, though, as all you need for the battle is a simple field manual (this tutorial) and just a little bit of patience...

So what is it all about? Let’s have a look at our object for this demonstration (Fig. 1) ...

As you can see, we have a closed room; you can tell by the porthole and the characteristic door that it is a room inside a ship. Let's imagine that it's a tween deck of the ferry “MS No-Frills”, used as a lounge, and the staircase leads to its upper deck.

From a lighter's point of view, we can estimate from this analysis that there is light coming in from a) the opening in the ceiling where the staircase leads outside, and b) the porthole and the window beside it. That's not much, and if you ever took a photograph under such conditions you will know that, even with nice equipment, you would have a hard time catching the right moment (the “magic hour”) to illustrate the beauty of this particular atmosphere. (Atmosphere is also defined, besides by the lighting condition itself, by things like the point in time, the architecture, the weather, and occasionally the vegetation.)

So, for our first tutorial part, we will choose the following scenario: our ship, the MS No-Frills, is anchored somewhere along the shore of Tunisia (North Africa) in the Mediterranean Sea; it's summer, the time is around early afternoon, and the weather is nice and clear. That's all we need to know at this stage to get us started...

Fig. 1


If you open up the scene, you will see that there's no proper point of view defined yet. Feel free to either choose your own perspective or use one of the bookmarks I have set in the default perspective camera (Fig. 2). By clicking on one of the bookmarks, all relevant camera attributes (position, orientation, focal length, etc.) are changed to the condition stored in the bookmark. This greatly helps when trying out different views without committing oneself, and without creating an unnecessary mess of different cameras.

Before we start lighting and rendering the scene, we should have a little introduction to the actual shading of the scene and to a few technical aspects, such as color spaces. If you find this too boring then you might want to skip the next two paragraphs, as they are not essential; they do, however, explain how to achieve the result at the end of this tutorial.

A Note on Shading.

All the shaders you see are built on the new mia_material that ships with Maya 8.5. This shader was intended as a monolithic (from the Greek words “mono”, meaning single, and “lithos”, meaning stone) approach for architectural purposes, and it can practically be used to simulate the majority of the common materials that we see every day. Unlike the regular Maya shaders, and most of the custom mental ray shaders, it implements physical accuracy, greatly optimized glossy reflections, transparency and translucency, built-in ambient occlusion for detail enhancement of final gather solutions, automatic shadow and photon shading, and many optimizations and performance enhancers; most importantly, it's really easy to use. And it's all in one - thus “monolithic”. I therefore decided to use it in our tutorial...

Fig. 2


A Note on Color Space.

As you may already know, virtually all of the photographs and pictures that you look at on your computer are in the sRGB color space. The reason is that a color value of RGB 200, 200, 200 is not perceived as twice as bright as a color with RGB 100, 100, 100, as you might expect. It is of course mathematically twice the value, but perceptually it is not; as opposed to plain mathematics (like 2 x 100 = 200), our eyes do not work in such a linear way. And here's where sRGB comes in... This color space ‘maps' the values so that they appear perceptually linear. This is why most photographs are visually pleasing and look natural, which they would not in a truly mathematically linear color space. However, almost every renderer spits out these truly linear images (because this simply is how computers work - mathematically linearly), unless we tell the renderer to do otherwise. Most people are not aware of this, and instead of rendering in the right color space they unnecessarily add lights and ambient components to unwittingly compensate for this error. In Fig. 3 and Fig. 4, you can see two photographic examples illustrating the difference between a true linear (left) and an sRGB color space (right). In Fig. 5, you can see the same in a CG rendering; you'll notice that the true linear one looks a lot more “CG-ish” and unnatural. Even if you brightened it up and added/reduced contrast, you still couldn't compensate for the fact that it's in the wrong color space, especially if you carelessly used textures from an sRGB reference (i.e. from almost any digital picture you can find), which adds even more to the whole mess. Getting this right is essential for creating visually pleasing and natural-looking computer graphics. If you have followed me up to here, and you think you understand the need for a correct color space, then go take a break and get yourself some coffee or delicious green tea and enjoy life for a while - you've earned it! This is all tricky yet fundamental knowledge. How this theory is practically applied in mental ray will be shown later on...
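
As a rough numerical illustration (the exact sRGB transfer function is actually piecewise, but a plain 2.2 power curve is close enough for our purposes), the encoding applied is:

    encoded = linear ^ (1/2.2)

So a mathematically linear value of 0.5 is stored as 0.5^(1/2.2) ≈ 0.73 - brighter than the naive 0.5, which matches how our eyes compress light.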

Fig. 3

Fig. 4

Fig. 5


So, let's get started with lighting the scene... Maya 8.5 introduces, along with the mia package, a handy physical sun and sky system. This makes it easy to set up a natural-looking environment, and we can then focus more on the aesthetic part of the lighting process, instead of tweaking odd-looking colors. The sky system is created from the render globals' environment tab (Fig. 6).

By clicking on the button, you practically create:

a) a directional light which acts as the sun’s direction;

b) the corresponding light shader mia_physicalsun;

c) the mia_physicalsky, an environment shader that connects to the renderable camera's mental ray environment (Fig. 7);

d) a tone mapping lens shader called mia_exposure_simple, which also connects to the camera's mental ray lens slot.

It's also worth mentioning here that this button also turns Final Gathering ON.
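
If you prefer to verify these connections from the MEL command line rather than the hypershade, here is a quick sketch (the node and attribute names are the Maya 8.5 defaults as I remember them; check your own scene, as yours may differ):

    // where the environment and lens shaders ended up on the camera
    listConnections perspShape.miEnvironmentShader;  // -> mia_physicalsky1
    listConnections perspShape.miLensShader;         // -> mia_exposure_simple1
    // the sun's light shader sits on the directional light's shape
    listConnections sunDirectionShape.miLightShader; // -> mia_physicalsun1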

Fig. 6

Fig. 7


Now that we have a default sun and sky system set up, we are almost ready to render. Before we do the first test render, let's make sure we are in the right color space, as mentioned. By default, we are rendering in true linear space (for an explanation please refer to the previous notes on color space), which is - for our needs right now - not correct. The lens shader we created, however, brings us into a color space which closely approximates sRGB by applying a 2.2 gamma curve (see its Gamma attribute) globally to the whole rendered image as we calculate it. Generally, this is a good thing and is desirable. But if we apply a gamma correction in this way, then we would have to “un-gamma” every single texture file in our scene. This is due to the fact that the textures already have the “right” gamma (this is usually true for any 8-bit or 16-bit image file), and adding a gamma correction on top of that would double the gamma and could potentially wash out the textures' colors. What a bummer!

So, we either have to “un-gamma” every texture file (boring and tedious), or, instead of the lens shader's gamma correction, we can use mental ray's internal gamma correction (still boring, but less tedious).

As you can see from Fig. 8, we set the Gamma value in the Render Globals' primary framebuffer menu to the desired value, which is - simply because mental ray works this way - 1 divided by the value (2.2 for approximating sRGB in our case), which equals 0.455. At the same time, we also need to remove the gamma correction of our lens shader, so we set its Gamma attribute to 1.0 (linear, i.e. no correction; you can select these shaders from the hypershade's Utilities tab). Thus we completely hand over the gamma correction to mental ray's internal mechanism, which automatically applies the right “un-gamma” value to every one of our textures. There are no more worries for our color textures now. If we use “value” textures, however (like bump maps, displacement maps, or anywhere a texture feeds a value rather than an actual color), we have to disable this mechanism for the particular “value” texture by switching a gammaCorrect node in front of it, with the desired gamma compensation (2.2 in our case) filled into the gammaCorrect's Gamma attribute (note: this attribute does not specify the actual color^gamma function; it rather indicates the desired compensation, i.e. the inverse, or reciprocal, of the gamma function - no one would ever tell you about that, but now you know better). This is long-winded theory, but now we're ready to go!
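
For reference, the same handover can be scripted; a minimal MEL sketch (the mia_exposure_simple1 node name is whatever your scene created, and the file texture is a placeholder):

    // hand gamma correction over to mental ray's framebuffer
    setAttr miDefaultFramebuffer.gamma 0.455;   // = 1/2.2
    setAttr mia_exposure_simple1.gamma 1.0;     // lens shader: no correction
    // pre-compensate a "value" texture (e.g. a bump map) with a gammaCorrect node
    string $g = `createNode gammaCorrect`;
    setAttr ($g + ".gammaX") 2.2;  // Maya's gammaCorrect computes value^(1/gamma),
    setAttr ($g + ".gammaY") 2.2;  // so 2.2 here applies the compensating 1/2.2 curve
    setAttr ($g + ".gammaZ") 2.2;
    // connectAttr "bumpFile.outColor" ($g + ".value");  // then feed $g's outValue onward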

I tweaked the Final Gathering settings (Fig. 9) so that we will get a relatively fast-converging, yet meaningful, result. I also turned down the mia_physicalsun's Samples to 2.

Fig. 9

Fig. 8


It's kind of dark and has a few errors (Fig. 10), mainly because of insufficient ray tracing settings.

Let's now increase the general ray depths (Fig. 11) and the Final Gathering ray depths (Fig. 12). We're also turning the Secondary Diffuse Bounces on. However, the Secondary Bounces button in the Render Globals only sets their bounce depth to 1; we want it to bounce twice, so we're selecting the actual node where all the mental ray settings are stored, which is called “miDefaultOptions”.

Fig. 10

Fig. 11

Fig. 12


You can do this by typing “miDef*” in the input line, with Select by Name on (the asterisk is a wildcard for lazy people like me; see Fig. 13).

Once we select the miDefaultOptions, all the more or less hidden mental ray settings are exposed to the attribute editor. There's also some stuff in the mentalrayGlobals node, but we're focusing on the Final Gather tab in the miDefaultOptions right now. Let's set the FG Diffuse Bounces attribute to 2 (Fig. 14). These ray depth settings should suffice to get the result at the end of this tutorial.
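
The same tweak in MEL, if you prefer typing to clicking (the attribute name below is how I recall this setting being exposed on miDefaultOptions in Maya 8.5 - confirm it in the Attribute Editor):

    select miDefaultOptions;                             // or: select miDef*
    setAttr miDefaultOptions.finalGatherTraceDiffuse 2;  // FG Diffuse Bounces = 2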

Let's re-render (Fig. 15). It is still pretty dark, but you can tell that the indirect light contribution is sufficient (don't worry about detailed shadowing, we'll get to that later on), so we need to actually raise the exposure level of our piece, somehow.

Fig. 13

Fig. 14

Fig. 15


Remember, we're still on the very basic default settings for everything. One setting used to tweak the exposure is the Gain attribute in the mia_exposure_simple, which is connected as a lens shader to our camera. Let's increase the Gain value to 0.5 (Fig. 16).
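
Or, as a one-liner (assuming the default node name mia_exposure_simple1):

    setAttr mia_exposure_simple1.gain 0.5;  // raise overall exposure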

That's much better, and gives a more natural feeling (Fig. 17).

Fig. 16

Fig. 17


Now we can start to actually make decisions on the lighting and aesthetic accentuations. For this part, please don't feel constrained to the settings and colors that I chose - feel free to follow your own ideas! I'm rotating the sunDirection to X -70, Y 175, Z 0 to accentuate certain elements with direct sunlight, and I'm setting the attributes of the mia_physicalsky to the values you can see in Fig. 18. I increased the Haze value to 0.5 (note that this attribute takes values up to 15, so 0.5 is rather low). Then I set the Red/Blue Shift to 0.1, which basically means a white-balance correction towards reddish (towards blueish would be a negative value, like -0.1). I also raised the Saturation attribute to 2.0, which is its maximum value. I then made slight adjustments to the horizon, which do not have much effect on the global look, but I experimented with what we could see through the porthole and the window.
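
If you'd rather script these sky tweaks, a sketch along these lines should work (the attribute names follow the underscore-free convention I remember the mia shaders using in Maya, e.g. redblueshift; double-check them on your own mia_physicalsky node):

    setAttr mia_physicalsky1.haze 0.5;          // slight haze
    setAttr mia_physicalsky1.redblueshift 0.1;  // white balance towards reddish
    setAttr mia_physicalsky1.saturation 2.0;    // maximum saturation
    // horizon tweaks are subtle; experiment for the porthole view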

Fig. 18


The last thing I changed was the Ground color. I gave it a greenish tint because I thought this gave it a more lagoon-like feeling, and I think it gives the whole piece a more interesting touch (Fig. 19). From my own point of view, this is a good base for what we intended to accomplish with the early afternoon in the Mediterranean Sea scenario.

If we're satisfied with the general look, we can then go about setting up the scene for a final render. Firstly, let's increase the Final Gathering quality, because we can reuse the Final Gathering solution later on. As you can see from Fig. 20, I raised the Accuracy to 64, but more importantly, and especially for the shadow details, the Point Density is now at 2.0. With a denser Final Gathering solution we can also raise the Point Interpolation without losing too much shadowing contrast. I also set the Rebuild setting to Off, because the lighting condition is not changing from now on and we can therefore re-use existing Final Gather points.

Let's have a look (Fig. 21). As you can see, there is still a lack of detail in the shadowed areas, especially in the door region. We can easily get around this with the new mia_materials, which implement a special Ambient Occlusion mode. You only need to check on Ambient Occlusion in the shaders, as everything else is already set up fairly well by default (all I did was set the Distance to a reasonable value and darkened the Dark color a little).

Fig. 19

Fig. 20

Fig. 21


The main trick is the Details button in the mia_material (leaving the Ambient at full black). By turning on the Details mode, the Ambient Occlusion only darkens the indirect illumination in problem areas, avoiding the traditional global and unpleasant Ambient Occlusion look. See Fig. 22 with the enhanced details.

Note: to adjust the shaders all at once, select all mia_materials from the hypershade, and set the Ao_on attribute in the attribute spread sheet to 1 (Fig. 23) (the attribute spread sheet can be found under Window > General Editors > Attribute Spread Sheet). Also note that switching on the Ambient Occlusion in the shader scraps the Final Gathering solution; it will be recalculated from scratch. If you find the Final Gathering taking too long, turn the Point Density down to 1.0 or 0.5, as this still gives you nice results but the lighting details will suffer.
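
The attribute spread sheet works fine, but a two-line MEL loop does the same job (assuming the attribute's long name is ao_on, matching the spread sheet's Ao_on column):

    string $mats[] = `ls -type mia_material`;  // every mia_material in the scene
    for ($m in $mats) { setAttr ($m + ".ao_on") 1; }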

Fig. 22

Fig. 23


Now let's increase the general sampling quality (Fig. 24). The sample level is now at Min 0 and Max 2, with contrast at 0.05 and the Filter set to Mitchell for a sharp image.

Last but not least, if you are having problems with artifacts caused by the glossy reflections, raise the mia_material's Reflection Gloss Samples (Refl_gloss_samples) up to 8 for superior quality. You can do this with the attribute spread sheet as well.
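
Again scriptable with the same pattern as before (assuming refl_gloss_samples as the long name):

    string $mats[] = `ls -type mia_material`;
    for ($m in $mats) { setAttr ($m + ".refl_gloss_samples") 8; }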

For the final render, I chose to render to a 32-bit floating point framebuffer, with a square 1024px resolution. This can be set in the Render Globals (Fig. 25).

If I want to have the 32-bit framebuffer right out of the GUI (without batch rendering), I need to turn the Preview Convert Tiles option On and the Preview Tonemap Tiles option Off, in the Preview tab of the Render Globals (Fig. 26).

Fig. 24

Fig. 25

Fig. 26


Important: I also need to choose an appropriate image format. OpenEXR is capable of floating point formats and it's widely used nowadays, so let's go for that (Fig. 27).

When rendering to the 32-bit image, you will get some funky colors in your render view, but the resulting image will be alright - don't worry. After rendering, you can find it in your project's images\tmp folder. Fig. 28 shows my final result: a pretty good base for the post production work.

Fig. 27

Fig. 28


Since we rendered to a true 32-bit image, we have great freedom of possibilities. See my final interpretation, where there is no additional painting, only color enhancement. Try it for yourself!

I hope you have enjoyed following this tutorial as much as I enjoyed writing it!


TWILIGHT - part 2


Welcome back aboard to the second part of the environment lighting series for Autodesk Maya 8.5. Again, we will be using mental ray for Maya for this challenging interior illumination, so all you need is to get your CPU to operating temperature and the basic Maya scene of our ship's interior.

Before we can start, we need to properly set the project (Fig. 1). If you're not familiar with the use of projects, you might want to know that (one of) the main reasons for doing this is the relative texture paths Maya uses. These relative paths ensure that we can port the scene from one file location (e.g. my computer) to another (your computer) without any hassle, as opposed to absolute paths, which would always point to a static location that might differ from system to system.

So we're back aboard the MS No-Frills, still anchored somewhere in the Mediterranean Sea (Fig. 2). For this second tutorial, we will set our goals for accomplishing a twilight atmosphere, which would usually occur at either dusk or dawn.

Before we actually look at the scene, let's take a few moments to think about this very special situation (you might want to skip this paragraph, or come back to it later, if you want to go straight to the execution). Twilight, from a technical point of view, is the time (usually around half an hour) before sunrise or after sunset. In this condition the sun itself is not visible; the sun's light is, however, scattered towards the observer in the high layers of the atmosphere, either by the air itself (Rayleigh scattering) or by aerosols. This scattering effect causes the beautiful and varied colors that we enjoy every dusk or dawn. From an artistic point of view, twilight may happen on a variety of occasions, for example in stormy weather, or when natural and artificial light sources meet - typically whenever two (thus “twi-”) light sources or light conditions compete for predominance (imagine two wrestlers intensely fighting on the floor, where it's absolutely impossible to tell who's going to win the fight). Twilight always has this dramatic sense to it, and often the dramatic colors as well. In case of a storm, they might even range from greenish to deep blue. Usually, in the case of dusk and dawn, colors range from blue to purple, and from yellow to orange and red. The crux is that these colors are mostly equally dominant (and therefore leave us with great artistic and interpretational freedom) - as opposed to any other lighting condition, where there is usually one light source which is predominant. With this in mind, we are now ready to simulate the very particular case of twilight.

We will use the same base scene as used for part 1 of this tutorial (the sunny afternoon), so all shaders and textures are ready to rumble. All surface shaders are made from the mia_material that ships with Maya 8.5 (you might want to read back over the “note on shading” in part 1 - sunny afternoon - which explains its basic functionality).

Fig. 2

Fig. 1


Again, we are using the newly introduced physical sun and sky system, which can easily be created from the render globals (Fig. 3). This button saves us time setting up all the nodes and connections to make the system work properly (thus it also turns final gathering ON). It basically consists of three things:

Fig. 3


The sun, whose direction we control using the directional light (called sunDirection, by default) with its light shader mia_physicalsun; the sky, which consists of an environment shader (mia_physicalsky) connected to the camera; and a simple, yet effective, so-called tonemapper (mia_exposure_simple), used as a lens shader on the camera (Fig. 4).

Fig. 4


Before we start rendering, let's first think about a reasonable sun direction that would fit our needs for twilight. It is very tempting to actually use an angle that leaves the sun below the horizon line; however, this would yield a diffuse, not very dramatic lighting. You might want to experiment with this a little, but I have decided to have a more visible indication of where the sun actually is. I rotated the sun to X -12.0 Y 267.0 Z 0.0; this makes the direct sunlight shine through the back windows, still providing a very flat angle (Fig. 5).

There's still one important point that we should consider before pushing the render button: the color space. As already explained in the “note on color space” in the first tutorial (sunny afternoon), we should make sure we work in a correct space, which is sRGB - or, in our case, a 2.2 gamma curve closely approximating sRGB.

Fig. 5


The mia_exposure_simple already puts us into this space by default (the Gamma attribute defaults to 2.2), but by doing it this way, we double the gamma on our file textures, which by default are already in sRGB - that's a big secret no one may have ever told you before, but trust me, it's like that. So we either need to remove the gamma from our textures (“linearize” them) before rendering, which can be done with a gammaCorrect node in front of them in the shader chain with Gamma set to 1/2.2, which is 0.455 rounded (important: the gammaCorrect node works inversely - the value we put in there is the desired gamma compensation value, not the actual gamma function itself!), OR we can use mental ray's internal gamma correction mechanism - which I prefer. So we abandon the mia_exposure_simple's gamma correction, simply by setting its Gamma attribute to 1.0, and enable mental ray's mechanism by setting the primary framebuffer's Gamma to 1/2.2 = 0.455, in the render globals (Fig. 6).
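
For the first (linearizing) route, the per-texture setup would look roughly like this in MEL (file1 is a placeholder for your file texture; remember that Maya's gammaCorrect computes value^(1/gamma), hence the 0.455):

    string $g = `createNode gammaCorrect`;
    setAttr ($g + ".gammaX") 0.455;  // value^(1/0.455) = value^2.2 -> linearized
    setAttr ($g + ".gammaY") 0.455;
    setAttr ($g + ".gammaZ") 0.455;
    connectAttr "file1.outColor" ($g + ".value");
    // then connect ($g + ".outValue") wherever file1.outColor went before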

So we're ready to go and do the first test rendering (Fig. 7). As you can see, the scene is pretty dark and has a few errors caused by the insufficient ray depths. However, we are still using the render globals' default Draft quality preset...

Fig. 7

Fig. 6


Let's now increase the raytracing depths to a reasonable amount (Fig. 8). The values you see in Fig. 8 should satisfy our requirements; we might increase the reflection depth later on...

I also tweaked the final gathering settings to a lower quality (Fig. 9). This way, we get a fast-converging - yet meaningful - indirect illumination for our preview renders. But besides lowering the general final gathering quality, I increased its trace depths and, more importantly, turned the Secondary Diffuse Bounces button on. This button, however, only gives us a single bounce of diffuse light, as that's how they designed the render globals, but as I'm not satisfied with that, let's go under the hood of the mental ray settings...

Fig. 8

Fig. 9


We are selecting the miDefaultOptions node (for example, by typing “select miDefaultOptions”, without the quote marks, in the MEL command line) (Fig. 10). This node is basically responsible for the export of all the settings to mental ray. The regular render globals are practically a more user-friendly “front-end” to the miDefaultOptions. There's also some stuff in the mentalrayGlobals node, but this does not affect us right now.

As you can see, the FG Diffuse Bounces attribute is actually exposed; we set it to our desired depth, which is 2 for now.

Fig. 10


It looks better, but still appears to be seriously underexposed (Fig. 11). There are several ways to adjust the general exposure level in mental ray for Maya, but let's choose the easiest one: raising the Gain attribute of our mia_exposure_simple...

You can navigate to the mia_exposure_simple either by selecting your camera (to which it is connected), or by opening the hypershade and selecting it from the Utilities tab. I gave it a serious punch and boosted the Gain to 4.0 (Fig. 12).

Fig. 11

Fig. 12


Now it's much better from an exposure point of view, but it looks very cold and not very twilight-ish (Fig. 13). You might want to experiment with the sun's direction, but if we overdo this then we will lose the nice light which is playing on the floor. I therefore decided to solve the problem using the mia_physicalsky - the environment shader which is responsible for pretty much the entire lighting situation.

Fig. 13


I upped the Haze parameter to 2.0, which gives us a nice “equalization” between the direct light coming from the sun and the light intensity of the sky (Fig. 14). At lower haziness, the sunlight would be too dominant for our twilight atmosphere. I then shifted the Red/Blue attribute towards reddish, to achieve a warmer look (if I wanted to shift it towards blueish, i.e. do a white balance towards a cooler temperature, I would have to use a negative value for the Red/Blue Shift). I also slightly increased the Saturation, which is pretty much self-explanatory. Now, for an interesting little trick to make the whole lighting situation more sunset/sunrise-like, whilst still maintaining the direct light on the floor (i.e. the actual light angle), I increased the Horizon Height to 0.5. This not only shifts the horizon line but also makes the whole sky system think that we have a higher horizon, and thus provides a more accentuated sunset/sunrise situation. Remember this does not have too much of an effect, yet it's still an interesting way to tune the general look. The last two things I changed were the Horizon Blur and the Sun Glow Intensity; however, both of these attributes don't have much of a visible effect on the general illumination of our interior.

Fig. 14


Once we're finished setting up the basic look, we can go about configuring the render globals for the final quality (Fig. 15). First of all, let's increase the final gathering quality, since we can reuse the final gathering solution later on. In Fig. 15 you can see the values I used - 64 for Accuracy, which means each final gather point shoots - in a random manner - 64 rays above this point's hemisphere (less accuracy would give us a higher chance of a blotchy final gathering solution). To work against the blotchiness we could also increase the Point Interpolation to really high values, like 100+, but this would most likely wash out the whole contrast and detail of our indirect illumination if we don't have a sufficient Point Density value. The Point Density - in conjunction with a reasonable Point Interpolation - is most responsible for achieving nicely detailed shadowing, and so we have to find a good correlation between these two. In our case, I found it sufficient to have a Point Density of 2.0 and a Point Interpolation of 50. You might want to try a density of 1.0 (or even 0.5) if you think the former settings take too long to calculate, but you'll surely notice the lack of detail in the indirect illumination. Note that increasing/decreasing the interpolation does not affect the final gathering calculation time at all. It also does not hurt the actual rendering time too much. The crucial value is the point density, which adds to calculation time, as does the accuracy. Also note that you might be able to comfortably experiment with the Point Interpolation if you freeze the final gathering solution (set Rebuild to Freeze).
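
The corresponding miDefaultOptions attributes can be set directly as well; the names below are how I remember these settings being exposed in this era of mental ray for Maya, so verify them on the node before relying on the script:

    setAttr miDefaultOptions.finalGatherRays 64;              // Accuracy
    setAttr miDefaultOptions.finalGatherPresampleDensity 2.0; // Point Density
    setAttr miDefaultOptions.finalGatherPoints 50;            // Point Interpolation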

It looks much better now, but there are still some areas that seriously lack detail, such as the door region (Fig. 16). To reveal these details we could render a simple ambient occlusion pass and multiply it over in post production. This would accentuate the problem areas, but at the same time it would add that typical omnipresent, physically incorrect and visually displeasing ambience. To overcome this, and still use the advantage of ambient occlusion, we can use the mia_material's internal ambient occlusion mode...

Fig. 16

Fig. 15


We simply need to enable it in the shader, and set the Detail attribute to ON (which it is by default) (Fig. 17). This special ambient occlusion mode is intended to enhance the problem areas' details, where the point density might still not suffice.

To enable the ambient occlusion in all shaders, we simply select them all from the hypershade and open the attribute spread sheet, from Window > General Editors > Attribute Spread Sheet (Fig. 18). There we navigate to the attribute called Ao_on and set its value to 1 (ON).

Fig. 17

Fig. 18


Although it still might be physically incorrect, it reveals all the details that the final gathering was not able to cover (Fig. 19). Of course, it still looks very coarse, and this is mainly because the general sampling settings are still at extremely low values.

To ensure nice edge antialiasing, as well as better shadow and glossy sampling, we set the min/max sample levels to 0/2 and the contrast values each to 0.05 (Fig. 20). The filter should be changed, too; I chose Mitchell for a nicely sharp image. I'm also raising the Reflection Gloss Samples (Refl_gloss_samples) up to 8 in the mia_materials. Note that this happens on a per-shader basis, and we can use the attribute spread sheet again to do this all at once for all shaders.

Fig. 19

Fig. 20


Last time we rendered to a full 32-bit floating point framebuffer. This time, for my final render, I chose to render to a 16-bit half floating point framebuffer (Fig. 21). The 16-bit half takes less storage (and bandwidth) but still provides the increased dynamic range of floating point buffers. If we want to render the floating point buffer right out of the GUI, without batch rendering, we need to make sure the data written into the buffer actually is floating point; thus the Preview Convert Tiles option in the Preview tab of the render globals needs to be switched ON, and the Preview Tonemap Tiles option needs to be switched OFF. This will produce funky colors in your render view preview, but the image written to disk (typically in your project's images\tmp folder) should be alright.

The use of a 16-bit half framebuffer forces us to use ILM's OpenEXR format, as it is the only supported format right now for this particular kind of framebuffer (Fig. 22). That's not actually bad, since OpenEXR is a very good and nowadays widely used format.

Fig. 21

Fig. 22


Here's the final rendered, raw image (Fig. 23) - a good base for the post production work.

Fig. 23


In my final interpretation I decided to exaggerate the colors that make a dramatic twilight atmosphere (Fig. 24). Again, there is no painting happening, only color enhancement, which was done using Adobe Lightroom 1.0.

I hope you enjoyed following this second part of the series as much as I have enjoyed writing it. Stay tuned for part 3, where we will be covering an extremely interesting and no less challenging lighting situation: moonlight.

Fig. 24


MOONLIGHT - part 3


Hello and welcome to the third part of the environment lighting series for Autodesk Maya 8.5, where we will be discussing a very interesting lighting situation: natural moonlight. So let's wait for a full moon and a cloudless sky; then we can turn off the lights and get started...

If you followed the preceding two tutorials (which I recommend), you will already be familiar with the scene (Fig. 1). Before we start placing lights and tuning parameters, we should take some time to think about what ‘moonlight' actually is. If you are not interested in this concept then you might want to skip the next two paragraphs, or come back to them later, as they are not essential. They are, however, valuable for understanding why certain methods have been used in the execution of this moonlight setup.

So what is moonlight? First of all, by moonlight we mean a nighttime situation, and for the sake of convenience let's say we have a full-moon/nighttime situation. There are several sources and components of illumination in this setting (in descending order of energy): the moon itself (scattering sunlight from its surface in all directions), the sun (scattering light around the edge of the earth), planets and stars, zodiacal light (dust particles in the solar system that scatter sunlight), airglow (photochemical luminescence from atoms and molecules in the ionosphere), and diffuse galactic and cosmic light from galaxies other than the Milky Way. All of these illumination sources have their characteristics, and in order to super-realistically simulate such a night sky, we would have to account for all of them. But please bear with me; we will only be concentrating on the moon itself, plus an atmospheric ‘soup' including all the other ingredients.


Besides, and this is very interesting, even if we did that super-realistic night-sky simulation then we would perhaps get a very photo-realistic rendering, but I am sure many people would be disappointed by it. This is due to the simple fact that seeing a night-sky/moonlit photograph is fundamentally different from actually viewing such a scene with our own eyes. The photograph might be physically correct, but also completely different from what we are used to physiologically perceiving. In the end, we would most likely shift the photograph's white balance heavily towards blue, because this is what we are used to seeing: as opposed to how a camera sensor works at dim lighting levels, the sensitivity of human light perception is shifted towards blue; the color-sensitive ‘cones' in the eye's retina are mostly sensitive to yellow light, and the more light-sensitive ‘rods' are most sensitive to green/blueish light. At low light intensities, the rods take over perception and eventually we become almost completely color blind in the dark; hence it appears that the colors shift towards the rods' top sensitivity: green and blue. This physiological effect is called the “Purkinje effect”, and is the reason why blue-tinted images give a better feeling of night - even though it's not correct from a photographic point of view.

So we will rely on a hint of artistic freedom, rather than strict photo-realism, for this tutorial. To simulate the moon's light I chose a simple directional light with the rotation X -47.0 Y -123.0 Z 0.0 (Fig. 2).


For the light color I decided to use mental ray's mib_cie_d shader (Fig. 3). Its Temperature attribute defaults to 6500 K (Kelvin), which means an sRGB ‘white' for this so-called D65 standard illuminant, which is commonly used for daylight illumination. It works as follows: every temperature above 6500 K will appear blueish, and every temperature below 6500 K will appear reddish. The valid range is from 4000 K to 25000 K. Although the moon actually has a color temperature of around 4300 K, I chose a temperature of 7500 K. This is not necessarily correct from a physical point of view, for various reasons. Firstly, the moon is not a black body radiator, and so its color cannot precisely (only approximately) be expressed with the Kelvin scale. Secondly, the moon's actual color is mainly a result of the sunlight (with a temperature of around 5700 K - still lower than the white point of our D65 illuminant, or in other words more reddish when expressed with it), a slightly reddish albedo of the moon's surface, and the reddening effect of Rayleigh scattering (blue light, i.e. smaller wavelengths, tends to scatter more than red light with greater wavelengths, therefore a higher amount of blue light gets scattered in the atmosphere, leaving more red light from our perspective here on Earth). This would, in photo-reality, surprisingly yield a quite reddish moonlight, even if we chose a very low white balance for our photograph at maybe around 3200 K (which is considered ‘tungsten film'). However, for the physiological reasons described previously, I went for 7500 K on the D65 illuminant, as this gives a pleasing - not too saturated but still very natural - blueish light.

To cut a long story short, if you wanted to go for photo-realism you would have to use a reddish light color, but you would most likely white balance everything towards blue afterwards to achieve the cool night feeling! And that's basically what I did - only in a rush...
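
Wiring this up is quick; a sketch, assuming default node names and that mib_cie_d exposes its result as outValue (check the Connection Editor if the plug differs in your version):

    string $d = `createNode mib_cie_d`;
    setAttr ($d + ".temperature") 7500;  // Kelvin, on the D65-referenced scale
    connectAttr -f ($d + ".outValue") directionalLightShape1.color;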


For the same reasons I chose a turquoise (blue-greenish) color for the surrounding environment, which was simply applied as the camera's background color. Although this will only have a subtle effect, it makes sense for completeness, and after all we will see this color through our back windows. Note that what we see on the actual Background Color's color swatch will be (deliberately) gamma corrected later on. To overcome this, and to ensure that the color I choose is the color that I will see later on in the rendering, I use a simple gammaCorrect node, with the inverse gamma applied. The gammaCorrect is connected via MMB drag & drop onto the ‘Background Color' slot.


Before we push the render button, let's make sure we have something that takes care of our indirect illumination, and that we are rendering in an appropriate color space. For the sake of simplicity I chose final gathering with Secondary Diffuse Bounces for the indirect light contribution. This is easy to set up, yet effective. As you can see, I set low quality values, but since we are only doing a preview this will suffice.

Because there is a little shortcoming with the Secondary Diffuse Bounces setting, I'm selecting the miDefaultOptions node, which is basically the back-end of the render globals. There I set the FG Diffuse Bounces to 2, which is my desired value for the indirect illumination bounces. To select the miDefaultOptions, simply type “select miDefaultOptions” (without the quote marks) in the MEL command line, and then hit Enter.


I'm also setting the Ray Tracing depths to reasonable values - they seem very low, but are absolutely sufficient for our needs.

To take care of the desired color space (sRGB) we simply need to set a gamma curve in the Primary Framebuffer tab of the render globals. Since a gamma curve of value 2.2 is similar to the actual sRGB definition, we only need to set the Gamma attribute to 1/2.2 = 0.455, as this is how mental ray's gamma mechanism works. For a basic understanding as to why we should render in sRGB, I greatly encourage you to go through the “Note on Color Space” in the first tutorial of this series (Sunny Afternoon), if you haven't already. As a general note, it has to do with the non-linearity of human light perception; rendering in a true linear space (gamma = 1.0), as any renderer usually does by default, is the main reason for CG looking “CG-ish” (which we don't want). Spread this knowledge to your buddies and with this understanding you'll be the cool dude at every party, trust me!

So here is our first test render. It looks a bit dark, and since we want to have a full moon the shadow seems a bit too sharp.


To soften the shadow, let's increase the Light Angle of our directional light. Because widening the light angle introduces artifacts, we should also increase the amount of shadow rays to yield a smooth and pleasing shadow. I'm also increasing the intensity of the mib_cie_d a little.
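
On a Maya directional light these settings live in the Raytrace Shadow Attributes; the values below are illustrative starting points, not the exact ones I used:

    setAttr directionalLightShape1.useRayTraceShadows 1;
    setAttr directionalLightShape1.lightAngle 1.5;  // wider angle = softer penumbra
    setAttr directionalLightShape1.shadowRays 16;   // more rays = smoother shadow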

This is a good base, and all we need to do now is increase the general quality settings for our final render.


For better anti-aliasing and smoother glossy reflections we should crank up the global sampling rates (Fig. 12). A min/max value of 0/2 and a contrast threshold of 0.05 should suffice. I used a Gauss 2.0/2.0 filter for a sharp image.

For the final gathering, this time I chose a fairly unorthodox method... Remember, the last couple of times we used the automatic mode, which in most cases does a really good job. Well, in automatic mode all we need to worry about are the Point Density and Point Interpolation values. However, sometimes in this mode the interpolation becomes quite obvious and displeasing, especially in corners, where you can usually spot a darker line where the interpolation happens to be very dull. For a sharper interpolation, I decided to use the scene-unit-dependent Radius Quality Control (Fig. 13). It generally takes a little time to estimate the proper min/max values (in scene units), but as a guideline you might want to do a diagnostic automatic Final Gathering solution (see Diagnostics in the render globals) as a base, to see its point densities. Then, step by step, approximate this density with the scene unit Max Radius control. Note that the density is only decided by the Max Radius (the lower the Max Radius, the more Final Gathering points are being generated); the Min Radius only decides certain interpolation extents. Once you are satisfied with this general density, you will usually want to raise the Point Density value. This Point Density is added to the density we estimated with the min/max radii; however, the interpolation extents do not change, so we are basically only adding points to the interpolation, which is similar to raising the Point Interpolation in automatic mode (only more rigid, and somehow it puts the cart before the horse this way). It's always good to know how and why things are happening, and this knowledge is useful if you ever want to use the Optimize for Animations feature. It's also a bit easier if the View radii are being used, since the min and max radii can then be generalised (min/max 25/25 or 15/15 in pixel units is a good starting point).


As a little trick to enhance details in our scene, I turned the Ambient Occlusion on in the mia_material shaders, in the Details mode. Simply select them all and switch the Ao_on attribute to 1 (On), using the attribute spread sheet. The Details flag, in combination with Final Gathering, ensures that we don't get that rather unpleasant dark-cornered-and-strange Ambient Occlusion.


To prepare for the final render, I set the framebuffer to half floating point and the image format to OpenEXR (Fig. 15). Floating point means the image gets stored with a high dynamic range, as opposed to 8-bit or 16-bit integer images, which are clipped at RGB values greater than 1.0 (‘white'). With a floating point image we can map values greater than 1.0 back to the visible range in post-production (i.e. we will be able to eliminate completely burnt areas). Half floating point means floating point with half precision, taking less memory and bandwidth. To be able to render a floating point image right out of the GUI we need to set the Preview Tonemap Tiles option to Off, but keep the Preview Convert Tiles option On. The preview in the render view might look very dark and psychedelic, but the OpenEXR image written to disk in the images\tmp folder will be alright, and that's the one we will be processing later on in Photoshop (or any other HDRI editor of your choice). Mind that floating point images are stored without gamma correction (i.e. linearly), and e.g. Photoshop (hopefully) applies the proper correction by itself. If the image looks incorrect when imported into Photoshop or wherever else, you most likely have to apply the gamma correction yourself there. This does not relieve us from setting the proper gamma value in the render globals' framebuffer menu, however, as the textures still need to be linearized before rendering!


Here’s my final render without post processing.


As with any photograph, we shouldn't judge the raw shot; instead let's take it into the ‘darkroom' and apply some color and contrast improvements here and there.

I hope you've enjoyed following this little exercise as much as I have enjoyed writing it! Sadly this is the last part concerning natural exterior lighting, but the upcoming electric light tutorial will be no less challenging and just as much fun, I'm sure!


ELECTRICAL - part 4


Hello and welcome back aboard! This time, following up our last tutorial about natural moonlight, we will be discussing a very 'CGI-traditional' fashion of illumination: electrical lighting. Although this kind of light is considered 'artificial', we will learn later on that it has a very natural background (at least as long as we stay with a tungsten light, which we propose in this tutorial).

So, why 'CGI-traditional', you might ask? Well, ever since there has been CGI (computer generated imaging), tungsten bulbs have been a very 'easy' to simulate type of light source, for mathematical reasons. The classic tungsten bulb has a relatively limited area of light emission, which, in the 3d/simulation world, can believably be simplified down to an infinitesimal point - the classic point light (as a side note, its little brother, the spot light, is nothing but a point light with more sophisticated features). In the past of CGI this infinitesimal (infinitely small) point made it possible to render 3d images effectively and fast, for a logical reason: to simulate a light source, we basically need three points for the math, i.e. the position of the 'eye' of the observer, the point on the surface that's being lit (called the 'intersection point'), and the position of the light source - together these mathematically make up the rendering, and since an infinitesimal point is obviously the most simple element in 3d space, it can be computed with very little expense in this context. Even more importantly, it converges noise-free per se, since the point is strictly determined. Back in the times when computers weren't as high-clocked as today this was crucial, and point-light based lighting was mandatory, along with closely related techniques such as spot lights and directional lights (which use an infinitely far away point instead).

So for CGI the point light was pretty much as important as Edison's light bulb for real life. Computer light sources have evolved since then, however, just as the real bulb did, and still (for both!) the principles have stayed the same. And still the most believable deployment of a point light is in the simulation of a tungsten bulb.

Enough with the history, though; let's have a closer look at how tungsten bulbs actually work and why they look the way they look. This is, as always, the essential starting point when trying to simulate a specific case.

The operation of a usual incandescent bulb is quite simple: an electric current is passed through a tungsten (also called wolfram) filament, which is enclosed by a glass bulb that contains a low pressure inert gas, to avoid oxidation of the electrically heated filament. Depending on the type of the filament, the operating heat is typically between 2000 and 3300 Kelvin (around 3140 to 5480 degrees Fahrenheit, or 1727 to 3027 degrees Celsius). This thermal increase induces radiation (also, but not only) in the human-visible light spectrum, in the form of a so-called 'black body'.

The interesting thing about this black body (which actually is an idealized physical model of a radiator/light emitting body) is that its emitted spectrum, i.e. the color, can be estimated solely by knowing the (absolute) temperature of the black body, according to Planck's law. Inversely, one application of this is in astrophysics, where scientists can measure the temperature of a star by analyzing its spectrum. Furthermore, the movement of stars and galaxies can be determined this way, if this estimated spectrum is shifted either towards blue (getting closer) or red (moving away), due to the electromagnetic equivalent of the sonic Doppler effect, called redshift or the Hubble effect.

Well, this all means we have an (at least theoretically) strictly defined spectrum, or color in our case, for a glowing tungsten bulb. This color lies on the so-called Planckian locus, a coordinate in a particular color space, and ranges, for our needs, from visible red, over white, to blue. There are several black-body-Kelvin-temperature-to-color converters on the internet, but fortunately there is a standard tool that ships with mental ray, which makes our life a bit easier.


It's called, guess what, mib_blackbody, and can be found in Maya under the 'mental ray lights' tab in the hypershade. This utility outputs the desired color, according to the temperature we feed it.

So let's model the actual light. To deliberately break with tradition I decided to use a spherical area light (instead of the good ol' point light), placed close to the center of the actual bulb geometry, so that it's encompassed by it (Fig. 3).


Obviously, if we rendered it this way, we would face trouble due to the occlusion caused by the bulb geometry. There are several ways to get around this - we could adjust the bulb's glass shader so it handles the transparency, though we would have to increase the ray depths accordingly. Or - and that's a bit smarter in this case, because we wouldn't have to mess with the ray depths - we simply exclude the bulb from shadow and reflection/refraction tracing by setting some flags in the object's shape node. Since the bulb is 'incandescent' anyway, we can neglect its shadow.
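
These flags sit in the shape node's Render Stats; scripted, for a bulb shape hypothetically named bulbShape1:

    setAttr bulbShape1.castsShadows 0;          // light leaves the bulb unoccluded
    setAttr bulbShape1.visibleInReflections 0;  // keep it out of reflection rays
    setAttr bulbShape1.visibleInRefractions 0;  // ...and refraction rays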

To give our light the desired color, I simply create the mib_blackbody node and connect it to the area light's color slot.


I also set its decay rate to 'quadratic' - this is very important to give it a natural falloff and to obey physical rules. The intensity is left at 1.0; I completely hand this over to the mib_blackbody, where I also set a reasonable temperature for our tungsten filament (something between 2000 and 3300; I decided on 3000 Kelvin).

I repeat all these steps for the second bulb, except that I used the same mib_blackbody node for its color, just to speed up the workflow a bit, as we assume that both bulbs are of the same type.
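
In MEL the whole light setup condenses to a few lines (the area light shape names are placeholders; decayRate 2 is Maya's enum value for quadratic, and outValue is assumed as the mib_blackbody output plug):

    string $bb = `createNode mib_blackbody`;
    setAttr ($bb + ".temperature") 3000;  // tungsten filament, Kelvin
    setAttr areaLightShape1.decayRate 2;  // quadratic falloff
    connectAttr -f ($bb + ".outValue") areaLightShape1.color;
    // the second bulb shares the same blackbody color:
    setAttr areaLightShape2.decayRate 2;
    connectAttr -f ($bb + ".outValue") areaLightShape2.color;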


Because the final gathering diffuse bounces setting has a little shortcoming in Maya 8.5, I set it in the actual controlling node, which is called miDefaultOptions (type 'select miDefaultOptions' without the quotes into the MEL command line to bring it up in the attribute editor).


Last but most important, I put us into the right color space, which is sRGB, the commonly used space for things like photographs. Although we cannot precisely apply this color profile right away (at least not easily in mental ray for Maya 8.5), we simply apply a so-called gamma correction curve of value 2.2 to our image, which is usually sufficient. This implies some caution: because the textures we usually use are already in sRGB, and hence gamma corrected, we need to un-gamma them before we correct the whole image again. That seems awkward and unnecessary, but makes total sense for a reason - if we want the (gamma corrected/sRGB) texture to look the way we are used to it looking, we need to remove the gamma correction first, before we re-apply it to the whole image. Odd stuff, but it makes our picture look pretty and more natural.

Thankfully mental ray has this remove-texture-gamma-and-re-apply-it thing built in already, and we simply set the desired gamma correction value in the framebuffer > primary framebuffer tab of the render globals. However, mental ray wants us to actually specify the inverted function, which is 1/2.2 = 0.455 in our case. For more information on the gamma issue, I encourage you to read the 'Note on Color Space' in the very first part of this tutorial series.

Well, here's our first test rendering with the settings above. Straaange things happening, I know.


The reason for this is the very close proximity of geometry to our area light - the final gathering usually goes nuts on this. There's a cheap solution: we simply set the final gathering filter to greater than 0; I decided on 1, which usually does a good enough job (Fig. 12). Usually it is desirable to completely avoid this filter (i.e. leave it at 0), because it introduces strange bias in some situations, e.g. if we lit our scene completely by HDRIs. So use it wisely, or only if you are forced to, as in our case. If you are still encountering artifacts, exclude the lamp guard and base from the reflection/refraction tracing as well.
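
If you want to set the filter without hunting through the UI (the attribute name is how I recall it being exposed on miDefaultOptions; verify on your end):

    setAttr miDefaultOptions.finalGatherFilter 1;  // mild speckle filtering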

Let's see if it helped, and yep, that looks much better.


I'm preparing for the final rendering now, by upping the general anti-aliasing quality. The final gathering needs some lifting, too.


Here we go.


The last thing I added was the mia_material's built-in detail ambient occlusion, by selecting all the mia_materials and changing the Ao_on attribute to 1 (ON) in the attribute spread sheet (Fig. 16). This reveals little details without hammering the well-known and usually way too strong ambient occlusion corner-darkness onto our image.

Also, I decided to render to a higher fidelity, fancy, super duper 32-bit framebuffer - simply because everyone does..! No, seriously: at least for stills it's better, of course, to render to a floating point format. After all, this gives us a more peaceful sleep while the renderer works overnight. However, for reasons of efficiency I decided on a 16-bit half framebuffer, which is still a floating point format but less space- and bandwidth-eating. To use this, the only possible file format for now is OpenEXR - that's not a bad thing, since OpenEXR is quite fancy (for real!).


After touching up some contrasts and colors here and there, I came up with my final interpretation.

I hope you enjoyed following this little tutorial about electric light, and join us next time for the candle light session!


CANDLE LIGHT - part 5


Ahoy, and welcome back to the fifth part of our lighting tutorial series! Interestingly, the general matter in this one will technically be the same as last time, where we discussed the behavior of electric light bulbs; however, the result will be considerably different. So let's turn off the lamps and fetch the matches, to get our candle light tutorial started.

In the last tutorial we already learnt the technical aspects of heated bodies, like a tungsten filament, or a wick. It became clear that, in a simplified yet meaningful way, the emitted color always has a very determined type, depending only on the temperature of the heated body. And curiously, this special rule does not depend at all on the material of the heated body. So we can pick up where we left off, and simply translate these rules to our new topic.

Let's recall the behavior of a heated 'black body'. Whenever matter is heated, it emits photons with certain intensities at distinct frequencies. This 'fingerprint' of the radiation is called a spectrum. Now, a black body is an 'ideal physical model' which absorbs all radiation and does not reflect any at all. The interesting thing about this is that the spectrum ('color') of such a body is strictly defined by physical law, and is solely dependent on the actual temperature of the body. Of course this is somewhat simplified, as the actual emission spectrum of our heated material (i.e. carbon and hydrogen, bound in the, let's propose, paraffin of our candle) is neglected this way. Still, this 'ideal model' does a good job at simulating our situation.

Now that we have an idea of how to model the color of our candle light, we can start to give it shape. According to gravity and buoyancy laws (hot things move upwards due to their lower density), the candle flame has this well-known 'drop shape'. If you ever wondered how a candle burns at zero gravity, see the picture on the right - the hot and 'lighter' gas does not circulate ('convect') as well as it does down here on Earth; instead it spreads uniformly, and no oxygen (although available!) rises after it, so it is likely forced to extinguish soon.

(picture: candle flame burning at zero gravity)


For the sake of simplicity I decided to use a simple photograph of a candle flame as a so-called sprite, or billboard object. I already adapted the image's hue to the temperature we will be using later on, which you might want to consider too, but more on this shortly.

The billboard is then placed close to the wick, to model the flame. This is a simple and popular method of representing things that are rather complex to shape or simulate, be it flames, snow, leaves, grass, pylons, and probably an arbitrarily huge bunch of other things one could think of.


It obviously makes sense to take care of certain factors when dealing with such 'tricks', so I adjusted all the necessary render flags of this sprite, to avoid render artifacts (the same Render Stats flags we scripted for the bulb in part 4). For example, it of course does not make sense to let this helper object cast shadows (after all, it is replacing a light emitting entity), or to leave it visible to reflections or refractions (the actual light will handle this later on with the 'highlights').

The next rational step in our abstraction of the candle light is to build the actual light emitting 3d representative. I chose a spherical area light for this job, slightly scaled in the 'up' direction. I placed it close to where our 'fake' flame is, right above the wick. Since we took care of the sprite's render flags, it does not interfere with the light at all.

Now that we have our light source constructed, we shall give it life with an appropriate color. As described earlier, we have robust guidelines on how to deal with this in order to create natural looking candle light. We only need to know the approximate average temperature at which a candle flame burns. The sources on this, however, seem to diverge quite a bit; some state a temperature of around 1300 Kelvin (~1000 degrees Celsius, or ~1800 degrees Fahrenheit) and some even put it at around 2300 K (~2000 °C, or ~3600 °F). I went for the middle of these values and decided on a temperature of 1800 K, which equals roughly ~1500 °C or ~2800 °F. This is the temperature (color) we should align our candle sprite texture to, in order to yield a convincing congruence in the rendering.
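
If you want to tint the sprite texture to match, a Kelvin-to-RGB helper is handy. Below is a minimal Python sketch based on a widely circulated curve fit of black body colors (Tanner Helland's approximation - the constants come from that fit, so treat it as an eyeballing aid rather than gospel):

    import math

    def kelvin_to_rgb(kelvin):
        """Approximate RGB (0-255) of a black body at the given temperature.

        Based on a popular curve fit of black body colors; good enough for
        matching a sprite texture, not for scientific work."""
        t = kelvin / 100.0
        # Red channel
        if t <= 66:
            r = 255.0
        else:
            r = 329.698727446 * ((t - 60) ** -0.1332047592)
        # Green channel
        if t <= 66:
            g = 99.4708025861 * math.log(t) - 161.1195681661
        else:
            g = 288.1221695283 * ((t - 60) ** -0.0755148492)
        # Blue channel
        if t >= 66:
            b = 255.0
        elif t <= 19:
            b = 0.0
        else:
            b = 138.5177312231 * math.log(t - 10) - 305.0447927307
        clamp = lambda x: max(0.0, min(255.0, x))
        return clamp(r), clamp(g), clamp(b)

    print(kelvin_to_rgb(1800))  # roughly (255, 126, 0) - a warm orange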

There are many Kelvin-to-color converters on the internet which we could use to obtain the desired color, but luckily there is also a built-in tool that ships with mental ray for Maya. It is called mib_blackbody and can be found under the mental ray lights tab of the 'Create Render Node' menu in the Hypershade.

This node only has two attributes we need to feed: the temperature (in Kelvin, or 'absolute' temperature) and an intensity value. If we wanted to really (really!) exactly simulate a candle light, or any light at all, we would have to actually know its luminous power, also called luminous flux and measured in lumen, and then we would have to convert this value into the Maya/mental ray world with some effort on both the emitting (light) and receiving (camera) side. Maya 2008 has some built-in improvements on this; however, since we are not doing a radio-/photometric scientific simulation, we simply GUESSTIMATE the intensity. I went for a value of 2500. To finally make use of this little tool, I connect it to the light's color slot - the light's intensity is left at 1.0 (this is handled by the mib_blackbody), and I also make sure the decay rate is set to 'Quadratic'.
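
In script form, the hookup might look like this (a sketch using maya.cmds; the light shape name is hypothetical, and the attribute names temperature, intensity and outValue follow the usual wrapping of the mental ray utility nodes in Maya, so double-check them against your version):

    import maya.cmds as cmds

    # Create the blackbody helper and set our guesstimated values.
    bb = cmds.shadingNode('mib_blackbody', asUtility=True)
    cmds.setAttr(bb + '.temperature', 1800.0)   # Kelvin
    cmds.setAttr(bb + '.intensity', 2500.0)     # guesstimated, not photometric

    # Feed the resulting color into the area light; leave its intensity
    # at 1.0 and let the light fall off quadratically, like real light.
    light = 'areaLightShape1'  # hypothetical name of our candle light shape
    cmds.connectAttr(bb + '.outValue', light + '.color', force=True)
    cmds.setAttr(light + '.intensity', 1.0)
    cmds.setAttr(light + '.decayRate', 2)       # 2 = Quadratic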

That's pretty much it for the scene part; let's head over to the rendering department.

We prepare the final gathering settings for a quick yet meaningful convergence of the indirect illumination. We only need a few rays (32) and a coarse point density (0.5) for our preview. Of course, we will refine everything for our final image. I left the final gather 'mode' at automatic, i.e. 'Optimize for Animations' and 'Use Radius Quality Control' are kept OFF.

The trace depths, however, need to be increased, along with the general raytracing settings; I decided on 2 'bounces', likewise for the diffuse contribution, which I revise in the miDefaultOptions node: although I turn them ON in the render globals, they are stuck at 1 bounce due to a little bug. I want them to be of depth 2, so I adjust them 'under the hood' in the miDefaultOptions.
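
A possible way to apply this workaround from the script editor - assuming the attribute behind the diffuse bounces is named finalGatherTraceDiffuse, as it is in the Maya versions of that era (verify on yours):

    import maya.cmds as cmds

    # Work around the render globals capping the diffuse bounces at 1:
    # set the value directly on the mental ray options node.
    cmds.setAttr('miDefaultOptions.finalGatherTraceDiffuse', 2)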

Before we actually render, we must take care of the color space, so it's time for our little gamma mantra (since we don't want odd and cg-ish looking, grungy true-linear shadings). Thus we put ourselves into the right color space, which is sRGB, the commonly used space for things like photographs. Although we cannot precisely apply this color profile right away (at least not easily in mental ray for Maya 8.5), we simply apply a so-called gamma correction curve of value 2.2 to our image, which is usually sufficient. This requires some caution: because the textures we usually use are already in sRGB, and hence already gamma corrected, we need to un-gamma them before we correct the whole image again. That seems awkward and unnecessary, but it makes total sense for a reason - if we want the (gamma corrected/sRGB) texture to look like we are used to it looking, we need to remove its gamma correction first, before we RE-apply it on the whole image. Odd stuff, but it makes our picture look pretty and more natural.

Thankfully, mental ray has this remove-texture-gamma-and-re-apply-it mechanism built in already, and we simply set the desired gamma correction value in the Framebuffer > Primary Framebuffer tab of the render globals. However, mental ray wants us to actually specify the inverted function, which is 1/2.2 = 0.455 in our case. For more information on the gamma issue, I encourage you to read the 'Note on Color Space' in the very first part of this tutorial series.
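
In plain numbers, the round trip looks like this (a minimal sketch of the idea; real sRGB uses a slightly more complex curve with a linear toe, but a pure 2.2 power is what the gamma field applies here):

    def remove_gamma(srgb_value, gamma=2.2):
        """Take an sRGB-ish texture value (0..1) back to linear light."""
        return srgb_value ** gamma

    def apply_gamma(linear_value, gamma=2.2):
        """Take a linear rendered value (0..1) into display space."""
        return linear_value ** (1.0 / gamma)

    texture = 0.5                     # a mid-grey as stored in an sRGB texture
    linear = remove_gamma(texture)    # ~0.218 - what the renderer computes with
    display = apply_gamma(linear)     # back to 0.5 - the texture looks as intended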

A quick test render yields some strange, blotchy artifacts though. This is due to the close proximity of certain objects to the area light - we would have to either move them (or the light) a little farther away, or exclude them from the final gathering and reflection/refraction computation somehow. Since we obviously have a strong need to keep the light close to the candle, we are forced to take the latter solution.

We simply switch OFF the corresponding render flags in the candle's and wick's shape nodes. This basically cures the bright-blotches problem. To further suppress this kind of blotch, I decided to use a final gathering filter of 1. This filter should be handled with care, and only be used as a last resort.

Another test rendering verifies this, and we have takeoff clearance. Let's raise the quality to something more usable (which basically means we are extending the flaps, to stay with the metaphor).

First, let's raise the general sampling settings. The minimum level is kept at 0, the maximum level is set to 2, which means a maximum of 4^2, or 16, samples per pixel (the rule is 4^n, where n is the sampling level). The contrast is lowered to 0.05 for each channel. I usually use a narrowed Gauss filter of width 2.0 (the default is 3.0!) both in x and y, which gives sharp, fast and nice sample filtering.

I also turned on the 'detail ambient occlusion' mode of the mia shaders. All you need to do is select all the mia shaders in the Hypershade, open up the attribute spread sheet, and set the Ao_on attribute to 1 (ON). This ensures we see all the little details that are too small to be captured properly by a rather coarse final gathering solution.

Last but not least, we could go for a floating point framebuffer if we liked. To do so from within the render view, and without going to a batch render, we simply have to switch the framebuffer to either RGBA (Float) or RGBA (Half), turn 'Preview Convert Tiles' ON and 'Preview Tonemap Tiles' OFF, and use an appropriate file format, like OpenEXR.

That's it. I came up with this final interpretation after going through a few color, white balance and contrast image operations.

I hope you enjoyed following our little candle light exercise as much as I enjoyed writing it! And I'd be glad to welcome you next time to our final part, which is probably the most challenging and most definitely the eeriest one: underwater lighting!

UNDERWATER
part 6

Hello and welcome to the sixth and last part of our environment lighting tutorial series! In the preceding parts we discovered the world of natural environmental lighting, artificial kinds of lighting, and mixtures between them. In this last feature we will be discussing a rather special case: an underwater environment. This implies a more or less 'unusual' prerequisite: we will need a truly visible 'medium' - let's call it a volume, or ether. Most often, people tend to fake such a volume by simply using so-called 'volume shadows' on their 3d lights, i.e. lights casting a visible 'light ray' into an apparent (though non-existent) volume. This is not the real deal, but it is a favored method of both professionals (because it renders fast, which is essential, especially for animations) and beginners (because it's rather easy to set up and... well, I don't know, but it's like the No. 1 thing people wish to do when getting their hands on a 3d program). Anyhow, we will be going the way of the cowboy, or cowgal, and do it the tough way. Since this is all about rendering stills, we can afford this extra nuance of 'bought' prettiness.

Well. So we're back aboard... though this might be a rather inappropriate description - we are sunk! The ship's body is below the waterline and filled with seawater. Believably illustrating this situation shall be the challenge of our tutorial. We will also be creating an eerie, unfamiliar, uncommon lighting to support the feeling of being in a different world.

Before we start to do anything, we need to have a few thoughts on this different world, because this time we actually have a wholly different (or let's say: a more exaggerated) situation than usual. There are mainly two things we need to consider: first, WHAT makes underwater look underwater, and second, HOW can we achieve/simulate it. These might sound trivial - and in fact the circumstances are indeed so trivial that most people seem to forget about them.

Let's begin by comparing our usual situation (land / more or less dry air) with our new situation (under the sea). In our habitual environment - the office, the living room, or wherever inside a building - we usually do not have much of a visible 'volume', except if we romp around and raise some dust. When this dust gets into the air it naturally, like any matter, reflects light, and thus becomes 'visible'. The more dust we raise into the air, the 'thicker' the apparent volume gets, and the light rays seem to become actually visible - although all we see is the dust reflecting them. There is a nice (albeit philosophical) quote by André Gide that aptly says: "Without the dust in which it flashes up, the sunray would not be visible."

Now, there are more 'things' than plain dust in the air we breathe; in fact, there are tons of gases and particles, which together make up what is commonly called the 'aerosol'. This rather invisible mixture of microscopic solid particles and liquid droplets has the same reflecting, or essentially scattering, impact on incident light as the regular (substantially larger) airborne dust.

This has an interesting effect: when light gets scattered (i.e. forced to diffusely deviate from its naturally straight trajectory) by particles much smaller than its wavelength (like the aerosol ingredients), the so-called 'Rayleigh scattering' occurs. Named after the physicist Lord Rayleigh, this general approximation rule says that the scattering 'probability' of a light ray depends on its wavelength - the smaller wavelengths (bluish, ultraviolet domain) have a higher chance of getting scattered than the larger wavelengths (reddish, infrared domain) (Fig. 1). Have you ever asked yourself why the sky is blue? THIS is the answer. The rather neutral, virgin and 'white' sunlight enters the earth's atmosphere, and distinct portions of it get scattered by the aerosol - since the blue part of the light has a much higher probability of getting scattered, we seem to be surrounded by a diffuse blue environment. As opposed to a sunset or dawn, where mostly unscattered light from the direction of the sun reaches the observer - and appears red, due to its longer wavelength.
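
The strength of the effect follows the famous inverse fourth-power law of Rayleigh scattering:

    I(\lambda) \propto \frac{1}{\lambda^4}

As a quick worked example: blue light at about 450 nm is scattered roughly (650/450)^4 ≈ 4.4 times more strongly than red light at about 650 nm - which is why the scattered skylight looks blue, while the remaining direct light at sunset looks red.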

Fair enough. Much pondering about the air, but what about our concrete underwater situation? Well, it's basically the same story! The ocean IS blue - not only because it reflects the sky, but also because of the Rayleigh rules explained above. These scattering rules basically apply to anything, at any time. In cgi we simply neglect them, or often fake them based on observational facts. And after all, computing true wavelength-based Rayleigh scattering is a seriously complex task, and it's questionable whether the effort can be justified, since its mostly rather marginal effect would 'steal' rendering time we could spend on other things that make our image pretty.

Have you ever asked yourself why, e.g., Maxwell Render outdoor images look faint, whilst the indoor ones look pimp? Because they neglect this light scattering (at least at this point in time)! The scattering effect is not as evident in indoor/interior renderings, but it has a large impact on the 'naturalness' of outdoor, larger-scale situations. The Rayleigh rule is omnipresent, unless you're in a complete vacuum.

And it is even more evident in 'thicker' mediums, or volumes, like ocean water, which is full of more or less tiny particles. The only difference here is that the light gets scattered and absorbed earlier, which is often referred to as a higher 'extinction'. A light ray entering such a volume has a certain probability of getting scattered forwards (along its original trajectory), backwards (the direction it came from), or somewhere in between, or of getting completely absorbed by some particle. Every volume has its own characteristics as to how much of each of these applies - not to forget that the wavelength of the light ray looms large over all of this...

This behavior can be modelled, or simulated, by a so-called ray marching shader. We are not going to obey the wavelength-dependent rules strictly (it'll be more of a guesstimation), but let's finally get our hands on our actual scenery.

As a reference I like to use http://www.underwatersculpture.com by Jason Taylor, which has various, and no less beautiful, photographs on the day-to-day-things-underwater subject.

To build up our medium, I decided to simply create a large surrounding cube as a 'container' for our volume. This is the simplest and most fail-safe way to set up this kind of stuff. We could alternatively build our volume through our camera's volume shader slot, which would basically have the same effect, except when a ray hits 'nothing', where this second way would simply return the un-approximated environment color. Besides, this alternative way could take longer to render, because the ray marcher could possibly take some more, unnecessary steps further into the depth (not in our case, however).

The ray marching utility we will be using is the rather ancient, though still nicely working, mental ray 'parti_volume' shader, which can be found under the 'mental ray Volumetric Materials' tab in the Hypershade. This is not to be confused with parti_volume_photon, which is used for volume photon tracing - but we won't use photons to obtain indirect illumination in this tutorial anyway. Our method will be a bit less accurate, but still nice and fast enough to create our desired look and feel.

Let's have a look at the volume shader. First of all, we assign a new 'black' surface shader to our cube container, and connect the parti_volume to its shading group's 'Volume Shader' slot. That's pretty much it for the set-up part, and we can have a closer look at the parti_volume's diverse attributes.
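
Scripted, the set-up could look roughly like this (a sketch; the cube name is hypothetical, 'volumeShader' is the standard slot on a Maya shading group, and 'outValue' is the usual output of the mental ray base shaders - verify the names on your version):

    import maya.cmds as cmds

    # Assign a plain black surface shader to the container cube...
    srf = cmds.shadingNode('surfaceShader', asShader=True)
    cmds.setAttr(srf + '.outColor', 0, 0, 0, type='double3')
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=srf + 'SG')
    cmds.connectAttr(srf + '.outColor', sg + '.surfaceShader')
    cmds.sets('volumeCube', edit=True, forceElement=sg)  # hypothetical cube name

    # ...and plug the parti_volume into the group's volume shader slot.
    pv = cmds.shadingNode('parti_volume', asShader=True)
    cmds.connectAttr(pv + '.outValue', sg + '.volumeShader')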

Most important for our needs right now are the scattering part (Scatter, Extinction), the so-called scatter lobes (R, G1, G2 - more on these later), and the ray marching quality settings (Min_- and Max_step_len). The other attributes, which we will neglect, are for filling the volume only partially (Mode - 1 means 'do it' - and Height), for adding noise, or rather density variation (Nonuniform - 0.0 means 'no noise'), and stuff we really don't need (Light_dist, Min_level, No_globil_where_direct). As you can see, there's lots of techy stuff, but we'll concentrate on the essential parts (Fig. 4).

First, the scattering factors: Scatter and Extinction. Scatter basically controls the color of the medium and is closely related to Extinction, which controls the density of the medium. Both go hand in hand, and the hassle is that to work with halfway rational values we need a quite dark Scatter color and a quite low Extinction factor - if either of the two goes into higher extremes, we'll typically end up with undesired results. So I decided on a value of RGB 0.035, 0.082, 0.133 for the Scatter color, which is a natural bluish tint. Since we are not doing wavelength-dependent calculations, I decided on this predominant color, which mimics and supports the Rayleigh rules explained above. For the Extinction I used a low-looking value of 0.004, but keep in mind that this is all correlated with the Scatter color, and very sensitive. This value will give us an extinction that swallows almost all of the light in the rear corners, and that's plenty.
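
To get a feeling for why a value as low as 0.004 is enough, note that extinction attenuates light exponentially with the travelled distance (the classic Beer-Lambert behavior, which I assume parti_volume follows here), so the transmitted fraction after a distance d is:

    T(d) = e^{-\sigma d}

With σ = 0.004 per scene unit, and our scene units being centimeters, a ray travelling 10 meters (1000 units) retains only e^{-4}, or roughly 2 percent, of its energy - so the rear corners indeed end up almost black.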

Now about the scattering lobes. These are a bit more difficult at first glance. Basically, a negative value for G (either G1 or G2) means a backward scattering lobe (back in the direction the light ray came from), and a positive value means a forward scattering lobe (forward along the original trajectory of the light ray) - and R simply controls the mixture between G1 and G2. So you typically choose one backward scattering lobe (i.e. a negative value for G1) and one forward scattering lobe (i.e. a positive value for G2), and weight both with the R attribute, where 1.0 for R means 'use only G1', 0.0 means 'use only G2', and 0.5 weights both equally... I know - there must have been some really funny guy at mental images who wrote this shader, and I'm pretty sure he's still laughing up his sleeve.
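
For the curious: lobes of this kind are commonly modelled with the Henyey-Greenstein phase function, and a two-lobe mixture like parti_volume's can be written as follows (assuming that is indeed the model behind G1/G2, which the parameter ranges strongly suggest):

    p(\theta, g) = \frac{1 - g^2}{4\pi \, (1 + g^2 - 2g\cos\theta)^{3/2}}

    p_{mix}(\theta) = R \cdot p(\theta, G1) + (1 - R) \cdot p(\theta, G2)

Here θ is the angle between the incoming ray and the scattering direction; g close to +1 pushes the light forward, g close to -1 bounces it back, and g = 0 scatters uniformly in all directions.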

Anyhow. I chose a rather forward scattering volume, but I encourage you to experiment with the values. The forward-ish scattering creates these nice glow-like light sources when the light points towards the camera (it's vice versa if the light is, e.g., behind the camera, of course). So I used R 0.1, G1 -0.65, G2 0.95 for my final image.

Last but not least, I trimmed the Min_- and Max_step_len to 50.0 each. These attributes decide at which distances (step lengths) to stop and look up a volume sample - the rays 'march' through the medium, and the lower the step lengths, the more samples are taken, the better (less noisy) the image quality gets, and the longer it takes to render. If you think it takes too long to render, boost these values up. On the other hand, if you get too much noise and artifacts in your image, reduce them. Generally, however, the manual proposes using a value of about 10 percent of the Max_step_len for the Min_step_len, so you might want to try this as well (5.0 min / 50.0 max). It is worth mentioning that the step length values are in actual scene units, so in our case a volume sample is looked up every 50 centimeters.
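
To demystify what the shader does at each of those steps, here is a heavily simplified, self-contained Python sketch of a ray marching loop (uniform steps, a single light, no shadow rays - just the structure of the technique, not parti_volume's actual code):

    import math

    def march_ray(ray_length, step_len, scatter_rgb, extinction, light_fn):
        """Accumulate in-scattered light along a ray through a uniform medium.

        light_fn(t) should return the direct light arriving at distance t
        along the ray; scatter_rgb and extinction play the roles of the
        Scatter color and Extinction attributes."""
        color = [0.0, 0.0, 0.0]
        transmittance = 1.0
        t = 0.0
        while t < ray_length and transmittance > 0.001:
            light = light_fn(t)
            # Light scattered towards the camera at this step, attenuated by
            # everything the ray has already travelled through.
            for i in range(3):
                color[i] += transmittance * scatter_rgb[i] * light * step_len
            # Beer-Lambert attenuation over one step.
            transmittance *= math.exp(-extinction * step_len)
            t += step_len
        return color, transmittance

    # A toy light that is equally strong everywhere, with our tutorial values.
    result, tr = march_ray(1000.0, 50.0, (0.035, 0.082, 0.133), 0.004,
                           lambda t: 1.0)

Halving the step length doubles the number of samples (and the cost), which is exactly the noise-versus-render-time trade-off described above.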

OK, we have our medium set up and (almost) running; now let's create some lights to make it shine. Since our volume shader relies on direct rather than indirect light, we cannot rely much on the later final gathering for the 'diffuse' incoming illumination. That's why I created two area lights for this job: one above the hatch, and one right behind the rear windows. For the main light source, however, I used two spot lights shining in from outside.

For these main lights I used a mib_blackbody helper utility at 2200 Kelvin to obtain a rather warm, diver's-flashlight-like color (the method of using a black body temperature as a color source was explained more extensively in the two preceding tutorials!). Though one could also imagine it's the sun shining in through the windows - you decide, and feel free to play around with it (to put it with Bob Ross: there are no failures, only happy accidents!).

The two area lights need a mixture of natural blue (due to Lord Rayleigh's stuff) and green (due to the many small greenish micro-organisms floating in the sea, like plankton and algae). This mixture is commonly referred to as cyan, turquoise, mint or cobalt, depending on which color is weighted - or, most felicitously: aquamarine.

So far so good? Uhm... there's one last, very important thing we need to consider. Remember the funny shader programmer? He decided to omit every light that is NOT on his list. That's a strange attitude, but not stranger than the other stuff in the parti_volume, no? So we need to link every light in its light list. You can either type in the (case sensitive!) name of the light, or middle-mouse drag and drop the light transform from the outliner onto a spare field (you need to re-select the parti_volume each time you connect a light, so the mechanism can add another open slot).
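
Scripting this is less fiddly than the UI dance. A sketch (the light names are hypothetical, and I am assuming the shader node is called parti_volume1 and exposes its light list as a 'lights' array attribute connected via the lights' message plugs, which is how the mental ray base shaders are usually wrapped in Maya - check in the connection editor):

    import maya.cmds as cmds

    # Hypothetical names of our four scene lights.
    lights = ['areaLight1', 'areaLight2', 'spotLight1', 'spotLight2']

    for i, light in enumerate(lights):
        # Each light's message plug goes into the next free slot of the list.
        cmds.connectAttr(light + '.message', 'parti_volume1.lights[%d]' % i,
                         force=True)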

Now that we have this part running, let's think about adding a few details that would add more to the underwater impression. In Maya we fortunately have the Paint Effects system, which is easy to use and even has some built-in 'underwater' brushes. I used some sea urchins here and there, a hint of shells, and a few starfish all around. I also added a little seaweed in some corners.

To be able to render the Paint Effects with mental ray, we need to convert them to regular polygons. I also converted their Maya shaders to mental ray mia_materials, which is always a good idea to obtain a consistent shading behavior across the scene, since in our case everything else is built with them as well. This needs to be done manually, however.

That's it - we're finally ready to render. I used a fixed sample rate of 2/2 this time. This is quite a brute-force approach, and you might consider using adaptive sampling of 0/2 instead, but be advised to tune up the sampling of the area lights along with it, since they are all left at 1/1 right now. You should also consider lowering the parti_volume step lengths if you encounter artifacts with the adaptive sampling. It is also worth mentioning that to actually 'cast' a shadow into the volume, we need a shadow (and general max-) ray trace depth of at least 4.

For the indirect illumination I chose a rather low-quality-looking final gathering with diffuse bounces. This time, due to the volume stuff, final gathering will not add all too much to the image, but it still makes a nice contribution to the general look of our piece.

Before we push the render button we need to chant the gamma mantra though, as always. Since we want our image to look nice, natural and appealing, instead of dark, smudgy and cg-ish, we need to pull it from its default color space, i.e. mathematically linear, into the one we are used to seeing, i.e. gamma corrected sRGB. There's a deeper explanation of this matter in the very first of the tutorials, the one about the sunny afternoon. To recall the essential basics, however, let's repeat why we need to care about the gamma issue BEFORE we render out our image. As mentioned, the (any) renderer does its internal calculations in a mathematically linear manner, which is foremost a good thing. We could take this truly linear result into our post application and gamma correct it there, because gamma correction - putting things into the sRGB color space - is desirable in almost any case; probably almost everything you see, i.e. photographs and pictures, is in this sense already gamma corrected without your knowledge. IF - and as you can see, that's a big IF - we didn't use image textures, which are ALREADY gamma corrected from the outset. When using regular image files, which usually have the sRGB/gamma correction 'baked' into them a priori, we need to remove this gamma correction before we RE-apply it to the whole image. Makes sense, no? I know it's confusing, but unless you want double-gamma-washed-out-looking textures, we need to obey this little rule. Applying the right gamma to the whole image afterwards isn't enough if we want the textures to look as they should (i.e. as we are used to seeing them, in their sRGB color space). Now, many people don't care about this whole issue and thus render in the plain mathematically linear space - and then wonder why their images look strange and unnatural, with this strangely dark and smudgy look and blown-out highlights and overbright areas all over. Especially realtime 3d has yet to 'learn' that mathematically linear rendering is not what the eye is used to seeing in nature (the human brain arrives at a 'gamma corrected', or rather logarithmically corrected, image too, if you will - although human perception is far more complex, of course).

So we want to have it gamma corrected/sRGB. Our renderer, mental ray, has a built-in function to automatically 'remove' the gamma from the textures before rendering, and to apply the inverse of this gamma to the rendered pixels/image. To do so, we go to the Primary Framebuffer tab in the render globals and put the appropriate gamma value, which is 1/2.2, or 0.455, into the Gamma field.

As a last enhancement, let's turn on the 'detail ambient occlusion' mode of our mia_materials. It should all be set up already by default; we simply need to switch it on by selecting the mia_materials and raising the Ao_on value from 0 (off) to 1 (on). We can do this easily for all selected shaders at once by using the attribute spread sheet, from the Window > General Editors > Attribute Spread Sheet menu.
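
The same thing scripted (a sketch; I am assuming the Maya attribute behind the Ao_on column is named 'ao_on', so check the exact spelling in the attribute editor of your mia_material version):

    import maya.cmds as cmds

    # Enable detail ambient occlusion on every mia_material in the scene.
    for mat in cmds.ls(type='mia_material') or []:
        cmds.setAttr(mat + '.ao_on', 1)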

We should come up with a render similar to what I got. I rendered to a regular 16-bit image format and took it into Photoshop for some contrast and color adjustments. That's the most fun part of it.

After playing around with the white balance, crushing the blacks, enhancing certain color elements (i.e. the blues and aquamarines), and after having fun with the 'Liquify' function in Photoshop, I came up with my final interpretation. I also put a 'dust/grime' image on top, to support the feeling of a thick medium. I hope you like it.

And I hope you enjoyed following our environment lighting tutorial series, as it is time to say goodbye for the time being. I have had a great time sorting out my guesses on all the subject matters, and most definitely learned a lot along the way - as you hopefully have as well. If you have any questions, critiques, comments, additions or whatever input on the tutorials or me, don't hesitate to contact me in one of the various available ways.

Florian Wild
http://www.floze.org/