
Plugins

    While V-Ray works with most of the standard 3ds Max lights, materials and maps, it
    also includes several additional plugins which offer functionality not found in 3ds
    Max itself. They are optimized specifically for V-Ray, and using them instead of the
    standard ones can increase rendering speed significantly.

    The V-Ray rendering system includes the following plugins for 3ds Max:

    V-Ray renderer - The V-Ray renderer plugin.
    VRay2SidedMtl - A utility material for creating thin translucent surfaces such as paper, cloth, etc.
    VRayOverrideMtl - A utility material that lets you specify different materials to be used for reflections, GI, refractions and shadows.
    VRayLightMtl - A material for creating light-emitting objects.
    VRayMtl - A specialized V-Ray material supporting (glossy) reflections/refractions, absorption, sub-surface scattering, etc.
    VRayMtlWrapper - A specialized V-Ray material that lets you specify additional rendering parameters for any material.
    VRayLight - An area light plugin.
    VRaySun - A V-Ray sun light with accurate intensity and color based on its position above the horizon.
    VRaySky - A procedural HDR environment map that works with the VRaySun light to create realistic daylight environments.
    VRayShadow - A raytraced shadow plugin (sharp and area shadows).
    VRayDirt - A procedural texture for dirt-like effects or for simulating ambient occlusion.
    VRayColor - A utility texture that always returns a specified color.
    VRayMap - A map for adding (glossy) V-Ray reflections/refractions to non-V-Ray materials.
    VRayHDRI - A map for loading HDR images (.hdr file extension) and mapping them as environments.
    VRayEdgesTex - A map that shows mesh edges (useful for wireframe-style rendering). Can also be used as a bump map to smooth the sharp edges of mesh objects.
    VRayDisplacementMod - A modifier that enables V-Ray displacement for an object.
    VRayFur - A plugin that generates simple render-time fur.
    VRayProxy - A plugin that lets you specify render-time geometry loaded from an external file.
    VRayPlane - A geometry plugin that implements an infinite plane primitive.
    VRayToon - An atmospheric plugin that produces a simple cartoon-style effect.
    VRayBmpFilter - A plugin for loading texture maps without filtering.
    VRayPhysicalCamera - A camera type that simulates the workings of a real-world camera.
    VRayFastSSS - A material for quick simulation of sub-surface scattering.
    VRayBlendMtl - A utility material for efficient layering of several different materials.

    VRayToon

    VRayToon is a very simple atmospheric plugin that produces cartoon-style outlines

    on objects in the scene. The source of VRayToon is available as part of the V-Ray

    SDK. Note that VRayToon is not intended to be a full NPR (non-photorealistic

    rendering) effect. However, it may be useful in many cases.

    Why an atmospheric effect?

    There are several solutions for adding toon effects to 3D renderings in 3ds Max;

    most of them function either as special materials (shaders) or as render (post)

    effects. Each of these approaches has both advantages and limitations. VRayToon

    has been implemented as an atmospheric effect for several reasons:

    The implementation is very simple.
    Works with any geometry supported by V-Ray, including displaced objects, VRayFur and VRayProxy objects, etc.
    Works with any camera type supported by V-Ray (spherical, fish-eye, etc.).
    Works with any camera effects (depth of field and/or motion blur).
    Works with raytraced effects such as reflection and refraction.
    Smooth and consistent outlines for intersecting objects.

    Creating a VRayToon atmospheric effect

    VRayToon can be created from the Environment dialog of 3ds Max. To create a

    VRayToon effect choose Rendering > Effects... from the 3ds Max main menu.

    Then click the Add... button and choose VRayToon:

    Parameters

    Basic parameters

    Line color - the color of the outlines.
    Pixels width - the width of the outlines in pixels.
    World pixels width - the width of the outlines in world units. Lines closer to the camera will be thicker.
    Opacity - the opacity of the outlines.

    Normal threshold - determines when lines will be created for parts of the same
    object with varying surface normals (for example, at the inside edges of a box). A
    value of 0.0 means that only angles of 90 degrees or more will generate internal
    lines. Higher values mean that smoother normals can also generate an edge. Don't
    set this value exactly to 1.0, as this will fill curved objects completely.
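The threshold behavior described above can be sketched numerically. This is a hedged illustration, not V-Ray's actual code: the assumption is that an internal line is generated when the dot product (cosine) of two adjacent face normals does not exceed the threshold, which reproduces the stated behavior (0.0 catches angles of 90 degrees or more; exactly 1.0 catches everything, filling curved objects).

```python
def generates_internal_line(n1, n2, normal_threshold):
    # Dot product of unit normals = cosine of the angle between faces.
    dot = sum(a * b for a, b in zip(n1, n2))
    # Assumed rule: an internal line appears when the faces are
    # "sharp enough", i.e. the cosine does not exceed the threshold.
    return dot <= normal_threshold

# Inside edge of a box: faces meet at 90 degrees (dot = 0.0),
# so the default threshold of 0.0 already generates a line.
print(generates_internal_line((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), 0.0))  # True
# Coplanar faces (dot = 1.0) only qualify at a threshold of exactly
# 1.0 - which is why that value fills curved objects completely.
print(generates_internal_line((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 0.5))  # False
```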

    Overlap threshold - this determines when outlines will be created for overlapping

    parts of one and the same object. Lower values will reduce the internal overlapping

    lines, while higher values will produce more overlap lines. Don't set this value
    exactly to 1.0, as this will fill curved objects completely.

    Do reflections/refractions - causes the outlines to appear in
    reflections/refractions as well. Note that this may increase render times.

    Trace bias - this parameter depends on the scale of your scene; it determines the
    ray bias when the outlines are traced in reflections/refractions.

    Maps

    This group of parameters allows the user to control the outlines using various

    texture maps.

    Color map - a texture map for the outline color. Screen-mapped maps will work

    best. Maps with World XYZ mapping are also supported, but may not work very

    well.

    Width map - a multiplier texture for the outline width. Screen-mapped maps will

    work best. Maps with World XYZ mapping are also supported, but may not work

    very well.

    Distortion map - a texture that will be used to distort the outlines. This works

    similar to bump-mapping and will take the gradient of the texture as direction for

    distortion. Note that high output values may be required for larger distortion.

    Screen-mapped textures work best, although World XYZ mapping is also supported.

    Opacity map - a texture for the outline opacity. Screen-mapped textures work

    best, although World XYZ mapping is also supported.

    Include/exclude objects

    Using these controls you can specify which objects will be affected by the
    VRayToon atmospheric. In this way you can achieve different outlines for different
    objects in the scene: create several VRayToon atmospherics and include each
    object in one of them.

    Notes

    VRayToon only provides outlines. You will need to come up with your own
    cartoon-style materials (for example, using falloff maps or other third-party
    material plugins).

    VRayToon has no per-object settings; however, you can add several

    VRayToon atmospherics to the scene, each applied to a different set of

    objects through the Include/exclude objects list.

    VRayToon will not work properly with objects that have their Cast

    Shadows property turned off.

    The quality of the lines depends on the current settings of the Image

    sampler.

    Distributed rendering

    Introduction

    Distributed rendering is a technique for distributing a single render job for a
    single frame across many computers in a network. There are different approaches
    to doing this, but the main concept is to reduce render times by dividing the
    rendering pipeline into parts and giving each participant a different part of the
    job. The most common way is to divide the frame to be rendered into small
    regions (buckets), have each machine render a number of them, and then combine
    the results into the final image.
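The bucket scheme described above can be sketched in a few lines. This is a simplified illustration, not V-Ray's implementation: the static round-robin assignment is an assumption, whereas real distributed rendering hands out buckets dynamically as each server finishes its previous one.

```python
def split_into_buckets(width, height, bucket_size):
    # Divide the frame into bucket-sized regions: (x, y, w, h) tuples.
    buckets = []
    for y in range(0, height, bucket_size):
        for x in range(0, width, bucket_size):
            buckets.append((x, y,
                            min(bucket_size, width - x),
                            min(bucket_size, height - y)))
    return buckets

def assign_round_robin(buckets, servers):
    # Simplified static assignment of regions to render servers.
    jobs = {server: [] for server in servers}
    for i, region in enumerate(buckets):
        jobs[servers[i % len(servers)]].append(region)
    return jobs

regions = split_into_buckets(640, 480, 64)
print(len(regions))  # 10 columns x 8 rows = 80 buckets
```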

    V-Ray organization

    V-Ray supports DR. It divides the frame into regions and spreads them across the
    participants in the distributed rendering. This is done entirely over TCP/IP, the
    standard protocol of the Internet and thus the most widely supported one. V-Ray
    itself does not need additional file or directory sharing (note that you may
    actually need some file/directory sharing for the bitmaps or other additional files
    used during rendering). The distribution management is divided between Render
    Clients and Render Servers.

    Render Clients

    The render client is the computer the user is currently working on and from which
    the rendering is started. It divides the frame into rendering regions and spreads
    them across the Render Servers; it distributes data to the render servers for
    processing and collects the results.

    Render Servers

    A render server is a computer that is part of the so-called render farm - it requests

    render data from the render client, processes it and sends the result back.

    Installation notes

    See the distributed rendering section in the installation instructions. Note that if
    you create a fresh install of 3ds Max on a machine, you will need to run at least
    one Backburner job with this machine as a server before you can use the machine
    for distributed rendering.

    How to test

    First start with the testing of the render server:

    Start the vrayspawnerXX.exe program from the Start menu (Start menu >

    Programs > Chaos Group > V-Ray for 3dsmax > Distributed

    rendering > Launch V-Ray DR spawner). It will automatically try to

    find the 3dsmax.exe file and start it in server mode. You should end up

    with 3dsmax minimized down in the task bar. It should have the title

    "vraydummy.max". If 3ds Max stays there alive without closing, then you have the
    Render Server part working. If you see a 3ds Max window flashing on the taskbar
    and then disappearing, right-click the V-Ray DR spawner icon in the taskbar tray,
    select Exit to close the DR spawner, and try running a regular Backburner job
    with this machine as the server. After that, try to start the V-Ray DR spawner
    again.

    Now test the render client:

    Open your 3ds Max as you normally would.

    Open a scene you would like to render (preferably a simple one for this

    test).

    Choose V-Ray as your current renderer and make sure you have checked

    Distributed Rendering ON in the V-Ray System section.

    Press the Settings button in the Distributed Rendering section.

    Add the machines used for rendering - either with their IP address or their

    network name and close the dialog.

    Render the scene as you normally would. You should see the buckets

    rendered by the different servers.

    If something fails

    Practically every action taken by V-Ray DR is logged. You can find the log files in
    the C:\ directory to see what has failed. If you cannot determine the problem
    yourself, you can compress the files and send them to us for analysis.

    If any of the servers fails, you should get a notification and the render client will try

    to reassign the buckets to another server.

    V-Ray Distributed rendering settings

    The Distributed rendering settings dialog is accessible from the System rollout of
    the renderer settings.

    Add server - manually add a server by entering its IP address or network name.

    Remove server - delete the currently selected server(s) from the list.

    Resolve servers - resolve the IP addresses of all servers.

    Notes

    Every render server must have all the needed plugins installed and all texture
    maps available in their proper directories, so that the scene you are sending will
    not cause it to abort. For example, a scene that uses a PHOENIX plugin will cause
    a failure on any server that does not have the PHOENIX plugin installed. If you
    have mapped your object with a file named JUNGLEMAP.JPG and that map is not
    in the BITMAPS directories of the render server installation, buckets rendered on
    that machine will look as if the map was turned off - unless you have also turned
    on the Check for missing files option in the V-Ray System rollout, in which case
    the render server will refuse to render the scene.

    Incremental add to current map and Add to current map modes for the

    irradiance map are not supported in distributed rendering mode. In Single

    frame mode and Bucket mode, the calculation of the irradiance maps is

    distributed among the render servers to reduce the render time.

    When you cancel a DR rendering, it may take some time for the render

    servers to finish working and they may not be immediately available for

    another render.

    Default lights are not supported in DR mode and will not render. If you need

    any lighting in the scene, you should add it explicitly.

Default displacement

    This section allows you to control displacement of objects with displacement

    materials, which do not have a VRayDisplacementMod modifier applied.

    Parameters

    Override Max's - when this option is on, V-Ray will render objects with

    displacement materials using its own internal microtriangle displacement. When this

    option is off, the objects will be rendered with the standard 3ds Max displacement.

    Edge length - determines the quality of the displacement. Each triangle of the
    original mesh is subdivided into a number of subtriangles. More subtriangles mean
    more detail in the displacement, slower rendering times and more RAM usage;
    fewer subtriangles mean less detail, faster rendering and less RAM. The meaning
    of Edge length depends on the View-dependent parameter below.

    View-dependent - when this is on, Edge length determines the maximum length
    of a subtriangle edge, in pixels. A value of 1.0 means that the longest edge of
    each subtriangle will be about one pixel long when projected on the screen. When
    View-dependent is off, Edge length is the maximum subtriangle edge length in
    world units.

    Max. subdivs - controls the maximum number of subtriangles generated from any
    triangle of the original mesh. The value is in fact the square root of the maximum
    number of subtriangles. For example, a value of 256 means that at most 256 x
    256 = 65536 subtriangles will be generated for any given original triangle. It is
    not a good idea to keep this value very high; if you need higher values, it is
    better to tessellate the original mesh itself into smaller triangles instead. From
    build 1.45.20 onward, the actual subdivisions for a triangle are rounded up to the
    nearest power of two (this makes it easier to avoid gaps caused by different
    tessellation on neighboring triangles).
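The arithmetic above can be checked with a short sketch: Max. subdivs is the square root of the subtriangle cap, and the per-triangle subdivision count is rounded up to a power of two.

```python
def max_subtriangles(max_subdivs):
    # Max. subdivs is the square root of the subtriangle cap:
    # 256 -> 256 * 256 = 65536 subtriangles per original triangle.
    return max_subdivs * max_subdivs

def effective_subdivs(requested):
    # From build 1.45.20, per-triangle subdivisions are rounded up to
    # the nearest power of two, to avoid gaps caused by different
    # tessellation on neighboring triangles.
    p = 1
    while p < requested:
        p *= 2
    return p

print(max_subtriangles(256))   # 65536
print(effective_subdivs(100))  # 128
```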

    Tight bounds - when this is on, V-Ray will try to compute the exact bounding

    volume of the displaced triangles from the original mesh. This requires pre-

    sampling of the displacement texture, but the rendering will be faster, if the texture

    has large black or white areas. However, if the displacement texture is slow to

    evaluate and varies a lot between full black and white, it may be faster to turn this

    option off. When it is off, V-Ray will assume worst-case bounding volumes, and will

    not presample the texture.

    Amount - a scaling parameter for the default displacement. Values larger than
    1.0 increase the displacement amount, while values lower than 1.0 reduce it.

    Relative to bbox - if this parameter is on, the actual displacement amount is

    based on the bounding box of the objects, like this is done by 3ds Max itself by

    default. If this option is off, the displacement is expressed in generic world units

    where white areas in the displacement map correspond to displacement of 1

    generic unit. You can use the Amount parameter to increase or decrease

    displacement amount.
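The two modes above can be contrasted in a small sketch. This is an illustration under stated assumptions, not V-Ray's formula: with Relative to bbox off, a white texel maps to exactly one generic unit scaled by Amount; the bounding-box scale factor in the second function is a hypothetical stand-in for 3ds Max's default behavior.

```python
def displacement_world_units(texel, amount=1.0):
    # Relative to bbox OFF: a pure white texel (1.0) displaces the
    # surface by exactly one generic world unit, scaled by Amount.
    return texel * amount

def displacement_bbox(texel, bbox_size, amount=1.0):
    # Relative to bbox ON (hypothetical scale factor): displacement is
    # proportional to the object's bounding box, so a deforming object
    # with a changing bbox changes its displacement height too.
    return texel * bbox_size * amount

print(displacement_world_units(1.0))       # 1.0 unit for pure white
print(displacement_world_units(0.5, 2.0))  # 0.5 * 2.0 = 1.0 unit
```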

    Notes

    The default displacement amount is based on the bounding box of an object.

    Therefore, it is not a good choice when you have deforming objects. In that

    case, you can either turn off the Relative to bbox option, or you can apply

    a VRayDisplacementMod modifier, which supports constant displacement

    amount.

    V-Ray and 3ds Max

    As a rendering plugin for 3ds Max, V-Ray supports most of the standard 3ds Max

    features and many of the 3rd party plugins. Nevertheless, there are some

    differences in how V-Ray interacts with 3ds Max. These are outlined below.

    V-Ray Features

    The V-Ray rendering system has many advanced features; here only some of them

    are listed. For a full list of all V-Ray modules and options, please see the help index.

    Note that all features are subject to change without special notice.

    Core architecture

    Multi-platform object-oriented API

    Fully multithreaded core

    Unified sampling system based on Schlick sampling

    Distributed rendering

    Efficient shading system specifically optimized for ray-tracing

    Modular architecture - many components of the system can be replaced with custom ones

    Geometry

    Efficient geometry handling

    True instance rendering

    On-demand dynamic geometry creation

    On-demand geometry loading from disk files

    Displacement mapping

    Catmull-Clark and Loop subdivision surfaces

    Extensible with custom geometric primitives through the V-Ray SDK

    Image sampling

    Three different image sampling methods

    Full-scene antialiasing

    Progressive path tracing

    Support for additional render elements (diffuse, reflection, GI etc)

    Advanced color (tone) mapping controls

    Extensible with custom image samplers through the V-Ray SDK

    Illumination

    Physically accurate full global illumination solutions

    Different GI algorithms: path tracing, irradiance cache, photon maps, light cache

    Reusable GI solutions for accelerated rendering of walk-through animations and
    animations with dynamic objects

    Physically accurate area lights

    Efficient illumination from HDR environments

    Procedural sun & sky models

    Extensible with custom lights through the V-Ray SDK

    Shaders

    Physically plausible materials

    Blurry reflections/refractions

    Accurate highlights

    Sub-surface scattering

    Support for efficient material layering

    Extensible with custom shaders through the V-Ray SDK

    Camera effects

    Depth-of-field with bokeh effects

    Accurate motion blur

    Physical camera model

    Extensible with custom cameras through the V-Ray SDK

    Extras

    Toon effect

    Fur generator/raytracer

    Extended matte/shadow capabilities

    Support for Render-to-Texture mode of 3ds Max

    VRaySphereFade to isolate only specific portions of the scene for

    compositing

Frame buffer

    V-Ray specific frame buffer with integrated color corrections and display of

    multiple rendering elements

    Direct rendering to disk for extremely large images, either as OpenEXR files

    or as .vrimg files

    Possible problems

    My system locks up and freezes while rendering

    I get an "unhandled exception" message window while rendering

    Excessive memory usage while rendering

    I get splotches in my rendering when using the irradiance map

    Search Keywords: problems, troubleshooting, bug

    My system locks up and freezes while rendering

    This is most likely a hardware problem. V-Ray cannot cause system freeze by itself.

    The worst that V-Ray can do is crash 3ds Max. Check your memory chips and/or

    your CPU temperature.

    I get an "unhandled exception" message window while rendering:

    The exact text in the message box may differ, depending on where in V-Ray the

    error occurred. There are different reasons for this message to appear:

    Insufficient RAM - one of the most common reasons for the unhandled

    exception. See the section on Excessive memory usage below for more

    details.

    CPU overheating or RAM defects - this is another reason for unhandled

    exceptions that has become quite common recently with the increased

    clock speed of modern processors. It is characterized by random crashes

    during rendering and may be quite difficult to distinguish from a software

    problem. Installing a CPU temperature monitoring software and checking

    the RAM for defects may help to determine whether the problem is in the

    hardware or the software.

    Crashes with motion blur enabled - a common problem is when there

    are motion-blurred objects depending on one another; in that case, hiding

    the original object and using a copy of it for rendering helps.

    Using the native 3ds Max Skylight light type - either on its own, or as a

    part of the Daylight system. If this is the case, use V-Ray's own

    environment dialog to create the skylight effect.

    Incompatibility with other plugins - if you suspect this is the reason for

    the error, please write to [email protected] and to the plugin vendor

    and explain the situation. Please note that the problem might be in

    the plugin, and not in V-Ray. Some plugins were specifically coded for

    the default scanline renderer and may behave unpredictably with V-Ray.

    A bug in V-Ray - if you believe that this is the problem, try to isolate it (if it

    occurs in a specific situation related to a certain object, material,

    atmospheric effect etc.) and email the file c:\vraylog.txt as well as the

    3dsmax scene to [email protected]

    Excessive memory usage while rendering

    Like every other program, V-Ray needs a certain amount of RAM to render the

    image. Depending on the scene complexity and the render settings, V-Ray will use

    varying amounts of RAM. Sometimes, the available system RAM may be less than

    the amount needed for rendering. In this case, you will most likely get an

    unhandled exception. You can check the memory usage from the Windows Task

    Manager.

    On 32-bit machines, the Windows operating system allows by default up to 1.5

    GB for any single process (application). That means that even though you may

    have more physical RAM (for example 2GB), the operating system will not allow an

    application to use all of that memory.

    However, on the Windows XP operating system, you can change this by using the

    famous /3GB switch in your boot.ini file. This will enable the OS to allocate up to 3

    GB of RAM for any given application. Using that switch may allow you to render

    your scene without using any of the methods below for reducing memory usage.

    On 64-bit platforms, the 64-bit version of the Windows operating system allows

    the usage of all available physical memory without limitations.

    If you cannot use any of these methods to allow more RAM for rendering, the only
    choice is to reduce the amount that is needed by changing your scene and your
    V-Ray settings. The scene elements that take up most of the RAM while rendering
    can be divided into the following groups:

    Geometry - scenes with lots of objects and/or triangle counts require more

    memory to render. There are several ways to reduce this amount:

    o Adjust the raycaster settings in the System rollout (reduce Max.

    levels, increase Min. leaf size, increase Face/level coefficient,

    switch from Static to Dynamic Default Geometry).

    o If all else fails, use VRayProxy objects.


    Mapping UVW channels - in 3dsmax, every mapping channel takes up as much
    RAM as the geometry itself, or more. Unused mapping

    channels can increase RAM usage dramatically, while not affecting the

    scene in any way. In recent 3dsmax versions, texture channels are

    generated by default for all objects when they are created. V-Ray has no control
    over RAM usage for texture coordinates - you will have to make sure that only
    the channels you need are actually present in the scene. Using VRayProxy objects
    is also a solution, since in that case the texture coordinates are cached to disk
    along with the actual geometry.

    Displacement mapping - objects displaced with the 2d displacement

    mapping method may require a lot of RAM to render, especially with large

    displacement maps. If this is the case, use the 3d displacement mapping

    method. Also, if you have several distinct displacement modifiers with the

    same displacement map, it is better to replace them with one modifier,

    applied to all the necessary objects. This is because each modifier will

    take RAM for the displacement map, separately from other modifiers,

    even if they have the same map.

    Bitmaps - these tend to take up large amounts of RAM, especially if the

    maps are large. Since textures are managed by 3dsmax, V-Ray has no

    direct control over their memory usage. However, you can use the Bitmap

    pager settings of 3dsmax to reduce the RAM taken up by bitmaps. For

    more information, consult your 3dsmax documentation.

    Bitmap filtering - Summed area filtering uses much more memory than

    Pyramidal filtering.

    Shadow maps - these may also take up significant amounts of RAM. Again,

    these are managed by 3dsmax and V-Ray has no direct control over their

    memory usage. To reduce memory usage, you can switch to raytraced

    VRayShadows instead.

    Image buffer - large output resolutions require a significant amount of

    RAM to store the final image. Additional G-Buffer channels increase that

    amount. There are several ways to reduce this amount:

    o Use the 3dsmax Bitmap pager, if you are rendering to the 3dsmax
    default VFB.

    o If you use V-Ray's own VFB, use the Render to V-Ray raw image file
    option and then use the V-Ray raw image file viewer to convert the
    resulting file to a different format.

    o Render the image in several different passes and stitch the pieces

    in a compositing program.

    Image samplers (AA) - the image sampling algorithms of V-Ray require

    some amount of RAM to hold all the data for the sampled image. This

    amount can be quite large, depending on the chosen bucket size and

    sampling rate. To reduce that amount:

    o Reduce the bucket size.

    o Switch to a different image sampler - for example, the Adaptive

    DMC sampler uses less RAM than the Adaptive subdivision

    sampler.

    Global illumination caches - irradiance maps, photon maps and light

    maps all require additional memory to store them. Each of these has

    different methods for controlling its memory usage:

    o For the irradiance map - the memory depends on the number of

    samples in the map; you can reduce this number by using lower

    Min/Max rate, and more loose threshold values (higher Color

    threshold, higher Normal threshold, lower Distance

    threshold).

    o For the photon map - the memory depends on the number of

    photons stored. You can reduce this number by reducing the

    Diffuse subdivs for the lights, or by increasing the Max. density.

    o For the light map - increase the Sample size.

    3dsmax scene - 3dsmax itself stores a lot of information about the scene.

    V-Ray has no control over that memory, but here are some things you can do to reduce it:

    o Collapse modifiers to editable meshes

    o Cache animations with a PointCache modifier
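As a back-of-the-envelope illustration of the image buffer item above: full-float storage grows with resolution and with each extra G-Buffer element. The channel count and byte sizes here are illustrative assumptions, not V-Ray's exact bookkeeping.

```python
def image_buffer_mb(width, height, channels=4, bytes_per_channel=4,
                    gbuffer_elements=0):
    # Rough estimate: an RGBA frame buffer at 32 bits per channel,
    # plus one extra full-size layer per additional render element.
    layers = 1 + gbuffer_elements
    return width * height * channels * bytes_per_channel * layers / 2**20

# A 4000 x 3000 float RGBA image:
print(round(image_buffer_mb(4000, 3000)))                      # 183 (MB)
# The same image with three extra G-Buffer channels:
print(round(image_buffer_mb(4000, 3000, gbuffer_elements=3)))  # 732 (MB)
```

This is why rendering to a raw image file on disk, or splitting the frame into passes, helps at very large resolutions.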

    I get splotches in my rendering when using the irradiance map

    There may be several reasons for splotches when rendering with the irradiance

    map:

    Regular noisy splotches - these are usually a result of insufficient

    Hemispheric subdivisions for the irradiance map. Usually they appear

    in difficult lighting situations when the default setting is too low. Examples

    of difficult lighting conditions are small bright sources of indirect light, HDRI
    environments, etc. You can avoid these splotches in several ways:

    o If you are using Quasi-Monte Carlo GI for secondary GI bounces,

    try using another method - the light map or the photon map.

    o Increase the Hemispheric subdivisions for the irradiance map.

    Note that the effect of the increased subdivisions depends also on

    the settings of the DMC sampler.

    o Decrease the Noise threshold of the DMC sampler.

    Isolated bright splotches - there may be different causes for these:

    o GI caustics - if you have reflective or refractive surfaces in your

    scene, especially if they are glossy, V-Ray may try to compute the

    GI caustics for these surfaces. Since caustics usually require a lot

    of sampling to get right, there may be splotches.

    o Incorrect or missing UVW coordinates - if some objects in your

    scene lack UVW coordinates, or the UVW coordinates fall outside

    the texture map, this can produce splotches or weird colors in the

    irradiance map. The solution would be to apply correct UVW

    coordinates to those objects.

    Frequently Asked Questions

    Question: I have a dual CPU system, but the rendering seems not to be

    multithreaded. What is wrong?

    Answer: Check the Multi-threading option in 3ds Max's preferences (Customize >
    Preferences > Rendering > Multi-threading) and make sure it is on.

    Question: When I render a particular object with the V-Ray renderer Dark

    rectangles appear at odd places on the object. When I render the same object with

    the Max default scanline renderer, the dark rectangles disappear.

    Answer: This could happen if your object is a thin box with overlapping faces.

    Increase the Height or remove the overlapping faces and the squares should go

    away.

    Question: How can I render wireframe with V-Ray?

    Answer: You can use the VRayEdges texture to get a similar (although not exactly

    the same) result.

Question: Why does rendering one and the same scene with different size region

divisions result in significantly different render times, even though there is no

difference in quality?

Answer: There should be no difference in the final image, no matter the

    size of your render regions. However, every region requires some setup time. Also,

    when you use antialiasing filters, there is a border around each region that needs to

    be additionally rendered so that the edges of regions blend smoothly. As the region

    size increases, this additional work takes less time compared to the total render.

Smaller regions allow for faster screen update - you can see your image in

progress; regions save memory; regions allow easy multithreading and (more

importantly) easy distributed rendering. The choice of slowdown/update rate is best

    left to the user. Values of 16-64 are recommended.

Question: Why does the HDRI map have no effect on the specular level of the rendered

    objects?

Answer: With the standard MAX materials, the specular level is just a way to

simulate a shiny look for objects. For a real-world object to be shiny, it has to be

reflective. The same applies to objects rendered in V-Ray. To achieve a shiny look

with V-Ray lights, glowing objects and environment maps, you will need to make the

objects reflective.

    Question: Why do I get very weak shadows produced by the HDRI map?

Answer: For sharper shadows, get an HDRI map with a high enough dynamic range.

Question: Why do I get loss of detail when I render an object with a lot of detail

    using V-Ray GI?

Answer: Increase the max rate - for example, set it to 0. You may also try to reduce

the color and normal thresholds. Additionally, to make the GI more detailed you can

    (a) reduce the interpolation samples or (b) use another interpolation method -

    Delone triangulation will not blur the GI, it will just interpolate it; however the

    min/max rate still must be enough to capture all the detail.

Question: Will the distributed engine upload the complete scene to every machine

including textures etc., or will each machine just receive what is needed to render

its bucket?

Answer: It will ship the full scene without the textures, which means the textures

must be accessible from all the machines. You need not share the scene file itself.

    Question: Can you change easily the task priority for all machines that participate

    in distributed rendering?

    Answer: Yes, this is configurable. You can control the process priority remotely for

each rendering server. For more information refer to the Distributed rendering

    section.

    Question: Is the "machine list" in the distributed rendering based on machine

    names or IPs?

    Answer: It is based on machine IPs. You can give names to each machine as you

    wish and they don't have to be the same as your network names. The important

    part is a valid IP. There is an autosearch mechanism which looks for all the

    available servers in the local network and includes them in the list. You can

    configure a box manually though.

    Question: How do I use HDRI map with V-Ray to illuminate the scene?

    Answer: Either load it in your environment map if you want to show it in your

    background or in the render panel you can load it in the section for overriding the

    max environment for skylight.

    Question: How can I set the IOR when I use a standard material with VRayMap for

    the refractions?

    Answer: Change the IOR of the material (in the Extended parameter roll-up of the

    Standard material).

    Question: Does the antialiasing filter affect the irradiance map? Can I calculate an

    irradiance map with one filter and then use it with another?

  • Answer: No, the irradiance map is not affected by the antialiasing filter. In fact if

    you only need to compute an irradiance map, you can render without antialiasing

    and without a filter. You can add these later on for your final rendering.

    Question: Can I render the irradiance map at 400x400 and then use it to render a

    800x800 image? What will be the effect?

    Answer: Yes, you can do that. The final effect will be as though you have

    computed the irradiance map at a lower sampling rate.

    Question: Why does my fog light the scene when I use V-Ray with GI?

Answer: This is because of the way the MAX standard fog is written - it is self-

illuminated (unless you use volume light; standard fog and volume fog are

self-illuminated), and since V-Ray takes volumetrics into account when calculating light

    hitting surfaces for GI, you can see illumination coming from those volumetrics.

Terminology

Analytic sampling

This is one of V-Ray's techniques for calculating motion blur. Instead of taking a

    number of time samples, the analytic method blurs the moving triangles perfectly.

It takes into consideration all triangles crossing a given ray during a given time interval. Keep in mind that because of its "perfection" this method can be extremely

    slow on high-poly scenes with fast motion. (See also: Motion blur parameters,

Motion blur, Quasi Monte Carlo sampling)

Antialiasing (Image sampling)

    Antialiasing is a special technique for producing smooth images of high-contrast edges and small details in materials and objects. V-Ray achieves antialiasing by

    taking additional image samples where necessary. To determine if more samples

    are needed, V-Ray compares differences in color (and/or other properties) of

neighboring image samples. This comparison can be performed in several ways. V-

    Ray supports fixed, simple 2 level and adaptive antialiasing (See also: Image

samplers parameters, G-Buffer, G-Buffer Antialiasing)

Area lights

    Area light is a term describing a non-point light source. These types of light sources

    produce area shadows. V-Ray supports rendering of area lights through VRayLight.

(See also: VRayLight parameters, Area shadows)

Area shadows (Soft shadows)

    Area shadows are blurred shadows (or shadows with blurred edges) that are caused

    by non-point light sources (Area lights). V-Ray is capable of producing the effect of

    area shadows either through VRayShadow or through area lights. (See also:

VRayShadow parameters, Area lights)

BRDF (Bi-Directional Reflectance Distribution Function)

    One of the most general means to characterize the reflection properties of a surface

    is by use of the bi-directional reflection distribution function (BRDF), a function

    which defines the spectral and spatial reflection characteristic of a surface. V-Ray

supports the following BRDF types: Phong, Blinn, Ward. (See also: VRayMtl)

BSP (BSP Tree, Binary Space Partitioning Tree)

BSP is a special data structure for organizing scene geometry in order to speed up

    ray-triangle intersections (intersecting a ray with the triangles in the scene is the

    task most frequently performed by a raytracer). Currently V-Ray implements two

    types of BSP Tree. These are a static BSP Tree for scenes without motion blur and a

motion blur BSP Tree. (See also: Motion Blur)

Bucket (Region, Rendering region)

    A bucket is a rectangular part of the current frame that is rendered independently

    from other buckets. The division of a frame into rendering regions allows for

    optimal resource utilization (CPUs, PCs, memory). It also allows for distributed

rendering. (See also: Distributed rendering)

Caustics

This is the effect of light refracted by a non-opaque object hitting a (diffuse)

surface. (See also: Caustics parameters)

Depth of field (DOF)

Depth of field is the effect of having a particular point in the scene appear

focused (sharp) and the rest out of focus (blurry), depending on camera shutter

    properties and the distance from the camera. This is similar to how real world

    cameras work so this effect is especially useful for producing photorealistic images.

(See also: DOF parameters, Camera parameters)

Distributed rendering (DR)

    Distributed rendering is a technique for utilization of all available computational

    resources (all CPUs in a machine, all machines in a LAN, etc.). DR divides the

    currently processed frame into rendering regions and keeps all CPUs in LAN-

    connected machines busy computing the rendering result. Overall DR assures that

    V-Ray makes the most out of your equipment when rendering a single frame. For

    animation sequences however, you should use MAX's standard network rendering

as it may be more efficient. (See also: Bucket)

Early termination

    Early termination is a technique for reducing the samples taken for evaluating a

    blurry value. This basically works by looking at the samples as they are computed

    one by one and deciding, after each new sample, if more samples are required.

    Early termination is used throughout V-Ray for all blurry values. See also importance

sampling.

G-Buffer

    This term describes the collection of various data generated during image

    rendering. These could be Z-values, material IDs, object IDs, non-clamped colors

    etc. This has proven to be very useful for performing post-rendering image

    processing. (See also: G-Buffer parameters, Antialiasing, Image samplers

parameters)

G-Buffer Antialiasing

    V-Ray is capable of antialiasing the rendered image based on the differences in one

    or several G-Buffer channels. (See also: Antialiasing, Image sampler parameters,

G-Buffer)

HDRI (High Dynamic Range Image)

    A High Dynamic Range Image is an image containing high-dynamic range colors

(with components exceeding the range 0.0-1.0, or 0-255). This type of image is

often used as an environment map to light the scene with natural light.

Importance sampling

    Importance sampling is a technique for basing the number of samples required for

    evaluating a blurry value, on the effect that value has on the final result. For

    example, dark materials require fewer samples for evaluating GI than bright

materials; dim area lights can do with fewer samples than bright lights, etc.

    Importance sampling is used throughout V-Ray for all blurry values. See also early

termination.

Index of Refraction (IOR)

    The index of refraction is defined as the speed of light in vacuum divided by the

speed of light in a given medium: IOR = C/V, where C is the speed of light in vacuum

and V is the speed of light in the medium. To achieve a material with a specific IOR, set

    the Index of refraction field value in MAX's standard materials in the section

    Extended parameters.

    Material Index

    Vacuum 1.00000

    Air at STP 1.00029

    Ice 1.31

    Water at 20 C 1.33

Acetone 1.36

    Ethyl alcohol 1.36

    Sugar solution(30%) 1.38

    Fluorite 1.433

    Fused quartz 1.46

    Glycerin 1.473

    Sugar solution (80%) 1.49

    Typical crown glass 1.52

    Crown glasses 1.52-1.62

    Spectacle crown, C-1 1.523

    Sodium chloride 1.54

    Polystyrene 1.55-1.59

    Carbon disulfide 1.63

    Flint glasses 1.57-1.75

    Heavy flint glass 1.65

    Extra dense flint, EDF-3 1.7200

    Methylene iodide 1.74

    Sapphire 1.77

    Heaviest flint glass 1.89

    Diamond 2.417
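As an illustration, the IOR values above can be plugged into Snell's law (n1 * sin(theta_i) = n2 * sin(theta_t)) to see how strongly a given medium bends light. This is a generic Python sketch, not part of V-Ray:

```python
import math

def refract_angle(theta_i_deg, n1, n2):
    """Refraction angle (in degrees) of a ray passing from a medium with
    IOR n1 into a medium with IOR n2, by Snell's law.
    Returns None on total internal reflection."""
    s = n1 * math.sin(math.radians(theta_i_deg)) / n2
    if s > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# A ray entering water (IOR 1.33) from air at 30 degrees bends towards
# the normal, to about 22.1 degrees:
print(round(refract_angle(30.0, 1.00029, 1.33), 1))
```

Note that for a ray leaving a dense medium like diamond (IOR 2.417) at the same 30-degree angle, the function returns None: the ray is totally internally reflected.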

    Indirect Illumination (Global lighting, Global Illumination)

In the real world, when a ray of light hits an object it produces multiple reflected

rays with different intensity in all directions. These rays in their turn may hit some

other objects and produce even more rays, and so on. This process, repeated many times, generates the so-called Global Illumination. (See also: Indirect

    Illumination parameters, Irradiance map)

    Irradiance map

    Indirect Illumination in V-Ray is generally achieved by calculating GI samples. The

    irradiance map is a special cache where V-Ray keeps precalculated GI samples.

    During the rendering process when V-Ray needs a particular GI sample it computes

    it by interpolating the nearest precalculated GI samples stored in the irradiance

    map. Once computed, the Irradiance map can be saved in a file and reused in

    subsequent renderings. This can be especially useful for camera fly-through

    animations. Samples for VRayLight can also be stored in the irradiance map. (See

    also: Indirect Illumination parameters, Indirect Illumination, Area lights, Area

    shadows)
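The idea of interpolating a GI sample from nearby precalculated samples can be sketched as follows. This is a deliberately simplified model (inverse-distance weighting over a flat list of made-up samples), not V-Ray's actual irradiance map implementation:

```python
import math

# Precalculated GI samples: (position, irradiance) pairs.
# Positions and values here are invented for illustration.
cache = [((0.0, 0.0), 1.0), ((1.0, 0.0), 0.5), ((0.0, 1.0), 0.8)]

def irradiance(p, samples, k=3):
    """Estimate irradiance at point p by inverse-distance weighting
    of the k nearest precalculated samples."""
    nearest = sorted(samples, key=lambda s: math.dist(p, s[0]))[:k]
    w_total = 0.0
    value = 0.0
    for pos, val in nearest:
        w = 1.0 / (math.dist(p, pos) + 1e-6)  # avoid division by zero
        w_total += w
        value += w * val
    return value / w_total
```

Querying a point that coincides with a stored sample returns (almost exactly) that sample's value; points in between get a blend of their neighbors, which is the source of the smoothing typical for irradiance caching.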

    Low accuracy computations

In certain cases V-Ray does not need to compute absolutely precisely a ray's

    contribution to the final image. V-Ray will then use faster but less precise methods

    for computation and will take fewer samples. This produces slightly noisier results,

    but decreases rendering times. Users can control the degree of optimization by

adjusting when V-Ray switches to Low accuracy computations via the Degrade

depth values. (See also: Degrade depth, Low subdivs)

    (Quasi) Monte Carlo sampling

Monte Carlo sampling is a method for numerical computation of integrals of

    functions by evaluating these functions at a number of random points. Quasi Monte

    Carlo sampling is a modification of this method, which instead of randomly

    generated points uses points forming a low-discrepancy sequence, which are more

    evenly distributed than purely random ones. This is the method used by V-Ray to

    evaluate complex things like global illumination, blurry reflections, depth of field,

    motion blur and image antialiasing.
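The difference between purely random and low-discrepancy points can be demonstrated with a small numeric experiment. The van der Corput sequence below is a classic low-discrepancy sequence, used here only as an illustration (V-Ray's actual sequences may differ):

```python
import random

def van_der_corput(i, base=2):
    """Radical inverse of i in the given base - a classic
    low-discrepancy sequence on [0, 1)."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def estimate(points):
    # Estimate the integral of x^2 over [0, 1] (exact value: 1/3)
    return sum(x * x for x in points) / len(points)

n = 1024
qmc = estimate([van_der_corput(i) for i in range(1, n + 1)])
rng = random.Random(42)
mc = estimate([rng.random() for _ in range(n)])
```

With the same number of points, the low-discrepancy estimate is typically much closer to the exact value of 1/3 than the purely random one, which is why quasi-Monte Carlo sampling converges faster for smooth integrands.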

    Motion Blur

    This effect is observed when looking at some fast-moving object. The motion is so

fast that one cannot focus on the object, and the object's image appears blurred to the

    viewer. (See also: Motion Blur parameters, Analytic sampling, Monte Carlo

    sampling)

    Photon, Photon map

This is a simulation of real-world photons (a photon is a light particle). In order to

produce caustics effects, V-Ray traces a certain number of photons coming out of

    the light sources. Then the results are stored in a photon map and used during the

    rendering process so that highly realistic caustic effects are produced.
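The photon-map idea can be sketched in a deliberately tiny 2D model: photons are shot from a point light, their hit positions on a flat "floor" are stored, and illumination at a point is estimated from the density of stored photons nearby. All numbers are illustrative; this is not V-Ray's implementation:

```python
import math
import random

random.seed(1)
light_height = 1.0
num_photons = 10000
photon_power = 1.0 / num_photons  # the light's power split among photons
photon_map = []                   # stored hit positions (x coordinate)

for _ in range(num_photons):
    # Shoot the photon in a random downward direction...
    angle = random.uniform(-1.5, 1.5)  # radians, avoiding grazing angles
    # ...and record where it hits the floor (y = 0).
    photon_map.append(math.tan(angle) * light_height)

def irradiance_estimate(x, radius=0.1):
    """Density estimate: total power of the photons stored within the
    search radius, divided by the length of the search interval."""
    hits = sum(1 for p in photon_map if abs(p - x) <= radius)
    return hits * photon_power / (2 * radius)
```

Points directly below the light collect far more photons than points off to the side, so irradiance_estimate(0.0) comes out much larger than irradiance_estimate(3.0) - the same density-estimation principle that makes caustics appear where many photons concentrate.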

    Reflections

As an advanced raytracer, V-Ray supports accurate reflections. Glossy reflections

are supported as well. (See also: VRayMap parameters, VRayMtl parameters,

    Glossiness, Reflections, VRayMtl)

    Refractions

Refraction is the bending of a wave when it enters a medium where its speed is

    different. The refraction of light when it passes from a fast medium to a slow

    medium bends the light ray toward the normal to the boundary between the two

media. As an advanced raytracer, V-Ray supports accurate refractions. V-Ray

also handles glossy refractions. (See also: VRayMap parameters, VRayMtl parameters, IOR, Translucency, Glossiness, Reflections, VRayMtl)

Russian roulette

This is a technique for probabilistically terminating the tracing of light paths that contribute little to the final image; because the termination is chosen at random, no bias is introduced into the result on average.

Subdivs

In V-Ray, subdivs is a measure of the maximum number of samples (rays) that V-

    Ray will use to compute a certain value. The maximum number of samples is

    proportional to the square of the subdivs value. For example, if the subdivs value of

    a glossy reflection is 5, V-Ray will never make more than 5 x 5 = 25 samples to

    evaluate the reflection.
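The subdivs-to-samples relationship is simple enough to express directly:

```python
def max_samples(subdivs):
    # The maximum number of samples is the square of the subdivs value.
    return subdivs * subdivs

# A glossy reflection with 5 subdivs uses at most 25 samples:
print(max_samples(5))  # 25
```

Doubling a subdivs value therefore quadruples the maximum work V-Ray may spend on that effect.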

    Translucency

    Translucency is a term describing the interaction of light with a non-opaque

    medium (wax, marble, skin etc.). V-Ray supports a simple translucency model that

    can nevertheless produce quite natural results. (See also: VRayMap parameters,

    VRayMtl parameters, Refractions)

    GI methods

    The rendering equation

    Virtually all modern GI renderers are based on the rendering equation introduced by

    James T. Kajiya in his 1986 paper "The Rendering Equation". This equation

    describes how light is propagated throughout a scene. In his paper, Kajiya also

    proposed a method for computing an image based on the rendering equation using

    a Monte Carlo method called path tracing.

It should be noted that the equation has been known long before that in

    engineering and has been used for computing radiative heat transfer in different

    environments. However, Kajiya was the first to apply this equation to computer

    graphics.

    It should also be noted that the rendering equation is only "an approximation of

    Maxwell's equation for electromagnetics". It does not attempt to model all optical

    phenomena. It is only based on geometric optics and therefore cannot simulate

    things like diffraction, interference or polarization. However, it can be easily

    modified to account for wavelength-dependent effects like dispersion.

    Another, more philosophical point to make, is that the rendering equation is derived

    from a mathematical model of how light behaves. While it is a very good model for

    the purposes of computer graphics, it does not describe exactly how light behaves in the real world. For example, the rendering equation assumes that light rays are

    infinitesimally thin and that the speed of light is infinite - neither of these

    assumptions is true in the real physical world.

    Because the rendering equation is based on geometric optics, raytracing is a very

    convenient way to solve the rendering equation. Indeed, most renderers that solve

    the rendering equation are based on raytracing.

    Different formulations of the rendering equation are possible, but the one proposed

by Kajiya looks like this:

L(x, x1) = g(x, x1) * [ e(x, x1) + ∫S r(x, x1, x2) * L(x1, x2) dx2 ]

where:

    L(x, x1) is related to the light passing from point x1 to point x;

    g(x, x1) is a geometry (or visibility term);

    e(x, x1) is the intensity of emitted light from point x1 towards point x;

    r(x, x1, x2) is related to the light scattered from point x2 to point x through point

    x1;

    S is the union of all surfaces in the scene and x, x1 and x2 are points from S.

    What the equation means: the light arriving at a given point x in the scene from

    another point x1 is the sum of the light emitted from all other points x2 towards x1

and reflected towards x.
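A one-bounce Monte Carlo estimate of this sum can be sketched numerically. The functions g, e and r below are made-up constants standing in for a real scene description; only the structure of the estimate mirrors the equation:

```python
import random

def g(x, x1):
    return 1.0   # geometry/visibility term (always mutually visible here)

def e(x, x1):
    return 0.5   # intensity of light emitted from x1 towards x

def r(x, x1, x2):
    return 0.2   # light scattered from x2 towards x through x1

def L0(x, x1):
    # Zero bounces: only emitted light reaches x
    return g(x, x1) * e(x, x1)

def L1(x, x1, n=10000):
    # One bounce: emission plus a Monte Carlo estimate of the integral
    # of r(x, x1, x2) * L(x1, x2) over surface points x2 (here the
    # "surface" S is simply the unit interval, sampled uniformly).
    total = 0.0
    for _ in range(n):
        x2 = random.random()
        total += r(x, x1, x2) * L0(x1, x2)
    return g(x, x1) * (e(x, x1) + total / n)
```

With these constant stand-ins the result can be checked by hand: L1 = 1.0 * (0.5 + 0.2 * 0.5) = 0.6. Real path tracers apply the same recursion, but with physically based g, e and r and more bounces.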

    Except for very simple cases, the rendering equation cannot be solved exactly in a

    finite amount of time on a computer. However, we can get as close as we want to

    the real solution - given enough time. The search for global illumination algorithms has been a quest for finding solutions that are reasonably close, for a reasonable

    amount of time.

There is only one rendering equation; different renderers merely apply different

methods for solving it. If any two renderers solve this equation accurately enough,

they should generate the same image for the same scene. This holds well in

    theory, but in practice renderers often truncate or alter parts of the rendering

    equation, which may lead to different results.

    I: Exact vs approximate methods

    As noted above, we cannot solve the equation exactly - there is always some error,

    although it can be made very small. In some rendering methods, the desired error

    is specified in advance by the user and it determines the accuracy of the

calculations (e.g. GI sample density, GI rays, number of photons, etc.). A

disadvantage of these methods is that the user must wait for the whole calculation

    process to complete before the result can be used. Another disadvantage is that it

may take a lot of trial and error to find settings that produce adequate quality in

    a given time frame. However, the big advantage of these methods is that they can

    be very efficient within the specified accuracy bounds, because the algorithm can

    concentrate on solving difficult parts of the rendering equation separately (e.g.

    splitting the image into independent regions, performing several calculation phases

    etc.), and then combining the result.

    In other methods, the image is calculated progressively - in the beginning the error

    is large, but gets smaller as the algorithm performs additional calculations. At any

    one point of time, we have the partial result for the whole image. So, we can

    terminate the calculation and use the intermediate result.
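The anytime character of progressive methods can be illustrated with a classic Monte Carlo estimate (here of pi, not of lighting): the running estimate is usable at every checkpoint and tightens as samples accumulate:

```python
import random

random.seed(0)
inside = 0
checkpoints = {}
for n in range(1, 100001):
    # Sample a random point in the unit square and count hits
    # inside the quarter circle of radius 1.
    x, y = random.random(), random.random()
    if x * x + y * y <= 1.0:
        inside += 1
    if n in (100, 10000, 100000):
        # The partial result is valid at any point in time.
        checkpoints[n] = 4.0 * inside / n  # running estimate of pi
```

Each checkpoint holds a usable approximation; the expected error shrinks roughly with the square root of the sample count, so the calculation can be stopped whenever the intermediate result is good enough.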

    Exact (unbiased or brute-force) methods.

    Advantages:

    Produce very accurate results.

    The only artifact these methods produce is noise.

Renderers using exact methods typically have only a few controls for specifying image quality.

    Typically require very little additional memory.

    Disadvantages:

Unbiased methods are not adaptive and so are extremely slow to produce a

    noiseless image.

    Some effects cannot be computed at all by an exact method (for

    example, caustics from a point light seen through a perfect mirror).

    It may be difficult to impose a quality requirement on these methods.

    Exact methods typically operate directly on the final image; the GI

    solution cannot be saved and re-used in any way.

    Examples:

Path tracing (brute-force GI in some renderers).

    Bi-directional path tracing.

    Metropolis light transport.

    Approximate (biased) methods:

Advantages:

Adaptive, so these methods are typically a lot faster than exact methods.

    Can compute some effects that are impossible for an exact method

    (e.g. caustics from a point light seen through a perfect mirror).

    Quality requirements may be set and the solution can be refined until

    those requirements are met.

    For some approximate methods, the GI solution can be saved and re-

    used.

    Disadvantages:

    Results may not be entirely accurate (e.g. may be blurry) although

    typically the error can be made as small as necessary.

    Artifacts are possible (e.g. light leaks under thin walls etc).

    More settings for quality control.

    Some approximate methods may require (a lot of) additional memory.

    Examples:

Photon mapping.

    Irradiance caching.

    Radiosity.

    Light cache in V-Ray.

    Hybrid methods: exact methods used for some effects, approximate methods for others.

    Advantages:

    Combine both speed and quality.

    Disadvantages:

    May be more complicated to set up.

    Examples:

    Final gathering with Min/Max radius 0/0 + photon mapping in mental ray.

    brute force GI + photon mapping or light cache in V-Ray.

    Light tracer with Min/Max rate 0/0 + radiosity in 3ds Max.

    Some methods can be asymptotically unbiased - that is, they start with some

    bias initially, but it is gradually decreased as the calculation progresses.

    II: Gathering vs shooting methods

    Shooting methods

    These start from the lights and distribute light energy throughout the scene. Note

    that shooting methods can be either exact or approximate.

    Advantages:

    Can easily simulate some specific light effects like caustics.

    Disadvantages:

    They don't take into consideration the camera view; thus they might

    spend a lot of time for parts of the scene that are not visible or do not

    contribute to the image (e.g. caustics that are not visible - they must

    still be computed).

    Produce more precise solutions for portions of the scene that are close

    to lights; regions that are far from light sources may be computed with

    insufficient precision.

Cannot efficiently simulate all kinds of light effects, e.g. object lights and

    environment lights (skylight); non-physical light sources are difficult to

    simulate.

    Examples:

    photon mapping (approximate).

    particle tracing (approximate).

    light tracing (exact).

    some radiosity methods (approximate).

    Gathering methods

    These start from the camera and/or the scene geometry. Note that gathering

    methods can be either exact or approximate.

Advantages:

    They work based on which parts of the scene we are interested in;

    therefore, they can be more efficient than shooting methods.

    Can produce a very precise solution for all visible parts of the image.

    Can simulate various light effects (object and environment lights), non-

    physical lights.

    Disadvantages:

    Some light effects (caustics from point lights or small area lights) are

    difficult or impossible to simulate.

    Examples

    path tracing (exact)

    irradiance caching (final gathering in mental ray), (approximate).

    some radiosity methods (approximate).

    Hybrid methods

    These combine shooting and gathering; again, hybrid methods can be either exact or approximate.

    Advantages:

    Can simulate nearly all kinds of light effects

    Disadvantages:

    May be difficult to implement and/or set up.

    Examples:

    final gathering + photon mapping in mental ray (approximate).

    irradiance map/brute force GI + photon map in V-Ray (approximate).

    bi-directional path tracing and metropolis light transport (exact).

    some radiosity methods (approximate).

    III: Approximate methods: view-dependent vs view-independent solutions

    Some approximate methods allow the caching of the GI solution. The cache can be

    either view-dependent or view-independent.

    Shooting methods

    Advantages:

    Shooting methods typically produce a view-independent solution.

    Disadvantages:

    The solution is typically of low quality (blurry and lacking details).

    Detailed solution requires a lot of time and/or memory.

    Adaptive solutions are difficult to produce.

    Regions that are far from light sources may be computed with

    insufficient accuracy.

    Examples:

    photon mapping

some radiosity methods

    Gathering methods

    Gathering methods and some hybrid methods allow for both view-dependent and

    view-independent solutions.

    View-dependent solutions

    Advantages:

    Only the relevant parts of the scene are taken into consideration (no

    time is wasted on regions that are not visible).

    Can work with any kind of geometry (i.e. no restriction on geometry

    type).

    Can produce very high-quality results (keeping all the fine details).

    In some methods, view-dependent portions of the solution can be

    cached as well (glossy reflections, refractions etc).

    Require less memory than a view-independent solution.

    Disadvantages:

    Requires updating for different camera positions; still, in some

    implementations portions of the solution may be re-used.

    Examples:

    Irradiance caching (in V-Ray, mental ray, finalRender, Brazil r/s, 3ds

    Max's light tracer).

    View-independent solutions

    Advantages:

    Solution needs to be computed only once.

    Disadvantages:

    All of the scene geometry must be considered, even though some of it

    may never be visible.

The type of geometry in the scene is usually restricted to triangular or

    quadrangular meshes (no procedural or infinite geometry allowed).

    Detailed solutions require lots of memory.

    Only the diffuse portion of the solution can be cached; view-dependent

    portions (glossy reflections) must still be computed.

    Examples:

    Some radiosity methods.

    Hybrid methods

    Different combinations of view-dependent and view-independent techniques can be

    combined.

    Examples:

    photon mapping and irradiance caching in V-Ray.

photon mapping and final gathering in mental ray.

radiosity and light tracer in 3ds Max.

GI methods supported by V-Ray

    V-Ray supports a number of different methods for solving the GI equation - exact,

    approximate, shooting and gathering. Some methods are more suitable for some

    specific types of scenes.

    Exact methods

    V-Ray supports two exact methods for calculating the rendering equation: brute

    force GI and progressive path tracing. The difference between the two is that brute

    force GI works with traditional image construction algorithms (bucket rendering)

    and is adaptive, whereas path tracing refines the whole image at once and does not

    perform any adaptation.

    Approximate methods

All other methods used in V-Ray (irradiance map, light cache, photon map) are

    approximate methods.

    Shooting methods

    The photon map is the only shooting method in V-Ray. Caustics can also be

    computed with photon mapping, in combination with a gathering method.

    Gathering methods

    All other methods in V-Ray (brute force GI, irradiance map, light cache) are

    gathering methods.

    Hybrid methods

    V-Ray can use different GI engines for primary and secondary bounces, which

    allows you to combine exact and approximate, shooting and gathering algorithms,

    depending on what is your goal. Some of the possible combinations are

    demonstrated on the GI examples page.

Irradiance map viewer

    The irradiance map viewer can be used to view, merge and save irradiance map

    files created by V-Ray. Note that the viewer is a separate program and does not

    require 3ds Max in order to run.

    Installation

    By default, the installation of V-Ray for 3ds Max installs the irradiance map viewer

    in the folder [Program Files]\Chaos Group\V-Ray\3dsmax Rx for yyy\tools.

    A shortcut to the irradiance map viewer is also installed in the Start Menu: click

    Start Menu > Programs > Chaos Group > V-Ray for 3dsmax > Tools >

    Irradiance map viewer.

    Usage

    There are several methods of running the irradiance map viewer, which are outlined

    below.

    The simplest way is to double-click the imapviewer.exe file. This will bring

    up an Open File dialog box that lets you browse for an irradiance map to

    open. This is the same as starting the irradiance map viewer from the Start

    Menu. The menus of the program allow you to do various things with the

    irradiance map (merging and saving etc).

    You can also run the irradiance map viewer from the command line. In this

    case there are several possible choices:

    o If you type just

    > imapviewer


    on the command line, this is the same as double-clicking the file. It

    will bring up the File Open dialog for selecting an irradiance map file.

o You can also type

> imapviewer <filename>

where <filename> is the name of an irradiance map file. This file will

be opened automatically.

o A third way is to use the viewer to merge several maps into one:

> imapviewer -load <file1> -load <file2> ... [-save <result>] [-nodisplay]

This will load the specified maps and combine them into one

irradiance map. If the -save option is specified, the result will be

written to the given file. If the -nodisplay option is specified, the

resulting map will not be displayed (otherwise the viewer will display

the final result).

    Notes

    Use the "-help" option on the command line to see a short description of the

usage of the irradiance map viewer.

    .ply to .vrmesh converter

    This is a command-line utility that can be used to convert .ply and .obj files to

    .vrmesh files for rendering by V-Ray through a VRayProxy.

    Installation

    By default, the installation of V-Ray for 3ds Max installs the ply2vrmesh tool in the

    folder [Program Files]\Chaos Group\V-Ray\3dsmax Rx for yyy\tools. A

    shortcut to the converter is also installed in the Start Menu: click Start Menu >

    Programs > Chaos Group > V-Ray for 3dsmax > Tools > PLY to .vrmesh

    converter.

    Usage

> ply2vrmesh <infile> <outfile>

    This converts the given .ply or .obj file and writes it to the given .vrmesh file.

Note that you must specify the file extension; it will not be added automatically.

    There are also additional options that you can specify; run the ply2vrmesh

    program without any parameters to see the description of these options.

Notes

    The converter understands most of the popular .ply formats, both ASCII and

    binary, big-endian or little-endian.

    In addition to the geometric data (faces and vertices), the converter

    recognizes some of the most common additional channels like diffuse

    surface color. Vertex colors are recognized if they are specified as "red", "green" and "blue" or "diffuse_red", "diffuse_green" and "diffuse_blue"

    vertex properties. In 3ds Max, those can be visualized by applying a Vertex

    Color texture in the diffuse slot of the corresponding VRayProxy object.
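The channel-detection logic described above can be sketched in a few lines of Python. This is a hypothetical illustration of how a scan of an ASCII .ply header might recognize the two vertex-color naming conventions the converter accepts; it is not the actual ply2vrmesh code:

```python
# Sketch only: scan an ASCII .ply header and report whether per-vertex
# colors are present under either naming convention mentioned above.
# This is an illustration, not the actual ply2vrmesh implementation.

def detect_vertex_colors(header_lines):
    """Return the color property names found in the 'vertex' element, if any."""
    props = []
    in_vertex = False
    for line in header_lines:
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "element":
            in_vertex = (tokens[1] == "vertex")
        elif tokens[0] == "property" and in_vertex:
            props.append(tokens[-1])  # the property name is the last token
        elif tokens[0] == "end_header":
            break
    for names in (("red", "green", "blue"),
                  ("diffuse_red", "diffuse_green", "diffuse_blue")):
        if all(n in props for n in names):
            return names
    return None

header = """ply
format ascii 1.0
element vertex 8
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
element face 12
property list uchar int vertex_indices
end_header""".splitlines()

print(detect_vertex_colors(header))  # -> ('red', 'green', 'blue')
```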

GI methods

    The rendering equation

    Virtually all modern GI renderers are based on the rendering equation introduced by

    James T. Kajiya in his 1986 paper "The Rendering Equation". This equation

    describes how light is propagated throughout a scene. In his paper, Kajiya also

    proposed a method for computing an image based on the rendering equation using

    a Monte Carlo method called path tracing.

It should be noted that the equation was known long before that in

engineering, where it had been used for computing radiative heat transfer in different

    environments. However, Kajiya was the first to apply this equation to computer

    graphics.

    It should also be noted that the rendering equation is only "an approximation of

    Maxwell's equation for electromagnetics". It does not attempt to model all optical

    phenomena. It is only based on geometric optics and therefore cannot simulate

    things like diffraction, interference or polarization. However, it can be easily

    modified to account for wavelength-dependent effects like dispersion.

    Another, more philosophical point to make, is that the rendering equation is derived

    from a mathematical model of how light behaves. While it is a very good model for

    the purposes of computer graphics, it does not describe exactly how light behaves

    in the real world. For example, the rendering equation assumes that light rays are

    infinitesimally thin and that the speed of light is infinite - neither of these

    assumptions is true in the real physical world.

    Because the rendering equation is based on geometric optics, raytracing is a very

    convenient way to solve the rendering equation. Indeed, most renderers that solve

    the rendering equation are based on raytracing.

    Different formulations of the rendering equation are possible, but the one proposed

by Kajiya looks like this:

L(x, x1) = g(x, x1) [ e(x, x1) + ∫S r(x, x1, x2) L(x1, x2) dx2 ]

where:

    L(x, x1) is related to the light passing from point x1 to point x;

g(x, x1) is a geometry (or visibility) term;

    e(x, x1) is the intensity of emitted light from point x1 towards point x;

    r(x, x1, x2) is related to the light scattered from point x2 to point x through point

    x1;

    S is the union of all surfaces in the scene and x, x1 and x2 are points from S.

What the equation means: the light arriving at a given point x in the scene from

another point x1 is the light emitted directly from x1 towards x, plus the light

arriving at x1 from all other points x2 and scattered at x1 towards x.

    Except for very simple cases, the rendering equation cannot be solved exactly in a

    finite amount of time on a computer. However, we can get as close as we want to

    the real solution - given enough time. The search for global illumination algorithms

    has been a quest for finding solutions that are reasonably close, for a reasonable

    amount of time.
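To build some intuition for how a Monte Carlo method like path tracing approaches the exact solution given enough time, consider a drastically simplified "furnace" setting in which every surface emits an amount e and reflects a fraction a of all incoming light. The rendering equation then collapses to L = e + a*L, whose exact solution is L = e / (1 - a), and a random walk that accumulates the emission at every bounce and continues with probability a (a form of Russian roulette) converges to the same value. A minimal sketch for illustration only, not renderer code:

```python
import random

# "Furnace" toy model: every point emits e and reflects fraction a of the
# incident light.  The rendering equation collapses to L = e + a * L,
# so the exact answer is L = e / (1 - a).
# A path-traced random walk estimates the same value: accumulate the
# emission at each bounce and continue with probability a (Russian roulette).

def trace_path(e, a, rng):
    radiance = 0.0
    while True:
        radiance += e            # light emitted at the current bounce
        if rng.random() >= a:    # terminate the path with probability 1 - a
            return radiance

def estimate(e, a, samples, seed=0):
    rng = random.Random(seed)
    return sum(trace_path(e, a, rng) for _ in range(samples)) / samples

exact = 1.0 / (1.0 - 0.5)            # e = 1, a = 0.5 -> L = 2
approx = estimate(1.0, 0.5, 100_000)
print(exact, approx)
```

With 100,000 paths the estimate typically lands very close to the exact value of 2; halving the remaining noise requires roughly four times as many samples, which is exactly the slow convergence the following sections discuss.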

There is only one rendering equation; different renderers merely apply different

methods for solving it. If any two renderers solve this equation accurately enough,

then they should generate the same image for the same scene. This is all very well in

    theory, but in practice renderers often truncate or alter parts of the rendering

    equation, which may lead to different results.

    I: Exact vs approximate methods

    As noted above, we cannot solve the equation exactly - there is always some error,

    although it can be made very small. In some rendering methods, the desired error

    is specified in advance by the user and it determines the accuracy of the

calculations (e.g. GI sample density, number of GI rays, number of photons, etc.). A

    disadvantage of these methods is that the user must wait for the whole calculation

    process to complete before the result can be used. Another disadvantage is that it

may take a lot of trial and error to find settings that produce adequate quality in

    a given time frame. However, the big advantage of these methods is that they can

    be very efficient within the specified accuracy bounds, because the algorithm can

    concentrate on solving difficult parts of the rendering equation separately (e.g.

    splitting the image into independent regions, performing several calculation phases etc.), and then combining the result.

    In other methods, the image is calculated progressively - in the beginning the error

    is large, but gets smaller as the algorithm performs additional calculations. At any

    one point of time, we have the partial result for the whole image. So, we can

    terminate the calculation and use the intermediate result.
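The progressive idea can be illustrated with a single noisy "pixel": the running average of the samples is a usable intermediate result at any point, and its error shrinks (roughly as 1/sqrt(N)) as more samples accumulate. A hypothetical sketch, not actual renderer code:

```python
import random

# Progressive estimation of a single pixel whose true brightness is 0.5.
# Each sample is a noisy but unbiased measurement; the running mean is
# always a valid (if noisy) intermediate image that improves over time.

def progressive_pixel(true_value, total_samples, checkpoints, seed=0):
    rng = random.Random(seed)
    accum, results = 0.0, {}
    for n in range(1, total_samples + 1):
        accum += rng.uniform(0.0, 2.0 * true_value)  # unbiased noisy sample
        if n in checkpoints:
            results[n] = accum / n   # intermediate result, usable at any time
    return results

snapshots = progressive_pixel(0.5, 100_000, {100, 100_000})
for n, value in sorted(snapshots.items()):
    print(f"{n:>7} samples: estimate = {value:.4f}")
```

The early snapshot is noisy but already usable; the later one is much closer to the true value, which is why progressive methods let you stop the calculation whenever the result looks good enough.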

Exact (unbiased or brute-force) methods:

    Advantages:

    Produce very accurate results.

    The only artifact these methods produce is noise.

Renderers using exact methods typically have only a few controls for

    specifying image quality.

    Typically require very little additional memory.

    Disadvantages:

Unbiased methods are not adaptive and so are extremely slow to produce a

noiseless image.

    Some effects cannot be computed at all by an exact method (for

    example, caustics from a point light seen through a perfect mirror).

    It may be difficult to impose a quality requirement on these methods.

    Exact methods typically operate directly on the final image; the GI

    solution cannot be saved and re-used in any way.

    Examples:

Path tracing (brute-force GI in some renderers).

    Bi-directional path tracing.

    Metropolis light transport.

    Approximate (biased) methods:

    Advantages:

Adaptive, so they are typically a lot faster than exact methods.

Can compute some effects that are impossible for an exact method

    (e.g. caustics from a point light seen through a perfect mirror).

    Quality requirements may be set and the solution can be refined until

    those requirements are met.

    For some approximate methods, the GI solution can be saved and re-

    used.

    Disadvantages:

    Results may not be entirely accurate (e.g. may be blurry) although

    typically the error can be made as small as necessary.

    Artifacts are possible (e.g. light leaks under thin walls etc).

    More settings for quality control.

    Some approximate methods may require (a lot of) additional memory.

    Examples:

    Photon mapping.

    Irradiance caching.

    Radiosity.

    Light cache in V-Ray.

    Hybrid methods: exact methods used for some effects, approximate methods for others.

    Advantages: Combine both speed and quality.

    Disadvantages:

    May be more complicated to set up.

    Examples:

    Final gathering with Min/Max radius 0/0 + photon mapping in mental ray.

    brute force GI + photon mapping or light cache in V-Ray.

    Light tracer with Min/Max rate 0/0 + radiosity in 3ds Max.

    Some methods can be asymptotically unbiased - that is, they start with some

bias initially, but it gradually decreases as the calculation progresses.

    II: Gathering vs shooting methods

    Shooting methods

    These start from the lights and distribute light energy throughout the scene. Note

    that shooting methods can be either exact or approximate.

    Advantages:

    Can easily simulate some specific light effects like caustics.

    Disadvantages:

They don't take the camera view into consideration; thus they might spend a lot of time on parts of the scene that are not visible or do not

    contribute to the image (e.g. caustics that are not visible - they must

    still be computed).

    Produce more precise solutions for portions of the scene that are close

    to lights; regions that are far from light sources may be computed with

    insufficient precision.

Cannot efficiently simulate all kinds of light effects, e.g. object lights and

environment lights (skylight); non-physical light sources are difficult to

    simulate.

    Examples:

    photon mapping (approximate).

    particle tracing (approximate).

    light tracing (exact).

    some radiosity methods (approximate).

    Gathering methods

    These start from the camera and/or the scene geometry. Note that gathering

    methods can be either exact or approximate.

    Advantages:

    They work based on which parts of the scene we are interested in;

    therefore, they can be more efficient than shooting methods.

    Can produce a very precise solution for all visible parts of the image.

Can simulate various light effects (object and environment lights) as well as non-physical lights.

    Disadvantages:

    Some light effects (caustics from point lights or small area lights) are

    difficult or impossible to simulate.

Examples:

    path tracing (exact)

irradiance caching, e.g. final gathering in mental ray (approximate).

    some radiosity methods (approximate).

    Hybrid methods

    These combine shooting and gathering; again, hybrid methods can be either exact

    or approximate.

    Advantages:

    Can simulate nearly all kinds of light effects

    Disadvantages: May be difficult to implement and/or set up.

    Examples:

    final gathering + photon mapping in mental ray (approximate).

    irradiance map/brute force GI + photon map in V-Ray (approximate).

    bi-directional path tracing and metropolis light transport (exact).

    some radiosity methods (approximate).

    III: Approximate methods: view-dependent vs view-independent solutions

    Some approximate methods allow the caching of the GI solution. The cache can be

    either view-dependent or view-independent.

Shooting methods

    Advantages:

    Shooting methods typically produce a view-independent solution.

    Disadvantages:

    The solution is typically of low quality (blurry and lacking details).

    Detailed solution requires a lot of time and/or memory.

    Adaptive solutions are difficult to produce.

    Regions that are far from light sources may be computed with

    insufficient accuracy.

    Examples:

    photon mapping

    some radiosity methods

    Gathering methods

    Gathering methods and some hybrid methods allow for both view-dependent and

    view-independent solutions.

    View-dependent solutions

    Advantages:

    Only the relevant parts of the scene are taken into consideration (no

    time is wasted on regions that are not visible).

    Can work with any kind of geometry (i.e. no restriction on geometry

    type).

    Can produce very high-quality results (keeping all the fine details).

    In some methods, view-dependent portions of the solution can be

    cached as well (glossy reflections, refractions etc).

    Require less memory than a view-independent solution.

    Disadvantages:

Require updating for different camera positions; still, in some

    implementations portions of the solution may be re-used.

    Examples: Irradiance caching (in V-Ray, mental ray, finalRender, Brazil r/s, 3ds

    Max's light tracer).

    View-independent solutions

    Advantages:

    Solution needs to be computed only once.

    Disadvantages:

    All of the scene geometry must be considered, even though some of it

    may never be visible.

The type of geometry in the scene is usually restricted to triangular or

    quadrangular meshes (no procedural or infinite geometry allowed).

    Detailed solutions require lots of memory.

    Only the diffuse portion of the solution can be cached; view-dependent

    portions (glossy reflections) must still be computed.

Examples:

    Some radiosity methods.

    Hybrid methods

View-dependent and view-independent techniques can be combined in

different ways.

    Examples:

    photon mapping and irradiance caching in V-Ray.

    photon mapping and final gathering in mental ray.

    radiosity and light tracer in 3ds Max.

    GI methods supported by V-Ray

    V-Ray supports a number of different methods for solving the GI equation - exact,

    approximate, shooting and gathering. Some methods are more suitable for some

    specific types of scenes.

    Exact methods

    V-Ray supports two exact methods for calculating the rendering equation: brute

    force GI and progressive path tracing. The difference between the two is that brute

    force GI works with traditional image construction algorithms (bucket rendering)

    and is adaptive, whereas path tracing refines the whole image at once and does not

    perform any adaptation.

    Approximate methods

All other methods used by V-Ray (irradiance map, light cache, photon map) are

    approximate methods.

    Shooting methods

    The photon map is the only shooting method in V-Ray. Caustics can also be

    computed with photon mapping, in combination with a gathering method.

    Gathering methods

    All other methods in V-Ray (brute force GI, irradiance map, light cache) are

    gathering methods.

    Hybrid methods

    V-Ray can use different GI engines for primary and secondary bounces, which

allows you to combine exact and approximate, shooting and gathering algorithms, depending on your goal. Some of the possible combinations are

    demonstrated on the GI examples page.