FALLSEM2015-16 CP1853 (26-Aug-2015, RM01): Hidden Surface Elimination

D. Aju, School of Computing Science and Engineering

Visible Surface Determination
- In realistic scenes, closer objects occlude the ones behind them.
- There are several approaches, trading off more memory, more processing time, or applicability only to special kinds of objects.
- The task is called visible surface detection or hidden surface removal.
- We have to decide which method to use for a given scene.

Visible Surface Detection: Classification
- Object-space methods
- Image-space methods

Object-Space Methods
- An object-space method compares objects, and parts of objects, to each other within the scene definition to determine which surfaces, as a whole, should be labeled as visible.
- The algorithms decide which parts of the shapes are to be rendered, working in 3D coordinates.
- Methods are based on comparing the objects' 3D positions and dimensions with respect to the viewing position.
- For N objects this may require on the order of N x N comparison operations.
- Efficient for a small number of objects, but difficult to implement.
- Examples: depth-sorting and area-subdivision methods.

Image-Space Methods
- In an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane.
- These methods work on the pixels to be drawn in 2D and determine which object should contribute to each pixel.
- Running time is proportional to the number of pixels times the number of objects.
- Space complexity is twice the number of pixels: one array of pixels for the frame buffer and one array for the depth buffer.
- Coherence properties of surfaces can be exploited.
- Examples: depth-buffer and ray-casting methods.

Depth Cueing
- Hidden surfaces are not removed but are displayed with different effects, such as intensity, color, or shadow, to give a hint of the third dimension of the object.
- Simplest solution: use different color intensities based on the distances of the shapes.

Algorithms
- Back-Face Detection
- Depth-Buffer Method
- A-Buffer Method
- Scan-Line Method
- Depth-Sorting Method
- BSP-Tree Method
- Area-Subdivision Method
- Octree Method
- Ray-Casting Method

Image-Space Methods vs. Object-Space Methods
- Image-space methods: Depth-Buffer, A-Buffer, Scan-Line, Area-Subdivision.
- Object-space methods: Back-Face Detection, BSP-Tree, Area-Subdivision, Octree, Ray-Casting.

Back-Face Detection (Inside-Outside Test)
- The equation of a plane surface is expressed as Ax + By + Cz + D = 0, where (x, y, z) is any point in the plane and A, B, C, D are constants describing the properties of the plane.

- A point (x, y, z) is "inside" a surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0.

Back-Face Detection (Inside-Outside Test)
- The polygon is a back face if V · N > 0, where V is a vector in the viewing direction from the eye (camera) and N = (A, B, C) is the normal vector to the polygon surface.
- If the object descriptions have been converted to projection coordinates, the viewing direction is parallel to the viewing z_v axis. Then V = (0, 0, V_z) and V · N = V_z C, so only the sign of C needs to be examined.
- After the viewing transformation we are looking down the negative z_v axis, so the polygon is a back face if C <= 0 (a face with C = 0 is viewed edge-on and cannot be seen either).
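In code the test is a dot product followed by a sign check; a minimal Python sketch (the function names and the default view direction are illustrative assumptions, not part of the slides):

```python
def is_back_face(normal, view_dir=(0.0, 0.0, -1.0)):
    """Back-face test: the polygon is a back face if V . N > 0.
    `normal` is the plane normal N = (A, B, C); `view_dir` is V."""
    A, B, C = normal
    vx, vy, vz = view_dir
    return vx * A + vy * B + vz * C > 0


def is_back_face_projected(normal):
    """After the viewing transformation (looking down the negative z_v axis)
    the test reduces to the sign of C; C == 0 is an edge-on, invisible face."""
    _, _, C = normal
    return C <= 0
```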

Back-Face Detection
- Back-face detection can identify all of the hidden surfaces in a scene that contains only non-overlapping convex polyhedra.
- For scenes that contain overlapping objects along the line of sight, further tests must be applied to determine which objects obscure which.
- (Figure: view of a concave polyhedron with one face partially hidden by other surfaces.)

Depth-Buffer Method
- Also known as the z-buffer method; an image-space approach.
- Each surface is processed separately, one pixel position at a time across the surface.
- The depth values for a pixel are compared, and the closest surface determines the color to be displayed in the frame buffer.
- Applied very efficiently to polygon surfaces.
- Two buffers are used:
  - Frame buffer (image buffer): stores the color value of each pixel position.
  - Depth buffer (z-buffer): stores the depth value for each (x, y) position.
- The z-coordinates (depth values) are usually normalized to the range [0, 1].

Depth-Buffer Algorithm
1. Initialize the depth buffer and frame buffer so that for all buffer positions (x, y): depthBuff(x, y) = 1.0 and frameBuff(x, y) = bgColor.
2. Process each polygon in the scene, one at a time.
3. For each projected (x, y) pixel position of a polygon, calculate the depth z.
4. If z < depthBuff(x, y), compute the surface color at that position and set depthBuff(x, y) = z, frameBuff(x, y) = surfCol(x, y).
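The steps above translate almost directly into a loop; a sketch in Python, where the polygon object and its `projected_pixels`, `depth_at`, and `color_at` helpers are assumptions made for illustration:

```python
def depth_buffer_render(width, height, polygons, bg_color=(0, 0, 0)):
    """Depth-buffer (z-buffer) sketch: depths are normalized to [0, 1],
    and a smaller z means closer to the view plane."""
    depth_buff = [[1.0] * width for _ in range(height)]       # step 1: init depths
    frame_buff = [[bg_color] * width for _ in range(height)]  # step 1: init colors

    for poly in polygons:                      # step 2: one polygon at a time
        for x, y in poly.projected_pixels():   # assumed rasterizer of the projection
            z = poly.depth_at(x, y)            # step 3: depth from the plane equation
            if z < depth_buff[y][x]:           # step 4: closer than the stored value?
                depth_buff[y][x] = z
                frame_buff[y][x] = poly.color_at(x, y)
    return frame_buff
```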

Calculating Depth Values Efficiently
- The equation of the plane surface is Ax + By + Cz + D = 0.
- The depth at a projected pixel position (x, y) can be calculated from the polygon's plane equation:
  z = (-Ax - By - D) / C
- Along any scan line, adjacent horizontal x positions (and adjacent vertical y positions) differ by 1 unit, so the depth value of the next position (x + 1, y) on the scan line can be obtained incrementally:
  z' = z - A / C
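The incremental update is cheap because the step is a constant per scan line; a small sketch, assuming the plane coefficients A, B, C, D of the polygon are already known (hypothetical helper, not from the slides):

```python
def depths_along_scan_line(A, B, C, D, y, x_start, x_end):
    """Yield (x, z) across one scan line: the first depth comes from the full
    plane equation, each following one from the increment z' = z - A / C."""
    z = (-A * x_start - B * y - D) / C   # full evaluation once per span
    step = -A / C                        # constant change in depth per unit x
    for x in range(x_start, x_end + 1):
        yield x, z
        z += step
```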

Depth-Buffer Method
- The depth-buffer method is able to handle cases such as the one shown in the accompanying figure (view from the right side).

Z-Buffer and Transparency
- We may want to render transparent surfaces (alpha < 1) with a z-buffer.
- However, we must then render in back-to-front order; otherwise, we would have to store at least the first opaque polygon behind a transparent one.
- (Figure: a partially transparent surface in front of two opaque surfaces, contrasting an unproblematic back-to-front ordering with a problematic ordering in which the color and depth of the first surfaces must be recalled.)

A-Buffer Method
- An extension of the ideas in the depth-buffer method.
- The A-buffer is an antialiased, area-averaged, accumulation-buffer method developed by Lucasfilm for the surface-rendering system REYES (an acronym for "Renders Everything You Ever Saw").
- It extends the depth-buffer algorithm so that each position in the buffer can reference a linked list of surfaces.
- More memory is required; however, different surface colors can be composed correctly and transparent surfaces can be handled.
- Each position in the A-buffer has two fields:
  - Depth field: stores a positive or negative real number.
  - Intensity field: stores surface-intensity information or a pointer value.

(Figure: single-surface overlap vs. multiple-surface overlap at a pixel.)

A-Buffer Method
- Data for each surface in the linked list include:
  - RGB intensity components
  - opacity parameter (percent of transparency)
  - depth
  - percent of area coverage
  - surface identifier
  - pointer to the next surface
- If the depth field is positive, the number at that position is the depth of a single surface and the intensity field stores its RGB color.
- If the depth field is negative, multiple surfaces contribute to the pixel and the intensity field stores a pointer to a linked list of surfaces.
- The A-buffer can be constructed using methods similar to those in the depth-buffer algorithm.
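One way to mirror these fields in code is a small linked-list node per contributing surface plus a two-field pixel record; this layout is only a plausible sketch, not the REYES data structure:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SurfaceFragment:
    """One entry in the per-pixel linked list of surfaces."""
    rgb: Tuple[float, float, float]            # RGB intensity components
    opacity: float                             # opacity parameter (0..1)
    depth: float
    coverage: float                            # percent of pixel area covered
    surface_id: int
    next: Optional["SurfaceFragment"] = None   # pointer to the next surface


@dataclass
class ABufferCell:
    """A-buffer position: depth >= 0 means a single surface whose RGB is in
    `intensity`; depth < 0 means `intensity` is the head of a fragment list."""
    depth: float = 1.0
    intensity: object = None                   # RGB tuple or SurfaceFragment
```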

Scan lines are processed to determine surface overlaps of pixels across the individual scan lines.

Surfaces are subdivided into a polygon mesh and clipped against the pixel boundaries.

Using the opacity factors and the percentage of surface overlap, we can calculate the intensity of each pixel.

Scan-Line Method

In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible.

Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane.

When the visible surface has been determined, the intensity value for that position is entered into the image buffer.

(Figure: two surfaces S1 and S2 in the x_v-y_v view plane, crossed by scan lines 1, 2, and 3.)
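A much-simplified sketch of processing one scan line, where the `intersects_scan_line`, `span_on_line`, `depth_at`, and `color_at` helpers are assumptions standing in for the edge tables a real implementation would keep:

```python
def render_scan_line(y, surfaces, width, frame_row, bg_color=(0, 0, 0)):
    """For one scan line, examine the surfaces crossing it and, at each pixel,
    keep the color of the surface nearest to the view plane."""
    active = [s for s in surfaces if s.intersects_scan_line(y)]
    for x in range(width):
        nearest, nearest_z = None, float("inf")
        for s in active:
            x_left, x_right = s.span_on_line(y)    # x-extent of s on this line
            if x_left <= x <= x_right:
                z = s.depth_at(x, y)               # depth from the plane equation
                if z < nearest_z:
                    nearest, nearest_z = s, z
        frame_row[x] = nearest.color_at(x, y) if nearest else bg_color
```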

Depth Sorting (Painter's Algorithm)
- In creating an oil painting, the artist first paints the background colors.
- The most distant objects are added, then the nearer objects, and so forth.
- Finally, the foreground is painted over all the other objects.

- Each layer of paint covers up the previous layer.

Depth Sorting: Process
- Sort the surfaces according to their distance from the view plane.
- The intensities for the farthest surface are entered into the refresh buffer first.
- Each succeeding surface is then taken in order of decreasing depth.
- In other words, the distant objects are drawn before the closer objects; the pixels of each object overwrite those of the objects drawn before it.
- (Figure: six numbered polygons drawn in depth order, each new polygon painted over the previous ones.)

Depth Sorting (Overlapping Test - 1)
- For each surface S' that overlaps S in depth, test whether the bounding rectangles of the two surfaces in the xy plane do not overlap.

(Figure: the x_v and y_v extents of two surfaces; first check for overlap in the x direction, then check for overlap in the y direction.)

Depth Sorting (Overlapping Test - 2)
- Surface S is completely behind the overlapping surface S' relative to the viewing position.

Depth Sorting (Overlapping Test - 3)
- The overlapping surface S' is completely in front of S relative to the viewing position.

Depth Sorting (Overlapping Test - 4)
- The projections of the two surfaces onto the view plane do not overlap.

Depth Sorting
- If every overlapping surface passes at least one of these tests, none of them is behind S and no reordering is necessary.

Depth Sorting (Surface Reordering)
- If all four tests fail for some surface S', interchange surfaces S and S' in the sorted list.
- Repeat the tests for each surface that is reordered in the list.
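Setting the overlap tests aside, the core of the painter's algorithm is just a sort followed by far-to-near drawing; a minimal sketch in which `max_depth` and `draw_into` are assumed helpers on each surface:

```python
def painters_render(surfaces, frame_buff):
    """Painter's algorithm core: draw the farthest surfaces first and let the
    nearer ones overwrite them (larger depth = farther from the view plane)."""
    for surf in sorted(surfaces, key=lambda s: s.max_depth(), reverse=True):
        surf.draw_into(frame_buff)   # rasterize, overwriting earlier pixels
```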

Depth Sorting (Drawback)
- If two or more surfaces alternately obscure each other, the reordering can fall into an infinite loop.
- To avoid this, flag any surface that has been reordered to a farther depth position so that it cannot be moved again.
- If an attempt is made to switch such a surface a second time, divide it into two parts to eliminate the cyclic overlap; the original surface is then replaced by the two new surfaces.

BSP Tree Method (Characteristics)
- A binary space-partitioning (BSP) tree determines object visibility by painting surfaces onto the screen from back to front, like the painter's algorithm.
- It is particularly useful when the view reference point changes while the objects in the scene remain at fixed positions.

BSP Tree Method (Process)
- Identify the surfaces lying inside and outside the partitioning plane.
- If an object is intersected by the partitioning plane, divide it into two separate objects (A and B).

BSP Tree Method (Examples)
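As a worked example of the process just described, here is a minimal Python sketch of building and then painting a BSP tree; the `classify` and `split` helpers (which decide front/back and cut intersected polygons) and `eye_in_front` are assumptions:

```python
class BSPNode:
    def __init__(self, partition, front=None, back=None):
        self.partition = partition          # polygon whose plane partitions space
        self.front, self.back = front, back


def build_bsp(polygons, classify, split):
    """classify(plane_poly, poly) -> 'front', 'back' or 'spanning';
    split(plane_poly, poly) -> (front_part, back_part)."""
    if not polygons:
        return None
    root, rest = polygons[0], polygons[1:]
    front, back = [], []
    for p in rest:
        side = classify(root, p)
        if side == "front":
            front.append(p)
        elif side == "back":
            back.append(p)
        else:                               # intersected: divide into two objects
            f, b = split(root, p)
            front.append(f)
            back.append(b)
    return BSPNode(root,
                   build_bsp(front, classify, split),
                   build_bsp(back, classify, split))


def paint_back_to_front(node, eye_in_front, draw):
    """Painter's order: draw the subtree on the far side of the partition,
    then the partition polygon itself, then the near subtree."""
    if node is None:
        return
    if eye_in_front(node.partition):
        paint_back_to_front(node.back, eye_in_front, draw)
        draw(node.partition)
        paint_back_to_front(node.front, eye_in_front, draw)
    else:
        paint_back_to_front(node.front, eye_in_front, draw)
        draw(node.partition)
        paint_back_to_front(node.back, eye_in_front, draw)
```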

BSP Tree Method
- When the BSP tree is complete, we process the tree by selecting the surfaces for display in back-to-front order, so that foreground objects are painted over the background objects.

Area-Subdivision Method (Characteristics)
- Takes advantage of area coherence by locating those view areas that represent part of a single surface.
- The total viewing area is successively divided into smaller rectangles until each area is a single pixel, is covered wholly by a part of a single visible surface, or contains no surface at all.

- This requires tests that either identify an area as part of a single surface or tell us that the area is too complex to analyze easily.

Area-Subdivision Method (Process)
- Starting with the total view, apply the identifying tests.
- If the tests indicate that the view is sufficiently complex, subdivide it.
- Apply the tests to each of the smaller areas, subdividing further until each area belongs to a single surface or reaches the size of a single pixel.
- Example: with a resolution of 1024 x 1024, the area can be subdivided at most 10 times before being reduced to a point.

Area-Subdivision Method (Identifying Tests)
- There are four possible relationships between a surface and a rectangular area:
  - Surrounding surface: completely encloses the area.
  - Overlapping surface: partly inside and partly outside the area.
  - Inside surface: completely inside the area.
  - Outside surface: completely outside the area.

(Figure: surrounding, overlapping, inside, and outside surfaces with respect to a rectangular area.)

Area-Subdivision Method (Identifying Tests)
- No further subdivision of an area is needed if one of the following conditions is true:
  - All surfaces are outside surfaces with respect to the area.
  - Only one inside, overlapping, or surrounding surface is in the area.
  - A surrounding surface obscures all other surfaces within the area boundaries.

Octree Method (Characteristics)
- An extension of the area-subdivision method in which octree nodes are projected onto the view plane in front-to-back order.
- The nodes for the four front sub-octants of octant 0 are visited before the nodes for the four back sub-octants.
- A pixel in the frame buffer is assigned a color only if no value has previously been stored there, so only the front colors are loaded.

Displaying an Octree
- Map the octree onto a quadtree of visible areas by traversing the octree nodes from front to back in a recursive procedure.
- The quadtree representation of the visible surfaces is then loaded into the frame buffer.
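A sketch of that front-to-back visit, assuming each octree node exposes `is_leaf`, `children`, `region`, and `color`, and the quadtree keeps only the first color written to a cell (all of these are illustrative assumptions):

```python
def traverse_front_to_back(node, octant_order, quadtree):
    """Visit octree nodes front to back; because the quadtree only accepts the
    first write per cell, only the front colors end up in the frame buffer."""
    if node is None:
        return
    if node.is_leaf():
        quadtree.store_if_empty(node.region, node.color)   # keeps the first write
        return
    for i in octant_order:   # e.g. the four front sub-octants before the back ones
        traverse_front_to_back(node.children[i], octant_order, quadtree)
```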

(Figure: the eight octants, numbered 0-7, in space.)

Ray Casting Method (Characteristics)
- Based on geometric-optics methods that trace the paths of light rays.
- A line of sight is cast from a pixel position on the view plane through the scene; we determine which objects intersect this line and identify the visible surface as the one whose intersection point is closest to the pixel.
- Since there is an infinite number of light rays, we consider only the rays that pass through pixel positions, tracing the light-ray paths backward from the pixels.
- Ray casting is an effective visibility-detection method for scenes with curved surfaces.
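A small ray-casting sketch for a scene of spheres, using the standard ray-sphere intersection; the scene representation (center, radius, color triples) is an assumption made for illustration:

```python
import math


def intersect_sphere(origin, direction, center, radius):
    """Smallest positive ray parameter t where origin + t*direction hits the
    sphere, or None if the ray misses it."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)   # nearer of the two roots
    return t if t > 0.0 else None


def cast_ray(origin, direction, spheres):
    """Identify the visible surface: the intersection closest to the pixel."""
    nearest_t, nearest_color = float("inf"), None
    for center, radius, color in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and t < nearest_t:
            nearest_t, nearest_color = t, color
    return nearest_color   # None means the ray through this pixel hit nothing
```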

Comparison

Algorithm / Method          | Memory          | Speed                             | Issues in Implementation               | Remarks
Z-Buffer                    | Two arrays      | Depth complexity                  | Scan conversion, hardware              | Commonly used
Painter's                   | One array       | A priori sorting helps            | Sorting, scan conversion               | Splitting and sorting are the major bottleneck
Ray Casting                 | Object database | O(#pixels x #surfaces or objects) | Spatial data structures help speed up  | Excellent for CSG, shadows, transparency
Scan-Line, Area-Subdivision |                 |                                   | Hard                                   | Cannot be generalized for non-polygonal models