3d projections
Mark Nelson (mjas@itu.dk)
Fall 2013


The 3d pipeline (expansive view)

Tools stage → Asset conditioning stage → Application stage → Geometry processing stage → Rasterization stage

Tools stage

3d modeling

Export meshes (possibly w/ metadata)

Create textures

Asset conditioning stage

Platform- or engine-specific format conversions

Dependency resolution

"Baked-in" effects, e.g. static lighting

Application stage

Run-time management in the engine

Prepare a scene:
Combine e.g. movable objects into one scene description
Omit anything that can't possibly be visible
Set GPU rendering parameters

Basic GPU pipeline

Receive triangles: triples of (x,y,z) vertices

Compute transformations

Rasterize: turn into (x,y) screen pixels

World space

One 3d coordinate system with all objects in a scene

Pre-culled by the engine to omit things that can't possibly be visible

Constitutes the world geometry: e.g., can compute distances, collisions, etc.

Model space

We could have only world space, but we often model objects externally (e.g. in 3dsmax)

Model space is the local coordinate space of one model, independent of a scene

Typically: centered at (0,0,0), aligned to an axis

Model to world space

To build a scene, all models have to be converted from local to world coordinates

Place in scene, then translate, rotate, and/or scale

Can be done ahead of time or on the GPU
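
A minimal numpy sketch of this model-to-world step, assuming a uniform scale followed by a rotation about the z axis and a translation; the name model_to_world and the example values are just placeholders:

import numpy as np

def model_to_world(vertices, scale, angle_z, translation):
    # vertices: (N, 3) array of model-space points.
    # Scale, then rotate about the z axis, then translate into the scene.
    c, s = np.cos(angle_z), np.sin(angle_z)
    rot_z = np.array([[c, -s, 0],
                      [s,  c, 0],
                      [0,  0, 1]])
    return (scale * np.asarray(vertices, dtype=float)) @ rot_z.T + np.asarray(translation, dtype=float)

# A couple of model-space corners of a unit cube, placed into the world.
cube_corners = np.array([[0.5, 0.5, 0.5], [-0.5, 0.5, 0.5]])
world = model_to_world(cube_corners, scale=2.0, angle_z=np.pi / 4, translation=[10, 0, 5])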

Scene graph

Hierarchical data structure: represents how to build a scene out of models

Root is world space; a transformation applies to anything below it in the tree

Can enable other optimizations
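
A minimal sketch of such a hierarchy, using the homogeneous 4x4 matrices introduced later in these slides; SceneNode and world_transforms are hypothetical names, not part of any particular engine:

import numpy as np

class SceneNode:
    # Hypothetical scene-graph node: a local 4x4 transform plus children.
    def __init__(self, local_transform):
        self.local = local_transform
        self.children = []

def world_transforms(node, parent_world=np.eye(4)):
    # The root's parent is world space (the identity transform).
    # A node's world transform is its parent's world transform composed
    # with its local transform, so a transform placed high in the tree
    # applies to everything below it.
    world = parent_world @ node.local
    yield node, world
    for child in node.children:
        yield from world_transforms(child, world)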


Camera

Engine and scene graph build up a scene description in world space, from models in model space

We the viewer are somewhere in this world:
at a coordinate (x,y,z)
facing along a particular direction vector (x',y',z')

What it looks like to us is view space

View space

In view space, we are:
at (0,0,0)
perpendicular to the (x,y) plane
facing along the z axis

Need to translate and rotate the world-space coordinates: the 3d version of rotating a map so up is where we're facing
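
A minimal look-at-style numpy sketch of this world-to-view step, assuming the camera should end up facing along +z with y up; the world-up vector and the name world_to_view are assumptions for illustration:

import numpy as np

def world_to_view(points, eye, forward, world_up=(0.0, 1.0, 0.0)):
    # points: (N, 3) world-space points; eye: camera position;
    # forward: direction the camera faces (assumed not parallel to world_up).
    f = np.asarray(forward, dtype=float)
    f = f / np.linalg.norm(f)
    r = np.cross(np.asarray(world_up, dtype=float), f)   # camera "right"
    r = r / np.linalg.norm(r)
    u = np.cross(f, r)                                    # camera "up"
    rotation = np.stack([r, u, f])   # rows become the view-space x, y, z axes
    # Translate so the camera sits at the origin, then rotate into view space.
    return (np.asarray(points, dtype=float) - np.asarray(eye, dtype=float)) @ rotation.T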

Projection

Project the (still 3d) view space onto our 2d screen

Orthographic projection: just ignore the z coordinate, (x,y,z) → (x,y) for all points

Perspective projection: further-away objects look smaller

Frustum

Perspective options

#1: First turn 3d view space into 3d perspective space
Make further-away stuff smaller
Then later do an orthographic projection

Or, #2: Project directly

Impacts how things like frustum culling work

Simple perspective projection

If viewable depths are from z=1 to z=infinity:

x' = x/z
y' = y/z

2d screen centered at (0,0)
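
A minimal numpy sketch of exactly this projection, assuming all view-space points have z >= 1 as above; the function name is just a placeholder:

import numpy as np

def project_perspective(points):
    # points: (N, 3) view-space points with z >= 1 (in front of the camera).
    # Divide x and y by depth: further-away points move toward the center.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([x / z, y / z], axis=1)   # (N, 2) screen coords, centered at (0,0)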

Wireframe projection

For each triangle:
project each vertex to 2d
draw lines connecting them in 2d
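
A minimal sketch of that loop, using matplotlib for the 2d line drawing (an assumption; any 2d drawing call would do) and the simple x/z, y/z projection from the previous slide:

import numpy as np
import matplotlib.pyplot as plt

def draw_wireframe(triangles):
    # triangles: (N, 3, 3) array; each triangle is three view-space (x, y, z) vertices.
    for tri in triangles:
        # Project each vertex to 2d.
        pts2d = tri[:, :2] / tri[:, 2:3]
        # Draw the three edges by closing the loop back to the first vertex.
        closed = np.vstack([pts2d, pts2d[:1]])
        plt.plot(closed[:, 0], closed[:, 1], color="black")
    plt.gca().set_aspect("equal")
    plt.show()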


Summary

Model space to world space
World space to view space
Projection

Missing: occlusion, lighting, shading

Transformation matrices

2d rotation by angle θ:

x' = x cos θ - y sin θ
y' = x sin θ + y cos θ

As a matrix:

| x' |   | cos θ  -sin θ | | x |
| y' | = | sin θ   cos θ | | y |
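
A quick numeric check of the matrix above; rotate_2d is a hypothetical helper, not from the slides:

import numpy as np

def rotate_2d(point, theta):
    # Apply the 2d rotation matrix to a single (x, y) point.
    c, s = np.cos(theta), np.sin(theta)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ np.asarray(point, dtype=float)

# Rotating (1, 0) by 90 degrees gives (0, 1), up to floating-point error.
print(rotate_2d([1.0, 0.0], np.pi / 2))   # approx [0. 1.]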

Transformation matrices

3d rotation is analogous; can also do scaling, shearing

However, translation can't be directly done as a matrix:
x' = x + x_offset
y' = y + y_offset

No matrix-multiply equivalent

Homogeneous coordinates

Extend 3d points and vectors to a 4d space: stand-in dimension w=1

Now a translation transform can be defined as well, so all basic transforms can be chained

Get back to 3d by dividing x/y/z by w

Translation in matrix form:

| 1 0 0 tx | | x |   | x + tx |
| 0 1 0 ty | | y | = | y + ty |
| 0 0 1 tz | | z |   | z + tz |
| 0 0 0 1  | | 1 |   |   1    |

Affine transformations

Can represent all the relevant transformations with homogeneous-coordinate 4x4 transform matrices: translation, rotation, scaling, perspective transform

Common way of representing any transformation in APIs

Advanced alternative (for rotations): quaternions
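
A minimal numpy sketch of chaining such 4x4 matrices and applying them to a homogeneous point; the helper names translation, scaling, and rotation_z are just placeholders:

import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# Chain transforms by matrix multiplication; the rightmost applies first.
model_to_world = translation(10, 0, 5) @ rotation_z(np.pi / 4) @ scaling(2, 2, 2)

# Apply to a point: extend to homogeneous (x, y, z, 1), multiply, then divide
# x/y/z by w to get back to 3d (w stays 1 for these affine transforms).
p = np.array([1.0, 0.0, 0.0, 1.0])
p_world = model_to_world @ p
print(p_world[:3] / p_world[3])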

Project 2: a DIY renderer

Wireframe renderer, due 22 October

Input: 3d coordinates, view position, view direction
Project to 2d coordinates, and draw (to screen or image)

Tuesday: more on perspective, and surfaces

Recommended