HA5 – COMPUTER ARTS BLOG ARTICLE – 3D: The Basics


Use of 3D

In general, there are fundamental differences between movie and game assets. A primary concern is polygon count and efficiency. Currently the only practical way to model for video games is with polygons, which can require a denser mesh to emulate smoother or more organic forms such as humans and animals. NURBS models can be created, but they need to be converted to polygons and optimized before they can be used in a game. For pre-rendered movies, any technique can be used to create the models.

Movie models can run to millions of polygons and may combine several different techniques at once: a model built from NURBS, polygons and subdivision surfaces is normal and completely acceptable.

Game models have to be more efficient in their use of modelled detail so that the data set stays manageable to render. An efficient, streamlined environment composed of lower-poly assets renders more smoothly and gives more consistent frame-to-frame performance during gameplay. A gaming system is, in essence, a renderer that constantly has the task of drawing each frame of gameplay at 30 frames per second; some games hit the magic number of 60 frames per second. If that rate drops during play, the result is a poor experience and hampered gameplay. This applies to PC games as well, although PCs typically have more processing power and can run higher-resolution models.

With constant innovation in next-gen consoles and technology, more advanced techniques and processes give us more detailed-looking models at a lower cost. One of these advances is normal mapping. A normal map acts like a bump map in that it adds surface detail without adding polygons, but it goes a step further: it replaces the surface normal with new multi-channel data representing an X, Y, Z coordinate system. This means we can create a high-resolution model of two or three million polygons and bake that detail down to a normal map which retains the surface-normal data of the high-resolution model. We then build a streamlined model that matches the general proportions of the high-density model, but at a far more efficient polygon count of, say, 2,500. Once the normal map is applied to this low-res rendition of our high-res monster, the model immediately looks far more complex geometrically, but at an affordable rendering cost.
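To make this concrete, here is a minimal sketch in Python (with made-up texel values) of the decode step a renderer performs on a normal map: each 8-bit RGB channel is remapped from the 0 to 255 range into the -1 to 1 range and treated as the X, Y, Z of the replacement surface normal.

```python
# Minimal sketch: decoding a normal-map texel back into a surface normal.
# Assumes an 8-bit-per-channel RGB texture; the sample values are made up.
import math

def decode_normal(r, g, b):
    """Remap 8-bit RGB channels (0-255) to a unit normal in [-1, 1] space."""
    x = (r / 255.0) * 2.0 - 1.0
    y = (g / 255.0) * 2.0 - 1.0
    z = (b / 255.0) * 2.0 - 1.0
    length = math.sqrt(x * x + y * y + z * z) or 1.0
    return (x / length, y / length, z / length)

# The typical pale-blue colour of a flat area in a normal map (128, 128, 255)
# decodes to a normal pointing almost straight out of the surface.
print(decode_normal(128, 128, 255))   # roughly (0.0, 0.0, 1.0)
```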

Movie productions also use normal mapping techniques, but the asset the normal map is applied to is typically a more detailed model than the one used in games.

Another difference between movie and game modelling is that not everything needs to be built for a movie or pre-rendered shot. It is common practice in film to build only the elements of a scene that will actually be seen on screen. In a game environment, most things must be viewable from 360 degrees. Can you imagine walking around your favorite game level and not seeing the back of the 3D car you just walked up to, or the back of the character you just spoke to? It wouldn't keep you immersed in the game very long. In a movie, if the camera never travels to the rear of a set or never moves around the corner, that geometry simply doesn't need to be built. The same shortcut does apply to parts of the game world, such as the far-off detail of the mountains or implied buildings the player can never actually reach.

A practice common to both disciplines is the creation of LOD, or Level Of Detail, models. In a game, when a character carrying a machine gun walks towards you from the far end of a long hallway, chances are the same model is not used for the character or the gun for the whole journey. While it is far away, a lower-resolution model with lower-resolution textures is used; the detail cannot be discerned at that distance, so there is no need to spend processing time rendering higher-resolution assets. As the character approaches, the model and textures may be swapped out two or three times for progressively higher-resolution assets, until it has walked right up to the camera. Done properly, these swap-outs go largely unnoticed.
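A hedged sketch of how such a swap might be driven is below: the distance thresholds, asset names and the pick_lod helper are all hypothetical, but choosing an asset by camera distance is the core of any LOD scheme.

```python
# Hypothetical LOD table: (maximum distance in metres, asset name).
# The distances and file names are illustrative, not from any real engine.
LOD_TABLE = [
    (10.0,  "soldier_lod0.mesh"),          # highest detail, close to the camera
    (40.0,  "soldier_lod1.mesh"),
    (120.0, "soldier_lod2.mesh"),
    (float("inf"), "soldier_lod3.mesh"),   # lowest detail for distant views
]

def pick_lod(distance_to_camera):
    """Return the first LOD whose distance threshold covers the character."""
    for max_distance, asset in LOD_TABLE:
        if distance_to_camera <= max_distance:
            return asset
    return LOD_TABLE[-1][1]

# As the character walks down the hallway, the asset is swapped out:
for d in (150.0, 80.0, 25.0, 5.0):
    print(d, "->", pick_lod(d))
```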

Movie modelling may use aspects of LODs too: there are close-up models and models built for distant shots. The main difference for film is that the various LODs rarely have to blend seamlessly. Much of that decision making lies in the story or action that needs to be conveyed in a given shot; the very next shot may require a completely different set of assets and details. Typically there are three levels of modelling for movie models: Block, Medium and Detailed, and each stage identifies and solves different problems for the production. At the block stage the overall proportions are established with a simple, low-detail model. This helps to define the silhouette of the model and gives the production a low-resolution asset useful for animatics or test renders.

Medium-level models take the next step, adding details onto the block model that help to define its finished look. Additions like antennae, guns, rear-view mirrors and other elements that do not define the general shape of the model qualify. This stage also helps to identify moving parts and areas that may need special attention from a technical artist. Finally there is the Detailed model, which contains all of the detailed parts and pieces on a higher-resolution chassis.

An example using these ideas is a spaceship model that flies past the camera as it speeds towards its destination. Because we only ever see one side of the ship, that is the only part that needs to be built, and this close fly-by model needs a high amount of detail and geometry to look convincing.

There are few concerns about efficiency in a movie asset: as long as the model can render, it is considered acceptable. For a pre-rendered sequence render times can be extensive, but there are usually large render farms to tackle the job. There is also a safety net in that any render anomaly can be fixed in post, whereas a game model must work all the time, in every frame it is rendered. Other stipulations sometimes burden the game model, such as the requirement that an asset be "water tight", meaning all of the vertices on the model are welded or merged (a rough sketch of this welding step follows at the end of this section). Real-time shadows and advanced lighting can misbehave, and take longer to compute, if a model is not sealed at the vertex level.

It is a common expression that there is a time and place for everything, and nothing could be truer of modelling for movies and games. There are certainly similarities between the two mediums and many different approaches to the task at hand. As game systems become more and more advanced, the two approaches may grow more alike; perhaps one day there will be no distinction in the modelling process at all.
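As promised above, here is a rough sketch of the vertex-welding step behind the "water tight" requirement: vertices that sit within a small tolerance of each other are collapsed into a single shared vertex. The data is hypothetical and the approach is brute force, purely illustrative.

```python
# Minimal sketch of vertex welding: vertices closer than `tolerance`
# are collapsed onto a single shared vertex. Brute force, illustrative only.
def weld_vertices(vertices, tolerance=1e-4):
    welded = []          # unique vertex positions kept so far
    remap = []           # old vertex index -> new (welded) index
    for vx, vy, vz in vertices:
        for i, (wx, wy, wz) in enumerate(welded):
            if abs(vx - wx) <= tolerance and abs(vy - wy) <= tolerance and abs(vz - wz) <= tolerance:
                remap.append(i)
                break
        else:
            remap.append(len(welded))
            welded.append((vx, vy, vz))
    return welded, remap

# Two faces that each carry their own, slightly offset copy of a shared edge:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1.00001, 0, 0), (1, 1, 0), (2, 0, 0)]
unique, remap = weld_vertices(verts)
print(len(verts), "vertices reduced to", len(unique))   # 6 reduced to 4
```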

The following section is based on the 3D Museum website and describes how they construct and present a 3D model.

Laser Scanning

The first step in building a three-dimensional (3D) model is to digitize the object. A high-speed, high-accuracy laser scanner (a Minolta Vivid 910) is used, which not only samples the model with high precision but also captures rich color information. Because it is lightweight, the 3D scanner can travel with the team to other collections.

Data Processing

The raw 3D scan data needs to be processed to produce a complete surface model of the fossil. The crucial step is accurately merging the individual scans into a single mesh. Most of the processing is done in Raindrop Geomagic Studio, though Rapidform has also been used.

Presentation

For research purposes the high-resolution 3D data is kept, but for data exchange via the web the file size is reduced, which guarantees fast and smooth loading of the 3D objects. Rapidform offers a 3D compression and publishing tool using ICF (INUS Compression Format). The two other file formats provided, Wirefusion (WF) and 3D Compression (3DC), are based on VRML (Virtual Reality Modeling Language). 3DC files do not preserve the vertex colors of VRML files, leaving the fossil images monotone.

Sources: http://www.siggraph.org/publications/newsletter/volume-41-number-2/modeling-techniques-movies-vs-games, http://en.wikipedia.org/wiki/Video_game, http://www.guardian.co.uk/life-in-3d/gaming-and-3d-technology, http://www.cyberjam.com/3d_interactive_media.html, http://3dmuseum.org/?page_id=241

3D Modelling Techniques

Drafting has come a long way from blueprints into the new world of 3D modelling, where files can be updated almost instantly and sent by email. CAD designers create computer files with CAD software that can be read by manufacturing machines to produce products, and the 3D CAD designer is the one who actually materializes the 3D model. CAD drafting services also offer a wide array of services to the public.

With recent advances in technology, almost every type of technical drawing is now done on a computer. Blueprints are still used in the field and for other purposes, but the drawings themselves are produced digitally. In the past, if an update needed to be made to a blueprint, the draftsman would have to either erase it or start all over. With CAD, the draftsman simply opens the file and makes the necessary changes. Another great feature is that the file can be saved to your computer, to some type of external hard drive, or online. Just make sure it's somewhere safe.

The person behind the scenes of 3D modelling is the CAD designer. They use special CAD software to create the 3D models. Within the software the developers have incorporated tools for creating lines, circles, arcs and other 2D objects, along with commands for sculpting, cutting, revolving, mirroring and other 3D operations. The software can also render images with color, texture, lighting and backgrounds. With all of this at the CAD designer's disposal, anything that can be imagined can be designed.

Drafting encompasses many different practices and principles: mechanical drafting, architectural drafting, civil drafting, electrical drafting, structural drafting, drafting for plumbing, 3D modelling, and drafting for just about anything you can imagine. CAD developers have designed programs for each of these fields and made special accommodations for each. For example, within architectural programs there are commands for creating walls, doors, roofs, slabs and other architectural features. This allows the CAD drafter to work much faster and draw more efficiently.

3D models have allowed the design process to be carried out more accurately and efficiently than in the past. Drafting has changed a great deal over the years, and updates to CAD software are made routinely. These new kinds of blueprints are much more flexible and allow changes to be made at a moment's notice. Once a design is complete it can go directly to the manufacturer to be developed. CAD is used for everything from architecture to inventions and is the main tool in any type of technical drawing. This technology allows engineers to examine their work before production and has made life safer for the general public.

Displaying and Constructing 3D Models

Modeling is the first part of the graphics pipeline. When we model in 3D we are working in Cartesian space, and we usually start from the most basic shapes, e.g. the cone, cylinder, sphere and box.

In 3D animation, a polygon is the same simple, flat shape you know from 2D geometry; the difference is that these polygons are connected together to build your 3D model. Individual polygons are stitched together along their edges or at their vertex points to create the full model. Think of it as putting together puzzle pieces to create a whole, except that rather than seeing a printed image on the pieces, you are forming another three-dimensional shape whose boundaries and volume are defined by smaller two-dimensional shapes. Polygons are the wrapper on the chocolate Easter bunny, the candy coating on your M&Ms.

More polygons in a model can mean more detail and smoother renders, but it can also mean longer render times and more problems caused by overlapping lines and vertices.
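To ground this, here is a small sketch (hypothetical, hard-coded data) of a cube described the way a modelling package stores it: a list of vertex positions in Cartesian space, and a list of polygons that stitch those shared vertices together.

```python
# A cube as raw model data: 8 vertices in Cartesian (x, y, z) space
# and 6 quad polygons, each listing the indices of the vertices it uses.
# Hard-coded, illustrative data only.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom face corners
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top face corners
]

polygons = [
    (0, 1, 2, 3),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (2, 3, 7, 6),  # back
    (1, 2, 6, 5),  # right
    (0, 3, 7, 4),  # left
]

# Each vertex is shared by three faces, so the mesh is "stitched" at the
# vertices rather than each polygon carrying its own copies.
print(len(vertices), "vertices,", len(polygons), "polygons")
```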

Application Programming Interface (API):

An Application Programming Interface (API) is a set of functions and rules that programs use to communicate with each other to get certain jobs done, much as a player communicates with a game by pressing a certain button to perform a certain action. In 3D graphics the best-known examples are Direct3D and OpenGL, which sit between an application and the graphics hardware. Related topics in this area include the graphics pipeline (modelling, lighting, viewing, projection, clipping, scan conversion, texturing and shading, display), rendering techniques such as radiosity and ray tracing, rendering engines, distributed rendering, lighting, textures, fogging, shadowing, vertex and pixel shaders, and level of detail.
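As a loose illustration of what "a set of functions an application calls" means, the sketch below defines a tiny invented rendering interface. The class and method names are hypothetical and are not the real Direct3D or OpenGL API, but real graphics APIs expose the same kind of contract: clear the frame, submit geometry, present the result.

```python
# A tiny, invented stand-in for a graphics API: the application only calls
# these methods and never touches the hardware directly. Direct3D and OpenGL
# play the same role, with far richer (and very different) function sets.
class HypotheticalGraphicsAPI:
    def clear(self, colour):
        print("clear screen to", colour)

    def draw_triangles(self, vertices):
        print("submit", len(vertices) // 3, "triangle(s) to the driver")

    def present(self):
        print("swap buffers / show the finished frame")

# The game loop talks to the API, not to the graphics card itself:
api = HypotheticalGraphicsAPI()
api.clear((0, 0, 0))
api.draw_triangles([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
api.present()
```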

Direct3D:

Direct3D is only available on Windows (Windows 95 and later) and renders 3D graphics, especially in games, using the graphics card. It all started in 1992, when Servan Keondjian founded a company called RenderMorphics and developed a 3D graphics Application Programming Interface (API for short) that was used in medical imaging and CAD (computer-aided design) software. Two versions of that API were released, and in February 1995 Microsoft bought RenderMorphics. Early versions of Direct3D rendered 3D geometry through execute buffers, a process that was awkward and involved complex stages that had to be handled manually; later versions added a simpler drawing interface, and many developers preferred OpenGL at the time because it was easier to work with.

Rendering:

Rendering is the way 3D objects, lighting and textures are displayed together to create an image or animation from the data produced by the 3D modelling program. There are four main types of rendering:

• Rasterization
• Raycasting
• Raytracing
• Radiosity

Rasterization:

Rasterization is mainly used in real-time applications such as games, and it is what most digital graphics technology uses to display a render. Instead of computing the whole scene pixel by pixel from light paths, it draws the geometry you can see on screen and updates it as the view changes. A good example of rasterization at work is Oblivion as you travel across the land of Tamriel.
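Below is a minimal sketch of the core idea, assuming a triangle already projected to 2D screen coordinates: walk the pixels in the triangle's bounding box and use edge functions to decide which ones the triangle covers. Real rasterizers add depth buffering, clipping and perspective correction, but this is the heart of it (the function names and values are my own).

```python
# Minimal triangle rasterization sketch: find the pixels covered by one
# 2D-projected triangle using edge functions. Illustrative only.
def edge(ax, ay, bx, by, px, py):
    # Signed-area test: which side of edge A->B the point P lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    covered = []
    min_x = max(0, int(min(v0[0], v1[0], v2[0])))
    max_x = min(width - 1, int(max(v0[0], v1[0], v2[0])))
    min_y = max(0, int(min(v0[1], v1[1], v2[1])))
    max_y = min(height - 1, int(max(v0[1], v1[1], v2[1])))
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
            # Inside if all edge tests agree in sign (either winding order).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((x, y))
    return covered

pixels = rasterize_triangle((2, 1), (12, 3), (5, 9), width=16, height=12)
print(len(pixels), "pixels covered")
```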

Raycasting:

Raycasting is similar to raytracing, since the two share similar algorithms. What distinguishes them is that raycasting is a faster, simplified version: it only fires primary rays from the camera and cannot follow secondary rays, whereas raytracing can.

Raytracing:

Ray tracing is a technique that renders an image by casting rays into the scene; where a ray hits the geometry, the colour value of that pixel is calculated. It can produce a high degree of visual realism, but at the cost of long render times. It is capable of simulating a wide variety of visual effects, such as reflection (in glass or a mirror, for example), scattering (where light hits the geometry and bounces back in many directions) and refraction (the bending of light as it passes between materials such as air and water).

Example of using raytracing:

Ray tracing is best suited to still images, special effects and TV work; sadly, it is too slow to be practical for real-time games.
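Before moving on, here is a heavily simplified sketch of the idea: for each pixel, fire a ray from the camera, intersect it with a single hard-coded sphere, and shade the hit point with simple diffuse lighting. Restricting it to these primary rays is essentially the raycasting described above; a full raytracer would go on to spawn secondary rays for reflection, refraction and shadows. All scene values here are made up.

```python
# Minimal ray tracing sketch: one sphere, one light, primary rays only.
import math

WIDTH, HEIGHT = 20, 10
SPHERE_CENTRE, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = (-0.577, 0.577, -0.577)   # direction towards the light, made up

def hit_sphere(origin, direction):
    # Solve |origin + t*direction - centre|^2 = radius^2 for the nearest t.
    ox, oy, oz = (origin[i] - SPHERE_CENTRE[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        # Map the pixel to a ray direction through a simple image plane.
        x = (px / WIDTH) * 2 - 1
        y = 1 - (py / HEIGHT) * 2
        d = (x, y, 1.0)
        t = hit_sphere((0.0, 0.0, 0.0), d)
        if t is None:
            row += "."                     # ray missed: background
        else:
            # Diffuse shading: brightness from surface normal vs light direction.
            hit = tuple(d[i] * t for i in range(3))
            n = tuple((hit[i] - SPHERE_CENTRE[i]) / SPHERE_RADIUS for i in range(3))
            brightness = max(0.0, sum(n[i] * LIGHT_DIR[i] for i in range(3)))
            row += "#" if brightness > 0.5 else "+"
    print(row)
```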

Radiosity:

Radiosity is a rendering technique that accounts for two kinds of light: incident light, which arrives at the subject directly from the light source, and reflected light, which bounces off other surfaces onto the subject. Because it captures this bounced, indirect lighting it is used especially for interior design renderings.

Example of using Radiosity:

There is also a video example: http://www.youtube.com/watch?v=NO3uvnbwCKM
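In its classical form, radiosity repeatedly solves a simple balance for every surface patch: a patch's radiosity equals its own emission plus its reflectance times the light it gathers from every other patch, weighted by form factors. The sketch below iterates that balance for three made-up patches with an invented form-factor matrix, purely to show the bounced-light behaviour.

```python
# Radiosity sketch: iterate B_i = E_i + reflectance_i * sum_j(F_ij * B_j)
# for a toy 3-patch scene. Emission, reflectance and form factors are made up.
emission    = [1.0, 0.0, 0.0]      # patch 0 is the light source
reflectance = [0.0, 0.7, 0.5]
form_factor = [                    # F[i][j]: how much of patch j patch i "sees"
    [0.0, 0.3, 0.3],
    [0.3, 0.0, 0.4],
    [0.3, 0.4, 0.0],
]

radiosity = emission[:]            # initial guess: direct emission only
for _ in range(50):                # repeated bounces converge quickly here
    radiosity = [
        emission[i] + reflectance[i] * sum(form_factor[i][j] * radiosity[j]
                                           for j in range(3))
        for i in range(3)
    ]

# Patches 1 and 2 end up lit purely by light bounced from patch 0.
print([round(b, 3) for b in radiosity])
```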

How to apply a sample fog in 3ds Max:

Go to Rendering > Environment (hotkey 8)

You can change the fog's density and its near and far range to control how close or far away the fog appears when you render the scene.
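For context, the usual way this kind of distance fog works under the hood is a simple blend: a fog factor is computed from how far the point lies between a near and a far distance, and the surface colour is mixed towards the fog colour by that amount. The sketch below shows that standard linear-fog calculation with made-up values; it is not 3ds Max's internal code.

```python
# Standard linear distance-fog blend, with illustrative values.
def linear_fog(surface_colour, fog_colour, distance, near, far):
    # 0.0 at/inside `near` (no fog), 1.0 at/beyond `far` (fully fogged).
    amount = min(max((distance - near) / (far - near), 0.0), 1.0)
    return tuple(s + (f - s) * amount for s, f in zip(surface_colour, fog_colour))

grey_fog = (0.8, 0.8, 0.8)
red_wall = (0.9, 0.1, 0.1)
for d in (5.0, 50.0, 150.0):
    print(d, "->", linear_fog(red_wall, grey_fog, d, near=10.0, far=100.0))
```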

I think this is not the best way of producing high-quality fog; in my opinion it is better done in Adobe After Effects as a post effect.

How to stop textures looking blurry in the viewport:

First, apply the texture in the Material Editor, either by dragging and dropping the texture onto the shader, or by clicking the Maps section, then the Diffuse slot, and selecting the file.

Then tick “Match Bitmap Size as Closely as Possible” in the Background Texture Size section, and tick the same option again in the Download Texture Size section.

Finally, click on the material in the Material Editor again and click the texture you want to see more clearly, to refresh it in the viewport.

Progressive and Interlace scanning:

So what are progressive and interlaced scanning?

Interlaced and progressive scanning describe how images are displayed on our TV screens: the image is displayed rapidly, with the screen updating all the time. The same applies to computer monitors as well.

Progressive scan:
• The image is displayed rapidly and drawn in sequence
• Requires a higher refresh rate
• Associated with computer monitors
• The latest HD TVs can display progressive scan
• Can display fast-moving images
• Requires a high bandwidth (more data per image)

Frame Buffer:

• This is the area of video memory where an image is stored, ready to be transmitted to the monitor; moving images are displayed by showing one stored frame after another (like a flipbook)
• Higher resolutions and greater bit depth require more video memory to store the images
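The second point is easy to make concrete: the memory one frame needs is just width times height times bytes per pixel. The sketch below works this out for a couple of common resolutions (figures are for a single uncompressed frame only).

```python
# Frame buffer size: width x height x bytes per pixel, for one frame.
def frame_buffer_bytes(width, height, bits_per_pixel):
    return width * height * (bits_per_pixel // 8)

for name, w, h in (("SD 720x576", 720, 576), ("HD 1920x1080", 1920, 1080)):
    size = frame_buffer_bytes(w, h, bits_per_pixel=32)   # 8 bits per RGBA channel
    print(name, "->", round(size / (1024 * 1024), 2), "MB per frame")
```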

Interlace scanning:

• Unlike progressive scanning, interlaced scanning takes half the bandwidth of non-interlaced (progressive) scanning
• Interlacing is used by all the analogue TV broadcast systems
• Interlaced scanning works by drawing the even-numbered rows first, then the odd-numbered rows (or vice versa; it makes no difference)
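A quick sketch of that last point: splitting a frame into its two fields is just a matter of taking the even-numbered rows and the odd-numbered rows separately (a toy six-row "frame" is used below).

```python
# Split a frame into its two interlaced fields: even rows, then odd rows.
frame = ["row0", "row1", "row2", "row3", "row4", "row5"]   # toy frame

even_field = frame[0::2]   # rows 0, 2, 4 - drawn in one pass
odd_field  = frame[1::2]   # rows 1, 3, 5 - drawn in the next pass

print("field 1:", even_field)
print("field 2:", odd_field)
```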

Vertex Lighting:

Vertex lighting (also known as Gouraud shading) is a method used to display and simulate the varying effect of light across the surface of a 3D object. Lighting is calculated at each vertex of the model, based on where the light source is relative to it, and the results are interpolated across each face. The more vertices a model has, the better highlights such as specular lighting look; with fewer vertices, the lighting quality falls well short of what a high-poly model would achieve.
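The sketch below shows the two halves of that idea with made-up numbers: compute a diffuse intensity at each vertex from its normal and the light direction, then interpolate those per-vertex values across the face instead of lighting every pixel individually.

```python
# Gouraud-style vertex lighting sketch: light at the vertices, then
# interpolate across the face. Normals and light direction are made up.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

light_dir = (0.0, 0.0, 1.0)                  # pointing towards the viewer

# One triangle: a normal per vertex (already normalised, illustrative values).
vertex_normals = [(0.0, 0.0, 1.0), (0.8, 0.0, 0.6), (0.0, 0.8, 0.6)]

# Step 1: diffuse intensity at each vertex (clamped N . L).
vertex_intensity = [max(0.0, dot(n, light_dir)) for n in vertex_normals]

# Step 2: shade any point inside the triangle by blending the vertex values
# with barycentric weights (w0 + w1 + w2 = 1) - no per-pixel lighting needed.
def shade(w0, w1, w2):
    return w0 * vertex_intensity[0] + w1 * vertex_intensity[1] + w2 * vertex_intensity[2]

print("vertex intensities:", [round(i, 2) for i in vertex_intensity])
print("face centre:", round(shade(1 / 3, 1 / 3, 1 / 3), 2))
```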

Distributed rendering

Distributed rendering (also known as DR) is a technique in which many computers render the same scene together, reducing the time the render would otherwise take.

V-Ray for 3ds Max is capable of this process. It works over TCP/IP, and when you are using V-Ray there are two roles you need to know about: render clients and render servers.

Render Clients

The render client is the machine that the render servers get their information from: it divides the frame into buckets, distributes them to the render servers for processing, and collects the results.

Render Servers

A render server is a computer that receives the data the render client has sent, processes its share of the work and sends the result back.
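The sketch below mimics that client/server split in miniature: a "client" chops a frame into buckets and hands them out to a pool of "servers" (worker processes), then stitches the results back together. The bucket size and worker count are arbitrary, and this is not V-Ray's actual protocol, just the general shape of distributed rendering.

```python
# Toy distributed rendering: the "client" splits the frame into buckets,
# a pool of "servers" (worker processes) renders them, the client reassembles.
from multiprocessing import Pool

WIDTH, HEIGHT, BUCKET = 64, 48, 16           # arbitrary illustrative sizes

def render_bucket(bucket):
    # Stand-in for real rendering: return a flat colour per bucket.
    x0, y0 = bucket
    return (x0, y0, [[(x0 % 255, y0 % 255, 128)] * BUCKET for _ in range(BUCKET)])

if __name__ == "__main__":
    buckets = [(x, y) for y in range(0, HEIGHT, BUCKET)
                      for x in range(0, WIDTH, BUCKET)]
    with Pool(processes=4) as servers:       # the "render servers"
        results = servers.map(render_bucket, buckets)
    # The "render client" collects the finished buckets into one frame.
    frame = [[None] * WIDTH for _ in range(HEIGHT)]
    for x0, y0, pixels in results:
        for dy, row in enumerate(pixels):
            frame[y0 + dy][x0:x0 + BUCKET] = row
    print("assembled", len(results), "buckets into a", WIDTH, "x", HEIGHT, "frame")
```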


Clipping 3D:

Clipping determines which parts of the geometry are drawn and which are discarded, for example anything outside the camera's view. A related idea is backface culling, which hides the faces pointing away from the viewer so the inside of the geometry becomes invisible. In 3ds Max you can toggle this per object: right-click > Object Properties > tick Backface Cull.
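The test behind backface culling is simple, and the sketch below shows it with made-up face normals: a face whose normal points away from the camera is skipped, which is why the inside of an object disappears when culling is on.

```python
# Backface culling sketch: discard faces whose normal points away from the
# camera. The view direction and face normals are illustrative values.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

view_dir = (0.0, 0.0, 1.0)     # camera looking down the +Z axis

faces = {
    "front of the box": (0.0, 0.0, -1.0),   # normal towards the camera
    "back of the box":  (0.0, 0.0, 1.0),    # normal away from the camera
    "side of the box":  (1.0, 0.0, 0.0),    # edge-on to the camera
}

for name, normal in faces.items():
    # A face is culled (skipped) when its normal points away from the viewer.
    culled = dot(normal, view_dir) > 0
    print(name, "->", "culled" if culled else "drawn")
```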

http://animation.about.com/od/glossaryofterms/g/What-Is-A-3d-Polygon.htm

http://www.fastgraph.com/help/3D_clipping.html

http://en.wikipedia.org/wiki/Projective_geometry

http://www.google.co.uk/search?hl=en&q=what+is+clipping+3d%3F&meta

http://www.spot3d.com/vray/help/150SP1/distributed_rendering.htm

http://en.wikipedia.org/wiki/3D_computer_graphics

http://en.wikipedia.org/wiki/3D_model

http://www.best3dsolution.com/services/3d-rendering/

http://www.blender.org/

http://ezinearticles.com/?3D-Modeling-Technology&id=6102955
