VFX – An Overview

Transcript

Visual effects

Visual effects (commonly shortened to visual FX or VFX) are the various processes by which imagery is created and/or manipulated outside the context of a live-action shot. Visual effects involve the integration of live-action footage and computer-generated imagery to create environments which look realistic but would be dangerous, expensive, impractical, or simply impossible to capture on film. Visual effects using computer-generated imagery have recently become accessible to the independent filmmaker with the introduction of affordable and user-friendly animation and compositing software. Visual effects are often integral to a movie's story and appeal. Although most visual effects work is completed during post-production, it usually must be carefully planned and choreographed in pre-production and production. Visual effects are primarily executed in post-production with the use of multiple tools and technologies such as graphic design, modeling, and animation software, while special effects are made on set: explosions, car chases, and so on. A visual effects supervisor is usually involved with the production from an early stage, working closely with production and the film's director to design, guide, and lead the teams required to achieve the desired effects. Visual effects may be divided into at least four categories:

• Models: miniature sets and models, animatronics, stop motion animation.
• Matte paintings and stills: digital or traditional paintings or photographs which serve as background plates for keyed or rotoscoped elements.
• Live-action effects: keying actors or models through blue screening and green screening.
• Digital animation: modeling, computer graphics lighting, texturing, rigging, animating, and rendering computer-generated 3D characters, particle effects, digital sets, and backgrounds.

Digital effects (commonly shortened to digital FX or FX) are the various processes by which imagery is created and/or manipulated with or from photographic assets. Digital effects often involve the integration of still photography and computer-generated imagery (CGI) in order to create environments which look realistic but would be dangerous, costly, or simply impossible to capture in camera. FX is usually associated with the still photography world, in contrast to visual effects, which are associated with motion-film production.

A miniature effect is a special effect created for motion pictures and television programs using scale models. A scale model is a physical representation or copy of an object that is larger or smaller than the actual object, and which seeks to maintain the relative proportions (the scale factor) of the original. Very often a scale model is used as a guide to making the object in full size, and scale models are built or collected for many reasons. Scale models are often combined with high-speed photography or matte shots to make gravitational and other effects appear convincing to the viewer. The use of miniatures has largely been superseded by computer-generated imagery in contemporary cinema. Where a miniature appears in the foreground of a shot, it is often very close to the camera lens — for example when matte-painted backgrounds are used. Since the exposure is set so that the actors appear well lit, the miniature must be over-lit in order to balance the exposure and eliminate any depth-of-field differences that would otherwise be visible. This foreground-miniature usage is referred to as forced perspective. Another form of miniature effect uses stop motion animation. The use of scale models in the creation of visual effects by the entertainment industry dates back to the earliest days of cinema. Models and miniatures are copies of people, animals, buildings, settings, and objects, used to represent things that do not really exist or that are too expensive or difficult to film in reality, such as explosions, floods, or fires.

Special effects (often abbreviated as SFX, SPFX, or simply FX) are the illusions or tricks of the eye used in the film, television, theatre, video game, and simulator industries to simulate the imagined events in a story or virtual world. Special effects are traditionally divided into the categories of optical effects and mechanical effects. With the emergence of digital filmmaking tools, a greater distinction between special effects and visual effects has been recognized, with "visual effects" referring to digital post-production and "special effects" referring to on-set mechanical effects and in-camera optical effects.

Optical effects (also called photographic effects) are techniques in which images or film frames are created photographically, either "in-camera" using multiple exposure, mattes, or the Schüfftan process, or in post-production using an optical printer. An optical effect might be used to place actors or sets against a different background. Mechanical effects (also called practical or physical effects) are usually accomplished during the live-action shooting. This includes the use of mechanized props, scenery, scale models, animatronics, pyrotechnics, and atmospheric effects: creating physical wind, rain, fog, snow, clouds, and so on. Making a car appear to drive by itself and blowing up a building are examples of mechanical effects. Mechanical effects are often incorporated into set design and makeup. For example, a set may be built with break-away doors or walls to enhance a fight scene, or prosthetic makeup can be used to make an actor look like a non-human creature.

Early development

In 1856, Oscar Rejlander created the world's first "Trick Photograph" by combining different sections of 30 negatives into a single image.

In 1895, Alfred Clark created what is commonly accepted as the first-ever motion picture special effect.

While filming a reenactment of the beheading of Mary, Queen of Scots, Clark instructed an actor to step up to the block in Mary's costume. As the executioner brought the axe above his head, Clark stopped the camera, had all of the actors freeze, and had the person playing Mary step off the set. He placed a Mary dummy in the actor's place, restarted filming, and allowed the executioner to bring the axe down, severing the dummy's head. "Such… techniques would remain at the heart of special effects production for the next century."

This was not only the first use of trickery in the cinema; it was the first type of photographic trickery only possible in a motion picture, i.e. the "Stop Trick".

In 1896, French magician Georges Méliès accidentally discovered the same "Stop Trick".

According to Méliès, his camera jammed while filming a street scene in Paris. When he screened the film, he found that the "Stop Trick" had caused a truck to turn into a hearse, pedestrians to change direction, and men to turn into women.

Méliès, the stage manager at the Théâtre Robert-Houdin, was inspired to develop a series of more than 500 short films between 1896 and 1913, in the process developing or inventing such techniques as multiple exposure, time-lapse photography, dissolves, and hand-painted colour.

In analogue photography and cinematography, multiple exposure is a technique in which the camera shutter is opened more than once to expose the film multiple times, usually to different images. The resulting image contains the subsequent image(s) superimposed over the original. The technique is sometimes used as an artistic visual effect and can be used to create ghostly images or to add people and objects to a scene that were not originally there.
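In the digital domain, the additive behaviour of film under repeated exposure can be approximated very simply. The following is a minimal sketch, assuming two same-sized 8-bit RGB frames already loaded as NumPy arrays; real film response is non-linear, so summing and clipping is only a rough analogue:

```python
import numpy as np

def double_expose(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Approximate a double exposure by summing light from two frames.

    Film accumulates light across exposures, so adding the two images
    and clipping to the valid range gives a rough digital equivalent.
    Both inputs are assumed to be uint8 RGB arrays of the same shape.
    """
    total = frame_a.astype(np.uint16) + frame_b.astype(np.uint16)
    return np.clip(total, 0, 255).astype(np.uint8)
```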

Time-lapse photography is a technique whereby the frequency at which film frames are captured is much lower than that used to view the sequence. When played at normal speed, time appears to be moving faster and thus lapsing.

For example, an image of a scene may be captured once every second, then played back at 30 frames per second. The result is an apparent 30-times speed increase. Time-lapse photography can be considered the opposite of High Speed Photography or Slow Motion.
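The arithmetic behind that example is simple enough to express as a small helper; a minimal sketch, with the capture interval and playback rate as illustrative parameters:

```python
def timelapse_speedup(capture_interval_s: float, playback_fps: float) -> float:
    """Apparent speed-up factor of a time-lapse sequence.

    One frame is captured every `capture_interval_s` seconds and the
    sequence is played back at `playback_fps` frames per second.
    """
    return capture_interval_s * playback_fps

# One frame per second played back at 30 fps appears 30x faster.
assert timelapse_speedup(1.0, 30.0) == 30.0
```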

Some classic subjects of Time-Lapse Photography include:

• cloudscapes and celestial motion
• plants growing and flowers opening
• fruit rotting
• evolution of a construction project
• people in the city

Mattes are used in photography and special effects filmmaking to combine two or more image elements into a single, final image. Usually, mattes are used to combine a foreground image (such as actors on a set, or a spaceship) with a background image (a scenic vista, a field of stars and planets). In this case, the matte is the background painting. In film and stage, mattes can be physically huge sections of painted canvas, portraying large scenic expanses of landscapes.

In film, the principle of a matte requires masking certain areas of the film emulsion to selectively control which areas are exposed. However, many complex special-effects scenes have included dozens of discrete image elements, requiring very complex use of mattes, and layering mattes on top of one another.

For an example of a simple Matte, we may wish to depict a group of actors in front of a store, with a massive city and sky visible above the store's roof.

We would have two images—the actors on the set, and the image of the city—to combine onto a third.

This would require two masks/mattes. One would mask everything above the store's roof, and the other would mask everything below it.

By using these masks/mattes when copying these images onto the third, we can combine the images without creating ghostly double-exposures. In film, this is an example of a Static Matte, where the shape of the mask does not change from frame to frame. Other shots may require mattes that change, to mask the shapes of moving objects, such as human beings or spaceships. These are known as Traveling Mattes.
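In digital terms, the store-front example above reduces to masking and adding images. A minimal sketch in Python with NumPy, assuming the two plates and the binary matte are same-sized arrays (the matte holds 1.0 over the store and actors, 0.0 above the roof line):

```python
import numpy as np

def static_matte_composite(foreground: np.ndarray,
                           background: np.ndarray,
                           matte: np.ndarray) -> np.ndarray:
    """Combine two plates with a static (per-shot, not per-frame) matte.

    `matte` holds 1.0 where the foreground plate (the actors and store)
    should show and 0.0 where the background plate (the city and sky)
    shows. The second mask is simply the complement of the first, which
    is why the two film mattes fit together without double exposure.
    """
    m = matte[..., np.newaxis]  # broadcast the mask over the RGB channels
    return (foreground * m + background * (1.0 - m)).astype(foreground.dtype)
```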

Traveling mattes enable greater freedom of composition and movement, but they are also more difficult to accomplish. Chroma key techniques that remove all areas of a certain color from a recording – colloquially known as "blue screen" or "green screen" after the most popular colors used – are probably the best-known and most widely used modern techniques for creating traveling mattes, although rotoscoping and multiple motion control passes have also been used in the past. Computer-generated imagery, either static or animated, is also often rendered with a transparent background and digitally overlaid on top of modern film recordings using the same principle as a matte – a digital image mask.

Mattes are a very old technique, going back to the Lumière brothers. Originally, the matte shot was created by filmmakers obscuring their backgrounds with cut-out cards. When the live-action portion of a scene was filmed, the background portion of the film wasn't exposed. Once the live action was filmed, a different cut-out would be placed over the live action. The film would be rewound, and the filmmakers would film their new background.

This technique was known as the in-camera matte and was considered more a novelty than a serious special effect during the late 1890s.

A good early American example is seen in The Great Train Robbery (1903) where it is used to place a train outside a window in a ticket office, and later a moving background outside a baggage car on a train 'set'.

Around this time, another technique known as the Glass Shot was also being used. The glass shot was made by painting details on a piece of glass which was then combined with live action footage to create the appearance of elaborate sets. The first glass shots are credited to Edgar Rogers.

The first major development of the matte shot came in the early 1900s from Norman Dawn, ASC. Dawn had seamlessly woven glass shots into many of his films, such as the crumbling California missions in the movie Missions of California, and he used the glass shot to revolutionize the in-camera matte.

Now, instead of taking their live action footage to a real location, filmmakers would shoot the live action as before with the cut-out cards in place, then rewind the film and transfer it to a camera designed to minimize vibrations. Then the filmmakers would shoot a glass shot instead of a live action background. The resulting composite was of fairly high quality, since the matte line – the place of transition from the live action to the painted background – was much less jumpy.

In addition, the new in-camera matte was much more cost effective, as the glass didn’t have to be ready the day the live action was shot. One downside to this method was that since the film was exposed twice, there was always the risk of accidentally overexposing the film and ruining the footage filmed earlier.

The in-camera matte shot remained in use until film stock began to improve in quality in the 1920s. During this time a new technique known as the bi-pack camera method was developed.

In cinematography, bi-packing, or a bi-pack, is the process of loading two reels of film into a camera so that they both pass through the camera gate together. It was used both for in-camera effects (effects that are nowadays mainly achieved via optical printing) and as an early subtractive colour process.

To achieve the In-Camera Effect, a reel would be made up of pre-exposed and developed film, and unexposed raw film, which would then be loaded into the camera. The exposed film would sit in front of the unexposed film, with the emulsion of both films touching each other, causing the images on the exposed film to be contact-printed onto the unexposed stock, along with the image from the camera lens.

This method, in conjunction with a static matte placed in front of the camera, could be used to print angry storm clouds into a background on a studio set.

The process differs from Optical Printing in that no optical elements (lenses, field lenses, etc.) separate the two films. Both films are sandwiched together in the same camera and make use of a phenomenon known as Contact Printing.

This was similar to the in-camera matte shot, but relied on one master positive as a backup. This way if anything was lost, the master would still be intact. Around 1925 another method of making a matte was developed. One of the drawbacks of the old mattes was that the matte line was stationary. There could be no direct contact between the live action and the matte background. The traveling matte changed that. The traveling matte was like an in-camera or bi-pack matte, except that the matte line changed every frame. Filmmakers could use a technique similar to the bi-pack method to make the live action portion a matte itself, allowing them to move the actors around the background and scene – integrating them completely.

The technique, if used with a camera not specially designed for contact printing, runs the risk of jamming the camera, due to the double thickness of film in the gate, and damaging both the exposed and unexposed stock.

On the other hand, because both strips of film are in contact and are handled by the same film transport mechanism at the same time, registration is kept very precise. Special cameras designed for the process were manufactured by Acme and Oxberry.

These process cameras are usually recognizable by their special film magazines, which look like two standard film magazines on top of each other. The magazines allow the separate loading of exposed and unexposed stock, as opposed to winding the two films onto the same reel.

The Thief of Bagdad (1940) represented a major leap forward for the traveling matte and marked the first major introduction of the blue screen technique, invented by Larry Butler; the film won the Academy Award for Best Visual Effects that year, though the process was still very time-intensive and each frame had to be hand-processed.

Computers began to aid the process late in the 20th century. In the 1960s, Petro Vlahos refined the use of motion control cameras in blue screen and received an Academy Award for the process.

The 1980s saw the invention of the first digital mattes and blue screening processes, as well as the invention of the first computerized non-linear editing systems for video.

Alpha compositing, in which digital images can be made partially transparent in the same way that an animation cel is naturally transparent, had been invented in the late 1970s and was integrated with the blue screen process in the 1980s.
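The arithmetic underlying alpha compositing is usually expressed as the Porter–Duff "over" operator. A minimal NumPy sketch with straight (non-premultiplied) alpha, where all values are assumed to be floats in [0, 1]:

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb, bg_alpha):
    """Porter-Duff 'over' operator with straight (non-premultiplied) alpha.

    Composites a foreground layer over a background layer; returns the
    resulting colour and alpha. RGB arrays have shape (H, W, 3), alpha
    arrays have shape (H, W), all floats in [0, 1].
    """
    out_a = fg_alpha + bg_alpha * (1.0 - fg_alpha)
    fa = fg_alpha[..., None]
    ba = bg_alpha[..., None]
    # Weighted colour sum, un-premultiplied by the output alpha.
    out_rgb = np.where(
        out_a[..., None] > 0,
        (fg_rgb * fa + bg_rgb * ba * (1.0 - fa))
        / np.maximum(out_a[..., None], 1e-8),
        0.0,
    )
    return out_rgb, out_a
```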

Digital blue screening began with The Empire Strikes Back in 1980, for which Richard Edlund received an Academy Award for creating a special kind of optical printer for combining mattes, though this process was still partially analog.

The first fully Digital Matte shot was created by painter Chris Evans in 1985 for Young Sherlock Holmes for a scene featuring a computer-graphics (CG) animation of a knight leaping from a stained-glass window.

Evans first painted the window in acrylics, and then scanned the painting into Lucasfilm's Pixar system for further digital manipulation. The computer animation blended perfectly with the digital matte, something a traditional matte painting could not have accomplished.

Nearly all mattes are now created via digital video editing, and the chroma key technique – a digital generalization of the blue screen – is now possible even on home computers.

A non-linear editing system (NLE) is a video (NLVE) or audio (NLAE) editing system, such as a digital audio workstation (DAW), that performs non-destructive editing on source material. The name is in contrast to 20th-century methods of linear video editing and film editing.

Non-linear editing is the most natural approach when all assets are available as files on video servers or hard disks, rather than recordings on reels or tapes—while linear editing is tied to the need to sequentially view film or hear tape.

Non-linear editing enables direct access to any video frame in a digital video clip, without needing to play or scrub/shuttle through adjacent footage to reach it, as was necessary with historical videotape linear editing systems. It is now possible to access any frame by entering its timecode or descriptive metadata directly.
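Direct access by timecode is simple frame arithmetic. A minimal sketch, assuming a non-drop-frame HH:MM:SS:FF timecode and a constant frame rate:

```python
def timecode_to_frame(timecode: str, fps: int = 24) -> int:
    """Convert a non-drop-frame HH:MM:SS:FF timecode to the absolute
    frame index an NLE can seek to directly."""
    hh, mm, ss, ff = (int(part) for part in timecode.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# Frame index of one hour, one second and two frames at 24 fps:
print(timecode_to_frame("01:00:01:02"))  # 86426
```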

Chroma key compositing, or chroma keying, is a special effects / post-production technique for compositing (layering) two images or video streams together based on color hues (chroma range).

The technique has been used heavily in many fields to remove a background from the subject of a photo or video – particularly in the newscasting, motion picture, and video game industries. A color range in the top layer is made transparent, revealing another image behind. The chroma keying technique is commonly used in video production and post-production.

This technique is also referred to as color keying, colour-separation overlay or by various terms for specific color-related variants such as green screen, and blue screen – chroma keying can be done with backgrounds of any color that are uniform and distinct, but green and blue backgrounds are more commonly used because they differ most distinctly in hue from most human skin colors. No part of the subject being filmed or photographed may duplicate a color used in the background.

It is commonly used for weather forecast broadcasts, wherein a news presenter is usually seen standing in front of a large CGI map during live television newscasts, though in actuality it is a large blue or green background. When using a blue screen, different weather maps are added on the parts of the image where the color is blue. If the news presenter wears blue clothes, his or her clothes will also be replaced with the background video.
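At its core, chroma keying is a per-pixel decision based on colour. The following is a deliberately crude sketch in Python with NumPy: it builds a hard binary matte wherever green clearly dominates the other channels, whereas production keyers pull a soft matte from colour differences and handle spill:

```python
import numpy as np

def green_screen_key(frame: np.ndarray, background: np.ndarray,
                     dominance: float = 1.2) -> np.ndarray:
    """Very simple green-screen key: a pixel counts as 'background'
    when its green channel clearly dominates red and blue.

    `frame` and `background` are float RGB arrays in [0, 1]; the
    `dominance` threshold controls how strongly green must exceed the
    other channels before the pixel is replaced.
    """
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    is_green = (g > dominance * r) & (g > dominance * b)
    matte = (~is_green).astype(frame.dtype)[..., None]  # 1 = keep subject
    return frame * matte + background * (1.0 - matte)
```

This also shows why a presenter in green clothing vanishes: those pixels satisfy the same colour test as the screen behind them.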

An Optical Printer is a device consisting of one or more film projectors mechanically linked to a movie camera. It allows filmmakers to re-photograph one or more strips of film. The optical printer is used for making special effects for motion pictures, or for copying and restoring old film material.

Rear projection (also known as process photography) is one of many in-camera cinematic techniques used in film production for combining foreground performances with pre-filmed backgrounds. It was widely used for many years in driving scenes, or to show other forms of "distant" background motion. The presence of a movie screen between the background image and foreground objects leads to a distinctive washed-out look that makes these "process shots" recognizable. The actors stand in front of a screen while a projector positioned behind the screen casts a reversed image of the background.

This required a large space to film, as the projector had to be placed some distance from the back of the screen. Frequently the background image would appear faint and washed out compared to the foreground. The film that is projected can be still or moving, but is always called the plate. One might hear the command "Roll plate," to instruct stage crew to begin projecting.

These so-called process shots were widely used to film actors as if they were inside a moving vehicle, when they were in reality in a vehicle mock-up on a soundstage. In these cases the motion of the backdrop film and that of the foreground actors and props often did not match, because the moving vehicle used to produce the plate lacked steadicam-like stabilization. This was most noticeable as bumps and jarring motions of the background image that were not duplicated by the actors.

Forced Perspective is a technique that employs optical illusion to make an object appear farther away, closer, larger or smaller than it actually is. It is used primarily in photography, filmmaking and architecture. It manipulates human visual perception through the use of scaled objects and the correlation between them and the vantage point of the spectator or camera.

Examples of forced perspective:

• A scene in an action/adventure movie in which dinosaurs are threatening the heroes. By placing a miniature model of a dinosaur close to the camera, the dinosaur may look monstrously tall to the viewer, even though it is just closer to the camera.

Forced perspective can be made more believable when environmental conditions obscure the difference in perspective. For example, the final scene of the famous movie Casablanca takes place at an airport in the middle of a storm, although the entire scene was shot in a studio. This was accomplished by using a painted backdrop of an aircraft, which was "serviced" by dwarfs standing next to the backdrop. A downpour (created in-studio) draws much of the viewer's attention away from the backdrop and extras, making the simulated perspective less noticeable.

Role of light

Early instances of forced perspective used in low-budget motion pictures showed objects that were clearly different from their surroundings: often blurred or at a different light level. The principal cause of this was geometric. Light from a point source travels in a spherical wave, decreasing in intensity (or illuminance) as the inverse square of the distance travelled. This means that a light source must be four times as bright to produce the same illuminance at an object twice as far away. Thus, to create the illusion of a distant object being at the same distance as a near object and scaled accordingly, much more light is required. When shooting with forced perspective, it is important to have the aperture stopped down sufficiently to achieve proper depth of field, so that the foreground object and background are both sharp.
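The inverse-square relationship above is easy to verify numerically; a small sketch:

```python
def required_intensity_ratio(near_distance_m: float,
                             far_distance_m: float) -> float:
    """How much brighter a light must be to give a far object the same
    illuminance as a near one (inverse-square law)."""
    return (far_distance_m / near_distance_m) ** 2

# An object twice as far away needs a light four times as bright.
assert required_intensity_ratio(1.0, 2.0) == 4.0
```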

Since miniature models would need to be subjected to far greater lighting than the main focus of the camera, the area of action, it is important to ensure that these can withstand the significant amount of heat generated by the incandescent light sources typically used in film and TV production.

Nodal Point: Forced Perspective in Motion

Peter Jackson's film adaptations of The Lord of the Rings make extended use of forced perspective. Characters apparently standing next to each other would be displaced by several feet in depth from the camera. In a still shot, this makes some characters (the dwarves and hobbits) appear much smaller in relation to others.

A new technique developed for “The Lord of the Rings: The Fellowship of the Ring” was an enhancement of this principle which could be used in moving shots. Portions of sets were mounted on movable platforms which would move precisely according to the movement of the camera, so that the optical illusion would be preserved at all times for the duration of the shot. The same techniques were used in the Harry Potter movies to make the character Hagrid look like a giant. Props around Harry and his friends are of normal size, while seemingly identical props placed around Hagrid are in fact smaller.

Morphing is a special effect in motion pictures and animations that changes (or morphs) one image into another through a seamless transition. Most often it is used to depict one person turning into another through technological means or as part of a fantasy or surreal sequence. Traditionally such a depiction would be achieved through cross-fading techniques on film. Since the early 1990s, this has been replaced by computer software to create more realistic transitions.

Early examples of morphing

Though the 1986 movie The Golden Child implemented very crude morphing effects from animal to human and back, the first movie to employ detailed morphing was Willow, in 1988.

A similar process was used a year later in Indiana Jones and the Last Crusade to create Walter Donovan's gruesome demise. Both effects were created by Industrial Light & Magic using grid warping techniques developed by Tom Brigham and Doug Smythe (AMPAS).

In 1985, Godley & Creme created a primitive "morph" effect using analogue cross-fades in the video for "Cry". The cover for Queen's 1989 album The Miracle featured the technique to morph the four band members' faces into one gestalt image. In 1991, morphing appeared notably in the Michael Jackson music video Black or White and in the movies Terminator 2: Judgment Day and Star Trek VI: The Undiscovered Country. The first application for personal computers to offer morphing was Gryphon Software Morph on the Macintosh. Other early morphing systems included ImageMaster, MorphPlus and CineMorph, all of which premiered for the Commodore Amiga in 1992. Other programs became widely available within a year, and for a time the effect became common to the point of cliché.

For high-end use, Elastic Reality (based on MorphPlus) saw its first feature film use in In the Line of Fire (1993) and was used in Quantum Leap (work performed by the Post Group). At VisionArt, Ted Fay used Elastic Reality to morph Odo for Star Trek: Deep Space Nine. Elastic Reality was later purchased by Avid, having already become the de facto system of choice, used in many hundreds of films. The technology behind Elastic Reality earned two Academy Awards in 1996 for Scientific and Technical Achievement, going to Garth Dickie and Perry Kivolowitz. The effect is technically called a "spatially warped cross-dissolve". The first social network designed for user-generated morph examples to be posted online was Galleries by Morpheus (morphing software).

In Taiwan, Aderans, a hair loss solutions provider, did a TV commercial featuring a morphing sequence in which people with lush, thick hair morph into one another, reminiscent of the end sequence of the Black or White video.

Modern Morphing Techniques

Computer-animated morphing was used in the 1974 Canadian animation Hunger. In the early 1990s, computer techniques that often produced more convincing results began to be widely used. These involved distorting one image at the same time that it faded into another, through marking corresponding points and vectors on the "before" and "after" images used in the morph.

For example, one would morph one face into another by marking key points on the first face, such as the contour of the nose or location of an eye, and marking where these same points existed on the second face. The computer would then distort the first face to take on the shape of the second face at the same time that it faded between the two faces. To compute the transformation of image coordinates required for the distortion, the algorithm of Beier and Neely, for example, can be used.
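A full feature-line implementation of Beier and Neely's warp is fairly involved, but the overall structure of a morph (warp both images toward interpolated correspondence positions, then cross-dissolve) can be sketched compactly. The following uses a crude inverse-distance-weighted point warp as a stand-in for the feature-line warp, and assumes SciPy is available for the resampling step:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to(image, src_pts, dst_pts):
    """Backward-warp `image` so the points `src_pts` move to `dst_pts`,
    using inverse-distance-weighted displacements (a crude stand-in for
    the feature-line warp of Beier and Neely)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    num_x = np.zeros((h, w))
    num_y = np.zeros((h, w))
    den = np.zeros((h, w))
    for (sx, sy), (tx, ty) in zip(src_pts, dst_pts):
        # Weight each control point's displacement by proximity to it.
        wgt = 1.0 / ((xs - tx) ** 2 + (ys - ty) ** 2 + 1.0)
        num_x += wgt * (sx - tx)
        num_y += wgt * (sy - ty)
        den += wgt
    sample_x = xs + num_x / den  # where each output pixel samples from
    sample_y = ys + num_y / den
    channels = [map_coordinates(image[..., c], [sample_y, sample_x], order=1)
                for c in range(image.shape[2])]
    return np.stack(channels, axis=-1)

def morph_frame(img_a, img_b, pts_a, pts_b, t):
    """One morph frame at blend factor t in [0, 1]: warp both images
    toward linearly interpolated point positions, then cross-dissolve."""
    pts_t = [(ax + t * (bx - ax), ay + t * (by - ay))
             for (ax, ay), (bx, by) in zip(pts_a, pts_b)]
    warped_a = warp_to(img_a, pts_a, pts_t)
    warped_b = warp_to(img_b, pts_b, pts_t)
    return (1.0 - t) * warped_a + t * warped_b
```

Rendering `morph_frame` for t stepping from 0 to 1 produces the full morph sequence; with t fixed at 0 or 1 it degenerates to the unwarped source images.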

Later, more sophisticated cross-fading techniques were employed that vignetted different parts of one image to the other gradually, instead of transitioning the entire image at once. This style of morphing was perhaps most famously employed in the video that former 10cc members Kevin Godley and Lol Creme (performing as Godley & Creme) produced in 1985 for their song Cry. It comprised a series of black-and-white close-up shots of faces of many different people that gradually faded from one to the next. In a strict sense, this had little to do with modern computer-generated morphing effects, since it was merely a dissolve using fully analog equipment.

Present Use of Morphing

Morphing algorithms continue to advance and programs can automatically morph images that correspond closely enough with relatively little instruction from the user. This has led to the use of morphing techniques to create convincing slow-motion effects where none existed in the original film or video footage by morphing between each individual frame using optical flow technology. Morphing has also appeared as a transition technique between one scene and another in television shows, even if the contents of the two images are entirely unrelated. The algorithm in this case attempts to find corresponding points between the images and distort one into the other as they cross fade.

While perhaps less obvious than in the past, morphing is used heavily today. Whereas the effect was initially a novelty, today, morphing effects are most often designed to be seamless and invisible to the eye.

Prosthetic Makeup (also called FX prosthesis) is the process of using prosthetic sculpting, molding and casting techniques to create advanced cosmetic effects. Prosthetic makeup was revolutionized by Dick Smith in such films as Little Big Man.

Technique

The process of creating a prosthetic appliance begins with life casting, the process of taking a mold of a body part (often the face) to use as a base for sculpting the prosthetic. Life cast molds are made from prosthetic alginate or, more recently, from skin-safe silicone rubber. This initial mold is relatively weak and flexible. A hard mother mold, typically made of plaster or fiberglass bandages, is created over the initial mold to provide support.

Once a negative mold has been created, it is promptly filled with gypsum cement, most commonly a brand called "Ultracal-30", to make a "positive" mold. The form of the prosthetic is sculpted in clay on top of the positive. The edges of the clay should be made as thin as possible, for the clay is a stand-in for what will eventually be the prosthetic piece. Along the edges of the mold, "keys" or mold points are sculpted or carved into the life cast, to make sure that the two pieces of the mold will fit together correctly. Once sculpting is completed, a second mold is made. This gives two or more pieces of a mold - a positive of the face, and one or more negative mold pieces of the face with prosthetic sculpted in. All clay is carefully removed and the prosthetic material is cast into the mold cavity. The prosthetic material can be foam latex, gelatin, silicone or other similar materials. The prosthetic is cured within the two part mold - thus creating the beginning of a makeup effect.

One of the hardest parts of prosthetic make-up is keeping the edges as thin as possible. They should be tissue thin so they are easy to blend and cover giving a flawless look.

Rotoscoping is an animation technique in which animators trace over footage, frame by frame, for use in live-action and animated films. Originally, recorded live-action film images were projected onto a frosted glass panel and re-drawn by an animator. This projection equipment is called a rotoscope, although the device was eventually replaced by computers.

In the visual effects industry, the term Rotoscoping refers to the technique of manually creating a Matte for an element on a live-action plate so it may be composited over another background.
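Digitally, a rotoscoped matte for one frame is just a hand-traced outline rasterized into a mask. A minimal sketch using Pillow to fill the artist's polygon; in practice the outline is re-drawn or keyframed on every frame as the subject moves:

```python
import numpy as np
from PIL import Image, ImageDraw

def roto_matte(width, height, outline):
    """Rasterize one frame's hand-traced outline into a matte.

    `outline` is the list of (x, y) points an artist traced around the
    element on this frame. Returns a float mask in [0, 1] where 1 means
    the element is kept for compositing.
    """
    mask = Image.new("L", (width, height), 0)        # black = discard
    ImageDraw.Draw(mask).polygon(outline, fill=255)  # white = keep
    return np.asarray(mask, dtype=np.float32) / 255.0
```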

The technique was invented by Max Fleischer, who used it in his series Out of the Inkwell starting around 1915, with his brother Dave Fleischer dressed in a clown outfit as the live-film reference for the character Koko the Clown. Max patented the method in 1917.

Fleischer used rotoscoping in a number of his later cartoons, most notably the Cab Calloway dance routines in three Betty Boop cartoons from the early 1930s, and the animation of Gulliver in Gulliver's Travels (1939). The Fleischer studio's most effective use of rotoscoping was in its series of action-oriented Superman cartoons, in which Superman and the other animated figures displayed very realistic movement.

Rotoscope output can have slight deviations from the true line that differ from frame to frame, which when animated cause the animated line to shake unnaturally, or "boil". Avoiding boiling requires considerable skill in the person performing the tracing, though causing the "boil" intentionally is a stylistic technique sometimes used to emphasize the surreal quality of rotoscoping, as in the music video "Take on Me" and the animated TV series Delta State.

Rotoscoping (often abbreviated as "roto") has often been used as a tool for visual effects in live-action movies. By tracing an object, a silhouette (called a matte) is created that can be used to extract that object from a scene for use on a different background. While blue and green screen techniques have made the process of layering subjects in scenes easier, Rotoscoping still plays a large role in the production of visual effects imagery. Rotoscoping in the digital domain is often aided by motion tracking and onion-skinning software. Rotoscoping is often used in the preparation of garbage mattes for other matte-pulling processes.

A "garbage matte" is often hand-drawn, sometimes quickly made, used to exclude parts of an image that another process, such as bluescreen, would not remove. The name stems from the fact that the matte removes "garbage" from the procedurally produced image. "Garbage" might include a rig holding a model, or the lighting grid above the top edge of the bluescreen.

Mattes can also force inclusion of parts of the image that might otherwise have been removed by the keyer, such as too much blue reflecting on a shiny model ("blue spill"), but in this case, technically, they should not be called "garbage" mattes (though they can be created using the same tool).
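In matte arithmetic, both cases are simple per-pixel operations on the procedurally pulled key. A minimal sketch, assuming all mattes are floats in [0, 1] with 1 meaning "subject":

```python
import numpy as np

def apply_garbage_matte(key_matte: np.ndarray,
                        garbage: np.ndarray,
                        holdout: np.ndarray) -> np.ndarray:
    """Combine a procedurally pulled key matte with hand-drawn mattes.

    The garbage matte (1 over rigs, the lighting grid, etc.) forces
    areas OUT of the key; the hold-out matte forces areas IN, such as
    blue spill on a shiny model that the keyer wrongly removed.
    """
    cleaned = key_matte * (1.0 - garbage)  # exclude garbage regions
    return np.maximum(cleaned, holdout)    # force-include hold-out regions
```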

Rotoscoping has also been used to allow a special visual effect (such as a glow, for example) to be guided by the matte or rotoscoped line. One classic use of traditional Rotoscoping was in the original three Star Wars films, where it was used to create the glowing light saber effect, by creating a matte based on sticks held by the actors. To achieve this, editors traced a line over each frame with the prop, then enlarged each line and added the glow.

Motion capture is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robotics. In filmmaking and video game development, it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking more usually refers to Match Moving.

In cinematography, match moving is a technique that allows the insertion of computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry.

In motion capture sessions, the movements of one or more actors are sampled many times per second. Early techniques used images from multiple cameras to calculate 3D positions; motion capture often records only the movements of the actor, not his or her visual appearance. This animation data is often mapped to a 3D model so that the model performs the same actions as the actor.
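The multi-camera 3D reconstruction mentioned above is classically done by linear triangulation. A minimal sketch, assuming each camera's calibrated 3x4 projection matrix is known and the marker has been located in two views:

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray,
                uv1: tuple, uv2: tuple) -> np.ndarray:
    """Recover a marker's 3D position from two camera views by linear
    triangulation (the direct linear transform).

    P1 and P2 are the cameras' 3x4 projection matrices; uv1 and uv2 are
    the marker's pixel coordinates in each view.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                # null vector of A, homogeneous 3D point
    return X[:3] / X[3]       # de-homogenize to (x, y, z)
```

Repeating this per marker and per sample gives the stream of 3D joint positions that is then retargeted onto the digital character.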

Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt, or dolly around the stage driven by a camera operator while the actor is performing, and the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.

Advantages

Motion capture offers several advantages over traditional computer animation of a 3D model:

• More rapid, even real time results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation. The Hand Over technique is an example of this.

• The amount of work does not vary with the complexity or length of the performance to the same degree as when using traditional techniques. This allows many tests to be done with different styles or deliveries, giving a different personality only limited by the talent of the actor.

• Complex movement and realistic physical interactions such as secondary motions, weight and exchange of forces can be easily recreated in a physically accurate manner.

• The amount of animation data that can be produced within a given time is extremely large when compared to traditional animation techniques. This contributes to both cost effectiveness and meeting production deadlines.

• Potential for free software and third party solutions reducing its costs.

Disadvantages

• Specific hardware and special software programs are required to obtain and process the data.

• The cost of the software, equipment and personnel required can be prohibitive for small productions.

• The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion.

• When problems occur, it is easier to reshoot the scene rather than trying to manipulate the data. Only a few systems allow real time viewing of the data to decide if the take needs to be redone.

• The initial results are limited to what can be performed within the capture volume without extra editing of the data.

• Movement that does not follow the laws of physics cannot be captured.
• Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with squash and stretch animation techniques, must be added later.

• If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, over-sized hands, these may intersect the character's body if the human performer is not careful with their physical motion.

Go motion is a variation of stop motion animation which incorporates motion blur into each frame. It was co-developed by Industrial Light & Magic and Phil Tippett.

Go motion was originally planned to be used extensively for the dinosaurs in Jurassic Park, until Steven Spielberg decided to try out the swiftly developing techniques of computer-generated imagery instead.

Today, the mechanical method of achieving motion blur using go motion is rarely used, as it is more complicated, slow, and labor intensive than computer generated effects. However, the motion blurring technique still has potential in real stop motion movies where the puppet's motions are supposed to be somewhat realistic. Motion blurring can now be digitally done as a post production process using special effects software such as After Effects, Boris FX, Combustion, and other similar software.

Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. This includes a wide variety of subjects, such as photographing real objects in order to recreate them as three-dimensional objects, and algorithms for the automated creation of camera viewpoints.

Virtual cinematography came into prominence following the release of The Matrix films. The directors, Andy and Larry Wachowski, tasked visual effects supervisor John Gaeta with developing techniques to allow for the virtual "filming" of realistic computer-generated imagery. Gaeta, along with Kim Libreri and his crew at ESC Entertainment, succeeded where many others had failed in creating photo-realistic CGI versions of performers, sets, and action.

Wire removal is a visual effects technique used to remove wires from films, where the wires are typically rigged as a safety precaution or to simulate flight for actors or miniatures. Wire removal can be partly automated through various forms of keying, or each frame can be edited manually. First, the live-action plates of actors or models suspended on wires are filmed in front of a green screen. Editors can then erase the wires frame by frame, without worrying about erasing the backdrop, which will be added later; this can be accomplished automatically with a computer. If the sequence is not filmed in front of a green screen, a digital editor must hand-paint the wires out, which can be an arduous task.

Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called "chroma key", "blue screen", "green screen" and other names. Today, most, though not all, compositing is achieved through digital image manipulation. Pre-digital compositing techniques, however, go back as far as the trick films of Georges Méliès in the late 19th century; and some are still in use.

All compositing involves the replacement of selected parts of an image with other material, usually, but not always, from another image. In the digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced. Then every pixel within the designated color range is replaced by the software with a pixel from another image, aligned to appear as part of the original. For example, a TV weather person is recorded in front of a plain blue or green screen, while compositing software replaces only the designated blue or green color with weather maps.

In television studios, blue or green screens may back news-readers to allow the compositing of stories behind them, before being switched to full-screen display. In other cases, presenters may be completely within compositing backgrounds that are replaced with entire “virtual sets” executed in computer graphics programs. In sophisticated installations, subjects, cameras, or both can move about freely while the computer-generated imagery (CGI) environment changes in real time to maintain correct relationships between the camera angles, subjects, and virtual “backgrounds.”

Virtual sets are also used in motion pictures filmmaking, some of which are photographed entirely in blue or green screen environments; as for example in Sky Captain and the World of Tomorrow. More commonly, composited backgrounds are combined with sets – both full-size and models – and vehicles, furniture, and other physical objects that enhance the “reality” of the composited visuals. “Sets” of almost unlimited size can be created digitally because compositing software can take the blue or green color at the edges of a backing screen and extend it to fill the rest of the frame outside it. That way, subjects recorded in modest areas can be placed in large virtual vistas. Most common of all, perhaps, are set extensions: digital additions to actual performing environments. In the film, Gladiator, for example, the arena and first tier seats of the Roman Colosseum were actually built, while the upper galleries (complete with moving spectators) were computer graphics, composited onto the image above the physical set. For motion pictures originally recorded on film, high-quality video conversions called “digital intermediates” are created to enable compositing and the other operations of computerized post production. Digital compositing is a form of matting, one of four basic compositing methods. The others are physical compositing, multiple exposure, and background projection.

In Physical Compositing the separate parts of the image are placed together in the photographic frame and recorded in a single exposure. The components are aligned so that they give the appearance of a single image. The most common physical compositing elements are partial models and glass paintings.

Partial models are typically used as set extensions such as ceilings or the upper stories of buildings. The model, built to match the actual set but on a much smaller scale, is hung in front of the camera, aligned so that it appears to be part of the set. Models are often quite large because they must be placed far enough from the camera so that both they and the set far beyond them are in sharp focus.

Glass shots are made by positioning a large pane of glass so that it fills the camera frame, and is far enough away to be held in focus along with the background visible through it. The entire scene is painted on the glass, except for the area revealing the background where action is to take place. This area is left clear. Photographed through the glass, the live action is composited with the painted area. A classic example of a glass shot is the approach to Ashley Wilkes’ plantation in Gone with the Wind. The plantation and fields are all painted, while the road and the moving figures on it are photographed through the glass area left clear.

A variant uses the opposite technique: most of the area is clear, except for individual elements (photo cutouts or paintings) affixed to the glass. For example, a ranch house could be added to an empty valley by placing an appropriately scaled and positioned picture of it between the valley and the camera.

Animatronics is the use of mechatronics to create machines which seem animate rather than robotic. Animatronic creations include animals (including dinosaurs), plants and even mythical creatures. A robot designed to be a convincing imitation of a human is specifically known as an android.

Animatronics is mainly used in movie making, but also in theme parks and other forms of entertainment. Its main advantage over CGI and stop motion is that the simulated creature has a physical presence moving in front of the camera in real time. The technology behind animatronics has become more advanced and sophisticated over the years, making the puppets even more realistic and lifelike.

Animatronics is used in situations where a creature does not exist, the action is too risky or costly to use real actors or animals, or the action could never be obtained with a living person or animal. Animatronic figures are most often powered by pneumatics (compressed air), and, in special instances, hydraulics (pressurized oil) or electrical means. The figures are precisely customized with the exact dimensions and proportions of living creatures. Motion actuators are often used to imitate "muscle" movements, such as those of limbs, to create realistic motion. The figure is covered with body shells and flexible skins made of hard and soft plastic materials, and finished with details like colors, hair, feathers, and other components to make it more realistic.

Pyrotechnics is the science of using materials capable of undergoing self-contained and self-sustained exothermic chemical reactions for the production of heat, light, gas, smoke and/or sound. Pyrotechnics include not only the manufacture of fireworks but items such as safety matches, oxygen candles, explosive bolts and fasteners, components of the automotive airbag and gas pressure blasting in mining, quarrying and demolition. Individuals responsible for the safe storage, handling, and functioning of pyrotechnic devices are referred to as Pyrotechnicians.

Explosions, flashes, smoke, flames, fireworks or other pyrotechnic driven effects used in the entertainment industry are referred to as theatrical special effects, special effects, or proximate pyrotechnics. Proximate refers to the pyrotechnic device's location relative to an audience. In the majority of jurisdictions, special training and licensing must be obtained from local authorities to legally prepare and use proximate pyrotechnics.

Although most special effects work is completed during post-production, it must be carefully planned and choreographed in pre-production and production. A visual effects supervisor is usually involved with the production from an early stage, working closely with the director and all related personnel to achieve the desired effects.

Bullet time (also known as frozen time, the big freeze, dead time, flow motion, or time slice) is a special and visual effect that refers to a digitally enhanced simulation of variable-speed photography (such as slow motion or time-lapse) used in films, broadcast advertisements, and video games. It is characterized both by its extreme transformation of time (slow enough to show normally imperceptible and unfilmable events, such as flying bullets) and of space (by way of the ability of the camera angle—the audience's point of view—to move around the scene at a normal speed while events are slowed). This is almost impossible with conventional slow motion, as the physical camera would have to move impossibly fast; the concept implies that only a "virtual camera", often illustrated within the confines of a computer-generated environment such as a virtual world or virtual reality, would be capable of "filming" bullet-time moments. Technical and historical variations of this effect have been referred to as time slicing, view morphing, slow-mo, temps mort, and virtual cinematography.

The bullet time effect was originally achieved photographically by a set of still cameras surrounding the subject. The cameras are fired sequentially, or all at the same time, depending on the desired effect. Single frames from each camera are then arranged and displayed consecutively to produce an orbiting viewpoint of an action frozen in time or as hyper-slow-motion. This technique suggests the limitless perspectives and variable frame rates possible with a virtual camera. However, if the still array process is done with real cameras, it is often limited to assigned paths.
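The relationship between the number of cameras, the real duration of the action, and the on-screen slowdown is straightforward arithmetic. A small sketch with illustrative numbers (the camera count and playback rate below are assumptions, not figures from any particular production):

```python
def camera_trigger_times(num_cameras: int, real_duration_s: float) -> list:
    """Trigger times for a still-camera array shooting a bullet-time arc.

    The cameras are fired sequentially at even intervals across the
    real duration of the action; each camera contributes one frame.
    """
    interval = real_duration_s / max(num_cameras - 1, 1)
    return [i * interval for i in range(num_cameras)]

times = camera_trigger_times(120, 0.5)      # 120 cameras over half a second
playback_s = 120 / 24.0                     # 5 seconds on screen at 24 fps
print(f"slowdown: {playback_s / 0.5:.0f}x") # 10x slower than life
```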

In The Matrix, the camera path was pre-designed using computer-generated visualizations as a guide. Cameras were arranged, behind a green or blue screen, on a track and aligned through a laser targeting system, forming a complex curve through space. The cameras were then triggered at extremely close intervals, so the action continued to unfold, in extreme slow-motion, while the viewpoint moved. Additionally, the individual frames were scanned for computer processing. Using sophisticated interpolation software, extra frames could be inserted to slow down the action further and improve the fluidity of the movement (especially the frame rate of the images); frames could also be dropped to speed up the action. This approach provides greater flexibility than a purely photographic one. The same effect can also be produced using pure CGI, motion capture and universal capture.

The technique of using a group of still cameras to freeze motion occurred before the invention of cinema itself. It dates back to 19th-century experiments by Eadweard Muybridge, who analyzed the motion of a galloping horse by using a line of cameras to photograph the animal as it ran past. Muybridge placed still cameras along a race track, each actuated by a taut string stretched across the track; as the horse galloped past, the camera shutters snapped, taking one frame at a time.

Muybridge also took photos of actions from many angles at the same instant in time, to study how the human body went up stairs, for example. In effect, however, Muybridge had achieved the aesthetic opposite to modern bullet-time sequences, since his studies lacked the dimensionality of the later developments. A debt may also be owed to MIT professor Doc Edgerton, who, in the 1940s, captured now-iconic photos of bullets using xenon strobe lights to "freeze" motion.

Computer-Generated Imagery is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, commercials, and simulators. The visual scenes may be dynamic or static, and may be two-dimensional (2D), though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television.

The term computer animation refers to dynamic CGI rendered as a movie. The term virtual world refers to agent-based, interactive environments.

Computer graphics software is used to make computer-generated imagery for films and other media. The availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers.

The Schüfftan Process is a movie special effect named after its inventor, Eugen Schüfftan (1893–1977). It was widely used in the first half of the 20th century before being almost completely replaced by the travelling matte and bluescreen effects.

The process was refined and popularized by the German cinematographer Eugen Schüfftan while he was working on the movie Metropolis (1927), although there is evidence that other film-makers were using similar techniques earlier than this. The movie's director, Fritz Lang, wanted to insert the actors into shots of miniatures of skyscrapers and other buildings, so Schüfftan used a specially made mirror to create the illusion of actors interacting with huge, realistic-looking sets.

Schüfftan placed a plate of glass at a 45-degree angle between the camera and the miniature buildings. He used the camera's viewfinder to trace an outline of the area into which the actors would later be inserted onto the glass. This outline was transferred onto a mirror and the entire reflective surface that fell outside the outline was removed, leaving transparent glass. When the mirror was placed in the same position as the original plate of glass, the reflective part blocked a portion of the miniature building behind it and also reflected the stage behind the camera. The actors were placed several meters away from the mirror so that when they were reflected in the mirror, they would appear at the right size.
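The "right size" condition follows from the fact that apparent size falls off linearly with optical distance. A small sketch with illustrative numbers; the scale and distances are assumptions for the example, not measurements from Metropolis:

```python
def actor_distance_for_scale(miniature_distance_m: float,
                             miniature_scale: float) -> float:
    """Optical distance (camera -> mirror -> actor) at which a reflected
    actor appears correctly scaled against a miniature set.

    Apparent size falls off linearly with distance, so a 1:s miniature
    placed d metres away matches a full-size actor at d / s metres of
    optical path.
    """
    return miniature_distance_m / miniature_scale

# A 1/16-scale miniature 2 m from the lens needs the actor's reflected
# path to be about 32 m for the proportions to line up.
print(actor_distance_for_scale(2.0, 1 / 16))  # 32.0
```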

In the same movie, Schüfftan used a variation of this process so that the miniature set (or a drawing) was shown on the reflective part of the mirror while the actors were filmed through the transparent part.

Over the following years, the Schüfftan process was used by many other film-makers, including Alfred Hitchcock, in his films Blackmail (1929) and The 39 Steps (1935), and as recently as The Lord of the Rings: The Return of the King (2003), directed by Peter Jackson. The Schüfftan process has largely been replaced with matte shots, which allow the two portions of the image to be filmed at different times and give opportunities for more changes in post production.

The Schüfftan process's use of mirrors is very similar to the 19th century stage technique known as Pepper's ghost.

