Making Headlines
Weta evolves its VFX techniques for the all-CG Tintin
$6.00 USA / $8.25 Canada
www.cgw.com • December 2011/January 2012



©2011 NewTek, Inc. All rights reserved. www.newtek.com

Image courtesy of Chris O’Riley

Take Your Art to the Brink of Reality
The latest version of NewTek LightWave™ 3D animation software takes your art to the edge. A complete palette of tools, LightWave 11 is professional, faster, and way more powerful. TV. Film. Architectural visualization. Print. And game development. Get incredible detail. Instancing. Flocking motion. Fracture. Bullet Dynamics. Virtual Studio Tools. HyperVoxels™ Blending. GoZ™ technology. Freedom to stretch your imagination. For real.

“LASSETER HAS NOT ONLY INVIGORATED HIS ORIGINAL CONCEPT BUT ENABLED THE VIEWER TO APPRECIATE AND SHARE HIS PASSION.” -JUSTIN CHANG | VARIETY

www.WaltDisneyStudiosAwards.com • © 2011 DISNEY/PIXAR

FOR YOUR CONSIDERATION
BEST ANIMATED FEATURE • BEST FILM EDITING • BEST SOUND MIXING
BEST SOUND EDITING • BEST VISUAL EFFECTS


December 2011/January 2012

ON THE COVER

SEE IT IN

Weta Digital created Tintin’s CG characters, and then performed them using data captured from actors wearing head rigs that are part of a facial-capture system developed at the studio. See pg. 10.

December’s Post Magazine takes a look at the Strengths, Weaknesses, Opportunities, and Threats relating to Audio, the Business of Post, New Media, Directors & Filmmaking, Stereo 3D, and Training.

Features

Animation Evolution

10 The comic-book series “Tintin” may be an old classic, but recently it has accomplished a number of firsts: the first animated feature directed by Steven Spielberg, the first animated feature produced by Peter Jackson, and the first animated feature created by Weta Digital. By Barbara Robertson

Magic Man

20 Martin Scorsese’s Hugo features a wide range of visual effects, but the biggest accomplishment is the way stereoscopic 3D is used to tell a moving period story. By Barbara Robertson

Dancing the Ice Away

26 Animators from down under get their groove on, providing fancy footwork for the computer-generated penguins appearing in Happy Feet Two.

By Barbara Robertson

Road to Oscar

32 The year is nearly over, but the box office is just beginning to heat up with holiday tent-pole films. Find out what our industry has to say about this year’s visual effects and animated films. By Karen Moltenbrey

Mean Streets

42 Batman: Arkham City, one of the year’s top games, incorporates German Expressionistic cinematography within an interactive, expansive environment where CG villains run amok and CG heroes try to restore law and order. By Martin McEachern

COVER STORY

December 2011/January 2012 • Vol. 34 • Number 8 • Innovations in visual computing for DCC professionals

Departments

Editor’s Note: Stereo 3D: A Visible Difference
2 Stereo 3D films are growing in popularity, and even though theaters have spent a great deal of money preparing for this new medium, at times there just weren’t enough 3D screens to accommodate the newest releases—a testament to the quality of the work being generated.

Spotlight

4 Products: The Foundry’s Katana 1.0. Dell’s mobile workstations. Dassault Systèmes’ cloud services. Boxx’s 3DBoxx 3970 Xtreme. Panasonic’s TH-65VX300U plasma display. AMD’s FirePro V4900. Imagineer/Boris FX’s Motion Tracking for Editors. The Foundry’s Ocula 3.0.
News: Third-quarter graphics shipments are up. Embedded graphics processors killing off IGPs.

Back Products

48 Recent hardware and software releases.


Look for CGW’s VES Awards Supplement in January.

Also, visit CGW.com for Web-exclusive features.


The Magazine for Digital Content Professionals

EDITORIAL
KAREN MOLTENBREY
Chief Editor
[email protected] • (603) 432-7568

CONTRIBUTING EDITORS
Courtney Howard, Jenny Donelan, Kathleen Maher,
George Maestri, Martin McEachern, Barbara Robertson

WILLIAM R. RITTWAGE
Publisher, President and CEO, COP Communications

NATASHA SWORDS
Vice President of Marketing
[email protected] • (818) 291-1112

ADVERTISING SALES
MARI KOHN
Director of Sales
[email protected]
(818) 291-1153 • cell: (818) 472-1491

JEFF VICTOR
Director of Sales—West Coast
[email protected] • (847) 367-4073

GARY RHODES
Sales Manager—East Coast & International
[email protected] • (631) 274-9530

KELLY RYAN
Marketing
[email protected] • (818) 291-1155

Editorial Office / LA Sales Office:
620 West Elk Avenue, Glendale, CA 91204
(800) 280-6446

CREATIVE SERVICES AND PRODUCTION
MICHAEL VIGGIANO
Art Director
[email protected]

CUSTOMER SERVICE
[email protected]
1-800-280-6446, Opt 3

ONLINE AND NEW MEDIA
Stan Belchev
[email protected]

Computer Graphics World Magazine is published by Computer Graphics World,

a COP Communications company.

Computer Graphics World does not verify any claims or other information appearing in any of the advertisements contained in the publication, and cannot take any responsibility for any losses or other damages incurred

by readers in reliance on such content.

Computer Graphics World cannot be held responsible for the safekeeping or return of unsolicited articles, manuscripts, photographs, illustrations or other materials. Address all subscription correspondence

to: Computer Graphics World, 620 West Elk Ave, Glendale, CA 91204. Subscriptions are available free to qualified individuals within the United States. Non-qualified subscription rates: USA—$72 for 1 year, $98 for 2

years; Canadian subscriptions—$98 for 1 year and $136 for 2 years; all other countries—$150 for 1 year and $208 for 2 years.

Digital subscriptions are available for $27 per year. Subscribers can also contact customer service by calling

(800) 280 6446, opt 2 (publishing), opt 1 (subscriptions) or sending an email to [email protected].

Postmaster: Send Address Changes to Computer Graphics World,

620 W. Elk Ave., Glendale, CA 91204 Please send customer service inquiries to

620 W. Elk Ave., Glendale, CA 91204

Editor’s Note


Just a few years ago, we were marveling at the quality, and the quantity, of the 3D films released in theaters. Yet in the rush to embrace 3D (and, of course, collect more at the box office), some studios simply jumped on the bandwagon with a half-hearted attempt at stereo. Audiences were quick to forgive, being just as anxious to see a 3D movie as the studios were in offering one. Not so today.

In 2010, there were fewer than 20 films released in stereo 3D. Not among them: Harry Potter and the Deathly Hallows, Part 1. Could Warner Bros. have made a lot more money with a 3D film? Absolutely. The brand was strong enough and the fans numerous enough to warrant it. But rather than put out a mediocre product, the studio decided to wait. A half-year later, the wait was over. Part 2, which marked an end to the Potter saga, gave us a fitting farewell that embraced the true magic of 3D. In 2011, we saw nearly 30 movies utilize the medium, some coming out on the heels of another. I had to see Pirates in 2D because less than a week after its debut in stereo, theaters began pushing it to their 2D screens to make room for the 3D version of Kung Fu Panda 2. Green Lantern and Cars 2 had the same problem, as did Harry Potter and Captain America.

Nevertheless, the number of 3D-equipped theaters continues to grow. In the coming year, however, audiences will have to pony up more dollars for 3D glasses. I hadn’t realized that the cost for the plastic eyewear (definitely an improvement over the old-style paper ones) had been subsidized by the studios. Now Sony is saying that it will no longer do so starting in May, about the time when its new Spider-Man film will be released. Other studios have not commented on their plans, but I expect more to follow Sony’s lead. So, in addition to the added admission price for a 3D film, moviegoers may have to dig a little deeper for the cost of the eyewear. In Europe, viewers pay roughly $1 for a pair of reusable RealD glasses; in Asia, some plunk down a refundable deposit. Of course, there is a third option: designer 3D glasses. A number of designers are jumping at this trend, Oakley among them. And this past summer, Marchon3D began installing vending machines at cinemas that dispense designer 3D eyewear ranging in cost from approximately $20 to $70. At first, this sounded ridiculous, given that the glasses are worn in dark theaters—who would even notice them? But even though the glasses are geared for RealD movies, they can also be used with passive laptops, gaming consoles, and HDTVs.

With all the scheduled 3D releases in 2012, there is little question that moviegoers will bite the bullet and rent or purchase the necessary eyewear to see films like Men in Black III, Brave, The Amazing Spider-Man, The Hobbit, and more. Kicking off the new year is a new look to a classic: Beauty and the Beast 3D. Just a few months ago, Disney released the hugely popular Lion King (1994) in stereo. Initially planned for a two-week run, the film did so well at the box office—garnering approximately $94 million (in addition to the $826 million generated by the original)—that Disney extended its run. The studio also decided to reissue some other classics in 3D, including Finding Nemo (September 2012), Monsters, Inc. (January 2013), and The Little Mermaid (September 2012). “Great stories and great characters are timeless, and at Disney, we’re fortunate to have a treasure trove of both,” said Alan Bergman, president of The Walt Disney Studios. “We’re thrilled to give audiences of all ages the chance to experience these beloved tales in an exciting new way with 3D.”

Just don’t forget your glasses! ■

Stereo 3D: A Visible Difference




The Foundry Introduces Katana 1.0

The Foundry recently released Katana 1.0, a look-development and lighting tool that replaces the conventional CG pipeline with a flexible, recipe-based asset workflow. In tandem with this release, Industrial Light & Magic (ILM), a Lucasfilm company, has purchased a site license. Currently using Katana on upcoming productions, ILM made this investment to boost the production pipeline across its ILM and Lucasfilm studios. As a Katana site license holder, ILM will deploy the software in both its San Francisco and Singapore studios.

Katana is specifically designed to address the needs of a highly scalable asset-based workflow: it allows updating of assets once shots are already in progress; shares lighting setups, such as edits and overrides, between shots and sequences; allows the use of multiple renderers and the specification of dependencies between render passes; and allows shot-specific modifications of assets to become part of the lighting “recipe” for shots, avoiding large numbers of shot-specific asset variants. Furthermore, Katana is built from the ground up with the needs of modern productions in mind. Extensive APIs mean it integrates with current pipelines, shader libraries, and workflow tools, while its collaborative nature allows it to scale to meet the needs of the most demanding productions.

The main attraction of The Foundry’s Katana stems from the flexibility of the product, as it has the ability to produce incredibly complicated shots while allowing artists to retain control. Katana is backed by The Foundry, a provider of high-end visual effects tools, and has been production-proven on over 20 shows since 2004 at Sony Pictures Imageworks.

Dassault Looks to the Cloud

Dassault Systèmes (DS) recently announced a cloud-based partnership with Amazon.com’s Web Services arm that will enable clients to use its 3D design and manufacturing software remotely over the cloud. PLM and 3D software are traditionally memory-intensive, but by partnering with Amazon Web Services (AWS), DS will be able to offer clients a preconfigured environment for remotely running 3D and PLM software without having to buy expensive hardware.

DS is leveraging multiple AWS services to power its Version 6 software platform, providing high-performance and highly available resources via the Amazon Elastic Compute Cloud (Amazon EC2) for discrete compute environments. This expands the geographic reach of DS customers, regardless of their physical location. Customers now easily access design content, while DS can store volumes of design data without having to support an extensive array of legacy platforms.

In other news, DS has made its new online Version 6 platform—offered as a subscription model—available over the cloud. Also, DS announced its strategic investment in Outscale, a start-up providing next-gen SaaS for leveraging dynamic public-cloud resource allocation. Lastly, the firm has updated its Version 6 software to V6R2012, delivering an open, collaborative platform by broadening the value of digital assets into new solutions such as immersive retail store experiences and global production system planning.

Dell Takes a Terabyte from Mobile Workstation Storage

Dell’s Precision M6600 and M4600 mobile workstations, which launched in May, are now available with 512GB SATA3 Mobility solid-state drives (SSDs). The M6600 also offers the Nvidia Quadro 5010M mobile professional graphics GPU with 4GB of dedicated GDDR5 memory.

The Dell Precision M6600 and M4600 are the first mobile workstations to offer 512GB SATA3 Mobility SSDs, giving users 500MB/sec read and 300MB/sec write speeds. With the M6600 offering two full storage slots with up to two 512GB SSDs and one mini-card slot with up to 128GB, workstation users can have more than a terabyte of solid-state storage in a mobile workstation. The 512GB SSD and Nvidia 5010M are available with pricing starting at $1120 and $1640, respectively.


THE #1 ANIMATED MOVIE OF THE YEAR
“The scale of the visuals is enormous, and the animated images are beautiful and spectacular to behold.” -James Verniere, BOSTON HERALD

FOR YOUR CONSIDERATION
BEST VISUAL EFFECTS: Alex Parkinson


According to Jon Peddie Research (JPR), the industry’s research and consulting firm for graphics and multimedia, estimated graphics chip shipments for Q3 2011 are up 16.7% over last quarter and 18.4% over last year. Intel led the quarter with 36.5% growth, with Nvidia at 30% growth. Shipments during the third quarter of 2011 did (finally) behave according to past years with regard to seasonality, and were higher on a year-to-year comparison for the quarter. 2011 is still an unusual year for the PC and graphics suppliers, however, as businesses take their own path to recovery.

The third quarter of the year is usually the growth quarter, and was this year, which is a positive sign looking forward. The growth in Q3 comes as a welcome change—but is it inventory building for the holiday season?

This quarter, Intel celebrated its seventh quarter of embedded processor graphics CPU (EPG, a multi-chip design that combines a graphics processor and CPU in the same package) shipments, and had very strong double-digit growth in desktops and notebooks. AMD lost overall market share, while Intel gained more compared to last quarter, and Nvidia declined due to its exit from the integrated segments.

Year to year this quarter, Intel’s market share increased (9.5%), AMD broke even, and Nvidia slipped (-23%) in the overall market, partially due to the company’s withdrawal from the integrated segments. However, Nvidia gained 10.9% in the desktop discrete area.

The quarter’s change in total shipments from last quarter was an increase of 16.7%, above the 10-year average of 13.9%. AMD’s HPU quarter-to-quarter growth has been extraordinary at an average of 58.4% for desktop and notebook, and Intel’s EPG growth was significant at an average of 23.6%. This is a clear showing of the industry’s affirmation of the value of CPUs with embedded graphics and is in line with JPR’s forecasts. The major, and logical, impact is on older IGPs, and some on low-end, low-cost add-in boards (AIBs). Almost 92 million PCs shipped worldwide this quarter, an increase of 8.8% compared to last quarter (based on an average of reports from Dataquest, IDC, and HSI).

At least one and, often, two GPUs are present in every PC shipped. A GPU can take the form of a discrete chip, a GPU integrated in the chipset, or a GPU embedded in the CPU. The average has grown from 115 percent in 2001 to almost 160 percent, or nearly 1.6 GPUs per PC. Discrete graphics processing unit (GPU) chips and other chips with graphics are a leading indicator for the PC market.

Market shares shifted for the big three and put pressure on the smaller three, most of which showed a decrease in market share, as indicated in the chart on this page. Intel continues to be the overall market-share leader, elevated by Core i5 EPG CPUs, Sandy Bridge, and Pineview Atom sales for netbooks. AMD lost market share quarter to quarter, and Nvidia lost share.

Nvidia is exiting the integrated graphics segments and shifting focus to discrete GPUs. The company showed significant discrete market-share gain (30% quarter to quarter), which Nvidia credits to a strong connect with new Intel Sandy Bridge notebooks. Ironically, Nvidia enjoyed some serendipitous sales of IGPs in Q3 due to some older AMD CPU sales in Asia.

AMD’s overall graphics market share dropped 0.3% from last quarter, even though the company’s HPU-class Fusion APU processors are selling very well.

Q3 Graphics Shipments Up

[Chart: Growth from Q2 to Q3, 2001–2011. Labeled data points: growth from Q1 to Q2, 2.27%; growth from Q2 to Q3, 13.98%.]

The Foundry Unveils Ocula 3.0

The Foundry has rolled out Ocula 3.0, a significant upgrade to its stereo plug-in tool set for the Nuke compositing system. Ocula, used in production on groundbreaking live-action stereo projects including Avatar and Tron: Legacy, provides artists with a set of Nuke tools that assist with the integration of elements and help correct common stereo 3D defects.

Ocula 3.0, the biggest upgrade of the product to date, brings new tools to help fix mis-focused camera pairs and retime in stereo, as well as a range of workflow tweaks and improvements to speed up day-to-day Ocula work. The new version is priced starting at $5400.


Imagineer, Boris FX Release Motion Tracking Bundle

Imagineer Systems has teamed up with Boris FX to launch Motion Tracking for Editors, a motion-tracking tool-set bundle designed to work with Adobe After Effects, Premiere Pro, Apple FCP 7, Motion, and Sony Vegas Pro.

Available immediately for $299, the Motion Tracking for Editors bundle includes Imagineer Systems’ newest release of Mocha AE v2.6.1 and the Boris Continuum Motion Tracker Unit from Boris FX. Mocha AE is a stand-alone planar tracking and roto tool. The Boris Continuum Motion Tracker Unit delivers matchmove, corner pin, witness-protection face blurring, and wire-remover capabilities. As a result of this collaboration, the new Motion Tracking for Editors bundle enables users to export tracking data from Mocha AE directly to Boris Continuum Complete, giving editors access to more visual effects capabilities within their host system.

Panasonic Launches 3D-Ready Pro Plasma

Panasonic revealed the TH-65VX300U, the newest addition to its family of HD professional plasma displays. The 65-inch display’s color reproduction approaches digital cinema standards, while its ultra-high-speed drive technology achieves clear, extremely detailed 3D video and enhances 2D content. The advanced drive provides gradation twice as smooth as that of conventional models, resulting in richer gradation expression in dark areas of the screen.

The TH-65VX300U is also equipped with multiple customizable functions for the postproduction experience, including a wide color gamut that can be selected from five setting types, an option to customize the placement of RGB, and added adjustment menus. Furthermore, independent RGB on/off functionality checks secondary colors or monochrome images, helping with individual color calibration. The display, priced at $6250, includes a waveform monitor to confirm the incoming signal.

AMD Fires Up FirePro V4900

AMD has launched the AMD FirePro V4900, which delivers unequalled performance for DCC and CAD professionals at an entry-level price point. By leveraging AMD’s most advanced graphics technology, including AMD Eyefinity technology, the AMD FirePro V4900 improves application performance. In fact, the AMD FirePro V4900 more than doubled the performance of competitive offerings in many CAD and DCC application tests.

The AMD FirePro V4900 is designed to exceed the needs of graphics professionals. The GPU’s 1GB of 128-bit GDDR5 RAM drives memory bandwidth to 64GB/sec, allowing rapid data access, while Microsoft DirectX 11, OpenGL 4.2, and OpenCL support empowers users to render and manipulate models using the broadest range of tools and applications. Enhanced AMD Eyefinity and DisplayPort 1.2 technology enable six-screen multidisplay setups. The V4900 is available in select Dell and Fujitsu systems and HP workstations. As of November 1, it is being sold for $189 at select online resellers.

Boxx Goes Xtreme

Boxx Technologies has released the 3DBoxx 3970 Xtreme, pitched by the company as “the world’s fastest workstation for Autodesk Revit, SolidWorks, and other frequency-bound software applications.”

The 3970 XT—priced at just over $2900—features a performance-enhanced, second-generation (overclocked) Intel Core i7 processor, along with Intel Smart Response Technology, which enables quick access to media files and accelerated performance overall. Neither technology is currently available in mass-produced workstations. Smart Response enables the system to automatically learn which files users access frequently and copy them from the hard-disk drive to the solid-state drives (SSDs). The next time these files are requested, the system loads them from the SSDs rather than the slower hard drive, for faster booting, faster application loading, and accelerated performance.


High Performance Camera Tracking

Use SynthEyes for animated critter insertion, fixing shaky shots, virtual set extensions, making 3D movies, architectural previews, accident reconstruction, virtual product placement, face and body capture, and more.

32-bit only $399
Windows-32 / Windows-64 / OS X-32 / OS X-64

Includes stereoscopic features used in AVATAR

According to Jon Peddie Research (JPR), in 2011, with the full-scale production of scalar x86 CPUs with powerful multi-core, SIMD graphics-processing elements, a true inflection point occurred in the PC and related industries. As a result, the ubiquitous and stalwart integrated graphics processor (IGP) is fading out of existence. For several reasons, many people believed (and some hoped) that the CPU and the GPU would never be integrated: GPUs are characterized by a high level of complexity, with power and cooling demands, and dramatically different memory-management needs; GPU design cycles are faster than those of the CPU; the GPU has grown in complexity compared to the CPU, exceeding its transistor count and matching or exceeding its die size; and the x86 has steadily increased in complexity and power consumption, and become multi-core.

With four times the number of transistors possible in the same space as the previous manufacturing node, Moore’s Law seems unstoppable. With the move to 32nm, and now 28nm, the integration of such complex and alien functionality is not only feasible, but a reality.

Jon Peddie, president of JPR, notes a new trend impacting discrete GPUs due to the combination of devices being offered with integrated graphics (IGPs, EPGs, and HPUs). “The integrated processors will impact GPU sales and change traditional sales patterns. The trend may even put the category in decline—at least so some believe,” he says, “but it’s not that simple. Nothing in the PC industry is.”

The EPG/HPU will revolutionize the PC and associated industries. The amount of computation capability available in the size, weight, and power consumption of systems with EPG/HPUs, coupled with the attractive prices they will carry, will upset the market dynamics like never before, and maybe like nothing since the introduction of the PC. Further details are available in “The Market Dynamics Created by the Embedded Graphics Processors Study” from JPR.

Embedded Graphics Processors Killing off IGPs

Character Animation

Images ©2011 Paramount Pictures.


If the young reporter Tintin, star of the comic-book series by the Belgian artist Hergé and, most recently, of an animated feature film, were to write about the making of that film for his newspaper, Le Petit Vingtième, he’d surely headline it: “First animated feature directed by Steven Spielberg! First animated feature produced by Peter Jackson!” And then we’d see the headline especially interesting to those in computer graphics: “First animated feature created at Weta Digital!”

Or, is it? It wouldn’t be much of a stretch to call large portions of Avatar—the most successful film of all time, also largely created at Weta Digital—an animated feature. After all, in much of that film, the Na’vi are animated characters in a virtual environment. And, as they did for Avatar, Weta Digital animators performed Tintin’s characters using data captured from actors wearing head rigs as part of a facial-capture system developed at the studio. Award-winning directors famous for live-action, action-adventure movies directed the actors for both films on a performance-capture stage set up by Giant Studios and “filmed” them with a virtual camera while watching a real-time, on-set composite.

“[Tintin] was really an evolution of what we’ve done for visual effects,” says Joe Letteri, senior visual effects supervisor at Weta Digital, who received Oscars for work he supervised on Avatar, King Kong, and two of the Lord of the Rings films. And therein lies one of those clues that Tintin and his dog Snowy so famously uncover: a clue to the reason critics are praising it as the most successful performance-capture film to date. Letteri brushes off the distinction.

“We rolled straight into what we had done for Avatar,” Letteri says. “We developed a new subsurface technique for the skin to have it look a little better, we developed some new facial software to add a layer of muscle simulation beyond what we could track and solve from the facial capture, and we developed a new hair system that we also used on Planet of the Apes. But, from a performance-capture point of view, we are still recording an actor’s performance. It was no different from mapping data to the Na’vi or an ape. We were making comic-book-inspired characters, not ones that looked like humans, but there’s always a level of animation and interpretation. We had big sequences in King Kong that were entirely computer-generated, most of the scenes in Avatar were entirely in a CG virtual world, and Tintin is in a virtual world all the way. For us, there’s no difference.”

Tintin was a success two months before opening in the US. The film’s approval rating on Rotten Tomatoes hovered around 86 percent as it topped the international box office during the first two weeks following its release in Europe, and by the end of the third week, Tintin had captured $159.1 million at the box office, even though it had yet to open in the US or many other regions.

Presented by Paramount Pictures and Columbia Pictures, the film is a rollicking action-adventure that sends Tintin and his dog Snowy dashing through Europe and Africa, on ships, trains, and planes, and even into the past, and a comparison to Spielberg’s Indiana Jones films is apt. It stars Jamie Bell as Tintin; Andy Serkis as the whiskey-soaked Captain Haddock; Daniel Craig as Ivan Ivanovitch Sakharine, a pirate and a descendant of Red Rackham (whom he also plays); Toby Jones as the pickpocket Silk; Simon Pegg and Nick Frost as the bumbling detectives Thompson and Thomson; and Snowy, a little white terrier who is Tintin’s constant companion. All the characters are CG; Snowy is the only star performed entirely with keyframe animation.

And yet, everything about Tintin, except for the fact that it is an animated film, has a live-action sensibility. The characters have a cartoon patina, and their performances are a bit broader than a human’s, but the artists started with real performances and then referenced reality to add skin, clothes, and hair. For environments, the crew didn’t have live-action plates, so they

ANIMATION EVOLUTION

The artists at Weta Digital turn their fastidious talents to their next film and create a remarkable animated feature, to the surprise of everyone except themselves.

By Barbara Robertson

referenced the comic books for design and the real world for textures and dynamics. The film may trace its origin to comic books from the early 1940s, but this is not your father’s animated film. The attention to detail is amazing.

“We ran this show exactly like every other show,” says Simon Clutterbuck, digital creature supervisor, “as if we were doing 100 shots in a visual effects movie. The focus on every texture, every motion, every simulation was intense. We never said, ‘Oh, that’s done,’ and locked an asset. We looked at everything every day. We had a process where things ran in parallel; we even built while lighting. If something in a shot needed to change, we changed it. All the way through production, shots constantly evolved and got better and better.”

A Model Production
A team of modelers that ranged between 40 and 60 people built the 4000 digital assets needed for the film, creating face shapes and deformations for the animators and adding fur and hair to the characters. Modelers at Weta Digital work within an Autodesk Maya pipeline. Many modelers also sculpt using Autodesk’s Mudbox, originally developed at the studio, and a few add Pixologic’s ZBrush to the mix.

Modelers moved back and forth between hard-surface models and characters, although one team specialized in fur and hair, and another in creating face shapes and deformations. Also, the modelers gave the characters especially detailed hands. “We had amazing reference—an MRI and a life cast of a guy’s hands that we used to build new, high-fidelity hand models,” Clutterbuck says.

The main character, Tintin, had the most difficult face to model. “He’s a balloon with two dark eyes and an oval mouth,” says Wayne Stables, visual effects supervisor. “That worked for Hergé. But we had to develop a three-dimensional character.”

The artists started with the 2D character, picking frames that, when combined like a flip book, created a three-dimensional look. Next, they translated that look to a rigged CG model, asked an actor to mimic Tintin’s expressions from the comic books, and applied those expressions to the 3D model. “Then, we began exploring changes,” Stables says. “We changed the model’s nose and gave Tintin cheekbones and a jaw. By the time we had a Tintin we liked, we had tried 1600 variations.”

In addition to the main characters, the modelers built hundreds of crowd characters. “We created new characters all the way to the end,” says Marco Revelant, models supervisor. “I remember adding a female character in the last month. We generate models from the same elements, even using the same topology for the main characters and the generic characters. The distinction between a main character and a crowd character is in the complexity of the facial system, not in the model itself. For the generic characters, we have an automatic way to generate a basic facial system.”

To rig the bodies, character technical directors worked with a generic model, which they call “genman,” a fully simulated muscle model. Two creature TDs rigged all the characters, one working on Snowy, the other on the human characters.

“We hadn’t done a dog, so that was a full-time job,” Clutterbuck says. “But we had done a lot of development on bipeds for Avatar and had a good genman model. We used it on Apes, but we took it to an extreme for Tintin and built everyone from the same guy, all procedurally. We started with a surface model and used a process we call ‘warping,’ to fit the whole rig from a base model to the new model, and it’s good to go. If we weren’t happy with something, we’d fix it on the template and push it out to all the characters.”
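The article doesn’t describe how Weta’s “warping” actually works. As a rough illustration only, one can imagine re-fitting a rig between two same-topology meshes by transferring per-vertex offsets; the function name and the nearest-vertex heuristic below are invented for this sketch.

```python
import numpy as np

def warp_rig(base_verts, target_verts, rig_points):
    # Toy "warp": each rig point inherits the offset of its nearest
    # base-mesh vertex, re-fitting the rig to the new model's shape.
    warped = rig_points.copy()
    for i, p in enumerate(rig_points):
        j = int(np.argmin(np.linalg.norm(base_verts - p, axis=1)))
        warped[i] = p + (target_verts[j] - base_verts[j])
    return warped
```

The appeal of this kind of template approach, as Clutterbuck describes it, is that a fix made once on the base model can be pushed out to every character automatically.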

For Avatar’s nearly naked Na’vi, the crew had developed Tissue, a simulation system, to build muscles, skin, and fat. “It’s a linear-elastic finite-element system,” Clutterbuck says, “a stand-alone thing with a front-end bolted onto Maya so artists can interact with it. We plug animation into the system and it adds the simulation on top; it’s our tool set for deformation work.”

For Tintin, though, the crew pushed the system further to add dynamics to facial deformations driven by the captured data and keyframed animation. The developers plan to submit a technical paper to SIGGRAPH 2012 on the technique.

“We wanted wobbly cheeks, chin folds, skin colliding with itself around the facial area,” Clutterbuck says. “To get that, we needed both dynamics and facial deformations. So we took what’s effectively a series of blendshapes rigged in the facial puppet and mapped them into the simulation system to add simulated elements to the face.”

To control the constraint-based simulation, artists painted attribute maps on the facial puppet. Clutterbuck gives an example: “We have a [jowly] character named Barnaby, and we have the performance for his chin and lips, but we wanted those areas to interact with his wobbly chin. So, instead of trying to do two separate solutions and blend them, this system unified everything. We painted little patches around his lips, and the attribute map set up everything once. After that, the simulation was procedural. The solver can also wobble, wrinkle, and buckle all at the same time. The animators didn’t see any of this; they concentrated on the performance.”
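Weta’s constraint-based solver is far more involved than any snippet, but the role a painted attribute map plays can be illustrated with a minimal per-vertex blend between the captured performance and a simulated result. All names and values here are hypothetical.

```python
import numpy as np

def blend_deformation(keyframed, simulated, weight_map):
    # Painted per-vertex weights: 0 keeps the captured/keyframed pose,
    # 1 hands that region of the face fully to the simulation.
    w = weight_map[:, None]          # broadcast one weight per vertex
    return (1.0 - w) * keyframed + w * simulated
```

The point of the unified system Clutterbuck describes is that artists paint where simulation takes over once, rather than solving and blending two separate versions of the face per shot.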

Perfecting Performances
The animators received data for the characters’ faces, bodies, eyes, thumb, index, and pinky fingers, captured performances that provided what animation supervisor Jamie Beard calls a “starting block.” Beard worked on Tintin for five years, supervising the previsualization and then leading the team of between 50 and 60 animators.

“We offered the director an animated and a live-action world,” Beard says. “On set he could be a live-action filmmaker, blocking out the actors and directing them. Once captured, if the scene was perfect, we’d work on the performances only a limited amount to change them slightly if Steven [Spielberg] wanted to tweak them; directors being who they are always have more ideas. But, we’d always go back to make sure we hadn’t detoured too far from the original essence. If we had given Tintin a bigger smile, we made sure he still had Jamie Bell’s performance.”

Animators at Weta Digital started with performance-capture data for all the characters except Snowy, the little white terrier. Steven Spielberg directed actors who performed the characters on a motion-capture stage using a system at Giant Studios similar to the one James Cameron had used for Avatar.

Finding the balance between Tintin’s stylistic photorealism and reality was the challenge. Unlike for Avatar, in which the animators wanted the audience to see Sigourney Weaver in her avatar’s face, Tintin’s animators needed to apply the facial system to cartoony characters. “We had to cross a threshold,” Beard says. “We have the actors’ performances, but the look comes from Hergé. We wanted those performances, but we had to fit those performances on characters that didn’t look like the actors. That’s when the artistry of the animators came in. We used the same fidelity of data captured from the small cameras that we used on Avatar, only in a completely different way, taking Steven’s direction to fit the expressions and make an animated film. But, you can still see the performances they captured on the characters. We spent a lot of time learning how to move the muscle system for our cartoony humans.”

In addition to the main characters, the animators also manipulated data captured for crowds. “Once they finished principal photography [performance capture] for the main actors and the shot was cut together, they would capture actors for the crowds,” Beard explains. “For the pirate battle, which needed 120 people, I had six actors. We’d do multiple passes with those six people to fill up the scene.” Similarly, to fill marketplaces in England and Morocco with crowds, the crew captured six people at a time.

Because the entire world is digital, the animators also worked on other elements—cans of paint rolling on the floor, coins, ships in the ocean, and so forth, animating by hand all the props and vehicles that couldn’t be animated procedurally. “Procedural animation doesn’t lend itself well to comedy,” Beard says, providing an example. “We had a scene with sleeping sailors on bunks, and they all had to be flopping in their bunks, snoring. The bunks drifted around, and the chains moved independently. The animators who were assigned to that scene had their eyes roll back in their heads. It all had to look natural and slightly comedic. It was a real task.”

Beard divided the team by shots, choosing those that reflected particular animators’ skill sets. “Some people were skilled at animating big, heavy scenes, so they would do everything in those shots,” Beard says. “And, I had some fantastic animators who had a really good handle on Snowy. One strong animator, Aaron Gilman, knew Snowy very well. Aaron has lots of energy, and he’s inquisitive, and the more I talk about him, the more I realize that he is Snowy. He fit the role perfectly.”

Scene Stealer
Snowy, the only hand-animated character in the film, appears in most of the scenes with Tintin, sometimes even driving the story. Hergé based the dog on a wire fox terrier, and like that breed, Snowy is intelligent, active, and mischievous. As in the comic books, he’s a scene stealer.

On the performance-capture stage, a puppeteer moved a toy version of Snowy for blocking and giving the actors proper eye lines. In addition, Beard put cutouts of printed images of Snowy on cardboard stands near Spielberg’s monitor to remind him that Snowy would play a big role.

As in the film, at Weta Digital, Snowy often drove the story. “There’s a fine line between a photoreal dog and the caricatured animal in the comic books,” says Clutterbuck. “Finding that balance took a reasonable amount of time. We’d build him, animate, and render him, show him to Peter and Steven, and then fine-tune his proportions until we had a real animal that was also Hergé’s Snowy. It was a full-time job.”

Back Story
Steven Spielberg discovered Hergé’s comic books and became a fan after a reviewer in France compared the first Indiana Jones to “Tintin.” In fact, when Spielberg and executive producer Kathleen Kennedy first approached Weta Digital about making Tintin, they planned to make a live-action film.

“The idea was to have us create Snowy,” says Joe Letteri, senior visual effects supervisor at Weta Digital. “So we shot a test with someone from Weta Workshop dressed in a Tintin costume and started on a realistic digital version of Snowy. But, in the meantime, I talked to Peter Jackson and came up with the idea of having him on camera auditioning for Captain Haddock, with Snowy stealing the scene from Peter.” The scene was a tip of the hat to Hergé, who often had Snowy steal scenes from Tintin in his comics.

Letteri first showed Spielberg the test that the director thought they were working on, and then the test with Jackson. “Steven said to Peter, ‘OK, we’re working together,’ ” Letteri says. Thus, the two directors/producers began exploring ways to make Tintin’s world together, and as they talked, Jackson began suggesting they make it digital. Spielberg was cautious. So, Letteri and Jackson arranged a test.

“By then, we had finished King Kong, and we were getting ready for Avatar, so we asked Jim [Cameron] if we could bring Steven [Spielberg] over to have a look,” Letteri says. “Jim gave Steven and Peter the stage for two days during Thanksgiving break, and that got the ball rolling.” And it rolled all the way into an animated feature created with computer graphics, a film, given the antics of Snowy and the wild action scenes, that could never have been made with live-action photography.

Spielberg and Jackson shot the film on a performance-capture stage at Giant Studios in Los Angeles using Giant’s motion-capture technology and the head-rig hardware and software that Weta Digital had developed to capture facial performances for Avatar.

“Steven was on stage directing, and Peter checked in remotely because he was still working on Lovely Bones,” Letteri says. “They would confer and work out from day to day what to do next. Peter stayed involved for as long as he could be, but he had to go off and prep Hobbit, and he was involved with that as we finished up. He’d still review things and give notes, but our daily calls were with Steven.”

Letteri continues: “I think Steven enjoyed the process. It was freeing to go in and work like he was used to working with the actors and camera, to explore scenes quickly, and then he kicked back to us the things that take a long time. He didn’t have to travel or wait for sets to be built.” –Barbara Robertson

Inside, Snowy has cutting-edge technology. His canine anatomy required a new simulation model because, unlike humans and apes, dogs don’t have collar bones; the shoulder bone—that is, the scapula—is disconnected. A fascia, which is a connective tissue, surrounds groups of muscles, blood vessels, and nerves, and holds them in place.

“We had to build a fascia system that was like a tissue layer that enveloped the muscles,” Clutterbuck says. “Now you can see the form of Snowy’s shoulder down to the elbow changing under the surface of the skin. Richard Dorling [lead software engineer for creatures] developed key muscle models that he attached to the skin to get the surface doing the right thing.”

Before giving Snowy’s performance to the animators, the crew tried motion-capturing a dog. “We did only one motion-capture session and then realized it had to be animation,” Beard says. “You’d think motion capture would free you up, but the dog on the live-action set would be led by a trainer and would look up at the trainer. To get the real terrier attitude of Snowy, we had to animate him all the way. He became one of those characters the animators could really put themselves into. We kept thinking of things Snowy could do to keep people entertained.” In fact, Snowy’s antics are one justification for making an animated feature rather than a live-action film.

For reference, the animators visited local dog clubs, brought dogs into the studios, watched videos, and, of course, read Hergé’s comic books because although Hergé based Snowy on a real dog, he was a comic-book character.

“Snowy has human characteristics in the comics, particularly in his eyes and brows,” Beard points out. “This isn’t a world with a one-to-one relationship with reality. We would start animating with him and find his nose had to be smaller or bigger, and then we would go back and animate him again. And it was hard to light a character with white fur. His eyes would become two black dots, and we couldn’t see what he was thinking. So, we had to keep going back into it and reworking until we could read his expressions, making sure his fur wasn’t changing his performance.”

Hair Today
Revelant, who was in charge of the hair and fur team from the modeling side, has been working with fur at Weta Digital since King Kong. After Avatar, he and code department supervisor Alasdair Coull worked on a prototype system called Barbershop that Coull then took to completion. “The system we had was a problem because it had a long learning curve, and only a few people could use it properly,” Revelant says. “Barbershop really helped with Snowy.”

Hergé’s Snowy has a simple design; he’s white, with no shading. “He’s like a cloud that a kid would draw,” Revelant says. “He’s defined by his outline. We found reference of the dog Hergé used as reference, but the problem was that Snowy doesn’t look like that dog. So, we had to figure out two things: What was under the fur, and how was the fur going to work. We’d take the model, apply the fur, look at it, change the model, and transfer the fur to it, back and forth.”

With the previous system, the artists would have had to place guide hairs that multiplied into thousands at render time, and after rendering, the artists could not move any one of the resulting strands of hair. With Barbershop, each of Snowy’s million strands of hair could be a curve with which the artists could groom the terrier’s rough coat. Similarly, digital barbers used the system to perfect Tintin’s iconic coif.

“The concept is that what you see in Maya is what gets rendered,” Revelant says. “You can see the full density in Maya, although artists can reduce the level of density as they refine the look. And, we use an OpenGL shading scheme that gave us a good representation of the lighting while we groomed; it uses the same algorithm we use on our [Pixar RenderMan] side. We don’t interpolate hair; there is no creation of hair after we finish grooming.”

When hair and fur groomers “brush” the hair, they move control points but at any time can convert the hair to curves and manipulate the curves, as with any Maya primitive. “You can basically use the brush to give parameters to the hair,” Revelant says. “You can comb it the length you want, straighten the curve, or curl it. You’re not painting on a map; all the information stays in the hair.”
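Barbershop is proprietary, so the following is only a toy sketch of the brush idea Revelant describes: nudging a strand’s control points toward a comb direction while keeping the information in the hair itself (here, by preserving each segment’s length) rather than in a painted map.

```python
import numpy as np

def comb(strand, direction, strength=0.5):
    # Push each control point (except the root) toward the comb
    # direction, more strongly toward the tip, like a brush stroke.
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    out = strand.copy()
    n = len(strand)
    for i in range(1, n):
        t = i / (n - 1)                    # 0 at root, 1 at tip
        seg = strand[i] - strand[i - 1]
        length = np.linalg.norm(seg)
        blended = (1 - strength * t) * seg + strength * t * length * d
        blended *= length / np.linalg.norm(blended)  # keep segment length
        out[i] = out[i - 1] + blended
    return out
```

Because the operation edits the curve’s own control points, the groomed result can be re-brushed, straightened, or curled later—nothing is baked into a texture map.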

One advantage of the system is that it is independent from the underlying mesh, which means that changes to the UVs in the topology do not necessarily affect the fur. It also means that the artists could transfer the hair groom for one character to another and generate variations without much hassle. “We can even merge one groom with another and create a third one,” Revelant says. “We used that a lot for the crowd characters.”

Hair Lights
In addition to having tricky grooms, Tintin and Snowy also had the most difficult hair to render. Tintin’s light-red hair could easily look too blond or too dark. And Snowy’s hair is white. They were the first two light-haired main characters Weta Digital had encountered, and their hair demanded new shading models.

Jedrzej Wojtowicz supervised a team of 16 people in the shading department who, with the help of R&D, dealt with the issue. “The problem was the scattering of light,” Wojtowicz says. “Previously, most of the hair we created was dark, so we could have simpler models than we needed for Tintin. Imagine a hair fiber as a metal tube. If I shine a light on it, it reflects that light; the light bounces back in a straightforward fashion. That’s synonymous to black hair. Light-colored hair is closer to a candle, a cylinder that’s partially reflective but allows light to travel through it. As some of the light travels through, it picks up some of the coloration and bounces out with a different color. The rest of the light travels through in a straight line and absorbs some color. So the problem was how to model the interaction between hundreds of these highly light-scattering hairs. What does the light that picked up color from the first hair do when it bounces into another hair?”

Modelers referenced Hergé’s reference materials, found photographs of the objects he referenced, and then created period-appropriate CG vehicles in the same style as those in Hergé’s comic books.

And that’s only part of the problem. As the light propagates through a volume of hair, the color it absorbs varies depending on how rough or shiny the hair is. Rougher hair scatters light in more directions than smooth hair; the energy spreads and imparts a different quality and amount of light to neighboring hairs.

“This happens in real life,” Wojtowicz says. “Our goal was to imitate it as best we could using materials we can generate by studying photography and by doing spectral measurements. Nature sits as a precedent; that’s why we attack things from a physically based way. If we have to make assumptions after the fact, we will.”

To solve the problem, the shader developers moved from a model based on light interacting with a single hair fiber to a dual-scattering model. And then, they found ways to create shadows within the volume. “We had worked on scattering the light between the hairs, but what if the character’s hand blocked half the hair?” Wojtowicz questions. “How do the scattering and absorptive techniques work with our shadowing techniques? Each hair had to ask, ‘How exposed to light am I? How deep in the volume?’ ”
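The production shader is of course far richer, but the core reason dark hair is forgiving and light hair is not can be shown with a toy version of the dual-scattering idea: light arriving at a strand deep in the volume is roughly the product of the forward-scattering attenuations of every hair in front of it. The parameter values below are illustrative, not measured.

```python
def global_transmittance(a_f, n_front):
    # a_f: fraction of light a single fiber passes forward.
    # Compounding it over the hairs between the strand and the light
    # gives the "how deep in the volume am I?" attenuation.
    return a_f ** n_front

def shade_strand(direct_light, a_f, n_front, backscatter=0.2):
    # Transmitted light plus a small locally back-scattered term.
    t = global_transmittance(a_f, n_front)
    return direct_light * t * (1.0 + backscatter)
```

With ten hairs in the way, a dark, absorbing fiber (a_f around 0.2) passes almost nothing, while a light fiber (a_f around 0.8) still transmits roughly 10 percent—so in blond or white hair, every deep strand contributes visibly and the multiple scattering must be modeled rather than ignored.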

To move the hair based on the characters’ actions or on elements such as wind in the environment, the character team used Maya nCloth for dynamic simulations, along with various other methods. “We had different models for different things,” Clutterbuck says. “Hair in the wind took one simulation approach. Snowy took another. And, Barbershop has a deformation interface built into the grooming tool, so we can deform the hair any way we want. For a shot when Tintin walks past a mirror and combs his hair with his hand, we built an animation puppet that we plugged into the animation system to deform the hair. We used a bit of everything.”

Skin Tight
Weta Digital artists leveled the same degree of attention to detail to create the characters’ digital skin and other textures in the Tintin environment; however, this process derived from the physical world, not the digital. Gino Acevedo, creative art director and textures supervisor, devised the technique for Avatar and enhanced it for Tintin: He does life casts to capture fine details, and then uses a process to scan the result into Adobe’s Photoshop to make displacement maps.

For Avatar, Acevedo used a material made from seaweed. For Tintin, he switched to a silicone-based material that he says captures 30 percent more detail than the material he had used before. “I made a huge library of skin patterns—faces, elbows, knees, backs, fronts, butts, feet,” he says. “And the great thing about the process is that it works for rocks and trees. We used it a lot for the tree bark. I’d take my little bucket of silicone and slather it on the sides of trees, then peel it off. It works incredibly well—so much better than scanning.”

To capture textures for Tintin’s face, Acevedo started by painting a thin layer of the silicone material on someone with what he calls “interesting skin,” leaving the volunteer’s nose open. The material sets quickly, and once set, he applied plaster bandages over it to create a model of the face. Then he removed the plaster cast, which doesn’t stick to the silicone, carefully peeled off the silicone, and placed the thin layer of silicone in the plaster cast, which acted as a cradle.

Next, Acevedo brushed a two-part mixture of urethane into the negative face cast and sloshed it around until it set. “I usually do a couple of layers to build up the thickness and create a shell,” he says. “Then I reinforce it even more with a rigid polyurethane foam that I pour into the back. It takes up the space and sets up in a few minutes.”

When Acevedo removed the plaster bandages and peeled the silicone skin from the urethane, he had a perfect cast of the person’s face, “every nook and cranny,” he says. But, he wasn’t done yet. Next, Acevedo mixed a transparent silicone material, the same type of material used for animatronic puppets, until it was as thick as honey, and poured it over the face cast.

“I prop it up and use an air hose to blow [the silicone] around to get an even consistency,” says Acevedo. “When I come back in the morning, it’s cured. When I pull it off, from the top of the forehead down, I get a skin the thickness of a latex glove. It’s a copy of the face. If you hold it up to the light, you can see all the skin detail.”

The next task was to digitize the silicone skin. “We cut darts into it to lay it on a flatbed scanner,” Acevedo says. “It looked like a texture map.” Even so, it wasn’t completely flat, so they modified the scanner.

“We cut pieces of Plexiglas to build a wall around the top of the glass and filled the void with baby oil,” Acevedo explains. “We put another piece of glass on top and got perfect scans: 8k-resolution maps with incredible detail.”

Then, in Photoshop, artists removed any dust, scratches, and air bubbles, and amped up the contrast to create the displacement maps. “For the most part, though, the scans were 85 percent ready to go,” Acevedo says. “We saved them online in a library for the artists. When we started a character, say Captain Haddock, we would look at all the scans of people with crow’s-feet and pick one. Then, in Mari, our 3D paint program, we would move the texture around and paint the displacement onto the model.”

Tintin, who has younger, smoother skin, created special problems. “Tintin aged all of us, but I think what we ended up with looks good,” Acevedo says. He explains: “People with perfect skin are very difficult. He’s a red-headed kid, so we thought maybe he should have freckles, but he looked too much like Howdy Doody. So, we started studying young people’s skin to find some details we could use. Tintin now has little scars, like maybe he had a little chicken pox, and very subtle freckles you don’t notice when you first see him, but if they weren’t there, you’d know.”
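The cleanup happened in Photoshop, but the “amp up the contrast” step amounts to something like this simple remap from a cleaned grayscale scan to a displacement map—a code analogy, not the studio’s pipeline.

```python
import numpy as np

def scan_to_displacement(scan, gain=2.0):
    # Center the cleaned grayscale scan around mid-gray, boost
    # contrast, then clamp back to [0, 1] for a displacement map.
    disp = 0.5 + gain * (scan - scan.mean())
    return np.clip(disp, 0.0, 1.0)
```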

They also experimented with his skin color. “We had different masks for his cheek area to give him a rosy blush from time to time,” Acevedo says.

Weta Digital’s hair groomers controlled coifs, coats, and beards with a new “what you see is what you get” system called Barbershop. Tintin and Snowy’s light hair caused researchers and character effects TDs to devise new shading models to more accurately scatter light through the volumes.

To develop shaders, the team started with those used on Avatar. “Even though Jake was blue and Tintin close to pink, we knew the specular qualities of the skin, the technical setup and structure, and how to exploit RenderMan in the best way,” Wojtowicz says. “We could transfer all that. All the characters then veered from that, but at their core, we started from a unified base in terms of the technical structure.”

A new subsurface scattering model helped give the fleshy characters in Tintin a more realistic look, and even helped Snowy. “We had used a dipole model through Avatar,” Wojtowicz says. “That gave us shallow scattering. The new model allowed us to scatter light at a deeper level for different extremes. We could get good-looking candles and have dark-skinned characters, as we do in Africa. It also gave us the ability to give Snowy’s ears that nice pink glow; if we backlit characters, the light would scatter in a more aesthetically pleasing way.” The research into the new subsurface scattering model resulted in a SIGGRAPH 2011 technical paper titled “A Quantized-Diffusion Model for Rendering Translucent Materials” by Eugene d’Eon and Geoffrey Irving.
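The quantized-diffusion paper represents the subsurface scattering profile as a weighted sum of Gaussians. A minimal sketch of that representation (with made-up lobe values) shows why adding wide lobes captures the deep scattering that a shallow-only profile misses.

```python
import math

def gaussian_2d(v, r):
    # Normalized 2D Gaussian of variance v, evaluated at radius r.
    return math.exp(-r * r / (2.0 * v)) / (2.0 * math.pi * v)

def diffusion_profile(r, lobes):
    # Scattering profile approximated as a weighted sum of Gaussians;
    # lobes is a list of (weight, variance) pairs.
    return sum(w * gaussian_2d(v, r) for w, v in lobes)
```

A narrow lobe alone falls to nearly zero a short distance from the entry point; adding a wide lobe keeps energy diffusing deeper into the material—the kind of behavior behind the “pink glow” in backlit ears.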

Costume Department
All the characters except Snowy, of course, wear period costumes, and 15 people worked on those digital costumes, creating patterns for all the garments, dressing the characters in multiple layers of clothes, and simulating the movement. “When you look at Tintin, you forget that the guy is wearing a three-piece suit,” Clutterbuck says. “It’s just there, and you expect it to do the right thing. But, it represents years of work.”

The studio used nCloth in Maya for the simulation, augmented with proprietary software. “We’ve never done clothing to this scale,” Clutterbuck says. “The Na’vi wore loincloths. We started thinking that if we can’t see a shirt under a jacket, we wouldn’t need to simulate it. But you don’t get the right look. So, all the clothes are real; they all have dynamics. We solved the shirt, under the jumper, under the jacket, altogether.”

For cloth textures, Acevedo scanned materials directly. “We had a wardrobe department that made the costumes and put them on models so the creatures department could take videos of the clothes and see how the different types of material moved. We did scans of those materials and used them for the textures.”

Hergé’s World
The artists took as much care with the environments as they did with the characters, carefully creating a world that respected the world Hergé had drawn. This was possible in part because, in addition to the comic books, Hergé’s [Georges Prosper Remi’s] estate gave Weta Digital access to the artist’s original references. “Hergé had a realistic style, but quirky,” Letteri says. “The way he worked was similar to the way we work as visual effects artists. He’d gather all this reference and create, say, a tank that would be a mix of a couple of tanks he liked. We saw his old photos, so we would try to find the objects he photographed. We looked for additional photos as well. We’d figure out the way he drew the object, and then fill it out in three dimensions. It was a really good project.”

Tintin’s apartment, for example, which the artists modeled and textured to match artwork from the comic book, has a phone based on the phone Hergé used as a reference. The cars, the street where Tintin lives, the market are all part of the same European style that Hergé used. “We based everything on reality,” says Stables. “If we don’t have reference for something, it doesn’t exist.” At the VIEW conference in Turin two days before the film opened in Italy, Stables demonstrated the crew’s determination to match Hergé’s world by overlaying a 3D building from the film on a page from the comic. The two matched perfectly.

“The assets in this film represent a huge effort from the research and modeling side,” Revelant says. “We have a way to dress the sets procedurally, but generally we hand modeled everything. We went through all the panel art to find the buildings Hergé drew, and looked for references for buildings with the same style and shape. When you go to that level, procedural is not an option. You want to do it right.”

Stables supervised much of the work in Tintin’s apartment, inside a ship, and an exciting chase sequence through a marketplace, but the film also puts Tintin on pirate ships and in the middle of a pirate battle. Another supervisor, Keith Miller, handled the neighborhood outside Tintin’s apartment, several shots of a seaplane taking off and flying through a storm, and 85 shots in the pirate battle. All told, five visual effects supervisors split the work on the film.

“Water was the most challenging,” Miller says, “particularly for the pirate battle. We tried to keep it as photoreal as possible. The previs stylization was non-physical, so we tried to maintain that character yet preserve the natural aspects of water.” To do that, the team updated its Fast Fourier Transform (FFT) library with new algorithms to simulate the waves and created Smoothed-Particle Hydrodynamics (SPH) simulations for the cresting foam. “We used [Exotic Matter’s] Naiad for hero simulations and interactions when we’re disturbing surfaces with sinking objects,” Miller says, “as well as our own Synapse software.”
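Weta’s FFT library is proprietary, but the general technique—in the style of Tessendorf-type ocean synthesis—is to shape white noise with a wave spectrum in frequency space, then inverse-FFT to get a tiling heightfield. The spectrum below is a simplified Phillips-style stand-in; constants and the wind parameterization are illustrative.

```python
import numpy as np

def ocean_heightfield(n=64, size=100.0, wind=(10.0, 0.0), amp=1e-4, g=9.81, seed=0):
    # Shape complex white noise with a Phillips-style wave spectrum
    # in frequency space, then inverse-FFT to a spatial heightfield.
    rng = np.random.default_rng(seed)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=size / n)   # wavenumbers
    kx, ky = np.meshgrid(k, k)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                       # dummy to avoid divide-by-zero
    L = np.dot(wind, wind) / g           # largest wave from wind speed
    phillips = amp * np.exp(-1.0 / (k2 * L * L)) / k2**2
    phillips[0, 0] = 0.0                 # no mean (DC) displacement
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    spectrum = noise * np.sqrt(phillips / 2.0)
    return np.fft.ifft2(spectrum).real * n * n
```

Because the waves come from an FFT, the field tiles seamlessly and animates cheaply by evolving phases over time—which is why FFT oceans handle the broad sea while separate SPH simulations handle localized foam and splashes.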

Concept art from Michael Pangrazio, an artist whose first matte paintings were for Star Wars: Episode V – The Empire Strikes Back in 1980, and who worked as an art director at Weta Digital on several live-action films starting with King Kong, helped everyone visualize the world they wanted to create. “When you look at his work, it seems plausible,” Stables says, “like a day I could photograph.” Even concept art needed to look real.

“I felt like I was making a live-action mov-ie,” Stables says, “like I was making an Indiana Jones film, even though we were animating. The way we approached the show—from effects, to simulation, to lighting, to the cam-era—was to base everything in a plausible, realistic way, with the idea we could take liber-ties. Steven [Spielberg] is a live-action director.

A multi-step method that begins with life casts resulted in libraries of displacement maps that artists could draw from to produce skin textures for characters ranging from craggy Captain Haddock to youthful Tintin. The artists captured tree bark and other textures from the real world, as well.

CGW1211-TINTINfin.indd 17 11/22/11 11:19 AM

Page 20: CGW 12 2012 Downmagaz.ws

December 2011/January 201218

■ ■ ■ ■ Character Animation

His world has been in live action and film, and live action is a world we understand. The fact that we’re using animated characters and we aren’t filming backgrounds didn’t make any difference. We’re composing and lighting as though we were on a live-action film. The biggest issue for me, though, was the interiors. We had to push our indirect illumination.”

Using RenderMan, the lighters sent rays inside a point cloud, which was a simplified color version of a scene. “Then for final beauty renders, the surface shaders did a lookup into the point cloud to do the indirect illumination,” Stables says. “For shadows, we used our PantaRay to generate big point clouds. When the shader executes the final beauty pass, the specular looks up into the point cloud, as well. It’s not a mirror type of reflection. We weren’t doing caustics; we weren’t bouncing specular around. But we were getting a glossy reflection.”
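The point-cloud lookup Stables describes can be illustrated with a naive gather: treat each baked point as a small emitter and accumulate its radiance with cosine and distance weighting. This toy O(N) loop stands in for the hierarchical, production-grade machinery of PantaRay and RenderMan’s point-based techniques; the function name and weighting are assumptions for illustration only:

```python
import numpy as np

def gather_indirect_diffuse(p, n, cloud_pos, cloud_nrm, cloud_rad, cloud_area):
    """Toy point-cloud gather for indirect diffuse light.

    cloud_* arrays describe a baked point cloud: positions, normals,
    outgoing radiance, and the surface area each point represents.
    A naive O(N) gather, not Weta's PantaRay; it only shows the idea
    of a shader 'looking up into the point cloud'.
    """
    d = cloud_pos - p                           # vectors to each cloud point
    dist2 = np.maximum(np.sum(d * d, axis=1), 1e-8)
    w = d / np.sqrt(dist2)[:, None]             # unit directions to emitters
    cos_r = np.maximum(w @ n, 0.0)              # receiver cosine term
    cos_e = np.maximum(-np.sum(w * cloud_nrm, axis=1), 0.0)  # emitter cosine
    # form-factor-like weight: radiance * both cosines * area / (pi * r^2)
    contrib = cloud_rad * cos_r * cos_e * cloud_area / (np.pi * dist2)
    return contrib.sum()
```

A real implementation clusters the cloud into a hierarchy so distant points are gathered in aggregate rather than one by one.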

The test case was a sequence that takes place within a ship’s corridors. “We couldn’t get away with just diffuse light,” Stables says. “We had to account for specular light. We couldn’t do the kind of cheating and magic lights we might have done in CG. We didn’t want to, and also, Steven Spielberg is extremely particular about lighting.”

The indirect specular and indirect diffuse lighting were especially important for lighting the characters. “Because specular is angle-dependent, it’s really the main component that allows you to read the shape of an object,” Wojtowicz says. “So a lot of our look development centered around dialing in the specular qualities to their best, especially with Tintin. In the comics, his face approximates a sphere, and to be faithful to a degree to that, he’s geometrically simple.”

The more haggard characters, like Haddock and Sakharine—older, more mischievous, with interesting geometry in their faces—are easier to light. Tintin’s simple, youthful face gave the lighters nothing to hang shadows on, no angles. “We had to squeeze details from a wide array of techniques, and one of those was having an intricate specular response,” Wojtowicz says. “If we were to put Tintin in his apartment with its walls of brightly colored wallpaper, and put a couple of hot light sources at either end, the entire room would light up and wash him out with all the diffuse light contribution from all the angles in the room. So, if we don’t have a specular reflection, we lose his shape. We even used indirect specular in exterior scenes when we needed to increase the visual complexity of an object moving through the scene, or the camera moving through the scene. We were more selective because it’s a bit more expensive in terms of

render time, but we did use it.”

Because Tintin chases through several countries during the film, the lighters faced situations ranging from the desert in the middle of the day to overcast oceans, and all the lighting needed to interact in a consistent manner with the new hair shading models and the new subsurface scattering models for the skin.
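Wojtowicz’s observation that specular is angle-dependent, and so reveals shape where omnidirectional diffuse light washes it out, is visible even in a textbook Blinn-Phong lobe. The sketch below is a generic illustration, not Weta’s shading model:

```python
import numpy as np

def blinn_phong_specular(n, v, l, shininess=50.0):
    """Blinn-Phong specular lobe for unit normal n, view v, light l.

    The response depends on the angle between the normal and the
    half-vector, which is why specular highlights read surface shape.
    """
    h = v + l
    h = h / np.linalg.norm(h)                   # half-vector between view and light
    return max(float(np.dot(n, h)), 0.0) ** shininess

# Two nearby surface orientations look identical under flat ambient
# diffuse light, but their specular responses differ, so the shape reads:
v = np.array([0.0, 0.0, 1.0])                   # view direction
l = np.array([0.0, 0.0, 1.0])                   # light direction
n_facing = np.array([0.0, 0.0, 1.0])
n_tilted = np.array([0.3, 0.0, 1.0])
n_tilted = n_tilted / np.linalg.norm(n_tilted)
spec_a = blinn_phong_specular(n_facing, v, l)   # bright: normal faces the half-vector
spec_b = blinn_phong_specular(n_tilted, v, l)   # dimmer: slight tilt, big falloff
```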

All this attention to detail—the new muscle system for the characters’ faces and Snowy’s shoulders, capturing skin textures, new hair and fur systems, new shaders for hair and skin, the 1600 variations of Tintin that it took to produce a character that looked right, the research into reference materials and research into scientific methods, and more—combined to make a film that critics such as Variety’s Leslie Felperin praise: “The motion-capture performances have been achieved with such exactitude they look effortless, to the point where the characters, with their exaggerated features, almost resemble flesh-and-blood thesps wearing prosthetic makeup.”

Asked how he was able to keep the characters in Tintin out of the notorious uncanny valley, Letteri’s answer is, “We didn’t try. We weren’t thinking about it. To tell you the truth, the question only came up when other people started asking about the movie. For us, these are just characters we like to watch. They either work or they don’t, and if they don’t work, you can call it whatever you want. When you’re working on a film, you’re focusing on the specifics. Is that eyelid doing the right thing? Is that lip doing the right thing?”

But certainly the studio’s experience with live-action films, with the rigors of matching the real world and often substituting virtual for real, had an effect. “In live-action films, when you have a visual element that isn’t real, it’s becoming easier to create the reality and what’s around it digitally,” Letteri says. “The whole shot becomes digital, and most people don’t know the difference—and that’s the interesting part. It doesn’t matter. So, it’s hard to define the lines these days. In a way, that’s what Jim [Cameron] was trying to do with Avatar. There should be no barrier moving between these different worlds.”

“But,” Letteri continues, “live-action visual effects ground you. You have a photographic plate. You judge everything by the pixels next to it. You know when it doesn’t work. And I think that was the hardest thing about [making an animated film]. If you’re going to try to make it look real, you need a touchstone for reality. In a world that’s completely digital, it becomes easy to convince yourself that something looks good because it looks better than the last time you saw it. But if you put it next to something real, it doesn’t [look as good]. So we couldn’t let ourselves be convinced. Because we come from visual effects, we strive for accuracy, to make everything believable. We photographed lots of reference. We constantly judged against something real. When we needed to know what Tintin’s hair looked like wet, we persuaded someone with red hair to cut it like Tintin’s and soak his head in a barrel of water.”

There you have it. If you want to stay out of the uncanny valley, soak a redhead in a barrel of water. And then hire the best artists and researchers you can find, ones who work meticulously for years to make the world on the movie screen seem real. ■

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at [email protected].

The challenge for the water-simulation team was in creating photoreal water in a comic-book style. An updated Fast Fourier Transform library for the waves, Smoothed-Particle Hydrodynamics for cresting foam, Exotic Matter’s Naiad for hero interactions, and Weta’s own Synapse fluid-simulation software helped.



■ ■ ■ ■ Stereo 3D•Visual Effects

Heralded as the most artistic use of stereo 3D since Avatar, and perhaps even including Avatar, Martin Scorsese’s love letter to filmmaking takes place in 1930s Paris, as seen through the eyes of a boy and realized as if filmed on an early 20th century movie set.

Hugo, based on the award-winning children’s book by Brian Selznick, stars Asa Butterfield as Hugo Cabret, the orphaned son of a clockmaker who now lives in a secret part of a Paris train station. Hugo’s father left him a broken automaton, and Hugo believes that if he can repair the machine, a small mechanical man, he might bring back something of his father. To operate the automaton, though, he needs a key, and as if by magic, Hugo meets Isabelle (Chloë Grace Moretz), a girl with the key. But, the real key to the story’s secrets and to the filmmaker’s vision is through Isabelle’s godfather, a toymaker named Georges (Ben Kingsley). The toymaker, we will realize, is Georges Méliès, a pioneering filmmaker who instilled the movies he made between 1896 and 1914 with cinematic versions of the illusions he had created in his magic theater shows. He invented special effects. But, driven out of business by larger studios, Méliès became a toy salesman at the Montparnasse train station.

In the beginning of the film, we see a vision of Paris enhanced, as is much of the film, with visual effects used to mimic and augment traditional special effects. In an aerial shot of the city from above the Arc de Triomphe, time-lapse photography of traffic on the 12 streets that radiate out from the center circle gives the sequence a mechanical quality. As the camera pans past the Eiffel Tower, we see the hint of a clock mechanism.

“We wanted to plant something in your head so the later dialog will make sense,” says Rob Legato, second unit director and visual effects supervisor. The later dialog is a bit of philosophy Hugo shares with Isabelle: that machines are never built with extra parts; that all machines have only the parts they need to run and no more. He posits that if the world is a machine, he must be a part, which means there is a reason why he exists.

ILM artists created the feeling of Paris as a mechanism by using their proprietary Zeno pipeline, which includes Autodesk’s Maya, Adobe’s Photoshop, and other software, and drew on Luxology’s Modo to create streaks of traffic on Parisian streets.

MAGIC MAN

VISUAL EFFECTS ARTISTS PUSH DEEP INTO CINEMA HISTORY TO HELP MARTIN SCORSESE CREATE HUGO

BY BARBARA ROBERTSON

Images ©2011 GK Films, LLC. Photos: Jaap Buitendijk.


“We wanted to create a subconscious visual of that philosophy,” Legato says, “of Paris as part of a mechanism, so the audience has that in mind when he says his dialog. It’s a touchy kind of thing. Delicate. But John Knoll and Industrial Light & Magic did a fantastic job.”

Ben Grossmann led the visual effects teams, working from Pixomondo, which handled the majority of the shots. Nvizage developed the previs, Yannix helped with matchmoving, ILM created the opening sequence, Lola “youthenized,” Matte World Digital produced matte paintings, and Uncharted Territory built a scene on the banks of the river Seine in Paris. Paramount Pictures and GK Films produced the movie, which Paramount is distributing.

All told, the feature contains 850 VFX shots. “We did every trick in the book,” Grossmann says. “The film is a homage to Georges Méliès, so we did the visual effects checklist. In stereo.”

At its core, Hugo is a story of parts fitting together, of art and craft. And so, too, the making of the movie—beginning with the use of stereo 3D. Legato, credited with creating the virtual production for Avatar, has worked with Scorsese on Shutter Island, The Aviator, and other films. He won an Oscar for Titanic’s visual effects and received a nomination for Apollo 13. And, he helped Scorsese design Hugo.

Stereo Design

“We planned [stereo 3D] from the beginning,” Legato says. “And everyone was on board. [Production designer] Dante Ferretti designed the sets with depth, [cinematographer] Bob Richardson lit the scenes with depth, Marty [Scorsese] directed the scenes and blocked stereo out as another tool to tell the story. We were all blown away. You can’t add 3D later. It’s like any other piece of art. It has to be planned from the beginning.”

One rainy day in New York City, Scorsese, Legato, and others screened 3D movies from the ’50s in a private theater, movies that had been made in 3D but never shown in stereo because the craze had ended. They also watched Avatar, Dial M for Murder, and 2D movies from the ’40s and ’50s, especially those directed by Carol Reed, such as The Third Man and others that featured children.

“The fun part about working with a director like Marty is that he adores the history of moviemaking, and this film is about the history of moviemaking,” Legato says. “There’s a sense of reality in the films back then that changes the story. It’s hard to describe. You just feel it.”

With the help of Pixomondo and the other visual effects studios, Scorsese embraced that sense of reality and deepened it with stereo. “It’s hard to separate one from the other now,” Legato says. “We had stereo in the forefront and the back of our minds in every scene, every edit, the way we lit the scenes…it all became part of the mix. Everything was designed, viewed, and staged for the dramatic value of 3D; the depth became part of the storytelling.”

Legato provides an example: “We have a little boy in a 1932 Paris train station, in overwhelming surroundings. So we use stereo in those shots to emphasize the size and structure and largeness of the building against the smallness of the boy. When you block out the scene, as soon as you see it in depth, it alters the way you consider it. Maybe a wide shot will sell the shot, maybe there is something interesting that you want to look at for a long time. It’s a cumulative thing.”

As was true in the early days of filmmaking, Scorsese shot most of the movie on soundstages. He used production facilities in the UK, and much of the visual effects work involved extending those sets and building virtual environments for previs and then later for the final shots. “We rebuilt Georges Méliès’ original studio on a back lot at Shepperton Studios [in Surrey, England],” Legato says, “constructing it to the exact plans, and then photographed it for real. It was a great moment. One of the thrills of moviemaking is to create history and walk around in it. But, we didn’t build much of the train station, and Marty didn’t want to walk onto the stage guessing what the shot would be.”

Thus, previs helped Scorsese and Legato design shots prior to set construction, and see digital environments in shots with sets that the visual effects crew would extend or create. “Nvizage did previs on set and prior to production,” Grossmann says. “We had mechanical representations of common camera equipment so Rob [Legato] could direct a shot with lots of visual effects in previs. We could show Marty [Scorsese] what we were thinking, and he could pre-approve an edit of the sequence. And, if Marty wanted to design a shot before he built a set, he could operate the camera virtually.”

Artists at Pixomondo created most of the shots in the film, which include digital environments to extend sets, such as the train station (at top), period tweaks to create 1930s Paris (at bottom), and dozens of visual effects that pay homage to Georges Méliès’ illusions.

To shoot the film, the crew used a Fusion 3D system from the Cameron-Pace Group. Before production, artists from Nvizage reproduced the sets to scale digitally and loaded them into Autodesk’s MotionBuilder for real-time playback. Then, during filming, motion-control encoders mounted to the camera equipment fed the movements into MotionBuilder.

“It was similar to the virtual camera system used for Avatar,” Grossmann says. “We had encoders on camera cranes, dollies, pan-and-tilt heads, anything that moved. Wherever Bob Richardson moved the camera, our real-time CG matched it and replaced the greenscreen. So, Marty could see the train station, the trains, whatever, anywhere the camera pointed, with a real-time composite of the actors. If the cameras pointed toward 500 extras walking around in front of a greenscreen, he would see the city of Paris and bridges behind the extras.”

This process of providing directors real-time composites of actors in virtual backgrounds has become a familiar part of filmmaking these days. However, Hugo’s director was Martin Scorsese, who is anything but typical.

“This wasn’t a documentary,” Grossmann says. “It was a movie someone would make on a movie set. We might be panning to follow Hugo, and Marty would say, ‘I want the Eiffel Tower over here, and maybe over here we’ll see the Arc de Triomphe.’ If he was in the moment, he might walk to the visual effects tent and previs what a shot would look like if the stage didn’t have a roof. He’d tell the actors where he wanted them to be, and then tell us what he wanted in the set, and we’d design the shot while he was shooting.”

Matching the Vision

All the information gathered on set—the witness camera footage, the data from the motion controllers on the camera equipment, the shots from the stereo camera that Scorsese directed—went to Pixomondo to help augment the matchmoving and camera tracking. “For every camera position, our visual effects wranglers would feed the data into the system and create an [Autodesk] Maya file that showed where all the cameras were and where they moved,” Grossmann says. “We’d know plus or minus one degree where the camera was. But, it took several months to develop the matchmove pipeline to get all the cameras tracked.”

Pixomondo’s Beijing studio did much of the matchmoving using a customized version of Andersson Technologies’ SynthEyes. Yannix also did matchmoving for the project.

“On some productions, the directors don’t embrace the medium—they direct as though they’re shooting a traditional film,” Grossmann says. “But Marty was passionate about shooting in stereo. He’d ask, ‘What is the most amazing shot we can do in stereo?’ We’d have these famous actors, Christopher Lee, Ben Kingsley, and Marty would say, ‘Ben, you did good. I just need to do another shot with a slight stereo adjustment.’ ”

All those tiny adjustments made it difficult for the matchmovers later. “In stereo, the interocular difference between two cameras is so precise you can see a difference of a quarter of an inch,” Grossmann says. “The entire city of Paris can look like a tabletop set if you’re not careful. It took months to work out all the relationships—the cameras are two inches apart, and the left camera panned this degree at this frame and that degree at that frame. There’s no forgiveness in stereo.”
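The sensitivity Grossmann describes follows from basic stereo geometry. In a simplified textbook model of parallel cameras converged by horizontal image shift, on-screen parallax scales linearly with the interocular distance, so a quarter-inch error on a two-inch baseline shifts every depth cue by 12.5 percent. The formula and numbers below are illustrative assumptions, not the Cameron-Pace rig’s actual calibration:

```python
def screen_parallax(interocular, focal, conv_dist, obj_dist):
    """Sensor-plane parallax for parallel stereo cameras converged by
    horizontal image translation: i * f * (1/C - 1/Z).
    All quantities in meters; a simplified textbook model."""
    return interocular * focal * (1.0 / conv_dist - 1.0 / obj_dist)

inch = 0.0254
# Cameras two inches apart, 35mm lens, converged at 10m, subject at 30m,
# then the same shot with a quarter-inch interocular error:
p_nominal = screen_parallax(2.00 * inch, 0.035, 10.0, 30.0)
p_offset = screen_parallax(2.25 * inch, 0.035, 10.0, 30.0)
```

Because parallax is proportional to the baseline, the matchmovers had to recover the interocular for every shot, not just the camera path.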

Meanwhile, at various Pixomondo offices, artists began building sets and set extensions and creating the effects, with Grossmann parceling out the work by sequences and specialty. “Some offices specialize in animation,” he says. “Another might be good at effects and destruction; another might complete the lighting, rendering, and compositing. Much of my job was deciding what went where, but most of the assets started in London.” There, modelers worked from blueprints received from the art department, and then distributed assets to other offices.

Most of the Pixomondo offices have Maya-based pipelines with Chaos Group’s V-Ray for rendering, but in some, the artists used Autodesk’s 3ds Max, as well. In addition to the train station—the concourse, lobby, tunnel, clock tower, and so forth, inside and out—the artists created the trains and several sections of Paris.

“The big problem for our asset team was that in visual effects, we’re obsessed with making things realistic,” Grossmann says. “So, the asset team built the train station as planned, and then they’d hear, ‘Why don’t we knock down that wall and get more trains in here?’ Or, ‘In this shot, we’ll remove the roof.’ It drove them bonkers.” The answer was to break everything into components that the artists could turn off and on, and move around.

For textures and reference, the artists had footage shot on location in Paris, and firsthand information from two visual art directors who were on location. “They’d see the materials and textures, and sit with Dante Ferretti and do concept work,” Grossmann says. “They became immersed in the visual guidelines.” With the movie set in the 1930s, one challenge was to create materials and textures that made the digital assets look as if they were new in 1930, or built before and weathered appropriately to that age. Concept art and production paintings that the art directors created helped the visual effects artists create the look Scorsese wanted.

“They helped keep the creative consistency,” Grossmann says. “Continuity was out the window in major ways—the Eiffel Tower moved where it needed to be, some routes made no sense at all, the train station would look different in some shots—but there was consistency in that everything looked good. That was the continuity.”

Previs from Nvizage and on-set composites helped director Martin Scorsese and senior visual effects supervisor Rob Legato think about how to film shots with stereo 3D, even narrative sequences such as this with Hugo (Asa Butterfield) and Isabelle (Chloë Grace Moretz).

The second challenge for the artists was in understanding how to achieve the look of movies from the early 20th century. “The hardest part and the most exciting part for the artists around the world was the exploration,” Grossmann says. “In most movies, you’re doing something like swinging Spider-Man across a bridge. For the artists on this movie, it was never as simple as, ‘Here’s your desk and your shots.’ It was, ‘Here’s your desk and here are 16 hours of highlight reels, some books, and a thousand images of old sets, old trains, old train stations.’ No one cranked out work for weeks, sometimes months, until they got into the mood and tone and look. And then, so much of this movie is a homage. An artist might present a shot and point to something that was distracting, and we’d say, ‘Yes, but it’s distracting on purpose because it references this old film, this old clip.’ ”

The lighting artists had similar challenges. As always, they would light the scenes to be photographically real, but their reality needed to be a film shot on a back lot in 1930. “This wasn’t an available light film,” Legato says. “It was a lit movie, and the lighting is part of the storytelling. It’s not real life. So, we had more than one sun. We put lamps behind windows, arc lights behind alleys. It took a while for the artists to get it because it isn’t what we’re trained to do. We usually try to fool the eye that something is hyper-real. We were still making it photographically real, but the photograph had a tone to it. So we’d show people examples, tear sheets, clips from old movies.”

Making it even more interesting for the lighters was that the angles might change from one shot to another, as if the sun moved 180 degrees. “It works because the shots are beautiful,” Grossmann says. “It all feels the same, but if you mapped it out, you’d see that it’s all over the place.” Knowing this, the visual effects crew didn’t bother shooting chrome balls on set to gather HDRI and match the lighting.

“We realized that if Bob Richardson lit something, he’d light what’s there,” Grossmann says. “If there were five people in the room, he’d light those five people. So, if we added a glass roof, a train, and a luggage cart, it wouldn’t do any good to have HDRI because if those elements had been on the set, he would have lit it differently. We had to match his intent.” Similarly, the artists needed to match Richardson’s intent in all the CG shots that had been impossible for Scorsese to shoot traditionally.

Magic Hour

In addition to set extensions and virtual backgrounds, much of the visual effects work centered on Méliès’ illusions. “As the film starts to explore who Georges Méliès is, we see shots that are magical in nature,” Grossmann says. “I could talk for hours about all the little magic tricks. Hundreds and hundreds of shots. By the time we were done, I realized we had done every trick in the book. They’re not like cool magic-wand gags. They all have grounding in old film tricks and in some part of the story. We did all the classic cinema tricks from modern times to today, and pushed beyond anything done before. Miniatures. Digital characters. Stop motion. Time-lapse photography. Persistence-of-vision animation. Matte paintings. Motion-captured characters. Iris wipes. Morphs. CG augmentation. Even the choreography of a cross-dissolve became a new art form and became visual effects. It was a homage to the kind of work Georges Méliès did, but in a modern-day fantasy film. And we created all those tricks for stereo. I’ve got all my passport pages full now.”

In one scene, the children open a secret box that causes an explosion of CG papers to fly out. The images on the papers represent the collected work of Georges Méliès. They swirl around the room in a way that creates an optical illusion, the perception of animation. It’s a persistence-of-vision trick, like a flip book, but a 21st century visual effects version.

In another scene, Hugo fixes a mechanical mouse, a mouse that, when Méliès winds it up and places it on the table, spins around, wiggles its tail, and looks up and down. The crew used stop-motion animation for the mouse, shot it in stereo, and then augmented it with visual effects.

For a montage that shows the degeneration of Méliès’ studio from happy success into post-World War I bankruptcy, the artists mimicked time-lapse photography using computer graphics to create the images.

“The movie is full of these things,” Grossmann says. “We’d take sections of sets and performances, string them together, and choreograph them as if they were one shot. In some shots, we’d have two minutes of visual effects strung back to back. We didn’t have to do a CG tsunami. But, we had CG crowds, fire, snow, wind, water, steam, smoke—a crazy amount of effects. It was humbling to do this while referencing and studying someone who invented the genre. Méliès’ work was pretty miraculous. When we studied his work, sometimes it took days to figure out how in the hell Georges Méliès did this.”

In this film, the innovation was in making the visual effects created to bring Méliès’ illusions to life seem real, and to do so artfully. “We used visual effects and stereo 3D not as separate items, but as a tasteful, integral part of the storytelling, as important as music and lighting and acting,” Legato says. “Our innovation is in appreciating the art of filmmaking by using the tools that used to blow us away with how clever and technical they are, with, now, how beautiful they are.” ■

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at [email protected].

Lighting artists at Pixomondo learned that the continuity in this film was in its consistent beauty. Rather than trying to match the lighting on partial sets, they discovered how to mimic the cinematographer’s intent.



■ ■ ■ ■ Animation

When Warner Bros. released the first Happy Feet movie, people wondered what they were thinking down under. An animated feature in which many of the character performances started with motion-capture data? Blasphemy. But, Happy Feet’s joyous story caught the imagination of audiences worldwide, and the film went on to win the Oscar for Best Animated Feature Film in 2007.

Following that win, director George Miller founded his own studio, Dr. D, in Sydney, Australia, and began preparing for a sequel. In 2009, he hired Rob Coleman to build an animation team and direct the animation for Happy Feet Two, which picks up where the first film left off. Mumble, the Emperor Penguin who could dance but not sing, is now married to Gloria; they have a son, Erik. Erik can’t dance, but when he meets the “Mighty Sven,” a puffin that Erik mistakes for a penguin, Erik becomes determined to fly. Returning penguin voice actors include Elijah Wood as Mumble and Robin Williams as Ramon and Lovelace.

Prior to joining the Happy Feet Two crew, Rob Coleman was an animation director and supervisor at Industrial Light & Magic, where he received two Oscar nominations for best visual effects (for Star Wars: Episode II – Attack of the Clones and Episode I – The Phantom Menace) and two BAFTA nominations (for Episode I and Men in Black). We spoke to Coleman soon after work on Happy Feet Two wrapped.

How did you begin this project?
I sat with George [Miller] and looked at what he liked and didn’t like in the first film, and I spent the first year, from April 2009 to April 2010, building an animation team.

How many animators did you have on your team?
I had 75 animators at peak from 14 countries, with 32 from Australia. I was worried when I first came down here because I knew CG animation wasn’t huge. There are companies doing CG, but there aren’t a lot of character animators. But, just before I started hiring, Animal Logic was finishing Guardians and didn’t have another big show yet, so I was able to pick up a lot of senior and mid-level animators and a couple of leads who probably otherwise would have gone to Canada or the UK. Then, I committed to hiring only Australian junior animators.

How did you organize the team?
I had a number of leads, which is similar to the way I worked at ILM, and divided the work into sequences. At peak, we had nine teams, but most of the time we had six or seven. Each lead had around seven animators. Everyone did penguins, but two of the teams became really good at animating krill, so I cast more krill sequences to them. And, we didn’t teach every team how to animate elephant seals, which were all keyframed.

The krill?
Will the Krill and Bill the Krill, voiced by Brad Pitt and Matt Damon. They're the reason I wanted to make this movie. There's a parallel story about the tiny little krill, and their story is so good and so funny. And, to animate things as little as krill sounded amazing. They look like little brine shrimp. They are almost at the bottom of the food chain; they're insignificant. But they have a big impact on the biosystem of the world. Fish feed on them, whales feed on them. We have thousands and thousands of krill.

What is their story?
Will the Krill decides he doesn't want to be in the krill swarm, so he and his best friend, Bill the Krill, break away from the swarm and end up as two little individuals in the ocean. They have contact with our hero penguins. Although neither species knows about the other, we see them both.

Did you use motion capture for the penguins, as on the first film?
We used some motion capture, predominately for the dancing and dramatic scenes when the penguins walk around. [Director] George Miller comes from a live-action background, and he's comfortable directing actors on stage, so he could do take after take quickly. He could get a performance he wanted in an hour or a day that would have taken us a month. But he also enjoys the animation process because he can plus the performances and add facial animation.

Motion capture allowed the two worlds to come together, and because the characters are humanoid and walking around, I'm fine with it. When the characters come to animation, they have their weight built in already. So, we get the combination of movement from talented performers directed by the director and performances stylized by our talented animators. Also, when George wanted thousands of characters on screen dancing intricate choreography, keyframing would have been impractical.

Did you motion-capture any of the other characters?
We also used motion capture for Sven, the puffin, when he's on the ground, but we keyframed him when he's flying.

But, not the krill, of course.
There was an attempt. They did a bunch of experiments. The krill have 10 legs and arms, so they had a conga line with five dancers trying to do the legs. They captured Savion Glover [dancer, choreographer] tapping for the krill, as well, which was extremely beneficial for me and the animators. The mo editors [motion editors] could take what he did, apply that to a low-resolution krill model, and we could study the feet and get movements that would have been difficult if we were keyframing 20 legs and arms. We also had early experiments with a puppeteer moving a krill body on the mocap floor, but we couldn't get the right scale of motion when we put the krill into the water.

Images courtesy Warner Bros. Pictures.

Were there any other unusual motion-capture experiments?

There were always experiments. For the elephant seal, we had four people performing together, but trying to wrangle that was too much effort. I could get a talented animator to do something really beautiful in not too much time.

What was the motion-capture process?
We used a Giant Studios system. We had a bunch of talented people here who had worked on the original, then worked on Avatar, and came back to do Happy Feet Two. George [Miller] would cut the audio first, and they would broadcast that onto the floor so everyone could hear it. The performers pantomimed to the dialog. They might not hit the accent of a word exactly, so it was up to the mo editors to make it feel like the voices were coming out of the bodies.

Each day we would recalibrate the dancers. We’d measure their legs and arms precisely so we could translate them to the character maps for each species of penguin. We could capture up to 10 at a time, and could see the penguins walking around in real time as the dancers performed. Their feet weren’t locked to an ultra-resolution set, but we could see where they were.

Did you need to change the data much to have the dancers move like penguins?

The dancers all went to ‘penguin school,’ and learned how to dance like Emperor or Adelie Penguins, but it took a fair amount of labor to get [the characters] to move and act like penguins. Penguins are like little flour sacks, like little fluffy pillows. When you have a human walking like a penguin, it’s one thing if they pantomime it for you by keeping their legs together and waddling. It’s another to have it look real. Our motion editors had to manipulate the data to make it work the way George Miller wanted.

Once George had directed the motion capture, he would make selects. The motion

editors worked in [Autodesk's] MotionBuilder and [Giant Studios'] Nuance. They would pick the matching human performance and remap the data onto the penguin bodies, and then put the penguins on the ultra-resolution set, the undulating ice field, and spread their toes. I would review the work in progress and make critiques. When I was happy, they converted the files into [Autodesk] Maya files and sent those rigs to the keyframe animation team.

Did you develop any particular tools or rigs for the keyframe animators?

We had a similar skeleton for each species of penguin, and we had offset rigs. The offset rig was a parental rig on top of a child rig. The child rig received the keyframe data from MotionBuilder. With the parental rig, the animators could add rotation and translation to the big volumes—the hips, head, shoulders, and chest. They were all IK. Often, once George Miller saw [the motion-captured animation], he wanted to go broader than what he saw on the floor. So the animators might put a translation on the chest, or change the eye direction by swinging the head, and so forth. The animators could supersede the data and movement with our offset rig.
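The layering idea Coleman describes, a "parental" set of animator-keyed offsets composed on top of the "child" rig's baked motion-capture channels, can be sketched roughly like this. This is a hypothetical Python illustration, not Dr. D's actual rig code; the names are invented, and rotation is simplified to additive Euler angles where a production rig would compose matrices or quaternions.

```python
# Hypothetical sketch of an offset rig: an animator-keyed parent layer
# composed per frame on top of baked motion-capture channels.

def compose(offset, mocap):
    """Apply one frame of animator offset on top of one frame of mocap.

    Each frame is a dict holding 'translate' and 'rotate' triples.
    Simple per-channel addition is enough to show the layering idea.
    """
    return {
        "translate": tuple(o + m for o, m in zip(offset["translate"], mocap["translate"])),
        "rotate": tuple(o + m for o, m in zip(offset["rotate"], mocap["rotate"])),
    }

def bake(offset_curve, mocap_curve):
    """Layer an offset curve over a mocap curve, frame by frame."""
    return [compose(o, m) for o, m in zip(offset_curve, mocap_curve)]

# Two frames of captured hip motion; the animator leaves frame 1 alone,
# then broadens frame 2 by lifting the hips and adding rotation.
mocap = [{"translate": (0.0, 1.0, 0.0), "rotate": (0.0, 0.0, 0.0)},
         {"translate": (0.0, 1.1, 0.2), "rotate": (5.0, 0.0, 0.0)}]
offsets = [{"translate": (0.0, 0.0, 0.0), "rotate": (0.0, 0.0, 0.0)},
           {"translate": (0.0, 0.3, 0.0), "rotate": (10.0, 0.0, 0.0)}]

layered = bake(offsets, mocap)
# Zeroing the offsets would return the raw capture unchanged.
```

The point survives the simplification: the captured keys stay intact underneath, so animators can push a performance broader, or back off to the raw capture, without destroying the data.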

When did they do keyframe animation?
We keyframed the animation when the characters swam or when they did dangerous actions. We also keyframed the whales, leopard seals, and fish. The fish are basically food in this film.

And, there's no facial capture.
Every penguin ended up being a hybrid. George was very happy with what Animal Logic had done. But now that he had some experience with animation, he wanted to spend more time on the movement of the eyes, the dilation of the eyes—the eye dart-ness, as he calls it. He was very particular about beak sync, lip sync, and phonemes—about the movement of the tongue and lips—and that was cool with me. And, we spent a lot of time on the non-verbal, reaction shots. It was challenging to get the penguins to look good from multiple angles and still connect with the audience and characters on the screen. You have to see their emotion. They have humanoid faces, but their eyes aren't binocular. They're set 30 degrees back on an angle.

Did you use the models from the first film?

We based the characters on where they left off on the first film, but we were using Maya, and Animal Logic had used Softimage XSI. So, most of the characters were redone and rebuilt; we upgraded the models. And, we started over and redid all the rigs. That wasn’t a big factor for me. If we hired someone who

At top: A team of 75 animators from 14 countries worked at Dr. D studios in Sydney, Australia, to perfect and amplify performances captured from dancers for the penguins, and to add facial expressions. At bottom: A separate team of animators used motion cycles and a rules-based system to animate crowds of penguins and schools of fish.


knew XSI, we could teach them the new keystrokes in a week. What we cared about were their animation and acting skills.

Did the animators have video of the actors as reference, as well as the motion data?
During the voice recordings in Los Angeles and Sydney, I had a team of videographers shooting the main actors. Even though they're performing to microphones, once they get into the characters, they start performing with their faces. There are nice things you can do if you're there to watch or capture them on video; you can use their expressions to drive the animation later. Elijah [Wood] did some things with his eyes that became part of Mumble's performance. Brad [Pitt] might do something with the tilt of his head, the furl of his brow, that is inspiration for the animators down the road. We weren't motion-capturing. We were videoing. But, we would see patterns. I cut together what I called 'spirit reels' from the videos and had QuickTimes for the animators to reference.

Did any of the actors record the dialog together, or did they work separately?

Brad and Matt came in for three days, so we had them in the same room, acting to each other. And we had many of the other voice actors in the same space at the same time interacting; upward of eight performances in a big space all recorded at the same time. You get better performances. You get talk-over, which you also have in live action, so why not have it in animation? George [Miller] had the actors do the initial performances until they were happy together. He recorded that. Then, they could do the lines themselves as they had done in the ensemble piece. That way he had the clean lines, and if someone stepped on a word, he could replace it.

How did you animate the schools of fish?
We'd animate the main characters, the hero characters, and provide swim cycles for the fish—and the penguins—to the crowd team. We had about 25 artists plus a director and supervisor on the crowd team. They'd attach

our keyframe animations to a system that had run-time rules, and the fish would scatter like in nature when they came near the penguins. It was amazing.
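A run-time scatter rule of the kind described can be sketched in a few lines. This is a hypothetical illustration only; the names, thresholds, and the flat 2D simplification are assumptions of mine, not the production crowd system.

```python
# Hypothetical crowd rule: each fish plays back its baked swim cycle
# until a penguin comes within a scatter radius, then steers away.
import math

SCATTER_RADIUS = 5.0
SCATTER_SPEED = 3.0

def step_fish(pos, cycle_vel, penguins):
    """Advance one fish one frame: follow the swim cycle, but flee
    any penguin closer than SCATTER_RADIUS."""
    vx, vy = cycle_vel
    for px, py in penguins:
        dx, dy = pos[0] - px, pos[1] - py
        dist = math.hypot(dx, dy)
        if 0.0 < dist < SCATTER_RADIUS:
            # Add a repulsion vector pointing directly away from the threat.
            vx += SCATTER_SPEED * dx / dist
            vy += SCATTER_SPEED * dy / dist
    return (pos[0] + vx, pos[1] + vy)

# A fish cruising to the right scatters upward when a penguin appears below it.
calm = step_fish((0.0, 0.0), (1.0, 0.0), penguins=[])
fleeing = step_fish((0.0, 0.0), (1.0, 0.0), penguins=[(0.0, -2.0)])
```

Evaluating such a rule per agent at run time is what lets hand-animated hero performances and thousands of background fish coexist in one shot.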

When the characters are swimming, did you animate to the movement of the water, or did the simulation team move the water based on the keyframe animation?

We handled water in two ways. Basically, if the shot was about the character, the water team would match the animation. We would talk about the water with George, and he’d tell us whether he wanted the water to be calm or move a lot. We’d do some keyframe animation, and he might tell us to tumble the characters more or soften their movements as if they were in a swell. Then the effects artists would put the water around them. If the shot was about the effect, like ice tumbling into the water, we would match their simulation.

We used [Exotic Matter's] Naiad for all the splashes and for the interaction of the characters with the water. The effects team then stitched the Naiad splash elements into a high-resolution surface simulated in [Side Effects Software's] Houdini. They were able to create a realism on the surface of the water that I think is breathtaking. They also did volumetric light shards coming down through the water. There's a beautiful shot with our two hero krill clutching the bottom of a piece of ice, with smaller pieces of ice tumbling in the turbulence of the water. We look up through the water and see the caustics.

For the krill, because they are about the size of a thumbnail, they put silt and dust in the water to help with the scale. It’s amazing when you see it in stereo.

Do the little krill survive?
They do. Through a series of events, a massive rogue iceberg crashes into the entrance of Emperor Penguin land and traps the penguins, which is something that actually happened in the real world. In our movie, all the communities come together to overcome the troubles of the world, and even the krill have an impact.

Did the icy environments change much?
We had two main environments, Adelie land and Emperor land, and they change during the movie. We start with compacted snow on ice, and then we have fluffier, powdery snow. When the animators go into the scenes, they had a packed-ice layer or a packed-snow layer for the penguins' feet. Then another team added footprints in the snow, so we'd see little foot trails. The rendering of the snow, with sparkling highlights, is so amazing. It makes it feel like you're there. Snow is a big part of the story; we had about 50 artists working on the effects team creating character effects, water, volumetrics, and destruction. There are beautiful shots of compacted ice and snow breaking apart.

Did you have favorite characters?
Well, the krill are certainly high on my list. The elephant seals were a pleasure to animate. The main one, Beach Master, was a fantastic character to get into, and he has a sidekick named Wayne. I liked them a lot. And, animating to Robin Williams is always great. He was Ramon the Adelie penguin, and the Rockhopper. Ramon is very over the top theatrically, and he gets a love interest in this film, so that added a whole other layer to his performance. We see Mumble worried about his son, and Sven wrapped up in being an inspirational speaker. We have big catastrophes and massive dance numbers. The first film hit a very high mark. We tried to step above it.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at [email protected].

Director George Miller recorded Brad Pitt and Matt Damon acting out the dialog together for three days to give Will the Krill and Bill the Krill their voices. Although the team experimented with motion capture for the tiny creatures, animators created all the performances with keyframe animation.



The year started off rather slowly at the box office—possibly an economic statement more so than one reflective of the movie releases. Despite a first-quarter lineup with virtually something for every taste—The Rite, Green Hornet, I Am Number Four, Battle: Los Angeles, Rango, Mars Needs Moms, and Sucker Punch, to name a few—audiences just didn't open their wallets as expected. However, the dour box-office numbers changed quickly for the better during the summer holidays, as moviegoers, likely feeling less financial pinch, flocked to theaters to see a number of highly anticipated films. Records were broken on Memorial Day. Crowds were entertained. Hollywood smiled and breathed a sigh of relief.

As of press time, there were still a handful of tent-pole films yet to be released, including The Adventures of Tintin and Hugo. And judging from the hype surrounding these movies, as well as a few other holiday releases, there's little doubt that 2011 will close on a very happy note—both financially as well as with amazing movies. How studios and digital artists were able to achieve such a high level of work and continue to push the visual effects and animation bars ever higher in these economic times is a double feat for which they should be applauded.

Studios spend a long time working on a film that's in theaters briefly, only at the end of the year to have viewers narrow down their favorites that, for some reason or another, grabbed their attention. This is what awards season is all about—what people liked both then and now.

People love superhero movies. And this year, there were plenty of choices in this regard: Captain America, The Green Lantern, The Green Hornet, Thor, X-Men. Some of these heroes were larger than life, captivating audiences with their digital powers; others dazzled with amazing CG sets and backgrounds. No matter how you look at it, visual effects played a major role in the films. Perhaps the most popular superhero film this year did not contain live-action stars, but a unique set of computer-generated characters who kicked their way into the hearts of theatergoers: Po and the Furious Five in Kung Fu Panda 2.

The year also gave us some rather unexpected treats at the theater: a range of entertaining characters and story lines—and, of course, jaw-dropping visual effects. While many are still trying to comprehend the story from The Tree of Life, there is little confusion about its beautiful imagery, especially during the formation of the universe and expansion of the galaxies, followed by explosive volcanoes and prehistoric beasts. Johnny Depp, reprising his role as Captain Jack Sparrow, left us scratching our heads at times. But that's Jack. And while he had a

somewhat new crew onboard with this latest Pirates of the Caribbean flick, we were treated to some nice VFX gems in the film, among them the digital mermaids.

And if Depp's live-action alter ego was not enough to entertain us, we also had his CG character Rango kicking up dust in a very uncommon all-CG spaghetti western—live-action director Gore Verbinski's first animated feature foray and the first animated feature to move through ILM's VFX pipeline. The dirt and dust of the desert created an unusual look for the movie—nearly as unique as the computer-generated characters. ILM was also kicking up more dust (sandy grit and star dust) with the effects in Cowboys & Aliens, a sci-fi western directed by Jon Favreau. A strange clash of worlds, both ripe for awesome visual effects.

A sci-fi fan favorite for decades, Planet of the Apes burst into theaters as a series reboot, using new methods of motion capture to give the movie's simian cast their realistic performances, especially Caesar, the chimpanzee star performed by Andy Serkis. A relatively new sci-fi favorite, Transformers rocketed to the top of the box office with even more complicated Autobots and Decepticons to fill the screen.

On the animated side, like in Rango, we met entirely new casts of CG characters starring in Rio, a colorful production from Blue Sky; Hop, an Easter-themed movie delivered by Rhythm & Hues; Gnomeo and Juliet, a unique twist on a classic; and Mars Needs Moms, an out-of-this-world film from ImageMovers Digital before the innovative performance-capture technology company closed its doors. 2011 also brought back older classics albeit in cutting-edge computer graphics form (Smurfs) as well as updated characters for grand re-entrances (Kung Fu Panda 2, Puss in Boots, Happy Feet Two, Cars 2).

As we close out the year, anticipation is high for the hair-raising effects of Breaking Dawn and the digitally boosted action in Mission: Impossible – Ghost Protocol. Yet generating the biggest buzz seems to be Peter Jackson/Steven Spielberg's Tintin, a CGI stereo presentation of a classic Belgian comic-book character. Released early overseas, Tintin quickly established itself on the Oscar watch list. Another late-year release, Hugo is mesmerizing audiences with its dazzling digital work. But let us not forget the year's top box-office champ as of press time: the last film in the Harry Potter series, with its ambitious visual effects that spanned a decade and culminated in digital mastery.

We know what the box office says, and we have heard what the press and audiences have said, about this year's films. Now, let's hear what the experts in our industry think.


Captain America: The First Avenger
Release date: July 22 (US)
Production companies: Marvel Enterprises, Marvel Entertainment, Marvel Studios
In an unexpected role reversal, digital effects were used to depict actor Chris Evans as the weakling Steve Rogers, as opposed to the muscled superhero Captain America. To many, this was an unexpected use of CGI. "Lola VFX really stole the show on this one," notes Matthew Ward, director of photography at Rainmaker Entertainment. "I remember everyone in the industry buzzing with the question, How did they make the newly buff Chris Evans so skinny? The head and body seaming was flawless and helped introduce the character as the complete opposite physique as we've all known Captain America to be."

This movie really surprised Aharon Bourland, CG supervisor at Tippett. "I had a good time watching it. One of the more interesting effects was probably the subtlest. The way they made [Chris Evans] all scrawny and small during the first half of the movie was nice. I'm still not quite sure how they did the Red Skull's face. I couldn't tell if it was makeup or digital augmentation; it was probably both. But it was cool that I couldn't tell right off the bat how they did it."

The Girl with the Dragon Tattoo
Release date: December 21
Production companies: Film Rites, MGM, Scott Rudin Productions, Yellow Bird Films
The book series spoke to millions. Can the film do the same? "If David Fincher's record is any clue as to what we can expect to see in this film, it'll be another marvel at visual effects so well hidden we'll never even know they were there," predicts Rainmaker's Ward. "Come awards season, we'll start to see reels showing how effects were done, and we'll want to go stand in line to watch the film again to see what we think we should have had the eye to pick out in the first place."

Green Lantern
Release date: June 17
Production companies: Warner Bros. Pictures, De Line Pictures, DC Entertainment
Bruce Woloshyn, visual effects supervisor at Method Studios (Jack and Jill, The Twilight Saga: Breaking Dawn—Part 1), relays that this year, he and his 12-year-old son, David, resolved to go and see more movies together. Of course, they "had" to see Green Lantern. "I have been a semi-serious comic-book collector for more than 20 years, and both my son and I were really looking forward to seeing Oa come to life on the big screen. We both agreed, as we discussed the film over ice cream after the screening, that the animation and appearance of the actual Green Lantern costumes were outstanding (or, to use 12-year-old vernacular, 'cool')," he says. "Even with knowing that Sony Imageworks had created CGI uniforms for the corps, we both agreed that it was so well executed that after the initial, 'Wow, check out the suit,' we never gave it a second thought…and that's a good thing."

"Doing a full-body replacement for the lantern suit seemed like a pretty ambitious plan. It could have easily gotten kind of strange looking, but it came together and helped set the character apart from superheroes in other movies," says Bourland.

Harry Potter and the Deathly Hallows: Part 2
Release date: July 15
Production companies: Heyday Films, Moving Picture Company, Warner Bros. Pictures, Warner Bros.
For a decade, fans have witnessed the digital magic required to take Harry Potter from the pages of a book to the big screen. Over the years, the magic has grown more intense, as have the effects. This summer, the franchise culminated in a range of digital work, from the expected to the unexpected.

"I've often thought the Harry Potter films, above others, are much more enjoyable big, in the theater, than at home. There's something about being in a dark theater with these characters, and the effects push the story in every shot," Ward points out. "I'm sad to see the franchise wrapping up, as the films have each been worth watching and remain enjoyable."

Tippett's Bourland says he was super-excited about this movie. "The Potter films have constantly gotten better and better, and the final one did not disappoint. The dragon in Gringotts vault was really cool. The Dragonslayer dragon has always held a special place in my heart, and you could see more than a little bit of it in this dragon's design," he says. Moreover, the magic effects were also really pretty, as usual, adds Bourland. "My favorite was when the Death Eaters were destroying the shield that the good wizards built around Hogwarts. I always wanted to work on a Potter movie, so it was a little bittersweet to realize my last chance had passed."

As Steve Garrad, VFX executive producer at Image-Engine, notes, in another year when it seems the visual effects industry is determined to tell everyone how bad everything is, the interesting thing for me is how good the quality and consistency of the work being produced globally is. To this end, his two personal choices of films featured visual effects from companies based in London and Wellington, New Zealand. "Only one of the summer blockbuster films was a slight letdown in my opinion—and again, that is all that is, my opinion," he says. "There will be many reasons, mostly not due to any vendor's faults, that the thousands of man-days spent on that project would not end up being entirely present up on the silver screen."

That said, Garrad's personal favorite for this year's Oscar is Harry Potter. "Not only was it an excellent film, but the VFX had the necessary scale and size to end the series; they were consistent and flawless throughout," he says. "They have been throughout the series; it is time this crew were recognized, people!"

Hugo
Release date: November 23
Production companies: GK Films, Infinitum Nihil
With the film on the verge of being released as this issue went to press, a number of folks declined to comment on it, having not seen it. Nevertheless, the imagery in the trailers is dazzling, supporting a heart-warming story.

VES President Jeff Okun, a visual effects supervisor, is among those who have not seen the film. "But what I have seen looks astonishing—the realness of the robot, the world that cannot be real, yet is," he says. "It may be the ultimate demonstration of what is good with VFX because they were used properly by an artist, like Martin Scorsese."

Scott Farrar (ASC), visual effects supervisor at ILM, notes that he likes to see the films Martin Scorsese makes because the director tries different types of stories and they always have wonderful characters. "For me, Hugo looks interesting because of its steam-punk design sensibility. That style seems fun and is particularly well suited to stereo 3D and storybook-style visual effects shots," he says. "I'm looking forward to seeing what Rob Legato, the VFX supervisor, and Martin came up with."

Immortals
Release date: November 11
Production companies: Relativity Media, Atmosphere Entertainment MM, Hollywood Gang Productions, Virgin Produced
According to Ward, epic films require epic effects, and there seems to be no shortage in Immortals. "We've seen films like this made, and, at times, the effects were so featured they took away from the story rather than supported it, entertaining [us] nonetheless," he says. Ward notes that the trailers look to be big in scope, along with a 3D conversion. "No doubt it'll be an entertaining film and certainly a spectacle to enjoy in the effects realm."

In a film with so many effects, an insider points to the Titan fight scene as “amazing.”

Mission: Impossible — Ghost Protocol
Release date: December 21
Production companies: Paramount Pictures, Bad Robot, FilmWorks, Skydance Productions, Stillking Films
"Tom Cruise, Brad Bird, and Mission Impossible sequel? I'm in," says Rainmaker's Ward. "I think for all of us VFX and animation artists, most of us are fans of Brad's work on The Iron Giant and The Incredibles. Needless to say, we're all excited to see what Brad brought to this production, and we're all certainly expecting amazing things." As Ward notes, the film's trailer shows action in its modern definition: explosions, high-wire acts, gunfights, and hand-to-hand combat. "The Mission Impossible franchise has always delivered new, clever action sequences, usually only achievable with the help of visual effects artists," he adds. "I'm very curious to see what this latest chapter has in store for audiences."

Image-Engine’s Garrad notes that out of the yet-to-be-released films, the only one that stands a chance, in his humble opinion, of up-setting the applecart is Mission: Impossible—Ghost Protocol. “The trailer looked like great fun with big visual effects, but as it’s not out yet, we’ll have to wait and see,” he says.

Pirates of the Caribbean: On Stranger Tides
Release date: May 20
Production companies: Walt Disney Pictures, Jerry Bruckheimer Films, Moving Picture Company
In a film with Johnny Depp, you can expect a level of quirkiness, and this Pirates film brought that to the screen for another adventure on the high seas. "Again, great work," says



Okun. “ILM is killing on these, pushing the envelope on natural phenomenon, chaos, and look—the water, the clouds, smoke, interac-tion were all fantastic.”

Real Steel
Release date: October 7
Production companies: Touchstone Pictures, DreamWorks SKG, 21 Laps Entertainment, Angry Films, ImageMovers, Reliance Entertainment
With this film, image-based capture was on full display, yet it was the performance of the robots, in a very un-robotic style, which resonated with the industry. The robots also looked as good as they moved. "The best part of the work in this film was probably the rendering quality and lighting work done on the robots," says Bourland. "They seamlessly fit into the plates, which is not an easy task when you are dealing with characters made of everything from translucent plastic with lights inside to rusty metal. The character design was fun. I really liked the design of Noisy Boy."

Rise of the Planet of the Apes
Release date: August 5
Production companies: Twentieth Century Fox Film Corporation, Chernin Entertainment, Dune Entertainment
According to Digital Domain visual effects supervisor Stephen Rosenbaum, who is currently supervising Jack the Giant Killer for director Bryan Singer, there are two distinctly different types of FX movies being made these days: those that indulge us with spectacular, gratuitous visuals, which are short on substance but fun to watch, and those which offer a relatively new brand of sentient digital creatures. "This year, we witnessed the perfect exploitation of FX technology used to create the latter type of movie—Rise of the Planet of the Apes," says Rosenbaum, who received an Oscar for his work on Forrest Gump and Avatar. "Unlike some movies from the last year that used motion capture to produce pedestrian characters, Apes used the technology to help deliver a performance. It demonstrated the real potential for actors to embody a digital creature and inject into it a soul."

In fact, Rosenbaum is one of many who applauds the performance of actor Andy Serkis as he brings yet another digital character to life. "Once again, we were treated to Andy Serkis slapping on digital makeup and performing the role of a principal character. While the primates looked fantastic—the eyes, fur, movements—I can assure you, all would be lost if not for Andy's masterful understanding of how to personify a character within an untraditional medium," he adds. "As with any role, it's all about understanding how the character thinks, moves, and responds to its surroundings. Without those fundamentals, a good-looking chimp with no personality will very quickly become boring to watch."

Rainmaker’s Ward concurs. “If there’s any-thing we all remember from the original Plan-et of the Apes, it’s the rubber-mask prosthetic work of the 1960s makeup artists. Arguably, the effect worked back then, but today’s au-diences demand a much more believable illusion, and better still than the much more slick rubber masks of Tim Burton’s 2001 re-telling,” he says. “A movie like this can dodge a little close to the uncanny valley, but Weta has nailed the apes even better than they did in King Kong. Another perfect use of perfor-mance capture tools, this film allowed us to believe that the apes were real with what seemed to be a hint of the actors playing them. In one way or another, you could define this

film using truly digital makeup—instead of placing a rubber mask on Roddy McDowall, try replacing Any Serkis completely with a digital ape that is driven by every single twitch comprising Andy’s performance. Forget those clunky rubber masks and enjoy every detail on these apes, no mater how close the camera gets . . . and it got pretty close in this film—it’s always a challenge for any VFX shot, yet a tri-umph for Weta.”

"The work overall was very good. For me, the standout was the orangutan; that thing was amazing," says Tippett's Bourland. "The details in the sculpt and fur groom were outstanding. It also stole the scene with some classic lines—the 'dumb apes' line was priceless. A lot of people kept talking about how good the eyes on the apes looked, but I don't think that was their best feature. I feel their performances and overall presence on screen were more impressive."

One film that especially resonated with Shawn Walsh, visual effects executive producer at Image-Engine, was Apes. "Due to our participation in Apes as a primary previs vendor, we were privy to some of the stunning visual effects work that was evolving at Weta Digital," he says. "Kurt Williams showed me some early shots that were being produced during the long shoot, and I was floored by how sophisticated and nuanced the performance-capture work was turning out to be. The eyes especially were working as a true window to the soul, and I thought, 'Man, this is going to be exceptional work!' "

Image-Engine’s Garrad names Rise of the Planet of the Apes as his second favorite movie this year. “Again, a good film, which always helps. The VFX were of the highest quality, and the animation was fantastic,” he says. While he acknowledges that the performance was indeed based on Andy Serkis, he points out that it was assisted by lots of very talented animators. Gar-rad notes that he ranked this film as a second to Harry Potter because in his opinion, the consis-tency of the work was not as good.

Daniel Jeannette, animation director (Where the Wild Things Are, Happy Feet), says he was amazed by what he saw in Apes. "The level of subtleties and complexity delivered in the performance of Caesar from the combination of both Andy Serkis' performance capture and the team of animators at Weta is truly groundbreaking. I feel it's a very strong favorite for visual effects awards."

In addition, Rosenbaum, AMPAS member of the Visual Effects Branch, is among a growing contingent that feels it is time that an actor's performance in a digital role is fully recognized. "It is time for the Actor Branch to finally acknowledge that the believability of Caesar came from an actor's performance. How he looked will surely be recognized by my Branch," says Rosenbaum.

Thor
Release date: May 6
Production companies: Paramount Pictures, Marvel Entertainment, Marvel Studios
"I grew up mainly reading British comics, like 'Judge Dredd,' and was never really exposed to 'Thor,' " admits Ben Shepherd, VFX supervisor at Cinesite. "Not knowing what to expect, I was pleasantly surprised by Thor. There were massive set pieces and environments, particularly the impressive Asgard environment. The battle with the ice warriors was well rendered, and there was some very accomplished CG in there."

Transformers: Dark of the Moon
Release date: June 28
Production companies: Paramount Pictures, Hasbro, Di Bonaventura Pictures
Shepherd is not alone in selecting this summer's Transformers as the best of the franchise so far. "For me, this was the best of the three films they've released. In the first two movies, I found the action too fast and confusing, but in the latest installment, the combat has been slowed down (possibly to help the stereo), which worked much better," he says. "I wouldn't place myself in the Transformers fan bracket, but I thought the film was awesome. The quantity and quality of the destruction effects were amazing."

Rainmaker's Ward challenges folks to find a camera in this film that isn't moving or barely being operated. "As anyone in VFX knows, a moving camera means a matchmove, and a moving Michael Bay camera often means a matchmove from hell." Ward describes himself as a fan of Michael Bay's camera work and was excited to hear him getting back together with DP Amir Mokri after enjoying the crazy sequences they conjured up on Bad Boys 2. "The reunion paid off, as TF3 didn't disappoint—the ride was constant with every robot-filled frame. Who can forget the detail in Shockwave's Driller as it tore through the Chicago skyline? Another standing ovation for ILM's seamless work in this film and a huge pat on the back for all the stereo work in the film. This was the best use of stereo 3D this year."

Destruction, says Tippett's Bourland, that is what Transformers is about. "Watching Shockwave's giant mechanical death worm chew its way through a building that our heroes are running around in was probably the 'best building being destroyed' sequence ever," he says. "I also really appreciated the fact that Michael Bay actually got some guys to squirrel-suit-jump into downtown Chicago."

The Twilight Saga: Breaking Dawn — Part 1
Release date: November 18
Production companies: Summit Entertainment, Imprint Entertainment, TSBD Canada Productions, TSBD Louisiana, TSBD Productions, Total Entertainment, Zohar International
A lot can be said for some of the effects in this film, but the consensus seems to be that the CG wolves were done extremely well.

The Adventures of Tintin
Release date: December 21
Production companies: Columbia Pictures, Paramount Pictures, Amblin Entertainment, WingNut Films, The Kennedy/Marshall Company, Hemisphere Media Capital, Nickelodeon Movies
While the film had not been released in the US as of press time, there was no shortage of comments pertaining to this highly anticipated film. "Friends of mine either in the industry or not who have seen this film, film geeks, and even the harshest of couch-surfing critics are all boasting at how amazing this film is," says Ward from Rainmaker. "It's a winning combination in every way: Spielberg, Jackson, Weta, Georges Rémi's great writing of the 'Tintin' comics."

Ward also believes that it's here with this film where performance capture as a medium may finally find its foothold. "America will have to wait a little longer for this one, but I'll continue to drool over the trailers until I can buy my ticket," he says.

Okun describes this film as "technically groundbreaking and amazing work!" But more importantly, he says, it raises the question of whether [the work] is VFX or something else. "Is it something new? Something forecasting our futures in terms of what can be done? It will be a game-changer for the future, as the crossover between acting and VFX will seamlessly merge and no one will ever again be able to tell technique," Okun adds. "It will be hidden from common understanding—depending on how it is applied in the future."

Cars 2
Release date: June 26
Production companies: Walt Disney Pictures, Pixar Animation Studios
"I loved the feel of this film," says Okun about Pixar's latest offering. "While it clearly uses newer techniques to arrive at some of the imagery, it also felt warm and comfortable, so the VFX were invisible to the story, as they should be."

Method Studios' Woloshyn also enjoyed the film. "There is nothing quite like seeing a Pixar film through the eyes of a child. Seeing Cars 2 with my younger son was indeed a special treat, especially in IMAX 3D. And, despite what some 'grown-up' reviewers had to say about the film, Pixar's target audience (my Joseph) demonstrated for me what is truly magic about great animation, layout, and editing (the things grown-ups think about)," he says. "To my son Joseph, Lightning McQueen, Mater, and the rest of the cast are as 'real' as any live-action characters. And to be immersed in the IMAX 3D presentation of the film was about as magical an experience for him as meeting them at Disneyland."

Happy Feet Two
Release date: November 18
Production companies: Kennedy Miller Mitchell, Dr D Studios, Village Roadshow Pictures
In 2007, Happy Feet took the Oscar for Best Animated Feature, besting Pixar's Cars. In 2012, we have part two of a showdown. "Miller versus Lasseter again in this category, featuring sequels of the same films. Will it turn out the same as last time? It'll have to be the stronger story that wins," observes Ward. "Both these films feature brilliant animation and look incredible."

Kung Fu Panda 2
Release date: May 26
Production company: DreamWorks Animation
As Ward points out, the art direction in this film, the color, the lighting—it burns in your mind days after having watched it. DreamWorks Animation's use of various styles of animation helps keep the look interesting and engaging, he adds. "When I see a 3D character having a flashback in 2D. . . well, it makes sense, doesn't it? The feathers on the peacock (Lord Shen) were a show alone. Wet fur, wet feathers; it was like I could reach out and touch these characters, without 3D glasses!"

Mars Needs Moms
Release date: March 11
Production companies: Walt Disney Pictures, ImageMovers Digital
"I can speak firsthand at witnessing some of the industry's best talent working on this film," says Ward, who had been layout supervisor at ImageMovers before migrating to Rainmaker. "Though audiences met the movie with less than warm appraisal, I think ImageMovers Digital did an amazing job on the final product. The look fell somewhere between the likes of A Christmas Carol and Monster House, but still held its own unique style, offering a stylized character study with realistic shaders. The incredible designs of Doug Chiang and his top-notch art department scream in every shot, as you can literally compare the design work to the final frames."

Puss in Boots
Release date: October 28
Production company: DreamWorks Animation
To prepare Puss in Boots for his leading role, the DreamWorks team gave him more fur that responds better to his movements. A lot of work also went into the film characters' facial expressions. Another big challenge was the environments, particularly cloud world, with its volumetric clouds. The shooting beanstalk in stereo 3D was also impressive.

Rango
Release date: March 4
Production companies: Blind Wink Productions, GK Films, Nickelodeon Movies
As Bourland points out, ILM really broke out of the animated feature mold with this one. "The world they created was rich and dirty, not all clean and polished like a Pixar or PDI film. The amount of detail they put into even background characters was impressive," he says. "The volumetric effects and style of lighting they chose also gave the film a much more cinematic feel than any other animated feature to date."

“Leave it to ILM and Gore Verbinski to raise the bar on what you ‘can and can’t do’ in a family animated film,” notes Ward.

Jeannette was another who was impressed by the visuals in Rango, citing the saloon scene as his favorite moment in the film. "The visuals and lighting were breathtaking," he says.

Rio
Release date: April 8
Production companies: Blue Sky Studios, Twentieth Century Fox Animation
There's no question, Rio is colorful. "Talk about saturation of colors!" notes Ward. He also believes the movie contains some of the best camera work he has seen in an animated film lately—well operated and conducted. "Having to track the action of birds isn't easy, nor is animating them to move so realistically and with so much character."



For the sequel Batman: Arkham City, Rocksteady Studios extended the action from the confines of Arkham Asylum to the sprawling mean streets of Gotham, a much larger environment.

Gaming


When Christopher Nolan’s Dark Knight exploded onto the cultural landscape in 2008, and Rock-steady Studios’ Arkham Asylum

arrived on its heels in 2009, the ground-breaking film and equally groundbreaking

video game (still lauded by The Guinness Book of World Records as the “best superhero game ever”) were a dynamic duo that set collective imaginations on fire, transcended genre, broke sales records, and established new, almost unreachably high standards for comic-book art in their respective mediums.

Now, that dynamic duo is poised to return with a one-two punch that culminates with Nolan’s Dark Knight Rises in 2012 and begins with Rocksteady’s Batman: Arkham City, the eagerly awaited follow-up to Arkham Asylum that sold a staggering two million copies in its first three weeks alone.

Written again by Paul Dini, directed by Sefton Hill, and art-directed by David Hego, the sequel eclipses the scope and scale of its predecessor in almost every way, lifting Batman out of the claustrophobic confines of Arkham Asylum and releasing him onto the mean streets of Gotham, an environment that's more than five times bigger. Scaling up the playing field meant scaling up the cast of villains and thugs—a population explosion that has the city overrun with almost every super-villain from the Batman mythos. As a result, Rocksteady had to adapt Batman's gameplay to the massive Gothic sprawl. They gave him a Power Dive to glide between buildings and the ability to chain attacks to contend with the relentless gang assaults, sometimes comprising as many as 30 or more assailants—a far cry from the one-on-one combat of Arkham Asylum (see "Dark Matter," October 2009).

The sequel is set one year after the original. Batman has foiled the Joker's plans to poison Gotham's water supply with the zombie-making Titan chemical, but Quincy Sharp, former warden of Arkham Asylum, has taken credit for the collar. Parlaying his notoriety into a successful bid for mayor, Sharp's first act is to buy out a large section of the slum-infested North Gotham to house the burgeoning inmate population, creating a makeshift prison-city policed by a private military contractor called Tyger.

To oversee the so-called Arkham City, Sharp hires psychotic psychiatrist Hugo Strange, who not only has a hidden agenda for the city, but also knows Batman's true identity, "leaving him vulnerable and exposed in a way he's never been before," says Hill. Surveying the open city from atop his gargoyle perch, watching from a distance as it factionalizes under each villain vying for rule, Batman is eventually forced into the city when Two-Face kidnaps Catwoman, his former love, and devises a plot to publicly execute her. Through it all, Batman tangles with Catwoman and allies with Robin to stop Gotham from descending into total chaos.

'Batman in Gotham' Feel
Whether Batman's motivation is love or heroism, Rocksteady's primary motivation for relocating the Caped Crusader to Arkham City was to deliver, according to Hill, that "Batman in Gotham feeling."

"That sensation of gliding through the streets of Gotham City as the Dark Knight was one of the key objectives we set for ourselves," says art director David Hego. "Moving the action out of the asylum and onto the streets was a huge creative and technical undertaking; Batman's navigation abilities needed to step up, providing an entirely new set of gameplay opportunities for the player. From an artistic perspective, the priority was to create a world suffused with a lot of realistic elements so it would feel believable but still uniquely Gotham."

This uniquely Gotham feel borrows and expands on the architectural styles and atmosphere set in Arkham Asylum. Like before, crumbling Gothic and Victorian buildings abound, where old-fashioned turrets, spires, and gargoyles clash with glaring splashes of neon signage. For Arkham City, however, art directors added flourishes from other architectural and art movements of the 20th century.

"Of course, the architecture is reminiscent of Arkham Asylum for the simple fact that we wanted the world we created to remain consistent and logical," says Hego. "However, we expanded on the world architecturally, buttressing it with new conceptual pillars. Gothic and Victorian-style structures are still present as the foundation and DNA of Gotham City and its dark feel. On top of these two strong styles, we've added Art Nouveau elements in the architecture and design. It's fascinating to explore real-world history and borrow elements to re-create piece by piece for our world."

In addition to borrowing from real-world history, the team extracted visual threads from the early history of cinema and wove them into its dark, visual tapestry. "Another inspiration for the atmosphere of Arkham City came from German expressionism (think 1920s The Cabinet of Dr. Caligari). We took cues not so much from structures and perspective, but more in the way we lit the world, with crude light and shadows, which is appropriate for Gotham City," adds Hego.

The game features a wide range of villains who roam the expansive playing field, including Two-Face.

The multiplicity of villains greatly informed the set design, too. Over the course of the game, Batman squares off against a who's who of villains, including Mark Hamill's Joker, a Cockney Penguin, Two-Face, and Mr. Freeze. Each villain has staked out his or her own little enclave in Arkham City, where the architecture, graffiti, lighting, and art direction personify the unique psychology of the character. It's a city diversified and variegated by villainy. Hence, as players make their way from one enclave to another—say, from the courthouse of former DA Harvey Dent (aka Two-Face) to the Penguin's Iceberg Lounge—they had to feel as if they were making a physical transition to another "emotional space" through the art.

"A great example of this is the Solomon Wayne Courthouse, where Two-Face is holed up," says Hego. "Not only is the location specifically relevant to him as the ex-district attorney of Gotham, but he's also remodeled the building to reflect his own duality. The right side of the courthouse—both inside and out—is defaced, like his own mutilated right half, smashed up and burnt, symbolizing his lust for chaos and carnage, while the other side is perfectly rendered in accordance with his belief in order and justice. So, in this way, moving from one district to another is signposted by subtle changes in the features and landmarks of the street."

By the same token, the Joker has established his territory in the Sionis steel mill nestled in the industrial part of the North Gotham docks. Here, the Joker's gang has redesigned the area into a massive, morbid funfair. "The mix of funfair elements with the industrial setting creates an explosive environment, rich in color, just like the Joker's personality. It was a great experience trying to imagine how each villain's faction would mark its territory," adds Hego.

This ghettoized city stretches out before Batman in great vistas when he enters his Power Dive, arcing over streets and the skyline. "The city is such a rich, dense place, filled with these little iconic elements and details," Hego points out, "that we had to be clever with what we display on screen. To that end, we employed a complex LOD system to hide superficial details at a distance, while keeping texture density and geometry impressively high at street level or while grappling between buildings."
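The article doesn't detail how Rocksteady's LOD system works internally, but the core idea it describes—swapping detail tiers by camera distance—can be sketched in a few lines. The thresholds, tier descriptions, and mesh names below are invented for illustration.

```python
# Hypothetical sketch of distance-based LOD selection; thresholds and
# mesh names are invented, not taken from Rocksteady's engine.

def select_lod(distance_m, thresholds=(15.0, 60.0, 200.0)):
    """Return an LOD tier (0 = full detail) for a camera distance in meters.

    Tier 0: street level -- full geometry, gargoyles, signage decals.
    Tier 1: grappling range -- simplified props, full building shells.
    Tier 2: glide vistas -- building silhouettes and baked textures only.
    Tier 3: skyline backdrop -- billboard impostors.
    """
    for tier, limit in enumerate(thresholds):
        if distance_m < limit:
            return tier
    return len(thresholds)  # beyond the last threshold: impostors

# A renderer would call this per object each frame and bind the matching mesh.
meshes = ["courthouse_lod0", "courthouse_lod1", "courthouse_lod2", "courthouse_lod3"]
print(meshes[select_lod(8.0)])    # street level
print(meshes[select_lod(120.0)])  # mid-glide
```

The trick Hego alludes to is keeping tier 0 dense enough to survive close grappling shots while tiers 2–3 keep the long Power Dive vistas cheap.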

Unreal to the Max
Using Epic Games' Unreal Engine 3 and Autodesk's 3ds Max, artists forged all Arkham City's texture maps, geometry, and lighting. The German Expressionist cinematography—crude, angular, brooding—came mainly from the way the moon lights the world. "It's not just about the lighting by itself, but about how the light interacts with the materials, the normal map, and the specular levels of the snow, the water, the buildings; and the way the water towers and chimneys cut through the moonlight with dynamic light shafts," says Hego. "That's the key to capturing the striking Gothic atmosphere."

Another crucial light source in the game, of course, is the Bat-Signal, not just because of its connotation within the Batman universe, but for its narrative function, too, pointing the player to the next objective as it refracts and reflects off smoke and clouds. "The Signal can be placed arbitrarily anywhere on the map by the player (which means it could end up too distant and dim), so we had to find a way to make its integration coherent without feeling fake. Using the stock lighting of the Unreal engine, somewhat re-engineered by our coders, we decided to use an arrow-style representation of the cone of light through the smoke, instead of a solid light cone. With the arrow-style lighting, the Signal achieves its functional purpose and is visually impactful without looking out of place."

Rogues GalleryThe city’s massive cast, composed not mere-ly of homogeneous non-player characters (NPCs), but of highly unique super-villains and their equally unique minions, put Rock-steady’s character modelers to the test. After sculpting a rough base mesh in 3ds Max, modelers refined the geometry in Pixologic’s ZBrush to produce a high-resolution version of each character. From this, they created the in-game model and extracted the normal map. “The poly counts of the in-game models aren’t low, ranging around 15k per character, but the normal map is still vital, to keep all the details of the high-res version,” says lead artist Pablos Hoyos.

While most of the intricate details—wrinkles, scars, caking face paint, and so forth—were baked into the normal maps, artists used ZBrush and Adobe's Photoshop to paint diffuse maps, specular, and specular power maps, as well as transmission maps, to simulate subsurface scattering of light through skin, flesh, and veins. "We always try to add as much detail as we can, especially in the faces. We have skin imperfections, like moles and skin marks, different types of pores, stubble, skin lines, wrinkles, skin tones, and so on. All these details are present in each map of the shader system and, when layered together, produce an astonishing sense of realism," says Hoyos.
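How those painted maps layer together can be illustrated with a toy per-texel combine. This is a generic Blinn-Phong-plus-transmission sketch, not Rocksteady's actual shader; the remapping of the specular power map to an exponent, and all the sample values, are assumptions for illustration.

```python
# Toy per-texel shading combine showing how diffuse, specular, specular
# power, and transmission maps layer into a final intensity. Generic
# illustration only, not Rocksteady's shader; values are normalized [0, 1].

def shade(diffuse, specular, spec_power, transmission,
          n_dot_l, n_dot_h, back_light):
    """Combine one texel's map values into a final intensity.

    diffuse         -- albedo from the painted diffuse map
    specular        -- highlight strength from the specular map
    spec_power      -- highlight tightness from the specular power map
                       (remapped here to an exponent in [1, 128])
    transmission    -- how much light bleeds through skin (transmission map)
    n_dot_l/n_dot_h -- clamped lighting terms derived from the normal map
    back_light      -- light arriving from behind the surface
    """
    exponent = 1.0 + spec_power * 127.0
    diffuse_term = diffuse * n_dot_l
    specular_term = specular * (n_dot_h ** exponent)
    # Cheap subsurface approximation: backlight leaks through thin flesh.
    transmission_term = transmission * back_light
    return min(1.0, diffuse_term + specular_term + transmission_term)

# A backlit ear texel: strong transmission, modest specular.
print(shade(0.6, 0.2, 0.3, 0.7, 0.8, 0.9, 0.5))
```

The point of the layering is visible in the last term: with transmission painted to zero (thick bone, armor) the texel shades like ordinary skin, while a high value lets backlight glow through, which is the effect the transmission maps fake for flesh and veins.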

Indeed, unlike the square-jawed, nondescript neckless grunts who inhabit most games, Arkham City's faces reflect the subtleties of strong, nuanced personalities. "Some of the faces presented quite unique challenges, such as Two-Face's burnt flesh, Solomon Grundy's 'zombified' look, and Penguin's old skin, which he cakes in makeup because of his vanity," says Hoyos.

Separating the villains with a colorful individuality was a challenge, contends Hoyos. Catwoman was all about playing with her proportions until we got the right mix of beauty and sex appeal. Penguin was all about making his face look pure evil, and the broken glass monocle was an unusual, character-defining touch. Mr. Freeze's armor is a complex assemblage of many individual pieces, so modelers had to carefully plan out their work with the rigging and cinematics teams to ensure the pieces behaved correctly (without intersections) and looked as good as possible.

Each bad guy has his own lair, an environment designed by artists to reflect the villain's personality.

Mocap Method Acting
To drive the characters' animations, riggers built an IK skeleton using a basic 3ds Max biped rig that was augmented with additional facial bones. "Because we also have to deal with a lot of motion capture, we built a version of this IK setup that runs in MotionBuilder—the primary work space for motion-capture-based animation," says lead animator Zafer Coban. Unlike Arkham Asylum, in which animators relied heavily on normal-map blending for delicate deformations in subtle facial expressions, this time Rocksteady wanted to push the range in the performances by relying more on mocap.

"The primary difference between the two games relates to Batman's face and face setup," says Coban. "Primarily, we wanted to enhance his performances and those of all the characters by developing a facial motion-capture pipeline whereby face actors would repeat the lines of the original voice actors [like Mark Hamill or Kevin Conroy, who plays Batman], providing facial acting in the process." This necessitated a more highly articulated facial rig, granting a much larger and more accurate range of possible motion.

In more than 200 motion-capture sessions, Rocksteady shot a total of 14 hours of facial data and 17 hours of body motion for the in-game animation (excluding the hours shot for cinematic sequences). In fact, every single character's animation set includes some motion capture.

The motion-capture setup at Rocksteady is an optical, marker-based Vicon system comprising 32 cameras. The team uses Vicon's Blade software to capture and process the initial data. "We have a solution that ensures good, sturdy data is baked onto the skeleton within Blade before moving things on to the next stage in MotionBuilder," says animation programmer Tim Rennie. Here, animators can take the original, actor-scale performance and drive the final in-game character setup. Tweaks and embellishments to the performance happen in MotionBuilder, but much of the final animation is keyframed in 3ds Max, which is the final destination before export to the engine.

"We produced about 45 minutes of final facial capture for most characters, during which time our facial actors match their voices to the original actors' performances," says Rennie. "This involved a complex process in which the facial actor would repeat 10-second clips until we had a perfect, fully synced performance." These clips were captured with their correct time code so they could be reassembled in MotionBuilder, where animators would use video reference from the capture to combine everything—marker data, hand-tweakable controls, and automatic correction scripts—to drive the final performance. The resulting facial animation could then be merged separately onto the body capture within the Unreal Engine without the animators having to worry about breaking the sync to the final audio.
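The take-reassembly step described above can be sketched as dropping each approved 10-second take onto a master timeline at the frame its time code indicates, with later re-takes overwriting earlier ones. An illustrative Python sketch (frame rate, time-code format, and data layout are all assumptions, not Rocksteady's actual pipeline):

```python
# Sketch: reassemble approved facial-capture takes onto one timeline
# using their recorded time code. The 30 fps rate and the clip format
# are illustrative assumptions.

FPS = 30  # assumed capture rate

def tc_to_frame(tc):
    """Convert an 'HH:MM:SS:FF' time-code string to an absolute frame."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def assemble(takes):
    """Place each approved take's frames at its time-code position.
    Later takes overwrite earlier ones, mirroring re-shooting a short
    clip until the sync to the original audio is perfect."""
    timeline = {}
    for take in takes:
        start = tc_to_frame(take["tc"])
        for i, frame in enumerate(take["frames"]):
            timeline[start + i] = frame
    return timeline

takes = [
    {"tc": "00:00:01:00", "frames": ["a0", "a1", "a2"]},
    {"tc": "00:00:01:01", "frames": ["b0", "b1"]},  # re-take overwrites
]
timeline = assemble(takes)
```

Because every take carries its own time code, the order in which takes were shot never matters; only the last approved take for a given range survives.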

Along with facial capture, the facial animation system also employs OC3 Entertainment's FaceFX, depending on the importance and complexity of the scene. Using blend-shape targets in a fully articulated FaceFX rig, animators could keyframe expressive eye animations or subtle facial tweaks to punctuate a particular body movement, quickly polishing hundreds of lines of dialog. "In addition, we also constructed a hassle-free pipeline to quickly embed FaceFX animations onto body animations for all in-game movements," says Rennie. A small mercy in a game with 100-plus unique faces to animate, excluding weight variants such as fat or thin versions.
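Blend-shape evaluation of the kind a FaceFX-style rig performs reduces, at its core, to a weighted sum of per-shape vertex offsets added to a neutral pose. A minimal sketch, with invented shape names and values (this illustrates the general technique, not FaceFX's actual API):

```python
# Sketch of blend-shape evaluation: final vertex positions are the
# neutral pose plus a weighted sum of per-shape offsets (deltas).
# Shape names and vertex data are invented for illustration.

def evaluate_blendshapes(neutral, deltas, weights):
    """neutral: list of (x, y, z) vertices; deltas: {shape: offsets};
    weights: {shape: float in [0, 1]} keyframed by the animator."""
    result = [list(v) for v in neutral]
    for name, weight in weights.items():
        if weight == 0.0:
            continue  # skip inactive shapes cheaply
        for i, (dx, dy, dz) in enumerate(deltas[name]):
            result[i][0] += weight * dx
            result[i][1] += weight * dy
            result[i][2] += weight * dz
    return [tuple(v) for v in result]

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = {"smile": [(0.0, 0.2, 0.0), (0.0, 0.4, 0.0)]}
posed = evaluate_blendshapes(neutral, deltas, {"smile": 0.5})
```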

Feline Finesse

While Batman remains the story's main hero, for about 10 percent of the game, the player can slip into the sleek, skin-tight leather of Catwoman. Armed with a whip and bolo, Catwoman exploits the chaos in Arkham City to go on a kleptomaniacal rampage, thieving jewels and valuables like there's no tomorrow. In her first mission, to steal an orchid for Poison Ivy, she glides lithely across rooftops and alights upon some unsuspecting Tyger security guards standing over a manhole cover—her access point to a maze of sewers leading to a vault. Inside, she performs her signature "ceiling climb," dropping down on guards to pickpocket their keys. When the alarm blares, she unloads with a flurry of fluid roundhouse leg kicks that would daze Batman with their blinding speed and grace.

In designing Catwoman's gameplay, Rocksteady's first objective was to make sure players did not feel like they were guiding a curvier version of Batman. This entailed a wholesale reworking of the rigging and weighting of the standard IK chain, enabling greater speed and flexibility in her animations.

"Catwoman's rig is, of course, unique to her. She's a lot slimmer, shorter, and has a bunch of bespoke controls for her whip," says Coban. "As soon as we decided to include Catwoman as a playable character, we wanted the gamer to have a totally new experience of the Arkham City world, not a re-skin of any sort. With combat, we've taken our influences from acrobats, gymnasts, and ballerinas to bring a unique flavor to her fighting style. We've concentrated on legwork more, and left the hard-hitting, brutal punches to Batman himself."

For example, Coban says, when Batman hits, the impact registers with sheer, blunt-force trauma, whereas Catwoman's attacks, while less impactful, are faster, more agile, and athletic. "We played with those elements, and it really shows during combat. Players will particularly appreciate this unique legwork in her Stealth Predator gameplay, where she'll flip up onto a thug's shoulders, wrap her legs around his arms and head, and choke him out with those long, hardened legs."

Everything from Catwoman's gadgets to her navigational skills is custom-made for the feline fatale. "She doesn't have the Grapnel Gun, so we've given her the Whip Swing and the Claw Climb; altogether, it looks and feels very different playing her," says Coban. Catwoman is also a prime example of Rocksteady not only pushing facial and body mocap to enhance performances, but also using run-time physics to enhance the dynamic motion of hair, coats, straps, and, specifically, Catwoman's whip, which snaps and coils with astonishing realism. "We always had the ability to add additional movement to a character's animation using run-time physics simulation, but we really pushed it hard on Arkham City."

Artists baked most of the facial details into the normal maps, and then used ZBrush and Photoshop to paint diffuse, specular, and specular power maps to simulate subsurface scattering through the skin and veins.

Cape Animation

Newly enhanced run-time physics simulation also underlies much of Batman's improved cape animation, which billows in the wind, unfurls during the character's Power Dive, and pleats and settles as he slows into combat mode. The in-game cape combines a mixture of elements: real-time cloth simulation driving a skeletal mesh rig; hand-keyed skeletal animation; and off-line cloth simulation, authored in 3ds Max Cloth and baked onto skeletal animation. To produce the ultra-realistic clothing animations in the pre-rendered cinematics, artists baked this off-line cloth sim onto vertex animation.

From the outset of production, the team wanted to have extremely fine control of the cloth but also to ensure that it reacted in a natural and dynamic way to the environment, the weather, and Batman's movements. "The biggest change during the development of Arkham City was redeveloping the cape rig midway through production. We had to make it easier for animators to pose the cape more intuitively for any particular action," says Coban. This pose would then form the driving shape for the final physics simulation at run time.

During most of the gameplay, the in-game cape is pure real-time simulation, but when a particular iconic move or a stylized result is needed, the keyframed animation kicks in. In some situations, an animation is used for the overall shape, while the sim adds physics detail at the edges. For example, the wind rippling through the cape is achieved by level artists placing volumes in the world nearby. "We spent a lot of time trying to retain a high level of animator control while still running the cape under live physics simulation," says Coban. Balancing animator control with live physics simulation was also crucial to animating Robin's staff. "It's a complicated piece of kit that can bend, flex, extend, and turn into a shield, all while he's swapping it from hand to hand," he adds.
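When an animation drives the overall cape shape and the simulation adds detail at the edges, as described above, the combination can be pictured as a per-vertex interpolation between the keyframed pose and the simulated one, weighted by a mask that favors simulation toward the free-hanging edge. A rough Python sketch (the vertex data and weight mask are invented; the real system works on a full skeletal rig):

```python
# Sketch: blend a keyframed cape pose with a physics-simulated one.
# sim_weight holds one blend factor per vertex, e.g. near 0 at the
# shoulders (animator-controlled) and near 1 at the hem (simulated).

def blend_cape(keyed, simmed, sim_weight):
    """Per-vertex linear interpolation between two cape poses."""
    out = []
    for (kx, ky, kz), (sx, sy, sz), w in zip(keyed, simmed, sim_weight):
        out.append((kx + (sx - kx) * w,
                    ky + (sy - ky) * w,
                    kz + (sz - kz) * w))
    return out

keyed  = [(0.0, 0.0, 0.0), (0.0, -1.0, 0.0)]   # authored pose
simmed = [(0.0, 0.0, 0.0), (0.4, -1.0, 0.2)]   # physics result
mask   = [0.0, 1.0]  # shoulder pinned to animation, hem fully simulated
blended = blend_cape(keyed, simmed, mask)
```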

Combat Choreography

To handle the crush of assailants and the sprawling, open setting, Batman's range of movement for maneuvering through the environments and for hand-to-hand combat has undergone an aggressive expansion. In fact, his animation set has doubled. According to Hego, the expansion of the game world drove a redesign of every aspect of Batman's navigation and combat, as well as a huge overhaul in the way the team conveyed story and narrative elements to the player. "For example, the enhanced Power Dive—through which Batman gets around the city and the player experiences the freedom and exhilaration of flying through alleys and over the skyline—was a completely new development challenge for us, resulting in the full-momentum gliding system," he says.

Furthermore, doubling the number of combat moves was essential to convey a sense of variety in Batman’s combat skills, so critical to the feeling of power and dominance offered by the FreeFlow Combat system.

Waylaid by Oswald Cobblepot's goons and the Frankenstein-like Solomon Grundy in the Iceberg Lounge, Batman chains his punches and kicks to clear the room, following up a roundhouse kick to one thug with a swift leg sweep to another in a seamless series of multiple, simultaneous counters, all the while reacting to thrown objects and without the slightest hitch in the blending system.

"In Arkham Asylum, thugs would generally attack one at a time, but in Arkham City, we've blown that out of the water, letting thugs rain punches and kicks in simultaneous assaults that really make the player feel pressure—as they would in a real fight," says Hego. "Consequently, Batman can now perform double and triple counters, whereby he deflects and dodges all these blows and sends counterattacks [on multiple characters] in one swift move."

Cinematic Touch

The numerous cut-scenes spliced throughout the game unspool through the Unreal game engine using an advanced lighting rig setup. The rig—which uses hundreds, if not thousands, of lights—simulates global illumination and bounce lighting to add realism, and can be tailored precisely for the atmosphere and design of the shots.

"Our engine is so powerful," says cinematics director Paul Boulden, "we were able to create shaders that simulate subsurface scattering and ambient occlusion. It was important for us to not merely create a realistic visual, but to stylize it according to Rocksteady's trademark vision."

Boulden meticulously directed the cut-scene performances, assembling them from extensive facial-capture sessions. "We wanted to bring a new level of realism to the characters. One of our main goals was to bring the characters to life by making them more believable," he says. "We captured actors with a physical marker setup on their faces. We were able to capture subtle gestures and nuances that would have been otherwise impossible to get. Rocksteady is driven by the conviction that bringing characters to life will yield a stronger connection to the audience, thus allowing us to tell a more convincing and immersive story." —Martin McEachern

The animators increased the number of Batman's combat moves to make him appear more powerful.

The daunting task of programming AI for these complex ensemble fights and endless counterattacking fell to AI programmer Tim Hanagan and his team of coders. "There were so many challenges involved in increasing the crowd combat from 10 or 12 to about 30. First, we had to optimize the performance of all the various systems so that these large-scale fights could run at a consistent 30 frames per second," he says. "The second was managing the positioning of so many enemies, to prevent the fights from feeling too cramped." From a visual standpoint, the group had to allow the player to see clearly and assess each situation. Most of these challenges were addressed through the studio's custom character collision system.

Implemented within the Unreal engine, this character collision system replaces the stock Unreal system with one that's much faster, more efficient, and more streamlined.

Hanagan explains: "It uses the navigation mesh data to allow faster collision queries against an approximation of the real level geometry. This was a major contributing factor in allowing us to support so many active characters at once."
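The navmesh-backed collision query Hanagan describes can be illustrated in miniature: positions are tested against the navigation mesh's polygons instead of the far denser real level geometry. A 2D Python sketch with an invented mesh (the actual system is, of course, much more elaborate):

```python
# Sketch: a position is "walkable" if it lies inside any navmesh
# triangle; everything outside the navmesh approximates collision
# with level geometry. Mesh data is invented for illustration.

def _sign(p, a, b):
    """Signed area test: which side of edge a->b point p falls on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_triangle(p, tri):
    """True if p is inside (or on the boundary of) triangle tri."""
    a, b, c = tri
    d1, d2, d3 = _sign(p, a, b), _sign(p, b, c), _sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def walkable(p, navmesh):
    """Cheap collision query: inside any navmesh triangle = clear."""
    return any(in_triangle(p, tri) for tri in navmesh)

# Two triangles forming a 10x10 walkable square.
navmesh = [((0, 0), (10, 0), (0, 10)), ((10, 0), (10, 10), (0, 10))]
```

In practice such queries would be accelerated with a spatial index over the triangles, but even the naive scan is far cheaper than ray-casting against full level meshes.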

The team additionally implemented a real-time path-smoothing system to improve the look and realism of the paths the AI take when traversing levels. In addition, they developed a character scripting system within the Unreal Kismet scripting editor to allow animators to implement complicated scripted events without any code support—all while still allowing for a high level of player interaction. While the character collision system has the capacity to support more than 30 combatants in some areas, Hanagan cautions that pushing the crowd beyond that number only made the gameplay confusing.

Batman is also armed with a new "context-sensitive mechanic," which allows him to integrate his immediate environment into his fighting—improvising with a nearby railing, brick wall, pillar, or street lamp—to subdue assailants. These context-sensitive moves require code that can rapidly sample the local area to identify whether the surrounding geometry can be used within the current combat move, notes Hanagan. The challenge with all such systems is balancing the accuracy of the checks against the need for fast run-time performance. "You always want to minimize the number of line checks performed, but at the same time, you don't want to end up performing a wall animation on a 10cm-wide lamppost, or end up slamming a thug's head into what should be a railing but, in reality, is the space between two railing-high benches," he says.
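Balancing a small number of line checks against not mistaking a 10cm lamppost for a wall can be sketched as a simple width probe: fan a few horizontal line checks across the candidate surface and require a contiguous run of hits. The probe spacing and width threshold here are invented for illustration; they are not Rocksteady's actual values:

```python
# Sketch: decide whether struck geometry is wide enough for a wall
# move, using the results of a few evenly spaced line checks.

def wall_wide_enough(hits, min_width_cm=60.0, spacing_cm=20.0):
    """hits: booleans from horizontal line checks fanned across the
    candidate surface. Returns True only if a contiguous run of hits
    spans at least min_width_cm, filtering out thin props such as a
    10cm lamppost (which would register a single isolated hit)."""
    run = best = 0
    for hit in hits:
        run = run + 1 if hit else 0
        best = max(best, run)
    if best < 2:
        return False  # zero or one hit can never prove a wide surface
    return (best - 1) * spacing_cm >= min_width_cm

wall = wall_wide_enough([True] * 5)           # solid wall: 5 probes hit
post = wall_wide_enough([False, True, False]) # lamppost: one lone hit
```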

In building this robust combat system, Rennie contends that the most important thing wasn't any particular piece of technology, but the fact that the animators, gameplay coders, and tech artists all shared the same studio space and collaborated closely. Beyond this close collaboration, Rocksteady's artists relied on the studio's own animation blending system, which harnesses all the standard tools and techniques: cross-blending, time warping, additive animation, motion extraction, and mirroring. A particular focus was placed on automatically aligning animations. For example, if two characters are interacting with each other, the system will automatically blend them into the correct position based on their relative positions in the animation. If a character wants to interact with an item in the world, then a positional marker embedded in the animation tells the system how the character should be aligned.
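The positional-marker alignment described above boils down to solving for the character offset that puts the animation's embedded marker onto the world item. A simplified 2D Python sketch (all names and values are illustrative):

```python
# Sketch: shift a character so the interaction marker authored into
# its animation (relative to the character root) lands on a world item.

def align_to_item(char_pos, marker_local, item_pos):
    """Return the new character root position such that
    char_pos + marker_local == item_pos after the shift."""
    offset = (item_pos[0] - (char_pos[0] + marker_local[0]),
              item_pos[1] - (char_pos[1] + marker_local[1]))
    return (char_pos[0] + offset[0], char_pos[1] + offset[1])

new_pos = align_to_item(char_pos=(3.0, 2.0),
                        marker_local=(1.0, 0.0),  # hand reach in the anim
                        item_pos=(10.0, 2.0))     # lever in the world
```

In a real system this shift would be blended in over a few frames rather than snapped, so the correction is invisible to the player.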

Of course, much of this complex interaction with Gotham's urban jungle (as Batman or Catwoman) involves scaling walls and ledges, climbing through sewers and ventilation ducts, and trying to gain a precarious foothold or handhold on a cornice, gargoyle, crack, or crevice. For aligning hands to walls and ledges, Rocksteady used some of the Unreal Engine's built-in arm IK. However, the team tackled most of the challenge by building the environments to standard grid sizes and then animating to those sizes. So for a wall climb, there are separate animations for 128-, 256-, and 384-unit-height walls, and if the animations don't quite match up to the real wall, the artists would use a blend to shift the entire character.
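The grid-height approach, with separate animations for 128-, 256-, and 384-unit walls and a blend covering any mismatch, might look like this in outline (the function and data names are invented):

```python
# Sketch: pick the authored climb animation nearest the real wall
# height and report the residual the runtime must cover by blending
# the whole character, as the article describes.

CLIMB_ANIMS = {128: "climb_128", 256: "climb_256", 384: "climb_384"}

def pick_climb(wall_height):
    """Return (animation name, height shift in units) for a wall."""
    best = min(CLIMB_ANIMS, key=lambda h: abs(h - wall_height))
    return CLIMB_ANIMS[best], wall_height - best

anim, shift = pick_climb(270)  # a slightly non-standard wall
```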

"For aligning the character to the floor, we dynamically calculate a virtual floor plane that approximates the actual floor geometry underneath the character. A standard two-bone leg IK then skews the animation to fit the plane. In some situations, where leg IK is insufficient (like Catwoman crawling on the ceiling), we rotated the entire character to fit the virtual plane," says Rennie.
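A standard two-bone leg IK of the kind Rennie mentions typically solves the knee with the law of cosines so the foot reaches its target on the virtual floor plane. A planar sketch under that assumption (bone lengths are invented, and the real system is of course 3D):

```python
# Sketch: two-bone IK via the law of cosines. Given thigh and shin
# lengths and the hip-to-target distance, return the hip and knee
# bend angles (radians) that place the foot on the target.

import math

def two_bone_ik(thigh, shin, target_dist):
    # Clamp the target into the chain's reachable range.
    d = max(abs(thigh - shin), min(thigh + shin, target_dist))
    # Interior angle at the knee (pi = leg fully straight).
    cos_knee = (thigh**2 + shin**2 - d**2) / (2 * thigh * shin)
    knee_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_knee)))
    # How far the thigh rotates off the hip->target line.
    cos_hip = (thigh**2 + d**2 - shin**2) / (2 * thigh * d)
    hip_bend = math.acos(max(-1.0, min(1.0, cos_hip)))
    return hip_bend, knee_bend

hip, knee = two_bone_ik(thigh=0.45, shin=0.45, target_dist=0.9)
# Fully extended reach: both bend angles are zero.
```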

Dark Knight Rising

Indeed, Rocksteady's commitment to storytelling, high production values, and acting—both animated and mocapped—has delivered the studio to the forefront of the industry's leaders, and Arkham City to the forefront of contenders for Game of the Year. At this year's Comic-Con, Hamill, Conroy, and Dini held court on a panel that was one of the convention's biggest draws, no small feat considering the presence of Peter Jackson and Steven Spielberg pushing Tintin.

And it's all by design, too, for the seeds of Arkham City were planted, like Chekhov's gun, in secret plans hidden in a backroom of Warden Sharp's office two years ago in Arkham Asylum. Who knows what little bread crumbs have been dropped for future sequels in Gotham's mean streets? Only Rocksteady knows.

What's certain is that the developer's achievements left fans waiting with bated breath for this sequel, and if Rocksteady holds fast to its resolve of pushing the bar higher and higher, The Guinness Book of Records may find itself passing the mantle again…and again.

Martin McEachern is an award-winning writer and contributing editor for Computer Graphics World. He can be reached at [email protected].

Developers used Epic Games’ Unreal Engine 3 and Autodesk’s 3ds Max for the textures, geometry, and lighting.



ANIMATION
Strike a Pose

Smith Micro Software has released Poser 9 and Poser Pro 2012, marking the first time the company has issued simultaneous releases of the animation tool. Both software applications now offer vertex weight-map rigging support and subsurface scattering capabilities, along with a simple-to-use user interface that has evolved from past releases, yet maintains a familiarity that existing users will appreciate. The releases also include more than 3GB of ready-to-use content, including figures (humans, skeletons, etc.) and architectural elements. Full scenes also have been put together, including office and crime-lab settings, helping to save users time in building their own environments. Poser 9 is a 32-bit application that's priced at $249 and offers full-level rendering control. Poser Pro 2012 is a 64-bit application and includes updated PoserFusion plug-ins and Collada support for professionals who may want to export animations into programs such as Maya, Softimage, Cinema 4D, and LightWave. The 64-bit application also includes the FireFly Render Engine and a vertex weight-map editing tool suite. Smith Micro Software; www.smithmicro.com/poser

MIDDLEWARE
Sim Solution

AI.implant, Presagis' multi-platform artificial intelligence (AI) authoring and runtime software solution, has been upgraded to Version 5.7. AI.implant is designed for simulation and analysis projects requiring realistic and dynamic urban environments, including unmanned aerial vehicle (UAV) and helicopter training, air traffic control applications, and driving simulation. The updated release improves attributes associated with traffic and human interactions, and enables users to build complex and realistic scenarios faster. As a COTS middleware product, AI.implant integrates seamlessly into existing pipelines and simulation engines. The new release improves the realism of road traffic and pedestrian interactions. Vehicles can now pass other vehicles using slower or oncoming lanes. Traffic lights can be customized to suit the simulation, or they can run in automatic mode. The TrafficSolver manages the advancement through the traffic-light cycle as the simulation runs. And a vehicle that is not in the correct lane when approaching an intersection will reset its path. Presagis; www.presagis.com

LIGHTING
KeyShot Tool Kit

Lightmap has launched the HDR Light Studio Live plug-in for Luxion's KeyShot renderer, bringing a professional real-time HDRI lighting tool kit directly into the KeyShot real-time visualization software. The plug-in allows users to improve the quality of their renders with custom lighting designs for each shot, all via a simple-to-use interface. Existing HDR environments can be augmented, giving more control over lighting and reflections. Lighting adjustments take place in real time in the KeyShot viewport. HDR Light Studio Live for KeyShot is included with the HDR Light Studio 2.0 Pro edition. Lightmap; www.hdrlightstudio.com

GPU
Follow the Link

AMD unveiled the FirePro SDI-Link, which helps bring real-time, GPU-accelerated performance to pipelines requiring Serial Digital Interface (SDI) input and output. The FirePro SDI-Link is receiving support from manufacturers such as AJA, Bluefish444, Blackmagic Design, Deltacast, DVS, and Matrox, as it allows for the design of fully featured SDI- and GPU-based solutions with ultra-low latency between select AMD pro graphics cards and third-party SDI input/output products. AMD also showed the FirePro V7900 SDI, a new graphics card that is the first to support AMD FirePro SDI-Link. The V7900 SDI will be certified as compatible with all five manufacturers providing PCIe cards offering advanced SDI video signal I/O capabilities. The V7900 SDI is designed specifically for broadcast graphics pipelines. The unit is the first to leverage AMD's DirectGMA technology to help ensure system-level, low-latency, synchronized data transfer between AMD FirePro professional graphics and third-party devices over the PCIe bus. AMD began delivering the FirePro V7900 SDI in October for $2,499. AMD; www.amd.com

For additional product news and information, visit CGW.com



December 2011/January 2012, Volume 34, Number 8: COMPUTER GRAPHICS WORLD (USPS 665-250) (ISSN-0271-4159) is published bimonthly (6 issues annually) by COP Communications, Inc. Corporate offices: 620 West Elk Avenue, Glendale, CA 91204, Tel: 818-291-1100; FAX: 818-291-1190; Web Address: [email protected]. Periodicals Postage Paid at Glendale, CA, 91205 & additional mailing offices. COMPUTER GRAPHICS WORLD is distributed worldwide. Annual subscription prices are $72, USA; $98, Canada & Mexico; $150 International airfreight. To order subscriptions, call 847-559-7310.

© 2011/2012 CGW by COP Communications, Inc. All rights reserved. No material may be reprinted without permission. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Computer Graphics World, ISSN-0271-4159, provided that the appropriate fee is paid directly to Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. Prior to photocopying items for educational classroom use, please contact Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. For further information check Copyright Clearance Center Inc. online at: www.copyright.com. The COMPUTER GRAPHICS WORLD fee code for users of the Transactional Reporting Services is 0271-4159/96 $1.00 + .35.

POSTMASTER: Send change of address form to Computer Graphics World, P.O. Box 3296, Northbrook, IL 60065-3296.




Because it matters.

All together now
AJA and Avid

www.aja.com

Desktop or laptop, PC or Mac, AJA products are designed to keep video professionals ahead of the game, delivering unrivaled quality and connectivity. Now, users of Avid software can benefit from the same workflow-enhancing features that Apple Final Cut Pro, Adobe CS5, and Autodesk Smoke editors have come to rely on.

From Io XT, our portable Thunderbolt solution, to KONA 3G with its multi-format 4:4:4 capture/playout and full 3D stereoscopic capability, AJA KONA and Io products have got your workflow covered. All models feature 10-bit uncompressed video I/O, SD and HD compatibility, and AJA's renowned hardware-based format conversion.

Compatible with PC or Mac, and with a choice of feature sets, AJA hardware provides any working editor with a powerful combination of professional performance and true flexibility, and the freedom to work with the software of your choice.

Find out about using AJA products with Avid at www.aja.com

AJA KONA, Io XT, and Io Express. Broadcast-quality capture, monitoring, and output for Avid® Media Composer® 6.0, Symphony® 6.0, and NewsCutter® 10.0.

