Computer Graphics World

DECEMBER 09


In the film, Jake Sully (actor Sam Worthington), a paraplegic war veteran, is given the opportunity to inhabit the athletic body of an avatar. He opts in. His avatar is an alien, a Na'vi, a race of humanoids that populate the planet Pandora. He, like all Na'vi, is blue: a 10-foot-tall biped with a stretched, cat-like body, almond-shaped eyes, a tail, and pointed ears.

Through his avatar, Jake immigrates to Pandora, a lush planet filled with waterfalls, jungles, and six-legged creatures, some of which fly. There he meets the beautiful Neytiri (actor Zoe Saldana) and assimilates into the Na'vian culture.

Everything on Pandora—every plant, creature, and character—is digital, created by artists using computer graphics tools and moved by animators working with keyframe and motion-capture data.

"The planet was really inspired by Jim's [Cameron] underwater dives," Letteri says. "There's bioluminescence. The creatures have blue skin, and the animals have vivid patterns. We all know the rules: Big animals don't have vivid colors. But they do underwater, and Jim said they can exist on this planet. So we brought that color palette to the surface and made it believable. However, the big thing was that Jim wanted to do facial motion capture."

Performing Characters

For Gollum in The Lord of the Rings, Weta had captured Andy Serkis's body, not his face. For King Kong, they glued markers on Serkis's face, captured him in a high-resolution volume, and then retargeted the motion data to Kong's CG face. "Jim didn't want to go that route," Letteri says. "He was more interested in a video head rig."

To make a head-mounted system that would encumber the actors as little as possible, Weta decided to create software that could track facial movements using one camera. Then they took it a step further by re-projecting the motion onto a 3D model in real time.
"We knew Jim would have real-time motion capture on the stage for the characters, and would be recording the faces," Letteri says. "We thought, wouldn't it be cool if we could do real-time faces? We knew he was coming in six weeks, so we did some all-nighters and got a system working." When Cameron arrived, he could see actors on stage wearing a head rig that was driving the facial expressions for a CG character in real time.

Stephen Rosenbaum—who had been on the crew at Industrial Light & Magic for The Abyss as a CG artist, was a CG animator on Terminator 2, and who had won a visual effects Oscar for Forrest Gump—was the liaison between Cameron and his Lightstorm group in Los Angeles and Weta in New Zealand. He helped integrate Weta's creatures, avatar puppets, and facial-capture system into previs and the real-time motion systems developed by Lightstorm and Giant Studios. Rosenbaum was one of six visual effects supervisors at Weta who worked with Letteri on the film. The other five were Dan Lemmon, Eric Saindon, Wayne Stables, Chris White, and Guy Williams.

"Lightstorm created environments at a previs level," Rosenbaum explains. "We created the creatures and character puppets at Weta that they used within the environments. Giant used our puppets during motion capture. And, when they had scenes where actors needed to interact with creatures, we also provided pre-animated characters so they could see the action during motion capture."

Giant and Lightstorm performed the real-time motion capture that allowed Cameron to see the CG version of the film at a game-quality level as the actors performed in a motion-capture volume approximately 40 feet wide by 70 feet long. Giant set up the volume using close to 120 industrial cameras from Basler Vision, and handled the retargeting.

Weta used an absorption-based subsurface scattering routine to give the blue-skinned avatars and Na'vi a fleshy, believable look.
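The core idea behind absorption-based subsurface scattering is Beer-Lambert attenuation: light traveling a distance through skin is absorbed per color channel, so longer internal light paths shift the surviving light toward the least-absorbed channel. A minimal sketch of that falloff (the coefficients here are illustrative placeholders, not Weta's actual shader values):

```python
import math

# Hypothetical per-channel absorption coefficients (per mm) for blue skin:
# red and green absorb more strongly than blue, so light that travels
# farther beneath the surface comes back tinted blue.
SIGMA_A = {"r": 0.9, "g": 0.5, "b": 0.15}

def transmittance(channel: str, depth_mm: float) -> float:
    """Beer-Lambert attenuation of one color channel over a light path."""
    return math.exp(-SIGMA_A[channel] * depth_mm)

def subsurface_tint(depth_mm: float) -> tuple:
    """RGB fraction of light surviving a scattering path of the given length."""
    return tuple(transmittance(c, depth_mm) for c in ("r", "g", "b"))

# Shallow paths stay near white; deeper paths tint toward blue.
print(subsurface_tint(0.5))
print(subsurface_tint(4.0))
```

A production shader would integrate this attenuation over many sampled paths under the surface; the sketch shows only the single-path falloff that drives the fleshy color shift.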
Pandora in Stereo

When the characters run past Pandora's digital plants, they look like they're in a deep jungle in stereo 3D because Weta integrated and composited the elements volumetrically. "We did volumetric lighting, smoke, fire . . . everything became volumetric," says Joe Letteri, senior visual effects supervisor at Weta Digital. "It's all depth-based. We have our own proprietary version of [Apple's] Shake, so we wrote a stereo version that does everything in parallel, and we had a 3D depth compositing system inside. We also worked with The Foundry on its new stereo tool sets for Nuke. Because of the stereo, it wasn't practical to shoot elements for anything; it all had to be spatial."

On set, Cameron could look at the output of the Autodesk MotionBuilder files from the performance-capture sessions in stereo and adjust the camera so that Weta knew the interocular distance that he wanted and where he wanted the convergence plane. "He goes for a natural feeling," Weta VFX supervisor Eric Saindon says, "a window into a 3D space. He seldom brings things past the convergence plane, but he definitely draws your eye where it should be."

Creating the stereo version of the film was, as it turned out, not much of an issue. "Our 3D implementation has been really good," Saindon says. "Because we know everything is correct in [Autodesk's] Maya, we don't do the stereo 3D until Jim buys off on the 2D. Then we render the other eye. The early shots were awkward, but the later sequences worked well. At the end of the day, the stereo 3D was less of a factor than we thought it would be." –Barbara Robertson
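The interocular distance and convergence plane Cameron dialed in map directly to on-screen parallax. In the standard off-axis (shifted-frustum) stereo camera model, a point at the convergence distance lands exactly on the screen plane, nearer points come forward of the screen, and distant points approach the full eye separation. A sketch of that relationship (the rig numbers are illustrative, not from the production):

```python
def screen_parallax(depth: float, interocular: float, convergence: float) -> float:
    """Horizontal screen parallax for an off-axis (shifted-frustum) stereo rig.

    Zero at the convergence plane, negative (in front of the screen) for
    nearer points, approaching the interocular distance for distant points.
    Units are arbitrary but must be consistent (e.g., centimeters).
    """
    return interocular * (1.0 - convergence / depth)

# Hypothetical rig: 6.5 cm interocular, convergence plane 400 cm away.
IO, C = 6.5, 400.0
print(screen_parallax(400.0, IO, C))     # 0.0 -> sits on the screen plane
print(screen_parallax(200.0, IO, C))     # negative -> pops out of the screen
print(screen_parallax(10_000.0, IO, C))  # nears IO -> deep background
```

Saindon's note that Cameron "seldom brings things past the convergence plane" corresponds to keeping most scene depths at or beyond the convergence distance, where this parallax is zero or positive.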
