Computer Graphics World

DECEMBER 09


CG Characters • Environments

Animating Performances

Animators also keyframed Na'vi ears and tails. "We'd whip their tails around if they were upset, and use them as a counterbalance when they ran," says Jones. "They were like another appendage. We also found the ears really useful for adding emotion to the character." The ears tell when a Na'vi is angry or shocked, just as they do for cats and dogs.

For the Na'vi bodies, the motion capture worked extremely well. "Giant's body capture was fantastic," Jones says. "We still had to animate their hands and fingers, but the offsets and targeting and retargeting were well done. They kept the weight. And, the data was clean."

The characters' design might have helped with the retargeting. Rather than completely altering the human proportions, the designers created the Na'vi with proportions similar to humans', but with slim hips, narrow shoulders, and long necks. "It made the retargeting process easier," Jones says.

Oddly, although animators often use motion-captured data to add the tiny movements that help bring alive a character standing still, Weta's animators found themselves adding jitter to the mocapped data in some cases. "When someone was yelling or screaming, the high-frequency jitters were often filtered out," Jones explains. "The system couldn't distinguish between muscle shake and noise. So we would animate it back in, and all of a sudden it felt like the characters were screaming, not just opening their mouths. We had the body muscle rig, but when a bicep fires, there needs to be a jitter. When [Cameron] saw us doing that, he really loved it."

The muscle rig is new, developed at Weta specifically for this film. "It's a dynamic system that simulates muscles properly," Saindon says. "It calculates the fat layers and colliding volumes much more accurately than in the past."
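The filtering problem Jones describes, where smoothing that removes capture noise also strips out real high-frequency muscle shake, can be illustrated with a toy example. This is not Weta's pipeline: the animation channel, the box-filter window, and the jitter amplitude below are all invented for illustration.

```python
import math
import random

def moving_average(signal, window):
    # Box-filter smoothing: a stand-in for the kind of noise
    # filtering that also strips real high-frequency muscle shake.
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def add_muscle_jitter(curve, amplitude, seed=1):
    # Layer small high-frequency offsets back onto the smoothed
    # curve, standing in for the jitter animators keyed back in.
    rng = random.Random(seed)
    return [v + rng.uniform(-amplitude, amplitude) for v in curve]

# An invented joint-rotation channel: slow motion plus fast "shake".
raw = [math.sin(0.1 * t) + 0.05 * math.sin(2.5 * t) for t in range(120)]
smooth = moving_average(raw, 9)           # shake filtered out with the noise
lively = add_muscle_jitter(smooth, 0.03)  # shake animated back in
```

The point of the sketch is the round trip: the filter cannot tell the 2.5-rad-per-frame "muscle" component from sensor noise, so both are attenuated, and liveliness has to be restored as a separate layer on top.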
Prior to this, after animation, the character TDs needed to fine-tune the look of the character and fix problems—intersections, muscles that didn't look right, and so forth—by sculpting the character on a shot-by-shot basis. With the new system, that was rarely necessary. "We'd get something much more accurate and realistic straight out of the box," Saindon says. "We had to do little in the way of going back and fixing things."

Creatures

In addition to the characters, Weta animators performed approximately 10 creatures, a hellfire wasp, and thousands of insects. "Every single frame has something alive in it, whether it's a moving plant or bugs," Williams says.

Of the creatures, four fly and most have six legs. "Our first approach was typically to hide the middle legs, animate the animals as quadrupeds, and then bring the middle legs back in," Jones says. The animators might animate a horse-like creature by having the leg movement cascade, or change the gait by changing the offset. A cat-like creature might arch its back, lift its front legs, and use them as arms and hands.

Jake learns to ride a creature that looks like a flying horse, and for those shots, the crew used a gimbaled motion-control rig. "The good thing about motion capture was that it gave us the posing [Cameron] liked for the character on top of the creature, where the character should be looking, and the riding style," Jones explains. "But it was obvious that his legs weren't reacting to his chest popping up and down, so we couldn't use the motion capture completely."

Am I Blue?

Facial capture was perhaps the biggest challenge. The second biggest challenge for the technical team was keeping the aliens from looking like someone had poured blue paint on them. "It was a tricky problem," Letteri says. "They needed to have warmth under their skin, so we had to find the right shades of blue and blood color that would look good in firelight, blazing sun, overcast skies, and rain.
Blue skin quickly wants to look like plastic."

Seeing Virtual

To film the CG characters and creatures in their digital world, James Cameron used a virtual camera. "Imagine a nine-inch LCD screen with a steering wheel around it and tracking markers on it," says visual effects supervisor Stephen Rosenbaum. "A stage operator would load the CG puppets and environment and set up the lighting, and then Jim [Cameron] would pick up this virtual camera and move it around the environment. It drove [Autodesk] MotionBuilder's camera, so he could see the characters perform and set up camera angles as they delivered their performance."

With traditional motion capture, directors record the performances, edit them, and then derive the camera angles. With this system, Cameron could move around the performance stage and compose shots while seeing the actors' performances, including facial expressions on the CG characters. "He could dolly in, pan, boom, have any rig he wanted," Rosenbaum says. "He could have a huge crane, a wire rig, a steadicam, a dolly rig. It didn't matter. There was a three- or four-frame latency when we were doing full-body and facial performances, but it wasn't significant enough to affect his shooting." –Barbara Robertson

Animators at Weta persuaded director James Cameron to add a stripe pattern to suggest eyebrows on the Na'vi's faces to help give the computer-generated characters the same emotional feeling as the actors performing them.
