Computer Graphics World

December 2009

…in real time, of motion from actors onto the rendered, 10-foot-tall aliens. Lightstorm's virtual cinematography system, developed by Glen Derry, blended the characters into the virtual set using Autodesk's MotionBuilder for real-time rendering. "We could tie into the body capture and add our facial capture simultaneously," Rosenbaum says. "So [Cameron] could see the body performance and the facial gestures happen [on the CG characters] with the dialog, which was a nice feature."

The real-time facial performances weren't always practical; video projection onto the characters' faces was sufficient for all but the most subtle scenes. However, Letteri believes the technique is game changing. "It's one of those things," Letteri says. "You can see a motion-capture demo, and it's kind of interesting. But, on set, seeing actors and CG characters performing at the same time, well, that's really cool. It doesn't even demo well in a video. When you're there, it's a whole different feeling. You have to see it in person."

Rosenbaum estimates that more than 80 percent of the film is virtual. "We're delivering about 110 minutes of full CG," he points out. "I would guess that another 20 minutes have a combination of CG and live action. And, there are some other VFX facilities helping out. We sent some flying creatures, Na'vi, environments, and vehicles to ILM, Framestore, and a few other vendors, as well. But, the bulk of the CG work is being done at Weta." The list of other vendors that worked on previs and postvis for the film includes BUF, Halon, Hybride, Hydraulx, Lola, Pixel Liberation Front, Stan Winston Studio (now Legacy Effects), and The Third Floor.

Capturing Faces

Each actor captured on set wore a helmet with a lipstick camera attached to a boom arm, and green makeup dots on his or her face. The crew positioned the camera between the actor's nose and upper lip to capture the mouth movement and to see the eyes. To paint the dots, the makeup artists used a vacuform mask cut with small holes designed for each actor. "We'd put the mask on the face, draw a pen mark for the dots, pull it away, and paint on the green dots," Rosenbaum says. "The actors loved it. It took only five or 10 minutes and they were back on stage."

To plot the dot pattern, the facial motion-capture crew had first taken video of the actors doing a FACS (Facial Action Coding System) session: creating particular expressions, mouthing phonemes, doing prescribed facial gestures, and, if they had dialog, saying their lines. The FACS analysis helped the crew identify the major muscle groups in each face so they could position the dots, sometimes as many as 70, most effectively.

For the eyes, Weta developed software to track the pupils. "We had an LED array around the camera so we could illuminate the face and see the pupil clearly," Rosenbaum says. "And if we couldn't get good data, we'd track the pupils from the video. Traditional facial capture has always been a problem, but I think our eye movement is fantastic. It sells the characters."
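Rosenbaum doesn't describe how the pupil tracker worked, but the setup he outlines (an LED array lighting the eye evenly so the dark pupil stands out) lends itself to simple blob detection. Here is a minimal sketch of that idea, assuming Python with OpenCV 4 and a grayscale eye crop; the function name and threshold value are illustrative stand-ins, not Weta's code.

    import cv2

    def find_pupil(eye_gray):
        """Locate the pupil center in an evenly lit grayscale eye crop."""
        blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
        # Dark-pupil assumption: the pupil is the darkest blob in the crop.
        _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
        # OpenCV 4 returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None  # no clean data; fall back to tracking by hand
        largest = max(contours, key=cv2.contourArea)
        m = cv2.moments(largest)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

The None path mirrors the fallback Rosenbaum mentions: when the automatic solve fails, the crew tracked the pupils from the video instead.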
The eye movement was particularly important because although the avatars have eyebrows, the Na'vi don't, so their eyes needed to express much of their emotion. Yet the iris in a Na'vi eye is so big that the whites show only when the character is shocked. "We ended up adding a stripe pattern to suggest eyebrows," says Andy Jones, animation director. "We studied Zoe's [Saldana] expression, and found it was really tricky to get the same feeling on her CG character without eyebrows. To prove it to [Cameron], I roto'd Zoe's eyebrows out of her face, and he realized what we were up against. That's when we textured in a pattern to get the feeling of eyebrows back in there."

The motion captured from the actors on stage drove a facial system, developed by Jeff Unay, on their corresponding CG characters. To help with the lip sync, character designers had created the lips on the Na'vi to match those of the actors performing them. "We kept the characteristics of the actors and reshaped them into alien characters," Letteri says. "That gave us a good basis."

"Solving" software applied the data to Weta's facial system, and a facial-solving team adjusted the result. The motion data worked best for lip sync and mouth movement; animators spent more time tweaking brow and eye animation. "When the overall expression straight out of the facial solve was not what it should have been, the team would push the data around to get the right poses and extremes, yet still keep the live feeling of the data," Jones says. "As the team adjusted poses with sliders (they called it 'tuning' because they tuned the solve on various frames), the solving software learned which poses to use."

Unay based the underlying system on blendshapes. "We started with a dynamic muscle rig for the faces, but although it was good at preserving volume, it was coming up short in terms of level of detail," Jones says. "[Cameron] was very specific. If he saw tension in Zoe's mouth, he wanted exactly that [in Neytiri]. We had to art-direct and sculpt her face." So, Unay modeled blendshapes to mimic a volume-based system using FACS, which describes the muscle groups that control parts of the face. Thousands of shapes. The resulting rig for Neytiri, for example, has 1500 blendshapes. "The animators use sliders that control only about 50 shapes at a time," Jones says. "The system switches to banks of shapes depending on which muscle sliders they move. It all happens under the hood without the animators knowing. The combinations of shapes look amazing; the skin looks like it's pressing and pulling."

As the animators worked in Autodesk's Maya, they could bring up, on their screens, reference video shot in HD from multiple angles. "We could see the skin and get the timing from the helmet camera, but it distorted the face too much to see the overall mood," Jones says. "We needed cameras farther away."
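The article doesn't publish Weta's solver, but the solve-then-tune loop Jones describes behaves like a regularized least-squares fit: the tracked dots pull the weights of the active bank of shapes toward the capture, while artist-tuned poses act as a prior the solver learns to favor. A minimal sketch, assuming Python with NumPy; the array shapes, the lam prior weight, and the clamping to [0, 1] are illustrative assumptions.

    import numpy as np

    def solve_frame(markers, neutral, shape_deltas, tuned=None, lam=0.1):
        """Fit blendshape weights so the rig's markers match one captured frame.

        markers      -- (3m,) tracked marker positions for the frame
        neutral      -- (3m,) the same markers on the neutral face
        shape_deltas -- (3m, n) per-shape marker offsets for the active bank
                        (e.g., roughly 50 of Neytiri's 1500 shapes)
        tuned        -- (n,) optional artist-tuned weights to bias the fit toward
        """
        A = shape_deltas
        b = markers - neutral
        n = A.shape[1]
        # Minimize |A w - b|^2 + lam |w - tuned|^2, which gives the normal
        # equations (A^T A + lam I) w = A^T b + lam * tuned.
        lhs = A.T @ A + lam * np.eye(n)
        rhs = A.T @ b + (lam * tuned if tuned is not None else 0.0)
        w = np.linalg.solve(lhs, rhs)
        return np.clip(w, 0.0, 1.0)  # blendshape weights live in [0, 1]

Solving against a bank of about 50 shapes rather than all 1500 at once is what keeps a fit like this well posed: some 70 markers yield only about 210 equations, far too few to constrain 1500 weights.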
[Image caption: Weta modeled all the plants in the rain forest on Pandora, seen here virtually, using a rule-based growth system. Some plants have as many as one million polygons.]
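The article doesn't name the growth system, but rule-based plant growth is classic L-system territory: a string of drawing instructions is rewritten, generation after generation, until a few rules expand into dense branching geometry. A minimal textbook sketch in Python; the axiom and rules are the standard bracketed-plant example, not Weta's production rules.

    def grow(axiom, rules, generations):
        """Rewrite every symbol in parallel, once per generation."""
        s = axiom
        for _ in range(generations):
            s = "".join(rules.get(ch, ch) for ch in s)
        return s

    # Classic bracketed plant: F draws a segment, + and - turn,
    # [ and ] push and pop a branch point.
    RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
    instructions = grow("X", RULES, 5)

Interpreting each F in the final string as a cylinder or leaf card is how a handful of rewrite rules can grow into a plant carrying a million polygons.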