Computer Graphics World

July/August 2014

DIGITAL CHARACTERS

prosumer-grade Sony cameras that record movements and facial detail. If it became impossible to use the full performance-capture kit, those cameras made it possible to do "faux" cap. "The actors wore suits made of Velcro-friendly material," Lemmon says. "When we couldn't use the active markers, we'd stick on high-contrast dots and track those markers optically later."

Konoval describes the process from an actor's point of view: "It was a daily wrestle getting into those gray motion-capture suits," she says. "They had lots of Velcro and LED sensors with wires that attached to a battery pack, like a turtle shell on the back. Each day of filming, they'd fit a mask on our faces and paint 52 dots into holes on specific muscles. I don't even remember the number of scans they did of our facial structure that went on outside the filming to meticulously determine the muscles in our faces. But the Weta experts were the most grounded, decent, and fun people to work with."

The camera on the facial helmet gave the crew 60 frames-per-second, high-definition data, a solid base for the animators.

"The facial animators looked at the reference data as well," Winquist says. "It isn't just about making sure Caesar blinks at the same time as Andy. It's about making sure the animators interpret the performance faithfully. Not the geometry, the emotional performance. We look at the video frame by frame. In the blink of an eye, we can see an emotional state change from elation to fear to concern. We have to find the facial muscles to fire to make that emotional change happen on the ape, even if it deviates from the actor's performance."

And of course, when the apes needed to accomplish actions impossible even for skilled stunt actors, animators created those performances using keyframe animation.

MORE THAN DIGITAL MAKEUP

Animators used a familiar system of controls and workflow to translate the motion-capture data into the apes' final performances. Under the hood, however, was a more complicated network of shapes than before.

"For the first film, we had looked primarily at chimpanzee reference," says Dan Barrett, animation supervisor. "Not exclusively. We included elements of Andy's face in Caesar, for example. But in this film, we had more hero ape work and more dialog. So, we had to go back to the face puppets and work out how to deal with the dialog. We ended up with human variations of certain mouth shapes."

An obvious example is the funnel shape used to say the "sh" phoneme. A chimpanzee's lips can extend into a very long pucker forward, something that doesn't work well with dialog.

"The funnel was one with a human variation to have a fuller lip and more tightening," Barrett says. "We also looked at eyes and eyebrows. Chimps tend to have almost a mono-brow with a little shelf. Humans have more independent eyebrows and a creasing in the forehead between. So, we created brows closer to human eyebrows. We wanted to get Andy's furrowed brows, and the distinctive creases when Toby frowns."
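Barrett's "human variations of certain mouth shapes" can be pictured as extra blendshape targets layered onto the ape face puppet. Weta's actual facial system is not public, so the sketch below is only a generic illustration of linear blendshape mixing; the target names "funnel" and "funnel_human", the weights, and the toy data are all hypothetical, chosen just to show how a human-lip variant of a shape might be dialed in alongside the chimp version.

```python
# Minimal, generic blendshape-mixing sketch (not Weta's facial system).
# Targets are stored as per-vertex deltas from a neutral mesh; the final face
# is neutral + weighted sum of deltas. "funnel" (chimp pucker) and
# "funnel_human" (fuller lip, more tightening) are hypothetical target names.
import numpy as np

def blend_face(neutral, targets, weights):
    """neutral: (V, 3) vertex positions; targets: {name: (V, 3) deltas};
    weights: {name: float in [0, 1]}. Returns deformed (V, 3) positions."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * targets[name]
    return result

# Toy data: 4 vertices, two competing versions of the "sh" mouth shape.
neutral = np.zeros((4, 3))
targets = {
    "funnel":       np.array([[0, 0, 1.0]] * 4),  # long chimp-style pucker
    "funnel_human": np.array([[0, 0, 0.4]] * 4),  # shorter, tighter human lip
}

# Favor the human variant for a dialog shot, keep a touch of the chimp shape.
face = blend_face(neutral, targets, {"funnel": 0.2, "funnel_human": 0.8})
print(face[0])  # first vertex ends up at roughly [0, 0, 0.52]
```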
The translation from data to animation curves was automated to some extent. "The solver does a good job, but it can make a fairly complicated network of curves, so often we need to simplify," Barrett says. "And sometimes the motion editors needed to do full keyframe. It runs the gamut. Jack [Eteuati] Tema, our lead facial motion editor, prepared a lot of the dialog shots. He pretty much started from scratch for the mouth in particular. It was easier because then he could

Weta Digital's Tool Kit

Visual Effects Supervisor Dan Lemmon opens Weta Digital's tool kit. "We use Maya and Nuke, and have our own system for interfacing between Maya and RenderMan. We're also starting to use Manuka, a new renderer in its infancy that we're developing here. For fluid simulations, we use internal software we call Naiad, and have a host of tools for tissue simulation and muscle dynamics. We sent wind simulations through fur and cloth with Maya's nCloth. Our implementation of the rigid-body solver Bullet handled destruction and other effects simulation through Maya. At the core of our simulation work is Odin, our unified multiphysics simulation platform. We use another in-house program, Lumberjack, for plant creation and simulation. Mari, of course, for painting textures, and Mudbox for modeling. And, Barbershop and Wig are our own fur-grooming tools."

– Barbara Robertson

Facial animators used data captured from actors on location, who wore a helmet with an attached camera, and from reference footage to create subtle, ever-changing facial expressions.
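Barrett's point above, that the solver "can make a fairly complicated network of curves, so often we need to simplify," describes a standard motion-editing step: thinning dense per-frame keys down to a sparse set an animator can work with. The sketch below is not Weta's solver or pipeline; it is a generic Ramer-Douglas-Peucker-style key reduction on a single animation channel, with made-up data and a hypothetical tolerance, included only to illustrate the idea.

```python
# Generic keyframe-reduction sketch (Ramer-Douglas-Peucker on one channel).
# Dense solver output (a key every frame) is thinned to the fewest keys whose
# linear interpolation stays within `tol` of the original curve.

def reduce_keys(keys, tol=0.01):
    """keys: list of (frame, value) pairs sorted by frame. Returns a subset
    whose linear interpolation deviates from the input by at most tol."""
    if len(keys) <= 2:
        return list(keys)
    (f0, v0), (f1, v1) = keys[0], keys[-1]
    # Find the key farthest (in value) from the straight line between the ends.
    worst_i, worst_err = 0, 0.0
    for i in range(1, len(keys) - 1):
        f, v = keys[i]
        t = (f - f0) / (f1 - f0)
        err = abs(v - (v0 + t * (v1 - v0)))
        if err > worst_err:
            worst_i, worst_err = i, err
    if worst_err <= tol:
        return [keys[0], keys[-1]]                 # segment is flat enough
    left = reduce_keys(keys[:worst_i + 1], tol)    # keep the worst key, recurse
    right = reduce_keys(keys[worst_i:], tol)
    return left[:-1] + right

# Hypothetical 60 fps output for one jaw-open channel: an ease-out over 1 second.
dense = [(f, 0.5 * (1 - (1 - f / 60.0) ** 2)) for f in range(61)]
sparse = reduce_keys(dense, tol=0.005)
print(len(dense), "->", len(sparse), "keys")
```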
