Computer Graphics World

March 2011


Performance Capture

…built your 3D models, you have the actors in a box," Wells says. "You can have them do their best takes over and over and over without complaining, and you can do photography to your heart's content. We pasted the facial performances onto the low-res models so we could see the actors. It was a bit squirrely to have what looked like a mask stuck on a 3D model, but it was very useful for timing."

Wells found the process of filmmaking using this method exhilarating and a bit dangerous. "You can wander around and look at the actors from any angle you like," he says. "So it's a rabbit hole that you can quickly go down. I was shooting and reshooting and reshooting the same performance again and again. It takes a certain discipline to make decisions and move the story from scene to scene."

With the shots edited into sequences, the camera moves in place, and the overall rhythm of the movie set, Wells began working with the animation team in August 2009. Huck Wirtz supervised the crew of approximately 30 animators who refined the captured performances, working with the motion-captured data applied to higher-resolution models.

Here, Wells departed from Zemeckis's approach, too. "Bob [Zemeckis] and I have the same feeling about staying true to the actor's performance," Wells says. "But he was trying to get photoreal movement. I chose to take it to a degree of caricature, which was in tune with the character design. I'd take an eyebrow lift and push it a bit farther. The smiles got a bit bigger. It was exactly what the actor did, just pushed a bit."

Wells believes that most people won't notice the caricature, which isn't cartoony, but gave the characters more vitality. "It makes them feel a bit more alive than the straight transfer of motion data," he says. "In terms of actual choices, the way the character behaves emotionally came entirely from the actors. That said, though, being able to translate that emotion through the medium took skill."

[Photo caption: IMD could capture data from faces, hands, and bodies from as many as 13 actors on set at one time. A crew of approximately 30 animators refined the performances once they were applied to CG models.]

Refining the Performances

Wirtz organized the work by sequences that he scheduled on a nine- to 10-week basis, and then split the work among himself and two sequence leads. "We all worked equally on everything, but if it came down to someone having to get pummeled, it was usually me. I was glad to take it, though."

In addition, five lead animators took responsibility for particular characters, usually more than one character. And Craig Halperin supervised the crowd animation. "We created the motion cycles for him to use in Massive," Wirtz says. Otherwise, the crew used Autodesk's Maya for modeling, rigging, and animation, and Mudbox for textures.

The animators began translating the actors' emotional performances by "cleaning up" data from the body capture first. Then, they moved on to the more difficult tasks of refining the motion-captured data for the hands and faces.

"We had a great system worked out," Wirtz says. "We'd start by showing Simon a whole scene with the data on the high-res models, but with no tweaks. We called that zero percent." Next, they'd adjust the eye directions, adding blinks and altering eye motion, if needed, so all the characters were looking at the right spots. And, they worked on the hands. "We made sure the characters grabbed what they needed to grab," Wirtz says. They showed the result to Wells and called that stage "33 percent."

Once Wells approved the 33 percent stage, the animators moved on to the mouths, making sure the lips synched to the dialog and that the facial expressions were appropriate. "We wanted to be sure the eyes caught the tone Simon wanted," Wirtz says. That resulted in the 66 percent stage. Between 66 and 99 percent, the animators worked on the fine details. "We did a lot of hand tuning on everything," Wirtz says. "A lot on the faces, on anything they hold or grab, and any contacts—feet on the ground, things like that. Sometimes the data comes through perfectly and you're amazed right out of the box, but you always have to do the eyes."

The facial animation system used blendshapes based on FACS expressions. "It looked at where the markers were and tried to simulate the expressions," Wirtz says. "We also based the system on studying muscle motion. The system kept evolving; we kept refining it. It takes a lot of heavy math to spit out a smile. Simon was happy with the performances by the actors on stage, so we worked hard to keep that emotion. We were definitely translating a human performance onto an animated character, and what makes it come through is that we tried to get back to the human performance."

Keeping the characters out of the Uncanny Valley, where they look like creepy humans rather than believable characters, depends primarily on the eyes, Wirtz believes. "We paid close attention to what the eyes are doing," he explains. "We tried to follow every tick carefully. It's not the eyeball itself, it's also the flesh around the eyes. It has to be there, working correctly, all the motivational ticks and quirks in the eyebrows. The other side of it is the rendering."

Creating the Look

Rendering, along with texture painting, look development, lighting, effects animation, compositing, and stereo, fell under visual effects supervisor Kevin Baillie's purview. Artists textured models using Adobe's Photoshop, Maxon's BodyPaint 3D, and Autodesk's Mudbox for displacements. Pixar's RenderMan pro-
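The eye-direction pass Wirtz describes (making sure every character is "looking at the right spots") is, at its core, a look-at computation: rotate the eyeball so its forward axis points at a target. A minimal sketch of that idea, assuming a hypothetical +Z-forward, Y-up rig convention (not IMD's actual tools):

```python
import math

def eye_aim_angles(eye_pos, target_pos):
    """Return (yaw, pitch) in degrees that rotate a +Z-forward eyeball
    at eye_pos to look at target_pos, in a Y-up coordinate system."""
    dx = target_pos[0] - eye_pos[0]
    dy = target_pos[1] - eye_pos[1]
    dz = target_pos[2] - eye_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))                      # rotation about Y
    pitch = math.degrees(math.atan2(-dy, math.hypot(dx, dz)))   # rotation about X
    return yaw, pitch
```

In practice an animator-facing rig layers offsets on top of such a constraint, since (as the article stresses) the surrounding flesh and eyebrow quirks matter as much as the eyeball direction itself.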
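Wirtz's description of the facial system ("it looked at where the markers were and tried to simulate the expressions… it takes a lot of heavy math to spit out a smile") matches a common family of techniques: solving for FACS blendshape weights that best reproduce the captured marker positions. A sketch of one such solve (projected-gradient least squares with weights clamped to [0, 1]; the names and method are illustrative, not IMD's proprietary solver):

```python
import numpy as np

def solve_blendshape_weights(neutral, shapes, markers, iters=200, lr=0.1):
    """Fit per-expression weights so that blending the FACS shapes on top
    of the neutral face approximates the captured marker positions."""
    # Each row: offsets of one FACS shape's markers from the neutral face.
    deltas = np.stack([(s - neutral).ravel() for s in shapes])   # (S, 3M)
    target = (markers - neutral).ravel()                         # (3M,)
    lipschitz = max(1.0, np.linalg.norm(deltas) ** 2)            # safe step bound
    w = np.zeros(len(shapes))
    for _ in range(iters):
        residual = deltas.T @ w - target        # current blend vs. capture
        grad = deltas @ residual                # gradient of 0.5 * ||residual||^2
        w = np.clip(w - lr * grad / lipschitz, 0.0, 1.0)  # keep weights valid
    return w
```

The clamp is what keeps the result expressible on the rig, and it is also a natural place to inject Wells's caricature: scaling a solved weight slightly past the captured value is exactly "take an eyebrow lift and push it a bit farther."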
