Computer Graphics World

Dec/Jan 2011-12

Issue link: https://digital.copcomm.com/i/48986


Animation

...to do the legs. They captured Savion Glover [dancer, choreographer] tapping for the krill, as well, which was extremely beneficial for me and the animators. The mo editors [motion editors] could take what he did, apply that to a low-resolution krill model, and we could study the feet and get movements that would have been difficult if we were keyframing 20 legs and arms. We also had early experiments with a puppeteer moving a krill body on the mocap floor, but we couldn't get the right scale of motion when we put the krill into the water.

Were there any other unusual motion-capture experiments?

There were always experiments. For the elephant seal, we had four people performing together, but trying to wrangle that was too much effort. I could get a talented animator to do something really beautiful in not too much time.

What was the motion-capture process?

We used a Giant Studios system. We had a bunch of talented people here who had worked on the original, then worked on Avatar, and came back to do Happy Feet Two. George [Miller] would cut the audio first, and they would broadcast that onto the floor so everyone could hear it. The performers pantomimed to the dialog. They might not hit the accent of a word exactly, so it was up to the mo editors to make it feel like the voices were coming out of the bodies.

Each day we would recalibrate the dancers. We'd measure their legs and arms precisely so we could translate them to the character maps for each species of penguin. We could capture up to 10 at a time, and could see the penguins walking around in real time as the dancers performed. Their feet weren't locked to an ultra-resolution set, but we could see where they were.

Did you need to change the data much to have the dancers move like penguins?

The dancers all went to 'penguin school' and learned how to dance like Emperor or Adelie penguins, but it took a fair amount of labor to get [the characters] to move and act like penguins.
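The retargeting step described here, measuring each performer and translating that onto a character map per species, amounts in its simplest form to copying joint rotations between matching skeletons and scaling root translation by the ratio of limb lengths. A toy sketch of that idea (the measurements, joint names, and data layout are hypothetical, not the studio's actual pipeline):

```python
# Toy mocap retarget: rotations transfer directly between matching
# skeletons; root translation is scaled by the leg-length ratio so
# the short-legged penguin's feet still plant where the dancer's did.

PERFORMER_LEG = 0.90   # meters, hypothetical measured value
PENGUIN_LEG = 0.18     # meters, hypothetical character-map value

def retarget_frame(frame):
    """frame = {'root_pos': (x, y, z), 'rotations': {joint: degrees}}."""
    scale = PENGUIN_LEG / PERFORMER_LEG
    x, y, z = frame["root_pos"]
    return {
        # Joint rotations copy over unchanged.
        "rotations": dict(frame["rotations"]),
        # Translation shrinks with the character's proportions.
        "root_pos": (x * scale, y * scale, z * scale),
    }

frame = {"root_pos": (1.0, 0.9, 0.0), "rotations": {"hip": 12.0}}
print(retarget_frame(frame)["root_pos"])  # scaled by 0.18/0.90
```

Real systems also solve for foot contacts and limb proportions per bone, which is part of the "fair amount of labor" the interview mentions.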
Penguins are like little flour sacks, like little fluffy pillows. When you have a human walking like a penguin, it's one thing if they pantomime it for you by keeping their legs together and waddling. It's another to have it look real. Our motion editors had to manipulate the data to make it work the way George Miller wanted.

Once George had directed the motion capture, he would make selects. The motion editors worked in [Autodesk's] MotionBuilder and [Giant Studios'] Nuance. They would pick the matching human performance and re-map the data onto the penguin bodies, and then put the penguins on the ultra-resolution set, the undulating ice field, and spread their toes. I would review the work in progress and make critiques. When I was happy, they converted the files into [Autodesk] Maya files and sent those rigs to the keyframe animation team.

At top: A team of 75 animators from 14 countries worked at Dr. D Studios in Sydney, Australia, to perfect and amplify performances captured from dancers for the penguins, and to add facial expressions.

At bottom: A separate team of animators used motion cycles and a rules-based system to animate crowds of penguins and schools of fish.

Did you develop any particular tools or rigs for the keyframe animators?

We had a similar skeleton for each species of penguin, and we had offset rigs. The offset rig was a parental rig on top of a child rig. The child rig received the keyframe data from MotionBuilder. With the parental rig, the animators could add rotation and translation to the big volumes: the hips, head, shoulders, and chest. They were all IK. Often, once George Miller saw [the motion-captured animation], he wanted to go broader than what he saw on the floor. So the animators might put a translation on the chest, or change the eye direction by swinging the head, and so forth. The animators could supersede the data and movement with our offset rig.

When did they do keyframe animation?
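The offset rig described here, a parent node layered on top of a child node that carries the mocap keys, can be sketched as transform composition: zero the parent and you get the raw capture back; key the parent and the animator's adjustment rides on top. A minimal sketch with homogeneous matrices (the joint values are illustrative, not the production rig):

```python
import numpy as np

def rotation_z(degrees):
    """4x4 homogeneous rotation about the Z axis."""
    r = np.radians(degrees)
    c, s = np.cos(r), np.sin(r)
    m = np.eye(4)
    m[0, 0], m[0, 1] = c, -s
    m[1, 0], m[1, 1] = s, c
    return m

def translation(x, y, z):
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Child node: chest pose baked from the mocap clip (hypothetical values).
child_chest = translation(0.0, 1.2, 0.0) @ rotation_z(5.0)

# Parent offset: the animator "goes broader" by layering an extra
# push and lean on top, without touching the captured keys.
animator_offset = translation(0.0, 0.1, 0.05) @ rotation_z(10.0)

# Final pose = offset composed on top of the captured motion.
final_chest = animator_offset @ child_chest

# Leaving the offset at identity recovers the raw capture exactly.
assert np.allclose(np.eye(4) @ child_chest, child_chest)
```

Because the offset is a separate node, it can be keyed, scaled back, or deleted independently, which matches the interview's point that animators could supersede the data without destroying it.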
We keyframed the animation when the characters swam or when they did dangerous actions. We also keyframed the whales, leopard seals, and fish. The fish are basically food in this film. And, there's no facial capture.

Every penguin ended up being a hybrid. George was very happy with what Animal Logic had done. But now that he had some experience with animation, he wanted to spend more time on the movement of the eyes, the dilation of the eyes, the "eye dart-ness" as he calls it. He was very particular about beak sync, lip sync, and phonemes, about the movement of the tongue and lips, and that was cool with me. And, we spent a lot of time on the non-verbal, reaction shots.

It was challenging to get the penguins to look good from multiple angles and still connect with the audience and characters on the screen. You have to see their emotion. They have humanoid faces, but their eyes aren't binocular; they're set 30 degrees back on an angle.

Did you use the models from the first film?

We based the characters on where they left off on the first film, but we were using Maya, and Animal Logic had used Softimage XSI. So, most of the characters were redone and rebuilt; we upgraded the models. And, we started over and redid all the rigs. That wasn't a big factor for me. If we hired someone who
