Computer Graphics World

September/October 2013

Motion Capture

[Image caption: FACEWARE OFFERS a head-mounted camera rig and software for real-time facial motion capture.]

"…from the eyes, and we think the Kinect shouldn't be used that close. You need to have the camera connected to a helmet because global head movement introduces a lot of noise in tracking; the worst thing for a tracking algorithm is head rotation. Most of the time, if you turn your head more than 10 or 15 degrees from the camera, the tracking fails. So, a head-mounted system is more accurate."

It is not as accurate, however, as a marker-based system. "Marker-based systems tend to be accurate, but you need a lot of markers to capture facial movement, and some regions, like eyes and inner lips, can barely be captured," Breton says. "Markerless systems tend to be more successful in capturing whole expressions."

So, Breton imagines that some customers will use Cara's marker-based system in combination with the Dynamixyz system. "They can send us the video feed from Cara, and we can work on that," he says. "We feel strongly this is the winning solution. They get the accuracy from the markers and can work the way they are familiar with using a marker-based approach, and also have the extra information we can add by capturing whole expressions and areas like the inner lips with our markerless system."

In addition to providing facial capture for the entertainment industry, the company's computer-vision algorithms help doctors and medical researchers. "We have an R&D project with a French hospital to help children with cerebral palsy learn how to re-educate the movement of their faces," Breton says. "The kids are prisoners of their own bodies. They don't want to see themselves in the mirror because it's so [emotionally] painful. But using a visual corrector, they see their own face moving the way it should move."

The company offers evaluation and production licenses, as well as annual licenses, and is considering project fees rather than licenses. The evaluation license provides a functional system that is free to use. The company also is developing a network of value-added service providers that work with motion-capture studios. "The message we want to send is that we're strongly R&D oriented," Breton says. "If a company selects us and works with us, they have the assurance that they'll always have the latest cutting-edge technology in computer vision."

Game developer EA has a state-of-the-art motion-capture facility in Vancouver, but CG Supervisor Leon Brazil decided not to use that facility to create the 3D avatars for the generals in the new Command and Conquer Generals game scheduled for release later this year. "These are 3D generals, resident in the UI, who give you prompts and feedback," Brazil says. "We wanted to give them personality and character. So, I started looking at solutions on the market. For Medal of Honor, we had 12 characters with full facial performances. But, that was a massive cost and time sink."

EA has an internal tool called Face Pose that can convert bone-driven data to face values, according to Brazil. "The data is shareable, so one performer can drive different faces. We use it a lot for sports games. But, that stems from a marker-based system that takes time to set up."

While working on Medal of Honor, Brazil saw a live demo of Faceshift in 2011. "They didn't have the eye-tracking system yet, but I thought, 'There's the future,'" he says.
When he finished Medal of Honor in November 2012, he considered pay-by-the-second facial-capture solutions, but by then Faceshift had introduced a product with eye tracking. "We had lots of generals, and for each we needed 10 minutes of animation," Brazil says. "If we paid for each second of data used, my budget would go through the roof. Faceshift had a product I could license per year at a phenomenal price point."

Brazil tested a trial version of the product at his desk and became excited about the possibilities. "We had audio sessions booked, and I thought it would be cool if I put a camera in front of the performers and captured their facial expressions while they performed," Brazil says. "I did a lot of tests at home, and then we bought two licenses."

During the audio sessions, he put a camera in the booth and connected it with a 20-foot cable to his PC. "We did a 15-minute calibration," he says. "The actor could see the point-cloud capture of his face morph into a geometric version of his head. I gave them that little taste, then I turned it off so they weren't distracted. In three days, I had 10 minutes of production-ready facial animation targeted onto a character. To be able to do that with this software and a $200 PrimeSense camera is amazing. We are only doing head and shoulders, so it was perfect for this project."

Even though the final characters are small, the capture data drives 52 bones in their faces, and they have a realistic look. "I could open the character in Maya, but there was pretty much no cleanup," Brazil says. "We used Maya's nCloth and nHair, and pre-rendered them in V-Ray using subsurface scattering and all the goodies. Their mouths look fleshy, and the eyelids track correctly with the eyes. You get an obvious read of an emotional character when they deliver their lines."

– Barbara Robertson
