Computer Graphics World

September/October 2013

Motion Capture

faceshift
Real-time 3D markerless tracking and retargeting

Company Vice President Doug Griffin arrived at Starbucks to demonstrate Faceshift carrying his laptop and a 3D sensor in a sock. Griffin joined the company, also named Faceshift, after leading motion-capture teams at Industrial Light & Magic, Electronic Arts, and ImageMovers Digital; before that, he was vice president of product and strategy at Vicon. He is eager to demonstrate the reason he joined Faceshift.

"We want to democratize facial tracking," Griffin says. "With our system, you can have live results with a five-minute setup and game-ready results with a couple minutes of post-processing. It's so inexpensive you could outfit an entire team with Faceshift. You could capture faces in a sound booth."

Griffin aims the sensor at himself, points to a 3D model of a face on-screen, and explains how the system works. "We use the PrimeSense 3D sensor," he says. "We start with a standard character head that we train. I hold a pose, look back and forth while the sensor scans, and then you can watch while Faceshift deforms the character to match the scan data. We suggest doing 18 training poses. That's the five-minute process. You only have to do it once. And you don't need makeup or markers."

Brian Amberg, co-founder and CTO, explains: "We decompose the training data into 48 asymmetric expressions. Then, as an actor performs and the system runs live, it searches for the combination of these expressions that best matches the current expression of the actor. It does this frame by frame in real time. For even better results, our post-processing algorithm can optimize across many frames. You can also touch up detections, and that will retrain the algorithm and improve the results."

The system is, in effect, puppeteering the blendshape model. If the model is highly characterized rather than a digital double, the blendshapes' 0-to-1 values might, for example, raise the character's eyebrow a lot when the actor raises an eyebrow a little. The sketches at the end of this profile illustrate the general shape of the per-frame fit, the multi-frame cleanup, and this retargeting step.

A team of PhDs from Swiss universities created the algorithms that do the real-time tracking and retargeting. "We had started developing our own face-tracking system and hardware before the Kinect came out," Amberg says. "Suddenly there was a consumer scanner with 3D data. So, we changed course to focus on this sensor. The consumer-grade cameras meant we could make face capture available to everyone. That's what I find exciting. Independent artists can afford to make a film. And animators can have facial capture on their desks and use it like a mirror. We think many people will use it."

People can buy and download Faceshift Studio and Faceshift Freelance from the website for $800 to $1,500, with various types of licensing and academic discounts available. Amberg and Griffin note that in addition to making it possible for more artists to use facial capture for animated characters, the low price point and real-time capture are enabling other markets as well. "Our SDK can do tracking without the setup phase," Amberg says. "The expressions aren't quite as accurate, but it's exciting for consumer applications. You could put your…"
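Faceshift has not published its solver, but the per-frame search Amberg describes, finding the combination of trained expression shapes that best matches the incoming scan, has the general shape of a constrained least-squares fit. The sketch below is illustrative only; the array layout, the SciPy solver, and the function name fit_expression_weights are assumptions, not Faceshift internals.

    import numpy as np
    from scipy.optimize import lsq_linear

    def fit_expression_weights(scan, neutral, expressions):
        # scan        : (3 * n_vertices,) flattened scan positions for one frame
        # neutral     : (3 * n_vertices,) flattened neutral-pose positions
        # expressions : (3 * n_vertices, n_shapes) columns of per-expression
        #               vertex offsets from the neutral pose (48 in Faceshift's case)
        # Fit scan - neutral as a weighted sum of expression offsets, with each
        # weight constrained to the usual blendshape range [0, 1].
        result = lsq_linear(expressions, scan - neutral, bounds=(0.0, 1.0))
        return result.x  # one weight per expression shape

    # The fitted face for the frame is then: neutral + expressions @ weights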
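The post-processing pass that "optimizes across many frames" is likewise unpublished; one generic stand-in is to smooth the recovered weight tracks over time, trading fidelity to each frame against frame-to-frame change. The lam parameter and function name here are hypothetical.

    import numpy as np

    def smooth_weight_tracks(weights, lam=4.0):
        # weights : (n_frames, n_shapes) per-frame weights from the live fit
        # lam     : smoothing strength (hypothetical parameter)
        # Minimizes ||x - weights||^2 + lam * ||D x||^2, where D takes
        # first differences between consecutive frames.
        n = weights.shape[0]
        D = (np.eye(n) - np.eye(n, k=1))[:-1]   # (n-1, n) difference operator
        A = np.eye(n) + lam * D.T @ D           # normal-equations matrix
        return np.linalg.solve(A, weights)      # solves every shape column at once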
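Finally, the puppeteering step, in which a small raise of the actor's eyebrow can drive a big raise on a characterized model, can be pictured as a per-shape response curve applied to the tracked weights. The gains and exponents below are hypothetical; the article does not say how Faceshift shapes this mapping.

    import numpy as np

    def retarget_weights(actor_weights, gains, exponents):
        # actor_weights : per-shape weights in [0, 1] from the tracker
        # gains         : per-shape amplification (e.g., 3.0 for a brow shape)
        # exponents     : per-shape curve; values < 1 respond strongly
        #                 to small actor motions
        out = gains * np.power(actor_weights, exponents)
        return np.clip(out, 0.0, 1.0)   # keep weights in the valid blendshape range

    # A slight actor brow raise of 0.2 with gain 3.0 and exponent 0.7
    # drives the character's brow shape to roughly 0.97.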
" People can buy and download Faceshift Studio and Faceshift Freelance from the website for $800 to $1,500, with various types of licensing and academic discounts available. Amberg and Griffin note that in addition to making it possible for more artists to use facial capture for animated characters, the low price point and real-time capture is enabling other markets, as well. "Our SDK can do tracking without the setup face, Amberg says. "The expressions aren't quite as accurate, " but it's exciting for consumer applications. You could put your we track and analyze the tracking data to determine which muscles are firing and how we apply it to our characters is so specific to the way our animation system works that even now we probably couldn't find anything. " Between 60 and 70 people work in Weta Digital's motion-capture and editing department and run the studio's two motion-capture stages. Both stages are in action now for two films: The Hobbit: The Desolation of Smaug and Apes. Everything the studio uses for facial capture and motion capture is custom. "On Avatar, Glenn [Derry] provided the facial helmets, and the cameras came from Giant Studios, says Motion" capture Supervisor Dejan Momcilovic. "Now we have a system that we built with Standard Deviation in Santa Monica [California] that works with Giant's software and our own. " For the head-mounted rig, the team combined a Japanese camera that has a global shutter, soft auto-exposure, and auto-iris with layers of other technology, including recording and encoding boards. "There's a certain comfort you can have when you design your own software and hardware, Momcilovic " says. "It's expensive, but then you have the freedom to do whatever you want with it. We have entire sets of tools, a tracker, solver, and retargeter in various versions, and editing tools for facial and body capture. " At Weta Digital, the team asks actors to perform facial calisthenics during a FACS session; the expressions become reference for modelers who generate a series of individual shapes. "We handmodel key shapes and use our system to procedurally generate as many
