Computer Graphics World

September/October 2013

facial expressions on an avatar in a virtual world like Second Life."

"Or," Griffin says, "imagine you just woke up. Your hair's a mess. You could Skype with an avatar and look great."

The facial-capture data could also drive something other than another face. The company has developed plug-ins for MotionBuilder, Maya, Unity, and other software platforms. "We give you 0 to 1 curves," Griffin says. "You can read those into a 3D application and trigger whatever you want." Thus, it could trigger digital music and other sounds. Or images. One artist, for example, has created a digital painting that viewers can interact with through their facial expressions. The traveling exhibition is currently in Moscow. "When the viewers smile, the sun rises," Amberg says.

Launched in November 2012, the product is already a success, according to Griffin. "We have customers in 21 countries, and a number of big studios have bought licenses," he says. "I'm surprised by how many people I ring up and they say, 'We've already got it.'"

And that's just the beginning. Researcher Hao Li at USC, for example, has developed algorithms for capturing faces and hair using a Kinect device. "The biggest thing I've done is solve the problem of correspondence between two scans using a method called a graph-based, non-rigid registration algorithm," he says. "The idea is that if you're given two meshes, two surfaces, the computer computes the correspondences automatically. If you want to do facial tracking, you can throw 2,000 frames into the solver and it will find the correspondence."
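Li's correspondence problem can be pictured with a toy baseline. His actual method is a graph-based, non-rigid registration algorithm; the sketch below, by contrast, does only naive nearest-neighbor matching between two scans with NumPy and SciPy, the simple baseline such algorithms improve on when surfaces deform.

```python
# Toy stand-in for scan-to-scan correspondence. This is NOT Li's
# algorithm, only the naive nearest-neighbor baseline it improves on.
import numpy as np
from scipy.spatial import cKDTree

def correspond(source_pts, target_pts):
    """For each source vertex, return the index of (and distance to)
    its nearest target vertex. Inputs are (N, 3) and (M, 3) arrays."""
    tree = cKDTree(target_pts)            # spatial index over the target scan
    dists, idx = tree.query(source_pts)   # one nearest neighbor per vertex
    return idx, dists

# Toy usage: a scan and a slightly deformed copy of it.
rng = np.random.default_rng(0)
scan_a = rng.random((2000, 3))
scan_b = scan_a + rng.normal(scale=0.005, size=scan_a.shape)
idx, dists = correspond(scan_a, scan_b)
print("mean match distance:", dists.mean())
```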
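Griffin's earlier point about 0-to-1 curves suggests how little machinery an application needs in order to be driven by facial capture: each expression is just a normalized per-frame weight. The hypothetical sketch below (all names invented, not faceshift's API) fires an arbitrary trigger when a channel crosses a threshold, roughly the logic behind the smile-driven sunrise Amberg describes.

```python
# Hypothetical sketch of what "0 to 1 curves" could drive. Names and
# data are invented for illustration; they are not a shipping API.
def fire_triggers(curves, frame, handlers, threshold=0.7):
    """curves: {expression name: list of per-frame weights in [0, 1]}.
    Calls handlers[name]() when that curve first crosses the threshold."""
    for name, weights in curves.items():
        prev = weights[frame - 1] if frame > 0 else 0.0
        if prev < threshold <= weights[frame] and name in handlers:
            handlers[name]()  # rising edge: fire once, not every frame

# Toy usage: a "smile" channel ramps up over five frames.
curves = {"smile": [0.10, 0.30, 0.55, 0.80, 0.90]}
handlers = {"smile": lambda: print("sun rises")}
for f in range(5):
    fire_triggers(curves, f, handlers)
# prints "sun rises" once, at frame 3
```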
Li plans to show his work at SIGGRAPH Asia. Once 3D sensors are embedded in mass-market mobile devices, as is likely to happen, the door for facial capture bursts wide open.

Barbara Robertson is an award-winning writer and a contributing editor for CGW. She can be reached at BarbaraRR@comcast.net.

[Image: Atomic Fiction created this stylized digital version of Spaghetti Western actor Lee Van Cleef using systems from Faceware.]

"combinations and in-betweens as we can," Lemmon says. "For any character, we will have as many as 30,000 shapes on the face. That shape-based model catches data captured from the actor's performance during filming."

Many facial-capture systems use an underlying model: you tell the system what the model can do, then track a performance and fit expressions from the performance onto that model. But Weta has taken a different approach. "We use an underlying model, too," Momcilovic says, "but our tracker and solver have different models and work in different ways. To give the animators the option to change how a frame is going to look, I had to break solving and retargeting the face away from tracking. The most important thing is to give control to the animators so they can go through a sequence, change a frame, and it's all real time. They have instant feedback."

For the facial capture, the motion-capture team typically paints marker patterns on the actors' faces using between 200 and 400 markers. "Markers are convenient because our solver is marker-based, but the tracker doesn't need markers," Momcilovic points out. "We can train it to track a markerless face." (A rough sketch of this kind of marker-based solve appears at the end of this piece.)

However, the markers have advantages. "Some facial-capture techniques require a lot of data processing afterward," Lemmon says. "If an actor pulls a particular expression, because each frame becomes its own new shape, you might have to correct that shape later. It's a brute-force approach. We'd rather set up a system that defines what a character's face actually does and dial in the muscle movements as the character goes through the performance."

Other facial-capture systems do a per-frame scan of the face to record the movement of the geometry. "That has limitations, as well," Lemmon says. "We don't want to put them in a chair or a booth. If we can have a system that pulls all the detail off the face, great. If we can't, we'll use video reference and keyframe the performance, and that works well, too. For us, it's all about getting as much information as we can without getting in the way of the performance."

For Lemmon, the magic happens after the capture. "It's when you get back in the studio and take all the information you've acquired, all the data, all the pictures, and start working on shots," he says. "And, suddenly, one day in dailies you have a render with facial animation, lighting, skin, fur, and you see a living, breathing creature doing what the actor on set did and you think, 'Oh my God, we brought something to life with all the spirit of Andy Serkis.' That's the payoff." – Barbara Robertson
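As promised above, here is a rough, hypothetical sketch of the kind of marker-based solve Momcilovic describes: given markers observed on set and a library of face shapes, find per-shape weights in [0, 1] whose blend best reproduces the markers. The array shapes and the bounded least-squares formulation are generic blendshape fitting for illustration, not Weta's production solver.

```python
# Generic blendshape fitting as a bounded least-squares problem.
import numpy as np
from scipy.optimize import lsq_linear

def solve_weights(neutral, shape_deltas, observed):
    """neutral: (M, 3) marker positions on the neutral face.
    shape_deltas: (K, M, 3) per-shape marker offsets from neutral.
    observed: (M, 3) captured marker positions for one frame."""
    K = shape_deltas.shape[0]
    A = shape_deltas.reshape(K, -1).T              # (3M, K) basis matrix
    b = (observed - neutral).ravel()               # (3M,) offsets to explain
    result = lsq_linear(A, b, bounds=(0.0, 1.0))   # weights stay in [0, 1]
    return result.x

# Toy example with 3 shapes and 4 markers.
rng = np.random.default_rng(1)
neutral = rng.random((4, 3))
deltas = rng.normal(size=(3, 4, 3))
truth = np.array([0.2, 0.0, 0.9])
observed = neutral + np.tensordot(truth, deltas, axes=1)
print(solve_weights(neutral, deltas, observed))    # recovers ~[0.2, 0.0, 0.9]
```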
