Post Magazine

May/June 2019


BITS & PIECES

CUBIC MOTION GETS ANIMATED
RELEASES NEW FACIAL ANIMATION SOLUTION 'PERSONA'

UK-based Cubic Motion (www.cubicmotion.com) was founded in 2009 by an international team of award-winning PhD computer vision scientists who have been at the forefront of image analysis for more than 20 years. The company has grown to become a leading provider of automated, performance-driven facial animation. At the recent 2019 NAB Show in Las Vegas, Cubic Motion announced its new live digital facial performance solution, Persona. Here, we speak with Tony Lacey, product manager for Persona, and Steve Caulkin, CEO.

Can you tell us a little bit about Cubic Motion?
Steve Caulkin: "We've been going for 10 years, and we have a previous history in the animation industry. Before that, we come broadly from a computer vision background. We have built technology that allows [users] to track and animate faces, and we initially deployed that as a service to produce animation for video games, TV shows and films, mainly video games. About three years ago, we began to develop that into a realtime capability. So we did a couple of events, one of the Real-Time Live! sessions at SIGGRAPH 2016, then SIGGRAPH 2017 and GDC 2018, basically demonstrating realtime animation of a digital character. The culmination of that is to now produce a product, Persona, which allows people to get their hands on that technology and that capability."

Who is your target audience for Persona?
Tony Lacey: "There are probably several audiences for the product. The key thing about Persona is that it allows realtime animation of characters driven by an actor's performance: the animation is delivered at the rate the actor performs. That means we can target, at one end of the time spectrum, the live performance, delivering an immediate performance to an audience. As the actor performs, so does the character. For instance, people with digital assets tied up in digital worlds such as games or film, or even television, where that asset typically only appears in an offline world, can bring that asset into live, realtime situations. So think of game characters coming to convention events, actually interacting with their audience members, big screens on the stage, actors performing those characters, able to have realtime feedback in that situation.

"There are also potential social media applications, where people want a rapid turnaround of their character interacting with audiences and fans. Maybe online videos quickly responding to incoming tweets, where the capture is done very quickly, packaged up and sent out in a matter of a few minutes. Again, the actor's performance is delivered straight away. These are near-realtime events.

"We're also thinking about the traditional capture situation, where you're on set capturing a performance, but instead of watching video of your actor performing, you can also pre-visualize the character performing. So instead of leaving your capture session with purely video, you leave that session with initial animation as well. You're able to make decisions about the quality of your capture and your subsequent animation based on animation data delivered on that day. This fits into the world of virtual production, which is becoming increasingly important to content generators, given the agile way in which content is being delivered today."
Tell us about the custom head display.
TL: "I think we're probably the only people that produce a complete system; we produce both the hardware and the software. We've developed our own helmet-mounted camera system, which has two cameras on it: a front-facing camera and our signature profile camera at the side. We do this because it allows our algorithms to better disambiguate a lot of the more complex puckers and different tightening shapes, particularly around the mouth. That also comes with on-board electronics for lighting. We use infrared lighting because it doesn't interfere with the actor, so we're not shining lights in their eyes that they can see. We also have timecode generation equipment and an on-board computer. Rather than just recording the camera performance, that on-board computer actually takes the camera data, tracks the actor's performance, folds that onto the character and wirelessly streams animation off the actor. So rather than just delivering video performances through the cameras, we're actually streaming off ready-to-render animation data wirelessly."
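The pipeline Lacey describes (a stereo head-mounted capture, an on-board solve, and wireless delivery of ready-to-render rig data) can be summarized in a short sketch. This is purely illustrative: Cubic Motion has not published an API, so the camera interface, the control names and the JSON-over-UDP wire format below are all assumptions, and the computer-vision solve is stubbed out.

```python
import json
import socket
import time

# Assumed address of the render machine receiving animation data.
RENDER_HOST = ("192.168.1.50", 9000)

class DummyHeadRig:
    """Stand-in for the helmet-mounted camera pair; returns placeholder
    frames where the real hardware would deliver front and profile images."""
    def grab_pair(self):
        return b"front-frame", b"profile-frame"

def solve_rig_controls(front, profile):
    """Placeholder for the on-board computer-vision solve. The real system
    combines the front and profile views to disambiguate mouth shapes;
    here we just return dummy rig-control values in [0, 1]."""
    return {"jaw_open": 0.2, "lip_pucker": 0.7, "brow_raise": 0.1}

def stream_performance(rig, frames=3):
    """Per-frame loop: capture, solve, timestamp, send over the network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for _ in range(frames):
        front, profile = rig.grab_pair()
        controls = solve_rig_controls(front, profile)
        packet = json.dumps({
            "timecode": time.time(),  # stands in for SMPTE timecode
            "controls": controls,     # ready-to-render rig values
        }).encode("utf-8")
        sock.sendto(packet, RENDER_HOST)  # fire-and-forget, per frame

stream_performance(DummyHeadRig())
```

UDP is assumed here because per-frame animation data, like video, tolerates an occasional dropped packet better than the latency of retransmission, which matters in the live-performance use case described above.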

How does this product fit into an entire virtual production?
SC: "A traditional production will basically be at a mo-cap stage, and there would be various components of capture that are done at the time. So let's assume we're not