Computer Graphics World

Edition 3

"We tracked her movements, and then in real time we would drive the animation controls on the digital puppet, and our goal was a one-to-one match," says Harris.

Cubic Motion provided the technology to capture and track the facial performance, and to then calculate, or solve, in real time what pose the face rig should be in. Cubic Motion tracked the facial contours, aligning curves to the eyes, nose, and mouth. Meanwhile, 3Lateral provided the facial rig and underlying controls that would enable the digital model to achieve any pose the actress herself could. Using a proprietary machine-learning algorithm, Cubic Motion would drive the 3Lateral rig based on the contour tracking of the face, positioning the digital character in a pose to match the actress.

"It took refinement. The face-tracking technology is sensitive to the lighting environment, and you have to train the solver to get an appealing result, and that takes some time," says Harris.

OVERCOMING OBSTACLES

Crossing the Uncanny Valley has been difficult for artists, mainly because our minds are tuned to recognize another human and react to their expressions and micro-expressions. So, we tend to focus on little things, whether it's the tear ducts or the smallest of details, such as the hairs on the face. "We can improve each of those to the point where we feel like we're 100 percent there. But when you back up and look at the whole face, you notice there's still something that feels a little uncanny," Harris adds. "That's because there are still incremental improvements to be made in lots of little areas. You might be off by a fraction of a percent in some of those areas, but it can still make a big difference."

Consider, for instance, the mix of technology required to create "Siren." "It's exciting to see that it's possible to create a digital human in real time and track the actress's movement. But it's also eye-opening to see how much gear she has to wear. So, we have a ways to go, not just in terms of making the characters look more realistic, but also in making sure the technology can be less cumbersome for the performers," Harris says.

Nevertheless, with each new application, we are making progress. Sometimes that progress comes from large advances, other times from incremental changes in techniques and technologies such as rendering, depth of field, subsurface scattering, and more. "Siren" falls into the latter category. It is worth pointing out that for internal applications like "Siren," the Epic team does not use extremely specialized tools, but rather an internal version of its commercial engine, and often many of the features used in these projects find their way into the next Unreal Engine release.

But make no mistake: "Siren" pushes the envelope when it comes to rendering and live-performance capture, with Epic advancing real-time rendering to the point where its quality is on par with what had previously been achieved only with a software renderer.

REAL-TIME IN A PRE-RENDERED WORLD

Hailing from the pre-rendered VFX world, Harris is excited about the benefits that real time presents. "Lights can be moved in real time. You're not clicking Render, going to get a coffee, and coming back 20 minutes later to see what the image looks like," he says. "It's a huge change in the way you can be productive as an artist.
I think maybe people in the gaming industry have known this for a long time, and now that we're doing really high-fidelity work in real time, even with raytracing in some cases, it's pretty exciting."

The most obvious advantage this brings is the ability to iterate quickly. Also, because the content isn't locked, it can be customized for viewers. "Imagine changing the content of a cartoon on the fly for each viewer because it's being rendered in real time, not pre-rendered."

In the near term, Harris believes we will see more realistic avatars in the games we're playing and in the apps we're using. Longer term, he believes the intersection of AI and "really, really convincing digital humans" is going to be a big paradigm shift. "Obviously we're not there yet, but I think these are the technologies that you're going to see converge: the combination of AR, convincing digital humans, and artificial intelligence," says Harris.

Indeed, the convergence of this tech, along with real-time advances, is "enabling us to do some of the things we have been doing in the pre-rendered world for a long time," notes Harris. However, the real-time aspect adds a whole other dimension.

"It will just keep going forward, so as this gets faster, we'll be able to do more raytracing. More strands of hair. We'll be able to have more blendshapes, to have more convincing expressions. We'll be able to have more believable synthesized performances," says Harris. "Each year, I think we're going to see really exciting leaps in all these areas, and we are moving forward at incredible speeds."

That will eventually lead us to lifelike digital humans the likes of which are hard to imagine just now. "I think it will be inescapable, everywhere. But that's a little ways off," Harris predicts.

But hard to resist, just like the call of the mythical Sirens of the sea, only this time leading the industry to endless productive possibilities.

Karen Moltenbrey is the chief editor of CGW.

'Siren' on Stage

The "Siren" application without question pushes the boundaries of real time. In fact, "Siren" can be presented live, with an actress performing the character in real time. Or, the real-time performance can be filmed and played back as pre-recorded video. In both instances, the character is rendered in real time. At GDC, the presentation consisted of pre-recorded video of actress Alexa Lee performing Siren (in place of Jiang).

To create the video, shot on Vicon's capture stage, Lee wore a full-body mocap suit and head-mounted camera. Using Vicon's Shogun 1.2 software, her body and finger movements were captured on one screen, while the data was streamed into Unreal Engine using Vicon's new Live Link plug-in. On a second screen, the Siren character moved in sync, driven in-engine at 60 fps.

To ensure the highest possible fidelity, Vicon solved directly onto the Siren custom skeleton, removing the complex retargeting step. It also developed a new algorithm to help realistically animate the fingers. A simplified sketch of this kind of per-frame stream-and-solve loop appears below.
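To make the pipeline described above a little more concrete, here is a minimal, self-contained sketch of a per-frame stream-and-solve loop. It is not Cubic Motion's, 3Lateral's, Vicon's, or Epic's actual code or API: the frame source is simulated, the function names are hypothetical, and a simple linear mapping stands in for the proprietary, performer-trained solver that converts tracked facial contours into rig control values.

```python
import time
import numpy as np

FRAME_RATE = 60  # the sidebar cites a 60 fps in-engine drive rate


def read_tracked_frame(t):
    """Stand-in for one frame of streamed capture data.

    In the real pipeline this would arrive from the tracking/streaming
    layer (a mocap stream plus tracked facial contour curves); here we
    synthesize two contour measurements so the loop runs on its own.
    """
    jaw = 0.5 + 0.5 * np.sin(t)          # pretend "mouth open" measure
    brow = 0.5 + 0.5 * np.sin(0.7 * t)   # pretend "brow raise" measure
    return np.array([jaw, brow])


# A learned linear mapping from contour measurements to rig controls,
# standing in for the trained solver described in the article. Real
# systems train this mapping on calibration poses of the performer.
SOLVER_WEIGHTS = np.array([[1.0, 0.0],
                           [0.0, 0.8]])
SOLVER_BIAS = np.array([0.0, 0.1])


def solve_rig_controls(contours):
    """Map tracked contour measurements to rig control values in [0, 1]."""
    return np.clip(SOLVER_WEIGHTS @ contours + SOLVER_BIAS, 0.0, 1.0)


def apply_to_character(controls):
    """Placeholder for pushing solved control values onto the in-engine rig."""
    print(f"jaw_open={controls[0]:.2f}  brow_raise={controls[1]:.2f}")


if __name__ == "__main__":
    start = time.time()
    for frame in range(FRAME_RATE * 2):   # two-second demo run
        t = frame / FRAME_RATE
        apply_to_character(solve_rig_controls(read_tracked_frame(t)))
        # Sleep just enough to roughly hold the target frame rate.
        time.sleep(max(0.0, start + (frame + 1) / FRAME_RATE - time.time()))
```

The structural point is simply that each incoming frame of tracked data is solved into rig controls and applied to the character immediately, rather than being written out for an offline render.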
