Computer Graphics World

Winter 2019

Issue link: https://digital.copcomm.com/i/1192296


The film received mixed reviews – Rotten Tomatoes' aggregate scores show a rotten 26 percent approval from critics, but a fresh 83 percent from audiences. Lee chose to shoot the film in 4K 3D at 120 frames per second, which bothered some critics and made life difficult for Weta Digital's rendering team. Bill Westenhofer, who had received an Oscar for Lee's Life of Pi (and another for The Golden Compass), was the overall visual effects supervisor; three-time Oscar nominee Guy Williams supervised the visual effects created at Weta Digital.

To create Will Smith's cloned character at age 25, the Weta Digital crew adopted what has become a traditional method of capturing performances, one honed especially through three award-winning Planet of the Apes films and this year's Alita: Battle Angel. To devise Junior's look, however, researchers at Weta Digital developed state-of-the-art technology for skin color and textures.

On set, when both characters appeared in a scene, Smith's stand-in, Victor Hugo, played Junior, knowing he would be replaced later. Then, Smith played Junior in the same shots wearing motion-capture "pajamas" and a head-capture rig. In the film, Junior is 100 percent digital.

"A performance doesn't exist only in the face," Williams says. "Everything you do moves you from your feet to your eyes, and all of this adds to how we recognize a person. It isn't solely about facial motion. So, we choreographed all the motion together. Otherwise, you end up with a bobble head."

The face carries most of the emotion, though, and Williams notes two challenges in creating and performing Junior's face: "First, it becomes easy to lose his likeness," he says. "And second, Will Smith hasn't aged much in 25 years. We had to get into the deep science of youth versus age to create enough of a difference. We knew the distance from lips to nose changes, and the jowls and cheeks sag.
But, we also had to put youth in his pores, in the color around his eyes, in the moisture of his lips and in his eyes to make sure everything our brains perceive as youth is properly represented. Digital humans live or fail in insanely subtle nuances. Skin turned out to be a major component."

Poring Over Details

Weta Digital started by creating a digital model of Will Smith at his current age using photo shoots, photogrammetry scans, skin lighting capture at ICT, and two FACS sessions. Then, they modified the digital model of 50-year-old Smith to change his facial structure and appearance. For reference, they had Smith's early films and 23-year-old actor Chase Anthony, whose skin looked like Smith's.

Initially, the crew considered relying on their standard approach, in which they use a live cast for skin textures. "But, one of our shader guys thought he could grow the pores," Williams says. "Early tests gave us hope, and in the end, he created a pore structure that was better than anything we'd have gotten from the live scans. It's not 100 percent accurate, but it's incredibly accurate. If Will Smith had 35 pores in an area, we might have 34."

The simulation is controlled with empirical maps that define how to grow the pores – deeper here, isotropic there, denser, sparser, and so forth. "What happens is that we 'pelt' a number of points to distribute the points evenly across the surface, and then flow the points across the face," Williams explains. "From every point, the simulation draws lines to neighboring points without crossing another line. The software interprets the flow field and can take a bias from the flow. That creates anisotropy: The flow of pores in one direction might be more dominant than in another direction."

The simulated pore structure resulted in a nine-million-tetrahedron mesh. "The beauty of this is that we can move it," Williams says. "We can pipe the facial animation into the simulation software with the mesh.
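Williams' "pelting" description – scatter points evenly over the surface, then let a flow field bias their spacing – resembles anisotropic Poisson-disk sampling. As a rough, hypothetical sketch of that idea only (Weta Digital's actual tool is proprietary, and every name below is invented for illustration), dart throwing on a 2D skin patch with a flow-aligned distance test might look like this:

```python
import math
import random

def pelt_points(spacing, flow_angle, n_tries=20000, seed=7):
    """Scatter pore seed points on a unit 'skin patch' by dart throwing.

    spacing(u, v) -> target point spacing at (u, v); smaller means denser.
    flow_angle(u, v) -> dominant pore-flow direction in radians; the
    rejection test is stretched across the flow, so points pack more
    tightly along the flow than across it (anisotropy).
    """
    random.seed(seed)
    points = []
    for _ in range(n_tries):
        u, v = random.random(), random.random()
        r = spacing(u, v)
        a = flow_angle(u, v)
        ca, sa = math.cos(a), math.sin(a)
        ok = True
        for pu, pv in points:
            du, dv = u - pu, v - pv
            # Distance measured in a frame aligned with the flow;
            # the across-flow axis is weighted 2x, shrinking the
            # allowed packing in that direction.
            along = du * ca + dv * sa
            across = -du * sa + dv * ca
            if math.hypot(along, 2.0 * across) < r:
                ok = False
                break
        if ok:
            points.append((u, v))
    return points

# Uniform spacing with a single dominant flow direction.
pts = pelt_points(lambda u, v: 0.05, lambda u, v: 0.3)
```

In a production setting, the spacing and flow-angle callbacks would be driven by the kind of empirical maps Williams mentions (denser here, sparser there), and the accepted points would then be connected to their neighbors without crossing edges.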
The mesh moves based on motion capture cleaned up by an animator. The way the face moves shapes the pores and changes the shape of highlights in an anisotropic way. We can get micro-wrinkling; the pores can collapse into fine wrinkles."

Thus, the simulated pore structure provided the model for skin texture. For color, the crew simulated melanin and blood flow. Rather than painting multiple color maps, they first created pale-pink skin using blood flow under the surface, and then layered in two types of melanin to color Junior's skin. As a result, the color of Junior's face comes from a complex interaction of simulated melanin and blood flow with light. It doesn't depend on colored light bouncing off a textured surface.

More Than Skin Deep

"Melanin is a pigment layer with thickness," Williams says. "The density creates the color, and it's angle-dependent. At an angle, you see more of the thickness, so it looks darker than when you view it straight on. We would squeeze blood and melanin into parts of the face. As the face moved, the color flowed, and the overall compression of the skin affected the color. We ran that as a simulation, and software applied it to a shader. When I say that we put melanin in the skin, we actually measured it so it interacts correctly with light – our renderer is based on wavelengths of light, not RGB colors."

This level of detail extended into the eyes. Weta Digital artists used a volumetric sphere for the eye built for previous shows. This digital eye has a cornea with fluid inside, an iris, and layers of sclera.

"It's gorgeous as is," Williams says. "But, we added a couple more things." A conjunctiva surface with pigmentation and thickness that covers the sclera put color in the corner of the eye. A choroid layer beneath the sclera created a dark ring around the iris.
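The angle-dependent darkening Williams describes for the melanin layer is consistent with Beer-Lambert absorption: the light path through a layer of fixed thickness grows as 1/cos of the viewing angle, so grazing views traverse more pigment and look darker. The sketch below illustrates only that principle – the absorption values are invented, and this is not Weta Digital's spectral shader:

```python
import math

# Invented relative absorption coefficients for a melanin-like pigment at
# three sample wavelengths (blue, green, red); real values would be measured.
MELANIN_ABS = {450e-9: 1.6, 550e-9: 1.0, 650e-9: 0.6}

def melanin_transmission(thickness, density, view_angle, wavelength):
    """Beer-Lambert transmission through a pigment layer of given thickness.

    The slanted path through the layer scales as 1/cos(view_angle), so
    grazing angles see more pigment and transmit less light.
    """
    path = thickness / max(math.cos(view_angle), 1e-6)
    return math.exp(-MELANIN_ABS[wavelength] * density * path)

# A grazing view transmits less than a head-on view, so the skin reads darker
# at the silhouette even with identical pigment density.
head_on = melanin_transmission(1.0, 1.0, 0.0, 550e-9)
grazing = melanin_transmission(1.0, 1.0, 1.2, 550e-9)
```

Because absorption is evaluated per wavelength rather than per RGB channel, the same pigment density reads differently across the spectrum – consistent with Williams' note that the renderer works in wavelengths of light, not RGB colors.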
Oil added to the thin film of water covering the eye created a proper meniscus effect, a curve in the upper surface of the liquid, and appropriately dimmed harsh reflections. "We couldn't set a value of oil to water," Williams says. "The ratio changed from day to day. We had to modify it from shot to shot."

And as the number of simulations used for Junior's face grew, so, too, did render times. "Our bakes are slow because there is so much simulation," Williams says. "But the thing that really slowed us down was 120 fps. And, Ang [Lee] likes to linger on a performance. We had many shots that were over a minute long, and two that were over two minutes. Our bakes could take two weeks."

Avengers: Endgame and Captain Marvel: Lola Visual Effects

Lola is famous in the industry for its artists' digital cosmetic enhancements to actors' filmed appearances. But in 2006, the studio also pioneered de-aging by creating youthful Magneto (Ian McKellen) and Professor X (Patrick Stewart) for the film X-Men:
