Computer Graphics World

Summer 2019

...performance, the nuance of his face, and wanted to be sure we got as good a representation of that as we could," says Earl. "So, we started to improve the SNAP solve system by taking the meshes we were solving, and then basically comparing them back to the shapes we had captured in that initial Medusa session."

The results were better in this newer system, known as Xweave, but ILM learned about Disney Research Studio's Anyma, which up to that point had been used as an ADR-style booth with three stationary cameras for recording performance dialog after the fact.

"We had asked the guys at Disney Research if they were able to adapt the Anyma solver to work with our head-mounted camera footage that we had already shot," says Earl. It took some time, and it was already in the latter part of production, but they were able to make that happen.

Anyma doesn't just rely on the low-resolution mesh generated from the points; rather, it generates a mesh per frame and does a photometric solve based on the footage from the head-mounted cameras. "So, it looks at the actual pixels and the images to give you a much higher-fidelity solve," Earl explains. (A simplified sketch of the photometric idea follows below.)

While the performance solve was being completed with the new system, ILM was simultaneously rebuilding the re-targeting aspect, whereby Ruffalo's performance would be applied onto Smart Hulk. Spearheading this was Abs Jahromi, ILM's Endgame facial technology supervisor, with Owen Calouro, layout supervisor, initiating the process from the production shot side.

Once ILM reviewed the solves on a Banner mesh, the crew compared them back to plates of Ruffalo, making sure they had a one-to-one match. Then, using the system, called Blink, they re-targeted the Banner solve onto the Hulk model using new code that Jahromi wrote. At the same time, that re-target broke the per-mesh solve into Hulk facial shapes and provided animators a much more user-friendly version to work with (the second sketch below illustrates the decomposition).

"Typically, you would get that data and the solve, but it would be very difficult to handle or change the data," explains Earl. "In this instance, it was all translated into the underlying Hulk facial shapes, which had very friendly animation controls that our animators here in San Francisco, led by Kevin Martel, could then use to compare Ruffalo's performance to the performance on Smart Hulk and adjust it accordingly, dialing it up or down in intensity once it was on the target model."

When it came to animation, ILM used new deformers that were also more user-friendly for dialing up or down the different aspects of Ruffalo's performance. Additionally, ILM provided its Hulk model, shader information, and base rigging to Framestore, which also worked on some Smart Hulk shots.
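To make the photometric idea concrete, here is a minimal toy sketch, not ILM or Disney Research code: instead of fitting only a sparse set of tracked points, the solver refines the mesh until an image rendered from it matches the head-mounted camera frame pixel by pixel. The renderer and the single-scalar-per-vertex refinement below are deliberately simplified stand-ins.

```python
# Toy photometric solve: refine per-vertex values so a rendered image
# matches a target frame. Real systems solve dense 3D geometry against
# calibrated footage; this sketch only adjusts a brightness per vertex.
import numpy as np

def render_toy(verts, image_shape=(8, 8)):
    """Stand-in renderer: splats each vertex's value (z) into a 2D grid."""
    img = np.zeros(image_shape)
    h, w = image_shape
    for x, y, z in verts:
        img[int(y * (h - 1)), int(x * (w - 1))] += z
    return img

def photometric_solve(verts, target_img, steps=100, lr=0.05, eps=1e-3):
    """Gradient descent on the pixel-wise error between render and footage."""
    verts = verts.copy()
    for _ in range(steps):
        base = np.sum((render_toy(verts) - target_img) ** 2)
        grad = np.zeros(len(verts))
        for i in range(len(verts)):
            bumped = verts.copy()
            bumped[i, 2] += eps  # finite-difference probe on one vertex
            grad[i] = (np.sum((render_toy(bumped) - target_img) ** 2) - base) / eps
        verts[:, 2] -= lr * grad
    return verts

rng = np.random.default_rng(0)
verts = rng.random((20, 3))                 # initial per-frame mesh guess
target = render_toy(rng.random((20, 3)))    # stand-in "footage" frame
solved = photometric_solve(verts, target)   # mesh refined against pixels
```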
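The re-targeting step described above can likewise be sketched under a standard linear-blendshape assumption; the article does not detail Blink's internals, so the decomposition below (least-squares fit of shape weights, then replay on the target model with an intensity dial) is an illustration, with hypothetical shape libraries and dimensions.

```python
# Sketch of blendshape-style re-targeting: express each solved Banner
# mesh as weights on a library of facial shapes, then replay those
# weights on the Hulk model's corresponding shapes.
import numpy as np

def solve_shape_weights(solved_delta, shape_basis):
    """Least-squares fit: which mix of facial shapes explains this frame?

    solved_delta: (3N,) vertex offsets from the neutral Banner mesh.
    shape_basis:  (3N, S) columns are Banner facial-shape deltas.
    Returns (S,) weights, clamped to [0, 1] like animator sliders.
    """
    w, *_ = np.linalg.lstsq(shape_basis, solved_delta, rcond=None)
    return np.clip(w, 0.0, 1.0)

def retarget(weights, hulk_neutral, hulk_basis, intensity=1.0):
    """Apply Banner shape weights to Hulk shapes, with a global dial."""
    return hulk_neutral + hulk_basis @ (intensity * weights)
```

The `intensity` parameter is the sketch's stand-in for what Earl describes: because the performance lives in named facial shapes rather than raw per-frame meshes, animators can scale or edit it on the target model instead of wrestling with unstructured solve data.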
"We need to understand his decision-making at an emotional level because it's his decision to come into the future and destroy the universe, and rebuild it from scratch, which motivates the entire third act," says Matt Aitken, VFX supervisor at Weta. "He had to work at an emotional level, so we did a lot of very finely craed performance work to make that happen." In the final shots, Thanos does not speak, but it is clear what he is thinking based on his computer-generated body language and facial expressions. Early in the film, Digital Domain again handled shots of Thanos, as it had for Infinity War, with Weta taking control of the character when he attacks the Avengers' compound well into the film and creating a few hundred Thanos shots for Endgame, building on the work it also had done for Infinity War. "When we were working on him for Infinity War, we hit a little bit of a wall with some of the fine detail on his face, particularly around the corners of his mouth. We felt we didn't have the com- plete range of expression that we needed, so we had to patch some of those shots by hand," says Aitken. In the short time between the two films, Weta continued to work on the character. According to Aitken, the facility's facial modeling team created some new target shapes around the edges of the mouth. The team took advantage of new developments THANOS WAS UPDATED, REFLECTING MORE OF JOSH BROLIN'S PERFORMANCE.
