Computer Graphics World

January / February 2016


This resulted in a massive amount of rendering. "We are up on the towers for a long time, over 20 minutes of the film. So we had to figure out how to get all that rendering done," Baillie notes. The team opted for a cloud-based rendering solution, using a platform called Conductor to manage the workflow, which enabled the artists to access and interface via the cloud with up to 15,000 processors on demand.

"This was the largest use of cloud computing ever on a movie," says Baillie, noting the film required 9.1 million hours of rendering in the cloud – basically, more than 1,000 years using a single processor. "So you could say we spent a millennium in the cloud creating the visuals." According to Baillie, the film's budget was far less than that of other VFX-heavy features, but by using the cloud, the team was able to turn the work around quickly, saving an estimated 50 percent in rendering costs compared with traditional methods.

Also atypical was the use of matte paintings on the film. When Gordon-Levitt steps out onto the wire, it is sunrise, but as the walk progresses, the sun gets higher in the sky, and by the end, the environment is stormy. A pure, traditional matte-painting approach would have required close to 20 paintings, "which would have been completely overwhelming," says Baillie. Instead, the artists followed a hybrid approach whereby the CG lighting and matte-painting departments worked in concert. Since every detail below the wire was geometry, they could blend the light bouncing off these very detailed, live-rendered CG assets with matte-painted augmentations, and change the CG light as the sequence progressed. The end result contained the beauty of a painting with the flexibility of CGI.

V E R T I G O

Baillie points out that The Walk is one of the great successes that happen when all the departments work together in perfect sync: production, set design, digital effects, camera, stereo 3D.
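The rendering figures quoted above are easy to sanity-check. A quick back-of-the-envelope calculation (the 9.1 million hours and 15,000 processors come from the article; the "days at full capacity" figure is our own illustrative extrapolation, not a number from the production):

```python
# Back-of-the-envelope check of the cloud-rendering figures (illustrative):
core_hours = 9_100_000                    # rendering hours reported for the film
years_on_one_cpu = core_hours / (24 * 365)
print(round(years_on_one_cpu))            # 1039 -- "a millennium in the cloud"

# Hypothetical extrapolation: wall-clock time if all 15,000 on-demand
# processors had run flat out (not a figure from the film itself).
days_at_peak = core_hours / 15_000 / 24
print(round(days_at_peak))                # 25
```

The numbers bear out Baillie's phrasing: over a thousand single-processor years, compressed to roughly a month of elapsed time at the pool's peak capacity.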
To prepare for the walk, a 40-by-60-foot corner of a rooftop – about a sixth of the actual size – was used for

Digital artists at Atomic Fiction created a CG version of actor Joseph Gordon-Levitt's face for scenes that required a stunt performer. The scanning was done at Pixelgun Studio with a mobile photogrammetry rig that captures ultra-high-resolution facial expressions and full bodies. The solution utilizes an array of more than 100 cameras that surround and simultaneously photograph the subject. Proprietary software uses perspective differences among the cameras to construct a perfect geometric representation of the face, and since cameras are used as the source, the texture perfectly aligns to the mesh. In a half hour, the group acquired 40 to 45 poses of Gordon-Levitt's face.

"We had him act out poses where we may have needed to re-create his face as he performed on the wire," says Atomic Fiction's Kevin Baillie, overall visual effects supervisor. "It tells us how, when he furrows his brow, blood flows from his forehead to his temples; what the skin and blood under the skin do. Just as reproducing the offices and the bumps on the surface of the towers elevated the reality of the environments, so did this information for the performer. Every single detail on the skin and under it increased the level of reality."

The resulting mesh and textures provided the artists at Atomic Fiction with substantial data that was later refined artistically and with the use of Chaos Group's V-Ray skin shader. "We've learned over the years that reality is not always the right answer; it's what looks right," Baillie says.

On the last day of filming, Gordon-Levitt donned a Faceware Technologies helmet camera for approximately 50 takes that replicated moves by the stunt performer which likely would require face replacements in the shots. "Joe walked across a tape mark on the ground with the head cam, and Faceware gave us very clean and lightweight data to use," says Baillie.
"It got us 60 to 70 percent there, as opposed to a detailed motion-capture session with thousands of markers, which would have been data overload and required more time for us to undo than to animate the shot."
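The core idea behind the photogrammetry rig described above – recovering geometry from perspective differences among cameras with known positions – can be sketched in a few lines. This is not Pixelgun's proprietary software, just a toy two-camera illustration of ray triangulation; all names and numbers are hypothetical:

```python
import math

# Toy illustration of photogrammetric triangulation: each camera sees a
# surface point along a ray, and rays from cameras at known positions are
# intersected to recover the point's 3D location.

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def unit(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation: the point halfway between the closest
    points on two 3D rays (p = camera center, d = unit view direction)."""
    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b                  # approaches 0 for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = tuple(p + t1 * x for p, x in zip(p1, d1))  # closest point, ray 1
    q2 = tuple(p + t2 * x for p, x in zip(p2, d2))  # closest point, ray 2
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras a meter either side of center, both imaging a point 5 m away:
target = (0.0, 0.0, 5.0)
cam1, cam2 = (-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)
pt = triangulate(cam1, unit(sub(target, cam1)), cam2, unit(sub(target, cam2)))
# pt recovers (0.0, 0.0, 5.0) up to floating-point error
```

A production rig repeats this over matched pixel features across its 100-plus views, solving each point in a least-squares sense – which is also why the captured photos double as perfectly aligned textures for the resulting mesh.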
