Computer Graphics World

Edition 2 2018

Issue link: https://digital.copcomm.com/i/997232


THE CG IRON GIANT BACKS UP PARZIVAL.

VIRTUAL REALITY

where Spielberg would do virtual camerawork. The second soundstage was used for bluescreen shots that would take place in the film's real world. The crew on the performance-capture stage used Oculus headsets. The Vcam lounge had HTC's Vives and Microsoft's HoloLens.

"We used the Oculus on the main stage because it was easier to use their onboard sensors along with the mocap system to generate the ability to walk 100 feet and be immersed," Roberts says. "In the Vcam lounge, in addition to the Vive, we used the HoloLens to see the CG characters with real people. Steven could get an idea of the eyeline for an eight- or nine-foot character."

Actors on the motion-capture stage wore traditional body mocap suits and custom head-mounted cameras.

"We worked with ILM on a custom helmet, on the makeup, the camera positions, and the lighting on the faces," Roberts says.

Each actor had four cameras attached to his or her helmet running at 48 fps (47.952, precisely). In addition, eight witness cameras trained on the actors' faces provided reference for the ILM animators, who would translate the actors' performances to their avatars.

"It all paid off," Roberts says. "It was amazing to see how well the facial performances went straight through."

Before directing the actors, Spielberg would look at the virtual set through a VR headset and check that the physical world – the practical elements on the motion-capture stage – matched the virtual world.

"Sometimes he would change the blocking because he could see the space in VR as if he were really there," Roberts says. "That happened especially in Aech's garage. As the actors were going through their lines, he was changing the blocking and repositioning the Iron Giant to make the framing more interesting. He couldn't have done that without putting a VR headset on."

Then, Spielberg would take the headset off, pick up the Vcam, and shoot the motion-capture performances.

"One of our guys would stand next to Steven and could pull focus, adjust the lighting, and change the controls in the virtual camera," Roberts says. "It was easy to follow him around and react quickly. We recorded all this in real time."

Multiple people on set could also view the virtual world simultaneously, and the multiple views would all be in sync.

"Steven would have his own view into the virtual world, and another workstation might be rendering another view that Janusz [Kaminski, cinematographer] would be lighting," Roberts says. "As Janusz made changes, they were propagated to everyone else in real time. A set designer could be working on the set from a set deck view. And anyone could pick up a VR headset and walk the entire capture. You could see the characters one to one, and you could see where Steven was in the world. If you wanted to see what he was looking at, you could walk to him in virtual space and look over his shoulder. If Janusz was lighting, you could see his view and see his lighting changes in your world."

Because it would be easy to have a situation in which two or three people wanted to change something simultaneously, the crew developed a "conch shell" system. The person with the conch shell, the last person who selected it, was the one who could make a change; no one else could.
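The "conch shell" is, in effect, a last-writer-wins exclusive-edit token. A minimal Python sketch of that idea follows; the names and structure are hypothetical, since the article doesn't describe the actual implementation:

    import threading

    class ConchShell:
        """Exclusive-edit token: the last person to grab it holds edit rights."""

        def __init__(self):
            self._lock = threading.Lock()
            self._holder = None  # current holder's name, or None

        def grab(self, user):
            # Selecting the conch displaces whoever held it before.
            with self._lock:
                self._holder = user

        def can_edit(self, user):
            with self._lock:
                return self._holder == user

    # Usage: only the current holder's changes are accepted.
    conch = ConchShell()
    conch.grab("janusz")
    conch.grab("steven")             # Steven takes the conch from Janusz
    assert conch.can_edit("steven")
    assert not conch.can_edit("janusz")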
"One of my philosophies is that the only way for filmmakers to really come into this virtual world and use their tools to do their work is to give everyone their own view into the world, just like in the real world," Roberts says. Video from the eight witness cameras, sound, the four HD camera feeds of each actor's face, and the real-time view of the Vcam Spielberg held all traveled through video assist to editorial for Spielberg to make his "selects." The actors' motion- captured performances were recorded in Autodesk's Motion Builder and Unity along with lighting and camera information. Then, a team called "The Lab," compris- ing 36 artists, would record the real-time 3D version for each select and generate a master scene running in Unity. "Steven's selects might include multiple performance takes from the stage," Rob- erts says. "For example, in the Distracted Globe we might have one set of perfor- mances for one dancer and another for another dancer. The team would combine all those performances into one scene file so Steven could put a virtual camera on it. The Lab would sometimes be prepping until midnight to get ready for the next day." In the Vcam lounge, Spielberg would view the master scene in VR through the Vive headset, perhaps make small changes, then switch to the Vcam to drive the master scene. "Steven would roll onto stage between 6 and 6:30 in the morning," Roberts says. "He'd go straight into the Vcam lounge and start
