Computer Graphics World

Jan-Feb-Mar-2023

Issue link: https://digital.copcomm.com/i/1495163


— namely gaming, VFX, and film. The Quarry blurs the line between video games and film in a way that no other title has, and that's something I personally would love to continue to explore.

CGW: How did you collaborate with the Supermassive Games team to bring such a massively interactive, visually ambitious game to life?

Aruna Inversin: Working daily with Supermassive on all aspects of the schedule — from actor acquisition to motion capture performance — allowed both teams to be transparent in their communication about which tasks needed to be delegated. This happened through video calls and occasional in-person meetings. There was a huge amount of data to categorize and transfer, and we set up a specific pipeline and process to hand the Supermassive team data as each mocap shoot day finished. Each delivery consisted of body animation data, HMC [head-mounted camera] video data, and audio streams. That data was uploaded nightly, Pacific Time, which allowed the Supermassive team, working on Greenwich Time, to ingest it and push it into the engine over their next full day. On the morning of the next shoot day, our California stage and production teams could review all the performances in real time. Internally, I called this our "Cadillac pipeline," as it used nearly every technology available!

Paul Pianezza: Lots and lots of open and honest communication. If we were ahead of schedule or behind, it was all shared. We also had weekly Zoom calls during production and shared end-of-week emails highlighting what was achieved and what was outstanding. We also created an email alias so that if Supermassive had a question, it went to four or five members of our leadership group and nothing was missed.

CGW: Tell us how your team organized the workflow for this project. How many people were involved, and how did you divide the workload?

Aruna Inversin: Overall, roughly 150 people worked on this project on the Digital Domain side. Not all were full-time, as there were ebbs and flows in our schedule and across the divisions of artistry. Our primary task was relatively straightforward to describe but extremely complex to execute: take an actor's HMC performance, solve that performance onto their digital character, and play it back in real time. In executing this task, we developed new methods of HMC stabilization and a new product delivery, Chatterbox, which allows a decimated set of ideal facial shapes to run on the majority of modern game consoles. Digital Domain provided the animation, animation rigs, and facial meshes, which Supermassive would use with our acquired actor data (textures and actor face and body meshes) to create a believable in-engine performance.

Paul Pianezza: We had around 150-ish people work on the project, including staff and project hires. That doesn't include vendors or people who helped out with small tasks for a few days here and there. We organized the work in a traditional VFX manner. On the production side it was producer, digital producer, associate producer, and coordinators. On the artist side, it was VFX supervisor, CG supervisor, department supervisors, department leads, and artists. Of course, we had lots of TD [technical director] and pipeline support. Along the way, we created new departments and new workflows, so we had to adapt to situations as they arose.

CGW: Tell us about the motion capture process for this project. How long was the mocap shoot, and how did your team utilize the Masquerade 2.0 system to process data?

Aruna Inversin: Digital Domain's involvement did not rest solely on the digital creation of the actors, but also on our performance capture stage in Los Angeles. The actual stage time covered a total of 42 days across nearly 30 months, with the majority of the performances occurring in January and February of 2021. There were additional scan and training days for most of the actors, which happened during 2020. For our prologue characters — Laura, Max, and Travis — those actor acquisitions happened in mid-2019, with the prologue sequence shot in late 2019. The Masquerade 2.0 process allowed us to leverage our Masquerade workflow while adding new technology and pipeline components to service the 32 hours of facial performance this game would eventually create. It begins with high-fidelity photometric and 4D data capture by our preferred partners, continues with our HMC fitting and training days, and concludes with the final performance capture shoots. Overall, the Masquerade 2.0 process covers the creation, tracking, solving, and optimization of facial performances from an HMC camera input into a fully realized digital double that

[Image: The team captured millions of frames to process across 32 hours of facial performance.]
