Computer Graphics World

April-May-June 2021

Issue link: https://digital.copcomm.com/i/1358125


there's been a huge uptick in people streaming. We think the next gen of that includes interactivity – not just choosing your own adventure, like Bandersnatch, but actually true, real-time graphics that you're able to interact with, but still in a very cinematic way while at home in front of a TV streaming.

How has your technology evolved?

It's evolved quite a bit. Again, we've been primarily looking at VR projects. The hardware for that has changed dramatically; one of the biggest changes in that area has been inside-out tracking, like on the Rift S and on the Quest. It really frees you up from having to instrument a space and lets you navigate much larger areas. That's been sort of a big thing for us. Obviously, computers get more powerful every year, so we can do more with the graphics. One of our key tenets is high fidelity, which comes from Lucasfilm and ILM; everything we want to do is at the highest quality. So we always push for the most extreme and best-looking graphics we can. And every year we get to do more and more. Unreal Engine keeps getting better, and graphics cards keep getting better. The technology just keeps moving, and we keep wanting more. Also, Oculus (Quest and Quest 2) has been a big platform for us. We've released the Vader Immortal series and now Tales from the Galaxy's Edge on that platform. I don't think anybody could have predicted that you'd have a mobile device doing VR at 90 frames a second at this point. It's a pretty great piece of hardware.

Describe the pipeline.

Engineering-wise, for the interactivity, we primarily use Unreal Engine, which is where we do all of our scripting, blueprinting, and so forth, and combine everything. The pipeline in terms of building characters, environments, and so forth is moderately similar to what we do in visual effects. There are some different tips and tricks that you would use so the models become real-time ready, but that process is fairly similar [to a typical linear film] in that we have people who specialize in creatures, do rigging and cloth-sim setups, and that type of thing. We have animators who animate, and they still use [Autodesk] Maya for that. That's our general tool for doing animation. We still do a lot of mocap, too, just like we do in visual effects. That mocap comes in through Maya and then is put into Unreal Engine. So probably the biggest piece of pipeline that we've generated is getting from the asset creation pipeline into Unreal, whereas before it was getting the asset creation into our other renderers, like [Pixar's] RenderMan or whatever we were using for VFX. A lot of the pipeline glue that we had to create was to get our assets into Unreal in an efficient way, and also in a way that we could update them. We use [Epic's] revisioning system and Perforce. So the way we store the assets, then check them in, and that kind of thing is very standard, but the pipeline glue is proprietary. For example, when we do look development on an asset, we're able to use that same look development both on a VFX asset as well as on real-time assets. That's proprietary stuff, so our texture artists can create something and have it actually work on both sides – in an Unreal and a VFX pipeline.
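As a loose illustration of that shared look-development idea (ILMxLAB's actual glue is proprietary, so the names, file layout, and publish format below are hypothetical), a publish step might take one texture set and route it to both an offline-render material and a real-time material:

# Hypothetical sketch: publish one look-dev texture set to both an offline
# (VFX) material and a real-time (Unreal-style) material. Names and file
# layout are illustrative assumptions, not ILMxLAB's pipeline.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class TextureSet:
    asset: str
    base_color: Path
    roughness: Path
    displacement: Path   # high-frequency detail used by the offline renderer
    normal: Path         # the same detail baked for real-time use

def publish_vfx_material(tex: TextureSet) -> dict:
    # Offline renderers can afford true displacement at render time.
    return {
        "asset": tex.asset,
        "renderer": "offline",
        "baseColor": str(tex.base_color),
        "roughness": str(tex.roughness),
        "displacement": str(tex.displacement),
    }

def publish_realtime_material(tex: TextureSet) -> dict:
    # Real-time engines typically swap displacement for a baked normal map.
    return {
        "asset": tex.asset,
        "renderer": "realtime",
        "baseColor": str(tex.base_color),
        "roughness": str(tex.roughness),
        "normal": str(tex.normal),
    }

if __name__ == "__main__":
    tex = TextureSet(
        asset="droid_hero",
        base_color=Path("droid_hero_baseColor.exr"),
        roughness=Path("droid_hero_roughness.exr"),
        displacement=Path("droid_hero_disp.exr"),
        normal=Path("droid_hero_normal.exr"),
    )
    # One look-dev publish feeds both pipelines.
    print(publish_vfx_material(tex))
    print(publish_realtime_material(tex))

The point of the sketch is only the fan-out: the artist authors the textures once, and the publish step decides which maps each renderer consumes.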
What's the biggest difference when building assets for VR versus VFX?

The assets are built dramatically differently. Some visual effects use geometry as detail, which can't really be done in real time yet. So, if you actually look at the geometry, they're very different; therefore, when you go to texture, you're also doing some things quite differently – for instance, we use normal maps for real time versus displacement maps for VFX. So, the actual assets are different from a technical standpoint, but when they're finished, they look pretty similar. Of course, the fidelity of the VFX off-line renders is still arguably higher. But if you put a lot of effort into a singular asset, you could make a real-time asset look just as good as a visual effects asset. But you have to be able to run that at scale. There are a lot of technical [hurdles] in how much you can push through the real-time engine and at what fidelity. You have to balance all of that. Whereas for visual effects, you just make everything at the highest fidelity; it just takes longer to render. In real time, you only have so long to render, and if you have a lot of content in there, then you have to downgrade the fidelity to get it to run. There are different balancing tricks, too. Artists have to learn how to model for real time; they have to understand budgets and so forth.

How much content is created for an experience?

Each chapter in the Vader Immortal series had about 40 minutes of content. It's more story-driven and linear, so it has more of a fixed time frame. The new one we just did, Tales from the Galaxy's Edge, is more exploratory, with quests – somewhat like an MMO, if you will – and the content keeps running and changing as you go. So the time can be dramatically different for each person. I've heard of some getting through in a few hours; I've also heard of others taking longer.

There's a lot of R&D involved?

Internally, it's our R&D groups. There's the ADG, the Advanced Development Group, and our engineers at ILMxLAB in that. When we begin a project, we go into it asking ourselves, 'What's the idea, what do we want to do?' Then we'll figure out how to do that. As a result, we do heavy R&D into rendering techniques for every single project. For example, for Tales from the Galaxy's Edge, which came out on the Oculus Quest platform in late 2020, there was quite a bit of work done from a rendering aspect to get it to look as good as it does on the Quest 2. That's a mobile piece of hardware, and if you look at the graphics, we think they look better than mobile, and that's due to a lot of rendering engineers actually getting in there and making some base changes to the rendering code so we can use different tricks to make the content look better. In terms of hardware, going back to CARNE y ARENA, it was very hard to navigate VR more than, say, 15 feet, and we needed to go 50 feet. This made us do a lot of work hardware-wise in order to use motion-capture systems to capture a person's location in the VR headset and trick the VR headset into thinking it was tracking itself. So, we look at the projects and see what the creative is, then we try to figure out, backtrack, how to do that, rather than coming at it from the hardware standpoint and saying, 'This is what we have now.'

What other technology has helped advance these projects?

Right now, for animation, we rely heavily on motion capture and facial capture to make the most believable characters we possibly can. To make the story worlds look great, we rely heavily on the GPU hardware, Unreal, and on good headsets (those that truly are 90 frames or faster).
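The 90-frame target mentioned throughout translates into a hard per-frame time budget: at 90 frames per second the engine has roughly 11.1 ms to render everything, which is why fidelity gets dialed down until a scene runs at rate. A minimal sketch of that budgeting logic follows; the per-pass cost numbers are made-up placeholders, not figures from the interview.

# Rough sketch of real-time frame budgeting: at 90 fps the whole frame has
# ~11.1 ms, so the content either fits or its fidelity comes down.
# All cost estimates below are hypothetical.

TARGET_FPS = 90
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~11.1 ms per frame

def fits_in_budget(pass_costs_ms: dict[str, float]) -> bool:
    """Return True if the estimated per-frame costs fit the frame budget."""
    total = sum(pass_costs_ms.values())
    print(f"estimated frame time: {total:.1f} ms of {FRAME_BUDGET_MS:.1f} ms")
    return total <= FRAME_BUDGET_MS

if __name__ == "__main__":
    # Hypothetical per-frame cost estimates for one scene (milliseconds).
    scene = {
        "geometry": 4.0,
        "shading": 5.0,
        "characters": 3.0,
        "post/UI": 1.0,
    }
    if not fits_in_budget(scene):
        # Over budget: downgrade fidelity (fewer polygons, cheaper shaders,
        # lower-resolution textures) until the scene runs at rate.
        scene["shading"] -= 2.0
        scene["geometry"] -= 1.0
        fits_in_budget(scene)

In practice the trade-offs are far richer than a single sum, but the constraint is the same one the interview describes: offline rendering can simply take longer, while a real-time frame cannot.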
