Computer Graphics World

Edition 2 2020

Issue link: http://digital.copcomm.com/i/1277231

simple uses are easy to understand, and the complete behavior can be described as the recursive LIVRPS (pronounced "liver peas") algorithm, documented on openusd.org. We refer you to the website for deeper exploration of USD's composition arcs, but we will describe several of them here as they pertain to addressing the scalability problems we discovered way back on Toy Story. Note that each of the composition arcs can be used to solve other problems as well, and one of the things in which we have invested heavily is trying to make sure that any possible combination of composition arcs behaves in reasonable ways.

Layering for Multi-Artist Collaboration – Layering in USD is similar to layers in Photoshop, except that in USD each layer is an independent asset, and often, each layer will be "owned" by a different artist or department in a pipeline. While the modes by which data in different layers can be merged together are much more restricted in USD than they are in Photoshop, layer compositing allows multiple artists to work simultaneously in the same scene without fear of destroying work that another artist is doing. One artist's work may "override" another's because their layer has higher precedence, but since layer precedence in a sequence or shot in a CG pipeline often corresponds to stages of the pipeline, this is not often a problem. When it is, the fact that each artist works non-destructively with respect to other artists' work means we can deploy a range of tools to handle situations in which something unexpected happens.

Referencing and Variant Sets for Asset Complexity – Referencing in USD is not dissimilar to referencing features in packages like Autodesk's Maya, but more powerful in that careful consideration has gone into specifying how multiple references can combine as we build up assets, aggregate assets, sequences, and shots via chained and nested application of referencing. References allow us to build simple or complex "component" assets out of modular parts. References also allow us to build up environments out of many references to individual component assets – some of the costs will be shared at run time among many references to the same asset, and when we enable those references to be instanced, that sharing increases substantially. References allow us to bring environments into sequences and shots. At all "levels of referencing," USD allows you to express overrides on the referenced scenes in a uniform way, so once you learn the basics of "reference and override," you can use it to solve many problems.

One significant source of pipeline complexity is the number of "variations" of published assets that are typically required to provide the level of visual richness that audiences expect from modern CG films. USD provides variant sets as a composition arc that allows multiple different versions of an asset to be collected and packaged up together, providing an easily discoverable "switch" for users of the asset to select which variation they want. This switch (called a Variant Selection) is available in the USD scene regardless of how "deeply" (via referencing and layering) it was defined in the scene's composition structure. One of the most powerful aspects of Variant Sets is that they can vary anything about an asset, and an asset can have multiple of them, nested or serial. In Pixar's pipeline, it is common for assets to have a modeling variant set, one or more shading variant sets, and rigging and workflow LOD variant sets.
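To make layering concrete, here is a minimal sketch using USD's Python bindings. The layer names (layout.usda, animation.usda, shot.usda) are hypothetical stand-ins for layers that a real pipeline would manage through its own asset system.

    from pxr import Sdf, Usd

    # Hypothetical department layers; in practice each would be authored
    # by a different artist or tool.
    Sdf.Layer.CreateNew("layout.usda").Save()
    Sdf.Layer.CreateNew("animation.usda").Save()

    # The shot layer stacks them. Earlier entries in subLayerPaths are
    # stronger, so animation opinions win over layout's without ever
    # modifying layout.usda itself.
    shot_layer = Sdf.Layer.CreateNew("shot.usda")
    shot_layer.subLayerPaths.append("animation.usda")
    shot_layer.subLayerPaths.append("layout.usda")
    shot_layer.Save()

    # Open the composed result; edits go to whichever layer is the
    # stage's current edit target, leaving the other layers untouched.
    stage = Usd.Stage.Open("shot.usda")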
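In the same illustrative spirit, here is a small sketch of "reference and override" combined with a variant set. The asset and prim names are made up for the example rather than reflecting any particular studio's conventions.

    from pxr import Usd, UsdGeom

    # Hypothetical component asset that carries a modeling variant set.
    asset_stage = Usd.Stage.CreateNew("chair.usda")
    chair = asset_stage.DefinePrim("/Chair", "Xform")
    asset_stage.SetDefaultPrim(chair)

    vset = chair.GetVariantSets().AddVariantSet("modelingVariant")
    for name in ("standard", "worn"):
        vset.AddVariant(name)
        vset.SetVariantSelection(name)
        with vset.GetVariantEditContext():
            # Opinions authored here are stored inside the selected variant.
            UsdGeom.Cube.Define(asset_stage, "/Chair/Geom")
    asset_stage.GetRootLayer().Save()

    # An environment that references the component and overrides it
    # non-destructively, including which variation this copy uses.
    set_stage = Usd.Stage.CreateNew("diner_set.usda")
    chair_1 = set_stage.DefinePrim("/DinerSet/Chair_1", "Xform")
    chair_1.GetReferences().AddReference("chair.usda")
    chair_1.GetVariantSets().GetVariantSet("modelingVariant").SetVariantSelection("worn")
    set_stage.GetRootLayer().Save()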
And More – USD also provides composition arcs for broadcasting edits from one prim to many others (Inherits); special references called Payloads for deferring the loading of parts of a scene, to allow users to craft manageable working sets; scene-graph instancing to not only control the weight of the scene graph, but be able to reliably inform renderers and other consumers what things in the scene can be processed identically; and many other features to facilitate non-destructive editing of large scenes.

HYDRA IMAGING ARCHITECTURE

One of the key tools that allowed TidScene to succeed as a project was an inspection tool called tdsview. The tool allowed users to inspect the contents of a TidScene file visually in 3D and gave a very fast way to debug shots. As we were going to replace TidScene with USD, we needed to provide a corresponding tool, with perhaps loftier ambitions. The heart of such a tool is a fast 3D viewport, and we wanted to build as good a viewport as we possibly could. Also by then, some of us were beginning to itch to replace Presto's imaging technology. We saw a wonderful opportunity to build a generic imaging architecture that we could plug into the various proprietary tools that needed a 3D viewport. So, Hydra was born: a multi-headed beast, each head representing any one of our scene formats and associated tools.

Hydra was originally a state-of-the-art OpenGL-based render engine capable of displaying feature-film scale assets in as close to real time as we could get. Supporting the idea that Hydra should give high-fidelity feedback to artists, we looked to RenderMan, our production final-frame renderer, for the ground truth of how certain scene geometries and parameters ought to be interpreted. Along with the ability to embed Hydra into as many applications as needed, we began to fundamentally expand on the goals that OpenSubdiv pioneered: high-fidelity, real-time feedback that looked consistent everywhere.

Today, Hydra has grown into a much richer architecture that supports not only multiple "heads" (scenes and applications), but also multiple "tails" (renderers). We factored out the OpenGL renderer into its own Hydra back end that we now call Storm (though many folks still say "Hydra" when they mean to say "Storm" – and we've had to come to terms with that; well, we've mostly come to terms with that). We've also implemented a RenderMan back end for Hydra. We had already spent an enormous effort integrating Hydra into our various applications, like Presto, and now, almost for free, we are able to see the scene as rendered by RenderMan. This has the promise to be transformational for our users, but we'd be remiss if we suggested that we're already able to fully benefit from this technology. Truthfully, we're not yet able to take full advantage of its potential for several reasons. Chief among them is that some data required for high-fidelity rendering isn't available in our Presto scenes early enough in production. Similarly, our scenes are transformed significantly before they hit the renderer for final-frame production. We know we're not done yet, and we're excited by the future potential workflows we'll be able to achieve.
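As a small illustration of the "multiple tails" idea (and assuming a USD build that ships the UsdImagingGL Python module), you can ask Hydra which render delegates are available; Storm is typically among them, and delegates such as RenderMan's appear when they are installed.

    from pxr import UsdImagingGL

    # Enumerate the Hydra render delegates this build can load and
    # print their human-readable names.
    for plugin_id in UsdImagingGL.Engine.GetRendererPlugins():
        print(plugin_id, UsdImagingGL.Engine.GetRendererDisplayName(plugin_id))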
PRACTITIONER TOOLSET

USD was built by practitioners, for practitioners, and one of the ways in which that