Computer Graphics World

Edition 2 2020

Issue link: https://digital.copcomm.com/i/1277231


For as long as we have been creating digitally synthesized images, we have needed ways to describe the 3D scenes we are synthesizing in a way that is mathematical enough for computers to understand, but also understandable and manipulable by technologists and artists. This article tells the story of some of the challenges Pixar faced in describing, sharing, and transporting 3D scenes as our pipeline evolved over 25 years, from creating the first feature-length computer-animated movie to making it possible for our productions' artistic visions to keep growing in richness and collaboration.

Thus, we present Universal Scene Description (USD), Pixar's open-source software for describing, composing, interchanging, and interacting with incredibly complex 3D scenes. USD is a cornerstone of Pixar's pipeline and has been on a rapid and broad adoption trajectory, not only in the VFX industry, but also in consumer/Web content and game development.

BEFORE THERE COULD BE TOYS

When Pixar set out to make Toy Story in the early '90s, we had industry-leading, commercially available products at the front and back of our 3D pipeline – Alias' PowerAnimator at the front for modeling, and Pixar's own RenderMan at the back. But to handle the scale of making a feature-length animation, we needed to invent or improve a suite of custom tools and data formats for everything that needed to happen between modeling and rendering.

"Scale" here impacted several aspects of production that weren't quite as daunting on Pixar's earlier projects (short commercials and effects): the number of artists who needed to work simultaneously on the same project, the complexity of the environments, and the number of acting characters, assets, sequences, and shots.

We had already figured out that one important component of successfully working at feature scale was to separate different kinds of data into different files/assets.
While "appropriate format" was a consideration, more important was that by separating geometry, shading, and animation into different files, we enabled artists of various disciplines to work independently, safely, and simultaneously, and we made it easier for them to reuse components. With these and other organic improvements, Toy Story was possible, but the project also illuminated many ways in which our pipeline needed to become more scalable and more artist-friendly.

THE INDUSTRY EVOLVES & PIPELINES GET MORE COMPLEX

As more studios began making feature-length CG animation, more varied and powerful software became commercially available. That was terrific because it extended artistic reach, but it also made pipelines more complicated. The three problems of scale only grew (and still do!), but now the industry faced an additional problem of data interchange, since it was rare for any one vendor's product to read data in its full richness from another vendor's – sometimes not even products within a vendor family could speak to each other. Many VFX studios, including Pixar, built pipelines around in-house software, but with entry and exit points for data from many vendors.

Formats for 3D interchange were available, both open and proprietary, such as OBJ, Collada, and FBX, but none could satisfy the goal of rich (in terms of schema and kinds of data), universal interchange. Studios developed different strategies to deal with interchange; Pixar built a "many to many" conversion system called Mojito that defined a core set of data schemas into which you could (in theory) plug any DCC as an input, and get out a data file for any other DCC (or renderer, though we only ever implemented support for RenderMan). Mojito allowed us to adapt to new software in a reasonably modular way, but it was expensive to maintain and did not help with any of our scale problems.
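Mojito's "core schemas" design is the classic hub-and-spoke way to avoid writing a converter for every pair of formats: with N formats, translating each to and from one shared schema needs roughly 2N translators instead of N² pairwise ones. A minimal sketch of the idea (all names and toy formats here are hypothetical; Mojito's actual schemas were far richer):

```python
# Hypothetical sketch of a hub-and-spoke converter registry, in the spirit
# of Pixar's Mojito: every format converts to/from one core scene schema,
# so adding a new DCC format means writing two translators, not N new ones.

from dataclasses import dataclass, field

@dataclass
class CoreScene:
    """Stand-in for the shared core schema all formats translate through."""
    meshes: dict = field(default_factory=dict)  # name -> vertex count (toy data)

importers = {}  # format name -> callable(raw) -> CoreScene
exporters = {}  # format name -> callable(CoreScene) -> raw

def register(fmt, importer, exporter):
    importers[fmt] = importer
    exporters[fmt] = exporter

def convert(raw, src_fmt, dst_fmt):
    """Any-to-any conversion always goes through the core schema."""
    return exporters[dst_fmt](importers[src_fmt](raw))

# Two toy "formats": one stores meshes as "name count" lines,
# the other as a plain dict.
register("textfmt",
         lambda raw: CoreScene({n: int(c) for n, c in
                                (ln.split() for ln in raw.splitlines())}),
         lambda sc: "\n".join(f"{n} {c}" for n, c in sc.meshes.items()))
register("dictfmt",
         lambda raw: CoreScene(dict(raw)),
         lambda sc: dict(sc.meshes))

print(convert("cube 8\nplane 4", "textfmt", "dictfmt"))  # {'cube': 8, 'plane': 4}
```

Adding a third format to this registry costs two translators; a bespoke pairwise system would need a new converter for every existing format. The trade-off the article notes still applies: the hub schema itself must be maintained and kept rich enough for everything flowing through it.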
Finally, in 2010, the industry received a quantum leap forward on the interchange problem, when Sony Pictures Imageworks and Industrial Light & Magic released their open-source collaboration, Alembic (see "Share and Share Alike," CGW, Summer 2019 issue). Alembic was designed with rigor and deep knowledge of VFX pipelines to address the interchange of "baked" (time-sampled) geometric data, with vetted schemas for geometric primitives and transformations, and a data model that abstracted away the file format. That abstraction was critical because it allowed Alembic to gradually deploy formats like Ogawa that addressed some of the important scalability issues for VFX, such as being able to store massive amounts of animated data in a file while paying the cost (IO and network traffic) primarily for the pieces of data a particular application/node needed for its task.

Alembic arrived (not coincidentally) at a time when many studios were lighting in different applications than they were rigging and animating in, so its "bake out the rigged animation into time-sampled mesh data" approach was exactly what was needed. To keep the scope and mission of the project tight and manageable, Alembic stayed focused on interchange of "flat" data, free of any concerns about composing multiple Alembic archives together, proceduralism, execution, rigging, or even a run-time scene graph on which such behaviors could be layered. Within several years, Alembic had deeply penetrated the VFX industry.

(Image caption: While making Toy Story, an issue of scale required custom toolsets.)
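The key property described above — time-sampled data that is read lazily, so a consumer pays IO only for the samples its task touches — can be illustrated with a small sketch. This is not the real Alembic API (which is C++ with Python bindings); every class and function name here is hypothetical:

```python
# Toy illustration of time-sampled, lazily loaded geometry in the spirit of
# Alembic's data model: samples are only "read from disk" when requested,
# so cost scales with what a task touches, not with archive size.
# All names here are hypothetical, not the real Alembic API.

class SampledProperty:
    def __init__(self, name, sample_loader, sample_times):
        self.name = name
        self._load = sample_loader    # callable(time) -> data; stands in for file IO
        self.sample_times = sample_times
        self.reads = 0                # count of simulated "disk" reads

    def value_at(self, t):
        """Fetch the nearest stored sample at or before time t (held interpolation)."""
        candidates = [s for s in self.sample_times if s <= t]
        nearest = max(candidates) if candidates else self.sample_times[0]
        self.reads += 1
        return self._load(nearest)

def load_points(t):
    # Pretend this deserialized a frame's point positions from an archive.
    return [(t, t * 2.0, 0.0)]

# A fake archive holding 240 frames of animated point positions.
points = SampledProperty("P", load_points, sample_times=list(range(240)))

# A task that needs only frames 100-103 pays for 4 reads, not 240.
for frame in range(100, 104):
    points.value_at(frame)
print(points.reads)  # 4
```

The real format work (e.g. Ogawa) is about laying samples out on disk so this kind of selective access stays cheap over a network as well; the sketch only shows the access pattern the data model enables.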
