Computer Graphics World

MARCH 2010


Pixar’s Up is one of the latest feature animations to use RenderMan’s point-based approach for color bleeding, as evidenced in the image above, but Sony’s Surf’s Up was the first. More than 30 films have used the technique for VFX and animated features.

Ambient occlusion darkens the areas that geometry itself shades: under a windowsill, for example, or a character’s nose. It is calculated from the geometry alone, not from the lights, and in some ways is self-shadowing. The demo group had implemented a version of ambient occlusion using notes from Hayden Landis’s SIGGRAPH 2002 course. (Landis, his ILM colleague Hilmar Koch, and Ken McGaugh, now at Double Negative, received a Technical Achievement Award from the Academy this year for advancing the technique of ambient occlusion rendering.)

“The only problem [the demo team] had was that it took about eight hours to compute the ambient for a 30-second demo,” Bunnell says. “It looked good, but it was still an off-line process. Basically, they baked in the shadows.”

So, with a publication date for a new GPU Gems in the offing, Bunnell decided to tackle the problem. And by then, Nvidia’s GPUs were faster and more programmable, with branching and looping built into the chip.

First, Bunnell created a point cloud from the vertices in the geometry. “I created a shadow emitter at each vertex of the geometry,” he says. “And, I had each vertex represent one-third of the area of every triangle that shared the vertex. I approximated that area, kind of treating it like a disk. Then I used a solid-angle calculation that tells you what percentage of a surrounding hemisphere the disk would obscure if you were looking at that disk. That tells you how much shadow the disk creates.”

He “splatted” the result, vertex by vertex, onto pixels on the screen, adding and subtracting depending on how dark the disks were. And then he realized he didn’t need to do that.

“Instead of splatting, I could make the emitters at each vertex be receivers,” Bunnell says. “I could go through the list of all these vertices and calculate one sum for that point, and accumulate the result at full floating-point precision. So, I made the points (where I did the calculations) do more than cast the shadow for ambient occlusion; they also received shadows from other data points.”

And that led to a breakthrough. “Since I had thrown the geometry away, I could combine points that were near each other into new emitters,” Bunnell says. “So, I would gather four points or so in an area and use them as an emitter. Then, I had a hierarchy where I combined these emitters into a parent emitter. So, if I’m far enough away from the points, I can use the sum, the total of all the children, and I don’t have to look at all the children; I can skip a whole bunch of stuff. If not, I can go down one level, and so forth. I can traverse the tree instead of going to each emitter that’s emitting a shadow value.”

The second breakthrough was in realizing that if he ran multiple passes, he could get a closer approximation each time. “I could get an accurate result without looking at the geometry,” Bunnell says. “Then I realized if I could use this for shadowing and occlusions, I could use it as a cheap light transport technique.” That made indirect illumination, which needs to know about light, possible. And he wrote about all this in GPU Gems 2.
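The machinery Bunnell describes, a disk at each vertex, a solid-angle style estimate of how much each disk shadows a point, emitters that double as receivers, and a hierarchy that lets distant clusters be treated as a single disk, fits together roughly as in the following C++ sketch. It is an illustration under those assumptions, not the GPU Gems 2 implementation: names such as Disk, Cluster, and diskOcclusion are invented here, and the form-factor formula is a common approximation rather than his exact one.

```cpp
// Illustrative sketch only, not Bunnell's GPU Gems 2 code.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One "shadow emitter": a disk standing in for the one-third share of each
// surrounding triangle's area that belongs to a vertex.
struct Disk { Vec3 pos, normal; float area; };

// A node in the emitter hierarchy: a leaf disk, or an aggregate disk
// (summed area, area-weighted position and normal) over its children.
struct Cluster { Disk agg; std::vector<Cluster> children; };

// Roughly what fraction of the hemisphere above the receiver the disk
// blocks: larger when the disk is big, close, and the two surfaces face
// each other; it falls off with squared distance.
float diskOcclusion(const Disk& e, const Vec3& p, const Vec3& n) {
    Vec3  v   = sub(e.pos, p);
    float d2  = dot(v, v) + 1e-6f;
    float inv = 1.0f / std::sqrt(d2);
    Vec3  dir{ v.x * inv, v.y * inv, v.z * inv };
    float cosE = std::fmax(0.0f, -dot(e.normal, dir)); // emitter faces receiver
    float cosR = std::fmax(0.0f,  dot(n, dir));        // receiver faces emitter
    return (e.area * cosE * cosR) / (3.14159265f * d2 + e.area);
}

// Gather occlusion at one point by traversing the hierarchy: far from a
// cluster (relative to its size), use its aggregate disk and skip the
// children; otherwise descend one level, as Bunnell describes.
float gatherOcclusion(const Cluster& c, const Vec3& p, const Vec3& n) {
    const float kFarThreshold = 16.0f;   // quality vs. speed knob (assumed)
    float d2 = dot(sub(c.agg.pos, p), sub(c.agg.pos, p));
    if (c.children.empty() || d2 > kFarThreshold * c.agg.area)
        return diskOcclusion(c.agg, p, n);
    float sum = 0.0f;
    for (const Cluster& child : c.children)
        sum += gatherOcclusion(child, p, n);
    return sum;
}

// One pass over the points: every emitter is also a receiver, and the
// result is accumulated per point in floating-point precision. Bunnell's
// multi-pass refinement (re-running this while weighting each emitter by
// how occluded it was in the previous pass) is described in the article
// but omitted here.
std::vector<float> occlusionPass(const Cluster& root,
                                 const std::vector<Disk>& points) {
    std::vector<float> occ(points.size());
    for (size_t i = 0; i < points.size(); ++i)
        occ[i] = std::fmin(1.0f, gatherOcclusion(root, points[i].pos,
                                                 points[i].normal));
    return occ;
}
```

In a full system the hierarchy would be built by clustering nearby disks, roughly four at a time as Bunnell mentions, and the pass would be repeated so that the contribution of emitters that are themselves occluded can be subtracted back out.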
The Next Step

Meanwhile, at ILM, Christophe Hery had developed a method of rendering subsurface scattering by using point clouds to speed the process. He used RenderMan, which had always diced/tessellated all geometry into micropolygons. “It does this tessellation very fast,” Hery says. “So I wrote a DSO (dynamic shared object) that could export a point cloud corresponding to the tessellation RenderMan had created. My intention was to use it only for scattering, but I learned I could store anything.”

In 2004, Hery spoke at Eurographics in Sweden about how he used point clouds for scattering, and in the audience was Per Christensen, who had joined Pixar. “He came to me and said that he wanted to implement this in RenderMan,” Hery recalls. And he did. Christensen and the RenderMan team made sure the rendering software could generate a point cloud and had the appropriate supporting infrastructure. Everything was in place for the next step.

In 2005, Rene Limberger at Sony Pictures Imageworks, where work on Surf’s Up had begun, saw Christensen at SIGGRAPH. “He asked me if I would take a look at Bunnell’s article and see if I could implement it in RenderMan,” Christensen says. So Christensen created a prototype version targeted to CPUs in a renderfarm, rather than a GPU.

“I also extended it somewhat,” Christensen says. “Mike [Bunnell] computed occlusion everywhere first, and then if something realized it was itself occluded, he would kind of subtract that out. I came up with a method that I believe is faster because it doesn’t need iterations, and it computes the color bleeding more accurately. It’s a simple rasterization of the world as seen from each point. It’s as if you have a fish-eye lens at each point looking at the world and gathering all that light. Developing the prototype was quick because the point-cloud infrastructure was already in place.”

Christensen gave Limberger that prototype implementation to test. “And, right at the same time, I got an e-mail from Christophe Hery at ILM,” he says. “He had the same request. I said, ‘Funny you should ask. I just wrote a prototype. Give it a try and give me some feedback.’ It would have been unethical for me to tell Christophe that Rene was testing it as well, so he didn’t know the guys at Sony were doing similar work. But, Christophe picked it up quickly and put it into production right away.”

Christensen considers the close collaboration with Limberger and Hery to have been very important to the process. “They are doing film production, so they knew what would be super useful,” he says. “They did a lot of testing and feedback, and suggested many improvements that I implemented.” Pixar first implemented the color-bleeding code in a beta version of RenderMan 13 in January 2006, and the public version in May.

“ILM had collaborated with Pixar for years,” Hery says, “but this was more.” The two exchanged ideas, feedback, and source-code snippets at a rapid pace, on nearly a daily basis.

Speed Thrills

Christensen, who considers himself a raytracing fanatic, ticks off the advantages this approach has over raytracing. “It’s an approximation, but raytracing is an approximation, too,” he says, “and both of them will eventually converge to the correct solution.”

“The effect is exactly the same,” Christensen continues. “But using the point cloud is faster. Raytracing is a sampling. If you raytrace to get ambient occlusion, you shoot all these rays, count how many hit and
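The sampling Christensen refers to is the standard Monte Carlo form of ambient occlusion: from each shading point, shoot a batch of rays over the hemisphere and count how many are blocked, with more rays giving a less noisy but slower result, which is the cost the point-cloud gather avoids. Below is a minimal C++ sketch under that reading; traceAnyHit is a stubbed-in placeholder, not a real renderer API.

```cpp
// Sketch of ray-traced ambient occlusion, for comparison with the
// point-based gather. traceAnyHit() is a placeholder scene query, stubbed
// out so the file compiles on its own.
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Placeholder: a real renderer would intersect the ray with the scene and
// report whether anything lies within maxDist of the origin.
static bool traceAnyHit(const Vec3& /*origin*/, const Vec3& /*dir*/,
                        float /*maxDist*/) {
    return false; // stub
}

// Estimate occlusion at point p with normal n: shoot numRays random rays
// over the hemisphere and count how many are blocked.
float raytracedAO(const Vec3& p, const Vec3& n, int numRays, float maxDist) {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    int hits = 0;
    for (int i = 0; i < numRays; ++i) {
        // Uniform direction on the sphere, flipped into the hemisphere
        // around the normal (cosine weighting would reduce noise).
        float z   = 2.0f * u01(rng) - 1.0f;
        float phi = 6.2831853f * u01(rng);
        float r   = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
        Vec3 d{ r * std::cos(phi), r * std::sin(phi), z };
        if (d.x * n.x + d.y * n.y + d.z * n.z < 0.0f)
            d = Vec3{ -d.x, -d.y, -d.z };
        if (traceAnyHit(p, d, maxDist))
            ++hits;
    }
    // Fraction of blocked directions is the occlusion estimate.
    return static_cast<float>(hits) / static_cast<float>(numRays);
}
```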
