Computer Graphics World

OCTOBER 2010


Viewpoint

[Image caption: This image was rendered using a new volumetric scattering model ("micro-flakes") in conjunction with an extended form of the radiative transfer equation. The approach makes it possible to rigorously compute the subsurface scattering component of various types of scattering media while accounting for their anisotropic internal structure. Courtesy Wenzel Jakob.]

…volumetric or translucent materials that have an anisotropic structure. At SIGGRAPH Asia 2009, two methods that attempted to recover the details of hair using image-based approaches were published. One proposed a methodology for capturing the small-scale structure of real hair from photographs; the other attempted to recover the color of real hair using data acquired from photographs.

Now Hear This

So far we have looked at bridging photographs and computer graphics, especially the techniques that have been important to the film and game industries. Now let's turn our attention to a new bridging technology that may lead to a remarkable contribution in the future: the bridging of sound and graphics, often called "sound rendering." Such projects are still in the experimental stages, and no practical implementations from an industrial perspective have yet been seen. But the potential is apparent judging from recent SIGGRAPHs, particularly the 2009 conference, where a technique perfectly synchronized the physically based sound of water with the animation created by a fluid simulation. This application ("Harmonic Fluids" by Changxi Zheng and Doug L. James) attracted strong interest from a wide range of people, and it was followed by the synthesis of fracture sounds ("Rigid-Body Fracture Sound with Precomputed Soundbanks," also from Zheng and James), as well as other impressive, physically based sound-rendering methods that appeared at this year's SIGGRAPH.

Sound rendering itself has a long history. The term seems to have originated in the paper "Sound Rendering" (by Tapio Takala and James Hahn), published in 1992. The paper proposed a general framework for adequately synchronizing sound (recorded or synthesized) with CG animation (hand-drawn or procedural). Until then, most sound work had targeted fields such as virtual reality, aiming to provide a realistic virtual environment; the paper's concept was therefore new in the sense that it made the synchronization of sound and CG animation the first priority. It also suggested that ideas from CG rendering (such as raytracing) would be useful in sound synthesis because of the analogy between light and sound, which, although recognized in physics, had not yet been emphasized in CG.

CG needed to wait for the dawn of a new century before it could welcome physically based sound-synthesis approaches. In 2000, a course presentation related to audio synthesis was held at SIGGRAPH, bringing together researchers in the fields of audio synthesis and computer graphics and spawning the work that resulted the following year, when two early, physically based sound-rendering methods were published at SIGGRAPH 2001. One attempted to synthesize the sound produced by deformable solid objects by analyzing a physically based deformation simulation ("Synthesizing Sounds from Physically Based Motion" by James O'Brien, Perry Cook, and Georg Essl). The other aimed to synthesize rigid-body impact sounds so they could act as a digital sound foley ("FoleyAutomatic: Physically-Based Sound Effects for Interactive Simulation and Animation," presented by Kees van den Doel, Paul Kry, and Dinesh Pai). Here, impact sounds were driven by contact forces, computed using dynamic simulations or physically based procedural methods. The method introduced the concept of modal synthesis: an impact sound is modeled as a set of frequency-dependent bases (vibration modes) whose excited sum generates a sound foley in real time (a minimal sketch of this idea follows).
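As an illustration of modal synthesis in general, not code from the paper, here is a minimal sketch: a contact impulse excites a bank of exponentially damped sinusoidal modes, and their sum is the impact sound. The mode frequencies, dampings, and gains below are invented placeholder values.

```python
import numpy as np

def modal_impact(freqs, dampings, gains, impulse, duration, sr=44100):
    """Sum of exponentially damped sinusoids, the classic modal-synthesis
    model of a rigid-body impact. All modal data here is illustrative."""
    t = np.arange(int(duration * sr)) / sr
    sound = np.zeros_like(t)
    for f, d, g in zip(freqs, dampings, gains):
        # each mode rings at its own frequency and decays at its own rate,
        # scaled by the strength of the contact impulse
        sound += impulse * g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return sound

# hypothetical modal data for a small struck object
clink = modal_impact(freqs=[830.0, 1940.0, 3270.0],
                     dampings=[6.0, 9.0, 14.0],
                     gains=[1.0, 0.5, 0.25],
                     impulse=0.8, duration=2.0)
```

In a real system the modal data would come from measurements or from a finite-element analysis of the object, and the impulse from the physics engine's contact solver.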
These works were followed by a method that enabled the synthesis of aerodynamic sound caused by vortices, such as those created behind a stick-like object placed in a flow ("Real-time Rendering of Aerodynamic Sound Using Sound Textures Based on Computational Fluid Dynamics" by Yoshinori Dobashi, Tsuyoshi Yamamoto, and Tomoyuki Nishita at SIGGRAPH 2003). The physical information around these vortices was computed first, using fluid simulation; it was recorded in textures and used at run time to synthesize the sound in real time (see the first sketch at the end of this piece).

All the above works successfully introduced physically based insights into the sound-modeling process; however, the process by which sound radiates from vibrating surfaces was often ignored or approximated using simple formulas with large limitations, such as ignoring important wave-diffraction effects. The solution to this problem was provided in "Precomputed Acoustic Transfer" (by Doug L. James, Jernej Barbič, and Dinesh K. Pai, published at SIGGRAPH 2006), which aimed to accurately simulate the sound radiating from vibrating rigid objects. Essentially, simulating sound radiation requires solving the wave equation, which is a costly computation. The method therefore introduced virtual sound sources (called multipoles) that approximate the solution of that complicated equation. The sound produced by multipoles can be represented by very simple functions, and once those virtual sound sources have been placed during a pre-processing stage, the computation during run-time sound rendering is just the summation of these simple functions, which can be done at real-time rates (the second sketch at the end of this piece illustrates the summation).

As this is a very generic approach, it applies to a large variety of phenomena in which sound radiates from vibrating surfaces. In fact, it was an important contribution to the later computation of the sound radiation of water (where sound radiates across two different fluid layers, water and air) and of fractures (where the topology of the surface changes dramatically).

Synthesizing water sounds was especially challenging because it required a number of breakthroughs, such as learning to accurately and efficiently compute the complex sound radiation across the interface between water and air. In one of these breakthrough developments, the tiny pockets of air in water, which act as oscillators (called "acoustic bubbles"), were used as water-sound sources, and radiations of…
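First, the sound-texture idea. This is a minimal sketch of the run-time half only, under assumed data: suppose the offline CFD pass stored short pressure waveforms ("sound textures") at a few reference flow speeds, and at run time we pick the nearest texture and scale it with the current flow. The texture contents, indexing scheme, and cubic scaling law are placeholders, not the paper's actual formulation.

```python
import numpy as np

# hypothetical precomputed "sound textures": short pressure waveforms,
# one per reference flow speed (produced by an offline CFD pass)
REF_SPEEDS = np.array([5.0, 10.0, 20.0])            # m/s
TEXTURES = [np.random.default_rng(i).standard_normal(2048)
            for i in range(len(REF_SPEEDS))]        # stand-in waveforms

def aero_sound(flow_speed, n_samples):
    """Look up the nearest precomputed texture and scale it with the flow.
    The steep growth of aerodynamic sound with speed is an assumed law."""
    i = int(np.argmin(abs(REF_SPEEDS - flow_speed)))
    gain = (flow_speed / REF_SPEEDS[i]) ** 3        # assumed scaling
    tex = TEXTURES[i]
    reps = int(np.ceil(n_samples / len(tex)))
    return gain * np.tile(tex, reps)[:n_samples]    # loop the texture
```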
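Second, the run-time summation behind "Precomputed Acoustic Transfer." The sketch below evaluates only the simplest (monopole) term of a multipole expansion, with invented source positions and amplitudes; it conveys the flavor of summing very simple source functions at the listener, not the paper's implementation.

```python
import numpy as np

def pressure_at(listener, sources, k):
    """Sum the complex pressure of precomputed point-like sources at a
    listener position. Each source is (position, complex amplitude); k is
    the acoustic wavenumber (2*pi*frequency / speed of sound). Monopole
    term only, a simplification of a full multipole expansion."""
    p = 0j
    for pos, amp in sources:
        r = np.linalg.norm(listener - pos)
        # free-space Green's function of the Helmholtz equation
        p += amp * np.exp(1j * k * r) / (4 * np.pi * r)
    return p

# hypothetical precomputed sources for one vibration mode at 440 Hz
k = 2 * np.pi * 440.0 / 343.0
sources = [(np.array([0.0, 0.0, 0.0]), 1.0 + 0.0j),
           (np.array([0.1, 0.0, 0.0]), 0.3 - 0.2j)]
print(abs(pressure_at(np.array([0.0, 0.0, 2.0]), sources, k)))
```

Because each source contributes one closed-form term, evaluating the sum for a moving listener costs only a handful of operations per source per audio frame, which is what makes the run-time stage real-time.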
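Finally, the "acoustic bubbles" mentioned in the last paragraph are commonly modeled as oscillators ringing at the classic Minnaert resonance. The following is standard bubble acoustics rather than code from "Harmonic Fluids":

```python
import math

def minnaert_frequency(radius_m, pressure_pa=101325.0,
                       gamma=1.4, rho=998.0):
    """Resonant frequency of a spherical air bubble in water (Minnaert, 1933).
    gamma: heat-capacity ratio of air; rho: water density in kg/m^3."""
    return math.sqrt(3.0 * gamma * pressure_pa / rho) / (2.0 * math.pi * radius_m)

# a 1 mm bubble rings near 3.3 kHz, the familiar "plink" of dripping water
print(round(minnaert_frequency(0.001)))  # ~3287 Hz
```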
