CAS Quarterly

Summer 2016

Issue link: https://digital.copcomm.com/i/713016


[Music stems can be] muted depending on the intensity of the music needed at a given moment in-game. Or music can be pre-rendered as three or four intensity levels and cross-faded in-game. For example, the tension cue might include percussion and pad, the combat low [cue] percussion, pad, and strings, and the combat high [cue] the full mix. Tension, combat low, and combat high are all bounced from the same composition, with the same length and tempo changes, so that they can be switched seamlessly in real-time at any point in the cue.

Sound effects are more modular in how they are implemented. Let's say a weapon-firing sound is made up of five elements. In a film, a sound designer might print that sound and hand it to an editor. For a game, I would take all five of those elements that make up that sound, create 20 variations of each element, and render them all out as separate files. Then I import those files into the game engine and essentially "rebuild" what I created in Pro Tools, in the game engine. The variations are placed in random "containers" that play a different variation each time that weapon is fired. Typically, the more frequently a sound is played, the more variations are needed. This fundamental technique is used for weapons, footsteps, explosions, character efforts/vocalizations, impacts, etc. Not all sounds need variation, and there are times when you want to just render out what you created in your DAW. But the principle of modular sound design implementation is utilized in many aspects of game audio.
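To make the "random container" idea concrete, the following C++ sketch shows one way such a container could behave: each layer of the weapon sound owns its own pool of rendered variations, and every trigger picks a fresh variation per layer while avoiding an immediate repeat. The class, layer names, and file-naming scheme are illustrative assumptions, not the API of any particular engine or middleware; tools like Wwise provide this behavior through their own container objects.

// Minimal sketch of a "random container": each weapon layer owns a set of
// pre-rendered variations and returns a different one every time it fires.
// Class, layer names, and file names are illustrative, not a real engine API.
#include <cstdint>
#include <iostream>
#include <random>
#include <string>
#include <vector>

class RandomContainer {
public:
    // Assumes at least one variation is supplied.
    explicit RandomContainer(std::vector<std::string> variations)
        : variations_(std::move(variations)), rng_(std::random_device{}()) {}

    // Pick a variation at random, avoiding an immediate repeat so the same
    // file never plays twice in a row (a common middleware option).
    const std::string& Next() {
        std::uniform_int_distribution<size_t> dist(0, variations_.size() - 1);
        size_t index = dist(rng_);
        if (variations_.size() > 1 && index == last_) {
            index = (index + 1) % variations_.size();
        }
        last_ = index;
        return variations_[index];
    }

private:
    std::vector<std::string> variations_;
    std::mt19937 rng_;
    size_t last_ = SIZE_MAX;  // no variation played yet
};

int main() {
    // Five layers that make up one weapon shot, each with 20 rendered variations.
    const std::vector<std::string> layers = {"mech", "punch", "crack", "body", "tail"};
    std::vector<RandomContainer> containers;
    for (const auto& layer : layers) {
        std::vector<std::string> files;
        for (int v = 1; v <= 20; ++v) {
            files.push_back("weapon_" + layer + "_" + std::to_string(v) + ".wav");
        }
        containers.emplace_back(std::move(files));
    }

    // Each trigger of the weapon "rebuilds" the shot from one variation per layer.
    for (int shot = 0; shot < 3; ++shot) {
        std::cout << "Shot " << shot + 1 << ":\n";
        for (auto& container : containers) {
            std::cout << "  play " << container.Next() << "\n";
        }
    }
    return 0;
}

Because the five layers vary independently, 20 variations per layer already yields far more distinct composite shots than rendering complete one-shots would, which is one reason the modular approach scales well for frequently triggered sounds.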
How do you approach 3D and immersive audio as opposed to straight surround in the interactive environment?

Audio for VR is certainly a different beast than non-VR game audio, but you're not treating the sound environment that much differently. The main difference with VR is that you can now hear sounds in a 360-degree space versus just a 2D plane. So now, if you have a jet flying overhead or someone talking one level below you, you'll be able to hear that those sounds [originate] above and below you. There's really no change in how those sounds are implemented, but the playback system has now changed.

One advantage of VR is utilizing ambisonics. Third-party audio middleware tools like Audiokinetic's Wwise, as well as PlayStation's 3D Audio API, process TetraMic recordings beautifully [those created using Core Sound's TetraMic for Ambisonic recordings]. These types of recordings are great for ambience, room tone, diegetic music, and other object-based sounds placed in the game environment.

The other aspect that is different in implementing for VR audio is that you need more emitters (locations a sound is emitted from) than in non-VR. For example, think of a creaky door opening. Traditionally, you'd just render a one-shot sound of the player opening a door. But in VR, you have to go a step further and break that sound apart into individual elements. The squeak of the hinges would be attached to (or the sound emitted from) the top and bottom hinges. The door-handle rattle would emit from the handle, and the sound would also move with the door as it opens. This attention to detail really sells the immersive environment of VR.

I think the main difference between mixing in surround sound and mixing for 3D VR audio is that, in linear surround sound, you have to imagine (to some extent) where the sound is coming from and what sounds are happening off-screen. In VR, you don't have to imagine it or really even mix for it because it's all happening in real-time.

How do you approach mixing since you don't exactly know when sounds will be playing together, given that a player is making real-time choices that determine the sounds heard?

Well, there is less of a need to pre-mix a sound or music cue because you can't predict if there will be dialogue spoken over the music, or gunfire, or any number of other sound effects. If you have predetermined triggers for music in an area of a game (for example, you know a cue will play over combat), then you can mix that cue to leave headroom for explosions and weapons. Or, if a piece is being played over conversation, you could notch out some mid-high frequencies to leave space for the dialogue.

It's also important to note that music in games is repurposed in other areas of the game. So creating a composition that is dynamic in how it's orchestrated and arranged is just as important as how the music is mixed. Music that can work in various areas of the game helps with the budget and is an efficient way to fill out a soundtrack, especially for games that require hundreds of hours of music.

As someone who also composes and creates sound design, has working as an audio implementer affected your creative approach?

Ha, yes! As a composer who knows how music is implemented and all the various adaptive techniques to play back music in-game, having that knowledge can sometimes be a creative blocker. But at the same time, since I know how it's going to be implemented, I know that I need a composition that can work with just a few stems or the full mix.

In terms of sound design, having the knowledge of implementation hasn't affected the creative process, but it has made it easier to prepare assets for in-game use. An elevator sound, for example, isn't just a one-shot sound. There's a short start sound, followed by a looping elevator sound, and then a separate stop sound. Knowing how sounds will be hooked up to events and played back has really helped me in designing sounds for games. I'm always checking with level designers and asking how a certain object is working in-game.
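As a rough illustration of that elevator example, the sketch below wires a start/loop/stop asset set to two hypothetical game events. The state machine, event names, and asset names are assumptions made for the sake of the example; in a real project this hookup would live in the game engine or in middleware events rather than in hand-written C++.

// Sketch of event-driven playback for a start/loop/stop sound set.
// Asset names and the tiny state machine are illustrative assumptions.
#include <iostream>
#include <string>

class ElevatorSound {
public:
    // Called by the game when the elevator begins moving.
    void OnStartMoving() {
        if (state_ != State::Stopped) return;
        Play("elevator_start.wav");        // one-shot onset
        PlayLooping("elevator_loop.wav");  // loop that follows the start sound
        state_ = State::Moving;
    }

    // Called by the game when the elevator reaches its floor.
    void OnStopMoving() {
        if (state_ != State::Moving) return;
        StopLoop("elevator_loop.wav");
        Play("elevator_stop.wav");         // one-shot tail
        state_ = State::Stopped;
    }

private:
    enum class State { Stopped, Moving };
    State state_ = State::Stopped;

    // Stand-ins for real engine or middleware audio calls.
    static void Play(const std::string& asset) { std::cout << "play " << asset << "\n"; }
    static void PlayLooping(const std::string& asset) { std::cout << "loop " << asset << "\n"; }
    static void StopLoop(const std::string& asset) { std::cout << "stop " << asset << "\n"; }
};

int main() {
    ElevatorSound elevator;
    elevator.OnStartMoving();  // start sound plays, loop begins
    elevator.OnStopMoving();   // loop ends, stop tail plays
    return 0;
}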
