CAS Quarterly

Summer 2016

Issue link: http://digital.copcomm.com/i/713016

their attenuation radii, their high & low roll-offs with respect to distance, the "width" of the sound as you approach the sound source... these are all factors we agonize over for every sound in the game. Fortunately, middleware, such as Wwise, allows us to exert as much (or as little) control over these factors as we choose.

Can you give me an example?

We probably spent the most time designing and iterating on how combat sounds use the 3D space in order to not only sound visceral and immersive, but to convey critical information to the player. In a game like ours [Elder Scrolls Online], scores of players and NPCs [non-player characters] can be on-screen and engaged in combat simultaneously. When that kind of aural chaos is unfolding, it's important for the player to be able to distinguish sounds that affect them from what is essentially the background din. The sound of a successful attack against an opponent—no matter how far away that opponent is—needs to cut through the mix. The directionality of one's opponents or allies needs to be as clear as possible, especially when they're not in the player's field of vision (i.e., off-screen). So we set some initial values for all those position-related factors based on our experience as gamers and developers, then we play test and adjust. Then we play test and adjust some more. Eventually, we get to a point where the audio is telling the player what they need to know—and it sounds fantastic.

Are you receiving components—such as music cues—mixed so that certain elements of a cue are already positioned in a particular place in the immersive field?

No. We do all of our positional mixing at runtime, using the Wwise middleware. It allows us to not only set the multichannel positioning and "width" for every sound element, but to update it dynamically based on various game data. So as the location of the sound source changes relative to the player, location and width (and, of course, volume and filtering and other parameters) are updated.

Given that they may be shifting, how do you keep sounds out of each other's way to avoid conflict?

In addition to the combat-specific discussion above, we carve out positional niches for all the various elements of the mix (voice-over, UI, music, ambience, etc.) and take great pains to ensure they all play nice together. While the sounds associated with on-screen elements have their position largely determined by their placement in the game world, things like voice-over, UI, and music can be freely placed wherever feels good and doesn't interfere with those on-screen elements. For example, even when VO comes from on-screen characters, a good amount is fed to the center channel for intelligibility. UI tends to be LCR. Music is essentially 4.1, with just a little bit sent to the rear channels. And those ambient sounds not tied to on-screen elements (wind, insects, disembodied voices, etc.) are scripted to be randomly placed in the multichannel field. The aim is to provide a mix that maintains clarity, even at its most dense.
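To make the runtime positional mixing described above a little more concrete, here is a minimal C++ sketch of the kind of per-frame handoff a game might do with the Wwise SDK. It is not taken from Elder Scrolls Online: the event and RTPC names ("Play_Combat_Hit", "Distance_To_Listener"), the emitter ID, and the SetEmitterPosition helper are hypothetical placeholders, while RegisterGameObj, PostEvent, and SetRTPCValue are standard AK::SoundEngine calls. The attenuation, filtering, and spread ("width") curves those calls drive are authored in the Wwise project rather than in code.

```cpp
// Illustrative sketch only: one combat emitter whose 3D position and
// distance data are pushed to the middleware every frame.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <cmath>

struct Vec3 { float x, y, z; };                  // engine-side vector type (hypothetical)

static const AkGameObjectID kEnemyEmitter = 101; // arbitrary emitter ID for this example

// Assumed engine-side wrapper around AK::SoundEngine::SetPosition(); the exact
// AkSoundPosition layout varies by SDK version, so the body is elided here.
void SetEmitterPosition(AkGameObjectID id, const Vec3& pos, const Vec3& facing);

void InitEnemyEmitter()
{
    // Every 3D voice in Wwise plays on a registered game object.
    AK::SoundEngine::RegisterGameObj(kEnemyEmitter);
}

void OnCombatHit(const Vec3& enemyPos, const Vec3& enemyFacing)
{
    SetEmitterPosition(kEnemyEmitter, enemyPos, enemyFacing);

    // "Play_Combat_Hit" is a hypothetical Wwise event; its roll-off, LPF, and
    // spread behavior over distance live in the authored attenuation settings.
    AK::SoundEngine::PostEvent("Play_Combat_Hit", kEnemyEmitter);
}

void TickAudio(const Vec3& enemyPos, const Vec3& enemyFacing, const Vec3& listenerPos)
{
    // Keep the emitter's position current so panning tracks the game world.
    SetEmitterPosition(kEnemyEmitter, enemyPos, enemyFacing);

    // Feed game data to the middleware as an RTPC so the audio team can map it
    // to volume, filtering, or "width" curves without any code changes.
    const float dx = enemyPos.x - listenerPos.x;
    const float dy = enemyPos.y - listenerPos.y;
    const float dz = enemyPos.z - listenerPos.z;
    const float distance = std::sqrt(dx * dx + dy * dy + dz * dz);
    AK::SoundEngine::SetRTPCValue("Distance_To_Listener", distance, kEnemyEmitter);
}
```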
As someone who also composes, has working as an audio director affected your creative approach?

Being both audio director and composer has tremendously changed the way I do both jobs. So much so, that I can't imagine doing only one or the other job at this point. The most obvious benefit is the elimination of any guesswork or communication errors when it comes to one of those positions understanding the needs, constraints, abilities, etc., of the other. As an audio director designing an interactive music system, I know exactly what the composer can or cannot provide as assets for that system. I understand, not just on a technical level (e.g., stem options), but on a musical level—whether it's even possible to create music that will work within the parameters and confines of the system and still come out sounding like good music. For example, I'll know how to compose for a particular combat sequence because I've played through it many times myself. On the other side of the fence, as a composer who is already intimately familiar with not only the music system I'm composing for but also every aspect of the game as a whole, I'm pretty well positioned to create what is needed with very little iteration. I'm primed to compose music that is stylistically well suited for the game. [And] since I've been working on the game day in and day out since Day One as the game's audio director, it's a pretty sweet symbiosis.

In linear media, we don't have something that we consider "middleware." Can you give an example of what it does with the music, for instance?

Sure. In the game I'm currently working on, for instance, loads of game data are gathered at runtime and fed into the combat music system, which then tells our middleware (Wwise) what to do musically—and the combat score evolves with the ongoing encounter. As easy or hard enemies are added to the fight or die off, as the player's health goes up or down, as friendly players join in to help or abandon the player, and when the fight is finally ended, all that data is fed to Wwise. Wwise will then add or remove stems from a multitrack mix, adjust the volumes of those parts, or branch into whole new musical areas altogether. The result is a musical flow that sounds engaging and appropriate to the fight, no matter what kind of turns it may take.
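As a rough illustration of the data handoff described here, the following C++ sketch pushes encounter data into Wwise via RTPCs and a state change. The parameter and state names ("Combat_Enemy_Count", "Combat_State", and so on) are hypothetical placeholders, not the names actually used on Elder Scrolls Online; SetRTPCValue and SetState are standard AK::SoundEngine calls, and the stem additions, volume moves, and musical branching they trigger are authored by the composer inside the Wwise project, not in code.

```cpp
// Illustrative sketch only: reporting combat state to an interactive music system.
#include <AK/SoundEngine/Common/AkSoundEngine.h>

struct CombatSnapshot
{
    int   enemyCount;       // enemies currently engaged
    float enemyDifficulty;  // aggregate "easy vs. hard" rating of those enemies
    float playerHealthPct;  // player health, 0-100
    int   alliedPlayers;    // friendly players currently helping
    bool  combatOver;       // true once the fight has ended
};

void UpdateCombatMusic(const CombatSnapshot& s)
{
    // Global RTPCs (no game object): the composer maps these to stem volumes,
    // added or removed layers, and transition rules inside the Wwise project.
    AK::SoundEngine::SetRTPCValue("Combat_Enemy_Count",
                                  static_cast<AkRtpcValue>(s.enemyCount));
    AK::SoundEngine::SetRTPCValue("Combat_Enemy_Difficulty", s.enemyDifficulty);
    AK::SoundEngine::SetRTPCValue("Player_Health_Pct", s.playerHealthPct);
    AK::SoundEngine::SetRTPCValue("Allied_Player_Count",
                                  static_cast<AkRtpcValue>(s.alliedPlayers));

    // A state change is what lets the score branch into a whole new musical
    // area, e.g. a resolution segment once the fight is over.
    AK::SoundEngine::SetState("Combat_State", s.combatOver ? "Resolved" : "Active");
}
```

In this division of labor, code only reports what is happening in the game; how those numbers translate into stems, volumes, and branches stays in the audio team's hands inside the authoring tool.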
