Issue link: https://digital.copcomm.com/i/1538144
Audio Director Simon Koudriavtsev MPSE

So much of the sound design pipeline is waiting. Waiting for final animations, waiting for final gameplay timing, just so audio work can finally begin. Worse, if anything shifts late in the process, sounds must be reworked just to match new timings. Audio, almost always, comes at the end. One game director I know calls audio the "caboose of fun." I found myself asking, "What if it didn't have to be?"

I started looking for ways to decouple the sound design process from those delays, and possibly even get ahead of them. That's where granular synthesis came in. Instead of sculpting a fixed sound to match a specific event duration, I began building sound events as granular instruments. This allowed us to scrub through the sound from start to end, stretching or shrinking playback time as needed. It let me create rich, final-feeling audio based on the concept of an event or an asset, even before it existed.

Sure, for some things a loop might be enough. But many sounds need progression, for example, a generator or force field powering up, a ship landing, or a balloon inflating until it pops. Some of this can be done with traditional synths, but that limits you to the synths available in the audio engine. Granular design allows time-flexible sound design using any audio you create in your DAW.

Of course, everything has limits. A one-second source sound won't stretch well to two minutes. I've found that having source material slightly longer than the maximum expected playback time yields better results. Also, some sounds show granular artifacts more readily than others. Generally, sci-fi or fantastical sounds work great with this, while more grounded sounds can be trickier to keep natural.

We standardized on a progression value from 0 to 1, sent from the game to the audio engine. The granular synth would use that to progress from start to end, over whatever time the game dictated.
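The game-to-audio contract can be sketched outside any engine. The following is a minimal Python/NumPy illustration, not code from our pipeline: the grain size, density, and Hann windowing are arbitrary choices made for the example. It scatters overlapping windowed grains across an output of any length, with each grain reading the source at the position the 0-to-1 progression value dictates.

```python
import numpy as np

def granular_stretch(source, target_seconds, sr=48000,
                     grain_ms=80.0, density=4):
    """Stretch (or shrink) `source` to `target_seconds` with overlapping
    Hann-windowed grains. Each grain reads the source at the position a
    0-to-1 progression value dictates, mirroring the game-side contract."""
    grain_len = int(sr * grain_ms / 1000.0)
    if len(source) <= grain_len:
        raise ValueError("source must be longer than one grain")
    hop = grain_len // density              # grain spacing in the output
    out_len = int(sr * target_seconds)
    out = np.zeros(out_len + grain_len)     # headroom for the final grain
    window = np.hanning(grain_len)
    max_read = len(source) - grain_len
    for start in range(0, out_len, hop):
        progression = start / out_len       # 0..1, what the game would send
        read = int(progression * max_read)  # scrub position in the source
        out[start:start + grain_len] += source[read:read + grain_len] * window
    return out[:out_len] / (density / 2)    # rough overlap-gain compensation
```

In a shipping title this mapping lives inside the audio engine's granular instrument, and the game only drives the progression parameter; slightly randomizing each grain's read position and pitch helps mask the repetition artifacts mentioned above.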
The sound could be made ahead of time, and once a game asset started passing progression values, the sound adapted to match. No re-editing. No re-sculpting. If timing changed, even dynamically, the sound still worked. It's a mindset shift with huge implications for production. Granular design isn't just a cool technique. It's a tool for temporal flexibility. In fast-moving productions, it lets audio stay ahead of the curve instead of chasing it.

Upward Compression: A Lightweight Alternative to HDR?

In game audio, HDR (high dynamic range) systems are sometimes used to manage dynamic range, lifting quiet details when possible and ducking less important sounds when the mix gets loud. The effect can be powerful, but setup isn't simple. It requires detailed planning, careful tuning, and constant upkeep. For a fast-moving or shifting project, that's a tough sell.

That's what led me to experiment with upward compression, a technique more common in music mastering. Unlike downward compression, which reduces louder signals, upward compression lifts quieter ones. It can make sounds feel more present without the artifacts of traditional compression; you can reduce dynamic range while preserving transients.

I had an engineer create an upward compression plugin for a project that was shelved shortly afterward. So we never tested it at full scale, but it showed real promise. While I'd still call it semi-experimental, I believe it's worth exploring, especially for teams seeking simpler ways to manage dynamics in an interactive mix. I offer it here in the hope that others might try it. Upward compression might offer some of the sonic benefits of HDR, but with far less overhead.

A few notes for anyone experimenting:
• If you search "upward compression" online, you'll find many forums equating it with parallel compression. This is incorrect.
• Clever side-chaining could pay off well with this technique.
• Like any effect, it's possible to overdo it.
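The distinction from downward compression is easiest to see in code. The sketch below is not the plugin mentioned above; it's a bare illustration with arbitrary threshold, ratio, and envelope settings. Gain is applied only while the signal's envelope sits below the threshold, closing part of the distance to it, so loud material, and its transients, passes through untouched.

```python
import numpy as np

def upward_compress(x, threshold_db=-30.0, ratio=2.0,
                    env_ms=20.0, sr=48000):
    """Lift material below `threshold_db` toward it (upward compression).
    An envelope follower with instant attack and exponential release
    drives the gain, so signal above the threshold is left unchanged."""
    coeff = np.exp(-1.0 / (sr * env_ms / 1000.0))
    env = np.empty_like(x)
    e = 0.0
    for i, s in enumerate(np.abs(x)):
        e = max(s, coeff * e + (1 - coeff) * s)   # fast attack, slow release
        env[i] = e
    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    # Below threshold: shrink the distance to the threshold by `ratio`.
    # E.g., -50 dB with a -30 dB threshold and 2:1 ratio becomes -40 dB.
    below = env_db < threshold_db
    gain_db = np.zeros_like(env_db)
    gain_db[below] = (threshold_db - env_db[below]) * (1 - 1 / ratio)
    return x * 10 ** (gain_db / 20)
```

Real implementations need smoothing on the gain signal to avoid zipper noise and pumping; this is where the "possible to overdo it" warning bites, since an aggressive ratio drags the noise floor up along with the detail.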
Dynamics matter, and you probably don't want to squash them completely.

Looking Ahead: Machine Learning Tools

All three of the techniques above came from the same mindset: building systems that take on the heavy lifting so we, as designers, can focus more of our energy on making cool stuff. It's not just about efficiency. It's about shifting the human workload away from the repetitive and toward the creative. When we spend less time wrestling with the process, we spend more time shaping moments, exploring new ideas, and creating unique, emotional experiences.

That's one reason I'm excited about the future of machine learning in game audio. Not as a replacement for craft, but as a way to hand off the mundane to machines. We're already seeing this in tools that remove noise from dialogue, for example, but we're only scratching the surface. What I'm really looking forward to are tools that learn how we work, adapt to our creative flow, and take the tedium off our plate so we can stay in the artistic zone longer.

MOTION PICTURE SOUND EDITORS 107

