us to achieve the same level of accuracy as in a linear game, but without the opportunity for pick-up sessions to meet localization deadlines. Consequently, we had to get the performance right in a single recording session and to ensure our dialogue implementation system flowed smoothly.

DO: Because of this, implementing placeholder content in the build at the earliest opportunity has been essential. If we can identify issues early, we can find solutions. If we didn't do this, we would likely have to cut or delay content.

BS: This involved close coordination with other Arrowhead departments to align dialogue expectations with ongoing game updates, synchronize our milestone schedules, and request substantial technical support.

DO: In terms of implementation pipelines, everything needs to be a well-oiled machine, with the path of least resistance identified. When new voice lines are being designed, the first port of call is to determine whether existing gameplay triggers can be adapted for them, or whether new ones are required. The latter is a larger code dependency, whereas the former may not require any additional coder support at all.

BS: Maintaining and updating localized versions of dialogue can be a significant challenge as well. For simultaneous updates across all languages, we needed careful planning and coordination with localization teams. We kept our script metadata up to date, which helped us track the thousands of lines scheduled for release at the various DLC milestone dates.

DO: When implementing new content in Wwise, we always try to use or adapt existing workflows as much as possible. We purposely spent a significant amount of time ensuring that our structures were super granular and that our systemic mix systems do a lot of the heavy lifting in terms of dialogue clarity, even when the game gets very loud. For example, one way we route voice lines is to divide them into their various projection levels and send those to aux buses. This allows us to vary the processing and sidechaining relationships with other content. We also have many systems in the engine to decrease the likelihood of in-game crosstalk, culling lines from playing in situations where they are unnecessary or would clash with currently playing important content. In the rare situations where this does occur, we also have context-based VO sidechaining, which allows us to control the VO mix so that more important voice lines can duck less important ones. Because of these strong foundations, slotting new content into the mix is rather straightforward. Additional firepower from orbit can be called in when needed, a sound designer's delight!
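To make the projection-level routing described above more concrete, here is a minimal C++ sketch. The engine wrapper, bus names, and projection categories are hypothetical illustrations of the idea, not Arrowhead's actual Wwise structures or engine code.

```cpp
// Hypothetical sketch: routing voice lines to per-projection aux buses so
// each level can receive different processing and sidechain relationships.
// All names (Projection, VoiceLine, AudioEngine, bus strings) are invented.
#include <string>
#include <unordered_map>

enum class Projection { Whisper, Spoken, Shouted, Screamed };

struct VoiceLine {
    std::string eventName;   // audio event to post for this line
    Projection  projection;  // how loudly the actor delivers the line
};

// Each projection level sends to its own aux bus, so EQ/compression and
// ducking against SFX and music can be tuned per level.
static const std::unordered_map<Projection, std::string> kAuxBusByProjection = {
    {Projection::Whisper,  "VO_Aux_Whisper"},
    {Projection::Spoken,   "VO_Aux_Spoken"},
    {Projection::Shouted,  "VO_Aux_Shouted"},
    {Projection::Screamed, "VO_Aux_Screamed"},
};

struct AudioEngine {  // stand-in for the middleware integration layer
    void postEventOnAuxBus(const std::string& event, const std::string& auxBus) {
        // A real integration would set the aux send and post the event;
        // this placeholder only marks where that call would go.
        (void)event; (void)auxBus;
    }
};

void playVoiceLine(AudioEngine& engine, const VoiceLine& line) {
    engine.postEventOnAuxBus(line.eventName,
                             kAuxBusByProjection.at(line.projection));
}
```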
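Likewise, the crosstalk culling and context-based VO sidechaining could be sketched roughly as follows, with invented priority values and duck amounts rather than the game's real tuning.

```cpp
// Hypothetical sketch: cull a new line if a more important one is already
// playing, otherwise duck currently playing, less important lines.
// Priorities, the -6 dB duck amount, and the bookkeeping are illustrative.
#include <string>
#include <vector>

struct VoiceRequest {
    std::string line;
    int priority;  // higher = more important (e.g. objective callouts)
};

struct ActiveVoice {
    VoiceRequest request;
    float volumeDb = 0.0f;
};

class VoController {
public:
    // Returns false if the request is culled because a clashing,
    // higher-priority line is already playing.
    bool tryPlay(const VoiceRequest& req) {
        for (const ActiveVoice& v : active_) {
            if (v.request.priority > req.priority) {
                return false;  // culled to avoid crosstalk
            }
        }
        // Sidechain-style ducking: lower the level of less important lines.
        for (ActiveVoice& v : active_) {
            if (v.request.priority < req.priority) {
                v.volumeDb = -6.0f;  // illustrative duck amount
            }
        }
        active_.push_back({req, 0.0f});
        return true;
    }

    void onLineFinished(const std::string& line) {
        std::erase_if(active_, [&](const ActiveVoice& v) {
            return v.request.line == line;
        });
        // A real system would also restore ducked volumes here.
    }

private:
    std::vector<ActiveVoice> active_;
};
```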