The State of the Game Audio Industry with Brian Schmidt


From tech tools to crunch time, the founder of GameSoundCon examines the trends in game audio.

Brian Schmidt is a game audio sound designer and composer with over 30 years of experience. He’s also the Founder and Executive Director of GameSoundCon – the audio industry’s premier conference on video game music and sound design. GSC is coming up October 29-30 in LA!

Register for GameSoundCon here:

REGISTER HERE

We sat down with Brian to discuss trends and improvements in the game audio industry, both technologically and culturally:



Thanks so much for the time, Brian. Going through your website, it was interesting to see that you have twenty patents. Could you share some info on those with us?

I’ve always liked tinkering/inventing new stuff, and I’m fortunate enough to have worked with companies large enough to encourage people to take time to work on things like patents. Overall, there’s a pretty big mix. 20 years ago, a lot of plumbing/technical challenges were still being tackled. These were needed to make things work efficiently enough in a game system to actually be usable — things from efficiently getting data off of a DVD at the nitty-gritty end (“Determining latency and persistency of storage devices and presenting a save/read user interface through abstracted storage”) to things we had to deal with to get surround sound working in games (“Packet Multiplexing Multi-channel Audio”), (“Audio buffers with audio effects”) to the design of high-level audio scripting languages (“Scripting solution for interactive audio generation”). One of the more controversial technologies was the basis for allowing end gamers to customize their experience by utilizing their own background music in their game (“Music Replacement in a Gaming System”). 3D audio has obviously been pretty big lately; in any environment where you have converging technologies (VR/AR wearables, 3D sound, etc.), there are a lot of opportunities to find fun/unique solutions to problems (e.g., “Mixed reality system with spatialized audio”) — which is what patents basically are...

The industry has changed quite a bit throughout your career. It seems that in the early days of game audio, you would tackle a challenge by creating the tool needed for the job. Today, we have a great set of tools, sometimes with talent dedicated to pushing those tools further for the specific project. What’s next?

Yes, there definitely was a time when the only real game audio ‘tool’ was a ‘C’ compiler. I used to keep a list on my whiteboard of my production/productivity bottlenecks and of cool interactive audio features I wanted but that my system didn’t do. If I could write a quick tool to knock something off the top of the list, I’d usually take the time to do it and was always glad I did. That said, I don’t particularly miss the days of writing music note by note in a text editor, except nostalgically!

Workflow and content pipeline will always be a challenge and an area for exploration and improvement. We have so many incredible technologies to create sound, and technologies to make sound interactive. But the end-to-end process is still somewhat clunky. A typical game audio workflow means keeping 3 or more screens going: Game, DAW and Middleware. Although tool makers are starting to make things more seamless, we’re still quite a ways from a truly smooth, integrated workflow. Really solving that problem will be a big win. Better productivity means more iterations which can mean better content, or better content without having to stay up to all hours to try out what you want (vis-à-vis your ‘crunch’ question below).
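To make that three-screen loop concrete, here is a minimal sketch of the game-side hook, using a hypothetical AudioMiddleware interface rather than any specific SDK: gameplay code only posts named events, while the sounds those events trigger are authored in the middleware tool from files rendered out of the DAW, so iterating on a sound touches all three screens.

```cpp
// A self-contained sketch (hypothetical interface, not the Wwise or FMOD API)
// of how thin the game-side code usually is: it just posts named events.
#include <cstdint>
#include <iostream>
#include <string>

// Stand-in for the middleware runtime. In a real project this would be the
// middleware SDK, and the event name would map to content authored in the
// middleware tool from audio files rendered in the DAW.
struct AudioMiddleware {
    void postEvent(const std::string& eventName, uint64_t gameObjectId) {
        std::cout << "Post \"" << eventName << "\" on object "
                  << gameObjectId << "\n";
    }
};

int main() {
    AudioMiddleware audio;
    const uint64_t playerId = 1;

    // Changing how this footstep actually sounds means going back to the DAW,
    // re-exporting, and rebuilding the middleware project: three screens.
    audio.postEvent("Play_Footstep_Dirt", playerId);
    return 0;
}
```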


"Better productivity means more iterations which can mean better content, or better content without having to stay up to all hours to try out what you want."




One other thing that sometimes gets missed is tools’ hidden dark side. Although tools such as Wwise, FMOD, Fabric, ADX-2 and Tsugi have greatly increased what we can do in game audio, they’re simultaneously facilitating and limiting. There are a lot of untried things in game audio, and sometimes using the features of these wonderful tools can subtly nudge us towards approaches that the tools do well/easily, when perhaps some totally ‘out there’ approach that isn’t supported by the tools might be even cooler. But we don’t try it because it’s so easy to do XXX, and we just move on. So a “what’s next” would definitely include systems that facilitate the straightforward without unconsciously discouraging experimentation and non-traditional solutions.

One thing I’m excited about on the techie side is that processors are getting big enough to handle some things we thought were perhaps out of the realm of game audio for a while. We’ve got 3 separate talks on procedural synthesis this year. And of course there are a whole raft of unsolved issues related to Virtual Reality and its close cousins, Augmented and Mixed Reality.
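As a rough illustration of what procedural synthesis can look like in practice (my own sketch, not material from any of those talks), here is a wind-like sound generated from filtered noise at runtime rather than played back from a recording. The gust rate, filter coefficients and output file name are made-up values for the example.

```cpp
// Sketch of procedural synthesis: a wind-like sound built from low-pass
// filtered white noise with a slow "gust" LFO, written to a mono 16-bit WAV.
// Assumes a little-endian host for the simple header write below.
#include <cmath>
#include <cstdint>
#include <fstream>
#include <random>
#include <vector>

int main() {
    const int sampleRate = 44100;
    const int seconds = 4;
    const int numSamples = sampleRate * seconds;

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> noise(-1.0f, 1.0f);

    std::vector<int16_t> samples(numSamples);
    float lp = 0.0f;  // one-pole low-pass filter state
    for (int n = 0; n < numSamples; ++n) {
        float t = static_cast<float>(n) / sampleRate;
        // Slow LFO opens and closes the filter to mimic gusts.
        float gust = 0.5f + 0.5f * std::sin(2.0f * 3.14159265f * 0.25f * t);
        float coeff = 0.01f + 0.05f * gust;      // filter coefficient, not Hz
        lp += coeff * (noise(rng) - lp);         // one-pole low-pass
        float out = lp * (0.3f + 0.7f * gust);   // louder during gusts
        samples[n] = static_cast<int16_t>(out * 32767.0f);
    }

    // Minimal RIFF/WAV header for mono 16-bit PCM.
    std::ofstream f("wind.wav", std::ios::binary);
    auto w32 = [&](uint32_t v) { f.write(reinterpret_cast<char*>(&v), 4); };
    auto w16 = [&](uint16_t v) { f.write(reinterpret_cast<char*>(&v), 2); };
    uint32_t dataBytes = static_cast<uint32_t>(numSamples) * 2;
    f.write("RIFF", 4); w32(36 + dataBytes); f.write("WAVE", 4);
    f.write("fmt ", 4); w32(16); w16(1); w16(1);          // PCM, mono
    w32(sampleRate); w32(sampleRate * 2); w16(2); w16(16); // rates, align, bits
    f.write("data", 4); w32(dataBytes);
    f.write(reinterpret_cast<char*>(samples.data()), dataBytes);
    return 0;
}
```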

And sometimes the least sexy-sounding items end up being really important: I recall a couple years ago when Nuendo added a batch-rename feature. About as un-sexy-sounding a feature as one can imagine. But it was HUGE for the game audio community, and solved a real problem.
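To give a sense of the problem that kind of feature addresses, here is a small hypothetical sketch (not Nuendo's implementation; the VO_<character>_<take>.wav convention is invented for the example) that batch-renames a folder of exported dialogue takes so the file names line up with a dialogue spreadsheet.

```cpp
// Batch-rename sketch: rename every .wav in a folder to VO_<character>_<NNN>.wav.
// Requires C++17 for <filesystem>.
#include <algorithm>
#include <cstdio>
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

int main(int argc, char** argv) {
    if (argc < 3) {
        std::printf("usage: rename_takes <folder> <character>\n");
        return 1;
    }
    const fs::path folder = argv[1];
    const std::string character = argv[2];

    // Collect .wav files and sort so take numbers are stable across runs.
    std::vector<fs::path> files;
    for (const auto& entry : fs::directory_iterator(folder))
        if (entry.is_regular_file() && entry.path().extension() == ".wav")
            files.push_back(entry.path());
    std::sort(files.begin(), files.end());

    int take = 1;
    for (const auto& src : files) {
        char name[64];
        std::snprintf(name, sizeof(name), "VO_%s_%03d.wav",
                      character.c_str(), take++);
        fs::rename(src, folder / name);
    }
    return 0;
}
```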

From a content standpoint, we’ve really had some great-sounding games the past few years. However, the game audio industry has had its challenges. We keep praising studios for their “lack of crunch” and cheering on companies that “don’t lay their employees off.” This is a pretty low standard. What can we do to change this?

Work/life and crunch are really tough challenges in games. On one hand, a small bit of crunch can be healthy — even ‘fun’. That final 2 weeks before a ship deadline, where the whole team comes together and puts in that little bit of extra ‘oomph’ in the details, can be the difference between something pretty good and something great. How often does the best guitar solo come from the 2am session?

What’s utterly broken, though, is crunch culture, where there are unending weeks upon weeks of eating the takeout dinners provided by the company. We certainly don’t want to lose that final polish and shine that just takes time and iteration to accomplish. But we need to get away from the non-stop treadmill. Games have changed from “here’s the final CD image — that game is DONE FOREVER” to some games never really being ‘done.’ It’s easy to lose that sense of satisfaction and release from the game being completed. How much can you enjoy the RTM build when you know the next day you have to start working on the “day one update?”

For those who haven’t been to GameSoundCon, what can they expect? Anything new this year for veterans to look forward to?

For those attending for the first time, expect 2 days of musical/audio/technical geekery and to drink from the firehose. It’s pretty concentrated, with a lot of sessions. Outside of the sessions, the conversations are my favorite part. In one corner of the room there may be a heated discussion on the importance of HRTF personalization for Virtual Reality audio. Next to that, you may find people conversing about what kinds of orchestration techniques are most useful when doing layered interactive music. And next to that may be someone talking about new tools to record game dialogue, or simply discussing how impossibly hard the piano part is for the Hindemith Tuba Sonata (that last one really happened).

For veterans, I’m always inspired by the talks. One thing that strikes me is that I — as a 30+ year veteran in the industry — always seem to learn something new, even when listening to some of the “Game Audio 101” talks. Listening to Tom Salta present the basics of interactive composition always seems to give me some new nugget; some new perspective on the craft I hadn’t really considered fully. We have over 70 talks this year, covering the range from “Game Audio Business Essentials” to advanced usage of procedural audio in games and detailed descriptions of interactive music for games like Crackdown 3. I’ve also found the new Research Track and Game Audio Studies track fascinating. Many of the techniques and technologies we use in games have come from research institutions (FM synthesis, which revolutionized game audio, was born out of research at Stanford University). So seeing what’s brewing in the labs is always interesting. Veterans may also want to check out the Dialogue and Performance Track sessions — even modest games these days have hundreds or thousands of lines of narrative dialogue; seeing how the pros manage that, from casting to Excel-wrangling, is a great way to see what lessons they’ve learned.

And we have Audiokinetic again this year bringing their crew for several sessions on Wwise, ranging from an introduction/overview to advanced integration techniques with Unity. So there should be pretty much something for everyone!

"One thing that strikes me is that I —as a 30+ veteran in the industry — always seem to learn something new, even when listening to some of the “Game Audio 101” talks."



Thanks to Travis Fodor for conducting this interview, and to Brian Schmidt for participating!

Follow Brian Schmidt:
Website: https://www.brianschmidtstudios.com/
Twitter: @GameSound

Follow Travis Fodor:
Linkedin: https://www.linkedin.com/in/travisfodor/
Twitter: @travisfodor

Audio for VR & AR: Not What You Think

Accessibility and Audio in Marvel's Guardians of the Galaxy

Accessibility and Audio in Marvel's Guardians of the Galaxy

How to Sound Design First-Person Shooter Gunshot Sound Effects - with Mark Kilborn