What? Huh? Why your favorite shows and films sound worse than ever

Dialogue has never been more unintelligible. Is there anything we can do, other than turn on the subtitles? In short: not really

From left to right: The Sandman (Photo: Netflix); Tenet (Photo: Warner Bros.); The Dark Knight Rises (Photo: Warner Bros.) Graphic: Rebecca Fassola

Television today is better read than watched—and frankly, we don’t have much of a choice in the matter. Over the last decade, the rise of streaming technology has led to a boom in subtitle usage. And before we start blaming aging millennials with wax in their ears, a study conducted earlier this year revealed that 50 percent of TV viewers use subtitles, and 55 percent of those surveyed find dialogue on TV hard to hear. The demographic most likely to use them: Gen Z.

Mounting audio issues on Hollywood productions have been exacerbated by the streaming era and by the endless variety of consumer audio products. Huge scores and explosive sound effects overpower dialogue, with mixers having their hands tied by streamer specs and artist demands. There is very little viewers can do to solve the problem except turn on the subtitles. And who can blame them?

“It’s awful,” said Jackie Jones, Senior Vice President at the Formosa Group, an industry leader in post-production audio. “There’s been so much time and client money spent on making it sound right. It’s not great to hear.”

Formosa is one of the many post-production houses struggling to keep dialogue coherent amid constant media fracturing. “Every network has different audio levels and specs,” Jones told The A.V. Club over Zoom. “Whether it’s Hulu or HBO or CBS. You have to hit those certain levels for it to be in spec. But it really is how it airs, and how it airs is out of our control.”

After it leaves a place like Formosa, the mix might be processed again by the streamer, and then remixed once more, so to speak, by the viewer’s device. Of course, this is the last thing anyone in the audio industry wants. “Dialogue is king,” sound editor Anthony Vanchure told us. “I want all the dialogue to be as clean and clear as possible, so when you hear that people are struggling to hear that stuff, you’re frustrated.” And yet, we still end up with the subtitles on. If we’re just going to read an adaptation of The Sandman on Netflix, why even bother making it?

[Embedded video: “The Oldest Game” scene from The Sandman (Netflix)]

“Everybody’s very unhappy about it,” said David Bondelevitch, associate professor of Music and Entertainment Studies at the University of Colorado Denver. “We work very hard in the industry to make every piece of dialogue intelligible. If the audience doesn’t understand the dialogue, they’re not going to follow anything else.”

Streamers and devices make terrible music together

With all this technology at our fingertips, dialogue has never been more incoherent, and the proliferation of streaming services has made the landscape impossible to navigate. Aside from the variety of products people watch media on, no two streamers are alike. Each one may have a different set of requirements for the post-production house.

As far as streamers go, editors say Netflix is the best for good sound; it even publishes its audio specs publicly. But the service is an outlier. “They have put an awful lot of money into setting up their own standards, whereas some of the other streamers seem to have pulled them out of their asses,” Bondelevitch said. “With some of these streamers, editors get like 200 pages of specifications that [they] have to sit there and read to make sure that they’re not violating anything.”
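(For the technically curious: delivery specs like these are typically expressed as integrated loudness targets measured per the ITU-R BS.1770 standard. Below is a minimal sketch of how a post house might sanity-check a finished mix against such a target, using the open-source pyloudnorm library; the -27 LUFS target, the 2 LU tolerance, and the filename are illustrative placeholders, not any particular streamer’s actual spec.)

    # Sanity-check a mix against a hypothetical integrated-loudness spec.
    # Requires: pip install soundfile pyloudnorm
    import soundfile as sf
    import pyloudnorm as pyln

    TARGET_LUFS = -27.0   # illustrative delivery target, not a real spec
    TOLERANCE_LU = 2.0    # illustrative allowed deviation

    data, rate = sf.read("final_mix.wav")       # float samples + sample rate
    meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)  # integrated loudness in LUFS

    delta = loudness - TARGET_LUFS
    verdict = "in spec" if abs(delta) <= TOLERANCE_LU else "OUT OF SPEC"
    print(f"Integrated loudness: {loudness:.1f} LUFS ({delta:+.1f} LU) -- {verdict}")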

Not all streamers are so forgiving. “I was at lunch with a couple of friends recently off of a mix, and they were at lunch answering emails because they did the mix, completed the mix, and everybody’s happy,” said Vanchure. “And then the director got like a screener or was able to watch it at home, you know, [on] whatever streaming service he was using. And he was like, ‘Hey, this sounds completely different.’”

[Embedded video: Neil and the Protagonist’s first meeting scene from Tenet]

Today, sound designers typically create two mixes for a film. The first is for theatrical, assuming that the film is getting a theatrical release. The other is called a “near-field mix,” which has less dynamic range (the difference between loud and quiet parts of a mix), making it more suitable for home speakers. But just because the mixes are getting better doesn’t mean we’ll be able to hear them.

“‘Near field’ means that you’re close up on the speakers, like you would be in your living room,” said Brian Vessa, the Executive Director of Digital Audio Mastering at Sony Pictures. “It’s just having a speaker near you so that what you’re perceiving is pretty much what comes out of the speakers themselves and not what is being contributed by the room. And you listen at a quieter level than you would listen to in the cinema.”

“What the near-field mix is really about is bringing your container into a place where you can comfortably listen in a living room and get all of the information that you’re supposed to get, the stuff that was actually put into the program that might just kind of disappear otherwise.”
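(To make that concrete, here is a toy sketch of the dynamic-range reduction at the heart of a near-field mix: quiet material like dialogue comes up, loud peaks come down. Real near-field mixes are made by ear on calibrated stages; the blockwise compressor below is only an illustration, and every number in it is an arbitrary placeholder.)

    # Toy blockwise compressor, the core idea behind a near-field mix:
    # measure short-term level, pull loud blocks down toward a threshold,
    # then add makeup gain so quiet dialogue comes up. Illustration only.
    import numpy as np

    def compress(samples, rate, threshold_db=-20.0, ratio=4.0,
                 makeup_db=6.0, window_s=0.05):
        out = samples.copy()
        win = int(rate * window_s)
        for start in range(0, len(samples), win):
            block = samples[start:start + win]
            level_db = 20 * np.log10(np.sqrt(np.mean(block ** 2)) + 1e-10)
            over = max(level_db - threshold_db, 0.0)         # dB above threshold
            gain_db = makeup_db - over * (1.0 - 1.0 / ratio)
            out[start:start + win] = block * 10 ** (gain_db / 20.0)
        return out

    # Quiet "dialogue" followed by a loud "explosion."
    rate = 48000
    t = np.linspace(0, 1, rate, endpoint=False)
    mix = np.concatenate([0.05 * np.sin(2 * np.pi * 220 * t),   # dialogue
                          0.80 * np.sin(2 * np.pi * 60 * t)])   # explosion

    near_field = compress(mix, rate)
    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    half = len(mix) // 2
    print(f"spread before: {rms_db(mix[half:]) - rms_db(mix[:half]):.1f} dB")
    print(f"spread after:  {rms_db(near_field[half:]) - rms_db(near_field[:half]):.1f} dB")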

Vessa wrote the white paper on near-field mixes, creating the industry standard. He believes a big part of the problem is “psycho-acoustic,” meaning we simply don’t perceive sound the same way at home and at the theater, so if a good near-field mix isn’t the baseline, audiences are left to fend for themselves.

Complicating matters, where things end up has never been more fluid. “In TV we anchor the dialogue so it is always even and clear and build everything else around that,” said Andy Hay, who delivered the first Dolby Atmos project to Netflix and helped develop standards for the service. “In features we let the story drive our decisions. A particularly dynamic theatrical mix can be quite a challenge to wrestle into a near-field mix.” With so many productions being dumped on streaming after they’re complete, audio engineers might not even know what format they’re mixing for.

And then there is the home itself to deal with. Consumer electronics give users a number of proprietary options that “reduce loud sounds” or “boost dialogue.” Sometimes they simply have stupid marketing names like “VRX” or “TruVol,” but they are essentially “motion smoothing” for sound. Those options, which may or may not be on by default from the manufacturer, attempt to respond to noise spikes in real time, usually trying to grab and “reduce” loud noises, like explosions or a music cue, as they happen. Unfortunately, they’re usually delayed, and they end up reducing whatever follows the noise instead.
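(That failure mode is easy to model. Below is a toy sketch of a “volume leveler” that can only set its gain based on the block of audio it has already measured: the explosion sails through at full level, and the gain cut lands on the dialogue right after it. All names and numbers here are illustrative.)

    # Toy model of a laggy TV "volume leveler": each block's gain is chosen
    # from the loudness of the PREVIOUS block, so the duck arrives late.
    def laggy_leveler(blocks, threshold=0.5, duck=0.25):
        gain, out = 1.0, []
        for label, level in blocks:
            out.append((label, round(level * gain, 2)))
            # Choose the NEXT block's gain from what we just heard.
            gain = duck if level > threshold else 1.0
        return out

    scene = [("dialogue", 0.2), ("explosion", 1.0), ("dialogue", 0.2)]
    for label, level in laggy_leveler(scene):
        print(f"{label:>9}: {level}")
    # Output: the explosion passes at 1.0 untouched, while the dialogue
    # that follows it is crushed from 0.2 down to 0.05.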

It’s not just the speakers that are the problem. Rooms, device placement, and white noise created by fans and air conditioners can all make dialogue harder to hear. A near-field mix is supposed to account for that, too. “I listen very intently and very quietly, because that way all of these other factors, the air conditioner, the noise next door, all the other stuff that could be clattering around and stuff starts to matter,” Vessa said. “And if I lose something, we got to bring that up.”

The long road to bad sound

The sound issues we’re experiencing today are the result of decades of devaluing the importance of clear audio in productions. Bondelevitch cites the move away from shooting on sound stages with theatrical actors as the first nail in the coffin. Sound stages provide an isolated place to pick up clear dialogue, usually with the standard boom mic “eight feet above the actors.” The popularity of location shooting made this impossible, leading to the standardization of radio mics in the ’90s and 2000s, which present their own problems. Cloth rustle, for example, is tricky to edit and leads to more ADR (automated dialogue replacement, in which lines are re-recorded in a studio), which actors and directors alike hate because it diminishes the performance given on set.

In the early days of cinema, when most actors were theatrically trained for the stage, performers would project toward the microphone. Method acting, however, allowed for more whispering and mumbling in the name of realism. This could be managed if more time were put into rehearsal, where actors could practice the volume and clarity of their lines, but very few productions have that luxury.

One name that sound editors keep bringing up for this shift is Christopher Nolan, who popularized a growly acting style through his Batman movies. The problem persisted throughout his Dark Knight trilogy, with Batman’s and Bane’s voices drawing complaints even from fans of the movies. When Bane’s voice was totally ADR’d following the film’s disastrous IMAX preview, it overpowered the rest of the movie. “The worst mix was The Dark Knight Rises,” Bondelevitch said. “The studio realized that nobody could understand him, so at the last minute they remixed it and they made him literally painfully loud. But the volume wasn’t the problem. [Tom Hardy’s] talking through the mask, and he’s got a British accent. Making it louder didn’t fix anything. It just made the movie less enjoyable to sit through.”

[Embedded video: The Dark Knight Rises, Bane’s restored undubbed voice]

Volume is an ongoing war not just among sound editors but inside the government. In 2010, Congress passed the Commercial Advertisement Loudness Mitigation (CALM) Act, enforced by the Federal Communications Commission, to lower the volume of commercials. Instead, networks simply raised the volume of the television shows and compressed the dynamic range, making dialogue harder to hear. “They’re trying to compress things so much that they can keep getting louder,” said Clint Smith, an assistant professor of sound design at the University of North Carolina School of the Arts’ School of Filmmaking, who previously worked as a sound editor at Skywalker Ranch.
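(Smith’s point can be demonstrated in a few lines: squash the dynamic range, then push the whole signal back up to the same peak, and the average loudness rises even though nothing technically clips. A toy sketch, with arbitrary numbers:)

    # Toy demo of the loudness war: compress, then re-normalize to the same
    # peak, and average (RMS) loudness jumps while the peak never changes.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(0, 0.1, 48000)   # stand-in program audio
    signal[24000:24100] = 0.99           # one brief peak near full scale

    squashed = np.tanh(4 * signal) / 4   # soft compression curve
    louder = squashed * (np.abs(signal).max() / np.abs(squashed).max())

    rms = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))
    print(f"original: peak {np.abs(signal).max():.2f}, RMS {rms(signal):.1f} dB")
    print(f"'louder': peak {np.abs(louder).max():.2f}, RMS {rms(louder):.1f} dB")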

Smith has been teaching audio engineering for five years and encourages his students to embrace subtitles and to work them into the narrative of a film in more creative ways. “What does it look like ten years down the road, 20 years down the road, where subtitles become more prevalent? Because I don’t see them as going away,” Smith asked his students. “I was kind of just curious about…how can we actually have the subtitles be part of the filmmaking process. Don’t try to run away from them.”

As unintelligible dialogue becomes more common, we’ll have no choice but to embrace the subtitle. But at what point do studios and streamers stop bothering to mix sound properly and simply assume viewers will read the dialogue? With subtitles an option on every streamer, “we’ll fix it in post” could soon become “they’ll fix it at home.”

Sound you can feel

There are some things we can do. There’s always buying a nice sound system, for instance, and even more important is setting it up properly. Most of the sound mixers interviewed recommended getting professional help with setup but also mentioned that many soundbars today come with microphones for room optimization. None sounded too convinced by soundbars, though.

“If you’re using a soundbar,” Bondelevitch said, “get the best soundbar you can afford. And if you’re listening on your earbuds or headphones, get good headphones. If it’s a noisy environment, get over-the-ear headphones; they really do isolate sound much better. And do not use noise-canceling headphones, because those really screw up the audio quality.”

But more than anything, they emphasized that this is a selling point for movie theaters. If you want good sound, there’s a place that has “sound you can feel.”

“It’s a bummer because you want the theater experience,” said Vanchure. “People aren’t going out to theaters as much nowadays because everything’s just streaming. And that’s how you want people to hear these things. You’re doing this work so you can hear this loud and big.”

Correction: An earlier version of this article misspelled David Bondelevitch’s name and stated that he taught at the University of Denver. He teaches at the University of Colorado Denver. We regret the error.
