May 12, 2015

A Simple Trick That Will Make Your Dialogue Sound Better & Stand Out in the Mix

Parametric EQ Notching
Mixing dialogue for maximum impact and clarity can be challenging, especially within the context of a mix that has multiple tracks of sound effects and music.

There's one extremely simple trick, however, that can make your dialogue stand out from everything else in the mix, which makes it easier for your audience to focus on and comprehend said dialogue. It's an equalization trick called notching, and Curtis Judd has an excellent tutorial to show you how it's done.

What is notching, you ask? Well, if you imagine a parametric EQ interface -- essentially a graph where the horizontal axis represents frequency and the vertical axis represents volume -- a notch is where you pull the volume of a specific frequency down. It's as simple as that. Here's a good visual representation of that concept taken from the Parametric EQ effect in Adobe Audition. In this example, I've pulled down the frequency at 300Hz by about 10dB.

Parametric EQ in Adobe Audition

Because the human voice generally sits in a wide frequency range of roughly 50Hz to 2kHz, pulling this particular frequency down (if we assume it's a deeper male voice) on any music or SFX tracks that are clashing with your dialogue will result in more clarity and punch in that dialogue. For an added boost to really separate the dialogue from other audio in the mix, you can slightly boost the same frequencies on your dialogue tracks. Here's what that looks like.

Parametric EQ in Adobe Audition
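For readers who'd rather script this than click through Audition, here's a minimal Python sketch of the same cut-and-boost move. It uses the standard peaking-EQ ("bell") biquad recipe from the Audio EQ Cookbook rather than anything Audition-specific, and the sample rate, center frequency, and gain values are just the example numbers from above.

```python
import numpy as np
from scipy import signal

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Biquad coefficients for a parametric EQ bell filter
    (peaking-EQ recipe from the RBJ Audio EQ Cookbook)."""
    a_lin = 10.0 ** (gain_db / 40.0)        # sqrt of the linear gain
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48000
# Cut the music/SFX bed by 10dB at 300Hz...
b_cut, a_cut = peaking_eq(fs, 300.0, -10.0)
# ...and give the dialogue a gentler 3dB lift at the same spot.
b_boost, a_boost = peaking_eq(fs, 300.0, 3.0)

# Sanity-check the response at the center frequency.
w, h = signal.freqz(b_cut, a_cut, worN=[300.0], fs=fs)
print(f"gain at 300Hz: {20 * np.log10(abs(h[0])):.1f} dB")  # -10.0 dB
```

To actually run a track through either filter, feed its samples to `scipy.signal.lfilter(b_cut, a_cut, samples)`.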

This is one of those techniques where just a little bit of "notching" goes a long way. Audio can start to sound unnatural when certain frequencies are pushed too high or low in the mix, so a good rule of thumb is to start small, cutting the problematic frequency by 5dB or so. If that's not enough to clear up your dialogue, you can then try deepening that notch even more, maybe to 10dB, and boosting the same frequency on the dialogue track.

Additionally, it can be tricky to figure out which particular frequency to notch because every human voice is different (which sometimes means that you need to apply this technique multiple times if you're mixing, say, a conversation between two people with drastically different voices). You can use tools like the spectral frequency display in Audition to figure out which frequencies are most prevalent in your dialogue track, and then apply a parametric EQ to notch those frequencies. Last but not least, notching can be subjective. Some people prefer to keep their notch between 1kHz and 2kHz, whereas others will opt for lower frequencies.
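If you don't have Audition handy, a quick spectrum estimate can answer the "which frequency is most prevalent" question too. In this minimal sketch, the 220Hz tone is a hypothetical stand-in for a real dialogue recording, and Welch's method is just one reasonable way to estimate the spectrum:

```python
import numpy as np
from scipy import signal

fs = 48000
t = np.arange(fs) / fs  # one second of audio
# Hypothetical stand-in for a dialogue recording: a 220Hz tone plus a little noise.
voice = np.sin(2 * np.pi * 220 * t) + 0.05 * np.random.default_rng(0).standard_normal(fs)

# Welch's method gives a smoothed estimate of the power spectrum.
freqs, psd = signal.welch(voice, fs=fs, nperseg=8192)
dominant = freqs[np.argmax(psd)]
print(f"most prevalent frequency: about {dominant:.0f} Hz")  # near 220Hz
```

Whatever frequency comes out on top is the one to cut on the music/SFX tracks (and, optionally, gently boost on the dialogue).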

Do you guys use EQ notching to make dialogue clearer and more easily audible? Which frequencies do you prefer to use? Let us know down in the comments!      


29 Comments

I'm really being picky here, but I believe that 'notching' normally refers to a super tight Q/curve, used mainly for very surgical frequency removal. It might be a bit misleading, considering what the video and your article talk about are mainly larger 'bell curves.' It's always been two separate terms in my world, but maybe they're interchangeable nowadays! Either way, a great way to get started in audio mixing! :)

Cheers!

May 12, 2015 at 9:59AM

Kyle Delso
Director / Writer / Audio

Interesting. I've always referred to it as notching, but then again, it was taught to me by someone who definitely wasn't an "audio guy." Maybe I've had it wrong all along!

May 12, 2015 at 10:30AM

Robert Hardy
Founder of Filmmaker's Process

Yeah, a notch filter is generally for dropping one narrow, annoying frequency. The use-case above makes a lot of sense, but the name is pretty confusing.

May 12, 2015 at 10:59AM


What would be a better name for this technique? I mean, it's a slight cut or boost to a small selection of midrange frequencies, but that doesn't really roll off the tongue like "notching."

May 12, 2015 at 11:28AM

Robert Hardy
Founder of Filmmaker's Process

I might suggest 'grooving' or 'ditching.' Groove, perhaps, because of how hardwood floorboards are joined together tongue-and-groove. Ditch, perhaps, because of how water can be channelled through a ditch, but also because in Raiders of the Lost Ark, when Indiana Jones went under the ten-wheel truck, they dug a small ditch to give the stuntman more room. Take your pick?

May 12, 2015 at 4:33PM, Edited May 12, 4:35PM

Kyle J. Sawyer
Sound Designer/Editor/Mixer

"filtering" is the word of choice as in "filter the frequencies between 2k and 4k by 4dB"

May 13, 2015 at 5:02AM


Grooving, ditching?
No, never these words. Ever.
Filtering, yes.

May 15, 2015 at 5:33AM, Edited May 15, 5:35AM


Kyle, you're absolutely right: "notching" refers to a tighter cut. That was my mistake.

May 12, 2015 at 7:56PM

Curtis Judd
Audio, Lighting

Still a great tutorial. Simple enough to pique the curiosity of some who might not dabble in sound, but thorough enough to be extremely helpful and always useful. I appreciate your work, Mr. Judd! Keep on keepin' on! :)

May 13, 2015 at 1:56AM

Kyle Delso
Director / Writer / Audio

So true... good to get the feet wet. Right now, I'm miles away from the pool with regards to sound.

May 22, 2015 at 11:07AM

Kayode

I kinda use this trick too, but not as explained here. I adjust while listening and looking at the spectrum analyser. What I do is fix the EQ of the dialogue tracks first, which can mean multiple adjustments in combination with light compression. I mix the dialogue to make a good, manually normalised track. EQ adjustments can differ between people, rooms, and microphones. While I edit the track, I make mental notes of the frequency ranges I'll need to adjust later on the other tracks, and I tend to use a larger Q for the EQ on those. Then I listen on speakers, and again on my laptop speakers. If it sounds OK on both, my audio mix is done, and I only adjust the main volume to hit -23 LUFS for R128.

May 12, 2015 at 10:28AM

Wouter van Gestel
Colorist, Editor, Motion Graphics Artist, Movie Lover

What are LUFS and R128, and why -23?

Excuse my ignorance...

May 12, 2015 at 1:22PM

Adrian Graham-Smith
Camera Operator/Editor

LUFS stands for Loudness Units relative to Full Scale; a Loudness Unit (LU) is the underlying measure. R128 is the European loudness standard; its full name is EBU (European Broadcasting Union) R128. The target is -23 because the EBU researched for years to come up with a standard: a program would sit at one level in LU, then a commercial would be louder or softer. There was no balance between the two, so a standard was set. I can't tell you how many times I pressed mute when an infomercial came on because it was so painful to the ears.

There are other standards to keep in mind as well; American standards are different, and Belgian standards are also different because someone suffered hearing damage.
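A rough sketch of what "hitting -23" means in code, with a big caveat: this normalises plain RMS level, whereas a real R128/LUFS measurement adds K-weighting and gating per ITU-R BS.1770, so treat it as an illustration only.

```python
import numpy as np

def normalize_rms(x, target_db=-23.0):
    """Scale a signal so its plain RMS level lands at target_db (dB re full scale).
    NOTE: only a rough stand-in -- a true R128/-23 LUFS measurement adds
    K-weighting and gating per ITU-R BS.1770."""
    rms = np.sqrt(np.mean(x ** 2))
    return x * (10.0 ** (target_db / 20.0) / rms)

fs = 48000
t = np.arange(fs) / fs
mix = 0.5 * np.sin(2 * np.pi * 300 * t)  # placeholder "finished mix"
levelled = normalize_rms(mix)
print(f"{20 * np.log10(np.sqrt(np.mean(levelled ** 2))):.1f} dBFS")  # -23.0 dBFS
```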

May 12, 2015 at 4:08PM

Kyle J. Sawyer
Sound Designer/Editor/Mixer

Thanks Kyle, very handy to know.

May 14, 2015 at 7:54AM

Adrian Graham-Smith
Camera Operator/Editor

Harvey Norman and Domayne take note!

May 17, 2015 at 10:11PM


By the way, did anyone notice the crossing of the line?

May 12, 2015 at 6:12PM

Adrian Tan
Videographer

Yeah, big time!

May 13, 2015 at 4:51AM

Matt Carter
VFX Artist / Director / DP / Writer / Composer / Alexa Owner

I think in this case the geography is simple enough that the audience wouldn't get confused. It's just two talking heads.

May 13, 2015 at 3:27PM


Adding a band at 300Hz isn't going to do you much good. While the human voice generally sits across the range of 100Hz to 3kHz, its defining elements are around 1-2kHz. Not because that's the dominant frequency of the voice, but because that's the range we hear most clearly -- it's our ears, not our mouths, that make it so.

300Hz is one of the harder areas for us to hear and resolve. Sonically, that's where the TV in the apartment next door falls - it's muffled, and no matter what you do in that frequency range it's not going to get any clearer unless you make it significantly louder.

May 12, 2015 at 10:37PM


Might that be exactly why he boosted the level in the dialogue and reduced the level in the background track? Seems like you made a case for it.

May 13, 2015 at 12:31PM

Jonathan Gentry
DP Potomac Media

In the video the band is lifted and cut in the proper place. Robert's comments are where the 300Hz was mentioned.

May 13, 2015 at 2:07PM


Nice little trick to know while Audio mixing. Thanks for the post, Bob and thanks to Curtis for the tutorial. Cheers.

May 12, 2015 at 11:48PM

Arun Meegada
Moviemaker in the Making

In mixing bands for records, I've generally been taught to keep any band's boost or cut under 6dB. There are lots of potential phase issues that can make something start sounding odd. There are plenty of exceptions, but it's worth keeping in mind.

May 13, 2015 at 2:03PM, Edited May 13, 2:07PM


Research done at Cornell University all the way back in the sixties found that each human voice basically consists of two parts. One part carries most of the speech information, and occupies practically the same frequency band for all speakers. The second part consists of gender and status markers, as well as vowel sounds. This second part is often referred to as voice formants. For example, men have several formants (roughly "resonances" or energy peaks) below the speech information band, while women have several above. These formants mostly carry information about who we are and the state we are in, and not much about the actual words we are saying. It's the reason we say men have lower voices and women have higher voices, when in fact we all communicate using roughly the same band of frequencies. The lower or higher formant frequencies are saying "I am male" or "I am female", among other things.
If you need convincing that we all communicate in the same band of frequencies, think of how difficult it is to understand the words a soprano is singing. In the case of a soprano, she has learned to actually move her speech information to higher frequencies, where most of us are not used to hearing and interpreting it, and it makes it difficult, if not impossible, to understand.
I believe this knowledge has implications for mixing conversations. If you wish to make the information content (the words) clearer, apply the boost and cut suggested in this article to roughly the telephone band of frequencies, from 300 to roughly 3300 Hz, for any speaker. This range was carefully chosen for telephones based on considerable research into creating the clearest conversations while using the least resources in the telephone network. The subtle changes suggested in this article won't make the conversation sound like a telephone; they will simply increase the clarity.
I believe that if, on the other hand, your goal is to emphasize the virility or femininity (and perhaps other status--think Darth Vader/James Earl Jones) of a voice, rather than the words being said, apply the boost and corresponding cut to the formants below (for men) and above (for women) this "telephone" range of frequencies.
Just an aside: telephones don't actually carry the formants of male voices; these formants are all well below 300 Hz, starting at around 85 Hz, yet we still know immediately when a speaker on the phone is male. That's because the telephone does carry the pattern of the overtones of those formants, and the ear/brain is excellent at recreating the fundamental frequencies from that pattern. Experiments show we actually hear a fundamental or formant even when it is not physically present, as long as its overtones are. It is an auditory illusion.
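As an illustration of that "telephone band," here's a minimal Python sketch that builds a 300-3300Hz band-pass and prints its response inside and outside the band. A Butterworth band-pass is my stand-in here; the technique described above would use gentler bell cuts and boosts, but the band edges are the same.

```python
import numpy as np
from scipy import signal

fs = 48000
# 2nd-order Butterworth band-pass over the classic 300-3300Hz telephone band.
sos = signal.butter(2, [300.0, 3300.0], btype="bandpass", fs=fs, output="sos")

# Compare the response inside vs. outside the band.
w, h = signal.sosfreqz(sos, worN=[100.0, 1000.0, 8000.0], fs=fs)
for f, mag in zip(w, np.abs(h)):
    print(f"{f:5.0f} Hz: {20 * np.log10(mag):6.1f} dB")
```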

May 14, 2015 at 4:59PM

Eric Peters
Entrepreneur & Inventor

Excellent.

May 15, 2015 at 5:39AM


Very helpful article! Generally speaking, cutting rather than boosting frequencies is a cleaner way to add intelligibility to any sound, especially in a dense mix that includes music, dialogue, and sound effects. Phase distortion can rear its ugly head when adding dBs to any frequency, although using a linear-phase EQ will help.
Notching (narrow "Q") is usually used to eliminate feedback in a live sound setting, lessen the impact of environmental distractions like low-frequency rumble, or correct issues arising from incorrect mic selection or placement. Just an FYI: the wider the "Q," the more natural the perceived change. Adjustable "Q" is used constantly in the mixing world.
This article also touches on a critical, general rule of mixing: each sound in a complex mix should have an individual EQ "home." Consider this: a symphony is arranged in a very specific configuration to enhance the clarity of each instrument and especially how the instruments interact with each other. The violins are seated in front, then the woodwinds, then the brass... the kettle drum is in back. This creates a live mix that enables each instrument to be heard clearly, not only due to the relative volume of each instrument but also their timbre/tone, or EQ "home."
By simply identifying an individual center frequency for each sound element, and slightly cutting the overlapping frequencies of the sounds adjacent to it, you can add much clarity and impact to a mix. When done correctly, each sound is no longer "fighting" for space in the mix. Always use this technique before you reach for a volume fader to increase intelligibility. There are many other mixing techniques that could be discussed, but this one is particularly foundational. Hope this helps! Love this site!

May 16, 2015 at 3:54PM, Edited May 16, 3:54PM


Think of notch filtering as surgical, precise EQing, normally used for eliminating a specific unwanted frequency, like hum from dirty power, which ends up on recordings sometimes.

What you're describing is really just cutting (and/or boosting) frequencies with a wide Q, or bell. Some EQs have presets that set bands to a notch, which is narrow, and that can mislead people into using a narrow Q here. The result will be unnatural.

Also, filtering really means eliminating unwanted rumble or hiss, as the name implies, i.e. low-pass/high-pass filters.

One thing that should be covered is volume-riding music against dialogue, and/or using side-chain compression to "duck" the music.

One last thing: 3dB increments are a better starting point for adjusting any sound source, as that's a change the casual listener can hear.

Just my 2 cents... from an experienced post-production mixer.

:D

September 12, 2015 at 5:56AM

Dave Rivera
Chief Sound Engineer

Another method that most professionals use is referred to as "ducking." It's a simple technique that compresses the music track during speech and allows the dialogue to shine through. Side-chaining ;)

www.justinvanhout.com
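For the curious, side-chain ducking can be sketched in a few lines of Python. Everything here is hypothetical glue (a hard gain switch and made-up threshold and timing constants) rather than how any particular plugin works, but it shows the idea: follow the dialogue's envelope, and drop the music's gain whenever dialogue is present.

```python
import numpy as np

def duck(music, dialogue, fs, threshold=0.05, reduction_db=-12.0,
         attack=0.01, release=0.3):
    """Duck `music` whenever the dialogue's smoothed envelope exceeds `threshold`.
    A bare-bones side-chain sketch (hard gain switch, no ratio or knee);
    all constants here are made-up illustration values."""
    a_att = np.exp(-1.0 / (attack * fs))    # fast smoothing when level rises
    a_rel = np.exp(-1.0 / (release * fs))   # slow smoothing when level falls
    env = np.zeros_like(dialogue)
    level = 0.0
    for i, s in enumerate(np.abs(dialogue)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    gain = np.where(env > threshold, 10.0 ** (reduction_db / 20.0), 1.0)
    return music * gain

fs = 48000
t = np.arange(fs) / fs
music = 0.3 * np.sin(2 * np.pi * 440 * t)            # one second of music bed
dialogue = np.zeros(fs)                              # speech only in the 2nd quarter
dialogue[fs // 4: fs // 2] = 0.5 * np.sin(2 * np.pi * 200 * t[fs // 4: fs // 2])
ducked = duck(music, dialogue, fs)
```

A real mixer or plugin would smooth the gain change itself (and apply a ratio rather than a fixed cut), but the music-drops-under-speech behavior is the same.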

February 15, 2016 at 3:42PM, Edited February 15, 3:42PM

Justin Van Hout
Certified Audio Engineer • Sound Designer • Music Supervisor

Sorry Dave -- I just now saw your comment stating what I posted.
Great minds think alike. ;)

February 15, 2016 at 10:48PM

Justin Van Hout
Certified Audio Engineer • Sound Designer • Music Supervisor