
Mixing dialogue for maximum impact and clarity can be challenging, especially in a mix with multiple tracks of sound effects and music.
There's one extremely simple trick, however, that can make your dialogue stand out from everything else in the mix, making it easier for your audience to focus on and comprehend it. It's an equalization trick called notching, and Curtis Judd has an excellent tutorial to show you how it's done.
What is notching, you ask? Well, if you imagine a parametric EQ interface -- essentially a graph where the horizontal axis represents frequency and the vertical axis represents volume -- a notch is where you pull the volume of a specific frequency down. It's as simple as that. Here's a good visual representation of that concept, taken from the Parametric EQ effect in Adobe Audition. In this example, I've pulled down the frequency at 300Hz by about 10dB.
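If you're curious what that band actually does mathematically, here's a minimal Python sketch of a peaking (bell) filter built from the well-known Audio EQ Cookbook biquad formulas. The function name and parameters are purely illustrative, not taken from any particular plugin:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(samples, sample_rate, freq_hz, gain_db, q=1.0):
        """Apply one parametric EQ band: negative gain_db cuts, positive boosts."""
        a_lin = 10.0 ** (gain_db / 40.0)          # linear amplitude factor
        w0 = 2.0 * np.pi * freq_hz / sample_rate  # center frequency in radians
        alpha = np.sin(w0) / (2.0 * q)            # bandwidth term

        # Biquad coefficients from the RBJ Audio EQ Cookbook peaking filter
        b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
        a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
        return lfilter(b / a[0], a / a[0], samples)

    # The 10dB cut at 300Hz described above, on a hypothetical music track:
    # music_eq = peaking_eq(music_samples, 48000, freq_hz=300, gain_db=-10.0)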
Because the human voice generally sits in a wide frequency range between roughly 50Hz and 2kHz, pulling this particular frequency down on any music or SFX tracks that are clashing with your dialogue (assuming it's a deeper male voice) will result in more clarity and punch in that dialogue. For an added boost to really separate the dialogue from everything else in the mix, you can slightly boost the same frequencies on your dialogue tracks. Here's what that looks like.
This is one of those techniques where just a little bit of "notching" goes a long way. Audio can start to sound unnatural when certain frequencies are pushed too high or low in the mix, so a good rule of thumb is to start small, cutting the problematic frequency by 5dB or so. If that's not enough to clear up your dialogue, you can then try deepening that notch even more, maybe to -10dB, and boosting the same frequency on the dialogue track.
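In code terms, that conservative workflow might look something like this, reusing the illustrative peaking_eq sketch above (the track names and sample rate are just placeholders):

    # Step 1: start small with a gentle cut on the clashing music track
    music_eq = peaking_eq(music, 48000, freq_hz=300, gain_db=-5.0)

    # Step 2 (only if the dialogue still isn't clear): deepen the notch
    # and add a small complementary boost on the dialogue track
    music_eq = peaking_eq(music, 48000, freq_hz=300, gain_db=-10.0)
    dialogue_eq = peaking_eq(dialogue, 48000, freq_hz=300, gain_db=3.0)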
Additionally, it can be tricky to figure out which particular frequency to notch because every human voice is different (which sometimes means you'll need to apply this technique multiple times if you're mixing, say, a conversation between a man and a woman with drastically different voices). You can use tools like the spectral frequency display in Audition to figure out which frequencies are most prevalent in your dialogue track, and then apply a parametric EQ to notch those frequencies. Last but not least, notching can be subjective: some people prefer to keep their notch between 1kHz and 2kHz, whereas others opt for lower frequencies.
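If you don't have a spectral display handy, you can get a rough estimate of the most prevalent frequency in a dialogue track with a quick power-spectrum analysis. Here's a sketch; the 80Hz-1kHz search range is just an assumption for spoken voice:

    import numpy as np
    from scipy.signal import welch

    def dominant_voice_frequency(samples, sample_rate, lo_hz=80.0, hi_hz=1000.0):
        # Estimate the power spectral density of the dialogue track
        freqs, power = welch(samples, fs=sample_rate, nperseg=8192)
        # Return the most energetic frequency within the spoken-voice range
        voice_band = (freqs >= lo_hz) & (freqs <= hi_hz)
        return freqs[voice_band][np.argmax(power[voice_band])]

    # notch_hz = dominant_voice_frequency(dialogue_samples, 48000)
    # music_eq = peaking_eq(music_samples, 48000, freq_hz=notch_hz, gain_db=-5.0)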
Do you guys use EQ notching to make dialogue clearer and more easily audible? Which frequencies do you prefer to use? Let us know down in the comments!
31 Comments
I'm really being picky here, but I believe that 'notching' normally refers to a super tight Q/curve, used mainly for very surgical frequency removal. It might be a bit misleading, considering what the video and your article talk about is mainly larger 'bell curves'. They've always been two separate terms in my world, but maybe they're interchangeable nowadays! Either way, a great way to get started in audio mixing! :)
Cheers!
May 12, 2015 at 6:59AM
Interesting. I've always referred to it as notching, but then again, it was taught to me by someone who definitely wasn't an "audio guy." Maybe I've had it wrong all along!
May 12, 2015 at 7:30AM
Yeah, a notch filter is generally for dropping one narrow, annoying frequency. The use-case above makes a lot of sense, but the name is pretty confusing.
May 12, 2015 at 7:59AM
What would be a better name for this technique? I mean, it's a slight cut or boost to a small selection of midrange frequencies, but that doesn't really roll off the tongue like "notching."
May 12, 2015 at 8:28AM
I might suggest 'grooving' or 'ditching'. Groove, perhaps, because of how hardwood floor boards are sealed together with a tongue and groove. Ditch, perhaps, because of how water can be channelled through a ditch, but also because in Raiders of the Lost Ark, when Indiana Jones went under the 10-wheel truck, they created a small ditch to give the stuntmen more room. Take your pick?
May 12, 2015 at 1:33PM, Edited May 12, 1:35PM
"filtering" is the word of choice as in "filter the frequencies between 2k and 4k by 4dB"
May 13, 2015 at 2:02AM
Grooving, ditching?
No, never these words. Ever.
Filtering, yes.
May 15, 2015 at 2:33AM, Edited May 15, 2:35AM
Kyle, you're absolutely right: "notching" refers to a tighter cut. That was my mistake.
May 12, 2015 at 4:56PM
Still a great tutorial. Simple enough to pique the curiosity of those who might not dabble in sound, but still extremely helpful and always useful. I appreciate your work, Mr. Judd! Keep on keepin' on! :)
May 12, 2015 at 10:56PM
So true... good to get the feet wet. Right now, I'm miles away from the pool with regard to sound.
May 22, 2015 at 8:07AM
I kinda use this trick too, but not exactly as explained here. I adjust while listening and looking at a multi-spectral analyser. What I do is fix the EQ of the dialogue tracks first, which can mean multiple adjustments in combination with a light compressor. I mix the dialogue to make a good, manually normalised track. EQ adjustments can differ between persons, rooms, and microphones. While I edit the track, I make mental notes of the frequency ranges I'll need to adjust later on the other tracks. I tend to use a larger Q for the EQ of those other tracks. Then I listen on speakers, and listen again on my laptop speakers. If it sounds OK on both, my audio mix is done, and I only adjust the main volume to hit -23 LUFS for R128.
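If you want to script that last normalisation step, here's a minimal sketch, assuming the pyloudnorm and soundfile Python packages and a hypothetical file name:

    import soundfile as sf
    import pyloudnorm as pyln

    data, rate = sf.read("final_mix.wav")       # float samples
    meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)  # measured in LUFS
    # Gain-adjust the whole mix to the EBU R128 target of -23 LUFS
    normalized = pyln.normalize.loudness(data, loudness, -23.0)
    sf.write("final_mix_r128.wav", normalized, rate)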
May 12, 2015 at 7:28AM
What are LUFS and R128, and why -23?
Excuse my ignorance...
May 12, 2015 at 10:22AM
LUFS stands for Loudness Units relative to Full Scale; LU (Loudness Unit) is the simpler term, and LUFS is the more technical description. R128 is a loudness standard in Europe; its full name is EBU (European Broadcasting Union) R128. It's -23 because the EBU spent a few years of research coming up with a standard: a program would be at one loudness level, then a commercial would be louder or softer, and there was no balance between the two, so a standard was set. I can't tell you how many times I pressed mute when an infomercial came on because it was so painful to the ears.
There are other standards to keep in mind as well; American standards are different, and Belgian standards are also different because someone suffered hearing damage.
May 12, 2015 at 1:08PM
Thanks Kyle, very handy to know.
May 14, 2015 at 4:54AM
Harvey Norman and Domayne take note!
May 17, 2015 at 7:11PM
By the way, did anyone notice the crossing of the line?
May 12, 2015 at 3:12PM
Yeah, big time!
May 13, 2015 at 1:51AM
I think in this case the geography is simple enough that the audience wouldn't get confused. It's just two talking heads.
May 13, 2015 at 12:27PM
Adding a band at 300Hz isn't going to do you much good. While the human voice generally sits across the range of 100Hz to 3kHz, the defining elements of it are around 1k-2kHz. Not because that's the dominant frequency of the voice, but because that's the range we hear most clearly; it's our ears, not our mouths, that make it so.
300Hz is one of the harder areas for us to hear and resolve. Sonically, that's where the TV in the apartment next door falls - it's muffled, and no matter what you do in that frequency range it's not going to get any clearer unless you make it significantly louder.
May 12, 2015 at 7:37PM
Might that be exactly why he boosted the level in dialogue and reduced the level in the background track? Seems like you made a case for it...
May 13, 2015 at 9:31AM
In the video the band is lifted and cut in the proper place. Robert's comments are where the 300Hz was mentioned.
May 13, 2015 at 11:07AM
Nice little trick to know while audio mixing. Thanks for the post, Bob, and thanks to Curtis for the tutorial. Cheers.
May 12, 2015 at 8:48PM
In mixing bands for records, I've generally been taught to keep any band boost under 6dB of gain. There are lots of potential phase issues that can make something start sounding odd. There are plenty of exceptions, but it's worth keeping in mind.
May 13, 2015 at 11:03AM, Edited May 13, 11:07AM
Research done at Cornell University all the way back in the sixties found that each human voice basically consists of two parts. One part carries most of the speech information, and occupies practically the same frequency band for all speakers. The second part consists of gender and status markers, as well as vowel sounds. This second part is often referred to as voice formants. For example, men have several formants (roughly "resonances" or energy peaks) below the speech information band, while women have several above. These formants mostly carry information about who we are and the state we are in, and not much about the actual words we are saying. It's the reason we say men have lower voices and women have higher voices, when in fact we all communicate using roughly the same band of frequencies. The lower or higher formant frequencies are saying "I am male" or "I am female", among other things.
If you need convincing that we all communicate in the same band of frequencies, think of how difficult it is to understand the words a soprano is singing. A soprano has learned to actually move her speech information to higher frequencies, where most of us are not used to hearing and interpreting it, which makes the words difficult, if not impossible, to understand.
I believe this knowledge has implications for mixing conversations. If you wish to make the information content (the words) clearer, apply the boost and cut suggested in this article to roughly the telephone band of frequencies, from 300 to roughly 3300 Hz, for any speaker. This range was carefully chosen for telephones based on considerable research into creating the clearest conversations while using the least resources in the telephone network. The subtle changes suggested in this article won't make the conversation sound like a telephone; they will simply increase the clarity.
I believe that if, on the other hand, your goal is to emphasize the virility or femininity (and perhaps other status--think Darth Vader/James Earl Jones) of a voice, rather than the words being said, apply the boost and corresponding cut to the formants below (for men) and above (for women) this "telephone" range of frequencies.
Just an aside: Telephones don't actually carry the formants of male voices; these formants are all well below 300 Hz, starting at around 85 Hz. Yet we still know immediately when a speaker on the phone is male. That's because the telephone does carry the pattern of the overtones of the formants, and the ear/brain is excellent at recreating the fundamental frequencies from that pattern. Experiments show we actually hear a fundamental, a formant, even when it is not physically present, as long as the overtones are present. It is an auditory illusion.
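As a rough sketch of that telephone-band idea, borrowing the illustrative peaking_eq function from the article above (the wide Q and gentle gains are just assumptions):

    import numpy as np

    # The geometric center of the 300-3300Hz "telephone band" is ~1kHz
    center_hz = np.sqrt(300.0 * 3300.0)

    # Gentle, wide emphasis on the dialogue, with a matching cut on the music
    dialogue_eq = peaking_eq(dialogue, 48000, freq_hz=center_hz, gain_db=2.0, q=0.5)
    music_eq = peaking_eq(music, 48000, freq_hz=center_hz, gain_db=-2.0, q=0.5)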
May 14, 2015 at 1:59PM
Excellent.
May 15, 2015 at 2:39AM
Very helpful article! Generally speaking, cutting, rather than boosting, frequencies is a cleaner way to add intelligibility to any sound, especially in a dense mix that includes music, dialogue, and sound effects. Phase distortion can rear its ugly head when adding dBs to any frequency, although using a linear-phase EQ will help.
Notching (narrow "Q") is usually used to eliminate feedback in a live sound setting, lessen the impact of environmental distractions like a low-frequency rumble, or correct issues arising out of incorrect mic selection or placement. Just an FYI: the wider the "Q", the more natural-sounding the perceived change. Adjustable "Q" is used constantly in the mixing world.
This article also touches on a critical, general rule of mixing: each sound in a complex mix should have an individual EQ "home". Consider this: a symphony is arranged in a very specific configuration to enhance the clarity of each instrument and especially how they interact with each other. For example, the violins are seated in front, then the woodwinds, then the brass, and the kettle drum is in back. This creates a live mix that enables each instrument to be heard clearly, thanks not only to the relative volume of each instrument but also to their timbre/tone, or EQ "home".
By simply identifying an individual center frequency for each sound element, and slightly cutting the overlapping frequencies of other sounds adjacent to it, you can add much clarity and impact to a mix. When done correctly, each sound is no longer "fighting" for space in the mix. Always utilize this technique before you reach for a volume fader to increase intelligibility. There are many other mixing techniques that could be discussed, but this one is particularly foundational. Hope this helps! Love this site!
May 16, 2015 at 12:54PM, Edited May 16, 12:54PM
Think of notch filtering as surgical, precise EQing, normally used for eliminating a certain unwanted frequency, like hum from dirty power that ends up on recordings sometimes.
What you're describing is really just cutting (and/or boosting) frequencies with a wide Q, or bell. Some EQs have presets that set bands to a notch, which is narrow, and that can mislead people into using this narrow Q; the result will sound unnatural.
Also, filtering is really about eliminating unwanted rumble or hiss, as the name implies, i.e. low-pass/high-pass filters.
One thing that should be covered is volume-riding music against dialogue, and/or using side-chain compression to "duck" music.
One last thing: 3dB increments are a better starting point for adjusting any sound source, as that's a change the casual listener can hear.
Just my 2 cents... from an experienced post-production mixer.
:D
September 12, 2015 at 2:56AM
Another method that most professionals use is referred to as "ducking". It's a simple method that compresses the music track during speech and allows the dialogue to shine through. Side-chaining ;)
www.justinvanhout.com
February 15, 2016 at 12:42PM, Edited February 15, 12:42PM
Sorry Dave -- I just now saw your comment stating what I posted.
Great minds think alike. ;)
February 15, 2016 at 7:48PM
Whoops, just posted this again, then saw this. Nice call!
May 31, 2017 at 2:24PM
Also, another technique which can be used in addition to this is side-chain compression, or "ducking". It's super simple to do and allows the dialogue to have room in the mix. Put a compressor plug-in on your music track's effects chain and then route the vox track signal through its side-chain input; every time the vox track kicks in, the music track gets compressed out of the way. It also sounds great in hip hop and electronic music for pairing a kick and bass together without a lot of bass buildup or muddying of the low end!
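Here's a bare-bones sketch of that ducking behavior in Python, with a simple envelope follower standing in for a real side-chain compressor (the threshold and release values are placeholders):

    import numpy as np

    def duck_music(music, vox, sample_rate, threshold=0.02, ducked_gain=0.3,
                   release_ms=250.0):
        # Assumes music and vox are equal-length mono numpy arrays
        coeff = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
        env, g = 0.0, 1.0
        gain = np.ones(len(music))
        for i in range(len(music)):
            env = max(abs(vox[i]), env * coeff)        # fast attack, slow release
            target = ducked_gain if env > threshold else 1.0
            g += (target - g) * (1.0 - coeff)          # smooth gain to avoid clicks
            gain[i] = g
        return music * gain

    # ducked = duck_music(music_samples, vox_samples, 48000)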
May 31, 2017 at 2:23PM