(Note: The question below was intended to refer to music tracks under VO in a commercial or promo, but I did not make that clear in the question. We received a few responses regarding processing music that’s played on air and thought they were interesting enough to include.)

Q It Up: Do you use EQ and/or dynamics processing on your music tracks? For example, do you use EQ to boost highs or bass, or perhaps subtractive EQ in the mids to allow the VO to punch through? What about dynamics processing on music tracks, or perhaps on a bus? What about other non-voice elements such as sound effects and imaging FX? Do you process these? If you answer yes to any of the above, explain your reasoning and methods/plug-ins used, and feel free to add any other comments.

John Weeks, John Weeks Voice Overs: Yes, depending on the music track, I might use subtractive EQ. If the track is midrange heavy, I'll drop out some of those mid frequencies to allow more room for the voice over. I use the parametric EQ that comes with Adobe Audition to do this.

I also add effects to sound effects on occasion. For example, to make a hit sound bigger, I add some reverb. I usually just play around with effects on those elements until I get the sound I'm looking for.

As far as dynamics processing, I use some compression on the final mix, mostly with the "Multiband Compressor" in Adobe Audition. I go pretty mild with it. The same plugin also lets me tweak the EQ and use its brick wall limiter.

Art Hadley: I’m pretty free and easy about using EQ on my voice (and others). I’m a little bassy on the Neumann U87, so I keep the bass cut in AND drop a few dB of bass on the mixer. And sometimes, depending on the music, the delivery, the final venue (like phone messages vs. trade show announcements vs. broadcast commercials) and the context of the spot, the SFX need some EQ touchup too. But honestly, in decades of audio production, I don’t think it’s ever occurred to me to EQ the music. I suppose that if I’ve been unhappy with a music track’s frequency spectrum in the past, I’ve probably just selected new music.

Ben Thorgeirson, Ben Thorgeirson Voice Over: I look at it on an as-needed basis. Just yesterday I lowered the mid-highs on a music bed because it was just too powerful and drowned out the voice. The same goes for FX: if it's one that sounds like it was recorded in 1950, I'll usually throw some EQ and a limiter on it.

Drake Donovan, Drake Donovan Creative Services, Warren, OH: I used to never touch the music tracks, or SFX for that matter, because they’ve already been processed enough by their original creators. However, thanks to RAP Mag and Dave Foxx’s Production 212 column, I learned about what I like to call the “mid-range trough.” Basically, it amounts to dropping the frequencies in the music that compete with the human voice. I usually start around 150 Hz and end the dip around 3 kHz. This way I can keep the gain of the music higher without washing out the V/O.
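
For anyone who wants to experiment with the trough outside a DAW, here's a minimal Python sketch using the standard RBJ cookbook peaking filter. The center frequency is just the geometric mean of Donovan's 150 Hz and 3 kHz endpoints; the depth and Q are illustrative guesses, not his settings.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """RBJ cookbook peaking EQ; negative gain_db gives a cut."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

fs = 48000
music = np.random.randn(fs * 5)        # stand-in for a music bed
f0 = np.sqrt(150 * 3000)               # ~670 Hz, geometric center of the trough
trough = peaking_eq(music, fs, f0, gain_db=-4.0, q=0.3)  # broad, gentle dip
```

A low Q (here 0.3) is what makes the dip span the whole 150 Hz-3 kHz range rather than notching out a single frequency.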

Dave Savage, Vice President | Creative Services Group, iHeartMedia: All music tracks are different in the way they are processed, and obviously in their frequencies, so I always do a rough mix to hear how the VO and bed sound without doing anything other than my usual VO processing. If the voice is competing with the bed, I’ll do a frequency analysis on the voice. The most prominent frequencies on the VO are the ones I’ll lower a bit on the bed. That usually helps it cut through pretty well.
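
Savage's analysis step can be roughed out in code, too. A sketch, assuming mono float arrays; the search range and the number of peaks reported are illustrative assumptions, not his actual settings:

```python
import numpy as np
from scipy.signal import welch

def prominent_frequencies(vo, fs, n_peaks=3, fmin=150.0, fmax=4000.0):
    """Return the strongest frequencies in the VO's power spectrum."""
    freqs, psd = welch(vo, fs=fs, nperseg=4096)
    keep = (freqs >= fmin) & (freqs <= fmax)   # ignore rumble and air
    freqs, psd = freqs[keep], psd[keep]
    strongest_first = np.argsort(psd)[::-1]
    return freqs[strongest_first[:n_peaks]]

fs = 48000
vo = np.random.randn(fs * 3)                   # stand-in for a voice track
print(prominent_frequencies(vo, fs))           # candidate cuts for the bed
```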

Andrew Frame, BAFSoundWorks: I'll adjust level and do some limiting to flatten out the overall volume of the audio so I don't have to do a lot of fiddling with the track volume. I don't want to kill the dynamics, but I do want to even it out so the v/o isn't fighting with it. I will use compression if there's a really obnoxious range difference in the audio, something that a limiter would make sound poopy. But that is rare.

Dennis McAtee: I have a track with a high-cut and mid-cut EQ curve. For jingles, concert spots and such, I split the music file and move the part of that file that goes under the voiceover to the EQ'd track. If the voiceover is properly processed, the underlying music can be heard without getting in the way. Usually, I don't add dynamic processing to music, most of which is compressed and limited into a rectangle nowadays. 

Russell More, Island Radio/Jim Pattison Broadcast Group, Nanaimo, BC: I don't always process music tracks... in fact, very rarely do I. The only time I do is when I'm finding a voice especially difficult to "seat" into a mix.

If this is happening, I use Adobe's stock Graphic EQ to lower the frequencies on the music track around the 2.5-8 kHz mark. Obviously, depending on the voice, that range can vary slightly, but not usually by much. And I only lower them enough to help bring the voice forward, never going so far as to clearly alter the music track.

I always figure someone hopefully more knowledgeable than me took the time to mix a piece of music, so they must have done it that way for a reason, right?

Chris Diestler, Hutton Broadcasting, Santa Fe, NM: Sometimes I will play with the EQ on a bed behind a voice in a commercial, especially to cut the high end and allow the voice to ride on top if there's an instrument with a lot of high-frequency content. If the music needs to recede even further into the background, I may throw some effects on it (in Audition, FX > Studio Reverb > Drum Plate [small] is the most common – this is also a good trick to chop off the music but have it echo out).

I usually shy away from processing music tracks if they're playing on the air as a full song, because the record producer has almost always saturated that signal anyway. I have heard one top-ranked country station in the northwest U.S. that speeds up all their songs (like we used to on top 40) by about 2 percent just to make them sound "brighter." Not a fan.

Joshua Mackey, www.MackeyVoiceTalent.com: I usually add some EQ and light processing to music tracks in spots. Typically, it's reductive EQ, taking out some of the upper-mids so the vocal sits nicely within the spot. This can prevent the music from overpowering the vocal while still allowing the music to play a prominent role in the spot (when appropriate). Regarding other processing, I have found, at times, that a particular mix of a specific music track and a specific vocal interfere with each other too much due to some higher-end elements in the music. In these cases, I've often added a de-esser to the music track just to take out some of the sibilance that can compete with the vocal. Beyond that, the final compression and/or limiting on the mix is the only additional processing I'll add to the music bed. Sound effects are a different story. Depending on what the sound effects are used for, they may be EQ'd, have reverb or echo added, be compressed, or get anything else that makes sure they're doing what they need to do in the mix.

Jay Rose, CAS, Clio/Emmy Winning Sound Design: My biggest issue is when a centered vocal or midrange solo instrument is fighting the voice. So of course I'll pull out the mids... but I'll also frequently use a Blumlein Shuffler to lower the center, pulling the mids back up when the voice pauses.

A Blumlein Shuffler, if you don't know, converts between left-right stereo and mid-side. It's symmetrical -- two shufflers in series give you back your original signal (with some level adjustment) -- but you can manipulate the sounds in between. It's incredibly easy to implement in almost every platform or DAW. I use a VST/AU shuffler plug-in I threw together in about ten minutes, using the free SonicBirth.
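
The math behind a shuffler is just sum and difference. A minimal Python sketch, with the 0.5 factors chosen so that a round trip comes back at exactly unity gain (the level adjustment mentioned above):

```python
import numpy as np

def lr_to_ms(left, right):
    """Left/right stereo to mid-side."""
    return 0.5 * (left + right), 0.5 * (left - right)

def ms_to_lr(mid, side):
    """Mid-side back to left/right."""
    return mid + side, mid - side

left, right = np.random.randn(48000), np.random.randn(48000)
mid, side = lr_to_ms(left, right)
mid *= 0.7                      # e.g. lower the center while the VO speaks
left2, right2 = ms_to_lr(mid, side)
```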

For an interesting (or funny, or disturbing) example of shuffling, go to www.jayrose.com/dv/quiz1.html. You'll hear a simulated very-late-night performance of "Hotel California", where the Eagles' band slowly drifts into a different key from their vocals!

Anyway, I recently developed a new technique. I was doing a piece about a singer-songwriter, and in one part she does a long voice-over mixed with her own acoustic vocal. Total train-wreck to mix, and since this was a theatrical film with a Director! and Producers!, I didn't have the option of moving the elements.

So I shuffled her song and ran the center through a multiband processor. I tuned one band to where the consonants in her singing were interfering most with the voice-over, and set the band to expand with a high ratio and slow attack time. Basically, it dipped the initial consonants but let the sustained musical notes come through. Shuffled it back to stereo, and it mixed like a fine cocktail.

For what it's worth, I used iZotope's RX Final Mix... and also brought out the bass notes for some extra oomph where it wouldn't interfere. But other plug-ins will also work. The attached screenshot shows the settings, with an inset for how it's all hooked up. Bear in mind that Final Mix's interface doesn't exactly show an expander; in this case it works like an automatic equalizer for the critical frequencies.


Jay Helmus, Newcap Radio, Richmond, British Columbia, Canada: I will often process my music beds, yes. An aggressive high-pass filter can serve to highlight a thought, or perhaps create contrast between two transitions. If the voice is having a hard time cutting through the mix, I might also dull the highs to allow the voice to punch through a bit. If there's an annoying frequency somewhere, I might notch it out.

But when it comes to the actual music that plays on the station playlist though, I don't recommend processing that for most genres. It's already compressed and limited to the max. Anything we do to it will likely just degrade the quality and make the tracks sound worse. I will sometimes use subtractive EQ, or perhaps add an effect if it serves a purpose... but seldom will I ever apply compression or limiting if I can avoid it. They are loud enough.

Kyle Whitford: I try very hard to never process music. I figure the producer knows how to make it sound good and has gone to some trouble to tweak it for dynamic response in his / her own studio.

Of all the things that can get tiring due to processing, music is a huge one.

The argument I've seen recently in the news regarding digitized music and its tiring effect is nothing new. It's been going on since digital came in.

Producers DO have to meet level standards and punch it up enough to cut through, and that's a catch-22.

My rule of thumb: Don't process anything unless you really need to. I enjoyed reading an audio article about Michael Jackson's producer in the '80s-'90s. He spoke of depending on the mic itself, with zero processing if at all possible. He would try out lots of different mics, even of the same make and model, to get the desired sound without processing. (Every mic is different, no matter what.) I like that.

GM/News Director, TLC Media, Bay City, TX: I generally process the entire music track with a "soft knee." Then in the areas of the VO, I drop the 800 Hz-1.5 kHz mids just a little bit (depending on the music track and VO talent) with a parametric EQ.

I usually have to process the voice tracks I'm handed as well: compressing the voice 5.5:1 above 24 dB, and expanding it 1.3:1 below 24 dB. I mix it down to a new file and hope for the best. It's just a few clicks in Cool Edit, and I think there's a plugin for Pro Tools that will handle it all for you very easily.
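
As a worked example of what that static curve does to levels (a sketch; the text doesn't say what the 24 dB figures are referenced to, so a single -24 dBFS threshold is assumed here):

```python
# 5.5:1 compression above the threshold, 1.3:1 downward expansion below it.
# The -24 dBFS threshold is an assumption; the text only says "24 dB".
def static_gain_curve(level_db, threshold_db=-24.0,
                      comp_ratio=5.5, exp_ratio=1.3):
    if level_db >= threshold_db:
        # above threshold: every 5.5 dB of input yields 1 dB of output
        return threshold_db + (level_db - threshold_db) / comp_ratio
    # below threshold: every 1 dB below is pushed down to 1.3 dB below
    return threshold_db + (level_db - threshold_db) * exp_ratio

for lvl in (-6, -18, -24, -36, -48):
    print(f"{lvl:+d} dB in -> {static_gain_curve(lvl):+.1f} dB out")
```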

The thing I've noticed and had the most trouble with is that the work sounds different on sister stations than it does on our main. Omnia One sounds great, but when my sister station plays it with their classic Orban 8100 setup, I'm not getting what I wanted or expected, especially in the added FX. Hey, here's an idea! Put decent mic processors in studios, then set the FM processor/stereo generator up right and lock things up so jocks don't make adjustments! Pick a mic processor with lots of buttons and knobs. The young ones love having knobs to play with, and the veterans have learned to adjust themselves to any situation. It's easier than making an adjustment. I'd rather change my vocabulary than make a change to the de-esser ever again!

Michael Shishido, Dir. Creative Services, KUMU / KDDB / KPOI / KQMQ, Honolulu, HI: I can't remember any time I've had or wanted to sonically alter a music track for radio production purposes. The most I've changed a music track would be to make a passage louder or softer. As far as non-music elements, it doesn't happen often, but I might EQ a sound effect by accentuating the midrange to make it pop better in the finished spot, much like you would a voice.

Craig Jackman: I don't ever put dynamics on a music track. There's no need, as any commercial music or music service release is going to be compressed to hell already. I do EQ music tracks, adding a touch of bass and scooping out mid and high to make room for the VO. I call it complementary EQ because I'm adding mid and high and taking away lows in the voice. As for other effects, I'll use on-beat echo to smooth over music transitions on occasion, and I'll use any other effects as needed to get the sound I need for a particular situation.

As for which EQ plugin, it has to be parametric: more precise than a graphic EQ, and just as easy to use once you spend a little time with it. For a long time I swore by the TL Audio plugin from Steinberg, as it had a little of the TL tube sound, but I get acceptable results from Audition's built-in parametric.

Heikki Wichmann, Production Manager, NRJ Finland Oy, Helsinki: In most cases I don't use any processing on background music. I try to adjust the music level so that my DAW's master processor isn't working at all when only background music is playing.

I use multiband dynamics on the master bus.

In some cases I use EQ on music tracks, mostly if I'm doing a music-related promo and have to use music with sung vocals and VO tracks simultaneously.

I also try to keep my effects and imaging FX tracks clean. I feel that gives the VO more dynamic room to punch through the master mix.

Adam Venton: I usually EQ and pull out a touch at 400 Hz to allow the VO to sit better. Depending on the bed, I’ll occasionally boost at 62 Hz or 125 Hz to provide a bit of drive and rhythm from the kick and bass. Likewise, if the track being used is too heavy in the low end, I’ll pull down 62 Hz and 125 Hz. It all depends on the music choice. I’ll sometimes shelve off the high end, but only a fraction! I only add compression when overlaying drums, just to glue it all together. Other than that, I leave it alone. It’s usually an artist’s track being used, so it’s been professionally mixed and mastered over and over; why would I try to make it better!?

As for imaging FX, again it depends on the FX package being used. If it’s my own sound design, I tend to compress it harder as it’s not as slick as some of the packages I use (usually with an L2). With really high quality sound design FX I don’t have to do anything – just volume control so they sit nicely in the mix without taking over. It’s easy to overcompensate for a bad mix with lots of loud impacts and FX. I prefer to get the flow and drive from the music, rather than the FX. They’re just there as the glue.

Mitch Todd, Sr. Director of Production, SiriusXM: In our pure digital environment, we have to be very careful with our use of both EQ and dynamics processing to keep the sound clean and robust after a few levels of data compression. We generally try to be conservative in these two areas, yet we attempt to match the sonic characteristics of the music played on any particular channel. On aggressive new rock & pop channels, that’s where things can get a little hairy. Unlike in the analog FM world (with multi-band compression/limiting & composite clipping), the pure digital world’s audio processing will NOT mask the multitude of sins that an Optimod will!

In fact, very often I’ll bypass the “mastering” bus and have the tracks with music play completely unprocessed vs. all the other elements. This way the hook in the promo shouldn’t sound any different than when hearing the entire song on the air. Also, I’ve never been one to “notch” out a certain frequency in any background music where the VO usually sits. Only rarely will I do that if I’m hearing the VO getting too lost in the mix.

Now when it comes to altering sound/production effects and field audio of fans & artists, naturally I’ll manipulate virtually all of those pieces (even if it’s just a subtle shelving roll-off filter). I often don’t even use a Pro Tools template (or more often I modify an existing one) and try to approach every project fresh… when time permits!

Bottom line: All that matters is how it sounds as a complete project, so often, once I’m closing in on a final mix, I’ll start tweaking all over the session. Ray Charles put it best when someone asked him how he could mix when he can’t see the meters, overload LEDs, etc.: “I use my ears, man!” That’s in a clip from a documentary ALL reading this should see: Tom Dowd & the Language of Music… but I digress…

Radio Veronica, The Netherlands: My main workflow is always to use three busses for audio tracks. Bus 1 is for voice over(s), with EQ and compression. Bus 2 is for music, also with EQ and compression: subtractive EQ to allow the VO to punch through, medium overall compression for a consistent loudness, and a ducking compressor keyed from the VO to create a bit more space for the VO and allow a higher music level. Bus 3 is for sound effects, with the same kind of ducking compression as Bus 2.

Then I also use a master bus with bus-summing emulation, tape simulation, compression, EQ, multiband EQ and limiter plugins for a thick, glued sound. Through the whole chain I prefer to use a lot of compressors, each only compressing a little bit, instead of one compressor compressing and pumping a lot.

Dave Cox, VP, 2dogs Digital Audio, Inc.: The coolest trick is an old one: getting the VO up over the music by using an audio compressor's key/side-chain feature on the music bed's track.

It's easy, OK... send a bus feed from the VO track to a compressor on the music bed track that has a key/side-chain input. Sending the VO on Bus 1 and telling the compressor's key to look for input on Bus 1 lets the compressor know to react to the key or side-chain input on Bus 1, so the voice, not the music, triggers compression. We use Waves Renaissance Channel because it allows this key/side-chain input and EQ.

Set the compressor's attack to the quickest setting allowable and the release to 150 ms or so, with the compression ratio set to 6:1 or 8:1.

You want to knock down the music bed by 6 to 9 dB when the voice triggers the compressor.

It's an automatic ducker. When the VO is present at the compressor's input, the music bed's loudness is reduced by 6 to 9 dB. When the voice halts, the music comes back up... that's the release time on your compressor.

Depending on the program material, you may have to make the release time longer than 150 ms; just get it sounding natural and you're halfway to a mix.
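
For anyone curious about the moving parts, here's a minimal Python sketch of such a ducker, with Cox's 150 ms release and up-to-9 dB depth plugged in; the voice-detection threshold and 1 ms attack are illustrative assumptions:

```python
import numpy as np

def duck(music, vo, fs, depth_db=9.0, thresh_db=-40.0,
         attack_ms=1.0, release_ms=150.0):
    """Drop `music` by depth_db whenever `vo` rises above thresh_db."""
    atk = np.exp(-1.0 / (fs * attack_ms / 1000.0))    # fast smoothing down
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))   # slow recovery up
    ducked = 10.0 ** (-depth_db / 20.0)               # -9 dB -> ~0.355 gain
    thresh = 10.0 ** (thresh_db / 20.0)
    out = np.empty_like(music)
    g = 1.0
    for i in range(len(music)):
        target = ducked if abs(vo[i]) > thresh else 1.0
        coeff = atk if target < g else rel            # duck fast, release slow
        g = coeff * g + (1.0 - coeff) * target
        out[i] = music[i] * g
    return out

fs = 48000
music = 0.3 * np.random.randn(fs * 5)                 # stand-in for the bed
vo = np.zeros(fs * 5)
vo[fs:3 * fs] = 0.2 * np.random.randn(2 * fs)         # "VO" from 1 s to 3 s
mix = duck(music, vo, fs) + vo
```

Lengthening release_ms is the knob Cox describes for making the recovery sound natural.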

We use the EQ to dig a little hole in the frequencies present in the music that cover up the voice. Speech pathologists tell me 1,750 Hz to 2 kHz are the most important speech-recognition frequencies. I think 3.5 kHz to 4 kHz are ugly sounding and cover up the second-order harmonics of the voice, also necessary for voice recognition, so we cut a little (-3 dB max) at those frequencies as well.

These are the sounds that get in the way of your voice track.

Be careful of too much EQing if there's a sing-out; you can really damage the client's message/sound.

Jus' like cookin', you can sweeten and add spices to taste, being pretty sure the volume ratio between voice and music is OK.

As for FX, since you're not locked to picture, the easiest thing is to just put them in when the voice isn't talking. OR... route the music and FX to a Sub/Aux/Stem and apply the “Automatic Ducker” described above to that Sub.

Joey DiFazio, SiriusXM: Being a purist, I don’t go too deep into the weeds with processing. Light compressor/limiter on the final mix, just to even things out a bit. I do tweak elements, though. Play-by-play usually needs help, and sometimes there is the call to shove 4 lbs. into a 3 lb. bag. In those cases, I will string together all of my dry elements, do a mixdown, then time-compress the track to fit my needs. You’ll then need to go back into the session and adjust the music placement to match the new location of the elements. Once the music is laid back over the dry track, any time-compression anomalies are harder to detect. I can usually get away with 2-3 seconds of time compression on the dry track without it being too noticeable.
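
One way to sketch DiFazio's mixdown-and-squeeze step outside the DAW, using librosa's phase-vocoder time stretch; the file names and durations here are hypothetical:

```python
import librosa
import soundfile as sf

# Hypothetical dry mixdown that runs 30 s but needs to fit in 27.5 s.
dry, fs = librosa.load("dry_elements_mix.wav", sr=None)
current_s, target_s = 30.0, 27.5
squeezed = librosa.effects.time_stretch(dry, rate=current_s / target_s)
sf.write("dry_elements_fit.wav", squeezed, fs)
```

Laying the music back over the squeezed track afterward, as he describes, is what hides the stretch artifacts.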