Q It Up: Once you have a voice track recorded and placed on a track in your project, you probably do most of your processing of the voice at this point, either on the track itself or on a bus the track is assigned to. Most producers are going to have these two plug-ins in the effects chain: compression/limiting of some kind, and an equalizer. The question is, which comes first for you, and why? And if you have other effects in the chain, is their placement in the chain also important in your case? Explain. And feel free to add any other comments on the subject!

Imaging Director, Bell Media, Edmonton, Alberta: The order of my chain is below. (I developed this chain by both what I was hearing and what I was seeing the files do -- it all basically adds to the ease of my mix. These steps also change based on the VO talent.)

Dynamics Processing – Soft Limit, Compression, EQ, Dynamics Processing – Soft Limit, Hard Limit at -3, Reverb/Delay

Also, one thing to note… leave some headroom for the rack/air compression. I see young folks butcher this all the time.
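
For readers who like to prototype a chain like the one above outside the DAW, here is a minimal sketch in Python using Spotify's open-source pedalboard library; the library choice, the file names and every threshold, ratio and gain value are illustrative assumptions on my part, not the contributor's presets.

```python
# Soft limit -> compression -> EQ -> soft limit -> hard limit at -3 -> reverb,
# roughly mirroring the chain above. All settings are placeholders for experimentation.
from pedalboard import (Pedalboard, Limiter, Compressor, PeakFilter,
                        HighShelfFilter, Reverb)
from pedalboard.io import AudioFile

chain = Pedalboard([
    Limiter(threshold_db=-6.0, release_ms=250.0),                  # first "soft" limit
    Compressor(threshold_db=-18.0, ratio=3.0,
               attack_ms=5.0, release_ms=120.0),                   # level the read
    PeakFilter(cutoff_frequency_hz=250.0, gain_db=-2.0, q=1.0),    # tame low-mid mud
    HighShelfFilter(cutoff_frequency_hz=8000.0, gain_db=2.0),      # add some air
    Limiter(threshold_db=-6.0, release_ms=250.0),                  # second soft limit
    Limiter(threshold_db=-3.0, release_ms=50.0),                   # hard limit at -3
    Reverb(room_size=0.15, wet_level=0.08, dry_level=0.92),        # light space last
])

with AudioFile("voice_track.wav") as f:            # hypothetical input file
    audio, sr = f.read(f.frames), f.samplerate

processed = chain(audio, sr)

with AudioFile("voice_track_processed.wav", "w", sr, processed.shape[0]) as f:
    f.write(processed)
```

Note the -3 ceiling on the final limiter; leaving that margin is one way to preserve the headroom for the rack/air compression mentioned above.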

BAFSoundWorks, Lehigh Acres, Florida: Compressor is dead last. I only use it for an overall smoothing once everything else is done. So, limiting, EQ, gating, anything else is first, compressor last.

Since I get v/o in from all kinds of sources (mouths and microphones), I use a fair amount of pre-processing, so it sounds consistent with our reference v/o audio. That file is dropped in the multitrack.

Depending on the spot texture, I may or may not put a light compressor on the track. Once the session is mixed down, I will apply a light compressor to the overall mix, then a light limiter to kill any overshoots.

I try to keep any processing light, and to a minimum, since broadcast facilities will crush the audio in the transmission chain anyway.

If the job is for non-broadcast, then I'll use a stronger compressor so we have good levels for the end product - unless the producer wants the mix dry so they can manipulate it.

So, in the end, any processing depends on who is getting the final mix.

SBE Certified Audio Engineer, Radio America Network, Arlington, Virginia: Many hardware mic processors like the venerable Symetrix 528 lay out the flow as Comp>EQ because it works. You've got to get those levels tame before you add the sugar.

I would compress first, then sweeten with the EQ. But I would also consider a gate or expander in the front to keep room noise out of your tracks, then add a peak limiter at the tail end to make certain that any EQ changes won't spike and hit the rails.

As an alternative, try a free standalone multiband compressor plug-in. The ReaXComp from Cockos (the REAPER workstation guys) is worth a look, as is the Stardust processor from Argurus Software.

Bell Media, Calgary, Alberta: This is a great question. Personally, I have my voice tracks side-chained through a bus, on which I have placed (in this order) Compressor, EQ, and Reverb. I’ve often wondered if the order of these matters, and to get to my conclusion, I processed a voice track manually within my session. I found that after I’ve normalized, once I apply compression you can see the voice track get “squished” down, but once the EQ is applied the waveform comes right back up, leaving enough headroom for it to sound nice and clear without being too crushed.

What’s great about side-chaining your vocal processing is it leaves plug-in rack space on your voice tracks themselves for any additional effects you want to apply to them (for more effect-heavy promos or spots), and you’ll know that whatever you have on the individual tracks will get the same processing, as it’s run through the bus. It’s just a way of keeping the sound constant, and your session clean and organized.

Newcap Radio, Richmond, British Columbia: Typically for radio -- 99% of the time -- I'll EQ, then compress/limit. If I'm adding any reverb, delays or other effects, those go on last because I don't want the compression to pick up that stuff and amplify it. However, depending on the voice or music, I'm never averse to switching it up and trying it out. Bottom line, whatever sounds best to me that day - I just go with that.

When I produce and mix music though, that's a totally different ballgame. It all depends on the arrangement/instrument/voice, etc... I experiment with the order often.

I prefer to have most of my voice effects on a parallel chain so I can mix a wet/dry version together for the right blend of effect. That also allows me to keep the dynamics and harmonics of the main VO untouched (if I wish). Just a preference thing I guess.
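
As an illustration of that parallel approach, here is a minimal sketch, again assuming the pedalboard library; the 30% wet blend and the plug-in settings are arbitrary examples, not the contributor's settings.

```python
# Parallel (wet/dry) processing: the dry VO keeps its original dynamics and harmonics,
# and an effected copy is blended in underneath it.
from pedalboard import Pedalboard, Compressor, Reverb
from pedalboard.io import AudioFile

with AudioFile("vo.wav") as f:                     # hypothetical input file
    dry, sr = f.read(f.frames), f.samplerate

wet_chain = Pedalboard([
    Compressor(threshold_db=-24.0, ratio=6.0),     # heavier squeeze on the parallel path
    Reverb(room_size=0.3, wet_level=0.5, dry_level=0.5),
])
wet = wet_chain(dry, sr)

blend = 0.3                                        # how much of the effected copy to add
mix = (1.0 - blend) * dry + blend * wet            # untouched VO stays on top
```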

Overall though, I would say that I definitely fall in the group of people who believe there's no right or wrong way to do anything. There are rules, and it's important to know them; but it's also important to know when it's okay to break them. I believe it was Dave Foxx who always said that.

Ralph Mitchell, Production Director, iHeartMedia, Mobile/Pensacola, Florida: For purposes of commercial radio and TV audio, I find that the voiceover punches through best if I apply any necessary EQ first, and then dynamic processing afterwards. The reason is that the reverse order will boost selected frequencies over and above your already predetermined level, which causes those particular frequencies to dominate the final mix more than other, possibly more subtle tones. I might apply them in reverse for some other applications, but knowing that a radio commercial is usually experienced in a noisy vehicle, the vocal is already battling to be heard over all of the other sounds and distractions in traffic. So I choose my EQ based upon which mic I used and the voice I've recorded, as a rule seeking to make sure the post-EQ signal sounds as much like the person does in real life as possible, in much the same way we would EQ a band instrument -- not colorizing it to sound unnatural, but rather subtracting frequencies that aren't heard coming straight from the instrument... or mouth, as it were.

After EQ, I'll choose the appropriate dynamic processing for the project. Light compression for many spots will keep the more natural nuances while increasing intelligibility. If it's a "yell-and-sell", then a steeper compression slope is employed, and often a limiter as well. I will often deviate from this for NON "announcer" parts, such as any character voices, etc., while watching the levels for those voices very closely so they don't get lost in the mix later on.

And just before I begin adding music and sfx, I'll always normalize the v/o to a standard 85% which allows me to set my physical mixer at unity gain when dubbing the final mixdown into the radio station's computers. This is what works for me in my work environment, but I look forward to reading what others do. It's always fun to experiment and find new techniques to make the mixdown sound better, or even just different from the others. And it doesn't hurt to make the process of cranking out massive amounts of commercials a little bit more efficient.
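
For anyone curious what "normalize the v/o to 85%" looks like in code, here is a tiny sketch that peak-normalizes a take to 0.85 of full scale (roughly -1.4 dBFS). It uses the soundfile package and placeholder file names, which are my assumptions rather than Ralph's tools.

```python
# Peak-normalize a voice track so its loudest sample sits at 85% of full scale.
import numpy as np
import soundfile as sf

audio, sr = sf.read("vo_take.wav")       # placeholder file name
peak = np.max(np.abs(audio))
if peak > 0:
    audio = audio * (0.85 / peak)        # 0.85 of full scale is about -1.4 dBFS
sf.write("vo_take_85.wav", audio, sr)
```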

Walter Wawro, WFAA-TV, Dallas, Texas: I'm a Pro Tools user. The PT 7-band EQ comes first, with the hi-pass to take out the room rumble and other uselessness under 75Hz. I'll use the steepest slope, 24dB. Depending on the voice I'll consider taking out some of the "chestiness" at around 200-250Hz, usually no more than 2dB. Male voices usually don't need compression; females are on a case-by-case basis. If needed I use the stock PT Dyn 3 compressor/limiter, and I've tweaked the settings to a preset I call "this sounds about right." It usually does.
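
A rough code equivalent of that EQ-first setup might look like the sketch below: a steep high-pass around 75Hz (a 4th-order Butterworth falls off at roughly 24dB per octave), a gentle cut near 225Hz, then light compression and limiting. SciPy and pedalboard are my choices for illustration; none of the values are Walter's actual preset.

```python
# EQ first (steep high-pass, small "chest" cut), dynamics after.
import soundfile as sf
from scipy.signal import butter, sosfilt
from pedalboard import Pedalboard, PeakFilter, Compressor, Limiter

audio, sr = sf.read("vo.wav", dtype="float32")     # placeholder file name

# Steep high-pass to remove room rumble below ~75 Hz (4th order ~ 24 dB/octave).
sos = butter(4, 75, btype="highpass", fs=sr, output="sos")
audio = sosfilt(sos, audio, axis=0)

post = Pedalboard([
    PeakFilter(cutoff_frequency_hz=225.0, gain_db=-2.0, q=1.0),   # reduce "chestiness"
    Compressor(threshold_db=-20.0, ratio=2.5),                    # light compression
    Limiter(threshold_db=-1.0),                                   # catch any peaks
])
processed = post(audio.T, sr)                      # pedalboard expects (channels, samples)
```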

Craig Jackman, Professor – Radio Broadcasting, School of Media, Arts, and Design, Belleville, Ontario: Due to force of habit more than any particular audio reason, compressor first then EQ. I’m more concerned with getting my levels under control and getting the mix started, and the amount of EQ I add on things usually isn’t enough to dramatically change levels. That said, I’m not averse to adding a limiter after the EQ if the levels are noticeably changing, or, if I’m using a soft compression setting to begin with, running the whole file through the compressor a second time. I usually have EQ and dynamics control on a per-channel basis rather than bussing them to a single source, again more through habit than audio reasoning. I’ll use bussing only if I’m doing something out of the ordinary. Actually, because I became used to using these tools in the old Cool Edit Pro days, I’ll often still add EQ and dynamics in 2-track mode, changing the raw file. I have to remind myself that it’s better to add them in Multitrack to save options for down the road.

With all of this, the only thing that really matters is what comes out the speaker. If it sounds right, it is right.

Von Coffman, West Jordan, Utah: Audio processing… Which comes first… the chicken or the egg, and what about the barn? Then of course there is the barnyard.

The EQ or the Compressor/Limiter. I remember hashing this out when you had a Compressor and you had a Limiter, both in the rack and not integrated. Ahhh, but I date myself. Back to the question at hand. The software program I use (SAW Studio) is designed in a way that takes you back to the good old days of multi-track machines and Neve consoles. If you have ever sat down at a console (don’t laugh, some folks have never seen a Mackie let alone a Neve), then you know how the sound path works.

So after the attenuator… comes EQ. So, naturally that’s what comes first. Then the Gate and Compressor, FX patching both pre and post fader, then the real engine of the mix: the Aux sends.

Now keep in mind this is just the Channel FX. When you ask what you put first, I am assuming you mean plug-ins. Think of it this way. A plug-in is a side channel box you have stacked up in a rack next to your board. Whatever you put on the top goes to the next and the next and… you get the picture.

The sweet part about Aux Busses is what you can do with them if you are parking your channel effects there. For example, Reverb. If you use your Reverb plug-in on an aux bus and send it back completely wet, you can then have greater control of your reverb back over on the channel you have your Voice parked on. Same goes for any plug-in.

What comes first depends entirely on the effect you want on the Voice you have recorded. As a standard though, I use very little compression on the sub-mix (as my mic runs through a pre-amp/comp before it hits the DAW), but I do add it on the final Master Bus out. Also sitting on the Master Bus is a Graphic EQ that I use to get back some frequency if needed.

There are as many ways to process a voice as there are producers processing them. But the best hint I can give is less is more until you really need more. Confused yet? I have been voicing and producing commercials for 30 years and I’m still trying to figure it all out. That’s what makes this so fun!!!!!

Jim Harvill, iHeartMedia, Fayetteville, Arkansas: For me, EQ first and then add compression.

D McAtee, Journal Communications, Tulsa, Oklahoma: I've done it both ways. Now, I get the best results doing it both ways, at once. Roll off below about 80 Hz and a shallow notch at 250 Hz before dynamics processing, mid-range and high-end boosts after.
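
A compact way to picture that "both ways at once" ordering, subtractive EQ ahead of the compressor and boosts after it, is the sketch below (pedalboard again; the frequencies and gains are only examples, not D's settings):

```python
# Subtractive EQ before dynamics, additive EQ after.
from pedalboard import (Pedalboard, HighpassFilter, PeakFilter,
                        Compressor, HighShelfFilter)

chain = Pedalboard([
    HighpassFilter(cutoff_frequency_hz=80.0),                     # roll off below ~80 Hz
    PeakFilter(cutoff_frequency_hz=250.0, gain_db=-3.0, q=1.4),   # shallow notch first
    Compressor(threshold_db=-18.0, ratio=3.0),                    # dynamics in the middle
    PeakFilter(cutoff_frequency_hz=3000.0, gain_db=2.0, q=1.0),   # mid-range boost after
    HighShelfFilter(cutoff_frequency_hz=8000.0, gain_db=2.0),     # high-end lift after
])
```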

Edgar Gomez, Production Director, Univision Radio, San Francisco, California: I usually have the equalizer first, then compression and limiting last. This chain has worked pretty well for me, and for some of the most prestigious engineers, from what I have read of their tips.

Earl McLean, Earl McLean Productions, Mississauga, Ontario: The answer (for me) is dynamics before EQ when the vox is recorded by me. With audio from outside sources, it tends to be a tossup between dynamics first and EQ first.

Dave Cox, 2dogs Digital Audio, Inc., Moline, Illinois: If the voice track has a problem, you'd fix that with subtractive EQ or noise reduction. For sweetening a perfect voice track, I use the Focusrite Liquid Mix plug-in (SSL Brit Desk 3 at 2:1, -2dB, and a Pultec shelf) to taste, and Lexicon verb, with an output consistent with -23 LKFS as per ATSC A-85.
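
For the -23 LKFS delivery target Dave mentions (ATSC A-85), here is a minimal loudness-normalization sketch using the pyloudnorm package; the package and the file names are my own illustrative choices, not his workflow.

```python
# Measure integrated loudness (ITU-R BS.1770) and normalize a final mix to -23 LUFS/LKFS.
import soundfile as sf
import pyloudnorm as pyln

audio, sr = sf.read("final_mix.wav")                 # placeholder file name
meter = pyln.Meter(sr)                               # BS.1770 loudness meter
loudness = meter.integrated_loudness(audio)
print(f"Integrated loudness: {loudness:.1f} LUFS")

normalized = pyln.normalize.loudness(audio, loudness, -23.0)
sf.write("final_mix_minus23.wav", normalized, sr)
```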

Ashley Bard, Imaging Producer, Capital FM, London: There isn’t a ‘one size fits all’ for processing, and every producer sets up their sessions differently, even if they’re part of the same team. What’s key is knowing what you want and knowing how to get it using the tools you have at your disposal.

For me, the voice set-up has a few individual mono voice tracks (one each for Clean, Thin, Distorted, etc.). This method allows for full control, as you can bus all tracks into a Vox Mix Aux before mastering.

When it comes to the VO track, a C1 Compressor holding at a threshold around -13 doesn’t compress the sound too much but does allow me to bring the VO to life, whilst retaining a natural feel.

Although Waves and Focusrite are part of my everyday production toolkit, I like to follow with Avid’s own plug-in, an EQ3 7-Band. This part of the processing helps to deliver a great overall sound, and I regularly drop it in after the compressor to act as an extra boost to enhance the signal.

Backup is provided by an L1 Limiter, creating a brick wall so the vocal signal is guaranteed to peak at a certain point but cannot spill over.

As I mentioned before, this is ultimately all fed into a Vox Mix Aux channel, holding a number of plugins. These plugins, alongside another compressor & Q10 Paragraphic Equalizer, all help to mesh the vocals together.

This process is one I return to, but doesn’t cover everything, and I adapt it to match the production I’m working on.
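
To make that bussed layout concrete, here is a minimal sketch of the idea: each voice track carries its own light chain, and everything is then summed into a shared vox-mix bus with its own compressor and EQ. It assumes the pedalboard library, equal-length placeholder files, and made-up settings rather than the Waves/Avid presets described above.

```python
# Per-track chain on each voice track, then a shared "vox mix" bus chain on the sum.
import numpy as np
from pedalboard import Pedalboard, Compressor, PeakFilter, Limiter
from pedalboard.io import AudioFile

track_chain = Pedalboard([
    Compressor(threshold_db=-13.0, ratio=2.5),                    # gentle per-track squeeze
    PeakFilter(cutoff_frequency_hz=3000.0, gain_db=1.5, q=1.0),   # presence after the comp
    Limiter(threshold_db=-1.0),                                   # brick wall, no spill-over
])

bus_chain = Pedalboard([
    Compressor(threshold_db=-16.0, ratio=2.0),                    # "glue" across the tracks
    PeakFilter(cutoff_frequency_hz=200.0, gain_db=-1.5, q=1.0),   # tidy the summed low mids
])

tracks = []
for name in ["vo_clean.wav", "vo_thin.wav", "vo_distorted.wav"]:  # hypothetical files,
    with AudioFile(name) as f:                                    # assumed equal length
        audio, sr = f.read(f.frames), f.samplerate
    tracks.append(track_chain(audio, sr))

vox_bus = np.sum(tracks, axis=0)                                  # sum into the vox mix bus
vox_mix = bus_chain(vox_bus, sr)
```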

George Johnson, Voicebox Productions, Edmonds, Washington: My small Pro Tools project studio doesn't have the technical or acoustical sophistication that most large AM/FM studios have. Sound is subjective. What might sound good to me may be different to you. Like taste -- some people enjoy "liver". Frankly, I can't stand it. The voice is already being compressed as it goes through the voice processing, in the mic chain, to the designated voice track on the computer. I'll then plug in the EQ with some light reverb, from a bus on that track. Depending on what kind of project it is, I'll then add the music, FX, etc. on other selected tracks, and then make the ancillary adjustments, or spatial enhancements, at mixdown. During the mix, a dither is selected, and final compression (using the Maxim plug-in) is employed to bounce/send to disc or a selected file folder.

I've always operated on the KISS Principle: "Keep It Simple Stupid". My final critique comes when I take a recorded copy of the project out to my car or home stereo and listen to it out in the real world.

Michaël Gendron, Producer, Bell Media, Quebec: I put the EQ first because I think the compressor after will do a better job. The best basic chain for me is: EQ, TrueVerb (a very soft one), a compressor and a limiter. I find that it sounds bigger and better, more defined like this.

David Boothe, CAS, Senior Producer & Chief Engineer, Hope for the Heart, Inc., Dallas, Texas: Usually, I put the EQ first, then the dynamics. Most often, I am using the EQ to get the voice to sound the way it should, which sometimes means correcting a problem (if I did not record it) or making the mids cut through. Therefore, I want the EQ to affect the compression. Also, it seems to keep the voice track “tighter” than the other way around.

I often use a de-esser. Typically, this goes at the end of the chain, because both EQ and compression can increase sibilance.

Lately, I have been using an analog tape simulator as the first plugin on each track of a project. In this case, I’ll often put the de-esser first, before the tape simulator, since that’s what I did back in the Stone Age, I mean, analog days. With analog tape, excessive sibilance could cause tape saturation and thus really unpleasant distortion. So if there was a sibilance problem out of the mic, I would always de-ess in recording. Then EQ and compression in the mix, and sometimes de-ess again.

So far I’ve carried this methodology into the analog-simulated digital world. I mean the whole idea is to simulate analog sound – without needing to spend an hour each day aligning tape machines – right? However, the tape simulator is still new (to me), so I’m still experimenting.

Adam Venton, Group Imaging Producer, UKRD Group, Bristol, UK: Here’s a relatively short answer. I always EQ first, then compress. My logic being that I want to remove unwanted frequencies first – why would I emphasize them by compressing them? Find the frequencies you don’t want or need and remove them using EQ, meaning everything being compressed is content you actually want. Same goes for other effects, such as modulation and the like. I EQ first, set up my effect and tweak the wet/dry mix until I’m happy, then compress, then go back and tweak the wet/dry mix if needed. I use a limiter at the end to cut rogue peaks and thicken up my sound.

Gord L. Williams, Georgetown, PEI: Compression and EQ were probably more necessary (I mean, used judiciously, but integral to the mix) in the days of analogue. Coming partly from that world, I suppose I am sensitive to both.

But I have to say, I have rolled things back since the early years of recording projects on my own. I understand more now that there is a median level, and it’s just not possible in a pure recording process to change things that much. That would be a special effect, and that’s different. Quite a few of us back in the day used to think that compression was key to sounding cool on the radio, and to an extent it is.

To me, because a compressor compresses dynamics and the EQ can attenuate frequencies, adding a little gain at each stage, it’s a balancing act between one and the other. With some recordings that have source noise, you may be limited in how much you can change the gain, so as not to bring up the level of flaws you don't want presented.

In some cases, compression is used like an effect and it’s very audible what it does, particularly in concert with some level of filtering or EQ. But there I go again… it’s a balancing act.

I try to begin with an end in mind and never really end up getting there; I never do sound like James Earl Jones or Don LaFontaine. I cut myself a lot of slack for building a sound, like I used to try to do. I wanted to sound different in the salad days of radio. Not even sure I could afford salad at that point.

The engineers sure tied everything down, so it’s a pleasure to play with sound textures in a way to enhance a production. But I found out they knew what they were doing, for the most part.

Many of us just didn't understand it fully. It’s one colour on the palette, and we all know what saturation of anything does. My answer is to do what my ears tell me and keep copies of things more naked so I can go back and reassess.

Colin McGinness, Group Production Manager, UKRD, Bristol, UK: Before I even put voice audio into my project I tend to run a mild compression across the track after de-breathing. This actually depends on the voiceover. Any VO that is running compression on their end of the ISDN line, I won't do that to. Once in the project, it's EQ first then bussed to compression. 
