To summarise the above, I’ll describe how you apply the techniques already mentioned to each track, along with other techniques such as compression.
Firstly, for each instrument, see whether you need to do any “safety” EQ. By safety EQ I mean rolling off either or both of the top and bottom end, using “rolloff” filters, in order to protect yourself against problems such as subsonic rumble below the instrument’s range, or hiss and other junk above it.
Shouldn’t you do this only if necessary? Is it right to always punch in the filters and restrict the sonic range when there might not be a problem at all?
Perhaps – it depends on how much time you have. Ah – that magic word “time” again! Under tight time constraints it might take too long to go through every track and listen out for specific faults. Generally speaking, time is always in short supply in the recording studio, and in any case there is a lot to be said for working quickly (to save your ears getting tired), so it would not be unusual to find a mix engineer who always switches in the filters as a matter of course. I don’t feel that it is “wrong”, provided that it has no significant effect on the desired portion of the sound.
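To make the idea of “safety” rolloff concrete, here is a minimal sketch in Python (using numpy). The first-order slopes and the 40 Hz / 16 kHz cutoffs are illustrative assumptions of mine, not recommendations – real console filters are usually steeper, and the right cutoffs depend on the instrument:

```python
import numpy as np

def highpass(x, sr, fc):
    """First-order high-pass: rolls off content below fc at 6 dB/octave."""
    rc = 1.0 / (2 * np.pi * fc)
    dt = 1.0 / sr
    a = rc / (rc + dt)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

def lowpass(x, sr, fc):
    """First-order low-pass: rolls off content above fc at 6 dB/octave."""
    rc = 1.0 / (2 * np.pi * fc)
    dt = 1.0 / sr
    b = dt / (rc + dt)
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = y[n - 1] + b * (x[n] - y[n - 1])
    return y

def safety_trim(x, sr, low_cut=40.0, high_cut=16000.0):
    """'Safety' EQ: trim both extremes of the frequency range."""
    return lowpass(highpass(x, sr, low_cut), sr, high_cut)

sr = 44100
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 10 * t)   # subsonic junk below the instrument
tone = np.sin(2 * np.pi * 1000 * t)   # musical content we want to keep
print(np.abs(safety_trim(rumble, sr)).max())  # heavily attenuated
print(np.abs(safety_trim(tone, sr)).max())    # passes almost untouched
```

The point of the sketch is simply that content well outside the trimmed range is cut hard, while the desired portion of the sound is left essentially alone – which is why punching the filters in as a matter of course is usually harmless.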
After you’ve “trimmed” the frequency range with filters, and removed any unwanted, troublesome harmonics, here’s the fun part: focussing on just one instrument, start playing with the EQ on that sound. Now, before people jump all over me, I need to make it absolutely clear that I’m not for one moment recommending that you actually apply EQ to anything and everything!
What you are doing at this point is not using the EQ as a tool to change the sound (not yet, anyway); you are temporarily using the EQ as a tool to explore the sound – like a microscope. This is unfortunately much harder to do with a “soft” (screen-based) control interface than with real knobs. If you are using a PC-based multitrack recording and editing package, and it is possible to control the EQ with some kind of MIDI controller with real knobs on it, you may find this quicker and more intuitive than trying to do everything with the mouse alone.
With each sound, use the EQ to find the thing that “characterises” that particular instrument. “Tune in” and “narrow down” what the interesting harmonics are. Then you can decide whether you need to boost it, subdue it, or perhaps even leave it alone completely. Don’t EQ something for the sake of it – make sure you understand why what you are doing works from an audio perspective.
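In practice this “tuning in” is done by ear, sweeping a narrow EQ boost across the spectrum, but the same idea can be sketched offline. The band width and the toy two-harmonic signal below are my own illustrative choices:

```python
import numpy as np

def strongest_band(signal, sr, band_width=100.0):
    """Sweep a narrow window across the spectrum and report the band
    holding the most energy -- a crude, offline analogue of sweeping
    a narrow EQ boost to find an instrument's characteristic region."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    best_lo, best_energy = 0.0, -1.0
    lo = 0.0
    while lo + band_width <= sr / 2:
        mask = (freqs >= lo) & (freqs < lo + band_width)
        energy = spectrum[mask].sum()
        if energy > best_energy:
            best_lo, best_energy = lo, energy
        lo += band_width
    return best_lo, best_lo + band_width

sr = 44100
t = np.arange(sr) / sr
# Toy "instrument": quiet fundamental at 220 Hz, prominent harmonic at 1760 Hz
sig = 0.3 * np.sin(2 * np.pi * 220 * t) + 1.0 * np.sin(2 * np.pi * 1760 * t)
print(strongest_band(sig, sr))  # → (1700.0, 1800.0)
```

On a real track you would sweep by ear rather than compute, but the principle is the same: find where the energy that characterises the sound actually lives before deciding whether to boost it, subdue it, or leave it alone.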
Don’t “over-characterise” the instrument, unless you intend to place it far back in the mix and you find it becomes indistinct at that volume. Instead, for instruments playing independent musical parts, try to accentuate the “voice” of each one so that it retains its own place in the mix whilst still being sufficiently big to fill it. Separate the parts out by using the “distance placement” technique described earlier.
For instruments that are playing the same part together, it is often a good idea not to make them sound independent, but instead to make them combine into one single, “bigger” sound. Separating them can, in any case, be difficult, as the ear will be trying to merge them. There’s little point fighting against your ears and brain and trying to convince them that the two parts are distinct. From a musical perspective they are not, and they have usually been actively designed that way, so that they can be combined into one new, bigger sound.
Doing this requires a little effort, but can yield eerily effective results. Think of George Benson’s distinctive “scat” vocals that accompany some of his guitar solo parts, and how the vocal and guitar blend together to form a bizarre new guitar sound. Similarly, mixing synth sounds with (e.g.) live strings can be much more effective and bold-sounding if you try to form them into a new type of string sound, rather than leaving them sounding like a silly little synth playing over a massive orchestra.
It’s hard to discuss in mere text what you do when working with each of the main sounds. I was going to add some brief observations on the kind of thing you can expect to find when working with some of the more common “main parts”, but it is impossible to give each instrument adequate coverage (I tried, and got exhausted after just two inadequate examples), and this article is already long enough anyway. More importantly, it would miss the point of this article to talk about specific lead instruments when it is general mixing principles that I am trying to explain here.
I’d summarise the EQ process, regardless of what instruments you are dealing with, as using the EQ initially as a kind of “microscope” that lets you examine the sound in great detail for a minute or two. Once you are familiar with the bits that constitute that particular sound, you can then decide how you want to deal with them – and that might not necessarily involve EQ.
For example, by playing with the EQ as a listening tool you might discover that an instrument (perhaps a guitar) potentially has a lot of “attack” to it that isn’t yet being realised. How should you bring out that “attack”? Perhaps you might decide to use the EQ – but that can often leave the sound, particularly with guitars, thin, without much body or depth. You might instead decide to use some compression with a slow attack time: the slow attack lets the initial transient through untouched, while the gain reduction then clamps down on the body of the note, so the attack stands out.
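As a rough sketch of that slow-attack trick (the threshold, ratio, and time constants below are illustrative assumptions, not recommended settings):

```python
import numpy as np

def compress(signal, sr, threshold=0.3, ratio=4.0,
             attack_ms=30.0, release_ms=200.0):
    """Simple peak compressor. A slow attack lets the initial transient
    through before gain reduction clamps the sustain, which makes the
    'attack' of the note stand out relative to its body."""
    atk = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        level = abs(x)
        coef = atk if level > env else rel   # smooth the detected level
        env = coef * env + (1 - coef) * level
        if env > threshold:                  # above threshold: reduce gain
            gain = (threshold + (env - threshold) / ratio) / env
        else:
            gain = 1.0
        out[i] = x * gain
    return out

sr = 44100
# A crude "note": 5 ms transient at full level, then a long sustained body
note = np.concatenate([np.ones(220), 0.6 * np.ones(sr)])
squeezed = compress(note, sr)
# The transient passes almost untouched; the sustain is pulled down,
# so the attack becomes relatively louder than it was in the input.
```

Fed a note like this, the envelope is still rising when the transient has already gone by, so only the sustained body gets turned down – exactly the effect you are after when EQ would make the sound too thin.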
How To Mix A Pop Song From Scratch