
Best practices when using DAW EQs - advice for a new user



Really good video. I'm such a newbie at all of this, and I'd say there's probably quite a lot of quick-fix advice out there that clearly isn't helpful in the long term, so cheers!

  On 5/31/2015 at 9:19 PM, chim said:

Physics and the way audio processing works prevent 100% foolproof, accurate adjustments in the time and frequency domains simultaneously. All EQing works through a process called phase shifting, a type of distortion/offset in the time domain - completely different from typical distortion, which only affects amplitude - but it's what allows frequency adjustment. The more you adjust a given band, and the more bands and frequency-specific devices you use on a given audio programme, the more of that type of distortion you get. Small amounts of phase shifting will affect the sound very slightly, while severe phasing will actually cancel out large portions of the sound.

Additionally, every band you activate splits the incoming signal in that section, and the device then has to sum the result. In the digital domain this is done by an algorithm of sorts, and no algorithm is perfect; they're all more or less arbitrary compromises, and each developer makes their own decisions in this regard. All this means that EQing has a tendency to color your audio (even when you haven't adjusted any band), ranging from the completely imperceptible to noticeable artifacts, from the detrimental to the quite pleasing, and that different EQs affect your programme differently. It's a relatively small factor in normal audio processing and nothing to take too seriously, but in the long run it helps to be aware of it.

Phase problems abound in audio processing and are generally undesirable for high-quality purposes, and you have to draw the line somewhere. For beginners it's easy to imagine that more processing will lead to a bigger improvement, but at some point you're going to hear how excessive EQ, or an inferior/unsuitable type of EQ, has affected the tone of your signal. This is most noticeable on delicate acoustic recordings like nylon-string guitar: it's easier than you think to lose depth and end up with a "flat" sound, and in a big track those little artifacts and phase issues add up.

 

 

Your long post made me think a bit, and I'd like to address this first paragraph in particular. Also, thanks to mcbpete for that good video.

 

So I understand (and agree with) your critique of too much EQ, or too much processing in general, but I'm not sure I agree with this bit on the phase. EQ does shift the phase of given frequencies of an incoming signal (unless you use a linear-phase EQ), but why should that be regarded as a problem if, besides the track you're EQing, there are no other phase-correlated tracks in the song you're mixing (which I assume is often the case in electronic music)? As far as I know, phase shifting can't be heard on a single track: you start to hear it when you shift the phase of a track while listening to it alongside one or more other tracks that are phase-correlated with it (for example a guitar recorded with two different mics, or a dry synth plus a wet version recorded from an amp, etc.).

 

But maybe we just have a terminology issue here, and the downsides of EQ you're talking about aren't strictly due to phase shifting. Anyway, I'd be interested in having a look at any documentation on the subject that you could share.
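Antape's correlated-tracks example is easy to reproduce numerically. Here's a minimal sketch (my own illustration, not from the thread, assuming numpy is available): summing a track with a 1 ms time-shifted copy of itself produces classic comb filtering, even though neither copy sounds wrong on its own.

```python
# Minimal sketch (assumes numpy) of the correlated-tracks case: mix a
# track with a slightly time-shifted copy of itself (two mics, dry + wet)
# and the sum comb-filters, which is where phase shifts get clearly audible.
import numpy as np

fs = 44100
x = np.random.randn(fs)                  # stand-in for a recorded track

d = int(0.001 * fs)                      # second "mic" arrives 1 ms late
mix = x[d:] + x[:-d]                     # sum of the two correlated copies

spec = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
band = (freqs > 300) & (freqs < 700)     # look around the first expected notch
print("deepest null near %.0f Hz" % freqs[band][np.argmin(spec[band])])
# Expect a value near 500 Hz = 1 / (2 * 0.001 s), with further notches at
# odd multiples of that: classic comb filtering from phase cancellation.
```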

Phase-shifting can be heard on a single track; it just depends on the (type of) filter. All-pass filters are the most obvious of the bunch.

 

Anyway, because of this thread, I've just tried boosting instead of cutting on a synth bass: the resulting curve was similar in both cases, but the sound differed quite a bit (because of the way that particular EQ works). I sent an email to its devs; I'm pretty curious to know how valid the "cut is better than boost" theory is from a DSP perspective.
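lin's all-pass claim is simple to check for yourself. A minimal sketch (my own, assuming numpy/scipy): a chain of first-order all-pass filters leaves the magnitude response perfectly flat, yet audibly smears a click - on a single track, with nothing to cancel against.

```python
# Minimal sketch: phase shift audible on a lone track (assumes numpy/scipy).
# A first-order all-pass leaves the magnitude response flat but delays
# different frequencies by different amounts; cascading several smears a click.
import numpy as np
from scipy.signal import lfilter

fs = 44100
x = np.zeros(fs)            # one second of silence...
x[100] = 1.0                # ...with a single click (impulse)

c = 0.9                     # all-pass coefficient, 0 < c < 1
b, a = [c, 1.0], [1.0, c]   # H(z) = (c + z^-1) / (1 + c*z^-1), |H| = 1 everywhere

y = x.copy()
for _ in range(32):         # cascade 32 identical all-pass stages
    y = lfilter(b, a, y)

# The magnitude response is still flat, yet the click's energy is now
# spread over many milliseconds; a fraction well below 1 shows the smear:
print("energy within 100 samples of the click:",
      np.sum(y[100:200] ** 2) / np.sum(y ** 2))
```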

  On 6/2/2015 at 4:01 PM, Antape said:

 


You're right to a certain degree; I presented a really oversimplified version of things. First of all, in audio filtering all phase shifting is audible, since phase shifting and equalization are synonymous. A digital EQ doesn't actually shift the phase - it uses a digital delay line - but the effect is essentially the same.

 

What you're right about is that you don't hear phasing per se, or get the effects of phase cancellation, on a lone waveform. But the time distortion inherent in minimum-phase equalization is still there, and it affects the tone of a signal beyond the change in the frequency response. A harmonic exciter with a frequency-specific time-offset option (like Ozone's) demonstrates the phenomenon more obviously. It's this type of frequency-specific smearing, in conjunction with device-specific crossover coloration (at the band-split points), the potential to emphasize subtle comb filtering inherent to a given recording, and the overarching psychoacoustic effect of changing the relationship between discrete harmonics, that can affect tone and transients and result in various types of artifacts.

Not all of these are bad, of course, if they're even audible in many cases. The low-frequency smearing inherent in some analog devices is generally considered musically pleasing and often highly sought after (I've heard it lovingly called "goosh" on more than one occasion). But in my experience there's a tendency for EQs to soften transients, reduce the sense of spatial depth, partially obscure details in complex waveforms, and create a muddy sound when these factors combine in a multitrack environment - even if none of that comes from "actual" phase problems, which are more likely to result from wide stereo mixing and effects treatment on headphones, or from incorrectly treating multi-miked material.
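The "frequency-specific smearing" chim describes can be put in numbers. A sketch (my own illustration, assuming scipy, and using the standard RBJ audio-EQ-cookbook peaking filter rather than any particular plugin): the group delay of a minimum-phase boost shows extra time delay concentrated around the boosted band, which is exactly a frequency-dependent time offset.

```python
# Sketch (assumes numpy/scipy): measure the frequency-specific time offset
# of an ordinary minimum-phase peaking EQ band (RBJ audio-EQ-cookbook biquad).
import numpy as np
from scipy.signal import group_delay

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ cookbook peaking filter coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_biquad(f0=200, gain_db=9, q=2.0, fs=fs)   # +9 dB boost at 200 Hz

w, gd = group_delay((b, a), w=2048, fs=fs)               # gd is in samples
print("max extra delay: %.2f ms at %.0f Hz"
      % (1000 * gd.max() / fs, w[np.argmax(gd)]))
# Frequencies inside the boosted band arrive later than the rest of the
# signal; away from 200 Hz the group delay falls back toward zero.
```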

 

Most of my knowledge comes from years of corresponding with engineers, lurking gearslutz and other places, so it's hard to produce a neat set of articles for you to read. I may certainly be wrong on some of this, especially with the terminology... but I tend to trust my ears, and I simply get better results in the end when keeping these things in mind.

Edited by chim
  • 2 weeks later...

Here are a few more easy tips. Chim did a better job than I could at explaining filtering.

 

The best tip I can give is to EQ as you go. When you have a layer down, stop and do the EQ then. I almost always leave EQ for last, and it becomes such a huge hassle and pain that I grumble all the way through it (EQing 75-100 layers is not fun). I almost always do tiny cuts at either end of the spectrum, and boosts in the mid-range for vocals and other things that I want up in the foreground.

 

Secondly, if you can't EQ yourself out of a resonance peak on mid- to high-range percussion, you can try panning, or offsetting the phase of the left and right channels a bit. This will thicken up the sound somewhat, but it's saved me a huge hassle when I've left the EQ for last and become frustrated with overlapping frequencies that the compressor doesn't catch.
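A minimal sketch of that phase-offset trick (my own illustration, assuming numpy; `widen` is a hypothetical helper name, not from the thread): delay one channel by a fraction of a millisecond. The usual caveat applies - the two channels are now phase-correlated copies, so a mono sum of the result will comb-filter.

```python
# Sketch of the stereo phase-offset trick (assumes numpy): given a mono
# signal x at sample rate fs, delay the right channel slightly to thicken it.
import numpy as np

def widen(x, fs, offset_ms=0.5):
    """Return a stereo pair: left = dry, right = delayed by offset_ms."""
    n = int(fs * offset_ms / 1000)             # offset in whole samples
    right = np.concatenate([np.zeros(n), x])   # delayed copy
    left = np.concatenate([x, np.zeros(n)])    # pad to equal length
    return np.stack([left, right], axis=1)

fs = 44100
x = np.random.randn(fs) * np.exp(-np.linspace(0, 8, fs))  # stand-in drum hit
stereo = widen(x, fs, offset_ms=0.5)
# Caveat: summing the two channels back to mono comb-filters (cancels)
# around odd multiples of 1 kHz for a 0.5 ms offset.
```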

 

Finally, if you can't EQ something to sound more in the background of your mix, use reverb! It's a good way to simulate "distance" in a song.

 

Bonus: Sometimes using several cascaded filters with smaller identical tweaks is better than using one filter with a large tweak. I find this works best with biquadratic and bandpass filters.
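A sketch of the cascading idea (my own illustration, assuming scipy, again using the RBJ cookbook peaking filter as a stand-in for whatever EQ you use): three +2 dB boosts in series hit the same +6 dB at the centre frequency as a single +6 dB band, but the composite curve has a different shape, which is presumably why the two can sound different.

```python
# Sketch (assumes numpy/scipy): three cascaded +2 dB peaking boosts vs one
# +6 dB boost at the same frequency and Q. Magnitude at the centre matches,
# but the overall curve (and phase) of the cascade differs.
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)                  # RBJ audio-EQ-cookbook form
    w0 = 2 * np.pi * f0 / fs
    al = np.sin(w0) / (2 * q)
    b = np.array([1 + al * A, -2 * np.cos(w0), 1 - al * A])
    a = np.array([1 + al / A, -2 * np.cos(w0), 1 - al / A])
    return b / a[0], a / a[0]

fs, f0, q = 44100, 1000, 1.0
w, h_one = freqz(*peaking_biquad(f0, 6.0, q, fs), worN=4096, fs=fs)

h_casc = np.ones_like(h_one)
for _ in range(3):                            # three small identical tweaks
    _, h = freqz(*peaking_biquad(f0, 2.0, q, fs), worN=4096, fs=fs)
    h_casc *= h                               # cascading multiplies responses

i = np.argmin(np.abs(w - f0))
print("at %d Hz: single %.2f dB, cascade %.2f dB"
      % (f0, 20 * np.log10(abs(h_one[i])), 20 * np.log10(abs(h_casc[i]))))
# Away from f0 the curves diverge: the cascade's skirts are one small boost's
# skirts tripled in dB, which is not the same shape as one wide 6 dB band.
```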

Edited by Entorwellian
  On 6/12/2015 at 7:11 AM, Entorwellian said:

 

Bonus: Sometimes using several cascaded filters with smaller identical tweaks is better than using one filter with a large tweak. I find this works best with biquadratic and bandpass filters.

This is pretty much the only way I work anymore with any EQ on a channel.

  On 6/12/2015 at 6:19 AM, Mesh Gear Fox said:

 


Interesting. I think it's a bit like you said, though: it works differently for different EQs. Let us know what the follow-up is on this one.

 

 

I'll quote their super-insightful answers:

 

"regarding the math, there really is no qualitative difference at all. But there are practical differences worth being considered:

  • The phase shift happening as a side effect will be opposite for boosting and cutting (not the "time inverse" of course, but a mirror to the other one).
  • Without equal loudness helpers, boosting tends to trouble perception (i.e. seem to sound better), while cutting helps finding objectively better solutions, simply because user is much less troubled by "louder = better".
  • In the case of *******'s EQ SAT, you already pointed out that cuts do not get saturated at all.

But otherwise, the amount of quantization noise, filter performance, impulse smearing, etc is equivalent for both approaches. And this is really general to all plugins and even analog EQs.

Technically, you're replacing a "+" with a "-". In the math sense, both are additions.

An important details is whether your EQ has symmetrical cut/boost shapes or not. Most ******* modes for example have different shapes for boosting and cutting, where cutting is typically narrower than boosting. It's explained in the manual (or product video)."
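That "+" vs. "-" point can be checked numerically, at least for a cookbook-style symmetrical biquad (a sketch of my own, assuming scipy; plugins with asymmetric shapes will behave differently, as the dev notes): a +6 dB boost cascaded into a -6 dB cut at the same frequency and Q cancels to a near-perfect identity, so neither direction is "lossier" than the other.

```python
# Sketch (assumes numpy/scipy): for an RBJ cookbook peaking band, a -6 dB
# cut is the exact inverse of a +6 dB boost at the same f0 and Q, so the
# cascade boost -> cut cancels to an identity (down to float rounding).
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)                  # RBJ audio-EQ-cookbook form
    w0 = 2 * np.pi * f0 / fs
    al = np.sin(w0) / (2 * q)
    b = np.array([1 + al * A, -2 * np.cos(w0), 1 - al * A])
    a = np.array([1 + al / A, -2 * np.cos(w0), 1 - al / A])
    return b / a[0], a / a[0]

fs = 44100
boost = peaking_biquad(1000, +6.0, 1.0, fs)
cut = peaking_biquad(1000, -6.0, 1.0, fs)

x = np.random.randn(fs)                       # a second of noise as test signal
y = lfilter(*cut, lfilter(*boost, x))

print("max |error| after boost then cut:", np.max(np.abs(y - x)))
# Prints something on the order of 1e-13: the phase shifts of the boost and
# the cut mirror each other, exactly as the developer describes.
```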

 

And:

 

"For minimum phase EQs the phase response is 100% derived from its amplitude response. So 2 equal amplitude curves have equal phase responses.

The boost vs. cut in digital EQs is probably has some differences in quantization errors but 64-bit floating point processing precision keeps these errors far below any reasonable threshold. The boost vs. cut in analog EQs is all about noise.

In my opinion, this boost vs. cut stuff matters only for sharp curves (Q far above 1). In this case sharp cut has more pleasant sound because there's a dip in amplitude response where phase response is mostly distorted so it's less audible.

******* has very broad curves that's why this cut vs. boost trick doesn't work here (in linear mode). So boosting with auto-gain on is very cool usage scenario."
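The sharp-cut point is also easy to verify (a sketch of my own, assuming scipy, again with the RBJ cookbook peaking filter): for a high-Q cut, the spike in group delay sits right inside the dip of the amplitude response, i.e. where there's hardly any signal left to smear.

```python
# Sketch (assumes numpy/scipy): with a sharp (high-Q) peaking cut, the phase
# distortion is concentrated inside the dip of the amplitude response, which
# is the dev's argument for why sharp cuts tend to sound cleaner than boosts.
import numpy as np
from scipy.signal import freqz, group_delay

def peaking_biquad(f0, gain_db, q, fs):
    A = 10 ** (gain_db / 40)                  # RBJ audio-EQ-cookbook form
    w0 = 2 * np.pi * f0 / fs
    al = np.sin(w0) / (2 * q)
    b = np.array([1 + al * A, -2 * np.cos(w0), 1 - al * A])
    a = np.array([1 + al / A, -2 * np.cos(w0), 1 - al / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_biquad(f0=3000, gain_db=-12.0, q=8.0, fs=fs)  # sharp cut

w, h = freqz(b, a, worN=8192, fs=fs)
_, gd = group_delay((b, a), w=8192, fs=fs)    # same frequency grid as freqz

i = np.argmax(np.abs(gd))                     # where is the phase most bent?
print("worst group delay at %.0f Hz, magnitude there: %.1f dB"
      % (w[i], 20 * np.log10(np.abs(h[i]))))
# The worst-delay frequency lands in the -12 dB dip around 3 kHz: the most
# phase-distorted content is also the most attenuated.
```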
