The Pitfalls of High & Low Pass Filtering

Producing nowadays, one must develop a knack for sifting through the troves of information and misinformation that have made their way online. One technique I see referenced often is to high pass your master buss around 20 Hz to remove "mud and low frequencies that you can't hear / won't be represented by your system." I have also seen it recommended to low pass the master buss at 16 kHz to "increase headroom." Now, it makes sense that cutting unnecessary frequencies out of your master would remove mud and increase your headroom, right? Unfortunately, not everything in the world of audio makes sense when you try to reason it out from a tangible, physical reference point like the ones you use in everyday life. Fortunately for you, I am going to explain exactly why you shouldn't make either of these moves when mastering your next track.

Before we get into the technicalities: if your mix contains low frequencies you don't want, the ideal place to deal with them is in the mix, not in the master, where all the elements have already been bussed together. For example, if a synth has a bit of rumble, I high pass it on its own individual send before bussing it into the master. This gives me more control over the cutoff of the high pass, as well as over any other adjustments I want to make. Once things have been bussed together, you inherently lose the fine control you had while they were on separate tracks. If you mix properly, high passing elements as required, there should be no need for a high pass on the master; frequencies will not magically be generated when content without them is summed together.

Now that we have that out of the way, it's time to delve into the more technical side of things. Most digital filters are based on a process called "minimum phase" equalization.[1] A minimum phase filter cannot create a boost or cut in the spectrum without also shifting the phase of the signal. Because of this coupling, the boost or cut is more closely linked to phase than to the peaks and troughs of the waveform; taking something away from the spectrum does not necessarily translate to lower peak values in the waveform. Below are two graphs generated by Steve Duda's Serum synthesizer, with the left graph representing the relative amplitude of the filter output and the right graph representing the relative phase of the filter output.

[Figures: Serum filter output, relative amplitude (left) and relative phase (right)]
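The same amplitude/phase coupling can be sketched with an ordinary minimum phase IIR filter. Below is a small Python example using SciPy's Butterworth high pass (my stand-in, not Serum's filter) that prints the magnitude and phase response at a few frequencies. Note the clearly nonzero phase shift well above the cutoff, where the magnitude is essentially untouched:

```python
import numpy as np
from scipy import signal

fs = 44100  # sample rate in Hz

# A 4th-order Butterworth high pass at 20 Hz -- a typical minimum phase IIR filter
sos = signal.butter(4, 20, btype="highpass", fs=fs, output="sos")

# Evaluate the complex frequency response at a few frequencies of interest
freqs = np.array([10.0, 20.0, 43.65, 1000.0])
w, h = signal.sosfreqz(sos, worN=freqs, fs=fs)

for f, resp in zip(freqs, h):
    mag_db = 20 * np.log10(np.abs(resp))
    phase_deg = np.degrees(np.angle(resp))
    print(f"{f:8.2f} Hz: {mag_db:7.2f} dB, phase {phase_deg:8.2f} deg")
```

Even though 43.65 Hz is more than an octave above the 20 Hz cutoff and barely attenuated at all, its phase is rotated by tens of degrees.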

Usually, the effects of this phase change are fairly inconsequential, but using filters on low-frequency material such as sub-bass will noticeably increase its decay time.
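To put a rough number on that, the sketch below estimates the group delay (how much each frequency is delayed, which is closely tied to this time smearing) of a steep minimum phase high pass by finite-differencing the phase response. The 8-pole Butterworth at 20 Hz is my stand-in, not any particular plugin's filter:

```python
import numpy as np
from scipy import signal

fs = 44100
# A steep (8-pole) minimum phase high pass at 20 Hz
sos = signal.butter(8, 20, btype="highpass", fs=fs, output="sos")

def group_delay_ms(sos, f, df=0.01):
    """Approximate group delay in ms at frequency f via a phase finite difference."""
    _, h = signal.sosfreqz(sos, worN=[f - df, f + df], fs=fs)
    dphi = np.angle(h[1] / h[0])             # phase change over 2*df Hz
    return -dphi / (2 * np.pi * 2 * df) * 1000.0

for f in (30.0, 100.0, 1000.0):
    print(f"{f:7.1f} Hz: ~{group_delay_ms(sos, f):7.2f} ms group delay")
```

Frequencies near the cutoff are delayed by many milliseconds, while the highs pass through almost untouched, which is exactly the low-end smearing described above.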

[Figure: waveform of the unprocessed F2 sine wave]

Here is an F2 (43.65 Hz) sine wave coming directly out of Serum, shown over half a bar at 140 BPM. It looks good: it starts instantly, peaks consistently at -3.5 dBFS,[2] and ends at the half-measure.

[Figure: waveform of the sine wave after the 20 Hz high pass]

Now, look what happens to our sine wave when we apply a steep 8-pole (48 dB/oct) high pass from FL Studio's Parametric Equalizer 2[3] at 20 Hz. Even though the filter shouldn't be touching the sine wave, since the cutoff is below the frequency of the sine, it still influences the waveform. It no longer has that nice, consistent peak at -3.5 dBFS, instead peaking as high as -2.7 dBFS and averaging peaks around -3.8 dBFS. That's not the only thing this filter did to our sine wave: it no longer starts or stops instantly when given input. Instead, it takes around three oscillations to reach its highest point (keep in mind the original never changed volume from oscillation to oscillation) and takes around a 16th note of extra time to decay. This is a significant difference in the context of bass music, which typically features a tight bassline as one of its core elements. High pass filtering, instead of tightening up the low end and removing mud, actually changes the peaks of the waveform and smears the bass in the time domain.[4]
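You can reproduce the gist of this experiment in a few lines of Python. The filter here is an 8th-order Butterworth high pass rather than Parametric Equalizer 2, so the exact figures will differ from the ones above, but the same behavior appears: the peaks shift, and the filter keeps ringing after the note ends.

```python
import numpy as np
from scipy import signal

fs = 44100
f0 = 43.65                        # F2, the sine from the example
amp = 10 ** (-3.5 / 20)           # peak level of -3.5 dBFS as linear amplitude
n = int(fs * 0.857)               # roughly half a bar at 140 BPM (~857 ms)

t = np.arange(n) / fs
x = amp * np.sin(2 * np.pi * f0 * t)
x = np.concatenate([x, np.zeros(fs // 2)])  # half a second of silence after the note

# Steep 8-pole (48 dB/oct) minimum phase high pass at 20 Hz (Butterworth sketch)
sos = signal.butter(8, 20, btype="highpass", fs=fs, output="sos")
y = signal.sosfilt(sos, x)

peak_in = np.max(np.abs(x))
peak_out = np.max(np.abs(y))
tail = np.max(np.abs(y[n:]))      # the filter keeps ringing after the input stops
print(f"input peak:  {20 * np.log10(peak_in):6.2f} dBFS")
print(f"output peak: {20 * np.log10(peak_out):6.2f} dBFS")
print(f"tail peak:   {20 * np.log10(tail):6.2f} dBFS")
```

Even with the cutoff a full octave below the sine, the output peaks no longer match the input, and a measurable tail remains in the silence after the note.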

Now, using some of this information, we can move on to low pass filters, since we have already covered the key technical aspect of minimum phase filtering that matters in this context. Low pass filtering your masters will not give you any additional headroom either, because minimum phase filters change phase (and thus peaks) to achieve a change in amplitude. I have seen this argument spread around so much that it has embedded itself into the collective unconscious as just "something to do." Tracing it back to its origins, however, reveals a much less high-brow starting point. If you have tried to reverse-engineer someone's master from an MP3 ripped from SoundCloud or another source online, you've likely fallen into this trap. MP3s use a lossy compression algorithm to reduce the file size compared to a lossless format such as WAV or FLAC. For the most part, this loss of file size (and data) is negligible, but if you are trying to analyze the files, you may be tripped up by what I'm about to reveal. MP3s (and some other lossy encoding formats) cut the highs from a file to reduce its size. Shown below are both a lossless FLAC version of Virtual Riot's track Pray for Riddim and a version that I rendered out to a 128 kbps MP3 using Audacity.

Even without a technical background, you should be able to see which spectrum contains more data. On the left is the lossless FLAC file and on the right is the 128 kbps MP3. Would you look at that: a significant cut to the highs above 16 kHz. So, if you see this in a song, it may not even be something the mastering engineer did; it may be the result of a low-quality MP3 somewhere in the chain between the artist picking samples and the tune entering your earholes. Unless you want your master to be reminiscent of a lossy compression algorithm, don't low pass the buss.
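If you don't have a spectrogram handy, you can measure the same effect numerically. The sketch below stands in for the encoder with a steep 16 kHz low pass applied to white noise (an assumption for illustration: real MP3 encoders use psychoacoustic models, not a simple filter) and compares how much spectral energy survives above 16 kHz:

```python
import numpy as np
from scipy import signal

fs = 44100
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)  # one second of white noise as a broadband stand-in

# Steep low pass at 16 kHz, mimicking the high cut a 128 kbps encoder applies
sos = signal.butter(10, 16000, btype="lowpass", fs=fs, output="sos")
y = signal.sosfilt(sos, x)

def energy_above(sig, f_cut):
    """Fraction of a signal's spectral energy at or above f_cut Hz."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[freqs >= f_cut].sum() / spec.sum()

print(f"energy above 16 kHz, original: {energy_above(x, 16000):.3f}")
print(f"energy above 16 kHz, filtered: {energy_above(y, 16000):.3f}")
```

White noise spreads its energy evenly, so roughly a quarter of it sits above 16 kHz; after the cut, almost none of it does, which is the empty top band you see in the MP3's spectrum.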

These shoddy techniques of low and high passing your masters to achieve an elusive benefit that is not actually there are the snake oil of the audio engineering world. Worse, they will hurt your masters more than they help them by changing peak values and lengthening decay. No reputable mastering engineer would make such a cut on their masters.

To summarize:

1. Minimum phase filters adjust phase to influence spectral amplitude, and cuts or boosts are not always reflected similarly in the waveform’s amplitude.

2. Filtering frequencies, lows in particular, changes their decay time.

3. High-end cuts may be an artifact of a lossy compression algorithm, not the choice of a mastering engineer.


[1] The other common type of digital equalization is linear phase, which comes with its own set of problems that would be much worse than minimum phase for filtering the low end in this way. See: window size & pre-ringing.

[2] dBFS stands for "decibels relative to full scale." It is the standard measure of amplitude in the digital world, with 0 dBFS being the loudest a signal can be without clipping and -infinity dBFS being "absolute zero" in terms of amplitude.
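For reference, converting a linear peak amplitude (where 1.0 is full scale) to dBFS is a one-liner:

```python
import math

def to_dbfs(amplitude):
    """Convert a linear peak amplitude (1.0 = full scale) to dBFS."""
    if amplitude <= 0:
        return float("-inf")
    return 20 * math.log10(amplitude)

print(to_dbfs(1.0))               # full scale: 0.0 dBFS
print(to_dbfs(0.5))               # half amplitude: about -6 dBFS
print(to_dbfs(10 ** (-3.5 / 20)))  # the sine level used in the examples
```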

[3] For this demonstration, oversampling was turned off, as it will also change peak values. The bandwidth value used was 0.55035400390625.

[4] When filtering high frequencies, this is much less of a problem due to the greater number of oscillations per second.
