
By Nicole Bouwkamp
Have you ever listened to music for a while before suddenly feeling exhausted? Or had to turn your favorite song off because you just needed some silence? Have you driven a long way listening to the radio, only to have your ears become sore and sounds seem muted? Ear fatigue, often felt as tiredness, soreness, loss of sensitivity, or general discomfort in the ears, is caused by prolonged exposure to sound.
Thanks to today’s music and listening environments, ear fatigue can be experienced anywhere, at any time. You only need to turn on the radio and listen for a while, or stream music on headphones while in a crowd, before you feel it. Today’s music is part of the equation. More specifically, a tool used to create music and broadcast it online and on the radio: compression.
Compression is the process of reducing the dynamic difference between the loudest and the quietest parts of an audio signal – the loud material gets quieter, and the quiet material gets louder. This is why a song may be described as punchy or having presence. The parts of a recorded sound naturally sit at varying levels, and when compression brings up the softer parts and brings down the louder ones, the result is a more present and punchier sound (or, as I like to say, a beefy sound).
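For the curious, here is a minimal sketch of that idea in Python, assuming a mono signal with values between -1 and 1; the threshold, ratio, and makeup-gain numbers are purely illustrative, and a real compressor smooths the level with attack and release times instead of reacting sample by sample.

```python
import numpy as np

def compress(signal, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    """Rough sketch of downward compression: turn down whatever rises above
    the threshold, then add makeup gain so the quiet material ends up
    relatively louder."""
    eps = 1e-10                                            # avoid log(0)
    level_db = 20 * np.log10(np.abs(signal) + eps)         # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)        # amount above threshold
    gain_db = -over * (1.0 - 1.0 / ratio) + makeup_db      # reduce peaks, add makeup
    return signal * 10 ** (gain_db / 20)
```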
Compression is often used when recording drums. Drums are the biggest producers of transient sounds: a loud hit with lots of attack that decays very quickly, leaving very little sound beyond that first strike. With compression, the attack is brought down in volume while the sound left after the attack is brought up. Levels that are naturally far apart are brought closer together, and you get a beefier recorded drum hit.
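To see how that plays out on a drum hit, here is a rough illustration reusing the hypothetical compress sketch above: a transient modeled as an exponentially decaying noise burst, whose attack peak gets knocked down while its quiet tail gets lifted.

```python
import numpy as np

sr = 44100
t = np.arange(sr // 2) / sr                        # half a second of audio
drum = np.random.randn(len(t)) * np.exp(-t * 30)   # loud attack, fast decay
drum /= np.max(np.abs(drum))                       # normalize the peak to 1

squashed = compress(drum)                          # sketch from the previous example

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# The attack peak drops, while the already-quiet tail comes up by the makeup gain.
print(f"attack peak: {np.max(np.abs(drum)):.2f} -> {np.max(np.abs(squashed)):.2f}")
print(f"tail level:  {rms_db(drum[sr // 8:]):.1f} dB -> {rms_db(squashed[sr // 8:]):.1f} dB")
```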
So, you hear everything better. That would be good for music, right?
The thing is, everything in music isn’t meant to be heard evenly all the time. One of the glories of music is its dynamic range and the nuances within it: the little hidden gems of musical ideas you discover after listening to a song multiple times, or the rise and fall of moments that can evoke triumph or despair. If a part of the music grows from soft and intricate to loud and powerful, you shouldn’t hear everything at the same level; you need to hear each part in relation to the others.
With compression, everything is louder, and we tend to lose the dynamic range of the music. The small nuances become more prominent and muddy the main melodies and harmonies, the rise and fall of dynamics becomes flatter, and “imperfect” playing is homogenized. This trend has been growing for nearly 30 years now, and no music is safe.
This isn’t to say that compression is bad by any means; it can actually be vital in the recording process for getting a cleaner signal from a particularly temperamental drum, or for evening out the sound of a singer who isn’t familiar with keeping a proper distance from the mic. Used to clean up recorded sounds during mixing, compression can help achieve a better-balanced song in the end, but I prefer to control the volume manually.
I will work harder to control the overall dynamics by hand if it means I can keep the more natural dynamic sound of the instruments throughout. My ideas on how music should sound are, I will admit, my opinion; the contrary opinion favors slapping compression on every instrument for the entire song to get more even dynamics. That method has been steadily ruining how we listen to music for decades.