[This is my weekly column for GlobalNews.ca. – AC]
If you came of age musically in the 1970s and ’80s, you probably spent a lot of money on audio equipment. And I mean a lot.
If you’re a guy — and this was an overwhelmingly male thing — a sizeable portion of your disposable income went to things like speakers, amps, turntables, tape decks, and later, CD players for both the home and your vehicle.
Seeking perfect audio quality was an obsession. We wanted equipment that was not only loud but clear, accurate, and free from all manner of distortion. Conversations were peppered with phrases like “frequency response to 20 kHz,” “low wow and flutter,” “watts RMS,” and “signal-to-noise ratios.” Recording studio technology had improved to the point where recordings sounded better than anything that could ever be produced in the real world, and we dreamed of owning equipment that could reproduce every single nuance captured in the music.
And even as we basked in glorious full-frequency high-fidelity audio, we knew it could still be better if only we could afford better gear. Such is the Sisyphean life of an audiophile.
By the late ’90s, though, something had gone awry.
The first problem was a growing issue known as the Loudness Wars, an insane pursuit of ever-greater perceived volume in recordings. Labels believed that songs needed to jump off a CD and into the ears of listeners to grab their attention. That, they said, led to greater popularity and higher sales. Instead, though, we ended up with extra distortion, increased listener fatigue, and a complete perversion of all the work that went into making the recording in the first place.
Keep reading. It’s important.