You might think I’ve started with a poor pun. Mastering for (laughs) LUFS. Nope, it’s a serious business. After all, I had been doing it all wrong. Although I’d read up about it and tried it with the limited setup my project studio has, I’d assumed that in Ableton Live I could master a recorded track on the master track of the song file. Wrong. It doesn’t work like that.
Now I have to admit that the songs I’d recorded did come out louder and more prominent, whether I’d used Ableton’s own mastering presets as a starting point or third-party plug-ins like T-RackS. But no, you really have to export an unmastered track, then do the mastering work separately on the WAV or AIFF of that track. Even in Ableton. And quite a bit more besides. Well, now I know.
Mastering has come a long way since a young man in New York called Bob Ludwig was given the task of taking a full album of tracks and making sure all the tracks on that album played at more or less the same volume. Record sales outlets in the late 1950s, when long-playing records started to become popular, had been receiving complaints from customers that when listening to a record they had to continually get up from their seat to adjust the volume on their gramophone, as some tracks were very quiet and others too loud. Mastering’s original purpose was to appease the listening public and give them a more comfortable listening experience. And in a sense, it still is.
During the 1960s mastering became more technical and more of an art form, enhancing the warmth or colour of a track using EQ, compression and peak limiting. By the 1980s it was used to add post-production polish and cement the signature sound of a particular band or album, and in mastering, by now a respected profession, engineers became sub-divided into the genres of music they were individually most adept at mastering. Hip hop would use different techniques, and sometimes different equipment, for mastering an album than, say, the rock or classical genres.
And in the early 2010s along came streaming, and with the effective demise of the “album”, artists took to having their single songs mastered. Even a single song had to sound the best it could on modern listening devices. That song might be the artist’s best (and sometimes only) opportunity to get his or her name in lights. And the mastering engineer is now tasked with capturing the nuances of the mix so that the final track for release is representative of the artist’s attitude, vibe, feel, drive, vocal subtleties and so forth. Not an easy task. The process is essentially simple, but often takes great skill and experience to get right.
I used to have a neighbour who was a mastering engineer. He’d been a session drummer in his youth and had sat in with bands in the late ’60s such as Fleetwood Mac - yes, that kind. His mastering studio was in Hammersmith, London, and he’d invited me round to have a quick look. Acoustic insulation was strategically placed around the walls and ceiling. The ceiling had bass reflection attenuation and the main door was, as in most professional recording studios, doubled and gapped. Noise suppression was fitted to the air conditioning and the PC he used was muffled. The room achieved the objective of eliminating all noise from outside and all reflections from within. I reckon it would have cost him some serious investment. Of course it was his profession, and he was working with top-line acts and mastering albums, some of which would be nominated for a Grammy. It was on this day that he told me streaming was about to change everything.
And until recently, I’d been blissfully unaware of modern mastering requirements. I’d generally mastered to get a sound that I thought was an improvement over the unmastered sound. Plus, you don’t master the same way for a vinyl record as you would for a CD. The medium affects the outcome. But what if there is no tangible medium? What do you do in the case of streaming, where songs exist solely in cyberspace? Mastering for Loud and Streaming...
And so to the LUFS: Loudness Units (relative to) Full Scale. This is a loudness measurement (defined in the ITU-R BS.1770 standard) adopted by the European Broadcasting Union in its recommendation EBU R 128, most recently revised in 2020, which recommends loudness normalisation and maximum levels for audio signals, mainly for television and radio broadcasts. And where do most television and radio programmes now live? On the internet, of course. And streaming, like it or not, is the modern-day personalised radio station.
So, as most artists are prioritising streaming plays on platforms like Spotify, Deezer et al, mastering is now predominantly directed at streaming play. And each platform has its own audio quality requirements, largely adhering to the recommendation from the EBU. The generally accepted target for a track to be uploaded to a streaming platform is -14 LUFS integrated loudness with a maximum true peak of -1 dB.
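As a rough illustration, the target boils down to a simple check on the two numbers a loudness meter gives you. This is a hypothetical helper of my own, not anything a platform actually runs; the function name and the tolerance parameter are my inventions:

```python
# Hypothetical sketch: check a measured master against the common
# streaming target of -14 LUFS integrated and -1 dB maximum true peak.
def meets_streaming_target(integrated_lufs: float, true_peak_db: float,
                           target_lufs: float = -14.0,
                           tolerance: float = 1.0,
                           peak_ceiling_db: float = -1.0) -> bool:
    # Loudness should sit within the tolerance window around the target...
    loud_ok = abs(integrated_lufs - target_lufs) <= tolerance
    # ...and the true peak must stay at or below the ceiling.
    peak_ok = true_peak_db <= peak_ceiling_db
    return loud_ok and peak_ok

print(meets_streaming_target(-14.2, -1.5))  # True
print(meets_streaming_target(-6.0, -0.1))   # False: too loud, peak too hot
```

The two readings come straight off a meter like the one discussed below; how strict you are with the tolerance is a matter of taste.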
Of course, you don’t have to conform to this standard, as the streaming platform will normalise your track for you (these days automatically) to achieve its standard loudness requirements. But if you can’t meet, or at least get close to, the standard, you have little control over how your track will actually sound once the platform has used its output processors to compress and limit it. It might sound low-volume and weak. It might sound overblown and distorted.
So, as an indie musician/artist/producer, do you need a mastering engineer? It’s always good to use an expert; they know what they’re doing, after all. But as these relatively new standard requirements seem to matter more per song than per full album, most indie project producers, the spare-bedroom studio operators on a budget, can do this for themselves without the extra expense.
I’m still using Ableton Live 10 as my DAW of choice. I love it. I’m just finishing off a new rock album and decided to check YouTube for instruction videos on how to master tracks in Ableton, which is where I found out that what I’d been doing previously was all wrong. There are great tools available for mastering which are compatible with most DAWs, including the professional WaveLab and iZotope suites and the budget yet very effective T-RackS, but Ableton, it appears, has all its own tools without having to resort to third-party plug-ins.
The Audio Professor (YouTube handle @TheRealAudioProfessor) is a resource with oodles of instructional videos, all free, on all things audio across different DAW platforms. He has a short video on how to master in Ableton Live 10 and goes through the process step by step so that you can set up a mastering chain and try it for yourself. The only third-party device you’ll need is the free Youlean Loudness Meter.
In the example he uses, the basic track is an electronic synth instrumental. He’s set up a Live arrangement with an audio effect chain as follows: a bass-mono device, which centres most of the bass sounds and in doing so helps to prepare for a clean mix; then an eight-band configurable EQ (Ableton’s EQ Eight); then a mastering compressor (Ableton’s Glue Compressor); then a Saturator device to add warmth or shimmer or whatever colouring you want; then a limiter; and finally, at the end of the chain, the loudness meter.
I found this chain setup to be a good starting point for rock music as well as electronic, although I had to change some EQ settings and other parameters depending on the track I was mastering. Using the loudness meter, which has readings for integrated LUFS and maximum true peak in dB, you can master a track that gets pretty close to the LUFS and peak requirements of the streaming platforms. Why bother? So that your tracks will pretty much sound like you want them to sound on Spotify et al. If a track is boomy and distorted, chances are no one will like it. If it’s too quiet, chances are no one will listen to it.
I’ve been able to get close to the -14 LUFS requirement without compromising something important in the way I want a track to sound, but I’m quite willing to accept plus or minus 1.1 LUFS (so, -15.1 to -12.9 LUFS) as long as my maximum true peak stays below -1 dB (normally between -1.4 and -2 dB). So I don’t waste time trying to be too exact. Once I had tried a few of the new album tracks out on some settings, I settled on one good setting for those tracks and checked how they turned out. Mostly, by adjusting compressor threshold, limiter ceiling and limiter gain, I was generally returning integrated loudness between -13.9 and -14.6 LUFS, and always a true peak below -1 dB, even if it was only -1.1 (although generally it was around the -1.5 mark).

As LUFS is expressed in dB (1 LUFS = 1 dB) and the decibel scale is logarithmic rather than linear, it’s important to get as close to the -14 LUFS line as you can, to reduce the effect of the platform’s normalisation on your sound. You ideally want all the tracks on a given album to play at the same volume, but even so some might appear louder than others because of what is known as “perceived loudness”. At the same time, you don’t want to impose too heavy a compressor threshold or too big a dB limit and squish your track until it sounds unnatural. I’ve found it necessary on occasion to go back to the track and fix the mix, sometimes finding errors in the mix I’d missed, then master the new mix with success.
Now, I’m not 100% sure about this, as I’ve been trying to work it out, but if your track sits at, say, -12 LUFS, the streaming platform will reduce the volume until it hits -14. If you’ve a track sitting at -15 LUFS, it may turn it up until it hits -14. If your track is at, say, -6 LUFS, the problem is that the reduction to -14 LUFS is a full 8 dB, and a master that was compressed and limited hard enough to get that loud in the first place will, once turned down, just sound weak. If your track is at -19, the risk is that bringing it up to -14 LUFS pushes its peaks into the platform’s own limiting, and it might end up boomy and distorted.
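My understanding (again, hedged: the exact behaviour varies by platform) is that normalisation is just a fixed dB offset applied to the whole track, which a few lines of Python can illustrate. The function names here are my own for illustration:

```python
TARGET_LUFS = -14.0

def normalisation_gain_db(measured_lufs: float, target: float = TARGET_LUFS) -> float:
    # The platform turns the whole track up or down by this fixed dB offset.
    return target - measured_lufs

def db_to_amplitude_ratio(gain_db: float) -> float:
    # dB is logarithmic: every -6 dB roughly halves the linear amplitude.
    return 10 ** (gain_db / 20)

# A hot master at -6 LUFS gets turned down 8 dB, to about 40% amplitude.
print(normalisation_gain_db(-6.0))            # -8.0
print(round(db_to_amplitude_ratio(-8.0), 3))  # 0.398
# A quiet master at -19 LUFS needs +5 dB of boost.
print(normalisation_gain_db(-19.0))           # 5.0
```

The offset itself is simple arithmetic; what it does to the sound depends entirely on how much dynamics processing the track took to reach its original loudness.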
Sometimes you’ll master a track and it ends up quieter than the unmastered track. Getting the sound balance right is more important than going for a loud, super-compressed track. The “Loudness Wars” may not be over yet, but loud doesn’t necessarily mean balanced. I think that if you’re mastering an album, you have to accept that occasionally a given track is going to be quieter than the others; otherwise you’ll have to make compromises, which might mean going back to the original track and producing a different mix.
Anyway, that’s my tuppence-worth.
Here are the links to The Audio Professor’s tutorials:
All about LUFS in Wikipedia: https://en.wikipedia.org/wiki/EBU_R_128