Mastering in the Present: Techniques and Tools for Modern Music Production

Perhaps it goes without saying how special the craft of mastering seems to the untrained eye. I could hardly convey in a single article how complex, and yet at times how simple, this process actually is. The inspiration for this piece came from my recent exchanges with many "professionals" who have spent the past decades in a bubble: they acquired extraordinary knowledge of mastering but stopped following modern solutions and fell behind. What follows contains both technical and historical information, so read it with an open mind; even if you don't agree with everything I say, perhaps you won't dismiss it outright.

If we look at the history of mastering, what counted as the mastering process in the 1950s is hardly relevant in today's world. Back then, the main problem was how to transfer the signal onto vinyl so that it would sound acceptable. Perhaps the most important turning point came when this purely technical task began to include creative shaping of the audio material. EQ, compression, mid-side processing, Dolby: these all became important tools of the mastering engineer's craft. With the advent of digital technology, the mastering process changed, expanded, and transformed once again, just as it had after that initial period.

And this change continues to this day. Unfortunately, there are those who, lagging behind the times, look askance at today's mastering solutions and cannot accept them. Some criticisms are certainly valid, but even the valid ones have another side. Let me give you an example.

From the mid-2000s onward, we can hear a drastic increase in loudness in most music. According to the older teaching, audio material should be kept dynamic, and it is unnecessary to squeeze extreme loudness out of it, since doing so reduces dynamics. However, a few resourceful mastering engineers discovered that if they pushed the loudness up, their track would stand out against everything else on the radio, in cars, and in clubs, and listeners would come to treat it as the standard. At first this was done subtly, but since everyone wanted more loudness, the industry soon entered a vicious cycle. Loudness maximization reached a threshold beyond which only extra tricks could still deliver good results.

Purely digital mastering technology was not yet prepared for this problem: digital software was simply unable to increase loudness without audibly damaging the music, so expensive analog compressors clearly outperformed it. Perhaps the first software to break into the collective consciousness in terms of loudness enhancement was the Waves L2 maximizer, but even that could not yet compete with really expensive compressors and limiters.

We can look at this situation from two sides. Why couldn't everything have stayed as it was? With this loudness-boosting craze we have only multiplied our problems. On the other hand, this unstoppable process carried an element of evolution: the essence of development is to face a problem whose solution lets us step onto the next level. That problem was maximizing loudness in a digital environment well enough to compete with analog systems. Many are critical of digital solutions simply because they are far cheaper than dedicated hardware, and in this light even people who did not fully understand the underlying technical processes started experimenting with software. To this day we can hear over-maximized music ruined in the name of loudness. But let's not be hypocritical: anyone who masters music regularly has undoubtedly made this mistake themselves.
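To make the core problem concrete, here is a minimal sketch of what a digital loudness maximizer does in principle: add make-up gain, then hold peaks under a ceiling with a fast gain-reduction stage. The function and its parameter values are my own illustration; real maximizers such as the L2 use far more refined look-ahead, release, and dithering schemes.

```python
import numpy as np

def naive_maximizer(x, sr=44100, gain_db=6.0, ceiling_db=-0.3, release_ms=50.0):
    """Push the level up, then keep peaks under the ceiling.

    Illustrative only: instant attack, one-pole release, no look-ahead and
    no oversampling, so it distorts more than a commercial limiter would.
    """
    x = np.asarray(x, dtype=float) * 10 ** (gain_db / 20.0)   # make-up gain: the "loud" part
    ceiling = 10 ** (ceiling_db / 20.0)
    release = np.exp(-1.0 / (release_ms * 1e-3 * sr))         # envelope release coefficient
    env = 0.0
    out = np.empty_like(x)
    for n, sample in enumerate(x):
        peak = abs(sample)
        env = peak if peak > env else release * env           # instant attack, slow release
        out[n] = sample * (ceiling / env) if env > ceiling else sample
    return out
```

Run on a mix peaking around -12 dBFS, this raises the average level by roughly the gain amount while clamping peaks to the ceiling; the pumping and distortion it produces on loud passages are exactly the artifacts that early digital tools struggled to avoid.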

In the case of pop music, mastering engineers were constantly balancing increased loudness against a sufficiently dynamic sound. One could say this is a bad thing, and I have to agree, but every coin has two sides: if this phenomenon did not exist, software designers would never have set out to find more convincing solutions. If we can have a bicycle, why can't we have a motorcycle? And with that, we have arrived in the world of iZotope Ozone.

The Smart Mastering Plug-in

The fifth version of iZotope Ozone brought a change that would define the direction of the field for years to come. Above all, this software pointed toward the future, even if no one yet knew that this was the direction things would take. Its IRC III maximizer technology was perhaps one of the best solutions of its time for raising loudness without damaging the dynamic range so badly that it became clearly audible.

The loudness war also brought many other innovations, each with its advantages and disadvantages. Staying with Ozone, breaking stereo-width enhancement into a multiband section is a solution that can cut both ways. By increasing the sense of space we also increase the sense of loudness, but applied recklessly it causes phase problems. I don't think this is the software's fault; the user is responsible if they do something wrong, and it is up to them to decide how far they can go. A traditional professional would say right away that this kind of spatial enhancement is nonsense and completely meaningless. That was true for the 1980s and even the 2000s, but today the argument no longer holds. Whether we like it or not, mastering processes and solutions now change according to users' needs, not according to what a mastering professional would want in an ideal situation. Of course, one can distance oneself from this process and reject it, but only if every client delivers consistently and expertly mixed music, and in my experience that is very rare.
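For readers who have never looked under the hood of such a tool, here is a minimal sketch of the mid/side idea these wideners build on: scale only the side (L minus R) content above a crossover, then check the left/right correlation afterwards. The function name, crossover, and width values are my own assumptions, not how Ozone implements its multiband imaging.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def widen_highs(left, right, sr, crossover_hz=200.0, width=1.4):
    """Mid/side widening applied only above a crossover frequency.

    Illustrative only: the high-frequency part of the side signal is boosted
    by `width`, the low end and the mid signal are left untouched.
    """
    left, right = np.asarray(left, dtype=float), np.asarray(right, dtype=float)
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    sos = butter(4, crossover_hz, btype="highpass", fs=sr, output="sos")
    side = side + (width - 1.0) * sosfilt(sos, side)   # widen only the highs
    out_l, out_r = mid + side, mid - side
    # Mono-compatibility check: correlation drifting toward -1 warns of phase trouble.
    corr = np.corrcoef(out_l, out_r)[0, 1]
    return out_l, out_r, corr
```

A correlation that swings toward negative values after widening is exactly the kind of phase problem described above, and a mono fold-down will expose it immediately.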

According to the current trend, mastering may use any available technique, as long as it benefits the mixed material. In the classical approach, mastering only adds a final light touch. I do not want to argue about which is better; personally, I consider both approaches valid, depending on the circumstances.

I understand the skeptics: a $200 piece of software is far more accessible than a $10,000 Fairchild compressor. Consequently, almost anyone can use it, which means a lot of poorly produced material enters the market, and the exclusivity of the profession is beginning to fade. But the software offers the same opportunity to do something well as to do it poorly. That is not the developers' fault.

Currently, digital mastering has not yet matched all the capabilities of analog hardware. Purely in terms of compression it has reached the quality of analog technology, but it still cannot accurately reproduce the flavor that an analog device imparts to the sound. There are attempts, such as UAD's emulations, but even these are not perfect in this respect. Some digital developments may look like smoke and mirrors, yet they are not aimless. Think of Ozone's Tonal Balance feature, or the Auto EQ in my own software for approaching a linear response. These features may seem unnecessary to a trained professional but can be a great help to an untrained one. The truth is that even a trained professional can benefit from them in certain cases: if they hear that the EQ does not degrade the material but they need to approach a reference sound, why not use it to speed up the work? So the question is not really what to use, but when and how to use it.

The use of linear-phase EQ has also become fashionable recently. For those working with traditional methods it is unnecessary, since analog devices are incapable of working this way. Those living purely in the digital world, on the other hand, mistakenly believe that it solves all phase problems. In reality, I can only repeat that it can be both an advantage and a disadvantage.
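To make "linear phase" concrete, here is a minimal sketch of a linear-phase high shelf built as a symmetric FIR filter. The function name, shelf frequency, and tap count are assumptions of mine, not any product's implementation; the point is that a symmetric impulse response gives a constant group delay (latency) instead of the frequency-dependent phase shift of an analog or minimum-phase EQ.

```python
import numpy as np
from scipy.signal import firwin2, fftconvolve

def linear_phase_shelf(x, sr, shelf_hz=8000.0, gain_db=2.0, taps=4097):
    """Linear-phase high shelf: a symmetric FIR designed from a magnitude curve.

    Sketch only. The symmetric taps delay every frequency by the same
    (taps - 1) / 2 samples, which we trim off afterwards; that latency is
    the price of linear phase and the reason analog gear cannot do this.
    """
    gain = 10 ** (gain_db / 20.0)
    freqs = [0.0, shelf_hz * 0.8, shelf_hz, sr / 2.0]   # target magnitude breakpoints
    gains = [1.0, 1.0, gain, gain]
    fir = firwin2(taps, freqs, gains, fs=sr)            # symmetric taps -> linear phase
    y = fftconvolve(np.asarray(x, dtype=float), fir, mode="full")
    delay = (taps - 1) // 2                             # constant group delay to remove
    return y[delay:delay + len(x)]
```

The trade-off is visible right in the code: no phase distortion across the band, but a fixed latency and the pre-ringing that comes with long symmetric filters, which is why linear phase is neither automatically better nor automatically worse.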

As for the future of mastering tools and solutions, high-end analog gear and general digital mastering software currently represent separate paths to the same goal, each with its own advantages and disadvantages. Digital technology will eventually reach the sound offered by the best analog circuits, but it is not there yet.

Digital technology is currently heading toward automation. There are already tools that put a finished master just a few clicks away. Before anyone stones me: obviously humans cannot yet be excluded from the process, but look at Ozone's analysis functions or LANDR's automated service. They are not perfect, but they already work, and they will only get better. Should I cry over this? I don't think so.

The working style of those in the mastering field keeps changing, at least for those who follow the call of the modern age. In the digital realm the market decides whether a piece of software deserves to exist, so new offerings constantly appear with some novelty or a different approach, and I see no harm in experimenting. As a result, solutions that were never part of the mastering professional's toolkit are now part of it; transient-improvement tools are one example, as sketched below. Those who create genuinely useful products will have the chance to develop further, while those whose products are replaceable will have to move in a different direction.
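To show what I mean by a transient-improvement tool, here is a toy sketch of the differential-envelope idea behind many transient shapers: a fast envelope follower reacts to note onsets before a slow one does, and the difference drives a short gain boost. Every name and default value here is my own illustration, not any particular product's algorithm.

```python
import numpy as np

def transient_boost(x, sr, attack_gain_db=3.0, fast_ms=1.0, slow_ms=30.0):
    """Toy transient enhancer based on two envelope followers.

    Illustrative only: where the fast envelope exceeds the slow one we are
    on a note onset, and only there do we add up to attack_gain_db of gain.
    """
    x = np.asarray(x, dtype=float)

    def follower(signal, ms):
        coef = np.exp(-1.0 / (ms * 1e-3 * sr))          # one-pole smoothing coefficient
        env, prev = np.empty_like(signal), 0.0
        for n, level in enumerate(np.abs(signal)):
            prev = coef * prev + (1.0 - coef) * level
            env[n] = prev
        return env

    fast, slow = follower(x, fast_ms), follower(x, slow_ms)
    amount = np.clip((fast - slow) / (slow + 1e-9), 0.0, 1.0)   # 0..1 "how transient"
    boost = 10 ** (attack_gain_db / 20.0) - 1.0
    return x * (1.0 + boost * amount)
```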