Does indie music really mean shitty music? 

image

I’ve heard it often: Indie music means bad sounding music, in other words, it’s shitty!

First, indie music is often confused with music recorded in a garage by inexperienced musicians with little or no knowledge of recording and mixing, and sometimes less than adequate equipment. 

It’s also confused with a genre that would be some kind of lo-fi punk alternative.
This is why, more often than not, I would use the term “unsigned” rather than “indie” to talk about independent music recorded by talented musicians and producers all around the globe, in every possible genre you can imagine.

Now, while it’s true that some of it can sound bad, it’s unfair to assume that all of it does. With recording gear and digital audio workstations (DAWs for short) being so affordable nowadays, and tons of resources available on how to record and mix, the results keep getting better, and more and more unsigned artists are producing quality music.

Still, there are some things that contribute to the myth:

1/ people are listening on devices that are less than adequate for good sound (phone, tablet and laptop speakers are not meant to be hi-fi, and even most Bluetooth smart speakers are too often synonymous with lo-fi: no bass, mono sound)

2/ streaming platforms and internet radios use low-rate* mp3 quality to air the music. This is because bandwidth has a cost, in terms of speed, but also in terms of money: servers powerful enough to sustain hundreds or thousands of listeners in a continuous stream are expensive, and that cost translates directly into the service fees radios pay.

* Streaming rate is measured in kbps, short for kilobits per second: the amount of data used to reproduce the sound. The higher the better, up to 320 kbps, which is the upper limit for mp3 and close to lossless.
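To make the footnote concrete, here’s a quick back-of-envelope sketch in Python. The numbers are just arithmetic on the rates quoted above, not any platform’s real figures:

```python
# Data used by one minute of audio at a given streaming bitrate.
def megabytes_per_minute(kbps):
    bits = kbps * 1000 * 60      # kilobits per second -> bits for 60 seconds
    return bits / 8 / 1_000_000  # bits -> bytes -> megabytes

print(megabytes_per_minute(128))  # 0.96 MB per minute of audio
print(megabytes_per_minute(320))  # 2.4 MB per minute of audio
```

Multiply that by hundreds of simultaneous listeners and you can see why bandwidth is the cost that drives radios toward lower rates.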

Now, there is a reason why most streaming platforms (like SoundCloud, Spreaker, or even Spotify in their free tier) and most internet radios stream at 128 kbps mp3 or more. They have recognized that this is the absolute minimum for listenable quality. Anything under that rate creates so many artifacts and so much distortion that the sound is barely recognizable anymore.

Check out this example of a snippet compressed at 128 kbps and the same snippet compressed at 64 kbps. You will hear the enormous difference between the two: notice how muffled the 64 kbps mp3 sounds, how the cymbals are drowned in a kind of swirling phase artifact, and how horrible it truly is. It’s even worse than cassettes were back in the 80s…
No matter what device you are using I bet you will be able to hear the difference!

You can go back and forth between two snippets in the player below (opens in a new tab/window):

Compression comparison

To me the 64 kbps version is hardly listenable. I wouldn't want my music to sound this bad, I bet most indie artists will agree.

Anyway, when you hear a shitty sound, don’t just assume the source music itself has been poorly recorded and mixed; check that the streaming rate you are served is not below the minimum of 128 kbps. I, and every unsigned artist striving to produce great-sounding records, will thank you!

Deconstructed 

Tomorrow, Saturday, October 12, at 3 pm EST, I’ll launch a new series of videos on my YouTube channel called “Deconstructed”. From then on, every Saturday there will be a new video.

The idea is to take very well known songs from very well known bands and artists, across many genres and many eras, and listen to the multi-tracks to discover how they were recorded, what makes them sound good, and also talk about a few production/songwriting tips and tricks that anyone recording could use to make their own songs better. So we’re really going to be deconstructing songs, track by track, and listening to what’s been recorded in detail.

This should interest indie artists, producers, engineers, but also any curious music lover, as I intend to let people hear things they might not have noticed in these recordings and point out why this makes a difference, and how it worked within the context of the song.

Now, some might say that it doesn’t sit well with my fight against streaming platforms… after all, YouTube is the biggest one of them, and the one that pays the least in royalties. But I consider YouTube a video platform more than anything, and a great platform for learning, and that is what this series is going to be about… whether it’s a fun fact, a detail in a song you’ve never noticed although you know it by heart, or some tips for recording and mixing, it’s a way to share my love of this music that has been part of my life and most likely yours too.

Anyway, the bands and artists will get whatever little royalties come from their “deconstructed” songs, because these videos will most likely be monetized on their behalf (or their label’s). Some videos might be taken down following labels’ DMCA requests, and if so, I’ll possibly put them on a private, password-protected site, so that people who are really interested will still be able to access them later. We’ll see how it goes.

In any case, I hope you will like this new series, whether you’re just curious about music or serious about learning songwriting/production.

So don’t forget to subscribe at https://www.youtube.com/c/ghostlybeard and hit that “bell” button to be notified each week when a new video comes out!

Compression #7 - wrapping up 

I hope you’ve learned a little bit about compression, what it’s used for and how it’s misused as well.

As a listener, you should be able to hear over-compression and ask your favorite radio hosts to ease off if you hear them overdoing it. You can point them to this blog if that helps. There’s also tons of reference literature on compressors and compression all over the internet.

Here’s a little audio example that should make you clearly hear the destructive aspects of over-compression:

One description of compression I’ve read compares it to a boxer punching a heavy bag as opposed to a concrete wall (no compression): the impact is absorbed and rounded the same way. The problem is that music needs concrete walls too!

Radio concerns

Don’t mistake over-compression (which sounds like there’s little difference between the low levels and the high levels of a song, as if everything had been smashed against a rubber wall) with file compression or streaming compression (as we said earlier, most radios air at 128 or 192 kbps). The effect of file compression is often that the sound is a bit phase-y: the left and right sides of the stereo image are slightly off, you get the impression that the location of the sound source is hard to pin down, and the sound kind of swirls in your ears. Bad, but unless the bandwidth gets higher (to 256 or 320 kbps) there’s not much to be done. Audio over-compression, however, can and should be avoided.

For radio hosts, compression has its use on your own mic levels, because you want to be heard loud and clear, and it will help even out your voice and keep it on top of any background. However, when it comes to music you should be very wary of your compressor/limiter settings. If in doubt, avoid compression, and especially limiting.

Since most music sent to radios is already mastered, no compression should be needed.

If limiting is applied, it should only be used as a brick-wall to avoid digital distortion from peaks that would go over 0 dB, depending on how hot the masters you receive are and how you push your faders. But it should never be reducing more than 1 or 2 dB and it shouldn’t be working all the time… if it does, you’re doing it wrong!

In short, please give the dynamics of the songs you are playing a chance. If the songs you get have already been over-compressed, that’s not your problem, but some songs are mastered to their best level and they shouldn’t be penalized by over-compression after the fact.

Remember how tricky it is to assess the quality of a sound when louder always sounds better: your own ears will fool you into thinking you’re making it sound bigger and better. More than anything, remember that the ultimate level control is up to the listener, so trying to make things artificially louder is futile in the end and will only be detrimental to the sound quality. When your radio is played quietly, over-compression will simply make it sound bad, and there’s no reason for it.

The new loudness standards

There is now a general standard in audio land, and indeed most streaming platforms have adopted it as well as most TV and FM radio broadcasters. The consensus nowadays is to use loudness compensation to bring down everything around -14 LUFS (Loudness Unit Full Scale – I will not go over the details of this norm but there is plenty of literature on the subject all over the internet and I invite you to research a little bit about it).

The fact is that a lot of internet radios and podcast shows I hear nowadays are playing around -8 LUFS, sometimes even hotter, sometimes a lot hotter! Which means that on average they are 6 LU (roughly 6 dB) louder than the generally accepted target, and that those 6 dB of dynamics have been squashed away. When you consider that an increase of 3 dB already represents a doubling of sound power, you will understand that 6 dB of lost dynamics is huge.
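A minimal sketch of what loudness normalization does with these numbers (the -14 LUFS target is the commonly cited figure; exact behavior varies per platform):

```python
TARGET_LUFS = -14.0  # commonly cited streaming loudness target

def playback_gain_db(measured_lufs):
    """Gain a platform applies so the track plays back at the target loudness."""
    return TARGET_LUFS - measured_lufs

print(playback_gain_db(-8.0))   # -6.0 -> a -8 LUFS master gets turned DOWN 6 dB
print(playback_gain_db(-20.0))  #  6.0 -> a quieter master may get turned up
```

In other words: the over-compressed show gains nothing in playback level, it only loses its dynamics.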

Finally, my advice to all radios and podcasts is: have a look at your compression and limiting settings, and when in doubt, avoid it entirely. Your listeners will thank you in the long run. I sure will!

Compression #6 - over-compression 

With audio technology becoming more and more sophisticated, and the advent of digital audio in particular, some limitations of the analog world stopped being an issue and compressors and compression techniques started to be more and more used to try and grab the listeners’ attention.

We said before that given two identical sounds, if one is played louder it will sound better to our ears. There’s a psychoacoustic explanation (at higher levels our ears pick up more of the low and high frequencies, as the equal-loudness curves show), but the fact is: play something quiet, then follow it with something louder, and most listeners will prefer the louder part…

This is why compression was used more and more during airplay on TV and radio to try and bring out the commercials at a louder level than the movie or songs played before and after, as an attempt to capture the attention of the listeners.

The loudness war

Audio engineers and studios caught up with that idea and started to apply more and more compression to songs during the mastering phase. This is why most of the digital remasters done during the 90s for CD re-release were more compressed than the original. This went so far that it’s been coined “the loudness war” (more compression = more overall loudness).

For example, let’s have a look at a graph comparing an original with 2 remastered re-issues:

You can clearly see that the amount of compression applied went totally crazy. Now the problem with that is that a lot of details of the original were lost in the process. A lot of the transients were leveled, and everything is basically at the same level… There is a huge loss of dynamics (the difference between the loud parts and the quiet parts), when dynamics is what makes music. It’s hard to appreciate something loud all the time, it’s better if the music is flowing and there are ups and downs, quiets and louds…

Another thing to realize is that over-compression is fatiguing to the ears.

Remember that sound is basically air waves, expanding and contracting, finally caught by our ears. Over-compression keeps the average pressure sent into our ears constantly high, with no quiet moments to recover. This can sound more immediately pleasing, but in the long run it creates an ear fatigue that is pretty damaging, and it’s just plain boring.

In the next episode we’ll wrap up with some final thoughts about compression and why you should care. See you then!

Compression #5 - parameters 


We’ve seen that a compressor acts based on a threshold (of amplitude) to know when it should start compressing. We’ve also seen that with a gain control we can raise the output level of the whole signal after compression.

Let’s have a look at 3 other parameters that are going to change the way compression works: attack time, release time, and ratio.

Attack time

The first one, attack time, lets you dial how fast a compressor reacts to a signal that goes over the threshold. It can be very long (up to 500 milliseconds on some units) or extremely short (down to a few microseconds). Changing the attack time will mostly change the way the attack of a sound is treated. This is where we can tell the compressor: as soon as you hear a transient over the threshold, reduce it; or: take your time and let the transient play out over the threshold before you reduce the sound. So, in effect, with that parameter we can reduce the transients or emphasize them relative to the sustain of the sound.

For example, let’s look at a typical snare sound again. The whole sound takes 480 ms to ring out. Now, if we set an attack time of 220 ms, we allow the attack/transient of the snare to pass through but reduce its sustain, making the snare sound ‘thinner’. This is the opposite of last week’s example, where we reduced the attack/transient (with a fast attack), then added some gain to the whole signal to make it sound “fatter”.
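Here is a deliberately crude sketch of that idea in Python. Real compressors ramp their gain reduction smoothly; this toy version just switches reduction on once the signal has been over the threshold for longer than the attack time, and the snare numbers are made up for illustration:

```python
def compress_envelope(levels_db, threshold_db, attack_ms, reduction_db, step_ms=10):
    """Reduce the level only once the signal has sat over the threshold
    for longer than the attack time. levels_db is sampled every step_ms."""
    out, time_over_ms = [], 0
    for level in levels_db:
        if level > threshold_db:
            time_over_ms += step_ms
            out.append(level - reduction_db if time_over_ms > attack_ms else level)
        else:
            time_over_ms = 0
            out.append(level)
    return out

# A toy snare: a loud 20 ms transient, then a long decaying sustain,
# all of it sitting above a -18 dB threshold.
snare = [-3, -3] + [-12] * 40
squashed = compress_envelope(snare, threshold_db=-18, attack_ms=220, reduction_db=6)
# With the 220 ms attack, the transient and early sustain pass untouched;
# everything after 220 ms is pulled down 6 dB, thinning the tail of the snare.
```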

image

Release time

The next parameter of a compressor is the release time, which tells the compressor how long to take before getting back to normal (leaving the signal untouched). Dialing the release time is often used in EDM (Electronic Dance Music) to make the whole sound ‘pump’ (go up and down) in rhythm with the tempo, because it can be used to keep the sound reduced for a certain time between each new beat.

In general, if you want a compressor to act more naturally, you will dial a shorter release time, but it’s often set according to the tempo for the reason above. The longer the release time, the more compressed the overall sound will be. If the release time is too short relative to the sustain and the beat, it will make the levels pump up and down, and a release so short that the compressor never gets to stop compressing can create audio artifacts and a constant, fatiguing pressure.

Ratio

The next parameter that is important in defining how a compressor will process sound is the ratio.
The ratio is expressed as two numbers, like 2:1 or 5:1 or more. The second number is always one, and the first number defines how many decibels of input over the threshold it takes to produce one decibel of output over it. It is a divider.

At a ratio of 1:1 there’s no compression: for 1 dB of input there will be 1 dB of output. With a ratio of 2:1, a sound 2 dB over the threshold will be reduced to 1 dB over; 8 dB over the threshold will be reduced to 4 dB; and 1 dB over the threshold will be reduced to 0.5 dB. With a ratio of 5:1, a signal 10 dB over the threshold will be reduced to 2 dB.
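The arithmetic is simple enough to sketch (all values in dB relative to the threshold):

```python
def output_over_threshold_db(input_over_db, ratio):
    """For every `ratio` dB of input above the threshold, 1 dB comes out above it."""
    return input_over_db / ratio

print(output_over_threshold_db(8, 2))              # 4.0 -> 8 dB over becomes 4 dB over (2:1)
print(output_over_threshold_db(10, 5))             # 2.0 -> 10 dB over becomes 2 dB over (5:1)
print(output_over_threshold_db(10, float("inf")))  # 0.0 -> nothing escapes: brick-wall limiting
```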

image

When the ratio goes over 20:1, up to infinity:1, we’re talking about limiting. Infinity:1 is also called brick-wall limiting, because no signal over the threshold will be able to pass (depending on the attack time some transients might pass through briefly, but they will be pulled down to the threshold level as soon as the attack time is over).

The ratio is an essential parameter, it defines how hard a compressor will compress the sound.

(Some advanced compressors also have a knee parameter, which defines how much of the compression happens around the threshold, allowing the compression to bleed under it to avoid any sharp difference between compressed and uncompressed sound. But this is really advanced, and its effect is subtle enough that it shouldn’t be a concern in radio land anyway.)

As we’ve seen before, it can be beneficial to compress a sound, so heavy compression is not necessarily a bad thing. It does change the sound though, and this is where issues can arise, especially in radio land, where the sound is supposed to have been dialed in as well as possible during the mixing and mastering stages already…

More about that in our next episode, where we’ll look at the disastrous effects of over-compression and over-limiting.

Compression #4 - usage 

We talked about the main components of a sound when it comes to Time and Amplitude: Attack (or Transients) and Sustain. Then we examined how the sound is stored in digital land and how we cannot go over 0 dB.

So, a compressor is essential to store (and reproduce) louder sounds without distortion: it makes sure nothing goes over that limit, while every sound we want to hear is pushed forward enough within it.

A compressor’s main purpose is to reduce the Amplitude (level) of a sound during its lifetime.

You can think of it as a fader or volume control, but one that is automatic and can act extremely quickly, reducing the level of the sound at various phases, depending on a few parameters…

Evening out levels

This can be very useful for sounds that vary a lot, like a vocal for example… It’s not unusual for a vocal to have a lot of variation in amplitude, even within one single vocal line. For example, look at this vocal take:

Compression here is going to help even out the performance by lowering the highest parts (the peaks)… Once everything is at a similar level, we can then make the whole thing louder and upfront, as it should be in a song.

If we tried to raise the level of this sequence as a whole, to bring out the lowest parts, the highest parts would go over the limit of 0 dB, so they would be clipped and distort (the nasty sort of distortion). By first lowering the highest part (evening out the whole sequence) with a compressor, we can then raise the level of the sequence without going over the limit and without distortion!

This is the typical and simplest way to use a compressor. And it’s used A LOT during the mixing phase.
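A minimal sketch of this “even out, then turn up” move, working on a dB envelope rather than real audio (a real compressor smooths its gain changes over time; the vocal levels here are made up for illustration):

```python
def even_out_and_raise(levels_db, threshold_db, ratio, makeup_db):
    """Pull peaks above the threshold toward it, then lift everything by makeup_db."""
    out = []
    for level in levels_db:
        if level > threshold_db:
            level = threshold_db + (level - threshold_db) / ratio  # compress the peak
        out.append(level + makeup_db)                              # makeup gain
    return out

vocal = [-20, -6, -18, -3, -22]  # uneven phrase levels in dBFS (hypothetical)
leveled = even_out_and_raise(vocal, threshold_db=-12, ratio=4, makeup_db=6)
print(leveled)  # the quiet parts are now 6 dB louder, yet every peak stays below 0 dBFS
```

Without the compression step, adding 6 dB would push the -3 dBFS peak to +3 and it would clip.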

Fattening a sound

Now, another way to use a compressor will be to even out the difference between the attack and the sustain inside a single sound/a single note (not a whole performance like above), making it appear “fatter”. How so?

Remember that a compressor can act very, very fast (some modern compressors can see the peaks before they even play, which is called look-ahead, and they can react within a single sample), so it can act during the lifetime of a single note at a time, and this is where it will be used to alter the sound and make it fatter.

Let’s see how this goes. First, you need to understand one of the main parameters of a compressor which is its threshold. The threshold is the volume level over which a compressor will start acting. The picture below should tell you what a threshold is:

Everything that is over the threshold will be processed by the compressor. Everything under it will stay untouched. So, with the threshold parameter, we can tell the compressor which parts it should work (reduce) on and which ones it should leave alone.

Let’s have a look at a typical snare hit before compression:

If we were trying to make this snare hit louder as it is, it could go over 0 dB which is not desirable.

But if we apply a fast compression, we can reduce the attack peaks, like this:

You can see that the threshold was set so that the Attack of the snare was reduced relative to its sustain (which was left untouched). Now, because the attack has been reduced, we can actually make the whole sound louder and it will not distort. If we do so now (using another parameter of the compressor, called gain which is applied AFTER the reduction and will raise the overall output level), it will look like this:

The initial attack is back roughly to where it was, but notice that the sustain has been made louder, thus making the snare sound “fatter”!

Next time we’ll look at some other usage of a compressor and a few other parameters that are used to alter a sound, mainly attack time, release time and ratio. Then we’ll talk about limiting. And finally, we’ll talk about loudness, the loudness war and why it’s important to know about it. See you then!

Compression #3 - digital sound 

To understand one crucial role of compression, which is to avoid digital clipping, you also need to understand a little bit(!) how sound is stored and processed in digital land.

Then and now

In analog land, when music was stored on tape and vinyl, the sound waves were truly waves, and they were captured and played by components that could reproduce the air pressure that is truly the nature of sound. Waves were at the start, they were stored as waves and reproduced as waves…

The digital revolution has changed that. We are now storing sounds (and images and anything on a computer) as bits: 0s and 1s. There’s no real in between (at least until quantum computers are mainstream but that’s another story!)

Storing sound

The way a wave is stored on a computer is by cutting it into discrete pieces of information, usually by grouping 8, 16, 24 or 32 (and even 64) bits together. These groups are called words (an 8-bit word is a byte), and each can store a fixed range of values, no more, no less. With 8 bits, we can store 2^8 = 256 values, so between 0 and 255. With 16 bits, 2^16 = 65536 values, so 0 to 65535, etc.

A sound is stored by analyzing a wave in time and determining its amplitude, from 0 to x (depending on the number of bits used). The sample rate will determine how fast that analysis happens, it’s called “sampling” (taking a sample of the amplitude of a sound at a given time and storing it in a byte).

Sampling

Typically, a sound from a CD is sampled with 16 bits at 44.1 kHz, meaning there will be 44100 values (each ranging from 0 to 65535) per second. There are all sorts of other sample rates and bit depths, but let’s keep that as our reference. Just know that the higher the sample rate, the closer together the discrete pieces of information are, and the smoother the reproduction of the initial sound wave; and the higher the bit depth, the more discrete differences in amplitude (dynamics) we can store*.
But know that 44100 values per second, each between 0 and 65535, are more than capable of reproducing the waves our ears are able to discern (unless you truly have golden ears, which might be the case for a truly gifted 0.000001% of the world population).

* The 16-bit compact disc has a theoretical un-dithered dynamic range of about 96 dB; however, the perceived dynamic range of 16-bit audio can be 120 dB or more with noise-shaped dither, an advanced technique taking advantage of the frequency response of the human ear.
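The ~96 dB figure follows directly from the bit depth; as a quick sketch:

```python
import math

# Theoretical dynamic range of linear PCM: 20 * log10(2^bits),
# i.e. roughly 6.02 dB per extra bit (dither and noise shaping
# change the perceived picture, as the footnote notes).
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 -> the ~96 dB quoted for CD
print(round(dynamic_range_db(24), 1))  # 144.5 -> why 24-bit has so much headroom
```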

As you can see from the picture above, the values of the wave are transformed into discrete little samples, and these samples can only store up to a maximum amplitude value. This maximum value is called 0 dB. dB is short for decibel, the measure of the amplitude of a sound (note that it is not a linear scale but a logarithmic one: a difference of 3 dB corresponds to a doubling of sound power, and a difference of about 10 dB is what the human ear perceives as twice as loud).
Every measure of sound level is always minus something… 0 dB being the absolute maximum at which a sound can be stored; in 16 bits, it is going to be the value 65535. We go from 0 (which is -infinity dB) to 65535 (which is 0 dB).

Clipping

In digital land, there’s no way to store more than 0 dB, because a 16-bit sample cannot hold more than the value 65535. If a sound goes over this limit, it will be “clipped”, meaning its value will still be stored as 65535.
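Clipping is literally just a hard ceiling on the stored values; using the 0-to-65535 framing from above:

```python
MAX_16BIT = 65535  # the largest value a 16-bit sample can hold

def clip(sample_values):
    """Anything past the ceiling is flattened to the ceiling: that is clipping."""
    return [min(v, MAX_16BIT) for v in sample_values]

wave = [30000, 60000, 70000, 90000, 40000]  # the middle peaks exceed the ceiling
print(clip(wave))  # [30000, 60000, 65535, 65535, 40000]
```

Notice how the two different peaks (70000 and 90000) both come out as the same flat 65535: the top of the wave is literally sheared off, which is what produces the harsh distortion described below.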

This was not the case in analog land. When we pushed an amplifier, it distorted the sound, but in a way we’ve become accustomed to, what some people call the “warmth” of analog sound. This is especially true of tube amplifiers, which overheated and distorted the sound in a very pleasing way. Of course, if you truly went over a certain level you could also blow your amplifier and get a nasty sort of distortion. But in general, you could achieve a great sound with distortion, and indeed it has been used to great effect by every guitarist in the rock world, as Jimi Hendrix could have told you!

Now the problem is that in digital land, you cannot really push a sound over the limit. It will just be “clipped”, and the result is a nasty sort of distortion that is not at all pleasing to the ears: think high-pitched noise from a robot in a bad sci-fi movie, or something more like white noise and hissing dirt in your ears.

All of this to say that one crucial role of compression will be to avoid clipping and digital distortion. We will see in a later part how this is achieved, with some audio examples as well… stay tuned!

Compression - part 1 

Compression is a great tool! When used during mixing and mastering especially, it has many uses. But during airplay it’s very rarely beneficial, especially when you have no idea what you’re doing…

A land of confusion

But when talking about compression, the first thing we need to define is what type of compression we’re going to look at. Because when it comes to audio, there are 2 types of compression that people might talk about. Welcome to the land of confusion! Hopefully, I’ll be able to help clear things up a little bit…

The first type of compression is the one we’re going to look at in detail. It is the one used during mixing, mastering and also during airplay. It affects the audio directly, and you might see it referred to as dynamic range compression. Another term we’re going to see used for compression is limiting (or even brick-wall limiting), which is nothing else but audio compression with extreme settings.

File compression

The second type of compression that you might hear about is file compression. This is the difference between a .wav (or .aif) file and an mp3, for example.

There are various types of file compression. Some are lossless (they will not affect the sound in the end: no information is lost, because these formats are decompressed back to the original when played), others are lossy (some information is lost during the compression process).

Think of lossless as a zip file. It is a compressed file, but you can always decompress it and get the contained files back intact. Lossy compression, though, will remove some information based on clever algorithms that analyze the sound to get rid of whatever is deemed non-essential to reproduce it. It’s based on the physics of how we perceive sound, which frequencies matter more than others, and various other factors. How much the sound is compressed with lossy compression depends on the bitrate, measured in kbps (kilobits per second), the maximum for mp3 being 320 kbps, which is almost (but not quite) lossless.
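The zip analogy can be demonstrated with Python’s standard library: a lossless round-trip gives back the exact original bytes.

```python
import zlib

# Lossless compression round-trip: the decompressed data is
# bit-for-bit identical to the original, exactly like unzipping a file.
original = b"the same phrase, over and over " * 1000
packed = zlib.compress(original)
restored = zlib.decompress(packed)

print(len(packed) < len(original))  # True -> repetitive data shrinks a lot
print(restored == original)         # True -> nothing was lost
```

Lossy codecs like mp3 cannot offer that second guarantee: the discarded detail is gone for good.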

So, in audio land you can have:

  1. raw files (not compressed at all), like .wav or .aif 
  2. lossless files like .flac or .alac 
  3. lossy files like .mp3 or .aac

Although lossy compression affects the sound (and the lower the bitrate, the more it does), this is not what we’re going to look at. The reason is that most radios play at a rate of 128 kbps or 192 kbps (some use 64 kbps, which is hardly listenable), and although this of course means a loss in quality compared to raw files (at 128 kbps as much as 90% of the initial information can be discarded), it is bound to the bandwidth they have, and that bandwidth is itself based on how much they pay and how many listeners the stream provider can support at that rate. So, in short, there’s not much they can do about it…
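That “as much as 90%” figure is easy to sanity-check against the raw CD data rate (44.1 kHz, 16 bits, 2 channels). This is an information-rate comparison, not a perceptual one:

```python
# Raw stereo CD audio vs. a 128 kbps mp3 stream.
raw_kbps = 44_100 * 16 * 2 / 1000  # sample rate * bit depth * channels = 1411.2 kbps
mp3_kbps = 128
fraction_removed = 1 - mp3_kbps / raw_kbps
print(round(fraction_removed * 100))  # 91 -> roughly 90% of the raw data is gone
```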

What online radios can work on to improve the quality of their sound is the first type of compression, which is audio compression (and limiting). So, this is mainly what we’re going to examine in detail, in the hope that it will give everyone a clue as to what they hear and whether too much compression is damaging it… 

Tune in next week to start diving into the wonderful world of audio compression!

Compress or impress? 

image

There are a lot of misconceptions about compression: how it works, how it affects the sound, what the benefits are, and how to use compression (or avoid it) in mixes but also during airplay. What’s the “loudness war”? What are the standards nowadays? How can compression damage the sound?

Routinely, I hear radios that are over-compressing, actually limiting, tunes that have already been compressed and limited during the mixing and mastering phase. This doesn’t help the sound; in fact it badly hurts it! Add to that the fact that most internet radios and podcasts are streaming at 128 kbps, which is quite a low bitrate and already damages the sound, and you get a lot of shows where the sound is pretty atrocious.

This week, I was also asked my opinion on a tune to be released for Xmas, supposedly a cover of a pop song. I was surprised to hear such an amount of compression and limiting that it sounded more like Metallica in its worst days than a light-hearted pop tune for a young audience… That the mixing and mastering engineers so misjudged the amount of compression appropriate for the genre is rather disturbing.

This really made me think that I should try and write a few articles on compression: what it means, what it can do, how it can help the sound but also how it can damage it irreversibly. Dynamics is a vast and very misunderstood subject, even among some novice sound engineers (and apparently some seasoned ones!), and indeed among a lot of radio hosts as well.

Now, the trick will be to find a way to explain this complex subject in terms anyone can understand. I’m thinking of a “compression for dummies” kind of refresher course, in a series of articles… If I can pull that off, maybe it will help radios (and even listeners) recognize the effects of over-compression and make them strive for a better, more natural sound.

I fight for darkness 

image

Well, maybe not exactly the way you think… but let me explain!

This weekend I received the masters of my new album from my mastering engineer. He’s great and has a really good ear; for example, he found little issues in my mixes that I had totally overlooked, like some overbearing hiss on the drum part of one song, or a click in the middle of a word on the vocal track of another. I’ve listened to these a thousand times and had gone deaf to these details…

But one thing is that his masters tend to be quite bright, and me, well, I’m more on the dark side…

I suppose it’s a sign of the times, but it makes me think that when people moan that CDs and digital sounds bad, and mourn the good old days of vinyl and the warmth of analog gear, perhaps all they are really longing for is a darker sound.
To be honest, the good old days were riddled with noise and distortion and many unwanted issues, and the sound was suffering from it.

Fact is that vinyl has a limited frequency range compared to digital: it’s true of the low end, which cannot be extended to the kind of subs the EDM crowd is used to, but also of the high end, which generally makes for a darker overall sound… 

With the advent of CDs and digital, there are no such limits anymore, and we can render up to 20 kHz and even beyond (which is pretty pointless actually), and this makes for more clarity in the high end, extended subs, etc. 

Add to that the extreme compression that the recent loudness war has accustomed our ears to, and the fact that people listen to their cell phones, laptops with ear buds, with little low end, and all of that makes for a modern sound that is generally very bright.

Me, being the old dinosaur that I am, I tend to favor a darker, mellower sound, I even suspect that some amount of mud in a mix is necessary for a better sounding overall listening experience, something that doesn’t fatigue your ears and caresses instead of hammers your eardrums.

So in the end, I have asked my mastering engineer to add some mud back and to embrace the dark side. I’m sure Darth Vader would be proud! :D