During mastering what exactly do you do? (for a single song)

Discussion in 'Mixing and Mastering' started by stav, Jun 24, 2024.

  1. Sinus Well

    Sinus Well Audiosexual

    Joined:
    Jul 24, 2019
    Messages:
    2,071
    Likes Received:
    1,584
    Location:
    Sanatorium
    This is wrong in two respects.
    1. Spotify always converts the audio material, regardless of its level, to Ogg/Vorbis, AAC or HE-AACv2, depending on settings, the player used and the streaming bandwidth.
    2. Spotify does not apply limiting, no matter how dynamic your track is, unless the user has, for whatever reason, gone into his preferences and manually activated a limiter by selecting the "Loud" mode, which compresses dynamic material to -11 LUFS. But why would someone who cares about audio quality do something so stupid? ;)
     
    Last edited: Jul 5, 2024
  2. eXACT_Beats_

    eXACT_Beats_ Audiosexual

    Joined:
    Apr 21, 2018
    Messages:
    727
    Likes Received:
    520
    I didn't want to bring this into the conversation because, like Loudness/dB/LUFS, it didn't seem on-topic with the OP. Absolutely a well-conveyed point though.

    I could never get into top-down mixing, for a pile of reasons, but one of the main ones was that it starts you with a frustratingly stupid EQ task, of cleaning up shit on a large scale that actually needs to be cleaned on a per-track scale. Top-down advocates say that it's fine to do small stuff like that first, it's not "cheating"... but then more "small stuff" pops up and you have to deal with that as well, and before you know it, you're not top-down mixing, or even bottom-up mixing (?), you're just haphazardly bouncing around like a drunken goon, pretending you've got things under control. So... yeah, just my take on the whole deal.
    But I'm all for putting finalizing plugins at mid-to-late stages in a mix (checking and then bypassing them, of course), using them for insight on where things are sitting. I started checking my mixes with a limiter on individual busses, and then on the 2-buss later in a mix, as soon as I wrapped my head around the basics of mixing, since it seemed mad dumb to get late-stage and then find out shit was wildly out of whack once I added some cohesive compression to final busses, or a limiter to the 2-buss. Turns out, the harder you push your mix into a limiter, beyond where you'd push it to actually limit it, the more you start to hear weak spots in your mixes, or where things fall apart (that reference to The Roots album was probably subliminally intended). Since then I've seen people start to bring it up more often, including the guys from Masteringdotcom, so that seems to be an indicator that I'm not unique in that.
     
  3. eXACT_Beats_

    eXACT_Beats_ Audiosexual

    Joined:
    Apr 21, 2018
    Messages:
    727
    Likes Received:
    520
    You had me at "psychedelic era." I listen to everything, but I grew up on my parents rocking 60s/70s stuff. I found the compilation on Bandcamp and I'm slowly working my way through it (where's Volume 1?) Your favorite track on here sounds like The First Edition's "Just Dropped In (to See What Condition My Condition Was In) made a love-baby with "Love Potion No. 9" in a bust-ass VW bus. :rofl:
    By the way, "Echo Train" must have been a nightmare to master. That shit is giving me that weird sense of disorientation that Korneff's WOW Thing gives you when you get crazy and start cranking knobs. The vocals' play and purposeful (I'm assuming,) imbalance between speakers is just a part of it. Old 4-track hard-panned records don't fuck with me this bad. :wow:
     
  4. Lieglein

    Lieglein Audiosexual

    Joined:
    Nov 23, 2018
    Messages:
    970
    Likes Received:
    554
    Yes, whats "needed" is again very objective and indeed the question remains:
    And what's the thing with the quality listening environment if everyone just has "his" ears? :unsure:


    So far I still stay with my first statement:
     
  5. No Avenger

    No Avenger Audiosexual

    Joined:
    Jul 19, 2017
    Messages:
    9,020
    Likes Received:
    6,250
    Location:
    Europe
    Well, TBH, I load up my mastering chain and improve the sound.
     
  6. eXACT_Beats_

    eXACT_Beats_ Audiosexual

    Joined:
    Apr 21, 2018
    Messages:
    727
    Likes Received:
    520
    I string together eight to twelve instances of Sausage Fattener on my master, each one slamming into the next with enough force to move a small planet, y'know, so as to remove all hints of subtlety that @Lieglein seems so afraid of. Where's my Grammy? :chilling:
     
  7. No Avenger

    No Avenger Audiosexual

    Joined:
    Jul 19, 2017
    Messages:
    9,020
    Likes Received:
    6,250
    Location:
    Europe
    But you need at least six times God Particle too. :guru: :rofl:
     
  8. clone

    clone Audiosexual

    Joined:
    Feb 5, 2021
    Messages:
    6,794
    Likes Received:
    2,970
    I was speaking about producing the track with a "finalizer" plugin in place that you are going to remove later. This is the kind of thing you do when you don't want the CPU overhead of an actual mastering chain full of CPU-intensive plugins on your 2-bus before you even start mixing an unfinished track. I don't think this "reverse" article is about anything more than working top-down, and it has nothing to do with mastering at all.

    Maybe you can explain this to me. Let's say I have channels in a project pushing the main over 0 dB. Redlined the entire way. You drop that plugin in, and it goes immediately to -0.1 dB or so. It could even be a clipper; it doesn't matter. Anyway, that last plugin's output to the master reads -0.1 dB. But when you add any other plugin immediately after it (doing nothing), the master shoots right back up to wherever the original signal was registering, way over -0.1 dB. If you switch the order of the inserts, it's immediately back to -0.1 dB. A clipper after your final limiter doesn't do this. Why?
     
    Last edited: Jul 5, 2024
  9. No Avenger

    No Avenger Audiosexual

    Joined:
    Jul 19, 2017
    Messages:
    9,020
    Likes Received:
    6,250
    Location:
    Europe
    :woot: Doesn't happen here. If I insert a plugin with an output of -0.1 dB and add another one after it, the output remains at -0.1 dB as long as the second plugin isn't doing anything. But even if it's doing something, how could the level jump back to the value it had before the first plugin? And it doesn't even matter whether the master is overloaded or not.
     
  10. Sinus Well

    Sinus Well Audiosexual

    Joined:
    Jul 24, 2019
    Messages:
    2,071
    Likes Received:
    1,584
    Location:
    Sanatorium
    Because of oversampling... just guessing
     
  11. clone

    clone Audiosexual

    Joined:
    Feb 5, 2021
    Messages:
    6,794
    Likes Received:
    2,970
    I know it "shouldn't happen", which is why I remembered it and asked about it. I figured it for a bug in either that plugin or the one after it. If I run into it again, I will revisit it. I thought there might be a logical explanation for it I was unaware of, like how low cutting causes drastic phase rotation. :dunno:
     
  12. lbnv

    lbnv Platinum Record

    Joined:
    Nov 19, 2017
    Messages:
    359
    Likes Received:
    198
    It's easy. Just treat a track as if it was a single note. Compress and limit it to shit, make it a single note... Done!
     
  13. Barncore

    Barncore Platinum Record

    Joined:
    May 25, 2022
    Messages:
    369
    Likes Received:
    255
    Exactly, it's worse. That's why making a song too quiet is a bad thing.

    As it says in the link you provided, all they're doing is lowering the volume of playback. There's no converting going on. People have a misconception that a track being above -14 LUFS will be "penalized" sonically by Spotify. Probably a misconception fuelled by Ian Shepherd's loudnesspenalty.com website. Or just based off Spotify's dire warnings directly. Normalization is just for playback, nothing more. They don't alter the sonics.

    The "loud" preset however, i'm fairly sure they're using their limiter algorithm for that, from memory. Avoid it.

    Well, if your song is going to be limited, would you rather it be by Spotify's "one size fits all" limiter algorithm, or a high quality limiter plugin in the DAW?
    I know what I'd choose.
    That's why making your song too quiet isn't an efficient choice.

    1. Yes, correct. They are converting the file format/compression, sure, but they aren't doing anything loudness-wise in that process. Normalization is a volume adjustment for playback; they don't bake normalization into the file, is the point. You're getting into semantics here.
    2. Incorrect. Limiting is how they add gain to songs that are below a certain LUFS threshold (I think -20 LUFS or something, I don't remember the exact number). They have to use it because if they added gain without it, they'd risk having the track peak over 0 dB.
     
    Last edited: Jul 6, 2024
  14. BaSsDuDe

    BaSsDuDe Guest

    Making a tune that is musically undynamic for the sake of streaming makes the writer a slave. He was talking about shitty limiting, and in that scenario I agree with you. If you're writing something musically interesting, nobody wants to hear it at the same volume with zero variance for a solid 45-60 minutes. Human attention needs variety or it drifts. Fine for trance, because listeners may well switch off and go into a trance if it's all the same volume, but in live performances with real instruments, no great live music stays at an identical volume. Shit, nobody wants to hear a ballad that is red-lining the entire tune, no matter the style. Fine for many up-tempo tunes, but not for deeper compositions.
     
  15. readytowok

    readytowok Noisemaker

    Joined:
    Mar 14, 2015
    Messages:
    14
    Likes Received:
    3

    Dude, you're taking this the wrong way. It's about the time it takes you to learn how to make professional tracks/mixes. It's not about mixing the same track again and again once you know how to make pro tracks.
    A mix is always finished.
     
  16. BlackHawk

    BlackHawk Producer

    Joined:
    Nov 28, 2021
    Messages:
    277
    Likes Received:
    137
    And you don't do that by thinking about loudness and doing the appropriate limiting yourself because...? Now I am curious ...

    I am not surprised that EVERY bit of BS about mastering comes up in a predictable order, like in all the other 3.457.2984.745 threads on the same topic. :-( It all proves that the internet is turning this planet and its population into the (stupid as stupid can be) global village Marshall McLuhan predicted. (The "stupid" part I inserted.) No change in 500,000 years. Mankind as a swarm is unbeatably stupid and acts like a ... yep, brainless swarm.

    Reminder: answer my question above in a meaningful way, without repeating any of the usual BS about mastering.
     
  17. Sinus Well

    Sinus Well Audiosexual

    Joined:
    Jul 24, 2019
    Messages:
    2,071
    Likes Received:
    1,584
    Location:
    Sanatorium
    I would just like to point out that your post was exclusively about semantics:

    As I said, no limiter is used in Quiet and Normal mode, only normalisation.
    If the material is quieter than the target (-23 LUFS or -14 LUFS), it is turned up, but only as far as the -1 dBTP ceiling allows.
    Regardless of whether it has to be normalised up or down, no dynamic processing takes place.
    Only if the user manually switches on Loud mode is a limiter used to bring the material to -11 LUFS / -1 dBTP.

    Quiet: -23 LUFS / -1 dBTP (normalisation)
    Normal: -14 LUFS / -1 dBTP (normalisation)
    Loud: -11 LUFS / -1 dBTP (limiting)
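    The mode behaviour above can be sketched in a few lines. This is a minimal illustration of the logic as described in this thread, not Spotify's actual (non-public) implementation; the function name, the targets as constants, and the simple min() cap are assumptions for the sketch.

    ```python
    # Sketch of loudness normalization as described above. Targets per mode:
    # Quiet -23 LUFS, Normal -14 LUFS (pure gain, capped at -1 dBTP),
    # Loud -11 LUFS (full gain, with a limiter catching peaks -- not modeled).

    QUIET, NORMAL, LOUD = -23.0, -14.0, -11.0  # target LUFS per mode

    def playback_gain_db(track_lufs, track_peak_dbtp, mode=NORMAL):
        """Static playback gain (dB) applied to a track; nothing is baked
        into the file, and no dynamics processing happens in Quiet/Normal."""
        gain = mode - track_lufs              # gain needed to hit the target
        headroom = -1.0 - track_peak_dbtp     # gain available before -1 dBTP
        if mode in (QUIET, NORMAL):
            return min(gain, headroom)        # never push peaks past -1 dBTP
        # Loud mode: full gain is applied; a limiter handles the overshoot
        return gain

    # A quiet track at -20 LUFS peaking at -6 dBTP, Normal mode: the desired
    # gain is +6 dB but only 5 dB of headroom exists, so it gets +5 dB.
    ```

    Note how a dynamic track (high peaks relative to its loudness) simply runs out of headroom in Quiet/Normal mode and ends up quieter than the target, untouched dynamically, which is exactly the "no penalty" point being argued here.
    
    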
     
    Last edited: Jul 6, 2024
  18. taskforce

    taskforce Audiosexual

    Joined:
    Jan 27, 2016
    Messages:
    1,994
    Likes Received:
    2,102
    Location:
    Studio 54
    Bro, why on earth are you nitpicking, stuck on the same thingie?
    Mastering is two things, but first and foremost it's a technical, scientific process, hence the term engineering. Educated, experienced ears will quickly hear tonal/frequency imbalances, and there are always sophisticated analyzers to indicate what has to be done for a track to be levelled/EQ'd to almost flat (which supposedly is the original purpose of mastering in the first place, or at least was back in the day I started, heheh). Thing is, humans with experience doing this job don't really look at the "robot's" indications. They trust their ears more than anything, and this is where the second thing, artistic initiative, comes into play: knowing the genres really top to bottom, and therefore knowing how to make a track "pop" the way it should and why, for the media types it's intended to be used on.
    For instance, you might use a limiter of sorts on a techno or EDM track but not on a classical track. Acoustic contemporary tracks are also the most sensitive to any type of compression. And almost always things are not black and white; too many people produce hybrid tracks with all sorts of influences. Even radio-friendly pop music is so diverse now that it can have roots in any genre, or can be a combination of many.
    If you find this too subjective, you should really be doing something else, because subjective is what music is to each and everyone's ears. Your medicine, my poison, etc. Plus, for the uninitiated there are reference tracks in every genre, something to start with. Don't know what to do with your tracks? Just mimic what's proven good and is close to what you do. Do this ten times and it'll start growing on you; twenty times and you'll start having a personal opinion and preference as to what to use, where, and why. It's that simple.
    Cheers
     
  19. Lieglein

    Lieglein Audiosexual

    Joined:
    Nov 23, 2018
    Messages:
    970
    Likes Received:
    554
    Because that's what happens if someone thinks it should sound like this, and then another guy afterwards thinks it should sound like that. :dunno:

    And not to forget this problem:
    How influenced are the decisions in changing the sound by the listening environment? :excl:

    Maybe this was a misunderstanding, but I never found what people say here too subjective. I rather find the definition of "what's needed" too objective. :shalom:
    And someone else changing the whole thing afterwards again:


    The fundamental thing is that defining such a guy as a "better maker" - because that's the decoded definition of such a person, with the attributes you can read anywhere, and the logical consequence of someone who has the ability to evaluate what's needed - is a fallacy. :yes:
     
    Last edited: Jul 6, 2024
  20. eXACT_Beats_

    eXACT_Beats_ Audiosexual

    Joined:
    Apr 21, 2018
    Messages:
    727
    Likes Received:
    520
    I'm beginning to regard people who're on this "gaaawd Particle is all you need" hype the same way I do those who "... only trust UAD plugins for my mixes," though you can replace UAD with any other company that's considered superior. Dumb.
    ... that said, maybe six is the ticket. :invision:

    I've tried explaining this to musicians who don't know anything about production, and the way I've found that makes sense to most people is referring to it as internal and external volume. Or, recently, I explained it as a man going up and down a ladder: the height the man is at is the volume, while the man's weight represents LUFS. It doesn't have to be a perfect metaphor to help people understand; mainly it just has to convey that the two can change independently of one another. Not that that isn't a perfect metaphor... :rofl:

    My point was, you don't actually know what they have going on under the hood. Maybe they have some amazing limiting algorithm created by gawd him/her/itself that can't be looked at directly or you straight explode. I won't discuss the probabilities of that, but my point is, you don't know. And not to be intentionally contrary, but the quality of their limiting algorithm doesn't matter as long as it sounds good. Remember, this isn't the production stage anymore, this is end-game; nobody listening is scanning for irregularities in an RX spectrum or something, so it really doesn't matter if Spotify is using 15 FET comps stacked on each other in lieu of a single, high-quality limiter, as long as the results coming out of a speaker aren't of lesser quality.

    All I got from that (as your real question isn't the way it's worded) is that the answer has to be within your boundaries of what you consider to be truth... and that people are dumb. Maybe have your mechanical bride rewrite it for you? (Incidentally, I try my best to keep a rein on my devouring misanthropic nature while discussing production. Take that for what it is.)

    Because people who are "professionals" produce mixes that are so vastly superior that they don't feel the need to try and make them just a bit better, pondering what-ifs in the back of their head afterwards? Come on. A mix is never finished if you're into the craft, not as long as there's one more "... what if I had..." floating around in your head. It's not about lamenting subjective mistakes you may have made, it's about wondering if you could have made it just a bit better.
     