Normalization vs Mastering

Discussion in 'Mixing and Mastering' started by Crater, Mar 6, 2019.

  1. Crater

    Crater Ultrasonic

    Joined:
    Jun 25, 2014
    Messages:
    101
    Likes Received:
    33
    Location:
    Europe
    How much do you care about normalization when mastering?
     
  2.  
  3. quadcore64

    quadcore64 Audiosexual

    Joined:
    Jun 13, 2011
    Messages:
    1,753
    Likes Received:
    961
    Normalization usually happens at the starting stage of mastering, if needed, correct?

    If the songs are mixed well, including relative levels, there should be no need for the mastering engineer to use normalization.
     
  4. KungPaoFist

    KungPaoFist Audiosexual

    Joined:
    Nov 20, 2017
    Messages:
    1,696
    Likes Received:
    973
    Location:
    CA
    If true this would be new to me. A mix engineer usually doesn't mix up to 0 dB but leaves some headroom for the mastering process.

    Normalizing just raises the peaks to zero, but a limiter allows the volume to be increased further: as peaks hit the threshold/ceiling they are stopped, yet the overall level can still be raised. The ceiling compresses the signal, whereas normalization keeps it intact once the level reaches the target.
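    Roughly, in code — a minimal sketch assuming a mono numpy buffer; the function names and the crude clipping "limiter" are mine, not any plugin's actual algorithm:

    ```python
    import numpy as np

    def peak_normalize(x, target_dbfs=0.0):
        """Scale the whole signal so its highest peak lands at target_dbfs.
        The waveform shape is untouched; only the overall gain changes."""
        peak = np.max(np.abs(x))
        if peak == 0:
            return x
        return x * (10 ** (target_dbfs / 20.0) / peak)

    def hard_limit(x, ceiling_dbfs=-0.1, makeup_db=6.0):
        """Crude brick-wall 'limiter': add make-up gain, then clamp anything
        above the ceiling. Unlike normalization, the peaks ARE reshaped."""
        ceiling = 10 ** (ceiling_dbfs / 20.0)
        y = x * 10 ** (makeup_db / 20.0)
        return np.clip(y, -ceiling, ceiling)

    x = np.array([0.05, 0.3, -0.5, 0.1], dtype=np.float32)
    print(peak_normalize(x))  # peaks raised to 0 dBFS, dynamics intact
    print(hard_limit(x))      # overall level up 6 dB, peaks squashed at the ceiling
    ```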
     
  5. Downlo

    Downlo Producer

    Joined:
    Apr 6, 2017
    Messages:
    97
    Likes Received:
    76
    Location:
    Holland
    I care that people do not normalize before I master for them.
    Personally, I mix my own tracks no louder than, say, -6 to -3 dB, without any limiting on my master fader.
    So the mastering engineer has plenty of room to work.

    I think an engineer could still work with a normalized file if it wasn't too heavily compressed/saturated :dunno:
    Maybe if it was normalized to 99% :dunno: But I wouldn't recommend it.
     
  6. mozee

    mozee Audiosexual

    Joined:
    Jun 29, 2016
    Messages:
    639
    Likes Received:
    562
    I don't understand how mastering can go up against normalization.

    If by normalization you mean bringing the peaks up to full scale: that is not what producing a master track for duplication is. The two have nothing to do with each other, so you should not care.

    If you mean whether you should normalize your files when sending them off to someone else to master, or before mastering them yourself... again, the two have nothing to do with each other.

    Though it is considered nice and polite to leave some headroom for an ME to work with, it isn't necessary with lossless digital files: attenuating the file isn't difficult, and it is 100% transparent.

    It's more important to deliver a good mix that isn't overly distorted or already limited, and to communicate what your goals and targets are, what trade-offs (if any) you are willing to make to achieve them... and whether what you want is even possible with what you are delivering.
     
    Last edited: Mar 6, 2019
    • Agree x 4
    • Like x 1
    • Winner x 1
  7. quadcore64

    quadcore64 Audiosexual

    Joined:
    Jun 13, 2011
    Messages:
    1,753
    Likes Received:
    961
    As stated... "if needed".
     
  8. Satai

    Satai Rock Star

    Joined:
    Feb 23, 2013
    Messages:
    453
    Likes Received:
    418
    I didn't wanna muddy the waters about the original question, so OP please don't read the weird things I'm about to say.

    @mozee, I've recently been listening to videos put out by AirWindows Chris (the dither guy), which were enlightening in a number of ways... For instance, to my total surprise I learned that digital gain changes, even with high precision floating point, are not actually transparent or lossless. It's a very tiny coloration that can't be heard normally, but it becomes audible with stuff like reverb all-pass networks where there is a lot of feedback by design. The errors introduced by our supposed 100% transparent digital gain changes come out to muddy things up then.

    Chris also made a little plugin for 100% transparent gain changes, based on bitshift instead of floating point multiplication. https://www.airwindows.com/bitshiftgain/
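    The idea is easy to demonstrate (a minimal sketch of the principle, not the AirWindows source; the function name is mine): scaling a float by an exact power of two only changes its exponent field, so the mantissa is untouched, while an arbitrary gain factor has to round.

    ```python
    def bitshift_gain(sample: float, shifts: int) -> float:
        """Gain in exact ~6.02 dB steps: +1 shift doubles, -1 shift halves."""
        return sample * (2.0 ** shifts)  # power-of-two scaling is exact in IEEE 754

    s = 0.123456789
    print(bitshift_gain(bitshift_gain(s, -4), +4) == s)  # True: round trip is bit-identical

    g = 10 ** (-3.0 / 20.0)   # an arbitrary -3 dB factor
    print((s * g) / g == s)   # often False: each multiply rounds to the nearest float
    ```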

    :scrapbox:
     
    • Interesting x 2
    • Like x 1
    • Useful x 1
  9. albert001

    albert001 Producer

    Joined:
    Apr 5, 2017
    Messages:
    140
    Likes Received:
    89
    Location:
    Always In My Mind
    If that is true then I've been a dumbass for more than 20 years... for me, bitshifting is something to avoid because the sound gets thinner and thinner. If you are right, then why do they make bit-perfect sound cards? Why do I use something like this to alter bit-perfect volume on my Android and other low-level gear... I don't know man, it's up to you to get your head around digital limitations. What I know is that it's better to shape noise into an inaudible range than to deal with bit imperfections... but that's just my 2 cents.

    best regards!
     


  10. Satai

    Satai Rock Star

    Joined:
    Feb 23, 2013
    Messages:
    453
    Likes Received:
    418
    It's using the bitshifting that C/C++ programmers know and love, not the bitcrushing we're used to in audio (and the two of them are unrelated, just similar names). Bitshifting in the C programming language sense does not have a sound to it, while the usual floating point multiplication operation sorta kinda does, if you repeat it hundreds of times and the tiny inaccuracy adds up...
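    A quick way to see the "adds up" part (a toy illustration in Python/numpy, not anything a DAW literally does): repeatedly applying a ±0.4 dB gain pair in float32 drifts slightly, while a power-of-two round trip stays bit-exact.

    ```python
    import numpy as np

    x = np.float32(0.6)
    up   = np.float32(10 ** ( 0.4 / 20))  # +0.4 dB as a float32 factor
    down = np.float32(10 ** (-0.4 / 20))  # -0.4 dB

    y = x
    for _ in range(500):
        y = y * up * down                 # each multiply rounds to the nearest float32
    print(abs(float(y) - float(x)))       # typically a small but nonzero drift

    z = x
    for _ in range(500):
        z = z * np.float32(2.0) * np.float32(0.5)  # exponent-only changes, no rounding
    print(z == x)                         # True
    ```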
     
    • Agree x 1
    • Useful x 1
  11. Seedz

    Seedz Rock Star

    Joined:
    Feb 5, 2016
    Messages:
    460
    Likes Received:
    354
    Location:
    Sitting on a Cornflake
    Good to see ya back @mozee
     
  12. korte1975

    korte1975 Guest

    normalization is the amateur way of making things loud
     
  13. Baxter

    Baxter Audiosexual

    Joined:
    Jul 20, 2011
    Messages:
    3,828
    Likes Received:
    2,657
    Location:
    Sweden
    Normalization is so 1998. Just learn proper gainstaging (it's so 2018).
    Edit: Normalization isn't even used anymore, especially not in mastering. Try to keep the technical terms separated, as they don't really connect and have no relevance in this context.
     
    Last edited: Mar 7, 2019
  14. No Avenger

    No Avenger Moderator Staff Member

    Joined:
    Jul 19, 2017
    Messages:
    8,915
    Likes Received:
    6,112
    Location:
    Europe
    To me there are a lot of confusing statements here:

    - Why is it normalizing vs mastering? What exactly do you mean by this, please?

    - Normalization when mastering? When I master a track I take care of -1 dBTP according to the LUFS standard, that's all (a quick sketch of that gain math follows below).

    - You can normalize to different levels (even to -10 dB, if you want to).

    Yep, because this depends on the treatment. If you apply analogue hardware or its software emulation, you probably need a level way below 0 dBFS.
    If you apply a (digital) EQ which doesn't have an adjustable in- and/or output level, 0 dBFS can also be way too high.

    Normalization has nothing to do with compression and/or saturation.

    ??? You normalize a track in dB, not %; at least I've never seen that.

    - If you mix in FP (floating point), the level of the mix doesn't really matter (it could be at +6 dB or even more, no problem) because you can turn (normalize) it down.

    But only in 6 dB steps...

    So the pro way is not to change a track's level???

    Guys, please, for topics like this we need some accurate and reliable technical information and statements.
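    A minimal sketch of that gain math, assuming the integrated loudness and true peak have already been measured upstream (the measured numbers below are made up for illustration):

    ```python
    # Hypothetical measured values for one track (illustrative only):
    measured_lufs  = -18.3   # integrated loudness, LUFS
    measured_tp_db = -3.2    # true peak, dBTP

    target_lufs  = -14.0     # e.g. a streaming-style loudness target
    ceiling_dbtp = -1.0      # the -1 dBTP ceiling mentioned above

    gain_db     = target_lufs - measured_lufs    # +4.3 dB wanted
    headroom_db = ceiling_dbtp - measured_tp_db  # +2.2 dB available before the ceiling

    applied_db = min(gain_db, headroom_db)       # plain gain cannot pass the ceiling
    print(f"apply {applied_db:+.1f} dB; the remaining "
          f"{gain_db - applied_db:.1f} dB would need limiting, not normalization")
    ```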
     
  15. mozee

    mozee Audiosexual

    Joined:
    Jun 29, 2016
    Messages:
    639
    Likes Received:
    562
    @Satai

    I wouldn't worry too much about the /sound/ of a single volume change before processing.

    I finally understood what you were referring to when I read your second post on the matter.

    Yes, it is a fact that changing the volume of any PCM file encoded as floating-point numbers will always incur a rounding error on the LSB, and that error has a 1 in 8 chance of being the dreaded even/odd decimation error. However, the processing that is about to be applied to the signal after that volume change is going to create much more damage (the vast majority of it intentionally). Filters and dynamics processors, especially the ones that have been programmed with analogue-emulating non-linearity, are going to introduce magnitudes more distortion and noise into the signal than a volume change. This isn't even taking into account the self-noise and noise bed already present in the recording.

    This is mitigated somewhat by higher-precision summing inside the DAW, which is generally a couple of bits higher than what the ADCs are putting in, and a few orders of magnitude beyond what most line-level outputs and non-scientific analogue capture devices are capable of recording. A 32-bit float is a sign (+/-) bit, an 8-bit exponent and a 24-bit significand (23 bits stored plus an implied leading 1); written out in binary and then hex, the value 1 looks like this:

    0 01111111 00000000000000000000000 (binary) = 3F80 0000 (hex) = 1 (one)

    Now, the next representable number above 1:

    0 01111111 00000000000000000000001 (binary) = 3F80 0001 (hex) ≈ 1.0000001192

    As far as the computer doing the math is concerned there are no numbers between 1 and 1.0000001192, so any value that falls between those two real numbers must be rounded up or down. This isn't a problem most of the time: when a calculation is performed there are reserve (guard) bits on the end of the number which are dropped once the calculation is complete, and the computer uses them to decide whether to round up or down. If those reserve bits equal exactly one half, the computer will not simply round up, as that would require a more complex operation; instead it rounds to the nearest even value, which is a simpler logical transformation. Since the cases where both reserve bits are on or both are off are never up for rounding, there is a one in eight chance of the least significant bit being not only approximated but shifted to the wrong nearest neighbour.

    Academically interesting or not (I guess it depends on the person), it means a computational error on 24-bit PCM is a quasi-random error, and we call this the noise floor. Any single computation will result in an error somewhere between -144 dBFS and -138 dBFS, but as noted this adds up. Take summation: with n floats to sum, each addition adds up to 2^-24 of error, so the total error is about n × 2^-24. If we have 32 tracks to sum, then n = 32 = 2^5 and the total error is 2^-19, or about -114 dBFS. That is the worst-case scenario possible, though, and even then it is still well below -100 dBFS.
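    The same numbers fall out of a few lines of Python (just a sketch to double-check the arithmetic above; the helper name is mine):

    ```python
    import math, struct

    def f32_bits(x: float) -> str:
        """Show a value as IEEE 754 single precision: sign | exponent | mantissa."""
        word = struct.unpack('>I', struct.pack('>f', x))[0]
        b = format(word, '032b')
        return f"{b[0]} {b[1:9]} {b[9:]}"

    print(f32_bits(1.0))            # 0 01111111 00000000000000000000000  (0x3F800000)
    print(f32_bits(1.0 + 2**-23))   # 0 01111111 00000000000000000000001  (0x3F800001)
    print(1.0 + 2**-23)             # ~1.0000001192, the next value above 1.0

    # Worst case for 32 summed tracks, each addition adding up to 2**-24 of error:
    n = 32
    total_err = n * 2**-24          # = 2**-19
    print(20 * math.log10(total_err))  # about -114 dBFS
    ```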

    I am going to stop rambling as it looks like I just inadvertently started derailing this topic.

    Yes, digital volume changes are not 100% transparent, but they are very close, and when one considers what is about to happen after that volume change, they are comfortably a lot less harmful than what occurs next.

    @albert001

    I use an external analogue attenuator as well, but that has nothing to do with any of this; you are mixing oranges and cantaloupes into a carvery cold-cuts basket.

    If you open up another thread on the matter I will be happy to stop by and we can all chat about it.
     
    • Like x 1
    • Useful x 1
  16. No Avenger

    No Avenger Moderator Staff Member

    Joined:
    Jul 19, 2017
    Messages:
    8,915
    Likes Received:
    6,112
    Location:
    Europe
    Respect! :shalom:
     
  17. Downlo

    Downlo Producer

    Joined:
    Apr 6, 2017
    Messages:
    97
    Likes Received:
    76
    Location:
    Holland
    Normalizing in % can be done in Adobe Audition 1.5
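    For reference, a percentage target maps onto dB like this (a quick sketch of the conversion, not Audition's code):

    ```python
    import math

    def percent_to_db(pct: float) -> float:
        """An Audition-style 'normalize to N%' target expressed in dBFS."""
        return 20 * math.log10(pct / 100.0)

    print(percent_to_db(100))  #  0.0 dBFS
    print(percent_to_db(99))   # about -0.09 dBFS
    print(percent_to_db(50))   # about -6.02 dBFS
    ```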
     
    • Interesting x 1