Normalization Advice Needed! ;-)

Discussion in 'Working with Sound' started by tommyzai, Nov 6, 2024 at 5:07 PM.

  1. tommyzai

    tommyzai Platinum Record

    Joined:
    Feb 7, 2012
    Messages:
    783
    Likes Received:
    211
    I normalize all my recorded audio files in my DAW before mixing, sending out stems, etc. I've done this for over 30 years. I find normalizing helps keep some uniformity . . . it's part of my process.

    I'm not talking about mastering . . . just concerned with the raw tracks on the DAW . . . in preparation for me editing/comping, then sending out track stems for someone else to mix.

     In the past, I used -1.00 dBFS for everything, but after doing some research I wonder whether there are better choices for the type of normalization and the target value.

     Thanks for any constructive, relevant suggestions. I beg of you not to respond by arguing that normalization is unnecessary and/or harmful. If, for some reason, you don't like normalizing . . . please skip this thread. Thanks in advance.
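     For reference, the per-file peak normalization I'm talking about amounts to roughly this. A minimal Python sketch, assuming the numpy and soundfile packages (file names are invented):

     ```python
     import numpy as np
     import soundfile as sf

     TARGET_DBFS = -1.0                             # the ceiling discussed in this thread

     data, rate = sf.read("raw_take.wav")           # samples as floats in [-1.0, 1.0]
     peak = np.max(np.abs(data))
     if peak > 0:
         gain = 10 ** (TARGET_DBFS / 20) / peak     # linear gain that puts the peak at -1 dBFS
         data = data * gain
     sf.write("raw_take_normalized.wav", data, rate, subtype="PCM_24")
     ```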
     
    Last edited: Nov 6, 2024 at 7:38 PM
  3. Radio

    Radio Platinum Record

    Joined:
    Sep 20, 2024
    Messages:
    574
    Likes Received:
    286
    Do mastering engineers use stems?

    Not all mastering engineers use stems. Traditionally, mastering involves enhancing and boosting a single stereo audio file rather than manipulating stems. Still, stem mastering may be necessary when an engineer wants more control over the balance of the various components in a song.

    Can you master a song without stems?

    You can, of course, master a song without stems. A mastering engineer can simply use the final mix. In this scenario, it's especially important that the mix is well balanced before it's passed on for mastering. This is because any mistakes in the mix will ultimately show up in the mastering of the track.

    What bit depth should stems be?

    Your stems should typically be exported at 24-bit, but this can vary depending on the final application of your song. If you're ever in doubt, ask your mastering engineer for specific settings so you can deliver the right file type the first time.

     Is it good to normalize audio?

     Yes and no—it all depends on the context. Some formats and platforms require that you deliver your audio at exact levels. In those situations, audio normalization is definitely your friend. However, it’s important to understand whether normalization is a requirement of the platform or a feature of it. For example, Netflix requires -27 LKFS ±2 LU, dialogue gated. Spotify, on the other hand, has a feature that normalizes songs to -14 LUFS on behalf of the end user.

    In other words, sometimes you may need to normalize your audio before you deliver it, while other times you may want to normalize your audio to preview how it will sound in particular playback scenarios. However, in those latter cases, you would still want to deliver the full-level, un-normalized version.

    Source: www.izotope.com/en/learn/audio-normalization.html
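     To make the quoted distinction concrete: a rough Python sketch of loudness (LUFS) normalization versus peak normalization, assuming the third-party soundfile and pyloudnorm packages (the file name is invented):

     ```python
     import soundfile as sf
     import pyloudnorm as pyln                       # ITU-R BS.1770 loudness meter

     data, rate = sf.read("final_mix.wav")

     meter = pyln.Meter(rate)
     loudness = meter.integrated_loudness(data)      # integrated loudness in LUFS

     # Preview roughly how a platform that normalizes playback to -14 LUFS would hear it
     preview = pyln.normalize.loudness(data, loudness, -14.0)

     # Peak normalization, by contrast, only looks at the highest sample value
     peaked = pyln.normalize.peak(data, -1.0)
     ```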
     
    Last edited: Nov 6, 2024 at 6:54 PM
  4. Myfanwy

    Myfanwy Platinum Record

    Joined:
    Sep 16, 2020
    Messages:
    373
    Likes Received:
    176
    Doing something for 30 years doesn't prove it to be the best way.

     I've been doing mastering for many years, and for me it just makes no sense to receive normalized stems. It's an extra step, and it's counterproductive because the volume relationships created while mixing get altered. To make it worse, if during stem mastering my client decides to change something or correct a small part in, say, a guitar stem, and the normalization shifts because of a louder or quieter peak, the whole master sounds different unless I manually correct the level of that stem to match the old one.

     Think about it and ask your mastering or mixing engineer what they prefer. The point of stems in mastering is to get exactly the same mix as the stereo sum while still being able to work on individual groups of instruments or vocals.
     
  5. tommyzai

    tommyzai Platinum Record

    Joined:
    Feb 7, 2012
    Messages:
    783
    Likes Received:
    211
    I'm not talking about mastering . . . just concerned with the raw tracks on the DAW . . . in preparation for me editing/comping, then sending out track stems for someone else to mix.
     
  6. Myfanwy

    Myfanwy Platinum Record

    Joined:
    Sep 16, 2020
    Messages:
    373
    Likes Received:
    176
     OK, sorry then; "sending out stems" is what sounds like stem mastering to me. But still, it's better to ask the people you're actually sending your files to what they prefer, rather than random people on a forum who can only tell you what their own workflow is.
     
  7. tommyzai

    tommyzai Platinum Record

    Joined:
    Feb 7, 2012
    Messages:
    783
    Likes Received:
    211
     This is also about keeping my tracks tidy and consistent. I always used -1, but then I did some research and became concerned.
     
  8. stopped

    stopped Platinum Record

    Joined:
    Mar 22, 2016
    Messages:
    541
    Likes Received:
    197
     My mixer treated me as if I were a criminal when I sent him normalized tracks, so I've been trying to break the habit conceptually. I understand that I no longer need to make things loud enough to cover up the hiss from my soundcard, but it's still hard.
     
  9. Myfanwy

    Myfanwy Platinum Record

    Joined:
    Sep 16, 2020
    Messages:
    373
    Likes Received:
    176
     Absolutely agree on this; that's what I tell my clients, too. Consistency is far more important for a hassle-free workflow. Working with 24-bit files means roughly 144 dB of dynamic range, so even a headroom of 20-30 dB leaves more information than any DA converter can deliver.

     Also, DAW (peak) normalization is in most cases an additional step that takes place after rendering to a 24-bit file, so you win nothing but a more convenient visual overview of the peaks in your track.
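     The back-of-the-envelope numbers behind that, using the rough 6 dB-per-bit rule of thumb (a sketch, not measured converter specs):

     ```python
     BITS = 24
     dynamic_range_db = 6.02 * BITS           # ~144.5 dB theoretical range of a 24-bit file
     headroom_db = 30                         # generous tracking/mixing headroom
     print(dynamic_range_db - headroom_db)    # ~114.5 dB still available below the peaks
     ```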
     
  10. Stevie Dude

    Stevie Dude Audiosexual

    Joined:
    Dec 29, 2020
    Messages:
    2,422
    Likes Received:
    2,186
    Location:
    Near Nyquist
     I hate it when I get files like that and will batch-process them down -6 to -10 dB. I know modern DAWs have floating-point processing and all, but audio that hot will clip 95% of the plugins out there internally. Some plugins even have an internal soft clip that kicks in as low as -3 dB now (e.g. Lindell). The mixer needs to clip-gain everything down to get better fader resolution, and if they're working with a template, everything will be all over the place if the printed FX are sent as well. I can't see one benefit in doing that AT ALL, only problems all over. Stop doing it; I'm 100% sure the mixer you hired secretly hates you, and it's close to impossible to get the best possible result out of him like that.
     
     • Like x 1
     • Agree x 1
  11. Myfanwy

    Myfanwy Platinum Record

    Joined:
    Sep 16, 2020
    Messages:
    373
    Likes Received:
    176
    Research less, be concerned less. As long as it's not clipping, everything is fine.

     BTW, have you tested whether your Audient interface delivers a constant RTL value and has finally fixed the wrong latency reporting? Someone asked me whether he should buy an ID24, but I'm concerned after reading about all your problems with it.
     
  12. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    768
    Likes Received:
    275
     I've always wondered why some people do this. It is probably one of the most stupid things you can do. No offense. :)

     Even if you do acoustic recordings and your DAW can only add +6 dB in the mixer, it should simply never be required.

     In your opening post you justified it with three things: you find that it helps, you've always done it, and you'd be concerned not doing it.

     That does not amount to a valid reason.

     Yes, leave things as they are.

     Sorry, next time put that request at the beginning. ;)

     And it is not harmful, it is just not necessary.
     
  13. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    768
    Likes Received:
    275
     Plug-ins do not clip signals, and the master fader does not clip audio either.

     What is important is to at least understand the difference between very basic things such as level and loudness, because they have nothing to do with each other.

     None of your statements makes any sense, and it does not help to string abbreviations together when you don't know what they mean.

     Netflix's maximum audio level is -2 dBTP.

     And with regard to the thread starter: you cannot "normalize" to levels other than 0 dBFS, because using the whole range to scale the values is the definition of "normalization".
     
    Last edited: Nov 6, 2024 at 8:38 PM
  14. 9ty

    9ty Producer

    Joined:
    Dec 25, 2021
    Messages:
    119
    Likes Received:
    79
     Do I understand you right: you get all your tracks to a peak level of -1 dB before sending them out? How do you do this? Do you record your tracks around a -1 dB peak level, or do you normalize the tracks later in your DAW? In my understanding the latter is not really relevant, as long as you are not clipping (or you're using 32-bit floating point) ... oops, perhaps I should skip this thread :bleh:

     Either way ... to me -1 dB seems too hot. It would drive me crazy if even subtle track layers were screaming at me at a -1 dB peak level. I also thought about consistency ... to me it would not feel consistent: my drum stem peaking at -1 dB is fine, but a pad sound without any relevant transient information would just sound really harsh in comparison.

    I don't know if it changes anything, but as you asked, I'd at least lower the normalization level.
     
  15. BlackHawk

    BlackHawk Platinum Record

    Joined:
    Nov 28, 2021
    Messages:
    350
    Likes Received:
    171
     To me the whole discussion seems to be a creativity exercise in making up non-existent problems ...
     
  16. SacyGuy

    SacyGuy Kapellmeister

    Joined:
    Feb 15, 2024
    Messages:
    115
    Likes Received:
    63
     Oh boy... maybe you want to explain better how, why, and what you are doing, with details, because nobody has understood your "problem".

     We want to help you, not mock you, but at this point it's hard to understand.
     
  17. Radio

    Radio Platinum Record

    Joined:
    Sep 20, 2024
    Messages:
    574
    Likes Received:
    286
    Gain Staging - The Better Choice

    Correctly adjusting levels in your DAW is called gain staging.

    It involves checking the volume of every element you record and making sure it doesn't exceed a healthy level in your mix.

     As a general rule, you want to keep the peaks of your track around -9 to -10 dBFS and the body of the waveform around -18 dBFS.

    Gain staging is especially important for the master bus of your DAW session. If you've left enough headroom in your mix, you should also have enough room on the master fader so that clipping doesn't occur anywhere.

    With all that headroom you have, you can make your tracks louder using the fader or any of the other methods I've mentioned. This gives you solid control over the level without having to resort to normalization.
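     As an illustration, here's a quick check of where a track sits and how much trim it needs to reach that ballpark; a Python sketch assuming numpy and soundfile (the file name is invented):

     ```python
     import numpy as np
     import soundfile as sf

     TARGET_RMS_DBFS = -18.0

     data, rate = sf.read("guitar_di.wav")
     rms_dbfs = 20 * np.log10(np.sqrt(np.mean(data ** 2)))
     peak_dbfs = 20 * np.log10(np.max(np.abs(data)))

     gain_db = TARGET_RMS_DBFS - rms_dbfs     # trim needed to sit around -18 dBFS RMS
     print(f"apply {gain_db:+.1f} dB; peaks then land around {peak_dbfs + gain_db:.1f} dBFS")
     ```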
     
    Last edited: Nov 6, 2024 at 9:29 PM
  18. Will Kweks

    Will Kweks Rock Star

    Joined:
    Oct 31, 2023
    Messages:
    543
    Likes Received:
    321
    I do normalize multitrack audio to -0.3, but using relative gain so the mix stays consistent. Force of habit, no real need to do so, but I find it easier to edit since vertical zoom of waveforms just doesn't agree with me.
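     In case "relative gain" is unclear: it just means one common gain across the whole set of tracks, so the loudest peak lands at the target and the balance between tracks is untouched. A rough Python sketch, assuming numpy and soundfile (file names are invented):

     ```python
     import numpy as np
     import soundfile as sf

     TARGET_DBFS = -0.3
     names = ["kick.wav", "bass.wav", "vox.wav"]

     tracks = [sf.read(n) for n in names]                   # list of (data, rate) pairs
     global_peak = max(np.max(np.abs(d)) for d, _ in tracks)

     gain = 10 ** (TARGET_DBFS / 20) / global_peak          # the same gain for every track
     for n, (d, rate) in zip(names, tracks):
         sf.write(n.replace(".wav", "_norm.wav"), d * gain, rate, subtype="PCM_24")
     ```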

    My two eurocents.
     
  19. angie

    angie Producer

    Joined:
    Nov 26, 2012
    Messages:
    397
    Likes Received:
    112
    Location:
    Milano
    What do you mean? A clipper plugin... clips. :dunno:
     
  20. saccamano

    saccamano Rock Star

    Joined:
    Mar 26, 2023
    Messages:
    1,212
    Likes Received:
    483
    Location:
    CBGB omfug
     I usually normalize to 0 or -1 dB, depending on whether the input track was over- or under-modulated initially. Before importing the track(s) and working on/with them, I make certain the file(s) are converted to 32- or 64-bit float while keeping the sample rate exactly the same if possible. That way there is nothing you can do to those files that will degrade them digitally or aurally, and normalization can do its thing unimpeded.
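     Roughly what that looks like outside a DAW; a Python sketch assuming numpy and soundfile (file names are invented). In 32/64-bit float a plain gain change is reversible and the file itself cannot clip:

     ```python
     import numpy as np
     import soundfile as sf

     data, rate = sf.read("imported_track.wav", dtype="float64")   # promote to 64-bit float
     peak = np.max(np.abs(data))
     if peak > 0:
         data *= 10 ** (-1.0 / 20) / peak                          # normalize to -1 dBFS
     sf.write("imported_track_float.wav", data, rate, subtype="FLOAT")   # 32-bit float WAV
     ```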
     