In/Out Sample Rate ?

Discussion in 'Mixing and Mastering' started by zikko, Mar 1, 2025 at 10:20 AM.

  1. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    You're all good, mate! No problem at all :) … Look at my Avatar…

    I thought, together, we could dump all over JW's and Aliens, lol. Maybe that's inappropriate and impolite. I'll go to Gab for that.
     
  2. clone

    clone Audiosexual

    Joined:
    Feb 5, 2021
    Messages:
    7,993
    Likes Received:
    3,497
    It's not that difficult a question to answer. Say I run all my sessions at a consistent 24-bit/48 kHz, and some rando tells me I should be running my sessions at 16-bit/44.1 kHz instead. The OP's question was never about storage space or file sizes. It was about audio quality.

    At that point, the onus is on him to prove that 16-bit/44.1 kHz sessions are going to consistently arrive at better results than 24-bit/48 kHz sessions.

    Other people's YouTube videos are on the horizon.
     
    Last edited: Mar 2, 2025 at 6:10 AM
  3. LuciusKing

    LuciusKing Newbie

    Joined:
    Sep 23, 2023
    Messages:
    7
    Likes Received:
    0
    I work at 48kHz and sometimes enable oversampling on certain mix elements to enhance their clarity. Based on my observations, it seems to add a kind of "crystallization" to the sound.
    You know how in stereo mixing, having something hit in mono creates contrast between thick and thin elements, making the stereo field feel wider? I think oversampling works in a similar way—it creates a contrast between slightly muddy and more crystalline sounds.
    This technique helps highlight vocals or lead parts, and so far, it’s the only practical use of oversampling I’ve found over the years. If you have any cool tricks with it, I’d love to hear them!
     
  4. zikko

    zikko Newbie

    Joined:
    Oct 6, 2024
    Messages:
    15
    Likes Received:
    2
    Only available for EQ Eight and the Glue Compressor, I see.
     
  5. shinyzen

    shinyzen Audiosexual

    Joined:
    Sep 28, 2023
    Messages:
    901
    Likes Received:
    575
    If you have the means to, 96k, 32-bit. If you don't have the processing power, storage, or whatever, 48k, 24-bit.

    This is a very divisive issue in the audio community. Many will swear there is no difference at 96k, and that 44.1 is even fine. And sure, really anything is fine, but there is a difference. While Dan Worrall may say otherwise, there are plenty of others who would disagree with him. Besides, I wouldn't really consider him one of the most respected men in audio engineering. A respected man on audio-engineering YouTube? Sure.

    There is a difference. If you have excellent monitors and a well-treated room, you can hear it. It can be a small difference, and it's up to you to decide if it's worth it. For me it is. An extra 3% difference in quality, even 1% stacked across an entire song, is worth it for me. The average listener couldn't care less. They just want a bop, but it doesn't hurt to make that bop as good as you can. Here are some benefits of working at 96k, from ChatGPT, as I don't feel like typing all of this out:

    1. Synths (Not Samplers) – More Detail & Smoothness
    • Some virtual analog synths (especially those with oversampling options) benefit from the higher sample rate, producing smoother oscillators, especially in high-frequency content. This applies to LFOs and internal modulation as well.
    2. Algorithmic Delays & Reverbs – More Spatial Depth
    • Some reverb and delay algorithms scale their internal processing with sample rate, meaning you may get a more refined, less grainy tail.
    • The modulation in certain delays and reverbs might sound more natural and less "steppy."
    3. Saturation & Distortion – Smoother Harmonics
    • Many saturation plugins introduce high-frequency artifacts at lower sample rates, which get pushed further out of the audible range at 96kHz.
    • This can lead to less harshness, smoother harmonics, and better-defined transients.
    4. EQ – Less Cramping, More Open Highs
    • Minimum-phase EQs (especially analog-modeled ones) have less high-end phase distortion and cramping near the Nyquist frequency.
    • This can result in a more open, airy top end and better preservation of transients.
    5. Transients & Dynamics – More Precision
    • Some compressors and transient shapers work better at higher sample rates, especially if their envelope detection is linked to sampling precision.
    • This can lead to slightly tighter transient response and better micro-dynamic detail.
    6. Modulation Effects – Smoother Movement
    • Chorus, flangers, and phasers often use LFOs that are sampled at the project’s rate, meaning they may have smoother movement and less quantization in their modulation curves.

    The differences can be subtle but cumulative, leading to a "better" end result. Your mixes can feel more open and detailed, and have more depth or whatever other buzzword you want. BUTTTT, a good listening environment/monitors, proper gain staging, plugin selection, and knowing what the hell you are doing matter more than just cranking up the sample rate. Many plugins already oversample internally, and you can always oversample something using Blue Cat's PatchWork etc.
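    The saturation point on that list is the easiest one to verify yourself. Below is a minimal numpy sketch, with a clipper, tone frequency, and levels that are my own illustration (not any plugin's code): hard-clip a 15 kHz sine, and its 3rd harmonic lands at 45 kHz, which folds down to an audible 3 kHz at a 48k session rate but stays harmlessly ultrasonic at 96k.

```python
import numpy as np

def tone_level_db(fs, f_probe, f0=15000.0, dur=1.0):
    """Hard-clip a sine at sample rate fs and return the level (dB,
    relative to the spectrum peak) measured at frequency f_probe."""
    t = np.arange(int(fs * dur)) / fs
    x = np.clip(1.5 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)  # crude saturation
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return 20 * np.log10(spec[np.argmin(np.abs(freqs - f_probe))] / spec.max())

# The 3rd harmonic of a clipped 15 kHz tone sits at 45 kHz.
# At fs = 48k, that is above Nyquist (24 kHz) and folds to 48k - 45k = 3 kHz:
print(tone_level_db(48000, 3000))   # strong alias, right in the audible band
# At fs = 96k, 45 kHz is below Nyquist (48 kHz), so it stays put, inaudible:
print(tone_level_db(96000, 45000))
```

    This is the same mechanism a plugin's internal oversampling addresses, which is why oversampling inside the plugin and raising the session rate can both tame the harshness.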
     
  6. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    Keep your mixdown/bounce at 24/48! Problem solved!
     
  7. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    ----removed----
     
  8. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    “Cumulative.” Hmmm. That might make sense if there were a problem to begin with. Can you please point out where the problem is that accumulates?
     
  9. clone

    clone Audiosexual

    Joined:
    Feb 5, 2021
    Messages:
    7,993
    Likes Received:
    3,497
    If you have the gear and software to run at 96 kHz, and plugins that might possibly benefit from it, maybe you want to do this. You are right.

    What you don't see much anymore is users choosing a reduced bit depth for their DAW sessions. If your project's final delivery format is 16-bit, convert it then. But that is an outlier, an exception to the rule for most of us. I'd go so far as to say I'd want the projects all saved at the highest possible bit depth and sample rate for storage, too. Not having to convert downward 90 percent of the time is better than converting up 90% of the time, even if they are of equal fidelity.

    You: What is Equal Fidelity in Audio?

    Equal fidelity means that two or more audio files, signals, or playback systems have the same level of sound quality, without any degradation, loss of detail, or added distortion. This ensures that the listening experience remains consistent across different formats, sample rates, or playback devices.

    Scenarios Where Equal Fidelity Matters:
    1. Format Conversion (Lossless to Lossless)
    • If you convert a WAV (uncompressed) file to FLAC (lossless compression), fidelity remains equal because no audio data is lost.
    • However, converting WAV to MP3 (lossy compression) reduces fidelity due to removed frequencies.
    2. Bit Depth & Sample Rate Considerations
    • A 24-bit, 96 kHz file converted to 24-bit, 96 kHz in another format (e.g., WAV to AIFF) maintains equal fidelity because no data is lost.
    • Downsampling (e.g., 96 kHz to 44.1 kHz) may introduce slight quality loss unless handled with proper resampling techniques.
    3. Playback on Different Systems
    • If two audio systems have the same frequency response, distortion levels, and dynamic range, they offer equal fidelity.
    • However, differences in DACs (digital-to-analog converters), speakers, or headphones can affect perceived fidelity.
    4. Signal Routing in a DAW or Audio Interface
    • Using digital transfers like AES/EBU, ADAT, or S/PDIF without conversion maintains equal fidelity.
    • But converting from analog to digital (or vice versa) introduces potential signal degradation.
    Key Takeaways
    • Equal fidelity means no loss of quality between audio formats, playback devices, or conversions.
    • Lossless formats and proper signal paths help maintain fidelity.
    • Any lossy compression, resampling, or digital-to-analog conversion can reduce fidelity.

    You: Is dither down or dither up the better last audio conversion in my Project?

    GPT: Dither should only be applied when reducing bit depth, meaning dither down is the correct choice when finalizing your project.

    Why Dither Down?
    • Dither is used to prevent quantization distortion and truncation artifacts when converting from a higher bit depth (e.g., 24-bit or 32-bit float) down to a lower bit depth (e.g., 16-bit for CD).
    • Since dither adds low-level noise, it smooths out the transition from a higher bit depth to a lower one, making the noise floor less noticeable.
    Why Not Dither Up?
    • If you're increasing bit depth (e.g., from 16-bit to 24-bit), you don’t need dither because no new quantization distortion is introduced—you're just adding empty bits, not improving quality.
    Best Practice for Dithering in the Final Export
    • Apply dither only once at the very last step of your mastering chain.
    • If you're exporting to 16-bit for CD, use dither.
    • If you're keeping the file in 24-bit or 32-bit float, dither is unnecessary.

    Here's another reason you're better off at 24-bit/48k.


    You: if I use S/PDIF or ADAT digital connections to a 16bit session, what happens?

    ChatGPT said:

    If you use S/PDIF or ADAT digital connections while working in a 16-bit session, a few things can happen depending on the bit depth of the incoming signal:

    1. If the Incoming Signal is 16-bit
    • No conversion is needed, and the signal is transmitted as-is.
    • S/PDIF and ADAT both support 16-bit audio, so there will be no issues.
    2. If the Incoming Signal is 24-bit or Higher
    • Your DAW or interface will need to truncate or dither the signal down to 16-bit.
    • If truncation occurs, the extra bits are just cut off, which can cause quantization distortion.
    • If dithering is applied, a small amount of noise is added to preserve detail from the discarded bits.
    3. Interface Behavior
    • Some audio interfaces automatically truncate to 16-bit without dithering.
    • Others may have an option to apply dithering before the signal reaches your DAW.
    • Check your interface settings to see if it allows selecting bit depth.
    4. ADAT Considerations
    • Standard ADAT Lightpipe only supports up to 48 kHz at 24-bit (or 96 kHz in S/MUX mode).
    • If your ADAT source is 24-bit, your 16-bit session may force truncation or dithering, depending on your DAW settings.
    Best Practice
    • If you must record in a 16-bit session, try to apply dither when reducing from 24-bit to 16-bit to avoid harsh quantization noise.
    • If possible, work in 24-bit or 32-bit float and only convert to 16-bit at the final export stage.
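    The truncate-vs-dither behavior described above can be sketched in a few lines of numpy. This is a toy TPDF dither, not any DAW's actual converter, and the test signal is deliberately quieter than one 16-bit LSB to make the effect obvious:

```python
import numpy as np

def to_16bit(x, dither=True, seed=0):
    """Reduce float samples in [-1, 1) to 16-bit integers.
    TPDF dither adds +/-1 LSB of triangular noise before rounding,
    which decorrelates the rounding error from the signal."""
    scale = 2 ** 15  # 16-bit signed full scale
    if dither:
        rng = np.random.default_rng(seed)
        u = rng.uniform(-0.5, 0.5, size=(2,) + x.shape)
        x = x + (u[0] + u[1]) / scale  # triangular PDF, +/-1 LSB peak
    return np.clip(np.round(x * scale), -scale, scale - 1).astype(np.int16)

# A 1 kHz tone whose peak is 0.4 LSB (around -98 dBFS), like a fade tail:
t = np.arange(48000) / 48000
quiet = 0.4 / 2 ** 15 * np.sin(2 * np.pi * 1000 * t)

plain = to_16bit(quiet, dither=False)    # every sample rounds to 0: tone erased
dithered = to_16bit(quiet, dither=True)  # tone survives, carried in the noise
```

    Without dither the sub-LSB tail simply gates to digital silence; with dither it decays smoothly into a benign noise floor, which is why dither is applied once, at the final bit-depth reduction.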


     
    Last edited: Mar 2, 2025 at 11:51 AM
  10. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    Aside from "noise floor" which I already perfectly understand, please help me to understand why you require higher bit depths.
     
  11. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    Lol! "Dither up?" It doesn't even make sense! I need a friend right now. This is too much bullshit for me to handle.
     
  12. xorome

    xorome Audiosexual

    Joined:
    Sep 28, 2021
    Messages:
    1,311
    Likes Received:
    961
    a) How difficult would it be for you to fill in the single blank in this series: 1, ?, 3, 4, 5, 6, 7.

    b) How difficult would it be for you to fill in the single blank in this series: 1, ?, 7.

    a = high sample rate, b = low sample rate

    Which one's more difficult to put back together?

    Indeed - but only very few. Any respectable synth generates its waveforms at an internal sample rate that is decoupled from the DAW's sample rate.

    I think this is really difficult to judge. If the detector signal is oversampled, like in TDR compressors, you're adding more delay or pre-ringing to it, which is going to add its own form of distortion to detection.

    If you really meant "grainy", then I'd imagine you were thinking of bit depth instead of SR?

    Higher than what, though? I mean... render out a real snare hit with a full tail (>5 s) to 16-bit, without dither. Re-import the render, put OTT on it and slam it, play the entire thing (protect your ears). Do the same, but render to 16-bit with dither, then again at 24-bit.

    Maybe you want to re-use your old samples for a different project a couple years down the road. 24/32 = long term "headroom" for whatever post processing you might want to do in the future.

    Just in case: you're not mixing in 16-bit when you set your DAW to 16-bit. Mix bus depth (where plugins do their processing) and output bus depth (what you hear and what gets printed to a file) are different things.
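    That last distinction is easy to show numerically. Assuming a 32-bit float mix bus (as in most modern DAWs, though the values here are just an illustration), intermediate overs above 0 dBFS survive until the output stage:

```python
import numpy as np

# A 1 kHz tone peaking at +6 dBFS on the (float) mix bus:
t = np.arange(1000) / 48000.0
hot = (2.0 * np.sin(2 * np.pi * 1000.0 * t)).astype(np.float32)

# Float bus: pull the fader down 6 dB and the overs come back intact.
recovered = 0.5 * hot

# A fixed-point bus at the same point would simply have clipped the tops off:
clipped = np.clip(hot, -1.0, 1.0)
```

    The float mixdown loses nothing to the over; the fixed-point path has already destroyed the waveform peaks, which is the practical reason plugin processing happens at 32/64-bit float regardless of the session's file bit depth.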
     
  13. vuldegger

    vuldegger Platinum Record

    Joined:
    Mar 15, 2021
    Messages:
    385
    Likes Received:
    169
    Since every smartphone, tablet, music player, car stereo, TV, and video editor can play 48/24 files, I see no point in working at, or downsampling to, 44.1/16 anymore.
     
  14. saccamano

    saccamano Audiosexual

    Joined:
    Mar 26, 2023
    Messages:
    1,471
    Likes Received:
    599
    Location:
    CBGB omfug
    That may be the cause of the ignorance in a nutshell: computer envy... These days there really shouldn't be anyone who lacks the computing power to do whatever they wish, especially on the PC side, if you're savvy enough. Computing power is cheaper per processor cycle than it has ever been (Apple not included)... But to think some dipshit YouTuber should be dictating what is what in the audio world is beyond myopic and smacks of borderline lunacy...
    That's all.
     
    Last edited: Mar 2, 2025 at 10:51 PM
  15. FrankPig

    FrankPig Audiosexual

    Joined:
    Jan 31, 2021
    Messages:
    752
    Likes Received:
    542
    Location:
    Hog Heaven
    48kHz/24bit
     
  16. Will Kweks

    Will Kweks Rock Star

    Joined:
    Oct 31, 2023
    Messages:
    698
    Likes Received:
    432
    This is not directed at you personally, but there's the opposite side, which is "bigger number = better" so obviously 96kHz sample rate is better than 48kHz because it's bigger (similarly for bit depths, but that's another discussion).

    So what do you really gain at higher sample rates, since we can't produce or hear ultrasonic audio information? Let's also ignore weird aliasing due to potential ultrasonics, since that's mostly theoretical. I'd argue that a lot of plugins aren't designed for high sample rates; they work because they have to be programmed to handle multiple rates anyway, but are the internal algorithms and techniques really geared towards them?

    But say you gain better temporal resolution, which is true; unless your recording is a simple matched stereo pair untouched by any temporal process, though, this win isn't actually realized. Say you adjust phase in a multi-mic drum recording: the temporal difference of a one-sample nudge is already in the tens of microseconds (about 21 µs at 48k, 10 µs at 96k). Who is actually going to analyse and tweak it down to that level? A simple filter is probably going to phase-shift it so much it's impossible to control accurately.
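    For concreteness, the size of a one-sample nudge at common rates is just 1/fs:

```python
# Duration of a single sample at common session rates, in microseconds.
for fs in (44100, 48000, 88200, 96000, 192000):
    print(f"{fs:>6} Hz: one sample = {1e6 / fs:6.2f} us")
```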

    Now let's say our delivery format is 48k, how much of those carefully adjusted temporal relationships survive downsampling? I don't know, I never looked into it.

    I'm not saying that high sample rates are useless, or won't be better in some cases. What I am saying though is that 96/192kHz etc. are not an automatic improvement, it requires knowledge and adds a few more considerations into the process.

    So yeah, I too give not one single shit what some YT yahoo says. I've tried 88.2 kHz and 96 kHz to see whether the lower latency was useful for my purposes, and whether I'd see any improvement in the material (a mixture of acoustic and synthetic) I process. Since I don't deal with exact replications of concert-hall acoustics, the difference to me is negligible.
     
  17. _wildpineapple_

    _wildpineapple_ Ultrasonic

    Joined:
    Jul 18, 2020
    Messages:
    60
    Likes Received:
    21
    So, how professional people work is: we work at the sample rate the client sends their files in. It doesn't matter which sample rate they use.
    But if I track the client, I use either 44.1k for modern genres (mainly rap and pop) or 96k for the others. And the bit depth always stays the same: 24-bit for tracking, 32-bit when mixing, and whatever resolution is necessary for delivery.
    None of this has any specific reason; it's just how I've been working for the past 10-12 years, and it works fine for me.
     
  18. clone

    clone Audiosexual

    Joined:
    Feb 5, 2021
    Messages:
    7,993
    Likes Received:
    3,497
    As always, any time some supposed "information" in audio is offered up with someone's "resume" as the only proof of what they are claiming, it's basically BS. Otherwise they would simply support it with facts. Plenty of people do things the wrong way for a long time.

    There are many "professionals" who do mastering and, so they can use all their expensive DACs and hardware, will re-record into their DAW at whatever session sample rate and bit depth they feel will let them do their work best, based on their gear and their workflow.

    The client never even sees it. They get back whatever delivery format they request and pay for.
     
  19. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    Internal processing is either 32 or 64 bit. I understand that.

    A single audio sample kept for reuse later down the track does not require 192, 144, or even 96 dB of dynamic range. If you use 32-bit for the reduced noise floor, then okay, I can understand that logic.
     
    Last edited: Mar 3, 2025 at 7:40 AM
  20. UTiLiTY

    UTiLiTY Ultrasonic

    Joined:
    Oct 25, 2024
    Messages:
    139
    Likes Received:
    22
    That's just accepting what you get from the artist in exchange for some dollars... And I totally understand that. I've done that as well. I think it's debatable as to whether or not it's "professional." - No offence intended.
     