False: each bit represents roughly 6 dB of amplitude. Multiply that by 16 and you get 96 dB of dynamic range. So the noise floor can't technically be reproduced lower than -96 dBFS, as stated earlier, because anything below that cannot be represented within the range of the 16 bits.
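The 6 dB-per-bit figure comes from 20·log10(2) ≈ 6.02 dB, since each extra bit doubles the number of representable amplitude levels. A quick sanity check in Python (my own sketch, not from the thread):

```python
import math

# one bit doubles the amplitude resolution: 20 * log10(2) dB per bit
db_per_bit = 20 * math.log10(2)
print(round(db_per_bit, 2))        # 6.02

# theoretical dynamic range for 16 and 24 bits
print(round(16 * db_per_bit, 1))   # 96.3
print(round(24 * db_per_bit, 1))   # 144.5
```

So the "96 dB" number above is really 96.33 dB, and 24-bit extends it to roughly 144 dB.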
Lel, it was only cuz someone else mentioned another thread and I thought some troll had done something... anyway personally i think u should go back to ur tigers pls...
So, I'm rendering 24-bit stems as we speak so I can reimport them into the session and save RAM (no sample rate change). If I'm understanding correctly, I don't need to dither in this instance, is that correct? I'm using Ableton Live; does it work internally in 32-bit float and therefore need to dither down to 24 bit for these stems instead? I watched the video but didn't really understand the solution for my situation...
I'm convinced that the 24-bit difference is not noticeable by the majority of casual listeners. That's not to say there are no cases where 24-bit is desirable. There are many variables, such as: type of recording, the listener's ear discrimination, playback system, etc. For some reason, IMO, this kind of discussion sparks more controversy than is warranted. I'm not going to get caught up in a debate with a geek with a slide rule and a blackboard full of impressive-looking math. There are plenty of math mavens who get a pass because few people are up to the task of going toe to toe purely on a math level. Of course I believe in giving credit where credit is due, though.

I conducted limited experiments some years ago with a commercial 24-bit recording and noticed a clear difference between 16 and 24 bits. I A/B'd the same 24-bit recording by switching the playback hardware between 16 and 24 bit (if there was an issue with methodology, it might be here). 16-bit sounded fantastic, I thought. Smooth, clear, perfect. However, 24-bit sounded "fuller", "fatter", "more defined". 16-bit sounded clearly "flatter" by comparison. Still, 16-bit sounded perfect on its own, and I would not have been the wiser had I not compared the two versions. I trust my ears, and if my methodology is flawed, I'm open to revision.

All that said, there are probably many cases where the 24-bit difference is not going to be noticeable. Frankly, I'm not an audiophile and can enjoy recordings in either lossless or lossy.
As you will see in the link below, not even experts or audiophiles can. Unfortunately, applied technology works like that. Just imagine if a tech company did its quality control ONLY by looking or by touching. Instead, they use rulers, micrometers, spectrum analyzers, THD measurements, and the like: in a word, metrology. The only bias-free way to perform this kind of test is blind, with a controlled environment and known equipment (+ specs). While "fatter" and "more defined" are common terms, they don't express absolute values; your "fatter" could be different from mine. It's good to trust ears, but I don't think any ear could discern a difference that a spectrum/distortion analyzer could not. As already said, that doesn't matter, at least in the 16 vs. 24 bit comparison; see the link below. "Conclusions: In a naturalistic survey of 140 respondents using high quality musical samples sourced from high-resolution 24/96 digital audio collected over 2 months, there was no evidence that 24-bit audio could be appreciably differentiated from the same music dithered down to 16-bits using a basic algorithm (Adobe Audition 3, flat triangular dither, 0.5 bits)." http://archimago.blogspot.com/2014/06/24-bit-vs-16-bit-audio-test-part-ii.html Last edited: Feb 27, 2020
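For anyone wondering what "flat triangular dither" actually does when reducing bit depth: you add low-level triangular-PDF (TPDF) noise before rounding, which decorrelates the quantization error from the signal. A minimal sketch of the idea in plain Python (my own illustration, stdlib only; the scale factor and clipping are my assumptions about a typical [-1, 1) float-to-int conversion, not Audition's exact algorithm):

```python
import random

def quantize_with_tpdf(samples, bits=16, seed=42):
    """Quantize floats in [-1, 1) to `bits` integers with TPDF dither.

    TPDF dither = the sum of two independent uniform noises, each
    spanning +/- 0.5 LSB, added before rounding; it turns signal-
    correlated quantization distortion into benign, constant noise.
    """
    rng = random.Random(seed)
    scale = 2 ** (bits - 1)               # 32768 for 16-bit
    out = []
    for x in samples:
        dither = (rng.random() - 0.5) + (rng.random() - 0.5)
        q = round(x * scale + dither)
        q = max(-scale, min(scale - 1, q))  # clip to the integer range
        out.append(q)
    return out

# a constant signal ~0.3 LSB in size: without dither it would round to
# all zeros and vanish; with dither, its average level is preserved
low_level = [0.3 / 32768] * 10000
q = quantize_with_tpdf(low_level)
print(sum(q) / len(q))   # hovers near 0.3 LSB, not 0
```

That last bit is why dithered 16-bit can carry detail below the nominal quantization step: the information survives as the average of the noisy samples.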
My only guess would be that we are getting into the realm of thermal noise produced by the components of the computer ("Grundrauschen", i.e. the inherent noise floor). Spectrum analyzers have a noise floor too (at least in hardware). But I am not a physicist nor a mathematician, so this is guesswork. What I do know is this: -140 to -120 dB is pretty damn silent. And I only care about silence in terms of composition; I don't care about a technical noise floor that low at all. I think nobody producing music should care about it, because noise this low would probably be inaudible to humans even in a dead silent room. It's more likely that the sound your blood creates while streaming through your arteries would mask this noise.
https://i.imgur.com/8t61Q7t.png My image was more to refute the claim that 16-bit can't have a noise floor below -96 dB. It can, as I stated originally: c. -125 dB.
That is probably not 16-bit. As far as I know, most applications work internally at a higher bit depth than the imported audio.
"The Nyquist frequency should not be confused with the Nyquist rate, the latter is the minimum sampling rate that satisfies the Nyquist sampling criterion for a given signal or family of signals. The Nyquist rate is twice the maximum component frequency of the function being sampled. For example, the Nyquist rate for the sinusoid at 0.6 fs is 1.2 fs, which means that at the fs rate, it is being undersampled. Thus, Nyquist rate is a property of a continuous-time signal, whereas Nyquist frequency is a property of a discrete-time system."

I will keep it simple. If loud music is all you make/listen to, you'll have a hard time noticing differences. On the other hand, music like classical or jazz, or any kind of music which may include really quiet passages alongside extremely loud crescendos, needs 24-bit, or else those passages aren't translated correctly into data. Add to this that acoustic recordings need to capture every nuance of the players no matter how low in volume, and you end up with a view like this: recording a top classical orchestra in 16-bit equals an obligatory musical castration of sorts lol. It simply isn't adequate. But this of course is my view; i don't expect anyone to conform to it just because i said so or "the rules" imply so. I've heard great songs with bad or mediocre production and total crap with top-notch quality. And blimey, it's usually the latter, since the tech has become much more widely available over the past 20 yrs. Cheers
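The quote's "sinusoid at 0.6 fs is undersampled" claim is easy to verify numerically: a tone above the Nyquist frequency (fs/2) produces exactly the same sample values as a folded-down alias. A small sketch (fs = 48000 is my arbitrary choice for illustration):

```python
import math

fs = 48000            # hypothetical sample rate
f_signal = 0.6 * fs   # 28.8 kHz: above Nyquist (fs/2), so undersampled
f_alias = 0.4 * fs    # the frequency it folds down to (1.0 - 0.6 = 0.4)

# At the sample instants, the 0.6*fs tone is indistinguishable from a
# phase-inverted 0.4*fs tone: sin(2*pi*0.6*n) == -sin(2*pi*0.4*n)
for n in range(8):
    t = n / fs
    s_real = math.sin(2 * math.pi * f_signal * t)
    s_alias = -math.sin(2 * math.pi * f_alias * t)
    assert abs(s_real - s_alias) < 1e-9
print("0.6*fs tone aliases to 0.4*fs at sample rate fs")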
https://www.sciencedirect.com/topics/engineering/nyquist-rate To be clear, per the Nyquist rate, to fully capture a wave it must be sampled at twice its highest frequency component. In other words, a higher sample rate lets you capture higher frequencies, while a greater bit depth gives your sound more headroom, meaning peaks are less likely to be truncated and the quietest subtleties of the music are less likely to be drowned out by the noise floor.
Sampling rate = accuracy of the temporal resolution: the more samples used per unit of time, the smaller the deviation in time of the digital recording compared to the original analog signal.

Bit depth = accuracy of the signal-level resolution: the more bits used, the smaller the deviation in level of the digital recording compared to the original analog signal.

16 vs. 24 bit: for the level errors (noise) of a 16-bit recording to be audible to a person at all, you would have to monitor at a nominal sound pressure level (SPL) of 96 dB (with absolute silence in the room!), which is considerably louder than the average listening level.
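The 96 dB SPL figure above follows from simple arithmetic: if the 16-bit noise floor sits ~96 dB below full scale and the threshold of hearing is taken as ~0 dB SPL (both round figures, my assumptions), then the playback peak must hit at least 96 dB SPL before the quantization noise even reaches audibility:

```python
# Rough audibility arithmetic for the 16-bit noise floor.
noise_floor_dbfs = -96       # 16-bit noise floor relative to full scale
hearing_threshold_spl = 0    # dB SPL, approx. threshold in a silent room

# Peak playback SPL needed so the noise floor just reaches audibility:
required_peak_spl = hearing_threshold_spl - noise_floor_dbfs
print(required_peak_spl)     # 96 -> louder than typical listening levels
```

In practice, room noise (usually 30+ dB SPL) masks the quantization noise long before that point, which is the post's argument.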
back in my day we didn't even have fucking electricity to bitch about sample rate and bit depth. You kids are fucking lucky u bloody shit cunts... I would cane you all if mi bones weren't all fucked up now...
You know what, I could probably do with well-mastered 12-bit nonlinear audio as the final 2-channel result, but I'd rather not skimp on resolution while coming up with the shit. 24-bit, float, whatever, it's all good. But for the deliverables, 16-bit is all good, really.