Why are "mono instruments" often sampled in stereo?

Discussion in 'Working with Sound' started by El Duderino, Jan 23, 2024.

  1. Xupito

    Xupito Audiosexual

    Joined:
    Jan 21, 2012
    Messages:
    7,228
    Likes Received:
    3,995
    Location:
    Europe
    As rudimentary as it may sound... if it's bigger (wider) than your head and the sound is different on each side, it should be recorded in stereo.

    You pretty much answered your own question. An orchestra or choir as a whole is stereo; individual instruments are not. A piano listened to up close could be.
     
    • Like x 1
    • Agree x 1
  2. Zenarcist

    Zenarcist Audiosexual

    Joined:
    Jan 1, 2012
    Messages:
    4,251
    Likes Received:
    2,728
    Location:
    Planet Earth
    Double track or duplicate track?
     
  3. Zenarcist

    Zenarcist Audiosexual

    Joined:
    Jan 1, 2012
    Messages:
    4,251
    Likes Received:
    2,728
    Location:
    Planet Earth
    Not everything has to be mono, but it's hard to understand why some people always turn true mono into stereo.
     
    Last edited: Jan 24, 2024
  4. justwannadownload

    justwannadownload Audiosexual

    Joined:
    Jan 13, 2019
    Messages:
    1,308
    Likes Received:
    848
    Location:
    Central Asia
    You really, really should've phrased it differently, eh? Nobody's even bothering to read the explanation; it's like only the first and last posts exist.
     
  5. Lois Lane

    Lois Lane Audiosexual

    Joined:
    Jan 16, 2019
    Messages:
    4,767
    Likes Received:
    4,692
    Location:
    Somewhere Over The Rainbow
    I've found over the years that if I record, let's say, a violin in stereo, both the room and the microphone choices come more into focus. In a good room, omni patterns give more of the natural sound of the instrument and the personality of the space. In a not-so-fantastic room, cardioid or supercardioid patterns are much more applicable, since unwanted reflections with a negative impact would otherwise be captured as well. The microphones themselves also come into play, as some mics don't have the nicest off-axis response. For instance, I have a Beyerdynamic MC930 that is wonderful for close micing and which I really love on acoustic instruments and some percussion, but when I pull back, the bleed from other instruments in a group setting isn't very flattering and mucks up the works. On the other hand, my Gefell M930 has the sweetest off-axis sound going and flatters anything within its earshot (as well as an incredibly musical and tunable proximity effect that is smooth as butter from about 3 feet out to right up against the source).

    That being said, why are mono instruments often sampled in stereo? The only practical explanation I can discern is to offer a certain sound stage in which the person mixing can make choices about the sound they are striving for. Close micing each instrument in an orchestra would not, I'd guess, be very flattering to listen to, whereas a finely tuned sound stage can convey the enormity and impact of the entire orchestra even with only two main microphones, with spot micing to bring out solos.

    I've never recorded a vocalist in stereo, and might only do so if the space itself, a church or another large room with a big personality, informed the solo performance. The same goes for recording an electric guitar coming out of a single-speaker enclosure: there is no real need to record it in stereo. Using more than one mic, yes, like a 57 and a large-capsule condenser blended to taste, but not for stereo, at least for rock and roll.
     
  6. pratyahara

    pratyahara Guest

    While a single microphone recording a single instrument indeed won't have phase issues arising from different arrival times, the microphone membrane itself can introduce subtle phase shifts depending on the frequency of the sound.
    High frequencies: Treble tones have shorter wavelengths, meaning they cause the microphone membrane to vibrate quickly and with smaller displacements. The membrane readily follows these rapid changes, accurately capturing the sound's timing and phase.
    Low frequencies: Bass tones, on the other hand, have much longer wavelengths and cause the membrane to vibrate slowly and with larger displacements. The membrane's inertia can make it lag slightly behind the changes in pressure, introducing a minute delay in its response. This delay can translate to a slight phase shift in the recorded sound, especially at very low frequencies.
    This phase shift is usually negligible for most listening purposes, especially for solo instrument recordings. However, it can become noticeable in certain situations:
    When mixing multiple tracks: If you're mixing a recording with multiple instruments, the phase shift introduced by the microphone on bass frequencies can cause subtle cancellation or comb filtering effects when combined with other tracks, especially if those tracks were recorded with different microphones.
    In very low-frequency ranges: For the lowest sub-bass frequencies (down toward 20 Hz), the phase shift can become more pronounced and potentially affect the overall feel of the sound.
    The type of microphone can influence the phase shift. Condenser microphones generally have a faster response than dynamic microphones, making them less prone to phase issues at low frequencies.
    The microphone's frequency response curve can also play a role. Some microphones are designed to roll off the low frequencies, which can help mitigate the phase shift issue.
    In most cases, the phase shift caused by the microphone membrane is not a major concern for casual listening. However, if you're working on a critical recording or mixing multiple tracks, it's something to be aware of and potentially compensate for.
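    As a toy illustration of that comb-filtering point, here is a minimal numpy sketch (mine, not from the post; the 0.5 ms offset and the test frequencies are arbitrary choices) that sums a signal with a slightly delayed copy of itself, a stand-in for any small inter-track phase difference:

    ```python
    import numpy as np

    # Sum a signal with a slightly delayed copy of itself and measure the
    # resulting gain per frequency. The 0.5 ms offset is an arbitrary
    # stand-in for any small inter-track timing/phase difference.
    sr = 48000                          # sample rate in Hz
    delay_s = 0.0005                    # 0.5 ms offset between the "tracks"
    t = np.arange(int(sr * 0.5)) / sr   # half a second of audio

    for f in (50, 500, 1000, 1500, 2000):
        dry = np.sin(2 * np.pi * f * t)
        delayed = np.sin(2 * np.pi * f * (t - delay_s))
        peak = np.max(np.abs(dry + delayed))
        # +6 dB = fully in phase; large negative values = near-cancellation
        # (the 1e-9 floor keeps log10 finite on exact cancellation).
        gain_db = 20 * np.log10(max(peak, 1e-9))
        print(f"{f:5d} Hz: {gain_db:+6.1f} dB relative to one track")
    ```

    With a 0.5 ms offset the two copies reinforce at some frequencies (+6 dB) and cancel outright near 1 kHz - exactly the comb pattern that shows up when slightly misaligned tracks are summed.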
     
  7. pratyahara

    pratyahara Guest

    In other words, each type of instrument has a different directivity pattern (its own polar characteristic of sound emission) or sound dispersion, and this cannot be captured by a single microphone. For example, a piano is wide enough that a live listener hears substantial level and phase differences across its width. A single mono microphone, however, cannot capture the direction from which these differences originate.
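    To make that concrete, here's a small numpy sketch with made-up geometry (the piano width, mic spacing, and distance are all hypothetical numbers) showing that a spaced pair encodes which side a sound came from, while a single mic position cannot:

    ```python
    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, room temperature

    def arrival_ms(src, mic):
        """Propagation time from a source point to a mic position, in ms."""
        return np.linalg.norm(np.subtract(src, mic)) / SPEED_OF_SOUND * 1e3

    # Hypothetical geometry, in metres: the bass and treble ends of a
    # ~1.5 m wide piano, and a spaced A/B pair 0.6 m apart, 2 m out.
    bass_end, treble_end = (-0.75, 0.0), (0.75, 0.0)
    mic_l, mic_r = (-0.3, 2.0), (0.3, 2.0)

    for name, src in (("bass end  ", bass_end), ("treble end", treble_end)):
        dl, dr = arrival_ms(src, mic_l), arrival_ms(src, mic_r)
        # The sign of the interchannel time difference encodes which side
        # the sound came from - the cue a single mono mic cannot keep.
        print(f"{name}: L {dl:.3f} ms  R {dr:.3f} ms  (L-R {dl - dr:+.3f} ms)")
    ```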
     
  8. Zenarcist

    Zenarcist Audiosexual

    Joined:
    Jan 1, 2012
    Messages:
    4,251
    Likes Received:
    2,728
    Location:
    Planet Earth
    After reading this thread I am so happy I no longer use Kontakt :)
     
  9. macros mk2

    macros mk2 Rock Star

    Joined:
    Sep 22, 2022
    Messages:
    442
    Likes Received:
    316
    Location:
    seattle
    When I fart and sneeze at the same time I become a stereo instrument is what I've learned from this thread.
     
  10. Trurl

    Trurl Audiosexual

    Joined:
    Nov 17, 2019
    Messages:
    2,480
    Likes Received:
    1,464
    Double track, as in play it another time :wink:
     
  11. Zenarcist

    Zenarcist Audiosexual

    Joined:
    Jan 1, 2012
    Messages:
    4,251
    Likes Received:
    2,728
    Location:
    Planet Earth
    It adds more character to a track for sure, depending on the genre :like:

    I still like X-Y @ 90° on predominantly acoustic guitar music though, it just sounds right.

    I also replaced all my Kontakt string libraries with a Solina and a Mellotron, so no more first world problems and my music sounds better :)
     
    Last edited: Jan 24, 2024
  12. Haze

    Haze Platinum Record

    Joined:
    Nov 28, 2013
    Messages:
    213
    Likes Received:
    174
    Location:
    UK
    I would say the primary reasons for stereo samples are consistency, compatibility and versatility.

    Stereo channels have different behaviour to mono channels: stereo processing will only deliver channel one when placed on a mono channel. The simplest demonstration of this would be an autopanner - drive it with a square-wave LFO and listen to it act like a gate, periodically muting channel two (sketched below). When using internal processing in Kontakt that won't be the case; however, if the output is routed to a mono channel, any stereo processing inserted after it would behave exactly as described in the panning example. This behaviour may differ in different samplers (probably not), but limiting options by restricting the channel count, thereby reducing potential applications, would seem short-sighted in a commercial sample library.
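    A minimal numpy sketch of that square-wave autopanner example (the 220 Hz tone and 2 Hz LFO rate are arbitrary):

    ```python
    import numpy as np

    # A square-wave LFO hard-pans a mono source alternately full-left and
    # full-right. A mono channel that only passes channel one then
    # behaves like a gate.
    sr = 48000
    t = np.arange(sr) / sr                    # 1 second
    source = np.sin(2 * np.pi * 220 * t)      # the source signal

    lfo = np.sign(np.sin(2 * np.pi * 2 * t))  # 2 Hz square-wave pan LFO
    left = source * (lfo > 0)                 # channel one: plays when panned left
    right = source * (lfo <= 0)               # channel two: lost on a mono channel

    mono_channel_out = left                   # only channel one survives
    silent = mono_channel_out[lfo <= 0]       # the "panned right" half-cycles
    print("RMS during right-panned phases:",
          float(np.sqrt(np.mean(silent ** 2))))  # 0.0 -> periodic muting
    ```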

    Samples, of course, can also be utilised directly on the timeline, and are more often used in this fashion by electronic producers. In this scenario mono files don't make much sense unless placed on a mono channel and processed entirely in mono, which is very limiting behaviour, prohibiting stereo processing inserts. Single channel files can of course be placed on a stereo channel and will be processed identically to a two channel version, though there will be a 3 dB gain reduction. If a dual mono file is summed to mono by bringing both left and right center, there will be an additional increase of 3 dB - this is all the result of console/DAW panning laws. Single channel files can cause problems in this regard: a normalised single channel with its fader set at 0 will actually be at -3 dBFS, as opposed to 0 dBFS for a two channel version (the arithmetic is sketched below). The difference is huge from an audible point of view, and we all know louder is better, right...
    :speaker::metal::drummer::metal::speaker:

    Ok, not necessarily...
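    For anyone who wants the pan-law arithmetic from above spelled out, here's a quick numpy sketch, assuming a -3 dB equal-power law at center (the exact law is host-dependent):

    ```python
    import numpy as np

    def dbfs(x):
        return 20 * np.log10(x)

    peak = 1.0                    # a normalised (0 dBFS) file
    center = 10 ** (-3 / 20)      # -3 dB equal-power pan law at center

    # Single-channel file on a stereo channel, panned center:
    print(f"mono file, centered:        {dbfs(peak * center):+.1f} dBFS")  # -3.0

    # Dual mono (identical L and R) with both sides brought to center:
    # two correlated signals at -3 dB each sum to +3 dB over one at unity.
    print(f"dual mono summed at center: {dbfs(2 * peak * center):+.1f} dBFS")  # +3.0
    ```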

    There always seems to be some confusion surrounding Mono / Dual Mono / Stereo / Stereo Interleaved. The best way to understand stereo is that it is actually two distinct channels containing information that may or may not sum to mono (M/S is better understood as Sum/Difference). All two channel PCM files are dual mono: they contain individually encoded information (which can of course be duplicate information, which will present as mono to a listener). Mono and Stereo are really subjective terms describing the position in space of a sound as perceived binaurally; it isn't actually what's going on in the data. A two channel file that is split into two single channel files and hard-panned is identical to the two channel version in every way. Stereo interleaved isn't a perceptual audio phenomenon; it's a term for how the data of dual mono files is stored, with the two channels' samples alternating in one file.
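    A tiny numpy sketch of that Sum/Difference view (the 440 Hz tone and 0.5 ms shift are arbitrary): identical channels produce a zero side signal, genuinely different channels don't.

    ```python
    import numpy as np

    def ms_encode(left, right):
        """Mid/Side is literally Sum/Difference of the two channels."""
        return (left + right) / 2, (left - right) / 2

    t = np.arange(48000) / 48000
    mono = np.sin(2 * np.pi * 440 * t)

    # Dual mono: duplicate information in both channels -> zero side signal.
    mid, side = ms_encode(mono, mono.copy())
    print("dual mono, side energy:  ", float(np.sum(side ** 2)))   # 0.0

    # Genuinely different channels -> non-zero side signal.
    mid, side = ms_encode(mono, np.roll(mono, 24))                 # 0.5 ms shift
    print("decorrelated, side energy:", float(np.sum(side ** 2)))  # > 0
    ```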
     
  13. Zenarcist

    Zenarcist Audiosexual

    Joined:
    Jan 1, 2012
    Messages:
    4,251
    Likes Received:
    2,728
    Location:
    Planet Earth
    A very good explanation :wink: You should post more often, as 107 posts in 10 years is clearly not enough! :)
     
  14. Xupito

    Xupito Audiosexual

    Joined:
    Jan 21, 2012
    Messages:
    7,228
    Likes Received:
    3,995
    Location:
    Europe
    Without going into details you both know better than me... I left out of my very simple, minimal explanation the natural reverb of the room/hall between the instrument and the listener/mic. That alone justifies recording in stereo in several situations.

    Also, for the very specific case of drum kits, the micing is so complex because effects like reverb are often applied only to certain pieces of the kit when mixing. So more mics may be needed, each with minimal crosstalk from the pieces it isn't supposed to record. I think that's called bleed. First time I read "bleeding" in that context I was like "holy shit, can a drumset bleed? Poor guys..." :rofl:
    Someone correct me if I'm wrong but I seem to recall this from somewhere.
     
  15. Xupito

    Xupito Audiosexual

    Joined:
    Jan 21, 2012
    Messages:
    7,228
    Likes Received:
    3,995
    Location:
    Europe
    I second that
    We need more @Haze quality posts :wink:
     
  16. Garamondo Furbish

    Garamondo Furbish Audiosexual

    Joined:
    Nov 13, 2023
    Messages:
    1,834
    Likes Received:
    881
    Location:
    North America
    Realism. Most people have two ears, and hence hear in stereo, which is why stereo was invented all those years ago.
     
  17. Garamondo Furbish

    Garamondo Furbish Audiosexual

    Joined:
    Nov 13, 2023
    Messages:
    1,834
    Likes Received:
    881
    Location:
    North America
    Drum kits are a combo of drums and cymbals, and cymbals are problematic. Drums are often the skeleton of a track that everything else hangs on: the bass follows the drums, the guitar follows the bass, etc.
    The drums have to be done right or nothing else works right.

    Fun fact: Pink Floyd worked with Alan Parsons on The Dark Side of the Moon because Parsons could get consistent, great drum sounds out of Nick Mason's kit, over and over. Floyd were a touring band at the time and would bring their gear in to record, then tear it down and gig with it. Knowing how to mic a drum kit can get you amazing work if you are good at it.
     
    • Agree x 1
    • Interesting x 1
  18. fleschdnb

    fleschdnb Kapellmeister

    Joined:
    Jan 31, 2014
    Messages:
    126
    Likes Received:
    45
    Pure and simple - it's done for compatibility reasons: industry standards for the target market (usually electronic music producers/bedroom producers/noobs), mono plugins, batch processing, etc. I won't go into it all.

    I think the whole question is being lost on most people in the replies, though. These are still mono sounds, even if they technically have a left and a right channel - there is only mid info, no side info. These aren't made into stereo files to "capture the room they were recorded in"; they are still just mono recordings, essentially stacked and panned hard left and hard right. The two channels would cancel each other out completely if one were phase-inverted, so there is no side info, only mid info: monophonic, one mic used to record. If the person had wanted to capture the room, they would have used more than one mic, and we wouldn't be having this conversation. (See the quick null test sketched below.)
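    A minimal numpy version of that null test (the -90 dB silence threshold is an arbitrary choice):

    ```python
    import numpy as np

    def is_dual_mono(left, right, threshold_db=-90.0):
        """Null test: flip the polarity of one channel and sum. If the
        residual is essentially silence, the file carries no side info."""
        residual = left - right                 # polarity flip + sum
        ref = max(np.max(np.abs(left)), np.max(np.abs(right)), 1e-12)
        peak = max(np.max(np.abs(residual)), 1e-12)
        return 20 * np.log10(peak / ref) < threshold_db

    t = np.arange(48000) / 48000
    mono = np.sin(2 * np.pi * 220 * t)

    print(is_dual_mono(mono, mono.copy()))       # True: "fake" stereo
    print(is_dual_mono(mono, np.roll(mono, 7)))  # False: real side content
    ```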

    It's purely compatibility/ease of use. There is absolutely no reason for a mono sound file in modern times...
     
    • Agree x 2
    • Like x 1
    • Love it! x 1
  19. Slavestate

    Slavestate Platinum Record

    Joined:
    Jul 28, 2019
    Messages:
    441
    Likes Received:
    196
    The console all my samplers and hardware run through only has so many channels. If a sample is not true stereo, there is no reason to waste another channel on the mixer when it's going to be the exact same thing panned to one side, not to mention it also eats polyphony. Leave it in mono and place it in the stereo field yourself.
     
    • Like x 2
    • Agree x 1
  20. Haze

    Haze Platinum Record

    Joined:
    Nov 28, 2013
    Messages:
    213
    Likes Received:
    174
    Location:
    UK
    Cheers, it's all about time availability. Plus, more often than not, most questions are pretty well dealt with by other members before I get to them.
     