Currently creating a reverb, and I need some opinions

Discussion in 'Working with Sound' started by Fowly, Apr 17, 2024.

  1. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    144
    Likes Received:
    253
    Hey guys, for the past few months I've been working on a kinda new way of doing realistic reverb. As I often work on orchestral music, blending different sample libraries together is very important to me, and reverb plays a huge role in this. However, I've been unsatisfied with current reverb offerings. I haven't found an algorithmic reverb that sounds as nice and realistic as real reverb (even velvet-noise-based algos), and I also have trouble with convolution reverbs: separating the direct sound, the early reflections and the late reflections is essential for flexible mixing and a convincing sound, but it's very hard to do with impulse responses, and I find that most "controls" for this in convolution plugins simply don't sound good IMO.

    So my idea was to make a kind of hybrid between algorithmic and convolution reverb, using advanced techniques normally reserved for acoustic simulation software that can't run in real time, and generating an impulse response from them that could be convolved in real time in a DAW. The advantage would be the sound of a real space, but with separate impulse responses for the direct sound, early reflections and late reflections that have been truly separated in the calculation. It would also bypass most of the artifacts and limitations that come with a conventional IR capture done with speakers and a microphone. For example, accurately simulating the different dispersion characteristics of a trumpet vs. a snare drum is possible in this setup.

    But it turns out that extracting an IR from even the most advanced acoustic simulation software didn't sound great out of the box; it simply wasn't made for mixing. With a bit of research and experimentation, though, I frankensteined something that I think sounds really great and convincing. Sadly, the full process can't really be implemented in a plugin destined for artists and engineers.

    So what I'm gonna do is render the impulse responses in advance, in a way that gives enough flexibility to most users. Think of it as a convolution++ reverb, with the possibility of having different presets for rooms, instrument and mic placement, and instrument dispersion. However, the rendering times are VERY long, and a lot of manual work is still required, so I can't do 500 spaces like Altiverb.
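For illustration only, here is a minimal numpy sketch of what the playback side of such a "convolution++" reverb could look like, assuming the renderer has already produced separate direct/ER/LR impulse responses. All names and signals here are hypothetical, not the actual pipeline described in the post:

```python
import numpy as np

def hybrid_convolve(dry, ir_direct, ir_early, ir_late,
                    g_direct=1.0, g_early=1.0, g_late=1.0):
    """Mix three separately rendered IR components (direct sound,
    early reflections, late reflections) with independent gains,
    then convolve the dry signal with the combined IR."""
    n = max(len(ir_direct), len(ir_early), len(ir_late))
    # Zero-pad the components onto a common time axis before mixing.
    pad = lambda ir: np.pad(np.asarray(ir, float), (0, n - len(ir)))
    ir_mix = (g_direct * pad(ir_direct)
              + g_early * pad(ir_early)
              + g_late * pad(ir_late))
    # A real plugin would use partitioned FFT convolution here.
    return np.convolve(np.asarray(dry, float), ir_mix)
```

Because the three components were separated at render time, the `g_*` gains can rebalance direct/ER/LR freely, without the artifacts of slicing a single measured IR after the fact.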

    I'd like to know what you guys usually do with convolution reverb. What spaces do you use, in which plugins, and what kind of instruments do you use them on? And do you use algorithmic reverbs when going for a realistic sound? If so, which plugin and what kind of settings?

    Thank you if you read all of this; your opinions would really help me create a good product :wink:
     
    Last edited: Apr 17, 2024
    • Like x 4
    • Interesting x 2
    • Love it! x 2
  3. Lieglein

    Lieglein Audiosexual

    Joined:
    Nov 23, 2018
    Messages:
    1,021
    Likes Received:
    579
    To my understanding, the realistic sound in any case comes from the movement of air in the reflective space. That is what is missing in reverbs, and why people struggle so much to find the "correct" reverb all the time.
    The "Vivid Hall" algorithm of Acon Digital Verberate emulates this.

    I use this reverb exclusively, on everything. I really only change the reverb time: not the room, nor any other parameter, besides rarely the "reverb level".

    In my experience it simply works on everything with the most minimal of tweaks.
     
    • Interesting x 5
    • Like x 3
    • Love it! x 2
    • Agree x 1
    • Winner x 1
  4. No Avenger

    No Avenger Audiosexual

    Joined:
    Jul 19, 2017
    Messages:
    9,117
    Likes Received:
    6,353
    Location:
    Europe
    AFAIK, you can split ERs and LRs in most IR reverbs and thus combine them freely. Waves IR Full has the option to move the starting point or to shorten the IR down to 1/10th of a second. Together with its level envelope, it offers high versatility.
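As a rough offline illustration of that ER/LR split (a generic sketch, not Waves IR's actual processing), an IR can be cut at a chosen time with a short crossfade so that the two halves still sum back to the original:

```python
import numpy as np

def split_ir(ir, sr, split_ms=80.0, fade_ms=10.0):
    """Split a mono impulse response into ER and LR segments with a
    linear crossfade around the split point, so that er + lr
    reconstructs the original IR. Times are illustrative defaults."""
    split = int(sr * split_ms / 1000)
    fade = int(sr * fade_ms / 1000)
    ramp = np.linspace(1.0, 0.0, fade)
    er = ir.copy()
    lr = ir.copy()
    er[split:split + fade] *= ramp          # fade the ERs out
    er[split + fade:] = 0.0
    lr[:split] = 0.0
    lr[split:split + fade] *= (1.0 - ramp)  # fade the LRs in
    return er, lr
```

The two segments can then be convolved and leveled independently, which is essentially what the starting-point/shorten controls approximate inside the plugin.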
    Also, have you already tried https://www.parallax-audio.com/?
     
  5. AudioEnzyme

    AudioEnzyme Producer

    Joined:
    Jan 20, 2023
    Messages:
    299
    Likes Received:
    118
    Tips for ERs & LRs
     
    • Interesting x 3
  6. Garamondo Furbish

    Garamondo Furbish Audiosexual

    Joined:
    Nov 13, 2023
    Messages:
    2,005
    Likes Received:
    979
    Location:
    North America
    How about a "preverb" that can process a sample with a group of reverbs, so you can audition them until you hear the sound you want?
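That batch-audition idea could be prototyped offline in a few lines. A hedged sketch, where the candidate IRs are just numpy arrays loaded however you like (all names hypothetical):

```python
import numpy as np

def audition(sample, irs):
    """'Preverb' sketch: render one dry sample through a set of
    candidate IRs offline, returning a wet version per IR so they
    can be compared back to back before committing to one."""
    sample = np.asarray(sample, float)
    return {name: np.convolve(sample, np.asarray(ir, float))
            for name, ir in irs.items()}
```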
     
  7. SirGigantor

    SirGigantor Ultrasonic

    Joined:
    Oct 14, 2022
    Messages:
    126
    Likes Received:
    36
    You could do that, but it's unlikely to work. By "blending", I'm assuming you mean "resonating together in groups". You can try that all you want; Kontakt always fails at it, because it has nothing to do with the "space".

    You want comb filters, and there's only one reverb that actually does it well:

    http://soundbytes.de/Sympathizer/

    It's 32-bit, so you need a bridge. When all the strings play together, for instance, the ones with the most mass emit the most energy, because large things need more energy to vibrate; these give off enough energy to vibrate lighter things, i.e. contrabasses emit sufficient energy to cause low-level sympathetic vibrations in violins, and so on.

    Doing stuff with space is basically useless for what you've described. You want comb filters and/or unison effects, i.e. replicating the fundamentals and overtones of each note and allowing them to interfere collectively BEFORE space matters.

    People make that mistake all the time: you can use all manner of reverbs without achieving any such effect; you'll just be replicating things after the fact, when what you actually want is a variety of sympathetic vibrations interfering first.
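To make the comb-filter suggestion concrete, here is a toy sketch (my own illustration, unrelated to the Sympathizer plugin's actual internals) of a bank of feedback combs tuned to given fundamentals, mixed in as a crude stand-in for sympathetic resonance:

```python
import numpy as np

def comb_bank(x, sr, freqs, feedback=0.9, mix=0.2):
    """Feed the input through a bank of tuned feedback comb filters,
    one per fundamental, and blend in their 'ringing' as a crude
    stand-in for sympathetic resonance. Parameters are illustrative."""
    out = np.zeros_like(x)
    for f in freqs:
        delay = max(1, int(round(sr / f)))  # tuning = delay length
        buf = np.zeros(len(x))
        for n in range(len(x)):
            fb = buf[n - delay] if n >= delay else 0.0
            buf[n] = x[n] + feedback * fb   # recirculating delay
        out += buf
    return (1.0 - mix) * x + mix * out / len(freqs)
```

Each comb "rings" at sr/delay Hz and its multiples, so shared overtones keep recirculating after the source stops, loosely mimicking strings vibrating in sympathy.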

    If you get the old, 32-bit SynthEdit from the sister site, you can probably open it. When you use the third-party jBridge to bridge any SynthEdit plugin from the 32-bit era, it splits the .DLL into the DLL and the individual SynthEdit modules. I've yet to mess around with it, but I believe you can tinker with the .SEM files and/or the whole plugin in SynthEdit.

     
    • Interesting x 2
  8. Semarus

    Semarus Producer

    Joined:
    Mar 26, 2022
    Messages:
    237
    Likes Received:
    109
    It is very common among working composers to use algorithmic reverb for ERs and convolution for LRs. It's something I've been doing as well for the past couple of years. Depending on the libraries I'm using and what I'm going for, I'll use Pro-R 2 or Cinematic Rooms for ERs, and Seventh Heaven or HOFA IQ-Reverb with an IR from my collection for LRs. Each section gets its own ER/LR so that I can place them properly in the room, and then I slap Savant Audio Labs' Quantum 2772 on the master with a very light touch, just for a bit of diffusion to bring everything into the same space.

    Other than that, I also play around with Berlin Studio when I want to bring other libs into an Orchestral Tools-heavy project (plus I just love that room); it also works great on synths IMO. Also, dialing in a mic mix that focuses on room placement and leans towards dry (the room in the original samples is still very important, especially for brass), as opposed to relying only on the tree or close mics, helps with maintaining realism.

    I don't know that the solution lies entirely in the convolution IRs or how they are made. I think it comes down to how the ERs interact with the LRs, and then how the n-order reflections thereafter interact with one another. This is truly difficult to calculate in a real-time audio context, so by combining the two methods we try to disguise those crucial points of interaction, creating the illusion of what we know real reverberation would sound like, so that they don't stand out so much. Perhaps until such calculations are practical in real time, the path toward natural-sounding reverb is in simulating an estimation of the way those reflections interact.
     
  9. mk_96

    mk_96 Audiosexual

    Joined:
    Dec 31, 2020
    Messages:
    1,103
    Likes Received:
    771
    Location:
    Your heart
    Altiverb, EW Spaces, mostly trying to recreate specific places. Say, for example, I'm using a Kontakt library that was recorded at Teldex and I want to add an external instrument while making it sound like it's in the same place; I'd probably go for a Teldex IR. It's not guaranteed to work, but it's a good place to start, and it usually works pretty well. It's a compatibility thing between something that has a real space mixed in and something that doesn't.

    I also use convos to emulate the sound of hardware reverbs that have not yet been emulated in the algo world, or that are already out there but whose IRs just sound different, in which case I use them for whatever I would use a plate reverb, a spring or things like that for. And sometimes I use unrelated IRs just because they sound cool.

    Sometimes. For a realistic sound I go for something that allows me to time and/or balance reflections, with whatever method it may use (algo, convo, something in between), and the settings I use are mostly aimed at that (plus extra features like EQ, ducking or whatever the plugin may have). Since we're talking about algos: Verberate, Raummaschine, Lexicon PCM Native, Waves H-Reverb (hybrid, I know, but the features, man).

    BUT, this is general-purpose "realism", like making something be perceived as being in a specific space, like if you were looking at a picture of the space and you go all "oh yeeeeah, it does sound like it belongs there". If you reeeeeally want to feel the space in detail, like the space being THE main feature, idk, I've never been in a situation where I've needed to zoom in that much.
     
    Last edited: Apr 18, 2024
    • Like x 3
    • Interesting x 1
  10. clone

    clone Audiosexual

    Joined:
    Feb 5, 2021
    Messages:
    7,608
    Likes Received:
    3,345
    I rarely use any spaces other than those included with Altiverb 7. I have also been using the Import IR function in FabFilter Pro-R 2, and Accentize Chameleon when I do not need or want Altiverb 7 loaded into a project. "Realism" is not as important in synth-centric electronic tracks, for me anyway, so I will usually use one or two Altiverb 7 instances in a project, max. I'm also using Sonible smart:reverb as another "lighter" option than Altiverb, but I guess that could really apply to most other reverbs.
     
  11. RajuPalliBabu

    RajuPalliBabu Ultrasonic

    Joined:
    Nov 20, 2023
    Messages:
    100
    Likes Received:
    38
    I love convolution reverbs. I have a couple of secret reverbs that I found by going through them all, and they actually sample great algo reverbs, so it's kind of backwards, but I don't have the money for a Bricasti M7.

    Just wanted to say: definitely keep going with this project, and keep your ears on the sound. Altiverb for me is more for goofy stuff like putting your sounds in Bob Marley's bathroom or inside an oil tanker. Fun but useless (yes, I know it has great famous spaces and is great, but still, I never seem to want to use any of that; maybe realistic is not what I want).
     
  12. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    144
    Likes Received:
    253
    Yeah, I've tested VSS2; it's a nice algorithmic alternative to MIR and Berlin Studio, but personally I find that the sound quality is not as good as those. It is very flexible in its features, though.

    It seems that this plugin uses comb filters in feedback loops, in other words tuned delays, something that is possible to do with impulse responses. It's true that sympathetic resonances can play a role in the overall sound of an orchestra; that's why Spitfire recorded their BBC Symphony Orchestra with all of the instruments in place. (However, this doesn't replicate the players dynamically changing the transfer function of their instruments' sympathetic resonance by playing/muting, nor the change of fundamental frequency when percussion instruments are tuned differently.) But I think it is important to consider the overall level of these sympathetic resonances. When looking at an impulse response, the late reflections look like Gaussian noise. Similarly to Gaussian dither, that noise will cover any resonances that are ~6 dB lower. So for a sympathetic resonance to even be measurable, its level has to be within about 6 dB of the late reflections. With distant mics in reflective spaces, I don't think that's the case, but maybe I'm wrong. With close mics, it's maybe a different story, but I'm not planning on simulating spill between the mics of different sections (Spitfire BBCSO actually has those), nor am I planning to do a reverb based on an anechoic chamber lmao
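A quick numerical illustration of that masking argument, with entirely made-up amplitudes (the ~6 dB threshold is the post's claim, not derived here): compare the RMS level of a quiet decaying resonance against a Gaussian-noise-like late tail sharing the same decay envelope.

```python
import numpy as np

def rms_db(x):
    """RMS level in dB (re 1.0)."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

sr = 44100
t = np.arange(sr) / sr
env = np.exp(-3.0 * t)                    # shared decay envelope
rng = np.random.default_rng(0)
late_tail = 0.1 * rng.standard_normal(sr) * env        # noise-like LR
resonance = 0.01 * np.sin(2 * np.pi * 220 * t) * env   # quiet resonance
margin_db = rms_db(resonance) - rms_db(late_tail)
# With these illustrative amplitudes the resonance sits roughly 23 dB
# below the tail, so by the post's ~6 dB rule of thumb it is masked.
```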

    I don't think it's bad practice; if it sounds good, then it's good :wink: Also, I don't think the Bricasti M7 is particularly impressive compared to what today's plugins offer. But I guess plugins miss the coolness and financial destruction that come with hardware :cool:
     
    Last edited: Apr 19, 2024
  13. SirGigantor

    SirGigantor Ultrasonic

    Joined:
    Oct 14, 2022
    Messages:
    126
    Likes Received:
    36
    I'll have to look into the things you said, but I'm just speaking from experience, like taking different string VSTs etc. and running them through that SoundBytes plugin to try to get a better "orchestra" sound. Plus, I went to a college that had a music school attached; even though I wasn't in the music program, I watched quite a few of the performances live, so I know "something" is missing in a lot of the plugins, but I don't know what.

    Another way to put it is: the SoundBytes plugin does *work* for the effect I think you're describing, but the mathematics might be "wrong", in the sense that it may be achieving the effect accidentally, or in a manner that could be refined. It's one of those things where it achieves the effect more often than any other plugin when I use it, but is it "mathematically correct"? Well, that's another story...

     
  14. Lois Lane

    Lois Lane Audiosexual

    Joined:
    Jan 16, 2019
    Messages:
    4,888
    Likes Received:
    4,804
    Location:
    Somewhere Over The Rainbow
    Why don't you check out Chris from Airwindows? His plugins are open source, and he has created a bunch of different reverb iterations. You might pick up some good programming ideas, though for GUI creativity you might need to look elsewhere.

    https://www.airwindows.com/category/reverb/

     
    • Like x 1
    • Agree x 1
  15. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    770
    Likes Received:
    278
    While I support your idea to roll your own, you should know that the idea is not new at all; you can find multi-position IR sets of real rooms in orchestral Kontakt players that are 10 years old and older.

    Outside of classical music, I personally find artificial room reverbs with comparable properties (allowing sets of different source and listener positions for the same room) even more interesting.

    The keyword here is "ray tracing", i.e. you first design a virtual room until you like it, and then allow parallel instances to be set up based on this same room, over and over again.
    You can get quite good results with this even if you completely ignore the complex and unpredictable stuff that normally happens to phases in a physical room, and limit your algo to wall damping (an arbitrary frequency filter, for example ready-made FIR presets for different materials) and air absorption (a simple parallel lowpass design), with for example 3 first-order and 3 second-order reflections per wall in a Moorer- or Schroeder-like reverb design; then run these reflections individually through an aural filter bank (for headphones or for a stereo triangle, also FIR) depending on their direction relative to the listener.
    Add diffusion as a cherry on top and you're done.
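A minimal sketch of the simplest special case of that idea, the image-source method for a shoebox room, restricted to first-order reflections with plain 1/r attenuation (the wall FIRs, air absorption and aural filter bank from the post are deliberately omitted; all names are my own):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature

def first_order_reflections(room, src, lst):
    """First-order image sources for a shoebox room: mirror the
    source across each of the six walls and return (delay_s, gain)
    tap pairs with 1/r distance attenuation."""
    taps = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]   # mirror across the wall
            r = np.linalg.norm(np.array(img) - np.array(lst))
            taps.append((r / SPEED_OF_SOUND, 1.0 / max(r, 1e-6)))
    return taps
```

Each tap becomes one delayed, attenuated copy of the dry signal; higher-order reflections just iterate the mirroring, and per-tap filters would model materials and air.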

    (though one should always be aware of the fact that you can almost only hear a source´s position in contrast to other position or in best case when sources or the listener are moving.)
     
    • Like x 1
    • Interesting x 1
  16. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    770
    Likes Received:
    278
    You can use combs for the reflections if you simply combine two of them to make allpasses. For the feedback loops, non-upsampled tapped buffers should be fine.
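That comb-to-allpass trick is the classic Schroeder allpass: a feedback comb and a feedforward comb sharing one delay line. A minimal sketch:

```python
import numpy as np

def schroeder_allpass(x, delay, g=0.7):
    """Schroeder allpass section: feedback comb and feedforward comb
    sharing one delay line. Flat magnitude response, dense echoes."""
    y = np.zeros(len(x))
    buf = np.zeros(len(x))  # delay-line state, one value per sample
    for n in range(len(x)):
        d = buf[n - delay] if n >= delay else 0.0
        buf[n] = x[n] + g * d       # feedback comb
        y[n] = -g * buf[n] + d      # feedforward path
    return y
```

Because the magnitude response is flat, it densifies echoes without the metallic coloration a lone comb would add, which is why cascaded allpasses show up in almost every classic reverb topology.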
     
  17. SineWave

    SineWave Audiosexual

    Joined:
    Sep 4, 2011
    Messages:
    4,436
    Likes Received:
    3,571
    Location:
    Where the sun doesn't shine.
    When you mentioned processing early and late reflections with different IRs + algorithms, it reminded me of a really nice reverb I used to like: WizooVerb. Maybe you'd like to look it up. It was one of a kind: you could freely combine algorithmic and IR processing for early and late reflections. Really nice. Shame the company went down a long time ago. :sad:

    I wonder if you can come up with something similar. Good luck! Cheers! :wink:
     
    • Like x 1
    • Interesting x 1
  18. Arabian_jesus

    Arabian_jesus Audiosexual

    Joined:
    Jul 2, 2019
    Messages:
    981
    Likes Received:
    763
    • Interesting x 2
    • Like x 1
  19. ArticStorm

    ArticStorm Moderator Staff Member

    Joined:
    Jun 7, 2011
    Messages:
    7,865
    Likes Received:
    4,037
    Location:
    AudioSexPro
    As inspiration, I would take a look at Ableton's Hybrid Reverb. It actually works quite well; it's just that some settings are way too limited.

    Ableton's Hybrid Reverb concept itself is pretty good.
     
    • Agree x 1
    • Interesting x 1
  20. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    144
    Likes Received:
    253
    That's for sure, but I believe that capturing an IR in the real world has some limitations. While it makes it very easy to capture the sound of a real space, separating the direct sound from the reverb in the impulse response is actually quite tricky, especially in small spaces or when the mics get distant enough from the source. Also, truly separating the early reflections from the diffused tail, a very important feature for flexible mixing, is usually impossible. Common convolution-based reverbs will allow some control over the ER/LR balance with this kind of model:

    [image: idealized IR model with separate direct sound, early reflections and late tail]

    But in reality, the diffused tail arrives very quickly (sometimes under 10 ms) and combines with the ERs, making a true separation pretty much impossible:

    [image: real IR in which the diffused tail overlaps the early reflections]

    Yep, ray tracing is at the heart of the early-reflections engine I'm building, with a material simulation that has frequency-dependent absorption, diffusion, transparency and dispersion coefficients. While it's accurate for reflections, it's not good enough for the diffused late reflections. For that part, I use a wave-based simulation, with a bit of homemade trickery so that it sounds good in the high frequencies too, while still keeping rendering times under 12 hours. I need to use my computer during the day hahaha.
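For illustration only, stitching a ray-traced ER IR onto a wave-based LR IR could look something like this crossfade sketch (my own simplification, not the actual pipeline; the times are arbitrary):

```python
import numpy as np

def merge_er_lr(er_ir, lr_ir, sr, xfade_start_ms=60.0, xfade_ms=20.0):
    """Merge a ray-traced ER IR and a wave-based LR IR into one IR
    with a raised-cosine crossfade, a simple stand-in for stitching
    the two simulation stages together."""
    n = max(len(er_ir), len(lr_ir))
    er = np.pad(er_ir, (0, n - len(er_ir)))
    lr = np.pad(lr_ir, (0, n - len(lr_ir)))
    start = int(sr * xfade_start_ms / 1000)
    length = int(sr * xfade_ms / 1000)
    fade = 0.5 * (1 + np.cos(np.linspace(0, np.pi, length)))  # 1 -> 0
    w = np.ones(n)
    w[start:start + length] = fade
    w[start + length:] = 0.0
    return er * w + lr * (1.0 - w)
```

Keeping the two stages as separate IRs until this final merge is also what preserves the independent ER/LR control discussed earlier in the thread.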
     
    • Interesting x 3
  21. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    770
    Likes Received:
    278
    I see what you mean, but then use a bandlimited spike instead of a sweep: for the cost of a bit of spectral distortion you have what you want and can cut off the first 2 ms, which you can later replace with the dry track, with or without distance filtering and possibly with "artificially" lowered or raised amplitude.
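A sketch of that bandlimited-spike idea, as a windowed sinc with my own illustrative parameters, plus the 2 ms direct-sound cut:

```python
import numpy as np

def bandlimited_spike(sr, cutoff, length):
    """Windowed-sinc 'spike': a bandlimited impulse that can excite a
    space instead of a sweep, so the direct sound occupies only the
    first couple of milliseconds and can be cut off cleanly."""
    n = np.arange(length) - length // 2
    spike = np.sinc(2 * cutoff / sr * n) * np.hamming(length)
    return spike / np.max(np.abs(spike))

def cut_direct(ir, sr, cut_ms=2.0):
    """Zero out the first cut_ms of a measured IR, to be replaced
    later by the dry track as suggested above."""
    ir = ir.copy()
    ir[: int(sr * cut_ms / 1000)] = 0.0
    return ir
```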

    As for the later stages of the effect, that's more difficult.

    What about windowing/fading between the stages?

    ...or when there is no direct signal because the source is located in the side wing of a cathedral. :)


    Modelling physics accurately is great, but the method certainly has its limits.

    For example, in reality both your head and the violin on stage move and turn around a bit; in the IR they are fixed...



    The basis for such a modular reflection toy could be captured IRs of a single wall in an anechoic chamber. Then you do not have to calculate or mimic dispersion, phase, frequency response and runtime; you have it all in one go. Then you build rooms with that thing.
     