Currently creating a reverb, and I need some opinions

Discussion in 'Working with Sound' started by Fowly, Apr 17, 2024.

  1. ArticStorm

    ArticStorm Moderator Staff Member

    Joined:
    Jun 7, 2011
    Messages:
    7,700
    Likes Received:
    3,923
    Location:
    AudioSexPro
    Are you planning to code this in JUCE?
     
  2. SineWave

    SineWave Audiosexual

    Joined:
    Sep 4, 2011
    Messages:
    4,431
    Likes Received:
    3,569
    Location:
    Where the sun doesn't shine.
    And are you on GitHub maybe? :wink:

    Lately I've been socialising more on GitHub than on other social media, except for this forum and some YouTube. :)
     
    • Like x 1
    • Interesting x 1
  3. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    141
    Likes Received:
    250
    Actually I plan on just designing the 3D models, rendering the impulse responses, and licensing them to an established plugin dev. While I'm familiar with the maths and DSP, I don't know anything about plugin development. I'm more of an acoustician than a dev.
     
    • Like x 2
    • Interesting x 1
  4. ArticStorm

    ArticStorm Moderator Staff Member

    Joined:
    Jun 7, 2011
    Messages:
    7,700
    Likes Received:
    3,923
    Location:
    AudioSexPro
    i can see that. plugin development is quite hard, even when the understanding of the DSP and the maths is there.

    Hope someone picks your ideas up and turns them into a plugin.
     
  5. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    768
    Likes Received:
    275
    you are probably aware that the position-versus-filter question depends on the output target, i.e. the user normally needs to create a headphone mix and a speaker mix, each with different aural filters.

    one solution is to not include the "direction" part in the IRs or FIRs, but instead use some kind of intermediate immersive format, followed by possibly different filter sets. what is your approach to that?
     
  6. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    141
    Likes Received:
    250
    I don't plan on doing things like custom binaural rendering. I'm familiar with how to do this, it's just that I'm focusing on a more traditional style of reverb, recreating the sound of a room in a recording.

    99%+ of people don't have the required setup to accurately reproduce the sound field of a room reverb, which is either an HOA/WFS speaker setup or a calibrated binaural system with a custom SOFA file. There's no way for stereo speakers or standard headphones to accurately reproduce the direction of each reflection, which is an essential part of the experience. So I don't think it's worth doing a "virtual monitoring" grade room simulation, like what is used in acoustical research. This reverb is targeted at artists and sound engineers.

    However, one of the virtual microphone positions that I chose for rendering these IRs is an AB pair of omni mics separated by 19 cm, like average human ears. This allows for a realistic soundstage on headphones/IEMs (especially considering that they should be calibrated to a diffuse-field target curve in any case) that accurately reproduces the real experience of late reflections, but still sounds great on speakers, unlike what would happen with a binaural microphone.
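    To put rough numbers on that spacing, here's a back-of-the-envelope sketch (values assumed, nothing from the actual renderer):

        #include <cmath>
        #include <cstdio>

        // Path difference for a spaced omni pair: d * sin(theta),
        // with d the mic spacing and theta the source azimuth
        // (0 = straight ahead). Dividing by the speed of sound
        // gives the inter-channel delay.
        int main() {
            const double d  = 0.19;             // 19 cm spacing
            const double c  = 343.0;            // speed of sound, m/s
            const double pi = 3.14159265358979;
            for (double deg : {0.0, 30.0, 60.0, 90.0}) {
                double delayMs = d * std::sin(deg * pi / 180.0) / c * 1e3;
                std::printf("azimuth %4.0f deg -> %.3f ms\n", deg, delayMs);
            }
            return 0;
        }

    That tops out around 0.55 ms at 90 degrees, right in the ballpark of human interaural time differences.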

    Yep, what could be very interesting is Ambisonics, which would allow easy downmixing to Atmos, surround, binaural, etc. However, I'm simply focused on stereo right now, and if I find some devs interested in this, I will discuss potential immersive formats with them.
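    For reference, downmixing a first-order Ambisonics stream to stereo is just a pair of virtual mics; a textbook-shaped sketch (normalisation conventions like SN3D vs FuMa glossed over):

        #include <cmath>

        // Decode first-order B-format (W, X, Y) to stereo using two
        // virtual cardioids aimed at +/-45 degrees.
        void decodeToStereo(float W, float X, float Y,
                            float& left, float& right) {
            const float az = 45.0f * 3.14159265f / 180.0f;
            left  = 0.5f * (W + std::cos(az) * X + std::sin(az) * Y);
            right = 0.5f * (W + std::cos(az) * X - std::sin(az) * Y);
        }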
     
    • Interesting x 1
  7. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    768
    Likes Received:
    275
    okay, so you are not aware of that (maybe?) requirement. :)

    not ideal, but good enough.

    as you say, you can use A/B or dummy-head mics and have the directional filtering already included in the room response of an IR file - one reason why we use this tech. and you can do the same with ambisonics or VBAP, where you simply interpolate between a handful of positions.
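    in sketch form, that interpolation is nothing exotic - something like this (names invented, simplified to a two-position blend):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Blend two pre-rendered IRs to approximate a source between
        // the two measured positions. t = 0 gives irA's position,
        // t = 1 gives irB's.
        std::vector<float> interpolateIR(const std::vector<float>& irA,
                                         const std::vector<float>& irB,
                                         float t) {
            std::vector<float> out(std::min(irA.size(), irB.size()), 0.0f);
            for (std::size_t i = 0; i < out.size(); ++i)
                out[i] = (1.0f - t) * irA[i] + t * irB[i];
            return out;
        }

    a naive sample-wise blend like this can comb-filter where the early reflections of the two positions don't line up, which is why VBAP-style gains between the nearest pair of positions are the usual refinement.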

    no need to support such output formats in any way; it will all just happen inside the reverb algo.

    what some reverb designers have been doing is to include aural panning FIRs only for the frontal 60 degrees - that is an option in between these methods.

    however, the main component is probably how you use it in a mix across different virtual sources.
     
  8. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    141
    Likes Received:
    250
    ?

    Okay, I think I was a little confused by what you were saying previously. So yeah, my ray-tracing renderer takes care of "direction" for the early reflections. I have virtual mics with frequency-dependent polar-pattern filters. So it happens inside the algo, for now :wink:
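    As a toy illustration of such a pattern filter (the first-order pattern formula is standard; the frequency mapping and coefficients here are invented, not my actual ones):

        #include <algorithm>
        #include <cmath>

        // First-order mic pattern: gain = a + (1 - a) * cos(theta);
        // a = 1 is omni, a = 0.5 cardioid, a = 0 figure-8. Letting
        // 'a' depend on frequency mimics real capsules, which drift
        // toward omni at low frequencies.
        float virtualMicGain(float thetaRad, float freqHz) {
            // 0 at 200 Hz and below, 1 at 8 kHz and above
            float x = std::log10(std::max(freqHz, 200.0f) / 200.0f)
                    / std::log10(8000.0f / 200.0f);
            float a = 1.0f - 0.5f * std::clamp(x, 0.0f, 1.0f);  // omni -> cardioid
            return a + (1.0f - a) * std::cos(thetaRad);
        }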


    Anyway, as I saw a few responses mentioning Teldex, I'll look into it. I'm currently working on IRs of a famous concert hall and I'll share a demo in two weeks or so.
     
  9. Obineg

    Obineg Platinum Record

    Joined:
    Dec 7, 2020
    Messages:
    768
    Likes Received:
    275
    i was just thinking about one of the room situations our ears like the most: concert halls. there, the rear really does not matter; these rooms are basically "only stereo", aren't they?

    so if it is for "good sounding" and not for the recreation of every possible space, that is fine.
     
  10. SirGigantor

    SirGigantor Ultrasonic

    Joined:
    Oct 14, 2022
    Messages:
    126
    Likes Received:
    36
    What do you actually mean by developing?

    If you're writing it in native C++, you do have the option of writing modules in Synthedit. People crap on it a lot, but then they'll use JUCE and be just fine with it. The modules in Synthedit are C++, plus there's other GUI stuff.

    Even if you never actually release the Synthedit VST, you can still use it for marketing, i.e. designing a GUI so whoever you're selling it to can actually interact with it instead of just looking at a bunch of code.

    That's also why some people use Synthedit: they make sort of lesser-featured plugins, stuff that's easy if you know "the mathz", and link to them on their site to get traffic; then, sometimes, they get snagged by other devs.

     
  11. SirGigantor

    SirGigantor Ultrasonic

    Joined:
    Oct 14, 2022
    Messages:
    126
    Likes Received:
    36
    The more I look at what you've posted here, the more it seems like custom Synthedit modules make sense, because this seems like two separate processes:

    1: Intensification of a signal
    2: Degradation of a signal

    For the first one, you'd want to be isolating fundamentals and overtones. Said overtones, obviously, have their own overtones (i.e. any "sound", technically, contains all other pitches, albeit in lesser percentages), which degrade into some kind of noise - you said Gaussian earlier, the specifics of which I'm unaware of, but the general nature of which, from what I understand, is a sort of non-linear interaction.

    Keep in mind the overtone series - Schoenberg explains this. There's the fundamental, then the cycles which are repeated the most frequently, i.e. the most "consonant" sounds in a major chord, as well as those sounds which are the second-most "consonant", i.e. the overtone which appears first IN SERIES but becomes second-most, in terms of total repetitions, over time.

    What kind of mathematics are you actually using to approach this? It seems like you're looking to shift from Physics into Statistics... And those you'd have to interrelate with some kind of waveform analysis, since they're different.

    You might actually have to have consonance and dissonance settings or something like that...

     
  12. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    141
    Likes Received:
    250
    Little sneak peek :wink: (no reverb vs. reverb):

    https://pixeldrain.com/u/VWxAPA91

    The algorithms behind the ray-tracing and wave-based renderers are pretty much finished, and right now I'm creating a MIDI orchestra session to check that everything is working properly, which I'll use as a proof of concept later. On the mixer, you can see that every stage of the reverb is broken down into single plugins, but that's just for the development process. In the end, it will be a single plugin similar to Berlin Studio.
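    Roughly, the final single plugin collapses those stages into internal busses; a simplified sketch (naive convolution for clarity only, and the early/late split plus the gain names are placeholders, not the actual architecture):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Naive time-domain convolution - illustration only; a
        // shipping plugin would use partitioned FFT convolution.
        std::vector<float> convolve(const std::vector<float>& x,
                                    const std::vector<float>& h) {
            if (x.empty() || h.empty()) return {};
            std::vector<float> y(x.size() + h.size() - 1, 0.0f);
            for (std::size_t n = 0; n < x.size(); ++n)
                for (std::size_t k = 0; k < h.size(); ++k)
                    y[n + k] += x[n] * h[k];
            return y;
        }

        // Dry path plus early-reflection and late-tail convolutions,
        // each stage with its own gain.
        std::vector<float> renderRoom(const std::vector<float>& dry,
                                      const std::vector<float>& irEarly,
                                      const std::vector<float>& irLate,
                                      float gEarly, float gLate) {
            auto early = convolve(dry, irEarly);
            auto late  = convolve(dry, irLate);
            std::vector<float> out(
                std::max({dry.size(), early.size(), late.size()}), 0.0f);
            for (std::size_t i = 0; i < dry.size();   ++i) out[i] += dry[i];
            for (std::size_t i = 0; i < early.size(); ++i) out[i] += gEarly * early[i];
            for (std::size_t i = 0; i < late.size();  ++i) out[i] += gLate  * late[i];
            return out;
        }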
     
    Last edited: Jun 8, 2024
    • Like x 3
    • Interesting x 1
  13. Fowly

    Fowly Platinum Record

    Joined:
    Jan 7, 2017
    Messages:
    141
    Likes Received:
    250
    I finished my first set of IRs, based on Suntory Hall in Tokyo! Here's a little demo:

    https://pixeldrain.com/u/DbPBaic7

    Every instrument has been processed with my reverb, each one with a specific instrument dispersion profile. So I'm able to use very dry/anechoic libraries like Samplemodeling, Aaron Venture, Tokyo Scoring Strings, etc., but with a very realistic tone, just like libraries that have been recorded wet. Honestly, the level of expression that can be achieved thanks to this is kinda bonkers. And to my ears, all those different libraries blend very well in this demo. I'll be working on a video explaining the technology in more detail, which I'll release on YouTube.
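    In sketch form, the per-instrument chain is: dry sample -> dispersion filter -> hall IR (simplified, with the same naive convolution as the sketch a few posts up; the names are just placeholders):

        #include <cstddef>
        #include <vector>

        // Naive time-domain convolution, for illustration only.
        std::vector<float> convolve(const std::vector<float>& x,
                                    const std::vector<float>& h) {
            if (x.empty() || h.empty()) return {};
            std::vector<float> y(x.size() + h.size() - 1, 0.0f);
            for (std::size_t n = 0; n < x.size(); ++n)
                for (std::size_t k = 0; k < h.size(); ++k)
                    y[n + k] += x[n] * h[k];
            return y;
        }

        // Shape the dry instrument with its radiation/dispersion
        // filter first, then run it through the hall IR.
        std::vector<float> renderWet(const std::vector<float>& dry,
                                     const std::vector<float>& dispersionIR,
                                     const std::vector<float>& hallIR) {
            return convolve(convolve(dry, dispersionIR), hallIR);
        }

    And since convolution is associative, the dispersion filter and the hall IR could just as well be pre-baked into a single per-instrument IR.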
     
    Last edited: Sep 23, 2024
    • Like x 1
    • Love it! x 1