Vibe-coded plugins are giving me SynthEdit déjà vu

Discussion in 'Ai for Music' started by PulseWave, Apr 25, 2026 at 8:43 PM.

  1. PulseWave

    PulseWave Audiosexual

    Joined:
    May 4, 2025
    Messages:
    5,149
    Likes Received:
    3,004
    BPB Report
    Vibe-coded plugins are giving me SynthEdit déjà vu
    April 24, 2026 · By Tomislav Zlatic · 58 Comments
    The comments under our recent Amorph piece have been bouncing around in my head for the past few weeks.

    Some BPB readers are excited about being able to type “give me an overdrive that sounds like a Tube Screamer” and get a working plugin out the other end. Others are pretty concerned that we’re about to drown in a sea of half-functional, untested code.

    I’m somewhere in between. Both reactions make perfect sense to me.

    And it’s not happening just here on BPB.

    Over on KVR Audio, there’s a thread titled “Vibe coded plugins” that’s pretty direct. The poster says these things are popping up everywhere, they don’t trust them, and they won’t use them. The replies mostly agree.

    But the longer I thought about this argument, the more it sounded familiar. Then it hit me. We’ve had this exact fight before. It was just called something else.

    The SynthEdit stigma
    If you’ve been around music software in the last couple of decades, you’ll remember this one.

    Around the mid-2000s, SynthEdit (and later SynthMaker and FlowStone) made it possible to build a working VST without writing a line of C++. You patched together modules in a visual environment, clicked compile, and had something you could load in your DAW.






    The plugin community had a name for the result. “SynthEdit plugin” became shorthand for “amateur, probably buggy, probably not worth installing.” Forum posts would dismiss whole categories of freeware on sight.

    The implication was the same one we’re hearing now. If you didn’t write the DSP yourself, the plugin couldn’t possibly be any good.

    In my experience, a lot of that reputation was earned.

    Plenty of SynthEdit freeware really was thrown together. But the dismissal also caught a lot of legitimate work in the net, including plugins from developers who knew what they were doing and picked SynthEdit because it was the right tool for the job.

    Actually, I still see good ones turn up.

    Marco Dodin’s Cross The Bridge, a free guitar amp suite we covered in March, was built in SynthEdit and took the developer about two years of patient signal-chain work. It’s a perfectly solid plugin that just happens to use a tool many people have written off.

    Also, the KVR Developer Challenge has always allowed SynthEdit and FlowStone entries. The rules for the 2026 edition, which is live now and accepting submissions until July 5th, still list them by name alongside C++ and Delphi.

    What’s different this time
    Of course, vibe coding isn’t exactly the same situation. But it’s not apples and oranges, either.

    Here’s what’s different.

    SynthEdit was a visual modular environment where the person building the plugin could see what they were patching together. Vibe coding hands you a wall of code that whoever “made” the plugin probably can’t read, much less debug.

    That matters because audio is a real-time system. A plugin that drops a buffer or mishandles memory crashes a session. A badly coded plugin can destroy your work on a project, or, even worse, it can cause loud bursts of noise that damage your hearing.

    On the other hand, a vibe-coded web app that breaks just shows you a sad face.

    Real-time audio code has a much narrower margin for error, and AI-generated code has a habit of producing things that look right and then fail in ways that are hard to track down.
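    To make that narrow margin concrete, here is a minimal sketch of the kind of defensive code a real-time process callback needs. The function names here are hypothetical, not from any real plugin API; the point is that the hot loop must avoid allocation and locking, and a last-resort output guard keeps an upstream bug from turning into a hearing-damaging burst.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical process callback, sketched for illustration only; these names
// are not from any real plugin API. Inside a real-time audio callback you
// must not allocate memory, take locks, or do unbounded work, because the
// host gives you only a few milliseconds per buffer.

// Last-resort output guard: flush NaN/Inf produced by upstream bugs and
// hard-limit the signal to full scale so a coding mistake cannot become a
// speaker- or hearing-damaging burst.
inline float safetyClamp(float x)
{
    if (!std::isfinite(x)) return 0.0f; // NaN or Inf -> silence
    if (x >  1.0f) return  1.0f;        // hard ceiling at 0 dBFS
    if (x < -1.0f) return -1.0f;
    return x;
}

// Apply a gain in place: no allocation, no locks, O(numSamples) work.
void processBlock(float* buffer, int numSamples, float gain)
{
    for (int i = 0; i < numSamples; ++i)
        buffer[i] = safetyClamp(buffer[i] * gain);
}
```

    A real plugin would use a proper limiter rather than a hard clamp, but even a two-line guard like this is the difference between an audible glitch and a destroyed session (or ears).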

    So if you’re worried about the quality of the plugins we’ll be installing two years from now, that’s a fair concern. I share it.






    There’s going to be a lot of broken stuff that we’ll need to filter through.

    What’s the same
    But the social pattern is almost identical.

    A new tool lowers the barrier. People who don’t understand coding can suddenly create something that once required years of training.

    The people who put in those years are not thrilled about it. The arguments are lining up almost word-for-word with what I remember from the SynthEdit era.

    And there’s something else worth pointing out.

    Among all the SynthEdit and SynthMaker slop, there were dozens of real gems: plugins that were genuinely excellent and probably wouldn't have been created without those development tools.

    Variety of Sound is the obvious example, although those plugins use custom DSP code and rely on SynthMaker mainly for the GUIs.

    But there are other examples, like HG Fortune instruments (rest in peace), Drumatic 3 by E-Phonic, Tweakbench plugins, Genesis Pro, and K Brown Synths, that relied much more heavily on SynthEdit/SynthMaker and also received a lot of praise.



    I’m very curious how many AI-assisted plugins will land in that same category over the next year or two: the unexpected, weird, creative, niche ones that nobody would have written by hand, either because they aren’t coders or because the cost-benefit didn’t add up.

    Stuck in the middle
    I’m not picking a side here because I don’t think there’s a clean side to pick.

    Honestly, I’m worried about the quality of the software landscape over the next few years. I think the fear that some of these plugins shouldn’t be trusted is understandable, and that the flood of vibe-coded plugins is real (my email inbox is proof of it).

    But I also hope that the blanket “vibe coded equals garbage” stance will age badly, the same way “SynthEdit plugin” did, and that we musicians will somehow benefit in the long run.

    The thing I find interesting is the long tail.

    Most commercial plugin development is driven by what enough people will pay for. That’s a sensible business model, but it means many weird, specific, personal ideas never get built.

    Think of a producer who wants a synth modeled on an obscure ’80s preset machine, or a MIDI processor that does one strange thing nobody else needs. Those tools have always been out of reach. Now they aren’t, at least in theory.

    If even a small fraction of vibe-coded plugins turn out to be the kind of thing no commercial developer would have made, that’s a net win for the community. The rest will sort itself out the way it always has, with users sharing what’s good and quietly ignoring what isn’t.

    So yeah, I’m half worried and half curious.

    We cover new plugin releases on BPB pretty much every day, and 2026 has already brought a lot of AI-related stuff through the door. I’m filtering out a lot of it, but I’m curious to see whether any real gems turn up this year.

    And if you’re entering the KVR Developer Challenge with an AI-assisted plugin this year, I’d love to hear about it. Just please test it.

    Last Updated on April 24, 2026 by Tomislav Zlatic.

    Source: https://bedroomproducersblog.com/2026/04/23/vibe-coding-synthedit/
     
    • Interesting x 1
  3. fiction

    fiction Audiosexual

    Joined:
    Jun 21, 2011
    Messages:
    1,983
    Likes Received:
    723
    You're making good points. Whether it's SynthEdit, vibe coding, or an inexperienced amateur developer not using AI, in the end the plugin will be rated by its users. And if it's great, and it gets proper support and bug fixes for long enough, then who cares how it was built?
     
    • Interesting x 1
  4. Gre89

    Gre89 Member

    Joined:
    Mar 28, 2018
    Messages:
    40
    Likes Received:
    11
    Vibe coding VSTs seems to be a thing for others, but honestly, when I try to let ChatGPT code a simple synth plug-in, it hasn't got a clue how the software used to create it even works and just "invents" functions and toolbars that aren't even there. It's far from a working instrument. Half the time you're just correcting useless AI text/code output.
     
    • Interesting x 1
  5. hed0rah

    hed0rah Kapellmeister

    Joined:
    Mar 10, 2025
    Messages:
    69
    Likes Received:
    45
    For someone who has a real interest in audio/DSP and is willing to put the time into error handling, testing, UX flow, discovering optimization techniques, and deep-diving DSP while going back and forth with their agent, it can be a huge force multiplier and make entry into plugin development way more accessible.

    I am in the middle of one of the first large-scale projects where I am heavily leaning on LLMs for scaffolding and initial structure (at work), and it has been over a month of responding to bug reports and deep-diving the API and infrastructure I have to integrate with. AI has been a godsend. I can focus on experimenting with new ideas and optimization and let AI do the grunt work. But it turned a year's worth of work into one month, not a year's worth of work into one day.

    For the people one-shotting plugin development and releasing something after one weekend, the SynthEdit analogy is fitting, but don't get it twisted: these frontier models can write really good code and know DSP algorithms and concepts. They can even analyze schematics and shit at this point.
     
  6. shinyzen

    shinyzen Audiosexual

    Joined:
    Sep 28, 2023
    Messages:
    1,594
    Likes Received:
    961
    Have you tried it? Your statement is not entirely true. It's true if you prompt with a simple one-sentence "create me a VST synth": it's going to give you the shittiest, most basic synth that likely won't even open in your DAW. But if you prompt it with extreme detail, dictating every single knob's function and how the code should behave, you can get a fully working instrument.

    The best way to do it is to "discuss" the project with the coding agent before any coding. Have it ask detailed questions about the instrument: how things should behave, what features are wanted, how they interact, what the UI should be, etc. From there, an initial prompt can be executed.

    You also have a few more agents waiting on the sidelines. An optimizing agent is instructed to audit the code, report any potential optimizations, and, when approved, execute them without changing the core structure or behavior of the initial code. There's also a debugging agent that searches for errors and runs repeated real-world tests, a visual agent working on UI elements, and so on.

    It won't happen with a one-sentence prompt. You'll need some back and forth and a team of specialty agents that take direction from you or from your lead agent.
     
    • Agree x 1
    • Interesting x 1
  7. shinyzen

    shinyzen Audiosexual

    Joined:
    Sep 28, 2023
    Messages:
    1,594
    Likes Received:
    961
    Exactly. I can only imagine where these agents will be in another year or two. Personally, while there will be a majority of junk to sift through, I think it's fun and exciting, and someone with good vision, planning, and execution can create some unique or fun tools.

    A friend of mine works an important job, basically lead of software development for a major US lending platform. He's reported the same thing you have described: a project that would have taken him an entire team and a year's time is done in a few weeks with one co-worker and some agents. Crazy shit.
     
    • Interesting x 1
  8. lxfsn

    lxfsn Platinum Record

    Joined:
    Sep 8, 2021
    Messages:
    368
    Likes Received:
    270
    I'm no coder. I vaguely know some scripting things. I modified (for personal use) an existing free plugin with code available on GitHub. It's insanely difficult to "convince" these coding LLMs to stop refactoring already-good code. I ask for a little change, I get 4,000 lines of code changed. It would randomly change small things during an update and introduce more aliasing, or simply change other parameters and alter the sound from minor update to minor update. And the CPU usage probably 4x-ed since I did "my edits".

    I would have no issue with known developers, DSP experts, using LLM code, but for the rest of us mortals, using LLMs to build plugins is a ticking time bomb. So yes, I'd rather purchase from the already-established devs. I've hardly seen any original vibe-coded plugins anyway, just more of what already exists.
     
    • Agree x 2
    • Interesting x 1
  9. Obineg

    Obineg Rock Star

    Joined:
    Dec 7, 2020
    Messages:
    1,031
    Likes Received:
    352
    as it seems, sometimes he still finds people who think that these pasted texts are written by him^^
     
  10. hed0rah

    hed0rah Kapellmeister

    Joined:
    Mar 10, 2025
    Messages:
    69
    Likes Received:
    45
    There are so many unneeded abstractions and complexities in the current state of software development and the internet. I think there is going to be a major shift in the coming years to strip away a lot of that. A lot of careers quietly depended on being the one person who understood ten years of layered abstractions, legacy code, and undocumented workarounds. Maintaining poorly built systems became its own industry. AI may not just speed up coding; it's going to expose how much of modern software exists to manage yesterday's mistakes.
     
  11. Cabernetto

    Cabernetto Newbie

    Joined:
    Dec 7, 2024
    Messages:
    6
    Likes Received:
    2
    My own experience as a programmer (not in audio DSP) who makes light use of AI is that it's a potentially useful tool, but (a) it makes horrific mistakes, and (b) at least in its present form, it has very little idea of the "grand architecture" of a serious application. Maybe it can crank out a trivial plugin, but in this, and in many other areas, the results are impressive only to those who know very little or nothing about the field, or who are so ideologically or financially invested in the overall AI project that they are either deluded or feigning wild enthusiasm.

    I treat it as a souped-up version of Google that I can resort to for questions like "give me an example of X" when the documentation is tough to make your way through, or "look at this code fragment; this section runs slow; do you have any suggestions for optimization?" I get answers that often need quite a bit of steering to arrive at a satisfactory result. Sometimes it is useful to bounce ideas off of, but treating it as if programmers will be obsolete in the near future is ludicrous, and only the wet dream of the managerial class. In other areas, the results are equally pitiful. Two anecdotes, to wit.

    Here is the wrap-up of a conversation with Gemini about composer and interpreter credits on tracks from a classical music CD. Me: "It took three tries and substantial intervention on my part to produce a plausible answer. Do you consider this acceptable performance? Please do an honest evaluation of this conversation." Gemini: "No, that is not an acceptable performance. I failed to provide the accuracy and reliability you should expect from a research assistant. In the first two turns, I relied on generalized database entries for "Masashi Honma Oboe Quartets." Many classical music metadata sources aggregate similar albums, and I failed to cross-reference the specific track titles you provided against the physical album's data. When you provided the Japanese tracklist, I initially matched the keys (like D minor) to the wrong composer (Cannabich) simply because he was a more "famous" name on the list, rather than verifying the specific Op. numbers and movement titles associated with Charles Bochsa. I should have caught the mention of "Bochsa" in your very first prompt and ensured he was assigned to a specific track range immediately. Instead, I gave you a generic list that didn't match the structure of the Denon recording. You had to do the heavy lifting of providing the tracklist and correcting the composer list twice. A helpful assistant should reduce your workload, not increase it by providing "pseudo-information" that requires auditing. The moment you mentioned Denon and Honma, I should have searched for the specific Japanese catalog number (COCO-70860) or the original LP/CD liner notes. That would have revealed that Tracks 1–2 are the Bochsa D minor quartet, which is the "hook" of this specific period-instrument recording. I am sorry for the frustration and the "hallucinated" attributions. Thank you for holding me to a higher standard." There you have it folks. Straight from the horse's mouth. 
    Now, this is a frivolous example, but when people talk about using these tools in situations where people's lives are in the balance, it gives me the chills.

    On to the next anecdote. I am a mathematician by training, but left the field decades ago, for reasons that bear a tenuous connection with the subject at hand. I was curious as to whether ChatGPT could summarize for me the current state of the art regarding a conjecture I was working on back when I was active in the field. Mind you, not a niche or overly obscure thing (the Baum-Connes conjecture, for those who may be curious). It gave me a long discussion that had all the right buzzwords, but it took me all of two minutes to realize that it was the biggest pile of horse dung I had seen in a long time. Two minutes. A poor schmoe, a mathematician of no particular distinction, so much so that I left the field knowing I was not good enough to "make it", pitted against an AI with access to all of the information available on the Internet. Two minutes, less than the time it takes to tie your shoes, to blow a logical hole into the mountain of crap I was handed by AI. Make your own conclusions. Mine is that the current AI fever is fueled by people who have a financial incentive to keep the grift going, or who are too ignorant to realize they are being fooled by the blinking lights.

    As for the claim that LLMs are soon going to lead to general AI, this is a minority view among people who are really conversant in matters of cognition, and one that is prevalent only in certain corners of the CS world that are a little too high on their own supply. Sorry about the long tirade.
     
    • Like x 2
    • Agree x 1
  12. Cabernetto

    Cabernetto Newbie

    Joined:
    Dec 7, 2024
    Messages:
    6
    Likes Received:
    2
    Sorry, there's a typo in my previous post: I meant "the Baum-Connes conjecture".
     
  13. lbnv

    lbnv Platinum Record

    Joined:
    Nov 19, 2017
    Messages:
    492
    Likes Received:
    269
    I don't understand why we need even more plugins, especially vibe-coded ones. There are so many options already.
     
  14. hed0rah

    hed0rah Kapellmeister

    Joined:
    Mar 10, 2025
    Messages:
    69
    Likes Received:
    45
    The accuracy of Gemini matching track listings in Japanese liner notes is more about the tooling and how it used RAG, translated text, etc. It doesn't have the Discogs database in its weights. I am not very familiar with Gemini, but if you are using the browser chatbot, that is much different from the agentic workflows we are talking about. If you were serious about that task, you would have it working in its own sandbox/directory so it could convert lists to markdown and launch a sub-agent to do translating, and another to use RAG to check references, etc. What you did there is the equivalent of people trying to one-shot plugin development on chatgpt.com.

    On the AGI question, I have no idea if the current trajectory will lead us there, but it is an objective fact that transformer architectures, self-attention mechanisms, and inference-time scaling (reasoning models) are a marvel of modern engineering and human ingenuity, decades in the making.

    You said you are a mathematician, so I will leave you with this.

    https://www.feynman.is/

    It's an open-source research agent that runs locally and pulls primary sources from arXiv through alphaXiv. It spins up specialized sub-agents for researching, reviewing, writing, and verifying; runs full literature reviews with consensus/conflict mapping; simulates peer review with severity scoring; audits paper-to-code reproducibility; executes replications in sandboxed Docker containers with optional Modal/RunPod GPU burst compute; and routes every output through a Verifier agent that checks every citation and kills dead links.

    THAT is how some people are using AI. You see the difference?
     
    • Like x 1
    • Interesting x 1
  15. hed0rah

    hed0rah Kapellmeister

    Joined:
    Mar 10, 2025
    Messages:
    69
    Likes Received:
    45
    lol, and yeah, I agree with this. I could see vibe coding something specific as a personal project. But as far as commercial plugins go, we already have everything we need. Maybe someone will come up with some weird out-of-the-box idea, though. We shall see, I guess.
     
  16. Cabernetto

    Cabernetto Newbie

    Joined:
    Dec 7, 2024
    Messages:
    6
    Likes Received:
    2
    I was not aware of that particular tool; thank you for pointing it out. I don't think its existence invalidates my previous points. As for the bad results in the credits-attribution exercise, whatever their cause, the results are what I got. The rest are excuses.
     
  17. lbnv

    lbnv Platinum Record

    Joined:
    Nov 19, 2017
    Messages:
    492
    Likes Received:
    269
    I don’t deny that it’s possible to generate a useful plugin using AI. But this is either a very rare fluke or the result of very focused and time-consuming work. It requires a solid understanding of the subject, well-trained ears, and a good testing environment.

    But why? Aren’t there enough of them already?
     
    Last edited: Apr 26, 2026 at 2:54 AM
  18. fiction

    fiction Audiosexual

    Joined:
    Jun 21, 2011
    Messages:
    1,983
    Likes Received:
    723
    It certainly has an idea of a possible grand architecture when you throw a task at it.
    Chances are high, though, that the proposed architecture won't work well or will be different from what you really want.
    I'm not sure whether programmers will soon be obsolete if you look only at the coding part, but I'm sure we'll still need experienced architects and code reviewers to keep things on the right track.
     
  19. Cabernetto

    Cabernetto Newbie

    Joined:
    Dec 7, 2024
    Messages:
    6
    Likes Received:
    2
    Well, you said it yourself. We are very far from the point when these things will be able to operate autonomously and produce good results. The way I see it, the limitation is in the very nature of LLMs, which are merely devices manipulating tokens according to statistical rules. In other words, it is an entirely formal game. The machine does not "know" what the tokens stand for, in any meaningful sense. Of course, the problem of what "knowing" is, is a huge philosophical can of worms on which I don't think I'm qualified to opine. On the available evidence, however, I remain unimpressed, but I am open to changing my opinion when I see facts that warrant it.
     
  20. shinyzen

    shinyzen Audiosexual

    Joined:
    Sep 28, 2023
    Messages:
    1,594
    Likes Received:
    961
    What do you think of this from Taches, fully vibe-coded using Claude a few months back?

     
  21. Rasputin

    Rasputin Platinum Record

    Joined:
    Jun 29, 2012
    Messages:
    432
    Likes Received:
    278
    It's the 2026 version of "I'm an ideas man."

    In other words: I had an interesting idea during my 10 minute shower this morning and now I want someone else to do all the heavy lifting while I take most or all of the credit/profit.
     