Hi all,
I just finished a PhD in machine learning, and in the long run I want to use this knowledge to build innovative audio tools that use AI creatively. In the meantime, I’m starting a long period where I’ll have a lot of free time, and I want to use it to become more proficient at building audio-related software with modern tools and best practices. For the past few years I’ve mostly been doing Python / PyTorch and maths, but I do have a decent background in modern C++, which I used to tutor at uni. Over a year ago I started a small JUCE project in my spare time to learn the rudiments of the framework (GUI, managing state with ValueTrees, OpenGL, etc.), but I got bored pretty quickly because it was too much of a “toy” project with no clear objective.
From now on I want to build something that will grow continuously over time and, in the long run, end up involving most aspects of complex audio apps, mostly for the sake of learning but also to have something I can showcase. My main inspiration is the Kilohearts ecosystem, where a “Host” plugin (like Phase Plant or Multipass) is a blank slate with panels to which you can add components from a collection of audio effects / oscillators / modulators (LFOs, etc.), and where almost any signal can be used to modulate any parameter. Obviously I’m not aiming to recreate a full-fledged product like this, but I’m taking it as a model.
My first question: is JUCE itself a sufficient starting point for building something like this, or should I start from something that builds on it, like the Tracktion Engine? In practice I’m absolutely not trying to build a DAW, and my plugin is not a time-based app, so I feel like that’d be overkill. Still, I remember some pretty useful utility tools like the ValueTreeObjectList or the Tracktion Graph, which seem to fit the idea of a “dynamic” plugin whose inner components can be added / removed by the user. On the other hand, I also see an “AudioProcessorGraph” class in JUCE itself, which seems to do the job, as in the sketch below.
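To make the “dynamic graph” idea concrete, here’s roughly what I picture doing with AudioProcessorGraph (a minimal sketch; `MyFilterProcessor` stands in for any user-added component and is not a real JUCE class):

```cpp
#include <JuceHeader.h> // assuming a standard Projucer / CMake JUCE setup

// MyFilterProcessor is a hypothetical juce::AudioProcessor subclass
// representing whatever component the user just dropped into a panel.
void demoDynamicGraph (juce::AudioProcessorGraph& graph)
{
    graph.setPlayConfigDetails (2, 2, 44100.0, 512);
    graph.prepareToPlay (44100.0, 512);

    using IONode = juce::AudioProcessorGraph::AudioGraphIOProcessor;
    auto input  = graph.addNode (std::make_unique<IONode> (IONode::audioInputNode));
    auto output = graph.addNode (std::make_unique<IONode> (IONode::audioOutputNode));

    // The user adds a new component:
    auto filter = graph.addNode (std::make_unique<MyFilterProcessor>());

    // Wire input -> filter -> output on both stereo channels:
    for (int ch = 0; ch < 2; ++ch)
    {
        graph.addConnection ({ { input->nodeID,  ch }, { filter->nodeID, ch } });
        graph.addConnection ({ { filter->nodeID, ch }, { output->nodeID, ch } });
    }

    // ...and later removes it again; the graph drops its connections for us:
    graph.removeNode (filter->nodeID);
}
```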
Another thing I want to plan in advance is a very solid foundation for the modulation system. I’ve been producing in Bitwig for a while and noticed that most modulators, like LFOs, can actually go all the way up to audio rates while still being able to modulate virtually anything else (you can even use an LFO to modulate the DC Offset device and you’d have an oscillator). I really don’t see this often in traditional plugins (in that respect, even Kilohearts’ modulators are not audio-rate). So my second question is: would that kind of modulation capability be possible with the traditional way of handling parameters in JUCE? The main reason I’m interested in this is that I see it as a great practical way to expose myself directly to the difficulties of real-time code, as well as to handling signals in a “generic” way (i.e. parameter changes should be handled correctly by the audio processor whether they originate from user input on the GUI thread or from an LFO running at audio rate). I’ve been consistently blown away by Bitwig in that respect.
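To illustrate the direction I’m considering: per-sample parameter values rendered from a base value plus an audio-rate modulator, rather than the one value per block you’d read from an AudioProcessorValueTreeState atomic. This is only a sketch of the concept; `ModSource` and `ModulatedParam` are made-up names, not JUCE classes:

```cpp
#include <algorithm>
#include <atomic>

// Hypothetical interface: each modulator renders its output at audio rate.
struct ModSource
{
    virtual ~ModSource() = default;
    virtual void render (float* dest, int numSamples) = 0;
};

// Hypothetical modulated parameter: the GUI / host writes the base value,
// the audio thread renders one effective value per sample.
struct ModulatedParam
{
    std::atomic<float> baseValue { 0.0f }; // written from the message thread
    ModSource* source = nullptr;           // optional audio-rate modulator
    float depth = 0.0f;

    // Called once per block on the audio thread; fills 'dest' with the
    // effective per-sample parameter value. 'scratch' must be preallocated
    // (at least numSamples long), so nothing allocates on the audio thread.
    void renderValues (float* dest, float* scratch, int numSamples)
    {
        const float base = baseValue.load (std::memory_order_relaxed);
        std::fill (dest, dest + numSamples, base);

        if (source != nullptr)
        {
            source->render (scratch, numSamples);
            for (int i = 0; i < numSamples; ++i)
                dest[i] += depth * scratch[i];
        }
    }
};
```

With something like this, a DC-offset device whose offset parameter is driven by an audio-rate LFO really would become an oscillator, which is exactly the Bitwig behaviour I’d like to reproduce.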
My final questions relate to building some part of the GUI in a foreign language (suppose I want a crazy-looking animation for some component which I can’t do in JUCE). Here, I’m a bit lost. From what I’ve seen, JUCE handles OpenGL, so one option would be to bind (via FFI) to a GUI framework that draws into an OpenGL context attached to the JUCE component for that specific part of the GUI, roughly as in the sketch below. Would that be a correct approach? If so, I’m a bit confused by other approaches which rely on web tech (I really don’t know much about the web) and by the idea of native vs. browser-based GUIs. For example, Output Inc. gave a talk at ADC this year on how they use React + PixiJS (WebGL) to build the GUIs of their apps, and they seem to just run everything in the WebBrowserComponent from JUCE. Is this a simpler approach? How do the two compare performance-wise? The reason I’m asking is that I’d like to explore quite early on the possible ways to build components that can both be put on the web and be easily bound to JUCE. Basically, if I do end up developing some innovative audio tool, I’m pretty sure I’ll want to create a nice GUI for it, then both put it on my personal site as a showcase and make it available as a VST3 plugin and in my custom ecosystem.
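For the first option, here’s a minimal sketch of what I have in mind: attaching a juce::OpenGLContext to a single component and calling into the external renderer from the GL callbacks. The `foreignLib*` calls are placeholders for whatever FFI entry points the external framework would expose:

```cpp
#include <JuceHeader.h> // assuming the juce_opengl module is included

class CrazyAnimationComponent : public juce::Component,
                                private juce::OpenGLRenderer
{
public:
    CrazyAnimationComponent()
    {
        context.setRenderer (this);
        context.attachTo (*this);            // this component gets its own GL surface
        context.setContinuousRepainting (true);
    }

    ~CrazyAnimationComponent() override
    {
        context.detach();                    // must detach before destruction
    }

private:
    // These run on the GL render thread; the foreign framework would draw
    // into the current context here (placeholder calls, not real functions).
    void newOpenGLContextCreated() override { /* foreignLibInit(); */ }
    void renderOpenGL() override            { /* foreignLibRender(); */ }
    void openGLContextClosing() override    { /* foreignLibShutdown(); */ }

    juce::OpenGLContext context;
};
```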
Thanks a lot in advance!
Mathis.