Multiple questions about building a modular ecosystem of components using JUCE as a backend

Hi all,

I just finished a PhD in machine learning, and in the long run I want to use this knowledge to build innovative audio tools that use AI creatively. In the meantime, I’m starting a long period where I’ll have a lot of free time, and I want to use it to become more proficient in building audio-related software with modern tools and best practices. For the past few years I’ve been mostly doing Python / PyTorch and math work, but I do have a decent background in modern C++, which I tutored at uni. Over a year ago I started a small JUCE project in my spare time to learn the rudiments of the framework (GUI, managing state with ValueTrees, OpenGL, etc.), but I got bored pretty quickly because it was too much of a “toy” project with no clear objective.

From now on I want to build something that will continuously grow over time, and which in the long run ends up involving most aspects of complex audio apps, mostly for the sake of learning but also to have something I can showcase. My main inspiration is the Kilohearts ecosystem, where a “Host” plugin (like Phaseplant or Multipass) is a blank slate with panels where you can add components from a collection of audio effects / oscillators / modulators (LFOs, etc.), and where almost any signal can be used to modulate any parameter. Obviously I’m not aiming to recreate a full-fledged product like this, but I’m taking it as a model.

My first question: is JUCE itself a sufficient starting point to build something like this, or should I start from something that builds on it, like Tracktion? In practice I’m absolutely not trying to build a DAW, and my plugin is not a time-based app, so I feel like that would be overkill. Still, I remember some pretty useful utility tools like the ValueTreeObjectList or the Tracktion Graph, which seem to fit the idea of a “dynamic” plugin where inner components can be added / removed by the user. On the other hand, I also see an AudioProcessorGraph class in JUCE itself, which seems to do the job.
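To make that concrete, here is roughly the kind of dynamic setup I have in mind with AudioProcessorGraph (just a minimal sketch on my part; MyEffectProcessor is a placeholder for whatever component the user adds, not an existing class):

```cpp
#include <juce_audio_processors/juce_audio_processors.h>

// Sketch: a dynamically patchable graph built on juce::AudioProcessorGraph.
// MyEffectProcessor stands in for any user-added component (an AudioProcessor subclass).
void buildGraph (juce::AudioProcessorGraph& graph, double sampleRate, int blockSize)
{
    graph.setPlayConfigDetails (2, 2, sampleRate, blockSize);
    graph.prepareToPlay (sampleRate, blockSize);

    using IOProc = juce::AudioProcessorGraph::AudioGraphIOProcessor;
    auto input  = graph.addNode (std::make_unique<IOProc> (IOProc::audioInputNode));
    auto output = graph.addNode (std::make_unique<IOProc> (IOProc::audioOutputNode));
    auto effect = graph.addNode (std::make_unique<MyEffectProcessor>());

    // Patch input -> effect -> output on both channels. Removing a component later
    // is just graph.removeNode (effect->nodeID) followed by re-patching.
    for (int ch = 0; ch < 2; ++ch)
    {
        graph.addConnection ({ { input->nodeID,  ch }, { effect->nodeID, ch } });
        graph.addConnection ({ { effect->nodeID, ch }, { output->nodeID, ch } });
    }
}
```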

Another thing I want to plan in advance is very solid foundations for the modulation system. I’ve been producing on Bitwig for a while and noticed that most modulators like LFOs can actually go all the way up to audio rates while still being able to modulate virtually anything else (you can even use the LFO to modulate the DC Offset device and you’d have an oscillator). I really don’t see this often in traditional plugins (in that respect even Kilohearts’ modulators are not audio-rate). So my second question is: would that kind of modulation capability be possible with the traditional way of handling parameters in JUCE? The main reason I’m interested in this is that I see it as a great practical way to directly expose myself to the difficulties of real-time code as well as to handling signals in a “generic” way (i.e. parameter changes should be handled correctly by the audio processor whether they originate from user input on the GUI thread or from an LFO running at audio rate). I’ve been consistently blown away by Bitwig in that respect.
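To sketch what I mean by “generic”, the rough shape I imagine for a modulatable parameter is something like this (just one possible approach I came up with, not anything JUCE prescribes; the names are placeholders):

```cpp
#include <atomic>
#include <juce_audio_basics/juce_audio_basics.h>

// Sketch: combine a GUI-controlled base value with an audio-rate modulation signal.
// The message thread only writes the atomic; the audio thread reads it, smooths it,
// and adds the per-sample modulation buffer (e.g. an LFO's output).
struct ModulatableParameter
{
    std::atomic<float> baseValue { 0.0f };      // written from the message thread
    juce::SmoothedValue<float> smoothedBase;    // de-zippers base value jumps

    void prepare (double sampleRate)  { smoothedBase.reset (sampleRate, 0.02); }

    void render (float* dest, const float* modBuffer, int numSamples)
    {
        smoothedBase.setTargetValue (baseValue.load (std::memory_order_relaxed));

        for (int i = 0; i < numSamples; ++i)
            dest[i] = smoothedBase.getNextValue() + modBuffer[i];
    }
};
```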

My final questions relate to building some part of the GUI in a foreign language (suppose I want a crazy-looking animation for some component which I can’t do in JUCE). Here, I’m a bit lost. From what I’ve seen, JUCE handles OpenGL, so one option would be to bind (using FFI) to a GUI framework that draws into an OpenGL context associated with the JUCE component for that specific part of the GUI. Would that be a correct approach? If so, I am a bit confused by some other approaches which rely on web tech (I really don’t know much about the web) and by the idea of native vs. browser-based GUIs. For example, Output Inc. gave a talk at ADC this year on how they use React + PixiJS (WebGL) to build the GUIs of their apps, and they seem to just run everything in JUCE’s WebBrowserComponent. Is this a simpler approach? How do the two compare performance-wise? The reason I’m asking is that I’d like to explore quite early on the possible ways to build components that can be put on the web while also being easily bound to JUCE. Basically, if I do end up developing some innovative audio tool, I’m pretty sure I’ll want to create a nice GUI for it, then both put it on my personal site as a showcase and make it available as a VST3 plugin and in my custom ecosystem.
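For the OpenGL route, my current understanding is that giving one component its own GL context looks roughly like this (a minimal sketch based on the OpenGLRenderer callbacks; the actual drawing or FFI calls would go in renderOpenGL):

```cpp
#include <juce_opengl/juce_opengl.h>

// Sketch: a JUCE component that owns its own OpenGL context, so hand-written GL
// (or a foreign renderer driven over FFI) can draw into just this part of the UI.
class CustomGLComponent : public juce::Component,
                          private juce::OpenGLRenderer
{
public:
    CustomGLComponent()
    {
        glContext.setRenderer (this);
        glContext.setContinuousRepainting (true);
        glContext.attachTo (*this);
    }

    ~CustomGLComponent() override { glContext.detach(); }

private:
    void newOpenGLContextCreated() override {}   // allocate GL resources (GL thread)
    void renderOpenGL() override {}              // issue GL calls here every frame
    void openGLContextClosing() override {}      // release GL resources

    juce::OpenGLContext glContext;
};
```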

Thanks a lot in advance!

Mathis.

Yeah, it is feasible to perform all modulation at audio rate these days, and it can actually be cleaner and simpler to implement than the various schemes that attempt to ‘save’ CPU via complex ‘control rate’ based systems, because you inevitably need to upsample the control rate to audio rate anyhow to smooth the signal.
One thing that can make audio-rate modulation more efficient is the use of ‘silence flags’ (VST3) or ‘streaming flags’ (SynthEdit). The idea is that when a signal is steady and not changing, you can dynamically switch to an optimized code path.
For example, when you are modulating a filter with an envelope generator, the calculation of the coefficients is quite heavy. But if you can recognize that the modulator is ‘static’ (not changing) during the ‘sustain’ portion of the envelope, then the filter can skip the overhead of recalculating the coefficients.
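In code, the idea is roughly this (a toy one-pole filter as a stand-in, not real product code; the point is the early-out when the control signal hasn’t moved):

```cpp
#include <cmath>

// Sketch of the "skip work while the modulator is static" idea: the expensive
// coefficient update only runs when the control signal actually changes.
struct ModulatedOnePole
{
    float lastCutoff = -1.0f, coeff = 0.0f, state = 0.0f, sampleRate = 44100.0f;

    void updateCoefficient (float cutoffHz)
    {
        // The heavy part: transcendental maths per coefficient update.
        coeff = 1.0f - std::exp (-2.0f * 3.14159265f * cutoffHz / sampleRate);
    }

    void processBlock (float* audio, const float* cutoffMod, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            if (cutoffMod[i] != lastCutoff)       // modulator moved: recompute
            {
                lastCutoff = cutoffMod[i];
                updateCoefficient (lastCutoff);
            }

            state += coeff * (audio[i] - state);  // cheap per-sample filter tick
            audio[i] = state;
        }
    }
};
```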

(the white graph shows CPU consumption increasing during the envelope’s ‘attack’ and reducing during the ‘sustain’)

[Image: CPU Saving]


Ok, good to know then, thanks a ton!


Bitwig is in fact not modulating at audio-rate speeds. I mean, pretty close, but not exactly audio rate. For example, here’s a DC Offset module modulated by an LFO running at a midrange-y frequency (around 520 Hz), and you can see there are harmonics, with the highest non-fundamental harmonic at around -28 dB. So not incredibly artefacty, but also not totally clean. Apparently it is still best to add a few performance hacks if you want to make a big modulation system.

Interesting! I just tested it, though: if you take the “Audio Rate Modulator” and make it listen to the output of a Sine Osc, then you don’t have the harmonics anymore and you can still map it to anything. Or you can take the Wavetable LFO and select the sine and, again, no harmonics (at least down to -120 dB). So I guess the functionality is really there in the system, but maybe they optimize on a device-by-device basis! Maybe it also depends on the destination (e.g. a filter cutoff is not truly modulated at audio rate, but a DC offset is, etc.).


I am also building an audio environment which is emphatically not a DAW. I started out with Tracktion but ended up ditching it, because Tracktion is basically a disassembled DAW, and with my particular needs I had to fight it every step of the way. So far I’ve found that there’s a JUCE class for everything audio-related that I need. Implementing it correctly might take a long time, but in the end it works.

I have no way of knowing whether this will be the same for you, though; perhaps your system is more DAW-like than mine, and Tracktion would prove useful in some way.

Good luck and keep us updated with your progress.

Hi Liam,

Yeah, that’s kind of the fear I had with Tracktion, and what you describe makes it pretty clear that I won’t need it at first, so that’s really helpful.

Thanks a ton, I’ll try to keep you updated with the progress!

Mathis.