Another Audio Graph Discussion

Hey All,

Currently biting the bullet and customizing a version of the JUCE AudioProcessorGraph to suit my needs.

The biggest pain points for me are:

A) On creation of new connections, the full rendering sequence is built from scratch
B) In a very large graph, the async updating freezes the GUI as the new rendering sequence is generated.

What I think would be most ideal is to find a way to change the rendering ops at run time without fully reconstructing them, or, in the less optimal case, to rebuild the rendering ops on a separate background thread instead of on the GUI thread.

Wondering if anyone has gone down this route? I think the current graph has a much more flexible design than what I need for an audio plugin. Open to any suggestions or tips!

How many nodes are you creating? If it takes that long, I guess it’s a lot, and then maybe you should consider making a simpler graph. I use a patched version of AudioProcessorGraph in some of my projects to let the user make connections between modules. I split the entire processing into large “modules”, so only the connections that can change are handled by AudioProcessorGraph - everything else is hardcoded. From your post I get the impression you are using it like you would build a Reaktor or Max/MSP patch, and in my opinion it is not suited for that.
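For illustration, a hardcoded “module” in that sense could just be a plain class with a fixed internal chain, taking modulation as a parameter instead of as a graph input. This is a minimal sketch assuming JUCE’s dsp module; all names here are hypothetical:

```cpp
#include <cmath>
#include <juce_dsp/juce_dsp.h>

// Minimal sketch of a hardcoded "module": the internals are plain member
// objects called in a fixed order, rather than separate graph nodes, and
// modulation arrives as a function parameter instead of a graph connection.
struct FilterModule
{
    void prepare (double sampleRate, int blockSize, int numChannels)
    {
        filter.prepare ({ sampleRate,
                          (juce::uint32) blockSize,
                          (juce::uint32) numChannels });
    }

    void process (juce::AudioBuffer<float>& buffer, float cutoffModSemitones)
    {
        // No connection lookups here - just a direct, fixed call sequence.
        filter.setCutoffFrequency (800.0f * std::pow (2.0f, cutoffModSemitones / 12.0f));
        juce::dsp::AudioBlock<float> block (buffer);
        juce::dsp::ProcessContextReplacing<float> context (block);
        filter.process (context);
    }

    juce::dsp::StateVariableTPTFilter<float> filter;
};
```

The graph then only contains one node per module, and only the module-to-module connections stay dynamic.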

Edit for the original question: I estimate there are around 60 nodes, but each with many, many inputs from a complex modulation system.

Hey, thanks for the input. That’s an interesting idea, but I’m not sure how it would work in practice. I’m just using it to generate standard synthesizer / effects layouts. For example: perhaps there is an oscillator node, an LFO node, a filter node, and then a group of nodes which are dynamically routed to create an FX slot. These nodes all have inputs for audio as well as modulation.

Indeed, all of the connections are built via the dynamic connections system. I’m not quite sure what hardcoding them would look like: on each audio processor, some inputs would need to be fixed/hardcoded and some dynamic, to handle the modulation routing.

This is probably the most standard approach to audio graphs? Or maybe I’ve built something crazy :sweat_smile:

In my opinion, 60 nodes should be fine. Are you sure the graph is only recreated once when you change a preset? I saw similar issues when I had many listeners in place that would rebuild the graph whenever the user changed modulation. On a preset switch they used to fire all at the same time, leading to many graph rebuilds in a row. To fix this, I had to add a mechanism that pauses graph updates until restoring a preset is complete.
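In case it helps, that pausing mechanism can be as simple as a RAII guard around the preset restore. This is a sketch against a hypothetical patched graph - PatchedAudioProcessorGraph, setRebuildOnChange() and rebuildNow() are made-up names standing in for whatever your patch actually exposes:

```cpp
// Sketch: suspend rebuilds while many listeners fire, then rebuild once.
// All names below are hypothetical stand-ins for the patched graph's API.
class ScopedGraphUpdateSuspension
{
public:
    explicit ScopedGraphUpdateSuspension (PatchedAudioProcessorGraph& g)
        : graph (g)
    {
        graph.setRebuildOnChange (false);  // listeners may still fire...
    }

    ~ScopedGraphUpdateSuspension()
    {
        graph.setRebuildOnChange (true);   // ...but only one rebuild happens,
        graph.rebuildNow();                // after the preset is fully restored
    }

private:
    PatchedAudioProcessorGraph& graph;
};

// Usage while restoring a preset:
// {
//     ScopedGraphUpdateSuspension pause (graph);
//     restoreAllConnections();
// }   // <- single rebuild here
```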

I never experienced GUI freezes, but I still patched AudioProcessorGraph to not use AsyncUpdater, and to update on a separate thread instead, where I have more control over things.
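The rough shape of that patch, in case it’s useful: a dedicated thread rebuilds the sequence, and the audio thread just picks up the latest one. Everything below is a sketch - RenderSequence stands in for whatever prepared-ops type your patched graph actually uses, and a production version would also need to make sure the old sequence isn’t freed on the audio thread:

```cpp
#include <atomic>
#include <memory>
#include <juce_events/juce_events.h>

struct RenderSequence { /* stand-in for the graph's prepared rendering ops */ };

// Sketch: rebuild the render sequence on a background thread instead of
// via AsyncUpdater on the message thread.
class GraphRebuilder : private juce::Thread
{
public:
    GraphRebuilder() : juce::Thread ("Graph rebuild") { startThread(); }
    ~GraphRebuilder() override { stopThread (2000); }

    // Message thread: call whenever the topology changes. Bursts of
    // changes coalesce into a single rebuild.
    void triggerRebuild()
    {
        pending = true;
        notify();
    }

    // Audio thread: grab the most recently built sequence.
    // (C++20 would allow std::atomic<std::shared_ptr<...>> here instead.)
    std::shared_ptr<RenderSequence> getCurrentSequence()
    {
        return std::atomic_load (&current);
    }

private:
    void run() override
    {
        while (! threadShouldExit())
        {
            wait (-1);
            while (pending.exchange (false))
            {
                // The slow part happens here, off the GUI and audio threads.
                auto fresh = std::make_shared<RenderSequence>();
                std::atomic_store (&current, std::move (fresh));
            }
        }
    }

    std::shared_ptr<RenderSequence> current;
    std::atomic<bool> pending { false };
};
```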

I’m thinking that might be my solution too in the meantime.

But no, the graph updates whenever modulation connections are made. When a new connection is made, the AsyncUpdater is triggered, which then calls buildRenderSequence.

It’s not a huge issue, but a slight pause in the UI whenever you drop a modulator onto a knob isn’t the best UX, and I definitely want to clean it up. Also, this is a relatively small application compared to what I’m planning in the future, so I’m thinking it’d be best to get this solved before things get even more intensive.

Are you aware of a way to create a connection without updating the graph? I handle modulation as IO on the graph itself. Perhaps there’s a better way.

No, I do it the same way, but I make absolutely sure there is only ever one rebuild on larger changes.

I think I’ll be able to use SOUL in the future.

Yeah, I’ve added a setRebuildRenderSequenceOnChange() to the graph for the time being to avoid that problem; if I make a load of changes, I re-enable it afterwards and it rebuilds the render sequence. I think it’s just the amount of modulation I’m allowing that is slowing things down - some of the nodes have upwards of 160 inputs.

Maybe one solution could be moving a cache of the connections into the node object, so it doesn’t have to traverse the connection list to find out whether it has active inputs and needs a rendering op step.
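Something like this, maybe - a sketch of that cache, keyed by node ID and kept up to date by the connect/disconnect calls, so the builder never has to walk the full connection list (all names hypothetical):

```cpp
#include <cstdint>
#include <map>

// Sketch: per-node count of active input connections, maintained
// incrementally so "does this node need a rendering op?" becomes a
// cheap lookup instead of a traversal of every connection.
struct NodeInputCache
{
    void connectionAdded (uint32_t destNodeID)
    {
        ++activeInputs[destNodeID];
    }

    void connectionRemoved (uint32_t destNodeID)
    {
        auto it = activeInputs.find (destNodeID);
        if (it != activeInputs.end() && --it->second <= 0)
            activeInputs.erase (it);
    }

    bool nodeHasActiveInputs (uint32_t nodeID) const
    {
        return activeInputs.count (nodeID) > 0;
    }

    std::map<uint32_t, int> activeInputs;
};
```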

There are multiple ways in which building the rendering sequence could probably be improved, but getting things to work reliably is a lot of work. When I last checked, the approach used by JUCE could still be called a mostly brute-force algorithm. However, tweaking the algorithm is a rabbit hole. I do think the Boost Graph Library could be very helpful for getting a better algorithm.
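For the ordering part at least, BGL gives you a ready-made topological sort over the connection graph. A minimal, self-contained sketch:

```cpp
#include <iostream>
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/topological_sort.hpp>

// Sketch: derive a valid processing order for graph nodes using BGL's
// topological_sort rather than a hand-rolled ordering pass.
int main()
{
    using Graph = boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS>;

    Graph g (4);                // 0 = osc, 1 = lfo, 2 = filter, 3 = output
    boost::add_edge (0, 2, g);  // osc    -> filter (audio)
    boost::add_edge (1, 2, g);  // lfo    -> filter (modulation)
    boost::add_edge (2, 3, g);  // filter -> output

    // topological_sort emits vertices in reverse topological order.
    std::vector<Graph::vertex_descriptor> reversed;
    boost::topological_sort (g, std::back_inserter (reversed));

    for (auto it = reversed.rbegin(); it != reversed.rend(); ++it)
        std::cout << *it << ' ';  // a valid processing order, e.g. "1 0 2 3"
    std::cout << '\n';
}
```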