Rendering sequence in AudioGraph

Hi,

I am trying to figure out how the juce::AudioProcessorGraph class works.

I see that each time the graph topology changes, a new RenderingSequence is created and swapped with the current one.

I would like to ask: what is the rationale behind using a parallel data structure, namely the RenderingSequence with its various RenderingOperators?

I appreciate that the nodes may need to be reordered, so that each one is processed in an order that follows the flow of audio through the graph.

But why use a separate data structure (the RenderingSequence) for the audio processing, rather than, say, an array of pointers to the graph's own nodes?
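
To make the question concrete, the alternative I have in mind would look roughly like this (just a sketch; renderFromNodes and orderedNodes are my own made-up names, not anything in JUCE):

```cpp
#include <juce_audio_processors/juce_audio_processors.h>

// Hypothetical alternative: render straight from an ordered array of the
// graph's own nodes instead of a separate RenderingSequence.
void renderFromNodes (const juce::Array<juce::AudioProcessorGraph::Node::Ptr>& orderedNodes,
                      juce::AudioBuffer<float>& buffer,
                      juce::MidiBuffer& midi)
{
    // orderedNodes is assumed to be topologically sorted already, so each
    // node runs after every node that feeds it. (This naive version also
    // shares one buffer for everything, ignoring per-connection routing.)
    for (auto& node : orderedNodes)
        node->getProcessor()->processBlock (buffer, midi);
}
```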

thanks

Because threads :slight_smile:

Yet there is a ScopedLock (getCallbackLock()) protecting the rendering sequence, and that same lock could just as well protect the graph itself.

I guess the rendering sequence is there to reduce the non-audio thread's use of the critical section to a short swap of RenderingSequence objects.
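
Something like this pattern, I imagine. A sketch only, with invented names (GraphLike, RenderSequence) rather than JUCE's actual internals:

```cpp
#include <juce_audio_processors/juce_audio_processors.h>
#include <memory>

struct RenderSequence { /* prebuilt, ordered list of rendering ops */ };

class GraphLike
{
public:
    // Message thread: the expensive rebuild happens with no lock held...
    void topologyChanged()
    {
        auto newSequence = std::make_unique<RenderSequence>();

        const juce::ScopedLock sl (callbackLock);
        std::swap (sequence, newSequence);  // ...only this pointer swap is locked
    }   // the old sequence is destroyed here, after the lock has been released

    // Audio thread: takes the lock, but only renders from a prebuilt sequence,
    // never touching the mutable graph structure itself.
    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi)
    {
        const juce::ScopedLock sl (callbackLock);
        // run the prebuilt ops in 'sequence' against buffer and midi...
    }

private:
    juce::CriticalSection callbackLock;
    std::unique_ptr<RenderSequence> sequence;
};
```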

thanks

Yeah, less locking is certainly a reason, but there are many other subtle ways things can go totally nuts when you try to process audio using a set of objects that could change under you at any moment.
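
For instance, even iterating the node list while another thread adds or removes a node is already a hazard, never mind what the processors themselves are doing. A prebuilt sequence side-steps all of that, because the audio thread only ever walks a flat, fixed list whose buffer assignments were resolved when the sequence was built. Roughly (again, RenderOp and ImmutableSequence are invented names for illustration):

```cpp
#include <juce_audio_processors/juce_audio_processors.h>
#include <vector>

// Each op captures at build time which processor to run and which
// preallocated buffer it uses, so rendering involves no graph lookups,
// no allocation and no traversal of structures that could be edited.
struct RenderOp
{
    juce::AudioProcessor* processor;
    int bufferIndex;   // resolved when the sequence was built
};

struct ImmutableSequence
{
    std::vector<RenderOp> ops;                       // fixed order, fixed size
    std::vector<juce::AudioBuffer<float>> buffers;   // preallocated up front

    void render (juce::MidiBuffer& midi)
    {
        for (auto& op : ops)   // a plain linear walk, nothing changes underneath
            op.processor->processBlock (buffers[(size_t) op.bufferIndex], midi);
    }
};
```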
