I am trying to figure out how the juce::AudioProcessorGraph class works.
I see that each time the graph topology changes, a new RenderingSequence is created and swapped with the current one.
I would like to ask: what is the rationale behind using a parallel data structure such as the RenderingSequence, together with the various RenderingOperators?
I appreciate that the graph may need to be reordered, so that each node is processed according to the flow of audio through the graph.
But why use a separate data structure (the RenderingSequence) for the audio processing, instead of, for instance, an array of pointers to the graph's own nodes?