AudioProcessorGraph vs ProcessorChain

I have been doing a lot of reading on how the AudioProcessorGraph and ProcessorChain classes in JUCE work. They seem to share some commonalities, so I was wondering if anyone has thoughts on their similarities and differences, and on the advantages and disadvantages of each approach for building modular signal paths (osc -> gain -> filter, etc.).


The ProcessorChain is the newer approach. It is much more performant because the processing chain is fixed at compile time, which lets the compiler optimise across the whole signal path. The trade-off is that the chain cannot be restructured at runtime.


The question is… has anyone tried mixing approaches? Can you put an AudioProcessorGraph (or more than one) in a ProcessorChain?


And on top of the above question^, can you pass an AudioProcessorValueTreeState (APVTS) between nodes within an AudioProcessorGraph?

Yes, if you create AudioProcessor subclasses that allow it, for example by taking a reference to the shared APVTS in their constructors.