Advantages of dsp::ProcessorChain vs AudioProcessorGraph

I’m making a channel-strip plugin with multiple sequential DSP effects. I started using AudioProcessorGraph in my PluginProcessor, laying out the DSP effect units by implementing the ProcessorBase class, as done in the AudioProcessorGraph tutorial.

However I also have been looking at DSPModulePluginDemo, which sets up its DSP chain in this manner:

 using Chain = dsp::ProcessorChain<dsp::NoiseGate<float>, ...>;
 Chain chain;

I’m not sure which of these approaches to use for my plugin’s DSP processing chain, or whether some other approach would be better. I think it would be interesting to compare the benefits and meaningful differences between the two.

From what I’ve heard, dsp::ProcessorChain may allow for more optimized DSP code. An advantage of AudioProcessorGraph is that it allows re-ordering of audio processors at runtime, which can’t be done with dsp::ProcessorChain (though that isn’t required by this plugin).

dsp::ProcessorChain is more efficient because of its use of templates: the compiler knows every DSP ‘stage’ in the chain at compile time, so it can inline them into a single stretch of DSP code that executes very quickly.

AudioProcessorGraph, on the other hand, relies on dynamic ‘stages’: processors reached through virtual calls whose concrete types can’t always be determined at compile time. The compiler therefore has to emit extra code that works out, at run time, which ‘stages’ are used and in what order, every time the graph is processed.

You’ve already mentioned the trade-offs: you can reconfigure an AudioProcessorGraph at runtime, add third-party plugins to it, and so on, whereas a dsp::ProcessorChain is completely fixed, so your software must be rebuilt for any change to take effect.