Intercepting processing of a graph

Hi
I’m still experimenting with JUCE, and now with plugin hosting.
I have set up a simple test using an AudioProcessorGraph and an AudioProcessorPlayer, and it is working fine.
You can think of it like the Host demo, without all the fancy extras.
Now I want to add some properties to each of the processors. Let’s say input gain, output gain, some MIDI processing, and maybe VU metering on the input and output.
This would require processing “outside” the actual filter, so I need access to the samples going into the filter and to the samples coming out.
Being new to this, I see two options. One is creating a new class that inherits from, for example, AudioPluginInstance; that way I would get access to the virtual method processBlock and should be able to do what I want inside it (or?). The downside is that I have no idea whether this will break something else. Clearly I would not be able to use CreateNewPluginInstance to create the plugin instance, so that part would need to be rewritten. Are there any other drawbacks, or would it even work?

The other option: I could set up additional custom internal filters before and after each plugin filter, but that seems a bit messy, as each plugin would actually be three filters in the graph.

Any input on this would be great.
Thank you.

I’d like to offer some advice here, but I’m a little confused. Why would these things have to happen ‘outside’ the filter? I assume that you add AudioProcessor-derived objects as nodes to your signal graph (AudioProcessorGraph::addNode (AudioProcessor* newProcessor, uint32 nodeId = 0))?
In their processBlock() methods you can access both the incoming and outgoing samples, as well as process MIDI events. Hence it should be trivial to add I/O gain controls? Or am I missing something?
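For example, applying an input gain inside a process callback is just a loop over the samples. Here is a minimal self-contained sketch; note that GainProcessor and its vector-of-vectors buffer are hypothetical stand-ins, not JUCE classes — in JUCE you would override AudioProcessor::processBlock (buffer, midiMessages) and call something like buffer.applyGain (gain):

```cpp
#include <vector>
#include <cassert>

// Sketch: a minimal stand-in for an AudioProcessor-style node that
// applies a gain inside its process callback. "buffer" here is one
// vector of samples per audio channel, a simplified stand-in for
// JUCE's audio buffer class.
struct GainProcessor
{
    float gain = 1.0f;

    void processBlock (std::vector<std::vector<float>>& buffer)
    {
        for (auto& channel : buffer)        // each audio channel
            for (auto& sample : channel)    // scale every sample in place
                sample *= gain;
    }
};
```

Since the node sees the whole buffer, the same loop is also the natural place to accumulate a peak or RMS value for VU metering.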

Sorry if I wasn’t clear, and I may be missing something basic here.
Yes, I could use a class derived from AudioProcessor, but I’m not sure of the best way to do this, since the filters could be VST/AU (or something else). Should I derive from AudioPluginInstance? If I do, I can’t use AudioPluginFormatManager::createPluginInstance, so I would need to rewrite that code to return my derived class.
Well, to sum up, I guess I’m confused as to which class I should derive from, and whether I would need to create additional new classes to “support” this derived class.
Hope this cleared it up a bit.

I think I might have got the wrong end of this. Are you trying to add I/O gain controls to third-party plugins that you load into your graph, or add them to your own native plugins? If you are trying to add them to third-party plugins, I think you will need to place your native processors into the signal chain between the existing nodes. But once you have your graph set up, and you know where the nodes are, you can easily insert your own filters into the chain by doing something as simple as this:

MyAudioProcessor* myNativePlugin = new MyAudioProcessor(whatever...);
graph.addNode (myNativePlugin, nodeId); 

Then call AudioProcessorGraph::addConnection() to insert the filter into the signal chain at a specified point (you’re probably doing all this already?). I don’t think there is anything wrong with this approach, but I should add that I’ve not done much work with these classes, so it could be that there is a better approach!

Yes, I’m talking about third-party plugins. So the signal chain would go like this for each third-party plugin:
Input(from somewhere in the graph) > Mystuff > plugin > Mystuff > Output
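To make that chain concrete, here is a tiny self-contained model of what the graph ends up doing for one third-party plugin once the nodes are wired in that order. This is illustrative only: real AudioProcessorGraph nodes are AudioProcessors connected with addConnection(); Node, processChain, and the lambdas below are hypothetical stand-ins:

```cpp
#include <vector>
#include <functional>
#include <cassert>

// Model each graph node as a function applied to a mono buffer, and a
// connection chain as the order the nodes run in. Hypothetical names,
// not JUCE API.
using Buffer = std::vector<float>;
using Node   = std::function<void (Buffer&)>;

// Process the buffer through the connected nodes, in chain order.
inline void processChain (Buffer& buffer, const std::vector<Node>& chain)
{
    for (const auto& node : chain)
        node (buffer);
}
```

The per-plugin chain is then simply { preGain, thirdPartyPlugin, postGain }, which is exactly the “three filters per plugin” bookkeeping mentioned above.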

OK, so you would go for the “add extra processors” approach. I guess it seems logical; I could create a container class that manages the three processors (for a single plugin) and makes sure they are always connected, etc.

How about creating a processor that contains another processor? A container processor is added to the main graph; when it gets processed, it does the pre-processing, calls the processBlock of the contained processor, and then does the post-processing.
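The composition idea can be sketched like this. To keep the example self-contained it uses a hypothetical minimal Processor interface instead of JUCE’s AudioProcessor (in JUCE, the container would derive from AudioProcessor and own an AudioPluginInstance); all names below are illustrative:

```cpp
#include <vector>
#include <cassert>

using Buffer = std::vector<float>;

// Hypothetical minimal stand-in for AudioProcessor.
struct Processor
{
    virtual ~Processor() = default;
    virtual void processBlock (Buffer& buffer) = 0;
};

// A processor that wraps another processor: pre-process, delegate,
// post-process. This is the "container processor" idea from the post.
struct ContainerProcessor : public Processor
{
    Processor* inner;            // the hosted (e.g. third-party) plugin
    float inputGain  = 1.0f;
    float outputGain = 1.0f;

    explicit ContainerProcessor (Processor* hosted) : inner (hosted) {}

    void processBlock (Buffer& buffer) override
    {
        for (auto& s : buffer) s *= inputGain;   // pre-processing
        inner->processBlock (buffer);            // hosted plugin runs here
        for (auto& s : buffer) s *= outputGain;  // post-processing
    }
};

// A trivial hosted processor for demonstration: doubles every sample.
struct Doubler : public Processor
{
    void processBlock (Buffer& b) override { for (auto& s : b) s *= 2.0f; }
};
```

The graph only ever sees one node per plugin, and the pre/post stages can’t become disconnected from the plugin they belong to.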

I never thought of embedding the third-party plugins into a plugin container with independent gain controls. So you could end up doing things like myPluginContainer->setParameter(…) or myPluginContainer->embeddedPlugin->setParameter(…). I guess you’re thinking of rolling your own mixer interface, or something like that? I’d be interested to hear how other people have approached this.

I have set up a test app using this composition scheme, and it seems pretty cool. Although inheritance initially seemed like the natural route, I think this is way easier.
I created a PluginContainer class that takes care of all the custom stuff,
and derived it from a PluginContainerBase class that is an AudioProcessor.
The base class owns an AudioPluginInstance (the third-party plugin) and implements most of the virtual methods of AudioProcessor.
They are easy to implement, as they mostly just call the same function on the owned plugin instance.
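The forwarding boilerplate described here looks roughly like this, again sketched with a hypothetical two-method interface standing in for JUCE’s much larger AudioProcessor (the real one has many more virtuals, e.g. prepareToPlay, releaseResources, getNumParameters); MiniProcessor, StubPlugin, and friends are illustrative names only:

```cpp
#include <memory>
#include <string>
#include <cassert>

// Hypothetical tiny stand-in for AudioProcessor.
struct MiniProcessor
{
    virtual ~MiniProcessor() = default;
    virtual std::string getName() const = 0;
    virtual void prepareToPlay (double sampleRate) = 0;
};

// The base class owns the hosted plugin instance; most overrides are
// one-liners that delegate to it.
struct PluginContainerBase : public MiniProcessor
{
    std::unique_ptr<MiniProcessor> plugin;   // the owned third-party instance

    explicit PluginContainerBase (std::unique_ptr<MiniProcessor> hosted)
        : plugin (std::move (hosted)) {}

    std::string getName() const override            { return plugin->getName(); }
    void prepareToPlay (double sampleRate) override { plugin->prepareToPlay (sampleRate); }
};

// A stub "third-party plugin" for demonstration.
struct StubPlugin : public MiniProcessor
{
    double lastRate = 0.0;
    std::string getName() const override            { return "Stub"; }
    void prepareToPlay (double sampleRate) override { lastRate = sampleRate; }
};
```

A derived PluginContainer then only overrides the calls where it wants to add its custom behaviour (gains, metering, MIDI), and inherits the pass-through versions of everything else.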

Yes, it would be great to have more input on how other people do this.