Should each midi track have its own AudioProcessorGraph?

I am making an app that needs to play multiple MIDI tracks. It’s not exactly a sequencer, but it’s similar in that it needs to play multiple MIDI tracks at once. Each track might be played through a different collection of VST instruments and effects.

For each track I’m creating an AudioDeviceManager, an AudioProcessorGraph and an AudioProcessorPlayer. I then add VST plugins to the graph, and play one track of MIDI through it by calling getMidiMessageCollector().addMessageToQueue(midiMessage) on the AudioProcessorPlayer.
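
In code, each track currently looks roughly like this (a simplified sketch; TrackEngine is just my own wrapper name, and the plugin-loading code is omitted):

#include "JuceHeader.h"   // or wherever your JUCE includes come from

struct TrackEngine
{
    AudioDeviceManager deviceManager;
    AudioProcessorGraph graph;
    AudioProcessorPlayer player;

    TrackEngine()
    {
        deviceManager.initialise (0, 2, nullptr, true);   // no inputs, stereo out
        player.setProcessor (&graph);
        deviceManager.addAudioCallback (&player);
        // ... add this track's VST instrument/effect nodes to graph and connect them ...
    }

    void postMidi (const MidiMessage& midiMessage)
    {
        player.getMidiMessageCollector().addMessageToQueue (midiMessage);
    }
};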

Is this the right thing to do? Or should I be sharing some of these objects across multiple tracks? If I should be sharing them, could you give me some tips about how to route different midi tracks through the different chains of instruments and effects?

thanks,
Richard

Hmm I don’t think you need multiple AudioDeviceManager objects.

Thank you, TheVinn!

I’m now creating one AudioDeviceManager and sharing it across my tracks. I had confused myself by thinking that the AudioDeviceManager::addAudioCallback(player) was a ‘set’ function rather than an ‘add’ function. So I had wrongly thought that I could only add one player to the device-manager and that I needed one audio-device-manager per player.
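
So the shape of it now is roughly this (sketch only, error handling omitted):

// one AudioDeviceManager shared across the whole app
AudioDeviceManager deviceManager;
deviceManager.initialise (0, 2, nullptr, true);

// still one graph + one player per track
AudioProcessorGraph graphA, graphB;
AudioProcessorPlayer playerA, playerB;
playerA.setProcessor (&graphA);
playerB.setProcessor (&graphB);

// addAudioCallback() really is an 'add' - both players are driven by the same device
deviceManager.addAudioCallback (&playerA);
deviceManager.addAudioCallback (&playerB);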

thanks again,
Richard

Can someone diagram the best method of handling this? I am referring to handling multiple tracks of MIDI data, like a sequencer does.

Do you use multiple AudioProcessorPlayers or just one? If multiple, how do you connect each plugin to each player?
Do you use multiple AudioProcessorGraphs or just one?

MIDI data source 1 ----> Plugin ----> Effects ----> Audio out
MIDI data source 2 ----> Plugin ----> Effects ----> Audio out

etc…

or simply how to take data from

void handleIncomingMidiMessage (MidiInput* source, const MidiMessage& message)
{
}
and send it to an arbitrary plugin (graph node).

And the best solution would be for someone to write a routine like Apple’s AudioUnit function MusicDeviceMIDIEvent().

Actual source code preferred.

Thanks

Well, the optimum solution clearly depends on your application’s requirements.

There’s merit in both approaches.

Option 1:
Use a single AudioProcessorGraph

  • the infrastructure can be implemented with custom classes that handle your channel/track structure during processing (see the topology sketch after this list)
  • it can be serialised easily by implementing XML serialisation as in the Host example
  • you can derive from AudioProcessorGraph and customise the class to your needs
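
For illustration, here’s a rough sketch of the single-graph topology, using the older addNode/addConnection signatures from the Host example (the exact signatures depend on your JUCE version); createSynthPlugin() and createFxPlugin() stand in for your own plugin-loading code:

AudioProcessorGraph graph;

enum { outId = 1, synth1Id = 2, fx1Id = 3, synth2Id = 4, fx2Id = 5 };

// one shared audio output node
graph.addNode (new AudioProcessorGraph::AudioGraphIOProcessor (
                   AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode), outId);

// two parallel track chains living inside the same graph
graph.addNode (createSynthPlugin(), synth1Id);
graph.addNode (createFxPlugin(),    fx1Id);
graph.addNode (createSynthPlugin(), synth2Id);
graph.addNode (createFxPlugin(),    fx2Id);

for (int ch = 0; ch < 2; ++ch)   // stereo wiring; the graph sums both chains at the output node
{
    graph.addConnection (synth1Id, ch, fx1Id, ch);
    graph.addConnection (fx1Id,    ch, outId, ch);
    graph.addConnection (synth2Id, ch, fx2Id, ch);
    graph.addConnection (fx2Id,    ch, outId, ch);
}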

Option 2:
Use multiple nested AudioProcessorGraphs

  • Each channel/track would have its own class derived from AudioProcessorGraph (the MIDI-routing sketch after this list assumes this layout)
  • serialisation is spread among multiple classes, so it’s more complicated to implement
  • you can, in theory, re-use stored subgraphs in other projects (which is harder with the single-graph approach)
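
And for the MIDI-routing question above, Option 2 keeps it simple: every track’s AudioProcessorPlayer has its own MidiMessageCollector, so the incoming message just gets posted to the right track’s player. A minimal sketch, assuming you keep the players in an OwnedArray and have some pickTrackFor() helper of your own:

void handleIncomingMidiMessage (MidiInput* source, const MidiMessage& message)
{
    // however you decide which track the message belongs to
    const int track = pickTrackFor (source, message);

    players[track]->getMidiMessageCollector().addMessageToQueue (message);
}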

Just my 2 cents.