I am making an app that needs to play multiple MIDI tracks. It’s not exactly a sequencer, but it’s similar in that it plays multiple MIDI tracks, and each track might be played through a different collection of VST instruments and effects.
For each track I’m creating an AudioDeviceManager, an AudioProcessorGraph and an AudioProcessorPlayer. I then add VST plugins to the graph, and play one track’s MIDI through it by calling getMidiMessageCollector().addMessageToQueue(midiMessage) on the AudioProcessorPlayer.
Is this the right thing to do? Or should I be sharing some of these objects across multiple tracks? If I should be sharing them, could you give me some tips about how to route different midi tracks through the different chains of instruments and effects?
I’m now creating one AudioDeviceManager and sharing it across my tracks. I had confused myself by assuming that AudioDeviceManager::addAudioCallback(player) was a ‘set’ function rather than an ‘add’ function, so I wrongly thought I could only attach one player to the device manager and therefore needed one AudioDeviceManager per player.
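For anyone finding this later, here is a minimal sketch of that shared setup. It assumes each track owns its own AudioProcessorGraph (trackGraph1 and trackGraph2 are placeholder names for graphs built elsewhere):

```cpp
// Sketch: one shared AudioDeviceManager driving several AudioProcessorPlayers.
// addAudioCallback() registers an additional callback rather than replacing
// the existing one, and the manager mixes the callbacks' outputs together.
juce::AudioDeviceManager deviceManager;
deviceManager.initialise (0, 2, nullptr, true);   // no inputs, stereo output

juce::AudioProcessorPlayer player1, player2;
player1.setProcessor (&trackGraph1);              // per-track graphs, assumed
player2.setProcessor (&trackGraph2);              // to be built elsewhere

deviceManager.addAudioCallback (&player1);
deviceManager.addAudioCallback (&player2);
```

Each player runs its graph independently; the device manager sums their output into the one audio device.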
Can someone diagram the best method of handling this? I am referring to handling multiple tracks of MIDI data, as in a sequencer.
Do you use multiple AudioProcessorPlayers or just one? If multiple, how do you connect each plugin to each player?
Do you use multiple AudioProcessorGraphs or just one?
MIDI data source 1 ----> Plugin ----> Effects ----> Audio out
MIDI data source 2 ----> Plugin ----> Effects ----> Audio out
etc.
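One way to realise each of those rows is a per-track AudioProcessorGraph whose MIDI input pin feeds the instrument, whose audio feeds the effect, and so on to the audio output node. A sketch, assuming the modern JUCE graph API (addNode taking a unique_ptr, Connection aggregates); the instrument and effect processors are assumed to be already-loaded plugin instances:

```cpp
// Sketch: build one track's graph, wired as
//   MIDI in -> instrument -> effect -> audio out (stereo).
using Graph = juce::AudioProcessorGraph;

std::unique_ptr<Graph> buildTrackGraph (std::unique_ptr<juce::AudioProcessor> instrument,
                                        std::unique_ptr<juce::AudioProcessor> effect)
{
    auto graph = std::make_unique<Graph>();

    auto midiIn   = graph->addNode (std::make_unique<Graph::AudioGraphIOProcessor> (
                                        Graph::AudioGraphIOProcessor::midiInputNode));
    auto audioOut = graph->addNode (std::make_unique<Graph::AudioGraphIOProcessor> (
                                        Graph::AudioGraphIOProcessor::audioOutputNode));

    auto inst = graph->addNode (std::move (instrument));
    auto fx   = graph->addNode (std::move (effect));

    // MIDI: the graph's MIDI input pin feeds the instrument.
    graph->addConnection ({ { midiIn->nodeID, Graph::midiChannelIndex },
                            { inst->nodeID,   Graph::midiChannelIndex } });

    // Audio: instrument -> effect -> output, per channel.
    for (int ch = 0; ch < 2; ++ch)
    {
        graph->addConnection ({ { inst->nodeID, ch }, { fx->nodeID,       ch } });
        graph->addConnection ({ { fx->nodeID,   ch }, { audioOut->nodeID, ch } });
    }

    return graph;
}
```

Each such graph is then handed to its own AudioProcessorPlayer, and all players are added as callbacks on the one shared AudioDeviceManager.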
or, more simply, how to take the data arriving in
void handleIncomingMidiMessage( MidiInput *source, const MidiMessage &message )
{
}
and send it to an arbitrary plugin (node).
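One approach (a sketch, not the only way): keep one AudioProcessorPlayer per track and, inside the callback, push the message into the collector of whichever player should receive it. The players vector and chooseTrackFor() routing rule here are hypothetical:

```cpp
// Forward incoming MIDI to the player for a chosen track. MidiInput already
// timestamps messages on the Time::getMillisecondCounterHiRes() clock that
// the player's MidiMessageCollector expects.
void handleIncomingMidiMessage (juce::MidiInput* source,
                                const juce::MidiMessage& message) override
{
    auto trackIndex = chooseTrackFor (message);   // hypothetical routing rule,
                                                  // e.g. keyed on MIDI channel
    players[trackIndex]->getMidiMessageCollector().addMessageToQueue (message);
}
```

The collector is thread-safe, so it is fine to call addMessageToQueue() from the MIDI input thread while the audio callback drains it.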
Ideally, the answer would be a routine analogous to the Apple AudioUnit function MusicDeviceMIDIEvent().
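Such a routine is straightforward to build from the pieces above. A sketch of a MusicDeviceMIDIEvent-style helper (the name and the raw status/data parameters mirror the AudioUnit call; on the JUCE side it is just a collector push):

```cpp
// Rough JUCE analogue of MusicDeviceMIDIEvent(): deliver one raw MIDI event
// to the player driving a given plugin chain.
void sendMidiEvent (juce::AudioProcessorPlayer& player,
                    int status, int data1, int data2)
{
    juce::MidiMessage msg (status, data1, data2);
    // Collector messages need a timestamp in seconds on this clock:
    msg.setTimeStamp (juce::Time::getMillisecondCounterHiRes() * 0.001);
    player.getMidiMessageCollector().addMessageToQueue (msg);
}
```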