How can I create my own MidiMessageCollectors and associate them with plugins?

I understand in principle how MIDI events come in from the outside world and are appended to a MidiMessageCollector, from which they are then retrieved when a plugin is processing its audio samples.

What I don't understand is how to create MidiMessageCollectors independently of MIDI devices and associate them with individual audio processors so that, for example, I could have three tracks of MIDI data, each track of which sends events to three different plugins.

Is it just a question of getting at the AudioProcessorPlayer associated with an AudioProcessor and then calling getMidiMessageCollector()?

Does anyone have some example code that they would be willing to share that shows how to do this?

Thanks,

D

Is it just a question of getting at the AudioProcessorPlayer associated with an AudioProcessor and then calling getMidiMessageCollector()?

This would be my approach. Simply call addMessageToQueue on the MidiMessageCollector of the AudioProcessorPlayer.
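For example, something roughly like this (a sketch; "player" is assumed to be a juce::AudioProcessorPlayer that is already hooked up to your AudioDeviceManager and your AudioProcessor):

    // Get the player's collector and push an event into it. The event will be
    // merged into the MidiBuffer passed to the processor's processBlock() on
    // the next audio callback. (The player resets the collector itself when
    // the audio device starts, so it's safe to feed once audio is running.)
    juce::MidiMessageCollector& collector = player.getMidiMessageCollector();
    collector.addMessageToQueue (juce::MidiMessage::noteOn (1, 60, (juce::uint8) 100));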

Yeah, I get that --- but what happens when I have multiple AU plugins to which I want to send different MIDI messages? Must I have a separate AudioProcessorPlayer for each AudioProcessor?

I think you would want to use the AudioGraph class.

I'm not aware of that class --- do you mean the AudioProcessorGraph? If so, that's what I'm using, but the issue is that there doesn't seem to be any way to direct MIDI to specific nodes. You can only throw messages into the "top" of the graph.

Don't know what you mean by "top of the graph".

You add the midi input node to the AudioProcessorGraph with AudioProcessorGraph.addNode. Same as with your "multiple AU plugins". Then you connect the midi input node to one or more of your multiple AU plugin nodes using AudioProcessorGraph.addConnection.
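As a rough sketch using the current AudioProcessorGraph API (older versions pass node IDs and channel indices as separate arguments; "pluginInstance" is assumed to be a std::unique_ptr<juce::AudioPluginInstance> you created elsewhere):

    using IOProcessor = juce::AudioProcessorGraph::AudioGraphIOProcessor;

    juce::AudioProcessorGraph graph;

    // The graph's MIDI input node.
    auto midiIn = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::midiInputNode));

    // One of your AU/VST plugin nodes.
    auto plugin = graph.addNode (std::move (pluginInstance));

    // Route MIDI from the input node to the plugin node; MIDI connections use
    // the special channel index AudioProcessorGraph::midiChannelIndex.
    graph.addConnection ({ { midiIn->nodeID, juce::AudioProcessorGraph::midiChannelIndex },
                           { plugin->nodeID, juce::AudioProcessorGraph::midiChannelIndex } });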

If you want to filter out certain parts of the midi stream (tracks, note ranges or whatever) you'd probably do that in a plugin of your own that you connect between the midi input node and any of your multiple AU plugins.
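Something like this, as a rough illustration of such a filter node (the class name and the keep-one-channel rule are made up for the example, the MidiBuffer iteration uses the current API, and the remaining AudioProcessor overrides are reduced to stubs):

    // Pass-through processor that forwards only the MIDI events you want.
    class MidiChannelFilter : public juce::AudioProcessor
    {
    public:
        explicit MidiChannelFilter (int channelToKeep) : channel (channelToKeep) {}

        void processBlock (juce::AudioBuffer<float>& audio, juce::MidiBuffer& midi) override
        {
            juce::MidiBuffer kept;

            for (const auto metadata : midi)                    // keep one channel only
                if (metadata.getMessage().getChannel() == channel)
                    kept.addEvent (metadata.getMessage(), metadata.samplePosition);

            midi.swapWith (kept);
            audio.clear();                                      // MIDI-only node
        }

        // Stubs for the remaining pure-virtual AudioProcessor methods:
        const juce::String getName() const override                 { return "MidiChannelFilter"; }
        void prepareToPlay (double, int) override                   {}
        void releaseResources() override                            {}
        double getTailLengthSeconds() const override                { return 0.0; }
        bool acceptsMidi() const override                           { return true; }
        bool producesMidi() const override                          { return true; }
        juce::AudioProcessorEditor* createEditor() override         { return nullptr; }
        bool hasEditor() const override                             { return false; }
        int getNumPrograms() override                               { return 1; }
        int getCurrentProgram() override                            { return 0; }
        void setCurrentProgram (int) override                       {}
        const juce::String getProgramName (int) override            { return {}; }
        void changeProgramName (int, const juce::String&) override  {}
        void getStateInformation (juce::MemoryBlock&) override      {}
        void setStateInformation (const void*, int) override        {}

    private:
        int channel;
    };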

By the way, I don't think you need to bother with the MidiMessageCollector when doing it this way.

Have a look at the Plugin Host demo for an example of how to work with the AudioProcessorGraph.

There are multiple issues here.

1) As far as I can tell, you can only add one MIDI input node, and its input comes from physical MIDI devices, i.e. deviceManager.addMidiInputCallback... (that's what I meant by the "top" of the graph, by the way; perhaps a bad choice of terms, but I meant that there's only a single global entry point into the graph for MIDI, as far as I can tell).

So if your MIDI events are coming from somewhere else, such as a track of data or via calculation, how do you get those events INTO the MIDI node?

2) The notion of filtering data OUT from a stream is distasteful. Imagine you have 500 tracks in a sequence, perhaps along with some real-time events from some MIDI devices. You don't want to have to look at every single event, figure out from which track or device it came, and then (somehow) figure out to which AUs to send (or not send) each event. That's a lot of work and bookkeeping, especially if you're running on a real-time thread. Surely one should be able to send the events directly to the desired AUs.

So the question is: how do you inject unique MIDI data into different AU/VST plugins?

We've come up with a hack to do this (and MidiMessageCollectors are convenient because they're thread-safe and you can feed them directly into a MidiBuffer in a processBlock method), but I'd rather do it the "JUCE" way, if it exists.
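For reference, feeding a collector into a MidiBuffer inside processBlock looks roughly like this (a sketch, not necessarily the hack mentioned above; "midiCollector" is assumed to be a juce::MidiMessageCollector member that another thread feeds via addMessageToQueue, and that was reset() with the sample rate in prepareToPlay):

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages) override
    {
        // Pull everything queued since the last block into this block's buffer
        // (thread-safe; timestamps are converted to sample offsets).
        midiCollector.removeNextBlockOfMessages (midiMessages, buffer.getNumSamples());

        // ... pass midiMessages on to the hosted plugin / synth as usual.
    }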

Cheers,

D

Imagine you have 500 tracks in a sequence, perhaps along with some real-time events from some MIDI devices

Looks to me like you're talking about a (midi) sample player. (Feel free to correct me if I'm wrong...) If by "midi tracks" you mean midi files, there are convenient JUCE methods to read such files into MidiMessageSequences.
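For example, reading the tracks of a midi file into MidiMessageSequences goes roughly like this (the file path is a placeholder):

    juce::File midiFileOnDisk ("/path/to/some.mid");    // placeholder path
    juce::FileInputStream stream (midiFileOnDisk);

    juce::MidiFile midiFile;
    if (stream.openedOk() && midiFile.readFrom (stream))
    {
        midiFile.convertTimestampTicksToSeconds();       // event times become seconds

        for (int i = 0; i < midiFile.getNumTracks(); ++i)
        {
            const juce::MidiMessageSequence* track = midiFile.getTrack (i);
            // ... hand each track's sequence to whichever processor should play it
        }
    }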

You can associate each MidiMessageSequence (or a combination of them, depending on your use case) with an AudioProcessor, as outlined in http://www.juce.com/forum/topic/host-sync-noteon-noteoff (but of course the midi events now come from the MidiMessageSequence instead of the hardcoded noteon/noteoff switch).
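The core of such a midi processor would be something along these lines in its processBlock (a sketch; "sequence" is assumed to be a MidiMessageSequence with seconds-based timestamps and "playheadSeconds" a double member you advance yourself):

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages) override
    {
        const double blockStart = playheadSeconds;
        const double blockEnd   = blockStart + buffer.getNumSamples() / getSampleRate();

        // Emit every sequence event whose timestamp falls inside this block,
        // converted to a sample offset within the block.
        for (int i = sequence.getNextIndexAtTime (blockStart); i < sequence.getNumEvents(); ++i)
        {
            const double t = sequence.getEventTime (i);

            if (t >= blockEnd)
                break;

            midiMessages.addEvent (sequence.getEventPointer (i)->message,
                                   (int) ((t - blockStart) * getSampleRate()));
        }

        playheadSeconds = blockEnd;
    }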

Now you have a bunch of midi processors to add to the AudioProcessorGraph as midi nodes. Connect them to your AU plugins in whatever fashion you like. Adding another Midi input node and connecting it to a physical midi device will let you use real-time events as well.

The notion of filtering data OUT from a stream is distasteful. Imagine you have 500 tracks in a sequence...

Don't know if you're up to a sampler (i.e. your application is supposed to produce sound eventually) or just some sort of gigantic Midi junction (outputting a stream of processed midi events over a number of midi channels), but if it's the former, it means that it will have to process samples with a number of instructions at a pace of at least 44100 samples per second. Compared to that, the CPU load of any midi event filtering will probably not even be measurable.

And assuming your application is supposed to do anything at all midi-event-wise, I can't imagine that it won't have to deal with individual midi events eventually anyway. After all, isn't that just what your application is supposed to do?

Thanks for the info, much appreciated. The tracks are NOT standard MIDI files; they have metadata and/or generator functions that have to be converted to MIDI on the fly in real time. The issue was how to associate individual MidiMessageSequences with individual AudioProcessors that represent VSTs/AUs. If I understand you correctly, each of those midi processors is implemented as an AudioProcessor, which means it has a processBlock, which means that each of those midi processors has its processBlock called 44100 times/second. That seems silly given the sporadic nature of MIDI; most of the time there would be nothing to process.

I was hoping to avoid that step. It may be, though, that I'm still not totally understanding the Node model.

No, you're misunderstanding things. processBlock is called at a rate of samplerate/buffersize times per second; with a 441-sample buffer at 44100 Hz, that's about 100 calls per second, each call processing about 441 samples. You set the buffer size on the audio device. This is how audio processing works on a PC.

Not really sure what you're up to. Is your app meant to produce sound, or just generate midi events? Or something else?