NonRealtime MIDI Processing

My application works well with realtime processing, but I'm having some issues with non-realtime (what I refer to as offline processing), where buffers of audio are pulled through my plugin host faster than realtime and MIDI data is sent to the host timestamped relative to a starting sample rather than to the system clock.

I have called setNonRealtime(true) on my AudioProcessorGraph, and getCurrentProcessor()->suspendProcessing(true) on my AudioProcessorPlayer to stop the audio thread from making calls while I process offline by calling processBlock() on my AudioProcessorPlayer myself.
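For what it's worth, the outer offline loop is mostly just block arithmetic: walk a fixed-length timeline in block-sized chunks and hand each chunk to processBlock(). Here's a plain-C++ sketch of that slicing (the function name `planOfflineBlocks` is my own invention, and in real code the loop body would call the graph's processBlock() after setNonRealtime(true) and after the audio thread has been suspended):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Splits an offline timeline of totalSamples into consecutive blocks of at
// most blockSize samples, returning { startSample, numSamples } pairs.
// The final block is shorter when the length doesn't divide evenly.
std::vector<std::pair<long long, int>> planOfflineBlocks (long long totalSamples,
                                                          int blockSize)
{
    std::vector<std::pair<long long, int>> blocks;

    for (long long pos = 0; pos < totalSamples; pos += blockSize)
    {
        int numThisBlock = (int) std::min<long long> (blockSize, totalSamples - pos);
        blocks.emplace_back (pos, numThisBlock);
        // ...in the real host you'd fill a MIDI buffer for [pos, pos + numThisBlock)
        // and call processBlock (audioBuffer, midiBuffer) here...
    }

    return blocks;
}
```

For example, a 1000-sample render with a 512-sample block size yields two blocks: one of 512 samples starting at 0, then one of 488 samples starting at 512.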

Looking at my realtime interaction with MidiMessageCollector, it seems this won’t just magically work as I had hoped.

MidiMessageCollector::addMessageToQueue() uses the system time to create sample-based offsets relative to the previous call to MidiMessageCollector::removeNextBlockOfMessages(), and removeNextBlockOfMessages() pulls MIDI data out based on system-time intervals.

How are people getting offline processing to work? I assume the offline MIDI stream is sent somewhere other than a MidiMessageCollector, and is fed to the plugins in the AudioProcessorGraph in a loop separate from what I currently have set up. Are there mechanisms already provided for this that I haven't found yet?

Thanks again for your help.


The cricket chirps I'm hearing in response to my question lead me to think I should borrow the code from MidiMessageCollector and create a similar class that deals with sample-stamps instead of time-stamps, doesn't toss MIDI data that gets stale, and has its MIDI data pulled out in my offline loop and sent to my AudioProcessorGraph, bypassing any AudioDeviceManager processing while I pull audio data from the processBlock() method of my AudioProcessorPlayer.

The crickets seem to approve of this approach.

Yeah, the message collector is definitely designed for real-time use, not for general-purpose message processing! Surely a plain MidiBuffer would be more appropriate if you just need a buffer for some messages?
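To illustrate the "just use a sample-stamped buffer" idea: keep the events with absolute sample positions, and for each offline block copy out the ones falling inside that block, rebasing their offsets to be block-relative (which is what processBlock() expects in its MidiBuffer argument). A plain-C++ sketch, where `TimedEvent` and `eventsForBlock` are hypothetical stand-ins for a juce::MidiMessage plus its timestamp and for MidiBuffer::addEvent() with a sample number:

```cpp
#include <vector>

// A MIDI event stamped with an absolute sample position in the offline timeline.
struct TimedEvent { long long sample; int note; };

// Returns the events falling in [blockStart, blockStart + numSamples),
// with their sample fields rebased to be relative to the block start.
std::vector<TimedEvent> eventsForBlock (const std::vector<TimedEvent>& all,
                                        long long blockStart, int numSamples)
{
    std::vector<TimedEvent> out;

    for (const auto& e : all)
        if (e.sample >= blockStart && e.sample < blockStart + numSamples)
            out.push_back ({ e.sample - blockStart, e.note }); // block-relative offset

    return out;
}
```

So for events at absolute samples 0, 600 and 1023, the block starting at sample 512 (length 512) receives the latter two, at block-relative offsets 88 and 511. Nothing goes stale, because nothing is tied to the system clock.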

I often seem to overshoot the easier solution and land on the messier.


When you say:

Do you mean it is real-time in terms of keeping the right timestamp information, or is it intended to be called from inside the audio callback? The latter would make me nervous, as a lock is introduced…