Using timestamps of MIDI messages in processBlock

I’m developing a plugin where I receive MIDI messages in the processBlock function and then forward them on to a custom MIDI output. The code I’m currently using is based on the example at the bottom of this JUCE tutorial, and is along the lines of:

MidiMessage m;
int time; // receives each message's sample offset within the current block

for (MidiBuffer::Iterator i (midiMessages); i.getNextEvent (m, time);)
{
    // forwards every message immediately, ignoring its sample offset
    midiOutput->sendMessageNow (m);
}

However, there’s a timing issue with this because it doesn’t take the MIDI message timestamps into account. It isn’t noticeable when the block size is small, but with a large block size the messages are sent too soon.

Am I correct in thinking that the time variable will store the sample offset from the beginning of the block?

What would be the best solution to forward the messages to the output at the correct point in time? Should I be launching a separate Thread to send these messages?

Hey Liam,

Yeah, you’re correct: that’s the sample position of the MIDI message within the buffer. If you process all of the MIDI messages at once and don’t keep their sample positions for the receiving processors, then it sounds like you may only be processing the last messages in the buffer by the time your processors get around to running.
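For example, here’s a minimal sketch of pulling the messages out while keeping each one paired with its sample offset. It uses the newer metadata-based range loop (your iterator gives you the same offset through its time argument), and handleMidiAt is just a hypothetical stand-in for your own code:

for (const auto metadata : midiMessages)
{
    const auto message = metadata.getMessage();
    const int samplePosition = metadata.samplePosition; // offset from the start of this block

    handleMidiAt (message, samplePosition); // hypothetical: keep the offset with the message
}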

There are a bunch of different ways to handle this, and it really depends on the audio system you’re using.

If you’re not using any sort of processor graph, one way might be to give the MIDI buffer directly to the processors which need it. Then, during each sample of the process block, you could check for any MIDI messages at that sample position and respond accordingly, as in the sketch below.
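A rough sketch of what that per-sample check could look like inside processBlock, again using the newer MidiBuffer iteration; renderSample and handleMidiMessage are placeholders for whatever your processors actually do:

auto midiIter = midiMessages.begin();

for (int sample = 0; sample < buffer.getNumSamples(); ++sample)
{
    // dispatch every message that falls on this sample position
    while (midiIter != midiMessages.end() && (*midiIter).samplePosition == sample)
    {
        handleMidiMessage ((*midiIter).getMessage()); // hypothetical per-message handler
        ++midiIter;
    }

    renderSample (buffer, sample); // hypothetical per-sample audio processing
}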

One method I’ve really enjoyed is to convert the MIDI buffer into some sort of audio buffer which you never actually send to an output, but which can be hooked up and sent around your audio system as if it were a modulation connection. If you go this way, then any time a processor needs MIDI information, you can give it a new input and create a connection to it, so it gets a sample stream which represents the MIDI at sample-accurate positions. It’s a bit more cumbersome to get set up, but once you have it running it can make things much easier, like:

// inMidiBuffer is the MIDI-as-audio stream: one value per sample
for (int sample = 0; sample < inNumSamples; sample++) {
    if (inMidiBuffer[sample] == kNoteOn) { // a note-on was encoded at this sample
        processNoteOn();
    }
}
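For the producing side of that idea, here’s a minimal sketch of converting a MidiBuffer into that kind of per-sample control stream. The kNoValue / kNoteOn / kNoteOff sentinels, the controlBuffer name and the midiToControlBuffer function are all assumptions for illustration; in practice you’d pick an encoding that suits your system (this assumes <algorithm> is included for std::fill):

// assumed sentinel values for the MIDI-as-audio encoding
constexpr float kNoValue = 0.0f;
constexpr float kNoteOn  = 1.0f;
constexpr float kNoteOff = -1.0f;

void midiToControlBuffer (const juce::MidiBuffer& midi, float* controlBuffer, int numSamples)
{
    // start with "nothing happened" on every sample
    std::fill (controlBuffer, controlBuffer + numSamples, kNoValue);

    // stamp each event at its own sample position
    for (const auto metadata : midi)
    {
        const auto message = metadata.getMessage();

        if (message.isNoteOn())
            controlBuffer[metadata.samplePosition] = kNoteOn;
        else if (message.isNoteOff())
            controlBuffer[metadata.samplePosition] = kNoteOff;
    }
}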

At some point you’ll need to make a trade-off. If you need to represent things on screen, you’ll want all of that to happen at some sort of central location, and it will need to happen within one processing loop. I generally handle this at the place in the code where I convert the MIDI buffer into a sample buffer: that’s where I send off any GUI messages which need to represent things on screen. I don’t worry much about the sample accuracy of those, as there’s really no way to make it sample accurate across the entire system, and it’s a tiny latency anyway. So disconnecting the messages which should go asynchronously to the GUI, while keeping a sample-accurate version for the audio-line processing, is pretty important.
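One way to do that decoupling in JUCE, just as a sketch and assuming an AsyncUpdater is acceptable for your GUI messaging (a lock-free FIFO is another option if you want to stay strictly allocation-free on the audio thread):

// collect note info on the audio thread, push the GUI update to the
// message thread later; handleAsyncUpdate() runs on the message thread
struct MidiActivityNotifier : public juce::AsyncUpdater
{
    std::atomic<int> lastNote { -1 };

    void noteSeenOnAudioThread (int noteNumber)
    {
        lastNote.store (noteNumber);
        triggerAsyncUpdate(); // coalesces repeated triggers into one callback
    }

    void handleAsyncUpdate() override
    {
        // safe to touch components here, e.g. repaint a keyboard display (hypothetical)
    }
};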
