Send Midi events at precise time intervals?

I’m trying to send MTC Quarter Frame events, which need to be sent at precise time intervals. For example, for a 24 FPS timecode, each quarter frame has to be sent every 10.417 milliseconds, which is 1000 (milliseconds) / 24 (frames) / 4 (quarters).

In void processBlock(AudioBuffer<float>& buffer, MidiBuffer& midiMessages) I am filling the buffer this way:

void PluginProjectAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
{
	midiMessages.clear();
	buffer.clear();

	int sampleOffset = 0;
	const int sampleInterval = int (getSampleRate() / (double) FPS / 4.0);

	// hh / mm / ss / ff hold the current timecode (hours, minutes, seconds, frames),
	// fps_nib is the 2-bit MTC rate code; all are members set elsewhere
	unsigned char MTC_Piece = 0;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, ff & 0b1111), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, ff >> 4    ), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, ss & 0b1111), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, ss >> 4    ), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, mm & 0b1111), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, mm >> 4    ), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, hh & 0b1111), sampleOffset); sampleOffset += sampleInterval;
	midiMessages.addEvent(MidiMessage::quarterFrame(MTC_Piece++, (hh >> 4) | (fps_nib << 1)), sampleOffset);
}

I’m not sure how the sample offset should be calculated… Is it absolute? Relative to the block?
When I try this code, events are received at arbitrary intervals; sometimes two adjacent events have the same timestamp.

When trying with a hardware unit (MOTU MTP/AV) I can see that the QuarterFrame events are sent approximately 11 milliseconds apart.

For this to work properly, you need to keep track of the “global” timeline of samples, and only send out messages that happen to ‘fall’ within the current block.

So for example if you want to send a message every one second, with a sample rate of 44100 and block size of 100, you will count 441 blocks, and then send the message at the first sample of the 442nd block.
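A stripped-down model of that bookkeeping (plain C++, no JUCE; `EventScheduler` and its names are mine, for illustration):

```cpp
#include <cstdint>
#include <vector>

// A counter that survives across processBlock calls tracks the global
// sample timeline; an event is emitted only when its absolute position
// falls inside the current block.
struct EventScheduler
{
    std::int64_t sampleCounter = 0;  // samples processed so far
    std::int64_t nextEventTime;      // absolute sample of the next event
    std::int64_t interval;           // samples between events

    explicit EventScheduler (std::int64_t intervalSamples)
        : nextEventTime (intervalSamples), interval (intervalSamples) {}

    // Returns the in-block offsets of any events falling in this block;
    // in a real plugin each offset would go to midiMessages.addEvent().
    std::vector<int> processBlock (int numSamples)
    {
        std::vector<int> offsets;
        const std::int64_t endOfBlock = sampleCounter + numSamples;

        while (nextEventTime < endOfBlock)
        {
            offsets.push_back (int (nextEventTime - sampleCounter));
            nextEventTime += interval;
        }

        sampleCounter += numSamples;
        return offsets;
    }
};
```

With a sample rate of 44100, a block size of 100 and a one-second interval, the first 441 calls yield no events and the 442nd yields a single event at offset 0, exactly as described.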

Isn’t Juce taking care of this? Otherwise I don’t understand what the second parameter of MidiBuffer::addEvent is for.

When I’m in a processing block, and add events to the MidiBuffer, how do I know when the message is going to be sent? Will it be sent exactly now? Or at the beginning of next block?

I made other applications (not plugins) that send Midi events at precise time intervals, but in that case I was using a separate Thread and used MidiOutput::sendMessageNow.

Maybe I should do the same thing in a plugin?

That will only work properly for events within the current block. That parameter is for offset within that block - for example to specify sample #32 out of a buffer size of 64.
(you get the buffer size from the AudioBuffer passed into processBlock)

It’s gonna be sometime after you finish processing your entire `processBlock`; at that point the entire MIDI buffer is sent to the destination along with the offsets relative to the current buffer.

Maybe I should do the same thing in a plugin?

No you should not. That’s just asking for trouble. The sending of Midi events is controlled by the clock of the (possibly external) sound card. The timers used in Juce, which I assume you would use to time the events in a different thread, are ultimately controlled by a different clock on the computer’s motherboard. They are not in sync and will never give you stable, precise Midi timing. It might work sufficiently for a while, but it is liable to break with any OS update. Or Juce update… :slight_smile:

Have a look at this thread Inject notes to a synthesiser - Development - JUCE

It deals with the sending of midi events at precise timings and will hopefully give you a clue to solve your task.

When I do time critical operations in threads I usually use std::chrono in a controlled while loop.
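A sketch of that “controlled while loop” idea (standard C++ only; the interval and the function name are my own assumptions). The key point is to keep an absolute deadline and `sleep_until()` it, rather than `sleep_for()` a relative delay, so that scheduling error does not accumulate from tick to tick:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Fires a tick roughly every 24 fps quarter-frame interval until the
// flag is cleared. In a MIDI context the event would be sent at the
// marked spot, e.g. with MidiOutput::sendMessageNow().
void tickLoop (std::atomic<bool>& running)
{
    using clock = std::chrono::steady_clock;
    constexpr auto interval = std::chrono::microseconds (10417); // ~1000/24/4 ms

    auto deadline = clock::now() + interval;

    while (running.load())
    {
        std::this_thread::sleep_until (deadline);
        deadline += interval;   // absolute deadline: no cumulative drift

        // ...send the event here...
    }
}
```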

I have seen your code but it’s not clear to me…

Suppose I want to send a Sysex message every 10 milliseconds at a sample rate of 44100 and buffer size of 256. I would do:

void PluginAudioProcessor::processBlock(AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
	midiMessages.clear();

	for (int s = 0; s < buffer.getNumSamples(); s++)
	{
		sampleCounter++; // member variable
		if (sampleCounter >= 441) // 441 samples = 10 ms at 44100 Hz
		{
			sampleCounter = 0;
			int offset = s; // Should be the exact sample position into the current buffer

			midiMessages.addEvent(MidiMessage::createSysExMessage(std::begin<unsigned char>({ 0x00 }), 1), offset);
		}
	}
}

I’m counting samples until reaching 441; at that moment, I create the Midi event and set the sample position at which that time occurred. After that I can start counting from 0 for the next event.

But I’m not getting what I expect… timestamps on the output are irregular.

No, you need to keep the counter outside of processBlock… Reread the last comment in my link (reproduced below), in particular the part above `sampleCounter += numSamples;`…

void processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages) override
	{
		int numSamples = buffer.getNumSamples();
		auto endOfThisBuffer = sampleCounter + numSamples;

		/*
		* every processBlock we check whether any notes fall in this time span [sampleCounter .. sampleCounter + numSamples]
		* N.B. we use while() here instead of if(), since there could (most probably) be more than one midi event at the same position
		* (you can check this logic by modifying timeToPlay and setting the notes to start at the same time)
		* */
		while (nextPlayTime < endOfThisBuffer)
		{
			/*this term is really important. It controls where in time the midi event will be placed
			* within the duration of the processBlock: 0 means at the start, numSamples - 1 at the very end.
			* If outside this interval it will most probably be disregarded and never heard.
			* */
			auto positionInBuffer = int(nextPlayTime - sampleCounter);

			midiMessages.addEvent(notesToPlay[currentNote], positionInBuffer);

			//get the next playTime
			if (++currentNote < timeToPlay.size())
				nextPlayTime = timeToPlay[currentNote];
			else
				nextPlayTime = std::numeric_limits<int64>::max();	//run out of playTimes, set nextPlayTime to eternity...
		}
		
		/*this is the key to be able to play a sample anywhere on the timeline i.e at any time after start
		* we add the played number of samples of every processBlock being played
		* after some amount of time (processblock calls) we reach the first playPos (nextPlayTime above)
		* */
		sampleCounter += numSamples;
	}

The sampleOffset specifies where inside the block the event will be output when the block eventually gets processed. So you can only add however many events will fit within the current block size.

So you need to take the block size, divide by the number of events that will fit inside it, add those events specifying the sampleOffset as you have done, but then carry over the remainder of the block size to the next block, add it to that block, and continue. From block to block the sampleOffsets will be different, because of the block size and sample rate (but usually cyclic in some way). Hope that makes sense.

At 48000 samples per second, 1 ms is 48 samples. If your block size is 512, then 512/48 means one block equals 10.6667 ms. You could fit two events of 10.417 ms in the first block (with the first one at zero), with some remainder to be carried over to the next block.

10.417 * 48 = 500 (500.016) - so one event every 500 samples

block size 512

event 1 - sample offset 0
event 2 - sample offset 500 - remainder 12

next block size 512
event 3 - sample offset 488 (500-12) remainder 24

next block size 512
event 4 - sample offset 476 (500-24) remainder 36 … etc.

Something like that. This may not be exactly correct; I know how I am doing it, but it’s difficult to explain and there are modulos involved… The idea is to carry over the values between blocks received.
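The carry-over can be expressed directly with a running offset; a plain-C++ sketch of the idea (names are mine; in a plugin the offsets would be passed to midiMessages.addEvent):

```cpp
#include <vector>

// 'nextOffset' is the position of the next event relative to the start
// of the current block; the leftover is carried into the following
// block by subtracting the block size.
struct CarryScheduler
{
    int interval;        // samples between events
    int nextOffset = 0;  // next event position, relative to block start

    explicit CarryScheduler (int intervalSamples) : interval (intervalSamples) {}

    std::vector<int> processBlock (int numSamples)
    {
        std::vector<int> offsets;

        while (nextOffset < numSamples)
        {
            offsets.push_back (nextOffset);
            nextOffset += interval;
        }

        nextOffset -= numSamples;  // carry the remainder to the next block
        return offsets;
    }
};
```

Running it with interval 500 and block size 512 reproduces the sequence above: offsets 0 and 500 in the first block, 488 in the second, 476 in the third.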

Maybe you already know that and I’m misunderstanding.

What if the block size is smaller than the sample interval?

In my example, I’m on 44100 / 256, should send an event every 441 samples.

Block 1: event 1, offset 0;
Block 2: event 2, offset 185 (441 - 256)? Remainder: 71
Block 3: nothing
Block 4: event 3, offset 114 (882 - 768) or 114 + 71?
Block 5: nothing
Block 6: event 4, offset ?

I have tried your method this way:

void processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages) override
{
	auto numSamples = buffer.getNumSamples();
	auto interval = 441.0;
	auto endOfThisBuffer = sampleCounter + numSamples;

	if (sampleInterval < endOfThisBuffer)
	{
		auto offset = int(sampleInterval - sampleCounter);

		midiMessages.addEvent(MidiMessage::createSysExMessage(std::begin<unsigned char>({ 0x00 }), 1), offset);
		sampleInterval += interval;
	}

	sampleCounter += numSamples;
}

It “more or less” works, but I still can’t see a precise 10 millisecond interval between events. I am noticing that it is tolerated by MTC readers, though; for example, Cubase now reads the Quarter Frame messages I’m generating from my program.

I’ve also tried excluding interference from Midi drivers and hardware, but when I try with the MOTU MTP I get that 99%-precise tick… With MTC at 25 FPS I see a Quarter Frame event every 10 milliseconds, sometimes 9, sometimes 11.

I think applications like Cubase apply some sort of tolerance, maybe 3-4 milliseconds; if a QF event exceeds this tolerance, the whole timecode frame is ignored. If more frames are erroneous, the MTC stream is ignored completely.

By the way, I don’t like the idea of a variable that keeps increasing indefinitely while the executable is running. I know that a uint64 increasing at a sample rate of 44100 would take more than 13 million years to reach 2^64… but you never know… :grin:

You don’t need to do it that way. Something like this:


member variable:
double samplesToProcess = 0.0;

//===========
void processBlock(AudioSampleBuffer& buffer, MidiBuffer& midiMessages) override
{
    auto numSamples = buffer.getNumSamples();
    auto interval = 441.0;
    samplesToProcess += (double) numSamples;

    while (samplesToProcess > interval)
    {
        // add your event; its offset within this block works out to
        // numSamples - int (samplesToProcess - interval)

        samplesToProcess -= interval;
    }
}
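For completeness, a plain-C++ sketch of how the in-block offset falls out of that running remainder (the struct and its names are mine; in the real processBlock the offset would be the second argument to midiMessages.addEvent):

```cpp
#include <vector>

// After samplesToProcess += numSamples, the counter holds the samples
// accumulated since the last event up to the end of this block, so an
// event that is due lands at numSamples - (samplesToProcess - interval).
// The first event fires one interval after start, as in the snippet above.
struct RemainderScheduler
{
    double interval;                 // samples between events (may be fractional)
    double samplesToProcess = 0.0;

    explicit RemainderScheduler (double intervalSamples) : interval (intervalSamples) {}

    std::vector<int> processBlock (int numSamples)
    {
        std::vector<int> offsets;
        samplesToProcess += (double) numSamples;

        while (samplesToProcess > interval)
        {
            offsets.push_back (numSamples - int (samplesToProcess - interval));
            samplesToProcess -= interval;
        }

        return offsets;
    }
};
```

With interval 441 and block size 256 this gives no event in the first block, then offsets 185, (nothing), 114, (nothing), 43 over the following blocks, matching the block-by-block question earlier in the thread.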

Well, bearing in mind that Midi is sent with technology from the eighties (serial transmission with a rather limited bitrate, 31250 bits/s?), I’d say a 1 ms tolerance is what you would expect. It probably gets even worse if you play some really bpm-extreme music at the same time… Or just do aftertouch…

But anyway, what are you trying to do? I would have thought that if you’re sending sync pulses you’d want to synchronise something, but you’re not telling us what.

I am generating MTC (Midi Time Code) Quarter Frame events, as I said in the original post.
My goal is a standalone application that decodes the LTC SMPTE audio signal coming from a tape machine, using libltc, and generates the MTC information that will be interpreted by Cubase or other DAWs.

I have a couple of old racks that already do this (MOTU MTP/AV and Opcode 8Port/SE) but I wanted a handy software solution.

It’s actually working right now. I press play on my Fostex E-16 and Cubase follows right along.

Glad to hear you found a solution that worked!

Just FYI… the program I was working on is this:

SMPTE Tool

A free utility for Windows and macOS that replaces a hardware synchronizer unit.