MIDI in problems with Plug-in Host Example


In JUCE 1.45, MIDI input to the plug-in host in the examples folder works, but roughly 20% of the MIDI note-off events are missed. In the latest tip (377), MIDI input from external devices to the plug-in host doesn’t work at all, although MIDI input from the keyboard in the plug-in host does work. I can also send internally generated MIDI to synthesizer objects in my own code using either version of JUCE. I’ve tried three different MIDI input devices with the same results, one of them hardware and the other two software: Novation X-Station (h/w), MIDI Yoke (s/w), and the keyboard bundled with Cakewalk Pro Audio (s/w). I’m running Windows XP. I’m not sure whether this is a bug or not, but any insight or help is appreciated.


It’s probably a bug, as I’ve been reworking the whole graph structure in the current version and haven’t finished yet.

Not sure about the missing 20% of note-offs though. That’s a bit odd. You’d expect things like that to either work or not work…


Another clue: more note-offs are missed when running the debug build than the release build. This seems strange, as MIDI processing should be less demanding than audio processing, but perhaps there’s some subtle timing error, or even a “<” where a “<=” belongs, that causes some MIDI events to be dropped.

A related question: I’m thinking of adding VST hosting capabilities to my project. Would you recommend using the released version of JUCE or the latest tip? I’m wary of using unfinished code; on the other hand, the new filter graph and AudioProcessor family of classes are very cool, and they handle details that would otherwise have to be coded by hand against the released version. For example, in the 377 tip version of the plug-in host, the actual rendering is handled inside the JUCE framework by AudioProcessorPlayer, whereas in the released version the plug-in host’s FilterGraph class handles the rendering.


Maybe they’re getting delivered in the wrong order? So if a note-off and note-on have the same timestamp, they could be getting sorted incorrectly? That may even be a driver-specific quirk.

The only bit of the tip code that isn’t stable at the moment is the plugin hosting stuff, so not sure what to recommend. Depends on when you’re planning to release, because eventually of course the tip will be the best thing to use, but perhaps not just right now…


In order to get MIDI input from an external device and not only from the virtual keyboard component, you might want to add the line marked with @mod to the constructor of GraphDocumentComponent:

[code]GraphDocumentComponent::GraphDocumentComponent (AudioDeviceManager* deviceManager_)
    : deviceManager (deviceManager_)
{
    addAndMakeVisible (graphPanel = new GraphEditorPanel (graph));

    graphPlayer.setProcessor (&graph.getGraph());

    keyState.addListener (&graphPlayer.getMidiMessageCollector());

    addAndMakeVisible (keyboardComp = new MidiKeyboardComponent (keyState,
                                          MidiKeyboardComponent::horizontalKeyboard));

    addAndMakeVisible (statusBar = new TooltipBar());

    deviceManager->setAudioCallback (&graphPlayer);

    // @mod+, midi callback
    deviceManager->addMidiInputCallback (String::empty, &graphPlayer);
}[/code]
Worked for me.


I just downloaded version 1.46 and tested the MIDI functions in the host to see whether my original complaint was fixed. I was pleased that note-offs were no longer being dropped, but dismayed to find that external MIDI input wasn’t working. Frank, thanks so much for the one-line fix.

My question: shouldn’t external MIDI input be enabled by default, that is, is this a bug?


I think it’s just something that I need to tweak when I get chance.


Got exactly the same problem here now in my own JUCE app (has nothing to do with graphs): I get hanging notes.

I’m using a MidiMessageCollector and have MIDI input coming either from a MidiKeyboardComponent or the hardware MIDI Input.

It doesn’t matter where the MIDI comes from (hardware MIDI or MidiKeyboardComponent): note-ons or note-offs sometimes get lost (or maybe they arrive in the wrong order?).

If there’s a fix in the new JUCE for this (I’m still using 1.45), in which file or class was this bug?


After some investigation, I’m more or less sure there’s something wrong in MidiMessageCollector::removeNextBlockOfMessages(). I rewrote the function like this (this is for 1.45), and now I don’t get any hanging notes anymore:

[code]void MidiMessageCollector::removeNextBlockOfMessages (MidiBuffer& destBuffer,
                                                      const int numSamples)
{
    // you need to call reset() to set the correct sample rate before using this object
    jassert (sampleRate != 44100.0001);

    const uint32 timeNow = Time::getMillisecondCounter();

    const ScopedLock sl (midiCallbackLock);
    lastCallbackTime = timeNow;

    if (! incomingMessages.isEmpty())
    {
        const uint8* midiData;
        int numBytes, samplePosition;
        MidiBuffer::Iterator iter (incomingMessages);

        while (iter.getNextEvent (midiData, numBytes, samplePosition))
            destBuffer.addEvent (midiData, numBytes,
                                 jlimit (0, numSamples - 1, samplePosition));

        incomingMessages.clear();  // the iterator doesn't consume, so clear explicitly
    }
}[/code]
I have no clue why the original function’s code is so complicated; I don’t understand everything in it.

BTW, I found out that Time::getMillisecondCounter() is only accurate to about 15 ms on my system!! So I get 0, 15, or 30, but nothing in between. Time::getMillisecondCounterHiRes() is fine though, so I used that one instead in my function. (I noticed this is also what the MidiMessageCollector on the SVN does.)


You’ve got rid of all the code that deals with the event list not syncing with the destination buffer. Sure, there could be a bug in my code that’s dropping notes, but the way to fix it isn’t to just delete everything!

I guess the problem might be in that numSourceSamples is based on the timer, which might be inaccurate, but I really can’t see anything in there that could lose any of the events in the list. It’ll shift their times around a bit though - maybe you’re filtering out events later on based on their timestamp, and losing them at that point?


I don’t do any filtering later. I’ll keep my code for now, since it works for me.


I can’t imagine that it’d work very well. If you get too many messages coming in, you’ll end up with timestamps that are out of range.


Yes, it does work and I don’t get any hanging notes anymore, which is absolutely logical, because the messages stay in the right order and nothing is discarded.

I grab them in the audioIODeviceCallback(). If the audioIODeviceCallback() is called at regular intervals, this is a very good way of doing it. Why would the timestamps be out of range? They will always be in the range 0 to numSamples-1, so that’s correct.

If the audioIODeviceCallback() is not called regularly (if it jitters), then the only right way to do it is the following:

  • All incoming MIDI messages are timestamped in milliseconds (double).
  • [This is purely theoretical; it would require some extra code to account for the fact that the sample rate is never exactly accurate (for example, it is really 44.10001 kHz).] In the audioIODeviceCallback() the following is done:
    • In the very first call of audioIODeviceCallback(), the current time in milliseconds is sampled into a variable TIME, and ((bufferSize*2)/samplerate)*1000 is added to TIME.
    • In this call and all subsequent calls of audioIODeviceCallback(), we remove from the MidiBuffer of the MidiMessageCollector only the messages whose timestamp < TIME and map them into the range 0 to numSamples-1. All other messages have to stay in there.
    • In this call and all subsequent calls of audioIODeviceCallback(), TIME is increased by 1000*(bufferSize/samplerate).

This is, in my opinion, the only possible algorithm to avoid MIDI jitter if the audioIODeviceCallback() is not called at regular intervals. The downside is that the MIDI latency will be twice the soundcard output latency. So no MIDI jitter anymore (except what comes from the drivers), but higher MIDI input latency.
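The scheme above could be sketched like this. All the names here are invented for illustration (this is not real JUCE code), and it assumes messages arrive in timestamp order:

```cpp
#include <cassert>
#include <cmath>
#include <deque>

// A message stamped with its wall-clock arrival time in milliseconds.
struct TimedMidiMessage
{
    double timestampMs;
    int note;
};

class JitterFreeCollector
{
public:
    JitterFreeCollector (double sampleRate_, int bufferSize_)
        : sampleRate (sampleRate_), bufferSize (bufferSize_) {}

    // Assumes messages are added in increasing timestamp order.
    void addMessage (const TimedMidiMessage& m) { pending.push_back (m); }

    // Call once per audio callback; 'out' receives (samplePosition, note).
    template <typename OutFn>
    void removeNextBlock (double nowMs, OutFn out)
    {
        const double blockMs = 1000.0 * bufferSize / sampleRate;

        if (firstCall)  // anchor TIME two buffers ahead on the first call
        {
            timeMs = nowMs + 2.0 * blockMs;
            firstCall = false;
        }

        // Deliver only messages older than TIME; younger ones stay queued
        // for a later callback, which is what removes the jitter.
        while (! pending.empty() && pending.front().timestampMs < timeMs)
        {
            const double offsetMs = pending.front().timestampMs - (timeMs - blockMs);
            int pos = (int) std::floor (offsetMs * sampleRate / 1000.0);
            if (pos < 0) pos = 0;
            if (pos > bufferSize - 1) pos = bufferSize - 1;

            out (pos, pending.front().note);
            pending.pop_front();
        }

        timeMs += blockMs;  // advance the reference clock by one buffer
    }

private:
    std::deque<TimedMidiMessage> pending;
    double sampleRate;
    double timeMs = 0;
    int bufferSize;
    bool firstCall = true;
};
```

Note the built-in cost mentioned above: a message only plays once TIME has passed it, so it waits roughly two buffers, i.e. the latency is about twice the buffer length, in exchange for stable timing.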


BTW, you’re right: I could add a line to my code that checks whether incoming messages are older than 2 seconds and, if so, clears the MIDI buffer, because the audioIODeviceCallback() didn’t pick them up.


Nope. The messages arrive in a different thread, using a different clock, so won’t be in sync with the audio callbacks. You could easily end up grabbing a set of messages that go from 0 to numSamples + some amount. That’s what my extra code is there for - when that happens, it just squashes them all to fit into the buffer size you’re asking for.


Yes, but since I limit them to the range 0 to numSamples-1 with jlimit, they will have valid timestamps and still be in the right order. Your code shows no benefit to me. I know my code will cause small jitters if the audio callback is not called at regular intervals, but so does yours; it doesn’t cure anything.

And the only way of doing it really right (getting no MIDI jitter) is how I described previously.


BTW, I found out why your code throws away messages (in JUCE 1.45). It’s in this line:

if (numSourceSamples > maxBlockLengthToUse)

In fact, if the millisecond counter doesn’t work well (which is the case on my computer when using Time::getMillisecondCounter() instead of Time::getMillisecondCounterHiRes()), then there can be false positives.

Example: the buffer size is set to 64 samples @ 44.1 kHz, which is about 1.45 ms.
On my computer the time returned by Time::getMillisecondCounter() only has an accuracy of about 16 ms, so msElapsed is always 0 or 16 in my case. But since 16 is greater than (1.45 * 8), or expressed differently, numSourceSamples > maxBlockLengthToUse, your function will throw away messages that should not be thrown away.

Since Time::getMillisecondCounterHiRes() seems much more accurate to me, this also explains why the new source code does not have the same problem.

Furthermore, I think this whole code for squeezing or shifting the times of MIDI messages doesn’t make any sense at all. It may even worsen the timing of played notes.

To get good timing, one must actually know the time T at which the first sample of the last buffer was PLAYED by the hardware (not the time the audioIODeviceCallback() occurred, which is not accurate), and use that as a clock to decide which MIDI messages from the incoming buffer should be used in the current audioIODeviceCallback() (all messages with timestamp < T) and which should stay in the buffer because they are too young (timestamp >= T) and should be used in the next audioIODeviceCallback().

I think that with most good ASIO drivers, the last buffer will have just started to play when the new audioIODeviceCallback() starts, and that’s why my function works well. There’s no need to squeeze or shift samples, because that won’t make the timing “better” if it was wrong from the start anyway.

Otherwise you should get the time T from bufferSwitchTimeInfo() in ASIO; see chapter II.6, “Media synchronisation”, in the ASIO SDK 2.2 PDF.
If I understand it correctly, that time T, expressed in ms, is the system time at which the new block will be playing, rather than the time at which bufferSwitchTimeInfo() occurred; so that’s exactly what one needs as a reference for deciding which MIDI messages to take and which to leave in the MIDI buffer for the next audioIODeviceCallback().

If the ASIO driver is very bad (ASIO4ALL, for instance) and issues two buffer switches more or less immediately one after the other, T would have to be not the time of sample position 0 of the last played block, but of sample position 0 of the block played before it.

I don’t know how this works with CoreAudio, though.

So, to finish: in my opinion it would make sense to add a further parameter to the audioIODeviceCallback() method, called “double bufferSwitchTimeMs”, containing the time, expressed in milliseconds, at which the buffer being computed will in fact be played by the hardware. That would be the ultimate clock reference for all MIDI.


Ok, well I had already changed it to use the high-res timer, so if I increase that maximum block size a bit, that should sort it all out.

It’s a nice idea about providing timer info to the audio callback, but I don’t think any of the audio APIs provide any info that could possibly be used for that. The best they can do is give you their latency figures.


I’m not sure you really understood what I meant, so I’d rather repeat it :)

What I mean is not really “timer info”, but rather information about the exact time, expressed in ms, at which the HARDWARE will play the first sample of the block that audioIODeviceCallback() is asked to compute.

That time is in the future, and of course does not correspond to calling Time::getMillisecondCounterHiRes() in the audioDeviceIOCallback(), which is a bad, inaccurate reference for audio/MIDI synchronisation.

ASIO provides the time at which the buffer will be played by the hardware; see chapter II.6, “Media synchronisation”, in the ASIO SDK 2.2 PDF.

I’m fairly sure CoreAudio must somehow provide it too, because it’s a really fundamental thing for audio/MIDI synchronisation, and because OS X is a media-centric OS.

If that’s not the case, the buffer switches would have to always start exactly when the hardware begins playing the previously computed buffer.


Ok, I see, but I haven’t time to get that working with ASIO, or to figure out how best to emulate it for the APIs that don’t have it. If you want to chuck me some code that does it, I might have a chance to hack it in, but I’m really busy right now.

I also think a better way to present it would be to add a method to the device class that returns the value, rather than breaking people’s code by changing the parameters to the callback. After all, it’s unlikely that many people would ever want to use the value.