Is there a process for requesting code changes to JUCE?

I want to build an AudioProcessor whose purpose is to do simple (and maybe later not so simple) MIDI transformations. I'd rather perform those transformations when MIDI events arrive (generally from the outside world and obviously happening at a far lower rate than audio events) rather than within the processBlock callback.

From reading the code, my sense is that the best way to do this would be to subclass the MidiMessageCollector so that we could implement our own version of "addMessageToQueue". However, in MidiMessageCollector, that method is not marked as virtual. Obviously I could (and in fact have done so) just mark that original method as virtual and be on my way but I'm wondering whether this is something that would make more sense as a general change.

Thoughts (and perhaps alternative approaches) appreciated.

Sorry, they go via the processBlock method for very good reasons. It'd be impossible to write processors that handle midi as individual events.

The main problem is that hosts and plugins just don't work like that. The timing of these events can't be just "whenever the method gets called", they need to be time-stamped and handled in blocks.

For example, how would you create something like a MIDI arpeggiator? Your plugin would need to have its own high-accuracy timer thread triggering its generated events in real time. How could you have a host where every plugin is randomly sending MIDI events on its own threads all the time? It'd be a threading disaster! Not to mention the impossibility of offline rendering and a thousand other problems!

Actually, all processing is always 'offline' to some extent, as everything is buffered and merged behind the scenes before it gets to your speakers.

It is only that a DAW displays its playhead with the same delay, which gives you the illusion of it being "real time" ;-)

Thanks, Jules --- but perhaps I didn't explain properly. I understand that MIDI events have to be "processed" in the context of processBlock to reach the plugin, but things such as keyboard splits (i.e., whether an incoming MIDI event should be allowed through in the first place), transposition, and channel mapping can be done as part of the original "receipt" of a MIDI message from the outside world, i.e., when the event is actually being added with addMessageToQueue. Indeed, quite a bit of MIDI mapping (changing aftertouch to pitchbend, changing one CC to another, and so forth) could be done there.

It seems silly to be testing the MIDI queue 44100 times/second and very occasionally finding an event which then MAYBE has to be transposed or blocked (if outside the split range) when you can do it just once when the event arrives from the outside world.

I did in fact make addMessageToQueue virtual and subclassed MidiMessageCollector so that the subclassed version of addMessageToQueue would be called and the simple filtering gets done there before invoking the superclass method. It seems to work beautifully.
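To illustrate the shape of that approach: the sketch below uses minimal stand-in types so it compiles without JUCE (the real classes are juce::MidiMessage and juce::MidiMessageCollector, whose addMessageToQueue is not virtual in stock JUCE; that is exactly the change being proposed). The split threshold and field names here are hypothetical simplifications, not the real API.

```cpp
#include <vector>

// Stand-ins for the real JUCE classes, just enough to show the idea.
struct MidiMessage { int channel; int noteNumber; };

struct MidiMessageCollector
{
    std::vector<MidiMessage> queue;
    virtual ~MidiMessageCollector() = default;
    // In stock JUCE this method is not virtual -- the proposal is to make it so.
    virtual void addMessageToQueue (const MidiMessage& m) { queue.push_back (m); }
};

// Filtering at "receipt time": drop notes outside the split range, then hand
// the survivors to the base class as usual.
struct FilteringCollector : public MidiMessageCollector
{
    int lowestAllowedNote = 48;   // hypothetical split point

    void addMessageToQueue (const MidiMessage& m) override
    {
        if (m.noteNumber < lowestAllowedNote)
            return;                                   // blocked by the split
        MidiMessageCollector::addMessageToQueue (m);  // normal path
    }
};
```

The subclass only intercepts the entry point; everything downstream (timestamping, block delivery) is untouched.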

If there's a deep reason why we shouldn't be doing this, it would be helpful to know why not. Otherwise I continue to make the recommendation that addMessageToQueue should be virtual.

The other possibility of course is to have listeners so that MIDI filters can be registered and called at the beginning of the original addMessageToQueue

The feedback is appreciated.

It seems silly to be testing the MIDI queue 44100 times/second and very occasionally finding an event which then MAYBE has to be transposed or blocked (if outside the split range) when you can do it just once when the event arrives from the outside world.


You really don't seem to understand the system that you're criticising.

It's a trivial couple of lines of code to write a loop that iterates the MIDI events and performs some kind of modifier function on each one. It adds zero overhead when there are no events, and I've no idea why you think anything would get called 44100 times per second!
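For concreteness, the loop being described might look like this. The event struct is a simplified stand-in (in JUCE you would iterate a juce::MidiBuffer and emit into another one); the transpose and split parameters are hypothetical examples:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for a timestamped MIDI event within one audio block
// (simplified; not the real juce::MidiMessageMetadata).
struct MidiEvent
{
    int     samplePosition;  // offset within the current block
    uint8_t status;          // e.g. 0x90 = note-on ch.1, 0x80 = note-off ch.1
    uint8_t data1;           // note number
    uint8_t data2;           // velocity
};

// Walk the block's events once, applying a modifier to each. When the
// vector is empty the loop body never runs, so idle blocks cost nothing.
std::vector<MidiEvent> transposeAndSplit (const std::vector<MidiEvent>& in,
                                          int semitones, int lowestAllowedNote)
{
    std::vector<MidiEvent> out;
    for (const auto& e : in)
    {
        const auto type   = e.status & 0xF0;
        const bool isNote = (type == 0x90 || type == 0x80);

        if (isNote && e.data1 < lowestAllowedNote)
            continue;                                  // blocked by the split

        auto copy = e;
        if (isNote)
            copy.data1 = static_cast<uint8_t> (copy.data1 + semitones);
        out.push_back (copy);                          // timestamp preserved
    }
    return out;
}
```

The key property is that the per-block cost scales with the number of events, not the sample rate.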

Sigh, I'm not criticizing anything, and I think the project is an incredible achievement. You're right, I misspoke about 44100/s; I forgot to divide by the sample buffer size. So yeah, if I'm using 16 samples/buffer, that's about 2,700 calls per second, certainly an order of magnitude smaller, and double that if I switch to 96k, something on my horizon.

Of course it is trivial to write a couple of lines of code to iterate the events, but even so, when feasible, it seems conceptually cleaner to process events WHEN they occur as opposed to testing ~2,700 times per second to see if something needs to be processed. I suppose I'm just naturally disposed to want as little going on as possible in real-time loops that are called so often.

...just a thought: it doesn't work in such a way that a synth can produce data whenever it wants, and nothing happens otherwise. It's the other way round: the audio driver's output keeps asking for sample data, and it does so at fixed times, namely whenever the last buffer has been played back via the D/A converter or any other output. Timur explains this very well in a CppCon video (this video and similar ones should really get a prominent link on the webpage; look for "Timur c++ audio" on YouTube).

Hypothetically, if I understand you right, you could just fill the output buffer with silence and then, whenever a MIDI event occurs, add your generated wave to the buffer. BUT you may not write to the buffer directly, because you don't want to lock the audio thread. To solve this, you would need to implement some kind of FIFO to decouple the output buffer from your synth's wave-creation thread. So in the end, you have exactly the same situation as you have now... And the current solution, having processBlock called to produce the wave blockwise using timestamped MIDI events, sounds way more stable to me...
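The kind of FIFO alluded to here is typically a single-producer/single-consumer ring buffer: the event thread pushes, the audio thread pops, and neither side ever blocks. JUCE ships juce::AbstractFifo for this; below is a framework-free sketch of the same idea, with an illustrative capacity:

```cpp
#include <atomic>
#include <cstddef>

// Minimal lock-free SPSC ring buffer. One slot is left unused so that
// "full" and "empty" can be told apart by comparing the two indices.
template <typename T, std::size_t Capacity>
class SpscFifo
{
public:
    bool push (const T& value)                 // producer (event) thread only
    {
        const auto w    = writePos.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;
        if (next == readPos.load (std::memory_order_acquire))
            return false;                      // full: drop rather than block
        buffer[w] = value;
        writePos.store (next, std::memory_order_release);
        return true;
    }

    bool pop (T& out)                          // consumer (audio) thread only
    {
        const auto r = readPos.load (std::memory_order_relaxed);
        if (r == writePos.load (std::memory_order_acquire))
            return false;                      // empty
        out = buffer[r];
        readPos.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    T buffer[Capacity] {};
    std::atomic<std::size_t> readPos { 0 }, writePos { 0 };
};
```

Note that this is exactly the machinery MidiMessageCollector already wraps up for you, which is the point being made: the decoupling exists either way.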

Yes, I understand that the audio buffers must be filled continuously. However, I'm not trying to create my own waveforms nor touch the audio buffers in any way.

Appreciate the feedback. The approach I mentioned at the beginning is working very well and spares processBlock() methods from having to deal with potentially dozens of tests for possible MIDI transformations.

I just thought others might want to do something similar but it did require a single change to JUCE, making the addMessageToQueue method in MidiMessageCollector be defined as virtual.


Let it go! It's not a workable idea!

Apart from the threading problems and other general nastiness mentioned above, this wouldn't even work for most plugin formats! I think only the VST3 wrapper even uses a MidiMessageCollector; the others just deal with all the MIDI buffering internally.