I've opened a MIDI file and merged all eight channels into a single MidiBuffer.

I've got it playing with my synth.

But is there any way to dynamically alter the volume or tempo?  It looks as though I need to reconstruct a new MidiBuffer every time my user changes the volume or tempo sliders.

This is how I'm constructing my MidiBuffer:

        for (int t = 0; t < M.getNumTracks(); t++) {
            const MidiMessageSequence* track = M.getTrack(t);
            for (int i = 0; i < track->getNumEvents(); i++) {
                const MidiMessage& m = track->getEventPointer(i)->message;
                int sampleOffset = (int) (sampleRate * m.getTimeStamp());
                if (sampleOffset > totalSamples)
                    totalSamples = sampleOffset;
                midiBuffer->addEvent(m, sampleOffset);
            }
        }

Maybe I can do this in the render callback -- if I'm running at double tempo, I need to copy a block of events with timestamps from startSample to startSample + 2*buflen, then correct each timestamp back into the range [0, buflen). But this involves storing a bunch of messages, and I'm wary of allocating on the callback thread.
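Something along those lines can be done without any allocation in the callback, as long as the output buffer is reserved up front. This is just a sketch under assumptions of mine -- the `TimedEvent` struct and all names are illustrative, not JUCE's, and it assumes the song is held as a time-sorted vector:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical event: absolute sample position plus raw MIDI bytes.
struct TimedEvent { int64_t samplePos; uint8_t data[3]; };

// Copy events falling in the tempo-scaled window
// [playhead, playhead + rate * numSamples) into 'out', remapping each
// timestamp back into the block-relative range [0, numSamples).
// 'out' should be reserve()d once up front so no allocation happens here.
void gatherBlock (const std::vector<TimedEvent>& song,
                  double rate, int64_t playhead, int numSamples,
                  std::vector<TimedEvent>& out)
{
    out.clear(); // keeps capacity, so no allocation if reserved beforehand
    const double windowLen = rate * numSamples;

    for (const auto& e : song)
    {
        if (e.samplePos < playhead)
            continue;
        if (e.samplePos >= playhead + windowLen)
            break; // song is sorted by time, nothing later can match

        TimedEvent copy = e;
        copy.samplePos = (int64_t) ((e.samplePos - playhead) / rate); // back into [0, numSamples)
        out.push_back (copy);
    }
}
```

At rate = 2.0 an event 199 samples past the playhead lands at block offset 99, so the whole double-speed window squeezes into one block.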

Also, I'm hoping to use this MidiBuffer for a piano roll style visualisation. But the iterator (https://www.juce.com/doc/classMidiBuffer_1_1Iterator) only seems to support moving forwards, which means that if I scrub to a random location in my MIDI file, it's going to be awkward to fill in notes that were already playing before they entered the visualisation window.
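One way around the forward-only iteration, sketched here with a hypothetical `Note` record of my own (built once at load time by pairing note-ons with note-offs), is a single forward scan that collects the notes still sounding at the scrub position:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical note record built once at load time: start/end in samples.
struct Note { int64_t start, end; int noteNumber; };

// One forward scan finds notes that began before 'scrubPos' but are still
// sounding -- i.e. the long notes that would otherwise be missed when
// jumping into the middle of the file.
std::vector<Note> notesActiveAt (const std::vector<Note>& notes, int64_t scrubPos)
{
    std::vector<Note> active;
    for (const auto& n : notes)
    {
        if (n.start > scrubPos)
            break;                      // notes are sorted by start time
        if (n.end > scrubPos)
            active.push_back (n);       // started before, ends after: draw it
    }
    return active;
}
```

It's a linear scan, but it only runs on a scrub (not per audio block), so the cost is usually acceptable.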

It's not a big problem; it's no big deal if my display loses the occasional really long note while scrubbing. Mainly curious...

EDIT: Doing a bit of digging, I can see that MidiBuffer::addEvent looks like it does some allocation. For every event that is added, everything timestamped afterwards is shunted to the right and the event is inserted.

Did team JUCE evaluate using a doubly linked list as an alternative design pattern?  e.g.

struct MidiMsg { MidiMsg* prev; MidiMsg* next; int numBytes; uint8_t data[MAX_MIDIMSG_BYTES]; };

And one-time:

MidiMsg* reserved = (MidiMsg*) malloc(MAX_ELTS * sizeof(MidiMsg));

And then in the render callback just set

MidiMsg* nextFree = &reserved[0];
reserved[0] = MidiMsg{}; // end marker

And then inserting an element at position k would just be:

MidiMsg* lmarker = &reserved[k - 1];
MidiMsg* rmarker = &reserved[k];
lmarker->next = nextFree;
nextFree->prev = lmarker;
nextFree->next = rmarker;
rmarker->prev = nextFree;

// finally, memcpy the element into nextFree->numBytes and nextFree->data

This would entirely avoid allocations in the callback.
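Filling in the gaps of that sketch (the pool bookkeeping, sentinel nodes, and all field names here are my own additions, just to show the idea compiles and works):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

constexpr int MAX_MIDIMSG_BYTES = 3;
constexpr int MAX_ELTS = 1024;

struct MidiMsg
{
    MidiMsg* prev;
    MidiMsg* next;
    int      numBytes;
    uint8_t  data[MAX_MIDIMSG_BYTES];
};

struct MsgPool
{
    MidiMsg  slots[MAX_ELTS];   // reserved once, never touched by malloc again
    MidiMsg* nextFree = slots;  // bump allocator over the reserved block
    MidiMsg  head {};           // sentinel nodes make insertion branch-free
    MidiMsg  tail {};

    MsgPool() { head.next = &tail; tail.prev = &head; }

    // Splice a new node in after 'lmarker'; no allocation on this path.
    MidiMsg* insertAfter (MidiMsg* lmarker, const uint8_t* bytes, int numBytes)
    {
        assert (nextFree < slots + MAX_ELTS); // pool exhausted otherwise
        MidiMsg* node    = nextFree++;
        MidiMsg* rmarker = lmarker->next;

        lmarker->next = node;   node->prev    = lmarker;
        node->next    = rmarker; rmarker->prev = node;

        node->numBytes = numBytes;
        std::memcpy (node->data, bytes, (size_t) numBytes);
        return node;
    }
};
```

Whether the extra pointer-chasing on read is worth the cheap insert is another question, as the reply below points out.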

But I can't imagine more than 50 elements appearing in a single callback, so maybe it doesn't matter which way it's done. If it works now, devices will only get faster.

It certainly isn't space efficient, but it would support binary searching as each element is of fixed size, and mixing tracks would be incredibly fast -- just a single allocation. And if say one track has 500 consecutive messages before the next one contributes one then the entire block can be memcpy'd over in one go.
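The binary-search point is easy to sketch: with fixed-size records (here a hypothetical layout where every event carries its timestamp inline -- not how MidiBuffer actually stores things), random access works, so std::lower_bound can jump straight to a sample position:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical fixed-size record: timestamp stored inline with the payload.
struct FixedMsg { int64_t samplePos; uint8_t data[8]; };

// Returns the index of the first event at or after 'samplePos'.
// O(log n) because every element has the same size.
size_t firstEventAt (const std::vector<FixedMsg>& events, int64_t samplePos)
{
    auto it = std::lower_bound (events.begin(), events.end(), samplePos,
                                [] (const FixedMsg& m, int64_t pos)
                                { return m.samplePos < pos; });
    return (size_t) (it - events.begin());
}
```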

One thing I noticed is that when I open a 5 min MIDI file and condense all tracks into a single buffer (code at the top), there is a noticeable delay -- maybe 2-3s on a really fast machine.


Did you evaluate using a doubly linked list as an alternative design pattern?  e.g.

This is terrible advice in terms of efficiency. Iterating a linked list is one of the slowest things you can do on a CPU: unless you use a custom allocator, the nodes will likely end up in completely different parts of memory, causing a cache miss on every iteration.

Take a look at some of these talks for some information on why cache locality is probably the most important thing for performance on modern machines:

CppCon 2014: Mike Acton "Data-Oriented Design and C++": https://www.youtube.com/watch?v=rX0ItVEVjHc

code::dive conference 2014 - Scott Meyers: Cpu Caches and Why You Care: https://www.youtube.com/watch?v=WDIkqP4JbkE

CppCon 2014: Chandler Carruth "Efficiency with Algorithms, Performance with Data Structures": https://www.youtube.com/watch?v=fHNmRkzxHWs (Note how he bashes std::map)

Understanding Compiler Optimization - Chandler Carruth: https://www.youtube.com/watch?v=FnGCDLhaxKU


The whole point of MidiBuffer is that it's one contiguous block of memory so it's very cache local and therefore fast to iterate. This is the kind of access you usually perform in audio processor callbacks so it's optimised for this use case.
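For illustration, here's a toy version of that kind of packed layout -- the field sizes and encoding are my guess, not JUCE's actual internals -- showing why one linear sweep over contiguous memory is all that iteration needs:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Each event is stored inline as [int32 sampleOffset][int32 numBytes][bytes...],
// one after another in a single contiguous byte array.
struct PackedBuffer
{
    std::vector<uint8_t> storage;

    void addEvent (int32_t sampleOffset, const uint8_t* bytes, int32_t numBytes)
    {
        size_t pos = storage.size();
        storage.resize (pos + 8 + (size_t) numBytes);
        std::memcpy (&storage[pos],     &sampleOffset, 4);
        std::memcpy (&storage[pos + 4], &numBytes,     4);
        std::memcpy (&storage[pos + 8], bytes, (size_t) numBytes);
    }

    // Forward-only iteration: a single linear pass over contiguous memory,
    // which is exactly the access pattern prefetchers and caches like.
    template <typename Fn>
    void forEach (Fn&& fn) const
    {
        size_t pos = 0;
        while (pos < storage.size())
        {
            int32_t offset, numBytes;
            std::memcpy (&offset,   &storage[pos],     4);
            std::memcpy (&numBytes, &storage[pos + 4], 4);
            fn (offset, &storage[pos + 8], numBytes);
            pos += 8 + (size_t) numBytes;
        }
    }
};
```

The variable-length records are also why the iterator can't walk backwards: there's no way to know where the previous event starts without scanning from the front.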

It sounds like you need a different type of structure for your model (and perhaps you could use this in your UI too) and then convert it to a MidiBuffer when it needs to be played back?

Choosing the right data structure for the right purpose is the key to getting good performance.


Thanks for the explanation!

btw, I wasn't offering advice! I'm in no position to advise these people, they know what they're doing -- this much is clear!  Rather I was trying to understand why a particular design was chosen.

It is starting to make sense now.

I've just realised that even if MidiBuffer's iterator were bidirectional, there is still the problem of a really long note that spans the entire visualisation window, i.e. it starts offscreen to the left and finishes offscreen to the right.


No worries, I didn't mean to put down ideas (I'm all for new ideas ;). Optimising is a difficult problem and is often counter-intuitive unless you have a good understanding of how modern CPUs work. We really should make some of those videos stickies...


If it's taking a long time to load the MIDI file have you tried profiling it to see where it's spending time?

Often simply pre-allocating Arrays can give huge performance increases because not only do you avoid the constant re-allocation but you maintain the cache hotness.