I've opened a MIDI file and merged all eight channels into a single MidiBuffer.
I've got it playing with my synth.
But is there any way to dynamically alter the volume or tempo? It looks as though I need to reconstruct a new MidiBuffer every time my user changes the volume or tempo sliders.
This is how I'm constructing my MidiBuffer:
```cpp
for (int t = 0; t < M.getNumTracks(); ++t)
{
    const MidiMessageSequence* track = M.getTrack (t);

    for (int i = 0; i < track->getNumEvents(); ++i)
    {
        MidiMessage& m = track->getEventPointer (i)->message;
        m.multiplyVelocity (0.01f);

        int sampleOffset = (int) (sampleRate * m.getTimeStamp());

        if (sampleOffset > totalSamples)
            totalSamples = sampleOffset;

        midiBuffer->addEvent (m, sampleOffset);
    }
}
```
Maybe I can do this in the render callback -- so if I'm running at double tempo I need to copy a block with timestamps from startSample to startSample+2*buflen, then I need to correct the timestamp of each back into the range [0, buflen) -- but this involves storing a bunch of messages, and I'm wary of allocating on the callback thread.
Also, I'm hoping to use this MidiBuffer for a piano-roll-style visualisation. But the iterator (https://www.juce.com/doc/classMidiBuffer_1_1Iterator) only seems to support moving forwards, which means that if I scrub to a random location in my MIDI file it's going to be awkward to fill in notes that were already playing before the start of the visible window.
It's not a big problem -- no real harm if my display loses the occasional very long note while scrubbing. Mainly curious...
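One way around the forwards-only iterator is a single forward pass from the start of the file up to the scrub point, tracking which notes are on but not yet off. A self-contained sketch, using a plain `NoteEvent` struct of my own rather than JUCE's types:

```cpp
#include <cstdint>
#include <set>
#include <utility>
#include <vector>

struct NoteEvent { int64_t sampleOffset; bool isNoteOn; int channel; int note; };

// Scan from the start of the file up to (but not including) the scrub
// position, keeping the set of (channel, note) pairs that have seen a
// note-on with no matching note-off yet. These are the notes the piano
// roll should draw as already sounding when it lands at scrubPos.
std::set<std::pair<int, int>> activeNotesAt (const std::vector<NoteEvent>& events,
                                             int64_t scrubPos)
{
    std::set<std::pair<int, int>> active;

    for (const auto& e : events)
    {
        if (e.sampleOffset >= scrubPos)
            break;                               // events sorted by offset

        if (e.isNoteOn)  active.insert ({ e.channel, e.note });
        else             active.erase  ({ e.channel, e.note });
    }
    return active;
}
```

The scan is O(n) in the number of events before the scrub point, so for interactive scrubbing you might cache the active set at regular checkpoints and only replay from the nearest one.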
EDIT: Doing a bit of digging, I can see that MidiBuffer::addEvent looks like it does some allocation. For every event added, every event timestamped after it is shunted to the right and the new event is inserted in place.
Did team JUCE evaluate using a doubly linked list as an alternative design pattern? e.g.
```cpp
struct MidiMsg
{
    MidiMsg* prev;
    MidiMsg* next;
    int numBytes;
    uint8_t data[MAX_MIDIMSG_BYTES];
};
```
And one-time:
```cpp
MidiMsg* reserved = (MidiMsg*) malloc (MAX_ELTS * sizeof (MidiMsg));
```
And then in the render callback just set:
```cpp
MidiMsg* nextFree = &reserved[0];
*nextFree = MidiMsg {}; // end marker
```
And then inserting an element between adjacent nodes lmarker and rmarker (the slots in `reserved` are in allocation order, not time order, so the neighbours come from walking the list) would just be:
```cpp
lmarker->next  = nextFree;
nextFree->prev = lmarker;
nextFree->next = rmarker;
rmarker->prev  = nextFree;
memcpy (nextFree->data, srcBytes, numBytes);
nextFree->numBytes = numBytes;
++nextFree;
```
This would entirely avoid allocations in the callback.
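Putting the fragments above together, here's a runnable sketch of the pool-backed list (the `MsgPool` name, the timestamp field, and the fixed sizes are all my own assumptions, not anything from JUCE):

```cpp
#include <cstdint>
#include <cstring>

constexpr int MAX_MIDIMSG_BYTES = 16;
constexpr int MAX_ELTS = 1024;

struct MidiMsg
{
    MidiMsg* prev = nullptr;
    MidiMsg* next = nullptr;
    int64_t timestamp = 0;
    int numBytes = 0;
    uint8_t data[MAX_MIDIMSG_BYTES] = {};
};

// The pool is sized once, up front; inserting only links existing slots.
struct MsgPool
{
    MidiMsg slots[MAX_ELTS];
    MidiMsg* nextFree = slots;
    MidiMsg* head = nullptr;

    // Insert keeping the list sorted by timestamp: walk to the last node
    // with an earlier-or-equal stamp, then splice the fresh slot after it.
    MidiMsg* insert (int64_t timestamp, const uint8_t* bytes, int numBytes)
    {
        MidiMsg* node = nextFree++;            // take a slot, no allocation
        node->timestamp = timestamp;
        node->numBytes = numBytes;
        memcpy (node->data, bytes, (size_t) numBytes);

        MidiMsg* after = nullptr;
        for (MidiMsg* p = head; p != nullptr && p->timestamp <= timestamp; p = p->next)
            after = p;

        node->prev = after;
        node->next = after ? after->next : head;
        if (node->next) node->next->prev = node;
        if (after)      after->next = node;
        else            head = node;
        return node;
    }
};
```

Finding the splice point is O(n) here, but the render thread never touches the heap; a real version would also need to guard against exhausting the pool.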
But I can't imagine more than 50 elements appearing in a single callback, so maybe it doesn't matter which way it's done. If it works as it is, devices will only get faster.
It certainly isn't space efficient, but it would support binary searching as each element is of fixed size, and mixing tracks would be incredibly fast -- just a single allocation. And if say one track has 500 consecutive messages before the next one contributes one then the entire block can be memcpy'd over in one go.
One thing I noticed is that when I open a 5 min MIDI file and condense all tracks into a single buffer (code at the top), there is a noticeable delay -- maybe 2-3s on a really fast machine.