I actually used that audio plugin as my starting point, although I deleted most of the synth-producing pieces (though not the method calls), since I’m really trying to generate MIDI output to virtual MIDI devices in response to incoming MIDI data.
I finally took the time to understand how the Jucer GUI builder integrates, and now I’m able to instantiate that (GUI-modified) plugin in MainStage and send MIDI events into it (I’m starting to play with the MidiInputCallback stuff) — but now I want to produce generative MIDI output with precise timing, and I’m still just reading through the classes (on my iPad) to see what’s available.
In the “old” days (i.e., the last time I did this was probably 15 years ago) I would create a high-precision timer and generate whatever I needed each time I got a tick. So far, the only mechanism I’ve found lets me put MIDI events in a buffer with timestamps (sendBlockOfMessages). The problem with that is I don’t actually know what I want to send UNTIL I reach the desired point in the future, so I can’t actually prefill that buffer with anything (and if the tempo changes while there are still events pending, I’d want to be able to adjust their times anyway).
However, I’m assuming that sendBlockOfMessages has very accurate timing internally, so it can dispatch each event at the appropriate moment. That leads me to believe there is a way to get at a very precise timing source (in a cross-platform manner) that I could use to trigger my own processing.
Does that make sense?