I understand that it must sound a bit odd. I was originally planning to use a timer, but was convinced by other people that I actually need to sync my events to the audio playhead (see my original post).
Basically, I am building a sequencer that can service external musical events and also its own internal state. So for instance, on beat #1 it could send MIDI note 55 through a MIDI channel to an external device, and on beat #2 it could send an internal message to update its own state, so that next time around, beat #1 will send MIDI note 56.
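To make that concrete, here's a minimal sketch of the two-beat loop I have in mind. It's plain C++ with no real MIDI I/O; `sendMidiNote` is just a hypothetical stand-in for whatever output call the engine provides, and `onBeat` would be driven by the audio playhead rather than a timer:

```cpp
#include <functional>

// Minimal sketch of the alternating external/internal beat behaviour.
// All names here are placeholders, not real Tracktion Engine API.
struct StepSequencer
{
    int noteForBeat1 = 55;
    int beat = 0; // 0-based internal counter; beat #1 == 0

    // Called once per beat by whatever clock drives the engine
    // (synced to the audio playhead, per the discussion above).
    void onBeat (const std::function<void (int note)>& sendMidiNote)
    {
        if (beat == 0)
            sendMidiNote (noteForBeat1); // beat #1: external MIDI event
        else
            ++noteForBeat1;              // beat #2: internal state update

        beat = (beat + 1) % 2;
    }
};
```

So the first pass sends note 55, the internal update bumps the state, and the next pass around beat #1 sends note 56.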
At the moment, I’m just trying to get my program to the point where it can send MIDI and/or OSC messages through the ports to other devices, and it won’t have any other audio capabilities in this prototype phase. However, phase 2 would be to expand it so that it can store and play samples internally, and also hopefully host VST plugins. I’m worried that if I used timers, it would make my future goals very challenging, even though they seem easier in the short run. Nevertheless, I might be completely off the mark with the Tracktion Engine code I’ve drafted here; it’s really just a stab in the dark at this point.