What is the right type of timer for my sequencer?

I’ve been doing research into the various methods that JUCE offers for sleep / wait functions, to help me build my sequencer. I’ve looked at Time::waitForMillisecondCounter(), Timer and HighResolutionTimer, but I suspect that there are still more JUCE methods that offer timed callbacks. I need some advice from more experienced programmers, because choosing the right sort of timer seems critical for my program, and so far I’ve only really worked with GUI- and Component-related JUCE code.

I understand that the right type of timer depends entirely on the situation, so I’ll describe the shape of my project here. If there are important details that I’m missing, please point this out.

  1. My project is a sequencer only, with no direct audio capability of its own. Once the prototype is complete, Phase 1 is to get it working with OSC and / or MIDI, so that it can communicate with other audio software (i.e. Pd) and act as a master sequencer. Phase 2 is to build in VST hosting capabilities, and also give it the ability to play samples stored in a sample bank. Phase 2 is very far away and I probably won’t get close to it for another year or two. However, I wouldn’t want to jinx myself at this early stage by writing code that will create headaches for Phase 2 (i.e. code that is not thread-safe).

  2. The sequencer I am building is not a regular grid sequencer which sends out ticks at regular intervals (à la Timer). Instead, the user cues custom messages, which are delayed by any number of milliseconds and spat out again at the right time. Crucially, the system needs to be able to deal with any number of messages with different delay times, all running in parallel. (I have an idea about how this might work: cued messages are stored in an ordered list, and the timer picks them off the front one by one. Side question: does such a class already exist?)

  3. All data is stored in a ValueTree hierarchy which is quite complex (already implemented), and there will be a certain amount of string parsing and interpreting done on the cued messages (not yet implemented). I can provide more details here if necessary; I’m bringing up ValueTrees here because I know they are not thread safe…
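The ordered-list idea from point 2 maps naturally onto a standard priority queue, which keeps the earliest due message at the front without re-sorting on every insert. A minimal sketch in plain C++ (the struct and function names are illustrative, not from any existing JUCE class):

```cpp
#include <cstdint>
#include <queue>
#include <string>
#include <vector>

// A cued message: fires when the clock reaches dueTimeMs.
struct CuedMessage
{
    int64_t dueTimeMs;    // absolute due time in milliseconds
    std::string payload;  // the user's custom message text

    // Invert the comparison so the earliest due time sits on top of the queue.
    bool operator< (const CuedMessage& other) const
    {
        return dueTimeMs > other.dueTimeMs;
    }
};

// Pops every message whose due time has passed, in chronological order.
std::vector<std::string> collectDue (std::priority_queue<CuedMessage>& queue,
                                     int64_t nowMs)
{
    std::vector<std::string> fired;
    while (! queue.empty() && queue.top().dueTimeMs <= nowMs)
    {
        fired.push_back (queue.top().payload);
        queue.pop();
    }
    return fired;
}
```

Messages can be pushed in any order; the queue hands them back sorted by due time, which is exactly the "pick them off the front one by one" behaviour described above.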

One question which seems crucial here is: do I need to use multiple threads? Given that my sequencer won’t be working with audio directly, I am hoping that I can keep everything on the message thread, use Timer or Time::waitForMillisecondCounter(), and keep the code-base nice and simple. However, I don’t want my program to freeze up, and as I mentioned already, I don’t want to be creating barriers for myself in Phase 2, so perhaps I should be thinking about more robust approaches at this stage.

If people here recommend that I use multiple threads, then I’m going to respond with a few follow-up questions about how I might do that. But for now, the questions are:

  1. Should I put my timer on its own thread?
  2. Which timer should I use?

That is a good question. Short answer: there is no Timer that would be the right one.

The Timers, sleep and all the things you mentioned revolve around execution time. But the interesting time here is the playback time (also known as presentation time). Those are different dimensions; they have no relation at all.

In real time, we can assume that when processBlock() or getNextAudioBlock() is called, the previous block has just been delivered, and the next one, which the playhead refers to, is shortly due. But we could be the first plugin or instrument called during that block, or the last one, so we really can’t know.

Hence all triggering must be synchronous to the playhead or to an audio clock (a counter you set up yourself, e.g. counting samples since the start of the program; or, if you create a timeline-like structure, something that can tell you what the next block’s start sample should be).
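Such an audio clock can be as simple as a sample counter advanced once per audio callback; converting milliseconds to samples ties scheduled times to it. A sketch in plain C++ (the struct name is illustrative):

```cpp
#include <cstdint>

// A minimal audio clock: counts the samples delivered so far.
struct AudioClock
{
    double  sampleRate    = 44100.0;
    int64_t samplesPlayed = 0;  // start position of the next block

    // Call once per audio callback, after processing a block.
    void advance (int numSamples) { samplesPlayed += numSamples; }

    // Convert a time in milliseconds to an absolute sample position,
    // rounded to the nearest sample.
    int64_t millisecondsToSamples (double ms) const
    {
        return static_cast<int64_t> (ms * 0.001 * sampleRate + 0.5);
    }
};
```

Because the counter only advances when a block is actually processed, it stays correct in non-real-time rendering too, which wall-clock timers do not.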

The problem becomes even more obvious if you consider that when your instrument is running in non-real-time mode (bouncing), the execution time has no connection to the audio whatsoever.

When you now design your live setup, all messages you receive (MIDI or OSC) need to get a timestamp as soon as they enter. You then queue your events, and the player compares the timestamp of the scheduled event with its own clock and plays it, if appropriate. 1)

About your question regarding this queue: the MidiMessageSequence class belongs in that context.

And about using multiple threads: you don’t necessarily need them, and I would only start using them once you have to. They will complicate things, so better to keep them out as long as possible.

  1. “Play it, if appropriate” means: add it to the outgoing MIDI buffer with a timestamp relative to that block of audio. And the timestamp shall not be later than the end of the block; otherwise keep it in the queue for the next block.
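The footnote’s rule can be sketched like this: an event is emitted only if its timestamp falls inside the current block, with an offset relative to the block start; anything later stays queued. Plain C++ with placeholder types standing in for the real MIDI buffer and event data:

```cpp
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

struct ScheduledEvent
{
    int64_t sampleTime;  // absolute timestamp, stamped on arrival
    int     note;        // placeholder for the real event data
};

// Emits events whose timestamps fall inside [blockStart, blockStart + blockSize);
// anything later stays queued for a future block. The queue is assumed to be
// sorted by sampleTime. Returns (note, offsetWithinBlock) pairs.
std::vector<std::pair<int, int>>
processBlockEvents (std::deque<ScheduledEvent>& queue,
                    int64_t blockStart, int blockSize)
{
    std::vector<std::pair<int, int>> out;
    const int64_t blockEnd = blockStart + blockSize;

    while (! queue.empty() && queue.front().sampleTime < blockEnd)
    {
        const auto& ev = queue.front();
        // Events that should already have played get clamped to offset 0.
        const int offset = ev.sampleTime <= blockStart
                               ? 0
                               : static_cast<int> (ev.sampleTime - blockStart);
        out.emplace_back (ev.note, offset);
        queue.pop_front();
    }
    return out;
}
```

In a JUCE processBlock() you would add each pair to the outgoing MidiBuffer at that sample offset; the sketch only shows the scheduling decision itself.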

It seems like your current and future plans are not compatible. If you are not currently doing any kind of interfacing/syncing with audio hardware, it may be very hard to do that later based on the code you are developing now. When you switch over to audio hardware output and hosting third-party plugins, you can no longer rely on the Timer and similar clock-time facilities; everything must instead be synced with the audio driver callbacks and the audio buffers you are going to receive.


I’ve read your responses several times. To summarize what I’ve learned: I was hoping to avoid using the audio thread, but now it seems that this would be a mistake, and that I will have to use the audio thread for timing. This should be done in the form of a timeline or playhead which synchronizes internal sequence events with audio blocks, and also timestamps incoming events.

If I make a rudimentary playhead like this now and then build my code around it, will I at least be heading in the right direction? Again, I am not looking for perfection at this stage; I am mostly trying to avoid the scenario where I have to re-write six months’ worth of code because of poor design choices.

@daniel, you have suggested that I stay away from multi-threading for as long as possible. This has always been my intention, but I don’t see how it will be possible if I’m going to be using the audio thread for timing. Surely I need some way of ensuring thread-safety. Or am I mistaken?

About the queue question: I did look at some of these MIDI classes, but what I need to cue are strings, not MIDI events. Perhaps I could store strings as meta-events on the MidiMessageSequence?

You should probably just use the Tracktion Engine, and let it handle all the threading and event dispatch. Then you can just tell it what MIDI you want played and when, and let it run.


This sounds like an excellent idea; I don’t know why I didn’t think of it myself. Thank you for making all these awesome tools open source!

I’ll start looking through the Tracktion StepSequencerDemo and see if I can get my head around it. I expect I’ll have a lot more entry-level questions as I proceed.

If you do all in the audio thread, that would qualify as single threaded, right? :wink:

You will always have the two threads; they are set up by JUCE automatically.
Basically, all code that is reached from processBlock() or getNextAudioBlock() runs on the audio thread; the rest you can assume runs on the message thread. There might be background things that are usually handled out of your reach (like BufferingAudioSource, or the AudioTransportSource which can do its buffering on a background thread), so those are not as dangerous.

Your challenge is only to see what resources or memory are accessed from both threads, and to make sure they don’t collide. Some things are already handled by contract; e.g. the API guarantees that no processBlock() call shall occur until prepareToPlay() has finished. Sure, you can end up with a broken host that violates that, but you can initially work on that assumption.
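A common pattern for handing messages from the message thread to the audio thread without locks is a single-producer single-consumer FIFO. JUCE’s AbstractFifo handles the index bookkeeping for this, but the idea can be sketched with atomics alone; a minimal, illustrative version in plain C++ (capacity and names are arbitrary):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Single-producer single-consumer ring buffer: one thread pushes, one thread
// pops, no locks needed. Usable capacity is one less than the array size.
template <typename T, std::size_t N>
class SpscFifo
{
public:
    bool push (const T& item)  // called from the message thread only
    {
        const auto w    = writeIndex.load (std::memory_order_relaxed);
        const auto next = (w + 1) % N;
        if (next == readIndex.load (std::memory_order_acquire))
            return false;  // full
        buffer[w] = item;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop()  // called from the audio thread only
    {
        const auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return std::nullopt;  // empty
        T item = buffer[r];
        readIndex.store ((r + 1) % N, std::memory_order_release);
        return item;
    }

private:
    std::array<T, N> buffer {};
    std::atomic<std::size_t> readIndex { 0 }, writeIndex { 0 };
};
```

The point of the pattern is that the audio thread never blocks: pop() either returns an item immediately or nothing at all, which is exactly the guarantee a real-time callback needs.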

Using a higher-level API like the Tracktion Engine is a good idea. I haven’t used it yet, so I don’t know how hard it is to learn compared to writing the engine directly with JUCE, but it is definitely worth checking out.

edit: I wrote a follow up post here, but then decided to start a new thread instead, with a Tracktion-engine tag. The new thread is here.