Scheme for JUCE: Making schedulers in/with JUCE?

Hi folks, I’m beginning to port my Scheme for Max & Pure Data externals over to JUCE to make an open source Lisp-based sequencing toolkit. It’s heavily inspired by Common Music, though it is a different engine (Common Music’s Grace front end is also built on JUCE, and also uses s7 Scheme as the high-level scripting language). Scheme for JUCE will target a somewhat different audience, intended more for folks who want to hack on the internals. It’s also intended to let one use the same domain code in s7 Scheme across Pd, Max, and JUCE for building sequencing, algorithmic music, and live coding tools.

Most of this will not be hard to port, as the lion’s share of the work has been done in the Max version. But in JUCE I will need to implement a scheduler. Making good schedulers is a tricky business, with lots of trade-offs for different patterns. I’d ultimately like to give the user/developer the opportunity to choose between a few, or at least tweak how it works. I’m curious whether people here who have implemented sequencing schedulers in JUCE apps have any tips, pointers to resources, or projects with code that would be worth looking at (JUCE or not). This will not be a plugin but a standalone app, btw (though I suppose making it possible to run as a plugin later might be worth exploring too). I’m just a big believer that hunting through prior art is generally a good plan…

thanks!
iain
(If you are interested in what I’m porting, the site is here, with links to docs and demo videos in the readme: GitHub - iainctduncan/scheme-for-max: Max/MSP external for scripting and live coding Max with s7 Scheme Lisp )

Hi!

Can you describe a little bit more what the requirements of your scheduler are?

JUCE has its own ThreadPool that can be helpful. A simple scheduler would be a tight loop (maybe even the audio callback) reading a queue of timed events and sending jobs to the thread pool.

Rather the TimeSliceThread and TimeSliceClient. The client can return how soon it wants to be called again, but I don’t think that already counts as a scheduler… :wink:

Sure. So what happens in Scheme for * is that one can write Scheme code for all high-level events. It includes the functions “delay” and “clock”, which enable one to schedule a Scheme callback in the future. In Max there is also a tick version; in Pd, just ms versions. In both, these use the host scheduler, which is pretty simple to use from C. The C API for both lets me register a function that receives a single pointer argument and will be called at the “right time” (“clock” repeatedly, “delay” one-shot). The host scheduler doesn’t do anything else for me; the rest is in portable Scheme. However, in both cases the scheduler is smart in that it self-corrects over the long haul for any quantization jitter (I haven’t looked at how, but obviously some mechanism where it knows the logical time in addition to the actual clock time).

Max and Pure Data use Music N/Csound-style block processing, where a vector of samples is calculated at once, and message events (such as those originating from my Scheme code) always start on a sample-vector boundary, introducing quantization jitter of up to half the sample vector. But in both cases the underlying scheduler keeps track of where the event should be, so this jitter never accumulates to more than half the vector. If you want no jitter, you set the sample vector to 1 sample, use more CPU, and get sample-accurate event boundaries. This actually works very well with Scheme: s7’s GC is pretty fast and can be locked out or run on demand, so in my tests, as long as the output sample buffer is reasonable, I get rock-solid timing, and the self-correction mechanism is reliable. Basically the latency needs to be big enough for the GC to run and finish, which typically takes 1-2 ms, so I usually run with 6-10 ms latency.
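Something like this, I assume (a sketch of my understanding, with hypothetical names — not the actual Max/Pd internals):

```cpp
// Hypothetical sketch of the logical-time mechanism described above,
// not the actual Max/Pd scheduler internals.
struct LogicalClock
{
    double logicalTimeMs = 0.0;   // when the event *should* fire
    double intervalMs    = 0.0;   // repeat interval of a "clock"

    // Checked once per sample vector: the actual firing moment is
    // quantized to the block boundary...
    bool shouldFire (double blockStartMs, double blockLengthMs) const
    {
        return logicalTimeMs < blockStartMs + blockLengthMs;
    }

    // ...but the next deadline is computed from logical time, not from
    // the quantized actual time, so the jitter never accumulates.
    void advance() { logicalTimeMs += intervalMs; }
};
```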

To begin with, I want to do the same thing, because the first order of business is portability. So I need to implement a scheduler for running C code (s7’s API is ANSI C) that self-corrects over the long haul and just executes a function that receives a pointer to a generic data structure. However… this does need to run in the same thread as the s7 interpreter, so that may be the tricky part. I realize this is not “normal audio dev practice”, but the whole point of the project is to allow one to run Scheme in the thread with tight timing.

Ok, so for this task: although it is really tempting to use Timers, you can’t, because of their lack of precision and the deviation you would accumulate.

A simple way to achieve this is, as I suggested, to have a queue of timed events that is read in a more or less tight loop, with each event dispatched to a thread pool.
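Something like this, roughly (a sketch assuming juce_core; the queue type and names are just illustrative):

```cpp
#include <atomic>
#include <functional>
#include <queue>
#include <vector>
#include <juce_core/juce_core.h>

// One scheduled action with its deadline, in the same units as
// juce::Time::getMillisecondCounterHiRes().
struct TimedEvent
{
    double dueMs;
    std::function<void()> action;
    bool operator> (const TimedEvent& other) const { return dueMs > other.dueMs; }
};

using TimedQueue = std::priority_queue<TimedEvent, std::vector<TimedEvent>, std::greater<TimedEvent>>;

// The "more or less tight loop": pop everything that is due and hand it
// to the pool, then sleep ~1 ms.
void runSchedulerLoop (TimedQueue& queue, juce::ThreadPool& pool, std::atomic<bool>& shouldExit)
{
    while (! shouldExit)
    {
        const auto now = juce::Time::getMillisecondCounterHiRes();

        while (! queue.empty() && queue.top().dueMs <= now)
        {
            pool.addJob (queue.top().action);   // the pool runs it off the loop thread
            queue.pop();
        }

        juce::Thread::sleep (1);
    }
}
```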

Making your scheduler self-correct is pretty simple. JUCE provides a high-resolution Time that you can check regularly to see where you are. Let’s say you run a loop that wakes up every 1 millisecond. You could, every N iterations, check that indeed 10 ms have passed. If you realize that 11 have, adjust your current time accordingly and dispatch the following events if needed.
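For instance (again just a sketch, using juce::Time::getMillisecondCounterHiRes() as the reference clock):

```cpp
#include <atomic>
#include <juce_core/juce_core.h>

// Sketch of the drift check: keep a logical "expected" time, advance it
// nominally each turn, and periodically resync against the real clock.
void correctedLoop (std::atomic<bool>& shouldExit)
{
    constexpr int checkEveryN = 10;
    double expectedMs = juce::Time::getMillisecondCounterHiRes();
    int iteration = 0;

    while (! shouldExit)
    {
        juce::Thread::sleep (1);
        expectedMs += 1.0;                 // nominally 1 ms per turn

        if (++iteration % checkEveryN == 0)
        {
            const auto actualMs = juce::Time::getMillisecondCounterHiRes();

            if (actualMs > expectedMs)     // e.g. 11 ms really passed, not 10
            {
                expectedMs = actualMs;     // resync the logical clock...
                // ...and dispatch any events whose deadlines fell in the gap
            }
        }
    }
}
```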

If your events are not super fast to execute and you need them done on time, then your events will need to incorporate some kind of anticipation mechanism.

Thanks, this is exactly the kind of info I wanted, to avoid dead ends. It sounds like I will be doing some rolling-of-my-own… :slight_smile: A couple of questions:

Is Time accurate enough to use for sample-accurate correction?

Or is the right approach to implement a block processor of my own? As in, register a callback that runs every X samples, clocked from the audio driver, and count samples, calculating off that? And if so, if you can point me at the right parts of JUCE for doing that, or things I might look at, that would be lovely.

thanks!

The JUCE tool I shared only goes down to the millisecond, so no, it is not enough. You could use STL chrono instead to get the system time in nanoseconds, which is more than enough.
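For example (a minimal sketch; steady_clock rather than system_clock, so the reference never jumps when the system time is adjusted):

```cpp
#include <chrono>
#include <cstdint>

// Nanosecond timestamps from the STL, suitable as a scheduling reference.
inline std::int64_t nowNanos()
{
    using namespace std::chrono;
    return duration_cast<nanoseconds> (steady_clock::now().time_since_epoch()).count();
}
```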

Regarding using the audio callback, well, there are pros and cons. It would allow you to synchronize your events with the audio driver (which may be needed if these are audio events?) but also add constraints, since you shouldn’t be blocking the audio thread.

Thanks dimbouche. It sounds like using the audio callback is the right method. This is not meant to be a polished commercial product but a kit for hacking your own tools, so the way it works in Max and Pd is a user-beware scenario too: nothing prevents you from stalling the audio thread and causing an overrun if you put too much Scheme code in, but if you are reasonable about it, you can reliably trigger audio events from Scheme with sample accuracy. To paraphrase Bill (the author of s7) on Snd, this is meant to be the Emacs of sequencers: if you want to bring the system to its knees with your elisp, you are empowered to do so. :slight_smile:

I take it there is no built-in JUCE scheduler that clocks off the audio callback, then?

I see, so yes, using the audio callback is the right method.

JUCE provides you with the device audio callback :grin: Look Here.
So your strategy should be to execute, in each callback, all pending events whose dates fall inside it. If the audio device is set to a 512-sample buffer size, you will execute events in time windows of 512 samples (and position their results accordingly in the buffer).

Of course, to get the audio callback, a device needs to be opened. So this won’t work if the user selects no audio output for your app (in case you plan to allow such a workflow).
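So, roughly (a sketch of the windowing idea, called once per audio callback; SampleEvent and the queue type are placeholders for whatever you end up with):

```cpp
#include <cstdint>
#include <functional>
#include <queue>
#include <vector>

// An event pinned to an absolute sample position in the output stream.
struct SampleEvent
{
    std::int64_t sampleTime;
    std::function<void (int offsetInBuffer)> fire;
    bool operator> (const SampleEvent& other) const { return sampleTime > other.sampleTime; }
};

using SampleQueue = std::priority_queue<SampleEvent, std::vector<SampleEvent>, std::greater<SampleEvent>>;

// Fire every event whose timestamp falls inside this block's
// [streamPos, streamPos + numSamples) window, at its exact offset within
// the buffer, then advance the running sample counter.
void processWindow (SampleQueue& events, std::int64_t& streamPos, int numSamples)
{
    const auto windowEnd = streamPos + (std::int64_t) numSamples;

    while (! events.empty() && events.top().sampleTime < windowEnd)
    {
        events.top().fire ((int) (events.top().sampleTime - streamPos));
        events.pop();
    }

    streamPos = windowEnd;
}
```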

Thanks, that should give me some stuff to chew on.

iain

BTW thanks very much for taking the time to provide the help! :slight_smile:


The Pure Data scheduler is roughly a while loop that triggers the callbacks of registered clocks (according to their set times), sleeping 1 ms at each turn (here is my version).
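Roughly this shape, in other words (an illustrative sketch, not the actual Pd source):

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

// A registered clock: a deadline and the callback to trigger at it.
struct RegisteredClock
{
    double dueMs;
    std::function<void()> callback;
};

// The loop itself: trigger whatever is due, sleep 1 ms, repeat. Real code
// would also unschedule or reschedule clocks after they fire.
void clockLoop (std::vector<RegisteredClock>& clocks,
                std::function<double()> nowMs,
                std::atomic<bool>& quit)
{
    while (! quit)
    {
        for (auto& clock : clocks)
            if (nowMs() >= clock.dueMs)
                clock.callback();

        std::this_thread::sleep_for (std::chrono::milliseconds (1));
    }
}
```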

You could make your own in a custom thread, or use a kind of ThreadPool with clocked jobs.

The main problem is to think twice about the best/easiest approach to the multi-threading questions (with GUI rendering/interactions occurring in the application message loop).

That is actually quite similar to the TimeSliceClient, which can return the number of ms until it wants to be called again.
The drawback of TimeSliceThread and TimeSliceClient is that TimeSliceThreads cannot be bundled into a group of serving threads; each TimeSliceThread has its own queue.
But it would be a great project to write a TimeSlicePool that keeps the API but can use the next free thread for the next pending client…
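For reference, the TimeSliceClient pattern looks roughly like this (a minimal sketch):

```cpp
#include <juce_core/juce_core.h>

// A client does a slice of work, then reports how many milliseconds to
// wait before its next call (a negative value removes it from the thread).
struct PulseClient : public juce::TimeSliceClient
{
    int useTimeSlice() override
    {
        // ...do one slice of work here...
        return 10;   // call me again in ~10 ms
    }
};

// Usage:
//   juce::TimeSliceThread worker ("scheduler");
//   PulseClient client;
//   worker.addTimeSliceClient (&client);
//   worker.startThread();
```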

Yes, you’re right there. I’m not sure how I’m going to do the multi-threading yet; I need to work my way through the JUCE materials and take a look at how Common Music/Grace does it too. It’ll be a long project, I’m sure!

thanks for the tips everyone

Sounds to me like the simplest solution is to use something like juce::Synthesiser, which already splits the block when a message arrives, so you can schedule messages and have them performed in a sample-accurate way without resorting to 1-sample buffers.

If you don’t like the idea of MIDI as the message format, you can just look at what it’s doing and apply your own block-splitting in processBlock.
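Something along these lines (a sketch in the spirit of what juce::Synthesiser::renderNextBlock does with its MidiBuffer; the render/apply calls are hypothetical placeholders):

```cpp
#include <algorithm>
#include <vector>
#include <juce_audio_basics/juce_audio_basics.h>

// A scheduled event with its sample offset inside the current block.
struct ScheduledEvent { int offsetInBlock; /* payload... */ };

// Render audio up to each event, apply the event, continue: the state
// change lands sample-accurately without shrinking the device buffer.
void renderWithSplits (juce::AudioBuffer<float>& buffer,
                       const std::vector<ScheduledEvent>& events)   // sorted by offset
{
    int pos = 0;
    const int numSamples = buffer.getNumSamples();

    for (const auto& event : events)
    {
        const int sliceLen = std::clamp (event.offsetInBlock, pos, numSamples) - pos;

        // renderSubBlock (buffer, pos, sliceLen);   // hypothetical: audio up to the event
        pos += sliceLen;
        // applyEvent (event);                       // hypothetical: the state change itself
    }

    // renderSubBlock (buffer, pos, numSamples - pos);  // tail of the block
}
```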

The issue of threading and audio-thread safety is of course always there, but I think that’s a problem that isn’t really tied to your scheduling problem, which by definition needs to happen during the audio processing block.
