Real-Time Multi-Threading in an Audio Application

Hi, I have read many topics on here about using multi-threading for DSP and other real-time audio purposes in audio plugins. The majority of people seem to say not to do it, since the host application already performs thread management and optimisation.

I am curious about this topic when it comes to general audio applications. Obviously multi-threading in real time comes with its own difficulties, such as thread synchronisation and avoiding allocations/OS calls. However, is there a general consensus as to whether to do this or not? And if so, are there any tips/tricks/rules to follow when doing it? DAWs and other audio applications do use multi-threading, so I guess there must be ways. It’d be great if anyone had any resources on this topic that they could point me to.

I didn’t find many resources on that specific topic.
There are some about concurrency in general (such as Ross Bencina’s and Jeff Preshing’s blogs, and/or the C++ concurrency book by Anthony Williams), but none really about how to make a DAW!
The only way I found was to dissect open source code (in my case mainly Pure Data and SuperCollider, since I was focusing on that kind of visual programming).

But I’m also very interested in getting ideas/tips about that from others here!

There is no general rule for using or not using multi-threading. But the basic problem mostly comes down to how realtime everything really needs to be. If everything needs to be as realtime as possible, you’re a bit screwed, because at some point you have to put all the processing together to hand it to the audio driver, and on non-RT OSes you have no guarantees as to when a chunk of work will be finished.

The more you can relax the “realtimeness”, the more latency you can add, and the more latency you can afford, the less it matters how long some work takes in the worst case and the more it becomes about how long it takes on average.

DAWs will go out of their way and try to relax realtime requirements, so they will e.g. run only the currently selected or “listening” track with low latency and do some sophisticated graph scheduling to pre-roll as much as possible. To do that, you need knowledge of the “future”, which the DAW has.

Most DAWs will also try to schedule all the plugins and stuff in a way so that they can somehow balance the load on each core reasonably, obviously to get most out of the available processing power versus time constraints. The more they know and are able to predict how much time a plugin needs to process a frame, the better this works. And of course, to do that, the DAW needs to be in charge of things.

When you do multithreading in the plugin as well, it becomes much harder for the DAW to measure and predict what you’re doing, because it doesn’t “see” the additional threads you spawn, and it doesn’t understand that you sometimes have to wait for some synchronization. Also, your plugin’s extra threads now compete with those that the DAW is trying to manage, making it much harder for anyone involved to manage what’s going on.


Thanks for the answers!

@nicolasdanet - Here are some more resources on communicating between realtime and non-realtime threads:

Timur Doumler - Using Locks in Real-Time Audio:
https://timur.audio/using-locks-in-real-time-audio-processing-safely

Fabian Renn-Giles & Dave Rowland - Real-Time 101:

@hugoderwolf - I see, so how does a DAW try to determine how much time a plugin will take to process a frame?

The main issue I see it coming down to is making sure that all additional real-time threads have completed their work by the time the audio callback needs to return. One way to handle this is to have the threads regularly check whether they need to exit. Alternatively, or in addition, the program can keep a backup (such as the previous block of output) to fall back on if one of its threads isn’t ready in time. Obviously this is not always possible.
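To make the fallback idea concrete, here is a minimal sketch (all names are made up for illustration, not from any real codebase): a background job publishes its result through an atomic flag with release/acquire ordering, and the audio callback takes the result only if it was ready in time, otherwise it reuses a fallback block without ever blocking:

```cpp
#include <array>
#include <atomic>

// Hypothetical background job: a worker fills 'result', then publishes
// it by storing 'ready' with release ordering.
struct BackgroundJob
{
    std::array<float, 64> result {};
    std::atomic<bool> ready { false };
};

// Called on the audio thread: never waits, never blocks.
// The acquire load pairs with the worker's release store, so if 'ready'
// reads true, 'result' is guaranteed to be fully written.
inline const std::array<float, 64>& fetchOrFallback (const BackgroundJob& job,
                                                     const std::array<float, 64>& fallback)
{
    if (job.ready.load (std::memory_order_acquire))
        return job.result;

    return fallback; // worker missed the deadline: reuse the last block
}
```

The key property is that the audio thread only ever polls; the decision of what to do when the worker misses its deadline (reuse, silence, crossfade) is an application choice.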


It’ll probably just use a stopwatch. :wink:

I’m not sure what techniques are used exactly. But consider the following: Logic, for example, spawns as many worker threads as there are cores available (there is also a setting that lets you reduce the number of worker threads). It then has a graph of audio paths (tracks, plugins, busses, mixers) that it somehow has to wrestle with. So it’ll try to divide the graph into sections that can be processed in parallel. By adding some additional buffers and latency (and compensating for it, of course), it could create more opportunities for parallelization if needed.

You then have a bunch of processes that you somehow need to distribute among your worker threads. The more evenly the scheduler manages to distribute that load across the worker threads, the more CPU it can utilize before dropouts happen. But that’s easier said than done, especially if you have many plugins with highly variable processing times. The more predictable the plugin process, the better this will work.
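As a toy sketch of that last step (this is not Logic’s actual scheduler, just the simplest possible distribution scheme): once a set of graph nodes is known to be safe to run in parallel, worker threads can claim them one at a time through a shared atomic index, which balances the load dynamically without any per-node assignment logic:

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Runs every node in 'readyNodes' exactly once, spread across
// 'numWorkers' threads (including the calling thread). Each worker
// claims the next unprocessed node via an atomic counter, so fast
// workers naturally pick up more nodes than slow ones.
void processInParallel (std::vector<std::function<void()>>& readyNodes, int numWorkers)
{
    std::atomic<size_t> next { 0 };

    auto work = [&]
    {
        for (;;)
        {
            const size_t i = next.fetch_add (1, std::memory_order_relaxed);

            if (i >= readyNodes.size())
                return; // no nodes left to claim

            readyNodes[i]();
        }
    };

    std::vector<std::thread> workers;

    for (int w = 1; w < numWorkers; ++w)
        workers.emplace_back (work);

    work(); // the calling thread participates too

    for (auto& t : workers)
        t.join();
}
```

A real DAW would of course keep the workers alive between blocks and respect dependencies between graph sections rather than spawning threads per batch; this only illustrates the load-balancing idea.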

Also, Logic reserves the last core for the actual “realtime” path. That means if you have an instrument track selected, this track and all following busses up to the master, incl. all plugins on there, are run on that last core (which I assume is also responsible for mixing the master bus and finally hauling it all to the audio device).

That means you somehow have to deal with unfinished work. In practice, the only way to tolerate unfinished work is when you don’t actually need the data in the first place. If results are optional, many constraints can be lifted. :wink:


Although it doesn’t go extremely into depth, @dave96 talked a bit about multi-threaded audio graph rendering strategies in his ADC talk on Tracktion Graph (Introducing Tracktion Graph: A Topological Processing Library for Audio - Dave Rowland - ADC20 - YouTube), which I found quite interesting and much more straightforward than what I always had in mind when I thought about multi-threaded rendering :wink:


Please bear in mind that the version of multi-threading I gave in that talk is the starting point for a multi-threaded audio graph. Once you start optimising that process it gets a bit more complicated.

Ah ok, so the main way of approaching this is through worker threads that loop and look for tasks? Does anyone have any alternatives to this?

I’ve seen a few people talk about exponential back-off when it comes to worker threads. This involves progressively slowing down the looping speed of the thread as it looks for work. Every time a job is run, the back-off starts again from the top. Any ideas about using that in relation to real-time audio?
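For what it’s worth, the basic mechanism is small; here is a sketch (constants are illustrative, not tuned values): each failed poll doubles the wait up to a cap, and finding work resets it, so a worker spins tightly when jobs arrive frequently but backs away quickly when idle:

```cpp
#include <cstdint>
#include <thread>

// Exponential back-off for a polling worker: the number of yields
// between polls doubles after each empty poll, capped at maxSpins,
// and resets to 1 as soon as work is found.
struct Backoff
{
    uint32_t spins = 1;
    static constexpr uint32_t maxSpins = 1024;

    void wait()
    {
        for (uint32_t i = 0; i < spins; ++i)
            std::this_thread::yield(); // or a CPU pause instruction

        if (spins < maxSpins)
            spins *= 2; // back off further next time
    }

    void reset() { spins = 1; } // call after successfully taking a job
};
```

The trade-off for real-time use is wake-up latency: the deeper the back-off, the longer a newly ready node may sit waiting, which is exactly why it is usually combined with an explicit wake-up mechanism rather than used alone.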

That’s what we do in the ThreadPoolRT here: tracktion_engine/tracktion_graph_NodePlayerThreadPools.cpp at tracktion_graph · Tracktion/tracktion_engine · GitHub

The problem with that is getting the worker threads to be “awake” when new nodes become ready to be processed. And the strategy required will vary a lot by OS, buffer size, power requirements etc.

We have six options in the most recent Waveform builds and most people say the one using a condition variable or semaphore combined with a bit of spinning backoff works the best.


There are other approaches if your nodes are more consistent in how long they take to process.


These are exactly the kind of things I was looking for. Thanks!