Multi-threading advice needed

My plugin is a multichannel sequencer. I would appreciate any suggestions on how to multi-thread it considering the following design:

Each GUI sequencer channel has its own AudioOutEngine object which is the brain under the hood. PluginProcessor holds an OwnedArray of AudioOutEngine like this:
OwnedArray<AudioOutEngine> audios;

PluginProcessor also holds a sorted set to keep track of the currently active channels:
SortedSet<int> activeChannels;

I use this set because I don’t want the plugin to waste time on empty channels. Whenever the user turns on a step (GUI click), the current channel number is added to the sorted set (just the channel index, not the object itself), and if the user has turned off all of a channel’s steps, the channel number is removed from the active-channel list.

AudioOutEngine does the actual processing in the following function:
void AudioOutEngine::sendToOutput(AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
which takes as references the well-known parameters from PluginProcessor::processBlock and writes its output into them. This is what the call site looks like:
void PluginProcessor::processBlock (AudioBuffer<float>& buffer, MidiBuffer& midiMessages)
{
    for (int i = 0; i < activeChannels.size(); ++i)
        audios[activeChannels[i]]->sendToOutput (buffer, midiMessages);
}

To put it in words: for each currently active GUI channel (a channel is active when it has one or more steps about to play soon), the program repeatedly checks whether it is time to send some audio out. The time is right when the clock is over an active sequencer step (On == true); AudioOutEngine then processes some audio stored in an AudioBuffer and adds its output directly into the buffer passed to PluginProcessor::processBlock, which sums all active channels into the one AudioBuffer<float>& buffer.
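A stripped-down, non-JUCE sketch of that summing, with std::vector<float> standing in for AudioBuffer<float> and a made-up FakeEngine in place of AudioOutEngine:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <vector>

// Each engine renders its own audio and adds it into the shared output buffer.
struct FakeEngine
{
    float value = 0.0f;   // pretend this is the sample data for the current step

    void sendToOutput (std::vector<float>& buffer)
    {
        for (auto& sample : buffer)
            sample += value;          // sum into the host buffer, don't overwrite
    }
};

void processBlock (std::vector<float>& buffer,
                   std::vector<FakeEngine>& audios,
                   const std::set<int>& activeChannels)
{
    for (int channel : activeChannels)          // only channels with active steps
        audios[(size_t) channel].sendToOutput (buffer);
}
```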

I hope that all makes sense. Now for my question:
Where and how should I use multi threads?

  1. A different thread for each active channel, waiting for a note to get hit and closing the thread when all the channel’s notes have been turned off by the user.
  2. A different thread for each note that gets hit, closing the thread when the note has finished playing.
  3. A different thread for each sendToOutput (or processBlock) call.
  4. Any other option?

Also - which is better to use in this case: ThreadPool or TimeSliceThread? What is the main difference between them?


If it is a plugin: not at all.

Rationale: If your sequencer is used in a DAW, there are many other things already parallelised by the host. So chances that some cores idle around are minimal.
To use multiple threads feeding into the audio thread, you need to synchronise those threads, which is a substantial amount of overhead.

Conclusion: background threads in a plugin are only useful for non-audio-producing work, e.g. visualisation (an analyser), loading samples, etc.

That is my opinion, shared by many other audio developers here. You might come to different conclusions, and that is fine…


Thanks Daniel.

A while ago I was struggling with a different problem of summing all the channels into one main output, and it was you who advised me to use a TimeSliceThread in general.

I did not have time to consider multi-threading back then because I was too busy just trying to get the audio out… Now that it is working, I am trying to improve performance, because at about 5 active channels the app starts to crackle. I thought this might be happening because I go through the active channels one by one, waiting for each channel to finish processing before moving on to the next. So in your opinion, I should be looking elsewhere for optimisation opportunities instead of offloading work from the message thread?

You shouldn’t be doing any audio work on the message thread to begin with. How is your code actually working at the moment? 5 channels of audio should be trivial to handle; you should probably be able to do dozens of channels easily without any particular optimisations, unless your channels have a lot of special processing going on. (For example, if you host 3rd-party plugins, all bets are off; they can obviously use any amount of CPU…)


Sorry, I obviously meant the audio thread, but just to be clear, I am referring to the processBlock call of an AudioProcessor. My code goes through several AudioBuffers on each call with some processing (not too heavy, I assume), adding each output to AudioBuffer<float>& buffer.
Most of the time there is no latency, but when it does occur, it happens when several channels are active. Never just one.

I think before you try to implement a useful multithreading scheme you should look into where your performance gets burnt. Playing back 5 buffers shouldn’t be a problem usually. So either you are doing something very CPU-demanding or your code isn’t wait-free. As long as you don’t know pretty exactly what your problem is, multithreading is very very likely not the solution for your problems, but a good source of additional problems on top.

Did you try profiling the code to see where it spends its time?


Yes, I did profile the code, and my suspect actually isn’t on the red list. But as I said, adding channels and voices (each channel can have several voices) is definitely the cause of the problem. It is probably, as you mentioned, some non-wait-free issue which can be resolved with some logic rather than with more threads.
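For the record, a common source of such non-wait-free behaviour is locking a mutex in processBlock while the GUI edits shared state. One usual fix, sketched here under the assumption of a fixed maximum channel count (the names are made up), is to replace the shared set with per-channel atomic flags that the audio thread only reads:

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>

constexpr int maxChannels = 64;

// One flag per channel; the GUI thread writes, the audio thread only reads.
// Global (static storage), so all flags start out false.
std::array<std::atomic<bool>, maxChannels> channelActive;

// GUI thread: toggling a channel just flips a flag - no locks taken.
void setChannelActive (int channel, bool active)
{
    channelActive[(size_t) channel].store (active, std::memory_order_release);
}

// Audio thread: wait-free scan, safe to call from processBlock.
int countActiveChannels()
{
    int count = 0;
    for (auto& flag : channelActive)
        if (flag.load (std::memory_order_acquire))
            ++count;
    return count;
}
```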

I still wonder though, for general discussion, what are the differences between a TimeSliceThread and a ThreadPool.

Thanks everyone for your input.

The number of channels shouldn’t really affect latency(*) but obviously it will affect the total CPU usage. I am quite curious what is actually happening in your code…

(*) Unless there’s for example some routing involved between channels that cause latency on purpose.


Consider this: I have 3 shortlists. One keeps track of all active channels, one keeps track of the currently turned-on steps in a sequencer channel (one list per active channel), and one keeps track of the number of currently playing voices on a channel (again, one per active channel).
These lists are updated dynamically as I click on the GUI and as audio samples are triggered on and off by the clock. They also rely on each other to perform logic, and some can control others using change broadcasters and listeners. Most importantly, they decide when audio samples start and end, which is where the crackling occurs.
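One common way to keep such cross-thread updates from glitching the audio is to push GUI edits through a single-producer single-consumer FIFO and apply them at the top of processBlock, instead of letting listeners mutate the lists directly (JUCE has AbstractFifo for exactly this). A minimal standalone sketch, with a made-up StepEvent payload:

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>

struct StepEvent { int channel; int step; bool on; };   // illustrative payload

// Single-producer (GUI) / single-consumer (audio) ring buffer.
class EventFifo
{
public:
    bool push (const StepEvent& e)                      // GUI thread
    {
        auto w = write.load (std::memory_order_relaxed);
        auto next = (w + 1) % capacity;
        if (next == read.load (std::memory_order_acquire))
            return false;                               // full: caller retries later
        events[w] = e;
        write.store (next, std::memory_order_release);
        return true;
    }

    bool pop (StepEvent& e)                             // audio thread, wait-free
    {
        auto r = read.load (std::memory_order_relaxed);
        if (r == write.load (std::memory_order_acquire))
            return false;                               // empty
        e = events[r];
        read.store ((r + 1) % capacity, std::memory_order_release);
        return true;
    }

private:
    static constexpr std::size_t capacity = 256;
    std::array<StepEvent, capacity> events {};
    std::atomic<std::size_t> read { 0 }, write { 0 };
};
```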

There are many places in the code that I need to check first before I can call this a hot function problem. Perhaps I will be able to create a compact version that captures this glitch for your review.

Sure, offloading the loading of files or prefetching of samples still makes sense. But not the actual realtime stuff. If you are streaming media and you don’t need to react in low latency, then using BufferingAudioSource to have always a longer block of audio available makes sense, as it relieves the audio thread of some work. But to start synth sounds on an extra thread, when you need it asap, is not really a good option.

But moving on to the end of the discussion:

You would usually use a TimeSliceClient for something ongoing, like the BufferingAudioSource: a task that naturally stops and waits for certain conditions to continue (like filling the buffer and waiting until new data is needed).
The ThreadPool and ThreadPoolJob are for something that runs once and finishes, e.g. loading a session.
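To illustrate that difference with plain standard C++ (a sketch of the two usage patterns, not JUCE’s actual implementations; the names loadSessionJob and PrefetchClient are invented): a pool job runs once to completion, while a time-slice client is called back repeatedly and does one small chunk of work per call.

```cpp
#include <atomic>
#include <cassert>

// ThreadPoolJob-style: one task, runs once on a pool thread, then it's done.
void loadSessionJob (std::atomic<bool>& finished)
{
    // ... load files, allocate buffers ...
    finished.store (true);
}

// TimeSliceClient-style: called again and again by the shared thread; each
// call does a small chunk (e.g. refills part of a buffer) and returns quickly.
struct PrefetchClient
{
    int samplesBuffered = 0;

    int useTimeSlice()                        // returns ms until next call wanted
    {
        samplesBuffered += 512;               // refill one small chunk
        return samplesBuffered < 4096 ? 0     // still hungry -> call back soon
                                      : 10;   // buffer full -> check again later
    }
};
```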
