Hi,
I want to make a plugin that needs to do a lot of processing which does not directly relate to the audio samples. It is a complex algorithm that has to run across several audio blocks, and when processBlock() is called it needs to use the result of that processing.
My question is whether I can set up extra threads for this processing.
I assume the real-time audio processing (processBlock()) is done on a separate (real-time) thread.
Can I create some threads in the initialization phase without problems and use them at will? Or does this conflict with the plugin host?
Can I make this extra thread a real-time thread so that I can be sure the processing can be done?
No, this isn’t the idea of an audio plugin. Audio processing has to be done while processBlock is called. You can use different threads inside that method, but this also adds some CPU cost, because you need to synchronize the threads and collect their results. It also introduces a lot of complexity. I wouldn’t do that.
Or does this conflict with the plugin host?
Yes. The host does not know anything about your threads, so you take away CPU power it needs for other plugins. It gets even worse when the user runs more than one instance of your plugin. Some PCs have only 4 cores, and your thread may even end up running on the same core.
Threads can be handy for preparing data for the UI or loading file data. Otherwise I wouldn’t use them in a plugin.
I would try to do all processing synchronously in the processBlock method. Look for a different solution if that doesn’t work. Measure and optimize your code.
I could chop all this (non-time-domain) processing into small pieces and do some of it on each processBlock() call.
I would need some high-resolution CPU clock counter to make sure I only use up to some maximum CPU time in each processBlock() call. This raises the questions:
how much can I add?
is there some CPU tick counter that can be used for this? (see the sketch below)
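I guess something like std::chrono::steady_clock could already serve as that counter; it is monotonic and high-resolution on the platforms I know of. A rough sketch of what I have in mind (the names are just examples):

```cpp
#include <chrono>

// steady_clock is monotonic (it never jumps backwards), so it is safe for
// measuring short durations inside processBlock().
using Clock = std::chrono::steady_clock;

// Milliseconds elapsed since 'start', as a double.
static double elapsedMs (Clock::time_point start)
{
    return std::chrono::duration<double, std::milli> (Clock::now() - start).count();
}

// Idea: take a time stamp at the top of processBlock(), do the time-domain
// work, then keep checking elapsedMs() while doing the extra processing.
```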
You say you would not use extra threads during the processBlock() call. But what if you have a very special plugin to which you would like to give 8 of the 16 available CPU cores (just an example)?
You never know when processBlock happens. You can’t use a timer for this.
Create the threads when you start the plugin. When processBlock starts, you assign the work to your threads. Then wait in processBlock until each thread has done its work.
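Very roughly it could look like this; just a sketch with made-up names, using a mutex and condition variable because that is the simplest way to show the idea (it is not the most real-time-friendly way to wait):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// One persistent worker that sleeps until processBlock() hands it a job,
// then signals back when the job is finished.
class Worker
{
public:
    Worker()  { thread = std::thread ([this] { run(); }); }

    ~Worker()
    {
        { std::lock_guard<std::mutex> lock (m); quit = true; }
        cv.notify_all();
        thread.join();
    }

    // Called from processBlock(): hand the worker its share of the block.
    void startJob (std::function<void()> fn)
    {
        std::lock_guard<std::mutex> lock (m);
        job = std::move (fn);
        done = false;
        cv.notify_all();
    }

    // Called from processBlock() after it has done the rest of the work itself.
    void waitForJob()
    {
        std::unique_lock<std::mutex> lock (m);
        cv.wait (lock, [this] { return done; });
    }

private:
    void run()
    {
        std::unique_lock<std::mutex> lock (m);
        for (;;)
        {
            cv.wait (lock, [this] { return quit || job != nullptr; });
            if (quit)
                return;

            auto fn = std::move (job);
            job = nullptr;
            lock.unlock();
            fn();                    // the actual work runs outside the lock
            lock.lock();
            done = true;
            cv.notify_all();
        }
    }

    std::thread thread;
    std::mutex m;
    std::condition_variable cv;
    std::function<void()> job;
    bool done = true, quit = false;
};

// Usage idea inside processBlock() (names are invented):
//   worker.startJob ([&] { processLeftChannel (buffer); });
//   processRightChannel (buffer);
//   worker.waitForJob();
```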
I guess the processBlock() call may not take too long, to prevent overruns. My other (non-time-domain) processing takes much longer than the duration of one audio buffer, but it occurs at a lower rate than the audio buffer period. So I have to split that processing into smaller chunks.
The idea is that I first do my time-domain stuff in processBlock() and after that some of the other (non-time-domain) processing. But then I need some way to stop, so that the processBlock() call doesn’t take too long.
Suppose the audio buffer size is 128 samples and the sample rate is 48 kHz. Then processBlock() will be called every 128 / 48000 ≈ 2.67 ms. Suppose my time-domain stuff takes just 0.5 ms. Then theoretically I could spend at most about 2.17 ms on the other processing, but I guess I need to leave some time for the other plugins.
So I need a timer or CPU clock counter that I can poll to check whether I need to stop the (non-time-domain) processing.
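The loop I have in mind would be roughly this (only a sketch; the chunk function and the budget numbers are made up, and steady_clock is the counter mentioned above):

```cpp
#include <chrono>
#include <functional>

// Run as many small chunks as fit into the remaining time budget for this
// block, then stop and resume in the next processBlock() call.
// processOneChunk() does one small piece of work and returns true when the
// whole job is finished. Returns true if everything got done within budget.
// Note: the deadline is only checked between chunks, so each chunk must be
// small compared to the budget.
static bool runChunksWithinBudget (double budgetMs,
                                   const std::function<bool()>& processOneChunk)
{
    using Clock = std::chrono::steady_clock;
    const auto deadline = Clock::now()
                        + std::chrono::duration<double, std::milli> (budgetMs);

    while (Clock::now() < deadline)
        if (processOneChunk())
            return true;        // all chunks done for this round

    return false;               // budget used up, carry on next block
}

// In processBlock(), after the time-domain work (1.5 ms is just an example,
// leaving headroom below the ~2.17 ms computed above):
//   runChunksWithinBudget (1.5, [&] { return algorithm.doOneStep(); });
```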
By the way, how does a DAW handle multithreading? Does it give each plugin its own thread? What if some of the plugins take a long time while others are very quick? How does a DAW handle that?
Yes you can, the question is whether you should. If some processing happens on big chunks, like a 16k-point FFT, it makes perfect sense to do that in an extra thread; otherwise the CPU load in processBlock() would be quite uneven.
It is, however, very poor style to actively wait for the thread to finish in your processBlock(). Instead, it is best to do everything on the audio thread and try to do some load balancing.
I use a worker thread for convolution, where bigger chunks of data are processed with relaxed timing requirements and no locks are used for synchronizing with the audio thread.
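For the lock-free part, a single-producer/single-consumer ring buffer with two atomic indices is enough, since there is exactly one audio thread and one worker. A very stripped-down sketch (juce::AbstractFifo does essentially the same index bookkeeping for you):

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Minimal single-producer / single-consumer FIFO: the audio thread pushes,
// the worker thread pops (or the other way round for results). No locks,
// only two atomic indices, so neither side ever blocks the other.
template <typename T, std::size_t Capacity>
class SpscFifo
{
public:
    bool push (const T& item)                       // producer side only
    {
        const auto w = writeIndex.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                           // full: drop or retry later
        buffer[w] = item;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    bool pop (T& item)                              // consumer side only
    {
        const auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return false;                           // empty: nothing to do yet
        item = buffer[r];
        readIndex.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<T, Capacity> buffer {};
    std::atomic<std::size_t> writeIndex { 0 }, readIndex { 0 };
};
```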
In case you use a thread, be aware that not all processing happens in real time, e.g. when rendering; you need to check and make sure it also works non-realtime.
The DAW will try to process plugins on different threads/cores. This also depends on the signal flow. Every DAW might have its own way of doing this. We just don’t know.
To avoid problems, the best you can do is to keep the real-time processing in the audio thread. Don’t make assumptions: there is no guarantee that processBlock is called in equal time steps, and you should not use the whole time between calls for your processing. People may also have different CPUs.
I would do a prototype and then optimize it. Make sure you measure in release mode. Debug is much slower.
You say that the calculations are not related to the audio samples. Can’t you precompute all or most of the data?
Maybe if you specify the type of calculations, someone can provide you with an appropriate method, or some optimization that will get you out of trouble.
Well, it is still real-time data, just not directly the time-domain samples. It is as if some FFT is going on and the result is needed for the time-domain processing.
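What I picture is that the worker publishes its latest FFT-derived result as one complete object, and processBlock() just grabs whatever was published last. A rough sketch of that hand-over (names invented; the atomic shared_ptr functions keep it race-free, though they may lock internally, so this is the simplest illustration rather than the most real-time-friendly one):

```cpp
#include <atomic>
#include <memory>
#include <vector>

// Invented placeholder for whatever the FFT-style analysis produces.
struct AnalysisResult { std::vector<float> magnitudes; };

class LatestResult
{
public:
    // Worker thread: publish a freshly computed, complete result.
    void publish (std::shared_ptr<const AnalysisResult> fresh)
    {
        std::atomic_store (&current, std::move (fresh));
    }

    // Audio thread: fetch whatever was published last (may be null at start).
    std::shared_ptr<const AnalysisResult> get() const
    {
        return std::atomic_load (&current);
    }

private:
    std::shared_ptr<const AnalysisResult> current;
};

// Worker thread, whenever a new analysis is ready (names invented):
//   latest.publish (std::make_shared<const AnalysisResult> (computeAnalysis()));
//
// Audio thread, inside processBlock():
//   if (auto result = latest.get())
//       useForTimeDomainProcessing (buffer, *result);
```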