Convolution Background Loader forever loop

I’m using the juce::dsp::Convolution class a lot in my app (JUCE v6.0.8).
I load the IR files on the audio processing thread (in the processBlock method), as instructed in the documentation. I load the WAV files from a compiled binary resource.
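Roughly, each load call looks like this (simplified; the BinaryData resource name is just a placeholder):

```cpp
// Simplified version of my loading call inside processBlock. The actual
// engine is built on the background loader thread, so this call itself
// should be cheap. "impulse_wav" is a placeholder BinaryData name.
convolution.loadImpulseResponse (BinaryData::impulse_wav,
                                 BinaryData::impulse_wavSize,
                                 juce::dsp::Convolution::Stereo::yes,
                                 juce::dsp::Convolution::Trim::yes,
                                 0); // 0 = keep the original IR length
```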

But I have noticed weird behaviour, at least on OS X. I have multiple Convolution Background Loader threads running simultaneously and they seem to take a huge amount of CPU. It seems that the loader threads never exit.

I haven’t noticed a similar problem in the ConvolutionDemo application.

Have you experienced similar issues and do you have any idea what might be wrong?
See the image for more info.

If you have multiple Convolution instances in your project, you can create a single instance of ConvolutionMessageQueue and pass it by reference to the constructors of each of the Convolutions. This will allow them all to share the same background loader thread.
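For example, something along these lines (the class and member names here are just illustrative; the queue needs to outlive the Convolutions, so declare it before them):

```cpp
// Minimal sketch: one shared queue, declared before the Convolutions so it
// outlives them, passed by reference to each Convolution's constructor.
// All Convolutions then share a single background loader thread.
struct SharedQueueExample
{
    juce::dsp::ConvolutionMessageQueue queue;
    juce::dsp::Convolution convolutionA { queue },
                           convolutionB { queue };
};
```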


Thanks @reuk.

Oh yes, you are right. I will modify my code to use shared ConvolutionMessageQueue.

However, is it intended functionality that the Convolution Background Loader thread stays running all the time, even when there are no loads pending? Just trying to make sure that I understand everything correctly.

It is a usual pattern to keep the thread running because creating, starting, stopping and destroying threads can be an expensive operation. The sleep in the thread loop should throttle the CPU use, though. Maybe your profiler counts the time spent in the sleeps as CPU use?

Yes, it stays running until the ConvolutionMessageQueue is destroyed. As @xenakios pointed out, the sleep should prevent the thread from hammering the CPU. The background loader thread is also responsible for building a convolution engine for each loaded IR, which might take some time - if you’re rapidly updating IRs, this will put more load on the background thread.

It’s difficult to say any more without seeing any profiling info.

@xenakios & @reuk

Thanks for the clarification. I just refactored my code to use a shared ConvolutionMessageQueue, and as far as I can tell I’m seeing a huge drop in CPU usage.

I have been using the Instruments tool (bundled with Xcode) for profiling, but I have to say I haven’t mastered it, so I might be using it wrong.

Here’s a screenshot from before making the change (depending on the run, I got a CPU usage weight of 10% to 20%).

After the change (with the shared queue), the weight drops to 0.1%.

I also have an insane number of Convolution Background Loader threads, and would love to use a single ConvolutionMessageQueue. But I’m using the convolution within a dsp::ProcessorChain. Is it possible to initialise it with such a parameter, or does it only use the default constructor?

As a workaround, I have removed the Convolution from the ProcessorChain and apply it (if not bypassed) afterwards. But oh boy! The drop in CPU consumption was massive!
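Roughly, the workaround looks like this (member names and chain contents are illustrative, and prepare() calls are omitted):

```cpp
// Rough sketch of the workaround: the Convolution lives outside the
// ProcessorChain so it can be constructed with the shared message queue.
juce::dsp::ConvolutionMessageQueue queue;
juce::dsp::Convolution convolution { queue };
juce::dsp::ProcessorChain<juce::dsp::Gain<float>> chain;

void process (juce::dsp::ProcessContextReplacing<float> context, bool convolutionBypassed)
{
    chain.process (context);            // run the rest of the chain as before

    if (! convolutionBypassed)
        convolution.process (context);  // apply the IR only when it's actually in use
}
```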

My plugin has about 8 different buses, each with its own processor chain that includes a Convolution (which is rarely used). And another of my apps loads multiple instances of this plugin. Even when they were all bypassed, the loader threads used to consume a whole lot of CPU for nothing. I’m really glad I found this thread.