How can I switch impulse response quickly with dsp::Convolution without artifacts, and what does ConvolutionMessageQueue do?

Hi, I’m new to JUCE programming, and I’m having trouble switching HRIRs (head-related impulse responses) with a rotary slider on the GUI thread.

I’ve implemented a plugin by referring to the IR-switching part of the convolution demo in the JUCE repo (https://github.com/juce-framework/JUCE/blob/master/examples/DSP/ConvolutionDemo.h), but I can hear a “zipper noise” artifact when I rotate the slider quickly. The slider binds to a variable that keeps track of which IR to load into the dsp::Convolution instance.
It would be very helpful if anyone could help me figure out how to achieve a clean transition between the IRs.

In addition, while skimming the documentation for dsp::Convolution, I found a class called dsp::ConvolutionMessageQueue that can be passed as a constructor parameter to a dsp::Convolution instance.

What exactly does this class do, and could it fix my problem by any chance?

I think you would have to fade between multiple instances of convolution. Not the friendliest on the CPU, but I don’t think there is another way.

This is because each input sample produces as many output samples as there are samples in the impulse response, so the zipper noise happens when those tails are cut off.

If you fade between them, you can do a smooth crossfade without cutting the samples off.
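As a rough sketch (plain C++, not JUCE-specific, names hypothetical), an equal-power crossfade between the output blocks of two convolvers could look like this:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Equal-power crossfade over one block. `outgoing` is the block convolved
// with the old IR, `incoming` the same input convolved with the new IR.
std::vector<float> crossfadeBlock(const std::vector<float>& outgoing,
                                  const std::vector<float>& incoming)
{
    const std::size_t n = outgoing.size();
    std::vector<float> mixed(n);

    for (std::size_t i = 0; i < n; ++i)
    {
        // t goes 0 -> 1 across the block; cos/sin gains keep the summed
        // power roughly constant, which avoids a dip in the middle.
        const float t = static_cast<float>(i) / static_cast<float>(n > 1 ? n - 1 : 1);
        const float gOut = std::cos(t * 1.5707963f); // pi / 2
        const float gIn  = std::sin(t * 1.5707963f);
        mixed[i] = gOut * outgoing[i] + gIn * incoming[i];
    }

    return mixed;
}
```

A linear ramp works too, but equal-power fades usually sound smoother for uncorrelated material.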

Thanks for your quick response!

I see. So can I assume that loadImpulseResponse is not suitable for this situation?
What would be the best practice for implementing the crossfading?

I’ve never used the JUCE convolution myself, but I’m quite sure it performs a crossfade on IR updates. I’ve also found that, depending on the use case, different crossfade functions can sound better or worse.

All in all, for binaural synthesis, where you probably have all your HRIRs precomputed, the JUCE convolution strategy might not be the most efficient one, so I would advise building your own convolution engine.
You could make sure that all precomputed HRIRs have the same length (the JUCE convolution allows different IR lengths), and you can precompute their transfer functions once instead of computing them on every IR load, as the JUCE implementation does. All this will let you build a much more performant implementation that can switch IRs really fast, and you can experiment with your own crossfade functions until you find one that sounds best for your use case.
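As a rough illustration of the precomputation idea (plain C++; the naive DFT here is just a stand-in for FFTW or juce::dsp::FFT, and all names are made up):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using Spectrum = std::vector<std::complex<float>>;

// Naive DFT, standing in for a real FFT library. O(n^2), illustration only.
Spectrum dft(const std::vector<float>& x)
{
    const std::size_t n = x.size();
    Spectrum X(n);
    for (std::size_t k = 0; k < n; ++k)
        for (std::size_t t = 0; t < n; ++t)
            X[k] += x[t] * std::polar(1.0f, -2.0f * 3.14159265f * k * t / n);
    return X;
}

// Zero-pad every HRIR to the same FFT size and transform it once, up front.
// At runtime, an IR "switch" is then just picking another precomputed spectrum
// instead of transforming the new IR on load.
std::vector<Spectrum> precomputeSpectra(const std::vector<std::vector<float>>& hrirs,
                                        std::size_t fftSize)
{
    std::vector<Spectrum> spectra;
    for (auto ir : hrirs)          // copy on purpose: we pad the copy
    {
        ir.resize(fftSize, 0.0f);  // common length for all IRs
        spectra.push_back(dft(ir));
    }
    return spectra;
}
```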


Thank you for responding!
I would appreciate it if you could point me to any resources you recommend for doing the implementation.
I assume it will not be an easy task for me…

I learned a lot from this thesis, which is especially on partitioned convolution algorithms for stuff like binaural synthesis: http://publications.rwth-aachen.de/record/466561/files/466561.pdf
It’s not so much about implementing all that in C++, but it takes a very detailed look at the theory behind it – in my opinion, understanding that is the biggest part. The implementation will still be complex, but doable once you understand what you are implementing :wink:

You should be familiar with some DSP math basics, though… what’s your background?

Thanks a lot! I will read through it.

I’m a master’s student researching topics around loudspeakers and room acoustics, and I’ve done some offline acoustic analysis in Python, but not real-time work in C++.
Hopefully that background will help my reading :slightly_smiling_face:

I’m still curious about the ConvolutionMessageQueue class, and I would appreciate it if someone could tell me about it!

The ConvolutionMessageQueue class wraps a background thread which processes commands generated by a Convolution instance. Loading an IR into the Convolution engine is quite expensive, and can’t be done safely on the audio thread. Instead, the new engines are created on the thread managed by the ConvolutionMessageQueue instance.

Spinning up background threads is itself quite expensive, so it’s possible to create a single ConvolutionMessageQueue which is shared between multiple Convolution instances. That way, there will be just one background thread, rather than one per convolution instance. In projects that only use a single Convolution instance, directly using the ConvolutionMessageQueue shouldn’t be necessary. If a ConvolutionMessageQueue isn’t passed to the Convolution constructor, the Convolution will create its own internal private queue.
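As a small sketch of what sharing a queue looks like (assumes the JUCE dsp module; the struct and member names are just illustrative):

```cpp
#include <juce_dsp/juce_dsp.h>

struct BinauralProcessor
{
    // Declared first so it outlives the convolvers that reference it.
    // One queue -> one shared background thread for both engines.
    juce::dsp::ConvolutionMessageQueue queue;

    juce::dsp::Convolution leftEar  { queue };
    juce::dsp::Convolution rightEar { queue };
};
```

Member declaration order matters here: the queue must be constructed before, and destroyed after, the Convolution instances that use it.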


I was able to significantly reduce the zipper noise by implementing my own convolution engine (with overlap-add and the FFTW library)!
It worked by simply giving the convolution engine the address of the next precomputed frequency-domain IR (tell me if this is not safe in terms of memory management).
Thank you so much for your advice! You saved my life :joy:!

I hope the JUCE dsp module can somehow handle this issue in the future.

I can still notice some small artifacts due to switching, so I will keep on reading the crossfading part of the paper you recommended.
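For reference, the pointer handoff I described looks roughly like this (plain C++, names made up). As far as I understand, it is safe as long as every spectrum outlives the engine (e.g. owned by a long-lived container that is never resized while playing) and the pointer itself is swapped atomically, so the audio thread never sees a half-written pointer:

```cpp
#include <atomic>
#include <cassert>
#include <vector>

// Lets the GUI thread point the audio thread at the next precomputed
// frequency-domain IR without locks or allocations on the audio thread.
struct IrSelector
{
    explicit IrSelector(const std::vector<std::vector<float>>& spectra)
        : table(spectra), current(&table.front()) {}

    // GUI thread: called from the slider callback.
    void select(std::size_t index)
    {
        current.store(&table[index], std::memory_order_release);
    }

    // Audio thread: grab the spectrum to convolve the current block with.
    const std::vector<float>& get() const
    {
        return *current.load(std::memory_order_acquire);
    }

    const std::vector<std::vector<float>>& table;  // must outlive this object
    std::atomic<const std::vector<float>*> current;
};
```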

If you wanted a pure JUCE way (and a heavy CPU load!), you could run multiple Convolution instances simultaneously. Use the selection as a gain control going into each one, then sum the outputs afterwards. Smooth the gain changes and you should have something that works as you intend, even if it is a CPU hog (depending on your number of IRs).
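A rough sketch of the gain smoothing part (plain C++ with simple one-pole ramps; in a real plugin you could reach for juce::SmoothedValue instead, and all names here are hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-IR gain smoothing: the selected IR's gain ramps toward 1 and all
// others toward 0, one step per sample. Each convolver's output is scaled
// by its smoothed gain before summing, so switching never produces a hard cut.
struct GainFader
{
    GainFader(std::size_t numIrs, float smoothingCoeff)
        : gains(numIrs, 0.0f), coeff(smoothingCoeff)
    {
        gains[0] = 1.0f; // start with the first IR fully active
    }

    // Advance all gains by one sample toward their targets and return them.
    const std::vector<float>& step(std::size_t selected)
    {
        for (std::size_t i = 0; i < gains.size(); ++i)
        {
            const float target = (i == selected) ? 1.0f : 0.0f;
            gains[i] += coeff * (target - gains[i]); // one-pole ramp
        }
        return gains;
    }

    std::vector<float> gains;
    float coeff; // 0 < coeff < 1; smaller = slower, smoother fades
};
```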