Avoiding clicks while switching impulse responses

Are you sure it’s not just blockwise-changing coloration due to the HRTF switching? Depending on the HRTF set you are using, the crossfade of the convolution engines can completely attenuate specific frequencies when they are out of phase between the two HRTFs.

I wouldn’t expect that to be the issue. The call to get, which happens on the audio thread, uses a try-lock rather than a lock. If the mutex is already locked, this should return immediately, rather than waiting for the mutex to become available.
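For reference, the try-lock pattern looks roughly like this in plain C++ (ImpulseHolder and all its members are made-up names for illustration, not from the actual code):

```cpp
#include <cassert>
#include <mutex>

// Hypothetical IR holder: the audio thread calls tryGetNewImpulse() every
// block and must never wait, so it uses a try-lock instead of a lock.
struct ImpulseHolder
{
    std::mutex mutex;
    int current = 0;   // stand-in for the active impulse response
    int pending = 1;   // stand-in for a newly loaded one

    // Returns true if the swap happened, false if another thread currently
    // holds the lock; in that case we simply keep using the old IR.
    bool tryGetNewImpulse()
    {
        std::unique_lock<std::mutex> guard (mutex, std::try_to_lock);
        if (! guard.owns_lock())
            return false;          // locked elsewhere: return immediately
        current = pending;
        return true;
    }
};
```

The message thread would take the same mutex with a normal blocking lock while it writes the new impulse response; only the audio-thread side needs the non-blocking variant.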

Hi Joaquin! Sorry for bringing up an old thread, but I’m struggling with the same problem (binaural convolution) and I can’t make the final mix without breaking some audio hosts. Could you clarify how you did “outBuffer = inBuffer2;”? Thanks!

Hi @pauloassis ! No problem.

I’ll tell you what I ended up doing. I’m using a juce::dsp::ProcessorChain in which I included, as one of the processors, a struct that I defined. That struct is the one that holds the two convolvers and the mixer inside it.

The struct should have the necessary methods so that ProcessorChain can use it as a processor (prepare, reset, getLatency, process).
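To illustrate what that interface looks like, here is a minimal stand-in in plain C++; ProcessSpec, Context, GainProcessor and MiniChain are simplified mock-ups of mine, not the real juce::dsp classes:

```cpp
#include <cassert>
#include <vector>

// Simplified mock-ups, just to show the shape of the interface; the real
// code uses juce::dsp::ProcessSpec and juce::dsp::ProcessContextReplacing<float>.
struct ProcessSpec { double sampleRate; int maximumBlockSize; int numChannels; };
struct Context     { std::vector<float>& block; };

// A processor only needs these methods for a chain to drive it:
struct GainProcessor
{
    float gain = 0.5f;
    void prepare (const ProcessSpec&) {}   // allocate / configure here
    void reset() {}                        // clear internal state
    void process (Context& ctx)            // called once per audio block
    {
        for (auto& s : ctx.block)
            s *= gain;
    }
};

// A toy "chain" that forwards the same calls to the processor it owns.
template <typename Processor>
struct MiniChain
{
    Processor proc;
    void prepare (const ProcessSpec& spec) { proc.prepare (spec); }
    void reset()                           { proc.reset(); }
    void process (Context& ctx)            { proc.process (ctx); }
};
```

juce::dsp::ProcessorChain works along the same lines: it only requires that each of its template arguments provides these methods.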

So in the process function of that struct, which receives a context, I copy the input into two auxiliary contexts, run each convolver on its own copy, and then mix them.

It’s something like this:

in the .h

struct myConvolverStruct
{
    myConvolverStruct();
    void prepare (const juce::dsp::ProcessSpec& spec);
    void reset();
    float getLatency() const;
    void process (const juce::dsp::ProcessContextReplacing<float>& context);

private:
    juce::dsp::DryWetMixer<float> mixer;
    juce::dsp::Convolution convolverA;
    juce::dsp::Convolution convolverB;
    juce::AudioBuffer<float> AuxBufferA;
    juce::AudioBuffer<float> AuxBufferB;
    float MixerProportion = 0.5f; // crossfade position between the two convolvers
};

in the .cpp

void myConvolverStruct::process (const juce::dsp::ProcessContextReplacing<float>& context)
{
    if (! context.isBypassed)
    {
        context.getOutputBlock().copyTo (AuxBufferA);
        juce::dsp::AudioBlock<float> inBlockA (AuxBufferA);
        juce::dsp::ProcessContextReplacing<float> contextA (inBlockA);

        context.getOutputBlock().copyTo (AuxBufferB);
        juce::dsp::AudioBlock<float> inBlockB (AuxBufferB);
        juce::dsp::ProcessContextReplacing<float> contextB (inBlockB);

        convolverA.process (contextA);
        convolverB.process (contextB);

        mixer.pushDrySamples (contextA.getOutputBlock());
        mixer.setWetMixProportion (MixerProportion);
        mixer.mixWetSamples (contextB.getOutputBlock());

        context.getOutputBlock().copyFrom (contextB.getOutputBlock());
    }
}

Hope that helps.

Best,

Joaquin.


Hi! This is a GREAT help to me, I’ll try a ProcessorChain, hope it works - my code was already sounding good inside Reaper, but crashing in SoundSource.
Many many thanks!
Best, Paulo


Thank you for providing your solution! I think I’m in the same situation, where the IR needs to be continuously updated during processing. Could you elaborate on how you managed to update the pre-allocated buffer and perform the convolution without calling the loadImpulseResponse function?


Well, as I said, I didn’t use the convolution class supplied by JUCE but wrote an in-house convolution engine from scratch. This gave us the freedom to optimise it for the specific needs of our reverb plugin, in terms of multithreading, crossfading logic and even the underlying FFT implementation used. Explaining all the details would take quite long, and since it’s closed-source code I cannot share the code itself here. Still, I’m happy to answer detailed questions on building your own convolution engine, but if I got your question right, you wanted to know how to do that with the JUCE convolution?


I see. Yes, I was wondering how to achieve the same goal with the JUCE Convolution. It seems to me that it’s impossible to avoid creating two convolution objects under the hood and instead just update the audio buffer itself, so it would indeed be useful to rewrite the convolution from scratch. Could you suggest a few FFT libraries that I could look into as well? Thanks again!

Two options:

  1. Two buffers always being processed while only one is heard, or
  2. An envelope triggered upon user click:

Convolve.process(block);

then, in the sample loop:

Output = (1 - envelopeOut) * sample;
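A sketch of the envelope variant in plain C++ (FadeSwitcher and every name in it are hypothetical, purely for illustration): ramp the envelope up after the click, swap the impulse response once the output is fully attenuated, then ramp back down.

```cpp
#include <cassert>

// Fades the output to silence, swaps the IR at the trough, fades back in.
struct FadeSwitcher
{
    float envelopeOut = 0.0f;       // 0 = normal output, 1 = fully faded out
    float step = 0.0f;
    bool fadingOut = false, pendingSwap = false;

    void requestSwitch (float fadeSamples)
    {
        step = 1.0f / fadeSamples;
        fadingOut = pendingSwap = true;
    }

    // Per-sample: returns the gain-scaled sample; calls swapIR() once,
    // exactly when the output is fully attenuated.
    template <typename SwapFn>
    float processSample (float sample, SwapFn&& swapIR)
    {
        if (fadingOut)
        {
            envelopeOut += step;
            if (envelopeOut >= 1.0f)
            {
                envelopeOut = 1.0f;
                if (pendingSwap) { swapIR(); pendingSwap = false; }
                fadingOut = false;
            }
        }
        else if (envelopeOut > 0.0f)
        {
            envelopeOut -= step;    // fade back in
            if (envelopeOut < 0.0f) envelopeOut = 0.0f;
        }
        return (1.0f - envelopeOut) * sample;
    }
};
```

In a real plugin the fade length would be a few milliseconds' worth of samples, long enough to avoid a click but short enough to feel instantaneous.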


The solution proposed by @TypeWriter obviously works, but since a convolution takes up a lot of resources for longer impulse responses, having two of them always running in parallel might eat up a lot of unnecessary CPU.

You can e.g. always transform the input signal once into the frequency domain, even during an ongoing crossfade. We ended up performing two spectral multiplications in parallel during crossfading and blending over in the time domain. This allows us to use zero-latency convolution approaches. If minimising latency is no primary concern for you, you can also go for a fixed block size approach and do the crossfading in the frequency domain, which is even more efficient.
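To make the first variant concrete, here is a plain C++ sketch using a naive O(n²) DFT as a stand-in for a real FFT (all function names are mine, not from any library): the input block is transformed once, multiplied with both IR spectra, and the two inverse-transformed results are blended in the time domain.

```cpp
#include <cmath>
#include <complex>
#include <vector>

using cvec = std::vector<std::complex<double>>;

// Naive O(n^2) DFT standing in for a real FFT; the surrounding structure,
// not the transform itself, is the point of this sketch.
inline cvec dft (const cvec& x, bool inverse)
{
    const double pi = std::acos (-1.0);
    const double sign = inverse ? 1.0 : -1.0;
    const size_t n = x.size();
    cvec out (n);
    for (size_t k = 0; k < n; ++k)
        for (size_t t = 0; t < n; ++t)
            out[k] += x[t] * std::polar (1.0, sign * 2.0 * pi * double (k * t) / double (n));
    if (inverse)
        for (auto& v : out)
            v /= double (n);
    return out;
}

// One forward transform of the input block, two spectral multiplications
// (old IR and new IR), two inverse transforms, then a time-domain crossfade.
inline std::vector<double> crossfadedBlock (const cvec& input, const cvec& irSpecA,
                                            const cvec& irSpecB, double fade)
{
    const cvec X = dft (input, false);        // input transformed ONCE
    cvec YA (X.size()), YB (X.size());
    for (size_t k = 0; k < X.size(); ++k)
    {
        YA[k] = X[k] * irSpecA[k];            // spectral multiply with IR A
        YB[k] = X[k] * irSpecB[k];            // spectral multiply with IR B
    }
    const cvec yA = dft (YA, true);
    const cvec yB = dft (YB, true);

    std::vector<double> out (X.size());
    for (size_t t = 0; t < X.size(); ++t)     // blend in the time domain
        out[t] = (1.0 - fade) * yA[t].real() + fade * yB[t].real();
    return out;
}
```

The per-partition bookkeeping needed for zero-latency partitioned convolution is omitted here; the takeaway is that only one forward transform of the input is needed no matter how many IRs are being crossfaded.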

However, while optimising the FFTs involved can give you some performance boost, optimising the complex-valued spectral multiplication had a much bigger impact in our use case: profiling revealed that most of the CPU time is spent in that part, at least if you are working on something like a reverb with impulse responses several seconds long.

We went with the Intel IPP FFT for Windows and Apple Accelerate for macOS. The latter makes sense especially if you plan on targeting ARM Macs (which you should); the IPP is no option there, since it’s x86_64 only. With both FFT implementations, you can gain some performance by choosing an optimised frequency-domain vector data layout. This can be the split-complex layout preferred by Accelerate, instead of the interleaved layout a vector of std::complex values would have, or some permuted data layout the IPP implementation offers (don’t know the exact naming of that atm) which is closer to how the values are computed in memory and thus removes the need for shuffling spectral data back and forth.

Hope this gives you some starting points, given you already have some theoretical background on optimised fast convolution algorithms in general.
