Multiple DelayLine objects cause crackling distortion when delay times get adjusted

I now use .exchange(), thanks. What is peculiar is that I can no longer hit a breakpoint inside that if-statement… the next reachable breakpoint is inside extrapolate().
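For reference, the pattern now looks roughly like this (the struct and member names are placeholders, not my exact code):

#include <atomic>

struct Processor
{
    std::atomic<bool> parametersChanged { false };

    // Called from the message/extrapolation thread after new values are written.
    void markDirty() { parametersChanged.store (true); }

    // Called on the audio thread, once per block.
    void processBlock()
    {
        // exchange() atomically reads the old value and writes false,
        // so the flag is consumed exactly once even if both threads race.
        if (parametersChanged.exchange (false))
        {
            // pick up the new azimuth/elevation/distance here
        }

        // ... audio processing ...
    }
};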

Also, why would you recommend ParameterAttachment over the timer method? Do you generally think that’s better?
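For context, my current (and possibly wrong) mental model of the ParameterAttachment route is roughly the following; the parameter reference and member names are placeholders, not my actual code:

#include <juce_audio_processors/juce_audio_processors.h>
#include <atomic>

// Sketch only. As far as I understand, ParameterAttachment ends up calling the
// lambda on the message thread whenever the parameter changes (host automation
// or GUI), so no polling timer is needed.
struct AzimuthListener
{
    explicit AzimuthListener (juce::RangedAudioParameter& param)
        : attachment (param,
                      [this] (float newValue) { azimuthTarget.store (newValue); })
    {
        attachment.sendInitialUpdate();   // receive the current value once up front
    }

    std::atomic<float> azimuthTarget { 0.0f };   // read from the audio thread
    juce::ParameterAttachment attachment;
};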

The way I understood it (which is a VERY basic level of understanding) was that creating a thread around the call to extrapolate() made everything in it “not-on-the-audio-thread”. Reading this back to myself, I can totally sense how naive that thinking is.
The .setAzimuth() function is just a normal setter with no other tricks to it. My original way of not blocking the audio thread was using a mutex in extrapolate() before the dp.set… block:

//std::lock_guard<std::mutex> lock(extrapolMutex);
dp.setAzimuth(aziExtrap);
dp.setElevation(eleExtrap);
dp.setDistance(disExtrap);

I also tried using a mutex in the actual delay-setting functions of the DelayProcessor instances, where DelayLine.setDelay() gets called. But I have a feeling that I have some big misconceptions about not blocking the audio thread.
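If I understand the advice correctly, a mutex only protects anything if the audio thread also takes the same lock, which is exactly what it must not do. A minimal sketch of what I think was meant instead, publishing the values through atomics (names made up for illustration):

#include <atomic>

struct ExtrapolatedTarget
{
    std::atomic<float> azimuth   { 0.0f };
    std::atomic<float> elevation { 0.0f };
    std::atomic<float> distance  { 0.0f };
};

// Extrapolation / message thread: just store the new values, no lock.
void publish (ExtrapolatedTarget& t, float azi, float ele, float dis)
{
    t.azimuth.store (azi);
    t.elevation.store (ele);
    t.distance.store (dis);
}

// Audio thread: load the latest values at the start of each block.
// Note the three loads are not atomic as a group, so they can mix values from
// two consecutive updates; for smoothed spatial parameters that is usually fine.
void readOnAudioThread (ExtrapolatedTarget& t, float& azi, float& ele, float& dis)
{
    azi = t.azimuth.load();
    ele = t.elevation.load();
    dis = t.distance.load();
}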

This actually sounds very interesting, while obviously not trivial. The current CPU load in Reaper is shown as 25% on my device, so the additional overhead might still be okay. Honestly though, this will be one of the later attempts, after smoothing has been tested. As you all have pointed out (thank you, by the way!), it is the complete lack of smoothing on the delay time parameter that is causing this unwanted noise. Playing with the timer interval just makes the clicks more “granular”.
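From what I have gathered so far, the usual JUCE tool for this is juce::SmoothedValue; something along these lines, where the ramp length and member names are placeholders:

#include <juce_audio_basics/juce_audio_basics.h>
#include <atomic>

// Sketch: the new delay time is published through an atomic; the audio thread
// feeds it into a SmoothedValue so the actual delay ramps towards the target
// instead of jumping once per timer tick.
struct DelaySmoothing
{
    std::atomic<float>         targetDelaySamples { 0.0f };  // written by timer / parameter callback
    juce::SmoothedValue<float> smoothedDelaySamples;         // owned by the audio thread

    void prepare (double sampleRate)
    {
        smoothedDelaySamples.reset (sampleRate, 0.05);        // ~50 ms ramp, to taste
        smoothedDelaySamples.setCurrentAndTargetValue (targetDelaySamples.load());
    }

    // Audio thread, at the top of processBlock():
    void updateTarget()
    {
        smoothedDelaySamples.setTargetValue (targetDelaySamples.load());
    }
};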

Things like the Doppler effect are very much accepted here; the whole binaural processing is supposed to be “very plausible” in the end. Thinking about what you say about “interpolating the delay times per sample”, I realise I did not give that much thought before, because I automatically assumed that adding a DelayLineInterpolationType takes care of that as well. That was another big misconception, because it does not handle changes to its temporal settings. I have only seen one other post about per-sample processing of the delay time modulation (Delay Line artifacts), but I do not completely comprehend it. The way it should happen “sample-wise” just confuses me.
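If I read that correctly, “per sample” just means the processing loop asks the smoother for a fresh delay value on every sample before reading from the DelayLine. Roughly like this (mono, names simplified, and assuming the delay line has been prepared elsewhere):

#include <juce_dsp/juce_dsp.h>

// Each sample gets its own, slightly different delay from the smoother, so the
// delay time glides instead of stepping once per timer tick.
void processChannel (juce::dsp::DelayLine<float,
                         juce::dsp::DelayLineInterpolationTypes::Lagrange3rd>& delayLine,
                     juce::SmoothedValue<float>& smoothedDelaySamples,
                     float* samples, int numSamples)
{
    for (int i = 0; i < numSamples; ++i)
    {
        delayLine.pushSample (0, samples[i]);

        // popSample() can take the delay per call; the interpolation type then
        // only handles the fractional part of that per-sample delay, not the
        // smoothing itself.
        samples[i] = delayLine.popSample (0, smoothedDelaySamples.getNextValue());
    }
}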

This, luckily, is covered by the code upon which I am building this whole thing, rotation matrices and HRTF interpolation included.