Hi guys,
I’m still running on 6.0.8, so I don’t know if this has already been fixed. I’m currently using the dsp::Convolution class for short IRs (less than 200 ms long), with non-uniform partitioned convolution and a head size of 1024. Performance there is acceptable, but as soon as I use the class for something heavier it becomes unacceptable.
I tested dsp::Convolution::NonUniform against other convolvers, both commercial and free.
Test setup:
- Mac Pro 2013 (Xeon-based)
- Sample rate: 44100 Hz
- Block size: 256 samples
- IR: 24 seconds, 48 kHz, stereo
- Ableton Live 11, CPU meter set to "Current"
While the others sit at around 4~5% CPU, the JUCE convolver goes up to around 60%.
Increasing the head size improves performance a little, but if I reduce the block size I start getting glitches.
The implementation I have is pretty straightforward:
- a private member:
dsp::Convolution convolver{ juce::dsp::Convolution::NonUniform { 1024 } };
I initialize the convolver as expected, passing a dsp::ProcessSpec to prepare().
Loading and processing happen as suggested in another thread:
void processContext (dsp::ProcessContextReplacing<float> context) noexcept
{
    ScopedNoDenormals noDenormals;

    // Load a new IR if there's one available. Note that this doesn't lock or allocate!
    bufferTransfer.get ([this] (myThreadedBuffer& buf)
    {
        convolver.loadImpulseResponse (std::move (buf.buffer),
                                       buf.sampleRate,
                                       dsp::Convolution::Stereo::yes,
                                       dsp::Convolution::Trim::no,
                                       dsp::Convolution::Normalise::yes);
    });

    convolver.process (context);
}
bufferTransfer and myThreadedBuffer are thread-safe classes to pass IR buffers to the convolver.
Is there something I’m missing, or is the dsp::Convolution class’s performance really that bad?
Thanks,
Luca