Use of dsp::Convolution in an audio plugin?

Before I invest a lot of time: is dsp::Convolution fast enough to use in an
audio plug-in reverb? My synth plug-in currently consumes about 1-3% of CPU (i5-10600K).
I don’t want this to go beyond, say, 4-6% just because the reverb is active…

Simply put, no. It barely copes with IRs below 200 ms, and with longer impulses the CPU usage is too high to be viable in a commercial product.

The DSPModulePluginDemo includes a Convolution. You could try replacing the impulse responses with your own files, and check whether the performance is acceptable.

Good one reuk. Just tried a 7 second theater IR and the CPU % increase was only about 1%.
Nice!

I use dsp::Convolution in my plugin and it works fine. I have had no issues.

You could try the NonUniform configuration, give it a headSize of your choosing, and see how that affects performance:

// NonUniform partitioning: the first 512 samples are convolved with small,
// low-latency partitions; the rest of the IR uses larger FFT partitions.
juce::dsp::Convolution convolutionReverb { juce::dsp::Convolution::NonUniform { 512 } };

Another idea is to use convolution for the beginning of the reverb and transition to an algorithmic reverb for the tail.
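A minimal sketch of that hybrid idea, in plain C++ rather than JUCE so the structure is easy to follow (`directConvolveHead` and `combTail` are hypothetical names, not part of any library): convolve the input with only the first part of the IR, and approximate the long tail with a cheap algorithmic element such as a feedback comb.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the hybrid approach: convolve the input with only
// the first headLen samples of the impulse response (the early reflections),
// and approximate the long tail with a cheap algorithmic structure.

// Direct convolution with the truncated head of the IR.
std::vector<float> directConvolveHead (const std::vector<float>& input,
                                       const std::vector<float>& ir,
                                       std::size_t headLen)
{
    headLen = std::min (headLen, ir.size());
    std::vector<float> out (input.size() + headLen - 1, 0.0f);

    for (std::size_t n = 0; n < input.size(); ++n)
        for (std::size_t k = 0; k < headLen; ++k)
            out[n + k] += input[n] * ir[k];

    return out;
}

// Very rough algorithmic "tail": a single feedback comb filter. A real
// algorithmic reverb would use a network of combs and allpasses.
std::vector<float> combTail (const std::vector<float>& input,
                             std::size_t delaySamples,
                             float feedback,
                             std::size_t outLen)
{
    std::vector<float> out (outLen, 0.0f);

    for (std::size_t n = 0; n < outLen; ++n)
    {
        float in      = n < input.size()  ? input[n]              : 0.0f;
        float delayed = n >= delaySamples ? out[n - delaySamples] : 0.0f;
        out[n] = in + feedback * delayed;
    }

    return out;
}
```

In a plugin you would run the convolution head per block (e.g. via the NonUniform configuration above) and mix in the algorithmic tail with a gain ramp so the hand-over is seamless.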

Could you call convolutionReverb.process() for each audio frame?

Yes, just the normal JUCE DSP module way with ProcessContextReplacing:

In your prepareToPlay():

// Prepare the JUCE DSP module:
juce::dsp::ProcessSpec spec;
spec.sampleRate = sampleRate;
spec.maximumBlockSize = samplesPerBlock;
spec.numChannels = numChannels;

// Set up convolution reverb:
convolutionReverb.prepare(spec);
convolutionReverb.reset();

In your processBlock():

// Turn JUCE buffer into an AudioBlock
juce::dsp::AudioBlock<float> reverbBlock (buffer);
// Process
convolutionReverb.process(juce::dsp::ProcessContextReplacing<float> (reverbBlock));

This is a call per audio block, not per audio frame, so you need to fill one audio block with frames coming from e.g. a synth, and then call convolutionReverb to update the reverbBlock.

What do you mean by audio frame in this situation?

In my synth plugin, I have the following code for nextAudioBlock. Note that it calls nextAudioFrame() in the loop, where all the frame-accurate audio is generated. Up to now, all my oscillators, EGs, LFOs, filters, etc. operate at this audio-frame level. Looks like I need to make a (small) exception for the reverb.

void SynthEngine::nextAudioBlockFloat(int numInputChannels, int numOutputChannels, juce::AudioSampleBuffer& buffer)
{
    // Populates the next audio block; only 2 audio output channels are supported.
    // Note: the buffer must be passed by reference, otherwise we would only
    // write into a copy and the host would never hear the result.
    AudioFrame next { 0.0, 0.0 };

    if (numOutputChannels == 2)
    {
        auto* channelDataL = buffer.getWritePointer(0);
        auto* channelDataR = buffer.getWritePointer(1);
        int numSamples = buffer.getNumSamples();

        for (int sample = 0; sample < numSamples; ++sample)
        {
            next = nextAudioFrame();
            channelDataL[sample] = (float) next.left;
            channelDataR[sample] = (float) next.right;
        }
    }
    else
    {
        // Silence:
        for (int channel = 0; channel < numOutputChannels; ++channel)
        {
            auto* channelData = buffer.getWritePointer(channel);

            for (int sample = 0; sample < buffer.getNumSamples(); ++sample)
                channelData[sample] = 0.0f;
        }
    }
}

Done. Looks like an 8s reverb adds about 1% to my CPU load, very good.

Ahh, per-sample processing. Yeah, there’s no processSample() function in the convolution, but I reckon you could make a 1-sample buffer and wrap it in an AudioBlock to process.
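A minimal sketch of that adapter idea in plain C++ (`SingleSampleAdapter` and the callback are hypothetical stand-ins, not JUCE API; in JUCE the callback body would wrap the one-sample buffer in a juce::dsp::AudioBlock and call convolutionReverb.process() on a ProcessContextReplacing built from it):

```cpp
#include <cstddef>
#include <functional>
#include <utility>

// Hypothetical adapter: expose a per-sample interface on top of a
// block-based processor by feeding it one-sample "blocks".
class SingleSampleAdapter
{
public:
    // blockProcess stands in for a block-based DSP call, e.g. wrapping the
    // pointer in a juce::dsp::AudioBlock and calling Convolution::process().
    explicit SingleSampleAdapter (std::function<void (float*, std::size_t)> blockProcess)
        : process (std::move (blockProcess)) {}

    float processSample (float in)
    {
        float frame = in;     // a one-sample buffer...
        process (&frame, 1);  // ...treated as a block of length 1
        return frame;
    }

private:
    std::function<void (float*, std::size_t)> process;
};
```

Bear in mind that calling a block-based processor one sample at a time usually costs more per sample than processing full blocks, so it is worth profiling before committing to this.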