How to handle "sliced" Blocks (AU plugins), esp. with Convolution?

So, I’ve been developing a little convolution plugin, using a friend’s partitioned-convolution library.

Now, some plugin hosts, especially AU hosts (I encountered this in GarageBand), sub-divide the audio blocks, so the block size reported in prepareToPlay(...) isn’t reliable at all.

But the convolution library needs some buffers set to the correct size when loading a filter kernel, which is something you’d typically avoid doing in the processBlock(...) method.

My first idea would be to pre-buffer the smaller blocks and, once the actual source buffer is “full”, perform the convolution. But then how do I return the results with the correct size?

Is there an accepted best practice for handling this situation?

Best,
n

I see JUCE’s dsp::Convolution as a best-practice example of uniformly partitioned convolution. It also handles single-sample calls of the process function, which can happen in many DAWs (e.g. Reaper at the end of a loop, or in general with sample-accurate automation).

What’s going on there:
No matter how many samples come in, they are convolved with the first partition (zero-padding to length 2 * partitionSize, FFT, complex multiplication with the first IR partition, IFFT) and written back to the output (the IFFT result plus the previous overlap).

The input samples are also gathered, and once a block is full, it triggers all the other partitions, whose results are written back to the output (and the overlap buffer).
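The two paragraphs above can be sketched in plain C++, in the time domain instead of the frequency domain so the partition bookkeeping stays visible. All names are mine, and this is a non-real-time illustration of the idea, not the JUCE implementation: partition 0 is applied per sample (so any call size works), and once B samples have been gathered, the later partitions are triggered and their results scheduled into future output samples.

```cpp
#include <cstddef>
#include <vector>

// Zero-latency partitioned convolution, sketched in the time domain.
// Partition 0 runs per sample; partitions >= 1 run once per full block.
struct ZeroLatencyPartitionedConv {
    std::vector<float> h;          // full impulse response
    std::size_t B;                 // partition size
    std::vector<float> history;    // all input so far (sketch only, not RT-safe)
    std::vector<float> curBlock;   // samples gathered for the current block
    std::vector<float> tailOut;    // scheduled output of partitions >= 1
    std::size_t t = 0;             // absolute sample index

    ZeroLatencyPartitionedConv(std::vector<float> ir, std::size_t partSize)
        : h(std::move(ir)), B(partSize) {}

    float processSample(float x) {
        history.push_back(x);
        // Partition 0: direct FIR over the newest B samples -> zero latency.
        float y = 0.0f;
        for (std::size_t k = 0; k < B && k < h.size() && k <= t; ++k)
            y += h[k] * history[t - k];
        if (t < tailOut.size())
            y += tailOut[t];       // contributions scheduled by earlier blocks
        curBlock.push_back(x);
        if (curBlock.size() == B) {            // block full: trigger the rest
            const std::size_t blockStart = t + 1 - B;
            for (std::size_t p = 1; p * B < h.size(); ++p) {
                const std::size_t outStart = blockStart + p * B; // always > t
                if (tailOut.size() < outStart + 2 * B)
                    tailOut.resize(outStart + 2 * B, 0.0f);
                for (std::size_t n = 0; n < B; ++n)
                    for (std::size_t k = 0; k < B && p * B + k < h.size(); ++k)
                        tailOut[outStart + n + k] += curBlock[n] * h[p * B + k];
            }
            curBlock.clear();
        }
        ++t;
        return y;
    }
};
```

Since every later partition p writes only to sample indices >= blockStart + p * B, which is strictly in the future at the moment the block completes, the scheme stays causal with zero latency regardless of how the host slices its calls.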

If your friend’s partitioned convolution doesn’t support single-sample calls, you can fill a buffer, wait for it to be full, and then call the convolution. However, you have to introduce a delay in order to preserve causality :slight_smile: The delay will be one full block size, so you lose the zero-delay property. Until the first block is full and processed, you’ll have to write back zeros.
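The fill-a-buffer-and-delay idea might look like this minimal sketch (std:: types instead of JUCE classes, all names my own): arbitrarily sized host calls are accumulated into fixed-size blocks, a block-based processor runs once per full block, and results are played out exactly one block late, with zeros until the first block is ready.

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Accumulates arbitrarily sized host buffers into fixed-size blocks and
// returns the processed result with exactly one block of latency.
class BlockAccumulator {
public:
    BlockAccumulator(std::size_t blockSize,
                     std::function<void(std::vector<float>&)> processBlock)
        : blockSize_(blockSize),
          process_(std::move(processBlock)),
          inBlock_(blockSize, 0.0f),
          outBlock_(blockSize, 0.0f) {}  // first block reads back as silence

    // Works for any numSamples, including single-sample calls.
    void process(float* data, std::size_t numSamples) {
        for (std::size_t i = 0; i < numSamples; ++i) {
            inBlock_[writePos_] = data[i];
            data[i] = outBlock_[writePos_];  // output delayed by one block
            if (++writePos_ == blockSize_) {
                writePos_ = 0;
                std::swap(inBlock_, outBlock_);
                process_(outBlock_);         // e.g. the block-based convolution
            }
        }
    }

    // Report this via AudioProcessor::setLatencySamples() in a real plugin.
    std::size_t latencySamples() const { return blockSize_; }

private:
    std::size_t blockSize_;
    std::function<void(std::vector<float>&)> process_;
    std::vector<float> inBlock_, outBlock_;
    std::size_t writePos_ = 0;
};
```

Because the read and write positions coincide, each output sample is the processed input from exactly one block earlier, so the host sees the correct buffer sizes and a constant, reportable latency.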

Thanks!

Sounds worth a try. I was hesitant to try it because the JUCE docs say it’s not necessarily tuned for speed (not sure what the current status is).

Hmm, even so, my friend’s library (which indeed doesn’t support single-sample calls) uses non-uniformly partitioned convolution … so if I need that, there’s no way around the delay, I guess?

Well, a) you will have to live with the delay
or
b) your friend could implement it, as it is more or less trivial, especially since he has already implemented a non-uniformly partitioned convolution, which is huge :slight_smile:
or
c) you could strip the first block off your IR and feed a JUCE convolution with it, and use your friend’s engine for the rest with the block delay. That’s no problem, since the first block is already processed with zero delay.
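Option c) works because convolution is linear and time-invariant. A tiny time-domain sketch (direct convolution on std::vector, my own names, standing in for the two real engines) shows that splitting the IR at block size B into a zero-latency head and a B-sample-delayed tail sums back to the full result:

```cpp
#include <cstddef>
#include <vector>

// Plain O(N*M) direct convolution, used here only to demonstrate the split.
std::vector<float> directConv(const std::vector<float>& x,
                              const std::vector<float>& h) {
    std::vector<float> y(x.size() + h.size() - 1, 0.0f);
    for (std::size_t n = 0; n < x.size(); ++n)
        for (std::size_t k = 0; k < h.size(); ++k)
            y[n + k] += x[n] * h[k];
    return y;
}

// Head = first B taps (the zero-latency engine); tail = the rest (the
// block-based engine). The tail's output arriving B samples late is exactly
// where it belongs, so no extra compensation is needed. Assumes h.size() > B.
std::vector<float> splitConv(const std::vector<float>& x,
                             const std::vector<float>& h, std::size_t B) {
    std::vector<float> head(h.begin(), h.begin() + (long)B);
    std::vector<float> tail(h.begin() + (long)B, h.end());
    std::vector<float> y = directConv(x, head);       // available immediately
    std::vector<float> yTail = directConv(x, tail);   // may arrive B late
    y.resize(x.size() + h.size() - 1, 0.0f);
    for (std::size_t n = 0; n < yTail.size(); ++n)
        y[n + B] += yTail[n];                         // tail lands at offset B
    return y;
}
```

In other words, the block delay of the second engine is absorbed by the B-sample offset that the tail of the IR carries anyway.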

Hello @parkellipsen and @danielrudrich,

Maybe I misunderstood the problem, but I think a sub-buffer could be used.

I am playing with a delay plugin, and I am using a pre-buffer for feedback processing (e.g. a low-pass filter).

  1. First I preallocated a “prebuffer” in prepareToPlay(...), assuming that the samplesPerBlock argument is the maximum possible buffer size:
tmpDelayBuffer.setSize(numChannels, maximumBlockSize, false, true, false);
  2. Then in processBlock(...) I declare a sub-buffer of tmpDelayBuffer:
AudioBuffer<SampleType> tmpDelayBufferSubBuffer{ tmpDelayBuffer.getArrayOfWritePointers(), (int)inputBlock.getNumChannels(), (int)inputBlock.getNumSamples() };

According to the AudioBuffer documentation, this constructor creates a buffer that refers to a pre-allocated block of memory, so it does not perform any memory allocation.

Does such an implementation have any issues? Would it be OK for convolution?

Kind regards,
Mateusz