Calling setSize on buffer in AudioProcessor

Maybe a silly question, but is it allowed to resize the buffer provided by processBlock()? It's not clear from the API documentation. Suppose I have an effect that processes blocks of 1024 sample-frames at a time, but my input buffer is only 400 frames? I would then output two zero-length blocks and one 1024-frame block (retaining the remaining 176 sample-frames for the next time).

No, you should not resize the buffer provided by processBlock.
If your effect needs 1024 samples to process, you need to introduce latency: cache the input until you have enough to process, and report your latency by calling setLatencySamples().
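
A minimal sketch of the latency-reporting part, assuming a JUCE AudioProcessor subclass (MyProcessor and requiredBlockSize are hypothetical names):

```cpp
// Minimal sketch: MyProcessor is a juce::AudioProcessor subclass and
// requiredBlockSize is a hypothetical member, e.g. 1024.
void MyProcessor::prepareToPlay (double /*sampleRate*/, int /*samplesPerBlock*/)
{
    // Tell the host how much we delay the signal so it can compensate.
    setLatencySamples (requiredBlockSize);
}
```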

If you have given a buffer the maximum block size in prepareToPlay, you can use setSize on it in processBlock to change its numSamples to the current actual block size, but you need to make sure it avoids reallocation (setSize has an avoidReallocating parameter for that).
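
A sketch of that pattern, assuming the processor owns a member juce::AudioBuffer<float> called scratchBuffer (names are hypothetical):

```cpp
// Allocate once in prepareToPlay, then only shrink in processBlock.
void MyProcessor::prepareToPlay (double /*sampleRate*/, int maximumExpectedSamplesPerBlock)
{
    // Allocate for the largest block the host has promised to send.
    scratchBuffer.setSize (getTotalNumOutputChannels(), maximumExpectedSamplesPerBlock);
}

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    // Shrink the logical size to the current block. With avoidReallocating == true
    // this never allocates, because enough space is already reserved.
    scratchBuffer.setSize (scratchBuffer.getNumChannels(),
                           buffer.getNumSamples(),
                           false,   // keepExistingContent
                           false,   // clearExtraSpace
                           true);   // avoidReallocating
    // ... do the actual processing using scratchBuffer ...
}
```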

Wait, what is the point though?
If the host calls processBlock with a buffer of size N and you resize it to M > N, the extra M - N samples you get are "garbage", since they were not provided by the host. Correct?

Usually it's best to write code in a way where you don't have to use setSize() in processBlock, but I once needed to write my own oversampling class (to avoid AudioBlock), and there was a nice application of setSize there. You can give your oversampler a buffer of maxBlockSize * oversamplingFactor in prepareToPlay, but in processBlock you give that buffer numSamples * oversamplingFactor. Since that is always smaller than the buffer's actual size, it won't allocate if you selected to avoid reallocation in setSize(). My oversampling class's upsampling method then returned a pointer to either the input buffer or the oversampling one, depending on whether oversampling is turned off or on, and the downstream processes always get a buffer where numSamples has the desired value. It doesn't matter if the other samples are garbage or zero; they are not used anyway.
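
Roughly like this sketch of the buffer handling (hypothetical Oversampler class; the actual up/downsampling filters are left out):

```cpp
// Sketch only: shows the sizing trick, not real resampling DSP.
struct Oversampler
{
    void prepare (int numChannels, int maxBlockSize)
    {
        // Worst-case size, allocated up front in prepareToPlay.
        upsampled.setSize (numChannels, maxBlockSize * factor);
    }

    juce::AudioBuffer<float>& upsample (juce::AudioBuffer<float>& input)
    {
        if (! enabled)
            return input;   // bypass: downstream code keeps using the host buffer

        // Never larger than what prepare() reserved, so with
        // avoidReallocating == true this cannot allocate.
        upsampled.setSize (input.getNumChannels(),
                           input.getNumSamples() * factor,
                           false, false, true);
        // ... interpolate input into upsampled here ...
        return upsampled;
    }

    juce::AudioBuffer<float> upsampled;
    int factor = 2;
    bool enabled = true;
};
```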

Ok I think I understand your use case, thanks for explaining.

Although the thread title may be confusing, it seems the OP was actually asking how to process more samples than the host provides.
In that case there is no way around adding latency.

I think there’s no guarantee the plugin will work correctly if you resize the buffer that is passed to processBlock, so you should never do that.

In some cases, the plugin wrapper will hold onto the channel pointers of the AudioBuffer, and changing the size of the buffer might invalidate those pointers, causing the wrapper to crash.

Oh, you're right. In that case my comment was rather unrelated, and we should instead encourage the OP to use a FIFO to collect incoming samples until it holds enough, only then pass that block to the process that needs a certain size, and not resize anything in processBlock.
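
A minimal single-channel sketch of that FIFO idea (hypothetical names; a real plugin would run one of these per channel and report blockSize samples of latency via setLatencySamples(), as mentioned above):

```cpp
#include <array>

// Collects input until a full block is available, then processes it,
// while outputting the results of the previous block.
struct BlockAccumulator
{
    static constexpr int blockSize = 1024;

    // Called from processBlock with however many samples the host gives us.
    void push (const float* input, float* output, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            output[i] = processed[pos];  // result of the previous full block
                                         // (zeros before the first one -- that
                                         //  is where the latency comes from)
            fifo[pos] = input[i];        // collect the new input
            if (++pos == blockSize)
            {
                processFullBlock();      // run the 1024-sample DSP
                pos = 0;
            }
        }
    }

private:
    void processFullBlock()
    {
        // Placeholder: replace with the effect's real 1024-sample processing.
        processed = fifo;
    }

    std::array<float, blockSize> fifo {}, processed {};
    int pos = 0;
};
```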