Hello,
I've just started implementing an audio plugin, and my processing algorithm is designed to take a 25 ms audio frame as input, regardless of the host's block size. The hop size between consecutive frames is 10 ms.
So I need some kind of mechanism that feeds the right number of samples to my processing algorithm, independently of the buffer size. I have a few ideas for implementing this, but I'd like to know what the best practices are for doing it efficiently. My algorithm is quite computationally expensive, and I don't want to waste processing time copying and buffering audio data inefficiently.
My current idea is simply to have a second audio buffer that I fill every time I receive audio, and to launch the processing only once I have accumulated enough samples.
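To make it concrete, here is roughly what I have in mind, as a plain-C++ mono sketch (FrameAccumulator and processFrame are just placeholder names I made up, and the std::vector operations allocate, which I know I'd have to replace with a preallocated circular buffer on the audio thread):

```cpp
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Untested sketch of the auxiliary-buffer idea (mono).
// frameSize / hopSize would be computed from the sample rate,
// e.g. 25 ms and 10 ms at 44.1 kHz -> 1102 and 441 samples.
class FrameAccumulator
{
public:
    FrameAccumulator (std::size_t frameSize, std::size_t hopSize,
                      std::function<void (const float*, std::size_t)> processFrame)
        : frameSize (frameSize), hopSize (hopSize), processFrame (std::move (processFrame))
    {
        fifo.reserve (frameSize * 4); // arbitrary headroom
    }

    // Called from processBlock() with whatever block size the host gives us.
    void push (const float* input, std::size_t numSamples)
    {
        fifo.insert (fifo.end(), input, input + numSamples);

        // Run the algorithm for every complete 25 ms frame we now have,
        // then advance by the hop, keeping the overlapping tail.
        while (fifo.size() >= frameSize)
        {
            processFrame (fifo.data(), frameSize);
            fifo.erase (fifo.begin(), fifo.begin() + static_cast<std::ptrdiff_t> (hopSize));
        }
    }

private:
    std::size_t frameSize, hopSize;
    std::function<void (const float*, std::size_t)> processFrame;
    std::vector<float> fifo; // erase() shifts memory and insert() may allocate;
                             // a real version would use a fixed-size circular buffer.
};
```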
But building such an auxiliary buffer is quite cumbersome, as you have to handle both small and large host buffer sizes, each introducing a different latency…
I think a lot of people have faced this issue before, because it is inherent to frame-based audio processing algorithms. I'd like to know whether there is something in JUCE that helps deal with this (a class?).
I'm also open to any other resources you might find relevant to my problem.