Best practices for audio processing independent of buffer size

Hello,

I've just started implementing an audio plugin, and my processing algorithm is designed to take a 25 ms audio frame as input, whatever the host's block size is. The hop size between consecutive frames is 10 ms.
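
(In samples, that means something like this, e.g. at a 48 kHz sample rate:)

    // in prepareToPlay:
    const int frameSize = juce::roundToInt (0.025 * sampleRate); // 25 ms -> 1200 samples at 48 kHz
    const int hopSize   = juce::roundToInt (0.010 * sampleRate); // 10 ms -> 480 samples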

So I need some kind of framework that feeds the right number of samples to my processing algorithm, independently of the buffer size. I have a few ideas for implementing this, but I'd like to know the best practices for doing it efficiently. My algorithm is computationally expensive, and I don't want to waste processing time copying and buffering audio data inefficiently.

My current idea is simply to keep a second audio buffer that I fill every time I receive audio, and to launch the processing only once I have enough samples.
But building such an auxiliary buffer is quite cumbersome, as you have to deal with both small and large buffer sizes, each introducing a different latency…

I think a lot of people have faced this issue before, because it is inherent to frame-based audio processing algorithms. I'd like to know if there's something in JUCE that helps deal with this (a class?).
I'm also open to any other resources you might find relevant to my problem.

Hi there, a couple of years ago I wrote this class to deal with exactly this kind of buffering, especially for FFT processing: https://github.com/DanielRudrich/OverlappingFFTProcessor.
It might need some adjustments to work with JUCE 6, and I admit it lacks proper documentation, but maybe it helps 🙂
It automatically applies a window (the method that creates the window can be overridden). If you don't need input-data windowing, you might want to change that in the code.
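
(If you'd rather apply a window yourself, JUCE's dsp::WindowingFunction can do it; rough sketch, assuming frameData points at one frame of frameSize samples:)

    juce::dsp::WindowingFunction<float> window ((size_t) frameSize,
                                                juce::dsp::WindowingFunction<float>::hann);
    window.multiplyWithWindowingTable (frameData, (size_t) frameSize);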

Edit: I recently used it in a project and improved the interface. Will push it when I find time to do proper documentation 🙂

I'm still a bit new to JUCE, but when I had a similar problem, the best solution ended up being to wrap the processing into a processBlockPrivate() function that is only called once you have enough samples to feed it.

For your problem, I might try something like the code below, where:

  • desiredNumSamples is an integer variable calculated in prepareToPlay from the sample rate and the desired length of each processing frame (25 ms, it sounds like…)

  • storageBuffer is a larger AudioBuffer that stores the incoming audio and is passed to processBlockPrivate() once it contains enough samples. This is rough code; I'd suggest making it a proper circular buffer by keeping readPosition & writePosition variables, etc… (there's a sketch of that idea after the code below)

    // members: juce::AudioBuffer<float> storageBuffer; int storedSamples = 0, desiredNumSamples = 0;

    void processBlock (juce::AudioBuffer<float>& buffer)
    {
        const int numSamples = buffer.getNumSamples();

        // append the incoming block at the current write position
        for (int chan = 0; chan < buffer.getNumChannels(); ++chan)
            storageBuffer.copyFrom (chan, storedSamples, buffer, chan, 0, numSamples);

        storedSamples += numSamples;

        if (storedSamples >= desiredNumSamples)
        {
            processBlockPrivate (storageBuffer);
            storedSamples = 0; // everything consumed; a circular buffer would keep the overlap instead
            // reset latency counter...
        }
        else
        {
            // increment a latency counter here...
        }
    }

    void processBlockPrivate (juce::AudioBuffer<float>& buffer)
    {
        // run the expensive frame-based algorithm here...
    }
    

I think it will be hard to get around doing something like this for the frame-based processing you're describing… As far as reporting latency, you could keep track of the number of times processBlock() is run before processBlockPrivate() is triggered…
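
For reference, here's a rough sketch of that circular-buffer idea for a single channel, including the 10 ms hop (untested; FrameCollector and all member names are made up for illustration):

    #include <JuceHeader.h>
    #include <cstring>

    // Collects audio into fixed 25 ms frames and calls your algorithm on each
    // complete frame, advancing by a 10 ms hop so consecutive frames overlap.
    class FrameCollector
    {
    public:
        void prepare (double sampleRate, int maxBlockSize)
        {
            frameSize = juce::roundToInt (0.025 * sampleRate);
            hopSize   = juce::roundToInt (0.010 * sampleRate);
            fifo.setSize (1, frameSize + maxBlockSize); // worst case: frameSize - 1 stored + one full block
            fifo.clear();
            numStored = 0;
        }

        // Call from processBlock with one channel's samples; processFrame is the expensive algorithm.
        template <typename ProcessFrame>
        void push (const float* input, int numSamples, ProcessFrame&& processFrame)
        {
            fifo.copyFrom (0, numStored, input, numSamples); // append the incoming block
            numStored += numSamples;

            while (numStored >= frameSize) // run on every complete 25 ms frame
            {
                processFrame (fifo.getReadPointer (0), frameSize);

                // drop hopSize samples, keep the 15 ms overlap for the next frame
                numStored -= hopSize;
                std::memmove (fifo.getWritePointer (0),
                              fifo.getReadPointer (0, hopSize),
                              sizeof (float) * (size_t) numStored);
            }
        }

    private:
        juce::AudioBuffer<float> fifo;
        int frameSize = 0, hopSize = 0, numStored = 0;
    };

The memmove keeps the example short; a true circular buffer with read/write indices avoids moving data at all. Also note this only covers analysis: if your algorithm produces output you'd need a second FIFO for it, and you'd report the extra delay to the host with setLatencySamples().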

Thanks a lot, this class will definitely help 🙂. I understand that auxiliary audio buffers are the way to go, along with FloatVectorOperations for efficient copying, adding, etc.
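
(i.e., if I've understood correctly, things like this, with dest and src being raw float pointers:)

    juce::FloatVectorOperations::copy (dest, src, numSamples); // dest = src
    juce::FloatVectorOperations::add  (dest, src, numSamples); // dest += src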

But I'm not sure I understand why one should use the dsp module here (I don't know much about it, but I'll go through the docs and tutorials).

Thanks !

I added it so the interface can be used with dsp processing classes like ProcessContext and AudioBlock. It's indeed not strictly necessary; you could also adjust the interface to take references to AudioBuffer<float>, or even raw pointers.

Honestly, I'm a bit confused about the DSP module as well... just, like, in general…

The only reason I've used the DSP module in my own projects so far is to access some native JUCE features that require it, like the limiter it includes. I just do all my processing with AudioBuffers, and at the end of my signal chain I transfer from my buffer to an AudioBlock so it can be passed into the limiter ¯\_(ツ)_/¯
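
i.e. something along these lines (assuming limiter is a juce::dsp::Limiter<float> that was prepare()d in prepareToPlay):

    // wrap the AudioBuffer without copying, then hand it to the dsp class
    juce::dsp::AudioBlock<float> block (buffer);
    juce::dsp::ProcessContextReplacing<float> context (block);
    limiter.process (context);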

There’s an example implementation in DSP Testbench. Take a look at the FixedBlockProcessor in AudioDataTransfer.h.
