Best practices for audio processing independent of buffer size

I’m still a bit new to JUCE, but when I had a similar problem, the best solution ended up being to move the actual processing into a processBlockPrivate() function that only gets called once you have enough samples to feed it.

For your problem, I might try something like this:

where

  • desiredNumSamples is an integer variable calculated in prepareToPlay from the sample rate & the desired length of each processing frame (25 ms, it sounds like…)

  • storageBuffer is a larger AudioBuffer that stores the incoming audio and gets passed to processBlockPrivate() once it contains enough samples. This is rough code; I’d suggest making it a proper circular buffer by adding readPosition & writePosition variables, etc…
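For the first of those, the prepareToPlay calculation is just sample rate times frame length. A standalone sketch (no JUCE needed; the function name is mine, and the 25 ms comes from your description):

```cpp
#include <cassert>
#include <cmath>

// The kind of calculation you'd do in prepareToPlay(): convert a frame
// length in seconds into a whole number of samples.
int frameLengthInSamples (double sampleRate, double frameLengthSeconds)
{
    return (int) std::round (sampleRate * frameLengthSeconds);
}
```

At 48 kHz a 25 ms frame works out to exactly 1200 samples; at 44.1 kHz it rounds to 1103.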

      void processBlock (juce::AudioBuffer<float>& buffer) {
          const int numSamples = buffer.getNumSamples();

          // append the incoming block to the storage buffer;
          // numStoredSamples is an int member tracking how many
          // samples are currently sitting in storageBuffer
          for (int chan = 0; chan < buffer.getNumChannels(); ++chan) {
              storageBuffer.copyFrom (chan, numStoredSamples, buffer, chan, 0, numSamples);
          }
          numStoredSamples += numSamples;

          if (numStoredSamples >= desiredNumSamples) {
              processBlockPrivate (storageBuffer);
              numStoredSamples = 0;
              // reset latency counter...
          } else {
              // increment a latency counter here...
          }
      }

      void processBlockPrivate (juce::AudioBuffer<float>& buffer) {
          // frame-based processing goes here
      }
    

I think it will be hard to get around doing something like this for the frame-based processing you’re describing… As for updating latency, you could keep track of the number of times processBlock() is run before processBlockPrivate() is triggered…
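To turn that counter into an actual latency figure: the worst case is roughly desiredNumSamples samples, since that much audio has to accumulate before the first frame can run. A quick standalone sketch of the block-counting (plain C++, example numbers are assumptions):

```cpp
#include <cassert>

// Count how many host blocks elapse before one full frame has accumulated.
// blockSize and desiredNumSamples here are just example values.
int blocksUntilFrameReady (int blockSize, int desiredNumSamples)
{
    int stored = 0, blocks = 0;
    while (stored < desiredNumSamples)
    {
        stored += blockSize;   // one processBlock() call
        ++blocks;
    }
    return blocks;
}
```

With 256-sample host blocks and an 1103-sample frame, the frame first fires on the 5th block, i.e. up to 1280 samples (~29 ms at 44.1 kHz) of delay; you can report a figure like that to the host via setLatencySamples().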
