Processing big buffer blocks

The processing I want to do is dependent on a long section of audio, multiple buffers long. I cannot process each buffer separately because the program needs to know what’s in its neighboring buffers to do the processing.

Is there a way to pool many buffers, then change their values only when the processing is done for all the buffers?

Would the following work?

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiMessages)
{
   bufferVector.push_back(buffer);
   if (bufferVector.size() == threshold)
   {
      for (auto& pooledBuffer : bufferVector)
      {
         // do processing on pooledBuffer
      }
      bufferVector.clear();
   } 
}

This looks like the wrong approach to the problem, from a few points of view.

  1. The idea of the processBlock callback is that it passes you a block of input samples to work on and expects you to modify them inside the callback; the same buffer might then be passed to the next plugin in the processing chain. Since your code works on a copy of the data, it could never do anything but analyse the samples.
  2. Algorithms usually require a certain number of samples to be present. Since the DAW may decide to pass you blocks of any size between 1 sample and the maximum reported in prepareToPlay, counting the number of blocks received does not guarantee a certain number of samples.
  3. Resizing heap-based containers during processBlock is a no-go, since that triggers memory allocation and de-allocation, operations with non-deterministic execution time. You have to avoid such calls during the audio callback because they might cause audio dropouts if they take unexpectedly long.
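To make point 3 concrete, here is a minimal sketch in plain C++ (no JUCE; `Analyser`, `prepare` and `push` are hypothetical names, not a real API): reserve the full capacity once, outside the callback, so that appending on the audio thread can never allocate.

```cpp
#include <cstddef>
#include <vector>

// Sketch of point 3 above (illustrative names, not a real API): do the one
// allocation outside the audio callback, then only append within the
// reserved capacity on the audio thread.
struct Analyser
{
    void prepare (std::size_t maxSamplesNeeded)  // call from prepareToPlay
    {
        maxSize = maxSamplesNeeded;
        samples.clear();
        samples.reserve (maxSize);  // the only allocation happens here
    }

    // Safe inside the audio callback: push_back never reallocates while
    // size() stays below the reserved capacity.
    void push (const float* data, std::size_t numSamples)
    {
        for (std::size_t i = 0; i < numSamples && samples.size() < maxSize; ++i)
            samples.push_back (data[i]);
    }

    bool full() const { return samples.size() == maxSize; }

    std::vector<float> samples;
    std::size_t maxSize = 0;
};
```

The same idea applies to any container you touch from processBlock: give it its worst-case size in prepareToPlay and never grow it afterwards.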

A usual approach to your problem would be a ring buffer whose size matches your desired sample count at the given sample rate, allocated once in prepareToPlay. Push new samples in at one end while popping the already-processed samples out at the other, and replace the samples in the buffer passed to processBlock with those old samples. When all samples have been exchanged like that, perform your fixed-size processing on the ring buffer in place, then continue as described above. This will obviously introduce an audio latency of your block size, which you need to report to the host so that it can apply latency compensation. If you only do analysis, you can of course skip writing samples back and not report any latency.
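The exchange described above can be sketched with a small ring buffer in plain C++ (kept JUCE-free so it stands on its own; `BlockFifo`, the window size and the gain-of-2 "processing" are illustrative placeholders, not the real algorithm):

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of the exchange described above. Plain C++ so it stands
// alone; in a real JUCE plugin the object would be built in prepareToPlay
// and exchange() would run per sample inside processBlock.
class BlockFifo
{
public:
    explicit BlockFifo (std::size_t windowSize)
        : window (windowSize, 0.0f) {}      // single allocation, done up front

    // Trade one incoming sample for the oldest, already-processed one.
    float exchange (float in)
    {
        const float out = window[writePos]; // oldest sample, already processed
        window[writePos] = in;              // overwrite it with the new input

        if (++writePos == window.size())    // a full window has accumulated
        {
            writePos = 0;
            processWindow();
        }

        return out;
    }

private:
    void processWindow()
    {
        for (auto& s : window)  // placeholder "processing": a gain of 2;
            s *= 2.0f;          // the real fixed-size algorithm goes here
    }

    std::vector<float> window;
    std::size_t writePos = 0;
};
```

With a window of 4 samples, feeding in 1…8 yields 0, 0, 0, 0, 2, 4, 6, 8: one window of silence first, then the processed samples delayed by the window size. In a plugin that window size is the latency you would report, e.g. via AudioProcessor::setLatencySamples.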

The implementation of all this can vary from straightforward to highly optimised :wink:

I think you should report that big N-block sample latency to the DAW, do your processing in big chunks, but deliver the results block-wise once you have processed data.

@rafa1981 The approach @PluginPenguin described is the correct and only possible approach.

In all cases that need a larger number of samples, the latency is the number of samples the algorithm requires minus 1, since the worst case is that one sample was still missing before the algorithm could do its work.

I stand corrected; somehow I misread that the user's algorithm itself had a lot of latency (e.g. an FFT).

What you said wasn’t wrong. But the interesting part is the FIFO approach as opposed to keeping multiple blocks.
The key takeaway for the OP is that you always return the same number of samples you were given, and there is no chance to do things later.