Behind the scenes of processBlock() and getNextAudioBlock()

In the processBlock() and getNextAudioBlock() methods of the AudioProcessor and AudioSource classes (and maybe others) we developers are handed a buffer or bufferToFill containing a block of audio data to work with.
In an attempt to dive a bit deeper into JUCE, and into buffer-based processing in general, I would like to understand how, when, and by whom these buffers are created and filled.

In the tutorials the fact that buffer and bufferToFill arrive already nicely filled with audio data coming from the DAW is always taken for granted. I tried to follow the call stack of the processBlock() callback in Xcode but got kind of lost (I ended up in the AudioProcessorPlayer::audioDeviceIOCallback code), and I am not sure I understood correctly how these buffers are filled.

Can someone help me understand what is happening behind the scenes?
I understand it is a complicated story and it might not be feasible to give a fully comprehensive answer, but any rough idea of how the buffers are created and filled, and any suggestions for code to look into, would be very helpful.

How they are filled with your data is your own business. How they are created is a more complicated matter. They might be created entirely by JUCE, or they might be the actual buffers the operating system or the host DAW application has created. But you should usually never need to care about that detail; you should just figure out the fastest and most time-predictable way to read and/or process the buffers.

Indeed, I should have said “created” instead of “filled”, thanks for pointing that out.
OK, you are saying I should just be happy to already have those buffers without doing the dirty work. And what about the processBlock and getNextAudioBlock callbacks? I was also wondering how often they are called, who passes them the buffers with the audio data from the DAW, and where I can read more about how the DAW interacts with the AudioProcessor in general?

I’ve never cared much about how exactly those buffers come into existence… If my code is running as a plugin inside a DAW application (processBlock in the AudioProcessor code), they are probably the buffers the DAW application has created and is feeding data into. If my code is running as a standalone application (perhaps with getNextAudioBlock or audioDeviceIOCallback being called), the buffers might be coming directly from the operating system or even from the audio hardware involved. But none of that affects what I am doing with the audio data. Especially when using something like JUCE, there isn’t much I could control about it anyway.

Why are you interested in the exact details?

edit: the callbacks are called whenever new audio data is needed. If you are really concerned about how often exactly that happens, you can insert some instrumentation into your code, as in the sketch below. The calls likely happen every 64 to 1024 samples, but you can only know for sure by measuring it yourself. It can be more or less often, and it can also vary from one callback to the next, depending on the platform.
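As an illustration only (plain C++ with std::chrono rather than any JUCE facility, member names like lastBlockSize are made up, requires <atomic> and <chrono>), one way to instrument the callback is to record the block size and the time since the previous call into atomics on the audio thread and read them from a GUI timer or the debugger:

// assumed members of the processor, written from the audio thread only:
//   std::atomic<int>    lastBlockSize  { 0 };
//   std::atomic<double> lastIntervalMs { 0.0 };
//   std::chrono::steady_clock::time_point previousCallback;   // touched only on the audio thread

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    auto now = std::chrono::steady_clock::now();
    lastIntervalMs.store (std::chrono::duration<double, std::milli> (now - previousCallback).count());
    lastBlockSize.store (buffer.getNumSamples());
    previousCallback = now;

    // ... actual processing ...
}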

processBlock() and getNextAudioBlock() are the places where the audio thread does its work. But as we know, on the audio thread you must not allocate, so the buffers have to come from outside.
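To make that concrete, here is a minimal sketch (the scratch member and the class name are made up) of the usual pattern: any memory of your own is sized in prepareToPlay(), and processBlock() only reuses it:

void MyProcessor::prepareToPlay (double sampleRate, int maximumExpectedSamplesPerBlock)
{
    juce::ignoreUnused (sampleRate);
    // allocation is fine here, before playback starts
    scratch.setSize (getTotalNumOutputChannels(), maximumExpectedSamplesPerBlock);
}

void MyProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&)
{
    // no new/malloc/setSize in here, just reuse the pre-sized memory
    auto numChannels = juce::jmin (buffer.getNumChannels(), scratch.getNumChannels());
    for (int ch = 0; ch < numChannels; ++ch)
        scratch.copyFrom (ch, 0, buffer, ch, 0, buffer.getNumSamples());
}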

The audio pipeline is a so-called “pull” model. That means the driving part is actually the audio driver pulling data, i.e. handing over a block of memory and asking the AudioIODeviceCallback to fill it.
An audio application usually has a subclass of AudioIODeviceCallback to react to the driver’s request for an audio buffer.
The AudioAppComponent, for instance, has an AudioSourcePlayer and sets itself as the AudioSource that provides the data.
The buffer handed over by the driver is then passed on to the AudioSource, which fills it in AudioSource::getNextAudioBlock().
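For the standalone case, a minimal sketch of that chain (modelled on the JUCE white-noise tutorial, class name made up) could look like this; the buffer arrives from the driver via the AudioSourcePlayer and is filled here:

class NoiseApp : public juce::AudioAppComponent
{
public:
    NoiseApp()            { setAudioChannels (0, 2); }   // no inputs, stereo output
    ~NoiseApp() override  { shutdownAudio(); }

    void prepareToPlay (int, double) override {}
    void releaseResources() override {}

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // fill the driver-provided buffer with quiet white noise
        for (int ch = 0; ch < bufferToFill.buffer->getNumChannels(); ++ch)
        {
            auto* out = bufferToFill.buffer->getWritePointer (ch, bufferToFill.startSample);
            for (int i = 0; i < bufferToFill.numSamples; ++i)
                out[i] = random.nextFloat() * 0.1f - 0.05f;
        }
    }

private:
    juce::Random random;
};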

The AudioProcessor is a different thing. It is not providing data but altering it. The buffer already contains data (not necessarily, but most of the time), which is then altered in place by the AudioProcessor::processBlock() call.
You can have a private AudioProcessor in an AudioSource and process the data there:

void MyProcessingSource::getNextAudioBlock (const AudioSourceChannelInfo& bufferToFill)
{
    // pull the audio from the upstream source first
    source.getNextAudioBlock (bufferToFill);
    // this constructor references the existing data, it does not allocate
    AudioBuffer<float> buffer (bufferToFill.buffer->getArrayOfWritePointers(),
                               bufferToFill.buffer->getNumChannels(),
                               bufferToFill.startSample,
                               bufferToFill.numSamples);
    MidiBuffer midi;   // ignore midi here
    processor.processBlock (buffer, midi);
}

AudioProcessor serves a second purpose: it can be dynamically loaded into a host. But the principle is the same, the host provides a buffer full of audio that it wants the AudioProcessor to process.
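In a hosted plugin that boils down to something like this sketch (a hypothetical MyPluginProcessor applying a fixed gain): the DAW has already filled the buffer with the track’s audio before the call, and whatever is left in the buffer afterwards is taken as the result:

void MyPluginProcessor::processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi)
{
    juce::ignoreUnused (midi);

    // alter the host-provided data in place, here by halving the level
    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
        buffer.applyGain (ch, 0, buffer.getNumSamples(), 0.5f);
}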

I hope that clears things up a bit.

Ok, thanks for sharing your take on this :smiley:

Why are you interested in the exact details?

I guess because I am a physicist by training and because I am curious about how the things I use work, broadly speaking. In particular, I am now digging into buffer-based processing, and I was not happy with the introductory (and thus, by definition, a bit superficial) presentation of JUCE provided by the tutorials.

Also, I find it amazing that with JUCE you can open a plugin project, add one line of math in processBlock, and have a plugin that runs in any DAW! I am trying to understand a bit better how this is achieved…

This is REALLY helpful @daniel.

It is exactly the kind of input I was looking for.