In the getNextAudioBlock() methods of AudioSource classes (and maybe others), we developers are provided with a bufferToFill that contains a block of audio data we can play around with.
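For reference, this is the kind of callback I mean (a minimal sketch along the lines of the tutorials; `MainComponent` and the attenuation are just placeholders, not my actual code):

```cpp
// Minimal sketch of the callback I am asking about, based on the JUCE tutorials.
void MainComponent::getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill)
{
    // By the time we get here, bufferToFill.buffer already points at a block of
    // samples -- this is the part I would like to understand.
    for (int channel = 0; channel < bufferToFill.buffer->getNumChannels(); ++channel)
    {
        auto* samples = bufferToFill.buffer->getWritePointer (channel, bufferToFill.startSample);

        for (int i = 0; i < bufferToFill.numSamples; ++i)
            samples[i] *= 0.5f;   // e.g. just attenuate whatever is already in the buffer
    }
}
```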
In an attempt to dive a bit deeper into JUCE and into buffer-based processing in general, I would like to understand how/when/by whom these buffers are created and filled.
In the tutorials, the fact that bufferToFill is already nicely filled with audio data coming from the DAW is always taken for granted. I tried to follow the call history of the processBlock() callback in Xcode but got kind of lost (I ended up in the AudioProcessorPlayer::audioDeviceIOCallback code) and I am not sure I correctly understood how these buffers are filled.
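To show where I got lost, this is roughly the chain of calls I think I stepped through (reconstructed from memory, so the exact names and levels may well be off):

```cpp
// Roughly the call chain I think I saw while stepping through in Xcode
// (reconstructed from memory -- may be inaccurate):
//
// audio device thread
//   -> juce::AudioProcessorPlayer::audioDeviceIOCallback()  // where I ended up in the debugger
//     -> juce::AudioProcessor::processBlock()               // my code, with the buffer already filled
```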
Can someone help me understand what is happening behind the scenes?
I understand it is a complicated story and it might not be feasible to provide a fully comprehensive answer, but any rough idea of how the buffers are created/filled, and any suggestions for code to look into, would be very helpful.