Reading audio buffers... just that!

Can you elaborate on this?

I was just reading the DSP tutorial, which includes some helpful information about the signal processing lifecycle, e.g. the prepare, process, and reset functions and the sequence in which they are called.
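To check my mental model: here's a minimal sketch of that lifecycle as I understand it, using juce::dsp::Gain as a stand-in processing stage (the GainStage class name is just mine for illustration, not from the tutorial):

```cpp
#include <JuceHeader.h>

// A made-up wrapper class, just to show where each lifecycle call fits.
class GainStage
{
public:
    // prepare() is called once before playback starts, with the sample
    // rate, maximum block size, and channel count that will be used.
    void prepare (const juce::dsp::ProcessSpec& spec)
    {
        gain.prepare (spec);
        gain.setGainLinear (0.5f);
    }

    // process() is called once per audio block, on whatever buffer the
    // context wraps.
    void process (const juce::dsp::ProcessContextReplacing<float>& context)
    {
        gain.process (context);
    }

    // reset() is called when playback stops, to clear internal state.
    void reset()
    {
        gain.reset();
    }

private:
    juce::dsp::Gain<float> gain;
};
```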

But what you are saying is that these don't work on live data directly; they only operate on the data passed to them through the getNextAudioBlock -> processBlock chain, not directly on the device buffers?
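In other words, is the bridge something like this? (SketchProcessor and its members are placeholder names I've made up, assuming the usual AudioBlock/ProcessContextReplacing wrapping):

```cpp
#include <JuceHeader.h>

// Hypothetical processor, just to illustrate the dataflow: the caller
// (e.g. processBlock in a plugin, fed by the host, or getNextAudioBlock
// in a standalone app) hands us an AudioBuffer, and we wrap that same
// memory for the dsp classes.
class SketchProcessor
{
public:
    void prepareToPlay (double sampleRate, int samplesPerBlock, int numChannels)
    {
        gain.prepare ({ sampleRate,
                        (juce::uint32) samplesPerBlock,
                        (juce::uint32) numChannels });
    }

    void processBlock (juce::AudioBuffer<float>& buffer)
    {
        // AudioBlock is a non-owning view of the buffer we were given,
        // so the dsp module processes that same memory in place.
        juce::dsp::AudioBlock<float> block (buffer);
        gain.process (juce::dsp::ProcessContextReplacing<float> (block));
    }

private:
    juce::dsp::Gain<float> gain;
};
```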

Sorry if this is a poorly phrased question; I'm just running into some of the same confusion about dataflow and naming conventions that others have hit, and I'm trying to understand.