The graph uses its own internal audio buffers to store the audio data that is passed between nodes. If the user passes a buffer larger than the maximum block size to processBlock, then the internal buffers may be too small to hold all the required intermediate audio data. In that case, the block is processed in smaller chunks that fit into the internal buffers. The "splitting up" is handled by the if statement at the top of the function; the "normal" case, with no splitting, is handled by the bottom half of the function.
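To illustrate the idea, here is a minimal sketch of that split-into-chunks pattern. This is not the actual JUCE source; the names maxBlockSize and processChunk are hypothetical stand-ins (processChunk represents whatever the "normal", non-splitting path does), and a real implementation would also have to split the MIDI buffer by timestamp:

```cpp
#include <juce_audio_basics/juce_audio_basics.h>

// Hypothetical sketch: split an incoming buffer that is larger than the
// prepared block size into chunks that fit the internal buffers.
void processChunk (juce::AudioBuffer<float>& chunk, juce::MidiBuffer& midi); // stand-in for the normal path

void processBlockWithSplitting (juce::AudioBuffer<float>& buffer,
                                juce::MidiBuffer& midi,
                                int maxBlockSize)
{
    const int totalSamples = buffer.getNumSamples();

    if (totalSamples > maxBlockSize)
    {
        // Walk through the incoming buffer in chunks no larger than maxBlockSize.
        for (int start = 0; start < totalSamples; start += maxBlockSize)
        {
            const int numThisTime = juce::jmin (maxBlockSize, totalSamples - start);

            // Create a view onto the relevant region of each channel (no copying).
            juce::AudioBuffer<float> chunk (buffer.getArrayOfWritePointers(),
                                            buffer.getNumChannels(),
                                            start, numThisTime);

            // A real implementation would also carve out the matching slice of
            // the MidiBuffer here, offsetting event timestamps by 'start'.
            processChunk (chunk, midi);
        }
    }
    else
    {
        // The buffer already fits the internal buffers: process it in one go.
        processChunk (buffer, midi);
    }
}
```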
OK, so if I understood correctly, this situation arises if I create an instance of AudioProcessorGraph, prepare it with a block size of 512, and then pass a buffer of 1024 samples to processBlock. Is that correct?