AudioBuffer vs. AudioBlock

Hi folks,

While browsing the dsp sources, I realised that the dsp classes work on the newly introduced AudioBlock rather than on the classic AudioBuffer. I understand that an AudioBlock only references sample data, whereas an AudioBuffer can actually own the sample data.

1.) What is the benefit of duplicating the AudioBuffer’s functionality?

2.) Is this meant to join somewhere down the development line, or will the AudioBlock replace the AudioBuffer at some point later?

Thanks for clarification.


P.S. An introductory text on the dsp module and some background about its architecture and intended use would certainly help. Or is there some documentation that I missed?


Think of them kind of like String and StringRef. Both have their own purpose and are very different internally, but the interface they present is similar.

I think we’ll probably want to make them converge a bit in the long-term, but AudioBuffer is a really useful class that we’ll definitely want to keep.

(And yes, we really need to add some high-level blurb about the dsp module’s architecture, it’s on our to-do list…)


Thanks Jules, that makes things clearer!
I’ll see if I can wrap my head around which one to use in which situation, so I can avoid overhead.