While browsing the dsp sources, I realised that the dsp module works on the newly introduced AudioBlock rather than on the classic AudioBuffer. I understand that an AudioBlock only references sample data, whereas an AudioBuffer can actually own the sample data.
1.) What is the benefit of duplicating the AudioBuffer’s functionality?
2.) Is this meant to converge somewhere down the development line, or will AudioBlock replace AudioBuffer at some point?
Thanks for any clarification.
P.S. An introductory text on the dsp module and some background on its architecture and intended use would certainly help. Or is there some documentation that I missed?