I’m working on a small audio plugin (a simple filter), and I’m modeling the audio stream as a chain of AudioSource objects operating on the input buffer received from the `processBlock` callback of my main AudioProcessor. While reading the documentation and the source code for the existing AudioSource classes, I’m a little confused about how to get input into a chain of AudioSource objects and then back out. For example, `IIRFilterAudioSource` and `ReverbAudioSource` both take another AudioSource as a constructor parameter, but the docblock comment describing the buffer in `AudioSourceChannelInfo` suggests that you could instead pass input samples to your AudioSource via that buffer and have the AudioSource overwrite that same buffer in place.
Which approach is preferred? Why?
It seems to me that the simplest way to approach something like this would be as follows:
```cpp
// In the `processBlock` callback of my main AudioProcessor
AudioSourceChannelInfo info (buffer);
firstAudioSource.getNextAudioBlock (info);  // Reads from and writes to `info`
secondAudioSource.getNextAudioBlock (info); // Reads from and writes to `info`
thirdAudioSource.getNextAudioBlock (info);  // Reads from and writes to `info`
```
But that seems to go against the pattern I’m seeing in the implementations of `IIRFilterAudioSource` and `ReverbAudioSource`, which leads me to believe there’s some issue with this approach that I’m unaware of.
Any help here would be greatly appreciated, thank you!