I was looking at the “Tutorial: Build an audio player”, and while digging deeper into AudioSourcePlayer, I noticed that if it’s using a device that has both active input and output channels, it copies the samples from the input channels into the output channels before calling source->getNextAudioBlock().
I was thinking that this might not produce the desired effect in some cases. Imagine I open a device with 2 active input and 2 active output channels, and I provide a source that plays a mono track. Firstly, the work of copying the input to the output seems wasted, since it’s going to be overwritten by whatever comes from the source. Secondly, I am not sure whether the mono signal from the source gets copied to both output channels, or only to the first one — in which case the second input channel would pass straight through to the output, which is not desired.
I was thinking of implementing my own version of AudioSourcePlayer that only writes to the output channels, without copying anything from the input channels first. It should be pretty simple, as the code for AudioSourcePlayer is rather short.
But I was wondering: why does a class named AudioSourcePlayer actually process input into the output? Are there historical or architectural reasons for this (for example, if the idea was that the player could also be used to receive data from the inputs at some point, and that use case just isn’t covered by the “Tutorial: Build an audio player”)? Is this class intended to be more like an AudioProcessor?