Why does AudioSourcePlayer copy inputs to outputs?

I was looking at the “Tutorial: Build an audio player”, and digging deeper into AudioSourcePlayer, I noticed that if it is used with a device that has both active input and output channels, it copies the samples from the input channels into the output channels before calling source->getNextAudioBlock().

I suspect this might not produce the desired effect in some cases. Imagine I open a device with 2 active input and 2 active output channels, and I provide a source that plays a mono track. First, the work of copying the input to the output seems wasted, since it will be overwritten by whatever comes from the source. Second, I am not sure whether the mono signal from the source will be copied to both output channels, or only to the first one (in which case the second input channel would pass straight through to the output, which is not desired).

I am considering re-implementing my own version of AudioSourcePlayer that only writes to the output channels, without copying anything from the input channels first. It should be fairly simple, as the code for AudioSourcePlayer is rather short.

But I was wondering why a class named AudioSourcePlayer actually processes input into the output. Are there historical or architectural reasons (for example, if the idea was that the player could also be used to receive data from the inputs at some point, and that is just not covered by the “Tutorial: Build an audio player”)? Is this class meant to work more like an AudioProcessor?

Thank you!

The AudioSourcePlayer is a very trivial helper class. It is used in AudioAppComponent, which serves as simple boilerplate for small applications.

In most real use cases you will probably write your own version, like you already intended to.

The AudioSourcePlayer is not intended to be mixed with other AudioIODeviceCallbacks, so if there is an input, it is a valid assumption that the inputs were selected intentionally.

Thanks for your reply, Daniel.
That’s what I thought (approximately), but just wanted to confirm.
I’ll write my own replacement for it (I am also not inheriting from AudioAppComponent, and my app has a slightly more complicated structure).
Thanks again.

AudioAppComponent was one of the first classes I ditched, since putting GUI and DSP in the same class is a bit of a smelly design. IMHO, AudioAppComponent is only suitable for projects with fewer than 5 classes (rule of thumb).

Yeah, I came to the same conclusion during my first refactoring as well :)

Now I have the audio processing in separate classes managed by the application, with the GUI kept separate. I guess the tutorials had to be done the other way for the sake of simplicity, which is good for people who are just starting out.

Thanks for your comments!
