Hmm, there’s something here that I must just not be understanding. One of the first things that caught my eye was the general structure of this callback:
    virtual void audioDeviceIOCallback (const float** inputChannelData,
                                        int numInputChannels,
                                        float** outputChannelData,
                                        int numOutputChannels,
                                        int numSamples) = 0;
Here it’s implied that the mic and speakers are being run together for a common purpose. Compared with other libraries I’ve looked at, this seems quite unusual — the two devices would normally run independently. At first I thought this looked interesting, since the one important case (for us) where these flows interact directly is echo cancellation: the speakers are driven by a remote mic across the network, and that output can bleed back into the live mic on the local side.
Of course, JUCE doesn’t support echo cancellation, so what is really going on here? What case, other than “open-mic EC”, requires running input and output together like this? And why would the sample rates ever need to match? I can see why matching might be “desirable”, but I’m still missing why it would ever be a “requirement” (i.e. something breaks if they don’t match).