Testing the JUCE master branch from GitHub on my 2014 iMac running Windows (via Boot Camp), it appears that the microphone and speakers do not use the same sample rate:
http://ibin.co/2UZO8qRVQ3yL
To reach that screenshot I just set the input and output device pulldowns to the standard built-in devices.
Investigating my Windows sound settings, I see that my speakers are set to 24-bit 48 kHz, whereas my microphone is set to 2-channel 16-bit 44.1 kHz.
For my project, should I simply adjust one of these values so that they match, or is it possible to handle mismatching sample rates in JUCE?
I would prefer the latter, as that would save my future users from having to fiddle their settings in order to use my app.
I suspect I would need a separate component for input and output, each deriving from AudioAppComponent and calling setAudioChannels(1, 0) and setAudioChannels(0, 2) respectively (see the sketch below).
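In case it helps clarify what I mean, here's a rough, untested sketch of that idea. The class names and the "push samples into a FIFO" comments are just placeholders for whatever I'd actually do with the audio; only setAudioChannels(), shutdownAudio() and the AudioAppComponent overrides are real JUCE API.

```cpp
#include <JuceHeader.h>

// Capture-only component: 1 input channel, 0 outputs.
// Its own device manager would presumably open the mic at 44.1 kHz.
class InputComponent : public juce::AudioAppComponent
{
public:
    InputComponent()           { setAudioChannels (1, 0); }
    ~InputComponent() override { shutdownAudio(); }

    void prepareToPlay (int, double sampleRate) override
    {
        inputSampleRate = sampleRate;   // whatever rate the input device reports
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        // copy / push the incoming samples somewhere (FIFO, recorder, ...)
        bufferToFill.clearActiveBufferRegion();
    }

    void releaseResources() override {}

private:
    double inputSampleRate = 0.0;
};

// Playback-only component: 0 inputs, 2 output channels.
// Its device manager would open the speakers at 48 kHz.
class OutputComponent : public juce::AudioAppComponent
{
public:
    OutputComponent()           { setAudioChannels (0, 2); }
    ~OutputComponent() override { shutdownAudio(); }

    void prepareToPlay (int, double sampleRate) override
    {
        outputSampleRate = sampleRate;  // whatever rate the output device reports
    }

    void getNextAudioBlock (const juce::AudioSourceChannelInfo& bufferToFill) override
    {
        bufferToFill.clearActiveBufferRegion();  // fill with output audio here
    }

    void releaseResources() override {}

private:
    double outputSampleRate = 0.0;
};
```

Presumably I'd then have to resample between the two rates myself when passing audio from one component to the other, which is part of what I'm hoping JUCE can spare me.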
π
PS After adjusting my system settings (lowering the speakers to 44.1 kHz), the audio settings page works. But curiously the latency detection page fails. No error; it just doesn't seem to hear the signal it emitted. Could this be some hardware-level echo cancellation magic at work?