I have been using JUCE for years on OS X, iOS and Windows with no problems. But now I have started adding audio to my apps and have a couple of issues, one theoretical, one practical.
First, how many AudioIODevice instances can I have? For example, on a PC, if I want one such object to correspond to a microphone input on a web cam, and another to correspond to a headphone output on an onboard sound card, is that allowed? Or am I restricted to a single physical device providing both input and output, such as a sound card? And if I am in fact allowed more than one physical device, do I need to instantiate a separate AudioDeviceManager for each? Something like the sketch below is what I have in mind.
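To make the question concrete, here is a minimal sketch of the arrangement I am asking about, assuming two independent AudioDeviceManager instances are even allowed. The device names and the surrounding struct are just placeholders, not my real code.

```cpp
#include <JuceHeader.h>  // Projucer-generated header (JUCE 5)

// One manager wraps the webcam mic (input only), the other wraps the onboard
// card's headphone output (output only). Is something like this legal?
struct TwoDeviceSketch
{
    juce::AudioDeviceManager inputManager, outputManager;

    juce::String open()
    {
        juce::AudioDeviceManager::AudioDeviceSetup inSetup;
        inSetup.inputDeviceName = "Microphone (USB Web Cam)";   // placeholder name
        inSetup.useDefaultInputChannels = true;

        // Input-only manager: 1 input channel, 0 outputs.
        auto inError = inputManager.initialise (1, 0, nullptr, true, {}, &inSetup);
        if (inError.isNotEmpty())
            return inError;

        juce::AudioDeviceManager::AudioDeviceSetup outSetup;
        outSetup.outputDeviceName = "Headphones (Realtek Audio)"; // placeholder name
        outSetup.useDefaultOutputChannels = true;

        // Output-only manager: 0 inputs, 2 output channels.
        return outputManager.initialise (0, 2, nullptr, true, {}, &outSetup);
    }
};
```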
Second, when I attempt to initialise audio after my app is fully up and running, handling messages and so on, the debugger reports an error shortly after I call setAudioDeviceSetup on the AudioDeviceManager. It appears to be an asynchronous error: the call stack starts in juce::WASAPIClasses::WASAPIAudioIODeviceType::ChangeNotificationClient::OnPropertyValueChanged, goes through juce::DeviceChangeDetector::triggerAsyncDeviceChangeCallback, and ends in a jassert in Timer::startTimer whose comment reads "if you're calling this before or after the MessageManager is running then you're not going to get any timer callbacks".
It looks like I am doing things in the wrong order, but I don't understand what. Can you give any general advice on the kinds of circumstances that trigger this assertion?
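For reference, the call that precedes the assertion is shaped roughly like this. It is simplified, and the surrounding class and device names are placeholders; it runs from a button handler long after the message loop has started.

```cpp
#include <JuceHeader.h>  // Projucer-generated header (JUCE 5)

class AudioStarter  // placeholder class, standing in for my real component
{
public:
    void startAudio()
    {
        juce::AudioDeviceManager::AudioDeviceSetup setup;
        deviceManager.getAudioDeviceSetup (setup);

        setup.inputDeviceName  = "Microphone (USB Web Cam)";   // placeholder
        setup.outputDeviceName = "Headphones (Realtek Audio)"; // placeholder

        // The WASAPI change-notification jassert fires asynchronously some
        // time after this call returns.
        auto error = deviceManager.setAudioDeviceSetup (setup, true);

        if (error.isNotEmpty())
            DBG (error);
    }

private:
    juce::AudioDeviceManager deviceManager;
};
```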
FYI, this problem is happening on JUCE 5, though I expect it is my problem, not a JUCE bug.