JUCE audio input and output device setup issues

I've been using JUCE for years on OS X, iOS and Windows with no problems. But now I've started adding audio to my apps, and I have a couple of issues: one theoretical, one practical.

First, how many AudioIODevice instances can I have? For example, on a PC, can I have one such object corresponding to a microphone input on a webcam, and another corresponding to a headphone output on an onboard sound card? Or am I restricted to a single physical device providing both input and output, such as a sound card? And if I am allowed more than one physical device, do I need to instantiate a separate AudioDeviceManager for each?
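To make the question concrete, this is the kind of enumeration I have in mind (a minimal sketch using the JUCE 5 API, not my real code):

```cpp
// Sketch: list every device type JUCE knows about on this platform,
// and the input/output devices each one can see.
juce::AudioDeviceManager manager;

for (auto* type : manager.getAvailableDeviceTypes())
{
    type->scanForDevices(); // must be called before querying names

    DBG ("Device type: " << type->getTypeName());
    DBG ("  inputs:  " << type->getDeviceNames (true) .joinIntoString (", "));
    DBG ("  outputs: " << type->getDeviceNames (false).joinIntoString (", "));
}
```

On Windows the webcam microphone and the sound-card output typically appear here as separate device names, which is what makes me wonder whether I can drive them as two separate AudioIODevice objects.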

Second, when I attempt to initialise audio after my app is fully up and running (handling messages and so on), I get an error in the debugger shortly after calling setAudioDeviceSetup on the AudioDeviceManager. It appears to be an asynchronous error: the call stack starts in juce::WASAPIClasses::WASAPIAudioIODeviceType::ChangeNotificationClient::OnPropertyValueChanged, goes on to juce::DeviceChangeDetector::triggerAsyncDeviceChangeCallback, and ends in a jassert in Timer::startTimer with the commented message "if you're calling this before or after the MessageManager is running then you're not going to get any timer callbacks".
It looks like I'm doing things out of the required sequence, but I don't understand what. Can you give any general advice on the kinds of circumstances that trigger this assertion?
FYI, this problem is happening on JUCE 5, though I expect it is my problem, not a JUCE bug.
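In case it helps, here is roughly the shape of the call that trips the assertion, reduced to a sketch (deviceManager is a member of my app; the device names and values here are hypothetical, not my real ones):

```cpp
// Sketch of the setAudioDeviceSetup call that triggers the async
// WASAPI change notification and, eventually, the Timer jassert.
juce::AudioDeviceManager::AudioDeviceSetup setup;
deviceManager.getAudioDeviceSetup (setup);

setup.inputDeviceName  = "Microphone (HD Webcam)"; // hypothetical names
setup.outputDeviceName = "Speakers (Realtek)";
setup.sampleRate       = 48000.0;
setup.bufferSize       = 512;

auto error = deviceManager.setAudioDeviceSetup (setup, true);
```

This is made from the message thread, as far as I can tell, well after the app's message loop has started.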

Trying to answer the first part of my query myself, in the hope of confirmation from you: reading the docs further, it looks like I can only have one AudioDeviceManager. However, what I'm hoping is that I can dispense with the AudioDeviceManager altogether and create and manage separate AudioIODevice instances myself. To create them, I presume I would need to use one of the platform-specific AudioIODeviceType factory functions. Is that correct?
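In other words, something along these lines (a sketch only, assuming the WASAPI device type on Windows and the JUCE 5 API; openInputOnlyDevice and the channel/rate/buffer values are mine):

```cpp
// Sketch: create an input-only AudioIODevice directly from the
// platform-specific device type, bypassing AudioDeviceManager.
std::unique_ptr<juce::AudioIODevice> openInputOnlyDevice (juce::AudioIODeviceCallback& callback)
{
    std::unique_ptr<juce::AudioIODeviceType> type (
        juce::AudioIODeviceType::createAudioIODeviceType_WASAPI (false)); // shared mode

    type->scanForDevices();
    auto inputNames = type->getDeviceNames (true); // true == input device names

    if (inputNames.isEmpty())
        return nullptr;

    // An empty output name should give an input-only device.
    std::unique_ptr<juce::AudioIODevice> device (type->createDevice ({}, inputNames[0]));

    juce::BigInteger inputChannels;
    inputChannels.setRange (0, 2, true); // first two input channels

    if (device->open (inputChannels, {}, 44100.0, 512).isNotEmpty())
        return nullptr; // open failed

    device->start (&callback);
    return device;
}
```

Presumably I'd create a second, output-only device the same way from another type instance, and run my own sample-rate conversion between the two callbacks.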

To add a further puzzle, I have now found working experimental code of mine from five years ago. In it, I instantiate two separate derived AudioDeviceManager objects: one handling an input device, the other handling a separate piece of output hardware. I remember now that I had this code working, quite happily exchanging two-way audio (via a codec) with a Linux box across the network, whose audio was handled by Qt libraries. Was I just lucky at the time, or is this the way to handle input and output audio in JUCE when (a) you want separate physical devices for input and output, and (b) you want to specify different sample rates and buffer sizes for input and output? (I use a sample-rate converter to convert between them.)
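The old code boils down to something like this (a reconstruction from memory, not verbatim; inputEngine and outputEngine are my own AudioIODeviceCallback objects):

```cpp
// Sketch: two independent managers, one opened input-only, one output-only.
juce::AudioDeviceManager inputManager, outputManager;

// 2 input channels, no outputs, fall back to the default device on failure...
inputManager.initialise (2, 0, nullptr, true);
inputManager.addAudioCallback (&inputEngine);   // feeds the codec/network

// ...and no inputs, 2 output channels on the other piece of hardware.
outputManager.initialise (0, 2, nullptr, true);
outputManager.addAudioCallback (&outputEngine); // pulls from the jitter buffer
```

Each manager was free to run its device at its own sample rate and buffer size, with my converter sitting between the two callbacks.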