Add multiple specific microphones to an AudioProcessorGraph

I’m having a hard time figuring out how to add multiple specific microphones as inputs to an AudioProcessorGraph.

Specifically, how to select a certain microphone is beyond me. If I use this code, the system’s default microphone always seems to be used:

```cpp
mAudioGraph.addNode (std::make_unique<juce::AudioProcessorGraph::AudioGraphIOProcessor> (
    juce::AudioProcessorGraph::AudioGraphIOProcessor::audioInputNode));
```

Do I need to call AudioDeviceManager::setAudioDeviceSetup(…) before adding each microphone? If so, that API seems to work only with device names, not device IDs or paths. How is that supposed to work with two microphones that share the same name?
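For reference, here is a minimal sketch of how selecting one specific input device through AudioDeviceManager::setAudioDeviceSetup could look. It assumes an existing `deviceManager` of type juce::AudioDeviceManager, and the device name "USB Microphone" is a made-up placeholder. Note that this selects a single device only; it does not make two microphones available at once:

```cpp
// Sketch: pick one specific input device before wiring up the graph.
// "USB Microphone" is a placeholder name, not a real device on your system.
juce::AudioDeviceManager::AudioDeviceSetup setup;
deviceManager.getAudioDeviceSetup (setup);   // start from the current setup
setup.inputDeviceName = "USB Microphone";    // selection is by name only
setup.useDefaultInputChannels = true;

// The second argument marks this as the user's chosen device.
auto error = deviceManager.setAudioDeviceSetup (setup, true);
if (error.isNotEmpty())
    DBG ("Could not open input device: " << error);
```

With two identically named devices, a name-based lookup like this cannot distinguish between them, which is exactly the problem described above.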

Thanks for any hints.

Some background information on what I’m trying to achieve:

As far as I know, the AudioDeviceManager in Juce isn’t designed to support things like multiple USB microphones, which appear as separate audio devices.

Well, on Mac you can create a so-called Aggregate Device and attach many audio devices/interfaces to it. All of their inputs and outputs will then be available in AudioDeviceManager.

Yes, but even if we were willing to educate our users on setting up such an aggregate device, we would still be limited to one such device at a time, and the solution wouldn’t work on Windows.

Since I have seen a few questions like mine come up on this forum, with nobody going ahead and implementing multi-device support, I’m guessing there must be design decisions in JUCE that make adding multi-device support quite hard. :thinking:

The main problem is likely that the clocks of those individual devices are not synchronised. Even if you configure all devices to run at the same sample rate, their hardware oscillators will never run exactly in sync and will start drifting apart slightly over time. This means some kind of re-synchronisation algorithm has to deal with that offset, which is not trivial to implement at high audio quality. Proper threading can also be non-trivial when combining multiple audio streams. This is why it is best done at the system driver level, which leads to the aggregate-device approach mentioned above.

In the end, JUCE’s core focus is on high-quality professional audio applications, and professionals will always choose to work with a single multichannel audio device and proper clock-synchronisation solutions when combining multiple digital input sources.

I’m not sure whether there are open-source projects that implement cross-platform combination of audio device streams. It would be a great community effort to tackle this challenge for JUCE :slightly_smiling_face: