The nature of audio devices

The following output shows that the types of device supported by my Windows computer are Windows Audio and DirectSound, and that while there are no devices of type Windows Audio, there are two input and two output devices of type DirectSound.

Device Type Name = Windows Audio
Device Type Name = DirectSound
   Input Device Name = Primary Sound Capture Driver
   Input Device Name = Microphone (Parallels Audio Con
   Output Device Name = Primary Sound Driver
   Output Device Name = Speakers (Parallels Audio Controller)
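The shape of that listing can be modelled with a small piece of plain C++. To be clear, `DeviceType` and `deviceReport` below are invented for illustration, not the real JUCE classes; the point is just that each device type owns independent lists of input and output device names, and a type may have no devices at all:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-in for an audio API ("device type"): it owns independent
// lists of input and output device names, which need not be paired.
struct DeviceType
{
    std::string typeName;
    std::vector<std::string> inputDeviceNames;
    std::vector<std::string> outputDeviceNames;
};

// Build a report in the same shape as the listing above. Note that a
// type may have no devices at all (as with Windows Audio here), and
// that the input and output lists are independent.
std::string deviceReport (const std::vector<DeviceType>& types)
{
    std::string out;
    for (const auto& t : types)
    {
        out += "Device Type Name = " + t.typeName + "\n";
        for (const auto& n : t.inputDeviceNames)
            out += "   Input Device Name = " + n + "\n";
        for (const auto& n : t.outputDeviceNames)
            out += "   Output Device Name = " + n + "\n";
    }
    return out;
}
```

Feeding it one empty type ("Windows Audio") and one type with two inputs and two outputs ("DirectSound") reproduces the structure of the output above.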

The DirectSound devices are maintained in a structure called DSoundDeviceList.

struct DSoundDeviceList
{
      StringArray outputDeviceNames, inputDeviceNames;
...

What I glean from this structure is that the number of input devices need not be the same as the number of output devices. In general, is that correct?

Until I found this structure, I was under the impression that input and output devices were paired, so that in UML terms a device type defines the type of many devices, and a device has an input device and an output device. However, that now seems wrong. On the basis of this structure, it appears that there's no such thing as a device - just device types, input devices and output devices. So a device type defines the type of many input devices and many output devices.

Any thoughts?

Sorry, I meant to say that what this suggests is that it's possible to create an AudioIODevice in which the types of the input and output devices differ, i.e. they're not both DirectSound or both Windows Audio; one could be DirectSound and the other Windows Audio. Is that correct?

No - all of the AudioIODevice classes are for a particular audio API.

At some point soon we'll be adding a class that allows you to sync multiple AudioIODevice objects into a single one, but that's not ready yet.

Presumably I could use the AudioIODeviceCombiner class (once it's made public) to synchronise the input of one kind of device (say DirectSound) with the output of another kind of device (say Windows Audio), by creating a source and a target instance of AudioIODevice, where the source device has no output channels enabled and the target device has no input channels enabled.

Yep. That's the idea.
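That source/target idea can be sketched in plain C++. Everything here (`ToyDevice`, `canCombine`, the bitmask representation of enabled channels) is made up for illustration; the real AudioIODeviceCombiner isn't public yet and its API may look nothing like this:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Toy stand-in for an opened audio device: which channels were enabled
// is remembered as two bitmasks (bit n set => channel n enabled).
struct ToyDevice
{
    std::string name;
    std::uint32_t enabledInputChannels  = 0;
    std::uint32_t enabledOutputChannels = 0;

    bool isCaptureOnly()  const { return enabledInputChannels != 0 && enabledOutputChannels == 0; }
    bool isPlaybackOnly() const { return enabledOutputChannels != 0 && enabledInputChannels == 0; }
};

// Toy "combiner" check: the scheme works when every device contributes
// exactly one direction, and there is at least one source and one target.
bool canCombine (const std::vector<ToyDevice>& devices)
{
    bool anySource = false, anyTarget = false;
    for (const auto& d : devices)
    {
        if (! d.isCaptureOnly() && ! d.isPlaybackOnly())
            return false;               // mixed or inactive devices not allowed in this toy scheme
        anySource |= d.isCaptureOnly();
        anyTarget |= d.isPlaybackOnly();
    }
    return anySource && anyTarget;
}
```

So a DirectSound device opened with two input channels and no output channels, plus a Windows Audio device opened with no input channels and two output channels, would form a valid source/target pair under this toy check.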

Thanks for your response.

Suppose there are two identical devices D1 and D2 of type DirectSound installed on a system, and that D1 and D2 have an input device IN and an output device OUT. After scanForDevices() is called, the array of input device names will be [IN, IN] and the array of output device names will be [OUT, OUT]. Now, I know there's a function for appending numbers to array elements to ensure that they are unique, so let's assume that this has been applied to both arrays and that they contain [IN1, IN2] and [OUT1, OUT2].
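The function referred to is presumably JUCE's StringArray::appendNumbersToDuplicates; the following is a toy reimplementation in plain C++, and the bare-number suffix format (IN1, IN2) is just an assumption to match the naming used above:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy version of "append numbers to duplicates": every name that occurs
// more than once gets a 1-based counter appended, so [IN, IN] becomes
// [IN1, IN2], while names that are already unique are left alone.
std::vector<std::string> appendNumbersToDuplicates (const std::vector<std::string>& names)
{
    std::vector<std::string> result;
    for (size_t i = 0; i < names.size(); ++i)
    {
        size_t total = 0, index = 0;
        for (size_t j = 0; j < names.size(); ++j)
            if (names[j] == names[i])
            {
                ++total;
                if (j < i)
                    ++index;            // how many earlier duplicates precede this one
            }
        result.push_back (total > 1 ? names[i] + std::to_string (index + 1) : names[i]);
    }
    return result;
}
```

With this, the input array [IN, IN] becomes [IN1, IN2] and the output array [OUT, OUT] becomes [OUT1, OUT2], as assumed in the questions below.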

1.   How do you know which output is associated with which input?

2.   Is a pair of input and output names, e.g. (IN1, OUT1), considered to be unique - a key, in other words - in the context of all objects of AudioIODevice?

3.   If the answer to 2 is yes, what is to stop me creating two objects of AudioIODevice with the same input and output names?

Let's suppose that IN1 and OUT1 are associated with D1, and IN2 and OUT2 are associated with D2.

4.  In the scheme I outlined previously, will it be possible to synchronise IN1 with OUT2, and IN2 with OUT1, i.e. to configure the device combiner to combine (IN1, 0), (0, OUT2), (IN2, 0) and (0, OUT1), where 0 means no input or no output? I'm not saying that I want to be able to do this; I'm just trying to understand the issues.
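For concreteness, the configuration in question 4 can be written down in a toy form. `CombinerEntry`, `isBalanced` and the empty-string-for-0 convention are all invented for illustration, not the real combiner API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// One entry in a hypothetical combiner configuration: an input name or
// an output name; an empty string plays the role of "0" (none).
struct CombinerEntry
{
    std::string inputName, outputName;
};

// The cross-paired configuration from question 4: (IN1, 0), (0, OUT2),
// (IN2, 0) and (0, OUT1).
std::vector<CombinerEntry> crossPairedConfig()
{
    return { { "IN1", "" }, { "", "OUT2" }, { "IN2", "" }, { "", "OUT1" } };
}

// Sanity check: every entry supplies exactly one direction, and the set
// as a whole has as many input entries as output entries.
bool isBalanced (const std::vector<CombinerEntry>& entries)
{
    int inputs = 0, outputs = 0;
    for (const auto& e : entries)
    {
        const bool hasIn  = ! e.inputName.empty();
        const bool hasOut = ! e.outputName.empty();
        if (hasIn == hasOut)
            return false;               // exactly one direction per entry
        inputs  += hasIn ? 1 : 0;
        outputs += hasOut ? 1 : 0;
    }
    return inputs == outputs;
}
```

This only shows that the configuration is well-formed on paper; whether the real combiner would accept it - i.e. whether IN1 can actually be synchronised with OUT2 rather than OUT1 - is exactly the open question above.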