How to create (or get pointers to) nodes for input and output devices?

I'm trying to write an audio mixer based on the Juce library. I am slowly getting the hang of the Juce API and of most of what I need to do. I am, however, unclear on a few things…

I am trying to create a "graph" to route audio through the various components of my mixer application. However, I can't find any info on how to create the "nodes" (or get pointers to them) for the input and output devices that I select using AudioDeviceSelectorComponent, so that I can connect the nodes in my graph (i.e. input device -> effect -> output device).

I have also seen documentation for a MixerAudioSource class, but I can't find any info or example on how it would be implemented within an application. Is it treated as an AudioProcessor (i.e. used within a graph)?

It would be really great if there were a little more example code (ideally at least one code example for each significant class).

Thanks in advance for any help you can offer.

MixerAudioSource is part of the AudioSource class hierarchy, so it doesn’t work as an AudioProcessor.

Since you already mentioned wanting to create an audio graph, have you looked at the Juce AudioProcessorGraph class?

The audio hardware inputs and outputs are not available as "nodes"; you get the audio buffers for those in the audio callbacks.
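To make that concrete, here is a minimal sketch of where those buffers arrive (the class name is hypothetical, and newer JUCE versions use audioDeviceIOCallbackWithContext instead of this older callback signature):

```cpp
#include <JuceHeader.h>

// Hypothetical minimal callback: the hardware hands you raw input/output
// buffers here, rather than exposing the device as a graph node.
struct MyDeviceCallback : public juce::AudioIODeviceCallback
{
    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples) override
    {
        // Pass the hardware inputs straight through to the outputs.
        for (int ch = 0; ch < numOutputChannels; ++ch)
        {
            if (ch < numInputChannels && inputChannelData[ch] != nullptr)
                juce::FloatVectorOperations::copy (outputChannelData[ch],
                                                   inputChannelData[ch], numSamples);
            else
                juce::FloatVectorOperations::clear (outputChannelData[ch], numSamples);
        }
    }

    void audioDeviceAboutToStart (juce::AudioIODevice*) override {}
    void audioDeviceStopped() override {}
};
```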

Hi

Thanks for getting back to me on this. I appreciate the help.

I have in fact looked at the AudioProcessorGraph class. The only example I could find for it is the AudioPluginHost code. But in that example, the pins for each channel need to be connected from the input device to the inputs of the plugins loaded within the graph, and then from the plugin outputs to the output device. I just can't find where in the code that is happening.

I am also unclear about the use of MixerAudioSource. There doesn't seem to be any example code for that class (along with many others). Do I have to instantiate that class within my application (i.e. in the MainWindow class) and pass the data from the input and output devices to it, or am I supposed to use it in place of the AudioSource class (i.e. AudioAppComponent)?

It also doesn't seem very obvious to me how I create a variable number of input devices and control each of them independently. My mixer application has a configurable number of mixer input channels, and I need to configure one channel (i.e. left or right) from each input device, process it independently (i.e. allow an effect/plugin to be optionally loaded for each), and then pass the output from each channel to a mixer class that sums them all together; the output of that is passed to the output device. There doesn't seem to be example code, or any tutorial page, that describes how to do this.

The AudioProcessorGraph has two special IO node types that act as the connection to the "outside world". By default these are not actually connected to the audio hardware, but they can be hooked up to it. The easiest way is probably to use the AudioProcessorPlayer class.

It's a bit difficult and messy to write meaningful example code for these, because Juce doesn't, for example, come with any AudioProcessor subclasses other than AudioProcessorGraph itself and AudioPluginInstance. But here's an example where I create a graph: graph input -> plugin -> graph output. The AudioProcessorPlayer acts as a callback for the AudioDeviceManager and takes care of interfacing with the audio hardware's inputs and outputs.
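(A minimal sketch of that kind of setup, using the current unique_ptr-based graph API; the function and variable names are just for illustration:)

```cpp
#include <JuceHeader.h>

juce::AudioDeviceManager deviceManager;
juce::AudioProcessorPlayer player;
juce::AudioProcessorGraph graph;

// Builds graph input -> plugin -> graph output and starts the audio device.
void setUpGraph (std::unique_ptr<juce::AudioPluginInstance> plugin)
{
    using IOProcessor = juce::AudioProcessorGraph::AudioGraphIOProcessor;

    deviceManager.initialiseWithDefaultDevices (2, 2);

    auto setup = deviceManager.getAudioDeviceSetup();
    graph.setPlayConfigDetails (2, 2, setup.sampleRate, setup.bufferSize);

    // The two special IO nodes that represent the "outside world".
    auto inputNode  = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioInputNode));
    auto outputNode = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioOutputNode));
    auto pluginNode = graph.addNode (std::move (plugin));

    // Route both channels of a stereo pair through the plugin.
    for (int ch = 0; ch < 2; ++ch)
    {
        graph.addConnection ({ { inputNode->nodeID,  ch }, { pluginNode->nodeID, ch } });
        graph.addConnection ({ { pluginNode->nodeID, ch }, { outputNode->nodeID, ch } });
    }

    // The player is the AudioIODeviceCallback that pulls audio through the graph.
    player.setProcessor (&graph);
    deviceManager.addAudioCallback (&player);
}
```

Note that addConnection() returns false if a connection can't be made, so real code should check those return values.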

You probably don't want to be using MixerAudioSource for an actual mixer application, since it doesn't have any features besides mixing the connected AudioSources.

I am not sure what you mean by a "variable number of input devices"? Juce does not support using multiple different audio interfaces simultaneously. (Unless your operating system, like macOS, has a device aggregation feature.)


Hi again

I really do appreciate the help! I don't yet fully understand the code you sent, but I will look up some of the classes you use in the Juce documentation, and hopefully I will get a little clearer on what I need to do.

You stated above that "Juce does not support using multiple different audio interfaces simultaneously." That concerns me, as it seems like a pretty major flaw! I was trying to design my mixer application to run on both Windows and Mac, and didn't want to develop code individually for each, so using Juce seemed to make sense. I had thought that the Juce library was developed by a company that had created a Digital Audio Workstation (sequencer). I would think that being able to take simultaneous audio input from as many audio interfaces as the user has connected to their computer would be a "no brainer".

I wanted to allow the user of my mixer application to configure (up to a predetermined max) how many input "tracks" they want on their mixer, and then assign a "channel" from an input device to each "track". My mixer would then allow the user to set the "input level" (gain) and "pan" (left/right position) for each "track", and the software would mix the audio streams (with optional software/VST effects for each) into a stereo output, as a hardware mixer would.

On my test/dev system (a Windows 10 PC), I have a 4-channel (ASIO/DirectSound based) audio interface, and also a stereo (DirectSound based) interface. This would (I thought) mean that I could test mixing up to 6 input channels and send the mixed audio to a selected stereo output.

If, on Windows, I can only test with the 4-channel audio interface, I guess that's OK for my initial development, but I would like to work out a way around that limitation (if needed, by modifying the Juce library) in the future.

It seems like the input and output do not need to be on the same device… correct?

I can't right now think of any pro audio software that supports multiple different audio devices at the same time. The assumption in that software has been that if the user needs extra channels for I/O, they will have a single audio interface that handles all the channels. (The difficulty in the multi-device scenario is that the clocks in the devices are unlikely to be completely synced, even if they run at the same sample rate.)

Unfortunately, the input and output will also likely have to be on the same device. The AudioDeviceSelectorComponent is a bit misleading in that regard. (You can of course try whether it happens to work.)

Hi again

Just FYI… regarding pro audio software that allows input from more than one audio device (on Windows): if configured to use the audio devices as DirectSound or KernelAudio devices, I know for a fact that the following software supports it:

  • Steinberg Cubase
  • Cakewalk
  • Mixpad

You are correct that if the software is used (on Windows) in ASIO mode, only a single input device can be selected for all channels; however, the output does not need to be sent to the same audio device.

Regarding the code you sent me…
I think I get that the AudioProcessorPlayer class is used to get audio from the AudioSource (?) class and send it to the AudioProcessorGraph class, which can in turn be configured to send audio buffers to the audio processor/plug-in that is loaded.

I am still totally unclear on how an AudioSource class is associated with a specific input device/audio channel. I also don't get how the output is configured for (associated with) a specific device, or how output buffers are sent to the output device.

I have been able to create/configure the AudioDeviceSelectorComponent and use it to select one or more input and output devices. I (think I) then call getCurrentAudioDevice() to get a pointer to the AudioIODevice that was selected.

If I create an AudioSource class (or MixerAudioSource), how do I configure the audio device to send the audio to that class?

How do I then get the buffers that are processed by getNextAudioBlock() sent to the output device?

I wish there was some kind of diagram that shows how all of these classes relate to each other.

Look up the AudioDeviceManager and AudioIODeviceCallback classes; those are the relevant ones for getting the audio running and accessing the device input and output buffers.

As you can see from the documentation page, AudioProcessorPlayer is a subclass of AudioIODeviceCallback:

https://docs.juce.com/develop/classAudioIODeviceCallback.html
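For example, here is a minimal sketch of how those pieces fit together (the names are just for illustration; the selector component would be added to some parent component and sized as usual):

```cpp
#include <JuceHeader.h>

juce::AudioDeviceManager deviceManager;

// The selector edits the AudioDeviceManager; the manager owns the current
// device and delivers its buffers to whatever callback you register.
void setUpAudio (juce::AudioIODeviceCallback& callback)
{
    deviceManager.initialiseWithDefaultDevices (2, 2);
    deviceManager.addAudioCallback (&callback);

    if (auto* device = deviceManager.getCurrentAudioDevice())
        DBG ("Using device: " + device->getName());
}

// To let the user pick devices, give the same manager to the selector:
auto makeSelector()
{
    return std::make_unique<juce::AudioDeviceSelectorComponent> (
        deviceManager,
        0, 256,   // min/max input channels to show
        0, 256,   // min/max output channels to show
        false,    // show MIDI input options
        false,    // show MIDI output selector
        false,    // show channels as stereo pairs
        false);   // hide advanced options behind a button
}
```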

I think MixerAudioSource is not going to be useful at all for what you are trying to do. (It is meant for mixing together other AudioSources like the AudioFormatReaderSource.)

edit: I did a super simple AudioIODeviceCallback subclass that sums its inputs into a stereo output with gains and pans applied. It's not intended to be efficient or 100% correct, but to illustrate how you could do the mixing at the lower level.
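(A sketch along those lines; the member names and the fixed maximum of 64 inputs are just for illustration:)

```cpp
#include <JuceHeader.h>
#include <array>
#include <cmath>

// Sums all hardware inputs into a stereo output, with a gain and a pan
// per input channel. Not efficient or production-ready, just illustrative.
class SimpleMixerCallback : public juce::AudioIODeviceCallback
{
public:
    SimpleMixerCallback()
    {
        gains.fill (1.0f);  // unity gain
        pans.fill (0.5f);   // centred (0 = hard left, 1 = hard right)
    }

    void audioDeviceIOCallback (const float** inputChannelData, int numInputChannels,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples) override
    {
        // Always clear the outputs first, then accumulate each input.
        for (int ch = 0; ch < numOutputChannels; ++ch)
            juce::FloatVectorOperations::clear (outputChannelData[ch], numSamples);

        if (numOutputChannels < 2)
            return;

        const int numIns = juce::jmin (numInputChannels, (int) gains.size());

        for (int in = 0; in < numIns; ++in)
        {
            if (inputChannelData[in] == nullptr)
                continue;

            // Equal-power pan law.
            const float left  = gains[in] * std::cos (pans[in] * juce::MathConstants<float>::halfPi);
            const float right = gains[in] * std::sin (pans[in] * juce::MathConstants<float>::halfPi);

            juce::FloatVectorOperations::addWithMultiply (outputChannelData[0],
                                                          inputChannelData[in], left,  numSamples);
            juce::FloatVectorOperations::addWithMultiply (outputChannelData[1],
                                                          inputChannelData[in], right, numSamples);
        }
    }

    void audioDeviceAboutToStart (juce::AudioIODevice*) override {}
    void audioDeviceStopped() override {}

    std::array<float, 64> gains, pans; // per-input settings, arbitrary max of 64
};
```

You would register an instance of this with the AudioDeviceManager via addAudioCallback(), just like the AudioProcessorPlayer.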

For production code, you will probably want to use the AudioProcessorGraph and AudioProcessorPlayer instead, especially if you are planning on supporting plugins too. You would need to write your own AudioProcessor subclasses for your mixer-specific processing (applying gains, pans, etc.).
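A bare-bones version of such a subclass might look like this (a sketch only; real code would use AudioProcessorParameter objects instead of raw atomics):

```cpp
#include <JuceHeader.h>
#include <atomic>
#include <cmath>

// Applies a gain and an equal-power pan to a stereo signal, so it can be
// inserted as a node in an AudioProcessorGraph.
class GainPanProcessor : public juce::AudioProcessor
{
public:
    GainPanProcessor()
        : AudioProcessor (BusesProperties()
                              .withInput  ("Input",  juce::AudioChannelSet::stereo())
                              .withOutput ("Output", juce::AudioChannelSet::stereo())) {}

    void prepareToPlay (double, int) override {}
    void releaseResources() override {}

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        const float g   = gain.load();
        const float pan = panPosition.load(); // 0 = hard left, 1 = hard right

        buffer.applyGain (0, 0, buffer.getNumSamples(),
                          g * std::cos (pan * juce::MathConstants<float>::halfPi));
        if (buffer.getNumChannels() > 1)
            buffer.applyGain (1, 0, buffer.getNumSamples(),
                              g * std::sin (pan * juce::MathConstants<float>::halfPi));
    }

    // Remaining pure-virtual boilerplate required by AudioProcessor:
    const juce::String getName() const override         { return "GainPan"; }
    bool acceptsMidi() const override                   { return false; }
    bool producesMidi() const override                  { return false; }
    double getTailLengthSeconds() const override        { return 0.0; }
    juce::AudioProcessorEditor* createEditor() override { return nullptr; }
    bool hasEditor() const override                     { return false; }
    int getNumPrograms() override                       { return 1; }
    int getCurrentProgram() override                    { return 0; }
    void setCurrentProgram (int) override               {}
    const juce::String getProgramName (int) override    { return {}; }
    void changeProgramName (int, const juce::String&) override {}
    void getStateInformation (juce::MemoryBlock&) override {}
    void setStateInformation (const void*, int) override {}

    std::atomic<float> gain { 1.0f }, panPosition { 0.5f };
};
```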

Hi again. Just wanted to say thanks for your help with this topic. I think I have a handle on it now.

Hi! I'm sorry to bump this thread after such a long time. I am having the exact same problem you seemed to have. Any suggestions?