Suggestions for mixer architecture

I’m working on a commercial mixer application for Mac and Windows that allows the user to route audio input from several sources (microphones and applications) to a fixed number of audio outputs (e.g. line-out and monitor).

For the next release of our software, we’d like to add VST (and AU) filter support, and are looking into using JUCE to replace our home-grown audio mixer, while keeping our current user interface written in QT.

What would be a good starting point architecture-wise in the JUCE universe for such an audio mixer? The AudioProcessorGraph class?

I looked at the AudioPluginHost example, but it seems to only allow a single input to be connected to a single output, instead of n inputs feeding one output. I’m unsure whether this is a limitation of the example app or of the AudioProcessorGraph class the sample uses.

Thanks for any pointers into the right direction.

This is less “suggestions for a mixer architecture” and more about how to glue JUCE onto what you already have, isn’t it?

First of all, I suggest ironing out how to run JUCE inside your app: what you’re looking for relies on the message thread being properly up and running, since it drives many of the components underlying the audio systems (async messaging, and whatever else). You’ll need to kick it off via initialiseJuce_GUI and shut it all down by calling shutdownJuce_GUI.
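As a rough sketch (assuming the JUCE modules are already on your include path; the wrapper struct name is mine, not part of JUCE), the bare-minimum embedding looks something like this:

```cpp
// Sketch: running JUCE inside a host app whose UI is Qt, not JUCE.
// Assumes the juce_events module is available; integration with Qt's
// own event loop still needs care on top of this.
#include <juce_events/juce_events.h>

struct JuceBackend   // hypothetical wrapper name, purely illustrative
{
    JuceBackend()  { juce::initialiseJuce_GUI(); }  // set up MessageManager etc.
    ~JuceBackend() { juce::shutdownJuce_GUI(); }    // tear it all down on exit
};

// JUCE also ships an RAII helper that does the same pairing for you:
// juce::ScopedJuceInitialiser_GUI juceInit;
```

Construct one of these (or the ScopedJuceInitialiser_GUI) early in your app’s lifetime and destroy it last, so everything JUCE-related lives inside that bracket.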

Yes, the AudioPluginHost is deliberately simplistic, but don’t gloss over the fact that it serves as a good starting point: it puts together everything UI- and audio-related, from plugin scanning and loading to graph handling and audio device management. So yeah - the AudioProcessorGraph is closer to what you’re looking for.

Notice that this graph class is itself an AudioProcessor, whose I/O can be configured basically any way you like. In the same example host, the I/O is configured via the AudioProcessorPlayer, which is driven by the AudioDeviceManager. From there, each node added to the graph controls its own I/O (nodes being AudioProcessor instances themselves), telling the parent graph how it should be processed. If you’re seeing discrepancies in the number of I/O channels (plus MIDI) the nodes offer, that can come down to any number of reasons: it depends on which node (i.e. AudioProcessor) type you’re referring to, and what it relates to.
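The device-to-graph plumbing described above can be sketched roughly as follows (assuming the JUCE audio modules are available; error handling and lifetime management are omitted, and the function name is mine):

```cpp
// Sketch: feeding an AudioProcessorGraph from the audio device, roughly
// the way the AudioPluginHost wires things up.
#include <juce_audio_devices/juce_audio_devices.h>
#include <juce_audio_processors/juce_audio_processors.h>

using IOProcessor = juce::AudioProcessorGraph::AudioGraphIOProcessor;

juce::AudioDeviceManager deviceManager;
juce::AudioProcessorPlayer player;
juce::AudioProcessorGraph graph;

void setUpGraph()   // hypothetical helper
{
    deviceManager.initialiseWithDefaultDevices (2, 2);  // 2 ins, 2 outs
    player.setProcessor (&graph);                       // the graph is itself an AudioProcessor
    deviceManager.addAudioCallback (&player);           // device -> player -> graph

    // I/O processor nodes expose the hardware channels inside the graph.
    auto input  = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioInputNode));
    auto output = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioOutputNode));

    // Pass hardware input channel 0 straight to output channel 0;
    // your mixer nodes (plugins, gains) would sit between these two.
    graph.addConnection ({ { input->nodeID, 0 }, { output->nodeID, 0 } });
}
```

Your n-to-1 mixing then falls out naturally: the graph allows many connections to terminate on the same destination channel, and it sums them for you.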


I am not completely sure what you mean by that, but it definitely supports all kinds of I/O configurations for the processing nodes - like this 32-in/4-out configuration for the GRM Tools Spaces plugin:

If you mean running several hardware input and output audio devices at the same time, that isn’t supported in the Audio Plugin Host - but that isn’t really supported in Juce to begin with.


Here is a simplified mockup of what I’m trying to achieve:

Each connection would come with its own gain and, optionally, a varying number of user-definable audio filters.
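Conceptually, each such connection is just a gain-scaled accumulation into the destination bus. A framework-free sketch of that inner mixing step (all names hypothetical, plain C++ rather than JUCE types):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical per-connection mix: scale each source by the connection's
// own gain and accumulate into a shared output bus, as in the mockup.
// Any per-connection filter chain would process `source` before this sum.
struct Connection
{
    const std::vector<float>* source;  // one input's sample block
    float gain;                        // this connection's own gain
};

void mixToOutput (const std::vector<Connection>& connections,
                  std::vector<float>& outputBus)
{
    for (auto& c : connections)
        for (std::size_t i = 0; i < outputBus.size() && i < c.source->size(); ++i)
            outputBus[i] += (*c.source)[i] * c.gain;
}
```

In a JUCE graph you would get the same effect by inserting a gain processor node per connection, but the arithmetic above is all the “mixer” part really is.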

Guess you are right. Figuring out what parts of the JUCE universe are right for my purpose is challenging.

Thanks. I’ll start from this and see how far I get.

Right, I don’t think that’s going to work. While the AudioPluginHost appears to let you add those additional Audio Input and Audio Output nodes, they won’t really work, due to how Juce is designed: it expects inputs and outputs to be handled by the same device, because that’s how things usually work with semipro and pro audio hardware. This is not a limitation of the AudioProcessorGraph, but of how the AudioDeviceManager and the audio processing callbacks interact with it.

:thinking: In that case, JUCE might not be able to do what I want at all??

Possibly not, but someone who has actually attempted it would need to explain how it works or doesn’t work.