I’m in the middle of converting my app from CoreAudio to JUCE and I’d like some advice on getting started.
Output routing
My app currently has a rather sophisticated routing matrix which lets you create multichannel output busses and assign them to the currently selected audio device’s outputs, similar to how Pro Tools does it. To implement this, I use an audio output bus class which internally holds an AUGraph with this setup:
Output Unit <-- Mixer Unit <-- Audio Source
The mixer unit is there so that multiple audio sources can feed the same output bus.
Each audio output bus has a channel map and thus only gets callbacks for the used channels.
Is a similar behavior doable with JUCE? Is there any overhead involved if I just use the standard AudioDeviceIOCallback with all the channels the device provides and discard the channels I don’t need? I’m not sure whether this is roughly what the AUGraph does internally anyway when a channel map is set.
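To make the first question a bit more concrete, here’s a rough sketch of what I mean by discarding channels (the usedChannels set and renderBusChannel() are just placeholders for my bus logic, and I believe newer JUCE versions use audioDeviceIOCallbackWithContext() instead of this older callback signature):

```cpp
#include <JuceHeader.h>
#include <set>

// Sketch only: a device callback that writes just the channels a bus uses
// and clears everything else.
class OutputBusCallback : public juce::AudioIODeviceCallback
{
public:
    void audioDeviceIOCallback (const float** /*inputChannelData*/, int /*numInputChannels*/,
                                float** outputChannelData, int numOutputChannels,
                                int numSamples) override
    {
        for (int ch = 0; ch < numOutputChannels; ++ch)
        {
            if (usedChannels.count (ch) != 0)
                renderBusChannel (ch, outputChannelData[ch], numSamples); // placeholder for the bus rendering
            else
                juce::FloatVectorOperations::clear (outputChannelData[ch], numSamples);
        }
    }

    void audioDeviceAboutToStart (juce::AudioIODevice*) override {}
    void audioDeviceStopped() override {}

private:
    std::set<int> usedChannels { 0, 1 };         // device channels this bus is mapped to
    void renderBusChannel (int, float*, int) {}  // placeholder for the actual source/mixer code
};
```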
Is it common to start/stop the audio device when no audio is needed, or should I just leave the audio running and pass zeroed audio buffers? I know that starting up the device can take some time, so I keep wondering whether I should just leave the audio output loop running.
From what I understood about your app, I think the AudioProcessor and AudioProcessorGraph classes might be what you’re looking for, if you haven’t already checked those.
They let you create very flexible (run-time) chaining of AudioProcessor instances, and within the AudioProcessor class there’s the BusesLayout class, which lets you configure very sophisticated bus setups.
I’d suggest you give them a try and get back with your observations. Those classes are very popular, so lots of people here will be able to help you with them.
EDIT: AudioProcessor is also the base class for plugin wrappers, in case you need that too.
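A minimal sketch of the kind of setup I mean, assuming a recent JUCE version (MySourceProcessor is a placeholder for one of your own AudioProcessor subclasses, and the stereo channel count is just an example):

```cpp
#include <JuceHeader.h>

juce::AudioDeviceManager deviceManager;
juce::AudioProcessorGraph graph;
juce::AudioProcessorPlayer player;

void setUpGraph()
{
    // Open the default output device with 2 channels (just an example).
    deviceManager.initialiseWithDefaultDevices (0, 2);

    // The graph's "audio output" node represents the device's output channels.
    using IOProcessor = juce::AudioProcessorGraph::AudioGraphIOProcessor;
    auto output = graph.addNode (std::make_unique<IOProcessor> (IOProcessor::audioOutputNode));

    // MySourceProcessor is a placeholder for your own AudioProcessor subclass.
    auto source = graph.addNode (std::make_unique<MySourceProcessor>());

    // Route the source's first two channels to device channels 0 and 1.
    for (int ch = 0; ch < 2; ++ch)
        graph.addConnection ({ { source->nodeID, ch }, { output->nodeID, ch } });

    // The AudioProcessorPlayer is the device callback that drives the graph.
    player.setProcessor (&graph);
    deviceManager.addAudioCallback (&player);
}
```

If you connect several nodes to the same output channel, the graph sums them for you, which should cover what your mixer unit currently does.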
What I usually do is, instead of playing to the AudioIODevice, create an AudioFormatWriter and pull from my pipeline in a loop: either via getNextAudioBlock() if it is based on AudioSource (i.e. your AudioDeviceIOCallback was an AudioSourcePlayer), or via processBlock() if it was an AudioProcessorPlayer. The buffers you get there you then feed into the AudioFormatWriter.
While writing this I thought it might be a nice idea to inherit from AudioIODevice and render from the callback directly into a file, but usually you have transport controls as well, so you might end up with something custom anyway.
Caveat: if you call getNextAudioBlock() from a loop instead of from an actual audio device, you need to replace all non-blocking classes (especially BufferingAudioSource) with a blocking version, otherwise you never get data, because these classes will deliver zeroes if they haven’t had time to read the actual data yet…
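Roughly like this, as a sketch (the WAV settings, block size and render length are arbitrary, and the source argument stands for whatever AudioSource your pipeline ends in):

```cpp
#include <JuceHeader.h>

// Renders an AudioSource offline into a WAV file (sketch, not production code).
void renderToFile (juce::AudioSource& source, const juce::File& outputFile,
                   double sampleRate = 44100.0, int numChannels = 2,
                   juce::int64 totalSamples = 44100 * 10)
{
    source.prepareToPlay (512, sampleRate);

    auto stream = std::make_unique<juce::FileOutputStream> (outputFile);
    if (! stream->openedOk())
        return;

    juce::WavAudioFormat wavFormat;
    std::unique_ptr<juce::AudioFormatWriter> writer (
        wavFormat.createWriterFor (stream.get(), sampleRate,
                                   (unsigned int) numChannels, 24, {}, 0));
    if (writer == nullptr)
        return;

    stream.release(); // the writer now owns and deletes the stream

    // Pull fixed-size blocks from the pipeline and hand them to the writer.
    juce::AudioBuffer<float> buffer (numChannels, 512);

    for (juce::int64 pos = 0; pos < totalSamples; pos += buffer.getNumSamples())
    {
        juce::AudioSourceChannelInfo info (&buffer, 0, buffer.getNumSamples());
        source.getNextAudioBlock (info);
        writer->writeFromAudioSampleBuffer (buffer, 0, buffer.getNumSamples());
    }

    source.releaseResources();
}
```

AudioFormatWriter::writeFromAudioSource() wraps a loop like this as well, in case you don’t need to do anything between pulling and writing.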
Thanks for your reply! I had a look at the AudioProcessor class before, but I didn’t understand how it is connected to the AudioIODevice.
Does my class hold an AudioIODevice and implement its callback while forwarding the calls to the AudioProcessors? And what’s the best way to mix the outputs of the different audio processors? In my current setup the mixer unit does this for me, and it’s of course not a big deal to write code for it myself. I’m just wondering what the right way to handle this would be.
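Or would the AudioSource route be the more natural fit here? Something like this is what I’d picture (just a sketch; the ToneGeneratorAudioSource members are stand-ins for my real sources, and I’m assuming MixerAudioSource is the rough equivalent of my mixer unit):

```cpp
#include <JuceHeader.h>

// Sketch: an AudioSourcePlayer acts as the device callback, and a
// MixerAudioSource sums several sources, like the mixer unit in my AUGraph.
class OutputBus
{
public:
    explicit OutputBus (juce::AudioDeviceManager& dm) : deviceManager (dm)
    {
        mixer.addInputSource (&sourceA, false);   // stand-ins for the real sources
        mixer.addInputSource (&sourceB, false);

        player.setSource (&mixer);
        deviceManager.addAudioCallback (&player); // the player implements the device callback
    }

    ~OutputBus()
    {
        deviceManager.removeAudioCallback (&player);
        player.setSource (nullptr);
    }

private:
    juce::AudioDeviceManager& deviceManager;
    juce::ToneGeneratorAudioSource sourceA, sourceB;
    juce::MixerAudioSource mixer;
    juce::AudioSourcePlayer player;
};
```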
How would you go about my 2nd question? Do you only start the AudioIODevice once and let it run, or do you start/stop it as needed? Sorry if this is a dumb question.