Core Audio Multitrack Engine -> Juce?

Hello all, Jules,

I want to convert my existing Core Audio engine (AudioUnits in an AUGraph) to Juce. I’m using it to power a basic multitrack play/record setup. It goes something like this:

INPUT
HAL Input -> Recording Callback (write input to file(s))
-> Matrix Mixer 1 (feed input to processing line for monitoring as shown below)

OUTPUT (AUGraph)
Track 1: AUFilePlayer -> Matrix Mixer 1 -> Track 1 plugins… -> Matrix Mixer 2 -> HAL Output
Track n: AUFilePlayer -> Matrix Mixer 1 -> Track n plugins… -> Matrix Mixer 2
HAL Input 1 -> (mixed into track 1)
HAL Input n -> (mixed into track n)

-All the tracks and HAL inputs are fed into Matrix Mixer 1.
-HAL inputs are fed over into the track outputs of Matrix Mixer 1.
-Track processing is done between the 2 mixers.
-The second matrix mixer combines the tracks down to stereo for output to the device.

  • Should I use AudioProcessorGraph and AudioProcessors and try to build up a similar system? I suspect my current architecture won’t translate directly into Juce, so I’m open to a new approach. I’ve read on the forum that you (Jules) had plans to create AudioProcessors for reading files and so forth, but that effort seems to have stopped short. Should I be trying to wrap AudioSources with AudioProcessors to complete the graph?

  • Core Audio’s AUFilePlayer allows many regions to be loaded up, and it handles their playback at the specified times. There doesn’t seem to be anything equivalent here. I gather that I’ll need to represent each clip with an AudioFormatReaderSource and then watch the sample count in processBlock (in the wrapper processor) to determine when to start and stop providing data; there’s a rough sketch of what I mean after this list. Does that sound right? Does AudioTransportSource fit in here somewhere?

  • Also, I generally like to stay out of the DSP stuff, and I’ve found Core Audio convenient in that regard, as it provides a number of built-in units to handle panning, reverb, compression, etc. Juce doesn’t appear to offer anything comparable. Does everybody write their own here, or is there a place where I can find commercially friendly code that I could wrap into processors?
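
To make the second bullet concrete, here’s the rough shape of what I’m imagining for the clip scheduling (an untested sketch; Clip and renderClips are just names I’ve made up):

```cpp
#include <JuceHeader.h>
#include <memory>
#include <vector>

// One timeline clip, backed by a positionable reader source.
struct Clip
{
    juce::int64 startSample;   // position on the timeline
    std::unique_ptr<juce::AudioFormatReaderSource> source;
};

// Called from the wrapper processor's processBlock(). 'playhead' is the
// running sample count for the current block.
static void renderClips (juce::AudioBuffer<float>& buffer,
                         std::vector<Clip>& clips, juce::int64 playhead)
{
    const int numSamples = buffer.getNumSamples();

    for (auto& clip : clips)
    {
        const auto clipLen = clip.source->getTotalLength();
        const auto clipEnd = clip.startSample + clipLen;

        // Skip clips that don't overlap this block at all.
        if (clipEnd <= playhead || clip.startSample >= playhead + numSamples)
            continue;

        const auto offsetInBlock = (int) juce::jmax ((juce::int64) 0, clip.startSample - playhead);
        const auto posInClip     = juce::jmax ((juce::int64) 0, playhead - clip.startSample);
        const auto numToRead     = (int) juce::jmin ((juce::int64) (numSamples - offsetInBlock),
                                                     clipLen - posInClip);

        clip.source->setNextReadPosition (posInClip);

        // NB: getNextAudioBlock() overwrites its region, so overlapping
        // clips would need a temp buffer and AudioBuffer::addFrom().
        juce::AudioSourceChannelInfo info (&buffer, offsetInBlock, numToRead);
        clip.source->getNextAudioBlock (info);
    }
}
```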

Just to note, I have read through everything I could find, including the Juce demo, the Plugin Host demo, the Plugin demo, the API docs, and the forum!

Thanks.

[quote="Graeme"]Hello all, Jules,

  • Also, I generally like to stay out of the DSP stuff, and I’ve found Core Audio convenient in that regard, as it provides a number of built-in units to handle panning, reverb, compression, etc. Juce doesn’t appear to offer anything comparable. Does everybody write their own here, or is there a place where I can find commercially friendly code that I could wrap into processors?[/quote]

Did you have a look on Google Code? I’m pretty sure I remember seeing something like that under an MIT licence there.

I can’t find it; could you please point me in the right direction?

Yeah, I’ll try to have a look. Note, though, that I’m not at all sure it was on Google Code; it was just a suggested place to look…

Thanks for the suggestion. Much appreciated! I haven’t been able to find anything myself, but I will keep looking around. :)

Yes, the development of the AudioProcessor stuff never quite carried on as I intended. Originally it was going to replace the AudioSources, but then I realised there was a bunch of stuff that didn’t fit into the AudioProcessor model very neatly, and I ended up keeping both. I’ll probably be doing some more audio dev work this year, so I might go back and polish it all up a bit.

OK, thanks Jules, that’s good to know. Could you tell me, then, whether it’s a reasonable approach at this point to wrap an AudioSource with an AudioProcessor and proceed with using the graph? Or should I just stick with sources? I’d like to be able to load plugins, and the Plugin Host example is AudioProcessor-based.

I’d just like some direction here so I don’t go running down the wrong road with this.

Thanks!

Yeah, you could wrap AudioSources. The only thing to watch out for is sample-rate conversion: that works nicely in AudioSources (i.e. it’s easy to create a chain of AudioSources that pass the audio to each other at different rates), but an AudioProcessor graph all has to run at the same rate.
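
The basic shape of the wrapper would be something like this (just a rough, untested sketch with the boilerplate pure virtuals stubbed out; AudioSourceProcessor is a made-up name):

```cpp
#include <JuceHeader.h>
#include <memory>

// Runs an AudioSource inside an AudioProcessor so it can live in a graph.
class AudioSourceProcessor : public juce::AudioProcessor
{
public:
    explicit AudioSourceProcessor (std::unique_ptr<juce::AudioSource> sourceToWrap)
        : source (std::move (sourceToWrap)) {}

    void prepareToPlay (double sampleRate, int samplesPerBlock) override
    {
        // The whole graph runs at one rate, so the wrapped source has to
        // be happy running at the graph's sample rate.
        source->prepareToPlay (samplesPerBlock, sampleRate);
    }

    void releaseResources() override    { source->releaseResources(); }

    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        // Let the source fill the graph's buffer for this block.
        juce::AudioSourceChannelInfo info (&buffer, 0, buffer.getNumSamples());
        source->getNextAudioBlock (info);
    }

    // Boilerplate pure virtuals:
    const juce::String getName() const override                { return "AudioSourceProcessor"; }
    bool acceptsMidi() const override                          { return false; }
    bool producesMidi() const override                         { return false; }
    double getTailLengthSeconds() const override               { return 0.0; }
    juce::AudioProcessorEditor* createEditor() override        { return nullptr; }
    bool hasEditor() const override                            { return false; }
    int getNumPrograms() override                              { return 1; }
    int getCurrentProgram() override                           { return 0; }
    void setCurrentProgram (int) override                      {}
    const juce::String getProgramName (int) override           { return {}; }
    void changeProgramName (int, const juce::String&) override {}
    void getStateInformation (juce::MemoryBlock&) override     {}
    void setStateInformation (const void*, int) override       {}

private:
    std::unique_ptr<juce::AudioSource> source;
};
```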

OK, if I scrapped the graph, would that mean I’d also be scrapping plugins? Or can it work the other way around, where AudioProcessors are used within an AudioSource chain?

It’d be very difficult to use plugins outside of a graph; the graph classes were originally designed specifically for holding and running plugins.
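
Inside a graph, dropping a plugin in is straightforward. Something like this (a rough sketch; wirePluginIntoGraph is a made-up helper, and exact signatures vary between Juce versions):

```cpp
#include <JuceHeader.h>
#include <memory>

void wirePluginIntoGraph (juce::AudioProcessorGraph& graph,
                          std::unique_ptr<juce::AudioPluginInstance> pluginInstance)
{
    using IONode = juce::AudioProcessorGraph::AudioGraphIOProcessor;

    auto input  = graph.addNode (std::make_unique<IONode> (IONode::audioInputNode));
    auto output = graph.addNode (std::make_unique<IONode> (IONode::audioOutputNode));
    auto plugin = graph.addNode (std::move (pluginInstance));

    // Stereo through the plugin: device in -> plugin -> device out.
    for (int ch = 0; ch < 2; ++ch)
    {
        graph.addConnection ({ { input->nodeID,  ch }, { plugin->nodeID, ch } });
        graph.addConnection ({ { plugin->nodeID, ch }, { output->nodeID, ch } });
    }
}
```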

OK, that’s very helpful to know. Thanks!

Not sure what you mean… I use plugins and I haven’t touched anything outside of AudioPluginFormatManager, KnownPluginList, AudioPluginFormat, and AudioPluginInstance. These all work great, by the way! Roughly, the flow looks like the sketch below.
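
(A simplified sketch of that flow; loadPlugin is a made-up name, scanning and error handling are omitted, and the exact createPluginInstance signature varies by Juce version.)

```cpp
#include <JuceHeader.h>
#include <memory>

// 'desc' would come from a KnownPluginList built by scanning with an
// AudioPluginFormat (e.g. via PluginDirectoryScanner).
std::unique_ptr<juce::AudioPluginInstance> loadPlugin (const juce::PluginDescription& desc)
{
    juce::AudioPluginFormatManager formatManager;
    formatManager.addDefaultFormats();   // register the built-in plugin formats

    juce::String error;
    auto instance = formatManager.createPluginInstance (desc, 44100.0, 512, error);

    if (instance == nullptr)
        DBG (error);                     // plugin failed to load

    return instance;
}
```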