I have a standalone JUCE app for sound spatialization over custom loudspeaker configurations (currently up to 54 channels, and growing).
All of the audio is rendered in a single audio callback, so the app isn't taking advantage of multicore or multiprocessor systems. As a first step, I'd love to have one core/processor render half of the speakers while a second renders the other half.
I've heard Max uses JUCE extensively, though I'm not sure whether that extends to its audio engine. I read this about a new feature in Max 6, which sounds similar to what I want to accomplish: "We now run the audio of every top-level patcher in its own thread for effortless multicore processing."
I first looked at AudioDeviceManager's addAudioCallback, but additional callbacks registered that way are summed on the same thread rather than run on new threads. It seems the approach would be to set up two AudioDeviceManagers on the same output device and give each its own audio callback - would those then run on separate threads?