Audio output channel mapping

Hi all,

Can anyone tell me whether there is a way to easily re-map input/output audio channels?
I have an app that uses a stereo output. What I am trying to achieve is a ComboBox for each of my “output channels” to select which physical output to use (something like Max/MSP has in its “Audio Status” → “IO Mappings”).

I found a post that suggested, for a similar issue, doing the mapping manually using another AudioSampleBuffer,
but I was wondering if there is a way to get this done via the AudioDeviceManager or something like that, without messing with processBlock()?

-N

Hi @Nikolas_K,

The AudioDeviceManager is only responsible for AudioDevices and MIDI configuration. It is not involved with AudioProcessors at all, because those are meant to be hosted as plug-ins, so in processBlock() you are on your own.

But standalone apps are not driven by processBlock() calls but rather by getNextAudioBlock() (as in AudioSource and its descendants). Here you can use a ChannelRemappingAudioSource, for instance.
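A minimal sketch of how that could look (JUCE fragment; `mySource` and `numHardwareOutputs` are placeholders for your own source and your device's output count):

```cpp
// Wrap your existing stereo source; "false" means the remapper
// won't delete mySource when it is destroyed.
juce::ChannelRemappingAudioSource remapper (&mySource, false);

// Produce as many channels as the hardware device exposes:
remapper.setNumberOfChannelsToProduce (numHardwareOutputs);

// Route source channel 0 to hardware output 2 and channel 1 to
// output 3 -- e.g. driven from your ComboBox selections:
remapper.setOutputChannelMapping (0, 2);
remapper.setOutputChannelMapping (1, 3);

// From now on, pull audio from remapper.getNextAudioBlock (info)
// instead of calling mySource directly.
```

The ComboBox callbacks would then simply call setOutputChannelMapping again with the newly selected destination.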

HTH, Daniel

Hello @Daniel,

Thank you for your answer, I found an older post of yours about the ChannelRemappingAudioSource.
I started the project as a GUI application, so it just inherits “JUCEApplication”. I add the AudioCallback manually via the AudioDeviceManager.
By processBlock, I mean whatever “audio callback” method is used in that case, sorry for not clarifying.

The current problem is that in order to add the AudioCallback, my DSP handling class needs to inherit from AudioIODeviceCallback, but the ChannelRemappingAudioSource needs an AudioSource, which is not that handy!

What would a correct approach be, keeping in mind that my project is just a JUCEApplication?

-N

I just wanted to edit my post when I realised that it might not be that straightforward for your use case, but I wanted to double-check the syntax. As an alternative, I had this idea:

// Wrap the device buffer's channel pointers without allocating:
float* const pointers[2] = {
    bufferToFill.buffer->getWritePointer (0),
    bufferToFill.buffer->getWritePointer (1)
};

// This AudioBuffer just refers to the existing data, starting at startSample:
AudioBuffer<float> stem (pointers, 2, bufferToFill.startSample, bufferToFill.numSamples);

// either for an AudioSource (startSample is 0 here, since stem already starts there):
const AudioSourceChannelInfo stemBufferToFill (&stem, 0, bufferToFill.numSamples);
yourSource.getNextAudioBlock (stemBufferToFill);

// or for an AudioProcessor:
yourProcessor.processBlock (stem, midi);

Maybe that adds some flexibility…

I am not sure I follow your example’s aim, but if I understand correctly, you suggest getting the channel pointers and handling the mapping manually?

Could I just inherit from both AudioSource and AudioIODeviceCallback to do the channel mapping with the ChannelRemappingAudioSource, or is that “too much”?

Sorry, that was written in a bit of a hurry.

I wanted to point out that you can use an AudioBuffer without allocating samples. In this case you have a fully functional AudioBuffer that just refers to existing audio data. You can use this buffer in either function, processBlock() or getNextAudioBlock().

For your last question, I tend to use the JUCE-made players, which ARE actually AudioIODeviceCallbacks. Have a look at the inheritance diagram of AudioIODeviceCallback; there are, for instance:

- AudioSourcePlayer
- AudioProcessorPlayer

You can inherit one of these if you want to play back a JUCE-style processor or source. But you can also roll your own and get inspiration from JUCE’s code; that’s up to you.
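Combining an AudioSourcePlayer with the ChannelRemappingAudioSource mentioned earlier could look roughly like this (JUCE fragment; `mySource` and the member names are placeholders for your own objects):

```cpp
// Members of your JUCEApplication-owned audio engine:
juce::AudioDeviceManager deviceManager;
juce::AudioSourcePlayer  player;          // this IS an AudioIODeviceCallback
juce::ChannelRemappingAudioSource remapper { &mySource, false };

void startAudio()
{
    // 0 inputs, 2 outputs, no saved state, fall back to the default device:
    deviceManager.initialise (0, 2, nullptr, true);

    player.setSource (&remapper);         // player pulls getNextAudioBlock()
    deviceManager.addAudioCallback (&player);
}

void stopAudio()
{
    deviceManager.removeAudioCallback (&player);
    player.setSource (nullptr);           // detach before destruction
}
```

This way no class of yours has to inherit AudioIODeviceCallback; the player bridges the callback world and the AudioSource world for you.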

HTH

The AudioSourcePlayer seems very helpful in my situation.
Thank you very much @daniel for your time and answers!

-N