Multichannel audio - approaches?

Hi all

I would like to gather some info on building a multichannel audio app.
Generally: what is the best approach with JUCE?

It seems to me that JUCE by nature proposes a solution centred around manipulating an array of AudioSampleBuffer objects; writing/reading that data would be playing/recording.

Audio driver setup, initialisation, I/O, and the main audio processing loop could be handled using a single global instance of AudioDeviceManager.

Mixer objects of various kinds are also needed, I suppose.

What does your experience say?

Thanks in advance - and thumbs up for a super lib jules!

This is kind of where the new AudioProcessor classes are heading - they’ll let you interconnect channels easily between processing units.

Thanks Jules!
I’ve looked a little into the classes surrounding AudioProcessor, but I’m not sure I understand your proposal.

Are you saying that I could build a multitrack/multichannel environment by manipulating many separate “channels” that are AudioProcessors?

Or should I build the whole lot as a class derived from AudioProcessor?

The idea is that you create AudioProcessors for all your processes, and wire them up into a graph using the AudioProcessorGraph. That gives you full control over where all the channels go.

I’m not sure I follow, but I can try to explain what I’m trying to build:

I want to have 8 audio tracks running in as close to perfect sync as possible, live-like.

Im thinking:
I make a class AudioTrack derived from AudioSource to hold functionality like start, stop, mute, and record for each track. Data is kept in an AudioSampleBuffer object inside the class.
AudioTracks plug into mixer objects.
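A rough, JUCE-free sketch of that track idea (all names here are hypothetical; a real version would derive from AudioSource, override getNextAudioBlock, and keep its data in an AudioSampleBuffer):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a per-track object with start/stop/mute state.
struct AudioTrack
{
    std::vector<float> data;   // stands in for the track's AudioSampleBuffer
    std::size_t position = 0;  // current playback position in samples
    bool playing = false;
    bool muted   = false;

    void start() { playing = true; }
    void stop()  { playing = false; position = 0; }

    // Mix the next block of this track into 'out' (as a mixer would).
    void renderNextBlock (float* out, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            float sample = 0.0f;

            if (playing && ! muted && position < data.size())
                sample = data[position];

            if (playing && position < data.size())
                ++position;   // advance even when muted, to stay in sync

            out[i] += sample; // additive mix into the shared output buffer
        }
    }
};
```

Note the muted track still advances its position, so un-muting keeps it sample-locked with the other tracks.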


yeah, that sort of approach would work.

I’ve created an app that is similar to what you’re trying to do. It’ll play back up to 8 stereo or mono tracks and can record 1 stereo/mono track at the same time. It also has a mini drum machine (4 drums) wrapped as a PositionableAudioSource; the timing is sample-locked to the tracks being played back or recorded.

The individual tracks wrap an AudioFormatReader as a PositionableAudioSource. I did this instead of using the AudioFormatReaderSource so that the user can control the individual tracks (gain, pan, etc.). The tracks are mixed to stereo by Krakken’s MixerPositionableAudioSource and use the AudioTransportSource and an AudioSourcePlayer to control playback. The AudioTransportSource uses a background thread to read ahead, and I’ve found that a 64K buffer size works fine, though I haven’t experimented a whole lot with other buffer sizes. You do need a separate reader thread; otherwise the audio loop may glitch. This is especially true if you’re decompressing on the fly. Fortunately, you supply a non-zero buffer size to its constructor and the AudioTransportSource does the rest.

I use my own AudioIODeviceCallback so that I can record the line in and call AudioSourcePlayer::audioIODeviceCallback() from my class’s audioIODeviceCallback. The recorder has a separate thread that does the actual writing to disk. The audio callback and recorder threads share an AudioSampleBuffer instance whose indices are wired as a circular buffer.
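That shared-buffer arrangement can be sketched without any JUCE at all as a single-producer/single-consumer ring buffer (a simplified, single-channel sketch; a real one would work on the AudioSampleBuffer's channel arrays):

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal single-producer/single-consumer ring buffer sketch.
// The audio callback writes; the disk-writer thread reads.
class CircularBuffer
{
public:
    explicit CircularBuffer (std::size_t capacity)
        : buffer (capacity + 1) {}   // one slot wasted to tell full from empty

    // Called from the audio callback (producer). Returns samples accepted.
    std::size_t write (const float* src, std::size_t n)
    {
        std::size_t w = writePos.load(), r = readPos.load(), written = 0;
        while (written < n && (w + 1) % buffer.size() != r)
        {
            buffer[w] = src[written++];
            w = (w + 1) % buffer.size();
        }
        writePos.store (w);
        return written;
    }

    // Called from the recorder thread (consumer). Returns samples read.
    std::size_t read (float* dst, std::size_t n)
    {
        std::size_t w = writePos.load(), r = readPos.load(), got = 0;
        while (got < n && r != w)
        {
            dst[got++] = buffer[r];
            r = (r + 1) % buffer.size();
        }
        readPos.store (r);
        return got;
    }

private:
    std::vector<float> buffer;
    std::atomic<std::size_t> readPos { 0 }, writePos { 0 };
};
```

With one writer and one reader the atomic indices are enough; neither thread ever blocks, which is exactly what you want on the audio-callback side.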

Finally, I’ve noticed that sample positions are not preserved when decompressing mp3 files and then re-compressing. This makes syncing them problematic. Ogg files, and of course wav files, do not have this problem.


OK, perhaps someone could give me a hint here again; although the forum is full of questions like mine, I’m still confused :oops:

My app should read some mono inputs and process each to several outputs. So my first idea was to create an AudioSource for each (mono) input and mix their multichannel outputs together with a MixerAudioSource.

While reading some threads I saw Jules suggesting the use of AudioProcessor and an AudioProcessorGraph for that issue, but I just don’t get behind that idea…

Could somebody just help me find a suitable approach to my aim?

Do you mean inputs from your audio card and outputs to the audio card? In that case an AudioSource won’t work - it’s for audio that is generated elsewhere, for example from a file or stream, or generated directly using, say, the ToneGeneratorAudioSource. The audio from the sound card finds its way into your app by associating an instance of AudioIODeviceCallback with the instance of AudioIODevice representing your sound card. This is usually done through the AudioDeviceManager.

If you don’t want to write your own AudioIODeviceCallback (not hard, but low level), then you can use an AudioSourcePlayer to “play” objects in the AudioSource hierarchy, or an AudioProcessorPlayer to play objects in the AudioProcessor hierarchy. Both of these classes derive from AudioIODeviceCallback and override the audioDeviceIOCallback method, which pumps audio in and out of the system:

An object that implements AudioIODeviceCallback, such as AudioSourcePlayer or AudioProcessorPlayer, gets called periodically, approximately every blockSize / sampleRate seconds, where blockSize is the buffer size specified when the AudioIODevice was set in the AudioDeviceManager. This object then reads sample data from inputChannelData, processes the data in some way, and writes the result to outputChannelData.
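As a quick sanity check of that timing (plain arithmetic, nothing JUCE-specific):

```cpp
#include <cassert>
#include <cmath>

// Period between audio callbacks, in seconds: blockSize / sampleRate.
double callbackPeriodSeconds (int blockSize, double sampleRate)
{
    return blockSize / sampleRate;
}
```

For example, a 512-sample buffer at 44.1 kHz gives roughly 11.6 ms between callbacks, i.e. about 86 callbacks per second - which is why anything slow (like disk I/O) has to happen on another thread.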


Juce supplies two classes derived from AudioIODeviceCallback, AudioSourcePlayer and AudioProcessorPlayer, which “play” AudioSources and AudioProcessors respectively. The AudioSourcePlayer plays its attached AudioSource by calling its getNextAudioBlock method, which takes a single AudioSourceChannelInfo argument.

AudioSourceChannelInfo consists of an AudioSampleBuffer and two ints: startSample which specifies where the AudioSource should put the first sample and numSamples, which specifies how many samples to put in the buffer. I won’t go into the detail of AudioSampleBuffer here, just to say it is the heart of any DSP or mixing. There are methods to copy between buffers, apply gain or pan, determine levels, get samples in or out of the buffer, and read/write to a file or stream. Get to know it well.
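To illustrate the kind of operations involved, here is a simplified, JUCE-free analogue working on a raw float vector (the real AudioSampleBuffer methods have richer signatures, with channel and region arguments):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Simplified stand-ins for typical sample-buffer operations.

// Multiply every sample by a gain factor.
void applyGain (std::vector<float>& buf, float gain)
{
    for (auto& s : buf)
        s *= gain;
}

// Mix (add) src into dst with a gain applied to src -- the core of mixing.
void addFrom (std::vector<float>& dst, const std::vector<float>& src, float gain)
{
    for (std::size_t i = 0; i < dst.size() && i < src.size(); ++i)
        dst[i] += src[i] * gain;
}

// Peak level of the buffer, as a level meter would use.
float getMagnitude (const std::vector<float>& buf)
{
    float peak = 0.0f;
    for (float s : buf)
        peak = std::max (peak, std::fabs (s));
    return peak;
}
```

Mixing N tracks is then just N addFrom calls into one output buffer, which is all a basic mixer really does.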


An AudioProcessorPlayer plays its attached AudioProcessor by calling its processBlock method, which takes an AudioSampleBuffer and a MidiBuffer.

Although similar to getNextAudioBlock, processBlock is bi-directional. Thus AudioProcessorPlayer places data for the AudioProcessor to play in buffer (and Midi messages in midiMessages, but I’m skipping Midi for this discussion), calls processBlock, and then reads or copies the processed audio from buffer. Data then gets to the audio out port by copying it to the outputChannelData argument of audioIODeviceCallback. The actual method of how input and output channels are mapped in the AudioSampleBuffer is a bit tricky, but well explained in the comments for AudioProcessor::processBlock.
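That in-place contract can be mimicked without JUCE (names here are hypothetical; the real processBlock also receives a MidiBuffer, skipped as above):

```cpp
#include <cassert>
#include <vector>

// A toy "processor": processBlock reads its input from 'buffer'
// and must leave its output in the same buffer (in place).
struct GainProcessor
{
    float gain = 2.0f;

    void processBlock (std::vector<float>& buffer)
    {
        for (auto& s : buffer)
            s *= gain;
    }
};

// What a player does each callback: copy the device input into the
// shared buffer, let the processor work in place, then hand the
// result on to the device's output channels.
std::vector<float> playOneBlock (GainProcessor& proc, const std::vector<float>& input)
{
    std::vector<float> buffer (input);   // input -> processing buffer
    proc.processBlock (buffer);          // processor replaces it with output
    return buffer;                       // buffer -> audio out
}
```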

AudioSources, AudioProcessors, or roll your own: which do I use?

Jules has made it clear in many posts that the AudioProcessor architecture is the future of audio in Juce. It is more modular than the AudioSource architecture, in the sense that it can incorporate or be incorporated into 3rd-party software through the use of plug-ins. Then there is the plug-in framework built around the AudioProcessorGraph and AudioPluginInstance classes. The AudioProcessorGraph class is used to wire together AudioProcessors into an audio processing network of arbitrary complexity. The AudioPluginInstance is used to incorporate external plug-ins such as VSTs on Windows or AudioUnits on the Mac. There’s also built-in Midi processing, which has to be added via MidiInputCallback or the MidiMessageCollector if you’re using the AudioSource hierarchy.

On the other hand, the AudioSource architecture includes more objects that do something to the audio. For example, there is the AudioFormatReaderSource to stream in audio from a file, the MixerAudioSource to mix several sources together, and the AudioTransportSource, which can be started, stopped, and moved to an arbitrary position in the stream.

The problem with the AudioSource architecture is that although the built-in objects can be wired together to build a simple file player or non-midi-controlled synthesizer, you have to crack open or derive new objects to add functionality. For example, the MixerAudioSource can’t mix incoming audio with its attached AudioSources, and you’d have to write your own AudioIODeviceCallback derivative to capture audio to a file, whereas these can be done in the AudioProcessor framework by wiring together plug-ins that may not even have been created in Juce. Furthermore, the wiring can be done by the user at run time (see the Audio Plug-in host demo for an example of this).

Finally, you can do audio processing directly in audioDeviceIOCallback and skip the need for the Audio classes (but you’re giving up a lot of functionality!). As an example, here’s the audioDeviceIOCallback for an AudioLoopback class that just copies input audio to output. Note it is not industrial strength, as it assumes that data is packed contiguously into the lowest channels. Thus, for stereo, it would assume data in channels 0 and 1. You can break it with a multichannel device if you disable the lower channels. For example, if you use the Line 6 TonePort GX device, which allows the user to select 2 of 4 channels, and select channels 3 and 4, you will crash this code. But it does show the basic processing.

[code]
void AudioLoopback::audioDeviceIOCallback (const float** inputChannelData,
                                           int totalNumInputChannels,
                                           float** outputChannelData,
                                           int totalNumOutputChannels,
                                           int numSamples)
{
    for (int i = 0; i < totalNumInputChannels; ++i)
    {
        if (i < totalNumOutputChannels && outputChannelData[i] != 0)
        {
            if (inputChannelData[i] != 0)
            {
                // have an output channel that wants data, so give it to it
                memcpy (outputChannelData[i], inputChannelData[i],
                        sizeof (float) * numSamples);
            }
            else
            {
                // no matching input, so fill with zeros
                zeromem (outputChannelData[i], sizeof (float) * numSamples);
            }
        }
    }

    // zero any extra output channels beyond the number of inputs
    for (int j = totalNumInputChannels; j < totalNumOutputChannels; ++j)
    {
        if (outputChannelData[j] != 0)
            zeromem (outputChannelData[j], sizeof (float) * numSamples);
    }
}
[/code]

Finally, if you go with the AudioProcessor hierarchy, be sure to check out the Audio Host and Plugin example code as careful study of these programs will answer a lot of your questions.


First, thanks mrblasto for your detailed reply (although I’m a bit late, perhaps).
I played around a bit; writing an AudioIODeviceCallback was really no problem, but I would really like to use an AudioProcessorPlayer. I took a deeper look at the AudioHostDemo, but it’s quite complex and I didn’t get the basics out of it.

I tried to write a simple setup to loop back audio:

[code]
class Player : public AudioProcessorPlayer
{
public:
    Player()
    {
        processorGraph = new AudioProcessorGraph();

        inNode  = new AudioProcessorGraph::AudioGraphIOProcessor (
                      AudioProcessorGraph::AudioGraphIOProcessor::audioInputNode);
        outNode = new AudioProcessorGraph::AudioGraphIOProcessor (
                      AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode);

        // add the i/o processors to the graph and wire input
        // channel 0 straight through to output channel 0
        processorGraph->addNode (inNode, 1);
        processorGraph->addNode (outNode, 2);
        processorGraph->addConnection (1, 0, 2, 0);

        setProcessor (processorGraph);  // this player plays the graph

        const String error (audioDeviceManager.initialise (1, 1, 0, true, String::empty));

        if (error.isNotEmpty())
            AlertWindow::showMessageBox (AlertWindow::WarningIcon,
                                         T("Audio"),
                                         T("Could not open an audio device!\n\n") + error);

        audioDeviceManager.setAudioCallback (this);
    }

    ~Player()
    {
        audioDeviceManager.setAudioCallback (0);
        setProcessor (0);
        delete processorGraph;
    }

private:
    AudioDeviceManager audioDeviceManager;
    AudioProcessorGraph* processorGraph;
    AudioProcessorGraph::AudioGraphIOProcessor* inNode;
    AudioProcessorGraph::AudioGraphIOProcessor* outNode;
};
[/code]

Shouldn’t that just loop back my first (activated) input channel to my first (activated) output channel, so that later I can just plug in some more processors to do the actual work?

Thanks in advance for any help, and greetings to all,


I'm a noob and have been wanting to develop a multitrack player. I don't need the ability to record audio or midi.

There's some good information in this post to get me started, although I'll expand my idea further.

I'd require NO GUI; it needs to run as a command-line application, like a server, similar to linuxsampler (though maybe that's too complicated).

No eye candy, since I envisage porting to Raspberry Pi. For the moment I'd like to develop on Windows, as I have VS2010 to prove and debug.

Has any developer written code to load/stream multiple wav files at 48kHz/24-bit (both mono and stereo tracks, < 24 tracks total) and be able to play them back together, assigned to multiple audio out ports (2-8)?

Furthermore, can a midi file be played at the same time? Both the midi file and the audio files loaded have been recorded in a DAW and are sample accurate between the tracks and the midi track. Tempo and time signature are embedded in the midi track.

In addition, I need the midi tempo meta events to drive the transport display (a text line) showing the bpm of the song at any time. A big ask, I know, but I'm sure this can be done with JUCE plus some additional code.
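The bpm part, at least, is simple: a standard MIDI set-tempo meta event carries microseconds per quarter note, and bpm falls straight out of that (plain arithmetic, no JUCE involved):

```cpp
#include <cassert>

// A MIDI set-tempo meta event (FF 51 03) carries microseconds per
// quarter note; bpm is 60,000,000 divided by that value.
double bpmFromTempoMeta (int microsecondsPerQuarterNote)
{
    return 60000000.0 / microsecondsPerQuarterNote;
}
```

For example, the common default payload of 500000 µs/quarter corresponds to 120 bpm.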

How do you load a song? I thought you'd ask this.

The app runs in the background, listening on a midi port (remote port) for a midi bank select / program change number on a specific channel, and compares this with the values in a global list of songs in a flat file or xml. Once the song assigned to that value is found in the file, the song file is gunzipped to a directory, where the tracks are loaded into the player, ready to begin playing. A midi note-on message is used to start, stop, and pause the transport. On a request for the next song, the previous song is unloaded and the next one loaded.
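The lookup step of that scheme is straightforward; a minimal sketch, assuming the flat file or xml has already been parsed into a map (all names hypothetical):

```cpp
#include <cassert>
#include <map>
#include <string>

// Maps a received program-change number to a song file.
// The map would be filled by parsing the flat file or xml song list.
using SongList = std::map<int, std::string>;

// Returns the matching .sng filename, or an empty string if the
// program number isn't assigned in the list.
std::string songForProgramChange (const SongList& songs, int programNumber)
{
    auto it = songs.find (programNumber);
    return it != songs.end() ? it->second : std::string();
}
```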

1. Each song, e.g. 'SongTitle.sng', is a gzipped file containing the individual wav tracks, including a midi track, and a configuration file for the song.

example of an expanded .sng file:

16 - Synth2.wav

2. The SongTitle.cfg file holds, for each track: which output it's assigned to, the track volume level and pan value, the time the track stops (end of the song file), and/or loop points for later looping of parts.



1. Does JUCE support a CLI environment to do what I want?

2. Is playing 24 tracks together with a midi track, sample accurate, do-able in JUCE? Has anyone done this?


Holy shit, all these years later and this answer is still worth its weight in GOLD - thanks so much! This should be in the official docs: a section where all these classes are explained IN CONTEXT of how it makes sense to use them together. I will save this in at least 10 separate locations to ensure no natural disaster can deprive me of this answer. Thanks a lot m8, not all heroes wear capes. Cheers

There are a few additions to make IMHO:

  • a disadvantage of AudioSources is that AudioSourceChannelInfo is agnostic to channel layouts. They usually have a mechanism to cope, but with anything more special than mono-to-stereo they are bound to go wrong quite often
  • AudioProcessors need a buffer of exactly the right length, whereas an AudioSource can act on just a subset of the samples in the buffer. That is handy in instruments like synths and samplers, but you can create a referencing buffer to call an AudioProcessor with, so there is a workaround.
  • AudioProcessorGraph is very versatile: you can connect and disconnect on the fly (i.e. at runtime). But that also means that the optimiser can only see each processBlock individually.
    With the new dsp module, the ProcessorChain is defined at compile time, giving the optimiser a better chance to optimise the process call as a whole.

So my answer in short is:

  • to produce a continuous stream of data I would go with AudioSource
  • to create an effect consisting of separate processing operations called sequentially I would go with dsp::ProcessorBase
  • for the plugin itself, as well when hosting plugins I’d use AudioProcessor
  • if I were doing serious multi-channel formats, I would create my own wrapper around PositionableAudioSource (which I did in my open-source video engine), adding the channel layout information and having upmix and downmix instances to be plugged in between the nodes
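The compile-time-chain point can be illustrated without JUCE: a toy analogue of dsp::ProcessorChain (not the real API) where the processor sequence is fixed in the type, so the compiler can see and inline the whole chain:

```cpp
#include <cassert>
#include <tuple>

// Two toy processors sharing a common in-place interface.
struct Gain
{
    float g = 2.0f;
    void process (float* buf, int n) { for (int i = 0; i < n; ++i) buf[i] *= g; }
};

struct Offset
{
    float o = 1.0f;
    void process (float* buf, int n) { for (int i = 0; i < n; ++i) buf[i] += o; }
};

// Toy analogue of dsp::ProcessorChain: the processor types are template
// parameters, so the whole chain is visible to the optimiser at once,
// unlike a graph whose connections only exist at runtime.
template <typename... Procs>
struct Chain
{
    std::tuple<Procs...> procs;

    void process (float* buf, int n)
    {
        // call each processor in declaration order (C++17 fold expression)
        std::apply ([&] (auto&... p) { (p.process (buf, n), ...); }, procs);
    }
};
```

A runtime graph buys you rewiring on the fly; this buys the optimiser a single fused process call. Different trade-offs, as the list above says.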

Thanks so much for the insight!
I am actually working on something where I need to combine multiple audio sources (multiple files) and play them back together, while also adding a microphone recording into the mix.
To achieve this I drew inspiration from your input on another thread, where you advise creating a MultiChannelAudioSource. That's what I based my class on. It was precisely what I wanted to have.

Now I am at the stage where I need to figure out how to play back all those AudioFormatReaderSources that the MultiChannelAudioSource combines. I figured the way to do this is to feed this multi-source class into a player class such as AudioSourcePlayer. If I understand correctly, I should not use an AudioAppComponent, because that already contains its own AudioSource, whereas I already have my own MultiChannelAudioSource; so this means I should roll my own AudioSourcePlayer, which takes the MultiChannelAudioSource as its source. I'm kinda trying to wrap my head around all of this now.