AudioProcessorGraph buffer mixing


Hi there,

I’m building a setup using the AudioProcessorGraph and I’m wondering about the best way to mix all the nodes’ outputs together.

The simple version: I have a bunch of separate samplers (based on the Synthesiser class), wrapped in AudioProcessors and inserted into a graph, but it seems that in their processBlock methods the buffer is just the main buffer. If each synth clears the buffer (like in the JUCE demo code), it clears what the previous synth has written; if I don’t clear the buffer, there is garbage in it. It works if the first sampler clears the buffer in its processBlock and the rest just add to it, but that doesn’t seem like the right way to do it (especially when putting graphs inside graphs, etc.).

Pseudocode:

AudioProcessorGraph audio_processor_graph;

AudioProcessorGraph::Node* midi_generator = audio_processor_graph.addNode( new MidiGenerator() );

AudioProcessorGraph::Node* simple_sampler_node = audio_processor_graph.addNode( new MySampler() );
AudioProcessorGraph::Node* simple_sampler_node_2 = audio_processor_graph.addNode( new MySampler() );

AudioProcessorGraph::Node* output = audio_processor_graph.addNode( new AudioProcessorGraph::AudioGraphIOProcessor(AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode) );

//...Connect the midi_generator to the samplers 

//...Connect the samplers to a graph output

//...Connect AudioProcessorPlayer to Device manager etc. (you get the idea, the sound works so no probs here)

In this setup, only the second sampler will sound if I clear the buffer in the sampler node; if I don’t clear the buffer, I get garbage.

I sort of assumed the AudioProcessorGraph has some kind of process for mixing things, but is the correct approach to create my own ‘mixer’ node that would provide separate buffers to the sampler nodes and then sum them? Is the AudioProcessorGraph made to work with parallel chains like this, or do I have to add that functionality myself?
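For context, the behaviour I’d expect from any mixing stage is plain summation: each source renders into its own buffer, and the mixer adds them together. A minimal plain-C++ sketch (no JUCE; the buffer type and render function here are hypothetical stand-ins) showing why two sources that each clear and write into the *same* buffer would wipe each other out, while summing separate buffers works:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a per-node audio buffer.
using Buffer = std::vector<float>;

// Stand-in for a sampler: overwrites its buffer with a constant signal.
// If two such sources shared one buffer and each cleared it first,
// only the last writer would be heard -- the symptom described above.
void renderConstant (Buffer& b, float value)
{
    for (auto& s : b)
        s = value;
}

// Mixing is just addition: start from silence, accumulate every input.
Buffer mixDown (const std::vector<Buffer>& inputs, std::size_t numSamples)
{
    Buffer out (numSamples, 0.0f);

    for (const auto& in : inputs)
        for (std::size_t i = 0; i < numSamples; ++i)
            out[i] += in[i];

    return out;
}
```

For example, mixing a buffer of 0.25f samples with one of 0.5f samples gives 0.75f per sample; neither source needs to know about the other.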




It’ll certainly deal with that kind of graph. I don’t know what you’ve done to make it go wrong, but presumably it’s something you’re doing in one of your processor implementations (?)


Yeah thought it was a bit strange. I think it’s just down to me missing something tbh.

Am I meant to set the number of input and output channels on my graph node/AudioProcessor somewhere?

This is all my graph setup code:

	// Initialise the AudioDeviceManager
	audio_device_manager.initialise(0, 2, nullptr, false);

	// Load the default audio formats

	// Set the audio IO
	audio_graph_player.setProcessor( &audio_processor_graph );

	// Connect the IO to the audio device manager
	audio_device_manager.addAudioCallback( &audio_graph_player );


	// Create the Samplers
	MLSimpleSampler *simple_sampler = new MLSimpleSampler();
	simple_sampler->LoadSample("C:/Audio/Samples/Breakbeat.wav", 36);

	MLSimpleSampler *simple_sampler_2 = new MLSimpleSampler();
	simple_sampler_2->LoadSample("C:/Audio/Samples/Snares001.wav", 37);

	// Create the nodes
	AudioProcessorGraph::Node *midi_loop = audio_processor_graph.addNode( new MidiFilePlayer() );

	AudioProcessorGraph::Node *simple_sampler_node = audio_processor_graph.addNode( simple_sampler );
	AudioProcessorGraph::Node *simple_sampler_node_2 = audio_processor_graph.addNode( simple_sampler_2 );

	AudioProcessorGraph::Node *output_node = audio_processor_graph.addNode( new AudioProcessorGraph::AudioGraphIOProcessor(AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode) );

	// Connect

	// Midi file player to drum machine
	audio_processor_graph.addConnection(midi_loop->nodeId, AudioProcessorGraph::midiChannelIndex, simple_sampler_node->nodeId, AudioProcessorGraph::midiChannelIndex);
	audio_processor_graph.addConnection(midi_loop->nodeId, AudioProcessorGraph::midiChannelIndex, simple_sampler_node_2->nodeId, AudioProcessorGraph::midiChannelIndex);

Strangely, I don’t have to connect the samplers to the output node to get sound; I thought that’s how it’s meant to work? (If the output node isn’t created, I get no sound out, though…)

The (simplified for this problem) sampler is just an AudioProcessor with a Synthesiser in it, basically the code from the JUCE demo.
The code in the Sampler:

// Renders the next block.
void MLSimpleSampler::processBlock (AudioSampleBuffer& buffer, MidiBuffer& midiMessages)
{
	// The synth always adds its output to the audio buffer, so we have to
	// clear it first..
	buffer.clear();   // <- ***** Comment this out to get garbage, leave it in to only hear the 2nd sampler's output

	// Now get the synth to process the midi events and generate its output.
	drum_machine.renderNextBlock (buffer, midiMessages, 0, buffer.getNumSamples());
}

Cheers for taking the time to answer. In around six years of using JUCE, this is the first time I’ve actually had to post on the forum, because all my other questions had already been answered by previous threads. :slight_smile:


No, that’s not how it works! Perhaps you’re just hearing the output of an uninitialised buffer, and it happens to contain something like the thing you’re expecting. Try connecting the nodes together and see what happens!
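For anyone reading later: the missing step would look something like this, as a sketch assuming the variable names from the setup code above and a stereo output (channels 0 and 1 being left/right):

```cpp
// Audio: connect both samplers' stereo outputs to the graph's output node.
// Signals arriving at the same destination channel are summed by the graph,
// so no explicit mixer node is needed for simple parallel chains like this.
audio_processor_graph.addConnection (simple_sampler_node->nodeId,   0, output_node->nodeId, 0);
audio_processor_graph.addConnection (simple_sampler_node->nodeId,   1, output_node->nodeId, 1);
audio_processor_graph.addConnection (simple_sampler_node_2->nodeId, 0, output_node->nodeId, 0);
audio_processor_graph.addConnection (simple_sampler_node_2->nodeId, 1, output_node->nodeId, 1);
```

Note that addConnection returns a bool, so it's worth checking the return value: it fails (rather than asserting) when a channel index doesn't exist on the source or destination processor, which is exactly the situation described further down this thread.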


Yeah, thought that was strange.

I was on the wrong track. I’ve added the lines back to connect the SimpleSampler nodes to the output node, but noticed that they both actually failed: my sampler AudioProcessor reports that it has no outputs.

From what I can see in AudioProcessor code, the only way to set the number of outputs is with the method:

The comment makes it seem like it’s not something to use, but I called it in my sampler’s constructor (a bit of a pain getting the AudioDeviceManager settings for sample rate and buffer size) and it all works perfectly. Is there a better way to be doing that?
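For later readers: the post above doesn’t name the method, but AudioProcessor::setPlayConfigDetails matches the description, so this sketch assumes that’s the one. Note that the sample rate and block size passed here are only placeholders, since the graph calls prepareToPlay() with the real device values later, which avoids the pain of querying the AudioDeviceManager:

```cpp
MLSimpleSampler::MLSimpleSampler()
{
    // Declare 0 inputs and 2 outputs so the graph will allow audio
    // connections from this node. The sample rate and block size given
    // here are just initial values; prepareToPlay() later receives the
    // real ones from the audio device.
    setPlayConfigDetails (0, 2, 44100.0, 512);
}
```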


Yes, that comment’s a bit misleading - it should say that it’s for internal use, i.e. something that the processor can call itself, but not to be called from outside. I’ll re-word it…


Ah yup, I see. Cool thanks. :slight_smile: