How to use an AudioProcessorGraph with synthesizer voice

I am attempting to add an AudioProcessorGraph to my synthesizer voice (similar to how the introduction to DSP tutorial adds a processor chain to its synth voice). This is happening inside a standalone app, not a plugin.

I am using a MidiKeyboardComponent to obtain MIDI messages from the keyboard, and these get passed down into my voice from my SynthAudioSource. The voice contains a mainProcessor member as well as members for the audioInput, audioOutput, and oscillator nodes.

Inside my voice, I create a processor graph with a single oscillator node and connect it (audioInput -> oscillator -> audioOutput):


    audioInputNode  = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::audioInputNode), AudioProcessorGraph::NodeID(audioInputID));
    audioOutputNode = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::audioOutputNode), AudioProcessorGraph::NodeID(audioOutputID));
    midiInputNode   = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::midiInputNode), AudioProcessorGraph::NodeID(midiInputID));
    midiOutputNode  = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::midiOutputNode), AudioProcessorGraph::NodeID(midiOutputID));

    oscillatorNode = mainProcessor->addNode (std::make_unique<OscillatorProcessor>(), AudioProcessorGraph::NodeID(oscillatorID));

    for (int channel = 0; channel < 2; ++channel)
    {
        mainProcessor->addConnection ({ { audioInputNode->nodeID,  channel },
                                        { oscillatorNode->nodeID,  channel } });
        mainProcessor->addConnection ({ { oscillatorNode->nodeID,  channel },
                                        { audioOutputNode->nodeID, channel } });
    }

My voice’s startNote method sets the frequency of the oscillator:

void SineWaveVoice::startNote (int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition)
{
    auto cyclesPerSecond = MidiMessage::getMidiNoteInHertz (midiNoteNumber);
    // ... set the oscillator frequency from cyclesPerSecond ...
}


And in the voice’s renderNextBlock method I call the processBlock method of my main processor.

void SineWaveVoice::renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples)
{
    for (auto i = outputBuffer.getNumChannels(); --i >= 0;)
        outputBuffer.clear (i, 0, outputBuffer.getNumSamples());

    MidiBuffer incomingMidi;
    mainProcessor->processBlock (outputBuffer, incomingMidi);
}


However I am not getting any audio output. I made sure to call

mainProcessor->setPlayConfigDetails(0, 2, sampleRate, samplesPerBlockExpected);
mainProcessor->prepareToPlay (sampleRate, samplesPerBlockExpected);

before I initialize the graph as well.

The introduction to DSP tutorial uses a processor chain, and its renderNextBlock method manipulates the output buffer like so:

auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
juce::dsp::ProcessContextReplacing<float> context (block);
processorChain.process (context);

juce::dsp::AudioBlock<float> (outputBuffer)
    .getSubBlock ((size_t) startSample, (size_t) numSamples)
    .add (tempBlock);

Which I am obviously not doing. Is that the issue? I thought the call to mainProcessor->processBlock (outputBuffer, incomingMidi) would be all I needed to get the outputBuffer updated. What am I missing?

Ah! I made a silly mistake and had set the input channels to 0. I get sound output now (although it's constant and the frequency doesn't seem to change when I press keys), but I am making progress at least!

The audio is very distorted. I think something may be wrong with my block size.

I have fixed the distorted sound by enabling all buses on the oscillator node.
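The snippet was not included in the post, but in JUCE this fix is typically a single call after the node is added to the graph; a sketch, assuming the standard AudioProcessor::enableAllBuses() API:

```cpp
// After adding the oscillator node, make sure its input/output buses
// match the graph's channel layout; a bus/channel mismatch is a common
// cause of garbled or distorted output from an AudioProcessorGraph.
oscillatorNode->getProcessor()->enableAllBuses();
```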


Now the only issue is that polyphony doesn't seem to be working, even though I have multiple synth voices added to my synth in my audio source.

So I believe the reason polyphony isn't working correctly is that I have to clear the buffer in my renderNextBlock call, which wipes out the samples already rendered by the other voices.

I am not sure how to get around this, though. With the processorChain code in the introduction to DSP example, it is able to operate on a separate block and then add that block to the buffer:

auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
juce::dsp::ProcessContextReplacing<float> context (block);
processorChain.process (context);

juce::dsp::AudioBlock<float> (outputBuffer)
    .getSubBlock ((size_t) startSample, (size_t) numSamples)
    .add (tempBlock);

However, the processor graph's processBlock() call operates on the entire buffer. How do I add my voices together?

Did you manage to solve this? I'm currently facing the same challenge…

Hey, stone here. I lost access to that old account, so I use this username now. I think what I ended up doing was using a separate buffer for each synth voice, and then adding the contents of the voice buffers to the output buffer in the renderNextBlock method.

Basically: add an AudioBuffer member variable to your voice class, initialize it in prepareToPlay, and in renderNextBlock pass the voice buffer to your processor graph, then add the processed samples to the output buffer. Also, instead of clearing the output buffer, you just clear the voice buffer, i.e.:

voiceBuffer.clear();

MidiBuffer incomingMidi;
mainProcessor->processBlock (voiceBuffer, incomingMidi);

// Mix (don't overwrite) this voice's output into the shared buffer,
// duplicating channel 0 of the voice buffer into both output channels.
outputBuffer.addFrom (0, startSample, voiceBuffer.getReadPointer (0), numSamples);
outputBuffer.addFrom (1, startSample, voiceBuffer.getReadPointer (0), numSamples);