I am attempting to add an AudioProcessorGraph to my synthesizer voice (similar to how the intro to DSP tutorial adds a processor chain to its synth voice). This is happening inside a standalone app, not a plugin.
I am using a MidiKeyboardComponent to obtain MIDI messages from the keyboard, and these get passed down into my voice from my SynthAudioSource. The voice contains a mainProcessor member, plus members for the audio input, audio output, and oscillator nodes.
Inside my voice, I create a processor graph with a single oscillator node and connect it (audioInput → oscillator → audioOutput):
mainProcessor->clear();

audioInputNode  = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::audioInputNode),
                                          AudioProcessorGraph::NodeID (audioInputID));
audioOutputNode = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::audioOutputNode),
                                          AudioProcessorGraph::NodeID (audioOutputID));
midiInputNode   = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::midiInputNode),
                                          AudioProcessorGraph::NodeID (midiInputID));
midiOutputNode  = mainProcessor->addNode (std::make_unique<AudioGraphIOProcessor> (AudioGraphIOProcessor::midiOutputNode),
                                          AudioProcessorGraph::NodeID (midiOutputID));
oscillatorNode  = mainProcessor->addNode (std::make_unique<OscillatorProcessor>(),
                                          AudioProcessorGraph::NodeID (oscillatorID));
for (int channel = 0; channel < 2; ++channel)
{
    mainProcessor->addConnection ({ { audioInputNode->nodeID,  channel },
                                    { oscillatorNode->nodeID,  channel } });
    mainProcessor->addConnection ({ { oscillatorNode->nodeID,  channel },
                                    { audioOutputNode->nodeID, channel } });
}
// connectAudioNodes();
connectMidiNodes();
My voice’s startNote method sets the frequency of the oscillator:
void SineWaveVoice::startNote (int midiNoteNumber, float velocity, SynthesiserSound* sound, int currentPitchWheelPosition)
{
    auto cyclesPerSecond = MidiMessage::getMidiNoteInHertz (midiNoteNumber);
    dynamic_cast<OscillatorProcessor*> (oscillatorNode->getProcessor())->setFrequency (cyclesPerSecond);
}
And in the voice’s renderNextBlock method I call the processBlock method of my main processor.
void SineWaveVoice::renderNextBlock (AudioBuffer<float>& outputBuffer, int startSample, int numSamples)
{
    for (auto i = outputBuffer.getNumChannels(); --i >= 0;)
        outputBuffer.clear (i, 0, outputBuffer.getNumSamples());

    MidiBuffer incomingMidi;
    mainProcessor->processBlock (outputBuffer, incomingMidi);
}
However, I am not getting any audio output. I made sure to call
mainProcessor->setPlayConfigDetails(0, 2, sampleRate, samplesPerBlockExpected);
mainProcessor->prepareToPlay (sampleRate, samplesPerBlockExpected);
before I initialize the graph as well.
The intro to DSP tutorial uses a processor chain, and in its renderNextBlock method it manipulates the output buffer like so:
auto block = tempBlock.getSubBlock (0, (size_t) numSamples);
block.clear();
juce::dsp::ProcessContextReplacing<float> context (block);
processorChain.process (context);

juce::dsp::AudioBlock<float> (outputBuffer)
    .getSubBlock ((size_t) startSample, (size_t) numSamples)
    .add (tempBlock);
Which, obviously, I am not doing. Is that the issue? I thought the call to mainProcessor->processBlock (outputBuffer, incomingMidi) would be all I needed to get outputBuffer updated. What am I missing?