AudioProcessorGraph Example in Docs?

Hey all,

First time poster here, so let me first just say that JUCE has been massively helpful for me, for which I want to say thanks!

Now, I've built a small 2-oscillator monophonic subtractive synth, and I want to reorganize the internals to use the AudioProcessorGraph, as that seems like the right approach to growing this project into a more involved VST. But, after reading the documentation, looking through the source code, and trying things out in my project, I'm still confused about how to actually wire up a successful graph. For that reason, I think it'd be extremely helpful to have a simple example, just a couple lines of code, either here or in the docs to show a basic, working case.

Some questions I've been unable to answer in particular:

1) I'm confused about the AudioGraphIOProcessors. Do I need to create an input-mode and an output-mode AudioGraphIOProcessor and add them to my graph in order to get audio flowing correctly? That is, connect my input-mode node into my custom AudioProcessor modules, then sum those into the output-mode node?

1b) If the answer to the previous question is "no," then how do I control which nodes write to the final output buffer? Is it any node that has no specified output connection?

2) What's the role of the AudioProcessorPlayer? My understanding was that, in the `processBlock` method of my main plugin processor, I could just call `myGraph.processBlock` with the same buffer and all would be well. But everywhere I find information about how to use the AudioProcessorGraph, I find mention of the AudioProcessorPlayer. Have I been missing something? Do I need to use the player to collect the buffer from the graph?

Thanks for your help!


Yes, you will need to add an input node and an output node, and you will need to use the graph's API to connect things together.  See addNode and addConnection for that.

To create an input or output node you need to do something like this for each one:

    AudioPluginInstance* instance = new AudioProcessorGraph::AudioGraphIOProcessor (AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode);

Then add the instance to the graph using addNode()
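Putting those together, a minimal pass-through wiring might look like this (an untested sketch using the same graph API as the rest of this thread; `inputInstance` and `outputInstance` are placeholder names for the two AudioGraphIOProcessors created as above, and the node ids 1 and 2 are arbitrary but must be unique within the graph):

    // Sketch: wire the graph's audio input straight to its audio output.
    graph.addNode (inputInstance, 1);
    graph.addNode (outputInstance, 2);

    graph.addConnection (1, 0, 2, 0); // left channel:  node 1 -> node 2
    graph.addConnection (1, 1, 2, 1); // right channel: node 1 -> node 2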

You don't need to use an AudioProcessorPlayer, but there are things the AudioProcessorPlayer does that you will then need to do for the graph yourself.  These are things like setPlayConfigDetails and prepareToPlay.  With those all in place you should be able to call processBlock on the graph.  I believe some people have tried this and had some difficulty getting it to work right, but it is doable.  Also note that the graph uses the message thread to build its internal state.  May not be an issue but something to keep in mind.
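In other words, the minimal driving sequence is roughly this (a sketch; `m_graph`, the channel counts and the block size are placeholders for your own setup):

    // Sketch: the calls an AudioProcessorPlayer would otherwise make for you.
    m_graph.setPlayConfigDetails (numIns, numOuts, sampleRate, blockSize);
    m_graph.prepareToPlay (sampleRate, blockSize);

    // ...then, in every audio callback:
    m_graph.processBlock (buffer, midiMessages);

    // ...and on shutdown:
    m_graph.releaseResources();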

Also, might not be as basic as you're looking for but you can find usage of the graph in the audio plugin host demo in the examples folder. 

Thanks, that definitely clears some things up. At this point I feel like I should be really close, but I'm still not getting any sound output. Here's my code:

In my main plugin processor constructor:

    AudioProcessorGraph::AudioGraphIOProcessor* input =
        new AudioProcessorGraph::AudioGraphIOProcessor(
            AudioProcessorGraph::AudioGraphIOProcessor::audioInputNode);

    AudioProcessorGraph::AudioGraphIOProcessor* output =
        new AudioProcessorGraph::AudioGraphIOProcessor(
            AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode);

    m_graph.addNode(input, 1);
    m_graph.addNode(output, 2);
    m_graph.addNode(new OscillatorNode(), 3);

    // Input (node 1) -> oscillator (node 3) -> output (node 2), left and right channels.
    m_graph.addConnection(1, 0, 3, 0);
    m_graph.addConnection(1, 1, 3, 1);
    m_graph.addConnection(3, 0, 2, 0);
    m_graph.addConnection(3, 1, 2, 1);

Then, in `prepareToPlay` of my main plugin processor:

    m_graph.setPlayConfigDetails(getNumInputChannels(), getNumOutputChannels(), sampleRate, samplesPerBlock);
    m_graph.setProcessingPrecision(AudioProcessor::singlePrecision);
    m_graph.prepareToPlay(sampleRate, samplesPerBlock);

And, finally, in `processBlock` of my main plugin processor, I just render out the graph:

    m_graph.processBlock(buffer, midiMessages);

My `OscillatorNode` class just writes random white noise values into its output buffer in every call to `processBlock`, so I'm pretty sure the issue isn't there... Have I made a mistake in the code above? Am I missing something? Thanks again.

Hmm… it generally looks right.  I see the graph you're using has the recent changes for supporting doubles in it.  I haven't gotten familiar with what's new there yet, so I might be missing something with that.

That said, I'd suggest putting some breakpoints/logs in and making sure the processBlock on your OscillatorNode is getting called.  Make sure its sample rate, buffer size and number of channels are all set properly.  buildRenderingSequence() is where the nodes have their prepareToPlay called.  Make sure the sample rate and buffer size are good at the point it is called.  Also make sure that things are happening in the right sequence and that nothing is getting wiped out before/after something else.  Also take a peek at AudioGraphIOProcessor::setParentGraph.  You'll see that is where the input and output nodes have their setPlayConfigDetails called.  Make sure everything is set there as well.
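For example (an untested sketch along those lines), a couple of DBG lines in the custom node will show whether, and with what settings, it gets prepared and processed:

    // Sketch: trace when the graph prepares/processes this node (debug builds only).
    void OscillatorNode::prepareToPlay (double sampleRate, int samplesPerBlock)
    {
        DBG ("OscillatorNode prepared: sr=" << sampleRate
             << " block=" << samplesPerBlock
             << " ins=" << getNumInputChannels()
             << " outs=" << getNumOutputChannels());
    }

    void OscillatorNode::processBlock (AudioSampleBuffer& buffer, MidiBuffer&)
    {
        DBG ("OscillatorNode processBlock: channels=" << buffer.getNumChannels());
        // ...write the noise into buffer as before...
    }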

The graph isn't the easiest to debug.  The basic workings are that it holds onto your Nodes and Connections and each time you make a change it will rebuild itself.  It doesn't rebuild right away though, it triggers an async update on the message thread.  This means you can make any number of changes on one pass of the message loop and it will rebuild on a later pass.  When it does rebuild, it takes a look at your nodes and connections and generates a list of "mini" processors (AudioGraphRenderingOps) to carry out the buffer processing needed for the graph.  So when it comes to debugging, you want to verify that anytime buildRenderingSequence is called, that your nodes and connections are in good state, and that the settings for the graph are properly applied to each node.


Hm, ok. I think I'm getting close. Here's what I found while debugging, and I'm a little stumped: `AudioProcessorGraph::Node::prepare` for my `OscillatorNode` gets called during `m_graph.prepareToPlay`, where `prepare` attempts to propagate the `setPlayConfigDetails` call that the graph received earlier. This call would then make sure the AudioProcessor (which is my OscillatorNode) has the appropriate internal state to start rendering output. However, that call to `setPlayConfigDetails` in `AudioProcessorGraph::Node::prepare` never actually happens, because the node already has `isPrepared = true` at that point, even though its internal state is definitely not ready.

Unfortunately, Xcode's debugger is somehow just *not* working, so I'm struggling to figure out where the first call to `prepare` comes from. But it seems that `unprepare` only gets called from `releaseResources`, which I'm not calling until the destructor of my graph. So the first run through `prepare` is setting the wrong state, and the second just gets ignored with `isPrepared = true`. If you have any idea where that might be coming from, I'd love to hear it! Either way, I'll keep digging.


Yeah that isPrepared check is a bit of an obstacle.  If I recall correctly (this goes a few years back) that's where the order of things comes into play.  If the graph calls buildRenderingSequence without the settings being right then the nodes won't get the right settings (or something like that).  

Anyway, first thing I'd do is just remove the isPrepared check and see if the graph starts working for you.  If it does then great.  Then it's just a matter of getting the order right.  It might take calling releaseResources just before prepareToPlay.  I remember it being a bit tricky to get right.  The graph I run now has a lot of changes to the code and I use it differently than the stock graph, but in my version I no longer have the isPrepared check.  Instead it checks to see if the processor's settings match and if they don't then it calls prepareToPlay.  I'm not really recommending that (although it might work for you).
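If reordering turns out to be enough, the idea would be something like this sketch (untested) at the top of your prepareToPlay, so every node is back in an unprepared state before the new settings are applied:

    // Sketch: releaseResources() ends up calling unprepare() on each node,
    // clearing isPrepared so the next prepareToPlay() re-applies the settings.
    m_graph.releaseResources();
    m_graph.setPlayConfigDetails (numIns, numOuts, sampleRate, blockSize);
    m_graph.prepareToPlay (sampleRate, blockSize);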

So I actually think the following line is the problem:

    m_graph.addNode(new OscillatorNode(), 3);

And the reason I think it's problematic is that `addNode` creates a `Node` instance and sets that node's `processor` member to the AudioProcessor object created by the call to `new OscillatorNode()`. Later, during the `prepare` call, the node will use its `processor` member's getter methods to set properties on the processor itself:

    processor->setPlayConfigDetails (processor->getNumInputChannels(),
                                     processor->getNumOutputChannels(),
                                     newSampleRate, newBlockSize);

However, at the point that this node tries to prepare itself, the AudioProcessor object that it points to has not yet been configured with these details. Thus `getNumInputChannels()` and `getNumOutputChannels()` both return 0.

This all has me thinking that, rather than writing `m_graph.addNode(new OscillatorNode(), 3);` I should instead be writing something like:

    memberVar = new OscillatorNode();
    m_graph.addNode(memberVar);

    // And at some point later...
    memberVar->setPlayConfigDetails(...);

    // Later still,
    m_graph.setPlayConfigDetails(...);

I'll have a chance to explore that in a few hours, but if that rings any bells for you, I'd love your input. Thanks for all the help so far!

Yes, you'll definitely want to have those channels set.  If you haven't done so already, you may want to check the return value when you call addConnection.  It will return false if the processor doesn't have the specified channel.
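For example (a tiny sketch), asserting on the result makes a missing channel fail loudly in debug builds:

    // Sketch: addConnection() returns false if a node lacks the given channel
    // (e.g. because setPlayConfigDetails hasn't been applied to it yet).
    bool connected = m_graph.addConnection (1, 0, 3, 0);
    jassert (connected);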

Ok, I finally got sound! Here's my code:

    void MyAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)
    {
        m_graph.setPlayConfigDetails(getNumInputChannels(), getNumOutputChannels(), sampleRate, samplesPerBlock);
        m_graph.setProcessingPrecision(AudioProcessor::singlePrecision);
        m_graph.prepareToPlay(sampleRate, samplesPerBlock);

        AudioProcessorGraph::AudioGraphIOProcessor* input =
            new AudioProcessorGraph::AudioGraphIOProcessor(
                AudioProcessorGraph::AudioGraphIOProcessor::audioInputNode);

        AudioProcessorGraph::AudioGraphIOProcessor* output =
            new AudioProcessorGraph::AudioGraphIOProcessor(
                AudioProcessorGraph::AudioGraphIOProcessor::audioOutputNode);

        mOsc1Node = new OscillatorNode();
        mOsc1Node->setPlayConfigDetails(getNumInputChannels(), getNumOutputChannels(), sampleRate, samplesPerBlock);

        m_graph.addNode(input, 1);
        m_graph.addNode(output, 2);
        m_graph.addNode(mOsc1Node, 3);

        m_graph.addConnection(1, 0, 3, 0);
        m_graph.addConnection(1, 1, 3, 1);
        m_graph.addConnection(3, 0, 2, 0);
        m_graph.addConnection(3, 1, 2, 1);
    }

You're right that the ordering is extremely important, but it's entirely unclear from the documentation, hence my suggestion for an example in the docs :). It also seems weird to me that I should have to wait until `prepareToPlay` to assemble my graph, but that's the point at which I can configure everything appropriately, which needs to happen before I assemble my nodes and connections. Thanks for all your help, and, if you see anything in the above code segment that raises a red flag in your mind, let me know!


Success!  Great to hear you got it working.  The graph is a sort of funny thing with the way the setup works, but I believe it was originally written to be set up in just a certain way, specifically the way you'll find it in the audio plugin host example.  So that's why you won't find docs for setting it up like this.  But really the graph doesn't need some of the things it has in place.  One example is the usage of triggerAsyncUpdate and the reliance on the message thread.  It makes it easy to use for the way it was intended but isn't really needed.  I maintain my own version of the graph and have removed a number of things, which makes it a bit more straightforward to use.

There's also a thread from a while back where I sped up the code for the internal graph building.  If you start adding lots of nodes and the building starts to slow down then you might want to try it out (note that it doesn't have the recent changes which added double precision support, I'll have to update it when I get some time).  http://www.juce.com/comment/305234#comment-305234

There's also another version of the graph that another dev, chkn, made which has multithreaded processing.  I haven't tried it out but you might want to take a look at it.  http://www.juce.com/forum/topic/multithreaded-audioprocessorgraph-source-code.


This thread was super helpful in getting my own AudioProcessorGraph code to work, thank you! I took what I learned here and put it on GitHub:

This modifies the JUCE “Processing Audio Input” tutorial to use a graph and an AudioProcessorPlayer. It may be useful as a working starting point – it works for me on Windows with a Scarlett 2i2 external USB interface. (It crashes on shutdown though; any tips appreciated on that!)
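For reference, the basic wiring in that kind of standalone setup looks roughly like this (a sketch, not the linked repo's actual code; the names are placeholders). Destruction order is one common cause of shutdown crashes, so detaching the player first is worth checking:

    // Sketch: drive an AudioProcessorGraph from an AudioProcessorPlayer.
    AudioDeviceManager deviceManager;
    AudioProcessorGraph graph;
    AudioProcessorPlayer player;

    deviceManager.initialiseWithDefaultDevices (2, 2); // 2 ins, 2 outs
    player.setProcessor (&graph);   // the player calls prepareToPlay/processBlock
    deviceManager.addAudioCallback (&player);

    // ...build the graph's nodes and connections as earlier in the thread...

    // On shutdown, detach the player before the graph is destroyed, otherwise
    // the audio device thread can call into a dead processor:
    deviceManager.removeAudioCallback (&player);
    player.setProcessor (nullptr);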


Hmm… My code looks exactly like this, yet I get crashes during non-realtime export in Ableton.

Has anyone had this issue?

Best,
J