AudioProcessorGraph question


I’m implementing a graph of plugins to be used inside an AudioSource, and I’d like to know the recommended way to connect the I/O of this AudioProcessorGraph.
I see there’s AudioProcessorPlayer, but that is for connecting to a device, and I would like to process straight in/out of the processBlock function.
I’ve also seen AudioGraphIOProcessor, but unfortunately, when using the processBlock function, I get crackle in the audio, which seems to be due to the buffer size. Each buffer fed to the graph most often has 32 extra samples, and very often I also get variable-sized buffers.

When trying to process a single node, it works fine. But through the whole graph, I can’t get around this buffer size issue at the moment.

Should I use the AudioGraphIOProcessor?
Also, I’m calling this in prepareToPlay:

filterGraph.getGraph().setPlayConfigDetails (2, 2, sampleRate_, samplesPerBlockExpected);
filterGraph.getGraph().prepareToPlay (sampleRate_, samplesPerBlockExpected);

However, the buffer sizes keep changing, and I get a JUCE assertion about the size.
I guess there’s something I’m overlooking… Maybe some latency compensation that would account for those 32 extra samples? Any advice is appreciated.
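For what it’s worth, variable-sized incoming buffers can be smoothed over by buffering on your side. Here is a minimal sketch in plain C++ (no JUCE; all names are hypothetical) of a FIFO adapter that only ever hands the wrapped process function the fixed block size it was prepared with, so a graph prepared for `samplesPerBlockExpected` never sees the 32-sample overhang:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Feed a processor that was prepared for a fixed block size from a caller
// that delivers variable-sized buffers. Incoming samples are queued; the
// wrapped process function is only ever called with exactly blockSize
// samples, which is what prepareToPlay promised it.
struct FixedBlockAdapter
{
    explicit FixedBlockAdapter (std::size_t blockSize) : block (blockSize) {}

    // processFn stands in for something like graph.processBlock(); it is
    // always handed exactly block.size() samples per call.
    template <typename ProcessFn>
    void push (const float* input, std::size_t numSamples, ProcessFn processFn)
    {
        fifo.insert (fifo.end(), input, input + numSamples);

        while (fifo.size() >= block.size())
        {
            const auto n = (std::ptrdiff_t) block.size();
            std::copy (fifo.begin(), fifo.begin() + n, block.begin());
            fifo.erase (fifo.begin(), fifo.begin() + n);
            processFn (block.data(), block.size());  // always a full, fixed-size block
        }
    }

    std::vector<float> fifo;    // samples waiting for a complete block
    std::vector<float> block;   // scratch buffer handed to the processor
};
```

The left-over samples simply wait in the FIFO for the next callback, at the cost of a small amount of extra buffering latency.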



Ok, it seems I found the solution:
The problem lies with latency. I always have to prepareToPlay with a buffer size a little larger than the audio I want to process, because of the added latency:

int maxLatency = 44100;
filterGraph.getGraph().setPlayConfigDetails (2, 2, sampleRate_, samplesPerBlockExpected + maxLatency);
filterGraph.getGraph().prepareToPlay (sampleRate_, samplesPerBlockExpected + maxLatency);
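To make the “added latency” concrete: a plugin that reports a latency of N samples emits N samples of silence before the processed signal appears, so the output is shifted by N relative to the input. A minimal sketch in plain C++ (no JUCE; names are hypothetical, and the latency value stands in for what getLatencySamples() would report) showing that the compensation is discarding those first N output samples rather than enlarging the prepared block:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// A pass-through "plugin" that, like a lookahead processor, delays its
// output by a fixed number of samples: the first `latencySamples` outputs
// are silence.
struct LatentPassthrough
{
    explicit LatentPassthrough (std::size_t latencySamples)
        : delayLine (latencySamples, 0.0f) {}   // pre-filled with zeros

    float processSample (float x)
    {
        delayLine.push_back (x);
        float out = delayLine.front();
        delayLine.pop_front();
        return out;
    }

    std::deque<float> delayLine;
};

// Run `input` through the processor and drop the first `latency` output
// samples, so the result lines up with the input again.  (The delayed tail
// is not flushed in this sketch.)
std::vector<float> processCompensated (LatentPassthrough& p,
                                       const std::vector<float>& input,
                                       std::size_t latency)
{
    std::vector<float> out;
    for (float x : input)
    {
        float y = p.processSample (x);
        if (latency > 0) { --latency; continue; }  // swallow the initial silence
        out.push_back (y);
    }
    return out;
}
```

Sized this way, the compensation only needs to be as large as the chain’s actual reported latency, not a fixed worst-case number of samples.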



Can you please explain what you mean by “added latency”? In the context of audio processing, latency usually means a short delay introduced by your sound card’s driver and other parts of your software. With ASIO drivers you would get something like 10 ms or less. Even with “normal” sound drivers and hardware, you would rarely get more than 50 ms.

From your code, I see that you add another 44100 samples, which is usually a full second :shock:

What’s the point?!



I mean the latency added by the plugins which are loaded in the graph. Check out AudioProcessor::getLatencySamples().
In DAWs that kind of latency is usually compensated by PDC (plugin delay compensation) or ADC (automatic delay compensation).

Most plugins have zero processing latency. Many have near-zero latency (e.g. dynamics processing with lookahead), and some (e.g. FFT-style pitch shifters) can have a huge processing delay (up to thousands of samples). 44100 is the maximum latency I expect for my whole plugin chain.
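As a sketch of the arithmetic (plain C++, with hypothetical numbers standing in for each plugin’s getLatencySamples()): for plugins in series the reported latencies simply add up, while parallel branches in a graph must be aligned to the slowest one, which is what a DAW’s PDC does.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Total latency of a serial chain is the sum of each processor's reported
// latency; this is the number of output samples to compensate for.
std::size_t totalSerialLatency (const std::vector<std::size_t>& latencies)
{
    std::size_t total = 0;
    for (std::size_t l : latencies)
        total += l;
    return total;
}
```

So a chain of, say, a zero-latency EQ, a 64-sample lookahead limiter, and a 3072-sample FFT pitch shifter would report 3136 samples in total, well under a fixed 44100-sample worst case.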


If your graph is set in a processor player, and the player is set as a callback on a device manager, your calls to setPlayConfigDetails and prepareToPlay will be pointless… The device manager has the responsibility of preparing the graph to play when a device is part of the scenario, if I’m not mistaken (otherwise you get free buffer-mismatch issues, or free pitch shifting).

Have a read of these posts, too: