Allow Juce AudioGraph to return overall latency of entire graph

The JUCE AudioGraph is amazing in terms of how it manages latency internally, i.e. it will automatically synchronize the graph. However, when you query the graph for its overall latency it doesn’t return a helpful value, and it doesn’t have any sort of notification system for when the latency changes.

Is there a way we could get into the codebase a way for the graph to return the total latency of all nodes in the graph – and to be notified of changes to that latency?

This would allow an AudioGraph being used inside a plugin to properly report latency, and latency changes, to the host.
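To illustrate what’s being asked for, here is a minimal self-contained sketch (no JUCE; `FakeGraph` and `FakePlugin` are hypothetical stand-ins for `juce::AudioProcessorGraph` and a wrapping `juce::AudioProcessor`) of a plugin forwarding its internal graph’s overall latency to the host:

```cpp
#include <cassert>

// Hypothetical stand-in for juce::AudioProcessorGraph.
struct FakeGraph
{
    int latency = 0;
    int getLatencySamples() const { return latency; }
};

// Hypothetical stand-in for a plugin processor that owns a graph.
struct FakePlugin
{
    FakeGraph graph;
    int reportedLatency = 0;

    // In JUCE this call is what tells the host about our latency.
    void setLatencySamples (int l) { reportedLatency = l; }

    // The behaviour this request asks for: whenever the graph's
    // overall latency changes, forward it to the host.
    void syncLatencyWithGraph()
    {
        if (graph.getLatencySamples() != reportedLatency)
            setLatencySamples (graph.getLatencySamples());
    }
};
```

The missing pieces in current JUCE are a trustworthy `getLatencySamples()` on the graph itself and a notification to trigger `syncLatencyWithGraph()`.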

See:

Rail


If you’re interested in why this is a difficult problem, there’s this talk at ADC next month:

https://audio.dev/talks/introducing-tracktion-graph-a-topological-processing-library-for-audio


Aside from that, it would be great if the JUCE-only AudioGraph/AudioPluginHost could automatically detect a changed latency from a plugin and recreate the graph automatically (or as an option).
This currently does not happen with AudioUnit and VST3 (I did not check VST2).
I also found a minor bug: if you don’t give the host process access to the source-code directory (for security reasons), the AudioPluginHost demo will crash, because AUv3SynthProcessor does not find singing.ogg and does not check for nullptr in loadNewSample.

On a related note, I’d like to add a request for a specific latency changed callback which would help in this situation.

One of the big problems is detecting a latency change in the first place. The JUCE API only offers the audioProcessorChanged callback for all plugin changes. This will include preset changes, parameter name changes and various other things depending on the underlying plugin format and specific plugin.

It is possible to track the latency of the plugin and compare it every time audioProcessorChanged is called, but unfortunately this doesn’t separate latency changes from the other types of change. For example, if parameter names change and latency changes at the same time, you will probably only get a single audioProcessorChanged callback. This unfortunately means that in our wrapper around plugins we need to rebuild all of the parameter wrappers every time audioProcessorChanged is called, even if it is only a latency change.

This may seem trivial, but add to this automation and modifier updates, and we really want to minimise the number of times we rebuild these parameters.
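The diff-the-latency workaround described above can be sketched as follows. This is a minimal self-contained version (no JUCE; `FakeProcessor` stands in for `juce::AudioProcessor`): cache the last-seen latency and, on every audioProcessorChanged-style callback, fire a dedicated callback only when the value actually differs. As noted, this still can’t tell a pure latency change apart from a combined change delivered in one callback.

```cpp
#include <functional>

// Hypothetical stand-in for juce::AudioProcessor::getLatencySamples().
struct FakeProcessor
{
    int latencySamples = 0;
    int getLatencySamples() const { return latencySamples; }
};

// Caches the last-seen latency and fires onLatencyChanged only
// when the value has actually changed.
class LatencyChangeDetector
{
public:
    explicit LatencyChangeDetector (FakeProcessor& p)
        : proc (p), lastLatency (p.getLatencySamples()) {}

    std::function<void (int)> onLatencyChanged;

    // Call this from your audioProcessorChanged() override.
    void processorChanged()
    {
        const int now = proc.getLatencySamples();

        if (now != lastLatency)
        {
            lastLatency = now;

            if (onLatencyChanged)
                onLatencyChanged (now);
        }
    }

private:
    FakeProcessor& proc;
    int lastLatency;
};
```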


Hey @t0m – definitely don’t mean to say it’s trivial! The render sequence building is quite complex, and there are certainly still parts of it I don’t fully understand.

However, as in @railjonrogut’s suggestion, it is possible with a couple of minor tweaks to get the overall latency managed correctly, and it appears there is some attempt at this in the render sequence builder:

graph.setLatencySamples (totalLatency);

I guess another good question is – if totalLatency isn’t the overall latency of the graph, what is it meant to represent?

When it creates the rendering ops for a node, these few lines appear to interact with the overall latency of the graph, but it’s not entirely clear to me what’s meant to be going on here:

        delays.set (node.nodeID.uid, maxLatency + processor.getLatencySamples());

        if (numOuts == 0)
            totalLatency = maxLatency;

Wouldn’t it be possible, while doing these render ops, to just sum up all of the processor.getLatencySamples() values and get the overall latency of the whole shebang?

> When it creates the rendering ops for a node, these few lines appear to interact with the overall latency of the graph, but it’s not entirely clear to me what’s meant to be going on here:
>
>         delays.set (node.nodeID.uid, maxLatency + processor.getLatencySamples());
>
>         if (numOuts == 0)
>             totalLatency = maxLatency;
>
> Wouldn’t it be possible, while doing these render ops, to just sum up all of the processor.getLatencySamples() values and get the overall latency of the whole shebang?

It seems that these code lines actually accumulate the latency of each processor inside the graph until you get to the last one.
The final if assumes that if the node being checked has zero outputs (and therefore could/should be the last one), then totalLatency holds the overall latency of the graph.

The only reason it doesn’t report the actual total latency could be that the processing chain contains more than one node (processor) with no outputs(?)
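A minimal self-contained model of what the snippet above appears to compute may help here (this mirrors the `delays.set (…, maxLatency + processor.getLatencySamples())` logic, but the node representation is my own, not JUCE’s). One detail it makes clear: latencies add along a serial chain, but parallel branches take the max, so simply summing every processor’s latency would over-report whenever the graph has parallel paths. The overall figure should be the largest delay reaching any node with no outputs.

```cpp
#include <algorithm>
#include <map>
#include <vector>

// Minimal stand-in for a graph node: its own latency plus the
// indices of the nodes feeding it.
struct Node
{
    int latency = 0;
    std::vector<int> inputs;
};

// Mirrors what the render-sequence builder appears to do: each node's
// total delay is the max of its inputs' delays plus its own latency;
// the graph's overall latency is the largest delay reaching a sink
// (a node with no outputs). Nodes must be given in topological order.
int computeGraphLatency (const std::vector<Node>& nodesInTopoOrder,
                         const std::vector<int>& sinkNodes)
{
    std::map<int, int> delays;

    for (size_t i = 0; i < nodesInTopoOrder.size(); ++i)
    {
        int maxInputDelay = 0;

        for (int in : nodesInTopoOrder[i].inputs)
            maxInputDelay = std::max (maxInputDelay, delays[in]);

        delays[(int) i] = maxInputDelay + nodesInTopoOrder[i].latency;
    }

    int total = 0;

    for (int s : sinkNodes)
        total = std::max (total, delays[s]);

    return total;
}
```

With multiple sink nodes, taking the max over all of them (rather than whichever node happens to be visited last) would sidestep the problem speculated about above.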