Running a circular connection in an audiograph

Hey everyone, this is something I’m curious about because I’d like to be able to implement something of the sort in my app.

Basically, in an AudioProcessorGraph, if you connect nodes in a loop - does this cause a problem in the AudioProcessor logic?

I did a brief test and got some hardcore digital buzzing noises, and when I set the “loop” gain to zero, the input was clearly no longer getting through. So I’m wondering whether the AudioProcessor logic ends up computing in a permanent loop.

Does this make sense? The nature of my app means these kinds of layouts are somewhat necessary. Ideally the audiograph would only compute each processor once. I’m worried that the logic makes it compute recursively, infinitely. If so, how could I go about modifying the processor logic?

Well, what you have there is a feedback loop which by nature is recursive. It’s what most filter designs are based on. I guess the first question is, why do you need such loops in your graph?
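
To make the recursion concrete: the textbook case is a one-pole filter, where each output sample feeds back into the next one. A minimal standalone sketch (not JUCE code; the coefficient is arbitrary):

```cpp
// One-pole lowpass: y[n] = x[n] + a * y[n-1].
// The dependence of each output on the previous output *is* the feedback.
float onePoleFilter (const float* in, float* out, int numSamples,
                     float a, float state)
{
    for (int i = 0; i < numSamples; ++i)
    {
        state = in[i] + a * state;   // previous output feeds back in
        out[i] = state;
    }
    return state;   // carry the feedback state across blocks
}
```

Inside a single processor this works because the feedback is resolved sample by sample; spread across graph nodes, each node only ever sees whole blocks of the others’ output.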

Rail

So, from your example, are filters impossible to implement across more than one AudioProcessor in a graph? I’m guessing real filters are implemented inside a single AudioProcessor so that the feedback is internal.

Hey, thanks Rail for your links. I looked through the different posts that were embedded.

This one seems to get to the root of the issue, impressive digging by OP.

But from what I gather, the problem has not actually been fixed? I saw the same behaviour: the input route gets muted at the node receiving the feedback connection.

Has anyone found a workaround of some kind?

I somehow doubt the AudioProcessorGraph actually supports feedback connections. (It might have supported that at some point in time, but the code has been changed a bit during the years and maybe the feedback case handling was lost at some point.) There are methods that tell if an attempted connection in the graph would be legal. Maybe try checking the results of those.
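
For reference, a rough sketch of what that check could look like with a recent JUCE version (the node IDs are placeholders, and I haven’t verified this against every JUCE release):

```cpp
#include <juce_audio_processors/juce_audio_processors.h>

// Ask the graph whether it would accept the edge before adding it.
// canConnect() only reports legality; it says nothing about whether
// audible feedback would actually be rendered.
void tryFeedbackEdge (juce::AudioProcessorGraph& graph,
                      juce::AudioProcessorGraph::NodeID from,
                      juce::AudioProcessorGraph::NodeID to)
{
    juce::AudioProcessorGraph::Connection loopBack { { from, 0 }, { to, 0 } };

    if (graph.canConnect (loopBack))
        graph.addConnection (loopBack);
    else
        DBG ("Graph rejects this feedback connection");
}
```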

The Juce AudioPluginHost allows making feedback connections in the GUI, but it doesn’t sound like it’s actually doing any audio feedback.

Yeah, that seems to be the case. The PluginHost behaves just like the underlying graph implementation: you can legally connect the nodes, but the audio gets cut out.

I don’t really have a choice in terms of implementation. I probably will need to implement some sort of custom AudioGraph to handle these feedback loops.

If possible, refactor your code so you don’t need to use the AudioProcessorGraph.

Rail

Is it common to allow feedback loops (cycles) in audio graphs like this?

It was mentioned as a way to enable ping-pong delay effects, but wouldn’t the effect then be dependent on the buffer size being used?

I think feedback can only work if there is a delay in the loop (as was said before). But it also needs a specific implementation, one that is incompatible with the in-place (replacing) processBlock() calls.

The processing would need to happen in multiple steps: starting at the delay(s), borrowing previously stored samples from the delay buffer, and filling it back up once the rest of the cycle has been resolved.

I am not sure if it is possible to implement that in a clean and generic way.
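
To make the multi-step idea concrete, here is a rough, invented sketch (not JUCE’s rendering code) of a loop collapsed into one processor. The delay line is where the cycle is broken: each block first borrows the previously stored samples, then resolves the rest of the loop, and only then refills the delay:

```cpp
#include <algorithm>
#include <vector>

struct FeedbackDelay
{
    std::vector<float> delayBuffer;  // must hold at least one sample of latency
    size_t writePos = 0;
    float feedbackGain = 0.5f;       // keep |gain| < 1 so the loop stays stable

    explicit FeedbackDelay (size_t delaySamples)
        : delayBuffer (std::max<size_t> (delaySamples, 1), 0.0f) {}

    void processBlock (float* samples, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            const float borrowed = delayBuffer[writePos];            // step 1: borrow stored sample
            const float out = samples[i] + feedbackGain * borrowed;  // step 2: resolve the cycle
            delayBuffer[writePos] = out;                             // step 3: refill the delay
            samples[i] = out;
            writePos = (writePos + 1) % delayBuffer.size();
        }
    }
};
```

Doing the same thing generically, across arbitrary nodes, is exactly what’s hard: the replacing processBlock() calls only hand each node whole blocks of its inputs.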

My question wasn’t really about implementation but about why this would be desired.

If you introduce a cycle and solve it using latency, then you no longer have a coherent signal. This would indicate it’s an undesired feature for the most part.

As I mentioned before, though, someone else said this could be employed for musical effects. But in that case the effect would be tied to the buffer size (a small buffer would give a short delay, a large buffer a longer one; at 44.1 kHz, a 64-sample buffer is roughly 1.5 ms of loop delay, while 1024 samples is roughly 23 ms). This seems to introduce an undesired coupling to the prepared block size, which would appear to me to make the “feature” undesirable too.

So I was just asking generally (not about technical implementation details): when would this actually be useful and desired?


I’m mainly asking because I have to make this very decision myself in some work I’m doing at the moment and I can’t currently see a case for allowing cycles in the graph.

@dave96
On the “why” idea, for me at least, it’s purely because it grants the freedom to be more creative with effects. But other than a basic delay, I don’t see any other situation that calls for cycles.

And I guess the buffer size would be a minimum-size constraint on the delay. I may not know enough about the under-the-hood stuff, but it seems it would not make technical sense to have a “smaller than a buffer” delay. Maybe we’d need to implement it to find out.

On the other hand, before starting any audio programming at all, I had imagined using some sort of “compiler” algorithm that takes the different operations (which, in my case, are very small, near-atomic) and groups them into bigger units, like a single AudioProcessor. Looking at it now, this may be a promising solution.
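
As a very rough, hypothetical sketch of that compiler idea (all names invented): the atomic operations get fused into one processor, so any feedback among them can be resolved per sample rather than per block:

```cpp
#include <functional>
#include <vector>

using SampleOp = std::function<float (float)>;

struct FusedProcessor
{
    std::vector<SampleOp> ops;   // the "compiled" group of atomic operations
    float feedbackState = 0.0f;  // one-sample feedback resolved internally
    float feedbackGain  = 0.3f;

    void processBlock (float* samples, int numSamples)
    {
        for (int i = 0; i < numSamples; ++i)
        {
            float s = samples[i] + feedbackGain * feedbackState;
            for (auto& op : ops)
                s = op (s);        // run the fused chain sample by sample
            feedbackState = s;     // sample-accurate feedback, no block latency
            samples[i] = s;
        }
    }
};
```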

Before I got into audio programming, I liked DAWs that supported channel feedback, because it let me make specific types of delays or pitched reverbs. At that time I didn’t know much about buffer sizes or why feedback is strange from a technical point of view, namely that it introduces latency that depends on the buffer size.

But although I didn’t know the technical side of it, I liked the fact that some DAWs provided this feature. I believe at least Logic and Pro Tools support it, while Studio One, one of my favourites, doesn’t. I used to blame S1 for that, but now I understand it.

That’s how I, as a user, looked at this in the past :)

It’s quite common to set the buffer size low whilst tracking, to minimise latency, then gradually increase it whilst arranging or mixing to enable the use of more plugins, as at that point round-trip latency isn’t that important.

Personally I think it would be very odd if the sound of the session changed during this process simply because the buffer size changed. But if this is a well-known and expected side effect, then perhaps it’s still useful?

I can just imagine all kinds of complaints from DAW users, though, about why they get slight delays or phasing issues because they don’t fully understand the signal routing they’ve set up. In my experience, that is more subtle and difficult to comprehend than a signal path simply being silent because the cycle has been blocked.

Ideally, the audio engine should allow using smaller internal processing buffers unrelated to the current hardware buffer size. (For example, the hardware could be running at 1024 sample buffer size, but plugins and other processing could be run at 64 sample buffers in the engine, when applicable.)
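
Something along these lines (a simplified sketch; `processInternal` stands in for whatever per-node render call an engine would actually use):

```cpp
#include <algorithm>

// The hardware callback delivers one big block (e.g. 1024 samples), but
// the engine invokes its internal processing in fixed smaller slices.
void renderInSubBlocks (float* samples, int numSamples,
                        void (*processInternal) (float*, int),
                        int subBlockSize = 64)
{
    for (int offset = 0; offset < numSamples; offset += subBlockSize)
        processInternal (samples + offset,
                         std::min (subBlockSize, numSamples - offset));
}
```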

Yes, but that’s usually for other reasons such as automation accuracy and anti-aliasing.

Do many hosts explicitly allow this? We do it implicitly internally when required (i.e. if there are automation or modifiers on the plugin), but I can’t really see the point beyond that, as it defeats the purpose of setting a small buffer for lower latency and using a big buffer for reduced CPU usage.

It just seems like such a fringe use case, and one that would be difficult to rely on deterministically.

Could it be that internal buffers, unlike the “macro” buffers, don’t affect processing speed, because the macro buffers’ cost is mainly the overhead of starting and ending each processing callback?

In other words, perhaps an internal buffer doesn’t require any extra overhead: say one AudioProcessor writes to a buffer in memory and the next reads from it; the read/write is “negligible” compared to the overhead the macro buffer needs to start and finish processing. Therefore the macro buffer size has a noticeable effect, but the micro one does not.

@Xenakios, would this be practical in real-time audio?

@dave96 I can definitely see this being strange and unreliable, and the “buffer size reliability problem” you pointed out still holds. In my implementation, though, any circular connection would have to have a delay time assigned to it.