Is polling a lot of parameters from the AudioThread a good strategy?


I’m facing a dilemma regarding how to share data between UI and Audio threads.

I totally understand that I won’t find the perfect solution and that it’s just a matter of choice and trade-off.

I read almost all the topics about it on the JUCE forum and also the great ADC videos. However, I still cannot make a decision, and the point of this post is to share ideas about it; maybe it could help someone who is facing the same problem.

I’ve currently written a plugin with multiple effects. I also have an LFO that can modulate these effects.
The parameter UI lets the user set a min/max modulation range and needs to display it, plus the current scaled value computed from the LFO value (like Serum with its destination pattern).

I have more than 30 effect UIs where I should display Min, Max, and scaledValue related to the LFO.

I’m facing 2 cases:

Case 1: all computation in AudioThread
The MainThread sliders will only send min/max values in [0, 1] to the AudioThread.
The AudioThread will listen to them and convert the [0, 1] values to the desired effect range (with NormalisableRange); it will also listen to the LFO parameter and use it to scale the effect value.

The full process will be:

  • the UI changes the min/max values and sends them to the AudioThread using a ValueTree
  • the AudioThread gets the normalised min/max and stores them as the real min/max range
  • the AudioThread gets the changed LFO value and scales the parameters
  • the MainThread polls min/max/scaledValue and propagates them into the related UI components to display the values on labels

I will need to send a lot of parameters to the MainThread: min/max/scaledValue, around 30 * 3 = 90 atomic values.
I know how to handle it with a map and other patterns, but it is still a lot of code, since I have to pass the values into the UI components after reading them in the editor’s timerCallback.
Also, having the NormalisableRange inside the Processor seems to be a code smell.
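For concreteness, here is a minimal sketch of the case-1 layout. All names (`ModSlot`, `kNumEffects`, `updateSlot`) are mine, and a plain linear map stands in for `juce::NormalisableRange`: the audio thread converts the normalised min/max, scales by the LFO, and publishes everything through atomics that the editor’s timerCallback can poll.

```cpp
#include <array>
#include <atomic>

// Hypothetical per-effect slot: the UI writes the normalised range,
// the audio thread writes the scaled value, the editor timer polls it.
struct ModSlot
{
    std::atomic<float> minValue { 0.0f };  // normalised [0, 1], set by the UI
    std::atomic<float> maxValue { 1.0f };  // normalised [0, 1], set by the UI
    std::atomic<float> scaled   { 0.0f };  // written by the audio thread
};

constexpr int kNumEffects = 30;            // assumed count from the post
std::array<ModSlot, kNumEffects> slots;

// Audio-thread side: map the normalised min/max into the parameter's real
// range (a plain linear map standing in for NormalisableRange), then scale
// by the current LFO value in [0, 1] and publish the result.
inline float updateSlot (ModSlot& s, float lfo, float rangeStart, float rangeEnd)
{
    const float span = rangeEnd - rangeStart;
    const float lo   = rangeStart + s.minValue.load (std::memory_order_relaxed) * span;
    const float hi   = rangeStart + s.maxValue.load (std::memory_order_relaxed) * span;
    const float v    = lo + lfo * (hi - lo);

    s.scaled.store (v, std::memory_order_relaxed); // picked up in timerCallback
    return v;
}
```

The editor’s timerCallback would then simply loop over `slots` and read the three atomics per effect, which is the “90 atomic values” polling described above.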

Case 2: all computation in MainThread
The MainThread listens for LFO value changes by polling the AudioThread from a timerCallback (the classic pattern). The value is propagated to the desired UI component, which can compute the scaled value and display it directly, but I will then use a ValueTree to send the data back to the AudioThread.

This reduces the code a lot, but now I’m worried about efficiency.

The full process will be:

  • the LFO value is changed
  • the MainThread polls the AudioThread from a timerCallback and gets this value
  • the value is propagated into the desired UI class
  • the UI does the computation to get the scaled value, displays the labels, and sends the scaled value to the AudioThread using a ValueTree
  • the AudioThread will read the new value only at the next (or a later?) call of processBlock

I would like to use this case since it reduces the code, but I’m worried about using it with a lot of parameters.
Also, I don’t even know how to measure the performance to compare these 2 cases.
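A minimal sketch of the case-2 round trip (all names are mine): the message thread does the scaling and publishes the result through an atomic, and processBlock only sees it at the next block boundary, which is where the latency and jitter concerns discussed below come from.

```cpp
#include <atomic>

// Shared slot written by the message thread, read by the audio thread.
std::atomic<float> publishedScaledValue { 0.0f };

// Message-thread side (e.g. inside a juce::Timer callback at ~30 Hz):
// scale the LFO into the parameter's range and publish the result.
void uiTimerTick (float lfo, float lo, float hi)
{
    publishedScaledValue.store (lo + lfo * (hi - lo), std::memory_order_relaxed);
}

// Audio-thread side: read once at the start of each processBlock. Any LFO
// movement between two timer ticks is simply missed, so the effective
// update rate is bounded by the UI timer, not the audio rate.
float beginBlock()
{
    return publishedScaledValue.load (std::memory_order_relaxed);
}
```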

Could somebody share their thoughts on this problem?
Maybe I’m worrying about nothing, but I have the feeling that case 1 won’t scale very well.

Case 2: if the LFO is in the DSP thread, and you compute values in the main thread to use them back in the DSP thread… you will probably get irregular jumps in the results, won’t you?

That’s what I think. I believe you’re right @nicolasdanet (not because I think the same ahah, but because it seems logical indeed :slight_smile: ). Doing this will clearly reduce the code complexity, but it will also introduce some “undefined time response”.
However, I don’t know how to calculate and prove this.

Case 1 is definitely the safest way. Having a NormalisableRange inside the processor and then polling the scaled value was a little disturbing, but I guess at least it’s safe.

You could cache the results, and compute them only if a parameter has changed between.

Note that I haven’t used JUCE for many years :sunglasses: (I’m going back to it), so I can’t really analyse ValueTree / NormalisableRange thread safety and such off the top of my head…

Edit: Oops, after a few minutes of :thinking: I’m pretty confident that my advice is stupid.

Yes I compute the value only when needed. I already have this pattern.

I made 2 images to explain my thoughts in a better way:

Case 1: all computation in AudioThread

Case 2: all computation in MainThread

Dashed lines mean async updates where I cannot predict the time of the update.
As we can see, case 2 is simpler, but there is a little delay between the LFO value arriving on the audio thread and the update in processBlock. I don’t know how to calculate this delay.
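For a rough upper bound on that delay (not an exact figure, since the message thread can also be preempted), you can add one UI timer period to one audio block duration. With assumed numbers of a 30 Hz timer, 44.1 kHz sample rate, and 512-sample blocks:

```cpp
// Rough worst-case round-trip delay for case 2 (all numbers assumed):
// the audio->message hop costs up to one UI timer period, and the returned
// value is only seen at the next processBlock boundary.
constexpr double uiTimerHz  = 30.0;
constexpr double sampleRate = 44100.0;
constexpr double blockSize  = 512.0;

constexpr double timerPeriodMs = 1000.0 / uiTimerHz;              // ~33.3 ms
constexpr double blockMs       = 1000.0 * blockSize / sampleRate; // ~11.6 ms
constexpr double worstCaseMs   = timerPeriodMs + blockMs;         // ~45 ms, before scheduling jitter
```

In other words, on the order of tens of milliseconds, and variable, which matches the “irregular jumps” concern above.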

Hey @DEADBEEF I’m trying to understand the setup of this plugin and getting a little lost. So, the Min and Max values are user adjustable, like maybe displayed on a Slider with two thumbs? And the y-value, the “current scaled value” - is that user adjustable, or is that just for displaying the output of the LFO?

Maybe a screenshot would help illustrate your desired effect?

As for what’s going on behind the scenes, are you using AudioProcessorValueTreeState, possibly linking it to your UI Components with the SliderAttachment class?

I guess that it is the display of the value rescaled by the LFO.
I guess that this value is also used for some computation in the DSP thread.

IMHO that delay is not constant.
The message thread could even be arbitrarily preempted for a while.

Yes that was my sense of it.

And I’ll add that between the “Case 1” and “Case 2” diagrams that @DEADBEEF outlines above, it’s clear to me that Case 1 is the way to go.

If you have some value that is used for DSP computation, that value should be generated in the audio thread. That value needs to be calculated at the sample rate (e.g. 44.1 kHz) so it’s available for DSP work. The value as displayed in the UI thread is only needed at the Editor’s update rate (e.g. 30 Hz).

So in essence, when choosing how to display DSP values in a GUI, you have a downsampling problem. You’re not going to update your GUI at 44.1 kHz, because that would be a processing hog AND at that framerate it’s more data than your eye needs anyway.

Depending on the nature of the value, you have to pick how you’re going to handle that downsampling to 30 Hz. For a peak meter, you can handle the downsampling by displaying the maximum absolute sample value seen in that 30 Hz time frame, ensuring that peaks aren’t missed. For the display of an LFO value (which I think is what @DEADBEEF is trying to do here), you have to resample the LFO’s output at the Editor’s timerCallback rate, remembering that if you’re using a JUCE Timer set at 30 Hz, you’re not really getting a 30 Hz “clock”, only an approximation of that with a healthy level of jitter.
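As an illustration of the peak-meter style of downsampling described above (the names here are mine, not from the post), the audio thread can fold each sample into a per-frame maximum and the UI timer can atomically swap it out, so no peak between two timer callbacks is lost:

```cpp
#include <atomic>
#include <cmath>

// Largest absolute sample seen since the last UI read.
std::atomic<float> framePeak { 0.0f };

// Audio thread: keep the running maximum via a compare-exchange loop.
inline void pushSample (float sample)
{
    const float a = std::abs (sample);
    float current = framePeak.load (std::memory_order_relaxed);

    // On a failed CAS, 'current' is refreshed with the stored value;
    // retry only while our sample is still larger.
    while (a > current
           && ! framePeak.compare_exchange_weak (current, a, std::memory_order_relaxed))
        {}
}

// UI timer (~30 Hz): take the peak and reset it for the next frame.
inline float readAndResetPeak()
{
    return framePeak.exchange (0.0f, std::memory_order_relaxed);
}
```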

Yes, absolutely - you cannot trust the message thread to get anything done on time.


Just realized why this is definitively an unworkable approach: what will happen when your plug-in’s Editor isn’t showing?

I’ve worked at companies that use both approaches. Some notes:

‘All computation in main thread’ - This has the advantage of minimizing the CPU load in the audio thread, which (all else being equal) allows you to run reliably at low latency. It has the disadvantage that parameter changes suffer latency (it takes time for the round trip from audio->message-thread->audio) and jitter (the round-trip time varies in a semi-random fashion due to the non-deterministic waiting times when transferring data between threads).

‘All computation in audio thread’ - This has the advantage of supporting strictly sample-accurate parameter updates (there’s no round trip at all). It also supports audio-rate modulation of parameters. The disadvantage is that it uses more CPU on the audio thread, and that CPU may vary depending on whether parameters are being modulated or not, i.e. you may tend to get more dropouts under heavy automation.

My preference is ‘all computation in audio thread’, i.e. to trade off slightly higher CPU for precise, slick automation, because it gives the end user a higher-fidelity experience. Anecdotal evidence is that plugins offering higher quality at the expense of more CPU are perceived highly by customers, kind of like owning a car with a V8.


Woo thank you so much for all the replies. I didn’t expect that much.

@refusesoftware and @nicolasdanet

Sorry if it wasn’t very clear; I tried to minimize the amount of information for readers, but it seems I left out some details.

You can view this as Serum’s LFO automation. When you link an LFO to a parameter, you see the new min and max and also the current scaled value in this new range, like the following image:

knob automation

The original range of this filter parameter is [20 Hz, 22000 Hz]. Linked to the LFO, the new range becomes something like [11000 Hz, 22000 Hz] and the scaled value (here the white line on the slider) will be in this range. Hope it makes sense.

Also, since the parameters have to be non-automatable, I use a ValueTree listener to send the range from the UI to the audio thread with the great class from @daniel: foleys_AtomicValueAttachment.h

Also thank you @JeffMcClintock for your feedback; you and @refusesoftware clearly close the debate.
I will definitely follow case 1: all computation in the audio thread.

PS: @refusesoftware I feel so dumb that I didn’t think about the case when the editor isn’t showing. It’s clearly a no-go for case 2 (all computation in MainThread) :sweat_smile:

So you have a number of effects with their min/max and LFO-scaled values. As I see it, it’s easier to let each thread do its own scaling. I would consider using a FIFO for sending parameters to the audio thread: there are a lot of min/max values, and you could save some overhead by precalculating derived values (I wouldn’t scale with NormalisableRange). For the audio->UI flow, I’d send a single, unscaled LFO value. You could just set it on the audio thread at a location accessible from the UI thread.

If you want to ensure that peaks are shown, use an atomic shared location. On the UI Timer callback, read it with exchange (-1.0f);

If the result is -1, just ignore it: it means there have been no updates since the last frame. Then on the audio thread

if (lfoValue == -1.0f || std::abs (newLfoValue - 0.5f) > std::abs (lfoValue - 0.5f))
    lfoValue = newLfoValue;

So every time you poll from the UI thread, you get the value furthest from 0.5 since the last poll.
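Putting the pieces of this pattern together as a self-contained sketch (variable and function names are mine; it assumes a single writer on the audio thread, so the small load/store race in the publish step is accepted, as in the snippet above):

```cpp
#include <atomic>
#include <cmath>

// -1 is the "consumed / no update" sentinel; real LFO values live in [0, 1].
std::atomic<float> lfoShared { -1.0f };

// Audio thread (single writer): publish newLfoValue if the slot is empty
// or the new value is further from the 0.5 centre than the stored one.
inline void publishLfo (float newLfoValue)
{
    const float stored = lfoShared.load (std::memory_order_relaxed);

    if (stored == -1.0f
        || std::abs (newLfoValue - 0.5f) > std::abs (stored - 0.5f))
        lfoShared.store (newLfoValue, std::memory_order_relaxed);
}

// UI timer: take the value and reset the slot; -1 means nothing changed
// since the last frame, so the display can be skipped.
inline float pollLfo()
{
    return lfoShared.exchange (-1.0f, std::memory_order_relaxed);
}
```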