Is polling a lot of parameters from the AudioThread a good strategy?

Hi!

I’m facing a dilemma regarding how to share data between UI and Audio threads.

I totally understand that there’s no perfect solution and that it’s just a matter of choices and trade-offs.

I’ve read almost all the topics about this on the JUCE forum, and the great ADC videos too. However, I still can’t make a decision; the point of this post is to share ideas about it, and maybe it will help someone facing the same problem.

I’ve written a plugin with multiple effects, plus an LFO that can modulate them.
Each parameter’s UI lets the user set a min/max modulation range, and it needs to display that range plus the current scaled value computed from the LFO value (like Serum with its modulation destinations).

I have more than 30 effect UIs where I need to display min, max, and the LFO-scaled value.

I’m torn between two cases:

Case 1: all computation in the AudioThread
The MainThread sliders only send min/max values in [0, 1] to the AudioThread.
The AudioThread listens for them and converts the [0, 1] values to the desired effect range (with NormalisableRange); it also listens to the LFO parameter and uses it to scale the effect value before applying it.

The full process will be:

  • the UI changes the min/max values and sends them to the AudioThread using a ValueTree
  • the AudioThread gets the normalised min/max and stores them as the real min/max range
  • the AudioThread gets the changed LFO value and scales the parameters
  • the MainThread polls min/max/scaledValue and propagates them to the related UI components to display the values on labels (see the sketch after this list)
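
To make that polling step concrete, here is roughly what I have in mind (just a sketch with made-up names; EffectParamDisplay and the labels are not my real classes):

// Published by the AudioThread, polled by the MainThread.
struct EffectParamDisplay
{
    std::atomic<float> minValue    { 0.0f };
    std::atomic<float> maxValue    { 1.0f };
    std::atomic<float> scaledValue { 0.0f };   // min/max range applied to the LFO
};

// In the editor, on the MainThread (e.g. at 30 Hz):
void timerCallback() override
{
    minLabel  .setText (juce::String (display.minValue.load()),    juce::dontSendNotification);
    maxLabel  .setText (juce::String (display.maxValue.load()),    juce::dontSendNotification);
    valueLabel.setText (juce::String (display.scaledValue.load()), juce::dontSendNotification);
}

Multiply that by 30+ effects and you can see why it feels like a lot of plumbing.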

Issue:
I will need to send a lot of parameter min/max/scaledValue data to the MainThread (around 30 * 3 = 90 atomic values).
I know how to handle that with a map or other patterns, but it’s still a lot of code, since I have to pass the values into the UI components after reading them in the editor’s timerCallback.
Also, having the NormalisableRange inside the Processor seems like a code smell.

Case 2: all computation in the MainThread
The MainThread listens for LFO value changes using a timerCallback that polls the AudioThread (the classic pattern). The value is propagated to the desired UI component, which can compute the scaled value and display it directly, but I then use a ValueTree to send the result back to the AudioThread.

This reduces the code a lot, but now I’m worried about efficiency.

The full process will be:

  • the LFO value changes
  • the MainThread polls the AudioThread in a timerCallback and gets this value
  • the value is propagated to the desired UI class
  • the UI does the computation to get the scaled value, updates the labels, and sends the scaled value to the AudioThread using a ValueTree (see the sketch after this list)
  • the AudioThread reads the new value only at the next (or a later?) call of processBlock
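
For the UI step, I picture something like this (a sketch; lfoAtomic, displayRange and the “scaledValue” property are placeholder names):

// MainThread, in the editor's timerCallback:
const float lfo = processor.lfoAtomic.load();              // polled from the AudioThread

const float scaled = displayRange.convertFrom0to1 (lfo);   // scaling done on the MainThread
valueLabel.setText (juce::String (scaled), juce::dontSendNotification);

tree.setProperty ("scaledValue", scaled, nullptr);         // back to the AudioThread via the ValueTree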

I would like to use this approach since it reduces the code, but I’m worried about using it with a lot of parameters.
I also don’t know how to benchmark the two cases against each other.

Could somebody share their thoughts on this problem?
Maybe I’m worrying for nothing, but I have the feeling that Case 1 won’t scale very well.

Case 2: if the LFO is in the DSP thread, and you compute values in the main thread to use them back in the DSP thread… you will probably get irregular jumps in the results, won’t you?

That’s what I think too. I believe you’re right @nicolasdanet (not just because I think the same ahah, but because it seems logical indeed :slight_smile: ). Doing this will clearly reduce the code complexity, but it also introduces an “undefined time response”.
However, I don’t know how to calculate and prove this.

Case 1 is definitely the safest way. Having a NormalisableRange inside the processor and then polling the scaled value felt a little off, but I guess at least it’s safe.

You could cache the results and recompute them only when a parameter has changed in between.

Note that I haven’t used JUCE for many years :sunglasses: (I’m just getting back to it), so I can’t really analyse ValueTree / NormalisableRange thread safety and such off the top of my head…

Edit: Oops, after a few minutes of :thinking: I’m pretty confident that my advice is stupid.

Yes, I compute the values only when needed; I already use that pattern.

I made 2 images to explain my thinking more clearly:

Case 1: all computation in AudioThread

Case 2: all computation in MainThread

Dashed lines mean async updates where I cannot predict the update time.
As we can see, Case 2 is simpler, but there is a small delay between the LFO value changing and the updated result being used in processBlock. I don’t know how to calculate this delay.

Hey @DEADBEEF I’m trying to understand the setup of this plugin and getting a little lost. So, the Min and Max values are user adjustable, like maybe displayed on a Slider with two thumbs? And the y-value, the “current scaled value” - is that user adjustable, or is that just for displaying the output of the LFO?

Maybe a screenshot would help illustrate your desired effect?

As for what’s going on behind the scenes, are you using AudioProcessorValueTreeState, possibly linking it to your UI Components with the SliderAttachment class?

I guess it’s the display of the value rescaled by the LFO.
I guess this value is also used for some computation in the DSP thread.

IMHO that delay is not constant.
The message thread could even be arbitrarily preempted for a while.

Yes that was my sense of it.

And I’ll add that between the “Case 1” and “Case 2” diagrams that @DEADBEEF outlines above, it’s clear to me that Case 1 is the way to go.

If you have some value that is used for DSP computation, that value should be generated in the audio thread. That value needs to be calculated at the sample rate (e.g. 44.1 kHz), so it’s available for DSP work. The value as displayed in the UI thread is only needed at the Editor’s update rate (e.g. 30 Hz).

So in essence, when choosing how to display DSP values in a GUI, you have a downsampling problem. You’re not going to update your GUI at 44.1 kHz, because that would be a processing hog, AND at that framerate it’s more data than your eye needs anyway.

Depending on the nature of the value, you have to pick how you’re going to handle that downsampling to 30 Hz. For a peak meter, you can handle the downsampling by displaying the maximum absolute sample value seen in that 30 Hz time frame, ensuring that peaks aren’t missed (see the sketch below). For the display of an LFO value (which I think is what @DEADBEEF is trying to do here), you have to resample the LFO’s output at the Editor’s timerCallback rate – remembering that if you’re using a JUCE Timer set at 30 Hz, you’re not really getting a 30 Hz “clock”, only an approximation of it with a healthy level of jitter.
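
To sketch the peak-meter case (assuming a single std::atomic<float> shared between the threads; trackPeak and readAndResetPeak are names I’ve made up here):

std::atomic<float> peak { 0.0f };   // max absolute sample since the last GUI read

// Audio thread: fold each block into the running maximum.
void trackPeak (const float* samples, int numSamples)
{
    float blockMax = 0.0f;
    for (int i = 0; i < numSamples; ++i)
        blockMax = std::max (blockMax, std::abs (samples[i]));

    // lock-free max: retry until we win or someone stored a bigger value
    float current = peak.load();
    while (blockMax > current && ! peak.compare_exchange_weak (current, blockMax)) {}
}

// GUI thread, in the ~30 Hz timerCallback: consume and reset.
float readAndResetPeak()   { return peak.exchange (0.0f); }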

Yes, absolutely - you cannot trust the message thread to get anything done on time.


Just realized why Case 2 is definitely an unworkable approach: what will happen when your plug-in’s Editor isn’t showing?

I’ve worked at companies that use both approaches. Some notes:

‘All computation in main thread’ - This has the advantage of minimizing the CPU load in the audio thread, which (all else being equal) allows you to run reliably at low latency. It has the disadvantage that parameter changes suffer latency (it takes time for the ‘round trip’ from audio -> message thread -> audio) and jitter (the round-trip time varies in a semi-random fashion due to the non-deterministic waiting times when transferring data between threads).

‘All computation in audio thread’ - This has the advantage of supporting strictly sample-accurate parameter updates (there’s no round trip at all). It also supports audio-rate modulation of parameters. The disadvantage is that it uses more CPU on the audio thread, and that CPU usage may vary depending on whether parameters are being modulated or not, i.e. you may tend to get more dropouts under heavy automation.

My preference is ‘All computation in audio thread’, i.e. trading off (slightly) higher CPU for precise, slick automation, because it gives the end user a higher-fidelity experience. Anecdotal evidence is that plugins which offer higher quality at the expense of more CPU are perceived well by customers, kind of like owning a car with a V8.


Woo, thank you so much for all the replies. I didn’t expect this many.

@refusesoftware and @nicolasdanet

Sorry if it wasn’t very clear; I tried to keep the amount of information down for readers, but it seems I left out some details.

You can view this as Serum-style LFO automation. When you link an LFO to a parameter, you see the new min and max, and also the current scaled value within this new range, like in the following image:

knob automation

The original range of this filter parameter is [20 Hz, 22000 Hz]. Linked to the LFO, the new range becomes something like [11000 Hz, 22000 Hz], and the scaled value (here the white line on the slider) stays within this range. Hope that makes sense.

Also, since these parameters have to be non-automatable, I use a ValueTree listener to send the range from the UI to the audio thread with @daniel’s great class: foleys_AtomicValueAttachment.h

Also, thank you @JeffMcClintock for your feedback; you and @refusesoftware have clearly settled the debate.
I will definitely go with Case 1: all computation in the audio thread.

PS: @refusesoftware I feel so dumb that I didn’t think about the case when the editor isn’t showing. It’s clearly a no-go for Case 2 (all computation in the MainThread) :sweat_smile:

So you have a number of effects with their min/max and LFO-scaled values. As I see it, it’s easier to let each thread do its own scaling. I would consider using a fifo for sending parameters to the audio thread - there are a lot of min/max values, and you could save some overhead by precalculating derived values (I wouldn’t scale with NormalisableRange; see the sketch below). For the audio->ui flow, I’d send a single, unscaled lfo value. You could just set it on the audio thread at a location accessible from the ui thread.
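
Roughly what I mean by a fifo (a sketch; ParamChange and applyChange are made up, and it assumes a single producer and a single consumer):

struct ParamChange { int paramIndex; float minValue, maxValue; };

juce::AbstractFifo fifo { 64 };
std::array<ParamChange, 64> slots;

// UI thread: push one change.
void pushChange (const ParamChange& change)
{
    int start1, size1, start2, size2;
    fifo.prepareToWrite (1, start1, size1, start2, size2);
    if (size1 > 0)
        slots[(size_t) start1] = change;
    fifo.finishedWrite (size1);
}

// Audio thread: drain at the top of processBlock.
void applyPendingChanges()
{
    int start1, size1, start2, size2;
    fifo.prepareToRead (fifo.getNumReady(), start1, size1, start2, size2);

    for (int i = 0; i < size1; ++i)  applyChange (slots[(size_t) (start1 + i)]);
    for (int i = 0; i < size2; ++i)  applyChange (slots[(size_t) (start2 + i)]);

    fifo.finishedRead (size1 + size2);
}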

If you want to ensure that peaks are shown, use an atomic shared location. On the UI Timer callback, read it with

// atomically grab the latest value and reset the slot to "consumed"
const float polled = lfoValue.exchange (-1.0f);

If the result is -1, just ignore it - it means there have been no updates since the last frame. Then on the audio thread:

// keep whichever value is furthest from the centre (0.5) since the last poll
const float current = lfoValue.load();
if (current == -1.0f || std::abs (newLfoValue - 0.5f) > std::abs (current - 0.5f))
    lfoValue.store (newLfoValue);

So every time you poll from the ui thread, you get the value furthest from 0.5 since the last poll.
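
For completeness, the shared location here would be something like:

std::atomic<float> lfoValue { -1.0f };   // -1 = no update since the last poll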

Thanks @kamedin for your reply, and sorry for my late one. As you can guess, I’m currently jumping between multiple projects.

The issue I see with doing the scaling in each thread is how to synchronise the NormalisableRange. If the skewFactor is modified, I’d have to be careful to do the same on the other thread. I don’t think it’s possible to share a NormalisableRange using a ValueTree (in order to have it in one place only).

Also, you mentioned that you don’t scale with NormalisableRange. What do you use on your side?

Thanks for the tips regarding the interesting pattern to avoid unnecessary updates :slight_smile:

For modulation I’d also opt for Case 1.
But not for sample-accurate parameter updates: on a call to processBlock you get a whole block to process. Of course you can pull your parameters every sample, but does that mean you get the parameter value for the exact sample you are processing? No, because the timing of pulling the parameter is not related to the sample number. For sample-accurate parameters we’d need some kind of timestamp.

Thanks @gustav-scholda for your input.
I understand, and it’s an important point. Also, the min/max changes won’t occur often. I wasn’t clear about that, but when I said:

Each parameter’s UI lets the user set a min/max modulation range, and it needs to display that range plus the current scaled value computed from the LFO value (like Serum with its modulation destinations)

I meant that, like this kind of parameter in Serum, you just set the min/max and it doesn’t move much, whereas the LFO value changes very often.

So I guess it won’t be an issue if just this one parameter isn’t sample accurate :thinking:

Do you have an example implementation of “for sample-accurate parameters we’d need some kind of timestamp”? Are you talking about atomic values or something related?

For modulation (for example with an LFO) you can work sample-accurately, because you can advance your LFO sample by sample as you move through the buffer (a quick sketch below).
What I meant was sample-accurate parameters from the host. It doesn’t make sense to poll the parameters every sample, because you don’t know when each change actually happened; the polling timing is tied to the processing speed, not to the sample position.
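
A quick sketch of what I mean by advancing the LFO per sample (lfoPhase and lfoFrequencyHz would be members; the gain modulation is just an example):

void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
    const float phaseInc = juce::MathConstants<float>::twoPi
                             * lfoFrequencyHz / (float) getSampleRate();
    auto* data = buffer.getWritePointer (0);   // mono for brevity

    for (int i = 0; i < buffer.getNumSamples(); ++i)
    {
        const float lfo = 0.5f + 0.5f * std::sin (lfoPhase);   // sample-accurate, in [0, 1]
        lfoPhase = std::fmod (lfoPhase + phaseInc, juce::MathConstants<float>::twoPi);
        data[i] *= lfo;                                        // e.g. modulate the gain
    }
}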


Just meant that it’s probably too much infrastructure for simple linear maps.

This makes it more problematic, because for the audio thread to safely access the range parameters (start, end, interval, skew) set from the UI thread, you’d need them to be atomic. I don’t think you can do that with NormalisableRange.

I think it’s easier to let each thread do the scaling, because it limits the audio->ui synchronization to the lfo value, and the overhead of 30 scalings per second seems tiny compared with the audio-thread scaling. If there are really a lot of parameters and the UI does get sluggish it may matter, but then synchronizing all the scaled values won’t be free either, especially if you want to ensure that peaks are shown. Anyway, I’m not seeing the full picture; it was an intuitive guess - I’d need to consider how the modulation sources are connected and how the whole UI update routine works to see what’s more effective.

I think I see what you mean. Indeed, it’s an important point.
The only parameters coming from the UI are the min and max, and those aren’t supposed to change often. The LFO value in the image, however, comes from the audio thread and is sample accurate (which is what you were talking about, I guess).
You know, like the parameters in a synth like Serum:
knob
The min/max here is the blue line, and it isn’t changed often; usually you set the min/max and then start playback. Of course the algorithm can handle changes during playback, but missing a process block is a reasonable trade-off.

Oh yeah, I see. Sorry I didn’t go into detail about it (as you can see, it’s only a small part of a much bigger picture), but unfortunately all these parameters have different skews or scales (log for a filter, for example).
The simplest solution I found is to have a [0, 1] linear slider in the UI, so the AudioThread receives linear values in [0, 1] and then uses a NormalisableRange with the appropriate skew to convert to the right value.
I have a custom range that encapsulates the full range plus a dynamic range. When I receive a min/max in [0, 1], this custom range updates the dynamic range inside. If the full range is [0, 100] and the new max is 0.5, the dynamic range becomes [0, 50].
Then I convert the LFO value (which is also in [0, 1]) with the dynamic range: dynamicRange.convertFrom0to1 (lfoValue);
For example, if the LFO value is also 0.5 and the parameter is linear, the final value is 25 (dynamic range [0, 50], then 0.5 within this range). A rough sketch follows.
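
Roughly, the custom range looks like this (a simplified sketch; the real class does more):

struct DynamicRange
{
    juce::NormalisableRange<float> fullRange;      // e.g. { 20.0f, 22000.0f } with a skew
    juce::NormalisableRange<float> dynamicRange;   // the current modulation window

    // min/max arrive from the UI as normalised [0, 1] values
    void setNormalisedMinMax (float min0to1, float max0to1)
    {
        dynamicRange = { fullRange.convertFrom0to1 (min0to1),
                         fullRange.convertFrom0to1 (max0to1),
                         fullRange.interval,
                         fullRange.skew };
    }

    // the LFO value is also in [0, 1]; the result lands in the dynamic range
    float scale (float lfoValue) const   { return dynamicRange.convertFrom0to1 (lfoValue); }
};

With a full range of [0, 100] and max = 0.5, setNormalisedMinMax gives a dynamic range of [0, 50], and scale (0.5f) returns 25.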

Sorry for the long message, but now you can see why I didn’t go deeper at first; I wanted to avoid people getting lost :sweat_smile:

My worry about doing the scaling in both threads is that one thread’s scaling could differ from the other’s. Debugging that in an environment with a lot of parameters could be a PITA; it could even reach production without being caught (even if the app is tested, let’s not forget that the tests are written by us, and we could have issues in our tests too :sweat_smile:).

Yup, it is complicated heh. You haven’t even got to modulator assignment, and there’s the fixed value for when there’s no modulation. I wouldn’t know what to suggest without trying some options and seeing how they behave.

It could for a short time, until the other thread updates, but it doesn’t matter - if parameters go UI->audio and modulators audio->UI, there won’t be any permanent inconsistency, just an asynchronous relation, which is the only possible one. I think I’d try a bunch of approaches - maybe polling all scaled values from the UI isn’t so complicated, but if it turns out to be, there’s this other option, which I suspect would be simpler at least.