Is it OK to call AudioProcessorParameter::beginChangeGesture() on the audio thread?

Is AudioProcessorParameter::beginChangeGesture() designed to be called on the audio thread? Is it OK to do so? It takes a lock and does quite a few other things. Is this lock always going to be uncontended? And if not, how should we do this instead?

Thanks,
Timur

Why would it be called from the audio thread? It corresponds to the user touching a UI control, for "latch automation" purposes; most of the time it's called from slider.mouseDown(). I can't see a use case for doing that on the audio thread…

Are you thinking of meta parameters?

Maybe for plugins like, for example, the Waves Vocal Rider, which writes automation data based on real-time analysis performed while audio is running, even if the GUI is closed?

If you were generating automation data on the audio thread, that might be a use case where it's simpler to call this when and where you need to, rather than "bouncing" back to the main thread, where it's difficult to synchronise with calls to setValueNotifyingHost (which must occur between calls to begin/endChangeGesture).
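
For reference, the required bracketing looks something like this. This uses a stand-in class in place of the real AudioProcessorParameter, purely so the call order can be shown in isolation: the three method names match the JUCE ones, but everything else here (the class, the logging, the ramp function) is made up for illustration.

```cpp
#include <string>
#include <vector>

// Stand-in for juce::AudioProcessorParameter -- NOT the real class.
// It only records the order of calls, to illustrate the contract.
struct ParamStandIn
{
    std::vector<std::string> log;
    float lastValue = 0.0f;

    void beginChangeGesture()           { log.push_back ("begin"); }
    void setValueNotifyingHost (float v){ log.push_back ("set"); lastValue = v; }
    void endChangeGesture()             { log.push_back ("end"); }
};

// A single "latched" automation write: every run of setValueNotifyingHost
// calls must be bracketed by exactly one begin/endChangeGesture pair, so
// the host knows where the automation pass starts and stops.
void writeAutomationRamp (ParamStandIn& p)
{
    p.beginChangeGesture();

    for (float v = 0.0f; v <= 1.0f; v += 0.25f)
        p.setValueNotifyingHost (v);

    p.endChangeGesture();
}
```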

Anyway, the code inside these functions is almost exactly the same as the code in setValueNotifyingHost, so any JUCE plugin that wants to write automation data has to take this lock on the audio thread.

I am not sure if it's possible to write synchronous automation data from the audio thread. Automation data is read at presentation time, so a proper host should compensate for latency. Plus, there is no concept of timestamped GUI changes, so you will only be able to send one value per block.

IIRC VST3 has a concept for this use case, but it is not used in JUCE. The same goes for AAX, which has a clear distinction between execution time and presentation time.

So the hacky way of round-tripping via the message thread and getting a coarse result seems to be the only option, IMHO.
I'm happy to be proven wrong…

One use case for this is MIDI control of plugin parameters.

Wouldn’t this be the responsibility of the host via MIDI-learn?

A possible use case is a sequencer that lives inside a plugin and generates automation data. That sequencer runs on the audio thread because it might also be triggering sounds etc.

What is the recommended way to do this in JUCE? Can we call AudioProcessorParameter::beginChangeGesture() in such an audio-thread sequencer or not? And if not, what should we do instead? Should the sequencer instead call this function asynchronously on the GUI thread?

(with all the caveats that you can't create an async notification on the audio thread, etc., so you instead need to do things like setting an atomic flag or pushing into a lock-free FIFO, and then pick up the automation data on a timer…)
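
A rough sketch of that FIFO approach, assuming a single-producer/single-consumer setup: the ParamFifo class and all names below are mine, not JUCE's (in a real plugin you might use juce::AbstractFifo instead). The audio thread pushes values without ever blocking; a message-thread timer callback would pop them and wrap the resulting setValueNotifyingHost calls in begin/endChangeGesture there.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal lock-free single-producer/single-consumer FIFO sketch.
// One slot is kept empty to distinguish "full" from "empty".
template <size_t Capacity>
class ParamFifo
{
public:
    // Called on the audio thread: never blocks, drops the value when full.
    bool push (float value)
    {
        const auto w = writeIndex.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;

        if (next == readIndex.load (std::memory_order_acquire))
            return false;                       // full: drop rather than wait

        buffer[w] = value;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    // Called on the message thread, e.g. from a Timer callback that then
    // does beginChangeGesture / setValueNotifyingHost / endChangeGesture.
    std::optional<float> pop()
    {
        const auto r = readIndex.load (std::memory_order_relaxed);

        if (r == writeIndex.load (std::memory_order_acquire))
            return std::nullopt;                // empty

        const float value = buffer[r];
        readIndex.store ((r + 1) % Capacity, std::memory_order_release);
        return value;
    }

private:
    std::array<float, Capacity> buffer {};
    std::atomic<size_t> writeIndex { 0 };
    std::atomic<size_t> readIndex  { 0 };
};
```

As noted above, this gives you a coarse result: the timer resolution, not the audio clock, decides when the host sees each value.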


@timur Did you ever decide on a solution to calling beginChangeGesture() on a non-message thread?

I am working on a similar problem, where I have to generate automation data based on the internal state of a plugin, and decided on using a HighResolutionTimer to update the parameters. The data is generated in processBlock on the audio thread, fed into a FIFO queue, and then consumed by the high-resolution timer callback, which updates the parameters.

Unfortunately I seem to have encountered a problem: this works in most DAWs except Nuendo, which seemingly rejects any parameter updates or gestures that don't come from the message thread.

What solution did you ultimately settle on?