I’m using the APVTS to keep track of the global state and manage parameters. Because the APVTS syncs parameters to an underlying ValueTree, there always seem to be two ways to listen for changes to a given parameter:
making a listener to the parameter, directly
making a listener to the ValueTree property that corresponds to this parameter in the APVTS inner state
Is one of these better for performance, or more idiomatic for some reason? I find it strange to mix the two. For example, SliderAttachment is based on attaching directly to the parameter, but other portions of the code could be checking for parameter changes via the ValueTree. I find it strange not to have a single source of truth.
Indeed, the observer pattern has advantages, but also disadvantages.
The AudioProcessorParameter API is meant to interface the plugin with the host.
It wraps the value in an atomic, so it is safe to read it from any thread.
It also adds the beginChangeGesture/endChangeGesture so the GUI can communicate if the parameter is currently controlled by the user (and might have to be recorded by the host).
The ValueTree is not meant to be read in realtime, because it is updated from a timer on the message thread. If you use this in your DSP, you might get delayed updates, especially in offline bounce situations.
The main advantage is the ability to serialise and deserialise in set/getStateInformation, and to use it for non-automatable state that is not time critical.
The APVTS addParameterListener() has the downside that it is unknown from which thread it will be called. It can be the GUI thread, the audio thread, or even a bespoke automation thread if the host decides so.
I highly recommend keeping a typed pointer and reading/copying the value at the beginning of processBlock.
I’m using this one most of the time, especially when there are a lot of parameters. It has some disadvantages, like the calling-thread issue mentioned above, but also that the listeners don’t get called when the value is set to the same value it already had.
Looking forward to sample-accurate parameter automation, I have some hope that the listeners will still work, or that I can forward changes to my listener classes somehow.
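As an aside, the “typed pointer, read/copy at the top of processBlock” recommendation could be sketched roughly like this. A plain std::atomic&lt;float&gt; stands in for the std::atomic&lt;float&gt;* that AudioProcessorValueTreeState::getRawParameterValue() returns in JUCE; everything else is simplified, non-JUCE stand-in code:

```cpp
#include <atomic>
#include <cassert>
#include <vector>

// Stand-in for the std::atomic<float>* that
// apvts.getRawParameterValue ("gain") would return in a real plugin.
std::atomic<float> gainParam { 0.5f };

// Copy the parameter once at the top of the block, then use the plain
// local copy for every sample, so the value cannot change mid-block.
void processBlock (std::vector<float>& buffer)
{
    const float gain = gainParam.load (std::memory_order_relaxed);

    for (auto& sample : buffer)
        sample *= gain;
}
```

In a real plugin the pointer would be fetched once in the processor’s constructor and stored as a member, so the lookup never happens on the audio thread.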
Ok I see, thank you for all these details! I do keep references to my parameters in the relevant structures for easy access in the processBlock.
One area where I was having more doubts was for things like GUIs. In a GUI app (i.e. not a plugin) I would just write components that keep a reference to some subtrees (or properties) of the base ValueTree of my app (which holds all state). Then I would register my components as listeners to these subtrees to sync the UI to the internal state.
In a plugin, if the GUI elements are related to the plugin parameters, there are some neat classes like SliderAttachment which suggest that the UI can be synced to the parameters directly rather than to the ValueTree. From your explanation, it seems I should favour this when possible, because the APVTS ValueTree will always be a bit “late”.
I guess my only issue with this is that the UI code might look a bit inconsistent in some cases. For example, suppose I have a slider in my GUI that’s not related to any plugin parameter but to some other aspect of my app, which I’ve added to the ValueTree of the APVTS (like the brightness level of the UI or something like that). Then I’m forced to use ValueTree::Listener again for that slider. In this case I might have some Components that use SliderAttachment for some sliders and some other type of slider for others, which seems inelegant. On the other hand, if I base everything on the ValueTree, I can write some kind of AttachedSlider class based on ValueTree::Listener; sometimes the slider will be related to plugin parameters, sometimes to other stuff, and it’s the role of the APVTS to do that part of the syncing. That seems better, but I might be missing something?
I’ve been thinking about the implications of what you mentioned for juce::ValueTree, and one aspect is a bit unclear to me. Until now I’ve been taking inspiration from the DSPModulePluginDemo example from JUCE, which uses many juce::dsp objects in a single plugin. In practice, these DSP objects hold internal member variables for the DSP state. For example, the gain value is stored internally as a float in juce::dsp::Gain, and parameter smoothing is performed internally by the object. As a consequence, it is the responsibility of the developer to check for parameter changes and update these internal values if needed. To achieve this, the example is built as follows:
The main processor is registered as a ValueTree::Listener to the APVTS state. If any change is detected on the state, an atomic flag requiresUpdate is set to true.
Structs are built for each of the DSP processors to wrap references to the actual APVTS parameters, for easy realtime-safe access in processBlock.
Once in processBlock, this atomic flag is checked. If it is true, a set of parameter comparisons are made between the DSP internal values and the APVTS parameter references, to find which parameter has changed and update the DSP blocks before processing.
This code seemed practical to me because:
Parameter values are only queried if some change has been detected, otherwise processBlock directly jumps to DSP code. I guess this can be beneficial since most of the time parameters do not change, and in some cases updating DSP state can be costly (e.g. when updating filter coeffs).
It is realtime safe because elements of the ValueTree are never read from the audio thread.
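As a rough, non-JUCE sketch of that requiresUpdate pattern (the flag name is modelled on the demo; the callback and counter are simplified stand-ins):

```cpp
#include <atomic>
#include <cassert>

// A listener callback sets the flag; processBlock clears it and acts on it.
std::atomic<bool> requiresUpdate { true };   // true so the first block initialises the DSP
int coefficientRecomputations = 0;           // stands in for costly DSP updates

void parameterChangedCallback()              // called by whatever listener fires
{
    requiresUpdate.store (true, std::memory_order_release);
}

void processBlock()
{
    // exchange() reads and clears the flag in one atomic step,
    // so a change arriving between the read and the clear is never lost.
    if (requiresUpdate.exchange (false, std::memory_order_acq_rel))
        ++coefficientRecomputations;         // e.g. compare params, recompute filter coeffs

    // ... run the DSP ...
}
```

The atomic exchange is the important detail: a separate load followed by a store could drop a change that lands between the two.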
However, given what you mentioned, is it possible for an update to be late by one block if the juce::ValueTree from the APVTS has not been synced in time? Or if the juce::ValueTree::Listener has not detected the change in time? That could be quite problematic in bouncing mode with a large block size and sharp automation. What’s the workaround for this? Is it safer to use juce::AudioProcessorValueTreeState::Listener and set the requiresUpdate atomic in the parameterChanged callback? I’m not convinced by this because it seemed to me that this listener just listens to the internal tree. If that’s the case, then is AudioProcessorParameter::Listener really the only object that reacts synchronously to parameter changes? Does that mean I need to register all parameters individually with this type of listener to ensure the correct (synchronous) behaviour?
I’m also using the listeners to update the values in our products. As far as I noticed, the parameter changes are forwarded synchronously and as fast as possible to the listeners. This can be the audio thread or the UI thread.
I’m also using a class that holds the atomic values used in the DSP processing, but I’m updating them directly if possible, even while processing. Theoretically, this would allow sample-accurate automation if the DAW sends the changes from the audio thread. It has also the advantage that you don’t have to find out what value has changed.
But at the moment we don’t have any sample-accurate automation; we don’t have any control over things like this. Because of this, we have to smooth some of our parameter value changes to avoid steps. You can expect small buffer sizes, even for offline rendering.
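A minimal sketch of that “listener writes straight into the DSP atomics” idea, with hypothetical names (paramIndex, DspValues) and plain std::atomic stand-ins instead of real JUCE classes:

```cpp
#include <atomic>
#include <cassert>

// Push model: the listener writes straight into the atomics the DSP
// reads, instead of setting a "something changed" flag.
struct DspValues
{
    std::atomic<float> cutoffHz { 1000.0f };
    std::atomic<float> gainLin  { 1.0f };
};

DspValues dsp;

// Hypothetical listener callback; a real one would receive whatever
// index/value pair the parameter system passes in.
void parameterChanged (int paramIndex, float newValue)
{
    if (paramIndex == 0)
        dsp.cutoffHz.store (newValue, std::memory_order_relaxed);
    else
        dsp.gainLin.store (newValue, std::memory_order_relaxed);
}

float processSample (float input)
{
    // The DSP just reads the current values; no change detection needed,
    // and no bookkeeping about *which* parameter changed.
    return input * dsp.gainLin.load (std::memory_order_relaxed);
}
```

If the host fires the listener from the audio thread before (or even during) the block, the DSP picks the new value up immediately, which is what makes this attractive for an eventual sample-accurate scheme.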
I’m also using a class that holds the atomic values used in the DSP processing, but I’m updating them directly if possible, even while processing.
Do you mean that you are not even checking whether something’s changed in your plugin parameters when you update the DSP internal atomics? Like, if you have a biquad filter, will processBlock recompute the coeffs for every block? I agree this is of course a very safe and hassle-free path to sample-accurate automation (in the event that JUCE allows it one day). I’m also very much interested in sample-accurate automation, but I was hoping there would be a sample-accurate way to check if a parameter has changed. What I was worrying about in my previous post was that some designs would not even be block-accurate, which seems terrible.
Here’s my understanding so far:
juce::ValueTree is not updated synchronously, so registering a juce::ValueTree::Listener to the state tree of the APVTS may not provide callbacks that are called fast enough to set an atomic before the next processBlock. In that sense it is not even block-accurate, at least this seems to be what @daniel was suggesting. Since updates are based on a juce::Timer running on the main thread, audibly delayed DSP updates could be heard if the buffer is large. The worst part is that the presence of these delays would depend on what’s running in the DAW overall (since the main thread is shared amongst all plugins).
Now, on the other hand, from the docs, juce::AudioProcessorParameter::Listener is called synchronously when a parameter changes, so this could actually be used to update an atomic that warns of parameter changes. This would be block-accurate (and even sample-accurate) unless I’m missing something. More precisely, I could register all my parameters with a listener of this type, providing them with a common lambda that just switches a global atomic flag to true. However, if other listeners are registered to the same parameter, it’s unclear to me in which order the callbacks are triggered.
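For reference, juce::AudioProcessorParameter::Listener is an abstract class (with parameterValueChanged() and parameterGestureChanged() callbacks) rather than something that takes a lambda directly, so the “common callback that just flips a global atomic” would look roughly like this; the Listener base below is a simplified stand-in for the JUCE one:

```cpp
#include <atomic>
#include <cassert>

// Simplified stand-in for juce::AudioProcessorParameter::Listener.
struct Listener
{
    virtual ~Listener() = default;
    virtual void parameterValueChanged (int parameterIndex, float newValue) = 0;
    virtual void parameterGestureChanged (int parameterIndex, bool starting) = 0;
};

// One instance of this can be registered with every parameter: its only
// job is to flip a shared flag, which is cheap enough to be safe no
// matter which thread the callback arrives on.
struct SetFlagListener : Listener
{
    explicit SetFlagListener (std::atomic<bool>& f) : flag (f) {}

    void parameterValueChanged (int, float) override
    {
        flag.store (true, std::memory_order_release);   // fast, lock-free
    }

    void parameterGestureChanged (int, bool) override {}

    std::atomic<bool>& flag;
};
```

Keeping the callback down to a single atomic store is what makes it acceptable for a synchronous listener that may be invoked from the audio thread.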
The implementation of juce::AudioProcessorValueTreeState::Listener doesn’t seem to be fully documented. Does it rely only on juce::AudioProcessorParameter::Listener? Several posts like Best practices for AudioProcessorValueTreeState and child components seem to suggest that’s not the case and that these two types of Listeners will actually react differently to parameter changes. This is also suggested by the fact that the docs for parameterValueChanged in juce::AudioProcessorParameter::Listener explicitly warn that the callback code must be fast because it’s called synchronously, while the docs for parameterChanged of juce::AudioProcessorValueTreeState::Listener say nothing of the sort. Does anyone know the definitive answer to this?
I’m still discovering all this so I might make it unnecessarily complicated, but I’m still at early stages of development so I kinda want to find the right design patterns. Right now it seems that the safest option is to create my own structs of parameter references that I can pass around and build a parameter system revolving solely around juce::AudioProcessorParameter::Listener and synchronous calls (no timer). AFAIK, the only timers needed are for UI updates that happen at the end of the chain, e.g. in the parameter attachment helper classes for UI widgets, we do need some async behaviour to avoid updating the UI state from the audio thread in case the parameters were changed from automation.
I will still have the APVTS lying at the root of my plugin for state serialization, but I will not use its helper functions for parameter listening, because they might be implemented with a particular type of compromise in mind and may involve a hidden timer. Again, I might be wrong or overthink all of this, happy to learn more!
That is indeed not the case. The parameter-listener approach will still be block accurate. You have to understand that the processing doesn’t happen in realtime (despite the name). You get the call and return a few ms later, while the block itself actually lasts longer when played back. So if the listener fires at some point inside the currently playing block, that still happens at a different time from when the audio is actually heard.
A sample-accurate automation will use a timestamped mechanism, just like the MIDI buffer. It is unknown how the JUCE API will look; I assume you will have two options: either query the parameter for whether there are parameter changes in that block and when they will occur, so you can compute the changes in your DSP accordingly, or there will be a method like
float parameter->getValue(int samplePos)
similar to the SmoothedValue, where you simply ask for the next value in a loop.
The sources are out there, so anybody can check.
The parameter changes will happen either from the host, which usually does that before calling processBlock (regardless of whether it is the audio thread or a bespoke automation thread), or, if the change comes from the user via the message thread, it can happen at any time, even during processBlock. That’s the reason why an atomic is needed.
If your listener callback is simply changing the atomic there is no problem. But if your listener will change more complex data you need a strategy to do it in a thread safe manner, for instance a FIFO or just a swap buffer that is then fetched on the next processBlock start.
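A very reduced sketch of the swap-buffer idea, assuming a single writer and that the reader copies the data once at the start of each block (all names here are made up). Note that this sketch still has a narrow race if two publishes land during one reader copy; a production version would more likely use a proper FIFO (e.g. something built on juce::AbstractFifo) or a triple buffer:

```cpp
#include <atomic>
#include <cassert>

// Two pre-allocated coefficient sets: the message/automation side fills
// the spare one, then publishes it with an atomic pointer swap, so the
// audio thread never sees a half-written set.
struct Coeffs { float a0, a1, b1; };

Coeffs buffers[2];
std::atomic<Coeffs*> published { &buffers[0] };

void publishNewCoeffs (const Coeffs& c)   // listener / message thread
{
    Coeffs* spare = (published.load (std::memory_order_relaxed) == &buffers[0])
                        ? &buffers[1] : &buffers[0];
    *spare = c;                           // write while not yet visible
    published.store (spare, std::memory_order_release);
}

Coeffs fetchForBlock()                    // start of processBlock
{
    // Copy the whole set once; the rest of the block uses the local copy.
    return *published.load (std::memory_order_acquire);
}
```

The key property is that the pointer swap is the only synchronisation point, so the audio thread never blocks and never observes a partially updated coefficient set.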
You have to understand that the processing doesn’t happen in realtime (despite the name). You get the call and return a few ms later, while the block actually lasts longer.
I’m not sure I understand completely. I guess I’ve been confusing “synchronous” and “realtime”. Suppose the DAW modifies the parameter value from a dedicated automation thread (whenever that happens is the responsibility of the DAW). Suppose also that this parameter has been registered with an AudioProcessorParameter::Listener providing some callback. What I’m expecting is the following sequence of actions:
1. the DAW swaps the atomic wrapped by the parameter
2. the callback from the listener is called immediately after
In particular, I expect that nothing can happen between 1. and 2. on the corresponding thread. If there are multiple listeners with multiple callbacks, I expect that they are executed sequentially in the order they were added to the parameter object (AFAIK this is not mentioned in the docs, but that would be the most obvious choice).
Is this what happens? If so, I understand that the contents of the callback must execute fast, and that it’s my responsibility to set up asynchronous behaviour if needed otherwise. Also, if that is actually what happens, I don’t understand how this is not realtime from the POV of the automation thread, i.e. there are no timers involved in that sequence of actions. If it’s not what happens, I don’t understand what synchronous means.
The sources are out there, so anybody can check.
Indeed, I had looked too quickly and thought it was not there, but in the end the implementation seems to rely on some ParameterAdapter. This thing wraps the parameter and keeps its own list of listeners to be called. It inherits from AudioProcessorParameter::Listener so they should have the same (synchronous) behaviour. In that case I still find it odd that the docs don’t give an equivalent warning for the two.
Coming back to my very initial question, all of this seems to point to the following design for me:
for anything that requires knowing the most recent state of the param (for example: setting a requiresUpdate atomic that processBlock can rely on), use references to parameters and add either some AudioProcessorValueTreeState::Listener or AudioProcessorParameter::Listener to them. The corresponding callbacks will be executed as soon as the DAW makes the modification.
for anything else that can wait to be performed on the message thread, use the apvts.state ValueTree directly. It might be updated a bit late, but at least this avoids setting up separate timers and async updaters for every parameter, in the event that a parameter change was made on the audio thread. Effectively the APVTS acts as a wall between the message thread and any other thread, which is actually convenient, because that way there is only a single timer in the app, instead of one per parameter when using stuff like SliderAttachment all over the place. This seems ideal, but the existence of these attachments and the fact that many people still use them leaves me a bit doubtful.
I thought there was no timer because the calling thread can be different threads. I had in mind that the UI has the timers in the attachments. I need to have a look at the code again…
There is also no timer when you call setValueNotifyingHost(). This immediately calls all listeners.
We are using this solution in all our plugins and never had any delays or problems with automation so far.
I still hope there will be a listener solution for real-time automation. But you are probably right: parameters may be passed with MIDI data and timestamps in the future. But it will be easy to connect them to listeners with almost no code.
I think there is a misunderstanding. I wasn’t saying there is a timer. I tried (unsuccessfully) to explain the two different time domains of the playback time (presentation time) and the processing time which happens in short bursts for the whole block so the following plugins get a chance to also process the result of your plugin.
The only Timer I am aware of is when the APVTS copies regularly the atomics from the Parameters to the non-realtime ValueTree.
I’m definitely confusing different notions here because all of this is new to me.
The way I’m understanding it, processBlock might execute very quickly (e.g. if the DSP is simple), but nevertheless the DAW will call it only once per block. So if an important requiresUpdate atomic has not been set before some processBlock, the audible delay will be one block long, which can be large (I’m frequently producing with a buffer of 2048 samples), independent of the actual processing time. If that’s not the case, I might’ve missed an important notion.
With all that said, I might just be missing the simplest design here, which is to always compare the new parameter values against the DSP internals, for every processBlock, and update them only if they’re different. I wanted to avoid even the atomic comparisons, but maybe that’s not so costly to do once per block and is much simpler than setting an atomic via a Listener.
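That compare-per-block design could be sketched like this (hypothetical names, a plain std::atomic instead of a JUCE parameter):

```cpp
#include <atomic>
#include <cassert>

// One atomic per parameter, compared against the DSP's cached copy once
// per block; only a mismatch triggers the costly update.
std::atomic<float> cutoffParam { 1000.0f };   // written by the host/UI side

struct Filter
{
    float cachedCutoff = 1000.0f;
    int   recomputes   = 0;                   // stands in for coeff recomputation

    void updateIfNeeded()                     // call at the top of processBlock
    {
        const float current = cutoffParam.load (std::memory_order_relaxed);

        if (current != cachedCutoff)          // one compare per block
        {
            cachedCutoff = current;
            ++recomputes;                     // e.g. recompute biquad coeffs here
        }
    }
};
```

One atomic load and one float compare per parameter per block is cheap; the trade-off versus the listener-sets-a-flag design is simplicity against having to touch every parameter every block.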
Yes, I guess that’s fine in most cases. I’m just trying not to spread a design pattern all over my code that would allow these delays to appear. If I can avoid a 43 ms delay within my own code, I’m more than happy! Audio devs discuss realtime-safety a lot because the worst thing that can happen is to block the audio thread, but as a producer my frustration with some plugins has always been related to things being “slightly off” in a way I can’t really put my finger on. Now that I’ve started programming, I’m getting an understanding of the design choices that could lead to that.
My original motivation for this thread was to understand the implications of having two structures representing the same data when relying on the APVTS, and therefore two types of Listeners whose callbacks may not be called in the same way. I didn’t know which consistent choices to make. I’m happy I asked, because your answers revealed that the code in DSPModulePluginDemo is not the safest option for a large app. This can be fixed super easily, but none of that was clear to me at first.