As the title describes, I am wondering whether it is better practice to keep a separation between the components and the processor and have all of my custom components store a local copy of the data they represent, or whether they should always just refer to the processor's data and not store any data themselves.
Any chance you could explain a bit more what you meant about threads? I’m trying to understand why having say two copies of some data, one in the UI and one in the processor, would make things more thread safe? You would still need to synchronize the two sets of data to make sure that your data was as ‘un-stale’ as possible right?
Long Answer: It’s very easy to share a single value between threads (especially when the hardware supports atomic reads/writes on that value type). This can lead the novice programmer to believe that sharing more than one value/parameter is not much more complicated.
The reality is that sharing two or more values (correctly) is a whole other ballgame. The reason is that when one thread (e.g. the GUI) updates two values at 'the same time', the audio processing thread may observe those two changes as happening during different timeslices. For example, imagine the GUI updating a plugin's left and right volume parameters at virtually the same time, but, because the audio thread switches buffers in between, one channel's update is actioned one buffer later than the other's. This would result in the audio being louder on one channel than the other for a few ms, which would not be acceptable in a pro-audio application.
I see, thanks for the explanation. So how does having multiple copies of the data solve this problem? At some point one thread will need to update its data from the other thread, and wouldn't this suffer from exactly the same issue - how can we ensure the update will take place inside a timeslice and not across several?
how can we ensure the update will take place inside a timeslice and not across several?
It depends on what you’re doing, but basically comes down to limiting ownership/mutability of the data to a single object/thread at a given moment in time, and/or verifying that a mutation was valid to begin with. Sidenote: this is why Rust’s “borrow checking” is really intriguing since it enforces those concepts at compile time.
So how does having multiple copies of the data solve this problem?
It doesn't; it just changes how race conditions manifest. If you store copies and trigger a race condition, the copy will be outdated but semantically valid. If you referenced the data instead (e.g. through a const*), then the race condition can be seen as a split update, where some data is current and some is outdated - which could be garbage. You solve that with a lock.
At some point one thread will need to update its data from the other thread
Not necessarily. You can pipe updates through well defined constructs like bounded FIFOs, Michael-Scott queues, or other data structures/algorithms used for sending data across threads. That data can be updates themselves, so only one thread ever needs to mutate anything. This is how a lot of my parameter/graph changes are implemented. Very little data is shared between editor/processor at all, mostly just what’s needed to send data between them.
I will say though, if the underlying data is a POD type that can fit in a single register, you can wrap it in std::atomic and probably (do test this) be fine.
The main rule is: You can share data, or you can mutate (update) data, but not BOTH at the same time.
So one way to solve the problem is to have the GUI update a private copy of the data, then arrange to swap the GUI's copy with the audio thread's copy in a safe manner (e.g. switching pointers to the data only at the start of a timeslice). The idea is that the GUI can take its time updating more than one parameter in its private parameter set, then swap/transfer the entire bunch in one 'hit' to the audio thread. Once that data is transferred to the audio thread it must be treated as read-only by the GUI (i.e. shared, but not mutable). Hope that makes sense.
Thanks for the great explanations - they are helping, but I have to admit my brain is not fully comprehending this stuff, and I know that it's crucial to grok this topic, otherwise I will find myself in all kinds of trouble.
Let's say in my GUI there are three buttons which each have a bool value for on/off. The processor will generate a note depending on the state of these buttons. So, in the APVTS's value tree I have three properties to represent the state of these buttons. Now, these values can be changed in two ways:
1. By incoming MIDI events that arrive directly in the midiBuffer on the processor thread.
2. From interaction with the GUI buttons on the UI thread.
In the case of 1. there are no race conditions to worry about, since everything happens on the processor thread, right? The GUI does need to be updated by these changes, but I'm guessing it's not as critical that it happens within a specific timeslice, so it should be enough for the buttons to set their displayed state by referring to the properties in the APVTS's value tree.
In the case of 2. I should not directly update the APVTS properties, since these changes could occur in different timeslices. If I understand correctly, I should keep a copy of the state of these buttons on the UI thread and update that first. Then I should schedule the transfer of the button states to the APVTS properties at the start of the next timeslice, from the processor thread?
Does this sound right? Should the editor own its own value tree separate from the APVTS’s value tree for storing the UI state?
Again, many thanks - I am a novice trying to avoid making the schoolboy errors
Should the editor own its own value tree separate from the APVTS’s value tree for storing the UI state
APVTS takes care of the vast majority of thread safety. The recommended approaches for updating parameters from the UI (widget attachments, etc) all call setParameterNotifyingHost which acquires a lock and guarantees that updates happen safely.
The issues with sharing data between processor/editor come into play when it’s more complicated than simple parameters or you need to update the parameter from the audio thread and the locking is causing you trouble (which it probably won’t, but computers are weird).
I'm not using APVTS parameters though - I'm using the APVTS's value tree, so no attachments or parameters - just making properties in the value tree to store state. This is because I have been informed that the APVTS parameters are not sample accurate so changes to these cannot be relied on to happen at a specific time.
This is because I have been informed that the APVTS parameters are not sample accurate so changes to these cannot be relied on to happen at a specific time.
That's an issue with the JUCE plugin wrappers, not with APVTS. I can think of a way to fix the limitation without drastically changing the abstraction; the only thread-safety concern would be posting a message to the message thread, when a real-time parameter change is present in the audio callback, to update listeners. Since under the hood the only thing changing in the audio callback is the POD type for the parameter, it should be safe to access even if it's a little outdated. Copies might help if you're displaying parameters somewhere, for example smoothing them for a meter or something.
You mean fixing the problem that automation changes are not sample accurate?
The host is calling processBlock, and there is no real time at which you could update parameters while processBlock is executing. It is not as if the method runs for exactly the duration of one buffer - that would be a disaster.
IIRC VST3 has a list of parameter changes that are sample accurate, but it is not forwarded by JUCE. That is indeed a shortcoming of the wrappers. A solution I heard @dave96 propose was to cut down the blocks, so you send small blocks on each parameter change. E.g. if a parameter changes at sample #10, it sends a block of 10 samples, updates the parameters, and sends the rest of the buffer. That way no plugin would have to change.
You mean fixing the problem that automation changes are not sample accurate?
and there is no real time at which you could update parameters while processBlock is executing.
I meant real-time as in parameter updates arrive along with audio data, as in VST3.
cut down the blocks, so you send small blocks on each parameter change
That's more or less what I've been doing in non-JUCE plugins for sample accuracy. The one significant difference is that continuous parameters become targets and need to be smoothed, but I think most people are doing that anyway.
IIRC the only API that doesn't support sample-accurate automation is VST2, and since that's being killed, I think it might make sense to wrap that behavior. But I haven't dug too deeply outside of VST3 to see how the other APIs do it, and VST3 is always wack.
That way no plugin would have to change.
Personally I’d like to have access to the parameter updates in the plugin’s callback rather than hiding it away, but that’s just me.