Reading/writing state from the audio thread in a ValueTree-based app


I’m working on an open source MIDI sequencer which runs on a Raspberry Pi and uses Ableton’s Push 2 as its UI. The source code repository (with rather hacky code so far) is here: GitHub - ffont/shepherd

After some initial development and getting some basic features working, I’m now refactoring the code to base the app on state stored in a ValueTree, following the techniques explained by @dave96 in his ADC 2017 talk (David Rowland - Using JUCE value trees and modern C++ to build large scale applications (ADC'17) - YouTube). The concept of storing the state in a ValueTree sounds great to me, and it’s also interesting because I can then sync the ValueTree with an external application that handles the actual UI*. I’ve already done some work on this refactoring using juce::CachedValue and the drow::ValueTreeObjectList as explained in the talk.

What I understand from Dave’s talk (and from many posts here) is that I should not read/write the ValueTree directly from the audio thread. I’m trying to learn about thread safety and real-time safety, but I’m no expert C++ dev, so this is becoming a bit confusing. Here are my main questions, which I hope someone can help me clarify:

  1. When in getNextAudioBlock (the audio thread), how should I go about reading the state? My state contains a list of tracks, each with a list of MIDI clips containing MIDI notes. Should I add “atomic” members to my “proxy” Track and Clip classes so that when properties are updated in the ValueTree (e.g., clip length is updated via the juce::CachedValue<double> length property) these atomic members are also updated and can later be read from the audio thread?

  2. If tracks/clips/notes are added or removed while the audio thread is working, this could cause conflicts. How can I prevent the state from being changed at all while the audio thread is doing something relevant? The Track, Clip, and Note objects are automatically synced with the ValueTree using drow::ValueTreeObjectList, so I guess the best solution would be to delay any writes to the ValueTree on the message thread until a moment when the audio thread is not reading it, but I don’t know how to achieve that in a nice way…

  3. How can I update the state from the getNextAudioBlock function (e.g., to update the current playhead position of a clip)? Maybe the answer is that I should design my application so I don’t need to write to the state from there? I could do something like that by removing some things from the ValueTree state (so my audio thread does not depend on them) and using plain object members instead. Then, if I need that data synchronized with the UI (e.g., to display the clip playhead position), I could use a timer on the message thread to copy these clip properties into the ValueTree (i.e., add the playhead position to the ValueTree) so they get reflected in the UI.

As you can see I have some ideas about how to proceed, but because I’m not 100% sure I understand all the concepts I’d like to get some feedback before continuing with this. I’m not planning to add more threads in my app besides the message thread and the audio thread, so hopefully the solution is not too complex?

Thanks a lot in advance for your help!

*So far I’m doing something like that in a very hacky way, but I expect to do it much better using ValueTree and change listeners.

#1 sounds quite close to an idea I have used before, which worked pretty well. Write a templated class that’s very similar to juce::CachedValue, which contains e.g. a std::atomic<T> instead of a T. The valueTreePropertyChanged callback sets that atomic, and later in your process callback you can read from it.
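To make that concrete, here’s a minimal sketch of such a class (the class name and members are made up for illustration; in a real app, set() would be called from your ValueTree::Listener::valueTreePropertyChanged override on the message thread):

```cpp
#include <atomic>

// Hypothetical "atomic cached value": the message thread writes via set()
// (e.g. from valueTreePropertyChanged), the audio thread reads via get()
// without locking. Only suitable for types where std::atomic<T> is lock-free
// (double, int, bool, ...).
template <typename T>
class AtomicCachedValue
{
public:
    explicit AtomicCachedValue (T initial = T()) : value (initial) {}

    // Message thread: mirror the new ValueTree property value.
    void set (T newValue) noexcept { value.store (newValue, std::memory_order_release); }

    // Audio thread: read the latest published value.
    T get() const noexcept { return value.load (std::memory_order_acquire); }

private:
    std::atomic<T> value;
};
```

Note this only works for small trivially-copyable types; for strings or whole objects you’d need one of the queue/swap techniques below.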

I have since moved on from this technique, using a FIFO queue to send changes - each ValueTree and related object has an id so that the change messages can be linked up at the other end. I like how this method enforces a proper separation between threads - the queue is the only point of contact between threads, instead of having objects that are touched by both threads. But it’s a bit fiddly to get started with.
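A bare-bones version of that queue might look like the following single-producer/single-consumer ring buffer (all names are hypothetical; JUCE’s AbstractFifo can do the index bookkeeping for you, this just shows the shape of the idea):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Each message carries the id of the ValueTree node it refers to, so the
// audio thread can look up the matching playback object on its side.
struct ChangeMessage { int objectId; double newValue; };

// Minimal single-producer (message thread) / single-consumer (audio thread)
// lock-free FIFO. One slot is left empty to distinguish full from empty.
template <std::size_t Capacity>
class SpscFifo
{
public:
    bool push (const ChangeMessage& m)              // message thread only
    {
        const auto w = writeIndex.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                           // full: caller retries later
        buffer[w] = m;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    std::optional<ChangeMessage> pop()              // audio thread only
    {
        const auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return std::nullopt;                    // empty
        auto m = buffer[r];
        readIndex.store ((r + 1) % Capacity, std::memory_order_release);
        return m;
    }

private:
    std::array<ChangeMessage, Capacity> buffer {};
    std::atomic<std::size_t> writeIndex { 0 }, readIndex { 0 };
};
```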

#2 is getting onto a much more difficult area. In general, adding/removing objects from any sort of audio graph in real time is hard. One option is to rebuild the entire object graph on the main thread when a change occurs, then swap it with the ‘active’ one that’s used by the audio graph. The ‘swap’ can be done with an atomic pointer, or a try-lock if you’re careful. This can work great, but you can run into problems if your objects contain ‘ephemeral’ audio-processing state that’s not captured in your ValueTree, e.g. envelopes or fades that might be in-progress when the swap happens. This can be mitigated by ‘matching up’ old objects with new ones, and stealing them when you do your swap.
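As a sketch of the swap idea (types and names invented for illustration): the message thread rebuilds a snapshot of the playback graph and publishes it; the audio thread grabs whatever is current at the start of each block.

```cpp
#include <atomic>
#include <memory>
#include <vector>

struct ClipPlayback { double length = 0.0; };
struct Snapshot { std::vector<ClipPlayback> clips; };

class SnapshotHolder
{
public:
    // Message thread: publish a freshly built snapshot. The old snapshot
    // stays alive until the last reader drops its shared_ptr.
    void publish (std::shared_ptr<Snapshot> newSnapshot)
    {
        std::atomic_store (&current, std::move (newSnapshot));
    }

    // Audio thread: take a reference to the current snapshot for this block.
    std::shared_ptr<Snapshot> acquire() const
    {
        return std::atomic_load (&current);
    }

private:
    std::shared_ptr<Snapshot> current { std::make_shared<Snapshot>() };
};
```

Caveat: the free-function std::atomic_load/atomic_store overloads for shared_ptr are not guaranteed lock-free, so a production version would use a raw atomic pointer plus a deferred-deletion scheme (or a carefully used try-lock, as mentioned above); this is only meant to show the structure.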

A more difficult but more efficient route is to allocate new objects on the main thread, then send pointers to them across on a FIFO queue. Again, you’ll need some id system so that you can find the parent on the other side to attach them to. For removal/deletion of objects you probably want to wrap them in a shared_ptr and keep a reference in a ‘garbage collection’ object before you send them across the queue to the audio thread. There’s more information about this technique in this video.
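The ‘garbage collection’ part can be sketched like this (hypothetical names): before an object is handed to the audio thread, the message thread stores a shared_ptr copy in a graveyard; the audio thread simply drops its copy when done, and a periodic message-thread call (e.g. from a juce::Timer) deletes anything nobody else references any more.

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>
#include <vector>

template <typename T>
class Graveyard
{
public:
    // Message thread: keep the object alive on behalf of the audio thread.
    void retain (std::shared_ptr<T> object) { objects.push_back (std::move (object)); }

    // Message thread only: free objects whose sole remaining owner is us,
    // so the actual deletion never happens on the audio thread.
    void collect()
    {
        objects.erase (std::remove_if (objects.begin(), objects.end(),
                                       [] (const std::shared_ptr<T>& p)
                                       { return p.use_count() == 1; }),
                       objects.end());
    }

    std::size_t size() const { return objects.size(); }

private:
    std::vector<std::shared_ptr<T>> objects;
};
```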

#3 is another tricky thing. If you’re doing the ‘atomic CachedValue’ thing, I think that the only option is to check each atomic looking for changes on a main thread timer. Try not to use a different timer for every value! Use a single timer and scan them all. If you’re using queues instead, this is a bit more obvious - you just do the inverse of the main → audio thread queue.
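The single-timer scan could look something like this (names made up; in JUCE the pollAll call would live in a juce::Timer::timerCallback, and onChange would write into the ValueTree):

```cpp
#include <atomic>
#include <functional>
#include <vector>

// One entry per value the audio thread publishes for the UI.
struct PolledValue
{
    std::atomic<double>* source;            // written by the audio thread
    double lastSeen;                        // message-thread-only bookkeeping
    std::function<void (double)> onChange;  // e.g. copies into the ValueTree
};

// Called periodically on the message thread; fires onChange only for
// values that actually changed since the last scan.
inline void pollAll (std::vector<PolledValue>& values)
{
    for (auto& v : values)
    {
        const double now = v.source->load (std::memory_order_relaxed);
        if (now != v.lastSeen)
        {
            v.lastSeen = now;
            v.onChange (now);
        }
    }
}
```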


Hi @widdershins, thanks a lot for your answer!
I’m watching more videos and reading more about all these issues and about lock-free FIFOs, to see if I can get a better understanding of the big picture and make good decisions. The thing is that it’s sometimes hard to follow talks and discussions about real-time programming and thread safety for someone without much C++ experience, but I’m slowly getting the hang of it.

About your first solution for #1: I guess this is similar to what @dave96 explains in the last slides of his ADC 2017 talk linked above, where he talks about thread safety (slides 54-56 here). I assume that using the tricks he mentions I should be able to read from the audio thread.

However, I should still handle #2 in an intelligent way and make sure that adding/deleting objects doesn’t generate conflicts. I’ll read up on the suggested techniques.

I’m a little less worried about #3 because I believe I can avoid it by designing the app so that I don’t need to update things stored in the state from the audio thread, and only update the ValueTree state with a timer on the message thread. This should work in my case because this information will only be used by the UI and not the audio thread, so I don’t care about some small delays. The FIFO from the audio thread to the message thread might work as well, but I think it will be simpler to just “collect” all the relevant bits of information and add them to the state in the timer callback (e.g., iterate through all clips and copy their playhead positions into the state).

Thanks a lot, I’m sure I’ll get back with more questions after learning a bit more and coding.