According to this old thread I should never lock anything in the process callback: “You should never lock ANYTHING in your process callback, not even a mutex! You’ll find many threads on this forum discussing lock-free fifos and other tricks for communicating back to your UI code from the audio thread.”
My problem is that all forum posts I find are quite complicated and there’s no tutorial on this. It’s hard to get started.
How do I actually communicate parameters and references lock-free:
From processBlock to the outside?
From the outside to processBlock?
I gather from this tutorial that I can use an AudioParameter to communicate a value from the GUI to the processBlock, but is this lock-free? If so, how is it protected against being written and read at the same time?
And how would I communicate in the other direction, or pass references, safely and lock-free?
I mainly aim at plugins, where processBlock is the process callback, but I guess the answer is valid for any similar process-callback structure.
For things <= sizeof(double), just putting the value into an atomic is enough for thread-safe writing and reading, like
Once things are bigger it gets a little harder. You can always read stuff from somewhere else, but you may only write if you can make sure it’s not being read simultaneously from somewhere else, right? That’s where the different mechanisms come into play, like the FIFO: instead of changing an audio processing object from the message thread directly, you only send it instructions on how to change itself. Or you safely swap a pointer to the object with a modified copy of it. Or you always keep full objects in the FIFO and just swap indexing values on changes.
Thanks, I was not sure if atomics were regarded as lock-free. So these will work for single values. Great.
And would std::unique_ptr be an appropriate class to use for swapping pointers? I’m primarily thinking of pointers to dsp::IIR::ArrayCoefficients< NumericType >, since these need to be recalculated whenever frequency, gain or Q in a filter is changed, and the filters themselves do not have any internal method for updating…
I was thinking the right way to do this could be to do the calculation outside the audio thread and then swap pointers between the newly calculated array and the corresponding “old” array in the filter used by the audio thread.
But I’m not sure if this is the right way to go about these things.
I’m doing things like this most of the time in the processing thread. You may need to smooth frequency changes on a per-sample basis anyway to get smooth results when the user changes the filter values. Most of the time I check if a value changed before doing this.
I’m not sure about this. It sounds complicated to me. Wouldn’t it be easier to just update the coefficients in the array at the beginning of the processing block instead of replacing the whole array?
I assume that the array size stays the same and it looks like the result is a normal array that you can modify and update.
In my plugin project testing atomic<bool>, this compiles:
atomic<bool> myParameter; //in *.h
bool test = myParameter.is_lock_free(); //in *.cpp
While trying your example I don’t understand how to make it compile. It’s no use providing my faulty code (EDIT: by that I meant my own failed attempts and nothing else), I just don’t understand how to use this. The most common error I get is:
No member named 'is_always_lock_free' in 'std::__1::atomic<bool>'
But the problem is that the dsp::IIR::Filter class and ProcessorDuplicator don’t provide any methods for updating the relevant parameters from the outside world (i.e. frequency, gain and Q), so I will need to call the various makeXYZ functions to provide coefficients.
I don’t see how I can manipulate the coefficients directly in the coefficients array in a meaningful way. I need to calculate them all for every change in freq/gain/Q, and since they are all recalculated, I believed the most efficient way was to replace the filter.state of the ProcessorDuplicator, as presented in the examples that I started out from. But maybe I’m wrong? Is there some more efficient way?
Just to make things clear: all these complications, for me, only apply to the filter types that don’t let me manipulate frequency, gain and Q directly. It seems to me this makes things a whole lot more complicated, and if I’m to follow the general advice in the JUCE tutorials not to make a lot of function calls and calculations inside the process block, I would need to recalculate coefficients outside the process block. That was my impression.
For smoothing it gets even more confusing for me… Let’s say I build an ordinary 6-band channel EQ (low shelf, 4x peak, high shelf). None of the filters can be manipulated directly; each needs a makeXYZ call, and adding smoothing on top of this, I end up with an extreme number of calls inside the process block. For a 128-sample buffer, wouldn’t I end up with 128*6 = 768 calls and reassignments of filter.state per processBlock for one single channel?
It feels like I’m missing some fundamental design aspect of how to use the IIR filter and how to set things up.
I added a parenthesis to clarify that I meant it was no use for me to provide my own failed code. A little bit of linguistic confusion, not aimed at benvining, who was helpful. Sorry for the misunderstanding.
Yes, you are absolutely right, I use C++14; that’s the default chosen by my JUCE installation, so I’ve not changed it.
You could track Q and frequency yourself and only update the filter coefficients in the processing loop when they have changed.
Or you do it once when processBlock is called. Don’t fear performance problems; make things work first and optimize afterwards. I can imagine it does not matter if you calculate the coefficients once per sample buffer (i.e. per processBlock call).
Yes, thanks. Actually, by now I’ve tried doing the calculation in processBlock and also outside, testing two methods using either std::atomic<std::array<float, 6>> or std::unique_ptr<std::array<float, 6>> with swapping.
Reassigning the state is done in processBlock in all of them of course.
They all seem to work nicely, not sure which is the best.
If I want smoothing, none of these will work. I actually can’t figure out how to do smoothing, since using the dsp classes and AudioBlock means I’m not iterating the buffer myself. It seems I would have to unwrap all the abstraction from both the filter coefficient calculations and the context-based processing, so maybe that’s not really doable when using juce::dsp::ProcessorDuplicator and juce::dsp::IIR::Filter.
Do you use some other IIR-methodology when smoothing parameters?
The swapping solution is hard to test. You need to do this right. Otherwise, it can lead to subtle bugs that happen randomly. It depends on timing and a lot of factors. The complexity is much higher than just calculating the value once per block. You may even need a lock-free queue.
The compiler may put a real lock around this. This looks too big to me. I would only make single types like floats, bools, doubles or ints atomic.
You only want to switch the pointer. I think it makes a copy if you assign another array to it.
OK, I see what you mean. It seems the only way to do this properly is to calculate the coefficients inside the process block, then.
Unless I build a more sophisticated design. But if I did that, I would probably need smoothing too, and then I believe the dsp classes based on per-buffer processing (all vector iterations hidden inside predefined process methods) are of no use anyway.
Thanks for the analysis of my suggestions, I realize now this was a bit of a dead end. The “easy” way using the predefined dsp classes seems to assume the user only wants to do fixed processing, and I didn’t realize that.
I only read the question just now, so forgive me if this is a little out of context, but the SamplerPluginDemo from JUCE has an (almost) great example of how to pass larger change commands to the audio thread using a FIFO command queue.
If you read further into the JUCE code (and examples) you will find that, at the very bottom, there are actually a lot of places where locks are used to synchronize with the audio thread. AsyncUpdater is a very prominent example. And even in the SamplerPluginDemo a CriticalSection is used to prevent the audio thread from changing state while an entirely different thread is saving the state. This is one of the cases where a SpinLock would have been a great solution in the audio thread (with tryLock: when someone else is currently reading the state, just postpone it to the next processBlock).
That’s not out of context, I really appreciate advice on where to find actual implementations of these things. Will look into those examples.
I must say it’s a little bit funny that I ended up searching for ways to make things lock-free because the creator of JUCE stressed that point very clearly, only to find that it’s not even lock-free in the example code…
Could it be because of basic architectural circumstances? That there’s no other solution for some actions in the design at hand?
I’m pretty new to audio programming, but for a mature API like JUCE, with a lot of “free” stuff like the dsp classes, I’m puzzled that it seems very hard to add smooth dynamic parameter changes (which happen all the time in audio production and music) to those dsp classes.
TBH, I haven’t used the DSP classes yet. If you want to smooth changes, use SmoothedValue (but that’s probably already been mentioned in here somewhere?). But that class requires per-sample processing, and that’s not what you want, right?