There is a method
std::atomic<double>::is_lock_free() or similar. I am on mobile, otherwise I would add a link…
There is a method
Yes, that’s a way to check if it’s lock free, so that’s useful:
is_always_lock_free, which you can use in a static assert:
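For example, something along these lines (standard C++17, no JUCE types needed):

```cpp
#include <atomic>

// C++17: is_always_lock_free is a compile-time constant, so it can go
// in a static_assert. is_lock_free() is the per-object runtime query.
static_assert (std::atomic<float>::is_always_lock_free,
               "atomic<float> should be lock-free for audio use");

bool checkAtRuntime()
{
    std::atomic<double> value { 0.0 };
    return value.is_lock_free();   // runtime check on this object
}
```

If `is_always_lock_free` is true, `is_lock_free()` is guaranteed to return true as well; the runtime call only matters on platforms where lock-freedom depends on alignment or configuration.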
I’m doing things like this most of the time in the processing thread. You may need to smooth frequency changes on a per-sample basis anyway to get smooth results when the user changes the filter values. Most of the time I check whether a value has changed before doing this.
I’m not sure about this; it sounds complicated to me. Wouldn’t it be easier to just update the coefficients in the array at the beginning of the processing block instead of replacing the whole array?
I assume the array size stays the same, and it looks like the result is a normal array that you can modify and update.
In my plugin project testing
atomic<bool>, this compiles:
```cpp
std::atomic<bool> myParameter;            // in *.h
bool test = myParameter.is_lock_free();   // in *.cpp
```
While trying your example I don’t understand how to make it compile. There’s no use in me providing my faulty code (EDIT: by that I meant my own failed attempts and nothing else), since I don’t understand how to use this. The most common error I get is:
No member named 'is_always_lock_free' in 'std::__1::atomic<bool>'
But the problem is that the dsp::IIR::Filter class and the processorDuplicator don’t provide any methods for updating the relevant parameters from the outside world (i.e. frequency, gain and q), so I will need to call the various makeXYZ functions to provide coefficients.
I don’t see how I can manipulate the coefficients directly in the coefficients array in a meaningful way. I would need to calculate them all for every change in freq/gain/q, and since they are all recalculated I believed the most efficient way was to replace the filter.state of the
processorDuplicator, as presented in the examples I started out from. But maybe I’m wrong? Is there some more efficient way?
Just to make things clear: all these complications only apply to the filter types that don’t let me manipulate frequency, gain and q directly. That seems to make things a whole lot more complicated, and if I’m to follow the general advice in the Juce tutorials not to make a lot of function calls and calculations inside the process block, I would need to recalculate the coefficients outside it. That was my impression.
For smoothing it gets even more confusing for me… Let’s say I make an ordinary 6-band channel EQ (low shelf, 4x peak, high shelf). None of the filters could be manipulated directly; each would need a
makeXYZ call, and adding smoothing on top of that I end up with an extreme number of calls inside the process block. For a 128-sample buffer, wouldn’t I end up with 128 * 6 = 768 calls and reassignments of
filter.state for every
processBlock, for one single channel?
Feels like I’m missing some fundamental design aspect on how to use the IIR-filter and how to set things up.
No reason to be harsh. If you look up that method, you’ll notice it was introduced in C++17.
Chances are you are building with an older C++ standard.
Stuff you don’t know or don’t understand you are allowed to google or ignore
Or ask for help.
I added a parenthesis to clarify that I meant there was no use in providing my own failed code. A little bit of linguistic confusion, not aimed at benvining, who was helpful. Sorry for the misunderstanding.
Yes, you are absolutely right, I use C++14; that’s the default chosen by my Juce installation and I haven’t changed it.
You could track Q, and Frequency by yourself and only update the filter coefficients in the processing loop when they have changed.
Or you do it once when processBlock is called. Don’t fear performance problems: make things work first and optimize afterwards. I can imagine it doesn’t matter if you calculate the coefficients once per sample buffer (i.e. once per processBlock call).
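A minimal sketch of that change-tracking idea in plain C++ (the names `FilterParams` and `ChangeTracker` are made up for illustration, not JUCE classes):

```cpp
#include <atomic>

// Parameter values written by the message thread, read once per block
// by the audio thread.
struct FilterParams
{
    std::atomic<float> frequency { 1000.0f };
    std::atomic<float> q         { 0.707f };
};

class ChangeTracker
{
public:
    // Returns true (and caches the new values) only when a parameter
    // actually changed since the last call. The caller recalculates
    // the filter coefficients only in that case.
    bool pullChanges (const FilterParams& params)
    {
        const float f  = params.frequency.load();
        const float qv = params.q.load();

        if (f == lastFrequency && qv == lastQ)
            return false;               // nothing to recalculate

        lastFrequency = f;
        lastQ = qv;
        return true;
    }

private:
    // Sentinel values guarantee the first call reports a change.
    float lastFrequency = -1.0f, lastQ = -1.0f;
};
```

Calling `pullChanges` once at the top of processBlock keeps the makeXYZ/coefficient work down to at most one recalculation per buffer, and zero when the user isn’t touching anything.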
Edit: Keep things simple
Yes, thanks. Actually, by now I’ve tried doing the calculation both in
processBlock and outside it, testing two methods using either
std::atomic<std::array<float, 6>> or
std::unique_ptr<std::array<float, 6>> with swapping.
Updating the state is done in
processBlock in all of them, of course.
They all seem to work nicely; I’m not sure which is best.
If I want smoothing, none of these will work. I actually can’t figure out how to do smoothing, since using the dsp classes and AudioBlock means I’m not iterating the buffer myself. It seems I would have to unwrap all the abstraction from both the filter coefficient calculations and the context-based processing, so maybe that’s not really doable when using the dsp classes.
Do you use some other IIR methodology when smoothing parameters?
The swapping solution is hard to test. You need to do this right. Otherwise, it can lead to subtle bugs that happen randomly. It depends on timing and a lot of factors. The complexity is much higher than just calculating the value once per block. You may even need a lock-free queue.
The compiler may put a real lock around this. That looks too big to me. I would only make single types like floats, bools, doubles or ints atomic.
You only want to switch the pointer; I think it makes a copy if you assign another array to it.
OK, I see what you mean. It seems the only way to do this properly is to calculate the coefficients inside the process block, then.
Unless building a more sophisticated design. But if I was to do that I would probably need smoothing also and then I believe the dsp-classes based on per buffer processing (all vector iterations hidden inside predefined
process methods) is of no use anyway.
Thanks for the analysis of my suggestions, I realize this was a bit of a dead end now. The “easy” way using the predefined dsp-classes seems to assume the user only wants to do fixed processing, and I didn’t realize that.
I read the question just now, so forgive me if this is a little out of context, but the SamplerPluginDemo from JUCE has an (almost) great example of how to pass larger change commands to the audio thread using a FIFO command queue.
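The core of that FIFO idea can be sketched in plain C++ as a minimal single-producer/single-consumer ring buffer (the real demo builds on JUCE facilities such as juce::AbstractFifo; this stand-in only shows the mechanism):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <utility>

// Single-producer / single-consumer queue: the message thread pushes
// commands, the audio thread pops them once per processBlock. One slot
// is kept empty, so a queue of Capacity holds Capacity - 1 items.
template <typename T, std::size_t Capacity>
class SpscQueue
{
public:
    bool push (T value)                               // message thread only
    {
        const auto w    = writeIndex.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                             // queue full
        buffer[w] = std::move (value);
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    bool pop (T& out)                                 // audio thread only
    {
        const auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return false;                             // queue empty
        out = std::move (buffer[r]);
        readIndex.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<T, Capacity> buffer {};
    std::atomic<std::size_t> readIndex { 0 }, writeIndex { 0 };
};
```

Both sides only ever wait-free-spin on atomics, never block, which is what makes this pattern safe to drain from the audio thread.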
If you read further into the JUCE code (and examples) you will find that on the very bottom line there are actually a lot of times where locks are used to synchronize the audio thread. AsyncUpdater is a very prominent example. And even in the SamplerPluginDemo a CriticalSection is used to prevent the audio thread from changing state while an entirely different thread is saving the state. This is one of the times, where a SpinLock would have been a great solution in the audio thread (with tryLock — when someone else is currently reading the state, just postpone it to the next processBlock).
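The “try-lock or postpone” idea might look roughly like this, with std::mutex standing in for juce::SpinLock (all names here are illustrative, not JUCE API; a real implementation would also make the flags atomic):

```cpp
#include <mutex>

class DeferredUpdate
{
public:
    void requestUpdate()        { pending = true; }    // message thread
    bool wasApplied() const     { return applied; }

    void processBlock()                                // audio thread
    {
        // try_lock never blocks: if another thread (e.g. one saving
        // the plugin state) holds the lock, we simply skip the update
        // and retry on the next block.
        if (pending && stateLock.try_lock())
        {
            applied = true;      // apply the queued state change here
            pending = false;
            stateLock.unlock();
        }
        // ...normal per-block DSP continues regardless...
    }

    std::mutex stateLock;        // held briefly by the other thread

private:
    bool pending = false, applied = false;
};
```

The audio thread’s worst case is one failed try_lock per block, so it never waits on the other thread; a spin lock is preferred over std::mutex in real audio code because even an uncontended mutex can involve a system call.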
That’s not out of context, I really appreciate advice on where to find actual implementations of these things. Will look into those examples.
I must say it’s a little bit funny that I ended up searching for ways to make things lock-free because the creator of Juce stressed that point very clearly, only to find that it’s not even lock-free in the example code…
I think everyone agrees you shouldn’t! Everywhere JUCE does use locks I think it should be considered a bug.
Could it be because of basic architectural circumstances? That there’s no other solution for some actions in the design at hand?
I’m pretty new to audio programming, but for a mature API like Juce, with a lot of “free” stuff like the dsp classes, I’m puzzled that it seems very hard to add smooth dynamic parameter changes (which happen all the time in audio production and music) to those dsp classes.
TBH, I haven’t yet used the DSP classes. If you want to smooth changes, use SmoothedValue (but that’s probably already in here somewhere?). But that class requires per sample processing and that’s not what you want, right?
I thought I could do it without per-sample but that seems to be a misconception from my side. Seems to me and from the answers here that it’s not really possible (without going inside those classes and making changes…). But it’s great to have cleared this out.
I’m also not using the juce dsp classes for filtering. But it looks that every filter also has a method to process a single sample (processSample).
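Combining a per-sample smoother with processSample-style calls could look like the sketch below. The smoother is a simplified stand-in for juce::SmoothedValue, and the loop at the end is only a comment because the filter object is assumed, not defined here:

```cpp
// Minimal linear ramp, in the spirit of juce::SmoothedValue.
struct LinearSmoother
{
    float current = 0.0f, target = 0.0f, step = 0.0f;

    void setTarget (float t, int rampSamples)
    {
        target = t;
        step = (target - current) / (float) rampSamples;
    }

    float getNext()
    {
        if (current != target)
        {
            current += step;
            // clamp on overshoot so we land exactly on the target
            if ((step > 0 && current > target) || (step < 0 && current < target))
                current = target;
        }
        return current;
    }
};

// Per-sample use (filter.processSample and updateCoefficients are
// hypothetical; the former exists on juce::dsp::IIR::Filter):
//
//   for (int i = 0; i < numSamples; ++i)
//   {
//       updateCoefficients (smoother.getNext());
//       buffer[i] = filter.processSample (buffer[i]);
//   }
```

In practice people often compromise: advance the smoother per sample but recalculate coefficients only every N samples, which keeps the makeXYZ cost bounded while still avoiding audible zipper noise.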
Yes, but then I would need to take the whole AudioBlock apart into single samples anyway, so it seems quite pointless if the idea is to work on a per-block basis.