Short question: does a try-lock call on a mutex take a predictable amount of time to return? I’m lacking a bit of understanding of the level at which mutexes are managed. Will this cause a system call with an unpredictable return time depending on system load, or will it return immediately in any case, since no real system call is involved?
Background: I have some buffers, filled with audio samples, that might get reallocated at any time because of external events. I simply want to ignore samples that come in while reallocating, so I wrap the access in a try lock on the audio thread side and use a normal lock on the side that initiates the reallocation from another thread. This works like a charm on my machine, but I just want to be sure whether this is safe in every case.
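For reference, here is the pattern being described, sketched with std::mutex (the names tryWriteSample and reallocate are placeholders for illustration, not the actual code):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

std::mutex bufferMutex;
std::vector<float> buffer (512);

// Audio thread: never blocks; drops incoming samples while a
// reallocation holds the lock.
bool tryWriteSample (float sample, size_t index)
{
    std::unique_lock<std::mutex> lock (bufferMutex, std::try_to_lock);

    if (! lock.owns_lock())
        return false;               // reallocation in progress: skip this sample

    if (index < buffer.size())
        buffer[index] = sample;

    return true;
}

// Other thread: blocks until the audio thread is out of the critical section.
void reallocate (size_t newSize)
{
    std::lock_guard<std::mutex> lock (bufferMutex);
    buffer.resize (newSize);
}
```

Whether the try_lock call itself is real-time safe on every platform is exactly the open question here.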
Timur has shown that std::mutex::try_lock is not safe on the audio thread, but the OP had also asked about juce::ScopedTryLock. It seems to me that this one might be safe if you use it in conjunction with juce::SpinLock, because there is no possibility of it notifying another thread. Would something like this be safe to do on the audio thread?
juce::SpinLock::ScopedTryLockType l (mySpinLock);

if (l.isLocked())
{
    // do stuff...
}
If the lock isn’t acquired, it just does nothing and tries again next time.
AFAIK such a try-lock approach with a spinlock is safe. But is it good? For very low contention, and for short critical sections, I guess it is.
But with more contention it could be a very bad solution (CPU/battery cost, risk of starvation). Frankly, the more I learn about this, the less I know what to use.
I suppose that thread preemption (which I was worrying about earlier in this topic) never happens in practice, since I never hear people talking about it. Presumably the OS will never preempt a high-priority thread in favour of a low-priority one.
In my code I kept std::mutex::try_lock for now. Timur Doumler’s trick is not very portable (and not fully tested). I’ll probably make my own version with nanosleep instead of __asm__ __volatile__ ("pause" ::: "memory") and such. I’m afraid to experiment with ARM instructions, and I really want the low-priority thread to wait (instead of polling and consuming CPU/battery power).
I would like to know how this is done in production code (real software, not YouTube/forum talks).
The kind of SpinLock shown in Timur Doumler’s blog (with exponential back-off) is interesting. It would be awesome to have one in JUCE (or elsewhere) fully tested on various platforms. Isn’t that something the JUCE community could do?
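For anyone curious what the idea looks like, here is a rough approximation of an exponential back-off spinlock using only standard C++ (no inline assembly, so no "pause" hint). The spin/yield thresholds and sleep cap are made-up numbers, and this is not the code from the blog post:

```cpp
#include <atomic>
#include <chrono>
#include <thread>

class BackoffSpinLock
{
public:
    void lock()
    {
        auto backoff = std::chrono::microseconds (1);

        for (int spins = 0; ! tryLock(); ++spins)
        {
            if (spins < 16)
                continue;                       // short busy-wait first

            if (spins < 64)
            {
                std::this_thread::yield();      // then give up the time slice
                continue;
            }

            std::this_thread::sleep_for (backoff);  // finally sleep, doubling each time

            if (backoff < std::chrono::microseconds (1000))
                backoff *= 2;
        }
    }

    bool tryLock()  { return ! flag.test_and_set (std::memory_order_acquire); }
    void unlock()   { flag.clear (std::memory_order_release); }

private:
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
};
```

Note that the yield/sleep branches make lock() unsuitable for the audio thread itself; there you would call only tryLock().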
I tried to make one, but TBH I’m not an expert in assembly (I just stole things from various projects), so I don’t want to publish code that would burn the computers of volunteers.
Paraphrasing what Timur (I think it was) said at ADC 21: spinlocks may be real-time safe, but they eat up CPU (and battery) while spinning. In general this strategy of burning CPU results in poor performance of the DAW overall, because you are needlessly consuming CPU cycles that the DAW could be using more productively (e.g. repainting the UI at a high frame rate).
Secondly, considering that lock-free mechanisms are available and widely used (e.g. message-passing), my opinion is that we could all save a lot of wasted time and effort if we stopped reaching for locks as the default ‘go to’ for solving our concurrency challenges.
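As one example of what is meant by lock-free message passing, here is a minimal single-producer/single-consumer FIFO. This is a sketch for illustration, not production code (real implementations add cache-line padding, power-of-two sizing, and more careful API design):

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

template <typename T, size_t Capacity>
class SpscFifo
{
public:
    bool push (const T& value)                      // producer thread only
    {
        auto t = tail.load (std::memory_order_relaxed);
        auto next = (t + 1) % Capacity;

        if (next == head.load (std::memory_order_acquire))
            return false;                           // full: caller drops or retries

        slots[t] = value;
        tail.store (next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop()                          // consumer thread only
    {
        auto h = head.load (std::memory_order_relaxed);

        if (h == tail.load (std::memory_order_acquire))
            return std::nullopt;                    // empty

        T value = slots[h];
        head.store ((h + 1) % Capacity, std::memory_order_release);
        return value;
    }

private:
    std::array<T, Capacity> slots {};
    std::atomic<size_t> head { 0 }, tail { 0 };
};
```

Neither push nor pop ever blocks or spins, which is what makes this shape attractive for the audio thread.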
Full disclaimer: I did use a std::condition_variable in a streaming disk sampler that is in use in a major company’s products, and it seems to work OK. (Condition variables take a lock only for a very short time.)
Frankly, what’s the point of all the C++ evolution if we cannot have guarantees about such things?
C++ does not dictate what the OS can or cannot provide. If the OS can’t provide any guarantee in regards to locking, you can’t blame C++ for that.