This is why we need a lock-free std::atomic<std::shared_ptr>.
juce::ReferenceCountedObjectPtr might work for you if you don’t need the weak-reference capabilities of std::shared_ptr. Remember, though, that when the ref count reaches 0 the object is deallocated, so you might want to offload it to another thread for destruction (in a lock-free way).
Oooh, juce::ReferenceCountedObjectPtr - now that is an interesting suggestion! Actually, to make sure all my dynamic data gets cleanly destructed, I keep all of it owned (directly or indirectly) by a std::unique_ptr<T>, so I don’t have to worry about destruction - the refcount would always be at least 1. And no locking is needed, because the std::unique_ptr<T> never does anything with the data; it’s just there to ensure destruction as soon as the std::unique_ptr goes out of scope (which happens when the plugin is shut down).
It might work - I’ll toy with the idea tomorrow. Tx once again! That makes several beverages.
Those are two contradicting concepts. What you’d do instead is make sure that at least one reference to your ReferenceCountedObject is always kept alive.
You can use an AutoReleasePool to avoid destruction on the audio thread. It keeps all the objects in a ReferenceCountedObjectArray, and a timer (by default on the message thread) checks whether an object’s reference count has dropped to 1 - in which case no one is interested in it any longer and you can kick it out of the array, which automatically triggers immediate destruction.
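The sweep-when-the-refcount-drops-to-1 idea can be sketched in plain C++. This is not JUCE’s actual AutoReleasePool code - std::shared_ptr and use_count() stand in for juce::ReferenceCountedObjectPtr and its reference count, and the ReleasePool name is made up:

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical sketch of the release-pool idea: the pool holds one
// reference to every live object, and a periodic sweep (run on the message
// thread, never the audio thread) drops entries whose use_count() has
// fallen back to 1, i.e. only the pool still holds them.
struct ReleasePool {
    std::vector<std::shared_ptr<void>> pool;

    // Called from a non-realtime thread when an object is created/shared.
    template <typename T>
    void retain(std::shared_ptr<T> obj) { pool.push_back(std::move(obj)); }

    // Called periodically, e.g. from a timer on the message thread.
    void sweep() {
        pool.erase(std::remove_if(pool.begin(), pool.end(),
                                  [](const std::shared_ptr<void>& p) {
                                      return p.use_count() == 1;
                                  }),
                   pool.end());
    }
};
```

Note that a real version also has to protect the pool container itself against concurrent access from multiple threads.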
Don’t bother with std::unique_ptr or ScopedPointer if you use ReferenceCountedObjects.
The problem with using a ReferenceCountedObjectArray is that you need to lock access to it when you push objects on to it or remove them in the message thread. Otherwise you could be adding to it and removing from it at the same time.
@daniel: of course, if I use RefCountedObjects I no longer need the std::unique_ptr - that was obvious to me
@dave96 you’re right about turtling down … though I am not worried about the message thread - I am not shuffling these kinds of objects around there. But nonetheless, I still have 2 threads going on, so the problem remains. I’ll play with the refCounted idea though.
And I’ll see if I can come up with a solution that does what I need in 1 thread (the audio thread). I have no real idea yet how to do that (and it would shift a lot of load to the audio thread that is currently nicely handled elsewhere) … ah well, keeps the brain busy.
Actually, upping the priority of the other thread does guarantee that you avoid priority inversion. And you only have to do it temporarily while holding the lock.
The code below should be perfectly fine:
Queue queue; // Note: In "Queue", both queue.pop() and queue.push() are realtime safe.
auto org_priority = GetPriority(this_thread);
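The snippet above breaks off after saving the original priority. A fuller sketch of the same pattern, with the hypothetical GetPriority/SetPriority mapped onto POSIX pthread_getschedparam/pthread_setschedparam (the function name and the std::deque stand-in for the realtime-safe Queue are made up for illustration):

```cpp
#include <cassert>
#include <deque>
#include <mutex>
#include <pthread.h>
#include <sched.h>

std::mutex queueLock;     // lock shared with the realtime (audio) thread
std::deque<int> queue;    // stand-in for the realtime-safe Queue above

// Runs on the non-realtime thread: boost our own priority, do the short
// critical section, then restore. If the boost fails (e.g. EPERM for an
// unprivileged process), the code still works, just without protection
// against priority inversion.
void pushFromNonRealtimeThread(int value) {
    pthread_t self = pthread_self();
    sched_param original {};
    int originalPolicy = 0;
    pthread_getschedparam(self, &originalPolicy, &original);   // "GetPriority"

    sched_param boosted {};
    boosted.sched_priority = sched_get_priority_max(SCHED_FIFO);
    bool didBoost = pthread_setschedparam(self, SCHED_FIFO, &boosted) == 0;

    {
        std::lock_guard<std::mutex> guard(queueLock);
        queue.push_back(value);   // short critical section, held at high priority
    }

    if (didBoost)                 // restore the original priority
        pthread_setschedparam(self, originalPolicy, &original);
}
```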
In POSIX, you can set PTHREAD_PRIO_PROTECT on your lock to achieve virtually the same thing without having to manually boost the priority of your non-realtime thread. Another alternative is PTHREAD_PRIO_INHERIT, which works by boosting the priority of the non-realtime thread only when necessary. PTHREAD_PRIO_INHERIT might be more efficient if boosting is unlikely, but less efficient if boosting is likely. (Note that it’s unclear whether PTHREAD_PRIO_PROTECT and PTHREAD_PRIO_INHERIT are supported on OS X, but they work on Linux.)
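For reference, requesting the priority-inheritance protocol on a POSIX mutex looks roughly like this (the helper name is made up; the return codes are checked because protocol support varies by platform - PTHREAD_PRIO_PROTECT would additionally need a ceiling set via pthread_mutexattr_setprioceiling()):

```cpp
#include <cassert>
#include <pthread.h>

// Initialise *m as a priority-inheritance mutex: while a higher-priority
// thread is blocked on it, the kernel boosts the current lock holder.
// Returns 0 on success, or an error code (e.g. ENOTSUP) on failure.
int initPriorityInheritanceMutex(pthread_mutex_t* m) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);

    int rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);

    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

Unlike manual boosting or PTHREAD_PRIO_PROTECT, locking such a mutex needs no special privileges in the common case, because the boost only happens under contention.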
I can’t remember that you needed a special version of the kernel. Do you have a reference for this? If I remember correctly, PTHREAD_PRIO_INHERIT wasn’t supported 15 years ago or so, so maybe it wasn’t supported in the kernel then…
I don’t understand the problem. How can the low-priority thread be asleep when it’s setting the priority for itself? Or did you only think about PTHREAD_PRIO_INHERIT here? In the latter case, I guess it must be woken up immediately, yes. Of course there is a latency, but it’s not supposed to break realtimeness. PTHREAD_PRIO_PROTECT, or boosting manually, probably has a more even time usage, but it probably doesn’t matter much unless the mutex has high traffic.
Real-time operating systems implement special mechanisms to avoid priority inversion, for example by temporarily elevating the priority of the lock holder to the priority of the highest thread waiting for the lock. On Linux this is available by using a patched kernel with the RT preempt patch. But if you want your code to be portable to all general purpose operating systems, then you can’t rely on real-time OS features like priority inheritance protocols. (Update: On Linux, user-space priority inheritance mutexes (PTHREAD_PRIO_INHERIT) have been available since kernel version 2.6.18 together with Glibc 2.5, released September 19, 2006, and used in Debian 4.0 etch, 8 April 2007. Thanks to Helge for pointing this out in the comments.)
Someone should do some audio-specific research (desktop Linux/Windows/macOS + iOS/Android) about the efficiency of using lockless data structures to send info to/from a realtime thread vs. the efficiency of boosting thread priority manually and using normal locks or spin locks. I wouldn’t be surprised if the lockless data structures were both slower on average (this is already common knowledge though) and had a more fluctuating CPU usage (not common knowledge). In other words, I wouldn’t be surprised if common knowledge about this topic is wrong: that locks are not only safe to use in a realtime thread (this is a fact, but not common knowledge), but even perform better than the alternatives (this we don’t know, but it seems likely).
The situation now is that a lot of people write overcomplicated code to avoid locks because they think locks are unsafe, which they are not.
Such research could also reveal whether temporarily boosting the priority is not realtime safe, i.e. whether there are bugs in the operating systems and/or thread libraries.
Not sure I understand what you are saying here. Do you think using lock-free structures could sometimes take vastly longer, or do you think using locked structures could take vastly longer?
Why wouldn’t it be implementable on POSIX?
Hmm, maybe that should have been changed to PTHREAD_PRIO_PROTECT instead. Probably doesn’t make much practical difference though.
I think locks resulting in underruns would be horrendous now, and it would have been horrendous 10 years ago as well. In that case there would be serious bugs in the thread implementations, which of course there may be.
Sorry, I said that wrong. I meant that using locks would be faster most of the time (due to the fact that they’re often uncontended and therefore probably don’t result in full system call sleeps or context switches). Using lock-free has other CPU related costs.
However, in the occasional contended case, using locks could result in a vastly longer call, possibly resulting in an underrun. This basically boils down to how ubiquitous this priority boosting is across OSes/frameworks and how well it’s implemented in the OS.
Again, it could be my lack of POSIX knowledge, but is there a way to boost the priority of a given thread and to get the priority of a specific thread? To me it looks like this isn’t necessary, though, as PTHREAD_PRIO_INHERIT handles it automatically.
So you don’t think the implementation of thread schedulers and locks has improved in the last 10 years? Just curious as I don’t know much about that very low level stuff.
Well, when you call SetPriority(), the priority is supposed to be set immediately.
But the call to SetPriority() itself might take some time though. However, this happens in the non-realtime thread, so you won’t get underruns. But yes, performance might be lower if SetPriority() takes a long time to finish.
You call pthread_getschedparam to get priority, and pthread_setschedparam to set priority.
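A minimal round trip with those two calls (the helper name is made up; writing the same values straight back needs no special privileges, unlike raising the thread to a realtime policy):

```cpp
#include <pthread.h>
#include <sched.h>

// Read the calling thread's scheduling policy and priority, then write the
// unchanged values back. Returns true if both calls succeed.
bool roundTripSchedParams() {
    int policy = 0;
    sched_param param {};   // struct sched_param: holds sched_priority
    if (pthread_getschedparam(pthread_self(), &policy, &param) != 0)
        return false;

    // A boost would instead pass SCHED_FIFO and a higher sched_priority here.
    return pthread_setschedparam(pthread_self(), policy, &param) == 0;
}
```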
Or you can use PTHREAD_PRIO_PROTECT instead to let posix take care of it automatically for you.
I didn’t know there was a problem with this 10 years ago? Has pthread_setschedparam not worked immediately at some point?