Thread priority and priority-inheritance mutexes


#1

Hi,

I’m having some trouble with the current thread priority mechanism.
Whenever I change my thread priority, it doesn’t seem to have any effect.
I’ve dug into this, and I think I’m hitting the classic thread priority inversion problem that mutex priority inheritance is meant to solve.

Here’s an example of what happens:

Thread H is running with high priority
Thread L is running with low priority
Thread M is running with medium priority

1) Thread L takes a mutex that it shares with Thread H.
2) Thread L is preempted by the scheduler.
3) Since Thread H has the highest priority, it is selected and tries to lock the mutex. It can’t (L holds it).
4) Thread M is then selected and runs. When its time slice is done, Thread H is selected again but makes no progress, so Thread M is selected once more (and so on).

What happens here is that Thread M, with medium priority, runs more often than Thread H, with high priority. In a three-thread system, Thread M runs exclusively (put differently, Threads H and L appear deadlocked for the whole lifetime of Thread M).
On POSIX systems this can be solved easily by giving the mutex priority inheritance: as soon as Thread H tries to take the mutex, Thread L gets an immediate priority boost up to Thread H’s priority, and the situation unblocks itself.

On Windows this is done automatically by the scheduler, but on Linux it isn’t: you have to create the mutex in a specific way for this to work:

pthread_mutex_t my_mutex;
pthread_mutexattr_t my_mutex_attr;

pthread_mutexattr_init(&my_mutex_attr);
pthread_mutexattr_setprotocol(&my_mutex_attr, PTHREAD_PRIO_INHERIT);
pthread_mutex_init(&my_mutex, &my_mutex_attr);
pthread_mutexattr_destroy(&my_mutex_attr); // safe once the mutex is initialised
// From here: http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_MRG/1.0/html/Realtime_Tuning_Guide/sect-Realtime_Tuning_Guide-Application_Tuning_and_Deployment-Mutex_options.html

#2

Up.
This is not a blocking issue (it can be worked around by using events instead of mutexes), but I still think it’s a design flaw that could/should be sorted out.


#3

I sorted this out last week, didn’t I?


#4

Well, sorry, I’m still locked to a juce git version from two months ago (with critical fixes backported). I can’t take the risk of upgrading juce in my project right now.
I’ll have a look at backporting the fix.


#5

I think it was just a one-liner.


#6

Ok, it works great.
2 minor comments:

  1. WaitableEvent should get the same treatment too (at the top of the file).
  2. Did you know that POSIX mutexes and condition variables can be interprocess too (using PTHREAD_PROCESS_SHARED)? I don’t think it would simplify the current code, but anyway, it’s good to know.

#7

Good point about the events.

I seem to remember looking at using mutexes for interprocess locking, but chose not to for some reason… can’t remember exactly why not…


#8

Well, you still have to open a file for the shared memory, so the code ends up about the same size for a single IPC lock. The difference shows when you have hundreds of those mutexes: you then need only one file instead of many.
Also, such a mutex is much faster than file-based locking (the former only switches to the kernel on contention, while the latter performs a kernel switch on every access).

It’s not my case anyway; when I need IPC, I write socket-based code => that forces you to define an API for the communication, and allows third parties to interact.


#9

And I think that maybe not all pthread implementations supported it, or something like that. There was some good reason to avoid them, anyway.