[suggestions] Best practice for threading and locking

I’ve gotten pretty good at dealing with threading, locking and async, but this is somewhat hard-won knowledge and is scattered all over the fora.

A single document that detailed how this was done with examples would be very useful to the “next guy”. I could help out on this early next year.

Would be super useful, thanks!

I’ve actually made some pretty big strides in making my code more robust in this manner and will write something up “after the rush”.

I have a lot of threads and there seems to be no way to avoid most of them. The big issue is always a thread that's a little late getting the notification that things are being deleted, and continues to work on a resource that is now unavailable.

I have a system to avoid this that seems to work quite well.

I have a "scoped pointer"-like template class called thread_ptr. It works much like Jules' ScopedPtr, except that it doesn't delete its contents in its destructor - instead it calls Thread::signalThreadShouldExit() and puts the thread into a thread graveyard called Trash.

Whenever a new thread is added to Trash, it checks its old threads, and if they have stopped running, it deletes them.

Then I make sure that any resources that are manipulated by a thread are also owned by that thread.

This seems to work pretty well. I don’t have to remember to shut my threads down and they aren’t deleted immediately but “soon”. I never have to wait for threads to exit, and I don’t have to worry that a thread is using a resource not available to it. And any thread calling a method on that class can reasonably call Thread::threadShouldExit() on the class (not the current thread).

(I imagine if multiple threads were using the same resource, the main thread would wait for the child threads to exit, but it hasn’t come to that yet.)
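To make the idea concrete, here is a minimal sketch of a Trash-style graveyard in portable C++, with std::thread and an atomic flag standing in for juce::Thread. All the names here are mine for illustration, not the poster's actual classes:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// Minimal stand-in for a stoppable worker (juce::Thread in the original).
struct Worker
{
    std::atomic<bool> shouldExit { false };
    std::atomic<bool> running { false };
    std::thread thread;

    template <class F>
    void start (F body)
    {
        running = true;
        thread = std::thread ([this, body]
        {
            while (! shouldExit)
                body();
            running = false;
        });
    }

    ~Worker()
    {
        if (thread.joinable())
            thread.join(); // near-instant: only called once running is false
    }
};

// The "Trash" graveyard: workers are signalled to exit and parked here;
// each time a new worker arrives, any that have since stopped are reaped.
class Trash
{
public:
    void add (std::unique_ptr<Worker> worker)
    {
        std::lock_guard<std::mutex> lock (m_mutex);
        worker->shouldExit = true; // ask it to stop, but don't wait

        // Reap old workers whose run loops have finished.
        for (auto it = m_workers.begin(); it != m_workers.end();)
        {
            if (! (*it)->running)
                it = m_workers.erase (it); // ~Worker joins a finished thread
            else
                ++it;
        }

        m_workers.push_back (std::move (worker));
    }

    size_t size()
    {
        std::lock_guard<std::mutex> lock (m_mutex);
        return m_workers.size();
    }

private:
    std::mutex m_mutex;
    std::vector<std::unique_ptr<Worker>> m_workers;
};
```

The point of the design is that add() never blocks on a still-running thread: it only signals, and deletion happens "soon", on a later visit to the graveyard.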

If your threads are reading shared resources that shouldn't be deallocated before they're done, why don't you make those resources inherit from ReferenceCountedObject?
Then if you want to tell your threads to stop working on the resources, you could just add a condition to your thread loop. When all the threads stop referencing your resource, it’ll get deallocated eventually.
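Outside of JUCE, the same shared-ownership idea can be sketched with std::shared_ptr instead of ReferenceCountedObject: each thread keeps its own reference, and the resource is deallocated only when the last reference goes away. A minimal sketch (the Resource type and the live-object counter are invented for illustration):

```cpp
#include <atomic>
#include <cassert>
#include <memory>
#include <thread>
#include <vector>

// A resource that records whether it is still alive, for demonstration.
struct Resource
{
    explicit Resource (std::atomic<int>& liveCount) : m_liveCount (liveCount) { ++m_liveCount; }
    ~Resource() { --m_liveCount; }

    int value = 42;
    std::atomic<int>& m_liveCount;
};

// Each worker captures its own shared_ptr, so the Resource cannot be
// deallocated underneath it; it dies only after the last reference is gone.
int sumFromWorkers (std::shared_ptr<Resource> resource, int numThreads)
{
    std::atomic<int> total { 0 };
    std::vector<std::thread> workers;

    for (int i = 0; i < numThreads; ++i)
        workers.emplace_back ([resource, &total] { total += resource->value; });

    for (auto& t : workers)
        t.join();

    return total;
}
```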

I don’t really see this thread trash thing as a best practice… I’d rather reuse my threads than trash them once they’re done anyways…
A best practice would more likely be to avoid sharing resources between threads when possible, don’t you think?

Developing high performance multi-threaded applications is an art. There is no one-size-fits-all solution to managing access to shared resources. Experience, careful planning, and thought are about the only "best practice" that I believe is applicable.

One alternative that I use often for the problem of shared data is simply to make a copy (depending on how "heavy" the objects in question are). When the "original" object changes, just push a copy to each thread that needs it. Sure, a thread might be looking at old data for a few cycles, but if you have some experience with multithreading and think carefully about this proposal, you might find that it makes certain implementations much simpler.

I offer this example without any explanation:

```cpp
// Simple functor to call update()
template <class Object>
struct call_update
{
    call_update (Object& object)
        : m_object (object)
    {
    }

    void operator()()
    {
        m_object.update();
    }

    Object& m_object;
};

// Allows data produced by one thread to be safely accessed
// by another thread without using a lock, by making copies.
// To guarantee that the reader always makes progress towards
// a new state during an update, and to provide assurances
// that the duration of the lock is constant, three copies
// of the data are required.
//
// Data must be copy constructible and support assignment.
template <class Data>
class shared_data
{
public:
    // Create the shared_data object from the initial state.
    // - Data must be copy constructible and support assignment.
    // - Execution time is O(n) on copying Data.
    explicit shared_data (const Data& data)
        : m_copy1 (data)
        , m_copy2 (data) // so that Data does not require a default ctor
        , m_copy3 (data)
        , m_read (&m_copy1)
        , m_next (&m_copy2)
        , m_write (&m_copy3)
        , m_bUpdate (false)
    {
    }

    // Place new information in the shared_data. The producing
    // thread usually calls this.
    // - The lock is held for O(1)
    // - Execution time is O(n) on copying Data.
    shared_data& operator= (const Data& data)
    {
        // The copy is performed outside the lock
        *m_write = data;

        {
            Vf::ScopedLock lock (m_mutex); // assumed RAII guard for Vf::Mutex

            // Tell the reader to pick it up
            std::swap (m_next, m_write);
            m_bUpdate = true;
        }

        return *this;
    }

    // Boost convention
    void reset (const Data& data)
    {
        this->operator= (data);
    }

    // Retrieve new information. The method used for signaling
    // the reading thread that new data is available is up to the caller.
    // - The lock is held for constant time.
    // - Execution time is O(1)
    bool update()
    {
        bool bChanged = false;

        if (m_bUpdate) // quick check without the lock
        {
            Vf::ScopedLock lock (m_mutex); // assumed RAII guard for Vf::Mutex

            if (m_bUpdate) // check again, now that we hold the lock
            {
                // Pick up the next state. This guarantees progress.
                std::swap (m_read, m_next);
                m_bUpdate = false;
                bChanged = true;
            }
        }

        return bChanged;
    }

    // Return a functor that calls update()
    call_update<shared_data<Data> > deferred_update()
    {
        return call_update<shared_data<Data> > (*this);
    }

    // Accessors for the reader's view of the Data.
    // The object remains valid until the next call to update().
    // - Execution time is O(1)
    Data& operator*()               { return *m_read; }
    Data const& operator*() const   { return *m_read; }
    Data* operator->()              { return m_read; }
    const Data* operator->() const  { return m_read; }

private:
    Data m_copy1;
    Data m_copy2;
    Data m_copy3;
    Data* m_read;  // reader's current view of the data
    Data* m_next;  // what the reader should select on the next update()
    Data* m_write; // safe place for the writer to copy the data
    bool m_bUpdate;
    Vf::Mutex m_mutex;
};
```
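Since the class above depends on the poster's Vf::Mutex, here is the same triple-buffer idea as a self-contained sketch using std::mutex and std::atomic, trimmed down to just what is needed to show how a producer and a reader would use it (names are mine, not the original API):

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <utility>

// Triple-buffered shared state: the writer copies outside the lock and
// swaps pointers inside it; the reader picks up new state with an O(1)
// pointer swap. Three copies let both sides always make progress.
template <class Data>
class triple_buffer
{
public:
    explicit triple_buffer (const Data& data)
        : m_copy1 (data), m_copy2 (data), m_copy3 (data),
          m_read (&m_copy1), m_next (&m_copy2), m_write (&m_copy3) {}

    // Producer side: the O(n) copy happens outside the lock.
    void write (const Data& data)
    {
        *m_write = data;
        std::lock_guard<std::mutex> lock (m_mutex);
        std::swap (m_next, m_write);
        m_updated = true;
    }

    // Consumer side: returns true if a new state was picked up.
    bool update()
    {
        if (! m_updated)        // cheap check without the lock
            return false;
        std::lock_guard<std::mutex> lock (m_mutex);
        if (! m_updated)        // check again under the lock
            return false;
        std::swap (m_read, m_next);
        m_updated = false;
        return true;
    }

    // The reader's view; valid until the next call to update().
    const Data& get() const { return *m_read; }

private:
    Data m_copy1, m_copy2, m_copy3;
    Data* m_read;
    Data* m_next;
    Data* m_write;
    std::atomic<bool> m_updated { false }; // atomic, so the unlocked check is safe
    std::mutex m_mutex;
};
```

Note that the reader keeps seeing the old state until it explicitly calls update(), which is exactly the "looking at old data for a few cycles" trade-off described above.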

Regarding resources - really, the issue isn’t the resources - it’s when to delete the threads. In particular, I have a horror of destructors that block…

I don't tend to re-use threads unless I'm doing something where I'm constantly creating and destroying them, like a server-side process. In the current case, I have three or four new threads every time the user opens a new document. Generally, our users only have one active document open at a time.

I found a nasty gotcha in one common Juce technique for running code asynchronously on the message thread. It's tempting to inherit from CallbackMessage and then use post() when you need to do this - but unfortunately many Juce classes do this too. Some of them use private inheritance, and you'll get a slightly obscure error at compile time (which is good) - but some don't, so you end up overriding handleAsync() for both parent and child, almost certainly doing the wrong thing, and with no warning.

Thanks, TheVinn, for your production code. Lots of food for thought there!

I adapted some of your ideas, and some stuff I already had for generic callbacks, and have just pushed out something I hope is useful from my internal codebase: https://github.com/rec/swirly-juce/tree/master/src/rec/util/thread .

Note that “ptr” is my own scoped_ptr class and can be completely replaced with ScopedPtr for your purposes.

It's probably a little technical to read, but you don't have to fully understand it to use it :smiley: and it's very convenient. If I have a method I need to call asynchronously, I can just write things like `callAsync(myFunction);`, `callAsync(someComponent, &juce::Component::repaint);`, `callAsync(this, &MyClass::myMethod, "an argument");` or `callAsync(this, &MyClass::myMethod, "an argument", 42);` to call a free function, or a method on an object with 0, 1 or 2 arguments.
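The actual callAsync helpers live in the linked repository; the general shape can be sketched with std::function and std::bind, using a toy queue in place of the real message thread (everything below is illustrative, not the poster's code):

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <mutex>

// A toy stand-in for the message queue: callAsync() just queues a
// std::function<void()>, and the "message thread" later drains the queue.
class AsyncQueue
{
public:
    // Accepts a free function, a lambda, or a member-function pointer
    // followed by the object and its arguments, exactly like the
    // callAsync(this, &MyClass::myMethod, "arg", 42) style above.
    template <class F, class... Args>
    void callAsync (F&& f, Args&&... args)
    {
        std::lock_guard<std::mutex> lock (m_mutex);
        m_queue.push_back (std::bind (std::forward<F> (f),
                                      std::forward<Args> (args)...));
    }

    // Runs all pending calls; in real code this happens on the message thread.
    void drain()
    {
        std::deque<std::function<void()>> pending;
        {
            std::lock_guard<std::mutex> lock (m_mutex);
            pending.swap (m_queue);
        }
        for (auto& call : pending)
            call();
    }

private:
    std::mutex m_mutex;
    std::deque<std::function<void()>> m_queue;
};
```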

Much of my last week has been tweaking my thread stuff!

Trash: the thread graveyard

I wanted to re-emphasize how useful that thread graveyard is in practice.

I don't allow any raw pointers in my code at all - I use patterns like this:

```cpp
thread_ptr thread (new SomeClass()); // ptr and thread_ptr are my ScopedPtr variants…

if (! doSomethingPerhapsStartThread (thread.get()))
    return NULL; // We got an error.

// If I get an error here, the thread goes into the graveyard if it's running!
if (! doSomethingElse (thread.get()))
    return NULL;

return thread.transfer();
```

If I didn't have my "thread_ptr", which puts threads into the thread graveyard and then deletes them once they've stopped, I'd have to think each time I start to prepare something that has a thread: if I fail, is that thread started? Must I wait for it to stop? And what if threadShouldExit() has been called on my own thread? Must I then wait for that other thread to finish computing?

I do NOT want to wait around for 100ms in my destructor while some computation thread I started deigns to notice me!

Callbacks and async

I'm also getting great mileage out of those functions I posted above - except that they encourage you to create callbacks, and I managed to create a callback loop. It wasn't at all obvious because of the async calls, and it had no obvious side-effects except occasional long delays on the message queue (these were "null" messages that simply mean "emit your entire state").

Thread Priorities

I also wanted to talk about thread priorities. I don't use 'em. :smiley: I've never trusted thread priorities since my early Java experiences. All my threads have default priority and they seem to work just fine - I try to avoid spinlocks and instead use wait/notify.

NEEDED: An atomic lock/wait construct.

I should add that that's the one area where I feel Juce is deficient in threading - there isn't a truly atomic wait/notify construct as there is in Java, and as a result there's a race condition. (I'm mostly a C++ programmer, but I'm using Java as an example here…)

Here’s why. In a deterministic world, there is never a need to poll anything. Instead, your thread should wait(-1) (wait forever) and another thread should notify it.

In Java, waiting involves a lock as well as a thread, so you can do this with no possible race condition. In Juce, the notify() can come in between the time you check your condition (which probably involves a lock) and the time you wait() - so you miss the notify() and wait forever. This is unacceptable, so all your wait loops have to have a specific timeout, and a fairly small one.

What you need is some Juce construct that allows your thread to hold a lock, go into a wait state, and then have the lock released - so there is no time between letting go the lock and waiting() and therefore no race condition.
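For reference, this is exactly what a condition variable gives you: in modern C++, std::condition_variable::wait atomically releases the lock while going to sleep, so a notify can never fall into the gap. A minimal sketch of such a wait/notify construct:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// wait() atomically releases the mutex and sleeps, so a notify cannot
// arrive between checking the condition and going to sleep.
class Event
{
public:
    void notify()
    {
        {
            std::lock_guard<std::mutex> lock (m_mutex);
            m_signalled = true;
        }
        m_condition.notify_one();
    }

    void wait() // wait forever - no polling timeout needed
    {
        std::unique_lock<std::mutex> lock (m_mutex);
        m_condition.wait (lock, [this] { return m_signalled; });
        m_signalled = false;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    bool m_signalled = false;
};
```

Even if notify() wins the race and fires before wait() is entered, the predicate sees m_signalled already true and the waiter returns immediately - there is no lost wakeup.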

Race conditions really matter!

If I've learned one thing from writing a DSP application, it's that if you have any race condition, no matter how unlikely it seems to you, leaving your program running overnight will trip it. :smiley: If your clients use your program an hour a day, that means every day somewhere between one in 10 and one in 100 of them will experience the results of any race condition you have missed…

You’re thinking of “condition variables”

Apparently I am!

Fascinating reading.

It doesn’t seem insuperably difficult to accomplish from within Juce, but I don’t think it can be put together from the outside without changing Juce, as there’s no place connecting “wait/sleep/etc” and locks.

It is extremely difficult to implement condition variables correctly, especially on Windows:


Implementing a thread-safe queue using condition variables


Boost has cross-platform condition variables and their thread model is quite robust. I would suggest you just use that.
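For what it's worth, modern C++ (C++11 and later) ships condition variables in the standard library, so the thread-safe queue from the linked article can be written portably without Boost. A minimal sketch (Boost's API is nearly identical):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// The classic thread-safe queue: push() from producers, pop() blocks
// consumers until an item is available.
template <class T>
class concurrent_queue
{
public:
    void push (const T& value)
    {
        {
            std::lock_guard<std::mutex> lock (m_mutex);
            m_queue.push_back (value);
        }
        m_condition.notify_one(); // wake one waiting consumer
    }

    T pop() // blocks until an item is available
    {
        std::unique_lock<std::mutex> lock (m_mutex);
        m_condition.wait (lock, [this] { return ! m_queue.empty(); });
        T value = m_queue.front();
        m_queue.pop_front();
        return value;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::deque<T> m_queue;
};
```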



Hah! Interesting, much trickier than I thought.

Yes, it’s amazing how things are rarely more easy than they initially appear, and often more difficult. :smiley:

I have everything working quite nicely with the Juce locking/threading model, so I have little desire to change it for the moment. I really don't mind spinning a little on those locks; it costs almost nothing for a thread to wake up every few hundred ms, check a condition, and go back to sleep. And because the race condition is rare, I can afford to keep that timer pretty long and still get very fast response to UI actions nearly every time…

That all sounds amazing. I am still really having issues with multithreading in juce. My app reaches the line

```cpp
// very bad karma if this point is reached, as there are bound to be
// locks and events left in silly states when a thread is killed by force..
Logger::writeToLog ("!! killing thread by force !!");
```

a few times, and then crashes…

Are you still planning on releasing some sort of document/guidelines on how to do proper multithreading in juce?


Unfortunately, that codebase was far in the past and I’m on to other things now. :-/

Yeah I only realised after posting “10 years ago…” :flushed:
Is there any other such document existing that you know of?

Thnx and good luck with your other things :slight_smile:

How are you ending your threads?

Basically, in any programming language, the only way to stop threads properly is to have some flag or condition variable (e.g. `stopped`) that you set somewhere else, and then test periodically inside the thread to see whether it should terminate.

You shouldn’t be using Thread.kill().

In my UI I have a list of “steps”.
Each step class has a thread which gets created in its constructor. Each step class also inherits from Timer, and in its timer callback it checks whether any parameter (loop-start/loop-end) has changed; if one has, it starts the thread with startThread().
When the thread has finished computing the buffer, it hands it back to the step class using MessageManager::getInstance()->callAsync.

I just implemented presets and when I switch between presets my ValueTree destroys all my objects and builds new ones based on the called preset.

The destructor of my step class stops its thread by calling computingThread->stopThread().

But I have the feeling that sometimes these bufferComputingThreads live on and don’t get properly destructed or something.

A bit of a long story - sorry for that. I also only now discovered the Thread Sanitizer in Xcode, and I had some race conditions which I am fixing now - maybe/probably related to the crashes.

As you can tell, a clear guideline on how to do this the right way would come in handy :smiley:

I’m pretty sure that JUCE threads work as advertised, or else there would be a huge amount of screaming.

Unless you personally do something in your thread loop to cause it to stop executing, stopThread is guaranteed to fail!

Instead, you should call Thread::signalThreadShouldExit(), and somewhere in your thread loop, you should be checking Thread::threadShouldExit() and exiting the loop if it’s true.


Wait, so you’re saying I should never call stopThread to stop a thread executing?
and I should put signalThreadShouldExit in the destructor of my class instead?

I already put threadShouldExit lines all over my thread run() code… but yeah I didn’t know about the signal method! Very good to know.

and I should put signalThreadShouldExit in the destructor of my class instead?

No, that's even worse! You signal, then you delete the thread object, and then the running thread tries to clean up after you've destroyed the class.

The whole idea of doing thread manipulations in the destructor of the thread itself is a very bad one. By the time you're in the destructor, it's far too late to be turning threads on and off.

Yes, Jules has that stopThread in the destructor there for a good reason - to force people to deal with their threads and cause an error otherwise.

It’s particularly bad if you are derived from another Thread class, because then the child’s destructor has already gone off, so any virtual methods you call can refer to memory that’s already been deleted.

You should not start to destroy a thread until it has completely stopped!

In fact, in Juce, you should never delete a thread yourself at all. Instead, you should set deleteOnThreadEnd and then run off the end of the thread’s run method and let Juce delete the thread for you.

To summarize:

Each thread's run method should be a loop that frequently checks threadShouldExit():

```cpp
while (! threadShouldExit())
{
    // Do something that takes only a few milliseconds here
}
```
And you should set deleteOnThreadEnd and then never call delete on the thread yourself.
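The same pattern can be shown in portable C++, with std::atomic&lt;bool&gt; standing in for JUCE's exit flag (a sketch of the idea, not JUCE's implementation):

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// A stoppable worker: run() loops on an exit flag, exactly like a
// juce::Thread checking threadShouldExit() on every iteration.
class StoppableWorker
{
public:
    void start()
    {
        m_thread = std::thread ([this] { run(); });
    }

    void signalShouldExit() { m_shouldExit = true; } // ask, don't kill

    void join() { m_thread.join(); }

    int iterations() const { return m_iterations; }

private:
    void run()
    {
        while (! m_shouldExit)
        {
            // Do something that takes only a few milliseconds here
            ++m_iterations;
            std::this_thread::sleep_for (std::chrono::milliseconds (1));
        }
    }

    std::atomic<bool> m_shouldExit { false };
    std::atomic<int> m_iterations { 0 };
    std::thread m_thread;
};
```

Because each pass through the loop is short, join() returns almost immediately after the flag is set - which is why keeping the work inside the loop small matters.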


Very interesting.

I’m thinking maybe my design is flawed.
To recap: I have a list of "StepView" class instances in a vector. When I press one of my preset buttons, it basically deletes and reconstructs that list based on a ValueTree. Each "StepView" owns a Thread object. This thread object gets constructed in the StepView constructor and gets stopped and deleted via stopThread() in the StepView destructor. I call startThread() whenever I need to do calculations; the thread calculates a buffer and hands it back to my StepView instance when it is done.

Should I instead create a new thread every time a buffer calculation is needed, and let it destroy itself when it is done calculating (so not reusing the same thread, but creating a new one every time)?