C++ threads vs JUCE threads - threading tutorials?

I am new to JUCE, C++, and threading. Unsurprisingly, I'm having trouble understanding the context to even start coding.

  1. Should I learn C++11 threading or JUCE threading? What are the pros/cons of each? Are they compatible? Will using C++ threading in a JUCE project cause problems? Other discussions on this forum didn't seem conclusive about that, e.g. here: Threading and wait/notify

  2. Are there tutorials on threading in JUCE? So far the only resources I have found are the JUCE API documentation and the Multithreading demo in JuceDemo.

  3. In general, are some tutorials missing, deprecated, or hidden? Posts on this forum and the tutorials here: https://www.juce.com/tutorials seem to mention other tutorials I can't find. Were some out of date and removed?

I have the Concurrency in Action book http://www.cplusplusconcurrencyinaction.com/ and it seems like a good explanation of C++ threading in the little bit I've read so far, but I'm feeling the lack of an equivalent explanation of JUCE-specific threading. It also looks to me as though the JUCE thread classes include features missing from C++ - is that right?

Specifically I'm struggling with issues like: should my event-handling object inherit from Thread, or contain a member object that is a Thread? When run() calls other methods, what determines which thread they run on? If the same methods are called from elsewhere, do they run on a different thread?

Generally, how does the thread structure relate to the class/object structure? It seems like it intersects it arbitrarily… and I can't find an explanation of how.

I see lots of references to the "message thread", by which I presume is meant the default GUI-handling main application thread? But so far I haven't found a direct discussion of what it is. Shouldn't that be part of the most basic tutorials?

I feel like I’ve missed some basic explanation I should have seen.


Why do you even want to use threads? (That is, do you have some legitimate reason to use them at a low level, to begin with?)

I want to learn to write realtime audio and MIDI applications with the realtime processes uninterrupted by the GUI.

That said, my immediate use may not actually need multiple threads. I need to poll an object that communicates with USB HID hardware at regular intervals of 1 ms or so, do some fairly simple processing on the callbacks that the object returns, and then output MIDI based on that.

In this immediate case I'd be quite happy for any interaction with the GUI to delay the polling and delay or suppress the processing of incoming events while the GUI is handled, since I'm assuming that the user is either playing the instrument or adjusting the settings, not both at once. However, I can see that in a more general case I might want such GUI interactions NOT to interrupt the realtime processing.

Additionally, I can see how to do this with a dedicated input thread that waits for a specified interval between polls of the input object, but I haven't found examples that do it all on one thread and allow the GUI to delay polling.
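Something like this rough sketch is what I have in mind for the dedicated-thread version (HidDevice and processLatestReport are just placeholders for my actual polling object and processing, not real JUCE classes):

class HidPollingThread  : public Thread
{
public:
    explicit HidPollingThread (HidDevice& deviceToPoll)
        : Thread ("HID polling"), device (deviceToPoll)
    {
    }

    void run() override
    {
        while (! threadShouldExit())
        {
            device.poll();            // ask the hardware object for new data
            processLatestReport();    // simple processing + MIDI output
            wait (1);                 // sleep ~1 ms; wakes early if notify() is called
        }
    }

private:
    void processLatestReport()
    {
        // ...read the incoming reports/callbacks and send MIDI...
    }

    HidDevice& device;
};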

There are some threads that talk about it like this one.

Look into the AsyncUpdater Class

This thread is also relevant: To poll or thread or . . . .?

I'm quite confused about the use of critical sections. I understand what they are for, and I understand turning them on and off with a scoped lock, but what is the extent/scope of the critical section itself? What defines what exactly is "critical", i.e. what gets locked? Presumably it is something to do with the scope of the CriticalSection object, i.e. where it is initially declared?

+1 to this. Look for Timur's recent talks from both CppCon and the JUCE Summit - some very good tips there. However, it would be great to have a JUCE-centric tutorial on this. I've recently used threading in my app to improve performance on mobile devices. I used the JUCE Thread class, which separates threads by classes. I looked at std::thread too, but its style is quite different. It would be great to see examples of both.


Re CriticalSections: they are more like a doorway your code has to walk through to continue executing. 'Locking' (taking ownership, etc.) the CriticalSection is like locking the door; if another thread context tries to walk through the door while you have it locked, it stops running and waits for you to unlock the door, at which point it will walk through and lock the door itself. It controls execution, not data access, but you leverage that to control data access.


Thanks! Does this mean a critical section locks all data objects within scope of the currently executing thread that has invoked the lock? I.e. does it prevent any other thread from accessing any object in scope until the lock is released?

If that's wrong, how do you control data access with it?

If I understand correctly, it seems like an easy-to-use if rather broad system…

I'm unclear why it requires two objects: a critical section object and also a scoped lock object.

Again, the critical section controls code flow, not data access. But by doing so, it inherently controls access to the data within the scope of that code. Using my door metaphor, let's say a light switch is the data object we want to control access to. We write our code in a way where, whenever a thread wants to change that switch, it first must lock the critical section. If another thread has the critical section locked, the thread trying to lock it will become 'blocked' and pause execution. Since its execution is paused, it obviously can't access the data we want to control access to. When the other thread releases the critical section (unlocks it), the blocked thread resumes execution and gets the lock on the critical section. Since it is running again, it will access the data, and since it holds the critical section, any other code trying to lock the critical section will become blocked and not access the data we want to control access to.

Does this make sense? The locking isn't just for controlling data access, but for any kind of resource whose usage you want to control between different threads: a USB port, drawing on the screen, etc.

The ScopedLock object is actually just a helper, and not required. The actual locking/unlocking is done through the CriticalSection API's 'CriticalSection::enter()' and 'CriticalSection::exit()'. The ScopedLock, like a ScopedPointer, allows you to not think about the unlocking, as it happens automatically when you leave the 'scope' of the lock.
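A quick sketch of the two styles, just to illustrate (assuming a CriticalSection member called myCriticalSection, and lightSwitchIsOn as the shared data from the metaphor):

void setSwitchManually (bool newState)
{
    myCriticalSection.enter();     // lock the door
    lightSwitchIsOn = newState;    // only one thread can be in here at a time
    myCriticalSection.exit();      // unlock the door (easy to forget on early returns)
}

void setSwitchWithHelper (bool newState)
{
    const ScopedLock sl (myCriticalSection);   // door locked by the constructor
    lightSwitchIsOn = newState;
}                                              // door unlocked automatically when 'sl' goes out of scope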


I understand about it controlling code flow, not data access, but I'm unclear about what is caught up in that. I.e. is the scope recursive, so that everything called from within the ScopedLock is locked until the ScopedLock is released?

E.g. if I have a thread object that handles my realtime processing, and in its run method it has a scoped lock covering a call to a public function in the same object, will that block the main GUI component (which the entire thread object is a member of) from accessing that public method until the lock is released? But not block another public method that is not called within the scope of the ScopedLock?

Forget about 'where you are' in the code. A call within a thread of execution does not change the thread of execution. My metaphor contains all of the information needed…

Key point: only one thread of execution can lock the CriticalSection at a time; all other threads of execution that attempt to lock it will block until it is released.

But isn't the whole purpose to block access to some parts of the code while leaving other parts accessible to other threads? The critical section isn't supposed to block all code in the entire app, is it?

That's what I thought; I'm trying to determine what actually is within it. Is it all code in the same scope as the critical section object, or all code called from within that scope… maybe this is a dumb question revealing my unfamiliarity with the details of scoping in C++?

So does the scope of the critical section object determine the scope of the block? So the scope that is blocked is separate from the chunk of code that is executing while the block is in place (determined by the ScopedLock object)?

Or to put that another way: while the ScopedLock is active, is everything in the scope of the CriticalSection blocked, even if it is not used within the ScopedLock?

To put it yet another way: the scope of the critical section defines the size of the room, and the ScopedLock defines when the door opens and closes. Is that right?

For reference, Timur's CppCon talk as mentioned by @adamski

and his JUCE Summit talk

I'm finding the first one very interesting and potentially useful, though it isn't helping me understand critical sections (except why you mostly shouldn't use them).

I'm not sure where you are getting confused. I'm trying to be really clear with the language I am using. We shouldn't talk about 'code', we should talk about 'threads of execution'. A CriticalSection only blocks a thread of execution (MainThread, WorkerThread, MyThread, etc.) that tries to gain access to the CriticalSection while it is in use by another thread. You, as the software developer, decide where and when to use a CriticalSection, and only in those places is execution possibly paused due to a CriticalSection in use by another thread. But, yes, the entire thread is blocked until the CriticalSection is available.

So, if Thread1 takes CriticalSectionA, and then Thread2 tries to take CriticalSectionA, Thread2 will suspend execution at the line of code where it attempts to take the CriticalSection. When the CriticalSection becomes available, Thread2 will resume execution at the line of code following the line that took the CriticalSection. Thread3, which doesn't execute any code that involves CriticalSectionA, will continue to run this entire time. And if Thread3 accesses data structures that you are protecting with CriticalSectionA, it will do so without being blocked, because you, as the software developer, forgot to take CriticalSectionA before accessing that data.
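In code, that scenario looks roughly like this (just a sketch; sharedData stands in for whatever data you are protecting):

CriticalSection criticalSectionA;
int sharedData = 0;

void calledOnThread1()
{
    const ScopedLock sl (criticalSectionA);   // Thread1 takes the lock
    sharedData = 42;
}

void calledOnThread2()
{
    const ScopedLock sl (criticalSectionA);   // blocks here while Thread1 holds the lock
    sharedData = 100;
}

void calledOnThread3()
{
    sharedData = -1;   // never takes the lock, so it never blocks, and the data is NOT protected
}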

What I don't understand is the bounds of the critical section: what does it enclose? The word "section" implies boundaries.

If threadA locks the CriticalSection, what exactly is the scope of the code that threadB will be blocked from accessing?

I was thinking that the scope of the critical section defines the size of the room, and the ScopedLock opens and shuts the door. But now I suspect that's wrong: the room doesn't have a size, there is just a door in the middle of the execution flow, and you have to put a door on every possible route to the thing you are trying to restrict. Is that right?

(EDIT: in fact, in my current understanding there is no room, just paths (threads) and doors (critical sections).)

e.g.:

class ThreadA  : public Thread
{
private:
    CriticalSection myCriticalSection;

public:
    ThreadA() : Thread ("name")
    {
        // constructor
    }

    ~ThreadA()
    {
        // destructor
    }

    void run() override
    {
        // lock taken here, released automatically when 'sl' goes out of
        // scope at the end of run()
        const ScopedLock sl (myCriticalSection);

        while (! threadShouldExit())
        {
            int parameter = 3 + 4;
            functionA (parameter);
        }
    }

    void functionA (int newValue)
    {
        // change some values or fill an array or whatever
    }
};

Are threads other than ThreadA suspended if they try to call functionA while the loop is running? I think not, from your explanation, since they can get to functionA without requesting the CriticalSection lock.

So if I want to protect functionA (and the data structures it changes), I have to structure the code so it can't ever be called without requesting the CriticalSection? (E.g. maybe by putting a lock/door actually in functionA, something like the sketch below?)
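Something like this is what I mean (sketch only; I believe the JUCE CriticalSection is re-entrant, so run() calling functionA while already holding the lock should still be OK, but correct me if not):

void functionA (int newValue)
{
    const ScopedLock sl (myCriticalSection);  // every caller now has to go through the "door"
    // ...change some values or fill an array or whatever...
}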

The critical section does not enclose anything. The name may be a bit confusing. Most other frameworks/languages call this thing a mutex.

Think of it like a token that exists only once. Code is made thread safe by making sure it has to lock the critical section (or, acquire the mutex) in order to execute. This way, you know that it can only be executed by one thread at a time (since there’s only one token so only one thread can have it at any given moment).

Quoting @parenthetical: "the scope of the critical section defines the size of the room, and the scopedlock opens and shuts the door."

No, that’s the wrong way to think about it. The critical section/mutex is just a unique token. The scoped lock is a way to acquire and release that unique token. And the scope of the lock (i.e. the scope of code where the ScopedLock variable exists) defines what code is protected (= the size of the room if you will). It starts at the place where the lock is created:

const ScopedLock sl (myCriticalSection);

and it ends whenever the lock is destroyed (= it goes out of scope and its destructor is called, releasing the mutex). In this case, this happens at the end of the function.

This is an example of a more general technique called "RAII"; I suggest you familiarise yourself with it.
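For example (a minimal sketch, assuming a member CriticalSection called myCriticalSection):

void doSomeWork()
{
    {
        const ScopedLock sl (myCriticalSection);  // mutex acquired here
        // ...only this part is protected...
    }   // 'sl' goes out of scope here, its destructor runs, mutex released

    // ...this code runs without holding the mutex...
}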

Thanks @timur, that seems to confirm my new understanding mentioned above.

I get the ScopedLock and the basic idea of RAII; it was the function of the critical section itself that I had misconstrued. I hope that by being confused I've demonstrated the need for more explanations/tutorials of what these classes and concepts in the API really are!

To be crystal clear: the critical section object itself has no scope in terms of what it blocks. It is just a "you shall not pass" flag on a single specific execution path through the code, and its placement/scope/effect is defined by where its enter and exit functions are called, not by any property of the object itself…

So if some of the code is accessible by another route (like my functionA above), then it is not protected.

Is this all correct?

Now that I understand it (I think), it doesn't seem that useful for anything realtime. I will focus instead on learning atomics, pointers and FIFO structures as outlined in your talk. (It would be great to write that up as an essay/tutorial for easy reference…)
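(For reference, my current takeaway of the simplest case from the talk - a parameter shared between the GUI and the realtime thread - would be something like this sketch; please correct me if I've got it wrong:)

#include <atomic>

std::atomic<float> gainParameter { 1.0f };

// message (GUI) thread:
void onSliderChanged (float newValue)    // hypothetical GUI callback
{
    gainParameter.store (newValue);      // lock-free, never blocks the audio thread
}

// realtime (audio) thread:
void processNextBlock()
{
    const float gain = gainParameter.load();
    // ...use 'gain' for this block without taking any locks...
}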

Does anyone have any answers or comments about the other questions in my initial post?

The meta-issue I'm struggling with is the intersection of the very linear execution-flow paradigm of threads and the very non-linear, random-access paradigm of OOP, with code encapsulated in methods, objects and classes, which rather obscures the actual execution flow that the thread follows.

My instinct is to make a class/object for each thread, inheriting from Thread and containing the code intended to run on that thread - to forcibly align the two paradigms a little… Is this a useful approach?

Yes, this is all correct, except that I don't understand what you mean by "its enter and exit functions". A critical section/mutex has no such things as enter and exit functions. Perhaps you mean locking the ScopedLock and its automatic unlocking when it goes out of scope. It is important that those are functions of the lock, not of the critical section/mutex, which, again, is just a token that has no functionality except being there and being lockable.