I’m a bit new to “large scale” app development, and one thing I don’t fully understand is the notion of message threads and async callbacks; in general I think I’m not completely clear on some thread-related questions. As a consequence, there are some practices I don’t fully understand in JUCE when skimming through arguably “well written” open-source plugins or examples:
If I understand correctly, in JUCE the main thread, the UI thread and the so-called “message thread” are synonyms. Then what’s the point of a function like isThisTheMessageThread() when it’s not called within the context of the processBlock() method? I’ve always assumed that only code called from inside that method runs on the audio thread, and that we are on the message thread everywhere else.
This confusion leaves me wondering whether I’ve missed something more fundamental about the way plugins work. So far I’ve assumed that the host provides the audio thread to the JUCE plugin, but that the plugin creates its own thread where it runs its own event loop (otherwise a UI-related problem in a plugin could impact the UI of the DAW, right?). Maybe I’m mistaken in that respect? Does the UI of a JUCE plugin run on the same UI thread and event loop as the host? Could the host sometimes decide to “run” the plugin on some thread that’s not the message thread, which would justify using something like isThisTheMessageThread()?
Is there any point in making async calls for stuff that’s non-blocking? I sometimes don’t understand the callAsync calls I see in code. They often seem to be used as a means of communication between components, to avoid having them reference each other directly (e.g. a processor notifying the editor by providing a callback, instead of directly calling an exposed method on an editor it references). I guess this is the point of a “message” thread in the first place, but I’m not relying on these tools. As a rule of thumb, when do we decide to go that route instead of holding references to things?
I’ll try to sort these questions out a bit for you.
In general, a host will probably run multiple threads, which might or might not be used to call functions from your plugin, but your plugin can also create its own threads if needed. Technically, a plugin is a dynamic library that is loaded into the host process at runtime, so after loading it becomes part of that process. If you create a thread from within a plugin, it will be a new thread of the host process.
I’m not sure what your concept of “providing the audio thread” is. The host will create at least one high-priority thread dedicated to rendering realtime audio. After loading your plugin, the host retrieves a function pointer to the plugin-format-specific processBlock implementation, and then calls that function pointer at runtime from the audio thread, which ends up as a function call into your plugin code. I’d suggest putting a breakpoint into your plugin’s processBlock function and having a look at the resulting stack trace. You’ll probably see that the call originates from some host thread whose name indicates it’s used for audio rendering, and you’ll also see a more or less symbolicated call stack through the host’s audio rendering functions before the call stack enters the format-specific JUCE wrapper and finally your plugin’s implementation of processBlock.
However, don’t assume that there is only a single audio thread. Most modern DAWs take advantage of multicore CPUs by running multiple audio threads in parallel so that multiple plugins can be processed on different CPU cores concurrently. It’s the responsibility of the host to schedule which audio thread is used to render which plugin, and two consecutive processBlock calls to your plugin might originate from different audio threads.
The host also creates the main thread / message thread and calls your plugin’s GUI rendering functions and mouse/keyboard event handlers from there; the plugin does not create its own thread here. So if you block the message thread, you probably block the UI of the entire host application and of other plugins.
As mentioned above, there might be and usually will be more threads involved in a host and those threads might be used to call into the plugin. Things I’ve seen so far in the wild include
Multiple parallel audio threads
Threads exclusively used to call setStateInformation/getStateInformation
Threads exclusively used to apply parameter changes
A thread used for OpenGL rendering
While some of these threads are more or less synced – e.g. you can expect that no two audio threads call processBlock on a specific plugin instance at the same time – most of the time all these other threads are not synced with the message thread.
Concurrent threads calling into your code are no problem as long as no two threads write to the same variables, and no thread writes to a data structure while another is in the middle of reading from it. To avoid such conflicts, there are two strategies. The first is using mutexes that are locked before accessing such resources, which blocks other threads from accessing them until the current one has finished. The other is defining a rule that certain functions are only ever called from a single thread – this is what’s usually done for UI rendering.
Now think of a slider on your GUI that represents an audio parameter: what happens if the host updates an automated parameter value during playback from a dedicated parameter update thread or from an audio thread? Some listener callback probably reports the new parameter value to your slider, which then sets its new value. However, if the message thread is just in the middle of repainting that slider when this happens, bad things could happen. To avoid this and to honour the rules, you have to make the slider value update happen asynchronously on the message thread. So instead of directly setting the new value in the parameter change callback, you use a technique like callAsync, which will update the slider to the new value the next time the message queue is processed. Note that if you use a JUCE SliderAttachment, this is handled by the framework for you.
Another example: sending out a request to your license server to find out whether the plugin is still licensed. You’d probably create a temporary thread from within your plugin code that waits for the response, so as not to block the message thread while the response is pending – which could take some time depending on your internet connection. Once that thread has processed the response, it might want to update some UI elements to show that the plugin is in a licensed state. That update should then be done via callAsync.
I don’t know the exact lines of code you have in mind, but I’ve come across cases where calling functions asynchronously from the message thread could break up cyclic calls or deadlocks, simply because the functions are not called immediately but in the near future – which is often good enough when it comes to UI.
Thank you so much for this detailed reply! This definitely clarifies a lot of things for me, notably a basic fact I was forgetting: plugins are DLLs. Of course I knew this, but somehow I forgot about it and started viewing them as separate processes connected to the DAW. I guess it’s easy to forget since JUCE basically bundles everything to make a standalone app, and the DLL part is completely hidden. So to recap: a JUCE standalone creates a message loop itself, whereas a JUCE plugin just connects to the message loop of the DAW and runs on its main thread / message thread, and the DAW decides itself from which audio thread to call the processBlock functions? Then I guess there’s some kind of identifier attached to each message which allows my plugin to only listen to the messages that arose from itself? Like, the message thread itself is shared between the DAW and all plugins, but each plugin does not technically receive all the messages from everyone else, right? That would seem chaotic to me.
I think the audio-thread side is clear for me. Now, just to expand a bit more on your example of parameters being automated by the DAW, which was exactly where I was most confused and which I think encompasses a lot of complexity in itself. If I understand correctly:
Depending on the DAW, it’s possible that parameter automations are performed from a thread other than the message thread. To avoid problems, we program things such that only the message thread makes the actual updates within our plugins, inside the event loop where everything is queued and ordered, and therefore we need async callbacks. So isThisTheMessageThread() can be used to decide whether or not the async behaviour is needed, depending on the DAW that runs everything, I reckon. And maybe that’s what’s done in SliderAttachment and the like.
Now, imagine that I am not automating a parameter but modifying it by hand from the UI of the DAW. Assuming the DAW UI is rendered on the message thread (i.e. no OpenGL stuff), then there’s no difference between updating from the DAW’s UI and updating from the UI of my plugin, right? (because they’re running on the same thread). I suspect there’s actually still a lot hidden, which is why I’m asking.
Sorry for so many questions – I’m actually discovering many areas of software engineering through plugin development, and I feel like I’ll be more confident writing code if I have a better understanding of things under the hood.
One detail regarding SliderAttachments and automation coming from some other thread: the update to the Slider is always made asynchronously, so that the slider value is changed only on the main thread (that’s required: changing UI components from threads other than the main thread likely causes undefined behavior).
But any previous callbacks that were invoked when the DAW changed the value of your automated parameters were all triggered synchronously, called by whichever DAW thread made the change – which may be an audio thread, the main thread, or some other worker thread.
If your audio code expects changes to parameter values to only happen on the main thread, then you should do the “async decoupling” yourself: when you receive a parameter change, queue that value change for later processing on the main thread, not just the UI updates that follow from it.
Sorry, I haven’t understood the question in the second part of your post.
Thanks a lot for your reply and thanks for these clarifications.
Regarding my question (which was certainly confusing when re-reading it), I think your first two paragraphs actually answered it! I was mainly wondering if there could be some fundamental difference between:
an update made from the DAW UI (i.e. manually moving the sliders exposed by the DAW)
an update made from the UI of the plugin.
I was thinking: “in both cases, the update is necessarily triggered from the message-thread so there’s no way either of these setups end up updating the parameter outside the message thread”.
Async calls like MessageManager::callAsync() are super handy when you need to safely update the UI from a background thread without blocking anything or messing up the message thread.
I ran into a similar issue when handling async callbacks inside threads that aren’t the message thread. For my project, which connects to ActiveCalls cloud-based call center software, I had to make sure any UI updates or message-based calls went through the main thread using MessageManager::callAsync. Otherwise, weird UI bugs popped up randomly. Using lambda captures helped keep things neat and safe when switching threads.