There is an internal branch with AudioWorkgroup support; hopefully it will make its way onto develop, probably after we’ve dealt with any fires caused by the recent release on master. That being said, you can now call startRealtimeThread() on the Thread class.
Thanks for the update, looking forward to that.
Will there be any way to force threads to run only on P-cores (on Apple silicon)? Or is it still up to the OS, based on the QoS settings/RealtimeOptions passed to startRealtimeThread()? Or is there some other trick to keep threads on P-cores and stop them getting demoted to E-cores?
I’d love to see what the AudioWorkGroup branch looks like, just to get an idea of how the client code should be structured. It’s a shame it’s not available on a separate branch.
I use juce::ThreadPool. As an experiment I tried modifying the JUCE code to call startRealtimeThread() instead of startThread(), but this didn’t result in any improvement.
All of our customers who purchased M1 Pro/Max/Ultra based Macs have been greatly disappointed, and one (so far) has reported similar problems on a new PC using one of the latest Intel Alder Lake chips (which also have separate E/P cores). Without a way to ensure that our own threads stay on P cores, this is gradually going to kill our product entirely.
What version of JUCE are you (or were you) using? There was a bug that, until very recently, meant startRealtimeThread() wasn’t really doing anything!
Realtime threads on macOS are achieved by upgrading the existing thread, so if you want you could pull in the headers and do it yourself. But if you take a look at the latest version on develop, I’ll be surprised if you don’t notice a difference. I ran multiple tests while working on the HighResolutionTimer and definitely found a very noticeable difference. You should tweak the RealtimeOptions to ensure you get the right settings, although IME I struggled to get them to make much of a difference. There are some settings that will guarantee it doesn’t work, but I think we should have protected against those, or at least put jasserts in for the cases we could find.
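For anyone wanting to “pull in the headers and do it yourself” as mentioned above, here’s a rough sketch of upgrading the calling thread via the Mach time-constraint policy, which is the usual mechanism for realtime threads on macOS. The period/computation values are illustrative placeholders, not JUCE’s defaults, and the function names are my own:

```cpp
#include <cstdint>

// Convert milliseconds to Mach absolute-time units, given the machine's
// timebase (numer/denom as reported by mach_timebase_info()).
static uint64_t millisecondsToAbsoluteTime (double ms, uint32_t numer, uint32_t denom)
{
    return static_cast<uint64_t> (ms * 1.0e6 * (double) denom / (double) numer);
}

#ifdef __APPLE__
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <pthread.h>

// Ask the scheduler to treat the calling thread as a time-constraint
// (realtime) thread: it runs for up to computationMs out of every periodMs.
static bool promoteCurrentThreadToRealtime (double periodMs, double computationMs)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info (&tb);

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t) millisecondsToAbsoluteTime (periodMs,      tb.numer, tb.denom);
    policy.computation = (uint32_t) millisecondsToAbsoluteTime (computationMs, tb.numer, tb.denom);
    policy.constraint  = policy.period;  // must finish within one period
    policy.preemptible = 1;

    return thread_policy_set (pthread_mach_thread_np (pthread_self()),
                              THREAD_TIME_CONSTRAINT_POLICY,
                              (thread_policy_t) &policy,
                              THREAD_TIME_CONSTRAINT_POLICY_COUNT) == KERN_SUCCESS;
}
#endif
```

Note that passing unreasonable values (e.g. computation larger than the period) is one of the ways you can “guarantee it doesn’t work”, so sanity-check them against your actual buffer size and sample rate.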
I was using JUCE 7.0.6, release branch. Thank you for the suggestions, but I prefer to avoid the whole guessing-game of debugging this myself, and wait for the official solution which has been in the works for over a year.
I think that, regardless of the startRealtimeThread() function (where you can define the ‘max expected’ processing time for the thread, and consequently affect the QoS/thread-policy settings), for glitch-free audio on Apple silicon, where multiple threads are working to fill the same final output buffer for the main Core Audio thread/callback, those threads need to be assigned to the same audio workgroup to ensure the OS knows they have to meet the same deadline.
(I remember reading somewhere that if you don’t meet the max expected process time when running your thread, then your thread might also get demoted.)
Quote from Apple dev pages: “Each Core Audio device provides a workgroup that other realtime threads can join… Joining the audio device workgroup tells the system that your app’s realtime threads are working toward the same deadline as the device’s thread.”
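For reference, the join described in that Apple quote is done with the `os_workgroup` API: you fetch the device’s workgroup via a Core Audio property, then join it from each of your render threads. A minimal macOS-only sketch (error handling trimmed; helper names are mine):

```cpp
#ifdef __APPLE__
#include <CoreAudio/CoreAudio.h>
#include <os/workgroup.h>

// Fetch the os_workgroup associated with a device's I/O thread (macOS 11+).
static os_workgroup_t getDeviceWorkgroup (AudioObjectID deviceID)
{
    AudioObjectPropertyAddress addr {
        kAudioDevicePropertyIOThreadOSWorkgroup,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };

    os_workgroup_t wg = nullptr;
    UInt32 size = sizeof (wg);
    AudioObjectGetPropertyData (deviceID, &addr, 0, nullptr, &size, &wg);
    return wg; // caller is responsible for releasing this reference
}

// Call at the start of each realtime worker thread. Keep the token around:
// the same thread must later call os_workgroup_leave() with it.
static bool joinDeviceWorkgroup (os_workgroup_t wg, os_workgroup_join_token_s& token)
{
    return os_workgroup_join (wg, &token) == 0;
}
#endif
```

The thread should already be a realtime (time-constraint) thread before joining, and it must leave the workgroup before it exits or before the device changes.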
I’m still waiting for an answer on whether we can ensure those threads also get assigned only to P-cores (everything I’ve read says otherwise, and you have to hope that a sudden load on an E-core doesn’t hold up the audio threads currently running there, or delay the switch over to P-cores when the OS scheduler decides it’s necessary… If there’s no way, why does Logic have GUI options to restrict usage to P-cores only?). JUCE team, any comment?
I’d also like to know if the audio workgroup can be retrieved and used for VST3 plugins now?
Any comments about that from JUCE team would be much appreciated - thanks!
(Hopefully all will be revealed soon anyway.)
I wouldn’t be surprised if the Core Audio team implemented a “secret” function just for Logic, so that setting doesn’t count as evidence of a possible way to enforce P-cores only. I’m absolutely certain that there is none.
But I read in a different thread here that using audio workgroups pretty much solved all of their problems. The one thing I personally just can’t grasp is how audio workgroups help an Audio Unit. Is an Audio Unit allowed to attach more threads to the DAW’s workgroup? That doesn’t seem right to me from a conceptual standpoint.
The workgroup is related to the hardware I/O callback, and there’s nothing stopping plugins ‘attaching’ multiple extra threads to it; in fact Apple recommends it if your plugin does parallel realtime processing. It definitely improves performance in my case (multi- vs single-threaded processing in general), though that benefit might shrink if lots of other plugins are running in the same DAW session (and then likely getting assigned to separate threads/cores, depending on the host’s implementation). Either way, it gives the OS a chance to spread the load over cores, but you have to ensure your threads take roughly the same amount of time so one doesn’t hold the others up and waste CPU time. That’s why I split the work per voice, with each voice having the same audio processing blocks.
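The per-voice split described above can be sketched platform-agnostically (leaving out the workgroup join and realtime promotion, and with a dummy stand-in for the actual DSP). Voices are interleaved across workers so each thread does a similar amount of work, and each worker mixes into its own buffer so no locking is needed until the final sum:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Stand-in for real per-voice DSP: each voice adds a constant into the buffer.
static void renderVoice (int voice, std::vector<float>& buf)
{
    for (auto& s : buf)
        s += 0.01f * (float) (voice + 1);
}

// Render numVoices voices across numThreads workers, one partial mix buffer
// per worker, then sum the partials into the output buffer.
static std::vector<float> renderBlock (int numVoices, size_t numSamples, unsigned numThreads)
{
    std::vector<std::vector<float>> partials (numThreads, std::vector<float> (numSamples, 0.0f));
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < numThreads; ++t)
        workers.emplace_back ([&, t]
        {
            // Interleave voices across workers so each does similar work.
            for (int v = (int) t; v < numVoices; v += (int) numThreads)
                renderVoice (v, partials[t]);
        });

    for (auto& w : workers)
        w.join();

    std::vector<float> out (numSamples, 0.0f);
    for (auto& p : partials)
        for (size_t i = 0; i < numSamples; ++i)
            out[i] += p[i];

    return out;
}
```

Note that spawning threads inside the render call, as done here for brevity, is not realtime-safe; a real implementation would keep a persistent pool of workgroup-joined worker threads and signal them with a lock-free mechanism each block. The sketch only shows the work partitioning and the per-worker mix buffers.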
A related observation:
I’m not sure how Ableton Live 11 handles multi-threading on the Mac (I’ve yet to check), but I observed on Windows 11 that it was calling renderBlock (possibly with small sub-blocks of the total expected numSamples) for just one plugin (mine; I logged the render calls) from multiple different threads, even though that implies some extra synchronization/ordering work. It’s possible that DAWs using this approach on Apple silicon also create lots of threads registered to the selected audio output device’s workgroup; at the very least they most likely do it per plugin instance.
We have added Audio Workgroup support here: FR: Thread-Priority vs Efficiency/Performance Cores - #50 by t0m
I’ve run into a blog post by Blue Cat saying this issue is solved in Sonoma. Any details about this?
It was a mistake apparently…
Hi, are there any changes needed to our code, which just uses the single audio thread created by JUCE, to ensure we run on the P-cores?
I think this has been mentioned here and there multiple times: this is not possible, and I doubt it ever will be.
thx - there was a lot to read through!
But then there’s this (23rd Nov):
Which offers some hope. The thing is, if there’s no way to ensure P-cores on the Mac, then it pretty much rules out any multi-core audio development.
Haven’t looked at the blog article in depth but it seems to offer - or at least suggest - a solution.
From the article - “we now have a solution that seems to be working pretty well for all cases so far, including VST or VST3 plug-ins”
Unfortunately there’s no mention of what that solution is (I left a comment asking about it).