AudioPluginHost CPU core usage

Hi, I’m studying the AudioPluginHost code because I’m interested in creating my own mini DAW.

I’d like to understand how to make the most of the CPU cores. For example, I notice that in Logic Pro, every time I instantiate a plugin a core is awakened, while in AudioPluginHost I can see with macOS’s Activity Monitor that only the first three CPUs are doing any work. What’s going on there? Is there a way to optimize it? Which approach should I adopt?

Thank you very much in advance!

The JUCE AudioPluginHost uses just one CPU core (or rather, one thread) for the audio processing, since it’s based on the JUCE AudioProcessorGraph, which doesn’t multithread the audio processing.

The activity you see across multiple cores comes from the GUI thread, the single audio thread, possibly some hosted plugins using their own threads for internal tasks, and the OS migrating threads between the cores. So, leaving aside the OS thread migration and the plugins’ own threads, you would most likely see the JUCE AudioPluginHost using about two CPU cores at most.

There is no JUCE-provided, ready-to-use solution for this.

:scream: What bad news! Is there really no way to achieve this??
So can I not develop my app with JUCE at all, or are there some hacks in the framework that I could use to achieve this?

You will have to come up with strategies for:

  • how to balance the load over the available cores
  • how to synchronise the results of the threads, and how to deal with it if one or several of them don’t finish in time (see the sketch after this list)
  • whether you can distinguish between interactive parts (low latency) and fixed tracks (already recorded, no plugin GUI open, no controller on latch), so you can pre-render them with larger buffers
  • how to keep the necessary recorded audio available in memory, since streaming from disk might be too slow
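
For the synchronisation point above, here is a minimal sketch of one possible shape for it. Everything in it is hypothetical (RenderSync and its members are made-up names), and a real engine would avoid locking on the audio thread in favour of lock-free techniques; this only illustrates the deadline problem:

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical sketch: the audio callback hands one render job per track
// to worker threads, then waits for them with a deadline. If the deadline
// passes, it carries on with whatever is ready (e.g. silencing the late
// tracks) instead of blocking the audio device.
struct RenderSync
{
    std::mutex mutex;
    std::condition_variable done;
    std::atomic<int> tracksRemaining { 0 };

    void beginBlock (int numTracks)   { tracksRemaining.store (numTracks); }

    // Called by each worker thread when its track has been rendered.
    void trackFinished()
    {
        if (tracksRemaining.fetch_sub (1) == 1)  // this was the last track
        {
            std::lock_guard<std::mutex> lock (mutex);
            done.notify_one();
        }
    }

    // Called from the audio thread; returns false if some tracks were late.
    bool waitForAllTracks (std::chrono::microseconds deadline)
    {
        std::unique_lock<std::mutex> lock (mutex);
        return done.wait_for (lock, deadline,
                              [this] { return tracksRemaining.load() == 0; });
    }
};
```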

To pre-process, there is BufferingAudioSource, which you can use. It uses a TimeSliceClient, which effectively uses a thread.
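
A minimal usage sketch, assuming a WAV file on disk (the path and the 48000-sample read-ahead are placeholder values, not recommendations):

```cpp
#include <JuceHeader.h>

void playWithReadAhead()
{
    // The background thread that BufferingAudioSource registers its
    // TimeSliceClient with; it performs the disk reads.
    static juce::TimeSliceThread readAheadThread { "audio read-ahead" };
    readAheadThread.startThread();

    juce::AudioFormatManager formatManager;
    formatManager.registerBasicFormats();

    std::unique_ptr<juce::AudioFormatReader> reader (
        formatManager.createReaderFor (juce::File ("~/track.wav")));  // placeholder path

    if (reader == nullptr)
        return;

    juce::AudioFormatReaderSource readerSource (reader.get(), false);

    // Pre-buffers samples ahead of the play position on the background
    // thread, so the audio callback never has to touch the disk.
    juce::BufferingAudioSource buffered (&readerSource, readAheadThread,
                                         false,   // don't delete the source
                                         48000,   // samples to read ahead
                                         2);      // stereo

    buffered.prepareToPlay (512, 44100.0);
    // ... hand 'buffered' to the rest of your playback chain ...
}
```
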
To keep audio in memory, there is MemoryMappedAudioFormatReader.
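
Again as a rough sketch (placeholder path, arbitrary block size): memory-mapping means reads are served from mapped memory rather than going through normal disk streaming:

```cpp
#include <JuceHeader.h>

void readFromMappedFile()
{
    juce::WavAudioFormat wavFormat;

    std::unique_ptr<juce::MemoryMappedAudioFormatReader> reader (
        wavFormat.createMemoryMappedReader (juce::File ("~/track.wav")));

    if (reader == nullptr || ! reader->mapEntireFile())
        return;

    juce::AudioBuffer<float> block (2, 512);

    // This read comes from the memory-mapped region, so once the pages
    // are resident there is no disk seek on the calling thread.
    reader->read (&block, 0, block.getNumSamples(), 0, true, true);
}
```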

To get a good setup, you need to know more about the routing options. The AudioProcessorGraph is probably too generic to allow a solution that works in all cases.

For a quick win, looking at the Tracktion Engine might be a good idea.

Thank you very much! I’ve just downloaded the Tracktion Engine and started to look around it (unfortunately there’s no documentation for understanding things quickly). Based on your experience, is it designed to work with multicore processing, or are some tricks needed here as well?

It really depends on what restrictions you can accept. If many tracks can individually be switched to record, it can become infinitely difficult; whereas if you have only one track recording at a time, and everything else is streamed from existing material, it can be (almost) a piece of cake…

Having sends and side-chains adds another level of complexity to the problem. Latency compensation (especially in combination with sends) will also probably create headaches.
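
To illustrate the latency compensation part with a toy example (made-up numbers, not any real engine’s API): every path into a mix bus reports the summed latency of its plugins, and each path then has to be delayed to line up with the slowest one:

```cpp
#include <algorithm>
#include <vector>

// Toy illustration of plugin delay compensation. Returns the extra delay
// (in samples) a path needs so that all paths arrive at the mix bus
// aligned with the slowest one.
int requiredDelayForPath (int pathLatencySamples,
                          const std::vector<int>& allPathLatencies)
{
    const int maxLatency = *std::max_element (allPathLatencies.begin(),
                                              allPathLatencies.end());
    return maxLatency - pathLatencySamples;
}

// Example: parallel paths with plugin latencies of 0, 64 and 512 samples
// need delays of 512, 448 and 0 samples respectively. Sends make this
// harder because the tapped signal inherits the source track's latency
// and must still line up at the destination bus.
```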

The engines I wrote were rather trivial in that regard, so hopefully you’ll get more tangible estimates from people who wrote full-featured DAWs, like jules and dave96, or the many other users here who have done that…

If you mean “does the tracktion engine do multi-threaded rendering” then yes, it does.

Certainly if you’re a beginner then it’d take you years to write a decent multi-threaded renderer, so letting the engine handle all that for you is probably your best bet.

Thank you again for your replies! I think I need to learn more about the Tracktion Engine’s capabilities to understand how much has already been implemented and how much still needs to be implemented for my purposes :slight_smile:

Thank you for your reply; my question was about real-time processing. To give you some context: I’m working on an Audio Unit mixer for live performances, a little version of MainStage or a desktop (and mobile) version of AUM, so: some audio/instrument/MIDI channel strips, and some buses to route signals to FX tracks and to route channels to the outputs of audio devices.
I have 18 physical CPU cores, for example, and I’d like to know how well my program, using the Tracktion Engine, could exploit them. Thank you again in advance.

The tracktion engine currently uses a thread pool to parallelise on a track-by-track basis.
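
Roughly speaking, the shape is something like this hypothetical sketch (the Track type and its members are made up for illustration; this is not the engine’s actual code):

```cpp
#include <JuceHeader.h>
#include <atomic>
#include <vector>

struct Track    // hypothetical stand-in for one track's plugin chain
{
    juce::AudioBuffer<float> output { 2, 512 };
    void processNextBlock()  { /* render this track's plugins into 'output' */ }
};

// Each track is independent until the mix bus, so its plugin chain can be
// rendered as a separate job; the mix then waits for all jobs to finish.
void renderBlock (std::vector<Track*>& tracks,
                  juce::AudioBuffer<float>& mixBus,
                  juce::ThreadPool& pool)  // e.g. ThreadPool (SystemStats::getNumCpus())
{
    if (tracks.empty())
        return;

    juce::WaitableEvent allDone;
    std::atomic<int> remaining { (int) tracks.size() };

    for (auto* track : tracks)
        pool.addJob ([track, &remaining, &allDone]
        {
            track->processNextBlock();

            if (remaining.fetch_sub (1) == 1)   // last job signals the mixer
                allDone.signal();
        });

    allDone.wait();   // wait for every track before summing into the mix bus

    for (auto* track : tracks)
        mixBus.addFrom (0, 0, track->output, 0, 0, mixBus.getNumSamples());
}
```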

But really, the easy way to find out how the engine performs is just to run Waveform and set up a project with the kind of plugins and bussing you want.

Of course! My concern now is getting a first beta out in relatively little time so I can test it (and spend some time understanding, at a basic level, which gestures would be correct to use on mobile, for example). But to do that I must test it in a real-world project, using a lot of tracks and plugins. Obviously by release time I’ll need to spend a lot of time optimizing, but for now, even if it’s not totally optimized, using all the physical cores of a PC or mobile device is important!

Thank you so much again!! :slight_smile: