Multi-Core Plugin hosting - best strategy?


#1

Hi Jules and other multicore / real-time gurus out there!

I am working on a project which will require optimised plugin hosting. I want to make full use of all the CPU cores available.

Under my scheme, any hosted plugin will run in a single CPU/process, but I plan to launch several instances of my hosting code, with each host instance having its own memory to house a pool of plugin instances. The idea is that requests for new plugin instances get farmed out in round-robin fashion to balance load across the CPU cores.

The plan is that these multiple “hosts” will receive MIDI and return audio blocks back to a central “controller”, which will then coordinate the flow of MIDI/audio to/from the main app and the “plugin host instances”.

There seem to be two ways I could do this:

A) Have a single JUCE app and use separate pre-emptive threads (or maybe just co-operative threads, if this still allows the OS to farm separate co-operative threads out to different CPU cores) for each host instance. Each pre-emptive thread would have its own memory to hold a collection of plugin instances. Each host thread would communicate back to the main app thread via shared memory. Host threads would be woken from sleep each time a new audio block is required (a rough sketch of this shape follows after option B).

B) Implement each host instance as a SEPARATE JUCE console app running in the background, all communicating back to the main app via shared memory plus IPC/signals etc.
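
To make option A a bit more concrete, here is a minimal sketch of the kind of host thread I have in mind. HostWorker and its members are names I've made up for illustration, and a real version would hand blocks back through lock-free FIFOs rather than the hand-waved shared state shown here:

```cpp
// Sketch only: one pre-emptive worker thread per "host instance" (option A).
// HostWorker is a hypothetical class, not part of JUCE.
#include <JuceHeader.h>

class HostWorker : public juce::Thread
{
public:
    HostWorker() : juce::Thread ("Plugin host worker") {}

    ~HostWorker() override
    {
        signalThreadShouldExit();
        blockRequested.signal();     // wake the thread so it can exit cleanly
        stopThread (2000);
    }

    // Called by the controller thread when the next block is needed.
    void requestBlock()              { blockRequested.signal(); }

    void run() override
    {
        while (! threadShouldExit())
        {
            blockRequested.wait (-1);            // sleep until a block is requested

            if (threadShouldExit())
                break;

            for (auto* plugin : plugins)         // this host's own pool of plugin instances
                plugin->processBlock (buffer, midi);

            // ...hand the rendered buffer back to the controller via shared memory / FIFO...
        }
    }

private:
    juce::WaitableEvent blockRequested;
    juce::OwnedArray<juce::AudioPluginInstance> plugins;
    juce::AudioBuffer<float> buffer { 2, 512 };
    juce::MidiBuffer midi;
};
```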

The separate app (process) approach B) is one that comes highly advocated by my fellow developers working in my other language/IDE - REALStudio/REALbasic. But this is partly because that framework is not thread-safe and uses its own co-operative threading model.

Everyone states that B is just much safer and easier to debug.

I should add that there is no complex “plugin chaining” needed, which simplifies how my plugin instances can sit across processes/threads.

Any thoughts, JUCE gurus?


#2

It sounds like you are optimizing code that hasn’t even been written yet. Why not just first implement the plugin, leave it up to the host to put different instances into different threads, and then use a profiler to find out where your bottleneck is?


#3

I am not writing any plugin. This is an application that, as part of its operation, requires a large number of plugins to be hosted.
The code for this has already been written and works in a single-threaded world, using just a single host instance.

But I am now thinking of how to modify all this to support several instances of the “hosting” portion to allow maximal usage of all available CPU cores. Each “host” will manage its own pool of plugin instances.

So basically I’m asking how best to spread the plugin HOSTING workload across cores in a JUCE app :slight_smile:


#4

Well, actually I WILL ALSO be writing a plugin, but that comes later and, although connected to this project, it has no bearing on the question I asked.


#5

Oh…that is quite different.

Why not just use a single process with one thread per plugin instance?


#6

Yes - after a strong coffee my muddied thinking has clarified a little, and I now think your idea is doable.

If I use the threads provided by ThreadPool in JUCE, will they be co-operative or pre-emptive?

And if co-operative (via the OS thread system - OSX in my case), will OSX still dole out the work to separate cores, or does everything within the JUCE app process, including its threads, run on one CPU core?


#7

The best way to take advantage of multiple cores is for the plugin itself to do it - i.e. break up the audio block work into pieces, one per core, or set up a pipeline of steps inside the plugin. This makes the best use of the limited CPU cache. Unfortunately, for most types of plugins, processing in parallel is not possible due to algorithmic constraints.
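Just to illustrate the “one piece per core” idea, here is a rough sketch, assuming the processing of each region of the block is independent (which, as I said, it usually isn't). A real implementation would keep a persistent pool of workers rather than spawning threads per block; processBlockInParallel and processRegion are made-up names:

```cpp
// Sketch: fork/join split of one audio block across cores, assuming each region
// can be processed independently. Per-block thread creation is shown only for
// brevity; real code would reuse a persistent worker pool.
#include <JuceHeader.h>
#include <functional>
#include <thread>
#include <vector>

static void processBlockInParallel (juce::AudioBuffer<float>& buffer,
                                    std::function<void (float*, int)> processRegion)
{
    const int numThreads = (int) std::thread::hardware_concurrency();
    const int numSamples = buffer.getNumSamples();
    const int chunk      = juce::jmax (1, numSamples / juce::jmax (1, numThreads));

    std::vector<std::thread> workers;

    for (int ch = 0; ch < buffer.getNumChannels(); ++ch)
    {
        float* data = buffer.getWritePointer (ch);

        for (int start = 0; start < numSamples; start += chunk)
        {
            const int len = juce::jmin (chunk, numSamples - start);
            workers.emplace_back ([=] { processRegion (data + start, len); });
        }
    }

    for (auto& w : workers)   // join: wait for every region before returning the block
        w.join();
}
```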

How the operating system does its thread scheduling is implementation-specific and not something you can rely on, but in general all threads get fairly equal time slices. Usually, threads are not tied to cores; when a thread is scheduled for a slice, it runs on the next available core.

For your purposes, since the number of plugins greatly exceeds the number of cores, one thread per plugin should be sufficient.

Threads (including those used by a JUCE ThreadPool) are pre-emptive.
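If you do go the thread-per-plugin (or pool) route, a minimal sketch using JUCE's ThreadPool might look like this. PluginSlot, PluginRenderJob and renderAllPlugins are made-up names, and a production version would need lock-free handover to the real-time audio thread rather than blocking waits:

```cpp
// Sketch: render each hosted plugin's block as a ThreadPool job, then wait for
// all jobs so the rendered blocks can be handed back to the controller.
#include <JuceHeader.h>
#include <memory>

struct PluginSlot                                   // hypothetical per-plugin state
{
    std::unique_ptr<juce::AudioPluginInstance> plugin;
    juce::AudioBuffer<float> buffer { 2, 512 };
    juce::MidiBuffer midi;
};

class PluginRenderJob : public juce::ThreadPoolJob
{
public:
    explicit PluginRenderJob (PluginSlot& s)
        : juce::ThreadPoolJob ("render"), slot (s) {}

    JobStatus runJob() override
    {
        slot.plugin->processBlock (slot.buffer, slot.midi);
        return jobHasFinished;
    }

private:
    PluginSlot& slot;
};

// Somewhere in the controller, once per audio block:
void renderAllPlugins (juce::ThreadPool& pool, juce::OwnedArray<PluginSlot>& slots)
{
    juce::OwnedArray<PluginRenderJob> jobs;

    for (auto* slot : slots)
        pool.addJob (jobs.add (new PluginRenderJob (*slot)), false);

    for (auto* job : jobs)                          // block until every plugin has rendered
        pool.waitForJobToFinish (job, -1);
}
```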


#8

Cheers

Yes, I won't be doing anything inside the plugins themselves - that's down to the plugin developers - but thanks for clarifying the info on JUCE threads being pre-emptive.

Your suggestion of a thread per plugin sounds good.