InterprocessConnection spins up

Hi everybody,
I am using an InterprocessConnection to synchronise 20 plugins with a master plugin (an InterprocessConnectionServer). The only thing the plugin does is attenuate the gain.
I just realised that the IPC threads are spinning freely, burning CPU.

The readNextMessageInt() method is supposed to block, but it doesn’t:

To check, I added breakpoints inside if (bytesInMessage > 0) and if (bytesInMessage < 0); neither is hit. But if I put a breakpoint inside the thread loop, it is hit straight away.

Plus, my CPU readings go up to ~150%.

I think that shouldn’t happen.

Is there anything I need to do on my end to throttle the threads down?



Could somebody please have a look at the IPC threads and check whether readNextMessageInt() actually blocks, like it is supposed to?
Because to me it seems like it is not blocking, and therefore loops at max speed…

Have you selected callbacks on the message thread, and are you using a socket or a pipe?

Thanks for looking into it.
I use the default constructor, i.e. callbacks on the message thread. And I have a timer running at 1 Hz:

void MyAudioProcessor::timerCallback()
{
    if (! connection->isConnected())
        connection->connectToSocket ("", 12988, 500);
}

I’ve just checked in one of my apps that uses socket IPC and I don’t have any spinning. The only time a breakpoint in readNextMessageInt is hit is when I’ve actually received a message to process.

Can you get a minimal working example going?

OK, while putting together a minimal example it turns out that readNextMessageInt actually does block. But still, one single instance of a master and a slave eats 10% CPU, versus 2% without it.

I use it as a mixer plugin, so one mixing algorithm controls 20 slave plugins, one in each mix channel…

Had to cloak the zip as .mm - hope you can find something useful…

Thanks @t0m!

Please try changing juce_InterprocessConnection.cpp line 328

auto ready = socket->waitUntilReady (true, 0);

to use a non-zero timeout. Something like 20 ms.

Hi @t0m, thanks for looking into it.
In my normal use case with 34 instances, I first tried adding a 5 ms timeout, which reduced the overall load from 140% to 95-105%, so at first I wasn't impressed…
Then I tried 20 ms, and it went down to 35-40%.
So yes, this is the point where it burns CPU, but it seems it's not really usable for realtime?

I am not completely sure: does adding a 20 ms timeout mean that I may end up waiting up to 20 ms for my packet, or is it a blocking wait that returns as soon as there is activity on the socket?

I’ll try to use a pipe now, maybe that gets me somewhere…


I don’t think increasing that number would make any difference to the responsiveness - it’s just the maximum time interval that it’ll wait between checks about whether the thread should exit.


In that case I have no problem using 100 ms, and the project now idles between 8-11%, just as it is supposed to.

Thanks @jules and @t0m.
Will you change that in the JUCE codebase?

If Tom can test it in his app then no objections from me.

Yep, I just wanted to make sure it fixed your issue first. The change is just making its way through our CI then it’ll appear on the develop branch.


Thanks @t0m, it’s working fine now!