Interprocess Pipes

Hello,

I’m trying to establish IPC between two plugin types that share some common code for communications handling. The named pipe solution looks good, but there is something I don’t quite understand.

Here’s my connection procedure (a rough sketch of the idea follows the list):

  • In the communications manager constructor (which is bound to a plugin type and is created when the first instance of that plugin type appears), the name of the pipe used to talk to the other type is built deterministically, so both sides end up with the same name.

  • Plugin A (first to be instantiated) tries to connect to an existing pipe, which should fail, as the pipe does not exist yet. First problem: the connectToPipe method succeeds every time, even when the pipe files PipeName_in and PipeName_out are missing from /tmp (I’m on macOS).

  • If it cannot connect to an existing pipe, it creates the pipe itself and waits for connections.

  • Plugin B is now instantiated and follows the same algorithm (as they share this code).
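Roughly, the manager does something like this (a simplified sketch, not my exact code; PluginComms and makePipeName are placeholder names, and the exact connectToPipe/createPipe signatures may differ in newer Juce versions, which take an extra timeout argument):

```cpp
#include "juce.h"   // or whatever Juce header the project uses

// Simplified sketch of the communications manager's connection logic.
class PluginComms : public InterprocessConnection
{
public:
    void start()
    {
        const String pipeName (makePipeName());   // same name computed on both sides

        // Step 1: try to attach to a pipe the other plugin type may already own...
        if (! connectToPipe (pipeName))
        {
            // Step 2: ...otherwise create the pipe ourselves and wait for the other side.
            createPipe (pipeName);
        }
    }

    void connectionMade()                       {}
    void connectionLost()                       {}
    void messageReceived (const MemoryBlock&)   {}

    static String makePipeName()
    {
        // Built deterministically so both plugin types end up with the same string.
        return "MyCompany_MyPluginSuite_IPC";
    }
};
```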

Is it normal that the first plugin successfully connects to a pipe that does not exist? And is this method robust for creating, destroying and re-creating pipes, or is there a need to reboot the system or something?

I also tried the opposite order (sketched after the list):

  • Plugin A tries to create the pipe. If that succeeds, it waits for connections; otherwise it tries to connect to the pipe, on the assumption that createPipe failed because the pipe already exists. For plugin A the creation call succeeds, because the pipe does not exist yet, which is normal.
  • Plugin B does the same. Its creation call succeeds as well, even though the pipe files already exist and are in use by the first plugin.
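In sketch form (same placeholder names as above, pipe name passed in for simplicity):

```cpp
// Reversed order: create first, connect only as a fallback.
void startCreateFirst (PluginComms& comms, const String& pipeName)
{
    // For plugin A, createPipe() succeeds because no pipe exists yet: that's normal.
    // For plugin B, I would expect createPipe() to fail because the pipe already
    // exists and is in use, but it reports success too.
    if (! comms.createPipe (pipeName))
        comms.connectToPipe (pipeName);
}
```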

Actually, what would be great is a DoesPipeExist method that only checks for the presence of a pipe, without trying to create it or connect to it.
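In the meantime, the only workaround I can think of is something like this, relying on the _in/_out files I can see in /tmp on macOS. It depends on an implementation detail of the pipe code, so it’s just a hack, not a proper solution:

```cpp
// Hacky existence check based on the files the pipe code leaves in /tmp on
// macOS (an implementation detail, so this is fragile by design).
static bool doesPipeSeemToExist (const String& pipeName)
{
    return File ("/tmp/" + pipeName + "_in").exists()
        && File ("/tmp/" + pipeName + "_out").exists();
}
```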

Thanks!

(BTW: I’m using 1.53.104).

Edit: a thought just occurred to me: couldn’t it be a problem that the two plugins are not actually processes of their own, but threads within the host’s common process? If that’s the case, I would need to fall back to the sockets solution.
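If I do go back to sockets, the pattern would be roughly this (a sketch only: the port number is arbitrary, CommsServer is a placeholder, and PluginComms is the placeholder class from the sketch above; in real code the created connections would need to be tracked and deleted):

```cpp
// Server side: hands out a connection object for each incoming socket.
class CommsServer : public InterprocessConnectionServer
{
    InterprocessConnection* createConnectionObject()
    {
        return new PluginComms();
    }
};

// Same connect-or-listen logic as with the pipes, but over a local socket.
void startSocketComms (PluginComms& client, CommsServer& server)
{
    // Try to reach a plugin type that is already listening...
    if (! client.connectToSocket ("127.0.0.1", 52712, 2000))
        server.beginWaitingForSocket (52712);   // ...otherwise listen ourselves
}
```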

There’s actually no need to use pipes if the plugins are running in the same process - you could just get them to exchange some kind of C++ object and make direct method calls to each other.

That’s what I thought, but do you know of any host that starts its plugins in separate processes (which would break this sharing system)?

The problem is then to create a singleton instance of a shared class. So far I can do that (that’s what my Connection Manager is), but it gets created once per plugin type: for two plugin types, the two libraries share the common code that creates the singleton, yet two of them end up being created. The singleton is common to all the instances of one plugin type, but not to all plugin types.
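To make it concrete, this is roughly the kind of singleton the Connection Manager is (a simplified sketch): it is shared by every instance of one plugin type, but since each plugin type is built as its own library with its own static storage, loading two types gives you two of them.

```cpp
// Simplified sketch of the Connection Manager singleton. Each plugin type is
// built as its own library, so each loaded library gets its own copy of this
// static instance: shared across instances of one type, not across types.
class ConnectionManager
{
public:
    static ConnectionManager& getInstance()
    {
        static ConnectionManager instance;   // one per loaded module/library
        return instance;
    }

private:
    ConnectionManager() {}
};
```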

Some hosts may use a separate process to hold all the plugins, but they’re unlikely to open each plugin in its own process - that would get too expensive when there are many plugins.

Actually, I’m not sure how you’d discover another plugin running in a different module… Maybe a named pipe would be the only way to do that kind of thing.

However, they will probably all run in the same (audio) thread.

Thanks for your ideas.

There is also the fact that some DAWs handle multicore processors by processing different plugins in parallel at the same time.

@Jules: Speaking of named pipes, is the behaviour I’m describing in my first post correct? (the connectToPipe and createPipe methods).

AFAIK, Reaper for example offers the 3 options:
- plugins in the same process as the main executable
- a separate process to hold all the plugins
- a separate process for each plugin

Obviously, the 3rd option is way more expensive!

True, but if you use Juce’s AudioProcessorGraph (which you probably do), it’s all going to happen in the high-priority audio thread, or did I miss something?

Well I’m writing plugins, not a host, so I’m not using AudioProcessorGraph (only AudioProcessor).

I just had a big problem with Pro Tools related to the handling of hyperthreading on some CPU models (a Mac Pro with two quad-core CPUs): all my DoProcess functions were running in parallel (truly in parallel, not sequential time-shared multithreading), and the locks on the shared memory went crazy…

By the way, I just noticed a bug in the JuceDemo. Here is how to recreate it:

Open one instance of the demo (A), then another one (B). On both, go to the Interprocess page.
On A, select Socket (listening).
On B, select Socket (connect to existing).
On B, select Disconnect.
On B, select Socket (connect to existing) again.
On A, send a message: A will say it failed, but the message will be correctly received on B.

I’m digging this topic up a bit, as I don’t think my question is worth creating another one.

In my plugins, I have interprocess comms to allow different plugins to communicate. This works great, thanks to the Interprocess classes.

The thing is, when an instance of a plugin is removed, it sends a message to the other plugins through an IPC socket. Here again, it works great when manually removing an instance of a plugin. The problems begin to show up when quitting the host with several instances of different plugins loaded in the session.

What happens is that every instance sends a message to say it is being removed, and gets killed right after that. The problem is that once all the instances of all the plugins are unloaded, there is no longer any destination for the messages that have already been sent, which leads to MemoryBlocks being leaked.

So far I’m thinking about a few workarounds for this issue, like keeping the socket alive a bit longer after all the instances have been destroyed and shutting it down after a while (which seems quite dirty IMHO; a sketch of it follows), but the cleanest way would be to wipe out the pending messages (sent but not yet read) when Juce is shutting down.
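That first workaround would look roughly like this (names and the 500 ms delay are arbitrary, and PluginComms is again the placeholder connection class from my earlier sketch):

```cpp
// Sketch of the "keep the socket alive a bit longer" workaround: a holder
// shared by all instances of one plugin type, which delays the real teardown
// once the last instance has gone away.
class SharedComms : private Timer
{
public:
    SharedComms() : numInstances (0) {}

    void instanceAdded()    { ++numInstances; }

    void instanceRemoved()
    {
        // The instance has already sent its "I'm being removed" message by
        // now; if it was the last one, leave the socket open for a moment so
        // the messages still in flight have a chance to be read.
        if (--numInstances == 0)
            startTimer (500);
    }

private:
    void timerCallback()
    {
        stopTimer();
        connection.disconnect();   // now really shut the socket down
    }

    int numInstances;
    PluginComms connection;        // placeholder InterprocessConnection subclass
};
```

Of course, if the whole host process is being torn down, the timer may never get a chance to fire, which is part of why this option feels dirty to me.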

What do you guys think?