Dealing with concurrency between plugin instances

My team and I have recently started bumping up against some issues related to multiple plugin instances sharing resources that are not threadsafe, potentially in parallel. Specifically, we’ve been battling issues with database corruption, and I need to be certain that each plugin instance is behaving itself when interacting with the database file.

My current solution has involved using static CriticalSection members, but after reading more on the forums I’ve realized this is not a catch-all solution, because some DAWs will spawn plugins in their own processes (I also thought I’d read this is the case with AUv3 in general, but I might be misremembering). Furthermore, I’m not clear on whether plugins of different formats share static resources; I assume they don’t, i.e., a static CriticalSection mLock will be a different object for a VST than for an AU (but again, I’m not sure).

My question is: how have you dealt with this problem in your own application? Is there a “best practice” recommended by the JUCE team? I’ve considered InterProcessLock, but that seems like it might be trying to swat a fly with a sledgehammer. I appreciate any input, and I apologize if this has been discussed before— my initial forum search didn’t yield anything that fit my question. Thank you!

Although it might seem like a makeshift solution, have you considered creating a temporary .lock file alongside the database file? Your instances could check for this file’s presence before accessing the database. If the file exists, it would indicate that the database is currently in use by another instance. This method might be simple, but it could effectively meet your needs depending on your requirements.


This points to more serious misuse of your database client and/or database schema. This is kind of what ACID is all about. Unless you’re doing something like using unsafe sqlite pragmas, it should be possible to have two concurrent connections to the database and kill either at will even in the middle of a transaction without corrupting the database.


I hadn’t considered that! I’m curious whether the juce::File API works for this; I’m worried about a scenario where two parallel routines both check File::exists(), have it return false, then both call create() on the lockfile, leaving each instance convinced it holds the lock. I don’t think JUCE makes any guarantees about the atomicity of these operations, and looking at the implementation doesn’t convince me it would work. Thank you for the suggestion though, I’ll start looking into it!

EDIT: I didn’t realize that fopen has had a "wx" mode since C11 (picked up by C++17). That seems like it could provide the correct guarantees, although support doesn’t appear to be consistent across platforms.

The issue I’m concerned with at the moment is that there’s one place where we interact with the database file directly: we delete the file when corruption is detected and start the database fresh. Deleting and re-initializing the database file while there are other active connections can obviously cause problems, so if the user experiences corruption for a valid reason (e.g. a bad disk sector), our delete/reinit pipeline can trigger a never-ending chain of corruption afterwards.

Generally I’d like to do this deletion/re-initialization through a database connection, but unfortunately we use an ORM that hides the raw database handles and hasn’t implemented some of the newer SQLite recovery methods. Maybe there’s some other way to drop and re-init the whole database through a connection that I haven’t found yet, but most suggestions I’ve read boil down to the same advice: “just delete the file”.

RE: misuse of database client/schema, did you have anything else in mind when you mentioned that? We use SQLite in its fully threadsafe, serialized mode so I don’t think the DB connections are the culprit, but if you had other ideas regarding what could be faulty in our database architecture I’d be very open to hearing them. I have a decent handle on the basics of how to do things but I’m not an expert by any means.

Have you read this doc on what can cause corruption?

We use SQLite in its fully threadsafe, serialized mode so I don’t think the DB connections are the culprit

SQLite is threadsafe but SQLite connections are not. It can be tricky to pool SQLite connections, which is why many people avoid it (create one connection behind a mutex and lock it for the duration of the transaction). If you’re using connections concurrently, that might be the place to look; it’s also worth checking those unsafe pragmas against your ORM’s defaults.

We use SQLite and saturate our SSD with read/writes to a single database with dozens/hundreds of connections concurrently and don’t see database corruption. We don’t use an ORM though, bare SQL in the app is enough and the sqlite API is straightforward to use.


Personally, I wouldn’t worry about catastrophic disk failures and fixing them while the plugin runs. That can be fixed by a reinstall and a restart of the DAW; just create the DB on installation. (Besides, you have the same problem for the plugin binary itself: what happens if disk failure corrupts your DLL?)

If that’s not acceptable, you can close all connections, acquire a .lock file, check whether the database is corrupted after you acquire it (recreating it if so), then release the lock and reopen connections. This way you know there’s no database I/O in any instance while you’re migrating the database, and when other instances acquire the lock they don’t try to recreate the database again; they just reopen connections. I’ve done something similar for SQLite database migrations.

Are you certain about this? I got this information from Using SQLite In Multi-Threaded Applications, but perhaps I misinterpreted it:

Either way, you’re right that the thread-safety of connections won’t save us when we have to delete the database file. So I wanted to ask: how would you go about closing all active connections to the database, as you mentioned? Given that in a DAW environment there’s no guarantee whether individual instances are in the same process or separate processes, I don’t think static synchronization primitives are an option, and I can’t think of another way to coordinate connections without some over-the-top file-locking logic.

Bumped into this thread again, and figured I’d update it. For anyone curious and experiencing similar issues, we found out that our issue was fairly pernicious: SQLite was corrupting databases because compiling it into different plugin formats (AU, VST, VST3) violates SQLite’s assumptions about static memory. See this link for more details on the nature of that bug. We were able to mitigate this by using the unix-dotfile VFS instead of the normal POSIX advisory locking system.

The moral of the story is that the way plugins are linked into most DAWs (loaded into the same process as separate dynamic libraries) can ruin any assumptions you make about static memory. Anything using static memory, including many JUCE facilities like SharedResourcePointer, often won’t behave the way you expect if multiple plugin formats are used simultaneously.

My original question of how to actually synchronize different plugin instances, including those of different formats, is still something I haven’t reached a final conclusion on. juce::InterProcessLock is not an option, as it relies on fcntl locks on POSIX, which would not protect against different plugin formats in the same process attempting to claim the lock. Normally you’d resolve that by pairing a static mutex with the IPLock, but the mutex won’t work because of the static-memory issues mentioned above.

My current solution would probably be to use named semaphores, as they are available in POSIX and Windows and are safe across processes and threads. However, it’s not something I’ve given a huge amount of thought to, and there may be issues there as well.
