UI / Data Layer Separation

Hi,

so I’ve noticed that in Juce, the Plugin creates an Editor, which means that those two live in the same process. However, as far as I know, some Plugin APIs (AU?) may separate the UI and the data / DSP layer into two different processes. The communication is supposed to be done with parameters and generic data blobs. From work, I was also led to believe that you should separate UI and data to the point that they can exist in different processes.

So, what am I missing here? Could Juce support UI and data existing in separate processes? Is this something the industry agreed to simply ignore and not do? I have yet to find a host that actually enforces that strict separation of the two.

Cheers

Juce has no concept of separating the GUI and data/DSP into different processes. If a plugin format like AU3 requires that, it’s handled behind the scenes, and plugin formats like VST2 or VST3 don’t expose it separately.

You could of course implement it yourself, but it’s not likely worth the trouble. The plugin market already has thousands of plugins whose GUI and DSP processing share the same address space with the host application. (Which is a recipe for bugs, but that’s how things have gone, likely in search of code simplicity and efficiency.)

So how would you use Juce and AU3 then? Afaik all one could do is “fake” the UI but then overlay another UI that’s running in the same process as the DSP?

What’s your opinion on splitting DSP and UI into separate processes? Sandboxing both (DSP and UI) together in a separate process from the host obviously makes sense. But separating DSP and UI into individual processes sounds very limiting for bigger projects, where more than just parameters have to be synced between the two.

AU3 plugins should “just work” with Juce (building them is an option in Projucer), but I haven’t found a reason to build and test those myself yet. Also, I think they only separate the plugin process from the host process anyway.

Separating the plugin GUIs and DSPs completely is a thing that just hasn’t been popular. I have no particular opinion on that. If it was possible to do it very simply while keeping things efficient, why not? But at the moment there probably isn’t any reason for Juce to do that, as it isn’t natively supported with the popular plugin formats and hosts.

That’s not quite true. All the APIs suggest separating the editor/processor as much as possible. Some more than others.

For example, VST3 is explicitly designed not only to support this, but encourages it and explicitly discourages direct communication between the editor/processor.

From the VST3 documentation:

this separation [between controller/processor] enables the host to run each component in a different context [process]. It can even run them on different computers.

The two VST 3 components (processor and controller) need a way to communicate. It is the task of the host to handle this.

Please note that you CANNOT rely on the implementation detail that the connection is done directly between the processor component and the edit controller!

There is a translation nuance in there, I take it to mean you can’t assume that there isn’t anything in between the controller/processor or that they are in the same process.

There is more, from SingleComponentEffect (component = class instance).

Default implementation for a non-distributable [single process] Plug-in that combines processor and edit controller in one component.

This can be used as base class for a VST 3 effect implementation in case that the standard way of defining two separate components would cause too many implementation difficulties

Use this class only after giving the standard way of defining two components serious considerations!

JUCE doesn’t encourage that kind of architecture, however. IMO the onus is on plugin developers to stop assuming their plugins will run the editor/processor in the same process, so host developers can stop allowing it to happen, and to rely as much as possible on communicating through the host. Most of the time you don’t need to create a channel between the editor/processor directly.

Yes, I know that, but I decided to omit mentioning it because developers in practice don’t bother with that. In fact, they hate having to think about it. From what I’ve gathered, it’s the main reason developers who already had a perfectly fine working VST2 framework to build on have been very slow to adopt VST3. (The reluctant developers probably ended up using only SingleComponentEffect anyway, if they decided to do VST3 versions.)

It’s a simpler model of operation and removes complexity, putting thread safety on the host instead of the plugin. The problem is that the API to do it sucks, but it’s still safer and easier to deal with (if JUCE supported it) than hand-rolling your own thread-safe, lock/wait-free mechanisms for updating the processor from the editor and vice versa.
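For what it’s worth, the simplest hand-rolled mechanism of that kind looks roughly like this: a single parameter the editor writes and the audio thread reads lock-free. This is just a sketch with illustrative names, not from any particular SDK, and it only covers the trivial one-value case:

```cpp
#include <atomic>

// Sketch: the editor thread writes a value, the processor reads it
// lock-free on the audio thread. Relaxed ordering is enough here because
// each value is self-contained; anything multi-field needs a real FIFO.
class AtomicParameter
{
public:
    void setFromEditor (float v) noexcept        { value.store (v, std::memory_order_relaxed); }
    float readFromProcessor() const noexcept     { return value.load (std::memory_order_relaxed); }

private:
    std::atomic<float> value { 0.0f };
};
```

Multiply that by a few hundred parameters plus larger state blobs, and you can see why having the host own the problem is attractive.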

I wouldn’t say developers “don’t bother with it” so much as frameworks built on the AU/VST2 worldview don’t support it. It’s a serious problem when you want to support something like AAX native but your framework doesn’t support the necessary architecture.

By “AAX native”, do you mean Avid’s DSP-card-based architecture? If you mean that, I would assume developers who have the means to build for and support that can also easily muster the means to deal with whatever the separate processes/hardware require when coding for it.

Thanks for your responses.
I agree with you that narrowing the communication between UI and DSP down to a generic exchange of blobs and parameters has benefits. It allows hosting the UI or DSP on a different system, and it also helps with thread safety. However, I think in most plugins you have to deal with three layers: the UI, which is “optional”; the data layer, which needs to deal with blocking tasks; and the DSP layer. For instance, when you need to read/write wavetables from the file system, you end up having to worry about thread safety anyway.
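For example, one common pattern for the wavetable case (a sketch with hypothetical names) is to load the file on a background thread in the data layer, then publish the finished table with a single atomic pointer swap, so the audio thread never blocks on file I/O:

```cpp
#include <atomic>
#include <memory>
#include <vector>

struct Wavetable { std::vector<float> samples; };

// Sketch: the data layer installs a freshly loaded table with one atomic
// exchange; the DSP layer reads it lock-free. The retired table is handed
// back to the caller so it can be freed later on a non-audio thread.
class WavetableSlot
{
public:
    // Data layer (background thread): publish a new table, receive the old one.
    std::unique_ptr<Wavetable> publish (std::unique_ptr<Wavetable> fresh)
    {
        Wavetable* old = current.exchange (fresh.release(), std::memory_order_acq_rel);
        return std::unique_ptr<Wavetable> (old);
    }

    // DSP layer (audio thread): lock-free read; may be null before the first load.
    const Wavetable* acquire() const noexcept
    {
        return current.load (std::memory_order_acquire);
    }

private:
    std::atomic<Wavetable*> current { nullptr };
};
```

So the thread-safety concern doesn’t disappear, it just gets funneled through one narrow hand-off point.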

In practice I have to deal with a bunch of legacy code, which makes it very hard to separate UI and DSP without making too many compromises. It’s interesting that even a framework like Juce can’t overcome this…

Separation of UI and Data/DSP is a completely unnecessary model for an extremely large part of our industry.

Steinberg tried to enforce it initially with VST3 because their solution for CPU-hungry DSP algorithms was to distribute the workload over several machines connected via a network.

History proved them wrong when multi-core machines became commonplace and the need for CPU distribution thus fell to zero. Their solution (called VST System Link) is essentially dead as almost no plugins support it.

Without an SDK that makes data exchange between UI and Data/DSP mandatory and extremely easy, it won’t happen any time soon.

In short: don’t worry about it. Just do what everybody else does and solve more interesting problems.


Sorry for asking again, but isn’t AUv3 also forcing separation of UI and DSP? I haven’t looked yet, since I think worrying about VST2/3, AU and AAX is sufficiently painful. :smiley:

Separation of UI and Data/DSP is a completely unnecessary model for an extremely large part of our industry.

How do you mean? The UI/DSP are already separated onto different threads - you need to deal with that somehow. A mental model where they are entirely segregated naturally lends itself to communication primitives like bounded channels, which are far more efficient and harder to muck up than timer callbacks through getters/setters. And if SOUL takes off you’re going to need to deal with that model.

@Xenakios programming for the Avid DSPs isn’t so much different than writing an AAX plugin by itself. Mostly you can’t assume the UI is in the same address space and have to rely on the host for communication.


If you’re targeting desktop computers (and not iOS devices), I wouldn’t worry about AUv3.

I guess Soul will solve all that


Yes, but a simple lock-free FIFO can do that. No need to send that data around. Also, with this traditional model you can always rely on the Data/DSP part simply being there. The UI is optional, and the Data/DSP code should never access ANY part of the UI directly, but the UI can easily call Data/DSP functions or read from its memory (e.g. for visualization) without any worry other than that the data might be stale or overwritten by the time the UI can draw it.
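Something like this is what I mean by a simple lock-free FIFO. This is a minimal single-producer/single-consumer sketch with hypothetical names; in JUCE you’d more likely use juce::AbstractFifo to manage the indices:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal SPSC ring buffer sketch. One thread only ever calls push(),
// another only ever calls pop(); neither side ever blocks.
template <typename T, std::size_t Capacity>
class SpscFifo
{
public:
    bool push (const T& value)                  // producer thread only
    {
        auto w = write.load (std::memory_order_relaxed);
        auto next = (w + 1) % Capacity;
        if (next == read.load (std::memory_order_acquire))
            return false;                       // full: drop rather than block
        buffer[w] = value;
        write.store (next, std::memory_order_release);
        return true;
    }

    std::optional<T> pop()                      // consumer thread only
    {
        auto r = read.load (std::memory_order_relaxed);
        if (r == write.load (std::memory_order_acquire))
            return std::nullopt;                // empty
        T value = buffer[r];
        read.store ((r + 1) % Capacity, std::memory_order_release);
        return value;
    }

private:
    std::array<T, Capacity> buffer {};
    std::atomic<std::size_t> read { 0 }, write { 0 };
};
```

The UI drains it on a timer or async callback, and the worst case is that a message waits one tick.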

Ah, I see what you mean. My strategy for this is usually to have an abstract channel interface with send/receive/try_send/try_receive methods (the first two may lock, the try_ variants never do) for sending data between the UI/processor. If the UI/processor are in the same address space, that interface wraps a pair of lock-free FIFOs; otherwise it wraps the native interface (VST3/AAX) for doing the same. It’s just a bit trickier for parameter updates.
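A sketch of what that interface might look like, with hypothetical names. Here it’s backed by a toy mutex-guarded queue just to show the shape; a real in-process backend would wrap lock-free FIFOs, and an out-of-process one would wrap the host transport (e.g. VST3 messaging):

```cpp
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

// The abstraction: callers never know whether the other side is
// in the same address space or behind a host-mediated transport.
struct MessageChannel
{
    virtual ~MessageChannel() = default;
    virtual void send (const std::vector<std::byte>& msg) = 0;     // may block
    virtual bool trySend (const std::vector<std::byte>& msg) = 0;  // never blocks
    virtual bool tryReceive (std::vector<std::byte>& out) = 0;     // never blocks
};

// Toy same-address-space backend for illustration only.
class InProcessChannel : public MessageChannel
{
public:
    void send (const std::vector<std::byte>& msg) override
    {
        std::lock_guard<std::mutex> lock (mutex);
        queue.push_back (msg);
    }

    bool trySend (const std::vector<std::byte>& msg) override
    {
        std::unique_lock<std::mutex> lock (mutex, std::try_to_lock);
        if (! lock.owns_lock())
            return false;                   // contended: caller retries later
        queue.push_back (msg);
        return true;
    }

    bool tryReceive (std::vector<std::byte>& out) override
    {
        std::unique_lock<std::mutex> lock (mutex, std::try_to_lock);
        if (! lock.owns_lock() || queue.empty())
            return false;
        out = std::move (queue.front());
        queue.pop_front();
        return true;
    }

private:
    std::mutex mutex;
    std::deque<std::vector<std::byte>> queue;
};
```

The audio thread only ever touches the try_ variants; everything else can afford the blocking calls.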

Worrying about separation between threads is one thing, but truly separating the GUI / DSP means realising that the two could be on different machines, built with different compilers, running on different OSes, processors, architectures, etc. Sharing data then means making sure you have complete agreement on the alignment and size of all of it, and I assure you that’s very easy to get wrong and very hard to debug! For the vast majority of use cases you really don’t want to have to worry about all of this.
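To make that concrete, here’s a sketch (hypothetical wire struct and helpers) of what “complete agreement on alignment and size” means in practice: fixed-width types, compile-time size/alignment checks, and explicit byte-order serialisation instead of memcpy-ing whole structs across the boundary:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical wire format for a message shared between GUI and DSP sides
// that may be built by different compilers for different architectures.
struct ParamUpdate
{
    uint32_t paramId;
    float    normalisedValue;   // assumes IEEE-754 single precision on both sides
};

// Fail at compile time if any compiler pads or aligns this differently.
static_assert (sizeof (ParamUpdate) == 8, "unexpected padding in wire struct");
static_assert (alignof (ParamUpdate) == 4, "unexpected alignment in wire struct");

// Serialise field by field with a pinned-down (little-endian) byte order,
// rather than copying raw struct memory.
inline void writeLE32 (uint8_t* dst, uint32_t v)
{
    dst[0] = uint8_t (v);       dst[1] = uint8_t (v >> 8);
    dst[2] = uint8_t (v >> 16); dst[3] = uint8_t (v >> 24);
}

inline void serialise (const ParamUpdate& p, uint8_t out[8])
{
    uint32_t bits;
    std::memcpy (&bits, &p.normalisedValue, sizeof (bits));  // float -> raw bits
    writeLE32 (out,     p.paramId);
    writeLE32 (out + 4, bits);
}
```

And that’s the easy case - a flat struct of two fields. Anything with strings, variable-length arrays or nested state needs a real serialisation scheme.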

On the other hand as already mentioned it’s likely Soul will solve a lot of this (if not all?) for us.

How exactly would Soul solve this? It feels to me like we’re just adding another concept into the mix for variety. I was at ADC last year but haven’t tracked Soul since. Afaik it’s a programming language that could then be used for the core DSP code? How will it help us with the mess of existing plugin APIs with different paradigms?

Juce already solves that for you. If it doesn’t work for you with every plugin format and every host, it’s too bad, but you can post a bug report or feature request in those cases. (And if they won’t fix the bug or add the feature, it’s not impossible to do it yourself. It can be inconvenient, of course.)

Juce already hides a big part of the mess as well as possible.
However, that’s not what @Anthony_Nicholls stated…