`juce7` technical preview branch

These are great things to be working on. The UI performance and Unicode work are really core stuff.


Hey @t0m, any more details on this? Is it in the current preview or coming further down the line? thx


You can check the contents of the juce7 branch for yourself here: Commits · juce-framework/JUCE · GitHub

The statement about a decoupled editor/processor architecture was not meant to imply that we had an expected delivery date for that feature, only that it would be a more valuable use of the JUCE team’s time than other endeavours. It is something the community appears to want, and we think it’s the right direction for the framework, but we don’t have anything else to share at the moment.

Really interested to see what changes are made here. I’ve spent a lot of time reading about GUI architectures, especially Martin Fowler’s work: GUI Architectures.

I’ve tried a few different patterns in JUCE, but found that the MVP pattern is by far the simplest and cleanest to implement. MVC just doesn’t really work if you follow it strictly (as described by MF in the link above). Supervising Controller is a good candidate, especially if you use value trees and the Value properties of JUCE’s widgets.


I’m also excited about this. I hope we don’t need to rewrite all plugins after the change :slight_smile: I’m really happy with how it works at the moment with the value tree and the attachments. This also decouples things a bit.

But I also see that it is not a good solution to pass the whole processor to the UI.

In fact, I currently see no practical advantage to a decoupled editor/processor anymore, though in theory it sounds great. In the past it was maybe interesting to offload processing to an external machine, but CPU load isn’t a problem anymore. Also, stability-wise, DAWs are now beginning to run plugins in separate processes (including the editor), which I think is a better approach.

I have plugins, like analysers, which pump a lot of data between the processor and the editor, and which, imho, wouldn’t work as fluidly if some kind of bridge between processor and editor existed.
All this should be optional, not a requirement.


I had thought VST3 and AAX advised developers to run decoupled, but few of us do it. I have a small amount of anxiety that one day Steinberg or Avid might drop some new update or core feature that requires it, causing mass unrest, so being on the right side of this, or at least having a path to it if necessary, feels like a good move.


Wouldn’t AAX DSP development profit from that decoupling? It was my understanding that this is the challenging part of porting to it. Or the other way round: if both processor and editor are already decoupled for AAX DSP, it is only a matter of recompiling for the native version.


The separation of editor and processor is more about the ability to have a processor running on a different machine, chip, or architecture altogether. For example, AAX DSP and WPAPI require this, other formats support the concept (VST3 and AU, IIRC), and UA have a wrapper around VST2 to do this. However, in JUCE the two are generally too coupled to achieve this easily (not to say it’s impossible, as manufacturers are doing this). The main reason you might want to do this is normally the latency benefits.


Ahh okay, that’s a little more technical than I had imagined tbf… Although the concepts of GUI architectures are still applicable here - the idea of Separated Presentation is pretty much where JUCE is at the moment, where you have the Domain layer (processor) separate from the Presentation layer (editor). With more work in this area it could be further separated to split out the Service layer from the Domain layer, which is where things like the APVTS and the Attachment classes fit in.

After all, one of the core ideas behind Separated Presentation is to remove the presentation layer’s dependency on the domain layer, which sounds like what we’re wanting here.


I definitely wonder what additional decoupling could take place.

In my own code I’m trying to follow a scheme where both the editor and the processor know about a shared “State” class, but know nothing about each other and compile separately, so you can build and test an app that has only the editor’s top Component along with the state, or just the processor with the state.

While not 100% what you’re after, what I’ve done in my own code is divide everything into (at least) 3 units. “State”, “UI”, “Processor”.

Each is a JUCE-style module, where both the UI and Processor depend on the “State” classes (which isn’t APVTS), but never on each other - the UI classes can even compile without ever including juce_audio_basics, and you can test them in a non-plugin app that spawns the State.

Technically, because juce::AudioProcessorEditor depends on AudioProcessor, the createEditor() function has to be aware of the processor and editor concrete classes.

So what I did to solve it was to have a single class in the plugin cpp file that inherits from the ‘real’ processor (which knows nothing about the editor), and only that class implements createEditor() with the real editor, forwarding just the state class into a juce::Component that knows nothing about the processor.

That allowed me quite a lot of flexibility in the design, including, for example, chaining sub-processors together as internals of another plugin without ever needing to build their editors, unit tests for processors and editors, much faster compile times, etc.


I don’t think AudioProcessor and AudioProcessorEditor are too coupled by themselves, and I don’t have particularly clean implementations in that respect. The wrappers are more coupled, but from the user side the most problematic thing happens in AudioProcessorParameter (and hence APVTS), as we discussed elsewhere, because UI thread and audio thread changes are merged.

So… can you possibly tell us a little about 7.0 and MIDI 2.0 support?


Can’t wait to see how you guys decided to do the LV2 plugin + host implementation!

And of course how you will handle some of the scrutiny from the community during integration testing :wink:

Let’s go! :rocket:

LV2 support is now live on the preview branch, for plugins:

…and for plugin hosts:

I’ve also pushed some other bits of work to the branch:

It’s now possible for plugins to query the system time provided in the audio callback in supported formats:

There’s a new demo project, showing how plugins can be loaded inside other plugins:

The event loop in plugins on Linux has been overhauled, which should improve stability. Previously, plugins on Linux had their own “message thread” which was separate from the host’s message thread. Now, plugin UIs will use the host-provided idle callback (or equivalent), meaning that they will update on the host’s main thread, which should reduce the possibility of deadlocks and data races. This work spanned multiple commits, so I won’t link them here.

We’ve also updated the view-sizing logic for VST2 and VST3 plugins, especially on Windows. This is intended to ensure that plugin editors display at the correct size, regardless of global scale factor, desktop scale factor, and per-monitor scaling - and, importantly, editors should display at the correct size, even when the host is itself a JUCE plugin. Again, this work is spread across a few commits.


We have also added support for ASWG metadata tags in WAV files:


Hi JUCE Developers,

Thanks for the work on JUCE 7. You’re probably still busy with it :upside_down_face:
Just want to ask a small thing about future graphics improvements (again).

Seeing that you now implemented a Metal-layer backend, I wonder what direction JUCE will take in general for graphics. Let me summarize quickly:

  1. OpenGL is deprecated on Apple devices. Newer SDKs will eventually just remove the GL functions. So there is no incentive to improve juce_opengl, right?

  2. The easiest alternative on macOS / iOS is Core Graphics + Metal-layer. The performance will probably improve over time, especially on newer ARM Macs. Therefore the focus on that in JUCE 7.

  3. Since it’s too platform specific, there will be no juce_metal module (who wants to use it anyway :joy:).

  4. Vulkan + MoltenVK is not a good choice for the Mac; it’s obvious that Apple has a big interest in pushing Metal. Additionally, driver support on Windows is still not good enough, so no juce_vulkan either.

  5. Skia is big. Probably too big to drag it into JUCE. And you mentioned that performance improvements are questionable. Same for DAWN.

  6. A rewrite / extension (let’s say in JUCE 8) of juce::Graphics to improve that is too complicated.

Now my question. What about Windows and other platforms?

The OpenGL context (wgl) on Windows is a relic of old times. It’s there, it works, and will probably work forever. But it’s not really developed, since Microsoft wants developers to use DirectX 12 and their proprietary APIs. The only reason it’s still usable is that it basically runs on the work NVIDIA and AMD put into their drivers.

Look at game/console developers, for example. If they maintain a big game engine, they rarely choose OpenGL. The API driver overhead is too heavy, we all know that. But they also (at this time) rarely support Vulkan, and most often prefer DirectX 12 or even still DirectX 11. See Unreal Engine 5.

So what’s the future-proof backend for JUCE on Windows? GDI+, Direct2D, DirectX 12, an improved OpenGL renderer, or Vulkan?

The topic somehow won’t let me rest. I wish it were a bit more transparent.

By the way, one thing we could do on Windows already is implement nv_path_rendering for OpenGL. That’s probably the easiest measure, but it would only work on NVIDIA drivers.

Any hints or guidance?


Thank you! Will test soon


In juce7 I’m hitting this assert

    /*  If you're building this plugin as an AudioUnit, and you intend to use the plugin in
        Logic Pro or GarageBand, it's a good idea to set version hints on all of your parameters
        so that you can add parameters safely in future versions of the plugin.
        See the documentation for AudioProcessorParameter(int) for more information.
    */
    #if JucePlugin_Build_AU
    jassert (wrapperType == wrapperType_Undefined || param->getVersionHint() != 0);
    #endif

The documentation is perfectly clear and I would like to set the hint to ‘1’ for my version. BUT I can’t see how to set this if I subclass RangedAudioParameter. The RangedAudioParameter doesn’t take a hint, getVersionHint() is not virtual, and the version member is private. So I’m not quite sure how to go about setting this hint in my case, where I have custom parameter types subclassing RangedAudioParameter and so on.

(Also note that this fires if I’m building an AU, even when I’m running as a VST3 or LV2.)

Any thoughts?