Looking for UI options

I’ve been enjoying using JUCE to create plugins, but I sometimes find the UI side of things frustrating and/or performance-heavy. Drawing multiple paths seems to hit performance hard, and as a fan of reactive displays that’s a bit annoying. Special effects like blurs also seem to be a no-go.
I have used OpenGL in a very bare-bones way (just attaching a context to the plugin editor) and it did make a big difference, so I’m starting to think about coding in OpenGL properly for the heavy-duty stuff.
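(For reference, the bare-bones setup I mean is roughly this; it’s a sketch against the JUCE API, and “MyProcessor” is a stand-in for your own processor class:)

```cpp
// Attach a GL context to the editor so the whole component tree is
// rendered via OpenGL -- no shader code needed for this.
// "MyProcessor" is a placeholder for your AudioProcessor subclass.
#include <JuceHeader.h>

class MyPluginEditor : public juce::AudioProcessorEditor
{
public:
    explicit MyPluginEditor (MyProcessor& p)
        : juce::AudioProcessorEditor (p)
    {
        glContext.attachTo (*this);
        setSize (600, 400);
    }

    ~MyPluginEditor() override
    {
        glContext.detach();   // detach before the component is destroyed
    }

private:
    juce::OpenGLContext glContext;
};
```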
But my question now for anybody with experience: is learning OpenGL worthwhile, or would my time be better spent using a different library/system and programming my UI there?

Once you’ve written a few lines of shader code to solve your blur and performance issues, you’ll see for sure: you should learn some OpenGL.
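To make that concrete: a blur really is only a few lines of shader. A minimal single-axis Gaussian blur fragment pass might look like the sketch below (uniform and variable names are placeholders; run it twice, once per axis, for a full blur):

```glsl
#version 330 core
in vec2 texCoord;
out vec4 fragColour;

uniform sampler2D sourceTexture; // the framebuffer your UI was rendered into
uniform vec2 texelSize;          // 1.0 / texture resolution

void main()
{
    // 5-tap Gaussian weights (they sum to ~1.0)
    float weights[5] = float[] (0.227027, 0.1945946, 0.1216216,
                                0.054054, 0.016216);

    vec3 result = texture (sourceTexture, texCoord).rgb * weights[0];

    // Horizontal pass; swap the offset to vec2(0.0, texelSize.y * i)
    // for the vertical pass.
    for (int i = 1; i < 5; ++i)
    {
        vec2 offset = vec2 (texelSize.x * float (i), 0.0);
        result += texture (sourceTexture, texCoord + offset).rgb * weights[i];
        result += texture (sourceTexture, texCoord - offset).rgb * weights[i];
    }

    fragColour = vec4 (result, 1.0);
}
```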

It’s a perfectly cromulent API for graphics, and is still in use and available all over the place, not just in JUCE … modern programmers should have at least a basic understanding of it, among the probably 32 other APIs in the coder zeitgeist. All IMHO, of course…

It could be that you will find other uses for those shader chops.


Well, except that Apple has declared OpenGL deprecated.

I never used OpenGL, because I fear it may be discontinued at some point, and I don’t want to rewrite all my paint routines again - too much time. I’m waiting for a more stable alternative on Windows.

Any suggestions on alternatives or other UI systems that can be integrated into JUCE? Things tailored to audio plug-ins, maybe?

The whole UI thing seems to have some difficult decisions to make currently.

Apple has deprecated OpenGL, so at some point it’ll stop working on Macs. The built-in JUCE graphics rendering works fine but doesn’t benefit much from enabling the OpenGL renderer unless you’re bound by fill rate, which may be the case if you’re expecting your plugin to scale on super-high-res monitors or, as you say, you’re doing blurs etc.

IIRC, one of the things slowing JUCE’s OpenGL rendering down is the stroke line drawing.

FWIW, plenty of developers here seem to stick with the software JUCE renderer. But notably, Matt Tytel’s Vital synth uses OpenGL and shaders.

Options then seem to be:

  1. Stick with the software renderer. It’ll always work even when OpenGL is finally removed and replaced with something else in the back-end (Metal?).

  2. Fully embrace OpenGL, shaders and all, and ditch JUCE’s UI system. All in the knowledge that you’ll be replacing a whole bunch of code or providing a custom abstraction over it all.

The ideal of course would be a solution that gives you near full hardware speed whilst providing a thin abstraction layer, but as far as I know this doesn’t exist.

OpenGL isn’t going away any time soon just because Apple says so … the API is still available, still works and can still be relied on. It’s just that, at some point in the future, Apple won’t ship it by default as part of their OS. Big deal: there are projects such as Zink (OpenGL on Vulkan) that can be used when it’s necessary.

Meanwhile, there’s not a Mac out there - today - which can’t run an OpenGL app.

And - anyway - the point is not to use too much OpenGL, just the things you need, such as a custom shader for window dressing. If you find in the future that you need to do this again on Vulkan, take NVIDIA’s advice seriously:

EDIT: A link to Zink:



So the main options are:

  • Live with the limitations of JUCE’s software rendering and work around it.
  • Learn OpenGL and hope it doesn’t get killed by Apple soon.

Has anybody had experience with using a different system/library for the UI? For example the React-JUCE project, or even something like Qt? I tried Qt in the past and liked it, but I’ve heard using it for plugins isn’t ideal.
I’m just looking at things like the Fabfilter Pro-R display and thinking that if I tried to mimic that in JUCE it would crumble.

I thought about this. We mention “Apple has deprecated OpenGL” as a given fact, but we always forget one thing: there is MoltenVK, which essentially translates Vulkan to Metal. It proves that a translation layer is possible and already exists.

The good thing about Metal and Vulkan is that they are closer to the hardware, so any older API can be built on top of them. There is MoltenGL, for example, a proprietary OpenGL ES 2.0 implementation on top of Metal. Then there is Zink; from its description:

Zink is an OpenGL implementation on top of Vulkan. Or to be a bit more specific, Zink is a Mesa Gallium driver that leverages the existing OpenGL implementation in Mesa to provide hardware accelerated OpenGL when only a Vulkan driver is available.

And there are probably a few more. So Apple can announce whatever they like, smart people and Linux guys will find a way to run OpenGL / Vulkan on their new OS version. Is it offering the best performance? Definitely not. But the JUCE GL implementation isn’t using the latest GL features “for performance” anyway.

So my take is: don’t worry too much about it. If you invest time in developing GLSL shaders for OpenGL, you can easily compile them to SPIR-V later, which is usable in Vulkan and translates to the Metal Shading Language (via MoltenVK) or even to HLSL for DirectX.
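For example, the GLSL → SPIR-V step is a one-liner with the reference glslang compiler (assuming your fragment shader lives in a file called blur.frag):

```shell
# Compile Vulkan-flavoured GLSL to a SPIR-V binary
glslangValidator -V blur.frag -o blur.spv
```

The resulting blur.spv can be fed to Vulkan directly, or to MoltenVK on macOS, which translates it to Metal under the hood.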

One thing though: if you plan to do anything with the GPU, you should essentially strip OpenGL/DirectX/Metal/Vulkan out of your application code and wrap it behind a GPU interface abstraction.

Every big engine/SDK (Unreal, Unity, Skia, bgfx) does this in some way. The only things you need (and which all APIs have in common) are:

Vertex Buffers, Index Buffers, Frame Buffers, Textures, Shaders and some form of “Render Pass” and common state (like Alpha Blending) to glue everything together.

This is enough to implement juce::LowLevelGraphicsContext on any API. The difference between APIs only matters if you use the newest features, like compute shaders, bindless resource setup or direct state access. All of that is targeted more at modern 3D work: ray tracing, ML and really intensive (15 ms+) rendering.

Anyway. Yes, why reinvent the wheel? You should definitely first take a look at GitHub - bkaradzic/bgfx: Cross-platform, graphics API agnostic, "Bring Your Own Engine/Framework" style rendering library, which does this for you. It still takes some effort to glue it together with the regular juce::Component system, though. But it’s probably a good idea to check out existing alternatives before writing your own shaders and GL code.


Oooh, bgfx looks really good. However, considering that people are paying to use JUCE for commercial products, shouldn’t the JUCE devs be digging into this sort of thing?

I want to draw vector stuff at the speed modern graphics cards make available to me, in a way that’ll work on different platforms. Graphics cards that are capable of rendering triple-A games at 60 fps, yet somehow a screenful of scalable vector sliders/knobs and some spectrum/waveform visualisations causes performance issues. Doesn’t seem right.

I know it’s not as simple as it sounds. I’ve read posts on the rendering effort going into anti-aliased stroked line rendering.

But I’m hesitant to be using JUCE and then yanking its UI innards out to replace with another rendering library.

It’s irritating and seems implausible, right? But it’s actually much harder to draw a moving complex path with perfect anti-aliasing than to draw a million static triangles. GPUs are still mainly triangle-rasterizing machines. In recent years we got compute shaders for more general processing, but unfortunately we can’t use them, since they are a GL 4.3 feature and Apple is stuck forever at 4.1.

But still, why is the drawing so slow? Well, it comes down to CPU↔GPU communication and synchronisation.
Even if JUCE is still splitting the images into pixel quads, the number of triangles/indices is ridiculously small for a GPU. The problem is the data flow and preparation time.

Currently it’s like this:

On CPU : Image / Path → Path Flattening → Edge Table → Vertex Buffer → Shader Setup → Draw Call (for each path/call).

On every frame. The EdgeTable iterators are a huge bottleneck. During every call too much time is spent on the CPU, so the GPU constantly “waits”.
The rendering time itself (profiled in RenderDoc) is very short, measured in μs, for what a CPU rasterizer would take milliseconds to do.

But it should be more like:

On CPU (Setup): Image/Path → Pre-Process Geometry → Vertex Buffer → Compute / Geometry or Tessellation Shader → Cached Objects.

On Every frame → Setup Shader → Fill Uniform Buffers for “Paths” → Draw Call (many paths at once).

Not waiting for CPU pre-processing, and only waiting for a very few “read-backs” between render passes, like rendering to a framebuffer and then using it as a texture to blur the whole screen, for example.

In short: the JUCE GL renderer is only slow because of the CPU pre-processing on each frame.
If that work is moved up front, or somehow cached via geometric objects, it will be much, much faster. Alternatively, the CPU processing could be moved entirely to the GPU, ideally with compute shaders.

Just a thought, but I wonder if instead of pixel quads, one could submit path geometry directly, then use some kind of vertex processor to create triangles, and then use some anti-aliasing technique. Or even move the whole “clip region” stuff, which was at fault to begin with, to the GPU and use the depth buffer for masking.

Perhaps it’s even feasible to render 3D geometry with an orthographic projection instead and use something like temporal anti-aliasing (TAA) to improve aliased edges over time. Glyphs for fonts can be cached to textures too.

Let’s be realistic. Yes, the team is doing a good job, but the company and its licence fees aren’t enough to fund a cutting-edge path renderer. It’s not trivial; even bigger companies struggle with high-performance vector rendering. Existing research papers on these topics are kind of “hacky”, tuned for specific tasks, so it’s essentially a big time sink to support all possible cases. Take a look at the Ganesh renderer code inside Google’s Skia and you’ll see how much effort this takes. There isn’t the capacity for multiple full-time developers to write such a library; maintenance and new plugin features keep them busy enough. And it’s understandable that this is their main focus.

Main takeaway here: don’t wait for (or try to build) a universal fast, high-quality rendering solution. Focus on what should actually be visible and interactive, and build specialised, optimised drawing routines. Much can be done with caching.
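In JUCE terms, the cheapest form of that caching is built in: a component can buffer its own paint() to an image. A sketch (the component name is made up):

```cpp
// Cache a vector-drawn control to an internal image, so its expensive
// paint() only re-runs when the component is actually invalidated.
#include <JuceHeader.h>

class KnobComponent : public juce::Component
{
public:
    KnobComponent()
    {
        // JUCE redraws from the cached image until repaint() is called.
        setBufferedToImage (true);
    }

    void paint (juce::Graphics& g) override
    {
        // ... expensive vector drawing (paths, gradients, etc.) ...
        g.setColour (juce::Colours::orange);
        g.fillEllipse (getLocalBounds().toFloat());
    }
};
```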


I use Direct2D, which is hardware-accelerated on the GPU. It also maintains a list of ‘dirty rectangles’, which is something JUCE didn’t have until recently. Dirty rectangles reduce the amount of drawing the API has to do when only some sliders (or whatever) move.
On macOS, I translate Direct2D into Cocoa/Quartz, which is a software renderer I believe, so it’s not as fast. But at least I get to ‘write once, run anywhere’, which is convenient. I think what I need to do is research using Metal on macOS so I can get the same high performance as on Windows.
Unfortunately, my library is not packaged up neatly as a JUCE module at the moment, so it’s not really practical for general use yet. I continue to work on it, though.

If you’ve used JUCE, the syntax is not so different, although you do have to spell “Colour” the wrong way :wink:

Yeah, I get it. I know it’s not a trivial problem. It’s the sort of problem I usually relish - I’ve got a lot of background in rendering tech. However, I have a feeling it’s a complete rabbit hole and it’s not something I can lend time to currently - got DSP to write!

Caching can help enormously for vector-drawn knobs and sliders. As for spectrum, wave and other visualisations, I guess dipping into custom OpenGL and shaders is probably the best way to go for performance at the moment. IIRC, from looking at Vital’s source code, it uses shaders to render filter visualisations and so on.


Caching images and controls has been a big performance saver in the past for sure, but I find the bottleneck really comes from animation/visualizers.
From what I’ve read here, I think I’ll stick with JUCE for the main UI stuff and learn OpenGL for any visualizers. I’ll deal with Apple’s historically cutthroat lack of backwards compatibility when they finally ditch OpenGL (hopefully with the help of the smart people here).
Thanks all for the info and advice.