This is a “roll-up” feature request. Voting for this is voting for the JUCE team to prioritize efforts spent modernizing the vector UI capabilities of JUCE.
The overall goal is to improve compatibility with vector design programs and provide the lower-level missing pieces needed to implement modern designs. “Modern” is defined as vector-based with comfortable usage of gradients, background blurs, drop-shadows, rounded corners, and animation.
Not everything on this list is a “must have”. However, everything on this list would help devs build first-class UIs without compromises and reduce time spent on workarounds and wheel reinvention.
One side effect of modern vector UI (with shadows, etc.) is that components begin to overlap each other. This introduces additional requirements: a concept of contentBounds, padding/margin as first-class component properties, clickable areas that extend outside a component’s content bounds, and new performance considerations that follow from these assumptions.
A focus on utility classes/helpers at the middle level of abstraction between the low-level “component” and the high-level “widget”. Examples: drop-shadowed components which cache the shadow separately from the content component; a helper class for handling components with rounded corners (which requires a padded container to draw efficiently inside the content bounds); a built-in way to benchmark and report on paint calls and fps at a component level; additional helpers for drawing curves/splines and drawing paths efficiently in bulk.
(stretch goal #1) Hire a UI designer to create a “LookAndFeel_V5” that is in line with modern vector based design and improved usability patterns. Ideally this UI would be designed by someone fluent in UX best practices in a common program like Figma and the vector assets shared with the community. The process of “dogfooding” the implementation would expose the areas in most need of improvement and illustrate the most profitable paths forward.
(stretch goal #2) A benchmark demo for UI primitives, to ensure everything stays snappy at 60fps and demonstrate that vector drawing with X amount of line segments or Y amount of drop shadows is trivial.
UI is what I spend the majority of my time on in JUCE. It’s also where I see others struggle and compromise the most. Improvements in these areas have an extremely large positive impact on devs and their products.
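The drop-shadow caching helper mentioned in the list above could look roughly like this. This is a minimal sketch in plain C++ (no JUCE dependency); `CachedShadow`, `renderShadow` and the pixel buffer standing in for a `juce::Image` are all invented names for illustration. The point is simply that the expensive render only happens when the size changes, not on every paint call.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch: cache an expensive shadow render so it is only
// regenerated when the component's size changes, not on every paint call.
struct CachedShadow
{
    int cachedWidth = 0, cachedHeight = 0;
    std::vector<unsigned char> shadowPixels; // stand-in for a juce::Image
    int renderCount = 0;                     // instrumentation for the sketch

    // Expensive render (blur, etc.) -- only runs on a cache miss.
    void renderShadow (int w, int h)
    {
        shadowPixels.assign (static_cast<std::size_t> (w) * h, 0);
        ++renderCount;
    }

    // Called from paint(): cheap when the cache is valid.
    const std::vector<unsigned char>& get (int w, int h)
    {
        if (w != cachedWidth || h != cachedHeight)
        {
            renderShadow (w, h);
            cachedWidth = w;
            cachedHeight = h;
        }
        return shadowPixels;
    }
};
```

In a real helper the cache key would also include blur radius, colour and offset, but the size-based invalidation above is the core of the idea.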
Atm the general consensus is that one should use OpenGL for heavy animations and realtime shaders, which seems to make sense because it uses the GPU. If I only wanted shadows that can be cached, I’d probably just set them as the background image of an otherwise invisible component, with setBufferedToImage(true) and setInterceptsMouseClicks set to let all mouse events through. A helper could automate this process easily.
Idk if animation at 60fps is possible for plugin projects. Cubase, for example, caps at 30 anyway, and someone told me a plugin’s framerate depends on the DAW’s. People sometimes claim they find the difference between 30 and 60 remarkable, but I personally prefer a consistent experience over a framerate that makes plugins constantly fight for resources, which sometimes makes the overall DAW experience stuttery. My theory is: when people ask for 60fps they just don’t really know what consequences their wish for slightly smoother animation has, or else they wouldn’t wish for it.
Overlappable components with built-in margin and padding sound nice. Maybe even a step towards a system where the component itself can be rotated/stretched etc., instead of (just) its content.
I’d personally suggest ditching LookAndFeel completely and instead rewriting all component paint calls to work with std::functions. You could still have a stateful LookAndFeel by writing your own class, but all the LookAndFeels where you just want to define a few lines of drawing code would be simple lambdas, and you could basically write free functions for spawning specific designs. I think that would make a lot of sense, because sometimes there are little things you can’t do with the LookAndFeel functions: for example, when individual lines drawn on the component are part of the underlying paint call rather than the LookAndFeel method it leads into, people often realize they have to roll their own custom component just because of that. Such a little issue shouldn’t be the reason for having to reinvent the wheel.
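The std::function-based styling idea above could be sketched like this. This is plain C++ for illustration, not a JUCE API: `FakeGraphics`, `Knob` and the look functions are all invented names. The look is just an assignable callable, so free functions (or lambdas) replace LookAndFeel subclassing.

```cpp
#include <functional>
#include <string>

// Hypothetical sketch of the suggestion above: a widget whose look is a
// std::function instead of a virtual LookAndFeel method.
struct FakeGraphics { std::string log; }; // stand-in for juce::Graphics

struct Knob
{
    using PaintFn = std::function<void (FakeGraphics&, const Knob&)>;
    PaintFn paintFn;      // swap looks by assigning a different callable
    float value = 0.5f;

    void paint (FakeGraphics& g)
    {
        if (paintFn)
            paintFn (g, *this);
    }
};

// Free functions act as reusable "looks":
inline void flatLook  (FakeGraphics& g, const Knob&) { g.log += "flat;"; }
inline void retroLook (FakeGraphics& g, const Knob&) { g.log += "retro;"; }
```

A stateful look would simply be a class with `operator()` assigned to `paintFn`, which keeps both styles available without inheritance.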
I’d love to see better support for SVGs, including animations, shadows, complex gradients, inner & outer edge stroking, etc. That would mean I could simply drag-and-drop assets from our designers and not have to re-implement everything in code, which usually leads to loads of subtle bugs, or messy workarounds for things not supported by JUCE.
I’d also love to see that SVG support implemented on native hardware-accelerated APIs (Metal, DirectX, OpenGL, etc.), as well as a fallback software implementation. Ideally, instead of a paint() method, my components would just provide an SVG document to describe themselves which can be handed off to some renderer at the top-level. In a Model-View-Presenter architecture, this would allow components to act more like Presenters than Views.
Where do you get that from? Where was this consensus formed? Are those people not aware that Apple deprecated OpenGL in 2018, and that it’s only a matter of time before it becomes completely unusable?
Even if that were true (and I can’t seem to confirm it, looks like 60 fps to me), Cubase is only one of many hosts. The vast majority have no problem with higher fps.
Not sure what you’re talking about. Even the official JUCE demo app shows rotated components. Transforms allow for all kinds of things (rotation, scaling, skewing), and the whole component (including its children) is transformed that way. Mouse input is transformed the other way, so you can still use the component in its transformed state.
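The round trip described above (drawing transformed one way, mouse input mapped back the other way) can be sketched in plain C++. This is illustrative only, not JUCE’s `AffineTransform` API; the `Affine` struct and its member names are invented for the example.

```cpp
#include <cmath>

// Sketch: the same affine transform that rotates a component's drawing is
// inverted to map mouse coordinates back into the component's local space.
struct Affine
{
    // | a b tx |
    // | c d ty |
    double a = 1, b = 0, tx = 0;
    double c = 0, d = 1, ty = 0;

    static Affine rotation (double radians)
    {
        Affine t;
        t.a = std::cos (radians); t.b = -std::sin (radians);
        t.c = std::sin (radians); t.d =  std::cos (radians);
        return t;
    }

    void apply (double& x, double& y) const
    {
        double nx = a * x + b * y + tx;
        double ny = c * x + d * y + ty;
        x = nx; y = ny;
    }

    Affine inverted() const
    {
        double det = a * d - b * c;  // assumed non-zero (invertible transform)
        Affine t;
        t.a =  d / det; t.b = -b / det;
        t.c = -c / det; t.d =  a / det;
        t.tx = -(t.a * tx + t.b * ty);
        t.ty = -(t.c * tx + t.d * ty);
        return t;
    }
};
```

Applying `inverted()` to a mouse position undoes the component’s rotation, which is why hit-testing still works on a transformed component.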
So now I have to assign a paint function every time I create a component? Instead of a single “setLookAndFeel” call on the main-component, I’ll have to assign individual paint functions for each and every component created? That sounds like an improvement to you? Maybe I’m missing something here?
The request is a bit unreasonable for the current team size. Magically improving performance to 60 Hz, plus HQ anti-aliasing (vectors), modern screen sizes (4K), and shader effects (stack blur)? Impossible with software rendering. Which SDK or app offers this? I can only think of Chromium/Skia/Ganesh, which is a huge codebase made by many developers, using multiple GPU(!) fallbacks and specializations for drawing different things. Can we really expect this from JUCE?
Think this through. What’s really necessary to achieve this?
The current graphics implementation is pretty good (quality-wise), easy to use and lightweight. But “improving” it comes down to changing the juce::Graphics API from imperative “drawing commands” to declarative “drawing objects” that can be cached (geometrically, or with pixel buffers) and drawn in different ways. A mix of GPU, CPU and compute. In engineering terms: a declarative layered compositor.
I highly doubt any company of this size can achieve such a feat. Raw Material is not Adobe.
But just giving up isn’t preferable either, is it? Let’s stay realistic and solve it bottom-up. First and foremost: improving and EXTENDING the existing graphics backend, so the declarative approach can be used where necessary. Everything else can build upon this.
The first step for this is to solve the OpenGL Apple deprecation issue. How to approach this? I don’t know. The team did some research; I’ve heard nothing about it for months though. Anyway, a modern GPU backend is the first thing needed to solve any problem on the list.
The request is a bit unreasonable for the current team size.
Just to reiterate: the request is a general one to focus on making modern vector-based designs easier to implement in JUCE. The items are example areas which need improvement to achieve this goal.
Think this through. What’s really necessary to achieve this?
I’ve given this a lot of thought (and hundreds of hours of work). I have layered shadows, background blurs and thousands of line segments animating at 60fps-ish (it’ll get better with the hardware sync coming in 7.x) in my app using vanilla JUCE (not OpenGL, etc.). It took a lot of effort to get things to work nicely and I still have some Windows work to do. But if I can personally accomplish this, I feel confident the JUCE team can do even better at reaching these goals.
Let’s stay realistic and solve it bottom up. First and foremost improving and EXTENDING the existing graphics backend
I think we’re ultimately in agreement! I feel it’s a valid and pragmatic approach to work incrementally towards improving these things. An example would be the Stack Blur implementation, which is a massive improvement over the existing one. No, it’s not GPU, and it doesn’t really have to be: it’s usable, and CPUs are very fast these days. With a little work, things like drop-shadows are just as easily cacheable as any JUCE component.
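For readers unfamiliar with why blurs like the one mentioned above can be fast on the CPU, here is a sketch of the core trick (a sliding-window box blur over one row). This is not the actual Stack Blur implementation referenced in the thread, just an illustration of the principle: each output pixel reuses the previous window sum, so the cost per pixel is O(1) regardless of radius.

```cpp
#include <vector>
#include <algorithm>

// Sliding-window box blur of a single row, with edge clamping.
// Cost per output sample is constant: one add and one subtract.
std::vector<float> boxBlurRow (const std::vector<float>& src, int radius)
{
    const int n = static_cast<int> (src.size());
    std::vector<float> dst (src.size());
    const int window = 2 * radius + 1;

    // Prime the window sum for the first output sample.
    float sum = 0;
    for (int i = -radius; i <= radius; ++i)
        sum += src[std::clamp (i, 0, n - 1)];

    for (int i = 0; i < n; ++i)
    {
        dst[i] = sum / window;
        sum += src[std::clamp (i + radius + 1, 0, n - 1)]; // sample entering
        sum -= src[std::clamp (i - radius,     0, n - 1)]; // sample leaving
    }
    return dst;
}
```

A full 2D blur runs a pass like this horizontally and then vertically; stacking several box passes approximates a Gaussian, which is where the "Stack" in Stack Blur comes from.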
The first step for this is to solve the OpenGL Apple deprecation issue. How to approach this?
The JUCE team knows best what low-level implementation scope makes sense for them to build and support. As a plugin dev, I only know that I want to work with my designer in vector graphics programs and have those designs translate reasonably into JUCE. I’ve personally accomplished this without compromise so far, so I believe it’s a reasonable ask, regardless of implementation detail.
I just got a notification that the community flagged my reply to @Mrugalla for violating the rules of this forum. Can somebody shed some light on this? I’m not neurotypical, so I may have missed something here, but I don’t understand what rules I supposedly violated. Is it enough if a single user (maybe in retaliation?) flags a post? Can somebody please explain to me what I did wrong?
Just to add an example of what I’m working on, here’s some 60fps-ish animation (AffineTransform with easings from @bgporter’s library) of a component with 2 drop shadows at the top of the modal (a bit hard to see here) and a big background blur (slider and thumbs in the background are also vector with multiple drop shadows, not images).
The big background blur takes 9ms on my mac (a bit longer on my PC) when “open” is clicked, but then is cached and trivial to animate (~100us paint calls for the whole modal). I might look at allowing the background blur to be dynamic (since the controls behind can be automated, moving, etc), but if so, I’d limit the fps there, since it probably takes longer than 9ms on an older system and I like to have lots of perf headroom.
Your example isn’t very meaningful without information about the canvas size. The difference between HD and 4K is huge in terms of fill rate. So much that 60 fps will drop to 10 (see DemoRunner > GraphicsDemo.h > Images: ARGB Tiled on HD vs 4K), for drawing stuff that could probably run at 4000 fps on GPU.
Anyway, it really should be embarrassing that we only “reach” 60fps animations with much effort and trickery. Even Adobe Flash (RIP) could use realtime component filters at 60 fps more than a decade ago, using a bytecode language that cleverly dispatched GPU shaders instead of relying on CPU rasterization.
The thing is: why should we even bother optimizing these CPU render cases with caches, lower framerates, threading and other trickery? Every device has some form of GPU and supports basic shaders. Devices without at least OpenGL 2.0 are increasingly rare; we shouldn’t even think about stuff like this and should instead use basic blur shaders on a framebuffer. And the component hierarchy should take care of creating these temporary framebuffers, instead of explicit juce::OpenGLFramebuffer switches. In that regard, OpenGLGraphicsContextCustomShader is probably the most useless class in the entire framework.
JUCE should ideally try to avoid dedicated modules like juce_opengl and use hidden implementations for juce::Graphics, without stuff like juce::OpenGLContext.
DX12 on Windows.
Metal on Mac/iOS.
Vulkan/OpenGL ES on Android.
Vulkan/OpenGL on Linux.
Then give developers a way to easily query the available features. Personally I would like to see Vulkan on all devices. That makes the most sense, or at least offering GLSL for all backends, internally compiling it to SPIR-V with glslang and cross-compiling that to each backend’s specific shader code.
But the more future-proof version is abstracting away framebuffers, vertex buffers, index buffers and render passes, which is basically enough for most effects.
Similar to how it’s done in
You can achieve anything with these objects. The hard thing is to find a good abstraction for JUCE that is easily manageable, without dragging in huge codebases and too many dependencies. It needs at least 3 or more full-time developers with good knowledge of graphics APIs — capacity I can’t see in the current commits.
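The framebuffer/render-pass abstraction suggested a few posts up could be sketched as a small pair of interfaces with backend-specific implementations behind them. Everything here is hypothetical and invented for illustration (`Framebuffer`, `RenderPass`, the software backend); a real Metal/D3D12/Vulkan backend would implement the same interfaces.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch: hide the concrete GPU API behind small
// framebuffer / render-pass interfaces.
struct Framebuffer
{
    virtual ~Framebuffer() = default;
    virtual int width() const = 0;
    virtual int height() const = 0;
};

struct RenderPass
{
    virtual ~RenderPass() = default;
    virtual void begin (Framebuffer&) = 0;
    virtual void drawGeometry (const std::vector<float>& xyPairs) = 0;
    virtual void end() = 0;
};

// A trivial software backend standing in for a real GPU one;
// it just logs the calls so the control flow is visible.
struct SoftwareFramebuffer : Framebuffer
{
    int w, h;
    SoftwareFramebuffer (int w_, int h_) : w (w_), h (h_) {}
    int width()  const override { return w; }
    int height() const override { return h; }
};

struct SoftwareRenderPass : RenderPass
{
    std::string log;
    void begin (Framebuffer& fb) override
    {
        log += "begin " + std::to_string (fb.width()) + "x"
                        + std::to_string (fb.height()) + ";";
    }
    void drawGeometry (const std::vector<float>& v) override
    {
        log += "draw " + std::to_string (v.size() / 2) + " verts;";
    }
    void end() override { log += "end;"; }
};
```

The component hierarchy could then request temporary framebuffers through an interface like this, so effects such as blurs never need to know which GPU API is underneath.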
This example was 800x400 (@2.5x I think?), about half the size of my plugin and about half a typical large plugin size. It scales up fine within budget, beyond anything I would need (that’s already about the max size for a laptop display).
Please note that the goalpost for this FR is not “4K animated blurs at 60fps on CPU” — obviously that would be great, but it’s asking the impossible, as you describe.
Instead, the FR asks for incremental improvements to what already exists to help devs implement modern vector designs.
Because with a bit of improvement (basic caching, not drawing more than is needed) I can deliver my vector plugin designs in JUCE today.
To me this seems like a good argument for improving and optimizing what we have instead of waiting for a magic bullet.
I really like this idea. Offloading some rudimentary tasks like blurring to the GPU would be a great improvement (without tossing the baby out with the bathwater).
Everyone is aware of it being deprecated, but as far as I know those people then always go on talking about Vulkan support, so basically the same solution: adding something that utilizes the GPU.
In Cubase you can see the fps in the project settings, I think. I don’t have the DAW open right now though; I tested this in Cubase 9.5.3 Artist. My current main DAW, Bitwig, has no such limitation, but sometimes its own visuals as well as the plugins’ are laggy as hell. That’s just distracting, even if there are moments of absolute smoothness in between.
Really? Rotations are possible? But then how come the bounds are still described as x, y, width, height? It would make more sense to me if they were described in points, like juce::Line, to make it clear a component can be more than a simple rectangle.
Yep, that’s an improvement. Often people just want a little bit of variance in some UI elements, but that means you either have to deal with the casts of the Component property system, or derive an entirely new LookAndFeel from your current one (or the base) just to make that change. When all looks are made with free functions, you just use a different one. I rewrote so much stuff to work like that, I don’t even have to use parameterAttachments anymore; I just tell the knob “by the way, you’re a parameter with a certain ID and you should look like this or that”, and done.
I feel like this one point needs to be really clear: the pipeline used to reach the GPU doesn’t really matter. Vulkan, Direct3D, OpenGL, Metal: I don’t think we really care. As long as there is ongoing support from the target platform and a real implementation behind the JUCE graphics calls, it would be great if we didn’t even have to think about what’s going on under the hood. Keep using OpenGL on Windows and Metal on macOS, but have any g.draw commands populate geometry into the relevant pipeline and completely offload rendering to the GPU. That’s what I’d like to see: abstract the details away from us, just make it use the GPU properly, and have the g.draw commands follow what would be expected behaviour in the broader design community.
I always found that odd too - I thought it’d make more sense to be able to explicitly set a component’s anchor point, which would be the point specified by setPosition() and the point about which any rotation is applied.
I tend to only use the look-and-feel to overwrite the style of the existing JUCE widgets. If I’m making my own widgets then I will usually use Painter helper classes that handle the painting. I’ve never seen any benefit in adding bloat to the existing look-and-feel class.
Completely agree - I’ve always found it odd that enabling the OpenGLContext on a component will more often than not increase CPU usage and lead to lower framerates. It might be a sensible tradeoff if you could then provide custom shaders for certain components, but AFAIK that’s not possible, so you’d have to write your own implementation from scratch.
I like this. The FR suggests replacing the slow CPU blur/shadow with the fast CPU StackBlur as a first step in the short term. But this (as @parawave also described) would be a more sustainable/performant endgame.
Agreed. It would be nice to keep this on-topic to the pragmatic design needs and if/how people have been working around them (vs. the GPU implementation debate). I would love to hear if anyone else is layering multiple shadows (like I am with StackBlur), or has missing design needs not described by the FR…
Trends are partially the result of what the technology makes easiest, not much of a guideline on how to build technology. Making flat designs look good has been easier in JUCE than gradients, so people made flat looks. But the trend can change with new JUCE features. It’s the same as with sensitive mousewheel handling. Sorry that I keep bringing that up in different threads, but it’s again the perfect example of JUCE’s influence on trends: Slider doesn’t have that feature, so almost no plugin nowadays has it, even though it was standard in all plugins 10 years ago. So trends are definitely majorly the result of the technology involved. Back to topic: no matter if flat, neumorphic, skeuomorphic, pixel art or whatever, if JUCE provides the tools, the styles will be used.