Another software renderer?

Thanks for the update. I’m intrigued: when you say “translating all juce::drawPath and friends into skia::drawPath”, does that mean raw painting, or does it still use anything from JUCE on top, like edge lists?

Google’s DAWN and ANGLE projects look promising, but so does @parawave 's Vulkan work.

Using Vulkan is a quick compile, and it works just like OpenGL did. It handles multi-threaded environments and reduces the need to alter the JUCE renderer fundamentally. Plus, Vulkan (or MoltenVK) in JUCE would be a new starting block for gradual future changes.

I’m not sure what RAW painting means. No edge tables are involved in the CoreGraphics renderer and no edge tables were involved in the Skia renderer either.


Just wanted to pitch in, as we would like to see the JUCE team take this topic seriously.

The OSX implementation translates into CoreGraphics calls, and the OS turns those into Metal calls when possible. Letting the OS handle this is a good thing: if there is a good, optimised translation into native calls, you don’t need a Vulkan or Skia back-end.
When you enable the async drawing macro on OSX, the drawing is even done on a background thread, freeing the message thread from it. So letting the OS do the work, and making sure we have an optimised mapping from JUCE to native calls, is the way to go in my opinion.
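
(For anyone looking for it: if I remember correctly, the flag in question is JUCE_COREGRAPHICS_DRAW_ASYNC in the juce_gui_basics module config; please verify the name against your JUCE version.)

```cpp
// In AppConfig.h or your Projucer/CMake preprocessor definitions.
// Flag name from memory; check the juce_gui_basics module config
// for your JUCE version before relying on it.
#define JUCE_COREGRAPHICS_DRAW_ASYNC 1
```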

But all of this does not work on Windows; we are still stuck with the software renderer (if I’m correct, the Direct2D back-end was never completed). Any chance a proper hardware-accelerated back-end is in the planning? I can’t imagine Windows not providing a mechanism similar to the one OSX does.

I think it’s important to find the current bottlenecks and see what can be improved, learn from the tools that are out there and see how they can be applied to JUCE. Caching is probably one of them.

I’m not sure how React does it, but it might be good to dig around.
We recently created a REST API interface for our software, and as an example we made a React clone of our UI. Speed-wise it runs circles around the JUCE UI. It’s funny to see the React UI running in a browser on a different machine respond quicker to changes in our app than the JUCE UI itself does. So there are big gains out there; let’s try to find them!


" It’s funny to see the REACT JS UI running in a browser on a different machine respond quicker to changes in our app than the JUCE UI itself does."

Haha. Regarding JavaScript: I think Apple now uses ANGLE to translate GLSL into native Metal. This includes the latest iOS, judging from the feedback I get on my JS shaders. So it’s a layer under WebGL that intercepts the calls. The added bonus is that Apple is finally WebGL 2 compatible! Yay, go them! Browsers use hardware rendering where they can.
My guess is that they are probably ready to ditch OpenGL, with no complaints from WebGL creatives?

I can run this JS Shadertoy on my iPad and in macOS Safari now: Arcane lands shader
If you click “compiled in” at the bottom of the edit window, you can see the ANGLE references in Safari.

I second every word of this :clap: :clap: :clap:


What are the arguments against using Vulkan again?

It compiles quickly and slots straight into JUCE’s pipeline.

It’s really worth trying. There are only a few missing pieces and bugs in his version, but it shows the massive potential.


Dawn looks sweet!

I like the alternate reality where JUCE integrates with / depends on a higher-level C++ framework such as Dear ImGui (or Skia, Flutter, vger, etc.).

JUCE provides the framework-specific ParameterAttachment-ish glue and a high-quality, audio-themed widgets library. The lower-level graphics stuff gets outsourced to a very well-maintained open-source project. More graphics options are opened up to devs, and the software renderer could be deprecated.

Seems like a good sustainable/modern alternative to a heavy rewrite!


To put it in dollar terms: without this, it’s going to continue to be cheaper not to use JUCE (just in engineer hours, not license fees). Where that breaks down is the audio plugin APIs, which have requirements that break most GUI frameworks that make assumptions about the event loop, etc.


I’ve used GDI, GDI+, Direct2D, OpenGL, and CoreGraphics in plugins (VST2/3, AU, AAX). I can’t recall many problems caused explicitly by the plugin API. Perhaps I’m biased through avoiding graphics frameworks that are not DLL-friendly.

Which plugin APIs break GUI frameworks?


Those are lower-level than what I’m talking about; compare trying to use Electron, Flutter, or anything higher-level for a plugin. I haven’t looked at Flutter specifically in a while, but I recall it wasn’t possible for some time.


I came to the same conclusion.

There seems to be a fundamental requirement for these types of render pipelines: they expect some kind of geometry definition in order to cache and optimise things. Unfortunately, the current JUCE graphics context API is designed in a way that makes it very hard to map onto conventional modern GPU-based graphics back-ends, especially the very flexible “clip to path” functionality.

So, instead of wasting time with APIs on top of APIs that drag in massive dependencies, how about making small adjustments to the existing API? I find the idea of a “fast mode” with restricted features very intriguing. It’s true that there will probably be a minimal difference for dynamic paths and the rare edge cases, but for image-based renderings there will definitely be a massive boost. If combined with the existing “cached to image” functionality of JUCE, this will most likely improve the frame rates of existing UIs. Especially for 4K UI renders, large textures are still a considerable slowdown.

Again, the biggest issues stem from dynamic paths. But for things like waveforms, there is still the possibility of caching the previously generated part and differentiating between static and dynamic geometry.
If we think about it, the whole problem is about effective, smart caching of static objects, while offering a minimal set of dynamic parameters.
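
The static/dynamic split for waveforms can be sketched without any particular back-end. The PeakCache type below is hypothetical (not a JUCE class): it reduces only the samples appended since the last update, so the already-reduced static prefix is never recomputed:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch: cache min/max peaks for the static part of a
// growing waveform, reducing only the newly appended tail each time.
struct PeakCache
{
    std::size_t samplesPerPeak;                  // reduction factor
    std::vector<std::pair<float, float>> peaks;  // cached (min, max) pairs
    std::size_t consumed = 0;                    // samples already reduced

    explicit PeakCache (std::size_t spp) : samplesPerPeak (spp) {}

    // Reduce only the new tail of 'samples'; earlier peaks stay cached.
    void update (const std::vector<float>& samples)
    {
        while (consumed + samplesPerPeak <= samples.size())
        {
            auto first = samples.begin() + static_cast<std::ptrdiff_t> (consumed);
            auto last  = first + static_cast<std::ptrdiff_t> (samplesPerPeak);
            auto [mn, mx] = std::minmax_element (first, last);
            peaks.emplace_back (*mn, *mx);
            consumed += samplesPerPeak;
        }
    }
};
```

The same idea extends to uploading only the new peaks into a vertex buffer, while the cached prefix stays untouched on the GPU.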

All of this can be done with the current OpenGL context; it’s not impossible. What’s more problematic is that there has to be some kind of API extension: something abstract enough not to depend directly on OpenGL, Vulkan or Metal. Future-proofing is a big concern.
I suspect designing it requires an experimental approach, since the requirements are unknown, or at least vague.

Whenever I come to think about a possible API extension, it’s mostly something like this:

```cpp
class RenderComponent : public juce::Component
{
public:
    RenderComponent()
    {
        // The cache will somehow register itself in the graphics backend
        // and can now create cached objects there.
        // Create cached vertex geometry and shaders for later rendering.
        // Purely depends on the backend implementation (GL, Vulkan, Metal).
        juce::Path p;
        // ...
        path = cache.createPath (p);
    }

    void paint (juce::Graphics& g) override
    {
        // Uses the current state of Graphics to initialise a
        // stack-based "fast render" context.
        AcceleratedRender renderer (g, cache);

        // Draw the cached path with dynamic shader parameters.
        PathParameters params;
        params.transform = juce::AffineTransform();
        params.colour = ...; // dynamic fill colour
        renderer.draw (path.get(), params);
    }

private:
    RenderCache cache;
    PathObject::Ptr path;
};
```

So in this example, all of the heavyweight geometry work is cached in a vertex buffer, and dynamic things like the view transform, colour or “fill type” are shader parameters. With this kind of mechanism there could be many objects that cache things with different degrees of flexibility. Perhaps some paths don’t even need a transform or a dynamic fill. The same goes for images.


I love your suggestion. I think an API change is the only way for this to happen, especially if the old API can still be used as a fallback.

Being able to explicitly define the reused parts of the components + transforms like you did in your example should lead to dramatic performance boosts, from the little I know about graphics.


I like your suggestion. Especially keeping it abstract, rather than committing to a specific renderer, is an important precondition.


This is pretty much the sort of change I’d love to see. You could always have a higher-level layer on top for drawRectangle / drawCircle-type things when you don’t really care about performance, but also having the option to make more efficient use of the available hardware, without needing to expose the underlying context, would be huge.


My current project is using Flutter for the front-end and Rust for the DSP (with a thin JUCE wrapper for the VST APIs). The biggest pain is using FFI to communicate between them, and getting the Flutter desktop embedder to work with VST.


I’m running into the same issue on Windows: graphics are highly pixelated due to the poor software resampling. I’m talking about drawing bitmaps only.

Does anyone here have a solution for improving the resampling on Windows?


Yes, we use “mipmaps”: when a bitmap is loaded, we automatically create lower-resolution versions (downsampling to 50% of the previous level until we hit a tiny size). In the paint function, we use the closest bitmap that doesn’t have to be upscaled, only downscaled a tiny bit.

This way JUCE always has a bitmap available that needs to be downscaled by maybe 10-20% and not 400% or something crazy which makes it look so bad.
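
In rough C++, the selection rule might look like this (pickMipLevel is a hypothetical helper, not a JUCE function). Note that it works in physical pixels, i.e. it multiplies the logical destination size by the display scale before choosing a level:

```cpp
#include <cstddef>

// Hypothetical sketch: pick the smallest mip level that is still at
// least as wide as the physical (pixel) destination, so the draw only
// ever downscales by a small amount. 'sourceWidth' is the width of the
// full-resolution bitmap; each mip level halves it.
std::size_t pickMipLevel (std::size_t sourceWidth,
                          float logicalDestWidth,
                          float displayScale,     // e.g. 2.0f on a Retina display
                          std::size_t numLevels)
{
    const float physicalDestWidth = logicalDestWidth * displayScale;

    std::size_t level = 0;
    std::size_t width = sourceWidth;

    // Step down while the *next* level would still cover the destination.
    while (level + 1 < numLevels
           && static_cast<float> (width / 2) >= physicalDestWidth)
    {
        width /= 2;
        ++level;
    }
    return level;
}
```

For a 1024-pixel-wide source drawn into a 100-point rectangle at 1x scale, this lands on the 128-pixel level (a small downscale); at 2x scale it correctly picks the 256-pixel level instead.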


Thanks ReFX, very helpful. Could we generate mipmaps on demand during resize? That is, instead of generating all mipmaps when the plugin loads, we would only generate the exact mipmap needed for the current window size in the resized() handler (so, to clarify, we would recompute mipmaps every time the window size is halved or doubled). We are using a huge number of images in our plugin and would like to avoid excessive RAM consumption from having all mipmaps loaded at once; however, we are not sure what the performance impact of generating them on demand will be.

Also, do you know if there is a safe way to recompute the mipmaps in a background thread, and use the original images until the new mipmaps have been computed?
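
Something like this is what I have in mind for the background-thread part (a plain C++ sketch, nothing JUCE-specific; rebuildChain / acquireChain are made-up names, and bitmaps are stand-in float vectors). The worker builds a fresh chain off to the side and swaps it in under a lock, so the paint path keeps whatever chain it last grabbed until the swap completes:

```cpp
#include <cstddef>
#include <memory>
#include <mutex>
#include <utility>
#include <vector>

using MipChain = std::vector<std::vector<float>>;  // stand-in for real bitmaps

std::mutex chainMutex;
std::shared_ptr<const MipChain> currentChain;      // what the paint path reads

// Run on a background thread: build a complete replacement chain,
// then publish it under the lock. Readers still holding the old
// shared_ptr keep a valid chain until they drop it.
void rebuildChain (std::vector<float> original)
{
    auto fresh = std::make_shared<MipChain>();
    fresh->push_back (std::move (original));

    while (fresh->back().size() >= 2)              // halve until tiny
    {
        const auto& prev = fresh->back();
        std::vector<float> next (prev.size() / 2);
        for (std::size_t i = 0; i < next.size(); ++i)
            next[i] = 0.5f * (prev[2 * i] + prev[2 * i + 1]);
        fresh->push_back (std::move (next));
    }

    std::lock_guard<std::mutex> lock (chainMutex);
    currentChain = std::move (fresh);              // publish
}

// Called from the paint path: takes a consistent snapshot.
// Returns null until the first rebuild has completed.
std::shared_ptr<const MipChain> acquireChain()
{
    std::lock_guard<std::mutex> lock (chainMutex);
    return currentChain;
}
```

The key point is that the chain itself is immutable after publication; only the pointer swap needs synchronisation.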

Considering RAM: the first mipmap level (50% size) only needs 1/4 of the RAM of the original bitmap. The next level (25% size) only needs 1/16th of the RAM of the original bitmap, so the extra RAM needed is negligible. I wouldn’t consider doing this in the paint call, but rather during bitmap load/creation. All mipmap levels together still need less than 50% of the RAM of the original bitmap.

BTW, when choosing the right mipmap in the paint function, it’s essential to consider the physical size of the rectangle to be drawn, not the logical one. Otherwise you might pick a lower-resolution bitmap that then gets upscaled, and thus looks blurry.

Amazing, thanks for the help. Much appreciated.