I’ve found another new high-quality 2D library out there that renders into an image. I want to try it with JUCE, and I wonder: is that even possible, and if so, how do I create a custom software renderer and make JUCE use it instead of the default one?
There are a ton of software rendering libraries out there, and sure, you can wrap them up into a custom LowLevelGraphicsContext class if you really want to. That’s not trivial, but not impossible.
However… all software rendering is going to be limited by the same memory-bandwidth bottlenecks, and the JUCE one is already pretty quick, so if you’re looking for better performance you’re likely to be disappointed.
Tom was recently experimenting with Skia (Google/Android/Chrome’s rendering engine) to see if it was worth us supporting that as we expected it to be much faster than our own, but the results were a big nope.
The only practical reason I can think of for wanting a different engine would be to get better image resampling, since that’s an area where the JUCE engine is pretty fast-and-dirty. But if that’s what you want, it’d make more sense to just implement an image resize function rather than try to replace the whole stack.
Also, I like the quality of its text-glyph rasterization. Very crisp.
I hoped there was a less intrusive way to bring another software rasterizer into JUCE. I knew I’d have to deal with LowLevelGraphicsContext, but I still hoped JUCE had a simpler way to substitute the software rendering engine.
I doubt there’s anything a CPU-based engine could possibly do to compete with a GPU-based solution, regardless of fancy JIT pipelines etc. As soon as the pixel counts get beyond a certain point, it’s memory bandwidth that kills it, not to mention the fact that even the fastest CPU renderer is burning CPU you could be using for other things, whereas even the slowest GPU renderer is making better use of all the silicon in your machine.
We’ve looked deeply into this for JUCE, and our current thinking is that a Metal/Vulkan-based refresh of our GL rendering engine would probably be as fast as it gets. At the moment the only bottleneck in our GL engine is that the CPU is still doing the edge tables and path flattening, but in the more modern compute languages that work could also be shoved onto the GPU.
I agree. Nothing can beat the GPU now. JUCE’s OpenGL context is really astonishing, especially how easily you can enable OpenGL rendering in a JUCE application.
But I’m talking about software renderers only. And Blend2D beats many vector engines out there. Look at its demos: they compare Qt’s and Blend2D’s rendering speeds. Obviously Qt is not known for rendering speed, but you can still see the dramatic difference and estimate Blend2D’s potential.
As per the author:
What About Blend2D and GPU Comparison?
The Blend2D project started as a challenge to compete with existing software-based 2D renderers in terms of performance. It was not planned to compete with GPU-accelerated renderers although the importance of that comparison is understandable. However, we believe that such comparisons should not only include the average frame-rate achieved, but also output quality as well as power and memory consumption of both main and GPU memory. This information is often missing, which may result in misleading and biased conclusions.
I really wish JUCE had that software rendering engine, or a way/architecture to easily substitute software rendering engines.
We only keep the software renderer for old Windows versions that have ropy GL implementations, and we’ll certainly ditch it as soon as there are suitable accelerators on all the platforms we target. It’s a dead-end way of doing graphics, not really worth investing time in, IMHO.
It would be appreciated if all this were thoroughly explained somewhere: how graphics are handled by default on each platform, when/how using an OpenGLContext is useful for accelerating Component paint() drawing, and so on.
I recently spoke to a producer who works with audio/video. He complained that most of his plugins with OpenGL support would interfere with the video rendering and cause huge dropouts in his video frames, so he disabled OpenGL. Not the most typical use-case, but a valid one nonetheless.
And also: integrated GPUs eat a lot of precious RAM bandwidth. If you’re on a cheap computer with a single memory module (no dual channel), it’s pretty easy to starve the CPU when some other application is using the GPU: the CPU runs at full speed with no throttling, but all computations involving memory accesses suddenly become around 80% slower. So integrated GPUs are really not a good fit when doing CPU-demanding realtime audio tasks.
To follow up, we are porting our OpenGL 4 video engine to Vulkan at the moment. As everybody probably knows by now, Apple will drop OpenGL support, so JUCE needs to come up with a solution for this as well. We would really like to know what the plans are and when we can expect a Vulkan-based renderer to replace the OpenGL one.
PM me if you’re still interested. My implementation uses Skia’s OpenGL backend to render the UI in conjunction with the JUCE OpenGLContext. I did not implement image drawing, and glyphs are only drawn via their outlines.