I came to the same conclusion.
Render pipelines like these seem to have a fundamental requirement: they expect some kind of geometry definition so they can cache and optimize things. Unfortunately, the current JUCE graphics context API is designed in a way that makes it very hard to map onto conventional modern GPU-based graphics backends, especially the very flexible “clip to path” functionality.
So, instead of wasting time with APIs on top of APIs that drag in massive dependencies, how about making small adjustments to the existing API? I find the idea of a “fast mode” with restricted features very intriguing. It will probably make only a minimal difference for dynamic paths and the real edge cases, but for image-based rendering there will definitely be a massive boost. Combined with JUCE’s existing “cached to image” functionality, this would most likely improve the framerates of existing UIs; especially for 4K UI rendering, large textures are still a considerable slowdown.
Again, the biggest issues stem from dynamic paths. But for things like waveforms, there is still the possibility to cache the previously generated part and to differentiate between static and dynamic geometry.
If we think about it, the whole problem comes down to effective, smart caching of static objects while offering a minimal set of dynamic parameters.
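To make the waveform idea concrete, here’s a minimal sketch of the static/dynamic split, independent of any JUCE types (all names are invented for illustration): already-seen samples are reduced to min/max buckets once and never recomputed, while only the incomplete tail stays dynamic and gets redrawn each frame.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical sketch: cache min/max buckets for the static part of a
// waveform; only newly appended samples ever get bucketed.
class WaveformCache
{
public:
    explicit WaveformCache (std::size_t samplesPerBucket)
        : bucketSize (samplesPerBucket) {}

    // Append new samples; only complete buckets move into the static cache.
    void append (const std::vector<float>& samples)
    {
        pending.insert (pending.end(), samples.begin(), samples.end());

        while (pending.size() >= bucketSize)
        {
            auto first = pending.begin();
            auto last  = first + static_cast<std::ptrdiff_t> (bucketSize);
            auto mm    = std::minmax_element (first, last);
            buckets.push_back ({ *mm.first, *mm.second });
            pending.erase (first, last);
        }
    }

    // Static geometry: cached once, reusable across frames.
    const std::vector<std::pair<float, float>>& staticBuckets() const { return buckets; }

    // Dynamic geometry: the still-growing tail, redrawn every frame.
    const std::vector<float>& dynamicTail() const { return pending; }

private:
    std::size_t bucketSize;
    std::vector<std::pair<float, float>> buckets;
    std::vector<float> pending;
};
```

The point is only the separation of concerns: the renderer can keep the bucketed part in a GPU buffer and stream the tiny dynamic tail separately.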
All of this can be done with the current OpenGL context; it’s certainly not impossible. What’s more problematic is the fact that there has to be some kind of API extension, something abstract enough to not directly depend on OpenGL, Vulkan or Metal. Future-proofing is a big concern.
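To sketch what such a backend-neutral extension point could look like (all names invented here, not an actual JUCE proposal), the API could expose only abstract handles, with each backend providing its own implementation behind a pure virtual interface:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch: an opaque handle to geometry that a backend has
// uploaded. The caller never sees GL/Vulkan/Metal specifics.
struct CachedGeometry
{
    virtual ~CachedGeometry() = default;
};

// Backend-neutral extension point; GL, Vulkan and Metal would each
// implement this interface.
class RenderBackend
{
public:
    virtual ~RenderBackend() = default;

    // Upload geometry once; the handle stays valid until released.
    virtual std::shared_ptr<CachedGeometry> upload (const std::vector<float>& vertices) = 0;

    virtual std::string name() const = 0;
};

// A trivial stub standing in for a real GPU backend.
class StubBackend : public RenderBackend
{
    struct StubGeometry : CachedGeometry
    {
        std::vector<float> vertices;
    };

public:
    std::shared_ptr<CachedGeometry> upload (const std::vector<float>& vertices) override
    {
        auto g = std::make_shared<StubGeometry>();
        g->vertices = vertices;
        return g;
    }

    std::string name() const override { return "stub"; }
};
```

The opaque-handle shape is what keeps the public API free of any GL, Vulkan or Metal types.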
I suspect designing it requires an experimental approach, since the requirements are unknown or at least vague.
Whenever I think about a possible API extension, it usually ends up looking something like this:
class RenderComponent : public juce::Component
{
public:
    RenderComponent()
    {
        // The cache registers itself in the graphics backend
        // and can then create cached objects there
        cache.attach (*this);

        // Create cached vertex geometry and shaders for later rendering.
        // Purely depends on the backend implementation (GL, Vulkan, Metal)
        {
            juce::Path p;
            // ...
            path = cache.createPath (p);
        }
    }

    ~RenderComponent() override
    {
        cache.detach();
    }

    void paint (juce::Graphics& g) override
    {
        // Uses the current state of Graphics to initialise a stack-based "fast render" context
        AcceleratedRender renderer (g, cache);

        // Draw the cached path with dynamic shader parameters
        {
            PathParameters params;
            params.transform = juce::AffineTransform();
            params.colour = juce::Colours::white;  // any dynamic colour, for example
            renderer.draw (path.get(), params);
        }
    }

    RenderCache cache;
    PathObject::Ptr path;
};
So in this example, all of the heavyweight geometry work is cached in a vertex buffer, and dynamic things like the view transform, colour or “fill type” become shader parameters. With this kind of mechanism there could be many kinds of objects that cache things with different degrees of flexibility. Perhaps some paths don’t even need a transform or a dynamic fill; the same goes for images.
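As a rough illustration of the caching side (again with invented names, independent of JUCE), the cache could key uploaded geometry by a hash of the path, so identical static paths across components end up sharing a single vertex buffer:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: stands in for a GPU-side vertex buffer.
struct GeometryHandle
{
    std::vector<float> vertices;
};

// Cache that deduplicates uploads: identical path keys share one handle.
class GeometryCache
{
public:
    // Returns the cached handle for this key, uploading only on first use.
    std::shared_ptr<GeometryHandle> getOrCreate (std::size_t pathKey,
                                                 const std::vector<float>& vertices)
    {
        auto it = cache.find (pathKey);
        if (it != cache.end())
            return it->second;

        auto handle = std::make_shared<GeometryHandle>();
        handle->vertices = vertices;
        cache[pathKey] = handle;
        return handle;
    }

private:
    std::unordered_map<std::size_t, std::shared_ptr<GeometryHandle>> cache;
};
```

With shared handles like this, the per-draw cost reduces to binding an existing buffer and pushing a handful of shader parameters.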