I looked a bit into the OpenGLGraphicsContext — you know, the underlying implementation of LowLevelGraphicsContext.
It’s kind of amazing that, when you inherit from RenderingHelpers::SavedStateBase<…>, you only have to implement a few methods to get a complete JUCE Graphics context working.
… and so on. Implementing the “fill with solid colour” primitive alone already makes almost all of the stroke/fill path rendering show up.
Now, looking at the OpenGLGraphicsContext internals, I noticed there are a few shaders declared in there.
Initially I thought I had to implement every one of them. But then I noticed that, for example, the TiledImageMaskedProgram is only used in one place:
```cpp
void setShaderForTiledImageFill (const TextureInfo& textureInfo, const AffineTransform& transform,
                                 int maskTextureID, const Rectangle<int>* maskArea, bool isTiledFill)
```

Inside it, the TiledImageMaskedProgram is only selected when a mask is present:

```cpp
if (maskArea != nullptr)
```
Furthermore, this method is only called from a single place…

```cpp
state->setShaderForTiledImageFill (state->cachedImageList->getTextureFor (src), trans, 0, nullptr, tiledFill);
```

… and maskArea is always nullptr!
Not only that: as far as I can tell, all of the Masked programs are unused. Kind of surprising.
So… is it leftover code, or was there initially a plan to apply masking via shader? I’m not sure how to interpret this.
Anyway. Running the GraphicsDemo example project with a rudimentary VulkanContext, I measured the performance. I noticed it performs worst whenever clipping or clip regions are used.
This leads me to the conclusion that the biggest performance impact comes from the fact that any clipping of the drawing requires CPU preprocessing: the EdgeTable / scanline pre-processing in RenderingHelpers::SavedStateBase.
I guess that’s the reason why a bunch of simple rotating images perform so badly compared to a traditional “render one transformed quad” approach, with which thousands (instead of 10) are possible at 60 fps.
Seeing this, together with the unused masked-shader code, I guess there were initially plans to solve this issue — but to simplify the implementation, the software-rasterizer code from RenderingHelpers::SavedStateBase was reused instead?
If anyone has looked into this, or even @jules: what are your thoughts on this?
Just from debugging it, I can see that a lot of work went into the clipping / clip-region / edge-table code. But I have the feeling that much of it is unnecessary here, since it was originally intended only for the software rasterizer — and some of it could instead be solved with a mask texture, or perhaps the stencil buffer, in OpenGL / Vulkan.
In summary I have to say: using Vulkan is not that hard once you get past the initial setup. At the moment the fundamental problem seems to be the complexity of the clip regions — and most of the work is not Vulkan-related, but preparing the paths and regions for ANY kind of GPU rendering approach.