Decals for increased graphics performance

I was wondering whether the implementation of decals (sprites that can be used directly on the graphics card) would be a simple way to greatly increase the graphics performance in most use cases.

If I am not completely misguided, using them could be as simple as drawing components or elements into decals, which are then rendered very efficiently by the GPU, as is demonstrated in the following video (starting at 4:34):

[olc::PixelGameEngine 2.0 - YouTube]

If this is at all feasible, it would provide an incredible performance boost without forcing people into using OpenGL or some other API. I suspect that for most people, being able to render several thousand fully transformed, tinted, scaled and alpha-blended sprites, using the methods they are already familiar with, would be… “all right” :slight_smile:

Note that the PixelGameEngine consists of just a single header file, has no dependencies and is cross-platform (Linux, Windows, macOS and Emscripten). Perhaps some people here would even want to experiment with integrating the game engine with the JUCE framework, who knows?

N.B. I wholeheartedly recommend the OneLoneCoder channel. Even if game development is not one’s cup of tea, the videos are extremely interesting and well made. And there are some excellent back-to-the-basics videos on fundamental C++ concepts such as pointers, containers, polymorphism, etc. Great resources for those starting out!

To clarify: say one has a very large number of knobs, buttons and display elements one would like to draw and animate. Instead of drawing them directly, one would first create decals out of these elements using the familiar functions provided by the graphics context (drawLine, drawRect, drawImage, strokePath, etc.) and then one would let the GPU render these decals onto the screen using decal-specific functions (such as drawDecal, drawRotatedDecal, drawWarpedDecal, etc.). Very simple but extremely efficient, and without making the user learn OpenGL or any other API.
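To make the proposed workflow concrete, here is a hypothetical sketch. The `Decal` class and `drawRotatedDecal` call are borrowed from the PixelGameEngine naming and do not exist in JUCE; only the `juce::Image`/`juce::Graphics` part is real API:

```cpp
// Step 1: rasterise a knob ONCE, using the familiar Graphics calls.
juce::Image knobImage (juce::Image::ARGB, 64, 64, true);
{
    juce::Graphics g (knobImage);
    g.setColour (juce::Colours::orange);
    g.fillEllipse (4.0f, 4.0f, 56.0f, 56.0f);
    g.setColour (juce::Colours::black);
    g.drawLine (32.0f, 32.0f, 32.0f, 8.0f, 2.0f); // pointer line
}

// Step 2: upload it to the GPU once (hypothetical class):
Decal knobDecal (knobImage);

// Step 3: every frame, let the GPU transform/tint/blend it
// (hypothetical call, PixelGameEngine-style):
for (auto& knob : knobs)
    drawRotatedDecal (knobDecal, knob.position, knob.angle, knob.scale, knob.tint);
```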

(Now, if this is already possible and I am just displaying my ignorance then please by all means let me know!)


I think you’re 100% right in your approach, and that indeed would be a GPU-friendly way to paint things efficiently.

But, it would be quite a major refactor to both the front and the back end of JUCE, so I wouldn’t call it a ‘feature request’, more like an overhaul of the way JUCE does graphics.

If someone wants to fork JUCE and try it as a community patch, I’d volunteer to help out, though I can’t contribute much to the back end (OpenGL/Metal/etc.) that’s needed for it to work.

Yes, you are right, this probably does not qualify as an ordinary “feature request”. But then again, neither does the most-voted feature request by far, JUCE Vulkan :wink:

There are probably good reasons why what I propose hasn’t been done already, but I don’t quite understand why it would necessarily mean an overhaul of the way JUCE does graphics.

Couldn’t this be done through a new type of desktop window with its own rendering architecture? To display any component in this window, the user would first need to draw it to a decal or layer and then use the corresponding functions to render it. All the new functionality would thus be separate from, and in addition to, the existing JUCE graphics.

This might seem somewhat limiting: there would be no access to shaders, and users would only get a performance gain by actively using decals. But a lot would be gained in terms of performance, and a lot can be done with just this tool.

Or is this all completely unrealistic?

In that case a different approach could be to make the PixelGameEngine run alongside JUCE. Of all game engines out there, it’s as ideal a candidate as it gets: it’s a single header file, has no dependencies and has a permissive license.

If this were to work it would also attract a very large community of people to JUCE, as they could in turn benefit from the many features JUCE has to offer.

I think the difficulty is to make this idea ‘fit’ with current things made in JUCE, either existing Components, or even existing plugins/apps, with relatively minor effort.

If you just want to draw something in a wholly separate window without any JUCE interaction, I’m not even sure it’s worth changing JUCE. You can just spawn a new window, pass the HWND/NSView to PixelGameEngine and draw there. You can grab the native handle using ComponentPeer::getNativeHandle().
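A rough sketch of that approach (untested; it assumes a PixelGameEngine variant that can attach to a caller-supplied native window, which the stock engine does not offer out of the box):

```cpp
class NativeCanvasWindow : public juce::DocumentWindow
{
public:
    NativeCanvasWindow()
        : juce::DocumentWindow ("GPU Canvas", juce::Colours::black,
                                juce::DocumentWindow::allButtons)
    {
        setUsingNativeTitleBar (true);
        setSize (800, 600);
        setVisible (true);

        // HWND on Windows, NSView* on macOS, X11 Window on Linux:
        if (auto* peer = getPeer())
            nativeHandle = peer->getNativeHandle();

        // Hand nativeHandle to the external renderer here. Note that,
        // as mentioned below, JUCE mouse/keyboard handling won't apply
        // to whatever is drawn into this surface.
    }

private:
    void* nativeHandle = nullptr;
};
```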

However, doing that also means you won’t have any of the JUCE mouse/keyboard interaction logic, and you’ll essentially be in a very “raw” land, which isn’t so high level.

I don’t think you have to convince anyone that JUCE needs better graphics support, that has been the top request since forever. The problem is to bring a concrete suggestion on how to do it in a way that’s compatible with all the use cases and platforms JUCE wants to support.

I would start by forking JUCE and prototyping it yourself, to see what you can get. :slight_smile:


Essentially this sounds like image caching, no? Instead of drawing directly, you render into an image (which you call a decal), then draw the image.

juce::Component::setBufferedToImage(true) does this partially. If you want more specialization, customize it via juce::CachedComponentImage.
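For reference, the built-in caching is a one-liner on the component; the framework then blits the cached image on redraws until the component is invalidated:

```cpp
struct ExpensiveMeter : public juce::Component
{
    ExpensiveMeter()
    {
        // Cache the output of paint() into an image; subsequent redraws
        // reuse that image instead of re-running paint().
        setBufferedToImage (true);
    }

    void paint (juce::Graphics& g) override
    {
        // ... expensive vector drawing here ...
        g.fillAll (juce::Colours::darkgrey);
    }
};
```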

BUT don’t forget we have high-DPI-aware windows in modern OSs. The real problem starts if the decal is drawn at, say, a scale of 1.2: now all your decals must be re-rendered or drawn with resampling. If you want quality, this has to be done with some kind of anti-aliasing. JUCE does this using pixel line quads/triangles, via a scanline algorithm with pixel-precise clip regions, which is why the whole OpenGL image rendering is so slow to begin with.

One has to accept a compromise. Most sprite/decal engines do not draw their quads anti-aliased or with clip regions. But for most JUCE users, anti-aliased drawing, glyph rendering and clipping are very important.

If your goal is just a sprite engine with rotation and shader FX (no anti-aliasing), you can somewhat work around this by using the OpenGLRenderer: just draw quads into a framebuffer, then draw the framebuffer with JUCE’s normal graphics.
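A minimal outline of that workaround, using the real juce::OpenGLRenderer callbacks and juce::OpenGLFrameBuffer (the actual quad drawing is raw GL and is elided here):

```cpp
class SpriteLayer : public juce::Component,
                    private juce::OpenGLRenderer
{
public:
    SpriteLayer()
    {
        context.setRenderer (this);
        context.attachTo (*this);
    }

    ~SpriteLayer() override { context.detach(); }

private:
    void newOpenGLContextCreated() override
    {
        // Create the off-screen render target for the sprites.
        framebuffer.initialise (context, 512, 512);
    }

    void renderOpenGL() override
    {
        // Draw rotated/tinted quads into the framebuffer with raw GL
        // here (no anti-aliasing, as discussed above), then composite
        // the framebuffer into the component.
    }

    void openGLContextClosing() override { framebuffer.release(); }

    juce::OpenGLContext context;
    juce::OpenGLFrameBuffer framebuffer;
};
```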


I’m not sure how important antialiasing would be in practice, that’s a good point. But yes, pretty much what I’m after is a simple sprite engine with rotation and shader fx. If it makes use of the GPU and is as fast as what is demonstrated in the video above, then I think quite a lot can be done already with such a simple “tool”.

If I understand you correctly, this is already possible using the JUCE OpenGL renderer. If so, then the feature request would boil down to a class which offers this functionality (some basic form of GPU-accelerated sprites) without the user having to actually use OpenGL, or whichever other rendering engine is used under the hood.

But I will experiment with what you described and see how far I get. Thanks for the explanation!

Can you describe how what you’re asking for is different than using the JUCE OpenGL renderer with setBufferedToImage?

It is my understanding that most graphics backends like to get reusable objects and then apply transformations on it on the GPU.

So for example, if your GPU knows that you passed it the same slider 500 times, but with different rotations and sizes, it can do far more efficient things than if you passed it 500 different images that only the programmer knows are the same slider.

Not to mention pre-calculating those images on the CPU, like JUCE is doing now, is expensive too.