I've been working on a mobile app that uses a full-screen OpenGL view. And within that view, we do most of our UI using JUCE components. Unfortunately, we've found that JUCE doesn't paint these components very efficiently at all. For example, the time it takes to draw a rectangle seems orders of magnitude higher than it should be. Digging in to investigate...
When drawing a simple colored rectangle on the screen, you would expect the rendering to come down to sending two triangles to the GPU. Instead... RectangleListRegion::fillRectWithColour converts this rectangle to a SubRectangleIterator, which is passed to SavedState::fillWithSolidColour, which is eventually added to an EdgeTableRenderer via SubRectangleIterator::iterate. And when it's added to the EdgeTableRenderer, it's added one line at a time. And each line is rendered as a pair of triangles.
So the number of triangles actually drawn depends on the height of the rectangle. If you try to draw a rectangle with a height of 480, you end up rendering 960 triangles instead of 2. And with higher resolutions, drawing times increase proportionally.
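To make that scaling concrete, here's a tiny standalone cost model of the behaviour described above — the function is my own illustration, not JUCE code — assuming the renderer emits one quad (two triangles) per 1-pixel-high scanline:

```cpp
#include <cassert>

// Hypothetical cost model of the scanline-based fill described above:
// each 1-px-high scanline of the rectangle becomes one quad (two triangles),
// so the triangle count scales with the rectangle's height.
constexpr int idealTrianglesPerRect = 2;   // what you'd expect: one quad total

constexpr long scanlineTriangles (int rectHeightPx)
{
    return 2L * rectHeightPx;              // two triangles per scanline
}
```

Under this model a 480-px-tall rectangle costs 960 triangles instead of 2, matching the numbers above.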
The performance also gets much, much worse if you try to draw shapes that aren't so rectangular. Text rendering is a good example: each pixel is individually rendered as a pair of triangles.
Because of the poor performance of simply rendering rectangles, rendering components to images isn't even that helpful. It still takes much longer to render the buffered components than it should.
Are there any plans to improve on this?
All shapes, including rectangles, are currently rendered as paths. Because rectangles may need anti-aliased edges, they can't technically be drawn as just two triangles, but I thought I'd already optimised this so that they'd be drawn as 2 main triangles plus a handful of extra ones around the edges for any anti-aliased pixels (I already have utilities to break a sub-pixel rectangle down into a set of 8 pixel-aligned ones). Thanks for the heads-up on this, I'll take a look.
Ok, so it does already use an optimisation here: if you draw a big rectangle, it'll use at most 8 triangles to draw it.
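For illustration, here's a simplified standalone sketch of that kind of decomposition — my own names, and a reduced version that folds the corner pieces into the side strips, so it's not JUCE's actual utility. The pixel-aligned core can be filled opaquely with two triangles, while the thin fractional strips receive anti-aliased coverage:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct RectF { float l, t, r, b; };

// Hypothetical sketch of splitting a sub-pixel rectangle into pixel-aligned
// pieces: one opaque core plus up to four strips (< 1 px wide) along any
// fractional edge. The left/right strips absorb the corners here for brevity.
std::vector<RectF> splitForAntiAliasing (RectF rc)
{
    std::vector<RectF> parts;
    const float il = std::ceil (rc.l), it = std::ceil (rc.t);
    const float ir = std::floor (rc.r), ib = std::floor (rc.b);

    if (il < ir && it < ib)
        parts.push_back ({ il, it, ir, ib });                   // opaque core

    if (rc.l < il)  parts.push_back ({ rc.l, rc.t, il, rc.b }); // left strip
    if (ir < rc.r)  parts.push_back ({ ir, rc.t, rc.r, rc.b }); // right strip
    if (rc.t < it)  parts.push_back ({ il, rc.t, ir, it });     // top strip
    if (ib < rc.b)  parts.push_back ({ il, ib, ir, rc.b });     // bottom strip

    return parts;
}
```

A pixel-aligned rectangle yields a single piece (two triangles), while one with four fractional edges yields five pieces in this simplified version.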
There are a few exceptions: perhaps you're rendering into a target where the clip region is not a simple rectangle? Or have you rotated or skewed the context?
The place to look at what's going on is juce_RenderingHelpers.h - when your fillRect call goes in there, it'll do different things depending on the type of clipping involved. For simple clip rects, it should use the code in RectangleListRegion::fillRectWithColour, which is optimised to use a minimal number of triangles. If it ends up in one of the more complex clip classes, then it may have to use a line-by-line rendering method.
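As a rough standalone illustration of that dispatch — this only loosely mirrors the structure of juce_RenderingHelpers.h, and the enum and numbers are mine:

```cpp
#include <cassert>

// Loose model of the clip-type dispatch described above: a simple
// rectangle-list clip takes the optimised path with a bounded triangle
// count; anything more complex falls back to line-by-line rendering.
enum class ClipKind { rectangleList, complexEdgeTable };

long trianglesForFillRect (ClipKind clip, int rectHeightPx)
{
    if (clip == ClipKind::rectangleList)
        return 8;                          // fast path: at most 8 triangles
    return 2L * rectHeightPx;              // slow path: one quad per scanline
}
```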
Well, I think there must be a problem then. This is a screenshot of the JUCE demo, rendered with the OpenGL renderer; I modified the ShaderQuadQueue to paint every quad in a different colour.
Even the plain coloured background rectangle uses thousands of quads. The OpenGL renderer could be much (up to 1000x) faster if this were improved.
Will there be some kind of “fix” for plain filled rectangles in future JUCE updates?
Sorry for being so insistent, but this issue keeps me from using GL. There is great potential for optimisation, especially if you draw a lot of thin vertical rectangles.
For example, if you draw 1000 horizontal lines, the GL renderer will use 1000 quads.
But if you draw 1000 vertical lines with a height of 1000 px each, the GL renderer uses 1000 × 1000 = 1,000,000 quads.
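Assuming, as above, that the renderer emits one quad per 1-px-high scanline of every filled rectangle, the totals for these two scenarios work out as follows (the helper is my own, for illustration):

```cpp
#include <cassert>

// Total quads needed to draw `numLines` filled line-rectangles of height
// `heightPx`, assuming one quad per 1-px-high scanline per rectangle.
constexpr long totalQuads (int numLines, int heightPx)
{
    return (long) numLines * heightPx;
}
```

So 1000 horizontal 1-px lines need 1,000 quads, while 1000 vertical 1000-px lines need 1,000,000.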
Will there be any improvement? It would be perfectly okay if it were only for plain-coloured rectangles.
We’ll certainly take a look when we have a chance, though TBH it’s really only in this one use-case (drawing huge numbers of tall 1-pixel-wide rectangles) where it’d make a noticeable difference.
Thank you! Well, this use case is not really uncommon for audio-related apps (waveforms, analyzers).
Yeah, though in Tracktion we moved to drawing waveforms with a Path, which performed well and looks a bit better too. I’d probably recommend trying that.
That may be fine for static, low-detail waveforms, but not for dynamically changing analyzer graphs or high-resolution waveforms with a lot of small peaks (i.e. a lot of vertical lines). And even with a Path, I think the renderer transforms the path into small horizontal line pieces anyway.
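One way to combine both suggestions — a Path, but still one entry per pixel column — is to collapse each column to a min/max pair first, so the whole waveform becomes a single fillable outline instead of thousands of one-pixel-wide rectangles. Here's a standalone sketch of the column reduction (names are my own, not Tracktion's actual code; for brevity it assumes the sample count divides evenly into the column count):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct MinMax { float lo, hi; };

// Collapse each pixel column to the min/max of the samples it covers.
// The pairs can then be stitched into one outline and filled in a single call.
std::vector<MinMax> columnExtremes (const std::vector<float>& samples, int numColumns)
{
    std::vector<MinMax> cols ((size_t) numColumns);
    const size_t perCol = samples.size() / (size_t) numColumns;

    for (int c = 0; c < numColumns; ++c)
    {
        const auto first = samples.begin() + (long) ((size_t) c * perCol);
        const auto last  = first + (long) perCol;
        cols[(size_t) c] = { *std::min_element (first, last),
                             *std::max_element (first, last) };
    }
    return cols;
}
```

The resulting pairs could then be walked left-to-right along the maxima and right-to-left along the minima to build one juce::Path, closed with closeSubPath() and filled once, rather than issuing one fillRect per column.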
I agree with chkn here. It would be great to optimise this, because I still haven't found any way to build a spectrum analyzer with satisfactory performance using only native JUCE functions, and that change would be very useful!
BTW, besides that, in the long run it would be cool to have the equivalent of what we have for CoreGraphics -> CoreGraphicsImage: a Direct2D -> Direct2DImage as a NativeImageType, which would allow GPU-powered repainting on a background thread, because the OpenGL context is always a little accident-prone.
(But maybe Direct2D is too, who knows?)
Any idea when the OpenGL improvement will be ready? I need it badly.
Things are a bit hectic here; I doubt it'll be possible for at least a couple of weeks.
We want to know why! (Just out of curiosity.)