Graphics Best Practice

Hi, I’ve never done anything particularly fancy with graphics - the most I’ve done is depict filters or level meters by pulling an atomic from the audio thread.

To do this I’ve been using the standard JUCE graphics paint function. However, I now have a project that requires multiple scrolling waveforms - complex things that will repaint constantly. I was wondering what the best practice is for doing this efficiently. I assume I will want to use a FIFO, but should I use paint() or should I investigate OpenGL? I’ve seen conflicting reports about the performance boosts from OpenGL, and I’m aware it could disappear from Mac, so I’m slightly sceptical about going down that route.
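For what it’s worth, the FIFO idea could look roughly like this - a minimal single-producer/single-consumer ring buffer sketch for handing samples from the audio thread to the GUI thread. The names here are illustrative, not JUCE API; JUCE’s own juce::AbstractFifo does the same index bookkeeping for you:

```cpp
#include <atomic>
#include <cstddef>
#include <vector>

// Minimal lock-free SPSC FIFO sketch: the audio thread pushes, the GUI
// thread pops. Illustrative only - juce::AbstractFifo is the stock solution.
class SampleFifo
{
public:
    explicit SampleFifo (size_t capacity) : buffer (capacity) {}

    // Audio thread: never blocks, drops the sample if the buffer is full.
    bool push (float sample)
    {
        auto w = writeIndex.load (std::memory_order_relaxed);
        auto next = (w + 1) % buffer.size();
        if (next == readIndex.load (std::memory_order_acquire))
            return false;                                   // full
        buffer[w] = sample;
        writeIndex.store (next, std::memory_order_release);
        return true;
    }

    // GUI/render thread: returns false when there is nothing to read.
    bool pop (float& sample)
    {
        auto r = readIndex.load (std::memory_order_relaxed);
        if (r == writeIndex.load (std::memory_order_acquire))
            return false;                                   // empty
        sample = buffer[r];
        readIndex.store ((r + 1) % buffer.size(), std::memory_order_release);
        return true;
    }

private:
    std::vector<float> buffer;
    std::atomic<size_t> readIndex { 0 }, writeIndex { 0 };
};
```

The paint/timer callback then drains the FIFO into whatever structure the waveform is drawn from, so the audio thread never touches the message thread.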

Any thoughts would be greatly appreciated.


Having done this recently, it depends on what you find acceptable. OpenGL is the only way I was able to have it entirely smooth at 60fps. Just using the component paint() method will clog the message thread and result in a very stuttered looking scroll. OpenGL opens you up to new and exciting problems, though.


In my experience, GPU acceleration is the only way to get it running really smoothly. As OpenGL is currently the state of the art in JUCE, you’d have to use that for now. But even if JUCE moves to more modern graphics APIs, which will happen at some point, you’ll likely still need the basics, e.g. computing the vertices for the shapes you want to draw. So even if you then pass them to e.g. a Metal API call instead of an OpenGL API call and rewrite your shader code (which will likely not exceed a few lines) from GLSL to MSL, having understood how a GPU works and grasped the concept of vertices will already get you half of the way there.
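To illustrate the kind of CPU-side vertex computation meant here, a minimal sketch (plain C++; the names are my own, not JUCE or OpenGL API) that maps waveform samples in the -1..1 range to normalised device coordinates, ready to upload for something like a GL_LINE_STRIP:

```cpp
#include <cstddef>
#include <vector>

// One 2D vertex in OpenGL normalised device coordinates (x, y in -1..1).
struct Vertex { float x, y; };

// Spread N samples evenly across the horizontal axis, using the sample
// value directly as the vertical coordinate.
std::vector<Vertex> waveformToVertices (const std::vector<float>& samples)
{
    std::vector<Vertex> vertices;
    vertices.reserve (samples.size());

    for (size_t i = 0; i < samples.size(); ++i)
    {
        float x = samples.size() > 1
                    ? -1.0f + 2.0f * (float) i / (float) (samples.size() - 1)
                    : 0.0f;
        vertices.push_back ({ x, samples[i] });
    }
    return vertices;
}
```

The same function would feed a Metal or Vulkan vertex buffer unchanged, which is the point about the groundwork being reusable.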

So yes, you should probably have a look at OpenGL.


We’ve tried OpenGL and even finished the Direct2D implementation, and in the end the software renderer was faster overall. It’s sad, but it appears the overhead of setting up all the draw calls makes OpenGL slower than just pushing the pixels.

This is especially true if you’re trying to use OpenGL or Direct2D as the rendering engine for the whole UI. JUCE will hide the performance hit by creating a background thread to execute the OpenGL commands, but the overall performance (measuring both threads) is far worse than that of the pure software renderer. Even the Direct2D implementation (which ended up twice as fast as OpenGL) was still slower overall than the software renderer.

The only way I can see you getting better performance is if you create your own OpenGL context and write a shader or something like that to display your waveforms - and even then only if they are truly huge and cover the majority of a 4K screen. And then Apple will kill OpenGL in a few years, so you would need to use Vulkan or Metal.


Just to be sure, are you talking about using OpenGL to perform the usual component painting through the component’s paint callback, or are you talking about using the JUCE OpenGL implementation to run your own custom shaders?

Of course, simply switching to OpenGL for Component painting won’t give you great benefits on most desktop systems. But writing our own shaders makes a huge difference - in fact, it’s the only way we found to make some more complex and nice-looking visualizations possible.

As I said, I think if you write your vertex computation code with those other APIs in mind, you can design it so that large parts of it are reusable for any type of shader language, and stick to OpenGL as long as JUCE has not released support for Vulkan or Metal. Would you disagree with that?


I thought I was pretty clear when I said in my last paragraph that you might get better results using your own context and shaders.

Sure, you could do that and try to be as platform-agnostic as possible, so you can reuse the shaders with another API later.

I just wanted to warn against trying to use the existing OpenGL context for the current component system. It will slow everything down instead of getting you a boost.

IMO it should be actively discouraged and even removed from JUCE. We’ve tested it with GTX 1060s and the performance was WAY WAY worse than using pure software rendering. So it’s not an issue of “your integrated graphics is just too slow”. Plus you have to consider that your users might not run on the latest and greatest hardware either.

If you run the JUCE demo project and you switch between software and OpenGL, the OpenGL numbers look much better, but it’s only measuring the time spent in the UI-thread, not the time spent in the OpenGL thread actually executing the draw-commands. It’s a smoke-screen that hides the true (horrible) performance numbers.


Thanks for the responses everyone, have now read through a couple of times. It doesn’t seem like there is a straightforward answer to everything then! I’ll have a think…

I noticed exactly the same thing about the OpenGL context: it consumes more CPU in addition to consuming GPU.

I noticed when writing my own shaders that even a simple moving triangle wasn’t smooth, and I saw the same with the animation tutorial provided by the JUCE team. So it seems like you can’t get 60fps with JUCE, even if you use OpenGL. I also hit some CPU issues on Windows that I wasn’t the only one to reproduce when starting with OpenGL, where the OpenGLRenderer class was consuming a whole CPU core even without doing any drawing.

All of this discouraged me a lot from using OpenGL, especially if you add the fact that the OpenGL API is quite unfriendly compared to JUCE’s.
I would recommend using the JUCE software renderer, even if it’s not perfect for now.

I also heard some people tried out a Skia-based LowLevelGraphicsContext, which was not successful.

I noticed exactly the same thing about the OpenGL context: it consumes more CPU in addition to consuming GPU.

Have you used a profiler to find the code path that consumes the most?
Is it the actual draw calls (vertex + index buffers), the texture uploads of images and gradients, or is it related to the MessageManager context locking, or the native wgl SwapBuffers(HDC) call?

I noticed when writing my own shaders that even a simple moving triangle wasn’t smooth, and I saw the same with the animation tutorial provided by the JUCE team. So it seems like you can’t get 60fps with JUCE, even if you use OpenGL.

I can’t confirm that. I don’t know what you’re trying, but our plugin runs much more smoothly with OpenGL - it seems more like the opposite. It’s not perfect, yes, but it’s also not totally worthless.

I’m not sure how strong the jitter is in your case. Is it really obvious (in release), or is it more like micro jitter that mostly disappears only when using something like VRR (G-Sync or FreeSync) with your monitor?

Oh, by the way, have you tried disabling VSync in OpenGLContext with .setSwapInterval(0) and measuring the framerate? Adding a simple frame counter shows 400+ frames per second and removes most of the micro jitter. I can’t really blame the general implementation of the drawing functions; it’s more OpenGL context switching/blocking related, I think.
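A frame counter of the sort mentioned can be as simple as this sketch (illustrative, not JUCE API) - feed it a monotonic millisecond timestamp once per render callback, and it reports the frames-per-second once a full second has elapsed:

```cpp
// Minimal fps counter sketch: call tick (nowMs) once per frame with a
// monotonic millisecond timestamp. Returns the measured fps once per
// second, and -1 while a second is still accumulating.
class FrameCounter
{
public:
    int tick (long long nowMs)
    {
        if (windowStart < 0)
            windowStart = nowMs;        // first frame starts the window

        ++frames;

        if (nowMs - windowStart >= 1000)
        {
            int fps = (int) (frames * 1000 / (nowMs - windowStart));
            frames = 0;                 // start a new measurement window
            windowStart = nowMs;
            return fps;
        }
        return -1;
    }

private:
    long long windowStart = -1;
    long frames = 0;
};
```

In a JUCE OpenGLRenderer you would call this from renderOpenGL() with something like juce::Time::getMillisecondCounter() and log or display the result.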

Yes I have, and I saw that 90% of the CPU was used to call deactivateCurrentContext().
Basically it occurs when you use setContinuousRepainting(true).
I noticed that when I unplug my laptop, the problem disappears and the JUCE app goes down to 1% CPU in the task manager. The same happens when I manually activate the energy-saving mode without unplugging my laptop.

Spent the day optimising my level meters and waveforms as much as possible. Interestingly, I noticed a huge decrease in CPU load when using an OpenGLContext attached to the editor, not an increase as others have reported - although it may just be due to the graphics I am drawing, as it may be a case-by-case scenario. Also, if it makes a difference, I am testing on Mac - it may be that I should disable OpenGL on Windows, as I have always found JUCE graphics extremely efficient there.

I’ve never found a good reason to use OpenGL. The performance gain is minimal at best.

The issue I’ve found is that even though you’re using OpenGL for your rendering, you still have to calculate the points for the objects you’re drawing on the CPU. For example, to draw a frequency spectrum you need to gather the audio samples, run the FFT, and apply scaling to generate the array of points to draw. By this time OpenGL is only being used to draw a few lines (depending on how fancy your graphics are), which the CPU is perfectly capable of doing using JUCE’s Graphics.
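As a sketch of that CPU-side pipeline - the constants (a -100 dB floor, the component height) are illustrative, not the poster’s actual code - here is the last step, converting FFT bin magnitudes to decibels and then to pixel y-coordinates:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Map linear FFT bin magnitudes to pixel y-coordinates: convert to dB,
// normalise against an assumed -100 dB floor, and flip so that 0 dB sits
// at the top of a component of the given height.
std::vector<float> magnitudesToPixelY (const std::vector<float>& magnitudes,
                                       float heightPx, float minDb = -100.0f)
{
    std::vector<float> ys;
    ys.reserve (magnitudes.size());

    for (float m : magnitudes)
    {
        float db = 20.0f * std::log10 (std::max (m, 1.0e-9f)); // avoid log(0)
        float norm = (db - minDb) / (0.0f - minDb);            // 0..1, 0 dB = 1
        norm = std::min (std::max (norm, 0.0f), 1.0f);
        ys.push_back (heightPx * (1.0f - norm));               // y grows downward
    }
    return ys;
}
```

All of this runs on the CPU regardless of the renderer, which is the point being made: by the time OpenGL is involved, only the final line drawing remains.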

The biggest performance gain I was able to make was to do my drawing on a background thread, not the message thread (i.e. not in paint), drawing to an image; then you only need to draw that image on the message thread, which is very quick since it’s essentially just a look-up table.
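A JUCE-free sketch of that background-rendering pattern, assuming a double-buffered pixel buffer swapped under a lock (in JUCE the buffer would be a juce::Image, blitted in paint() with Graphics::drawImageAt, and you’d trigger a repaint after each swap):

```cpp
#include <algorithm>
#include <cstdint>
#include <mutex>
#include <utility>
#include <vector>

// Double-buffered renderer sketch: a worker thread renders into the back
// buffer, then publishes it with a cheap swap; the GUI thread only copies
// the finished front buffer. Illustrative, not JUCE API.
class BufferedRenderer
{
public:
    explicit BufferedRenderer (size_t numPixels)
        : front (numPixels, 0), back (numPixels, 0) {}

    // Background thread: all the expensive drawing happens here.
    void renderFrame (uint32_t colour)
    {
        std::fill (back.begin(), back.end(), colour); // stand-in for real drawing
        std::lock_guard<std::mutex> lock (swapMutex);
        std::swap (front, back);                      // publish the finished frame
    }

    // Message thread (inside paint()): just grab the latest finished frame.
    std::vector<uint32_t> latestFrame()
    {
        std::lock_guard<std::mutex> lock (swapMutex);
        return front;
    }

private:
    std::vector<uint32_t> front, back;
    std::mutex swapMutex;
};
```

The lock is only held for the swap and the copy, never for the drawing itself, so the message thread is never blocked by a slow render.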

Again, it probably depends on how fancy your graphics are. I tend to just use flat colours, maybe with the occasional simple gradient here and there. If you really, really need to use GLSL for some super fancy effects, then OpenGL is definitely a must-have. Otherwise I just think it’s a waste of time.

I disagree - running the FFT and everything else related to the spectrum analyzer is not that CPU intensive. For me the bottleneck was clearly calling fillPath(), especially when the painted area is large.

EDIT: I also noticed that strokePath() is CPU-hungry as well, because it generates a quite complex Path for fillPath() to handle afterwards. Doing the stroking yourself might result in much lower CPU usage; I did it using the Clipper lib and it improved path-stroking performance a lot on my side.

I couldn’t get Ableton running below 40-45% in Activity Monitor with my plugin window open using standard JUCE graphics (it rests at 12% with no plugin window open). With OpenGL enabled it runs at 18-22% with the window open.

Although I am filling 4 rectangle level meters with gradients, filling 2 waveforms with gradients and stroking a path of a waveform with each repaint.