OpenGL why - high CPU usage

I was just going to port one of my old Flash apps to JUCE. I started a new OpenGL app and began drawing some primitives in the render function. The app is spiking at 7% CPU usage on a 9900K and an RTX 2080.

void MainComponent::render()
{
    // This clears the context with a black background.
    // OpenGLHelpers::clear (Colours::black);

    if (OpenGLHelpers::isContextActive())
    {
        glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
        glClear (GL_COLOR_BUFFER_BIT);

        glBegin (GL_QUADS);
        glColor3f (1.0f, 1.0f, 1.0f);
        glVertex3f (0.25f, 0.25f, 0.0f);
        glVertex3f (0.75f, 0.25f, 0.0f);
        glVertex3f (0.75f, 0.75f, 0.0f);
        glVertex3f (0.25f, 0.75f, 0.0f);
        glEnd();

        glBegin (GL_LINE_LOOP);
        glColor3f (0.0f, 1.0f, 1.0f);
        for (int i = 0; i <= 128; ++i)
        {
            double angle = 2.0 * 3.14159265 * i / 128.0;
            glVertex2d (std::cos (angle), std::sin (angle));
        }
        glEnd();
    }
}

Is there a reason for this?

Time and time again I explain the same thing: JUCE’s OpenGL implementation treats GL like a rasteriser, which in turn does a lot more work under the hood than a straightforward GL renderer would.

I go into detail here, where the first couple of paragraphs are the most relevant: Are there any games built with JUCE?


OK, I get it when it comes to things like rendering components, but I thought the gl commands were just a thin OpenGL abstraction and as such would go straight to the GPU pipeline. Hence the question wasn’t about component performance or text rendering etc., but rather about an example like the one I gave.

We had a project in Unity which gamified a hearing test, and it has a nice layer on top which gives direct access to OpenGL in that way. Speaking of which, can JUCE be bypassed altogether?

I know exactly what you’re expecting and unfortunately it’s not that straightforward in the context of JUCE.

What’s happening is that the wrapper pipes data through a thread pool and a cached image. So what you’re painting to is actually an OpenGL frame buffer under the hood, some random amount of time later, with extra calls to glViewport - all of this instead of directly onto the device context. If you search for bool renderFrame() you’ll see what I mean, and you can step through that process yourself. As you can probably guess at this point, this means lots of CPU time.
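Grossly simplified, and with invented names for illustration (the real logic lives behind that `bool renderFrame()`), the render thread's loop does something like this pseudocode:

```cpp
// Pseudocode sketch — function names are made up, only the shape is real
while (contextIsAlive)
{
    makeContextActive();
    glViewport (0, 0, width, height);   // re-issued every frame
    bindCachedFrameBuffer();            // your GL calls target an FBO...
    callUserRenderCallback();           // ...via OpenGLRenderer::renderOpenGL()
    paintComponentTreeIntoContext();    // JUCE's 2D renderer on top (CPU-heavy)
    blitFrameBufferToWindow();          // only now does anything reach the screen
    swapBuffers();
}
```

That middle section is where the CPU time goes, which is why it looks nothing like a plain clear/draw/swap game loop.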

All that being said, the underlying wrapper is really close to opening that up in such a way that you can avoid the bits of logic and do straight up GL calls to the window+device context like you normally would (ie: like in a game renderer style).

You have a few different ways to skin that cat… Some folks on the forum have figured out how to get the HWND directly on Windows via the ComponentPeer and set up all of the OpenGL HDC and pixel format configuration as needed, assuming that’s all you need. It’s definitely a restrictive approach, though, and not cross-platform.
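A rough illustration of that Windows-only route (an untested sketch — the HWND would come from juce::ComponentPeer::getNativeHandle(), and the rest is plain Win32/wgl):

```cpp
#include <windows.h>   // Win32 + wgl; this path is Windows-only

// 'hwnd' would be obtained from the component's peer, e.g.
//   (HWND) component.getPeer()->getNativeHandle();
HGLRC attachRawGLContext (HWND hwnd)
{
    HDC hdc = GetDC (hwnd);

    // Describe the pixel format we want on the window's device context.
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof (pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    SetPixelFormat (hdc, ChoosePixelFormat (hdc, &pfd), &pfd);

    // Create and activate a GL context bound straight to the window,
    // bypassing JUCE's frame buffer entirely.
    HGLRC glrc = wglCreateContext (hdc);
    wglMakeCurrent (hdc, glrc);
    return glrc;   // render, then SwapBuffers (hdc) once per frame
}
```

From there you'd run your own render loop and call SwapBuffers yourself — but as said above, none of this ports to macOS or Linux.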

I’m pretty sure you could just hack together some changes to renderFrame to at least reduce some of the CPU time, like ignoring the component painting steps, and avoiding glViewport calls if you need to.
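For instance, juce::OpenGLContext::setComponentPaintingEnabled (false) already skips the component-painting pass without touching any JUCE source. And independent of the wrapper, per-frame work like the trig in the original circle loop can be hoisted out and computed once — a minimal sketch (plain C++, the helper name is mine, sizes taken from the post above):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical helper: build the 128-segment circle from the original post
// once, instead of calling std::cos/std::sin every frame inside render().
std::vector<std::pair<double, double>> makeCircleVertices (int segments = 128)
{
    std::vector<std::pair<double, double>> pts;
    pts.reserve ((size_t) segments + 1);

    for (int i = 0; i <= segments; ++i)
    {
        double angle = 2.0 * 3.14159265358979 * i / segments;
        pts.emplace_back (std::cos (angle), std::sin (angle));
    }
    return pts;
}
```

In render() you'd then just iterate the cached vector and call glVertex2d on each pair.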

If you found Unity to be your UX framework of choice, you could also keep your audio as a Unity plugin… just a thought…

I had thought about doing that, but then there are diminishing returns for using JUCE depending on context. I like JUCE for being solid cross-platform, and unless I was sure I could apply the same hacks on OSX it wouldn’t be worth it; I’m quite sure it would be easy enough with my current Windows knowledge. I know there’s SDL, and PortAudio or RtAudio, which I have used in the past, but JUCE makes it so easy to access audio hardware that I’m wrestling with this decision. Unity, yes, but we’ve got a dirty audio hack where you add an effect to give you access to the audio buffer, and then it’s pretty much work as usual; the other downside is no cross-platform audio driver access. It’s a tough one to pick.

If you want:

  • Accelerated graphics (Unity, SDL combo)
  • Cross platform (Unity, JUCE)
  • Low level driver access (JUCE, SDL combo)

Every option seems to tick only two of the three boxes.