Wild framerates with OpenGL

I’m trying to achieve smooth rendering with OpenGL (a consistent 60fps), but my frame rate is varying wildly, with the interval between renders constantly swinging between 1ms and 30ms+.

I’ve measured the time it takes for my render function to finish and it doesn’t seem like that’s the problem. Even if I comment out all of the rendering code, the issue persists.

I’ve tried setting the OpenGLContext’s swap interval, but it doesn’t seem to do anything. I guess it’s not supported on Windows 11?
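For context, this is roughly how my component is set up (a minimal sketch with illustrative names, not my actual code):

```cpp
#include <juce_opengl/juce_opengl.h> // or JuceHeader.h in a Projucer project

class MainComponent : public juce::Component,
                      private juce::OpenGLRenderer
{
public:
    MainComponent()
    {
        openGLContext.setRenderer (this);
        openGLContext.setContinuousRepainting (true);
        openGLContext.attachTo (*this);
        setSize (600, 400);
    }

    ~MainComponent() override { openGLContext.detach(); }

private:
    void newOpenGLContextCreated() override
    {
        // Must be called while the context is active; returns false
        // if the platform/driver ignores the request.
        if (! openGLContext.setSwapInterval (1))
            DBG ("setSwapInterval(1) not honoured");
    }

    void renderOpenGL() override {}          // vertex-buffer drawing lives here
    void openGLContextClosing() override {}

    juce::OpenGLContext openGLContext;
};
```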

I briefly managed to get it rendering at a constant 60fps after changing the component’s base class from OpenGLRenderer to OpenGLAppComponent, but then the next time I opened it, it was mysteriously broken again.

Any ideas?

Also notable: the OpenGLAppExample does seem to render at a consistent 60fps, but I haven’t been able to figure out how exactly it differs from my project.

How are you using it? Vertex buffers/native code, or are you still using juce paint routines?

The former — vertex buffers / native code

As a guess: have you tried 'setSwapInterval(1)' on the context? Setting it to 1 syncs drawing to the display’s refresh, so 60fps on a 60Hz monitor. Perhaps it’s been set to 0 somewhere in your code?
Also, in the past, I’ve found the need to turn off 'setContinuousRepainting' and redraw on my own timer, something like the sketch below!
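Just a sketch of the pattern, with illustrative names; adjust to your own setup:

```cpp
#include <juce_opengl/juce_opengl.h>

// Drive repaints from a Timer instead of continuous repainting.
class TimedGLComponent : public juce::Component,
                         private juce::OpenGLRenderer,
                         private juce::Timer
{
public:
    TimedGLComponent()
    {
        context.setRenderer (this);
        context.setContinuousRepainting (false); // we decide when to redraw
        context.attachTo (*this);
        startTimerHz (60);                       // aim for ~60 repaints/sec
    }

    ~TimedGLComponent() override { context.detach(); }

private:
    void timerCallback() override { context.triggerRepaint(); }

    void newOpenGLContextCreated() override {}
    void renderOpenGL() override {}
    void openGLContextClosing() override {}

    juce::OpenGLContext context;
};
```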

Hard to guess without more to look at. Two things worth trying:

Nvidia Nsight for profiling your frame rendering time. You can single step through frames and check for significant variations. Just in case there’s something funky going on there.

That and VTune or similar to see if there is lock contention or something else holding up the render thread.

Tried setting the swap interval to 1, yeah!
Also tried disabling setContinuousRepainting and triggering repaints with a VBlankAttachment.
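For reference, the VBlankAttachment variant looked roughly like this (a minimal sketch with illustrative names; assumes JUCE 7’s VBlankAttachment):

```cpp
// Repaint once per monitor vertical blank instead of continuously.
class VBlankGLComponent : public juce::Component,
                          private juce::OpenGLRenderer
{
public:
    VBlankGLComponent()
        : vblank (this, [this] { context.triggerRepaint(); })
    {
        context.setRenderer (this);
        context.setContinuousRepainting (false);
        context.attachTo (*this);
    }

    ~VBlankGLComponent() override { context.detach(); }

private:
    void newOpenGLContextCreated() override {}
    void renderOpenGL() override {}
    void openGLContextClosing() override {}

    juce::OpenGLContext context;
    juce::VBlankAttachment vblank; // fires once per vertical blank
};
```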

Well I fixed it somehow. I’m not exactly sure how though.
I played around with the VSync settings in the Nvidia Control Panel and now I have 60fps. Oddly enough, none of the settings reproduces the wacky frame rates I was getting before.
Oh well, fixed I guess!

I usually use the defaults and have ‘Let the 3D application decide’ on.

I hear NVidia are going to change the control panel / experience thing to make it ‘easier’. Whatever that means, probably with reduced choices.

Perhaps we should limit the frame rate with a timer anyway, just in case the user has everything turned up for gaming purposes or whatever. I had 1000fps on one project, and that’s not good for resources!
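Something like this is what I mean, a rough, untested sketch that throttles repaints so a high-refresh monitor doesn’t trigger hundreds of renders per second (names are illustrative):

```cpp
// Cap repaints at ~60fps even if vblank fires at 144/240Hz.
class ThrottledRepainter
{
public:
    explicit ThrottledRepainter (juce::OpenGLContext& c) : context (c) {}

    void onVBlank()
    {
        const double now = juce::Time::getMillisecondCounterHiRes();

        // Only repaint if enough time has passed since the last one.
        if (now - lastRepaintMs >= minIntervalMs)
        {
            lastRepaintMs = now;
            context.triggerRepaint();
        }
    }

private:
    juce::OpenGLContext& context;
    double lastRepaintMs = 0.0;
    static constexpr double minIntervalMs = 1000.0 / 60.0; // target 60fps
};
```

You’d hook onVBlank up to whatever drives your repaints, e.g. a VBlankAttachment callback or a Timer.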