When trying to set the swap interval to a value higher than 1 on macOS (in order to wait for more than one vertical refresh before swapping the buffers), I was seeing the GUI rendering slow down much more than it should.
After doing some timing measurements I found that the flushBuffer call was repeatedly taking less than 0.5 ms, which in turn caused the render thread to sleep because of the hack that prevents CPU burn when the window is occluded.
The documentation states that the swap interval "can be set only to 0 or 1", which is very likely the reason I was experiencing problems.
Could we add an assertion inside OpenGLContext::NativeContext::setSwapInterval() in juce_OpenGL_osx.h that ensures only 0 or 1 is passed in, perhaps even clamping the incoming value?
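In the meantime a guard is easy to add at the call site. Here's a minimal sketch of the clamping idea, written as an application-level helper rather than a patch to juce_OpenGL_osx.h (the helper name is made up):

```cpp
#include <juce_opengl/juce_opengl.h>

// Hypothetical helper: clamp the requested swap interval before handing it to
// JUCE. On macOS anything above 1 misbehaves, so only 0 or 1 is allowed
// through; the proposed assertion inside NativeContext::setSwapInterval()
// would do essentially the same thing.
static bool setSwapIntervalClamped (juce::OpenGLContext& context, int numFramesPerSwap)
{
    // Negative values make no sense on any platform.
    jassert (numFramesPerSwap >= 0);

   #if JUCE_MAC
    // Catch out-of-range values in debug builds, clamp them in release builds.
    jassert (numFramesPerSwap == 0 || numFramesPerSwap == 1);
    numFramesPerSwap = juce::jlimit (0, 1, numFramesPerSwap);
   #endif

    return context.setSwapInterval (numFramesPerSwap);
}
```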
If your application is rendering animation, use a Core Video display link to drive the animation loop.
I found that this helps with OpenGL performance issues in many cases, such as when several plugin windows using OpenGL are open. Unfortunately JUCE doesn't do it this way, but it easily could if it adopted this change from SR's branch.
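For anyone who wants to experiment before an official change lands, the general shape of the approach looks something like this (a rough sketch only, not the actual patch from SR's branch; it assumes continuous repainting is turned off and frames are driven by triggerRepaint()):

```cpp
#include <CoreVideo/CVDisplayLink.h>
#include <juce_opengl/juce_opengl.h>

// The display link fires once per vertical refresh, and we use it to request
// one frame, instead of letting the GL render thread free-run.
class DisplayLinkDriver
{
public:
    explicit DisplayLinkDriver (juce::OpenGLContext& c) : context (c)
    {
        CVDisplayLinkCreateWithActiveCGDisplays (&link);
        CVDisplayLinkSetOutputCallback (link, &displayLinkCallback, this);
        CVDisplayLinkStart (link);
    }

    ~DisplayLinkDriver()
    {
        CVDisplayLinkStop (link);
        CVDisplayLinkRelease (link);
    }

private:
    static CVReturn displayLinkCallback (CVDisplayLinkRef, const CVTimeStamp*,
                                         const CVTimeStamp*, CVOptionFlags,
                                         CVOptionFlags*, void* userInfo)
    {
        // Called once per refresh on a CoreVideo thread; ask JUCE to render a frame.
        static_cast<DisplayLinkDriver*> (userInfo)->context.triggerRepaint();
        return kCVReturnSuccess;
    }

    juce::OpenGLContext& context;
    CVDisplayLinkRef link = nullptr;
};
```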
Thanks @yairadix. I would certainly vote for that being added to JUCE, if possible. Thoughts @t0m?
Edit:
Just for reference, here’s the diff of only the CVDisplayLink change in juce_OpenGLContext.cpp of SR’s branch (ignore the other file diffs; the change was updated after the original commit, and GitHub doesn’t seem to be able to do single-file comparisons).
Maybe I’m doing something wrong, but setSwapInterval(1) is not limiting the renderOpenGL() calls for me. It is called at exactly the same frequency (300-320 fps) with both 0 and 1, although if I set it to 2, the renderOpenGL() call frequency drops to 16 fps (AMD). On Windows (NVidia) it works perfectly (0 = 300 fps, 1 = 60 fps, 2 = 30 fps, etc.). Do you guys have any ideas on this?
Yeah, that’s what I see on macOS too. Using the CVDisplayLink as suggested by @yairadix above does fix this problem and the rate is then correctly limited to the display’s vertical refresh.
Hopefully the JUCE team will implement that change.
A temporary ‘hack’ to limit the frame rate approximately, without changing any JUCE code, is to use the OpenGLContext::executeOnGLThread() method to call Thread::sleep() for an appropriate amount of time (based on how long it’s been since the last renderOpenGL() callback was received). Using that method means the sleep happens before the message manager lock is acquired in renderFrame(), which is important.
Basically, the idea is: each time renderOpenGL() is called, measure the time elapsed since the last call, work out how much longer the frame needs to take, and call executeOnGLThread() with a lambda containing a sleep() for that duration.
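Roughly like this (a sketch only; the renderer class, member names and the 60fps target are just for illustration):

```cpp
// Inside your OpenGLRenderer subclass (names here are illustrative).
void MyRenderer::renderOpenGL()
{
    const double now = juce::Time::getMillisecondCounterHiRes();
    const double elapsedMs = now - lastRenderTimeMs;   // double member, starts at 0
    lastRenderTimeMs = now;

    constexpr double targetFrameMs = 1000.0 / 60.0;    // aim for ~60fps
    const auto msToSleep = (int) (targetFrameMs - elapsedMs);

    if (msToSleep > 0)
    {
        // Queue the sleep on the GL thread; it runs before the message manager
        // lock is taken for the next frame, so the message thread isn't blocked.
        openGLContext.executeOnGLThread ([msToSleep] (juce::OpenGLContext&)
                                         {
                                             juce::Thread::sleep (msToSleep);
                                         },
                                         false);
    }

    // ... normal GL drawing for this frame goes here ...
}
```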
This method is by no means perfect, because the frame rate isn’t absolutely consistent, but for now it has provided a massive improvement in terms of being able to have multiple GUI instances open without hogging the message thread.
Again, IMO the ideal solution would be for JUCE to use the CVDisplayLink to ensure a consistent v-sync frame rate.
Thanks, it seems @yairadix 's solution worked for me too… With every new JUCE update I’m hoping the OpenGL handling will be fixed, but it seems it’s not a priority.
OpenGL runs fine on Mac and Windows, but on Linux it is extremely laggy: our whole application blocks at glXSwapBuffers.
I have continuous repainting enabled, so it is continuously blocking at glXSwapBuffers. Putting a sleep in the render callback, or repainting from a timerCallback, improves performance massively, but is that really a good solution?
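For reference, the timer-driven variant mentioned above might look roughly like this (class name and frame rate are illustrative; whether it’s the “right” fix is exactly the question):

```cpp
#include <juce_opengl/juce_opengl.h>

// Turn off continuous repainting and trigger frames from a juce::Timer, so the
// GL thread isn't permanently parked inside glXSwapBuffers.
class TimedGLComponent : public juce::Component,
                         private juce::OpenGLRenderer,
                         private juce::Timer
{
public:
    TimedGLComponent()
    {
        context.setRenderer (this);
        context.setContinuousRepainting (false);   // no free-running render loop
        context.attachTo (*this);
        startTimerHz (60);                         // request ~60 frames per second
    }

    ~TimedGLComponent() override { context.detach(); }

private:
    void timerCallback() override            { context.triggerRepaint(); }

    void newOpenGLContextCreated() override  {}
    void renderOpenGL() override             { /* draw a frame here */ }
    void openGLContextClosing() override     {}

    juce::OpenGLContext context;
};
```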
I’ve created an example project that shows horrible performance when the app is hidden (so, in theory, no rendering should be happening). @reuk Please try out the attached .zip
Tested on:
JUCE 7.0.3
Xcode 14.1
macOS 12.6
MacBook Air (M1, 2020)