I need to tell OpenGL to render a new frame, or grab/display frames, every X number of samples, because I need perfect sync between audio and graphics. Is there any way to get this done? Here is a demonstration of the problem:
Blue(Left) = CPU / Everything is done in audio thread
Pink(Right) = GPU / OpenGL
The left circle is the old version of my oscilloscope, in which all graphics are rendered in the audio thread, thereby providing perfect sync — although the heavy CPU usage of calculating a complex image with thousands of dots/lines/pixels means the audio goes to s**t. So we moved on to using OpenGL. Now that the GPU is doing the heavy lifting of calculating the pixels, I’d like to somehow render/show the frames in perfect sync like I did before. I would love to simply call “render frame” or something FROM the audio thread, from processBlock, but my programmers have informed me they are confused as to how to accomplish this due to how JUCE is set up.
Well, I think it’s the physics of time itself and the basic architecture of operating systems that make this tricky, rather than anything in JUCE!
The audio thread simply can’t do anything that could block, and ANY graphics call is completely out of the question. If this is news to you, search for some of Timur’s lectures about realtime audio programming for a good intro.
The best you can do is to perform some completely non-blocking pre-processing calculations in your audio thread and make it push the results to a lock-free FIFO, from which a background thread (or your GL render thread) can pick them up and render them to GL or whatever. There’s no such thing as “perfect” sync, but this mechanism is how everybody does this kind of task.
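To make the FIFO idea concrete, here is a minimal sketch of a single-producer/single-consumer lock-free ring buffer in plain C++ (JUCE provides juce::AbstractFifo for exactly this job; a hand-rolled version is shown only so the example is self-contained). The class name `SpscFifo` and the capacity are illustrative, not from the original post. The key property is that `push()` never blocks and never allocates, so it is safe to call from `processBlock`, while the render thread drains the queue at its own pace:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Single-producer / single-consumer lock-free FIFO sketch.
// The audio thread calls push(); the GL/render thread calls pop().
// Neither call blocks: when the queue is full, push() simply fails,
// and the audio thread drops that frame of data rather than waiting.
template <typename T, std::size_t Capacity>
class SpscFifo
{
public:
    // Audio thread only. Returns false (drops the item) when full.
    bool push (const T& item)
    {
        const auto w    = writePos.load (std::memory_order_relaxed);
        const auto next = (w + 1) % Capacity;

        if (next == readPos.load (std::memory_order_acquire))
            return false;                       // full: never block the audio thread

        buffer[w] = item;
        writePos.store (next, std::memory_order_release);
        return true;
    }

    // Render thread only. Returns false when there is nothing to draw yet.
    bool pop (T& out)
    {
        const auto r = readPos.load (std::memory_order_relaxed);

        if (r == writePos.load (std::memory_order_acquire))
            return false;                       // empty

        out = buffer[r];
        readPos.store ((r + 1) % Capacity, std::memory_order_release);
        return true;
    }

private:
    std::array<T, Capacity> buffer {};
    std::atomic<std::size_t> readPos { 0 }, writePos { 0 };
};
```

In a JUCE plugin you would push a small struct of pre-processed scope data (not pixels) from `processBlock`, and have your `OpenGLRenderer::renderOpenGL` callback pop everything available each frame and draw it. The sync is then "as tight as the queue latency", which in practice is one audio block.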