Running OpenGL on a background thread to gracefully handle slow GLSL shaders


#1

Hey,
I'm writing a shader editor for mobile devices, but since mobile devices are generally slow as shit, it needs some mechanism for gracefully handling the compilation and rendering of complex shaders on a background thread, without slowing down the rest of the app. In JUCE, the OpenGLRenderer already spins up its own thread, but if, for example, I try to compile a super complex shader, the whole app freezes. How can I deal with this? Do I need to create a separate context, share it with the original one, and then compile the shader on the new context on a separate thread?

This also applies to rendering. If the shader is too complex, the whole app slows down. I'd like to somehow throttle the shader's render thread so that it doesn't harm the UI, even if that means slower shader rendering. Anything to keep the app responsive at all costs.

Any hints would be most appreciated. 

Cheers,

Rob


#2

I think this might be relevant:

http://blog.imgtec.com/powervr/understanding-opengl-es-multi-thread-multi-window-rendering


#3

Correct me if I'm wrong:

So, digging into the OpenGLContext class: in the renderFrame() method of the CachedImage class, a message lock is taken before the GL frame is rendered, meaning that a big shader compilation, or heavy rendering, blocks the UI. The message lock appears to be there for two reasons: firstly for when the window moves or resizes, and secondly for painting any overlaid components. If I null out the message lock just before the context is made active, the shader compilation is decoupled from the UI. That's a good start, but I'd like to know what side effects it will have...

However, if the shader is heavy, my whole computer slows down. Is there a way to do this so that everything stays responsive, accepting of course a reduction in the rendering framerate? Basically, slow shader rendering can be dealt with in a classy manner (frame-to-frame cross-fades etc.)... a slow GUI can't be.


#4

I'm beginning to think this is impossible: http://www.quora.com/How-do-you-multithread-an-OpenGL-program

Someone suggested splitting the output into tiles and rendering a small tile at a time. I might try that unless anyone has any other suggestions.


#5

Or, somehow compile the shader to run on the CPU and thread it normally, if it runs too slowly for realtime on the GPU.


#6

Orrr run the GUI on the CPU instead of the GPU? Force software rendering? Hmm, I'll try that. But the fact that all of Windows slows down suggests this is futile. Maybe it'll help on mobile devices, though.


#7

Cracked it! I'm writing this in case someone else needs to do the same thing.

So if the GPU is passed a super heavy shader to compute, it has to do it in one chunk, which blocks everything else from using the GPU while it computes, thus slowing down the whole computer.

The solution is to split the shader rendering into small tiles and render each one, in a separate call to renderOpenGL(), into the corresponding place in a frame buffer. Let this run as fast as possible. Then, when the frame buffer is full, display it on screen and wait until the next appropriate time to start the process again, thus maintaining a stable framerate. This splits the shader rendering into a bunch of smaller tasks, allowing other things to squeeze into the gaps for a bit of GPU time. That's my theory anyway - I still have basically no idea how OpenGL works.

I hope this helps anyone working with brutal shaders who doesn't want them to ruin everything.