OpenGLComponent rendering in threads

I moved my OpenGL rendering to a thread, as sort of suggested.

It seems like I did all the right overrides, it works, and it really eases the load on my main thread. But there are edge cases where the context gets changed and it falls down, hard.

Part of this was my own fault, but part seems related to the fact that threads can access the context while it's being deleted. Here's a potential fix for that in InternalGLContext::release():

[code]
if (context != 0)
{
    // Need to clear the context so threads can't get it, then delete it
    void* oldContext = context;
    context = 0;

    juce_deleteOpenGLContext (oldContext);
}
[/code]

I suspect the real solution is to add a CriticalSection, perhaps an optional one, and lock all context access. That won't help with speed, but if the rendering is off the main thread it may not hurt much. I'll keep testing the threaded version.
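To illustrate what I mean - this is only a rough sketch, not the actual JUCE code, and the class and member names here are made up - the lock would cover both the delete and whatever the render thread calls to check the context:

[code]
// Rough sketch only - the holder class and member names are hypothetical,
// not the real InternalGLContext internals.
class ContextHolder
{
public:
    ContextHolder() : context (0) {}

    // Message thread: tear down the context safely.
    void release()
    {
        const ScopedLock sl (contextLock);   // keeps the render thread out while we delete

        if (context != 0)
        {
            void* oldContext = context;
            context = 0;
            juce_deleteOpenGLContext (oldContext);
        }
    }

    // Render thread: check before touching the context.
    bool isContextValid()
    {
        const ScopedLock sl (contextLock);
        return context != 0;
    }

private:
    CriticalSection contextLock;
    void* context;
};
[/code]

If the lock only ever gets contended during a resize or hide, the cost on the render thread ought to be negligible.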

Thoughts?

Bruce

Well, that wouldn't hurt, but really it's up to your own code to avoid using the context while it's being deleted. I'll be having another look at the OpenGL component shortly, so I'll see if I can come up with any helpful stuff for this.

Well, yes, but my code doesn't get told when Juce is deleting the context, for instance due to resizing, hiding, etc. My thread relies on the bool it gets back when it sets the context active, which doesn't seem that crazy to me. And it seems fairly stable right now, although that's with a fixed-size window/view.
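For reference, the render thread is shaped roughly like this (a simplified sketch, not my actual code - glComp and drawFrame() are placeholders, and if makeCurrentContextActive() isn't the exact method, treat it as standing in for whatever activates the context and returns a bool):

[code]
// Simplified sketch only - glComp and drawFrame() are placeholders for
// my own component pointer and drawing code.
class GLRenderThread  : public Thread
{
public:
    GLRenderThread (OpenGLComponent* comp)
        : Thread ("GL render thread"), glComp (comp) {}

    void run()
    {
        while (! threadShouldExit())
        {
            // The bool returned when setting the context active is the only
            // signal this thread gets that the context still exists.
            if (glComp->makeCurrentContextActive())
            {
                drawFrame();                        // my own GL drawing
                glComp->swapBuffers();
                glComp->makeCurrentContextInactive();
            }

            wait (15);   // crude frame pacing
        }
    }

private:
    OpenGLComponent* glComp;
    void drawFrame() {}
};
[/code]

So if the context goes away mid-frame between that check and the draw, there's nothing on my side that can save it - hence the locking suggestion above.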

When you take a look, please take on board the void* shared-context scheme - I'm using a modified version of it right now, and it works. Also, let me know if you want me to look at the enhanced pixel-format settings, although I suspect you're fine with putting those in ; )
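Just to spell out what I mean by the void* scheme: the idea is to carry the native context handle around as a void* so the platform code can set up resource sharing when it creates a new context. On Windows that boils down to something like this (plain Win32/WGL for illustration only, nothing JUCE-specific, and the function name is mine):

[code]
// Illustration only - shows why a void* native handle is enough for the
// platform code to set up sharing between contexts.
#include <windows.h>

HGLRC createSharedGLContext (HDC dc, void* contextToShareWith)
{
    HGLRC newContext = wglCreateContext (dc);

    if (newContext != 0 && contextToShareWith != 0)
    {
        // Share display lists / textures with the existing context,
        // whose native handle was passed through as a void*.
        wglShareLists ((HGLRC) contextToShareWith, newContext);
    }

    return newContext;
}
[/code]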

Bruce