OGL renderer access violation crash

I'm getting an immediate crash when switching to the OGL renderer, in juce_OpenGLFrameBuffer.cpp, line 106. Git head.
See attached screenshot for stack trace.

I’ve really no idea why that’d happen! I’m stumped as to what I can suggest…

Jules, this is the crash I was referring to in the other thread and in my previous mail.
The stack shown there is useless, so let me complete it here.
The message thread is currently trying to delete the OpenGL component, and is stuck in the “wait for thread to exit”.
There is a thread-safety issue here with the OpenGL thread.
The OpenGL thread is in performRender, but the message thread is currently deleting the class.

I think a lock or event is missing here, and it’s strange you’ve never hit this crash on Linux or Mac. It happens on Windows, I’d say 1 time in 10, when you switch from “Use OpenGL renderer” in the “Rendering demo” to the “OpenGL demo”.

Thanks guys. This was quite a messy issue…

When there’s a background thread, then by the time your OpenGLComponent subclass’s destructor is called, the thread will still be running and may try to call your renderOpenGL() method. Unfortunately, the base class can’t stop the thread until the base-class destructor is reached, and by that time the subclass may already be partially destroyed, and hence mayhem ensues…

The only way to avoid this is to enforce a rule that in your subclass’s destructor, you must call stopRenderThread(), so that the thread can be stopped early enough to avoid this. I’ve added an assertion in the base class to warn you if you forget to do this, but it means that all GL classes that use a thread will need fixing.

While I was changing this, I also noticed that the old support I had added for creating your own custom background thread would have been broken, and I couldn’t really see a way to fix it, so I’ve removed that feature. Presumably if anyone had been using it, they’d have moaned that it was broken, so I assume removing it won’t bother anyone.

Thanks guys for looking into it.

Just pulled the recent changes, and unfortunately the same problem remains.

…then I obviously fixed a different problem. I can’t get it to go wrong now on my Windows machine - can you give me any more clues?

It seems the cause of the crash is that context.extensions.glBindFramebuffer is null. frameBufferHandle is also 0.

Could be that my two year old intel onboard gfx (series 4xxx) does not support that extension…

[quote=“OBO”]It seems the cause of the crash is that context.extensions.glBindFramebuffer is null. frameBufferHandle is also 0.

Could be that my two year old intel onboard gfx (series 4xxx) does not support that extension…[/quote]

I guess it could be that, although that’s a pretty basic function!

Who is deleting the OpenGLCurrentExtension, and when?

Better to cope with missing OGL extensions gracefully than to crash…

Anybody else with intel based windows laptops having this problem?

No, I don’t think so.
The OpenGL spec is clear: if a driver advertises a given level (for example OpenGL 2.0), it must provide all functions for that level.
Since anything below OpenGL 2.0 doesn’t exist on Windows, both functions should exist. If they are zero, it means that either the driver doesn’t respect the OpenGL standard (in which case any OGL software would crash on your computer), or Juce’s code zeroed the pointers, most likely in a destructor.
So, back to the initial question: you can probably put a breakpoint where the extension table is set up and where it’s destroyed to figure out the real issue.

The OGL context is never destroyed before the crash.

I’ve never experienced OGL-specific problems with other OGL software. For example, the OGRE3D game engine runs fine with its OGL renderer.

Here is a list of the extension functions that are null. The rest of them seem to be valid function pointers.

	glIsRenderbuffer	0x00000000	unsigned char (unsigned int)*
	glBindRenderbuffer	0x00000000	void (unsigned int, unsigned int)*
	glDeleteRenderbuffers	0x00000000	void (int, const unsigned int *)*
	glGenRenderbuffers	0x00000000	void (int, unsigned int *)*
	glRenderbufferStorage	0x00000000	void (unsigned int, unsigned int, int, int)*
	glGetRenderbufferParameteriv	0x00000000	void (unsigned int, unsigned int, int *)*
	glIsFramebuffer	0x00000000	unsigned char (unsigned int)*
	glBindFramebuffer	0x00000000	void (unsigned int, unsigned int)*
	glDeleteFramebuffers	0x00000000	void (int, const unsigned int *)*
	glGenFramebuffers	0x00000000	void (int, unsigned int *)*
	glCheckFramebufferStatus	0x00000000	unsigned int (unsigned int)*
	glFramebufferTexture2D	0x00000000	void (unsigned int, unsigned int, unsigned int, unsigned int, int)*
	glFramebufferRenderbuffer	0x00000000	void (unsigned int, unsigned int, unsigned int, unsigned int)*
	glGetFramebufferAttachmentParameteriv	0x00000000	void (unsigned int, unsigned int, unsigned int, int *)*

I was wrong: glBindRenderbuffer appeared in OpenGL v3.0.
Since you might have a driver that doesn’t implement OpenGL v3.0 on Windows, maybe you can try an “OpenGL extension viewer” to figure out whether your board supports the feature through an extension.
For example, you might want to check whether your driver supports the EXT_framebuffer_object extension.
If it does, you only need to rename the function name being looked up and it might work.

Ideally, however, I think Juce should prevent you from using the OpenGL renderer if you don’t have all these functions available.

Yes, sounds like he’s just got an old card (or driver).

Yep, I think I’m going to have to do some work on the fallback plan it uses.

[quote=“jules”]The only way to avoid this is to enforce a rule that in your subclass’s destructor, you must call stopRenderThread(), so that the thread can be stopped early enough to avoid this. I’ve added an assertion in the base class to warn you if you forget to do this, but it means that all GL classes that use a thread will need fixing.
[/quote]
It’s not the only way. You could put the thread in an inner object so the destructor call can be done in the base class, like this:

class OpenGLComponent
{
    struct ThreadHolder
    {
        ThreadHolder() : thread (nullptr) {}

        ~ThreadHolder()
        {
            if (thread != nullptr)
            {
                thread->stopRenderThread();
                deleteAndZero (thread);
            }
        }

        OpenGLThread* thread;
    };

    ThreadHolder holder;
};

When OpenGLComponent is deleted, its members are deleted too. At that point ThreadHolder’s destructor runs, and it calls stopRenderThread() before actually deleting the thread.

No, that wouldn’t be any better. The base-class destructor and all the base-class member destructors are called AFTER the subclass’s destructor has already run, so your suggestion would kick in too late.

If the subclass contained a member object whose destructor stopped the thread, then that’d work. But that’d be more complicated for people to do than just adding the stopRenderThread() call, and would still only stop the thread at the end of their destructor, when ideally it should be stopped at the start of that method.

In that case, make the “setRenderingThread” method return a ScopedPointer that the user must store as a member.
When the child class is deleted, its members are deleted too.
If the user doesn’t save the scoped pointer, the thread will be destroyed immediately and never appear to run, so in any case (s)he will be forced to keep the smart pointer.

Can’t really think of any advantages in doing it that way…?

It’s safe by design (although it’s invasive), whereas the assert in the destructor won’t fire in release builds (or can simply be ignored), so the bug remains latent.

About the glBindFrameBuffer crash, I also experience it, so I’ll give more detail:
The callstack is:

JuceDemo.exe!juce::OpenGLFrameBuffer::Pimpl::bind() Line 106 + 0x22 bytes C++
JuceDemo.exe!juce::OpenGLFrameBuffer::makeCurrentRenderingTarget() Line 255 C++
JuceDemo.exe!juce::OpenGLTarget::makeActiveFor2D() Line 55 C++
JuceDemo.exe!juce::OpenGLGraphicsContext::GLState::GLState(const juce::OpenGLTarget & target_={…}) Line 1309 C++
JuceDemo.exe!juce::OpenGLGraphicsContext::OpenGLGraphicsContext(juce::OpenGLContext & context={…}, juce::OpenGLFrameBuffer & target={…}) Line 3231 + 0xb8 bytes C++
JuceDemo.exe!juce::OpenGLComponent::performRender() Line 432 + 0x15 bytes C++
JuceDemo.exe!juce::OpenGLComponent::OpenGLComponentRenderThread::run() Line 181 + 0xb bytes C++

The function pointer (glBindFrameBuffer) is dangling (0xDDDDDDDD, MSVC’s freed-memory fill pattern), which means it has already been deleted (and so has the whole context).
It showed up as soon as I selected “Use native title bar” with the OpenGL renderer turned on.

The message thread is here:

JuceDemo.exe!juce::WaitableEvent::wait(const int timeOutMillisecs=-1) Line 97 + 0x10 bytes C++
JuceDemo.exe!juce::MessageManagerLock::BlockingMessage::messageCallback() Line 234 C++
JuceDemo.exe!juce::MessageManager::deliverMessage(juce::Message * const message=0x003db930) Line 110 + 0xd bytes C++
JuceDemo.exe!juce::MessageManager::dispatchNextMessageOnSystemQueue(const bool returnIfNoPendingMessages=false) Line 111 C++
JuceDemo.exe!juce::MessageManager::runDispatchLoopUntil(int millisecondsToRunFor=-1) Line 151 + 0xd bytes C++
JuceDemo.exe!juce::MessageManager::runDispatchLoop() Line 132 C++
JuceDemo.exe!juce::JUCEApplication::main(const juce::String & commandLine={…}) Line 217 C++

BTW, the glitch when changing the renderer is now fixed, but the flickering drop shadows around menus are not.