The OpenGL madness continues

Is this at all correct?

[code]// The following functions allow OwnedGLView to act as a lock, allowing
// safe multithreaded access to its OpenGL context.
bool OwnedGLView::lock() {
	if (!isActiveContext() && !makeCurrentContextActive()) {
		// The context couldn't be made active; notify the caller.
		return false;
	}
	getContextLock().enter();
	return true;
}

void OwnedGLView::unlock() {
	// Allow other threads access to the OpenGL context.
	if (!isActiveContext()) return;
	getContextLock().exit();
	makeCurrentContextInactive();
}
[/code]
I really do need to know… I use this code to access a certain component’s GL context from the GUI thread, as well as from another thread. I am experiencing deadlocks when using:

while (!glView->lock()) yield(); ... glView->unlock();
And experiencing sporadic crashing when using the less complex:

glView->lock(); ... glView->unlock();
What appears to be the main cause of these problems is that makeCurrentContextActive() is not creating a context synchronously on the first call, as its documentation says it should. I think I may have narrowed down the problem: my OpenGLComponent is inside a TabbedComponent and thus is not always visible. When it is not visible, the context can't be created, and if a context already existed, it gets deleted.

So I guess I’ve segued into an entirely different question: is there any way to override this behaviour, in order to allow me to load images into the context in one tab of a TabbedComponent and view them in the other tab?

You need another, shared OpenGL context, and then load your stuff into that. That way all the other contexts can use it. I’ve abstracted that into a ‘GPU’ class so I can track what’s been sent to each graphics card and share it with the other contexts on those screens.
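
Roughly, the underlying mechanism looks like this (raw GLX on Linux – JUCE wraps the platform specifics, so treat this purely as a sketch of the idea, not the JUCE API):

[code]// Sketch: GLX-level context sharing. Objects such as textures and
// display lists created in either context become usable from the other.
#include <GL/glx.h>

GLXContext createSharedContext (Display* display, XVisualInfo* visual,
                                GLXContext existingContext)
{
    // Passing the existing context as the third ("share list") argument
    // makes the new context share server-side objects with it.
    return glXCreateContext (display, visual, existingContext, True);
}
[/code]
(On Windows the equivalent is wglShareLists().)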

Bruce

How do you share the context? I can’t find a way to make another context without assigning it its own OpenGLComponent (and thus having the same problems I’m facing).

I can live with the regular construction/destruction of the context for now (though if anyone could answer my previous question I would be very much obliged). Also, thanks for answering, Bruce – I would be very interested to know how you’ve implemented that class with JUCE and how much tinkering it takes.

However, a more worrying aspect is that the program crashes 4 out of 5 times. I don’t have a Windows or Mac environment handy to test on, so I suppose I’ll just ask: is OpenGL on Linux fully working? I can get things working from the same thread, but as soon as a different one is thrown into the mix, it’s all out the window. I get random segmentation faults (GDB reporting them as coming from ??), crashes in my graphics driver, entire system freezes, and the occasional XQuery or XSync as the last command in the GDB step-through before things go buggy.

Hi,
You shouldn’t make the first call to makeCurrentContextActive() from within a thread.

Make sure the context has already been created before making calls to makeCurrentContextActive() from within a thread.
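
Something like this on the message thread, before starting the render thread (a sketch using the methods from your code above – MyWindow, startRendering and renderThread are made-up names):

[code]// Sketch: force the lazy context creation to happen on the message
// thread, so the render thread never has to do the first activation.
void MyWindow::startRendering()
{
    if (glView->makeCurrentContextActive())   // creates the context if needed
    {
        glView->makeCurrentContextInactive(); // release it for other threads
        renderThread->startThread();          // safe to render from now on
    }
}
[/code]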

Edwin

[quote=“Sastraxi”]Is this at all correct?

// The following functions allow OwnedGLView to act as a lock, allowing
// safe multithreaded access to its OpenGL context.
bool OwnedGLView::lock() {
	if (!isActiveContext() && !makeCurrentContextActive()) {
		// The context couldn't be made active; notify the caller.
		return false;
	}
	getContextLock().enter();
	isLocked = true;
	return true;
}

void OwnedGLView::unlock() {
	if (!isLocked) return;
	isLocked = false; // reset, or the next unlock() would exit the lock twice
	getContextLock().exit();
	// allow other threads access to the OpenGL context.
	if (isActiveContext())
		makeCurrentContextInactive();
}

[/quote]
Try this instead…
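
And if you want to make it harder to forget the unlock() on an early return, a small scope-based wrapper helps (just a sketch built on the lock()/unlock() pair above):

[code]// Sketch: RAII guard around OwnedGLView's lock()/unlock(), so the
// context is always released when the scope ends.
class ScopedGLLock
{
public:
    explicit ScopedGLLock (OwnedGLView& v) : view (v), locked (v.lock()) {}
    ~ScopedGLLock()        { if (locked) view.unlock(); }

    bool isLocked() const  { return locked; }

private:
    OwnedGLView& view;
    bool locked;
};

// usage:
//   ScopedGLLock scope (*glView);
//   if (scope.isLocked()) { /* ... GL calls ... */ }
[/code]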

Thanks for all the help. I’ve set myself up a Windows environment to test on (Vista using VS2008), and I’ve gotten to the point where I have rock-solid code on Windows but sporadic crashes on Linux – so I suppose I’ve got the IPC down; it’s probably that darned fglrx driver.

What I’m currently doing is, upon context creation, creating and loading in some of JUCE’s images, setting an orthographic projection, and creating a display list. In the thread’s run() function, I’m binding the texture, calling the list, and swapping the buffers. The problem I’m seeing is that nothing I’ve done in the context-creation method seems to apply to the thread’s calls. Has anyone dealt with this kind of problem before? Also, is there anything JUCE-specific I must do in order to get texturing and such working? I feel like I’m forgetting some simple setup call somewhere, but I can’t for the life of me figure out what.
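
One quick sanity check worth doing, since GL state only affects whichever context is current on the calling thread (a sketch using the methods from the lock code earlier):

[code]// Sketch: before issuing GL calls from the worker thread, verify the
// view's context really is current here; state set while another
// context (or none) was current simply won't be visible.
jassert (glView->isActiveContext());
[/code]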

Last post about this, I swear.
I’ve got everything working 100% except for texturing. I’ll show you the code and what’s happening:

[code]void VideoEffectWorker::changeListenerCallback(void* object) {
	if (object == glView) {

		// TODO: protect critical access
		// (not a problem yet, but it could be, perhaps.)
		glEnable(GL_TEXTURE_2D);

		// texture filtering properties
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

		// glView has a new context! Upload the images to the GPU
		// if they have already been loaded from disk.
		for (int i = 0; i < imageObjects.size(); ++i) {
			glImages.set(i, loadImage(imageObjects[i]));
		}

		// set an orthographic projection: (0,0) at the top-left down to (1,1)
		glMatrixMode(GL_PROJECTION);
		glPushMatrix();
		glLoadIdentity();
		gluOrtho2D(0.0f, 1.0f, 0.0f, 1.0f);

		// glScalef(1.0f, -1.0f, 1.0f);
		glMatrixMode(GL_MODELVIEW);
		glLoadIdentity();

		// generate a display list.
		quadList = glGenLists(1);
		glNewList(quadList, GL_COMPILE);
		glBegin(GL_QUADS);
			// top-left
			glTexCoord2f(0.0f, 0.0f);
			glVertex3f(0.0f, 0.0f, 0.0f);
			// bottom-left
			glTexCoord2f(0.0f, 1.0f);
			glVertex3f(0.0f, 1.0f, 0.0f);
			// bottom-right
			glTexCoord2f(1.0f, 1.0f);
			glVertex3f(1.0f, 1.0f, 0.0f);
			// top-right
			glTexCoord2f(1.0f, 0.0f);
			glVertex3f(1.0f, 0.0f, 0.0f);
		glEnd();
		glEndList();

		// more stuff for the state machine
		glClearColor(1.0f, 0.0f, 1.0f, 0.0f);
		glFlush();

		contextInit = true;
	}
}

// Sets one of the numbered images.
void VideoEffectWorker::setImage(int n, Image* jImage) {

	// set the new JUCE image.
	if (n >= imageObjects.size()) return;
	imageObjects.set(n, jImage);

	// make the context active
	if (!glView->lock()) return;

	// TODO if glImages[n] != 0 { ... replace image instead of recreating ... }
	// load it into the OpenGL context.
	deleteImage(glImages[n]);
	glImages.set(n, loadImage(imageObjects[n]));

	// unlock the context.
	glView->unlock();
}

// (private) Loads an image into the OpenGL context.
// Precondition: the OpenGL context is active.
GLuint VideoEffectWorker::loadImage(Image* jImage) {

	// We need an image.
	if (jImage == NULL) return 0;

	// figure out JUCE's internal format.
	GLint textureFormat, internalFormat;
	if (jImage->isARGB()) {
		internalFormat = GL_RGBA;
		textureFormat = GL_BGRA;
	} else if (jImage->isRGB()) {
		internalFormat = GL_RGB;
		textureFormat = GL_BGR;
	} else {
		textureFormat = internalFormat = GL_LUMINANCE;
	}

	// Get the texture data from the image.
	int lineStride, pixelStride;
	const uint8* pixelData = jImage->lockPixelDataReadOnly(0, 0,
		jImage->getWidth(), jImage->getHeight(), lineStride, pixelStride);

	// Generate a texture and bind it.
	GLuint glTexture;
	GLint oldTexture;
	glGenTextures(1, &glTexture);
	glGetIntegerv(GL_TEXTURE_BINDING_2D, &oldTexture);
	glBindTexture(GL_TEXTURE_2D, glTexture);

	// TODO: create a texture class and wrap this whole function.
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
	glTexImage2D(GL_TEXTURE_2D, 0,
		internalFormat,
		jImage->getWidth(),
		jImage->getHeight(),
		0,
		textureFormat,
		GL_UNSIGNED_BYTE,
		pixelData
	);

	// Re-bind the originally bound texture; release pixel data.
	glBindTexture(GL_TEXTURE_2D, oldTexture);
	jImage->releasePixelDataReadOnly(pixelData);

	// return the new texture's name (it is no longer bound).
	return glTexture;
}

// (private) Deletes an image from the OpenGL context.
// Precondition: the OpenGL context is active.
void VideoEffectWorker::deleteImage(GLuint glImage) {

	// make quite sure there's something to delete
	if (glImage == 0) return;

	// ... and delete it. If we can't get the context,
	// it doesn't exist, and there's nothing to be deleted.
	glDeleteTextures(1, &glImage);
}

// Handles the GL view's rendering in a separate thread.
void VideoEffectWorker::run() {
	while (!threadShouldExit()) {

		// grab exclusive access to the OpenGL context;
		// this line may jump into handleNewContext synchronously.
		if (contextInit && glView->lock()) {

			// clear the screen
			glClear(GL_COLOR_BUFFER_BIT);

			// bind the first texture and draw it.
			glBindTexture(GL_TEXTURE_2D, glImages[0]);
			glCallList(quadList);
			glFlush();

			// release access; render
			glView->swapBuffers();
			glView->unlock();
		}
	}
	contextInit = false;
}[/code]
So what I get is a white, untextured quad, which is not what I want. It does, however, tell me that the context-creation method was called and that the display list was compiled and is working – so that much was good. But no OpenGL errors were being flagged, and my texture wasn’t showing. So I did some sleuthing…

I tried removing the use of texture objects – that did the trick. The texture shows up as expected (specifically, I removed all glBindTexture commands).

So here’s my question – why is the display list valid in the thread, but not the texture object? Has anyone seen this before? I think I saw a thread about it while searching for this issue here, but there was no resolution. This happens on two computers: mine has a Radeon X1400 and was tested on Ubuntu and Windows Vista; the other has an integrated GeForce 6 and was tested on Ubuntu. Both work with the glBindTexture lines taken out; both show a white quad otherwise.

What am I missing?

I removed the binding of the old texture and moved the texture filtering properties to after the binding of the generated texture. For some reason, this makes texturing work flawlessly. I hope anyone experiencing similar problems finds this!
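
For reference, the relevant part of loadImage() now looks roughly like this (reconstructed from the description above, not copied verbatim from the working code):

[code]// Generate and bind the new texture first...
glGenTextures(1, &glTexture);
glBindTexture(GL_TEXTURE_2D, glTexture);

// ...then set its filtering properties (they are per-texture state,
// not global state, so they must come after the bind)...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// ...and finally upload the pixels. The old-texture rebinding is gone.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat,
	jImage->getWidth(), jImage->getHeight(), 0,
	textureFormat, GL_UNSIGNED_BYTE, pixelData);
[/code]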

Hi Sastraxi,
The texture filtering properties are texture-specific, so you always need to bind the texture you want to set the properties for first. That also explains the white quad: the default minification filter is GL_NEAREST_MIPMAP_LINEAR, which requires a complete mipmap chain, so a texture without mipmaps and without an overridden filter is incomplete and effectively disables texturing.

gekkie100, yeah. I figured that out. What I haven’t figured out is why it doesn’t even show a texture if the filtering properties are not set. I seem to recall having not had the filtering properties some time past and the textures displaying with some default values for those properties. Oh well.