Saving OpenGL renders on iOS

Hi there

Within an active openGLContext (i.e. inside the render method) I can use this:

        Image snapshotImage = Image(OpenGLImageType().create(Image::ARGB, getWidth(), getHeight(), true));
        OpenGLFrameBuffer *buffer = OpenGLImageType::getFrameBufferFrom(snapshotImage);
        buffer->makeCurrentRenderingTarget();
        renderScene();
        buffer->releaseAsRenderingTarget();


From then on (but within the same method, so that the openGLContext is still active and current) I can save the image to disk, or manipulate it in other ways, really nicely on macOS.
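For context, the save-to-disk step is nothing special: just encoding the Image once I have it. A minimal sketch of what I mean (assuming a writable outputFile; not my exact code):

        // Minimal sketch: write the snapshot out as a PNG.
        // outputFile is assumed to be a juce::File you can write to.
        void saveSnapshot (const juce::Image& snapshotImage, const juce::File& outputFile)
        {
            outputFile.deleteFile();    // FileOutputStream appends, so start clean

            juce::FileOutputStream stream (outputFile);

            if (stream.openedOk())
            {
                juce::PNGImageFormat png;
                png.writeImageToStream (snapshotImage, stream);
            }
        }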

With iOS however I get fatal errors:

GL_INVALID_ENUM at /Users/jeffr/git/submodules/tracktion_engine/modules/juce/modules/juce_opengl/opengl/juce_OpenGLFrameBuffer.cpp : 125

Something is passing back a value for which there is no corresponding enum.

I have tried other ways of manipulating the data captured via makeCurrentRenderingTarget(), such as doing a std::memcpy, and even snapshotImage.getPixelAt (pixelX, pixelY). Everything I have tried so far triggers a crash.

My guess is that the memory for openGL on iOS is protected somehow, perhaps because of threading?

Does anyone have any idea/advice on how I might be able to get past this please?

With best wishes
Jeff

The interesting thing here is that GL_INVALID_ENUM is triggered on unbind().

context.extensions.glBindFramebuffer (GL_FRAMEBUFFER, context.getFrameBufferID());
JUCE_CHECK_OPENGL_ERROR

So either the default context framebuffer is invalid, or there is another call earlier in renderScene() that triggers the error.

Try calling OpenGLHelpers::resetErrorState() before the framebuffer unbind to check whether it's really the unbind causing it.

iOS is using GLES, right? What GL version are you using for the context? It could be a totally unrelated call causing a wrong-enum error, like glEnable (…) or other calls.

It looks like glBindFramebuffer should only generate GL_INVALID_ENUM if the first argument is not GL_((READ|DRAW)_)?FRAMEBUFFER. It’s more likely that the error is triggered elsewhere, and we only find out about it here.

You could try using glDebugMessageCallback to register a function that will be called on error, and stick a breakpoint in that function to find out exactly where the error originates.
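Something along these lines (a sketch, assuming your GL headers/loader expose the debug-output API, i.e. GL 4.3+ or KHR_debug; it may not be available on iOS GLES, but it is handy on desktop for pinning down the offending call):

// Sketch: have the driver call us when it records an error, instead of waiting
// for the next glGetError() poll. Requires a debug-capable context.
static void glDebugCallback (GLenum source, GLenum type, GLuint id, GLenum severity,
                             GLsizei length, const GLchar* message, const void* userParam)
{
    juce::ignoreUnused (source, type, id, severity, length, userParam);
    DBG ("GL debug: " << message);
    jassertfalse;   // break here and inspect the call stack to see what triggered it
}

// Call this once, with the context active:
glEnable (GL_DEBUG_OUTPUT);
glEnable (GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report on the thread that made the call
glDebugMessageCallback (glDebugCallback, nullptr);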

If the GLES core profile is used, it’s probably glEnable (GL_TEXTURE_2D) that triggers it (error 1280).

Currently the calls are only disabled for Android.

Discovered this recently here.
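If you want a quick way to confirm it, guard the legacy enable out on mobile targets too (a sketch, just mirroring the existing Android-only guard):

// GL_TEXTURE_2D is not a valid glEnable() capability on GLES 2/3, so calling it
// there sets GL_INVALID_ENUM (0x0500 == 1280). Extending the guard to iOS shows
// whether this is the call that poisons the error state.
#if ! (JUCE_ANDROID || JUCE_IOS)
    glEnable (GL_TEXTURE_2D);
#endif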

Hey there @parawave and @reuk. Really appreciate the fast response and thoughts on this.

Hmm, I’m a little confused though. No errors are thrown up from buffer->makeCurrentRenderingTarget() or buffer->releaseAsRenderingTarget().

In fact if I only have this (below) in the render method I don’t get an error.

        Image snapshotImage = Image(OpenGLImageType().create(Image::ARGB, getWidth(), getHeight(), true));
        OpenGLFrameBuffer *buffer = OpenGLImageType::getFrameBufferFrom(snapshotImage);
        buffer->makeCurrentRenderingTarget();
        renderScene();
        buffer->releaseAsRenderingTarget();


but if I append this:

   snapshotImage.getPixelAt(50,50);

…it will error. Maybe it’s because without that line the rest just gets compiled out for not being used. Adding a breakpoint and stepping in and over, I find that this line:

   const BitmapData srcData (*this, x, y, 1, 1);

…in Colour Image::getPixelAt() in juce_Image.cpp is where I get the invalid enum.
If I resume execution I ultimately crash out with a SIGTRAP here in __mutex_base:

condition_variable::wait(unique_lock<mutex>& __lk, _Predicate __pred)
{
    while (!__pred())
        wait(__lk);
}

Is that a clue? That a mutex was involved?

I’m happy to upload a simple test case of this, based on the OpenGL tutorial from the JUCE website. Basically my test case is that tutorial, with the teapot embedded as binary data and my above code put into the render() method.

Try something like this, to see if any errors are produced in between. GL errors only show up if you check for them explicitly with glGetError!


Image snapshotImage = Image(OpenGLImageType().create(Image::ARGB, getWidth(), getHeight(), true));
OpenGLFrameBuffer *buffer = OpenGLImageType::getFrameBufferFrom(snapshotImage);

buffer->makeCurrentRenderingTarget();

OpenGLHelpers::resetErrorState();
glFinish();

renderScene();

for (;;)
{
	const GLenum e = glGetError();
	if (e == GL_NO_ERROR)
		break;
		
	jassertfalse;
}

OpenGLHelpers::resetErrorState();
glFinish();

buffer->releaseAsRenderingTarget();

OpenGLHelpers::resetErrorState();
glFinish();

snapshotImage.getPixelAt(50,50);

Don’t forget: GL commands are async. It could be that the framebuffers are not valid, or not accessible yet. So throw in glFinish() to make sure everything is executed up to that point (CPU/GPU sync).

Hey @parawave
Thanks for that! I was really hopeful that might work, but alas I ended up with the same mutex_base error, even with all the glFinish() statements (to be clear, I did use what you kindly provided verbatim, as well as trying it with things reordered).

I also tried this, where I add a snapshotImage.createCopy() to try and force an explicit copy of the data out into a member variable with class scope/lifetime (rather than function scope/lifetime).

        Image snapshotImage = Image(OpenGLImageType().create(Image::ARGB, getWidth(), getHeight(), true));
        OpenGLFrameBuffer *buffer = OpenGLImageType::getFrameBufferFrom(snapshotImage);

        buffer->makeCurrentRenderingTarget();
        OpenGLHelpers::resetErrorState();
        glFinish();

        renderScene();

        for (;;)
        {
            const GLenum e = glGetError();
            if (e == GL_NO_ERROR)
                break;

            jassertfalse;
        }

        OpenGLHelpers::resetErrorState();
        glFinish();

        buffer->releaseAsRenderingTarget();
        OpenGLHelpers::resetErrorState();
        glFinish();

        memberVariableImage = snapshotImage.createCopy();

        OpenGLHelpers::resetErrorState();
        glFinish();

        memberVariableImage.getPixelAt(50,50);

Oddly enough the crash doesn’t happen on the createCopy() but on the getPixelAt(), even though I’m actually running it on a totally different image!

It’s like whenever I try to touch the image data it throws a hissy fit. I am still wondering if iOS is multithreading the rendering and the writes to the image data, and that maybe it’s tricky to hit that one moment when the image is complete and available for access (please pardon my lack of understanding).

Surely it must still be possible though right? Is there a more powerful version of glFinish() that really does grind everything to a halt so you can sync things safely?

Wow this is odd.

Pixel access on an Image uses BitmapData. In the case of an OpenGLImage, that means a temporary framebuffer copy with glReadPixels. createCopy(), on the other hand, uses Graphics to draw one framebuffer into another.

Perhaps it really is some access problem. Or, more likely, the format? Check out OpenGLFrameBuffer::readPixels.

You can essentially just copy the code there to get PixelARGB directly. Pseudo:

buffer->makeCurrentRenderingTarget();      // bind the framebuffer
glPixelStorei (GL_PACK_ALIGNMENT, 4);
glReadPixels (0, 0, width, height, JUCE_RGBA_FORMAT, GL_UNSIGNED_BYTE, pixels);
buffer->releaseAsRenderingTarget();        // unbind it again

Now it could be that the framebuffer read doesn’t support reading pixels in that format. Not sure.
https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glReadPixels.xhtml

You should really try to nail down which GL function actually causes the error. Attaching the debug callback gives more specific messages.


Thanks again for the help, I finally got this working.

Ultimately this is what I needed:

        glReadBuffer(GL_BACK);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data );

Without the GL_BACK I was getting some corrupt image data, which made sense based on my google search travels.
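For anyone landing on this later, the whole capture ends up looking roughly like this (a sketch rather than my exact code; it has to run inside the render callback while the context is active):

        // Sketch: read the back buffer into a plain software Image that can be
        // touched safely later, without the GL context.
        juce::Image captureBackBuffer (int width, int height)
        {
            juce::Image image (juce::Image::ARGB, width, height, true,
                               juce::SoftwareImageType());

            // For a software ARGB image the rows are contiguous (lineStride == width * 4),
            // so we can read straight into its pixel data.
            juce::Image::BitmapData pixels (image, juce::Image::BitmapData::writeOnly);

            glReadBuffer (GL_BACK);                     // read from the back buffer
            glPixelStorei (GL_PACK_ALIGNMENT, 4);
            glReadPixels (0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data);

            return image;   // rows come back bottom-up and in RGBA order, so flip/swizzle if needed
        }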

Also bear in mind the dpi issue (logical points vs physical pixels) or you'll render "cropped regions".
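In other words, convert the component size to physical pixels before reading (a sketch, assuming the openGLContext member from the JUCE OpenGL tutorial):

// getWidth()/getHeight() are in logical points, but the framebuffer is in physical
// pixels, so scale by the rendering scale (2x/3x on retina iOS devices) or you'll
// only capture part of the frame.
const double scale = openGLContext.getRenderingScale();
const int pixelWidth  = juce::roundToInt (scale * getWidth());
const int pixelHeight = juce::roundToInt (scale * getHeight());

glReadPixels (0, 0, pixelWidth, pixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data);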

If someone finds this thread and has questions feel free to ask, I'll try and help :)