Wrong colours when using OpenGLImageType


This problem has been around for a while, although before now I wondered if I was doing something wrong.

The problem is that the bytes of an ARGB image are mixed up when using images of OpenGLImageType on Android. Under Windows it’s fine, and I haven’t yet tested iOS. This could be an endianness problem. Note also that if I change the code to use “normal”, i.e. non-OpenGL, image types, everything works.

In order to avoid copying my images to OpenGL all the time, I’m creating Images using `OpenGLImageType` for my “hud”.

like this:
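(The original snippet didn’t survive here; as a rough sketch, with the current JUCE API the creation looks something like the following — the size is a placeholder:)

```cpp
// Sketch only (assumes JUCE's Image constructor taking an ImageType);
// must run while a GL context is active, e.g. in newOpenGLContextCreated().
juce::Image hudImage (juce::Image::ARGB, 256, 256, true, juce::OpenGLImageType());
```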

Then I draw to them, and later I draw them using the `paint()` method of a component with an attached OpenGLContext. I create the image in `newOpenGLContextCreated()`.

What happens is that, I think, red and blue are swapped. If I add in this nasty hack, it fixes it:

void GLPanel::_fixPixelFormatARGB (Image& img)
{
    Image::BitmapData* _data = new Image::BitmapData (img, Image::BitmapData::readOnly);
    int i, j;
    int n = _data->lineStride / 4;
    unsigned char* dp = _data->data;

    for (i = 0; i < _data->height; ++i)
    {
        // ABCD -> CBAD
        for (j = 0; j < n; ++j)
        {
            unsigned char v = dp[0];
            dp[0] = dp[2];
            dp[2] = v;
            dp += 4;
        }
    }

    delete _data;
}

You’ll notice I’m hacking the “readOnly” bytes, which is very naughty! Also, you’ll notice that this hack can’t work as written, because you can’t do this to an OpenGLImageType. In fact, what I have to do is create a temp “normal” image, run my hack above, then copy it to the OpenGLImageType image.

Done like this, it works, and I only need to run the hack at initial image creation, but it’s annoying.
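For reference, the workaround boils down to something like this (a sketch, assuming JUCE; `_fixPixelFormatARGB` is the hack above, and the function name is mine):

```cpp
// Sketch (assumes JUCE): draw the HUD into a software image, swap R/B
// once with the hack above, then copy the result into the GL-backed image.
juce::Image makeHudImage (int w, int h)
{
    juce::Image temp (juce::Image::ARGB, w, h, true);   // "normal" image
    // ... draw the HUD into temp here ...
    _fixPixelFormatARGB (temp);                          // red/blue swap hack

    juce::Image glImage (juce::Image::ARGB, w, h, true, juce::OpenGLImageType());
    juce::Graphics g (glImage);
    g.drawImageAt (temp, 0, 0);
    return glImage;
}
```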

I’ve just updated to the latest code and it’s still there.

any suggestions welcome,
– hugh.


Hmm, sounds like it’s an endianness thing, in that the bytes would need to be flipped by the OpenGLFrameBufferImage::Writer class, which currently just does a memcpy.

I’ve actually been pondering the way images are handled, because it’s such a hassle that GL images can only be created when you’ve got an active GL context… What I’m considering is that all images would be software-based, but things like GL could attach “cached” native versions to them when they get used. So, e.g., you’d create a normal image, but the first time you draw it to a GL context, the GL code would create its own framebuffer version of it and secretly attach that to the image. Then GL could re-use the framebuffer each time you draw the image, until you attempt to change the image, after which it would need to re-cache it. The same trick might be possible for CoreGraphics too. That way you’d get some nice acceleration without any deliberate effort.


I’m opposed to sweeping changes to the Image class, for obvious reasons.


To me, this sounds like a good idea. At the moment, it’s a bit bogus that you have to create these images on the OpenGL thread and not in your main initialisation, and further that there are two kinds of image: normal and OGL.

However, there would be complications if you wanted to get at the pixels. Ideally, the OGL version would submit the texture and then drop the pixels from main memory. Can you copy back out of a framebuffer? And then you’d need some way of saying, “I’m done changing the pixels, please send them back to OGL”.
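On the copy-back question: plain GL does let you read a bound framebuffer back with `glReadPixels` (the variable names here are placeholders):

```cpp
// Standard GL (not JUCE-specific): a bound framebuffer can be read back.
glBindFramebuffer (GL_FRAMEBUFFER, fbo);
glReadPixels (0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, destPixels);
```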

The other idea, which Vinn might prefer, is that you continue to create them as `OpenGLType`, but you actually get a “normal” image which works in the way you described.

Or you could introduce a new object which hosts the OGL framebuffer, and allow conversion/connection between JUCE Images and this new “Frame” object. Aren’t there other properties of the framebuffer too which this might facilitate?

Anyhow, I don’t know what this colour bug is, and I haven’t been bothered to track it down just yet. But I’ll give it a go on iOS when I get that build running.

– hugh.