[macOS] Colo(u)rSpace and OpenGL

Rendering with JUCE on macOS will produce different colours on OpenGL vs other renderers.


Why is this a mac issue?

Well… macOS does handle colour space conversions, but it does so pretty much everywhere except in OpenGL.

Sound Radix made the JUCE OpenGL renderer ‘mess’ with the juce::Colour just before uploading it to OpenGL to overcome this difference.

You can try it out here:

Pros:

  • CoreGraphics and OpenGL should have equivalent colours.
    (Minor differences could be due to rounding errors, anti-aliasing, premultiplied alpha, or the text renderer.)

Cons:

  • Uploading a juce::Image takes a little longer, as the conversion is done before the texture is uploaded.
    But your graphics designer won’t complain about you not using their colour palette! :slight_smile:
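For reference, CPU-side conversions of this kind are built on the standard sRGB transfer functions (IEC 61966-2-1). The sketch below is not the actual Sound Radix code (which converts towards the display’s ColorSync profile, a more involved transform); it just illustrates the per-channel maths involved:

```cpp
#include <cmath>

// Standard sRGB transfer functions, shown only to illustrate the kind of
// per-channel maths a CPU-side colour conversion performs. The real patch
// targets the display's ColorSync profile, not plain sRGB <-> linear.
float srgbToLinear (float s)
{
    return s <= 0.04045f ? s / 12.92f
                         : std::pow ((s + 0.055f) / 1.055f, 2.4f);
}

float linearToSrgb (float l)
{
    return l <= 0.0031308f ? l * 12.92f
                           : 1.055f * std::pow (l, 1.0f / 2.4f) - 0.055f;
}
```

The two functions are inverses of each other, which is why a round trip through them should leave a colour channel unchanged.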

Cool that you found a solution and managed to get something to work! But I’m struggling to understand something; OpenGL provides sRGB buffer and texture controls, so why not use that? (eg: https://community.khronos.org/t/setting-up-srgb/75216 )


If there’s a flag you can just set somewhere and it works, that would of course be a better solution! Searching GLFW’s sources, it looks like the flag is only used with EGL, which IIUC is for Linux? So no luck found there for enabling sRGB on macOS…

I’ll add to @yairadix’s feedback.

a. Our goal is to have the exact same visual colour: the result of transforming sRGB to the display’s colour space, which is also affected by the ColorSync profile. So what you see (on all platforms) is sRGB after additional framebuffer processing. On macOS, OpenGL is the only exception to that. (iOS does things differently.)

b. OpenGL never supported the concept of colour space. RGB isn’t defined with a specific colour space in OpenGL (nor in JUCE…), and the sRGB extensions aren’t reliable (nor do they include macOS’s additional colour transformations).

To be clear - I’m aware of this, though readers might not be!

No, that’s wrong. Colour space can be framebuffer and texture dependent, precisely as the Khronos post I shared above covers in a short and sweet way.

More simply; when you create textures you can change the colour space by specifying GL_SRGB8 or GL_SRGB8_ALPHA8.

When you render to a framebuffer, you enable GL_FRAMEBUFFER_SRGB (the framebuffer’s colour attachment must be sRGB-capable).
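Those two controls might look like this in practice. This is a sketch only, assuming a current desktop GL context on macOS; the function name and parameters are illustrative, not JUCE’s actual code:

```cpp
#include <OpenGL/gl3.h>   // macOS core-profile header; assumes a current GL context

// Sketch: upload pixels as an sRGB texture and enable sRGB encoding on
// writes to an sRGB-capable framebuffer.
GLuint uploadSrgbTexture (int width, int height, const void* rgbaPixels)
{
    GLuint tex = 0;
    glGenTextures (1, &tex);
    glBindTexture (GL_TEXTURE_2D, tex);

    // GL_SRGB8_ALPHA8 tells GL the texels are sRGB-encoded, so sampling
    // them in a shader returns linearised values.
    glTexImage2D (GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
                  width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgbaPixels);

    // Re-encode linear shader output as sRGB when writing to the framebuffer.
    glEnable (GL_FRAMEBUFFER_SRGB);
    return tex;
}
```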

If neither of these options is available due to driver weirdness or whatever, IMO the conversion on your textures should happen in a shader, because that would be much more performant. It’s much, much better to leverage the GPU while in GPU-land instead of trying to pacify it by pixel-mashing on the CPU beforehand. Doing this kind of work on the CPU really misses the point of using a graphics pipeline.

If it helps any, and if you have to resort to the latter, someone wrote up a quick ShaderToy example: https://www.shadertoy.com/view/3l2SRD

I had to circle back to JUCE’s approach to OpenGL rendering and remembered something important: if you’re using Graphics to do all of the drawing work, then all of your graphics (Paths, Images, text, etc…) get pixel-mashed into an EdgeTable and so on, eventually getting splatted onto a single OpenGL frame buffer.

This frame buffer is where the linear-to-sRGB conversion should happen. If this can be done via the flags I mentioned - great. If not, then I really suggest loading a small linear-to-sRGB pixel shader and applying it to the whole frame buffer that JUCE is controlling under the hood.
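A full-screen pass of that kind could use a fragment shader along these lines. This is a hedged sketch, not tested against JUCE: the uniform and varying names (`screenTexture`, `texCoord`) and the matching pass-through vertex shader are assumptions, and it is kept as a C++ string so it could be fed to `glShaderSource` or `juce::OpenGLShaderProgram`:

```cpp
// Sketch of a whole-framebuffer linear-to-sRGB pass (GLSL 1.50).
// "screenTexture"/"texCoord" are illustrative names, not JUCE's.
static const char* linearToSrgbFrag = R"(
    #version 150
    uniform sampler2D screenTexture;
    in vec2 texCoord;
    out vec4 colour;

    // Piecewise sRGB encode, applied per channel:
    vec3 linearToSrgb (vec3 l)
    {
        return mix (l * 12.92,
                    1.055 * pow (l, vec3 (1.0 / 2.4)) - 0.055,
                    step (vec3 (0.0031308), l));
    }

    void main()
    {
        vec4 c = texture (screenTexture, texCoord);
        colour = vec4 (linearToSrgb (c.rgb), c.a);  // alpha stays linear
    }
)";
```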

If you’re brave enough to use your own renderer based on what JUCE provides, I think the same method applies.

(Note that this post assumes you’re rendering everything entirely in OpenGL.)

That’s correct. We evaluated uploading the transformation as a LUT and applying the conversion in shaders, but eventually went with the simpler approach, which in our tests is performant enough, with less code. If anyone has issues to report, optimisations, or PRs, feel free to share them. :slight_smile:

If it helps any, and if you have to resort to the latter, someone wrote up a quick ShaderToy example: https://www.shadertoy.com/view/3l2SRD

In our first iteration we tried this approach. It resulted in wrong colours when our designer looked at screenshots / when comparing with Apple’s Digital Color Meter.

Again, on Windows, Linux and iOS we get the same colours when switching between renderers, so this is a macOS-only issue.

I’d expect that to result in “chunkier” gradients.

You shouldn’t have any issues with banding as long as the whole pipeline is configured with sRGB from the textures up to the framebuffer. Worst case is that you apply a minor dither.
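A “minor dither” in this context usually means perturbing the value slightly before quantising to 8 bits, so banding turns into fine, stable noise. A minimal sketch, using a deterministic 4x4 ordered (Bayer) dither; the function name and signature are illustrative:

```cpp
#include <algorithm>
#include <cstdint>

// Quantise a linear value in [0, 1] to 8 bits with a 4x4 ordered (Bayer)
// dither. The per-pixel threshold offset spans roughly one quantisation
// step, trading visible banding for fine spatial noise.
std::uint8_t quantiseWithDither (float value, int x, int y)
{
    static const int bayer4[4][4] = { {  0,  8,  2, 10 },
                                      { 12,  4, 14,  6 },
                                      {  3, 11,  1,  9 },
                                      { 15,  7, 13,  5 } };

    // Offset in [-0.5, +0.5) of one 8-bit step, chosen by pixel position.
    float offset = (bayer4[y & 3][x & 3] + 0.5f) / 16.0f - 0.5f;
    int q = (int) (value * 255.0f + 0.5f + offset);
    return (std::uint8_t) std::clamp (q, 0, 255);
}
```

Neighbouring pixels holding the same mid-grey value then land on adjacent codes (e.g. 127 and 128) instead of forming a hard band edge.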
