I’m not sure whether this is caused by the PNG format being saved differently on different platforms or by a difference in the JUCE renderer, but a rendering test of the JUCE software renderer fails (different platforms produce different results):
Above image is generated on macOS
Above image is on Windows
This is the diff (mean absolute error)
As you can see, there are differences in the top-left corner (the outer rounded rect). Is this behaviour normal?
I didn’t trust my eyes, so I boosted the levels. To be fair, Photoshop apparently rounded somewhere; the red channel is indeed off by one. In your diff image it looked like a lot more.
Can you clarify how you arrive at the different images? Are you loading pngs and displaying those, or are you stroking the lines? Is the error present when displaying the images in a JUCE application, or only when you compare pngs saved using the JUCE API? There are many unknowns at play here.
My image is generated using the absolute difference of the two images with floating point channels scaled by the mean absolute error, roughly (without clamping and rescaling back to uint8):
diff_image = numpy.abs(image1 - image2) / mae
Both images are generated using the JUCE API and saved to PNG using juce::PNGImageFormat. They are then loaded back using imageio and compared using numpy; I produce the diff image whenever the mean absolute error is greater than 0, and fail the rendering test.
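For reference, the comparison step described above could be sketched roughly like this (a hypothetical reconstruction, not the actual test code; the function name and image sizes are made up, and the imageio file loading is skipped):

```python
import numpy as np

# Hypothetical reconstruction of the comparison described above; in the real
# test the two arrays would come from imageio loading the saved PNGs.
def compare_renders(image1, image2):
    """Return (mae, diff_image); diff_image is None when the renders match."""
    a = image1.astype(np.float64)
    b = image2.astype(np.float64)
    abs_diff = np.abs(a - b)
    mae = abs_diff.mean()  # mean absolute error over all pixels and channels
    if mae == 0:
        return mae, None
    # Scale the absolute difference by the MAE so tiny deviations become
    # visible, then clamp and convert back to uint8 for saving as a PNG.
    diff = np.clip(abs_diff / mae, 0, 255).astype(np.uint8)
    return mae, diff

# Two 200x200 RGB images differing by one level in four pixels:
img_a = np.zeros((200, 200, 3), dtype=np.uint8)
img_b = img_a.copy()
img_b[0, 0:4, 0] = 1
mae, diff = compare_renders(img_a, img_b)
```

Because the difference is divided by a tiny MAE before clamping, an off-by-one deviation saturates to full white in the diff image, which is why it looks far more dramatic than it is.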
Is a difference of 1/255 (about 0.39%) in two of the three colour channels, on four pixels out of 160,000 (0.0025% of the pixels), on two different platforms using different compiler versions, really worth investigating?
Maybe not, but JUCE used to be pixel-perfect on all platforms. These kinds of slight differences might be hiding bigger issues, and a software renderer that isn’t exact across platforms will make it difficult to write extensive, working rendering tests that ensure no graphics breakage is introduced.
We have been using rendering tests for years with our in-house renderer, and ensuring pixel perfection on all platforms, and even across graphics stacks (Software, Metal, OpenGL ES, OpenGL), made sure we didn’t break anything or introduce glitches with each release. Over the years, it let us make huge restructurings and refactors in the graphics stack (as we gradually introduced our low-level rendering abstraction) while ensuring smooth sailing.
I would instead add a certain tolerance to your tests, so deviations of, e.g., 1% are allowed. No user will ever notice or complain about such minuscule differences.
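A tolerance check along those lines might look like this (a sketch only; the `images_match` helper and the 1% threshold are illustrative, not anything from the JUCE API or the OP’s test suite):

```python
import numpy as np

# Hypothetical tolerance check: pass the rendering test when the worst
# per-channel deviation stays within 1% of full scale, so an off-by-one
# uint8 level (1/255, about 0.39%) no longer fails the test.
def images_match(reference, rendered, tolerance=0.01):
    ref = reference.astype(np.float64)
    out = rendered.astype(np.float64)
    max_dev = np.abs(ref - out).max() / 255.0  # worst-case relative deviation
    return max_dev <= tolerance

ref = np.full((4, 4, 3), 128, dtype=np.uint8)
off_by_one = ref.copy()
off_by_one[0, 0, 0] += 1  # one channel off by a single level
ok = images_match(ref, off_by_one)
```

Using the maximum deviation rather than the mean keeps the test sensitive to a genuine rendering glitch concentrated in a few pixels, while still forgiving platform-wide rounding noise.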
You say JUCE used to be pixel-perfect, but now, suddenly, it isn’t. The differences might be due to compiler and compiler-setting differences, as I’m unaware of any code changes to path rendering in the last few years.
Apple Clang is on version 15.x, but for Windows, the current version is 17.x. It’s not unreasonable to assume that some compiler-related optimizations work slightly differently now.
In my experience, JUCE has never been pixel-perfect unless you use the same compiler and renderer on all platforms. Font outline generation/rendering is handled by the platforms and produces slightly different results. Path rendering handled by CoreGraphics on macOS and iOS also potentially renders slightly differently than the software renderer, the OpenGL renderer, or the upcoming Direct2D renderer. As long as the differences are small enough, I don’t care. This obviously makes running automated render tests across platforms difficult, but maybe that’s not really necessary? Just do a test per platform against a reference image created on the same platform.
IMO this is the right thing to do. I wouldn’t want someone in my codebase to implement a workaround for a bug they haven’t bothered to report, only for us to have to pointlessly maintain that workaround forever after. I can think of a few cases where “workarounds” have become so intertwined with the rest of the codebase that even when the things they were working around have been fixed, we can’t remove the workaround for fear of breaking things.
I also think this is the right thing. In the age of 4K Retina displays, the need to be pixel-perfect, especially for things like anti-aliasing, will only hold back improvements. The OpenGL renderer, for example, could be so much more performant if it didn’t try to be pixel-perfect.
I can think of a few cases where “workarounds” have become so intertwined with the rest of the codebase that even when the things they were working around have been fixed, we can’t remove the workaround for fear of breaking things.
I agree, and unless the OP is planning to fork JUCE to implement a fix themselves, I doubt that there’s much they can do. Reporting it as a bug isn’t likely to elicit much more of a response than it’s already gotten, since the JUCE team is already bogged down with the upcoming JUCE 8 release.
If GUI features and absolutely 100% pixel-perfect drawing on every platform were my #1 requirement for a product, I would probably investigate GUI frameworks other than JUCE.
Leaving aside the discussion of whether the test is worthwhile to pursue, here’s an idea of what might have led to the artifacts:
The image is placed at integer logical-pixel coordinates. Since those are logical pixels and the scale factor is a floating-point value, I would be surprised not to see such differences.
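That interaction can be shown with plain numbers (all values hypothetical, chosen only to illustrate the rounding):

```python
import math

# An integer logical coordinate multiplied by a fractional display scale
# factor can land between physical pixels; two platforms that resolve the
# fraction differently then disagree by one pixel at that edge.
scale = 1.25                       # hypothetical display scale factor
logical_x = 7                      # integer logical-pixel position
physical_x = logical_x * scale     # 8.75: not on a physical pixel boundary
floored = math.floor(physical_x)   # one platform might truncate...
rounded = round(physical_x)        # ...another might round to nearest
```

Either choice is defensible, which is exactly the kind of detail a compiler or platform difference can tip one way or the other.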