SoftwareRenderer not being pixel perfect on different platforms

I haven’t reported it as a bug. I asked for clarification on what the issue could be: a JUCE platform difference in the software renderer, or some sort of quantization issue when saving and reloading the file format. @otristan provided useful ideas about what I could look at, instead of evangelising business suggestions.

Your attitude here doesn’t lend itself to productive responses. I’m done wasting my time.


I think you’re spending more time answering posts in the JUCE forum than your own product code

Clear steps to reproduce would be helpful. How exactly are you producing the image?
Is CoreGraphics involved at any point? What about libpng? Is JUCE_INCLUDE_ZLIB_CODE defined? What about JUCE_INCLUDE_PNGLIB_CODE? JUCE_USE_COREIMAGE_LOADER?

To debug this, I’d recommend saving the images in another format that JUCE supports and checking whether the differences persist, to make absolutely sure it’s due to the renderer and not the PNG writer.

If PNG is the issue: it could be helpful to perform the test entirely within the confines of JUCE, to take numpy etc. out of the loop. Load the image saved on Windows with the Mac build and vice versa. You could also include stb_image.h to get a second PNG implementation to compare against: have JUCE and stb_image each load an image saved by the other.


Yes, I will try to dig further by avoiding both the JUCE PNG code and the numpy checks: go straight to raw buffers and hex-diff them, to see what is going on and take PNG out of the equation.
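As a sketch of that raw-buffer comparison (names and dimensions are just illustrative, this isn’t anyone’s actual test code), something like this would report exactly which pixels differ between the two platforms’ dumps:

```python
# Byte-diff two raw framebuffer dumps of the same render (e.g. one from
# the Mac build, one from the Windows build) and map each differing byte
# back to a pixel coordinate, with PNG encoding out of the equation.

def diff_buffers(a: bytes, b: bytes, bytes_per_pixel=4, width=512):
    """Yield (pixel_index, x, y, byte_a, byte_b) for every differing byte."""
    assert len(a) == len(b), "buffers must be the same size"
    for i, (ba, bb) in enumerate(zip(a, b)):
        if ba != bb:
            pixel = i // bytes_per_pixel
            yield pixel, pixel % width, pixel // width, ba, bb

# Usage sketch (file names are hypothetical):
# a = open("mac.raw", "rb").read()
# b = open("win.raw", "rb").read()
# for pixel, x, y, ba, bb in diff_buffers(a, b):
#     print(f"pixel {pixel} at ({x},{y}): {ba:02x} != {bb:02x}")
```

If the differences cluster along top-left edges of shapes, that would point at the rasteriser’s antialiasing rather than at the PNG code.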

(Both JUCE_INCLUDE_ZLIB_CODE and JUCE_INCLUDE_PNGLIB_CODE are defined, so I’m using the JUCE-provided third-party code. I don’t think JUCE_USE_COREIMAGE_LOADER is a factor, though, as I’m not loading the images back via JUCE, only saving them once generated.)

Thanks, please share your findings. In my experience so far the JUCE software renderer is pretty accurate, and I’m not aware of any platform dependencies in the edge-table rasteriser. Best guess: relaxed IEEE compliance combined with extremely small values somewhere while rasterising the top-left edge, affecting the antialiasing there. If this is down to numerical precision and floating-point math with fast-math enabled, it would be interesting to see whether it can be reproduced by adding an affine transform that translates everything by a tiny amount: so small that it wouldn’t be visible, but enough to perturb all the insignificant digits.

I don’t think it matters for all practical purposes, especially when held against the phenomena one might encounter together with different GPU drivers, buggy shader compilers, and magic “contrast enhancers” on some displays… But it IS intriguing, and there is probably something to learn for those who are crazy enough to have implemented their own software rasterisers in plugin visualisers :wink:

Total stab in the dark, but I wonder if the recent(ish) addition of approximatelyEqual could be at play somewhere? I’m interested to know, but even if we track the difference down, there’s no guarantee we would actually want to change anything.

It might be interesting to compare old and new versions of JUCE on both platforms to see which platform the change occurred on. From there, maybe you could do a git bisect to find the commit that caused it?

I second this. Actually, before reading your message, I was thinking about dumping the pixels from the juce::Image as strings of hex values to a text file, arranged in rows and columns exactly as in the original image. I think that would make it easier to visualise where the differences are with a simple text diff, rather than with actual raw buffers and a binary diff.
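The hex-grid idea could look something like this (a sketch, assuming a raw little-endian 32-bit-per-pixel dump; file names are made up): one 8-hex-digit word per pixel, one image row per text line, so a plain `diff` shows where the two renders disagree.

```python
import struct

def hex_grid(raw: bytes, width: int) -> str:
    """Format a raw 32-bit-per-pixel buffer as rows of 8-hex-digit words."""
    pixels = struct.unpack(f"<{len(raw) // 4}I", raw)
    rows = [pixels[i:i + width] for i in range(0, len(pixels), width)]
    return "\n".join(" ".join(f"{p:08x}" for p in row) for row in rows)

# Usage sketch:
# open("mac.txt", "w").write(hex_grid(open("mac.raw", "rb").read(), 512))
# open("win.txt", "w").write(hex_grid(open("win.raw", "rb").read(), 512))
# then: diff mac.txt win.txt
```

Because each text line is an image row, the line numbers in the diff output directly give you the y coordinates of the differing pixels.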