Image rendering is done in software, whereas normal window rendering may be done in hardware, and possibly with subpixel antialiasing rather than normal (greyscale) antialiasing.
Yes, of course, but I was under the impression that Images would also be backed by an OpenGL texture, and thus Image rendering would also be hardware rendering. It seems that’s not the case though.
Looking into this more, it appears the Image I’m getting on Mac is actually a CoreGraphicsImage. If scenario A is rendered with CoreGraphics, why would it differ from scenario B rendered to a CoreGraphicsImage?
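To be concrete, here's roughly what I mean by the two scenarios (a trimmed-down sketch; the component and strings are made up):

```cpp
#include <JuceHeader.h>

struct DemoComponent : public juce::Component
{
    void paint (juce::Graphics& g) override
    {
        // Scenario A: text drawn straight into the window's graphics
        // context, which targets the screen.
        g.drawText ("direct", getLocalBounds().removeFromTop (20),
                    juce::Justification::left);

        // Scenario B: the same text rendered into a juce::Image first
        // (on Mac this appears to be backed by a CoreGraphicsImage),
        // then the image is blitted to the screen.
        juce::Image img (juce::Image::ARGB, getWidth(), 20, true);
        {
            juce::Graphics ig (img);
            ig.drawText ("via image", img.getBounds(),
                         juce::Justification::left);
        }
        g.drawImageAt (img, 0, 20);
    }
};
```

The glyphs in scenario B come out looking slightly different from scenario A, which is what prompted the question.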
The type of the image is not necessarily related to the engine that renders to it, I believe. It does make sense when you think about it: the software renderer is the lowest common denominator. While the hardware/OS contexts can take advantage of e.g. your LCD's subpixel layout and apply suitable subpixel antialiasing, baking the same into a rendered image would look wrong on another monitor.
Thanks for the continued info here. So even though both are drawn with CoreGraphicsContext::drawGlyph, they may look different because one is actually being drawn to the screen?