setPixel removed from Graphics?

setPixel seems to have been removed from the Graphics class in JUCE 5.1.2.

It used to be in ‘juce_GraphicsContext.cpp/h’.

Didn’t see anything in the release notes about it either, does anyone know what’s happened?


I think it was removed a long time ago.

Talking about pixels like that is an outdated and misleading concept now that all drawing is basically vector graphics with some unknown scale factor. All the function could possibly do was call fillRect (x, y, 1, 1), so if that’s what you want to happen, just do that directly.


Still, if one needs to draw single pixels at a time, fillRect (x, y, 1, 1) has to be much slower than a “real” setPixel (x, y).

Why was the JUCE setPixel using fillRect instead of a real “setPixel” to the screen? I mean, is it an OS limitation?

Not necessarily. The pixels have to go through a few abstraction layers: host DPI scaling, per-display DPI awareness, and the Graphics might have an AffineTransform set.

Ultimately there is no chance you can access a physical pixel on the display. Nowadays you have to deal with logical pixels, and that’s why using vector graphics is the only way to get sharp displays.

And since a pixel might, after all the transformations are applied, end up being a rect, using fillRect seems quite appropriate…

Even with DirectX or DirectDraw?

If that is truly the case, then it is no wonder that my crap Windows PC running at 2.4 GHz is not 2400 times faster at drawing (for pixels at least) than my 1 MHz Commodore 64 was when I programmed 6502 assembly-language games on it!

Blurry displays happen when you don’t align your drawing with the physical pixel grid. This is true for both bitmaps and vector drawing, which is why you always see 0.5-pixel offsets in path-drawing code. If you ship bitmaps, then you can supply a few different sizes.

If you want to fill an area pixel by pixel, you make an image, fill it up (see Image::BitmapData), and paint the image. Its size will match the number of physical pixels it will occupy, and you paint it using ‘low’ resampling quality to avoid blurring.

I am probably old school, but it seems like a terrible waste to me to have to go through fancy layers when you know your monitor has exactly 1920x1080 pixels, yet you can’t draw directly to them.

I just don’t understand the point of adding a 0.5 offset to a point when your monitor does not have half pixels, only whole pixels. In other words, drawing a line from pixel 0, 100 to 200, 100 versus from 0, 100 to 200.5, 100 will turn on the exact same pixels on the monitor.

One word: anti-aliasing. One will be sharp; the other will be roughly 50% transparent and two pixels wide.

The problem with trying to be pixel-perfect is that most of the time you end up with “looks great on my machine”.
The second drawback is that you need a lot of extra code if you try to support different scalings, and you will never know whether you hit the sweet spot of the user’s machine.
Most pixel-based UIs end up offering one, or maybe a few, fixed sizes.