More pain in the retina


#1

Why oh why do they think it’s a good idea to pixel-double? This is the road to madness; I’ve seen it before.

Anyhow,

I have been battling the iOS “retina” problem again, this time with texture patterns.

I’m using a texture to create a UI surface. The texture is a subtle bumpy pattern, and what I really want is for iOS to always use it without any scaling, because when it’s scaled the bumps don’t look right. Think of something like a gravel texture.

For normal, sane user interfaces that don’t lie to me about resolution, I just draw this texture as needed. For “retina”, what I do is create a piece twice as big as I need, then render it with “drawImage”, giving a doubled width/height for the source rectangle relative to the destination. iOS apparently sees this halving and, because it secretly needs to double anyway, the two cancel out and, presto, I get what I want.
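
Roughly, something like this (just a sketch, not the real code; renderBumpyTexture() is a made-up stand-in for whatever builds the pattern at a given pixel size, and the usual JUCE headers are assumed):

    struct TexturedComponent  : public Component
    {
        void paint (Graphics& g)
        {
            const int w = getWidth(), h = getHeight();

            // Render the texture at twice the component's size...
            Image texture = renderBumpyTexture (w * 2, h * 2);

            // ...then draw the doubled source into a w-by-h destination. The implied
            // 0.5 scale cancels the hidden 2x of the retina backing store, so the
            // texture lands pixel-for-pixel.
            g.drawImage (texture,
                         0, 0, w, h,              // destination, in component coordinates
                         0, 0, w * 2, h * 2);     // source: the whole 2x image
        }

        Image renderBumpyTexture (int pixelW, int pixelH)
        {
            Image img (Image::ARGB, pixelW, pixelH, true);
            // ...fill img with the bump pattern here (left out, this is only a placeholder)...
            return img;
        }
    };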

However, this doesn’t work for non-retina, because there the texture gets scaled down. Normally, scaling down a 2x image would be fine (aside from the performance hit), except this is a texture that looks different when you scale it: the bumps simply come out too small.

To fix this, I have had to KNOW whether this doubling malarkey is going on. When I’m not doubling, I use my texture normally; when we are (retina), I create one twice as big and ask for it to be halved into the coordinate space (which it internally knows isn’t needed in pixel space), after which it comes out right.

To KNOW, I’ve hacked the following getContentScale() method into the Desktop class:

    // juce_ios_Windowing.mm
    float Desktop::getContentScale()
    {
        float g_scale = 1.0f;

        if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
        {
            g_scale = [UIScreen mainScreen].scale;
        }

        return g_scale;
    }
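
And roughly how it slots into the fix above, again just a sketch (assuming getContentScale() is declared static, otherwise call it on Desktop::getInstance(); renderBumpyTexture() is the same made-up stand-in as before):

    void paintTexture (Graphics& g, int w, int h)
    {
        const float scale = Desktop::getContentScale();   // 1.0 normally, 2.0 on retina

        // Build the texture at the true pixel size, then draw it into the w-by-h
        // coordinate-space destination; on retina the implied halving cancels the
        // hidden doubling, on non-retina no scaling happens at all.
        Image texture = renderBumpyTexture ((int) (w * scale), (int) (h * scale));

        g.drawImage (texture,
                     0, 0, w, h,
                     0, 0, texture.getWidth(), texture.getHeight());
    }
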
I’m wondering whether you think this is a good idea in general, i.e. having a way to query the scale, or whether there’s already a Juce way to find this out that I’ve missed.

– hugh.


#2

Yes indeedy - I’ve been meaning to add something that provides the scale factor, so thanks for looking up the API call for me! I’ll add that right away…


#3

Hmm… just thinking about potential scaling pitfalls… It’s all easy enough on an iPad, but OSX will soon provide “retina” monitor support, and I wonder what’ll happen if you attach two monitors with different scale factors? Ok, each screen can report its own scale correctly, but when you drag a window between two screens, or make it overlap, what’ll happen? Perhaps the OS will have to fake everything somehow to make the whole display seem to have a single scale factor.


#4

I’ve been thinking more about my problem, and in general it arises whenever you need to create images programmatically.

What I’m doing is creating my own button images by blending together textures, so for these to look right I have to know the true target pixel size: for retina I create a 2x image, and for non-scaled I just create a 1x one. Thinking about it, this is exactly what Apple ask developers to do when they prepare artwork for an app, i.e. to supply two resolutions. However, since mine is created by the app itself, I have to know whether I’m being scaled or not.
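
As a sketch of what I mean (blendButtonTextures() is made up, standing in for the blending code, and the scale is whatever the query returns):

    void blendButtonTextures (Graphics& g, int pixelW, int pixelH);   // assumed helper

    Image createButtonImage (int logicalW, int logicalH, float scale)
    {
        const int pw = (int) (logicalW * scale);
        const int ph = (int) (logicalH * scale);

        // Blend the textures at the true pixel size, so the detail comes out right;
        // the result is then drawn back into the component at logicalW x logicalH.
        // This is the 1x/2x artwork split Apple ask for, done at runtime.
        Image img (Image::ARGB, pw, ph, true);
        Graphics g (img);
        blendButtonTextures (g, pw, ph);

        return img;
    }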

What I’m saying is that there’s a genuine case for an app to want to know about any coordinate-space-to-pixel-space scaling in operation.

Regarding your point about multiple monitors: yes, I think that’s a whole new can of worms. While I’m at it, I’ll also have another moan about pixel doubling/scaling…

Presumably Apple think they’re being smart by using a floating-point coordinate system with an internal scaling to pixels, but this will end in tears once there are several resolutions in play, especially if aspect ratios change too.

When writing any sort of intelligent UI layout, rather than just a picture with hotspots, you can’t (easily) lay things out on proper pixel-perfect boundaries while working in a fake coordinate space. For example, all my bevel borders come out “big and chunky” on retina because the gradient has been pixel-doubled. I can’t be bothered to fix this, but it’s annoying to see on a platform that prides itself on UI sharpness.

What you could do, of course, is multiply your coordinates by the scale (once it can be known), work in that space, and pass down floats to render.

But this is exactly the same as knowing the true pixel space in the first place!
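
For what it’s worth, a sketch of that workaround (the scale being whatever ends up queryable from Desktop): express line thicknesses as whole hardware pixels and pass the fractional value down as a float.

    void drawPixelAlignedBevel (Graphics& g, const Rectangle<int>& area, float scale)
    {
        const float px = 1.0f / scale;   // one hardware pixel, in coordinate-space units

        g.setColour (Colours::white.withAlpha (0.4f));
        g.drawRect ((float) area.getX(), (float) area.getY(),
                    (float) area.getWidth(), (float) area.getHeight(),
                    px);                 // a genuine 1-pixel line instead of a doubled one
    }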

Thanks for adding in my method BTW, feel free to change the name or to combine it with others.

– hugh.


#5

FYI, I actually got a bit carried away and rewrote that whole monitor-size bit of the Desktop class (had been meaning to do that for a while). It’s already checked in if you want to have a play.


#6

Got it. Yes, that works nicely!

For the benefit of anyone else interested, here’s how you can get the main display and its features, such as the scale:

    const Desktop::Displays::Display& dis = Desktop::getInstance().getDisplays().getMainDisplay();
    _scale = (int) dis.scale;   // note: scale is a double, so this cast assumes a whole-number factor

– hugh.