iPhone 4 screen resolution



I’ve been developing an iPhone app using Apple’s latest iPhone simulator, which should have a screen resolution of 960 by 640 pixels at 326 ppi (from http://www.apple.com/iphone/specs.html). The Desktop class is telling me, however, that the screen resolution is (320, 460), and nothing is looking quite as sharp as it should in my app.

Does anyone know what I need to do to get the higher resolution? I haven’t been able to work out whether I’m missing a property in my Info.plist, or something like that, or whether it’s a problem in Juce…



I was wondering too. For Apple projects, there’s some clever kung fu in UIImage, with two images of different resolutions being loaded.

For us, there’s [UIScreen mainScreen].scale, for one:

(you would need respondsToSelector to cope with iOS < 4.0)

On an iPhone 4 in an iOS 4 app, it returns 2.0.

I dug around on that a bit, and found: setContentScaleFactor

So you would set that to match the screen’s scale, and you should get the full res. Sounds like a good candidate for Jules’ multiple layout scheme; you’d need another layout for high versus low res.



// Double the resolution on iPhone 4.
g_scale = 1.0f;

if ([[UIScreen mainScreen] respondsToSelector: @selector(scale)]
     && [self respondsToSelector: @selector(setContentScaleFactor:)])
{
    g_scale = [UIScreen mainScreen].scale;
    [self setContentScaleFactor: g_scale];
}


Oh, but the touch coordinates will still be in the ‘standard’ scaling. Looks like this may be a Jules thing then.



This must be a plist setting, surely? If the OS is pretending that the screen res is 320x460, then it must think your app isn’t capable of using the full resolution…


I thought it must be an Info.plist setting as well, but there doesn’t seem to be any setting defined for it.



According to this article, the number of logical coordinate units stays the same regardless of which phone you’re running on. You make use of a logical construct called “points”, which map to pixels either 1:1 or 1:2:



Still trying to figure out how to make use of this inside juce…


Okay, I figured out what this means in the context of Juce. Juce was already rendering at the higher resolution when you draw geometric primitives such as rectangles, text, and so on. The only tricky case is rendering images. Usually I prefer Graphics::drawImageAt for images, because I’d guess it’s the most efficient image-rendering method. But if you want to take advantage of the iPhone 4’s ability to render images at higher resolution, you need to load a high-resolution image and draw it using drawImageWithin() or drawImage(), giving it a target width and height that is half the image size. This draws the image at the full higher resolution on the iPhone 4 and rescales it to half size on earlier iPhones.
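As a rough sketch of the arithmetic involved (plain C++; the struct and function names here are made up for illustration and are not JUCE API): the target rectangle handed to drawImage() is half the @2x image’s pixel size, so the 2x backing store on an iPhone 4 maps it 1:1 to physical pixels, while earlier iPhones downscale it.

```cpp
#include <cassert>

// Hypothetical stand-in for a target rectangle (not a JUCE type).
struct Rect { int x, y, w, h; };

// Given a @2x image's pixel dimensions and a position, compute the
// destination rectangle to pass to drawImage()/drawImageWithin():
// half the pixel size, expressed in logical coordinates.
Rect destinationFor2xImage (int imagePixelWidth, int imagePixelHeight, int x, int y)
{
    return { x, y, imagePixelWidth / 2, imagePixelHeight / 2 };
}
```

So a 640x200 pixel @2x image placed at (10, 20) would be drawn into a 320x100 target.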


Forgive me for waking up an old thread, but after dabbling with JUCE on Android this evening, a platform and environment I detest, I started playing with iOS, a platform I don’t hate, and almost immediately found myself wondering how people are handling the multi-resolution issue.

It seems to me that, provided JUCE is relying on iOS to ultimately scale the image, symfonysid’s solution should work. However, I would probably approach it from the other direction. As mentioned, scaling is typically less efficient, and symfonysid’s approach requires real-time downscaling on older, lower-res devices. It would also be an iOS/iPhone idiosyncrasy that leaks into other targets when building multi-platform.

The way it is typically handled in iOS development is that both low-res and high-res versions of images are put into the bundle. The files use the same names, but with a suffix just before the extension (some.png, some@2x.png); when you load the image from the bundle, you get the version appropriate for the underlying screen type. This minimises real-time scaling, which generally carries both a performance cost and a quality hit, since downscaled images can pick up artifacts.
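As a sketch of that naming convention in plain C++ (iOS performs this selection inside UIImage; the helper function here is hypothetical):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper mimicking iOS's resource selection: insert "@2x"
// just before the file extension when the screen scale calls for it.
std::string resourceNameForScale (const std::string& baseName, int scale)
{
    if (scale >= 2)
    {
        const std::string::size_type dot = baseName.rfind ('.');
        if (dot != std::string::npos)
            return baseName.substr (0, dot) + "@2x" + baseName.substr (dot);
    }
    return baseName;   // low-res screens get the plain file
}
```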

It seems to me that JUCE is already well set up to do something similar with a simple ResourceImage wrapper class. Both high- and low-res versions could be placed in binary resources, and the more suitable one passed back through the image cache. If the class is called on to do rendering, drawImageAt could be used for 1:1 images on most platforms and low-res iPhones, and drawImage with a scale used only in the pseudo-scaling case, where there isn’t much impact on the underlying OS rendering. The @2x resources could be conditionally compiled in for iOS only. Heck, IntroJucer could even conditionally put images in the bundle instead of the binary for that particular platform… But I digress. I happen to really like the IntroJucer concept and would love to see it grow into an IDE. In the meantime, though, it seems that a portable solution with a low run-time performance hit would be well supported by JUCE now.

Again, sorry to wake up an old thread. I just got interested in JUCE on smartphone platforms and pretty quickly abandoned Android, which gives my eye a nasty twitch…



I have this issue too. All my UI development is designed for scalable resolutions, but I really don’t want to be messing around with normal and x2 resources. After all, this doesn’t make sense for a properly resizable app.

What I really want is for the iPhone HD models to tell me the truth about the screen resolution and positioning. Then I’ll decide whether I want to scale my bitmaps down or not. I would very much like JUCE to work with the native true resolutions on all platforms, and not some x2 bogus scaling of my coordinates on some models but not others. If that means JUCE has to de-convert iOS’s lies, all the better.

After all, what’s going to happen when the iPhone 5/6/7 etc. and iPad 3/4/5 etc. also increase in resolution? Hiding the true values is the road to madness; it always was.

I can say that downsizing on the fly with drawImageWithin is not the answer, especially on the slower models. It uses a scaling transform, and the slower hardware can’t keep up.

So far, my approach has been to generate scaled-down images and cache them when I receive a resized() call. When I’m drawing, I use these cached images. The other thing that has been a real win is Jules’ new(ish) setBufferedToImage feature on a Component. If you don’t know it, this tells a Component to cache its appearance automatically and paint from that cache. Very useful when things are not constantly changing.
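A minimal sketch of that caching idea in plain C++ (the types here are stand-ins, not JUCE classes): do the expensive rescale once per target size, then paint from the cache.

```cpp
#include <cassert>
#include <map>
#include <utility>

// Stand-in for a rescaled bitmap; a real version would hold pixel data.
struct ScaledImage { int width, height; };

// Hypothetical cache: rescale once when a new size is requested (e.g.
// from resized()), then reuse the cached copy while painting.
class ImageScaleCache
{
public:
    const ScaledImage& getScaled (int w, int h)
    {
        const auto key = std::make_pair (w, h);
        auto it = cache.find (key);

        if (it == cache.end())
        {
            ++rescaleCount;   // the expensive path, taken once per size
            it = cache.emplace (key, ScaledImage { w, h }).first;
        }

        return it->second;
    }

    int rescaleCount = 0;   // exposed so the caching behaviour is observable

private:
    std::map<std::pair<int, int>, ScaledImage> cache;
};
```

Repeated paints at the same size hit the cache; only a genuine size change triggers another rescale.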

To conclude, I like to use high-res resources on all platforms, and I’ll do the scaling where necessary.


FWIW, I actually was privy to the early discussions and feedback prior to iPhone 4/iOS 4. I wouldn’t necessarily have solved the problem the same way, but I don’t think Apple’s approach was madness either.

The Apple approach to the increased-resolution displays has, from their point of view, some advantages. First, by keeping the workspace the same but rendering at higher resolution behind it, all existing apps picked up sharper text and less fuzzy graphics primitives automatically. As long as you used the system to draw text and primitives, you picked up better rendering.

A second benefit was that existing titles could be updated to exploit the benefits of increased resolution by simply dropping in higher res alternative graphics.

A third benefit was that it punted the problem of a huge disparity between the workspace in terms of user input and the workspace in terms of display. Remember, the physical screen stayed about the same size and the touch-input technology remained about the same. The Apple scheme keeps a close-to-one-to-one relationship between the pixels you can pick to draw and the pixels you can touch.

Something to keep in mind is that the API already took floating-point values for locations and widths. So the graph I had in one app, where I just calculated positions in floating point and drew lines, automatically took advantage of the higher resolution when available. Apple had already set the stage for the scheme by previously offering sub-pixel rendering.

JUCE uses ints for positions, but still mostly gets a free ride to better resolution. Text and the graphics primitives all render sharper and with higher density on the higher res devices without having to do a lot of behind the scenes magic to fake matching up touch input, etc.

The problem pops up when you get to bitmap graphics, but if you are using graphics in a resolution-independent app, you are likely to end up with a similar approach anyway. Look at icons: you just get better visual quality when you draw the 16x16 at 16x16 and the 128x128 at 128x128.

I’ve been working on a JUCE port of an existing little mobile utility as a proof of concept here, and symfonysid’s approach already works pretty well. I dragged the Jucer display area to match horizontal and vertical orientations for the iPhone and laid out a bunch of graphics. I basically just took 2x graphics from the existing iPhone utility, first set them to 1/2 in absolute pixels, then switched to proportion of parent. For text I used labels, but made those proportional as well, and added font-change logic to resize. The 1, 3GS, 4/S, and iPad all look pretty good compared to the original, even though the iPad is a different aspect ratio. So I built for Android, Windows, and Mac, and they all look pretty good as well. If the platform is iOS or Android, I set the title bar to 0 and maximize; for the others, a normal window bar and resizable.

I had to pick a base resolution for the graphics that was reasonable across all the uses, but that would be true regardless. And if I want graphical perfection, I have to pick between multiple resolutions based on the display. Not for the primitives; they are already handled. Text, graphics primitives, etc. already look sharp. But the PNGs have to be the right resolution and positioned so that the aspect ratio fits the surface.

You want to do that yourself, which I can understand (though I don’t necessarily understand the benefit of having JUCE give you back renderer resolution as pixel resolution and then fake pointer resolution, as opposed to just obtaining the underlying rendering resolution in the cases where you really care). But, again, consider Apple’s point of view. They addressed a problem that all applications will have to face, and chose to do so in a way that let app writers pretend that only two resolutions (iPad and iPhone) exist. Their goal was the easiest path to resolution usage for developers on their platform, not people writing platform-independent apps.

Again, this actually isn’t the way I would have handled the same problem, so I don’t contend it is ideal. But having deployed a bunch of iOS apps and supported them from 1G to iPad-2, I don’t think it is madness either.



Yes, I understand your reasoning and why Apple may have chosen to do things this way. The gain here is the ability to move forward more easily and to give existing apps a boost.

I would very much like it if things could work either way. For example, they could add a new API call, ‘setPixelAndRenderingRelation’ (a figurative example), that lets apps that want to work with the true pixel resolution do so. Of course, existing apps won’t be calling this, so they get the advantage of automatic doubling (or whatever).

My motivation here is that working with actual pixels is needed for some things, and this cannot be hidden forever.

You raise an important point regarding input disparity. What I have to do in my apps is be aware of the display DPI. Actually, I have no proper way to find this, so right now I have a rather horrible hack where I “know” the DPI from the resolution and other dubious OS version values.

Once I have the DPI, I can scale my UI so that my buttons are pressable by fingers. I have to do this so that, for example on the iPhone 4, they appear the same size in mm/inches on the screen.
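The arithmetic is straightforward; a hedged sketch in plain C++ (the function name is made up for illustration): convert a desired physical size in millimetres to pixels via the DPI, using 25.4 mm per inch. At the iPhone 3GS’s 163 ppi, a 9 mm button comes out to about 58 px; at the iPhone 4’s 326 ppi, about 116 px.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: physical size in mm -> size in pixels, given the
// display DPI. There are 25.4 mm per inch.
int buttonSizeInPixels (double sizeInMm, double dpi)
{
    return (int) std::lround (sizeInMm * dpi / 25.4);
}
```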

I admit this is a drawback of working with actual pixels.


FWIW, you can access the higher-resolution screen on the Retina display devices, just not in a very JUCE-like way. The default logical coordinate space is in points (1/160th of an inch), but you can find out what scaling the system is applying (see above) and alter the logical coordinate space to match. I’ve done this on a couple of non-JUCE iOS apps.

In theory, JUCE could do this too and hide all the side effects, but cost/return of finite Fearless Leader time might not be worth it.

As for the ‘fat finger factor’, it goes with the platform. In the early iOS SDKs, the default info button was almost impossible to press. I’m still muddling over how I want to handle it in my JUCE/iOS stuff.


Given that people will soon have to deal with HiDPI in OS X as well as Retina in iOS, this seems like the kind of place where JUCE could score a big win by providing the same API on both platforms. It would be great to be able to get the scaling factor, get the DPI, and do pixel<->point coordinate transforms in JUCE code that just works on each platform.
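Something like the following, sketched in plain C++ (none of these names exist in JUCE; they just illustrate the kind of API being asked for):

```cpp
#include <cassert>

// Hypothetical cross-platform display metrics of the kind proposed above.
struct DisplayMetrics
{
    double scale;   // backing-store pixels per logical point (1.0 or 2.0)
    double dpi;     // physical dots per inch

    double pointsToPixels (double points) const  { return points * scale; }
    double pixelsToPoints (double pixels) const  { return pixels / scale; }
};
```

On a Retina device this would report scale 2.0, so a 320-point-wide layout maps to 640 physical pixels; on older hardware scale 1.0 makes the transforms identity.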


Yeah, it’s something I’d like to make juce handle automatically, and it should be pretty easy to do, I’ll figure it out at some point…