Retina graphics on the New iPad (aka iPad3)

Hi all,

I found a previous discussion from a few months ago about retina graphics on iPhone 4 displays. Now I’m seeing very bad performance with an iPad3 Juce application that runs really well on iPad1 and iPad2. The reason is the implicit 4x upscaling (1 point = 4 pixels) that CoreGraphics applies to all standard-resolution (1 point = 1 pixel) bitmaps.

If you just have, say, a spinning knob on a flat background to redraw, everything is fine. But when you have 4 or 5 layers of overlapping images (a background, an array of objects, an overlapping panel containing a scrolling list of items with thumbnail pictures), each screen update becomes time-consuming, because the 4x scaling is applied to every layer from the background up to the topmost picture.

The end result is an application that runs smoothly on iPad2 but is very choppy on the new iPad3!

From what I understand, somebody seems to have managed to handle high-resolution images in Juce and have them painted correctly on an iPad3. Is that true? Could you please advise on how to do that?

I guess it won’t be a cross-platform solution, as this problem only shows up on iOS. That would be OK for me for now, because I’m only planning to release the app for Apple devices.
A code snippet (maybe a hack of the Juce ImageComponent or so…) would be very much appreciated.

Thanks!

i have had a lot of problems with overlapping layers. whenever you do this it has to repaint all the areas from back to front. in my case, some of the backgrounds were complicated to repaint, so i had to enable the back buffering for those parts, ie `setBufferedToImage(true)'.
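something like this, as a minimal sketch (the class name is made up):

[code]
// illustrative only: a heavyweight background that caches its own rendering
class BackgroundComponent : public Component
{
public:
    BackgroundComponent()
    {
        // keep a back buffer so the expensive paint() below only runs
        // when this component is actually invalidated
        setBufferedToImage (true);
    }

    void paint (Graphics& g)
    {
        // imagine lots of costly gradient / image drawing here
        g.fillAll (Colours::darkgrey);
    }
};
[/code]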

However, this doesn’t work when dealing with retina displays (ipad or otherwise), because the backing buffer doesn’t know it should be 4x the size. in my case, i had to subclass `CachedComponentImage' and do some horrible hacks myself.

the other thing to do in these cases is to be a bit more clever in your paint methods regarding the clipping area, and only repaint the minimum needed.
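for example, a rough sketch of clip-aware painting (numRows, rowHeight and drawRow are made-up names):

[code]
void MyListComponent::paint (Graphics& g)
{
    // only redraw the rows that actually intersect the area being repainted
    const Rectangle<int> clip (g.getClipBounds());

    for (int i = 0; i < numRows; ++i)
    {
        const Rectangle<int> rowBounds (0, i * rowHeight, getWidth(), rowHeight);

        if (rowBounds.intersects (clip))
            drawRow (g, i, rowBounds);   // made-up helper that paints a single row
    }
}
[/code]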

in actuality there is no 4x scaling, because your art assets must already be at the correct pixel size; it’s just that you have to lie about the coordinate space to get them on screen correctly.

whoever thought that having a display scale of 2 while keeping the original coordinate space was a good idea should be shot, because it hurts now, and it will really hurt when the next two Apple platforms debut.

unless, of course, apple provide us with a native pixel mode.

I’ve managed to improve the situation a little bit by moving my overlapping panel (a kind of dialog box) to a new UIView. I did that by calling addToDesktop(0, 0).
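For reference, this is roughly all it takes (m_panel is just an illustrative member name):

[code]
// rough sketch: give the panel its own peer (its own UIView on iOS), so that
// interacting with it no longer forces repaints of the components underneath
m_panel->addToDesktop (0, 0);   // no window style flags, no native window to attach to
m_panel->setVisible (true);
m_panel->toFront (true);
[/code]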

The panel now lives in a separate UIView. After showing the panel I put a breakpoint on the paint() of the components below and noticed that when I touch the controls on the panel, those components don’t get repainted anymore. A little step ahead…

Profiling the code, I also noticed that even drawText() and the gradient functions are much slower on the iPad3 than on the iPad2… so I had to simplify the look of the interface, using plain colors instead of gradients.

Hugh, could you give me some further details about your hacks to CachedComponentImage? I still need a way to handle hires images but can’t figure out how to integrate that code into Juce.

Thanks,
Alf

Hi,

If you enable the built-in buffering with `setBufferedToImage(true)', it will make a backing cache and work, but the imagery will be upscaled on retina so that it looks chunky and blurred. However, you can try this first to see what kind of performance change you might expect.

If this is an advantage, then you can fix the caching for retina (and non-retina) as follows:

  1. determine the display scale (eg retina = 2, everything else = 1)

[code]
const Desktop::Displays::Display& dis = Desktop::getInstance().getDisplays().getMainDisplay();
int displayScale = (int) dis.scale;
[/code]

  2. use a custom scaling cache in your drawing component (see the implementation below)

[code]
setCachedComponentImage (new ScalingCachedComponentImage (*this, displayScale));
[/code]

  3. in your component, work in “real” coordinates. For example, in my `paint' method i call a local `_getLocalBounds' method and use the bounds from this to issue my drawing commands. i do NOT call getWidth() or getHeight() directly, as these will be the wrong size. the reason is that the painting here ends up on the cached image and therefore must be at the appropriate pixel size.

[code]
template <typename T>
Rectangle<T> scaledUp (const Rectangle<T>& r, int s)
{
    return Rectangle<T> (r.getX() * s, r.getY() * s,
                         r.getWidth() * s, r.getHeight() * s);
}

Rectangle<int> MyComponent::_getLocalBounds() const
{
    Rectangle<int> b (getLocalBounds());
    b = scaledUp (b, displayScale);
    return b;
}
[/code]

  4. implement the ScalingCachedComponentImage. Here’s my version:

[code]/**
    Copyright © 2012 Voidware Ltd.

    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to
    deal in the Software without restriction, including without limitation the
    rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
    sell copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in
    all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
    IN THE SOFTWARE.
*/

#ifndef cci_h
#define cci_h

#include "../JuceLibraryCode/JuceHeader.h"
#include "utils.h"

/** Replacement for the standard cached image that makes use of the UI
    scale factor to support `retina' style coordinate spaces.
*/

class ScalingCachedComponentImage: public CachedComponentImage
{
public:

ScalingCachedComponentImage (Component& owner_, int scale) noexcept
: owner (owner_), _scale(scale) {}

void paint (Graphics& g)
{
    Rectangle<int> bounds (scaledUp(owner.getLocalBounds(), _scale));
    
    if (image.isNull() || image.getBounds() != bounds)
    {
        image = Image (owner.isOpaque() ? Image::RGB : Image::ARGB,
                       jmax (1, bounds.getWidth()), jmax (1, bounds.getHeight()), ! owner.isOpaque());

        validArea.clear();
    }

    Graphics imG (image);
    LowLevelGraphicsContext* const lg = imG.getInternalContext();

    for (RectangleList::Iterator i (validArea); i.next();)
        lg->excludeClipRectangle (*i.getRectangle());

    if (! lg->isClipEmpty())
    {
        if (! owner.isOpaque())
        {
            lg->setFill (Colours::transparentBlack);
            lg->fillRect (bounds, true);
            lg->setFill (Colours::black);
        }

        owner.paintEntireComponent (imG, true);
    }

    validArea = bounds;

    g.setColour (Colours::black.withAlpha (owner.getAlpha()));

    // draw the image, applying the display-scale downscaling.
    // this downscale won't actually be performed, but it tells
    // iOS that it can use the image at the pixel level
    int w = image.getWidth();
    int h = image.getHeight();
    g.drawImage(image,
                0, 0, 
                w/_scale, h/_scale,
                0, 0, 
                w, h,
                false);
}

void invalidateAll()                            { validArea.clear(); }
void invalidate (const Rectangle<int>& area)    { validArea.subtract (area); }
void releaseResources()                         { image = Image::null; }

private:
Image image;
RectangleList validArea;
Component& owner;
int _scale;

JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (ScalingCachedComponentImage);

};

#endif // cci_h
[/code]

good luck!

– hugh.

Hi Hugh,

Thank you so much for your instructions!

Unfortunately my application has several overlapping images and I can’t find an easy way to “trick” the coordinates. There should be a way to distinguish between “logical” coordinates (points) and “physical” coordinates (pixels). The whole Juce framework makes no distinction between the two, so I can’t see a way to make images and graphics contexts aware of the difference.

I tried adding a float scale member to images in order to have that value at hand in CoreGraphicsContext::drawImage(), where I discovered (thanks Hugh!) that by adding an additional scale transform of 0.5, an image gets rendered at retina resolution.
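To illustrate the effect (this is just a simplified sketch, not the actual change inside CoreGraphicsContext; hiResImage is an assumed 2x asset):

[code]
// sketch: draw a 2x (retina) image through a 0.5 scale transform, so its
// pixels map 1:1 onto the retina backing store instead of being upscaled
void MyComponent::paint (Graphics& g)
{
    g.drawImageTransformed (hiResImage, AffineTransform::scale (0.5f), false);
}
[/code]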

One side of the problem, for example, is getting the size of an image by simply calling getWidth() and getHeight(). If I handle a retina image (I’m detecting iPad3 support at runtime and possibly upscaling all my graphic resources to 2x), I don’t know what size Image::getWidth() should return. I added two functions, Image::getPointWidth() and Image::getPointHeight(), to return downscaled (point) sizes, like:

int Image::getPointWidth() { return image ? roundToInt(image->width / scale) : 0; }

But of course this is not enough, because all the chained code, like Graphics::drawImageAt() calling Graphics::drawImage(), which calls Graphics::drawImageTransformed(), just uses image pixel sizes as synonyms for point sizes.

So I tried another way: tricking the Image sizes so that they stay the same when a 2x image has its scale factor set to 2.0.

int Image::getWidth() { return image ? roundToInt(image->width / scale) : 0; }

But then all the code I had written that calculates pixel offsets inside an image would have needed to be rewritten to take into account that the offsets should be multiplied by the scale factor. In addition, there are several parts of Juce where getWidth() is supposed to return the size in pixels, which would need to be adjusted accordingly.

So, it’s a mess… Unfortunately I’m not able to find a satisfying solution right now, and I’m about to release my iPad application, so I’ll go with the current approach (meaning: no changes to the current Juce), which is slooooow on iPad3. But I still hope this question won’t remain unsolved for long (Julian?), as I suppose all Juce users dealing with retina displays (iPhone4 and iPad3) and some complex drawing are having this serious performance issue as well.

Maybe a good way to go would be to see images more as textures than as 2D raster graphics, and then use them to texturize a component when it comes time to paint it on screen. This would decouple image resolution from how images get rendered on screen (and by the way, most of the time it would be a fast, plain 1:1 rendering).

Alf

Have either of you tried using the OpenGL renderer on iOS instead of the default (CoreGraphics) one?

According to the following post, it is possible to get a 1536x2048 GL framebuffer.

So it may be possible to use the OpenGL renderer to paint at the native pixel size and not have to deal with the CoreGraphics display scale stuff.

Firstly, Juce works in the coordinate space of the graphics environment, which is normally pixels, but in the case of iOS retina it’s a virtual space with a scale. Juce won’t do anything with that scale; it’s just there so you know about it.

you can try to work locally in pixel space (eg multiplying by the scale) in your paint method. this approach works when you’re using cached images, like my example. but when you’re painting directly, calls back to the drawing system expect logical coordinates.

regarding your images, if these are pre-rendered resources then you should have one for low res and one for high res. Both should be put into your app. you could, of course, downsize the hi res one for low res, but never upsample the low res to high res.

You would then use the appropriate image depending on your device, with the high res one being “coordinate downsized” (ie not pixel reduced) onto retina.
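something like this, for example (a sketch; the image names and displayScale variable are assumptions):

[code]
// pick the matching asset, then "coordinate downsize" the 2x one,
// ie draw it into the logical bounds without reducing its pixels
Image img = (displayScale == 2) ? myImage2x : myImage1x;

g.drawImage (img,
             0, 0, getWidth(), getHeight(),          // destination in logical coordinates
             0, 0, img.getWidth(), img.getHeight(),  // full source image in pixels
             false);
[/code]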

my problems came about because certain textures within my app are procedurally generated by the app itself. this generation can produce whatever size i want, so i generate either a small texture or a large texture depending on whether the display scale is 1 or 2.

the whole display scale thing is nasty. it’s totally the wrong thing in the first place and this will become much more apparent when the ipad mini and iphone5 appear. perhaps in iOS 6, Apple will add a new system call to work in pixel space.

here’s some suggestions until then,

  1. work in logical coordinates always, except for doing tricks with images so that they come out high res on retina.

  2. use the image cache with components, so you can write your paint method always in pixel coordinates.

  3. use paths as much as possible to draw stuff because these can have applied transforms which work on retina. ie a halving transform draws to actual retina pixels.
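for instance, a rough sketch of point 3:

[code]
// build the path in pixel (2x) coordinates, then draw it through a halving
// transform so the geometry lands on actual retina pixels
Path p;
p.addRoundedRectangle (0.0f, 0.0f, getWidth() * 2.0f, getHeight() * 2.0f, 16.0f);

g.setColour (Colours::white);
g.fillPath (p, AffineTransform::scale (0.5f));
[/code]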

@sonic59

yes, i’m looking into using opengl for UI currently. it works, but i do have a few problems.

one app-related constraint is that Juce OpenGL is ES 2.0 (or better) only. This might exclude some older devices (if that matters), and some others might be too slow (i haven’t looked into this).

but this might be a way round the whole thing eventually.

There are very few iOS devices that don’t support OpenGL ES 2.0.

iPhone - Discontinued July 11, 2008
iPhone 3G - Discontinued September 1, 2010
iPod Touch 1st Generation (All Models) - Discontinued September 9, 2008
iPod Touch 2nd Generation (All Models) - Discontinued September 9, 2009
iPod Touch 3rd Generation (8GB Model) - Discontinued September 1, 2010

However, isn’t this a moot point anyway? The beauty of Juce’s multiple graphics rendering system is that the renderer can be changed at runtime.

I’m guessing your code will support both non-retina and retina resolutions. If you really need to support these older devices, you could detect them and fall back to the CoreGraphics renderer. Or you could just use CoreGraphics for all non-retina resolutions while using OpenGL for retina resolutions.
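Something along these lines might work (a sketch only; whether the iOS peer exposes an OpenGL engine, and the exact engine names, are assumptions on my part):

[code]
// sketch: ask the peer which rendering engines it offers and switch at runtime
ComponentPeer* peer = myTopLevelComponent->getPeer();   // myTopLevelComponent is illustrative

if (peer != nullptr)
{
    const StringArray engines (peer->getAvailableRenderingEngines());
    const bool isRetina = Desktop::getInstance().getDisplays().getMainDisplay().scale > 1.0;

    // engine names and their order aren't guaranteed, so look the one we want up by name
    const int wanted = engines.indexOf (isRetina ? "OpenGL Renderer" : "CoreGraphics Renderer");

    if (wanted >= 0)
        peer->setCurrentRenderingEngine (wanted);
}
[/code]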

Hi all again and thanks for your answers.

After upgrading my project to the latest version of Juce (my former version had no CachedComponentImage objects nor a lot of other new stuff), I tried Hugh’s solution with ScalingCachedComponentImage. But then I discovered that it doesn’t seem to work if I have a cached ImageComponent that also has child components; in that case something really goes wrong when subareas get redrawn.

Then I realized that the ImageComponent has a nice feature to optionally scale its own image!

On an iPad3, I tried setting a hires image on an ImageComponent, leaving the component bounds untouched and setting the stretchToFit rectangle placement, and it worked!

So, as Hugh suggested, I added a function to the Desktop singleton to know whether I’m on a retina device, so my code can fork where required.
Now, every time I have an ImageComponent to place somewhere (I do the same with ImageButtons as well), I check whether I’m on a retina device and then just rescale the image on the fly, instructing the ImageComponent to still stretch the image to fill its bounds (which stay the same!). When the ImageComponent gets repainted, the apparent “downscaling” makes the image get rendered at high resolution.

This means that I can finally manage hires drawing intentionally, and I usually try to balance RAM usage (2x images are 4x the normal size) against drawing performance.

To be honest, I never got my app to run as fast on the iPad3 as on an iPad2 (crazy!!), but at least I got rid of the incredible initial slowness…

Here is an example of how I got it to work:

[code]
// getPixelScale() returns [UIScreen scale] where available, defaulting to 1 on older iOS versions
const int scale = Desktop::getInstance().getPixelScale() == 2.0f ? 2 : 1;

m_bgView = new ImageComponent ("background");
m_bgView->setBounds (0, 0, 750, 580); // bounds in point (not pixel) coordinates

Image img (sres->getImage ("images/background")); // standard-resolution image loaded from the resources
img = img.rescaled (img.getWidth() * scale, img.getHeight() * scale); // no rescaling happens when scale == 1

m_bgView->setImage (img, RectanglePlacement::stretchToFit); // this does the trick

addAndMakeVisible (m_bgView);
[/code]

Another great step forward was layering my overlapping panels on separate iOS UIViews by adding the panel components to the desktop (as AlertWindows do). This dramatically reduces the repaints when something on top changes (I have panels with scrolling lists…). But then I ran into a problem with rotated coordinates (when the iPad is not in standard portrait orientation) on peer-connected components… I’ll eventually post another question about that, as it would be off topic here.

Thanks all again,
Alf

= WIN!

I just create double-resolution PNGs and use this approach now, and it works great. There’s no need to query the Desktop for the scaling factor if you only have one double-res set of image resources. Thanks alfjuce.

Sure!

You only have to live with some extra memory and a down-scaling when the app runs on a non-retina iPad…

This works well when using the CoreGraphics renderer, but with OpenGL things are pretty blurry here. Not only images, but also labels… fonts in general. It’s actually worse than what the retina/non-retina difference would explain. Has anybody gotten a crisp display with OpenGL?