iOS drawing speed

I made a multipoint envelope with JUCE that runs fine on Mac (about 20-50 fps, though I had to use software rendering there because CoreGraphics was a bit slow too). The same envelope, on my iPhone 3, runs at about 1 fps. Why is the drawing speed so slow on the iPhone? How can I speed it up?

It must be your settings or your specific JUCE version. I’ve had much better results on a 2nd-gen iPod touch (which has a similar spec to the iPhone 3, I believe): pretty fast waveform drawing and FFT spectrum drawing.

Or is it a 3GS? In that case, make sure the target is set to build “optimized” for both armv6 AND armv7 in Release mode. The armv6 floating-point code runs slower on the 3GS, as it’s only there for backward compatibility; the armv7 code uses the NEON chip’s faster floating-point instructions.
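For reference, here’s a minimal sketch of the relevant Release build settings in .xcconfig form (the key names are assumptions that depend on your Xcode version, so double-check them against your target):

[code]// Release configuration sketch (assumed key names)
ARCHS = armv6 armv7          // build a fat binary covering both architectures
GCC_OPTIMIZATION_LEVEL = 3   // "optimized" code generation
[/code]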

Is the drawing accelerated on iPhone or is it software rendering?

It uses CoreGraphics.

(You can switch to a software renderer, but it won’t be able to draw any text because iOS doesn’t give us access to the glyph shapes)

Or I could use OpenGL… But I remember that a few months ago the JUCE OpenGL Demo didn’t work on the iPhone - is this working now?

[quote]It uses CoreGraphics.

(You can switch to a software renderer, but it won’t be able to draw any text because iOS doesn’t give us access to the glyph shapes)
[/quote]
Is it possible to use CoreGraphics & software rendering in a mixed fashion?

Yes, but it’s probably not an optimal way of doing it, because after rendering with the software renderer, it still needs to blit the result to the screen.

I’m confused.

Wasn’t the point of EdgeTables to be able to draw text where there is no operating system access to glyph shapes?
And didn’t the iOS version use to use EdgeTables?
So why wouldn’t it be able to draw text?

Unrelated to EdgeTables, isn’t what you said incorrect for JUCE on iOS 3.2+?
When I wrote the CoreText support, I added CoreText-based glyph path support, and when you added CoreText to JUCE, that code went in with it. When I run the JUCE demo in the iPad simulator with the CoreGraphics renderer off, the demo works fine and the text shows up as expected.

So I’m not sure why you are saying it won’t draw any text on iOS when using the software renderer.

An EdgeTable is created from a Path, so you need to be able to get the glyph path, and in iOS 2.0 that wasn’t possible.
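To illustrate the dependency, here’s a minimal sketch, assuming the Typeface::getOutlineForGlyph and EdgeTable APIs of this JUCE era:

[code]// Sketch only: rasterising a glyph with an EdgeTable needs the glyph's outline Path.
Path glyphPath;

// On iOS 2.0 the OS gave no way to obtain this outline, so this is the step that fails.
if (typeface->getOutlineForGlyph (glyphNumber, glyphPath))
{
    EdgeTable table (glyphPath.getBounds().getSmallestIntegerContainer(),
                     glyphPath,
                     AffineTransform::identity);

    // ...the software renderer can now fill the table scanline by scanline.
}
[/code]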

[quote]It uses CoreGraphics.

(You can switch to a software renderer, but it won’t be able to draw any text because iOS doesn’t give us access to the glyph shapes)[/quote]

Ok, and how do I switch to software rendering? I tried setting USE_CORE_GRAPHICS to 0, which works on Mac. But it doesn’t seem to make any difference on the iPhone.

You’d have to use ComponentPeer::setCurrentRenderingEngine
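A hedged sketch of how that call looks (on platforms where it’s implemented; engine index 0 is assumed to be the software renderer, so check getAvailableRenderingEngines() on your build):

[code]ComponentPeer* peer = myComponent.getPeer();    // only valid once the component is on screen

if (peer != 0)
{
    // List the engines this peer offers, then pick one by index.
    StringArray engines (peer->getAvailableRenderingEngines());
    peer->setCurrentRenderingEngine (0);        // assumption: index 0 == software renderer
}
[/code]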

This method is not implemented on iOS - it does nothing! Could you please implement it?

Isn’t it? Ah… Sorry about that, but I guess that since it’s basically useless without fonts, there’s no point in me implementing it.

It’s really important for me to be able to activate software rendering, and I don’t need the font support.
Could you point out how I can do this myself?

So far I have this code, but it crashes (see remark in code):

[code]void UIViewComponentPeer::drawRect (CGRect r)
{
    if (r.size.width < 1.0f || r.size.height < 1.0f)
        return;

    CGContextRef cg = UIGraphicsGetCurrentContext();

    if (! component->isOpaque())
        CGContextClearRect (cg, CGContextGetClipBoundingBox (cg));

    CGContextConcatCTM (cg, CGAffineTransformMake (1, 0, 0, -1, 0, view.bounds.size.height));

    Image temp (getComponent()->isOpaque() ? Image::RGB : Image::ARGB,
                (int) (r.size.width + 0.5f),
                (int) (r.size.height + 0.5f),
                ! getComponent()->isOpaque());

    const int xOffset = -roundToInt (r.origin.x);
    const int yOffset = -roundToInt ([view frame].size.height - (r.origin.y + r.size.height));

    const CGRect* rects = 0;
    NSInteger numRects = 0;
    [view getRectsBeingDrawn: &rects count: &numRects]; // <----- ERROR: JUCEView may not respond to getRectsBeingDrawn:count:
                                                        // (this is an AppKit NSView method; UIView has no equivalent, hence the failure)

    const Rectangle<int> clipBounds (temp.getBounds());

    RectangleList clip;

    for (int i = 0; i < numRects; ++i)
    {
        clip.addWithoutMerging (clipBounds.getIntersection (Rectangle<int> (roundToInt (rects[i].origin.x) + xOffset,
                                                                            roundToInt ([view frame].size.height - (rects[i].origin.y + rects[i].size.height)) + yOffset,
                                                                            roundToInt (rects[i].size.width),
                                                                            roundToInt (rects[i].size.height))));
    }

    if (! clip.isEmpty())
    {
        LowLevelGraphicsSoftwareRenderer context (temp, xOffset, yOffset, clip);

        insideDrawRect = true;
        handlePaint (context);
        insideDrawRect = false;

        CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
        CGImageRef image = CoreGraphicsImage::createImage (temp, false, colourSpace);
        CGColorSpaceRelease (colourSpace);
        CGContextDrawImage (cg, CGRectMake (r.origin.x, r.origin.y, temp.getWidth(), temp.getHeight()), image);
        CGImageRelease (image);
    }
}[/code]

Never mind, I’m just drawing to an offscreen Image now that I blit to the screen afterwards. But it seems that the colours are not right after using drawImageAt() (red and blue swapped?).
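For anyone trying the same workaround, here’s a minimal sketch of that offscreen approach, assuming JUCE’s Image/Graphics API (drawEnvelope is a hypothetical stand-in for whatever actually paints the curve):

[code]// A minimal sketch of caching a software-rendered image and blitting it in paint().
class EnvelopeComponent  : public Component
{
public:
    void rebuildCache()
    {
        // A Graphics attached to an Image always uses the software renderer.
        cache = Image (Image::ARGB, getWidth(), getHeight(), true);
        Graphics ig (cache);
        drawEnvelope (ig);
    }

    void paint (Graphics& g)
    {
        g.drawImageAt (cache, 0, 0);    // just blit the cached image on each repaint
    }

private:
    Image cache;

    void drawEnvelope (Graphics& g)    // hypothetical: whatever paints the envelope
    {
        g.fillAll (Colours::black);    // placeholder
    }
};
[/code]

If red and blue come out swapped, that smells like a pixel-format mismatch (e.g. ARGB vs. BGRA) somewhere between the software-rendered image and what CoreGraphics expects.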

This is EXACTLY why I prefer to embed the font directly into my app and use FreeType to get at it - it works regardless of platform.