Retina, tablets and DPIs

My boss is pushing me to make my app run on Retina Macbooks, but I don’t know how to do it best.
First of all, I do not own a Retina Macbook, but I suppose that right now, the screen will simply show 4 physical pixels (2x2) for each logical pixel, with the OS upscaling the original app’s view.
Now, my question is: how will it be possible to write a JUCE app without caring about DPIs, etc.? How will this be implemented, and when? The future is already here, with high-resolution notebooks and tablets.

That’s all been working for weeks already - you don’t need to do anything at all to support it, it should all just work. There are threads about this on the forum if you do a search.

Ok, I just bought a new Macbook with Retina display and will check it out ASAP.

I’ve just installed everything and tried the JUCE demo. When I’m using the CoreGraphics renderer, JUCE seems to use the Retina resolution, but the software renderer (which, by the way, is exactly 10x faster than the CoreGraphics renderer) doesn’t. The graphics are very blurry and ugly in that case. I assume this is easily fixable, but you’ll have to tell me what to check… The OpenGL renderer isn’t using the display’s Retina resolution either; you can definitely see pixels (well, that’s probably 4 Retina pixels for 1 JUCE pixel).

The Graphics class now has a method to return the scale factor of the target display, so the software renderer would need to use that to work out the size of the image to use.
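A minimal sketch of that idea (illustrative names only, not JUCE’s actual API): the software renderer would allocate its backing image in physical pixels, scaled up by the display’s factor, and draw through a scaling transform so components keep painting in logical coordinates:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch -- JUCE's real renderer differs. The point is that
// the backbuffer must be sized in *physical* pixels, while component code
// continues to work in logical coordinates via a scale transform.
struct BackBuffer
{
    int physicalWidth, physicalHeight;   // pixels actually rendered
    float scale;                         // e.g. 2.0f on a Retina display
};

BackBuffer makeBackBuffer (int logicalWidth, int logicalHeight, float displayScale)
{
    BackBuffer b;
    b.scale = displayScale;
    b.physicalWidth  = (int) std::ceil (logicalWidth  * displayScale);
    b.physicalHeight = (int) std::ceil (logicalHeight * displayScale);
    return b;
}
```

So a 400x300 component on a 2x display would render into an 800x600 image, with a 2x scaling transform applied before any painting happens.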

It might be 10x when it’s only rendering a quarter as many pixels, but using software to render a whole high-DPI display is really going to strain the CPU and memory, not to mention that it’ll need to push 4x as much data across the video bus.

Sure, but even if it’s 4x slower, it will still be faster than the CoreGraphics renderer, for whatever reason that is.

Anyway, let me know when you’ve got something I can test; I’ll gladly help you fix this problem. As mentioned, it also affects OpenGL rendering.

After a few failed attempts, I finally managed to modify JUCE so that the software renderer works on Retina displays. It is still much faster than the CoreGraphics renderer, and works really well. If you want, I can send you the modified source code, but I suppose you don’t need it.

I’d have expected that it’d be pretty simple to change… If it turned out to be tricky, then sure, I’d be interested in seeing what you needed to do!

It wasn’t very tricky; I’m just applying scaling transforms. My problem was more getting through the JUCE source code, which is not always easy, as it is so huge.

BTW, instead of just reading out the main screen’s scale factor, I’d rather do it with this function, because if the main screen isn’t Retina but an attached screen is, you wouldn’t get Retina resolution on that second screen. This function also works when building with the 10.5 SDK.

[code]// Requires #import <objc/message.h> for objc_msgSend_fpret
float highestDensity()
{
    // backingScaleFactor only exists on 10.7+; the selector check keeps
    // this safe on older systems (and when built with the 10.5 SDK)
    if (! [[NSScreen mainScreen] respondsToSelector: @selector (backingScaleFactor)])
        return 1.0f;

    NSArray* screens = [NSScreen screens];
    CGFloat highestDensity_ = 1.0f;

    for (NSScreen* screen in screens)
    {
        CGFloat f = objc_msgSend_fpret (screen, @selector (backingScaleFactor));

        if (f > highestDensity_)
            highestDensity_ = f;
    }

    return highestDensity_;
}[/code]

No, it does work correctly, I’ve tested it. When you drag a window from a low-dpi screen to a high-dpi one, it redraws it and the scale factor changes.

Does it also work when the window is split between the two screens (obviously it will work if the left screen is high-DPI and the right one low-DPI, but the other way around would surprise me)?

It definitely worked OK when a window was split across the screens. Not sure what algorithm they use, but presumably it chooses the highest DPI of all the screens that the window overlaps.
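If that guess is right, the selection would amount to taking the maximum backing scale among the overlapped screens. A hypothetical sketch (the OS’s actual algorithm is undocumented; names here are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical: given the backing scale of every screen a window
// currently overlaps, render at the highest one so that no screen
// ends up showing a blurry, upscaled image.
float scaleForOverlappingScreens (const std::vector<float>& overlappedScreenScales)
{
    float best = 1.0f;   // fall back to 1x if the window overlaps nothing

    for (float s : overlappedScreenScales)
        best = std::max (best, s);

    return best;
}
```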

I’ve made some tests, and you’re right about the blitting speed. It seems that although the software renderer takes less time to draw in Component::paint(), the overall performance is better with CoreGraphics…

I think the main problem with my code is that when I’m drawing a view (for instance, the arrangement view) and it gets scrolled to the left or right, I’d better not redraw the entire thing, but only the portion that remains after copying the part that doesn’t need redrawing to the left or right.
Let’s say it needs to be scrolled 10 pixels to the left: I first copy the big part that needs no redraw 10 pixels to the left, then I paint the remaining right strip, 10 pixels wide.
Is something like that possible without using the Image class? I think if the graphics card does this in its own memory it can be very fast. Or maybe you have another idea.
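The bookkeeping for that trick is simple region arithmetic. A sketch of it (illustrative names only; the actual blit would be done by whatever backend is in use):

```cpp
#include <cassert>

// Illustrative only. When a view of width `viewWidth` scrolls left by
// `dx` pixels, the region [dx, viewWidth) can be blitted to
// [0, viewWidth - dx), and only the rightmost strip of width dx needs
// a real repaint.
struct ScrollPlan
{
    int copySrcX, copyDstX, copyWidth;   // region moved by the blit
    int repaintX, repaintWidth;          // strip that must be redrawn
};

ScrollPlan planLeftScroll (int viewWidth, int dx)
{
    ScrollPlan p;
    p.copySrcX     = dx;
    p.copyDstX     = 0;
    p.copyWidth    = viewWidth - dx;
    p.repaintX     = viewWidth - dx;
    p.repaintWidth = dx;
    return p;
}
```

For a 100-pixel-wide view scrolled left by 10, this copies the 90-pixel region starting at x=10 over to x=0 and repaints only the 10-pixel strip at x=90.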

On Windows, by the way, there’s a rather obscure function to do exactly that: ScrollWindow/ScrollWindowEx. It’s fast as hell, I suppose because it does all the work in GPU memory rather than on the CPU.

Yeah, you could do it with an image, but you’d need to make sure that the data is stored on the GPU. I think that a CoreImage-based Image will do that, but the Cocoa stuff is rather vague about exactly how it manages its data.

Does that even still exist? I just associate that function with terrible rendering errors in old Win3.1 software, back in ye olden days.

FLStudio (which is probably the app with the smoothest scrolling/painting I’ve seen on Windows) uses ScrollWindow(Ex). I don’t think there’s any problem with it; otherwise the millions of FLStudio users would have complained big time.

I’m going to try to do something similar to ScrollWindowEx on the Mac. It would be a welcome addition to JUCE: with the new Retina MBP’s resolution of 2880x1800, rendering becomes quite slow without such tricks, I guess. Right now, I’m encountering massive problems, whether using the CoreGraphics renderer or not.

I managed to do it via making JUCE paint on a backbuffer (CGLayer), which then gets copied into the current context. This didn’t seem to hurt the performance in any way.
Scrolling manipulations on this CGLayer are easy to do (you can define a clipping area, then draw the CGLayer on itself but with some deltaX to scroll), and at virtually no cost.