Gestures on the iPad


#1

I was hoping that juce would gain support for gestures before I really needed them, but the time has come: I need to add support for zooming, using two-finger pinching and spreading, to my juce-based iPad project.

Before I dive in, I wanted to ask the collected wisdom here about the best way to go about doing this. Apple outlines the technique here: http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/EventOverview/HandlingTouchEvents/HandlingTouchEvents.html#//apple_ref/doc/uid/10000060i-CH13-SW10

That involves adding the appropriate NSResponder methods for the events I’m interested in handling. Is there already an NSResponder subclass in the Mac juce framework, or do I need to add that to one of the juce Mac classes? juce_ios_UIViewComponentPeer perhaps?

What would be nice is to have that class respond to the events and then forward them through a juce-based callback to one of my classes to handle. Does that sound like the right way to approach this?
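To illustrate the kind of forwarding I have in mind (purely hypothetical - none of these names exist in juce), the peer would detect the pinch natively and then call back through something like this:

```
// Purely illustrative sketch: a callback interface that the peer could invoke
// after detecting a pinch natively. None of these names exist in juce; they
// just show the shape of the forwarding I'm describing.
struct PinchGestureListener
{
    virtual ~PinchGestureListener() {}

    // scaleFactor > 1.0 means the fingers are spreading, < 1.0 means pinching.
    virtual void pinchGestureChanged (float scaleFactor) = 0;
};
```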


#2

I don’t really know too much about iOS but I would think the “best” way is to have Component-derived classes for native iOS controls. Arguments for/against this approach would be welcome.


#3

I would think deriving larger views, like windows, would be the level of granularity you’d want for gestures. Actions like four-finger swipes, pinches, or rotations are hard to do over a button, but easier to do over a window.

Most of the iOS controls respond to touches as if touches were mouse clicks, so I think most of that would come for free.


#4

I use the long press and tap gesture recognizers on buttons all the time in my iOS projects, so I don’t think you’d want to artificially limit yourself to supporting “large” windows.


#5

What I mean is that, in order to produce an application for iOS that feels as native as possible, it would be necessary to create a whole set of Component objects that utilize the operating system’s built-in support for gestures, to REPLACE the existing juce objects (Button, Viewport, etc…).

This means using native controls - for example, whatever the iOS widget is for a scrolling list of photos. I am making an assumption here; I don’t really know any native iOS APIs at all, but I would imagine these things must exist. To implement the native control in Juce, we would want a Component-derived class with a native implementation (probably Objective-C).

Ideally someone will write this and put it out as an MIT-licensed module. If Jules could help by abstracting some of the class interfaces and adding support to the LookAndFeel API so that these replacement controls can be used instead, that would be wonderful.


#6

I haven’t checked if it is all working, but it appears to me that at least the modules branch already handles what I need for gestures. To see what I mean, search the source for “touchesBegan”.

There are really just four methods (began, moved, ended, cancelled) in iOS for the underlying multi-touch system. It appears to me that Jules is already handling all touches and dispatching them as multiple mouse sources. iOS really didn’t offer any gesture-specific support until very recently: yes, a few controls understand gestures, but we were on our own for processing gestures in our apps (see a multitude of iOS programming books for samples). If the various mouse ‘sources’ (additional touches) are being dispatched, the same basic gesture algorithms should be applicable. (In the two iPhone apps I’ve done in Juce so far, I’ve only used single-finger gestures - middle or otherwise :wink: )

There are some iOS controls people are used to seeing, like the Image picker, date-picker/roller wheel, etc., that would be nice to wrap (I haven’t looked at it, but I saw a subject line that seemed to suggest that someone has already done a wrapper for the Image Picker). But it really isn’t all that control-rich a platform.

More importantly, I don’t see all that wrapping buying much in the way of native gesture support. It wasn’t until iOS 3.2 that Apple even provided a gesture interpreter (UIGestureRecognizer). We always soft-bind to new features so that we run all the way back to iPhone 1 hardware (iOS 3.1 max). Something like gestures is so fundamental that we’d never use an OS-version-specific class for it.

My vote would be to check out getNumDraggingMouseSources() first and just fix any bugs. Once it works, do a helper gesture interpreter on top of multiple mouse sources. That would be portable to Android and to multi-touch screens on Windows and Mac.
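To sketch what such a helper might look like (untested, and the PinchDetector/onPinch names are invented - only the Desktop and MouseInputSource calls are real juce API), something like this could be polled from a Timer or from a component’s mouseDrag:

```
// Rough sketch of a gesture helper built on juce's multi-touch mouse sources.
// PinchDetector and onPinch are made-up names for illustration.
#include <functional>
#include <JuceHeader.h>   // assumption: Introjucer/Projucer-generated header

class PinchDetector
{
public:
    std::function<void (float scaleChange)> onPinch;   // >1 = spreading, <1 = pinching

    // Call this regularly (e.g. from a Timer callback or from mouseDrag).
    void update()
    {
        auto& desktop = juce::Desktop::getInstance();

        if (desktop.getNumDraggingMouseSources() >= 2)
        {
            auto* first  = desktop.getDraggingMouseSource (0);
            auto* second = desktop.getDraggingMouseSource (1);

            const float distance = (float) first->getScreenPosition()
                                           .getDistanceFrom (second->getScreenPosition());

            if (lastDistance > 0.0f && distance > 0.0f && onPinch != nullptr)
                onPinch (distance / lastDistance);

            lastDistance = distance;
        }
        else
        {
            lastDistance = 0.0f;   // fewer than two touches: start over next time
        }
    }

private:
    float lastDistance = 0.0f;
};
```

Because it only compares the ratio of successive finger distances, it doesn’t care which finger moved, and the same code should run anywhere juce reports multiple mouse sources.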


#7

Addendum: I just did a super quick test and it seems like multiple mouse sources are getting dispatched to mouseDown, mouseDrag, etc. in a component. At least when I get the source and query its index, it isn’t always 0. I hope that helps.
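For anyone who wants to repeat that quick test, a minimal component along these lines (a sketch - the class name is made up) prints the source index of each concurrent touch:

```
// Minimal sketch for observing per-touch mouse sources in juce.
#include <JuceHeader.h>   // assumption: Introjucer/Projucer-generated header

struct TouchLogger  : public juce::Component
{
    void mouseDrag (const juce::MouseEvent& e) override
    {
        // Each concurrent finger should report a different source index.
        DBG ("source index: " << e.source.getIndex()
              << "  position: " << e.getPosition().toString());
    }
};
```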


#8

mobile centric iOS juce module(s) project.


#9

You still need to detect which kind of gesture the user performed. iOS SDK has classes that figure that out for you, for example:

Patrick


#10

[quote=“P4tr3ck”]You still need to detect which kind of gesture the user performed. iOS SDK has classes that figure that out for you, for example:

Patrick[/quote]

But those classes didn’t initially exist, first-generation devices only run up to 3.1.x, and a lot of 2G devices have never been updated to 3.2 because of the (perceived?) big performance hit. So if you want to work on all iOS devices, like we generally do, you have to interpret the gestures yourself. It’s actually a bit easier when you get all the touches as a set than as independent mouse moves. If I were doing anything more complicated than swiping, pinching, and dragging, I might consider adding a touches method that dispatches all moves before generating individual mouse events, but Jules’ way has advantages too.

For example, I have a component that is a graphical slider. If I pop several of them up on an iPad, I can concurrently move more than one without any additional code at all. Each one just sees a different ‘mouse’.


#11

At this point, I don’t think many iOS developers are targeting first- or even second-generation devices. I certainly wouldn’t be concerned with their limitations, but that’s just my opinion.

Moving multiple sliders is the only case where I see the concept of “multiple mice” being useful. A much more common scenario might be that the user needs to zoom in on a particular part of the UI, and wants to do that with a pinch gesture. That’s going to be much more difficult to implement if all touches are considered mouse movements, rather than gestures.
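To make that scenario concrete, here’s a rough, untested sketch of pinch-zooming built on the mouse-source polling described in #6; the ZoomableView name and the zoom limits are invented:

```
// Sketch of pinch-to-zoom using juce's multiple mouse sources. ZoomableView
// and its limits are hypothetical; the Desktop/Component calls are real API.
#include <JuceHeader.h>   // assumption: Introjucer/Projucer-generated header

class ZoomableView  : public juce::Component
{
public:
    void mouseDrag (const juce::MouseEvent&) override
    {
        auto& desktop = juce::Desktop::getInstance();

        if (desktop.getNumDraggingMouseSources() >= 2)
        {
            const float distance = (float) desktop.getDraggingMouseSource (0)->getScreenPosition()
                                           .getDistanceFrom (desktop.getDraggingMouseSource (1)->getScreenPosition());

            if (lastDistance > 0.0f)
            {
                // Accumulate the zoom and rescale the whole component
                // (about its top-left corner, for simplicity).
                zoom = juce::jlimit (0.25f, 4.0f, zoom * (distance / lastDistance));
                setTransform (juce::AffineTransform::scale (zoom));
            }

            lastDistance = distance;
        }
    }

    void mouseUp (const juce::MouseEvent&) override    { lastDistance = 0.0f; }

private:
    float zoom = 1.0f, lastDistance = 0.0f;
};
```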


#12

Well, it’s about 20-40 million devices still in operation, so it still matters to us! If I am going to argue to the powers that be that we can’t reach those users, it has to be for something beyond the availability of a helper class! A lot of developers are clearly in the same situation, at least based on the howls on the Developer Forums every time Apple makes backwards support a little harder. But it depends on who you are trying to reach and why. Aside from slower load times and the annoyance of no pseudo multi-tasking, my most computationally intensive iOS applications still run on my iPhone 1 test unit just fine. So, with the exception of some specific features that some apps really do require, supporting or not supporting those devices really comes down to developer convenience.

Well, there are a heck of a lot of MIDI control surfaces out there, so I suppose that any time you mimic similar functionality in an app, splitting multi-touch up this way would be handy. It’s actually very hard to do in straight iOS Cocoa; you typically end up building something like what Jules already has.

But think about your example. You can ask how many sources are dragging and get the details of each source in Juce now. I was thinking about performance, not difficulty. If you are ready to blow off older devices, performance is probably a non-issue altogether. After you collect the coordinates, you need to determine a gesture. The standard ones are documented again and again: iPhone programming books from, say, 2009/2010 or earlier list processing them yourself as the only way, and modern iPhone programming books talk about the pros and cons of doing it yourself.
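And the detection half really is simple. As a sketch (arbitrary thresholds, made-up class name), a one-finger horizontal swipe can be recognised from the ordinary mouse events juce already delivers:

```
// Sketch: recognising a horizontal swipe from juce's standard mouse events.
#include <cstdlib>
#include <JuceHeader.h>   // assumption: Introjucer/Projucer-generated header

struct SwipeDetectingComponent  : public juce::Component
{
    void mouseUp (const juce::MouseEvent& e) override
    {
        const int dx = e.getDistanceFromDragStartX();   // pixels moved horizontally
        const int ms = e.getLengthOfMousePress();       // how long the touch was down

        if (std::abs (dx) > 100 && ms < 300)            // far enough, fast enough
            DBG (dx > 0 ? "swipe right" : "swipe left");
    }
};
```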

What gets missed again and again is that this is the easiest part. That’s why the developers who complained to Apple about gesture support prior to 3.2 complained about gesture support after 3.2, and are still complaining about gesture support even with the iOS 5 additions.

To understand the bigger problem, think of something really simple, like the launch screen: one finger, left or right, fast or slow. It’s pretty darn easy to think of a way to detect that programmatically. Now look at how the UI responds: the elements simulate weight, inertia, etc. This is what the howling developers want to achieve with ease, but just detecting the gesture alone doesn’t do it. That is, the problem is not ‘are two fingers touching and moving together’ - Juce will tell you that now. The problem is generally in creating the direct-manipulation experience. Think how exasperating it gets in things like Google Maps when the UI and touch start to become decoupled while pinching and stretching. Prior to iOS 4 or so, it was often downright user-hostile.

Remember, a gesture isn’t a command or keypress, but an interactive event. The user is often adjusting something in response to visual feedback. That’s why people find the gesture helpers so clunky and of limited use in native iOS development. It’s also why Apple dropped gesture support from the earliest beta SDKs before the first one was released. They knew that the best gesture experiences, like the rolling picker control, were intertwined and complicated, so they stuck with, well, a coordinate mouse model, not a gesture model.