Win7 MultiTouch

Jules, do you think you could test this in juce, and perhaps add some of those capabilities to juce? I don’t know if OSX has a similar API; I assume Linux does not.


My plan was to redesign the mouse events to handle multi-touch on iPhone (which I can actually test), and then support other OSes like Win7 (which I have no hardware to test with!)

I got a Wacom Bamboo Touch; it has multitouch features and it’s cheap.

My PowerBook has gestures on its touchpad. I don’t know if the MacBooks have that too, but if they do, it would be easy to install Windows on one and use that.

just some thoughts.

I second this request, it would be awesome!

Later on this year I’ll probably buy a multitouch screen (as soon as they become more affordable), so I’ll probably be able to help you regarding testing and/or coding :slight_smile:

I’m going to build one, just waiting for power supply (for lasers)…


I couldn’t wait for multitouch support for Windows 7 to be added to Juce, so I’ve been hacking on Juce myself to add it, but I’ve run into a stumbling block. I wanted to ask how the callback peerWindowProc() in juce_win32_Windowing.cpp and the method juce_dispatchNextMessageOnSystemQueue() in juce_win32_Messaging.cpp work together.

I had expected the touch messages (WM_TOUCH) to come in through peerWindowProc(), since that’s where most of the Windows messages arrive and since that’s where they’re handled in the Windows 7 SDK examples. But when I watched in a debugger, those messages never arrive in peerWindowProc(), although you do see them in dispatchNextMessageOnSystemQueue(). Unfortunately, I’ve been having problems actually processing these messages inside dispatchNextMessageOnSystemQueue(): GetTouchInputInfo, the function that is supposed to return the touch data for a given event handle, keeps returning an error.

So now I’m just trying to understand these methods better. My guess at the moment is that events get from dispatchNextMessageOnSystemQueue() to peerWindowProc() by being handed over to TranslateMessage (&m) and DispatchMessage (&m), which isn’t happening for these messages at the moment. Is this correct?
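For anyone puzzling over the same thing: in Win32, a message only reaches a window procedure if the message pump hands it to DispatchMessage, which looks up the proc registered for the target window. Here is a toy model of that routing (hypothetical types and names, not the real API) showing why a message that gets filtered out in the pump never shows up in the window proc:

```cpp
#include <cassert>
#include <functional>
#include <map>

// Toy stand-ins for HWND / MSG / the registered window-proc table.
struct Msg { int hwnd; int id; };

using WndProc = std::function<int (const Msg&)>;
static std::map<int, WndProc> windowProcs;   // stands in for window registration

// Models DispatchMessage: find the proc registered for the target window, call it.
static int dispatchMessage (const Msg& m)
{
    auto it = windowProcs.find (m.hwnd);
    return it != windowProcs.end() ? it->second (m) : 0;
}

// Models one iteration of the message pump: if the pump swallows the message
// before TranslateMessage/DispatchMessage, the window proc never sees it.
static int pumpOne (const Msg& m, bool filteredOut)
{
    if (filteredOut)
        return 0;   // consumed in the pump; proc is never called

    return dispatchMessage (m);
}
```

This is only a model of the control flow, but it matches the behaviour described above: if the pump in juce_dispatchNextMessageOnSystemQueue() doesn’t pass a message through to DispatchMessage, peerWindowProc() will never receive it.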

By the way, I just bought a Dell Studio multi-touch laptop that I’ve been pretty happy with so far. The N-Trig overlay is able to resolve 4-5 fingers simultaneously, and the whole system cost less than a grand (in dollars).


Alright, I figured out how to get those multi-touch events on Win7. I modified isEventBlockedByModalComps in juce_win32_Messaging to return true for WM_TOUCH messages. I set the WINVER define to 0x0601 so that the Windows 7 API would be available. And in juce_win32_Windowing I registered the window for touch events with a call to RegisterTouchWindow(hwnd, 0) and I added this code block to peerWindowProc:

            case WM_TOUCH:
            {
                const unsigned int numInputs = (unsigned int) LOWORD (wParam);
                TOUCHINPUT* ti = new TOUCHINPUT [numInputs];

                if (GetTouchInputInfo ((HTOUCHINPUT) lParam, numInputs, ti, sizeof (TOUCHINPUT)))
                {
                    // Handle each contact point
                    for (unsigned int i = 0; i < numInputs; ++i)
                    {
                        // Convert from hundredths of a pixel to client-area pixels
                        POINT ptInput;
                        ptInput.x = TOUCH_COORD_TO_PIXEL (ti[i].x);
                        ptInput.y = TOUCH_COORD_TO_PIXEL (ti[i].y);
                        ScreenToClient (hwnd, &ptInput);

                        RECT windowRect;
                        GetWindowRect (hwnd, &windowRect);
                        const int width  = windowRect.right - windowRect.left;
                        const int height = windowRect.bottom - windowRect.top;

                        // Normalised 0..1 position within the window
                        const float xPosition = (float) ptInput.x / (float) width;
                        const float yPosition = (float) ptInput.y / (float) height;

                        if (ti[i].dwFlags & TOUCHEVENTF_UP)
                        {
                            // ...handle this contact point being lifted
                        }
                    }

                    CloseTouchInputHandle ((HTOUCHINPUT) lParam);
                }

                delete[] ti;
                break;
            }
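For reference, WM_TOUCH coordinates arrive in hundredths of a pixel, which is why the code runs them through TOUCH_COORD_TO_PIXEL before converting to client space. Here is a minimal sketch of that conversion plus the 0..1 normalisation; the macro is redefined locally so the snippet builds without the Win7 SDK:

```cpp
#include <cassert>

// WM_TOUCH positions are in hundredths of a pixel; this mirrors the Windows
// SDK's TOUCH_COORD_TO_PIXEL macro, redefined here so the snippet is
// self-contained.
#define TOUCH_COORD_TO_PIXEL(l) ((l) / 100)

// Normalise a client-space pixel position into the 0..1 range, as the
// block above computes for each contact point.
static float normalisePosition (int pixelPos, int windowSize)
{
    return windowSize > 0 ? (float) pixelPos / (float) windowSize : 0.0f;
}
```

So a raw touch x of 64000 lands at pixel 640, and in a 1280-pixel-wide window that normalises to 0.5.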

I still have to figure out a way to get these events out of Juce and into my application, and I didn’t want to make any deep changes to JUCE. So, I just sent them out using TUIO. I can’t wait for Jules to actually properly integrate this Win7 functionality with JUCE though. That’s going to be awesome.

Awesome! Keep working on that!

Cool - thanks for that, keep posting your findings and I’ll check this thread when I get down to actually adding it to the codebase.

And if anyone’s got any thoughts on the best way to deliver multi-touch events to components without breaking existing code, I’d love to hear!

Anyway, TUIO could be a good solution, because it’s cross-platform.

Having looked at the TUIO docs, I’m completely failing to see the point of it… Ok, they’ve thought out some nice classes that represent touch events, but how would I add that to juce?

It seems that if I wrote implementations of all their classes, it’d be an absolute pig to actually use them. Every component you wrote would have to inherit from TuioClient, and you’d need to register each one with some kind of a TuioServer object somewhere so that it can get events, whose co-ordinate system wouldn’t bear any resemblance to the component hierarchy that you’re working in. I get the impression that it’s been designed for apps where the whole thing is one big full-screen canvas, probably written with opengl or something that doesn’t already provide mouse input, and not for normal windowed apps.

Or maybe I’ve just misunderstood… If so, please set me straight about it!


I think you’ve actually understood TUIO fairly well. It was developed for touch screens and tangible interfaces, and it’s really a protocol for communication between a computer vision engine and a user interface running in two different processes. Now that touch screens are becoming widespread and multi-touch support is being integrated directly into the operating system, I think TUIO is going to be used less for multi-touch applications and become mainly a protocol for tangible interfaces, where you’re still going to need a separate vision engine. I would think it would be pretty cool to add some OSC and TUIO processing classes to JUCE so that you could potentially have a vision engine as one input source and Win7 as another, with their events translated and distributed to the Juce components in a manner similar to mouse events.

Your earlier posting about possibly passing an index number with the mouse event sounded like a good idea, but it wouldn’t be backwards compatible. Another consideration is that it would also be useful for an application to be able to distinguish between mouse events and touch screen events, so maybe it would be preferable to have these events come in through different handlers. If you were using some sort of interface creation app, for instance, it might be preferable to use the mouse to create the interface and the touch screen to test / use the interface while you’re creating it.

Another thing to think about is that on Win7, for example, it seems you have to choose between getting either gesture events or multitouch events; you can’t have both. If you have it in gesture mode, touches on the screen come in through the mouse handler (not through WM_TOUCH messages), the screen behaves like a single-touch screen, and gestures come in through a separate event type. If you have it in multi-touch mode, the touch events come in through WM_TOUCH messages, you don’t get any gesture events, and mouse events are just used for the mouse (I think).

Hm… I don’t know how this stuff is handled on OS X, but off the top of my head I’d think it might be best to keep the mouse handlers as they are and add new handlers for lists of touch events and gesture events, as well as a way to enable one or the other…
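To make that either/or behaviour concrete, here is a toy router (hypothetical names, not the actual Windows API) sketching how contacts might be delivered depending on which mode the window is in:

```cpp
#include <cassert>
#include <string>

// Toy model of the mutually exclusive modes described above: a window gets
// either raw touch messages or synthesised gestures/mouse events, never both.
enum class TouchMode { gestures, rawTouch };

static std::string routeContact (TouchMode mode, bool isMultiFinger)
{
    if (mode == TouchMode::rawTouch)
        return "WM_TOUCH";   // every contact arrives as a touch message

    // In gesture mode a single finger just drives the mouse,
    // while multi-finger input is synthesised into a gesture event.
    return isMultiFinger ? "gesture" : "mouse";
}
```

This is only an illustration of the routing, but it captures why an app has to pick one mode up front: whichever mode is active, the other mode’s events simply never arrive.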


Thanks Greg

Ideally I’d like to be able to feed it all through the mouse events, as it feels messy to have two sets of input events that effectively do the same thing, but as you say, there could be problems with older components that can’t cope with the more complex stream of ups/down/drags that you’d get.

I need to think about it all a bit more deeply, but one idea might be to let individual components register themselves as being multi-touch capable or not. Then, for newer ones, it’d allow multi-touch mouse events, but for older ones, it’d only let one finger at a time send events to it. An advantage of doing it this way would be that if you had e.g. several sliders that are not multi-touch enabled, it’d still be possible to drag them all at once with different fingers.
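That per-component gating could be sketched roughly like this (a hypothetical illustration of the idea, not actual Juce code): a component that hasn’t opted into multi-touch only receives events from whichever finger grabbed it first, until that finger lifts.

```cpp
#include <cassert>

// Hypothetical gate in front of a component's mouse handlers: non-multi-touch
// components are owned by one touch index at a time; other fingers' events
// are dropped until the owning finger goes up.
struct TouchGate
{
    bool multiTouchEnabled = false;
    int activeTouch = -1;   // -1 means no finger currently owns the component

    bool shouldDeliver (int touchIndex, bool isUp)
    {
        if (multiTouchEnabled)
            return true;    // multi-touch components get everything

        if (activeTouch == -1)
            activeTouch = touchIndex;   // first finger down grabs the component

        const bool deliver = (touchIndex == activeTouch);

        if (deliver && isUp)
            activeTouch = -1;   // finger lifted; component is free again

        return deliver;
    }
};
```

With one gate per component, several single-touch sliders can still each be dragged by a different finger simultaneously, since each gate only locks out extra fingers on its own component.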

Was just wondering what the current state of this is? Have you taken a look at how openFrameworks is attempting to do this or other frameworks like mt4j (multi touch for java)? I believe mt4j has worked out a hardware abstraction layer to deal with input from all types of multi touch events (whether that be Windows 7, TUIO trackers…etc) and it may be worth a look at. Either way, very exciting and will be a great addition to JUCE.


I’ve already revamped the mouse input stuff to deal with multi-touch. It’s not quite finished, but have a look at the MouseInputSource class for more info. Basically in a normal system, there’s a single MouseInputSource, but in multi-touch there can be many, and each one controls an independent stream of mouse up/down/drag events.

Needs a couple of final bits of code, but is already running on the iphone.
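For anyone curious, the idea can be modelled roughly like this (illustrative names only, not the real MouseInputSource API): each source has an index and its own button state, and sources are created on demand as new fingers appear, so each one drives an independent stream of up/down/drag events.

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for the one-stream-per-finger idea: index 0 might be
// the mouse or first finger, and further indices are added as needed.
struct InputSource
{
    int index;
    bool isDown = false;

    explicit InputSource (int i) : index (i) {}
};

struct InputSourceList
{
    std::vector<InputSource> sources;

    // Find the source for this index, creating it the first time it's seen.
    InputSource& getOrCreate (int index)
    {
        for (auto& s : sources)
            if (s.index == index)
                return s;

        sources.push_back (InputSource (index));
        return sources.back();
    }
};
```

Because each source keeps its own state, pressing finger 1 down and lifting finger 0 don’t interfere with each other, which is exactly what lets two sliders be dragged at once.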

I just got my JooJoo today and was wondering which way to go to get some multitouch running with JUCE. Since I can get Linux/Windows/OSX on that device, my question is which is the best one to try the multitouch code on. Is Linux supported at all? Last time you told us that iPhone is working, so I was wondering about the other platforms.

Huh? Really? So do I have to make any modifications to use that? For instance, the juce demo on iOS should let me adjust multiple widgets at once? I can’t believe I didn’t try that!

Holy cow! I just tried it on the very simple iPad app I just shipped (ad hoc delivery, not in the store), and sure enough, I can tweak the two faders at once! That is completely brilliant.


I’m kinda hoping I’ll be able to use two sliders at the same time in my ctrlr interface; it would make my app much more fun. But I don’t know how standard multitouch is on Windows/Linux. From what I’ve seen there are some interfaces in .NET on Windows, and Linux is a complete mystery.

I don’t know anything about how the win32 support works (and don’t have anything I can try it with anyway!). All the Juce groundwork is ready though, so it should be fairly straightforward to wire up the native code in the same way the iPhone version works.