I think we’re making progress toward the goal of bending computers to conform to human factors rather than bending our human factors to conform to the way computers accept input.

Ultimately, our brains will be plugged directly into “the grid,” but until then, users still have to learn how to type, control a mouse, and press buttons in certain sequences. The arrival of the Windows Vista™ operating system with Microsoft® Windows® Tablet and Touch Technology is another step toward really natural computing.

The Mobile PC team’s main vision for Windows Vista development was to enable people to be more productive, in more places, with devices that are more personal.

Touch complements other input modes and extends mobile scenarios.

If you’ve already had the pleasure of using a Tablet PC, you know that there’s a profound difference between the traditional keyboard and mouse input methods and a stylus. Get ready for the next step in the progression: Windows Touch Technology.

More Productive…

Tablet PCs and Ultra-Mobile PCs (UMPCs) are most useful when they improve existing usage scenarios. Corridor warriors, healthcare professionals, and other on-the-go knowledge workers are all examples of users whose existing scenarios are improved by the introduction of mobile PCs.

But what about categories where it’s difficult or inconvenient to use a stylus?

On a factory floor, workers may need to wear protective gloves while interacting with their PCs. In the field, workers may already have tools in their hands, and juggling a stylus along with those tools just isn’t practical.

Enter Windows Tablet and Touch Technology, the natural next step in the evolution of human-computer interaction. With support for both of the currently leading technologies behind touch input, Windows Vista is well-positioned to make users more productive, no matter where they happen to be.

In More Places…

Implementing touch input in your applications enables new scenarios and complements existing mobile scenarios. Imagine reading e-books on the subway. Users don’t want to fumble around trying to find their stylus in order to turn the pages. With touch, they can simply flick a finger to turn the pages.

Have you ever seen somebody actually using their laptop while they were in line at the movie theater? No, of course not. But with touch input on an Ultra-Mobile PC or Tablet PC, it becomes a bit more realistic to imagine buying your movie tickets online while you’re waiting for your check at dinner.

Windows Tablet and Touch Technology embodies the goals of Windows Vista by being usable, comfortable, and productive in any place at any time.

…Devices that Are More Personal

As device form factors become more like personal devices (e.g., Smartphones, Pocket PCs, and Ultra-Mobile PCs), the traditional methods of interacting with such devices become less convenient. Does anybody pull out a full-size keyboard when they use their Pocket PC or mobile phone? With the arrival of touch input, it’s easier to successfully tap out an instant message by pressing on-screen buttons predefined for “LOL” and “On my way” than with the current methods, which involve a lot of typing.

Touch input creates a more personal connection between users and their computing devices. When you think of how you interact with touch-enabled devices today, those that probably come to mind first are ATMs, information kiosks, and PDAs; they represent activities that are private and personalized to you.

Now that Windows Tablet and Touch Technology is available in Windows Vista, you can begin to imagine the personal relationship that will develop between users and their devices. Imagine book-sized devices aimed at that distinctly twenty-first-century version of a diary: the blog. Or maybe you remember drawing on your textbook covers when you were in school? You probably felt a powerful sense of “this belongs to me” that came from being in constant physical contact with them. Touch-enabled PCs instill a similar sense of ownership.

In time, touch-enabled PCs could become the modern equivalent of a wallet or pocketbook: both carry the scraps of data necessary to modern life, and everybody needs their own.

Hardware manufacturers have introduced a variety of new mobile PC form factors (Figure 1), such as the Ultra-Mobile PC (7” diagonal), small-form-factor Tablet PCs (8” diagonal, higher resolution), and others. These kinds of personal devices demand a better interaction model than keyboards and mice.

Figure 1: Today’s device spectrum.

Touch Pointer

Your fingertip has very different properties from a Tablet PC stylus. To start with, your finger is relatively large compared to the tip of a stylus. How do you know that you’re really pointing at what you think you’re pointing at? Your fingertip may obscure the very area you’re pointing at!

These observations mean that users need a way to see what they’re pointing at and confidence that they’re actually targeting it. The touch pointer, along with a new set of touch-friendly design guidelines, offers a better solution.

Touch enables new scenarios for mobile devices and opens new locations where they can be used.

The touch pointer is a small visual aid shaped like a two-button mouse (Figure 2). It’s composed of a left-click button, a right-click button, and a drag area. The system cursor stands ready a short diagonal distance away from the touch pointer, so that while dragging the touch pointer, you can see the cursor while you carefully target the small items on your desktop. By small items, I mean buttons, resize handles, minimize/maximize/close boxes, checkboxes, and so on.

Figure 2: Anatomy of the touch pointer.

Note that the cursor is not limited to one standard position away from the touch pointer. Dragging the touch pointer quickly toward a corner or side of the display will flip it around to another position, so that you’re never prevented from accessing even the deepest parts of the screen.

At its simplest, think of the touch pointer as a sidecar attached to the system cursor. No, it’s not a useless appendage to be made fun of in numerous bad movies. It is a visual aid to help users see what they’re targeting and have confidence that they’ve successfully targeted objects on-screen.

Listing 1 shows the C# pseudo-code snippet that allows you to enable or disable the touch pointer for your application.
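
Here’s a rough sketch of one way to approach the same toggle. It is not a reproduction of Listing 1: the “MicrosoftTabletPenServiceProperty” per-window property and the flag values below are assumptions drawn from the Tablet PC platform headers, so treat this as a sketch rather than a definitive recipe.

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

// Sketch only: the window property name and flag values are assumptions
// based on the Tablet PC platform headers (tpcshrd.h).
static class TouchPointerToggle
{
    const string TabletServiceProperty = "MicrosoftTabletPenServiceProperty";

    // Assumed flag values: keep the touch pointer from being forced on,
    // and allow it to be turned off, for this window.
    const int TABLET_DISABLE_TOUCHUIFORCEON = 0x00000100;
    const int TABLET_ENABLE_TOUCHUIFORCEOFF = 0x00000200;

    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool SetProp(IntPtr hWnd, string lpString, IntPtr hData);

    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr RemoveProp(IntPtr hWnd, string lpString);

    // Hide the touch pointer for the given form's window.
    public static void DisableTouchPointer(Form form)
    {
        SetProp(form.Handle, TabletServiceProperty,
            new IntPtr(TABLET_DISABLE_TOUCHUIFORCEON | TABLET_ENABLE_TOUCHUIFORCEOFF));
    }

    // Remove the property to restore the system default behavior.
    public static void EnableTouchPointer(Form form)
    {
        RemoveProp(form.Handle, TabletServiceProperty);
    }
}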

You can add a button to your Windows taskbar that enables you to turn off the touch pointer when you want to work directly with the cursor using touch. This button responds only to touch input; you can’t toggle it with your stylus, so there’s no need to worry about accidentally disabling your touch pointer.

Users can also show or hide this button by right-clicking the taskbar, clicking Properties, and then selecting Touch Pointer on the Toolbars tab of the Taskbar and Start Menu Properties dialog.

Navigation Gets Easier

Imagine you’re sitting on your sofa at home, Tablet PC in hand, yearning to read that new e-book you’ve just downloaded. After the book loads, you notice that the e-reader you’re using recognizes only the arrow keys for navigation. Bummer! Some Tablet PCs don’t even have keyboards or, if they do, you can’t get to them without flipping up the screen. What a pain!

Windows Vista contains new API calls to distinguish touch from other input modes.

Enter flicks. Flicks are a specialized kind of gesture that works only with stylus or touch input. All you have to do to use them is perform the action: you flick. Flicking your pen to the left generates a left or back action. Flicking your pen to the right generates a right or forward action. Better yet, flicks work the same whether you generate them with the pen or with your finger. Reading long Word documents has never been easier, because flicking up or down moves the page you’re viewing up or down, respectively. I love using flicks for reading long Web pages that extend beyond the bottom of the browser window.

You can configure flicks, too (Figure 3). You can use the four basic pre-assigned navigational flicks, or you can assign your own custom actions to any of the eight combined navigational and editing flicks (up, up-right, right, down-right, down, down-left, left, up-left). If you use cut-copy-paste a lot, you can enable the pre-assigned editing flicks. Virtually anything you can do with a keystroke, you can assign to a flick.

Figure 3: Control Panel settings for flicks.

Touch and the Windows Presentation Foundation

Windows Presentation Foundation (WPF) provides the foundation for building applications and high-fidelity experiences in Windows Vista, blending together application UI, documents, and media content. The functionality of WPF extends to support Tablet PCs, and by extension, touch input.

If you write WPF (C# or XAML) applications, all HID-class digitizer devices are treated equally, as if they were styluses. This means they get an established programming model, ink smoothing, and a host of other benefits. There is no need to modify any of your WPF apps to support touch; you get it for free!
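
As a quick sketch of what this looks like in practice (the class and handler names here are mine), the same WPF stylus events fire whether the contact comes from a pen or a fingertip, and an InkCanvas collects ink from either with no extra code:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

// Minimal sketch: WPF routes HID digitizer input through its stylus
// events, so one handler covers both pen and touch.
public class InkSketchWindow : Window
{
    public InkSketchWindow()
    {
        // InkCanvas collects ink from pen, finger, or mouse as-is.
        Content = new InkCanvas();
    }

    protected override void OnPreviewStylusDown(StylusDownEventArgs e)
    {
        // TabletDeviceType distinguishes an integrated touch digitizer
        // from a pen digitizer.
        Title = "Contact from a " + e.StylusDevice.TabletDevice.Type + " digitizer";
        base.OnPreviewStylusDown(e);
    }
}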

Design Considerations for Touch

I mentioned earlier, when introducing the touch pointer, that it was designed to help bridge the gap between the coarse targeting of a fingertip and the busy space most users call their desktop. As desktop resolutions get higher and higher, and desktop screens get bigger and bigger, the visual elements drawn on those screens get closer and closer together. How can you expect an average user to double-click with a fingertip on one item in a control that displays 20 of them?

With the introduction of touch input, developers must take a small step back and think about how they guide users to interact with their applications. A few small changes may be all you need to make your apps touch-friendlier.

  • Provide interface elements that are big enough for average people to target using touch. MSDN® says this size is about nine square millimeters (mm²) (Figure 6).
  • Avoid crowding visual objects too close together. Put at least five pixels of whitespace between controls, or if the objects have to be close together, increase the hotspot size (Figure 7).
  • Increase the size of the area around the control that users can touch (Figure 8). By making the hotspot larger than the visible control, you enable users to select it with less precision.
  • Don’t penalize users who mistakenly touch the “wrong” item of a set. Make it easy for a user to recover from such accidental touches.
  • Use common controls when possible so that your apps can take advantage of enhancements for pen and touch users for free.
  • Use full-screen forms where possible. This reduces the likelihood of users accidentally activating a window below your application. However, you should avoid placing buttons on the edges of the screen. Users are very likely to touch the edge of the screen accidentally.
  • Minimize the need for users to perform text entry in touch applications. Make it possible for users to choose items from a list, or provide buttons for Yes/No responses.
  • Consider whether your users will rely on the touch pointer to navigate an interface that isn’t optimized for touch, or whether you should implement a touch-friendly user interface that can easily be used without the touch pointer.
Figure 4: Panning hand in Internet Explorer.
Figure 5: Handwriting Personalization wizard.
Figure 6: Ensure that interface controls are at least 9 mm².
Figure 7: Allow at least five pixels of space between adjoining controls.
Figure 8: Enlarge the hotspot around small controls.

The touch pointer makes navigating and controlling a touch-enabled device very easy, but it’s still a good practice to design for human factors. My eyes are not getting any younger, so I’d prefer you avoid filling up every pixel of a 1024x768 or 1280x1024 screen. I don’t recommend that you “dumb down” your forms. Just plan for the ergonomic realities of touch input.
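
To make the sizing and spacing guidelines above concrete, here’s a minimal WPF sketch. The helper name is mine, and the numbers simply translate the 9 mm and five-pixel figures into WPF’s device-independent units (1/96 inch, so roughly 34 units ≈ 9 mm):

using System.Windows;
using System.Windows.Controls;

// Sketch: a touch-friendly button sized and spaced per the guidelines above.
static class TouchLayout
{
    public static Button CreateTouchFriendlyButton(string caption)
    {
        Button button = new Button();
        button.Content = caption;

        // WPF lengths are device-independent units of 1/96 inch,
        // so 34 units is roughly a 9 mm touch target.
        button.MinWidth = 34;
        button.MinHeight = 34;

        // Keep at least five units of whitespace between adjoining controls.
        button.Margin = new Thickness(5);
        return button;
    }
}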

Developing Applications that Understand Touch

Programmatically, you’d use the IInkTablet interface to enumerate the attached digitizers and determine what kind of devices they are and what capabilities they have.

No work is required on the application’s part to receive touch input. For existing applications, touch is treated just like pen input. Developers will want to add touch-specific handling to their applications in order to provide users the best experience.

The relevant C++ code looks like this:

// ifTablet is an IInkTablet*; ifTablet2 was requested from it earlier via
// QueryInterface(IID_IInkTablet2, ...), and hr holds the result of that call.
if (hr != E_NOINTERFACE)
{
   TabletDeviceKind kind;

   // Ask the digitizer what kind of device it is.
   Chk(ifTablet2->get_DeviceKind(&kind));
   if (kind == TDK_Touch || kind == TDK_Mouse)
      fTouch = true;
}
else
{
   // IInkTablet2 isn't available; fall back to comparing the tablet's
   // name with the display device name as a heuristic for an
   // integrated (touch) digitizer.
   Chk(ifTablet->get_Name(&bstrName));
   fTouch = !wcscmp(bstrName,
         L"\\\\.\\DISPLAY1");
}

The interesting part that you need to know is how to use the TabletDeviceKind enumeration, which tells you whether a digitizer reports itself as a mouse, pen, or touch device.
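
For managed code, a roughly equivalent check might look like the following sketch. It assumes the Windows Vista version of the Microsoft.Ink assembly exposes Tablet.DeviceKind, the managed counterpart of IInkTablet2::get_DeviceKind:

using Microsoft.Ink;   // Tablet PC managed API

// Sketch: enumerate the attached digitizers and look for a touch device.
// Tablet.DeviceKind is assumed to be available on Windows Vista.
static class DigitizerProbe
{
    public static bool IsTouchDigitizerPresent()
    {
        foreach (Tablet tablet in new Tablets())
        {
            if (tablet.DeviceKind == TabletDeviceKind.Touch)
                return true;
        }
        return false;
    }
}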

In this way, it’s simple for developers to query, on pen down, what kind of input they’re receiving and render it appropriately. For example, if a pen is detected, you could lay down medium-point ballpoint ink. If you find the input is touch-generated, you could query the area of the contact, set the cursor size to match it, and apply a smudge effect, or finger paint. With pressure information, maybe you could have a faux 3-D “ink” that indicates where you drew. The possibilities are yours to discover.
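
Here’s one hedged sketch of that idea in WPF (the class name and the specific drawing attributes are mine): swap drawing attributes depending on whether the contact came from a pen or an integrated touch digitizer.

using System.Windows.Controls;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;

// Sketch: render touch strokes as broad "finger paint" and pen strokes
// as a medium-point ballpoint, chosen when the contact begins.
public class AdaptiveInkCanvas : InkCanvas
{
    protected override void OnPreviewStylusDown(StylusDownEventArgs e)
    {
        DrawingAttributes attributes = new DrawingAttributes();

        if (e.StylusDevice.TabletDevice.Type == TabletDeviceType.Touch)
        {
            // Finger: a wide, semi-transparent stroke approximates finger paint.
            attributes.Width = 12;
            attributes.Height = 12;
            attributes.Color = Colors.DarkOrange;
            attributes.IsHighlighter = true;
        }
        else
        {
            // Pen: a medium-point ballpoint look.
            attributes.Width = 2;
            attributes.Height = 2;
            attributes.Color = Colors.DarkBlue;
        }

        DefaultDrawingAttributes = attributes;
        base.OnPreviewStylusDown(e);
    }
}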

Hardware Considerations

Digitizers: Resistive or Capacitive?

There are two basic technologies underlying touch digitizers today: active and passive. Passive, or resistive, technology is the sort of digitizer you’ve seen used in PDAs or touch-screen kiosk displays. In this kind of digitizer, all the electronic components are contained in the digitizer itself. The stylus typically contains no electronics and can be pretty much any hard object, such as a solid plastic stylus or your fingernail.

Active, or capacitive, technology (when referring to touch input) uses the capacitive properties of your fingertip to detect touches. It works by applying a small amount of voltage to the touch screen. When your fingertip touches the screen, a minute amount of the current is drawn to the point of contact. The digitizer can then calculate the X and Y coordinates of the touch.

Keep in mind that capacitive technology is different from the electromagnetic technology used in most of today’s Tablet PCs. Electromagnetic digitizers generate a small electromagnetic field and detect how the special electronic stylus disturbs that field to determine the stylus’s position. Some digitizers even combine capacitive and electromagnetic technologies in a single device for simplicity.

The advantages of capacitive touch digitizers over resistive digitizers are twofold.

First, they have a higher effective sampling rate. Resistive digitizers rely on a mechanical interaction to detect touches, so there’s no real sampling rate: either you’ve pushed hard enough or you haven’t. With capacitive touch, the current passed into the screen can be measured continuously to determine whether it has been touched, and the more often that current draw is measured, the better the responsiveness and fidelity of the touch screen.

Second, by using your fingertip to interact with the screen, you’re less likely to push too hard on the screen and damage it. Resistive digitizers require that you exert a fair amount of pressure on them to register a touch. Capacitive digitizers care only about your finger’s ability to change the electrical current being sent through the screen. Most customers I’ve talked to tend to err on the side of being too gentle with their tablets rather than pressing too hard on them. It’s only natural: electronic devices have a long-standing reputation for fragility, and that reputation makes users quite timid about touching the screen with their fingers.

In Conclusion

It’s an exciting time to be a mobile PC developer. Device form factors are changing, and the number of ways to interact with a mobile PC is increasing. All of this leads to a much more natural experience when using computing devices, whether you’re writing notes in ink or reading on the beach with your Ultra-Mobile PC.

With just a few small changes in the way you think about user interaction, you can develop apps that work anywhere, anytime, and on any device. Think of the possibilities!