Last week, the Seattle Times published an article in their business and technology section breathlessly explaining how the keyboard and mouse are on the way out, and the touch interface is the new way forward. It is not the first time I have heard this. Some select excerpts from the article (link):
“If you let the plastic chunk that is a mouse drop away, you will be able to transmit information between you and machines in a very different, high-bandwidth way…
And the mouse — which many agree was a genius creation in its time — may end up as a relic in a museum…
School officials are debating what to do with their old desktop PCs, each with its own keyboard and mouse, and whether they should bother teaching students to move a pointer around a monitor…”
This is first-rate silliness. Are there certain activities, mostly leisure activities, that fare better with a touch interface? Certainly. The article draws most of its support from (surprise) people who build gesture-controlled interfaces for a living, a school district that is about to spend almost a quarter of a million dollars on Apple hardware, and Apple itself.
People who work in the enterprise and follow technology trends would be wise to take this sort of manic display for the futuristic posturing that it is. Without a doubt, mobile and touch are a new space, and it falls to us as developers of the user experience to figure out new ways to optimize old tricks within the new context, and even to do things that simply were not possible before. We should also be thinking ahead to the next step, as some of our applications and business functions might be well served, for example, by a voice interface.
But are the entrenched paradigms of the keyboard and mouse really destined for the clearance bin? Let’s take a look at what using that “ancient” combination actually buys us.
First, the keyboard provides rapid text entry. Voice recognition, the preferred technology in the Times article, does this too, in many cases even more quickly (although personally, I type as quickly as I speak). However, there are many times when speaking isn't appropriate. Can college students take notes via voice recognition software while they listen to a lecture? Can a roomful of people work side by side, programming, writing business documents, filling out forms, while everyone is talking? And do we really want voice recognition controlling the devices and experiences around us? I would love to see a set of technology requirements for controlling a television by voice. If Mom or Dad changes the channel with their voice, should the system prevent the kids' voices from changing it again? What if Jr. sounds just like Dad? That's going to be great.
Humans are a language-based species, and one of modern culture's greatest assets is the written language. The parts of our brains that we use to produce speech are related to, but not identical to, the ones we use when we write. To think that one will fully and meaningfully oust the other is ridiculous. There are many situations in life where we need to produce text rapidly and in relative silence, using the more ordered parts of our brains that write rather than speak. The keyboard is the solution to that, and it is not going anywhere.
What about the mouse? For some activities, like browsing pictures, browsing a map, or browsing Facebook, the gesture interface really is better. But note what those all have (obviously) in common: browsing. We naturally sort things, order them, pick through them, and move them with our hands. User interface developers know that we have to seriously rethink how items are displayed in a touch interface. Things need to be bigger, and extra space needs to surround selectable items, because human fingers, even small ones, take up a certain amount of screen space.
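The finger-size constraint can be made concrete with a quick back-of-the-envelope conversion. A minimal sketch, assuming a roughly 9 mm fingertip target (a commonly cited figure, not something from the Times article; the function name is mine):

```python
def min_touch_target_px(dpi: float, target_mm: float = 9.0) -> int:
    """Convert a physical touch-target size to pixels for a given screen density.

    The 9 mm default is an assumption: a commonly cited minimum for
    comfortable fingertip targets, not a universal standard.
    """
    mm_per_inch = 25.4
    return round(dpi * target_mm / mm_per_inch)

# The same fingertip demands far more pixels on a high-density screen:
print(min_touch_target_px(96))   # typical desktop-density display -> 34 px
print(min_touch_target_px(326))  # high-density phone display -> 116 px
```

A mouse pointer, by contrast, is effectively one pixel wide, which is why mouse-driven layouts can pack controls so much more tightly.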
So, when we choose the touch interface, it may work better for letting toddlers browse the alphabet (per the Times article), but we quickly find ourselves hindered by the old physical constraints. Those constraints are exactly what the mouse broke away from, and one of the reasons it was so successful. The mouse is a built-in translator that takes your physical motion and maps it into the computer's virtual understanding of space. Depending on the application, you can interactively change the mapping so that small hand and arm motions span the entire screen (great for presentations) or so that large motions translate to fine movements on screen (great for fine detail work). With a touch interface, one inch of finger motion must equal one inch of screen motion. It's a basic part of the interaction.
As long as people require precision and virtual motion mapping (which I bet you didn't know you used, but you do!) in their interactions, some kind of abstraction device like the mouse will be not only useful but required. The touch interface has its place, but its fixed 1:1 mapping limits its utility.
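The mapping argument above can be sketched as a tiny transfer function. The gain values below are illustrative assumptions, not settings from any particular system:

```python
def pointer_delta(physical_mm: float, gain: float) -> float:
    """Map physical hand motion to on-screen motion.

    With a mouse, the gain is a free parameter: crank it up to sweep a
    large display with a small wrist motion, or turn it down so big arm
    motions become tiny, precise pointer movements. With direct touch,
    the gain is pinned to 1.0 -- a finger that moves one inch moves the
    contact point exactly one inch.
    """
    return physical_mm * gain

# The same 10 mm of hand motion, three very different on-screen results:
print(pointer_delta(10, 8.0))   # high gain: presentation-style sweeping
print(pointer_delta(10, 1.0))   # touch: locked to 1:1
print(pointer_delta(10, 0.25))  # low gain: fine detail work
```

Real pointer drivers go further and vary the gain with hand speed (pointer acceleration), but even this constant-gain sketch shows the degree of freedom that a direct touch surface gives up.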
In the end, keyboards and mice are tools, and good ones at that. The advent of effective touch (and other) interfaces has allowed us to discard them in cases where we really didn’t need a tool – say, turning the virtual page of a virtual book. But recall what a tool does: it extends the capabilities of human beings. It lets us do things that we couldn’t do before, like spanning a large virtual space with a tiny thumb motion, or creating large amounts of written text efficiently and more or less silently. Let’s not get caught up in the hype.