How Will Your Kids’ Kids Input?

One of the things that I’ve been thinking about quite a bit lately is the next generation of gadget lovers and how they will interact with their devices. Regardless of the size/form factor of future devices, there will no doubt be a need to input commands of some sort – text, numbers, whatever. Today, there are basically four ways to input text on a portable device:

1. Numeric keypad – this is probably the oldest way to input information into a mobile device such as a cell phone. There is a standard 12-key layout, with most keys assigned one number and three or four letters (there's a quick sketch of this mapping after the list).

2. QWERTY keyboard – since the 1870s, QWERTY has been the standard text input mechanism for putting thoughts into writing (and later, digital writing).

3. Handwriting recognition – this is probably the least-used method outside of Asia, but the most universal. One of the first things you learn in school is how to read and write text.

4. Speech – With more and more powerful gadgets, we’ve seen a big increase lately in the ability to simply talk and have software translate that to text.
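
To make that keypad layout concrete, here's a quick Python sketch of the standard 12-key letter mapping with a naive multi-tap decoder bolted on (this is just my own illustration of the idea, not any phone's actual firmware):

```python
# The standard 12-key layout: keys 2-9 each carry three or four letters.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multitap(presses):
    """Decode multi-tap input: each (key, count) pair picks one letter.
    Pressing '4' twice cycles g -> h, so ('4', 2) gives 'h'."""
    out = []
    for key, count in presses:
        letters = KEYPAD[key]
        out.append(letters[(count - 1) % len(letters)])
    return "".join(out)

# 'hello' the hard way: 2x'4', 2x'3', 3x'5', 3x'5', 3x'6'
print(multitap([("4", 2), ("3", 2), ("5", 3), ("5", 3), ("6", 3)]))  # hello
```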

Since getting my Nexus One, I’ve tried out a few different keyboards, each of which added a new idea that I’ve never seen implemented on a mobile device before.

First, I tried the ThickButtons keyboard. This one's unique because it looks like a regular QWERTY keyboard, but the keys change size based on which letters its internal dictionary predicts you'll type next. Here's a video of it in action:

http://www.youtube.com/watch?v=itIPS3U2bf8
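
As best I can tell, the core trick is simple: given what you've typed so far, work out which letters could come next in a dictionary word and grow those keys. Here's a toy Python sketch of that idea (my own guess at the logic, not ThickButtons' actual code, and the tiny word list is made up):

```python
# Toy version of the ThickButtons idea: given a typed prefix, find which
# letters could continue a dictionary word, then draw those keys larger.
# WORDS is a made-up stand-in; a real keyboard would ship a full dictionary.
WORDS = ["hello", "help", "helmet", "world", "word", "work"]

def likely_next_letters(prefix):
    """Letters that extend the prefix into at least one dictionary word."""
    return {w[len(prefix)] for w in WORDS
            if w.startswith(prefix) and len(w) > len(prefix)}

def key_sizes(prefix, base=1.0, boost=1.6):
    """Relative size for every letter key: boosted if it's a likely next letter."""
    hot = likely_next_letters(prefix)
    return {ch: boost if ch in hot else base for ch in "abcdefghijklmnopqrstuvwxyz"}

print(likely_next_letters("hel"))                     # {'l', 'p', 'm'}: those keys grow
print(key_sizes("hel")["l"], key_sizes("hel")["z"])   # 1.6 1.0
```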

It’s really quite unnerving at first, until you get used to it. However, after a few days, I found the size-changing to be flat-out annoying, so I went on the hunt for something different.

Swype is another new method of entering text on your device. Like the ThickButtons keyboard, Swype looks innocent enough, like a standard QWERTY keyboard. You can use it like that, pecking out a sentence, or you can simply use your finger to draw a line through the letters you want to type and let the software build the word. Swype takes some getting used to as well, but I've discovered that it's much faster and easier to use once you train your brain to spell things out like you did when you were a kid.

http://www.youtube.com/watch?v=mRUoWUhcRlE
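
For the curious, here's a toy Python sketch of how a swipe trace might get matched back to a word: the finger path passes over a sequence of keys, and a word is a candidate if its letters show up along that path in order, anchored at the first and last keys touched. This is my own simplification, definitely not Swype's real algorithm:

```python
# Toy version of the swipe idea. Deliberately naive: it ignores double
# letters and doesn't rank candidates, it just filters a made-up word list.
WORDS = ["hero", "halo", "hold", "help", "herd"]

def matches_trace(word, trace):
    """True if the word's letters appear in order along the traced keys,
    starting on the first key touched and ending on the last."""
    if not word or word[0] != trace[0] or word[-1] != trace[-1]:
        return False
    i = 0
    for ch in trace:
        if i < len(word) and ch == word[i]:
            i += 1
    return i == len(word)

# Dragging a finger from H up through G, F, E, then across R, T, Y, U, I to O:
trace = "hgfertyuio"
print([w for w in WORDS if matches_trace(w, trace)])  # ['hero']
```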

Unfortunately, all of these input methods (save for handwriting recognition) are simply modern versions of an ancient input mechanism – the QWERTY keyboard. They've dropped the hardware buttons in favor of a virtual experience. It works, but it's still based on old technology and thought processes.

It won’t be too long before an entire generation emerges that almost never touches a ‘computer’ as we know it today. Their first (and probably only) digital experience will likely be on a mobile device, and that’s got to influence how they’ll want to interact with that device. Touchscreens bring a ton of freedom, as they’re not limited by existing hardware keys. My Nexus One is a great example of this: I was able to try out several different text input methods simply by downloading a new keyboard, since it’s all controlled by software. I already have trouble going back to a regular keyboard on my phone after using Swype to simply draw what I want to type, and I’m still thinking about different ways that I might be able to enter text even faster.

My question goes past that – what other ways do you think this upcoming generation will want to interact with their devices? I can’t help but think back to the diner scene in Back to the Future 2, where McFly shows the kids how to play the old arcade game, to which they exclaim, ‘You mean you have to use your hands? That’s like a baby’s toy!’

Update: while chatting about this idea with my good friend @ARJWright, he pointed me towards a recent Ars Technica post that somewhat mirrors my thoughts. It’s long, but worth the read if this interests you (and I’m not a Trekkie).


2 thoughts on “How Will Your Kids’ Kids Input?”

  1. I love this subject!

    So, let me get this straight, you’re talking about 30 to 40 years from now? That’s a long way out…

    Intelligence will be absolutely *everywhere* 20-30 years from now. In short, this will mean that the level of interaction between a computer and a human will be at least as rich as between two humans. In other words, that will mean speech, face, gesture and body recognition. The truth is, it will probably go even further, with sub-vocal speech recognition and sub-conscious communication via body system implants.

    In the near term I should think that speech and gesture recognition will lead the way. That’s one of the reasons why everyone should keep a keen eye on Microsoft’s Kinect system at the end of the year – if it works well it could start a revolution.
