Human-computer interaction is undergoing a revolution, entering a multimodal era that moves far beyond the WIMP (Windows-Icons-Menus-Pointers) paradigm.
Now European researchers have developed a platform to speed up that revolution.
We have the technology. So why is our primary human-computer interface (HCI) still based on the 35-year-old WIMP paradigm? Voice, gestures, touch, haptics, force feedback and many other sensors and effectors promise to simplify and simultaneously enhance human interaction with computers, yet we remain stuck with a hundred-odd keys, a mouse and sore wrists.
In part, the slow pace of interface development is just history repeating itself. The story of mechanical systems that could write faster than the human hand is a 150-year saga that eventually led to the QWERTY keyboard in the early 1870s.
In part, the problem is one of complexity. Interface systems have to adapt to human morphology and neurology while doing their job better than what came before, and working out how to improve them takes considerable time.
The revolution has begun, with touch and 2D/3D gesture systems reinventing mobile phones and gaming, but the pace of development and deployment has been painfully slow. One European project hopes to change that.