Natural User Interfaces: how did we come to where we are?
A bit of history
More than 20 years ago, I got my first job as an HMI (Human Machine Interface) developer and designer in the industrial world. My goal was to build touch user interfaces to control machinery in the pipes and cables industry (already running under Windows 3.11 and C++). At that time, touch technology was already there, present in devices purely dedicated to industry. The first technology I used was based on a screen overlay made of IR LEDs. A few years later came capacitive and resistive touch screens. Could you have imagined yourself, ten years ago, interacting with your friends around a table that responds to your touch, able to collaborate on different content?
If we look at today, it is definitely not the touch technology itself which is new and important to note, simply because it has now been introduced to the public worldwide, but rather the way we use it in our daily lives. Apple has of course been a real actor in bringing that technology into the home, but behind that, Microsoft was working on the first version of Surface, taking the technology to another level. With touch technology, not to say multi-touch, we enter deeper into the NUI world, which is in permanent motion. Touch is everywhere; multi-touch is everywhere. Screen manufacturers have understood that they must be present in this world, and have started to implement touch on every screen, investing a lot while trying to be innovative at the same time (not that easy, I would say).
But wait a moment: who says touch can only be on screens? Actually, it can be, and it will be, on any type of surface.
For those who wonder what the world could be, or what the world we are going to live in will be, here is the link to Corning's vision of a world of glass:
Touch is also being adapted for non-screen applications. For example, Microsoft is working on a touch interface called "Skinput" that allows users to interact by tapping their own skin.
I think (I hope I am not wrong, and do not hesitate to correct me if I am) that Nintendo's Wii Remote showed the way toward interaction with a simple remote control, and it was the start of a lot of different ideas from different companies.
Gesture recognition tracks user motions and translates those movements into instructions. The Nintendo Wii and PlayStation Move motion gaming systems work through controller-based accelerometers and gyroscopes that sense tilting, rotation, and acceleration. A more intuitive type of NUI is outfitted with a camera and software in the device that recognizes specific gestures and translates them into actions. Microsoft's Kinect, for example, is a motion sensor for the Xbox 360 gaming console that allows users to interact through body motions, gestures, and spoken commands. Kinect recognizes individual players' bodies and voices. Gesture recognition can also be used to interact with computers.
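To give a rough idea of how controller-based motion sensing works, here is a minimal sketch (in Python, with made-up sensor values; real controllers like the Wii Remote expose calibrated readings through their own APIs) that derives tilt angles from raw accelerometer data, the same principle these consoles use to detect tilting:

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (in degrees) from accelerometer
    readings, assuming the device is roughly at rest so that
    gravity dominates the measured acceleration."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Device lying flat: gravity is entirely on the z axis.
print(tilt_angles(0.0, 0.0, 1.0))   # → (0.0, 0.0)

# Device tilted so gravity splits evenly between y and z: roll of 45 degrees.
print(tilt_angles(0.0, 0.7, 0.7))
```

Real systems add gyroscope data and filtering on top of this to track fast rotation and acceleration, but the tilt estimate above is the basic building block.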
The Kinect, and this is only my own opinion, is still considered by a majority as a toy for simple, fun interactions. Some effort is still needed to convert it into real business added value, but I am sure it will come. There are actually some areas where Kinect could bring added value: think for instance of the medical sector, where doctors' hands, once washed, must not touch any other surface in order to avoid bacteria in clean rooms.
I have been hearing about speech recognition for a long time, and I think this technology is one of the most difficult. Not only because of translating words into actions, but also because it must take into account your voice variations and the learning curve. Let's imagine a funny scenario. You have just finished installing, at your front door, a voice recognition device which replaces your traditional key lock or fingerprint reader. You test that everything works well, and you are really happy that, once you pronounce your name or some other word, your door opens as if by magic. Unfortunately, a few days later you get sick and try to get into your home the same way, but this time the system does not recognize you. Houston, we have a problem. We are starting to see speech recognition on some smartphones and even on the Kinect, but we cannot say it is truly usable yet; having tried it on the latest iPhone, it is more of a gadget so far.
Eye tracking allows users to guide a system through eye movements. I recently saw a TV broadcast about a company that was monitoring pilot customers with an eye tracking system: how customers selected a product from the store, what they looked at first, and where their eyes rested most of the time. This type of analysis helps the marketing team arrange the way products are presented in the store.
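The store analysis described above boils down to counting where gaze points land. Here is a small sketch (Python, with hypothetical fixation data; real eye trackers deliver calibrated gaze coordinates through vendor SDKs) that bins gaze points into a coarse grid to find the most-looked-at region:

```python
from collections import Counter

def gaze_heatmap(points, cell=100):
    """Bin (x, y) gaze points into square cells of `cell` pixels
    and count how many fixations fall into each cell."""
    return Counter((x // cell, y // cell) for x, y in points)

# Hypothetical fixation points recorded over a photo of a shelf.
points = [(120, 80), (130, 90), (125, 85), (310, 220), (118, 95)]
heat = gaze_heatmap(points)

# The cell with the most fixations: where the customer looked most.
hottest = max(heat, key=heat.get)
print(hottest, heat[hottest])   # → (1, 0) 4
```

Visualizing these per-cell counts as a color overlay on the shelf photo gives the classic heatmap marketers use to rearrange product placement.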
For UX interaction with eye tracking, I will point you to a nice article from UX Magazine.
Touch interaction on any surface: it's just the beginning
A few days ago I came across an interview in which Andy Wilson of Microsoft Research (a founding member of Surface V1) describes what the world of tomorrow could look like in touch interaction. He talks about our future kitchen and the way we could interact with our home devices, but more generally about having touch projection on any surface. Check out the video and the prototype of what you could have integrated into your next shirt buttons.
The future holds a lot more cool stuff, but let's step back down to earth and dream about what tomorrow could be. Some might be frightened, others excited. There is so much to say about each of these technologies, but my intention was not to write a novel, simply to drop a few words about them.
It reminds me of every time I show my kids an old video tape and they simply ask me:
“Daddy what’s that…”