With the proliferation of smartphones and tablets, touch interfaces are all the rage these days. Many people are looking beyond these interfaces and asking, “What’s next?” Some are experimenting with touch-less interfaces such as the Kinect. A touch-less interface can work well for those who may not be able to use a keyboard, mouse, or even a tablet. For example, a maker created Kinecticate: Kinect-powered Email so his mother could send email. Due to a stroke, she is unable to use a keyboard, but Kinecticate allows her to communicate via email through a gesture-driven, visual interface.
Sometimes the environment makes touch interfaces less than desirable. A prime example of this is the operating room. Surgeons need to refer to information during procedures. To do so without compromising sanitation, they rely on assistants to manipulate images and other information, which can distract from the task at hand. Now, surgeons are piloting a Kinect-based project that lets them control the display themselves through gestures and voice commands.
Another place where this would be useful is the kitchen. Sometimes it seems we need to refer to a recipe or set a timer just when our hands are the messiest. A student at UC Berkeley used a Kinect to make life in the kitchen easier. The results of the project are published in Kinect in the Kitchen: Testing depth camera interactions in practical home environments. The system gives the chef the ability to reference recipes, set timers, and even adjust the mood music without smudging a keyboard or screen with food.
UX professionals should take notice of these projects. We are sure to see more touch-less interfaces in the near future, whether they are powered by depth cameras like the Kinect, voice commands, or a combination of both.
Photo from Microsoft