People are becoming increasingly tired of their mobile phones, constantly on the lookout for something new. Recently, scientists have introduced gesture navigation into the mix, meaning that people no longer need to use their fingers to select buttons or send a text; they don't even have to come into contact with the phone!
In 2010, researchers at the University of Tokyo proposed a vision-based interface for mobile devices, utilising a 3D motion tracking system that sensed human finger motion through a single camera. Since the fingertips nearest the camera moved fastest in the image, a high frame-rate camera had to be implemented for stable tracking. The binarised fingertip image could then be processed with the Lucas-Kanade algorithm to estimate its 3D motion and posture, resulting in a contactless clicking method similar to that of a computer mouse. Yet strip this back and what do you have? A micro-Kinect.
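To give a flavour of the tracking step, here is a minimal sketch of the Lucas-Kanade idea: given two consecutive frames, it solves a small least-squares system over a window of pixels to estimate how far the tracked feature moved. This is an illustrative single-window, 2D version on synthetic data, not the Tokyo group's full 3D posture pipeline; the function name and window size are assumptions for the example.

```python
import numpy as np

def lucas_kanade(frame1, frame2, window=15):
    """Estimate a single (vx, vy) translation between two greyscale frames
    by solving the Lucas-Kanade least-squares system over a central window."""
    # Spatial gradients of the first frame (central differences)
    # and the temporal gradient between the frames.
    Ix = np.gradient(frame1, axis=1)
    Iy = np.gradient(frame1, axis=0)
    It = frame2 - frame1
    # Restrict to a window around the image centre, standing in for
    # a window around a detected fingertip.
    h, w = frame1.shape
    r = window // 2
    cy, cx = h // 2, w // 2
    sl = (slice(cy - r, cy + r + 1), slice(cx - r, cx + r + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    # Each pixel contributes one brightness-constancy equation:
    #   ix * vx + iy * vy = -it
    # Stack them and solve in the least-squares sense.
    A = np.stack([ix, iy], axis=1)
    v, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return v  # (vx, vy) in pixels per frame
```

Because the estimate comes from image gradients, it is only valid for small inter-frame motion, which is exactly why a high frame-rate camera helps: the faster the camera, the smaller the motion between frames, and the more stable the tracking.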
Patrick Baudisch, Professor of Computer Science at the Hasso Plattner Institute in Potsdam, Germany, and his research student Sean Gustafson also seemed to come to a similar conclusion when they took this concept one step further, developing a series of mobile prototypes that removed touch screens and keyboards altogether. Simply by attaching a video camera and microprocessor to their clothes, hand gestures could be recognised and converted into mobile actions, such as making a telephone call or browsing the internet.
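The final step of such a pipeline, turning a recognised gesture into a phone action, can be as simple as a lookup table. The sketch below is purely illustrative: the gesture labels and actions are hypothetical, not the actual vocabulary of Baudisch and Gustafson's prototypes.

```python
from typing import Callable, Dict

def make_dispatcher() -> Dict[str, Callable[[], str]]:
    # Hypothetical gesture vocabulary mapped to phone actions.
    return {
        "pinch": lambda: "call started",
        "swipe_up": lambda: "page scrolled",
        "fist": lambda: "call ended",
    }

def handle_gesture(label: str, actions: Dict[str, Callable[[], str]]) -> str:
    # Unknown labels are ignored rather than raising an error, since a
    # wearable recogniser will inevitably emit spurious detections.
    action = actions.get(label)
    return action() if action else "ignored"
```

The interesting engineering lives upstream, in recognising the gesture reliably; once a label exists, dispatching it is trivial.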
With these developments in mind, will future mobile phones rely heavily on products like the Microsoft Kinect, or could it be that one day we get bored of gesture navigation altogether and return to the days of touch?