The fashion for wearable technology may get rid of the need for passwords
WATCHES and spectacles were “wearable technology” long before the marketing maven who dreamed the term up was born. But now that some of these devices are fitted with gizmos which let their wearers monitor and record their lives down to the millisecond, many technologists are asking what else the data thus generated might be used for. One such is Javier Hernandez of the Massachusetts Institute of Technology (MIT). He thinks Apple Watches, Google Glasses and their kin might provide a solution to the problem of password inflation.
Ever longer, ever more numerous, ever more complicated passwords are a curse of modern life. Unless such passwords are used frequently, remembering them is close to impossible. So they get written down, defeating the point of their complexity. One way around this is to use unique bodily characteristics, known as biometrics, to identify people. Fingerprints and iris scans, in particular, have been tried, but both require special equipment. Mr Hernandez’s work offers an alternative that does not: ballistocardiography.
Ballistocardiography is the study of the body’s movement in response to the activity of the heart. Every time someone’s heart beats, his body shifts slightly. The details of these shifts seem unique to individuals. Mr Hernandez wondered whether the accelerometers fitted to things like smart watches might be able to detect ballistocardiographic shifts, and thus generate a new type of biometric. As he has just reported to a conference held near MIT, it seems they can.
He and his colleagues worked with a group of graduate students. They asked each volunteer to stand, sit and lie down while wearing either a head-mounted or a wrist-mounted device that could collect the relevant data. They then ran these data (or, rather, 80% of them) through a piece of software written for the purpose, to seek individual telltales. Once the software had chewed on the data and drawn its conclusions, they then fed it the other 20%, to see if it could spot those telltales afresh, and identify individual participants correctly.
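The train-on-80%, test-on-20% procedure described above can be sketched in code. This is a hypothetical illustration, not the MIT team's actual software: it stands in synthetic sine-wave traces for real accelerometer recordings, invents simple "telltale" features (average sample-to-sample motion and zero-crossing rate), and identifies wearers by matching held-out recordings to the nearest per-person feature average.

```python
import math
import random
import statistics

random.seed(0)

def simulate_trace(rhythm, n=256):
    """Synthetic stand-in for a ballistocardiographic accelerometer trace:
    each heartbeat shifts the body with a characteristic rhythm (hypothetical)."""
    return [math.sin(rhythm * t / 10.0) + random.gauss(0, 0.2) for t in range(n)]

def features(trace):
    """Crude individual telltales: mean sample-to-sample motion
    and the fraction of samples where the signal crosses zero."""
    diffs = [abs(b - a) for a, b in zip(trace, trace[1:])]
    crossings = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
    return (statistics.mean(diffs), crossings / len(trace))

# Three hypothetical volunteers, each with a distinct cardiac rhythm,
# recorded ten times apiece.
people = {"A": 9.0, "B": 11.0, "C": 13.0}
recordings = [(name, simulate_trace(rhythm))
              for name, rhythm in people.items() for _ in range(10)]
random.shuffle(recordings)

# Train on 80% of the recordings, hold out the remaining 20%.
split = int(0.8 * len(recordings))
train, test = recordings[:split], recordings[split:]

# "Chew on the data": average each person's training features into a centroid.
centroids = {}
for name in people:
    feats = [features(tr) for who, tr in train if who == name]
    centroids[name] = tuple(statistics.mean(f[i] for f in feats) for i in (0, 1))

def identify(trace):
    """Attribute a fresh trace to the person with the nearest centroid."""
    f = features(trace)
    return min(centroids,
               key=lambda name: sum((a - b) ** 2
                                    for a, b in zip(centroids[name], f)))

correct = sum(identify(tr) == name for name, tr in test)
print(f"identified {correct} of {len(test)} held-out recordings correctly")
```

A real system would replace the sine waves with raw smart-watch accelerometer data and the two hand-picked features with a properly trained classifier, but the 80/20 evaluation loop would look much the same.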
It could. When participants were lying down, the software was able to recognise them 94% of the time. When they were sitting or standing it was less reliable, getting their identities right 86% and 72% of the time, respectively. Not perfect, then, but a plausible point of departure for refinement. And if it could be refined, it would certainly be practical.