The advent of multi-touch screens and novel gaming interfaces means that the days of the traditional mouse and keyboard are well and truly numbered. With two new technologies, Humantenna and SoundWave, you won’t even have to touch a computer to control it: Gesturing in its direction will be enough.
These are the latest offerings from Microsoft, which also gave us the Kinect controller for Xbox 360. But the Kinect hardware looks clunky next to the Humantenna and SoundWave setups, which their inventors say could be built into a watch or a laptop.
As the name suggests, Humantenna uses the human body as an antenna to pick up the electromagnetic fields — generated by power lines and electrical appliances — found in indoor and outdoor spaces. Users wear a device that measures the signals picked up by the body and transmits them wirelessly to a computer. “It’s just an electrode that measures voltage, digitizes it and sends the signal for processing,” says Desney Tan of Microsoft Research in Redmond, Wash.
By studying how the signal changes as users move through the electromagnetic fields, the team was able to program the system to identify 12 gestures, such as a punching motion or a swipe of the hand, with more than 90 percent accuracy.
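The article does not describe the team's actual processing pipeline, but the general recipe — windowed signal, a handful of frequency-domain features, nearest-template classification — can be sketched in a few lines. Everything below is illustrative: the waveforms, the gesture names and the feature choice are invented stand-ins, not Microsoft's method.

```python
# Hypothetical sketch of Humantenna-style gesture recognition:
# classify windows of body-voltage samples by comparing crude
# frequency-domain features against per-gesture templates.
# Signal shapes and gesture names are invented for illustration.

import math

def features(window):
    """Energy in a few coarse frequency bins of the sampled voltage,
    via a naive DFT (stdlib only)."""
    n = len(window)
    bands = []
    for k in (1, 2, 4, 8):  # a few low-frequency bins
        re = sum(window[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(window[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        bands.append(math.hypot(re, im) / n)
    return bands

def classify(window, templates):
    """Nearest-template match: pick the gesture whose stored feature
    vector is closest (squared distance) to this window's features."""
    f = features(window)
    return min(templates, key=lambda g: sum((a - b) ** 2
                                            for a, b in zip(f, templates[g])))

def synth(freq, n=64):
    """Toy stand-in for a voltage trace: a slow 'punch' perturbs the
    field at a lower rate than a faster 'swipe'."""
    return [math.sin(2 * math.pi * freq * t / n) for t in range(n)]

templates = {"punch": features(synth(2)), "swipe": features(synth(8))}
print(classify(synth(2), templates))  # -> punch
print(classify(synth(8), templates))  # -> swipe
```

A trained version of the system would build its templates from labeled recordings of each gesture rather than synthetic waves; the training-free version described in the second paper would need features robust enough to skip that step.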
One version of the system, outlined this month at the Conference on Human Factors in Computing Systems in Austin, runs off a sensor that, with training, can recognize specific gestures. Another paper prepared for the conference describes a version that relies on a wristwatch-size sensor. Thanks to advances in processing techniques, this system needs no training to recognize the same 12 gestures.
All sorts of applications would open up if Humantenna can be commercialized. The body could become a kind of universal remote control, and basic gestures such as pointing or swiping might be used to control lights, appliances and computers.
Fitness monitoring is another possibility, says Tan. We already have devices that can infer how hard a person is exercising by tracking step patterns, but Humantenna could provide a more holistic measure by monitoring whole body movements.
“It’s a very cool idea,” says Joseph LaViola, who studies user interfaces at the University of Central Florida in Orlando.
But LaViola says he is not sure how robust the system will be. Humantenna might be confused by changes in electromagnetic fields as devices are switched on and off. The system might also struggle to differentiate between closely related gestures, an issue that Tan agrees will be a challenge. Although the technology can detect movements of about two inches, it will not pick up smaller gestures, such as the wiggling of a finger.
Humantenna requires users to wear a sensor. But Tan's team, working in collaboration with researchers at the University of Washington in Seattle, has developed another gesture-recognition system that does not.

SoundWave relies on an inaudible tone generated by a laptop's loudspeaker. When a hand moves in front of the laptop, the motion shifts the frequency of the sound reflected back to the computer's microphone — the familiar Doppler effect. By matching characteristic frequency shifts with specific hand movements, SoundWave can detect certain gestures with an accuracy of 90 percent or more, even in noisy environments such as a cafeteria.
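The Doppler trick can be sketched compactly: play a near-ultrasonic pilot tone, then look for reflected energy just above or just below it in the microphone's spectrum. The pilot frequency, sample rate and thresholds below are illustrative assumptions, and the "microphone input" is simulated rather than captured — this is not Microsoft's SoundWave code.

```python
# Minimal sketch of SoundWave-style sensing: a hand moving toward the
# microphone Doppler-shifts the reflected pilot tone upward in frequency;
# a receding hand shifts it downward. All parameters are illustrative.

import math

RATE = 44100        # samples per second
PILOT = 18000.0     # near-ultrasonic pilot tone, Hz
N = 4410            # analysis window: 0.1 s, so bins fall every 10 Hz

def dft_mag(signal, freq):
    """Magnitude of a single DFT bin at an arbitrary frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * t / RATE)
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / RATE)
             for t, s in enumerate(signal))
    return math.hypot(re, im)

def detect_motion(mic):
    """Compare reflected energy in bins just above vs. just below the
    pilot; require it to be non-negligible relative to the pilot itself."""
    ref = dft_mag(mic, PILOT)
    up = max(dft_mag(mic, PILOT + d) for d in (20.0, 30.0, 40.0))
    down = max(dft_mag(mic, PILOT - d) for d in (20.0, 30.0, 40.0))
    if up < 0.05 * ref and down < 0.05 * ref:
        return "still"
    return "toward" if up > down else "away"

def tone(freq):
    return [math.sin(2 * math.pi * freq * t / RATE) for t in range(N)]

# Simulated mic input: the pilot plus a weaker reflection shifted
# +30 Hz, as if a hand were approaching the laptop.
mic = [p + 0.5 * r for p, r in zip(tone(PILOT), tone(PILOT + 30.0))]
print(detect_motion(mic))  # -> toward
```

A real implementation would run this continuously on overlapping windows and map sequences of shifts (speed, direction, duration) to gestures such as a swipe; the cafeteria-noise robustness comes from the pilot sitting far above the frequencies of ordinary speech and clatter.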
Interference caused by the tone’s bouncing off nearby objects will limit the ability to detect fine-grained motion. But the technology will still be able to translate coarse movements, such as a swipe, into commands. “I’d love to lean back and swipe to get the next page,” says Tan. “Or to push a window out of the way by moving my hands.” His team has already used SoundWave to control scrolling and to wake up a laptop when a user approaches it.
Laptops tend to come with built-in speakers and a microphone, so SoundWave could be rolled out as soon as the software is fine-tuned. “If you don’t need extra hardware, that’s a big jump in terms of getting it to the masses,” says LaViola.
SoundWave and Humantenna are steps toward a future in which we interact with computers without typing on a keyboard or clicking a mouse, Tan says. Right now, both technologies respond only to fairly vigorous gestures, but later iterations are expected to be tuned to react to gestures that are closer to those that people use in everyday communication. “We want a universal way of interacting with computers, and gestures will be a big part of that,” says Tan.
This article was produced by New Scientist magazine, which can be read online at www.newscientist.com.