Praying mantises do not perceive the world as you and I do. For starters, they're not very brainy — they're insects. A human brain has 85 billion neurons; insects such as mantises have fewer than a million. But mantises, despite their neuronal drought, have devised a way to see in three dimensions.
They have a unique sort of vision unlike the 3-D sight used by primates or any other known creature, scientists at Newcastle University in Britain discovered recently. The scientists say they hope to apply this visual technique to robots, allowing relatively unintelligent machines to see in 3-D.
“Praying mantises are really specialized visual predators,” said Vivek Nityananda, an animal behavior expert at the university's Institute of Neuroscience. They are ambush hunters, waiting in stillness to strike at movement. Yet unlike other insects, they have two large, forward-facing eyes — the very feature that enables vertebrates to sense depth.
Previous research had suggested that praying mantises use 3-D vision, also called stereopsis. Stereo vision, Nityananda said, is “basically comparing the slightly different views of each eye to be able to work out how far things are from you.”
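The geometry behind that comparison can be sketched in a few lines: the farther apart an object's position lands in the two eyes' views (the disparity), the closer the object must be. The numbers below are illustrative values, not measurements from the study.

```python
# Sketch of classic stereopsis: depth from binocular disparity.
# Triangulation formula: depth = focal_length * baseline / disparity,
# where baseline is the separation between the eyes and disparity is
# how far the object's image shifts between the two views.
# All values here are illustrative, not from the mantis study.

def depth_from_disparity(focal_length_mm, baseline_mm, disparity_mm):
    """Larger disparity between the eyes' views means a closer object."""
    if disparity_mm <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_mm * baseline_mm / disparity_mm

# A nearby object shifts more between the two views than a distant one.
near = depth_from_disparity(focal_length_mm=17.0, baseline_mm=65.0, disparity_mm=5.0)
far = depth_from_disparity(focal_length_mm=17.0, baseline_mm=65.0, disparity_mm=0.5)
assert near < far  # big shift between the eyes = close by
```

This is the textbook pinhole-camera version of stereo triangulation, offered only to make "comparing the slightly different views of each eye" concrete.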
Uncovering the particulars of mantis stereo vision required a lot of patience and a little beeswax. Luckily, Nityananda and his teammates had both. Using the beeswax like glue — in a way that did not harm the insects — they affixed tiny lenses to the mantises' faces. The lenses, similar to old-fashioned 3-D movie glasses, paired one blue filter with one green filter. The mantises then were placed in front of a screen — an insect cinema, the researchers called it.
Thanks to the color filters, the scientists could project a different image to each eye. When combined, the images created an illusion of depth — less technologically advanced than what is produced in modern 3-D movies like “Avatar,” but effective nonetheless. They played variations on the same film: a target dot that moved against a polka-dot background. The target dot and its 3-D motion were so convincing that the mantises attacked, like a cat hunting a laser pointer.
“Mantises can see depth if there's a moving object,” Nityananda said.
The scientists manipulated the target dots in ways a person would not be able to detect. Broadly speaking, a human brain needs to meld the static image from each eye into a single coherent picture. If there is a disparity between the image details, the result is incomprehensible. “An object moving upward in the left eye cannot be the same thing as an object moving downward in the right eye,” as Nityananda and his colleagues wrote in discussing their findings, which were published Thursday in the journal Current Biology.
Yet mantises seem to care most about where the image is changing, not about how to match the details. During the research, the scientists showed the mantises uncorrelated targets: dots that began in the same place but moved up in the right image and down in the left. Those, too, provoked mantis strikes.
“For the mantis, it looks like [targets] have to be moving. But they don't have to be matching,” Nityananda said. This is a previously unknown type of vision, the scientists concluded, one that is based on motion over time and not image comparison.
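One way to picture the difference between the two strategies: human-style stereo matches image details between the eyes, while the mantis-style scheme asks only where each eye sees change over time. The toy sketch below (an illustration of the idea, not the researchers' algorithm) runs both tests on one-dimensional "retinas".

```python
# Toy contrast between detail-matching stereo and motion-based stereo
# on one-dimensional "retinas" (lists of brightness values). This is
# an illustrative sketch, not the researchers' actual algorithm.

def change_map(frame_then, frame_now):
    """Mark the positions where the image changed between two moments."""
    return [1 if a != b else 0 for a, b in zip(frame_then, frame_now)]

def motion_based_match(left_then, left_now, right_then, right_now):
    """Mantis-style: compare WHERE change happened in each eye,
    ignoring whether the moving details themselves agree."""
    return change_map(left_then, left_now) == change_map(right_then, right_now)

# A dot at position 2 moves: it brightens in the left eye but
# vanishes in the right eye — uncorrelated, like the experiment's stimuli.
left_then,  left_now  = [0, 0, 1, 0], [0, 0, 2, 0]
right_then, right_now = [0, 0, 1, 0], [0, 0, 0, 0]

# Detail-matching stereo would reject this: the two current views disagree.
assert left_now != right_now
# But both eyes agree that SOMETHING changed at position 2, so the
# motion-based scheme still signals a target there — as the mantises did.
assert motion_based_match(left_then, left_now, right_then, right_now)
```

The point of the sketch is only that a change detector per eye, plus a comparison of where the changes line up, needs far less machinery than matching image content — consistent with the paper's conclusion that mantis stereo is based on motion over time rather than image comparison.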
It is difficult to imagine what the world looks like to praying mantises. They are hyper-focused on things only an inch or two away (the danger zone for a fly, for instance). While what they see is blurrier, they process the image more quickly. Indeed, the insect cinema needed an unusually speedy frame rate.
Somehow, mantis vision does not require a sophisticated brain. “Insects need less computational power to do the same thing that we do well,” Nityananda said. The scientist and his colleagues are working to create a computer algorithm that replicates mantis sight.
If they are successful, Nityananda envisions lightweight robots that might see the world more like mantises and less like us. Virtually no robot navigates the world by vision alone, Queensland University roboticist Jonathan Roberts noted several years ago; driverless cars supplement video cameras with lasers and radar to detect obstacles. But a tiny robot with depth perception might be able to scurry through disaster area rubble or other confounding spaces — essentially more mantis view than street view.