A Robot Didn't Write This

By Lee Gutkind
Sunday, September 23, 2007; 12:00 AM

"Transformers," the summer blockbuster movie about war on Earth between two robot forces, foreshadows a world that may be closer to reality than you think.

As part of the Army's $160 billion Future Combat Systems, to be deployed within the next decade, robot soldiers will be programmed to invade hostile terrain and shoot to kill. If all goes according to plan, there could be virtually no humans on the battlefield -- at least on our side -- by the end of this century. And while "Transformers" remains science fiction today, "RoboCop" is already real: Over the summer, iRobot Corp. and Taser International announced that the iRobot's bomb-disposal PackBot can be equipped with a Taser X26 stun gun, which lets it double as a law-enforcement officer and "engage, incapacitate and control dangerous suspects."

These cyborgs are only part of the coming robotic revolution. Robots are changing the way we live and work. At banks, they provide account balances and dispense cash. They help us through computer glitches and cable TV dysfunctions. On the road, they provide directions and guide us into tight parking spots. Nursebots in hospitals deliver medication and measure heart rates. They vacuum our floors, fulfill orders in warehouses and greet visitors in museums.

We can't -- and shouldn't -- stop technological advancement, but surely there's a difference between helping society and replacing a human with a robot. Take medicine. Surgeons might choose a robotic device as the best tool for a delicate procedure. But now robots in some hospitals are "rounding" on behalf of physicians, who are speaking with patients from their offices -- or their BMWs? -- through the machines. While robots might communicate better than some doctors, this may be pushing the envelope.

And consider the media. The Computing Culture group at the Massachusetts Institute of Technology has developed a robo-scribe called "Afghan eXplorer" -- equipped with a digital video camera, audio-recording capabilities and an intercom system for remote interviews -- for use in areas in Afghanistan where it's too dangerous for Western reporters. And for more than a year, Thomson Financial, which owns the AFX financial-news service, has used robots to crunch market numbers and file roundups, sometimes in 0.3 seconds flat.

Meanwhile, on the battlefield or at a crime scene, warfare and police work without casualties -- at least our own -- are good. But human suffering can be a curb on the impulses of bellicose leaders. Would the current outcry against the Iraq war be raging now if only Iraqis were dying?

So what are the ethical, legal and moral ramifications of the man-to-machine transition? What happens if RoboCop injures the wrong suspect? Do we incarcerate the robot, its manufacturer or its owner? Can robots be evaluated under the same moral values and laws applied to humans? Must we devise a new set of standards for them?

Some of these questions need not be answered right away. Some roboticists contend that we're decades away from confronting them; others think that humans can avoid such glitches through incisive code-writing. In other words, we are supposed to trust scientists and engineers to make infallible machines -- just like us.

Ronald Arkin, a robotics expert at Georgia Tech, is developing an artificial-conscience mechanism to govern robot behavior. At some point, he says, robots will be programmed to follow the same ethical requirements imposed on U.S. soldiers. He points to a September 2006 incident in which an armed Predator drone was sent toward a cemetery where 190 Taliban soldiers had gathered for a funeral. U.S. policy bans waging war during funerals, so the missiles, which must be triggered by a human, weren't fired. "An autonomous robot could have been programmed to have made the same decision," Arkin says, "and perhaps in a more effective and safer manner."

I don't find such prospects of a future with robots, with or without consciences, comforting. A quarter of a century ago, we were at a similar takeoff point with computers and the Internet. Who could have imagined that spam would be clogging our inboxes, that we'd be buying shoes online and that blogs, podcasts, YouTube and Google would be molding popular culture? This isn't all bad, but it would have been nice to have had a deliberate hand in shaping these radical changes in the way we live now.

So let us prepare for the inevitable new world of pervasive robotics. We must decide what we want robots to do for us and what we insist on doing ourselves. The time to take charge is now -- before the transformers do.

Leegutkind@earthlink.net

Lee Gutkind, an English professor at the University of Pittsburgh, is the author of "Almost Human: Making Robots Think."


© 2007 The Washington Post Company