
Metro Crash May Exemplify Paradox of Human-Machine Interaction

By Shankar Vedantam
Washington Post Staff Writer
Monday, June 29, 2009

Sometime soon, investigators will piece together why one train on Metro's Red Line hurtled into another last Monday, killing nine people and injuring dozens. Early indications suggest a computer system may have malfunctioned, and various accounts have raised questions about whether the driver of the speeding train applied the brakes in time.


The trouble, said several experts who have studied such accidents, is that these investigations invariably focus attention on discrete acts of machine or human error, when the real problem often lies in the relationship between humans and their automated systems.

"It is easy to focus on the last act that may or may not have prevented the collision," said John D. Lee, a professor of industrial and systems engineering at the University of Wisconsin at Madison. "But you can trace the accident back to purchasing decisions, maintenance decisions and track layout. To lay the blame on the end result of when and how quickly someone activated the brake may not help with improving safety."

Metro officials have already begun a review of the automated control systems on the stretch of track where the crash occurred and have found "anomalies." While such measures are essential, Lee said, making automated systems safer leads to a paradox at the heart of all human-machine interactions: "The better you make the automation, the more difficult it is to guard against these catastrophic failures in the future, because the automation becomes more and more powerful, and you rely on it more and more."

Automated systems are often designed to relieve humans of tasks that are repetitive. When such algorithms become sophisticated, however, humans start to relate to them as if they were fellow human beings. The autopilot on a plane, the cruise control on a car and automated speed-control systems in mass transit are conveniences. But without exception, they can become crutches. The more reliable the system, the more likely it is that humans in charge will "switch off" and lose their concentration, and the greater the likelihood that a confluence of unexpected factors that stymie the algorithm will produce catastrophe.

In 1995, the cruise ship Royal Majesty ran aground near Nantucket Island, off the coast of Massachusetts. The ship was equipped with a Global Positioning System device that told crew members with pinpoint accuracy where they were and steered the ship accordingly -- until the cable running to the GPS antenna, which passed through an area with heavy foot traffic, became disconnected. A tiny electronic display reported the problem, but crew members did not notice it -- in part because they had come to trust the GPS navigation completely. In the absence of the GPS signal, the ship was programmed to switch automatically to a navigation method known as dead reckoning, which projects the ship's position from its last known location, its speed and its direction of travel. The system could not account for winds and tides, however, and these forces carried the ship miles off course.
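To illustrate, here is a minimal sketch of dead reckoning in Python. The function name, the flat-earth approximation and the drift figures are illustrative assumptions, not a description of the Royal Majesty's actual navigation software; the point is simply that the projection has no term for wind or current, so any steady push off the intended track accumulates unnoticed.

```python
import math

def dead_reckon(last_fix, speed_knots, heading_deg, hours_elapsed):
    """Project a position from the last known fix plus speed and heading.

    last_fix is (latitude, longitude) in decimal degrees. Uses a simple
    flat-earth approximation, adequate only over short distances.
    """
    distance_nm = speed_knots * hours_elapsed      # nautical miles traveled
    heading_rad = math.radians(heading_deg)
    # One nautical mile is about one minute (1/60 of a degree) of latitude.
    dlat = distance_nm * math.cos(heading_rad) / 60.0
    dlon = distance_nm * math.sin(heading_rad) / (60.0 * math.cos(math.radians(last_fix[0])))
    return (last_fix[0] + dlat, last_fix[1] + dlon)

# The projection contains no term for wind or current. A steady 1.5-knot
# current abeam of the course, ignored for 20 hours, carries the ship about
# 30 nautical miles off the track the computer believes it is following.
```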

Greg Jamieson, an expert who studies human factors and engineering systems at the University of Toronto, said many automated systems explicitly tell human operators to disengage; they are designed to eliminate human "interference." After the previous fatal accident on Metro, in which a train overshot the Shady Grove station on an icy night, the National Transportation Safety Board found that the driver of the train had reported overshooting problems at earlier stops but was told not to interfere with the automated controls.

"The problem is when individuals start to overtrust or overrely or become complacent and put too much emphasis on the automation," Jamieson said. "In the Shady Grove accident, for a year before the accident, the transit authority had put in position a directive that you were not to drive the train in manual."

Raja Parasuraman, a psychologist at George Mason University who studies how humans interact with automated systems, said something similar once happened in aviation: Planes coming in to land were running into trouble because pilots occasionally activated reverse-thrust braking systems before the wheels touched down, causing the aircraft to stall and crash.

Designers thought they could solve the problem by eliminating the pilot's judgment from the equation -- they installed weight-sensitive sensors that activated once the wheels touched down. Until the sensors activated, pilots had no control over the reverse-thrust system. On a rainy night in Warsaw, Parasuraman said, a plane touched down but started hydroplaning on the film of water on the runway. With the aircraft skimming over the surface, the weight-sensitive sensors did not trigger -- preventing the pilots from activating the thrust reversers and causing the plane to overshoot the runway.
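The logic at the heart of such an interlock can be reduced to a few lines. This is a hypothetical sketch, not actual avionics code: a weight-on-wheels signal gates the pilot's command, which is exactly why a wheel that never settles firmly on the runway leaves the pilot locked out.

```python
def reverse_thrust_enabled(weight_on_wheels: bool, pilot_commands_reverse: bool) -> bool:
    """Hypothetical weight-on-wheels interlock: the pilot's command is
    honored only when sensors report the aircraft's weight on the gear."""
    return weight_on_wheels and pilot_commands_reverse

# Normal landing: the sensor closes at touchdown and the command goes through.
assert reverse_thrust_enabled(weight_on_wheels=True, pilot_commands_reverse=True)

# Hydroplaning: the wheels skim a film of water, the sensor never closes,
# and the automation overrules the pilot at the moment braking is needed most.
assert not reverse_thrust_enabled(weight_on_wheels=False, pilot_commands_reverse=True)
```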

Lee, Jamieson and Parasuraman said there is a growing consensus among experts that automated systems should be designed to enhance the accuracy and performance of human operators rather than to supplant them or make them complacent. By definition, they said, accidents happen when unusual events come together. No matter how clever the designers of automated systems might be, they simply cannot account for every possible scenario, which is why it is so dangerous to eliminate human "interference."

Several studies have found that regular training exercises that require operators to turn off their automated systems and run everything manually are useful in retaining skills and alertness. Understanding how automated systems are designed to work allows operators to detect not only when a system has failed but also when it is on the brink. In last week's Metro accident, it remains unclear how much time the driver of the train had to react when she recognized the problem.

New cruise-control and autopilot systems in cars and planes are being designed to give better feedback in a variety of ways. When sensors detect another car too close ahead on the road, for example, they make the gas pedal harder to depress. Pilots given auditory warnings as well as visual warnings about impending problems seem to respond better.
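A toy version of that pedal-feedback idea, written as a sketch with assumed names and thresholds (the 4-second margin and the 3x resistance cap are arbitrary, not any manufacturer's specification), might look like this:

```python
def pedal_resistance(gap_m: float, closing_speed_mps: float) -> float:
    """Return a multiplier on the gas pedal's normal resistance.

    The pedal stiffens as the time remaining before reaching the lead car
    shrinks, nudging the driver to ease off without taking control away.
    """
    base = 1.0
    if closing_speed_mps <= 0:              # not gaining on the car ahead
        return base
    time_to_contact_s = gap_m / closing_speed_mps
    if time_to_contact_s >= 4.0:            # comfortable margin: no extra force
        return base
    # Below a 4-second margin, resistance ramps linearly up to 3x.
    return base + 2.0 * (4.0 - time_to_contact_s) / 4.0

# Example: a 20-meter gap closing at 10 m/s is a 2-second margin -> 2.0x force.
```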

Parasuraman has even found that the manner in which machines provide feedback is important. When they are "polite" -- waiting until a human operator has responded to one issue before interrupting with another, for example -- the improved human-machine relationship produces measurable safety gains that rival those from technological leaps.
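One way to picture such "polite" automation is an alert scheduler that holds new warnings until the operator has dealt with the current one. The class below is a hypothetical sketch of that idea, not any deployed system; a real implementation would also let safety-critical alerts jump the queue.

```python
from collections import deque

class PoliteAlerter:
    """Holds new alerts until the operator acknowledges the current one."""

    def __init__(self):
        self.active = None          # alert the operator is currently handling
        self.pending = deque()      # alerts waiting their turn

    def raise_alert(self, message: str) -> None:
        if self.active is None:
            self.active = message            # operator is free: interrupt now
            print(f"ALERT: {message}")
        else:
            self.pending.append(message)     # operator is busy: wait politely

    def acknowledge(self) -> None:
        """Operator has responded to the current alert; surface the next one."""
        self.active = None
        if self.pending:
            self.raise_alert(self.pending.popleft())
```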


© 2009 The Washington Post Company
