Uber abruptly halted testing of its autonomous vehicles across North America on Monday, after a 49-year-old woman was struck and killed by one of its cars while crossing a Tempe, Ariz., street Sunday night.

The moratorium on testing includes San Francisco, Phoenix, Pittsburgh and Toronto. Sunday's crash was believed to be the first to cause a fatality in any testing program involving autonomous vehicles.

The National Transportation Safety Board opened an investigation into the crash, NTSB spokesman Eric Weiss said.

Uber CEO Dara Khosrowshahi said in a tweet that the company was working to learn what went wrong.

“Some incredibly sad news out of Arizona,” he said. “We’re thinking of the victim’s family as we work with local law enforcement to understand what happened.”

The company did not respond to questions about how long the moratorium would last.

The Uber vehicle was in autonomous mode at the time of the crash, though a driver was behind the wheel, Tempe police said in a statement. The crash occurred about 10 p.m. near a busy intersection with multiple lanes in every direction.

Police said the northbound vehicle was approaching Curry Road when a woman, identified as Elaine Herzberg, was crossing from the west side of the street and was struck. She died at a hospital, the department said.


An Uber driverless car is displayed in a garage in San Francisco. Uber suspended all of its self-driving testing Monday, March 19, 2018, after what is believed to be the first fatal pedestrian crash involving the vehicles. (AP Photo/Eric Risberg)

Driverless vehicles are being developed by some of the biggest names in American technology and manufacturing. Waymo, formerly Google’s self-driving car project, also is very active in Arizona, ferrying ordinary residents around with no backup human driver behind the wheel.

Many major automakers, including General Motors, Ford, Toyota, Volvo and Tesla, are at various stages of developing, deploying or testing autonomous technologies. In all, 50 manufacturers have secured permits from California regulators to test autonomous cars in that state with safety chaperones behind the wheel. Next month, new regulations will take effect in California allowing technology developers to apply to test and deploy cars without a human behind the wheel if they meet various safety, notification and other criteria.

Federal transportation officials have relied on voluntary safety reporting to oversee the burgeoning industry, which has emphasized the life-saving potential of the technology in arguing against government mandates. Some states, like Arizona, welcome this approach. Some advocates say the federal government should establish strict mandatory safety and reporting standards. Others say Congress must "preempt" states from putting in place a patchwork of safety regulations.

Critics have cautioned that the industry is rushing untested technology onto the roads, while supporters say fatalities are an inevitable part of the learning process.

Missy Cummings, a robotics expert at Duke University who has been critical of the swift rollout of driverless technology, said the computer-vision systems for self-driving cars are “deeply flawed” and can be “incredibly brittle,” particularly in unfamiliar circumstances.

Companies have not been required by the federal government to prove that their robotic driving systems are safe. “We’re not holding them to any standards right now,” Cummings said, arguing that the National Highway Traffic Safety Administration should provide real supervision.

But Carrie Morton, deputy director of Mcity, the University of Michigan’s 32-acre test facility simulating urban and suburban environments for self-driving vehicles, said the technology requires a combination of real-world and controlled testing to perfect.

“We have brought a tremendous amount of complexity into Mcity, but you really need some real-world exposure as well,” she said.

In general, she said, autonomous vehicles have a variety of backup mechanisms to account for anomalies such as a pedestrian in the roadway. Mcity tests those capabilities, but environments like it can only do so much.

“What we can say at this point is it just underscores the need to continue on-road testing,” she said.

Arizona has aggressively courted driverless tech firms, based largely on its light regulatory touch.

At the end of 2016, Arizona Gov. Doug Ducey (R) issued a statement touting the state’s wide-open welcome to tech companies, including Uber, which were bristling at the rules in California — another state at the vanguard of the industry.

“Arizona welcomes Uber self-driving cars with open arms and wide open roads. While California puts the brakes on innovation and change with more bureaucracy and more regulation, Arizona is paving the way for new technology and new businesses,” Ducey said. “California may not want you, but we do.”

On Monday, Patrick Ptak, a spokesman for Ducey, said: "Safety is our top priority." He did not address whether the death might change anything about the state's policies.

Ptak said Ducey’s “latest executive order provides enhanced enforcement measures and clarity on responsibility in these accidents.”

Cummings said approaches like Arizona's have consequences. "If you're going to take that first step out, then you're also going to be the first entity to have to suffer these kinds of issues," she said.

Driverless technology firms generally say they painstakingly map an area digitally before running their vehicles there, so that the vehicles essentially have banked information about the surroundings that can be compared on the fly to what cameras and sensors are picking up at any moment.

Tempe police said Herzberg was “walking outside of the crosswalk” when she was struck.

"Just because you map an area doesn't mean your computer system is necessarily going to pick up a pedestrian, particularly one that wasn't in a crosswalk," Cummings said.

Another industry-wide question is how well autonomous vehicles can deal with unanticipated problems.

“The car cameras, the vision systems, they don’t perform inductively, meaning they can’t guess about the appearance of someone in a particular place and time,” Cummings said. “Pedestrians get hit by human drivers all the time for similar reasons,” though the exact cause of this crash remains unclear, she said.

Timothy Carone, an associate teaching professor specializing in autonomous systems at the University of Notre Dame, said fatal crashes involving autonomous vehicles, while tragic, will become more commonplace as testing expands.

Road testing is the only way the systems can learn and adjust to their environments, eventually reaching a level of safety that cuts down on the number of motor vehicle deaths overall, he said.

“It’s going to be difficult to accept the deaths … but at some point you’ll start to see the curve bend,” he said. “The fact is these things will save lives and we need to get there.”

He compared the self-driving rollout to the early history of aviation, which was beset by safety incidents in the early years of commercial flight, but has now gained near-universal respect as the safest form of mass transportation.

“Hopefully it happens much faster and with a much shorter time scale,” he said of autonomous vehicles.

Carone said that if the self-driving technology was ultimately responsible for Sunday’s fatality, Uber and authorities will have to “interrogate” the system to determine what went wrong.

“It’s not just one thing that goes wrong — there’s a series of several to maybe several thousand steps that led to the death,” said Carone, author of the forthcoming book “Future Automation — Changes to Lives and to Businesses.” Using data, he said, “They have to figure out, and determine if they can figure out, ‘why did the system make a decision that caused the death?’”

The incident focuses national attention on Uber's self-driving program, in which the company has invested ambitiously to position itself as a key player in the world of autonomous driving.

The company launched robot-driven Volvo trucks, with human backup drivers, on Arizona highways in November.

Sunday's crash wasn't the first involving Uber's self-driving vehicles, nor the first in Tempe.

Last March the company temporarily suspended its self-driving fleet after a Volvo XC90 overturned when another driver failed to yield, according to Tempe police. Authorities said the Uber, which had a backup driver, was not at fault. But it was the first major crash involving the company’s self-driving fleet.

Sunday's fatality raised, for the first time, numerous non-hypothetical questions about who is liable in the event of a pedestrian death involving an autonomous vehicle.

Matt Henshon, a Boston lawyer who has written and spoken widely on artificial intelligence and self-driving cars, said that while a civil suit would be fairly straightforward, the question of potential criminal liability is much more complicated.

Could the human driver have intervened? Were developers negligent in their design and rollout of the software? Was a nonworking sensor missed by inspectors?

“The analogy is autopilot, in the airline case, where you put it on autopilot but the pilot still remains legally responsible,” he said. “Would they have been able to do something?”

He said it raised a key question: "Who had ultimate responsibility, the human driver or the car itself?" But even if the car was deemed responsible, mechanical and software-related questions would also come up.

“What bug was in there?” he said. “Was it reasonable for somebody to miss the bug?”

A civil case would be much more straightforward.

“If I’m representing the decedent … I’m suing the city for allowing everyone to do it,” he said. “But Uber’s my main target.”

Peter Holley contributed to this report.