However, documents released Tuesday by the National Transportation Safety Board show Uber’s self-driving system was programmed using faulty assumptions about how some road users might behave. Despite having enough time to stop before hitting 49-year-old Elaine Herzberg — nearly 6 seconds — the system repeatedly failed to accurately classify her as a pedestrian or to understand she was pushing her bike across lanes of traffic on a Tempe, Ariz., street shortly before 10 p.m.
Uber’s automated driving system “never classified her as a pedestrian — or predicted correctly her goal as a jaywalking pedestrian or a cyclist” — because she was crossing in an area with no crosswalk, NTSB investigators found. “The system design did not include a consideration for jaywalking pedestrians.”
The system vacillated on whether to classify Herzberg as a vehicle, a bike or “an other,” so “the system was unable to correctly predict the path of the detected object,” according to investigative reports from the NTSB, which is set to meet later this month to issue its determination on what caused Herzberg’s death.
One Uber assumption was particularly problematic given the chaos that is common on public roads: the system assumed that objects categorized as “other” would remain where they were, in a “static location.”
“Unless that location is directly on the path of the automated vehicle, that object is not considered as a possible obstacle,” the NTSB wrote of Uber’s system at the time.
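The two design flaws the NTSB documents describe (tracking history discarded on each reclassification, and “other” objects assumed static) can be sketched in a simplified Python model. Everything here is illustrative; the names, data structures and grid-based path check are assumptions for clarity, not Uber’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    label: str                                   # "vehicle", "bicycle", "other", ...
    position: tuple                              # (x, y) cell on a simple lane grid
    history: list = field(default_factory=list)  # past positions, used for prediction

def reclassify(obj, new_label):
    """Per the NTSB, each time the system changed an object's classification
    it discarded the tracking history, so heading and speed had to be
    re-estimated from scratch."""
    if new_label != obj.label:
        obj.label = new_label
        obj.history = []

def observe(obj, position):
    obj.history.append(obj.position)
    obj.position = position

def predict_path(obj, steps=3):
    """Linear extrapolation from the last observed motion; with no history
    there is nothing to extrapolate, so the object appears stationary."""
    if not obj.history:
        return [obj.position]
    dx = obj.position[0] - obj.history[-1][0]
    dy = obj.position[1] - obj.history[-1][1]
    return [(obj.position[0] + dx * i, obj.position[1] + dy * i)
            for i in range(1, steps + 1)]

def is_possible_obstacle(obj, vehicle_path):
    """The flawed assumption: 'other' objects are treated as static, so they
    count as obstacles only if already directly on the planned path."""
    if obj.label == "other":
        return obj.position in vehicle_path
    return any(p in vehicle_path for p in predict_path(obj))

# A pedestrian crossing toward the vehicle's lane (the cells where x == 0):
vehicle_path = [(0, y) for y in range(10)]
ped = TrackedObject(label="other", position=(3, 5))
observe(ped, (2, 5))            # moving steadily toward the lane...
reclassify(ped, "bicycle")      # ...but each relabeling wipes the history
observe(ped, (1, 5))
reclassify(ped, "other")
print(is_possible_obstacle(ped, vehicle_path))  # False: not yet on the path
```

In this sketch an object relabeled just before a prediction has no motion history, so it looks stationary, and while it carries the “other” label it is ignored entirely until it is already in the vehicle’s path.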
But Volvo, which made the SUV that Uber had modified, ran a host of tests reproducing the circumstances of the March 18, 2018, collision and found that its factory driver-assistance system, which Uber had disabled while its own software was driving, would have prevented the collision or lessened its impact. The SUV “would have avoided the collision with the pedestrian in 17 out of 20 variations — the pedestrian would have moved out of the path of the SUV,” the NTSB reported.
In the remaining three cases, Volvo found that its automatic emergency braking “would have reduced the impact speed to less than 10 mph.”
In a statement, Uber said: “We regret the March 2018 crash involving one of our self-driving vehicles that took Elaine Herzberg’s life. In the wake of this tragedy, the team at Uber [Advanced Technologies Group] has adopted critical program improvements to further prioritize safety. We deeply value the thoroughness of the NTSB’s investigation into the crash and look forward to reviewing their recommendations once issued” later this month.
Uber officials said they had believed their human backup drivers would guarantee safety but now concede they were wrong. They have since turned the automatic emergency braking system back on, made numerous other technical fixes and strengthened the company’s training for its “mission specialists,” who once again ride in pairs in the cars to take over in case of a problem.
A local Arizona prosecutor determined in March 2019 that “there is no basis for criminal liability for the Uber corporation” stemming from the crash.
The Uber SUV’s backup driver, Rafaela Vasquez, had looked down inside the vehicle numerous times and her smartphone was streaming NBC’s “The Voice” before the crash, according to Tempe police, who concluded that Vasquez should have been able to avoid the deadly collision.
Vasquez began trying to steer 0.02 seconds before impact and initiated braking 0.72 seconds after impact, according to the NTSB.
A spokeswoman for the Maricopa County attorney’s office, Amanda Steele, said Tempe police “submitted the case with a recommendation of manslaughter . . . and we’re reviewing it for a charging decision.” An attorney who has represented Vasquez did not immediately respond to a request for comment.
In an interview with NTSB investigators, Vasquez said that “prior to the crash, multiple errors had popped up and she had been looking at the error list — getting a running diagnostic,” according to a summary of the exchange.
Uber, which reached an undisclosed financial settlement with Herzberg’s family, conceded last year that its self-driving technology needed fundamental improvements.
An internal review cited the need for “improving the overall software system design,” pointing a finger at the code at the core of the company’s self-driving ambitions.
Uber’s self-driving operation “did not have a formal safety plan, a standardized operations procedure (SOP) or guiding document for safety,” according to the new NTSB documents.
Management and oversight problems were among the key failures that allowed Uber to deploy flawed technology on public roads with backup drivers who were not up to the task.
Company executives, who have expressed contrition about Herzberg’s death, say they have made progress on that front. One way they hope to prove their assertion that “the Self-Driving Enterprise is trustworthy” is by having it “independently reviewed and audited,” the company said.
Last week, Uber announced the creation of a Self-Driving Safety and Responsibility Advisory Board, meant to “identify and suggest improvements” for how it is developing its technology and bringing “fully self-driving vehicles to market.”
Inaugural members include aviation safety experts and road safety and technology specialists with backgrounds in government and industry, among others.
Among its jobs will be identifying “potential risks and corresponding follow-up actions” and considering the company’s safety culture, the company said.
Uber executives said the company’s self-driving cars have been put through extremely challenging tests on test tracks to make sure they behave safely in a host of scenarios, such as when a person steps out from behind a parked car or a bicyclist runs a stop sign. The cars are also being driven in self-driving mode on public roads in Pittsburgh.
Company engineers said they have worked at “reducing latency,” a reference to the time between an observation by a sensor and the car’s reaction, such as slowing down or turning. The company said its systems have improved “object and actor detection” for ambiguous situations.
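Latency matters because the car keeps moving while the software deliberates. A back-of-the-envelope calculation makes the stakes concrete; the speed and latency figures below are illustrative only, not from the NTSB report:

```python
def distance_during_latency(speed_mph: float, latency_s: float) -> float:
    """Meters a vehicle travels between a sensor observation and the
    resulting reaction, at a given speed and pipeline latency."""
    meters_per_second = speed_mph * 0.44704  # 1 mph = 0.44704 m/s
    return meters_per_second * latency_s

# Illustrative numbers: at 40 mph, each 0.1 s of perception-to-action
# latency is roughly 1.8 m of roadway covered before the car can respond.
print(round(distance_during_latency(40, 0.1), 2))  # prints 1.79
```

Shaving even tenths of a second off that loop buys the system meters of additional room to brake or steer.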
But the company noted in a safety document last year that it still had a long way to go on its self-driving system, saying “we have frequently demonstrated proficiency on a specific scenario set only to identify a new variation beyond our current capability.”