Author: Aarian Marshall / Source: WIRED

In the four months since an Uber self-driving car struck and killed a woman in Arizona, the ride-hail company’s autonomous vehicle tech has stayed off public roads. The governor of that state banned Uber from testing there; the company let its autonomous vehicle testing permit lapse in California; it pulled its vehicles off the streets of Pittsburgh, home to its self-driving R&D center.
Until today, when self-driving chief Eric Meyhofer announced in a blog post that Uber would return its self-driving cars to the roads in Pittsburgh. With a catch. For now, the vehicles will stay in manual (human-driven) mode, simply collecting data for training and mapping purposes. To prep for the tech’s return to the public space, Uber has undertaken a wholesale “safety review,” with the help of former National Transportation Safety Board chair and aviation expert Christopher Hart.
The broader impact of that review—whether it can put this tech back on the road while preventing the sort of crash that killed Elaine Herzberg—remains to be seen. But already, Uber has addressed one key piece of its robotic technology: the humans who help it learn.
When the National Transportation Safety Board released its preliminary report on the Uber crash in May, it noted that the company’s self-driving software had not properly recognized Herzberg as she crossed the road. But it also noted that Uber’s system relied on a perfectly attentive operator to intervene if the system got something wrong. As far back as World War II, those who study human-machine interactions have said this kind of reliance is a mistake. People just can’t stay that alert for long periods of time.
This is a problem for self-driving car developers, who believe testing on public roads is the only way to expose their tech to all the strange, haphazard things…