Hearthly wrote:
Isn't one of the big problems with self-driving cars what they decide to do when an accident is inevitable? Do they always take action to preserve the lives of people inside them, even if that means killing someone else?
The bullish answer is that this is a red herring, because the real answer is "if you're in this mess, you did something wrong 30 seconds ago." This is why you've likely never faced a trolley problem in your own driving -- real scenarios are rarely so unpredictable that a no-win situation appears out of nowhere. In hundreds of thousands of miles of driving, you probably have a perfect track record of steering clear of these situations before they ever form.
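To make the "wrong 30 seconds ago" point concrete, here is a toy sketch of the kind of upstream check a planner might run. It assumes a simple constant-deceleration model; the `MAX_BRAKE` value and the function names are my own illustrative assumptions, not anything from a real AV stack. The point it shows: whether a crash is avoidable is decided by the speed you carried into the situation, not at the moment of impact.

```python
# Toy 1-D model of the "avoid the no-win state upstream" idea.
# All numbers and names are illustrative assumptions, not any
# vendor's actual planner.

MAX_BRAKE = 7.0  # m/s^2, roughly a hard stop on dry pavement (assumed)


def stopping_distance(speed_mps: float) -> float:
    """Distance needed to brake to a halt from the given speed."""
    return speed_mps ** 2 / (2 * MAX_BRAKE)


def is_no_win_state(speed_mps: float, gap_m: float) -> bool:
    """True if no braking action can avoid the obstacle ahead."""
    return stopping_distance(speed_mps) > gap_m


def max_safe_speed(gap_m: float) -> float:
    """Fastest speed from which the gap is still stoppable."""
    return (2 * MAX_BRAKE * gap_m) ** 0.5


if __name__ == "__main__":
    # At 30 m/s (~67 mph) with a 50 m gap, the trolley problem is
    # already lost; the planner's job was to never get here.
    print(is_no_win_state(30.0, 50.0))      # True
    print(round(max_safe_speed(50.0), 1))   # ~26.5 m/s
```

A planner that keeps the vehicle below `max_safe_speed` for every gap it can see never has to answer the trolley question in the first place.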
https://www.wired.com/story/trolley-pro ... engineers/

Quote:
So what do the people actually building this technology think about the trolley problem? I’ve asked lots of AV developers this question over the years, and the response is generally: sigh.
“The bottom line is, from an engineering perspective, solving the trolley problem is not something that’s heavily focused on for two reasons,” says Karl Iagnemma, the president of Aptiv Automated Mobility and cofounder of the autonomous vehicle company nuTonomy. “First, because it’s not clear what the right solution is, or if a solution even exists. And second, because the incidence of events like this is vanishingly small and driverless cars should make them even less likely without a human behind the wheel.”
Another frequent objection: Self-driving cars definitely don’t have the data or training today to make the kind of complex tradeoffs that people are considering in the Moral Machine experiment. It’s hard enough for their sensors to distinguish vehicle exhaust from a solid wall, let alone a billionaire from a homeless person. Right now, developers are focused on more elemental issues, like training the tech to distinguish a human on a bicycle from a parked car, or a car in motion.
It is, however, likely that engineers are training their tech to make certain tradeoffs. “The way people in autonomous driving taxonomize or organize the objects that they detect is that they have vulnerable objects and non-vulnerable ones,” says Forrest Iandola, the CEO of the company DeepScale, which builds perception systems for self-driving cars. “The most important vulnerable objects to detect are humans with no protection. But a parked car or a traffic cone tend to be non-vulnerable.” And thus: better to hit.
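For illustration, here is a minimal sketch of how that vulnerable/non-vulnerable taxonomy might feed a "least harmful" tradeoff when a collision truly can't be avoided. Every class name, cost number, and function below is a made-up assumption for exposition; none of it comes from DeepScale or any real perception system.

```python
# Hypothetical sketch of the "vulnerable vs. non-vulnerable" taxonomy
# described above. Class names and costs are invented to illustrate
# the idea that a planner can rank detected objects by the harm of
# hitting them.
from dataclasses import dataclass
from enum import Enum


class ObjectClass(Enum):
    PEDESTRIAN = "pedestrian"      # vulnerable: human with no protection
    CYCLIST = "cyclist"            # vulnerable: human with little protection
    MOVING_CAR = "moving_car"      # occupants protected, but occupied
    PARKED_CAR = "parked_car"      # non-vulnerable: likely unoccupied
    TRAFFIC_CONE = "traffic_cone"  # non-vulnerable: property only


# Illustrative collision costs: lower means "prefer to hit this if a
# collision is truly unavoidable." The exact numbers are made up.
COLLISION_COST = {
    ObjectClass.PEDESTRIAN: 1000.0,
    ObjectClass.CYCLIST: 900.0,
    ObjectClass.MOVING_CAR: 100.0,
    ObjectClass.PARKED_CAR: 10.0,
    ObjectClass.TRAFFIC_CONE: 1.0,
}


@dataclass
class Detection:
    obj_class: ObjectClass
    track_id: int


def least_harmful_target(unavoidable: list[Detection]) -> Detection:
    """Among objects a collision cannot avoid, pick the lowest-cost one."""
    return min(unavoidable, key=lambda d: COLLISION_COST[d.obj_class])


if __name__ == "__main__":
    scene = [
        Detection(ObjectClass.CYCLIST, track_id=1),
        Detection(ObjectClass.TRAFFIC_CONE, track_id=2),
    ]
    # Prefers the cone over the cyclist, matching the quoted taxonomy.
    print(least_harmful_target(scene))
```

Note this is a blunt class-level ordering, which matches the quote: the system distinguishes "human with no protection" from "traffic cone," not a billionaire from a homeless person.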