Oh, humanity. New research published in Science shows that we, as a race, want to have our cake and eat it too when it comes to autonomous vehicles. Specifically, we're totally okay with self-driving cars that will sacrifice their passengers in favor of not harming pedestrians -- so long as we aren't the passengers when that happens. What's more, those surveyed would like other people to buy those self-sacrificing rides, but don't want to buy one themselves, and don't agree with the idea of enforcing regulations for them. Sure; this makes perfect sense.
The tough part here is designing the algorithms that will control these self-driving rides, and teaching the artificial intelligence to deal with unavoidable harm. Successfully doing so relies on a trio of what Science calls "incompatible objectives." Meaning, the algorithms must be consistent, not cause public outrage and not discourage buyers. It's tricky, and it raises the question of which lives matter more: those of pedestrians outside the vehicle, or those of its passengers? When humans make split-second decisions, it's out of instinct and self-preservation -- not programming.
But if someone knowingly bought an autonomous vehicle that favored passengers over pedestrians, would they be held liable if it cost a pedestrian their life?
"I do not think concerns about very rare ethical issues of this sort [...] should paralyze the really groundbreaking leaps that e are making in this particular domain of technology, policy and conversations in liability, insurance and legal sectors, and consumer acceptance," assistant research scientist Anuj. K Pradhan, of UMTRI's Human Factors Group, tells The Verge.Again, this is all extremely early, but it's for the best that the conversation is starting now, rather than, say, when we have a whole fleet of self-driving cars on the road. For more troubling morality questions, be sure to hit the source links below.
This article by Timothy J. Seppala originally ran on Engadget, the definitive guide to this connected life.