While it is somewhat unlikely that an autonomous vehicle will ever face such a dire scenario, collisions involving driverless cars do happen on the road, and the software must decide how to respond. People are trying to tackle this thorny issue, though. "Ultimately, this problem devolves into a choice between utilitarianism and deontology," University of Alabama at Birmingham alumnus and Rhodes Scholar Ameen Barghi told the UAB News. Utilitarianism would save the greatest number of lives, while deontology suggests there is no right answer because killing is always wrong.
A further problem is that we can't simply load every scenario into artificial intelligence, because there are too many unique situations. "A computer cannot be programmed to handle them all," Dr. Gregory Pence, chair of the UAB College of Arts and Sciences Department of Philosophy, told the UAB News.
One interesting solution to the trolley problem, proposed last year, is to let the owner pre-define the car's ethics. Drivers could tell the software whether to prioritize themselves or others in a crash, along with a host of other decisions, before ever getting on the road. Other reports have considered more benign ways autonomous vehicles might break the law, like parking illegally. Insurance companies, meanwhile, are already grappling with the new possibilities the technology could bring.