Autonomous cars are piloting their way into the wide philosophical sea of ethics. Right now, autonomous cars are unaware of this because the driver's will always comes first, but once cars can overrule commands or choose a particular ethical outcome either without or in spite of driver input, we'll have a lot of decisions to make. Which means we have a lot of decisions to start considering right now.
Patrick Lin considers some of them in a piece in Wired, starting with the trolley problem: whether a person who has control of a runaway trolley should let it kill five people tied to the track, or pull a lever so that it kills only one person on another track. From there, he wonders about the possibility of fixed ethics settings, created by manufacturers, versus user-adjustable ethics settings that, for example, allow a driver to prioritize his own safety over that of others, or the safety of children over that of the elderly.
Lin admits that the examples are deliberately outrageous to stress the point of the question. Still, it's worth a read, because we already have cars that can make driving decisions, and it might not be long before a "Five-Mode Adjustable Prime Directive" shows up on the options sheet. Head over to Wired to read the full piece.