Self-Driving Cars Generate Tricky Ethical Questions
How should autonomous cars decide what to hit in an accident?
Self-driving cars have mostly stuck to highways and freeways, where complicated situations are at a minimum. Google's self-driving car has traveled 700,000 miles accident-free while maneuvering down California's freeways. Freeway driving is easier: there are fewer unpredictable events for the on-board computer to face. But with Google's announcement last month that its self-driving car was logging miles on city streets, complicated questions emerged.
WIRED magazine asked how an autonomous car should be programmed to behave in a no-win scenario. That is, when a crash is unavoidable, how will the car decide what to hit? All of the programming options have their own drawbacks. There are more questions than answers at this point. And as with all ethical problems, how the answer is reasoned will be just as important as the answer itself.