Today’s Disturbing Read: “Why Self-Driving Cars Must Be Programmed to Kill”

Yesterday, the MIT Technology Review addressed some of the ethical issues associated with autonomous cars. In the process, it raised some uncomfortable questions, like: “Will self-driving cars be more rational and morally upright than we are?” (Hint: yes, if we can figure out how to code them that way.)

A taste:

Here is the nature of the dilemma. Imagine that in the not-too-distant future, you own a self-driving car. One day, while you are driving along, an unfortunate set of events causes the car to head toward a crowd of 10 people crossing the road. It cannot stop in time but it can avoid killing 10 people by steering into a wall. However, this collision would kill you, the owner and occupant. What should it do?

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.
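
For what it's worth, the "minimize the loss of life" rule in the excerpt is the easy part to code. Here's a minimal Python sketch; the option names and casualty counts are made up for illustration, not anyone's actual control software:

```python
# A toy version of the utilitarian rule from the excerpt:
# pick whichever action is expected to kill the fewest people.
# All options and numbers here are hypothetical.

def choose_action(options):
    """Return the option with the fewest expected fatalities."""
    return min(options, key=lambda o: o["expected_fatalities"])

options = [
    {"name": "stay the course", "expected_fatalities": 10},  # hit the crowd
    {"name": "swerve into wall", "expected_fatalities": 1},   # kill the occupant
]

print(choose_action(options)["name"])  # -> "swerve into wall"
```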
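
And the Catch-22 is just as easy to make concrete. With some entirely invented numbers (suppose self-driving cars crash a tenth as often as human-driven ones), scaring buyers away from the safer cars raises the total body count:

```python
# Back-of-the-envelope arithmetic with purely hypothetical rates:
# say human-driven miles cause 10 deaths per unit of driving and
# self-driven miles cause 1.
HUMAN_DEATH_RATE = 10.0
ROBOT_DEATH_RATE = 1.0
TOTAL_MILES = 100.0  # total driving, in arbitrary units

def total_deaths(adoption_rate):
    """Expected deaths if a given fraction of all miles are self-driven."""
    robot_miles = adoption_rate * TOTAL_MILES
    human_miles = TOTAL_MILES - robot_miles
    return robot_miles * ROBOT_DEATH_RATE + human_miles * HUMAN_DEATH_RATE

# If "sacrifice the owner" programming drops adoption from, say,
# 60% to 20%, expected deaths go up, not down.
print(total_deaths(0.6))  # 460.0
print(total_deaths(0.2))  # 820.0
```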

Hooray for the future!