Self-Driving Cars and the Trolley Problem

Who should decide how self-driving cars behave in unavoidable collisions?

When I talk about my novel Our Dried Voices at a book signing or other public appearance, I like to discuss some of the automated technologies we use on a regular basis in our current lives. I almost always mention that many cars now parallel park themselves. (At least nine automobile manufacturers now offer cars with automated parking.) Parallel parking is a skill. I grew up in the suburbs and rarely had to do it, so I’m terrible at it. But I admire my friends who can parallel park with ease. And it’s a shame to think most of us will eventually just give up trying to perform this maneuver altogether.

Furthermore, it seems we are not too far from the day when our cars will drive themselves entirely. In a follow-up podcast to its episode on the downside of autopilot in airplanes, 99% Invisible discusses how Google intends to completely remove humans from the driving process by making cars without pedals or a steering wheel. In this way, Google hopes to circumvent the “automation paradox,” in which automation allows humans the luxury of not honing the skills they would need to employ if an automated system failed. Google’s Chris Urmson says the company’s cars will be safe specifically because they have no human driver. If a malfunction occurs, the car simply pulls over and summons a different car to retrieve the passengers.

The podcast continues with a discussion of self-driving cars on a larger scale. In the future, humans may not own a personal car but will use a network of automated taxis. This system could free up urban space occupied by parking structures and allow for narrower traffic lanes and less traffic. However, this new concept of transportation would likely spell the end of family road trips, tailgate parties, Sunday drives and the simple enjoyment of piloting a finely honed two-ton machine down the open road. Or your self-driving car may malfunction in the middle of traffic and come to a sudden dead stop (or worse). Setting these worries aside, we still face the issue that automated cars may function exactly as their manufacturers intend, with undesirable and hidden consequences for their passengers.

There is a well-known ethical dilemma, first introduced by Philippa Foot, called the trolley problem. Imagine you are in a train yard and see a runaway trolley headed toward a fork in the track. If it continues straight ahead, it will hit and kill five people in the middle of the track. If you pull a lever in the train yard, you can redirect the trolley to a side track. Unfortunately, there is one person on the side track, whom the trolley will kill if you pull the lever. (Assume none of the six people can move out of the way. Perhaps they are tied up, or the tracks run through tunnels with no room on the sides of the tracks.)

Some philosophers argue that you should throw the lever, since doing so will save five lives at the expense of one. Others argue that you should do nothing, because you will kill the one person if you pull the lever, whereas you will only let the other five people die if you do nothing. (After all, it’s not your fault they’re stuck on the track or that the train is out of control!)

Tanay Jaipuria recently proposed a modification of the trolley problem in which a human-driven car runs a red light and a self-driving car has two options: stay its course and run into that car, killing the family of five inside, or turn right and bang into another car with one passenger, killing that person instead. Or perhaps your self-driving car must decide between running down the five children who suddenly dart into the street, or self-destructing and killing you to save the kids. Most of the same arguments from the original problem apply to these new scenarios. But whom do we want to settle those arguments? Google? Other automobile manufacturers? Imagine governments pass a law that all self-driving cars must kill the fewest victims in any trolley problem situation.
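Part of what makes such a law unsettling is how trivially it could be implemented. As a minimal sketch (all names and numbers here are hypothetical, not drawn from any real vehicle software), the mandated “fewest victims” rule is just a minimization over projected casualties — the hard part is everything the inputs conceal:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """One possible action and the casualties it is projected to cause."""
    name: str
    projected_casualties: int

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Apply the hypothetical 'kill the fewest victims' mandate:
    pick the option with the lowest projected death toll.
    Ties go to whichever option was listed first."""
    return min(options, key=lambda m: m.projected_casualties)

# Jaipuria's scenario: stay the course (five die) or swerve (one dies).
options = [
    Maneuver("stay course", 5),
    Maneuver("swerve right", 1),
]
print(choose_maneuver(options).name)  # prints "swerve right"
```

The code is three lines of logic; every ethical question — whose deaths count, how casualties are predicted, whether the passenger’s life weighs the same as a bystander’s — is hidden inside the numbers someone else chose to feed it.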
If you find these thoughts unsettling, perhaps it’s because they involve another version of the automation paradox. In the mechanical automation paradox described above, automated technology like autopiloted airplanes and self-driving cars allows humans to neglect the mechanical skills they once employed to perform these tasks for themselves. In the ethical automation paradox, automated systems allow humans to avoid reasoning about ethical dilemmas. This latter paradox may leave people less prepared to address other ethical quandaries when they arise. If my automobile makes the decision in a self-driving trolley problem scenario, will I be as capable of arguing coherently about torture, redistribution of wealth, assassination by drones, and other situations where I must balance the well-being of a few individuals against a concept of the greater good?

Automation is coming, in the form of automobiles and otherwise. We need to decide how we want our machines to handle ethical questions previously reserved for humans.