Nonfiction

The Trolley Problem: Autonomous Car Edition

by Ryan Terry

With the rise of new technology come fresh rules and ethical dilemmas. One of the biggest moral problems to emerge is how to program an autonomous car. Self-driving vehicles are in development at many companies, including Mercedes-Benz, Tesla, and of course, Google. These cars are already on the road in prototype form, and although they all carry safety drivers, they are fully capable of driving themselves. Google has even programmed its cars to behave differently and more carefully when they are around children. Driverless cars are coming, and we couldn’t stop them even if we wanted to. There are already many levels of autonomous and semi-autonomous vehicles; a car with features such as cruise control or an automatic braking system counts among them. But this is not where the problem lies. The problem is how to program a fully autonomous vehicle to react when a crash is inevitable: a situation, say, where the car must choose between hitting a group of people, or even a lone pedestrian, and crashing in a way that would kill its passenger.

This dilemma closely resembles the well-known ethical thought experiment, the Trolley Problem. A runaway trolley is headed down the tracks toward a group of five people. You are standing next to a lever that will switch the trolley onto a different set of tracks, where only one person stands in the way. Do you pull it? The answer would seem simple; however, in switching the tracks you would be directly responsible for the death of the one person, whereas if you did nothing you would hold no responsibility for the deaths of the five. The problem becomes even more complex and difficult to answer with variations such as the Fat Man. Here you face the same runaway trolley, only now you are standing on a footbridge above the tracks. You realize that you could stop the trolley with a heavy object, and it just so happens that a very fat man is standing next to you. Do you push him in front of the trolley in order to save the five people on the tracks? Once again, you must choose between being directly responsible for the death of a single person and being indirectly responsible for the deaths of five others.

The Autonomous Vehicle Problem goes like this: a car comes around a corner to find a group of pedestrians standing in the road. It can either try to stop, and surely hit the people, or swerve into a wall, surely killing its passenger. The dilemma grows more complicated once you account for the number of people in the car, or for any children among the pedestrians or the passengers. The legal side complicates things further. Who would be convicted for the deaths? The passenger can hardly be held responsible, since they were never in control of the vehicle. Would the car’s manufacturer be legally at fault? The specific technician who programmed the car? Or would it be ruled an accident, with no conviction at all? For the time being, questions like these are completely unanswerable; even the experts have no idea how to resolve them.
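To make the dilemma concrete, here is a purely hypothetical sketch, in Python, of what a naive "utilitarian" crash rule might look like. No manufacturer has published decision logic like this; every name and number below is invented for illustration.

```python
# Hypothetical sketch only: no manufacturer has published crash-decision
# logic like this. All names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver once a crash is unavoidable."""
    action: str              # e.g. "brake straight", "swerve into wall"
    pedestrians_killed: int
    passengers_killed: int

def choose_maneuver(outcomes):
    """A naive utilitarian rule: minimize total deaths, counting the
    passenger's life exactly the same as a stranger's."""
    return min(outcomes, key=lambda o: o.pedestrians_killed + o.passengers_killed)

# The scenario above: brake and hit five pedestrians,
# or swerve into a wall and kill the single passenger.
scenario = [
    Outcome("brake straight", pedestrians_killed=5, passengers_killed=0),
    Outcome("swerve into wall", pedestrians_killed=0, passengers_killed=1),
]

print(choose_maneuver(scenario).action)  # -> "swerve into wall"
```

Written out this way, the controversy is easy to see: weight the passenger’s life more heavily in that one comparison and the car swerves less often. The entire ethical debate lives in a single line of arithmetic that someone, somewhere, has to choose.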

Situations where a fatal crash is truly inevitable are rare, and because autonomous cars are free of human error, they are far less likely to get into an accident in the first place; if everyone rode in self-driving vehicles, net fatalities would fall dramatically. Statistics show that 94% of automobile accidents in the U.S. are caused by human error (Google, 2015), and turning control over to an autonomous vehicle removes that error entirely. With over 1.2 million automobile-related deaths worldwide each year (Google, 2015), self-driving cars could prevent hundreds of thousands of deaths. KPMG estimates that between 2015 and 2030, self-driving vehicles will lead to 2,500 fewer deaths in the UK alone (Greenough, 2015). Over the 1.8 million miles that Google’s autonomous cars have driven, the company reports that in the 12 accidents the cars have been involved in, “Not once was the self-driving car the cause of the accident” (Lafrance, 2015). And even if autonomous vehicles do kill some pedestrians in very rare situations, a human driver in the same position would likely try to stop the car, even knowing it was hopeless, rather than cause their own death by swerving. Whether it is a human driver trying to save themselves or their vehicle trying to save them, some pedestrian fatalities will occur.
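As a rough back-of-envelope check on those figures (a sketch only: applying the U.S. human-error percentage to the worldwide total is a loose extrapolation, not a published estimate):

```python
# Back-of-envelope estimate using the figures cited above. This is an
# illustration of the essay's reasoning, not a published projection;
# the 94% figure is for U.S. crashes, applied loosely to the world total.

worldwide_deaths_per_year = 1_200_000   # Google, 2015
human_error_share = 0.94                # Google, 2015 (U.S. figure)

# If autonomous cars eliminated even a fraction of human-error crashes,
# the savings would plausibly run to hundreds of thousands of lives.
upper_bound = worldwide_deaths_per_year * human_error_share
print(f"Up to roughly {upper_bound:,.0f} deaths per year tied to human error")
# -> Up to roughly 1,128,000 deaths per year tied to human error
```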

Autonomous vehicles are projected to reach the general market, and the roads in great numbers, within the next five years (Greenough, 2015). Because these cars are much safer than human drivers, they will mean far fewer accidents and far fewer deaths from automobile crashes. However, if people believe their self-driving car might kill them to spare a group of strangers in one of these “greater good” situations, they will not trust the cars and will be unlikely to buy them. Humans, like any animal, have no truly altruistic motives; any action that seems altruistic can be traced back to selfish reasoning, and people will not put their own lives below those of complete strangers. If people know that their cars will keep them safe and act in their interest in a crash, they will be far more likely to own and use them, which will drastically cut the number of fatal car crashes. For that reason, it is important that self-driving vehicles not kill their passengers.

References

Google. (2015). Why self-driving cars matter. Google Self-Driving Car Report. Retrieved from https://www.google.com/selfdrivingcar/

Greenough, J. (July 29, 2015). The Self-Driving Car Report: Forecasts, tech timelines, and the benefits and barriers that will impact adoption. Business Insider: Tech. Retrieved from http://www.businessinsider.com/report-10-million-self-driving-cars-will-be-on-the-road-by-2020-2015-5

Lafrance, A. (June 8, 2015). When Google Self-Driving Cars Are in Accidents, Humans Are to Blame. The Atlantic: Technology. Retrieved from http://www.theatlantic.com/technology/archive/2015/06/every-single-time-a-google-self-driving-car-crashed-a-human-was-to-blame/395183

O’Callaghan, J. (October 26, 2015). Should A Self-Driving Car Kill Its Passengers In A “Greater Good” Scenario? IFLScience: Technology. Retrieved from http://www.iflscience.com/technology/should-self-driving-car-be-programmed-kill-its-passengers-greater-good-scenario

Turck, M. An Autonomous Car Might Decide You Should Die. Backchannel. Retrieved from https://medium.com/backchannel/reinventing-the-trolley-problem-85f3d1730756

Ryan Terry is an Early Honors freshman who has lived in Alaska her whole life. She trains year-round for Nordic skiing, and spends her free time adventuring in the mountains or relaxing with a good book and a cup of coffee.

One Comment

  • Edward Yang

    This is an interesting article about the new problems arising from our rapidly improving technologies. As you said, autonomous cars have become a controversial topic, and the dilemma could impede the spread of this awesome new technology if we cannot resolve it smoothly.
    Again, nice article. It makes me curious about the future of autonomous cars, and about the arguments people will make over them.
