The Trolley problem is not helpful for autonomous vehicles

Determining what autonomous driving algorithms should do in difficult life-and-death situations is a real problem. Until now, many have likened it to the famous ‘trolley problem’.

There is a runaway trolley barreling down its tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them but you are standing in the train yard next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:

  1. Do nothing, in which case the trolley will kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

The problem asks: which is the more ethical option? Or, more simply: what is the right thing to do?

Analysts have noted that variations of these trolley problems largely just highlight the difference between deontological and consequentialist ethical systems. Researchers, however, are finding that this distinction isn’t actually that useful for determining what autonomous driving algorithms should do.

Instead, they note that drivers face far more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?

For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.

Researchers developed a series of experiments to collect data on how humans make moral judgments about the decisions drivers make in low-stakes traffic situations, and from that data developed the Agent Deed Consequence (ADC) model.
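As the name suggests, the ADC model evaluates a decision along three factors: the character of the agent, the deed itself, and the consequence. A minimal sketch of that idea follows; the simple additive scoring and the +1/−1 coding are illustrative assumptions, not the empirically fitted model from the research.

```python
# Illustrative sketch of an ADC-style moral judgment score.
# The three factor names come from the ADC model; the additive
# combination and the example values are assumptions for clarity.

def adc_score(agent: int, deed: int, consequence: int) -> int:
    """Combine judgments of the Agent, the Deed, and the Consequence,
    each coded as +1 (judged positive) or -1 (judged negative)."""
    for factor in (agent, deed, consequence):
        if factor not in (-1, 1):
            raise ValueError("each factor must be +1 or -1")
    return agent + deed + consequence

# A well-intentioned driver (+1) breaking the speed limit (-1)
# to rush a passenger to hospital, arriving safely (+1):
print(adc_score(agent=1, deed=-1, consequence=1))    # 1

# A reckless driver (-1) running a red light (-1), causing a crash (-1):
print(adc_score(agent=-1, deed=-1, consequence=-1))  # -3
```

The point of the model is that the same deed (speeding, say) can be judged differently depending on who did it and what happened as a result, which the toy scores above capture in a crude way.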

The approach is highly utilitarian: it side-steps complex ethical problems by simply collecting data on what ordinary people consider ethical or not. The early ADC research claims that the judgments of ordinary people very often match those of ethics experts, even though the former have no training in ethics. This empirical approach may be sufficient for some tasks, but it is inherently vulnerable to a larger problem: “If everyone jumped off a bridge, would you?” This is often called the bandwagon fallacy, and deferring moral decisions to the masses is something even Socrates argues against in The Republic.
