This blog reviews a spectacular episode of the show: "The Greater Good" (Mind Field, Season 2, Episode 1).
What is the Trolley Problem (TP)?
There is a switching point on a railway where the line branches into two tracks. A train is fast approaching, with five railroad workers on Track 1 and one on Track 2. None of them are aware of the oncoming train.
You are the switch controller at the branching point. You must decide whether to pull the lever (diverting the train to save the five people on Track 1) or do nothing (sparing the single person on Track 2).
Why conduct such an experiment?
It offers compelling social goods and has direct implications for mass transit and self-driving vehicles:
- Revealing the difference between instinct and philosophical reflection in the trolley problem.
- Understanding this difference so that we can train people to act the way they wish they would.
- Carrying these lessons into the design of self-driving vehicles.
How can the TPT be performed with ethical concerns addressed?
Ethics board approval:
- Filter out people who might have post-traumatic stress disorder.
- Trauma counselor on-site
How is TPT scenario set up?
- Subject signs in for a phony train comfort test.
- (A hot day is chosen for the test.) The researchers pretend the phony test will run a little late, and invite the subject into an air-conditioned remote switching station.
- The subject meets an experienced switch controller, who casually shows them how to operate the switch.
- The switch controller then leaves the subject alone in the room under some fake excuse.
- A fake Trolley Problem crisis occurs, and the subject must make a decision without external help.
What is the result?
The behaviors of six subjects are reported.
- Some froze; they realized something was wrong but weren't prepared: "I don't know what to do."
- Some said, "This is not on me. Someone else must be taking care of this."
- Some felt compelled to do something to save more lives.
Each subject reacted differently, shaped by their own background and experience.
With such a limited number of test subjects, no conclusion about human nature can be drawn from this experiment. What the correct behavior is in such difficult situations remains a hard ethical question.
In real life, any action may have potentially lethal consequences, since we can never be 100% certain about our predictions of the world. The goal of building self-driving vehicles is to remove human error from driving, and to reduce the likelihood that a vehicle ever enters circumstances where the machine (or the human) has to make a difficult decision like the Trolley Problem.
In my opinion, a self-driving vehicle (SDV) should try as hard as possible to eliminate (or, if unavoidable, minimize) casualties. But on top of that, an SDV should always prioritize the safety of its passengers.
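This priority rule can be sketched as a toy lexicographic decision policy: first protect the passengers, then minimize total harm. Everything here (the `Maneuver` class, `choose_maneuver`, the harm estimates) is a hypothetical illustration of the ordering, not a real SDV API:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical candidate action the vehicle could take."""
    name: str
    passenger_harm: int   # expected casualties inside the vehicle
    external_harm: int    # expected casualties outside the vehicle

def choose_maneuver(options):
    """Lexicographic priority: minimize passenger harm first,
    then break ties by minimizing total casualties."""
    return min(options, key=lambda m: (m.passenger_harm,
                                       m.passenger_harm + m.external_harm))

options = [
    Maneuver("brake hard",          passenger_harm=0, external_harm=1),
    Maneuver("swerve into barrier", passenger_harm=1, external_harm=0),
    Maneuver("stay on course",      passenger_harm=0, external_harm=5),
]
print(choose_maneuver(options).name)  # → brake hard
```

The tuple key makes the ordering explicit: "swerve into barrier" would minimize total harm, but it loses to "brake hard" because passenger safety is compared first.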
While there is still a long way to go, people need to start now to iron out the technical, economic, and legal issues, to prepare for a world with cheap, automated transportation.