Why driverless cars don’t care about your ethical dilemmas
If you’ve been paying attention to media stories about driverless cars, you will have heard the concerns about what these cars will do when faced with ethical dilemmas: scenarios in which the car’s computer program has to pick who to kill in an impending collision. The problem is a variation of the famous philosophical ‘trolley problem’.
The trolley problem is a thought experiment intended to explore the ethics of action versus inaction in a no-win situation. The most common variant goes:
A runaway trolley/train/tram is speeding down railway tracks. Ahead of it, five people are tied to the tracks and are unable to escape. The trolley will kill them. You are standing near a lever in the train yard. If you pull the lever, the trolley will be diverted down a separate track. However, there is also one person tied up and unable to escape on the diverting track. You have two options: do nothing and the trolley kills five people, or pull the lever and the trolley kills only one. What do you do?
There are many interesting variants of the trolley problem, including ones that require you to decide whether you would push a fat man off a bridge in the name of the greater good. (The kind of people who come up with these questions worry me…)

Various people have raised concerns that, under certain circumstances, a driverless car could be faced with a similar dilemma. In an impending fatal crash, does the car swerve to avoid a pedestrian, killing the occupant of the car, or does it stay on course and kill the pedestrian, allowing the occupant to survive? How do we program ethics into the car’s computer?
Like me, most of the engineers I posed this question to responded, “That’s a stupid question.”
The trolley problem is a highly contrived scenario, so abstracted that it has lost all basis in reality. The problem is constructed so that you have no other options: there is no way to stop the trolley, no way to warn the people on the track or get them out of the way, and the trolley cannot be derailed by pulling the lever only halfway… In real life, and thus on the road, the trolley problem does not apply.
When this topic was brought up by some concerned acquaintances, the conversation went something like this:
Them: What would you do if your car was going to crash and you had to decide between killing a cyclist or a little old lady?
Me: I’d put on the brakes.
Them: What if your brakes have failed?
Me: I’d put on the hand brake.
Them: Both sets of brakes have failed.
Me: Why would I be driving a car with no working brakes?
Them: It’s hypothetical. Let’s say you forgot to get it serviced.
Me: I’d cut the engine and use that to slow down.
Them: You can’t do that.
Me: Why not?
Them: You just can’t.
Me: And I really can’t steer between the granny and the cyclist?
Them: No. You’re between two walls.
Me: I’d steer the car so as to graze along the wall and let friction stop it.
Them: You can’t do that.
Me: Why not?
Them: You just can’t.
Me: Your question is stupid.
I have only been driving for around 10 years, but in that time I have never had to make an ethical decision about who to kill. No one else I have talked to has ever had to make such a decision, and, I would be willing to bet, neither have you. If there were any real chance of us having to make ethical decisions about who to kill on the road, it would be part of driver’s licence exams. (For the love of God, please don’t mention this to the WA government. We don’t need an ethics test to go along with the road-law test, the practical exam, the driving-hours log book, the hazard perception test, the six-month curfew and the two-year probation.)
Like a human driver, the car should be working to avoid any sort of crash, and an autonomous car is likely to be a hell of a lot better at it than a human. With all-round sensors, the equivalent of having eyes in the back of your head, and reaction times that no human could ever hope to match, driverless cars are likely to make our roads far safer than they are now by removing the most failure-prone part of any vehicle: the squishy lump in the driver’s seat.
And how are the engineers working on Google’s self-driving car dealing with the trolley problem and ethical decisions? They are ignoring them, and are designing the car to avoid any crash as best it can. If the situation has developed to the point where the car has to choose who to run over, the car is so out of control that the question is moot.
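To make that point concrete, here is a deliberately toy sketch in Python of what such crash-avoidance logic amounts to. All of the names here (Manoeuvre, collision_risk, choose_manoeuvre) are invented for illustration and have nothing to do with Google’s actual software; real planners are vastly more complex. The point is structural: the logic scores each available manoeuvre by its estimated risk of hitting anything and picks the safest, and at no point does it ask who or what the obstacle is.

```python
# Toy illustration only: the types and names are invented for this sketch,
# not taken from any real autonomous-vehicle system.

from dataclasses import dataclass
from typing import List


@dataclass
class Manoeuvre:
    name: str
    collision_risk: float   # estimated probability of hitting anything, 0.0-1.0
    harshness: float        # how aggressive the manoeuvre is, 0.0-1.0


def choose_manoeuvre(candidates: List[Manoeuvre]) -> Manoeuvre:
    """Pick the manoeuvre that minimises collision risk.

    Note what is *not* here: no classification of obstacles as cyclists or
    grannies, and no ranking of whose life matters more. The planner just
    tries not to hit anything, preferring gentler manoeuvres as a tie-breaker.
    """
    return min(candidates, key=lambda m: (m.collision_risk, m.harshness))


if __name__ == "__main__":
    options = [
        Manoeuvre("brake hard", collision_risk=0.05, harshness=0.8),
        Manoeuvre("swerve left", collision_risk=0.20, harshness=0.6),
        Manoeuvre("maintain course", collision_risk=0.90, harshness=0.0),
    ]
    print(choose_manoeuvre(options).name)  # -> "brake hard"
```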

Driverless cars will not be infallible; no human-made system is. But they have the potential to make our roads safer and our journeys far more pleasant. My only worry is that the car’s software will be vulnerable to hackers, and that one day, when deciding whether to hit the cyclist or the little old lady, my car will attempt a 7-10 split.
To which I’d add the ethical dilemma:
If a driverless car realises that its passenger is destined to become an Adolf Hitler-type megalomaniac who would be responsible for the deaths of millions, should the car deliberately kill the passenger? And what if killing the passenger necessarily involved killing a cute kitten as well?