2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Associate Professor, Cognitive Science, University of California, San Diego; Author, What the F: What Swearing Reveals About Our Language, Our Brains, and Ourselves
Moral Machines

Machines make decisions for us.

Today, a trading machine in Manhattan detected a change in stock prices and decided in microseconds to buy millions of shares of a tech company.

Today, a driving machine in Mountain View detected a pedestrian and decided to turn the wheels to the left.

Whether these machines are "thinking" or not isn't the issue. They're collecting input, performing computations, making decisions, and acting on the world—whether we want to call that "thinking" is merely a matter of semantics (trust me, I'm a semanticist).

The real issue is the decisions we're empowering them to make. More and more, the decisions machines make are consequential. People's savings depend on them. So do their lives. And as machines begin to make decisions that are more consequential for humans, for animals, for the environment, and for national economies, the stakes get higher.

Consider this scenario. A self-driving car detects a pedestrian running out in front of it across a major road. It quickly apprehends that there is no harm-free course of action. Remaining on course would cause a collision and inevitable harm to the pedestrian. Braking quickly would lead the car to be rear-ended, with the attendant damage and possible injuries. So would veering off-course. What protocol should a machine use to decide? How should it quantify and weigh different types of potential harm to different actors? How many injuries of what likelihood and severity are worth a fatality? How much property damage is worth a 20% chance of whiplash?
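To see how inescapable those questions are, here is a minimal sketch, in Python, of what one such protocol might look like. Everything in it is hypothetical: the action names, the probabilities, the severity scale, and the per-party weights are placeholders, and choosing their values is precisely the moral judgment at issue, not something the code can supply.

```python
# A minimal sketch of a harm-weighing protocol -- not any real vehicle's logic.
# Every number below (severity scale, party weights, probabilities) is a
# hypothetical moral judgment that some human must make.
from dataclasses import dataclass

@dataclass
class Outcome:
    party: str          # who is affected: "pedestrian", "occupant", "other driver"
    probability: float  # estimated likelihood of this outcome
    severity: float     # 0 = no harm ... 1 = fatality (who chooses this scale?)

# Relative moral weight given to harm to each party -- a value choice, not a fact.
PARTY_WEIGHTS = {"pedestrian": 1.0, "occupant": 1.0, "other driver": 1.0}

def expected_harm(outcomes):
    """Sum probability-weighted, party-weighted severity over possible outcomes."""
    return sum(PARTY_WEIGHTS[o.party] * o.probability * o.severity for o in outcomes)

# Hypothetical numbers for the three courses of action in the scenario above.
actions = {
    "stay on course": [Outcome("pedestrian", 0.9, 1.0)],
    "brake hard":     [Outcome("occupant", 0.2, 0.1),
                       Outcome("other driver", 0.2, 0.1)],
    "veer left":      [Outcome("occupant", 0.3, 0.4),
                       Outcome("other driver", 0.1, 0.3)],
}

least_harmful = min(actions, key=lambda a: expected_harm(actions[a]))
print(least_harmful)  # "least harmful" only under whoever set these weights
```

The arithmetic is trivial; the hard part is every constant in it.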

Questions like these are hard to answer. They're questions that you can't solve with more data or more computing power. They're questions about what's morally right. We're charging machines with moral decisions.

Faced with a conundrum like this, we often turn to humans as a model. What would a person do? Let's recreate that in the machine.

The problem is that when it comes to moral decisions, humans are consistently inconsistent. What people say they believe is right and what they actually do often don't match (consider the case of Kitty Genovese). Moral calculus differs over time and from culture to culture. And minute details of each specific scenario matter deeply to people's actual decisions. Is the pedestrian a child or an adult? Does he look intoxicated? Does he look like a fleeing criminal? Is the car behind me tailgating?

What's the right thing for a machine to do?

What's the right thing for a human to do?

Science is ill-equipped to answer moral questions. Yet the decisions we're already handing to machines guarantee that someone will have to answer them. And there may be a limited window left to ensure that that someone is human.