In the early 1980s, I interviewed a young student of Marvin Minsky's, one of the founders of Artificial Intelligence. The student told me that, as he saw it, his hero, Minsky, was trying to build a machine beautiful enough that "a soul would want to live in it." More recently, we are perhaps less metaphysical, more practical. We envisage eldercare-bots, nanny-bots, teacher-bots, sex-bots. To go back to Minsky's student, these days, we're not trying to invent machines that souls would want to live in but that we would want to live with. We are trying to invent machines that a self would want to love.
The dream of the artificial confidante and then love object confuses categories that are best left unmuddled. Human beings have bodies and a life cycle, live in families, and grow up from dependence to independence. This gives them experiences of attachment, loss, pain, fear of illness, and of course death, experiences that are specific to us and that we don't share with machines. To say this does not mean that machines can't get very smart or learn a stunning amount of things, more things certainly than people can know. But they are the wrong object for the job when what we want is companionship and love.
A machine companion for instrumental help (to keep one safe in one's home, to help with cleaning or with reaching high shelves) is an excellent idea. A machine companion for conversation about human relationships seems a bad one. A conversation about human relationships is species-specific. These conversations depend on having the experiences that come from having a human body, human limitations, a human life cycle.
I see us embarked on a voyage of forgetting.
We forget about the care and conversation that can only pass between people. The word conversation derives from words that mean to tend to each other, to lean toward each other. To converse, you have to listen to someone else, to put yourself in their place, to read their body, their voice, their tone, their silences. You bring your concern and experience to bear and you expect the same. A robot that shares information is an excellent project. But if the project is companionship and mutuality of attachment, you want to lean toward a human.
When we think, for example, about giving children robot babysitters, we forget that what makes children thrive is learning that people care for them in a stable and consistent way. When children are with people, they recognize how the movement and meaning of speech, voice, inflection, faces, and bodies flow together. Children learn how human emotions play in layers, seamlessly and fluidly. No robot has this to teach.
There is a general pattern in our discussions of robot companionship: I call it "from better than nothing to better than anything." I hear people begin with the idea that robot companionship is better than nothing, as in "there are no people for these jobs," for example jobs in nursing homes or as babysitters. And then they start to exalt the possibilities of what simulation can offer. In time, people start to talk as though what we will get from the artificial might be better than what life could provide. Childcare workers might be abusive. Nurses might make mistakes; nursing home attendants might not be clever or well educated.
The appeal of robotic companions carries our anxieties about people. We see artificial intelligence as a risk-free way to avoid being alone. We fear that we will not be there to care for each other. We are drawn to the robotic because it offers the illusion of companionship without the demands of friendship. Increasingly, people even suggest that it might offer the illusion of love without the demands of intimacy. We are willing to put robots in places where they have no place, not because they belong there but because of our disappointments with each other.
For a long time, putting hope in artificial intelligence or robots has expressed an enduring technological optimism, a belief that as things go wrong, science will go right. In a complicated world, robots have always seemed like calling in the cavalry. Robots save lives in war zones and operating rooms; they can function in deep space, in the desert, in the sea, wherever the human body would be in danger. But in the pursuit of artificial companionship, we are not looking for the feats of the cavalry but the benefits of simple salvations.
What are the simple salvations? These are the hopes that artificial intelligences will be our companions. That talking with us will be their vocation. That we will take comfort in their company and conversation.
In my research over the past fifteen years, I've watched these hopes for the simple salvations persist and grow stronger even though most people have no experience with an artificial companion at all, only with something like Siri, Apple's digital assistant on the iPhone, where the conversation is most likely to be "locate a restaurant" or "locate a friend."
But what my research shows is that even telling Siri to "locate a friend" moves quickly to the fantasy of finding a friend in Siri, something like a best friend, but in some ways better: one you can always talk to, one that will never be angry, one you can never disappoint.
When people talk this way, about friendship without mutuality, about friendship on tap, the simple salvations of artificial companionship don't seem so simple to me. For the idea of artificial companionship to become our new normal, we have to change ourselves, and in the process, we are remaking human values and human connection. We change ourselves even before we make the machines. We think we are making new machines but really we are remaking people.