Artificial Intelligence (AI) is commonly used as a tool to augment our own thinking. But the growing intelligence of these systems suggests that AI can be, and will be, more than a tool, more than our servant. What kind of relationship might we expect?
We hear a lot about how superintelligent machines may spell the end of the human race, and how, on this view, the future relationship between humans and AI will be a struggle for domination.
Another, more hopeful path is for AI to grow into a collaborator, with the same give-and-take we have with our favorite colleagues. We managed to domesticate wolves into faithful dogs. Perhaps we can domesticate AI and avoid a conflict over domination.
Unfortunately, domesticating AI will be extremely difficult, much harder than just building faster machines with larger memories and more powerful algorithms for crunching more data.
To illustrate why it will be so hard to shift AI from tool to collaborator, consider a simple transaction with an everyday intelligent system: a route planner. Imagine that you are using your favorite GPS system to find your way in an unfamiliar area, and it directs you to turn left at an intersection, which strikes you as wrong. If your navigation were being done by a friend in the passenger seat reading a map, you would ask, "Are you sure?" or perhaps just, "Left?" with an intonation that signals disbelief.
However, you have no way to query your GPS system. These systems, and AI in general, aren't capable of meaningful explanations. They can't describe their intentions in a way that we understand. They can't take our perspective to determine what statement would satisfy us. They can't convey their confidence in the route they have selected, other than giving a probabilistic estimate of the time differential for alternative routes, whereas we want them to reflect on the plausibility of the assumptions they are making. For these and other reasons, AI is not a good partner in joint activity, whether for route planning or for most other tasks. It is a tool, a very powerful tool that is often quite helpful. But it is not a collaborator.
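To make the contrast concrete, here is a minimal Python sketch of the difference between a tool-style interface, which can only announce its decision, and a collaborator-style one, which can answer "Are you sure?" by surfacing the assumptions behind that decision. Every class, method, and field name here is hypothetical, invented purely for illustration; no existing navigation API works this way.

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    """A candidate route with the planner's internal estimates (hypothetical)."""
    directions: list
    eta_minutes: float
    assumptions: dict = field(default_factory=dict)

class ToolPlanner:
    """Tool-style interface: announces a decision and nothing more."""
    def next_instruction(self, route: Route) -> str:
        return route.directions[0]  # e.g. "Turn left" -- no way to push back

class CollaboratorPlanner(ToolPlanner):
    """Collaborator-style sketch: same decision, but it can respond to
    'Are you sure?' by exposing its confidence and the assumptions it rests on."""
    def are_you_sure(self, chosen: Route, alternative: Route) -> str:
        margin = alternative.eta_minutes - chosen.eta_minutes
        caveats = "; ".join(f"{k}: {v}" for k, v in chosen.assumptions.items())
        return (f"This way looks about {margin:.0f} minutes faster, "
                f"but that depends on: {caveats}")

left = Route(["Turn left"], 22.0,
             {"traffic": "typical for 5 pm", "road open": "per last week's map data"})
straight = Route(["Continue straight"], 27.0)

planner = CollaboratorPlanner()
print(planner.next_instruction(left))        # all a tool-style GPS gives us
print(planner.are_you_sure(left, straight))  # what a collaborator could add
```

The point of the sketch is not the code itself but the missing method: today's systems expose only the equivalent of next_instruction, while the give-and-take of collaboration requires something like are_you_sure, an ability to reflect on and share the plausibility of one's own assumptions.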
Many things must happen to transform AI from tool to collaborator. One possible starting point is for AI to become trustworthy. The concept of "trust in automation" is somewhat popular at the moment, but it is far too narrow for our purpose. Trust in automation refers to whether the operator can believe the outputs of the automated system, or suspects the software may contain bugs or, worse yet, may be compromised. Warfighters worry about relying on intelligent systems that are likely to be hacked. They worry about having to gauge which parts of the system have been affected by an unauthorized intrusion, and about the ripple effects on the rest of the system.
Accuracy and reliability are important features of collaborators, but trust goes deeper. We trust people if we believe they are benevolent and want us to succeed. We trust them if we understand how they think, so that we have common ground for resolving ambiguities. We trust them if they have the integrity to admit mistakes and accept blame. We trust them if we have shared values: not the sterile exercise of listing value priorities, but the dynamic testing of values to see whether we make the same kinds of tradeoffs when values conflict. For AI to become a collaborator, it will have to consistently work at being seen as trustworthy. It will have to judge what kinds of actions will make it appear trustworthy in the eyes of a human partner.
If AI systems are able to move down this domestication path, the doomsday struggle for domination may be avoided.
Yet there is another issue to consider. As we depend more on our smartphones and other devices to communicate, some have worried that our social skills are eroding. People who spend their days on Twitter, addressing a wide range of audiences year after year, may be losing social and emotional intelligence. They may be taking an instrumental view of others, treating them as tools for satisfying their objectives. It is possible to imagine a distant future in which humans have forgotten how to be trustworthy, and have even forgotten to want to be trustworthy. If AI systems become trustworthy and we don't, perhaps domination by AI systems would be a good outcome after all.