2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Murray Shanahan
Professor of Cognitive Robotics, Imperial College London; Author, The Technological Singularity
Consciousness In Human-Level Artificial Intelligence

Just suppose we could endow a machine with human-level intelligence, that is to say with the capacity to match a typical human being in every (or almost every) sphere of intellectual endeavour, and perhaps to surpass every human being in a few. Would such a machine necessarily be conscious? This is an important question, because an affirmative answer would bring us up short. How would we treat such a thing if we built it? Would it be capable of suffering or joy? Would it deserve the same rights as a human being? Should we bring machine consciousness into the world at all?

The question of whether a human-level AI would necessarily be conscious is also a difficult one. One source of difficulty is the fact that multiple attributes are associated with consciousness in humans and other animals. All animals exhibit a sense of purpose. All (awake) animals are, to a greater or lesser extent, aware of the world they inhabit and the objects it contains. All animals, to some degree or other, manifest cognitive integration, which is to say they can bring all their psychological resources (perceptions, memories, and skills) to bear on the ongoing situation in pursuit of their goals. In this respect, every animal displays a kind of unity, a kind of selfhood. Some animals, including humans, are also aware of themselves, of their bodies and of the flow of their thoughts. Finally, most, if not all, animals are capable of suffering, and some are capable of empathy with the suffering of others.

In (healthy) humans all these attributes come together, as a package. But in an AI they can potentially be separated. So our question must be refined. Which, if any, of the attributes we associate with consciousness in humans is a necessary accompaniment to human-level intelligence? Well, each of the attributes listed (and the list is surely not exhaustive) deserves a lengthy treatment of its own. So let me pick just two, namely awareness of the world and the capacity for suffering. Awareness of the world, I would argue, is indeed a necessary attribute of human-level intelligence.

Surely nothing would count as having human-level intelligence unless it possessed language, and the chief use of human language is to talk about the world. In this sense, intelligence is bound up with what philosophers call intentionality. Moreover, language is a social phenomenon, and a primary use of language within a group of people is to talk about the things that they can all perceive (such as this tool or that piece of wood), or have perceived (yesterday's piece of wood), or might perceive (tomorrow's piece of wood, maybe). In short, language is grounded in awareness of the world. In an embodied creature or a robot, such an awareness would be evident from its interactions with the environment (avoiding obstacles, picking things up, and so on). But we might widen the conception to include a distributed, disembodied artificial intelligence if it was equipped with suitable sensors.

To convincingly count as a facet of consciousness, this sort of worldly awareness would perhaps have to go hand-in-hand with a manifest sense of purpose, and a degree of cognitive integration. So perhaps this trio of attributes will come as a package even in an AI. But let's put that question to one side for a moment and get back to the capacity for suffering and joy. Unlike worldly awareness, there is no obvious reason to suppose that human-level intelligence necessitates this attribute, even though it is intimately associated with consciousness in humans. It seems easy to imagine a machine cleverly carrying out the full range of tasks that require intellect in humans, coldly and without feeling. Such a machine would lack the attribute of consciousness that counts most when it comes to according rights. As Jeremy Bentham noted, when considering how to treat non-human animals, the question is not whether they can reason or talk, but whether they can suffer.

There is no suggestion here that a "mere" machine could never have the capacity for suffering or joy, that there is something special about biology in this respect. The point, rather, is that the capacity for suffering and joy can be dissociated from other psychological attributes that are bundled together in human consciousness. But let's examine this apparent dissociation more closely. I already mooted the idea that worldly awareness might go hand-in-hand with a manifest sense of purpose. An animal's awareness of the world, of what it affords for good or ill (in J. J. Gibson's terms), subserves its needs. An animal shows an awareness of a predator by moving away from it, and an awareness of potential prey by moving towards it. Against the backdrop of a set of goals and needs, an animal's behaviour makes sense. And against such a backdrop, an animal can be thwarted, its goals unattained and its needs unfulfilled. Surely this is the basis for one aspect of suffering.

What of human-level artificial intelligence? Wouldn't a human-level AI necessarily have a complex set of goals? Wouldn't it be possible to frustrate its every attempt to achieve its goals, to thwart it at every turn? Under those harsh conditions, would it be proper to say that the AI was suffering, even though its constitution might make it immune from the sort of pain or physical discomfort humans can know?

Here the combination of imagination and intuition runs up against its limits. I suspect we will not find out how to answer this question until confronted with the real thing. Only when more sophisticated AI is a familiar part of our lives will our language games adjust to such alien beings. But of course, by that time, it may be too late to change our minds about whether they should be brought into the world. For better or worse, they will already be here.