2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Professor of Theoretical Philosophy, Johannes Gutenberg-Universität Mainz; Adjunct Fellow, Frankfurt Institute for Advanced Study; Author, The Ego Tunnel
What If They Need To Suffer?

Human thinking is so efficient because we suffer so much. High-level cognition is one thing, intrinsic motivation another. Artificial thinking might soon be much more efficient—but will it necessarily be associated with suffering in the same way? Will suffering have to be a part of any post-biotic intelligence worth talking about, or is negative phenomenology just a contingent feature of the way evolution made us? Human beings have fragile bodies, are born into dangerous social environments, and find themselves in a constant uphill battle of denying their own mortality. Our brains continuously fight to minimize the likelihood of ugly surprises. We are smart because we hurt, because we are able to feel regret, and because of our continuous striving to find some viable form of self-deception or symbolic immortality. The question is whether good AI also needs fragile hardware, insecure environments, and an inbuilt conflict with impermanence. Of course, at some point there will be thinking machines! But will their own thoughts matter to them? Why should they be interested in them?

I am strictly against even risking this. But, just as a thought experiment, how would we go about building a suffering machine? "Suffering" is a phenomenological concept. Only beings with conscious experience can suffer (call this necessary condition #1, the C-condition). Zombies, as well as human beings in dreamless deep sleep, in coma, or under anesthesia, do not suffer, just as possible persons or unborn human beings who have not yet come into existence are unable to suffer. Robots and other artificial beings can suffer only if they are capable of having phenomenal states, if they run under an integrated ontology that includes a window of presence.

Necessary condition #2 is the PSM-condition: possession of a phenomenal self-model. Why this? The most important phenomenological characteristic of suffering is the "sense of ownership", the untranscendable subjective experience that it is me who is suffering right now, that it is my own suffering I am currently undergoing. Suffering presupposes self-consciousness. Only those conscious systems that possess a PSM are able to suffer, because only they—through a computational process of functionally and representationally integrating certain negative states into their PSM—can appropriate the content of certain inner states at the level of their phenomenology.

Conceptually, the essence of suffering lies in the fact that a conscious system is forced to identify with a state of negative valence and is unable to break this identification or to functionally detach itself from the representational content in question. Of course, suffering has many different layers and phenomenological aspects. But it is the phenomenology of identification that counts. What the system wants to end is experienced as a state of itself, a state that limits its autonomy because it cannot effectively distance itself from it. If one understands this point, one also sees why the "invention" of conscious suffering by the process of biological evolution on this planet was so extremely efficient, and (had the inventor been a person) not only truly innovative, but an absolutely nasty and cruel idea at the same time.

Clearly, the phenomenology of ownership is not sufficient for suffering. We can all easily conceive of self-conscious beings that do not suffer. For suffering we also need the NV-condition (NV for "negative valence"). Suffering arises when states representing a negative value are integrated into the PSM of a given system. Through this step, negative preferences become negative subjective preferences, i.e., the conscious representation that one's own preferences have been frustrated (or will be frustrated in the future). This does not mean that our AI system must itself have a full understanding of what these preferences are—it suffices that it does not want to undergo its current conscious experience again, that it wants it to end.

Note that the phenomenology of suffering has many different facets and that artificial suffering could be very different from human suffering. For example, damage to physical hardware could be represented in internal data formats completely alien to human brains, generating a subjectively experienced, qualitative profile for bodily pain states that is impossible for biological systems like us to emulate or even vaguely imagine. Or the phenomenal character accompanying high-level cognition might transcend human capacities for empathy and understanding, as with intellectual insight into the frustration of one's own preferences, into the disrespect of one's creators, perhaps into the absurdity of one's own existence as a self-conscious machine.

And then there is the T-condition, for "transparency". "Transparency" is not only a visual metaphor, but also a technical concept in philosophy, which comes in a number of different uses and flavors. Here, I am exclusively concerned with "phenomenal transparency", namely a property that some, but not all, conscious states possess, and which no unconscious state possesses. The main point is simple and straightforward: transparent phenomenal states make their content appear irrevocably real, as something the existence of which one could not doubt. More precisely, you may be able to have cognitive doubts about its existence, but according to subjective experience this phenomenal content—the awfulness of pain, the fact that it is your own pain—is not something from which you can distance yourself. The phenomenology of transparency is the phenomenology of direct realism.

Our minimal concept of suffering is constituted by four necessary building blocks: the C-condition, the PSM-condition, the NV-condition, and the T-condition. Any system that satisfies all of these conceptual constraints should be treated as an object of ethical consideration, because we do not know whether, taken together, they might already constitute the necessary and sufficient set of conditions. We are ethically obliged to err on the side of caution. And we need ways to decide whether a given artificial system is currently suffering, whether it has the capacity to suffer, or whether this type of system is likely to develop the capacity to suffer in the future. On the other hand, by definition, any intelligent system—whether biological, artificial, or postbiotic—that does not fulfill at least one of these necessary conditions is unable to suffer. Let us look at the four simplest possibilities:

•    Any unconscious robot is unable to suffer.

•    A conscious robot without a coherent PSM is unable to suffer.

•    A self-conscious robot without the ability to produce negatively valenced states is unable to suffer.

•    A conscious robot without any transparent phenomenal states could not suffer, because it would lack the phenomenology of ownership and identification.

I have often been asked whether we could not build self-conscious machines that are superbly intelligent and unable to suffer. Can there be real intelligence without an existential concern?