2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Neuroscientist; Collège de France, Paris; Author, How We Learn
Two Cognitive Functions That Machines Still Lack

When Turing invented the theoretical device that became the computer, he confessed that he was attempting to copy "a man in the process of computing a real number," as he wrote in his seminal 1936 paper. In 2015, studying the human brain is still our best source of ideas about thinking machines. Cognitive scientists have discovered two functions that, I argue, are essential to genuine thinking as we know it, and that have so far escaped programmers' sagacity.

1. A global workspace

Current programming is inherently modular. Each piece of software operates as an independent "app", stuffed with its own specialized knowledge. Such modularity allows for efficient parallelism, and the brain too is highly modular—but it is also able to share information. Whatever we see, hear, know or remember does not remain stuck within a specialized brain circuit. Rather, the brain of all mammals incorporates a long-distance information-sharing system that breaks the modularity of brain areas and allows them to broadcast information globally. This "global workspace" is what allows us, for instance, to attend to any piece of information on our retina, say a written letter, and bring it to our awareness so that we may use it in our decisions, actions, or speech programs. Think of a new type of clipboard that would allow any two programs to transiently share their inner knowledge in a user-independent manner. We will call a machine "intelligent" when it not only knows how to do things, but "knows that it knows them", i.e., makes use of its knowledge in novel, flexible ways, outside of the software that originally extracted it. An operating system so modular that it can pinpoint your location on a map in one window, but cannot use it to enter your address in the tax-return software in another window, is missing a global workspace.
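As a thought experiment, the clipboard analogy can be sketched in a few lines of code: a shared "workspace" object to which any module can broadcast, and from which every other module can read. This is a minimal, illustrative sketch—the class and module names are invented for the example, not a real API.

```python
# A minimal sketch of a "global workspace": a shared blackboard that any
# module can broadcast to, making its knowledge available to all other
# modules instead of staying locked inside the app that produced it.

class GlobalWorkspace:
    def __init__(self):
        self.subscribers = []   # modules listening for broadcasts

    def subscribe(self, module):
        self.subscribers.append(module)

    def broadcast(self, key, value, source):
        # Information entering the workspace is shared globally,
        # to every module except the one that produced it.
        for module in self.subscribers:
            if module is not source:
                module.receive(key, value)

class Module:
    """An 'app' with its own private store of specialized knowledge."""
    def __init__(self, name):
        self.name = name
        self.knowledge = {}

    def receive(self, key, value):
        self.knowledge[key] = value

# A hypothetical maps module broadcasts the user's address; a tax-return
# module, which never computed it, can now reuse it.
ws = GlobalWorkspace()
maps, tax = Module("maps"), Module("tax-return")
ws.subscribe(maps)
ws.subscribe(tax)
ws.broadcast("home_address", "10 Rue d'Ulm, Paris", source=maps)
print(tax.knowledge["home_address"])  # prints "10 Rue d'Ulm, Paris"
```

The point of the sketch is the broadcast step: once a fact enters the workspace, no module "owns" it, which is exactly what the windowed operating systems of 2015 lack.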

2. Theory-of-mind

Cognitive scientists have discovered a second set of brain circuits dedicated to the representation of other minds—what other people think, know or believe. Unless we suffer from a disease called autism, all of us constantly pay attention to others and adapt our behavior to their state of knowledge—or rather to what we think that they know. Such "theory-of-mind" is the second crucial ingredient that current software lacks: a capacity to attend to its user. Future software should incorporate a model of its user. Can she properly see my display, or do I need to enlarge the characters? Do I have any evidence that my message was understood and heeded? Even a minimal simulation of the user would immediately give a strong impression that the machine is "thinking". This is because having a theory-of-mind is required to achieve relevance (a concept first modeled by cognitive scientist Dan Sperber). Unlike present-day computers, humans do not say utterly irrelevant things, because they pay attention to how their interlocutors will be affected by what they say. The navigator software that tells you "at the next roundabout, take the second exit" sounds stupid because it doesn't know that "go straight" would be a much more compact and relevant message.
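The navigator example suggests what even a minimal user model would change: the software tracks what the user already expects and speaks only when a message would alter that expectation. The sketch below is an invented toy, not any real navigation API; the class names and the "expect straight by default" assumption are mine.

```python
# A toy "theory-of-mind" for a navigation assistant: the software models
# what the user already expects (by default, continuing straight) and
# only speaks when its message is relevant, i.e. would change that belief.

class UserModel:
    """A minimal model of the interlocutor's state of knowledge."""
    def __init__(self):
        self.expected_action = "continue straight"

class Navigator:
    def __init__(self):
        self.user = UserModel()

    def announce(self, next_action):
        # Relevance check: a message the user already expects is noise.
        if next_action == self.user.expected_action:
            return None  # stay silent rather than state the obvious
        self.user.expected_action = next_action
        return f"In 200 m, {next_action}."

nav = Navigator()
print(nav.announce("continue straight"))     # None: the user already knows
print(nav.announce("take the second exit"))  # only the informative message
```

Even this crude filter captures Sperber's point: relevance is defined relative to a model of the listener, so a machine without such a model cannot help sounding stupid.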

Global workspace and theory-of-mind are two essential functions that even a one-year-old child possesses, yet our machines still lack. Interestingly, these two functions have something in common: many cognitive scientists consider them the key components of human consciousness. The global workspace provides us with Consciousness 1.0: the sort of sentience that all mammals have, which allows them to "know what they know", and therefore use information flexibly to guide their decisions. Theory-of-mind is a more uniquely human function that provides us with Consciousness 2.0: a sense of what we know in comparison with what other people know, a capacity to simulate other people's thoughts, including what they think about us, therefore providing us with a new sense of who we are.

I predict that, once a machine pays attention to what it knows and what the user knows, we will immediately call it a "thinking machine", because it will closely approximate what we do.

There is huge room for improvement here in the software industry. Future operating systems will have to be rethought in order to accommodate such new capacities as sharing any data across apps, simulating the user's state of mind, and controlling the display according to its relevance to the user's inferred goals.