Signatures of Consciousness

Stanislas Dehaene [11.24.09]

For the past twelve years, my research team has been using all the brain research tools at its disposal, from functional MRI to electro- and magneto-encephalography and even electrodes inserted deep in the human brain, to shed light on the brain mechanisms of consciousness.

I am now happy to report that we have acquired a good working hypothesis. In experiment after experiment, we have seen the same signatures of consciousness: physiological markers that all, simultaneously, show a massive change when a person reports becoming aware of a piece of information (say a word, a digit or a sound).

STANISLAS DEHAENE is a Professor at the Collège de France and Chair of Experimental Cognitive Psychology. His research focuses on the cerebral bases of specifically human cognitive functions such as language, calculation, and reasoning. His work centers on the cognitive neuropsychology of language and reading, and his main scientific contributions include the study of the organization of the cerebral system for number processing. He is the author of The Number Sense: How Mathematical Knowledge Is Embedded In Our Brains and Reading in the Brain: The Science and Evolution of a Cultural Invention.

THE REALITY CLUB: Daniel Kahneman, Sam Harris, George Dyson, Steven Pinker, Donald Hoffman, Arnold Trehub


Introduction by
John Brockman

On October 17, Edge organized a Reality Club meeting at The Hotel Ritz in Paris to allow neuroscientist Stanislas Dehaene to present his new theory on how consciousness arises in the brain to a group of Parisian scientists and thinkers. The theory, based on Dehaene's past twelve years of brain-imaging research, is called the global neuronal workspace. It promises to offer new tools for diagnosing consciousness disorders in patients.

"For the past twelve years,"  says Dehaene, "my research team has been using every available brain research tool, from functional MRI to electro- and magneto-encephalography and even electrodes inserted deep in the human brain, to shed  light on the brain mechanisms of consciousness. I am now happy to report that we have acquired a  good working hypothesis. In experiment after experiment, we have seen the same signatures of consciousness: physiological markers that all, simultaneously, show a massive change when a person reports becoming aware of a piece of information (say a word, a digit or a sound). 

"Furthermore, when we render the same information non-conscious or "subliminal," all  the signatures disappear. We have a theory about why these signatures occur, called the global neuronal workspace theory. Realistic computer simulations of neurons reproduce our main experimental findings: when the information processed exceeds a threshold for large-scale communication across many brain areas, the network ignites into a large-scale synchronous state, and all  our signatures suddenly appear. 

"But this is already more than a theory. We are now applying our ideas to non-communicating patients in coma, vegetative state, or locked-in syndromes. The test that we have designed with Tristan Bekinschtein, Lionel Naccache, and Laurent Cohen, based on our past experiments and theory, seems to reliably sort out which patients retain some residual conscious life and which do not.

"My laboratory is now pursuing this research intensively on patients, animals, human adults and young children, with the hope of turning our brain-imaging measurements into a real-time monitor of conscious experience. The time thus seems ripe to share this work with a broader audience of readers interested in cutting-edge science and technology, but also those concerned with the philosophical, personal and ethical implications of these findings."


Participating in the event and joining the Edge dinner that followed were:

Noga Arikha, Historian of ideas; Author, Passions and Tempers: A History of the Humours 
Patrick Cavanagh, University of Paris researcher on visual perception and its implications 
Laurent Cohen, Neurologist, Hôpital de la Salpêtrière (Paris); Author, L'homme thermomètre (Thermometer Man), a science-based single-case study similar to the work of Oliver Sacks
Emmanuel Dupoux, Director of Laboratoire de Sciences Cognitives et Psycholinguistique (LSCP)
Ghislaine Dehaene-Lambertz, CNRS, Neuro-paediatrician and researcher studying infant brain development
Janine di Giovanni, Journalist; Vanity Fair and The New York Times
Juan Enriquez, Life Sciences investor and Academic; Author, As the Future Catches You
Etienne Klein, Physicist; Author of many books on epistemology and history of science
Katinka Matson, Cofounder, Edge
Lionel Naccache, Neurologist; Author, Le Nouvel Inconscient (The New Unconscious), which establishes a new science-based dialog between research on non-conscious processing and the Freudian view
Gloria Origgi, Philosopher and Researcher, Centre National de la Recherche Scientifique
Sharon Peperkamp, Linguist, University of Paris; Researcher, Laboratoire de Sciences Cognitives et Psycholinguistique
Philip Pettit, Philosopher, Professor of Politics and Human Values at Princeton; Author, Made with Words: Hobbes on Mind, Society and Politics
Jaqui Safra, Investor, (Encyclopædia Britannica, Spring Mountain Vineyards); Movie Producer
Dan Sperber, Directeur de Recherche au CNRS, Paris, Social and Cognitive scientist; Author, Rethinking Symbolism; On Anthropological Knowledge; Explaining Culture
Aalam Wassef, Digital Artist, Music Composer, Network Designer

— JB




SIGNATURES OF CONSCIOUSNESS

[STANISLAS DEHAENE] I want to start by opening the box and showing you a small object. It's not extraordinary, but I sort of like it. Of course, it's a brain. In fact, it is my brain, exactly as it was ten years ago. This is not its real size, of course. It's smaller than life and is only made of plaster. It is one of the first 3-D printouts that we made from a rapid prototyping machine, one of those 3-D printers that can take the computer constructions we build from our MRI scans of the brain, and turn them into real-life 3-D objects.

This is just brain anatomy. But I'm using it to ask this evening's big question: Can this biological object, the human brain, understand itself? In the ten years that have elapsed since my brain was printed, we've made a good bit of progress in understanding some of its parts. In the lab, for instance, we've been working on an area in the parietal lobe which is related to number processing, and we've also worked on this left occipito-temporal region that we call the visual word form area, and which is related to our knowledge of spelling and reading, and whose activation develops when we learn to read.

Since this is Edge, the idea is not to talk about what exists and has already been published, but rather, to present new developments. So I would like to tell you about a research project that we've been working on for almost ten years and which is now running at full speed — which is trying to address the much-debated issue of the biological mechanisms of consciousness.

Neuroscientists used to wait until they were in their 60's or 70's before they dared to raise the topic of consciousness. But I now think that the domain is ripe. Today we can work on real data, rather than talk about the issue in philosophical terms.

In the past, the major problem was that people barely looked at the brain and tried to generate theories of consciousness from the top, based solely on their intuitions. Excellent physicists, for instance, tried to tell us that the brain is a quantum computer, and that consciousness will only be understood once we understand quantum computing and quantum gravity. Well, we can discuss that later, but as far as I can see, it's completely irrelevant to understanding consciousness in the brain. One of the reasons is that the temperature at which the brain operates is incompatible with quantum computing. Another is that my colleagues and I have entered an MRI scanner on a number of occasions, and have probably changed our quantum state in doing so, but as far as I can judge, this experience didn't change anything related to our consciousness.

Quantum physics is just an example. There has been an enormous wealth of theories about consciousness, but I think that very few are valid or even useful. There is, of course, the dualist notion that we need a special stuff for consciousness and that it cannot be reduced to brain matter. Obviously, this is not going to be my perspective. There is also the idea that every cell contains a little consciousness, and that if you add it all together, you arrive at an increasing amount of consciousness — again this is also not at all my idea, as you will see.

We could go on and on, because people have proposed so many strange ideas about consciousness — but the question is, how to move forward with actual experiments. We've now done quite a few neuroimaging studies where we manage to contrast conscious and non-conscious processing, and we've tried to produce a theory from the results — so I would like to tell you about both of these aspects.

Finally, at the end of this talk, I'll say a few words about perspectives for the future. One of the main motivations for my research is to eventually be in a position to apply it to patients who have suffered brain lesions and appear to have lost the ability to entertain a flow of conscious states. In such patients, the problem of consciousness is particularly apparent and vital — literally a matter of life-or-death. They can be in coma, in vegetative states, or in the so-called minimally conscious state. In some cases we don't even know if they are conscious or not. They could just be locked in — fully conscious, but unable to communicate, a frightening state vividly described by Jean-Dominique Bauby in "The Diving Bell and the Butterfly." It is with such patients in mind that we must address the problem of consciousness. In the end, it's an extremely practical problem. Theories are fine, but we have to find ways that will allow us to get back to the clinic.

How to experiment with conscious states

So how do we experiment with consciousness? For a long time, I thought that there was no point in asking this question, because we simply could not address it. I was trained in a tradition, widely shared in the neuroscience and cognitive psychology communities, according to which you just cannot ask this question. Consciousness, for them, is not a problem that can be addressed. But I now think this is wrong. After reading A Cognitive Theory of Consciousness, a book by Bernard Baars, I came to realize that the problem can be reduced to questions that are simple enough that you can test them in the lab.

I want to say right from the start that this means that we have to simplify the problem. Consciousness is a word with many different meanings, and I am not going to talk about all of these meanings. My research addresses only the most simple meaning of consciousness. Some people, when they talk about consciousness, think that we can only move forward if we gain an understanding of "the self" — the sense of being I or Me. But I am not going to talk about that. There is also a notion of consciousness as a "higher order" or "reflexive" state of mind — when I know that I know. Again, this meaning of the term remains very difficult to address experimentally. We have a few ideas on this, but it's not what I want to talk about this evening.

Tonight I only want to talk about the simpler and addressable problem of what we call "access to consciousness." The brain is constantly bombarded with stimulation — and yet, we are only conscious of a very small part of it. In this room, for instance, it's absolutely obvious. We are conscious of one item here and another item there, say the presence of John behind the camera, or of some of the bottles on this table. You may not have noticed that there is a red label on the bottles. Although this information has been present on your retina since the beginning of my talk, it's pretty obvious that you are only now actually paying attention to it.

In brief, there is a basic distinction between all the stimuli that enter the nervous system, and the much smaller set of stimuli that actually make it into our conscious awareness. That is the simple distinction that we are trying to capture in our experiments. The first key insight, which is largely due to Francis Crick and Christof Koch, is that we must begin with the much simpler problem of understanding the mechanisms of access to consciousness for simple visual stimuli before we can attack the issue of consciousness and the brain.

The second key insight is that we can design minimal experimental contrasts to address this question. And by minimal contrasts, I mean that we can design experimental situations in which, by changing a very small element in the experiment, we turn something that is not conscious into something that is conscious. Patrick Cavanagh, who is here in the room, designed a large number of such illusions and stimulation paradigms. Tonight, I'll just give you one example, the so-called subliminal images that we've studied a lot.

If you flash words on a screen for a period of roughly 30 milliseconds, you see them perfectly. The short duration, in itself, is not a problem. What matters is that there is enough energy in the stimulus for you to see it. If, however, just after the word, you present another string of letters at the same location, you only see the string, not the word. This surprising invisibility occurs over a range of delays between the word and the consonant string (what we call the mask) on the order of 50 milliseconds. If the delay is shorter than 50 milliseconds, you do not see the hidden word. It's a well-known perceptual phenomenon called masking.

Now, if you lengthen the delay a little, you see the word each time. There is even a specific delay where the subjects can see the stimulus half of the time. So now you are in business, because you have an experimental manipulation that you can reproduce in the lab, and that systematically creates a change in consciousness.
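
For readers who like to see the structure of such an experiment spelled out, here is a minimal Python sketch of one masked-word trial. The word, the mask string, and the exact durations are illustrative stand-ins, not the calibrated values used in the actual experiments.

```python
# Illustrative sketch of the masking-trial timeline (values are stand-ins):
# a ~30 ms word, a variable word-mask delay, then a consonant-string mask.

def masking_trial(word, delay_ms, word_ms=30, mask="KQZBX", mask_ms=250):
    """Return the ordered list of (screen content, duration in ms) events."""
    return [
        ("fixation", 500),      # blank/fixation interval before the word
        (word, word_ms),        # briefly flashed target word
        ("blank", delay_ms),    # the critical word-mask delay (~50 ms region)
        (mask, mask_ms),        # letter-string mask at the same location
    ]

# Sweeping the delay around ~50 ms moves the word across the visibility threshold:
for delay in (17, 33, 50, 67, 100):
    print(delay, "ms delay:", masking_trial("RADIO", delay))
```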

Subjective reports versus objective performance

Of course, to define our experimental conditions, we are obliged to rely on the viewer's subjective judgments. This is a very important point — we do not rely simply on a change in the stimulus. What counts is a change in stimulus that the subject claims makes his perception switch from non-conscious to conscious. Here we are touching on a difficult point — how do we define whether a subject is conscious or not? In past research, many people have been very reluctant to use this sort of subjective report. Some have argued that it is very difficult or even impossible to do science based on subjective reports. But my point of view, and I share this with many others, is that subjective reports define the science of consciousness. That's the very object we have to study — when are subjects able to report something about their conscious experience, and when are they not.

There can be other definitions. Some researchers have tried to propose an objective definition of consciousness. For instance, they have argued that, if the subject is able, say, to classify words as being animals or not, or as being words in the lexicon or not, then they are necessarily conscious. Unfortunately, however, sticking to that type of definition, based solely on an objective criterion, has been very difficult. We have repeatedly found that even when subjects claim to be unaware of the word and report that they cannot see any word at all, they still do better than chance on this type of classification task. So the problem with this approach is that we need to decide which tasks are just manifestations of subliminal or unconscious processing, and which are manifestations of access to consciousness.

In the end, however, the opposition between objective and subjective criteria for consciousness is exaggerated. The problem is not that difficult because, in fact, both phenomena often covary in a very regular fashion, at least on a broad scale. For instance, when I vary the delay between my word and my mask, what I find is that the subjects' performance suddenly increases at the very point where they become able to report the word's presence and identity. This is an experimental observation that is fairly simple, but I think important: when subjects are able to report seeing the word, they simultaneously find many other tasks feasible with a greater rate of success.
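
As a toy illustration of this covariation, with entirely made-up numbers, one can model both the subjective visibility rate and the objective accuracy as sigmoid functions of the word-mask delay and watch them rise together around the same threshold:

```python
# Toy model (made-up parameters): subjective visibility and objective
# forced-choice accuracy both follow a sigmoid of the word-mask delay,
# rising together around the same ~50 ms threshold.
import math

def sigmoid(x, threshold, slope):
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

for delay_ms in range(0, 101, 10):
    seen = sigmoid(delay_ms, threshold=50, slope=0.25)   # "I saw the word"
    correct = 0.55 + 0.40 * seen                         # above chance even when unseen
    print(f"{delay_ms:3d} ms: reported seen {seen:.2f}, objective accuracy {correct:.2f}")
```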

It's not that subjects cannot react below this consciousness threshold. There is clearly subliminal processing on many tasks, but as one crosses the threshold for consciousness, there are a number of tasks that suddenly become feasible. These include the task of subjective report. Our research program consists in characterizing the major transition — from below the threshold for consciousness to above the threshold for consciousness.

I'm only giving you one example of masking, but there are now many experimental paradigms that can be used to make the same stimulus go in or out of consciousness using minimal manipulation, occasionally no manipulation at all. Sometimes the brain does the switching itself, as for instance in binocular rivalry, where the two eyes see two different images but the brain only lets you see one or the other, never both at the same time.

Although I'm only going to talk about the masking paradigm, because that is what we have focused on in my lab, I hope that you now understand why the experimental study of consciousness has developed into such a fast-growing field and why so many people are convinced that it is possible to experiment in this way, through the creation of minimal contrasts between conscious and non-conscious brain states.

Signatures of the transition from non-conscious to conscious

Of course, we need to combine our capacity to create such minimal contrasts with methods that allow us to see the living, active brain. We are very far from having seen the end of the brain imaging revolution — this is only the beginning. Although it is hard to remember what it was like before brain imaging existed, you have to realize how amazing it is that we can now see through the skull as if it were transparent. Not only do we see the anatomy of the brain, but also how its different parts are activated, and with other techniques, the temporal dynamics with which these activations unfold. Typically, functional magnetic resonance imaging (fMRI) only lets you see the static pattern of activation on a scale of one or two seconds. With other techniques such as electro- or magneto-encephalography, however, you can really follow in time, millisecond by millisecond, how that activation progresses from one site to the other.

What do we see when we do these experiments? The first thing that we discovered is that, even when you cannot see a word or a picture, because it is presented in a subliminal condition, it does not mean that your cortex is not processing it. Some people initially thought that subliminal processing meant sub-cortical processing — processing that is not done in the cortex. This is of course completely false, and we've known it for a while now. We can see a lot of cortical activation created by a subliminal word. It enters the visual parts of the cortex, and travels through the visual areas of the ventral surface of the brain. If the conditions are right, a subliminal word can even access higher levels of processing, including semantic levels. This is something that was highly controversial in psychology, but is now very clear from brain imaging: a subliminal message can travel all the way to the level of the meaning of the word. Your brain can take a pattern of shapes on the retina, and successively turn it into a set of letters, recognize it as a word, and access a certain meaning — all of that without any form of consciousness.

Next comes the obvious question: where is there more activity when you are conscious of the word? If we do this experiment with fMRI, what we see is that two major differences occur. You first see an amplification of activation in the early areas: the very same areas begin to activate much more, as much as tenfold, in, for instance, the area that we have been studying and which processes the spelling of words: the visual word form area.

The second aspect is that several other distant areas of the brain activate. These include areas in the so-called prefrontal cortex, which is in the front of the brain here. In particular, we see activation in the inferior frontal region, as well as in the inferior parietal sectors of the brain. What we find also is that these areas begin to correlate with each other — they co-activate in a coordinated manner. I am for the moment just giving you the facts: amplification and access to distant areas are some of the signatures of consciousness.

Now, if we look at a time-lapse picture, obtained with a technique such as electro-encephalography which can resolve the temporal unfolding of brain activity, then we see something else which is very important: the difference between a non-conscious and a conscious percept occurs quite late in processing. Let's call time zero the point at which the word first appears on the screen, and let's follow this activation from that point. What we see is that, under the best of conditions, it can take as long as 270 to 300 milliseconds before we see any difference between conscious and unconscious processing.

For one fourth of a second, which is extraordinarily long for the brain, you can have identical activations, whether you are conscious or not. During this quarter of a second, the brain is not inactive and we can observe a number of instances of lexical access, semantic access and other processes (and subliminal processing can even continue after this point). But at about 270 milliseconds, 300 milliseconds in our experiments, we begin to see a huge divergence between conscious states and non-conscious states. If we only record using electrodes placed on the scalp, to measure what are called event-related potentials, we see a very broad wave called the P3 or P300. It's actually very easy to measure, and indeed one of the claims that my colleagues and I make is that access to consciousness is perhaps not that difficult to measure, after all.

The P3 wave is typically seen in conditions where subjects are conscious of the stimulus. You can get very small P3 waves under subliminal conditions, but there seems to be a very clear nonlinear divergence between conscious and non-conscious conditions at this point in time. When we manipulate consciousness using minimal contrasts, we find that subliminal stimuli can create a small and quickly decaying P3 wave, whereas a very big and nonlinear increase in activation, leading to a large event-related potential, can be seen when the same stimuli cross the threshold and become conscious.
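
The logic of such event-related-potential measurements can be sketched on synthetic data: average many stimulus-locked epochs per condition, then compare the conscious and non-conscious averages in a late time window. The amplitudes and latencies below are invented purely to show the shape of the analysis, not real recordings.

```python
# Sketch of the ERP logic on synthetic data (invented amplitudes and latencies):
# early activity is shared by both conditions, while a late (~300-500 ms)
# P3-like component is much larger on consciously perceived trials.
import numpy as np

rng = np.random.default_rng(0)
times = np.arange(-100, 800)  # milliseconds relative to word onset

def simulate_epochs(n_trials, p3_gain):
    early = np.exp(-((times - 170) ** 2) / (2 * 30.0 ** 2))         # shared early wave
    p3 = p3_gain * np.exp(-((times - 400) ** 2) / (2 * 80.0 ** 2))  # late component
    noise = rng.normal(0.0, 2.0, size=(n_trials, times.size))
    return early + p3 + noise

erp_seen = simulate_epochs(200, p3_gain=4.0).mean(axis=0)
erp_unseen = simulate_epochs(200, p3_gain=0.5).mean(axis=0)

late = (times >= 300) & (times <= 500)
print(f"late-window mean, conscious trials:  {erp_seen[late].mean():.2f}")
print(f"late-window mean, subliminal trials: {erp_unseen[late].mean():.2f}")
```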

At the same time as we see this large wave, which peaks at around 400 to 500 milliseconds, we also see two other signatures of consciousness. First, our electrodes detect a high level of oscillatory activity in the brain, in the high-gamma band (50-100 Hz). Second, as the brain begins to oscillate at these high frequencies, we also begin to see massive synchrony across distant regions. What that means is that initially, prior to conscious ignition, processing is essentially modular, with several simultaneous activations occurring independently and in parallel. However, at the point where we begin to see conscious access, our records show a synchronization of many areas that begin to work together.

A global neuronal workspace

I just gave you the bare facts: the basic signatures of consciousness that we have found and that many other people have also seen. I would now like to say a few words about what we think that these observations mean. We are on slightly more dangerous ground here, and I am sorry to say, a little bit fuzzier ground, because we cannot claim to have a full theory of conscious access. But we do begin to have an idea.

This idea is relatively simple, and it is not far from the one that Daniel Dennett proposed when he said that consciousness is "fame in the brain." What I propose is that "consciousness is global information in the brain" — information which is shared across different brain areas. I am putting it very strongly, as "consciousness is," because I literally think that's all there is. What we mean by being conscious of a certain piece of information is that it has reached a level of processing in the brain where it can be shared.

Because it is sharable, your Broca's area (or the part of it involved in selecting the words that you are going to speak) is being informed about the identity of what you are seeing, and you become able to name what you are seeing. At the same time, your hippocampus is perhaps informed about what you have just seen, so you can store this representation in memory. Your parietal areas also become informed of what you have seen, so they can orient attention, or decide that this is not something you want to attend to… and so on and so forth. The criterion of information sharing relates to the feeling that we have that, whenever a piece of information is conscious, we can do a very broad array of things with it. It is available.

Now, for such global sharing to occur, at the brain level, special brain architecture is needed. In line with Bernard Baars, who was working from a psychological standpoint and called it a "global workspace," Jean-Pierre Changeux and I termed it the global neuronal workspace. If you look at the associative brain areas, including dorsal parietal and prefrontal cortex, anterior temporal cortex, anterior cingulate, and a number of other sites, what you find is that these areas are tightly intertwined with long distance connections, not just within a hemisphere, but also across the two hemispheres through what is called the corpus callosum. Given the existence of this dense network of long-distance connections, linking so many regions, here is our very simple idea: these distant connections are involved in propagating messages from one area to the next, and at this very high level where areas are strongly interconnected, the density of exchanges imposes a convergence to a single mental object out of what are initially multiple dispersed representations. So this is where the synchronization comes about.

Synchronization is probably a signal for agreement between different brain areas. The areas begin to agree with each other. They converge onto a single mental object. In this picture, each area has its own code. Broca's area has an articulatory code, and slightly more anterior to it there is a word code. In the posterior temporal regions, we have an acoustic code, a phonological code, or an orthographic code. The idea is that when you become aware of a word, these codes begin to be synchronized together, and converge to a single integrated mental content.

According to this picture, consciousness is not accomplished by one area alone. There would be no sense in trying to pinpoint consciousness in a single brain area, or in computing the intersection of all the images that exist in the literature on consciousness, in order to find the area for consciousness. Consciousness is a state that involves long distance synchrony between many regions. And during this state, it's not just higher association areas that are activated, because these areas also amplify, in a top-down manner, the lower brain areas that received the sensory message in the first place.

I hope that I have managed to create a mental picture for you of how the brain achieves a conscious state. We simulated this process on the computer. It is now possible to simulate neural networks that are realistic in the sense that they include actual properties of trans-membrane channels, which are put together into fairly realistic spiking neurons, which in turn are put together into fairly realistic columns of neurons in the cortex, and also include a part of the thalamus, a sub-cortical nucleus which is connected one-to-one to the cortex. Once we put these elements together into a neural architecture with long-distance connections, we found that it could reproduce many of the properties that I described to you, the signatures of consciousness that we had observed empirically.

So when we stimulate this type of network, at the periphery, we actually see activation climbing up the cortical hierarchy, and if there is enough reverberation and if there are enough top-down connections, then we see a nonlinear transition towards an ignited state of elevated activity. It's of course very simple to understand: in any dynamic system that is self-connected and amplifies its own inputs, there is a nonlinear threshold. As a result, there will be either a dying out of the activation (and I claim that this state of activity corresponds to subliminal processing), or there will be self-amplification and a nonlinear transition to this high-up state where the incoming information becomes stable for a much longer period of time. That, I think, corresponds to what we have seen as the late period in our recordings where the activation is amplified and becomes synchronized across the whole brain. In essence, very simple simulations generate virtually all of the signatures of consciousness that we've seen before.
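
The published simulations use detailed spiking neurons, but the core intuition, a nonlinear ignition threshold in a self-amplifying network, can be conveyed with a deliberately simplified rate model. Every parameter below is arbitrary; this is a sketch of the idea, not the actual model.

```python
# Deliberately simplified rate model of "ignition" (arbitrary parameters,
# not the published spiking-network simulation): a recurrently connected
# population either lets a brief input die out or self-amplifies into a
# sustained, ignited state, depending on the input's strength.
import math

def simulate(input_strength, w_recurrent=8.0, bias=4.0, tau=20.0, steps=600):
    r = 0.0
    for t in range(steps):
        drive = w_recurrent * r + (input_strength if t < 50 else 0.0)
        f = 1.0 / (1.0 + math.exp(-(drive - bias)))   # sigmoidal population response
        r += (-r + f) / tau                           # leaky integration of activity
    return r

for strength in (0.5, 1.5, 3.0, 5.0):
    final = simulate(strength)
    label = "ignited (sustained activity)" if final > 0.5 else "died out (subliminal)"
    print(f"input {strength:3.1f} -> final rate {final:.2f}: {label}")
```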

What is consciousness good for?

This type of model may help answer a question that was difficult to address before, namely, what is consciousness useful for? It is a very important question, because it relates to the evolution of consciousness. If this theory is right, then we have a number of answers to what consciousness actually does. It is unnecessary for computations of a Bayesian statistical nature, such as the extraction of the best interpretation of an image. It seems that the visual brain does that in a massively parallel manner, and is able to compute an optimal Bayesian interpretation of the incoming image, thus coming up with what is essentially the posterior distribution of all the possible interpretations of the incoming image. This operation seems to occur completely non-consciously in the brain. What the conscious brain seems to be doing is to amplify and select one of these interpretations, the one relevant to the current goals of the organism.

In several experiments, we have contrasted directly what you can do subliminally and what you can only do consciously. Our results suggest that one very important difference is the time duration over which you can hold on to information. If information is subliminal, it enters the system, creates a temporary activation, but quickly dies out. It does so in the space of about one second, a little bit more perhaps depending on the experiments, but it dies out very fast anyway. This finding also provides an answer for people who think that subliminal images can be used in advertising, which is of course a gigantic myth. It's not that subliminal images don't have any impact, but their effect, in the very vast majority of experiments, is very short-lived. When you are conscious of information, however, you can hold on to it essentially for as long as you wish. It is now in your working memory, and is now meta-stable. The claim is that conscious information is reverberating in your brain, and this reverberating state includes a self-stabilizing loop that keeps the information stable over a long duration. Think of repeating a telephone number. If you stop attending to it, you lose it. But as long as you attend to it, you can keep it in mind.

Our model proposes that this is really one of the main functions of consciousness: to provide an internal space where you can perform thought experiments, as it were, in an isolated way, detached from the external world. You can select a stimulus that comes from the outside world, and then lock it into this internal global workspace. You may stop other inputs from getting in, and play with this mental representation in your mind for as long as you wish.

In fact, what we need is a sort of gate mechanism that decides which stimulus may enter, and which stimuli are to be blocked, because they are not relevant to current thoughts. There may be additional complications in this architecture, but you get the idea: a network that begins to regulate itself, only occasionally letting inputs enter.

Another important feature that I have briefly mentioned already is the all-or-none property. You either make it into the conscious workspace, or you don't. This is a system that discretizes the inputs. It creates a digital representation out of what is initially just a probability distribution. We have some evidence that, in experiments where we present a stimulus that is just at threshold, subjects end up either seeing it perfectly and completely with all the information available to consciousness — or they end up not having seen anything at all. There doesn't seem to be any intermediate state, at least in the experiments that we've been doing.

Having a system that discretizes may help solve what John Von Neumann considered one of the major problems facing the brain. In his book The Computer and the Brain, Von Neumann discusses the fact that the brain, just like any other analog machine, is faced with the fact that whenever it does a series of computations, it loses precision very quickly, eventually reaching a totally inaccurate result. Well, maybe consciousness is a system for digitizing information and holding on to it, so that precision isn't lost in the course of successive computations.
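
Here is a toy numerical illustration of the Von Neumann point as we read it, with arbitrary noise levels: a value copied through a chain of noisy analog operations drifts away, while a value re-discretized at every step stays put.

```python
# Toy illustration of the precision argument (arbitrary noise level and grid):
# chaining noisy analog operations lets small errors accumulate, whereas
# snapping the value back onto a discrete grid at each step keeps it exact.
import random

random.seed(1)
analog = digital = 1.0

for _ in range(20):
    noise = random.gauss(0.0, 0.05)             # small error on every operation
    analog = analog + noise                     # analog copy accumulates drift
    digital = round((digital + noise) * 2) / 2  # digitized copy snaps to a 0.5 grid

print(f"after 20 chained operations: analog = {analog:.3f}, digitized = {digital}")
```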

One last point: because the conscious workspace is a system for sharing information broadly, it can help develop chains of processes. A speculative idea is that once information is available in this global workspace, it can be piped to any other process. What was the output of one process can become the input of another, thus allowing for the execution of long and purely mental algorithms — a human Turing machine.

In the course of evolution, sharing information across the brain was probably a major problem, because each area had a specialized goal. I think that a device such as this global workspace was needed in order to circulate information in this flexible manner. It is extremely characteristic of the human mind that whatever result we come up with, in whatever domain, we can use it in other domains. It has a lot to do, of course, with the symbolic ability of the human mind. We can apply our symbols to virtually any domain.

So if my claim is correct, whenever we do serial processing, one step at a time, passing information from one operation to the next in a completely flexible manner, we must be relying on this conscious workspace system. This hypothesis implies that there is an identity between slow serial processing and conscious processing — something that was noted many years ago by the cognitive psychologist Michael Posner, for instance.

We have some evidence in favor of this conclusion. Let me tell you about a small experiment we did. We tried to see what people could do subliminally when we forced them to respond. Imagine that I flash a digit at you, and this digit is subliminal because I have masked it. Now suppose that I ask you, "is it larger or smaller than five?" I give you two buttons, one for larger and one for smaller, and I force you to respond. When I run this experiment, although you claim that you have not seen anything, and you have to force yourself to respond, you find that you do much better than chance. You are typically around 60 percent correct, while pure chance would be 50 percent. So this is subliminal processing. Some information gets through, but not enough to trigger a global state of consciousness.

Now, we can change the task to see what kind of tasks can be accomplished without consciousness. Suppose we ask you to name a digit that you have not seen by uttering a response as fast as you can. Again, you do better than chance, which is quite remarkable: your lips articulate a word which, about 40 percent of the time, is the correct number, where chance, if there are four choices, would be 25 percent.
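
"Better than chance" here is a statistical claim. As a sketch of how it can be checked, an exact one-sided binomial test compares the observed proportion of correct forced-choice responses with the chance level; the trial counts below are invented for illustration, and only the chance levels and rough accuracies come from the experiments just described.

```python
# Sketch of an exact one-sided binomial test for above-chance performance.
# The trial counts are invented; only the chance levels (50% and 25%) and
# rough accuracies (~60% and ~40%) come from the experiments described above.
from math import comb

def binomial_p(n_correct, n_trials, p_chance):
    """P(at least n_correct successes) if the subject were purely guessing."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

print("larger/smaller than five, 120/200 correct vs 50% chance:",
      binomial_p(120, 200, 0.50))
print("naming one of four digits,  80/200 correct vs 25% chance:",
      binomial_p(80, 200, 0.25))
```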

However, if we now give you a task that involves two serial processing steps, you cannot do it. If I ask you to give me the number plus two, you can do it — but if I ask you to compute the initial number plus two, and then decide if the result of that +2 operation is larger or smaller than five, you cannot do it. It's a strange result, because the initial experiments show that you possess a lot of information about this subliminal digit. If you just named it, you would have enough information to do so correctly, much better than chance alone would predict. However when you are engaged in a chain of processing, where you have to compute x+2, and then decide if the outcome is larger or smaller than five, there are two successive steps that make performance fall back down to chance. Presumably, this is because you haven't had access to the workspace system that allows you to execute this kind of serial mental rule.

With a group of colleagues, we are working on a research project called "the human Turing machine." Our goal is to clarify the nature of the specific system, in the mind and in the brain, which allows us to perform such a series of operations, piping information from one stage to the next. Presumably the conscious global workspace lies at the heart of this system. 

The ultimate test

Let me close by mentioning the ultimate test. As I mentioned at the beginning, if we have a theory of consciousness, we should be able to apply it to brain-lesioned patients with communication and consciousness disorders. Some patients are in coma, but in other cases, the situation is much more complicated. There is a neurological state called the vegetative state, in which a patient's vigilance, namely the capacity to be awake, can be preserved. In those patients there is a normal sleep-wake cycle, and yet there doesn't appear to be any consciousness, in the sense that the person doesn't seem to be able to process information and react normally to external stimulation and verbal commands.

There are even intermediate situations of so-called minimal consciousness, where a patient can, on some occasions only, and for some specific requests, provide a response to a verbal command, suggesting that there could be partially preserved consciousness. On other occasions, however, the patient does not react, just as in the vegetative state, so that in the end we don't really know whether his occasional reactions constitute sufficient evidence of consciousness. Finally, as you all know, there is the famous "locked-in" syndrome. It is very different in the sense that the patient is fully conscious, but this condition can appear somewhat similar in that the patient may not be able to express that he is conscious. Indeed, the patient may remain in a totally non-communicative state for a long time, and it may be quite hard for others to discern that he is, in fact, fully aware of his surroundings.

With Lionel Naccache, we tried to design a test of the signatures of consciousness, based on our previous observations, that can indicate, very simply, in only a few minutes, based on observed brain waves alone, whether or not there is a conscious state. We opted for auditory stimuli, because this input form, unlike the visual modality, allows you to simply stimulate the patient without having to worry about whether he is looking at an image or not. Furthermore, we decided to use a situation that is called the mismatch response. Briefly, this means that the brain can react to novelty, either in a non-conscious or in a conscious way. We think that the two are very easy to discriminate, thanks to the presence of a late wave of activation which signals conscious-level processing.

Let me just give you a very basic idea about the test. We stimulate the patient with five tones. The first four tones are identical, but the fifth can be different. So you hear something like dit-dit-dit-dit-tat. When you do this, a very banal observation, dating back 25 years, is that the brain reacts to the different tone at the end. That reaction, which is called mismatch negativity, is completely automatic. You get it even in coma, in sleep, or when you do not attend to the stimulus. It's a non-conscious response. Following it, however, there is also, typically, a later brain response called the P3. This is exactly the large-scale global response that we found in our previous experiments, and which we think is specifically associated with consciousness.

How to separate the two kinds of brain response is not always easy. Typically, they unfold in time, one after the other, and in patients, it is not always easy to separate them when they are distorted by a brain lesion. But here is a simple way to generate the P3 alone. Suppose the subject becomes accustomed to hearing four 'dits' followed by a single 'tat'. What we see is that there still is a novelty response at the end, but it is now one that is expected. Because you repeat four 'dits' followed by one 'tat', over and over again, the subject can develop a conscious expectation that the fifth item will be different. What does the brain then do? Well, it still generates an early automatic, and non-conscious novelty response — but it then cancels its late response, the P3 wave, because the repeated stimulus is no longer attracting attention or consciousness.

Now, after adaptation, the trick consists in presenting five identical tones: dit-dit-dit-dit-dit. This banal sequence becomes the novel situation now — the rare and unexpected one. Our claim is that only a conscious brain can take into account the context of the preceding series of tones, and can understand that five identical tones are something novel and unexpected.
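
To make the two kinds of novelty concrete, here is a small sketch that builds such tone sequences (writing 'dit' as A and 'tat' as B) and labels each trial for local deviance (the last tone differs from the others) versus global deviance (the whole sequence violates the learned rule). The block size and deviant proportion are made up for illustration.

```python
# Sketch of the sequence logic (made-up block size and deviant proportion):
# in a block built around the rule AAAAB, the frequent AAAAB trials are
# locally deviant but globally standard, while the rare AAAAA trials contain
# no local novelty yet violate the learned global rule.
import random

def make_block(rule="AAAAB", n_trials=30, p_rare=0.2, seed=0):
    rare = rule[:4] + ("A" if rule[4] == "B" else "B")  # swap only the last tone
    rng = random.Random(seed)
    return [rare if rng.random() < p_rare else rule for _ in range(n_trials)]

for trial in make_block()[:8]:
    local_deviant = trial[4] != trial[3]   # fifth tone differs from the fourth
    global_deviant = trial != "AAAAB"      # sequence breaks the habituated rule
    print(trial, " local:", local_deviant, " global:", global_deviant)
```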

In a nutshell, this outcome is exactly what we find in our experiments. We find a large P3-like response to five identical tones, in normal subjects after adaptation to a distinct sequence, but only in those subjects who attend to the stimulus and appear to be conscious of it. We tested the impact of a distracting task on normal subjects, and the P3 response behaves exactly as we expected. If you distract the subject, you lose this response. When the subject attends, you keep it. When the subject can report the rule governing the sequence, you see a P3. When they cannot report it, you don't see it.

Finally, Lionel Naccache, at the Salpêtrière Hospital, and Tristan Bekinschtein, now in Cambridge, moved on to applying these findings to patients. What they have shown is that the test appears to behave exactly as we would expect. The P3 response is absent in coma patients. It is also gone in most vegetative state patients — but it remains present in most minimally conscious patients. It is always present in locked-in patients and in any other conscious subject.

The presence of this response in a few vegetative state patients makes one wonder whether the person is really in a vegetative state. For the time being this is an open question — but it would appear that the few patients who show this response are the ones who are recovering fast and are, in fact, so close to recovery at the time of the test that you might wonder whether they were conscious during that test.

In summary, we have great hopes that our version of the mismatch test is going to be a very useful and simple test of consciousness. You can do it at the bedside — after ten minutes of EEG acquisition, you already have enough data to detect this type of response.

The future of neuroimaging research: decoding consciousness

This is where we stand today. We have the beginnings of a theory, but you can see that it isn't yet a completely formal theory. We do have a few good experimental observations, but again there is much more we must do. Let me now conclude by mentioning what I and many other colleagues think about the future of brain imaging in this area.

My feeling is that the future lies in being able to decode brain representations, not just detect them. Essentially all that I have told you today concerns the mere detection of processing that has gone all the way to a conscious level. The next step, which has already been achieved in several laboratories, is decoding which representation is retained in the subject's mind. We need to know the content of a conscious representation, not just whether there is a conscious representation. This is no longer science fiction. Just two weeks ago, Evelyn Eger, who is a post doc in Andreas Kleinschmidt's team in my laboratory, showed that you can take functional MRI images from the human brain and, by just looking at the pattern of activation in the parietal cortex, which relates to number processing, you can decode the number the person has in mind. If you take 200 voxels in this area, and look at which of them are active and which are inactive, you can construct a machine-learning device that decodes which number is being held in memory.

I should probably say quite explicitly that this use of the verb "decode" is an exaggeration. All you can do at the moment is achieve a better-than-chance inference about the memorized number. It does not mean that we are reading the subject's mind on every trial. It merely means that, whereas chance would be 50 percent for deciding between two digits, we manage to achieve 60 or 70 percent. That's not so bad, actually. It's a significant finding, which means that there is a cerebral code for number, and that we now understand a little bit more about that code.
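
The decoding logic itself is simple enough to sketch on synthetic data: a cross-validated linear classifier is trained to tell two memorized numbers apart from a weak pattern spread over a couple of hundred voxels. Everything below, including the signal strength and trial counts, is invented for illustration; it is not the published analysis.

```python
# Sketch of multivoxel decoding on synthetic data (invented signal strength
# and trial counts, not the published analysis): a cross-validated linear
# classifier separates two memorized numbers from a weak distributed pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)        # which of two numbers was held in mind
pattern = 0.1 * rng.normal(size=n_voxels)    # weak number-selective voxel pattern
X = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels, pattern)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```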

But let me return again to patients. What I am thinking is that, in the future, with this type of decoding tool, we will move on to a new wave of research, where the goal will be explicitly to decode the contents of patients' minds, and maybe allow them to express themselves with the help of a brain-computer interface. Indeed, if we can decode content, there is no reason why we could not project it on the computer and use this device as a form of communication, even if the patient can no longer speak.

Again, some of this research has already started, in my lab as well as in several other places in the world. With Bertrand Thirion, we have looked at the occipital areas of the brain where there is a retinotopic map of incoming visual images. We have shown that you can start from the pattern of activation on this retinotopic map and use it to decode the image a person is viewing. You can even infer, to some extent, what mental image he has in his mind's eye, even when he is actually facing a blank screen. Mental images are a reality — they translate into a physical pattern of activation on these maps that you can begin to decode. Other researchers such as Jack Gallant, in the US, have now pursued this research program to a much greater extent.

I believe that the future of neuro-imaging lies in decoding a sequence of mental states, trying to see what William James called the stream of consciousness. We should not just decode a single image, but a succession of images. If we could literally see this stream, it would become even easier, without even stimulating the subject, to see that he is conscious because such a stream is present in his brain. Don't mistake me, though. There is a clear difference between what we have been able to do — discover signatures of consciousness — and what I hope that we will be able to do in the future — decode conscious states. The latter remains very speculative. I just hope that we will eventually get there. However, it is already a fact that we can experiment on consciousness and obtain a lot of information about the kind of brain state that underlies it.