Mirror Neurons and the Brain in the Vat

Vilayanur Ramachandran [1.9.06]

V.S. RAMACHANDRAN is director of the Center for Brain and Cognition and professor with the Psychology Department and the Neurosciences Program at the University of California, San Diego, and adjunct professor of biology at the Salk Institute. He is the coauthor (with Sandra Blakeslee) of Phantoms in the Brain: Probing the Mysteries of the Human Mind.


Introduction

Six years ago, Edge published a now-famous essay by neuroscientist V.S. Ramachandran (known to friends and colleagues as "Rama"), entitled "Mirror Neurons and Imitation Learning as the Driving Force Behind 'the Great Leap Forward' in Human Evolution" [2]. This was the first time that many in the Edge community heard of mirror neurons, which were discovered by Giacomo Rizzolatti of the University of Parma in the 1990s. In his essay, Rama made the startling prediction that mirror neurons would do for psychology what DNA did for biology, providing a unifying framework and helping to explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments. He further suggested "that the emergence of a sophisticated mirror neuron system set the stage for the emergence, in early hominids, of a number of uniquely human abilities such as proto-language (facilitated by mapping phonemes onto lip and tongue movements), empathy, 'theory of other minds,' and the ability to 'adopt another's point of view.'"

In the past few years, mirror neurons have come into their own as the next big thing in neuroscience, and while the jury is still out on Rama's prediction, it's obvious that something important is unfolding:

•  Interesting new research is being conducted in neuroscience labs in the US and Europe and is being discussed at conferences and in the press;

•  A team led by Marco Iacoboni, director of the Transcranial Magnetic Stimulation Laboratory of the Ahmanson-Lovelace Brain Mapping Center at UCLA, recently published important results ("Grasping the Intentions of Others with One's Own Mirror Neuron System," Iacoboni et al., 2005);

•  Christian Keysers, associate professor at the Neuro-Imaging-Center of the University Medical Center Groningen (Netherlands), published a paper on the neural basis of social cognition with mirror neuron pioneers Rizzolatti and Gallese ("A Unifying View of the Basis of Social Cognition," Gallese, Keysers, Rizzolatti, 2004);

•  The New York Times "Science Times" published a page-one review article on mirror neurons ("Cells That Read Minds," by Sandra Blakeslee, January 10, 2006);

•  A virtual workshop—"What Do Mirror Neurons Mean?"—moderated by Gloria Origgi and Dan Sperber and sponsored by the European Science Foundation, hosts an ongoing discussion of the theoretical implications of the discovery of mirror neurons;

•  At a recent conference near Paris—"Contribution of Mirroring Processes to Human Mindreading"—on the implications of mirror neurons for science and philosophy, top neuroscientists, psychologists, philosophers, and anthropologists from Europe and the United States engaged in heated debates on the interpretation and consequences of the discovery, but at least one thing was clear: mirror neurons matter, and we are only beginning to understand how much and how.

Two weeks ago, Edge received Rama's essay in response to the 2006 Edge Question, "What Is Your Dangerous Idea?," which we are publishing as a separate feature. Rama's "dangerous if true" idea is what Francis Crick referred to as "the astonishing hypothesis": the notion that "our conscious experience and sense of self is based entirely on the activity of a hundred billion bits of jelly—the neurons that constitute the brain. We take this for granted in these enlightened times but even so it never ceases to amaze me." He then goes on to characterize Crick's "astonishing hypothesis" as a key indicator of "the fifth revolution"—the "neuroscience revolution"—the first four being the Copernican, the Darwinian, the Freudian, and the discovery of DNA and the genetic code, and to add "that even our loftiest thoughts and aspirations are mere byproducts of neural activity. We are nothing but a pack of neurons." Central to this revolution are mirror neurons.

Rama also asks an interesting question:

Let's advance to a point in time where we know everything there is to know about the intricate circuitry and functioning of the human brain. With this knowledge, it would be possible for a neuroscientist to isolate your brain in a vat of nutrients and keep it alive and healthy indefinitely.

Utilizing thousands of electrodes and appropriate patterns of electrical stimulation, the scientist makes your brain think and feel that it's experiencing actual life events. The simulation is perfect and includes a sense of time and planning for the future. The brain doesn't know that its experiences, its entire life, are not real.

Further assume that the scientist can make your brain "think" and experience being a combination of Einstein, Mark Spitz, Bill Gates, Hugh Hefner, and Gandhi, while at the same time preserving your own deeply personal memories and identity (there's nothing in contemporary brain science that forbids such a scenario). The mad neuroscientist then gives you a choice. You can either be this incredible, deliriously happy being floating forever in the vat or be your real self, more or less like you are now (for the sake of argument we will further assume that you are basically a happy and contented person, not a starving peasant). Which of the two would you pick?

—JB


MIRROR NEURONS AND THE BRAIN IN THE VAT

"I am a brain, my dear Watson, and the rest of me is a mere appendage." —Sherlock Holmes

An idea that would be "dangerous if true" is what Francis Crick referred to as "the astonishing hypothesis": the notion that our conscious experience and sense of self is based entirely on the activity of a hundred billion bits of jelly—the neurons that constitute the brain. We take this for granted in these enlightened times, but even so it never ceases to amaze me.

Some scholars have criticized Crick's tongue-in-cheek phrase (and title of his book) on the grounds that the hypothesis he refers to is "neither astonishing nor a hypothesis" (since we already know it to be true). Yet, the far-reaching philosophical, moral and ethical dilemmas posed by his hypothesis have not been recognized widely enough. It is in many ways the ultimate dangerous idea.

Let's put this in historical perspective.

Freud once pointed out that the history of ideas in the last few centuries has been punctuated by "revolutions," major upheavals of thought that have forever altered our view of ourselves and our place in the cosmos.

First, there was the Copernican system dethroning the earth as the center of the cosmos. Second was the Darwinian revolution; the idea that far from being the climax of "intelligent design," we are merely neotenous apes that happen to be slightly cleverer than our cousins. Third, the Freudian view that even though you claim to be "in charge" of your life, your behavior is in fact governed by a cauldron of drives and motives of which you are largely unconscious. And fourth, the discovery of DNA and the genetic code with its implication (to quote James Watson) that "There are only molecules. Everything else is sociology."

To this list we can now add the fifth, the "neuroscience revolution" and its corollary pointed out by Crick—the "astonishing hypothesis"—that even our loftiest thoughts and aspirations are mere byproducts of neural activity. We are nothing but a pack of neurons.

If all this seems dehumanizing, you haven't seen anything yet.

Consider the following thought experiment that used to be a favorite of philosophers (it was also the basis for the recent Hollywood blockbuster The Matrix). Let's advance to a point in time where we know everything there is to know about the intricate circuitry and functioning of the human brain. With this knowledge, it would be possible for a neuroscientist to isolate your brain in a vat of nutrients and keep it alive and healthy indefinitely.

Utilizing thousands of electrodes and appropriate patterns of electrical stimulation, the scientist makes your brain think and feel that it's experiencing actual life events. The simulation is perfect and includes a sense of time and planning for the future. The brain doesn't know that its experiences, its entire life, are not real.

Further, assume that the scientist can make your brain "think" and experience being a combination of Einstein, Mark Spitz, Bill Gates, Hugh Hefner, and Gandhi, while at the same time preserving your own deeply personal memories and identity (there's nothing in contemporary brain science that forbids such a scenario). The mad neuroscientist then gives you a choice. You can either be this incredible, deliriously happy being floating forever in the vat or be your real self, more or less like you are now (for the sake of argument we will further assume that you are basically a happy and contented person, not a starving peasant). Which of the two would you pick?

I have posed this question to dozens of scientists and lay people. A majority argue "I'd rather be the real me." This is an irrational choice because you already are a brain in a vat (the cranial cavity) nurtured by cerebrospinal fluid and blood and bombarded by photons. When asked to select between two vats, most pick the crummy one even though it is no more real than the neuroscientist's experimental vat. How can you justify this choice unless you believe in something supernatural?

I have heard three counter-arguments to the premise of this experiment. First, the brain, as Antonio Damasio argues so eloquently, is a natural extension of the body, not an isolated computer sitting on your neck. True, but this "embodiment," plus visceral and proprioceptive inputs, can also be simulated. Second, what if the vat isn't well maintained? What if it falls down and crashes? This could happen, but such an accident can also happen to the real you. Third, the simulation of Einstein and Gates (and everyone else) can never be exact. This might be true, but it's not relevant. So what if the simulation is only 98% correct? Your own brain's fluctuations from year to year are probably as great, if not greater.

If you think this scenario is farfetched, just look at what's going on around you in the world—cell phones, iPods, Palm Pilots, the World Wide Web, email, blogs, e-publishing, and virtual reality. We are all slowly and imperceptibly approaching the brain-in-the-vat scenario, where all functions will be literally at your fingertips as you become dissolved in cyberspace.

What about "culture"? I think of Homo sapiens as "the cultured ape" because it is cultural diversity above all that defines us as a species. Through the emergence and further elaboration of a group of neurons called "mirror neurons," our brains have become symbiotic, or parasitic, with culture (a child raised in a cave would not be recognizably human). Can we simulate cultural sophistication in the vat? Will the world in the 25th century be hundreds of warehouses with thousands of brains in rows and rows of vats? They could even all be identical to each other, to save the time and effort of programming. Why not? No one brain would know it was the same as every other.

Giacomo Rizzolatti and Vittorio Gallese discovered mirror neurons. They found that neurons in the ventral premotor area of macaque monkeys will fire any time a monkey performs a complex action such as reaching for a peanut, pulling a lever, or pushing a door (different neurons fire for different actions). Most of these neurons control motor skills (such neurons were originally discovered by Vernon Mountcastle in the '60s), but a subset of them, the Italians found, will fire even when the monkey watches another monkey perform the same action. In essence, each such neuron is part of a network that allows you to see the world "from the other person's point of view," hence the name "mirror neuron."

Researchers at UCLA [1] found that cells in the human anterior cingulate, which normally fire when you poke the patient with a needle ("pain neurons"), will also fire when the patient watches another patient being poked. The mirror neurons, it would seem, dissolve the barrier between self and others. I call them "empathy neurons" or "Dalai Lama neurons." (I wonder how the mirror neurons of a masochist or sadist will respond to another person being poked.) Dissolving the "self vs. other" barrier is the basis of many ethical systems, especially eastern philosophical and mystical traditions. This research implies that mirror neurons can be used to provide rational rather than religious grounds for ethics (although we must be careful not to commit the is/ought fallacy).

I suggested in an earlier piece—"Mirror Neurons and Imitation Learning as the Driving Force Behind 'the Great Leap Forward' in Human Evolution" [2]—that the emergence of a sophisticated mirror neuron system set the stage for the emergence, in early hominids, of a number of uniquely human abilities such as proto-language (facilitated by mapping phonemes onto lip and tongue movements), empathy, "theory of other minds," and the ability to "adopt another's point of view."

This resulted in the ability to engage in goal-directed imitation, which was a crucial step in imitation learning. Once imitation learning was in place, it allowed the rapid horizontal and vertical propagation of "accidental" one-of-a-kind inventions, which provided the basis for culture, the most human of all traits. Evolution, you could say, became Lamarckian rather than purely Darwinian. (In using the phrase "accidental innovation," I do not mean to belittle those flashes of inspiration, insight and genius that arise all too rarely when the right combination of genetic and environmental circumstances fortuitously prevail in a single brain.)

My point is only that such innovations would be lost from the meme pool were it not for mirror neuron-based abilities such as imitation and language. Even that most quintessentially human trait, our propensity for metaphor, may be partly based on the kinds of cross-domain abstraction that mirror neurons mediate: the left hemisphere for action metaphors ("get a grip") and the right for embodied and spatial metaphors. This would explain why any monkey can reach for a peanut, but only a human, with an adequately developed mirror neuron system, can reach for the stars. This "co-opting" of the mirror neuron system for other, more sophisticated functions may have been but a short step in hominid brain evolution, but it was a giant leap for mankind. I suggest this crucial step occurred 100 to 200 thousand years ago in the inferior parietal lobule.

Of course, we must avoid the temptation of attributing too much to mirror neurons—monkeys have them, but they are not capable of sophisticated culture. There are two possible reasons for this. First, mirror neurons may be necessary, but not sufficient. Other functions such as long working memory may have co-evolved through parallel selection pressures. Second, the system may need to reach a certain minimum level of sophistication before primate cognition can really get off the ground (or down from the trees!).

Intriguingly, in 2000, Eric Altschuler, Jaime Pineda, and I were able to show (using EEG recordings) that autistic children lack the mirror neuron system, and we pointed out that this deficit may help explain the very symptoms that are unique to autism: lack of empathy, "theory of other minds," language skills, and imitation. [3] Although initially contested, this discovery—of the neural basis of autism—has now been confirmed by several groups, including our own (spearheaded, in part, by Lindsey Oberman in my lab).

Mirror neurons also deal a deathblow to the "nature vs. nurture" debate (I like Matt Ridley's suggested replacement, "Nature via Nurture"), for they show how human nature depends crucially on a learnability that is partly facilitated by these very circuits. They are also an effective antidote to sociobiology and pop evolutionary psychology; the assertion that the human brain is a bundle of instincts selected and fine-tuned by natural selection when our ape-like ancestors roamed the savannahs. Even if you admit some truth to this view, I have never understood why the savannah is such a big deal. Why stop there? We spent a much longer time as fish in the Devonian seas around 400 million years ago. One could argue that the reason we enjoy going to aquaria is that our piscine ancestors spent millions of years looking at and enjoying other fishes. If you think this idea is silly, you should see some of the others that have made it into print and clutter the literature. Yes, genes profoundly influence behavior. No ape, even if educated at Eton or Harrow, will ever speak with a proper public school accent. But the notion that human talents and follies are governed mainly by instincts hard-wired by genes is ludicrous.

Thanks to mirror neurons, the human brain became specialized for culture; it became the organ of cultural diversity par excellence. It is for this reason (rather than moral reasons or political correctness) that we need to cherish and celebrate cultural diversity. To be culturally diverse is to be human, and that's a good enough reason to celebrate. Indeed, mirror neurons may help bridge the huge gap between the "two cultures," the sciences and the humanities, which C.P. Snow claimed could never be bridged. Based on all these ideas, I stand behind my pronouncement that "mirror neurons will do for psychology what DNA did for biology," a prophecy already starting to come true. In fact, when I saw Rizzolatti at a meeting recently he complained, jokingly, that my off-the-wall remark is now quoted more often than all his original papers!

One could, I suppose, simulate mirror neuron-like activity in the brain in the vat—even simulate "culture" in a culture medium. There is nothing that logically forbids this, but it would be virtually impossible in practice because of the contingent nature of culture; the fact that it depends crucially on the rapid spread of unique innovations, or "memes."

Who could program the "culture" into the brains in the vats without first having themselves discovered culture? One could also make a strong case for the idea that you cannot program innovation, given its highly contingent nature and dependence on rare combinations of fortuitous circumstances. It is conceivable, though, that one could achieve a reasonable approximation of culture. Even if we could generate "fake" culture and create a reasonable simulacrum in the vat, the question arises: Would we ever want to? I confess I have a sentimental attachment to my "real" brain even though I can't defend my choice rationally. It may just be pure narcissism. But, under some circumstances to which people are subjected, whether a starving peasant in Bangladesh or a torture victim in a secret jail, I might easily be swayed to choose the brain in the vat!

I will conclude with a metaphysical question that cannot be answered by science. I cannot decide whether the question is utterly trivial or profound. I call it the "vantage point" problem, foreshadowed by the Upanishads, ancient Indian philosophical texts composed in the first millennium BC, and by Erwin Schrödinger. I am referring to the fundamental asymmetry in the universe between the "subjective" private worldview vs. the objective world of physics.

Physics depends on the elimination of the subjective: there are no colors, only wavelengths; no pitch, only frequency; no warmth or cold, only kinetic activity of molecules; no subjective "self" or "I," only neural activity. Physics doesn't need, and indeed doesn't acknowledge, the subjective "here and now," or the "I" who experiences the world. Yet to me, my "I" is everything. It's as if only one tiny corner of the space-time manifold is "illuminated" by the searchlight of my consciousness. Humankind, it would seem, is forever condemned to accept this schizophrenic view of reality: the "first person" account and the "third person" account.

But what has this got to do with brains in vats? Everything. It's a fair assumption that the identity of your conscious experience (including your "I") depends on the information content of your brain, the "software" representing millions of years of accumulated evolutionary wisdom, your cultural milieu, and your personal memories, not on the particular atoms that currently constitute your brain. You can't actually prove this logically, any more than you can prove that you are not dreaming right now, but it seems "beyond reasonable doubt" given everything else we know. After all, your brain's actual atoms and molecules get replaced every few months, yet you wouldn't want to insist that you are existentially reborn each time and stop planning for what (in such a view) would essentially be an identical twin in the future.

Now imagine speeding up this replacement process so that I destroy your present brain and replace it with a replica/simulacrum containing identical information. There would be no reason to believe your conscious experience would not continue in that other brain. (The fact that exact duplication cannot be achieved is irrelevant; after all, your own brain's information fluctuates every day!) But if you accept this argument, then why not replace your brain with five replicas in five vats instead of just one? Would you then "continue" in all five? If so, could you be Einstein, Gates, and Hefner in parallel? This seems absurd, because one brain could simultaneously be subjected to pain and another to pleasure, and the notion of a single conscious being simultaneously experiencing both seems impossible. But if this isn't true and "you" continue in only one, then what, or who, decides which vat you continue in? (Actually, this thought experiment isn't all that different from one that has actually been achieved empirically: the splitting of an adult human brain down the middle by severing the corpus callosum. The procedure is done for intractable epilepsy and divides what was apparently a single stream of consciousness into two, as shown elegantly by Sperry, Gazzaniga, and Bogen.)

Bill Hirstein and I recently showed that the isolated left hemisphere would tell you it is conscious, if asked directly. More surprisingly, we showed that the right hemisphere in such a patient does indeed have introspective consciousness, for we found it was quite capable of deliberate lying when tested through non-verbal signing (and you cannot lie without being conscious of yourself and others).

The possibility of multiple "minds" in a single brain is not as bizarre as it sounds. It often happens in dreams. I remember having a dream once in which another guy told me a joke and I laughed heartily even though the "other guy" was my mental invention, so I must have already known the joke all along!

The question of whether "you" would continue in multiple parallel brain vats raises issues that come perilously close to the theological notion of souls, but I see no simple way out of the conundrum. Perhaps we need to remain open to the Upanishadic doctrine that the ordinary rules of numerosity and arithmetic, of "one vs. many," or indeed of two-valued, binary yes/no logic, simply don't apply to minds—the very notion of a separate "you" or "I" is an illusion, like the passage of time itself.

We are all merely many reflections in a hall of mirrors of a single cosmic reality (Brahman or "paramatman"). If you find all this too much to swallow, just consider that as you grow older and memories start to fade, you may have less in common with, and be less "informationally coupled" to, your own youthful self, the chap you once were, than with someone who is now your close personal friend. This is especially true if you consider the barrier-dissolving nature of mirror neurons. There is a certain grandeur in this view of life, this enlarged conception of reality, for it is the closest that we humans can come to taking a sip from the well of immortality. (But I fear my colleague Richard Dawkins may suspect me of spiritual leanings, of "letting God in through the back door," for saying this.)
 
Will you choose the vat or the real you? This exercise might not provide an obvious answer, but fortunately no one in this generation or the next will have to confront this choice. For those in the future who are forced to answer, I hope they make the "right" choice, whatever "right" means.

Think not existence closing your account and mine
Shall see the likes of you and me no more
The eternal saki has poured from the bowl
Millions of bubbles like you and me, and shall pour

— The Rubáiyát of Omar Khayyám

___

[1] Iacoboni M, Molnar-Szakacs I, Gallese V, Buccino G, Mazziotta JC, et al. (2005) Grasping the Intentions of Others with One's Own Mirror Neuron System. PLoS Biol 3(3): e79.

[2] Ramachandran, V.S., "Mirror Neurons and Imitation Learning as the Driving Force Behind 'the Great Leap Forward' in Human Evolution," Edge, no. 69, May 29, 2000.

[3] Altschuler, E., Pineda, J., and Ramachandran, V.S., Abstracts of the Annual Meeting of the Society for Neuroscience, 2000.