Joshua Greene: The Role of Brain Imaging in Social Science (HeadCon '13 Part VI)


HeadCon '13: WHAT'S NEW IN SOCIAL SCIENCE? (Part VI)
Joshua Greene: The Role of Brain Imaging in Social Science 


Joshua Greene is John and Ruth Hazel Associate Professor of the Social Sciences and the director of the Moral Cognition Laboratory in the Department of Psychology, Harvard University. Author, Moral Tribes: Emotion, Reason, And The Gap Between Us And Them.


We're here in early September 2013 and the topic that's on everybody's minds, (not just here but everywhere) is Syria. Will the U.S. bomb Syria? Should the U.S. bomb Syria? Why do some people think that the U.S. should? Why do other people think that the U.S. shouldn't? These are the kinds of questions that occupy us every day. This is a big national and global issue, sometimes it's personal issues, and these are the kinds of questions that social science tries to answer.

What I want to talk about today is the role of neuroscience, specifically brain imaging, in social science. So far, neuroimaging has told us something between very little and nothing about these kinds of big questions. What I want to talk about is, why is that? What has neuroimaging accomplished in the last 15 years or so? What has it failed to accomplish and why? And what's the hope for the future?

I should say a little bit about my background. I'm a neuroscientist. And at the moment I think I'm the only neuroscientist here, but I'm not a neuro-evangelist. I didn't really begin my academic career as a neuroscientist. I started out as a philosopher, and I think of myself as much a philosopher and a psychologist as a neuroscientist. I use neuroscientific tools but I use other tools just as much. So I'm giving you my perspective as someone who's a user but not an evangelist.

The key psychological distinction behind what I want to say is the distinction between process and content. What functional imaging has done very well is connect parts of the brain to different processes, and it has taught us some specific things about how certain systems process information in a general way. What neuroscience has not done very well, though it is starting to—not yet in a way that connects very directly with big social scientific questions—is actually get at the content of thinking.

I'm going to talk a little bit about where we are now, how we got here, and how I think important developments in functional neuroimaging may finally deliver on the promise that a lot of us felt for neuroimaging when it was really getting rolling in the late nineties.

When I started doing this in the late nineties, I was very excited. I thought that this is opening the "black box" with everything that the metaphor entails. The brain scanner would be something like the microscope of cognition. Just as biologists ground some lenses and were able to see these little things swimming around in exquisite detail, we'd finally see the little critters swimming around in the brain—the little cognitive critters doing their thing. We would understand things on a fundamentally different level than the way we had understood them. I think there've been a lot of triumphs, but we haven't gotten there yet. We haven't gotten our mental microscope, and the question is, why haven't we? What do we have? And what's it going to take to get there? Will we ever get there?

I'm going to start off telling you about a brain imaging study that I did recently, and I'm focusing on this experiment more as a matter of convenience. You can call it narcissism, whatever you like, but it has certain features that I think illustrate where things are, but not necessarily where I think things are going. This is an experiment done with Amitai Shenhav, and the experiment was looking at how people make moral decisions when there are outcomes with varying magnitude and probability involved. So, a specific situation would be like this:

You're a Coast Guard sailor. You're on a rescue boat going to save one person from drowning, and you know that you can definitely save that person. Then you get a radio signal that says that in the opposite direction there's this boat that's capsized. There are five people there and you can save them, but there's only a 50 percent chance of success. Or, say, there are not five people, there are ten people; there are 20 people; there are 40 people. In this experiment we varied the number of lives at stake for the chancy option, as opposed to the one sure thing of saving one life. And we also varied the probability of saving them (with a little twist that I won't get into).


The question is, what in the brain is keeping track of that probability parameter? What in the brain is keeping track of the magnitude parameter? The number of lives you can possibly save if you change course. And what's putting those two things together to give a sensible answer? If you want to give a sensible answer you have to think about your odds and you also have to think about how big the moral reward is.

What we found is that when people were making these kinds of hypothetical moral decisions, the structures that were doing this kind of work—keeping track and integrating the relevant parameters—were the same kinds of structures that you see in humans when they're making self-interested decisions about real rewards like food and money. You see homologues of this in other mammals, like rats.

In our case we found that people's behavioral sensitivity to the probability parameter was associated with the sensitivity of their anterior insula to the probability parameter. Do they care about probability, and how much? You can look at their insula and make a better-than-chance guess about that. How much do they care about the size? If you look in the ventral striatum, which is one of the brain regions that we heard about earlier in Fiery's talk, that's a brain region that's sensitive to the magnitude. Then the ventral medial prefrontal cortex seems to be sensitive to the interaction—that is, putting these two things together. This parallels very closely, as I said, what people found when they looked at self-interested economic decision making.
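To make that integration concrete, here is a minimal sketch in Python of the decision the Coast Guard scenario poses, under the simplest possible assumption that the risky option is worth probability times the number of lives at stake. This is an illustration of the logic only, not the model Shenhav and I actually fit; roughly speaking, the probability term, the magnitude term, and their interaction are what the insula, the striatum, and the ventral medial prefrontal cortex appear to track.

```python
# A toy decision rule for the Coast Guard scenario (illustration only, not
# the study's actual model): the risky option's value is probability times
# the number of lives at stake, compared against the one sure rescue.

def risky_value(prob_success, lives_at_stake):
    """Expected number of lives saved by changing course."""
    return prob_success * lives_at_stake

def choose(prob_success, lives_at_stake, sure_lives=1):
    """Pick whichever option saves more lives in expectation."""
    if risky_value(prob_success, lives_at_stake) > sure_lives:
        return "change course"
    return "save the one"

print(choose(0.5, 5))   # a 50% chance of saving 5 -> "change course"
print(choose(0.5, 1))   # a 50% chance of saving 1 -> "save the one"
```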

What does this tell us? Well, it says that there's this quite general process of assigning values to outcomes, and there are these general parameters that apply to a lot of different things: Saving lives versus getting more money for yourself or versus getting more food for yourself. We have these domain-general systems that we use, and when we think about something like saving hypothetical lives, we rely on the same kind of circuitry that a person or even a rat might use when they're deciding, "Should I go this way and try to get the really good food or should I go this way and get the less good food that's more certain?"

From a neuroscientist's perspective, this is not terribly surprising. What else would it be? We've been studying this stuff in rats, and human brains look more or less like big rat brains. It's an important caveat. But from a moral perspective, as a philosopher and as a psychologist, you might've expected something different. Not long ago, some people (not a lot, but at least some) thought there was a dedicated system for making moral evaluations, a kind of "moral organ" or "moral grammar." These kinds of results and others suggest that that's not tenable.

We've identified a kind of process that's involved and it seems to be a quite general process. Does this tell us anything interesting about moral decision-making? Well, maybe a little.

Here's an interesting thing about moral decision-making that many people have documented: People seem to value human lives and other moral goods with diminishing returns. Saving one person's life, that's really good. Two… three… That's a little bit better. By the time you get to the hundredth life it's leveling off. Now, why would that be the case? In a sense it's very strange. Why is the hundredth life worth any less than the first or second or third life that you can save?

If you know that the system that we're using to attach values to these things is a system that evolved in mammals to attach values to things like food, then having this kind of diminishing returns built into the system actually makes a lot of sense. In that sense, from an evolutionary perspective it's not surprising that you would see diminishing returns built into the system. Not that it makes normative sense. Perhaps we can explain something like why we just don't care if it's 100 lives that we can save or 1,000 lives we can save, why past a certain point it all just sounds like "a lot." That makes sense from the perspective of a brain that's designed to seek out valuable things until it's had as much as it can use. Once I've saved a dozen lives, I'm "morally full." My ventral striatum only goes up to here.
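To see how much of a difference a concave value curve makes, here is a toy calculation; the square root is an arbitrary stand-in for whatever curve the valuation system actually applies.

```python
# Toy diminishing-returns curve: subjective value of n lives = n ** 0.5.
# The exponent is arbitrary; the point is only that the curve is concave,
# so each additional life adds less subjective value than the one before.

def subjective_value(lives, concavity=0.5):
    return lives ** concavity

for n in (1, 2, 10, 100, 1000):
    marginal = subjective_value(n) - subjective_value(n - 1)
    print(f"marginal value of life #{n}: {marginal:.3f}")

# The first life adds 1.0; the 1,000th adds about 0.016. Past a certain
# point it all just sounds like "a lot."
```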

That's a theory about why we intuitively have diminishing returns for saving people's lives, and the theory comes from the neuroscience. But the more general point is this: Understanding the kind of process can give you some insight into some of the quirks of the decision-making process.

That's a hypothesis. There are other explanations for what might be going on there, but at least if that hypothesis is right, it tells us something interesting about the ways we make judgments, including ones that may have life-and-death consequences. That's an experiment. What it didn't tell us is how this actual thinking works. What we're doing is implicating in both the moral and non-moral case the same kind of reward system—same system for representing the value of outcomes. But, of course, somewhere in the brain you understand that now you're talking about saving hypothetical lives rather than foraging for food or making gambles. In the moral version I'm imagining that I'm working for the Coast Guard as opposed to making a gamble in which this button will give you $1 with 50 percent probability. Somewhere your brain obviously knows the difference between those two things despite the fact that the same general valuation mechanisms are involved.

This comes back to the difference between process and content. What brain imaging has been good at, and what it's essentially figured out, is that there are a relatively small number of major cognitive networks in the brain. You have brain networks that are involved in different kinds of perception, different kinds of motor actuation, in storing and retrieving long-term memories, in, as I described, attaching values to different outcomes under conditions of uncertainty, and so on and so forth. What we've found when you compare mental activity A to mental activity B, is that you see a little blob on your brain image indicating increased activity here for mental activity A. And when you compare B to C, you see another little blob indicating a local difference in overall level of brain activity between these two mental tasks. If you lower the statistical threshold on all these blobby brain images, all these little blobs end up expanding into familiar brain-wide structures—like the peaks of different mountains expanding to reveal the same mountain ranges. The blobs that you see on your brain images are the peaks of these mountain ranges, which are essentially large-scale networks in the brain. And what we've essentially learned is that pretty much any kind of juicy, large-scale, interesting cognition involving perception, involving imagery, involving memory, involving choices, involving social perception… It's going to involve all this stuff, all of the same major networks in the brain. And you get to a point where you say, "Okay, it's all involved."

We can go a step further than that. In my study with Shenhav on saving lives, we didn't just say, "Well, our mammalian reward system is 'involved'." We had a bit more of a complicated story. We said, "This brain region is tracking the probability, and this one's tracking the magnitude, and this system seems to be playing this integrative role, putting those two pieces of information together." People who do this more seriously than I do have more detailed computational models, and you heard from Fiery Cushman some discussion of some of those. We can look at these general systems, we can say, "Okay, these systems are involved," and we can say something more about the general operating characteristics of this system. And sometimes knowing something about the general operating characteristics of the system will give you some sort of insight into something that you might care about even if you're not a neuroscientist, like: Why do I not care so much about the hundredth life I can save?

But what's the next step? What does it take to actually understand our thinking? And this is where I think new advances in functional neuroimaging are—or at least could be—very important.

To flesh out the distinction between process and content there's a nice analogy—it's not mine, I wish I knew where it came from—where you can imagine hanging a giant microphone over a city like Beijing. What would you learn if you did that? Well, you wouldn't learn Chinese. What you'd learn is certain things about patterns over the course of the day or week or month: When this part of the city's very busy, this part of the city's not so busy. And when there's a disaster all of a sudden these things fan out from the central area and go to all of these different areas around the periphery. You would learn things about the function of the city, but you wouldn't learn the language of the city, right? The question is, what would it take to learn the language? To bring neuroimaging closer to psychology?

Consider a classic neuroimaging study: If you want to understand working memory, you can give somebody a word to remember and ask them to hold onto it. And while your subject is remembering that word you can see structures in the dorsolateral prefrontal cortex and corresponding regions in the parietal lobe that are keeping that information online. Looking at a set of brain images, you might know that your subject is remembering one word rather than five words. You can see the effect of the higher load—more activity, more function at the same time. But doing this kind of neuroimaging, the kind that dominated for the first ten years or so, you wouldn't know what word your subject is remembering. The content is not reflected in the brain images. What you're learning about is the process.

Starting with a breakthrough paper in 2001 by Jim Haxby, people have started to use brain imaging to look at content. They started with categories, then individual concepts, individual intentions, and so on. Haxby's insight was that there's a lot of information that we're losing doing neuroimaging the standard way. In the standard way of doing neuroimaging analyses, you have mental task A over here and mental task B over there—for example, remembering a word string, two-long versus five-long. Then you'd look at the activity for those two tasks and you'd subtract, and you'd say, "Okay, the extra work for remembering five words versus two words seems to be here in this circuit, and we think that's the working memory buffer." Haxby's insight was: "Well, look, there's all this information in these patterns of neural activity. And it may not be about what's overall up or overall down at the level of brain regions the size of the tip of your pinky or bigger. The micro-details make a difference."

You can imagine, for example, training a computer to tell the difference between paintings. If you have, let's say, Starry Night over here and an equivalent size Van Gogh painting of sunflowers, that's a pretty easy distinction. You just average the overall brightness, and you can say, "Okay, that's an A, a copy of Starry Night, and that's a B, a painting of sunflowers. That's a bright one, and that's a dark one." If instead you had two classic paintings by Mondrian, the kind where you have the lines and the color patches, you couldn't necessarily distinguish between them by averaging, by looking at the overall brightness, or even by examining the kinds of compositional elements that are used. You have to look at the pattern. That's what multi-voxel pattern analysis and other multivariate methods are about. With any good thing, it's always possible to oversell it, but I think that there's really a lot to this stuff.
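Here is a small simulation of that point, with made-up "voxel" patterns rather than any real data: the two conditions have the same average level of activity, so a mean comparison can't separate them, but a crude correlation-based pattern classifier (in the spirit of pattern analysis, not any study's actual pipeline) tells them apart easily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "conditions" with identical (zero) average activity across 50 simulated
# voxels, but different spatial patterns.
pattern_a = rng.normal(0, 1, 50)
pattern_b = rng.normal(0, 1, 50)
pattern_a -= pattern_a.mean()   # force equal means,
pattern_b -= pattern_b.mean()   # so averaging can't tell them apart

def trial(pattern):
    """One noisy measurement of a condition's pattern."""
    return pattern + rng.normal(0, 0.8, pattern.size)

train_a, train_b = trial(pattern_a), trial(pattern_b)

def classify(test):
    """Assign a trial to whichever training pattern it correlates with more."""
    r_a = np.corrcoef(test, train_a)[0, 1]
    r_b = np.corrcoef(test, train_b)[0, 1]
    return "A" if r_a > r_b else "B"

print(classify(trial(pattern_a)))   # usually "A"
print(classify(trial(pattern_b)))   # usually "B"
# Both conditions have means near zero, so the overall level carries no
# information; the pattern does.
```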

What's been done? Well, the original experiments examined brains performing perceptual tasks. Things usually start with perception. Can you tell whether someone is looking at a chair versus a wrench, versus a face, versus a place? Sure enough, you can look at these patterns—without paying attention to what's generally going up or going down, focusing instead on the microstructure of the pattern—and you can tell what someone's looking at! Then people did it with acts of imagination. You could tell whether someone's thinking of a face or a place or an object by looking at these patterns—and in a way that you couldn't using overall subtractions of what's generally up or what's generally down.

More recently, people have done things with concepts. There's a really fascinating paper by Tom Mitchell and colleagues where they had a huge corpus of words and they created a map of the semantic relationships among all of these words. So, obviously, dog and cat are going to be closely related, dog and nuclear reactor not so much. Then they mapped brain patterns onto a set of words. Then they took those brain-mapped words along with a new word that had never been examined with brain imaging before. And they asked, "What should the brain pattern for this word look like?" They used "celery" and "airplane". They showed that you can make a pretty good guess about what the brain pattern will be when someone's thinking about celery or airplane based on what the patterns look like for other words, other concepts, that are more similar versus less similar to "airplane" and "celery".
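The logic of that prediction can be sketched in a few lines. The features, the words, and the "brain" below are all invented for illustration, and the real study used a much larger corpus and real fMRI patterns, but the structure is the same: learn a linear map from semantic features to voxel patterns on known words, then apply it to a word the scanner never saw.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_voxels = 4, 20

# Hypothetical semantic features: [is_plant, is_vehicle, is_edible, is_loud]
features = {
    "dog":    np.array([0., 0., 0., 1.]),
    "carrot": np.array([1., 0., 1., 0.]),
    "truck":  np.array([0., 1., 0., 1.]),
    "tree":   np.array([1., 0., 0., 0.]),
    "celery": np.array([1., 0., 1., 0.]),   # held out from training
}

# A made-up linear "brain": each feature contributes a pattern across voxels.
true_map = rng.normal(0, 1, (n_features, n_voxels))
patterns = {w: f @ true_map + rng.normal(0, 0.1, n_voxels)
            for w, f in features.items()}

train_words = ["dog", "carrot", "truck", "tree"]
X = np.stack([features[w] for w in train_words])
Y = np.stack([patterns[w] for w in train_words])

# Ridge regression from semantic features to voxel patterns.
ridge = 0.1
W = np.linalg.solve(X.T @ X + ridge * np.eye(n_features), X.T @ Y)

# Predict the never-scanned word from its features alone; it works because
# celery's made-up features resemble those of words we did "scan."
predicted = features["celery"] @ W
r = np.corrcoef(predicted, patterns["celery"])[0, 1]
print(f"correlation between predicted and observed 'celery' pattern: {r:.2f}")
```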

Another recent classic: You give people two numbers and you say, "Don't tell me, but just in your head plan on either adding the two numbers or subtracting the two numbers." This is John-Dylan Haynes' work. And you can tell ... that is, not "read" the intention, but make a better than chance prediction, seconds in advance, about whether or not someone is going to add the two numbers or subtract the two numbers.


Recently people have started—this is Jack Gallant's work at Berkeley—to reconstruct still images and even videos from brain patterns. So, finally getting at the content. Not just what kind of processor is engaged—how does that process generally work? what kind of variables does it use? what are its temporal dynamics?—but the actual stuff that's being processed.

What's the significance of this? Well, the first thing that everybody thinks is, "Oh, so, 'brain reading,' people are going to be able to read your mind, read your brain." I think, at this point, and for a long time, if you want to know people's secrets go through their garbage, read their Facebook page. It's going to be a long time before the best way to get at somebody's secrets is by looking at their brain scans.

The long-term promise of this is really about understanding the "language of thought." That phrase was made famous by Jerry Fodor. He had a specific theory that comes with a lot of baggage, but the idea that there has to be some kind of language of thought in the brain actually makes a lot of sense. If I tell you something in English and then someone later asks you a related question in French, you can take the information that you learned in English and give the answer in French. There has to at least be something that's translating that. Likewise, you might've seen something visually but you can respond with a description of it in words. Or you can point to a picture. Later, you can access the same information from memory. You can use that same information to guide actions with any part of your body. In other words, there seems to be this global informational access within the brain where the same pieces of information can be used by almost any kind of system—a sensory system, a motor system, a memory system, systems for evaluating outcomes. There's got to be some general-purpose format for representing the meanings of things.

What we've started with so far in neuroimaging experiments are relatively small things, object-like things, such as an intention, a concept, a perception, a visual image. What we don't yet have is, first of all, a detailed understanding of the structure of these representations for these specific things. Moreover, we don't yet understand how these things get put together—how thoughts get put together in the same way that sentences get put together from words. I don't know whether an understanding of that constructive process is a few years off or decades off. This is something that I, along with Steven Frankland, have just started thinking about and working on.

But to bring this discussion full circle, if neuroscience is going to matter for the social sciences, if neuroscience is going to teach us things about whether or not the U.S. is likely to bomb Syria and why some people think it's a good idea and some people think it's a bad idea, and how our understanding of what happened in Libya, or what happened in Iraq is informing our understanding of whether or not we will or should bomb Syria… To understand those kinds of complex social things, we're going to have to really understand the language of the brain. We're going to have to really understand the content that is being processed and not just the kind of processing and the general operating characteristics of that processing.

Is multi-voxel pattern analysis—and multivariate methods for neuroimaging more generally—going to answer this question? I don't know. What I do think is that this is our current best hope. This is our current best way forward for actually understanding and speaking the language of the brain, and finally getting the mental microscope that I was hoping for back in the late nineties when I first started doing this.


DENNETT: Josh, the last point you made about multi-voxel pattern analysis reminds me of one of the first points that was made today about "big data." It looks to me as if you are anticipating the future, even with rose-colored glasses on, where now, thanks to big data and really good multi-voxel pattern analysis, we can to some degree read people's minds. But we're not going to know why. We're not going to know what the system is. We can do brain reading, we can maybe do brain writing, but if we can do it at all, we won't know how or why it works, and we'll be completely dependent on the massive technology, not on any theory we've got about how representation is organized in the brain.

GREENE: I agree with you that that's where we are now, and you may be right, we may be stuck here for a long time. I may be as disappointed 15 years from now about the hopes that I'm expressing now as I am now about at least certain things that I hoped for when I first started doing this. But I think there are some real reasons to think that it's not necessarily a false hope.

A really fascinating recent experiment, also by Jim Haxby, uses a method called hyperalignment. It begins with a technical problem, but it really gets into much more interesting conceptual territory.

First, the technical problem: When a group of people are engaged in the same task, everybody's brains are representing all these things, all of the pieces of information that one needs to perform the task. But surely what's representing these kinds of fine, micro-cognitive details is not going to be in exactly the same place for me as it is for you. Neuroanatomy differs from person to person, just like gross anatomy. So, how do you normalize people's brains? How do you put different people's brains onto the same map?

Haxby said, "Well, I'm going to have people watch a very complicated stimulus—I'll have people watch movies—and I'm going to start with one person's set of brain data, and then I'm going to do a factor analysis." Factor analysis is used to find the natural structure in a data set. For example, if you're Netflix, and you want to understand the general structure of people's movie preferences, you can take a massive data set consisting of different people's movie ratings and submit it to a factor analysis. And the analysis will tell you that there are some general factors corresponding to familiar categories such as "romantic comedy" and "action adventure."
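As a toy illustration of what a factor analysis does here, with a plain singular value decomposition standing in for whatever Netflix or Haxby actually ran, take a small made-up ratings matrix; the leading component recovers the romantic-comedy-versus-action split.

```python
import numpy as np

# Rows are viewers, columns are movies; the numbers are invented.
ratings = np.array([
    # rom-com 1, rom-com 2, action 1, action 2
    [5, 4, 1, 2],   # viewer who prefers romantic comedies
    [4, 5, 2, 1],
    [1, 2, 5, 4],   # viewer who prefers action movies
    [2, 1, 4, 5],
], dtype=float)

# Center each movie's ratings, then find the leading components.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# The first component weights the two rom-coms one way and the two action
# movies the other way (the overall sign is arbitrary): a "genre" factor.
print(np.round(Vt[0], 2))
```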

Haxby wants to know what the major factors are in the patterns of neural activity that arise when people watch a complex stimulus such as a movie. What is the underlying structure within those patterns? How do the components of these patterns hang together to form more coherent representational categories?

In the beginning, it was just a technical matter. How do you align these brains? But what he found was that if you take one person's brain and you find their major components, and you take another person's brain and find their major components, the components seem to line up extremely well. If you can translate the specific topography of one person's brain into the topography of another person's brain, or into a common topography using this higher order space, that's one important step forward.
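Here is a deliberately simplified sketch of that alignment idea, not Haxby's actual hyperalignment algorithm: two simulated "subjects" respond to the same movie, one subject's responses are rotated into a differently oriented voxel space, and an orthogonal (Procrustes) transformation learned from the shared time series brings them back into register.

```python
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_dims = 200, 10

# A common response to the "movie," plus an arbitrary rotation and noise
# to mimic two subjects whose voxel spaces don't line up.
shared = rng.normal(0, 1, (n_timepoints, n_dims))
rotation, _ = np.linalg.qr(rng.normal(0, 1, (n_dims, n_dims)))

subject_a = shared + rng.normal(0, 0.2, shared.shape)
subject_b = shared @ rotation + rng.normal(0, 0.2, shared.shape)

# Orthogonal Procrustes: find the rotation R minimizing ||subject_b @ R - subject_a||.
U, _, Vt = np.linalg.svd(subject_b.T @ subject_a)
R = U @ Vt
aligned_b = subject_b @ R

print("misalignment before:", np.linalg.norm(subject_b - subject_a).round(1))
print("misalignment after: ", np.linalg.norm(aligned_b - subject_a).round(1))
```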

If you're a pessimist you say, "Well, factors-schmactors. Are those going to correspond to some kind of interesting cognitive units?" Maybe they will; maybe they won't. But these kinds of experiments at least give me hope that we can do more than just, in a brute-force kind of way, separate the A's from the B's, predict the A's from the B's. It gives me hope that we can actually find deeper structure, that we can then start saying, "Okay, well, if you were to manipulate this component this way, then you would have an overall representation that looks more like this instead of a representation that looks more like that." In other words, when you look at the level of components within these neural patterns, it might start looking more like a language and less like a schmeer of big data.

DENNETT: A schmactor-schmeer.

BROCKMAN: We're calling your talk "Factor-Schmactor."

PIZARRO: Every time I say what I'm about to say it sounds like I'm a pessimist of some sort about neuroscience, but I'm a real optimist about neuroscience. I just want to check to see whether we're talking about the same thing. You offer out this hope that we will learn deep things about psychology, and then it turns out that what we're learning is really, really interesting things about the brain. So, for a neuroscientist this is amazing. In fact, it is a level up from just maybe regular neuroscience because what you're essentially saying is we're learning the assembly here, the machine language of the brain. But I'm still not sure that we know more about the psychology. It's one thing to say we know how neurons encode concepts in this sense, but it still seems as if it's dangled out as a hope for a psychological theory that it's not just elbow grease that we need to get to the psychological theory. It's that even when we solve that problem, as Dan was saying, there's still this huge open question about the psychology.

GREENE: It's the mind-body problem. It's transparency.

PIZARRO: I don't know. Don't call it that. It's not the mind-body problem.

GREENE: Sorry. It's not the problem of consciousness. Let's be more specific. It is the problem of understanding in a transparent way the relationship between mental phenomena and the physical phenomena of the brain.

I freely admit that that gap has not been closed and so, therefore, I can't tell you when it's going to be closed, even a little bit. But let me try to do what I did with Dan, to give you some sense for how this could go. In the Mitchell experiment that I described where they're predicting brain patterns for things like "airplane" and "celery," the algorithm hasn't seen the brain data for these words, and yet it can predict reasonably well what those brain patterns will look like. For that prediction to work, there must be certain semantic features that, say, different vegetables have in common, and those semantic features seem to be represented commonly among different kinds of vegetables, or different kinds of artifacts or things. When it comes to concrete objects like celery and airplanes, we can make a pretty good guess about what the kinds of features are likely to be: Does it have leaves? Is it green? Does it move on its own? Does it make a roaring noise? And so on and so forth.

What happens when we start getting into abstract concepts? For now, to try to get a result, researchers are looking at concrete nouns. But what if you start looking at more abstract things? You could go all the way to things like "justice" or something like that. But you might also consider things that are somewhat abstract and somewhat concrete, like the idea of a market. A market can be a literal place where people exchange goods, but a market can be a more abstract kind of thing. Just as you can identify the features that are relevant for classifying different kinds of more basic concepts such as "celery" and "airplane", where you're probably not going to get any big surprises about the relevant features, you may be able to do the same kind of thing for more abstract things. For most basic concepts, I can say, "Oh, well, plants kind of look similar and vehicles kind of do the same sort of thing." But when it comes to understanding the relationships among abstract concepts, it's much harder to know where to start, and doing something like this kind of analysis can help.

Another thing that neuroimaging has revealed is that there's a huge, important distinction between animate and inanimate things. This seems to be the biggest split in terms of where object concepts are housed in the brain. And while that's not super surprising—it seemed like a good candidate—you wouldn't necessarily know that that's true just from observing behavior, at least not from the literature that I've seen.

CHRISTAKIS: Could you do sneaky experiments like show a celery-shaped car, for example? I'm serious, like trying to screw with the brain?

GREENE: One foot in front of the other. As far as I know, there haven't been…

CHRISTAKIS: Like a dog-shaped nuclear reactor?

GREENE: You do see things like that in other domains. There's this distinction between objects and places. And some clever person says, "You've got a place like a house, you've got an object like this toy. Well, what if it's a toy house?" Can you see the kind of shifting back and forth from a more object-like representation to a more place-like representation, depending on how it's framed? In the more perceptual end of things you see stuff like that. That's not too far off from your celery car.

 
