
THE NEW SCIENCE OF MORALITY
An Edge Conference

JOSHUA KNOBE

...what's really exciting about this new work is not so much just the very idea of philosophers doing experiments but rather the particular things that these people ended up showing. When these people went out and started doing these experimental studies, they didn't end up finding results that conformed to the traditional picture. They didn't find that there was a kind of initial stage in which people just figured out, on a factual level, what was going on in a situation, followed by a subsequent stage in which they used that information in order to make a moral judgment. Rather they really seemed to be finding exactly the opposite.

What they seemed to be finding is that people's moral judgments were influencing the process from the very beginning, so that people's whole way of making sense of their world seemed to be suffused through and through with moral considerations. In this sense, our ordinary way of making sense of the world really seems to look very, very deeply different from a kind of scientific perspective on the world. It seems to be value-laden in this really fundamental sense.

[JOSHUA KNOBE:] So far we have been talking about questions in moral psychology. So we've been talking about the questions: How is it that people make moral judgments? Do they make moral judgments based on emotion or reason? Is it a capacity that's just learned or is it something that's innate?

In this last talk, I thought I'd take things in a slightly broader direction. What I want to ask about is a slightly different question: What is the role of people's moral thinking in their cognition as a whole? What is the role of this particular type of thinking — morality — in our entire cognition, the way we think about  things in general?

Suppose we look out at a certain kind of situation unfolding. As we look at this situation, we might make certain types of moral judgments about it. We might think that a person in this situation is doing something morally wrong, or morally right. We might think that the person deserves praise or blame for what the person is doing. But we might also think about all sorts of other aspects of the situation. We might look at a situation and just wonder: What are these people doing? Or we might wonder: What do they intend to accomplish? Or we might wonder: Are these people happy or unhappy? What are they going to be causing down the line and so forth?

So the question now arises: What is the relationship between these two aspects of people's thought? What is the relationship between the way they think about moral questions (like, say, whether someone did something morally wrong or morally right) and the way they think about these straightforwardly factual questions? Questions like: What are these people doing? What are they intending to accomplish? What are they going to be causing? And so forth. What is the relation between these two kinds of judgments?

And here there's been a kind of a traditional picture that long dominated the field. The picture was that one could think about the relationship between these two things in terms of something like a series of stages. So you look out on the situation. The first thing you do is you just try to figure out what is actually going on in this situation. So you're just trying to think: What are these people doing? What are they intending? What are they causing? Just try to get a grip, in a purely factual, broadly scientific way, on what's occurring in this situation. And then once you figure that out — once you really know what's happening in the situation — then you can do something further, as a kind of second stage. In the second stage, you take in all this information about what these people are doing, and you use it to make a further judgment, a moral judgment about, say, whether what they're doing is morally right or morally wrong.

So in this kind of view, moral cognition occupies just a small and very delimited aspect of our cognition as a whole. When we try to understand the world, most of what we're doing has nothing to do with morality. It's just this purely scientific attempt to understand things. And then there's this kind of extra little step at the end. After everything else is done, this extra little step where we engage in moral cognition and try to make a moral judgment. So, on this view, moral cognition doesn't end up being that important, but it is playing a certain role in one aspect of the process.

I think this kind of view is somehow deeply intuitive on a certain level. There's something that really resonates with people about this kind of picture. For that reason, it has long held a grip on the whole way that the study of moral cognition worked.

But just in the past few years there's been a growing challenge to this view. This challenge has come from a kind of surprising source. It happened that a group of people who are actually in philosophy departments began thinking that the right way of going about understanding concepts would be to actually do experimental studies about how people use these concepts. These were philosophers just trying to understand the concept of knowledge, the concept of causation, the concept of intention, and so forth. And they began thinking: If we really want to understand how these concepts work, we can't just sort of sit here in our armchairs trying to figure it out by reflecting on it. Maybe we should go out and actually do experiments to see how we use these concepts. So we're going to do studies in which we systematically vary particular factors and then show how varying those factors will influence people's application of these ordinary concepts.

This really ended up being a kind of revolutionary move within philosophy. It led to a sort of new and different approach to engaging in philosophical inquiry. Just as, a number of years ago, economists first began doing economic experiments and that gave birth to the field of experimental economics, so too now there's been a real shift towards this new idea of what's called experimental philosophy.

But what's really exciting about this new work is not so much just the very idea of philosophers doing experiments but rather the particular things that these people ended up showing. When these people went out and started doing these experimental studies, they didn't end up finding results that conformed to the traditional picture. They didn't find that there was a kind of initial stage in which people just figured out, on a factual level, what was going on in a situation, followed by a subsequent stage in which they used that information in order to make a moral judgment. Rather they really seemed to be finding exactly the opposite.

What they seemed to be finding is that people's moral judgments were influencing the process from the very beginning, so that people's whole way of making sense of their world seemed to be suffused through and through with moral considerations. In this sense, our ordinary way of making sense of the world really seems to look very, very deeply different from a kind of scientific perspective on the world. It seems to be value-laden in this really fundamental sense.

So I thought what I'd do today is just to talk about a couple of examples in which this sort of effect seems to be arising and then get your input about how we might be able to understand it. The very first experiment that I'm going to be describing is one that I think a lot of you already know. But then after that, we're going to be talking about newer work that I think most of you won't already be familiar with.

The very first effect I want to mention is an effect on people's ordinary intuitions about  the concept of intentional action. This is the concept we use to distinguish between things that people do intentionally, like, say, taking a sip of wine, and things that people do unintentionally, like, say, spilling the wine all over one's own shirt. So the question now is: How does this distinction work? How do people ordinarily distinguish between things that are done intentionally and those that are done unintentionally?

When you first begin to consider this, there's a sort of view that seems really tempting. And the view is: this distinction is just a straightforward, factual distinction. It just has to do with the mental states of the person performing the action. What it is to do something intentionally is something like: to know that you're doing this thing, or perhaps to want to do this thing, something like that.

But we started thinking, maybe there's actually more to it. Maybe there was something more subtle going on in people's intuitions about intentional action. And, in fact, maybe people's moral judgments were actually playing a role in their notion of what it is to do something intentionally or unintentionally.

The thought then was, if you want to know whether someone did something intentionally or unintentionally, it's not enough just to know what they wanted, or what they knew — you have to make a judgment about whether what they are doing is something morally bad or something morally good.

To get at this, initially, we just ran a very simple study in which participants were assigned to one of two conditions. So there were two different conditions, and participants were randomly assigned to them. They differed in just one respect.

Participants in one condition got the following  story. The story went like this.

The vice president of a company goes to the chairman of the board, and he says, "Okay, we've got this new policy. It's going to make huge amounts of money for our company. But it's also going to harm the environment." And the chairman of the board says, "Look! I know this policy's going to harm the environment. But I don't care at all about that. All I care about is making as much money as we possibly can. So let's go ahead and implement the policy." They implement the policy and then sure enough, it ends up harming the environment.

And the question then was: Did the chairman of the board harm the environment intentionally?

Faced with this question, most people say, "Yes!" They say the chairman of the board harmed the environment intentionally. And when you first start thinking about why they might say yes to a question like this, it seems like it probably just has something to do with the mental states that this chairman is described as having. So the chairman knows that he's going to harm the environment, and he goes ahead and does it anyway. Maybe it's that fact, the fact that he had this particular mental state, that makes people think he did it intentionally.

But we were thinking maybe there's actually something more complex about it. Maybe part of the reason that people say he did it intentionally is not just his knowledge, or the mental state he had, but the fact that they make this particular moral judgment, the fact that they judge that harming the environment is something morally bad and morally wrong for him to do. So participants in the other condition got a case that was almost exactly the same except for one difference, which is the word harm was changed to help. So the story then becomes this:

 The vice president of a company goes to the chairman of the board and says, "Okay, we've got this new policy. It's going to make huge amounts of money for our company and ... it's also going to help the environment." And the chairman of the board says, "Look! I know this policy is going to help the environment but I don't care at all about that. All I care about is just making as much money as we possibly can. So let's go ahead and implement the policy." So they implement the policy. And sure enough, it helps the environment.

And now the question is: Did the chairman help the environment intentionally?

 But here people don't give the same response. They don't say the chairman helped the environment intentionally. Instead they seem to say that the chairman helped the environment unintentionally. But look at what's happening in these cases. In the two cases, it seems like the chairman's attitude is exactly the same. In both cases, he knows the outcome is going to occur, he decides to do it anyway, but he doesn't care about it at all; he's not trying to make it happen. The thing that's differing between the two cases is just the moral status of what the person is doing. In one case he's doing something bad, harming the environment, and in the other case, he's doing something good, helping the environment. So it seems somehow that people's moral judgments can affect their intuitions just about whether he did it intentionally or unintentionally.

 When we first came out with these results a number of years ago, it was really puzzling why this might be happening. We really couldn't understand what might be going on here, what might be the boundaries or the nature of the effect. But just in the past couple of years, a whole bunch of different studies by different researchers are really giving us a lot of insight into why this effect is occurring, what might really be going on here. I thought that a good way of discussing some of this work would be to talk about the work on this that has been done by the people here, the people at this workshop. And maybe that could also just give a general sense of how work on this topic has been proceeding.

I thought I'd start by talking about a study that was done in Marc Hauser's lab. Marc Hauser was interested in the question: Is this effect, the effect that we just described, due to people's emotional responses? Is it something about the way we're emotionally responding in these cases that leads us to say that the harming is intentional but the helping is unintentional?

The researchers in his lab came up with a really interesting way of going about testing this question, and that was to look at people who have really serious deficits in the capacity for emotional response. If we want to know why ordinary people respond the way they do, whether ordinary people's responses are due to the sort of emotional reaction they have, we can go to people who differ from us in the relevant respect, who have a real impairment in the capacity for emotional response, and then see how they respond. Hauser's lab looked at the intuitions of people who have lesions to the ventromedial prefrontal cortex.

As a number of people at this workshop so far have mentioned, people who have these lesions, VMPFC lesions, show really, really severe impairments in the capacity for emotional response. Their capacity to respond emotionally is really different from that of normal subjects. And on moral questions that have been claimed to involve emotion, they give really, really different responses. So a team of researchers, led by this pretty amazing graduate student named Liane Young, went to these participants, participants with lesions to the ventromedial prefrontal cortex, and gave them that exact case that I just described — the case of the guy who either harms the environment or helps the environment.

The results came out as something of a surprise. These participants showed exactly the same asymmetry we find among normal participants. Even these people with really serious impairments in the capacity for emotional response still said that he harmed the environment intentionally but that he helped the environment unintentionally. So what Liane Young, Marc Hauser and their colleagues then concluded is that maybe this effect actually has nothing to do with emotion. Maybe it's not due to an emotional response we're having. Maybe we really have a kind of purely cognitive system for a kind of a non-conscious moral appraisal, and it's that purely cognitive, non-conscious appraisal that's driving the effect we find in normals.

So we have this emotional response to these cases. But if you take that emotional response away from us, we still show the same answer. So it must not have been the emotional response that was driving our answers in the first place.

But, of course, in cases like these, we often find different experimental studies that push in different kinds of directions. And this case is no exception. It happens that David Pizarro and Paul Bloom also ran a series of studies on this topic, but their studies ended up moving in a really different direction. They were thinking maybe people's emotional responses do have an impact on these judgments. In particular, they were thinking maybe people's emotional responses could have an impact even in cases where they don't endorse those emotional responses.

Suppose you look at a situation. You see someone doing something, and you find what this person is doing really disgusting. It disquiets you or disturbs you. You feel sort of bad about it. But then suppose you reflect. You reason about the situation. You conclude that what this person is doing is not actually bad — it's not wrong at all. So what you think, ultimately, is that there was no reason to be disgusted by it. Your disgust is misplaced. It's actually a perfectly fine action.

Their suggestion was that in a case like that, you would find the effect emerging anyway. The immediate emotional response people have would still impact their judgments about whether the person acted intentionally, even in cases where they didn't endorse that sort of emotion.

What they needed then was a case in which people would have an immediate emotional response that they wouldn't end up endorsing. So how do you find such a case? Well one of the cases they came up with was the case of interracial sex.

They began by just asking participants straight up. "What do you think of interracial sex? Is there anything wrong with interracial sex?" You'll be happy to know that 100% of subjects say, "Absolutely not. There is nothing, nothing wrong with interracial sex. It's completely, completely fine." 

But then, they had a kind of trickier way of getting at people's intuitions about this question. And the way was this. They presented a case that was much like the one that I described earlier with the chairman. Subjects were told:

Imagine there's an executive at a record company. He's meeting with his assistant, and his assistant says, "Okay, we've got this new music video. It's really going to increase sales of this album, but we've been looking at the images in this video, and we think it's also going to encourage interracial sex. In particular, it's going to encourage sex between black men and white women."

The record executive thinks about it for a moment and then says, "Look! I know that the images in this video are going to encourage interracial sex, but I don't care at all about that. All I care about is increasing sales of this album. So let's go ahead with this new video. We're going to release it." They release the new video, and sure enough, it ends up encouraging interracial sex.

And now the question is: Did he encourage interracial sex intentionally?

What you see here is a really striking kind of correlation. Those participants who were high in a dispositional tendency to feel the emotion of disgust — those participants who just, in general, feel higher levels of disgust about the various events that might occur in their lives — tend to say that he intentionally encouraged interracial sex. Whereas those participants who are low in the dispositional tendency to feel disgust say that he unintentionally encouraged interracial sex.

So what we see here is this striking finding. Even though all these participants swear that they find nothing wrong with interracial sex, those who had higher levels of disgust have a kind of immediate emotional reaction to this case (which they choose not to endorse), and maybe that immediate emotional reaction is still impacting their intuitions about whether it's done intentionally or unintentionally.

Looking at these different kinds of data, it's very difficult to figure out what to make of them. Maybe further studies can be done that may help disentangle these kinds of effects, but instead of disentangling them right now, let's turn to a completely separate example, a completely separate kind of case in which a similar sort of effect arises.

This is the concept of happiness. The concept of happiness is the concept we use when we ask, "Is this person truly happy?" Or when we ask, "Has this person really achieved happiness?" And the question now arises: What is that concept? What is our concept of happiness?

Initially, you might think exactly the same thing that people thought about intentional action. You might think that the concept of happiness is just the concept of having a certain kind of mental state. So to have happiness is to have these emotions: feelings of pleasure, enjoyment in the things that you're doing, a sense of satisfaction.  If you have those mental states, then you're happy. If not, then you're not happy.  It's just a purely scientific kind of thing.

But then, as we started thinking about it, we started thinking: Maybe there's actually something more to it. Maybe when you pick out a person and you ask "Is this person truly happy?" you're not just asking, "Does this person have this psychological state?" You're asking something more, something that involves a certain kind of value judgment. You're asking: Does this person have a life that has a certain kind of value or meaning in some way?

So if you say "I hope that this child will grow up to be happy," you're not just saying, "I hope this child will grow up to have a certain psychological state." You're saying, "I hope this child will grow up to have a certain kind of life, a life about which you can make a certain kind of value judgment."

If that's right, then the concept of happiness is really different from other kinds of concepts we use to understand the emotions. In particular, it would even be different from the concept of unhappiness.

Before I talk about any of the data, let me just see if you can kind of get this picture intuitively. Suppose we're talking about someone, and I say, "Is she truly happy?" The intuition you're supposed to have is that when I ask that question, I'm not just asking, "Does she have a certain psychological state?" or "Does she feel certain emotions?" or "Is she feeling a kind of pleasure?" We seem to be getting at something more. When you say she's truly happy, you're saying that her life has some kind of meaning or value.

But now consider the opposite question. Suppose we're talking about someone, and I say, "She's truly unhappy." In saying "She's truly unhappy," I'm not saying that her life actually is bad or empty or meaningless. It seems like I'm just saying that she has a particular kind of emotion. She has a particular kind of feeling. A feeling of unhappiness. So we were thinking that these concepts might differ in a sort of systematic way.

Before I talk about this difference between happiness and unhappiness, I'll just give you a feeling for the kind of effect you see just within the case of happiness. (This is a study that's definitely unpublished — we just submitted it a couple of weeks ago.)  In these studies, we just asked people questions. We presented people with vignettes that differ in certain respects and just asked whether a person is happy.  

So subjects in the first condition got the following story.

Maria is the mother of three children who all really love her. In fact, they couldn't imagine having a better mom. Maria usually stays pretty busy taking care of her children. She often finds herself rushing from one birthday party to the next and is always going to pick up some groceries or buy school supplies.

While Maria has been preoccupied with her children, she does get to see her old friends occasionally. Almost every night she ends up working on some projects for the next day, or planning something for her children's future.

So here, participants are given this story about a really meaningful life that has this really wholesome objective. Then participants are told that the person who has this life has these really positive emotional states. So these participants are told:

Day to day, Maria usually feels excited and really enjoys whatever she's doing. When she reflects on her life, she also feels great. She can't think of anything else in the world that she would spend her time doing and feels like the success she's had is definitely worth whatever sacrifices she's made.

And these participants were just asked a simple question: Is Maria happy?

Then participants in the other condition, just as in the story of the chairman who helps or harms the environment, got a story of someone who has a very different kind of life, a life that was different from a value-laden perspective, a person whose life is sort of vapid, or meaningless, or empty in a certain respect. So here's the story that the participants in the other condition got:

Maria wants to live the life of a celebrity in L.A. In fact, she's even started trying to date a few famous people. Maria usually stays pretty busy in trying to become popular. She often finds herself rushing from one party to the next and is always going to pick up some alcohol or a dress. Maria is so preoccupied with becoming popular that she's no longer concerned with being honest to her old friends unless they know someone famous. Almost every night she ends up either drunk or doing some kind of drug, just like the famous people she wants to be like.

But then participants in the second condition were given the information that she has really positive emotional states. In fact, they were given exactly the same paragraph as participants in the first condition:

Maria usually feels excited and really enjoys whatever she's doing. When she reflects on her life, she also feels great. She just can't think of anything else in the world that she would spend her time doing and feels like the success she's had is definitely worth whatever sacrifices she had made.

And these participants were then given the same question: Is Maria happy?

So participants in the two conditions are given exactly the same information about what's going on in Maria's head. She has all these kinds of positive emotions, feels all this pleasure, takes a lot of enjoyment in her life. But they were given really different information about whether her life is a valuable or meaningful one.

It turns out that they make, correspondingly, really different judgments about whether she's happy. Even though she experiences positive emotions in both cases, people say that she is happy when her life is a meaningful and valuable one, but that she's not happy in the case where it's not.

So we ran a number of different studies, trying to test for this, controlling for various variables. We always found the same effect emerging. If people believe that Maria is having a sort of a vapid or empty or meaningless life, they're unwilling to say that she is happy.

But, of course, the real claim I wanted to make was not just about this one concept, the concept of happiness, but about the idea that the concept of happiness differs from other kinds of emotion concepts in a particular way. So to get at this notion, we ran a separate series of studies that used a 2x2 design, in which we independently manipulate two different factors.

First of all, we have a case in which someone has a valuable life (like, say, being a mother) or a case in which someone has a non-valuable, or bad, life (like, say, being this Paris Hilton kind of character).

Then, separately from that, we have this other difference between different conditions. Some participants were asked the question: "Is Maria happy?" and other participants were asked the question: "Is Maria unhappy?"

If you look at the judgments for whether Maria is happy, they just replicate the existing finding. People say that Maria is happy if she has a good life, but they do not say that Maria is happy if she has a bad life. But now suppose you just turn to this other question. You just ask subjects: "Is she unhappy?" There, you see absolutely no effect. There's no effect of people's moral judgments on whether people say that Maria is unhappy. The only thing that affects whether they say that she's unhappy or not is just what mental state she's described as having. Is she described as feeling bad? Or is she described as feeling good?

Across three different studies, we found the same interaction pattern arising. For judgments about the question "Is she happy?" the moral status of her life is playing a big role. For judgments about the question "Is she unhappy?" there is no effect of moral judgment.
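To make the shape of that interaction concrete, here is a minimal illustrative sketch in Python. The ratings below are invented placeholder numbers, not data from these studies; the sketch only shows what a 2x2 design of this kind looks like, with the value of Maria's life mattering for "happy" judgments but making almost no difference for "unhappy" judgments.

```python
# Minimal sketch of the 2x2 between-subjects design described above.
# Factor 1: the kind of life Maria leads (valuable vs. non-valuable).
# Factor 2: the question participants answer ("Is she happy?" vs. "Is she unhappy?").
# All numbers are hypothetical placeholders (say, mean agreement on a 1-7 scale).

ratings = {
    ("valuable",     "happy"):   5.8,
    ("non-valuable", "happy"):   3.1,  # value of the life matters for "happy" judgments...
    ("valuable",     "unhappy"): 2.0,
    ("non-valuable", "unhappy"): 2.1,  # ...but barely matters for "unhappy" judgments
}

def effect_of_life_value(question):
    """Gap between the valuable-life and non-valuable-life cells for one question.
    A large gap for 'happy' and a near-zero gap for 'unhappy' is the interaction
    pattern described in the talk."""
    return ratings[("valuable", question)] - ratings[("non-valuable", question)]

for question in ("happy", "unhappy"):
    print(f"Effect of the life's value on 'Is she {question}?': {effect_of_life_value(question):+.1f}")
```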

In conclusion, I just want to say a few brief words about where all this seems to be heading. When we first started out doing this kind of research, it seemed like maybe the effect we were seeing for the concept of intentional action was just some weird kind of quirk or idiosyncrasy, something kind of bizarre about this one concept. So we thought that there was something special going on about this one concept, the concept of intentional action, but that the kind of traditional picture I started out by describing might still be right.

But as research on these kinds of phenomena has proceeded, we seem to be finding this same kind of effect again and again for every single concept that we're looking at. So it's not just that moral judgments impact people's intuitions about intentional action. It's not just that they impact intuitions about happiness. They also impact intuitions about knowledge, about causation, about freedom, even about whether a particular trait counts as being innate.

What it's really starting to look like now is that the initial effect we saw for the concept of intentional action was really just a symptom of a much broader phenomenon whereby people's moral judgments seem to be infecting their whole way of understanding the world they see around them. I'd be really interested in hearing what folks here think about how this phenomenon might be explained.


Joshua Knobe

Suppose you look out in the world and see a person engaged in some important activity. Ultimately, you might arrive at a conclusion about whether what she is doing is morally right or wrong, but before you can even begin asking about that sort of question, it seems that you have to go through an earlier step. You have to get clear about what is actually happening in the situation. So you might start by trying to figure out what the person intends to accomplish. Or what impact her actions will have on the situation as a whole. Or whether she will be making people happy or unhappy. All of these questions seem perfectly straightforward and distinct from any controversial moral claims. Then later, once you have gotten a handle on all of the straightforward factual questions, you can go on to address the moral questions, trying to decide whether the person's action is morally right or wrong.

This, at least, is the usual picture. But an ever-growing body of evidence suggests that this picture is deeply mistaken. The evidence does not seem to suggest that moral judgment is just some extra step added on after we have figured out basically what is happening. Rather, it looks like moral judgment is actually exerting an influence from the very beginning.

Over the past few years, a series of experimental studies has reexamined the ways in which people answer seemingly ordinary questions about human behavior. Did this person act intentionally? What did her actions cause? Did she make people happy or unhappy? It had long been assumed that people's answers to these questions somehow preceded all moral thinking, but the latest research has been moving in a radically different direction. It is beginning to appear that people's whole way of making sense of the world might be suffused with moral judgment, so that people's moral beliefs can actually transform their most basic understanding of what is happening in a situation.

JOSHUA KNOBE is a faculty member in Yale University's Program in Cognitive Science. He is one of the founders of the "experimental philosophy" movement, which seeks to use experimental methods to address the traditional problems of philosophy. Accordingly, his publications have appeared both in leading psychology journals (Psychological Science, Cognition, Journal of Personality and Social Psychology) and in leading philosophy journals (Journal of Philosophy, Noûs, Analysis). His work has been discussed in popular media venues including the New York Times, the BBC and Slate. He is coeditor, with Shaun Nichols, of Experimental Philosophy.

Links:

Joshua Knobe's Yale webpage
The Experimental Philosophy Page
Joshua Knobe on Wikipedia

Articles & Press:

The New New Philosophy By Kwame Anthony Appiah, in New York Times Magazine
Lessons from the Park in the Chronicle of Higher Education
Interview on Bloggingheads.tv
Can a Robot, an Insect or God be Aware?, in Scientific American
The X-Philes: Philosophy Meets the real world by Jon Lackman, in Slate


