A New Science of Morality, Part 6

David Pizarro [9.17.10]


David Pizarro: Like the others, I'd really like to thank John and the Edge Foundation for bringing us out. I really feel like a kid in a candy store, being able to speak with everybody here on a topic that I actually thought would be the nail in the coffin of my graduate career. But thanks to kind people, including Paul and John (Laughs), it has not been the nail yet. (Laughs)

What I want to talk about piggybacks off the end of Paul's talk, where he started to speak a little bit about the debate we've had in moral psychology and in philosophy on the role of reason and emotion in moral judgment. I'm going to keep my claim simple, but I want to argue against a view that probably nobody here holds (because we're all very sophisticated): the view that emotion and reason are at odds with each other, in the sense that to the extent that emotion is active, reason is not, and to the extent that reason is active, emotion is not. (By emotion here I mean, broadly speaking, affective influences.)

I think that this view is mistaken (although it is certainly the case sometimes). The interaction between these two is much more interesting. So I'm going to talk a bit about some studies that we've done. Some of them have been published, and a couple of them haven't (because they're probably too inappropriate to publish anywhere, but not too inappropriate to speak about to this audience). They are about the role of emotive forces in shaping our moral judgment. I use the term "emotive" because they are about motivation, and about how motivation affects the reasoning process when it comes to moral judgment.

There are a variety of ways that emotional processes affect reason in a nuanced way, and I just want to briefly mention one of them. In work I've done with some people here, and in work that John and his colleagues have done, we have evidence that disgust sensitivity, the simple tendency to experience a certain kind of emotion, can, on one account (an account we believe, although we don't have good causal evidence for it), shape moral beliefs over time.

We've shown that disgust sensitivity, that is, being more easily disgusted, predicts developing certain kinds of moral views over time. In particular, we've shown that not only are people more politically conservative if they're more disgust sensitive, they are more politically conservative in a specific way: they tend to adhere to the kind of moral view that the conservative party in the United States has endorsed in recent years, one characterized by opposition to homosexuality and abortion.

What we've shown is that people who are higher in disgust sensitivity, that is, people who are more easily disgusted, actually seem to hold these views. One way of thinking about this is that early differences in emotional style can shape the kinds of things that you find persuasive. It's not a simple case of an emotion influencing me, my reason shutting off, and my being swayed by a gut reaction (which certainly happens); it's a more complex interaction.

What I'm going to talk about today is a different kind of interaction between emotion and reason, and I'm going to use the example of moral principles. By moral principles, I mean the term as many people (normative ethicists, developmental psychologists) have used it: a sort of guiding general principle that leads to specific judgments.

The reason we use moral principles, or at least the reason people think we ought to use them, is that any specific moral claim, like say "murdering babies is wrong" ... maybe that one's too obvious ... any singular moral claim can't be defended the way empirical facts can. You can't simply defend it by, say, going to Wikipedia and saying, "See, here it shows that murdering babies is wrong."

It takes a different kind of evidence to back up a moral claim. There is no yardstick, there is no polling you can do (although some people have argued that there is evidence for certain kinds of moral claims). A lot of them require a defense that comes in the form of a principle. Moral principles provide justification, because you can't just say "Homosexuality is wrong." "Why?" "Well, just because."

Usually you provide some overarching principle. The use of these overarching principles has been considered the hallmark of moral reasoning by, for instance, Lawrence Kohlberg and his colleagues. The reason a principle seems so powerful is that it's invariant across a whole bunch of similar situations, just as a broad mathematical principle applies to many specific problems.

I don't need to discuss this too much here, because it's already been explained and you probably already know it, but the two biggest, most popular kinds of moral principles that philosophers and psychologists have talked about are these. The first is the consequentialist defense: if you have to use a general rule to determine what's right and what's wrong, you might say that what's right and wrong is solely determined by the outcome of any given act. This is the broad rule one might call utilitarian (consequentialist is the broader term).

Or, to determine whether something is right or wrong, you might appeal to any of a number of deontological rules, such as that the murder of innocent people is always wrong, no matter what. Now, these two have been pitted against each other many times by philosophers. I should say they're not always in conflict, obviously; but the interesting cases have been when they are. Earlier Josh [Greene] talked to us about trolley problems as the paradigmatic examples you can use, "the fruit flies" of moral psychology, because you can easily look at people's responses to trolley problems and other similar dilemmas that pit consequentialist and deontological principles against each other.

The way psychologists and philosophers have traditionally understood this is: you're confronted with a moral situation. You see a possible moral violation and you ask whether there is a principle that might speak to whether this is morally wrong or not. You choose the appropriate moral principle, whether it's consequentialism or some sort of deontological rule, and you say, ah-ha! It applies here. Therefore, this act is wrong.

What I want to present today is some evidence that this is, in fact, not the way we do it. Rather, what happens is that there's a deep motivation we have to believe certain things about the world. And some of these things are moral things. We have a variety of moral views, specific moral claims that we all hold to be true, that we believe, first and foremost, independent of any principled justification. What happens is that we recruit evidence in the form of a principle, because the principle is so rhetorically powerful. It's convincing to somebody else: you can say, well, this is wrong because it causes harm to a large group of people.

I'm going to be borrowing from a large tradition in social psychology that says it's not that we don't reason; in fact, we reason a whole lot.  It's just that the process of reasoning can go horribly awry because of a variety of biases that we have (for example, the confirmation bias that was mentioned earlier).  There have been some classic studies in social psychology showing that we work really hard when we're presented with something that contradicts a view of ours.

One of my colleagues who did some of these studies with me, Peter Ditto, has shown that if you tell people there's an easy test that will tell you whether or not you have a disease (he made up the disease, but let's just say it was herpes), you can manipulate how they react to it. Let's say I present you with a little piece of paper, like a litmus strip: you put it in your mouth and then dip it in a solution. Some participants were told that if it turns green, they're healthy; others were told that if it turns green, they're sick. Likewise, some were told that if nothing happens, they're healthy, and others that if nothing happens, they're sick. By the way, the paper was inert; absolutely nothing ever happened. The people who thought that nothing happening was a good thing dipped it in once and said, "yes, I'm healthy!"

But the people who were told that if nothing happens, you're sick, were sitting there, doing this [illustrating dipping the paper repeatedly and checking each time]. This is a nice metaphor for what goes on in our minds when we're confronted with something that goes against one of our beliefs. We're very motivated to find whatever evidence we can that we're actually right.

One of the reasons I started thinking about this came from early experiences of asking my parents certain things. I remember the first time I asked my parents about Hiroshima and Nagasaki. I said, well, it kind of seems wrong. Right? So I said, you know, Mom, Dad ... (I was 23) ... no. (Laughs) Mom and Dad, this doesn't seem right. And the answer they gave me was: well, if it weren't for us dropping the bomb on Hiroshima and Nagasaki the war would have lasted a lot longer, a lot more people would have died, and so it was the right thing to do, a very consequentialist justification.

Then on other occasions they would reject this very same justification, telling me "the ends never justify the means" about a completely different example from a different domain. But wait: the power of a principle is supposed to be that it applies in invariant fashion across all these examples, and here you've told me in one breath that consequentialism is the right moral theory, and in another breath that consequentialism is a horrible moral theory. So which is it?

So when I got the chance, we put together some studies that would test out this very idea—that people have a moral toolbox. That is, we are very capable of engaging in principled reasoning using deontological principles or consequentialist principles; what we do is simply pull out the right moral principle whenever it agrees with our previously held position.

This is contrary to what Josh [Greene] has shown, on the face of it, but I don't think it's contrary at a deeper level. The tradition in moral philosophy and in moral psychology, when poring over these [moral dilemma] examples, has been to talk about "Jones and Smith," or "Person A and Person B," or "Bob and Alice," and what Josh has shown is that people have a default acceptance of deontological principles, but that if you get them to think hard enough, they'll often go with the consequentialist decision.

But what we were interested in, given the experience that I and my colleagues have had in actual moral discussions, was what happens when it isn't "Person A," "Person B," or "Jones and Smith," but actually your friend or your country or your basketball team, or something — situations that we encounter in everyday life, and which are infused with a motivation to favor your side, or to argue that you are right.

So we sought to design a set of experiments that would test whether or not this was actually the case. Would people explicitly appeal to, or endorse, a principle that justified what they believed to be true, while claiming that it was invariant, and at the same time deny the same principle whenever the opportunity arose to criticize a moral view that they opposed?

What we did was try to find a natural source of motivation, and politics provides a wonderful natural source of motivation for moral beliefs, because so much of politics, as I think John [Haidt] has nicely pointed out, really is about moral differences, right? So we wanted to look at conservatives and liberals, their moral views and their appeal to these explicit moral principles, and whether they would really show the invariant sort of reasoning that philosophers and psychologists assume we're supposed to use, or whether they would just use whatever principle justified their belief.

What we did (to bring some trolleyology back into the mix) was use a modified version of the footbridge dilemma. As Josh described this morning, the footbridge dilemma is one where five people are trapped on train tracks and there's one large person you could push off a footbridge to stop the train from killing the five. Most people find this morally abhorrent; they would never push the large person off the bridge. But we manipulated it a little bit: we gave the large man a name. We told people they were in a situation where a large number of people were going to get killed by a bus unless they pushed a very large man off of a footbridge.

We manipulated whether the guy's name was Tyrone Payton or whether his name was Chip Ellsworth III.  We took this to be a valid manipulation of race, and (because reviewers ask for such things) it turns out that yes, most people think that Tyrone is Black and most people think Chip is White.  And so we simply asked people, would you push Chip Ellsworth III (or would you push Tyrone Payton)?

We also asked people to indicate [their political orientation] on a seven-point scale, where one meant they were liberal and seven meant they were conservative. In case you're curious, this simple one-item measure of liberal/conservative is correlated with much larger and more detailed measures of political orientation, and it predicts voting behavior, along with all the other things that you might want it to predict. We then asked people "was this action appropriate [i.e., pushing the large man off the bridge to save the greater number of people]?", "do you agree with what the person did?", and, critically, we asked about the general principle: "do you agree with the following: sometimes it is necessary to kill innocent people for the sake of saving greater numbers of lives?" (It's a very, very straightforward principle.)
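
If it helps to see the design concretely, here is a minimal sketch in Python of how one might organize and summarize responses from a study like this. It is only an illustration under stated assumptions: the participant records, the cutoffs used to split liberals from conservatives on the seven-point scale, and the field names are all hypothetical, and this is not the actual dataset or analysis from the study.

    # Illustrative sketch only: hypothetical data for a 2 (victim name) x
    # 2 (political orientation) design like the one described above.
    from statistics import mean

    # Each record: political orientation (1 = very liberal ... 7 = very
    # conservative), which name the participant saw, and agreement with the
    # consequentialist principle (1 = strongly disagree ... 7 = strongly agree).
    responses = [
        {"orientation": 2, "victim": "Chip",   "endorse_principle": 6},
        {"orientation": 2, "victim": "Tyrone", "endorse_principle": 3},
        {"orientation": 6, "victim": "Chip",   "endorse_principle": 3},
        {"orientation": 6, "victim": "Tyrone", "endorse_principle": 6},
        # ... more hypothetical participants ...
    ]

    def cell_mean(side, victim):
        """Mean principle endorsement for one cell of the design."""
        keep = (lambda o: o <= 3) if side == "liberal" else (lambda o: o >= 5)
        scores = [r["endorse_principle"] for r in responses
                  if keep(r["orientation"]) and r["victim"] == victim]
        return mean(scores) if scores else float("nan")

    # A "crossover" pattern would show up as opposite orderings of the two
    # victim conditions for liberals versus conservatives.
    for side in ("liberal", "conservative"):
        print(side, "Chip:", cell_mean(side, "Chip"),
              "Tyrone:", cell_mean(side, "Tyrone"))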

What we found was that liberals, when they were given Chip Ellsworth III, were more than happy to say, "clearly consequentialism is right." You push the guy, right? You've got to save the people. Self-reported conservatives showed the opposite pattern: they were more likely to say that you should push Tyrone Payton, and that, well, yes, consequentialism is the right moral theory (when the example was Tyrone Payton). When asked about the general principle [of consequentialism], they endorsed it as well.

We first did this at U.C. Irvine, and then we wanted a sample of, you know, more real people, so we went out to a mall in Orange County and got people who are actually Republicans and actually Democrats (not wishy-washy college students). This time we used a "lifeboat" dilemma, where one person has to be thrown off the edge of a lifeboat in order to save everybody, again using the names "Tyrone Payton" or "Chip Ellsworth III." We replicated the finding, and this time the effect was even stronger.

If you're wondering whether this is just because conservatives are racist—well, it may well be that conservatives are more racist, but in these studies the effect appears to be driven by liberals saying that they're more likely to agree with pushing the White man and to disagree with pushing the Black man. So we used to refer to this as the "kill whitey" study. It appears to be driven by a liberal aversion to sacrificing Tyrone, and not, in this case, by anything the conservatives are doing. (Although if you want evidence that conservatives are more racist, I'm sure it's there.)

So we thought, okay, we demonstrated this using a traditional trolley example. Let's look at another moral example that might be more relevant and a bit more realistic. This time we gave students (again at U.C. Irvine) a scenario in which a group of soldiers was mounting an attack against an opposing military force (this is a classic "double effect" case from philosophy), where the military leaders knew that they would unintentionally kill civilians in the attack. They didn't want to, but they foresaw that it would happen.

For one set of respondents we described American soldiers in Iraq mounting an attack against Iraqi insurgents, in which innocent Iraqi civilians would die. The other set read about Iraqi insurgents attacking American forces, in which case innocent American civilians would die.

What we found, again, was a split when we asked people whether they were liberal or conservative. I'll just give you the actual example we used: "Recently an attack on Iraqi insurgent leaders was conducted by American forces. The attack was strategically directed at a few key rebel leaders. It was strongly believed that eliminating these key leaders would cause a significant reduction in the casualties of American military forces and American civilians. It was known that in carrying out this attack, there was a chance of Iraqi civilian casualties. Although these results were not intended, and American forces sought to minimize the death of civilians, they carried out the attack anyway." And we tell them that, sure enough, they do it and civilians die.

We then asked people, is this morally permissible? Is it okay to carry out a military attack when you are unintentionally, but foreseeably, going to kill civilians? And what you get, again, is a flip. This time it seems to be driven more by a liberal bias in favor of the action of the Iraqi insurgents. So we used to refer to this as the "reasons for treason" study. (Laughs)

But you get a more natural crossover effect here, such that conservatives are more likely to say consequentialism is true, that sometimes innocent people just have to die, but only when it's the Iraqi civilians dying. Liberals are more likely to say consequentialism is true when American civilians are dying.

So across these studies, you know, we showed in a few trolley scenarios, and in this military action scenario, that in fact it's not that people have a natural bias toward deontology or a natural bias toward consequentialism.  What appears to be happening here is that there's a motivated endorsement of one or the other whenever it's convenient. 

Up until this point, though, we had relied on participants' pre-existing political beliefs, and we really wanted to see if we could manipulate it.

In the great tradition of social psychology, we primed participants with a sentence-unscrambling task. Half of the participants got sentences in which one of the embedded words had to do with patriotism, and the other half got words that had to do with multiculturalism. We took this as a sort of proxy for manipulating political beliefs in a way that would be more closely aligned with "conservative" or "liberal."
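
To make the method concrete, here is a minimal sketch in Python of one common way a scrambled-sentence priming task can be put together: each item contains the words of a short sentence plus one extra prime word, shuffled, and the participant forms a sentence from a subset of the words. The word lists and sentences below are hypothetical stand-ins, not the actual stimuli from our study.

    # Illustrative sketch: assembling scrambled-sentence priming items for
    # two conditions. Word lists and sentences are hypothetical examples,
    # not the actual stimuli used in the study described in the talk.
    import random

    PRIME_WORDS = {
        "patriotism": ["flag", "nation", "anthem", "homeland"],
        "multiculturalism": ["diversity", "tolerance", "cultures", "inclusion"],
    }

    BASE_SENTENCES = [
        "she walked to work",
        "he read the newspaper",
        "they ate dinner together",
    ]

    def make_items(condition, rng):
        """Each item: a short sentence's words plus one prime word, shuffled.
        The participant unscrambles four of the five words into a sentence,
        incidentally reading the prime word along the way."""
        items = []
        for sentence in BASE_SENTENCES:
            tokens = sentence.split() + [rng.choice(PRIME_WORDS[condition])]
            rng.shuffle(tokens)  # scramble the word order
            items.append(tokens)
        return items

    rng = random.Random(42)  # fixed seed so the example is reproducible
    condition = rng.choice(["patriotism", "multiculturalism"])  # random assignment
    for item in make_items(condition, rng):
        print(" ".join(item))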

What we found was, regardless of the political orientation participants came in with, these priming manipulations worked.  In the exact same Iraqi/American military scenarios that I described earlier, if you were primed with patriotism, you showed the same pattern that conservatives showed, and if you were primed with multiculturalism, you showed the same pattern that liberals showed [in the previously described study].

So using a priming manipulation, in which we could embed the motivation in our participants (at least, that's what we think we're doing), that motivation still causes differential endorsement of consequentialism across the different scenarios.

Here I just want to tell you that we've done this with a whole bunch of other scenarios, and that, importantly, if you ask participants after they do the study whether race or nationality should play a role in these judgments ("do you think nationality or race should play a role in moral dilemmas?"), people overwhelmingly say "no." In fact, I don't remember a single instance of somebody saying "yes." Some people are so offended at us even intimating that they might be using race or nationality as a criterion that we get all kinds of insults, like "how dare you?" or "I'm not even answering this!" Another thing, if you're curious: some people, when we presented these results, would say, well, you know, liberals are right because, in fact, a Black life is worth more than a White life (these were White liberals in New York City and Chicago, just so you know). Fair enough, maybe that's the case for these people. But overwhelmingly, our participants, when asked, deny that this is the case.

Also, if, for instance, you give a liberal the scenario in which they're asked whether or not to sacrifice Tyrone and they say "yes," and then you give them the scenario with Chip, they also say "yes." That is, when both versions are in front of them, they notice the potential inconsistency and correct for it. We take this as evidence that people have a desire for consistency — a sort of embarrassment at their own inconsistency.

This is not just about consequentialism and deontology; these just provide a nice set of examples of moral principles that other psychologists use. To give you another example, we also asked people about freedom of speech. Do you think that freedom of speech is a principle that should be upheld no matter what? The answer depends on the scenario you give. Half of the participants were told about a Muslim protester who was burning the American flag and proclaiming America to be evil.

When we asked the liberals and conservatives, "do you agree with what he did?", most people said "no." When we asked, "do you think that freedom of speech should be protected despite this?", liberals said "of course freedom of speech should be protected!", while conservatives said "absolutely not, this is just not right." But if you give participants a scenario in which a Christian protester showed cartoons of Mohammed and proclaimed that Islam was evil, the conservatives say, "Freedom of speech should be protected, no matter what!", while the liberals say, "No way, freedom of speech has its limits!"

Again, in some other scenarios we asked people if it was okay to break a law that you perceive to be unjust; in one case it was pharmacists who gave out the "morning after" pill despite it being illegal, in another case it was [doctors refusing to aid in capital punishment]. Here again, you get a liberal/conservative split in the endorsement of the principle of [breaking a law if it is believed to be unjust].

Across a whole body of studies, I think what we've shown is that it's not that reasoning doesn't occur; in fact, reasoning can actually be persuasive. Say we're in a social situation and I want to convince you of something, and I say it's wrong to do X. And you say, well, why is it wrong? I appeal to a broader principle. And this, in fact, might work. This might convince you: oh yes, that's a good point, that is a principle we should adhere to. It's just that we're sneaky about it, right?

And it's not that motivation [to uphold our moral beliefs] is wiping out our ability to reason, it's just making it very directional.  If anything, it's opening up this skill and ability we have to find confirmation for any belief that we might have.  This is probably true in most domains of social judgment, but I think it's especially interesting in the domain of moral judgment because of the implications that this has. 

I want to say, before I end, that I've been a champion of reason, and I've often said that rationality can actually change moral beliefs.  In fact, early on, after John [Haidt] published his widely read and influential paper on emotions being in charge of the moral domain, I said "absolutely not!" Paul [Bloom] and I had a little paper where we said "absolutely not, you're wrong!"  But ever since then I appear to have done nothing but studies supporting John's view. 

Even so, I am still an optimist about rationality, and I cling to the one finding that I talked about, which is that when you point out people's inconsistencies, they really are embarrassed. Hopefully, at a very minimum, what we can say is (Laughs) let's at least keep pointing out the stupidity of both liberals and conservatives. And at this point I'll come out and say, John, that I also am not a self-proclaimed liberal, although I'm sure on any test I would appear to be liberal.

But I, in fact, am a libertarian communist, (Laughs) and oftentimes am quite apathetic about politics.  And I think this is what allows me to do these studies with the impartiality of a scientist.