David Rand: "How Do You Change People's Minds About What Is Right And Wrong?"

HeadCon '14
David Rand [11.18.14]

There are often future consequences for your current behavior. You can't just do whatever you want because if you are selfish now, it'll come back to bite you. In order for any of that to work, though, it relies on people caring about you being cooperative. There has to be a norm of cooperation. The important question then, in terms of trying to understand how we get people to cooperate and how we increase social welfare, is this: Where do these norms come from and how can they be changed? And since I spend all my time thinking about how to maximize social welfare, it also makes me stop and ask, "To what extent is the way that I am acting consistent with trying to maximize social welfare?"


[34:37 minutes]

DAVID RAND is Assistant Professor of Psychology, Economics, and Management at Yale University, and Director of Yale University’s Human Cooperation Laboratory. David Rand's Edge Bio page


HOW DO YOU CHANGE PEOPLE'S MINDS ABOUT WHAT IS RIGHT AND WRONG? 

I'm a professor of psychology, economics and management at Yale. The thing that I'm interested in, and that I spend pretty much all of my time thinking about, is cooperation—situations where people have the chance to help others at a cost to themselves. The questions that I'm interested in are how do we explain the fact that, by and large, people are quite cooperative, and even more importantly, what can we do to get people to be more cooperative, to be more willing to make sacrifices for the collective good?

There's been a lot of work on cooperation in different fields, and certain basic themes have emerged, what you might call mechanisms for promoting cooperation: ways that you can structure interactions so that people learn to cooperate. In general, if you imagine that most people in a group are doing the cooperative thing, paying costs to help the group as a whole, but there's some subset that's decided "Oh, we don't feel like it; we're just going to look out for ourselves," the selfish people will be better off. Then, either through an evolutionary process or an imitation process, that selfish behavior will spread.
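
As an illustration of that dynamic, here is a minimal simulation sketch (the group size, cost, multiplier, and imitation rule are illustrative assumptions, not a model from the talk): in a public goods game with no future consequences, defectors out-earn cooperators every round, so players who copy whoever is earning more drift toward selfishness.

```python
# A toy imitation dynamic (illustrative parameters, not a model from the talk):
# in a public goods game with no accountability, defectors out-earn cooperators
# every round, so imitation of higher earners erodes cooperation.
import random

N = 100            # population size
COST = 1.0         # cost an individual pays to cooperate
MULTIPLIER = 3.0   # each contribution is multiplied before being shared

def payoffs(strategies):
    """One round: contributions are multiplied and shared equally by everyone;
    cooperators ("C") additionally bear their own cost."""
    pot = sum(COST * MULTIPLIER for s in strategies if s == "C")
    share = pot / len(strategies)
    return [share - (COST if s == "C" else 0.0) for s in strategies]

def imitate(strategies, pay):
    """Each player looks at one random other player and copies that player's
    strategy if it earned more."""
    new = []
    for i, s in enumerate(strategies):
        j = random.randrange(len(strategies))
        new.append(strategies[j] if pay[j] > pay[i] else s)
    return new

strategies = ["C"] * 90 + ["D"] * (N - 90)   # start with mostly cooperators
for _ in range(50):
    strategies = imitate(strategies, payoffs(strategies))

print("cooperators remaining:", strategies.count("C"))   # falls toward zero
```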

The question that has preoccupied people for a long time is "How do you stop that from happening?" There are a lot of good answers. For example, if you interact repeatedly with the same person, then that changes things. If the other person has a strategy where they'll only cooperate with you tomorrow if you cooperate with them today, it becomes in your self-interest to cooperate. Or, if people can observe what you're doing, you'll get a reputation for being a cooperator or a non-cooperator. And if people are more inclined to cooperate with people that have cooperated in the past, then that also creates an incentive to cooperate. Or there is partner choice—if people are choosing who they want to work with, who they want to interact with, then if they're more likely to choose cooperative partners, that creates an incentive to cooperate. 
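
The repeated-interaction mechanism can be made concrete with a small sketch (the payoff numbers are standard illustrative prisoner's-dilemma values, not figures from the talk): against a partner who only cooperates tomorrow if you cooperated today, always defecting wins the first round but loses out over the rest.

```python
# A hedged sketch of the repeated-interaction logic: the partner plays
# tit-for-tat, cooperating on the first round and then copying whatever
# we did last round. Payoff values are illustrative, not from the talk.

# Payoffs to me for (my move, partner's move): C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def total_payoff(my_strategy, rounds=10):
    """Play `rounds` rounds against a tit-for-tat partner and sum my payoff."""
    partner_move, total = "C", 0
    for _ in range(rounds):
        my_move = my_strategy(partner_move)
        total += PAYOFF[(my_move, partner_move)]
        partner_move = my_move   # tit-for-tat repeats my last move
    return total

def always_defect(partner_last_move):
    return "D"

def always_cooperate(partner_last_move):
    return "C"

print("always defect:   ", total_payoff(always_defect))     # 5 + 9*1 = 14
print("always cooperate:", total_payoff(always_cooperate))  # 10*3    = 30
```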

What all these different mechanisms boil down to is the idea that there are often future consequences for your current behavior. You can't just do whatever you want because if you are selfish now, it'll come back to bite you. I should say that there are mathematical and computational models, lab experiments, and also real-world field experiments that show the power of these forms of accountability for getting people to cooperate.

For example, we did an experiment with a utility company in California. We were trying to get people to sign up for a blackout prevention program, where they let the utility company turn down their air conditioners a couple of degrees on really hot days so there's not a big spike in energy use which can cause blackouts. It's a great program, but nobody signs up because it's a pain: You have to be there when the guy comes to install the device and so on. We found that if we made the sheet where you signed up to be part of the program public, so that you had to write down your name and your unit number on the signup sheet instead of just a random code number, it tripled signups. This was a field study with over a thousand Californians. These effects matter in the real world. They're powerful. There's no question that these reputational effects can be powerful motivators of cooperation.

In order for any of that to work, it relies on people caring about you being cooperative; people have to care that you do the right thing. There has to be a norm of cooperation where people think it is acceptable to do what's socially beneficial, and that it's not acceptable to do things that are not socially beneficial. These observability mechanisms don't work in situations where that's not true.

There are cross-cultural experiments, for example, where people play cooperation games with the option to punish other players. If you run that setup with Harvard or Yale undergrads it works great: Most people cooperate and punish those that don't cooperate, so everyone learns to cooperate and it's a nice, happy outcome. If you go to places like post-Soviet Russia or Eastern Europe, or the Middle East, and run these experiments, you get very different outcomes. People often don't support doing the cooperative behavior, the thing that is collectively beneficial; they wind up punishing the people that are being cooperative. In those places, it's worse to have punishment and accountability than to just let everybody do whatever they want anonymously.

The important question then, in terms of trying to understand how we get people to cooperate and how we increase social welfare, is this: Where do these norms come from and how can they be changed? By norm, I mean a person’s internalized sense of what's appropriate, what's acceptable behavior and what's unacceptable behavior. That is, your moral values: what you believe inside is right, rather than what you do because you're forced to do it under threat of punishment or exclusion.

There are certainly examples in recent history where deeply-held norms have changed dramatically—attitudes toward smoking, driving while drunk, gay rights—these are cases where we've seen massive shifts in the U.S., for example, in people’s opinions about what is right and wrong.

It's not at all clear, however, how that happens or where that sense of right and wrong comes from. This is what I've gotten interested in, and what I’ve started to spend a lot of time trying to unpack. Where does our sense of right and wrong come from? A general framework for thinking about these questions that I've been using comes from the study of judgment and decision making: the idea of heuristics.

Rather than carefully thinking about all the details every time we're confronted with a situation, and then asking ourselves, "All right, what is optimal here?" we sometimes use rules of thumb. If you've been in a similar situation before, then the behavior that typically works well in that situation can pop into your head. This heuristic rule of thumb can often work out, but sometimes isn't perfectly matched to the current situation you're facing. If you stop and think more carefully, you might be like, "Oh, my heuristic doesn't work that well in this specific setting, I should do something different." This process is talked about a lot in the domain of individual choice, like risk taking and impulsivity. But it seems to me that it is equally important in the domain of social interaction.

What this perspective would suggest is that the thing that you internalize, that you get used to as your way of being in the world, is the thing that typically works well for you in your daily life. If most of the time you live in a setting where you're rewarded for being cooperative and you're punished for being selfish, you wind up getting in the habit of cooperating. You internalize cooperation as your default response. Then when you find yourself in a situation where you could exploit someone without any consequence, your first impulse is to keep treating it as if it was daily life, where you shouldn't exploit the person or you're going to get in trouble. But if you stop and think about it, you might be like, "This situation is different. My first impulse to cooperate isn't optimal for me in this particular situation."

We've done a lot of experiments to try and provide evidence for this. Because I come out of a behavioral economics/experimental economics background, I like to use economic game experiments (rather than just hypothetical surveys), where you give people actual money and then let them choose how much to keep for themselves and how much to contribute to something that benefits other people. We've done a number of experiments which find that for most of our subjects—who are usually American and come from relatively safe daily lives where they trust other people, people don't exploit them, and it's a good idea to be cooperative—the default response is cooperation. If you make them stop and think about it, they get less inclined to spend money to help other people.
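
To make the structure of these games concrete, here is a toy payoff calculation for a hypothetical four-player version (the endowment, group size, and multiplier are illustrative, not necessarily the exact parameters used in these studies): contributing lowers your own payoff but raises the group's total.

```python
# Toy numbers for a hypothetical four-player public goods game: each player
# gets 40 cents, and money contributed to the pot is doubled and split evenly.
# (Illustrative parameters, not necessarily those used in the actual studies.)
ENDOWMENT, GROUP_SIZE, MULTIPLIER = 40, 4, 2   # cents, players, pot multiplier

def my_payoff(my_contribution, others_contributions):
    pot = (my_contribution + sum(others_contributions)) * MULTIPLIER
    return ENDOWMENT - my_contribution + pot // GROUP_SIZE

# If the other three contribute everything, keeping my money earns me more...
print(my_payoff(0, [40, 40, 40]))    # 100 cents for me
print(my_payoff(40, [40, 40, 40]))   # 80 cents for me
# ...but my 40-cent contribution becomes 80 cents in the pot, so the group as a
# whole ends up 40 cents better off even though I end up 20 cents worse off.
```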

If this idea about social heuristics is right, it's not a story about our evolutionary past where we had to take care of each other so we evolved some gene that makes us, by default, be cooperative. It's a story about learning and culture. If you come up in a place where it's not a good idea to be cooperative, either because the norms of the people around you are bad or the institutions are corrupt, then you should wind up internalizing something different as your default—not cooperation but selfishness. We find evidence of this: What is intuitive to people varies depending on their experience of the world. People that experience the world as a trustworthy place are often intuitively cooperative, but people that experience the world as an untrustworthy place are intuitively selfish. It means that the way you experience the world has broad implications for your behavior.

We've done other experiments to try and directly demonstrate this effect of experience. We first have people interact for 20 minutes either under a set of rules that makes it a good idea to cooperate, so that they learn to cooperate and spend 20 minutes cooperating; or under a set of rules where it's a good idea to be selfish, so that they learn to be selfish and spend 20 minutes not cooperating. Then we have everyone do an identical battery of one-shot anonymous interactions where in some cases they can pay to help other people, or in others they can pay to punish people for being selfish.

What you see is huge spillover effects, where the habit that gets established in the first part, where we manipulated the rules, then spills over to the subsequent anonymous interactions. If you get people used to not cooperating, they wind up being much less altruistic, less trusting, less trustworthy, less optimistic about the behavior of others, and less inclined to sanction other people for being selfish.

What this means is that when you think about what people have as their sense of right and wrong, where these values come from, I would argue that at least a good chunk of it comes from what they are experiencing in general. That means that you can have a hard equilibrium problem. If you're in a setting where the norms favor selfishness, that's reinforcing default selfishness. And then the opposite, if you're already in a good situation where people have good norms and are cooperating, it reinforces cooperative defaults.

In terms of what you can do to change the norms in a setting where they're not good, although I don't have an awesome answer, our research suggests that top-down institutional rules can play an important part. Say you're managing an organization. If you set up rules within your organization to reward people that behave cooperatively or punish people that behave selfishly, that can change what is optimal behavior in the context of your organization. That can generate a culture—an institutional culture—within that institution. Potentially, that can also have a spillover effect, where it not only affects people's behavior in this particular context but also affects people's behavior more generally.

In this vein, I like the idea of Paul Romer's charter cities. You go to a place where the institutions or the norms are bad and you say, "In this one setting the institutions are going to be good. There are going to be rules that incentivize good behavior, and you only come and join this if you want to play by those rules." You can create a new culture there that, hopefully, people take with them when they leave, and you can also inspire other people who see the benefits.

The general idea I'm arguing for is that the top-down rules you establish can have a profound impact not only on the way people behave when the rules are watching them, but also on what they internalize and what they carry with them when they're interacting outside of the rules.

If you think about it in an institutional setting—in the context of a company for example—you can have incentives to get people to be cooperative and reward people for the outcome of the team as well as their individual outcome. Not only do you get them to behave well in those contexts, but it also creates a general culture or set of norms where people are more likely to help each other out even in settings that are not explicitly going to get rewarded by the company. If you get people in the habit of having positive, constructive, cooperative interactions with others, they're more likely to do those cooperative actions in settings that are not governed by the official rules.

There could also be a broader generalization where if you get used to operating that way in the context of your organization, you also carry that with you to some extent when you go out into the world. This is potentially a tool for public policy and institution design, but it's also something to think about when we think about certain organizations or industries that are explicitly built around self-interest as their cornerstone. That kind of institutional culture can have consequences for how employees behave more generally. There are all these interconnections between institutions and norms that are important to consider.

That's not to say that institutions are the only thing that matter for what we think is right and wrong. It's also important to think about bottom-up change, like the examples about attitudes toward smoking or drunk driving or gay rights. It's not at all clear to what extent those changes were the result of big national advertising campaigns and attempts to change people's understanding from a top-down perspective, and how much it was just some organic process of change occurring among individuals convincing each other that things should be different. We're also interested in this bottom-up change and how you can be a moral exemplar to people around you.

How do you get people around you to be more pro-social, to do what they know is right? Maybe by inspiring them. The other bigger question is how do you change people's minds about what is right in the first place? It's one thing to say, "I know I should be doing this but I don't feel like it. Oh, look, Laurie is a great role model. She's super-moral so she could inspire me to be moral like that, too." But it's a bigger question how to change people's minds about what's right in the first place. That is the most important question in any of this work related to cooperation and pro-sociality, and it's something about which not much is known. That is, you can push people's behavior around, but the question is how do you change their sense of what's right and wrong? I am hopeful that this is something that there will be a lot of interdisciplinary effort around.

Since I spend all my time thinking about how to maximize social welfare, it also makes me stop and ask: "To what extent is the way that I am acting consistent with trying to maximize social welfare?" As an academic, my life is fun and I get to do interesting, cool things all the time. But I don't know to what extent a lot of what I'm doing is working to improve social welfare.

This is potentially an opportunity to use the things that we're always studying, how to motivate people to maximize social welfare, to try and change norms within academia itself. It would be great if more people were like, "It is an important and valuable thing to do things that matter, that have some impact on the world and on trying to make the world better." There is, in general, a lot of looking down on research that is applied, and that attitude is not socially optimal.

In some sense, it's exactly the same problem I was talking about before, where there is a bad norm in place. It's pretty easy to observe the extent to which people are doing things which have some real application and impact on the world, versus not. But there's not a norm in place that says that's something that should be valued and rewarded. Even though it's observable, there's no incentive to do relevant, applied work. In fact, if anything there's a substantial disincentive, because it's looked down on.

This is an important thing for us to try and change, both because applied work has an important impact on the world, which is a good in itself, and also because doing work that feels meaningful is important to people. Purpose comes from feeling like you're doing something that matters and that helps people. You can argue about whether it's true or not, but companies like Facebook and Google have this as part of their pitch when they try to get smart, competent people who are finishing PhDs to go and work for them rather than staying in science. Part of the pitch is, "Here's a way that you can do something real, something that interacts with the world." That puts an additional market-type pressure on academia to try and satisfy this dimension of people's lives in order to keep the smartest people in academia.

Cooperation is good; we can get people to cooperate if there are norms in place that support that cooperation. We should try and do some of that ourselves.


THE REALITY CLUB

FIERY CUSHMAN: You introduced yourself as an economist and a psychologist, and both of those themes are reflected in your talk. The economist is taking a top-down approach, where you design an institution and don't worry about the psychological mechanisms. The institutional pressures and the cost-benefit structure are going to determine behavior by hook or by crook in the long run. The psychologist is going to be the bottom-up person who wants to understand the human mind so that you can see which norms are going to prosper and you can get people to take up norms in the most efficient way possible.

You ended with the idea that we want to change the world. If someone walked into this room and said, "I'm in possession of worldly power. I can only talk to Dave the economist or Dave the psychologist," who are you going to be? Just from what you said, Dave the economist is a much, much more attractive option, because we can be formal about analyzing the institution designs from a game-theoretic perspective and we don't need to do a lot of hard empirical work to try to figure out how a human brain works. The guy with worldly power can just by fiat establish the institution, and then all of the people take care of themselves.

As someone who is a psychologist, it makes me question this other theme that you ended up with: should we, as psychologists, be trying to change the world or should we be doing basic science? Even if the economists completely dominate on the problem of making the world a better place, maybe irrationally, I'm still committed to the view that it would be exciting to learn how the human mind works.

RAND: I also, obviously, share that desire, which is why I'm doing what I'm doing. There are two parts to the answer. One is that I love figuring things out and the joy of discovery; that's basically everything that motivates me to do what I do. But I feel guilty about it in some sense, that I'm living my life doing this awesome, fun thing. Is this optimal? In terms of trying to make things better, understanding psychology is important.

There's also this whole literature on crowding out. If I give you an extrinsic motive to do something, it can destroy your intrinsic motivation to do it. Then when I take away the threat of punishment, you're like, "Well, I'm not going to do that anymore," even if, before I started threatening you to get you to do it, you liked doing it. That's an important psychological question: when do you get habit formation and when do you get crowding out? You need to understand a lot about cognition and about the human mind in order to sort those things out.

In addition to that blackout prevention project, we're doing a bunch of different experiments with the Department of Energy and different utility companies. Doing this, you realize that there are interesting psychological questions: How do you take this thing that works in the lab, in a cut-and-dried situation where people come in and play a game with two options and two different payoffs and a simple manipulation works very nicely, and translate it into the world in a way that works? In order to do that you have to know a lot about the psychology of people. Another way of saying it is that, in the process of trying to figure that out, you learn a lot about psychology and about what people care about and how these different factors interact with each other.

My goal is not to spend part of my time doing the search for truth because it's fun, and then a different part thinking, "Well, let's try and do something practical," but to find things that combine those elements: things that are interesting and that reveal how the mind works, but in the service of trying to do something that has a real application.

MICHAEL MCCULLOUGH: An interesting case study in this that I've been fascinated with for a couple of years is the cultural evolution of social insurance—Social Security, guaranteed income. That happened at a particular place and time: it started in Germany. There seem to be two things that happened there, and a third thing that happened as it moved toward France and England. One was that there was a real concern in the government that socialism was so appealing that we have to do something about the glittering prizes that socialism offers, or else that's going to, from the bottom up, become what people want here in Germany. It was like, "Right, let's take away that tool and let's create guaranteed social insurance. If people lose their jobs they'll have something to eat." Right? There was a strategic benefit to shifting the norm there.

RAND: But hold on. In that context, what you're saying is that the people running the institution made a specific decision in an effort to change or to control the way people's understanding of right and wrong was evolving.

MCCULLOUGH: Absolutely. You get the Leviathan to do the work for you. But there was another part of that, particularly in England, which involved a lot of debate and also coverage by newspapers. People could read the debate and process the arguments. It took years to get social insurance. It was very, very hotly contested. Once they did it, you roll the tape forward 100 or 120 years, and now the norms are so strong among the rank and file that people gladly pay their taxes in order to provide certain benefits for everybody in society.

It's so powerful now that you can have a comedian like Jimmy Carr, a prominent comedian in the U.K., who was engaging in legal tax dodges—finding ways to shelter his income from income tax. He was so browbeaten in the public eye for doing perfectly legal things that he had to come back and apologize for taking advantage of absolutely legal loopholes that he was entitled to. You can compare that against the norms in the United States, where Romney was taking equally legal, equally plausible tax loopholes to shelter his income and said, "I'm not at all apologetic," and he didn't need to be apologetic, because there wasn't this groundswell of disgust in response to it.

RAND: "Norms" is a messy word that is used to mean a lot of different things in a lot of different settings. There is clearly a contextual element to it: whether a behavior like hugging each person you meet is acceptable or not depends on the context. There are some contexts where that would be weird and counter-normative, and there are other contexts where it would be inappropriate not to do it.

There are other things that are more fundamental: your basic values, for example, the extent to which it is important to care about others or to do things that are good for society versus good for you as an individual. A given person applies those kinds of basic morals across a range of settings, and the contextually specific norms are implemented in light of these more basic moral principles.

Both of them are interesting, but I am particularly interested in the latter. I don't think that stops science. If all you did was say, "Oh, well, the reason it happens is because it's a norm," that would be the end of the discussion, a conversation-ender: "Well, it's the norm, so we do it." What I'm interested in is trying to understand where they come from, what they are, and how you change them. To me, the interesting question is "What is going on there and how do you unpack that?"

SIMONE SCHNALL: You were asking the question earlier about how to get people to do the right thing, or how to change their norms. I was thinking of how to do that and I guess the biggest problem is opportunity costs. You do one right thing and you might not do another. Or you may have to decide how much time you spend doing one good thing or good things in general, let's say, positive things.

I was thinking about it when you described your job. You said, "Well, I have all this fun doing my job but I'm not doing anything good for the world." Maybe that's exactly the right way to think about it. You think you're having fun when in reality you're studying cooperation, which, if you make good progress, will have real life outcomes. As long as you think you're not doing anything morally good, you're just having fun, you'll be driven to do other morally good things. Do you see what I mean?

A "moral licensing" effect, for example, like, "Oh, I'm having so much fun, I'd better do something good as well." Perhaps one has to, in a way, restructure how people think about what is good versus what is just having fun. Is it pleasant, unpleasant, this or that? Given that people will only do so many good things, and have a limited amount of time and energy, perhaps one can repackage the things that they're doing and get them to do more of the good things.

RAND: Yes. You take things which are societally beneficial, good things that people are doing and get them to think of those things as fun things.

SCHNALL: You turn the morally good things into fun things so they keep doing those because they think it's fun. Then you get them to do additional morally good things on top of that.

RAND: Certainly it seems like being able to get socially beneficial behavior to seem fun is a good idea. Like I was saying before, there are two separate questions. One is: how do you get people to do the thing that most people agree is the right thing to do? We have a pretty good handle on that, in terms of all these reputation, observability, and reciprocity mechanisms. To me, the more challenging question is: how do you change what most people think is the right thing?

MOLLY CROCKETT: I wonder if some of the challenge in establishing norms, particularly for cooperation, has to do with this distinction between getting people to do something versus getting people not to do something.

What I think about lately is harm. There are very strong norms against harming people, and that's an easy norm to think about. Whereas you harm people indirectly by not cooperating; but because we have this distinction between actions and omissions, it seems a lot harder to establish a norm that gets people to do something than a norm that prohibits something.

RAND: That's interesting. It interacts a little bit with the question that John was asking: if you have this deeply-held underlying value that it is wrong to harm people, then when you're thinking about a specific individual context and asking, "Oh, what's acceptable or not acceptable here?", things that are functionally equivalent could provoke a much stronger reaction if they're cast as harm, because you have this underlying principle that affects how you structure or interpret these more contextual norms. Yes, that's interesting. The implication of that would be "Let's frame things as harm."

CROCKETT: Exactly. If you could somehow frame not cooperating as something that's harmful as an act, rather than an omission, that would be a more powerful way to get people to internalize these sentiments.

RAND: Yes. Think about all the people we're harming by not doing applied science.