Hugo Mercier: "Toward The Seamless Integration Of The Sciences"

HeadCon '14
Hugo Mercier [11.18.14]


[39:34 minutes]

HUGO MERCIER, a cognitive scientist, is an Ambizione Fellow at the Cognitive Science Center at the University of Neuchâtel.


TOWARD THE SEAMLESS INTEGRATION OF THE SCIENCES

I am Hugo Mercier. I'm a cognitive scientist, and I currently work at the University of Neuchâtel, in Switzerland, in the Cognitive Science Center. Today I want to talk about the integration of the cognitive and the social sciences, and in particular how the work of Dan Sperber can help us further that integration between the cognitive and the social sciences.  

One of the great things about cognitive science is that it allowed us to continue that seamless integration of the sciences, from physics, to chemistry, to biology, and then to the mind sciences, and it's been quite successful at doing this in a relatively short time. But on the whole, I feel there's still a failure to continue this integration towards some of the social sciences, such as anthropology, to some extent, and sociology or history, which still remain very much shut off from what some would see as progress, and as further integration.


There are several issues. Some of them are just purely sociological, but some of them are more substantial. Two of the issues I would suggest are that maybe we don't necessarily have the right tools to help people in the social sciences see how they can use the cognitive sciences, and the other is that in some cases we don't have very good models of high-level cognition. Even if they could integrate what we know about cognition with what they want to explain in the social sciences, we just wouldn't be able to provide them with the right mechanisms to tinker with. Some of Sperber's work can help us solve both of these issues.

The first front is having conceptual tools to integrate cognition and culture, to bring the cognitive and social sciences closer together. Just to give you a bit of background, Sperber trained as an anthropologist, but he very quickly realized the potential of the cognitive sciences to help us better understand cultural phenomena. One of the main things he brought about was the importance of communication. He saw communication as a way of bridging the cognitive level with the more social and cultural level because, obviously, most of culture is transmitted through communication.

One of the many things that his studies of communication have revealed is that—people kind of knew it all along, but they hadn't really fully realized it, I guess—communication is extremely noisy. For instance, take what I'm saying today, and let's imagine there was no transcript. Whatever memory you have of it will be extremely different from what I have in my mind, and then if you were to repeat it to someone else it would, again, be extremely different. That creates a big problem for culture, which is that, given that the transmission process is so noisy, culture—in the sense of having the same elements that you can identify in many, many different people—should not even exist at all. Basically, if you have one guy who has an idea, and he says it to someone else, and then that person says it to someone else, after a few steps you can't really recognize the original idea any more. If that's what happens, you shouldn't have culture at all. You should just have a bunch of ideas that maybe somewhat resemble each other, but that are too different to really be called cultural.

If you look at how communication works it raises the issue of this transmission noise that jeopardizes the very existence of culture, but it also provides some of the answers as to why culture can exist, and then as to what is more likely to become a cultural phenomenon.

When you look at how communication, especially ostensive communication, works in humans, it's a very rich inferential process that we don't usually see. When someone tells you even something as trivial as, "It's raining," we think: there is this content, someone tells you something about the rain, and you just have to understand what raining means and you're done. But, in fact, what the work of people like Grice, Sperber, and Wilson has revealed is that you have this very rich process of inferring what the person actually means when she says it's raining. Even for something as trivial as that, you have to understand that it's raining now, and that it's raining here, which is not said in the sentence. And usually even "it's raining" will mean many more things.

For instance, if Josh this morning had told me that it was raining, I would have inferred that he wanted me to understand that it would be complicated logistically today, rather than just that it was raining. That's what makes communication kind of noisy, but that's also what can make communication and this inferential process help culture stabilize, in that in some cases it can correct communication that would have failed otherwise.

One of the examples that Sperber often takes is that of a tale. Imagine that, for the first time in your life, you're being told the tale of Little Red Riding Hood. At the end, the wolf's belly is opened and Little Red Riding Hood is taken out, but the grandmother is not. The person who tells you the story forgets to mention that the grandmother is taken out of the belly. What's going to happen, or what is likely to happen anyway, is that when you tell the story in your turn you will add that element back. You will correct the story because you might not realize that there had been a problem in the first place, but when you have to recreate it, the version in which both characters are taken out of the belly is in some ways more felicitous.

That creates what Sperber has called an attractor. The version of the story in which both characters are taken out is an attractor, so that other versions of the story that deviate from that are going to revert to that one in the process of transmission. That is what creates cultural stability. In that model what creates stability is not that transmission is faithful, it is that even though transmission is noisy, the noise is not pure noise; it's not undirected noise. It tends to go in some directions rather than others.

That's the general idea. Now, Nicolas Claidière and Thom Scott-Phillips have nice mathematical formulations of this that can be helpful for the model-oriented people, but I'm just going to give you a few examples of studies—most of them recent—that have been done using that concept to flesh it out.
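Their formalisms are richer than this, but the core point—that stability comes from directed rather than faithful transmission—can be sketched in a few lines of toy Python. This is my own illustration, not Claidière and Scott-Phillips's model; the attractor position, pull strength, and noise level are invented parameters.

```python
import random

# Toy transmission chain: each "generation" re-tells a cultural trait x with
# noise. Pure noise is an unbiased random walk; attraction adds a pull toward A.

A = 10.0      # position of the attractor (invented value)
B = 0.3       # strength of the pull; 0 means pure, undirected noise
SIGMA = 1.0   # size of the transmission noise
STEPS = 200   # number of re-tellings in one chain
CHAINS = 200  # number of independent chains

def final_distance(bias, rng):
    """Run one chain and return how far it ends from the attractor."""
    x = 0.0  # every chain starts well away from A
    for _ in range(STEPS):
        x += bias * (A - x) + rng.gauss(0, SIGMA)
    return abs(x - A)

rng = random.Random(42)
attracted = [final_distance(B, rng) for _ in range(CHAINS)]
drifting = [final_distance(0.0, rng) for _ in range(CHAINS)]

# Directed noise keeps every chain hovering near the attractor; undirected
# noise lets chains wander apart, so no shared, stable "culture" survives.
print("with attraction:", sum(attracted) / CHAINS)
print("pure noise:     ", sum(drifting) / CHAINS)
```

Even though every single re-telling is noisy in both conditions, the biased chains all end up clustered around the attractor, while the unbiased chains scatter.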

The most famous studies that have been done using that concept were probably those of Pascal Boyer, an anthropologist/psychologist, who attempted to explain traditional religious beliefs, such as beliefs in ancestors, as being attractors. In that case, what would make them attractive is the fact that they are, as he calls it, minimally counterintuitive. In some ways they are counterintuitive, because you have dead people who are still able to do things to some extent, and to have thoughts and everything, but they are minimally counterintuitive in the sense that these dead people are very much like other people.

Basically, we can recruit all of our mechanisms of mind reading and theory of mind that we use to understand live people, to understand the desires, and the intentions, and the beliefs that dead people have. That makes most of these ideas intuitive, but they're still in some ways counterintuitive because the guy is dead, and that makes them more relevant. You have the right mix: the idea is understandable, you can draw inferences, it's kind of interesting, but it's also extra interesting because it's not just run-of-the-mill—it's not just someone who has a belief, it's someone dead who has a belief. That makes it more relevant.

Now Pascal Boyer, with some other colleagues—Nicolas Baumard and Coralie Chevallier—is applying this to the spread of moralizing religions, the things that happened around the Axial Age, when you had Jesus, and Buddha, and Confucius, and a whole different type of religion emerged. They're trying to explain that in terms of the changing psychology of the people, which made some types of religion more attractive at different times.

That's one of the best worked out examples, but there are a few more recent ones that have been fun lately. One of them, which is nice because it takes the phenomenon down to its essence, is a recent study by Nicolas Claidière and his colleagues, which looked at transmission in baboons. What they do in these experiments is have the baboons play on a computer screen, on which there is a 10-by-10 grid of small squares. On that grid, four squares light up, and then they disappear very quickly. The baboons have to touch the screen where the four squares lit up.

What happens is that sometimes the baboon will make a mistake. Whatever the baboon manages to do will be transmitted to the next baboon. You have the initial pattern, which is random—four squares anywhere on the grid—and then the baboon does the task. Whether he succeeds or fails, the result will be transmitted to the next baboon. And you repeat that process many, many, many times.

What you see appear is basically Tetris: you see the forms, the shapes of Tetris—the square, the line, and the S shape—appear, because the mistakes the baboons are making are not random. They are making some mistakes, but they're not just going to tap any square. They're more likely to tap a square that is close to the square they should have tapped, and once you have a form that is an attractor, even though baboons will deviate from it from time to time, when they make a mistake after that deviation, they're more likely to revert back to the attractor than to go off in some other direction.

You have these shapes that are extremely stable, so that once they're there they really stick around, whereas any other pattern will be much more volatile. It's a good example of attraction in that what is creating the stability is not that the baboons are very good at repeating some shapes—they're quite good, but they're not perfect—but that there is this systematic bias that always pulls in the same direction.
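The dynamic is easy to simulate. Below is a toy Python version—my own sketch, not the actual setup of the baboon study: the grid and four-square patterns match the description above, but the error rate and the assumption that errors land near the pattern's other squares are invented stand-ins for the baboons' real error data.

```python
import random

GRID = 10   # 10-by-10 grid, as in the baboon study
K = 4       # four lit squares per trial
ERR = 0.25  # chance that a given square is copied with an error (invented)

def neighbors(sq):
    """Squares within one step (including diagonals) of sq, on the grid."""
    x, y = sq
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx or dy) and 0 <= x + dx < GRID and 0 <= y + dy < GRID]

def transmit(pattern, rng):
    """One 'baboon' copies the pattern; errors land near the pattern's
    other squares (the clustering bias assumed here)."""
    new = set()
    for sq in pattern:
        if rng.random() < ERR:
            anchor = rng.choice([s for s in pattern if s != sq])
            candidates = [n for n in neighbors(anchor) if n not in new]
            if candidates:
                sq = rng.choice(candidates)
        if sq in new:  # collision: tap some fresh square instead
            free = [(x, y) for x in range(GRID) for y in range(GRID)
                    if (x, y) not in new]
            sq = rng.choice(free)
        new.add(sq)
    return frozenset(new)

def spread(pattern):
    """Mean pairwise Manhattan distance: low = compact, Tetris-like."""
    pts = list(pattern)
    d = [abs(a[0] - b[0]) + abs(a[1] - b[1])
         for i, a in enumerate(pts) for b in pts[i + 1:]]
    return sum(d) / len(d)

rng = random.Random(0)
start_spread, end_spread = [], []
for chain in range(100):
    pattern = frozenset()
    while len(pattern) < K:  # draw a random initial pattern of 4 squares
        pattern = frozenset((rng.randrange(GRID), rng.randrange(GRID))
                            for _ in range(K))
    start_spread.append(spread(pattern))
    for generation in range(50):
        pattern = transmit(pattern, rng)
    end_spread.append(spread(pattern))

print("mean spread at start:", sum(start_spread) / 100)
print("mean spread after 50 generations:", sum(end_spread) / 100)
```

Run it and the final patterns are markedly more compact than the random starting ones: the copying is imperfect throughout, but because the errors always pull in the same direction, compact shapes emerge and stick.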

The other three quick examples will illustrate the different fields to which you can apply this. One is from the history of art. We know that humans have this extremely ingrained mechanism of paying attention to the gaze of other humans. We really pay attention. That's likely why we have the whites of our eyes—so that we can really see where people are looking. In particular, something that is very salient is direct gaze. When someone is looking directly at you, especially for a long time, and they're not talking to you, it's really a signal of a strong emotion, whether it be lust, or aggression, or something else.

What this predicts is that cultural representations of faces that look directly at you should be in some way more attractive. They should be seen as more relevant. Okay? This is more interesting than a face that doesn't look at you. What Olivier Morin did is look at portraits, in two cases. One was 16th-century Europe; the other was a span of seven centuries of Korean portraiture.

He looked at two things. One was the evolution of the gaze of the portraits through the generations, and the other was which of these portraits were picked up by contemporary art books, and he found that in both cases the attractor hypothesis predicted what was happening. In both Korea and in Europe at that time you have a shift from portraits that looked away from the viewer to portraits that look towards the viewer. It turns out that the portraits that are selected in the art books now are more likely to be portraits that look directly at the viewer rather than portraits that look away. It seems as if this very fundamental communicative mechanism can explain a small part, obviously, of this cultural phenomenon.

The other example bears on that Leibniz-Newton dispute mentioned earlier. Just to give you a bit of background, you know they both invented differential calculus, more or less at the same time. What is clear, though, is that Leibniz published his findings much earlier than Newton, so he had a very strong head start, in particular in France. In England, Newton was Newton—basically anything he said was fine—but in France, Leibniz had published earlier, and he had this advantage from the start. You can see that, for instance, in his notation: the dx and the big S for the integral were kept.

However, when Newton was introduced in France, it is his concept of the infinitesimal that won. It's kind of surprising, because historically you think, well, Leibniz was there before, he was hugely influential and had this huge prestige, and yet it's Newton's idea that ended up being used by all mathematicians. Not all physicists, I'm told, but most mathematicians anyway. The idea is that the Leibnizian formulation of the infinitesimal treats the dx—that infinitesimal quantity—as an entity, precisely as something that exists. The claim that Christophe Heintz has made is that as soon as you start talking about entities, that is going to recruit some of our number-sense intuitions, or mathematical intuitions, that will treat the infinitesimal as a little object, and that is going to make it very hard to process the idea that x plus dx equals x. Basically, we have this strong intuition that if you are dealing with an entity, and you add that entity to another entity, then you get something else. You don't just get the entity you had to start with. Newton's formulation did not have this issue because it treated the infinitesimal more like a limit—it was not quite the concept of limit, it gave rise to the concept of limit later, but it didn't have this issue.
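To see the clash, compare the two readings side by side (a modern reconstruction for illustration, not the 17th-century notation itself):

```latex
% Leibniz-style reading: dx is an entity, a thing that exists, so
%   x + dx = x, \quad dx \neq 0
% collides head-on with our intuition that adding something changes the total.
%
% Limit-style reading (what Newton's approach grew into): there is no fixed
% infinitesimal entity, only a quotient whose limit is taken:
\[
  \frac{dy}{dx} \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\]
% Nothing here ever asserts x + h = x; h is an ordinary quantity that
% merely tends to 0, so no number-sense intuition is violated.
```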

Even though in purely mathematical terms both concepts could have worked out, the fact that one of them jibed better with our number sense—or rather, that one of them didn't work out so well—might explain why one of them was more successful. I mean, this number sense that we're talking about is something fundamental—you find it in nonhuman animals—and the hope is that it can explain some of the most complex cultural entities ever.

The last example in that line of work is some work that Nicolas Claidière, Helena Miton, and myself have been doing on bloodletting. As some Americans might know, in the winter of 1799 George Washington got ill. The best doctors in the country were brought to his bedside, where they proceeded to bleed him of about four liters of blood—a bit more than a gallon, or something like that. That was not a good idea: that's about half of someone's blood. Then he proceeded to die. There's still some dispute about whether that killed him or whether he would have died anyway, but people are pretty sure that it did not help.

Bloodletting is this puzzling phenomenon that was the main therapy throughout, I guess, the 17th, 18th, and early 19th centuries in Europe and in North America. For us, who don't think that it works, it's extremely puzzling that people would do this. You know, why on earth would you do this? It's the best doctors who are doing it, to the most important people—they're also doing it to everyone else—and it seems very puzzling. The answer that historians in particular usually suggest is that it was mostly a matter of authority and prestige. You have these extremely prestigious ancient physicians, such as Galen in particular, who exerted a huge influence throughout the history of medicine in the West, and basically people accepted his humoral theory. You can derive bloodletting from this theory, and that is why people were doing bloodletting.

As a side note—I can't help but mention this—Galen was so much into bloodletting that he thought it was a good idea to do it in cases of hemorrhage, which is kind of funny. Galen was a great guy, I'm sure. The standard explanation is that you go back in time, you have Galen, you can go back to the Hippocratic writers, and then you can even go back to the Egyptians, who have their own great story about how bloodletting was born, but it's mostly a story of prestige and authority. You have these guys who were unusually influential, and basically they could have developed any other theory and people would have accepted it. That's the common idea about bloodletting, and about other forms of therapy used at the time.

Then you can think, well, whatever cognitive mechanisms people have don't really matter; all they need is some kind of bias to listen to prestigious individuals. In order to test whether that was the right explanation, we did two things. The first was to look at the anthropological data to see if people who had not been influenced by these early Western physicians also practiced bloodletting. It turns out that that is the case: many, many cultures in the world practice bloodletting. In North America, Native Americans used to do it with cute little bows and arrows; in Australia it's done with stones most of the time; in Africa it's done in many ways, including using horns. It seems as if many cultures throughout the world have found the idea of bloodletting rather intuitive. That shows that it's not just a fluke that these guys in Greece and in Rome got that idea, and that it spread to us. It seems as if there's something that makes bloodletting a rather intuitive form of therapy.

In order to confirm that, we did some experiments with American participants who, we checked, don't believe that bloodletting works—most of them, anyway. What we did is give them stories involving something that looks like bloodletting, set in the context of an Amazonian tribe, so that it's plausible the characters could do something like bloodletting. We give them the story, then we distract them for a little while, and then they have to recreate the story, and we take their version and give it to someone else. It's like the baboon experiment earlier, except that it's done with stories instead of a grid.

When you do that for a number of generations, what you see is that the stories tend to converge towards bloodletting, to some extent. In the case that is the most striking we had some of the stories starting with something like, "This guy in an Amazonian tribe has a headache that stops him from hunting a bird he's supposed to hunt for some ritual reason. The guy has a headache, and at some point he cuts himself with a stone." We specify that it's an accident. "The day after, his headache gets better." (As all headaches do. That's what they do.)

What we see is that after several generations, in some cases, the thing that was accidental starts to become intentional, and it starts to cause the recovery. In the end, in some cases, you have full-blown bloodletting. You have people who say, "Well, the guy had a headache, he took a stone, he cut himself, and that made him feel better." I'm not saying that the people believed it, but it's more intuitively appealing than the story about a guy who has a headache and cuts himself by accident. You can see how it could explain, to some extent, the emergence of the phenomenon, because someone being sick, cutting himself, and then getting better is bound to have happened very often. Then you tell that story, and through the process of transmission you can see how you end up with a story about a guy who cuts himself intentionally, and that makes him better. It's been fun working on this.

The first part was this set of methodological and conceptual tools that can link what we know about cognitive mechanisms with cultural phenomena. One of the other things that Sperber brought is a better understanding of some cognitive mechanisms that are really important to understand many cultural phenomena.

The first mechanism, as I mentioned earlier, is communication. What he and Deirdre Wilson brought, in the theory called Relevance Theory, is the idea, building on Grice, that all these levels of intentional mind reading, or theory of mind, are involved in communication. For instance, if Josh tells me it's raining, what I'm really processing is something like: Josh wants me to know that he wants me to believe that it's raining. That's kind of counterintuitive, because it doesn't feel as if we're doing all this work, but we can see that that's what's happening if you look at other means Josh might have to make me believe it's raining.

For instance, maybe he wants to play a practical joke on me, and he turns on the sprinkler in order to make me believe it's raining. Now you have Josh wanting to make me believe it's raining. But if I see him doing this, I know that Josh wants to make me believe it's raining, and that's not going to help him make me believe it—on the contrary. Then you get to ostensive communication: Josh tells me, "I was trying to play a practical joke on you," in which case I understand that by turning on the sprinkler he wanted to make me believe that it was raining. That's where you have full-blown communication. And if you don't have all of this, human communication doesn't work.

Not only is it counterintuitive, but for a long time people thought that it was not plausible, for two reasons. One was that children were thought to be really bad at theory of mind and mind reading—children below three, basically, even though they obviously communicate fairly well. The other was that adults were thought to be really bad at doing many levels of mind reading: people thought that if you do four or five levels, it saturates your cognitive abilities. More recent experiments have shown that infants—I don't know what the youngest age tested is, but at least ten-month-olds—can do some rather sophisticated mind reading. New experiments by Thom Scott-Phillips are showing that adults can do up to at least seven levels of mind reading without any issue at all, if you do it in the right context. Some of the objections have been dismissed, and now we can come back to this idea that we're doing all of this work when we're communicating.

Just to give you an example of how that can be used to understand some cultural phenomena, Alessandro Pignocchi, another colleague from Paris, is doing great work trying to understand how art is processed as ostensive communication. When you see a painting or a movie, your brain treats it as the artist trying to tell you something, and you attribute intentions to the artist. Seeing a painting of, let's say, a sunset is extremely different from just seeing the sunset because, even though you're not doing it consciously, your brain is figuring out why the artist depicted the sunset this way rather than that way, et cetera. That helps Alessandro integrate findings from cognitive science about how we process communication with how people understand art, and which art is more successful.

Another thing that has been very important that Dan has been doing related to communication is stressing the importance of the mechanisms that allow us not just to understand communication, but to evaluate it. Most of the cognitive sciences that have looked at communication—basically linguistics, pragmatics, semantics, etc.—have focused on how we understand communication, not on how we evaluate it. It's implicitly taken for granted that most communication is going to be honest, and that people just have to understand it and then they're fine, whereas, in fact, from an evolutionary point of view, communication can be used to mislead, to lie, to deceive, and we have to be careful about what other people tell us.

The idea of epistemic vigilance is that we have a whole set of mechanisms devoted to evaluating other people's communication in order to make sure that we don't get deceived—too often, anyway. One of the most exciting developments related to that has been a huge amount of work on children, showing how good children are at telling whom they should believe and whom they should not believe—research done by Paul Harris at Harvard, by Kathleen Corriveau, by Fabrice Clément, by Olivier Mascaro, and many other people. They have these great results showing that in some cases children—infants, 12-month-olds—are able to integrate their own prior beliefs with information that is communicated to them, and even to discriminate between experts and non-experts. It's a really precocious thing. And adults—we don't realize it, but we're extremely good at doing this.

One of the interesting consequences of that, or one of the interesting applications to cultural phenomena, is that it flies in the face of the beliefs we have about the efficacy of propaganda, advertising, and political campaigns: we tend to think that people are rather gullible. Many of us, and even some of our professional colleagues, have written that we're quite gullible—that we start by accepting information rather than being careful from the start. So the work that I'm doing at the moment is trying to show that, in fact, all of these cultural phenomena—propaganda, political campaigns, the news, advertising—are much, much less powerful than people usually take them to be, and that whatever influence they have on people is fully consistent with people being extremely vigilant toward communicated information.

The very last bit is the one I'd rather any questions be about, because it's really the one I know something about. Dan and I have been working on a theory of reasoning, which is related to epistemic vigilance. The broad framework of epistemic vigilance is that we have to have a bunch of mechanisms that protect us from potentially misleading information, and that basically allow communication to work smoothly and to remain mostly honest.

What we have suggested is that one of these mechanisms is reasoning. People used to think of reasoning as a mostly individual skill: you reason to make better decisions, and you want to make sure that you have sound reasons for your decisions or your beliefs. What we've been claiming is that, instead, reasoning is done for argumentation. That is, people reason so that they can produce arguments to convince others, and so that they can evaluate other people's arguments. We have a bunch of empirical results that support this theory, but I'm not really going to talk about that now. What I'm going to talk about briefly is how that can be used to explain some cultural phenomena, by relating low-level communicative mechanisms to very complex cultural phenomena, such as complex religious beliefs or complex scientific beliefs.

In the case of science, we've been doing a little of that with Christophe Heintz, and in the case of religion, Helen De Cruz has been doing a bit of that. The idea is that when you're arguing, you're recruiting people's intuitions to make something intuitive that was not previously intuitive. You're shifting around their intuitions so that just at the moment when you're setting up the argument, they say, "Oh, yes, right—I can recruit this intuition to modify a belief I had before." If you repeat that process many, many times, you can see how you can arrive at beliefs or decisions that are extremely counterintuitive, and that seem completely unrelated to what we know about most of cognition, but in fact you can trace a chain in which each step is relatively intuitive. Take a standard mathematical proof: each step is supposed to be relatively intuitive, at least for the people who have the right skills, but if you just take the axioms and the theorem, no one can really have the intuition that the two will fit. Each step is intuitive, but you have to go through the steps.

One very last piece I want to mention, to illustrate how these concepts can illuminate some cultural phenomena, is a nice piece of anthropological work by Radu Umbres, an anthropologist who has used this concept of epistemic vigilance. You send a novice hunter on a snipe hunt—the snipe being an animal that's supposed to be impossible to catch—and you say, "Well, you have to catch this animal." You make up all kinds of stories; the idea is to show that novices are gullible, that they don't know anything. What's interesting is that this reveals how, in most cases, these are really the exceptions. What makes them kind of funny and interesting is that they are exceptional. Most people usually do not get taken in by these things, and even when they do get taken in, it is in a context in which it makes sense: you are a novice, and everything else the experienced hunters have told you was stuff you didn't know about before, and it turned out to be right. It's not unreasonable to trust them in this case as well.

This is my last example; it's not at all my domain, but it's interesting stuff that's going on now, and more of it will happen soon.  
 


THE REALITY CLUB

MICHAEL MCCULLOUGH: I was interested in your bloodletting example, because it clearly is an example of something that people must have some sort of deep intuition about, and then when you combine it with a prestige bias it leads to the repeating of this trope for millennia. At the same time, you know, we have medical anthropologists scouring the globe to eavesdrop on traditional hunter-gatherers and horticulturists who know about medicinal plants that, for reasons they can't explain, at least some of the time do empirically work.

Sometimes it seems like we recruit a prestige bias that ends up being a failure—a colossal failure—and other times we seem to recruit a bias to copy the successful. This is all cultural evolution stuff; both of those sorts of heuristics might be at work in an individual mind. How can we make predictions about untested phenomena that can tell us when one of them is going to win and the other is going to lose, if those sorts of heuristics are at work simultaneously?

MERCIER: We're just trying to explain stuff; that's kind of hard enough already. You'd have to know a lot about the specifics of a situation in order to be able to make predictions. I mean, clearly, you can make predictions: if you run an experiment, you can say, well, my theory predicts that in this case the prestige bias will be trumped by something else, or vice versa. But if you look at real-life cases, it's going to be hard in the foreseeable future, I guess, to predict whether it's the most intuitive idea that will win, or whether it's the idea defended by the most prestigious individual, even though it's counterintuitive, that will win. It's going to be tricky to make predictions in the short run.

MCCULLOUGH: It would be great if we had some way of putting those biases on common scales, so we could set up horse races.

MERCIER: Yes. It's complicated, because then they can interact in complex ways. Let's hope that that happens.

IAN REED: The bloodletting interests me as well from a clinical perspective, because there is a subset of patients who feel better by cutting themselves and seeing their own blood.       

I've had particular patients who actually call it bloodletting, so it might be that a specific intervention is useful for one population and then gets inadvertently extrapolated to an entire population, and maybe that's how certain things …

MERCIER: Yes, that's a very good point. In the research we talk a little bit about non-suicidal self-injury, as I believe it's called, and it is striking that even in cultures that obviously don't practice bloodletting, some people still have this intuition. As you're saying, it's an interesting possibility that such people brought about the practice in the first place. Then again, if that happens and it does make you feel better, then you say, well, maybe we can emulate that, so it makes sense. That's an interesting idea about how that could have emerged.

REED: And it happens with treatments for psychological disorders all the time.

MERCIER: That's a good point.

LAURIE SANTOS: Just to give a nod to my colleagues in other social sciences, it's all well and good for cognitive scientists to be here saying that all the stuff we do explains these mysteries in culture, and so on. Are there any cases where you think the broader social sciences can come back and tell us about intuitions we just didn't know were there in cognitive science?

We could go to art galleries and look around and be like, "Why is all this stuff here?" And they'd be like, "Wait! Maybe there's this intuitive bias that we completely misread."

MERCIER: Potentially, there are many, many cases like this. On the whole, it's pretty dispiriting to see sometimes how little psychologists or cognitive scientists know of the population-level phenomena that are very much involved in what they're supposed to study. Just to give you an example, some psychologists who study emotional contagion take it for granted that crowds panic: that as soon as there is an emergency or a threat, most crowds will panic, all hell will break loose, and all the norms will be trampled, along with the actual people. What's interesting is that this is completely false. The sociologists and social psychologists who have studied this know that it basically never happens, that people are extremely pro-social when there is an emergency or a threat.

You have this huge disconnect between the two, and I'm sure that the cognitive scientists could help the sociologists better understand this phenomenon, but at the moment it's really mostly the cognitive scientists who need to hear what those people are telling them: look, this is just not what's happening at all. It's a case in which they simply have wrong beliefs.

DAVID RAND: In this attractor idea, it seems to me that the key question is, "What makes something an attractor, and other things not?" In some of the perceptual domains that you were talking about, I could see maybe it's some fundamental aspect of cognition. But a lot of what I think about is social behavior and social norms, which have a similar flavor. Have you thought about that stuff in the social domain, and do you have thoughts on what makes some things particularly attractive?

MERCIER: I haven't, but some people have. I'm thinking, for instance, of the work of Nicolas Baumard, another one of my colleagues, who has a theory of where our sense of fairness comes from. The idea is that we have an intuitive sense of fairness that relies on a default 50-50 distribution, which can be influenced by the contributions of the partners and what not. It's easy to see how norms can hitchhike on this, because it is an intuition: you don't need an explicit norm to have it.

RAND: Where does the intuition come from?

MERCIER: Well, you have a cognitive mechanism in your head that evolved. Basically, it's a proximal mechanism that fulfills the ultimate function of making you a good partner, or making people think you're a good partner. Evolutionarily, that's the story behind the intuition: partner selection makes it valuable for people to develop a reputation as good partners. One of the things that makes you a good partner is to be fair, and that gets written into our cognition as this sense of fairness, so that you can recognize fairness and try to be fair when you think it's worth it for you. Once you have this, it's quite easy to see how an explicit norm that taps into the intuition would be more successful than one that doesn't.

RAND: But that doesn't explain variation in norms, right? That seems like the interesting question: why are some things attractors only in some contexts?

MERCIER: Very good point. One thing we need to bring in is that in different environments the costs and benefits will be different, and that will predict differences in the norms that apply.