Laurie Santos: What Makes Humans Unique (HeadCon '13 Part VII)


HeadCon '13: WHAT'S NEW IN SOCIAL SCIENCE? (Part VII)


Laurie Santos is Associate Professor, Department of Psychology; Director, Comparative Cognition Laboratory, Yale University.


I'm going to talk about some new findings in my field, comparative cognition. I'm interested in what makes humans unique. There are findings that I think are fantastically cool, in that they might be redefining how we think about human nature, but first they're going to pose for us some really interesting new problems.

I'm doing this, in part, because I think already having redefined human nature in the last couple of years is sort of a tall order, and that scared me, but also because I think that open questions about human nature can actually be more fun and I couldn't help but use this audience to kind of get some feedback on this stuff.

The findings in comparative cognition I'm going to talk about are often different than the ones you hear comparative cognitive researchers typically talking about. Usually when somebody up here is talking about how animals are redefining human nature, it's cases where we're seeing animals being really similar to humans—elephants who do mirror self-recognition; rodents who have empathy; capuchin monkeys who obey prospect theory—all these cases where we see animals doing something really similar.

Today, I'm going to talk about two sets of findings where we're seeing, at least in the case of nonhuman primates, young nonhuman primates doing something really different than humans. In one case they're doing something different than humans, which you might think of as cognitively less rich. That makes the humans look like, "Wow, they're super smart." But in the second case the nonhuman primates are doing something that's cognitively a bit more rational than humans, and I think it's also going to lead to some deep insight into human nature. So those were what I took my marching orders to be, and now I'll jump into two separate findings.

As I do that I'm going to immediately violate another principle John gave me, which is to stick to questions and findings that are very, very recent. The first set of findings bears on a question that's, in fact, very, very old, a question that Premack and Woodruff posed way back in 1978: whether or not the chimpanzee, or any other animal, has a theory of mind. What Premack meant by this question was, does the chimpanzee look out into its world and see all these agents just behaving—doing different behaviors? Or do they do what we do, which is to intuit all these things going on inside everybody's heads—things like intentions, and theories, and beliefs, and desires, and so on?

This is a very old question; as I said, this was 1978. Some of us around the table weren't born yet, but some of us around the table thankfully were born and were writing important critiques of Premack and Woodruff's study (Dan, for one), critiques which were really important to the field because they got off the ground the question of what could actually count as evidence. We can verbally talk to each other and come up with the idea that we think of each other as having beliefs and desires, but how could you ask this of a non-linguistic creature? What would really count as evidence that they're thinking not just about behaviors, but about these mental states that are different from behaviors?




Dan, and Zenon Pylyshyn and others who commented on this really important paper came up with a set of marching orders for the researchers at that time about how you could design studies to potentially tell the difference. That's what launched, in the eighties, this long field of what's been known as false belief studies. Many of you know about this, but for those that don't, please just be patient with me.

These are studies which are trying to look at whether or not people are actually representing the beliefs inside someone's head as distinct from their behaviors by using this special case of false beliefs—this special case where people are doing behaviors that don't necessarily match what you might see in reality. So if I had a false belief that this event was over, I might do something crazy in my behavior like, get up, take my microphone off, go inside, have a couple beers, and so on. That would be different than what I should really be doing in reality—what reality should be telling me—but there's this sort of false content in my head, this sort of false thing that's going on. And, cleverly, folks pointed out that if you really want to know whether an organism is thinking in terms of other individuals' behavior, or thinking in terms of what's going on inside another individual's head, you have to use these creative kinds of cases where what they would do if they're monitoring behavior is different than what they would do if they were thinking in terms of what's going on inside somebody's head.

This launched a whole line of inquiry in the field of developmental psychology, where I think developmental psychologists had a bit of a leg up on those of us who are comparative cognition researchers, because they had the tool of language to ask children about different scenarios. This led to a long history of research showing that there seemed to be some important developmental changes, over the first couple years of life, in children's ability to think about what's going on inside the heads of others. The comparative researchers, though, were a bit stymied, and they came up with a lot of experiments that never received the fantastic commentaries that followed Premack and Woodruff's paper; I think if they had, people like Dan, and Pylyshyn, and others would have said the same thing: "These aren't good tests to really get at what other animals know about others' mental states."

This was the state of the field well into the nineties, until researchers started coming up with what I think are somewhat better tests that use these nice nonverbal measures to ask whether or not other animals have false beliefs. Here's where I have to give a nod to a conversation we were having earlier about the "Noble Prize." For those watching who don't know, that's a prize we were hoping someone out there would donate lots of money for, so we could reward researchers who, upon having evidence that their idea was wrong, admitted that their idea was wrong.

Here I have to give a shout out to one potential winner of this, who is Mike Tomasello, who in 1997 wrote a book that said, "I don't think any other animal has any representation of other individuals' mental states," and in 2003 he wrote a paper that updated that and said, "Because of new evidence I have to say that I was completely wrong in that book. I published that book, and I was wrong. Now there's good evidence that they do."

What's that evidence? Well, the evidence comes from a variety of different tasks showing that other animals seem to process information about other individuals' perception or visual access. One version of this type of test asks: do other nonhuman animals actually pay attention to what other individuals can see? So if you give them the option of trying to deceive somebody who is looking away versus somebody who is looking at a piece of food, what you find is that on the first trial, with no training, nonhuman primates know whom to steal from; they steal from the guy who can't see.

They also seem to know something about the fact that visual access passes into the future. So if somebody saw something at one point, they might recognize that that past visual access predicts that that individual might know something about what's happening in the future, and, therefore, won't steal from that individual, and so on. As Mike reported, we're starting to get more evidence that primates are doing better than we thought, but so far there hadn't been a really good test that would qualify for the kinds of critiques that Dan and others brought up.

Until a group of clever developmental psychologists came up with a very good nonverbal false belief test—a nonverbal test that allowed them to show that infants might be representing something that's going on inside somebody's head. We, as comparative cognition researchers, like it very much when developmental psychologists are clever like this, because when they come up with a good nonverbal test, we can then take it and do it ourselves and get the same answer.

And this is what happened a couple years ago, when Onishi and Baillargeon came up with a good nonverbal test of false beliefs that they used with 15-month-old infants, and that we and others were later able to import to nonhuman primates.

And here's how the test goes. Imagine that Danny, in this case, is either a monkey or an infant, and he's watching a display of me acting on the world. Later I'm going to ask what predictions he makes about my behavior. Imagine, if you will, that I have a PowerPoint that shows an image of me with two different boxes where I'm hiding objects. So Danny, just a casual observer, will be watching as I hide an object in one of these boxes—I'll hide it in the box on the left. The question is: where do you think I'm going to look? Well, if you were correctly representing that I had a true belief, you might expect me to reach over here. And you might find it surprising, if you understood my true belief, that I would reach to the second box, the one that didn't have this object that I desired.

It turns out that both 15-month-old infants and, in our case, rhesus monkeys show that effect. If you monitor how long they watch this event, a quasi-measure of their attention or surprise, they look longer at the event where I reach in the wrong spot. So they're tracking information about what I might know about the world and how I'm going to act. The question is what happens in this critical case of a false belief, where reality should be telling me to do one thing, but my belief, if you understood it, would be telling me to do something else. This would be a case where, again, there are two boxes; I place an object in one box, and while I'm not looking, the object moves to the other side. Now, if you're tracking my belief you should expect me to go to the box where the object was, but if you were analyzing my actions just in terms of my behavior, you might expect me to go where the object is, because of course that's where it is.

What do 15-month-old infants do? The 15-month-old infants, when they see me put an object here and see it move to the other side, expect me to reach over here, and they're very surprised if I reach in the correct box. This was some of the first evidence that within the second year of life babies might be tracking another individual's false belief, published in Science; this was a great thing. What we did was to say, "Ah, this is a fantastic test. Let's apply this to our macaque monkeys." I have to be honest, when we first ran this test I assumed that if 15-month-old infants are tracking this information, that's exactly what the monkeys are going to do.
 
So, again, the test is: put the object here, the person's not looking, the object moves over here. And as I hit the button on our stats package to generate the means across all our data, I thought we were going to see one of two patterns: either the monkeys track my false belief, as the infants did, or they just track where the object actually is.

The really curious thing was that we didn't see either of those two patterns. What we saw was that in both cases the monkeys looked very little at these different options. When I put the object here and it moved, they looked very little whether I reached here or whether I reached there. And that was really different than what we'd seen in the other case. So we asked why. What it looked like is that the monkeys aren't just behaviorists, in this sense. They're not just tracking what my behavior is. They don't expect me to reach where the object is. But the monkeys might not have a full-blown representation of another person's belief, of the content of where the object is.

What it seemed they were doing is tracking our visual access. We, as researchers, keep referring to this as knowledge, although we take it that this is not what philosophers refer to as knowledge: the monkeys are tracking our historical visual access and expecting individuals to act on it. What happens when you lose that visual access is that all the representations go away; all bets are off. You're just ignorant. And the monkey might expect you, or Danny in this case, to look on the moon, because you don't actually know where the object is. This was surprising to us, because it wasn't the kind of result we expected. As we followed up on it, it turned out that the monkey system for thinking about how we act, again, seems to have no representation of others' beliefs, but it seems to be relatively sophisticated in its own right.




Well, the first thing we've learned is that it seems to take into account other individuals' inferences, and this is work not by me, but by Mike Tomasello and his colleagues, looking at the kinds of simple inferences you might make about where a piece of food is hidden. They did this clever experiment with chimpanzees, where they had a delicious piece of food that they hid behind a screen, and when they lifted the screen, there were two pieces of cloth on the table, one that was totally flat, and one that was beveled exactly in the shape of the food. They asked: can chimpanzees smartly make the inference that the food has to be hidden under the beveled one? The answer is yes. Not so surprising. Chimpanzees are pretty smart.

The surprising thing is that chimpanzees can also represent that same inference in another chimpanzee. So if they watch a different individual take this test, seeing a piece of food hidden while one cloth is beveled, they have the same intuition that the other chimpanzee should search in that spot.

The second, even more surprising thing we found is that the way the monkeys seem to shut off their inference about whether you have visual access, or whether you have knowledge, actually seems to be pretty sophisticated, and doesn't simply track what you might expect from behavior. So here's a test that we ran. Again, in one of these situations Danny would be watching me hiding different objects. You'd watch as I hid the object in this location, and just as I couldn't see, it popped right out and went right back in. So all the features of the world should tell you where I'm going to reach: I should reach over here. But this is not what we find in the nonhuman primates. What we find is that they, again, say: well, you lost your visual access. You should be ignorant. You can search on the moon. So even though all these features of the world are telling them how I should behave, we see this interesting disconnect.

Why am I telling you all this stuff? First, I think we're finally getting some important insight into this age-old question that Premack and Woodruff gave us about whether or not other animals are mentalists. And I think the answer is that they don't seem to have representations of others' false beliefs, but they might not be as strictly behaviorist as we thought in the first place.

The second insight, and the reason I think this bears importantly on human nature, is that it seems like we have a phylogenetically old system to track information about individuals' visual access, a system that's present in monkeys, and we have no idea yet whether or not it's present in humans as well. By 15 months of age babies seem to be tracking other individuals' false beliefs, but this raises the question of whether they also have this other system going on under the surface, tracking visual access too. And I think that makes some interesting predictions about whether you get disconnects between these two systems, cases where what you're tracking with the phylogenetically earlier system tells you something different. I think those kinds of questions would be very interesting to explore, and might redefine the way we're thinking about how other animals track other minds.

So that's set of questions number one, which I wanted to tell you about, in part, because I think Mike Tomasello should win the Noble Prize; he'd certainly be getting my vote. The second set of studies I wanted to tell you about I think is even more relevant to some of the stuff we've already talked about, because I think in some ways these findings fall out of our being a species that has a phylogenetically relatively recent system for representing others' beliefs. And the possibility, I think, is that when natural selection builds in new systems, they tend to be a little bit kludgy, and they might actually have some problems inherent in them.

This raises the question of how we deploy our systems for representing others' beliefs. How is it that we look out into the world and think that Danny might have a certain belief about something, but be ignorant about something else? How quickly do we deploy these things? There are a couple different options. One is that we're cognitively lazy: we should only deploy these kinds of complicated systems in cases where we really, really need to. So if Josh were to give me some complicated moral scenario, where some guy knew something but somebody else had another belief, I would have to turn on all this machinery to make sense of it. But I shouldn't be doing it haphazardly, just when there are random things around the screen.

The second set of results I wanted to tell you about suggests that's not actually the case. It seems like there might be some interesting automaticity to the extent to which we turn on our mindreading abilities, and it seems like this automaticity might be different in nonhuman animals. This comes from a study that came out recently in Science, by Agnes Kovacs and her colleagues, where she was asking, again, about the automaticity with which we start thinking about another individual's beliefs.

And it must have been the most boring study ever for subjects to do, because all it involved was a subject, say Danny in this case, tracking an object that's moving behind an occluder, and all Danny has to say, when the object falls behind the occluder and the occluder goes away, is whether he thinks the object is there or not. Just a basic visual detection test. And, of course, since we're tricky cognitive scientists, we have some trials in which the object looked like it went back there, but when the screen falls, it's gone. And, of course, even though Danny is a fantastically smart person, he's going to make errors and be slower when I mess with him in that way. And that's just what you find. No surprise there.

The question is what happens in the case where there's another individual who happens to also be watching the scene, who has a different perspective than you do, who might even have a different belief about what you're seeing than you do, and the way Kovacs and colleagues tested this was to put a cartoon Smurf head on the screen, so the Smurf is on the screen while Danny is doing this task. It's completely incidental. Subjects know the Smurf doesn't matter at all. But it sometimes shares Danny's belief. Sometimes it sees it go back there just like Danny, and the screen drops, and it's gone. And sometimes it actually has a different belief. Sometimes it turns away at this critical moment where the object moves.

And the question was: even though this is a cartoon Smurf, even though it's completely incidental to the task, does it affect the way Danny responds? And I think the surprising answer is yes. What you find is that if the Smurf thought something was back there, even in cases where Danny didn't, he sped up. He doesn't take the reaction-time pause you'd expect, because there's another individual in the scene who has that belief.

What does this mean? Well, it means a couple things. One is that we might be implicitly tracking the perspectives and beliefs of a variety of other individuals around us. This is the thing that Ian Apperly and his colleagues refer to as altercentric interference. We might be getting this interesting interference by other people's beliefs, other people's contents, even though we know them to be different from our own.

Why am I, as a comparative person, telling you this? Well, we've recently been able to run a study like this on nonhuman primates, and what we find is that the monkeys are a lot more rational than people in this sense. They don't seem to be automatically computing other individuals' visual perspectives, and they don't seem to get messed up. In this sense the monkeys react as though, if they saw the object back there, it's back there; if they didn't, it isn't. Okay?

What are the implications of some of this stuff? Well, I really wish Fiery were still here, because one of the implications, I think, is that we might have automatic systems for tracking what other individuals know, and, speculatively, I can extend this to what other people intend, what other people's attitudes are, and so on. These things might deploy automatically and be relatively under the hood in a way that we might not expect, but that's exactly the kind of mechanism you might need for the sorts of uniquely human things Fiery was talking about—namely things like social learning, namely things like picking up on others' reinforcement histories. All the kinds of things that humans do that we think of as unique might rely on this kind of kludgy mechanism, where we just get interference between the contents of our own mental states and somebody else's.

Is this really true? Do we see any data that something like this might really be happening? There's a third line of comparative studies coming out that I'll tell you about: some interesting work on the cases in which other animals can socially learn from us, and cases in which humans might learn from others in a way that's less rational than other animals.

One of the leftover empirical claims from the 1990s is that other animals can't imitate. It's not true. They can actually follow our actions and imitate, but they tend to do it in relatively select situations. What are those situations? Well, it tends to be situations in which they, themselves, don't know how to do something. So if you give chimpanzees an opaque puzzle box and they have no idea how to open it, what they will do is watch how you open it, and they will follow exactly what you do. If, in contrast, you give chimpanzees a transparent puzzle box and they can kind of figure it out, they just go on the basis of what they know.

Critically, what I've just told you predicts that humans might do worse at this task, and this is what Vickie Horner and Andy Whiten tested. They gave these opaque puzzle boxes and transparent puzzle boxes to chimpanzees and children, and they gave them a demonstrator who wasn't a smart demonstrator, but who was doing something dumb. So imagine you see a puzzle box, you don't know how it works, but you see me take a tool, probe into a little opening in the top of the puzzle box, and then use the tool to open up a door in the front and take candy out. You then give this to children and chimpanzees. It's an opaque box; they don't know how it works. They do exactly what the human demonstrator did: they probe into the top and use the tool to open the box.

Now, the critical test is you bring out a transparent box, and you can see that the top of the box is just empty; all you have to do is open the door and there's the candy. But you see this demonstrated: the demonstrator painstakingly sticks the tool in the top, then opens the thing. What do you do? Well, chimpanzees just cut to the chase. They just open the door and take the food. What do human four-year-olds do? They slavishly copy exactly what they see the human do. And you might think, well, the kid doesn't want to annoy the human adult who's just been teaching them. But a graduate student at Yale, Derek Lyons, ran a whole variety of control conditions to show it's not that the kids think this is normative. Watching an adult demonstrator has changed the way the kids think about the causal mechanism of this box. They think: somehow, I don't know the causal mechanism, but you have to do this thing at the top, or else you can't open it.

This is very profound, and, again, it suggests that in some ways animals, in their noninterference across mental states, might be more rational than us. But I think this provides a powerful mechanism for teaching, a powerful mechanism for the kinds of reward structures that Fiery has talked about, and potentially a powerful mechanism to solve the chicken-and-egg problem I was asking Nick about earlier: if we want to know why these crazy things transmit through networks, things like our attitudes, or whether or not we smoke, or whether or not we're obese, and so on, it might be that if we're constantly walking around automatically experiencing interference from other people's attitudes and beliefs, that's a really easy way for just being around a friend to transmit these kinds of things.

All of this stuff I talked about at the end has been pretty speculative, but that's exactly why I wanted to talk about it in front of you guys. I'm not sure that, per John's marching orders, you get deep insight into human nature just yet, but I think these new kinds of findings, where we're seeing differences, are pointing us in new directions, not just toward ways that humans might be unique cognitively, but toward the way these different cognitive mechanisms might play out in a broader context, allowing us to do all kinds of human things, like culture, and so on.

Just to kind of round out the discussion we had last night at dinner, I hope I've posed some interesting new questions for you, given you some zany speculation, and talked to you about some spots where the jury is still out. Thank you.
 


DENNETT: My bumper sticker these days says, "Competence without Comprehension." The idea is that human comprehension is built up out of competences which are themselves relatively uncomprehending. The Whiten result fits beautifully into that, in that it even permits you to speculate that it's an adaptation for cultural transmission that we ape more than apes do, and this opens the gates for all sorts of advanced techniques that we can acquire, and then have in our toolkit, that we don't yet have to understand. They bring us benefits, and then we can build other things out of them, but we don't require any level of comprehension in order to take them on, and then they can help us develop comprehension later.

SANTOS:  Yes. Although I think with some of the other overimitation results you might need to amend the bumper sticker to "Competence, not comprehension, but then later comprehension," because I think the powerful thing about some of these results is not just that the kids followed the behavior, it's that they developed rich causal explanations based on the fact that somebody had an intent to do something. And so the thing I find most fascinating is not just the behavioral transmission, it's what goes with it when you see an intentional human do something: the fact that it must have been done for a reason. There must be an explanation, and kids, based on this social input, are completely willing to override the physics.

One of the powerful results Derek had is that he asked separate children how this object works, and all of them are sharp enough to know exactly the physics of how this object works. Then you see a human do a dumb thing on this object, this kind of strange thing that you wouldn't do. All of the kids override what they saw before, saying not just that you have to do it, but that this is how the object causally works, and they spin a ton of different interesting stories that don't make any physical sense to explain how it works. So it's not just that you can get these things without comprehension; seeing it builds in a comprehension that may or may not be accurate, based on your knowledge.

CHRISTAKIS: The thing you're saying now prompts a thought in me, one that was also prompted by something June said earlier today. Of course, experimentally these are fascinating things to think about, the way you're describing them, and the experiments are so fiendishly clever, it makes me want to switch fields and do the experiments; there's so much thought and creativity in making them. And, of course, when we do experiments, we isolate down to particular actions, and so forth.

But maybe it's the case that while it's seemingly "irrational" for the baby to behave this way with this clear puzzle box, in aggregate it's better for the organism to do what the adults do. And, of course, you know, it's like genes and competition, right? I mean you could have, you know, "dysfunctional genes," or the emotions we were talking about earlier. There may be ways in which, across time, it makes no sense for you to be happy when the world is collapsing, when we look at a single packet of time, but maybe on average, across time, it's good for you to feel happy no matter what's happened. I don't know. I'm making it up. But the point is, if you expand your horizon maybe it's no longer as crazy. Maybe it's my resistance to not wanting to think that chimps are smarter than I am, but, you know, when you describe it like they are behaving more rationally: yes, in this particular case, but maybe more generally that's a price we pay for …

SANTOS:  So it makes a prediction about the kind of extended phenotype in which we humans find ourselves, which is that the social information we get is often pretty accurate …

CHRISTAKIS:  And sometimes we're led astray. Yeah.

SANTOS:  And sometimes we can get led astray. And for the kinds of physical environments where we, at least as modern humans, find ourselves, that's for sure true, right? If I were just to use my physical intuitions to try to figure out this iPad, I would be completely screwed, but as soon as Josh hits one button and does it, then I have insight into this.

When Derek Lyons talks about some of these results, he always starts his talks with whatever the latest winning Rube Goldberg machine is. He puts that up, and then a coconut, and he says the coconut is the most complicated thing in the chimpanzee world, this is the causal thing that they cannot figure out, whereas we deal with these causal systems that are incredibly complicated. And he chooses Rube Goldberg to say the beauty of these is that you can, with your naive physics, understand all this stuff, but that's the teeniest, tiniest crazy causal system that we have to deal with as humans. We're constantly faced with these causal systems that we just don't have the ability to understand, but other people do.

And I think the interesting thing, the reason I think this relates to Fiery's stuff, is that it might not just be for complicated causal systems; it might be for elaborate social reward structures, elaborate sets of goals and behaviors that you want to link together but haven't done yourself yet. And I think we really need to look en masse at those kinds of cases and ask the question: Do these kinds of low-level mechanisms work in all these cases, and do they ultimately derive the kinds of smart answers you're talking about?

CHRISTAKIS:  Have you seen that YouTube video that went viral a year or two ago of a little baby, I don't know how old, holding a magazine, like an old-fashioned magazine, and going like this to try to make the picture bigger? They have to learn, you know …

SANTOS:  The other thing that you get out of this is the power of your social input, and one of the things you learn if you hang out with toddlers who have access to iPads is just how incredibly reinforcing these structures are. I think part of it is that they're reinforcing for that reason that Fiery is talking about, that they're getting incredible social input that this is a reinforcing thing. They see their parents and caregivers around these objects, interacting with them in a way that says this is the more important thing, more than any food or anything. They're like the rats that were getting the cocaine, except they're the rats that are getting the iPad. But the key is the kids don't have to do that themselves. The inputs we're providing are getting sucked in in different ways.

KNOBE:  It seemed like the answer you were giving to Nicholas was that what we really want to do is understand the causal structure of some object, but luckily there are people around who already know it. We're just kind of using them as a means to do this other task. But I wonder if there's any evidence for that view as opposed to another possible view that it's not really as crucially important for us to get the right answer about the causal structure of this object. It's just to get along with other people.

What we're really concerned with is not using other people as a means to correctly understand the causal structure of this object, but interacting with other people and working with other people in certain ways, and even if we get the causal structure of the object wrong, if we connect up with other people by doing it the same way they do, and we're better off.

SANTOS:  Right. So if we make really strong cooperators, even if we don't understand how the box works, we still don't get attacked or whatever.

KNOBE:  Suppose I have the option of getting it right, but everyone thinks I'm a weirdo, or getting it wrong, but everyone thinks I'm good. Maybe I'd be better off getting a wrong answer.

SANTOS:  Yes. Well, the question is why it has to go with the wrong answer, though, right? So you could imagine a whole set of mechanisms of conformity that didn't come with the comprehension extra part, right? You could imagine a whole case of conformity that was of the form, "Wow, like Josh is such a weirdo when he's opening the thing that way. I'll open it that way in front of Josh so he won't hate me, but as soon as Josh leaves, like that's it, because I know that that's not the way to do this. I just won't test the waters of being a jerk in terms of my conformity."

But that doesn't seem to be what kids are doing. The fact that their causal analysis goes along with what others do suggests that it's not just about relating, or something similar, or setting up your ingroup to do actions in the same way. The fact is that what goes along with it is a rich causal analysis that goes beyond what you'd need if we were just trying to get along. I mean maybe that might come along for the ride, and so on, but I think we need an extra thing to explain why that part comes, too, and I think that's the nice thing about some of these studies, is that they've kind of controlled for that possibility.

The way Derek did it was really elegant. So a child comes in, and they learn this task. They see the experimenter do things and Derek, as the experimenter, convinces the child that the experiment is totally over. The child is like, oh, the experiment is over, the kid gets their prize, everything is fine. And then Derek convinces the child that some emergency has happened. The emergency is there's a new child there, and we all forgot to check if the object was back in the thing. So somebody needs to open the object as quickly as possible while Derek leaves, and nobody is going to watch, but it's got to be incredibly fast. Nobody is going to watch you, and like it's very, very urgent. Derek runs out of the room, and what you see is not that the child stopped doing the stupid thing, they just do it really fast, and really urgently.  It doesn't seem like it's about just relating. It seems like it's really changing their comprehension, and I'm not sure why you get that, but you do.

CHRISTAKIS:  That's so clever.
