Edge Video Library

Sarah-Jayne Blakemore: "The Teenager's Sense of Social Self"

HeadCon '14
Sarah-Jayne Blakemore
[11.18.14]

The reason why that letter is nice is because it illustrates what's important to that girl at that particular moment in her life. Less important than the fact that man had landed on the moon were things like what she was wearing, what clothes she was into, who she liked, who she didn't like. This is the period of life where that sense of self, and particularly sense of social self, undergoes profound transition. Just think back to when you were a teenager. It's not that before then you don't have a sense of self; of course you do. A sense of self develops very early. What happens during the teenage years is that your sense of who you are—your moral beliefs, your political beliefs, what music you're into, fashion, what social group you're into—that's what undergoes profound change.

SARAH-JAYNE BLAKEMORE is a Royal Society University Research Fellow and Professor of Cognitive Neuroscience, Institute of Cognitive Neuroscience, University College London. Sarah-Jayne Blakemore's Edge Bio


 

Hugo Mercier: "Toward The Seamless Integration Of The Sciences"

HeadCon '14
Hugo Mercier
[11.18.14]

One of the great things about cognitive science is that it allowed us to continue that seamless integration of the sciences, from physics, to chemistry, to biology, and then to the mind sciences, and it's been quite successful at doing this in a relatively short time. But on the whole, I feel there's still a failure to continue this thing towards some of the social sciences such as anthropology, to some extent, and sociology or history, which still remain very much shut off from what some would see as progress, and as further integration.

HUGO MERCIER, a Cognitive Scientist, is an Ambizione Fellow at the Cognitive Science Center at the University of Neuchâtel. Hugo Mercier's Edge Bio Page


 

L.A. Paul: "The Transformative Experience"

HeadCon '14
L.A. Paul
[11.18.14]

We're going to pretend that modern-day vampires don't drink the blood of humans; they're vegetarian vampires, which means they only drink the blood of humanely farmed animals. You have a one-time-only chance to become a modern-day vampire. You think, "This is a pretty amazing opportunity, do I want to gain immortality, amazing speed, strength, and power? But do I want to become undead, become an immortal monster and have to drink blood? It's a tough call." Then you go around asking people for their advice and you discover that all of your friends and family members have already become vampires. They tell you, "It is amazing. It is the best thing ever. It's absolutely fabulous. It's incredible. You get these new sensory capacities. You should definitely become a vampire." Then you say, "Can you tell me a little more about it?" And they say, "You have to become a vampire to know what it's like. You can't, as a mere human, understand what it's like to become a vampire just by hearing me talk about it. Until you're a vampire, you're just not going to know what it's going to be like."

L.A. PAUL is Professor of Philosophy at the University of North Carolina at Chapel Hill, and Professorial Fellow in the Arché Research Centre at the University of St. Andrews. L.A. Paul's Edge Bio page


 

Simone Schnall: "Moral Intuitions, Replication, and the Scientific Study of Human Nature"

HeadCon '14
Simone Schnall
[11.18.14]

In the end, it's about admissible evidence and ultimately, we need to hold all scientific evidence to the same high standard. Right now we're using a lower standard for the replications involving negative findings when in fact this standard needs to be higher, because establishing the absence of an effect is much more difficult than establishing the presence of an effect.

SIMONE SCHNALL is a University Senior Lecturer and Director of the Cambridge Embodied Cognition and Emotion Laboratory at Cambridge University. Simone Schnall's Edge Bio Page


 

Jennifer Jacquet: "Shaming At Scale"

HeadCon '14
Jennifer Jacquet
[11.18.14]

Shaming, in this case, was a fairly low-cost form of punishment that had high reputational impact on the U.S. government, and led to a change in behavior. It worked at scale—one group of people using it against another group of people at the group level. This is the kind of scale that interests me. And the other thing that it points to, which is interesting, is the question of when shaming works. In part, it's when there's an absence of any other option. Shaming is a little bit like antibiotics. We can overuse it and actually dilute its effectiveness, because it's linked to attention, and attention is finite. With punishment, in general, using it sparingly is best. But in the international arena, and in cases in which there is no other option, there is no formalized institution, or no formal legislation, shaming might be the only tool that we have, and that's why it interests me. 

JENNIFER JACQUET is Assistant Professor of Environmental Studies, NYU; Researching cooperation and the tragedy of the commons; Author, Is Shame Necessary? Jennifer Jacquet's Edge Bio Page


 

Lawrence Ian Reed: "The Face Of Emotion"

HeadCon '14
Lawrence Ian Reed
[11.18.14]

What can we tell from the face? There are mixed data, but some show a pretty strong coherence between what is felt and what's expressed on the face. Happiness, sadness, disgust, contempt, fear, anger, all have prototypic or characteristic facial expressions. In addition to that, you can tell whether two emotions are blended together. You can tell the difference between surprise and happiness, and surprise and anger, or surprise and sadness. You can also tell the strength of an emotion. There seems to be a relationship between the strength of the emotion and the strength of the contraction of the associated facial muscles.

LAWRENCE IAN REED is a Visiting Assistant Professor of Psychology, Skidmore College. Lawrence Ian Reed's Edge Bio page


 

David Rand: "How Do You Change People's Minds About What Is Right And Wrong?"

HeadCon '14
David Rand
[11.18.14]

There are often future consequences for your current behavior. You can't just do whatever you want because if you are selfish now, it'll come back to bite you. In order for any of that to work, though, it relies on people caring about you being cooperative. There has to be a norm of cooperation. The important question then, in terms of trying to understand how we get people to cooperate and how we increase social welfare, is this: Where do these norms come from and how can they be changed? And since I spend all my time thinking about how to maximize social welfare, it also makes me stop and ask, "To what extent is the way that I am acting consistent with trying to maximize social welfare?"

DAVID RAND is Assistant Professor of Psychology, Economics, and Management at Yale University, and Director of Yale University’s Human Cooperation Laboratory. David Rand's Edge Bio page


 

Molly Crockett: "The Neuroscience of Moral Decision Making"

HeadCon '14
Molly Crockett
[11.18.14]

Imagine we could develop a precise drug that amplifies people's aversion to harming others; on this drug you won't hurt a fly, everyone taking it becomes like Buddhist monks. Who should take this drug? Only convicted criminals—people who have committed violent crimes? Should we put it in the water supply? These are normative questions. These are questions about what should be done. I feel grossly unprepared to answer these questions with the training that I have, but these are important conversations to have between disciplines. Psychologists and neuroscientists need to be talking to philosophers about this. These are conversations that we need to have because we don't want to get to the point where we have the technology but haven't had this conversation, because then terrible things could happen.

MOLLY CROCKETT is Associate Professor, Department of Experimental Psychology, University of Oxford; Wellcome Trust Postdoctoral Fellow, Wellcome Trust Centre for Neuroimaging. Molly Crockett's Edge Bio Page


 

The Myth Of AI

Jaron Lanier
[11.14.14]

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even before then. There's always been a question about whether a program is something alive or not, since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us. ...That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. ... That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing to the fortunes of whoever runs the computers. You're saying, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," and, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.

JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future? Jaron Lanier's Edge Bio Page


 

Jennifer Jacquet on Extinction

Jennifer Jacquet
[11.6.14]

I dream about the sea cow, or imagine what it would be like to see one in the wild, but the case of the Pinta Island giant tortoise was a particularly strange feeling for me personally because I had spent many afternoons in the Galapagos Islands, when I was a volunteer with the Sea Shepherd Conservation Society, in Lonesome George's den with him. If any of you have visited the Galapagos, you know that you can even feed the giant tortoises that are in the Charles Darwin Research Station. This is Lonesome George here.

He lived to a ripe old age but failed, as they pointed out many times, to reproduce. Just recently, in 2012, he died, and with him the last of his species. He was couriered to the American Museum of Natural History and taxidermied there. A couple of weeks ago his body was unveiled. This was the unveiling that I attended, and at this exact moment in time I can say that I was feeling a little like I am now: nervous and kind of nauseous, while everyone else seemed calm. I wasn't prepared to see Lonesome George. Here he is taxidermied, looking out over Central Park, which was strange as well. At that moment I realized that I knew the last individual of this species to go extinct. That presents this strange predicament for us to be in, in the 21st century—this idea of conspicuous extinction.


 
