Existential Risk

Jaan Tallinn [4.16.15]
Introduction by Max Tegmark

The reason I'm engaged in trying to lower existential risks has to do with the fact that I'm a convinced consequentialist. We have to take responsibility for modeling the consequences of our actions, and then pick the actions that yield the best outcomes. Moreover, when you start thinking about—in the palette of actions that you have—what are the things that you should pay special attention to, one argument that can be made is that you should pay attention to areas where you expect your marginal impact to be the highest. There are clearly very important issues about inequality in the world, or global warming, but I couldn't make a significant difference in these areas.

JAAN TALLINN is a co-founder of The Centre for the Study of Existential Risk at the University of Cambridge, UK, as well as The Future of Life Institute in Cambridge, MA. He is also a founding engineer of Kazaa and Skype.

INTRODUCTION
by Max Tegmark

I find Jaan Tallinn remarkable in more ways than one. His rags-to-riches entrepreneur story is inspiring in its own right, starting behind the Iron Curtain and ending up connecting the world with Skype. How many times have you skyped? How many people do you know who created a new verb?

Most successful entrepreneurs I know went on to become serial entrepreneurs. In contrast, Jaan chose a different path: he asked himself how he could leverage his success to do as much good as possible in the world, developed a plan, and dedicated his life to it. His ambition makes even the goals of Skype seem modest: reduce existential risk, i.e., the risk that we humans do something as stupid as go extinct due to poor planning. 

After only a few short years, Jaan's impact is remarkable. He is a key supporter of a global network of non-profit existential risk organizations including The Future of Humanity Institute, The Machine Intelligence Research Institute, The Global Catastrophic Risk Institute, The Centre for the Study of Existential Risk at the University of Cambridge, and The Future of Life Institute, the last two of which he co-founded.

I’ve had the pleasure of working with him on The Future of Life Institute from day one, and if you’ve heard of our recent conference, open letter, and well-funded research program on keeping artificial intelligence beneficial, then I’d like to make clear that none of this would have happened if it weren’t for Jaan’s support. If you’re an intelligent life form reading this text millions of years from now, marveling at how life is flourishing, you may owe your existence to Jaan.

MAX TEGMARK is a Physicist, MIT; Researcher, Precision Cosmology; Founder, Future of Life Institute; Author, Our Mathematical Universe.


EXISTENTIAL RISK

I split my activity between various organizations. I don't have one big umbrella organization that I represent. I use various commercial organizations and investment companies such as Metaplanet Holdings, which is my primary investment vehicle, to invest in various startups, including artificial intelligence companies. Then I have one nonprofit foundation called the Solenum Foundation that I use to support various so-called existential risk organizations around the world.

Death Is Optional

Yuval Noah Harari, Daniel Kahneman [3.4.15]

Once you really solve a problem like direct brain-computer interface ... when brains and computers can interact directly, that's it, that's the end of history, that's the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, by definition, we have no way of even starting to imagine what's happening beyond that. 

YUVAL NOAH HARARI, Lecturer, Department of History, Hebrew University of Jerusalem, is the author of Sapiens: A Brief History of Humankind.

DANIEL KAHNEMAN is the recipient of the Nobel Prize in Economics, 2002, and the Presidential Medal of Freedom, 2013. He is the Eugene Higgins Professor of Psychology Emeritus, Princeton, and author of Thinking, Fast and Slow.

THE REALITY CLUB: Nicholas Carr, Steven Pinker, Yuval Noah Harari, Kevin Kelly



Death Is Optional


DANIEL KAHNEMAN: Before asking you what are the questions you are asking yourself, I want to say that I've now read your book Sapiens twice, and in that book you do something that I found pretty extraordinary. You cover the history of mankind. It seems like an invitation for people to dismiss it as superficial, so I read it, and I read it again, because, in fact, I found so many ideas that were enriching. I want to talk about just one or two of them as examples.

Your chapter on science is one of my favorites and so is the title of that chapter, "The Discovery of Ignorance." It presents the idea that science began when people discovered that there was ignorance, and that they could do something about it, that this was really the beginning of science. I love that phrase.

And in fact, I loved that phrase so much that I went and looked it up. Because I thought, where did he get it? My search of the phrase showed that all the references were to you. And there are many other things like that in the book.

How did you transition from that book to what you're doing now?

YUVAL NOAH HARARI: It came naturally. My big question at present is what is the human agenda for the 21st century. And this is a direct continuation from covering the history of humankind, from the appearance of Homo sapiens until today, so when you finish that, immediately, you think, okay, what next? I'm not trying to predict the future, which is impossible, now more than ever. Nobody has a clue what the world will look like in, say, 40, 50 years. We may know some of the basic variables but, if you really understand what's going on in the world, you know that it's impossible to have any good prediction for the coming decades. This is the first time in history that we're in this situation.

I'm trying to do something that is the opposite of predicting the future. I'm trying to identify what are the possibilities, what is the horizon of possibilities that we are facing? And which of these possibilities will come to pass? We still have a lot of choice in this regard.

Digital Reality

Neil Gershenfeld [1.23.15]

...Today, you can send a design to a fab lab and you need ten different machines to turn the data into something. Twenty years from now, all of that will be in one machine that fits in your pocket. This is the sense in which it doesn't matter. You can do it today. How it works today isn't how it's going to work in the future but you don't need to wait twenty years for it. Anybody can make almost anything almost anywhere.              

...Finally, when I could own all these machines I got that the Renaissance was when the liberal arts emerged—liberal for liberation, humanism, the trivium and the quadrivium—and those were a path to liberation, they were the means of expression. That's the moment when art diverged from artisans. And there were the illiberal arts that were for commercial gain. ... We've been living with this notion that making stuff is an illiberal art for commercial gain and it's not part of means of expression. But, in fact, today, 3D printing, micromachining, and microcontroller programming are as expressive as painting paintings or writing sonnets but they're not means of expression from the Renaissance. We can finally fix that boundary between art and artisans.

...I'm happy to take credit for saying computer science is one of the worst things to happen to computers or to science because, unlike physics, it has arbitrarily segregated the notion that computing happens in an alien world.

NEIL GERSHENFELD is a Physicist and the Director of MIT's Center for Bits and Atoms. He is the author of FAB.


Digital Reality

What interests me is how bits and atoms relate—the boundary between digital and physical. Scientifically, it's the most exciting thing I know. It has all sorts of implications that are widely covered almost exactly backwards. Playing it out, what I thought was hard technically is proving to be pretty easy. What I didn't think was hard was the implications for the world, so a bigger piece of what I do now is that. Let's start with digital.

Digital is everywhere; digital is everything. There's a lot of hubbub about what's the next MIT, what's the next Silicon Valley, and those were all the last war. Technology is leading to very different answers. To explain that, let's go back to the science underneath it and then look at what it leads to.

WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

"Dahlia" by Katinka Matson (www.katinkamatson.com)

"Another year, and some of the most important thinkers and scientists of the world have accepted the intellectual challenge." —El Mundo, 2015

"Deliciously creative, the variety astonishes. Intellectual skyrockets of stunning brilliance. Nobody in the world is doing what Edge is doing...the greatest virtual research university in the world." —Denis Dutton, Founding Editor, Arts & Letters Daily

_________________________________________________________________
Dedicated to the memory of Frank Schirrmacher (1959-2014).
_________________________________________________________________

In recent years, the 1980s-era philosophical discussions about artificial intelligence (AI)—whether computers can "really" think, refer, be conscious, and so on—have led to new conversations about how we should deal with the forms of AI that many argue are now actually being implemented. These "AIs", if they achieve "Superintelligence" (Nick Bostrom), could pose "existential risks" that lead to "Our Final Hour" (Martin Rees). And Stephen Hawking recently made international headlines when he noted "The development of full artificial intelligence could spell the end of the human race."
 
THE EDGE QUESTION—2015 

WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

But wait! Should we also ask what machines that think, or, "AIs", might be thinking about? Do they want, do they expect civil rights? Do they have feelings? What kind of government (for us) would an AI choose? What kind of society would they want to structure for themselves? Or is "their" society "our" society? Will we, and the AIs, include each other within our respective circles of empathy?

Numerous Edgies have been at the forefront of the science behind the various flavors of AI, either in their research or writings. AI was front and center in conversations between charter members Pamela McCorduck (Machines Who Think) and Isaac Asimov (Machines That Think) at our initial meetings in 1980. And the conversation has continued unabated, as is evident in the recent Edge feature "The Myth of AI", a conversation with Jaron Lanier, that evoked rich and provocative commentaries.

Is AI becoming increasingly real? Are we now in a new era of the "AIs"? To consider this issue, it's time to grow up. Enough already with the science fiction and the movies: Star Maker, Blade Runner, 2001, Her, The Matrix, "The Borg". Also, 80 years after Turing's invention of his Universal Machine, it's time to honor Turing, and other AI pioneers, by giving them a well-deserved rest. We know the history. (See George Dyson's 2004 Edge feature "Turing's Cathedral".) So, once again, this time with rigor, the Edge Question—2015:

WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

John Brockman
Publisher & Editor, Edge
