2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Co-Founder, Centre for the Study of Existential Risk, Future of Life Institute; Founding Engineer, Skype, Kazaa
People Must Take Responsibility For Their Actions. Scientists And Technologists Are No Exception.

Six months before the first nuclear test, the Manhattan Project scientists prepared a report called LA-602. It investigated whether a nuclear detonation could trigger a runaway reaction that would ignite the atmosphere and destroy the Earth.

It was probably the first time scientists performed an analysis to predict whether humanity would perish as a result of a new technological capability, making it the first piece of existential risk research.

Of course, nuclear technology was not the last dangerous technology that humans invented. Since then, the topic of catastrophic side effects has come up repeatedly in different contexts: recombinant DNA, synthetic viruses, nanotechnology, and so on. Luckily for humanity, sober analysis has usually prevailed, resulting in various treaties and protocols to steer the research.

When I think about machines that can think, that is, AI, I think of them as a technology that needs to be developed with similar (if not greater!) care.

Unfortunately, the idea of AI safety has been more challenging to popularise than, say, biosafety, because people have rather poor intuitions when it comes to thinking about non-human minds. Also, AI is really a "meta-technology": a technology that can develop further technologies, either in conjunction with humans or perhaps even autonomously, which complicates the analysis even further.

That said, there has been very encouraging progress over the last few years, exemplified by the initiatives of new institutions such as the Future of Life Institute, which has brought together leading AI researchers to explore appropriate research agendas, standards, and ethics.

Therefore, in my view, complicated arguments by people trying to sound clever on the issue of AI, thinking, consciousness, or ethics are often a distraction from the trivial truth: the only way to ensure we don't accidentally blow ourselves up with our own technology (or meta-technology!) is to do our homework and take the relevant precautions, just as the Manhattan Project scientists did when they prepared the LA-602 report. We need to set aside the tribal quibbles and ramp up AI safety research.

By way of analogy, in the decades since the Manhattan Project, nuclear scientists have moved on from increasing the power of nuclear fusion to the question of how best to contain it, and we don't even call that "nuclear ethics".

We call that common sense.