2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Professor, Oxford University; Director, Future of Humanity Institute; Author, Superintelligence: Paths, Dangers, Strategies
A Difficult Topic

 

First—what I think about humans who think about machines that think: I think that for the most part we are too quick to form an opinion on this difficult topic. Many senior intellectuals are still unaware of the recent body of thinking that has emerged on the implications of superintelligence. There is a tendency to assimilate any complex new idea to a familiar cliché. And for some bizarre reason, many people feel it is important to talk about what happened in various science fiction novels and movies when the conversation turns to the future of machine intelligence (though hopefully John Brockman's admonition to the Edge commentators to avoid doing so will have a mitigating effect on this occasion).

With that off my chest, I will now say what I think about machines that think:

  1. Machines are currently very bad at thinking (except in certain narrow domains).

  2. They'll probably one day get better at it than we are (just as machines are already much stronger and faster than any biological creature).

  3. There is little information about how far we are from that point, so we should use a broad probability distribution over possible arrival dates for superintelligence (a small numerical sketch of what such a distribution might look like follows this list).

  4. The step from human-level AI to superintelligence will most likely be quicker than the step from current levels of AI to human-level AI (though, depending on the architecture, the concept of "human-level" may not make a great deal of sense in this context).

  5. Superintelligence could well be the best thing or the worst thing that will ever have happened in human history, for reasons that I have described elsewhere.
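
Purely as an illustration of point 3: one way to hold a deliberately broad distribution is to spread probability mass thinly across coarse intervals and only then read off cumulative quantities. The intervals and weights in this Python sketch are hypothetical placeholders chosen for the example, not estimates made in this essay.

```python
# Illustrative only: a deliberately broad, coarse probability distribution
# over possible arrival dates for superintelligence. The interval weights
# below are hypothetical placeholders, not estimates from the essay.
arrival_mass = {
    "before 2040": 0.15,
    "2040-2070": 0.25,
    "2070-2100": 0.25,
    "after 2100": 0.20,
    "never": 0.15,
}

def mass_in(*intervals):
    """Total probability mass assigned to the named intervals."""
    return sum(arrival_mass[name] for name in intervals)

if __name__ == "__main__":
    assert abs(sum(arrival_mass.values()) - 1.0) < 1e-9  # a proper distribution
    # With mass spread this thinly, no single date carries much weight,
    # yet cumulative questions can still be answered:
    print("P(arrival before 2100) =", mass_in("before 2040", "2040-2070", "2070-2100"))
```

The point of the breadth is simply that, given how little information we have, no conclusion should hinge on any one interval turning out to be the right one.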

The probability of a good outcome is determined mainly by the intrinsic difficulty of the problem: what the default dynamics are and how difficult it is to control them. Recent work indicates that this problem is harder than one might have supposed. However, it is still early days and it could turn out that there is some easy solution or that things will work out without any special effort on our part.

Nevertheless, the degree to which we manage to get our act together will have some effect on the odds. The most useful thing that we can do at this stage, in my opinion, is to boost the tiny but burgeoning field of research that focuses on the superintelligence control problem (studying questions such as how human values can be transferred to software). The reason to push on this now is partly to begin making progress on the control problem and partly to recruit top minds into this area so that they are already in place when the nature of the challenge takes clearer shape in the future. It looks like maths, theoretical computer science, and maybe philosophy are the types of talent most needed at this stage.

That's why there is an effort underway to drive talent and funding into this field, and to begin to work out a plan of action. By the time this comment is published, the first large meeting to develop a technical research agenda for AI safety will just have taken place.