That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."
In the past, all kinds of different figures have proposed that this kind of thing will happen, using different terminology. Some of them like the idea of the computers taking over, and some of them don't. What I'd like to do here today is propose that the whole basis of the conversation is itself askew, and confuses us, and does real harm to society and to our skills as engineers and scientists.
A good starting point might be the latest round of anxiety about artificial intelligence, which has been stoked by some figures who I respect tremendously, including Stephen Hawking and Elon Musk. And the reason it's an interesting starting point is that it's one entry point into a knot of issues that can be understood in a lot of different ways, but it might be the right entry point for the moment, because it's the one that's resonating with people.
The usual sequence of thoughts you have here is something like: "so-and-so," who's a well-respected expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something terrible will happen. They're an existential threat, or whatever scary language is current. My feeling is that this is a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn't optimal is the way it talks about an end of human agency.
But it's a call for increased human agency, so in that sense maybe it's functional, but I want to go a little deeper into it by proposing that the biggest threat of AI is probably the one that's due to AI not actually existing, to the idea being a fraud, or at least such a poorly constructed idea that it's phony. In other words, what I'm proposing is that if AI were a real thing, then it probably would be less of a threat to us than it is as a fake thing.
What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.
For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.
But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering practice, and also undermine scientific method, and also undermine the economy.
The problem I see isn't so much with the particular techniques, which I find fascinating and useful, am very positive about, and think should be explored and developed further, but with the mythology around them, which is destructive. I'm going to go through a couple of layers of how the mythology does harm.
The most obvious one, which everyone in any related field can understand, is that it creates this ripple every few years of what have sometimes been called AI winters, where there's all this overpromising that AIs will be about to do this or that. It might be that vehicles are about to become fully autonomous rather than only partially autonomous, or that a program will be able to hold a full conversation as opposed to handling only the useful fragment of a conversation that helps you interface with a device.
This kind of overpromise then leads to disappointment, because it was premature, and that leads to reduced funding, startups crashing, and careers destroyed. This happens periodically, and it's a shame. It has hurt a lot of careers. It has helped other careers too, but that's been kind of random, depending on where you happen to fall in the phase of the cycle as you're coming up. It's just immature and ridiculous, and I wish that cycle could be shut down. And that's a widely shared criticism. I'm not saying anything at all unusual.
Let's go to another layer of how it's dysfunctional. And this has to do with just clarity of user interface, and then that turns into an economic effect. People are social creatures. We want to be pleasant, we want to get along. We've all spent many years as children learning how to adjust ourselves so that we can get along in the world. If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content. If you think of any particular movie you might want to see, the chances are it's not available for streaming; that's the paucity I'm talking about. And yet there's this recommendation engine, and the recommendation engine has the effect of serving as a cover, distracting you from the fact that there's very little available. And yet people accept it as being intelligent, because a lot of what is available is perfectly fine.
The one thing I want to say about this is I'm not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That's them being a good presenter. What's a theater without a barker on the street? That's what it is, and that's fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
There are other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good. There's no way to compare them to an alternative, so you don't know what might have been. If you want to put the work into it, you can play with that; you can try to erase your history, or have multiple personas on a site to compare them. That's the sort of thing I do, just to get a sense. I've also had a chance to work on the algorithms themselves, on the back side, and they're interesting, but they're vastly, vastly overrated.
I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement and manipulation in these systems. For instance, the theory is that you're getting big data by observing a lot of people who make choices, and then doing correlations to make suggestions to yet more people. But if the preponderance of those people have grown up inside the system and are responding to whatever choices it gave them, there's not enough fresh data coming in for even the most ideal or intelligent recommendation engine to do anything meaningful.
In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened. That's a pretty clear thing. What's not clear is where the boundary is.
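To make that boundary problem concrete, here is a toy sketch in Python (hypothetical items and numbers, not any real recommender) of how a system that learns from choices it has itself influenced ends up measuring its own manipulations rather than people's independent preferences:

```python
import random

# A toy feedback loop, not any real recommender: the engine learns from
# choices it has itself influenced, so its data stops reflecting what
# people independently prefer.

random.seed(0)
ITEMS = ["a", "b", "c", "d"]
TRUE_PREFERENCE = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}  # what people actually like
counts = {item: 1 for item in ITEMS}  # what the engine has observed so far

def recommend():
    # Recommend whatever has been chosen most often in the observed data.
    return max(counts, key=counts.get)

def user_choice(recommended, compliance=0.7):
    # Much of the time the user simply accepts the recommendation;
    # otherwise they choose according to their true preference.
    if random.random() < compliance:
        return recommended
    return random.choices(ITEMS, weights=[TRUE_PREFERENCE[i] for i in ITEMS])[0]

for _ in range(10_000):
    counts[user_choice(recommend())] += 1

total = sum(counts.values())
print({item: round(counts[item] / total, 2) for item in ITEMS})
# The observed shares lock onto whichever item got an early lead rather than
# converging on TRUE_PREFERENCE: the measurement is contaminated by the
# system's own suggestions, and there's no virgin baseline to compare against.
```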
If you ask whether a recommendation engine like Amazon's is more manipulative or more of a legitimate measurement device, there's no way to know. At this point there's no way to know, because it's too universal. The same thing can be said for any other big data system that recommends courses of action to people, whether it's the Google ad business, or social networks like Facebook deciding what you see, or any of the myriad dating apps. For all of these things, there's no baseline, so we don't know to what degree they're measurement versus manipulation.
Dating always has an element of manipulation; shopping always has an element of manipulation; in a sense, a lot of the things that people use these things for have always been a little manipulative. There's always been a little bit of nonsense. And that's not necessarily a terrible thing, or the end of the world.
But it's important to understand it if this is becoming the basis of the whole economy and the whole civilization. If people are deciding what books to read based on momentum within the recommendation engine, without ever going back to a virgin population that hasn't been manipulated, then the whole thing has spun out of control and doesn't mean anything anymore. It's not so much a rise of evil as a rise of nonsense. It's mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into. But I'm going to get back to that in a second.
To go yet another rung deeper, I'll revive an argument I've made previously, which is that it turns into an economic problem. The easiest entry point for understanding the link between the religious way of thinking about AI and the economic problem is automatic language translation. If you've heard me talk about this before, my apologies for repeating myself, but it has been the clearest example.
For three decades, the AI world was trying to create an ideal, little, crystalline algorithm that could take two dictionaries for two languages and turn out translations between them. Intellectually, this had its origins particularly around MIT and Stanford. Back in the 50s, because of Chomsky's work, there had been a notion of a very compact and elegant core to language. It wasn't a bad hypothesis, it was a legitimate, perfectly reasonable hypothesis to test. But over time, the hypothesis failed because nobody could do it.
Finally, in the 1990s, researchers at IBM and elsewhere figured out that the way to do it was with what we now call big data, where you get a very large example set, which, interestingly, we call a corpus, as in a corpse; that's the term of art for these things. If you have enough examples, you can correlate real translations, phrase by phrase, with new documents that need to be translated. You mash them all up, and you end up with something that's readable. It's not perfect, it's not artful, it's not necessarily correct, but suddenly it's usable. And you know what? It's fantastic. I love the idea that you can take some memo, and instead of having to find a translator and wait for them to do the work, you can just have something approximate right away, because that's often all you need. That's a benefit to the world. I'm happy it's been done. It's a great thing.
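If it helps to see how plain the underlying mechanism is, here is a toy sketch in Python (an invented phrase table, not the actual IBM system): there's no compact theory of language in it, just a lookup into human-made example translations that get stitched together phrase by phrase:

```python
# A toy phrase-table lookup, not the actual IBM method: translations come from
# a table of human-made example translations (the "corpus"), matched phrase by
# phrase against new text. The entries below are invented for illustration.

corpus = {
    ("good", "morning"): "buenos días",
    ("thank", "you"): "gracias",
    ("see", "you", "tomorrow"): "hasta mañana",
}

def translate(sentence: str) -> str:
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Greedily take the longest phrase starting at position i that some
        # human translator has already translated in the corpus.
        for length in range(len(words) - i, 0, -1):
            phrase = tuple(words[i:i + length])
            if phrase in corpus:
                out.append(corpus[phrase])
                i += length
                break
        else:
            out.append(words[i])  # no example available: pass the word through
            i += 1
    return " ".join(out)

print(translate("Good morning thank you"))  # -> buenos días gracias
```

Every useful output the sketch can produce traces directly back to an example a person supplied; the same is true, at enormous scale, of the real services.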
The thing that we have to notice, though, is that because of the mythology about AI, these services are presented as though they are mystical, magical personas. IBM makes a dramatic case that it has created an entity it calls different things at different times—Deep Blue and so forth. With the consumer tech companies, we tend to put a face in front of the service, like a Cortana or a Siri. The problem is that these are not freestanding services.
In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up.
This is not one of those. What this is, behind the curtain, is literally millions of human translators who have to provide the examples. The thing is, they didn't just provide one corpus once, way back. Instead, they're providing a new corpus every day, because the world of references, current events, and slang changes every day. We have to go and scrape examples from literally millions of translators, unbeknownst to them, every single day, to help keep those services working.
The problem here should be clear, but just let me state it explicitly: we're not paying the people who are providing the examples to the corpora—which is the plural of corpus—that we need in order to make AI algorithms work. In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
This, to me, is where it becomes serious. Everything up to now, you can say, "Well, look, if people want to have an algorithm tell them who to date, is that any stupider than how we decided who to sleep with when we were young, before the Internet was working?" Probably not, because we were pretty stupid back then, so I doubt it could have that much negative consequence.
This is all of a sudden a pretty big deal. If you talk to translators, they're facing a predicament very similar to that of some of the other populations hit early by the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, and their long-term prospects. They're seeing certain opportunities for continuing, particularly in real-time translation… but I should point out that's going away soon too. We're going to have real-time translation on Skype soon.
The thing is, they're still needed. There's an impulse, a correct impulse, to be skeptical when somebody bemoans what's been lost because of new technology. For the usual thought experiments that come up, a common point of reference is the buggy whip: You might say, "Well, you wouldn't want to preserve the buggy whip industry."
But translators are not buggy whips, because they're still needed for the big data scheme to work. They're the opposite of a buggy whip. What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable. Big data systems are useful. There should be more and more of them. If that's going to mean more and more people not being paid for their actual contributions, then we have a problem.
The usual counterargument to that is that they are being paid in the sense that they too benefit from all the free stuff and reduced-cost stuff that comes out of the system. I don't buy that argument, because you need formal economic benefit to have a civilization, not just informal economic benefit. The difference between a slum and a city is whether everybody gets by on day-to-day informal benefits or on real formal benefits.
The difference between formal and informal has to do with whether it's strictly real-time or not. If you're living on informal benefits and you're a musician, you have to play a gig every day. If you get sick, or your kid gets sick, or whatever, and you can't do it, suddenly you don't get paid that day. Everything's real-time. If we were all perfect, immortal robots, that would be fine. As real people, we can't do it, so informal benefits aren't enough. And that's precisely why things like employment, savings, real estate, and ownership of property were invented: to acknowledge the fragility of the human condition. That's what made civilization.
If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates the series of problems I've just gone over. It leads to acceptance of bad user interfaces, where you can't tell whether you're being manipulated and everything is ambiguous. It creates incompetence, because you don't know whether recommendations reflect anything real or are just self-fulfilling prophecies from a manipulative system that has spun off on its own. And it does economic harm, because it gradually pulls formal economic benefit away from the people who supply the data that makes the scheme work.
For all those reasons, the mythology is the problem, not the algorithms. To back up again, I've given two reasons why the mythology of AI is stupid, even if the actual stuff is great. The first one is that it results in periodic disappointments that damage careers and startups; it's a ridiculous, seasonal devastation that we shouldn't be randomly imposing on people according to when they happen to hit the cycle. That's the AI winter problem. The second one is that it imposes unnecessary harm on society from technologies that are useful and good. The mythology brings the problems, not the technology.
Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything. Here I want to suggest a simple thought experiment of my own. There are so many technologies I could use for this, but just for a random one, let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
I'm going to give you two scenarios. In one scenario, there's suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There are so many of them that it's hard to find them all and shut them down, and more keep being made. That's one scenario; it's a pretty ugly scenario.
There's another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?
The truth is that the part that causes the problem is the actuator. It's the interface to physicality. It's the fact that there's this little killer drone thing that's coming around. It's not so much whether it's a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there's enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.
This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level, is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to keep little killer drones from proliferating. And maybe that problem will never arise anyway. What we don't have to worry about is the AI algorithm running them, because that's speculative. There isn't an AI algorithm good enough to do that for the time being. An equivalent problem can come about whether or not the AI algorithm happens. In a sense, it's a massive misdirection.
This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
I haven't gone through the whole litany of reasons that the mythology of AI does damage. There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don't know how most kinds of thoughts are represented in the brain. We're starting to understand a little bit about some narrow things. That doesn't mean we never will, but we have to be honest about what we understand in the present.
A retort to that caution is that there's some exponential increase in our understanding, so we can predict that we'll understand everything soon. To me, that's crazy, because we don't know what the goal is. We don't know what the scale of achieving the goal would be... So to say, "Well, just because I'm accelerating, I know I'll reach my goal soon," is absurd if you don't know the basic geography which you're traversing. As impressive as your acceleration might be, reality can also be impressive in the obstacles and the challenges it puts up. We just have no idea.
This is something I've called, in the past, "premature mystery reduction," and it's a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much within the neuroscience field itself, but it comes out of the computer world, and the computer world has so much money and influence that the attitude starts to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that's very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.
There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.
To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world. All of the damages are essentially mirror images of old damages that religion has brought to science in the past.
There's an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood… If it were to come into existence, it would soon gain all power, supreme power, and exceed people.
The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there's this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world, and you should be in terrified awe of it.
That particular idea has been dysfunctional in human history. It's dysfunctional now, in distorting our relationship to our technology. It's been dysfunctional in the past in exactly the same way. Only the words have changed.
In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," but, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.
There is an incredibly retrograde quality to the mythology of AI. I know I said it already, but I just have to repeat that this is not a criticism of the particular algorithms. To me, what would be ridiculous is for somebody to say, "Oh, you mustn't study deep learning networks," or "you mustn't study theorem provers," or whatever technique you're interested in. Those things are incredibly interesting and incredibly useful. It's the mythology that we have to become more self-aware of.
This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. You have to be able to tease that apart and say: this is the part that's great, and this is the part that's self-defeating. We have to do exactly the same thing with AI now.
This is a hard topic to talk about, because the accepted vocabulary undermines you at every turn. This, too, is similar to a problem with traditional religion. If I talk about AI, am I talking about the particular technical work, or the mythology that influences how we integrate that work into our world, into our society? The vocabulary we typically use doesn't give us an easy way to distinguish those things, and it becomes very confusing.
If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
Unfortunately, the standard vocabulary people use doesn't give us a good way to distinguish those two entirely different things one might be referring to. I could try to coin some phrases, but for the moment I'll just say they are two entirely different things that deserve distinct vocabularies. Once again, this vocabulary problem is retrograde and entirely characteristic of traditional religions.
Maybe it's worse today, because in the old days we at least had the distinction between, say, ethics and morality, where you could talk about two similar things, one a little more engaged with the mythology of religion and one a little less. We don't quite have that yet for our new technical world, and we certainly need it.
Having said all this, I'll mention one other similarity, which is that just because a mythology has a ridiculous quality that can undermine people in many cases doesn't mean that the people who adhere to it are necessarily unsympathetic or bad people. A lot of them are great. In the religious world, there are lots of people I love. We have a cool Pope now, there are a lot of cool rabbis in the world. A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
In the same way, if you think of the people who are the most successful in the new economy in this digital world—I'm probably one of them; it's been great to me—they're, in general, great. I like the people who've done well in the cloud computer economy. They're cool. But that doesn't detract from all of the things I just said.
That does create yet another layer of potential confusion, another distinction that becomes tedious to state over and over again, but it's important to say.