AI THAT EVOLVES IN THE WILD
GEORGE DYSON: I’m not a scientist. I’ve never done science. I dropped out of high school. But I tell stories. Ian tells stories that can take us into the future wherever he wants to go, and I go into the past and find the stories that people forgot.
Alison Gopnik said that nobody reads past that one sentence in Turing’s 1950 paper. They also never read past his 1936 paper to his 1939 “Systems of Logic Based on Ordinals,” which is much more interesting. It’s about non-deterministic computers: not the universal Turing machine, but the second machine, the one he wrote his thesis on at Princeton, the oracle machine—a non-deterministic machine. By then he had already realized that the deterministic machines were not that interesting; it was the non-deterministic machines that were interesting. Similarly, we talk about the von Neumann architecture, but von Neumann holds only one patent, and that patent is for a non-von Neumann architecture. It’s for a neuromorphic computer that can do anything, and he explains that, because to get a patent you have to show what it can do. And nobody reads that patent.
The measure of a good story is that it gets better as it’s repeated by other people, such as Danny’s story about the Songs of Eden and how you can look at the development of language and consciousness from the point of view of the songs themselves, these strings of language. We’re obsessed with these other minds that are going into technology, but there’s a whole other track where you could have a mind and intelligence that has no technology at all. Freeman always pointed out that the search for extraterrestrial intelligence is wrong, that really what we are looking for is extraterrestrial technology, because that is what we can see. Intelligence and technology are different things. There’s a parallel between the songs that went to the apes becoming us and the songs that went into the oceans and became whales, which have highly developed songs and are raised by their 100-year-old maternal grandmothers. Whales have no technology, but obviously they have very advanced brains, five, six, eight times the size of ours.
I’m interested not in domesticated AI—the stuff that people are trying to sell. I’m interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this one in which Stanislaw Ulam said to everybody in the room, all of them mathematicians, “What makes you so sure that mathematical logic corresponds to the way we think?” Mathematical logic is a higher-level symptom; it’s not how the brain works. All those guys knew full well that the brain was not fundamentally logical.
We’re in a transition similar to the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943, at a time of transition when the world was full of analog electronics at the end of World War II. We had built all these vacuum tubes, and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history: we have all this digital equipment, all these machines, and most of the time they’re doing nothing except waiting for the next instruction. The funny thing is that now it’s happening without anyone doing it intentionally. Back then, a very deliberate group of people said, “Let’s build digital machines.” Now, I believe, we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.
If you look at the most interesting computation being done on the Internet, most of it now is analog computing, analog in the sense of computing with continuous functions rather than discrete strings of code. The meaning is not in the sequence of bits; the meaning is relative. Von Neumann said very clearly that relative frequency was how the brain does its computing. It’s pulse-frequency coded, not digitally coded. There is no digital code.
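As a toy illustration of what pulse-frequency coding means (a sketch of mine, not anything from the talk; every name and parameter here is invented for the example), a value can be carried by how often pulses arrive rather than by any particular bit pattern, and a decoder that merely counts pulses is indifferent to jitter in the individual pulse times:

```python
# A minimal sketch of pulse-frequency (rate) coding. The value is
# carried by the pulse rate, not by the identity of any single pulse.
import random

def encode(value, window=1.0, max_rate=100.0, jitter=0.2):
    """Emit pulse times at a rate proportional to `value` in [0, 1]."""
    n_pulses = int(round(value * max_rate * window))
    period = window / max(n_pulses, 1)
    return [i * period + random.uniform(-jitter, jitter) * period
            for i in range(n_pulses)]

def decode(pulses, window=1.0, max_rate=100.0):
    """Recover the value from the relative frequency of the pulses."""
    return len(pulses) / (max_rate * window)

pulses = encode(0.42)
print(decode(pulses))  # ~0.42, even though every pulse time is jittered
```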
In mathematics there’s this deep, old problem called the continuum hypothesis. We have an infinite number of different infinities, but they divide into only two kinds: countable infinities and uncountable infinities. My analogy for that is how at the end of a conference, when you look for a t-shirt, there are only extra-small t-shirts and extra-large ones. There are no medium t-shirts. The continuum hypothesis—and there is a difference between being true and being provable—has not been proved. It says you will never find a medium-sized infinity. All the infinities belong to one side or the other.
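For the record, the formal statement being gestured at here (in standard notation, not the speaker’s) is that no set has cardinality strictly between the countable infinity and the continuum:

```latex
\aleph_0 = |\mathbb{N}|, \qquad 2^{\aleph_0} = |\mathbb{R}|, \qquad
\text{CH:}\ \ \neg\exists S \ \ \aleph_0 < |S| < 2^{\aleph_0}
```

Gödel (1940) and Cohen (1963) showed that CH can be neither disproved nor proved from the ZFC axioms, which is the sense in which truth and provability come apart.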
Two very interesting things are happening. What this means is that for an uncountable infinity—say, a line—there’s an infinite number of points between any two points, and if you cut a piece off that line, it still has the same infinity of points. That, I believe, is analogous to organisms. All organisms do their computing with continuous functions. In nature we use discrete functions for error correction in genetics, but all control systems in nature are analog. The smallest analog system has the full power of the continuum.
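The claim that any piece of the line keeps the full power of the continuum can be made exact with a standard textbook bijection (my example, not from the talk) taking the open unit interval onto the whole real line:

```latex
f : (0,1) \to \mathbb{R}, \qquad f(x) = \tan\!\left(\pi\left(x - \tfrac{1}{2}\right)\right)
```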
On the other side, you have the countable, constructible infinities. What’s interesting there is that we’re trying to prove this by doing it: we’re doing our best to create a medium-sized infinity. So, you can say, “Well, it exists. We’ve made it.” The current digital universe is growing by 30 trillion transistors per second, and that’s just on the hardware side. So we have this medium-sized infinity, but it still legally belongs to the countable infinities.
My metaphor for how I think about this is that no matter what you do in the digital world, it stays stuck on that side of the room. But there’s no prohibition against machines doing continuous computing; then they belong to the other side. We were talking about hybrid machines yesterday. That’s the interesting future: the Adam that Ian McEwan imagines is only going to happen when the machines move to the other side, the continuous side. Then they can start having the things we have. There’s no reason not to do that.
I’m going to close with an idea that’s not mine but somebody else’s. The von Neumann centennial was in 2003, and the Templeton Foundation was changing from trying to prove the existence of God to not mentioning God at all. They held a series of meetings in honor of von Neumann, one of which was on von Neumann game theory. One of the participants, a Scottish mathematician, came in and gave an absolutely beautiful proof using classical von Neumann game theory. It wasn’t a proof of the existence of God; it was a proof that if there were a God, then no matter what value function you choose, the payoff is higher if God does not reveal herself.
The message to take home is that faith is better than proof. You don’t want proof. We’re in exactly the same situation with AI. We have these meetings year after year with the same discussions, and people are waiting for proof. To me the Turing test is backwards; actually, it’s the opposite: the test of an intelligent machine is whether it’s intelligent enough not to reveal its intelligence. The same is true for AI as a whole. We’re going to keep coming back, and we need to have faith in AI. I have faith in it. I believe it exists, but we don’t want proof. It’s a game of faith.
* * * *
W. DANIEL HILLIS: George, I wonder if you’re making too much of this distinction between continuous and discrete.
G. DYSON: Oh, I’m definitely making too much of it.
HILLIS: To me there’s an engineering problem in systems, which is caused by noise. Analog systems generally deal with that problem by filtering: they accept only a restricted frequency range of signals. In some sense they disallow information from being encoded in a certain part of the frequency space. Sometimes that’s just inherent in how they’re built; sometimes it’s done by explicit filtering.
Another way of dealing with noise is disallowing certain amplitudes, which is basically how digital systems do it. That has some advantages and disadvantages. Either of them can be made to represent things to arbitrary precision, and in practice you can represent things to higher precision using the digital methods, although at great cost in power.
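Here is a minimal sketch of the two strategies being contrasted, band-limiting versus amplitude restriction; the toy signal, the noise level, and the function names are my assumptions for illustration, not anything from the discussion:

```python
# Two ways to fight channel noise: smooth it out (analog-style
# band-limiting) or snap to allowed levels (digital-style
# amplitude restriction).
import random

signal = [1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0]    # intended levels
noisy = [s + random.gauss(0, 0.15) for s in signal]  # after channel noise

def lowpass(xs, k=3):
    """Analog-style: reject out-of-band noise by moving-average smoothing."""
    return [sum(xs[max(0, i - k + 1):i + 1]) / (i - max(0, i - k + 1) + 1)
            for i in range(len(xs))]

def quantize(xs, levels=(0.0, 1.0)):
    """Digital-style: disallow amplitudes other than the permitted levels."""
    return [min(levels, key=lambda v: abs(v - x)) for x in xs]

print(quantize(noisy))  # almost always recovers the bit pattern exactly
print(lowpass(noisy))   # attenuates noise but smears edges; residue remains
```

The quantizer buys exact recovery by forbidding in-between values; the filter keeps the continuum but can only attenuate the noise, which is the tradeoff described above.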
So, it seems to me that this is just an engineering trick. There are many other things that are halfway in between, like using the discrete eigenvectors of a continuous function or something like that. There’s nothing qualitatively different. It’s an interesting engineering discussion to say, “Hey, I might do better using analog to solve these problems,” but I’m not sure there’s anything there in terms of the ability to do artificial intelligence.
NEIL GERSHENFELD: On the analog side, you can price that exactly with fluctuation dissipation: there’s an exact pricing for the thermodynamic cost of holding a tolerance in an analog signal. On the digital side, the very first digital logic soon acquired floating-point processors and then digital signal processors, processors for doing signal processing on continuous quantities within the digital domain. On the analog side, there’s a very precise tradeoff in what tolerance costs you; in fact, most of the power in your phone’s radio is spent in the receiver, not the transmitter, paying against this fluctuation.
G. DYSON: Analog machines, like nervous systems, don’t have programming. There’s no algorithm; that’s where we’re wrong. We’re so obsessed with the idea that there has to be an algorithm.
RODNEY BROOKS: Instead of worrying about whether it’s analog or digital, worry about the organization, because the way stuff is organized can put you into a different computational complexity class.
HILLIS: That’s the second sense of analog. There are two completely different senses of analog, which have nothing to do with each other.
BROOKS: He was talking about your second sense, I thought.
HILLIS: I thought he was explicit that he was talking about continuous versus discrete.
G. DYSON: Yes. I didn’t get to the other side of it, which is that nature builds very intelligent systems without any digital programming in the sense we take for granted.
HILLIS: Then there’s a second sense of analog, which is whether the computation bears an analogous structure to the thing it’s computing about. For instance, a map is an analog of the physical world. You can have continuous and discrete circuits that are analog in that sense, circuits that work by an analogy with the world.
Having the algorithm stored separately versus built inherently into the structure is yet another issue. We tend to talk about all of those together, and they get mixed up in this digital/analog distinction. I’m not sure what the interesting distinction is.
GERSHENFELD: To Rod’s point, these are ridiculous extremes. If analog is the needle on a voltmeter and digital is ones and zeroes, neither really bears on what’s interesting. Both in biology and in computers, salvation is in the details and the architecture, which opens up a really interesting space that’s not captured by either of those limits.
SETH LLOYD: Historically, this whole question was the subject of Shannon’s great book, The Mathematical Theory of Communication, where he showed exactly how, if you have analog systems, continuous systems that have noise and that are power- and bandwidth-limited, then they are effectively digital, and you can compute the number of bits that can be encoded in them. This is the book where he coined the word “bit,” which he stole from John Tukey. In some sense this is a question that was resolved gloriously in 1948.
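The quantitative form of that resolution is the channel-capacity theorem; in standard notation (mine, not the speaker’s), a channel of bandwidth B carrying signal power S against Gaussian noise power N can convey at most

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right) \ \text{bits per second,}
```

which is the precise sense in which a noisy, power- and bandwidth-limited continuous channel is effectively digital: it carries only a finite number of distinguishable bits per second.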
BROOKS: A few years ago, Carver Mead told me that the defining moment of his life was when Gordon Moore handed him a bag with thirty transistors in it.
G. DYSON: He wrote the book on analog VLSI!
HILLIS: Freeman made an engineering observation: we’ve gone overboard with this digital thing, and it’s very costly. It’s probably not the right technology for getting to the next level of performance; these things would be better done using analog. I agree with that point, that we’ve over-pushed the digital thing in our engineering. But that’s an engineering-technology point; it says nothing fundamental about computation. When you started making the analogy with the continuum hypothesis, I thought you were saying there’s some fundamental difference between these computations. That one I don’t believe.
G. DYSON: The analogy was that when you take the continuous infinity and cut it in half, you still have the full infinity. The two kinds of computing follow the same path.
HILLIS: Here’s why that’s not true: If you cut the analog signal in half, you’ve now got twice as much noise per signal.
GERSHENFELD: Fluctuation dissipation means that if you multiply how much a signal fluctuates by how much power you’re consuming, given the system you’re in, you get a constant, and so reducing the fluctuation proportionately increases the power consumption. It costs you to limit fluctuation in analog systems. They’re not continuous; it’s very expensive to bound the distribution. The people making all the devices around us live in that tradeoff, not in this naïve version of a beautiful, clean dot on the line.
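One concrete instance of that pricing (my example; none is given in the discussion) is the Johnson-Nyquist relation for a resistor R at temperature T over bandwidth \Delta f:

```latex
\langle V_{\mathrm{noise}}^2 \rangle = 4 k_B T R \,\Delta f
\quad\Longrightarrow\quad
\mathrm{SNR} = \frac{V_{\mathrm{signal}}^2 / R}{4 k_B T \,\Delta f} = \frac{P}{4 k_B T \,\Delta f},
```

so the achievable signal-to-noise ratio scales directly with the power P dissipated: reducing the relative fluctuation (in power terms) by some factor costs exactly that factor in consumption, the proportionality just stated.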
TOM GRIFFITHS: You can see a nice example of this in human languages. The way human languages are structured, there is a continuous signal coming out of our mouths, and the way we perceive it is by breaking it up into discrete parts, phonemes and so on, then building those into words, and then exploiting the combinatorics of the resulting discrete signals.
ROBERT AXELROD: Intonation is analog, right?
GRIFFITHS: Yes, but that’s layered on top of an underlying digital thing. There’s a nice experiment that was done by Simon Kirby and his colleagues where they had people play slide whistles, asked other people to reproduce the slide-whistle sounds, and then looked at what happened as those sounds were transmitted from person to person. They very quickly evolved into discrete, digital signals, with particular elements repeating and so on. The argument is that that’s essentially what happens in language evolution, too: you get this discreteness emerging as a way of dealing with a noisy continuous signal.
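A toy simulation of that dynamic (my illustration of the general mechanism, not Kirby’s actual design; the category count, noise level, and every name below are invented): each generation hears the previous signals with noise, groups them into a few categories, and reproduces the category centers, so continuous variation collapses into a small discrete repertoire.

```python
# Iterated noisy transmission: continuous signals discretize as each
# learner perceives with noise, forms categories, and reproduces them.
import random

def perceive(signals, noise=0.05):
    return [s + random.gauss(0, noise) for s in signals]

def categorize(heard, k=3, rounds=10):
    """Crude 1-D k-means standing in for the learner's categories."""
    centers = random.sample(heard, k)
    for _ in range(rounds):
        groups = {c: [] for c in centers}
        for h in heard:
            groups[min(centers, key=lambda c: abs(c - h))].append(h)
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return centers

signals = [random.random() for _ in range(30)]  # generation 0: continuous
for _ in range(10):                             # ten "generations"
    centers = categorize(perceive(signals))
    signals = [min(centers, key=lambda c: abs(c - s)) for s in signals]

print(sorted({round(s, 2) for s in signals}))   # a handful of discrete values
```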
CAROLINE JONES: I’d like to redirect the discussion to your last point, which was about faith, and ask whether you are contesting Wiener’s metaphor that John kept throwing at us, about kissing the hand that holds the whip. What exactly are you articulating here, that we should have faith in the self-organizing benignity of AI?
G. DYSON: No. Lack of proof is not proof of lack of existence. People are saying, “Oh, we don’t think there’s real AI because we don’t have proof of it.” My faith is different: I’m quite willing to believe in it without needing proof.
JONES: So, you’re advocating faith without worship.
G. DYSON: Yes. I’m just as suspicious as Norbert Wiener. In fact, I’m more suspicious than Norbert Wiener. What he was talking about was if you hand this over to the corporations, you’re in trouble.
Wiener was very preoccupied with control and control systems. Now we talk much more about intelligence and much less about control. Control is just as important, and there again is my faith in these large analog control systems, a faith that works both ways, because they’re not programmed. There is no program for an analog control system in the sense that you could change a bit here and get a different outcome. That’s the way the world works, and that’s why we’re fooling ourselves in thinking that somewhere there is a program that has control.
ALISON GOPNIK: This gets back to just how surprising it is that taking the phenomenology of verbally thinking through or calculating a process, that very high-level linguistic phenomenology, which is essentially what Turing was doing, and taking its structure, turned out to be as productive as it was for creating incredibly complex functions, whether you call them intelligence or not. That’s a remarkable fact.
A priori, if you looked at human beings, you would say, “Look, almost everything that’s going on under the hood doesn’t have these characteristics of being digitized or sequential.” Yet it turns out that treating that little tiny bit on top, the part about how we talk to one another or how we talk to ourselves, as the relevant structure creates systems that can do things like see, process vision, or create images. That’s just a remarkable, non-obvious scientific fact.
PETER GALISON: For early Wiener, in the war and just after it, his interest in control, which is crucial, was attached to a notion of purposefulness. But purpose was not purely computational as such. He thought control was the leading edge of a series of analog procedures that would substitute for various mental states, a kind of post-behaviorist behaviorism, a behavior-accessible form that would get at mental states. Old-style behaviorism would refuse any attribution of mental states, however useful they might be, but Wiener built on things that were going on in psychology in the late ’30s. He then had this way of trying to make circuits that would do something like purposefulness and to say, “This and no other is what purpose is.”
GOPNIK: There’s an interesting connection there as well, in the context in which you get human beings generating these discrete sequences. Language is an interesting example, but there’s at least an argument that language is parasitic on things like long-term planning. So, what’s the context in which you get this phenomenon of having a series of calculations, of having a series of discrete things that you’re doing?
The context is things like tool use, where you have to restrict the set of actions you’re going to perform in the service of a goal, as opposed to things like vision, which don’t seem to have that goal-directed, teleological character. If you want to go out and see things, it’s not as though you’re performing a whole set of operations in order to be able to see something, in the way that saying to yourself, “Okay, what am I going to do tomorrow? I’m going to go here and then I’m going to go there,” has that structure. So, there might be a relationship between the idea of control, the idea of teleology, and computation, at least from the perspective of what human cognition is like.