ENGINEERS' DREAMS

George Dyson [7.13.08]


Introduction by Stewart Brand

How does one come to a new understanding? The standard essay or paper makes a discursive argument, decorated with analogies, to persuade the reader to arrive at the new insight.

The same thing can be accomplished—perhaps more agreeably, perhaps more persuasively—with a piece of fiction that shows what would drive a character to come to the new understanding. Tell us a story!

This George Dyson gem couldn't find a publisher in a fiction venue because it's too technical, and technical publications (including Wired) won't run it because it's fiction. Shame on them. Edge to the rescue.

—SB

GEORGE DYSON, a historian among futurists, is the author of Baidarka; Project Orion; and Darwin Among the Machines.



ENGINEERS' DREAMS

[Note: although the following story is fiction, all quotations
have been reproduced exactly from historical documents that exist.]

Ed was old enough to remember his first transistor radio—a Zenith Royal 500—back when seven transistors attracted attention at the beach. Soon the Japanese showed up, doing better (and smaller) with six. 

By the time Ed turned 65, fifteen billion transistors per second were being produced. Now 68, he had been lured out of retirement when the bidding wars for young engineers (and between them for houses) prompted Google to begin looking for old-timers who already had seven-figure mid-peninsula roofs over their heads and did not require stock options to show up for work. Bits are bits, gates are gates, and logic is logic. A systems engineer from the 1960s was right at home in the bowels of a server farm in Mountain View. 

In 1958, fresh out of the Navy, Ed had been assigned to the System Development Corporation in Santa Monica to work on SAGE, the hemispheric air defense network that was completed just as the switch from bombers to missiles made it obsolete. Some two dozen air defense sector command centers, each based around an AN/FSQ-7 (Army-Navy Fixed Special eQuipment) computer built by IBM, were housed in windowless buildings armored by six feet of blast-resistant concrete. Fifty-eight thousand vacuum tubes, 170,000 diodes, 3,000 miles of wiring, 500 tons of air-conditioning equipment and a 3,000-kilowatt power supply were divided between two identical processors, one running the active system and the other standing by as a “warm” backup, running diagnostic routines. One hundred Air Force officers and personnel were stationed at each command center, trained to follow a pre-rehearsed game plan in the event of enemy attack. Artificial intelligence? The sooner the better, Ed hoped. Only the collective intelligence of computers could save us from the weapons they had helped us to invent.

In 1960, Ed attended a series of meetings with Julian Bigelow, the legendary engineer who had collaborated with Norbert Wiener on anti-aircraft fire control during World War II and with John von Neumann afterwards—developing the first 32 x 32 x 40-bit matrix of random-access memory and the logical architecture that has descended to all computers since. Random-access memory gave machines access to numbers—and gave numbers access to machines. 

Bigelow was visiting at RAND and UCLA, where von Neumann (preceded by engineers Gerald Estrin, Jack Rosenberg, and Willis Ware) had been planning to build a new laboratory before cancer brought his trajectory to a halt. Copies of the machine they had built together in Princeton had proliferated as explosively as the Monte Carlo simulations of chain-reacting neutrons hosted by the original 5-kilobyte prototype in 1951. Bigelow, who never expected the design compromises he made in 1946 to survive for sixty years, questioned the central dogma of digital computing: that without programmers, computers cannot compute. He viewed processors as organisms that digest code and produce results, consuming instructions so fast that iterative, recursive processes are the only way that humans are able to generate instructions fast enough to keep up. "Highly recursive, conditional and repetitive routines are used because they are notationally efficient (but not necessarily unique) as descriptions of underlying processes," he explained. Strictly sequential processing and strictly numerical addressing impose severe restrictions on the abilities of computers, and Bigelow speculated from the very beginning about "the possibility of causing various elementary pieces of information situated in the cells of a large array (say, of memory) to enter into a computation process without explicitly generating a coordinate address in 'machine-space' for selecting them out of the array."

At Google, Bigelow's vision was being brought to life. The von Neumann universe was becoming a non-von Neumann universe. Turing machines were being assembled into something that was not a Turing machine. In biology, the instructions say "Do this with that" (without specifying where or when the next available copy of a particular molecule is expected to be found) or "Connect this to that" (without specifying a numerical address). Technology was finally catching up. Here, at last, was the long-awaited revolt against the intolerance of the numerical address matrix and central clock cycle for error and ambiguity in specifying where and when.

The advent of template-based addressing would unleash entirely new forms of digital organisms, beginning with simple and semi-autonomous coded structures, on the level of nucleotides bringing amino acids (or template-based AdWords revenue) back to a collective nest. The search for answers to questions of interest to human beings was only one step along the way. 
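A toy contrast makes Bigelow's distinction concrete. Nothing below comes from the story; the data, the function, and every name are invented purely for illustration, in Python:

# Coordinate addressing vs. template-based addressing (illustrative only).
memory = [
    {"kind": "query",  "topic": "sage",  "payload": "how did SAGE work?"},
    {"kind": "answer", "topic": "sage",  "payload": "duplexed AN/FSQ-7s"},
    {"kind": "answer", "topic": "fiber", "payload": "dark fiber, Mach 9"},
]

# The von Neumann way: generate a coordinate address in "machine-space".
cell = memory[1]

# The template way: a fragment of content selects matching cells out of
# the array, and no numerical address is ever generated.
def match(template):
    return [c for c in memory if all(c.get(k) == v for k, v in template.items())]

answers_about_sage = match({"kind": "answer", "topic": "sage"})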

Google was inverting the von Neumann matrix—by coaxing the matrix into inverting itself. Von Neumann's "Numerical Inverting of Matrices of High Order," published (with Herman Goldstine) in 1947, confirmed his ambition to build a machine that could invert matrices of non-trivial size. A 1950 postscript, "Matrix Inversion by a Monte Carlo Method," describes how a statistical, random-walk procedure credited to von Neumann and Stan Ulam "can be used to invert a class of n-th order matrices with only n² arithmetic operations in addition to the scanning and discriminating required to play the solitaire game." The aggregate of all our searches for unpredictable (but meaningful) strings of bits is, in effect, a Monte Carlo process for inverting the matrix that constitutes the World Wide Web.
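The postscript itself is not reproduced here, but the flavor of the random-walk scheme can be sketched. The Python below (function name and parameters are mine, not von Neumann's notation) estimates one component of x = (I - A)^-1 b by walks that survive each step with fixed probability and carry importance weights; it assumes the Neumann series I + A + A^2 + ... converges:

import numpy as np

def neumann_ulam_entry(A, b, i, n_walks=200_000, p_stop=0.5, seed=0):
    # Monte Carlo estimate of x[i], where x solves x = A @ x + b,
    # i.e. x = inverse(I - A) @ b. Valid when the spectral radius
    # of A is below 1, so the Neumann series converges.
    rng = np.random.default_rng(seed)
    n = len(b)
    total = 0.0
    for _ in range(n_walks):
        state, weight = i, 1.0
        score = b[state]
        while rng.random() > p_stop:          # survive the "solitaire game"
            nxt = int(rng.integers(n))        # uniform transition, prob 1/n
            weight *= A[state, nxt] * n / (1 - p_stop)  # importance correction
            state = nxt
            score += weight * b[state]
        total += score
    return total / n_walks

A = np.array([[0.2, 0.1], [0.3, 0.1]])
b = np.array([1.0, 2.0])
print(neumann_ulam_entry(A, b, 0))            # Monte Carlo estimate
print(np.linalg.solve(np.eye(2) - A, b)[0])   # exact answer, for comparison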

Ed developed a rapport with the machines that escaped those who had never felt the warmth of a vacuum tube or the texture of a core memory plane. Within three months he was not only troubleshooting the misbehavior of individual data centers, but examining how the archipelago of data centers cooperated—and competed—on a global scale.

In the digital universe that greeted the launch of Google, 99 percent of processing cycles were going to waste. The global computer, for all its powers, was perhaps the least efficient machine that humans had ever built. There was a thin veneer of instructions, and then there was this dark, empty 99 percent.

What brought Ed to the attention of Google was that he had been in on something referred to as "Mach 9." In the late 1990s, a web of optical fiber had engulfed the world. At peak deployment, in 2000, fiber was being rolled out, globally, at 7,000 miles per hour, or nine times the speed of sound. Mach 9. All the people in the world, talking at once, could never light up all this fiber. But those 15 billion transistors being added every second could. Google had been buying up dark fiber at pennies on the dollar and bringing in those, like Ed, who understood the high-speed optical switching required to connect dark processors to dark fiber. Metazoan codes would do the rest.

As he surveyed the Google Archipelago, Ed was reminded of some handwritten notes that Julian Bigelow had shown him, summarizing a conversation between Stan Ulam and John von Neumann on a bench in Central Park in early November 1952. Ulam and von Neumann had met in secret to discuss the 10-megaton Mike shot, whose detonation at Eniwetok on November 1 would be kept embargoed from the public until 1953. Mike ushered in not only the age of thermonuclear weapons but the age of digital computers, confirming the inaugural calculation that had run on the Princeton machine for a full six weeks. The conversation soon turned from the end of one world to the dawning of the next.

"Given is an actually infinite system of points (the actual infinity is worth stressing because nothing will make sense on a finite no matter how large model)," noted Ulam, who then sketched how he and von Neumann had hypothesized the evolution of Turing-complete universal cellular automata within a digital universe of communicating memory cells. For von Neumann to remain interested, the definitions had to be mathematically precise: “A ‘universal’ automaton is a finite system which given an arbitrary logical proposition in form of (a linear set L) tape attached to it, at say specified points, will produce the true or false answer. (Universal ought to have relative sense: with reference to a class of problems it can decide). The ‘arbitrary’ means really in a class of propositions like Turing's—or smaller or bigger.”

“An organism (any reason to be afraid of this term yet?) is a universal automaton which produces other automata like it in space which is inert or only ‘randomly activated’ around it,” Ulam’s notes continued. “This ‘universality’ is probably necessary to organize or resist organization by other automata?” he asked, parenthetically, before outlining a mathematical formulation of the evolution of such organisms into metazoan forms. In the end he acknowledged that a stochastic, rather than deterministic, model might have to be invoked, which, “unfortunately, would have to involve an enormous amount of probabilistic superstructure to the outlined theory. I think it should probably be omitted unless it involves the crux of the generation and evolution problem—which it might?”

The universal machines now proliferating fastest in the digital universe are virtual machines—not simply Turing machines, but Turing-Ulam machines. They exist as precisely defined entities in the von Neumann universe, but have no fixed existence in ours. Sure, thought Ed, they are merely doing the low-level digital housekeeping that does not require dedicated physical machines. But Ed knew this was the beginning of something big. Google (both directly and indirectly) was breeding huge numbers of Turing-Ulam machines. They were proliferating so fast that real machines were having trouble keeping up.

Only one third of a search engine is devoted to fulfilling search requests. The other two thirds are divided between crawling (sending a host of single-minded digital organisms out to gather information) and indexing (building data structures from the results). Ed's job was to balance the resulting loads.

When Ed examined the traffic, he realized that Google was doing more than mapping the digital universe. Google doesn't merely link or point to data. It moves data around. Data that are associated frequently by search requests are locally replicated—establishing physical proximity, in the real universe, that is manifested computationally as proximity in time. Google was more than a map. Google was becoming something else. 
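The story asserts the effect, not the mechanism. One hypothetical way such replication could be driven, sketched here with invented names, is to count co-accesses and co-locate the pairs that recur:

from collections import Counter
from itertools import combinations

co_access = Counter()   # (object, object) -> times fetched by one request

def observe(request_objects):
    # Tally every pair of objects touched by the same search request.
    for pair in combinations(sorted(request_objects), 2):
        co_access[pair] += 1

def replication_candidates(threshold):
    # Pairs associated often enough to be worth replicating side by side,
    # turning associative proximity into physical (and temporal) proximity.
    return [pair for pair, n in co_access.items() if n >= threshold]

observe({"doc:sage", "doc:q7"})
observe({"doc:sage", "doc:q7"})
observe({"doc:sage", "doc:fiber"})
print(replication_candidates(2))   # [('doc:q7', 'doc:sage')]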

In the seclusion of the server room, Ed's thoughts drifted back to the second-floor communications center that linked the two hemispheres of SAGE's AN/FSQ-7 brain. "Are you awake? Yes, now go back to sleep!" was repeated over and over, just to verify that the system was on the alert.

SAGE's one million lines of code were near the limit of a system whose behavior could be predicted from one cycle to the next. Ed was reminded of cybernetician W. Ross Ashby's "Law of Requisite Variety": that any effective control system has to be as complex as the system it controls. This was the paradox of artificial intelligence: any system simple enough to be understandable will not be complicated enough to behave intelligently; and any system complicated enough to behave intelligently will not be simple enough to understand. Some held out hope that the path to artificial intelligence could be found through the human brain: trace the pattern of connections into a large enough computer, and you would end up re-creating mind. 
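In the information-theoretic form the law is usually given (the story only paraphrases it), with D the disturbances, R the regulator, and E the outcomes that escape correction:

\[
H(E) \;\ge\; H(D) - H(R)
\]

Only variety in the regulator can absorb variety in the disturbance; a controller can never be simpler than what it controls without leaving some disturbance unregulated.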

Alan Turing's suggestion, to build a disorganized machine with the curiosity of a child, made more sense. Eventually, "interference would no longer be necessary, and the machine would have ‘grown up’." This was Google's approach. Harvest all the data in the world, rendering all available answers accessible to all possible questions, and then reinforce the meaningful associations while letting the meaningless ones die out. Since, by diagonal argument in the scale of possible infinities, there will always be more questions than answers, it is better to start by collecting the answers, and then find the questions, rather than the other way around. 
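As a toy rendering of that strategy (invented here; nothing in the story says Google works this way), reinforcement with slow decay is enough to let meaningful associations accumulate while meaningless ones die out:

associations = {}   # (question, answer) -> weight

def reinforce(question, answer, reward=1.0):
    # Strengthen an association every time it proves meaningful.
    key = (question, answer)
    associations[key] = associations.get(key, 0.0) + reward

def decay(rate=0.01, floor=1e-6):
    # Let every association fade; the meaningless ones die out.
    for key in list(associations):
        associations[key] *= 1.0 - rate
        if associations[key] < floor:
            del associations[key]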

And why trace the connections in the brain of one individual when you can trace the connections in the mind of the entire species at once? Are we searching Google, or is Google searching us?

Google's data centers—windowless, but without the blast protection—were the direct descendants of SAGE. It wasn't just the hum of air conditioning and warm racks of electronics that reminded Ed of 1958. The problem Ed faced was similar—how to balance the side that was awake with the side that was asleep. For SAGE, this was simple—the two hemispheres were on opposite sides of the same building—whereas Google's hemispheres were unevenly distributed from moment to moment throughout a network that spanned the globe. 

Nobody understood this, not even Ed. The connections between data centers were so adaptable that you could not predict, from one moment to the next, whether a given part of the Googleverse was "asleep" or "awake." More computation was occurring while "asleep," since the system was free to run at its own self-optimizing pace rather than wait for outside search requests.

Unstable oscillations had begun appearing, occasionally triggering overload alerts. Responding to the alarms, Ed finally did what any engineer of his generation would do: he went home, got a good night's sleep, and brought his old Tektronix oscilloscope with him to work.

He descended into one of the basement switching centers and started tapping into the switching nodes. In the digital age, everything had gone to megahertz, and now gigahertz, and the analog oscilloscope had been left behind. But if you had an odd waveform that needed puzzling over, this was the tool to use.

What if analog was not really over? What if the digital matrix had now become the substrate upon which new, analog structures were starting to grow? Pulse-frequency coding, whether in a nervous system or a probabilistic search engine, is based on statistical accounting for what connects where, and how frequently connections are made between given points. PageRank for neurons is one way to describe the working architecture of the brain. As von Neumann explained in 1948: "A new, essentially logical, theory is called for in order to understand high-complication automata and, in particular, the central nervous system. It may be, however, that in this process logic will have to undergo a pseudomorphosis to neurology to a much greater extent than the reverse." Ulam had summed it up: “What makes you so sure that mathematical logic corresponds to the way we think?”
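"PageRank for neurons" can be made concrete with the standard power iteration, applied here to a toy connection graph (the graph is invented; the algorithm is the textbook one):

import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-10):
    # Stationary visit frequency of a random walker on the connection
    # graph: for neurons, how often activity flows through each cell.
    n = adjacency.shape[0]
    out = adjacency.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0                  # guard against division by zero
    P = adjacency / out                  # row-stochastic transition matrix
    rank = np.full(n, 1.0 / n)
    while True:
        new = (1 - damping) / n + damping * rank @ P
        if np.abs(new - rank).sum() < tol:
            return new
        rank = new

# A four-"neuron" toy graph: entry [i, j] = 1 if i connects to j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(pagerank(A))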

As Ed traced the low-frequency harmonic oscillations that reverberated below the digital horizon, he lost track of time. He realized he was hungry and made his way upstairs. The oscilloscope traces had left ghosts in his vision, like the image that lingers for a few seconds when a cathode-ray tube is shut down. As he sat down to a bowl of noodles in the cafeteria, he realized that he had seen these 13-hertz cycles, clear off the scale of anything in the digital world, before. 

It was 1965, and he had been assigned, under a contract with Stanford University, to physiologist William C. Dement, who was setting up a lab to do sleep research. Dement, who had been in on the discovery of what became known as REM sleep, was investigating newborn infants, who spend much of their time in dreaming sleep. Dement hypothesized that dreaming was an essential step in the initialization of the brain. Eventually, if all goes well, awareness of reality evolves from the internal dream—a state we periodically return to during sleep. Ed had helped with setting up Dement's lab, and had spent many late nights getting the electroencephalographs fine-tuned. He had lost track of Bill Dement over the years. But he remembered the title of the article in SCIENCE that Dement had sent to him, inscribed "to Ed, with thanks from Bill." It was "Ontogenetic Development of the Human Sleep-Dream Cycle. The prime role of ‘dreaming sleep’ in early life may be in the development of the central nervous system."

Ed cleared his tray and walked outside. In a few minutes he was at the edge of the Google campus, and kept walking, in the dark, towards Moffett Field. He tried not to think. As he started walking back, into the pre-dawn twilight, with the hint of a breeze bringing the scent of the unseen salt marshes to the east, he looked up at the sky, trying to clear the details of the network traffic logs and the oscilloscope traces from his mind. 

For 400 years, we have been waiting for machines to begin to think. 

"We've been asking the wrong question," he whispered under his breath. 

They would start to dream first.