Quantum Monkeys


Seth Lloyd [5.23.06]

 

The image of monkeys typing on typewriters is quite old. . . . Some people ascribe it to Thomas Huxley in his debate with Bishop Wilberforce in 1860, after the appearance of The Origin of Species. From eyewitness reports of that debate it is clear that Wilberforce asked Huxley from which side of his family, his mother's or his father's, he was descended from an ape. Huxley said, "I would rather be descended from a humble ape than from a great gentleman who uses considerable intellectual gifts in the service of falsehood." A woman in the audience fainted when he said that. They didn't have R-rated movies back then.

Seth Lloyd is an Edgy guy. In fact he likes to work "at the very edge of this information processing revolution". He appeared at the Edge event in honor of Robert Trivers at Harvard and talked from his "experience in building quantum computers, computers where you store bits of information on individual atoms."

Ten years ago Lloyd came up with "the first method for physically constructing a computer in which every quantum—every atom, electron, and photon—inside a system stores and processes information. ... During this meeting, Craig Venter claimed that we're all so theoretical here that we've never seen actual data. I take that personally, because most of what I do on a day-to-day basis is to try to coax little super-conducting circuits to give up their secrets." Below is his talk, along with his discussion with Steven Pinker, Martin Nowak, J. Craig Venter, Lee Smolin, and Alan Guth.

SETH LLOYD is Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. His seminal work in the fields of quantum computation and quantum communications includes proposing the first technologically feasible design for a quantum computer.

He is the author of the recently published Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos.

SETH LLOYD'S Edge Bio Page

The Reality Club: Steven Pinker, Martin Nowak, J. Craig Venter, Lee Smolin, Alan Guth, Rudy Rucker


QUANTUM MONKEYS

[SETH LLOYD:] It's no secret that we're in the middle of an information-processing revolution. Electronic and optical methods of storing, processing, and communicating information have advanced exponentially over the last half-century. In the case of computational power this rapid advance is known as Moore's Law. In the 1960s, Gordon Moore, the ex-president of Intel, pointed out that the components of computers were halving in size every year or two, and consequently, the power of computers was doubling at the same rate. Moore's Law has continued to hold to the present day. As a result these machines that we make, these human artifacts, are on the verge of becoming more powerful than human beings themselves in terms of raw information processing power. If you count the elementary computational events that occur in the brain or in the computer—bits flipping, synapses firing—the computer is likely to overtake the brain in terms of bits flipped per second in the next couple of decades.
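To make the timing claim concrete, here is a back-of-envelope sketch in Python. All of the numbers are rough, assumed round figures (they are not from Lloyd's talk), but they show why "a couple of decades" is a plausible guess under Moore's-law doubling.

```python
import math

# Rough, assumed figures for illustration only -- not numbers from the talk.
NEURONS = 1e11              # assumed neurons in a human brain
SYNAPSES_PER_NEURON = 1e3   # assumed synapses per neuron
FIRING_RATE_HZ = 100        # assumed average firing rate

brain_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ  # ~1e16

computer_bit_ops_per_sec = 1e12   # assumed for a mid-2000s machine
doubling_time_years = 1.5         # assumed Moore's-law doubling period

doublings = math.log2(brain_events_per_sec / computer_bit_ops_per_sec)
print(f"brain: ~{brain_events_per_sec:.0e} elementary events per second")
print(f"computer catches up in roughly {doublings * doubling_time_years:.0f} years")
```

With these assumptions the gap is about 13 doublings, i.e. roughly twenty years; changing any input by an order of magnitude shifts the answer by only a few years.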

We shouldn't be too concerned, though. For computers to become smarter than us is not really a hardware problem; it's more a software issue. Software evolves much more slowly than hardware, and indeed much current software seems to be designed to junk up the beautiful machines that we build. The situation is like the Cambrian explosion: a rapid increase in the power of hardware. The question of who is smarter, humans or computers, will get sorted out some million years hence, maybe; maybe sooner. My guess would be that it will take hundreds or thousands of years until we actually get software that we could reasonably regard as useful and sophisticated. At the same time, we're going to have computing machines that are much more powerful quite soon.

Most of what I do in my everyday life is to work at the very edge of this information processing revolution. Much of what I say to you today comes from my experience in building quantum computers, computers where you store bits of information on individual atoms. About ten years ago I came up with the first method for physically constructing a computer in which every quantum—every atom, electron, and photon—inside a system stores and processes information. Over the last ten years I've been lucky enough to work with some of the world's great experimental physicists and quantum mechanical engineers to actually build such devices. A lot of what I'm going to tell you today is informed by my experiences in making these quantum computers. During this meeting, Craig Venter claimed that we're all so theoretical here that we've never seen actual data. I take that personally, because most of what I do on a day-to-day basis is to try to coax little super-conducting circuits to give up their secrets.

The digital information-processing revolution is only the most recent revolution, and it's by no means the greatest one. For instance, the invention of moveable type and the printing press has had a much greater impact on human society so far than the electronic revolution. There have been many information processing revolutions. One of my favorites is the invention of the so-called Arabic—actually Babylonian—numbers, in particular, zero. This amazing invention, very useful in terms of processing and registering information, came from the ancient Babylonians and then moved to India. It came to us through the Arabs, which is why we call it the Arabic number system. The invention of zero allows us to write the number 10 as one zero. This apparently tiny step is in fact an incredible invention that has given rise to all sorts of mathematics, including the bits—the "binary digits"—of the digital computing revolution.

Another information processing revolution is the invention of written language. It's hard to argue that written language is not an information-processing revolution of the first magnitude. Another of my favorites is the first sexual revolution; that is, the discovery of sex by a living organism. One of the problems with life is that if you don't have sex, then the primary means of evolution is via mutation. Almost 99.9% of mutations are bad. Being from a mechanical engineering department, I would say that when you evolve only by mutation, you have an engineering conflict: your mechanism for evolution happens to have all sorts of negative effects. In particular, the two prerequisites for life—evolve, but maintain the integrity of the genome—collide. This is what's called a coupled design, and that's bad. However, if you have sexual reproduction, then you can combine genes from different genomes and get lots of variation without, in principle, ever having to have a mutation. Of course, you still have mutations, but you get a huge amount of variation for free.
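As a hedged illustration of the "variation for free" point, consider the human case: with 23 chromosome pairs, independent assortment alone, before any crossing-over or mutation, already yields an enormous number of genetically distinct offspring per couple. The sketch below just does that arithmetic; the crossing-over remark in the final line is an additional standard fact, not a figure from the talk.

```python
import math

# Variation from sexual recombination alone, before any mutation.
chromosome_pairs = 23                              # human chromosome pairs
gametes_per_parent = 2 ** chromosome_pairs         # independent assortment alone
offspring_combinations = gametes_per_parent ** 2   # two parents

print(f"possible gametes per parent: {gametes_per_parent:,}")           # ~8.4 million
print(f"offspring genotypes per couple: {offspring_combinations:.2e}")  # ~7e13
print(f"~{math.log2(offspring_combinations):.0f} bits of variation per child,")
print("before counting crossing-over, which adds dozens more bits per generation.")
```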

I wrote a paper a few years ago that compared the evolutionary power of human beings to that of bacteria. The point of comparison was the number of bits per second of new genetic combinations that a population of human beings generated, compared with the number generated by a culture of bacteria. A culture of bacteria in a swimming pool of seawater has about a trillion bacteria, reproducing once every thirty minutes. Compare this with the genetic power of a small town with a few thousand people in New England—say Peyton Place—reproducing every thirty years. Despite the huge difference in population, Peyton Place can generate as many new genetic combinations as the culture of bacteria a billion times more numerous. This assumes that the bacteria are generating new combinations only via mutation, which of course is not strictly true, but for this purpose we will not discuss bacteria having sex. In daytime TV the sexual recombination and selection happens much faster, of course.

Sexual reproduction is a great revolution. Then of course, there's the grandmother or granddaddy of all information processing revolutions, life itself. The discovery, however it came about, that information can be stored and processed genetically and that this could be used to encode functions inside an organism that can reproduce is an incredible revolution. It happened roughly four billion years ago on Earth, maybe earlier if one believes that life developed elsewhere and then was transported here. At any rate, since the universe is only 13.8 billion years old, it happened sometime in the last 13.8 billion years.

We forgot to talk about the human brain (or should I say, my brain forgot to talk about the brain?). There are many information-processing revolutions, and I'm presumably leaving out many thousands that we don't even know about, but which were equally important as the ones we've discussed.

To pull a Kuhnian maneuver, the main thing that I'd like to point out about these information processing revolutions is that each one arises out of the technology of the previous one. Electronic information processing, for instance, comes out of the notion of written language, of having zeroes and ones, the idea that you can make machines to copy and transmit information. A printing press is not so useful without written language. Without spoken language, you wouldn't come up with written language. It's hard to speak if you don't have a brain. And what are brains for but to help you have sex? You can't have sex without life. Music came from the ability to make sound, and the ability to make sound evolved for the purpose of having sex. You either need vocal cords to sing with or sticks to beat on a drum with. To make sound, you need a physical object. Every information processing revolution requires either living systems, electromechanical systems, or mechanical systems. For every information processing revolution, there is a technology.

OK, so life is the big one, the mother of all information processing revolutions. But what revolution occurred that allowed life to exist? I would claim that, in fact, all information processing revolutions have their origin in the intrinsic computational nature of the universe. The first information processing revolution was the Big Bang. Information processing revolutions come into existence because at some level the universe is constructed of information. It is made out of bits.

Of course, the universe is also made out of elementary particles, unknown dark energy, and lots of other things. I'm not advocating that we junk our normal picture of the universe as being constructed out of quarks, electrons, and protons. But in fact it's been known, ever since the latter part of the 19th century, that every elementary particle, every photon, every electron, registers a certain number of bits of information. Whenever two elementary particles bounce off of each other, those bits flip. The universe computes.

The notion that the universe is, at bottom, processing information sounds like some radical idea. In fact, it's an old discovery, dating back to Maxwell, Boltzmann and Gibbs, the physicists who developed statistical mechanics from 1860 to 1900. They showed that, in fact, the universe is fundamentally about information. They, of course, called this information entropy, but if you look at their scientific discoveries through the lens of twentieth century technology, what in fact they discovered was that entropy is the number of bits of information registered by atoms. So in fact, it's scientifically uncontroversial that the universe at bottom is processing information. My claim is that this intrinsic ability of the universe to register and process information is actually responsible for all the subsequent information processing revolutions.

How do we think of information these days? The contemporary scientific view of information is based on the theories of Claude Shannon. When Shannon came up with his fundamental formula for information he went to the physicist and polymath John von Neumann and said, "What shall I call this?" and von Neumann said, "You'll call it H, because that's what Boltzmann called it," referring to Boltzmann's famous H Theorem. The founders of information theory were very well aware that the formulas they were using had been developed back in the 19th century to describe the motions of atoms. When Shannon talked about the number of bits in a signal that can be sent down a communications channel, he was using the same formulas to describe it that Maxwell and Boltzmann used to describe the amount of information, or the entropy, required to describe the positions and momenta of a set of interacting particles in a gas.
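For reference, the two formulas really do have the same shape. Written side by side (this is standard textbook material, not something stated in the talk), Shannon's information and the Gibbs/Boltzmann entropy differ only in units:

```latex
H = -\sum_i p_i \log_2 p_i \quad\text{(bits)}, \qquad
S = -k_B \sum_i p_i \ln p_i \quad\text{(J/K)}, \qquad
S = (k_B \ln 2)\, H .
```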

What is a bit of information? Let's get down to the question of what information is. When you buy a computer you ask how many bits its memory can register. A bit comes from a distinction between two different possibilities. In a computer a bit is a little electric switch, which can be open or closed; or it's a capacitor that can be charged, which is called 1, or uncharged, which is called 0. Anything that has two distinct states registers a bit of information. At the elementary particle level a proton can have two distinct states: spin up or spin down. Each proton registers one bit of information. In fact, the proton registers a bit whether it wants to or not, or whether this information is interpreted or not. It registers a bit merely by the fact of existing. A proton possesses two different states and so registers a bit.

We exploit the intrinsic information processing ability of atoms when building quantum computers, because many of our quantum computers consist of arrays of protons interacting with their neighbors, each of which stores a bit. Each proton would be storing a bit of information whether we were asking them to flip those bits or not. Similarly, if you have a bunch of atoms zipping around, they bounce off each other. Take two helium atoms in a child's balloon. The atoms come together, and they bounce off each other, and then they move apart again. Maxwell and Boltzmann realized that there's essentially a string of bits that attach to each of these atoms to describe its position and momentum. When the atoms bounce off each other the string of bits changes because the atoms' momentum changes. When the atoms collide, their bits flip.

The number of bits registered by each atom is well known and has been quantified ever since Maxwell and Boltzmann. Each particle—for instance each of the molecules in this room—registers something on the order of 30 or 40 bits of information as it bounces around. This feature of the universe—that it registers and processes information at its most fundamental level—is scientifically uncontroversial, in the sense that it has been known for 120 years and is the accepted dogma of physics.
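The "30 or 40 bits" figure can be checked roughly from tabulated thermodynamic data. A sketch, using the standard molar entropy of nitrogen gas (the dominant component of air) as the input; this is an illustrative cross-check, not Lloyd's own calculation:

```python
import math

# Standard molar entropy of N2 at 298 K and 1 atm (tabulated value).
S_molar = 191.6      # J/(mol K)
R = 8.314            # J/(mol K), gas constant = N_A * k_B

# Entropy per molecule, converted from nats (k_B units) to bits.
bits_per_molecule = S_molar / (R * math.log(2))
print(f"~{bits_per_molecule:.0f} bits per N2 molecule")   # ~33 bits
```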

The universe computes. My claim is that this intrinsic information processing ability of the universe is responsible for the remainder of the information processing revolutions we see around us, from life up to electronic computers. Let me repeat the claim: it's a scientific fact that the universe is a big computer. More technically, the universe is a gigantic information processor that is capable of universal computation. That is the definition of a computer.

If he were here Marvin Minsky would say, "Ed Fredkin and Konrad Zuse back in the 1960s claimed that the universe was a computer, a giant cellular automaton." Konrad Zuse was the first person to build a programmable digital computer, around 1941. He and Ed Fredkin at MIT came up with this idea that the universe might be a gigantic type of computer called a cellular automaton. This is an idea that has since been developed by Stephen Wolfram. The idea that the universe is some kind of digital computer is, in fact, an old claim as well.

Thus, my claim that the universe computes is an old one, dating back at least half a century. This claim could actually be substantiated from a scientific perspective. One could prove, by looking at the basic laws of physics, that the universe is or is not a computer, and if so, what kind of computer it is. We have very good experimental evidence that the laws of physics support computation. I own a computer, and it obeys the laws of physics, whatever those laws are. We know the universe supports computation, at least on a macroscopic scale. My claim is that the universe supports computation at its most tiny scale. We know that the universe processes information at this level, and we know that at the larger level it's capable of doing universal computations and creating things like human beings. The thesis that the universe is, at bottom, a computer, is in fact an old notion. The work of Maxwell, Boltzmann, and Gibbs established the basic computational framework more than a century ago. But for some reason, the consequences of the computational nature of the universe have yet to be explored in a systematic way. What does it mean to us that the universe computes? This question is worthy of significant scientific investigation. Most of my work investigates the scientific consequences of the computational universe.

One of the primary consequences of the computational nature of the universe is that the complexity that we see around us arises in a natural way, without outside intervention. Indeed, if the universe computes, complex systems like life must necessarily arise. So describing the universe in terms of how it processes information, rather than describing it solely in terms of the interactions of elementary particles, is not some kind of empty exercise. Rather, the computational nature of the universe has dramatic consequences.

Let's be more explicit about why something that's computationally capable, like the universe, must necessarily spontaneously generate the kind of complexity that's around us. There's a famous story, "Inflexible Logic" by Russell Maloney, which appeared in The New Yorker in 1940, in which a wealthy dilettante hears the claim that if you had enough monkeys typing, they would eventually type the works of Shakespeare. Because he's got a lot of money, he assembles a team of monkeys and a professional trainer, and he has them start typing. At a cocktail party he has an argument with a Yale mathematician, who says that this is really implausible, because any calculation of the odds of this happening will show it will never happen. The gentleman invites the mathematician up to his estate in Greenwich, Connecticut, and takes him to where the monkeys have just started to write out Tom Sawyer and Love's Labour's Lost. They're doing it without a single mistake. The mathematician is so upset that he kills all the monkeys. I'm not sure what the moral of this story is.

The image of monkeys typing on typewriters is quite old. I spent a fair amount of time this summer going over the Internet and talking with various experts around the world about the origins of this story. Some people ascribe it to Thomas Huxley in his debate with Bishop Wilberforce in 1860, after the appearance of The Origin of Species. From eyewitness reports of that debate it is clear that Wilberforce asked Huxley from which side of his family, his mother's or his father's, he was descended from an ape. Huxley said, "I would rather be descended from a humble ape than from a great gentleman who uses considerable intellectual gifts in the service of falsehood." A woman in the audience fainted when he said that. They didn't have R-rated movies back then.

Although Huxley made a stirring defense of Darwin's theory of natural selection during this debate, and although he did refer to monkeys, apparently he did not talk about monkeys typing on typewriters, because for one thing typewriters as we know them had barely been invented in 1860. The erroneous attribution of the image of typing monkeys to Huxley seems to have arisen because Arthur Eddington, in 1928, speculated about monkeys typing all the books in the British Museum. Subsequently, Sir James Jeans ascribed the typing monkeys to Huxley.

In fact, it seems to have been the French mathematician Emile Borel who came up with the image of typing monkeys, in 1907. Borel was one of the founders of measure theory and the modern mathematical theory of probability. Borel imagined a million monkeys each typing ten characters a second at random. He pointed out that these monkeys could in fact produce all the books in all the richest libraries of the world. He then went on to dismiss the probability of their doing so as infinitesimally small.

It is true that the monkeys would, in fact, type gibberish. If you plug "monkeys typing" into Google, you'll find a website that will enlist your computer to emulate typing monkeys. The site lists records of how many monkey years it takes to type out the opening bits of various Shakespeare plays; the current record is 17 characters of Love's Labour's Lost, after 483 billion monkey years. Monkeys typing on typewriters generate random gobbledygook.
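The generic arithmetic behind such records goes roughly as follows. The keyboard size is an assumed value and this is not the website's exact scoring method, but it shows why each additional character multiplies the expected wait enormously.

```python
# With a K-key keyboard typed at random, a given n-character string appears on
# average about once every K**n keystrokes. Keyboard size is an assumption.
K = 30               # assumed effective number of keys
n = 17               # target length (the record mentioned above)
rate = 10            # keystrokes per second per monkey (Borel's figure)
monkeys = 1_000_000  # Borel's million monkeys

keystrokes_needed = K ** n                      # ~1.3e25 keystrokes
seconds = keystrokes_needed / (rate * monkeys)
years = seconds / 3.15e7
print(f"expected wait: ~{years:.1e} years")     # ~4e10 years, several universe ages
```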

Before Borel, Boltzmann advanced a "monkeys typing" explanation for why the universe is complex. The universe, he said, is just a big thermal fluctuation. Like the flips of a coin, the universe is in fact just random information. His colleagues soon dissuaded him from this position, because it's obviously not so. If it were, then every new bit of information you got that you hadn't received before would be random. But when our telescopes look out in space, they get new information all the time and it's not random. Far from it: the new information they gather is full of structure. Why is that?

To see why the universe is full of complex structure, imagine that the monkeys are typing into a computer, rather than a typewriter. The computer, in turn, rather than just running Microsoft Word, interprets what the monkeys type as an instruction in some suitable computer language, like Java. Now, even though the monkeys are still typing gobbledygook, something remarkable happens. The computer starts to generate complex structures.

At first this seems odd: garbage in, garbage out. But in fact, there are short, random looking computer programs that will produce very complicated structures. For example, one short, random looking program will make the computer start proving all provable mathematical theorems. A second short, random looking program will make the computer evaluate the consequences of the laws of physics. There are computer programs to do many things, and you don't need a lot of extra information to produce all sorts of complex phenomena from monkeys typing into a computer.
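A minimal, concrete instance of a short program that generates a complicated structure is an elementary cellular automaton such as Wolfram's rule 30, which Rudy Rucker also mentions in his comment below. The sketch is only a few lines long, yet its output is intricate and aperiodic; it is offered here purely as an illustration of "short program, complex output."

```python
# Rule 30 cellular automaton: start from a single cell and print the pattern.
WIDTH, STEPS, RULE = 79, 40, 30
row = [0] * WIDTH
row[WIDTH // 2] = 1                      # single seed cell

for _ in range(STEPS):
    print("".join("#" if c else " " for c in row))
    # New cell value: look up the 3-bit neighborhood in the rule number.
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```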

There's a mathematical theory called algorithmic information, which can be thought of as the theory of what happens when monkeys type into computers. This theory was developed in the early 1960s by Ray Solomonoff in Cambridge, Mass., Gregory Chaitin, who was then a 15-year-old enfant terrible at IBM, and Andrey Kolmogorov, who was a famous Russian academic mathematician. Algorithmic information theory tells you the probability of producing complex patterns from randomly programmed computers. The bottom line is that if monkeys start typing into computers, there's a very high probability that they'll produce things like the laws of chemistry, autocatalytic sets, or prebiotic kinds of life. Monkeys typing into computers provide a reasonable explanation for why we have complexity in our universe.
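The central quantity of that theory can be written compactly. In algorithmic information theory, the probability that a universal computer U fed random bits produces an output x is dominated by the length K(x) of x's shortest program (a standard statement of the theory, added here for reference):

```latex
P_U(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-|p|} \;\approx\; 2^{-K(x)},
\qquad K(x) \;=\; \min_{p \,:\, U(p) = x} |p| .
```

So any structure with a short description, however intricate it looks, has a non-negligible chance of being produced by random programming.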

Monkeys typing into a computer have a reasonable probability of producing almost any computable form of order that exists. You would not be surprised in this monkey universe to see all sorts of interesting things arising. You might not get Hamlet, because something like Hamlet requires huge sophistication and the evolution of societies, etc. But things like the laws of chemistry, or autocatalytic sets, or some kind of prebiotic form of protolife are the kinds of things that you would expect to see happen.

To apply this explanation to the origin of complexity in our universe we need two things: a computer, and monkeys. We have the computer, which is the universe itself. As was pointed out a century ago, the universe registers and processes information systematically at its most fundamental level. The machinery is there to be typed on. So all you need is monkeys. Where do you get the monkeys?

The monkeys that program our universe are supplied by the laws of quantum mechanics. Quantum mechanics is inherently chancy. You may have heard Einstein's phrase, "God does not play dice." Einstein was wrong. God does play dice. In the case of quantum mechanics, Einstein was, famously, wrong. In fact, it is just when God plays dice that these little quantum blips, or fluctuations, get programmed into our universe.

For example, Alan Guth has done work on how such quantum fluctuations form the seeds for the formation of large-scale structure in the universe. Why is our galaxy here rather than somewhere a hundred million light years away? It's here because way back in the very, very, very, very early universe there was a little quantum fluctuation that made a slight over-density of matter somewhere near here. This over-density of matter was very tiny, but it was enough to make a seed around which other matter could clump. Structures like the large-scale structure of the universe are in fact made by quantum monkeys typing.

We have all the ingredients, then, for a reasonable explanation of why the universe is complex. You don't require very complicated dynamics for the universe to compute. The computational dynamics of the universe can be very simple. Almost anything will work. The universe computes. Then, the universe is filled with little quantum monkeys, in the form of quantum fluctuations, that program it. Quantum fluctuations get processed by the intrinsic computational power of the universe and eventually give rise to the order that we see around us.


THE REALITY CLUB

Steven Pinker, Martin Nowak, J. Craig Venter, Lee Smolin, Alan Guth

SETH LLOYD: When I give talks, I am often asked for my definition of complexity.

I wrote my Ph.D. thesis partly on different ways of defining complexity. Although I have my favorites I wouldn't advocate one over the other. Basically the monkeys typing argument for the generation of complexity simply says that you don't have to have a preferred definition of complexity. Any structure or set of structures that you would regard as being complex will be produced by this mechanism.

If you insist that I define complexity, though, I can do so. Charlie Bennett proposed a good definition of complexity called logical depth, which says that a complex structure is one that requires a lot of computation to be produced from a simple program. If you take that idea, then the stuff that the universe has generated is exactly that logically deep stuff. The programs are simple, the computations have been going on for a long time. In fact, I can tell you exactly how many ops the universe has performed on how many bits: by applying the physics of computation you find that the universe has performed ten to the one hundred and twenty elementary operations (e.g., bit flips) on ten to the ninety bits. That's a lot of ops on a lot of bits. What we get as a result is logically deep stuff.
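For the curious, here is a hedged back-of-envelope along the lines of Lloyd's published estimate: the Margolus-Levitin theorem bounds the number of elementary operations per second by 2E/(πħ), and multiplying by the age of the universe lands within a few orders of magnitude of the 10^120 figure. The cosmological inputs below are rough assumed values, not the ones used in the published calculation.

```python
import math

# Rough assumed cosmological inputs.
hbar = 1.05e-34          # J s, reduced Planck constant
c    = 3.0e8             # m/s, speed of light
rho  = 9.5e-27           # kg/m^3, ~critical density (assumed)
R    = 4.4e26            # m, ~horizon radius (assumed)
age  = 4.35e17           # s, ~age of the universe

# Energy within the horizon, then the Margolus-Levitin bound on ops per second.
E = rho * (4 / 3) * math.pi * R**3 * c**2
ops_per_sec = 2 * E / (math.pi * hbar)
total_ops = ops_per_sec * age
print(f"~10^{math.log10(total_ops):.0f} elementary operations")
# Prints roughly 10^123 with these crude inputs, within a few orders of
# magnitude of the 10^120 figure quoted above.
```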

STEVEN PINKER: The claim that the universe is a computer would not have much empirical content if we could not conceive what it would mean for the universe not to be a computer. Is it worth distinguishing between things that we recognize as processing information as opposed to merely containing information? Containing information just means that you have more than one possibility.

LLOYD: So you're happy with that notion of things containing information?

STEVEN PINKER: It seems to me that a computer is more than something that contains information, because everything contains information. As an information processor a computer would seem to be special in two ways. One is that the information it processes stands for something. It has a semantics as well as a syntax. Among information-processing systems we're familiar with, written language refers to sound, sound refers to concepts, brains process information about the environment, DNA codes information about amino acids and their sequences, and so on.

Also, having put information into correspondence with something else, the information-processing system then attains some goal. That is, there are physical changes in the information-processor that by design (or its equivalent in evolved information-processors) are isomorphic with some relationship among the things that are represented, in some way that leads to some desirable outcome. In the case of computations it's solving equations; in the case of language it's communicating thoughts; in the case of the genetic code it's assembling functioning organisms, and so on. In all of those cases, where you have a well-defined semantics and a goal-directed physical process, it makes sense to talk about information processing, or in the human-made case, a computer.

But that wouldn't seem to apply to the universe. The states of all the elementary particles don't seem to stand for something else. Nor does the sequence of physical events map onto some orderly set of relationships. This would suggest that the universe is not a computer, although it contains information. Does that contradict what you're saying?

LLOYD: You've raised an important distinction. Many of the systems we regard as processing information, particularly sophisticated ones, have a notion of correspondence of a message with something else.

You seem to have a notion that computations are goal-directed. You're quite right that those kinds of features, having semantics and the notion that information corresponds to something else, are more sophisticated. I regard those as emergent features that we can only ascribe to objects like living things, or perhaps to life itself. Those emergent features are very important.

However, it is possible for a system to register information without that information having some kind of semantic meaning. If a particle's spin can contain a bit, I would argue that you can also talk about information-processing without content.

Let me make an historical point: The great advance that Shannon made in discovering information theory was discovering that quantity of information could be stripped from its semantic content. If you ask how much information can be sent on a fiberoptic cable, you can answer that question without knowing what that information is about. It could be MTV, it could be Romeo and Juliet—the number of bits per second traveling down the cable is the same. It is exactly by getting rid of the notion that semantic content is necessary to describe quantities of information that information theory and the mathematical theory of communication could arise.

STEVEN PINKER: Although Shannon did talk about information in terms of a correlation between what happens at the output end and what happened at the input end. They would have to correlate. It couldn't just be random bits at the input and random bits at the output.

LLOYD: Right, because if the bits were completely random then you would not have information. But what these bits referred to is unimportant for the quantity of information sent down the channel. Correlation is something that can be defined mathematically in the absence of any notion of what the bit means.

Still answering Steven's question, let me argue that in the same way that quantity of information is defined irrespective of semantic content, information processing is defined irrespective of semantic content or the notion that some higher order or purpose is involved. An ordinary computer is just performing simple operations on bits, flipping one bit depending on the state of another; that happens regardless of whether there's some overall purpose to that bit flip, whether greater or lesser, or of any semantic content of those bits. If I take the nuclear spin of a proton in one of our quantum computers and flip it from spin up to spin down, I have just flipped a bit, and there may be no purpose whatsoever to it.

The question of whether information is being processed, or transformed, has a physical meaning completely apart from any mission or goal that this information is being processed for. In the same way that Shannon was able to say that you can disassociate the quantity of information from semantics, we can also strip information-processing, the notion that the bits are being flipped, from the notion that this is part of some goal-oriented process.

That's the sense in which I'm using the notion of information being processed, the physical process of information being transformed. It actually doesn't have to be part of some goal-oriented process.

STEVEN PINKER: I can see that you can define information processing in that way, so that everything is information-processing, in which case I wonder what kind of statement it is that the universe is an information processor.

The question is whether it is true by virtue of being circular. What I'm doing is offering a definition of information-processing such that it's not true that everything by definition is an information-processor. That allows me to make a statement of content. Is the universe an information-processor or not? I would think that the answer would be no, it isn't. At least, there's an interesting distinction to be made between DNA, computers, and printing presses on the one hand and the entire universe on the other. If you come up with a definition of information-processing and you can't make that distinction, then it raises the question of whether it means anything to say that the universe is a computer or information-processor.

LLOYD: I think we're in agreement that the statement that the universe is an information-processor is true by virtue of itself. You could call it circular, but I'll just call it true. What I'm trying to explore here are the implications of this fact. You could use your definition of information processing, which is a human-based picture, associated with ideas of language or preconceptions about life. I would just say that this information processing is the result of bits flipping, and then out of this arose life, human beings, etc. The interesting questions concern why we get these emergent features, like information that has semantic content and means something important. I certainly don't say that all bits are created equal. All bits are equal physically — they each register one bit — but some bits are a heck of a lot more important than others. I don't want you flipping the bits in my DNA.

MARTIN NOWAK: I am interested in the physical properties of the universe which might lead us to expect the possibility of life. Is this based on computation? Are you saying that certain structures in the universe can compute something while others cannot? Does this chair here compute?

LLOYD: Certain structures are better at computing than others, but the universe as a whole has this capability. Different pieces of the universe process information in different ways. The whole point about a universal computer is that it can process information in any possible way. Some of these ways of processing information are a heck of a lot more interesting than others. If human beings present very interesting questions of semantics and content and purposeful information processing, I think that's good. That's what's interesting about human beings.

I take a very physical definition of information. If the universe is computing, we have to see what the consequences of that are. The consequence is that we get a very diverse universe, in which some parts of the computation we regard as interesting, and some not. This chair computes itself, and you wouldn't want it to stop doing that, because if you sat on it and the chair stopped computing its ability to hold you up – bang. You'd be on the floor. So that's pretty good computation too.

CRAIG VENTER: Your argument is basically that this computer is driving us toward order and, I would argue, life as a natural consequence of that. So where do decay and entropy enter into this?

LLOYD: That's really a key question. Most of the processes that you see around you, particularly in life, have used the increase of entropy as a powerful mechanism that drives pieces of the system to ordered states. There's a whole physical theory of how you can get order in some part of the system at the expense of creating disorder elsewhere. According to the second law of thermodynamics the total amount of information in the system never decreases. You can't make order here without pumping disorder out elsewhere. This may be a way of trying to discover what happened before life existed. Rather than looking for systems of genetic information, we should be looking for systems that were capable of controlling the way that you create order in one place and pump disorder to another place. What kinds of systems do that? That is actually a key part of how you create order. The process of creating order has to respect the laws of physics, and it exploits the second law of thermodynamics to create order in one place while creating disorder elsewhere.
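The bookkeeping Lloyd is describing is just the standard statement of the second law applied to a system plus its environment (textbook form, not a formula from the discussion):

```latex
\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} + \Delta S_{\text{environment}} \;\ge\; 0,
\qquad \Delta S_{\text{system}} < 0 \;\Rightarrow\; \Delta S_{\text{environment}} \ge \left|\Delta S_{\text{system}}\right| .
```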

MARTIN NOWAK: The statement that something is a universal Turing machine requires a mathematical proof. Imagine a box of ideal gas. Is that a universal Turing machine?

LLOYD: No, typically not. Not on its own. To demonstrate that something is a universal Turing machine is not a content-free statement. You can actually ask Yes or No questions. Is the universe a classical cellular automaton as was suggested by Zuse and Fredkin? The answer is almost certainly not, because classical cellular automata can't reproduce quantum mechanical effects in any efficient way. The statement that the universe is a universal computer is not a content-free statement. When you investigate what that statement means in detail, and how the universe actually computes, you can rule out certain kinds of computation as the basis for what it's doing. You actually require a proof that the laws of physics as they stand are computationally universal in a reasonable way.

For a bunch of particles in a gas, Ed Fredkin and Norm Margolus pointed out that particles colliding off each other could perform universal computation. The problem with this model of computation is that it's chaotic. The collision of molecules is chaotic: the information that the molecules contain degrades very rapidly, and the molecules in this room are not factoring some large number, or reporting back to Microsoft on what we're doing.

LEE SMOLIN: I'm trying to understand the same thing that you're trying to understand: how it is that complexity might come out of the laws of physics. If you agree that there are two very distinct notions of processing information, the ones that you and Steve gave—one defined by semantic content and goal-oriented behavior, and one just defined by the evolution of a system in which we identify "bits of information"—the question we're all interested in is how the first kind gives rise to the second, or the reverse.

LLOYD: Given my contempt for theories of everything I would certainly not try to suggest that the computational theory of the universe I have advocated here solves all our problems. I disagree with the notion of insisting on semantic content because it's very hard to make the kind of definitions of information-processing that rely on semantic content precise. Philosophers of language have been trying to make such definitions precise for many, many years, and down that road lies madness. To say what it means for something to have a semantic content is hard. What you mean by goal-oriented behavior is a part of semantic content. That's why I really would like to avoid a definition of that sort, because I don't regard it as being a definition that can be made scientifically precise.

But why does this low-level information-processing that pervades everywhere in the world spontaneously give rise to this kind of high-level information processing where you have language, semantics, and goal-oriented behavior? That's, indeed, what we'd like to find out. This argument that you spontaneously produce complicated structures by no means solves that question, because there's a very detailed history of the way in which this complex behavior erupted in the first place. The nature of this history is a very interesting question, and I certainly wouldn't say that it's been solved at all.

LEE SMOLIN: Here are two possible quantum theories of gravity: Quantum theory of gravity one has a basis of states that are given by some labeled graphs, combinatorial graphs. Quantum theory of gravity two has a basis of states given by labeled graphs embedded in a three-dimensional manifold. Since you believe in quantum mechanics, this means that states have to be normalized. It means you have to sum over certain numbers and get 1. In the first case, the graph isomorphism problem presumably could be solved, and we could write an algorithm that a computer could run to check whether a quantum state is normalizable or not. In the second case it's conjectured that the embedding of graphs in three-manifolds is not a problem that's solvable by a finite algorithm. You would have to be committed to the second kind of theory being wrong and the first kind of theory being right, because if the second kind of theory were right, then even testing whether a quantum state was normalizable is something that a digital computer could never do. Therefore, if the universe were a digital computer, it could not learn that kind of quantum mechanics.

LLOYD: No, I actually disagree with that. The process of testing whether a theory is correct on a digital computer is very different from the process of a digital computer being something and doing something. This, by the way, is a distinct type of unpredictability from that involved in quantum mechanics. If you have something that is a computer performing a universal digital computation, then Gödel's theorem and the Halting Problem guarantee that the only way to see what it's going to do is to let it evolve and to see what happens. Even without any kind of additional lack of determinism in terms of quantum mechanics or chaos, the fact that the universe is computing makes its future behavior—and in particular its future behavior about things like complex systems, which is what we really care about—intrinsically unpredictable. The only way to see what's going to happen is to wait and see.

ALAN GUTH: When I hear about the universe as a computer and all that, I don't really know what that means that's different from saying that the universe can be described mathematically. I would think that anything that can be described mathematically is the same sort of thing as a computer.

LLOYD: There's a technical difference between something that is described mathematically and something that is capable of universal computation. You can build machines, or indeed laws of physics, that are not capable of universal computation and they could not support things like language, etc. We don't have that kind of universe. There's something called the Chomsky hierarchy, which is a hierarchy of information-processing devices, and as you move up the hierarchy you get ever more sophisticated. At the top of the hierarchy are universal Turing machines. Our universe seems to be, in terms of its information-processing ability, at the top of the Chomsky hierarchy. But it's quite easy to build toy models that don't have this capability.

ALAN GUTH: But it's easy to build models that do have the capability.

LLOYD: Once you have some kind of non-linear interaction between things then you typically get it.

ALAN GUTH: Okay, but then the universal Turing machine idea is telling us very little about the universe.

LLOYD: The fact that people seem to regard this whole statement that the universe is processing information as self-evident, and that it is almost self-evident that it's a universal Turing machine, is good. All I'm arguing is that we should actually look seriously at the implications of this self-evident fact.


Rudy Rucker

Lloyd draws on the analogy of monkeys who are pounding away not on typewriters, but on keyboards that input code to a computer. The laws of nature are the computer. And the monkeys are inputting possible programs. Now, as it happens, lots of short programs generate nice-looking complex patterns. These are what Wolfram calls the Class 4 computations; the ones that I call gnarly computations. Water, fire, clouds, trees, these are all examples of natural computations that, given any of a wide range of inputs, will generate much the same kinds of patterns.

In Lloyd's words, "Many beautiful and intricate mathematical patterns—regular geometric shapes, fractal patterns, the laws of quantum mechanics, elementary particles, the laws of chemistry—can be produced by short computer programs. Believe it or not a [programming] monkey has a good shot at producing everything we see."

He then says, "For the computational explanation of complexity to work, two ingredients are necessary: (a) a computer, and (b) monkeys. The laws of quantum mechanics themselves provide our computer."

Actually, as I have doubts about quantum mechanics, I'd say that maybe we can just say the "laws of logic," rather than the "laws of quantum mechanics."

The really debatable issue is what the monkeys are.

Stephen Wolfram would argue that the universe is ultimately deterministic; think of his beloved cone-shell-type cellular automaton rule 30, which starts with a single bit and spews out endlessly many rows of random-looking scuzz. Perhaps the random-looking seeds that feed into the universe's computation aren't in fact really random; they're pseudorandom sequences generated by a lower-level randomizing computation. In this view, there is only one possible universe.

The underlying "monkeys" pseudorandomizer is, in other words, a deterministic rule like CA Rule 30, and it feeds inputs into the universal computer that then generates the complex lovely patterns of the world.

Now, Lloyd, being a quantum mechanic, prefers to say that the "monkeys" are quantum fluctuations. One of the problems with this view is that we aren't philosophically satisfied with the notion of completely random physical events. We like to see a reason. The way quantum mechanics gets out of this is to say that since there's no reason for a particular turn of events, it must be that all possible turns of events happen, which is unsatisfying.

In any case, Lloyd seems to say that planets and trees and people are algorithmically probable. Things like us are fairly likely to occur in any gnarly class four computation, and all the universes, being universal computations, are potentially gnarly; in fact a large number of random seeds will produce gnarly patterns.

But, being a quantum mechanic, Lloyd doesn't give enough consideration to the ability of deterministic computations to generate what Wolfram calls "intrinsic randomness." Indeed, Lloyd writes, "Without the laws of quantum mechanics, the universe would still be featureless and bare."

That's not true. If you look, for instance, at any computer simulation of a physical system, you see gnarly, but these simulations don't in fact use quantum mechanics as a randomizer. They simply use deterministic pseudorandomizers to get their "monkey" variations to feed into the simulated physics. We really don't need true randomness. Pseudorandomness, that is, unpredictable computation, is enough. There's no absolute necessity to rush headlong into quantum mechanics.