# THE COMPUTATIONAL UNIVERSE


[SETH LLOYD:] I'm a professor of mechanical engineering at MIT. I build quantum computers that store information on individual atoms and then massage the normal interactions between the atoms to make them compute. Rather than having the atoms do what they normally do, you make them do elementary logical operations like bit flips, NOT operations, AND gates, and OR gates. This allows you to process information not only on a small scale, but in ways that are not possible using ordinary computers. In order to figure out how to make atoms compute, you have to learn how to speak their language and to understand how they process information under normal circumstances.

It's been known for more than a hundred years, ever since Maxwell, that all physical systems register and process information. For instance, this little inchworm right here has something on the order of Avogadro's number of atoms. And dividing by Boltzmann's constant, its entropy is on the order of Avogadro's number of bits. This means that it would take about Avogadro's number of bits to describe that little guy and how every atom and molecule is jiggling around in his body in full detail. Every physical system registers information, and just by evolving in time, by doing its thing, it changes that information, transforms that information, or, if you like, processes that information. Since I've been building quantum computers I've come around to thinking about the world in terms of how it processes information.

A few years ago I wrote a paper in Nature called "Ultimate Physical Limits to Computation," in which I showed that you could rate the information-processing power of physical systems. Say that you're building a computer out of some collection of atoms. How many logical operations per second could you perform? Also, how much information could these systems register? Using relatively straightforward techniques you can show, for instance, that the number of elementary logical operations per second that you can perform with an amount of energy E is just E over h-bar — well, it's 2E divided by pi times h-bar. (H-bar is essentially 10^-34 joule-seconds.) If you have a kilogram of matter, which has mc^2 — around 10^17 joules — worth of energy, and you ask how many ops per second it could perform, the answer is about 10^50. It would be really spanking if you could have a kilogram of matter — about what a laptop computer weighs — that could process at this rate. Using all the conventional techniques that were developed by Maxwell, Boltzmann, and Gibbs, and then developed by von Neumann and others back in the early part of the 20th century for counting numbers of states, you can count how many bits it could register. What you find is that if you were to turn the thing into a nuclear fireball — which is essentially turning it all into radiation, probably the best way of having as many bits as possible — then you could register about 10^30 bits. That's actually many more bits than you could register if you just stored a bit on every atom, because Avogadro's number of atoms store about 10^24 bits.
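As a sanity check on the "ultimate laptop" figure, here is a minimal sketch in Python. The formula 2E / (pi * h-bar) is the one quoted above; the rounded constants are standard textbook values, not numbers from the talk:

```python
import math

hbar = 1.055e-34  # reduced Planck constant, in joule-seconds
c = 3.0e8         # speed of light, in meters per second

# Rest energy of one kilogram of matter, E = m c^2 (~1e17 joules)
E = 1.0 * c**2

# Maximum elementary logical operations per second, 2E / (pi * hbar)
ops_per_sec = 2 * E / (math.pi * hbar)

print(f"{ops_per_sec:.1e} ops per second")  # on the order of 1e50
```

Running this gives roughly 5 x 10^50 operations per second, consistent with the "about 10^50" quoted above.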

Having done this paper to calculate the capacity of the ultimate laptop, and also to raise some speculations about the role of information-processing in, for example, things like black holes, I thought that this was actually too modest a venture, and that it would be worthwhile to calculate how much information you could process if you were to use all the energy and matter of the universe. This came up because back when I was doing a master's in philosophy of science at Cambridge, where I studied with Stephen Hawking and people like that, I had an old cosmology text. I realized that I could estimate the amount of energy that's available in the universe, and I knew that if I looked in this book it would tell me how to count the number of bits that could be registered, so I thought I would look and see. If you wanted to build the most powerful computer you could, you can't do better than including everything in the universe that's potentially available. In particular, if you want to know when Moore's Law, this fantastic exponential doubling of the power of computers every couple of years, must end, it would have to be before every single piece of energy and matter in the universe is used to perform a computation. Actually, just to telegraph the answer, Moore's Law has to end in about 600 years, without doubt. Sadly, by that time the whole universe will be running Windows 2540, or something like that. 99.99 percent of the energy of the universe will have been licensed by Microsoft by that point, and they'll want more! They really will have to start writing efficient software, by gum. They can't rely on Moore's Law to save their butts any longer.

I did this calculation, which was relatively simple. You take, first of all, the observed density of matter in the universe, which is roughly one hydrogen atom per cubic meter. The universe is about thirteen billion years old, and using the fact that there are pi times 10^7 seconds in a year, you can calculate the total energy that's available in the whole universe. Once you have that energy, you divide by Planck's constant — which tells you how many ops per second can be performed — and multiply by the age of the universe, and you get the total number of elementary logical operations that could have been performed since the universe began. You get a number that's around 10^120. It's a little bigger — 10^122 or something like that — but that's within astrophysical accuracy, where if you're within a factor of a hundred you feel that you're okay.
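The whole estimate fits in a few lines of Python. This is a back-of-the-envelope sketch of the calculation described above, dropping factors of order one; the constants and the treatment of the horizon as a sphere of radius c times the age are my own simplifications:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
m_H = 1.67e-27     # mass of a hydrogen atom, kg

age = 13e9 * math.pi * 1e7           # ~13 billion years, at pi*1e7 seconds per year
R = c * age                          # rough radius of the horizon, m
V = (4 / 3) * math.pi * R**3         # volume within the horizon, m^3

E = m_H * c**2 * V                   # total energy at one hydrogen atom per cubic meter
ops_per_sec = 2 * E / (math.pi * hbar)
total_ops = ops_per_sec * age

print(f"{total_ops:.1e} ops since the beginning")  # on the order of 1e120
```

The result lands within a couple of orders of magnitude of 10^120 — which, as the talk says, is as good as astrophysical accuracy gets.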

The other way you can calculate it is by calculating how it progresses as time goes on. The universe has evolved up to now, but how long could it go? One way to figure this out is to take the phenomenological observation of how much energy there is, but another is to assume, in a Guthian fashion, that the universe is at its critical density. Then there's a simple formula for the critical density of the universe in terms of its age; G, the gravitational constant; and the speed of light. You plug that into this formula, assuming the universe is at critical density, and you find that the total number of ops that could have been performed in the universe over time T since the universe began is actually the age of the universe divided by the Planck time — the time at which quantum gravity becomes important — quantity squared. That is, it's the age of the universe squared, divided by the Planck time squared. This is really just taking the energy divided by h-bar, plugging in the formula for the critical density, and that's the answer you get.
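The chain of substitutions can be written out compactly. This is my own compressed sketch of the argument, keeping only orders of magnitude and dropping numerical factors:

```latex
\rho_c \sim \frac{1}{G t^2}, \qquad
E \sim \rho_c\, c^2 (c t)^3 \sim \frac{c^5 t}{G}
\]
\[
\#\text{ops} \sim \frac{E\,t}{\hbar}
  \sim \frac{c^5 t^2}{\hbar G}
  = \left(\frac{t}{t_P}\right)^2,
\qquad t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5\times 10^{-44}\ \text{s}
```

With t around 4 x 10^17 seconds, (t / t_P)^2 comes out near 10^122, matching the number quoted earlier.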

This is just a big number. It's reminiscent of other famous big numbers that are bandied about by numerologists. These large numbers are, of course, associated with all sorts of terrible crank science. For instance, there's the famous Eddington-Dirac number, which is 10^40. It's the ratio between the size of the universe and the classical size of the electron, and also the ratio between the electromagnetic force in, say, the hydrogen atom and the gravitational force in the hydrogen atom. Dirac went down the garden path to try to make a theory in which this large number had to be what it was. The number that I've come up with is suspiciously reminiscent of (10^40)^3. This number, 10^120, is normally regarded as a coincidence, but in fact it's not a coincidence that the number of ops that could have been performed since the universe began is this number cubed, because it actually turns out to be the first one squared times the other one. So whether these two numbers are the same could be a coincidence, but the fact that this one is equal to them cubed is not.

Having calculated the number of elementary logical operations that could have been performed since the universe began, I went and calculated the number of bits, which is a similar, standard sort of calculation. Say that we took all of this beautiful matter around us on lovely Eastover Farm, and vaporized it into a fireball of radiation. This would be the maximum entropy state, and would enable it to store the largest possible amount of information. You can easily calculate how many bits could be stored by the amount of matter that we have in the universe right now, and the answer turns out to be 10^90. By standard cosmological calculations this is (10^120)^(3/4). We can store 10^90 bits in matter, and if one believes in somewhat speculative theories about quantum gravity such as holography — in which the amount of information that can be stored in a volume is bounded by its surface area divided by the Planck length squared — and if you assume that somehow information can be stored mysteriously on unknown gravitational degrees of freedom, then again you get 10^120. This is because the age of the universe squared, divided by the Planck time squared, is equal to the size of the universe squared divided by the Planck length squared — that is, the size of the universe divided by the Planck length, quantity squared. So we can do 10^120 ops on 10^90 bits.
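Both bit counts can be sketched numerically. The first uses the fact that the entropy of radiation with energy E in a region of size R scales as (ER / h-bar c)^(3/4); the second is the holographic bound, horizon area over Planck area. The scaling laws are standard, but the order-one factors and the spherical-horizon shortcut are my own simplifications:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
G = 6.67e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
m_H = 1.67e-27     # mass of a hydrogen atom, kg

age = 13e9 * math.pi * 1e7                  # age of the universe, s
R = c * age                                 # horizon radius, m
E = m_H * c**2 * (4 / 3) * math.pi * R**3   # energy at one hydrogen atom per m^3

# Bits in matter: radiation entropy scales as (E R / hbar c)^(3/4),
# dropping order-one factors
bits_matter = (E * R / (hbar * c)) ** 0.75

# Holographic bound: horizon area divided by the Planck length squared
l_P = math.sqrt(hbar * G / c**3)
bits_holo = 4 * math.pi * R**2 / l_P**2

print(f"matter: {bits_matter:.0e} bits, holographic: {bits_holo:.0e} bits")
```

The matter estimate lands near 10^90 and the holographic one near 10^122 — within the factor-of-a-hundred tolerance of astrophysical accuracy, these are the 10^90 and 10^120 quoted above.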

I made these calculations not to suggest any grandiose plan or to reveal large numbers, although of course I ended up with some large numbers; I was simply curious what these numbers were. When I calculated them I actually thought they couldn't be right, because they seemed too small. I can think of much bigger numbers than 10^120. There are lots of bigger numbers than that. It was fun to calculate the computational capacity of the universe, but I wanted to get at some picture of how much computation the universe could do if we think of it as performing a computation. These numbers can be interpreted essentially in three ways, two of which are relatively uncontroversial. The first one I already gave you: it's an upper bound to the size of a computer that we could build if we turned everything in the universe into a computer running Windows 2540. That's uncontroversial. So far nobody's managed to find a way to get around that. There's also a second interpretation, which I think is more interesting. One of the things we do with our quantum computers is to use them as analog computers to simulate other physical systems. They're very good at simulating other quantum systems, at simulating quantum field theories, at simulating all sorts of effects down at the quantum-mechanical scale that are hard to understand and hard to simulate classically. These numbers are a lower limit to the size of a computer that could simulate the whole universe, because to simulate something you need at least as much stuff as is there. You need as many bits in your simulator as there are bits registered in the system if you're going to simulate it accurately. And if you're going to follow it step by step throughout its evolution, you need at least as many steps in your simulator as the number of steps that occur in the system itself.
So these numbers — 10^120 ops, 10^90 bits of matter, or 10^120 bits if you believe in something like holography — also form a lower bound on the size of a computer you would need to simulate the universe as a whole, accurately and exactly. That's also uncontroversial.

The third interpretation, which of course is more controversial, arises if we imagine that the universe is itself a computer and that what it's doing is performing a computation. If this is the case, these numbers say how big that computation is: how many ops have been performed on how many bits within the horizon since the universe began. That, of course, is more controversial, and since publishing this paper I've received what is charitably described as "hate mail" from famous scientists. There have been angry letters to the editor of Physical Review Letters. "How dare you publish a paper like this?" they say. Or "It's just absolutely inconceivable. The standards have really gotten low." Thinking of the universe as a computer is controversial. I don't see why it should be so controversial, because many books of science fiction have already regarded the universe as a computer. Indeed, we even know the answer to the question it's computing: it's 42. The universe is clearly not a computer with a Pentium inside. It's not an electronic computer, though of course it operates partly by quantum electrodynamics, and it's not running Windows — at least not yet. Some of us hope that never happens, though you never can tell, if only because you don't want the universe as a whole to crash on you all of a sudden. Luckily, whatever operating system it has seems to be slightly more reliable so far. But if people try to download the wrong software, or upgrade it in some way, we could have some trouble.

So why is this controversial? For one, it seems to be making a statement that's obviously false. The universe is not an electronic digital computer, it's not running some operating system, and it's not running Windows. So why does it make sense to talk about the universe as performing a computation at all? There's one sense in which it's actually obvious that the universe is performing a computation. Take any physical system — say this quarter, for example. The quarter can register a lot of information. Each atom in it has a position, which registers a certain amount of information, and some jiggling motion, which registers a few bits of information; and the quarter as a whole can be heads or tails. Whether it comes up heads or tails in the famous coin flip generates a bit of information — unless it's Rosencrantz and Guildenstern Are Dead, in which case it always comes up heads. Because the quarter is a physical system, it's also dynamic and evolves in time. Its physical state is transforming. It's easier to notice if I flip it in the air: it evolves in time, it changes, and as it changes it transforms that information, so the information that describes it goes from one state to another — from heads to tails, heads to tails, heads to tails — really fast. The bit flips, again and again and again. In addition, the positions, momenta, and quantum states of the atoms inside are changing, so the information that they're registering is changing. Merely by existing, any physical system registers information, and by evolving in time it transforms or processes that information.

It doesn't necessarily transform it or process it in the same way that a digital computer does, but it's certainly performing information-processing. From my perspective, it's also uncontroversial that the universe registers 10^90 bits of information and transforms and processes that information at a rate determined by its energy divided by Planck's constant. All physical systems can be thought of as registering and processing information, and how you choose to define computation will determine your view of what computation consists of. If you think of computation as being merely information-processing, then it's rather uncontroversial that the universe is computing, but of course many people regard computation as being more than information-processing. There are formal definitions of what computation consists of. For instance, there are universal Turing machines, and there is a nice definition, now 70-odd years old, of what it means for something to be able to perform digital computation. Indeed, the kind of computers we have sitting on our desks, as opposed to the kinds we have sitting in our heads or the kind that was in that little inchworm that was going along, are universal digital computers. Digital computation in this sense is a more specific, and potentially more powerful, kind of information-processing than a physical system merely evolving in time, because one way to evolve in time is just to sit there like a lump. That's a perfectly fine way of evolving in time, but you might not consider it a computation. Of course, my computer spends a lot of time doing that, so that seems to be a common thing for computers to do.

One of the things that I've been doing recently in my scientific research is to ask this question: Is the universe actually capable of performing things like digital computations? Again, we have strong empirical evidence that computation is possible, because I own a computer. When it's not sitting there like a lump, waiting to be rebooted, it actually performs computation. Whatever the laws of physics are, and we don't know exactly what they are, they do indeed support computation in the form of existing computers. That's one bit of empirical evidence for it.

There's more empirical evidence in the form of these quantum computers that I and colleagues like Dave Cory, Tai Tsun Wu, Ike Chuang, Jeff Kimble, Dave Huan, and Hans Mooij have built. They're actually computers. If you look at a quantum computer you don't see anything, because these molecules are too small. But if you look at what's happening in a quantum computer, it's actually attaining these limits that I described before, these fundamental limits of computation. I have a little molecule, and each atom in the molecule registers a bit of information, because spin up is zero and spin down is one. I flip this bit by putting it in an NMR spectrometer and zapping it with microwaves to make the bit flip. Then I ask: how fast does that bit flip, given the energy of interaction between the electromagnetic field I'm applying and that spin? You find that the bit flips in exactly the time given by this ultimate limit to computation. I take the energy of the interaction and divide by h-bar — or, to be more accurate, take two times the energy divided by pi times h-bar — and I find that that's exactly how fast this bit flips. Similarly, I can do a more complicated operation, like an exclusive-or operation where, if I have two spins, I make this one flip if and only if the other spin is up. It's relatively straightforward to do. In fact, people have been doing it since 1948, and if they'd thought of building quantum computers in 1948 they could have, because they actually already had the wherewithal to do it. When this happens — and it's indeed the sort of thing that happens naturally inside an atom — it also takes place at the limits that are given by this fundamental physics of computation. It goes exactly at the speed that it's allowed to go and no faster. It's saturating its bound for how fast you can perform a computation.
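The bit-flip timing above is easy to sketch: inverting the rate 2E / (pi * h-bar) gives a minimum flip time of pi * h-bar / 2E. Note that the interaction energy below is a purely illustrative value I've chosen for the sketch; the talk doesn't quote a number:

```python
import math

hbar = 1.055e-34  # reduced Planck constant, J*s

# Hypothetical spin-field interaction energy (illustrative value only;
# not a number from the talk)
E = 1.0e-26       # joules

# Minimum time for one bit flip at the quantum speed limit: t = pi*hbar / (2E)
t_flip = math.pi * hbar / (2 * E)

print(f"{t_flip:.2e} s per bit flip")
```

Stronger coupling (larger E) means faster flips; the claim in the talk is that real NMR bit flips take exactly this long and no less.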

The other neat thing about these quantum computers is that they're also storing a bit of information on every available degree of freedom. Every nuclear spin in the molecules stores exactly one bit of information. So we have examples of computers that saturate these ultimate limits of computation, and they look like actual physical systems: alanine molecules, amino acids, chloroform. Similarly, when we do quantum computation using photons and the like, we also perform computation at this limit.

I have not proved that the universe is, in fact, a digital computer and that it's capable of performing universal computation, but it's plausible that it is. It's also a reasonable scientific program to look at the dynamics of the standard model and to try to prove from that dynamics that it is computationally capable. We have strong evidence for this case. Why would this be interesting? For one thing it would justify Douglas Adams and all of the people who've been saying it's a computer all along. But it would also explain some things that have been otherwise paradoxical or confusing about the universe. Alan has done work for a long time on why the universe is so homogeneous, flat, and isotropic. This was unexplained within the standard cosmological model, and your great accomplishment here, Alan, was to make a wonderful, simple, and elegant model that explains why the universe has these features. Another feature that everybody notices about the universe is that it's complex. Why is it complicated? Well, nobody knows. It turned out that way. Or, if you're a creationist, you say God made it that way. If you take a more Darwinian point of view, the dynamics of the universe are such that as the universe evolved in time, complex systems arose out of its natural dynamics. So why would the universe being capable of computation explain why it's complex?

There's a very nice explanation for this, which I think was given back in the '60s — and Marvin, maybe you can enlighten me about when this first happened, because I don't know the first instance of it. Computers are famous for being able to do complicated things starting from simple programs. You can write a very short computer program which will cause your computer to start spitting out the digits of pi. If you want to make it slightly more complex you can make it stop spitting out those digits at some point so you can use it for something else. There are short programs that generate all sorts of complicated things. That in itself doesn't constitute an explanation for why the universe itself exhibits all this complexity, but if you combine the fact that you have something that's dynamically, computationally universal with the fact that information is constantly being injected into the universe — by the basic laws of quantum mechanics, quantum fluctuations are all the time injecting, programming the universe with bits of information — then you do have a reasonable explanation, which I'll close with.
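The "very short program that spits out the digits of pi" is not hypothetical. Here is one concrete version — a Python rendering of Gibbons' unbounded spigot algorithm, which streams digits one at a time without ever fixing a precision in advance:

```python
def pi_digits(count):
    """Return the first `count` decimal digits of pi, using Gibbons'
    unbounded spigot algorithm (streaming, no fixed precision)."""
    digits = []
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while len(digits) < count:
        if 4 * q + r - t < n * t:
            digits.append(n)  # the next digit is now certain; emit it
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # not enough information yet; fold in another term of the series
            q, r, t, k, n, l = (
                q * k, (2 * q + r) * l, t * l, k + 1,
                (q * (7 * k + 2) + r * l) // (t * l), l + 2,
            )
    return digits

print(pi_digits(10))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A dozen lines of code, yet it produces an endless, intricate stream — exactly the short-program-producing-complexity phenomenon described above.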

About a hundred and twenty years ago, Ludwig Boltzmann proposed an explanation for why the universe is complex. He said that it's just a big thermal fluctuation. His is a famous explanation: the monkeys-typing-on-typewriters explanation for the universe. Say there were a bunch of monkeys typing random strings on a typewriter. Eventually we would get a book, right? But Boltzmann, among other people, realized right away that this couldn't be right, because the probability of this happening is vanishingly small. If you had one dime that assembled itself miraculously by a thermal fluctuation, the chances of finding another dime would be vanishingly small; you'd never find that happening in the same universe, because it's just too unlikely.

But now let's turn to this other metaphor, which I want help from Marvin with. Now the monkeys are not typing into a typewriter, but into a computer keyboard. Let's suppose this computer is accepting what the monkeys are typing as instructions to perform computational tasks. This means that, for instance, because there are short programs for producing the digits of pi, you don't need that many monkeys typing for that long until all of a sudden pi is being produced by the computer. If you've got a monkey that's managed to produce a program to produce a dime, then all it has to do is hit return and it's got two dimes, right? Monkeys are probably pretty good at hitting return. There's a nice theory associated with this, called algorithmic information theory, which says that if you've got monkeys typing into a computer, then anything that can be described mathematically, anything a computer can compute, will at some point show up for these monkeys. In the monkey-typing-into-the-computer universe, all sorts of complex things arise naturally by the natural evolution of the universe.

I would suggest, merely as a metaphor here, but also as the basis for a scientific program to investigate the computational capacity of the universe, that this is also a reasonable explanation for why the universe is complex. It gets programmed by little random quantum fluctuations, like the same sorts of quantum fluctuations that mean that our galaxy is here rather than somewhere else. According to the standard model, billions of years ago some little quantum fluctuation, perhaps a slightly higher density of matter, maybe right where we're sitting right now, caused our galaxy to start collapsing around here. It was just a little quantum fluctuation, but it programmed the universe, and it's important for where we are, because I'm very glad to be here and not billions of miles away in outer space. Similarly, another famous little quantum fluctuation that programs you is the exact configuration of your DNA. Recombination takes strands of DNA from your mother and from your father, splits them up, and splices them back together. This is a process that has lots of randomness in it, as you know if you have siblings. If you trace that randomness down, you find that it actually arises from little quantum fluctuations, which masquerade as thermal and chemical fluctuations. Your genes got programmed by quantum fluctuations. There's nothing wrong with that, nothing to be ashamed of — that's just the way things are. Your genes are very important to you, and they themselves form a kind of program for your life and for how your body functions.

In this metaphor we actually have a picture of the computational universe, a metaphor which I hope to make scientifically precise as part of a research program. We have a picture for how complexity arises, because if the universe is computationally capable, maybe we shouldn't be so surprised that things are so entirely out of control.