HOW FAST, HOW SMALL, AND HOW POWERFUL?

MOORE'S LAW AND THE ULTIMATE LAPTOP
Seth Lloyd [7.22.01]
Introduction By: John Brockman


"Something else has happened with computers. What's happened with society is that we have created these devices, computers, which already can register and process huge amounts of information, which is a significant fraction of the amount of information that human beings themselves, as a species, can process. When I think of all the information being processed there, all the information being communicated back and forth over the Internet, or even just all the information that you and I can communicate back and forth by talking, I start to look at the total amount of information being processed by human beings — and their artifacts — we are at a very interesting point of human history, which is at the stage where our artifacts will soon be processing more information than we physically will be able to process."

THE REALITY CLUB: Joseph Traub, Jaron Lanier, John McCarthy, Lee Smolin, Philip W. Anderson, Antony Valentini, Stuart Hameroff and Paola Zizzi respond to Seth Lloyd

Introduction 

"Lloyd's Hypothesis" states that everything that's worth understanding about a complex system, can be understood in terms of how it processes information. This is a new revolution that's occurring in science.

Part of this revolution is being driven by the work and ideas of Seth Lloyd, a Professor of Mechanical Engineering at MIT. Last year, Lloyd published an article in the journal Nature — "Ultimate Physical Limits to Computation" (vol. 406, no. 6788, 31 August 2000, pp. 1047-1054) — in which he sought to determine the limits the laws of physics place on the power of computers. "Over the past half century," he wrote, "the amount of information that computers are capable of processing and the rate at which they process it has doubled every 18 months, a phenomenon known as Moore's law. A variety of technologies — most recently, integrated circuits — have enabled this exponential increase in information processing power. But there is no particular reason why Moore's law should continue to hold: it is a law of human ingenuity, not of nature. At some point, Moore's law will break down. The question is, when?"

His stunning conclusion?

"The amount of information that can be stored by the ultimate laptop, 10 to the 31st bits, is much higher than the 10 to the 10th bits stored on current laptops. This is because conventional laptops use many degrees of freedom to store a bit whereas the ultimate laptop uses just one. There are considerable advantages to using many degrees of freedom to store information, stability and controllability being perhaps the most important. Indeed, as the above calculation indicates, to take full advantage of the memory space available, the ultimate laptop must turn all its matter into energy. A typical state of the ultimate laptop's memory looks like a plasma at a billion degrees Kelvin — like a thermonuclear explosion or a little piece of the Big Bang! Clearly, packaging issues alone make it unlikely that this limit can be obtained, even setting aside the difficulties of stability and control."

Ask Lloyd why he is interested in building quantum computers and you will get a two-part answer. The first, and obvious one, he says, is "because we can, and because it's a cool thing to do." The second concerns some interesting scientific implications. "First," he says, "there are implications in pure mathematics, which are really quite surprising, that is that you can use quantum mechanics to solve problems in pure math that are simply intractable on ordinary computers." The second scientific implication, a use for quantum computers first suggested by Richard Feynman in 1982, is that one quantum system could simulate another quantum system. Lloyd points out that "if you've ever tried to calculate Feynman diagrams and do quantum dynamics, simulating quantum systems is hard. It's hard for a good reason, which is that classical computers aren't good at simulating quantum systems."

Lloyd notes that Feynman suggested the possibility of making one quantum system simulate another. He conjectured that it might be possible to do this using something like a quantum computer. In the 1990s Lloyd showed that Feynman's conjecture was in fact correct: not only could a quantum computer simulate virtually any other quantum system, it could do so remarkably efficiently. So by using quantum computers, even quite simple ones, you once again surpass the limits of classical computers when you get down to, say, 30 or 40 bits in your quantum computer. You don't need a large quantum computer to get a huge speedup over classical simulations of physical systems.

"A salt crystal has around 10 to the 17 possible bits in it," he points out. "As an example, let's take your own brain. If I were to use every one of those spins, the nuclear spins, in your brain that are currently being wasted and not being used to store useful information, we could probably get about 10 to the 28 bits there."

Sitting with Lloyd in the Ritz-Carlton Hotel in Boston, overlooking the tranquil Boston Public Garden, I am suddenly flooded with fantasies of licensing arrangements regarding the nuclear spins of my brain. No doubt this would be a first in distributed computing.

"You've got a heck of a lot of nuclear spins in your brain," Lloyd says. "If you've ever had magnetic resonance imaging, MRI, done on your brain, then they were in fact tickling those spins. What we're talking about in terms of quantum computing, is just sophisticated 'spin tickling'."

This leads me to wonder how "spin tickling" fits into intellectual property law. How about remote access? Can you in theory designate and exploit people who would have no idea that their brains were being used for quantum computation?

Lloyd points out that so far as we know, our brains don't pay any attention to these nuclear spins. "You could have a whole parallel computational universe going on inside your brain. This is, of course, fantasy. But hey, it might happen."

"But it's not a fantasy to explore this question about making computers that are much, much, more powerful than the kind that we have sitting around now — in which a grain of salt has all the computational powers of all the computers in the world. Having the spins in your brain have all the computational power of all the computers in a billion worlds like ours raises another question which is related to the other part of the research that I do."

In the '80s, Lloyd began working on how large complex systems process information. How things process information at a very small scale, and how to make ordinary stuff, like a grain of salt or a cube of sugar, process information, relates to the complex systems work in his thesis that he did with the late physicist Heinz Pagels, his advisor at Rockefeller University. "Understanding how very large complex systems process information is the key to understanding how they behave, how they break down, how they work, what goes right and what goes wrong with them," he says.

Science is being done in new and different ways, and these changes accelerate the exchange of ideas and the development of new ones. Until a few years ago, it was very important for a young scientist to be "in the know" — that is, to know the right people, because results were distributed primarily by preprints, and if you weren't on the right mailing list, then you weren't going to get the information, and you wouldn't be able to keep up with the field.

"Certainly in my field, and fundamental physics, and quantum mechanics, and physics of information," Lloyd notes, "results are distributed electronically, the electronic pre-print servers, and they're available to everybody via the World Wide Web. Anybody who wants to find out what's happening right now in the field can go to http://xxx.lanl.gov and find out. So this is an amazing democratization of knowledge which I think most people aren't aware of, and its effects are only beginning to be felt."

"At the same time," he continues, "a more obvious way in which science has become public is that major newspapers such as The New York Times have all introduced weekly science sections in the last ten years. Now it's hard to find a newspaper that doesn't have a weekly section on science. People are becoming more and more interested in science, and that's because they realize that science impacts their daily lives in important ways."

A big change in science is taking place, and that's that science is becoming more public — that is, belonging to the people. In some sense, it's a realization of the capacity of science. Science in some sense is fundamentally public.

"A scientific result is a result that can be duplicated by anybody who has the right background and the right equipment, be they a professor at M.I.T. like me," he points out, "or be they from an underdeveloped country, or be they an alien from another planet, or a robot, or an intelligent rat. Science consists exactly of those forms of knowledge that can be verified and duplicated by anybody. So science is basically, at it most fundamental level, a public form of knowledge, a form of knowledge that is in principle accessible to everybody. Of course, not everybody's willing to go out and do the experiments, but for the people who are willing to go out and do that, — if the experiments don't work, then it means it's not science.

"This democratization of science, this making it public, is in a sense the realization of a promise that science has held for a long time. Instead of having to be a member of the Royal Society to do science, the way you had to be in England in the 17th, 18th, centuries today pretty much anybody who wants to do it can, and the information that they need to do it is there. This is a great thing about science. That's why ideas about the third culture are particularly apropos right now, as you are concentrating on scientists trying to take their case directly to the public. Certainly, now is the time to do it."

—JB

SETH LLOYD is an Associate Professor of Mechanical Engineering at MIT and a principal investigator at the Research Laboratory of Electronics. He is also an adjunct assistant professor at the Santa Fe Institute. He works on problems having to do with information and complex systems, from the very small — how do atoms process information, and how can you make them compute? — to the very large: how does society process information? And how can we understand society in terms of its ability to process information?



HOW FAST, HOW SMALL, AND HOW POWERFUL?

SETH LLOYD: Computation is pervading the sciences. I believe it began about 400 years ago, if you look at the first paragraph of Hobbes's famous book Leviathan. He says that just as we consider the human body to be like a machine, like a clock where you have sinews and muscles to move energy about, a pulse beat like a pendulum, and a heart that pumps energy in, similar to the way a weight supplies energy to a clock's pendulum, then we can consider the state to be analogous to the body, since the state has a prince at its head, people who form its individual portions, legislative bodies that form its organs, etc. In that case, Hobbes asked, couldn't we consider the state itself to have an artificial life?

To my knowledge that was the first use of the phrase artificial life in the form that we use it today. If we have a physical system that's evolving in a physical way, according to a set of rules, couldn't we consider it to be artificial and yet living? Hobbes wasn't talking about information processing explicitly, but the examples he used were, in fact, examples of information processing. He used the example of the clock as something that is designed to process information, as something that gives you information about time. Most pieces of the clock that he described are devices not only for transforming energy, but actually for providing information. For example, the pendulum gives you regular, temporal information. When he next discusses the state and imagines it having an artificial life, he first talks about the brain, the seat of the state's thought processes, and that analogy, in my mind, accomplishes two things.

First, Hobbes is implicitly interested in information. Second, he is constructing the fundamental metaphor of scientific and technological inquiry. When we think of a machine as possessing a kind of life in and of itself, and when we think of machines as doing the same kinds of things that we ourselves do, we are also thinking the corollary, that is, we are doing the same kinds of things that machines do. This metaphor, one of the most powerful of the Enlightenment, in some sense pervaded the popular culture of that time. Eventually, one could argue, that metaphor gave rise to Newton's notions of creating a dynamical picture of the world. The metaphor also gave rise to the great inquiries into thermodynamics and heat, which came 150 years later, and, in some ways, became the central mechanical metaphor that has informed all of science up to the 20th century.

The real question is, when did people first start talking about information in such terms that information processing rather than clockwork became the central metaphor for our times? Because until the 20th century, this Enlightenment mode of thinking of physical things such as mechanical objects with their own dynamics as being similar to the body or the state was really the central metaphor that informed much scientific and technological inquiry. People didn't start thinking about this mechanical metaphor until they began building machines, until they had some very good examples of machines, like clocks for instance. The 17th century was a fantastic century for clockmaking, and in fact, the 17th and 18th centuries were fantastic centuries for building machines, period.

Just as people began conceiving of the world using mechanical metaphors only when they had themselves built machines, people began to conceive of the world in terms of information and information processing only when they began dealing with information and information processing. All the mathematical and theoretical materials for thinking of the world in terms of information, including all the basic formulas, were available at the end of the 19th century, because all these basic formulas had been created by Maxwell, Boltzmann and Gibbs for statistical mechanics. The formula for information was known back in the 1880s, but people didn't know that it dealt with information. Instead, because they were familiar with things like heat and mechanical systems that processed heat, they called information in its mechanical or thermodynamic manifestation entropy. It wasn't until the 1930s that people like Claude Shannon and Norbert Wiener, and before them Harry Nyquist, started to think about information processing as such — information processed for the primary purpose of communication, or for the purpose of controlling systems, where the role of information and feedback could be analyzed. Then came the notion of constructing machines that actually processed information. Babbage had tried to construct one back in the early 19th century, a spectacular and expensive failure, and one which did not enter the popular mainstream.

Another failure was the attempt, back in the late 1950s and early 1960s, to extend the wonderful work on Cybernetics beyond fields such as control theory, when there was this notion that cybernetics was going to solve all our problems and allow us to figure out how social systems work, etc. That was a colossal failure — not because the idea was necessarily wrong, but because the techniques for doing so didn't exist at that point — and, if we're realistic, may in fact never exist. The applications of Cybernetics that were spectacularly successful are not even called Cybernetics because they're so ingrained in technology: fields like control theory, and the aerospace techniques that were used to put men on the moon. Those were the great successes of Cybernetics, remarkable successes, but in a narrower technological field.

This brings us to the Internet, which in some sense is almost like anti-Cybernetics, the evil twin of Cybernetics. The word Cybernetics comes from the Greek word kybernetes, which means a governor — a helmsman, actually; the kybernetes was the pilot of a ship. Cybernetics, as initially conceived, was about governing, or controlling, or guiding. The great thing about the Internet, as far as I'm concerned, is that it's completely out of control. In that sense the fact of the Internet goes way beyond and completely contradicts the Cybernetic ideal. But in another sense — in the way the Internet and cybernetics are related — Cybernetics was fundamentally on the right track. As far as I'm concerned, what's really going on in the world is that there's a physical world where things happen. I'm a physicist by training and I was taught to think of the world in terms of energy, momentum, pressure, entropy. You've got all this energy, things are happening, things are pushing on other things, things are bouncing around.

But that's only half the story. The other half of the story, its complementary half, is the story about information. In one way you can think about what's going on in the world as energy, stuff moving around, bouncing off each other — that's the way people have thought about the world for over 400 years, since Galileo and Newton. But what was missing from that picture was what that stuff was doing: how, why, what? These are questions about information. What is going on? It's a question about information being processed. Thinking about the world in terms of information is complementary to thinking about it in terms of energy. 

To my mind, that is where the action is, not just thinking about the world as information on its own, or as energy on its own, but looking at the confluence of information and energy and how they play off against each other. That's exactly what Cybernetics was about. Wiener, who is the real father of the field of Cybernetics, conceived of Cybernetics in terms of information, things like feedback control. How much information, for example, do you need to make something happen?

The first people studying these problems were scientists who happened to be physicists, and the first person who was clearly aware of the connection between information, entropy, and physical quantities like energy was Maxwell. Maxwell, in the 1850s and 60s, was the first to write down formulas that related what we would now call information — ideas of information — to things like energy and entropy, and the first to make that connection explicit.

He also had this wonderfully evocative far-out, William Gibsonesque notion of a demon. "Maxwell's Demon" is this hypothetical being that was able to look very closely at the molecules of gas whipping around in a room, and then rearrange them. Maxwell even came up with a model in which the demon was sitting at a partition, a tiny door, between two rooms and he could open and shut this door very rapidly. If he saw fast molecules coming from the right and slow molecules coming from the left, then he'd open the door and let the fast molecules go in the lefthand side, and let the slow molecules go into the righthand side.

And since Maxwell already knew about this connection between the average speed of molecules and entropy, and he also knew that entropy had something to do with the total number of configurations, the total number of states a system can have, he pointed out, that if the demon continues to do this, the stuff on the lefthand side will get hot, and the stuff on the righthand side will get cold, because the molecules over on the left are fast, and the molecules on the right are slow.

He also pointed out that there is something screwy about this, because the demon is doing something that shouldn't take a lot of effort: the door can be as light as you want, the demon can be as small as you want, the amount of energy you use to open and shut the door can be as small as you desire, and yet somehow the demon is managing to make something hot on one side. Maxwell pointed out that this was in violation of the laws of thermodynamics — in particular the second law of thermodynamics, which says that if you've got a hot thing over here and a cold thing over there, then heat flows from the hot thing to the cold thing, the hot thing gets cooler and the cold thing gets hotter, until eventually they end up the same temperature. And it never happens the opposite way: you never see something that's all the same temperature spontaneously separate into a hot part and a cold part. Maxwell pointed out that there was something funny going on, that there was this connection between entropy and this demon who was capable of processing information.
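The demon's sorting rule is easy to caricature in code. The following is only a toy illustration, not anything from Maxwell or the text — the molecule "energies" are drawn from an arbitrary distribution and the sorting threshold is made up — but it shows how mere sorting by speed manufactures a hot side and a cold side:

```python
import random
from statistics import mean

random.seed(0)

# Toy Maxwell's demon: each "molecule" is just a kinetic-energy-like number.
molecules = [random.gauss(0, 1) ** 2 for _ in range(10_000)]
threshold = sorted(molecules)[len(molecules) // 2]  # demon's chosen cutoff

left, right = [], []
for m in molecules:
    # The demon opens the door so fast molecules end up on the left,
    # slow molecules on the right.
    (left if m > threshold else right).append(m)

print(f"left  (hot):  mean energy {mean(left):.3f}")
print(f"right (cold): mean energy {mean(right):.3f}")
```

The point of the toy is Maxwell's: the sorting itself costs almost nothing mechanically, yet the left side ends up systematically hotter than the right, which is what made the demon look like a second-law violation until the demon's information processing was accounted for.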

To put it all in perspective, as far as I can tell, the main thing that separates humanity from most other living things is the way that we deal with information. Somewhere along the line we developed methods, sophisticated mechanisms, for communicating via speech. Somewhere along the line we developed natural language, which is a universal method for processing information: anything that you can imagine can be represented as information, and anything that can be said can be said using language.

That probably happened around a hundred thousand years ago, and since then, the history of human beings has been the development of ever more sophisticated ways of registering, processing, transforming, and dealing with information. Society through this methodology creates an organizational formula that is totally wild compared with the organizational structures of most other species, which makes the human species distinctive, if there is something at all that makes us distinctive. In some sense we're just like any ordinary species out there. The extent to which we are different has to do with having more sophisticated methods for processing information.

Something else has happened with computers. What's happened with society right now is that we have created these devices, computers, which already can register and process huge amounts of information, which is a significant fraction of the amount of information that human beings themselves, as a species, can process. When I think of all the information being processed there, all the information being communicated back and forth over the Internet, or even just all the information that you and I can communicate back and forth by talking, I start to look at the total amount of information being processed by human beings — and their artifacts — we are at a very interesting point of human history, which is at the stage where our artifacts will soon be processing more information than we physically will be able to process. So I have to ask, how many bits am I processing per second in my head? I could estimate it, it's going to be around ten billion neurons, something like 10 to the 15 bits per second, around a million billion bits per second.
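Lloyd gives only the endpoints of that estimate — roughly ten billion neurons, roughly 10 to the 15 bits per second. One way to connect them is sketched below; the per-neuron figures are illustrative assumptions chosen to bridge the two numbers, not values from the text:

```python
# Back-of-envelope bridge between "ten billion neurons" and
# "10^15 bits per second". The intermediate factors are assumptions.
neurons = 1e10              # from the text
synapses_per_neuron = 1e3   # assumed, order-of-magnitude
events_per_sec = 1e2        # assumed firing rate, ~1 bit per synaptic event

bits_per_sec = neurons * synapses_per_neuron * events_per_sec
print(f"{bits_per_sec:.0e} bits per second")
```

Any pair of intermediate factors whose product is 10^5 per neuron per second gives the same endpoint, which is why estimates like this are quoted only to the nearest power of ten.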

Hell if I know what it all means — we're going to find out. That's the great thing. We're going to be around to find out some of what this means. If you think that information processing is where the action is, it may mean in fact that human beings are not going to be where the action is anymore. On the other hand, given that we are the people who created the devices that are doing this mass of information processing, we, as a species, are uniquely poised to make our lives interesting and fun in completely unforeseen ways.

Every physical system, just by existing, can register information. And every physical system, just by evolving according to its own peculiar dynamics, can process that information. I'm interested in how the world registers information and how it processes it. Of course, one way of thinking about all of life and civilization is as being about how the world registers and processes information. Certainly that's what sex is about; that's what history is about. But since I'm a scientist who deals with the physics of how things process information, I'm actually interested in that notion in a more specific way. I want to figure out not only how the world processes information, but how much information it's processing. I've recently been working on methods to assign numerical values to how much information is being processed, just by ordinary physical dynamics. This is very exciting for me, because I've been working in this field for a long time trying to come up with mathematical techniques for characterizing how things process information, and how much information they're processing.

 

About a year or two ago, I got the idea of asking the question, given the fundamental limits on how the world is put together — (1) the speed of light, which limits how fast information can get from one place to another, (2) Planck's constant, which tells you what the quantum scale is, how small things can actually get before they disappear altogether, and finally (3) the last fundamental constant of nature, which is the gravitational constant, which essentially tells you how large things can get before they collapse on themselves — how much information can possibly be processed. It turned out that the difficult part of this question was thinking it up in the first place. Once I'd managed to pose the question, it only took me six months to a year to figure out how to answer it, because the basic physics involved was pretty straightforward. It involved quantum mechanics, gravitation, perhaps a bit of quantum gravity thrown in, but not enough to make things too difficult.

The other motivation for trying to answer this question was to analyze Moore's Law. Many of our society's prized objects are the products of this remarkable law of miniaturization — people have been getting extremely good at making the components of systems extremely small. This is what's behind the incredible increase in the power of computers, what's behind the amazing increase in information technology and communications, such as the Internet, and it's what's behind pretty much every advance in technology you can possibly think of — including fields like material science. I like to think of this as the most colossal land grab that's ever been done in the history of mankind.

From an engineering perspective, there are two ways to make something bigger. One is to make it physically bigger (and human beings have spent a lot of time making things physically bigger: working out ways to deliver more power to systems, ways to build bigger buildings, ways to expand territory, ways to invade other cultures and take over their territory, etc.). But the other way to make things bigger is to make things smaller, because the real size of a system is not how big it actually is; the real size is the ratio between the biggest part of a system and the smallest part — or really, the smallest part that you can actually put to use in doing things. For instance, the reason that computers are so much more powerful today than they were ten years ago is that every year and a half or so, the basic components of computers — the basic wires, logic chips, etc. — have gone down in size by a factor of two. This is known as Moore's Law, which is just a historical fact about the history of technology.

Every time something's size goes down by a factor of two, you can cram twice as many of them into a box, and so every two years or so the power of computers doubles, and over the course of fifty years the power of computers has gone up by a factor of a million or more. The world has gotten a million times bigger because we've been able to make the smallest parts of the world a million times smaller. That makes this an exciting time to live in, but a reasonable question to ask is, where is all this going to end? Since Gordon Moore proposed it in the mid-1960s, Moore's Law has been written off numerous times. It was written off in the early 1970s because people thought that fabrication techniques for integrated circuits were going to break down and you wouldn't be able to get things smaller than a scale size of ten microns.
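The "factor of a million or more" is just compound doubling, and it is easy to check. A short sketch, using the two doubling times mentioned in this article (18 months in the Nature quote, about two years here):

```python
# Compound growth under Moore's Law: one doubling per period.
years = 50
for doubling_time in (1.5, 2.0):        # years per doubling
    doublings = years / doubling_time
    factor = 2 ** doublings
    print(f"doubling every {doubling_time} yr: "
          f"{doublings:.1f} doublings, growth x{factor:.2e}")
```

With a two-year doubling time, fifty years gives 25 doublings, a factor of about 3 x 10^7; with 18 months it gives about 33 doublings, roughly 10^10 — both comfortably "a million or more."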

Now Moore's Law is being written off again because people say that the insulating barriers between wires in computers are getting to be only a few atoms thick, and when you have an insulator that's only a few atoms thick then electrons can tunnel through them and it's not a very good insulator. Well, perhaps that will stop Moore's Law, but so far nothing has stopped it.

At some point, though, Moore's Law has to stop. This question involves the ultimate physical limits to computation: you can't send signals faster than the speed of light, you can't make things smaller than the laws of quantum mechanics tell you that you can, and if you make things too big, then they just collapse into one giant black hole. As far as we know, it's impossible to fool Mother Nature.

I thought it would be interesting to see what the basic laws of physics say about how fast, how small, and how powerful computers can get. Actually these two questions — given the laws of physics, how powerful can computers be, and where must Moore's Law eventually stop — turn out to be exactly the same question, because both stop at the same place: where every available physical resource is used to perform computation. Every little subatomic particle, every ounce of energy, every photon in your system — everything is being devoted to performing a computation. The question is, how much computation is that? In order to investigate this, I thought that a reasonable form of comparison would be to look at what I call the ultimate laptop. Let's ask just how powerful this computer could be.

The idea here is that we can actually relate the laws of physics and the fundamental limits of computation to something that we are familiar with — something of human scale, with a mass of about a kilogram, like a nice laptop computer, and about a liter in volume, because a kilogram and a liter are a comfortable size to hold in your lap, a reasonable size to look at, and you can put it in your briefcase, et cetera. After working on this for nearly a year, what I was able to show was that the laws of physics give absolute answers to how much information you could process with a kilogram of matter confined to a volume of one liter. Not only that: surprisingly, or perhaps not so surprisingly, the amount of information that can be processed — the number of bits that you could register in the computer, and the number of operations per second that you could perform on those bits — is related to basic physical quantities, and to the aforementioned constants of nature, the speed of light, Planck's constant, and the gravitational constant. In particular, you can show without much trouble that the number of ops per second — the number of basic logical operations per second that you can perform using a certain amount of matter — is proportional to the energy of that matter.

For those readers who are technically minded, it's not very difficult to whip out the famous formula E = mc2 and show, using the work of Norm Margolus and Lev Levitin here in Boston, that the total number of elementary logical operations that you can perform per second using a kilogram of matter is the energy, mc2, times two, divided by H-bar, Planck's constant, times pi. Well, you don't have to be Einstein to do the calculation: the mass is one kilogram, the speed of light is 3 times ten to the eighth meters per second, so mc2 is about ten to the 17th joules, quite a lot of energy (I believe it's roughly the amount of energy used by all the world's nuclear power plants in the course of a week or so), but let's suppose you could use it to do a computation. So you've got ten to the 17th joules, and H-bar, the quantum scale, is ten to the minus 34 joule-seconds, roughly. So there you go. I have ten to the 17th joules, I divide by ten to the minus 34 joule-seconds, and I have the number of ops: ten to the 51 ops per second. So you can perform 10 to the 51 operations per second — about a million billion billion billion billion billion ops per second — a lot faster than conventional laptops. And this is the answer. You can't do any better than that, so far as the laws of physics are concerned.
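That back-of-envelope calculation can be checked directly. A sketch using the Margolus-Levitin form quoted in the text, ops per second = 2E / (pi * hbar), with standard approximate values for the constants:

```python
import math

# Ultimate-laptop speed limit as described in the text:
# ops/sec <= 2 * E / (pi * hbar), with E = m * c^2 for 1 kg of matter.
m = 1.0          # mass in kg
c = 2.998e8      # speed of light, m/s
hbar = 1.055e-34 # reduced Planck constant, J*s

E = m * c ** 2                       # ~9e16 J, "ten to the 17th joules"
ops_per_sec = 2 * E / (math.pi * hbar)

print(f"E           = {E:.2e} J")
print(f"ops per sec = {ops_per_sec:.2e}")
```

The result comes out a little above 5 x 10^50, which rounds to the "ten to the 51 ops per second" of the text once the order-of-magnitude inputs (10^17 J and 10^-34 J*s) are used.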

Of course, since publication of the Nature article, people keep calling me up to order one of these laptops; unfortunately, the fabrication plant to build it has not yet been constructed. You might then ask why our conventional laptops are so slow by comparison, when we've been on this Moore's Law track for 50 years now. The answer is that they make the mistake, which could be regarded as a safety feature of the laptop, of locking up most of their energy in the form of matter, so that rather than using that energy to manipulate and transform information, most of it goes into making the laptop sit around and be a laptop.

As you can tell, if I were to take a week's energy output of all the world's nuclear power plants and liberate it all at once, I would have something that looked a lot like a thermonuclear explosion, because a thermonuclear explosion is essentially taking roughly a kilogram of matter and turning it into energy. So you can see right away that the ultimate laptop would have some relatively severe packaging problems. Since I am a professor of mechanical engineering at MIT, I think packaging problems are where it's at. We're talking about some very severe materials and fabrication problems to prevent this thing from taking not only you but the entire city of Boston out with it when you boot it up for the first time.

Needless to say, we didn't actually figure out how we were going to put this thing into a package, but that's part of the fun of doing calculations according to the ultimate laws of physics. We decided to figure out how many ops per second we could perform, and to worry about the packaging afterwards. Now that we've got 10 to the 51 ops per second, the next question is: what's the memory space of this laptop?

When I go out to buy a new laptop, I first ask how many ops per second it can perform. If it's something like a hundred megahertz, it's pretty slow by current standards; if it's a gigahertz, that's pretty fast, though we're still very far away from 10 to the 51 ops per second. With a gigahertz, we're approaching 10 to the 10th, 10 to the 11th, 10 to the 12th ops per second, depending on how ops per second are currently counted. Next, how many bits do I have: how big is the hard drive for this computer, or how big is its RAM? We can also use the laws of physics to calculate that figure. And computing memory capacity is something that people could have done back in the early decades of the twentieth century.

We know how to count bits. We take the number of states, and the number of states is two raised to the power of the number of bits. Ten bits, two to the tenth: 1024 states. Twenty bits, two to the twentieth: roughly a million states. You keep on going and you find that with about 300 bits, two to the 300, you get about ten to the 90, which is considerably greater than the number of particles in the universe, roughly ten to the 80. If you had 300 bits, you could assign every particle in the universe a serial number, which is a powerful use of information. You can use a very small number of bits to label a huge number of things.
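The counting here is just powers of two. A quick sketch in Python, taking the commonly quoted rough figure of 10 to the 80 particles in the observable universe:

```python
# n bits distinguish 2**n states
assert 2**10 == 1024                 # ten bits: 1024 states
assert 2**20 == 1048576              # twenty bits: about a million states

states_300 = 2**300                  # 300 bits
digits = len(str(states_300)) - 1    # order of magnitude, base 10
print(f"2^300 is about 10^{digits}")  # about 10^90
print(states_300 > 10**80)           # more states than particles: True
```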

How many bits does this ultimate laptop have?

I have a kilogram of matter confined to the volume of a liter; how many states, how many possible states for matter confined to the volume of a liter can there possibly be?

This happened to be a calculation that I knew how to do, because I had studied cosmology, and in cosmology there's this event, called the Big Bang, which happened a long time ago, about 13 billion years ago, and during the Big Bang, matter was at extremely high densities and pressures.

I learned from cosmology how to calculate the number of states for matter at very high densities and pressures. In actuality, the density here is not that great: I have a kilogram of matter in a liter, a density similar to what we might normally expect today. However, if you want to ask what the number of states is for this matter in a liter, I've got to count every possible configuration, every possible elementary quantum state, for this kilogram of matter in a liter of volume. It turns out, when you count most of these states, that this matter looks like it's in the midst of a thermonuclear explosion: like a little piece of the Big Bang, a few seconds after the universe was born, when the temperature was around a billion degrees. At a billion degrees, if you ask what most states for matter look like when it's completely liberated and able to do whatever it wants, you'll find that it looks a lot like a plasma at a billion degrees kelvin. Electrons and positrons are forming out of nothing and going back into photons again, there are lots of elementary particles whizzing about, and it's very hot. Lots of stuff is happening, and you can still count the number of possible states using the conventional methods that people use to count states in the early universe. You take the logarithm of the number of states and get a quantity that's normally thought of as the entropy of the system. The entropy also gives you the number of bits, because the logarithm of the number of states, taken base 2, is the number of bits: 2 raised to the power of the number of bits is the number of states. What more do I need to say?

When we count them, we find that there are roughly 10 to the 31 bits available. That means there are 2 to the 10 to the 31 possible states that this matter could be in. That's a lot of states, but we can count them. The interesting thing is that we've got 10 to the 31 bits and we're performing 10 to the 51 ops per second, so each bit can perform about 10 to the 20 ops per second. What does this quantity mean?

It turns out that this quantity, the number of ops per second per bit, is essentially the temperature of this plasma. I take this temperature and multiply it by Boltzmann's constant, and what I get is the energy per bit; that's what temperature is. It tells you the energy per bit, how much energy is available for a bit to perform a logical operation. And since a certain amount of energy lets me perform a certain number of operations per second, dividing that energy per bit by Planck's constant tells me how many ops per bit per second I can perform.
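As a sanity check, the ratio of ops to bits and the temperature estimate can be compared directly. This Python sketch uses rounded values for Boltzmann's constant and H-bar, and the billion-degree temperature from the text:

```python
kB = 1.38e-23    # Boltzmann's constant, J/K (rounded)
hbar = 1.05e-34  # reduced Planck constant, J*s (rounded)
T = 1.0e9        # plasma temperature from the text, kelvin

ops_per_bit = 1e51 / 1e31        # total ops per second / number of bits
rate_from_T = kB * T / hbar      # energy per bit divided by the quantum scale

print(f"ops per bit per second: {ops_per_bit:.0e}")  # 1e+20
print(f"kT / hbar:              {rate_from_T:.1e}")  # about 1.3e+20
```

The two numbers agree to within a small factor, which is the sense in which the ops-per-bit rate "is" the temperature.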

Then I know not only the number of ops per second and the number of bits, but also the number of ops per bit per second that can be performed by this ultimate laptop, a kilogram of matter in a liter volume. It's the number of ops per bit per second that could be performed by these elementary particles back at the beginning of time, at the Big Bang: the total number of ops that each bit can perform per second, the number of times it can flip, the number of times it can interact with its neighboring bits, the number of elementary logical operations. And it's a number, right? 10 to the 20. Just the way the total number of bits, 10 to the 31, is a number, a physical parameter that characterizes a kilogram of matter in a liter of volume. Similarly, 10 to the 51 ops per second is the number of ops per second that characterizes a kilogram of matter, whether it's in a liter volume or not.

We've gone a long way down this road, so there's no point in stopping, at least in these theoretical exercises where nobody gets hurt. So far all we've used are the elementary constants of nature. The speed of light tells us the rate of converting matter into energy, via E = MC2: how much energy we get from a particular mass. Then we use the Planck scale, the quantum scale, because the quantum scale tells you both how many operations per second you can get from a certain amount of energy, and how to count the number of states available for a certain amount of energy. So by taking the speed of light and the quantum scale, we are able to calculate the number of ops per second that a certain amount of matter can perform, and we're able to calculate the amount of memory space that we have available for our ultimate computer.

Then we can also calculate all sorts of other interesting quantities, like the possible input-output rate for all these bits in a liter of volume. That can be calculated quite easily from what I've just described, because to get all this information into and out of a liter volume, you can say: okay, here are all these bits sitting in a liter volume; let's move this liter volume over, by its own length, at the speed of light. You're not going to be able to get the information in or out faster than that.

We can find out how many bits per second we get into and out of our ultimate laptop. And we find we can get around 10 to the 40, or 10 to the 41, or perhaps, in honor of Douglas Adams and his mystical number 42, even 10 to the 42 bits per second in and out of our ultimate laptop. So you can calculate all these different parameters that you might think are interesting, and that tells you how good a modem you could possibly have for this ultimate laptop — how many bits per second can you get in and out over the Ultimate Internet, whatever the ultimate Internet would be. I guess the Ultimate Internet is just space/time itself in this picture.
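This input-output estimate is easy to reproduce: take the 10 to the 31 bits and move the liter volume over by its own length at the speed of light. A sketch in Python:

```python
c = 3.0e8    # speed of light, m/s
L = 0.1      # side of a one-liter cube, meters
bits = 1e31  # memory of the ultimate laptop

crossing_time = L / c            # ~3.3e-10 s to move the volume by its own length
io_rate = bits / crossing_time   # bits per second in or out

print(f"I/O rate ~ {io_rate:.0e} bits per second")  # 3e+40
```

That lands at the low end of the 10 to the 40 to 10 to the 42 range quoted above; the exact prefactor depends on how the geometry is counted.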

I noted that you can't possibly do better than this, right? These are the laws of physics. But you might be able to do better in other ways. For example, let's think about the architecture of this computer. I've got this computer that's doing 10 to the 51 ops per second, with 10 to the 31 bits. Each bit can flip 10 to the 20 times per second. That's pretty fast. The next question is how long it takes a bit on this side of the computer to send a signal to a bit on that side of the computer in the course of the time it takes to do an operation.

As we've established, it has a liter volume, which is about ten centimeters on each side, so it takes about ten to the minus ten seconds, one ten-billionth of a second, for light to go from one side to the other. These bits are flipping 10 to the 20 times per second, a hundred billion billion times per second. So each bit flips about ten billion times in the time it takes a signal to go from one side of the computer to the other. This is not a very serial computation: a lot of action is taking place on this side of the computer in the time it takes to communicate with the other side. This is what's called a parallel computation.

You could say that in the kinds of densities of matter that we're familiar with, like a kilogram per liter volume, which is the density of water, we find that we can only perform a very parallel computation, if we operate at the ultimate limits of computation; lots of computational action takes place over here during the time it takes a signal to go from here to there and back again.

How can we do better? How could we make the computation more serial?

Let's suppose that we want our machine to do more serial computation, so in the time it takes to send a signal from one side of the computer to the other, there are fewer ops that are being done. The obvious solution is to make the computer smaller, because if I make the computer smaller by a factor of two, it only takes half the time for light, for a signal, for information, to go from one side of the computer to the other.

If I make it smaller by a factor of ten billion, it only takes one ten billionth of the time for its signal to go from one side of the computer to the other. You also find that when you make it smaller, these pieces of the computer tend to speed up, because you tend to have more energy per bit available in each case. If you go through the calculation you find out that as the computer gets smaller and smaller, as all the mass is compressed into a smaller and smaller volume, you can do a more serial computation.

When does this process stop? When can every bit in the computer talk with every other bit, in the course of time it takes for a bit to flip? When can everybody get to talk with everybody else in the same amount of time that it takes them to talk with their neighbors?

As you make the computer smaller and smaller, it gets denser and denser, until you have a kilogram of matter in an ever smaller volume. Eventually the matter assumes more and more interesting configurations, until it's actually going to take a very high pressure to keep this system down at this very small volume. The matter assumes stranger and stranger configurations, and tends to get hotter and hotter and hotter, until at a certain point a bad thing happens. The bad thing that happens is that it's no longer possible for light to escape from it — it becomes a black hole.

What happens to our computation at this point? This is probably very bad for the computation, right? Or rather, it's going to be bad for input-output. Input is fine, because stuff can still go in, but output is bad, because nothing comes out of a black hole. Luckily, however, we're safe, because the very laws of quantum mechanics that we were using to calculate how much information a physical system can register, how fast it can perform computations, and how much information it can process still hold for black holes.

Stephen Hawking showed, in the 1970s, that black holes, if you treat them quantum-mechanically, actually can radiate out information. There's an interesting controversy as to whether that information has anything to do with the information that went in. Stephen Hawking and John Preskill have a famous bet, where Preskill says yes — the information that comes out of a black hole reflects the information that went in. Hawking says no — the information that comes out of a black hole when it radiates doesn't have anything to do with the information that went in; the information that went in goes away. I don't know the answer to this.

But let's suppose for a moment that Hawking is wrong and Preskill is right: suppose that the information that comes out of a black hole when it evaporates really is the information that went in. The wavelength of the radiation coming out is about the radius of the black hole. This kilogram black hole is really radiating at a whopping rate; it's radiating out photons with wavelengths of around 10 to the minus 27 meters. This is not something you would wish to be close to; it would be very dangerous, and in fact it would look a lot like a huge explosion. But suppose that the information being radiated out by the black hole is in fact the information that went in to construct it, simply transformed in a particular way. What you then see is that the black hole can be thought of, in some sense, as performing a computation.
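The 10 to the minus 27 meter figure is the Schwarzschild radius of a one-kilogram mass, r = 2Gm/c2; a quick Python check, with G and c rounded:

```python
G = 6.67e-11  # gravitational constant, m^3 kg^-1 s^-2 (rounded)
c = 3.0e8     # speed of light, m/s
m = 1.0       # mass, kg

r = 2 * G * m / c**2            # Schwarzschild radius of a 1 kg black hole
print(f"r ~ {r:.1e} m")         # about 1.5e-27 m
print(f"r^3 ~ {r**3:.0e} m^3")  # about 3e-81 cubic meters
```

The cube of this radius is also where the "10 to the minus 81 cubic meters" volume quoted later comes from.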

You take the information about the matter that's used to form the black hole, you program it in the sense that you give it a particular configuration: you put this electron here, you put that electron there, you make that thing vibrate like this. Then you collapse it into a black hole, and about 10 to the minus 20 seconds later, in a hundred-billion-billionth of a second, the thing goes cablooey, and you get all this information out again. But now the information has been transformed, by some dynamics, and we don't know what this dynamics is, into a new form.

In fact we would need to know something like string theory or quantum gravity to figure out how it's been transformed. But you can imagine that this could in fact function as a computer. We don't know how to make it compute, but indeed it's taking in information, transforming it in a systematic way according to the laws of physics, and then, poof! It spits it out again.

It's a dangerous thing. The ultimate laptop was already pretty dangerous, because it looked like a thermonuclear explosion inside a liter bottle of Coca-Cola. This is even worse, because it looks like a thermonuclear explosion that started out at a radius of 10 to the minus 27 meters, one billion-billion-billionth of a meter, so it's really radiating at a very massive rate. But suppose you could somehow read the information coming out of the black hole. You would then have performed the ultimate computation that you could perform using a kilogram of matter, in this case confining it to a volume of about 10 to the minus 81 cubic meters. Pretty minuscule, but we're allowed to imagine this happening.

Is there anything more to the story?

After writing my paper on the ultimate laptop in Nature, I realized this was insufficiently ambitious; the obvious question to ask at this point is not what the ultimate computational capacity of a kilogram of matter is, but what the ultimate computational capacity of the universe as a whole is. After all, the universe is processing information, right? Just by existing, all physical systems register information, and just by evolving under their own natural physical dynamics, they transform that information, they process it. So the question then is: how much information has the universe processed since the Big Bang?