Part Five SOMETHING THAT GOES BEYOND OURSELVES

W. Daniel Hillis, John Brockman [5.1.96]

New technology equals new perceptions. As we create tools, we re-create ourselves in their image. Newtonian mechanics gave birth to the metaphor of the heart as a pump. A generation ago, with the advent of cybernetics, information science, and artificial intelligence, we began to think of the brain as a computer. We now have arrived at a new intersection of the empirical and the epistemological. Recent technological breakthroughs in the realm of massively parallel computers and their associated algorithms are having a major impact on the images we have of ourselves and our place in the universe. We have broken through the von Neumann bottleneck of the serial computer.

W. Daniel Hillis brings together, in full circle, many of the ideas in this book: Marvin Minsky's society of mind; Christopher G. Langton's artificial life; Richard Dawkins' gene's-eye view; the plectics practiced at Santa Fe. Hillis developed the algorithms that made possible the massively parallel computer. He began in physics and then went into computer science — where he revolutionized the field — and now he has begun to bring his algorithms to bear on the study of evolution. He sees the autocatalytic effect of fast computers, which lets us design better and faster computers faster, as analogous to the evolution of intelligence. At MIT in the late seventies, Hillis built his "connection machine," a computer that makes use of integrated circuits and, in its parallel operations, closely reflects the workings of the human mind. In 1983, he spun off a computer company called Thinking Machines, which set out to build the world's fastest supercomputer by utilizing parallel architecture.

The massively parallel computational model is critical to the whole set of ideas presented in this book. Hillis's computers, which are fast enough to simulate the process of evolution itself, have shown that programs of random instructions can, by competing, produce new generations of programs — an approach that may well lead to the first machine that truly "thinks." Hillis's work demonstrates that when systems are not engineered but instead allowed to evolve — to build themselves — then the resultant whole is greater than the sum of its parts. Simple entities working together produce some complex thing that transcends them; the implications for biology, engineering, and physics are enormous.


Chapter 23

W. DANIEL HILLIS

"Close to the Singularity"



Marvin Minsky: Danny Hillis is one of the most inventive people I've ever met, and one of the deepest thinkers. He's contributed many important ideas to computer science — especially, but not exclusively, in the domain of parallel computation. He's taken many algorithms that people believed could run only on serial machines and found new ways to make them run in parallel — and therefore much faster. Whenever he gets a new idea, he soon sees ways to test it, to build machines that exploit it, and to discover new mathematical ways to prove things about it. After doing wonderful things in computer science, he got interested in evolution, and I think he's now on the road to becoming one of our major evolutionary theorists.

W. DANIEL HILLIS is a computer scientist; cofounder and chief scientist of Thinking Machines Corporation; holder of thirty-four U.S. patents; editor of several scientific journals, including Artificial Life, Complexity, Complex Systems, and Future Generation Computer Systems; author of The Connection Machine (1985).


[W. Daniel Hillis:] I like making things that have complicated behaviors. The ultimate thing that has a complicated behavior is, of course, a mind. The Holy Grail of engineering for the last few thousand years has been to construct a device that will talk to you and learn and reason and create. The first step in doing that requires a very different kind of computer from the simple sequential computers we deal with every day, because these aren't nearly powerful enough. The more they know, the slower they get — as opposed to the human mind, which has the opposite property. Most computers are designed to do things one at a time. For instance, when they look at a picture, they look at every dot in the picture one by one; when they look at a database, they search through the facts one by one. The human mind manages to look all at once at everything it knows and then somehow pull out the relevant piece of information. What I wanted to do was make a computer that was more like that.
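
The contrast here can be made concrete with a small sketch. The toy Python program below is only an illustration of the idea, not anything Hillis built; the list of "facts" and the query are invented placeholders. A serial scan examines one item at a time, while a data-parallel scan splits the data among several workers that all look at their own pieces at once.

    # A minimal sketch (not Hillis's code) contrasting the two styles he
    # describes: a serial scan that looks at one fact at a time, and a
    # data-parallel scan in which several workers look at chunks at once.
    # The "facts" and the query are made-up placeholders.
    from multiprocessing import Pool

    FACTS = [f"fact number {i}" for i in range(1_000_000)]
    QUERY = "fact number 421337"

    def serial_search(facts, query):
        # The conventional computer: examine every item, one by one.
        for index, fact in enumerate(facts):
            if fact == query:
                return index
        return -1

    def scan_chunk(args):
        # One simple worker examines only its own slice of the data.
        start, chunk, query = args
        for offset, fact in enumerate(chunk):
            if fact == query:
                return start + offset
        return -1

    def parallel_search(facts, query, workers=8):
        # The parallel style: split the data among several processors and
        # let them all look at their pieces at the same time.
        size = len(facts) // workers + 1
        jobs = [(i, facts[i:i + size], query) for i in range(0, len(facts), size)]
        with Pool(workers) as pool:
            for hit in pool.map(scan_chunk, jobs):
                if hit != -1:
                    return hit
        return -1

    if __name__ == "__main__":
        print(serial_search(FACTS, QUERY), parallel_search(FACTS, QUERY))

On an ordinary machine the parallel version merely divides the same work among a handful of cores; the connection machine's point was to push that same division of labor across many thousands of very simple processors.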

It became clear that by using integrated-circuit technology you could build a computer that was structured much more like a human brain; it would do many simple things simultaneously, in parallel, instead of rapidly running through a sequence of things. That principle clearly works in the mind, because the mind manages to work with the hardware of the brain, and the hardware of the brain is actually very slow hardware compared with the hardware of the digital computer.

With modern integrated circuits, it's possible to replicate something over and over again very inexpensively, so I started building a computer by replicating simple processing circuits over and over again and then allowing them to connect with each other in arbitrary patterns. Of course, the other thing about your mind is that if I slice up your brain, I see that it's almost all wires. It's all connections between the neurons. Putting into the computer the telephone system that will connect all those little processing elements is the hardest part. That's why my computer was called "the connection machine." I designed it at MIT, but I realized that it was much too big and complicated to be built at a university. It was going to require hundreds of people and tens of millions of dollars. So in 1983 I started the Thinking Machines Corporation, and we spent the next ten years becoming the company that made the world's biggest and fastest computers. The irony is that we were so distracted with all this scientific computing that I haven't made nearly as much progress on the thing I started out with, which is the thinking computer.

My view of what it's going to take to make a thinking machine has changed in recent years. When we started out, I naively believed that each of the pieces of intelligence could be engineered. I still believe that would be possible in principle, but it would take three hundred years to do it. There are so many different aspects to making an intelligent machine that if we used normal engineering methods the complexity would overwhelm us. That presents a great practical difficulty for me; I want to get this project done in my lifetime.

The other thing I've learned is how hard it is to get lots of people to work together on a project and manage the complexity. In some senses, a big connection machine is the most complicated machine humans have ever built. A connection machine has a few hundred billion active parts, all of which are working together, and the way they interact isn't really understood, even by its designers. The only way to design an object of this much complexity is to break it into parts. We decide it's going to have this box and that box and that box, and we send a group of people off to do each of those, and they have to agree on the interfaces before they go off and design their boxes.

Imagine engineering a thinking machine that way. Somebody like Marvin Minsky would say, "O.K., there's a vision box and a reason box and a grammar box," and so on. Then we might break the project up into parts and say, "O.K., Tommy" — Tomaso Poggio, at MIT — "you go off and do the vision box," and we'd get Steve Pinker to do the grammar box, and Roger Schank to do the story box. Then Poggio would take the vision box and say, "All right, we need a depth-perception box, and we need a color-recognition box," and so on. Then the depth-perception team would say, "O.K., we need a box that perceives depth by focus cues and a box that perceives depth by binocular vision." Imagine a collection of tens of thousands of people doing these modules, which is how we'd have to engineer it. If you engineer something that way, it has to decompose, and it has to go through these fairly standardized interfaces. There's every reason to believe that the brain is not, in fact, that neatly partitioned. If you look at biological systems in general, while they're hierarchical at a gross level, there's a complex set of interactions between all the parts that doesn't follow the hierarchy. But I'm convinced that our standard methods of engineering wouldn't work very well for designing the brain, although not because of any physical principles we can't control. The brain is an information-processing device, and it does nothing that any universal information-processing device couldn't do.

There's another approach besides this strict engineering approach which can produce something of that complexity, and that's the evolutionary approach. We humans were produced by a process that wasn't engineering. We now have computers fast enough to simulate the process of evolution within the computer. So we may be able to set up situations in which we can cause intelligent programs to evolve within the computer.

I have programs that have evolved within the computer from nothing, and they do fairly complicated things. You begin by putting in sequences of random instructions, and these programs compete and interact with each other and have sex with each other and produce new generations of programs. If you put them in a world where they survive by solving a problem, then with each successive generation they get better and better at solving the problem, and after a few hundred thousand generations they solve the problem very well. That approach may actually be used to produce the thinking machine.

One of the most interesting things is that larger-order things emerge from the interaction of smaller things. Imagine what a multicellular organism looks like to a single-celled organism. The multicellular organism is dealing at a level that would be incomprehensible to a single-celled organism. I think it's possible that the part of our mind that does information processing is in large part a cultural artifact. A human who's not brought up around other humans isn't a very smart machine at all. Part of what makes us smart is our culture and our interactions with others. That's part of what would make a thinking machine smart, too. It would have to interact with humans and be part of that human culture.

On the biology side, how does this simple process of evolution organize itself into complicated biological organisms? On the engineering side, how do we take simple switching devices like transistors, whose properties we understand, and cause them to do something complex that we don't understand? On the physics side, we're studying the general phenomenon of emergence, of how simple things turn into complex things. All these disciplines are trying to get at essentially the same thing, but from different angles: how can the whole be more than the sum of the parts? How can simple, dumb things working together produce a complex thing that transcends them? That's essentially what Marvin Minsky's "society of mind" theory is about; that's what Chris Langton's "artificial life" is about; that's what Richard Dawkins' investigation of evolution is about; that's fundamentally what the physicists who are looking at emergent properties are studying; that's what Murray Gell-Mann's work on quarks is about; that is the thread that binds all these ideas together.

I am excited by the idea that we may find a way to exploit some general principles of organization to produce something that goes beyond ourselves. If you step back a zillion years, you can look at the history of life on Earth as fitting into this pattern. First, fundamental particles organized themselves into chemistry. Then chemistry organized itself into self-reproducing life. Then life organized itself into multicellular organisms and multicellular organisms organized themselves into societies bound together by language. Societies are now organizing themselves into larger units and producing something that connects them technologically, producing something that goes beyond them. These are all steps in a chain, and the next step is the building of thinking machines.

To me, the most interesting thing in the world is how a lot of simple, dumb things organize themselves into something much more complicated that has behavior on a higher level. Everything I'm interested in — whether it's the brain, or parallel computers, or phase transitions in physics, or evolution — fits into that pattern. Right now, I'm trying to reproduce within the computer the process of evolution, with the goal of getting intelligent behavior out of machines. What we do is put inside the machine an evolutionary process that takes place on a timescale of microseconds. For example, in the most extreme cases, we can actually evolve a program by starting out with random sequences of instructions — say, "Computer, would you please make a hundred million random sequences of instructions. Now, execute all those random sequences of instructions, all those programs, and pick out the ones that came closest to what I wanted." In other words, I define what I want to accomplish, not how to accomplish it.

If I want a program that sorts things into alphabetical order, I'll use this simulated evolution to find the programs that are most efficient at alphabetizing. Of course, random sequences of instructions are unlikely to alphabetize, so none of them does it initially, but one of them may fortuitously put two words in the right order. Then I say to the computer, "Would you please take the 10 percent of those random programs that did the best job, save those, kill the rest, and have the ones that sorted the best reproduce by a process of recombination, analogous to sex. Take two programs and produce children by exchanging their subroutines." The "children" inherit the "traits," the subroutines, of the two programs. Now I have a new generation of programs, produced by combinations of the programs that did a superior job, and I say, "Please repeat that process, score them again, introduce some mutations, and repeat the process again and again, for many generations." Every one of those generations takes just a few milliseconds, so I can do the equivalent of millions of years of evolution within the computer in a few minutes — or, in complicated cases, in a few hours. Finally, I end up with a program that's absolutely perfect at alphabetizing, and it's much more efficient than any program I could ever have written by hand. But if I look at that program, I'm unable to tell you how it works. It's an obscure, weird program, but it does the job, because it comes from a line of hundreds of thousands of programs that did the job. In fact, those programs' lives depended on doing the job.
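
The recipe just described maps onto a small genetic algorithm. The Python sketch below is a toy stand-in, not Hillis's actual system: a "program" here is a fixed-length list of compare-and-swap instructions, fitness is how close its output comes to sorted order on random test lists, the best 10 percent survive, and children are built by exchanging blocks of instructions and mutating a few of them. The population size, program length, mutation rate, and scoring rule are all assumptions chosen for illustration.

    # A toy version of the simulated evolution described above, written for
    # illustration only; the representation, parameters, and scoring are
    # assumptions, not Hillis's actual setup.
    import random

    N = 8               # length of the lists each program must sort
    PROG_LEN = 40       # compare-and-swap instructions per program
    POP_SIZE = 100
    GENERATIONS = 100

    def random_program():
        # A program is just a random sequence of instructions.
        return [(random.randrange(N), random.randrange(N)) for _ in range(PROG_LEN)]

    def run(program, data):
        data = list(data)
        for i, j in program:
            a, b = min(i, j), max(i, j)
            if data[a] > data[b]:           # one compare-and-swap instruction
                data[a], data[b] = data[b], data[a]
        return data

    def fitness(program, tests):
        # Score: fraction of adjacent pairs left in the right order.
        score = 0
        for t in tests:
            out = run(program, t)
            score += sum(out[k] <= out[k + 1] for k in range(N - 1))
        return score / (len(tests) * (N - 1))

    def crossover(mom, dad):
        # "Sex": a child inherits a block of instructions from each parent,
        # the analogue of exchanging subroutines.
        cut = random.randrange(1, PROG_LEN)
        return mom[:cut] + dad[cut:]

    def mutate(program, rate=0.02):
        return [(random.randrange(N), random.randrange(N))
                if random.random() < rate else instr for instr in program]

    population = [random_program() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        tests = [random.sample(range(100), N) for _ in range(10)]
        ranked = sorted(population, key=lambda p: fitness(p, tests), reverse=True)
        survivors = ranked[:POP_SIZE // 10]          # keep the best 10 percent
        population = survivors + [
            mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(POP_SIZE - len(survivors))
        ]

    best = max(population, key=lambda p: fitness(p, tests))
    print("best fitness on the final test set:", round(fitness(best, tests), 3))

After enough generations the surviving instruction sequences sort well, yet inspecting one tells you little about why it works, which is exactly the situation described next.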

How do I really know the program will work? In the sorting case, I test it. What if it was something really important? What if this program was going to fly an airplane? Well, you might say, "Gee, it's really scary having a program flying an airplane when we don't have any idea how it works!" But that's exactly what you have with a human pilot; you have a program that was produced by a very similar method, and we have great confidence in it. I have much less confidence in the airplane itself, which was designed very precisely by a lot of very smart engineers. I remember riding in a 747 with Marvin Minsky once, and he pulled out this card from the seat pocket, which said, "This plane has hundreds of thousands of tiny parts, all working together to give you a safe flight." Marvin said, "Doesn't that make you feel confident?"

The engineering process doesn't work very well when it gets complicated. We're beginning to depend on computers that use a process very different from engineering — a process that allows us to produce things of much more complexity than we could with normal engineering. Yet we don't quite understand the possibilities of that process, so in a sense it's getting ahead of us. We're now using those programs to make much faster computers so that we will be able to run this process much faster. The process is feeding on itself. It's becoming faster. It's autocatalytic. We're analogous to the single-celled organisms when they were turning into multicellular organisms. We're the amoebas, and we can't quite figure out what the hell this thing is that we're creating. We're right at that point of transition, and there's something coming along after us.

It's haughty of us to think we're the end product of evolution. All of us are a part of producing whatever is coming next. We're at an exciting time. We're close to the singularity. Go back to that litany of chemistry leading to single-celled organisms, leading to intelligence. The first step took a billion years, the next step took a hundred million, and so on. We're at a stage where things change on the order of decades, and it seems to be speeding up. Technology has the autocatalytic effect of fast computers, which let us design better and faster computers faster. We're heading toward something which is going to happen very soon — in our lifetimes — and which is fundamentally different from anything that's happened in human history before.

People have stopped thinking about the future, because they realize that the future will be so different. The future their grandchildren are going to live in will be so different that the normal methods of planning for it just don't work anymore. When I was a kid, people used to talk about what would happen in the year 2000. Now, at the end of the century, people are still talking about what's going to happen in the year 2000. The future has been shrinking by one year per year, ever since I was born. If I try to extrapolate the trends, to look at where technology's going sometime early in the next century, there comes a point where something incomprehensible will happen. Maybe it's the creation of intelligent machines. Maybe it's telecommunications merging us into a global organism. If you try to talk about it, it sounds mystical, but I'm making a very practical statement here. I think something's happening now — and will continue to happen over the next few decades — which is incomprehensible to us, and I find that both frightening and exciting. 


Excerpted from The Third Culture: Beyond the Scientific Revolution by John Brockman (Simon & Schuster, 1995). Copyright © 1995 by John Brockman. All rights reserved.