PERSONAL FABRICATION

Neil Gershenfeld [7.21.03]

We've already had a digital revolution; we don't need to keep having it. The next big thing in computers will be literally outside the box, as we bring the programmability of the digital world to the rest of the world. With the benefit of hindsight, there's a tremendous historical parallel between the transition from mainframes to PCs and now from machine tools to personal fabrication. By personal fabrication I mean not just making mechanical structures, but fully functioning systems including sensing, logic, actuation, and displays.


Introduction

Neil Gershenfeld teaches a class at MIT called "How To Make (almost) Anything," where the students have access to high-end tools on which the university spends millions of dollars. He expected his course to be a lab for the top engineering students to master the machines. Instead, he is finding that nontechnical students are showing up and bringing varied backgrounds to bear on exploiting the possibilities and capabilities of the newest technology available.

"One student, a sculptor with no engineering background," he reports, "made a portable personal space for screaming that saves up your screams and plays them back later. Another made a Web browser that lets parrots navigate the Net."

"From this combination of passion and inventiveness", he goes on, "I began to get a sense that what these students are really doing is reinventing literacy. Literacy in the modern sense emerged in the Renaissance as mastery of the liberal arts. This is liberal in the sense of liberation, not politically liberal."

—JB

 

NEIL GERSHENFELD directs MIT's Center for Bits and Atoms. His unique research group investigates the relationship between the content of information and its physical representation, from molecular quantum computers to virtuosic musical instruments. Technology from his laboratory has been seen and used in settings including New York's Museum of Modern Art and rural Indian villages, the White House/Smithsonian Millennium celebration and automobile safety systems, Las Vegas shows and Sami reindeer herds.

He is the author of numerous technical publications, patents, and books including When Things Start To Think, The Nature of Mathematical Modeling, and The Physics of Information Technology, and has been featured in media such as The New York Times, The Economist, CNN, and PBS.

Neil Gershenfeld's Edge Bio Page 


PERSONAL FABRICATION

I run the Center for Bits and Atoms at MIT. It involves about 20 research groups from across campus: biologists, chemists, physicists, mathematicians, various kinds of engineers — all people like me for whom the boundary between computer science and physical science never made sense. We think about how information relates to its physical properties. The way the world's evolved, hardware has been separated from software, and channels from their content, but many of the hardest, most challenging, and most interesting problems lie right at this interface. These range from some of the deepest questions about how the universe works to some of the most important current technological frontiers.


[Video: "Personal Fabrication," Edge Foundation on Vimeo, 11 minutes]


Let's start with the development of "personal fabrication." We've already had a digital revolution; we don't need to keep having it. The next big thing in computers will be literally outside the box, as we bring the programmability of the digital world to the rest of the world. With the benefit of hindsight, there's a tremendous historical parallel between the transition from mainframes to PCs and now from machine tools to personal fabrication. By personal fabrication I mean not just making mechanical structures, but fully functioning systems including sensing, logic, actuation, and displays.

Mainframes were expensive machines used by skilled operators for limited industrial operations. When the packaging made them accessible to ordinary people we had the digital revolution. Computers now let you connect to Amazon.com and pick something you want, but the means to make stuff remain expensive machines used by skilled operators for limited industrial operations. That's going to change. Laboratory research, such as the work of my colleague Joe Jacobson, has shown how to print semiconductors for logic, inks for displays, three-dimensional mechanical structures, motors, sensors, and actuators. We're approaching being able to make one machine that can make any machine. I have a student working on this project who can graduate when his thesis walks out of the printer, meaning that he can output the document along with the functionality for it to get up and walk away.

In support of this basic research we started teaching a class, modestly called "How To Make (almost) Anything," where we show students how to use the millions of dollars of machines available at MIT for making things. This was meant to be a class for technical students to master the tools, but I was wholly unprepared for the reaction. On the first day a hundred or so students showed up begging to get into a class with room for ten people, saying "Please, all my life I've been waiting for this. I'll do anything to get in." Some would then furtively ask, "Are you allowed to teach something so useful at MIT?" There was a desperate demand from nontechnical students to take this class, and they then used all of these capabilities in ways that I would never have thought of. One student, a sculptor with no engineering background, made a portable personal space for screaming that saves up your screams and plays them back later. Another made a Web browser that lets parrots navigate the Net.

From this combination of passion and inventiveness I began to get a sense that what these students are really doing is reinventing literacy. Literacy in the modern sense emerged in the Renaissance as mastery of the liberal arts. This is liberal in the sense of liberation, not politically liberal. The trivium and the quadrivium represented the available means of expression. Since then we've boiled that down to just reading and writing, but the means have changed quite a bit since the Renaissance. In a very real sense post-digital literacy now includes 3D machining and microcontroller programming. I've even been taking my twins, now 6, in to use MIT's workshops; they talk about going to MIT to make things they think of rather than going to a toy store to buy what someone else has designed.

In a place like Cambridge (MA or UK) personal fabrication is not urgently needed to solve immediate problems, because routine needs are already met. These students were not inventing for the sake of their survival, or developing products for a company; they were expressing themselves technologically. They were creating the things they desired, rather than needed, to make the kind of world they wanted to live in.

Between this short-term teaching with advanced infrastructure and our long-term laboratory research on personal fabrication, I had an epiphany last summer: for about ten thousand dollars on a desktop you can approximate both. What makes this possible is that space and time have become cheap. For a few thousand dollars a little table-top milling machine can measure its position down to microns, a fraction of the size of a hair, and so you can fabricate the structures of modern technology, such as circuit boards for components in advanced packages. And a little 50-cent microcontroller can resolve time down below a microsecond, which is faster than just about anything you might want to measure in the macroscopic world. Together these capabilities can be used to emulate the functionality of what will eventually be integrated into a personal fabricator.
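To make those resolutions concrete, here's a back-of-the-envelope sketch; the specific clock rate and speeds are my own representative assumptions, not measurements from the lab:

```python
# Rough arithmetic behind "space and time have become cheap."
# All numbers are illustrative assumptions.

mill_resolution_m = 1e-6     # table-top mill position resolution: ~1 micron
hair_width_m = 75e-6         # a human hair is roughly 75 microns across
print(f"The mill resolves ~1/{hair_width_m / mill_resolution_m:.0f} of a hair width")

mcu_clock_hz = 20e6          # assumed clock for a ~50-cent microcontroller
tick_s = 1.0 / mcu_clock_hz
print(f"Timer tick: {tick_s * 1e9:.0f} ns, well below a microsecond")

# A macroscopic event, e.g. a flywheel tooth passing at 10 m/s, moves only
# half a micron in one 50 ns tick -- effectively standing still to the MCU.
speed_m_s = 10.0
print(f"Motion per tick: {speed_m_s * tick_s * 1e6:.2f} microns")
```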

So we started an experiment.

Long before the research was done, we thought that it would be a good idea to learn something about who would care and what it's good for. We started using micromachining and microcontrollers to set up field "fab labs" (either fabulous, or fabrication, as you wish). They weren't meant to be economically self-sustaining; it was just a way of building up experience. We intentionally put them beyond the reach of normal technology in places like rural India and the far north of Norway. Once again we found a desperate response, but here personal fabrication does address what can truly be life-and-death problems.

In one of these labs in rural India they're working on technology for agriculture. Their livelihood depends on diesel engines, but they don't have a way to set the timing. The instrument used in your corner garage to do that costs too much, there is no supply chain to bring it to rural India, and it wouldn't work in the field anyway. So they're working on a little microcontroller sensor device that can watch the flywheel going by and figure out when fuel is coming in. Another project aimed a $50 Webcam at a diffraction grating to do chemical spectroscopy, in order to figure out when milk's going bad, when it's been diluted, and how the farmers should be fairly paid. Another fab lab is in the northeast of India, where one of the few jobs that women can do is Chikan embroidery. The patterns are limited by the need to stamp them with wooden blocks that are hard to make and modify; they're now using the lab to make 3D scans of old blocks and to 3D-machine new ones. At the other end of the world, at the top tip of Norway, there's a fab lab that is being used to develop radio "bells" so that nomadic data can follow the Sami's nomadic herds of sheep and reindeer around the mountains.
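To give a flavor of the first of these, here's a hypothetical sketch of the core calculation such a timing instrument might do; the sensor arrangement, function names, and numbers are my illustrative assumptions, not the lab's actual design:

```python
# Hypothetical diesel-timing sketch: timestamp a once-per-revolution flywheel
# mark and the fuel pulse, then express injection timing as a crank angle.

def injection_angle_deg(mark_times, fuel_time):
    """Crank angle of the fuel pulse relative to the last flywheel mark."""
    period = mark_times[-1] - mark_times[-2]   # one revolution, in seconds
    dt = fuel_time - mark_times[-1]            # delay after the last mark
    return 360.0 * dt / period

# Example: flywheel at 1500 RPM (40 ms per revolution), fuel pulse seen
# 2 ms after the reference mark.
marks = [0.000, 0.040, 0.080]
print(f"Injection at {injection_angle_deg(marks, 0.082):.1f} degrees past the mark")
```

A 50-cent microcontroller with sub-microsecond timing can capture those timestamps directly from a flywheel sensor and a fuel-line pickup.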

Each of these examples really is a matter of survival for these people. Silicon Valley start-ups aren't trying to solve these problems, and even if they were, the business models are unlikely to work on this scale. Through fab labs, locally appropriate solutions can be developed and then produced locally. The design files can also be shared globally, for open-source problem-solving in hardware as well as software.

Working on this project has led to some very strange days in Washington DC for me, where I'll go from the World Bank to the National Academies to the Pentagon, and they all want to talk about the same thing. The possibility of personal fabrication is enormously important for each of these institutions' agendas, but it does not easily fit into their existing organizations.

The World Bank is trying to close the digital divide by bringing IT to the masses. The message coming back from the fab labs is that rather than IT for the masses, the real story is IT development for the masses. Rather than the digital divide, the real story is that there's a fabrication and an instrumentation divide. Computing for the rest of the world only secondarily means browsing the Web; it demands rich means of input and output to interface computing to their worlds.

The National Academies have been trying very hard to interest people in science and engineering, but on balance they've not been very successful. The fab lab experience suggests that, instead of just trying to get people interested in learning about science, it's far more compelling to enable them to actually do science that's personally meaningful.

On one of these trips I found myself in a room full of Army generals, discussing the implications of emerging technologies for their job. I had come back from spending time in the Himalayas with a remarkable Indian Army general named Arjun Ray, who has just stepped down. He was in charge of Jammu & Kashmir, and the Pakistani border, and the Chinese border — i.e., the world's nuclear battlefield. He came to the conclusion that border security follows from human security, and that human security follows from human development, and therefore the best thing he could do was to take some of his budget and have soldiers bring the Internet to Muslim and Buddhist girls. I was there because they could afford to use satellite terminals to connect up just a few Quonset huts. Where these have been installed they've transformed the communities; people who used to run and hide from outsiders now come running. There's been a remarkable development of community. But the need now is to extend the connections from one village to the next, and on to the next, in a locally self-sustaining way, and hence they need tools for doing their own incremental telecommunications infrastructure deployment.

There was an amazing moment as I was talking to these Army generals about how the most profound implication of emerging technology for them might not lie in designing a better weapon to win a war, but rather in giving more people something else to do. There will always be some people who will seek the best available means to blow each other up, but for the rest there's a cost-benefit tradeoff that suggests that technologies for enabling individuals may be a cheaper and more effective investment. The generals appeared to appreciate that, but it's not clear what office in the Pentagon has that job description. It doesn't easily fit in.

So we're now at a cusp where personal fabrication is poised to reinvent literacy in the developed world, and to engage the intellectual capacity of the rest of the world. This certainly wasn't apparent (at least to me) in advance; I didn't wake up one morning and say "I have a good idea; I'm going to bring precision machining to dusty deserts." We just started tripping over this intersection between grass-roots demand and the capabilities of emerging technologies. Some of the least-developed parts of the world in fact need some of the most- rather than least-advanced technologies.

There's a historical analogy to the appearance of personal fabrication. I first understood it when I was shopping for a numerically controlled milling machine for use in our labs, and I wanted one with a graphical interface and a Web connection. Machine tool companies laughed when I said this. I finally talked to the research head of a big German machine tool company, and when I described what I needed he started pounding the table, literally, saying "No, no! You must not have that!" It was a very strange moment until I realized that, word for word, he was giving me the mainframe argument. He was saying that machine tools are expensive machines to be used by specialists; that's when I really got the mainframe parallel.

If personal fabrication is indeed the next big thing, a key question is how people will learn to do it. There are two tricks that we're using for training in the fab labs. The first is that there isn't a fixed curriculum that can teach personal fabrication; it's education on demand, building on a long lineage through my colleagues Seymour Papert and Mitch Resnick. You can view a lot of MIT's instruction as offering just-in-case education; you learn lots of stuff in case you eventually need it. What I'm talking about is really just-in-time education. You let people solve the problems they want to solve, and you provide supporting material they can draw on as they progress.

The second part is that education can be operated as a pyramid scheme. Before people learn much they can't help much, and once they really know how to use the stuff they're too busy doing something else. But there's about a six-month window when they have just learned how to do something and really want to tell anybody they can find. You can use those people to teach the next people coming through, cycling through that window.

Now let me step back from fab labs to look more broadly at what we're doing here at the Center for Bits and Atoms. In the labs we're revisiting the foundations of the digital world, and are beginning to realize that digital logic was itself a bad idea. It took us pretty far, but the problem is that nature knows how to do much more than a Pentium assumes, and we can no longer afford to ignore that. One of the things that's been happening here, to an extent I would never have believed possible, is that we're reinventing electronics. There used to be analog circuits, then came digital circuits. Now we're discovering that you can take digital circuits, lift them into an analog space where they can move continuously, and when you're done you get a digital answer. It looks like that's going to dramatically improve the speed, size, power consumption, and performance of conventional circuits. It's not analog vs. digital; it's a more efficient analog route to a digital answer.

That still uses classical logic; the next step we've backed into is computing with quantum mechanics. This has gotten a lot of press, but there hasn't been much coverage of how we actually started doing it. My colleague Ike Chuang and I developed one of the world's first complete quantum computers. More recently Ike demonstrated factoring, the largest quantum computation to date.

And we cheated. People for years have talked about molecular computing, nanoscale computing, and all of that, without much success. We realized that molecules are already computing. If you look at the atomic nuclei and consider them to be bits, how they tumble is modified by their neighbors. If you do nuclear magnetic resonance, like an MRI scan or what chemists use to study molecular structure, you are actually doing logic on the nuclei, although it wasn't labeled as such. We realized that instead of heroically building a special-purpose quantum computing apparatus, it's possible to be a bit more clever in talking to nature in the language that it uses in order to make it compute.

This might be interesting for making better computers, but what's even more interesting is doing better science. Physicists use partial differential equations to describe nature; those reflect the available information technology of the 1700s: a pencil and a piece of paper. You could view the fact that we showed molecules how to compute as an amazing consequence of the laws of quantum mechanics, or you could choose to do it exactly backwards, and say that computation is a very natural language to describe molecules, and then partial differential equations are a derived property. Without being dogmatic about it, what's beginning to happen is that we're realizing that if molecules can compute, if nature computes, you can actually use a computational language to ask nature questions of fundamental theoretical and experimental interest, and it's beginning to invert how we understand our place in the world.

Perhaps the most dramatic example at CBA of programming nature comes from my colleagues Joe Jacobson, Shuguang Zhang, and Kim Hamad-Schifferli, who showed how to take a protein and stick a 50-atom gold nanocluster onto it. For proteins, shape is function. If you use the nanocluster as a little antenna to send radio energy into the protein, you change its shape. That means that you can, for example, take a repressor protein that shuts off expression of a gene, release it under a radio signal to let the gene be expressed, and then reattach it. The reason that's so important is that cells run programs to make things. When a cell fabricates, say, a flagellar motor, it's running a complex program, and more importantly it's error-correcting; it's doing logic. The antennas provide handles for programming those pathways. Cells are terrible as general-purpose computers, but they function based on this amazing ability to compute for fabrication.

Go back to the early days of engineering in the last century. It was obvious that if you reduced the noise in a system you reduced the errors that it made. The remarkable thing that Claude Shannon did in the '30s and '40s was to show that there's a threshold, meaning that if you have logic you can have nonzero noise but zero errors. That threshold gave birth to information theory; that's why Shannon's famous, and it's why we use bits.

Although it's less widely known, in the '50s and '60s John von Neumann and others showed that the same thing holds for computation. You can make a perfect computer out of imperfect parts, once again by error correction — by restoring the state — meaning that you can compute for an arbitrarily long time even though the parts you're using are imperfect. We've all forgotten that because Intel hides it within the chip, but that's what makes computing possible.
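Here's a minimal sketch of that idea, assuming the simplest possible restoration scheme: triplicate each noisy gate and take a majority vote (von Neumann's actual constructions were more elaborate, but the threshold behavior is the same in spirit):

```python
# Majority-vote restoration: if one gate fails with probability p, the voted
# result fails with roughly 3*p^2 for small p, so redundancy wins below a threshold.

import random

def noisy_nand(a, b, p):
    """A NAND gate that flips its output with probability p."""
    out = not (a and b)
    return (not out) if random.random() < p else out

def voted_nand(a, b, p, copies=3):
    """Majority vote over independent noisy evaluations."""
    votes = sum(noisy_nand(a, b, p) for _ in range(copies))
    return votes > copies // 2

def error_rate(gate, p, trials=100_000):
    # NAND(True, True) should be False; count how often we get it wrong.
    return sum(gate(True, True, p) != False for _ in range(trials)) / trials

for p in (0.01, 0.1, 0.3):
    print(f"p={p}: raw {error_rate(noisy_nand, p):.4f}, "
          f"voted {error_rate(voted_nand, p):.4f}")
```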

Now we're realizing that if the means of assembly themselves are computational, in turn there's a threshold for fabrication, so that if you want to make a perfect macroscopic thing out of imperfect parts, you need to compute within fabrication. It's not like a milling machine with the computer outside; the tool itself needs to be smart enough to do logic in assembly.

The tutor for that lesson is cells, and you could argue that the fabricational threshold is the fundamental insight into life — that really is almost a definition of life. But in modern manufacturing technology, in nanotechnology, and in self-assembly it's been almost completely missed. This may be the single key insight for the next epoch.

There's an odd thing going on near MIT. Most every biotech company on the planet has decided that it needs to operate in or around Cambridge — and they generally think they're doing that to develop drugs for health care. The real breakthrough may, in fact, be biological machinery that is programmable for fabrication. This may be the next manufacturing technology.

That, in turn, leads to what I'd say is the most challenging thing of all that we're doing. If you take the last things I've mentioned (printing logic, molecular logic, and eventually growing, living logic), it means that we will be able to engineer on Avogadro scales, with complexity on the scale of thermodynamics. Avogadro's number, 10^23, is the number of atoms in a macroscopic object, and we'll eventually create systems with that many programmable components. The only thing you can say with certainty about this possibility is that such systems will fail if they're designed in any way we understand right now.

If you look at the development of the Internet, or the power grid, or new chips, or airplanes, there's something disturbing happening. The companies that do these things have a secret that they don't want people to know: they're struggling to be able to continue developing their systems. What's hurting the chip companies isn't the cost of the fab, as bad as that is, but the cost of taping out a chip. When you want to place a billion transistors, the design tools don't work any more. They assume that at some point somebody understands what they want to make, but that doesn't work when the system gets large enough. The companies that work on airplanes or the power grid don't really understand them as wholes any more. This means that, in a world of thermodynamic-scale engineering, you have to make a transition from designing systems to designing principles by which systems work, without actually saying how they do it.

The notion of such emergent engineering is very attractive. It's inspired beautiful demonstrations, but those have generally been on toy problems that don't scale to the really hard ones. And it's inspired equally beautiful theories that generally don't get reduced to practice. My colleague John Doyle calls these unfortunate examples the study of "chaocritiplexity." There's a nearly null set of deep insights into emergent functionality that have been reduced to useful practice.

I used to be very critical of this state of affairs, until I finally realized that what I'm asking for is a step roughly as profound as the invention of calculus, or of information theory. Bringing rigor to emergence means developing a missing language. At MIT you learn early about things like bandwidth, power, and signal-to-noise ratios. Late in life you learn about how to balance notions like hierarchy, adaptation, and fanout. By the time we're done that's going to be inverted. The very first thing you'll learn at MIT is how to ask and answer questions like "How quickly should a system be able to modify itself? How many mistakes should it make? Which of its functions should be global and which local?" The kind of taxonomy that biologists do has to turn into predictive design theories. Shannon did that once. He showed that the channel capacity, that threshold I was talking about, is equal to the bandwidth times the logarithm of one plus the signal-to-noise ratio. That let you suddenly take these disparate attributes and, independent of the details of a particular design, learn how to price them and trade them off. We don't know how to do that yet for hierarchy and adaptation and emergence, but there are compelling hints of an answer lying at the intersection of statistical mechanics, control theory, and geometry, mixed in with a bit of inference.
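Written out, that capacity formula is

$$ C = B \log_2\!\left(1 + \frac{S}{N}\right) $$

where $C$ is the channel capacity in bits per second, $B$ the bandwidth in hertz, and $S/N$ the signal-to-noise power ratio; reliable communication is possible at any rate below $C$, and impossible above it.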

It's certainly ambitious to aspire to be the next Shannon. But more modestly we really are learning how to think more like physicists about designing complex systems. The breakthrough in statistical mechanics made by Gibbs and Maxwell and Boltzmann was to recognize that once you have enough of most anything in a system then you can make precise statements about the behavior of a collection of them without knowing anything about their internal configuration. In science if I have one thing, or a few things, I use one kind of math, and if I have an Avogadro number of things I use a different kind. It's a mature math, it's well-developed, and it lets you make precise predictions. You can know when a lake will freeze or a pot will boil by abstracting the big picture from all the little details.

Engineers still use the math of a few things. That might do for a little piece of the system, like asking how much power it needs, but if you ask about how to make a huge chip compute or a huge network communicate, there isn't yet an Avogadro design theory. Physicists on the other hand can tell you all about the properties of big systems, but they know very little about how to make them do desired things. The "statistical mechanical" engineering we're working on aims to go beyond descriptions to actually direct a system's behavior.

Here's a concrete example. The cost of a chip fab plant is approaching $10 billion. It's just crazy. The wafers are about the size of a dinner plate now, and getting bigger. They are hideously expensive. Individual chips are growing from the size of a postage stamp to a note pad. We have a CBA researcher, Bill Butera, a former chip designer, who came to the conclusion that this trend is cuckoo. Instead of making bigger and bigger chips, he thought that you should make a whole lot of smaller and smaller chips. His vision was to make the tiniest viable fragments, about a tenth of a millimeter or so, literally sprinkle them into a viscous medium, and then pour out computing by the pound or by the square inch. In this way you can paint a computer on your wall, and if it's not powerful enough you put on another coat of computer.

Bill developed a programming model where little code fragments hop from particle to particle, traveling around and self-organizing into a system that solves a problem. He's used this model to do data storage and searching and all the things computers do, but it's a random fungible material, meaning that you can add a little bit and it gets a little bit better. Right now we are working on devices that can do this, turning the computer from a monolithic box to a raw material that gets configured by instructions traveling through it.
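To give a flavor of that programming model, here's a toy sketch of my own construction (not Butera's actual system): a single mobile code fragment hops among randomly sprinkled particles, carrying its state with it, and aggregates a global answer from purely local encounters:

```python
# Toy "paintable computing" sketch: one code fragment random-walks a
# neighborhood graph of sprinkled particles, computing a global maximum
# of local sensor readings. All parameters are illustrative assumptions.

import random

random.seed(0)
N, RADIUS = 200, 0.12
particles = [(random.random(), random.random()) for _ in range(N)]
readings = [random.gauss(20.0, 5.0) for _ in range(N)]  # local "sensor" values

def neighbors(i):
    """Particles within radio range of particle i."""
    xi, yi = particles[i]
    return [j for j, (x, y) in enumerate(particles)
            if j != i and (x - xi) ** 2 + (y - yi) ** 2 < RADIUS ** 2]

# The fragment carries its state (the running max) as it hops.
pos, best = 0, readings[0]
for _ in range(5000):
    hood = neighbors(pos)
    if not hood:
        break                                # isolated particle: stop
    pos = random.choice(hood)                # hop to a random neighbor
    best = max(best, readings[pos])          # update the carried state

print(f"Fragment's estimate of the global max: {best:.1f} "
      f"(true max: {max(readings):.1f})")
```

Adding more particles just densifies the graph; the fragment's program doesn't change, which is what makes the material fungible.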

Now go back to the Sami herders in northern Norway or the Himalayan Quonset huts, and you see that they need technological infrastructure but can't assume that a normal Telco will manage it. The infrastructure has to manage itself. You have to use these same ideas on an even larger scale to be relevant to their problems. One more example of needing the most advanced technologies in the least-developed places.

Closer to home, this thinking about emergence in networks is leading to a new kind of Internet for the physical world. Network speeds have gotten faster and faster, while telecom companies have lost more and more money. They've been missing the tremendous opportunity that comes from slowing things down. Over the last few years, in support of other projects, we've developed simpler and simpler implementations of complete Internet sites, getting it down to a tiny chip and about a dollar in parts. This wasn't research, just capabilities that we needed, but we kept getting asked "When will they be for sale?"

We used these devices in testbeds looking at embedding intelligence, from interactive furniture in New York's Museum of Modern Art to a demonstration structure in Barcelona that aimed to make building infrastructure as expressive as Gaudi's physical forms. Coming out of these projects we ran a meeting on them at MIT, and what jumped out was the construction industry's needs. Building construction is a trillion-dollar-a-year business, and its costs are dominated by labor. They would love to be able to throw away wiring diagrams and have smart lights and thermostats with functions programmed by their users, but that will be doomed if you need a skilled network installer to connect up the lights, and even more doomed if an IT department is needed to keep them lit. No one's getting very excited about servers serving services in smart homes, but here's a compelling demand for distributed intelligence.

We used a few tricks to make our prototypes work without that IT department. They configure themselves if there aren't any servers on the network, they provide physical access to their configuration so that they can be programmed without requiring a computer, and they don't speak any faster than they need to. This last point led to a funny moment. A bit in a network has a physical size, based on how fast it travels and its duration in time. In a fast network that size is smaller than the size of the network, so whether it's wired or wireless two devices can start talking simultaneously without being aware that they're conflicting, and if there are any reflections the bit will generate spurious signals. Hence the need for a network installer and an IT department to prevent that. But if a bit is slowed down so that it becomes bigger than the size of the network, all of that can be ignored and data can be treated just like electricity. This is a case where less really is more.
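The arithmetic is simple; here's a sketch with assumed propagation speed and network span:

```python
# A bit's physical length is its duration (1/bit rate) times the signal's
# propagation speed. Once that length exceeds the network's span, collisions
# and reflections stop mattering. Numbers below are illustrative assumptions.

PROP_SPEED = 2e8        # roughly 2/3 the speed of light, typical in copper (m/s)
NETWORK_SPAN = 100.0    # assumed extent of a building's network, in meters

for rate_bps in (100e6, 1e6, 100e3):
    bit_length_m = PROP_SPEED / rate_bps
    regime = "bigger" if bit_length_m > NETWORK_SPAN else "smaller"
    print(f"{rate_bps / 1e6:g} Mbit/s -> bit is {bit_length_m:,.0f} m long, "
          f"{regime} than the {NETWORK_SPAN:.0f} m network")
```

At 100 Mbit/s a bit is about two meters long and conflicts matter; at a megabit per second it is two hundred meters long and spans the whole building, which is why the threshold in the next paragraph falls where it does.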

For a typical building, this requires slowing down below about a megabit per second. That's slow if you want to watch a movie, but then lightbulbs generally don't watch movies. At one of our meetings I was explaining this to someone working on the "Internet 2" higher-speed Internet, and he kept coming back to asking me how fast the network could go because that's been the standard question to ask about any network. In exasperation I said that this isn't Internet 2, it's Internet 0. That was supposed to be a joke, but the name stuck. And industrial engineers started sidling up to me to say that this is what they always wanted to do but their companies got stuck supporting some proprietary protocol. So the joke is now leading to a standards process to bring the Internet to the physical world, by keeping the Internet protocols but replacing the rest of Internet practice with the kinds of tricks I described. This is an Internet for distributed devices without requiring centralized people.

Here's an example: my colleagues and I did a demonstration for the White House and the Smithsonian for the Millennium. They wanted to show the future, so we set up a couple of things. One of them was a bathroom shelf. As people age, perhaps the biggest cost in the healthcare system is compliance. It gets harder to manage your medication, so people end up in managed care and hospitals. We made a bathroom shelf that, after you put a pill bottle on it, can figure out what the pills are, and when you need to take your medicine it glows. It changes colors, so you don't have to remember; you just look at the shelf. And then, crucially, it can communicate over the Net. It tells your doctor whether or not you are managing your medication, it tells kids if they need to go see their parents, and it tells your pharmacist when you need your prescription refilled.

This little shelf may turn out to be one of technology's greatest impacts on quality of life as people age. What makes it work is that it can communicate, but that would be useless if it assumed that a senior's bathroom has an IT department to configure it. So once again you need the Internet, meaning Internet protocols, but everything else has to be different. You have to throw away all of the assumptions about how we build the Internet right now. That's the job of Internet 0.

And that brings us back to the grand challenge I mentioned earlier, of engineering emergence. The Internet is just that, an inter-net, a network of networks. I0 assumes the existence of the regular Internet for doing things like global routing. But that's an unsatisfactory decomposition. Once there are enough I0 devices connected we'd like them to be able to discover hierarchy themselves without having to explicitly establish and manage it. We don't know how to do that yet, but I have another student trying to do that. It looks like the solution is going to entail viewing the messages in the network as themselves being part of a great big distributed computation, and then learning how to write programs for it. Something as apparently simple as a $1 networked light switch contains profound intellectual as well as social and economic consequences for our future.