THE TECHNIUM AND THE 7TH KINGDOM OF LIFE

Kevin Kelly [7.18.07]

What is the meaning of technology in our lives? What place does technology have in the universe? What place does it have in the human condition? And what place should it play in my own personal life? Technology as a whole system, or what I call the technium, seems to be a dominant force in the culture. Indeed at times it seems to be the only force — the only lasting force — in culture. If that's so, then what can we expect from this force, what governs it? Sadly we don't even have a good theory about technology.

KEVIN KELLY is Senior Maverick at Wired magazine and author of the best-selling "New Rules for the New Economy," and the classic book on decentralized emergent systems, "Out of Control." He is currently editor and publisher of the popular Cool Tools, True Film, and Street Use web sites.


THE TECHNIUM AND THE 7TH KINGDOM OF LIFE

[KEVIN KELLY:] The main question that I'm asking myself is, what is the meaning of technology in our lives?  What place does technology have in the universe? What place does it have in the human condition? And what place should it play in my own personal life?  Technology as a whole system, or what I call the technium, seems to be a dominant force in the culture. Indeed at times it seems to be the only force - the only lasting force - in culture. If that's so, then what can we expect from this force, what governs it? Sadly we don't even have a good theory about technology.

I'm trying to investigate ways to understand the long-term consequences of technology in the world and place it into some position along with other grand things like biological nature, big history, the physics of the cosmos, and the future. It's a very ambitious project and, surprisingly, there isn't really much thinking about technology in terms of its sphere of influence in a way that might be useful to thinking about how to evaluate what we make. 

There's no predictive theory of technology either. I've been inculcated with the fundamentals of GBN-style scenarios to understand that all predictions are wrong by default. So, when I say predictive, I don't mean in the sense that we could actually predict, in detail, what technology will do. I mean predictive in the sense of a theory that would give us the tools to guide its direction at the large scale. A theory that would let us say that we know enough about technology's past that we can expect certain things about it in its future. Right now, we basically take technologies as they come up, and each novel technology, one by one, catches us off guard. Though I don't think I'm capable of generating it, a useful theory of technology is what I would love to find.

There is a common sense that each novel technology brings us many new problems as well as new solutions — that it offers many things that we desire as well as many things that we want to eliminate. What we don't have is a good framework for responding to this ceaseless generation of novelty, or even a framework for understanding whether technology is something that we should, or even can, respond to.  Or, for that matter, whether we should manage our technology by not creating it in the first place.  And how we might possibly "not create."

One of the reflex responses to technology's problems is prohibition. That is, certain kinds of technology with obvious detrimental effects, such as nuclear power and genetically modified foods, should be managed by prohibiting their use outside certain confines. Along the same lines is the axiom that there are certain ideas that we shouldn't even have — directions of research that we should prohibit outright and certain technologies that should never be unleashed outside of the lab, or even in the lab. A counter theory posits that prohibitions don't work and that we can't manage technology by forbidding its use. Instead, we have to manage technologies by replacement, displacement, fine-tuning — by moving a technology into another role without eliminating it.

But even with all this, we still don't have a good sense of what technology is or how we should define it. Technology in its modern sense is a term that wasn't even invented until 1829. We had been making technology for centuries, but didn't have a word for it. I suggest we still don't know exactly what it is. Is it anything that we make with our minds? Or only certain things?


Science and technology are intrinsically connected.  We have a sense that science is a method of  thinking that generates technology, but I've come to the conclusion that technology is a type of thinking that generates science.

The scientific method itself is not constant. It is evolving. What we call the scientific method has been changed by technology from the very beginning. The necessity of peer review and the repeatability of experiments, for example, were types of thinking that had to be invented, and they required technologies like print to make them possible. A scientist from 400 years ago would not recognize the scientific method as it is practiced today because a lot of the elements of research that we now consider essential to the scientific method weren't invented until very recently: for instance, placebos, statistical sampling, double-blind experiments. All these things are new, some of them invented in just the last 50 years.

New technologies being invented today, such as social software, distributed instrumentation, and new ways of seeing, will all transform the scientific method of the future. It is very likely the scientific method will change far more in the next 50 years than it has in the first 400 years of its existence.

Specific technologies are like individuals, or species, and the society or ecosystem of these individuals is the technium. I'm especially interested in how the technium works at the system level — how it operates as an ecology of technological species, as a complex web of interacting agents each with their own biases and tendencies. 

The emergent system of the technium — what we often mean by "Technology" with a capital T — has its own inherent agenda and urges, as does any large complex system, indeed, as does life itself. That is, an individual technological organism has one kind of response, but in an ecology composed of co-evolving species of technology we find an elevated entity — the technium — that behaves very differently from an individual species. The technium is a superorganism of technology. It has its own force that it exerts. That force is part cultural (influenced by and influencing humans), but it's also partly non-human, partly indigenous to the physics of technology itself. That's the part that is scary and interesting.
            
I tend to think of the technium as a child of humanity. Our job will be to train the technium, to imbue it with certain principles because, at a certain level and at a certain age, it will basically become much more autonomous than it is now. It will leave us like a teenager who goes off to live alone: although it will continue to interact with us and will always be part of us, we have to let it go.

We can't raise a successful human by remaining in complete control as parents. We have to train our children well — bury within them a strong conscience with deep values that can guide them to do the right thing in situations we had not foreseen or even imagined. We need to do the same with the technium and our technologies. In the same sense we need to embed our values into the technological superorganism so that these heuristics become guiding factors. As more autonomy is given to, and won by, the technium, it will then be able to do the right thing.

In order to do that, there are a few problems that need to be addressed.

One is knowing what we want. We need to have a deep sense of our values, what we stand for. In a deep irony, the more technology advances, the less sure we are of who we are and what we stand for as a species and as individuals. So this discovery of what is most important about us is a huge challenge.

Two, we have to become very smart and clever about how to embed subtle guidance in large systems. We know it can be done because of our children. Three, we have to be willing to risk surrendering autonomy to the technium in order to reap the maximum freedom and benefits for ourselves. Invest, let go, benefit. That's the tradeoff in control I explored extensively in Out of Control.  There is no doubt this is a huge and scary step  — ask any parent — but I believe that we humans can work up to it.

The most difficult of those three assignments is the first, which is to know what it is that we want. The problem is that we don't know who we are. We don't know any longer what it means to be a human. Almost every day there is some news from researchers that forces us to reevaluate a fundamental aspect of our existence. Are we different from animals? Are we even real? Is consciousness real, or special, or a mere commodity? Do we have limits, should we have limits?

Our identities are being pushed and nudged and twisted by the arrival of new technologies: robots, AI, genetic engineering, quantum weirdness, any kind of enhancement technology, discoveries about our bodies and our minds, discoveries in cosmology about our place in the multiverse. All of it. Each of these discoveries and inventions challenges our notions of what it is to be alive, what it means to be human, what it is to be American — whatever. Nearly every signal broadcast by technology chips away at our identity.

So we are left with the difficult task of trying to figure out what technology means just as our own identity is shifting constantly — we're trying to find both at the same time. I believe we can't know what technology means (or what the technium wants) until we know what we mean. More importantly, I believe we'll answer both at once; that only by understanding what technology is will we understand who we are.

There's a tendency to believe that while the culture around us may be becoming more technological, human nature remains intact. In fact, we have to admit that our own human natures are being reformed, redefined, and remade by technology. This is scary, too. In the extreme, if you look beyond the short now of the next ten years to the long horizon of a couple hundred years, the overwhelming question is, do we remain one species, or will we evolve ourselves into many species?

The prospect of genetic forking is probably the most divisive issue I could imagine for our species and would engender conflicts at a scale that would make some of today's inherently irresolvable issues — abortion, cloning, etc. — pale by comparison. There will be people who would not only declare that they want to remain untouched (the "Naturals") but would insist that no one has the right to remake themselves or their unnamed descendants.

Others will clearly side with humans remodeling themselves and the species in any direction possible. It's not so far away, either. The unanswerable questions are already beginning. Is a sprinter with two prosthetic carbon-fiber springs instead of legs disabled or enhanced? If he wants to compete in the Olympics, are his springs a crutch or a jet pack? What is a human anyway?

Hollywood and science fiction authors are the new theologians. They've been asking these essential existential questions way ahead of the rest of society. The rising popularity of maverick authors like Philip K. Dick will move him (and others of his ilk) into the core mainstream, as the themes he explored become the central questions of the coming century.

What is the difference between fake and reality? Who are we? Are we many or one?  Where do we begin and our minds end?  These are old themes, but with new answers and alternative story lines, and it's not just the artists that are asking these questions.

These very big questions are reaching down deep into the culture, so that everybody has to ask them. It's no longer the job of philosophers or avant-garde artists, but of ordinary citizens. With each new headline in USA Today, everyone is being asked, What is a human? A vernacular theology, in a certain sense, is one of the unanticipated aspects of this technological culture.

This constant identity crisis can make people depressed, and it may be one of the factors driving people toward religion, since religion, especially fundamentalist religion, believes it has definite answers to some of these questions. But religion has no real answers to the specific questions of, say, whether enhancement is humane, whether AI is good, whether we should remain one species or many, and even what precisely it means to be human. Therefore this large-scale technological identity crisis is going to be the recurring theme of this century.


I was just reading Paul Davies' book Cosmic Jackpot, in which he wrestles with some of the biggest questions that cosmologists come up against — these big-scale questions about the origin of the universe, why this universe, why is there anything at all?

These "big" questions were often forbidden in classical scientific thinking as being not really answerable by science. Davies shows that, in fact, these are legitimate scientific questions and that we may be developing a better vocabulary, a better structure for trying to ask those questions and put them into a falsifiable form. My interest in the semantics of the technium is also to ask a similarly broad and fundamental question. That is, in the grand sweep of the cosmic evolution from the Big Bang outwards, where does the technium or technology fit in? What powers the origin and expansion of the technium? Does it have a direction?

To ask the classic Stephen Jay Gould question, if you rewind the tape on different worlds and different civilizations and you play it back, does technology have a natural history? Is there anything you could say about it that would be true at the class level?  

My answer so far is Yes. Technology is not merely a human-derived entity. The roots of technology go all the way back to the Big Bang. It's part of the same lineage of what I call extropic systems, a line that extends back through living systems, self-regulating planets, auto-coalescing star systems, and so on. Extropic systems might also be called near-equilibrium sustainable systems. They run in the opposite direction from entropic systems. These are complex, sustainable systems that always teeter on the edge of falling over, but keep going. Over cosmic time, these systems gradually build up more complexity, sustained on the edge of collapse. We see extropic systems in galaxy formation, planet formation, life formation, intelligence formation, and, I believe, in technology formation.

In this way the technium shares many characteristics with biological life, mind, and other near-equilibrium self-sustaining extropic systems. Technology, therefore, can be understood on a cosmic scale as an outgrowth of the Big Bang. Because we have some clues about what it has in common with these relatives of life, we can begin to dissect and understand it through the lens of extropic systems. I believe when we view the technium in the context of life-like systems, we can make some guesses about its trajectory and how we can use it.

One way to think of the technium is as the 7th kingdom of life. There are roughly six kingdoms of life according to Lynn Margulis and others. Since the technium is an extropic system that originated from animals, one of the six kingdoms, we can think of it as a 7th.

Obviously there are many distinctions between life systems and the technium. One of the differences is that, in general, technological species never go extinct. For instance, one of the first technologies in history was the manufactured arrow point. Well, there are five thousand flint knappers working in the U.S. today, making arrowheads exactly the same way they have always been made (pressing bone against flint), and these enthusiasts are probably making about a million points a year. You can buy a hand-made antler-handled chert blade knife on eBay, made with basically the same technology of 20,000 years ago, for 50 dollars.

I've been asking people to suggest technologies they thought were extinct, and one historian of technology suggested steam-powered automobiles as an obvious dead end. Well, actually they're not; people are making brand-new parts for Stanley steam-powered automobiles. You can buy a brand-new valve, or whatever else you need to keep your antique running. If you look globally, I can guarantee that somewhere in the world today nearly every technology you can imagine is still being used, either as a tool in everyday life or in a revival sense.

There are a couple of exceptions: we no longer know what one or two historical technologies were. Greek fire is one example of a technology that seems to be lost. But in general, technological species, unlike biological species, don't go extinct. Although that is one distinction, there are otherwise a lot of similarities between the technium and the natural world. We can show evolution through mutations in the technium, and major transitions of change in technological organization. We can see a large-scale move, as in life, from the general to the specific. Technology also follows life in a cosmic-scale migration towards greater complexity, diversity, and energy density. So we can think of the technium as a 7th kingdom of life. As such, the technium tends to be in alignment with the other six kingdoms of life. Technology is inherently at home with other life, rather than contrary to it.

One of the concerns about technology is that if you let it go where it wants to go, the technium will eat up the natural system. Out-of-control technology is popularly perceived as a natural adversary to the biological world. There's one level at which that is obviously true.

If we were to record all the ways in which gross technological negligence — clear-cutting of forests, pollution from factories, etc. — destroys the integrity of the biosphere, it's clear we need to keep the technium in check or we're in dire straits environmentally. But I don't think this destructive tendency is inherent in the technology. The technium wants many of the same things that we do. Clean water, for example. Most industrial processes require clean water. Some high-tech processes require water that's cleaner than drinking water. In this sense the technium doesn't want pollution; it wants the same kind of pristine environment that we want, especially with regards to higher technologies.

As technology has developed and become more sophisticated, it has become more and more closely aligned with environmental practices, just as humans have. In the '70s, Paul Ehrlich (The Population Bomb) and others were greatly concerned that as the technium grew, it would consume all the limited resources of the world. But that did not happen. It turned out the technium was capable of producing substitutes faster than the resources could be eliminated. So now, except perhaps for oil, you don't hear the concern about resource elimination, because technology has either made resources more abundant or substituted new ones for them. Pollution is the same — the solution to pollution in most cases is better technology. All the trajectories for the technium are towards recycling materials (including pollutants), energy efficiency, scarcity substitutions, and the replacement of mass with information — all of which we would call green technology.

I can imagine the technium and nature being in harmony over the long term, with the exception of one area where these two forces don't seem to be in alignment: the elimination of species habitat. The technium seems to be insensitive to species elimination. I think this is a real problem, but it quickly became apparent that we didn't have a very good understanding of all the species on the earth.

The All Species Inventory, which I co-founded, was our attempt to address this ignorance. We don't know what species there are on earth, and we don't know very much about the ones we do know. We're in that really horrible position where we don't even know how much we don't know. We think we know about 1.7 million species, but even that's uncertain because there's no clean master list that has eliminated all the duplicates, synonyms, and erroneous species we think we have identified.

As for how many species may be on this planet, we don't even have a consensus to the nearest order of magnitude. Guesses range from 3 to 100 million. The astounding fact is that nowhere else in science is there the same magnitude of ignorance as in our meager knowledge about the organisms on this planet. Trying to guide the technium's interaction with the biosphere is hampered by our vast ignorance. We can't do biology knowing only 5% of the species. It's like trying to do chemistry without knowing all the elements; it's impossible. If we were to discover life on another planet, the first thing we would do is a systematic survey of all the life on that planet. But we've not done it with our home planet, which is a shame.

The idea of doing a planetary inventory of species was slow to catch on among taxonomists because it seemed so grandiose. Taxonomy is a poorly funded science, one where a $10,000 grant is an occasion to break out the champagne; $10,000 is the amount most molecular biology labs budget for glassware. If science had only cataloged 1 million species in 200 years of taxonomy, how could anyone expect to do an additional 10 to 20 million in one or two generations? Done the way Darwin did it, which is how taxonomy was still being done, it was impossible. But done using DNA sequencing, it seems more likely every day. As the technium accumulates the vast genetic knowledge of all species on earth, and as genetic engineering technologies advance, it may be that this wealth of genetic information will become a reason for the technium to care about species survival.


A common criticism of technological progress is that each invention, each supposed technological solution, will produce as many problems as it solves. I actually agree. But I see in each of those "problems" an opportunity. In my ecological framework those problems are niches to be occupied, or to be resolved, by new technology.

Each ill consequence is real, but it is also a new opportunity to invent. Still, if, in the end, technology is just generating as many problems as it eliminates, then in what sense can we call this progress? At best it's a wash. This is why many technologists call technology "neutral."

Here's where I disagree. I don't think technology is neutral or a wash of good and bad effects. To be sure it does produce both problems and solutions, but the chief effect of technology is that it produces more possibilities. More options. More freedom, essentially. That's really good. That is the reason why people move to cities — for more choices. They leave beautiful Greek islands and hamlets in Cambodia because cities have more choices. They don't move from the farms — where their communities and traditions are very supportive and comforting  — because they hate it; they move because they want more choices. The reason we like choices is that they give us more chances to use all of our talents. We have a greater chance of matching our limited abilities with opportunities to maximize them. We put up with all the inevitable problems in new gadgets, soon to be obsolete, because we are eager to try out the possibilities, hoping that we will have a better chance for unleashing who we are. 

These opportunities, these freedoms, are a very powerful force. Imagine a great artist like Mozart born before the possibility of a piano, or orchestra — what a loss that would have been. Or if Hitchcock had been born before the technology of film had been invented. Or Van Gogh before cheap oil paints. Undoubtedly those giants would have done their best with whatever they had — perhaps Beethoven on drums, Van Gogh with charcoal. But we honor them in part because in some unfathomable way they were able to realize their true genius by finding a perfect match with their tools — tools that are possibilities and choices manifested.

There are children born today whose technological possibilities have not yet come about. I would argue that, in a certain sense, we have a moral obligation to increase the technology of the world — of the universe — to ensure that the genius of every person born will have some way to express its fullness. In the end, this is what the technium wants, too. What the other six kingdoms of life want. What we want. To increase choices. To open up new freedoms. To expand the possible.