THE PANCAKE PEOPLE, OR, "THE GODS ARE POUNDING MY HEAD" [3.8.05]
vs.
THE GÖDEL-TO-GOOGLE NET [3.8.05]

THE REALITY CLUB: Kevin Kelly, Jaron Lanier, Steven Johnson, Marvin Minsky, Douglas Rushkoff, Roger Schank, James O'Donnell, and Rebecca Goldstein respond to Richard Foreman and George Dyson.

___

Introduction

Several years have gone by, and Foreman recently opened a new play for his Ontological-Hysteric Theater at St. Mark's Church in the Bowery in New York City. He also announced that the play—The Gods Are Pounding My Head—would be his last.

Foreman presents Edge with a statement and a question. The statement appears in his program and frames the sadness of The Gods Are Pounding My Head. The question is an opening to the future. With both, Foreman hopes to engage Edge contributors in a discussion, and in this regard George Dyson has written the initial response, entitled "The Gödel-to-Google Net".

RICHARD FOREMAN, Founder-Director of the Ontological-Hysteric Theater, has written, directed, and designed over fifty of his own plays, both in New York City and abroad. Five of his plays have received "OBIE" awards as best play of the year, and he has received five other "OBIE"s for directing and for 'sustained achievement'.

THE PANCAKE PEOPLE, OR, "THE GODS ARE POUNDING MY HEAD"
A Statement

Nevertheless, this very—to my mind—elegiac play does delineate my own philosophical dilemma. I come from a tradition of Western culture in which the ideal (my ideal) was the complex, dense and "cathedral-like" structure of the highly educated and articulate personality—a man or woman who carried inside themselves a personally constructed and unique version of the entire heritage of the West. And such multi-faceted, evolved personalities did not hesitate—especially during the final period of "Romanticism-Modernism"—to cut down, like lumberjacks, large forests of previous achievement in order to heroically stake new claim to the ancient inherited land. This was the ploy of the avant-garde.

But today, I see within us all (myself included) the replacement of complex inner density with a new kind of self, evolving under the pressure of information overload and the technology of the "instantly available". A new self that needs to contain less and less of an inner repertory of dense cultural inheritance—as we all become "pancake people"—spread wide and thin as we connect with that vast network of information accessed by the mere touch of a button.

Will this produce a new kind of enlightenment or "super-consciousness"? Sometimes I am seduced by those proclaiming so—and sometimes I shrink back in horror at a world that seems to have lost the thick and multi-textured density of deeply evolved personality. But, in the end, hope still springs eternal...

___

A Question

______________________________
THE GÖDEL-TO-GOOGLE NET

GEORGE DYSON, science historian, is the author of Darwin Among the Machines.

As Richard Foreman so beautifully describes it, we've been pounded into instantly-available pancakes, becoming the unpredictable but statistically critical synapses in the whole Gödel-to-Google net. Does the resulting mind (as Richardson would have it) belong to us? Or does it belong to something else?

___

Richard Foreman is right. Pancakes indeed!
"Can
computers achieve everything the human mind can achieve?" KEVIN KELLY is Editor-At-Large, Wired; Author, Out of Control: The New Biology of Machines, Social Systems, and the Economic World. |
In the 1990s, I used to complain about the "suffocating nerdiness and blandness" of Silicon Valley. This was how the pioneer days of Richard Foreman's pancake personhood felt to me. I fled to live in New York City precisely for the antidote of being around venues like Foreman's Ontological-Hysteric Theater and his wonderful shows. But computer culture broke out of its cage and swallowed Manhattan whole only a few years later.

Computer culture's reigning cool/hip wing of the moment, the free software—or "open source"—movement, uses the idea of the Cathedral as a metaphorical punching bag. In a famous essay by Eric Raymond ("The Cathedral and the Bazaar"), the Cathedral is compared unfavorably to an anarchic village market, and the idea is that true brilliance is to be found in the "emergent" metapersonal wisdom of neo-Darwinian competition. The Cathedral is derided as a monument to a closed, elitist, and ultimately constricting kind of knowledge. It's a bad metaphor.

All this supports Foreman's pancake premise, but I recommend adding a wing to Foreman's mental cathedral. In this wing there would be a colossal fresco of two opposing armies. On one side, there would be a group led by Doug Engelbart. He'd be surrounded by some eccentric characters such as the late Jef Raskin, Ted Nelson, David Gelernter, Alan Kay, Larry Tesler, Andy van Dam, and Ben Shneiderman, among others. They are facing an opposing force made up of robots, people, and mechochimeras. The first group consists of members of the humanist tradition in computer science; they are people that Foreman might enjoy. They are not pancakes and they don't make others into pancakes. They are no more the cause of mental shrinkage than the written word (despite Plato's warnings to the contrary) or Gutenberg.

The only way to deal with other people's brains or bodies sanely is to grant them liberty as far as you're concerned, but to not lose hope for them. Each person ought to decide whether to be a pancake or not, and some of those pre-pancakes Foreman misses were actually vacuous soufflés anyway. Remember? There are plenty of creamy rich three-dimensional digitally literate people out there, even a lot of young ones. There is a lot of hope and beauty in digital culture, even if the prevalent fog is sometimes heavy enough to pound your head.

If Foreman is serious about quitting the theater, he will be missed. But that's not a reason to offer computers, arbiters on their own of nothing but insubstantiality, the power to kick his butt and pound his head. The only reality of a computer is the person on the other side of it.

JARON LANIER is a Computer Scientist and Musician.
...the kind of door-opening exploration that Google offers is in fact much more powerful and unpredictable than previous modes of exploration. It's a lot easier to stumble across something totally unexpected—but still relevant and interesting—using Google than it is walking through a physical library or a bookstore. A card catalogue is a terrible vehicle for serendipity. But hypertext can be a wonderful one. You just have to use it the right way.
I think it's a telling sign of how far the science of information retrieval has advanced that we're seriously debating the question of whether computers can be programmed to make mistakes. Rewind the tape 8 years or so—post-Netscape, pre-Google—and the dominant complaint would have been that the computers were always making mistakes: you'd plug a search query into AltaVista and you'd get 53,000 results, none of which seemed relevant to what you were looking for. But sitting here now in 2005, we've grown so accustomed to Google's ability to find the information we're looking for that we're starting to yearn for a little fallibility.

STEVEN JOHNSON is a science writer; Author, Mind Wide Open.
I don't see any basic change; there always was too much information. Fifty years ago, if you went into any big library, you would have been overwhelmed by the amounts contained in the books therein. Furthermore, that "touch of a button" improves things in two ways: (1) it has changed the time it takes to find a book from perhaps several minutes to several seconds, and (2) in the past it usually took many minutes, or even hours, to find what you wanted inside that book—but now a computer can help you search through the text, and I see this as nothing but good.

Mr. Foreman complains that he is being replaced (by "the pressure of information overload") with "a new self that needs to contain less and less of an inner repertory of dense cultural inheritance" because he is connected to "that vast network of information accessed by the mere touch of a button."

MARVIN MINSKY is a mathematician and computer scientist; Cofounder of MIT's Artificial Intelligence Laboratory; Author, The Society of Mind.
Foreman is hinting at a "renaissance" shift I've been studying for the past few years. The original Renaissance invented the individual. With the development of perspective in painting came the notion of perspective in everything. The printing press fueled this even further, giving individuals the ability to develop their own understanding of texts. Each man now had his own take on the world, and a person's storehouse of knowledge and arsenal of techniques were the measure of the man.

The more I study the original Renaissance, the more I see our own era as having at least as much renaissance character and potential. Where the Renaissance brought us perspective painting, the current one brings virtual reality and holography. The Renaissance saw humanity circumnavigating the globe; in our own era we've learned to orbit it from space. Calculus emerged in the 17th century, while systems theory and chaos math emerged in the 20th. Our analog to the printing press is the Internet; our equivalent of the sonnet and extended metaphor is hypertext.

Renaissance innovations all involve an increase in our ability to contend with dimension: perspective. Perspective painting allowed us to see three dimensions where there were previously only two. Circumnavigation of the globe changed the world from a flat map to a 3D sphere. Calculus allowed us to relate points to lines and lines to objects; integrals move from x to x-squared, to x-cubed, and so on. The printing press promoted individual perspectives on religion and politics. We all could sit with a text and come up with our own, personal opinions on it. This was no small shift: it's what led to the Protestant wars, after all.

Out of this newfound experience of perspective was born the notion of the individual: the Renaissance Man. Sure, there were individual people before the Renaissance, but they existed mostly as parts of small groups. With literacy and perspective came the abstract notion of the person as a separate entity. This idea of a human being as a "self," with independent will, capacity, and agency, was pure Renaissance—a rebirth and extension of the ancient Greek idea of personhood. And from it we got all sorts of great stuff like the autonomy of the individual, agency, and even democracy and the republic. The right to individual freedom is what led to all those revolutions.

But thanks to the new emphasis on the individual, it was also during the first great Renaissance that we developed the modern concept of competition. Authorities became more centralized, and individuals competed for how high they could rise in the system. We like to think of it as a high-minded meritocracy, but the rat race that ensued only strengthened the authority of central command. We learned to compete for resources and credit made artificially scarce by centralized banking and government.

While our renaissance also brings with it a shift in our relationship to dimension, the character of this shift is different. In a hologram, a fractal, or even an Internet web site, perspective is no longer about the individual observer's position; it's about that individual's connection to the whole. Any part of a holographic plate recapitulates the whole image; bringing all the pieces together generates greater resolution. Each detail of a fractal reflects the whole. Web sites live not by their own strength but by the strength of their links. As Internet enthusiasts like to say, the power of a network is not the nodes, it's the connections.

That's why new models for both collaboration and progress have emerged during our renaissance—ones that obviate the need for competition between individuals and instead value the power of collectivism. The open source development model, shunning the corporate secrets of the competitive marketplace, promotes the free and open exchange of the code underlying the software we use. Anyone and everyone is invited to make improvements and additions, and the resulting projects—like the Firefox browser—are more nimble, stable, and user-friendly. Likewise, the development of complementary currency models, such as Ithaca Hours, allows people to agree together on what their goods and services are worth to one another without involving the Fed. They don't need to compete for currency in order to pay back the central creditor—currency is an enabler of collaborative efforts rather than purely competitive ones.

For while the Renaissance invented the individual and spawned many institutions enabling personal choices and freedoms, our renaissance is instead reinventing the collective in a new context. Originally, the collective was the clan or the tribe—an entity defined no more by what members had in common with each other than by what they had in opposition to the clan or tribe over the hill. Networks give us a new understanding of our potential relationships to one another. Membership in one group does not preclude membership in a myriad of others. We are all parts of a multitude of overlapping groups with often paradoxically contradictory priorities.

Because we can contend with having more than one perspective at a time, we needn't force them to compete for authority in our hearts and minds—we can hold them all, provisionally. That's the beauty of renaissance: our capacity to contend with multiple dimensions is increased. Things don't have to be just one way or directed by some central authority, alive, dead, or channeled. We have the capacity to contend with spontaneous, emergent reality. We give up the illusion of our power as deriving from some notion of the individual collecting data, and find out that having access to data through our network-enabled communities gives us an entirely more living flow of information that is appropriate to the ever-changing circumstances surrounding us. Instead of growing high, we grow wide. We become pancake people.

DOUGLAS RUSHKOFF is a media analyst; Documentary Writer; Author, Media Virus.
Simple point number 1: A smart computer would have to be able to learn. This seems like an obvious idea. How smart can you be if every experience seems brand new? Each experience should make you smarter, no? If that is the case, then any intelligent entity must be capable of learning from its own experiences, right?

Simple point number 2: A smart computer would need to actually have experiences. This seems obvious too, and follows from simple point number 1. Unfortunately, this one isn't so easy. There are two reasons it isn't so easy.

The first is that real experiences are complex, and the typical experience that today's computers might have is pretty narrow. A computer that walked around the moon and considered seriously what it was seeing and decided where to look for new stuff based on what it had just seen would be having an experience. But, while current robots can walk and see to some extent, they aren't figuring out what to do next and why. A person is doing that. The best robots we have can play soccer. They play well enough, but really not all that well. They aren't doing a lot of thinking. So there really aren't any computers having much in the way of experiences right now.

Could there be computer experiences in some future time? Sure. What would they look like? They would have to look a lot like human experiences. That is, the computer would have to have some goal it was pursuing, and some interactions caused by that goal that caused it to modify what it was up to in mid-course and think about a new strategy to achieve that goal when it encountered obstacles to the plans it had generated to achieve that goal. This experience might be conversational in nature, in which case it would need to understand and generate complete natural language, or it might be physical in nature, in which case it would need to be able to get around and see, and know what it was looking at. This stuff is all still way too hard today for any computer. Real experiences, ones that one can learn from, involve complex social interactions in a physical space, all of which is being processed by the intelligent entities involved. Dogs can do this to some extent. No computer can do it today. Tomorrow, maybe.

The problem here is with the goal. Why would a computer have a goal it was pursuing? Why do humans have goals they are pursuing? They might be hungry or horny or in need of a job, and that would cause goals to be generated, but none of this fits computers. So, before we begin to worry about whether computers would make mistakes, we need to understand that mistakes come from complex goals not trivially achieved. We learn from the mistakes we make when the goal we have failed at satisfying is important to us and we choose to spend some time thinking about what to do better next time. To put this another way, learning depends upon failure, and failure depends upon having had a goal one cares about achieving and that one is willing to spend time thinking about how to achieve next time using another plan. Two-year-olds do this when they realize saying "cookie" works better than saying "wah" when they want a cookie.

The second part of the experience point is that one must know one has had an experience, and know the consequences of that experience with respect to one's goals, in order to even think about improving. In other words, a computer that thinks would be conscious of what had happened to it, or would be able to think it was conscious of what had happened to it, which may not be the same thing.
Simple point number 3: Computers that are smart won't look like you and me. All this leads to the realization that human experience depends a lot on being human. Computers will not be human. Any intelligence they ever achieve will have to come by virtue of their having had many experiences that they have processed and understood and learned from, experiences that have helped them better achieve whatever goals they happen to have.

So, to Foreman's question: computers will not be programmed to make mistakes. They will be programmed to attempt to achieve goals and to learn from experience. They will make mistakes along the way, as does any intelligent entity.

As to Dyson's remarks: "Turing proved that digital computers are able to answer most, but not all, questions that can be asked in unambiguous terms." Did he? I missed that. Maybe he proved that computers could follow instructions, which is neither here nor there. It is difficult to give instructions about how to learn new stuff or get what you want. Google's "allowing people with questions to find answers" is nice but irrelevant. The Encyclopedia Britannica does that as well, and no one makes claims about its intelligence or draws any conclusion whatever from it. And Google is by no means an operating system—I can't even imagine what Dyson means by that, or does he just not know what an operating system is?

People have nothing to fear from smart machines. With the current state of understanding of AI, I suspect they won't even have to see any smart machines any time soon. Foreman's point was about people, after all, and people are being changed by the computer's ubiquity in their lives. I think the change is, like all changes in the nature of man's world, interesting and potentially profound, and probably for the best. People may well be more pancake-like, but the syrup is going to be very tasty.

ROGER SCHANK is a Psychologist & Computer Scientist; Author, Designing World-Class E-Learning.
Can computers achieve everything the human mind can achieve? Can they, in other words, even make fruitful mistakes? That's an ingenious question. Of course, computers never make mistakes—or rather, a computer's "mistake" is a system failure, a bad chip or a bad disk or a power interruption, resulting in some flamboyant misstep, but computers can have error-correcting software to rescue them from those. Otherwise, a computer always does the logical thing. Sometimes it's not the thing you wanted or expected, and so it feels like a mistake, but it usually turns out to be a programmer's mistake instead.

It's certainly true that we are hemmed in constantly by technology. The technical wizardry in the graphic representation of reality that generated a long history of representative art is now substantially eclipsed by photography and later techniques of imaging and reproduction. Artists and other humans respond by doing more and more creatively in the zone that is still left un-competed, but if I want to know what George W. Bush looks like, I don't need to wait for a Holbein to track him down. We may reasonably expect to continue to be hemmed in.

I have trouble imagining what students will know fifty years from now, when devices in their hands spare them the need to know multiplication tables or spelling or dates of the kings of England. That probably leaves us time and space for other tasks, but the sound of the gadgets chasing us is palpable. What humans will be like, accordingly, in 500 years is just beyond our imagining.

So I'll ask what I think is the limit case question: can a computer be me? That is to say, could there be a mechanical device that embodied my memory, aptitudes, inclinations, concerns, and predilections so efficiently that it could replace me? Could it make my mistakes? I think I know the answer to that one.

JAMES O'DONNELL is a classicist; cultural historian; Provost, Georgetown University; Author, Avatars of the Word.
My one character, dubbed Lugubrioso, had a flair for elaborate phraseology that rivaled the Master's, and he turned it to deploring the loss of the inner self's solemn, silent spaces, the hushed corridors where the soul communes with itself, chasing down the subtlest distinctions of fleeting consciousness, catching them in finely wrought nets of words, each one contemplated for both its precise meaning and euphony, its local and global qualities, one's flight after that expressiveness which is thought made surer and fleeter by the knowledge of all the best that had been heretofore thought, the cathedral-like sentences (to change the metaphor) that arose around the struggle to do justice to inexhaustible complexity themselves making of the self a cathedral of consciousness. (Lugubrioso spoke in long sentences.)

He contemplated with shuddering horror the linguistic impoverishment of our technologically abundant lives, arguing that privation of language is both an effect and a cause of privation of thought. Our vocabularies have shrunk and so have we. Our expressive styles have lost all originality and so have we. The passivity of our image-heavy forms of communication—too many pictures, not enough words, Lugubrioso cried out, pointing his ink-stained finger at the popular culture—substitutes an all-too-pleasant anodyne for the rigors of thinking itself, and our weakness for images encourages us to reduce people, too—even our very own selves—to images, which is why we are drunk on celebrityhood and feel ourselves to exist only to the extent that we exist for others. What is left but image when the self has stopped communing with itself, so that in a sad gloss on Bishop Berkeley's apothegm, our esse has become percipi, our essence is to be perceived?

Even the torrents of words posted on "web-related locations" (the precise nature of which Lugubrioso had kept himself immaculately ignorant) are not words that are meant for permanence; they are pounded out on keyboards at the rate at which they are thought, and will vanish into oblivion just as quickly, quickness and forgetfulness being of the whole essence of the futile affair, the long slow business of matching coherence to complexity unable to keep up, left behind in the dust.

My other character was Rosa, and she pointed out that at the very beginning of this business that Lugubrioso kept referring to, in stentorian tones, as "Western Civilization," Plato deplored the newfangled technology of writing and worried that it tolled the death of thought. A book, Plato complained in Phaedrus, can't answer for itself. (Rosa found the precise quotation on the web. She found 'stentorian,' too, when she needed it, on her computer's thesaurus. She sort of knew the word, thought it might be "sentorian" or "stentorious," but she'll know where to find it if she ever needs it again, a mode of knowing that Lugubrioso regards as epistemologically damnable.) When somebody questions a book, Plato complained, it just keeps repeating the same thing over and over again. It will never, never, be able to address the soul as a living breathing interlocutor can, which is why Plato, committing his thoughts to writing with grave misgivings, adopted the dialogue form, hoping to approximate something of the life of real conversation.
Plato's misgivings are now laughable—nobody is laughing harder than Lugubrioso at the thought that books diminish rather than enhance the inner life—and so, too, will later generations laugh at Lugubrioso's lamentations that the cognitive enhancements brought on by computers will make of us less rather than more.

Human nature doesn't change, Rosa tried to reassure Lugubrioso, backing up her claims with the latest theories of evolutionary psychology propounded by Steven Pinker et al. Human nature is inherently expansive and will use whatever tools it develops to grow outward into the world. The complexity suddenly facing us can feel overwhelming, and perhaps such souls as Lugubrioso's will momentarily shrink at how much they must master in order to appropriate this complexity and make it their own. It's that shrinkage that Lugubrioso is feeling, confusing his own inadequacy to take in the new forms of knowing with the inadequacy of the forms themselves. Google doesn't kill people, Rosa admonished him. People kill people.

Lugubrioso had a heartfelt response, but I'll spare you.

REBECCA GOLDSTEIN is a philosopher and novelist; Author, Incompleteness.