Edge in the News: 2010
Internet
>F.A.Z.-Spezial: Jahresfrage auf Edge.org (Translation: F.A.Z. Special: The Annual Question on Edge.org)
WIE HAT DAS INTERNET IHR DENKEN VERÄNDERT? (Translation: How Has the Internet Changed Your Thinking?)
By Frank Schirrmacher
8 January 2010 — This Friday the American literary agent John Brockman published the 2010 question: How have the Internet and networked computers changed the way we think? At the core of the debate lies the question posed by the science historian George Dyson: "Is the price of machines that think, people who can no longer think?"
Some of the most important present-day scientists and authors are in Brockman's circle, and they present their visions on Edge.org in one hundred twenty-one answers. We are printing the most interesting ones in this feature. Unlike in Germany, where the debate about the information age is still largely a palaver about media interests, Edge aims at a deeper debate.
>Internet-Debatte: Wenn Literatur Sich Im Netz Verfängt (Translation: "When Literature Gets Caught in the Net")
By Thomas Hettche
Literature does not consist of books, whether of cardboard or of digits. Literature consists of novels, sonnets, tales, short stories, odes; in short, of works: completed structures, organized according to specific aesthetic and thematic principles, following their own laws, comprehensible only on their own terms and reducible to nothing else. It is this particular shape, this distinctive physiognomy, arising from a specific language and from what that language does, that concerns every real writer. And this physiognomy distinguishes such works from all the journal pieces and film treatments that otherwise stare out at us from between two covers and call themselves novels.
>FAZ.NET-Spezial: Digitales Denken (Translation: Digital Thinking at FAZ)
A reprise of recent articles
Evgeny Morozov holds that Twitter is a control tool of authoritarian regimes. Invoking Søren Kierkegaard, Morozov explains why Jürgen Habermas was always a bit too euphoric.
Evgeny Morozov and Clay Shirky had a short exchange of blows in Prospect in early January over the importance of Twitter to the Iranian protest movements. Morozov argued that social networks like Twitter and Facebook above all create points of control for the regime. Clay Shirky replied at the time: "Even taking into account the increased availability of surveillance, the net value of social media has shifted the balance of power in the direction of Iran's citizens."
The F.A.S. today crossposted a conversation between Morozov and Shirky for Edge.org titled "Digital Power and Its Discontents." Remarkable, among other things, is how Morozov weaves together Søren Kierkegaard, Jürgen Habermas and Twitter:
I don't know if you've read Kierkegaard, but there are quite a few subtle undertones of Kierkegaard in my critique of Twitter-based activism. Kierkegaard happened to live during the very times that were celebrated by Habermas: cafes and newspapers were on the rise all over Europe, a new democratized public sphere was emerging. But Kierkegaard was growing increasingly concerned that there were too many opinions flowing around, that it was too easy to rally people behind an infinite number of shallow causes, that no one had strong commitment to anything. There was nothing that people could die for. Ironically, this is also one of my problems with the promiscuous nature of online activism: it cheapens our commitment to political and social causes that matter and demand constant sacrifice.
"Unlike in novels," muses a character in Ian McEwan's "Saturday," "moments of precise reckoning are rare in real life." In life, pressing questions are not often resolved. "They simply fade. People don't remember clearly, or they die, or the questions die and new ones take their place."
Perhaps that explains why McEwan doesn't reach for pat answers in his own novels. Our "real life" interest in his political and cultural themes -- Cold War politics in "Black Dogs," the failure of liberalism in "Amsterdam," post-9/11 terrorism and the invasion of Iraq in "Saturday," the science and politics of climate change in his latest novel, "Solar" -- lingers long after we close the book and move on.
In a Web forum, "What Will Change Everything," McEwan calls for the "full flourishing" of solar technology to replace oil as our primary energy source and repair the damage caused by global warming. He envisions the world's deserts blooming with solar towers that are as expressive of our aesthetic aspirations as medieval cathedrals once were. The plot of "Solar" involves the development of artificial photosynthesis as a cheap, clean energy alternative.
McEwan is as much scientist as he is novelist, and in "Solar" he finds clever ways to articulate the breadth of the climate change debate, and what may be at stake. Here's an issue that might not fade. The story McEwan tells in "Solar" is certainly not as hopeful or as inspiring as his personal views seem to promise.
In fact, "Solar" is a dark and savagely funny book, a withering portrait of Michael Beard, the Nobel laureate behind the solar project. Now in his mid-50s, Beard long ago lost the inspiration that fueled his prizewinning research. He's a scientist turned bureaucrat managing a research foundation. Obsessed with sex and food, he is more concerned with his dissolving fifth marriage than he is with innovative science or the condition of the planet. Surface charm notwithstanding, Beard is totally repellent, so much so I found myself rooting for any sort of change, even climate change. An odd accident gives Beard the chance to take revenge on his unfaithful wife and her lovers, and incidentally redeem himself as a New Energy entrepreneur.
The outlook is fairly bleak. Investors want assurance of "shareholder value" before they'll commit to the solar project. Beard's new love interest demands commitment and children but refuses to think about global warming because "to take the matter seriously would be to think about it all the time," and the routines of daily life won't permit that.
Beard's corruption may doom the solar project, but he excuses his sickness as a reflection of the planet's condition. He brushes aside his own dire predictions about global warming to embrace "a bit of nihilism." The Earth will do fine without him. "And if it shrugged off all the other humans, the biosphere would soldier on, and in a mere ten million years teem with strange new forms." The question raised by Beard's story is whether his self-loathing and despair reflect humankind's lack of will to make the changes necessary to redirect the fate of the planet.
-- Vernon Peterson
SOLAR
Ian McEwan
Doubleday
$26.95, 304 pages
Imagine being able to put the question that nags at your pensive moments to a circle of sages assembled expressly to answer it. There is no guarantee of getting to the bottom of it, but an advance in perspective is assured.
John Brockman, cultural impresario by profession, worked this out well. A literary agent who specializes in sniffing out scientific talent, he created Edge, a sort of virtual foundation based on the Internet (www.edge.org), where exceptional minds, prompted by shared questions, promote discussion of important cultural themes and their wide-ranging implications. "Edge" literally means rim, border: "The scientists of the Edge group," explains Piero Bianucci in La Stampa, describing the project, "stand on the edge, on the frontier between what is known and what one would like to know, between present and future, perhaps between genius and madness."
Brockman launches a question, and Nobel laureates, aspiring Nobel laureates, and otherwise distinguished and much-decorated scientists give their interpretive take on the topic. And since rather interesting minds mostly produce answers that are just as interesting, the able agent does not let them slip away. He is convinced, above all, that if scientists find the right way to communicate with a less specialized, and therefore obviously larger, audience, they can bring a little light into many lives. For this reason Brockman prefers this means of spreading their ideas to the classic publication of articles in specialized journals with tiny circulations, helping the architects of the third culture (about which the literary agent has also written a book: La terza cultura, Garzanti 1995) communicate with the general public.
This, for example, is how his latest and very recent 153 ragioni per essere ottimisti (il Saggiatore, 2010) came about; Brockman teased the scientists of his Edge by asking them, "What are you optimistic about, and why?", naturally privileging the perspective of science and scientific research.
Collecting all the answers filled the pages of the book, and the reader is left with the 153 answers of as many great scientists explaining why, all things considered and despite the bad news that assails us every day, it is scientifically worth being optimistic. In the case of the greenhouse effect, for example, because within all the anguish over the problem, shared and fed by very many scientists, there is already an implicit awareness of it. Another reason for hope is solar energy: as Alun Anderson, former editor of New Scientist, explains, "The Sun supplies 7,000 times more energy than we are using." Which is to say, the solution exists; we just have to work hard to find it.
There are also those who bet on the scientific advances that constantly lengthen and improve life, those who bet on artificial intelligence, and those who even believe that an imminent colony on Mars will be the turning point. Nor should the wars that press in, the languishing economy, the spreading episodes of violence, or the looming global warming be allowed to kill optimism about the future. The biologists who believe in the next great step of their studies, the analysis of the environment's influence on the activation of genes, are confident about tomorrow.
Nor do the psychologists let themselves be overwhelmed by negativity: they are convinced that the dark sides of the human mind, the ones that have produced humanity's greatest traumas, are born of an innate capacity of the brain to focus on the other and distance itself from it, even with aggression and destructive mechanisms, and that coming studies may defuse them. In short, this is not a vague sense of confidence in the future but, among physicists, biologists and psychologists, a credentialed pool of experts who scientifically invite optimism, as if to remind us that there are no excuses: the future is in our hands and we have what it takes to turn it for the best. Away with catastrophism and alibis.
“Time is a moving image of eternity.” —Plato
We tend to believe that destiny is not fixed and that all time past fades into oblivion, but could that movement be a mere illusion? A renowned British physicist explains that in a special dimension, time simply doesn't exist.
"If you try to get your hands on time, it's always slipping through your fingers," said Julian Barbour, British physicist and author of "The End of Time: The Next Revolution in Physics," in an interview with the Edge Foundation. While this poetic statement still resonates in the room, Barbour and his interviewer, on his account, have no connection with the selves they were a second ago.
Barbour believes that people cannot capture time because it does not exist. While this is not a new theory, it has never enjoyed the popularity of Einstein's theory of relativity or string theory.
The concept of a timeless universe is not only irresistibly attractive to a handful of scientists; such a model may also pave the way to explaining many of the paradoxes that modern physics faces in describing the universe.
We tend to think and perceive time to be linear in nature, the course of which inevitably flows from past to future. This is not only a personal perception of all humans, but also the context in which classical mechanics analyzes all mathematical functions within the universe. Without such a concept, ideas such as the principle of causality and our inability to be present simultaneously in two events would begin to be addressed from a completely different level.
The idea of the discontinuity of time proposed by Barbour attempts to explain in a theoretical context a universe composed of many points he calls “now.” But such “nows” would not be understood as fleeting moments that came from the past and will die in the future; a “now” would only be one among the millions now existing in the eternal universal mosaic of a special dimension impossible to detect, each one related in a subtle way to the others, but none more outstanding than the neighboring one. They all exist at the same time.
With such a mix of simplicity and complexity, Barbour’s idea promises a great relief to anyone who is willing to accept the lack of time before the Big Bang.
Barbour thinks the concept of time might be similar to that of the integers (whole numbers). All numbers exist simultaneously, and it would be nonsensical to claim that the number 1 exists before the number 20.
At this point of the argument, it is probably inevitable for the reader to ask, “Are you trying to convince me that this movement I’m doing right now with my forearm does not exist? If infinitesimal fractions of ‘nows’ are not connected to each other, how do I remember the first ideas in this article? How do I remember what I ate for lunch? Why do I wake up and go to work if the job belongs to the ‘I’ that has nothing to do with me? If the future is already there, why strive at all?”
Such dilemmas have arisen from the illusory perception that time is fleeting, like water in a river. We can consider a timeless universe as a long vanilla custard, the center of which has been filled with chocolate for the whole length of the custard. If we cut a slice, we get what we call a present, a “now.”
Assuming that the chocolate in the center represents us, we would believe that our slice is the only one existing in the universe, and that the anterior and posterior slices exist as concepts only. This idea would sound ridiculous to an observer of the custard, who knows that all slices exist together.
Taking this example, you could say that “I” am not the same person who began writing this sentence. I’m unique, perhaps in apparent connection with each of the subjects who wrote the words earlier in this paragraph. Still, even the endless “nows” independent of each other would not be dispersed. They still make up a structure. They are a block, a whole custard with no crumbs.
And this is Barbour's theory: In a space of the cosmos, the future (our future) is already there, deployed, and every second of our past is also present, not as a memory but as a living present. The most painful thing for humans, as Eastern philosophies point out, would be to try to break the fixed mold.
The wise one, who follows the predetermined course, would be a happy face amid the cosmic chocolate custard, trying to live each unique and extremely tiny "now."
Most of us are deeply convinced, at an unconscious level, that a great cosmic clock is ticking out every second of this huge space called the universe. However, early in the last century, Albert Einstein had already demonstrated that temporal reality is relative to each object in the universe, and that time is inseparable from space. Even the specialists who synchronize the world's clocks are aware that the world runs on an arbitrarily stipulated ticking, since clocks are not able to measure time itself at all.
Apparently, the only alternative is to sink into a “temporary illusion” of this infinity, knowing that there is a space where our past still exists and what we do doesn’t change. Or as Einstein himself would say, “People like us, who believe in physics, know that the distinction between past, present, and future is only a stubbornly persistent illusion.”
"Just take the Internet and digital technologies in blindly. They offer extraordinary opportunities for access to new information, but they have a social and cultural cost too high: with the reading, transform the way we analyze things, the mechanisms of learning.Moving from page to screen paper we lose the ability to concentrate, we develop a more superficial way of thinking, we become people of pancakes, as the playwright Richard Foreman: wide and thin as a pancake because, constantly jumping from one piece of information to 'thanks to another link, we get anywhere we want, but at the same time we lose because we do not have thickness more time to reflect, to contemplate.Pausing to develop a deep analysis is becoming a thing unnatural. "
Nicholas Carr is the bete noire Fan Network "without ifs and buts' and the industry of digital technology.Two years ago one of his essays, published by the magazine "The Atlantic" with the provocative title "Google is making us stupid?", Was the first stone thrown into the lake of Internet culture.Carr, a scholar who has worked in business consulting and has directed a long time, "Harvard Business Review, was branded by the people of the web as an enemy of technology.
"The truth is - he says today from his home in Colorado where he retired to write books, since the eighties have been a consumer of digital technology febrile starting from the Mac Plus, my first personal computer.I've always been a geek, not a technophobic.But my enthusiasm has gradually lessened with the discovery that, in addition to the advantages that are obvious to all, the network also brings us much less obvious disadvantages and for this the most dangerous.Also because the effects are profound and permanent. "
Jaron Lanier, the genius of artificial intelligence in a recent book-manifesto has warned against "collectivism" of the Internet that kills individual creativity, the Net has been branded as a traitor.It will be harder to treat in the same way The Shallows ('superficial: What the Internet is doing to our minds) his new book that is already discussing when there are still more than two months of its publication in the U.S..Explains why in the same Carr: "What on 'Atlantic' was an essay written based on my personal experience, a reflection on how digital culture has changed my behavior.Over the past two years I have tried to go beyond the personal, examining the scientific evidence of how the Internet and social-as well as the earlier revolutions of the alphabet - have changed the intellectual history of mankind.And how new technologies influence the structure of our brain even at the cellular level."
In the debate sponsored by the "Edge Foundation 'on these issues, she cited the case of the" Cushing Academy, an elite school that form the leading classes of Massachusetts since the time of the Civil War, from whose library are all suddenly disappeared books: replaced by computers to do research.What role is playing the school in this revolution?
"The school should be taught to use new technology wisely.In reality, however, educators and even librarians are getting used to the idea that all the information and study materials can be distributed to students in digital form.From the economic point of view it certainly makes sense that it costs less.But merely to fill the rooms of electronic systems is myopic.How do we teach McLuhan, the medium matters, and a lot.Without books is not only more difficult to concentrate, but we are driven to seek from time to time on the Internet the concepts learned to date and stored deep in our memory.The long-term memory loss is the greatest risk: it is a topic to which I devoted a whole chapter. "
The co-founder of "Wikipedia," Larry Sanger, acknowledges the risk of distraction,but the accusation of being too pessimistic, do not trust in man's ability to deal sensibly with the new possibilities offered by technologies which are, however, a large progress for humanity.The exercise of freedom, Sanger says, requires responsibility, ability to focus on the problems and solve them.Even in the digital
"The advocates of total freedom to buy and carry weapons in the same way they think when they say guns do not kill men, men who kill other men.I do not want controversy and I hope that Larry was right: I'm not a technological determinist.Unfortunately, experience tells us that his is a rather 'naïve: when new technology becomes commonly used, tends to change our habits, the way we work, how we socialize and educate our children.It is along this path, most of which are beyond our control.It happened in the past with the alphabet, or the introduction of printing.It happens, more so now with the Internet.People tend not to exercise control and, perhaps because the interruptions and distractions on the Internet, bring the pieces of information interesting or just fun '
Today, then, is not the only man more or less able to shape its future: they weigh the interests of big corporations of digital technologies.Google Here comes again ...
"To make money to companies in the Network is our perpetual motion from one site to another, from one page to another.They are our compulsive clicking at increasing advertising revenues.The last thing you may want a company like Google is that we become more reflective, that we focus more on a single source of information. "
Curious. To support the thesis of the absolute freedom of the Net, without rules or education programs, are mostly liberals.With arguments that, at least in the United States, sometimes reminiscent of those used by libertarian conservatives weapons, against the constraints on the environment or the rules of nutrition education that could prevent epidemics of obesity and diabetes.Google even raises, for now, great distrust.Why?
"Because of the counterculture left U.S. contrarissima to large IBM computer punch cards to the burning of the '60s, then found in the personal computer - a device subject to review by individual corporations and governments-an instrument of freedom.And indeed was so, it's been so long.But in recent years, much has changed since that means crowdsourcing ideas and free work for many companies operating on the Internet, social networks like Facebook that behave as landowners of the nineteenth century: small pieces of land rent free and then earn on its cultivation.It's time to start thinking. "
L'immagine di copertina di "153 ragioni per essere ottimisti
La buona notizia: ci sono 153 buone ragioni scientifiche per essere ottimisti. La cattiva: in qualche caso, perché l’ottimismo si traduca in fatti concreti, bisognerà aspettare secoli. E adesso la storia.
John Brockman è un agente letterario. Nella sua agenda non trovi romanzieri ma scienziati. Alcuni hanno già sul petto la medaglia del premio Nobel, altri studiano per conquistarla. Tutti hanno una gran voglia di comunicare tra sé e con il mondo. Brockman li ha riuniti in una specie di club virtuale che ha chiamato Edge, con sede in Internet (www.edge.org). Edge significa orlo, bordo. Ma il verbo imparentato con questa parola si traduce anche in «aguzzare». Gli scienziati del gruppo Edge stanno sul bordo, sulla frontiera tra ciò che si sa e ciò che si vorrebbe sapere, tra presente e futuro, forse tra genio e follia. E aguzzano l’ingegno per passare il confine senza passaporto. Così Edge è uno spazio aperto, dove gli scienziati del club mettono in gioco idee audaci abbassando il loro livello di inibizione, come richiede il pensiero creativo.
Brockman ha il senso dell’auditel. Dopo essersi inventato la «terza cultura», che incorpora i valori umanistici della scienza, una volta all’anno agli iscritti del club pone una domanda alla quale tutti sono invitati a rispondere nel sito Internet. Gioco astuto: in pochi giorni l’agente letterario Brockman ha tra le mani un libro. L’ultimo, appena uscito in Italia, è appunto intitolato153 ragioni per essere ottimisti, sottotitolo Le scommesse della grande ricerca (il Saggiatore, 430 pagine, 21 euro). Il tema era: «La scienza ci pone sempre nuove domande, domande più mirate e meglio articolate. Su che cosa sei ottimista e perché? Sorprendici».
Per conoscere il futuro – diceva Einstein – il modo migliore è inventarlo. Un’altra sua battuta era: «Non penso mai al futuro: arriva così presto!». I ragazzi del club invece ci hanno pensato. Non tutto è pensiero originale. Molti scienziati sono ossessionati dall’effetto serra con il conseguente rischio del riscaldamento globale, ma sono ottimisti perché vedono nel mondo una presa di coscienza del problema. Un aspetto fondamentale della questione riguarda l’energia: le fonti fossili emettono gas serra e prima o poi si esauriranno. Un gruppetto del club vede la soluzione nell’energia solare. Chi lo dice meglio è Alun Anderson, già caporedattore di New Scientist: «Il Sole fornisce 7000 volte più energia di quanta ne stiamo utilizzando».
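Anderson's figure is easy to check on the back of an envelope. A minimal sketch, using standard textbook values rather than anything from the article (solar constant about 1361 W/m², Earth radius about 6.37 × 10⁶ m, planetary albedo about 0.3, and world primary energy consumption circa 2007 of roughly 15 TW):

\[
P_{\mathrm{abs}} \approx (1 - 0.3)\times 1361\,\tfrac{\mathrm{W}}{\mathrm{m}^2}\times \pi \left(6.37\times 10^{6}\,\mathrm{m}\right)^2 \approx 1.2\times 10^{17}\,\mathrm{W},
\qquad
\frac{P_{\mathrm{abs}}}{1.5\times 10^{13}\,\mathrm{W}} \approx 8000,
\]

the same order of magnitude as the 7,000 Anderson quotes; the exact ratio depends on which consumption figure one assumes.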
Other, not too divergent lines of thought concern confidence in the advances that lengthen and improve life, see the Holy Grail in the Wiki-style cooperative models gaining ever more ground on the Net, or bet on the resurrection of Artificial Intelligence. Leon Lederman, Nobel laureate in physics, declares himself "optimistic about science education," but immediately adds: "I know, I deserve a psychological examination, or to serve as a guinea pig for my delirious fantasies." Another Nobel laureate, the cosmologist George Smoot, while admitting that in billions of years the universe will die of entropy, offers as proof of optimism the fact that he invests heavily in his own pension fund. Paul Davies, astrophysicist and extraordinary popularizer, limits himself to predicting a colony on Mars "by the end of the century."
The most interesting optimism is that of the biologists who see in the study of the epigenome (that is, the environment's influence on the activation of our genes) the next great step after the deciphering of DNA. Even more intriguing is the optimism of certain psychologists who study human aggression and irrationality. All of them see, surprisingly, a future tending toward peace and nonviolence. This is the case with Steven Pinker and Marc Hauser (both of Harvard University). More precisely, Hauser is optimistic about the end of the "isms" that still tear the world apart: religious fundamentalism, racism, sexism, but also atheism. They were generated, he says, by "a single cause: a brain that developed the unconscious capacity to seek out differences between self and other and, once they are recognized, to devalue the other for selfish ends," but "science is discovering the mechanisms of this destructive capacity and may hold the key to finding a solution."
At the end of the nineteenth century, positivist euphoria led people to believe that science would solve every problem of humanity, and physicists were convinced their discipline had discovered everything. Shortly afterward came quantum mechanics and Einstein's relativity. In the book edited by Brockman one senses, instead, the crisis of today's physics, squeezed between a string theory that has run too far ahead of experiment and colossal laboratories that struggle to deliver proportionally revolutionary results. Biology aside, the 153 optimists cannot point to new paradigms capable of replacing those that today appear worn out.
A final twist: the book is coming out now, but the question was posed in 2007, before the global economic crisis. Will Brockman's friends be just as optimistic in 2010? If they have had second thoughts, they can console themselves with the quip of Nobel laureate Niels Bohr: "Prediction is always difficult, especially about the future."
To head off any accusation of plagiarism from the outset (also because the topic is currently boiling over in the media, thanks to the looming award to a Mexican salamander at the Leipzig Book Fair), we disclose our sources right away: the following reflections rest mainly on the results of a survey by the Edge Foundation, a kind of think tank on the web.
At the turn of the year, intellectuals, scientists and artists were asked how the Internet has changed the way they think ("How Has The Internet Changed The Way You Think?"). One of the common threads that many respondents attribute to the new medium seems to be a kind of background noise. Where quiet for thinking was once demanded and observed, digital troublemakers now interrupt contemplation and provide endless distraction.
It all began with the telephone, back when every call was exciting and was answered immediately. Later, a permanently ringing phone became a status symbol; bosses often had more than one set on their desks, for one was important and significant. The overdrawn caricature of the man juggling, snake-like, a herd of telephone receivers found its reality in stockbrokers, who constantly work two or more phones at once.
Relaxing, back then, meant above all quiet: no activity interrupted by a call. That held true until mobile phones came into fashion. We remember with reluctance the time when any discussion was instantly broken off as soon as someone's little machine beeped, as if a life depended on it, or at least a good deal.
Now it seems to me that society is split in two: some do not want to be always available and switch their mobile phones on only deliberately. The others, often the young, seem unable to live without constant talking and constant texting. This creates a new addictive behavior, as the neuroscientist Gary Small describes in an interview with the Süddeutsche Zeitung. According to him, some teachers already have to schedule a mobile-phone break after a single lesson because their pupils can no longer manage without one.
Dolphins are currently experiencing something similar. Visited by us humans for therapeutic or simply refreshing encounters in their natural environment, commonly known as the sea, they are so distracted by us that they no longer attend to their own tasks. They neglect themselves, no longer hunt together in groups, eat little and sometimes even starve. Why don't the animals simply turn their backs, or rather their dorsal fins, on us? They are too curious and cannot switch (us) off.
I am embarrassed to say that before this weekend I had never visited Edge.org.
I was first directed to the site on Friday by a post on 3QD, and I have remained there ever since, devouring responses to the 2010 Edge Annual Question, “How is the internet changing the way you think?”
There are many wonderful ideas to glean from this incredible collection of essays, but I was especially interested in what the replies suggested for the future of journalism and – perhaps a separate issue – the future of journalists.
In an article on Edge that is not actually part of the 2010 Question, the financial journalist Charles Leadbeater uses the example of open source software to suggest what the internet may allow in other cultural realms.
“The more people that test out a programme the quicker the bugs will be found,” Leadbeater explains. “The more people that see a collection of content, from more vantage points, the more likely they are to find value in it, probably value that a small team of professional curators may have missed.”
The application of this analogy to journalism is obvious and, to varying degrees, the concept has already been put into practice. The blog/traditional news hybrid site, Talking Points Memo, for instance, invites readers to contribute leads and even comb through government documents on their behalf. TPM’s crowdsourcing strategy has allowed the website’s comparatively tiny staff of reporters to break several major stories, including the U.S. Attorney firing scandal. There is also The Huffington Post, which famously employs unpaid “citizen journalists” and “volunteer bloggers,” in addition to paid editorial staff.
More generally, the surge in claims and opinions that now appear on the internet would seem, by sheer probability, to have increased the amount of accurate or useful information that is available to the public. Of course, for every instance like the TPM U.S. Attorney story, in which the work of amateur internet journalists has had beneficial consequences for society, there have been, one assumes, many more instances of misinformation, slander and inanity. There is also the problematic tendency of independent online publishers to redistribute professional content without compensating authors.
In other words, critics argue that the internet threatens quality cultural content, including quality journalism, in two ways: (1) by undermining the business models that currently finance it, and (2) by obscuring it in noise and distraction.
Clay Shirky, author and Professor of interactive telecommunications at NYU, offers a terrific analysis of the first issue, in his response to the 2010 Edge Question:
This shock of inclusion, where professional media gives way to participation by two billion amateurs (a threshold we will cross this year) means that average quality of public thought has collapsed; when anyone can say anything any time, how could it not? If all that happens from this influx of amateurs is the destruction of existing models for producing high-quality material, we would be at the beginning of another Dark Ages. So it falls to us to make sure that isn't all that happens.
TPM and The Huffington Post are two examples of what this change might look like for journalism. Programs like Ushahidi, a web platform that allows users to aggregate information on maps and timelines via text message, might also help fill the vacancy left by old media.
I share Shirky’s optimism that the internet will find ways to replace the systems that it destroys, but I also share his belief that this period of transition will be a tough one.
“It is our misfortune to live through the largest increase in expressive capability in the history of the human race,” he writes, half-ironically, “a misfortune because surplus always breaks more things than scarcity. Scarcity means valuable things become more valuable, a conceptually easy change to integrate. Surplus, on the other hand, means previously valuable things stop being valuable, which freaks people out."
The second threat that the internet poses to quality cultural content – the threat of drowning it in noise – is also addressed by several Edge contributors.
German intellectual, Frank Schirrmacher, for instance, proposes a conception of the internet as a Darwinian environment in which ideas compete for survival and the limited resource is attention.
“We have a population explosion of ideas, but not enough brains to cover them,” Schirrmacher explains.
As ideas battle for survival, we become the arbiters of which ideas live and which ideas die. But weeding through them is cognitively demanding, and our minds may be ill-suited to the task.
Conversely, in his response to the 2010 Question, the former Executive Editor of Wired, Kevin Kelly, suggests that when it comes to journalism, the act of weeding may actually confer a more nuanced appreciation of the issues of the day:
For every accepted piece of knowledge I find, there is within easy reach someone who challenges the fact. Every fact has its anti-fact…I am less interested in Truth, with a capital T, and more interested in truths, plural. I feel the subjective has an important role in assembling the objective from many data points.
The science historian, George Dyson, may have put it best in his reply to the 2010 Question, which analogized the experience of modern web surfers to that of indigenous boat builders in the North Pacific ocean.
“In the North Pacific ocean,” Dyson explains, “there were two approaches to boatbuilding” – the approach used by the Aleuts, who pieced their boats together using fragments of beach-combed wood, and the approach used by the Tlingit, who carved each vessel out of a single dugout tree.
The two methods yielded similar results, Dyson tells us, each group employing the minimum allotment of available resources. However, they did so by opposite means.
“The flood of information unleashed by the Internet has produced a similar cultural split,” Dyson argues. “We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now, we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.”
Each week, we publish an extract from a book that is topical or of general interest.
This Will Change Everything: Ideas That Will Shape The Future Edited by John Brockman Harper Perennial (2010)
What would your reply be to this question about change: 'What game-changing scientific ideas and developments do you expect to live to see?'
John Brockman, publisher and editor of the science website Edge, puts this hypothetical question to a group of scientists, thinkers, intellectuals and artists. The result? A collection of short essays where imagination, ideas and propositions know no bounds.
"HEY geek," says my bureau chief. He says it with affection, an honorific won from my ability to make his phone read his e-mail. A geek is not a nerd or, God forbid, a dweeb; nerds are smart and dweebs are socially incapable. A geek is obsessed and pulls things apart. Whether he puts them back together is immaterial, as is whether everyone else has left the room. "Hey geek" used to be a life sentence; to hear it was to know that your passion was a burden, that you would type out your days accompanied by nothing but a can of Coke and the sound of your own hair thinning.
But then a funny thing happened. The geeks grew up, and it wasn't so bad. The internet was a geek-hungry machine; it plucked the geek from in front of his ham radio and deposited him among sales and marketing staff, and sometimes even near girls. Several geeks became billionaires. Perhaps a geek even became the president of the United States. It became possible to be a geek and something else, too. Maybe a journalist.
John Brockman, in a brief essay on Edge, calls these geeks who went on to do something else the algorithmic culture, dedicated to learning something about the world by understanding the actual code behind the internet. The data packed into the black boxes of our phones and web browsers reveal things about us, trails of where we have been and what we have desired. And we, the algorithmic interpreters of The Economist, aim with this blog to approach black boxes with tiny screwdrivers, to let in the light and to completely ruin them on the way to finally, blissfully understanding them.
We request that you stay in the room. If you're going to step out for a bit, maybe grab us a Coke.
This post was prompted by my reading Fred Turner's book "From Counterculture To Cyberculture: Stewart Brand, the Whole Earth Network and the Rise of Digital Utopianism", which looks at the influence Bucky Fuller had on a range of people, in particular Stewart Brand, who helped create first the hippie counterculture and the back to the land movement of the sixties and seventies, then later the cyberculture that grew up around the San Francisco bay area. ... Turner has some great excerpts from his book at "EDGE" magazine — STEWART BRAND MEETS THE CYBERNETIC COUNTERCULTURE. ...
...Brand maintained that given access to the information we need, humanity can make the world a better place. The Whole Earth Catalog magazine he founded was promoted as a "compendium of tools, texts and information" which sought to "catalyze the emergence of a realm of personal power" by making technology available to people eager to create sustainable communities. Brand eventually achieved his goal of persuading NASA to release the first photo of the Earth from space (wandering around for some time wearing a badge saying "Why Haven't We Seen A Picture of the Whole Earth?") and the photo became the cover for the Catalog. ...
...Whole Earth (and later Wired) editor Kevin Kelly has noted that the style of the Whole Earth Catalog preceded the modern internet / blogosphere, and was eventually made redundant by it. ...
...Brand discusses "Whole Earth Discipline" in this talk at EDGE....
About 40 years ago I wore a button that said, "Why haven't we seen a photograph of the whole Earth yet?" Then we finally saw the pictures. What did it do for us?
The shift that has happened in those 40 years mainly has to do with climate change. Forty years ago, I could say in the Whole Earth Catalog, "we are as gods, we might as well get good at it". Photographs of earth from space had that god-like perspective.
What I'm saying now is we are as gods and have to get good at it.
Further Reading on Edge: STEWART BRAND MEETS THE CYBERNETIC COUNTERCULTURE;
WE ARE AS GODS AND HAVE TO GET GOOD AT IT: Stewart Brand Talks About His Ecopragmatist Manifesto
The other day, prompted by a book, we asked our readers: "What is your most dangerous idea?"
"Dangerous" here, of course, does not mean murderous ideas along the lines of "let's kill that man, let's bomb that place."
What is meant are ideas that, once put forward, would bring radical, surprising, assumption-shattering changes to the existing (economic, political, social, moral) order. ...
So what is your "most dangerous" idea?
...I had come across an interesting collection in a Mumbai bookstore: "What Is Your Dangerous Idea?" I very nearly bought it.
The title was very attractive, and the book contained short essays by people whose ideas I read with interest, such as Helen Fisher, Jared Diamond, Ray Kurzweil, Sherry Turkle and Douglas Rushkoff.
In the book, prepared by John Brockman, American writers and thinkers from many different fields set out their own dangerous ideas.
And what did I find when I returned to Turkey! The book had in fact already been translated into our language in 2009, under the very title "What's Your Dangerous Idea?" (Pegasus Press).
The "dangerous ideas" in question are not, to be sure, criminal acts such as murder, massacre, rape or robbery, which exist in almost every era, nor the planning of them.
What the intellectuals in the book are talking about is a different kind of threat: ideas that would change a given moral, social, political or cultural order, ideas that would shake our basic assumptions about life...
A dangerous idea is not necessarily a wrong one, of course. Quite the contrary: what if one day it came true?
Suppose that, as a result of scientific research, we came to know who will die at what age... Would it really be good for us to have that knowledge?
Would the scientists studying this puzzle want to continue? Or would we push to cut off funding for the research as soon as possible?
Who says scientists are catastrophists? At most those of the IPCC, the United Nations body charged with studying climate change, may be. Or some virologist at the World Health Organization, too quick to announce pandemics. But these are exceptions. As a rule, scientists are incurable optimists, their gaze turned to the future, certain in their hearts that they have the right ideas to fix the two or three little things that are wrong or still unknown.
Il Saggiatore is bringing to bookstores these days 153 ragioni per essere ottimisti (424 pages, 21 euros), edited by John Brockman. A group of scientists answers the same question: "What makes you optimistic?" Among them are many very well-known figures, for example Jared Diamond, Richard Dawkins, Lisa Randall, Ray Kurzweil, Gino Segré, Brian Eno, Daniel C. Dennett and Lawrence M. Krauss. The result is an à la carte menu of recipes for solving energy problems, democratizing the global economy, increasing government transparency, eradicating religious disputes, reducing world hunger, boosting our intelligence, defeating disease, making moral progress, improving the concept of friendship, transcending our Darwinian roots, understanding the fundamental law of the universe, unifying all knowledge, crushing terrorism, colonizing Mars.
Of course, there are some discrepancies. Richard Dawkins, for example, is optimistic because some laboratory will come up with the law of everything, the final theory capable of explaining every physical phenomenon. Frank Wilczek is optimistic for the opposite reason: thank God, no one seems able to reach the result Dawkins hopes for, and the world will keep surprising us for a long time yet. Some are optimistic because religion will finally be branded as pure and simple superstition. And some turn the tables and declare themselves optimistic because science will finally recognize that religion cannot be branded as pure and simple superstition. In short, a bipartisan variety of opinions is guaranteed.
Among the most debated problems is that of peace, tackled from every point of view. For the anthropologists, for example, it is a statistical matter of fact: we are heading toward the end of war. The twentieth century is weighed down by the blood of a hundred million victims (Lawrence Keeley's calculation in War Before Civilization). A frightening figure. Yet it would have been two billion had our rates of violence matched those of an average primitive society, in which the death rate from violence reached fifty percent. For the neuroscientists, it is also a matter of brains and sex hormones. Roger Bingham, for example, is optimistic because more women than in the past sit at the arms-control negotiating table. Equal opportunity has nothing to do with it. The point is that global security, in a testosterone-charged atmosphere, would be less assured. For the biologists, although humans are "programmed" to distinguish between Us and Them (that is, between the good and the bad), those categories are destined to blur thanks to evolution.
When you look at young people like the ones who grew up to blow up trains in Madrid in 2004, carried out the slaughter on the London underground in 2005, hoped to blast airliners out of the sky en route to the United States in 2006 and 2009, and journeyed far to die killing infidels in Iraq, Afghanistan, Pakistan, Yemen or Somalia; when you look at whom they idolize, how they organize, what bonds them and what drives them; then you see that what inspires the most lethal terrorists in the world today is not so much the Koran or religious teachings as a thrilling cause and call to action that promises glory and esteem in the eyes of friends, and through friends, eternal respect and remembrance in the wider world that they will never live to enjoy.
Our data show that most young people who join the jihad had a moderate and mostly secular education to begin with, rather than a radical religious one. And where in modern society do you find young people who hang on the words of older educators and "moderates"? Youth generally favors actions, not words, and challenge, not calm. That's a big reason so many who are bored, underemployed, overqualified, and underwhelmed by hopes for the future turn on to jihad with their friends. Jihad is an egalitarian, equal-opportunity employer (at least for boys, but girls are web-surfing into the act): fraternal, fast-breaking, thrilling, glorious, and cool. Anyone is welcome to try his hand at slicing off the head of Goliath with a paper cutter.
If we can discredit their vicious idols (show how these bring murder and mayhem to their own people) and give these youth new heroes who speak to their hopes rather than just to ours, then we've got a much better shot at slowing the spread of jihad to the next generation than we do just with bullets and bombs. And if we can de-sensationalize terrorist actions, like suicide bombings, and reduce their fame (don't help advertise them or broadcast our hysterical response, for publicity is the oxygen of terrorism), the thrill will die down. As Saudi Arabia's General Khaled Alhumaidan said to me in Riyadh: "The front is in our neighborhoods but the battle is the silver screen. If it doesn't make it to the 6 o'clock news, then Al Qaeda is not interested." Thus, the terrorist agenda could well extinguish itself altogether, doused by its own cold raw truth: it has no life to offer. This path to glory leads only to ashes and rot.
In the long run, perhaps the most important anti-terrorism measure of all is to provide alternative heroes and hopes that are more enticing and empowering than any moral lessons or material offerings. Jobs that relieve the terrible boredom and inactivity of immigrant youth in Europe, and of the underemployed throughout much of the Muslim world, cannot alone offset the alluring stimulation of playing at war in contexts of continued cultural and political alienation and little sense of shared aspirations and destiny. It is also important to provide alternate local networks and chat rooms that speak to the inherent idealism, sense of risk and adventure, and need for peer approval that young people everywhere tend towards. It could even be a 21st-century version of what the Boy Scouts and high school football teams did for immigrants and potentially troublesome youth as America urbanized a century ago. Ask any cop on the beat: those things work. But it has to be done with the input and insight of local communities or it won't work: de-radicalization, like radicalization itself, best engages from the bottom up, not from the top down.
In sum, there are many millions of people who express sympathy with Al Qaeda or other forms of violent political expression that support terrorism. They are stimulated by a massive, media-driven global political awakening which, for the first time in human history, can "instantly" connect anyone, anywhere to a common cause -- provided the message that drives that cause is simple enough not to require much cultural context to understand it: for example, the West is everywhere assaulting Muslims, and Jihad is the only way to permanently resolve the glaring problems caused by this global injustice.
Consider the parable told by the substitute Imam at the Al Quds Mosque in Hamburg, where the 9/11 bomber pilots hung out, when Marc Sageman and I asked him "Why did they do it?"
"There were two rams, one with horns and one without. The one with horns butted his head against the defenseless one. In the next world, Allah switched the horns from one ram to the other, so justice could prevail."
"Justice" ('adl in Arabic) is the watchword of Jihad. Thunderously simple. When justice and Jihad and are joined to "change" -- the elemental soundbite of our age -- and oxygenated by the publicity given to spectacular acts of violence, then the mix becomes heady and potent.
Young people constantly see and discuss among themselves images of war and injustice against "our people," become morally outraged (especially if injustice resonates personally, which is more of a problem abroad than at home), and dream of a war for justice that gives their friendship cause. But of the millions who sympathize with the jihadi cause, only some thousands show willingness to actually commit violence. They almost invariably go on to violence in small groups of volunteers consisting mostly of friends and some kin within specific "scenes": neighborhoods, schools (classes, dorms), workplaces, common leisure activities (soccer, study group, barbershop, café) and, increasingly, online chat-rooms.
Does Europe especially need to reconsider its approach to the Internet? EDGE would say yes:
Edge: TIME TO START TAKING THE INTERNET SERIOUSLY by David Gelernter, with "Introduction: Our Algorithmic Culture" by John Brockman:
"Edge was in Munich in January for DLD 2010 and an Edge/DLD event entitled 'Informavore' — a discussion featuring Frank Schirrmacher, Editor of the Feuilleton and Co-Publisher of Frankfurter Allgemeine Zeitung, Andrian Kreye, Feuilleton Editor of Sueddeutsche Zeitung, Munich; and Yale computer science visionary David Gelernter, who, in his 1991 book Mirror Worlds presented what's now called 'cloud computing.'
The intent of the panel was to discuss — for the benefit of a German audience — the import of the recent Frank Schirrmacher interview on Edge entitled 'The Age of the Informavore.' David Gelernter, who predicted the Web, and who first presented the idea of 'the cloud', was the scientist on the panel along with Schirrmacher and Kreye, Feuilleton editors of the two leading German national newspapers, both distinguished intellectuals....
Take a look at the photos from the recent Edge annual dinner and you will find the people who are re-writing global culture, and also changing your business, and, your head. What do Evan Williams (Twitter), Larry Page (Google), Tim Berners-Lee (World Wide Web Consortium), Sergey Brin (Google), Bill Joy (Sun), Salar Kamangar (Google), Keith Coleman (Google Gmail), Marissa Mayer (Google), Lori Park (Google), W. Daniel Hillis (Applied Minds), Nathan Myhrvold (Intellectual Ventures), Dave Morin (formerly Facebook), Michael Tchao (Apple iPad), Tony Fadell (Apple/iPod), Jeff Skoll (formerly eBay), Chad Hurley (YouTube), Bill Gates (Microsoft), Jeff Bezos (Amazon) have in common? All are software engineers or scientists.
So what's the point? It's a culture. Call it the algorithmic culture. To get it, you need to be part of it, you need to come out of it. Otherwise, you spend the rest of your life dancing to the tune of other people's code. Just look at Europe where the idea of competition in the Internet space appears to focus on litigation, legislation, regulation, and criminalization. [emphasis added]"
Those of us involved in communicating ideas need to re-think the Internet. Here at Edge, we are not immune to such considerations. We have to ask if we’re kidding ourselves by publishing 10,000+ word pieces to be read by people who are limiting themselves to 3″ ideas, i.e. the width of the screen of their iPhones and Blackberries. (((And if they’re kidding THEMSELVES, what do you suppose they’re doing to all those guys with the handsets?)))
Many of the people that desperately need to know, don’t even know that they don’t know. Book publishers, confronted by the innovation of technology companies, are in a state of panic. Instead of embracing the new digital reading devices as an exciting opportunity, the default response is to disadvantage authors. Television and cable networks are dumbfounded by the move of younger people to watch TV on their computers or cell-phones. Newspapers and magazine publishers continue to see their advertising model crumble and have no response other than buyouts.
Take a look at the photos from the recent Edge annual dinner and you will find the people who are re-writing global culture, and also changing your business, and, your head. What do Evan Williams (Twitter), Larry Page (Google), Tim Berners-Lee (World Wide Web Consortium), Sergey Brin (Google), Bill Joy (Sun), Salar Kamangar (Google), Keith Coleman (Google Gmail), Marissa Mayer (Google), Lori Park (Google), W. Daniel Hillis (Applied Minds), Nathan Myhrvold (Intellectual Ventures), Dave Morin (formerly Facebook), Michael Tchao (Apple iPad), Tony Fadell (Apple/iPod), Jeff Skoll (formerly eBay), Chad Hurley (YouTube), Bill Gates (Microsoft), Jeff Bezos (Amazon) have in common? All are software engineers or scientists.
(((So… if we can just round up and liquidate these EDGE conspirators, then us authors are out of the woods, right? I mean, that would seem to be a clear implication.)))
So what’s the point? It’s a culture. Call it the algorithmic culture.
(((Even if we rounded ‘em up, I guess we’d still have to fret about those ALGORITHMS they built. Did you ever meet an algorithm with a single spark of common sense or humane mercy? I for one welcome our algorithmic overlords.)))
To get it, you need to be part of it, you need to come out of it. Otherwise, you spend the rest of your life dancing to the tune of other people’s code. (((Not to mention all that existent code written by dead guys. Or ultrarich code-monkey guys who knocked it off and went to cure malaria.)))
Just look at Europe where the idea of competition in the Internet space appears to focus on litigation, legislation, regulation, and criminalization. (((I indeed DO look at Europe, and I gotta say that their broadband rocks. The Italians even have the nerve to round up the occasional Google engineer.)))
Gelernter writes:
The Internet is no topic like cellphones or videogame platforms or artificial intelligence; it’s a topic like education. It’s that big. Therefore beware: to become a teacher, master some topic you can teach; don’t go to Education School and master nothing. To work on the Internet, master some part of the Internet: engineering, software, computer science, communication theory; economics or business; literature or design. Don’t go to Internet School and master nothing….
Some of you surely read and remember Alvin Toffler’s 1970 bestseller, “Future Shock.” Although he now remains in the background, Toffler is often cited as one of the world’s greatest futurologists and most influential thinkers. His main thesis in “Future Shock” was encapsulated in his warning that we face “too much change in too short a period of time,” and that we are unprepared for it individually or as a society.
Today we still are, and we call it “information overload.” I often see it as a form of trivial pursuit: mindless talk on cell phones, mindless games on computers, and mindless drives for trivial things, while important things go unsaid or ignored.
Toffler anticipated the computer revolution, cloning, family fragmentation, cable TV, VCRs, satellites and other things we now take for granted or that create controversy today. He had some interesting recommendations, only one of which I’ll mention here. He believed the needed reformation of the education system could not be made by tinkering but by doing away with what existed – and still exists – and starting from scratch so as to teach preparedness for change.
He said, “The illiterates of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”
So what have we done? We tinkered and tinkered and tinkered, and it’s still broke.
I’m currently reading a 2009 book that reminded me of Toffler’s work, and if he foresaw change 40 years ago, the new book should jolt us with excitement and worry over the future it anticipates. The new book is “This Will Change Everything: Ideas That Will Shape the Future,” edited by John Brockman.
It is a compilation of short essays, usually about one-and-a-half to three pages long, by 125 of today’s leading thinkers. They were responding to the question, “What game-changing scientific ideas and developments do you expect to live to see?”
Their answers are more speculative than predictive, but they certainly continue Toffler’s work by bringing us up to date. As I mention just some of their ideas, consider this both a book review and an urging to read the book, because even more change is coming.
Robots are popular in science fiction, and the essayists don’t ignore them. They don’t just foresee simple robots that will clean the house, make dinner and take out the garbage. Some see “relational” robots, humanoid in shape and feature, but with an artificial intelligence that allows them to act human, to learn, grow, develop and enter into relations, all at a rate of change much greater than humans are capable of.
Some foresee humans will want to marry robots.
With the current controversy about same-sex marriages, imagine the ethical and legal questions of human-robot marriages! Of these and other changes in robotics, one essayist says, “If we are lucky, our new mind children (the robots) will treat us as pets. If we are very unlucky, they will treat us as food.”
Some foresee quantum computers with power far beyond our current computers. One author suggests computers will be so powerful, and brain scanning so advanced, that brain scans will be taken of humans, mapping the billions of neurons and trillions of synapses and saving them, so that the essence of you will be kept in a computer. He even suggests that “you” will be able to watch yourself die if you choose to, since the saved “you” will be able to continue full mental activity.
Others foresee life spans of 200-1,000 years and even immortality. More ethical and legal questions. Are we ready for them?
Work on the human genome and the genomes of other animals will make it possible, as one essayist describes, to break the species barrier. If we could, should we?
Personalized medicine, based on our individual genomes and physiology, will be possible. No more Prozac for all with depression, but individual treatments and medicines, specific just for you, and you, and you. How will we deal with the question of who can afford such treatments? Talk about our current health care controversies!
We have searched for extra-terrestrial life for decades (and again, science fiction has been written about that, too; Toffler recommended reading science fiction as a way of learning about change). Essayists believe that we will eventually find extra-terrestrial life, and that its form and chemical basis will have the impact of totally changing our view of who we are and where we fit in the universe.
We know that if extra-terrestrial visitors came to Earth, they would be much more advanced than we are. How would you react if you woke up one day to the morning news that such aliens had landed on Earth? Would the people of the Earth finally come together as one humanity? Or would they seek to curry favor with the aliens as separate human nations? Would such an event truly be “The Day the Earth Stood Still”?
Geo-engineering, nuclear applications, cryo-technology, bioengineering, neuro-cosmetics, and many other topics are covered. A philosopher whose name is now forgotten once suggested that whatever humans can conceive of and invent, they will use.
Think A-bomb. Does this have to be so?
Another essayist said, “We keep rounding an endless vicious circle. Will an idea or technology emerge anytime soon that will let us exit this lethal cyclotron before we meet our fate head-on and scatter into a million pieces? Will we outsmart our own brilliance before this planet is painted over with yet another layer of people? Maybe, but I doubt it.”
As for the book, try it; I don’t know that you’ll like it, but it’s important and real.
Change is coming!
The big science and tech thinkers in the orbit of Edge.org recently held a grand dinner in California, on the theme of "A New Age of Wonder." The title was taken from a Freeman Dyson essay reflecting on how the 19th-century Romantics encountered science, in which the following passage appeared:
"...a new generation of artists, writing genomes as fluently as Blake and Byron wrote verses, might create an abundance of new flowers and fruit and trees and birds to enrich the ecology of our planet. Most of these artists would be amateurs, but they would be in close touch with science, like the poets of the earlier Age of Wonder. The new Age of Wonder might bring together wealthy entrepreneurs like Venter and Kamen ... and a worldwide community of gardeners and farmers and breeders, working together to make the planet beautiful as well as fertile, hospitable to hummingbirds as well as to humans."
Dyson goes on:
Is it possible that we are now entering a new Romantic Age, extending over the first half of the twenty-first century, with the technological billionaires of today playing roles similar to the enlightened aristocrats of the eighteenth century? It is too soon now to answer this question, but it is not too soon to begin examining the evidence. The evidence for a new Age of Wonder would be a shift backward in the culture of science, from organizations to individuals, from professionals to amateurs, from programs of research to works of art.
If the new Romantic Age is real, it will be centered on biology and computers, as the old one was centered on chemistry and poetry.
We do live in an age of technological miracles and scientific wonder. Who can deny it? And yet, and yet! Dyson again, from the same essay:
If the dominant science in the new Age of Wonder is biology, then the dominant art form should be the design of genomes to create new varieties of animals and plants. This art form, using the new biotechnology creatively to enhance the ancient skills of plant and animal breeders, is still struggling to be born. It must struggle against cultural barriers as well as technical difficulties, against the myth of Frankenstein as well as the reality of genetic defects and deformities.
Here's where these techno-utopians lose me, and lose me big time. The myth of Frankenstein is important precisely because it is a warning against the hubris of scientists who wish to extend their formidable powers over the essence of human life, and in so doing eliminate what it means to be human. And here is a prominent physicist waxing dreamily about the way biotech can be used to create works of art out of living creatures, aestheticizing the very basis of life on earth. If that doesn't cause you to shudder, you aren't taking it seriously enough. I think of this Jody Bottum essay from 10 years back, which begins thus:
On Thursday, October 5, it was revealed that biotechnology researchers had successfully created a hybrid of a human being and a pig. A man-pig. A pig-man. The reality is so unspeakable, the words themselves don't want to go together.
Extracting the nuclei of cells from a human fetus and inserting them into a pig's egg cells, scientists from an Australian company called Stem Cell Sciences and an American company called Biotransplant grew two of the pig-men to 32-cell embryos before destroying them. The embryos would have grown further, the scientists admitted, if they had been implanted in the womb of either a sow or a woman. Either a sow or a woman. A woman or a sow.
There has been some suggestion from the creators that their purpose in designing this human pig is to build a new race of subhuman creatures for scientific and medical use. "The only intended use is to make animals," the head of Stem Cell Sciences, Peter Mountford, claimed last week, backpedaling furiously once news of the pig-man leaked out of the European Union's patent office. Since the creatures are 3 percent pig, laws against the use of people as research subjects would not apply. But since they are 97 percent human, experiments could be profitably undertaken upon them and they could be used as living meat-lockers for transplantable organs and tissue.
But then, too, there has been some suggestion that the creators' purpose is not so much to corrupt humanity as to elevate it. The creation of the pig-man is proof that we can overcome the genetic barriers that once prevented cross-breeding between humans and other species. At last, then, we may begin to design a new race of beings with perfections that the mere human species lacks: increased strength, enhanced beauty, extended range of life, immunity from disease. "In the extreme theoretical sense," Mountford admitted, the embryos could have been implanted into a woman to become a new kind of human, though, of course, he reassured the Australian media, something like that would be "ethically immoral, and it's not something that our company or any respectable scientist would pursue."
But what difference does it make whether the researchers' intention is to create subhumans or superhumans? Either they want to make a race of slaves, or they want to make a race of masters. And either way, it means the end of our humanity.
The thing I don't get about the starry-eyed techno-utopians is that they don't seem to have taken sufficient notice of World War I, the Holocaust, and Hiroshima. That is, they don't seem to have absorbed the lessons of what the 20th century taught us about human nature, science and technology. Science is a tool that extends human powers over the natural world. It does not change human nature. The two wars and the Holocaust should have once and forever demolished naive optimism about human nature, and what humankind is capable of with its scientific knowledge. Obviously humankind is also capable of putting that knowledge to work to accomplish great good. That is undeniable -- but one is not required to deny it to acknowledge the shadow side of the age of wonder.
As I see it, the only real counterweight to techno-utopianism is religion. Religion is concerned with ultimate things, and demands that we weigh our human desires and actions against them. Scientists, the Promethean heroes, tend to chafe against any restriction on their curiosity -- which is why some of them (Dawkins, et alia) rage against religion. The best of humankind's religious traditions have been thinking about human nature for centuries, even millennia, and know something deep about who we are, and what we are capable of. How arrogant we are to think the Christian, the Jewish, the Islamic, the Taoist, and other sages have nothing important to say to us moderns! What religion speaks of is how to live responsibly in the world. Here is Wendell Berry, from his great book "Life Is a Miracle: An Essay Against Modern Superstition":
It should be fairly clear that a culture has taken a downward step when it forsakes the always difficult artistry that renews what is neither new nor old and replaces it with an artistry that merely exploits what is fashionably or adventitiously "new," or merely displays the "originality" of the artist.
Scientists who believe that "original discovery is everything" justify their work by the "freedom of scientific inquiry," just as would-be originators and innovators in the literary culture justify their work by the "freedom of speech" or "academic freedom." Ambition in the arts and the sciences, for several generations now, has conventionally surrounded itself by talk of freedom. But surely it is no dispraise of freedom to point out that it does not exist spontaneously or alone. The hard and binding requirement that freedom must answer, if it is to last, or if in any meaningful sense it is to exist, is that of responsibility. For a long time the originators and innovators of the two cultures have made extravagant use of freedom, and in the process have built up a large debt to responsibility, little of which has been paid, and for most of which there is not even a promissory note.
Berry goes on:
On the day after Hitler's troops marched into Prague, the Scottish poet Edwin Muir, then living in that city, wrote in his journal ... : "Think of all the native tribes and peoples, all the simple indigenous forms of life which Britain trampled upon, corrupted, destroyed ... in the name of commercial progress. All these things, once valuable, once human, are now dead and rotten. The nineteenth century thought that machinery was a moral force and would make men better. How could the steam-engine make men better? Hitler marching into Prague is connected to all this. If I look back over the last hundred years it seems to me that we have lost more than we have gained, that what we have lost was valuable, and that what we have gained is trifling, for what we have lost was old and what we have gained is merely new."
What Berry identifies as "superstition" is the belief that science can explain all things, and tells us all we need to know about life and how to live it. In other words, the superstitious belief in science as religion. He is not against science; he only wishes for science to know its place, to accept boundaries. He writes:
It is not easily dismissable that virtually from the beginning of the progress of science-technology-and-industry that we call the Industrial Revolution, while some have been confidently predicting that science, going ahead as it has gone, would solve all problems and answer all questions, others have been in mourning. Among these mourners have been people of the highest intelligence and education, who were speaking, not from nostalgia or reaction or superstitious dread, but from knowledge, hard thought, and the promptings of culture.
What were they afraid of? What were their "deep-set repugnances"? What did they mourn? Without exception, I think, what they feared, what they found repugnant, was the violation of life by an oversimplifying, feelingless utilitarianism; they feared the destruction of the living integrity of creatures, places, communities, cultures, and human souls; they feared the loss of the old prescriptive definition of humankind, according to which we are neither gods nor beasts, though partaking of the nature of both. What they mourned was the progressive death of the earth.
This, in the end, is why science and religion have to engage each other seriously. Without each other, both live in darkness, and the destruction each is capable of is terrifying to contemplate -- although I daresay you will not find a monk or a rabbi prescribing altering the genetic code of living organisms for the sake of mankind's artistic amusement. What troubles me, and troubles me greatly, about the techno-utopians who hail a New Age of Wonder is their optimism uncut by any sense of reality, which is to say, of human history. In the end, what you think of the idea of a New Age of Wonder depends on what you think of human nature. I give better than even odds that this era of biology and computers identified by Dyson and celebrated by the Edge folks will in the end turn out to have been at least as much a Dark Age as an era of Enlightenment. I hope I'm wrong. I don't think I will be wrong.
Read more: http://blog.beliefnet.com/roddreher/2010/03/a-new-age-of-wonder-really.html