Edge in the News

THE GUARDIAN [12.31.08]

Futurology is notoriously hit-and-miss. According to 2001: A Space Odyssey, we should already be using suspended animation to send humans to Jupiter
"Through science we create technology and in using our new tools we recreate ourselves." So says the intro to edge.org's annual New Year challenge to the world's greatest thinkers.This year it is asking "What will change everything – What game-changing scientific ideas and developments do you expect to live to see?" And as ever, the great and the good have responded to the call. ...


Who Are We?

Third Culture was born as a podcast in August 2009. Our idea was to spread the extraordinary findings, illuminations and epiphanies that we have had throughout this decade in our studies of the science of the mind.

Coming from the Faculty of Philosophy and Humanities at the University of Chile, we had the experience of being rather rare beasts: people interested in science in a humanistic environment. We found, in the concept of the Third Culture (developed by C.P. Snow in the late fifties and championed by John Brockman in the nineties), a space where we could move easily and, at the same time, share our experience with students and with our academic colleagues. ...

...We believe we can build a community around the issues of the mind, not only among specialists in the six founding disciplines (recalling the Sloan Foundation's hexagon of the seventies): Artificial Intelligence, Neuroscience, Philosophy, Psychology, Linguistics and Anthropology, but also among those who come from the humanities, which, as people like Jonah Lehrer or Ian Richardson have said, have been turning over the problem of the mind since time immemorial.

We know that to others this can look like a kind of intellectual sensationalism, or syncretism, even accommodationism: we believe that this is one of the greatest dangers. We also know that the third culture can be seen as "selling out" the humanities, dominated as they are by an epistemological pessimism that puts no trust in scientific research. Finally, we know that, by the same line of reasoning, the third culture can be seen as an unconditional surrender to the dominant ideas of the traditional right, the market, and so on. To put it bluntly: we are people with leftist values, but we are not of the guerrilla left... we are of the Darwinian left (... that is, at bottom, we are only interested in sex).

The page/blog terceracultura.cl is our third step in the dissemination of the Third Culture in Chile. In this space we will post links to programs and longer blog posts, discuss recent articles, open the door to debate, and establish links with other places. We look forward to as much contact as possible.


[ED. NOTE: A new podcast website from Chile on The Third Culture with entries about Daniel Gilbert, Steven Pinker, Daniel Dennett, Leda Cosmides, John Tooby, Guns, Germs, and Steel, Darwin in Chile, among others. — JB]

Linda Stone, Xconomy [12.31.08]

What game-changing ideas can we expect to see in OUR lifetimes?
As each year winds to a close, John Brockman, literary agent representing some of the finest minds in science and technology and the founder of Edge Foundation, poses a provocative question to an international community of physicists, psychologists, futurists, thought leaders, and dreamers. Brockman is a master convener, both online and in real life. This year’s annual Edge question, What will change everything?, generated responses from Freeman Dyson, Danny Hillis, Martin Seligman, Craig Venter, and Juan Enriquez, to name a few. Here are a few highlights.


"The planet's overheating, the icecaps are melting, the population is exploding, there's a bird-flu epidemic waiting to get us and even if we avoid a terrorist Armageddon, there's bound to be an asteroid up there with all our names on it. We are, to quote Private Frazer, doomed.

"Nonsense, say the 150 leading scientists assembled by John Brockman in this uplifting anthology.

"Asked the title's question, the world's best brains examined our prospects - and all of them found reasons to be very cheerful indeed. Once again, the scientific community seems to challenge our instinctive, common-sense assumption. First they told us the Earth isn't flat. Then, that solid objects are made up of empty space. ...

"...This is an enthralling book that delivers two very significant truths: we've never had it so good and things can only get better. Global warming — and asteroids — permitting."

Read the full article →


Several months ago, Christopher Hitchens was sent an article about a young soldier, Mark Jennings Daily, who had been killed in Iraq. Daily was improbably all-American — born on the Fourth of July, an honors graduate from U.C.L.A., strikingly handsome. He’d been a Democrat with reservations about the war. But, “somewhere along the way, he changed his mind,” the article said. “Writings by author and commentator Christopher Hitchens on the moral case for war deeply influenced him.”

“I don’t exaggerate by much when I say I froze,” Hitchens wrote about reading that sentence.

His essay in the November issue of Vanity Fair is a meditation on his own role in Daily’s death, and a description of the family Daily left behind. Hitchens asks painful questions and steps on every opportunity to be maudlin, and yet for all its tightly controlled intellectualism, the essay packs a bigger emotional wallop than any other this year.

Daily took books by Thomas Paine, Tolstoy, John McCain and Orwell to Iraq.

“Anyone who knew me before I joined,” Daily wrote from the front, “knows that I am quite aware and at times sympathetic to the arguments against the war in Iraq. If you think the only way a person could bring themselves to volunteer for this war is through sheer desperation or blind obedience, then consider me the exception (though there are countless like me)... . Consider that there are 19-year-old soldiers from the Midwest who have never touched a college campus or a protest who have done more to uphold the universal legitimacy of representative government and individual rights by placing themselves between Iraqi voting lines and homicidal religious fanatics.”

Hitchens spent a day with the Daily family and then was asked to speak at a memorial service. He read a passage from “Macbeth” and later reflected: “Here we are to perform the last honors for a warrior and hero, and there are no hysterical ululations, no shrieks for revenge, no insults hurled at the enemy, no firing into the air or bogus hysterics. Instead, an honest, brave, modest family is doing its private best.”

Hitchens also wrote “God Is Not Great,” which Ross Douthat reviewed provocatively in The Claremont Review of Books. Douthat noted that Hitchens specializes in picking out crackpot quotations rather than trying to closely observe the nature of spiritual experience: “Like most apologists for atheism, he evinces little interest in the topic of religion as it is actually lived, preferring to stick to the safer ground of putting the godly in the dock and cataloging their crimes against humanity.” Douthat, the believer, comes off as more curious about the world than any skeptic.

One of the best pieces of career advice I ever got is: Interview three people every day. If you try to write about politics without interviewing policy makers, you’ll wind up spewing all sorts of nonsense. John Mearsheimer and Stephen Walt wrote an entire book on the Israel Lobby without ever interviewing any of their subjects.

Jeffrey Goldberg dissected their effort in The New Republic. Goldberg usefully describes Judeocentrism, the belief that Jews play a central role in world history. Walt and Mearsheimer have a tendency, Goldberg writes, to bring the vectors of recent world history back to the Jews — the rise of radical Islam, shifts in U.S. foreign policy, Sept. 11. He then offers a piece-by-piece dissection of their historical claims.

Wonks talk about inequality, but voters talk about immigration. Christopher Jencks wrote an essay on immigration in The New York Review of Books that was superb not because he took a polemical stance, but because he clarified a complex issue in an honest way.

He shows how fluid public opinion is. Certain poll questions suggest that 69 percent of Americans want to deport illegal immigrants. Others indicate the true figure is only 14 percent. He ends up at the nub of the current deadlock. Conservatives, having learned from past failures, demand “enforcement first.” Employers, fearing bankruptcy, demand the legalization of the current immigrants first. Neither powerful group will budge.

Three other essays are worth your time. In the online magazine Edge, Jonathan Haidt wrote “Moral Psychology and the Misunderstanding of Religion,” an excellent summary of how we make ethical judgments. In the Chronicle of Higher Education, J. Bradford DeLong wrote “Creative Destruction’s Reconstruction” on why Joseph Schumpeter matters to the 21st century. In her essay, “The Abduction of Opera” in The City Journal, Heather MacDonald wonders why European directors now introduce mutilation, rape, masturbation and urination into lighthearted operas like “The Abduction from the Seraglio.” She argues that a resurgent adolescent culture has allowed directors there to wallow in all manner of self-indulgence.

TORONTO STAR [12.27.08]

When politicians change their minds, they're often lambasted for flip-flopping by other politicians, the media and the public. When scientists change their minds, their fellow scientists eventually see it as progress, integral to the self-correcting discipline of their vocation.

Unfortunately the public usually notices only a marginal subset of this phenomenon: how the futurists and short-term forecasters so often get it wrong.

After all, where is the paperless office? Or the Jetsons' flying car? And remember how hurricane forecasters used computer models to predict – wrongly – that the last two seasons would be monsters?

For a spectacularly bad computer projection, look at the mid-1970s, when a study from the Club of Rome warned that the world would run out of many essential minerals before the end of the century. Skeptical researchers picked apart the naïve assumptions of "The Limits to Growth," but not before world leaders, including Pierre Elliott Trudeau, had jetted off to an Austrian castle for a summit.

Yet the truly important self-corrections of science often escape public attention because they escape the media's attention. That's mainly because journalism exists on the time scale of mayflies while scientific consensus evolves over elephantine decades.

A personal example: When I was squeaking through university science in the mid-1960s, we were taught that the adult brain does not make new neurons.

But even then, unbeknown to us, a few researchers were arguing that the adult brain did continue to manufacture neurons. But they were dismissed as crackpots, just as Alfred Wegener was in 1915 when he proposed that the continents drifted. Or as Mario Molina and F. Sherwood Rowland were in 1974 when they warned that CFCs were destroying the ozone layer.

Molina and Rowland were vindicated in just a few years and went on to win the Nobel Prize in chemistry. But it wasn't until the 1950s that continental drift was accepted as the consensus theory.

The neuron "crackpots" were finally declared correct by their fellow brain scientists in the 1990s, and today adult neurogenesis – the fancy name for making new neurons – is a burgeoning field of study for people such as Stanford neuroscientist Robert Sapolsky, who originally dismissed the idea.

Sapolsky is one of 130-plus scientists and "thinkers" who have contributed highly personal revelations to What Have You Changed Your Mind About?, due next month.

Book marketing seems to demand sensational subtitles, but Today's Leading Minds Rethink Everything turns out to be an accurate guide to the content. In almost 400 pages, the contributors cover frontier aspects of all three scientific arenas: physical, biomedical and social.

It should come with a warning: "Reading this book may be dangerous to your cherished myths and perceptions." For example:

  • Helena Cronin says it's not primarily bias and barriers that give men the top positions and prizes. After analyzing the statistical evidence, the philosopher at the London School of Economics has come to accept that there will be (as she puts it) more dumbbells and more Nobels among males because there's a much greater variance in ability among men as a group than among women, even though both are similar on average.
  • There is probably no intelligent life elsewhere in the universe because we would have detected a stray electromagnetic signal by now, argues technologist Ray Kurzweil, who wanted to believe in E.T.
  • Until a few years ago, neuroscientist Joseph LeDoux thought that a memory is something stored in the brain into which we could tap again and again. Then a researcher in his lab at New York University did an experiment that convinced LeDoux – and is convincing others – that each time a memory is used, it has to be stored again in the brain as a new memory to be accessible later. This concept of memory "reconsolidation" is now being tested in treating drug addiction and post-traumatic stress disorder.
  • Danish science writer Tor Nørretranders changed his mind about his body, which he now considers closer to software than hardware. It's been known for decades that 98 per cent of the atoms in the human body are replaced every year, but only recently was Nørretranders able to come up with the concept of permanent reincarnation, like music moving from vinyl LPs to audio tapes to CDs and now iPods.

Many other contributors challenge conventional wisdom to write about, among other things, a finite universe; the brain creating a soul; and the Internet as a powerful tool for centralized state control.

Nor do all these deep thinkers agree. Computer scientist Rudy Rucker has come around to thinking that a computer program will be able to emulate the human mind so that self-aware robots could even believe in God. But computer scientist Roger Schank, who once said he would see machines as smart as humans within his lifetime, now believes that won't happen within the lifetime of his grandchildren.

The book's most important contribution, however, is to drive home the lesson that in science being wrong occasionally is a good thing, not least because it renews curiosity and reminds the scientists that they don't know everything.

As Discover magazine columnist Jaron Lanier writes in the book, "Being aware of being wrong once in a while keeps you young."

And since admitting they've been wrong and changing their minds works well for rational decision-making by scientists, perhaps politicians, the media and others might give it a try.

3 QUARKS DAILY [12.21.08]

How ought we, in this historical moment, use science and technology to remake the world?

Americans have been talking about what to do about climate change. Two of the lead voices are, recently, Jeffrey Sachs from Columbia University’s Earth Institute and Joseph Romm from the Center for American Progress Action Fund. Romm recommends a federal cap-and-trade plan that would immediately make carbon emissions more expensive. Sachs, however, believes that the economic costs of cap-and-trade are prohibitive without radically advanced technologies to make a low-carbon economy actually possible. Thus, Sachs proposes large-scale federal investments in development and demonstration of new energy technologies. This is what the climate change debate is now about: whether or not we need new inventions to build a low-carbon economy, or whether our existing tools are good enough to get started right away.

But what is the debate over? The question of whether man will conquer nature occupied Enlightenment philosophers, but today's best progressive thinkers have moved on to other questions. The new conversation takes it as given that humans have been remaking the world for millennia.

The first part of the Hebrew Bible expresses this part of human life well, depending on how one reads it. God implicitly invites humankind to be creative: Of this earth you were made, and likewise, you shall remake it. Robert Harrison, in his new book [2], ascribes to the human condition what he calls the vocation of care, of which the act of tending a garden is the best example. In Harrison’s way of reading Genesis, the fall from the Garden of Eden was more of a blessing to be cherished than a loss to be mourned: Adam and Eve were granted the privilege of caring about the world. And what if they were originally created in God’s image? In that case, says theologian Ted Peters, they must participate in the ongoing creation of the world. Peters says we are “created cocreators.”

In any case, Earth’s crust is a dynamic place, and we might as well help it along. Though Sachs and Romm offer different suggestions for climate policy, they are responding to the same question: How ought we, in this historical moment, remake the world again? If one feels alarmed by declining biodiversity, then one understands the importance of this moment. The energy technologies we select, whether they are old, emerging, or not yet developed, will have consequences for the continuing evolution of terrestrial life. The job of democratic citizens, as Walter Truett Anderson has been saying for years, includes governing evolution itself [1]. There is no turning back.


The Left — the party of science, environmentalism, equality, and choice — would do well to understand what this job does and does not include. First, as Oliver Morton explained a couple of years ago on Edge.org, it does not include saving the planet. Earth and its biosphere are resilient enough in the long term to take what we are giving them: fresh water depletion, species losses, a boosted greenhouse effect, and more. Nothing we can do (or at least, are at all likely to do) can stop biological and geological evolution on Earth. But while the planet can adapt, humans, especially the poorest, could be greatly harmed. The strongest arguments for cutting greenhouse gas emissions start by honoring human solidarity, not the intrinsic value of sea ice.

Second, our job does not include protecting the natural from the unnatural. It is too late, except in some of Earth’s remote polar regions, to preserve any “natural” ecosystems that remain unaffected by the conscious vita activa of men and women. The natural–unnatural distinction now serves no useful purpose. Moreover, it distracts us from other distinctions that do matter for our actual lives, like sustainable and unsustainable development.

The irrelevance of the natural–unnatural distinction matters, too, for our health and our ability to control our own bodies. Consider the rhetorical value of the word "nature." Some philosophers use the word to mask moral norms. Leon Kass, for instance, goes to great lengths to explain why the assisted reproductive technologies that he finds repugnant are also "unnatural" [3]. One benefit of decisively discarding the language of the natural and the unnatural is that doing so will prevent people like Kass from using the word "nature" as a way of inserting private morals into public politics.

Third, our job does not include transcending the planet or our bodies. This should go without saying, but some writers have acquired some fame by demanding, in the name of the Enlightenment, human enhancement technologies that can deliver immortality and cognitive and emotional bliss. Dale Carrico has explained that these desires ignore both the fact of human vulnerability and the fact that technological progress does not happen without political progress to enable it.

Saving the planet, protecting the natural, and achieving technological transcendence are projects with which many persons of the Left have burdened themselves. Each of these projects is misguided, unnecessary, and counter-productive. By pushing these ideas firmly and permanently aside, we can more easily grasp the challenges we are really confronting.

The climate debate demonstrates this different way of talking about nature. Despite the contrasts between Sachs's and Romm's plans, neither of them is a defense of nature. Instead, they are both proposals for how, in essence, to better integrate blind evolution and conscious design, ecology and technology, and nature and art.

Consider the biological history of life. An unknown number of millennia ago, creativity on Earth was blind; intentionality and purposefulness, as we know them, had not yet been invented. Later, genetic evolution gave rise to the modern human. From that time forward, design, technology, and art variously complemented and commandeered the original program of evolution, ecology, and nature. The transition is irrevocable, and now, the challenge is to make it work. Modernity may overflow with excess, but we have little else to build upon. Even if we cannot directly counter Heidegger’s objections to industrial technology, we can still design a humane home on Earth good enough for anyone but the most rabid intellectual opponents of the Gestell. Sadly, we haven’t really started trying.

We must begin with design, which is everything we make and everything else that happens amidst the private and public relationships between human beings. Human consciousness is the key ingredient of design. Sachs and Romm offer different answers to the question: Which specific products of human conscious design should interact with the products of blind evolution, and how, and when, and where, and in what combinations? For the foreseeable future, this is the question that policymakers concerned with science and technology would be wise to ask. The very ability to ask the question suggests that the question itself is urgent. And we shall be better off if philosophers, policymakers, and scientists are not the only ones asking it.

Anyone who lives in a clothed or bejeweled body confronts the question constantly, though most often without realizing it. However, the development of human modification medicine and of nanoscale, biological, information, and cognitive technologies reveals more conspicuously the question’s importance. Here, the politics of choice and self-determination — which the Left played no small part in developing — is indispensable. Again, the question is not how to protect the natural body from unnatural adulteration. Rather, the question is how to enable a new kind of human and extra-human diversity. Carrico calls it “lifeway diversity:” the varieties of ways not of transcending one’s body, but rather of transforming it.

There are right ways and wrong ways to use science and technology for the transformation of our selves and the world. Unintended consequences of science and technology are inevitable. We need to minimize them, respond to them, and learn from them. And while we must not concern ourselves with the categories of natural and unnatural, we also should not forget that many things in the world that came before us — most especially ecosystems — are crucial to a well-functioning technological biosphere. In most cases, public policies should preserve ancient ecological balances that give rise to services we depend on. Technologies will play an ever-expanding role first, in understanding what is actually happening in ecosystems and second, in intervening appropriately.

What is true for ecosystems is also true for the biological and psychological systems of human bodies and minds. This point is most easily understood through the idea (popularized by the author Richard Ogle) of the "extended mind." Mental life — cognition, emotion, and creativity — does not happen within the confines of the brain. Instead, it depends on complex interactions between bodies, environments, and culture. Some of these components are evolved; others, designed. Evolution and design interact every time we put our shoes on, read a map, or press the keys on a piano. The better interactions are those that give rise to what the economist Amartya Sen calls human capabilities, such as bodily health, practical reason, and play. Just as technologies should maximize ecosystem services, they should also maximize human capabilities.

The form of economic growth that is implicated in this dual focus on ecosystems and human beings should be the main concern of decision-makers and governments everywhere. Every single one of the U.N. Millennium Development Goals can be understood, at least didactically, as an effort to synchronize design and evolution. Sustainable economic growth comes from technologies that enable ecosystems and humans to do what they have been doing for millions or billions of years — and to do so more abundantly and with more freedom than is currently possible. Imagine everything that is beautiful in the world, and imagine lots more of it, but imagine still being vulnerable. That is the imperfect world that is ready to be slowly forged, whether it is made from tools we have now or tools still uninvented.

Letras Libres [12.15.08]

Humanism today limps along, and Spain ostensibly despises science. González and Salazar Férriz point to a new and commendable effort to remedy that Spanish ignorance: Culture 3.0.

In the preface to the recent reissue of Julien Benda's 1927 The Betrayal of the Intellectuals (Galaxia Gutenberg), Fernando Savater writes that "perhaps the greatest paradox among the paradoxes of the twentieth century is this: never in human history was the capacity to produce tools and to know the inner structure of reality in every field more developed. Never, then, was there more scientific and technical brilliance. Yet never were there so many ideological movements founded (or better said, foundering) on the irrational, the dogmatic or the unverifiable; above all, never was there such an abundance of partisans of rapture or of intuitive certainty among the elites charged with the highest spiritual functions." In Benda's words, those men whose function it is to defend disinterestedly the eternal values such as justice and reason, and whom he calls intellectuals, have betrayed that role for practical interests, which often results in the conversion of the intellectual into a mere ideologue who aspires to a space of power...

...Following in the wake of Snow, and probably trying to repair the betrayal of which Benda spoke, John Brockman founded the Edge Foundation (www.edge.org) in 1988, an organization that seeks to reunite scientific and humanistic discourse under the idea of a "Third Culture," and to help science play a key role in the discussion of public affairs. ...



Just because you’re smart doesn’t mean you get things right the first time. That’s the premise behind What Have You Changed Your Mind About? (Harper Perennial), a new anthology. In it, 150 “big thinkers” describe what they now think they were wrong about earlier in their lives. Much of this has to do with technology and education. Among the highlights:

Ray Kurzweil no longer thinks that intelligent aliens exist. The oft-cited futurist and inventor, a pioneer in artificial intelligence and in making reading machines for the blind, says that conventional thinking holds there should be billions of such civilizations and a number of them should be ahead of us, “capable of vast, galaxy-wide technologies. So how can it be that we haven’t noticed” all of the signals they should be creating? “My own conclusion is that they don’t exist.”

Roger C. Schank used to say “we would have machines as smart as we are within my lifetime.” Now Mr. Schank, a former Yale University professor and director of Yale’s artificial-intelligence project, says: “I no longer believe that will happen… I still believe we can create very intelligent machines. But I no longer believe that those machines will be like us.” Chess-playing computers that beat people are not good examples, he says. Playing chess is not representative of typical human intelligence. “Chess players are methodical planners. Human beings are not.” We tend, Mr. Schank says, “to not know what we know.”

Randolph M. Nesse “used to believe that truth had a special home at universities.” Mr. Nesse, professor of psychiatry at the University of Michigan and an expert on evolution and medicine, now thinks “universities may be the best show in town for truth pursuers, but most of them stifle innovation and constructive engagement of real controversies — not just sometimes but most of the time, systematically.” Faculty committees, he complains, make sure that most positions “go to people just about like themselves.” Deans ask how much external financing new hires will bring in. “No one with new ideas … can hope to get through this fine sieve.” —Josh Fischman


Yesterday I listed a few flip-flops by leading thinkers chronicled in a new anthology, What Have You Changed Your Mind About? (Harper Perennial). Whether universities were really that great was one of them. But there are more.

One of the major ideas that bright minds have rethought is the notion that the Internet will be a boon to humanity. Here is why:

It does not fight authority. Nicholas Carr, who wrote the recent best seller The Big Switch: Rewiring the World, From Edison to Google, used to believe the Internet would shift the bulk of power to the little people, away from big companies and governments. But "its technical and commercial workings actually promote the centralization of power and control," he says. Although the overall number of Web sites increased from 2002 through 2006, the concentration of traffic at the 10 most popular sites grew from 31 percent to 40 percent of all page views. Further, "look at how Google continues to expand its hegemony over Web searching," Mr. Carr says. "To what end will the Web giants deploy their power? They will, of course, seek to further their own commercial or political interests."

A few bad people counteract many good people, and machines can't fix that. Xeni Jardin, co-editor of the tech blog Boing Boing, says comments on the blog were useful and fun, originally. But as the blog grew more popular, so did antisocial posts by "trolls," or "people for whom dialogue wasn't the point." Things got so nasty that Boing Boing editors finally removed the ability for readers to comment. Now she has reinstated comments, because "we hired a community manager. … If someone is misbehaving, she can remove all the vowels from their screed with one click." There is no automated way to do this, Ms. Jardin says, and "the solution isn't easy, cheap, or hands-free. Few things of value are."
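
The vowel-stripping itself is a mechanical trick; what can't be automated, as Ms. Jardin says, is the judgment about when to apply it. Here is a minimal sketch of the operation, my own illustration rather than Boing Boing's actual tool:

    import re

    def disemvowel(screed: str) -> str:
        """Leave a troll's comment visible but defanged by dropping its vowels."""
        return re.sub(r"[aeiouAEIOU]", "", screed)

    print(disemvowel("You are all completely wrong!"))  # -> "Y r ll cmpltly wrng!"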


With little notice from the outside world, the community-written encyclopedia Wikipedia has redefined the commonly accepted use of the word "truth."

Why should we care? Because ­Wikipedia's articles are the first- or second-ranked results for most Internet searches. Type "iron" into Google, and Wikipedia's article on the element is the top-ranked result; likewise, its article on the Iron Cross is first when the search words are "iron cross." Google's search algorithms rank a story in part by how many times it has been linked to; people are linking to Wikipedia articles a lot.
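
What does it mean, concretely, for ranking to depend on "how many times it has been linked to"? The sketch below is a rough illustration in the spirit of PageRank, not Google's actual algorithm; the page names and numbers are invented. Each page's score is repeatedly redistributed along its outbound links, so heavily linked-to pages accumulate the highest scores.

    def rank(links, iterations=50, damping=0.85):
        """links: dict mapping each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        score = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            # Every page keeps a small base score; the rest flows along links.
            new = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                if targets:
                    for t in targets:  # share this page's score among its links
                        new[t] += damping * score[page] / len(targets)
                else:
                    for t in pages:  # a dead-end page spreads its score evenly
                        new[t] += damping * score[page] / len(pages)
            score = new
        return score

    # The heavily linked-to page ("wikipedia") ends up ranked highest.
    print(rank({"blog": ["wikipedia"], "news": ["wikipedia", "blog"], "wikipedia": []}))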

This means that the content of these articles really matters. Wikipedia's standards of inclusion--what's in and what's not--affect the work of journalists, who routinely read Wikipedia articles and then repeat the wikiclaims as "background" without bothering to cite them. These standards affect students, whose research on many topics starts (and often ends) with Wikipedia. And since I used Wikipedia to research large parts of this article, these standards are affecting you, dear reader, at this very moment.

Essays and Opinion

Witch hunters in Africa lynch "thieves" who rob men of their masculinity. Many people's grasp of economics is at the same level. The Edge economics course is a curative... more» ... Class no. 1 ...

Read the full article →


It was one of those dreamlike moments in science at which one would love to have been present. Last summer, three generations of behavioral economics met in Sonoma, California, for an Edge Foundation master class; behavioral economics being the line of research that tries to approach the mechanisms of the market from the perspective of the people in it. Daniel Kahneman was the eldest of the three prominent guests, by title a professor of psychology at Princeton, but also a Nobel laureate in economics for his ...


In this wide-ranging assortment of 150 brief essays, well-known figures from every conceivable field demonstrate why it's a prerogative of all thoughtful people to change their mind once in a while. Technologist Ray Kurzweil says he now shares Enrico Fermi's question: if other intelligent civilizations exist, then where are they? Nassim Nicholas Taleb (The Black Swan) reveals that he has lost faith in probability as a guiding light for making decisions. Oliver Morton (Mapping Mars) confesses that he has lost his childlike faith in the value of manned space flight to distant worlds. J. Craig Venter, celebrated for his work on the human genome, no longer believes that nature can absorb the abuses we subject it to, and now holds that world governments must move quickly to prevent global disaster. Alan Alda says, "So far, I've changed my mind twice about God," going from believer to atheist to agnostic. Brockman, editor of Edge.org and numerous anthologies, has pulled together a thought-provoking collection of focused and tightly argued pieces demonstrating the courage to change strongly held convictions. (Jan.)


RESERVE ANOTHER LAUREL for Edward O. Wilson, the Pellegrino University Professor emeritus at Harvard, serial Pulitzer winner, and prominent intellectual: online celebrity.


Forget Charlie Rose - Wilson has Google for a soapbox. Amid the amateur-hour piffle of YouTube "talent" and skateboarding dogs, the famed biologist stands in bold relief, with more than 500 Google video search results to his credit: interviews ranging far afield of TV shows to a spate of appearances on several Web-only video platforms such as Meaningoflife.tv, Bigthink.com, Fora.tv, and the online home of the Technology Entertainment Design (TED) conference.

It was through a TED presentation that Wilson chose to unveil his proposal for the Encyclopedia of Life, a Wikipedia of biodiversity, and a few short months later he secured the funding necessary to launch it. Hitting the talk show circuit never looked so passé.

The rise this year of a host of new Web video sites targeting high-minded, edifying content suggests that today's marketplace of ideas is rapidly moving online. "The Last Lecture," a 76-minute video by the late engineering professor Randy Pausch recorded late last year, became a crossover phenomenon - viewed by at least 7 million - and easily one of the most widely watched academic events in history. The buzzy presentation shorts of TED surged past 50 million viewings on only their second birthday.

Newly minted video start-ups Fora.tv and Bigthink.com, boasting auspicious programming starring top-shelf public intellectuals, each pledged this year to become a thinking person's YouTube: With combined inventories in the tens of thousands of clips and numerous partnerships with major media properties, that viewership is only expanding. And iTunes U, a multimedia channel of free higher education content at the iTunes Store, continued to amass its increasingly Amazonian stockpile of labs and lectures from schools around the world.

From interviews with obscure geniuses to splashy marquee names, and from hoary conference proceedings and drowsy first-year survey classes to passionate debates at exclusive, invite-only affairs like the Aspen Institute, an entire back catalog of cerebral Web video is steadily accumulating online. How do the various offerings rate?

The Oprah: TED

The TED Talks program single-handedly popularized the phenomenon of brainy programming. It's an online repository of zippy, often provocative presentations delivered by speakers at the eponymous conference. Topics range widely across the arts and sciences: inventors in today's Africa, the nature of happiness, and an evolutionary theory of technology.

TED has become, in no small part due to its videos' viral popularity, a high rooftop in academia for world authorities to report on the latest thinking in their fields, an Oprah of the intelligentsia. Like Oprah, it wields considerable kingmaking power through its presentation schedule, whose speakers graduate to greater buzz, and sometimes lasting celebrity, in the wider media.

The reason TED videos work? Their winning production values and packaging translate so well online. Even the stuffiest subject - say, Swedish global health professor Hans Rosling talking about how to visualize statistics - proves, through the meticulously observed conventions of a TED presentation (brisk pacing, humor, strong visuals), a reliably entertaining break from the tedium and rigors of high church academic discourse. TED is not a venue for speakers to go deep on a subject; instead, it's one for teasing out the more bravura elements of their work.

The Public Broadcaster: Fora.tv

Amassing video from public events and colloquia around the world, the online video storehouse Fora.tv is the wide angle lens to TED's close-up. Giving up a TED-like uniform polish for the capaciousness of its collection, Fora.tv is quickly expanding its inventory through a deep roster of partners: a number of schools, the New Republic, Brookings Institution, the Oxonian Society, the Long Now Foundation, the Hoover Foundation, and the Aspen Institute.

The speakers themselves include a wide range of Nobel laureates, cultural notables, politicians, and policy wonks - but also a good many surprises. Settle in for a lively food fight between Christopher Hitchens and the Rev. Al Sharpton, tussling over religion at the New York Public Library, and then rove over to see Christian Lander, author of the blog Stuff White People Like, gawking meekly and stopping to photograph the large crowd attending his Washington, D.C., reading.

Fora.tv's features are more specialized than the others', including the ability for users to compile their own video libraries, as well as to upload and integrate their own contributions, public access cable-style. Production quality hardly warrants demerits in the face of the catalog collected here - one could sink into this C-SPAN of the public sphere for days of unexpected viewing.

The Talk Show: Bigthink.com

For those who think small is better, the video clip library of Bigthink.com delivers. Its shorts are studio-shot, first-person interviews. Each clip features the interviewee answering a single question or waxing on a single topic: for example, UCLA law professor Kal Raustiala explaining his "piracy paradox," the puzzle that intellectual property protection may be inhibiting creative progress in culture and industry.

Like TED Talks, Bigthink.com video can be searched by themes and ideas as well as speakers. It also provides text transcripts, which are easy to scan and search for those just researching.

Its edge is in the notoriety of its interviewees - world leaders, presidential candidates, and prestigious thinkers - combined with its prolific production schedule. It maintains the clip of new content one would expect of a daily television show, not an Internet video site. Still, for longer sittings, its quick-take editing style can feel thin.

The Op/Ed: Bloggingheads.tv

Following from an earlier video series concept, Meaningoflife.tv, the cultural critic Robert Wright created with prominent blogger Mickey Kaus a head-to-head video debate series called Bloggingheads.tv. Advertisements tout it as being like "Lincoln-Douglas . . . with lower production values."

Nearly every day, it publishes a new "diavlog", or two-way video debate, and chapters allow users to surf quickly between conversation points. The diavlog is a millennial take on another press mainstay, the opinion editorial. (Disclosure: Bloggingheads.tv has a partnership with the New York Times Co. through the Times' opinion pages online. The Globe is owned by the New York Times Co.)

The roster of speakers is heavy on journalists and public-policy types, and the daily subject matter skews - no doubt, somewhat wearingly for some of us - toward headline political news and opinion. While Wright and Kaus star as the mainstays, the stable of commentators is populous enough to keep Bloggingheads.tv from the whiff of celebrity - the topic is always the star.

Graduate Studies: Edge.org

For those seeking substance over sheen, the occasional videos released at Edge.org hit the mark. The Edge Foundation community is a circle, mainly scientists but also other academics, entrepreneurs, and cultural figures, brought together by the literary agent John Brockman.

Edge's long-form interview videos are a deep-dive into the daily lives and passions of its subjects, and their passions are presented without primers or apologies. It is presently streaming excerpts from a private lecture, including a thoughtful question and answer session, by Nobel laureate Daniel Kahneman to Edge colleagues on the importance of behavioral economics.

It won't be to everyone's taste. An unvarnished speaker like Sendhil Mullainathan, a MacArthur recipient with intriguing insights on poverty, is filmed in casual lecture, his thoughts unspooling in the mode of someone not preoccupied with clarity or economy of expression. The text transcripts are helpful in this context.

Regardless, the decidedly noncommercial nature of Edge's offerings, and the egghead imprimatur of the Edge community, lend its videos a refreshing air, making one wonder if broadcast television will ever offer half the off-kilter sparkle of their salon chatter.

And as for Charlie Rose? Perhaps he's voted with his feet. Excerpts of his PBS show are now streaming daily at the online magazine Slate.

Jeffrey MacIntyre (jeffmacintyre.com), who writes on culture, science and technology, is also a consultant to digital publishers (predicate-llc.com). He lives in New York.

Correction: An article about video lectures ("U Tube") in the Ideas section of Sunday, Nov. 2, misidentified Edward O. Wilson of Harvard University. He is a biologist.


...Jaron Lanier takes on the debate about the role and power of computers in shaping human finances, behavior and prospects from a radically different vantage point, faulting -- in an article published on the Edge web site -- "cybernetic totalists" who absolve of responsibility for "whatever happens" the individual people who do specific things: "I think that treating technology as if it were autonomous is the ultimate self-fulfilling prophecy. There is no difference between machine autonomy and the abdication of human responsibility. ... There is a real chance that evolutionary psychology, artificial intelligence, Moore's law fetishizing, and the rest of the package will catch on in a big way, as big as Freud or Marx did in their times."

[Also: Nathan Myhrvold, George Dyson, Ray Kurzweil]



I was watching a PBS production the other day entitled Dogs That Changed the World, and wondered about our contemporary fascination with things "That Changed the World."

The Machine That Changed the World (a 1991 book about automotive mass production). Cod: A Biography of The Fish That Changed the World (a 1998 book about, well, cod). The Map That Changed The World (2002 book about geologist William Smith). 100 Photographs That Changed the World (Life, 2003). Bridges That Changed the World (book, 2005). The Harlem Globetrotters: The Team That Changed the World (book, 2005). How William Shatner Changed the World (documentary, 2006). Genius Genes: How Asperger Talents Changed the World (book on brilliant people with autism, 2007). The Book That Changed the World (2008 article in the Guardian, about The Origin of Species).

This "Changed the World" stuff is getting to be a bit tedious, isn't it? Now that we have Dogs That Changed the World, can Cats That Changed the World be far behind? ...

...Bill Bean notes that there is already a place to read about People Who Changed the World and Then Changed Their Minds. Every year, the people at the Edge Foundation ask writers, thinkers, psychologists, historians and others what major ideas they have changed their minds about. Go to www.edge.org. It's good reading.



“BEWARE of geeks bearing formulas.” So saith Warren Buffett, the Wizard of Omaha. Words to bear in mind as we bail out banks and buy up mortgages and tweak interest rates and nothing, nothing seems to make any difference on Wall Street or Main Street. Years ago, Mr. Buffett called derivatives “weapons of financial mass destruction” — an apt metaphor considering that the Manhattan Project’s math and physics geeks bearing formulas brought us the original weapon of mass destruction, at Trinity in New Mexico on July 16, 1945.

In a 1981 documentary called “The Day After Trinity,” Freeman Dyson, a reigning gray eminence of math and theoretical physics, as well as an ardent proponent of nuclear disarmament, described the seductive power that brought us the ability to create atomic energy out of nothing.

“I have felt it myself,” he warned. “The glitter of nuclear weapons. It is irresistible if you come to them as a scientist. To feel it’s there in your hands, to release this energy that fuels the stars, to let it do your bidding. To perform these miracles, to lift a million tons of rock into the sky. It is something that gives people an illusion of illimitable power, and it is, in some ways, responsible for all our troubles — this, what you might call technical arrogance, that overcomes people when they see what they can do with their minds.”

The Wall Street geeks, the quantitative analysts (“quants”) and masters of “algo trading” probably felt the same irresistible lure of “illimitable power” when they discovered “evolutionary algorithms” that allowed them to create vast empires of wealth by deriving the dependence structures of portfolio credit derivatives.

What does that mean? You’ll never know. Over and over again, financial experts and wonkish talking heads endeavor to explain these mysterious, “toxic” financial instruments to us lay folk. Over and over, they ignobly fail, because we all know that no one understands credit default obligations and derivatives, except perhaps Mr. Buffett and the computers who created them.

Somehow the genius quants — the best and brightest geeks Wall Street firms could buy — fed $1 trillion in subprime mortgage debt into their supercomputers, added some derivatives, massaged the arrangements with computer algorithms and — poof! — created $62 trillion in imaginary wealth. It’s not much of a stretch to imagine that all of that imaginary wealth is locked up somewhere inside the computers, and that we humans, led by the silverback males of the financial world, Ben Bernanke and Henry Paulson, are frantically beseeching the monolith for answers. Or maybe we are lost in space, with Dave the astronaut pleading, “Open the bank vault doors, Hal.”

As the current financial crisis spreads (like a computer virus) on the earth’s nervous system (the Internet), it’s worth asking if we have somehow managed to colossally outsmart ourselves using computers. After all, the Wall Street titans loved swaps and derivatives because they were totally unregulated by humans. That left nobody but the machines in charge.

How fitting then, that almost 30 years after Freeman Dyson described the almost unspeakable urges of the nuclear geeks creating illimitable energy out of equations, his son, George Dyson, has written an essay (published at Edge.org) warning about a different strain of technical arrogance that has brought the entire planet to the brink of financial destruction. George Dyson is an historian of technology and the author of “Darwin Among the Machines,” a book that warned us a decade ago that it was only a matter of time before technology out-evolves us and takes over.

His new essay — “Economic Dis-Equilibrium: Can You Have Your House and Spend It Too?” — begins with a history of “stock,” originally a stick of hazel, willow or alder wood, inscribed with notches indicating monetary amounts and dates. When funds were transferred, the stick was split into identical halves — with one side going to the depositor and the other to the party safeguarding the money — and represented proof positive that gold had been deposited somewhere to back it up. That was good enough for 600 years, until we decided that we needed more speed and efficiency.

Making money, it seems, is all about the velocity of moving it around, so that it can exist in Hong Kong one moment and Wall Street a split second later. “The unlimited replication of information is generally a public good,” George Dyson writes. “The problem starts, as the current crisis demonstrates, when unregulated replication is applied to money itself. Highly complex computer-generated financial instruments (known as derivatives) are being produced, not from natural factors of production or other goods, but purely from other financial instruments.”

It was easy enough for us humans to understand a stick or a dollar bill when it was backed by something tangible somewhere, but only computers can understand and derive a correlation structure from observed collateralized debt obligation tranche spreads. Which leads us to the next question: Just how much of the world’s financial stability now lies in the “hands” of computerized trading algorithms?
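
For readers wondering what "deriving a correlation structure" involves, here is a toy Monte Carlo sketch of the standard one-factor Gaussian copula used to value CDO tranches. It is my own illustration with invented parameters, not any firm's actual model: each borrower defaults when a latent "asset value," partly driven by one shared market factor, falls below a threshold, and the tranche absorbs the slice of portfolio losses between its attachment points.

    from statistics import NormalDist
    import numpy as np

    def tranche_expected_loss(n_names=100, default_prob=0.05, correlation=0.3,
                              recovery=0.4, attach=0.03, detach=0.07,
                              n_sims=100_000, seed=0):
        rng = np.random.default_rng(seed)
        threshold = NormalDist().inv_cdf(default_prob)  # default if asset value < Phi^-1(p)
        m = rng.standard_normal((n_sims, 1))            # one shared market factor per scenario
        z = rng.standard_normal((n_sims, n_names))      # idiosyncratic noise per borrower
        assets = np.sqrt(correlation) * m + np.sqrt(1 - correlation) * z
        portfolio_loss = (assets < threshold).mean(axis=1) * (1 - recovery)
        # The tranche absorbs only the losses between its attachment points.
        tranche_loss = np.clip(portfolio_loss - attach, 0.0, detach - attach)
        return tranche_loss.mean() / (detach - attach)

    print(f"expected tranche loss: {tranche_expected_loss():.1%}")

Calibrating the correlation parameter to observed tranche spreads means inverting this kind of simulation over and over, which is the sense in which only computers can "understand" such instruments.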

Here’s a frightening party trick that I learned from the futurist Ray Kurzweil. Read this excerpt and then I’ll tell you who wrote it:

But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. ... Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

Brace yourself. It comes from the Unabomber’s manifesto.

Yes, Theodore Kaczynski was a homicidal psychopath and a paranoid kook, but he was also a bloodhound when it came to scenting all of the horrors technology holds in store for us. Hence his mission to kill technologists before machines commenced what he believed would be their inevitable reign of terror.

We are living, we have long been told, in the Information Age. Yet now we are faced with the sickening suspicion that technology has run ahead of us. Man is a fire-stealing animal, and we can’t help building machines and machine intelligences, even if, from time to time, we use them not only to outsmart ourselves but to bring us right up to the doorstep of Doom.

We are still fearful, superstitious and all-too-human creatures. At times, we forget the magnitude of the havoc we can wreak by off-loading our minds onto super-intelligent machines, that is, until they run away from us, like mad sorcerers’ apprentices, and drag us up to the precipice for a look down into the abyss.

As the financial experts all over the world use machines to unwind Gordian knots of financial arrangements so complex that only machines can make — “derive” — and trade them, we have to wonder: Are we living in a bad sci-fi movie? Is the Matrix made of credit default swaps?

When Treasury Secretary Paulson (looking very much like a frightened primate) came to Congress seeking an emergency loan, Senator Jon Tester of Montana, a Democrat still living on his family homestead, asked him: “I’m a dirt farmer. Why do we have one week to determine that $700 billion has to be appropriated or this country’s financial system goes down the pipes?”

“Well, sir,” Mr. Paulson could well have responded, “the computers have demanded it.”

Richard Dooling is the author of “Rapture for the Geeks: When A.I. Outsmarts I.Q.”

EL PAIS [10.9.08]

For many people the Internet is already the largest channel of information. We spend ever more time online, whether reading the news, checking email, watching videos and listening to music, consulting encyclopedias and maps, talking on the phone or writing blogs. In short, the Net now filters a large part of our access to reality. The human brain adapts to every new change, and the Internet represents an unprecedented one. What will its influence be? The experts are divided. For some, it could diminish our capacity to read and think deeply. For others, technology will in the near future merge with the brain to increase our intellectual capacity exponentially.

One of the most recent to open the debate is the American essayist Nicholas G. Carr, an expert in information and communication technologies and an adviser to the Encyclopaedia Britannica. He says he no longer thinks the way he used to. It happens above all when he reads. He used to be able to immerse himself in a book and devour page after page, hour after hour. Now he lasts only a few paragraphs before he loses focus, grows restless and looks for something else to do. "The deep reading that used to come naturally has become a struggle," Carr writes in his provocative article "Is Google Making Us Stupid?", published in The Atlantic. Carr attributes his disorientation to one main cause: prolonged use of the Internet. He is convinced that the Net, like other media, is not innocuous. "[Media] supply the stuff of thought, but they also shape the process of thought," he insists.

"Creo que la mayor amenaza es su potencial para disminuir nuestra capacidad de concentración, reflexión y contemplación", advierte Carr, a través del correo electrónico. "Mientras Internet se convierte en nuestro medio universal, podría estar readiestrando nuestros cerebros para recibir información de manera muy rápida y en pequeñas porciones", añade. "Lo que perdemos es nuestra capacidad para mantener una línea de pensamiento sostenida durante un periodo largo".

Carr's argument has stirred some debate in specialized forums, such as the online science magazine Edge.org, and it is not far-fetched. Neurologists hold that all mental activity influences the brain at a biological level, that is, in the laying down of neuronal connections, the complex electrical network in which thoughts are formed. "The brain evolved to find patterns. If information is presented in a certain form, the brain will learn that structure," explains Beau Lotto, professor of neuroscience at University College London. He adds a caveat: "It remains to be seen whether the brain then applies that structure to its behavior in other circumstances; it need not necessarily do so, but it is perfectly possible."

What remains to be seen is whether this influence will be negative, as Carr predicts, or whether it will be the first step toward integrating technology into the human body and expanding the brain's capacities, as the inventor and artificial-intelligence expert Raymond Kurzweil predicts. "Our first tools extended our physical reach, and now they extend our mental reach. Our brains realize that they need not devote mental (and neural) effort to tasks we can leave to machines," Kurzweil argues from New Jersey. He cites an example: "We have become less capable of doing arithmetic ourselves since calculators began doing it for us many decades ago. We now rely on Google as an amplifier of our memory, so we actually remember things less well than we would without it. But that is not a problem, because we do not have to do without Google. Indeed, these tools are becoming more ubiquitous and are available all the time."

Setting the brain against technology is the wrong approach, agrees Professor John McEneaney of the Department of Reading and Language Arts at Oakland University (US), echoing Kurzweil. "I believe technology is a direct expression of our cognition," McEneaney reasons. "The tools we use are as important as the neurons in our skulls. The tools define the nature of the task so that the neurons can do the work."

Carr insists that this influence will only deepen as Internet use grows. It is an incipient phenomenon that neurology and psychology will have to study in depth, but in the meantime a pioneering report on online information-seeking habits, led by researchers at University College London (UCL), suggests that we may be in the midst of a sweeping change in the human capacity to read and think.

The study observed the behavior of users of two research websites, one run by the British Library and the other by the Joint Information Systems Committee (JISC), a state-funded educational consortium that provides access to journals and e-books, among other resources. Going through the logs, the researchers found that users "glanced" at information rather than dwelling on it. They hopped from one article to the next, rarely went back, read one or two pages from each source, and clicked on to another, spending on average four minutes per e-book and eight minutes per e-journal. "It is clear that users are not reading online in the traditional sense; indeed, there are signs that new forms of reading are emerging as users browse horizontally through titles, pages, and abstracts in search of quick wins," the report states. "It almost seems that they go online to avoid reading in the traditional sense."

The experts stress how dizzyingly fast the change has been. "The Web has led people to behave quite differently with regard to information. That might seem to contradict the accepted view in evolutionary biology and psychology that basic human behavior does not change suddenly," says Professor David Nicholas of UCL's School of Library, Archive and Information Studies, speaking from London. "There is broad agreement that we have never seen change on this scale and at this speed, so this could very well be such a case [of sudden change]," he adds, citing his book Digital Consumers.

It is an unprecedented transformation, because this is a new medium with the potential to absorb all the others. "Never has a communications system played so many roles in our lives, or exerted such broad influence over our thoughts, as the Internet does today," Carr observes. "Yet, for all that has been written about the Net, little attention has been paid to how, exactly, it is reprogramming us."

This shift in how people search for information and read would affect not only the young, who presumably spend the most hours online, but individuals of every age. "The same has happened to teachers, professors, and family doctors. Everyone exhibits the same bouncing, skimming behavior," the report notes.

Carr insists that one of the key issues is the "shallow" mode of reading that is gaining ground. "In the quiet spaces opened up by the sustained, undistracted reading of a book, or by any other act of contemplation, we make our own associations, draw our own inferences and analogies, foster our own ideas." The trouble is that blocking deep reading blocks deep thinking, since the one is indistinguishable from the other, writes Maryanne Wolf, a reading and language researcher at Tufts University (US) and author of Proust and the Squid (published in Spanish as Cómo aprendemos a leer, Ediciones B). Her worry is that "unguided information may create a mirage of knowledge and thereby curtail the long, difficult, crucial thought processes that lead to knowledge itself," Wolf says from Boston.

Beyond the warnings about the Internet's hypothetical effects on cognition, scientists such as Kurzweil welcome its influence: "The more we rely on the nonbiological part of our intelligence (that is, the machines), the less work the biological part does, but the combination as a whole becomes more intelligent." Others dispute that prediction, arguing that growing dependence on the Net makes users lazy and, among other acquired habits, leads them to trust search engines completely, as if they were the grail. "They use it as a crutch," says Professor Nicholas, who is skeptical of the claim that the tool frees the brain from search chores so it can be put to other uses.

Carr goes further, arguing that "glance" reading benefits business. "Their revenues rise the more time we spend connected and the more pages and items of information we view," he reasons. "Companies have a strong economic interest in our speeding up our intake of information," he adds. "That does not mean they deliberately want us to lose the capacity for concentration and contemplation: it is simply a side effect of their business model."

Other experts qualify Carr's forecast considerably. The technology writer Edward Tenner, author of Our Own Devices: How Technology Remakes Humanity, shares Carr's concern but adds that the trend need not be irreversible. "I agree with the worry about shallow use of the Internet, but I see it as a cultural problem, reversible through better teaching and better search software, not as a neurological deformation," he explains from New Jersey (US). "It is like people who are used to cars and recliners but understand the importance of exercise."

In short, scientists such as Kurzweil stress the Internet's potential as an instrument of knowledge. "The Net offers the opportunity to host all the computation, knowledge, and communication there is. In the end, it will vastly exceed the capacity of biological human intelligence." And he concludes: "Once machines can do everything humans do, it will be a powerful combination, because it will be joined to the ways in which machines are already superior. But we will merge with this technology to make ourselves smarter."


At last, we have a black swan. The credit crisis began last year soon after the publication of Nassim Nicholas Taleb's bestselling Black Swan, which tackled the impact of unexpected events, such as the discovery of black swans in Australia by explorers who had thought all swans were white. ...

...Prediction markets, summing the market's wisdom, had it wrong. Last week, the Intrade market put the odds that the Tarp would have passed by now at more than 90 per cent.

Models using market extremes to predict political interventions were also fooled. When volatility rises as high as in the past few weeks, it has in the past been a great bet that the government will do something—which is in part why spikes in volatility tend to be great predictors of a subsequent bounce.

Taleb himself suggested recently that investors should rely least on normal statistical methods when they are in the "fourth quadrant": when there is a complex range of possible outcomes and the distribution of responses to those outcomes does not follow a well-understood pattern.
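
Why ordinary statistics fail in that quadrant can be illustrated with a toy simulation, a minimal sketch of ours rather than anything from the article or from Taleb: estimating an average from past observations works in a thin-tailed (Gaussian) world, but in a heavy-tailed one a single extreme draw can dominate the entire record.

    # Illustrative only: compares sample averages in Taleb's "Mediocristan"
    # (thin-tailed Gaussian draws) and "Extremistan" (heavy-tailed Pareto draws).
    import random

    random.seed(42)

    def sample_mean(draw, n):
        # Average of n independent draws from the given distribution.
        return sum(draw() for _ in range(n)) / n

    gaussian = lambda: random.gauss(0.0, 1.0)   # true mean 0, thin tails
    pareto = lambda: random.paretovariate(1.1)  # true mean 11, infinite variance

    for n in (100, 10_000, 1_000_000):
        print("n=%9d  gaussian mean=%+.3f  pareto mean=%.3f"
              % (n, sample_mean(gaussian, n), sample_mean(pareto, n)))

    # The Gaussian average settles near 0 almost immediately; the Pareto average
    # is driven by rare, huge draws and can sit far from its true value of 11
    # even after a million observations, exactly the regime in which past data
    # are a poor guide to future risk.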

Investors were in that quadrant on Monday morning. They were vulnerable to black swans and should not have relied on statistics as a guide.

One prediction for the future does look safe, however: investors will spend much more time making qualitative assessments of political risk.