This Christmas, a bevy of elegant models are on display. Not just the long-legged female variety (although you can see those at this season’s parties in New York); instead, regulators, bankers and investors have been flaunting their own smart models, as they attempt to predict what 2011 might deliver.
But as this economic catwalk gets underway, it is shot through with irony. When the financial crisis hit, many observers blamed the disaster on the misuse of financial models. Not only had these flashy computer systems failed to forecast behaviour in the sub-prime mortgage world, but they had also seduced bankers and investors into taking foolhardy risks, or been used to justify some crazy behaviour.
But these days, in spite of all those mis-steps, there is little sign that financiers are falling out of love with those models; on the contrary, if you flick through the recent plethora of reports from the Basel Committee — or look at the 2011 forecasts emanating from investment banks — these remain heavily reliant on ever-more complex forms of modelling.
So what are investors to make of this? One particularly thought-provoking set of ideas can be seen in the current work of Emanuel Derman, a former physicist turned banker who shot to fame within the banking industry two decades ago by co-developing some ground-breaking financial models, such as the Black-Derman-Toy model (one of the first interest rate models) and the Derman-Kani local volatility model (the first model consistent with the volatility smile). *
At first glance, Derman’s past might suggest he should be a model-lover — or "modeliser" — par excellence. In the banking world, he is often hailed as one of the great, original "quants", who paved the way for the derivatives revolution. Yet in reality, Derman has always been pretty cynical about those models that won him, and other quants, earlier accolades. For while investment bank salesmen might have treated his creations as near infallible, in truth Derman — like many brilliant scientists-turned-quants — has always recognised their flaws. ...
Some of the world’s greatest thinkers came together recently to answer the really big question — what will change the world? Roger Highfield, editor of New Scientist, reveals their predictions, from crowd-sourced charity to space colonisation and built-in telepathy.
It is not hard to think of examples of wide-eyed predictions that have proved somewhat wide of the mark. Personal jetpacks, holidays on the moon, the paperless office and the age of leisure all underline how futurologists are doomed to fail.
Any predictions should thus be taken with a heap of salt, but that does not mean crystal ball-gazing is worthless: on the contrary, even if it turns out to be bunk, it gives you an intriguing glimpse of current fads and fascinations.
A few weeks ago, a science festival in Genoa, Italy, gathered together some leading lights to discuss the one aspect of futurology that excites us all: cosa farà cambiare tutto — this will change everything.
The event was organised by John Brockman, a master convener, both online and in real life, and founder of the Edge Foundation, a kind of crucible for big new ideas.
With him were two leading lights of contemporary thought: Stewart Brand, the father of the Whole Earth Catalog, co-founder of a pioneering online community called The Well and of the Global Business Network; and Clay Shirky, web guru and author of Cognitive Surplus: Creativity and Generosity in a Connected Age. ...
When I received the invitation to write here, there was the question of whether the new columns would have names different from those of their authors. I thought about some possibilities. The first idea was to be a "name dropper," the English term for those in the habit of naming names of important people to impress listeners. I even thought about beginning every text with some name and gradually forming an idiosyncratic biographical catalogue, which could be useful for adventurous spirits.
The fact that I could not find a good ironic translation for that expression made me give up the game in the end. So I thought about the title "Frontier". In the back of my mind, still thinking in English, I moved from "the border" in the direction of "edge". The columns would deal only with cultural production that crossed the limits set by the commonplace, transforming the world or inventing new ways to think about life. My inspiration came from a number of different things, such as "Close to the Edge" or Brian Eno's Edge feature "A Big Theory Of Culture". But mostly, I wanted to emulate, in an absurdly individual and uselessly pretentious way, the site http://www.edge.org/.
I have tracked the trajectory of John Brockman, the man who founded Edge before the Web existed. I bought the first book in his series "The Reality Club" at the time of its launch in 1990. I was impressed with such an interesting gathering of thinkers from different areas, such as the philosopher Daniel Dennett, the biologist Lynn Margulis, and the psychologist Mihaly Csikszentmihalyi. I learned that what was published there was only a sample of a much greater diversity. The Reality Club's monthly invitation-only meetings in New York — which began in 1981 — brought together a fascinating group ranging from the physicist Freeman Dyson to the theater director Richard Foreman: almost all of my idols. The motto of the club was ambitious: "To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together and have them ask each other the questions they are asking themselves."
Today, the meeting room has become the website Edge. The transformation has not exactly been democratizing. The club remains as elitist (not a criticism, an observation) as before, maybe even more so, since its members have become celebrities (a sign of the times: today scientists can be more pop than Mick Jagger) and many of them are incredibly rich. It is not an open site where anyone can contribute; it remains invitation-only and editorially driven. The difference: the general reader can now follow the selected conversation almost in real time, after a light filter. Brockman still decides who may speak at the forum. He is currently one of the most powerful literary agents in the world (specializing mainly in science books), managing to convince the major publishing houses to pay millions in advances to his clients. (One of the legends surrounding his working method is that if a book begins to earn royalties, he says he has failed, because he didn't get a large enough advance from the publisher.) Brockman is the agent of Richard Dawkins, Jared Diamond, Martin Rees and others of the same caliber.
The site has several sections. In one of them, a sort of "lifestyles of the rich and famous" — of the people Edge considers the most interesting and intelligent in the world — is an album of photos of an annual event hosted by Brockman, originally named "The Millionaires' Dinner" which was later upgraded to "The Billionaires' Dinner." An invitation to the 2010 dinner was not easy to come by as the figures who were present were the owners of Google, Twitter, Huffington Post, Bill Gates, Benoit Mandelbrot (fractals), Craig Venter (Human Genome Project). Do I need to drop more names? A bomb at dinner and we would lose much of a certain creative intelligence that drives our world and our future, or the future that these people have created for all of us. The nerd on the edge has now become the center of power.
Another very popular section is the Edge Annual Question. Every year a new question is asked. In November, Richard H. Thaler, the father of "behavioral economics" (the hottest area in economic studies), asked the following question: "Can you name your favorite examples of wrong scientific beliefs that were held for long periods of time?" So far 65 responses have been received, authored by, among others, the physicist Lee Smolin and the artist Matthew Ritchie. This week a special question was published. The inquisitor is Danny Hillis, a pioneer in supercomputing, who, under the impact of WikiLeaks, wants to know whether we can, or must, keep secrets in the age of information.
But this is the festive aspect of Edge. What makes my neurons burn are the regular features, which are frequently brilliant texts, such as the most recent: "Metaphors, Models and Theories", by Emanuel Derman, one of those physicists who in recent decades have left the university to try to discover the laws of financial markets. (I will go deeper into this subject in a future column.) And this is why I always come back to Edge. In the world of Anglo-Saxon ideas (which still prevail throughout the world, or at least among the world's elite), there is no smarter guide.
Hermano Vianna is a Brazilian anthropologist and writer who currently works in television. The original Portuguese-language column, published behind O Globo's subscription pay-wall, is available, with an introduction, on Hermano Vianna's blog.
• Edge.org has a solid collection of essays addressing these questions: "When does my right to privacy trump your need for security? Should a democratic government be allowed to practice secret diplomacy? Would we rather live in a world with guaranteed privacy or a world in which there are no secrets? If the answer is somewhere in between, how do we draw the line?"
Now comes the "binary turn"? Clues to the nerdification of the FAZ.
Some time ago, more in jest than anything else, I referred to the venerable Frankfurter Allgemeine Zeitung in a couple of tweets as the "nerds' central organ". The occasion was Frank Schirrmacher's full-page defense of nerds just before the election, the paper's publishing offensive around the iPad unveiling, and the opening of the FAZ as a platform for self-confessed nerds, such as in the article by Frank Rieger (Chaos Computer Club). Since the phrase "central organ" has been picked up now and again, I have been thinking about it once more and have collected a few thoughts on the FAZ's "binary turn".
One thing is clear: Frank Schirrmacher is the driving force behind this process, as a glance at his editorial of 23 January shows, in which he vehemently calls for more digital intelligence in Germany (a rogue, anyone whose nerd alarm bells were not already ringing at that date). One might suspect that the reorientation of the FAZ feuilleton toward debates and articles on digital culture is, first of all, an accompanying campaign for his recent book project Payback, but that would be very superficial and would not do justice to Schirrmacher's feel for the issues.
The content of this pivot, which leads at least a little away from the genetics, bio- and nanotechnology debates of recent years, is absolutely to be welcomed: public discourse in Germany does in fact lag behind on serious questions of digitization and its social consequences. This nerdification gets a special twist, however, from the arrogance toward and rejection of online culture that breaks through every now and then, visible at very different points in the paper, currently for example in its barely repentant positioning in the Hegemann case.
If you look a little closer, there are some indications that the FAZ has been a place for nerds for much longer. A small trail of clues:
Exhibit 1: The feature of 27 June 2000
Spread across its pages, the FAZ printed an extract from the code of the human genome, to some "the most unreadable article in recent media history". It opened a publishing initiative under the banner of the "Third Culture" debate led by John Brockman, which aimed at entangling the intellectual discourse of the natural sciences with that of the humanities. The "digital revolution", by contrast, was at that time "only" opened with a few Schirrmacher texts. But who knows: maybe a spread in binary code is still to come.
Exhibit 2: The daily blog entry on page one
As a big journalistic break, the relaunch of 5 October 2007 mounted a color photo on the traditional page one. Since then, this teaser photo has developed more and more into a daily journal. The picture and its accompanying text naturally comment on a central theme of world affairs, but stylistically they often show exactly those elements of reduction, reference and commentary that are characteristic of the blogosphere so frequently criticized inside the paper.
And maybe it is just a personal perception, but haven't the captions lately fallen below the magic 140-character limit again and again? Even if not, many of these image comments work exactly like a typical Twitpic message: the image illustrates a theme or an idea, and the text points to further reading in the paper. Put the other way around: the FAZ could tweet its page one in just the same manner.
Exhibit 3: The nerds are not only in the features section
Not so much a concrete piece of evidence as a collection of circumstantial evidence for the thesis: even a superficial look at other departments turns up a surprising number of examples of "nerdy" reporting, from the surprisingly flowery attention to detail of the "Technology and Motor" desk to the often highly coded reports from the gourmet world. The material from Duckburg smuggled into the paper by the professed Donaldists Patrick Bahners and Andreas Platthaus belongs here, as does the knowledgeable "network economy" page every Tuesday in the business section: all small indications of the FAZ's anchoring in the proverbial nerd culture. Add FAZ.net, and the recruitment of Don Alphonso and Michael Seemann deserves mention at this point.
But what does this partial and expandable body of "evidence" actually yield? Is there a conclusion? The starting point for these considerations was, after all, the rather humorous labeling of the FAZ as a "nerd central organ"; a cursory critique of the paper has at least turned up some evidence of elements of the much-quoted "nerd culture". So what?
To state it clearly once again: the call for more "digital intelligence", or better, the increased talk of the "digital subject" in the German public sphere, deserves strong support, Schirrmacher's Payback and its theories of information overload notwithstanding. However, the terms "nerd" and "nerd culture" should be spelled out a little more clearly than they have been so far; otherwise the FAZ walks a fine line and leaves itself open to attack. Where that can lead, Thomas Knüwer recently demonstrated by taking apart an article by Frank Schirrmacher.
Such examples also illustrate the fragility of the construction of a "nerd FAZ", which does recognize the opportunity to set the tone for a new social discourse, but which would first have to form a more accurate picture of the situation on the digital-culture front.
If you were a sophisticated and up-to-the-minute science buff in 17th century Europe, you knew that there was only one properly scientific way to explain anything: "the direct contact-action of matter pushing on matter" (as Peter Dear puts it in The Intelligibility of Nature). Superstitious hayseeds thought that one object could influence another without a chain of physical contact, but that was so last century by 1680. Medieval physics had been rife with such notions; modern thought had cast those demons out. To you, then, Newton's theory of gravity looked like a step backwards. It held that the sun influenced the Earth without touching it, not even via intervening objects. At the time, that just sounded less "sciencey" than the theories it eventually replaced.
This came to mind the other day because, over at Edge.org, Richard H. Thaler asked people to nominate examples of "wrong scientific beliefs that were held for long periods." He also asked us to suggest a reason that our nominee held sway for too long. ...
There's a fascinating list of scientific ideas that endured for a long time, but were wrong, over at Edge.org, the Web site created by the agent and intellectual impresario John Brockman.
The cautionary tale of the fight over the cause of stomach ulcers, listed by quite a few contributors there, is the kind of saga that gives science journalists (appropriately) sleepless nights. One of my favorites in the list is the offering of Carl Zimmer, the author and science journalist, who discusses some durable misconceptions about the stuff inside our skulls:
"This laxe pithe or marrow in man's head shows no more capacity for thought than a Cake of Sewet or a Bowl of Curds."
This wonderful statement was made in 1652 by Henry More, a prominent seventeenth-century British philosopher. More could not believe that the brain was the source of thought. These were not the ravings of a medieval quack, but the argument of a brilliant scholar who was living through the scientific revolution. At the time, the state of science made it very easy for many people to doubt the capacity of the brain. And if you've ever seen a freshly dissected brain, you can see why. It's just a sack of custard. Yet now, in our brain-centered age, we can't imagine how anyone could think that way.

The list grew out of a query from Richard Thaler, the director of the Center for Decision Research at the University of Chicago Graduate School of Business and coauthor, with Cass Sunstein, of "Nudge: Improving Decisions About Health, Wealth, and Happiness." (He also writes a column for The Times.)
Here's his question:
The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and, for extra credit, say why it was believed to be true?
Science can contradict itself. And that's OK. It's a fundamental part of how research works. But from what I've seen, it's also one of the hardest parts for the general public to understand. When an old theory dies, it's not because scientists have lied to us and can't be trusted. In fact, exactly the opposite. Those little deaths are casualties of the process of fumbling our way towards Truth*.
Of course, even after the pulse has stopped, the dead can be pretty interesting. Granted, I'm biased. I like dead things enough to have earned a university degree in the sort of anthropology that revolves around exactly that. But I'm not alone. A recent article at the Edge Foundation website asked a broad swath of scientists and thinkers to name their favorite long-held theory, which later turned out to be dead wrong. The responses turn up all sorts of fascinating mistakes of science history—from the supposed stupidity of birds, to the idea that certain, separate parts of the brain controlled nothing but motor and visual skills.
One of my favorites: The idea that complex, urban societies didn't exist in Pre-Columbian Costa Rica, and other areas south of the Maya heartland. In reality, the cities were always there. I took you on a tour of one last January. It's just that the people who lived there built with wood and thatch, rather than stone. The bulk of the structures decayed over time, and what was left was easy to miss, if you were narrowly focused on looking for giant pyramids.
What's your favorite dead theory?
Edge: Wrong Scientific Beliefs That Were Held for Long Periods of Time ...
Earlier this week Richard H. Thaler posted a question to selected Edge contributors, asking them for their favorite examples of wrong scientific theories that were held for long periods of time. You know, little ideas like "the earth is flat."
The contributors' responses came from all different fields and thought processes, but there were a few recurring themes. One of the biggest hits was the theory that ulcers were caused by stress; this was discredited by Barry Marshall and Robin Warren, who proved that the bacterium H. pylori brings on the ulcers. Gregory Cochran explains:
One favorite is helicobacter pylori as the main cause of stomach ulcers. This was repeatedly discovered and then ignored and forgotten: doctors preferred 'stress' as the cause, not least because it was undefinable. Medicine is particularly prone to such shared mistakes. I would say this is the case because human biology is complex, experiments are not always permitted, and MDs are not trained to be puzzle-solvers–instead, to follow authority.
Another frequent topic of disbelief among Edge responders was theism and its anti-science offshoots–in particular the belief in intelligent design, and the belief that the Earth is only a few thousand years old. Going by current political discussions in America it may seem that these issues are still under contention and shouldn't be included on the list, but I'm going to have to say differently, and agree with Milford Wolpoff:
Creationism's stepsister, intelligent design, and allied beliefs have been held true for some time, even as the mountain of evidence supporting an evolutionary explanation for the history and diversity of life continues to grow. Why has this belief persisted? There are political and religious reasons, of course, but history shows that neither politics nor religion requires a creationist belief in intelligent design. ...
...The conversation about the role of the two great branches of learning is important, and still topical. C.P. Snow crystallized this debate in the mid-twentieth century with his suggestion that two cultures existed in the British academy, the literary and the scientific, and that they were at odds.
In his quest to argue that science will solve global social problems, why, Snow asked, should one be held responsible for knowing the works of Shakespeare, but not understand the Second Law of Thermodynamics? His insight gave steam to an internecine intellectual fight that had surfaced a number of times in the past. (Today, one can chart the most recent "science wars" all the way back through Snow to Arnold and Huxley, on to the Romantic critiques of the Enlightenment Project, to the debate between the Ancients and Moderns, which revolved around the new science's assault on both Aristotelianism and Renaissance Humanism.)
However, what shouldn't be forgotten in the admission that the humanities will resist ultimate reduction is that there are those in the humanities who suffer from science envy. This envy was given impetus by entomologist and evolutionary theorist E.O. Wilson in his monograph, Consilience, wherein Wilson suggests that the humanities must move toward the methods of the sciences to remain relevant.
While this gentle shimmy sounds harmless enough, there are those in humanistic disciplines like literary studies who have taken such a move to heart. For example, any cursory examination into the nascent approach called Literary Darwinism reveals a loose confederacy of individuals who think literary texts are best read within Darwinian contexts (think, reading Jane Austen to understand how her characters represent universals in human behavior related to, say, their inclusive fitness). ...
...That term, 'third culture', was popularized by literary agent and Edge founder John Brockman in a response to Snow to suggest that a new culture is emerging that is displacing the traditional, literary intellectual with thinkers in the sciences who are addressing key concepts and categories found in the humanities; Richard Dawkins writing about religion or Carl Sagan expounding about the awe of the cosmos both come to mind as quintessential examples. One need only browse through the science section of the bookstore to see that a bridge between the two cultures has been built, with much of the traffic in the popular sphere going one way.
"The Shallows" (which explores what the Internet is doing to our brains) is a clear example of a profession lacking in these latitudes: the writer dedicated to thinking about our new environment of technology and science. These are "technowriters", science writers such as the British biologist Richard Dawkins ("The Selfish Gene"), Daniel Dennett ("Darwin's Dangerous Idea"), the psychologist Steven Pinker ("The Blank Slate"), Matt Ridley ("Genome"), Malcolm Gladwell ("Blink"), Bill Bryson ("A Short History of Nearly Everything"), Brian Greene ("The Elegant Universe"), Michio Kaku ("Physics of the Impossible"), Paul Davies ("The Last Three Minutes"), and many more, including the hyper-mediatic Stephen Hawking ("A Brief History of Time"). Direct descendants of Carl Sagan, Richard Feynman and Stephen Jay Gould, this is a breed of authors who write science out of the laboratories and set it free. And, oddly enough (attention, Argentine publishers), they sell many books. True, very interesting collections have appeared here in recent years, such as "Ciencia que Ladra" (Siglo XXI), directed by the biologist Diego Golombek, which trains scientists and science communicators to tell stories that go beyond a cryptic paper or a forgettable news article. But one must admit that, compared with the international market, this "literature" is still playing in the second division. Each in his own way, and located in what C.P. Snow called the "third culture" (that bridge between science and literature currently represented by the site Edge.org), the great science writers take a scientific publication, link it with literature, and in doing so take it one floor up.
Dr. Craig Venter talks to Steve Kroft and takes him on a tour of his lab on "60 Minutes" [video].
(CBS) The microbiologist whose scientists have already mapped the human genome and created what he calls "the first synthetic species" says the next breakthrough could be a flu vaccine that takes hours rather than months to produce.
...KROFT: "There are a lot of people in this country who don't think that you ought to screw around with nature."
VENTER: "We don't have too many choices now. We are a society that is one hundred percent dependent on science. We're going to go up in our population in the next 40 years; we can't deal with the population we have without destroying our environment."
KROFT: "But aren't you playing God?"
VENTER: "We're not playing anything. We're understanding the rules of life."
KROFT: "But that's more than studying life, that's changing life".
VENTER: "Well, domesticating animals was changing life, domesticating corn. When you do cross-breeding of plants, you're doing this blind experiment where you're just mixing DNA of different types of cells and just seeing what comes out of it."
KROFT: "This is a little different though, this is another step, isn't it?"
VENTER: "Yeah, now we're doing it in a deliberate design fashion with tiny bacteria. I think it's much healthier to do it based on some knowledge and a better understanding of life than to do it blindly and randomly."
KROFT: "You know, I've asked two or three times, 'Do you think you're playing God?' I mean, do you believe in God?"
VENTER: "No. I believe the universe is far more wonderful than just assuming it was made by some higher power. I think the fact that these cells are software-driven machines and that software is DNA and that truly the secret of life is writing software, is pretty miraculous. Just seeing that process in the simplest forms that we're just witnessing is pretty stunning."
What idea will change everything? "Prediction is difficult, especially when it concerns the future." This quote about the difficulty of foreseeing the future is sometimes attributed to Karl Valentin, or even to Mark Twain. How wrong technological visions can be is shown by forecasts made in the run-up to the year 2000: from nuclear-powered cars to settlements on Mars. In this book, however, the cream of the global research community ventures forth with its outlooks in brief essays. A good book by sober scientists, not by technology dreamers.
[ED. NOTE: The conversation was in English, Schirrmacher's second language. Rather than edit the piece for grammar, and risk losing the spontaneity of the conversation, I present it here -- for the most part -- verbatim. -- John Brockman]
The question I am asking myself arose through work and through discussion with other people, and especially watching other people, watching them act and behave and talk, was how technology, the Internet and the modern systems, has now apparently changed human behavior, the way humans express themselves, and the way humans think in real life. So I've profited a lot from Edge.
We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, its just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on. There is one comment on Edge which I love, which is in Daniel Dennett's response to the 2007 annual question, in which he said that we have a population explosion of ideas, but not enough brains to cover them.
As we know, information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know -- this is the old Darwinian thought, the moment when Darwin started reading Malthus -- when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations. And so what interests me is that we are, because we have the Internet, now entering a phase where Darwinian structures, where Darwinian dynamics, Darwinian selection, apparently attacks ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker.
Here European thought is quite interesting, our whole history of thought, especially in the eighteenth, nineteenth, and twentieth centuries, starting from Kant to Nietzsche. Hegel for example, in the nineteenth century, where you said which thought, which thinking succeeds and which one doesn't. We have phases in the nineteenth century, where you could have chosen either way. You could have gone the way of Schelling, for example, the German philosopher, which was totally different to that of Hegel. And so this question of what survives, which idea survives, and which idea drowns, which idea starves to death, is something which, in our whole system of thought, is very, very known, and is quite an issue. And now we encounter this structure, this phenomenon, in everyday thinking.
It's the question: what is important, what is not important, what is important to know? Is this information important? Can we still decide what is important? And it starts with this absolutely normal, everyday news. But now you encounter, at least in Europe, a lot of people who think, what in my life is important, what isn't important, what is the information of my life. And some of them say, well, it's in Facebook. And others say, well, it's on my blog. And, apparently, for many people it's very hard to say it's somewhere in my life, in my lived life.
Of course, everybody knows we have a revolution, but we are now really entering the cognitive revolution of it all. In Europe, and in America too -- and it's not by chance -- we have a crisis of all the systems that somehow are linked to either thinking or to knowledge. It's the publishing companies, it's the newspapers, it's the media, it's TV. But it's as well the university, and the whole school system, where it is not a normal crisis of too few teachers, too many pupils, or whatever; too small universities; too big universities.
Now, it's totally different. When you follow the discussions, there's the question of what to teach, what to learn, and how to learn. Even for universities and schools, suddenly they are confronted with the question how can we teach? What is the brain actually taking? Or the problems which we have with attention deficit and all that, which are reflections and, of course, results, in a way, of the technical revolution?
Gerd Gigerenzer, with whom I talked and whom I find a fascinating thinker, puts it this way: thinking itself somehow leaves the brain and uses a platform outside the human body. That platform is the Internet, and it is the cloud. And very soon we will have the brain in the cloud. This raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else.
The European point of view, with our history of thought and all our idealistic tendencies, is that you can now see -- because in the fifties or sixties or seventies nobody knew the Internet would be coming -- that the whole idea of the Internet was somehow built in people's brains, in all the different sciences, years and decades before it actually existed. And look at how the computer -- Gigerenzer wrote a great essay about this -- was at first somehow isolated: in the military, in big laboratories, and so on. Then, in the seventies and of course the eighties, the computer spread, every doctor and every household had one, and suddenly the metaphors built in the fifties, sixties, and seventies had their triumph. People had to use the computer. As they say, the computer is the last metaphor for the human brain; we don't need any more. It succeeded because the tool shaped the thought once the tool was there, but all the thinking, in the brain sciences and elsewhere, had already happened, in the fifties, sixties, seventies.
But the interesting question is, of course, the Internet -- I don't know if anyone really expected the Internet to evolve the way it did. I read books from the nineties in which they still didn't really know that it would become as huge as it is. And, of course, nobody predicted Google at that time. And nobody predicted the Web.
Now, what I find interesting is this: if you put the computer and the Web and all this under the heading of "the new technologies," then in the late nineteenth century there was a comparable big discussion about the human motor. The new machines of the late nineteenth century required that the muscles of the human being be adapted to them. Especially in Austria and Germany there was this new thinking, where people said: first of all, we have to change the muscles. The term "calories" came into use in the late nineteenth century in order to optimize the human work force.
Now, in the twenty-first century, you have all the same issues, but with the brain. What was once the adaptation of muscles to the machines now appears under the heading of multitasking -- which is quite a problematic issue. The human muscle in the head, the brain, has to adapt. And, as we know from very recent studies, it is very hard for the brain to adapt to multitasking; that is only one issue. And it comes again with calories and all that. I find the concept very interesting -- again, Daniel Dennett and others have used it -- the concept of the informavore, the human being as somebody who eats information. So you can, in a way, see that the Internet, and the information overload we face at this very moment, have a lot to do with food chains: with food you take or don't take, with food that has many calories and doesn't do you any good, and with food that is very healthy and good for you.
The tool is not only a tool; it shapes the human who uses it. We always have the concept: first you have the theory, then you build the tool, and then you use the tool. But the tool itself is powerful enough to change the human being. God as the clockmaker, I think you said. Then, in Darwinian times, God was an engineer. And now He, of course, is the computer scientist and a programmer. What is interesting, of course, is that the moment neuroscientists and others used the computer as a tool to analyze human thinking, something new started.
The idea that thinking itself can be conceived in technical terms is quite new. Even in the thirties, of course, you had all these metaphors for the human body, even for the brain; but, for thinking itself, this was very, very late. Even in the sixties, it was very hard to say that thinking is like a computer.
Years ago on Edge you had a very interesting talk with Pattie Maes on "Intelligence Augmentation," when she was one of the first to build these intelligent software agents. There, you and Jaron Lanier and others asked about the concept of free will. She explained it, and it wasn't that big an issue then, because these were just intelligent agents like the ones we know from Amazon and others. But now, entering the real-time Internet and all the other possibilities of the near future -- predictive search and so on -- the question of determinism becomes much more interesting. The question of free will was always a kind of theoretical question; even very advanced people said, well, we declare there is no such thing as free will, but we admit that people, during their childhood, have been culturally programmed to believe in free will.
But now you have a generation -- in the next evolutionary stage, the child of today -- that is adapted to systems such as the iTunes "Genius," systems that not only know which book or which music file you like, but that go farther and farther in predicting things: predicting, for example, whether the concert I am going to watch tonight will be good or bad. Google will know it beforehand, because they know how people talk about it.
What will this mean for the question of free will? Because at bottom there are, of course, algorithms that analyze, that calculate certain predictabilities. And I wonder whether free will, or its absence, will become a very, very tough issue of the future. At this very moment, we have a new government in Germany, and they are just discussing what kind of effect this will have on politics. One of the issues, which of course seems very isolated at this very moment, is the question of how to predict certain terrorist activities from blogs -- as you know, in America you have the same thing. But this can go farther and farther.
The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives. And, of course, this will play out in the work force -- the new German government seems to be very keen on this issue, to at least prevent the worst impact on people, on workplaces.
It is very important to stress that we are not talking about cultural pessimism. What we are talking about is that a new technology -- a brain technology, to put it this way, a technology that has to do with intelligence and with thinking -- now clashes in a very real way with the history of thought in the European way of thinking.
Unlike America, as you might know, in Germany we had, in the last elections, for the first time a party that comes entirely out of the Internet. They are called The Pirates. At their beginning they were computer scientists concerned with questions of copyright and all that. But it is now much, much more. In the recent election, out of the blue, they received two percent of the votes, which is a lot for a new party that exists only on the Internet. And the voters were mainly young males -- 30, 40, 50 percent of them. Many, many young males. They are all very keen on new technologies; of course, they are computer kids and all that. But this party now, for the first time, reflects in a very pragmatic and political way themes we so far knew only theoretically. For example, one of the main issues, as I just described -- the adaptation of muscles to modern systems, whether in the brain or in the body -- is a question of digital Taylorism.
As far as we can see, I would say, there are three important concepts of the nineteenth century that somehow come back in a very personalized way, just as you have a personalized newspaper. First, Darwinism -- and, in a very real sense, look at the problem between Google and the newspapers: the whole question of who survives on the net, in the thinking; who gets more traffic, who gets less traffic, and so on. Then you have the concept of communism, which comes back as the question of "free," the fact that people work for free -- and not only those who sit at home and write blogs, but also many people in publishing companies and newspapers, who do a lot of things for free or offer them for free. And then, third, of course, Taylorism, which in its old form is a non-issue, but which we now have as digital Taylorism -- with an interesting switch. In the nineteenth century and the early twentieth century you could still hold others responsible for your own deficits: you could say, well, this is just really terrible, it is exhausting, it is not human, and so on.
Now look at the concept of multitasking, for example, which is a real problem for the brain. You don't think that others are responsible for it; instead you meet many people who say, well, I am not really good at it, it is my problem, I forget, I am just overloaded by information. What I find interesting is that three huge political concepts of the nineteenth century come back in a totally personalized way, and that we now, for the first time, have a political party -- a small political party, but one that will in fact influence the other parties -- that addresses this issue, again, in this personalized way.
It is a kind of catharsis, this Twittering, and so on. But now, of course, this kind of information conflicts with many other kinds of information. And, in a way, one could argue -- I know that was the case with Iran -- that in the future the Twitter information about an uproar in Iran will compete with the Twitter information of Ashton Kutcher or Paris Hilton, and so on. The question is to understand which is important. Deciding what is important and what is not important is something very linear; it needs time, at least the structure of time. Now you have simultaneity; you have everything happening in real time. And this impacts politics in ways that might be for the good, but also for the bad.
Because suddenly it is gone again -- and then comes the next piece of information, and the next. And this has very much to do with the concept of the European self, with taking oneself seriously, and so on. Now, as Google puts it -- if I understand it rightly -- all these webcams and cell phones are full of information: photos, videos, whatever. And all of it should, if people want it, be shared. And among all the thoughts expressed in any university at this very moment, there could be thoughts we really should know. In the nineteenth century that was not possible; but maybe there is one student who is much better than any of the thinkers we know. So we will have an overload of all this information, and we will be dependent on systems that calculate, that make the selection of this information.
And, as far as I can see, political information is somehow not distinct from this. It is the same issue: whether I have information from my family on the iPhone or information about our new government. So this incredible amount of information somehow becomes equal, and very, very personalized. And you have personalized newspapers. This will be a huge problem for politicians. From what I hear, they are now very interested, for example, in Google's PageRank, and in the question of how, with mathematical systems, you can create information cascades as a kind of artificial information overload. As you know, you can do this. And we are just not prepared for that; it is not too early. In the last elections we had blogs for the first time, and you could see that people started to create information cascades, not only with human beings but with bots and other stuff as well. And this is, as I say, only the beginning.
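The mechanism behind such manufactured cascades can be sketched with a toy simulation. The example below uses a standard linear-threshold cascade model on a random contact network; the network size, contact count, threshold, and number of seeded bot accounts are all hypothetical parameters chosen for illustration, not measurements of any real campaign.

```python
import random

random.seed(1)

N = 200           # accounts in the toy network
K = 8             # contacts each account follows
THRESHOLD = 0.25  # fraction of contacts repeating a message before an account joins
BOTS = 5          # seeded automated accounts

# Random directed contact graph: each account follows K others.
contacts = {i: random.sample([j for j in range(N) if j != i], K)
            for i in range(N)}

active = set(range(BOTS))  # the bots start spreading the message
changed = True
while changed:
    changed = False
    for node in range(N):
        if node in active:
            continue
        # An ordinary account joins once enough of its contacts repeat the message.
        share = sum(c in active for c in contacts[node]) / K
        if share >= THRESHOLD:
            active.add(node)
            changed = True

print(f"{BOTS} bot seeds -> {len(active)} accounts repeating the message")
```

Depending on the threshold and the wiring, the same handful of seeds either fizzles out or tips a large part of the network, which is exactly what makes cascades attractive as artificial overload.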
Germany still has a very strong anti-technology movement, which is quite interesting insofar as you can't really say it is left-wing or right-wing. As you know, very right-wing people, especially in German history, were very anti-technology. But that changed a lot. And why it took so long has, I would say, demographic reasons. We are an aging society, and the generation that is now 40 or 50 in Germany had its children very late. The whole evolutionary change through the new generation came slowly: first, they are fewer, and then they came later. It is not like the fifties, sixties, and seventies, with Warhol -- those were young societies, and things happened very fast. We took over all these interesting influences from America very, very quickly, because we were a young society. Now it has taken longer, but it is certain that, for demographic reasons, we are entering a situation where a new generation -- as you see with The Pirates as a party -- a generation that grew up with modern systems and modern technology, is taking the stage and changing society.
One must say, all the big companies are American companies, except SAP. But Google and all these others, they are American companies. I would say we weren't very good at inventing. We are not very good at getting people to study computer science and other things. And I must say -- and this is not meant as flattery of America, or Edge, or you, or whosoever -- what I really miss is that we don't have this type of computationally-minded intellectual -- though it started in Germany once, decades ago -- such as Danny Hillis and other people who participate in a kind of intellectual discussion, even if only a happy few read and react to it. Not many German thinkers have adopted this kind of computational perspective.
The ones who do exist have their own platform and have actually created a new party. This is something we are missing, because there has always been a kind of arrogance towards technology. For example, I am responsible for the entire cultural and science sections of the FAZ. We published reviews of all these wonderful books on science and technology, and that is fascinating and good. But the really important texts -- the ones that somehow write our life today and are, in a way, the stories of our life -- are, of course, the software, and these texts weren't reviewed. We should have found ways, much earlier, of transcribing what happens on the software level -- like Pattie Maes or others -- of writing it, rewriting it, in a way that people understand what it actually means. I think this is a big lack.
What did Shakespeare, and Kafka, and all these great writers -- what actually did they do? They translated society into literature. And of course, at that stage, society was something very real, something which you could see. And they translated modernization into literature. And now we have to find people who translate what happens on the level of software. At least for newspapers, we should have sections reviewing software in a different way; at least the structures of software.
We are just beginning to look at this in Germany. And we are looking for people -- there are not very many -- who have the ability to translate that. It needs to be done, because that is what makes us who we are. You will never really understand in detail how Google works, because you don't have access to the code; they don't give you the information. But just think of George Dyson's essay "Turing's Cathedral," which I love. It is a very good beginning, and he absolutely has a point: it is today's version of the kind of cathedral we would be entering if we lived in the eleventh century. It is incredible that people are building this cathedral of the digital age. And as he points out, when he visited Google he saw all the books they were scanning, and he was told that they are not scanning these books for humans to read, but for the artificial intelligence to read.
Who are the big thinkers here? In Germany, for me at least, for my work, there are a couple of key figures. One of them is Gerd Gigerenzer, who is, I would say, actually avant-garde at this very moment, because what he does is teach heuristics. And from what we see, we also have an amputation of heuristics through the technologies. People forget certain heuristics. It starts with calculation, because you have the calculator, but it goes much further, and you will lose many more rules of thumb in the future, because the systems -- Google and all the others -- are doing that for you. So Gigerenzer's thinking -- and he now has a big institute, working on risk assessment as well -- is very, very important. You could actually link him, in a way, to Nassim Taleb, because here again you have the whole question of risk assessment, of looking back and looking into the future, and all that.
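One of the rules of thumb Gigerenzer studies, the recognition heuristic, is simple enough to state as code: when choosing which of two options scores higher on some criterion, bet on the one you recognize, and guess only when recognition gives no cue. The sketch below is purely illustrative; the city names and the "recognized" set are hypothetical, whereas real experiments measure recognition empirically.

```python
import random

# Hypothetical set of cities this agent has heard of.
recognized = {"Berlin", "Munich", "Hamburg"}

def recognition_choice(a, b):
    """Bet on the recognized option (e.g. as the larger city);
    if both or neither option is recognized, there is no cue, so guess."""
    a_known = a in recognized
    b_known = b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return random.choice([a, b])  # no cue available

print(recognition_choice("Berlin", "Herne"))  # prints Berlin: only one option is recognized
```

The point of the paragraph above is that such cheap, frugal rules work surprisingly well, and that outsourcing them to search systems means they quietly atrophy.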
Very important in literature, still, though he is now around 80 years old, is of course Hans Magnus Enzensberger. Peter Sloterdijk is a very important philosopher -- a kind of literary figure, but important. But then, unlike in the nineteenth or twentieth century, there are not many leading figures. I must say that Gigerenzer, as well, writes all his books in English. We have quite interesting people at this very moment in law, which is very important for discussions of copyright and all that. But the conversations about new technologies and human thought don't, at this very moment, really take place in Germany.
There are European thinkers who have cult followings -- Slavoj Žižek, for example. Ask any intellectual in Germany, and they will tell you Žižek is just the greatest. He is a kind of communist; he even considers himself a Stalinist. But these are, of course, all labels. Wild thinkers -- Europeans, at this very moment, love wild thinkers.
Reality Club discussion on EDGE: Daniel Kahneman, George Dyson, Jaron Lanier, Nick Bilton, Nick Carr, Douglas Rushkoff, Jesse Dylan, Virginia Heffernan, Gerd Gigerenzer, John Perry Barlow, Steven Pinker, John Bargh, Annalena McAfee, John Brockman
We were on the subject of the "third culture," and Brazil ignores its own history of science, its scientists, and the links between the arts, the sciences, and the humanities. It is true that Brazil has never received a Nobel (neither in science nor in the other areas, though some of its writers have deserved one), but it came close a few times (with Carlos Chagas and Jayme Tiomno, for example, and not counting Peter Medawar, the Briton born in Petrópolis who received the Medicine and Physiology prize 50 years ago). And it may not have gotten there precisely because of this neglect of the subject. When you see the vast literature on science that has emerged in the last 40 years and consider the Brazilian case, where names like Marcelo Gleiser and Fernando Reinach appeared only recently, the melancholy of the comparison is inevitable.
The habit is worldwide, I know: it is like reading histories of modernism at the turn of the 20th century that do not mention Einstein and Bohr, at most Freud (who was a physician and, contrary to what many think, quite receptive to knowledge of brain physiology as a route to understanding mental complexity -- and who today would be impressed with the new scanning technologies). But Brazilian historians and sociologists not only ignore science in their portraits of the period; they also ignore its achievements. In the U.S., where there is so much religious conservatism, science has always had a strong presence. And there, initiatives appear such as the Edge website (www.edge.org), made to house the thoughts of authors such as Damasio, Dennett and Dawkins -- to stay with the letter D -- who try to connect scientific culture to humanistic culture. ...
...Art, by the way, has almost always been in dialogue with science, because each can add to the other's knowledge, or at least offer a sound mutual defense. It has been like that from the painting of Leonardo da Vinci (who developed glazing to be more faithful to visual perception) to the fiction of Ian McEwan (though I liked Solar, about a physicist involved with the issue of global warming, less than Saturday), by way of the refraction of light in Rembrandt and Picasso's angular concurrences.
We need to rescue the role of scientists in Brazil's history and treat the subject much better in schools and publications. Although there are initiatives here and there, some as important as FAPESP, there is still much room for improvement in the production of, and thinking about, science. Consider how small the recent obituaries in local newspapers were for names like Martin Gardner (who debunked so many defenders of the supposedly proven existence of the paranormal and of UFOs) and Benoit Mandelbrot (inventor of the concept of fractals, whom some people took for a kind of mystic of impurities, an intuitive holist) to see how little emphasis science receives in our culture. The republic would gain much from the knowledge and method it would convey to its students.
This is surely one of the most remarkable infographics we've ever posted. Created by social scientist Eduardo Salcedo-Albarán, it documents the organizational structure and almost limitless influence of Mexico's Michoacan drug family. And it teaches you a great deal about why, exactly, the family is so hard to combat -- and why its power seems so pervasive.
The infographic itself details various wings of the Michoacan cartel -- or La Familia, as it's better known -- alongside various government agencies. (The shorthand for the acronyms: anything starting with "FUN" is a Michoacan drug cell; those starting with "NAR" are government drug agencies.) The arrows show links between each one, meaning they're sharing information. But what's most interesting is that the size of the bubbles shows how much information each cell of the organization is able to share:
NEW WORLD VIEWS: THE MAPS OF THE 21ST CENTURY [pp18-19]
THE WORLD IS IN THE MIND
No map depicts reality - but we believe unwaveringly in the objectivity of cartography
THE GOAL IS THE GOAL
Why we love paper maps, but no longer use them
THE LURE OF THE WHITE SPOTS
Curator Hans Ulrich Obrist is celebrating the new era of cartography with a festival of artists, scientists and designers
WE HAVE NOTHING MORE TO EXPECT
Progress is the management of the present: why maps have replaced the watch, and GPS is the Big Ben of the present
WITH THE EYES OF THE EAGLE: WHOEVER MAKES AND USES MAPS IS THE SOVEREIGN
Curator Hans Ulrich Obrist invited 50 artists onto the stage for a 20-hour event in London: a marathon on the maps of the 21st century. It was about more than just Google Maps.
With pith helmets and ice axes, the men went into battle against the unknown; the victorious now sit in the hall of fame. David Livingstone, the missionary and explorer, the first European to see Victoria Falls and thus its "discoverer," looks down graciously from his lush stucco frame at the guests of the Royal Geographical Society in London.
The marathon was to be understood literally: over 20 hours, some 50 artists, architects, philosophers, scientists and musicians took the stage. After 15 minutes each was usually shooed off again; the next speaker was waiting. The British architect David Adjaye presented his typology of architecture in Africa, for which he spent ten years photographing buildings in the continent's 53 states.
Skepticism about the truth of maps
The philosopher Rosi Braidotti, who teaches at the Human Rights Institute at the University of Utrecht, pointed out that we owe our skepticism about the truth of maps to postmodernism. And the American literary agent John Brockman, founder of the influential Internet platform "Edge," showed what the maps used in science look like. Many of them are really diagrams, as the developmental biologist Lewis Wolpert made clear, but in London nobody let that spoil the fun.
Thus one saw an intricate network of countless intersection points, with which the sociologist Nicholas A. Christakis and the political scientist James Fowler made visible the relationship between obesity and social contacts; and the gene pioneer J. Craig Venter contributed what looked like an endless series of colorful building blocks: the map of the first synthetically assembled genome.
It was the fifth time that Hans Ulrich Obrist had chosen this strength-sapping format of a race through a single issue, considered from as many points of view as possible. After manifestos and poetry -- the themes of the past two years -- Obrist's focus this year, however, hit the mark perfectly.
Few technologies seem to characterize the young century like the new cartography. This was already apparent at this year's DLD Conference in Munich, where the curator held a small symposium on maps for a select group: whether designers, astronomers, or Internet artists, all spoke of the great changes that digital cartography has set in motion.
In London, Obrist greatly expanded his guest list, showing that the marathon reflects a global development. Since 2005, when the U.S. Internet company Google launched its map service, Google Maps has become almost omnipresent. From a once precious resource for heroes and rulers, the medium has become readily available to the masses, used by experts and lay people alike.
In the run-up to the "Map Marathon," Obrist asked contemporary artists for their personal maps of the 21st century. Because Obrist -- quite simply the greatest networker in the art world, which is why he has before been named the most important curator in the world -- was the one asking, a great deal of mail duly arrived at the Serpentine Gallery. ...