Edge in the News

60 Minutes [11.17.10]

Dr. Craig Venter talks to Steve Kroft and takes him on a tour of his lab on "60 Minutes."

(CBS) The microbiologist whose scientists have already mapped the human genome and created what he calls "the first synthetic species" says the next breakthrough could be a flu vaccine that takes hours rather than months to produce.


...KROFT: "There are a lot of people in this country who don't think that you ought to screw around with nature."

VENTER: "We don't have too many choices now. We are a society that is one hundred percent dependent on science. We're going to go up in our population in the next 40 years; we can't deal with the population we have without destroying our environment."

KROFT: "But aren't you playing God?"

VENTER: "We're not playing anything. We're understanding the rules of life."

KROFT: "But that's more than studying life, that's changing life."

VENTER: "Well, domesticating animals was changing life, domesticating corn. When you do cross-breeding of plants, you're doing this blind experiment where you're just mixing DNA of different types of cells and just seeing what comes out of it."

KROFT: "This is a little different though, this is another step, isn't it?"

VENTER: "Yeah, now we're doing it in a deliberate design fashion with tiny bacteria. I think it's much healthier to do it based on some knowledge and a better understanding of life than to do it blindly and randomly."

KROFT: "You know, I've asked two or three times, 'Do you think you're playing God?' I mean, do you believe in God?"

VENTER: "No. I believe the universe is far more wonderful than just assuming it was made by some higher power. I think the fact that these cells are software-driven machines and that software is DNA and that truly the secret of life is writing software, is pretty miraculous. Just seeing that process in the simplest forms that we're just witnessing is pretty stunning."

DIE WELT [11.14.10]

What idea will change everything? "Prediction is difficult, especially when it concerns the future." The quote about the difficulty of foreseeing the future is attributed sometimes to Karl Valentin, sometimes to Mark Twain. How wrong technological visions can be is shown by the forecasts made around the year 2000: from nuclear-powered cars to settlements on Mars. In this book, however, the cream of the global research community ventures forth with its outlooks in brief essays. A good book by sober scientists, not by technology dreamers.

THE HUFFINGTON POST [11.14.10]

[ED. NOTE: The conversation was in English, Schirrmacher's second language. Rather than edit the piece for grammar, and risk losing the spontaneity of the conversation, I present it here -- for the most part -- verbatim. -- John Brockman]

The question I am asking myself arose through work and through discussion with other people, and especially watching other people, watching them act and behave and talk, was how technology, the Internet and the modern systems, has now apparently changed human behavior, the way humans express themselves, and the way humans think in real life. So I've profited a lot from Edge.

We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, it's just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on. There is one comment on Edge which I love, which is in Daniel Dennett's response to the 2007 annual question, in which he said that we have a population explosion of ideas, but not enough brains to cover them.

As we know, information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know -- this is the old Darwinian thought, the moment when Darwin started reading Malthus -- when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations. And so what interests me is that we are, because we have the Internet, now entering a phase where Darwinian structures, where Darwinian dynamics, Darwinian selection, apparently attacks ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker.

Here European thought is quite interesting, our whole history of thought, especially in the eighteenth, nineteenth, and twentieth centuries, starting from Kant to Nietzsche. Hegel for example, in the nineteenth century, where you said which thought, which thinking succeeds and which one doesn't. We have phases in the nineteenth century, where you could have chosen either way. You could have gone the way of Schelling, for example, the German philosopher, which was totally different to that of Hegel. And so this question of what survives, which idea survives, and which idea drowns, which idea starves to death, is something which, in our whole system of thought, is very, very known, and is quite an issue. And now we encounter this structure, this phenomenon, in everyday thinking.

It's the question: what is important, what is not important, what is important to know? Is this information important? Can we still decide what is important? And it starts with this absolutely normal, everyday news. But now you encounter, at least in Europe, a lot of people who think, what in my life is important, what isn't important, what is the information of my life. And some of them say, well, it's in Facebook. And others say, well, it's on my blog. And, apparently, for many people it's very hard to say it's somewhere in my life, in my lived life.

Of course, everybody knows we have a revolution, but we are now really entering the cognitive revolution of it all. In Europe, and in America too -- and it's not by chance -- we have a crisis of all the systems that somehow are linked to either thinking or to knowledge. It's the publishing companies, it's the newspapers, it's the media, it's TV. But it's as well the university, and the whole school system, where it is not a normal crisis of too few teachers, too many pupils, or whatever; too small universities; too big universities.

Now, it's totally different. When you follow the discussions, there's the question of what to teach, what to learn, and how to learn. Even for universities and schools, suddenly they are confronted with the question how can we teach? What is the brain actually taking? Or the problems which we have with attention deficit and all that, which are reflections and, of course, results, in a way, of the technical revolution?

Gerd Gigerenzer, to whom I talked and who I find a fascinating thinker, put it in such a way that thinking itself somehow leaves the brain and uses a platform outside of the human body. And that's the Internet and it's the cloud. And very soon we will have the brain in the cloud. And this raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else.

The European point of view, with our history of thought, and all our idealistic tendencies, is that now you can see -- because they didn't know that the Internet would be coming, in the fifties or sixties or seventies -- that the whole idea of the Internet somehow was built in the brains, years and decades before it actually was there, in all the different sciences. And when you see how the computer -- Gigerenzer wrote a great essay about that -- how the computer at first was somehow isolated, it was in the military, in big laboratories, and so on. And then the moment the computer, in the seventies and then of course in the eighties, was spread around, and every doctor, every household had a computer, suddenly the metaphors that were built in the fifties, sixties, seventies, then had their triumph. And so people had to use the computer. As they say, the computer is the last metaphor for the human brain; we don't need any more. It succeeded because the tool shaped the thought when it was there, but all the thinking, like in brain sciences and all the others, had already happened, in the sixties, seventies, fifties even.

But the interesting question is, of course, the Internet -- I don't know if they really expected the Internet to evolve the way it did -- I read books from the nineties, where they still don't really know that it would be as huge as it is. And, of course, nobody predicted Google at that time. And nobody predicted the Web.

Now, what I find interesting is that if you see the computer and the Web, and all this, under the heading of "the new technologies," we have, in the late nineteenth century, this big discussion about the human motor. The new machines in the late nineteenth century required that the muscles of the human being should be adapted to the new machines. Especially in Austria and Germany, we have this new thinking, where people said, first of all, we have to change muscles. The term "calories" was invented in the late nineteenth century, in order to optimize the human work force.

Now, in the twenty-first century, you have all the same issues, but now with the brain, what was the adaptation of muscles to the machines, now under the heading of multitasking -- which is quite a problematic issue. The human muscle in the head, the brain, has to adapt. And, as we know from just very recent studies, it's very hard for the brain to adapt to multitasking, which is only one issue. And again with calories and all that. I think it's very interesting, the concept -- again, Daniel Dennett and others said it -- the concept of the informavores, the human being as somebody eating information. So you can, in a way, see that the Internet and that the information overload we are faced with at this very moment has a lot to do with food chains, has a lot to do with food you take or not to take, with food which has many calories and doesn't do you any good, and with food that is very healthy and is good for you.

The tool is not only a tool, it shapes the human who uses it. We always have the concept, first you have the theory, then you build the tool, and then you use the tool. But the tool itself is powerful enough to change the human being. God as the clockmaker, I think you said. Then in the Darwinian times, God was an engineer. And now He, of course, is the computer scientist and a programmer. What is interesting, of course, is that the moment neuroscientists and others used the computer, the tool of the computer, to analyze human thinking, something new started.

The idea that thinking itself can be conceived in technical terms is quite new. Even in the thirties, of course, you had all these metaphors for the human body, even for the brain; but, for thinking itself, this was very, very late. Even in the sixties, it was very hard to say that thinking is like a computer.

You had once in Edge, years ago, a very interesting talk with Patty Maes on "Intelligence Augmentation" when she was one of the first who invented these intelligent agents. And there, you and Jaron Lanier, and others, asked the question about the concept of free will. And she explained it and it wasn't that big an issue, of course, because it was just intelligent agents like the ones we know from Amazon and others. But now, entering real-time Internet and all the other possibilities in the near future, the question of predictive search and others, of determinism, becomes much more interesting. The question of free will, which always was a kind of theoretical question -- even very advanced people said, well, we declare there is no such thing as free will, but we admit that people, during their childhood, will have been culturally programmed so they believe in free will.

But now, when you have a generation -- in the next evolutionary stages, the child of today -- which are adapted to systems such as the iTunes "Genius," which not only know which book or which music file they like, and which goes farther and farther in predictive certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it.

What will this mean for the question of free will? Because, in the bottom line, there are, of course, algorithms, who analyze or who calculate certain predictabilities. And I'm wondering if the comfort of free will or not free will would be a very, very tough issue of the future. At this very moment, we have a new government in Germany; they are just discussing what kind of effect this will have on politics. And one of the issues, which of course at this very moment seems to be very isolated, is the question how to predict certain terroristic activities, which they could use, from blogs -- as you know, in America, you have the same thing. But this can go farther and farther.

The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives. And, of course, this will play out in the work force -- the new German government seems to be very keen on this issue, to at least prevent the worst impact on people, on workplaces.

It's very important to stress that we are not talking about cultural pessimism. What we are talking about is that a new technology which is in fact a technology which is a brain technology, to put it this way, which is a technology which has to do with intelligence, which has to do with thinking, that this new technology now clashes in a very real way with the history of thought in the European way of thinking.

Unlike America, as you might know, in Germany we had a party for the first time in the last elections which totally comes out of the Internet. They are called The Pirates. In their beginning they were computer scientists concerned with questions of copyright and all that. But it's now much, much more. In the recent election, out of the blue, they received two percent of the votes, which is a lot for a new party which only exists on the Internet. And the voters were mainly 30, 40, 50 percent young males. Many, many young males. They're all very keen on new technologies. Of course, they are computer kids and all that. But this party, now, for the first time, reflects the way which we know, theoretically, in a very pragmatic and political way. For example, one of the main issues, as I just described, the question of the adaptation of muscles to modern systems, either in the brain or in the body, is a question of the digital Taylorism.

As far as we can see, I would say, we have three important concepts of the nineteenth century, which somehow come back in a very personalized way, just like you have a personalized newspaper. This is Darwinism, the whole question. And, in a very real sense, look at the problem with Google and the newspapers. Darwinism, but as well the whole question of who survives in the net, in the thinking; who gets more traffic; who gets less traffic, and so. And then you have the concept of communism, which comes back to the question of free, the question that people work for free. And not only those people who sit at home and write blogs, but also many people in publishing companies, newspapers, do a lot of things for free or offer them for free. And then, third, of course, Taylorism, which is a non-issue, but we now have the digital Taylorism, but with an interesting switch. At least in the nineteenth century and the early twentieth century, you could still make others responsible for your own deficits in that you could say, well, this is just really terrible, it's exhausting, and it's not human, and so on.

Now, look at the concept, for example, of multitasking, which is a real problem for the brain. You don't think that others are responsible for it, but you meet many people who say, well, I am not really good at it, and it's my problem, and I forget, and I am just overloaded by information. What I find interesting is that three huge political concepts of the nineteenth century come back in a totally personalized way, and that we now, for the first time, have a political party -- a small political party, but it will in fact influence the other parties -- who address this issue, again, in this personalized way.

It's a kind of catharsis, this Twittering, and so on. But now, of course, this kind of information conflicts with many other kinds of information. And, in a way, one could argue -- I know that was the case with Iran -- that maybe the future will be that the Twitter information about an uproar in Iran competes with the Twitter information of Ashton Kutcher, or Paris Hilton, and so on. The question is to understand which is important. What is important, what is not important is something very linear, it's something which needs time, at least the structure of time. Now, you have simultaneity, you have everything happening in real time. And this impacts politics in a way which might be considered for the good, but also for the bad.

Because suddenly it's gone again. And the next piece of information, and the next piece of information -- and if now -- and this is something which, again, has very much to do with the concept of the European self, to take oneself seriously, and so on -- now, as Google puts it, they say, if I understand it rightly, in all these webcams and cell phones -- are full of information. There are photos, there are videos, whatever. And they all should be, if people want it, shared. And all the thoughts expressed in any university, at this very moment, there could be thoughts we really should know. I mean, in the nineteenth century, it was not possible. But maybe there is one student who is much better than any of the thinkers we know. So we will have an overload of all these information, and we will be dependent on systems that calculate, that make the selection of this information.

And, as far as I can see, political information somehow isn't distinct from it. It's the same issue. It's a question of whether I have information from my family on the iPhone, or whether I have information about our new government. And so this incredible amount of information somehow becomes equal, and very, very personalized. And you have personalized newspapers. This will be a huge problem for politicians. From what I hear, they are now very interested in, for example, Google's page rank; in the question how, with mathematical systems, you can, for example, create information cascades as a kind of artificial information overload. And, as you know, you can do this. And we are just not prepared for that. It's not too early. In the last elections we, for the first time, had blogs, where you could see they started to create information cascades, not only with human beings, but as well with BOTs and other stuff. And this is, as I say, only the beginning.

Germany still has a very strong anti-technology movement, which is quite interesting insofar as you can't really say it's left-wing or right-wing. As you know, very right-wing people, in German history especially, were very anti-technology. But it changed a lot. And why it took so long, I would say, has demographic reasons. As we are in an aging society, and the generation which is now 40 or 50, in Germany, had their children very late. The whole evolutionary change, through the new generation -- first, they are fewer, and then they came later. It's not like in the sixties, seventies, with Warhol. And the fifties. These were young societies. It happened very fast. We took over all these interesting influences from America, very, very fast, because we were a young society. Now, somehow it really took a longer time, but now that is for sure we are entering, for demographic reasons, the situation where a new generation which is -- as you see with The Pirates as a party -- they're a new generation, which grew up with modern systems, with modern technology. They are now taking the stage and changing society.

One must say, all the big companies are American companies, except SAP. But Google and all these others, they are American companies. I would say we weren't very good at inventing. We are not very good at getting people to study computer science and other things. And I must say -- and this is not meant as flattery of America, or Edge, or you, or whosoever -- what I really miss is that we don't have this type of computationally-minded intellectual -- though it started in Germany once, decades ago -- such as Danny Hillis and other people who participate in a kind of intellectual discussion, even if only a happy few read and react to it. Not many German thinkers have adopted this kind of computational perspective.

The ones who do exist have their own platform and actually created a new party. This is something we are missing, because there has always been a kind of an attitude of arrogance towards technology. For example, I am responsible for the entire cultural sections and science sections of FAZ. And we published reviews about all these wonderful books on science and technology, and that's fascinating and that's good. But, in a way, the really important texts, which somehow write our life today and which are, in a way, the stories of our life -- are, of course, the software -- and these texts weren't reviewed. We should have found ways of transcribing what happens on the software level much earlier -- like Patty Maes or others, just to write it, to rewrite it in a way that people understand what it actually means. I think this is a big lack.

What did Shakespeare, and Kafka, and all these great writers -- what actually did they do? They translated society into literature. And of course, at that stage, society was something very real, something which you could see. And they translated modernization into literature. And now we have to find people who translate what happens on the level of software. At least for newspapers, we should have sections reviewing software in a different way; at least the structures of software.

We are just beginning to look at this in Germany. And we are looking for people -- it's not very many people -- who have the ability to translate that. It needs to be done because that's what makes us who we are. You will never really understand in detail how Google works because you don't have access to the code. They don't give you the information. But just think of George Dyson's essay, which I love, "Turing's Cathedral." This is a very good beginning. He absolutely has the point. It is today's version of the kind of cathedral we would be entering if we lived in the eleventh century. It's incredible that people are building this cathedral of the digital age. And as he points out, when he visited Google, he saw all the books they were scanning, and noted that they said they are not scanning these books for humans to read, but for the artificial intelligence to read.

Who are the big thinkers here? In Germany, for me at least, for my work, there are a couple of key figures. One of them is Gerd Gigerenzer, who is somebody who is absolutely -- I would say he is actually avant-garde, at this very moment, because what he does is he teaches heuristics. And from what we see, we have an amputation of heuristics, through the technologies, as well. People forget certain heuristics. It starts with a calculation, because you have the calculator, but it goes much further. And you will lose many more rules of thumb in the future because the systems are doing that, Google and all the others. So Gigerenzer, in his thinking -- and he has a big Institute now -- on risk assessment, as well, is very, very important. You could link him, in a way, actually to Nassim Taleb, because again here you have the whole question of not risk assessment, the question of looking back, looking into the future, and all that.

Very important in literature, still, though he is 70 years old, 80 years old, is of course Hans Magnus Enzensberger. Peter Sloterdijk is a very important philosopher; a kind of literary figure, but he is important. But then you have, not unlike in the nineteenth or twentieth century, there are many leading figures. But I must say, as well as Gigerenzer, he writes all his books in English, we have quite interesting people, at this very moment, in law, which is very important for discussions of copyright and all that. But regarding the conversations of new technologies and human thought, they, at this very moment, don't really take place in Germany.

There are European thinkers who have cult followings -- Slavoj Žižek, for example. Ask any intellectual in Germany, and they will tell you Žižek is just the greatest. He's a kind of communist, but he considers himself Stalinistic, even. But this is, of course, all labels. Wild thinkers. Europeans, at this very moment, love wild thinkers.

Reality Club discussion on EDGE: Daniel Kahneman, George Dyson, Jaron Lanier, Nick Bilton, Nick Carr, Douglas Rushkoff, Jesse Dylan, Virginia Heffernan, Gerd Gigerenzer, John Perry Barlow, Steven Pinker, John Bargh, Annalena McAfee, John Brockman

O Estado de S.Paulo Culture [11.13.10]

We were discussing the "third culture" and Brazil, which ignores its own history of science, its scientists, and the links between the arts, the sciences, and the humanities. It is true that Brazil has never received a Nobel (neither in the sciences nor in the other areas, though some of its writers have deserved one), but it came close a few times (with Carlos Chagas and Jayme Tiomno, for example, and not with Peter Medawar, the Briton born in Petrópolis who received the prize in Medicine and Physiology more than 50 years ago). And it may never have gotten there precisely because of the neglect dedicated to the subject. When you see the vast literature on science that has emerged in the last 40 years and consider the Brazilian case, where names like Marcelo Gleiser and Fernando Reinach have appeared only recently, the melancholy of the comparison is inevitable.

The habit is worldwide, I know: histories of modernism at the turn of the 20th century would not talk about Einstein and Bohr, at most about Freud (who was a doctor and, contrary to what many think, quite well acquainted with brain physiology as a route to understanding mental complexity, and who today would be impressed with the new scanning technologies). But Brazilian historians and sociologists not only ignore science in their portraits of the era; they also ignore its achievements. In the U.S., where there is so much religious conservatism, science has always had a sharp prominence. And there initiatives appear such as the Edge website (www.edge.org), made to house the thoughts of authors such as Damasio, Dennett, and Dawkins (to stay with the letter D) who try to connect scientific culture to humanistic culture. ...

...Art, by the way, has almost always been in dialogue with science, since each can add knowledge to the other, or at least offer a sound mutual defense. It has been so from the painting of Leonardo da Vinci (who developed veiling to be more faithful to visual perception) to the fiction of Ian McEwan (although I liked Solar, about a physicist involved with the issue of global warming, less than Saturday), by way of Rembrandt's refraction of light and Picasso's angular concurrences.

We need to rescue the role of scientists in Brazil's history and treat the subject far better in schools and publications. Although there are a few initiatives here and there, even with the importance of a FAPESP, there is still much room for improvement in the production of, and thinking about, science. Consider the recent obituaries in local newspapers of names like Martin Gardner (who knocked down so many defenders of the supposedly proven existence of the paranormal and UFOs) and Benoit Mandelbrot (inventor of the concept of fractals, whom some take for a kind of mystic of impurities, an intuitive holist) to see how small the emphasis on science is in our culture. The republic would gain much in knowledge and method to pass on to its students.

FAST COMPANY [11.3.10]

This is surely one of the most remarkable infographics we've ever posted. Created by social scientist Eduardo Salcedo-Albarán, it documents the organizational structure and almost limitless influence of Mexico's Michoacan drug family. And it teaches you a great deal about why, exactly, the family is so hard to combat -- and why its power seems so pervasive.

The infographic itself details various wings of the Michoacan cartel -- or La Familia as it's better known -- alongside various government agencies. (The shorthand for the acronyms: anything starting with "FUN" is a Michoacan drug cell; those starting with "NAR" are government drug agencies.) The arrows show links between each one, meaning they're sharing information. But what's most interesting is that the size of the bubbles shows how much information each cell of the organization is able to share:

SUEDDEUTSCHE ZEITUNG [10.22.10]

 

THE FEUILLETON
NEW WORLD VIEWS: THE MAPS OF THE 21ST CENTURY [pp18-19]

THE WORLD IS IN THE MIND
No map depicts reality - but we believe unwaveringly in the objectivity of cartography
TANJA MICHALSKY

THE GOAL IS THE GOAL 
Why we love paper maps, but no longer use them
JENS-CHRISTIAN RABE

THE RAYS OF THE WHITE SPOTS 
Curator Hans Ulrich Obrist is celebrating the new era of cartography with a festival of artists, scientists and designers
LAURA WEISSMUELLER

WE HAVE NOTHING MORE TO EXPECT 
Progress is managing the present: why maps have replaced the clock, and GPS is the Big Ben of our era
BERND GRAFF

WITH THE EYES OF THE EAGLE: WHOEVER MAKES AND USES MAPS IS THE SOVEREIGN
BURKHARD MÜLLER

 

SUEDDEUTSCHE ZEITUNG [10.21.10]

 

Curator Hans Ulrich Obrist invited 50 artists on stage in a 20-hour event in London: a marathon on the maps of the 21st century. It was about more than just Google Maps.

[Google Translation:] With pith helmets and ice axes, the men went into battle against the unknown. The victorious now hang in the hall of fame. David Livingstone, the missionary and explorer who was the first European to see Victoria Falls and who passes as its "discoverer," looks down graciously from his lush stucco frame at the guests of the Royal Geographical Society in London.

The marathon was to be understood literally: in 20 hours, some 50 artists, architects, philosophers, scientists, and musicians took the stage. After 15 minutes they were usually shooed off again; the next speaker was waiting. The British architect David Adjaye reported there on his typology of architecture in Africa, for which he spent ten years photographing buildings in the continent's 53 states.

The skepticism about the truth of maps

The philosopher Rosi Braidotti, who teaches at the Human Rights Institute of the University of Utrecht, pointed out that we owe our skepticism about the truth of maps to postmodernism. And the American literary agent John Brockman, founder of the influential Internet platform "Edge," showed what the various maps used in science look like. Many of them are really diagrams, as the developmental biologist Lewis Wolpert made clear, but in London nobody let that spoil the fun.

Thus one saw an intricate network with countless points of intersection, with which the sociologist Nicholas A. Christakis and the political scientist James Fowler made the relationship between obesity and social contacts visible; and the gene pioneer J. Craig Venter presented what looked like an endless series of colorful building blocks: the map of the first synthetically assembled genome.

It was the fifth time that Hans Ulrich Obrist had chosen this strength-sapping marathon format, in which a subject is considered from as many points of view as possible. After manifestos and poetry, the themes of the past two years, Obrist has scored a perfect landing with this year's focus.

Few technologies seem to characterize the young century like the new cartography. This could already be sensed at this year's DLD Conference in Munich, where the curator had held a symposium on maps for a small group: whether designers, astronomers, or Internet artists, all spoke of the great changes that digital cartography has set in motion.

In London, Obrist greatly expanded his guest list, and the marathon showed that it reflects a global development. Since 2005, when the U.S. Internet company Google launched its map service, Google Maps has become almost omnipresent. From a once-precious resource for heroes and rulers, the medium has become readily available to the masses, used by experts and laypeople alike.

In the run-up to the "Map Marathon," Obrist asked contemporary artists for their personal map of the 21st century. Because Obrist is simply the greatest networker in the art world, which is why he has been titled the most important curator in the world, a great deal of mail duly arrived at the Serpentine Gallery. ...

David Rowan, http://www.wired.co.uk/news/archive/2010-10/18/biggest-gallery-of-maps [10.17.10]

What's the connection between Kevin Kelly's habits on the internet, Louise Bourgeois's contented view of France, and Craig Venter's genome?

They can all be represented as maps. And this weekend, they all were -- along with hundreds of maps of experimental art, of the world's oldest-known words, and of the steampunk-and-superheroes content of BoingBoing.

Oh, and not forgetting dozens more maps celebrating this magazine's fascination with turning data into stunningly beautiful visualisations.

Let me explain. Over the weekend I took part in an epic project organised by our friends at the Serpentine Gallery in London -- the Map Marathon, a live two-day event at the Royal Geographical Society in Kensington. It was the fifth in the annual series of Serpentine Gallery Marathons, conceived by super-curator Hans Ulrich Obrist, the gallery's extraordinarily energetic co-director of exhibitions, with gallery director Julia Peyton-Jones.

The Swiss-born Obrist, featured in Wired in February, was once called by Art Review "the art world's most powerful figure". After seeing the impressive cast list for the two-day event, you'll understand why -- with contributions from the likes of Anish Kapoor, David Adjaye and Gilbert & George.

In previous years, the Serpentine's marathons have been curated around themes such as interviews and poetry. But this year's theme was just up Wired's street (well, perhaps a short walk away up the A315): maps, in all their forms and beauty, from literal representations of physical landscapes, to abstract conceptualisations by scientists. The overall aim was "to challenge notions of art, culture, science, technology, and methods of public discourse and debate" -- and in that it more than succeeded.

From noon until 10pm on both Saturday and Sunday, there were non-stop live presentations by more than 50 artists, scientists, poets, writers, philosophers, musicians, architects and designers. There were also special collaborations with the Edge community and with the DLD conference community run by Steffie Czerny and Marcel Reichart, whose excellent events I've written about here before.

On Sunday lunchtime, I shared the stage with Hal Bertram of ITO, the smart visualisers who worked with us on our "Data into Information" feature in September's issue of Wired. Our conversation was titled: When Data Meets Maps: How Datavisualisation is Changing the World. Hal showed some of our favourite visualisations, including examples of how OpenStreetMap was used to save lives after the Haiti earthquake in January, plus examples of how open data streams can be used effectively to visualise traffic flow.

Also up on stage were Wired friends such as Eric Rodenbeck of Stamen, whose work we featured a few months ago; and Aaron Koblin, who presented on "Re-Embodied Data: Mapping the Unseeable". But data visualisers were just one thread running through this constantly surprising event. There was Marina Abramović presenting on Body Maps; the writer Russell Hoban; and Marcus du Sautoy talking about Mathematical Maps.

One of my favourite panels was run by John Brockman, the literary super-agent who runs the EDGE community of "some of the most interesting minds in the world". Together with Lewis Wolpert and Armand Leroi, he presented maps submitted by members of the EDGE community. So we got to see Kevin Kelly's internet; plus philosopher Eduardo Salcedo-Albarán's map of interconnections between Mexican drugs cartels.

That was followed by two strikingly contrasting but equally compelling sessions - C. E. B. Reas, the co-inventor (with Ben Fry) of the Processing software language, who explored the beautiful patterns it creates out of data; and architect David Adjaye, who showed some of the 35,000 photos he took in Africa.

Well done to the @WiredUK Twitter followers who got to go free of charge - the rest of you really will need to follow us on Twitter so you get early warning next time. And congratulations to all at the Serpentine for a rich and brain-expanding weekend.

BOINGBOING [10.2.10]

"We are fixated on technology and technological success, and we have no sustained or systematic approach to field-based social understanding of our adversaries' motivation, intent, will, and the dreams that drive their strategic vision, however strange those dreams and vision may seem to us."—Anthropologist Scott Atran, who believes the quest to end violent political extremism needs more science. (edge.org)

WHY EVOLUTION IS TRUE [8.2.10]

So you’re an organization whose mission is to blur the lines between faith and science, and you have huge wads of cash to do this.  What’s the best strategy?

Well, if you’re smart, you find a bunch of journalists who are not averse to being bribed to write articles consonant with your mission, give them a lot of money to attend “seminars” on reconciling faith and science (you also give a nice emolument to the speakers), enlist a spiffy British university to house these journalists, on whom you bestow the fancy title of “fellows,” cover all their expenses (including housing) to go to the UK for a couple of months, and even give them a “book allowance.”  What could be more congenial to an overworked journalist than a chance to play British scholar, punting along the lovely Cam or enjoying a nice pint in a quaint pub, all the while chewing over the wisdom of luminaries like John Polkinghorne and John Haught, and pondering the mysteries of a fine-tuned universe and the inevitable evolution of humans?

And the best part is this: forever after, those journalists are in your camp.  Not only can you use their names in your advertising, but you’ve conditioned them, in Pavlovian fashion, to think that great rewards come to those who favor the accommodation of science and faith. They’ll do your job for you!

The John Templeton Foundation may be misguided, but it’s not stupid.  The Templeton-Cambridge Journalism Fellowships in Science and Religion pay senior and mid-career journalists $15,000 (plus all the perks above) to come to Cambridge University for two months, listen to other people talk about science and religion, study a religion/science topic of their own devising, and then write a nifty paper that they can publish, so getting even more money. What a perk! Imagine sitting in a medieval library, pondering the Great Questions. And you get to be called a fellow! And write a term paper! Isn’t that better than cranking out hack pieces for people who’d rather be watching American Idol?  Sure, you have to apply, and write an application essay stating how you intend to relate science and religion, but, hey, it’s only 1500 words, and once you’re in, you’re golden.  You may even get to be on the advisory board, and have a chance to come back to the trough.

As I said, The Templeton Foundation is smart—or rather wily.  They realize that few people, especially underpaid journalists and overworked academics, are immune to the temptation of dosh, and once those people get hooked on the promise of money and prestige, they forever have a stall in the Templeton stable. And, in the hopes of future Templeton funding,  perhaps they’ll continue to write pieces congenial to the Foundation’s mission.

The Templeton Foundation is wily, but they’re not exactly honest.  Look at this:

After decades during which leading voices from science and religion viewed each other with suspicion and little sense of how the two areas might relate, recent years have brought an active pursuit of understanding how science may deepen theological awareness, for example, or how religious traditions might illuminate the scientific realm.  Fellowship organizers note that rigorous journalistic examination of the region where science and theology overlap – as well as understanding the reasoning of many who assert the two disciplines are without common ground – can effectively promote a deeper understanding of the emerging dialogue.

Now if you’re interested in seeing how science and religion “illuminate” one another, what’s the first thing you think of?  How about this:  is there any empirical truth in the claims of faith? After all, if you’re trying to “reconcile” two areas of thought, and look at their interactions, surely you’d be interested if there’s any empirical truth in them.  After all, why “reconcile” two areas if one of them might be only baseless superstition?  Is the evidence for God as strong as it is for evolution? Does the “fine-tuning” of physical constants prove Jesus?  Was the evolution of humans inevitable, thereby showing that we were part of God’s plan?

It’s not that there’s nothing to say about this.  After all, one of the speakers in the Fellows’ symposia is Simon Conway Morris, who has written a popular-science book claiming that biology proves that the evolution of human-like creatures was inevitable.  It’s just that the Templeton Foundation doesn’t want to promote, or have its Fellows write about, the other side, the Dark Side that feels that no reconciliation is possible between science and faith.  John Horgan, who was once a Journalism Fellow, talks about his experience:

My ambivalence about the foundation came to a head during my fellowship in Cambridge last summer. The British biologist Richard Dawkins, whose participation in the meeting helped convince me and other fellows of its legitimacy, was the only speaker who denounced religious beliefs as incompatible with science, irrational, and harmful. The other speakers — three agnostics, one Jew, a deist, and 12 Christians (a Muslim philosopher canceled at the last minute) — offered a perspective clearly skewed in favor of religion and Christianity.

Some of the Christian speakers’ views struck me as inconsistent, to say the least. None of them supported intelligent design, the notion that life is in certain respects irreducibly complex and hence must have a divine origin, and several of them denounced it. Simon Conway Morris, a biologist at Cambridge and an adviser to the Templeton Foundation, ridiculed intelligent design as nonsense that no respectable biologist could accept. That stance echoes the view of the foundation, which over the last year has taken pains to distance itself from the American intelligent-design movement.

And yet Morris, a Catholic, revealed in response to questions that he believes Christ was a supernatural figure who performed miracles and was resurrected after his death. Other Templeton speakers also rejected intelligent design while espousing beliefs at least as lacking in scientific substance.

The Templeton prize-winners John Polkinghorne and John Barrow argued that the laws of physics seem fine-tuned to allow for the existence of human beings, which is the physics version of intelligent design. The physicist F. Russell Stannard, a member of the Templeton Foundation Board of Trustees, contended that prayers can heal the sick — not through the placebo effect, which is an established fact, but through the intercession of God. In fact the foundation has supported studies of the effectiveness of so-called intercessory prayer, which have been inconclusive.

One Templeton official made what I felt were inappropriate remarks about the foundation’s expectations of us fellows. She told us that the meeting cost more than $1-million, and in return the foundation wanted us to publish articles touching on science and religion. But when I told her one evening at dinner that — given all the problems caused by religion throughout human history — I didn’t want science and religion to be reconciled, and that I hoped humanity would eventually outgrow religion, she replied that she didn’t think someone with those opinions should have accepted a fellowship. So much for an open exchange of views.

So, the Foundation doesn’t really want the hard light of science cast upon faith.  It wants its journalists (and nearly everyone it funds) to show how faith and science are compatible.  Those who feel otherwise, like Victor Stenger, Richard Dawkins, Anthony Grayling, Steven Weinberg, well, those people don’t have a say.  (In fact, the Foundation’s history of intellectual dishonesty has made many of them unwilling to be part of its endeavors.) If a miscreant sneaks in by accident, as did John Horgan, he’s told that he doesn’t belong.  The Foundation may pay lip service to dissenters, as in this statement (my emphasis),

Fellowship organizers note that rigorous journalistic examination of the region where science and theology overlap – as well as understanding the reasoning of many who assert the two disciplines are without common ground – can effectively promote a deeper understanding of the emerging dialogue.

but you won’t see Templeton giving Journalism Fellowships to people who have a track record of such views.  Instead, the Fellows spend their time pondering, “Now how on earth could those poor people think that science and faith are incompatible?”

These journalism fellowships are nothing more than a bribe—a bribe to get journalists to favor a certain point of view.  The Foundation’s success at recruiting reputable candidates proves one thing: it doesn’t cost much to buy a journalist’s integrity.  Fifteen thousand bucks, a “book allowance,” and a fancy title will do it.

Could this explain why those journalists who trumpet every other achievement on their websites keep quiet when they get a Templeton Fellowship?

WHY EVOLUTION IS TRUE [8.2.10]

When I claimed that the John Templeton Foundation was engaged in bribing journalists, I didn’t mean that they directly paid off those journalists for writing articles that blurred the lines between science and faith.  It’s nothing so crass as that. What I meant was that Templeton creates a climate in which journalists who take a certain line in their writings can expect sizable monetary and career rewards:

As I said, The Templeton Foundation is smart—or rather wily.  They realize that few people, especially underpaid journalists and overworked academics, are immune to the temptation of dosh, and once those people get hooked on the promise of money and prestige, they forever have a stall in the Templeton stable. And, in the hopes of future Templeton funding,  perhaps they’ll continue to write pieces congenial to the Foundation’s mission.

It’s a subtle way of using writers to promulgate your own views, though of course none of those writers would ever admit that they had been bought off.

Rod Dreher is an example of how the Templeton system works.  Dreher was a columnist at the Dallas Morning News, and author of Crunchy Cons (2006), a book about those conservatives who think as righties and live as lefties.  Last year, Dreher won a Templeton-Cambridge Journalism Fellowship, one of Templeton’s most important vehicles for conflating science and faith. Since he got his fellowship, Dreher has written not only for the Dallas paper, but also on beliefnet, a religion/spirituality website. His columns have pretty much been aligned with the Templeton Foundation’s own views.

Last August, for example, either at or near the end of his Fellowship, Dreher wrote a piece for the Dallas Morning News describing his wonderful experience at Cambridge, decrying “atheist fundamentalism,” and asserting that the horrors of Nazi Germany were part of “atheism’s savage legacy.”  He then touted a NOMA-like solution:

We ought to reject the shibboleth, advocated by both religious and secular fundamentalists, that religion and science are doomed to be antagonists. They are both legitimate ways of knowing within their limited spheres and should both complement and temper each other. The trouble comes when one tries to assert universal hegemony over the other. . .

Contrary to the biases of our time, the importance of science does not exceed that of art and religion. As the poet Wendell Berry writes, the sacredness of life “cannot be proved. It can only be told or shown.” Fortunate are those whose minds are free enough to recognize it.

This kind of stuff is like cream to the cats at Templeton.  How they must have licked their whiskers when they read it!

In a beliefnet column posted last week, Dreher decried the coming “Age of Wonder” touted by physicist Freeman Dyson, in which science may play an increasingly important role in our life:

This, in the end, is why science and religion have to engage each other seriously. Without each other, both live in darkness, and the destruction each is capable of is terrifying to contemplate — although I daresay you will not find a monk or a rabbi prescribing altering the genetic code of living organisms for the sake of mankind’s artistic amusement. What troubles me, and troubles me greatly, about the techno-utopians who hail a New Age of Wonder is their optimism uncut by any sense of reality, which is to say, of human history. In the end, what you think of the idea of a New Age of Wonder depends on what you think of human nature. I give better than even odds that this era of biology and computers identified by [Freeman] Dyson and celebrated by the Edge folks will in the end turn out to have been at least as much a Dark Age as an era of Enlightenment. I hope I’m wrong. I don’t think I will be wrong.

Over at Pharyngula, P. Z. Myers took apart Dreher’s arguments against biotechnology, giving a dozen examples of Dreher’s ignorance and misstatement.  And although Dreher wrote

The truth of the matter is that I turned up in Cambridge knowing a lot about religion, but not much about science. What I saw and heard during those two-week seminars, and what I learned from my Templeton-subsidized research that summer (I designed my own reading program, which compared Taoist and Eastern Christian views of the body and healing) opened my mind to science. It turned out that I didn’t know what I didn’t know until I went on the fellowship.

it appears that he still doesn’t know what he doesn’t know.

On Sept. 26 of last year, five days before Templeton started accepting applications for their journalism fellowships, Dreher promoted the Templeton Journalism Fellowships on beliefnet, encouraging people to apply.

On November 30 of last year, Dreher announced that he was leaving the Dallas Morning News to become director of publications at the John Templeton Foundation. That’s where he is now. He’s still publishing on beliefnet, though, where, a week ago, he wrote a heated column attacking my contention that Templeton bribes journalists.  It’s the usual stuff—outraged assertions that journalists could be bought, attacks on “atheist fundamentalists,” and what Dreher calls a “brave, contrarian position” that we should all be “nice” to each other.   You can read it for yourself, and I urge you to do so.

The curious thing, though, is that while decrying the idea that Templeton “buys off” journalists, Dreher is himself a beneficiary of Templeton’s practice of rewarding those who, after entering the system, perform well.  Dreher was a journalism fellow just last year. Other journalism fellows have been promoted to the advisory committee for the fellowships.  And several members of the Templeton Foundation’s Board of Advisors have, after their service, gone on to win the million-pound Templeton Prize itself.  The lesson, which seems transparently obvious, is that if you clamber aboard the Templeton gravy train and keep repeating that science and faith are complementary “ways of knowing,” good things will happen to you.

Oh, one last point.  The Templeton website says this about Dreher’s credentials:

A seven-time Pulitzer Prize nominee, Rod has spent most of the past two decades as an opinion journalist, having worked as a film and television critic and news columnist at the New York Post and other newspapers. He has appeared on National Public Radio, ABC News, Fox News Channel, CNN, and MSNBC.

That seemed odd to me.  Seven-time Pulitzer nominee?  That’s big stuff!  But a bit of sleuthing showed that it’s not what it seems.  Nearly any journalist can be a Pulitzer “nominee” for journalism.  All somebody has to do is fill out a form, submit a few of the “nominee’s” articles, and write a $50 check to Columbia University/Pulitzer Prizes. As the Pulitzer website says:

By February 1, the Administrator’s office in the Columbia School of Journalism has received more than 1,300 journalism entries. Those entries may be submitted by any individual based on material coming from a text-based United States newspaper or news site that publishes at least weekly during the calendar year and that adheres to the highest journalistic principles.

Editors do this all the time for their writers, but you don’t have to be an editor to nominate someone: anybody can do it.

And the thing is, the Pulitzer organization does not recognize the category of “nominee” for those who get nominated this way—it recognizes the category of “nominated finalist,” those three individuals whose submissions make the cut and get considered for the Pulitzer Prize itself. The Pulitzer organization, in fact, discourages the use of the term “nominee,” presumably because any newspaper or news site journalist who has a friend with fifty bucks can be a nominee.  From their website:

22. What does it mean to be a Pulitzer Prize Winner or a Pulitzer Prize Nominated Finalist?

  • A Pulitzer Prize Winner may be an individual, a group of individuals, or a newspaper’s staff.
  • Nominated Finalists are selected by the Nominating Juries for each category as finalists in the competition. The Pulitzer Prize Board generally selects the Pulitzer Prize Winners from the three nominated finalists in each category. The names of nominated finalists have been announced only since 1980. Work that has been submitted for Prize consideration but not chosen as either a nominated finalist or a winner is termed an entry or submission. No information on entrants is provided.

Pulitzer also says this:

The three finalists in each category are the only entries in the competition that are recognized by the Pulitzer office as nominees.

I checked the Pulitzer list of nominated finalists, and I didn’t find Dreher’s name on it.  I guess Templeton is calling Dreher a “nominee” against the recommendations of the Pulitzer organization.  If I’m right here, Dreher and Templeton may want to correct his credentials.

DIE PRESSE [7.9.10]

"I am being eaten alive," complained "FAZ" publisher Frank Schirrmacher months ago in his book "Payback." His problem: the information explosion brought on by the Internet, Twitter, and company. According to Schirrmacher, it is making new human beings of us. It "changes our memory, our attention, and our mental abilities; our brain is being physically altered, comparable only to the changes in muscle and body that people underwent in the age of the industrial revolution."

All of this, admittedly, could already be read a year earlier in the work of the American science writer Nicholas Carr. Carr caused a sensation in 2008 with his article "Is Google Making Us Stupid?" The new media undermine the capacity for concentration and contemplation, Carr claimed. In doing so he invoked a neuroscientific phenomenon, "neuronal plasticity": namely, that synapses, nerve cells, even whole brain areas can change through human experience.

Now these theses are being debated anew in the US, because Carr has expanded them into a book: "The Shallows: What the Internet is Doing to Our Brains." In it he quotes, among others, the psychiatrist Gary Small, whose research suggests that using the new media "gradually strengthens new neural pathways in our brain and weakens old ones." The Internet, in other words, is effectively rewiring the brain.

So what? In cognitive neuroscience, such talk "just makes people roll their eyes," says the Canadian psychologist Steven Pinker, who teaches at Harvard. The brain does in fact rewire itself with every new experience or skill; "the information, after all, is not stored in the pancreas," he wrote in the New York Times ("Mind Over Mass Media"; a German version of the article appeared on Monday in the Süddeutsche Zeitung).

But experience, he argued, does not reorganize the brain's basic capacities for information processing: "Speed-reading programs have long claimed to do just that, but the verdict on them was delivered by Woody Allen after he had read 'War and Peace' in one sitting: 'It was about Russia.'" Genuine multitasking, too, has long been exposed as a myth, "not only by laboratory studies but by the familiar sight of an SUV weaving between lanes as its driver conducts business on his cellphone."

This year, however, it was not a scientist who did the most to drive the debate over the Internet's cognitive effects, but a literary agent. John Brockman, who represents authors such as Richard Dawkins and Jared Diamond, asked: "Is the Internet changing the way you think?" The more than 100 answers from well-known scientists, artists, and thinkers at www.edge.org show one thing above all: there is no single answer.

SALON [7.6.10]

According to media columnist Michael Wolff, the name Clay Shirky is "now uttered in technology circles with the kind of reverence with which left-wingers used to say, 'Herbert Marcuse'." Wolff is right. Shirky has emerged as a luminary of the new digital intelligentsia, a daringly eclectic thinker as comfortable discussing 15th-century publishing technology as he is making political sense of 21st-century social media.

In his 2008 book, "Here Comes Everybody," Shirky imagined a world without traditional economic or political organizations. Two years later, Shirky has a new book, "Cognitive Surplus," which imagines something even more daring -- a world without television. To celebrate the appearance of the revered futurist's latest volume, we're delighted to share a February discussion between Shirky, Barnes & Noble Review editor in chief James Mustich, and BNR contributor Andrew Keen. What follows is an edited transcript of their conversation about the future of the book, of the reader and the writer, and, most intriguingly, the future of intimacy.

TED'S EXCELLENT ADVENTURE
GENTLEMAN'S QUARTERLY [6.30.10]

Or, how the annual networking session of America's nerd elite became the world's most important and influential talking shop. MICHAEL WOLFF reports on the technology, entertainment and design conference that's the global power summit for the new super-wealthy, tech-savvy, hyper-connected intelligentsia.

...But TED, which launched in 1984 and became an annual event in 1990, was always a little different. It was a pageant of nerdiness, in a sense combining the key forms of nerd social life: summer camp, talent show and adult education class. Physicists competed with juggling acts. Magicians with New Yorker writers. Quincy Jones followed Richard Dawkins (who gave one of his first talks about atheism at TED). Cellist Yo-Yo Ma shared a stage with superstring theorist Brian Greene.

Most elementally, it attracted the world's biggest nerds. Bill Gates, Steve Jobs, the Yahoo! boys, the Google boys and everybody else who ever made a billion dollars. They, in turn, attracted Hollywood royalty, who in turn attracted the media moguls. TED is where I first went drinking with Rupert Murdoch and first flirted with American television personality Martha Stewart.

If there was a theme at TED, then it was "insider-ism". Everybody present was somebody. And everybody knew everybody. (For several dotcom years, TED was the main driver of my social life.) The tech business was the Mafia and TED was the biggest Mafia wedding of the year.

A key feature of TED, and a sought-after invitation, is the Billionaires' Dinner, hosted on the second night by the literary agent John Brockman — row upon row of the world's most successful (and richest) human beings (Murdoch, in my first conversation with him at TED, was grouchy about some of the people who were implying they were billionaires who, according to him, were most definitely not!). ...


THE SCIENTIST [6.30.10]

 
“From the point of view of aesthetic and intellectual elegance, it is a bad experiment. But it is nevertheless a big discovery... It proves that sequencing and synthesizing DNA give us all the tools we need to create new forms of life.”

—Theoretical physicist Freeman Dyson on the Venter synthetic biology paper in Science, quoted in Edge.org.

“The price we will pay for this huge amplification of our technological prowess is probably an equal and opposite vulnerability. Welcome to the fast lane, humanity.”

—Daniel C. Dennett, Tufts University philosopher on the Venter synthetic biology paper in Science, quoted in Edge.org.

“Empathy is a complicated emotion, even for mice. On seeing another in pain, a mouse will act as if it itself is also hurting—much more, though, if it knows the first mouse.”

—In “The Tears of Strangers Are Only Water,” a Big Think blog post by David Berreby about research probing the physiology of empathy.

“I understand the value of science but there is a cash constraint on what we can afford.”

—David Willetts in his first press briefing as the UK’s new Conservative minister for universities and science.

“No hope now remains for this species. It is another example of how human actions can have unforeseen consequences.”

—Birdlife International’s Leon Bennun on the recent extinction of Madagascar’s Alaotra grebe.

“I believe that man will destroy everything living on this planet, and I would like to preserve everything in the form of DNA. To have a DNA treasure house—I like the term ‘treasure house’ rather than ‘museum.’”

DEUTSCHLANDRADIO KULTUR [6.30.10]

Max Brockman (ed.): "Die Zukunftsmacher. Die Nobelpreisträger von morgen verraten, worüber sie forschen," S. Fischer Verlag, Frankfurt am Main 2010, 270 pages

Eighteen younger scientists show which issues society will have to grapple with in the future. At the center stands the question of the nature of the human being.

"What's next?": Früher hätte man den Seufzer Zukunftsforschern überlassen - in diesem neuen Buch widmen sich 18 jüngere Wissenschaftler dieser Frage. Sie definieren damit, so der Herausgeber Max Brockman, "mit welchen Themen sich die Gesellschaft in Zukunft auseinandersetzen muss". 

Quite a few of them, with their basic research, are also aiming at a question long unasked and, until recently, rather dusty-seeming: the question of the nature of the human being. They want to help "redefine who and what we are."

Seemingly harmless, academically dry research questions often turn out to be explosive. Take, for example, the question of the temporal processing of the various components of an everyday experience. Acoustic, visual, tactile, and other stimuli are each processed by different brain regions, and these do not operate in synchrony.

How, then, does our brain coordinate the different components so that the stimuli are perceived as one event, interpreted, and judged for relevance; so that they are matched against other contents of memory and stored as patterns for future action?

Could it be that certain disorders, dyslexia for instance, the impaired ability to read, stem not from defects in the language faculty but from disturbed temporal processing? Acoustic and visual representations may not be correctly coordinated in time, the neuroscientist David Eagleman suspects.

Or another example: differences between languages demonstrably shape the structures of our thought, the linguist Lera Boroditsky stresses. Language is not merely an expression of content; it has the power to define. Analogously, cultural values and concepts each steer different patterns of evolution, the Oxford philosopher Nick Bostrom shows.

And anthropologists have long since demonstrated the converse: that different biological, for instance genetic, patterns in turn give rise to different cultural and social value preferences.

That Buddhism and Confucianism took root in the East and Christianity in the West is no accident, claims the neuropsychologist Matthew Lieberman, but a kind of bio-cognitive consequence: evolutionarily grown, genetically conditioned, hormonally regulated by the neurotransmitter serotonin.

Surprisingly many of the researchers, in view of the growing possibilities for intervening in nature, call for the deliberate steering of evolution. Experiments on animals show that humans can bring about genetic changes within a few generations merely by altering a living environment, without intervening directly in the genetic material at all, reports the biologist Brian Hare. Desirable human types have long been bred in any case: education is nothing other than an attempt at just such evolutionary steering.

Related: the Nobel laureates' meeting in Lindau

Reviewed by Eike Gebhardt

Max Brockman (ed.): Die Zukunftsmacher. Die Nobelpreisträger von morgen verraten, worüber sie forschen
Translated from the English by Sebastian Vogel
S. Fischer Verlag, Frankfurt am Main 2010
270 pages, 19.95 euros

THE TECHNIUM [6.30.10]

I was digging through some files the other day and found this document from 1997. It gathers a set of quotes from issues of Wired magazine in its first five years. I don't recall why I created this (or even whether I compiled all of them myself), but I suspect it was for our fifth anniversary issue. I don't think we ever ran any of it. Reading it now, it is clear that all predictions of the future are really just predictions of the present.

Here it is in full:


We as a culture are deeply, hopelessly, insanely in love with gadgetry. And you can't fight love and win.
Jaron Lanier, Wired 1.02, May/June 1993, p. 80

No class in history has ever risen as fast as the blue-collar worker and no class has ever fallen as fast.
Peter Drucker, Wired 1.03, Jul/Aug 1993, p. 80

In the world of immersion, authorship is no longer the transmission of experience, but rather the construction of utterly personal experiences.
Brenda Laurel, Wired 1.06, Dec 1993, p. 107

I expect that within the next five years more than one in ten people will wear head-mounted computer displays while traveling in buses, trains, and planes.
Nicholas Negroponte, Wired 1.06, Dec 1993, p. 136

Pretty soon you'll have no more idea of what computer you're using than you have an idea of where your electricity is generated.
Danny Hillis, Wired 2.01, Jan 1994, p. 103

If we're ever going to make a thinking machine, we're going to have to face the problem of being able to build things that are more complex than we can understand.
Danny Hillis, Wired 2.01, Jan 1994, p. 104

Computers are the metaphor of our time.
Jim Metzner, Wired 2.02, Feb 1994, p. 66

Yesterday, we changed the channel; today we hit the remote; tomorrow, we'll reprogram our agents/filters. Advertising will not go away; it will be rejuvenated.
Michael Schrage, Wired 2.02, Feb 1994, p. 73

The scarce resource will not be stuff, but point of view.
Paul Saffo, Wired 2.03, Mar 1994, p. 73

The idea of Apple making a $200 anything was ridiculous to me. Apple couldn't make a $200 blank disk. 
Bill Atkinson, Wired 2.04, Apr 1994, p. 104

Roadkill on the information highway will be the billions who will forget there are offramps to destinations other than Hollywood, Las Vegas, the local bingo parlor, or shiny beads from a shopping network.
Alan Kay, Wired 2.05, May 1994, p. 77

The future is bullshit.
Jay Chiat, Wired 2.07, Jul 1994, p. 84

Money is just a type of information, a pattern that, once digitized, becomes subject to persistent programmatic hacking by the mathematically skilled. 
Kevin Kelly, Wired 2.07, Jul 1994, p. 93

In a world where information plus technology equals power, those who control the editing rooms run the show.
Hugh Gallagher, Wired 2.08, Aug 1994, p. 86

Some functions require domesticated robots -- wild robots that have been bribed, tricked, or evolved into household roles. But the wild robot has to come first.
Mark Tilden, Wired 2.09, Sep 1994, p. 107

Immortality is mathematical, not mystical.
Mike Perry, Wired 2.10, Oct 1994, p. 105

As the world becomes more universal, it also becomes more tribal. Holding on to what distinguishes you from others becomes very important.
John Naisbitt, Wired 2.10, Oct 1994, p. 115

Marc Andreessen will tell you with a straight face that he expects Mosaic Communications's Mosaic to become the world's standard interface to electronic information.
Gary Wolf, Wired 2.10, Oct 1994, p. 116

Life is not going to be easy in the 21st century for people who insist on black-and-white descriptions of reality.
Joel Garreau, Wired 2.11, Nov 1994, p. 158

Take Bugs Bunny and Elmer Fudd. In mere seconds, you get an entire war -- the strategy, the attack, the retreat, the recapitulation. The whole military-industrial complex is reduced to a bunny and a stuttering guy zipping across the landscape.
Brian Boigon, Wired 2.12, Dec 1994, p. 94

The very distinction between original and copy becomes meaningless in a digital world -- there the work exists only as a copy. 
Daniel Pierehbech, Wired 2.12, Dec 1994, p. 158

It's hard to predict this stuff. Say you'd been around in 1980, trying to predict the PC revolution. You never would've come and seen me.
Bill Gates, Wired 2.12, Dec 1994, p. 166

For a long time now, America has seemed like a country where most people watch television most of the time. But only recently are we beginning to notice that it is also a country where television watches us.
Phil Patton, Wired 3.01, Jan 1995, p. 126

What gives humans access to the symbolic domain of value and meaning is the fact that we die.
Regis Debray, Wired 3.01, Jan 1995, p. 162

The scary thing isn't that computers will match our intelligence by 2008; the scary thing is that this exponential curve keeps on going, and going, and going.
Greg Blonder, Wired 3.03, Mar 1995, p. 107

The future won't be 500 channels -- it will be one channel, your channel.
Scott Sassa, Wired 3.03, Mar 1995, p. 113

In the future, you won't buy artists' works; you'll buy software that makes original pieces of "their" works, or that recreates their way of looking at things.
Brian Eno, Wired 3.05, May 1995, p. 150

It's important to regard technology in the long sweep of history as being one with history. 
Vernor Vinge, Wired 3.06, Jun 1995, p. 161

Sufficiently radical optimism -- optimism that more and more seems to be technically feasible -- raises the most fundamental questions about consciousness, identity, and desire.
Vernor Vinge, Wired 3.06, Jun 1995, p. 161

I believe human nature is vastly more conservative than human technologies.
Newt Gingrich, Wired 3.08, Aug 1995, p. 109

We're using tools with unprecedented power, and in the process, we're becoming those tools. 
John Brockman, Wired 3.08, Aug 1995, p. 119

If the Boeing 747 obeyed Moore's Law, it would travel a million miles an hour, it would be shrunken down in size, and a trip to New York would cost about five dollars.
Nathan Myhrvold, Wired 3.09, Sep 1995, p. 154

Isn't it odd how parents grieve if their child spends six hours a day on the Net but delight if those same hours are spent reading books?
Nicholas Negroponte, Wired 3.09, Sep 1995, p. 206

The human spirit is infinitely more complex than anything that we're going to be able to create in the short run. And if we somehow did create it in the short run, it would mean that we aren't so complex after all, and that we've all been tricking ourselves.
Douglas Hofstadter, Wired 3.11, Nov 1995, p. 114

What the Net is, more than anything else at this point, is a platform for entrepreneurial activities -- a free-market economy in its truest sense.
Marc Andreessen, Wired 3.12, Dec 1995, p. 236

3-D isn't an interface paradigm. 3-D isn't a world model. 3-D isn't the missing ingredient. 3-D is an attribute, like the color blue.
F. Randall Farmer, Wired 4.01, Jan 1996, p. 117

Without a deep understanding of the many selves that we express in the virtual, we cannot use our experiences there to enrich the real.
Sherry Turkle, Wired 4.01, Jan 1996, p. 199

The annoyance caused by spammers grows as the square of the size of the Net.
Ray Jones, Wired 4.02, Feb 1996, p. 96

We're born, we live for a brief instant, and we die. It's been happening for a long time. Technology is not changing it much -- if at all.
Steve Jobs, Wired 4.02, Feb 1996, p. 106-107

Just as there is religious fundamentalism, there is a technical fundamentalism. 
Paul Virilio, Wired 4.05, May 1996, p. 121

When I want to do something mindless to relax, I reinstall Windows 95.
Jean-Louis Gassée, Wired 4.05, May 1996, p. 190

It is doubtful that the [computer industry] as a whole has yet broken even.
Peter Drucker, Wired 4.08, Aug 1996, p. 116

The most successful innovators are the creative imitators, the Number Two.
Peter Drucker, Wired 4.08, Aug 1996, p. 118

We have a predisposition in Western culture for "just do it," whereas, I think that part of the future will be built much more around "just be it."
Watts Wacker, Wired 4.09, Sep 1996, p. 168

Revolutions aren't made by gadgets and technology. They're made by a shift in power, which is taking place all over the world.
Walter Wriston, Wired 4.10, Oct 1996, p. 205

Wires warp cyberspace. The two points at opposite ends of a wire are, for informational purposes, the same point, even if they are on opposite sides of the planet.
Neal Stephenson, Wired 4.12, Dec 1996, p. 98

The Web is alive. Not as a sentient being or mega-meta-super-collective consciousness, but more like a gigantic, sprouting slime mold.
Steven Alan Edwards, Wired 5.04, Apr 1997

Of all the prospects raised by the evolution of digital culture, the most tantalizing is the possibility that technology could fuse with politics to create a more civil society. 
Jon Katz, Wired 5.04, Apr 1997

Technology is not the nameless Other. To embrace technology is to embrace, and face, ourselves.
David Cronenberg, Wired 5.05, May 1997, p. 185

Community precedes commerce.
John Hagel, Wired 5.08, Aug 1997, p. 84

Modern technology is a major evolutionary transition. It would be astonishing if that occurred without disrupting existing life.
Gregory Stock, Wired 5.09, Sep 1997, p. 128

Pollution is a measure of inefficiency, and inefficiency is lost profit.
Joe Maceda, Wired 5.10, Oct 1997, p. 138

For email, the old postcard rule applies. Nobody else is supposed to read your postcards, but you'd be a fool if you wrote anything private on one.
Miss Manners, Wired 5.11, Nov 1997

The American government can stop me from going to the US, but they can't stop my virus.
Dark Avenger, Wired 5.11, Nov 1997 (from a side-bar item on p.270 which does not appear in the Wired digital archives, excerpting from an interview by Sarah Gordon)

It is the arrogance of every age to believe that yesterday was calm.
Tom Peters, Wired 5.12, Dec 1997
