News From:
[ED. NOTE: The conversation was in English, Schirrmacher's second language. Rather than edit the piece for grammar, and risk losing the spontaneity of the conversation, I present it here -- for the most part -- verbatim. -- John Brockman]
The question I am asking myself arose through work and through discussion with other people, and especially watching other people, watching them act and behave and talk, was how technology, the Internet and the modern systems, has now apparently changed human behavior, the way humans express themselves, and the way humans think in real life. So I've profited a lot from Edge.
We are apparently now in a situation where modern technology is changing the way people behave, people talk, people react, people think, and people remember. And you encounter this not only in a theoretical way, but when you meet people, when suddenly people start forgetting things, when suddenly people depend on their gadgets, and other stuff, to remember certain things. This is the beginning, its just an experience. But if you think about it and you think about your own behavior, you suddenly realize that something fundamental is going on. There is one comment on Edge which I love, which is in Daniel Dennett's response to the 2007 annual question, in which he said that we have a population explosion of ideas, but not enough brains to cover them.
As we know, information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know -- this is the old Darwinian thought, the moment when Darwin started reading Malthus -- when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations. And so what interests me is that we are, because we have the Internet, now entering a phase where Darwinian structures, where Darwinian dynamics, Darwinian selection, apparently attacks ideas themselves: what to remember, what not to remember, which idea is stronger, which idea is weaker.
Here European thought is quite interesting, our whole history of thought, especially in the eighteenth, nineteenth, and twentieth centuries, starting from Kant to Nietzsche. Hegel for example, in the nineteenth century, where you said which thought, which thinking succeeds and which one doesn't. We have phases in the nineteenth century, where you could have chosen either way. You could have gone the way of Schelling, for example, the German philosopher, which was totally different to that of Hegel. And so this question of what survives, which idea survives, and which idea drowns, which idea starves to death, is something which, in our whole system of thought, is very, very known, and is quite an issue. And now we encounter this structure, this phenomenon, in everyday thinking.
It's the question: what is important, what is not important, what is important to know? Is this information important? Can we still decide what is important? And it starts with this absolutely normal, everyday news. But now you encounter, at least in Europe, a lot of people who think, what in my life is important, what isn't important, what is the information of my life. And some of them say, well, it's in Facebook. And others say, well, it's on my blog. And, apparently, for many people it's very hard to say it's somewhere in my life, in my lived life.
Of course, everybody knows we have a revolution, but we are now really entering the cognitive revolution of it all. In Europe, and in America too -- and it's not by chance -- we have a crisis of all the systems that somehow are linked to either thinking or to knowledge. It's the publishing companies, it's the newspapers, it's the media, it's TV. But it's as well the university, and the whole school system, where it is not a normal crisis of too few teachers, too many pupils, or whatever; too small universities; too big universities.
Now, it's totally different. When you follow the discussions, there's the question of what to teach, what to learn, and how to learn. Even for universities and schools, suddenly they are confronted with the question how can we teach? What is the brain actually taking? Or the problems which we have with attention deficit and all that, which are reflections and, of course, results, in a way, of the technical revolution?
Gerd Gigerenzer, to whom I talked and who I find a fascinating thinker, put it in such a way that thinking itself somehow leaves the brain and uses a platform outside of the human body. And that's the Internet and it's the cloud. And very soon we will have the brain in the cloud. And this raises the question of the importance of thoughts. For centuries, what was important for me was decided in my brain. But now, apparently, it will be decided somewhere else.
The European point of view, with our history of thought, and all our idealistic tendencies, is that now you can see -- because they didn't know that the Internet would be coming, in the fifties or sixties or seventies -- that the whole idea of the Internet somehow was built in the brains, years and decades before it actually was there, in all the different sciences. And when you see how the computer -- Gigerenzer wrote a great essay about that -- how the computer at first was somehow isolated, it was in the military, in big laboratories, and so on. And then the moment the computer, in the seventies and then of course in the eighties, was spread around, and every doctor, every household had a computer, suddenly the metaphors that were built in the fifties, sixties, seventies, then had their triumph. And so people had to use the computer. As they say, the computer is the last metaphor for the human brain; we don't need any more. It succeeded because the tool shaped the thought when it was there, but all the thinking, like in brain sciences and all the others, had already happened, in the sixties, seventies, fifties even.
But the interesting question is, of course, the Internet -- I don't know if they really expected the Internet to evolve the way it did -- I read books from the nineties, where they still don't really know that it would be as huge as it is. And, of course, nobody predicted Google at that time. And nobody predicted the Web.
Now, what I find interesting is that if you see the computer and the Web, and all this, under the heading of "the new technologies," we have, in the late nineteenth century, this big discussion about the human motor. The new machines in the late nineteenth century required that the muscles of the human being should be adapted to the new machines. Especially in Austria and Germany, we have this new thinking, where people said, first of all, we have to change muscles. The term "calories" was invented in the late nineteenth century, in order to optimize the human work force.
Now, in the twenty-first century, you have all the same issues, but now with the brain, what was the adaptation of muscles to the machines, now under the heading of multitasking -- which is quite a problematic issue. The human muscle in the head, the brain, has to adapt. And, as we know from just very recent studies, it's very hard for the brain to adapt to multitasking, which is only one issue. And again with calories and all that. I think it's very interesting, the concept -- again, Daniel Dennett and others said it -- the concept of the informavores, the human being as somebody eating information. So you can, in a way, see that the Internet and that the information overload we are faced with at this very moment has a lot to do with food chains, has a lot to do with food you take or not to take, with food which has many calories and doesn't do you any good, and with food that is very healthy and is good for you.
The tool is not only a tool, it shapes the human who uses it. We always have the concept, first you have the theory, then you build the tool, and then you use the tool. But the tool itself is powerful enough to change the human being. God as the clockmaker, I think you said. Then in the Darwinian times, God was an engineer. And now He, of course, is the computer scientist and a programmer. What is interesting, of course, is that the moment neuroscientists and others used the computer, the tool of the computer, to analyze human thinking, something new started.
The idea that thinking itself can be conceived in technical terms is quite new. Even in the thirties, of course, you had all these metaphors for the human body, even for the brain; but, for thinking itself, this was very, very late. Even in the sixties, it was very hard to say that thinking is like a computer.
You had once in Edge, years ago, a very interesting talk with Patty Maes on "Intelligence Augmentation" when she was one of the first who invented these intelligent agents. And there, you and Jaron Lanier, and others, asked the question about the concept of free will. And she explained it and it wasn't that big an issue, of course, because it was just intelligent agents like the ones we know from Amazon and others. But now, entering real-time Internet and all the other possibilities in the near future, the question of predictive search and others, of determinism, becomes much more interesting. The question of free will, which always was a kind of theoretical question -- even very advanced people said, well, we declare there is no such thing as free will, but we admit that people, during their childhood, will have been culturally programmed so they believe in free will.
But now, when you have a generation -- in the next evolutionary stages, the child of today -- which are adapted to systems such as the iTunes "Genius," which not only know which book or which music file they like, and which goes farther and farther in predictive certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it.
What will this mean for the question of free will? Because, in the bottom line, there are, of course, algorithms, who analyze or who calculate certain predictabilities. And I'm wondering if the comfort of free will or not free will would be a very, very tough issue of the future. At this very moment, we have a new government in Germany; they are just discussing the what kind of effect this will have on politics. And one of the issues, which of course at this very moment seems to be very isolated, is the question how to predict certain terroristic activities, which they could use, from blogs -- as you know, in America, you have the same thing. But this can go farther and farther.
The question of prediction will be the issue of the future and such questions will have impact on the concept of free will. We are now confronted with theories by psychologist John Bargh and others who claim there is no such thing as free will. This kind of claim is a very big issue here in Germany and it will be a much more important issue in the future than we think today. The way we predict our own life, the way we are predicted by others, through the cloud, through the way we are linked to the Internet, will be matters that impact every aspect of our lives. And, of course, this will play out in the work force -- the new German government seems to be very keen on this issue, to at least prevent the worst impact on people, on workplaces.
It's very important to stress that we are not talking about cultural pessimism. What we are talking about is that a new technology which is in fact a technology which is a brain technology, to put it this way, which is a technology which has to do with intelligence, which has to do with thinking, that this new technology now clashes in a very real way with the history of thought in the European way of thinking.
Unlike America, as you might know, in Germany we had a party for the first time in the last elections which totally comes out of the Internet. They are called The Pirates. In their beginning they were computer scientists concerned with questions of copyright and all that. But it's now much, much more. In the recent election, out of the blue, they received two percent of the votes, which is a lot for a new party which only exists on the Internet. And the voters were mainly 30, 40, 50 percent young males. Many, many young males. They're all very keen on new technologies. Of course, they are computer kids and all that. But this party, now, for the first time, reflects the way which we know, theoretically, in a very pragmatic and political way. For example, one of the main issues, as I just described, the question of the adaptation of muscles to modern systems, either in the brain or in the body, is a question of the digital Taylorism.
As far as we can see, I would say, we have three important concepts of the nineteenth century, which somehow come back in a very personalized way, just like you have a personalized newspaper. This is Darwinism, the whole question. And, in a very real sense, look at the problem with Google and the newspapers. Darwinism, but as well the whole question of who survives in the net, in the thinking; who gets more traffic; who gets less traffic, and so. And then you have the concept of communism, which comes back to the question of free, the question that people work for free. And not only those people who sit at home and write blogs, but also many people in publishing companies, newspapers, do a lot of things for free or offer them for free. And then, third, of course, Taylorism, which is a non-issue, but we now have the digital Taylorism, but with an interesting switch. At least in the nineteenth century and the early twentieth century, you could still make others responsible for your own deficits in that you could say, well, this is just really terrible, it's exhausting, and it's not human, and so on.
Now, look at the concept, for example, of multitasking, which is a real problem for the brain. You don't think that others are responsible for it, but you meet many people who say, well, I am not really good at it, and it's my problem, and I forget, and I am just overloaded by information. What I find interesting that three huge political concepts of the nineteenth century come back in a totally personalized way, and that we now, for the first time, have a political party -- a small political party, but it will in fact influence the other parties -- who address this issue, again, in this personalized way.
It's a kind of catharsis, this Twittering, and so on. But now, of course, this kind of information conflicts with many other kinds of information. And, in a way, one could argue -- I know that was the case with Iran -- that maybe the future will be that the Twitter information about an uproar in Iran competes with the Twitter information of Ashton Kutcher, or Paris Hilton, and so on. The question is to understand which is important. What is important, what is not important is something very linear, it's something which needs time, at least the structure of time. Now, you have simultaneity, you have everything happening in real time. And this impacts politics in a way which might be considered for the good, but also for the bad.
Because suddenly it's gone again. And the next piece of information, and the next piece of information -- and if now -- and this is something which, again, has very much to do with the concept of the European self, to take oneself seriously, and so on -- now, as Google puts it, they say, if I understand it rightly, in all these webcams and cell phones -- are full of information. There are photos, there are videos, whatever. And they all should be, if people want it, shared. And all the thoughts expressed in any university, at this very moment, there could be thoughts we really should know. I mean, in the nineteenth century, it was not possible. But maybe there is one student who is much better than any of the thinkers we know. So we will have an overload of all these information, and we will be dependent on systems that calculate, that make the selection of this information.
And, as far as I can see, political information somehow isn't distinct from it. It's the same issue. It's a question of whether I have information from my family on the iPhone, or whether I have information about our new government. And so this incredible amount of information somehow becomes equal, and very, very personalized. And you have personalized newspapers. This will be a huge problem for politicians. From what I hear, they are now very interested in, for example, Google's page rank; in the question how, with mathematical systems, you can, for example, create information cascades as a kind of artificial information overload. And, as you know, you can do this. And we are just not prepared for that. It's not too early. In the last elections we, for the first time, had blogs, where you could see they started to create information cascades, not only with human beings, but as well with BOTs and other stuff. And this is, as I say, only the beginning.
Germany still has a very strong anti-technology movement, which is quite interesting insofar as you can't really say it's left-wing or right-wing. As you know, very right-wing people, in German history especially, were very anti-technology. But it changed a lot. And why it took so long, I would say, has demographic reasons. As we are in an aging society, and the generation which is now 40 or 50, in Germany, had their children very late. The whole evolutionary change, through the new generation -- first, they are fewer, and then they came later. It's not like in the sixties, seventies, with Warhol. And the fifties. These were young societies. It happened very fast. We took over all these interesting influences from America, very, very fast, because we were a young society. Now, somehow it really took a longer time, but now that is for sure we are entering, for demographic reasons, the situation where a new generation which is -- as you see with The Pirates as a party -- they're a new generation, which grew up with modern systems, with modern technology. They are now taking the stage and changing society.
One must say, all the big companies are American companies, except SAP. But Google and all these others, they are American companies. I would say we weren't very good at inventing. We are not very good at getting people to study computer science and other things. And I must say -- and this is not meant as flattery of America, or Edge, or you, or whosoever -- what I really miss is that we don't have this type of computationally-minded intellectual -- though it started in Germany once, decades ago -- such as Danny Hillis and other people who participate in a kind of intellectual discussion, even if only a happy few read and react to it. Not many German thinkers have adopted this kind of computational perspective.
The ones who do exist have their own platform and actually created a new party. This is something we are missing, because there has always been a kind of an attitude of arrogance towards technology. For example, I am responsible for the entire cultural sections and science sections of FAZ. And we published reviews about all these wonderful books on science and technology, and that's fascinating and that's good. But, in a way, the really important texts, which somehow write our life today and which are, in a way, the stories of our life -- are, of course, the software -- and these texts weren't reviewed. We should have found ways of transcribing what happens on the software level much earlier -- like Patty Maes or others, just to write it, to rewrite it in a way that people understand what it actually means. I think this is a big lack.
What did Shakespeare, and Kafka, and all these great writers -- what actually did they do? They translated society into literature. And of course, at that stage, society was something very real, something which you could see. And they translated modernization into literature. And now we have to find people who translate what happens on the level of software. At least for newspapers, we should have sections reviewing software in a different way; at least the structures of software.
We are just beginning to look at this in Germany. And we are looking for people -- it's not very many people -- who have the ability to translate that. It needs to be done because that's what makes us who we are. You will never really understand in detail how Google works because you don't have access to the code. They don't give you the information. But just think of George Dyson's essay, which I love, "Turing's Cathedral." This is a very good beginning. He absolutely has the point. It is today's version of the kind of cathedral we would be entering if we lived in the eleventh century. It's incredible that people are building this cathedral of the digital age. And as he points out, when he visited Google, he saw all the books they were scanning, and noted that they said they are not scanning these books for humans to read, but for the artificial intelligence to read.
Who are the big thinkers here? In Germany, for me at least, for my work, there are a couple of key figures. One of them is Gerd Gigerenzer, who is somebody who is absolutely -- I would say he is actually avant-garde, at this very moment, because what he does is he teaches heuristics. And from what we see, we have an amputation of heuristics, through the technologies, as well. People forget certain heuristics. It starts with a calculation, because you have the calculator, but it goes much further. And you will lose many more rules of thumb in the future because the systems are doing that, Google and all the others. So Gigerenzer, in his thinking -- and he has a big Institute now -- on risk assessment, as well, is very, very important. You could link him, in a way, actually to Nassim Taleb, because again here you have the whole question of not risk assessment, the question of looking back, looking into the future, and all that.
Very important in literature, still, though he is 70 years old, 80 years old, is of course Hans Magnus Enzensberger. Peter Sloterdijk is a very important philosopher; a kind of literary figure, but he is important. But then you have, not unlike in the nineteenth or twentieth century, there are many leading figures. But I must say, as well as Gigerenzer, he writes all his books in English, we have quite interesting people, at this very moment, in law, which is very important for discussions of copyright and all that. But regarding the conversations of new technologies and human thought, they, at this very moment, don't really take place in Germany.
There are European thinkers who have cult followings -- Slajov Zizek, for example. Ask any intellectual in Germany, and they will tell you Zizek is just the greatest. He's a kind of communist, but he considers himself Stalinistic, even. But this is, of course, all labels. Wild thinkers. Europeans, at this very moment, love wild thinkers.
Reality Club discussion on EDGE: Daniel Kahneman, George Dyson, Jaron Lanier, Nick Bilton, Nick Carr, Douglas Rushkoff, Jesse Dylan, Virginia Heffernan, Gerd Gigerenzer, John Perry Barlow, Steven Pinker, John Bargh, George Dyson, Annalena McAfee, John Brockman