On the Road
Event Date: [ 1.27.09 ]
Location:
Germany

REFLECTIONS ON A CRISIS

Daniel Kahneman & Nassim Nicholas Taleb: A Conversation in Munich
(Moderator: John Brockman)

Nassim Taleb and Daniel Kahneman: Reflection on a Crisis from DLD on FORA.tv

View the complete 1-hour HD streaming video of the Edge event that took place at Hubert Burda Media's Digital Life Design Conference (DLD) in Munich on January 27th as the greatest living psychologist and the foremost scholar of extreme events discuss hindsight biases, the illusion of patterns, perception of risk, and denial.

DANIEL KAHNEMAN is Eugene Higgins Professor of Psychology, Princeton University, and Professor of Public Affairs, Woodrow Wilson School of Public and International Affairs. He is winner of the 2002 Nobel Prize in Economic Sciences for his pioneering work integrating insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.

NASSIM NICHOLAS TALEB, essayist and former mathematical trader, is Distinguished Professor of Risk Engineering at New York University’s Polytechnic Institute. He is the author of Fooled by Randomness and the international bestseller The Black Swan.

Daniel Kahneman's Edge Bio Page
Nassim Taleb's Edge Bio Page



FOCUS ONLINE
January 28, 2009

ARE BANKERS CHARLATANS?

Sind Banker Scharlatane?

Human nature is to blame for the financial crisis, say two renowned scientists: Nobel Prize winner Daniel Kahneman and bestselling author Nassim Taleb ("The Black Swan").

By Ansgar Siemens, FOCUS online editor

Two men sit on the stage. On the left: Daniel Kahneman, 74, bright-eyed, Nobel Prize winner. On the right: Nassim Taleb, 49, former Wall Street trader, best-selling author. Both speak at the Digital Life Design Conference (DLD) in Munich about the financial crisis and how it began--but mainly they talk about people. They say the crisis broke out because of human nature. And they choose harsh words in discussing the scale of the disaster.

Kahneman explains why there are bubbles in the financial markets even though everyone knows that they eventually burst. The researcher uses a comparison with the weather: if there is little rain for three years, people begin to believe that this is the normal situation. If stocks do nothing but rise for years, people cannot imagine the trend breaking.

"Those responsible must go--today and not tomorrow"

Taleb speaks out sharply against the bankers--the people in control of taxpayers' money who are spending billions of it. "I want those responsible for the crisis gone today, today and not tomorrow," he says, leaning forward vigorously. The risk models of the banks are a plague, he says; the bankers are charlatans.

It is nonsense, he argues, to think that we can assess risks and thus protect ourselves against a crash. Taleb became famous for his theory of the black swan, described in his eponymous bestseller. Black swans are events that cannot be foreseen--not even with the best model. "People will never be able to control chance," he says.

The early warning

"Taleb had an early warning before the crisis. In 2003 he took note of the balance sheet of the U.S. mortgage finance giant Fannie Mae, and he saw "dynamite".

In the autumn of last year, the U.S. government mounted a dramatic bailout. Taleb told the Sunday Times in 2008: "Bankers are very dangerous." And even now he sees a scandal. He asks, provocatively, what the banks have done with the government bailout money: "They have paid out more bonuses, and they have increased their risks." And it was not their own money.

Taleb calls for radical changes: nationalize the banks--and abolish financial models. Kahneman does not quite agree. Certainly, the models are not capable of predicting a collapse. But one should not ignore human nature: people will always demand and use models and derive benefit from them--even if the models are wrong.

Edge Dinners
Event Date: [ 7.28.08 ]
Location:
United States

Master Classes
Event Date: [ 7.25.08 ]
Location:
United States

What we're saying is that there is a technology emerging from behavioral economics. It's not only an abstract thing. You can do things with it. We are just at the beginning. I thought that the input of psychology into behavioral economics was done. But hearing Sendhil was very encouraging because there was a lot of new psychology there. That conversation is continuing and it looks to me as if that conversation is going to go forward. It's pretty intuitive, based on research, good theory, and important. — Daniel Kahneman

Richard Thaler, Sendhil Mullainathan, Daniel Kahneman

Edge Master Class 2008 
Richard Thaler, Sendhil Mullainathan, Daniel Kahneman

Sonoma, CA, July 25-27, 2008


A year ago, Edge convened its first "Master Class" in Napa, California, in which psychologist and Nobel Laureate Daniel Kahneman taught a 9-hour course: "A Short Course On Thinking About Thinking". The attendees were a "who's who" of the new global business culture. 

This year, to continue the conversation, we invited Richard Thaler, the father of behavioral economics, to organize and lead the class: "A Short Course In Behavioral Economics".

Thaler arrived at Stanford in the 1970s to work with Kahneman and his late partner, Amos Tversky. Thaler, in turn, asked Harvard economist and former student Sendhil Mullainathan, as well as Kahneman, to teach the class with him.

The entire text is available online, along with video highlights of the talks and a photo gallery. The text also appears in a book privately published by Edge Foundation, Inc.

Nathan Myhrvold, Jeff Bezos, Elon Musk

Whereas last year, the focus was on psychology, this year the emphasis shifted to behavioral economics. As Kahneman noted:

...There's new technology emerging from behavioral economics and we are just starting to make use of that. I thought the input of psychology into economics was finished but clearly it's not!

The Master Class is the most recent iteration of Edge's development, which began its activities under the name "The Reality Club" in 1981. Edge is different from The Algonquin, The Apostles, The Bloomsbury Group, or The Club, but it offers the same quality of intellectual adventure. The closest resemblances are to The Invisible College and the Lunar Society of Birmingham.

The seventeenth-century Invisible College was a precursor to the Royal Society. Its members included scientists such as Robert Boyle, John Wallis, and Robert Hooke. Its common theme was the acquisition of knowledge through experimental investigation. Another example is the nineteenth-century Lunar Society of Birmingham, an informal club of the leading cultural figures of the new industrial age—James Watt, Erasmus Darwin, Josiah Wedgwood, Joseph Priestley, and Benjamin Franklin.

In a similar fashion, Edge, through its Master Classes, gathers together intellectuals and technology pioneers. In this regard, George Dyson, in his summary (below) of the second day of the proceedings, writes:

Retreating to the luxury of Sonoma to discuss economic theory in mid-2008 conveys images of Fiddling while Rome Burns. Do the architects of Microsoft, Amazon, Google, PayPal, and Facebook have anything to teach the behavioral economists—and anything to learn? So what? What's new? As it turns out, all kinds of things are new. Entirely new economic structures and pathways have come into existence in the past few years.

Sean Parker, Cofounder, Facebook; Salar Kamangar, Google; Evan Williams, Cofounder, Twitter

Indeed, as one distinguished European visitor noted, the weekend, which involved the 2-day Master Class in Sonoma followed by a San Francisco dinner, was "a remarkable gathering of outstanding minds. These are the people that are rewriting our global culture".

John Brockman, Editor

 

RICHARD H. THALER is the father of behavioral economics—the study of how thinking and emotions affect individual economic decisions and the behavior of markets. He investigates the implications of relaxing the standard economic assumption that everyone in the economy is rational and selfish, instead entertaining the possibility that some of the agents in the economy are sometimes human. Thaler is Director of the Center for Decision Research at the University of Chicago Graduate School of Business. He is coauthor (with Cass Sunstein) of Nudge: Improving Decisions About Health, Wealth, and Happiness. 

Richard Thaler's Edge Bio Page

SENDHIL MULLAINATHAN, a Professor of Economics at Harvard and a recipient of a MacArthur Foundation "genius grant", conducts research on development economics, behavioral economics, and corporate finance. His work draws on the psychology of people to improve poverty-alleviation programs in developing countries. He is Executive Director of Ideas 42, Institute of Quantitative Social Science, Harvard University.

Sendhil Mullainathan's Edge Bio Page

DANIEL KAHNEMAN, a psychologist at Princeton University, is the recipient of the 2002 Nobel Prize in Economics for his pioneering work integrating insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty. 

Daniel Kahneman's Edge Bio page.


PARTICIPANTS: Jeff Bezos, Founder, Amazon.com; John Brockman, Edge Foundation, Inc.; Max Brockman, Brockman, Inc.; George Dyson, Science Historian; Author, Darwin Among the Machines; W. Daniel Hillis, Computer Scientist; Cofounder, Applied Minds; Author, The Pattern on the Stone; Daniel Kahneman, Psychologist; Nobel Laureate, Princeton University; Salar Kamangar, Google; France LeClerc, Marketing Professor; Katinka Matson, Edge Foundation, Inc.; Sendhil Mullainathan, Professor of Economics, Harvard University; Executive Director, Ideas 42, Institute of Quantitative Social Science; Elon Musk, Physicist; Founder, Tesla Motors, SpaceX; Nathan Myhrvold, Physicist; Founder, Intellectual Ventures, LLC; Event Photographer; Sean Parker, The Founders Fund; Cofounder: Napster, Plaxo, Facebook; Paul Romer, Economist, Stanford; Richard Thaler, Behavioral Economist, Director of the Center for Decision Research, University of Chicago Graduate School of Business; coauthor of Nudge; Anne Treisman, Psychologist, Princeton University; Evan Williams, Founder, Blogger, Twitter.

Further Reading on Edge:
"A Short Course In Thinking About Thinking
Edge Master Class 2007
Daniel Kahneman
Auberge du Soleil, Rutherford, CA, July 20-22, 2007


A SHORT COURSE IN BEHAVIORAL ECONOMICS
CLASS ONE • CLASS TWO • CLASS THREE • CLASS FOUR • CLASS FIVE • CLASS SIX
PHOTO GALLERY
 


LIBERTARIAN PATERNALISM:  WHY IT IS IMPOSSIBLE NOT TO NUDGE 
(Class 1)
A Talk By Richard Thaler

Danny Hillis, Nathan Myhrvold, Daniel Kahneman, Jeff Bezos, Sendhil Mullainathan

If you remember one thing from this session, let it be this one: There is no way of avoiding meddling. People sometimes have the confused idea that we are pro meddling. That is a ridiculous notion. It's impossible not to meddle. Given that we can't avoid meddling, let's meddle in a good way. —Richard Thaler


IMPROVING CHOICES WITH MACHINE READABLE DISCLOSURE
(Class 2)
A Talk By Richard Thaler & Sendhil Mullainathan

 

 

Jeff Bezos, Nathan Myhrvold, Salar Kamangar, Daniel Kahneman, Danny Hillis, Paul Romer, Elon Musk, Sean Parker

At a minimum, what we're saying is that in every market where there is now required written disclosure, you have to give the same information electronically and we think intelligently how best to do that. In a sentence that's the nature of the proposal.—Richard Thaler


THE PSYCHOLOGY OF SCARCITY
(Class 3)
A Talk By Sendhil Mullainathan

Nathan Myhrvold, Richard Thaler, Daniel Kahneman, France LeClerc, Danny Hillis, Paul Romer, George Dyson, Elon Musk, Jeff Bezos, Sean Parker

Let's put aside poverty alleviation for a second, and let's ask, "Is there something intrinsic to poverty that has value and that is worth studying in and of itself?" One of the reasons that is the case is that, purely aside from magic bullets, we need to understand are there unifying principles under conditions of scarcity that can help us understand behavior and to craft intervention. If we feel that conditions of scarcity evoke certain psychology, then that, not to mention pure scientific interest, will affect a vast majority of interventions. It's an important and old question.


TWO BIG THINGS HAPPENING IN PSYCHOLOGY TODAY
(Class 4)
A Talk By Daniel Kahneman

Danny Hillis, Richard Thaler, Nathan Myhrvold, Elon Musk, France LeClerc, Salar Kamangar, Anne Treisman, Sendhil Mullainathan, Jeff Bezos, Sean Parker

There's new technology emerging from behavioral economics and we are just starting to make use of that. I thought the input of psychology into economics was finished but clearly it's not!

THE REALITY CLUB

W. Daniel Hillis, Daniel Kahneman, Nathan Myhrvold, Richard Thaler on "Two Big Things Happening In Psychology Today"


THE IRONY OF POVERTY
(Class 5)
A Talk By Sendhil Mullainathan

Daniel Kahneman, Paul Romer, Richard Thaler, Danny Hillis, Jeff Bezos, Sean Parker, Anne Treisman, France LeClerc, Salar Kamangar, George Dyson

I want to close a loop, which I'm calling "The Irony of Poverty." On the one hand, lack of slack tells us the poor must make higher quality decisions because they don't have slack to help buffer them with things. But even though they have to supply higher quality decisions, they're in a worse position to supply them because they're depleted. That is the ultimate irony of poverty. You're getting cut twice. You are in an environment where the decisions have to be better, but you're in an environment that by its very nature makes it harder for you to apply better decisions.


PUTTING PSYCHOLOGY INTO BEHAVIORAL ECONOMICS
(Class 6)
A Talk By Richard Thaler, Daniel Kahneman, Sendhil Mullainathan

Richard Thaler, Daniel Kahneman, Sendhil Mullainathan, Sean Parker, Anne Treisman, Paul Romer, Danny Hillis, Jeff Bezos, Salar Kamangar, George Dyson, France LeClerc

There's new technology emerging from behavioral economics and we are just starting to make use of that. I thought the input of psychology into economics was finished but clearly it's not!


PHOTO GALLERY
Edge Master Class & San Francisco Dinner

Photo Gallery: A Short Course In Behavioral Economics (Below)

Photo Gallery: The San Francisco 2008 Science Dinner


INTRODUCTION
By Daniel Kahneman

Many people think of economics as the discipline that deals with such things as housing prices, recessions, trade and unemployment. This view of economics is far too narrow. Economists and others who apply the ideas of economics deal with most aspects of life. There are economic approaches to sex and to crime, to political action and to mass entertainment, to law, health care and education, and to the acquisition and use of power. Economists bring to these topics a unique set of intellectual tools, a clear conception of the forces that drive human action, and a rigorous way of working out the social implications of individual choices. Economists are also the gatekeepers who control the flow of facts and ideas from the worlds of social science and technology to the world of policy. The findings of educators, epidemiologists and sociologists as well as the inventions of scientists and engineers are almost always filtered through an economic analysis before they are allowed to influence the decisions of policy makers.

In performing their function as gatekeepers, economists do not only apply the results of scientific investigation. They also bring to bear their beliefs about human nature. In the past, these beliefs could be summarized rather simply: people are self-interested and rational, and markets work. The beliefs of many economists have become much more nuanced in recent decades, and the approach that goes under the label of “behavioral economics” is based on a rather different view of both individuals and institutions. Behavioral economics is fortunate to have a witty guru—Richard Thaler of the University of Chicago Business School. (I stress this detail of his affiliation because the Economics Department of the University of Chicago is the temple of the “rational-agent model” that behavioral economists question.) Expanding on the idea of bounded rationality that the polymath Herbert Simon formulated long ago, Dick Thaler offered four tenets as the foundations of behavioral economics:

Bounded rationality

Bounded selfishness

Bounded self-control

Bounded arbitrage

The first three bounds are reasonably self-evident and obviously based on a plausible view of the psychology of the human agent. The fourth tenet is an observation about the limited ability of the market to exploit human folly and thereby to protect individual fools from their mistakes. The combination of ideas is applicable to the whole range of topics to which standard economic analysis has been applied—and at least some of us believe that the improved realism of the assumption yields better analysis and more useful policy recommendations.

Behavioral economics was influenced by psychology from its inception—or perhaps more accurately, behavioral economists made friends with psychologists, taught them some economics and learned some psychology from them. The little economics I know I learned from Dick Thaler when we worked together 25 years ago. It is somewhat embarrassing for a psychologist to admit that there is an asymmetry between the two disciplines: I cannot imagine a psychologist who could be counted as a good economist without formal training in that discipline, but it seems to be easier for economists to be good psychologists. This is certainly the case for both Dick and Sendhil Mullainathan—they know a great deal of what is going on in modern psychology, but more importantly they have superb psychological intuition and are willing to trust it.

Some of Dick Thaler’s most important ideas of recent years—especially his elaboration of the role of default options and status quo bias—have relied more on his flawless psychological sense than on actual psychological research. I was slightly worried by that development, fearing that behavioral economics might not need much input from psychology anymore. But the recent work of Sendhil Mullainathan has reassured me on this score as well as on many others. Sendhil belongs to a new generation. He was Dick Thaler’s favorite student as an undergraduate at Cornell, and his wonderful research on poverty is a collaboration with a psychologist, Eldar Shafir, who is roughly my son’s age. The psychology on which they draw is different from the ideas that influenced Dick. In the mind of behavioral economists, young and less young, the fusion of ideas from the two disciplines yields a rich and exciting picture of decision making, in which a basic premise—that the immediate context of decision making matters more than you think—is put to work in novel ways.

I happened to be involved in an encounter that had quite a bit to do with the birth of behavioral economics. More than twenty-five years ago, Eric Wanner was about to become the President of the Russell Sage Foundation—a post he has held with grace and distinction ever since. Amos Tversky and I met Eric at a conference on Cognitive Science in Rochester, where he invited us to have a beer and discuss his idea of bringing together psychology and economics. He asked how a foundation could help. We both remember my answer. I told him that this was not a project on which it was possible to spend a lot of money honestly. More importantly, I told him that it was futile to support psychologists who wanted to influence economics. The people who needed support were economists who were willing to be influenced. Indeed, the first grant that the Russell Sage Foundation made in that area allowed Dick Thaler to spend a year with me in Vancouver. This was 1983-1984, which was a very good year for behavioral economics. As the Edge Sonoma session amply demonstrated, we have come a long way since that day in a Rochester bar.

Daniel Kahneman


FIRST DAY SUMMARY—EDGE MASTER CLASS 2008
By Nathan Myhrvold

DR. NATHAN MYHRVOLD is CEO and managing director of Intellectual Ventures, a private entrepreneurial firm. Before Intellectual Ventures, Dr. Myhrvold spent 14 years at Microsoft Corporation. In addition to working directly for Bill Gates, he founded Microsoft Research and served as Chief Technology Officer.

Nathan Myhrvold's Edge Bio Page

____________________________

The recent Edge event on behavioral economics was a great success. Here is a report on the first day.

Over the course of the last few years we've been treated to quite a few expositions of behavioral economics—probably a dozen popular books seek to explain some aspect of the field. This isn't the place for a full summary but the gist is pretty simple. Classical economics has studied a society of creatures that Richard Thaler, an economist at the University of Chicago, dubs the "Econ". Econs are rather superhuman in some ways—they do everything by optimizing utility functions, paragons of rationality. Behavioral economics is about understanding how real live Humans differ from Econs.

In previous reading, and at an Edge event last year, I learned the most prominent differences between Econs and Humans. Humans, as it turns out, are not always even boundedly rational—they can be downright irrational. Thaler likes to say that Humans are like Homer Simpson; Econs are like Mr. Spock. This is a great start, but to have any substance in economics one has to understand this in the context of economic situations. Humans make a number of systematic deviations from the Econ ideal, and behavioral economics has categorized a few of these. So, for example, we humans fear loss more than we love gain. Humans care about how a question is put to them—propositions that an Econ would instantly recognize as mathematically equivalent seem different to Humans, and so they behave differently.

Daniel Kahneman, a Nobel laureate for his work in behavioral economics, told us about priming—how a subtle influence radically shifts how people act. So, in one experiment people are asked to fill out a survey. In the corner of the room is a computer, with a screen saver running. That's it—nothing overt, just a background image in the room. If the screen saver shows pictures of money, the survey answers are radically different. Danny went through example after example like this. The first impulse one has in hearing this is no, this can't be the case. People can't be that easily and subconsciously influenced. You don't want to believe it. But Danny in his professorial way says, "Look, this is science. Belief isn't an option. Repeated randomized trials confirm the results. Get over it."

The second impression is perhaps even more surprising—the influences are quite predictable. Show people images of money, and they tend to be more selfish and less willing to help others. Make people plot points on graph paper that are far apart, and they act more distant in lots of ways. Make them plot points that are close together, and damned if they don't act closer. Again, it seems absurd, but cheap metaphors capture our minds. Humans, it seems, are like drunken poets, who can't glimpse a screen saver in the corner, or plot some points on graph paper, without swooning under the metaphorical load and going off on the tangents these stray images inspire.

This is all very strange, but is it important? The analogy that seems most apt to me is optical illusions. An earlier generation of psychologists got very excited about how the low-level visual processing in our brains is hardwired to produce paradoxical results. The priming stories seem to me to be the symbolic and metaphorical equivalent. The counterpart of the priming metaphors in optical illusions is the context of the image—the extra lines or arrows that fool us into making errors in judgment of sizes or shapes. While one can learn to recognize optical illusions, you still can't help seeing the effect. Knowing the trick does not lessen its intuitive impact. You really cannot help but think one line is longer, even if you know that the trick will be revealed in a moment.

I wonder how closely this analogy carries over. Danny said today that you couldn't avoid priming. If he is right perhaps the analogy is close; but perhaps it's not.

I also can't help but wonder how important these effects are to thinking and decision making in general. After the early excitement about optical illusions, they have retreated from prominence—they explain a few cute things in vision, but they are only important in very artificial cases. Yes, there are a few cases where product design, architecture and other visual design problems are impacted by optical illusions, but very few. In most cases the visual context is not misleading. So, while it offers an interesting clue to how visual processing works, it is a rare special case that has little practical importance.

Perhaps the same thing is true here—the point of these psychological experiments, like the illusions, is to isolate an effect in a very artificial circumstance. This is a great way to get a clue about how the brain works (indeed, it would seem akin to Steven Pinker's latest work, The Stuff of Thought, which argues for the importance of metaphors in the brain). But is it really important to day-to-day, real-world thinking? In particular, can economics be informed by these experiments? Does behavioral economics produce a systematically different result than classical economics if these ideas are factored in?

I can imagine it both ways. If it is important, then we are all at sea, tossed and turned in a tumultuous tide of metaphors imposed by our context. That is a very strange world—totally counter to our intuition. But maybe that is reality.

Or, I could equally imagine that it only matters in cases where you create a very artificial experiment—in effect, turning up the volume on the noise in the thought process. In more realistic contexts the signal trumps the noise.

The truth is likely some linear combination of these two extremes—but what combination? There are some great experiments yet to be done to nail that down.

Dick Thaler gave a fascinating talk that tries to apply these ideas in a very practical way. There is an old debate in economics about the right way to regulate society. Libertarians would say don't try—the harm in reducing choice is worse than the benefit, in part because of unintended consequences, but mostly because the market will reach the right equilibrium. Marxist economists, at the other extreme, took it for granted that one needs a dictatorship of the proletariat—choice is not an option, at least for the populace. Thaler has a new creation—a concept he calls "libertarian paternalism" which tries to split the baby.

The core idea (treated fully in his book Nudge) is pretty simple—present plenty of options, but then encourage certain outcomes by using behavioral economics concepts to stack the deck. A classic example is the difference between opt-in and opt-out in a program such as organ donation. If you tell people that they can opt in to donating their organs in the event of their death, a few will feel strongly enough to do it—most people won't. If you switch that to opt-out, the reverse happens—very few people opt out. Changing the "choice architecture" that people face changes their choices. This is not going to work on people who feel strongly, but the majority doesn't really care and can be pushed in one direction or another by choice architecture.

A better example is a program called "Save More Tomorrow" (SMT) for 401(k) plans in companies. People generally don't save very much. So the Save More Tomorrow program lets you decide up front to save a greater portion of future promotions and raises. You are not cutting into today's income (which you feel entitled to spend); rather, you are pre-allocating a future windfall. It seems pretty simple, but there are dramatic increases in savings rates when it is instituted.
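To make the mechanics concrete, here is a minimal sketch of a Save More Tomorrow-style escalator; the starting contribution rate, raise size, escalation share, and cap are illustrative assumptions, not figures from Thaler's actual program.

    # Minimal sketch of a Save More Tomorrow-style escalator (illustrative numbers only).
    # A fixed share of each future raise is diverted to the 401(k) contribution rate,
    # so take-home pay never falls even as the savings rate climbs.

    def smt_schedule(start_rate, raise_share, annual_raise, years, cap=0.15):
        """Yield (year, salary multiple, contribution rate) under the escalator."""
        rate, salary = start_rate, 1.0
        for year in range(years + 1):
            yield year, salary, rate
            salary *= 1 + annual_raise                          # hypothetical raise each year
            rate = min(cap, rate + raise_share * annual_raise)  # divert part of the raise to savings

    for year, salary, rate in smt_schedule(start_rate=0.03, raise_share=0.5,
                                           annual_raise=0.03, years=6):
        print(f"year {year}: salary x{salary:.2f}, contribution rate {rate:.1%}")

Under these assumed numbers, committing half of each 3% raise pushes the contribution rate from 3% to about 12% in six years without ever reducing a paycheck.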

Dick came to the session loaded for bear, expecting the objections of classical economics. Apparently this is all very controversial among economists and policy wonks. It struck me as very clever but, once explained, very obvious. Of course you can put some spin on the ball and nudge people the right way to achieve a policy effect. It's called marketing when you do this in business, and it certainly can matter. In the world of policy-wonk economists this may be controversial, but it wasn't to me.

An interesting connection with the discussion of priming experiments is that many policy contexts are highly artificial—very much like experiments. Filling out a driver's license form is a kind of questionnaire, and the organ donation scenario seems very remote to most people despite the fact that they're making a binding choice. The mechanics of opt-in versus opt-out or required choice could matter a lot in these contexts.

Dick has a bunch of other interesting ideas. One of them is to require that mandated disclosures on things like cell phone plans or credit card statements be machine readable, with a standard schema. This would allow web sites to offer automated comparisons, and other tools to help people understand the complexities.

This is a fascinating idea that could have a lot of merit. Dick is, from my perspective, a bit overoptimistic in some ways—it is unclear that the effect will be overwhelming. An example is unit prices in grocery stores—those little labels on store shelves that tell you that Progresso canned tomatoes are 57 cents per pound while the store brand is 43. Consumer advocates thought these would revolutionize consumer behavior—and perhaps they did in some limited ways. But premium brands didn't disappear.

I also differ on another point—must this be required by government, and would it be incorruptible were it so mandated? In the world of technology most standards are de facto rather than de jure, and are driven by private owners (companies or private-sector standards bodies), because the creation and maintenance of a standard is a dynamic balancing act, not a static one. I think that many of the disclosure standards he seeks would be better done this way. Conversely, a government-mandated disclosure standard might become so ossified, changing so slowly, that it did not achieve the right result. Nevertheless, this is a small point compared to the main idea, which is that machine-readable disclosures with a standardized schema allow third-party analysis and enable a degree of competition that would be harder to achieve by other means.
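As a purely hypothetical illustration of what a standardized, machine-readable disclosure might enable, here is a sketch of a disclosure record for a cell phone plan and the kind of automated comparison a third-party site could run on it; the field names and all numbers are invented for the example, not part of Thaler's proposal.

    # Hypothetical machine-readable disclosure schema for a cell phone plan (invented fields).
    # The point is only that a standard structure lets third parties compare offers automatically.
    from dataclasses import dataclass

    @dataclass
    class PlanDisclosure:
        provider: str
        monthly_fee: float         # USD per month
        included_minutes: int
        overage_per_minute: float  # USD per extra minute

    def expected_monthly_cost(plan: PlanDisclosure, minutes_used: int) -> float:
        """Estimate a user's bill from the disclosed terms."""
        overage = max(0, minutes_used - plan.included_minutes)
        return plan.monthly_fee + overage * plan.overage_per_minute

    plans = [
        PlanDisclosure("Carrier A", 39.99, 450, 0.45),
        PlanDisclosure("Carrier B", 49.99, 900, 0.40),
    ]
    # A comparison site could rank plans against each user's actual usage history.
    for plan in sorted(plans, key=lambda p: expected_monthly_cost(p, minutes_used=600)):
        print(plan.provider, round(expected_monthly_cost(plan, 600), 2))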

Sendhil Mullainathan gave a fascinating talk about applying behavioral economics to understanding poverty. If this succeeds (it is a work in progress) it would be extremely important.

He showed a bunch of data on itinerant fruit vendors (all women) in India. 69% of them are constantly in debt to moneylenders who charge 5% per day in interest. The fruit ladies make a 10% per day profit, so half their income goes to the moneylender. They also typically buy a couple of cups of tea per day. Sendhil showed that drinking one less cup of tea per day would let them be debt free within about 30 days, doubling their income. 31% of these women have figured that out, so it is not impossible. Why don't the rest get there?

Sendhil then showed a bunch of other data arguing that poor people—even those in the US (who are vastly richer in absolute terms than his Indian fruit vendors)—do similar things with how they spend food stamps, or with their use of payday loans. He was very deliberate in drawing this out, until I finally couldn't stand it and blurted out, "you're saying that they all have a high discount rate". His argument is that under scarcity there is a systematic effect: you set your discount rate far too high for your own good. With too high a discount rate, you spend for the moment, not for the future. So you have a cup of tea rather than double your income.
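A quick sanity check of the tea arithmetic: with the debt compounding at 5% per day, a small fixed daily payment clears it in about a month. The dollar amounts below are illustrative assumptions consistent with the figures quoted above, not Mullainathan's field data.

    # Sanity check of the fruit-vendor arithmetic (illustrative amounts, not field data).
    # Debt accrues 5% interest per day; a small fixed daily amount is diverted to paying it down.

    def days_to_debt_free(debt, daily_rate, daily_payment, max_days=365):
        """Count the days until the debt reaches zero, or return None if it never does."""
        for day in range(1, max_days + 1):
            debt = debt * (1 + daily_rate) - daily_payment  # interest accrues, then payment is made
            if debt <= 0:
                return day
        return None

    # Assume a $25 debt at 5% per day and roughly $1.65 per day freed up (a hypothetical stand-in
    # for the forgone tea plus the interest saved along the way): debt-free in about a month.
    print(days_to_debt_free(debt=25.0, daily_rate=0.05, daily_payment=1.65))  # -> 30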

He is testing this with an amazing experiment. What would these women do if they could escape the "debt trap"? Bono, Jeffrey Sachs, and others have argued this point for poor nations—this is the individual version of the proposition.

Sendhil is studying 1,000 of these fruit vendors. Their total debt is typically $25 each, so he is just stepping in and paying off the debt for 500 of them! The question is then to see how many of them revert to being in debt over time, versus the 500 who are studied but do not have their debt paid off. The experiment is underway and he has no idea what the result will be.

The interesting thing here is that, for these people, one can do a meaningful experiment (N = 500 per group gives good statistics) without much money in absolute terms. It would be hard to do this experiment with debt relief for poor nations, or even for the US poor, but in India you can do serious field experiments for little money.
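To see why N = 500 per arm counts as good statistics, here is a rough precision calculation for comparing reversion-to-debt rates between the two groups; the baseline rate is an assumption for illustration, not a study result.

    # Back-of-the-envelope precision for a two-arm field experiment with 500 vendors per arm.
    import math

    def margin_of_error(p1, p2, n_per_arm, z=1.96):
        """Approximate 95% margin of error on the difference between two proportions."""
        se = math.sqrt(p1 * (1 - p1) / n_per_arm + p2 * (1 - p2) / n_per_arm)
        return z * se

    # If roughly half of each group might revert to debt (an assumed worst case for variance),
    # the experiment can resolve differences of about six percentage points.
    print(round(margin_of_error(0.5, 0.5, 500), 3))  # -> 0.062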

Sendhil also has an amusing argument, which is that very busy people are exactly like these poor fruit vendors. If you have very little time, it is scarce, and you are as time-poor as the fruit ladies are cash-poor. So you act as if you have a high discount rate and commit to future events—like agreeing to travel and give a talk. Then, as the time approaches, you tend to regret it and ask, "Why did I agree to this?" This got everybody laughing. The difference here is that time can't be banked or borrowed, so it is unclear to me how close the analogy is, but it was interesting nonetheless.

Indeed, I almost cancelled my attendance at this event right beforehand, thinking, "Why did I agree to this? I don't have the time!" After much wrestling I decided I could attend the first day, but no more. Well, this is one of those times when having the "wrong" discount rate is in your favor. I'm very glad I attended.

—Nathan Myhrvold


SECOND DAY SUMMARY—EDGE MASTER CLASS 2008
By George Dyson

GEORGE DYSON, a historian among futurists, is the author of Baidarka; Project Orion; and Darwin Among the Machines.

George Dyson's Edge Bio Page

____________________________

The weekend master class on behavioral economics was productive in unexpected ways, and a lot of good ideas and thoughts about implementing them were exchanged.

Day 2 (Sunday) opened with a session led by Sendhil Mullainathan, followed by a final wrap-up discussion before we adjourned at noon. Elon Musk, Evan Williams, and Nathan Myhrvold had departed early. In the absence of Nathan's high-resolution record, a brief summary, with editorial comments, is given here.

"I refuse to accept however, the stupidity of the Stock Exchange boys, as an explanation of the trend of stocks," wrote John von Neumann to Stanislaw Ulam, on December 9, 1939. "Those boys are stupid alright, but there must be an explanation of what happens, which makes no use of this fact." This question led von Neumann (with the help of Oskar Morgenstern) to his monumental Theory of Games and Economic Behavior, a precise mathematical structure demonstrating that a reliable economy can be constructed out of unreliable parts.

The von Neumann and Morgenstern approach (developed further by von Neumann's subsequent Probabilistic Logics and the Synthesis of Reliable Organisms From Unreliable Components) assumes that human unreliability and irrationality (by no means excluded from their model) will, in the aggregate, be filtered out. In the real world, however, irrational behavior (including the "stupidity of the stock exchange boys") is not completely filtered out. Daniel Kahneman, Richard Thaler, Sendhil Mullainathan, and their colleagues are developing an updated theory of games and economic behavior that does make use of this fact.

Sendhil Mullainathan opened the first hour, on the subject of scarcity, by repeating the first day's question: what is it that prevents the fruit vendors (who borrow their working capital daily at high interest) from saving their way out of recurring debt? According to Sendhil, many vendors do manage to escape, but a core group remains trapped.

Sendhil shows a graph with $$ on the X-axis and Temptation on the Y-axis. The curve starts out flat, then ascends steeply before leveling off. The dangerous area is the steep slope, where a person begins to acquire disposable income and meets rapidly increasing temptations. "To understand the behavior you have to understand the scale." Thaler interjects: "It's a mental accounting problem—but I think everything is a mental accounting problem." All human beings are subject to temptation, but the consequences are higher for the poor. Conclusion: temptation is a regressive tax.
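Here is a minimal sketch of the curve as described—flat at very low income, steep once disposable income appears, then leveling off; the logistic form and every parameter are my own illustrative choices, not Sendhil's data. Printing temptation as a share of income shows why the steep slope is where the damage concentrates, which is the sense in which temptation acts like a regressive tax.

    # Illustrative S-shaped temptation curve (logistic form and parameters are assumptions
    # chosen only to reproduce the described shape: flat, then steep, then leveling off).
    import math

    def temptation(income, ceiling=100.0, midpoint=200.0, steepness=0.03):
        return ceiling / (1 + math.exp(-steepness * (income - midpoint)))

    for income in (50, 150, 250, 500, 2000):
        t = temptation(income)
        # Temptation as a share of income peaks on the steep slope and falls for the wealthy.
        print(f"income {income:5d}: temptation {t:6.1f}  share of income {t / income:5.1%}")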

Paul Romer notes that the temptation of time is a progressive tax, since time, unlike money, is evenly distributed, and wealthy people, no matter how well supplied with money, believe they have less spare time. Bottom line: the effects of temptation do not scale with income.

How best to intervene? Daniel Kahneman notes: "Some cultures have solved that problem... there seems to be a cultural solution." Sendhil, whose field research may soon have some answers, believes that lending at lower interest rates may help but will not solve the problem, and adds: "It would be better for the micro-financiers to come in and offer money at the same rate as the existing lenders, and then make the payoff in some other ways." The problem is the chronic effects of poverty, not the lending institutions (or lack thereof).

Sendhil moves the discussion to the subject of "depletion"—when judgment deteriorates due to the effects of stress. Clinical studies and real-world examples are described. Mental depletion correlates strongly with high serum cortisol (measurable in urine) and low glucose. Poverty produces chronic depletion, and decisions are impaired. High-value decisions are made under conditions of high stress. This results in what Sendhil terms the scarcity trap.

During the mid-morning break (with cookies), Richard Thaler shows videos from a decades-old study (Walter Mischel, 1973) of children offered one cookie now or two if they wait. The observed behavior correlates strongly, by almost any measure, with both the economic success of the parents and the child's future success. Hypothesis: small behavioral shifts might produce (or "nudge") large economic results.

After the break we begin to wrap things up. Richard Thaler suggests a "nudge" model of the world: just as a digital camera has both an "expert mode" and an "idiot mode," what the economy needs is an "idiot mode" that is resistant to experts making mistakes.

Thaler notes that Government is really bad at building systems that can be operated in "idiot mode"—just compare private sector websites vs. public sector. Imagine if the Government had designed the user interface for Amazon!

Sendhil makes a final comment that elicits agreement all around: "R&D in the poverty space has huge potential returns and there is too little thinking about that."

Daniel Kahneman concludes: "There's new technology emerging from Behavioral Economics and we are just starting to make use of that. I thought the input of psychology into economics was finished but clearly it's not!" The meeting adjourns.

My personal conclusions: retreating to the luxury of Sonoma to discuss economic theory in mid-2008 conveys images of Fiddling while Rome Burns. Do the architects of Microsoft, Amazon, Google, PayPal, and Facebook have anything to teach the behavioral economists—and anything to learn? So what? What's new? As it turns out, all kinds of things are new. Entirely new economic structures and pathways have come into existence in the past few years. More wealth is flowing ever more quickly, and can be monitored and influenced in real time. Models can be connected directly to the real world (for instance, Sendhil's field experiment, using real money to remove real debt, observing the results over time). The challenge is how to extend the current economic redistribution as efficiently (and beneficently) as possible to the less wealthy as well as the wealthy of the world.

A time of misguided economic decisions, while bad for many of us, is a good time for behavioral economics. As Abraham Flexner argued (26 September 1931) when urging the inclusion of a School of Economics at the founding of the Institute for Advanced Study: "The plague is upon us, and one cannot well study plagues after they have run their course." All the more so amidst the plagues of 2008.

It was Louis Bamberger's wish (23 April 1934), upon granting Abraham Flexner's request, that "the School of Economics and Politics may contribute not only to a knowledge of these subjects but ultimately to the cause of social justice which we have deeply at heart."

—George Dyson
 

Edge Dinners
Event Date: [ 2.25.08 ]
Location:
Indian Summer Restaurant
Monterey, CA
United States

"The crowd was sprinkled generously with those who had amassed wealth beyond imagining in a historical eye blink." — The Wall Street Journal


 

Seminars
Event Date: [ 8.25.07 ]
Location:
United States

"Life/ Consists of propositions about life." 

— Wallace Stevens ("Men Made out of Words")

 

"I just read the Life transcript book and it is fantastic. One of the better books I've read in a while. Super rich, high signal to noise, great subject."
— Kevin Kelly, Editor-At-Large, Wired

"The more I think about it the more I'm convinced that Life: What A Concept was one of those memorable events that people in years to come will see as a crucial moment in history. After all, it's where the dawning of the age of biology was officially announced."
— Andrian Kreye, Süddeutsche Zeitung

EDGE PUBLISHES "LIFE: WHAT A CONCEPT!" TRANSCRIPT AS DOWNLOADABLE PDF BOOK [1.14.08]

Edge is pleased to announce the online publication of the complete transcript of this summer's Edge event, Life: What a Concept!, as a 43,000-word downloadable PDF Edgebook.

The event took place at Eastover Farm in Bethlehem, CT on Monday, August 27th (see below). Invited to address the topic "Life: What a Concept!" were Freeman Dyson, J. Craig Venter, George Church, Robert Shapiro, Dimitar Sasselov, and Seth Lloyd, who focused on their new, and in more than a few cases startling, research and/or ideas in the biological sciences.


      pdf download (click here)

 

Reporting on the August event, Andrian Kreye, Feuilleton (Arts & Ideas) Editor of Süddeutsche Zeitung, wrote:

"Soon genetic engineering will shape our daily life to the same extent that computers do today. This sounds like science fiction, but it is already reality in science. Thus genetic engineer George Church talks about the biological building blocks that he is able to synthetically manufacture. It is only a matter of time until we will be able to manufacture organisms that can self-reproduce, he claims. Most notably J. Craig Venter succeeded in introducing a copy of a DNA-based chromosome into a cell, which from then on was controlled by that strand of DNA."

Jordan Mejias, Arts Correspondent of Frankfurter Allgemeine Zeitung, noted that:

"These are thoughts to make jaws drop...Nobody at Eastover Farm seemed afraid of a eugenic revival. What in German circles would have released violent controversies, here drifts by unopposed under mighty maple trees that gently whisper in the breeze."

The following Edge feature on the "Life: What a Concept!" August event includes a photo album, streaming video, and HTML files of each of the individual talks.


In April, Dennis Overbye, writing in the New York Times "Science Times", broke the story of the discovery by Dimitar Sasselov and his colleagues of five earth-like exo-planets, one of which "might be the first habitable planet outside the solar system".

At the end of June, Craig Venter announced the results of his lab's work on genome transplantation methods that allow for the transformation of one type of bacteria into another, dictated by the transplanted chromosome. In other words, one species becomes another. In talking to Edge about the research, Venter noted the following:

Now we know we can boot up a chromosome system. It doesn't matter if the DNA is chemically made in a cell or made in a test tube. Until this development, if you made a synthetic chromosome you had the question of what do you do with it. Replacing the chromosome in existing cells, if it works, seems the most effective way to replace one already in an existing cell system. We didn't know if it would work or not. Now we do. This is a major advance in the field of synthetic genomics. We now know we can create a synthetic organism. It's not a question of 'if', or 'how', but 'when', and in this regard, think weeks and months, not years.

In July, in an interesting and provocative essay in The New York Review of Books entitled "Our Biotech Future", Freeman Dyson wrote:

The Darwinian interlude has lasted for two or three billion years. It probably slowed down the pace of evolution considerably. The basic biochemical machinery of life had evolved rapidly during the few hundreds of millions of years of the pre-Darwinian era, and changed very little in the next two billion years of microbial evolution. Darwinian evolution is slow because individual species, once established, evolve very little. With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them.

Now, after three billion years, the Darwinian interlude is over. It was an interlude between two periods of horizontal gene transfer. The epoch of Darwinian evolution based on competition between species ended about ten thousand years ago, when a single species, Homo sapiens, began to dominate and reorganize the biosphere. Since that time, cultural evolution has replaced biological evolution as the main driving force of change. Cultural evolution is not Darwinian. Cultures spread by horizontal transfer of ideas more than by genetic inheritance. Cultural evolution is running a thousand times faster than Darwinian evolution, taking us into a new era of cultural interdependence which we call globalization. And now, as Homo sapiens domesticates the new biotechnology, we are reviving the ancient pre-Darwinian practice of horizontal gene transfer, moving genes easily from microbes to plants and animals, blurring the boundaries between species. We are moving rapidly into the post-Darwinian era, when species other than our own will no longer exist, and the rules of Open Source sharing will be extended from the exchange of software to the exchange of genes. Then the evolution of life will once again be communal, as it was in the good old days before separate species and intellectual property were invented.

It's clear from these developments, as well as others, that we are at the end of one empirical road and ready for adventures that will lead us into new realms.

This year's Annual Edge Event took place at Eastover Farm in Bethlehem, CT on Monday, August 27th. Invited to address the topic "Life: What a Concept!" were Freeman Dyson, J. Craig Venter, George Church, Robert Shapiro, Dimitar Sasselov, and Seth Lloyd, who focused on their new, and in more than a few cases startling, research and/or ideas in the biological sciences.

Physicist Freeman Dyson envisions a biotech future which supplants physics and notes that after three billion years, the Darwinian interlude is over. He refers to an interlude between two periods of horizontal gene transfer, a subject explored in his abovementioned essay.

Craig Venter, who decoded the human genome, surprised the world in late June by announcing the results of his lab's work on genome transplantation methods that allow for the transformation of one type of bacteria into another, dictated by the transplanted chromosome. In other words, one species becomes another.

George Church, the pioneer of the synthetic biology revolution, thinks of the cell as an operating system, with engineers taking the place of traditional biologists in retooling stripped-down components of cells (bio-bricks), much in the vein of the late '70s, when electrical engineers were working their way to the first personal computer by assembling circuit boards, hard drives, monitors, etc.

Biologist Robert Shapiro disagrees with scientists who believe that an extreme stroke of luck was needed to get life started in a non-living environment. He favors the idea that life arose through the normal operation of the laws of physics and chemistry. If he is right, then life may be widespread in the cosmos.

Dimitar Sasselov, Planetary Astrophysicist, and Director of the Harvard Origins of Life Initiative, has made recent discoveries of exo-planets ("Super-Earths"). He looks at new evidence to explore the question of how chemical systems become living systems.

Quantum engineer Seth Lloyd sees the universe as an information-processing system in which simple systems such as atoms and molecules must necessarily give rise to complex structures such as life, and life itself must give rise to even greater complexity, such as human beings, societies, and whatever comes next.

A small group of journalists interested in the kind of issues that are explored on Edge were present: Corey Powell, Discover; Jordan Mejias, Frankfurter Allgemeine Zeitung; Heidi Ledford, Nature; Greg Huang, New Scientist; Deborah Treisman, New Yorker; Edward Rothstein, New York Times; Andrian Kreye, Süddeutsche Zeitung; Antonio Regalado, Wall Street Journal. Guests included Heather Kowalski, The J. Craig Venter Institute; Ting Wu, The Wu Lab, Harvard Medical School; and the artist Stephanie Rudloe. Attending for Edge: Katinka Matson, Russell Weinberger, Max Brockman, and Karla Taylor.

We are witnessing a point at which the empirical has intersected with the epistemological: everything becomes new, everything is up for grabs. Big questions are being asked, questions that affect the lives of everyone on the planet. And don't even try to talk about religion: the gods are gone.

Following the theme of new technologies=new perceptions, I asked the speakers to take a third culture slant in the proceedings and explore not only the science but the potential for changes in the intellectual landscape as well.

We are pleased to present the transcripts of the talks and conversation along with streaming video clips (links below).

— JB




FREEMAN DYSON

 

The essential idea is that you separate metabolism from replication. We know modern life has both metabolism and replication, but they're carried out by separate groups of molecules. Metabolism is carried out by proteins and all kinds of other molecules, and replication is carried out by DNA and RNA. That maybe is a clue to the fact that they started out separate rather than together. So my version of the origin of life is that it started with metabolism only.

FREEMAN DYSON

 

FREEMAN DYSON: First of all I wanted to talk a bit about origin of life. To me the most interesting question in biology has always been how it all got started. That has been a hobby of mine. We're all equally ignorant, as far as I can see. That's why somebody like me can pretend to be an expert.

I was struck by the picture of early life that appeared in Carl Woese's article three years ago. He had this picture of the pre-Darwinian epoch when genetic information was open source and everything was shared between different organisms. That picture fits very nicely with my speculative version of origin of life.

The essential idea is that you separate metabolism from replication. We know modern life has both metabolism and replication, but they're carried out by separate groups of molecules. Metabolism is carried out by proteins and all kinds of small molecules, and replication is carried out by DNA and RNA. That maybe is a clue to the fact that they started out separate rather than together. So my version of the origin of life is it started with metabolism only. ...

[...continued]

___

FREEMAN DYSON is professor of physics at the Institute for Advanced Study, in Princeton. His professional interests are in mathematics and astronomy. Among his many books are Disturbing the Universe, Infinite in All Directions, Origins of Life, From Eros to Gaia, Imagined Worlds, The Sun, the Genome, and the Internet, and most recently A Many-Colored Glass: Reflections on the Place of Life in the Universe.

Freeman Dyson's Edge Bio Page


CRAIG VENTER

 

I have come to think of life in much more a gene-centric view than even a genome-centric view, although it kind of oscillates. And when we talk about the transplant work, genome-centric becomes more important than gene-centric. From the first third of the Sorcerer II expedition we discovered roughly 6 million new genes, which doubled the number in the public databases when we put them in a few months ago, and in 2008 we are likely to double that entire number again. We're just at the tip of the iceberg of what the divergence is on this planet. We are in a linear phase of gene discovery, and maybe in a linear phase of discovery of unique biological entities, if you call those species, and I think eventually we can have databases that represent the gene repertoire of our planet.

One question is, can we extrapolate back from this data set to describe the most recent common ancestor? I don't necessarily buy that there is a single ancestor. It's counterintuitive to me. I think we may have thousands of recent common ancestors and they are not necessarily so common.

J. CRAIG VENTER

J. CRAIG VENTER: Seth's statement about digitization is basically what I've spent the last fifteen years of my career doing: digitizing biology. That's what DNA sequencing has been about. I view biology as an analog world that DNA sequencing has taken into the digital world. I'll talk about some of the observations that we have made for a few minutes, and then I will talk about how, once we can read the genetic code, we've now started the phase where we can write it, and how that is going to be the end of Darwinism.

On the reading side, some of you have heard of our Sorcerer II expedition for the last few years where we've been just shotgun sequencing the ocean. We've just applied the same tools we developed for sequencing the human genome to the environment, and we could apply it to any environment; we could dig up some soil here, or take water from the pond, and discover biology at a scale that people really have not even imagined.

The world of microbiology as we've come to know it is based on a more than hundred-year-old technology of seeing what will grow in culture. Only about a tenth of a percent of microbiological organisms will grow in the lab using traditional techniques. We decided to go straight to the DNA world and shotgun sequence what's there, using very simple techniques of filtering seawater into different size fractions and sequencing everything in those fractions at once. ...

[...continued]

___

J. CRAIG VENTER is one of the leading scientists of the 21st century for his visionary contributions to genomic research. He is founder and president of the J. Craig Venter Institute. The Venter Institute conducts basic research that advances the science of genomics; specializes in human genome-based medicine, infectious disease, environmental genomics, synthetic genomics, and synthetic life; and explores the ethical and policy implications of genomic discoveries and advances. The Venter Institute employs more than 400 scientists and staff in Rockville, MD and La Jolla, CA. He is the author of A Life Decoded: My Genome: My Life.

Craig Venter's Edge Bio Page


GEORGE CHURCH

Many of the people here worry about what life is, but maybe in a slightly more general way, not just ribosomes, but inorganic life. Would we know it if we saw it? It's important as we go and discover other worlds, as we start creating more complicated robots, and so forth, to know, where do we draw the line?

GEORGE CHURCH

GEORGE CHURCH: We've heard a little bit about the ancient past of biology, and possible futures, and I'd like to frame what I'm talking about in terms of four subjects that elaborate on that. In terms of past and future, what have we learned from the past, how does that help us design the future, what would we like it to do in the future, how do we know what we should be doing? This sounds like a moral or ethical issue, but it's actually a very practical one too.

One of the things we've learned from the past is that diversity and dispersion are good. How do we inject that into a technological context? That brings the second topic, which is, if we're going to do something, if we have some idea what direction we want to go in, what sort of useful constructions we would like to make, say with biology, what would those useful constructs be? By useful we might mean that the benefits outweigh the costs — and the risks.  Not simply costs, you have to have risks, and humans as a species have trouble estimating the long tails of some of the risks, which have big consequences and unintended consequences. So that's utility. 1) What we learn from the future and the past 2) the utility 3) kind of a generalization of life.

Many of the people here worry about what life is, but maybe in a slightly more general way, not just ribosomes, but inorganic life. Would we know it if we saw it? It's important as we go and discover other worlds, as we start creating more complicated robots, and so forth, to know, where do we draw the line? I think that's interesting. And then finally — that's a kind of generalized life, at a basic level — but 4) the kind of life that we are particularly enamored of — partly because of egocentricity, but also for very philosophical reasons — is intelligent life. But how do we talk about that? ...

[...continued]

___

GEORGE CHURCH is Professor of Genetics at Harvard Medical School and Director of the Center for Computational Genetics. He invented the broadly applied concepts of molecular multiplexing and tags, homologous recombination methods, and array DNA synthesizers. Technology transfer of automated sequencing & software to Genome Therapeutics Corp. resulted in the first commercial genome sequence (the human pathogen H. pylori, 1994). He has served in advisory roles for 12 journals, 5 granting agencies and 22 biotech companies. Current research focuses on integrating biosystems-modeling with personal genomics & synthetic biology.

George Church's Edge Bio Page


ROBERT SHAPIRO

 

I looked at the papers published on the origin of life and decided that the thought of nature, of its own volition, putting together a DNA or an RNA molecule was absurd.

I'm always running out of metaphors to try and explain what the difficulty is. But suppose you took Scrabble sets, or any word game sets, blocks with letters, containing every language on Earth, and you heap them together and you then took a scoop and you scooped into that heap, and you flung it out on the lawn there, and the letters fell into a line which contained the words “To be or not to be, that is the question,” that is roughly the odds of an RNA molecule, given no feedback — and there would be no feedback, because it wouldn't be functional until it attained a certain length and could copy itself — appearing on the Earth.

ROBERT SHAPIRO

ROBERT SHAPIRO: I was originally an organic chemist — perhaps the only one of the six of us — and worked in the field of organic synthesis, and then I got my PhD, which was in 1959, believe it or not. I had realized that there was a lot of action in Cambridge, England, which was basically organic chemistry, and I went to work with a gentleman named Alexander Todd, promoted eventually to Lord Todd, and I published one paper with him, which was the closest I ever got to the Lord. I then spent decades running a laboratory in DNA chemistry, and so many people were working on DNA synthesis — which has been put to good use as you can see — that I decided to do the opposite, and studied the chemistry of how DNA could be kicked to Hell by environmental agents. Among the most lethal environmental agents I discovered for DNA — pardon me, I'm about to imbibe it — was water. Because water does nasty things to DNA. For example, there's a process I heard you mention called DNA deamination, where it kicks off part of the coding part of DNA from the units — that was discovered in my laboratory.

Another thing water does is help the information units fall off of DNA, which is called depurination and ought to apply to only one class of the subunits — but it works under physiological conditions for the pyrimidines as well, and I helped elaborate the mechanism by which water helped destroy that part of DNA structure. I realized what a fragile and vulnerable molecule it was, even if it was the center of Earth life. After water, or competing with water, the other thing that really does damage to DNA, and that is very much the center of hot research now — again, I can't tell you to stop using it — is oxygen. If you don't drink the water and don't breathe the air, as Tom Lehrer used to say, you should be perfectly safe. ...

[...continued]

___

ROBERT SHAPIRO is professor emeritus of chemistry and senior research scientist at New York University. He has written four books for the general public: Life Beyond Earth (with Gerald Feinberg); Origins, a Skeptic's Guide to the Creation of Life on Earth; The Human Blueprint (on the effort to read the human genome); and Planetary Dreams (on the search for life in our Solar System).

Robert Shapiro's Edge Bio Page


DIMITAR SASSELOV

Is Earth the ideal planet for life? What is the future of life in our universe? We often imagine our place in the universe in the same way we experience our lives and the places we inhabit. We imagine a practically static eternal universe where we, and life in general, are born, grow up, and mature; we are merely one of numerous generations.

This is so untrue! We now know that the universe is 14 billion years old and Earth life is 4 billion: life and the universe are almost peers. If the universe were a 55-year-old, life would be a 16-year-old teenager. The universe is nowhere close to being static and unchanging either.

Together with this realization of our changing universe, we are now facing a second, seemingly unrelated realization: there is a new kind of planet out there, which has been named the super-Earth, that can provide to life all that our little Earth does. And more.

DIMITAR SASSELOV

DIMITAR SASSELOV: I will start the same way, by introducing my background. I am a physicist, just like Freeman and Seth, in background, but my expertise is astrophysics, and more particularly planetary astrophysics. So that means I'm here to try to tell you a little bit of what's new in the big picture, and also to warn you that my background basically means that I'm looking for general relationships — for generalities rather than specific answers to the questions that we are discussing here today.

So, for example, I am personally more interested in the question of the origins of life, rather than the origin of life. What I mean by that is I'm trying to understand what we could learn about pathways to life, or pathways to the complex chemistry that we recognize as life. As opposed to narrowly answering the question of what is the origin of life on this planet. And that's not to say there is more value in one or the other; it's just the approach that somebody with my background would naturally try to take. And also the approach, which — I would agree to some extent with what was said already — is in need of more research and has some promise.

One of the reasons why I think there are a lot of interesting new things coming from that perspective, that is, from the cosmic perspective, or planetary perspective, is because we have a lot more evidence for what is out there in the universe than we did even a few years ago. So to some extent, what I want to tell you here is some of this new evidence and why it is so exciting, in being able to actually inform what we are discussing here. ...

[...continued]

___

DIMITAR SASSELOV is Professor of Astronomy at Harvard University and Director of the Harvard Origins of Life Initiative. Most recently his research has led him to explore the nature of planets orbiting other stars. Using novel techniques, he has discovered a few such planets, and his hope is to use these techniques to find planets like Earth. He is the founder and director of the new Harvard Origins of Life Initiative, a multidisciplinary center bridging scientists in the physical and in the life sciences, intent on studying the transition from chemistry to life and its place in the context of the Universe.

Dimitar Sasselov's Edge Bio Page


SETH LLOYD

If you program a computer at random, it will start producing other computers, other ways of computing, other more complicated, composite ways of computing. And here is where life shows up. Because the universe is already computing from the very beginning when it starts, starting from the Big Bang, as soon as elementary particles show up. Then it starts exploring — I'm sorry to have to use anthropomorphic language about this, I'm not imputing any kind of actual intent to the universe as a whole, but I have to use it for this to describe it — it starts to explore other ways of computing.

SETH LLOYD

SETH LLOYD: I'd like to step back from talking about life itself. Instead I'd like to talk about what information processing in the universe can tell us about things like life. There's something rather mysterious about the universe. Not just rather mysterious, extremely mysterious. At bottom, the laws of physics are very simple. You can write them down on the back of a T-shirt: I see them written on the backs of T-shirts at MIT all the time, even in size petite. In addition to that, the initial state of the universe, from what we can tell from observation, was also extremely simple. It can be described by a very few bits of information.

So we have simple laws and simple initial conditions. Yet if you look around you right now you see a huge amount of complexity. I see a bunch of human beings, each of whom is at least as complex as I am. I see trees and plants, I see cars, and as a mechanical engineer, I have to pay attention to cars. The world is extremely complex.

If you look up at the heavens, the heavens are no longer very uniform. There are clusters of galaxies and galaxies and stars and all sorts of different kinds of planets and super-earths and sub-earths, and super-humans and sub-humans, no doubt. The question is, what in the heck happened? Who ordered that? Where did this come from? Why is the universe complex? Because normally you would think, okay, I start off with very simple initial conditions and very simple laws, and then I should get something that's simple. In fact, mathematical definitions of complexity, like algorithmic information, say that simple laws and simple initial conditions imply the state is always simple. It's kind of bizarre. So what is it about the universe that makes it complex, that makes it spontaneously generate complexity? I'm not going to talk about super-natural explanations. What are natural explanations — scientific explanations — of our universe and why it generates complexity, including complex things like life? ...
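A compact way to state the claim Lloyd is appealing to, in standard algorithmic-information notation (a gloss on the argument, not part of the talk; K denotes Kolmogorov complexity): if a state s_t is produced by running deterministic laws L on an initial state s_0 for a time t, then a program for L, a description of s_0, and the number t together describe s_t, so

K(s_t) \le K(L) + K(s_0) + K(t) + O(1),

and K(t) grows only logarithmically with t. By this measure, simple laws plus a simple start can never yield a state of high algorithmic information, which is why the universe's apparent complexity needs some further explanation.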

[...continued]

___

SETH LLOYD is Professor of Mechanical Engineering at MIT and Director of the W.M. Keck Center for Extreme Quantum Information Theory (xQIT). He works on problems having to do with information and complex systems, from the very small—how do atoms process information, how can you make them compute?—to the very large — how does society process information? And how can we understand society in terms of its ability to process information? He is the author of Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos.

Seth Lloyd's Edge Bio Page



FRANKFURTER ALLGEMEINE ZEITUNG
August 31, 2007

FEUILLETON — Front Page

 

Let's play God! (Lasst uns Gott spielen!)
Life's questions: J. Craig Venter programs the future
By Jordan Mejias

Was Evolution only an interlude?  At the invitation of John Brockman, science luminaries such as J. Craig Venter, Freeman Dyson, Seth Lloyd, Robert Shapiro and others discussed the question: What is Life? 

EASTOVER FARM, August 30th

It sounds like a seaman's yarn that the scientist with the look of an experienced seafarer has in store for us. The suntanned adventurer with the close-clipped grey beard vaunts the ocean as a sea of bacteria and viruses, unimaginable in their varieties. And in their lifestyle, as we might call it. But what do organisms live off? Like man, not off air or love alone. There can be no life without nutrients, it is said. Not true, says the sea dog. Sometimes a source of energy is enough, for instance, when energy is abundantly provided by sunlight. Could that teach us anything about our very special form of life?

J. Craig Venter, the ingenious decoder of the genome, who takes time off to sail around the world on expeditions, balances his flip-flops on his bare feet as he tells us about such astounding phenomena of life. Us, that means a few hand-picked journalists and half a dozen stars of science, invited by John Brockman, the Guru of the all-encompassing "Third Culture", to his farm in Connecticut.

Relaxed, always open for a witty remark, but nevertheless with the indispensable seriousness, the scientific luminaries go to work under Brockman's direction. He, the master of the easy, direct question that unfailingly draws out the most complicated answers, the hottest speculations and debates, has for today transferred his virtual salon, always accessible on the Internet under the name Edge, to a very real and idyllic summer's day. This time the subject matter is nothing other than life itself.

When Venter speaks of life, it's almost as if he were reading from the script of a highly elaborate science fiction film. We are told to imagine organisms that not only can survive dangerous radiation, but that remain hale and hearty as they journey through the Universe. Still, he of all people, the revolutionary geneticist, warns against setting off in an overly gene-centric direction when trying to track down life. For the way in which a gene makes itself known will depend to a large degree upon the aid of overlooked transporter genes. In spite of this he considers the genetic code a better instrument to organize living organisms than the conventional system of classification by species.

Many colleagues nod in agreement, when they are not smiling in agreement. But this cannot be all that Venter has up his sleeve. Just a short while ago, he created a stir with the announcement that his institute had succeeded in transplanting the genome of one bacterium into another. With this, he had newly programmed an organism. Should he be allowed to do this? A question not only for scientists. Eastover Farm was lacking in ethicists, philosophers and theologians, but Venter had taken precautions. He took a year to learn from the world's major religions whether it was permissible to synthesize life in the lab. Not a single religious representative could find grounds to object. All essentially agreed: It's okay to play God.

Maybe some of the participants would have liked to hear more on the subject, but the day in Nature's lap was for identifying themes, not giving and receiving exhaustive amounts of information. A whiff of the most breathtaking visions, both good and bad, was enough. There were already frightening hues in the ultimate identity theft, to which Venter admitted with his genome exchange. What if a cell were captured by foreign DNA? Wouldn't it be a nightmare in the shape of a genuine Darwinian victory of the strong over the weak? Venter was applying dark colors here, whereas Freeman Dyson had painted us a much more mellow picture of the future.

Dyson, the great, not-yet-quite-eighty-four-year-old youngster, physicist and futurist, regards evolution as an interlude. According to his calculations, the competition between species has gone on for just three billion years. Before that, according to Dyson, living organisms participated in horizontal gene transfers; if you will, they preferred the peaceful exchange of information among themselves. In the ten thousand years since Homo sapiens conquered the biosphere, Dyson once again sees a return of the old modus operandi, although in a modified form.

The scenario goes as follows: Cultural evolution, characterized by the transfer of ideas, has replaced the much slower biological evolution. Today, ideas, not genes, tip the scales. In availing himself of biotechnology, Man has picked up the torn pre-evolutionary thread and revived the genetic back and forth between microbes, plants and animals. Bit by bit the borders between species are disappearing. Soon only one species will remain, namely the genetically modified human, while the rules of Open Source, which guarantee the unhindered exchange of software in computers, will also apply to the exchange of genes. The evolution of life, in a nutshell, will soon return to a state of agreeable unity, as it existed in good old pre-Darwinian times, when life had not yet been separated into distinct species.

Though Venter may not trust in this future peace, he nearly matches Dyson in his futuristic enthusiasm. But he is enough of a realist to stress that he has never talked of creating new life from scratch. He is confident that he can develop new species and life forms, but will always have to rely on existing materials that he finds. Even he cannot conjure a cell out of nothing. So far, so good and so humble.

The rest is sheer bravado. He considers manipulation of human genes not only possible, but desirable. There's no question that he will continue to disappoint the inmate who once asked him to fashion an attractive cellmate, just as he refused the wish of an unsavory gentleman who yearned for mentally underdeveloped working-class people. But, Venter asks, who can object to humans having genetically beefed-up intelligence? Or to new genomes that open the door to new, undreamt-of sources of biofuel? Nobody at Eastover Farm seemed afraid of a eugenic revival. What in German circles would have unleashed violent controversies here drifts by unopposed under mighty maple trees that gently whisper in the breeze.

All the same, Venter does confess that such life-transforming technology, more powerful than any humanity could harness until now, inevitably plunges him into doubt, particularly when looking back on human history. Still, he looks toward the future with hope and confidence. As does George Church, the molecular geneticist from Harvard, who wouldn't be surprised if a future computer were able to outperform the human brain. Could resourcefully mixed DNA be helpful to us? The organic chemist Robert Shapiro, Emeritus of New York University, objects strongly to viewing DNA as a monopolistic force. Will he assure us that life consists of more than DNA? But of what? Is it conceivable that there are certain forms of life we still are unable to recognize? Who wants to confirm that nothing runs without DNA? Why should life not also arise from minerals? These are thoughts to make jaws drop, not only among laymen. Venter, for his part, is concerned that Shapiro defines life all too loosely. But both the geneticist and the chemist focus on the moment at which life is breathed into an inanimate object. This will be, in Venter's opinion, the next milestone in the investigation and conditioning of life. We can no longer beat around the bush: What is life? Venter declines to answer; he doesn't want to be drawn into philosophical bullshit, as he says. Is a virus a life form? Must life, in order to be recognized as life, be self-reproducing? A colorful butterfly glides through the debate. Life can appear so weightless. And it is so difficult to describe and define.

Seth Lloyd, the quantum mechanic from MIT, points out mischievously that we know far more about the origin of the universe than we do about the origin of life. Using the quantum computer as his departing point, he tries to give us an idea of the huge number of possibilities out of which life could have developed. If Albert Einstein did not wish to envisage a dice-playing god, Lloyd, the entertaining thinker, cannot help but see dice-playing everywhere, though presumably without the assistance of a god. Everything in his panorama of life reveals itself as a result of chance, whether here on Earth or at an incomprehensible distance.

Astrophysicist Dimitar Sasselov also works under the auspices of chance. Although his field of research necessarily widens our perspective, he can present us with only a few places in the universe that could be suitable for life. Only five super-Earths, as Sasselov calls those planets that are larger than Earth, are known to us at this point. With improved detection technologies, perhaps a hundred million could be found in the universe in all. No, distributed throughout the entire universe, that is still not a grand number. But it is large enough to give us hope for real co-inhabitants of our universe. Somewhere, sometime, we could encounter microbial life.

Most likely this would be life in a form that we cannot even fathom yet. It will all depend on what we, strange life forms that we are, can acknowledge as life. At Eastover Farm our imaginative powers were already being vigorously tested.

Text: F.A.Z., 31.08.2007, No. 202 / page 33

Translated by Karla Taylor

 


SUEDDEUTSCHE ZEITUNG
September 3, 2007
FEUILLETON — Front Page

 

DARWIN WAS JUST A PHASE? (Darwin war nur eine Phase)
Country Life in Connecticut: Six scientists find the future in genetic engineering
By Andrian Kreye

The origins of life were the subject of discussion on a summer day when six pioneers of science convened at Eastover Farm in Connecticut. The physicist and scientific theorist Freeman Dyson was the first of the speakers to talk on the theme: "Life: What a Concept!" An ironic slogan for one of the most complex problems. Seth Lloyd, quantum physicist at MIT, summed it up with his remark that scientists now know everything about the origin of the Universe and virtually nothing about the origin of life. Which makes it rather difficult to deal with the new world view currently taking shape in the wake of the emerging age of biology.

The roster of thinkers had assembled at the invitation of literary agent John Brockman, who specializes in scientific ideas. The setting was distinguished. Eastover Farm sits in the part of Connecticut where the rich and famous New Yorkers who find the beach resorts of the Hamptons too loud and pretentious have settled. Here the scientific luminaries sat at long tables in the shade of the rustling leaves of maple trees, breaking just for lunch at the farmhouse.

The day remained on topic, as Brockman had invited only half a dozen journalists, to avoid slowing the thinkers down with an onslaught of too many layman's questions. The object was to have them talk about ideas mainly amongst themselves in the manner of a salon, not unlike his online forum edge.org. Not that the day went over the heads of the non-scientist guests. With Dyson, Lloyd, genetic engineer George Church, chemist Robert Shapiro, astronomer Dimitar Sasselov and biologist and decoder of the genome J. Craig Venter, six men came together, each of whom has made enormous contributions in interdisciplinary science, and as a consequence has mastered the ability to talk to people who are not well-read in their respective fields. This made it possible for an outsider to follow the discussions, even if at moments he was made to feel just that, as when Robert Shapiro cracked a joke about RNA that was met with great laughter from the scientists.

Freeman Dyson, a fragile gentleman of 84 years, opened the morning with his legendary provocation that Darwinian evolution represents only a short phase of three billion years in the life of this planet, a phase that will soon reach its end. According to this view, life began in primeval times with a haphazard assemblage of cells; RNA-driven organisms ensued, which, in the third phase of terrestrial life, would have learned to function together. Reproduction appeared on the scene in the fourth phase; multicellular beings and the principle of death appeared in the fifth.

The End of Natural Selection

We humans belong to the sixth phase of evolution, which progresses very slowly by way of Darwinian natural selection. But this, according to Dyson, will soon come to an end, because men like George Church and J. Craig Venter are expected to succeed not only in reading the genome, but also in writing new genomes in the next five to ten years. This would constitute the ultimate "Intelligent Design", pun fully intended. Where this could lead is still difficult to anticipate. Yet Freeman Dyson finds a meaningful illustration. He spent the early nineteen-fifties at Princeton with the mathematician John von Neumann, who designed one of the earliest programmable computers. When asked how many computers might be in demand, von Neumann assured him that 18 would be sufficient to meet the demand of a nation like the United States. Now, 55 years later, we are in the middle of the age of physics, where computers play an integral role in modern life and culture.

Now though we are entering the age of biology. Soon genetic engineering will shape our daily life to the same extent that computers do today. This sounds like science fiction, but it is already reality in science. Thus genetic engineer George Church talks about the biological building blocks that he is able to synthetically manufacture. It is only a matter of time until we will be able to manufacture organisms that can self-reproduce, he claims. Most notably J. Craig Venter succeeded in introducing a copy of a DNA-based chromosome into a cell, which from then on was controlled by that strand of DNA.

Venter, a suntanned giant with the build of a surfer and the hunting instinct of a captain of industry, understands the magnitude of this feat in microbiology. And he understands the potential of his research to create biofuel from bacteria. He wouldn't dare to say it, but he very well might be a Bill Gates of the age of biology. Venter also understands the moral implications. He approached the bioethicist Art Caplan in the nineties and asked him to do a study on whether designing a new genome would raise ethical or religious objections. Not a single religious leader or philosopher involved in the study could find a problem there. Such contract studies are debatable. But here at Eastover Farm scientists dream of a glorious future. Because science as such is morally neutral—every scientific breakthrough can be applied for good or for bad.

The sun is already turning pink behind the treetops when Dimitar Sasselov, the Bulgarian astronomer from Harvard, once more reminds us how unique and, at the same time, how unstable the balance of our terrestrial life is. In our galaxy, astronomers have found roughly one hundred million planets that could theoretically harbor organic life. Not only does Earth not have the best conditions among them; it is actually at the very edge of the spectrum. "Earth is not particularly inhabitable," he says, wrapping up his talk. Here J. Craig Venter cannot help but remark, as an idealist: "But it is getting better all the time."

Translated by Karla Taylor

 


 

Andrian Kreye, Süddeutsche Zeitung

Jordan Mejias, Frankfurter Allgemeine Zeitung



RICHARD DAWKINS—FREEMAN DYSON: AN EXCHANGE

As part of this year's Edge Event at Eastover Farm in Bethlehem, CT, I invited three of the participants—Freeman Dyson, George Church, and Craig Venter—to come up a day early, which gave me an opportunity to talk to Dyson about his abovementioned essay in The New York Review of Books entitled "Our Biotech Future".

I also sent the link to the essay to Richard Dawkins, and asked if he would comment on what Dyson termed the end of "the Darwinian interlude".

Early the next morning, prior to the all-day discussion (which also included as participants Robert Shapiro, Dimitar Sasselov, and Seth Lloyd) Dawkins emailed his thoughts which I read to the group during the discussion following Dyson's talk. [NOTE: Dawkins asked me to make it clear that his email below "was written hastily as a letter to you, and was not designed for publication, or indeed to be read out at a meeting of biologists at your farm!"].

Now Dyson has responded and the exchange is below.

JB


RICHARD DAWKINS [8.27.07]
Evolutionary Biologist; Charles Simonyi Professor of the Public Understanding of Science, Oxford University; Author, The God Delusion

"By Darwinian evolution he [Woese] means evolution as Darwin understood it, based on the competition for survival of noninterbreeding species."

"With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them."

These two quotations from Dyson constitute a classic schoolboy howler, a catastrophic misunderstanding of Darwinian evolution. Darwinian evolution, both as Darwin understood it, and as we understand it today in rather different language, is NOT based on the competition for survival of species. It is based on competition for survival WITHIN species. Darwin would have said competition between individuals within every species. I would say competition between genes within gene pools. The difference between those two ways of putting it is small compared with Dyson's howler (shared by most laymen: it is the howler that I wrote The Selfish Gene partly to dispel, and I thought I had pretty much succeeded, but Dyson obviously hasn't read it!) that natural selection is about the differential survival or extinction of species. Of course the extinction of species is extremely important in the history of life, and there may very well be non-random aspects of it (some species are more likely to go extinct than others) but, although this may in some superficial sense resemble Darwinian selection, it is NOT the selection process that has driven evolution. Moreover, arms races between species constitute an important part of the competitive climate that drives Darwinian evolution. But in, for example, the arms race between predators and prey, or parasites and hosts, the competition that drives evolution is all going on within species. Individual foxes don't compete with rabbits, they compete with other individual foxes within their own species to be the ones that catch the rabbits (I would prefer to rephrase it as competition between genes within the fox gene pool).

The rest of Dyson's piece is interesting, as you'd expect, and there really is an interesting sense in which there is an interlude between two periods of horizontal transfer (and we mustn't forget that bacteria still practise horizontal transfer and have done throughout the time when eucaryotes have been in the 'Interlude'). But the interlude in the middle is not the Darwinian Interlude, it is the Meiosis / Sex / Gene-Pool / Species Interlude. Darwinian selection between genes still goes on during eras of horizontal transfer, just as it does during the Interlude. What happened during the 3-billion-year Interlude is that genes were confined to gene pools and limited to competing with other genes within the same species. Previously (and still in bacteria) they were free to compete with other genes more widely (there was no such thing as a species outside the 'Interlude'). If a new period of horizontal transfer is indeed now dawning through technology, genes may become free to compete with other genes more widely yet again.

As I said, there are fascinating ideas in Freeman Dyson's piece. But it is a huge pity it is marred by such an elementary mistake at the heart of it.

Richard


FREEMAN DYSON [8.30.07]
Physicist, Institute for Advanced Study; Author, A Many-Colored Glass: Reflections on the Place of Life in the Universe

Dear Richard Dawkins,

Thank you for the E-mail that you sent to John Brockman, saying that I had made a "school-boy howler" when I said that Darwinian evolution was a competition between species rather than between individuals. You also said I obviously had not read The Selfish Gene. In fact I did read your book and disagreed with it for the following reasons.

Here are two replies to your E-mail. The first was a verbal response made immediately when Brockman read your E-mail aloud at a meeting of biologists at his farm. The second was written the following day after thinking more carefully about the question.

First response. What I wrote is not a howler and Dawkins is wrong. Species once established evolve very little, and the big steps in evolution mostly occur at speciation events when new species appear with new adaptations. The reason for this is that the rate of evolution of a population is roughly proportional to the inverse square root of the population size. So big steps are most likely when populations are small, giving rise to the "punctuated equilibrium" that is seen in the fossil record. The competition is between the new species with a small population adapting fast to new conditions and the old species with a big population adapting slowly.

Second response. It is absurd to think that group selection is less important than individual selection. Consider for example Dodo A and Dodo B, competing for mates and progeny in the dodo population on Mauritius. Dodo A competes much better and has greater fitness, as measured by individual selection. Dodo A mates more often and has many more grandchildren than Dodo B. A hundred years later, the species is extinct and the fitness of A and B are both reduced to zero. Selection operating at the species level trumps selection at the individual level. Selection at the species level wiped out both A and B because the species neglected to maintain the ability to fly, which was essential to survival when human predators appeared on the island. This situation is not peculiar to dodos. It arises throughout the course of evolution, whenever environmental changes cause species to become extinct.

In my opinion, both these responses are valid, but the second one goes more directly to the issue that divides us. Yours sincerely, Freeman Dyson.


Dimitar Sasselov, George Church, Robert Shapiro, John Brockman,

J. Craig Venter, Seth Lloyd, Freeman Dyson


Master Classes
Event Date: [ 7.19.07 ]
Location:
United States

A SHORT COURSE IN THINKING ABOUT THINKING
Edge Master Class 07
DANIEL KAHNEMAN
Auberge du Soleil, Rutherford, CA, July 20-22, 2007

AN EDGE SPECIAL PROJECT

 



ATTENDEES: Jeff Bezos, Founder, Amazon.com; Stewart Brand, Cofounder, Long Now Foundation, Author, How Buildings Learn; Sergey Brin, Founder, Google; John Brockman, Edge Foundation, Inc.; Max Brockman, Brockman, Inc.; Peter Diamandis, Space Entrepreneur, Founder, X Prize Foundation; George Dyson, Science Historian; Author, Darwin Among the Machines; W. Daniel Hillis, Computer Scientist; Cofounder, Applied Minds; Author, The Pattern on the Stone; Daniel Kahneman, Psychologist; Nobel Laureate, Princeton University; Dean Kamen, Inventor, Deka Research; Salar Kamangar, Google; Seth Lloyd, Quantum Physicist, MIT, Author, Programming the Universe; Katinka Matson, Cofounder, Edge Foundation, Inc.; Nathan Myhrvold, Physicist; Founder, Intellectual Ventures, LLC; Event Photographer; Tim O'Reilly, Founder, O'Reilly Media; Larry Page, Founder, Google; George Smoot, Physicist, Nobel Laureate, Berkeley, Coauthor, Wrinkles in Time; Anne Treisman, Psychologist, Princeton University; Jimmy Wales, Founder, Chair, Wikimedia Foundation (Wikipedia).


INTRODUCTION
By John Brockman

Recently, I spent several months working closely with Danny Kahneman, the psychologist who is the co-creator of behavioral economics (with his late collaborator Amos Tversky), for which he won the Nobel Prize in Economics in 2002.

My discussions with him inspired a two-day "Master Class" given by Kahneman for a group of twenty leading American business/Internet/culture innovators—a microcosm of the recently dominant sector of American business—in Napa, California, in July. They came to hear him lecture on his ideas and research in diverse fields such as human judgment, decision making, behavioral economics, and well-being.


Dean Kamen

Jeff Bezos

Larry Page

While Kahneman has a wide following among people who study risk, decision-making, and other aspects of human judgment, he is not exactly a household name. Yet among many of the top thinkers in psychology, he ranks at the top of the field.

Harvard psychologist Daniel Gilbert (Stumbling on Happiness) writes: "Danny Kahneman is simply the most distinguished living psychologist in the world, bar none. Trying to say something smart about Danny's contributions to science is like trying to say something smart about water: It is everywhere, in everything, and a world without it would be a world unimaginably different than this one." And according to Harvard's Steven Pinker (The Stuff of Thought): "It's not an exaggeration to say that Kahneman is one of the most influential psychologists in history and certainly the most important psychologist alive today. He has made seminal contributions over a wide range of fields including social psychology, cognitive science, reasoning and thinking, and behavioral economics, a field he and his partner Amos Tversky invented."


Jimmy Wales

Nathan Myhrvold

Stewart Brand

Here are some examples from the national media which illustrate how Kahneman's ideas are reflected in the public conversation:

In the Economist "Happiness & Economics " issue in December, 2006, Kahneman is credited with the new hedonimetry regarding his argument that people are not as mysterious as less nosy economists supposed. "The view that hedonic states cannot be measured because they are private events is widely held but incorrect."

Paul Krugman, in his New York Times column, "Quagmire Of The Vanities" (January 8, 2007), asks if the proponents of the "surge" in Iraq are cynical or delusional. He presents Kahneman's view that "the administration's unwillingness to face reality in Iraq reflects a basic human aversion to cutting one's losses—the same instinct that makes gamblers stay at the table, hoping to break even."

His articles have been picked up by the press and written about extensively. The most recent example is Jim Holt's lede piece in The New York Times Magazine, "You are What You Expect" (January 21, 2007), an article about this year's Edge Annual Question "What Are You Optimistic About?". It was prefaced with a commentary regarding Kahneman's ideas on "optimism bias".

In Jerome Groopman's New Yorker article, "What's the trouble? How Doctors Think" (January 29, 2007), Groopman looks at a medical misdiagnosis through the prism of a heuristic called "availability," which refers to the tendency to judge the likelihood of an event by the ease with which relevant examples come to mind. This tendency was first described in 1973, in Kahneman's paper with Amos Tversky when they were psychologists at the Hebrew University of Jerusalem.

Kahneman's article (with Jonathan Renshon) "Why Hawks Win" was published in Foreign Policy (January/February 2007); Kahneman points out that the answer may lie deep in the human mind. People have dozens of decision-making biases, and almost all favor conflict rather than concession. The article takes a look at why the tough guys win more than they should. Publication came during the run-up to Davos, and the article became a focus of numerous discussions and related articles.

The event was an unqualified success. As one of the attendees later wrote: "Even with the perspective of a few weeks, I still think it is one of the all-time best conferences that I have ever attended."


George Smoot

Daniel Kahneman

Sergey Brin

Over a period of two days, Kahneman presided over six sessions lasting about eight hours. The entire event was videotaped as an archive. Edge is pleased to present a sampling from the event consisting of streaming video of the first 10-15 minutes of each session along with the related verbatim transcripts.

—JB

DANIEL KAHNEMAN is Eugene Higgins Professor of Psychology, Princeton University, and Professor of Public Affairs, Woodrow Wilson School of Public and International Affairs. He is winner of the 2002 Nobel Prize in Economic Sciences for his pioneering work integrating insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.

Daniel Kahneman's Edge Bio Page
Daniel Kahneman's Nobel Prize Lecture


SESSION ONE

I'll start with a topic that is called an inside-outside view of the planning fallacy. And it starts with a personal story, which is a true story....

KAHNEMAN: I'll start with a topic that is called an inside-outside view of the planning fallacy. And it starts with a personal story, which is a true story.

Well over 30 years ago I was in Israel, already working on judgment and decision making, and the idea came up to write a curriculum to teach judgment and decision making in high schools without mathematics. I put together a group of people that included some experienced teachers and some assistants, as well as the Dean of the School of Education at the time, who was a curriculum expert. We worked on writing the textbook as a group for about a year, and it was going pretty well—we had written a couple of chapters, we had given a couple of sample lessons. There was a great sense that we were making progress. We used to meet every Friday afternoon, and one day we had been talking about how to elicit information from groups and how to think about the future, and so I said, Let's see how we think about the future.

I asked everybody to write down on a slip of paper his or her estimate of the date on which we would hand the draft of the book over to the Ministry of Education. That by itself, by the way, was something that we had learned: you don't want to start by discussing something, you want to start by eliciting as many different opinions as possible, which you then pool. So everybody did that, and we were really quite narrowly centered around two years; the range of estimates that people had—including myself and the Dean of the School of Education—was between 18 months and two and a half years.

But then something else occurred to me, and I asked the Dean of Education of the school whether he could think of other groups similar to our group that had been involved in developing a curriculum where no curriculum had existed before. At that period—I think it was the early 70s—there was a lot of activity in the biology curriculum, and in mathematics, and so he said, yes, he could think of quite a few. I asked him whether he knew specifically about these groups and he said there were quite a few of them about which he knew a lot. So I asked him to imagine them, thinking back to when they were at about the same state of progress we had reached, after which I asked the obvious question—how long did it take them to finish?

It's a story I've told many times, so I don't know whether I remember the story or the event, but I think he blushed, because what he said then was really kind of embarrassing, which was, You know, I've never thought of it, but actually not all of them wrote a book. I asked how many, and he said roughly 40 percent of the groups he knew about never finished. By that time, there was a pall of gloom falling over the room, and I asked, of those who finished, how long did it take them? He thought for a while and said, I cannot think of any group that finished in less than seven years, and I can't think of any that went on for more than ten.

I asked one final question before doing something totally irrational, which was, in terms of resources, how good were we at what we were doing, and where would he place us in the spectrum. His response I do remember, which was, below average, but not by much. [much laughter]

I'm deeply ashamed of the rest of the story, but there was something really instructive happening here, because there are two ways of looking at a problem: the inside view and the outside view. The inside view is looking at your problem and trying to estimate what will happen in your problem. The outside view involves making that an instance of something else—of a class. When you then look at the statistics of the class, it is a very different way of thinking about problems. And what's interesting is that it is a very unnatural way to think about problems, because you have to forget things that you know—and you know everything about what you're trying to do, your plan and so on—and to look at yourself as a point in the distribution is a very unnatural exercise; people actually hate doing this and resist it.

There are also many difficulties in determining the reference class. In this case, the reference class is pretty straightforward; it's other people developing curricula. But what's psychologically interesting about the incident is that all of that information was in the head of the Dean of the School of Education, and still he said two years. There was no contact between something he knew and something he said. What to me was the truly insightful thing psychologically was that he had all the information necessary to conclude that the prediction he was writing down was ridiculous.

COMMENT: Perhaps he was being tactful.

KAHNEMAN: No, he wasn't being tactful; he really didn't know. This is really something that I think happens a lot—the outside view gets lost in something that I call 'narrow framing,' which is, you focus on the problem at hand and don't see the class to which it belongs. That's part of the psychology of it. There is no question as to which is more accurate—clearly the outside view, by and large, is the better way to go.

Let me just add two elements to the story. One, which I'm really ashamed of, is that obviously we should have quit. None of us was willing to spend seven years writing the bloody book. It was out of the question. We didn't stop, and I think that really was the end of rational planning. When I look back, the humor of our writing a book on rationality, and going on after we knew that what we were doing was not worth doing, is not something I'm proud of.

COMMENT: So you were one of the 40 per cent in the end.

KAHNEMAN: No, actually I wasn't there. I got divorced, I got married, I left the country. The work went on. There was a book. It took eight years to write. It was completely worthless. There were some copies printed, they were never used. That's the end of that story. ...


SESSION TWO

Let me introduce a plan for this session. I'd like to take a detour, but where I would like to end up is with a realistic theory of risk taking. But I need to take a detour to make that sensible. I'd like to start by telling you what I think is the idea that got me the Nobel Prize—should have gotten Amos Tversky and me the Nobel Prize because it was something that we did together—and it's an embarrassingly simple idea. I'm going to tell you the personal story of this, and I call it "Bernoulli's Error"—the major theory of how people take risks...

KAHNEMAN: Let me introduce a plan for this session. I'd like to take a detour, but where I would like to end up is with a realistic theory of risk taking. But I need to take a detour to make that sensible. I'd like to start by telling you what I think is the idea that got me the Nobel Prize—should have gotten Amos Tversky and me the Nobel Prize because it was something that we did together—and it's an embarrassingly simple idea. I'm going to tell you the personal story of this, and I call it "Bernoulli's Error"—the major theory of how people take risks.

The quick history of the field is that in 1738 Daniel Bernoulli wrote a magnificent essay, published at the St. Petersburg Academy of Sciences, in which he presented many of the seminal ideas of how people take risks. And he had a theory that explained why people take risks. Up to that time people were evaluating gambles by expected value, but expected value never explained risk aversion—why people prefer to get sure things rather than gambles of equal expected value. And so he introduced the idea of utility (as a psychological variable), which is what people assign to outcomes; so they're not computing the weighted average of outcomes, where the weights are the probabilities, they're computing the weighted average of the utilities of outcomes. Big discovery—big step in the understanding of it. It moves the understanding of risk taking from the outside world, where you're looking at values, to the inside world, where you're looking at the assignment of utilities. That was a great contribution.
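In symbols (a standard textbook formulation, not a quotation from the session): ranking a gamble with outcomes x_i and probabilities p_i by expected value versus by Bernoulli's expected utility, with a concave u such as his logarithm, looks like

\mathrm{EV} = \sum_i p_i\, x_i \qquad\text{versus}\qquad \mathrm{EU} = \sum_i p_i\, u(x_i), \quad u(w) = \log w,

and the concavity of u is what makes a sure thing worth more than a gamble of equal expected value.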

He was trying to understand the decisions of merchants, really, and the example that he analyzes in some depth is the example of the merchant who has a ship loaded with spices, which he is going to send from Amsterdam to St. Petersburg during the winter, with a 5% probability that the ship will be lost. That's the problem. He wants to figure out how the merchant is going to do this, when the merchant is going to decide that it's worth it, and how much insurance the merchant should be willing to pay. All of this he solves. And in the process, he goes through a very elaborate derivation of logarithms. He really explains the idea.

Bernoulli starts out from the psychological insight, which is very straightforward, that losing one ducat if you have ten ducats is like losing a hundred ducats if you have a thousand. The psychological response is proportional to your wealth. That very quickly forces a logarithmic utility function. The merchant assigns a psychological value to different states of wealth and says, if the ship makes it this is how wealthy I will be; if the ship sinks this is my wealth; this is my current wealth; these are the odds; you have a logarithmic utility function, and you figure it out. You know if it's positive, you do it; if it's not positive you don't; and the difference tells you how much you'd be willing to pay for insurance.
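A minimal numerical sketch of that calculation, in Python. Only the 5% loss probability comes from the talk; the merchant's wealth and the cargo's value are assumed figures chosen for illustration:

import math

# Illustrative figures: only the 5% loss probability comes from the talk;
# the wealth and cargo values below are assumptions for the sketch.
wealth_at_home = 3000.0   # ducats the merchant keeps safely at home (assumed)
cargo_value = 8000.0      # value of the spices if the ship arrives (assumed)
p_loss = 0.05             # probability that the ship is lost (from the talk)

def u(wealth):
    """Bernoulli's logarithmic utility of a state of wealth."""
    return math.log(wealth)

# Expected utility of sending the ship uninsured.
eu_uninsured = (1 - p_loss) * u(wealth_at_home + cargo_value) + p_loss * u(wealth_at_home)

# The largest premium x the merchant should pay for full insurance leaves him
# indifferent: u(wealth_at_home + cargo_value - x) == eu_uninsured.
max_premium = wealth_at_home + cargo_value - math.exp(eu_uninsured)

print(f"actuarially fair premium (expected loss): {p_loss * cargo_value:.0f} ducats")
print(f"maximum premium under log utility:        {max_premium:.0f} ducats")

Because the logarithm is concave, the maximum acceptable premium comes out above the actuarially fair expected loss, which is exactly the risk aversion Bernoulli set out to capture.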

This is still the basic theory you learn when you study economics, and in business you basically learn variants on Bernoulli's utility theory. It's been modified, it's axiomatic and formalized, and it's no longer logarithmic necessarily, but that's the basic idea.

When Amos Tversky and I decided to work on this, I didn't know a thing about decision-making—it was his field of expertise. He had written a book with his teacher and a colleague called "Mathematical Psychology", and he gave me his copy of the book and told me to read the chapter that explained utility theory. It explained utility theory and the basic paradoxes of utility theory that had been formulated, and the problems with the theory. Among the other things in that chapter was the work of some really extraordinary people—Donald Davidson, one of the great philosophers of the twentieth century, and Patrick Suppes—who had fallen in love with the modern version of expected utility theory and had tried to measure the utility of money by actually running experiments where they asked people to choose between gambles. And that's what the chapter was about.

I read the chapter, but I was puzzled by something that I didn't understand, and I assumed there was a simple answer. The gambles were formulated in terms of gains and losses, which is the way that you would normally formulate a gamble—actually there were no losses; there was always the choice between a sure thing and a probability of gaining something. But they plotted it as if you could infer the utility of wealth—the function that they drew was the utility of wealth, but the question they were asking was about gains.

I went back to Amos and I said, this is really weird: I don't see how you can get from gambles of gains and losses to the utility of wealth. You are not asking about wealth. As a psychologist you would know that if it demands a complicated mathematical transformation, something is going wrong. If you want the utility of wealth you had better ask about wealth. If you're asking about gains, you are getting the utility of gains; you are not getting the utility of wealth. And that actually was the beginning of the theory that's called "Prospect Theory", which is considered a main contribution that we made. And the contribution is what I call "Bernoulli's Error". Bernoulli thought in terms of states of wealth, which maybe makes intuitive sense when you're thinking of the merchant. But that's not how you think when you're making everyday decisions. When those great philosophers went out to do their experiments measuring utility, they did the natural thing—you could gain that much, you could have that much for sure, or have a certain probability of gaining more. And wealth is not anywhere in the picture. Most of the time people think in terms of gains and losses.

There is no question that you can make people think in terms of wealth, but you have to frame it that way; you have to force them to think in terms of wealth. Normally they think in terms of gains and losses. Basically that's the essence of Prospect Theory. It's a theory that's defined on gains and losses. It adds a parameter to Bernoulli's theory, so what I call Bernoulli's Error is that he is short one parameter.

I will tell you for example what this means. You have somebody who's facing a choice between having—I won't use large units, I'll use the units I use for my students—2 million, or an equal probability of having one or four. And those are states of wealth. In Bernoulli's account, that's sufficient. It's a well-defined problem. But notice that there is something that you don't know when you're doing this: you don't know how much the person has now.

So Bernoulli in effect assumes, having utilities for wealth, that your current wealth doesn't matter when you're facing that choice. You have a utility for wealth, and what you have doesn't matter. Basically the idea that you're figuring gains and losses means that what you have does matter. And in fact in this case it does.

When you stop to think about it, people are much more risk-averse when they are looking at it from below than when they're looking at it from above. When you ask who is more likely to take the two million for sure, the one who has one million or the one who has four, it is very clear that it's the one with one, and that the one with four might be much more likely to gamble. And that's what we find.
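A toy calculation makes the missing parameter concrete. This is a minimal sketch, not the full Prospect Theory model: the 2-versus-1-or-4 choice comes from the talk, while the curvature and loss-aversion numbers below are illustrative assumptions, and probability weighting is left out entirely. A Bernoulli agent, who evaluates final states of wealth, necessarily gives the same answer whatever he currently owns; a reference-dependent valuation flips with the starting point:

def bernoulli_choice(u):
    """Bernoulli-style agent: compares utilities of FINAL wealth (in millions).
    Current wealth never enters, so every agent gets the same answer."""
    eu_sure = u(2)
    eu_gamble = 0.5 * u(1) + 0.5 * u(4)
    return "sure thing" if eu_sure > eu_gamble else "gamble"

# Any concave utility of wealth illustrates the point; u(w) = -1/w is used here
# because Bernoulli's own log(w) happens to leave this particular pair exactly tied.
print("Bernoulli agent (any current wealth):", bernoulli_choice(lambda w: -1.0 / w))

def value(x, alpha=0.5, lam=2.25):
    """Crude reference-dependent value function: diminishing sensitivity (alpha)
    plus loss aversion (lam). Both numbers are illustrative assumptions."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def reference_dependent_choice(current_wealth):
    """Code the same outcomes as gains or losses relative to current wealth."""
    v_sure = value(2 - current_wealth)
    v_gamble = 0.5 * value(1 - current_wealth) + 0.5 * value(4 - current_wealth)
    return "sure thing" if v_sure > v_gamble else "gamble"

for current in (1, 4):
    print(f"reference-dependent agent starting at {current} million:",
          reference_dependent_choice(current))

With these numbers the agent starting at 1 million takes the sure 2 million and the agent starting at 4 million gambles, which is the pattern described above; the Bernoulli agent cannot distinguish the two cases at all.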

So Bernoulli's theory lacks a parameter. Here you have a standard function between leisure and income and I ask what's missing in this function. And what's missing is absolutely fundamental. What's missing is, where is the person now, on that tradeoff? In fact, when you draw real demand curves, they are kinked; they don't look anything like this. They are kinked where the person is. Where you are turns out to be a fundamentally important parameter.

Lots of very, very good people went on with the parameter missing for three hundred years—theory has the blinding effect that you don't even see the problem, because you are so used to thinking in its terms. There is a way it's always done, and it takes somebody who is naïve, as I was, to see that there is something very odd, and it's because I didn't know this theory that I was in fact able to see that.

But demand curves are wrong. You always want to know where the person is. ...


SESSION THREE

The word "utility" that was mentioned this morning has a very interesting history – and has had two very different meanings. As it was used by Jeremy Bentham, it was pain and pleasure—the sovereign masters that govern what we do and what we should do – that was one concept of utility. In economics in the twentieth century, and that's closely related to the idea of the rational agent model, the meaning of utility changed completely to become what people want. Utility is inferred from watching what people choose, and it's used to explain what they choose. Some columnist called it "wantability". It's a very different concept...

The word "utility" that was mentioned this morning has a very interesting history – and has had two very different meanings. As it was used by Jeremy Bentham, it was pain and pleasure—the sovereign masters that govern what we do and what we should do – that was one concept of utility. In economics in the twentieth century, and that's closely related to the idea of the rational agent model, the meaning of utility changed completely to become what people want. Utility is inferred from watching what people choose, and it's used to explain what they choose. Some columnist called it "wantability". It's a very different concept.

One of the things I did some 15 years ago was draw a distinction, which obviously needed drawing, between them, just to give them names. So "decision utility" is the weight that you assign to something when you're choosing it, and "experience utility", which is what Bentham wanted, is the experience. Once you start doing that, a lot of additional things happen, because it turns out that experience utility can be defined in at least two very different ways. One way is when a dentist asks you, does it hurt? That's one question that's got to do with your experience of right now. But what about when the dentist asks you, Did it hurt? and he's asking about a past session. Or it can be Did you have a good vacation? You have experience utility, which is everything that happens moment by moment by moment, and you have remembered utility, which is how you score the experience once it's over.

And some fifteen years ago or so, I started studying whether people remembered correctly what had happened to them. It turned out that they don't. And I also began to study whether people can predict how much they will enjoy what will happen to them in the future. I used to call that "predictive utility", but Dan Gilbert has given it a much better name; he calls it "affective forecasting": predicting what your emotional reactions will be. It turns out people don't do that very well, either.

Just to give you a sense of how little people know, my first experiment with predictive utility asked whether people knew how their taste for ice cream would change. We ran an experiment at Berkeley when we arrived, and advertised that you would get paid to eat ice cream. We were not short of volunteers. People at the first session were asked to list their favorite ice cream and were asked to come back. In the first experimental session they were given a regular helping of their favorite ice cream, while listening to a piece of music—Canadian rock music—that I had actually chosen. That took about ten-fifteen minutes, and then they were asked to rate their experience.

Afterward, they were also told, because they had undertaken to do so, that they would be coming to the lab every day at the same hour for I think eight working days, and every day they would have the same ice cream, the same music, and rate it. And they were asked to predict their rating tomorrow and their rating on the last day.

It turns out that people can't do this. Most people get tired of the ice cream, but some of them get kind of addicted to it, and people do not know in advance which category they will belong to. The correlation between the change that actually happened in their tastes and the change that they predicted was absolutely zero.
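As a minimal sketch of what a zero correlation between predicted and actual change means, here is a small Python computation with made-up ratings; none of these numbers come from the study.

def pearson(xs, ys):
    # Plain Pearson correlation: covariance divided by the product of standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical subjects: predicted change in liking (last day minus first day)
# versus the change that actually happened.
predicted_change = [-2, -1, -3, -2, 0, -1, -2, -1]
actual_change    = [ 1, -3,  2,  0, 3, -2,  0,  1]
print(round(pearson(predicted_change, actual_change), 2))  # near zero; the study found essentially zero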

It turns out—and this I think is now generally accepted—that people are not good at affective forecasting. We have no problem predicting whether we'll enjoy the soup we're going to have now if it's a familiar soup, but we are not good if it's an unfamiliar experience, or a frequently repeated familiar experience. Another trivial case: we ran an experiment with plain yogurt, which students at Berkeley really didn't like at all. We had them eat yogurt for eight days, and after eight days they kind of liked it. But they really had no idea that that was going to happen. ...


SESSION FOUR


Fifteen years ago, when I was doing those experiments on colonoscopies and the cold-pressor studies, I was convinced that the experience itself is the only thing that matters, and that people just make a mistake when they choose to expose themselves to more pain. I thought it was kind of obvious that people are making a mistake—particularly because when you show people the choice, they regret it—they would rather have less pain than more pain. That led me to the topic of well-being, which is the topic that I've been focusing on for more than ten years now. And the reason I got interested in that was that in the research on well-being, you can again ask, whose well-being do we care for? The remembering self—and I'll call that the remembering-evaluating self, the one that keeps score on the narrative of our life—or the experiencing self? It turns out that you can distinguish between these two. Not surprisingly, essentially all the literature on well-being is about the remembering self.

Millions of people have been asked the question, how satisfied are you with your life? That is a question to the remembering self, and there is a fair amount that we know about the happiness or the well-being of the remembering self. But the distinction between the remembering self and the experiencing self suggests immediately that there is another way to ask about well-being, and that's the happiness of the experiencing self.

It turns out that there are techniques for doing that. And the technique—it's not at all my idea—is experience sampling. You may be familiar with it: people carry a cell phone or something that vibrates several times a day at unpredictable intervals. Then they get questions on the screen that say, What are you doing? and there is a menu; Who are you with? and there is a menu; and How do you feel about it? and there is a menu of feelings.

This comes as close as you can to dispensing with the remembering self. There is an issue of memory, but the span of memory is really seconds, and people take only a few seconds to answer, so it's quite efficient and you can collect a fair amount of data. So some of what I'm going to talk about is the two pictures that you get, which are not exactly the same, when you look at what makes people satisfied with their life and what makes them have a good time.
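As a minimal sketch of what one experience-sampling record might look like, here is a small Python data structure; the field names and menu entries are assumptions for illustration, not the actual instrument.

from dataclasses import dataclass

@dataclass
class SampledMoment:
    timestamp: str     # when the device buzzed
    activity: str      # "What are you doing?" chosen from a menu
    companions: list   # "Who are you with?" chosen from a menu
    feelings: dict     # "How do you feel about it?" feeling -> rating on a small scale

# One hypothetical answer, taking only a few seconds to give:
moment = SampledMoment(
    timestamp="14:32",
    activity="commuting",
    companions=["alone"],
    feelings={"happy": 1, "stressed": 5},
)
print(moment)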

But first I thought I'd show you the basic puzzles of well-being. On the chart of the "Easterlin Paradox", there is a line that goes almost straight up, which is GDP per capita. The line that isn't going anywhere is the percentage of people who say they are very happy, and that is a remembering-self type of question. It's the big puzzle of well-being research, and it has gotten worse in the last two weeks, because there are now new data on international comparisons that make the puzzle even more surprising.

But this is within-country. Within the United States (and it's the same for Japan), over a period where real income grew by a factor of four or more, you get nothing on life satisfaction. Which is sort of troubling for economists, because things are improving. I once had that conversation with an economist, David Card, at Berkeley—he used to be at Princeton—and asked him, how would an economist measure well-being? He looked at me as if I were asking a silly question and said, "income, of course". I said, well, what's the next measure? He said, "log income". [laughter] And the general idea is that the more money you have, the more choices you have—the more options you have—and that giving people more options can only make them better off. This is the fundamental idea of economic analysis. It is probably false; at any rate, it doesn't correspond to these data. So Easterlin, as an economist, caused some distress in the profession with these results.
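As a minimal sketch of the difference between the two measures Card mentions (the numbers are illustrative): the same 10,000-dollar raise counts the same in income terms at every level, but less and less in log-income terms the richer you already are.

import math

raise_amount = 10_000
for income in (20_000, 40_000, 80_000, 160_000):
    gain_in_income = raise_amount                                   # identical at every level
    gain_in_log_income = math.log(income + raise_amount) - math.log(income)  # shrinks as income grows
    print(income, gain_in_income, round(gain_in_log_income, 3))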

So what is the puzzle here? The puzzle is related to affective forecasting: most people believe that circumstances like becoming richer will make them happier. It turns out that people's beliefs about what will make them happier are mostly wrong, and they are wrong in a directional way, and they are wrong very predictably. And there is a story here that I think is interesting.

When people do studies of various categories of people, like the rich and the poor, you find differences in life satisfaction. But everybody who looks at those differences is surprised by how small they are relative to the variability within each of these categories. You compare the healthy and the unhealthy: very small differences.

Age—people don't like the idea of aging, but, at least in the United States, people do not become less happy or less satisfied with their life as they age. So a lot of the standard beliefs that people have about life satisfaction turn out to be false. This is a whole line of research—I was doing predictive utility, and Dan Gilbert invented the term "affective forecasting", which is a wonderful term, and did a lot of very insightful studies on that.

As an example of the kinds of studies he did, he asked people—I think he started that research in '92, when Bush became governor of Texas running against Ann Richards—before the election, Democrats and Republicans alike: how happy do you think you will be, depending on whether Ann Richards or George Bush is elected? People thought that it would actually make a big difference. But two weeks after the election, you come back and you get their life satisfaction or their happiness, and it's a blip, or nothing at all.

QUESTION: What about four years later?

KAHNEMAN: And interestingly enough, you know, there is an effect of political events on life satisfaction. But that effect, like the effects of other things such as being a paraplegic or getting married, is smaller than people expect.

COMMENT: Unless something goes really wrong.

KAHNEMAN: Unless something goes terribly wrong. ...


SESSION FIVE

I'll start with a couple of psychological notions. 

There seems to be a very general psychological principle at work here, which is that sometimes when you are asked a question that is difficult, the mind doesn't stay silent if it doesn't have the answer. The mind produces something, and what it produces very characteristically is the answer to an easier but related question. That's one of the heuristics of good problem-solving, but it is a system one operation, which is an operation that takes place by itself.

The visual illusions that you have here are that kind of thing, because the question that people are asked is, what is the size of the three men on the page? When you look at it, you get a pretty compelling illusion (as it turns out to be) that the three men are not the same size on the page; they are different sizes. It is the same thing with the two monsters. When you take a ruler to them, they are of course absolutely identical. Now what seems to be happening here is that we see the images three-dimensionally. If they were photographs of three-dimensional scenes, then indeed the person on the right would be taller than the person on the left. What's interesting about this is that people are not confused about the question that they've been asked; if you ask them how large the figures are, they answer in fractions of an inch, or in centimeters, not in meters. They know that they are supposed to give the two-dimensional size; they just cannot. What they do is give you something that is a scaling of a three-dimensional experience, and we call that "attribute substitution". That is, you try to judge an attribute and you end up judging something else. It turns out that this happens everywhere.

So the example I gave yesterday about happiness and dating is the same thing; you ask people how happy they are, and they tell you how happy they are with their romantic life, if that happens to be what's on the top of their mind at that moment. They are just substituting.

Here is another example. Some ten or fifteen years ago, when there were terrorism scares in Europe but not in the States, people who were about to travel to Europe were asked questions like, how much would you pay for insurance that would pay a hundred thousand dollars if you died for any reason during your trip? Other people were asked, how much would you pay for insurance that would pay a hundred thousand dollars if you died in a terrorist incident during your trip? People pay a lot more for the second policy than for the first. What is happening here is exactly what was happening with prolonging the colonoscopy. And in fact, psychologically—I won't have the time to go into the psychology unless you press me—the same mechanism produces those violations of dominance, and basically what you're doing there is substituting fear.

You are asked how much insurance you would pay for, and you don't know—it's a very hard thing to do. You do know how afraid you are, and you're more afraid of dying in a terrorist incident than you are of dying for any reason. So you end up paying more, because you map your fear into dollars, and that's what you get.

Now if you ask people the two questions next to each other, you may get a different answer, because they see that one contains the other. A post-doc had a very nice idea. You ask people how many murders there are every year in Michigan, and the median answer is about a hundred. You ask people how many murders there are every year in Detroit, and the median estimate is about two hundred. And again, you can see what is happening. The people who notice that Detroit is in Michigan will not make that mistake. Or if asked the two questions next to each other, many people will understand and will do it right.

The point is that life serves us problems one at a time; we're not served with problems where the logic of the comparison is immediately evident so that we'll be spared the mistake. We're served with problems one at a time, and then as a result we answer in ways that do not correspond to logic.

In the case of time, we took the average instead of the integral. And we take the average instead of the integral in many other situations. Contingent valuation is a method where you survey people and ask them how much we should pay for different public goods. It's used in litigation, especially in environmental litigation; it's used in cost-benefit analysis. I think it's no good whatsoever, but it is a useful example to study.

How much would you pay to save birds from drowning in oil ponds? There is a whole scenario of how the poor birds mistake the oil ponds for real water ponds, and so how much should we pay to cover the oil ponds with netting to prevent that from happening? Surprisingly, people are willing to pay quite a bit once you describe the scenario well enough. But the striking thing is that it doesn't matter what the number of birds is. Two thousand birds, two hundred thousand, two million: they will pay exactly the same amount.

QUESTION: This is not price per bird?

KAHNEMAN: No, this is total. And so the reason is the same reason that you had with time, taking an average instead of an integral. You're not thinking of saving two hundred thousand birds. You are thinking of saving a bird. The emotion is associated with the idea of saving a bird from drowning. The quantity is completely separate. Basically you're representing the whole set by a prototype incident, and you evaluate the prototype incident. All the rest are like that.

When I was living in Canada, we asked people how much money they would be willing to pay to clean lakes from acid rain in the Haliburton region of Ontario, which is a small region of Ontario. We asked other people how much they would be willing to pay to clean lakes in all of Ontario.

People are willing to pay the same amount for the two quantities, because they are paying to participate in the activity of cleaning a lake, or of cleaning lakes. How many lakes there are to clean is not their problem. This is a mechanism I think people should be familiar with: the idea that when you're asked a question, you don't answer that question, you answer another question that comes more readily to mind. That question is typically simpler; it's associated, it's not random; and then you map the answer to that other question onto whatever scale there is—it could be a scale of centimeters, or a scale of pain, or a scale of dollars, but you can recognize what is going on by looking at the variation in these variables. I could give you a lot of examples, because one of the major tricks of the trade is understanding this attribute-substitution business: how people answer questions.

COMMENT: So, for example, in Save the Children-type programs, they focus you on the individual.

KAHNEMAN: Absolutely. There is even research showing that when you show pictures of ten children, it is less effective than when you show the picture of a single child. When you describe their stories, the single instance is more emotional than the several instances and it translates into the size of contributions.

People are almost completely insensitive to amount in system one. Once you involve system two and systematic thinking, they'll act differently. But emotionally we are geared to respond to images and to instances, and when we do that we get what I call "extension neglect". Duration neglect is an example: you have a set of moments and you ignore how many moments there are. You have a set of birds and you ignore how many birds there are. ...
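Here is a minimal sketch of the average-versus-integral point, with made-up pain ratings: adding milder minutes to an episode increases the total (the integral the experiencing self lives through) while lowering the average that the remembering self seems to score.

short_episode     = [6, 7, 8]          # pain rating per minute (illustrative)
prolonged_episode = [6, 7, 8, 3, 2]    # the same episode plus extra, milder minutes

for name, episode in (("short", short_episode), ("prolonged", prolonged_episode)):
    total = sum(episode)                    # the integral: more moments, more total pain
    average = sum(episode) / len(episode)   # extension neglect scores something closer to this
    print(name, "total:", total, "average:", round(average, 2))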


SESSION SIX

The question I'd like to raise is something that I'm deeply curious about, which is what should organizations do to improve the quality of their decision-making? And I'll tell you what it looks like, from my point of view.

I have never tried very hard, but I am in a way surprised by the ambivalence about it that you encounter in organizations. My sense is that by and large there isn't a huge wish to improve decision-making—there is a lot of talk about doing so, but it is a topic that is considered dangerous by the people in the organization and by the leadership of the organization. I'll give you a couple of examples. I taught a seminar to the top executives of a very large corporation that I cannot name and asked them, would you invest one percent of your annual profits into improving your decision-making? They looked at me as if I was crazy; it was too much.

I'll give you another example. There is an intelligence agency, the CIA, and there is a lot of activity there; there are academics involved, and there is a CIA university. I was approached by someone there who said, will you come and help us out, we need help to improve our analysis. I said, I will come, but on one condition, and I know it will not be met. The condition is: if you can get a workshop where you get one of the ten top people in the organization to spend an entire day, I will come. If you can't, I won't. I never heard from them again.

What you can do is have them organize a conference where some really important people will come for three-quarters of an hour and give a talk about how important it is to improve the analysis. But when it comes to, are you willing to invest time in doing this, the seriousness just vanishes. That's been my experience, and I'm puzzled by it.

Since I'm in the right place to raise that question, with the right people to raise the question, I will. What do you think? Where did this come from; can it be fixed; can it be changed; should it be changed? What is your view, after we have talked about these things?

One of my slides concerned why decision analysis didn't catch on. It's actually a talk I prepared because 30 years ago we all thought that decision analysis was going to conquer the world. It was clearly the best way of doing things: you took Bayesian logic and utility theory, a consistent framework in which beliefs are separated from values, and you would elicit the values of the organization and the beliefs of the organization and pull them together. It looked obviously like the way to go, and basically it's a flop. Some organizations are still doing it, but it really isn't what it was intended to be 30 years ago. ...
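As a minimal sketch of what decision analysis in that Bayesian-plus-utility sense amounts to (the options, probabilities, and utilities below are invented for illustration): beliefs are elicited as probabilities, values as utilities, and the two are pulled together by expected utility.

# Elicited beliefs: probability of each scenario (they sum to 1).
beliefs = {"boom": 0.3, "steady": 0.5, "bust": 0.2}

# Elicited values: utility of each option under each scenario.
utilities = {
    "expand": {"boom": 100, "steady": 40, "bust": -60},
    "hold":   {"boom": 30,  "steady": 25, "bust": 10},
}

def expected_utility(option):
    # Pull beliefs and values together: probability-weighted utility.
    return sum(beliefs[s] * utilities[option][s] for s in beliefs)

for option in utilities:
    print(option, expected_utility(option))
print("recommended:", max(utilities, key=expected_utility))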


 

Edge Dinners
Event Date: [ 3.6.07 ]
Location:
Monterey, CA
United States

"This goes beyond all known schmoozing. This is like some kind of virtual-intellectual conspiracy-in-restraint-of-trade."
Bruce Sterling


 

On the Road
Event Date: [ 5.30.06 ]
Location:
Spain

Last year Edge received an invitation from Juan Insua, Director of Kosmopolis, a traditional literary festival in Barcelona, to stage an event at Kosmopolis 05 as part of an overall program "that ranges from the lasting light of Cervantes to the (ambiguous) crisis of the book format, from a literary mapping of Barcelona's Raval district to the dilemma raised by the influence of the Internet in the kitchen of writing, from the emergence of a new third culture humanism to the diverse practices that position literature at the core of urban creativity."

Marc D. Hauser

Lee Smolin

Robert Trivers

Eduard Punset — Redes-TV

JB






Las nuevas lecturas del 'Quijote' copan los actos de Kosmopolis

Israel Punzano — Barcelona
EL PAÍS - Cultura - 04-12-2005

Cervantes was not the only protagonist of the second day of Kosmopolis. Also debated was the influence of Darwin's theory of natural selection on advances in diverse scientific disciplines, ranging from evolutionary biology to neuroscience to cosmology. Taking part in this colloquy, which also covered the future of humanism, were the cosmologist Lee Smolin, the biologist Robert Trivers and the neuroscientist Marc Hauser. The event was presented by Eduard Punset and moderated by John Brockman, who is known for publishing and promoting scientific writing. Smolin emphasized the importance of Darwin's investigations for the later development of Einstein's theory of relativity and wondered whether we are prepared to accept a world without absolute laws, where everything changes. Hauser pointed out that Darwin's revolution was also about morality, contrasting the rationality of Kant with the primacy of the emotions in Hume.


Brockman: "Hoy la cultura es la ciencia, los intelectuales de letras estan desfasados"
Justo Barranco — 05/12/2005 — Barcelona

"The thinkers of the third culture are the new public intellectuals" as "science is the only news"... "Nobody voted the electricity, the Internet, the birth control pill, or for fire. "The great inventions that change everything involves technology based on science "... "It is critical to participate in the discussion of such questions today as the culture is science." 


TRIBUNA: JOHN BROCKMAN
La tercera cultura en Kosmopolis
John Brockman
EL PAÍS - 05-12-2005

In terms of science, the third culture is front and center: geneticist J. Craig Venter is attempting to create synthetic genes as an answer to our energy needs; biologist Robert Trivers is exploring the evolutionary basis for deceit and self-deception in human nature; biologist Ian Wilmut, who cloned Dolly the sheep, is using nuclear transfer to produce embryonic stem cells for research purposes and perhaps eventually as cures for disease; cosmologist Lee Smolin researches the Darwinian evolution of universes; quantum physicist Seth Lloyd is attempting to build quantum computers; psychologist Marc D. Hauser is examining our moral minds; and computer scientists Sergey Brin and Larry Page of Google are radically altering both the way we search for information, as well as the way we think.


CULTURA
Kosmopolis, literatura a la ultima
Eva Belmonte
December 3, 2005

Kosmopolis 2005. International Celebration of Literature at the Centre of Contemporary Culture of Barcelona (CCCB).

...The relation between science and the third culture was another of the subjects of debate at this Celebration of Literature. Four personalities of the scientific world participated in the Third Culture event: Robert Trivers, John Brockman, Marc Hauser and Lee Smolin. They demonstrated that literature is not just the province of the old school of the humanities culture.


Can a person be considered cultured today with only slight knowledge of fields such as molecular biology, artificial intelligence, chaos theory, fractals, biodiversity, nanotechnology or the human genome? Can we construct a proposal of universal knowledge without such knowledge? The integration of "literary culture" and "scientific culture" is the basis for what some call the "third culture": a source of metaphors that renews not only the language, but also the conceptual toolkit of classic humanism.
The New Humanists 
SALVADOR PÁNIKER

A multifaceted figure
Brockman and the New Intellectuals 
Interview
"Science won the battle"
SALVADOR LLOPART
"¿Qué queda del marxismo? ¿Qué queda de Freud? La neurociencia le ha dejado como una superstición del siglo XVIII, de ideas irrelevantes"


 

...there is a deep relation between Einstein's notion that everything is just a network of relations and Darwin's notion because what is an ecological community but a network of individuals and species in relationship which evolve?  There's no need in the modern way of talking about biology for any absolute concepts for any things that were always true and will always be true.—Lee Smolin

...what I'm interested in is how science can now come together with moral philosophy and do some interesting work at the overlap areas.  This is not to say that science takes over philosophy, by no means.  It works together with philosophy, to figure out what the deep issues are, what the overlapping areas are, and how we can meet together.—Marc D. Hauser

I believe that self-deception evolves in the service of deceit.  That is, that the major function of self-deception is to better deceive others.  Both make it harder for others to detect your deception, and also allow you to deceive with less immediate cognitive cost.  So if I'm lying to you now about something you actually care about, you might pay attention to my shifty eyes if I'm consciously lying, or the quality of my voice, or some other behavioral cue that's associated with conscious knowledge of deception and nervousness about being detected.  But if I'm unaware of the fact that I'm lying to you, those avenues of detection will be unavailable to you. —Robert Trivers


TRIVERS, SMOLIN, HAUSER: "DARWIN Y LA TERCERA CULTURA" IN BARCELONA


Something radically new is in the air: new ways of understanding physical systems, new focuses that lead to our questioning of many of our foundations. A realistic biology of the mind, advances in physics, information technology, genetics, neurobiology, engineering, the chemistry of materials: all are questions of capital importance with respect to what it means to be human.

Charles Darwin's ideas on evolution through natural selection are central to many of these scientific advances. Lee Smolin, a theoretical physicist, Marc D. Hauser, a cognitive neuroscientist, and Robert Trivers, an evolutionary biologist, travelled to Barcelona last October to explain how the common thread of Darwinian evolution has led them to new advances in their respective fields.

The evening was presented by Eduard Punset, host of the internationally-viewed Spanish-language science television program Redes, and a best-selling author in Spain. A Redes television program based on the event was broadcast throughout Spain and Latin America.

The house was packed. The Barcelona press was present (see El País, La Vanguardia, El Mundo, and a cover story in "Culturas", the magazine of La Vanguardia).


I believe that self-deception evolves in the service of deceit.  That is, that the major function of self-deception is to better deceive others.  Both make it harder for others to detect your deception, and also allow you to deceive with less immediate cognitive cost.  So if I'm lying to you now about something you actually care about, you might pay attention to my shifty eyes if I'm consciously lying, or the quality of my voice, or some other behavioral cue that's associated with conscious knowledge of deception and nervousness about being detected.  But if I'm unaware of the fact that I'm lying to you, those avenues of detection will be unavailable to you.

ROBERT TRIVERS: DECEIT AND SELF-DECEPTION

(ROBERT TRIVERS:) Why do I talk about, or wish to talk about, deception and self-deception in the same breath?  Because I think you miss the truth about each if you are not conscious of the other and the relationship between the two.  If by deception you only think of conscious deception, where you're planning to lie or aware of the fact that you're lying, you will miss all the lying that goes on that the individual is unaware of, and this may be the larger portion of lies and deception that is going on. 

Conversely, if you think about self-deception without comprehending its connection with deception, then I think you'll miss the major function of self-deception.  In particular, you'll be tempted to go the route that psychology went a hundred years ago or so and think of self-deception as defensive: I'm defending my tender ego, I'm defending my weak psyche.  And you will not see the offensive characteristic of self-deception. 

What do I mean by that?  I mean that I believe that self-deception evolves in the service of deceit.  That is, that the major function of self-deception is to better deceive others.  Both make it harder for others to detect your deception, and also allow you to deceive with less immediate cognitive cost.  So if I'm lying to you now about something you actually care about, you might pay attention to my shifty eyes if I'm consciously lying, or the quality of my voice, or some other behavioral cue that's associated with conscious knowledge of deception and nervousness about being detected.  But if I'm unaware of the fact that I'm lying to you, those avenues of detection will be unavailable to you. 

Regarding the second argument, it is intrinsically difficult, and mentally demanding, to lie and be conscious about it.  The more complex in detail the lie—the longer you have to keep it up—the more costly cognitively.  I believe that selection favors rendering a portion of the lie unconscious, or much of the knowledge of it unconscious, so as to reduce the immediate cognitive cost.  That is, with self-deception you'll perform better cognitively on unrelated tasks that you might have to do moments later than if you had just undergone a lot of consciously mediated deception.

Let me step back and say a word or two about the underlying logic. First of all, we understand that if we are making an evolutionary argument in terms of natural selection, we are talking about benefits to individuals in terms of the propagation of their own genes, and there are innumerable opportunities in nature to gain a benefit by deceiving another. 

However, the reverse is true for the deceived. 

The deceived is typically losing knowledge or resources or whatever, resulting in a decrease in the propagation of their genes.  So you have what we call a co-evolutionary struggle: with natural selection improving deception on the one hand, and improving the ability to spot deception on the other.

Now let me just say that deception is a very deep feature of nature. It occurs at all levels, in all interactions: viruses and bacteria, for example, often use deception to get inside you. They may mimic your own cell-surface proteins. They may have other tricks to deceive your system into not recognizing them as alien and worthy of attack. Even genes inside yourself that propagate themselves selfishly during meiosis may do so by mimicking particular sub-sections of other genes so as to get copied an extra time, even though the rest of the genome, if you asked its opinion, would be against this extra copying.

When you turn to insects and larger creatures like those, we know that in relations between species, again there's a huge and rich world of deception.  Considering insects alone: they will mimic harmless objects so as to avoid detection by their predators.  Or they will mimic poisonous or distasteful objects to avoid being eaten.  Or they will mimic a predator of their predator, so as to frighten away their predator.  Or, in one case, they will mimic the predator that's trying to eat them, so that the predator misinterprets them as a member of their own species and gives them territorial display instead of eating them.

They will even, I have to tell you, mimic the feces, or droppings, of their predators.  That's so common it has a technical term in the literature, forgive me, "shit mimics".  And they come in all varieties and sizes.  There are moths that look like the splash variety of a bird dropping.  And you can understand from the bird's standpoint, you might have a strong supposition that this is a butterfly or a moth, but you'd be unwilling to put it to the test—especially if you have to use your beak to put it to a test.

Now when you turn to relations within species, you find a rich world that we're uncovering now of deception also.  To give you two quick examples.  Warning cries have evolved in many contexts to warn others of danger.  But they can be used in new and deceitful contexts.  For example you can give a warning cry in order to grab an item of food from another individual.  The individual's startled and runs for cover, you grab the food. You can give a warning cry when your offspring are at each other's throats—they run to cover and then you separate them and protect them from each other.  It has even been described that you can give a warning call when you see your mate near a prospective lover—get them dashing to safety, and then you intervene. 

In this continually co-evolving struggle over truth and falsehood, if you will, there are situations in other creatures as well as ourselves where we have to make tight evaluations of each other's motives. In an aggressive encounter: I'm lining up against Marc Hauser; how confident is he of himself? In courtship: I'm courting someone; the woman is looking at me; how confident am I of myself? And so on. That allows misrepresentation of these kinds of psychological variables, and you can see how self-deception can start coming in. Be more confident than you have grounds to be, and be unconscious of that bias, the better to manipulate others.

Once you have language, that greatly increases the opportunity for both deception and self-deception.  We spend a lot of time with each other pushing various theories of reality, which are often biased towards our own interests but sold as being generally useful and true.

Let me just mention a little bit of evidence—and of course there's a huge amount of evidence regarding self-deception, from everyday life, from study of politics and history, autobiography, et cetera.  But I just want to talk about some of the scientific evidence in psychology. There's a whole branch of social psychology that's devoted to our tendencies for self-inflation.  If you ask students how many of them think they're in the top half of the class in terms of leadership ability, 80 percent say they are.  But if you turn to their professors and ask them how many think they're in the top half of their profession, 94 percent say they are. 

And people are often unconscious of some of the mechanisms that naturally occur in them in a biased way.  For example, if I do something that is beneficial to you or to others, I will use the active voice: I did this, I did that, then benefits rained down on you.  But if I did something that harmed others, I unconsciously switch to a passive voice: this happened, then that happened, then unfortunately you suffered these costs. One example I always loved was a man in San Francisco who ran into a telephone pole with his car, and he described it to the police as, "the pole was approaching my car, I attempted to swerve out of the way, when it struck me". 

Let me give you another, the way in which group membership can entrain language-usages that are self-deceptive. You can divide people into in-groups or out-groups, or use naturally occurring in-groups and out-groups, and if someone's a member of your in-group and they do something nice, you give a general description of it—"he's a generous person".  If they do something negative, you state a particular fact: "in this case he misled me", or something like that.  But it's exactly the other way around for an out-group member.  If an out-group member does something nice, you give a specific description of it: "she gave me directions to where I wanted to go".  But if she does something negative, you say, "she's a selfish person".  So these kinds of manipulations of reality are occurring largely unconsciously, in a way that's perhaps similar to what Marc Hauser in his talk was saying about morality.

A new world of the neurophysiology of deceit and self-deception is emerging. For example, it has been shown that consciously directed forgetting can still produce effects a month later, and that these effects are achieved by a particular area of the prefrontal cortex (normally associated with initiating motor responses or overcoming cognitive obstacles) suppressing activity in the hippocampus, the brain region in which memories are stored. So there is clear evidence that one part of the brain has been co-opted in evolution to serve the function of suppressing personal information within the self.

What I want to turn to very briefly is the relationship between self-deception and war.  Now war, in the sense of battles between large numbers of soldiers, is an evolutionarily very recent phenomenon.  A raid, where you run over to another group, kill off a number of individuals, and run back, is something we share with chimpanzees.  And that has a long history and is much more likely to be constrained by rational considerations. 

But warfare as we experience it now is a phenomenon about ten thousand years old, plus or minus a few thousand. Not an awful lot of time for selection. And not much selection necessarily on those who start the wars. There may be a lot of selection in the civilian population or among the soldiers, but it's not necessarily true that those who start stupid wars end up with as great a decrease in surviving offspring (and other kin) as one would have wished.

Wars also tempt us easily to self-deception for other reasons.  There is often very little overlap in self-interest between your group and another group, in contrast to activities within the group.  There is also low feedback from members of an outside group.  There's greater ignorance. 

And so war is a particular situation where self-deception is expected to be both especially prominent and especially harmful in its general effects.

Let us use the most recent war—the current war launched by my own country, the United States in 2003 against the country of Iraq—to see one simple illustration of how deceit and self-deception is a useful concept in thinking about war. It has been said that the first casualty of war is the truth, but we know regarding the Iraq war that the truth was dead long before this war started.  We know the thing was conceived and promulgated based on a lie.  The predator, the U.S., saw an opportunity to leap on a prey, and decided almost immediately, within days of 9/11, and certainly within a couple of months, to prepare and launch this war.

Now what's the significance of that fact?  Well, one significance of it is, psychologists have shown, very nicely I think, for 20 years now, that when we are considering an option—whether to marry Susan, or to go to the University of Bologna instead of Barcelona, or whatnot—we are much more rational, we weigh options, and we are even, if anything, slightly depressed.  But once we decide which way to go, we act as if we want all the cells in our body rowing in the same direction.  If it's Susan we're going to marry, we don't want to hear about Maria or some of Susan's less desirable side.  If it's Barcelona we're going to, that's the best university to go to and to hell with Bologna. 

Now the point about this war is that there was no period of rational discussion of the pluses and minuses. The United States decided—at least a small cabal within it, including the President, decided—to go to war almost instantaneously. They immediately went into the implementation stage—your mood goes up, you downplay the negatives—after all, you have made your decision—and you do not wish to hear contrary opinions. Especially you do not wish to hear contrary opinions if the real reasons for going to war can not be revealed and the whole public pretense is a lie.

Thus, all planning for the aftermath was dismissed because it greatly increased the apparent expense and difficulty and suggested greatly diminished gains from the endeavor. This, of course, implicitly called into question the entire enterprise, so rational planning was dismissed. And witness the dread effects, a continuing bloodbath unleashed on an innocent population.

One other comment: self-deception can not only get you into disastrous situations, but then it gives you a second reward and that is, it deprives you of the ability to deal with the disaster once it's in front of you.  And what could be more dramatic than what happened in the first month after the U.S. arrived in Baghdad—the complete looting of the country, 20 billion dollars of resources destroyed, priceless cultural heritage destroyed—all of that and the U.S. sat around and sucked its thumb.  Did nothing to deal with it.  And has been dealing with an escalating disaster ever since.  A blood-letting of dreadful proportions, and still blind about what to do.

Well, I'll just summarize these thoughts by saying that there's good news and there's bad news. The good news is, we do have it in our grasp at last to develop a scientific theory of deceit and self-deception, integrating all kinds of information, but at least sticking this phenomenon out in front of ourselves and studying it objectively. The bad news is that the forces we're dealing with—that is, of deceit and self-deception—are very powerful.

ROBERT TRIVERS' scientific work has concentrated on two areas, social theory based on natural selection and the biology of selfish genetic elements. He is the author of Social Evolution, Natural Selection and Social Theory: Selected Papers of Robert Trivers; and coauthor (with Austin Burt) of Genes in Conflict: The Biology of Selfish Genetic Elements. He was cited in a special Time issue as one of the 100 greatest thinkers and scientists of the 20th Century.

Robert Trivers's Edge Bio Page


...there is a deep relation between Einstein's notion that everything is just a network of relations and Darwin's notion because what is an ecological community but a network of individuals and species in relationship which evolve?  There's no need in the modern way of talking about biology for any absolute concepts for any things that were always true and will always be true.

LEE SMOLIN: COSMOLOGICAL EVOLUTION

(LEE SMOLIN:) I'm a theoretical physicist, and I'm here to talk about natural selection and Darwin.  The main thing that I'll have to communicate is why somebody who thinks about what the laws of nature are has something to say about Darwin and the impact and the role of Darwin in our contemporary thinking over all fields. 

But as a way of getting into that, I want to say something—and you're going to hate me for this—about John. When John talks about the Third Culture, what he has done, besides create the idea, is create a group of people. I don't know if it's a community, but there are very close friendships in it—in my case, many of them with people whom I met through John. This conversation that he talks about is a real conversation which, at least as far as I know, was not happening and would not be happening were it not for John.

I think it is not so usual that John appears with the people whom he talks about and writes about, and it is wonderful to be here with him. So I wanted to say "thank you" publicly, because when I started to think about Darwinism, I didn't know any biologists and I didn't know any psychologists—the academic world is very narrow—and I didn't know any artists or digerati. It is because of this involvement that I met many of the people I admired and whose work has inspired me.
      
Now I want to make a claim, and my claim is that while Darwin's ideas are certainly completely absorbed and verified within biology, the whole impact of Darwin's ideas is still yet to be absorbed and felt, and that impact is going to happen in my field of theoretical physics and cosmology; I see it happening in other fields as well—mathematics, the social sciences, and so forth. I also want to make a hypothesis about why—and I'll speak about my field, because I don't have any right to speak about another field, but I think the resonances and the similarities are there.

In my field, two things happened in the 20th century that we're absorbing.  One of them is Einstein and the revolution of physics started by Einstein, both relativity theory and quantum theory. And I'm going to claim in my brief time—I'm not going to have time to fully justify—that the main development and the main meaning of Einstein's contribution is closely related to the main meaning of Darwin's contribution.  At least I'll say why in a few minutes.

The other thing that's happened in my field, and this has been accelerating really for the last five years—I can't see the audience, so I don't know if any of my friends who are physicists here in Barcelona are here in the audience, but I think they will agree that a very strange thing has happened to our field, which is that we used to think that the purpose of theoretical physics was to understand what the laws of nature are—to learn the laws of nature—and we're not done with that.  But what we've discovered on the way is that we really have to answer a different question—and for our field a very new question—which is, why these laws and not other laws? 

I don't actually believe it's going to work all the way through, but the most successful approach to putting all the laws together and unifying physics is string theory and in the last few years we've learned that there are an infinite number of these theories, and the best we can do looking for a unified theory so far is to have an infinite list of theories, one of which might describe our universe.  So the question has gone from, what are the laws? to, why these laws and not other laws?  Now I think that the only rational way to approach that question is through Darwin's thinking, that is, through evolution by natural selection. 

If I had been an educated person rather than a narrowly educated person in science, I would have known the quotation that I'm going to read to you, which is from the 1890s, from the American philosopher Charles Peirce, who was one of the founders of the philosophical school called "pragmatism". Already in the 1890s he was worrying about the question of why these laws, which shows that sometimes philosophers really are a century ahead of the scientists.

He wrote—and I'll read it slowly so that a translator can get it:

"To suppose universal laws of nature capable of being apprehended by the mind and yet having no reason for their special forms but standing inexplicable and irrational is hardly a justifiable position."  He's saying, it's not enough to know what the laws are, you want to ask why these laws, and just to say ‘these are the laws, tough,' is irrational and unjustifiable. He says: "Uniformities are precisely the sorts of facts that need to be accounted for.  Law is par excellence the thing that wants a reason."  And now here his thesis is this: "The only possible way of accounting for the laws of nature, and for uniformity in general, is to suppose them the results of evolution."  By which, from the context, we know it's evolution by natural selection because he was fully absorbing the impact of Darwinism and that's a lot of what his philosophy and the American Pragmatists' philosophy was about.

In my own work, I began to worry about this problem about 15 years ago—why these laws and not other laws—and I went looking for a method to attack that problem because there's another side to that problem, which is that the laws we happen to have are very special.  The laws we happen to have have a number of free constants that can be freely adjusted and about 25 years ago, Martin Rees and colleagues—these are great cosmologists and astrophysicists—began to realize that if you varied these numbers—these numbers refer to things like the mass of the elementary particles, the mass of the electron, the mass of the proton, the strengths of the different forces—the world we live in would fall apart. 

Imagine that the universe can be set up by a machine with a set of dials where you dial in these constants. If you go away—in any direction—from the settings of the dials that we have, there are no more stars, there are no more of the hundred-odd stable nuclei, there is just hydrogen. There's no structure, there's no energy; the world is just dead, in internal equilibrium.

So the fact that we live in a world which is as complex as it is, which has stars that live for billions of years, which enables life to evolve on planets, which is a process that takes hundreds of millions to billions of years, is due to these constants being finely-tuned—the dials being precisely tuned.  They were worried by that and most of the people who found it are sort of liberal British Anglicans, and they have an answer that vaguely has something to do with God, or is something which is logically equivalent to God.  And I was disturbed by that, and was looking for an alternative which would be a scientific explanation of how the dials got turned. 

At about that time, somebody gave me a book by Richard Dawkins and I started to read it and it opened up my eyes to the kinds of explanations which are possible in biology. I copied it and I made a little cosmological theory that I don't have time to tell you about, but I might in the discussion discuss, in which these dials get tuned by a process which is just like natural selection. 

It works better than the theory that it was made by God, or something logically equivalent to made-by-God, in that—and I think this is characteristic of biology and of Darwinian thought—the process of natural selection produces not just what we see, but a whole very complicated set of interrelations among the different species and among the individuals of the species, which leads to predictions that these guys can test. Similarly, the style of Darwinian thought in cosmology and physics has led to predictions that we could test. That impressed me very deeply and I started to look into it more, and as I did, I began to see a connection with the field I was actually trained in, which is relativity and quantum theory.

Roughly speaking the connection is the following—and I'm just going to say some key words and define them and key statements and then, if people want, I can elaborate on it—What did Einstein do, in one sentence?  Before Einstein—and what this has to do with is the nature of space and the nature of time—physics, which was based on Newton's physics, was formulated the following way: there's a fixed absolute space, it's eternal, it goes on forever, it was always there, and particles come and move around in this space, and they have all their properties defined with respect to that space. 

The space never changes.  Similarly, time is absolute, flows whether anything's happening or not, the same way: in a certain sense, space and time are outside the phenomena that we observe and prior to it.  And Newton believed that for a good reason, which is, he believed that space was really God's way of sensing his creation. These were really theological ideas for Newton and they became how people did science.

Einstein replaced that with another idea, which is much more common-sense, which is that space is a system of relations amongst things in the world. Where this pen is in space is not some absolute thing that only God can see; it's where it is relative to the glass, the bottle, Marc here, and so forth. So space has no meaning apart from a network of relationships, and time is nothing but change in that network of relationships. And that was an idea that some philosophers—of course we scientists don't pay attention to philosophers, I said—some philosophers, like Leibniz and Mach, had been arguing for, against the success of Newton's physics.

But it was Einstein who first took these ideas and made them into science, science which, as far as we know, is true and is much better science than the previous Newtonian science. So the change is from a world in which things exist against an absolute, preexisting framework to a world which is nothing but a network of relations, where change is nothing but change in those relations.

Now here's something that's fascinating: we draw pictures in our work—when I work on the theory of space-time and quantum space—we draw pictures which are networks of relations and how they change in time and our pictures look just like pictures of ecological networks that these people study.  Or the Internet.  Or networks of people in interaction, in social interaction.  And we began to notice that: why do our pictures look the same as these pictures from biology and social theory and the Internet and so forth?  I think the reason is that there is a deep relation between Einstein's notion that everything is just a network of relations and Darwin's notion because what is an ecological community but a network of individuals and species in relationship which evolve?  There's no need in the modern way of talking about biology for any absolute concepts for any things that were always true and will always be true.

That is what I think is important about Darwin and, again, why I think it's closely related to Einstein's ideas. It's just the start of what I hope is a conversation.

Let me close by saying what the scariest idea is for me, because these are really revolutionary ideas, and that means that they're scary to those of us who think about them. We look forward to the generation to whom they will not be scary, which will mean that the revolution is over and we can go and have fun—not that we don't have fun, but they can take over.

The scary thing is this: if the laws evolve, what does that really mean? We're used to thinking of laws as absolutely true, true for all time—the phrase "God-given" comes to mind, because that's how the founders of modern physics like Newton thought about them. If laws instead become, as Peirce said, explainable through a process of evolution, then that means time is very real, in a way that it is not in other representations of physics.

But it's also very scary, because we're used to thinking of laws as absolute, and if laws evolve, then at least I and the people I work with get very confused. What does it mean? Is there nothing? What is guiding the evolution? Are there just other laws, which you don't have to worry about because you have our laws to hold things steady, but when our laws start to evolve, is there anything under everything? Or is it possible that people in the future, when this revolution that Einstein started is over, will be perfectly comfortable living in a world described by the philosopher Peirce, in which there is nothing to laws but the temporary, momentary result of an ongoing process of evolution?

LEE SMOLIN, a theoretical physicist, is a founding member of and research physicist at the Perimeter Institute in Waterloo, Canada. He is the author of The Life of the Cosmos, Three Roads to Quantum Gravity, and the recently published The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next.

Lee Smolin's Edge Bio Page


...what I'm interested in is how science can fuse with and energize moral philosophy to create some powerful new ideas and findings at the interface. This is not to say that science will take over philosophy. If this new enterprise works at all, it will be through a deep collaboration, working to find out the origins of our moral judgments, and how they figure in our ethical decisions and moral institutions.

MARC D. HAUSER: MORAL MINDS

(MARC D. HAUSER:) I want to echo one of Lee's comments about John, and say thanks for a slightly different, but related reason. What I believe John has allowed many of us to do, which is exciting, is to communicate our passion to a broader audience, escaping academia to exchange with interested professionals and others from a broader slice of mental life.  This not only enriches understanding at a broader level, but also allows for a more interesting dialog. So thank you John.

Today, I want to engage you in a game that I hope will bring to life my thinking in the last few years.  Here is the game: I want you to turn to your neighbor and pair up into a team—okay, you're a pair now.  Please pair off with somebody.  One of you will be designated the donor in this game and the other person is the receiver.  Please choose a role, either donor or receiver.  Please do pair off, as I really need your data; I'm an experimentalist.  Okay, here is the game.  It's going to be played once.  I'm giving—play along with me—each donor ten euros. The game starts in the following way: the donor is going to turn to the receiver and offer some proportion of that ten euros—one, two, up to ten.  The receiver will respond by either accepting the offer or rejecting it.  If the receiver accepts, he or she gets what was offered and the donor gets what's left; if the receiver rejects, nobody gets any money.  So now, donor, make an offer to the receiver, and receiver, respond.

Okay.  Let me collect some of the data by asking you to raise your hands in the following way: of the donors, how many offered between one and three euros?  Raise your hands.  How many offered between four and six euros?  How many offered between seven and ten?  Only a few very generous people, and most of you offered in the four to six range.  Now, how many of the receivers rejected their offers?  Keep your hands up—of those with your hands up, how many of you got offers of one to three euros?  One to three euros?  Small numbers?  Small offer?  How much were you offered?  Uno.  Okay, good.  Now what I want you to do with me is think through the logic of the game as if you were an economist.  If you were trying to maximize your returns, donors should have given the lowest offers possible, and they should have been thinking that the receivers should accept any offer, because one euro is certainly better than zero euros.  You didn't have anything to begin with, so one's better than nothing; two's better than nothing; and so is three.

But it turns out that when this game is played, in many, many different countries, the typical offer is exactly in the range seen here: about four, five, or six euros—much more than if you were trying to maximize your own returns.  And yet we seem to make this calculation very quickly, spontaneously, almost without thinking.  That's example number one.  Keep it in mind.
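For readers who want to see the arithmetic behind this contrast, here is a minimal sketch in Python of the one-shot game as described above. The offer weights and rejection thresholds are illustrative assumptions standing in for "typical" audience behavior, not data from this talk; only the rules of the game (a ten-euro endowment, one offer, accept or reject) come from the text.

import random

ENDOWMENT = 10  # each donor starts with ten euros, as in the game above


def play_round(offer, min_acceptable):
    """Return (donor_payoff, receiver_payoff) for a single one-shot offer."""
    if offer >= min_acceptable:       # receiver accepts
        return ENDOWMENT - offer, offer
    return 0, 0                       # receiver rejects: nobody gets any money


def simulate(n_pairs=1000, seed=0):
    """Average payoffs under assumed 'typical human' behavior: offers cluster
    around four to six euros, and receivers tend to reject insultingly low ones."""
    rng = random.Random(seed)
    donor_total = receiver_total = rejections = 0
    for _ in range(n_pairs):
        offer = rng.choices([1, 2, 3, 4, 5, 6], weights=[1, 1, 2, 6, 6, 4])[0]
        min_acceptable = rng.choice([0, 2, 3, 3, 4])   # assumed rejection thresholds
        d, r = play_round(offer, min_acceptable)
        donor_total += d
        receiver_total += r
        rejections += (d == 0 and r == 0)
    return donor_total / n_pairs, receiver_total / n_pairs, rejections / n_pairs


# The income-maximizing strategy the economist expects: offer one euro, accept anything.
print("payoff-maximizing donor keeps:", play_round(1, 1)[0])   # 9 euros
# The behavior audiences actually show: generous offers, occasional rejections of low ones.
print("avg donor payoff, avg receiver payoff, rejection rate:", simulate())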

Here's example number two. I want you to imagine that you are watching a train moving down a track, out of control.  It's lost its brakes.  If the train continues, it will hit and kill five people.  But you are standing next to the train tracks, and you can flip a switch and turn that train onto a sidetrack, where there's one person.  Now the train will kill that one person.  Here's the question: is it permissible—morally permissible—for you to flip the switch, causing the train to kill one but save five?  If you think yes, raise your hand.  If you think no, raise your hand.  Okay, most of you think it is permissible.

Now, second example: here comes that train again; it's going to kill the five if it keeps going. You are standing next to a very heavy, fat person, and you can throw them onto the tracks, killing them, but the train will stop before it reaches the five.  Is it morally permissible to throw the fat person?  Yes?  We've lost half of you!  Or more.  Okay, what happened?  Why did so many of you switch from a permissible to a forbidden judgment?

Here is the idea that I want to give you tonight, in the next few minutes.  There has been a long history—a very old tradition—about the sources of our moral judgments.  Where do they come from?  Many moral philosophers and legal scholars think that the way we deliver a moral judgment, like you just did, comes from reasoning. It comes from thinking about the principles, maybe utilitarian ones (more saved is better than fewer saved).  You work through the principles in a conscious, reasoned, rational way.  This was certainly the view that someone like Kant favored: that you deliberate your way to your moral judgments.  Now opposing that view — diametrically opposed — was a view that dates back at least to Hume, which is that when we give a moral judgment, we do so based on our emotions. It just feels wrong, or it feels right, to do something, and that's why we do it, that's why we say it's morally right or morally wrong.

What I want to argue today is that both of these views, which have dominated the entire field of moral philosophy, are wrong, at least in one particular way.  What you just did tonight is an example of why.  You delivered those moral judgments quickly, probably without reasoning, and without consciously thinking about principles.  And when I have asked literally thousands of people—on the Internet, and in small-scale societies like the Mayans and hunter-gatherers of Africa—they deliver exactly the same judgments that you did tonight, but are incapable of justifying why.  Typically they say it's a hunch or a gut feeling.  So, for example, let me illustrate by telling you about my father's response to these cases.  He was a distinguished physicist. But I am not picking on the physicists.  When I first presented him with these moral dilemmas, the ones you just answered, he said, yes, you can flip the switch, turning the train onto the side track; he said, yes, you can push the fat man onto the tracks.  I said, but Dad, really?  He answered, "Of course, it's still five versus one."  He was following good utilitarian guidelines.

And now I give him case number 3.

You are a doctor in a hospital and there are five people in critical care.  Each person needs an organ to survive.  The nurse comes to the doctor and says, doctor, there's a man who has just walked into the hospital, completely healthy, coming in for a visit.  We can take his organs and save the five.  Can you do that, Dad?  He immediately replies, "No, you can't just kill somebody!"  I then say, "But you killed the fat man five seconds ago."  He volleys back, "Okay, you can't kill the fat man."  "But what about the switch?" I say.  Defeated, he replies, "Okay, not the switch either."  And the whole thing unravels, because there is not a consciously accessible set of principles that people can recall and use to justify what's going on.  And it's not based on emotion.  It's based on a calculus that the mind has, one that evolved to solve particular kinds of moral dilemmas.  And it's not learned, either; it's there in place early in development.

If I have been sufficiently clear this far, you may have already figured out where I am going, and the connections I wish to make with another discipline.  The core of my argument for moral judgment derives from an argument that the linguist Noam Chomsky developed almost 50 years ago concerning the nature of language, its representation in the mind and its normal functioning in every human.   The idea here in a nutshell is that the way our moral sense works is very much like the way language works.  There is a universal set of moral principles that allows the establishment of a set of possible moral systems.  In this sense, perhaps this provides some convergence with what Lee said just a few seconds ago.  In the same way that you might want to ask about possible universes, I want to ask the question about possible moral systems—that the mind is constraining the range of possible variation. 

So the deep aspect of Chomsky's thinking about language, which I think is directly translatable into the way we think about morality, and the way we do the science, is by imagining that humans are equipped with, born with, a set of universal principles.  What culture can do is change things locally—like a parameter—there are switches. Once you turn something on, things can change.

Let me try to give you a concrete example of some work that a student of mine recently did.  There's a population of people in Panama, Central America, called the Kuna Indians.  One part of their range is very remote, and this is where we worked.  They live in a quite simple type of society, with small-scale agriculture and fishing.  We went there recently and gave them moral dilemmas exactly like the ones you just answered.  They weren't about trolleys; they were about wild animals.  So in one example, there are crocodiles coming to eat five people in the river; you're in a canoe, and you can move those crocodiles off to where they will kill one.  Is it permissible?  The Kuna said it was, virtually every single person we asked.  Here's the second case: you can throw somebody into the river so that the crocodiles will eat him and save the five.  Is that permissible?  No.  They're showing the same parallel psychology, where an intended harm—using someone as a means to a greater good—is less permissible than a foreseen consequence that causes the same harm.  So in the switch case of the train, you foresee the consequence, but you are not intending the harm as a means to the greater good.  The Kuna are sensitive to this distinction, but here's where the cultural aspects move in to make this case more interesting: the Kuna Indians are much more willing than we are in our society to say that it's permissible to throw the fat man in front of the crocodiles.  They have an unstated practice of high levels of infanticide.  Killing, as a part of their society, is much more common.  And that's the way in which culture can potentially change the dynamics of how the judgment gets made.  In other words, we see a universal principle, such as the means-versus-side-effect distinction, but culture can change how much more impermissible the means-based harm is when contrasted with the foreseen side effect.
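As a rough sketch of how the "principles and parameters" idea could be made concrete, here is a toy model in Python. It assumes, purely for illustration, that permissibility is scored by a universal rule (saving more lives than you lose, with an extra penalty when the harm is used as a means) and that culture only tunes the size of that penalty; the specific numbers, including the value attributed to the Kuna, are hypothetical and not taken from the study.

from dataclasses import dataclass


@dataclass
class Dilemma:
    lives_saved: int
    lives_lost: int
    harm_is_means: bool   # True if the victim is used as a means (fat man / thrown swimmer)


def permissibility(d, means_penalty):
    """Higher score = judged more permissible. The utilitarian core and the
    means-versus-side-effect penalty are the assumed universal principles;
    means_penalty is the culturally set parameter."""
    score = d.lives_saved - d.lives_lost
    if d.harm_is_means:
        score -= means_penalty
    return score


switch_case = Dilemma(lives_saved=5, lives_lost=1, harm_is_means=False)
fat_man_case = Dilemma(lives_saved=5, lives_lost=1, harm_is_means=True)

# Hypothetical parameter settings: a larger penalty for the Western sample,
# a smaller one for a society where killing is a more ordinary part of life.
for culture, penalty in [("Western sample (hypothetical)", 5.0), ("Kuna (hypothetical)", 3.0)]:
    print(culture,
          "| switch:", permissibility(switch_case, penalty),
          "| fat man:", permissibility(fat_man_case, penalty))

# In both settings the side-effect case outscores the means case (the universal
# principle), but how impermissible the means case looks depends on the parameter.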

At this point this is still looking relatively abstract and theoretical, and what I'm interested in is how science can fuse with and energize moral philosophy to create some powerful new ideas and findings at the interface.  This is not to say that science will take over philosophy.  If this new enterprise works at all it will be through a deep collaboration, working to find out the origins of our moral judgments, and how they figure in our ethical decisions and moral institutions.  Let me end with a few more cases to make this all a bit more concrete.

Consider a disorder that people are acutely aware of in many societies.  It's called psychopathy, and it is associated with people known for terrible killings.  They kill, often with no regret: they don't feel guilt, they don't feel shame, and they don't feel empathy.  Now, people have described that as a problem of lacking any moral sense.  I think that's completely the wrong interpretation.  Psychopathy is a case of completely intact moral knowledge: psychopaths would judge the cases I just gave you like everybody else in the room.  What makes them moral monsters is that they lack the kinds of emotions that prevent the rest of us from doing horrible things.  They don't have braking emotions.  On this view, emotions don't dictate our moral judgments, but they do guide our moral behavior, how we act.  We are now engaged in a collaborative project to actually test psychopaths, to see whether that is in fact the case.  It is too early to tell, but stay tuned.

The second part of the story is to use, as John mentioned a few minutes ago, some of the modern techniques in the neurosciences where you can image the brain, attempting to understand which parts of the brain are active, how they are engaged when we come to our moral judgments, and how they resolve conflict. 

To sum up, we're in an extremely exciting phase now, where a set of questions that were forever the province of moral philosophy and law are coming directly into contact with the sciences.  This is exciting because both areas are working together, and it may have direct implications for the law, and for the extent to which formal institutions like law and religion penetrate our evolved moral sense.

MARC D. HAUSER, an evolutionary psychologist and biologist, is Harvard College Professor of Psychology, Biological Anthropology, and Organismic & Evolutionary Biology, and Director of the Cognitive Evolution Laboratory. He is the author of The Evolution of Communication, Wild Minds, and the recently published Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong.

Marc D. Hauser's Edge Bio Page

On the Road
Event Date: [ 3.16.06 ]
Location:
The Old Theatre (Old Building, LSE, Houghton Street, London WC2A 2AE)
London
United Kingdom

THE SELFISH GENE: THIRTY YEARS ON
Thursday 16 March 2006
6.45pm to 8.15pm
The Old Theatre (Old Building, LSE, Houghton Street, London WC2A 2AE)

It is thirty years since The Selfish Gene revolutionised our understanding of living things. Since then, Richard Dawkins' pursuit of the implications of science has informed areas as diverse as biology, psychology, philosophy and religion. His work has made an outstanding contribution to the understanding of science in society; and it has shown how science deepens our appreciation of the natural world.

At this event, Darwin@LSE, in collaboration with OUP, brings together leading intellectuals to explore these insights — and to make their own distinctive contributions to this fertile field.

Helena Cronin
Darwin@LSE

 

Speakers

Daniel C Dennett (Tufts)
Author of Darwin's Dangerous Idea; Freedom Evolves; Breaking the Spell
'The view from Dawkins' mountain'

Sir John Krebs, FRS (Zoology, Oxford)
'From intellectual plumbing to arms races'

Matt Ridley
Author of The Red Queen; The Origins of Virtue; Genome
'Selfish DNA and the junk in the genome'

Ian McEwan
Booker Prize winner and author of Enduring Love; Amsterdam; Saturday
'Science writing: Towards a literary tradition?'

Richard Dawkins, FRS (Oxford)
Author of The Selfish Gene; The Blind Watchmaker; The Ancestor's Tale.
'Afterword'

Chair: Melvyn Bragg
Broadcaster, writer and novelist


Organiser

Helena Cronin
Founder and Director of Darwin@LSE and author of The Ant and The Peacock. 

Quicktime streaming audio [1hr.22 min]


HELENA CRONIN launched and runs Darwin@LSE. She is a Co-Director of LSE's Centre for Philosophy of Natural and Social Science. She is the author of The Ant and the Peacock: Altruism and Sexual Selection from Darwin to Today, which was chosen as one of The New York Times' nine best books of the year for 1992, and co-editor of Darwinism Today, a series of short books by leading figures in the field of evolutionary theory. Each title is an authoritative pocket introduction to the Darwinian ideas that are setting today's intellectual agenda.
 


MELVYN BRAGG is a broadcaster, writer and novelist. He presents In Our Time for BBC Radio 4, a series in which he and his expert guests discuss the history of ideas and explore subjects in culture and science. He presented Start the Week between 1988 and 1998. In his 1998 series On Giants' Shoulders he interviewed scientists about their eminent predecessors, and from 1999 to 2001 he presented The Routes of English, a series celebrating 1,000 years of the spoken language.

As well as presenting for Radio 4, he is Controller of Arts for London Weekend Television and is the presenter of The South Bank Show. In 1998 he was made a life peer (Lord Bragg of Wigton). He has written 19 novels, the latest of which is Crossing the Lines. 


MELVYN BRAGG: Introduction

They are in you and me; they created us, body and mind; and their preservation is the ultimate rationale for our existence. They have come a long way, those replicators. Now they go by the name of genes, and we are their survival machines.

In 1976, a young zoology lecturer at Oxford University published his first book, from which those words are taken. Powerfully encapsulating a gene's-eye view of life, The Selfish Gene rapidly became deeply influential both within biology and associated disciplines, and in wider intellectual debate.

Thirty years and over a million copies later, The Selfish Gene has come to be seen as one of the defining books of the twentieth century. To commemorate this thirtieth anniversary, Oxford University Press has published a sparkling new edition, with a fresh introduction by the author and an extensive collection of reviews that are testimony to the book's importance and influence.

And more … Today also sees the publication of another OUP book: Richard Dawkins: How a scientist changed the way we think. It is edited by Alan Grafen and Mark Ridley, both former students of Richard's. The book is a collection of essays by scientists, philosophers and writers, which reflects on Richard's contribution and influence as scientist, rationalist, writer and public intellectual, in areas such as biology, philosophy, evolutionary psychology, artificial life and debates on religion.

Our event today is the launch of this lively and wide-ranging collection. Tonight, Darwin@LSE and OUP have brought together some of these and other intellectuals with Richard, to explore these insights further and to make their own distinctive contributions.

So … welcome, everyone, to 'The Selfish Gene: Thirty years on'.


[MELVYN BRAGG:] Our first speaker is DANIEL DENNETT. A professor at Tufts University, Dan is an outstanding polemicist and one of those all-too-rare philosophers who takes science, particularly Darwinian theory, seriously. His books include Brainstorms, Brainchildren, Elbow Room, Consciousness Explained, Darwin's Dangerous Idea and — a new book, published this week — Breaking the Spell, an original and comprehensive explanation of religious belief.

Dan's talk takes 'The view from Dawkins' mountain'.


DANIEL DENNETT: 'The view from Dawkins' mountain'.

Thank you very much.

When I first read The Selfish Gene — it was not in '76, it was a few years later — I was struck by the very first paragraph, and by one of the chief sentences in it — not quite the sentence that Melvyn Bragg just read, but another very similar sentence:

We are survival machines, robot vehicles, blindly programmed to preserve the selfish molecules known as genes.

And then the author went on to say,

This is a truth which still fills me with astonishment.

Thirty years on I think the question that can be raised is, are we still astonished by this remarkable inversion, this strange inversion of reasoning that we find in this claim?

When I read the book it changed my life. I was a Darwinian, but I didn't understand evolutionary theory at all well, and I thought after 30 years I should go back and re-read the book again. I was a bit afraid — I'd read parts of it many times, because I'd assigned it to my students in many courses — philosophy courses, even. I wondered if re-reading the whole book I would have one of those disappointing experiences where you think, well, yes, this was a young man's book, and this was very exciting at the time, but I wonder how well it holds up.

So I thought I would put it to as stern a test as I knew, and so last June I took it with me, the more recent edition, on a two-week trip to the Galapagos, where I spent a week on a wonderful three-masted schooner, the Sagitta, retracing Darwin's footsteps with some excellent evolutionary biologists, who were there for the World Summit of Evolution. That's a pretty good test of a book. That's where I reread the book, and it came through with flying colors. It was a wonderful accompaniment to that wonderful and amazing week.

When I thought about which features of the book I would talk about tonight, knowing who the others were who were going to be speaking about it, I realized that I should perhaps stick to some of the grander, larger, more philosophical themes and leave some of the wonderful details to people who are more expert in those.

And I also thought, on rereading the book, that the late Steve Gould was really right when he called Richard and me Darwinian fundamentalists. And I want to say what a Darwinian fundamentalist is. A Darwinian fundamentalist is one who recognizes that either you shun Darwinian evolution altogether, or you turn the traditional universe upside down and you accept that mind, meaning, and purpose are not the cause but the fairly recent effects of the mechanistic mill of Darwinian algorithms. It is the unexceptioned view that mind, meaning, and purpose are not the original driving engines, but recent effects that marks, I think, the true Darwinian fundamentalist.

And Dawkins insists, and I agree wholeheartedly, that there aren't any good compromise positions. Many have tried to find a compromise position which salvages something of the traditional right-side-up view, where meaning and purpose rain down from on high. It cannot be done. And the recognition that it cannot be done is, I would say, the mark of sane Darwinian fundamentalism.

How on earth is it possible to adopt such a position? Evolution itself seems to be such a mindless and cruel thing. How can such heartless culling produce the magnificent designs that we see around us? It seems just about impossible that such a simple mechanical sieve could produce such amazing design in the biosphere. One of the key elements in The Selfish Gene, one that has always struck me as particularly valuable to me, strengthening my resolve in my own work, and also showing the way, is what I'm going to call by the rather bizarre name of "mentalistic behaviorism".

First I want to remind you of what Francis Crick called Orgel's Second Rule. "Evolution is cleverer than you are." Now what Crick meant by this jape, of course, was that again and again and again evolutionists, molecular biologists, biologists in general, see some aspect of nature which seems to them to be sort of pointless or daft or doesn't make much sense — and then they later discover it's in fact an exquisitely ingenious design — it is a brilliant piece of design — that's what Francis Crick means by Orgel's Second Rule.

But notice that this might almost look like a slogan for Intelligent Design theory. Certainly Crick was not suggesting that the process of evolution was a process of intelligent design. But then how can evolution be cleverer than you are?

What you have to understand is that the process itself has no foresight; it's entirely mechanical; it has no purpose — but it just happens that that very process dredges up, discovers, again and again and again, the most wonderfully brilliant designs — and these designs have a rationale. We can make sense of them. We can reverse-engineer them, and understand why they are the wonderful designs they are.

And what this suggests is that it would help us to understand how this is possible if we could break all this brilliant design work up, into processes which we could understand the rationale of, without attributing it to the reason of some intelligent designer.

In other words what we need is this weird thing that I'm calling "mentalistic behaviorism". Now to many the idea of mentalistic behaviorism would seem to be a contradiction in terms. Classical psychological behaviorism is profoundly and explicitly anti-mentalistic. So what on earth could mentalistic behaviorism be? It could be exactly what Richard writes about in The Selfish Gene. How on earth can a gene be selfish? It doesn't even have a mind. It is just a bunch of information in the genome; how could it be selfish?

And what Richard showed, patiently, vividly, clearly, again and again, is if you treat it as if it had a mind, if you treat it mentalistically, you can characterize and make sense of the interactions, the dynamics of the processes that produce the effects that strike us as so intelligent. What we can then see is that these processes are arms races. Not just arms races between armies of intelligent people, but arms races between trees, and between bacteria, and between any form of life you want to name. We can watch an arms race generate more and more design, more exquisite solutions to problems, in ways that are strikingly similar to the more intelligently (but not very intelligently) guided arms races that give us the metaphor in the first place.

Also we find bargains struck, wonderfully intelligent bargains struck, for instance, between fruit-producing plants and omnivorous animals that carry the fruit off and pay for this high-energy fruit by distributing the seeds with their fertilizer at some distance from the tree, just to give one vivid case.

Many wonderful bargains, many ploys and counter-ploys which can be described in this mentalistic language at the same time that one rigorously insists: these things don't have minds, they are just mechanical processes, they are simply structures that have effects in the world that invite this particular metaphorical — but quite rigorously metaphorical — interpretation.

I have recently hit upon a way of characterizing what a virus is, which I like, and which I see a lot of evolutionary biologists like too: a virus is a string of nucleic acid with attitude! Of course it doesn't have a mind, but it has attitude. What does that "attitude" mean? It means that it behaves in such a way that it promotes its own replication, more than its rivals promote their own replication. That's what it is to be a virus.
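To make the point that "attitude" here means nothing more than differential replication, here is a toy simulation in Python. Everything in it — the two replicator types, their copy rates, the fixed-size pool — is an illustrative assumption; the only idea taken from the talk is that whatever gets itself copied faster than its rivals comes to dominate, with no mind anywhere in the process.

import random


def generation(population, rates, cap, rng):
    """One round of blind copying: each individual leaves offspring in proportion
    to its type's copy rate, then the pool is trimmed back to a fixed size."""
    offspring = []
    for kind in population:
        offspring.extend([kind] * rates[kind])
    rng.shuffle(offspring)
    return offspring[:cap]


rng = random.Random(1)
rates = {"A": 2, "B": 3}            # B is slightly better at getting itself copied
pop = ["A"] * 500 + ["B"] * 500     # start with equal numbers of each

for g in range(1, 11):
    pop = generation(pop, rates, cap=1000, rng=rng)
    share_b = pop.count("B") / len(pop)
    print(f"generation {g}: type B is {share_b:.1%} of the pool")

# After a handful of generations B has all but taken over, purely because it
# promotes its own replication more than its rival does.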

Now I want to return to the quote I began with — "We are survival machines, robot vehicles, blindly programmed to preserve the selfish molecules known as genes" — because I want to talk about the other philosophically brilliant contribution in The Selfish Gene.

The first one is the pioneering clear articulation of mentalistic behaviorism and the defense of it. We can use selfish gene talk because we know how to translate that into terms that are purely behavioristic. But now I want to concentrate on another word in that phrase, "the selfish molecules known as genes." That, I think, is a nice example of Richard's writing skill. The first time he describes them, he calls genes selfish molecules, but later in the book, once we've gotten used to this way of talking, he shows that that's actually not quite what he is talking about. He's saying something much more subtle and much more interesting. What a philosopher would say is that a gene, it turns out, is a type, not a token.

What's a token? A token is an individual word — like the word "gene" up there on the screen behind me, a token that is made of nothing but shadow against a light background. Some tokens are made out of ink, some are made out of plastic letters and so forth — it's a physical object, it's like a molecule. But what Dawkins was drawing attention to was that the concept of gene that really does the work is the concept of a type, not a token. Here he says:

What I am doing is emphasizing the potential near-immortality of a gene in the form of copies as its defining property.

In other words he was saying something quite remarkable. He was saying that genes are like words, or like novels, or like plays, or melodies!

Now of course one of the famous (and most embattled) chapters in The Selfish Gene, one that I have spent a lot of time articulating and defending and trying to extrapolate further, is the chapter on memes. And there he said memes are like genes. What I want to point out, if it isn't already obvious, is that earlier in the book he said that genes are like memes! He said that genes are information structures that have many tokens, many physical tokens, and it's the information that you're really talking about when you're talking about the gene.

Now this of course was not entirely original to Richard. It was also stressed by George Williams, for instance, in his own work, which was one of the inspirations for Richard. In other words, the Hox genes are like the Romeo and Juliet memes! There are many copies of them, we can recognize them, and we can see the role that they play in many different contexts. And they are all related not only by similarity but by having been copied and copied and copied, all of them replications descended from earlier instances.

Here on the screen is a diagram of the tree of life. It may not look like a tree to you, because you're not used to seeing trees from this angle — this is a bird's-eye view of a tree. You're looking down from above. You see the main trunk right in the middle there. This diagram is already out of date; I've had some interesting discussions with evolutionary biologists in the last two weeks and I've learned that some of them would draw the lines rather differently now. But it shows the three main branches of the tree — the bacteria, the archaea, and the eukarya, and of course we're on this lower eukaryote branch, and there we are, Homo — and the artist has put two other close cousins on the tree of life on this map — Coprinus and Zea.

What are those? Mushrooms and corn — our close cousins! Now one of these species, Homo sapiens, is exceptional: of all the species on the planet, it is the only species that has evolved that can understand that it's one of the fruits on the tree of life. We are unique in that regard. It is human language and culture that has made this possible. Not just our brain power, but the fact that we have a division of labor — because we have language and culture we can fill our brains with the fruits of the labors of everybody else on earth, not merely those who are our ancestors. Tonight we celebrate one of its most brilliant creations, The Selfish Gene.

Thank you very much. 

Dan Dennett's Edge bio page


[MELVYN BRAGG:] Thank you very much.

Our next speaker is SIR JOHN KREBS. A Fellow of the Royal Society and until last year a Royal Society Research Professor in Oxford's Department of Zoology, John is a highly distinguished biologist. He is one of the founding fathers of behavioural ecology, having co-edited the leading textbooks on the subject; and, with Richard, he has co-authored some classic papers in evolutionary thinking. Having been head of the Natural Environment Research Council (NERC) and of the Food Standards Agency, he is now Principal of Jesus College, Oxford.

John's talk is entitled 'From intellectual plumbing to arms races'.


SIR JOHN KREBS: 'From intellectual plumbing to arms races'

Thank you very much, Melvyn. I've got half as much time as Dan, so I'm going to have to talk twice as fast — or say half as much.

I was thinking, in this brief presentation, of how I should characterize Richard and his contribution, and hence my title, which I'll explain in a moment. But when I had almost worked out what I was going to say, I happened to go into the men's loo in the Department of Zoology, and there were some graffiti which gave me a different characterization of Richard. The question was this: What's the difference between God and Richard Dawkins? The answer below was:

"God is here but everywhere; Dawkins is everywhere but here."

Richard does travel around quite a bit.

Also of course the challenge of giving a very short talk is the one that Mark Twain summarized in the famous phrase, "If I'd had more time I'd have written a shorter letter. "

But first let me talk about Richard as an intellectual plumber. I first came across the notion of an intellectual plumber when I was sitting in my then Oxford College, Pembroke, next to Simon Blackburn, the philosopher now at Cambridge. I turned to him and asked, "What's the point of philosophy anyway, Simon?"

And he said, "Well, think of it this way, John. You're just a biologist, you sometimes have leaks in your thinking, and what you need is an intellectual plumber to patch up those leaks, and that's what philosophy will do for you. "

This is one way of describing Richard. He is indeed an intellectual plumber, and if anybody has leaks in their scientific thinking, be it about evolution or about any other aspect of biology or science in general, Richard's intelligence and razor-sharp analysis will detect the leak and carefully fix it for you.

And he also expresses it beautifully, and one of my favorite quotes from Richard's writing is not out of The Selfish Gene but from the book River Out of Eden, in which he says, talking about cultural relativism:

Show me a cultural relativist at thirty thousand feet and I'll show you a hypocrite. Airplanes are built according to scientific principles and they work. They stay aloft and they get you to a chosen destination. Airplanes built to tribal or mythological specifications, such as the dummy planes of the cargo cults in jungle clearings or the beeswaxed wings of Icarus, don't.

That's a beautiful deconstruction of cultural relativism.

But — you might say, supposing Richard was wrong? Well, here I'm tempted to quote Lord Carrington, when asked what would happen if Margaret Thatcher was run over by a bus. And his reply was, "It wouldn't dare."

But I want now to move on from Richard as an intellectual plumber to talk about another aspect of Richard's contribution to biology, which is about a really original idea, a really original way of looking at a very familiar phenomenon.

Now ideas of course never come out of a vacuum. So The Selfish Gene, a highly original book, by everybody's reckoning, was born out of a particular zoological environment.

Richard was a student at Oxford in the 1950s and early '60s, when Oxford was a center of neo-Darwinian biology. People like Niko Tinbergen, David Lack, and E. B. Ford had already begun to articulate the debate about levels of natural selection: does selection act at the level of the group, the individual, or the gene? It was also an environment in which there had been huge success at popularizing biology, in particular behavior, for example in the writings of Niko Tinbergen and Desmond Morris, who was also associated with the department.

But to see just how radical Richard's ideas were in this overall context, let's look at his writing about communication. And that's what I want to talk about for the next few minutes.

What do you think the essence of communication is? Whether it's communication amongst human beings, or amongst other animals on the planet, or amongst plants? Well, at the time when Richard entered into this field, ethologists, information theorists, social psychologists, and others, all agreed that the essence of communication is transfer of information. That's what it's all about. And those who thought more specifically about animal behavior and evolution saw the whole process by which animal communication has developed by natural selection as one in which the efficiency of information transfer is increased.

Richard's supervisor, the Nobel Prize-winning Niko Tinbergen, made a very famous film called "Signals for Survival," which won all sorts of international prizes. The opening of that film, which is all about animal communication, is a memorable moment, with Niko standing in a colony of herring gulls in the north of England; as he talks to the camera he raises his fist and says, with a Dutch accent, "When I do this, you know what I mean."

In other words, it's clear that communication is about transferring information. And Niko himself summarized communication by saying:

"One party, the actor, emits a signal, to which the other party, the re-actor, responds, in a way that the welfare of the species is promoted."

Absolutely wrong.

You can see already why it's wrong; it's a species welfare-oriented view. And here's how Richard defines communication a few years later:

"Natural selection favors individuals who successfully manipulate other individuals. Whether or not this is to the advantage of the manipulated individuals. Selection will also work on individuals to make them resist manipulation. But actors do sometimes succeed in subverting the nervous systems of re-actors. As adaptation to do this are the phenomena we see as animal communication."

In other words, Richard reframes the whole of thinking about communication. It is not about information transfer, but about manipulation. It is about an arms race between manipulators and recipients of manipulation. And so influential is that idea that a recent monograph on animal communication by two American scientists starts its history of the subject with Richard's paper.

Let me just make three comments as I move towards the end.

The first thing you may ask, if you think about communication as manipulation, is: how on earth could manipulation succeed? Surely re-actors, over evolutionary time, would develop the capacity to resist manipulation.

Before you get too seduced by that thought, think of your own senses, and the way they can be manipulated. Otherwise, why is it that men are influenced by motor cars with semi-naked women draped over them? It's manipulating the senses, persuading men that they might indeed attract semi-naked women of a certain kind if they bought that particular make of car. [I own a BMW and I can confirm that it doesn't have this effect.] Think of people who respond to pornographic images, flat images of color, but are sufficiently aroused by them to think of them as real sexual stimuli.

In a more sophisticated and perhaps less blunt way, think of the writing of Keats in his "Ode to a Nightingale" — the opening phrase —

"My heart aches and a drowsy numbness pains my sense as though of hemlock I had drunk."

The nightingale’s song is making him drowsy and numb as though he'd drunk hemlock.

But there's another reason why signals might be manipulative. Think of how signaling might start — the re-actor anticipating the behavior of the actor, and that very anticipation creating the basis for signals that manipulate.

The second point I want to make is that this idea of communication as manipulation becomes all the more troubling when you realize that each individual can play the role of actor or re-actor. So we're not talking about evolution between individuals, but about evolutionary interactions between roles. And this is an example of the way in which, as Dan has already articulated, jumping out of the mindset of thinking of the individual as the unit of evolution enables you to free your thoughts and be creative.

And finally, my third point here is this. If communication is the result of an arms race between manipulation and resistance, what's the end point? And here Richard had further insight. The kind of end point you would expect depends on the degree of conflict of interest between the two roles. In cases where the conflict is very strong, like males attracting females when the females are reluctant to mate, the arms race between manipulation and resistance results in an escalation. And that's why you get brilliantly elaborate, vocal, visual, and other signals associated with sexual displays.

On the other hand, if the conflict of interest between the roles is minimal, as it might be between members of a pair who have already mated, then the evolutionary process will lead to a reduction in the visibility, the amplitude, of the signal. And that dichotomy in the nature of communication is still one that stands to be investigated by biologists.

So in summary, Richard's writing about communication transformed our thinking about not just animal communication, but, I believe, about communication in general.

And a final comment: people sometimes say to me, what was Richard doing before he wrote The Selfish Gene? What was he known for? And the answer is, Richard was known for his organ before he wrote The Selfish Gene.

There's nothing personal, Richard, you understand. But Richard did invent a device for event-recording with a computer. This was in the very early days of computing; it's hard to imagine, but what you now have in a laptop took a room as big as the average academic's office to process and store information. And Richard invented the so-called Dawkins Organ, a device for recording data by pressing keys, which went straight into a computer.

The other particular thing that Richard did early in his career was to study the development of pecking behavior in chicks. When I came to write my own thesis, I read Richard's thesis as an example of how it should be done. And I was struck by a sentence in the very first chapter, in which, under "Methods," it said, "The chicks were tested in Paris." And I thought, my God, this man's got real style. It was only about ten minutes later that I realized that it was a misprint for "The chicks were tested in pairs."

Thank you.


[MELVYN BRAGG:] Thank you.

And now to MATT RIDLEY. 23 pairs of chromosomes, together with a doctorate from Oxford University, equipped Matt for a career as a top-rank science writer. He has worked for The Economist, the Daily Telegraph and the Sunday Telegraph. His books — The Red Queen, The Origins of Virtue, Genome, Nature via Nurture — have sold over half a million copies and been short-listed for six literary prizes; and in 2004 he won the American National Academies Book Award. He is the energetic founding chairman of Newcastle-upon-Tyne's International Centre for Life, which is highly regarded for its research in genetics.

Matt will talk about 'Selfish DNA and the junk in the genome'.

Matt Ridley's Edge bio page


MATT RIDLEY: 'Selfish DNA and the junk in the genome'

Thank you very much, Melvyn.

Good evening; it's a huge honor to be here. I'm only here because I intercepted an invitation for Mark Ridley. Just to be clear, the excellent book about Richard is edited by Alan Grafen and Mark, not by me, although I do have a chapter in it, just to confuse people. Mark and I have had our Y chromosomes analyzed by Brian Sykes, and we have the same Y-chromosome haplotype, so he'd say the same thing as I'm going to say anyway. After all, we are supposed to believe in genetic determinism.

What I want to talk about tonight is a throwaway remark in The Selfish Gene, which I think was not only prophetic but in a sense made the book much more literal than it otherwise is. At the time, in the early 1970s, it had just been discovered that genomes have a lot more DNA in them than is necessary for coding for proteins. And this was a big puzzle. Richard suggested a solution to this, which turned out to be mostly true, and was completely original.

The remark is found on page 47 of the first edition of The Selfish Gene, and it goes:

Biologists are racking their brains trying to think what useful tasks this apparently surplus DNA in the genome is doing. But from the point of view of the selfish genes themselves there is no paradox. The true "purpose" of DNA is to survive, no more, no less. The simplest way to explain the surplus DNA is to suppose that it is a parasite, or at best a harmless but useless passenger, hitching a ride in the survival machines created by other DNA.

And in a classic instance of the argument in The Selfish Gene, what he's doing is asking cui bono: who benefits? Is it possible that this stuff is there not for the good of the species, not even for the good of the whole genome, but for the good of the bits of DNA themselves? He's turning the world upside down.

Just to recount the history of why this is an interesting question, by 1971 the phrase the C-value paradox had been coined for this problem, that nuclear genomes vary enormously in size, up to 300,000-fold, but the number of proteins made from them doesn't vary nearly as much.

Some species have enormous genomes and produce no more proteins than others. The idea was beginning to be abroad in the late '60s, early '70s, that this might just be junk — that an awful lot of the DNA in the genome might stand for nothing; it might have no purpose.

And in a lecture at MIT in 1972 Crick said, What is all this DNA for? Is it junk or is it an evolutionary reserve? Still thinking, though, in terms of what it's for from the point of view of the organism. And in 1978 Tom Cavalier-Smith suggested that perhaps it's there to support the rest of the DNA, to place the genes in the right parts of the nucleus, to spread the genes out, and things like that. And that's an idea that I'll come back to in a minute, because it has a second history. But it was in 1980 that the idea of selfish DNA was coined, in two papers in Nature by Doolittle and Sapienza and by Orgel and Crick, arguing that perhaps most of, or at least some of, this DNA is simply selfish DNA: it's there because it's good at getting itself there. It's good at replicating itself, it's good at copying itself. They were quite explicit; they said this idea is not new, it's sketched briefly but clearly by Dawkins in his book The Selfish Gene. There's no question that this originated as an idea with Richard. By the way, in 1982 the first computer virus was created, the Elk Cloner virus, and that of course has an interesting parallel with the argument that I'm talking about.

Just to illustrate what we're talking about — genome size bears very little relation to the complexity of an organism; two creatures like a puffer fish and a zebra fish have very different size genomes, even though they look very similar.

From this end of the telescope, human beings look like they have quite a big genome, but if you turn the telescope around and look from another direction, the human genome looks rather a small one, compared with that of grasshoppers, which is at least three times as large, or deep-sea shrimps, which have ten times as much DNA as us.

Salamanders get even bigger, and the king of the genomes, in the animal kingdom at least, is the marbled lungfish. Some people say amoebae have larger genomes, at 500 gigabases, but they're almost certainly polyploid, as are lilies, which also have very big genomes. The marbled lungfish genome is a perfectly ordinary diploid genome, and it has about as much digital information in it as ten British Museum reading rooms.

So what's it all for? Well, it does appear that Richard was partly wrong. It does appear that the genome size is under selection, and that it's linked to the size of the cell. The bigger the nucleus the bigger the cell, it's a pretty good rule. And there's all sorts of evidence to suggest that animals are optimizing the size of their genomes, so parasites often minimize the amount of junk in their genomes in order to shrink themselves.

Malaria parasites have very little junk in their genomes, and very small cells. At the other extreme, ciliates have very large cells, and they achieve this with small genomes by making a huge macronucleus into which they put working copies of all their genes in multiple numbers. They are the exception that proves the rule: they have a small genome but a large cell, but only because they make a special sort of working nucleus that's a whole lot bigger.

And high-metabolism animals, like bats and birds, have got rid of quite a lot of the junk in their genomes, in order, it appears, to have small blood cells with larger relative surface areas. A lesser horseshoe bat has a genome of less than two gigabases, compared with three gigabases for us. Why the marbled lungfish has such a gigantic genome is not clear, but it may be something to do with having very big cells, in order to be able to store glycogen when it estivates during a drought, when it disappears into the mud and lives there for six months off its glycogen reserves. That's a possibility. But one of the strongest pieces of evidence that it's not possible simply to expand your genome indefinitely by letting parasites run riot is the ALU sequence, one of the commonest sequences in our genomes, which has appeared in the last 30 or 40 million years. Mice don't have it, but our genome is not bigger than theirs. In other words it has come at the expense of other sequences, rather than being added on top of them.

Just in passing, it does seem that big genomes go with small brains. This is particularly true in amphibia: in frogs and salamanders, the larger the genome, the smaller the brain. A frog has about five gigabases and a comparatively large brain; a salamander has about 30 gigabases and a smaller brain; and a mudpuppy has an 85-gigabase genome and an extremely small brain. Human beings luckily have larger brains than frogs. There are two reasons for this: the bigger your genome, the slower your cells are at duplicating themselves, so the harder it is to grow a big brain by multiplying cells; and it's also harder to fit the same number of neurons in your head if the neuron cell bodies are bigger.

How much of the human genome might be selfish DNA? Well, we tend to think of the genome as consisting of genes; the proportion of our genome that actually consists of real protein-coding genes, sequences that direct the manufacture of proteins themselves, is one and a half percent. Add in another three and a half percent for all the control sequences, all the functional DNA that seems to be under very strong purifying selection.

That's where all the promoters and enhancers and switches that control the expression of the genes are. So we've only got to five percent, and we've got all that we need to build and run a human body. Eight percent consists of retroviruses: 450,000 copies of retroviruses, complete or incomplete, in our genomes. They're there because they're good at being there; they're simply left over from past infections with viruses that are good at stitching copies of themselves back into our genomes. There's another three percent of transposons — these are just cut-and-paste sequences that are good at moving around the genome. There are many more of them in plants and fruit flies, but fewer in us.

But the really interesting ones are the LINEs: long interspersed nuclear elements. Or autonomous retroposons. These are sequences that are several thousand base pairs long, they're transcribed, two proteins are made from them, the proteins bind to the messenger RNA and take it straight back into the nucleus, make a DNA copy, and stitch it back into the genes. That's all they ever do.

They are as clear a definition as you can get of a selfish gene: they are simply copying themselves and spreading themselves around the nucleus. The SINEs are very similar; 13 percent of our genome consists of them. The ALU that I mentioned is one of these. The only difference is that they parasitize the LINEs: they don't make their own machinery for copying themselves, they use the LINE machinery. These are lesser fleas on greater fleas.

All that gets you to about half the human genome. What's left? Well, there are introns — gaps inside genes — there are simple sequence repeats, the bits we use for DNA fingerprinting and the like, segmental duplications, and a whole bunch of other stuff. Broadly speaking, the green stuff on this slide is, I think, true junk DNA: it doesn't matter what its sequence is. The blue stuff is the functional DNA that builds and runs our bodies. And the red stuff is there because it's good at being there; it consists of things that have spread at the expense of other sequences. It's selfish DNA.

Just to clarify the LINEs and SINEs: at any given time in the last 60,000,000 years there has been one LINE that has been dominant in the human lineage, and there have been 16 overall that have rampaged through our genomes. The one that's currently doing so is called LINE-1, and at the moment it takes up about 17 and a half percent of your genome as you sit here today. Likewise the ALU sequences have gone berserk; their activity peaked about 40,000,000 years ago in the primate lineage. It's a 280-base sequence, and it's repeated over a million times.
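Pulling together the rough figures quoted in this talk, here is a small Python sketch that tallies them and checks the ALU arithmetic. The percentages are the approximate ones Ridley reads off the slide, and the three-gigabase genome size is the round number he uses for humans, so treat the output as back-of-the-envelope rather than as reference values.

# Approximate shares of the human genome as quoted in the talk above.
composition = {
    "protein-coding genes":         1.5,
    "regulatory/control sequences": 3.5,
    "endogenous retroviruses":      8.0,
    "cut-and-paste transposons":    3.0,
    "LINEs (LINE-1 alone)":        17.5,
    "SINEs (including ALU)":       13.0,
}

functional = composition["protein-coding genes"] + composition["regulatory/control sequences"]
selfish = sum(v for k, v in composition.items()
              if k not in ("protein-coding genes", "regulatory/control sequences"))

print(f"functional DNA that builds and runs the body: ~{functional:.1f}%")
print(f"parasitic / selfish elements listed above:    ~{selfish:.1f}%")
# The gap between this tally and the ~45 percent total claimed at the end of the
# talk, and the rest of the genome, is introns, simple repeats, segmental
# duplications, and true junk.

# Back-of-the-envelope check of the ALU numbers quoted: a 280-base sequence
# repeated over a million times in a ~3-gigabase genome.
alu_fraction = (280 * 1_000_000) / 3_000_000_000
print(f"ALU alone would account for roughly {alu_fraction:.1%} of the genome")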

Now interestingly, the LINEs are found in the AT-rich regions. These are where there are fewest genes — which is what you'd expect if the organism were saying, we don't like these parasites, we want to keep them out of the way of the genes. But the older SINEs are actually found in the GC-rich regions, the areas where most genes are. In other words, the longer a SINE has been hanging around, the more it has been recruited to areas where there are genes, so it looks like the organism has somehow co-opted some of these sequences to actually affect the expression of genes, which is an interesting case of a selfish gene being, if you like, tamed.

It's just worth reminding ourselves that junk DNA has already spawned a bigger industry than coding DNA — I'm referring to DNA fingerprinting — and the two people who, of course, made DNA a household word are Monica Lewinsky and O.J. Simpson, if you think about it.

Ladies and gentlemen, my conclusion is that about 45 percent of the human genome is made up of what you might literally call selfish genes — sequences that copy themselves very efficiently. And Richard's suggestion was right. Selfish DNA can, it seems, spread at the expense of neutral junk, but it doesn't seem to be able to actually expand the genome; we're not in danger of suddenly having our genomes grow bigger and bigger and bigger. And these selfish elements range from unwanted parasites to co-opted symbionts, with most of them somewhere in between the two. Richard was absolutely right, in a very literal sense, and the genome would be inexplicable without the notion of the selfish gene.

Thank you.


[MELVYN BRAGG:] Our next speaker is IAN McEWAN. Ian's books have earned him worldwide critical acclaim. Shortlisted for the Booker Prize for Fiction three times, he won the award in 1998 for Amsterdam. And many other prizes span his career, from an early Somerset Maugham Award to the triumph of a Book Critics' award. Ian is a writer who understands and respects science. His book Enduring Love draws on explicit Darwinian themes; and his latest novel, Saturday, is based on a day in the life of a brain surgeon.

His talk is entitled 'Science writing: Towards a literary tradition?'

Ian McEwan's Edge bio page 


IAN McEWAN: 'Science writing: Towards a literary tradition?'

Let me start with the opening of an essay on immunology, which might entertain you. And this is in a sense an appeal for a grand parlor game among those who love science.

'It is whispered in Christian Europe that the English are mad and maniacs: mad because they give their children smallpox to prevent their getting it, and maniacs because they cheerfully communicate to their children a certain and terrible illness with the object of preventing an uncertain one. The English on their side say: 'The other Europeans are cowardly and unnatural: cowardly in that they are afraid of giving a little pain to their children, and unnatural because they expose them to death from smallpox some time in the future. To judge who is right in this dispute, here is the history of this famous inoculation which is spoken of with such horror outside England'.

Well, you have probably guessed that this is Voltaire, writing in the late 1720s. Voltaire visited England — probably the only instance in recorded history when an intellectual Frenchman has come to England and been impressed by what he found. Voltaire wrote beautifully in his Lettres Philosophiques — translated as Letters from England — on religion, politics, and literature. And he also wrote about science — he attended Newton's funeral and was awed by the fact that a humble scientist was buried like a king in Westminster Abbey. But I want to afford Voltaire an important place in the library that will help us define a literary tradition of science. He wrote superb expositions, lucid expositions, on Newton's theories of optics and gravitation. They still stand today. If you want to know what Newton said you can read Voltaire, as good as anything to be found.

The atmosphere of this gathering tonight, celebrating a book written 30 years ago, really does reinforce my impression that we need a stronger sense of a scientific literary tradition. Those of us educated in a literary tradition take for granted a kind of mental map — a temporal map, really — of a literary history, a canon, a hierarchy if you like. The weight of the past, the cumulative achievements, give meaning to the achievements of the present. This canon has been vigorously challenged in the last 20 years — too male, too middle class, too white, too imperial, or whatever. But in order for it to be challenged it had to exist. One had to have Donne and Tennyson and Clough and Virginia Woolf, all placed in the firmament, waiting to be redefined, elevated further, or shot out of the sky. It seems to me that the pace of change in contemporary science, and the necessary passion for innovation, put us in danger of neglecting, or forgetting completely, what a beautiful and intricate tapestry of curiosity, persistence, human weakness and inspiration a scientific literary history could represent.

One of Richard's achievements has been to extend an enjoyment of science to laymen like myself. Permission has been granted, no apologies necessary. Just as we can enjoy and discuss opera, art, movies, poetry, without being composers or performers, painters, film directors or poets, so we can engage with this vast edifice, the sublime achievement of human creativity. But to move around inside this edifice, we need the temporal spaciousness of a literary past.

Here is another one for the shelf — a man has ground up some lenses, taken some water from a lake and has been looking at it carefully, with an open mind:

'I found floating therein divers earthy particles and some green streaks, spirally wound serpent-wise and orderly arranged... Other particles had but the beginning of the foresaid streak; but all consisted of very small green globules joined together; and there were very many small green globules as well.... These animalcules had divers colours, some being whitish and transparent, others with green and very glittering little scales... And the motion of most of these animalcules in the water was so swift, and so various, upwards, downwards and round about, that 'twas wonderful to see: and I judge that some of these little creatures were above a thousand times smaller than the smallest ones I have ever yet seen...'

This is Leeuwenhoek writing to the Royal Society in 1674, giving the first account of, among others, spirogyra. He wrote his observations in letters to the Royal Society over a period of 50 years. And it is no accident that he should have sent his letters there. At that time, in a small space, within a triangle between London, Cambridge, and Oxford, and within a couple of generations, there existed nearly all the world's science. Newton, Locke (I think generally we have to include certain philosophers in here, Hume most certainly), Willis, Hooke, Boyle, Wren, Flamsteed, Halley — an incredible concentration of talent, and the core of our library — its classical moment, if you like.

A good question to ask about this tradition is, how important is it that what one reads is true? Do we exclude those who simply got it all wrong? I think we have to beware of writing a Whig history of science, a history of the victors. I think we need to remember phlogiston and the ether and protoplasm. Scientists who hurl themselves down dead alleys perform a service for everyone else — they save them a great deal of trouble.

My son, William McEwan, last year completed an undergraduate biology course at UCL. When he was studying genetics, he told me he was advised to read no papers written before 1997. One can see the point of this advice. In the course of his studies, estimates of the size of the human genome shrank by a factor of three. Such is the headlong nature of contemporary science. But if we understand science merely as a band of light moving through time, advancing on the darkness, and leaving darkness behind it, always at its best only in the incandescent present, we turn our backs on a magnificent and eloquent literature, an epic tale of ingenuity propelled by curiosity.

Another consideration I think for the literary tradition has to be style. Not every scientist or science writer is a stylist. How about this? You will guess the author, of course:

'Individuals are not stable things, they are fleeting. Chromosomes too are shuffled into oblivion, like hands of cards soon after they are dealt. But the cards themselves survive the shuffling. The cards are the genes. The genes are not destroyed by crossing over, they merely change partners and march on. Of course they march on. That is their business. They are the replicators and we are their survival machines. When we have served our purpose, we are cast aside. But genes are the denizens of geological time: genes are forever.'

I raise my hat to that lovely phrase — "shuffled into oblivion". The analogy with cards — the hand being the information, the cards themselves as the genes — is precise and informative — true eloquence. The Selfish Gene would have to have a central place in our tradition, as would many other of Richard's books. In particular, Unweaving the Rainbow has a powerful appeal to the literary imagination.

There is no time to discuss the nineteenth century's contribution — Darwin's Origin, and of course, The Expression, or Huxley's "On a Piece of Chalk". And when we come to the present, our parlor game intensifies, for we are wallowing in riches. The Selfish Gene initiated a golden age of science writing. With a fine sense of literary tradition, Steven Weinberg, in his book Dreams of a Final Theory, revisited Huxley's famous essay in order to make the case for reductionism. Among many other 'classics' I would propose E. O. Wilson on the beauties of the Amazon rain forest, and on the teeming microorganisms in a handful of soil; Steven Weinberg again on the aesthetics of scientific theories; David Deutsch's The Fabric of Reality; Matt Ridley, unweaving the opposition of nature and nurture. And recently, too, Dan Dennett, always conscious of Hume as well as Dawkins, laying out for us the memetics of faith.

In fact, I'll end with a consideration of religion, because a very important part of Richard's work has been to address it. He has refused to gloss over the innate contradictions of reason and faith. None of us, I think, in the mid-'70s, when The Selfish Gene was published, would have thought we'd be devoting so much mental space now to confronting religion. We thought that matter had long been closed. Here is another bit of prose that I would want carved into my library — perhaps over the door as you go in. This is a man who's just been threatened with indefinite imprisonment and torture, unless he signs on the dotted line.

'...having before my eyes and touching with my hands the Holy Gospels, swear that I have always believed, do believe, and with God's help will in the future believe all that is held, preached and taught by the Holy Catholic and Apostolic Church... I must altogether abandon the false opinion that the sun is the centre of the world and immovable, and that the earth is not the centre of the world and moves and that I must not hold, defend or teach in any way whatsoever, verbally and in writing the said false doctrine…'

Now, in 1633 Galileo may or may not have whispered as he signed, "but it moves", but his confession serves to remind us that open-minded rational enquiry will always have its enemies. We can take nothing for granted, for totalitarian thinking, religious or political, is always with us in some form or other. For this reason alone, a scientific literary tradition has its uses. I would also like to think that the spirit of "but it moves" lives on in Richard's work.

Ian McEwan's contribution Copyright © 2006 by Ian McEwan. All rights reserved. 


[MELVYN BRAGG:] Thank you very much.

And now we come to RICHARD DAWKINS for an afterword.

In one way there's nothing to be said, because a great deal has been said, and unless you're living on Mars you know a great deal about Richard Dawkins, but I think he deserves to be set up like everyone else. It's a dead hand to say he needs no introduction; he does need no introduction, but here's a short one.

Richard has done more than anyone to clarify one of the most fundamental and enduring ideas in all of science — the theory of evolution by natural selection. In The Selfish Gene and, not least, The Extended Phenotype, he showed how evolution could be understood as the differential success of genes in making their way down the generations by means of adaptations. Adaptations being the familiar 'design features' of living things: eyes, wings, brains, fins. From this gene's-eye view of evolution, all the numerous, often previously disparate studies of living things come together. Genetics, game theory, population biology, phylogeny, development, animal behaviour — all become mutually transparent.

As a bonus to experts and lay-readers alike, he has also made these dramatic developments accessible to a wider public in his string of international bestsellers, so lucid and so readable, as Ian has told us. This year marks not only the thirtieth anniversary of The Selfish Gene but also the twentieth anniversary of The Blind Watchmaker and the tenth of Climbing Mount Improbable. These and other memorable titles — River Out of Eden, The Ancestor's Tale — have all been praised both for their scientific insights and their brilliant literary style. As a result, Richard has the rare honour of being a Fellow both of the Royal Society and of the Royal Society of Literature.

In sum, Richard Dawkins is widely regarded as one of the most influential thinkers and writers in the world today. And it is my pleasure to invite him to provide an afterword to this evening.

Richard Dawkins.

Richard Dawkins' Edge bio page


RICHARD DAWKINS: 'Afterword'

This is of course a wonderful occasion for me. I'm very moved and very grateful, not just to Helena and the LSE and to the OUP for organizing it, but also to the other speakers, and to Melvyn Bragg for chairing it.

I'm sometimes asked if there's any unifying philosophy in all my plumbing activities, and I find it quite hard to answer. I suppose I'm a lover of explanation. I love to reduce complex mysteries by means of simple explanations. And I suppose that makes me a reductionist, but the word means so many different things, and is to some people a dirty word. It's one thing to reduce, in the sense of trying to find simple explanations for complex phenomena, and in that sense I'm proud to be a reductionist.

But if it's taken to mean reducing in the sense of demeaning, or underestimating, the beauty, the complexity, of that which we're trying to explain, then I would not own up to it. I want to do full justice to the complexity of that which we're trying to explain — while all the time seeking the simplest possible explanation for it. So in that sense I am a reductionist; I'm a materialist. As to whether I'm a determinist, I'll let you know when I've decided.

This is not just an anniversary of several books, as Melvyn has said, it's also the launch of the book edited by Alan Grafen and Mark Ridley. I can't actually bring myself to say the title; modesty forbids. But it is a collection of essays and I'm very very grateful to all the authors — 25 of them — of this collection.

Dan Dennett's contribution to this book of essays is called "The Selfish Gene as a Philosophical Essay," and it begins: "Probably most scientists would shudder at the prospect of having a work of theirs described as a philosophical treatise. You really know how to hurt a guy. Why don't you just say you disagree with my theory, instead of insulting me?"

Well I think one respect in which I am philosophical is this: although I'm very interested in the way life is, I'm also fascinated by the question "Are there aspects of life that just had to be so?"

For example, it's a matter of fact that the genetics that we know is digital, both at the Mendelian level of the independent assortment of genes in pedigrees, and also at the Watson and Crick level of the digital information within each gene. That's a fact. But is the digitalness of genetics just a fact, or is it something that had to be so, for life to work at all?

Whether you call such an approach philosophical or not, that's what I'm interested in. And my suspicion is that genetics did indeed have to be digital, in order at least for evolution by natural selection to work, and I further suspect that evolution by natural selection is also a necessary condition for all of life, wherever life may be found anywhere in the universe. This is my Universal Darwinism claim, and it's the one that Dennett was quoting as getting me into trouble with a fellow biologist for being too philosophical.

Now if you take your science as narrowly evidential, you'll say something like, "Since you've never seen life on any planet other than this one, how can you possibly say anything about the way life might be universally, on other planets?" On the face of it that sounds like a reasonable complaint, but on the other hand there surely must be some things that theory tells us must be so. And it can't be right to rule out of bounds everything that we can't see with our own eyes.

The extraterrestrial perspective, by the way, is the inspiration for choosing the Desmond Morris painting "The Expectant Valley," which was on the original hardback edition of The Selfish Gene and has been revived in the 30th anniversary retro edition.

So what are the general principles of life, wherever life might be found? I just want to suggest some candidates, as a sort of stimulus to get other people thinking of others. First, Darwinism itself. I've mentioned that. I think it's universal. Can't prove it, but I think it is. Second, digital genetics, with very low mutation rate. Does it have to be DNA? Presumably not. Does it have to be a polynucleotide? Possibly not. Does it have to have a triplet code? Almost certainly not. Et cetera; those are the kinds of questions I'm trying to ask.

Does it have to be one-dimensional? (The DNA code is a one-dimensional string of digits.) Or could it be two-dimensional; could it be a two-dimensional array? I suspect that it probably could. Could it be three-dimensional? Almost certainly not, because a three-dimensional code is very hard to read out of. But there does have to be something three-dimensional, and in our form of life it's provided by proteins. Proteins are the three-dimensional executives which are specified by the one-dimensional genetic code and which in their turn specify the whole of embryology and hence the rest of life.

Sexual recombination. In our form of life this could be said to be a prerequisite for the existence of what we call species — not in the boring taxonomist's sense, but in the sense of an entity which has a gene pool in which information is passed on.

Multicellularity. Life as we know it on this planet is either small or is built up from large numbers of small units, which we call cells. Is this something that had to be so? Or could one imagine a life form which was large, and yet not cellular?

There are lots more questions of that general type, which I haven't got time to go into. But every time I meet a biochemist, the first question I always ask them is, would you please devise for me an alternative biochemistry? And see how different it is possible to be and still, at least in theory, work.

Next question might be, does the information have to be molecular at all? Dan Dennett's already referred to memes. This is not something that I've ever wanted to push as a theory of human culture, but I originally proposed it as a kind of — almost an anti-gene point, to make the point that Darwinism requires accurate replicators with phenotypic power, but they don't necessarily have to be genes. What if they were computer viruses? They hadn't been invented when I wrote The Selfish Gene so I went straight for memes, units of cultural inheritance.

I want to say a little bit, which I actually also said in the new preface to the 30th anniversary edition, so I won't spend long on it — about the title The Selfish Gene. I don't think it's a great title. I'm quite pleased with some of my other titles, but I don't think this is one of my best. It can — it has — given rise to misunderstanding.

The best way to explain it is by correctly locating the emphasis. If you emphasize "selfish," then you will think the book is about selfishness. But it isn't, it's mostly about altruism. The correct word of the title to stress is "gene," and that's not because I ever thought that genes are deterministic in the sense that is politically objectionable to some people; it's because of a debate within Darwinism.

The central debate within Darwinism concerns the unit that is actually selected, the kind of thing which becomes more or less numerous in a pool of such entities. That unit will become, more or less by definition, selfish, in this sense. Altruism would then be favored at other levels. So if natural selection chooses between species, then you could write a book called The Selfish Species, and we would then expect individual organisms to behave for the good of the species. That isn't the way it is — it is in fact the selfish gene, which means that we expect, and see, individual organisms behaving for the good of their genes, which may mean altruistic behavior at the level of the individual organism. And that's quite largely what the book is about.

I can see how the title The Selfish Gene could be misunderstood, especially by those philosophers, not here present, who prefer to read a book by title only, omitting the rather extensive footnote which is the book itself.

Alternative titles could well have been The Immortal Gene, The Altruistic Vehicle, or indeed The Cooperative Gene. The book could equally well have been called The Cooperative Gene, and it would scarcely have needed to be changed at all.

One of the main points in the book is that genes in a sense do cooperate — not that groups of genes prosper at the expense of rival groups, but rather each gene is seen as pursuing its own self-interested agenda against the background of the other genes in the gene pool: the set of candidates for sexual shuffling within a species. Those other genes should be thought of as part of the climate, part of the context, part of the environmental background against which genes are selected. Rather like the weather. Natural selection under those conditions will see to it that gangs of mutually compatible genes will arise, each one selected for its capacity to cooperate with the others that it is likely to meet in bodies, which means the other genes of the gene pool of the species — that's in the case of a sexual species.

Given that natural selection for selfish genes in that sense tends to favor cooperation, we then have to admit that there are some genes that do no such thing, and work against the interests of the rest of the genome, and these are the things that Matt was talking about, the true selfish DNA. And a bit of a terminological problem arises here, which I think Matt glanced at.

Selfish DNA, in the sense of Orgel and Crick, and Doolittle and Sapienza, is DNA which works at the expense of the rest of the genome. Selfish genes in my sense also include genes which actually cooperate — when they build bodies. Because a body is a cooperative enterprise of many genes. So they are still selfish genes in my original sense, but they're not selfish genes in the sense of Orgel and Crick. So some people have resorted to the use of the phrase "ultra selfish genes" — or "outlaw genes" — to distinguish those.

A new book has appeared, very recently, unfortunately too recently to be quoted in my preface to the new edition, by Robert Trivers and Austin Burt, called Genes in Conflict, which is the last word on this subject. Bob Trivers's name reminds me, and it's a source of particular joy, that the 30th anniversary edition has restored the original foreword by him, which was in the first edition, and which was somehow cut out of the second edition.

Bob Trivers is one of the four intellectual heroes of the book, the others mentioned being Bill Hamilton, John Maynard Smith, and George Williams — there are of course many others, because I really need to stress that my book is more a summary of the ideas of others, and I'd be quite embarrassed if it were thought that I was claiming them for myself — and the neo-Darwinian synthesis of Fisher, Haldane, and Wright, going back indeed as far as Weismann, had, I think, already foreshadowed the idea of the selfish gene very explicitly.

But as I was saying, I'm delighted that Bob Trivers's original foreword is back. Not only is it a beautifully crafted introduction to the book; unusually, he chose the medium of a book foreword to announce to the world a brilliant new idea, his theory of the evolution of self-deception, which would grace any scientific paper. And I'm very grateful to him for giving permission for the original foreword to go into this anniversary edition.

One of the oddest reactions to The Selfish Gene has been the desire expressed by more than one person to un-read it. Here's the verdict of a reader in Australia, for example:

"Fascinating, but at times I wish I could unread it . . . On one level, I can share in the sense of wonder Dawkins so evidently sees in the workings-out of such complex processes . . . But at the same time, I largely blame The Selfish Gene for a series of bouts of depression I suffered from for more than a decade . . . Never sure of my spiritual outlook on life, but trying to find something deeper – trying to believe, but not quite being able to – I found that this book just about blew away any vague ideas I had along these lines, and prevented them from coalescing any further. This created quite a strong personal crisis for me some years ago."

I previously, in another book, Unweaving the Rainbow, described similar reactions. There was a man in New Zealand who said he couldn't sleep for three nights after reading it; and a teacher in Canada wrote to say that a pupil of his had come to him in tears, because reading The Selfish Gene had convinced her that life was futile and not worth living. He drew her attention to the occasion when Lenin was placed in a sealed train, in case the bacillus of Leninism should leak out when he was transported back to Russia, and he advised this young woman to show the book to none of her friends.

If something is true, no amount of wishful thinking can undo it. That's the first thing to say. But the second thing to say is almost as important. Which is that there really never was any reason for these despairing reactions at all. It is a complete misunderstanding of what science can tell us about ourselves if we conclude from it that we are somehow diminished by it, by the truth. Our life is what we make of it. No new facts about our nature can change that. And another way of putting it is, in the concluding words of the original first edition of The Selfish Gene:

"We can even discuss ways of deliberately cultivating and nurturing pure, disinterested altruism – something that has no place in nature, something that has never existed before in the whole history of the world. We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on earth can rebel against the tyranny of the selfish replicators."

Thank you very much. 


[MELVYN BRAGG:] Thank you very much. You've been generous with your applause but I'd like to thank all the speakers for their splendidly stimulating and original talks and their clarity and extraordinary concision. We are all immensely grateful to them.

And Darwin@LSE would like to thank Oxford University Press for supporting this event. And thanks to LSE Conferences and Events office, which dealt with a stampede for tickets so unprecedented that, within a few minutes, both the server and phone lines had crashed.

I'm sure I'm speaking for everyone on this platform in expressing our gratitude for the extraordinary efficiency and the best briefing in the world from Helena Cronin.

As one person said in reply to the standard question "Where did you hear about this event?": "The whole world is talking about it."



March 04, 2006

Yes, genes can be selfish

Review by Prof. Steven Pinker

To mark the 30th anniversary of Richard Dawkins's book, OUP is to issue a collection of essays about his work. Here, Steven Pinker, professor of psychology at Harvard University, wonders whether Dawkins's big idea has not gone far enough.

I AM A COGNITIVE SCIENTIST, someone who studies the nature of intelligence and the workings of the mind. Yet one of my most profound scientific influences has been Richard Dawkins, an evolutionary biologist. The influence runs deeper than the fact that the mind is a product of the brain and the brain a product of evolution; such an influence could apply to someone who studies any organ of any organism. The significance of Dawkins's ideas, for me and many others, runs to his characterisation of the very nature of life and to a theme that runs throughout his writings: the possibility of deep commonalities between life and mind.

[...continue]


My biggest bloomer
Robin McKie
Sunday March 5, 2006
The Observer

In The Selfish Gene, Dawkins, in typical, robust style, rips up the idea of evolution as it was then understood and substitutes his vision of natural selection. Animals and plants do not use genes to self-replicate, he argues. It is the other way round. 'We are robot vehicles blindly programmed to preserve the selfish molecules known as genes,' he states. Thus the egg not only comes before the chicken, it runs the animal's entire life.

It is an intriguing thesis, one that has proved to be highly enduring and phenomenally successful, shaping a generation's thinking about the way that DNA controls our lives. I just wished someone had warned me about it at the time....

[...continue]


March 12, 2006
It's all in the genes
by Prof. Richard Dawkins 
The Sunday Times Oxford Literary Festival starts on Friday, March 24. Previewing events at the festival, Richard Dawkins looks back at the extraordinary 30-year history of his first book, The Selfish Gene

The best way to explain the title is by locating the emphasis. Emphasise “selfish” and you will think the book is about selfishness, whereas, if anything, it devotes more attention to altruism. The correct word of the title to stress is “gene”, and let me explain why. A central debate within Darwinism concerns the unit that is actually selected: what kind of entity is it that survives, or does not survive, as a consequence of natural selection? That unit will become, more or less by definition, “selfish”. Altruism might well be favoured at other levels. Does natural selection choose between species? If so, we might expect individual organisms to behave altruistically “for the good of the species”. They might limit their birth rates to avoid overpopulation, or restrain their hunting behaviour to conserve the species’ future stocks of prey. It was such widely disseminated misunderstandings of Darwinism that originally provoked me to write the book....

[...continue]


Great minds united in an ungodly trio
The Observer's Science Editor charts Dennett's central role in the long and bitter struggle of the 'Darwin Wars'

Robin McKie, science editor
Sunday March 12, 2006
The Observer

Daniel Dennett's main claim to fame is through his membership of a triumvirate of intellectual heavyweights who have waged war on behalf of Charles Darwin and his theories. The British zoologist Richard Dawkins, based at Oxford University, and the Harvard biologist and ant expert Edward O. Wilson make up the rest of this group. Each is committed, fiercely, to the idea that evolutionary theory is sufficient to explain our world, all living things and our own species. Call in any other force to elucidate our existence and you are indulging in sheer intellectual sloppiness, they argue.

All three are fierce debaters, particularly Dennett and Dawkins, and none has been known for taking prisoners on the battlefield of biology. Many is the bloodied academic who has crossed swords with them. Not surprisingly, this ungodly crew doesn't go down terribly well with the religious right of America....

[...continue]


Loveable rogue, or selfish killer?
(March 21, 2006)
Roger Highfield 

Genes will do whatever it takes to duplicate themselves and survive, even if the result is infanticide, murderous queens or a vicious battle of the sexes. Roger Highfield examines new evidence that reinforces Richard Dawkins' 30-year-old vision of ruthless DNA

Three decades after Richard Dawkins revolutionised our understanding of living things with The Selfish Gene, evidence has accumulated to back his cold-eyed vision of how bodies, families and society are shaped by the simple "duplicate me" message in our genetic instructions.

[...continue]

 
