LEARNING TO EXPECT THE UNEXPECTED

Nassim Nicholas Taleb [4.17.04]

A black swan is an outlier, an event that lies beyond the realm of normal expectations. Most people expect all swans to be white because that's what their experience tells them; a black swan is by definition a surprise. Nevertheless, people tend to concoct explanations for them after the fact, which makes them appear more predictable, and less random, than they are. Our minds are designed to retain, for efficient storage, past information that fits into a compressed narrative. This distortion, called the hindsight bias, prevents us from adequately learning from the past.

Introduction
by John Brockman

Nassim Taleb is an essayist and mathematical trader, and an "Edge Activist—a member of the literary and empirical community of scientists-philosophers". He wants to create a "platform for a new scientific-minded public intellectual dealing with social and historical events — in replacement to the 'fooled by randomness' historian and the babbling journalistic public intellectual". He is interested in the epistemology of randomness and the multidisciplinary problems of uncertainty and knowledge, particularly in the large-impact hard-to-predict rare events ("Black Swans").

He is a new breed of third culture thinker...a scientist, an essayist, and, to add to an already rare combination, a businessman. This combination has been key to his formulation of The Black Swan as it is concerned with the interconnection between chance and the dynamics of historical events on one hand, and the cognitive biases embedded in human nature affecting our understanding of history on the other.

"Much of what happens in history", he notes, "comes from 'Black Swan dynamics', very large, sudden, and totally unpredictable 'outliers', while much of what we usually talk about is almost pure noise. Our track record in predicting those events is dismal; yet by some mechanism called the hindsight bias we think that we understand them. We have a bad habit of finding 'laws' in history (by fitting stories to events and detecting false patterns); we are drivers looking through the rear view mirror while convinced we are looking ahead."

"Why are we so bad at understanding this type of uncertainty? It is now the scientific consensus that our risk-avoidance mechanism is not mediated by the cognitive modules of our brain, but rather by the emotional ones. This may have made us fit for the Pleistocene era. Our risk machinery is designed to run away from tigers; it is not designed for the information-laden modern world."

—JB

[Editor's Note: On April 8th, the day the 9/11 Commission heard testimony from Presidential advisor Condoleezza Rice, Taleb's Op-Ed piece, "Learning to Expect the Unexpected", was published in The New York Times. After the testimony, he stopped by for a conversation. Below I present both the Op-Ed piece and the discussion.]

NASSIM NICHOLAS TALEB is an essayist and mathematical trader and the author of Dynamic Hedging and Fooled by Randomness (2nd Edition, April 2004).


The 9/11 commission has drawn more attention for the testimony it has gathered than for the purpose it has set for itself. Today the commission will hear from Condoleezza Rice, national security adviser to President Bush, and her account of the administration's policies before Sept. 11 is likely to differ from that of Richard Clarke, the president's former counterterrorism chief, in most particulars except one: it will be disputed.

There is more than politics at work here, although politics explains a lot. The commission itself, with its mandate, may have compromised its report before it is even delivered. That mandate is "to provide a 'full and complete accounting' of the attacks of Sept. 11, 2001 and recommendations as to how to prevent such attacks in the future."

It sounds uncontroversial, reasonable, even admirable, yet it contains at least three flaws that are common to most such inquiries into past events. To recognize those flaws, it is necessary to understand the concept of the "black swan."

A black swan is an outlier, an event that lies beyond the realm of normal expectations. Most people expect all swans to be white because that's what their experience tells them; a black swan is by definition a surprise. Nevertheless, people tend to concoct explanations for them after the fact, which makes them appear more predictable, and less random, than they are. Our minds are designed to retain, for efficient storage, past information that fits into a compressed narrative. This distortion, called the hindsight bias, prevents us from adequately learning from the past.

Black swans can have extreme effects: just a few explain almost everything, from the success of some ideas and religions to events in our personal lives. Moreover, their influence seems to have grown in the 20th century, while ordinary events — the ones we study and discuss and learn about in history or from the news — are becoming increasingly inconsequential.

Consider: How would an understanding of the world on June 27, 1914, have helped anyone guess what was to happen next? The rise of Hitler, the demise of the Soviet bloc, the spread of Islamic fundamentalism, the Internet bubble: not only were these events unpredictable, but anyone who correctly forecast any of them would have been deemed a lunatic (indeed, some were). This accusation of lunacy would have also applied to a correct prediction of the events of 9/11 — a black swan of the vicious variety.

A vicious black swan has an additional elusive property: its very unexpectedness helps create the conditions for it to occur. Had a terrorist attack been a conceivable risk on Sept. 10, 2001, it would likely not have happened. Jet fighters would have been on alert to intercept hijacked planes, airplanes would have had locks on their cockpit doors, airports would have carefully checked all passenger luggage. None of that happened, of course, until after 9/11.

Much of the research into humans' risk-avoidance machinery shows that it is antiquated and unfit for the modern world; it is made to counter repeatable attacks and learn from specifics. If someone narrowly escapes being eaten by a tiger in a certain cave, then he learns to avoid that cave. Yet vicious black swans by definition do not repeat themselves. We cannot learn from them easily.

All of which brings us to the 9/11 commission. America will not have another chance to hold a first inquiry into 9/11. With its flawed mandate, however, the commission is in jeopardy of squandering this opportunity.

The first flaw is the error of excessive and naïve specificity. By focusing on the details of the past event, we may be diverting attention from the question of how to prevent future tragedies, which are still abstract in our mind. To defend ourselves against black swans, general knowledge is a crucial first step.

The mandate is also a prime example of the phenomenon known as hindsight distortion. To paraphrase Kierkegaard, history runs forward but is seen backward. An investigation should avoid the mistake of overestimating cases of possible negligence, a chronic flaw of hindsight analyses. Unfortunately, the hearings show that the commission appears to be looking for precise and narrowly defined accountability.

Yet infinite vigilance is not possible. Negligence in any specific case needs to be compared with the normal rate of negligence for all possible events at the time of the tragedy — including those events that did not take place but could have. Before 9/11, the risk of terrorism was not as obvious as it seems today to a reasonable person in government (which is part of the reason 9/11 occurred). Therefore the government might have used its resources to protect against other risks — with invisible but perhaps effective results.

The third flaw is related. Our system of rewards is not adapted to black swans. We can set up rewards for activity that reduces the risk of certain measurable events, like cancer rates. But it is more difficult to reward the prevention (or even reduction) of a chain of bad events (war, for instance). Job-performance assessments in these matters are not just tricky, they may be biased in favor of measurable events. Sometimes, as any good manager knows, avoiding a certain outcome is an achievement.

The greatest flaw in the commission's mandate, regrettably, mirrors one of the greatest flaws in modern society: it does not understand risk. The focus of the investigation should not be on how to avoid any specific black swan, for we don't know where the next one is coming from. The focus should be on what general lessons can be learned from them. And the most important lesson may be that we should reward people, not ridicule them, for thinking the impossible. After a black swan like 9/11, we must look ahead, not in the rear-view mirror.

[Editor's Note: First published as an Op-Ed Page article in The New York Times on April 8, 2004.]


LEARNING TO EXPECT THE UNEXPECTED: A Talk with Nassim Nicholas Taleb

[NASSIM NICHOLAS TALEB:] People say I'm a trader, which may be a strange designation for someone whose central belief is skepticism about the predictability of markets and people's biases in the attribution of skills — how we are "fooled by randomness". Some people call me a mathematician, which is not right either, because I specialize in a narrow branch of mathematics, models of uncertainty applied to the social sciences. But I focus instead on the breakdown of these models as they come into contact with reality; I spend my time doing empirical research on how and where these ambitious models fail us. So you cannot call a "mathematician" someone who uses mathematics to reject mathematical methods without proposing anything better than mere "humility" and the rare courage to say "I don't know" — neither of which comes with a sophisticated equation or provides material for technical papers. As I will discuss later, I am extremely skeptical about our current ability to capture socio-economic randomness with models — but such information is vital in and of itself and can be used to get out of trouble. Being an inverse-trader and an inverse-mathematician (using the conventional sense), I tell people I am a scientific-philosophical essayist and people leave me alone.

One can study randomness at three levels: mathematical, empirical, and behavioral. The first is the narrowly defined mathematics of randomness, which is no longer the interesting problem because we have pretty much reached the point of diminishing returns in what we can develop in that branch. The second is the dynamics of the real world, the dynamics of history: what we can and cannot model, how we can get into the guts of the mechanics of historical events, whether quantitative models can help us and how they can hurt us. And the third is our human ability to understand uncertainty. We are endowed with a native scorn of the abstract; we ignore what we do not see, even if our logic recommends otherwise. We tend to overestimate causal relationships. When we meet someone who by playing Russian roulette became extremely influential, wealthy, and powerful, we still act toward that person as if he gained that status by skill alone, even when we know there has been a lot of luck. Why? Because our behavior toward that person is going to be entirely determined by shallow heuristics and very superficial matters related to his appearance.

There are two technical problems in randomness — what I call the soft problem and the hard problem. The soft problem in randomness is what practitioners hate me for, but academics have a no-brainer solution for it — it's just hard to implement. It's what we call in some circles the observation bias, or the related data-mining problem. When you look at anything — say the stock market — you see the survivors, the winners; you don't see the losers because you don't observe the cemetery, and you will be likely to misattribute the causes that led to the winning.

There is a silly book called The Millionaire Next Door, and one of the authors wrote an even sillier book called The Millionaire Mind. They interviewed a bunch of millionaires to figure out how these people got rich. Visibly they came up with a bunch of traits: you need a little bit of intelligence, a lot of hard work, and a lot of risk-taking. And they derived that, hey, taking risk is good for you if you want to become a millionaire. What these people forgot to do is to go take a look at the less visible cemetery — in other words, bankrupt people, failures, people who went out of business — and look at their traits. They would have discovered that some of the same traits, like hard work and risk-taking, are shared by these people. This tells me that the unique trait the millionaires had in common was mostly luck.

This bias makes us miscompute the odds and wrongly ascribe skills. If you funded 1,000,000 unemployed people endowed with no more than the ability to say "buy" or "sell", odds are that you will break even in the aggregate, minus transaction costs, but a few will hit the jackpot, simply because the base cohort is very large. It will be almost impossible not to have small Warren Buffetts by luck alone. After the fact they will be very visible and will derive precise and plausible-sounding explanations about why they made it. It is difficult to argue with them; "nothing succeeds like success". All these retrospective explanations are pervasive, but there are scientific methods to correct for the bias. This has not filtered through to the business world or the news media; researchers have evidence that professional fund managers are just no better than random and cost money to society (the total revenues from these transaction costs are in the hundreds of billions of dollars), but the public will remain convinced that "some" of these investors have skills.
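To make the arithmetic of that cohort concrete, here is a minimal simulation sketch (my illustration, not Taleb's own calculation; the cohort size, track-record length, and fair-coin assumption are arbitrary choices). It hands a million "traders" nothing but a coin flip each year and counts how many end up with a spotless ten-year record:

    import random

    # Minimal sketch (illustrative assumptions, not Taleb's own calculation):
    # each "trader" gets a fair coin flip every year. Survivorship bias makes
    # the few with unbroken winning records look skilled, because we never
    # look at the cemetery of losers.
    random.seed(42)

    N_TRADERS = 1_000_000   # size of the cohort (assumption for illustration)
    N_YEARS = 10            # length of the track record (assumption)

    survivors = sum(
        all(random.random() < 0.5 for _ in range(N_YEARS))
        for _ in range(N_TRADERS)
    )

    expected = N_TRADERS * 0.5 ** N_YEARS
    print(f"Traders with a perfect {N_YEARS}-year record by luck alone: {survivors}")
    print(f"Expected from pure chance alone: {expected:.0f}")

With these numbers, pure chance hands roughly a thousand people a flawless decade-long track record; those are exactly the visible winners whose retrospective explanations are so hard to argue with.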

The hard problem of randomness may be insoluble. It's what some academics hate me for. It is an epistemological problem: we do not observe probabilities in a direct way. We have to find them somewhere, and they can be prone to a few types of misspecification. Some probabilities are incomputable — the good news is that we know which ones. Most of the mathematical models we have to capture uncertainty work in a casino and gambling environment, but they are not applicable to the complicated social world in which we live, a fact that is trivial to show empirically. Consider two types of randomness. The first type is physical randomness — in other words, the probability of running into a giant taller than seven, eight, or nine feet, which in the physical world is very low. The probability of running into someone 200 miles tall is definitely zero; because you have to have a mother of some size, there are physical limitations. The probability that a heat particle will go from here to China, or from here to the moon, is extremely small, since it needs energy for that. These distributions tend to be bell-shaped, Gaussian, with tractable properties.

But in the random variables we observe today, like prices — what I call Type-2 randomness, anything that's informational — the sky is the limit. It's "wild" uncertainty. As the Germans saw during the hyperinflation episode, a currency can go from one to a billion, instantly. You can name a number; nothing physical can stop it from getting there. What is worrisome is that nothing in the past statistics could have helped you guess the possibility of such a hyperinflationary episode. People can become very powerful overnight on a very small idea.

Take the Google phenomenon or the Microsoft effect — "all-or-nothing" dynamics. The equivalent of Google, where someone just takes over everything, would have been impossible to witness in the Pleistocene. Such dynamics are more and more prevalent in a world where the bulk of the random variables are socio-informational, with few physical limitations. That type of randomness is close to impossible to model, since a single observation of large impact — what I call a Black Swan — can destroy the entire inference.
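The difference between the two types of randomness can be made concrete with a small sketch (my illustration, not from the original text; the distributions and parameters are assumptions chosen only for contrast). Draw a large sample from a bell-shaped distribution and from a heavy-tailed Pareto distribution, and ask how much of the total comes from the single largest observation:

    import random

    # Minimal sketch (illustrative assumptions): type-1 "physical" randomness
    # versus type-2 "wild" randomness, compared by the share of the total
    # contributed by the single largest observation.
    random.seed(0)
    N = 100_000

    # Type 1: height-like data, bell-shaped around a mean (in cm).
    gaussian_sample = [random.gauss(170, 10) for _ in range(N)]

    # Type 2: heavy-tailed data via inverse-transform sampling from a Pareto
    # distribution with tail index alpha (value chosen only for illustration).
    alpha = 1.1
    pareto_sample = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(N)]

    for name, sample in (("bell-shaped", gaussian_sample), ("heavy-tailed", pareto_sample)):
        share = max(sample) / sum(sample)
        print(f"{name:12s}: largest single observation = {share:.4%} of the total")

In the bell-shaped case no single observation matters; in the heavy-tailed case one draw can account for a sizeable fraction of the entire sample, which is why a single Black Swan can destroy the inference.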

This is not just a logical statement: it happens routinely. In spite of all the mathematical sophistication, we're not getting anything tangible except the knowledge that we do not know much about these "tail" properties. And since our world is more and more dominated by these large deviations that are not possible to model properly, we understand less and less of what's going on.

Quantitative economics, particularly finance, has not been a particularly introspective or empirical science. Vilfredo Pareto had the intuition of type-2 uncertainty in the social world (Black Swan style) more than 100 years ago with his non-Gaussian distribution. Shamefully, mainstream economists ignored him because his alternative did not yield "tangible" answers for academic careers. Financial economists built "portfolio theory", which is based on our ability to measure financial risks. They used the bell-shaped (and similar) distributions, which proliferated in academia and yielded a handful of Nobel medals.

Everything reposes on probabilities being stationary, i.e., not changing after you observe them, assuming what you observed was true. They were all convinced they could measure risks the way someone would measure the temperature. It led to a series of fiascos, including the blowup of a fund called Long-Term Capital Management, co-founded by two Nobel economists. Yet it has not been discredited — they still say "we have nothing better" and teach it in business schools. This is what I call the problem of gambling with the wrong dice. Here you have someone who is extremely sophisticated at computing the probabilities on the dice, but guess what? They have no clue what dice they are using and no mental courage to say "I don't know".
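To see what gambling with the wrong dice means in numbers, here is a small sketch (my illustration, not from the original text; the 252-trading-day year is a standard convention, the rest is an assumption). It asks a bell-curve risk model how often it expects large daily moves:

    import math

    # Minimal sketch (illustrative assumptions): the waiting time a Gaussian
    # risk model assigns to large "k-sigma" daily moves. Fat-tailed markets
    # produce such moves far more often than the model allows.

    def gaussian_tail(k: float) -> float:
        """P(X > k) for a standard normal variable, via the complementary error function."""
        return 0.5 * math.erfc(k / math.sqrt(2.0))

    TRADING_DAYS_PER_YEAR = 252  # standard market convention

    for k in (3, 5, 10):
        p = gaussian_tail(k)
        years = 1.0 / (p * TRADING_DAYS_PER_YEAR)
        print(f"{k:2d}-sigma daily move: p = {p:.2e}, "
              f"expected about once every {years:,.0f} trading years")

Under the bell-curve assumption a 10-sigma day should arrive about once in 10^20 trading years, yet the 1987 crash, often described as a move of roughly twenty such standard deviations, happened anyway; the model computes exquisite odds on dice nobody is actually rolling.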

Social scientists have suffered from physics envy, since physics has been very successful at creating mathematical models with huge predictive value. In financial economics, particularly in a field called risk management, the predictive value of the models is no different from astrology. Indeed it resembles astrology (without the elegance). They give you an ex-post, ad-hoc explanation. After the Long-Term Capital Management fiasco I was puzzled by the reactions of regulators, central bankers, and the financial establishment. They did not seem to learn the real lesson from the event. I then moved my intellectual energy into the more scientifically honest sciences of human nature.


The puzzling question is this: why is it that we humans don't realize that we don't know anything about the significant brand of randomness? Why don't we realize that we are not that capable of predicting? Why don't we notice the bias that keeps us from realizing that we're not learning from our experiences? Why do we still keep going as if we understand them?

A lot of insight comes from behavioral and cognitive psychology — particularly the work of Daniel Kahneman, Amos Tversky, and, of course, Daniel Gilbert — which shows that we don't have good introspective ability to see and understand what makes us tick. This has implications for why we don't know what makes us happy — affective forecasting — why we don't quite understand how we make our choices, and why we don't learn from our own experiences. We think we're better at forecasting than we actually are. Viciously, this applies in full force to certain categories of social scientists.

We are not made for type-2 randomness. How can we humans take into account the role of uncertainty in our lives without moralizing? As Steven Pinker aptly said, our mind is made for fitness, not for truth — but fitness for a different probabilistic structure. Which tricks work? Here is one: avoid the media. We are not rational enough to be exposed to the press. It is a very dangerous thing, because the probabilistic mapping we get from watching television is entirely different from the actual risks that we are exposed to. If you watch a building burning on television, it's going to change your attitude toward that risk regardless of its real actuarial value, no matter your intellectual sophistication. How can we live in a society in the twenty-first, twenty-second, or twenty-third century, while at the same time we have intuitions made for an environment of probably a hundred million years ago? How can we accept as a society that we are largely animals in our behavior, and that our understanding of matters is not of any large consequence in the way we act?

Trivially, we can see it in the behavior of people. Many smokers know that what they're doing is dangerous, yet they continue doing it because visibly their cognition has little impact on their behavior. There is another bias: they believe that the probabilities that apply to others do not apply to them. Most people who want to look like Greek gods — and a lot of people want to look like Greek gods — know exactly what to do: buy a hundred-dollar membership at some gym and show up three times a week.

Most people know exactly what the solutions are to these problems, including the problems of randomness. Knowing is not sufficient. We need to protect ourselves from the modern environment with far more effectual methods than we have today. Think of the risk of diabetes, which is emerging largely as a maladaptation to the world in which we live. But we have a far more acute problem in our risk-bearing. We scorn what we don't see. We take a lot of risks, but because we're comfortable we don't see them. We have no protocol of behavior, and this is far more dangerous than these physical diseases. The probabilistic blindness we have is an equivalent mental illness, far more severe than diabetes and obesity.

Take an example of this probabilistic maladjustment. Say you are flying to New York City. You sit next to someone on the plane, and she tells you that she knows someone whose first cousin worked with someone who got mugged in Central Park in 1983. That information is going to override any form of statistical data that you will acquire about the crime rate in New York City. This is how we are. We're not made to absorb abstract information. The first step is to make ourselves aware of it. But so far we don't have a second step. Should newspapers and television sets come with a warning label?

We have vital research in risk-bearing, done by people like Danny Kahneman, Christopher Hsee, and Paul Slovic. The availability heuristic tells you that your perception of a risk is going to be proportional to how easily the event comes to your mind. It can come in two ways: either because it is compressed into a vivid image, or because it elicits an emotional reaction in you. The latter is called the affect heuristic, recently developed as the "risk as feelings" theory. We observe it in trading all the time. Basically you only worry about what you know, and typically once you know about something the damage is done.


I'm an activist and a proselyte for Edge as a platform for a new scientific-minded public intellectual dealing with social and historical events — in replacement to the "fooled by randomness" historian and the babbling journalistic public intellectual. There are two types of people: those worthy of respect, who try to resist explaining things, and those who cannot resist explaining things. So we have a left column and a right column. In the left column, that of the people who take their knowledge too seriously, you first have the historians: "This was caused by that. Why? Because two events coincided. The president came and suddenly we had prosperity, and hence maybe having a tall president is something good for the economy." You can always find explanations.

The second type in the left column is the journalist. On the day Saddam was caught, the bond market went up in the morning, and it went down in the afternoon. So here we had two headlines — "Bond Market Up on Saddam News" in the morning and "Bond Market Down on Saddam News" in the afternoon — and in both cases very convincing explanations of the moves. Basically, if you can explain one thing and its opposite using the same data, you don't have an explanation. It takes a lot of courage to keep silent.

One aspect of this left-column bias is well known to empirical psychologists as the Belief in the Law of Small Numbers — how we tend to draw conclusions from far too little data, overestimating how much our small samples can tell us. It might have been an optimal strategy in some simpler environment. You don't need to make sure that it is a tiger before you run for your life. If you see something that vaguely resembles a tiger, you run; it's far more efficient to overestimate the odds.
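A quick sketch of how little a small sample proves (my illustration, not from the original text; the sample size and the 8-2 threshold are arbitrary choices): flip a fair coin ten times, repeat the experiment many times, and count how often the split looks obviously lopsided.

    import random

    # Minimal sketch (illustrative assumptions): pure noise routinely looks
    # like a pattern in small samples. A fair coin flipped 10 times produces
    # a split of 8-2 or more extreme surprisingly often.
    random.seed(1)

    TRIALS = 100_000
    FLIPS = 10

    lopsided = sum(
        1 for _ in range(TRIALS)
        if not 3 <= sum(random.random() < 0.5 for _ in range(FLIPS)) <= 7
    )

    print(f"Runs of {FLIPS} fair flips with an 8-2 split or worse: {lopsided / TRIALS:.1%}")

Roughly one run in nine of a perfectly fair coin looks that skewed, which is why reading laws out of a handful of observations is usually reading noise.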

People in the humanities tend to compound our biases — they do not understand the basic concept of sampling error. You also have the businesspeople and their servant economists. Gerd Gigerenzer, paraphrasing George Orwell, has noted that every human being needs to learn to read, to write, and to understand statistical significance. He gave examples showing that the third step has proven much more difficult than the first two. Understanding the significance of events cannot be done without the rigor of scientific skepticism.

On the right side you have Montaigne, worthy of respect because he's intensely introspective, with the courage to resist his own knowledge. That was before everything got wrecked by the Cartesian world: the quest for certainties and the encouragement to explain. Add Hume, Popper, Hayek, Keynes, Peirce. In that category you also have the physicists and the scientists in the empirical world. Why? It's not because they don't have human biases, but because you have this huge infrastructure above them that prevents them from saying something they cannot back up empirically. You also have many skeptical traders (or inverse-traders) in the right column, those who are aware of our inability to predict markets. We do not speculate but take advantage of imbalances and order flow. We operate from a base of natural skepticism.

I want to aggressively promote this skeptical brand of probabilistic thinking into public intellectual life. This is beyond an intellectual luxury. Watching recent events and the politicians and journalists (of all persuasions) reacting to them, I am truly scared to live in this society.