REAL LIFE IS NOT A CASINO

Nassim Nicholas Taleb [11.6.08]

On New Year's Day I received a prescient essay from Nassim Taleb, author of The Black Swan, as his response to the 2008 Edge Question: "What Have You Changed Your Mind About?" In "Real Life Is Not A Casino", he wrote:

I've shown that institutions that are exposed to negative black swans—such as banks and some classes of insurance ventures—have almost never been profitable over long periods. The problem of the illustrative current subprime mortgage mess is not so much that the "quants" and other pseudo-experts in bank risk-management were wrong about the probabilities (they were) but that they were severely wrong about the different layers of depth of potential negative outcomes.

Taleb had changed his mind about his belief "in the centrality of probability in life" and his advocacy that "we should express everything in terms of degrees of credence, with unitary probabilities as a special case for total certainties and null for total implausibility."

Critical thinking, knowledge, beliefs—everything needed to be probabilized. Until I came to realize, twelve years ago, that I was wrong in this notion that the calculus of probability could be a guide to life and help society. Indeed, it is only in very rare circumstances that probability (by itself) is a guide to decision making. It is a clumsy academic construction, extremely artificial, and nonobservable. Probability is backed out of decisions; it is not a construct to be handled in a stand-alone way in real-life decision making. It has caused harm in many fields.

The essay is one of more than one hundred that have been edited for a new book What Have You Changed Your Mind About? (forthcoming, Harper Collins, January 9th).

I spent a long time believing in the centrality of probability in life and advocating that we should express everything in terms of degrees of credence, with unitary probabilities as a special case for total certainties and null for total implausibility. Critical thinking, knowledge, beliefs—everything needed to be probabilized. Until I came to realize, twelve years ago, that I was wrong in this notion that the calculus of probability could be a guide to life and help society. Indeed, it is only in very rare circumstances that probability (by itself) is a guide to decision making. It is a clumsy academic construction, extremely artificial, and nonobservable. Probability is backed out of decisions; it is not a construct to be handled in a stand-alone way in real-life decision making. It has caused harm in many fields.

Consider the following statement. "I think that this book is going to be a flop, but I would be very happy to publish it." Is the statement incoherent? Of course not: Even if the book is very likely to be a flop, it may make economic sense to publish it (for someone with deep pockets and the right appetite), since one cannot ignore the small possibility of a handsome windfall or the even smaller possibility of a huge windfall. We can easily see that when it comes to low odds, decision making no longer depends on the probability alone. It is the pair, probability times payoff (or a series of payoffs), the expectation, that matters. On occasion, the potential payoff can be so vast that it dwarfs the probability—and these are usually real-world situations in which probability is not computable.
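
To make the arithmetic concrete, here is a minimal Python sketch of the publishing decision, with hypothetical probabilities and payoffs; the point is only that the probability-weighted sum, not the probability alone, drives the decision.

    # Hypothetical numbers, for illustration only.
    outcomes = [
        ("flop",          0.90,    -200_000),   # the likely outcome: lose the advance
        ("modest seller", 0.09,     300_000),   # small chance of a decent profit
        ("runaway hit",   0.01,  30_000_000),   # tiny chance of a huge windfall
    ]

    # The expectation is the probability-weighted sum of payoffs.
    expected_value = sum(p * payoff for _, p, payoff in outcomes)
    print(f"Expected value of publishing: ${expected_value:,.0f}")
    # 0.90 * -200,000 + 0.09 * 300,000 + 0.01 * 30,000,000 = 147,000: positive,
    # even though a flop is by far the most probable outcome.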

Consequently, there is a difference between knowledge and action. You cannot naïvely rely on scientific statistical knowledge (as they define it) or what the epistemologists call justified true belief for non-textbook decisions. Statistically oriented modern science is typically based on Right/Wrong, with a set confidence level, stripped of consequences. Would you take a headache pill if it were deemed effective at a 95-percent confidence level? Most certainly. But would you take the pill if it were established to be "not lethal" at a 95-percent confidence level? I hope not.

When I discuss the impact of the highly improbable ("black swans"), people make the automatic mistake of thinking that the message is that these "black swans" are necessarily more probable than assumed by conventional methods. They are mostly less probable. Consider that, in a winner-take-all environment, such as the arts, the odds of success are low, since there are fewer successful people, but the payoff is disproportionately high. So, in a fat-tailed environment (what I call Extremistan), rare events are less frequent (their probability is lower), but they are so effective that their contribution to the total pie is more substantial.

[Technical note: the distinction is, simply, between raw probability, P[x>K], i.e., the probability of exceeding K, and E[x|x>K], the expectation of x conditional on x>K. It is the difference between the zeroth moment and the first moment. The latter is what usually matters for decisions. And it is the (conditional) first moment that needs to be the core of decision making. What I saw in 1995 was that an out-of-the-money option value increases when the probability of the event decreases, making me feel that everything I thought until then was wrong.]
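
A rough Monte Carlo sketch of that distinction, with an arbitrarily chosen fat-tailed distribution (a classical Pareto with tail index 1.5) and an arbitrary threshold K: the exceedance probability is small, yet the conditional expectation is large.

    # Illustrative, assumed choices: Pareto tail index 1.5, threshold K = 10.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, K = 1.5, 10.0

    # numpy's pareto() draws from a Lomax distribution; adding 1 gives a
    # classical Pareto with minimum 1 and tail index alpha.
    x = 1.0 + rng.pareto(alpha, size=1_000_000)

    tail = x[x > K]
    p_exceed = tail.size / x.size     # P[x > K]: the raw exceedance probability
    e_cond = tail.mean()              # E[x | x > K]: what matters for the decision

    print(f"P[x > K]     = {p_exceed:.4f}")   # roughly 0.03
    print(f"E[x | x > K] = {e_cond:.1f}")     # roughly 30: small odds, large consequence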

What causes severe mistakes is that outside the special cases of casinos and lotteries, you almost never face a single probability with a single (and known) payoff. You may face, say, a 5-percent probability of an earthquake of magnitude 3 or higher, a 2-percent probability of one of magnitude 4 or higher, and so forth. The same with wars: You have a risk of different levels of damage, each with a different probability. "What is the probability of war?" is a meaningless question for risk assessment.
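
A minimal sketch of such layered risk, with hypothetical exceedance probabilities and damage figures (each band priced at its threshold damage): the expected damage is a sum across severity levels, and the rarest, deepest layer can dominate it.

    # Hypothetical layers: (label, probability of at least this severity, damage at this level).
    layers = [
        ("magnitude >= 3", 0.050,     1_000_000),
        ("magnitude >= 4", 0.020,    20_000_000),
        ("magnitude >= 5", 0.005,   500_000_000),
    ]

    expected_damage = 0.0
    for i, (_label, p_exceed, damage) in enumerate(layers):
        p_next = layers[i + 1][1] if i + 1 < len(layers) else 0.0
        p_band = p_exceed - p_next        # probability of landing in exactly this band
        expected_damage += p_band * damage

    print(f"Expected damage: ${expected_damage:,.0f}")
    # 0.030 * 1e6 + 0.015 * 2e7 + 0.005 * 5e8 = 2,830,000: the rarest layer contributes
    # the most, which a single "probability of an earthquake" hides.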

So it is wrong to look just at a single probability of a single event in cases of richer possibilities (like focusing on such questions as "What is the probability of losing a million dollars?" while ignoring that, conditional on losing more than a million dollars, you may have an expected loss of $20 million, $100 million, or just $1 million). Once again, real life is not a casino with simple bets. This is the error that helps the banking system go bust with an astonishing regularity. I've shown that institutions that are exposed to negative black swans—such as banks and some classes of insurance ventures—have almost never been profitable over long periods. The problem of the illustrative current subprime mortgage mess is not so much that the "quants" and other pseudo-experts in bank risk-management were wrong about the probabilities (they were) but that they were severely wrong about the different layers of depth of potential negative outcomes. For instance, Morgan Stanley has lost about $10 billion (so far), while allegedly having foreseen a subprime crisis and executed hedges against it; they just did not realize how deep it would go and had open exposure to the big tail risks. This is routine. A friend who went bust during the crash of 1987 told me, "I was betting that it would happen, but I did not know it would go that far."
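
A small sketch of why the single probability misleads, using two hypothetical loss profiles: both have the same probability of losing more than a million dollars, but wildly different conditional expected losses.

    # Hypothetical loss profiles as (probability, loss) pairs.
    profiles = {
        "shallow tail": [(0.95, 0), (0.05, 1_500_000)],
        "deep tail":    [(0.95, 0), (0.04, 1_500_000), (0.01, 200_000_000)],
    }

    for name, outcomes in profiles.items():
        p_big = sum(p for p, loss in outcomes if loss > 1_000_000)
        e_cond = sum(p * loss for p, loss in outcomes if loss > 1_000_000) / p_big
        print(f"{name}: P(loss > $1M) = {p_big:.2f}, "
              f"E[loss | loss > $1M] = ${e_cond:,.0f}")
    # Both profiles answer "5 percent" to the naive question, yet one expects to
    # lose about $1.5 million past the threshold and the other about $41 million.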

The point is mathematically simple, but does not register easily. I've enjoyed giving math students the following quiz (to be answered intuitively, on the spot). In a Gaussian world, the probability of exceeding one standard deviation is around 16 percent. What are the odds of exceeding it under a distribution with fatter tails (and the same mean and variance)? The right answer: lower, not higher—the number of deviations drops, but the few that take place matter more. It was entertaining to see that most of the graduate students got it wrong. Those who are untrained in the calculus of probability have a far better intuition of these matters.
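
The quiz can be checked directly. A short sketch with an assumed fat-tailed alternative, a Student-t with 4 degrees of freedom rescaled to the same mean and variance as the Gaussian:

    # Compare P(X > one standard deviation) under a Gaussian and under a
    # fatter-tailed Student-t (4 degrees of freedom) rescaled to unit variance.
    import numpy as np
    from scipy import stats

    p_gaussian = stats.norm.sf(1.0)             # about 0.159

    nu = 4                                      # t with nu = 4 has variance nu / (nu - 2) = 2,
    threshold = np.sqrt(nu / (nu - 2))          # so one rescaled sd corresponds to T > sqrt(2)
    p_fat_tails = stats.t.sf(threshold, df=nu)  # about 0.115

    print(f"Gaussian:   P(X > 1 sd) = {p_gaussian:.3f}")
    print(f"Fat-tailed: P(X > 1 sd) = {p_fat_tails:.3f}")  # lower, as the quiz answer says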

Another complication is that just as probability and payoff are inseparable, so one cannot extract another complicated component—utility—from the decision-making equation. Fortunately, the ancients, with all their tricks and accumulated wisdom in decision making, knew a lot of that—at least, better than modern-day probability theorists. Let us stop systematically treating them as if they were idiots. Most texts blame the ancients for their ignorance of the calculus of probability: The Babylonians, Egyptians, and Romans, in spite of their engineering sophistication, and the Arabs, in spite of their taste for mathematics, were blamed for not having produced a calculus of probability (the latter being, incidentally, a myth, since Umayyad scholars used relative word frequencies to determine the authorship of holy texts and to decrypt messages). The reason was foolishly attributed to theology, lack of sophistication, lack of something people call the "scientific method," or belief in fate. The ancients just made decisions in a more ecologically sophisticated manner than modern epistemology-minded people. They integrated skeptical Pyrrhonian empiricism into decision making. As I said, consider that belief (i.e., epistemology) and action (i.e., decision making), the way they are practiced, are largely not consistent with each other.

Let us apply the point to the current debate on carbon emissions and climate change. Correspondents keep asking me if the climate worriers are basing their claims on shoddy science and whether, owing to nonlinearities, their forecasts are marred by such potential error that we should ignore them. Now, even if I agreed that it was shoddy science, even if I agreed with the statement that the climate folks were most probably wrong, I would still opt for the most ecologically conservative stance. Leave Planet Earth the way we found it. Consider the consequences of the very remote possibility that they may be right—or, worse, the even more remote possibility that they may be extremely right.