2013: WHAT *SHOULD* WE BE WORRIED ABOUT?

Nassim Nicholas Taleb
Distinguished Professor of Risk Engineering, New York University School of Engineering; Author, Incerto (Antifragile, The Black Swan...)
What We Learn From Firefighters

How Fat Are the Fat Tails?

Eight years ago, I showed, using twenty million pieces of data from socioeconomic variables (about all the data that was available at the time), that current tools in economics and econometrics don't work whenever there is exposure to large deviations, or "Black Swans". There was a gigantic mammoth in the middle of the classroom. Simply, one observation in 10,000, that is, one day in 40 years, can explain the bulk of the "kurtosis", a measure of what we call "fat tails", that is, how much the distribution under consideration departs from the standard Gaussian, or the role of remote events in determining the total properties. For the U.S. stock market, a single day, the crash of 1987, determined 80% of the kurtosis. The same problem is found with interest and exchange rates, commodities, and other variables. The problem is not just that the data had "fat tails", something people knew but sort of wanted to forget; it was that we would never be able to determine "how fat" the tails were. Never.
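
To make the point concrete, here is a minimal sketch of that single-observation diagnostic. It is my illustration, not the original study: synthetic Student-t data (3 degrees of freedom) stands in for a real return series such as daily U.S. stock moves.

    # How much of the "kurtosis" does one observation explain?
    # Synthetic fat-tailed data (Student-t, 3 degrees of freedom) stands in
    # for roughly 40 years of daily returns.
    import numpy as np

    rng = np.random.default_rng(42)
    returns = rng.standard_t(df=3, size=10_000)

    # Share of the fourth moment (the numerator of kurtosis) owed to the
    # single largest observation in absolute value.
    fourth = returns ** 4
    print(f"Largest day's share of kurtosis: {fourth.max() / fourth.sum():.0%}")

Rerun it with a different seed and the share itself jumps around, which is precisely the instability described above.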

The implication is that those tools used in economics that are based on squaring variables (more technically, the Euclidean, or L-2 norm), such as standard deviation, variance, correlation, regression, or value-at-risk, the kind of stuff you find in textbooks, are not valid scientifically (except in some rare cases where the variable is bounded). The so-called "p values" you find in studies have no meaning with economic and financial variables. Even the more sophisticated techniques of stochastic calculus used in mathematical finance do not work in economics except in selected pockets.
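
A quick simulation makes the failure of the L-2 toolkit visible. This is my sketch, not the author's, and it assumes a Pareto variable with tail index 1.5, for which the theoretical variance is infinite:

    # The running sample standard deviation of a fat-tailed variable never
    # settles down, while the Gaussian one converges almost immediately.
    import numpy as np

    rng = np.random.default_rng(0)
    fat = rng.pareto(1.5, size=100_000)   # tail index 1.5: infinite variance
    thin = rng.normal(size=100_000)

    for label, x in (("Pareto(1.5)", fat), ("Gaussian", thin)):
        for k in (1_000, 10_000, 100_000):
            print(f"{label:>12}: std after {k:>7,} obs = {x[:k].std():8.2f}")

Every extra decade of data revises the fat-tailed "risk" number, which is what it means to be unable to say how fat the tails are.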

The results of most papers in economics based on these standard statistical methods—the kind of stuff people learn in statistics class—are thus not expected to replicate, and they effectively don't. Further, these tools invite foolish risk taking. Nor do alternative techniques yield reliable measures of rare events; at best we can tell whether a remote event is underpriced, without assigning it an exact value.
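
The failure to replicate is easy to exhibit on the fat-tails measure itself. In the sketch below (my construction, assuming Student-t data with 3 degrees of freedom, whose theoretical fourth moment is infinite), kurtosis is estimated on independent samples of the very same process; a tool that worked would return roughly the same number each time:

    # Sample kurtosis of the same fat-tailed process, measured on
    # independent samples of ~10 years of daily data each.
    import numpy as np

    rng = np.random.default_rng(3)
    for trial in range(5):
        s = rng.standard_t(df=3, size=2_500)
        kurt = ((s - s.mean()) ** 4).mean() / s.var() ** 2
        print(f"sample {trial}: kurtosis = {kurt:6.1f}")

A Gaussian sample would print roughly 3.0 every time; here the number is whatever the largest observation in the window happens to make it.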

The Evidence

The story took a depressing turn, as follows. I put together this evidence—in addition to a priori mathematical derivations showing the impossibility of some statistical claims—as a companion for The Black Swan. The papers sat for years on the web and were posted on this site, Edge (ironically, the Edge posting took place only a few hours before the announcement of the bankruptcy of Lehman Brothers). They were downloaded tens of thousands of times on SSRN (the Social Science Research Network). For good measure, a technical version was published in a peer-reviewed statistical journal.

I thought that the story had ended there and that people would pay attention to the evidence; after all, I played by the exact rules of scientific revelation, communication, and transmission of evidence. Nothing happened. To make things worse, I sold millions of copies of The Black Swan and nothing happened, so it cannot be that the results were not properly disseminated. I even testified in front of a Congressional Committee (twice). There was even a model-caused financial crisis, for Baal's sake, and nothing happened. The only counters I received were that I was "repetitive", "egocentric", "arrogant", "angry", or something even more insubstantial, meant to demonize the messenger. Nobody has managed to explain why it is not charlatanism, downright scientifically fraudulent, to use these techniques.

Absence of Skin in the Game

It all became clear when, one day, I received the following message from a firefighter. His point was that he found my ideas on tail risk extremely easy to understand. His question was: How come risk gurus, academics, and financial modelers don't get it?

Well, the answer was right there, staring at me, in the message itself. The fellow, as a firefighter, could not afford to misunderstand risk and statistical properties. He would be directly harmed by his error. In other words, he has skin in the game. And, in addition, he is honorable, risking his life for others, not making others take risks for his sake.

So the root cause of this model fraud has to be the absence of skin in the game, combined with too much money and power at stake. Had the modelers and predictors been harmed by their own mistakes, they would have exited the gene pool—or raised their level of morality. Someone else (society) pays the price of the mistakes. Clearly, the academic profession consists of playing a game, pleasing the editors of "prestigious" journals, or being "highly cited". When confronted, they offer the nihilistic fallacy that "we've got to start somewhere"—which could justify using astrology as a basis for science. And the business is unbelievably circular: a "successful PhD program" is one that has "good results" on the "job market" for academic positions. I was told bluntly at a certain business school, where I refused to teach risk models and "modern portfolio theory", that my mission as a professor was to help students get jobs. I find all of this highly immoral—immoral to create harm for profit. Primum non nocere.

Only a rule of skin in the game, that is, direct harm from one's errors, can puncture the game aspect of such research and establish some form of contact with reality.