COLLECTIVE AWARENESS
Economic failures cause us serious problems. We need to build simulations of the economy at a much more fine-grained level that take advantage of all the data that computer technologies and the Internet provide us with. We need new technologies of economic prediction that take advantage of the tools we have in the 21st century.
J. DOYNE FARMER is director of the Complexity Economics Programme at the Institute for New Economic Thinking at the Oxford Martin School, professor in the Mathematical Institute at the University of Oxford, and an external professor at the Santa Fe Institute. He was a co-founder of Prediction Company, a quantitative automated trading firm that was sold to UBS in 2006.
~ ~ ~ ~
I'm thinking about collective awareness, which I think of as the models we use to collectively process information about the world, to understand the world and ourselves. It's worth distinguishing collective awareness at three levels. The first level is our models of the environment, the second is our models of how we affect the environment, and the third is our models of our collective effect on ourselves.
Understanding the environment is something we've been doing better and better for many centuries now. Celestial mechanics allows us to understand the solar system. It means that if we spot an asteroid, we can calculate its trajectory and determine whether it's going to hit the Earth, and if it is, send a rocket to it and deflect it.
Another example of collective awareness at level one is weather prediction, which is an amazing success story. Since 1980, weather prediction has steadily improved: every ten years, forecasts get better by about a day, meaning that if this continues, ten years from now a two-day forecast will be as accurate as a one-day forecast is now. We spend $5 billion a year making weather predictions and get $30 billion a year back in economic benefit.
The best example of collective awareness at level two is climate change. Climate change is in the news, it's controversial, and so on, but most scientists believe that the models of climate change are telling us something we need to pay serious attention to. The mere fact that we're even thinking about it is remarkable, because the real effects of climate change will be felt fifty to 100 years from now. We're making a strong prediction about what we're doing to the Earth and what's going to happen. It's not surprising that there's controversy about the exact outcome, but intelligent people know it's really serious, and over time we are going to redirect more and more of our effort to dealing with it.
The hardest problem is collective awareness at level three—understanding our own effects on ourselves. This is because we're complicated animals. The social sciences try to solve this problem, but they have not been successful in the dramatic way that the physical and natural sciences have. This doesn’t mean the job is impossible, however.
Climate prediction had the big advantage that it could piggyback on weather prediction. As weather predictions got more accurate, climate models automatically got more accurate, too. There is a way in which climate prediction is actually easier than weather prediction. You don't try to say what's going to happen three days in the future, you try to say what's going to happen, on average, if things change. If we pump 100 parts per million more CO2 into the atmosphere, how much is that going to warm things up? A climate model is just a simulation of the weather for a long time, but under conditions that are different from those now. You inject some greenhouse gases into the atmosphere, you simulate the world, and you measure the average temperature and the variability of the weather in your simulation.
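To make that distinction concrete, here is a minimal sketch in Python of the difference between the two kinds of prediction. The "weather" is a toy stochastic process, and all the numbers (the persistence, the sensitivity to CO2) are invented for illustration; the point is only that individual days are unpredictable while the long-run average under changed conditions is not:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_weather(co2_ppm, n_days=100_000, sensitivity=0.01):
    """Toy 'weather': daily temperatures wander unpredictably around a
    baseline set by CO2 forcing. All parameters are illustrative."""
    baseline = sensitivity * co2_ppm   # hypothetical forcing effect
    temps = np.empty(n_days)
    t = baseline
    for day in range(n_days):
        # Strong day-to-day persistence plus unpredictable shocks make
        # forecasts useless more than a few days out.
        t = baseline + 0.9 * (t - baseline) + rng.normal(0.0, 1.0)
        temps[day] = t
    return temps

# The weather-style question (what happens on day N?) is hopeless, but
# the climate-style question (how does the average shift?) is easy.
print("mean temperature at 400 ppm:", simulate_weather(400).mean())
print("mean temperature at 500 ppm:", simulate_weather(500).mean())
```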
Climate predictions get a huge benefit from all the effort that's gone into weather prediction. I've been trying to get a good number on how much we've invested in weather prediction; it is certainly $100 billion or more, probably closer to $1 trillion since 1950, when we did the first numerical weather predictions. It sounds like a lot of money, but the benefits are enormous.
I've been thinking about how we can make better economic models, because a lot of the problems we're having in the world right now are at least in part caused by economics and the interaction of economics with mass sociology. Our cultural institutions are lagging behind technological change and having a difficult time keeping pace with it. The economy plays a central role. Since the '70s, the median wage in the US has been close to flat, while the rich have been getting richer at a rate of two or three percent per year. A lot of the factors driving our problems involve the interaction of the economy with everything else. We need to pursue some radically different approaches to making economic models.
It's interesting to reflect on the way we do economic modeling now. How do those models work? What are the basic ideas they're built on? We got an unfortunate taste of the ways in which they don't work in 2006, when some prescient economists at the New York Fed asked FRB/US, the leading econometric model, "What happens if housing prices drop by twenty percent?" This was 2006—their intuition was right on target—over the next two years, housing prices dropped by almost thirty percent. FRB/US said there'd be a little bit of discomfort in the economy, but not much. The answer FRB/US gave them was off by a factor of twenty. It made such bad forecasts because the model didn’t have the key elements that caused the crisis to happen.
Since then, economists have focused a lot of effort on adding these key elements, for example, by coupling financial markets to the macroeconomy. FRB/US didn’t model the banking system, and couldn’t even think about the possibility that banks might default. Issues like that are now in those models. The models have gotten better. But there is still a good chance that when we have the next crisis, we'll get similarly bad answers. The question is, how can we do better?
The first thing one has to say is that it's a hard problem. Economics is a lot harder than physics because people can think. If you make a prediction about the future of the economy, people may respond to your prediction, automatically invalidating it by behaving in a way that creates a different future. Making predictions about economics is a lot harder than using physics to predict the behavior of the natural world.
Fortunately, the most interesting thing we want to do isn't to predict what GDP will do next month; it's to predict what happens if we tinker with the system. If we change the rules so that, say, people can't use as much leverage, or if we put interest rates at level X instead of level Y, what happens to the world? These are conditional forecasts, in contrast to predicting tomorrow's weather, which is an unconditional forecast. It's more like climate prediction. It's an easier problem in some ways and a harder one in others, because it is necessary to simulate a hypothetical world and take into account how people will behave in that hypothetical world. If you have a system like the economy that depends on thinking people, you have to have a good model of how they think and how they're going to respond to the changes you're making.
~ ~ ~ ~
When I was a graduate student, Norman Packard and I decided to take on the problem of beating roulette. We ended up building what turned out to be the first wearable digital computer. We were the first people to take a computer into a casino and successfully predict the outcome of roulette and make a profit. We were preceded by Claude Shannon and Ed Thorp, who built an analog computer that allowed them to predict roulette in Shannon's basement, but they never successfully took it into the casino. My roulette experience changed the rest of my life because it set me on a career path in which I became an expert on prediction. This never would have occurred to me before that.
If a system is chaotic, prediction is harder than it is for a system that isn't. But nonetheless, by better understanding chaotic systems and acknowledging that the chaos is there, perhaps we can predict them better. When I was studying chaos, I thought that perhaps there was a way to take advantage of it. Drawing on my roulette experience, John Sidorowich and I came up with an algorithm for building a nonlinear model of a time series, for making better short-term predictions of low-dimensional chaotic behavior. This is just like weather: Chaos still overwhelms the predictions in the long term, but prediction in the short term is possible. In some cases we could beat standard models pretty well using our method.
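Here is a minimal sketch of the idea, in the simplest possible setting: the logistic map, which is chaotic but only one-dimensional, so no delay embedding is needed. You predict the next value by finding similar past states and averaging what happened after them. This is only the skeleton of the local-approximation approach, not the algorithm as we published it:

```python
import numpy as np

# A chaotic time series from the logistic map.
x = np.empty(2000)
x[0] = 0.3
for t in range(1999):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

train, test = x[:1500], x[1500:]

def nn_forecast(history, current, k=5):
    """Predict the next value by finding the k past states closest to
    the current one and averaging what happened next."""
    dists = np.abs(history[:-1] - current)
    nearest = np.argsort(dists)[:k]
    return history[nearest + 1].mean()

preds = np.array([nn_forecast(train, v) for v in test[:-1]])
rms_nn = np.sqrt(np.mean((preds - test[1:]) ** 2))
rms_persistence = np.sqrt(np.mean((test[:-1] - test[1:]) ** 2))
print(f"one-step RMS error, nearest neighbor: {rms_nn:.4f}")
print(f"one-step RMS error, persistence:      {rms_persistence:.4f}")
```

The nearest-neighbor forecast beats the naive "tomorrow equals today" benchmark by a wide margin one step ahead, even though long-horizon forecasts of the same series are hopeless.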
We applied this to turbulent fluid flows, ice ages, and sunspots. Some clown in the audience would always say, "Have you tried applying this to the stock market?" I got tired of hearing this question. I was approaching ten years at Los Alamos, where they give you a little Nambé-ware nut dish to commemorate ten years of service. That freaked me out. I figured that if I kept hanging around, I'd be there at twenty years and thirty years. So I left just before they gave me the nut dish.
Norman and I started a company called Prediction Company, which predicted the stock market. After many years of hard work, we built a system that made reliable predictions of certain aspects of the stock market. We were betting not on the big movements, but on the little ripples. We could predict the idiosyncratic movements of stocks about a month in advance. The predictions were far from perfect, but we made lots of independent bets, which generated a steady stream of profits. The system has been hugely elaborated since then, but it's still being traded and still making money. But making money isn't my goal in life, so after eight years I quit and went back to doing basic research. I went to the Santa Fe Institute, where I decided to put my complex systems background together with my domain knowledge of financial markets and try to create better theories of what makes the financial system and the economy tick. That's what I've been doing for the last fifteen or twenty years.
My models make alternative assumptions to those made in standard economic models. The biggest standard assumption is equilibrium. A standard economic model assumes that people have a utility function, which measures what they want. Each person—each agent—maximizes their utility. Each agent also has a way of forming expectations about the world, which they use to maximize their utility. The strongest assumption about how expectations are formed is rational expectations. A rational agent has a good model of the world and understands everybody else's model of the world as well. A rational agent can think all this through, know what other people are going to do, and optimize utility accordingly.
Equilibrium means that outcomes match expectations, in a statistical sense: we aren't right every time, but we are right on average. If people are rational and believe that GDP is going to go up by two percent on average, then GDP goes up by two percent on average. Of course this is just on average—it might go down in a given year.
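In symbols, one minimal way to state the rational-expectations condition (notation mine, not from any particular model): the realized outcome differs from the forecast only by a shock that is zero on average.

```latex
x_{t+1} = \mathbb{E}_t\!\left[x_{t+1}\right] + \varepsilon_{t+1},
\qquad \mathbb{E}\!\left[\varepsilon_{t+1}\right] = 0
```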
Starting in the '80s, with more and more effort over the last twenty-five or thirty years, economists have shown that real people aren't rational, and they have been tinkering with their models to take this into account. There were always some economists who said people aren't rational, but the mainstream view has been: maybe people aren't rational, but let's see how far we can get with rational models, and then treat the deviations from rationality as needed. Rational models are well defined, they're clear, and they can be solved. Otherwise it is too easy to get lost, because as soon as you don't let people be rational, you have to figure out how they do think, and the way real people think is complicated. It's easy to get lost in what's called the wilderness of bounded rationality. There are too many different ways for people to be non-rational.
Daniel Kahneman is one of the behavioral economists who has studied ways in which people are not rational. In my opinion, he didn't go nearly far enough. Kahneman says that people don't use utility, and instead use an alternative called prospect theory. But prospect theory is pretty close to utility, and it's still a poor model of what motivates people.
Utility is about goals. It means maximizing something, like the logarithm of wealth, or consumption through time, appropriately discounted so that consumption tomorrow counts, just not quite as much as consumption today. Prospect theory makes utility a little more complicated by treating losses differently from gains. Both are reasonable starting points, though I'm not convinced by either one. Utility gives a useful way to put somebody's goals into a model and take into account the fact that we have goals, but we need to go beyond it.
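For reference, the standard discounted-utility objective that all of this builds on looks like the following, where $c_t$ is consumption at time $t$, $\beta$ is a discount factor between 0 and 1, and the logarithmic form of $u$ is just one common illustrative choice:

```latex
U = \mathbb{E}\!\left[\sum_{t=0}^{\infty} \beta^{t}\, u(c_t)\right],
\qquad u(c) = \log c, \quad 0 < \beta < 1
```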
The way economists have been going beyond it is to add what are called "frictions" to their models. Frictions are essentially constraints on perfect rationality. For example, in an idealized model, wages would always adjust so that supply equals demand, or in economics jargon, so that the labor market clears. But the real world doesn’t work that way. If you're running a company, it's not possible to constantly adjust the wages of your employees. It’s particularly hard to lower their wages. You can’t just go in and say, "The labor market has gotten tighter, so I'm going to lower your wage." So, to make things more realistic, economists have added a constraint that says wages are sticky. This is called a friction. In the revised model the rational agent knows this, and takes it into account in making decisions.
Macro models have developed over the years by adding more and more of these frictions. This means adding constraints to idealized models in which you typically have representative agents, or maybe a distribution of agents, and in which each agent, who might represent a household, reasons about their consumption over their lifetime, makes a bunch of planning decisions, and then updates those as new information about the economy arrives. Most of us don't function that way.
We need to seriously re-examine the whole program. Economic failures cause us serious problems. We need to build simulations of the economy at a much more fine-grained level that take advantage of all the data that computer technologies and the Internet provide us with. We need new technologies of economic prediction that take advantage of the tools we have in the 21st century.
Places like the US Federal Reserve make predictions using a system that has been developed over the last eighty years or so. This line of effort goes back to the middle of the 20th century, when people realized that we needed to keep track of the economy. They began to gather data and set up procedures: firms fill out surveys, the census gathers data, and a great deal of data on economic activity is collected and processed. This system is called "national accounting," and it produces numbers like GDP and unemployment. The numbers arrive on a very slow timescale: some come out once a quarter, some once a year. They are typically lagged, because it takes a lot of time to process the data, and they are often revised as much as a year or two later. That system was built to work in tandem with the models, which likewise process very aggregated, high-level summaries of what the economy is doing. The data is old-fashioned and the models are old-fashioned.
It's a 20th-century technology that's been refined in the 21st century. It's very useful and represents a high level of achievement, but it is now outdated. The Internet and computers have changed things. With the Internet, we can gather rich, detailed data about what the economy is doing at the level of individuals. We don't have to rely on surveys; we can just grab the data. Furthermore, with modern computer technology we could simulate what 300 million agents are doing, simulating the economy at the level of individuals. We could simulate what every company and every bank in the United States is doing. Such a model could be much, much better than what we have now. This is an achievable goal.
But we're doing nothing close to that. We could achieve what I just described with a technological system that's simpler than Google search, yet we're not doing it. We need to start creating a new technology for economic prediction that runs side by side with the old one and makes its predictions in a very different way. This could give us a lot more guidance about where we're going and help keep the economic shit from hitting the fan as often as it does.
~ ~ ~ ~
I'm at the Institute for New Economic Thinking at the Oxford Martin School, where I'm the director of the Complexity Economics Programme. I'm also a professor in the Mathematical Institute at the University of Oxford. I have a group with about ten graduate students and five postdocs, and we're doing research to build models of the economy at several different levels.
One effort is to build models of the financial system. Our models use snapshots of the portfolios of banks and other large financial institutions at different points in time to understand systemic risk. We model what the banks are doing, how they affect each other, and how shocks propagate around the financial system. We look at things like how stable the financial system is.
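As a rough illustration of shock propagation (not our actual model, which works from real portfolio snapshots), here is a toy fire-sale cascade in Python: banks hold overlapping portfolios, a price shock pushes some banks past their capital buffer, their liquidations push prices down further, and the loop repeats. All the numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

n_banks, n_assets = 10, 4
# Invented snapshot: each bank's holdings (units) of each asset class,
# with every price starting at 1.0.
holdings = rng.uniform(0.0, 10.0, size=(n_banks, n_assets))
initial_value = holdings.sum(axis=1)
equity = 0.05 * initial_value    # thin capital buffers (assumed)
prices = np.ones(n_assets)
market_depth = 100.0             # how much selling moves prices (assumed)

failed = np.zeros(n_banks, dtype=bool)
prices[0] = 0.80                 # initial shock to one asset class

while True:
    losses = initial_value - holdings @ prices
    newly_failed = (losses > equity) & ~failed
    if not newly_failed.any():
        break
    # Failed banks dump everything; the price impact hits every other
    # bank holding the same assets -- that's the contagion channel.
    sold = holdings[newly_failed].sum(axis=0)
    prices = np.maximum(prices - sold / market_depth, 0.05)
    holdings[newly_failed] = 0.0
    failed |= newly_failed

print(f"a 20% shock to one asset class fells {failed.sum()} of {n_banks} banks")
```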
Another project is simulating housing markets. We have worked with the Bank of England to analyze policies for regulating housing markets. For example, about a year or a year and a half ago, the Bank of England instituted a policy that banks had to make eighty-five percent of their loans to people whose loan-to-income ratio was below 3.5. They did this to ensure stability in housing prices, to damp a possible bubble. We simulated housing markets and saw that this policy worked pretty well.
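A toy version of that kind of policy experiment (nothing like the Bank of England model in detail, with every parameter invented rather than calibrated) might look like this: buyers bid a multiple of their income, the cap limits how many loans can exceed a loan-to-income threshold, and we compare the resulting price paths with and without the policy:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_prices(lti_cap=None, n_years=30, n_buyers=1000):
    """Toy housing market: each year buyers bid a multiple of their
    income, and the price index adjusts toward what the marginal buyer
    can pay. All numbers are invented, not calibrated to UK data."""
    price = 100.0
    history = []
    for _ in range(n_years):
        incomes = rng.lognormal(mean=3.4, sigma=0.5, size=n_buyers)
        # Exuberant buyers stretch to 5x income, cautious ones to 3x.
        multiples = rng.choice([3.0, 5.0], size=n_buyers)
        if lti_cap is not None:
            # Policy: at most 15% of loans may exceed the LTI cap.
            over = np.flatnonzero(multiples > lti_cap)
            allowed = int(0.15 * n_buyers)
            multiples[over[allowed:]] = lti_cap   # cap the rest
        bids = multiples * incomes
        target = np.percentile(bids, 90)   # marginal buyer sets the price
        price += 0.5 * (target - price)    # sluggish adjustment
        history.append(price)
    return np.array(history)

print(f"final price index without the cap: {simulate_prices()[-1]:.0f}")
print(f"final price index with a 3.5 cap:  {simulate_prices(3.5)[-1]:.0f}")
```

Even in this crude sketch, capping the share of high loan-to-income mortgages lowers the bid distribution at the top and damps the price level, which is the mechanism the real policy relies on.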
We are now working on understanding regional differences in housing prices. Can we understand quantitatively why prices in London are so much higher than in other parts of the UK? Can we make a map of how housing prices change around the UK, and what causes this, so that policymakers can think about the consequences of, for example, putting in a new rail line? Housing prices play an important role in the economy. Should we be encouraging new housing developments, and if so, where, and at which part of the housing spectrum? We can simulate housing markets to help regulators answer these kinds of questions.
We've also been thinking about the insurance business. There is a new directive called Solvency II, which is a regulation stating how much capital insurance companies need to hold in reserve and what kinds of models they're allowed to use to estimate their risk. We have concerns that the restrictions on the models they're allowed to use may be dangerous because they force all the insurance companies to do more or less the same thing, to act as a herd, making the system fragile. Currently seventy-five percent of catastrophe insurance is based on the same model. A better understanding of the benefits of model diversity could allow us to improve on Solvency II, and might prevent a collapse of the insurance business.
We've also been looking at technological change. We gather data on the cost and performance of technologies through time to better understand how technologies improve. The answers are surprising. Of the 200 or so technologies we've looked at, about fifty obey a version of Moore's law: their cost drops exponentially in time, at different rates for different technologies. As we all know, computer prices drop really fast; the cost of transistors drops at forty percent per year, as does that of many other computer components. The price of gene sequencing has dropped even faster. Other technologies, like solar photovoltaics, have been dropping at a rate of ten percent per year. Since the first photocell went into the Vanguard satellite in 1958, the cost of solar cells has dropped by more than a factor of 3,000.
In contrast, if you look at the price of coal, it fluctuates, but over the long term, once you adjust for inflation, it's been roughly constant for 150 years. The price of oil has been roughly constant for more than 100 years. It fluctuates up and down—we've seen it go from more than $100 a barrel down to $20 or $30—but there's no overall trend like there is for technologies such as solar cells that improve over time.
We've been trying to understand why some technologies improve so much faster than others. We've also been trying to better understand the patterns in technological improvement and how we can use them. In particular, Francois Lafond and I have developed a method for forecasting future technology prices. It's very simple: we just model Moore's law as a random walk with drift. Our model is probabilistic. We don't make misleading statements like, "Solar energy will be a factor of four cheaper by 2030." We make probabilistic statements like, "Most likely, the cost of solar energy will drop by a factor of four by 2030, but there's a five percent chance it won't drop at all." We've used our collection of historical records of technological change to test the accuracy of our model, and it does very well. We make forecasts, test them, and compare the accuracy to that predicted by the model. We don't just make a forecast, we say how good that forecast is, what the range of possibilities is, and what their probabilities are.
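In Python, the basic version of that forecasting recipe fits in a few lines. (The published method also accounts for autocorrelated noise, which I omit here, and the price series below is synthetic, standing in for a real technology cost history.)

```python
import numpy as np

rng = np.random.default_rng(4)

def forecast_cost(log_costs, horizon, n_samples=10_000):
    """Treat the log of a technology's cost as a random walk with drift:
    estimate the drift and volatility from the historical series, then
    return Monte Carlo samples of the cost `horizon` years ahead."""
    steps = np.diff(log_costs)
    mu, sigma = steps.mean(), steps.std(ddof=1)
    shocks = rng.normal(mu, sigma, size=(n_samples, horizon)).sum(axis=1)
    return np.exp(log_costs[-1] + shocks)

# Synthetic history: a cost falling ~10% per year with noise, a stand-in
# for a real series such as solar PV module prices.
log_costs = 5.0 + np.cumsum(rng.normal(-0.10, 0.15, size=40))

samples = forecast_cost(log_costs, horizon=12)
current = np.exp(log_costs[-1])
print(f"median cost drop over 12 years: {np.median(current / samples):.1f}x")
print(f"probability of no drop at all:  {np.mean(samples >= current):.1%}")
```

Because the output is a distribution rather than a point forecast, you can read off exactly the kind of statement quoted above: the most likely cost drop, together with the probability that costs don't fall at all.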
We've been using this to think about investment in climate change. How much do we have to invest, and in which technologies, to get to zero-carbon energy as soon as possible? The answers we're finding look good. Unless we are unlucky, our results indicate that we're going to get there quickly: solar energy will probably contribute at least twenty or thirty percent of our energy within about ten years. Again, there are big error bars around these numbers, but this is the most likely outcome.
To evaluate the real cost of something, it is important to look not just at the cost now, but also at the costs and benefits in the future. This is done by discounting expected future costs and benefits back to the present and summing them, which gives what is called a net present value. What is the real cost of making the energy transition needed to address climate change? Work by Rupert Way and me suggests that because energy is going to get cheaper than it is now, thanks to solar and wind, the net present cost of the transition is actually negative, meaning we're going to benefit. The transition is not a net cost, it's a net gain. We win.
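In symbols, with costs $C_t$ and benefits $B_t$ in year $t$ and a discount rate $r$ (notation mine, not from the paper), the net present cost of the transition is

```latex
\text{net present cost} \;=\; \sum_{t=0}^{T} \frac{C_t - B_t}{(1+r)^{t}}
```

A negative value means the discounted benefits outweigh the discounted costs, which is the sense in which we win.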
I get pursued by hedge funds that want me to consult for them. I've done some consulting, for example, for a hedge fund that was thinking about technology investments, and we advised them about the likely rate of progress of a technology like robotics versus a technology like solar cells. But making money is not my primary interest now.
I left Prediction Company after being there for eight years. We were finally doing well, making steady profits, and I was at a fork in the road. My intention had been to stay for five years. I'd already made more money than I had intended to make. I thought, how do I want to feel on my deathbed? Do I want to say I made a lot of money, or do I want to say I got to pursue my love of science and I maybe did something good for the world? I chose the latter course.
I've been spending the last ten years using the expertise I gained from predicting roulette and predicting the stock market and trying to apply it to do something useful for humanity. Unfortunately, I have to say, it's much easier to get funded to beat the stock market than it is to help the world.
Because of that, I am thinking about starting another company. I'm being driven to do that for two reasons. One is that it may be easier to fund the things I want to do via private enterprise. The models I want to build require a lot of resources. I want to build a system that can make better predictions about the economy. I want to build something that all the central banks will want to use because it will give them a superior view into the future. To use a caricature of brain anatomy, at Prediction Company we built a market cerebellum. We looked at past data and searched for consistent patterns and made bets on them. We had no idea why those patterns happened. But what I want to do now is build a cerebrum. I want to build a system that decomposes the market into its parts, that models the causal mechanisms, and that allows us to think about those important "what if" questions: What if we change the rules? Can we find better outcomes? We've been chipping away at these problems in my group at Oxford.
I love being an academic. It's wonderful because I get to work with the smartest people in the world. But it's hard to mount a focused effort. To do what I want to do now, I feel like I need a team. I need to do what we did at Prediction Company: we hired thirty people, put them all in the same building, and said, "This is the problem. We're going to work on it full time until we crack it."
I've decided to start another company. This time, the company is going to be much more public-focused. One of the problems with Prediction Company was that we had to keep what we were doing a secret, out of respect for our investors. My new company is going to be much more open. Of course we will try to make money, since that's what investors need to be happy, but we will also work to create an open-source toolkit that everybody can use to make better models of the economy. We'll sell some key add-ons to make a profit, but I want the open-source toolkit at the center, so that we can lift the whole technology of economic prediction up from where it is now.
~ ~ ~ ~
I came to Oxford for several reasons. I got offered a job, it seemed like a good opportunity, and I get to do what I want to do. I like working with smart people, and I wanted to be in Europe. I aspire to be a European Union citizen. As it works out, I will probably get my British passport about two or three months before Britain leaves the European Union, which is a disappointment. Fotini Markopoulou, my wife, is Greek, and she wanted to be back in Europe. It was an adventure to come here. The complexity economics community that I'm a part of is much more centered in Europe than in America. The American economics establishment is very conformist; the European establishment is a bit more open.
The big picture in the present world has me very worried. Populism seems to be taking over, and this is at least in part caused by the wrong kind of economics. We need new ideas about economics because we need to change the way we run our economies. Eric Beinhocker, Fotini Markopoulou, Steen Rasmussen, and I are running a workshop at the Santa Fe Institute to think about how we can create a new set of economic principles and connect them to politics. We want to do what Friedman and Hayek and others did when they developed neoliberalism: they developed a new set of economic ideas and convinced politicians like Thatcher and Reagan to follow them.
We don't agree with neoliberalism—we think we need new ideas and a better intellectual basis. We need a new kind of economics, one that fuses ideas from political science, sociology, and elsewhere to give a more coherent scheme for how we go forward in the world, one that can dramatically accelerate our collective awareness at level three.