SMART HEURISTICS

Gerd Gigerenzer [3.29.03]

What interests me is the question of how humans learn to live with uncertainty. Before the scientific revolution determinism was a strong ideal. Religion brought about a denial of uncertainty, and many people knew that their kin or their race was exactly the one that God had favored. They also thought they were entitled to get rid of competing ideas and the people that propagated them. How does a society change from this condition into one in which we understand that there is this fundamental uncertainty? How do we avoid the illusion of certainty to produce the understanding that everything, whether it be a medical test or deciding on the best cure for a particular kind of cancer, has a fundamental element of uncertainty?

Introduction by John Brockman

"Isn’t more information always better?" asks Gerd Gigerenzer. "Why else would bestsellers on how to make good decisions tell us to consider all pieces of information, weigh them carefully, and compute the optimal choice, preferably with the aid of a fancy statistical software package? In economics, Nobel prizes are regularly awarded for work that assumes that people make decisions as if they had perfect information and could compute the optimal solution for the problem at hand. But how do real people make good decisions under the usual conditions of little time and scarce information? Consider how players catch a ball—in baseball, cricket, or soccer. It may seem that they would have to solve complex differential equations in their heads to predict the trajectory of the ball. In fact, players use a simple heuristic. When a ball comes in high, the player fixates the ball and starts running. The heuristic is to adjust the running speed so that the angle of gaze remains constant —that is, the angle between the eye and the ball. The player can ignore all the information necessary to compute the trajectory, such as the ball’s initial velocity, distance, and angle, and just focus on one piece of information, the angle of gaze."

Gigerenzer provides an alternative to the view of the mind as a cognitive optimizer, and also to its mirror image, the mind as a cognitive miser. The fact that people ignore information has often been mistaken for a form of irrationality, and shelves are filled with books that explain how people routinely commit cognitive fallacies. In seven years of research, he and his research team at the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin have worked out what he believes is a viable alternative: the study of fast and frugal decision-making, that is, the study of the smart heuristics people actually use to make good decisions. In order to make good decisions in an uncertain world, one sometimes has to ignore information. The art is knowing what one doesn't have to know.

Gigerenzer's work is of importance to people interested in how the human mind actually solves problems. In this regard his work has been influential among psychologists, economists, philosophers, and animal biologists, among others. It is also of interest to people who design smart systems to solve problems; he provides illustrations of how one can construct fast and frugal strategies for coronary care unit decisions, personnel selection, and stock picking.

"My work will, I hope, change the way people think about human rationality", he says. "Human rationality cannot be understood, I argue, by the ideals of omniscience and optimization. In an uncertain world, there is no optimal solution known for most interesting and urgent problems. When human behavior fails to meet these Olympian expectations, many psychologists conclude that the mind is doomed to irrationality. These are the two dominant views today, and neither extreme of hyper-rationality or irrationality captures the essence of human reasoning. My aim is not so much to criticize the status quo, but rather to provide a viable alternative."

— JB

GERD GIGERENZER is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences. He is the author of Calculated Risks: How To Know When Numbers Deceive You, the German translation of which won the Scientific Book of the Year Prize in 2002. He has also published two academic books on heuristics, Simple Heuristics That Make Us Smart (with Peter Todd & The ABC Research Group) and Bounded Rationality: The Adaptive Toolbox (with Reinhard Selten, a Nobel laureate in economics).

Gerd Gigerenzer's Edge Bio Page


SMART HEURISTICS

[Gerd Gigerenzer:] At the beginning of the 20th century the father of modern science fiction, Herbert George Wells, said in his writings on politics, "If we want to have an educated citizenship in a modern technological society, we need to teach them three things: reading, writing, and statistical thinking." At the beginning of the 21st century, how far have we gotten with this program? In our society, we teach most citizens reading and writing from the time they are children, but not statistical thinking. John Allen Paulos has called this phenomenon innumeracy.

There are many stories documenting this problem. For instance, there was the weather forecaster who announced on American TV that if the probability that it will rain on Saturday is 50 percent and the probability that it will rain on Sunday is 50 percent, the probability that it will rain over the weekend is 100 percent. In another recent case reported by New Scientist, an inspector from the Food and Drug Administration visited a restaurant in Salt Lake City famous for its quiches made from four fresh eggs. She told the owner that according to FDA research every fourth egg has salmonella bacteria, so the restaurant should only use three eggs in a quiche. We can laugh about these examples because we easily understand the mistakes involved, but there are more serious issues. When it comes to medical and legal issues, we need exactly the kind of education that H. G. Wells was asking for, and we haven't gotten it.
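
To make the forecaster's mistake explicit, here is a minimal sketch; it assumes, purely for illustration, that rain on Saturday and rain on Sunday are independent events.

```python
# The forecaster's error spelled out: probabilities of two events that can
# both happen do not simply add. Assuming (for illustration) independence:
p_sat = p_sun = 0.5

p_weekend = 1 - (1 - p_sat) * (1 - p_sun)   # rain on at least one of the two days
print(p_weekend)                            # 0.75 -- not 1.0
```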

What interests me is the question of how humans learn to live with uncertainty. Before the scientific revolution determinism was a strong ideal. Religion brought about a denial of uncertainty, and many people knew that their kin or their race was exactly the one that God had favored. They also thought they were entitled to get rid of competing ideas and the people that propagated them. How does a society change from this condition into one in which we understand that there is this fundamental uncertainty? How do we avoid the illusion of certainty to produce the understanding that everything, whether it be a medical test or deciding on the best cure for a particular kind of cancer, has a fundamental element of uncertainty?

For instance, I've worked with physicians and physician-patient associations to try to teach the acceptance of uncertainty and reasonable ways of dealing with it. Take HIV testing as an example. Brochures published by the Illinois Department of Health say that testing positive for HIV means that you have the virus. Thus, if you are an average person outside any particular risk group and you test positive for HIV, this might lead you to choose to commit suicide, or move to California, or do something else quite drastic. But AIDS information in many countries runs on the illusion of certainty. The actual situation is rather like this: If you have about 10,000 people who are in no risk group, one of them will have the virus, and will test positive with practical certainty. Among the other 9,999, another one will test positive, but it's a false positive. In this case we have two who test positive, although only one of them actually has the virus. Knowing about these very simple things can prevent serious disasters, of which there is unfortunately a record.
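
Spelled out as a minimal sketch, the arithmetic simply restates the counts in the paragraph above:

```python
# Natural-frequency reading of the HIV example: think of 10,000 low-risk
# people rather than probabilities. The counts restate the text above.
people          = 10_000
true_positives  = 1    # the one person who has the virus tests positive
false_positives = 1    # roughly one of the other 9,999 also tests positive

print(true_positives / (true_positives + false_positives))   # 0.5 -- far from certainty
```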

Still, medical societies, individual doctors, and individual patients either produce the illusion of certainty or want it. Everyone knows Benjamin Franklin's adage that there is nothing certain in this world except death and taxes, but the doctors I interviewed tell me something different. They say, "If I told my patients what we don't know, they would get very nervous, so it's better not to tell them." Thus, this is one important area in which we need to help people — including individual doctors and lawyers in court — become mature citizens who can understand and communicate risks.

Representation of information is important. In the case of many so-called cognitive illusions, the problem results from the difficulty people have in reasoning with probabilities. The problem largely disappears the moment you give the person the information in natural frequencies. You basically put the mind back in a situation where it's much easier to understand these probabilities. We can prove that natural frequencies facilitate the actual computations, and we have known for a long time that representations — whether they be probabilities, frequencies, or odds — have an impact on the human mind. There are very few theories, however, about how this works.

I'll give you a couple of examples relating to medical care. In the U.S. and many European countries, women who are 40 years old are told to participate in mammography screening. Say that a woman takes her first mammogram and it comes out positive. She might ask the physician, "What does that mean? Do I have breast cancer? Or are my chances of having it 99%, 95%, or 90%, or only 50%? What do we know at this point?" I have put the same question to radiologists who have done mammography screening for 20 or 25 years, including chiefs of departments. A third said they would tell this woman that, given a positive mammogram, her chance of having breast cancer is 90%.

However, what happens when they get additional relevant information? The chance that a woman in this age group has cancer is roughly 1%. If a woman has breast cancer, the probability that she will test positive on a mammogram is 90%. If a woman does not have breast cancer, the probability that she nevertheless tests positive is some 9%. In technical terms you have a base rate of 1%, a sensitivity or hit rate of 90%, and a false positive rate of about 9%. So, how do you answer this woman who's just tested positive? As I just said, about a third of the physicians think it's 90%, another third think the answer should be something between 50% and 80%, and another third think the answer is between 1% and 10%. Again, these are professionals with many years of experience. It's hard to imagine a larger variability in physicians' judgments — between 1% and 90% — and if patients knew about this variability, they would not be very happy. This situation is typical of what we know from laboratory experiments: namely, that when people encounter probabilities — which are technically conditional probabilities — their minds are clouded when they try to make an inference.

What we do is to teach these physicians tools that change the representation so that they can see through the problem. We don't send them to a statistics course, since they wouldn't have the time to go in the first place, and most likely they wouldn't understand it because they would be taught probabilities again. But how can we help them to understand the situation?

Let's change the representation using natural frequencies, as if the physician had observed these patients him- or herself. One can communicate the same information in the following, much simpler way. Think of 100 women. One of them has breast cancer; this was the 1%. She will likely test positive; that's the 90%. Of the 99 who do not have breast cancer, another 9 or so will also test positive. So we have about 10 women who test positive. How many of them actually have cancer? One out of ten. That's not 90%, that's not 50%; it's one out of ten.
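
As an illustration, here is a minimal sketch that computes the answer both ways, once from the conditional probabilities and once from the natural-frequency count just described, using the numbers from the text.

```python
# Mammography example with the figures given above.
base_rate   = 0.01   # P(cancer) in this age group
sensitivity = 0.90   # P(positive | cancer)
false_pos   = 0.09   # P(positive | no cancer)

# 1) Conditional-probability route (Bayes' rule), the format that clouds minds:
p_positive         = base_rate * sensitivity + (1 - base_rate) * false_pos
p_cancer_given_pos = base_rate * sensitivity / p_positive

# 2) Natural-frequency route, the format physicians can see through:
women        = 100
with_cancer  = 1                                          # the 1% base rate
false_alarms = round((women - with_cancer) * false_pos)   # about 9 of the other 99
positives    = with_cancer + false_alarms                 # about 10 women test positive

print(round(p_cancer_given_pos, 2))   # ~0.09 via Bayes' rule
print(with_cancer / positives)        # 0.1 -- one out of ten, as in the text
```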

Here we have a method that enables physicians to see through the fog just by changing the representation, turning their innumeracy into insight. Many of these physicians have carried this innumeracy around for decades and have tried to hide it. When we interview them, they readily admit it, saying, "I don't know what to do with these numbers. I always confuse these things." Here we have a chance to use very simple tools to help patients and physicians understand what the risks are and to react to them reasonably. If a patient takes a positive test to mean that there is a 90% chance she has cancer, you can imagine what emotions set in, emotions that do not help her to reason the right way. But informing her that only one out of ten women who test positive actually has cancer would help her to keep a cooler attitude and to make more reasonable decisions.

Prostate cancer is another disease for which we have good data. In the U.S. and European countries doctors advise men aged 40 to 50 to take a PSA test. This is a prostate cancer test that is very simple, requiring just a bit of blood, and so many people do it. The interesting thing is that most of the men I've talked to have no idea of the benefits and costs of this test. It's an example of decision-making based on trusting your doctor or on rumors. But interestingly, if you read about the test on the Internet from independent medical organizations such as the Cochrane Collaboration, or read the reports of the various physicians' agencies that give recommendations for screening, then you find out that the benefits and costs of prostate cancer screening are roughly the following: Mortality reduction is the usual goal of medical testing, yet there's no proof that prostate cancer screening reduces mortality. On the other hand there is proof that, whether we consider people who do not have prostate cancer or those who do, there is a good likelihood that it will do harm. The test produces a number of false positives. If you take it often enough there's a good chance of getting a high level on the test, a so-called positive result, even though you don't have cancer. It's like a car alarm that goes off all the time.

For those who actually have cancer, surgery can result in incontinence or impotence, which are serious consequences that stay with you for the rest of your life. For that reason, the U.S. Preventive Services Task Force says very clearly in a report that men should not participate in PSA screening, because there is no proof of mortality reduction, only likely harm.

It is very puzzling that in a country where a 12-year-old knows baseball statistics, adults don't know the simplest statistics about tests, diseases, and consequences that may cause them serious damage. Why is this? One reason, of course, is that the cost-benefit computations for doctors are not the same as for patients. One cannot simply accuse doctors of not knowing these things or of not caring about patients, but a doctor has to face the possibility that if he or she doesn't advise someone to participate in the PSA test and that person gets prostate cancer, then the patient may turn up at his doorstep with a lawyer. The second thing is that doctors are members of a community with professional pride, and for many of them not detecting a cancer is something they don't want to have on their records. Third, there are groups of doctors who have very clear financial incentives to perform certain procedures. A good doctor would explain this situation to the patient but leave the decision to the patient. Many patients don't see the situation in which doctors find themselves, and most doctors will simply recommend the test.

But who knows? Autopsy studies show that one out of three or one out of four men who die a natural death have prostate cancer. Everyone has some cancer cells. If everyone underwent PSA testing and had these cancers detected and treated, then these poor guys would spend the last years or decades of their lives living with severe bodily injury. These are very simple facts.

Thus, dealing with probabilities also relates to the issue of understanding the psychology of how we make rational decisions. According to decision theory, rational decisions are made according to the so-called expected utility calculus, or some variant thereof. In economics, for instance, the idea is that if you make an important decision — whom to marry or what stock to buy, for example — you look at all the consequences of each option, attach a probability and a value to each consequence, multiply and sum them up, and choose the option with the highest expected value or expected utility. This theory, which is very widespread, maintains that people behave in this way when they make their decisions. The problem is that we know from experimental studies that people don't behave this way.
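
For concreteness, here is a minimal sketch of the bookkeeping the expected-utility calculus demands; the options, outcomes, probabilities, and utilities are invented purely for illustration.

```python
# "Maximize expected utility" in miniature: weight each outcome's utility by
# its probability, sum per option, and pick the option with the largest sum.
options = {
    "stay at current job":  [(0.7, 60), (0.3, 40)],   # (probability, utility) pairs
    "accept the new offer": [(0.5, 90), (0.5, 20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(options, key=lambda name: expected_utility(options[name]))
print({name: expected_utility(o) for name, o in options.items()}, "->", best)
# {'stay at current job': 54.0, 'accept the new offer': 55.0} -> accept the new offer
```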

There is a nice story that illustrates the whole conflict: A famous decision theorist who once taught at Columbia got an offer from a rival university and was struggling with the question of whether to stay where he was or accept the new post. His friend, a philosopher, took him aside and said, "What's the problem? Just do what you write about and what you teach your students. Maximize your expected utility." The decision theorist, exasperated, responded, "Come on, get serious!"

Decisions can often be modeled by what I call fast and frugal heuristics. Sometimes they're faster, and sometimes they're more frugal. Deciding which of two jobs to take, for instance, may involve consequences that are incommensurable from the point of view of the person making the decision. The new job may give you more money and prestige, but it might leave your children in tears, since they don't want to move for fear of losing their friends. Some economists may believe that you can bring everything down to a single common denominator, but many people can't do this. A person could end up making a decision for one dominant reason.

We make decisions based on a bounded rationality, not the unbounded rationality of a decision maker modeled after an omniscient god. But bounded rationality is also not of one kind. There is a group of economists, for example, who look at the bounds or constraints in the environment that affect how a decision is made. This approach is called "optimization under constraints," and many Nobel prizes have been awarded in this area. Using the concept of bounded rationality from this perspective, you realize that an organism has neither unlimited resources nor unlimited time. So one asks: given these constraints, what's the optimal solution?

There's a second group, which doesn't look at bounds in the environment but at bounds in the mind. These include many psychologists and behavioral economists who find that people often take in only limited information, and sometimes make decisions based on just one or two criteria. But these colleagues don't analyze the environmental influences on the task. They think that for a priori reasons people make bad choices because of a bias, an error, or a fallacy. They look at constraints in the mind.

Neither of these concepts takes advantage of what the human mind takes advantage of: that the bounds in the mind are not unrelated to the bounds in the environment. The two kinds of bounds fit together. Herbert Simon developed a wonderful analogy based on a pair of scissors, where one blade is cognition and the other is the structure of the environment, or the task. You only understand how human behavior functions if you look at both blades.

Evolutionary thinking gives us a useful framework for asking some interesting questions that are not often posed. For instance, when I look at a certain heuristic — like when people make a decision based on one good reason while ignoring all others — I must ask in what environmental structures that heuristic works, and where it does not work. This is a question about ecological rationality, about the adaptation of heuristics, and it is very different from what we see in the study of cognitive illusions in social psychology and of judgment and decision making, where any kind of behavior that suggests that people ignore information, or use just one or two pieces of information, is coded as a bias. That approach is non-ecological; that is, it doesn't relate the mind to its environment.
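
To make "deciding on one good reason" concrete, here is a minimal sketch of a lexicographic rule in the spirit of the one-reason heuristics the ABC group has studied (such as Take The Best); the cue order and cue values are invented for illustration.

```python
# One-good-reason comparison: go through cues in order of validity and stop
# at the first cue that discriminates between the two options.
def one_good_reason(option_a, option_b, cues):
    """Return 'A' or 'B' based on the first discriminating cue, else None."""
    for cue in cues:                      # cues ordered from most to least valid
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return "A" if a > b else "B"  # decide on this single reason
    return None                           # no cue discriminates: guess

city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
city_b = {"has_airport": 1, "is_capital": 1, "has_university": 0}
print(one_good_reason(city_a, city_b, ["has_airport", "is_capital", "has_university"]))
# -> 'B': the first discriminating cue (is_capital) settles the inference;
#    the remaining cue is never looked up.
```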

An important future direction in cognitive science is to understand that human minds are embedded in an environment. This is not the usual way that many psychologists, and of course many economists, think about it. There are many psychological theories about what's in the mind, and there may be all kinds of computations and motives in the mind, but there's very little ecological thinking about what certain cognitive strategies or emotions do for us, and what problems they solve. One of the visions I have is to understand not only how cognitive heuristics work, and in which environments it is smart to use them, but also what role emotions play in our judgment. We have gone through a kind of liberation in recent years. There are many books, by Antonio Damasio and others, that make the general claim that emotions are important for cognitive functions, and are not just there to interrupt, distract, or mislead you. Emotions can actually do certain things that cognitive strategies can't do, but we have very little understanding of exactly how that works.

To give a simple example, imagine Homo economicus in mate search, trying to find a woman to marry. According to standard theory, Homo economicus would have to find out all the possible options and all the possible consequences of marrying each one of them. He would also look at the probabilities of the various consequences of marrying each of them — whether the woman would still talk to him after they're married, whether she'd take care of their children, whatever is important to him — and at the utilities of each of these. Homo economicus would have to do tons of research to avoid just coming up with subjective probabilities, and after many years of research he'd probably find out that his final choice had already married another person, one who didn't do these computations but simply fell in love with her.

Herbert Simon's idea of satisficing solves that problem. A satisficer, searching for a mate, would have an aspiration level. Once this aspiration level is met, as long as it is not set too high, he will find a partner and the problem is solved. But satisficing is also a purely cognitive mechanism. After you make your choice you might see someone come around the corner who looks better, and there's nothing to prevent you from dropping your wife or your husband and going off with the next one.
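
A minimal sketch of the satisficing rule as described here, with invented candidate scores and an invented aspiration level:

```python
# Satisficing: inspect alternatives one at a time and stop at the first one
# that meets the aspiration level, without comparing all options.
def satisfice(candidates, aspiration):
    for candidate in candidates:      # sequential search, no look-ahead
        if candidate >= aspiration:
            return candidate          # good enough: stop searching
    return None                       # aspiration level never met

scores = [0.61, 0.78, 0.92, 0.85, 0.97]    # hypothetical quality of options, met in order
print(satisfice(scores, aspiration=0.90))  # 0.92 -- the third option ends the search,
                                           # even though a better one comes along later
```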

Here we see one function of emotions. Love, whether it be romantic love or love for our children, helps most of us create the commitment necessary to stay with and take care of our spouses and families. Emotions can perform functions that are similar to those performed by the cognitive building blocks of heuristics. Disgust, for example, keeps you from eating lots of things and makes food choice much simpler, and other emotions do similar things. Still, we have very little understanding of how decision theory links with the theory of emotion, and of how to develop a good vocabulary of the building blocks necessary for making decisions. This is one important direction for future investigation.

Another simple example of how heuristics are useful can be seen in the following thought experiment: Assume you want to study how players catch balls that come in from a high angle — as in baseball, cricket, or soccer — because you want to build a robot that can catch them. The traditional approach, which is much like optimization under constraints, would be to try to give your robot a complete representation of its environment and the most expensive computation machinery you can afford. You might feed your robot a family of parabolas, because thrown balls have roughly parabolic trajectories, with the idea that the robot needs to find the right parabola in order to catch the ball. Or you equip it with instruments that can measure the initial distance, the initial velocity, and the initial angle at which the ball was thrown or kicked. You're still not done, because in the real world balls do not fly in perfect parabolas, so you need instruments that can measure the direction and speed of the wind at each point of the ball's flight, as well as the ball's spin, in order to calculate the final trajectory. It's a very hard problem, but this is one way to look at it.

A very different way to approach this is to ask whether there is a heuristic that a player could actually use to solve this problem without making any of these calculations, or only very few. Experimental studies have shown that actual players use a quite simple heuristic that I call the gaze heuristic. When a ball comes in high, a player starts running and fixates his eyes on the ball. The heuristic is that you adjust your running speed so that the angle of gaze, the angle between the eye and the ball, remains constant. If you keep the angle constant, the ball will come down to you, and it will catch you, or at least it will hit you. This heuristic pays attention to only one variable, the angle of gaze, and can ignore all the other causally relevant variables, yet it achieves the same goal much faster, more frugally, and with less chance of error.
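
Here is a toy simulation of the gaze heuristic under simplifying assumptions (a drag-free parabolic ball, an assumed starting position and running speed); it is a sketch of the idea, not a model of real fielders.

```python
import math

# Gaze heuristic, toy version: once the ball is high in the air, fixate it
# and adjust your running speed so that the angle of gaze stays constant.
G, DT = 9.81, 0.01                      # gravity (m/s^2), time step (s)
V0, LAUNCH_DEG = 30.0, 45.0             # assumed launch speed and angle
VX = V0 * math.cos(math.radians(LAUNCH_DEG))
VY = V0 * math.sin(math.radians(LAUNCH_DEG))

def ball_at(t):
    """Ball (x, height) on an idealised drag-free trajectory."""
    return VX * t, VY * t - 0.5 * G * t * t

player_x  = 93.0     # fielder's starting position (m), assumed
max_speed = 9.0      # fielder's top running speed (m/s), assumed
t = 3.0              # the ball "comes in high": tracking starts here

bx, bh = ball_at(t)
gaze_ref = math.atan2(bh, player_x - bx)      # fixate the ball: reference gaze angle

while True:
    t += DT
    bx, bh = ball_at(t)
    if bh <= 0:                               # the ball has come down
        break
    # Position that would keep the gaze angle at its reference value ...
    target_x = bx + bh / math.tan(gaze_ref)
    # ... approached as fast as the player's legs allow.
    step = max(-max_speed * DT, min(max_speed * DT, target_x - player_x))
    player_x += step

print(f"ball lands at {bx:.1f} m, player ends at {player_x:.1f} m")
# In this toy setup the player ends up within about a metre of the landing
# point without ever computing the trajectory, the wind, or the spin.
```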

This illustrates that we can do this science by always looking at what the mind does — the heuristics and the structures of environments — and at how minds change the structures of environments. In this case the relationship between the ball and oneself is turned into a simple linear relationship on which the player acts. This is an example of a smart heuristic, which is part of the adaptive toolbox that has evolved in humans. Many of these heuristics are also present in animals. For instance, a recent study showed that when dogs catch frisbees they use the same gaze heuristic.

Heuristics are also useful in very important practical ways relating to economics. To illustrate, I'll give you a short story about our research on a heuristic concerning the stock market. One very smart and simple heuristic is called the recognition heuristic. Here is a demonstration: Which of the following two cities has more inhabitants — Hanover or Bielefeld? I pick these two German cities assuming that you don't know very much about Germany. Most people will think it's Hanover because they have never heard of Bielefeld, and they're right. However, if I pose the same question to Germans, they are unsure and don't know which to choose. They've heard of both of them and try to recall information. The same thing can be done in reverse. We have done studies with Daniel Gray Goldstein in which we ask Americans which city has more inhabitants — San Diego or San Antonio? About two-thirds of my former undergraduates at the University of Chicago got the right answer: San Diego. Then we asked German students — who know much less about San Diego and many of whom had never even heard of San Antonio — the same question. What proportion of the German students do you think got the answer right? In our study, a hundred percent. They hadn't heard of San Antonio, so they picked San Diego. This is an interesting case of a smart heuristic, where people with less knowledge can do better than people with more. The reason this works is that in the real world there is a correlation between name recognition and things like population size. You have heard of a city because there is something happening there. It is not a certain indicator, but it is a good cue.
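
A minimal sketch of the recognition heuristic for such paired comparisons; the set of recognized cities below stands in for one hypothetical person's knowledge.

```python
# Recognition heuristic: if exactly one of two objects is recognized, infer
# that the recognized one has the larger value (here, more inhabitants).
def recognition_heuristic(a, b, recognized):
    if (a in recognized) != (b in recognized):       # exactly one is recognized
        return a if a in recognized else b
    return None   # both or neither recognized: the heuristic does not apply

american_pedestrian = {"San Diego", "Hanover", "Munich"}   # hypothetical knowledge
print(recognition_heuristic("San Diego", "San Antonio", american_pedestrian))  # San Diego
print(recognition_heuristic("Hanover", "Bielefeld", american_pedestrian))      # Hanover
```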

In my group at the Max Planck Institute for Human Development I work alongside a spectrum of researchers, several of whom are economists, who work on the same topics but ask a different kind of question. They say, "It's all very well that you can demonstrate that you can get away with less knowledge, but can the recognition heuristic make money?" In order to answer this question we did a large study with the American and German stock markets, involving both lay people and students of business and finance in both countries. We went to downtown Chicago and interviewed several hundred pedestrians. We gave them a list of stocks and asked them one question: Have you ever heard of this stock? Yes or no? Then we took the ten percent of the stocks that had the highest recognition, which were all stocks in the Standard & Poor's Index, put them in a portfolio, and let it run for half a year. As a control, we did the same thing with the same American pedestrians and German stocks. In this case they had heard of very few of them. As a third control we had German pedestrians in downtown Munich perform the same recognition ratings with German and American stocks. The question in this experiment is not how much money the portfolio makes, but whether it makes more money than certain benchmarks, of which we had four. One consisted of randomly picked stocks, which is a tough standard. A second contained the least-recognized stocks, which according to the theory is an important benchmark and shouldn't do as well. In the third we had blue chip funds, like Fidelity II. And in the last we had the market — the Dow and its German equivalent. We let this run for six months, and after six months the portfolios containing the stocks most highly recognized by ordinary people outperformed the randomly picked stocks, the least-recognized stocks, and, in six out of eight cases, the market and the mutual funds.
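
A minimal sketch of the portfolio construction described above; the company names and recognition counts are invented for illustration.

```python
# Recognition-based portfolio: count how many respondents recognize each
# company name, then hold the top 10% most-recognized stocks.
recognition_counts = {
    "CompanyA": 480, "CompanyB": 455, "CompanyC": 310, "CompanyD": 220,
    "CompanyE": 180, "CompanyF": 95,  "CompanyG": 60,  "CompanyH": 22,
    "CompanyI": 11,  "CompanyJ": 3,
}

top_decile = max(1, len(recognition_counts) // 10)
portfolio = sorted(recognition_counts, key=recognition_counts.get, reverse=True)[:top_decile]
print(portfolio)   # with ten invented stocks, the top decile is the single
                   # most-recognized name; the study then tracks its return
```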

Although this was an interesting study, one should of course be cautious, because unlike in other experimental and real-world studies, we are dealing with a variable and very noisy environment. But what this study at least showed is that the recognition of ordinary citizens can actually beat the performance of the market and other important benchmarks. The empirical background, of course, is consumer behavior. In many situations, when people in a supermarket choose between products they go with the brand whose name they recognize. Advertising by companies like Benetton exploits the recognition heuristic. It gives us no information about the product but only increases name recognition, and it has been a very successful strategy for the firm.

Of course the reaction to this study, which is published in our book Simple Heuristics That Make Us Smart, has split the experts into two camps. One group said it can't be true, that it's all wrong, or that it could never be replicated. Among them were financial advisers, who certainly didn't like the results. Another group of people said, "This is no surprise. I knew it all along. The stock market's all rumor, recognition, and psychology." Meanwhile, we have replicated these studies several times and found the same advantage of recognition — in bull and bear markets — and also found that the recognition of those who knew less did best of all in our studies.

I would like to share these ideas with many others, and to use psychological research, and what we know about facilitating people's understanding of uncertainty, to help promote this old dream of an educated citizenry that can deal with uncertainties rather than deny their existence. Understanding the mind as a tool that tries to live in an uncertain world is an important challenge.