SMART HEURISTICS: GERD GIGERENZER [3.31.03]

Introduction

"Isn't
more information always better?" asks Gerd Gigerenzer. "Why
else would bestsellers on how to make good decisions tell us
to
consider all pieces of information, weigh them carefully, and
compute the optimal choice, preferably with the aid of a fancy
statistical software package? In economics, Nobel prizes are
regularly awarded for work that assumes that people make decisions
as if they had perfect information and could compute the optimal
solution for the problem at hand. But how do real people make
good decisions under the
usual conditions of little time and scarce information? Consider
how players catch a ball in baseball, cricket, or soccer.
It may seem that they would have to solve complex differential
equations in their heads to predict the trajectory of the ball.
In fact, players use a simple heuristic. When a ball comes in
high, the player fixates the ball and starts running. The heuristic
is to adjust the running speed so that the angle of gaze remains
constant, that is, the angle between the eye and the ball.
The player can ignore all the information necessary to compute
the trajectory, such as the ball's initial velocity, distance,
and angle, and just focus on one piece of information, the angle
of gaze."

GERD GIGERENZER is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences. He is the author of Calculated Risks: How To Know When Numbers Deceive You, the German translation of which won the Scientific Book of the Year Prize in 2002. He has also published two academic books on heuristics, Simple Heuristics That Make Us Smart (with Peter Todd & The ABC Research Group) and Bounded Rationality: The Adaptive Toolbox (with Reinhard Selten, a Nobel laureate in economics).

SMART HEURISTICS: GERD GIGERENZER

What interests me is the question of how humans learn to live with uncertainty. Before the scientific revolution determinism was a strong ideal. Religion brought about a denial of uncertainty, and many people knew that their kin or their race was exactly the one that God had favored. They also thought they were entitled to get rid of competing ideas and the people who propagated them. How does a society change from this condition into one in which we understand that there is this fundamental uncertainty? How do we avoid the illusion of certainty and produce the understanding that everything, whether it be a medical test or deciding on the best cure for a particular kind of cancer, has a fundamental element of uncertainty?

At the beginning of the 20th century the father of modern science fiction, Herbert George Wells, said in his writings on politics, "If we want to have an educated citizenship in a modern technological society, we need to teach them three things: reading, writing, and statistical thinking." At the beginning of the 21st century, how far have we gotten with this program? In our society, we teach most citizens reading and writing from the time they are children, but not statistical thinking. John Allen Paulos has called this phenomenon innumeracy.
There are many stories documenting this problem. For instance,
there was the weather forecaster who announced on American
TV that if the probability that it will rain on Saturday
is 50 percent and the probability that it will rain on Sunday
is 50 percent, the probability that it will rain over the
weekend is 100 percent. In another recent case, reported
by New Scientist, an inspector from the Food and Drug
Administration visited a restaurant in Salt Lake City famous
for its quiches made from four fresh eggs. She told the
owner that according to FDA research every fourth egg has
salmonella bacteria, so the restaurant should only use three
eggs in a quiche. We can laugh about these examples because
we easily understand the mistakes involved, but there are
more serious cases. When it comes to medical and legal
issues, we need exactly the kind of education that H. G.
Wells was asking for, and we haven't gotten it.

Representation of information is important. In the case of many so-called cognitive illusions, the problem results from the difficulty of dealing with probabilities. The problem largely disappears the moment you give the person the information in natural frequencies. You basically put the mind back into a situation where it's much easier to understand these probabilities. We can prove that natural frequencies facilitate actual computations, and we have known for a long time that representations, whether they be probabilities, frequencies, or odds, have an impact on the human mind. Yet there are very few theories about how this works.
I'll give you a couple of examples relating to medical care.
In the U.S. and many European countries, women who are 40
years old are told to participate in mammography screening.
Say that a woman takes her first mammogram and it comes
out positive. She might ask the physician, "What does that
mean? Do I have breast cancer? Or are my chances of having
it 99%, 95%, 90%, or only 50%? What do we know
at this point?" I have put the same question to radiologists
who have done mammography screening for 20 or 25 years,
including chiefs of departments. A third said they would
tell this woman that, given a positive mammogram, her chance
of having breast cancer is 90%.

What we do is to teach these physicians tools that change the representation so that they can see through the problem. We don't send them to a statistics course, since they wouldn't have the time to go in the first place, and most likely they wouldn't understand it because they would be taught probabilities again. But how can we help them to understand the situation? Let's change the representation using natural frequencies, as if the physician had observed these patients him- or herself. The relevant numbers in this example are roughly these: about 1% of women of this age have breast cancer; if a woman has breast cancer, the chance that she tests positive is about 90%; if she does not, the chance that she nevertheless tests positive is about 9%. One can communicate the same information in the following, much simpler way. Think about 100 women. One of them has breast cancer; this is the 1%. She will likely test positive; that's the 90%. Of the 99 who do not have breast cancer, another 9 or 10 will also test positive. So we have about 10 or 11 women who test positive. How many of them actually have cancer? One, that is, roughly one out of ten. That's not 90%, that's not 50%; it's about one out of ten.

Here we have a method that enables physicians to see through the fog just by changing the representation, turning their innumeracy into insight. Many of these physicians have carried this innumeracy around for decades and have tried to hide it. When we interview them, they readily admit it, saying, "I don't know what to do with these numbers. I always confuse these things." Here we have a chance to use very simple tools that help patients and physicians understand what the risks are and enable them to react reasonably. If you take the perspective of a patient who believes that a positive test means there is a 90% chance she has cancer, you can imagine what emotions set in, emotions that do not help her to reason the right way. But informing her that only one out of ten women who test positive actually has cancer would help her to have a cooler attitude and to make more reasonable decisions.
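To make the arithmetic explicit, here is a minimal Python sketch using the rough figures from the example above (1% prevalence, 90% sensitivity, about 9% false positives); the exact numbers are illustrative, not clinical figures. It gives the same answer whether you apply Bayes' rule to probabilities or simply count natural frequencies, but the second route is the one people find easy to follow.

```python
# Chance of breast cancer given a positive mammogram, computed two ways
# with the illustrative numbers from the text (not clinical figures).

prevalence = 0.01            # P(cancer): about 1 woman in 100
sensitivity = 0.90           # P(positive | cancer)
false_positive_rate = 0.09   # P(positive | no cancer)

# 1) Bayes' rule on conditional probabilities
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
print(f"Bayes' rule: {sensitivity * prevalence / p_positive:.2f}")   # ~0.09

# 2) Natural frequencies: think of 100 women
women = 100
with_cancer = round(women * prevalence)                          # 1
true_pos = round(with_cancer * sensitivity)                      # 1
false_pos = round((women - with_cancer) * false_positive_rate)   # 9
print(f"Natural frequencies: {true_pos} of {true_pos + false_pos} "
      f"women who test positive has cancer")                     # 1 of 10
```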
Prostate cancer is another disease for which we have good
data. In the U.S. and European countries doctors advise men
aged 40 to 50 to take a PSA test. This is a prostate cancer
test that is very simple, requiring just a bit of blood, and
so many people do it. The interesting thing is that most of
the men I've talked to have no idea of the benefits and costs
of this test. It's an example of decision-making based on
trusting your doctor or on rumors. But interestingly, if you
read about the test on the Internet from independent medical
organizations such as the Cochrane Collaboration, or read the reports of various
physicians' agencies that give recommendations for screening,
then you find out that the benefits and costs of prostate
cancer screening are roughly the following: Mortality reduction
is the usual goal of medical testing, yet there's no proof
that prostate cancer screening reduces mortality. On the other
hand there is proof that, whether we look at people who do
not have prostate cancer or at those who do, there is
a good likelihood that it will do harm. The test produces
a number of false positives. If you do it often enough, there's
a good chance of getting a high level on the test, a so-called
positive result, even though you don't have cancer. It's like
a car alarm that goes off all the time.
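The car-alarm point, that repeated testing almost guarantees a false alarm eventually, can be made concrete with a small calculation. The sketch below assumes, purely for illustration, a fixed 10% chance of a false positive per test and independence between tests; neither figure comes from the text.

```python
# Probability of at least one false positive over repeated screening,
# assuming a fixed per-test false positive rate and independent tests
# (illustrative assumptions only).

def p_any_false_alarm(per_test_rate: float, n_tests: int) -> float:
    return 1 - (1 - per_test_rate) ** n_tests

for n in (1, 5, 10, 20):
    print(f"{n:2d} tests: {p_any_false_alarm(0.10, n):.0%} chance of a false alarm")
```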
It is very puzzling that in a country where a 12-year-old
knows baseball statistics, adults don't know the simplest
statistics about tests, diseases, and the consequences that
may cause them serious damage. Why is this? One reason,
of course, is that the cost-benefit computations for doctors
are not the same as for patients. One cannot simply accuse
doctors of not knowing these things or of not caring about patients,
but a doctor has to face the possibility that if he or she
doesn't advise someone to participate in the PSA test and
that person gets prostate cancer, then the patient may turn
up at his doorstep with a lawyer. The second thing is that
doctors are members of a community with professional pride,
and for many of them not detecting a cancer is something
they don't want to have on their records. Third, there are
groups of doctors who have very clear financial incentives
to perform certain procedures. A good doctor would explain
this to a patient but leave the decision to the patient.
Many patients don't see the situation in which doctors
find themselves, but most doctors will recommend the test.

Thus, dealing with probabilities also relates to the issue of understanding the psychology of how we make rational decisions. According to decision theory, rational decisions are made according to the so-called expected utility calculus, or some variant thereof. In economics, for instance, the idea is that if you make an important decision, whom to marry or what stock to buy, for example, you look at all the consequences of each choice, attach a probability to each consequence, attach a value, and sum them up, choosing the option with the highest expected value or expected utility. This theory, which is very widespread, maintains that people behave in this way when they make their decisions. The problem is that we know from experimental studies that people don't behave this way.

There is a nice story that illustrates the conflict: A famous decision theorist who once taught at Columbia got an offer from a rival university and was struggling with the question of whether to stay where he was or accept the new post. His friend, a philosopher, took him aside and said, "What's the problem? Just do what you write about and what you teach your students. Maximize your expected utility." The decision theorist, exasperated, responded, "Come on, get serious!"
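For concreteness, here is a small Python sketch of the expected utility calculus the theory prescribes, applied to a stylized version of the job decision discussed below; the options, probabilities, and utilities are invented for illustration.

```python
# Expected utility calculus, as decision theory prescribes it:
# for each option, weight the utility of every possible outcome
# by its probability, sum, and pick the option with the highest total.
# All options, outcomes, and numbers below are made up.

options = {
    "stay at current job": [
        (0.7, 60),   # (probability, utility): things continue much as before
        (0.3, 40),   # stagnation
    ],
    "accept new job": [
        (0.5, 90),   # more money and prestige, family adapts
        (0.5, 10),   # family is unhappy about the move
    ],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("Decision theory's choice:", best)
```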
Decisions can often be modeled by what I call fast and frugal
heuristics. Sometimes they're faster, and sometimes they're
more frugal. Deciding which of two jobs to take, for instance,
may involve consequences that are incommensurate from the
point of view of the person making the decision. The new
job may give you more money and prestige, but it might leave
your children in tears, since they don't want to move for
fear that they would lose their friends. Some economists
may believe that you can reduce everything to the same common
denominator, but many people can't do this. A person could end
up making a decision for one dominant reason.
There's also a group of researchers who look not at bounds in
the environment but at bounds in the mind. These include
many psychologists and behavioral economists who find that
people often take in only limited information, and sometimes
make decisions based on just one or two criteria. But these
colleagues don't analyze the environmental influences on
the task. They think that, for a priori reasons, people
make bad choices because of a bias, an error, or a fallacy.
They look only at constraints in the mind.

Evolutionary thinking gives us a useful framework for asking some interesting questions that are not often posed. For instance, when I look at a certain heuristic, such as when people make a decision based on one good reason while ignoring all others, I must ask in what environmental structures that heuristic works, and where it does not work. This is a question about ecological rationality, about the adaptation of heuristics, and it is very different from what we see in the study of cognitive illusions in social psychology and in the study of judgment and decision-making, where any kind of behavior that suggests that people ignore information, or just use one or two pieces of information, is coded as a bias. That approach is non-ecological; that is, it doesn't relate the mind to its environment.
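As a concrete sketch of one-reason decision making, here is a minimal Python version of a lexicographic heuristic in the spirit of "take the best": go through cues in order of assumed validity and let the first discriminating cue decide, ignoring everything else. The cue order and cue values below are invented for illustration.

```python
# One-reason decision making: the first cue that discriminates
# between the two options makes the decision; all other cues are ignored.

def one_reason_choice(option_a, option_b, cues):
    """Return the chosen option and the single cue that decided it."""
    for cue in cues:                      # cues ordered by assumed validity
        a, b = option_a[cue], option_b[cue]
        if a != b:                        # first discriminating cue decides
            return (option_a if a > b else option_b), cue
    return option_a, None                 # no cue discriminates: guess

cues = ["salary", "prestige", "commute_score"]
job_x = {"name": "Job X", "salary": 1, "prestige": 1, "commute_score": 0}
job_y = {"name": "Job Y", "salary": 1, "prestige": 0, "commute_score": 1}

choice, reason = one_reason_choice(job_x, job_y, cues)
print(f"Chose {choice['name']} because of the cue '{reason}'")
```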
An important future direction in cognitive science is to
understand that human minds are embedded in an environment.
This is not the usual way that many psychologists, and of
course many economists, think about it. There are many psychological
theories about what's in the mind, and there may be all
kinds of computations and motives in the mind, but there's
very little ecological thinking about what certain cognitive
strategies or emotions do for us, and what problems they
solve. One of the visions I have is to understand not only
how cognitive heuristics work, and in which environments
it is smart to use them, but also what role emotions play
in our judgment. We have gone through a kind of liberation
in recent years. There are many books, by Antonio Damasio
and others, that make a general claim that emotions are
important for cognitive functions, and are not just there
to interrupt, distract, or mislead you. Actually, emotions
can do certain things that cognitive strategies can't do,
but we have very little understanding of exactly how that
works.

Consider the problem of searching for a mate. Herbert Simon's idea of satisficing addresses it: a satisficer, searching for a mate, would have an aspiration level, and once this aspiration is met, as long as it is not set too high, he will find a partner and the problem is solved. But satisficing is a purely cognitive mechanism. After you make your choice you might see someone come around the corner who looks better, and there's nothing to prevent you from dropping your wife or your husband and going off with the next one. Here we see one function of emotions. Love, whether it be romantic love or love for our children, helps most of us to create the commitment necessary to make us stay with and take care of our spouses and families. Emotions can perform functions similar to those that the cognitive building blocks of heuristics perform. Disgust, for example, keeps you from eating lots of things and makes food choice much simpler, and other emotions do similar things. Still, we have very little understanding of how decision theory links with the theory of emotion, and of how we can develop a good vocabulary of the building blocks necessary for making decisions. This is one important direction for future investigation.
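A minimal sketch of the satisficing rule just described: inspect options one at a time and accept the first whose quality meets a preset aspiration level. The candidates and scores below are invented for illustration.

```python
# Satisficing: examine candidates sequentially and accept the first
# one whose quality meets a preset aspiration level, instead of
# searching for the global optimum. Candidate scores are invented.

def satisfice(candidates, aspiration):
    """Return the first candidate meeting the aspiration level, else the last seen."""
    chosen = None
    for name, quality in candidates:
        chosen = (name, quality)
        if quality >= aspiration:     # aspiration met: stop searching
            break
    return chosen

candidates = [("A", 0.55), ("B", 0.72), ("C", 0.90), ("D", 0.81)]
print(satisfice(candidates, aspiration=0.70))   # ('B', 0.72): good enough, search stops
```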
Another simple example of how heuristics are useful can
be seen in the following thought experiment: Assume you
want to study how players catch balls that come in from
a high angle, as in baseball, cricket, or soccer,
because you want to build a robot that can catch
them. The traditional approach, which is much like optimization
under constraints, would be to try to give your robot the
complete representation of its environment and the most
expensive computation machinery you can afford. You might
feed your robot a family of parabolas because thrown balls
have parabolic trajectories, with the idea that the robot
needs to find the right parabola in order to catch the ball.
Or you feed it measurement instruments that can measure
the initial distance, the initial velocity, and the initial
angle at which the ball was thrown or kicked. You're still not done,
because in the real world balls do not fly in parabolas,
so you need instruments that can measure the direction and
the speed of the wind at each point of the ball's flight,
as well as the ball's spin, to calculate its final trajectory. It's a very
hard problem, but this is one way to look at it.
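For contrast, here is what even the most stripped-down version of that traditional computation looks like: predicting the landing point of an ideal, parabolic trajectory from the initial speed and angle, before wind, air resistance, or spin, the very factors that make the real problem so hard, are even considered. The numbers are illustrative.

```python
# The "full computation" approach in its simplest form: predict where
# the ball lands from its initial speed and launch angle, assuming an
# ideal parabola (no air resistance, wind, or spin).
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed_mps: float, launch_angle_deg: float) -> float:
    """Horizontal range of an ideal projectile launched from ground level."""
    theta = math.radians(launch_angle_deg)
    return speed_mps ** 2 * math.sin(2 * theta) / G

print(f"{landing_distance(30.0, 45.0):.1f} m")  # ~91.7 m for a 30 m/s ball at 45 degrees
```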
The alternative is to equip the robot with the gaze heuristic described earlier: fixate the ball, start running, and adjust the running speed so that the angle of gaze remains constant. This illustrates that we can do this kind of science by always looking at what the mind does, the heuristics, at the structures of environments, and at how minds change the structures of environments. In this case the relationship between the ball and oneself is turned into a simple linear relationship on which the player acts. This is an example of a smart heuristic, which is part of the adaptive toolbox that has evolved in humans. Many of these heuristics are also present in animals. For instance, a recent study showed that when dogs catch frisbees they use the same gaze heuristic.
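By contrast, a sketch of the gaze heuristic as a simple control rule might look like the following; the gain, angles, and speeds are invented, and the sign convention assumes the player is running toward the ball.

```python
# The gaze heuristic as a control rule: ignore trajectory prediction and
# simply adjust running speed so that the angle of gaze to the ball stays
# constant. Gain, angles, and speeds are invented for illustration.

def update_speed(speed_toward_ball: float, prev_angle: float,
                 new_angle: float, gain: float = 8.0) -> float:
    """If the gaze angle falls, the ball will drop in front of you: speed up.
    If it rises, it will carry past you: slow down (floored at zero here)."""
    return max(0.0, speed_toward_ball - gain * (new_angle - prev_angle))

# Pretend these gaze angles (radians) were observed on successive glances.
observed_angles = [0.60, 0.58, 0.55, 0.53, 0.52]
speed = 4.0  # m/s, running toward the ball
for prev, new in zip(observed_angles, observed_angles[1:]):
    speed = update_speed(speed, prev, new)
    print(f"gaze angle {new:.2f} rad -> running speed {speed:.2f} m/s")
```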
John Brockman,
Editor and Publisher