2013 : WHAT *SHOULD* WE BE WORRIED ABOUT?

Stuart Firestein
Professor and Chair, Department of Biological Sciences, Columbia University; Fellow, AAAS
Say It Ain't So

How often does science fail to deliver? How often should it fail? Should we be worried about the failure rate of science?

Much has been made recently of all the things science has predicted that haven't come true. This is typically presented as an indictment of science, and sometimes even as a reason not to put so much faith in it. Both of these are wrong. They are wrong on the statistics and they are wrong on the interpretation.

The statistical error arises from lumping together scientific predictions that are different in kind. For example, the covers of postwar popular science and technology magazines are full of predictions of amazing developments that were supposed to be just around the corner—floating airports, underground cities, flying cars, multilayer roadways through downtowns, et cetera ad nauseam. Very few (maybe none) of these wild predictions came to pass, but what was the chance they would? They were simply the unbridled imaginings of popular science writers or graphic artists looking for a dramatic cover or story that would sell magazines.

You can't lump these predictions in the same bin with more serious promises, like the eradication of cancer. One sort of prediction is just imaginative musing; the other is a kind of promissory note. Artificial intelligence, space travel, alternative energies, and cheaper, more plentiful food are all in this second category. They cost money, time, and resources, and they are serious undertakings; the speculations of science fiction writers carry no such costs.

But of course not all the serious promises have worked out either. The second error, one of interpretation, is the conclusion that we have therefore squandered vast sums of public money and resources on abject failures. Take one case, the so-called War on Cancer: we have spent $125 billion on cancer research since then-President Richard Nixon "declared war" on the disease 42 years ago, in 1971. The result: over that same 42-year period, some 16 million people have died from cancer, which remains among the leading causes of death in the US. Sounds bad, but in fact we have cured many previously fatal cancers and prevented an unknowable number of cases by understanding the importance of environmental factors (asbestos, smoking, sunshine, etc.).

And what about all the ancillary benefits that weren't in the original prediction—vaccines, improved drug delivery methods, a sophisticated understanding of cell development and aging, new methods in experimental genetics, discoveries that tell us how genes are regulated (all genes, not just cancer genes), and a host of other goodies that never get counted as results of the war on cancer? Then there is the unparalleled increase in our understanding of biology at every level—from biochemical cascades to cell behavior, to regulatory systems, to whole animals and people, and the above-mentioned effects of the environment on health. Is anybody tallying all this up? I'd venture to say that, all in all, this cancer war has given us more for the dollars spent than any real war, certainly any recent war.

Much of science is failure, but it is productive failure. This is a crucial distinction in how we think about failure. More important, not all wrong science is bad science. As with the exaggerated expectations of scientific progress, expectations about the validity of scientific results have simply become overblown. Scientific "facts" are all provisional, all needing revision or sometimes even outright upending. But this is not bad; indeed, it is critical to continued progress. Granted, it's difficult, because you can't just believe everything you read. But let's grow up and recognize that undeniable fact of life—not only in the newspapers but in scientific journals.

In the field of pharmacology, where drugs are made, we say that the First Law of Pharmacology is that every drug has at least two effects—the one you know and the other one (or ones?). And the even more important Second Law of Pharmacology is that the specificity of a drug is inversely proportional to the length of time it has been on the market. In simpler terms, when a drug first comes out it is prescribed for a specific effect and seems to work well. But as time passes and it is prescribed more widely, to a more diverse population, side effects begin showing up—the drug is not as specific to the particular pathology as was thought, and it turns out to have other unexpected effects, mostly negative. This is a natural process. It is how we learn the limitations of our findings. Would it be better if we could shortcut this process? Yes. Is it likely we'll be able to? No—unless we were content to try only very conservative approaches to curing disease.

So what's the worry? That we will become irrationally impatient with science, with its wrong turns and occasional blind alleys, with its temporary results that need constant revision. And that we will lose our trust and belief in science as the single best way to understand the physical universe (which includes us, or much of us). From a historical perspective the path to discovery may seem clear, but the reality is that there are twists and turns and reversals and failures and cul-de-sacs all along the way to any discovery. Facts are not immutable, and discoveries are provisional. This is the messy process of science. We should worry that our unrealistic expectations will destroy this amazing mess.