2013: WHAT *SHOULD* WE BE WORRIED ABOUT?

Aubrey de Grey
Gerontologist; Chief Science Officer, SENS Foundation; Author, Ending Aging
Society's Parlous Inability To Reason About Uncertainty

Broadly well-educated people are generally expected by other broadly well-educated people to demonstrate a reasonably facile ability to learn and accommodate new information on topics with which they are unfamiliar. By "accommodate" I specifically refer to absorption not only of facts, but also of the general tenor of the topic as it exists in the expert community: what the major themes of discussion are, and (at least in outline) why they are topical.

There are, unfortunately, frequent cases where this expectation is not fulfilled. I first became aware of the depth of this problem through my own work, the development of medical interventions against aging. There, the main obstacle is that throughout the history of civilisation we have had no choice but to find ways to put the horror of aging out of our minds by whatever psychological device, however irrational from a dispassionate standpoint, may work: a phenomenon to which researchers in the field are, tragically, not immune (though that is changing at a gratifyingly accelerating pace). But here I wish to focus on a much more general problem.

Above all, uncertainty is about timescales. Humans evolved in an environment where the short term mattered the most, but in recent history it has been important to depart from that mindset. And what that means, in terms of ways to reason, is that we need to develop an evolutionarily unselected skill: how best to integrate the cumulative uncertainties that longer-term forecasting inexorably entails.

Consider automation. The step-by-step advance of the trend that began well before, but saw its greatest leap with, the Industrial Revolution has resulted in a seismic shift of work patterns from manufacturing and agriculture to the service industries—but, amazingly, there is virtually no appreciation of what the natural progression of this phenomenon, namely the automation of service jobs too, could mean for the future of work. What is left, once the service sector goes the same way? Only so many man-hours can realistically be occupied in the entertainment industry. Yet, rather than plan for and design a world in which it is normal either to work for far fewer hours per week or for far fewer years per lifetime, societies across the world have acquiesced in a political status quo that assumes basically no change. Why the political inertia?

My view is that the main problem here is the public's catastrophic deficiency in probabilistic reasoning. Continued progress in automation, as in other areas, certainly relies on advances that cannot be anticipated in their details, and therefore not in their precise timeframes either. Thus, it is a topic for speculation—but I do not use that term in a pejorative way. Rather, I use it to emphasise that aspects of the future about which we know little cannot thereby be ignored: we must work with what we have.

And it is thought leaders in the science and engineering realms who must take a lead here. It's not controversial that public policy overwhelmingly follows, rather than leads, public opinion: ultimately, the number one thing that politicians work towards is getting re-elected. Thus, while voters remain unable to reach objective conclusions about even the medium term—let's say a decade hence—it is fanciful to expect policy-makers to act any better.

The situation is worst in the more mathematically extreme cases. These are the situations that can be summarised as "high risk, high gain"—low perceived probability of success, but huge benefits in the event of success. As any academic will aver, the mainstream mechanisms for supplying public funding to research have gravitated to a disastrous level of antipathy towards high-risk, high-gain work, to the point where it is genuinely normal for senior scientists to stay one step ahead of the system by essentially applying for money to do work that is already largely complete, and thus bears no risk of failing to be delivered on time.
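
To make the underlying arithmetic concrete, here is a minimal sketch, in Python, of the expected-value comparison that a "high risk, high gain" project invites; every number in it is hypothetical, chosen only so the calculation is easy to follow.

    # A purely hypothetical comparison of two research projects by expected
    # benefit; the figures are invented to make the arithmetic visible, not
    # estimates of any real programme.

    def expected_benefit(p_success: float, benefit_if_success: float) -> float:
        """Expected benefit: probability of success times the payoff on success."""
        return p_success * benefit_if_success

    # A "safe" incremental project versus a "high risk, high gain" one.
    safe = expected_benefit(p_success=0.9, benefit_if_success=1_000)       # 900
    risky = expected_benefit(p_success=0.1, benefit_if_success=100_000)    # 10,000

    print(f"safe:  {safe:,.0f}")     # safe:  900
    print(f"risky: {risky:,.0f}")    # risky: 10,000
    # Even with a tenth the chance of success, the risky project's expected
    # benefit is more than ten times larger; a funding system that penalises
    # any visible risk of failure will nonetheless never support it.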

The fields of research that most interest Edge readers are exceptionally susceptible to this challenge. Visionary topics are of necessity long-term, hence high risk, and of almost equal necessity high gain. In the area of medical research, for example, the question must be raised: are we benefiting the most people, to the greatest extent, with the highest probability, by the current distribution of research funding? In all such areas that I can think of, the fundamental bias apparent in public opinion and public policy is in favour of approaches that might, arguably (often very arguably), deliver modest short-term benefits but which offer pretty much no prospect of leading to more effective, second-generation approaches down the road. The routes to those second-generation approaches that show the best chance of success are, by contrast, marginalised as a result of their lack of "intermediate results".

We should be very, very worried about this. I would go so far as to say that it is already costing masses—masses—of lives, by slowing down life-saving research. And how hard is it to address, really? How hard is Bayes' Theorem, really? I would assert that the single most significant thing that those who understand the issue I have highlighted here can do to benefit humanity is to agitate for better understanding of probabilistic reasoning among policy-makers, opinion-formers and thence the public.
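
For anyone inclined to answer that last question by trying it, here is a minimal sketch of a single Bayesian update in Python; the prior and likelihoods are invented purely for illustration.

    # Bayes' Theorem in one function: P(H|E) = P(E|H) * P(H) / P(E).
    # All numbers below are invented purely to show the mechanics.

    def bayes_update(prior: float, p_evidence_given_h: float,
                     p_evidence_given_not_h: float) -> float:
        """Posterior probability of hypothesis H after observing evidence E."""
        p_evidence = (p_evidence_given_h * prior
                      + p_evidence_given_not_h * (1.0 - prior))
        return p_evidence_given_h * prior / p_evidence

    # Example: H = "this line of research pays off within a decade".
    # Prior belief 20%; a promising intermediate result is three times as
    # likely if H is true (60%) as if it is false (20%).
    posterior = bayes_update(prior=0.2, p_evidence_given_h=0.6,
                             p_evidence_given_not_h=0.2)
    print(f"posterior: {posterior:.2f}")   # posterior: 0.43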