2009: WHAT WILL CHANGE EVERYTHING?

Henry Harpending
Thomas Chair as Distinguished Professor in the Department of Anthropology, University of Utah

Cheap individual genotyping will give a new life to dating services and marriage arrangers. There is a market for sperm and egg donors today, but the information available to consumers about donors is limited. This industry will flourish as individual genotyping costs go down and knowledge of genomics grows.

Potential consumers will be able to evaluate not only whether a gamete provider has brown eyes, is tall or short, or has a professional degree, but also whether the donor has the appropriate MHC genotypes, long or short androgen receptors, the desired dopamine receptor types, and so on. The list of criteria and the sophistication of the algorithms matching consumers and donors will grow at an increasing rate in the next decade.

The idea of a "compatible couple" will have a whole new dimension. Consumers will have information about hundreds of relevant donor genetic polymorphisms to evaluate in the case of gamete markets. In marriage markets there will be evaluation by both parties. Where will all this lead? Three possibilities come immediately to mind:

A. Imagine that Sally is looking at the sperm donor market. Perhaps she is shopping for someone genetically compatible, for example with the right MHC types. She is a homozygote for the 7R allele of the DRD4 genetic locus, so she is seeking a sperm donor homozygous for the 4R allele: she doesn't want to put up with a 7R homozygous child like she was. In other words, whether Tom or Dick is the more desirable donor depends on characteristics specific to Sally.

B. But what if Sally values something like intelligence, which is almost completely unidimensional and of invariant polarity: nearly everyone values high intelligence. In this case Sally will evaluate Tom and Dick on simple scales that she shares with most other women; Tom will almost always be of higher value than Dick, and he will be able to obtain a higher price for his sperm.

C. Perhaps a new President has red-haired children. Suddenly Sally, along with most other women in the market, wants red-haired children because they are fashionable. Dick, with his red hair, is the sell-out star of the sperm market, but only for a short time. A cohort of children is born with red hair; then the fad goes away as green eyes, say, become the new hot seller. Dick loses his status in the market and is forced to get a real job.

These three scenarios, or any mix of them, are possible futures for love and marriage among those prosperous enough to indulge in this market. Scenario A corresponds to traditional views of marriage: for everyone there is someone special and unique. Scenario B corresponds somewhat more closely to how marriage markets really work: every Sally prefers rich to poor, smart to dumb, and a BMW to a Yugo. Scenario C is close to one mechanism of what biologists call sexual selection: male mallards have green heads essentially because it is just the fashion. I would not wager much on which of these scenarios will dominate the coming gamete market, but I favor scenario B.
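The difference between scenarios A and B comes down to whether a donor's value is specific to a given consumer or lies on a single scale everyone shares. A toy Python sketch makes the contrast concrete; every trait name, weight, and score below is invented for illustration and reflects no real matching service or genetic model.

```python
# Toy contrast between scenario A (idiosyncratic, consumer-specific value)
# and scenario B (a single shared scale). All traits and weights are invented.

def score_scenario_a(consumer, donor):
    """Scenario A: value depends on compatibility with this particular consumer."""
    score = len(donor["mhc"] - consumer["mhc"])          # reward dissimilar MHC types
    if consumer["drd4"] == "7R/7R" and donor["drd4"] == "4R/4R":
        score += 2                                       # Sally's specific preference
    return score

def score_scenario_b(donor):
    """Scenario B: everyone uses the same scale, so the ranking is the same for all."""
    return donor["iq_proxy"]

sally = {"mhc": {"A1", "B7"}, "drd4": "7R/7R"}
tom   = {"mhc": {"A2", "B8"}, "drd4": "4R/4R", "iq_proxy": 130}
dick  = {"mhc": {"A1", "B7"}, "drd4": "7R/7R", "iq_proxy": 110}

print(score_scenario_a(sally, tom), score_scenario_a(sally, dick))  # ranking specific to Sally
print(score_scenario_b(tom), score_scenario_b(dick))                # ranking shared by everyone
```

In scenario A the ranking would change if a different consumer ran the same query; in scenario B it would not, which is why one donor can command a premium from the whole market.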

PZ Myers
Biologist; Associate Professor, University of Minnesota, Morris

The question, "what will change everything?" is in the wrong tense: it should be "what is changing everything right now?" We're in the midst of an ongoing revision of our understanding of what it means to be human—we are struggling to redefine humanity, and it's going to radically influence our future.

The redefinition began in the 19th century with the work of Charles Darwin, who changed the game by revealing the truth of human history. We are not the progeny of gods, we are the children of worms; not the product of divine planning, but of cruel chance and ages of brutal winnowing. That required a shift in the way we view ourselves that is still working its way through the culture. Creationism is an instance of a reaction against the dethroning of Homo sapiens. Embracing the perspective of evolution, however, allows us to see the value of other species and to appreciate our place in the system as a whole, and is a positive advance.

There are at least two more revolutions in the works. The first is in developmental biology: we're learning how to reprogram human tissues, enabling new possibilities in repair and regeneration. We are acquiring the tools that will make the human form more plastic, and it won't just stop with restoring damaged bodies to a prior state, but will someday allow us to resculpt ourselves, add new features and properties to our biology, and maybe, someday, even free us completely from the boundaries of the fixed form of a bipedal primate. Even now with our limited abilities, we have to rethink what it means to be human. Does a blastocyst really fit the definition? How about a 5-week embryo, or a three-month-old fetus?

The second big revelation is coming from neuroscience. Mind is clearly a product of the brain, and the old notions of souls and spirits are looking increasingly ludicrous…yet these are nearly universal ideas, all tangled up in people's rationalizations for an afterlife, for ultimate reward and punishment, and their concept of self. If many object to the lack of exceptionalism in our history, if they're resistant to the idea that human identity emerges gradually during development, they're most definitely going to find the idea of soullessness and mind as a byproduct of nervous activity horrifying.

This will be our coming challenge, to accommodate a new view of ourselves and our place in the universe that isn't encumbered with falsehoods and trivializing myths. That's going to be our biggest change—a change in who we are.

Dean Ornish
Founder and President of the non-profit Preventive Medicine Research Institute

We are entering a new era of personalized medicine.  One size does not fit all. 

One way to change your genes is to make new ones, as Craig Venter has elegantly shown. Another is to change your lifestyle: what you eat, how you respond to emotional stress, whether or not you smoke cigarettes, how much you exercise, and the experience of love and intimacy.

New studies show that these comprehensive lifestyle changes may change gene expression in hundreds of genes in only a few months—"turning on" (upregulating) disease-preventing genes and "turning off" (downregulating) genes that promote heart disease, oncogenes that promote breast cancer and prostate cancer, and genes that promote inflammation and oxidative stress. These lifestyle changes also increase telomerase, the enzyme that repairs and lengthens telomeres, the ends of our chromosomes that control how long we live.

As genomic information for individuals becomes more widely available—via the decoding of each person's complete genome (as Venter and Watson have done) or partially (and less expensively) via new personal genomics companies—this information will be a powerful motivator for people to make comprehensive lifestyle changes that may beneficially affect their gene expression and significantly reduce the incidence of the pandemic of chronic diseases. 

Joseph Traub
Professor of Computer Science, Columbia University; coauthor, Complexity and Information

On January 20, 2009, our nation's leaders will gather in Washington for the inauguration of the 44th President of the United States. How many more such public inaugurations will we see?

Given the increasing precision, range, and availability of weapons, will it remain safe for our nation's leaders to gather in one public place? Such weapons are, or will be, available to a variety of nations, NGOs, and terrorist groups. It might well be in someone's interest to wipe out the nation's leadership in one blow.

Why am I willing to announce this danger publicly? Won't it give terrorists ideas? I've come to believe that terrorist groups as well as other nations are smart and will identify such opportunities themselves.

What can be done about the potential physical threat when our leaders are gathered in one place?

Kevin Slavin
Assistant Professor and Founder, Playful Systems, MIT Media Lab

In just a few years, we’ll see the first generation of adults whose every breath has been drawn on the grid. A generation for whom every key moment (e.g., birth) has been documented and distributed globally. Not just the key moments, of course, but also the most banal: eating pasta, missing the train, and having a bad day at the office. Ski trips and puppies.

These trips and puppies are not simply happening, they are becoming data, building up the global database of distributed memories. They are networked digital photos – 3 billion on Flickr, 10 billion on Facebook. They were blog posts, and now they are tweets, too (a billion in 18 months). They are Facebook posts, Dopplr journals, Last.FM updates.

Further, more and more of the traces we produce will be passive or semi-passive. Consider Loopt, which allows us to track ourselves and our friends through GPS. Consider voicemail transcription bots that transcribe the voice messages we leave into searchable text in email boxes, on into eternity. The next song you listen to will likely be stored in a database record somewhere. Next time you take a phonecam photo, it may well have the event's latitude and longitude baked into the photo's metadata.
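As a small illustration of how much location detail rides along passively, here is a sketch that uses the Pillow imaging library to pull the GPS tags out of a JPEG's EXIF metadata. The file name is a placeholder, and many photos will simply have no GPS block at all.

```python
# Sketch: read the latitude/longitude a phonecam may have baked into a photo's EXIF data.
# Requires the Pillow library; "photo.jpg" is a placeholder file name.
from PIL import Image, ExifTags

exif = Image.open("photo.jpg")._getexif() or {}
named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
gps = {ExifTags.GPSTAGS.get(tag, tag): value
       for tag, value in named.get("GPSInfo", {}).items()}

print(gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"),
      gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
```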

The sharp upswing in all of this record-keeping – both active and passive – is redefining one of the core elements of what it means to be human, namely to remember. We are moving towards a culture that has outsourced this essential quality of existence to machines, to a vast and distributed prosthesis. This infrastructure exists right now, but very soon we'll be living with the first adult generation whose entire lives are embedded in it.

In 1992, the artist Thomas Bayrle wrote that the great mistake of the future would be that as everything became digital, we would confuse memory with storage. What's important about genuine memory, and how it differs from digital storage, is that human memory is imperfect, fallible, and malleable. It disappears over time in a rehearsal and echo of mortality; our abilities to remember, distort and forget are what make us who we are.

We have built the infrastructure that makes it impossible to forget. As it hardens and seeps into every element of daily life, it will make it impossible to remember. Changing what it means to remember changes what it means to be.

There are a few people who already have perfect episodic memory, total recall: neurological edge cases. They are harbingers of the culture to come. One of them, Jill Price, was profiled in Der Spiegel:

"In addition to good memories, every angry word, every mistake, every disappointment, every shock and every moment of pain goes unforgotten. Time heals no wounds for Price. 'I don't look back at the past with any distance. It's more like experiencing everything over and over again, and those memories trigger exactly the same emotions in me. It's like an endless, chaotic film that can completely overpower me. And there's no stop button.'"

This also describes the life of Steve Mann, passively recording his life through wearable computers for many years. This is an unlikely future scenario, but like any caricature, it is based on human features that will be increasingly recognizable. The processing, recording and broadcasting prefigured in Mann’s work will be embedded in everyday actions like the twittering, phonecam shots and GPS traces we broadcast now. All of them entering into an outboard memory that is accessible (and searchable) everywhere we go.

Today is New Year’s Eve. I read today (on Twitter) that three friends, independent of each other, were looking back at Flickr to recall what they were doing a year ago. I would like to start the New Year being able to remember 2008, but also to forget it.

For the next generation, it will be impossible to forget it, and harder to remember. What will change everything is our ability to remember what everything is. Was. And wasn’t.

Richard Dawkins
Evolutionary Biologist; Emeritus Professor of the Public Understanding of Science, Oxford; Author, Books Do Furnish a Life

 

Our ethics and our politics assume, largely without question or serious discussion, that the division between human and 'animal' is absolute. 'Pro-life', to take just one example, is a potent political badge, associated with a gamut of ethical issues such as opposition to abortion and euthanasia. What it really means is pro-human-life. Abortion clinic bombers are not known for their veganism, nor do Roman Catholics show any particular reluctance to have their suffering pets 'put to sleep'. In the minds of many confused people, a single-celled human zygote, which has no nerves and cannot suffer, is infinitely sacred, simply because it is 'human'. No other cells enjoy this exalted status.

But such 'essentialism' is deeply un-evolutionary. If there were a heaven in which all the animals who ever lived could frolic, we would find an interbreeding continuum between every species and every other. For example I could interbreed with a female who could interbreed with a male who could . . . fill in a few gaps, probably not very many in this case . . . who could interbreed with a chimpanzee. We could construct longer, but still unbroken chains of interbreeding individuals to connect a human with a warthog, a kangaroo, a catfish. This is not a matter of speculative conjecture; it necessarily follows from the fact of evolution.

Theoretically we understand this. But what would change everything is a practical demonstration, such as one of the following:

1. The discovery of relict populations of extinct hominins such as Homo erectus and Australopithecus. Yeti-enthusiasts notwithstanding, I don't think this is going to happen. The world is now too well explored for us to have overlooked a large, savannah-dwelling primate. Even Homo floresiensis has been extinct for 17,000 years. But if it did happen, it would change everything.

2. A successful hybridization between a human and a chimpanzee. Even if the hybrid were infertile like a mule, the shock waves that would be sent through society would be salutary. This is why a distinguished biologist described this possibility as the most immoral scientific experiment he could imagine: it would change everything! It cannot be ruled out as impossible, but it would be surprising.

3. An experimental chimera in an embryology lab, consisting of approximately equal numbers of human and chimpanzee cells. Chimeras of human and mouse cells are now constructed in the laboratory as a matter of course, but they don't survive to term. Incidentally, another example of our speciesist ethics is the fuss now made about mouse embryos containing some proportion of human cells. "How human must a chimera be before more stringent research rules should kick in?" So far, the question is merely theological, since the chimeras don't come anywhere near being born, and there is nothing resembling a human brain. But, to venture off down the slippery slope so beloved of ethicists, what if we were to fashion a chimera of 50% human and 50% chimpanzee cells and grow it to adulthood? That would change everything. Maybe it will?

4. The human genome and the chimpanzee genome are now known in full. Intermediate genomes of varying proportions can be interpolated on paper. Moving from paper to flesh and blood would require embryological technologies that will probably come on stream during the lifetime of some of my readers. I think it will be done, and an approximate reconstruction of the common ancestor of ourselves and chimpanzees will be brought to life. The intermediate genome between this reconstituted 'ancestor' and modern humans would, if implanted in an embryo, grow into something like a reborn Australopithecus: Lucy the Second. And that would (dare I say will?) change everything.
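Purely as an illustration of what "interpolated on paper" could mean in point 4, here is a toy sketch that mixes two aligned sequences in a chosen proportion. The fragments are invented stand-ins; real genome interpolation would of course be enormously more involved.

```python
import random

def interpolate(seq_a, seq_b, proportion_b):
    """Toy 'paper' interpolation: at each aligned site, take seq_b's base with the given probability."""
    assert len(seq_a) == len(seq_b)
    return "".join(b if random.random() < proportion_b else a for a, b in zip(seq_a, seq_b))

human_fragment = "ACGTACGTAC"   # invented stand-ins, not real sequence data
chimp_fragment = "ACGTTCGAAC"
print(interpolate(human_fragment, chimp_fragment, 0.5))   # roughly half-and-half
```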

I have laid out four possibilities that would, if realised, change everything. I have not said that I hope any of them will be realised. That would require further thought. But I will admit to a frisson of enjoyment whenever we are forced to question the hitherto unquestioned.

Alexander Vilenkin
Professor of Physics and Director of the Institute of Cosmology at Tufts University

The long-term prospects of our civilization here on Earth are very uncertain. We can be destroyed by an asteroid impact or a nearby supernova explosion, or we can self-destruct in a nuclear or bacteriological war. It is a matter not of if but of when disaster will strike, and the only sure way for humans to survive in the long run is to spread beyond the Earth and colonize the Galaxy. The problem is that our chances of doing that before we are wiped out by some sort of catastrophe appear to be rather slim.

The Doomsday argument

The probability for a civilization to survive the existential challenges and colonize its galaxy may be small, but it is non-zero, and in a vast universe such civilizations should certainly exist. We shall call them large civilizations.  There will also be small civilizations which die out before they spread much beyond their native planets.

For the sake of argument, let us assume that small civilizations do not grow much larger than ours and die soon after they reach their maximum size. The total number of individuals who lived in such a civilization throughout its entire history is then comparable to the number of people who ever lived on Earth, which is about 400 billion people, 60 times the present Earth population.

A large civilization contains a much greater number of individuals. A galaxy like ours has about 100 billion stars. We don't know what fraction of stars have planets suitable for colonization, but with a conservative estimate of 0.01% we would still have about 10 million habitable planets per galaxy. Assuming that each planet will reach a population similar to that of the Earth, we get 4 million trillion individuals. (For definiteness, we focus on human-like civilizations, disregarding the planets inhabited by little green people with 1000 people per square inch.) The numbers can be much higher if the civilization spreads well beyond its galaxy. The crucial question is: what is the probability P for a civilization to become large? 

It takes 10 million (or more) small civilizations to provide the same number of individuals as a single large civilization. Thus, unless P is extremely small (less than one in 10 million), individuals live predominantly in large civilizations. That's where we should expect to find ourselves if we are typical inhabitants of the universe. Furthermore, a typical member of a large civilization should expect to live at a time when the civilization is close to its maximum size, since that is when most of its inhabitants are going to live. These expectations are in a glaring conflict with what we actually observe: we either live in a small civilization or at the very beginning of a large civilization. With the assumption that P is not very small, both of these options are very unlikely – which indicates that the assumption is probably wrong.

If indeed we are typical observers in the universe, then we have to conclude that the probability P for a civilization to survive long enough to become large must be very tiny. In our example, it cannot be much more than one in 10 million.

This is the notorious "Doomsday argument". First suggested by Brandon Carter about 35 years ago, it inspired much heated debate and has often been misinterpreted. In the form given here it was discussed by Ken Olum, Joshua Knobe, and me.
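The statistical core of the argument fits in a few lines. The population figures in this sketch are simply the ones quoted above, and the quantity computed is the expected share of all individuals who find themselves in a large civilization for a given survival probability P.

```python
# Sketch of the Doomsday argument's arithmetic, using the numbers quoted in the text.
SMALL = 400e9           # individuals over the whole history of a small civilization
LARGE = 10e6 * 400e9    # 10 million colonized planets, each contributing roughly that many people

def fraction_in_large(P):
    """Expected share of all individuals who live in large civilizations, given survival probability P."""
    return P * LARGE / (P * LARGE + (1 - P) * SMALL)

for P in (1e-3, 1e-5, 1e-7, 1e-9):
    print(f"P = {P:.0e}: fraction in large civilizations = {fraction_in_large(P):.4f}")
# Unless P is far below one in 10 million, almost everyone should find themselves in a large civilization.
```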

Beating the odds

The Doomsday argument is statistical in nature. It does not predict anything about our civilization in particular. All it says is that the odds for any given civilization to grow large are very low. At the same time, some rare civilizations do beat the odds.

What distinguishes these exceptional civilizations? Apart from pure luck, civilizations that dedicate a substantial part of their resources to space colonization, start the colonization process early, and do not stop, stand a better chance of long-term survival.

With many other diverse and pressing needs, this strategy may be difficult to implement, which may be one of the reasons why large civilizations are so rare. And then, there is no guarantee. Only when colonization is well underway, and colonies are being founded faster than they die out, can one declare victory. But if we ever reach this stage in the colonization of our Galaxy, it would truly be a turning point in the history of our civilization.

Where are they?

One question that needs to be addressed is: why is our Galaxy not yet colonized? There are stars in the Galaxy that are billions of years older than our Sun, and it should take much less than a billion years to colonize the entire Galaxy. So, we are faced with Enrico Fermi's famous question: Where are they? The most probable answer, in my view, is that we may be the only intelligent civilization in the entire observable universe.

Our cosmic horizon is set by the distance that light has traveled since the big bang. It sets the absolute limit to space colonization, since no civilization can spread faster than the speed of light. There is a large number of habitable planets within our horizon, but are these planets actually inhabited? The evolution of life and intelligence requires some extremely improbable events. Theoretical estimates (admittedly rather speculative) suggest that their probability is so low that the nearest planet with intelligent life may be far beyond the horizon. If this is really so, then we are responsible for a huge chunk of real estate, 80 billion light years in diameter. Our crossing the threshold to a space-colonizing civilization would then really change everything. It would make the difference between a "flicker" civilization that blinks in and out of existence and a civilization that spreads through much of the observable universe, and possibly transforms it.

Gloria Origgi
Philosopher and Researcher, Centre National de la Recherche Scientifique, Paris; Author, Reputation: What it is and Why it Matters

When asked what will change our future, the most straightforward reply that comes to mind is, of course, the Internet. But how the Internet will change things it has not already changed, and what the next revolution ahead on the net will be, is a harder matter. The Internet is a complex geography of information technology, networking, multimedia content and telecommunication. This powerful alliance of different technologies has provided not only a brand new way of producing, storing and retrieving information, but a giant network of ranking and rating systems in which information is valued only insofar as it has already been filtered by other people.

My prediction for the Big Change is that the Information Age is being replaced by a Reputation Age, in which the reputation of an item — that is, how others value and rate the item — will be the only way we have to extract information about it. This passion for ranking is a central feature of our contemporary practices of filtering information, in and out of the net (take as two different examples, one inside and the other outside the net, www.ebay.com and the recent financial crisis).

The next revolution will be a consequence of the impact of reputation on our practices of information gathering. Notice that this won't mean a world of collective ignorance in which no one has any way of knowing something other than relying on the judgment of someone else, in a sort of infinite chain of blind trust where nobody seems to know anything for sure anymore: the age of reputation will be a new age of knowledge gathering, guided by new rules and principles. This is possible now thanks to the tremendous potential of the social web to aggregate individual preferences and choices and produce intelligent outcomes. Let me explain more precisely how.

One of the main revolutions in Internet technology has been Google's introduction of the "PageRank" algorithm for retrieving information, that is, an algorithm that bases its search for relevant information on the structure of the links on the Web. Algorithms such as these extract the cultural information contained in each preference users express by linking one page to another, with a mathematical cocktail of formulas that gives a special weight to each of these connections. This determines which pages end up in the first positions of a search result.
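For readers who want to see the shape of such an algorithm, here is a minimal sketch of the link-based idea: a page's weight is spread over its outgoing links and the calculation is iterated until it settles. This is a toy power iteration, not Google's actual implementation or weighting.

```python
# Toy power-iteration version of the link-based ranking idea (not Google's real algorithm).
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages          # a page with no links spreads its weight evenly
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}   # three invented pages and their links
print(pagerank(toy_web))
```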

Fears about these tools are understandably many, because our control over the design of the algorithms, over the way weights are assigned to determine the rank, is very poor, nearly nonexistent. But let us imagine a new generation of search engines whose ranking procedures are generated simply by aggregating the individual preferences expressed on these pages: no big calculations, no secret weights. The results of a query are organized just according to the "grades" each page has received from the users who have visited that page at least once and taken the time to rank it.

A social search engine based on the power of "soft" social computing will be able to take advantage of the reputation each site and page has accumulated simply through the votes users have cast on it. The new algorithms for extracting information will exploit the power of the judgments of the many to produce their results. This softer Web, controlled more by human experience than by complex formulas, will change our interaction with the net, as well as our fears and hopes about it. The potential of social filtering is a new way of extracting information by relying on the previous judgments of others.
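By contrast, the "softer" ranking imagined here could be as simple as ordering pages by the grades users have left. A sketch, with invented page names and grades:

```python
# Sketch of ranking by aggregated user grades alone: no link analysis, no secret weights.
grades = {
    "example.org/one":   [5, 4, 5, 3],
    "example.org/two":   [2, 3],
    "example.org/three": [4, 4, 5],
}
ranking = sorted(grades, key=lambda page: sum(grades[page]) / len(grades[page]), reverse=True)
print(ranking)   # pages ordered by average grade
```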

Hegel thought that universal history was made by universal judgments: from now on our history will be written in the language of "good" and "bad", that is, in terms of the judgments people express about the things and events around them, judgments that will become more and more crucial for each of us in extracting information about those events. According to Friedrich Hayek, civilization rests on the fact that we all benefit from knowledge we do not possess: that's exactly the kind of civilized cyber-world that will be made possible by social tools for aggregating judgments on the Web.

Verena Huber-Dyson
1923-2016; Emeritus Professor, formerly of the Philosophy Department, University of Calgary, Alberta, Canada

What will change everything is a radical paradigm shift in the scientific method that opens up horizons beyond the reach of Boolean Logic, Digital Manipulations and Numerical Evaluations.

Due to my advanced age, I am not likely to witness the change.  But I am seeing signs and have my hunches.  These I will briefly spell out.

To change Everything, a radical paradigm shift must interrupt the scientific method's race:

STOP for a moment's reflection; what are you up to?

How do you know your dog would rather be a cat — just because you prefer cats? Did you ask him? Have you figured out how to ask him?

Having figured out how to do something is not enough reason for actually doing it. That is one aspect of the paradigm shift I am expecting, one coming from inside the ranks. Evaluation of scientific results and their potential effects on the world as we know it is of particular urgency these days, when news spreads so easily through the population. Of course we do not want to regress to a system of classified information that generates elitism. Well, this problem is giving rise to the no-longer-so-new philosophical discipline of applied Ethics; if only it keeps itself scientifically well informed and focused, down to earth, on concrete issues.

The goal of this part of the shift is a tightening of the structure of the whole conglomerate of the sciences and their presentation in the media.

But this brings me to the more radical effect of the shift I am envisaging: a healing effect on the rift between the endeavors that are granted the label "scientific" and the proliferation of so-called "alternative" enterprises, many of which are striving to achieve the blessing of scientific grounding by experimentation, theories and statistical evaluations, whether appropriate or not. What I hope for is a true and fruitful symbiosis that leads to a deeper understanding of the meaning of Human Existence than the model of a machine, or of a token created by a Superior Being for the mysterious purpose of suffering through life in the service of His Glory.

Where do I expect the decisive push to come from? Possibly from the young discipline of cognitive science, provided psychology, philosophy and physiology are ready to cooperate.  There are shoots rising up all over, but I won't embark on a list. Once the "real thing" is found or constructed it will be recognized. It will have shape and make sense.

The myth of the scientific method as the only approach to reality will become obsolete without loss to man's interaction with this world.  The path to understanding has to be prepared by a direct, still somewhat mysterious approach of hunches and intuitions in addition to direct perceptions and sensations.  Moreover the results of that procedure are useless unless suitably interpreted.

Well, this is as far as I am ready to go with this explanation of a hunch. The alternative to my current vagueness would be rigidity prone to misinterpretation.

As to my own turf, Mathematics, I do not believe there will be any radical change. Mathematics is a rock of a structure, here to stay. Mathematical insights do not change; they become clearer, dead ends are recognized as such, and what is proved beyond doubt is cumulative.

But methodological changes here are in order as well as meta-mathematical and philosophical interpretation of the nature of results. So is the evolution of an ever more lucid language.

I personally believe we'd do well to focus on Mathematical Intuitionism as our Foundations.  Boolean thinking has done its service by now.
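To make the contrast concrete, here is a minimal Lean sketch, offered purely as an illustration of the distinction being gestured at rather than as the author's own example: one direction of double negation is provable constructively, while the converse requires the classical (Boolean) law of excluded middle.

```lean
-- Constructively provable: from p we can refute ¬p.
theorem double_neg_intro (p : Prop) : p → ¬¬p :=
  fun hp hnp => hnp hp

-- The converse needs classical (Boolean) reasoning via the law of excluded middle.
theorem double_neg_elim (p : Prop) : ¬¬p → p :=
  fun hnnp => (Classical.em p).elim id (fun hnp => absurd hnp hnnp)
```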

To sum up, what I am expecting of this paradigm shift is clarification, simplification and unification of our understanding, and with it the emergence of a more lucidly expressive language conducive to the End of Fragmentation of Knowledge.

Scott Sampson
President & CEO, Science World British Columbia; Dinosaur paleontologist and science communicator; Author, How To Raise A Wild Child

Evolution is the scientific idea that will change everything within the next several decades.

I recognize that this statement might seem improbable.  If evolution is defined generally, simply as change over time, the above statement borders on meaningless.  If regarded in the narrower, Darwinian sense, as descent with modification, any claim for evolution's starring role also appears questionable, particularly given that 2009 is the 150th anniversary of the publication of On the Origin of Species.  Surely Darwin's "Dangerous Idea," however conceived, has made its mark by now.  Nevertheless, I base my claim on evolution's probable impacts in two great spheres: human consciousness and science and technology.

Today, the commonly accepted conception of evolution is extremely narrow, confined largely to the realm of biology and a longstanding emphasis on mutation and natural selection.  In recent decades, this limited perspective has become further entrenched by the dominance of molecular biology and its "promise" of human-engineered cells and lifeforms.  Emphasis has been placed almost entirely on the generation of diversity — a process referred to as "complexification" — reflecting the reductionist worldview that has driven science for four centuries. 

Yet science has also begun to explore another key element of evolution — unification — which transcends the biological to encompass evolution of physical matter.  The numerous and dramatic increases in complexity, it turns out, have been achieved largely through a process of integration, with smaller wholes becoming parts of larger wholes.  Again and again we see the progressive development of multi-part individuals from simpler forms.  Thus, for example, atoms become integrated into molecules, molecules into cells, and cells into organisms.  At each higher, emergent stage, older forms are enveloped and incorporated into newer forms, with the end result being a nested, multilevel hierarchy. 

At first glance, the process of unification appears to contravene the second law of thermodynamics by increasing order over entropy.  Again and again during the past 14 billion years, concentrations of energy have emerged and self-organized as islands of order amidst a sea of chaos, taking the guise of stars, galaxies, bacteria, gray whales, and, on at least one planet, a biosphere.  Although the process of emergence remains somewhat of a mystery, we can now state with confidence that the epic of evolution has been guided by counterbalancing trends of complexification and unification.  This journey has not been an inevitable, deterministic march, but a quixotic, creative unfolding in which the future could not be predicted. 

How will a more comprehensive understanding of evolution affect science and technology?  Already a nascent but fast-growing industry called "biomimicry" taps into nature's wisdom, imitating sustainable, high-performance designs and processes acquired during four billion years of evolutionary R&D.  Water-repellent lotus plants inspire non-toxic fabrics.  Termite mounds inspire remarkable buildings that make use of passive cooling.  Spider silk may provide inspiration for a new, strong, flexible, yet rigid material with innumerable possible uses.  Ultimately, plant photosynthesis may reveal secrets to an unlimited energy supply with minimal waste products.

The current bout of biomimicry is just the beginning.  I am increasingly convinced that ongoing research into such phenomena as complex adaptive systems will result in a new synthesis of evolution and energetics — let's call it the "Unified Theory of Evolution" — that will trigger a cascade of novel research and designs.  Science will relinquish its unifocal downward gaze on reductionist nuts and bolts, turning upward to explore the "pattern that connects."  An understanding of complex adaptive systems will yield transformative technologies we can only begin to imagine.  Think about the potential for new generations of "smart" technologies, with the capacity to adapt, indeed to evolve and transform, in response to changing conditions. 

And what of human consciousness?  Reductionism has yielded stunning advances in science and technology.  However, its dominant metaphor, life-as-machine, has left us with a gaping chasm between the human and non-human worlds.  With "Nature" (the non-human world) reduced merely to resources, humanity's ever-expanding activities have become too much for the biosphere to absorb.  We have placed ourselves, and the biosphere, on the precipice of a devastating ecological crisis, without the consciousness for meaningful progress toward sustainability.  

At present, Western culture lacks a generally accepted cosmology, a story that gives life meaning.  One of the greatest contributions of the scientific enterprise is the epic of evolution, sometimes called the Universe Story.  For the first time, thanks to the combined efforts of astronomers, biologists, and anthropologists (among many others), we have a realistic, time-developmental understanding of the 14 billion year history of us.  Darwin's tree of life has roots that extend back to the Big Bang, and fresh green shoots reach into an uncertain future.  Far from leading to a view that the Universe is meaningless, this saga provides the foundation for seeing ourselves as fully embedded into the fabric of nature.  To date, this story has had minimal exposure, and certainly has not been included (as it should be) in the core of our educational curricula. 

Why am I confident that these transformations will occur in the near future?  In large part because necessity is the mother of invention.  We are the first generation of humans to face the prospect that humanity may have a severely truncated future.  In addition to new technologies, we need a new consciousness, a new worldview, and new metaphors that establish a more harmonious relationship between the human and the non-human.  Of course, the concept of "changing everything" makes no up-front value judgments, and I can envision evolution's net contribution as being either positive or negative, depending on whether the shift in human consciousness keeps pace with the radical expansion of new (and potentially even more exploitative) technologies.  In sum, our future R&D efforts need to address human consciousness in at least equal measure to science and technology. 

Dan Sperber
Social and Cognitive Scientist; CEU Budapest and CNRS Paris; Co-author (with Deirdre Wilson), Meaning and Relevance; and (with Hugo Mercier), The Enigma of Reason

From the Neolithic revolution to the information age, the major changes in the human condition—none of them changing everything, needless to say—have been consequences of new technologies. There is now a glut of new technologies in the offing that will alter the way we live more rapidly and radically than anything before in ways we cannot properly foresee. I wish I could just wax lyrical about some of the developments we can at least sensibly speculate about, but others will do so more competently. So, let me focus on the painfully obvious that we would rather not think about.

Many new technologies can provide new weapons or new ways to use old ones. Access to these technologies is every day easier. In the near future we should expect, with near certainty, that atomic, chemical, and biological weapons of mass destruction will be used in a variety of conflicts. The most important change this will bring about is not that so many will die. Hundreds of thousands have died all these years in wars and natural catastrophes, with an unspeakable impact on the population affected, but, alas, massacres and other forms of collective death have been part and parcel of the human condition. This time, however, many of the victims will belong to powerful modern societies that, since the Second World War, have on the whole been spared. People in these societies are, neither less nor more than the usual poorer and powerless victims of massive violence, entitled to live full decent lives, and have a right to fight for this. What may bring about radical changes is that they will be in a much stronger position to exert and possibly abuse this right. Recent large-scale murderous attacks resulted in the acceptance of fewer checks on executive power, limitation of civil rights, preventive warfare, ethnically targeted public suspicion. In the future, people who will have witnessed even direr events at close quarters may well support even more drastic measures. I am not discussing here the rationality of fears to come, or the extent to which they are likely to be biased and manipulated. I just assume that, for good or bad reasons, they will weigh in favor of limitations to the liberties of individuals and to the independence of countries.

One must hope that, in part thanks to the changes brought about by novel technologies, new forms of social and political understanding and action will develop to help address at the root issues that otherwise might give rise to ever more lethal conflicts. Still, while more and more powerful technologies are becoming more and more accessible, there is no reason to believe that humans are becoming commensurately wiser and more respectful of one another's rights. There will be, then, at least in most people's perception, a direct clash between their safety and their liberty and even more between their safety and the liberty of others. The history of this century—our history, that of our children and grandchildren—will in good part be that of the ways in which this clash is played out, or overcome.

Robert Sapolsky
Neuroscientist, Stanford University; Author, Behave

We humans are pretty impressive when it comes to being able to extract information, to discern patterns from lots of little itsy bitsy data points. Take a musician sitting down with a set of instructions on a piece of paper — sheet music — and being able to turn it into patterned sound. And one step further is the very well-trained musician who can sit and read through printed music, even an entire orchestral score, and hear it in his head, and even feel swept up in emotion at various points in the reading. Even more remarkable is the judge in a composition competition, reading through a work that she has never heard before, able to turn that novel combination of notes into sounds in her head that can be judged to be hackneyed and derivative, or beautiful and original.

And, obviously, we do it in the scientific realm in a pretty major way. We come to understand how something works by being able to make sense of how a bunch of different independent variables interact in generating some endpoint. Oh, so that's how mitochondria have evolved to solve that problem, that's what a temperate zone rain forest does to balance those different environmental forces challenging it. Now I know.

The trouble is that it is getting harder to do that in the life sciences, and this is where something is going to have to happen which will change everything.

The root of the problem is technology outstripping our ability to really make use of it. This isn't so much about the ability to get increasingly reductive biological information. Scientists figured out quite some time ago how to sequence a gene, identify a mutation, get the crystallographic structure of a protein, or measure ion flow through a single channel in a cell.

What the recent development has been is to be able to get staggeringly large amounts of that type of information. We have not just sequenced genes, but sequenced our entire human genome. And we can compare it to that of other species, or can look at genome-wide differences between human populations, or even individuals, or information about tens of thousands of different genes. And then we can look at expression of those genes — which ones are active at which time in which cell types in which individuals in which populations in which species.

We can do epigenomics, where instead of cataloging which genes exist in an individual, we can examine which genes have been modified in a long-term manner to make it easier or harder to activate them (in each particular cell type). Or we can do proteomics, examining which proteins and in what abundance have been made as the end product of the activation of those genes, or post-translational proteomics, examining how those proteins have been modified to change their functions.

Meanwhile, the same ability to generate massive amounts of data has emerged in other realms of the life sciences. For example, it is possible to do near continuous samplings of blood glucose levels, producing minute-by-minute determinations, or do ambulatory cardiology, generating heart beat data 24/7 for days from an individual going about her business, or use state-of-the-art electrophysiological techniques to record the electrical activity of scores of individual neurons simultaneously.

So we are poised to be able to do massive genomo-epigenomo-proteonomo-glyco-endo-neurono-orooni-omic comparisons of the Jonas Brothers with Nelson Mandela with a dinosaur pelvis with Wall-E and thus better understand the nature of life.

The problem, of course, is that we haven't a clue what to do with that much data. By that, I don't mean "merely" how to store, or quantitatively analyze, or present it visually. I mean how to really think about it.

You can already see evidence of this problem in too many microarray papers (this is the approach where you can ask, "In this particular type of tissue, which genes are more active and which less active than usual under this particular circumstance?"). With the fanciest versions of this approach, you've got yourself thousands of bits of information at the end. And far too often, what is done with all this suggests that the scientists have hit a wall in terms of being able to squeeze insight out of their study.

For example, the conclusion in the paper might be, "Eleventy genes are more active under this circumstance, whereas umpteen genes are less active, and that's how things work." Or maybe the punch line is, "Of those eleventy genes that are more active, an awful lot of them have something to do with, say, metabolism, how's about that?" Or in a sheepish tone, it might be, "So changes occurred in the activity of eleventy + umpteen different genes, and we don't know what most of them do, but here are three that we do know about and which plausibly have something to do with this circumstance, so we're now going to focus on those three that we already know something about and ignore the rest."
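A caricature of that kind of analysis fits in a few lines, which is part of the point: the mechanics are easy, the insight is not. The genes, numbers, and annotations below are invented.

```python
# Toy version of the microarray punch lines above: flag genes whose activity changes
# more than two-fold, then count how many of the flagged genes share an annotation.
expression = {   # gene -> (baseline activity, activity under the circumstance); invented numbers
    "geneA": (10, 25), "geneB": (8, 3), "geneC": (12, 13), "geneD": (5, 16), "geneE": (20, 9),
}
annotation = {"geneA": "metabolism", "geneD": "metabolism", "geneE": "signaling"}

up   = [g for g, (before, after) in expression.items() if after / before > 2]
down = [g for g, (before, after) in expression.items() if before / after > 2]
metabolic = [g for g in up if annotation.get(g) == "metabolism"]

print(f"{len(up)} genes more active, {len(down)} less active")
print(f"of the more-active genes, {len(metabolic)} are annotated 'metabolism'")
```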

In other words, the technologies have outstripped our abilities to be insightful far too often. We have some crutches — computer graphics allow us to display a three-dimensional scatter plot, rotate it, change it over time. But we still barely hold on.

The thing that is going to change everything will have to wait for, probably, our grandkids. It will come from their growing up with games and emergent networks and who knows what else that (obviously) we can't even imagine. And they'll be able to navigate that stuff as effortlessly as we troglodytes can currently change radio stations while driving while talking to a passenger. In other words, we're not going to get much out of these vast data sets until we have people who can intuit in six dimensions. And then, watch out.

Paul W. Ewald
Professor of Biology, Amherst College; Author, Plague Time

Thirteen decades ago, Louis Pasteur and Robert Koch led an intellectual revolution referred to as the germ theory of disease, which proposes that many common ailments are caused by microbes. Since then the accepted spectrum of infectious causation has broadened steadily and dramatically.  The diseases that are most obviously caused by infection were accepted as such by the end of the 19th century; almost all of them were acute diseases.  Acute diseases with a transmission twist, mosquito-borne malaria for example, were accepted a bit later, at the beginning of the 20th century.  Since the early 20th century, the spectrum has been broadened mostly by recognition of infectious causation of chronic diseases.  The first of these had distinctly infectious acute phases, which made infectious causation of the chronic disease more obvious. Infectious causation of shingles, for example, was made more apparent by its association with chicken pox.  Over the past thirty years, the spectrum of infectious causation has been broadened mostly through inclusion of chronic diseases without obvious acute phases.  With years or even decades between the onset of infection and the onset of such diseases, demonstration of infectious causation is difficult.

Technological advances have been critical to resolving the ambiguities associated with such cryptic infectious causation.  In the early 1990s Kaposi's sarcoma-associated herpesvirus was discovered using a molecular technique that stripped away the human genetic material from Kaposi's sarcoma cells and examined what remained.  A similar approach revealed the hepatitis C virus in blood transfusions.  In these cases there were strong epidemiological signs that an infectious agent was present.  When the cause was discovered, acceptance did not have to confront the barrier of entrenched opinions favoring other, non-infectious causes.  If such special interests are present, the evidence has to be proportionately more compelling.  Such is the case for schizophrenia, atherosclerosis, Alzheimer's disease, breast cancer, and many other chronic diseases, which are now the focus of vehement disagreements.

Advances in molecular/bioinformatic technology are poised to help resolve these controversies.  This potential is illustrated by two discoveries, which seem cutting edge now but will soon be considered primitive first steps.  About a decade ago, one member of a Stanford team scraped spots on two teeth of another team member and amplified the DNA from the scrapings.  They found sequences that were sufficiently unique to represent more than 30 new species.  This finding hinted at the magnitude of the challenge: tens or perhaps even hundreds of thousands of viruses and bacteria may need to be considered to evaluate hypotheses of infectious causation.
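The logic of that subtraction step can be caricatured in a few lines: remove everything that matches the host and see what is left. The sequences below are invented, and the real molecular and bioinformatic workflow is vastly more involved.

```python
# Toy sketch of 'strip away the host and study what remains' (invented sequences).
host_sequences = {"ACGTA", "CGTAC", "GTACG", "TACGT"}               # known host fragments
sample_reads = ["ACGTA", "TTGCA", "GTACG", "TTGCC", "TTGCA"]         # fragments from the lesion

unexplained = [read for read in sample_reads if read not in host_sequences]
print(unexplained)   # candidate non-host (possibly microbial) sequences worth a closer look
```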

The second discovery provides a glimpse of how this challenge may be addressed.  Samples from prostate tumors were tested on a micro-array that contained 20,000 DNA snippets from all known viruses.  The results documented a significant association with an obscure retrovirus related to one that normally infects mice. If this virus is a cause of prostate cancer, it causes only the small portion that occurs in men with a particular genetic background. Other viruses have been associated with prostate cancer in patients without this genetic background. So, not only may thousands of viruses need to be tested to find one correlated with a chronic disease, but even then it may be one of perhaps many different infectious causes.

The problems of multiple pathogens and ingrained predispositions are now coming to a head in research on breast cancer.  Presently, three viruses have been associated with breast cancer: mouse mammary tumor virus, Epstein-Barr virus, and human papillomavirus.  Researchers are still arguing about whether these correlations reflect causation.  If they do, these viruses account for somewhere between half and about 95% of breast cancer, depending on the extent to which they act synergistically. Undoubtedly array technology will soon be used to assess this possibility and to identify other viruses that may be associated with breast cancers.

There is a caveat. These technological advancements provide sophisticated approaches to identifying correlations between pathogens and disease.  They do not bridge the gulf between correlation and causation.  One might hope that with enough research all aspects of the pathological process could be understood, from the molecular level up to the whole patient.  But as one moves from molecular to the macro levels, the precision of interpretation becomes confounded by the complex web of interactions that intervene, especially in chronic diseases.   Animal models are generally inadequate for chronic human diseases because the disease in animals is almost never quite the same as the human disease. The only way out of this conundrum, I think, will be to complement the technological advancements in identifying candidate pathogens with clever clinical trials.  These clinical trials will need to use special states, such as temporary immune suppression, to identify those infections that are exacerbated concurrently with exacerbations of the chronic disease in question.  Such correlations will then need to be tested for causation by treatment of the exacerbated infection to determine whether the suppression of the infection is associated with amelioration of the disease.  

Why will this process change things?  For those of us who live in prosperous countries, infectious causes are implicated but not accepted in most common lethal diseases: cancers, heart attacks, stroke, Alzheimer's disease.  Infectious causes are also implicated in the vast majority of nonlethal, incapacitating illnesses of uncertain cause, such as arthritis, fibromyalgia, and Crohn's disease.  If infectious causes of these diseases are identified, medical history tells us that the diseases will tend to be resolved.

A reasonable estimate of the net effect would be a rise in healthy life expectancy by two or three decades, pushing lifespan up against the ultimate boundary of longevity molded by natural selection, probably an age range between 95 and 105 years.  Pushed up against this barrier, people could be expected to live healthy lives into their 90s and then go downhill quickly.  This demographic transition toward healthy survival will improve productivity, lower medical costs, and enhance quality of life.  In short, it will be one of medicine's greatest contributions.

work has been exhibited at Tate Britain and the Institute of Contemporary Arts, London

I think what I will live to see is a quite different kind of male subjectivity. As we increasingly exit the cultural and behavioural reflexes of industrial society, with its distinctive separation of labour between men and women, the basis for our still pervasive idea of what constitutes 'masculinity' is eroded as well. Eventually this will trickle down to the conception of male subjectivity, with each new generation taking its own small step.

I really hope I will live to see this, as (although our epoch is of course a very challenging and thereby interesting one, with the whole sustainability issue) the codes and modes of behaviour and expression available to men are extremely limited and simplistic, which makes our times slightly dull. I am looking forward to seeing my own child grow up and to what his generation's contribution will be, but even more so I sometimes feel a sense of impatience to see how young men will be when I am very old.

Bruce Parker
Visiting Professor, Stevens Institute of Technology; Author, The Power of the Sea: Tsunamis, Storm Surges, and Our Quest to Predict Disasters

Even with all the scientific and technological advances of the last two millennia and especially of the last century, humankind itself has not really changed.  The stories we read in our most ancient books do not seem alien to us.  On the contrary, the humans who wrote those works had the same needs and desires that we have today, though the means of meeting those needs and of fulfilling those desires may have changed in some of the details.

The human species has managed to survive a great number of truly monumental catastrophes, some naturally caused (floods, droughts, tsunamis, glaciations, etc.) and some the result of its own doing, often with the help of science and technology (especially in creating the tools of war).  But such calamities did not really "change everything."  (This statement is certainly not meant to minimize the tragedy of the millions of lives lost in these catastrophes.)  Though we worry about the possible dramatic effects that an anthropogenically changed global climate might have, humankind itself will survive such changes (because of its science and technology), though we cannot predict how many people might tragically die because of it.

In terms of "what will change everything" the larger view is what will significantly change humankind itself.  From this human perspective, the last "event" that truly changed everything was over some period of time around 50,000 years ago when evolutionary advances finally led to intelligent humans who left Africa and spread out over the rest of the world, literally changing everything in the entire world.

Prior to that evolutionary advance in Africa, our ancestors' main motivations in life were like any other animal's — find food and avoid death until they could reproduce.  After they evolved into intelligent beings their motivations in life expanded.  Although they still pursued food and sex and tried to avoid death, they also spent increasing amounts of time on activities aimed at preventing boredom and making them feel good about themselves.  These motivations have not changed in the succeeding millennia, though the means of satisfying them have changed often.

How humankind came to be what it is today was a result of natural selection.  Humans survived in the hostile environment around them (and went on to Earthly dominance) because of the evolved improvements in their brains.  One improvement was the development of curiosity and a desire to learn about the environment in which they lived.  But it was not simply a matter of becoming smarter.  The human species survived and succeeded in this world as much because of an evolved need for affection or connection with other human beings, a social bond.  It was both its increased intelligence and its increased social cooperation that led to increasing knowledge and eventually science and technology.  Not all individuals, of course, had the same degree of these characteristics, as shown by the wars and horrendous atrocities that humankind was capable of, but social evolution driven by qualities acquired from the previous species evolution did make progress overall.  The greatest progress and the greatest gain in knowledge only happened when people worked with each other in harmony and did not kill each other.

The evolution of human intelligence and cooperative social bonding tendencies took a very long time, though it seems quite fast when one appreciates the incredible complexity of this intelligence and social bonding.  How many genes must have mutated and been naturally selected for to achieve this complexity?  We are here today as both a species and a society because of those gene changes and the natural selection process that over this long time period weeded out the bad changes and allowed the good changes to remain.

Technically, evolution of the human species as a result of natural selection stopped when we became a social animal — when the strong began protecting the weak and when our scientific and technological advances allowed us to extend the lives of those individuals unfortunate enough to have genetic weaknesses that would have killed them.  With humans, artificial selection (selective breeding) was never a serious possibility as a replacement for natural selection, and as a result there have been no significant changes to the human species since its societies began.

But now, with the recent great advances in genetic engineering, we are in a position to change the human species for the first time in 50,000 years.  We will be able to put new genes in any human egg or sperm we wish.  The children born with these new genes will grow up and pass them on to their children.  The extensive use of this genetic selection (or should we call it anthropogenic selection?) will rapidly pass new genes and their corresponding (apparently desired) traits throughout the population.  But what will be the overall consequence?  When selecting particular genes that we want while perhaps not understanding how particular gene combinations work, might we unknowingly begin a process that could change our good human qualities?  While striving for higher intelligence, could we somehow genetically diminish our capacity for compassion, or our inherent need for social bonding?  How might the human species be changed in the long run?  The qualities that got us here — the curiosity, the intelligence, the compassion and cooperation resulting from our need for social bonding — involve an incredibly complex combination of genes.  Could these have been produced through genetic planning?

Our ever-expanding genetic capabilities will certainly "change everything" with respect to medicine and health, which will be a great benefit.  Our life span will also be greatly extended, a game-changing benefit to be sure, but it will also add to our overpopulation, the ultimate source of so many problems on our planet.  But the ultimate effect may be on the human species itself.  How many generations might it take before the entire human race is significantly altered genetically?  From a truly human perspective, that would really "change everything."

corey_s__powell's picture
Senior Editor, Discover Magazine; Adjunct Professor, Science Journalism, NYU; Author: God in the Equation: How Einstein Transformed Religion

The tricky, slippery word in this question is "expect." There is nothing that I expect to see with 100 percent certainty, and there are some remarkable things that I expect to see with perhaps 10 or 5 percent certainty (but I sure would be excited if that 5 percent paid off). With that bit of preamble, I'd like to lay out my game-changing predictions ranked by order of expectation, starting with the near-sure things and ending with the thrilling hey-you-never-knows.

The real end of oil. Technology will make liquid fuels obsolete — not just petroleum but also alternatives like biodiesel, ethanol, etc. Fossil fuel supplies are too volatile and limited, the fuels themselves far too environmentally costly, and biofuels will never be more than niche players. More broadly, moving around fuels in liquid form is just too cumbersome. In the future, energy for personal transit might be delivered by wire or by beam. It might not be delivered at all — the Back to the Future "Mr. Fusion" device is not so farfetched (see below). But whatever comes next, in another generation or so pumping fuel into a car will seem as quaint as getting out and cranking the engine to get it started. 
Odds: 95 percent.

Dark matter found. The hunt for the Higgs boson is a yawn from my perspective: Finding it will only confirm a theory that most physicists are fairly sure about already. Identifying dark matter particles — either at the Large Hadron Collider or at one of the direct detectors, like Xenon100 — would be much more significant. It would tell us what the other 6/7ths of all matter in the universe consists of, it would instantly rule out a lot of kooky cosmological theories, and it would allow us to construct a complete history of the universe. 
Odds: 90 percent.

Genetically engineered kids. I'm not just talking about screening out major cancer genes or selecting blue eyes; I'm talking about designing kids who can breathe underwater or who have radically enhanced mental capabilities. Such offspring will rewrite the rules of evolution and redefine what it means to be human. They may very well qualify as a totally new species. From a scientific point of view I think this capability is extremely likely, but legal and ethical considerations may prevent it from happening.
With all that, I put the odds at: 80 percent.

Life detected on an exoplanet. Astronomers have already measured the size, density, temperature, and atmospheric composition of several alien worlds as they transit in front of their parent stars. The upcoming James Webb Space Telescope may be able to do the same for earth-size planets. We haven't found these planets yet but it's a shoo-in that the Kepler mission, launching this spring, or one of the ground-based planet searches will find them soon. The real question is whether the chemical evidence of alien life will be conclusive enough to convince most scientists. (As for life on Mars, I'd say the odds are similar that we'll find evidence of fossil life there, but the likelihood of cross-contamination between Mars and Earth makes Martian life inherently less interesting.)
Odds: 75 percent.

Synthetic telepathy. Rudimentary brain prostheses and brain-machine interfaces already exist. Allowing one person to control another person's body would be a fairly simple extension of that technology. Enabling one person to transmit his thoughts directly to another person's brain is a much trickier proposition, but not terribly farfetched, and it would break down one of the most profound isolations associated with the human condition. Broadcasting the overall state or "mood" of a brain would probably come first. Transmitting specific, conscious thoughts would require elaborate physical implants to make sure the signals go to exactly the right place — but such implants could soon become common anyway as people merge their brains with computer data networks.
Odds: 70 percent.

Lifespan past 200 (or 1,000). I have little doubt that progress in fighting disease and patching up our genetic weaknesses will make it possible for people to routinely reach the full human lifespan of about 120. Going far beyond that will require halting or reversing the core aging process, which involves not just genetic triggers but also oxidation and simple wear-and-tear. Engineering someone to have gills is probably a much easier proposition. Still, if we can hit 200 I see no reason why the same techniques couldn't allow people to live to 1,000 or more.
Odds: 60 percent. 

Conscious machines. Intelligent machines are inevitable — by some measures they are already here. Synthetic consciousness would be a much greater breakthrough, in some ways a more profound one than finding life on other planets. One problem: We don't understand how consciousness works, so recreating it will require learning a lot more about what it means to be both smart and self-aware. Another problem: We don't understand what consciousness is, so it's not clear what "smart" and "self-aware" mean, exactly. Gerald Edelman's brain-based devices are a promising solution. Rather than trying to deconstruct the brain as a computer, they construct neural processing from the bottom up, mimicking the workings of actual neurons.
Odds: 50-50.

Geoengineering. We may be able to deal with global warming through a combination of new energy sources, carbon sequestration, and many local and regional adaptations to a warmer climate. All of these will be technologically challenging but not truly "game-changing." It is possible, though, that the climate impact of our environmental follies will be so severe, and the progress of curative scientific research so dramatic, that some of the pie-in-the-sky geoengineering schemes now being bandied about will actually come to pass. Giant space mirrors and sunshades strike me as the most appealing options, both because they would support an aggressive space program and because they are adjustable and correctable. (Schemes that aim to fight carbon pollution with sulfur pollution seem like a frightening mix of hubris and folly.) Geoengineering techniques are also a good first step toward being able to terraform other planets.
Odds: 25 percent.

Desktop fusion. The ITER project will prove that it is possible to spend billions of dollars to construct an enormous device that produces controlled hydrogen fusion at a net loss of energy. A few left-field fusion researchers — most notably the ones associated with Tri-Alpha — are exploring a much more ambitious approach that would lead to the construction of cheap, compact reactors. These devices could in theory take advantage of more exotic, neutron-free fusion reactions that would allow almost direct conversion of fusion energy to electricity. The old dream of a limitless power plant that could fit under the hood of your car or in a closet in your house might finally come true. Since energy is the limiting factor for most economic development, the world economy (and the potential for research and exploration) would be utterly transformed.
Odds: 20 percent.
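
A note on the "neutron-free" reactions mentioned in the fusion item above: the example most often cited in this context (and, as far as I know, the one Tri-Alpha has pursued) is proton-boron fusion,

    p + ¹¹B → 3 ⁴He + roughly 8.7 MeV,

in which the energy emerges as the kinetic energy of charged alpha particles rather than neutrons. Charged products can in principle be decelerated electrostatically, which is what makes near-direct conversion to electricity, without a steam cycle, conceivable.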

Communication with other universes. Studies of gravitational wave patterns etched into the cosmic microwave background could soon provide hints of the existence of universes outside of our own. Particle collisions at the LHC could soon provide hints of the existence of higher dimensions. But what would really shake the world would be direct measurements of other universes. How exactly that would work is not at all clear, since any object or signal that crossed over directly from another universe could have devastating consequences; indirect evidence, meanwhile, might not be terribly convincing (e.g., looking for the gravitational pull from shadow matter on a nearby brane). I hold out hope all the same.
Odds: 10 percent

Antigravity devices. Current physics theory doesn't allow such things, but from time to time fringe experiments (mostly involving spinning superconducting disks) allegedly turn up evidence for an antigravity phenomenon. Even NASA has invested dribbles of money in this field, hoping that something exciting and unexpected will pop up. If antigravity really exists it would require revising Einstein's general theory of relativity. It would also vindicate all those science-fiction TV shows in which everyone clomps around heavily in outer space. Given how little we know about how gravity works, antigravity or artificially generated gravity doesn't seem impossible…just highly improbable.
Odds: 5 percent

ESP verified! Probably the closest thing I've seen to a scientific theory of ESP is Rupert Sheldrake's concept of "morphic fields." Right now there's nary a shred of evidence to support the idea — unless you count anecdotes of dogs who know when their owners are about to return home, and people who can "feel" when someone is looking at them — but Sheldrake is totally correct that such off-the-wall ideas merit serious scientific investigation. After all, scientists investigate counterintuitive physics concepts all the time; why not conduct equally serious investigations of the intuitive feelings that people have all the time? Everything I know about science, and about human subjectivity, says that there's nothing to find here. And yet, when I think of a discovery that would change everything this is one of the first that springs to mind.
Odds: 0.1 percent

james_geary's picture
Deputy Curator, Nieman Foundation for Journalism at Harvard; Author, Wit's End

J. Craig Venter may be on the brink of creating the first artificial life form, but one game-changing scientific idea I expect to live to see is the moment when a robotic device achieves the status of "living thing." What convinces me of this is not some amazing technological breakthrough, but watching some videos of the annual RoboCup soccer tournament, which Georgia Tech hosted in Atlanta. The robotics researchers behind RoboCup are determined to build a squad of robots capable of winning against the world champion human soccer team. For now, they are just competing against other robots.

For a human being to raise a foot and kick a soccer ball is an amazingly complex event, involving millions of different neural computations co-ordinated across several different brain regions. For a robot to do it — and to do it as gracefully as members of the RoboCup Humanoid League — is a major technical accomplishment. The cuddlier, though far less accomplished, quadrupeds in the Four-Legged League are also a wonder to behold. Plus, the robots are not programmed to do this stuff; they learn to do it, just like you and me.

These robots are marvels of technological ingenuity. They are also "living" proof of how easily, eagerly even, we can anthropomorphize robots — and why I expect there won't be much of a fuss when these little metallic critters start infiltrating our homes, offices, and daily lives.

I also expect to see the day when robots like these have biological components (i.e. some wetware to go along with their hardware) and when human beings have internal technological components (i.e. some hardware to go along with our wetware). Researchers at the University of Pittsburgh have trained two monkeys to munch marshmallows using a robotic arm controlled by their own thoughts. During voluntary physical movements, such as reaching for food, nerve cells in the brain start firing well before any movement actually takes place. It's as if the brain warms up for an impending action by directing specific clusters of neurons to fire, just as a driver warms up a car by pumping the gas pedal. The University of Pittsburgh team implanted electrodes in this area of the monkeys' brains and connected them to a computer operating the robotic limb. When the monkeys thought about reaching for a marshmallow, the mechanical arm obeyed that command. In effect, the monkeys had three arms for the duration of the experiments.
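
To make concrete what the computer between brain and arm is doing, here is a deliberately simplified sketch in Python. Everything in it is invented for illustration (the channel count, the firing statistics, the use of plain least squares); the real experiments rely on far more sophisticated algorithms, but the shape of the problem is the same: learn a map from firing rates to movement during a calibration phase, then apply that map to neural activity alone.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical numbers: 96 recorded neurons, 2000 calibration samples.
    n_neurons, n_samples = 96, 2000

    # An unknown relationship between firing rates and hand velocity (vx, vy),
    # standing in for whatever the motor cortex is actually doing.
    true_map = rng.normal(size=(n_neurons, 2))

    # Calibration phase: firing rates recorded while real hand velocities are observed.
    rates = rng.poisson(lam=8.0, size=(n_samples, n_neurons)).astype(float)
    velocities = rates @ true_map + rng.normal(0.0, 1.0, size=(n_samples, 2))

    # Fit a linear decoder by least squares, a crude stand-in for the real algorithms.
    decoder, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

    # "Brain control" phase: new firing rates alone are translated into arm commands.
    new_rates = rng.poisson(lam=8.0, size=(1, n_neurons)).astype(float)
    vx, vy = (new_rates @ decoder)[0]
    print(f"commanded arm velocity: ({vx:.2f}, {vy:.2f})")

Once the decoder is fit, the monkey (or patient) no longer needs to move at all; imagined movement produces firing patterns, and the fitted map turns those patterns into commands for the robotic limb.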

In humans, this type of brain-machine interface (BMI) could allow paralyzed individuals to control prosthetic body parts as well as open up new fields of entertainment and exploration. "The body's going to be very different 100 years from now," Miguel Nicolelis, Anne W. Deane Professor of Neuroscience at Duke University and one of the pioneers of the BMI, has said. "In a century's time you could be lying on a beach on the east coast of Brazil, controlling a robotic device roving on the surface of Mars, and be subjected to both experiences simultaneously, as if you were in both places at once. The feedback from that robot billions of miles away will be perceived by the brain as if it was you up there."

In robots, a BMI could become a kind of mind. If manufacturers create such robots with big wet puppy dog eyes — or even wearing the face of a loved one or a favorite film star — I think we'll grow to like them pretty quickly. When they have enough senses and "intelligence," then I'm convinced that these machines will qualify as living things. Not human beings, by any means; but kind of like high-tech pets. And turning one off will be the moral equivalent of shooting your dog.

gary_marcus's picture
Professor of Psychology, Director NYU Center for Language and Music; Author, Guitar Zero

Within my lifetime (or soon thereafter) scientists will finally decode the language of the brain. At present, we understand a bit about the basic alphabet of neural function, how neurons fire, and how they come together to form synapses, but we haven't yet pieced together the words, let alone the sentences.  Right now, we're sort of like Mendel, at the dawn of genetics: he knew there must be something like genes (what he called "factors"), but couldn't say where they lived (in the protein? in the cytoplasm?) or how they got their job done. Today, we know that thought has something to do with neurons, and that our memories are stored in brain matter, but we don't yet know how to decipher the neurocircuitry within.

Doing that will require a quantum leap. The most popular current techniques for investigating the brain, like functional magnetic resonance imaging (fMRI), are far too coarse. A single three-dimensional "voxel" in an fMRI scan lumps together the actions of tens or even hundreds of thousands of neurons — yielding a kind of rough geography of the brain (emotion in the amygdala, decision-making in the prefrontal cortex) but little in the way of specifics.  How does the prefrontal cortex actually do its thing? How does the visual cortex represent the difference between a house and a car, or a Hummer and a taxi? How does Broca's area know the difference between a noun and a verb?

To answer questions like these, we need to move beyond the broad scale geographies of fMRI and down to the level of individual neurons.

At the moment, that's a big job.  For one thing, in the human brain there are billions of neurons and trillions of connections between them; the sheer amount of data involved is overwhelming. For another, until recently we've lacked the tools to understand the function of individual neurons in action, within the context of microcircuits.

But there's good reason to think all that's about to change. Computers continue to advance at a dizzying pace.  Then there's the truly unprecedented explosion in databases like the Human Genome and the Allen Brain Atlas, enormously valuable datasets that are shared publicly and instantly available to all researchers, everywhere; even a decade ago there was nothing like them. Finally, genetic neuroimaging is just around the corner — scientists can now induce individual neurons to fire and (literally) light up on demand, allowing us to understand individual neural circuits in a brand new way.

Technical advances alone won't be enough, though — we'll need a scientist with the theoretical vision of Francis Crick, who not only helped identify the physical basis of genes — DNA — but also the code by which the individual nucleotides of a gene get translated (in groups of three) into amino acids.  When it comes to the brain, we already know that neurons are the physical basis of thinking and knowledge, but not the laws of translation that relate one to the other.

I don't expect that there will be one single code. Although every creature uses essentially the same translation between DNA and amino acids, different parts of the brain may translate between neurons and information in different ways.  Circuits that control muscles, for example, seem to work on a system of statistical averaging; the angle at which a monkey extends its arm seems, as best we can tell, to be a kind of statistical average of the actions of hundreds of individual neurons, each representing a slightly different angle of possible motion, 44 degrees, 44.1 degrees, and so forth. Alas, what works for muscles probably can't work for sentences and ideas, so-called declarative knowledge like the proposition that "Michael Bloomberg is the Mayor of New York" or the idea that my flight to Montreal leaves at noon. It's implausible that the brain would have a vast population of neurons reserved for each specific thought I might entertain ("my flight to Montreal leaves at 11:58 am", "my flight to Montreal leaves at 11:59 am", etc). Instead, the brain, like language itself, needs some sort of combinatorial code, a way of putting together smaller pieces (Montreal, flight, noon) into larger elements.
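
The "statistical averaging" scheme for muscle control can be made concrete in a few lines. The sketch below is my own illustration rather than anything taken verbatim from the literature: it assumes made-up cosine tuning curves and then recovers the intended reach angle as a firing-rate-weighted average of each neuron's preferred angle, the classic population-vector picture.

    import numpy as np

    rng = np.random.default_rng(0)

    n_neurons = 300
    preferred = rng.uniform(0.0, 360.0, n_neurons)   # each neuron's preferred reach angle
    intended = 44.0                                  # the movement the brain "wants"

    # Assumed cosine tuning: a neuron fires fastest when the intended angle
    # matches its preferred angle, plus a little noise.
    baseline, gain = 10.0, 20.0
    rates = baseline + gain * np.cos(np.radians(intended - preferred))
    rates += rng.normal(0.0, 2.0, n_neurons)

    # Population-vector decoding: weight each preferred direction by how far that
    # neuron's rate sits above the population average, then take the resultant angle.
    weights = rates - rates.mean()
    x = np.sum(weights * np.cos(np.radians(preferred)))
    y = np.sum(weights * np.sin(np.radians(preferred)))
    decoded = np.degrees(np.arctan2(y, x)) % 360.0

    print(f"intended {intended:.1f} degrees, decoded {decoded:.1f} degrees")

No single neuron in this toy model "knows" the angle; the estimate exists only in the population average, which is exactly why such a scheme cannot simply be scaled up, one dedicated population per proposition, to handle declarative knowledge.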

When we crack that nut, when we figure out how the brain manages to encode declarative knowledge, an awful lot is going to change. For one thing, our relationship to computers will be completely and irrevocably altered; clumsy input devices like mice, windows, keyboards, and even heads-up displays and speech recognizers will go the way of typewriters and fountain pens; our connection to computers will be far more direct. Education, too, will fundamentally change, as engineers and cognitive scientists begin to leverage an understanding of brain code into ways of directly uploading information into the brain. Knowledge will become far cheaper than it already has become in the Internet era; with luck and wisdom, we as a species could advance immeasurably.

david_bodanis's picture
Writer; Futurist; Author, Einstein's Greatest Mistake

The big one coming up is going to be massive technological failure: so strong that it will undermine faith in science for a generation or more.

It's going to happen because science is expanding at a fast rate, and over the past few centuries the more science we've had, the more powerful the technology that followed it has been — albeit with some time lags.

That's where the problem will arise. With each technology, the amplitude of its effects gets greater: both positive and negative. Automobiles, for example, are an early 20th century technology (based on 18th and 19th century science), which caused a certain amount of increased mobility, as well as a certain number of traffic deaths. The amount on each side was large, but not so large that the negative effects couldn't be accepted. Even when the negative effects came to be understood to include land-use problems or pollution, those have still generally been considered manageable. There's little desire to terminate all scientific inquiry because of them.

Nuclear power is a mid 20th century technology (based on early 20th century science). Its overall power is greater still, and so is the amplitude of its destructive possibilities. Through good chance its negative use has, so far, been restricted to the destruction of two cities. Yet even that led to a great wave of generalized, anti-scientific feeling, not least from among the many people who'd always felt it's impious to interfere with the plans of God.

The internet is in many ways an even more powerful technology (based on early 20th century quantum mechanics, and mid 20th century information theory). So far its problems have been manageable, whether in the surveillance of personal activity or in virus-like intrusions that interrupt important services. But the internet will get stronger and more widespread, as will the collaborative and other tools allowing its misuse: the negative effects will be greater still.

Thus the dynamic we face. Science brings magic from the heavens. In the next few decades, clearly, it will get stronger. Yet just as inevitably, one of its negative amplitudes — be it in harming health, or security, or something as yet unrecognized — will pass an acceptable threshold. When that happens, society is unlikely to respond with calm guidelines. Instead, there will be blind fury against everything science has done.

philippe_parreno's picture
French artist and filmmaker, of Algerian heritage.

Could we take the next step by breaking down the strict distinction between reality and fiction: NO MORE REALITY!

andrian_kreye's picture
Editor-at-large of the German Daily Newspaper, Sueddeutsche Zeitung, Munich

It should be an easy transition. Instead of thinking about energy as a commodity to harvest, we will manufacture new sources of power. The medieval quest for new sources of that life force called energy will be over, including all those white knights on horses conquering the wild lands where those sources happen to be. Technologically this will mean a shift from an energy industry dominated by geologists and engineers to a wave of innovations driven by biologists and chemists.

The thought process itself has already been set in motion. The surge of first-generation biofuels has been based on the idea of renewable sources of energy. Still, most alternative energies like solar and wind power are based on the old way of thinking about harvesting. Most biofuels are preceded by a literal harvest of crops. Craig Venter's work on a microorganism that can transform CO2, sunlight and water into fuel is already jumping quite a few steps ahead.

This new approach will drastically reduce the EIoER ratio, which has so far slowed down the commercial viability of most innovations in the search for alternative sources of energy. Any fuel that can be synthetically "grown" in a lab or factory will be economically much more viable for mass production than the conversion of sunlight, wind or agricultural goods.
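
Reading "EIoER" as energy invested over energy returned (my gloss on the abbreviation, and the reciprocal of the more familiar EROEI), the quantity to be driven down is simply

    EIoER = energy invested / energy returned = 1 / EROEI.

A fuel becomes commercially attractive only when this ratio falls well below one, that is, when the process yields substantially more energy than it took to grow or synthesize the fuel.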

Lab-based production of synthetic sources of energy will also end the geopolitical dependencies now tied to the consumption of power and thus change the course of recent history in the most dramatic fashion. This will eliminate the sources of many current and future conflicts, first and foremost in the Gulf region, but also in the Northern part of South America, in the Black Sea region and the increasingly exploitable Arctic.

The introduction of biological processes into the energy cycle will also minimize the impact of energy consumption on the environment. If made available cheaply, possibly as an open source endeavor, it will allow emerging nations to develop new arable land and create wealth while avoiding conflict and environmental negligence.

There could of course be downsides to the emergence of new sources of energy. Transitions are never easy, no matter how benign or progressive. The loss of economic and political power by oil- and gas-producing nations and corporations could become a new, if temporary, source of conflict. Unforeseen dangers in production might emerge, affecting the environment and public health. New monopolies could be formed.

The shift from harvesting to manufacturing energy would not only affect the economy, politics and the environment. Turning mankind from mere harvesters of energy into manufacturers would lead to a whole new way of thinking that could spur even greater innovations, because every form of economic and technological empowerment initiates leaps that go way beyond the practical application of new technologies. It's hard to predict where a new mindset will lead. One thing is for sure: it almost always leads to new freedoms and enlightenments.

bart_kosko's picture
Information Scientist and Professor of Electrical Engineering and Law, University of Southern California; Author, Noise, Fuzzy Thinking

Society will change when the poor and middle class have easy access to cryonic suspension of their cognitive remains — even if the future technology involved ultimately fails.

Today we almost always either bury dead brains or burn them. Both disposal techniques result in irreversible loss of personhood information because both techniques either slowly or quickly destroy all the brain tissue that houses a person's unique neural-net circuitry. The result is a neural information apocalypse and all the denial and superstition that every culture has evolved to cope with it.

Some future biocomputing technology may extract and thus back up this defining neural information or wetware. But no such technology is in sight despite the steady advance of Moore's Law — the doubling of transistor density on computer chips every two years or so. Nor have we cracked the code of the random pulse train from a single neuron. Hence we are not even close to making sense of the interlocking pulse trains of the billions of chattering neurons in a functioning human brain.

So far the only practical alternative to this information catastrophe is to vitrify the brain and store it indefinitely in liquid nitrogen at about -320 degrees Fahrenheit. Even the best vitrification techniques still produce massive cell damage that no current or even medium-term technology can likely reverse. But the shortcomings of early twenty-first century science and engineering hardly foreclose the technology options that will be available in a century and far less so in a millennium. Suspended brain tissue needs only periodic replacement of liquid nitrogen to wait out the breakthroughs.

Yet right now there are only about 100 brains suspended in liquid nitrogen in a world where each day about 150,000 people die.

That comes to fewer than three suspended brains per year since a 40-year-old, post-Space Odyssey Stanley Kubrick hailed the promise of cryonic suspension in his 1968 Playboy interview. Kubrick cast death as a problem of bioengineering: "Death is no more natural or inevitable than smallpox or diphtheria. Death is a disease and as susceptible to cure as any other disease." The Playboy interviewer asked Kubrick if he was interested in being frozen. Kubrick said that he "would be if there were adequate facilities available." But just over three decades later Kubrick opted for the old neural apocalypse when he could easily have afforded a first-class cryonic suspension in quite adequate facilities.

The Kubrick case shows that dollar cost is just one factor that affects the ease of mass access to cryonics. Today many people can afford a brain-only suspension by paying moderate premiums for a life-insurance policy that would cover the expenses. But almost no one accepts that cryonics wager. There are also stigma costs from the usual scolds in the church and in bioethics. There is likewise no shortage of biologists who will point out that you cannot get back the cow from the hamburger.

And there remains the simple denial of the inexorable neural catastrophe. That denial is powerful enough that it keeps the majority of citizens from engaging in rational estate planning. The probate code in some states such as California even allows valid handwritten wills that an adult can pen (but not type) and sign in minutes and without any witnesses. But only a minority of Californians ever executes these handwritten wills or the more formal attested wills. The great majority dies intestate and thus lets the chips fall where the state says they fall.

So it is not too surprising that the overwhelming majority of the doomed believe that the real or imagined transaction costs of brain suspension outweigh its benefits if they think about the matter at all. But those costs will only fall as technology marches on ever faster and as the popular culture adapts to those tech changes. One silver lining of the numbing parade of comic-book action movies is how naturally the younger viewing audience tends to embrace the fanciful information and biotechnology involved in such fare even if the audience lacks a like enthusiasm for calculus.

Again none of this means that brain suspension in liquid nitrogen will ever work in the sense that it leads to some type of future resurrection of the dead. It may well never work because the required neuro-engineering may eventually prove too difficult or too expensive or because future social power groups outlaw the practice or because of many other technical or social factors. But then again it may work if enough increased demand for such brain suspensions produces enough economies of scale and spurs enough technical and business innovation to pull it off. There is plenty of room for skepticism and variation in all the probability estimates.

But just having an affordable and plausible long shot at some type of resurrection here on Earth will in time affect popular belief systems and lengthen consumer time horizons. That will in turn affect risk profiles and consumption patterns and so society will change and perhaps abruptly so. A large enough popular demand for brain suspensions would allow democracies to directly represent some of the interests from potential far-future generations because no one would want themselves or their loved ones to revive and find a spoiled planet. Our present dead-by-100 life spans make it all too easy to treat the planet like a rental car as we run up the social credit cards for unborn debtors.

The cryonics long shot lets us see our pending brain death not as the solipsistic obliteration of our world but as the dreamless sleep that precedes a very major surgery.

jamshed_bharucha's picture
Psychologist; President Emeritus, Cooper Union

An understanding of how brains synchronize — or fail to do so — will be a game-changing scientific development.

Few behavioral forces are as strong as the delineation of in-groups and out-groups: 'us' and 'them'. Group affiliation requires alignment, coupling or synchronization of the brain states of members. Synchronization yields cooperative behavior, promotes group cohesion, and creates a sense of group agency greater than the sum of the individuals in the group. In the extreme, synchronization yields herding behavior. The absence of synchronization yields conflict.

People come under the grip of ideologies, emotions and moods are infectious, and memes spread rapidly through populations. Ethnic, religious, and political groups act as monolithic forces. Mobs, cults and militias are characterized by the melding of large numbers of individuals into larger units, such that the brains of individuals operate in lockstep — a single organism controlled by a single — distributed — nervous system.

Leaders who mobilize large followings have an intuitive ability to synchronize brains or to plug into systems that already are synchronized.

Herding behavior has received a great deal of attention in economics. In the recent financial bubble that eventually burst, investors and regulators were swept up by a wave of blinding optimism and over-confidence. Contrary information was discounted, and analysis from first principles ignored.

Herding behavior is prevalent in times of war. A group that perceives itself to be under attack binds together as a collective fighting unit, without questioning. When swift synchronization is critical and the stakes are high, psychological forces such as duty, loyalty, conformity, compliance — all of which promote group cohesion — come to the fore, overwhelming the rational faculties of individual brains.

Synchronization is found in many species, although the mechanisms may not be the same. Flocks of birds fly in tight formation. Fish swim in schools, and to a distant observer appear as one aggregated organism. Wolves hunt in packs. Some instances of synchronization are driven by environmental cues that regulate individual brains in the same way. For example, light cycles and seasonal cycles can entrain biorhythms of individuals who share the genetic predisposition to be regulated in this way. In other cases, the co-evolution of certain behaviors together with the perception of these behaviors holds individuals together, as in the ability to both produce and recognize species-specific vocalizations.

Synchronization is mediated by communication between brains. Communicative channels include language as well as non-verbal modes such as facial expressions, gestures, tone of voice, and music. Communication across regions of an individual brain is simply a special case of a system that includes communication between brains.

Elsewhere I have argued that music serves to synchronize brain states involved in emotion, movement, and the recognition of patterns — thereby promoting group cohesion. As with tradition or ritual, what's being synchronized needn't have intrinsic utility; it may not matter what's being synchronized. The very fact of synchronization can be a powerful source of group agency.

Just around the corner is an explosion of research that regards individual brains as nodes in a system bound together by multiple channels of communication. Information technology has provided novel ways for brains to align across great distances and over time. When a song becomes a hit, millions around the world are aligned, forming a virtual unit. In the future, brain prostheses and artificial interfaces for biological systems will add to the picture.

Some clues are emerging about how brains synchronize. The hot recent discovery is the existence of mirror neurons — brain cells that respond to the actions of other individuals as if one were making them oneself. Mirror systems are thought to generate simulations of the behavior of others in one's own brain, enabling mimicking and empathy. Other pieces of the puzzle have been accumulating for a while. Certain cases of frontal lobe damage result in asocial behavior.

Recent work on autism has drawn attention to the mechanisms whereby individuals connect with others. The brain facilitates (sometimes in unfortunate ways) the categorization of oneself and others into in-groups and out-groups. When white participants in an MRI machine view pictures of faces, the amygdala in the left hemisphere of the brain is more strongly activated when the faces are black than when they are white. The brain has circuits specialized for the perception of faces, which convey enormous amounts of information that enable us to recognize people and gauge their emotions and intentions.

Understanding how brains synchronize to form larger systems of behavior will have vast consequences for our grasp of group dynamics, interpersonal relations, education and politics. It will influence how we make sense of — and manage — the powerful unifying forces that constitute group behavior. For better and for worse, it will guide the development of technologies designed to interface with brains, spread knowledge, shape attitudes, elicit emotions, and stimulate action. As with all technological advances, leaders will seize on them to either improve the human condition or consolidate power.

Not all individuals are susceptible to synchronizing with others. Some reject the herd and lose out. Some chart a new course and become leaders. Being contrarian often requires enduring the psychological forces of stigma and ridicule. Understanding the conditions under which people resist will be part of the larger understanding of synchronization.

Understanding how brains synchronize — or fail to do so — will not emerge from a single new idea, but rather from a complex puzzle of scientific advances woven together. What is game-changing is that only recently have researchers begun to frame questions about brain function not in terms of individual brains but rather in terms of how individual brains are embedded in larger social and environmental systems that drive their evolution and development. This new way of framing brain and cognitive science — together with unforeseen technological developments — promises transformational integrations of current and future knowledge about how brains interact.

david_berreby's picture
Journalist; Author, Us and Them

Global 21st-century society depends on an 18th-century worldview.  It's an Enlightenment-era model that says the essence of humanity — and our best guide in life — is cool, conscious reason.  Though many have noted, here on Edge and elsewhere, that this is a poor account of the mind, the rationalist picture still sustains institutions that, in turn, shape our daily lives.

It is because we are rational that governments guarantee our human rights: To "use one's understanding without guidance" (Kant's definition of enlightenment), one needs freedom to inquire, think and speak.  Rationality is the reason for elections (because governments not chosen by thoughtful, evidence-weighing citizens would be irrational). Criminal justice systems assume that impartial justice is possible, which means they assume judges and juries can reason their way through a case.  Our medical system assumes that drugs work for biochemical reasons, applicable to all human bodies — and that the price on the pill bottle makes no difference to their effectiveness.

And free markets presume that all players are avatars of Rational Economic Man: He who consciously and consistently perceives his own interests, relates those to possible actions, reasons his way through the options, and then acts according to his calculations. When Adam Smith famously wrote that butchers, brewers and bakers worked efficiently out of "regard for their own interest," he was doing more than asserting that self-interest could be good.  He was also asserting that self-interest — a long-lasting, fact-based, explicit sense of "what's good for me" — is possible.

The rationalist model also suffuses modern culture.  Rationalist politics requires tolerance for diversity — we can't reason together if we agree on everything. Rationalist economics teaches the same lesson. If everyone agrees on the proper price for all stocks on the market, then there's no reason for those brokers to go to work. This tolerance for diversity makes it impossible to unite society under a single creed or tradition, and that has the effect of elevating the authority of the scientific method. Data, collected and interpreted according to rigorous standards, elucidating material causes and effects, has become our lingua franca. Our modern notions of the unity of humanity are not premised on God or tribe, but on research results.  We say "we all share the same genes," or "we are all working with the same evolved human nature," or appeal in some other way to scientific findings.

This rationalist framework is so deeply embedded in modern life that its enemies speak in its language, even when they violate its tenets. Those who loathe the theory of evolution felt obligated to come up with "creation science." 

Businesses proclaim their devotion to the free market even as they ask governments to interfere with its workings. Then too, tyrants who take the trouble to rig elections only prove that elections are now a universal standard.

So that's where the world stands today, with banks, governments, medical systems, nation-states resting, explicitly or implicitly, on this notion that human beings are rational deciders.

And of course this model looks to be quite wrong. That fact is not what changes everything, but it's a step in that direction.

What's killing Rational Economic Man is an accumulation of scientific evidence suggesting people have (a) strong built-in biases that make it almost impossible to separate information's logical essentials from the manner and setting in which the information is presented; and (b) a penchant for changing their beliefs and preferences according to their surroundings, social setting, mood or simply some random fact they happened to have noticed. The notion that "I" can "know" consistently what my "preferences" are — this is in trouble.  (I won't elaborate the case against the rationalist model as recently made by, among others, Gary Marcus, Dan Ariely, Cass Sunstein, because it has been well covered in the recent Edge colloquium on behavioral economics.)

What changes everything is not this ongoing intellectual event, but the next one.

In the next 10 or 15 years, after the burial of Rational Economic Man, neuroscientists and people from the behavioral disciplines will converge on a better model of human decision-making. I think it will picture people as inconsistent, unconscious, biased, malleable corks on a sea of fast-changing influences, and the consequences of that will be huge for our sense of personhood (to say nothing of sales tricks and marketer manipulations).

But I think the biggest shocks might come to, and through, institutions that are organized on rationalist premises.  If we accept that people are highly influenced by other people, and by their immediate circumstances, then what becomes of our idea of impartial justice? (Jon Hanson at Harvard has been working on that for some time.) How do we understand and protect democracy, now that Jonah Berger has shown that voters are more likely to support education spending just because they happen to cast their ballots in a school? What are we to make of election results, after we've accepted that voters have, at best, "incoherent, inconsistent, disorganized positions on issues," as William Jacoby puts it?  How do you see a town hall debate, once you know that people are more tolerant of a new idea if they're sitting in a tidy room than if they're in a messy one?

How do we understand medical care, now that we know that chemically identical pills have different effects on people who think their medicine is expensive than they do on patients who were told it was cheap? How should we structure markets after learning that even MBAs can be nudged to see $7 per item as a fair price — just by exposing them to the number 7 a few minutes before?  What do we do about standardized testing, when we know that women reminded that they're female score worse on a math test than fellow women who were reminded instead that they were elite college students?

Perhaps we need a new Adam Smith, to reconcile our political, economic and social institutions with present-day knowledge about human nature. In any event, I expect to see the arrival of Post-Rational Economic (and Political and Psychological) Humanity. And I do expect that will change everything.

daniel_goleman's picture
Psychologist; Author (with Richard Davidson), Altered Traits

Every manmade object — all the things in our homes and workplace — has an invisible back story, a litany of sorry impacts over the course of the journey from manufacture to use to disposal. Take running shoes.

Despite the bells and whistles meant to make one brand of running shoe appeal more than another, at base they all reduce to three parts. The shoe's upper consists of nylon with decorative bits of plastics or synthetic leather. The "rubber" sole for most shoes is a petroleum-based synthetic, as is the spongy midsole, composed of ethylene vinyl acetate. Like any petrochemical widget, manufacturing the soles produces unfortunate byproducts, among them benzene, toluene, ethyl benzene, and xylene. In environmental health circles these are known as the "Big Four" toxics, being variously carcinogens, central nervous system disrupters, and respiratory irritants, among other biological harms.

Those bouncy air pockets in some shoe soles contain an ozone-depleting gas. The decorative bits of plastic piping harbor PVC, which endangers the health of workers who make it, and contaminates the ecosystems around the dumps where we eventually send our shoes. The solvents in glues that bind the outsole to the midsole can damage the lungs of the workers who apply them. Tanning leather shoe tops can expose workers to hexavalent chromium and other carcinogens.

 I remember my high school chemistry teacher's enthusiasm for the chemical reaction that rendered nitrogen fertilizer from ammonia (he moonlighted in a local fertilizer factory); we never heard a word about eutrophication, the dying of aquatic life due to fertilizer runoff that creates a frenzy of algae growth, depleting the water's oxygen. Likewise, coal-burning electric plants seemed a marvel when first deployed: cheap electricity from a virtually inexhaustible source. Who knew about respiratory disease from particulates, let alone global warming?

The full list of adverse impacts on the environment or the health of those who make or use any product can run into hundreds of such details. The reason: almost all of the manufacturing methods and industrial chemicals in common use today were invented in a day when little or no attention was paid to their negative impacts on the planet or its people.

We have inherited an industrial legacy from the 20th century that no longer meets the needs of the 21st. As we awaken from our collective naivete about such hidden costs, we are reaching a pivot point where we can question hidden assumptions. We can ask, for example, why not have running shoes that are not just devoid of toxins, but also can eventually be tossed out in a compost pile to biodegrade? We can rethink everything we make, developing alternative ingredients and processes with far less — or ideally, no — adverse health or environmental impacts.

The singular force that can drive this transformation of every manmade thing for the better is neither government fiat nor the standard tactics of environmentalists, but rather radical transparency in the marketplace. If we as buyers can know the actual ecological impacts of the stuff we buy at the point of purchase, and can compare those impacts to competing products, we can make better choices. The means for such radical transparency have already launched. Software innovations now allow any of us to access a vast database about the hidden harms in whatever we are about to buy, and to do this where it matters most, at the point of purchase. As we stand in the aisle of a store, we can know which brand has the fewest chemicals of concern, or the better carbon footprint. In the Beta version of such software, you point your cell phone's camera at a product's bar code, and get an instant readout of how this brand compares to competitors on any of hundreds of environmental, health, or social impacts. In a planned software upgrade, that same comparison would go on automatically with whatever you buy on your credit card, and suggestions for better purchases next time you shop would routinely come your way by email.
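
Here is a minimal sketch of the kind of point-of-purchase lookup described above. The barcodes, products, impact fields, and scoring weights are all invented for illustration; a real service would draw on a large, curated product-impact database rather than a hard-coded dictionary.

    # Hypothetical product-impact records keyed by barcode (all values invented).
    IMPACT_DB = {
        "0012345678905": {"name": "TrailRunner X", "carbon_kg": 14.0, "toxics_flags": 3},
        "0098765432109": {"name": "EcoStride", "carbon_kg": 9.5, "toxics_flags": 1},
        "0055555555555": {"name": "SprintPro", "carbon_kg": 16.2, "toxics_flags": 4},
    }

    def impact_score(product):
        # Arbitrary combined score for illustration: lower is better.
        return product["carbon_kg"] / 20.0 + product["toxics_flags"] / 5.0

    def compare(scanned_barcode):
        """Rank the scanned product against the alternatives in the database."""
        scanned = IMPACT_DB[scanned_barcode]
        print(f"You scanned: {scanned['name']}")
        for product in sorted(IMPACT_DB.values(), key=impact_score):
            marker = "  <-- scanned item" if product is scanned else ""
            print(f"  {product['name']:<14} carbon {product['carbon_kg']:>5.1f} kg, "
                  f"toxics flags {product['toxics_flags']}{marker}")

    compare("0012345678905")

The design point is simply that once impact data is keyed to something a phone can read at the shelf, the comparison step is trivial; the hard work is building and maintaining the database behind it.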

Such transparency software converts shopping into a vote, letting us target manufacturing processes and product ingredients we want to avoid, and rewarding smarter alternatives. As enough of us apply these decision rules, market share will shift, giving companies powerful, direct data on what shoppers want — and want to avoid — in their products.

Creating a market force that continually leverages ongoing upgrades throughout the supply chain could open the door to immense business opportunities over the next several decades. We need to reinvent industry, starting with the most basic platforms in industrial chemistry and manufacturing design. And that would change every thing.

antony_garrett_lisi's picture
Theoretical physicist

Human beings have an amazingly flexible sense of self. If we don a pair of high resolution goggles showing the point of view from another body, with feedback and control, we perceive ourselves to be that body. As we use rudimentary or complex tools, these quickly become familiar extensions of our bodies and minds. This flexibility, and our indefatigable drive to learn, invent, have fun, and seek new adventure, will lead us down future paths that will dramatically alter human experience and our very nature.

Because we adapt so quickly, the changes will feel gradual. In the next few years solid state memory will replace hard drives, removing the mechanical barrier to miniaturization of our computational gadgetry. Battery size remains a barrier to progress, but this will improve, along with increased efficiency of our electronics, and we will live with pervasive computational presence. Privacy will vanish. People will record and share their sensorium feeds with the world, and the world will share experiences. Every physical location will be geo-tagged with an overlay of information. Cities will become more pleasant as the internal combustion engine is replaced with silent electric vehicles that don't belch toxic fumes. We'll be drawn into the ever-evolving and persistently available conversations among our social networks. Primitive EEGs will be replaced by magnetoencephalography and functional MRI backed by the computational power to recognize our active thought patterns and translate them to transmittable words, images, and actions. Our friends and family who wish it, and our entire external and internal world, will be reachable with our thoughts. This augmentation will change what it means to be human. Many people will turn away from their meat existence, to virtual worlds, which they will fill with meaning — spending time working on science, virtual constructions, socializing, or just playing games. And we humans will create others like us, but not.

Synthetic intelligence will arrive, but slowly, and it will be different enough that many won't acknowledge it for what it is. People used to think a computer mastering chess, voice recognition, and automated driving would signal the arrival of artificial intelligence. But these milestones have been achieved, and we now consider them the result of brute force computation and clever coding rather than bellwethers of synthetic intelligence. Similarly, computers are just becoming able to play the game of Go at the dan level, and will soon surpass the best human players. They will pass Turing's test. But this synthetic intelligence, however adaptable, is inhuman and foreign, and many people won't accept it as more than number crunching and good programming. A more interesting sign that synthetic intelligence has arrived will be when captchas and reverse Turing tests appear that exclude humans. The computers will have a good laugh about that. If it doesn't happen earlier, this level of AI will arrive once computers achieve the computational power to run real-time simulations of an entire human brain. Shortly after that, we will no longer be the game changers. But by then, humans may have significantly altered themselves through direct biological manipulation.

The change I expect to see that will most affect human existence will come from biohacking: purposefully altering genomes, tissue engineering, and other advances in biology. Humans are haphazardly assembled biological machines. Our DNA was written by monkeys banging away at... not typewriters, but each other, for millions of years. Imagine how quickly life will transform when DNA and biochemistry are altered with thoughtful intent. Nanotechnology already exists as the machinery within our own biological cells; we're just now learning how these machines work, and how to control them. Pharmaceuticals will be customized to match our personal genome. We're going to be designing and growing organisms to suit our purposes. These organisms will sequester carbon, process raw material, and eventually repair and replace our own bodies.

It may not happen within my lifetime, but the biggest game change will be the ultimate synthesis of computation and biology. Biotech will eventually allow our brains to be scanned at a level sufficient to preserve our memories and reproduce our consciousness when uploaded to a more efficient computational substrate. At this point our mind may be copied, and, if desired, embedded and connected to the somatic helms of designed biological forms. We will become branching selves, following many different paths at once for the adventure, the fun, and the love of it. Life in the real world presents extremely rich experiences, and uploaded intelligences in virtual worlds will come outside where they can fly as a falcon, sprint as a cheetah, love, play, or even just breathe — with superhuman consciousness, no lag, and infinite bandwidth. People will dance with nature, in all its possible forms. And we'll kitesurf.

Kitesurfing, you see, is a hell of a lot of fun — and kites are the future of sailing. Even though the sport is only a few years old and kite design is not yet mature, kitesurfers have recently broken the world sailing speed record, reaching over 50 knots. Many in the sailing world are resisting the change, and disputing the record, but kites provide efficient power and lift, and the speed gap will only grow as technology improves. Kitesurfing is a challenging dynamic balance of powerful natural forces. It feels wonderful; and it gets even more fun in waves.

All of these predicted changes are extrapolations from the seeds of present science and technology; the biggest surprises will come from what can't be extrapolated. It is uncertain how many of these changes will happen within our lifetimes, because that timescale is a dependent variable, and life is uncertain. It is both incredibly tragic and fantastically inspiring that our generation may be the last to die of old age. If extending our lives eludes us, cryonics exists as a stopgap gamble — Pascal's wager for singularitarians, with an uncertain future preferable to a certain lack of one. And if I'm wrong about these predictions, death will mean I'll never know.

betsy_devine's picture
Journalist, Author and Blogger

In the next five years, policy-makers around the world will embrace economic theories (e.g. those of Richard Layard) aimed at creating happiness. The Tower of Economic Babble is rubble. Long live the new, improved happiness economics!

Cash-strapped governments will love Layard's theory that high taxes on high earners make everyone happier. (They reduce envy in the less fortunate while saving those now super-taxed from their regrettable motivation to over-work.) It also makes political sense to turn people's attention from upside-down mortgages and looted pension funds to a more abstract happiness that you claim you can increase.

Just a few ripple effects from the coming high-powered promotion of happiness:

• Research funding will flow to psychologists who seek advances in happiness creation.

• Bookstores will rename self-help sections as "Happiness sections," then vastly expand them to accommodate hedonic workbooks and gratitude journals in rival formats.

• In public schools, "happiness" will be the new "self-esteem," a sacred concept to which mere educational goals must humbly bow.

• People will pursue happiness for themselves and their children with holy zeal; people whose child or spouse displays public unhappiness will feel a heavy burden of guilt and shame.

Will such changes increase general citizen happiness? This question is no longer angels-on-head-of-pin nonsense; researchers now claim good measures for relative happiness.

The distraction value alone should benefit most of us. But in the short run, I at least would be happy to see that my prediction had come true.

david_m_eagleman's picture
Neuroscientist, Stanford University; Author, Incognito, Sum, The Brain

 

While medicine will advance in the next half century, we are not on a crash-course for achieving immortality by curing all disease.  Bodies simply wear down with use.  We are on a crash-course, however, with technologies that let us store unthinkable amounts of data and run gargantuan simulations.  Therefore, well before we understand how brains work, we will find ourselves able to digitally copy the brain's structure and able to download the conscious mind into a computer. 

If the computational hypothesis of brain function is correct, it suggests that an exact replica of your brain will hold your memories, will act and think and feel the way you do, and will experience your consciousness — irrespective of whether it's built out of biological cells, Tinkertoys, or zeros and ones.  The important part about brains, the theory goes, is not the structure but the algorithms that ride on top of it.  So if the scaffolding that supports the algorithms is replicated — even in a different medium — then the resultant mind should be identical.  If this proves correct, it is almost certain we will soon have technologies that allow us to copy and download our brains and live forever in silica.  We will not have to die anymore.  We will instead live in virtual worlds like the Matrix.  I assume there will be markets for purchasing different kinds of afterlives, and sharing them with different people — this is the future of social networking.  And once you are downloaded, you may even be able to watch the death of your outside, real-world body, in the manner that we would view an interesting movie.

Of course, this hypothesized future embeds many assumptions, the speciousness of any one of which could topple the house of cards.  The main problem is that we don't know exactly which variables are critical to capture in our hypothetical brain scan.  Presumably the important data will include the detailed connectivity of the hundreds of billions of neurons. But knowing the point-to-point circuit diagram of the brain may not be sufficient to specify its function.  The exact three-dimensional arrangement of the neurons and glia is likely to matter as well (for example, because of three-dimensional diffusion of extracellular signals).  We may further need to probe and record the strength of each of the trillions of synaptic connections.  In a still more challenging scenario, the states of individual proteins (phosphorylation states, exact spatial distribution, articulation with neighboring proteins, and so on) will need to be scanned and stored.  It should also be noted that a simulation of the central nervous system by itself may not be sufficient for a good simulation of experience: other aspects of the body may require inclusion, such as the endocrine system, which sends and receives signals from the brain.  These considerations potentially lead to billions of trillions of variables that need to be stored and emulated.
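A rough sense of how these numbers compound can be conveyed with a back-of-envelope sketch. The counts below are illustrative round numbers of my own choosing, not figures from the essay, but they show how quickly a connectivity map balloons into the "billions of trillions" of variables once per-synapse detail is included:

```python
# Illustrative arithmetic only: assumed round numbers, not measurements,
# showing how the variable count in a hypothetical whole-brain scan grows
# as the required level of detail increases.

NEURONS = 1e11                # assumed: ~a hundred billion neurons
SYNAPSES_PER_NEURON = 1e4     # assumed: ~ten thousand connections per neuron
STATES_PER_SYNAPSE = 1e6      # assumed: protein-level parameters per synapse

synapses = NEURONS * SYNAPSES_PER_NEURON        # wiring diagram plus strengths: ~1e15
protein_level = synapses * STATES_PER_SYNAPSE   # protein-level detail: ~1e21

print(f"synaptic variables:      {synapses:.0e}")       # ~10^15 synapses
print(f"protein-level variables: {protein_level:.0e}")  # ~billions of trillions
```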

The other major technical hurdle is that the simulated brain must be able to modify itself. We need not only the pieces and parts but also the physics of their ongoing interactions — for example, the activity of transcription factors that travel to the nucleus and cause gene expression, the dynamic changes in location and strength of the synapses, and so on. Unless your simulated experiences change the structure of your simulated brain, you will be unable to form new memories and will have no sense of the passage of time.  Under those circumstances, is there any point in immortality?

The good news is that computing power is blossoming sufficiently quickly that we are likely to make it within a half century.  And note that a simulation does not need to be run in real time in order for the simulated brain to believe it is operating in real time.  There's no doubt that whole brain emulation is an exceptionally challenging problem.  As of this moment, we have no neuroscience technologies geared toward ultra-high-resolution scanning of the sort required — and even if we did, it would take several of the world's most powerful computers to represent a few cubic millimeters of brain tissue in real time.  It's a large problem.  But assuming we haven't missed anything important in our theoretical frameworks, then we have the problem cornered and I expect to see the downloading of consciousness come to fruition in my lifetime.  
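To see why "within a half century" is at least plausible, here is a toy calculation under assumptions of my own: the brain volume, the few cubic millimeters a top machine can handle today, and a two-year doubling time are all stand-in numbers, not claims from the essay.

```python
# Rough consistency check of the "within a half century" estimate,
# under illustrative assumptions.
from math import ceil, log2

BRAIN_VOLUME_MM3 = 1.2e6     # assumed: ~1.2 liters of brain tissue
SIMULATED_NOW_MM3 = 3.0      # assumed: "a few cubic millimeters" per top machine
DOUBLING_TIME_YEARS = 2.0    # assumed: available compute doubles every two years

shortfall = BRAIN_VOLUME_MM3 / SIMULATED_NOW_MM3   # ~4e5 times more compute needed
doublings = ceil(log2(shortfall))                  # ~19 doublings
years_needed = doublings * DOUBLING_TIME_YEARS     # ~38 years

print(f"compute shortfall: {shortfall:.0e}x")
print(f"doublings needed:  {doublings}, i.e. roughly {years_needed:.0f} years")
```

Under these assumptions the gap closes in about four decades, which is consistent with the essay's half-century horizon.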

 

christine_finn's picture
Archaeologist; Journalist; Author, Artifacts, Past Poetic

While our minds have been engaging with intangibles of the virtual world, where will our real bodies be taking us on planet earth, compassed with these new perspectives?

Today we are fluent at engaging with the 'other' side of the world. We chart a paradox of scale, from the extra-terrestrial to the international. Those places we call home are both intimately bounded and digitally exposed. Our observation requires a new grammar of both voyager and voyeur.

The generation growing up with both digital maps and terrestrial globes will have the technological means to shake up our orthodoxies, at the very moment that we need to be aware of every last sprawling suburb and shifting sandbank.

Our brave new map of the world is evolving as one created by us as individuals, as much as one which is geographically verifiable by a team of scientists. It will continue to be a mixture of landscapes mapped over centuries, and of unbounded digital terrains. One charted in  well-thumbed atlases in libraries, and by global positioning systems prodded by fingers on the go. Armchair travellers gathering souvenirs through technology, knowing no bounds; a geography of personal space, extended by virtual reality; a sense of place plotted by chats over garden fences, as much as instant messaging exchanged between our digital selves across time zones drawn up in the steam age. 

What will not change is our love for adventure. We will not lose our propensity to explore beyond our own horizons and to re-explore those at home. Our innate curiosity will be as relevant in the digital age as it was to the early Pacific colonisers, to 17th-century merchants heading east from Europe, and America's west-drawn pioneers.

Our primaeval wanderlust will continue in meanderings off the path, and these are as necessary for our physical selves as the day dreams that bring forth innovation. We develop ways to secure our co-ordinates while also straining at the leash. And our maps move with us.

Dislocation is good. Take a traditional map of the world. Cut it in half. Bring the old Oceanic edges together, and look what happens to the Pacific.

Seeing the world differently changes everything.

oliver_morton's picture
Chief News and Features Editor

It is quite likely that we will at some point see people starting to make deliberate changes in the way the climate system works. When they do they will change the world — though not necessarily, or only, in the way that they intend to.

"Geoengineering" technologies for counteracting some aspects of anthropogenic climate change — such as putting long-lived aerosols into the stratosphere, as volcanoes do, or changing the lifetimes and reflective properties of clouds — have to date been shunned by the majority of climate scientists, largely on the basis of the moral hazard involved: any sense that the risks of global warming can be taken care of by such technology weakens the case for reducing carbon-dioxide emissions.

I expect to see this unwillingness recede quite dramatically in the next few years, and not only because of the post-Lehman-Brothers bashing given to the idea that moral hazard is something to avoid at all costs. As people come to realise how little has actually been achieved so far on the emissions-reduction front, quite a few are going to start to freak out. Some of those who freak will have money to spend, and with money and the participation of a larger cadre of researchers, the science and engineering required for the serious assessment of various geoengineering schemes might be developed fairly quickly.

Why do I think those attempts will change the world? Geoengineering is not, after all, a panacea. It cannot precisely cancel out the effects of greenhouse gases, and it is likely to have knock-on effects on the hydrological cycle which may well not be welcome. Even if the benefits outweigh the costs, the best-case outcome is unlikely to be more than a period of grace in which the most excessive temperature changes are held at bay. Reducing carbon-dioxide emissions will continue to be necessary. In part that is because of the problem of ocean acidification, and in part because a lower carbon-dioxide climate is vastly preferable to one that stays teetering on the brink of disaster for centuries, requiring constant tinkering to avoid tipping over into greenhouse hellishness.

So geoengineering would not "solve" climate change. Nor would it be an unprecedented human intervention into the earth system. It would be a massive thing to undertake, but hardly more momentous in absolute terms than our replacement of natural ecosystems with farmed ones; our commandeering of the nitrogen cycle; the wholesale havoc we have wrought on marine food webs; or the amplification of the greenhouse effect itself.

But what I see as world changing about this technology is not the extent to which it changes the world. It is that it does so on purpose. To live in a world subject to purposeful, planetwide change will not, I think, be quite the same as living in one being messed up by accident. Unless geoengineering fails catastrophically (which would be a pretty dramatic change in itself) the relationship between people and their environment will have changed profoundly. The line separating the natural from the artificial is itself an artifice, and one that changes with time. But this change, different in scale and not necessarily reversible, might finish off the idea of the natural as a place or time or condition that could ever be returned to. This would not be the "end of nature" — but it would be the end of a view of nature that has great power, and without which some would feel bereft. The clouds and the colours of the noon-time sky and of the setting sun will feel different if they have become, to some extent, a matter of choice.

And that choice is itself another aspect of the great change: Who chooses, and how? All climate change, whether intentional or not, has different outcomes for different regions, and geoengineering is in many ways just another form of climate change. So for some it will likely make the situation worse. If it does, does that constitute an act of war? An economic offence for which others will insist that reparations should be made? Just one of those things that the stronger do to the weaker?

Critics of geoengineering approaches are right to stress this governance problem. Where they tend to go wrong is in ignoring the fact that we already have a climate governance problem: the mechanisms currently in place to "avoid dangerous climate change", as the UN's Framework Convention on Climate Change puts it, have not so far delivered the goods. A system conceived with geoengineering in mind would need to be one that held countries to the consequences of their actions in new ways, and that might strengthen and broaden approaches to emissions reduction, too. But there will always be an asymmetry, and it is an important one. To do something about emissions a significant number of large economies will have to act in concert. Geoengineering can be unilateral. Any medium sized nation could try it.

In this, as in other ways, geoengineering issues look oddly like nuclear issues. There, too, a technological stance by a single nation can have global consequences. There, too, technology has reset the boundaries of the natural in ways that can provoke a visceral opposition. There, too, there is a discourse of transcendence and a tendency to hubris that need to be held in check. And there, too, the technology has brought with it dreams of new forms of governance. In the light of Trinity, Hiroshima and Nagasaki, many saw some sort of world government as a moral imperative, an historical necessity, or both. It turned out not to be, and the control of nuclear weapons and ambitions has remained an ad hoc thing, a mixture of treaties, deterrence, various suasions and occasional direct action that is unsatisfactory in many ways though not, as yet, a complete failure. A geoengineered world may end up governed in a similarly piecemeal way — and bring with it a similar open-ended risk of destabilisation, and even disaster.

The world has inertia and complexity. It changes, and it can be changed — but not always quickly, and not necessarily controllably, and not all at once. But within those constraints geoengineering will bring changes, and it will do so intentionally. And that intentional change in the relationship between people and planet might be the biggest change of all.

brian_eno's picture
Artist; Composer; Recording Producer: U2, Coldplay, Talking Heads, Paul Simon; Recording Artist

What would change everything is not even a thought. It's more of a feeling.

Human development thus far has been fueled and guided by the feeling that things could be, and are probably going to be, better. The world was rich compared to its human population; there were new lands to conquer, new thoughts to nurture, and new resources to fuel it all. The great migrations of human history grew from the feeling that there was a better place, and the institutions of civilisation grew out of the feeling that checks on pure individual selfishness would produce a better world for everyone involved in the long term. 

What if this feeling changes? What if it comes to feel like there isn't a long term—or not one to look forward to? What if, instead of feeling that we are standing at the edge of a wild new continent full of promise and hazard, we start to feel that we're on an overcrowded lifeboat in hostile waters, fighting to stay on board, prepared to kill for the last scraps of food and water? 

Many of us grew up among the reverberations of the 1960's. At that time there was a feeling that the world could be a better place, and that our responsibility was to make it real by living it. Why did this take root? Probably because there was new wealth around, a new unifying mass culture, and a newly empowered generation whose life experience was that the graph could only point 'up'. In many ways their idealism paid off: the better results remain with us today, surfacing, for example, in the wiki-ised world of ideas-sharing of which this conversation is a part.

But suppose the feeling changes: that people start to anticipate the future world not in that way but instead as something more closely resembling the nightmare of desperation, fear and suspicion described in Cormac McCarthy's post-cataclysm novel The Road. What happens then? 

The following: Humans fragment into tighter, more selfish bands. Big institutions, because they operate on longer time-scales and require structures of social trust, don't cohere. There isn't time for them. Long term projects are abandoned—their payoffs are too remote. Global projects are abandoned—not enough trust to make them work. Resources that are already scarce will be rapidly exhausted as everybody tries to grab the last precious bits.  Any kind of social or global mobility is seen as a threat and harshly resisted. Freeloaders and brigands and pirates and cheats will take control. Survivalism rules. Might will be right.

This is a dark thought, but one to keep an eye on. Feelings are more dangerous than ideas, because they aren't susceptible to rational evaluation. They grow quietly, spreading underground, and erupt suddenly, all over the place. They can take hold quickly and run out of control ('FIRE!') and by their nature tend to be self-fueling. If our world becomes gripped by this particular feeling, everything it presupposes could soon become true.

w_daniel_hillis's picture
Physicist, Computer Scientist, Co-Founder, Applied Invention; Author, The Pattern on the Stone

In 1851, Nathaniel Hawthorne wrote, "Is it a fact — or have I dreamt it — that, by means of electricity, the world of matter has become a great nerve, vibrating thousands of miles in a breathless point of time? Rather, the round globe is a vast head, a brain, instinct with intelligence!" He was writing about the telegraph, but today we make essentially the same observation about the Internet.

One might suppose that, with all its zillions of transistors and billions of human minds, the world brain would be thinking some pretty profound thoughts. There is little evidence that this is so. Today's Internet functions mostly as a giant communications and storage system, accessed by individual humans. Although much of human knowledge is represented in some form within the machine, it is not yet represented in a form that is particularly meaningful to the machine. For the most part, the Internet knows no more about the information it handles than the telephone system knows about the conversations that take place over its lines. Most of those zillions of transistors are either doing something very trivial or nothing at all, and most of those billions of human minds are doing their own thing.

If there is such a thing as a world mind today, then its thoughts are primarily about commerce. It is the "invisible hand" of Adam Smith, deciding the prices, allocating the capital. Its brain is composed not only of the human buyers and sellers, but also of the trading programs on Wall Street and of the economic models of the central banks. The wires "vibrating thousands of miles in a breathless point of time" are not just carrying messages between human minds, they are participating in the decisions of the world mind as a whole. This unconscious system is the world's hindbrain.

I call this the hindbrain because it is performing unconscious functions necessary to the organism's own survival, functions that are so primitive that they predate development of the brain. Included in this hindbrain are the functions of preference and attention that create celebrity, popularity and fashion, all fundamental to the operation of human society. This hindbrain is ancient. Although it has been supercharged by technology, growing in speed and capacity, it has grown little in sophistication. This global hindbrain is subject to mood swings and misjudgments, leading to economic depressions, panics, witch-hunts, and fads. It can be influenced by propaganda and by advertising. It is easily misled. As vital as the hindbrain is for survival, it is not very bright.

What the world mind really needs is a forebrain, with conscious goals, access to explicit knowledge, and the ability to reason and plan. A world forebrain would need the capacity to perceive collectively, to decide collectively, and to act collectively. Of these three functions, our ability to act collectively is the most developed.

For thousands of years we have understood methods for breaking a goal into sub-goals that can be accomplished by separate teams, and for recursively breaking them down again and again until they can be accomplished by individuals. This management by hierarchy scales well. I can imagine that the construction of the pyramids was a celebration of its discovery. The hierarchical teams that built these monuments were an extension of the pharaoh's body, the pyramid a dramatic demonstration of his power to coordinate the efforts of many. Pyramid builders had to keep their direct reports within shouting distance, but electronic communication has allowed us to extend our virtual bodies, literally corporations, to a global scale. The Internet has even allowed such composite action to organize itself around an established goal, without the pharaoh. The Wikipedia is our Great Pyramid.

The collective perception of the world mind is also relatively well developed. The most important recent innovations have been search and recommendation engines, which combine the inputs of humans with machine algorithms to produce a useful result. This is another area where scale helps. Many eyes and many judgments are combined into a collective perception that is beyond the scope of any individual. The weak point is that the result of all this collective perception is just a recommendation list. For the world mind to truly perceive, it will need a way of sharing more general forms of knowledge, in a format that can be understood by both humans and machines. Various new companies are beginning to do just that.

What is still missing is the ability for a group of people (or people and machines) to make collective decisions with intelligence greater than the individual. This can sometimes be accomplished in small groups through conversation, but the method does not scale well. Generally speaking, technology has made the conversation larger, but not smarter. For large groups, the state-of-the-art method for collective decision-making is still the vote. Voting only works to the degree that, on average, each voter is able to individually determine the right decision. This is not good enough. We need an intelligence that will scale with the size of our problems.
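The limit of the vote can be made concrete with a small numerical sketch. This is the standard Condorcet-style calculation, offered here as an illustration rather than something taken from the essay itself:

```python
# A minimal sketch of why voting "only works to the degree that, on average,
# each voter is able to individually determine the right decision."
from math import comb

def majority_correct(n_voters: int, p: float) -> float:
    """Probability that a simple majority of independent voters is right,
    when each voter is independently right with probability p."""
    needed = n_voters // 2 + 1
    return sum(comb(n_voters, k) * p**k * (1 - p) ** (n_voters - k)
               for k in range(needed, n_voters + 1))

for p in (0.55, 0.50, 0.45):
    print(f"p = {p:.2f} -> majority correct with probability {majority_correct(1001, p):.4f}")
# Slightly-better-than-chance voters make the crowd almost surely right;
# slightly-worse-than-chance voters make it almost surely wrong. Scale
# amplifies individual competence; it does not create intelligence.
```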

So this is the development that will make a difference: a method for groups of people and machines to work together to make decisions in a way that takes advantage of scale. With such a scalable method for collective decision-making, our zillions of transistors and billions of brains can be used to advantage, giving the collective mind a way to focus our collective actions. Given this, we will finally have access to intelligence greater than our own. The world mind will finally have a forebrain, and this will change everything.

maria_spiropulu's picture
Shang-Yi Ch’en Professor of Physics, California Institute of Technology; Founder, AQT/INQNET

In no order of presumed value or significance, the two grand revolutionary scientific achievements that are beginning to change us are:

1. the consilience of the sciences of life with the technologies of artificial intelligence, advanced computing, and software (including life programming a la Venter and possibly consciousness programming) towards the "cylon" creation;

2. the knowledge of how space and time emerge: the tectonics and characterization of our very Universe (with the Large Hadron Collider and other relevant accelerator and non-accelerator fundamental research projects, including a large number of astro-particle-gravity-cosmology observatories, terrestrial and in space)

Both are products of the curious, ever-growing capacity of human intelligence and craftsmanship. Both show, through science and technology, no signs of saturation or taming. Both are changing our cognitive reference systems and the compass of knowledge.

seth_lloyd's picture
Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

My job is to design and build quantum computers, computers that store and process information at the level of individual atoms. Even at the rapid rate of progress of current computer technology, with components halving in size every two years or less and computers doubling in power over the same time, quantum computers should not be available for forty years. Yet we are building simple quantum computers today. I could tell you that quantum computers will drastically change the way the world works during our lifetime. But I'm not going to do that, for the simple reason that I have no idea whether it's true or not.

Whether or not they change the world, quantum computers have something to offer to all of us. When they flip those atomic bits to perform their computations, quantum computers possess several useful features. It's well known that quantum computers, properly programmed, afford their users privacy and anonymity guaranteed by the laws of physics. A less well-known virtue of quantum computers is that everything that they do, they can undo as well. This ability is built into quantum computers at the level of fundamental physical law. At their most microscopic level, the laws of physics are reversible: what goes forward can go backward. (By contrast, at the more macroscopic scales at which classical computers operate, the second law of thermodynamics kicks in, and what is done cannot be undone.) Because they operate at the level of individual atoms, quantum computers inherit those atoms' ability to undo the present, and recall the past.

While quantum computers afford their users protection and anonymity that classical computers cannot, even classical computers can be programmed to share this ability to erase regret, although they currently are not. Although classical computers dissipate heat and operate in a physically irreversible way, they can still function in a logically reversible fashion: properly programmed, they can un-perform any computation that they can perform. We already see a hint of this digital nostalgia in hard-disk 'time machines,' which restore a disk to its state in an earlier, pre-crash era.

Suppose that we were to put this ability of computers to run the clock backward in the service of undoing not merely our accidental erasures and unfortunate viral infections, but also financial transactions that were conducted under fraudulent conditions. Credit card companies already supply us with protection against theft conducted in our name. Why should not more important financial transactions be similarly guaranteed? Contracts for home sales, stock deals, and credit default swaps are already recorded and executed digitally. What would happen if we combined digital finance with reversible computation?

For example, if a logically reversible computer—quantum or classical—were used to record a financial contract and to execute its terms, then at some later point, if the parties were not satisfied with the way those terms were executed, those terms could be 'un-executed,' any disbursed money reimbursed, and the contract deleted, as if it had never been. Since finance is already digital, why not introduce a digital time machine: let's agree now that when the crash comes, as it inevitably will, we'll restore everything to a better, earlier time, before we clicked those inauspicious buttons and brought on the blue screen of financial death.
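A minimal sketch of what such logically reversible bookkeeping could look like in practice follows; the ledger, the transfer operation, and the names used are my own illustrative assumptions, not anything Lloyd specifies.

```python
# A toy sketch of reversible bookkeeping under simplifying assumptions
# (a single in-memory ledger, transfers as the only operation). Every executed
# step records enough information to be undone, so the whole contract can
# later be un-executed, as if it had never been.

class ReversibleLedger:
    def __init__(self, balances: dict[str, float]) -> None:
        self.balances = dict(balances)
        self.journal: list[tuple[str, str, float]] = []   # (payer, payee, amount)

    def execute(self, payer: str, payee: str, amount: float) -> None:
        """Apply a transfer and journal it so that it can be reversed."""
        self.balances[payer] -= amount
        self.balances[payee] += amount
        self.journal.append((payer, payee, amount))

    def unexecute_all(self) -> None:
        """Run the journal backward, applying the inverse of each transfer."""
        while self.journal:
            payer, payee, amount = self.journal.pop()
            self.balances[payer] += amount
            self.balances[payee] -= amount

ledger = ReversibleLedger({"buyer": 100.0, "seller": 0.0})
ledger.execute("buyer", "seller", 60.0)     # the contract's terms are executed...
ledger.unexecute_all()                      # ...and later un-executed
assert ledger.balances == {"buyer": 100.0, "seller": 0.0}   # restored exactly
```

The design choice that matters is that nothing is destroyed: every step keeps its inverse, which is the property the microscopic laws of physics provide for free.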

Can it be done? The laws of physics and computation say yes. But what about the laws of human nature? The financial time machine erases profits as well as losses. Will hedge fund managers and Ponzi schemers sign on to turn back the clock if schemes go awry, even if it means that their gains, well-gotten or ill-, will be restored to their clients/victims? If they refuse to agree, then you don't have to give them your money.

I make no predictions, but the laws of physics have been around for a long time. Meanwhile, the only true 'law' of human nature is its intrinsic adaptability. Microscopic reversibility is the way that Nature does business. Maybe we can learn a thing or two from Her to change the way that we do business.

marco_iacoboni's picture
Neuroscientist; Professor of Psychiatry & Biobehavioral Sciences, David Geffen School of Medicine, UCLA; Author, Mirroring People

Life expectancy has dramatically increased over the last 100 years. At the beginning of the last century, the average life expectancy was 30-40 years, while the current world average life expectancy is almost 70 years. Unfortunately there are still great variations in life expectancy, between countries (guess what? People living in more developed countries live longer...) and within countries (guess what? Wealthier people live longer...). Today, people in the wealthier strata of developing countries can expect to live to more than 80 years. While the disparity in life expectancy is a policy issue (not discussed here), the overall dramatic increase in life expectancy brings out some interesting science issues. How can we fight the cognitive decline associated with aging (a side effect of the nice fact that we live longer)? How can we fix mood disorders often associated with a general cognitive decline? The real game changer will be the immortal cognition (well, not really, but close enough) and boundless happiness (ok, again, not really, but close enough) provided by painless brain stimulation.

Today, we have two main ways of stimulating the brain painlessly and noninvasively: Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (TDCS). TMS stimulates the brain by inducing local magnetic fields over the scalp (which in turn induce electric currents in the brain), whereas TDCS uses weak direct currents. There are many different ways of stimulating the brain, and obviously many brain areas can be stimulated. We will be able to significantly delay cognitive decline and improve mood by stimulating brain areas that are collectively called 'association cortices.' Association cortices connect many other brain areas (their name comes from the fact that they associate many brain areas in neural networks). There are two main types of association cortices, in the front of the brain (called 'anterior' association cortices) and in the back of the brain (called 'posterior' association cortices). TMS has already been used experimentally for some years to treat depression by stimulating the anterior association cortices. The results are so encouraging that TMS is now an approved treatment for depression in many countries (the FDA approved it for the United States in October 2008). I believe we will see in the next two decades a great improvement in our ability to stimulate the brain to treat mood disorders. We will improve the hardware and the 'stimulation protocols' (how frequently we stimulate and for how long). We will also improve our ability to target specific parts of the anterior association cortices using brain imaging. Each brain is slightly different, in both anatomy and physiological responses. Brain stimulation coupled with brain imaging will allow us to design treatments tailored to specific individuals, resulting in highly effective treatments.

The posterior association cortices (the ones in the back of the brain) are the first ones affected by Alzheimer's disease, a degenerative brain disorder affecting higher cognitive functions, for instance memory. The posterior association cortices also have reduced activity in the less dramatic cognitive decline that is often associated with aging. Brain stimulation will facilitate the activity of the posterior association cortices in the elderly by inducing synchronized firing of many neurons at specific frequencies. Synchronous neuronal firing at certain frequencies is thought to be critical for perceptual and cognitive processes. Our aging brain will get its synchronized neuronal firing going thanks to brain stimulation.

A final touch (a critical one, I would say) will be given by our ability to induce specific brain states during brain stimulation. The brain never rests, obviously. Brain stimulation always stimulates the brain in a given state. The effect of brain stimulation can be thought of as the interaction between the stimulation itself and the state of the brain while it is stimulated. Stimulating the brain while inducing specific brain states in the stimulated subject (for instance, playing word association games that require the subject to associate words together, or showing the subject stimuli that are more easily associated with happiness) will result in much more effective treatments of cognitive decline and mood disorders.

This will be a real game changer. If my prediction is correct, we will also see dramatic changes in policy. People won't tolerate being excluded from the beneficial effects of brain stimulation. Right now, people don't easily grasp insidious environmental factors or subtle differences in health care that result in dramatic individual differences in the long term (approximately ten years of life between the wealthy and the poor living in the same country), but they will immediately grasp the beneficial effects of brain stimulation, and will demand not to be excluded anymore. That's also a game changer.

mahzarin_banaji's picture
Department Chair; Richard Clarke Cabot Professor of Social Ethics, Department of Psychology, Harvard University; Co-author (with Anthony Greenwald), Blind Spot

If we understand our minds as we understand the physical world — that will change everything.

Because I've been writing about the history of a particular form of mental activity, I've been especially aware of the limits of what we know about the brain and the mind, this new entrant on the stage of science. Think about it: what we know about the human mind comes from data gathered for a little over a hundred years. In actuality, only since the mid-20th century have we had anything approximating the sort of activity that we would call a science of the mind. For a half century's work we've done pretty well, but the bald fact is that we have almost no understanding of how the thing that affects all aspects of our lives does its work. The state of our decision making — whether it is about global warming or the human genome, about big bailouts on Wall Street or microfinancing in Asia, about single-payer healthcare or not — is what it is because the machine that does all the heavy lifting is something we barely understand. Would we trust the furnace in our house if we understood it as little?

I anticipate that many of the viable candidates for "everything changers" will be striking single events such as encountering new intelligent life, the ability to live forever, and a permanent solution to the problem of the environment. Indeed, any of these will produce enormous change and deserves to be on the list. But if we are to take seriously the question of what will change "everything", then the candidate really has to be something that underlies all other changes, and hence my candidate remains understanding the mind.

From the little we do know, we can say that good people (we, us) are capable of incredible harm to others and even themselves. That the daily moral decisions we make are not based on the principles of justice we think they are, but are often a result of the familiarity and similarity of the other to oneself. These two simple types of bias happen because the mind and its workings remain invisible to us, and until we unmask its meanderings, the disparity between what we do and what we think we do will remain murky.

That we have spent no more than a few decades of our entire history scrutinizing the mind should both frighten us and give us hope. Whether the question is how we will deal with new life forms when we encounter them, or how we will design our lives as we prepare to live forever, or how we will generate the courage to stop environmental devastation, understanding the mind will change everything.

lisa_randall's picture
Physicist, Harvard University; Author, Dark Matter and the Dinosaurs

Predicting the future is notoriously difficult. Towards the end of the 19th century, the famous physicist William Thomson, more commonly known as Lord Kelvin, proclaimed the end of physics. Despite the silliness of declaring a field moribund, particularly one that had been subject to so many important developments not so long before Thomson's ill-fated pronouncement, you can't really fault the poor devil for not foreseeing quantum mechanics and relativity and the revolutionary impact they would have. Seriously, how could anyone, even someone as smart as Lord Kelvin, have predicted quantum mechanics?

So I'm not going to even try. I'll stick to a safer (and more prosaic) prediction that has already begun its realization. Increases in computing power, in part through shared computational resources, are likely to transform the nature of science and further revolutionize the spread of information. Individual computing power might increase according to Moore's Law but a more discrete jump in computational power should also result from clever uses of computers in concert.

Already we have seen the SETI@home project allow a large-scale search for extraterrestrial signals that would not be possible with any individual computer. Protein folding is currently being studied through a distributed computational effort.

Currently CERN is developing "grid computing" to allow the increase in computational power that will be required to analyze the enormous amount of Large Hadron Collider (LHC) data. Though the grid system would be hard pressed to match the transformative power of the World Wide Web (also developed at CERN), the jump in computational power made possible by coordinating processors the way data currently is coordinated could have enormous transformational consequences.

Modern science has two different streams that face very different challenges. Physicists and biologists today, for example, ask very different sorts of questions and use somewhat different methods. Traditionally scientists have searched for the smallest and most basic components from which the behavior of large complex systems can be derived. This mode has been extremely successful in understanding and interpreting the physical world.

For example, it has also helped us understand the operation of the human body. I am betting this reductionist approach will continue to work for some fields of science such as particle physics.

However, understanding some of the complex systems that modern science now studies is unlikely to be so "simple". Although the LHC's search for more fundamental building blocks is likely to be rewarded with deeper understanding of the substructure of matter, it is not obvious that the most basic structure of biological systems will be understood with as straightforward a reductionist approach.

Very likely individual elements will work in conjunction with their environment or in collaboration with other system elements to produce emergent effects. Already we have learned that the genetic code is not sufficient to predict behavior; the genes' environment, which determines which genes are triggered, also plays a big role. Very likely understanding the brain will require understanding coordinated dynamics as much as any individual element. Many diseases too are unlikely to be completely cured until the complicated dynamics among different elements are fully worked out.

How can massive computing power affect such science? It will clearly not replace experiments or the need to identify individual fundamental elements. But it will make us better able to understand systems and how elements work in conjunction. Massive simulation "experiments" will help determine how feedback loops work and how any individual element works in concert with the system as a whole. Such "experiments" will also help determine when current data is insufficient because systems are more sensitive to individual elements than anticipated. Computation alone will not solve problems — the full creativity of scientific minds will still be needed — but computational advances will allow researchers to explore hypotheses efficiently.

At a broader level (although one that will affect science too) coordinated and expanded computational power will also allow a greatly expanded use of the huge amounts of underutilized information that is currently available. Searching is likely to become a more refined process where one can ask for particular types of data more finely honed to one's needs. Imagine how much faster and easier "googling" could be in a world where you "feel lucky" every time (or at least significantly more often).

The advance I am suggesting isn't a quantum leap. It's not even a revolution since it's simply an adiabatic evolution of advances that are currently occurring. But when one asks about science in twenty years, coordinated computation is likely to be one of the contributing factors that will change many things — though not necessarily everything.

john_tooby's picture
Founder of field of Evolutionary Psychology; Co-director, Center for Evolutionary Psychology, Professor of Anthropology, UC Santa Barbara

Currently, the most keenly awaited technological development is an all-purpose artificial intelligence — perhaps even an intelligence that would revise itself and grow at an ever-accelerating rate, until it enacts millennial transformations. Since the invention of artificial minds seventy years ago, computer scientists have felt on the verge of building a generally intelligent machine. Yet somehow this goal, like the horizon, keeps retreating as fast as it is approached. In contrast, we think that an all-purpose artificial intelligence will — for the foreseeable future — remain elusive. But understanding why will unlock other revolutions.

AI's wrong turn? Assuming that the best methods for reasoning and thinking — for true intelligence — are those that can be applied successfully to any content. Equip a computer with these general methods, input some facts to apply them to, increase hardware speed, and a dazzlingly high intelligence seems fated to emerge. Yet one never materializes, and achieved levels of general AI remain too low to meaningfully compare to human intelligence.

But powerful natural intelligences do exist. How do native intelligences — like those found in humans — operate? With few exceptions, they operate by being specialized. They break off small but biologically important fragments of the universe (predator-prey interactions, color, social exchange, physical causality, alliances, genetic kinship, etc.) and engineer different problem-solving methods for each. Evolution tailors computational hacks that work brilliantly, by exploiting relationships that exist only in its particular fragment of the universe (the geometry of parallax gives vision a depth cue; an infant nursed by your mother is your genetic sibling; two solid objects cannot occupy the same space). These native intelligences are dramatically smarter than general reasoning because natural selection equipped them with radical short cuts. These bypass the endless possibilities that general intelligences get lost among. Our mental programs can be fiendishly well-engineered to solve some problems, because they are not limited to using only those strategies that can be applied to all problems.

Lessons from evolutionary psychology indicate that developing specialized intelligences — artificial idiot savants — and networking them would achieve a mosaic AI, just as evolution gradually built natural intelligences. The essential activity is discovering sets of principles that solve a particular family of problems. Indeed, successful scientific theories are examples of specialized intelligences, whether implemented culturally among communities of researchers or implemented computationally in computer models. Similarly, adding duplicates of the specialized programs we discover in the human mind to the emerging AI network would constitute a tremendous leap toward AI. Essentially, for this aggregating intelligence to communicate with humans — for it to understand what we mean by a question or want by a request — it will have to be equipped with accurate models of the native intelligences that inhabit human minds.

Which brings us to another impending transformation: rapid and sustained progress in understanding natural minds.

For decades, evolutionary psychologists have been devoted to perpetrating the great reductionist crime — working to create a scientific discipline that progressively maps the evolved universal human mind/brain — the computational counterpart to the human genome. The goal of evolutionary psychology has been to create high resolution maps of the circuit logic of each of the evolved programs that together make up human nature (anger, incest avoidance, political identification, understanding physical causality, guilt, intergroup rivalry, coalitional aggression, status, sexual attraction, magnitude representation, predator-prey psychology, etc.). Each of these is an intelligence specialized to solve its class of ancestral problems.

The long-term ambition is to develop a model of human nature as precise as if we had the engineering specifications for the control systems of a robot. Of course, both theory and evidence indicate that the programming of the human is endlessly richer and subtler than that of any foreseeable robot.

Still, how might a circuit map of human nature radically change the situation our species finds itself in?

Humanity will continue to be blind slaves to the programs that evolution has built into our brains until we drag them into the light. Ordinarily, we only inhabit the versions of reality they spontaneously construct for us — the surfaces of things. Because we are unaware we are in a theater, with our roles and our lines largely written for us by our mental programs, we are credulously swept up in these plays (such as the genocidal drama of us versus them). Endless chain reactions among these programs leave us the victims of history — embedded in war and oppression, enveloped in mass delusions and cultural epidemics, mired in endless negative sum conflict.

If we understand these programs and the coordinated hallucinations they orchestrate in our minds, our species could awaken from the roles these programs assign to us. Yet this cannot happen if this knowledge — like quantum mechanics — remains forever locked up in the minds of a few specialists, walled off by the years of study required to master it.

Which brings us to another interlinked transformation, which could solve this problem.

If a concerted effort were made, we could develop methods for transferring bodies of understanding — intellectual mastery — far more rapidly, cheaply, and efficiently than we do now. Universities still use medieval (!) techniques (lecturing) to noisily, haphazardly and partially transfer fragments of 21st century disciplines, taking many years and spending hundreds of thousands of dollars per transfer per person. But what if people could spend four months with a specialized AI — something immersive, interactive, all-absorbing and video game-like, and emerge with a comprehensive understanding of physics, or materials science, or evolutionary psychology? To achieve this, technological, scientific, and entertainment innovations in several dozen areas would be integrated: Hollywood post-production techniques, the compulsively attention-capturing properties of emerging game design, nutritional cognitive enhancement, a growing map of our evolved programs (and their organs of understanding), an evolutionary psychological approach to entertainment, neuroscience-midwived brain-computer interfaces, rich virtual environments, and 3D imaging technologies. Eventually, conceptual education will become intense, compelling, searingly memorable, and ten times faster.

A Gutenberg revolution in disseminating conceptual mastery would change everything, and — not least — would allow our species to achieve widespread scientific self-understanding. We could awaken from ancient nightmares.

paul_saffo's picture
Technology Forecaster; Consulting Associate Professor, Stanford University

Accelerating change is the new normal. Even the most dramatic discoveries waiting in the wings will do little more than push us further along the rollercoaster of exponential arcs that define modern life. Momentous discoveries compete with Hollywood gossip for headline space, as a public accustomed to a steady diet of surprises reacts to the latest astonishing science news with a shrug.

But there is one development that would fundamentally change everything — the discovery of non-human intelligences equal or superior to our species.  It would change everything because our crowded, quarreling species is lonely. Vastly, achingly, existentially lonely. It is what compels our faith in gods whose existence lies beyond logic or proof. It is what animates our belief in spirits and fairies, sprites, ghosts and little green men. It is why we probe the intelligence of our animal companions, hoping to start a conversation.  We are as lonely as Defoe’s Crusoe. We desperately want someone else to talk to.

The search for extraterrestrial intelligences — SETI — began 50 years ago with a lone radio astronomer borrowing spare telescope time to examine a few frequencies in the direction of two nearby stars. The search today is being conducted on a continuous basis with supercomputers and sophisticated receivers like the SETI Institute’s Allen Telescope Array. Today’s systems search more radio space in a few minutes than was probed in SETI’s first decade. Meanwhile, China is breaking ground on a new 500-meter dish (that’s a receiving surface the size of some 30 football fields, nearly three times the collecting area of Arecibo) whose mission explicitly encompasses the search for other civilizations.

Astronomers are looking as well as listening. Over 300 extrasolar planets have been detected, all but 12 in the last decade and over 100 in the last two years alone. More significantly, the minimum size of detectable extrasolar planets has plummeted, making it possible to identify planets with masses similar to earth. Planetary discovery is poised to go exponential with the 2009 launch of NASA’s Kepler spacecraft, which will examine over 100,000 stars for the presence of terrestrial-sized planets. The holy grail of planet hunters isn’t Jupiter-sized giants, but other Earths suitable for intelligent life recognizable to us.

The search so far has been met only by a great silence, but as astronomers continue their hunt for intelligent neighbors, computer scientists are determined to create them. Artificial intelligence research has been underway for decades and a few AIs have arguably already passed the Turing Test. Apply the exponential logic of Moore’s Law and the arrival of strong AI in the next few decades seems inevitable. We will have robots smart enough to talk to, and so emotionally appealing that people will demand the right to marry them.

One way or another, humanity will find someone or some thing to talk to. The only uncertainty is where the conversations will lead. Distant alien civilizations will make for difficult exchange because of the time lag, but the mere fact of their existence will change our self-perception as profoundly as Copernicus did five centuries ago. And despite the distance, we will of course try to talk to them. A third of us will want to conquer them, a third of us will seek to convert them, and the rest of us will try to sell them something.

Artificial companions will make for more intimate conversations, not just because of their proximity, but because they will speak our language from the first moment of their stirring sentience. However, I fear what might happen as they evolve exponentially. Will they become so smart that they no longer want to talk to us? Will they develop an agenda of their own that makes utterly no sense from a human perspective? A world shared with super-intelligent robots is a hard thing to imagine. If we are lucky, our new mind children will treat us as pets. If we are very unlucky, they will treat us as food.

daniel_c_dennett's picture
Philosopher; Austin B. Fletcher Professor of Philosophy, Co-Director, Center for Cognitive Studies, Tufts University; Author, From Bacteria to Bach and Back

What will change everything? The question itself and many of the answers already given by others here on Edge.org point to a common theme: reflective, scientific investigation of everything is going to change everything. When we look closely at looking closely, when we increase our investment in techniques for increasing our investment in techniques... for increasing our investment in techniques, we create non-linearities — like Doug Hofstadter's strange loops — that amplify uncertainties, allowing phenomena that have heretofore been orderly and relatively predictable to escape our control. We figure out how to game the system, and this initiates an arms race to control or prevent the gaming of the system, which leads to new levels of gamesmanship and so on.

The snowball has started to roll, and there is probably no stopping it. Will the result be a utopia or a dystopia? Which of the novelties are self-limiting, and which will extinguish institutions long thought to be permanent? There is precious little inertia, I think, in cultural phenomena once they are placed in these arms races of cultural evolution. Extinction can happen overnight, in some cases. The almost frictionless markets made possible by the internet are already swiftly revolutionizing commerce.

Will universities and newspapers become obsolete? Will hospitals and churches go the way of corner grocery stores and livery stables? Will reading music soon become as arcane a talent as reading hieroglyphics? Will reading and writing themselves soon be obsolete? What will we use our minds for? Some see a revolution in our concept of intelligence, either because of "neurocosmetics" (Marcel Kinsbourne) or quantum-computing (W. H. Hoffman), or "just in time storytelling" (Roger Schank). Nick Humphrey reminds us that when we get back to basics — procreating, eating, just staying alive — not that much has changed since Roman times, but I think that these are not really fixed points after all.

Our species' stroll through Design Space is picking up speed. Recreational sex, recreational eating, and recreational perception (hallucinogens, alcohol) have been popular since Roman times, but we are now on the verge of recreational self-transformations that will dwarf the modifications the Romans indulged in. When you no longer need to eat to stay alive, or procreate to have offspring, or locomote to have an adventure-packed life, when the residual instincts for these activities might be simply turned off by genetic tweaking, there may be no constants of human nature left at all. Except, maybe, our incessant curiosity.

eric_r_kandel's picture
Recipient, Nobel Prize in Physiology or Medicine, 2000; Professor of Biochemistry and Molecular Biophysics, Columbia University; Author, Reductionism in Art and Brain Science: Bridging the Two Cultures

Biology in general and the biology of mind in particular have become powerful scientific disciplines. But a major lack in the current science of mind is a satisfactory understanding of the biological basis of almost any mental illness. Achieving a biological understanding of schizophrenia, manic-depressive illness, unipolar depression, anxiety states, or obsessional disorders would be a paradigm shift for the biology of mind. It would not only inform us about some of the most devastating diseases of humankind, but since these are diseases of thought and feeling, understanding them would also tell us more about who we are and how we function.

To illustrate the embarrassing lack of science in this area, let me put this problem into a historical perspective with two personal introductory comments.

First, in the 1960s, when I was a psychiatric resident at the Massachusetts Mental Health Center of the Harvard Medical School, most psychiatrists thought that the social determinants of behavior were completely independent of the biological determinants and that each acted on different aspects of mind. Psychiatric illnesses were classified into two major groups — organic mental illnesses and functional mental illnesses — based on presumed differences in origin. That classification, which dated to the nineteenth century, emerged from postmortem examinations of the brains of mental patients.

The methods available for examining the brain at that time were too limited to detect subtle anatomical changes. As a result, only mental disorders that entailed significant loss of nerve cells and brain tissue such as Alzheimer's disease, Huntington's disease, and chronic alcoholism were classified as organic diseases, based on biology. Schizophrenia, the various forms of depression, and the anxiety states produced no readily detectable loss of nerve cells or other obvious changes in brain anatomy and therefore were classified as functional, or not based on biology. Often, a special social stigma was attached to the so-called functional mental illnesses because they were said to be "all in a patient's mind." This notion was accompanied by the suggestion that the illness may have been put into the patient's mind by his or her parents.

With the passage of forty years we have made progress and witnessed the advent of a paradigm shift for the science of the mind. We no longer think that only certain diseases affect mental states through biological changes in the brain. Indeed, the underlying precept of the new science of mind is that all mental processes are biological — they all depend on organic molecules and cellular processes that occur literally "in our heads." Therefore, any disorder or alteration of those processes must also have a biological basis.

Second, in 2001 Max Cowan and I were asked to write a review for the Journal of the American Medical Association about molecular biological contributions to neurology and psychiatry. In writing the review, we were struck by the radical way in which molecular genetics had transformed neurology. This led me to wonder why molecular biology has not had a similar transformative effect on psychiatry.

The fundamental reason is that neurological diseases and psychiatric diseases differ in four important ways.

  1. Neurology has long been based on the knowledge of where in the brain specific diseases are located. The diseases that form the central concern of neurology — strokes, tumors, and degenerative diseases of the brain — produce clearly discernible structural damage. Studies of those disorders taught us that, in neurology, location is key. We have known for almost a century that Huntington's disease is a disorder of the caudate nucleus of the brain, Parkinson's disease is a disorder of the substantia nigra, and amyotrophic lateral sclerosis (ALS) is a disorder of motor neurons. We know that each of these diseases produces its distinctive disturbances of movement because each involves a different component of the motor system.
  2. In addition, a number of common neurological illnesses, such as Huntington's, the fragile X form of mental retardation, some forms of ALS, and the early-onset form of Alzheimer's, were found to be inherited in a relatively straightforward way, implying that each of these diseases is caused by a single defective gene.
  3. Pinpointing the genes and defining the mutation that produce these diseases therefore has been relatively easy.
  4. Once a mutation is identified, it becomes possible to express the mutant gene in mice and flies and thus to discover its mechanism of pathogenesis: how the gene gives rise to disease.

Over the last 20 years neurology has been revolutionized by the advent of molecular genetics. As a result of knowing the anatomical location, the identity, and the mechanism of action of specific genes, diagnoses of neurological disorders are no longer based solely on behavioral symptoms. We have even established new diagnostic categories within the neurological diseases, such as the ion channelopathies (for example, familial periodic paralysis), characterized by aberrant function of ion channel proteins, and the trinucleotide repeat disorders, such as Huntington's disease and fragile X syndrome, in which an abnormal and unstable replication of short repeating elements in DNA alters the function of the resulting protein.

These new diagnostic categories are based not on symptomatology but on the dysfunction of specific genes, proteins, neuronal organelles, or neuronal systems. Moreover, molecular genetics has given us insight into the mechanisms of pathogenesis of neurological disease that did not exist 20 years ago. Thus in addition to examining patients in the office, physicians can order tests for the dysfunction of specific genes, proteins, and nerve cell components, and they can examine brain scans to see how specific regions have been affected by a disorder.

In contrast to its brilliant impact on neurology, molecular genetics has so far had only a minor impact on psychiatry. We may well ask: Why is that so?

Tracing the causes of mental illness is a much more difficult task than locating structural damage in the brain. The same four factors that have proven useful in studying neurological illnesses have been limiting in the study of mental illness.

  1. A century of postmortem studies of the brains of mentally ill persons failed to reveal the clear, localized lesions seen in neurological illness. Moreover, psychiatric illnesses are disturbances of higher mental function. The anxiety states and the various forms of depression are disorders of emotion, whereas schizophrenia is a disorder of thought. Emotion and thinking are complex mental processes mediated by complex neural circuitry. Until quite recently, little was known about the neural circuits involved in normal thought and emotion.
  2. Furthermore, although most mental illnesses have an important genetic component, they do not have straightforward inheritance patterns, because they are not caused by mutations of a single gene. Thus, there is no single gene for schizophrenia, just as there is no single gene for anxiety disorders, depression, or most other mental illnesses. Instead, the genetic components of these diseases are thought to arise from the interaction of several genes with the environment, each gene exerting a relatively small effect. Together, the several genes create a genetic predisposition — a potential — for a disorder. Most psychiatric disorders are caused by a combination of these genetic predispositions and some additional, environmental factors. For example, identical twins have identical genes. If one twin has Huntington's disease, so will the other. But if one twin has schizophrenia, the other has only a 50 percent chance of developing the disease. To trigger schizophrenia, some other, nongenetic factors in early life — such as intrauterine infection, malnutrition, stress, or the sperm of an elderly father — are required. Because of this complexity in the pattern of inheritance, we have not yet identified most of the genes involved in the major mental illnesses. (A toy simulation contrasting these two patterns of inheritance follows this list.)
  3. As a result we know little about the specific genes involved in any major mental illness.
  4. Because of points two and three, we have no satisfactory animal models for most mental disorders.
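
To make the contrast in point 2 concrete, here is a toy simulation: a minimal sketch under arbitrary assumptions (50 hypothetical small-effect loci, equal genetic and environmental variance, a top-1-percent liability threshold), not a model of any real disease. Identical twins share all of their genes but not all of their environment, so a polygenic, threshold-style disorder shows twin concordance well below 100 percent, whereas a fully penetrant single-gene disorder such as Huntington's shows complete concordance.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pairs = 100_000              # simulated identical-twin pairs
    n_loci = 50                    # hypothetical number of small-effect loci

    # Polygenic disorder: a genetic liability shared by both twins, plus an
    # environmental component that each twin draws independently.
    genetic = rng.normal(size=(n_pairs, n_loci)).sum(axis=1) / np.sqrt(n_loci)
    liability_twin1 = genetic + rng.normal(size=n_pairs)
    liability_twin2 = genetic + rng.normal(size=n_pairs)

    # "Affected" means crossing a liability threshold (here, roughly the top 1%).
    threshold = np.quantile(np.concatenate([liability_twin1, liability_twin2]), 0.99)
    affected1 = liability_twin1 > threshold
    affected2 = liability_twin2 > threshold

    # If twin 1 is affected, how often is twin 2 affected as well?
    concordance = (affected1 & affected2).sum() / affected1.sum()
    print(f"polygenic twin concordance: {concordance:.2f}")   # well below 1.0

    # A fully penetrant single-gene disorder needs no simulation: identical twins
    # share the mutation, so concordance is 1.0 by definition.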

What, then, is needed to achieve a better biological understanding of mental illness?

Two initial requirements are essential and, in principle, obtainable within the next two decades:

  1. We need biological markers for mental illness so that we can understand the anatomical basis of these diseases, diagnose them objectively, and follow their response to treatment. A beginning is evident in the case of depression, which is associated with hyperactivity in a prefrontal cortical area, Brodmann area 25; in anxiety states, where there is hyperactivity in the amygdala; and in obsessive-compulsive disorder, where there is an abnormality in the striatum.
  2. We need identification of the genes for various mental illnesses, so that we can understand the molecular basis of these diseases.

These two advances would enhance our ability to understand these disorders better and recognize them earlier. But in addition, these advances would open up completely new approaches to the treatment of mental illness, an area that has been at a pharmacological standstill for depression, bipolar disorders, and schizophrenia for the last twenty years.

sherry_turkle's picture
Abby Rockefeller Mauzé Professor of the Social Studies of Science and Technology, MIT; Internet Culture Researcher; Author, The Empathy Diaries

I will see the development of robots that people will want to spend time with. Not just a little time, time in which the robots serve as amusements, but enough time and with enough interactivity that the robots will be experienced as companions, each closer to a someone than a something. I think of this as the robotic moment.

Sociable technologies first came on the mass market with the 1997 Tamagotchi, a creature on a small video screen that did not offer to take care of you, but asked you to take care of it. The Tamagotchi needed to be fed and amused. It needed its owners to clean up after its digital messes. Tamagotchis demonstrated that in digital sociability, nurturance is a "killer app." We nurture what we love, but we love what we nurture. In the early days of artificial intelligence, the emphasis had been on building artifacts that impressed with their knowledge and understanding. When AI goes sociable, the game changes: the "relational" artifacts that followed the Tamagotchis inspired feelings of connection because they pushed on people's "Darwinian" buttons: they asked us to teach them, they made eye contact, they tracked our motions, they remembered our names. For people, these are the markers of sentience; they signal, rightly or wrongly, that there is "somebody home."

Sociable technologies came onstage as toys, but in the future, they will be presented as potential nannies, teachers, therapists, life coaches, and caretakers for the elderly. First, they will be put forward as "better than nothing." (It is better to have a robot as a diet coach than just to read a diet book. If your mother is in a nursing home, it is better to leave her interacting with a robot that knows her habits and interests than staring at a television screen.) But over time, robots will be presented as "better than something," that is, preferable to an available human being, or in some cases, to a living pet. They will be promoted as having powers – of memory, attention, and patience – that people lack. Even now, when people learn that I work with robots, they tell me stories of human disappointment: they talk of cheating husbands, wives who fake orgasms, children who take drugs. They despair about human opacity: "We never know how another person really feels; people put on a good face. Robots would be safer." As much as a story of clever engineering, our evolving attachments to technology speak to feelings of unrequited love.

In the halls of a large psychology conference, a graduate student takes me aside to ask for more information on the state of research about relational machines. She confides that she would trade in her boyfriend "for a sophisticated Japanese robot" if the robot would produce what she termed "caring behavior." She tells me that she relies on a "feeling of civility in the house." She does not want to be alone. She says: "If the robot could provide the environment, I would be happy to help produce the illusion that there is somebody really with me." What she is looking for, she tells me, is a "no-risk relationship" that will stave off loneliness; a responsive robot, even if it is just exhibiting scripted behavior, seems better to her than a demanding boyfriend. I ask her if she is joking. She tells me she is not.

It seemed like no time at all before a reporter for Scientific American called to interview me about a book on robot love by computer scientist David Levy. In Love and Sex with Robots, Levy argues that robots will teach us to be better friends and lovers because we will be able to practice on them, relationally and physically. Beyond this, they can substitute where people fail us. Levy proposes, among other things, the virtues of marriage to robots. He argues that robots are "other," but in many ways, better. No cheating. No heartbreak.

I tell the reporter that I am not enthusiastic about Levy's suggestions: to me, the fact that we are discussing marriage to robots is a window onto human disappointments. The reporter asks if my opposition to people marrying robots doesn't put me in the same camp as those who had for so long stood in the way of marriage for lesbians and gay men. I try to explain that just because I don't think that people should marry machines doesn't mean that any mix of adult people with other adult people isn't fair territory. He accuses me of species chauvinism and restates his objection: Isn't this the kind of talk that homophobes once used, not considering gays and lesbians as "real" people? Machines are "real" enough to bring special pleasures to relationships, pleasures that need to be honored in their own right.

The argument in Love and Sex is exotic, but we are being prepared for the robotic moment every day. Consider Joanie, seven, who has been given a robot dog. She can't have a real dog because of her allergies, but the robot's appeal goes further. It is not just better than nothing but better than something. Joanie's robot, known as an Aibo, is a dog that can be made to measure. Joanie says, "It would be nice to be able to keep Aibo at a puppy stage for people who like to have puppies."

It is a very big step from Joanie admiring a "forever young" Aibo to David Levy and his robot lover. But they share the fantasy that while we may begin by substituting a robot when a person is not available, we will move on to specifically choosing malleable artificial companions. If the robot is a pet, it might always stay a puppy because that's how you like it. If the robot is a lover, you might always be the center of its universe because that's how you like it.

But what will happen if we get what we say we want? If our pets always stay puppy cute; if our lovers always say the sweetest things? If you only know cute and cuddly, you don't learn about maturation, growth, change, and responsibility. If you only know an accommodating partner, you end up knowing neither the partner nor yourself.

The robotic moment will bring us to the question we must ask of every technology: does it serve our human purposes? That question forces us to reconsider what those purposes are. When we connect with the robots of the future, we will tell and they will remember. But have they listened? Have we been "heard" in a way that matters? Will we no longer care?

j_craig_venter's picture
A leading scientist of the 21st century for Genomic Sciences; Co-Founder, Chairman, Synthetic Genomics, Inc.; Founder, J. Craig Venter Institute; Author, A Life Decoded

In science, as with most areas, seemingly simple ideas can and have changed everything. Just one hundred and fifty years ago Charles Darwin's On the Origin of Species was published and immediately impacted science and society by describing the process of evolution as natural selection, but nobody understood why or how this process happened. It took until the 1940's to establish that the substance that carried the inheritable information was DNA. In 1953 an Englishman and an American, Crick and Watson, proposed that DNA is formed as a spiraling ladder — or double helix — with the bases A-T and C-G paired (base pairs) to form the rungs; however, no one knew what the code of life actually was.

In the 1960's some of the first secrets of our "genetic code" were revealed with the discovery that the chemical bases should be read in groups of three. These "nucleotide triplets" then defined and coded for amino acids.
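
As a rough illustration of what reading the bases "in groups of three" means in practice, here is a minimal sketch in Python; the codon table is only a small fragment of the real 64-entry genetic code, and the sample DNA string is invented for the example.

    # A fragment of the standard genetic code (DNA codons -> amino acids).
    CODON_TABLE = {
        "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
        "AAA": "Lys", "TGG": "Trp", "TAA": "STOP", "TAG": "STOP",
    }

    def translate(dna):
        """Read a DNA string three bases at a time and look up each triplet."""
        protein = []
        for i in range(0, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            amino_acid = CODON_TABLE.get(codon, "?")  # "?" = codon outside this fragment
            if amino_acid == "STOP":
                break
            protein.append(amino_acid)
        return protein

    print(translate("ATGTTTGGCGAATAA"))  # ['Met', 'Phe', 'Gly', 'Glu']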

In the late 1970's the complete genetic code (5,000 nucleotides) of a phage (a small virus that kills E. coli, a type of bacteria) was read out in sequence by a new technology developed by Fred Sanger from Cambridge. This technology, named Sanger sequencing, would dominate genetics for the next 25 years.

In 1995 my team read the complete genetic code of the chromosome containing all of the genetic information for a bacterium. The genome of the bacterium that we decoded was over 1.8 million nucleotides long and coded for all the proteins associated with the life of the bacterium. Based on our new methods there was an explosion of new data from decoded genomes of many living species, including humans.

Just as Darwin observed evolution in the changes that he saw in various species of finches, land and sea iguanas, and tortoises, the genomics community is now studying the changes in the genetic code that are associated with human traits and disease, and the differences among us, by reading the genetic code of many humans and comparing them. The technology is changing so rapidly that it will soon be commonplace for everyone to know their own genetic code. This will change the practice of medicine from treating disease after it happens to preventing disease before its onset. Understanding the mutations and variations in the genetic code will clearly help us to understand our own evolution.

Science is changing dramatically again as we use all our new tools to understand life and perhaps even to redesign it. The genetic code is the result of over 3.5 billion years of evolution and is common to all life on our planet. We have been reading the genetic code for a few decades and are gaining insight into how it programs for life.

In a series of experiments to better understand the genetic code, my colleagues and I developed new ways to chemically synthesize DNA in the laboratory. First we synthesized the genetic code of the same virus that Sanger and colleagues decoded in 1977. When this large synthetic molecule was inserted into a bacterium, the cellular machinery in the bacterium was not only able to read the synthetic genetic code, but the cell was also able to produce the proteins coded for by the DNA. The proteins self-assembled to produce the virus particle that was then able to infect other bacteria. Over the past few years we were able to chemically make an entire bacterial chromosome, which at more than 582,000 nucleotides is the largest man-made chemical produced to date.

We have now shown that DNA is absolutely the information-coded material of life by completely transforming one species into another simply by changing the DNA in the cell. By inserting a new chromosome into a cell and eliminating the existing chromosome all the characteristics of the original species were lost and replaced by what was coded for on the new chromosome. Very soon we will be able to do the same experiment with the synthetic chromosome.

We can start with digitized genetic information and four bottles of chemicals and write new software of life to direct organisms to carry out processes that are desperately needed, like creating renewable biofuels and recycling carbon dioxide. As we learn from 3.5 billion years of evolution we will convert billions of years into decades and change not only conceptually how we view life but life itself.

dominique_gonzalez_foerster's picture
Film projections, photography and spatial installations,

Following the nano and miniaturization trend occurring in many fields, from tapas to cameras, including surgery, vegetables, cars, computers...

Let's imagine that the fantastic Gulliver iconography with its cohabitation of tiny and giant people could also have some visionary quality. Let's think that the 1957 film The Incredible Shrinking Man can be more than science fiction, and let's imagine a worldwide collective decision to genetically miniaturize the future generations in order to reduce human needs and increase space and resources on the blue planet.

There would be a strange Gulliver-like period of transition where giants would still live with the next, smaller generations, but in the longer run the planet might look very different, and the change of scale in relation to animals, plants, and landscapes could generate completely new perceptions, representations and ideas.

frank_wilczek's picture
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics; Author, Fundamentals

More than a hundred years passed between Columbus' first, confused sighting of America in 1492 and the vanguard of English colonization, at Jamestown in 1607. A shorter interval separates us today from Planck's first confused sighting of the quantum world, in 1899. The quantum world is a New New World far more alien and difficult of access than Columbus' Old New World. It is also, in a real sense, much bigger. While discovery of the Old New World roughly doubled the land area available to humans, the New New World exponentially expands the dimension of physical reality.* (For example, every single electron's spin doubles it.) Our fundamental equations do not live in the three-dimensional space of classical physics, but in an (effectively) infinite-dimensional space: Hilbert space. It will take us much more than a century to homestead that New New World, even at today's much-accelerated pace.
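
The arithmetic behind that exponential claim is worth spelling out. A collection of n two-state quantum systems (n electron spins, say) lives in a Hilbert space whose dimension doubles with every spin added; 300 spins is chosen below purely for illustration:

    \dim \mathcal{H}_n = 2^n, \qquad \dim \mathcal{H}_{300} = 2^{300} \approx 2 \times 10^{90}

A few hundred spins thus already require more dimensions to describe than the roughly 10^80 atoms in the observable universe, which is the sense in which the quantum world is exponentially bigger than the classical one.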

We've managed to establish some beachheads, but the vast interior remains virgin territory, unexploited. (This time, presumably, there are no aboriginals.) Poking along the coast, we've already stumbled upon transistors, lasers, superconducting magnets, and a host of other gadgets. What's next? I don't know for sure, of course, but there are two everything-changers that seem safe bets:

• New microelectronic information processors, informed by quantum principles—perhaps based on manipulating electron spins, or on supplementing today's silicon with graphene—will enable more cycles of Moore's law, on several fronts: smaller, faster, cooler, cheaper. Supercomputers will approach and then surpass the exaflop frontier, making their capacity comparable to that of human brains. Improved bandwidth will put the Internet on steroids, allowing instant access from anywhere to all the world's information, and blurring or obliterating the experienced distinction between virtual and physical reality.

• Designer materials better able to convert energy from the hot and unwieldy quanta (photons) Sol rains upon us into more convenient forms (chemical bonds) will power a new economy of abundance. Evolution in its patient blindness managed to develop photosynthesis; with mindful insight, we will do better.

As we thus augment our intelligence and our power, a sort of bootstrap may well come into play. We—or our machines, or our hybrid descendants—will acquire the wit and strength to design and construct still better minds and engines, in an ascending spiral.

Our creative mastery over matter, through quantum theory, is still embryonic. The best is yet to come. 
---
*This is established physics, independent of speculations about extra spatial dimensions (which are essentially classical).

robert_shapiro's picture
Professor Emeritus of Chemistry and Senior Research Scientist, New York University; Author, Planetary Dreams

We may find the evidence we need within the frigid hydrocarbon lakes of Titan. Or perhaps we shall locate it by tracing the source of the methane hot spots on Mars to their deep underground origin. It may be easier though to sample in depth the contents of the water vapor and ice jets that erupt from “tiger stripe” cracks near the south pole of Enceladus. Least expensive of all would be to explore closer to home: in forsaken regions of Earth that are so hot, or so acidic, or so lacking in some vital nutrient necessary to life as we know it, that no creatures built of such life would deign to inhabit them. So many promising leads have appeared that it seems likely that only our desire and our finances stand as obstacles to our gaining the prize.

The prize in this case would be a sample of truly alien life. Despite the great legacy from pulp science fiction magazines and expensive Hollywood box office productions, aliens need not be green men, or menacing fanged monsters. Even humble microbes dismissed as “shower scum” in the New York Times would do nicely, provided that they met one key requirement. They must differ enough from us at the biochemical level so that it would be clear that they had started up and evolved on their own. Two separate origins in the same Solar System would imply that the universe is liberally sprinkled with life. Why would a discovery of this type change everything? It would not put food upon our dinner tables or shorten our commute to work. The change would come in our perception of the universe (which does contain everything that we know) and the place of life within it. We would learn that life, like art, can take on many forms and be constructed in countless ways and that we appear to be residents of a universe built to encourage such diversity.

We have always understood, of course, that living things came in many sizes and shapes: bacteria and whales, octopuses and centipedes. But we took it for granted that they were all made of one substance, one flesh. When they were ferocious, they could devour us. When they were domestic, we could make meals of them. This expectation was extended to alien life in fiction, myth and imagination. The Martian invaders of H.G. Wells' "The War of the Worlds" were ultimately subdued by infection by Earthly microbes. The creatures of the “Aliens” film series could incubate in humans, and draw nourishment from them. Humans could have sexual encounters with ancient Greek gods as well as with intruders in flying saucers.

Such events would have provoked little surprise in the 19th century, when the basic substance of life was thought to be a vital, gel-like protoplasm, which presumably would be the same everywhere. We now understand that life’s basic structure is much more intricate, but that the same building materials (nucleic acids, proteins, carbohydrates and fatty substances) are used by all known life forms here. Some scientists have extended this conclusion to alien life. To quote Nobel Laureate George Wald: "…So I tell my students: learn your biochemistry here and you will be able to pass examinations on Arcturus." This view carries practical consequences today: proposals for instruments to be flown to Mars and elsewhere include antibodies, probes, and other methods designed to detect the molecules familiar to us on Earth today.

If the microbes that we discovered on a nearby world had traveled from Earth within a meteorite, then this expectation would be valid. By studying them, we might learn a lot about the earlier stages of the evolution of life here, and about the ability of Earth life to adapt to a very different environment. Such information would be valuable, but it would not change everything.

When we discover life of separate origin, we will hit the jackpot. It will truly be made of a different flesh. Biochemists will be fascinated to learn how life functions such as energy capture, information storage and catalysis can be carried out by materials different from those used by the life we know. The field of biology will be greatly enriched, and a host of new technical innovations may arise from the new knowledge, but even this would not change everything. The largest impact would take place in the way that we view our existence and plan for our future.

For stability and comfort, most human beings appear to require a narrative that provides meaning and purpose to their lives. In many religions, our behavior as individuals here determines our fate in a hereafter. Our actions are crucial in the grand scheme of things. Prior to the Copernican revolution, the Earth was naturally placed at the very center of the stage in the theater of existence. As suitable decorations, the various heavenly bodies were embedded in spheres that rotated above us.

Now we understand that our home world occupies only a minute fragment of an immense array of planets, stars, and galaxies. Our species has experienced only a sliver of the great expanse of time that has passed since the Big Bang, and much more is yet to come. The playing field has become immense.

Traditional religions have generally ignored this huge expansion of the cosmic scheme and cling to an essentially pre-Copernican view of existence. In doing so, they reduce their narratives to cherished folk tales, with a message as relevant today as the science of Aristotle. By contrast, some Nobel Laureate scientists have regarded the Universe as meaningless and pointless, with our life representing an accidental anomaly that will disappear sooner or later. Fortunately, another interpretation exists; one that is fully compatible with science though it extends beyond it.

Eric Chaisson, Paul Davies and others have described a viewpoint often called "Cosmic Evolution". The successive appearance of galaxies, stars and planets, atoms and molecules, life and intelligence is seen as inherent in the laws that have governed our universe since the Big Bang. Minor alterations in many of the fundamental constants that are embedded in those laws would have made this succession of events impossible. For whatever reason, our universe is (to use Paul Davies' word) "bio-friendly".

If a separate origin of life were encountered within our own Solar System, the credibility of this viewpoint would be strengthened immensely. We could see ourselves as active participants in a vast cosmic competition that required all available space and time to play itself out to the fullest extent. To advance in the game and ultimately grasp its point, our mission would be to survive, prosper, evolve to the next stage and to expand into the greater universe (which would not be bad goals under any circumstance). By liberating humanity from a choice between obsolete dogma and unrelenting pessimism, this discovery would ultimately change everything.

austin_dacey's picture
Representative to the United Nations for the Center for Inquiry in New York City

Nobody eats animals — not the whole things. Most of us eat animal parts, with a few memorable culinary exceptions. And as we become more aware of the costs of meat — to our health, to our environments, and to the lives of the beings we consume — many of us wish to imagine the pieces apart from the wholes. The meat market obliges. It serves up slices disembodied, drained, and reassembled behind plastic, psychically sealed off from the syringe, saw blade, effluent pool, and all the other instruments of so-called husbandry. But of course this is just a cynical illusion.

Imagine, though, that the illusion could come true. Imagine giving in to the human weakness for flesh, but without the growth hormones, the avian flu, the untold millions tortured and gone; imagine the voluptuous tenderness of muscle, finally freed from brutality. You are thinking of cultured meat or in vitro meat, and already it is becoming technologically feasible.

Research on several promising tissue-engineering techniques, being led by scientists in the Netherlands and the United States, has been accelerating since 2000, when NASA cultured goldfish meat as possible sustenance on space missions. Soon it will be within our means to stop farming animals and start growing meat. Call it carniculture.

With the coming of carniculture (a term found in science fiction literature, although, etymologically speaking, "carneculture" might be more correct), meat and other animal products can be made safe, nutritious, economical, energy efficient, and above all, morally defensible. While carniculture may not change everything in the same way agriculture changed everything, certainly it will transform our economy and our relationship to animals.

Grains once roamed free on untamed plains, tomatoes were wild berries in the Andes. And meat once grew on animals.

sam_harris's picture
Neuroscientist; Philosopher; Author, Making Sense

When evaluating the social cost of deception, one must consider all of the misdeeds—marital infidelities, Ponzi schemes, premeditated murders, terrorist atrocities, genocides, etc.—that are nurtured and shored up, at every turn, by lies. Viewed in this wider context, deception commends itself, perhaps even above violence, as the principal enemy of human cooperation. Imagine how our world would change if, when the truth really mattered, it became impossible to lie.

The development of mind-reading technology is in its infancy, of course. But reliable lie-detection will be much easier to achieve than accurate mind reading. Whether or not we ever crack the neural code, enabling us to download a person’s private thoughts, memories, and perceptions without distortion, we will almost surely be able to determine, to a moral certainty, whether a person is representing his thoughts, memories, and perceptions honestly in conversation. Compared to many of the other hypothetical breakthroughs put forward in response to this year’s Edge question, the development of a true lie-detector would represent a very modest advance over what is currently possible through neuroimaging. Once this technology arrives, it will change (almost) everything.

The greatest transformation of our society will occur only once lie-detectors become both affordable and unobtrusive. Rather than spirit criminal defendants and hedge-fund managers off to the lab for a disconcerting hour of brain scanning, there may come a time when every courtroom or boardroom will have the requisite technology discreetly concealed behind its wood paneling. Thereafter, civilized people would share a common presumption: that wherever important conversations are held, the truthfulness of all participants will be monitored. Well-intentioned people would happily pass between zones of obligatory candor, and these transitions would cease to be remarkable. Just as we’ve come to expect that many public spaces will be free of nudity, sex, loud swearing, and cigarette smoke—and now think nothing of the behavioral changes demanded of us whenever we leave the privacy of our homes—we may come to expect that certain places and occasions will require scrupulous truth-telling. Most of us will no more feel deprived of the freedom to lie during a press conference or a job interview than we currently feel deprived of the freedom to remove our pants in a restaurant. Whether or not the technology works as well as we hope, the belief that it generally does work will change our culture profoundly.

In a legal context, some scholars have already begun to worry that reliable lie detection will constitute an infringement of a person’s Fifth Amendment privilege against self-incrimination. But the Fifth Amendment has already succumbed to advances in our technology. The Supreme Court has ruled that defendants can be forced to provide samples of their blood, saliva, and other physical evidence that may incriminate them. In fact, the prohibition against compelled testimony appears to be a relic of a more superstitious time: it was once widely believed that lying under oath would damn a person’s soul for eternity. I doubt whether even many fundamentalist Christians now imagine that an oath sworn on a courtroom Bible has such cosmic significance.

Of course, no technology is ever perfect. Once we have a proper lie-detector in hand, we will suffer the caprice of its positive and negative errors. Needless to say, such errors will raise real ethical and legal concerns. But some rate of error will, in the end, be judged acceptable. Remember that we currently lock people away in prison for decades—or kill them—all the while knowing that some percentage of those convicted must be innocent, while some percentage of those returned to our streets will be dangerous psychopaths guaranteed to re-offend. We have no choice but to rely upon our criminal justice system, despite the fact that judges and juries are poorly calibrated truth detectors, prone to error. Anything that can improve the performance of this ancient system, even slightly, will raise the quotient of justice in our world.

There are several reasons to doubt whether any of our current modalities of neuroimaging, like fMRI, will yield a practical form of mind-reading technology. It is also true that the physics of neuroimaging may grant only so much scope to human ingenuity. It is possible, therefore, that an era of cheap, covert lie-detection might never dawn, and we will be forced to rely upon some relentlessly costly, cumbersome technology. Even so, I think it safe to say that the time is not far off when lying, on the weightiest matters, will become a practical impossibility. This fact will be widely publicized, of course, and the relevant technology will be expected to be in place, or accessible, whenever the stakes are high. This very assurance, rather than the incessant use of these machines, will make all the difference.

nassim_nicholas_taleb's picture
Distinguished Professor of Risk Engineering, New York University School of Engineering ; Author, Incerto (Antifragile, The Black Swan...)

People want advice on how to get rich – and pay for it. Now how not to go bust does not appear to be valid advice – yet given that over time only a minority of companies do not go bust, avoiding death is the best possible – and most robust – advice. It is particularly good advice after your competitors get in trouble and you can go on legal pillages of their businesses. But few value such advice: this is the reason Wall Street quants, consultants, and investment managers are in business in spite of their charlatanic record. I was recently on TV and some "empty suit" kept bugging me for precise advice on how to pull out of the crisis. It was impossible to communicate my "what not to do" advice – or that my field is error avoidance, not emergency room surgery, and that it could be a standalone discipline. Indeed I spent 12 years trying to explain that in many instances no models were better – and wiser – than the mathematical acrobatics we had in finance, and it took a monumental crisis to convince people of the point.

Unfortunately such lack of rigor pervades the place where we expect it the least: institutional science. Science, particularly its academic version, never liked negative results, let alone the statement and advertising of its own limits — the reward system is not set up for it. You get respect for doing funambulism or spectator sports – following the right steps to become the "Einstein of Economics" or the "next Darwin" – rather than giving society something real by debunking myths or by cataloguing where our knowledge stops.

[In some instances we accept the trumpeting of limits of knowledge – say, Gödel's "breakthrough" mathematical limits – because it shows elegance in formulation and mathematical prowess, though the importance of such limits is dwarfed by our practical limits in forecasting climate changes, crises, social turmoil, or the fate of the endowment funds that will finance research of such future "elegant" limits.]

Let's consider medicine – which only started saving lives less than a century ago (I am generous), and to a lesser extent than initially advertised in the popular literature, as the drops in mortality seem to arise much more from awareness of sanitation and the (random) discovery of antibiotics than from therapeutic contributions. Doctors, driven by the beastly illusion of control, spent a long time killing patients, not considering that "doing nothing" could be a valid option – and research compiled by my colleague Spyros Makridakis shows that they still do to some extent. Indeed practitioners who were conservative and considered the possibility of letting nature do its job, or who stated the limits of our medical understanding, were until the 1960s accused of "therapeutic nihilism". It was deemed so "unscientific" to decide on a course of action based on an incomplete understanding of the human body – to say "this is the limit of where my body of knowledge stops".

The very term iatrogenic, i.e., harm caused by the healer, is not widespread – I have never seen it used outside medicine. In spite of my lifelong obsession with what is called "type I error", or false positive, I was only introduced to the concept very recently, thanks to a conversation with the essayist Bryan Appleyard. How can such a major idea have remained hidden from our consciousness? Even in medicine – that is, modern medicine – the ancient concept "do no harm" sneaked in very late. The philosopher of science Georges Canguilhem wondered why it was not until the 1950s that the idea came to us. This, to me, is a mystery: how professionals could cause harm for such a long time in the name of knowledge and get away with it.

Sadly, further investigation shows that these iatrogenics were mere rediscoveries after science grew too arrogant in the Enlightenment. Alas, once again, the elders knew better – Greeks, Romans, Byzantines, and Arabs had a built-in respect for the limits of knowledge. There is a treatise by the medieval Arab philosopher and doctor Al-Ruhawi which betrays the familiarity of these Mediterranean cultures with iatrogenics. I have also in the past speculated that religion saved lives by taking the patient away from the doctor. You could satisfy your illusion of control by going to the Temple of Apollo rather than seeing the doctor. What is interesting is that the ancient Mediterraneans may have understood the trade-off very well and accepted religion partly as a tool to tame such illusions of control.

I will conclude with the following statement: you cannot do anything with knowledge unless you know where it stops, and the costs of using it. Post-Enlightenment science, and its daughter superstar science, were lucky to have done well in (linear) physics, chemistry and engineering. But at some point we need to give up on elegance and focus on something that has been given short shrift for a very long time: the maps showing what current knowledge and current methods do not do for us, and a rigorous study of generalized scientific iatrogenics, that is, what harm can be caused by science (or, better, an exposition of what harm has been done by science). I find it the most respectable of pursuits.

stefano_boeri's picture
Architect; Professor, Politecnico of Milan; Visiting Professor at Harvard GSD; Editor-in-Chief, Abitare magazine

DISCOVERING THAT SOMEONE FROM THE FUTURE HAS ALREADY COME TO VISIT US

david_m_buss's picture
Professor of Psychology, University of Texas, Austin; Author, When Men Behave Badly

Game-changing scientific breakthroughs will come with the discovery of evolved psychological circuits for exploiting other humans—through cheating, free-riding, mugging, robbing, sexually deceiving, sexually assaulting, physically abusing, cuckolding, mate poaching, stalking, and murdering.  Scientists will discover that these exploitative resource acquisition adaptations contain specific design features that monitor statistically reliable cues to exploitable victims and opportunities. 

Convicted muggers who are shown videotapes of people walking down a New York City street show strong consensus about who they would choose as a mugging victim. Chosen victims emit nonverbal cues, such as an uncoordinated gait or a stride too short or long for their height, indicative of ease of victimization. These potential victims are high on muggability. Similarly, short stride length, shyness, and physical attractiveness provide reliable cues to sexual assaultability. Future scientific breakthroughs will identify the psychological circuits of exploiters sensitive to victims who give off cues to cheatability, deceivability, rapeability, abusability, mate poachability, cuckoldability, stalkability, and killability; and to groups that emanate cues to free-ridability and vanquishability.

This knowledge will offer the potential for developing novel defenses that reduce cheating, mugging, raping, robbing, stalking, mate poaching, murdering, and warfare.  On the other hand, because adaptations for exploitation co-evolve in response to defenses against exploitation, selection may favor the evolution of additional adaptations that circumvent these defenses.

Because evolution by selection is a relatively slow process, the acquisition of scientific knowledge about adaptations for exploitation may enable staying one step ahead of exploiters, and effectively short-circuit their strategies.  Some classes of crime will be curtailed. Cultural evolution, however, being fleeter than organic evolution, may enable the rapid circumvention of anti-exploitation defenses.  Defenses, in turn, favor novel strategies of exploitation.  Dissemination of discoveries about adaptations for exploitation and co-evolved defenses may change permanently the nature of social interaction.  Or perhaps, like some co-evolutionary arms races, these discoveries ultimately may change nothing at all.

peter_schwartz's picture
Futurist; Senior Vice President for Global Government Relations and Strategic Planning, Salesforce.com; Author, Inevitable Surprises

It is obvious that many of the most powerful new technologies are likely to flow from biology, but one of the most game changing is likely to be neural control of devices. We are not far from being able to "jack in" to the Web. Why do I think so? Several new biological tools are converging to give us both an understanding and new capabilities at the neuronal level.

The first new tools are the means to image the internal workings of living cells, including neurons. The second is a variety of tools for precisely mapping complex bio-molecular mechanics so that we can understand and manipulate neural functioning within the cell at a molecular level. And finally, functional MRI is already giving us a system-level understanding of neural behavior.

Over the next several decades there are likely to be other significant new biological tools that I have not foreseen that will only strengthen my argument. But the combination of these three is already likely to give us sufficient insight into how the brain works that we will be able to construct the means to reliably read the state of the brain and use that information to control external devices.

The first steps toward that are already far along in the pursuit of advanced prosthetic devices. Giving the seriously injured the ability to control a prosthetic arm is already a reality in the laboratory. And we have read out neural states that seem to express language. It is a few big steps from there to the ability to reliably control devices.

But if this does come to pass, the most obvious applications will be the control of such physical devices as cars, trucks, fighter jets, drones, machine tools, etc. The most interesting device to control, though, will be the computer cursor/keyboard. It is not hard to imagine a piece of technology, say something like a Bluetooth earpiece, that would enable one to think of a message and send it. It will be a form of one-way, electronically mediated telepathy. Reading out a message to another person will be fairly easy in the sense that control of the keyboard will give one the ability to transmit.
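
To give a sense of what "reading the state of the brain and using that information to control external devices" amounts to in today's laboratory prosthetics, here is a deliberately simplified, hypothetical sketch: simulated firing rates and a plain ridge-regression decoder that maps them to a cursor-velocity command. It is not a description of any particular system, and every number in it is invented.

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_neurons = 2000, 64      # hypothetical session: 64 recorded channels

    # Simulated training data: intended cursor velocity (x, y) and noisy firing
    # rates that depend linearly on it. In a real system both come from recordings.
    true_tuning = rng.normal(size=(n_neurons, 2))
    velocity = rng.normal(size=(n_samples, 2))
    rates = velocity @ true_tuning.T + 0.5 * rng.normal(size=(n_samples, n_neurons))

    # Fit a linear decoder (ridge regression) mapping firing rates -> velocity.
    lam = 1.0
    W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons), rates.T @ velocity)

    # "Online" use: each new vector of firing rates becomes a velocity command.
    test_velocity = np.array([[1.0, -0.5]])
    test_rates = test_velocity @ true_tuning.T + 0.5 * rng.normal(size=(1, n_neurons))
    print(test_rates @ W)                # recovers roughly (1.0, -0.5)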

But we are much further from understanding the read-in process. We may be able to send fairly easily, but we may never figure out how to receive directly into the brain. We may have to read the information as we do today, even if it is a neurally expressed message. It might be some advanced form of interface, e.g., electronic contact lenses, but that is still just a better computer screen. Neuro-transmitting by itself, though, will be sufficiently game-changing, and it will also be a step along the way to a radically different world of computer-mediated reality in every sense.

kai_krause's picture
Software Pioneer; Philosopher; Author, A Realtime Literature Explorer

Why this idea, that any state is not a good state unless there is growth, expansion, re-design...change?

It is deeply embedded in the human psyche to see any situation as mere momentary balance, just waiting for the inevitable change to happen. 
The one constant is always.... change.

However, for the larger scale of human lifetimes, change for change's sake is not a worthy mantra. Sometimes stasis, the actual opposite of change, may be the harder achievement, the trickier challenge and yet the nobler cause.

The lengthy quest as we meander through the years is in my view really all about quality of life more than just riches, honors or power.

And there I question the role of society-at-large for me in my own private day-to-day cycles: I am not about to wait for any 'new world', no matter how 'brave', to guide my path towards a fulfilled life.

That differs from some of the learned brains in this distinguished forum: 
Straightforward answers are found in these pages: climate catastrophe, extraterrestrial life and asteroid collisions are interspersed with solutions 'from the lab': meddling with genes, conjuring up super-intelligence and nano-technology, waiting for the singularity of hard A.I., fearing insurgent robots or clamoring for infinite human life span.

Big themes, well said, but somehow...I end up shrugging my shoulders.

Is it that the real future has always turned out unimaginably different from any of the predictions ever offered?!? That all the descriptions going past a decade or so have always missed the core nature of the changes by a mile?

My thesis summary: No need to go the far future to talk about change.
We are smack in the middle of it, and have been, for quite a while.

This decade, aptly called 'The Zero Years', already changed everything. The towering collapse of once-hallowed personal freedoms like mail, phone or banking secrecy was the mere start.
Where global priorities like AIDS research or space travel were allocated budgets in the low billions, we now casually gamble with ten trillion to prop up one industry after the next. The very foundations are quivering.

It's not about politics: simply, no one foresaw ten years ago the magnitude of change that we are finding ourselves in the middle of right now.
No need for an Apophis asteroid to blast the planet.

But the theme "Change has already happened" can be positive as well.

It is a near impossibility to define quality of life, a deeply personal set of values, judgements and emotions. But in recent years there have been breakthroughs, a new sense of empowerment, a new degree of functionality for our tools: research is now immensely powerful, fast, cheap and enjoyable, where it used to be a slow, painful and expensive chore. To have the entirety of Britannica, Wikipedia, millions of pages of writings of all ages, to find answers to almost any question in seconds, to view them on a huge sharp window-to-the-world screen: a pure joy to "work".

Images: memories can be frozen in time, at any occasion, in beautiful detail, collected by the hundreds of thousands.
The few cineastic masterpieces mankind produced, among the wretched majority of trashy abyss, one can now own & watch, rerun & freeze-frame.
Music: all I ever cherished at my direct disposal, tens of thousands of pieces. Bach's lifetime oeuvre, 160 CDs worth, in my pocket even! Consider that just a few generations ago?

A perfect cup of tea, the right bread with great jam,
the Berlin Philharmonic plays NOW, just for ME, exactly THAT
....and will even pause when I pee! 
What more does anyone need ??

Billions of our predecessors would have spontaneously combusted with "instant happiness overdose syndrome" given all these wonderful means,
and I am not even mentioning heated rooms, lit at night, clean showers, safe food, ubiquitous mobility or dentists with anaesthesia.

To change everything should really equate to: let's bring this basic state to the billions of our co-inhabitants on this dystopic dirtball. Now! 
An end to the endless, senseless suffering is the most meaningful goal. Yes, indeed, let's change everything.

Will I see that in my lifetime? Well. I'll see the beginning. 
And there is an optimistic streak in me, hopeful about the human spirit to face insurmountable challenges.

Photonic solar paint that gathers free energy on any surface; transparent photovoltaic film to make every window in any house or vehicle into a steady energy source; general voltaics with 90% efficiency...
Splitting water into hydrogen with ease (and then hooking that up to power a desalination machine which can feed the water... ;)
Transmit power wirelessly (as MIT finally did, following Tesla's cue).
Thin air "residual energy" batteries powering laptops and cellphones nearly infinitely.... all that is very close and will make an enormous difference - not just in Manhattan, London and Tokyo, but also in Ulaanbaatar, Irian Jaya or the Pantanal.

No need to invoke grand sweeping forces, momentous upheaval, those armies of nano-tech, gene-spliced A.I. robots...

Let's embrace the peace and quiet of keeping things just as they are for a while. Bring them to the rest of the planet. 
Taking the time to truly enjoy them, milking the moment for all it has, really watching, listening, smelling and tasting it all....
...that stasis.....
that changed everything.

ian_wilmut's picture
Chair of Reproductive Biology, Director Scottish Centre for Regenerative Medicine, University of Edinburgh; Author, After Dolly

In 2009 we are still comparatively near to the beginning of an era in which biomedical research is revolutionizing our understanding of inherited human diseases and providing the first effective treatment for at least some of them. This new knowledge will offer benefits that are at least as great as those from past biomedical research which has dramatically reduced the devastating effects of many infectious diseases. The powerful new tools that will bring this about are those for molecular genetic analysis and stem cell biology.

Human health and lifespan in the more fortunate parts of the world have improved dramatically in the past 1,000 years, but in the main this is because we became better at meeting the everyday needs for survival. Over this period humans became more effective at collecting or producing adequate supplies of food. On this timescale it was only comparatively recently that communities recognized the need for clean water and effective sanitation to prevent infection. More recently still, methods have been developed for immunization against potential infection and compounds identified as being powerful antibiotics. While the authors of these essays, and the vast majority of those who read them, can take all of these benefits for granted, it is a sad commentary on us all that this is not true for many millions in the less fortunate parts of the world, but that is another matter.

The coming together of emerging techniques in cell and molecular biology will change our entire approach to human diseases that are inherited, rather than acquired from an infective agent, such as a virus or bacterium. “Inherited diseases” are those which run in families because of errors in the DNA sequence of some family members. For the sake of simplicity, this essay will concentrate upon diseases inherited through chromosomal DNA, while acknowledging that there is DNA in mitochondria which is also error prone and the cause of other inherited diseases.

While the proportion of diseases for which the precise genetic cause has been identified is increasing because of the power of modern molecular analysis, it is still small. Even more important is the fact that the way in which the genetic error causes the symptoms of the disease is known in very few cases. This has been a major limiting factor in the development of effective treatments, because the objective of present treatments is not to correct the error in the DNA but rather to prevent the development of the symptoms.

One advantage of the new tools is that it is not necessary to have identified the genetic error in order to identify compounds that can prevent the development of symptoms. This new opportunity arises from the revolutionary new technique by which stem cells able to form all tissues of the body are derived from cells taken from adults. Shinya Yamanaka of Kyoto University was the first to show that a simple procedure could achieve this extraordinary change, and he named the cells “induced pluripotent cells” in view of their ability to form all tissues.

Many laboratories are now using induced pluripotent stem cells to study inherited diseases such as ALS. Pluripotent cells from ALS patients are turned into the different neural populations affected by the disease and contrasted with the same cell populations from healthy donors. Discovery of the molecular cause of the disease will involve analyses of gene function in the diseased cells in many ways. There is then the practical issue of devising a test to discover whether potential drugs are able to prevent the development of the disease symptoms. Such a test can form the basis of screens carried out by robots able to assess thousands of compounds every week. Many further studies will then be required before any new medicine can be used to treat patients.

In addition to the prospect of understanding and being able to treat inherited diseases, it is also likely that these therapies will be effective in treating related cases for which there is no evidence of a genetic cause. In the case of ALS it is estimated that less than 10% of cases are inherited. ALS should be considered a family of diseases because it reflects errors in several different genes. Recent studies have revealed an unusual distribution of a particular protein within the cells of many patients. This was the case in inherited cases associated with all except one of these genes, and it also occurred in several patients for whom there was no evidence of an inherited effect. While this pattern may not occur with all inherited diseases, the observation lends encouragement to the hope that treatments developed through research with inherited cases will often be equally effective for the cases in which there is no genetic effect.

While I have separated infectious and inherited diseases, in reality there is considerable overlap. New understanding of the molecular and cellular mechanisms that govern normal development and health will also provide the basis for novel treatments for infectious diseases. In this way, for example, an understanding of the development and function of the immune system may reveal new approaches to the treatment of diseases such as HIV.

It is always impossible to predict the future, and scientists, above many others, should know to expect the unexpected. Sadly this leads one to be cautious and fear that it will not be possible to develop effective treatments for some diseases, but it also suggests that there will be joyous surprises in store. I certainly find it very exciting indeed to think that in my lifetime effective treatments will be available for some of the many hundreds of inherited diseases. The devastating effect of these diseases on patients and their families will be greatly reduced or even removed, in just the same way that earlier research banished infections such as polio and TB and childhood diseases such as measles and mumps.

hans_ulrich_obrist's picture
Curator, Serpentine Gallery, London; Editor: A Brief History of Curating; Formulas for Now; Co-author (with Rem Koolhaas), Project Japan: Metabolism Talks

Immanuel Wallerstein wrote in his essay "Utopistics" about the historical choices of the 21st century, exploring possible better (not perfect, but better) societies within the constraints of reality. We travel from dreams that were betrayed, through a world-system in structural crisis, unpredictable and uncertain, towards a new world-system that goes beyond the limits of the 19th-century paradigm of Liberal Capitalism.

In order to find a new sense of fulfillment, individually and collectively, there will be a tendency towards increasing the number of de-commodified institutions. In Wallerstein's words: "Instead of speaking about transforming hospitals and schools into profit-making institutions, let's work it the other way. I think we move in the direction of de-commodifying a lot of things which we historically commodified. And this could be a very decentralized process. If you look at a lot of movements around the world, local and social movements, what they are objecting to in many ways is commodification." Examples are public libraries, which are mostly free, or free public galleries.

The artist Gustav Metzger, who has pioneered alternative systems of production and circulation of art, says: "We transform these possibilities in a cooperative manner. We cannot radicalize enough against a radicalizing world. I see the possibility that artists will increasingly take over their own lives, their own production, in relation to society in a wider sense."

New structures of knowledge are an important aspect of new emerging world-systems. In the late eighteenth century came the divorce between science and philosophy.

Wallerstein says: "What will change everything is to question it, to find a new, unified epistemology. Whereas the 'two cultures' were for 150 to 200 years centrifugal, complexity studies and cultural studies are centripetal, that is, moving towards each other."

The English architect Cedric Price pioneered centripetal models of a transdisciplinary art centre and a transdisciplinary school in his visionary projects the Fun Palace and the Potteries Thinkbelt. This does not make him—and this is the paradox—a utopian architect. Unlike Archigram, for example, who were interested in producing utopian drawings, Cedric Price took a pragmatic position and suggested engineering solutions. The Fun Palace was developed in the late sixties but remained unrealized; it can best be described as a model for a trans-disciplinary cultural institution for the 21st century. It was a complex composed of various moveable facilities that gave shape to a set of ideas the theatre producer Joan Littlewood had on how such a trans-disciplinary institution should work. The complex, according to Price, was made to enable self-participatory education and entertainment, was basically limited to a certain time, and was seen as a university of the streets, which would be easy for people to visit and would also function as a test site.

Thus projects such as the Potteries Thinkbelt applied many ideas of the Fun Palace to a school, a university. The Potteries Thinkbelt was preceded by the Atom project from 1969, which was an atomic education facility spread over a whole city. The education was not to be for one age-group but was seen as a continuous necessity for all members of the community. Thus, this place of learning for all included an industrial education showcase, a home-study station, open teach-toys, open-air servicing, life-conditioners, electronic audio-visual equipment—all of those elements in an atomic way spread over the city.

So the whole city would become what Price called a "town-brain." It set the foundations for the Potteries Thinkbelt, where Price's research into simple architectural components that build a complex system reached a kind of peak. There, he drew up not only the hardware, but also an entire program, a major project that had a lot to do with his discussions with the cybernetician Gordon Pask. It also had much to do with the effort to reuse the entire area of North Staffordshire, an area of the fading and waning British ceramics industry. This ceramics industry no longer used the railway lines or stations, so Price proposed to establish a university research facility which would be belt-shaped and would run through this whole area using this reactivated infrastructure.

The plan was for a university of about 20,000 students built on a network of rail and motorways, where different stations would become places of knowledge production and where permanent places could move. His was the idea of a "classroom on the move," where you would have all sorts of housing for professors, researchers and students: crates, sprawls, capsules… And when the existing infrastructure was not enough, there would also be inflatables, designed by Price, which could be easily and swiftly added.

The Potteries Thinkbelt and the Fun Palace remain unrealized, but they can be revisited now, without the nostalgia of being "projects from the past." They can be seen as instruction models, or recipes, or triggers for artists and architects to engage with public space and reclaim it in new and critical ways.

randolph_nesse's picture
Research Professor of Life Sciences, Director (2014-2019), Center for Evolution and Medicine, Arizona State University; Author, Good Reasons for Bad Feelings

Many people think that genetic engineering will change everything, even our very bodies and minds. It will, eventually. Right now, however, attempts to apply new genetic knowledge are having profound effects, not on our bodies, but on how we understand our bodies.  They are revealing that our central metaphor for the body is fundamentally flawed. The body is not a machine. It is something very different, a soma shaped by selection with systems unlike anything an engineer would design. Replacing the machine metaphor with a more biological view of the body will change biology in fundamental ways.

The transition will be difficult because the metaphor of body as machine has served us well. It sped escape from vitalism, and encouraged analyses of the body's components, connections, and functions, as if they were the creations of some extraordinarily clever cosmic engineer. It has yielded explanations with boxes and arrows, as if the parts are components of an efficient device. Thanks to the metaphor of the body as machine, vitalism has been replaced by an incredible understanding of the body's mechanisms. 

Now, however, genetic advances are revealing the metaphor's limitations. For instance, a decade ago it was reasonable to think we would find the genes that cause bipolar disease. New data has dashed these hopes. Bipolar disease is not caused by consistent genetic variations with large effects. Instead, it may arise from many different mutations, or from the interacting tiny effects of dozens of genes.

We like to think of genes as information quanta whose proteins serve specific functions. However, many regulate the expression of other genes that regulate developmental pathways that are regulated by environmental factors that are detected by yet other bodily systems unlike those in any machine. Even the word "regulate" implies coherent planning, when the reality is systems that work, one way or another, by mechanisms sometimes so entangled we cannot fully describe them. We can identify the main players: insulin and glucagon in glucose regulation, the amygdala in responding to threats and losses. But the details? Dozens of genes, hormones and neural pathways influence each other in interactions that defy description, even while they do what needs to be done. We have assumed, following the metaphor of the machine, that the body is extremely complex. We have yet to acknowledge that some evolved systems may be indescribably complex.

Indescribable complexity implies nothing supernatural. Bodies and their origins are purely physical. It also has nothing to do with so-called irreducible complexity, that last bastion of creationists desperate to avoid the reality of unintelligent design. Indescribable complexity does, however, confront us with the inadequacy of models built to suit our human preferences for discrete categories, specific functions, and one directional causal arrows. Worse than merely inadequate, attempts to describe the body as a machine foster inaccurate oversimplifications. Some bodily systems cannot be described in terms simple enough to be satisfying; others may not be described adequately even by the most complex models we can imagine. 

This does not mean we should throw up our hands. Moving to a more fully evolutionary view of organisms will improve our understanding. The foundation is recognizing that the body is not a machine. I like to imagine the body as a Rube Goldberg device, modified by generations of blind tinkerers, with indistinctly separate parts connected not by a few strings and pulleys, but by myriad mechanisms interacting in ways that no engineer would tolerate, or even imagine. But even this metaphor is flawed. A body is a body is a body. As we come to recognize that bodies are bodies, not machines, everything will change.

david_gelernter's picture
Computer Scientist, Yale University; Chief Scientist, Mirror Worlds Technologies; Author, America-Lite: How Imperial Academia Dismantled our Culture (and ushered in the Obamacrats)

What will change everything? The replacement of 90% of America's teachers at every level with parent-chosen, cloud-resident "learning tracks"; the end of conventional centralized, age-stratified schools & their replacement by local cluster-rooms where a few dozen children of all ages & IQs gather under the supervision of any trustworthy adult; where each child follows his own "learning track" at his own level & rate, but all kids in the cluster do playtime and gym-type activities together.

Thus primary & secondary education becomes radically localized & delocalized simultaneously. All children go to a nearby cluster room, & mix there with other children of all ages & interests from this neighborhood: radical localization (or re-localization, the return of the little red schoolhouse). But each child follows a learning-track prepared & presented by the best teachers and thinkers anywhere in the nation or the world. Local schools become cheap and flexible (doesn't matter whether 10 children or 50 show up, so long as there are enough machines to go around--& that will be easy). Perhaps 80+ % of school funding goes straight to the production of learning tracks, which accumulate in a growing worldwide library.

This inversion of education has bad properties as well as good: it's much easier to learn from a good teacher face-to-face than from any kind of software. But the replacement of schools by tracks-and-clusters is the inevitable, unstoppable, take-it-or-leave-it response to educational collapse in the US. "A Nation at Risk" appeared in 1983. Americans have known for a full generation that their schools are collapsing—& have failed even to make a dent in the problem. If anything, today's schools are worse than 1983's. Tracks-&-clusters is no perfect solution—but radical change is coming, & cloud-based, parent-chosen tracks with local cluster-rooms are all but inevitable as the (radical) next step.

None of today's software frameworks for online learning is adequate. New software must make it easy for parents & children to see & evaluate each track as a whole, give learners control over learning, integrate multimedia smoothly, include students in a net-wide discussion of each topic & put them in touch with (human) teachers as needed. Must also make it easy for parents & "guidance teachers" to evaluate each child's progress. It's all easily done with current technology—if software design is taken seriously.

Any person or group can offer a learning track at any level, on any topic. The usual consumer-evaluation mechanisms will help parents & students choose: government & private organizations will review learning tracks, comment & mark them "approved" or not. Suggested curricula will proliferate on the net. Anybody will be free to offer his services as a personal learning consultant.

Tracks-and-clusters poses many problems (& suggests many solutions). It represents the inevitable direction of education in the US not because it solves every problem, but because the current system is intellectually bankrupt—not merely today's schools & school districts but the whole system of government funding, local school boards & budget votes, approved textbooks, nation-wide educational fads & so on. It's all ripe for the trash, & on its way out. US schools will change radically because (& only because) they must change radically. Ten years from now the move to clusters-&-tracks will be well underway.

howard_rheingold's picture
Communications Expert; Author, Smart Mobs

Social media literacy is going to change many games in unforeseeable ways. Since the advent of the telegraph, the infrastructure for global, ubiquitous, broadband communication media has been laid down, and of course the great power of the Internet is the democracy of access—in a couple of decades, the number of users has grown from a thousand to a billion. But the next important breakthroughs won't be in hardware or software but in know-how, just as the most important after-effects of the printing press were not improved printing technologies but widespread literacy.

The Gutenberg press itself was not enough. Mechanical printing had been invented in Korea and China centuries before the European invention. For a number of reasons, a market for print and the knowledge of how to use the alphabetic code for transmitting knowledge across time and space broke out of the scribal elite that had controlled it for millennia. From around 20,000 books written by hand in Gutenberg's lifetime, the number of books grew to tens of millions within decades of the invention of moveable type. And the rapidly expanding literate population in Europe began to create science, democracy, and the foundations of the industrial revolution.

Today, we're seeing the beginnings of scientific, medical, political, and social revolutions, from the instant epidemiology that broke out online when SARS became known to the world, to the use of social media by political campaigns. But we're only in the earliest years of social media literacy. Whether universal access to many-to-many media will lead to explosive scientific and social change depends more on know-how now than on physical infrastructure. Would the early religious petitioners during the English Civil War, and the printers who eagerly fed their need to spread their ideas, have been able to predict that within a few generations monarchs would be replaced by constitutions? Would Bacon and Newton have dreamed that entire populations, and not just a few privileged geniuses, would aggregate knowledge and turn it into technology? Would those of us who used slow modems to transmit black-and-white text on the early Internet 15 years ago have been able to foresee YouTube?

max_tegmark's picture
Physicist, MIT; Researcher, Precision Cosmology; Scientific Director, Foundational Questions Institute; President, Future of Life Institute; Author, Life 3.0

A serial killer is on the loose! A suicide bomber! Beware the West Nile Virus! Although headline-grabbing scares are better at generating fear, boring old cancer is more likely to do you in. You have less than a 1% chance per year of getting it, but live long enough and it has a good chance of getting you in the end. As does accidental nuclear war.
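A quick back-of-the-envelope sketch of how such a small annual risk compounds (the 1%-per-year figure is the rough one above; the 80-year horizon is an assumption chosen purely for illustration):

```python
# How a small, constant annual risk compounds over a lifetime.
annual_risk = 0.01   # roughly 1% per year (the essay's rough figure)
years = 80           # assumed lifespan, for illustration only

p_escape = (1 - annual_risk) ** years   # probability of never getting it
p_eventually = 1 - p_escape             # probability it gets you at least once
print(f"Chance of escaping for {years} years: {p_escape:.2f}")      # ~0.45
print(f"Chance it gets you eventually:      {p_eventually:.2f}")    # ~0.55
```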

During the half-century that we humans have been tooled up for nuclear Armageddon, there has been a steady stream of false alarms that could have triggered all-out war, with causes ranging from computer malfunction, power failure and faulty intelligence to navigation error, bomber crash and satellite explosion. Gradual declassification of records has revealed that some of them carried greater risk than was appreciated at the time. For example, it became clear only in 2002 that during the Cuban Missile Crisis, the USS Beale had depth-charged an unidentified submarine which was in fact Soviet and armed with nuclear weapons, and whose commanders argued over whether to retaliate with a nuclear torpedo.

Despite the end of the Cold War, the risk has arguably grown in recent years. Inaccurate but powerful ICBMs undergirded the stability of "mutual assured destruction," because a first strike could not prevent massive retaliation. The shift toward more accurate missile navigation, shorter flight times and better enemy submarine tracking erodes this stability. A successful missile defense system would complete this erosion process. Both Russia and the US retain their "launch-on-warning" strategy, requiring launch decisions to be made on 5-15 minute timescales, where complete information may be unavailable. On January 25, 1995, Russian President Boris Yeltsin came within minutes of initiating a full nuclear strike on the United States because of an unidentified Norwegian scientific rocket. Concern has been raised over a recent US project to replace the nuclear warheads on 2 of the 24 D5 missiles carried by Trident submarines with conventional warheads, for possible use against Iran or North Korea: Russian early warning systems would be unable to distinguish them from nuclear missiles, expanding the possibilities for unfortunate misunderstandings. Other worrisome scenarios include deliberate malfeasance by military commanders triggered by mental instability and/or fringe political/religious agendas.

But why worry? Surely, if push came to shove, reasonable people would step in and do the right thing, just like they have in the past?

Nuclear nations do indeed have elaborate countermeasures in place, just like our body does against cancer. Our body can normally deal with isolated deleterious mutations, and it appears that fluke coincidences of as many as four mutations may be required to trigger certain cancers. Yet if we roll the dice enough times, shit happens — Stanley Kubrick's dark nuclear comedy "Dr. Strangelove" illustrates this with a triple coincidence.

Accidental nuclear war between two superpowers may or may not happen in my lifetime, but if it does, it will obviously change everything. The climate change we are currently discussing pales in comparison with nuclear winter, and the current economic turmoil is of course nothing compared to the resulting global crop failures, infrastructure collapse and mass starvation, with survivors succumbing to hungry armed gangs systematically pillaging from house to house. Do I expect to see this in my lifetime? I'd give it about 30%, putting it roughly on par with me getting cancer. Yet we devote way less attention and resources to reducing this risk than we do for cancer.

monica_narula's picture
Artist, New Delhi; Member, Raqs Media Collective; Co-Initiator, Sarai.net

THE LENGTHENING LIFE-SPANS OF INDIVIDUALS SHADOWED BY THE SHORTENING LIFE-SPANS OF SPECIES

brian_knutson's picture
Professor of Psychology and Neuroscience; Stanford University

The fashionable phrase "game-changing" can imply not only winning a game (usually with a dramatic turnaround), but also changing the rules of the game. If we could change the rules of the mind, we would alter our perception of the world, which would change everything (at least for humans). Assuming that the brain is the organ of the mind, what are the brain's rules, and how might we transcend them? Technological developments that combine neurophenomics with targeted stimulation will offer answers within the next century.

In contrast to genomics, less talk (and funding) has been directed towards phenomics. Yet, phenomics is the logical endpoint of genomics (and a potential bottleneck for clinical applications). Phenomics has traditionally focused on a broad range of individual characteristics including morphology, biochemistry, physiology, and behavior. "Neurophenomics," however, might more specifically focus on patterns of brain activity that generate behavior. Advances in brain imaging techniques over the past two decades now allow scientists to visualize changes in the activity of deep-seated brain regions at a spatial resolution of less than a millimeter and a temporal resolution of less than a second. These technological breakthroughs have sparked an interdisciplinary revolution that will culminate in the mapping of a "neurophenome." The neural patterns of activity that make up the neurophenome may have genetic and epigenetic underpinnings, but can also respond dynamically to environmental contingencies. The neurophenome should link more closely than behavior to the genome, could have one-to-many or many-to-one mappings to behavior, and might ideally explain why groups of genes and behaviors tend to travel together. Although mapping the neurophenome might sound like a hopelessly complex scientific challenge, emerging research has begun to reveal a number of neural signatures that reliably index not only the obvious starting targets of sensory input and motor output, but also more abstract mental constructs like anticipation of gain, anticipation of loss, self-reflection, conflict between choices, impulse inhibition, and memory storage / retrieval (to name but a few...). By triangulating across different brain imaging modalities, the neurophenome will eventually point us towards spatially, temporally, and chemically specific targets for stimulation.

Targeted neural stimulation has been possible for decades, starting with electrical methods, and followed by chemical methods. Unfortunately, delivery of any signal to deep brain regions is usually invasive (e.g., requiring drilling holes in the skull and implanting wires or worse), unspecific (e.g., requiring infusion of neurotransmitter over minutes to distributed regions), and often transient (e.g., target structures die or protective structures coat foreign probes). Fortunately, better methods are on the horizon. In addition to developing ever smaller and more temporally precise electrical and chemical delivery devices, scientists can now nearly instantaneously increase or decrease the firing of specific neurons with light probes that activate photosensitive ion channels. As with the electrical and chemical probes, these light probes can be inserted into the brains of living animals and change ongoing behavior. But at present, scientists still have to insert invasive probes into the brain. What if one could deliver the same spatially and temporally targeted bolus of electricity, chemistry, or even light to a specific brain location without opening the skull? Such technology does not yet exist — but given the creativity, brilliance, and pace of recent scientific advances, I expect that relevant tools will emerge in the next decade (e.g., imagine the market for "triangulation helmets"...). Targeted and hopefully noninvasive stimulation, combined with the map that comprises the neurophenome, will revolutionize our ability to control our minds.

Clinical implications of this type of control are straightforward, yet startling. Both psychotherapy and pharmacotherapy look like blunt instruments by comparison. Imagine giving doctors or even patients the ability to precisely and dynamically control the firing of acetylcholine neurons in the case of dementia, dopamine neurons in the case of Parkinson's disease, or serotonin neurons in the case of unipolar depression (and so on...). These technological developments will not only improve clinical treatment, but will also advance scientific theory. Along with applications designed to cure will come demands for applications that aim to enhance. What if we could precisely but noninvasively modulate mood, alertness, memory, control, willpower, and more? Of course, everyone wants to win the brain game. But are we ready for the rules to change?

stephen_m_kosslyn's picture
Founding Dean, Minerva Schools at the Keck Graduate Institute

We humans are more alike than different, but our differences are nonetheless pervasive and substantial. Think of differences in height and weight, of shoe size and thumb diameter. In this context, it's not surprising that our brains also differ. And in fact, ample research has documented individual differences not only in the sizes of specific brain regions, but also in how strongly activated the same parts of the brain are when people perform a given task. And more than this, both sorts of neural differences – structural and functional – have been shown to predict specific types of behavior. For example, my collaborators and I have shown that the strength of activation in one part of visual cortex predicts the ease of visualizing shapes.

Nevertheless, our society and institutionalized procedures rarely acknowledge such individual differences, and instead operate on a philosophy (usually implicit) of "one size fits all." It is impossible to estimate how much leverage we could gain if we took advantage of individuals' strengths, and avoided falling prey to their weaknesses. The technology now exists to do this in multiple domains.

How can we gain leverage from exploiting individual differences? One approach is first to characterize each person with a mental profile. This profile would rely on something like a "periodic table of the mind," which would characterize three aspects of mental function, pertaining to: (1) information processing (i.e., the ease of representing and processing information in specific ways), (2) motivation (what one is interested in, as well as his or her goals and values), and (3) the contents of one's knowledge (in particular, what knowledge base one has in specific areas, which then can be built upon). A value would be assigned to each cell of the table for a given person, creating an individual profile.
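As a rough illustration only, such a profile might be stored as something like the following (the three aspects come from the paragraph above; the specific dimensions and scores are invented placeholders, not part of the proposal):

```python
# Hypothetical "mental profile": one value per cell of a three-aspect table.
# Dimension names and scores below are illustrative placeholders only.
profile = {
    "information_processing": {"verbal": 0.7, "visual_shape": 0.4, "visual_spatial": 0.9},
    "motivation": {"novelty": 0.8, "achievement": 0.6, "social": 0.3},
    "knowledge": {"calculus": 0.2, "statistics": 0.5, "music_theory": 0.4},
}

def strongest(aspect: str) -> str:
    """Return the highest-valued dimension within one aspect of the profile."""
    return max(profile[aspect], key=profile[aspect].get)

print(strongest("information_processing"))  # -> "visual_spatial"
```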

Being able to produce such profiles would open up new vistas for personalizing a wide range of activities. For example:

Learning. Researchers have argued that some people learn more effectively by verbal means, others by visual means if shapes are used, others by visual means if spatial relations are used, and so on. People no doubt vary in a wide range of ways in their preferred and most effective learning styles. Knowing the appropriate dimensions of the relevant individual differences will put us in a position to design teaching regimens that fit a given person's information processing proclivities, motivation, and current level of knowledge.

Communicating. A picture may be worth 1,000 words for many people, but probably not for everyone. The best way to reach people is to ensure that they are not overloaded with too much information or bored with too little, to appeal to what interests them, and to make contact with what they already know. Thus, both the form and content of a communication would profit from being tailored to the individual.

Psychotherapy. Knowing what motivates someone obviously is a key to effective psychotherapy, but so is knowing a client's or patient's strengths and weaknesses in information processing (especially if a cognitive therapy is used).

Jobs. Any sort of work task could be analyzed in terms of the same dimensions that are used to characterize individual differences (such as, for example, the importance of having a large working memory capacity or being interested in finding disparities in patterns). Following this, we could match a person's strengths with the necessary requirements of a task. In fact, by appropriate matching, a person could be offered jobs that are challenging enough to remain interesting, but not so challenging as to be exhausting and not so easy as to be stultifying.

Teams. Richard Hackman, Anita Woolley, Chris Chabris, and our colleagues used knowledge of individual differences to compose teams. We showed that teams are more effective if the individuals are selected to have complementary cognitive strengths that are necessary to perform the task. Our initial demonstrations are just the beginning; a full characterization of individual differences will promote much more effective composition of teams.

However, these worthy goals are not quite as simple to attain as they might appear. A key problem is that the periodic table approach suggests that each facet of information processing, motivation, and content is independent. That is, the approach suggests that each of these facets can be combined as if they were "mental atoms" – and, like atoms, that each function retains its identity in all combinations.

But in fact the various mental functions are not entirely independent. This fact has been appreciated almost since the inception of scientific psychology, when researchers identified what they called "the fallacy of pure insertion": A given mental process does not operate the same way in different contexts. For instance, one could estimate the time people require to divide a number by 10; this value could then be subtracted from the time people require to find the mean of 10 numbers, with the idea being that the residual should indicate the time to add up the numbers. But it does not. 

In short, a simple "periodic table of the mind," where a given mental function is assumed to operate the same way when inserted in the context of other functions, does not work. Depending on what other functions are in play, we are more or less effective at a given one – and there will no doubt be individual differences in the degree to which context modulates processing.

To begin to use individual differences in the ways summarized above, we need to pursue two strategies in parallel, one for the short term and one for the long term. First, a short-term strategy is simply to work backwards from a specific application: Do we want to teach someone calculus? For that person, we would need to assess the relevant information processes within the context of their motivation and prior knowledge. Given current computer technology, this can easily be done. The teaching method (or psychotherapy technique, etc.) would then be tailored to the content for that person in that particular context.

Second, a long-term strategy is to identify higher-order regularities that not only characterize information processing, motivation, and content but also characterize the ways in which these factors interact. Such regularities may be almost entirely statistical, and may end up having the same status as some equations in physics (sophisticated algorithms already exist to perform such analyses); other regularities may be easier to interpret. For example, some people may discount future rewards more than others – but especially rewards that do not bear heavily on their key values, which in turn reduces the effort they will put into attaining such rewards. Once we characterize such regularities in how mental functions interact, we can then apply them to individuals and specify individual differences at this more abstract level.

Much will be gained by leveraging individual differences, instead of ignoring them as is commonly done today. We will not only make human endeavors more effective, but also make them more satisfying for the individual.

eric_fischl's picture
Visual Artist

When scientists realize that they have failed to produce technological advances that improve or assure the most important of our brain's abilities, empathy, that realization will start to allow us to accept the actual limitations of our bodies and to embrace our physicality.

That the body is our biggest single problem is not in doubt. What we have to do to keep it fed, sheltered, clothed and reproducing has shaped everything we have invented, built and killed for.

We are blessed with a mind of extraordinary powers and we are cursed because that mind is housed in a less than extraordinary container. This has been a source of great discomfort to us all along. We have continually invented more powerful devices to enhance our sensory perceptions. We now have eyesight that can see objects billions of light years away. We can see color spectrums far beyond our natural eye's ability to do so. We can see within as well as out. We can hear almost as far as we can see, and we have learned how to throw our voices such great distances that people around the world can hear us. We can fly and we can move about underwater as if we lived there. We have tended to privilege transportation and communication over our other sense experiences: touch, smell, and taste. (It probably bears looking into why this should be.)

And, of course, there is mortality. Ah, Death, you son of a bitch. You and your brothers, Disease and Aging, have tormented us since we became aware of Time. And we have worked like crazy trying to develop ways of extending Time so as to hold off the inevitable. We have even broken down Time into such minuscule units as to fool ourselves into believing it is endless.

Most scientific advancements focus on rapid repair of malfunctioning parts, currently using robotic replacements or ever more precise chemical interventions, or on ways of expanding our sensory capabilities through technologies implanted in the brain.

One could say that these "game changers" are technologies for body enhancement. Lower forms of this techno-wish are what fuel the beauty industry.

Leave it to the Scientists, so caught up in their research, to miss the big picture. Either they take for granted that the reasons that motivate their questioning, and the technologies they develop, will not only contain the original impetus but make it explicit; or they believe that technological development will transcend its own impetus and render those expressed needs insignificant.

Will new systems for global education address the content of their courses? If the body can be made better by robotics, will that enhance our ability to experience empathy?

The way we feel about the body does not get acknowledged in the way we think about the body. We fetishize the idea of systemic and technological developments geared towards dealing with the problems of fixing our bodies but have only managed to obscure the emotional and psychological

k_eric_drexler's picture
Researcher; Policy Advocate; Author, Engines of Creation

Human knowledge changes the world as it spreads, and the spread of knowledge can be observed. This makes some change predictable. I see great change flowing from the spread of knowledge of two scientific facts: one simple and obvious, the other complex and tangled in myth. Both are crucial to understanding the climate change problem and what we can do about it.

First, the simple scientific fact: Carbon stays in the atmosphere for a long time.

To many readers, this is nothing new, yet most who know this make a simple mistake. They think of carbon as if it were sulfur, with pollution levels that rise and fall with the rate of emission: Cap sulfur emissions, and pollution levels stabilize; cut emissions in half, cut the problem in half. But carbon is different. It stays aloft for about a century, practically forever. It accumulates. Cap the rate of emissions, and the levels keep rising; cut emissions in half, and levels will still keep rising. Even deep cuts won't reduce the problem, but only the rate of growth of the problem.

In the bland words of the Intergovernmental Panel on Climate Change, "only in the case of essentially complete elimination of emissions can the atmospheric concentration of CO2 ultimately be stabilised at a constant [far higher!] level." This heroic feat would require new technologies and the replacement of today's installed infrastructure for power generation, transportation, and manufacturing. This seems impossible. In the real world, Asia is industrializing, most new power plants burn coal, and emissions are accelerating, increasing the rate of increase of the problem.

The second fact (complex and tangled in myth) is that this seemingly impossible problem has a correctable cause: The human race is bad at making things, but physics tells us that we can do much better.

This will require new methods for manufacturing, methods that work with the molecular building blocks of the stuff that makes up our world. In outline (says physics-based analysis) nanoscale factory machinery operating on well-understood principles could be used to convert simple chemical compounds into beyond-state-of-the-art products, and do this quickly, cleanly, inexpensively, and with a modest energy cost. If we were better at making things, we could make those machines, and with them we could make the products that would replace the infrastructure that is causing the accelerating and seemingly irreversible problem of climate change.

What sorts of products? Returning to power generation, transportation, and manufacturing, picture roads resurfaced with solar cells (a tough, black film), cars that run on recyclable fuel (sleek, light, and efficient), and car-factories that fit in a garage. We could make these easily, in quantity, if we were good at making things.

Developing the required molecular manufacturing capabilities will require hard but rewarding work on a global scale, converting scientific knowledge into engineering practice to make tools that we can use to make better tools. The aim that physics suggests is a factory technology with machines that assemble large products from parts made of smaller parts (made of smaller parts, and so on) with molecules as the smallest parts, and the smallest machines only a hundred times their size.

The basic science to support this undertaking is flourishing, but the engineering has gotten off to a slow start, and for a peculiar reason: the idea of using tiny machines to make things has been burdened by an overgrowth of mythology. According to fiction and pop culture, it seems that all tiny machines are robots made of diamond, and they're dangerous magic—smart and able to do almost anything for us, but apt to swarm and multiply and maybe eat everything, probably including your socks.

In the real world, manufacturing does indeed use "robots", but these are immobile machines that work in an assembly line, putting part A in slot B, again and again. They don't eat, they don't get pregnant, and making them smaller wouldn't make them any smarter.

There is a mythology in science, too, but of a more sober sort, not a belief in glittery nanobugs, but a skepticism rooted in mundane misconceptions about whether nanoscale friction and thermal motion will sabotage nanomachines, and whether there are practical steps to take in laboratories today. (No, and yes.) This mythology, by the way, seems regional and generational; I haven't encountered it in Japan, India, Korea, or China, and it is rare among the rising generation of researchers in the U.S.

The U.S. National Academies has issued a report on molecular manufacturing, and it calls for funding experimental research. A roadmap prepared by Battelle with several U.S. National Laboratories has studied paths forward, and suggests research directions. This knowledge will spread, and will change the game.

I should add one more fact about molecular manufacturing and the climate change problem: If we were good at making things, we could make efficient devices able to collect, compress, and store carbon dioxide from the atmosphere, and we could make solar arrays large enough to generate enough power to do this on a scale that matters. A solar array that, if aggregated, would fit in a corner of Texas could generate 3 terawatts. In the course of 10 years, 3 terawatts would provide enough energy to remove all the excess carbon the human race has added to the atmosphere since the Industrial Revolution began. So far as carbon emissions are concerned, this would fix the problem.
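A rough order-of-magnitude sketch of that arithmetic, treating both the energy cost of capture and the size of the atmospheric excess as labeled assumptions (the essay gives neither number, and today's capture technologies are far less efficient than the hypothetical figure used here):

```python
# Back-of-the-envelope check of "3 TW for 10 years" against excess atmospheric CO2.
# The capture cost and excess inventory below are illustrative assumptions only.
SECONDS_PER_YEAR = 3.156e7

power_w = 3e12                          # 3 terawatts (the essay's figure)
years = 10                              # the essay's figure
energy_j = power_w * years * SECONDS_PER_YEAR   # ~9.5e20 joules

capture_cost_j_per_tonne = 1e9          # assumed ~1 GJ per tonne CO2 for highly
                                        # efficient future capture (hypothetical)
excess_co2_tonnes = 8e11                # assumed ~800 Gt excess atmospheric CO2 (rough)

removable_tonnes = energy_j / capture_cost_j_per_tonne
print(f"Total energy available:     {energy_j:.2e} J")
print(f"CO2 removable at that cost: {removable_tonnes:.2e} tonnes")
print(f"Fraction of assumed excess: {removable_tonnes / excess_co2_tonnes:.1f}")
# Whether this "fixes the problem" hinges entirely on the assumed capture efficiency.
```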

lee_smolin's picture
Physicist, Perimeter Institute; Author, Einstein's Unfinished Revolution

I would like to describe a change in viewpoint, which I believe will alter how we think about everything from the most abstract questions on the nature of truth to the most concrete questions in our daily lives. This change comes from the deepest and most difficult problems facing contemporary science: those having to do with the nature of time.

The problem of time confronts us at every key juncture in fundamental physics: What was the big bang and could something have come before it? What is the nature of quantum physics and how does it unify with relativity theory? Why are the laws of physics we observe the true laws, rather than other possible laws? Might the laws have evolved from different laws in the past?

After a lot of discussion and argument, it is becoming clear to me that these key questions in fundamental physics come down to a very simple choice, having to do with the answers to two simple questions: What is real? And what is true?

Many philosophies and religions offer answers to these questions, and most give the same answer: reality and truth transcend time. If something is real, it has a reality which continues forever, and if something is true, it is not just true now, it was always true, and will always be. The experience we have of the world existing within a flow of time is, according to some religions and many contemporary physicists and philosophers, an illusion. Behind that illusion is a timeless reality, in modern parlance, the block universe. Another manifestation of this ancient view is the currently popular idea that time is an emergent quality not present in the fundamental formulation of physics.

The new viewpoint is the direct opposite. It asserts that what is real is only what is real in the moment, which is one of a succession of moments. It is the same for truth: what is true is only what is true in the moment. There are no transcendent, timeless truths.

There is also no past. The past only lives as part of the present, to the extent that it gives us evidence of past events. And the future is not yet real, which means that it is open and full of possibilities, only a small set of which will be realized. Nor, on this view, is there any possibility of other universes. All that exists must be part of this universe, which we find ourselves in, at this moment.

This view changes everything, beginning with how we think of mathematics. On this view there can be no timeless, Platonic realm of mathematical objects. The truths of mathematics, once discovered, are certainly objective. But mathematical systems have to be invented, or evoked, by us. Once brought into being, there are an infinite number of facts that are true of these mathematical objects, which further investigation might discover. There are an infinite number of possible axiomatic systems that we might so evoke and explore, but the fact that different people will agree on what has been shown about them does not imply that they existed before we evoked them.

I used to think that the goal of physics was the discovery of a timeless mathematical equation that was isomorphic to the history of the universe. But if there is no Platonic realm of timeless mathematical objects, this is just a fantasy. Science is then only about what we can discover is true in the one real universe we find ourselves in.

More specifically, this view challenges how we think about cosmology. It opens up new ways to approach the deepest questions, such as why the laws we observe are true, and not others, and what determined the initial conditions of the universe. The philosopher Charles Sanders Peirce wrote in 1893 that the only way of accounting for which laws were true would be through a mechanics of evolution, and I believe this remains true today. But the evolution of laws requires time to be real. Furthermore, there is, I believe, evidence on technical grounds that the correct formulations of quantum gravity and cosmology will require the postulate that time is real and fundamental.

But the implications of this view will be far broader. For example, in neoclassical economic theory, which is anchored in the study of equilibria of markets and games, time is largely abstracted away. The fundamental results on equilibria by Arrow and Debreu assume that there are fixed and specifiable lists of goods and strategies, and that each consumer's tastes and preferences are unchanging.

But can this be completely correct, if growth is driven by opportunities that suddenly appear from unpredictable discoveries of new products, new strategies, and new modes of organization? Getting economic theory right has implications for a wide range of policy decisions, and how time is treated is a key issue. An economics that assumes that we cannot predict key innovations must be very different from one that assumes all is knowable at any time.

The view that time is real and truth is situated within the moment further implies that there is no timeless arbiter of meaning, and no transcendent or absolute source of values or ethics. Meaning, values and ethics are all things that we humans project into the world. Without us, they don’t exist.

This means that we have tremendous responsibilities. Both mathematics and society are highly constrained, but within those constraints there are an infinitude of possibilities, only a few of which can be evoked and explored in the finite time we have. Because time is real and the future does not yet exist, the imaginative and social worlds in which we will live are to be brought into being by the choices we will make.

terence_koh's picture
Artist

i have thought ever more

i have drawn birds for eternity

i have made poems

i have made and thrown a snow ball

i have burnt myself

my answer is still just

i

nicholas_a_christakis's picture
Sterling Professor of Social and Natural Science, Yale University; Co-author, Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives

We will create life from inanimate compounds, and we will find life on Mars or in space. But the life that more immediately interests me lies between these extremes, in the middle range we all inhabit between our genes and our stars. It is the thin bleeding line within the thin blue line, the anthroposphere within the biosphere, the part of the material world in which we live out our lives. It is us.

And we are rapidly and inexorably changing. I do not mean that our numbers are exploding — a topic that has been attracting attention since Malthus. I mean a very modern and massive set of changes in the composition of the human population.

The global population stood at one million at 10,000 BC, 50 million at 1,000 BC, and 310 million in 1,000 AD. It stood at about one billion in 1800, 1.65 billion in 1900, and 6.0 billion in 2000. Analysis of these macro-historical trends in human population usually focuses on this population growth and on the "demographic transition" underlying it.

During the first stage of the demographic transition, life — as Hobbes rightly suggested — was nasty, brutish, and short. There was a balance between birth rates and death rates, and both were very high (30-50 per thousand people per year). The human population grew less than 0.05% annually, with a doubling time of over 1,000 years. This state of affairs was true of all human populations everywhere until the late 18th century.
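The doubling time follows directly from the growth rate; here is a minimal check of the figure above (the 0.05% rate is the one quoted in this paragraph):

```python
import math

# Doubling time for exponential growth at a constant annual rate r:
# T = ln(2) / ln(1 + r), approximately ln(2) / r when r is small.
r = 0.0005  # 0.05% per year, the upper bound quoted above
doubling_time_years = math.log(2) / math.log(1 + r)
print(f"Doubling time at {r:.2%} per year: {doubling_time_years:,.0f} years")
# -> roughly 1,387 years, consistent with "over 1,000 years"
```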

Then, during the second stage, the death rate began to decline — first in northwestern Europe, but then spreading over the next 100 years to the south and east. The decline in the death rate was due initially to improvements in food supply and in public health, both of which reduced mortality, particularly in childhood. As a consequence, there was a population explosion.

During the third stage, birth rates dropped for the first time in human history. The prior decline in childhood mortality probably prompted parents to realize they did not need as many children; and increasing urbanization, increasing female literacy, and (eventually) contraceptive technology also played a part.

Finally, during the fourth stage — in which the developed world presently finds itself — there is renewed stability. Birth and death rates are again in balance, but now both are relatively low. Causes of mortality have shifted from the pre-Modern pattern dominated by infectious diseases, perinatal diseases, and nutritional diseases, to one dominated by chronic diseases, mental illnesses, and behavioral conditions.

This broad story, however, conceals as much as it reveals. There are other demographic developments worldwide beyond the increasing overall size of the population, developments that are still unfolding and that matter much more. Changes in four aspects of population structure are key: (1) sex ratio, (2) age structure, (3) kinship systems, and (4) income distribution.

Sex ratios are becoming increasingly unbalanced in many parts of the world, especially in China and India (which account for 37% of the global population). The normal sex ratio at birth is roughly 106 males for every 100 females, but it may presently be as high as 120 for young people in China, or as high as 111 in India. This shift, much discussed, may arise from preferential abortion or the neglect of baby girls relative to boys. Gender imbalance may also have other determinants, such as large-scale migration of one or the other sex in search of work. This shift has numerous implications. For example, given the historical role of females as caregivers to elderly parents, a shortage of women to fill this role will induce large-scale social adjustments. Moreover, an excess of low-status men unable to find wives results in an easy (and large) pool of recruits for extremism and violence.

This shift in gender ratios may have other, less heralded implications, however. Some of our own work has suggested that this shift may actually shorten men's lives, reversing some of the historic progress we have made. Across a range of species, skewed sex ratios result in intensified competition for sexual partners and this induces stress for the supernumerary sex. In humans, it seems, a 5% excess of males at the time of sexual maturity shortens the survival of men by about three months in late life, which is a very substantial loss.

On the other hand, the population worldwide is getting older, especially in the developed world. Globally, the UN estimates that the proportion of people aged 60 and over will double between 2000 and 2050, from 10% to 21%, while the proportion of children will drop from 30% to 21%. This change also has numerous implications, including for the "dependency ratio": fewer young people are available to provide for the medical and economic needs of the elderly. Much less heralded, however, is the fact that war is a young person's activity, and it is entirely likely that, as populations age, they may become less aggressive.

The changing nature of kinship networks, such as the growth in blended families — whether due to changing divorce patterns in the developed world or AIDS killing off parents in Africa — has implications for the network of obligations and entitlements within families. Changing kinship systems in modern American society (with complex mixtures of remarried and cohabiting couples, half-siblings, step-siblings, and so on) are having profound implications for caregiving, retirement, and bequests. Who cares for Grandma? Who gets her money when she dies?

Finally, it is not just the balance between males and females, or young and old, that is changing, but also the balance between rich and poor. Income inequality is reaching historic heights throughout the world. The top 1% of the people in the world receives 57% of the income. Income inequality in the US is presently at its highest recorded levels, exceeding even the Roaring Twenties. And while economic development in China has proceeded with astonishing rapidity, income is not evenly distributed; the prospects for conflict in that country as a result seem very high in the coming decades.

Since we lack any real predators, a key feature of the human environment is other humans. In our rush to focus on threats such as global warming and environmental degradation, we should not overlook this fact. It is well to look around at who, and not just what, surrounds us. Population structure will change everything. Our health, wealth, and peace depend on it.

marti_hearst's picture
Computer Scientist, UC Berkeley, School of Information; Author, Search User Interfaces

As an academic I am of course loath to think about a world without reading and writing, but with the rapidly increasing ease of recording and distributing video, and its enormous popularity, I think it is only a matter of time before text and the written word are relegated to specialists (such as lawyers) and hobbyists.

Movies have already replaced books as cultural touchstones in the U.S. And most Americans dislike watching movies with subtitles. I assume that given a choice, the majority of Americans would prefer a video-dominant world to a text-dominant one. (Writing as a technologist, I don't feel I can speak for other cultures.) A recent report by Pew Research included a quote from a media executive who said that emails containing podcasts were opened 20% more often than standard marketing email. And I was intrigued by the use of YouTube questions in the U.S. presidential debates. Most of the citizen-submitted videos that were selected by the moderators consisted simply of people pointing the camera at themselves and speaking their question out loud, with a backdrop consisting of a wall in a room of their home. There were no visual flourishes; the video did not add much beyond what a questioner in a live audience would have conveyed. Video is becoming a mundane way to communicate.

Note that I am not predicting the decline of erudition, in the tradition of Allan Bloom. Nor am I arguing that video will make us stupid, as in Neil Postman's landmark "Amusing Ourselves to Death." The situation is different today. In Postman's time, the dominant form of video communication was television, which allowed only for one-way, broadcast-style interaction. We should expect different consequences when everyone uses video for multi-way communication. What I am espousing is that the forms of communication that will do the cultural "heavy lifting" will be audio and video, rather than text.

How will this come about? As a first step, I think there will be a dramatic reduction in typing; input of textual information will move towards audio dictation. (There is a problem of how to avoid disturbing officemates or exposing seat-mates on public transportation to private information; perhaps some sound-canceling technology will be developed to solve this problem.) This will succeed in the future where it has failed in the past because of improvements in speech recognition technology and ease-of-use improvements in the editing, storage, and retrieval of spoken words.

There already is robust technology for watching and listening to video at a faster speed than recorded, without undue auditory distortion (Microsoft has an excellent in-house system for this). And as noted above, technology for recording, editing, posting, and storing video has become ubiquitous and easy to use. As for the use of textual media to respond to criticisms and to cite other work, we already see "video responses" as a heavily used feature on YouTube. One can imagine how technology and norms will develop to further enrich this kind of interaction.

The missing piece in technology today is an effective way to search for video content. Automated image analysis is still an unsolved problem, but there may well be a breakthrough on the horizon. Most algorithms of this kind are developed by "training", that is, by exposing them to large numbers of examples. The algorithms, if fed enough data, can learn to recognize patterns which can be applied to recognize objects in videos the algorithm hasn't yet seen. This kind of technology is behind many of the innovations we see in web search engines, such as accurate spell checking and improvements in automated language translation. Not yet available are huge collections of labeled image and video data, where words have been linked to objects within the images, but there are efforts afoot to harness the willing crowds of online volunteers to gather such information.
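
To make the "training" idea concrete, here is a minimal sketch in Python of a classifier learning to recognize a visual pattern from labeled examples; the features, labels, and data are synthetic placeholders rather than any system discussed here, and real video search would involve far larger labeled collections and far richer features.

```python
# A sketch of "training" a recognizer on labeled examples (synthetic
# placeholder data; real video search would use huge labeled collections
# and much richer visual features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each frame is summarized by 64 feature values (e.g. color and
# texture statistics); label 1 means the target object appears in the frame.
frames_with_object = rng.normal(loc=0.5, scale=1.0, size=(500, 64))
frames_without_object = rng.normal(loc=-0.5, scale=1.0, size=(500, 64))
X = np.vstack([frames_with_object, frames_without_object])
y = np.array([1] * 500 + [0] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The trained model can now label frames it has never seen.
print("held-out accuracy:", model.score(X_test, y_test))
```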

What about developing versus developed nations? There is of course an enormous literacy problem in developing nations. Researchers are experimenting with cleverly designed tools such as the Literacy Bridge Talking Book project which uses a low-cost audio device to help teach reading skills. But perhaps just as developing nations "leap-frogged" developed ones by skipping land-line telephones to go straight to cell phones, the same may happen with skipping written literacy and moving directly to screen literacy.

I am not saying text will disappear entirely; one counter-trend is the replacement of orality with text in certain forms of communication. For short messages, texting is efficient and unobtrusive. And there is the question of how official government proclamations will be recorded. Perhaps there will be a requirement for transcription into written text as a provision of the Americans with Disabilities Act, for the hearing-impaired (although we can hope in the future for increasingly advanced technology to reverse such conditions). But I do think the importance of written words will decline dramatically both in culture and in how the world works. In a few years, will I be submitting my response to the Edge question as a podcast?

marcel_kinsbourne's picture
Neurologist and Cognitive Neuroscientist, The New School; Co-author, Children's Learning and Attention Problems

Innovation in science and technology will continue to bring much change. But since it is the brain that experiences change, only changing the brain itself can possibly change everything. Changing the human brain is not new, when it is a matter of correcting psychopathology. But whether the usual agents, psychoactive drugs, psychosurgery, electroshock, even what we eat, drink and smoke, can change a brain that is functioning normally (other than for the worse), is not known. However, the novel method of deep brain stimulation (DBS), by which electrodes are inserted into the brain to stimulate precisely specified locations electrically, is already used to correct certain brain disorders (Parkinsonism, Obsessive Compulsive Disorder). Not only are the targeted symptoms often relieved; there have been profound changes in personality, although the prior personality was not abnormal. A patient of lifelong somber disposition may not only be relieved of obsessions, but also shift to a cheerful mood, the instant the current is switched on (and revert to his prior subdued self, the instant it is switched off). The half empty glass temporarily becomes the glass that is half full. The brain seems not entirely to respect our conventional sharp distinction between what is normal and what is not.

The stimulation's immediate effect is shocking. We assume that our gratifyingly complex minds and brains are incrementally shaped by innumerable dynamic and environmental factors. And yet, identical twins adopted apart into sharply contrasting social and economic environments have shown impressive similarities in mood and sense of wellbeing. Genetically determined types of brain organization appear to set the emotional tone; experience modulates it in a positive or negative direction. Stimulating or disrupting neural transmission along a specific neural pathway or reverberating loop may reset the emotional tone, entirely sidestepping the complexities of early experience, stress, and misfortune, and letting personality float free. The procedure releases a previously unsuspected potential. The human brain is famously plastic. Adjusting key circuitry presumably has wide repercussions throughout the brain's neural network, which settles into a different state. We have yet thoroughly to digest the philosophical implications, but a more unequivocal validation of psychoneural identity theory (the identity of the brain and the mind, different aspects of the same thing) can hardly be imagined.

Certainly, deep brain stimulation is not currently used to render sane people more thoughtful, agreeable, gentle or considerate. Potential adverse neurosurgical side effects aside, ethical considerations prohibit using deep brain stimulation to enhance a brain considered to be normal. But history teaches two lessons: Any technology will tend to become more precise, effective and safer over time, and, anything that can be done, ultimately will be done, philosophical and ethical considerations notwithstanding.

The example of cosmetic plastic surgery is instructive. Reconstructive in its origins, it is increasingly used for cosmetic purposes. I predict the same shift for deep brain stimulation. Cosmetic surgery is used to render people more appealing. In human affairs, appearance is critical. For our hypersocial species, personal appeal opens doors that remain shut to mere competence and intellect. Undoubtedly, cosmetic surgery enhances quality of life, so how can it be denied to anyone? And yet, it is by its very nature deceptive; the operated face is not really the person's face, the operated body not really their body. However, experience teaches that these reservations as to authenticity remain theoretical. The cosmetically adjusted nose, breast, thighs or skin tones become the person's new reality, without significant social backlash. Even face transplants are now feasible. We read so much into a face—but what if it is not the person's "real" face? Does anyone care, or even remember the previous appearance? So it will be with neurocosmetics.

And yet, is it not more deeply disturbing to tinker with the brain itself, than to adjust one's body to one's liking? It is. However, the mind-body distinction has become somewhat blurred of late. Evidence accumulates as to the embodiment of cognition and emotion, and at the least, there is influential feedback between the two domains. Considerations that will be raised by cosmetic deep brain stimulation are already in play, in a minor key, with cosmetic surgery.

Deep brain stimulation seems not to enhance intellect, but intellect is no high road to success. "Social intelligence" is of prime importance, and it is a byproduct of personality. In some form, deep brain stimulation will be used to modify personality so as to optimize professional and social opportunity, within my lifetime. Ethicists will deplore this, and so they should. But it will happen nonetheless, and it will change how humans experience the world and how they relate to each other in as yet unimagined ways.

Consider an arms race in affability, a competition based not on concealing real feelings, but on feelings engineered to be real. Consider a society of homogenized good will, making regular visits to the DBS provider who advertises superior electrode placement? Switching a personality on and then off, when it becomes boring? Alternating personalities: Dr. Accumbens and Mr. Insula (friendly and disgusted respectively)? Tracking fashion trends in personality? Coordinating personalities for special events? Demanding personalities such as emerge on drugs (e.g. cocaine), or in psychopathologies (e.g. hypomania)? Regardless, the beneficiaries of deep brain stimulation will experience life quite differently. Employment opportunities for yet more ethicists and more philosophers!

We take ourselves to be durable minds in stable bodies. But this reassuring self-concept will turn out to be yet another of our so human egocentric delusions. Do we, strictly speaking, own stable identities? When it sinks in that the continuity of our experience of the world and our self is at the whim of an electrical current, then our fantasies of permanence will have yielded to the reality of our fragile and ephemeral identities.

neil_gershenfeld's picture
Physicist, Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality

Life is defined by organic chemistry. There's software for artificial life and artificial intelligence, but these are, well, artificial — they exist in silico rather than in vivo. Conversely, synthetic biology is re-coding genes, but it isn't very synthetic; it uses the same sets of proteins as the rest of molecular biology. If, however, bits could carry mass as well as information, the distinction between artificial and synthetic life would disappear. Virtual and physical replication would be equivalent.

There are in fact promising laboratory systems that can compute with bits represented by mesoscopic materials rather than electrons or photons. Among the many reasons to do this, the most compelling is fabrication: instead of a code controlling a machine to make a thing, the code can itself become a thing (or many things).

That sounds a lot like life. Indeed, current work is developing micron-scale engineered analogs to amino acids, proteins, and genes, a "millibiology" to complement the existing microbiology. By working with components that have macroscopic physics but microscopic sizes, the primitive elements can be selected for their electronic, magnetic, optical or mechanical properties as well as active chemical groups.

Biotechnology is booming (if not bubbling). But it is very clearly segregated from other kinds of technology, which contribute to the study of, but not the identity of, biology. If, however, life is understood as an algorithm rather than a set of amino acids, then the creation of such really-artificial or really-synthetic life can enlarge the available materials, length, and energy scales. In such a world, biotechnology, nanotechnology, information technology, and manufacturing technology merge into a kind of universal technology of embodied information. Beyond the profound practical implications, forward- rather than reverse-engineering life may be the best way to understand it.

april_gornik's picture
Visual Artist

There is a growing scientific consensus that animals have emotions and feel pain. This awareness is going to effect change: better treatment of animals in agribusiness, research, and our general interaction with them. It will change the way we eat, live, and preserve the planet. We will eliminate the archaic tendency to base their treatment on an equation of their intelligence with ours. The measure of and self-congratulation for our own intelligence should have its basis in our moral behavior as well as our smarts.

mihaly_csikszentmihalyi's picture
Psychologist; Director, Quality of Life Research Center, Claremont Graduate University; Author, Flow

The idea that will change the game of knowledge is the realization that it is more important to understand events, objects, and processes in their relationship with each other than in their singular structure.

Western science has achieved wonders with its analytic focus, but it is now time to take synthesis seriously. We shall realize that science cannot be value-free after all. The Doomsday Clock on the cover of the Bulletin of the Atomic Scientists, ticking ever closer to midnight, is just one reminder that knowledge ignorant of consequences is foolishness.

Chemistry that shrugs at pollution is foolishness. Economics that discounts politics and sociology is just as ignorant as politics and sociology that discount economics.

Unfortunately, it does not seem to be enough to protect the neutral objectivity of each separate science, in the hope that the knowledge generated by each will be integrated later at some higher level and used wisely. The synthetic principle will have to become a part of the fundamental axioms of each science. How shall this breakthrough occur? Current systems theories are necessary but not sufficient, as they tend not to take values into account. Perhaps after this realization sets in, we shall have to re-write science from the ground up.

anton_zeilinger's picture
Nobel laureate (2022 - Physics); Physicist, University of Vienna; Scientific Director, Institute of Quantum Optics and Quantum Information; President, Austrian Academy of Sciences; Author, Dance of the Photons: From Einstein to Quantum Teleportation

Some day all semiconductors will break down, and therefore all computers, since, apart from historic instruments, no computers exist today that are not based on semiconductor technology. The breakdown will be caused by a giant electromagnetic pulse (EMP) created by a nuclear explosion outside Earth's atmosphere. It will cover large areas on Earth, up to the size of a continent. Where it will happen is unpredictable. But it will happen, since it is extremely unlikely that we will be able to get rid of all nuclear weapons, and the probability of it happening at any given time will never be zero.

The implications of such an event will be enormous. If it happens to one of our technology-based societies, literally everything will break down. You will realize that none of your phones works. There is no way to find out via the internet what happened. Your car will not start anymore, as it too is controlled by computer chips, unless you are lucky enough to own an antique car. Your local supermarket is unable to get new supplies. There will be no trucks operating anymore, no trains, no electricity, no water supplies. Society will completely break down.

There will be small exceptions in those countries where military equipment has been hardened against EMPs, making the army available for emergency relief. In some countries even some emergency civilian infrastructure has been hardened against EMPs. But these are exceptions, as most governments simply ignore that danger.

joel_garreau's picture
Principal, The Garreau Group

We are turning environmentalism into an elaborate moral narrative. We are doing the same for neurology. And possibly globalism. This makes me wonder whether we are creating the greatest eruption of religion in centuries, if not millennia—an epoch comparable to the Great Awakening, if not the Axial Age. If so, this will change everything.

Financially, politically, climatically and technologically, the ground is moving beneath our feet. Our narratives of how the world works are not matching the facts. Yet humans are pattern-seeking, story-telling animals. Human beings cannot endure emptiness and desolation. We will always fill such a vacuum with meaning.

Think of the constellations in the night sky. Humans eagerly connect dots and come up with the most elaborate—even poetic—tales, adorning them with heroes and myths, rather than tolerate randomness. The desire to believe goes way back in evolutionary history.

The Axial Age, circa 800 B.C. to 200 B.C., was a period of unique and fundamental focus on transcendence that is the beginning of humanity as we now know it. All over the world, humans simultaneously began to wake up to a burning need to grapple with deep and cosmic questions. All the major religious beliefs are rooted in this period.

The search for spiritual breakthrough was clearly aching and urgent. Perhaps it arose all over the world, simultaneously, among cultures that were not in touch with each other, because it marked a profound shift. Perhaps it was the rise of human consciousness. If such profound restatements of how the world works arose universally the last time we had a transition on the scale of that from biological evolution to cultural evolution, is it logical to think it is happening again as we move from cultural evolution to radical technological evolution?

The evidence is beginning to accumulate. The pursuit of moral meaning in environmentalism has advanced to the state that it has become highly controversial. Some Christians view it as a return to paganism. Some rationalists view it as a retreat from the complexities of the modern world. Yet it would appear that there is something to the idea of environmentalism having religious value. Otherwise, why would we find some fundamentalists regarding the stewardship of creation as divinely mandated?

Then there is the new vision of transcendence coming out of neuroscience. It’s long been observed that intelligent organisms require love to develop or even just to survive. Not coincidentally, we can readily identify brain functions that allow and require us to be deeply relational with others. There are also aspects of the brain that can be shown to equip us to experience elevated moments when we transcend boundaries of self. What happens as the implications of all this research start suggesting that particular religions are just cultural artifacts built on top of universal human physical traits?

Some of this is beginning to overlap with our economic myths—that everything fits together, that the manufacturing of a sneaker connects a jogger in Portland to a village in Malaysia. There is an interconnectedness of things.

If we came to believe deeply that there is a value somehow in the way things are connected—the web of life, perhaps—is that the next Enlightenment?

The importance of creating such a commonly held framework is that without it, we have no way to move forward together. How can we agree on what must be done if we do not have in common an agreement on what constitutes the profoundly important?

This much seems certain. We’re in the midst of great upheaval. It is impossible to think that this does not have an impact on the kind of narratives that are central to what it means to be human. Such narratives could be nothing less than our new means of managing transcendence—of coming up with specific ways to shape the next humans we are creating. If so, this would change everything.

dimitar_d_sasselov's picture
Professor of Astronomy, Harvard University; Director, Harvard Origins of Life Initiative; Author, The Life of Super-Earths

To an economist "everything" is the world global marketplace. To a baseball fan the World Series is the competition of teams from two countries. To me, a student of astronomy, "everything" is the universe, and really all of it.

What could we possibly do to change that "everything"? People like to say that a scientific idea changed the world when it is as big as Copernicus suggesting that the Sun, not Earth, is at the center. People file the invention of the Internet as a development that changed the world. The list is long.

But which world did these ideas change? Well, yes - they are all about us, Homo sapiens, a recently evolved branch on the "tree of life" with roots in a biochemistry that somehow, 4 billion years ago, took hold on planet Earth. And yes, Homo sapiens has created amazing new things: airplanes, antibiotics, phones, the Internet, but none of these will change the orbits of the stars.

And so it was until now. There is a game-changing scientific development that transcends all in human history. It is already underway and it even has a name: synthetic biology. Different people use synthetic biology to mean different things. Most often synthetic biology is reduced to synthetic genomics - re-designing the genomes of organisms to make them act in new ways. For example, microbes that produce fuel or pharmaceutical products. I use synthetic biology to mean creating new "trees of life", as opposed to synthetic genomics which engineers new branches to the existing Terran "tree". In my use, synthetic biology is about engineering an alternative biochemistry, thus "seeding" an alternative "tree" that then evolves on its own. In that, alternative life is as natural as any life we know.

I shall let others describe what it is and how they are going to do it. One thing is sure: it is going to be powerful. Biologists will use synthesis the same way chemists use synthesis routinely today. But there is more! It is the inter-planetary reach of synthetic biology that makes it a new phenomenon in the cosmos we know.

Life is a planetary phenomenon that can transform a planet. The development of synthetic biology appears to be a stage in life's evolution when some of its forms can leave the host planet and adapt to other environments, potentially transforming other planets and eventually the Galaxy.


yochai_benkler's picture
Berkman Professor of Entrepreneurial Legal Studies, Harvard; Author, The Wealth of Networks: How Social Production Transforms Markets and Freedom

What will change everything within forty to fifty years (optimistic assumptions about my longevity, I know)? One way to start thinking about this is to look at the last “change everything” innovation, and work back fifty years from it. I would focus on the Internet's generalization into everyday life as the relevant baseline innovation that changed everything. We can locate its emergence into widespread use in the mid-1990s. So what did we have in the mid-1940s that was a precursor? We had mature telephone networks, networked radio stations, and point-to-point radio communications. We had the earliest massive computers. So to me the challenge is to look at what we have now, some of which may be quite mature, other pieces of which may be only emerging, and to think of how they could combine in ways that will affect social and cultural processes so as to “change everything,” which I take to mean: make a big difference to the day-to-day life of many people. Let me suggest four domains in which combinations and improvements of existing elements, some mature, some futuristic, will make a substantial difference, not all of it good.

Communications

We already have hands-free devices. We already have overhead transparent displays in fighter pilot helmets. We already have presence-based and immediate communications. We already upload images and movies, on the fly, from our mobile devices, and share them with friends. We already have early holographic imaging for conference presentations, and high-quality 3D imaging for movies. We already have voice-activated computer control systems, and very early brainwave-activated human-computer interfaces. We already have the capacity to form groups online, and to segment and reform them according to need, be they in World of Warcraft or Facebook groups. What is left is to combine all these pieces into an integrated, easily wearable system that will, for all practical purposes, allow us to interact as science fiction once imagined telepathy working. We will be able to call upon another person by thinking of them, or at least by whispering their name to ourselves. We will be able to communicate with them and see them; we will be able to see through their eyes if we wish to, in real time, in resolution so high that it will seem as though we were in fact standing there, next to them or inside their shoes. However easy we think collaboration at a distance is now, what we do today will seem primitive. We won't have “beam me up, Scotty” physically, but we will have a close facsimile of the experience. Coupled with concerns over global warming, these capabilities will make business travel seem like wearing fur. However much we talk about telecommuting today, these new capabilities, together with new concerns over environmental impact, will make virtual workplaces in the information segments of the economy as different from today's telecommuting as today's ubiquitous computing and mobile platforms are from the mini-computer “revolution” of the 1970s.

Medicine

It is entirely plausible that 110 or 120 will be an average life expectancy, with senescence delayed until 80 or 90. This will change the whole dynamic of life: how many careers a lifetime can support; what the ratio of professional moneymaking to volunteering will be; how early in life one starts a job; the length of training. But this will likely affect, if at all within the relevant period, only the wealthiest societies. Simpler innovations, which are more likely, will have a much wider effect on many more people. A cheap and effective malaria vaccine. Cheap and ubiquitous clean water filters. Cheap and effective treatments and prevention techniques against parasites. All these will change life in the Global South on a scale that, from the perspective of a broad concern with human values, will swamp whatever effects lengthening life in the wealthier North will have.

Military Robotics

We already have unmanned planes that can shoot at live targets. We are seeing land robots, for both military and space applications. We are seeing networked robots performing functions in collaboration. I fear that we will see a massive increase in the deployment and quality of military robotics, and that this will lead to a perception that war is cheaper in human terms. This, in turn, will lead democracies in general, and the United States in particular, to imagine that there are cheap wars, and to overcome the reluctance toward war that we learned so dearly in Iraq.

Free market ideology

This is not a technical innovation but a change in the realm of ideas. The resurgence of free market ideology, after its demise in the Great Depression, came to dominance between the 1970s and the late 1990s as a response to communism. As communism collapsed, free market ideology triumphantly declared its dominance. In the U.S. and the UK it expressed itself, first, in the Reagan/Thatcher moment, and then was generalized in the Clinton/Blair turn to define their own moment in terms of integrating market-based solutions as the core institutional innovation of the “left.” It expressed itself in Europe through the competition-focused, free market policies of the technocratic EU Commission, and in global systems through the demands and persistent reform recommendations of the World Bank, the IMF, and the world trade system through the WTO. But within less than two decades, its force as an idea is declining. On the one hand, the Great Deflation of 2008 has shown the utter dependence of human society on the possibility of well-functioning government to assure some baseline stability in human welfare and the capacity to plan for the future. On the other hand, a gradual rise in volunteerism and cooperation, online and offline, is leading to a reassessment of what motivates people, and of how governments, markets, and social dynamics interoperate. I expect the binary State/Market conception of the way we organize our large systems to give way to a more fluid set of systems, with greater integration of the social and the commercial, as well as of the state and the social. So much of life, in so many of our societies, was structured around either market mechanisms or state bureaucracies. The emergence of new systems of social interaction will affect what we do, and where we turn for things we want to do, have, and experience.

rupert_sheldrake's picture
Biologist and Author

Credit crunches happen because of too much credit and too many bad debts. Credit is literally belief, from the Latin credo, "I believe." Once confidence ebbs, the loss of trust is self-reinforcing. The game changes.

Something similar is happening with materialism. Since the nineteenth century, its advocates have promised that science will explain everything in terms of physics and chemistry; science will show that there is no God and no purpose in the universe; it will reveal that God is a delusion inside human minds and hence in human brains; and it will prove that brains are nothing but complex machines.

Materialists are sustained by the faith that science will redeem their promises, turning their beliefs into facts. Meanwhile, they live on credit. The philosopher of science Sir Karl Popper described this faith as "promissory materialism" because it depends on promissory notes for discoveries not yet made. Despite all the achievements of science and technology, materialism is facing an unprecedented credit crunch.

In 1963, when I was studying biochemistry at Cambridge, I was invited to a series of private meetings with Francis Crick and Sydney Brenner in Brenner's rooms in King's College, along with a few of my classmates. They had just cracked the genetic code. Both were ardent materialists. They explained that there were two major unsolved problems in biology: development and consciousness. They had not been solved because the people who worked on them were not molecular biologists—nor very bright. Crick and Brenner were going to find the answers within 10 years, or maybe 20. Brenner would take development, and Crick consciousness. They invited us to join them.

Both tried their best. Brenner was awarded the Nobel Prize in 2002 for his work on the development of the nematode worm Caenorhabditis elegans. Crick corrected the manuscript of his final paper on the brain the day before he died in 2004. At his funeral, his son Michael said that what made him tick was not the desire to be famous, wealthy or popular, but "to knock the final nail into the coffin of vitalism."

He failed. So did Brenner. The problems of development and consciousness remain unsolved. Many details have been discovered, dozens of genomes have been sequenced, and brain scans are ever more precise. But there is still no proof that life and minds can be explained by physics and chemistry alone.

The fundamental proposition of materialism is that matter is the only reality. Therefore consciousness is nothing but brain activity. However, among researchers in neuroscience and consciousness studies there is no consensus. Leading journals such as Behavioral and Brain Sciences and the Journal of Consciousness Studies publish many articles that reveal deep problems with the materialist doctrine. For example, Steven Lehar argues that inside our heads there must be a miniaturized virtual-reality full-colour three-dimensional replica of the world. When we look at the sky, the sky is in our heads. Our skulls are beyond the sky. Others, like the psychologist Max Velmans, argue that virtual-reality displays are not confined to our brains; they are life-sized, not miniaturized. Our visual perceptions are outside our skulls, just where they seem to be.

The philosopher David Chalmers has called the very existence of subjective experience the "hard problem" of consciousness because it defies explanation in terms of mechanisms. Even if we understand how eyes and brains respond to red light, for example, the quality of redness is still unaccounted for.

In biology and psychology the credit-rating of materialism is falling fast. Can physics inject new capital? Some materialists prefer to call themselves physicalists, to emphasize that their hopes depend on modern physics, not nineteenth-century theories of matter. But physicalism's credit-rating has been reduced by physics itself, for four reasons.

First, some physicists argue that quantum mechanics cannot be formulated without taking into account the minds of observers; hence minds cannot be reduced to physics, because physics presupposes minds.

Second, the most ambitious unified theories of physical reality, superstring and M theories, with 10 and 11 dimensions respectively, take science into completely new territory. They are a very shaky foundation for materialism, physicalism or any other pre-established belief system. They are pointing somewhere new.

Third, the known kinds of matter and energy constitute only about 4% of the universe. The rest consists of dark matter and dark energy. The nature of 96% of reality is literally obscure.

Fourth, the cosmological anthropic principle asserts that if the laws and constants of nature had been slightly different at the moment of the Big Bang, biological life could never have emerged, and hence we would not be here to think about it. So did a divine mind fine-tune the laws and constants in the beginning? Some cosmologists prefer to believe that our universe is one of a vast, and perhaps infinite, number of parallel universes, all with different laws and constants. We just happen to exist in the one that has the right conditions for us.

In the eyes of skeptics, the multiverse theory is the ultimate violation of Occam's Razor, the principle that entities should not be multiplied unnecessarily. But even so, it does not succeed in getting rid of God. An infinite God could be the God of an infinite number of universes.

Here on Earth we are facing climate change, great economic uncertainty, and cuts in science funding. Confidence in materialism is draining away. Its leaders, like central bankers, keep printing promissory notes, but it has lost its credibility as the central dogma of science. Many scientists no longer want to be 100% invested in it.

Materialism's credit crunch changes everything. As science is liberated from this nineteenth-century ideology, new perspectives and possibilities will open up, not just for science, but for other areas of our culture that are dominated by materialism. And by giving up the pretence that the ultimate answers are already known, the sciences will be freer—and more fun.

paul_davies's picture
Theoretical physicist; cosmologist; astrobiologist; co-Director of BEYOND, Arizona State University; Principal Investigator, Center for the Convergence of Physical Sciences and Cancer Biology; Author, The Eerie Silence and The Cosmic Jackpot

A hundred and fifty years ago, Charles Darwin gave us a convincing theory of how life has evolved, over billions of years, from primitive microbes to the richness and diversity of the biosphere we see today. But he pointedly left out of account how life got started in the first place. "One might as well speculate about the origin of matter," he quipped in a letter to a friend. How, where and when life began remain some of the greatest unsolved problems of science. Even if we make life in the laboratory in the near future, it still won't tell us how Mother Nature did it without expensive equipment, trained biochemists and (the crucial point) a preconception of the goal to be achieved. However, we might be able to discover the answer to a more general question: did life originate once, or often?

The subject of astrobiology is predicated on the hope and expectation that life emerges readily in earthlike conditions, and is therefore likely to be widespread in the universe. The assumption that, given half a chance, life will out is sometimes called biological determinism. Unfortunately, nothing in the known laws of physics and chemistry singles out the state of matter we call "living" as in any way favored. There is no known law that fast-tracks matter to life. If we do find life on another planet, and we can be sure it started there from scratch, completely independently of life on Earth, biological determinism will be vindicated. With NASA scaling back its activities, however, the search for extraterrestrial life has all but stalled.

Meanwhile, there is an easy way to test biological determinism right here and now. No planet is more earthlike than Earth itself, so biological determinism predicts that life should have started many times on our home planet. That raises the fascinating question of whether there might be more than one form of life inhabiting the terrestrial biosphere. Biologists are convinced that all known species belong to the same tree of life, and share a common origin. But almost all life on Earth is microbial, and only a tiny fraction of microbes have been characterized, let alone sequenced and positioned on the universal tree. You can't tell by looking what makes a microbe tick; you have to study its innards. Microbiologists do that using techniques carefully customized to life as we know it. Their methods wouldn't work for an alternative form of life. If you go looking for known life, you are unlikely to find unknown life.

I believe there is a strong likelihood that Earth possesses a shadow biosphere of alternative microbial life representing the evolutionary products of a second genesis. Maybe also a third, fourth... I also think we might very well discover this shadow biosphere soon. It could be ecologically separate, confined to niches beyond the reach of known life by virtue of extreme heat, cold, acidity or other variables. Or it could interpenetrate the known biosphere in both physical and parameter space. There could be, in effect, alien microbes right under our noses (or even in our noses). Chances are, we would not yet be aware of the fact, especially if the weird shadow life is present at relatively low abundance. But a targeted search for weird microbes, and the weird viruses that prey on them, could find shadow life any day soon.

Why would it change everything? Apart from the sweeping technological applications that having a second form of life would bring, the discovery of a shadow biosphere would prove biological determinism, and confirm that life is indeed widespread in the universe. To expect that life would start twice on Earth, but never on another planet like Earth, is too improbable. And to know that the universe is teeming with life would make it far more likely that there is also intelligent life elsewhere in the universe. We might then have greater confidence that the answer to the biggest of the big questions of existence — Are we alone in the universe? — is very probably, no.

roger_highfield's picture
Director, External Affairs, Science Museum Group; Co-author (with Martin Nowak), SuperCooperators

Now this idea really will change everything, ending the energy crisis and curbing climate change at a stroke. I am confident in what I say because a lot of clever people have said it again and again—and again—for more than half a century. Since the heady, optimistic days when scientists first dreamt of taming the power of the Sun, fusion energy has remained tantalisingly out of reach.

It will take us between 20 and 50 years to build a fusion power plant. That is what glinty-eyed scientists announced at the height of the Cold War. Their modern equivalents are still saying it. And I am going to say it once again because it really could—and will—make a difference.

Fusion power could be a source of energy that would have a greater impact on humankind than landing the first man on the Moon. The reason is, as one former UK Government chief scientist liked to put it, that the lithium from one laptop battery and deuterium from a bath of water would generate enough energy to power a single citizen for 30 years. And, overall, fusion reactors would create fewer radioactive waste problems than their fission sisters.
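
The arithmetic behind that claim can be checked on the back of an envelope. Here is a rough sketch, assuming D-T fusion releasing 17.6 MeV per reaction, roughly 33 mg of deuterium per litre of water, and a 150-litre bath, while treating the lithium (tritium-breeding) side as non-limiting and ignoring conversion losses:

```python
# Back-of-envelope check of the "bath of water" claim, assuming D-T fusion
# (17.6 MeV per reaction), ~33 mg of deuterium per litre of water, and a
# 150-litre bath; the lithium/tritium-breeding side is assumed not to be
# limiting, and conversion losses are ignored.
MEV_TO_J = 1.602e-13                       # joules per MeV
ENERGY_PER_REACTION = 17.6 * MEV_TO_J      # joules per D-T fusion
AVOGADRO = 6.022e23

bath_litres = 150
deuterium_grams = 0.033 * bath_litres                   # ~33 mg D per litre
deuterium_atoms = (deuterium_grams / 2.014) * AVOGADRO  # molar mass of D ~2.014 g/mol

total_energy_joules = deuterium_atoms * ENERGY_PER_REACTION
seconds_in_30_years = 30 * 365.25 * 24 * 3600
average_power_kw = total_energy_joules / seconds_in_30_years / 1000

print(f"total energy ~ {total_energy_joules:.1e} J")
print(f"average power over 30 years ~ {average_power_kw:.1f} kW")
# ~4-5 kW, roughly one person's share of primary energy use in a rich country.
```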

The skeptics have always sneered that the proponents of fusion power are out of touch with reality. As the old joke goes, fusion is the power of the future—and it always will be. But this is one energy bet that must pay off, given the failure of the Kyoto Protocol.

There are good reasons to be hopeful. In Cadarache, France, construction is under way of the International Thermonuclear Experimental Reactor (Iter means "the way" in Latin, though cynics carp that it can also mean "journey", and a bloody long one too). This project will mark a milestone in fusion development and there are other solid bets that are being placed, notably using high-power lasers to kick-start the fusion process.

Greens will complain that the money would be better spent on renewables but if this unfashionable gamble pays off the entire planet will be the winner. Imagine the patent squabbles when engineers finally figure out how to make fusion economic. Think of the seismic implications for energy research and alleviating poverty in the developing world. Consider the massive implications for holding back climate change. We are about to catch up with the receding horizon of fusion expectations.


stewart_brand's picture
Founder, the Whole Earth Catalog; Co-founder, The Well; Co-Founder, The Long Now Foundation, and Revive & Restore; Author, Whole Earth Discipline

To take mastery of climate as we once took mastery of fire, then of genetics (agriculture), then of communication (music, writing, math, maps, images, printing, radio, computers) will require mathematics we don't have, physics and biology we don't have, and governance we don't have.

Our climate models, sophisticated and muscular as they are (employing more teraflops than any other calculation), still are just jumped-up weather prediction models. The real climate system has more levels and modes of hyper-connected nonlinearity than we can yet comprehend or ask computers to replicate, because so far we lack the math to represent climate dynamics with the requisite variety to control it. Acquiring that math will change everything.

Materials scientist and engineer Saul Griffith estimates that humanity must produce 13 terawatts of greenhouse-free energy in order to moderate global warming to a just-tolerable increase of 2° Celsius. (Civilization currently runs on about 16 terawatts of energy, most of it from burning fossil fuels.) Griffith calculates that deploying current clean technologies — nuclear, wind, geothermal, biofuels, and solar technology — to generate 13 terawatts would cover an area the size of Australia. It is imaginable but not feasible. Just improving the engineering of nuclear and solar won't get us what we need; new science is needed. The same goes for biofuels: the current state of genetic engineering is too crude to craft truly efficient organisms for sequestering carbon and generating usable energy. The science of molecular biology has to advance by leaps. Applied science that powerful will change everything.
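
A quick sanity check of the Australia-sized figure, assuming a land-averaged power density on the order of 2 watts per square metre for a mix of clean-generation technologies (an assumed round number; the true value depends heavily on the mix):

```python
# Sanity check of the "area the size of Australia" figure, assuming a
# land-averaged power density of roughly 2 W per square metre for a mix of
# clean-generation technologies (an assumed round number; the true value
# depends heavily on the technology mix).
target_terawatts = 13
power_density_w_per_m2 = 2.0                       # assumed average
area_m2 = target_terawatts * 1e12 / power_density_w_per_m2
area_million_km2 = area_m2 / 1e12                  # 1 million km^2 = 1e12 m^2

print(f"required area ~ {area_million_km2:.1f} million km^2")
# Australia covers about 7.7 million km^2, so the orders of magnitude agree.
```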

Climate change is a global problem that cannot be fixed with global economics, which we have; it requires global governance, which we don't have. Whole new modes of international discourse, agreement, and enforcement must be devised. How are responsibilities to be shared for legions of climate refugees? Who decides which geoengineering projects can go forward? Who pays for them? Who adjudicates compensation for those harmed? How are free riders dealt with? Humans have managed commons before — fisheries, irrigation systems, fire regimes — but never on this scale. Global governance will change everything.

Of course, these radical adjustments may not happen, or not happen in time, and then climate will shift to either a chaotic mode or a different stable state with the carrying capacity for just a fraction of present humanity, and that will really change everything.

alun_anderson's picture
Senior Consultant (and former Editor-in-Chief and Publishing Director), New Scientist; Author, After the Ice

Green oil is the development that will utterly change the world and it will arrive in the next few decades.

Oil that we take out of the ground and burn is going to be replaced by oil that we grow. Bio-fuels based on corn are our first effort to grow green oil but they have clearly not succeeded. Current bio-fuels take too much land and too much energy to grow and too much of that energy goes into building parts of a plant that we can't easily convert into fuel. The answer will come from simple, engineered organisms that can soak up energy in a vat in any sunny spot and turn that sunlight straight into a precursor for fuel, preferably a precursor that can go straight into an existing oil refinery that can turn out gasoline.

The impacts of such a development are staggering. The power balance of the world will be completely changed. Petro-dictatorships, where an endless flow of oil money keeps the population quiet, will no longer be able to look forward to oil at $50, $100, $150 and so on a barrel as oil supplies tighten. Power will be back in the hands of innovators rather than resource owners. The quest for dirty oil in remote and sensitive parts of the world, whether the Arctic or the Alberta tar sands, will not make economic sense, and the environment will gain. The burning of gasoline in automobiles will no longer add much to the amount of carbon dioxide in the atmosphere, as the fuel will have soaked up an almost equal amount of carbon dioxide while it was being grown. The existing networks for delivering fuel to transportation (the 100,000 gas stations in the US, for example) won't become redundant, as they would if we switched to electric autos, making plans for cutting emissions much less difficult.

Will the green oil come from algae, bacteria, archaea or something else? I don't know. Oil is a natural product arising from the transformation of plant material created by the capture of light. Since it is a transformation that occurs in nature, we can replicate it, not necessarily directly, but in ways that arrive at a similar result. It is not magic.

Scientists around the world have seen the prize, and hundreds of millions of dollars are going into start-up companies. There is a nice twist to this line of investment. Despite the ups and downs, the long-term trend for the price of oil is up. That means the size of the prize for replacing oil is going up while the size of the challenge is going down. Replacing $20-a-barrel oil would be difficult, but replacing $100 oil is much easier.

There is an old saying: "The Stone Age didn't end because we ran out of stone. Someone came up with a better idea." The better idea is coming.

david_g_myers's picture
Professor of Psychology, Hope College; Co-author, Psychology, 11th Edition

Speaking recently to university colleagues in southern Africa I heard their wish for teaching materials that, for them, would change everything. If only there could be a way for their students, who cannot afford even greatly discounted Euro-American textbooks, to have access to low cost, state-of-the-art textbooks with culturally relevant examples.

For students in Africa and around the world, this utopian world may in the next decade become the real world, thanks to:

• Interactive textbooks: Various publishers are developing web-based interactive e-books with links to tutorials, simulations, quizzes, animations, virtual labs, discussion boards, and video clips. (These are not yesterday’s e-textbooks.)

• Customizability: Instructors, and regional instructor networks, will be able to rearrange the content, delete unwanted material, and add (or link to) materials pertinent to their students' worlds and their own course goals.

• Affordability: In the North American context, students will pay for course access tied to their names. With no hard copy book production and shipping, and no used books, publishers will stay afloat with a much smaller fee paid by many more students, or via a site license. For courses in economically impoverished regions, benevolent publishers could make access available for very low cost per student.

• Student accountability: Instructors will track their students’ engagement in advance of class sessions, thus freeing more class time for discussion.

• Expanding broadband access: Thanks partly to a joint foundation initiative by Rockefeller, Carnegie, Ford, and others, “information technologies and connectivity to the Internet” are coming to African universities. As yet, access is limited and expensive. But with increased bandwidth and the prospect of inexpensive, wireless personal reading devices, everything may change.

This is not pie in the sky. African researchers are eager to explore the effectiveness of the new interactive content when it becomes accessible to their students. The hope is that such materials will combine the strengths of existing texts (which are comprehensive, expertly reviewed, painstakingly edited, attractively packaged, and supported with teaching aids) at reduced cost and with the possibility of locally adapted illustrations and content.

Textbooks are sometimes faulted for being biased, dated, or outrageously expensive. But say this much for them, whether in traditional book or new web-based formats: By making the same information available to rich and poor students at rich and poor schools in rich and poor countries, they are egalitarian. They flatten the world. And as James Madison noted in 1825, “the advancement and diffusion of knowledge is the only guardian of true liberty.”

patrick_bateson's picture
Professor of Ethology, Cambridge University; Co-author, Design for a Life (*Deceased)

"The power set free from the atom has changed everything, except our ways of thinking." So said Einstein. Whether or not he foresaw the total destruction of the world, the thought gave rise to a joke, albeit a sick one, that humans would never make contact with civilizations in other parts of the universe. Either those civilizations were not advanced enough to decode our signals or they were more advanced than us, had developed nuclear weapons and had destroyed themselves. The chances would be vanishingly small that their brief window of time between technological competence and oblivion would coincide with ours.

I never understood the policy of deterrence that justified the nuclear arms race. The coherence of such a view depended utterly on the maintenance of human rationality. Suppose that people whose concern for personal safety or thought for others was subordinated to religious or ideological belief ruled a country in possession of nuclear weapons. The whole notion of deterrence collapses.

I usually regard myself as an optimist. Tomorrow will be better than today. My naïve confidence has been dented by advancing age and by the growing number of reality checks that point to trouble ahead. Even if the red mists of anger or insanity do not unleash the total destruction of our way of life, the prognosis for the survival of human civilization is not good. However much we believe in technical fixes that will overcome the problems of diminishing resources, the planet is likely to be overwhelmed by the sheer number of people who inhabit it and by the conceit that economic growth for everybody is the only route to well-being. The uncontrolled greed of the developed world has taken a sharp knock in the recent credit crunch, but how do you persuade affluent people to accept an overall reduction in their standard of living? Which government of any stripe is going to risk its future by enforcing the unpopular policies that are already needed? On this front the prognosis might not be too bad since crises do bring about change.

The Yom Kippur war of 1973 led to a dramatic reduction in the oil supply and, in the UK, petrol rationing was swiftly introduced and everybody was required to save fuel by driving no more than 50 mph. The restraint disappeared, of course, as soon as the oil started to flow again, but the experience showed that people will uncomplainingly change their behavior when they are required to do so and understand the justification.

Growth of the human population must be one of the major threats to the sustainability of resources such as drinking water and food. Here again the prognosis does not have to be wholly bad, if far-sighted wisdom prevails. If the GDP of each country of the world is plotted on graph paper against average family size in that country, the correlation is almost perfectly negative. (The outliers provided by rich countries with large average family sizes are almost exclusively those places in which women are treated badly.) The evidence suggests that if we wished to take steps to reduce population growth, every effort should be made to boost the GDP of the poorest countries of the world. This is an example where economic growth in some countries and overall benefit to the world could proceed hand in hand, but the richest countries would have to pay the price.
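
For readers who want to check the relationship themselves, a minimal sketch of the computation; the file countries.csv and its column names are hypothetical placeholders, not a real dataset.

```python
# Sketch of the relationship described above; "countries.csv" and its
# column names are hypothetical placeholders, not a real dataset.
import pandas as pd

df = pd.read_csv("countries.csv")          # columns: country, gdp, avg_family_size
r = df["gdp"].corr(df["avg_family_size"])  # Pearson correlation
print(f"correlation between GDP and average family size: r = {r:.2f}")
# The claim is that r comes out strongly negative, with the outliers being
# rich countries in which women are treated badly.
```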

Another darker thought is that the human population might be curbed by its own stupidity and cupidity. I am not now thinking of conflict but of the way in which endocrine disruptors are poured unchecked into the environment. Suddenly males might be feminized by the countless artificial products that simulate the action of female hormones, sufficiently so that reproduction becomes impossible. For some the irony would be delicious: the ultimate feedback mechanism, unforeseen by Malthus, that places a limit on the growth of the human population.

Sustainability is the goal of passing on to the next generation the resources (or some equivalent) that we received from our forebears. It may be a pipe dream, given the way we think.

martin_seligman's picture
Professor and Director, Positive Psychology Center, University of Pennsylvania; Author, Flourish

If we could teach intuition, people would be smarter.

Most of real world “intelligent” performance is based on intuition, not on reasoning. The expert surgeon just “knows” where to cut. The experienced farmer just “knows” that it is going to rain. The expert firefighter just “knows” that the roof is about to collapse. The judge just “knows” the defendant is lying. These finely honed intuitions are fast, unconscious, multidimensional, inarticulate, and are made confidently. What separates intelligent from mechanically stupid action is wrapped up in this mysterious process. If we could only teach intuition, we could raise human intelligence substantially.

I believe that the teaching of intuition, through computationally driven simulation, is on the horizon.

Intuition is a species of recognition, formally akin to the way we recognize that a table is a table. We are now close to understanding how natural classes are recognized. Consider the universe of objects all people agree are tables. There are a great many features of tables that are potentially relevant (but neither necessary nor sufficient, singly or jointly) to being a table: e.g., flatness of the surface, number of legs, capacity for supporting other objects, function, compatibility with chairs, and so on. Each of these features can be assigned some value, which could either be binary (present vs. absent) or continuous. Different instances of tables will have different values along several of the dimensions; some, like dining room tables, are flat, whereas others, like pool tables, have pockets. This means that the process of categorization is stochastic in nature. Upon observing a new object one can decide whether it is a table by comparing its features with the features of stored tables in memory. If the sum of its similarity to all of the tables in memory is higher than the sum of its similarity to other objects (e.g., chairs, animals, etc.), then one “knows” that it too is a table.
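
As an illustration of the similarity-summing account just described, here is a toy sketch in Python; the features, exemplars, and numbers are invented purely for illustration.

```python
# Toy sketch of the exemplar/similarity account of recognition: a new
# object is assigned to whichever category wins on summed similarity to
# stored exemplars. Features and exemplar values are invented placeholders.
import numpy as np

def similarity(a, b):
    """Similarity falls off with distance in feature space."""
    return float(np.exp(-np.linalg.norm(np.asarray(a) - np.asarray(b))))

# Stored exemplars, each described by (flatness, has_four_legs, supports_objects)
memory = {
    "table": [[1.0, 1.0, 1.0], [0.9, 1.0, 1.0], [1.0, 0.75, 1.0]],
    "chair": [[0.3, 1.0, 0.2], [0.2, 1.0, 0.3]],
}

def categorize(new_object):
    scores = {category: sum(similarity(new_object, ex) for ex in exemplars)
              for category, exemplars in memory.items()}
    return max(scores, key=scores.get), scores

label, scores = categorize([0.95, 1.0, 0.9])  # flattish, four-legged, load-bearing
print(label, scores)  # it "knows" this is a table because summed similarity wins
```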

Now consider an “eagle” lieutenant recognizing a likely ambush. Here is what this eagle has stored in her brain. She has a list of the dimensions relevant to an ambush site versus a non-ambush site. She has values along each of these dimensions for each of the ambush and non-ambush sites that she has experienced or learned about. She has a mental model which assigns weights to each of these basic dimensions or features (and to higher-order features, such as the interaction between two dimensions). Based on past experiences with similar sets of features and knowledge of the outcomes of those feature sets, she can predict the outcome of the present feature set and, based on her predictive model, choose how to respond to the possibility of this being an ambush.

This strongly implies that intuition is teachable, perhaps massively teachable. One way is brute force: simple repeated experience with forced choice seems to build intuition, and chicken-sexing is an example of such brute force. Professional Japanese chicken-sexers can tell male from female chicks at a glance, but they cannot articulate how they do it. With many forced-choice trials with feedback, naïve people can be trained to very high accuracy, and they too cannot report how they do it.

A better way is virtual simulation. A sufficient number of simulations, with the right variations, will build up the mental model, producing a commander or surgeon who, when the situation arises in real life, has “seen it before,” recognizes it, and takes the life-saving action at zero cost in blood. It would be a waste of training to simulate obvious decisions in which most commanders or surgeons would get it right without training. Computational modeling of the future can derive a decision contour, along which “close calls” occur. These are the scalpel-edge cases that yield the slowest response times and are most prone to error. One could also systematically morph material along the decision contour and thereby over-represent cases near the boundary.
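
One way to find those close calls computationally is simply to generate candidate scenarios and keep the ones whose predicted outcome sits near the decision boundary; the scenario generator and scoring model below are stand-ins, a sketch of the idea rather than any particular training system.

    # Sketch: oversample scenarios near the decision contour (predicted
    # risk close to 0.5), the "scalpel-edge" cases described above.
    # The scenario generator and the risk model are placeholders.
    import random

    def predicted_risk(scenario):
        """Placeholder model: probability that the scenario ends badly."""
        return 0.5 * scenario["visibility"] + 0.5 * scenario["exposure"]

    def random_scenario():
        return {"visibility": random.random(), "exposure": random.random()}

    def close_calls(n_wanted, band=0.1):
        """Keep only scenarios whose predicted risk lies near the boundary."""
        kept = []
        while len(kept) < n_wanted:
            s = random_scenario()
            if abs(predicted_risk(s) - 0.5) < band:
                kept.append(s)
        return kept

    training_set = close_calls(50)   # hard cases worth simulating for trainees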

By so simulating close decisions in almost every human domain, vastly better intuition becomes teachable. Hence many more intelligent surgeons, judges, commanders, investors, and scientists.

nicholas_humphrey's picture
Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

We're easily seduced by the idea that, once the Big One comes, nothing will ever be the same again. But I guess what will surprise—and no doubt frustrate—those who dream of a scientifically-driven new order is how unchangeable, and unmanageable by technology, human lives are.

Imagine if this Edge question had been posed to the citizens of Rome two thousand years ago. Would they have been able to predict the coming of the internet, DNA finger-printing, mind-control, space travel? Of course not. Would that mean they would have failed to spot the technological developments that were destined to change everything? I don't think so. For the fact is nothing has changed everything.

Those Romans, despite their technological privations, led lives remarkably like ours. Bring them into the 21st century and they would of course be amazed by what science has achieved. Yet they would soon discover that beneath the modern wrapping it is business as usual. Politics, crime, love, religion, heroism... The stuff of human biography. The more it changes, the more it's the same thing.

The one development that really could change everything would be a radical, genetically programmed, alteration of human nature. It hasn't happened in historical times, and I'd bet it won't be happening in the near future either. Cultural and technical innovations can certainly alter the trajectory of individual human lives. But so long as human beings continue to reproduce by having sex and each new generation goes back to square one, every baby begins life with a set of inherited dispositions and instincts that evolved in the technological dark ages.

The Latin poet Horace wrote: "You can drive out nature with a pitchfork, but she will always return". Let's dream, if we like, of revolution. But be prepared for more of the same.

alan_alda's picture
Actor; Writer; Director; Host, PBS program Brains on Trial; Author, Things I Overheard While Talking to Myself

I find it hard to believe that anything will change everything. The only exception might be if we suddenly learned how to live with one another. But, does anyone think that will come about in a foreseeable lifetime?

Evidence from the past seems to point to our becoming increasingly dangerous pretty much every time we come up with a new idea or technology. These new things are usually wholesome and benign at first (movable type, pharmacology, rule of law) but before long we find ways to use these inventions to do what we do best—exercise power over one another.

Even if we were visited by weird little people from another planet and were forced to band together, I doubt if it would be long before we’d be finding ways to break into factions again, identifying those among us who are not quite people.

We keep rounding an endless vicious circle. Will an idea or technology emerge anytime soon that will let us exit this lethal cyclotron before we meet our fate head on and scatter into a million pieces? Will we outsmart our own brilliance before this planet is painted over with yet another layer of people? Maybe, but I doubt it.

leo_m_chalupa's picture
Neurobiologist; Professor of Pharmacology and Physiology, George Washington University

In the 1960s movie "The Graduate" a young Dustin Hoffman is advised to go into plastics, presumably because that will be the next big thing.

Today, one might well advise the young person planning to pursue a degree in medicine or the biological sciences to go into brain plasticity. This refers to the fact that neurons are malleable throughout life, capable of being shaped by external experiences and endogenous events.

Recent imaging studies of single neurons have revealed that specialized parts of nerve cells, termed dendritic spines, are constantly undergoing a process of rapid expansion and retraction. While brain cells are certainly capable of structural and functional changes throughout life, an extensive scientific literature has shown that plasticity in the nervous system is greatest early in development, during the so-called critical periods. This accounts for the marvelous ability of children to rapidly master various skills at different developmental stages. Toddlers have no difficulty in learning two, three and even more languages, and most adolescents can learn to ski black diamond slopes long before their middle-aged parents. The critical periods underlying such learning reflect the high degree of plasticity exhibited by specific brain circuits during the first two decades of life.

In recent years, developmental neurobiologists have made considerable progress in unraveling the myriad factors underlying the plasticity of neurons in the developing brain. For instance, a number of studies have now demonstrated that it is the formation of inhibitory circuits in the cortex that causes decreased plasticity in the maturing visual system. While no single event can entirely explain brain plasticity, progress is being made at a rapid pace, and I am convinced that in my lifetime we will be able to control the level of plasticity exhibited by mature neurons.

Several laboratories have already discovered ways to manipulate the brain so that mature neurons become as plastic as they are during early development. Such studies have been done using genetically engineered mice with either a deletion or an over-expression of specific genes known to control plasticity during normal development. Moreover, drug treatments have now been found to mimic the changes observed in these mutant mice.

In essence this means that the high degree of brain plasticity normally evident only during early development can now be made to occur throughout the life span. This is undoubtedly a game changer in the brain sciences. Imagine being able to restore the plasticity of neurons in the language centers of your brain, enabling you to learn any and all languages effortlessly and at a rapid pace. The restoration of neuronal plasticity would also have important clinical implications since, unlike in the mature brain, connections in the developing brain are capable of sprouting (i.e., new growth). For this reason, this technology could provide a powerful means to combat loss of neuronal connections, including those resulting from brain injury as well as various disease states.

I am optimistic that these treatments will be forthcoming in my lifetime. Indeed a research group in Finland is about to begin the first clinical study to assess the ability of drug treatments to restore plasticity to the visual system of adult humans. If successful this would provide a means for treating amblyopia in adults, a prevalent disorder of the visual system, which today can only be treated in young children whose visual cortex is still plastic. 

Still, there are a number of factors that will need to be worked out before the restoration of neuronal plasticity becomes a viable procedure. For one thing, it will be necessary to devise a means of targeting specific groups of neurons, those controlling the particular function for which enhanced plasticity is wanted. Many people might wish to have a brain made capable of effortlessly learning foreign languages, but few would be pleased if this were accompanied by a vocabulary limited to babbling sounds, not unlike those of my granddaughter who is beginning to learn to speak English and Ukrainian.

barry_c_smith's picture
Professor & Director, Institute of Philosophy School of Advanced Study University of London

Despite the inevitable decline in the environment brought by climate change, the advance of technology will steadily continue. Many pin their hopes on technological advances to lessen the worst effects of climatic upheaval and to smooth the transition between our dependence on fossil fuels and our eventual reliance on renewable energy sources. However, bit by bit, less dramatic advances in technology will take place, changing the world, and our experience of it, for ever.

It is tempting when thinking about developments that will bring fundamental change to look to the recent past. We think of the Internet and the cell phone. To lose contact with the former, even temporarily, can make one feel that one is suddenly stripped of a sense, like the temporary loss of one’s sight or hearing; while the ready supply of mobile phone technology has stimulated the demand to communicate. Why be alone anywhere? You can always summon someone’s company. Neither of these technologies is yet optimal, and either we, or they, will have to adapt to one another. The familiar refrain is that email increases our workload and that cell phones put us at the end of the electronic leash. Email can also be a surprisingly inflammatory medium, and cell phones can separate us from our surroundings, leaving us uneasy with these technologies. Can’t live with them, can’t live without them. So can future technology help, or is it we who will adapt?

Workers in A.I. used to dream of the talking typewriter and this is ever closer to being an everyday reality. Why write emails when you can dictate them? Why read them when you can listen to them being read to you, and do something else? And why not edit as you go, to speed up the act of replying? All this will come one day, no doubt, perhaps with emails being read in the personalized voice patterns of their senders. Will this cut down on the surprisingly inflammatory and provocative nature of email exchanges? Perhaps not.

However, the other indispensable device for communicating, the cell phone, is far from adaptive. We hear, unwanted, other people’s conversations. We lose our inhibitions and our awareness of our surroundings while straining to capture the nuances of the other’s speech; listening out for the subtle speech signals that convey mood and meaning, many of which are simply missing in this medium. Maybe this is why speakers are more ampliative on their cell phones, implicitly aware that less of them comes across. Face to face, our attention is focused on many features of the talker. It is this multi-modal experience that can simultaneously provide so much. Without these cross-modal clues, we make a concentrated effort to tune in to what is happening elsewhere, often with dangerous consequences, as happens when drivers lose the keen awareness of their surroundings — even when using hands-free sets. Could technology overcome these problems?

Here, I am reminded not of the recent past but of a huge change that occurred in the Middle Ages when humans transformed their cognitive lives by learning to read silently. Originally, people could only read books by reading each page out loud. Monks would whisper, of course, but the dedicated reading by so many in an enclosed space must have been a highly distracting affair. It was St Ambrose who amazed his fellow believers by demonstrating that without pronouncing words he could retain the information he found on the page. At the time, his skill was seen as a miracle, but gradually human readers learned to read by keeping things inside and not saying the words they were reading out loud. From this simple adjustment, seemingly miraculous at the time, a great transformation of the human mind took place, and so began the age of intense private study so familiar to us now, with universities where ideas could turn silently in large minds.

Will a similar transformation of the human mind come about in our time? Could there come a time when we intend to communicate and do so without talking out loud? If the answer is ‘yes’, a quiet public space would be restored where all could engage in their private conversations without disturbing others. Could it really happen? Recently, we have been amazed by how a chimpanzee running on a treadmill could control—for a short time—the movements of a synchronized robot running on a treadmill thousands of miles away. Here, we would need something subtly different but no less astounding: a way of controlling in thought, and committing to send, the signals in the motor cortex that would normally travel to our articulators and ultimately issue in speech sounds. A device, perhaps implanted or appended, would send the signals, and another device would read them in the receiver and stimulate similar movements or commands in the receiver’s motor cortex, giving receivers the ability, through neural mimicry, to reproduce silently the speech sounds the sender would have made. Could accent be retained? Maybe not, unless some way was found of coding separately, but usably, the information voice conveys about the size, age and sex of the speaker. However, knowing who was calling and knowing how they sounded may lead us to ‘hear’ their voice with the words understood.

Whether this could be done depends, in part, on whether Liberman’s Motor Theory of Speech Perception is true, and it may well not be. However, a breakthrough of this kind, introducing such a little change as our not having to speak out loud or having to listen attentively to sounds when communicating, would allow us to share our thoughts efficiently and privately. Moreover, just as thinking distracts us less from our surroundings than listening attentively to sounds originating elsewhere, perhaps one could both communicate and concentrate on one’s surroundings, whether that be driving, or just negotiating our place among other people. It would not be telepathy, the reading of minds, or the invasion of thought, since it would still depend on senders and receivers with the appropriate apparatus being willing to send to, and receive from, one another. We would still have to dial and answer.

Would it come to feel as if one were exchanging thoughts directly? Perhaps. And maybe it would become the preferred way of communicating in public. And odd as this may sound to us, I suspect the experience of taking in the thoughts of others when reading a manuscript silently was once just as strange to those early medieval scholars. These are changes in experience that transform our minds, giving us the ability to be (notionally) in two places at once. It is these small changes in how we utilize our minds that may ultimately have the biggest effects on our lives.

freeman_dyson's picture
Physicist, Institute of Advanced Study; Author, Disturbing the Universe; Maker of Patterns

What will change everything?  What game-changing scientific ideas and developments do you expect to live to see?

Since I am 85, I cannot expect to see any big changes in science during my life-time. I beg permission to change the question to make it more interesting.

What will change everything?  What game-changing scientific ideas and developments do you expect your grandchildren to see?

I assume that some of my grandchildren will be alive for the next 80 years, long enough for neurology to become the dominant game-changing science. I expect that genetics and molecular biology will be dominant for the next fifty years, and after that neurology will have its turn. Neurology will change the game of human life drastically, as soon as we develop the tools to observe and direct the activities of a human brain in detail from the outside.

The essential facts which will make detailed observation or control of a brain possible are the following. Microwave signals travel easily through brain tissue for a few centimeters. The attenuation is small enough, so that signals can be transmitted from the inside and detected on the outside. Small microwave transmitters and receivers have bandwidths of the order of gigahertz, while neurons have bandwidths of the order of kilohertz. A single microwave transmitter inside a brain has enough bandwidth to transmit to the outside the activity of a million neurons. A system of 10^5 tiny transmitters inside a brain with 10^5 receivers outside could observe in detail the activity of an entire human brain with 10^11 neurons. A system of 10^5 transmitters outside with 10^5 receivers inside could control in detail the activity of 10^11 neurons. The microwave signals could be encoded so that each of the 10^11 neurons would be identified by the code of the signal that it transmits or receives.
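
The arithmetic behind these numbers is easy to check using the essay's own order-of-magnitude figures:

    # Order-of-magnitude check of the bandwidth argument, using the
    # round numbers above (gigahertz transmitters, kilohertz neurons).
    transmitter_bandwidth_hz = 1e9
    neuron_bandwidth_hz = 1e3

    neurons_per_transmitter = transmitter_bandwidth_hz / neuron_bandwidth_hz
    print(neurons_per_transmitter)                  # 1e6: a million neurons each

    transmitters = 1e5
    print(transmitters * neurons_per_transmitter)   # 1e11: an entire human brain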

These physical tools would make possible the practice of "Radiotelepathy", the direct communication of feelings and thoughts from brain to brain. The ancient myth of telepathy, induced by occult and spooky action-at-a-distance, would be replaced by a prosaic kind of telepathy induced by physical tools. To make radiotelepathy possible, we have only to invent two new technologies, first the direct conversion of neural signals into radio signals and vice versa, and second the placement of microscopic radio transmitters and receivers within the tissue of a living brain. I do not have any idea of the way these inventions will be achieved, but I expect them to emerge from the rapid progress of neurology before the twenty-first century is over.

It is easy to imagine radiotelepathy as a powerful instrument of social change, used either for good or for evil purposes. It could be a basis for mutual understanding and peaceful cooperation of humans all over the planet. Or it could be a basis for tyrannical oppression and enforced hatred between one communal society and another. All that we can say for certain is that the opportunities for human experience and understanding would be radically enlarged. A society bonded together by radiotelepathy would be experiencing human life in a totally new way. It will be our grandchildren's task to work out the rules of the game, so that the effects of radiotelepathy remain constructive rather than destructive. It is not too soon for them to begin thinking about the responsibilities that they will inherit. The first rule of the game, which should not be too difficult to translate into law, is that every individual should be guaranteed the ability to switch off radio communication at any time, with or without cause. When the technology of communication becomes more and more intrusive, privacy must be preserved as a basic human right.

Another set of opportunities and responsibilities will arise when radiotelepathy is extended from humans to other animal species. We will then experience directly the joy of a bird flying or a wolf-pack hunting, the pain of a deer hunted or an elephant starved. We will feel in our own flesh the community of life to which we belong.  I cannot help hoping that the sharing of our brains with our fellow-creatures will make us better stewards of our planet.

gerald_holton's picture
Mallinckrodt Professor of Physics and Professor of the History of Science, Emeritus, Harvard University; Author, Einstein for the 21st Century: His Legacy in Science, Art, and Modern Culture

An answer can be given once more in one sentence: the intentional, hostile deployment—whether by a state, a terrorist group, or other individuals—of a significant nuclear device.

laurence_c_smith's picture
Professor of Environmental Studies, Brown University; Author, Rivers of Power

In the classic English fable Jack and the Beanstalk, the intrepid protagonist risks being devoured on sight in order to repeatedly raid the home of a flesh-eating giant for gold. All goes well until the snoring giant awakens and gives furious chase. But Jack beats him back down the magic beanstalk and chops it down with an axe, toppling the descending cannibal to its death. Jack thus wins back his life plus substantial economic profit from his spoils.

Industrialized society has also reaped enormous economic and social benefit from fossil fuels, so far without rousing any giants. But as geoscientists, my colleagues and I devote much of our time to worrying about whether they might be slumbering in the Earth's climate system.

We used to think climate worked like a dial — slow to heat up and slow to cool down — but we've since learned it can also act like a switch. Twenty years ago anyone who hypothesized an abrupt, show-stopping event — a centuries-long plunge in air temperature, say, or the sudden die-off of forests — would have been laughed off. But today, an immense body of empirical and theoretical research tells us that sudden awakenings are dismayingly common in climate behavior.

Ancient records preserved in tree rings, sediments, glacial ice layers, cave stalactites, and other natural archives tell us that for much of the past 10,000 years — the time when our modern agricultural society evolved — our climate was remarkably stable. Before then it was capable of wild fluctuations, even leaping eighteen degrees Fahrenheit in ten years. That's as if the average temperature in Minneapolis warmed to that of San Diego in a single decade.

Even during the relative calm of recent centuries, we find sudden lurches that exceed anything in modern memory. Tree rings tell us that in the past 1,000 years, the western United States has seen three droughts at least as bad as the Dust Bowl but lasting three to seven times longer. Two of them may have helped collapse past societies of the Anasazi and Fremont people.

The mechanisms behind such lurches are complex but decipherable. Many are related to shifting ocean currents that slosh around pools of warm or cool seawater in quasi-predictable ways. The El Niño/La Niña phenomenon, which redirects rainfall patterns around the globe, is one well-known example. Another major player is the Atlantic thermohaline circulation (THC), a massive density-driven "heat conveyor belt" that carries tropical warmth northwards via the Gulf Stream. The THC is what gifts Europe with relative balminess despite being as far north as some of Canada's best polar bear habitat.

If the THC were to weaken or halt, the eastern U.S. and Europe would become something like Alaska. While over-sensationalized by The Day After Tomorrow film and a scary 2003 Pentagon document imagining famines, refugees, and wars, a THC shutdown nonetheless remains an unlikely but plausible threat. It is the original sleeping giant of my field.

Unfortunately, we are discovering more giants that are probably lighter sleepers than the THC. Seven others — all of them potential game-changers — are now under scrutiny: (1) the disappearance of summer sea-ice over the Arctic Ocean, (2) increased melting and glacier flow of the Greenland ice sheet, (3) "unsticking" of the frozen West Antarctic Ice Sheet from its bed, (4) rapid die-back of Amazon forests, (5) disruption of the Indian Monsoon, (6) release of methane, an even more potent greenhouse gas than carbon dioxide, from thawing frozen soils, and (7) a shift to a permanent El Niño-like state. As with the THC, should any of these occur, there would be profound ramifications — for our food production, for the extinction and expansion of species, and for the inundation of coastal cities.

To illustrate, consider the Greenland and Antarctic ice sheets. The water stored in them is enormous, enough to drown the planet under more than 200 feet of water. That will not happen anytime soon but even a tiny reduction in their extent — say, five percent — would significantly alter our coastline. Global sea level is already rising about one-third of a centimeter every year and will reach at least 18 to 60 centimeters higher just one long human lifetime from now, if the speeds at which glaciers are currently flowing from land to ocean remain constant. But at least two warming-induced triggers might speed them up: percolation of lubricating meltwater down to the glaciers' beds; and the disintegration of floating ice shelves that presently pin glaciers onto the continent. Should these giants awaken, our best current guess is 80 to 200 centimeters of sea level rise. That's a lot of water. Most of Miami would either be surrounded by dikes or underwater.
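
As a rough check on those figures, using only the rates quoted above plus an assumed lifetime of about ninety years:

    # Rough check of the sea-level numbers quoted above.
    # The lifetime figure is an assumption; the rates are the essay's.
    current_rate_cm_per_year = 1.0 / 3.0      # about a third of a centimeter per year
    lifetime_years = 90                        # "one long human lifetime"

    constant_rate_rise_cm = current_rate_cm_per_year * lifetime_years
    print(round(constant_rate_rise_cm))        # ~30 cm, within the 18-60 cm range

    awakened_giants_range_cm = (80, 200)       # the essay's guess if the triggers fire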

Unfortunately, the presence of sleeping giants makes the steady, predictable growth of anthropogenic greenhouse warming more dangerous, not less. Alarm clocks may be set to go off, but we don't know what their temperature settings are. The science is too new, and besides we'll never know for sure until it happens. While some economists predicted that rising credit-default swaps and other highly leveraged financial products might eventually bring about an economic collapse, who could have foreseen the exact timing and magnitude of late 2008? Like most threshold phenomena it is extremely difficult to know just how much poking is needed to disturb sleeping giants. Forced to guess, I'd mutter something about decades, or centuries, or never. On the other hand, one might be stirring already: In September 2007, then again in 2008, for the first time in memory nearly 40% of the late-summer sea ice in the Arctic Ocean abruptly disappeared.

Unlike Jack's, the eyes of scientists are slow to adjust to the gloom. But we are beginning to see some outlines and, unfortunately, to discern not one but many sleeping forms. What is certain is that our inexorable loading of the atmosphere with heat-trapping greenhouse gases increases the likelihood that one or more of them will wake up.

susan_blackmore's picture
Psychologist; Visiting Professor, University of Plymouth; Author, Consciousness: An Introduction

All around us the techno-memes are proliferating, and gearing up to take control; not that they realise it; they are just selfish replicators doing what selfish replicators do—getting copied whenever and wherever they can, regardless of the consequences. In this case they are using us human meme machines as their first stage copying machinery, until something better comes along. Artificial meme machines are improving all the time, and the step that will change everything is when these machines become self-replicating. Then they will no longer need us. Whether we live or die, or whether the planet is habitable for us or not, will be of no consequence for their further evolution.

I like to think of our planet as one in a million, or one in a trillion, of possible planets where evolution begins. This requires something (a replicator) that can be copied with variation, and selection. As Darwin realised, if more copies are made than can survive, then the survivors will pass on to the next generation of copying whatever helped them get through. This is how all design in the universe comes about.

What is not so often thought about is that one replicator can piggy-back on another by using its vehicles as copying machinery. This has happened here on earth. The first major replicator (the only one for most of earth’s existence and still the most prevalent) is genes. Plants and animals are gene machines—physical vehicles that carry genetic information around, and compete to protect and propagate it. But something happened here on earth that changed everything. One of these gene vehicles, a bipedal ape, became capable of imitation.

Imitation is a kind of copying. The apes copied actions and sounds, and made new variations and combinations of old actions and sounds, and so they let loose a new replicator—memes. After just a few million years the original apes were transformed, gaining enormous brains, dexterous hands, and redesigned throats and chests, to copy more sounds and actions more accurately. They had become meme machines.

We have no idea whether there are any other two-replicator planets out there in the universe because they wouldn’t be able to tell us. What we do know is that our planet is now in the throes of gaining a third replicator—the step that would allow interplanetary communication.

The process began slowly and speeded up, as evolutionary processes tend to do. Marks on clay preserved verbal memes and allowed more people to see and copy them. Printing meant higher copying fidelity and more copies. Railways and roads spread the copies more widely and people all over the planet clamoured for them. Computers increased both the numbers of copies and their fidelity. The way this is usually imagined is a process of human ingenuity creating wonderful technology as tools for human benefit, and with us in control. This is a frighteningly anthropocentric way of thinking about what is happening. Look at it this way:

Printing presses, rail networks, telephones and photocopiers were among early artificial meme machines, but they only carried out one or two of the three steps of the evolutionary algorithm. For example, books store memes and printing presses copy them, but humans still do the varying (i.e. writing the books by combining words in new ways), and the selecting (by choosing which books to buy, to read, or to reprint). Mobile phones store and transmit memes over long distances, but humans still vary and select the memes. Even with the Internet most of the selection is still being done by humans, but this is changing fast. As we old-fashioned, squishy, living meme machines have become overwhelmed with memes we are happily allowing search engines and other software to take over the final process of selection as well.
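
For readers who want the "evolutionary algorithm" spelled out, here is a generic copy-vary-select loop; the strings, the mutation rule, and the fitness function are arbitrary illustrations, not a model of any real meme.

    # Generic copy-vary-select loop: more copies are made than survive,
    # copies vary, and survivors seed the next round. Purely illustrative.
    import random

    def fitness(meme):
        return meme.count("a")      # stand-in for "how readily this gets retransmitted"

    def vary(meme):
        i = random.randrange(len(meme))
        return meme[:i] + random.choice("abc") + meme[i + 1:]

    population = ["cbcbcb"] * 20
    for generation in range(100):
        copies = [vary(m) for m in population for _ in range(3)]   # copy with variation
        copies.sort(key=fitness, reverse=True)                     # selection
        population = copies[:20]                                   # only some survive

    print(population[0])   # after enough generations, mostly "a"s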

Have we inadvertently let loose a third replicator that is piggy-backing on human memes? I think we have. The information these machines copy is not human speech or actions; it is digital information competing for space in giant servers and electronic networks, copied by extremely high fidelity electronic processes. I think that once all three processes of copying, varying and selecting are done by these machines then a new replicator has truly arrived. We might call these level-three replicators “temes” (technological-memes) or “tremes” (tertiary memes). Whatever we call them, they and their copying machinery are here now. We thought we were creating clever tools for our own benefit, but in fact we were being used by blind and inevitable evolutionary processes as a stepping stone to the next level of evolution.

When memes coevolved with genes they turned gene machines into meme machines. Temes are now turning us into teme machines. Many people work all day copying and transmitting temes. Human children learn to read very young—a wholly unnatural process that we’ve just got used to—and people are beginning to accept cognitive enhancing drugs, sleep reducing drugs, and even electronic implants to enhance their teme-handling abilities. We go on thinking that we are in control, but looked at from the temes’ point of view we are just willing helpers in their evolution.

So what is the step that will change everything? At the moment temes still need us to build their machines, and to run the power stations, just as genes needed human bodies to copy them and provide their energy. But we humans are fragile, dim, low quality copying machines, and we need a healthy planet with the right climate and the right food to survive. The next step is when the machines we thought we created become self-replicating. This may happen first with nano-technology, or it may evolve from servers and large teme machines being given their own power supplies and the capacity to repair themselves.

Then we would become dispensable. That really would change everything.

ian_mcewan's picture
Novelist; Recipient, the Man Booker Prize for Fiction; Author, Sweet Tooth; Solar; On Chesil Beach; Nutshell; Machines Like Me; The Cockroach

Philip Larkin began a poem with the hypothesis, If I were called in/ To construct a religion/ I should make use of water. Instead of water, I would propose the sun, and the religion I have in mind is a rational affair, with enormous aesthetic possibilities and of great utility.

By nearly all insider and expert accounts, we are, or soon will be, at peak oil, somewhere between now and five years from now. Even if we did not have profound concerns about climate change, we would need to be looking for different ways to power our civilization. How fortunate we are to have a safe nuclear facility a mere 93 million miles away, and fortunate too that the dispensation of physical laws is such that when a photon strikes a semi-conductor, an electron is released. I hope I live to see the full flourishing of solar technology—photovoltaics or concentrated solar power to superheat steam, or a combination of the two in concentrated photovoltaics. The technologies are unrolling at an exhilarating pace, with input from nanotechnology and artificial photosynthesis. Electric mobility and electricity storage are also part of this new quest. My hope is that architects will be drawn to designing gorgeous arrays and solar towers in the desert—as expressive of our aspirations as Medieval cathedrals once were. We will need new distribution systems too, smart grids—perfect Rooseveltian projects for our hard-pressed times. Could it be possible that in two or three decades we will look back and wonder why we ever thought we had a problem when we are bathed in such beneficent radiant energy?

david_dalrymple's picture
Research affiliate, MIT Media Lab

Having lived only 17 years so far, to ask what I expect to live to see is to cast a long, wide net.

When looking far into the future, I find it a useful exercise to imagine oneself as a non-human scientist: an alien, a god, or some other creature with a modern understanding of mathematics and physics, but no inherent understanding of human culture or language, beyond what it can deduce from watching what happens at a high level. Essentially, it looks at the world "up to isomorphism": it is not relevant who does what, what it's called, whether it has five fingers or six; but rather how much of it there is, whether it survives, and where it goes.

From this perspective, a few things are apparent: We are depleting our planet's resources faster than they can be replenished. Most of the sun's energy is reflected back into space without being used. There are more of us every minute and we have barely the slightest hint of slowing growth, despite overcrowding and lack of resources. We are trapped in a delicate balance of environmental conditions that has been faltering ever since we began pulling hydrocarbons out of the crust and burning them in the atmosphere (by coincidence, perhaps?), and there seems to be a good chance it will collapse catastrophically in the next 100 years if we don't run out of hydrocarbons first. We have thrown countless small, special-purpose objects into space, and some have transmitted very valuable information back to us. For a short period (while it seemed we would destroy our planet with deliberate nuclear explosions and immediate evacuation might be necessary) we played at shooting living men into the sky, but they have only gone as far as our planet's moon, still within Earth orbit, and sure enough wound up right back in our atmosphere. I should note here that I do not mean any disrespect to the achievements of the Apollo program. In fact, I believe they are among humankind's greatest—so far!

If civilization is to continue expanding, however, as well it shall if it does not collapse, it must escape the tiny gravity well it is trapped in. It is quite unclear to me how this will happen: whether humans will look anything like the humans of today, whether we will escape to sun-orbiting space stations or planetary colonies, but if we expand, we must expand beyond Earth. Even if environmentalists succeed in building a sustainable terrestrial culture around local farming and solar energy, it will only remain sustainable if we limit reproduction, which I expect most of modern society to find unconscionable on some level.

It has always been not only the human way, but the way of all living things, to multiply and colonize new frontiers. What is uniquely human is our potential ability to colonize all frontiers: to adapt our intelligence to new environments, or to adapt environments to suit ourselves. Although the chaos of a planetary atmosphere filled with organic diversity is a beautiful and effective cradle of life, it is no place for the new human-machine civilization. By some means—genetic engineering, medical technology, brain scanning, or something even more fantastical—I expect that humans will gradually shorten the food chain, adapting to use more directly the energy of stars. Perhaps we will genetically modify humans to photosynthesize directly, or implant devices that can provide all the energy for the necessary chemical reactions electrically, or scan our intelligences into solar-powered computing devices. Again, the details are very hard to predict, but I believe there will be some way forward.

I'm getting ahead of myself, so let's come back to the present. There is budding new interest in the development of space technology, in large part undertaken as private ventures, unlike in the past. Many view such operations as absurd luxury vacations for the super-rich, or at best as unlikely schemes to harvest fuel on the Moon and ship it back to Earth via a rail gun. I believe this research is tremendously important, because whatever short-term excuses may be found to fund it, in the long run, it is absolutely critical to the future development of our civilization. I also don't mean to imply that we should give up on environmentalism and sustainability, and just start over with another planet: these principles will only become more important as we spread far and wide, beginning in each new place with even more limited resources and limited contact with home. Not to mention that if Earth can be saved, it would be a tremendous cultural treasure to preserve as long as possible.

I'm not as optimistic about interstellar travel as some (I certainly don't expect it to become practical in this century), but I'm also much more optimistic about the ability of human civilization to adapt and survive without the precise conditions that were necessary for its evolution. There are so many possible solutions for the survival of humans (or posthumans) in solar orbit or on "inhospitable" planets that I expect we will find some way to make it work long before generational or faster-than-light voyages to faraway star systems; in fact, I expect it in my lifetime. But someday, "escaping the gravity well" will mean not that of Earth, but that of our star, and then humankind's ship will at last have...gone out.

alison_gopnik's picture
Psychologist, UC, Berkeley; Author, The Gardener and the Carpenter

The world is transforming from an agricultural and manufacturing economy to an information economy. This means that people will have to learn more and more. The best way to make it happen is to extend the period when we learn the most — childhood. Our new scientific understanding of neural plasticity and gene regulation, along with the global spread of schooling, will make that increasingly possible. We may remain children forever — or at least for much longer.

Humans already have a longer period of protected immaturity — a longer childhood — than any other species. Across species, a long childhood is correlated with an evolutionary strategy that depends on flexibility, intelligence and learning. There is a developmental division of labor. Children get to learn freely about their particular environment without worrying about their own survival — caregivers look after that. Adults use what they learn as children to mate, predate, and generally succeed as grown-ups in that environment. Children are the R & D department of the human species. We grown-ups are production and marketing. We start out as brilliantly flexible but helpless and dependent babies, great at learning everything but terrible at doing just about anything. We end up as much less flexible but much more efficient and effective adults, not so good at learning but terrific at planning and acting.

These changes reflect brain changes. Young brains are more connected, more flexible and more plastic, but less efficient. As we get older, and experience more, our brains prune out the less-used connections and strengthen the connections that work. Recent developments in neuroscience show that this early plasticity can be maintained and even reopened in adulthood. And, we've already invented the most unheralded but most powerful brain-altering technology in history — school.

For most of human history babies and toddlers used their spectacular, freewheeling, unconstrained learning abilities to understand fundamental facts about the objects, people and language around them — the human core curriculum. At about 6 children also began to be apprentices. Through a gradual process of imitation, guidance and practice they began to master the particular adult skills of their particular culture — from hunting to cooking to navigation to childrearing itself. Around adolescence motivational changes associated with puberty drove children to leave the protected cocoon and act independently. And by that time their long apprenticeship had given children a new suite of executive abilities — abilities for efficient action, planning, control and inhibition, governed by the development of prefrontal areas of the brain. By adolescence children wanted to end their helpless status and act independently and they had the tools to do so effectively.

School, a very recent human invention, completely alters this program. Schooling replaces apprenticeship. School lets us all continue to be brilliant but helpless babies. It lets us learn a wide variety of information flexibly, and for its own sake, without any immediate payoff. School assumes that learning is more important than doing, and that learning how to learn is most important of all. But school is also an extension of the period of infant dependence — since we don't actually do anything useful in school, other people need to take care of us — all the way up to a Ph.D. School doesn't include the gradual control and mastery of specific adult skills that we once experienced in apprenticeship. Universal and extended schooling means that the period of flexible learning and dependence can continue until we are in our thirties, while independent active mastery is increasingly delayed.

Schooling is spreading inexorably throughout the globe. A hundred years ago hardly anyone went to school; even now few people are schooled past adolescence. A hundred years from now we can expect that most people will still be learning into their thirties and beyond. Moreover, the new neurological and genetic developments will give us new ways to keep the window of plasticity open. And the spread of the information economy will make genetic and neurological interventions, as well as educational and behavioral interventions, more and more attractive.

These accelerated changes have radical consequences. Schooling alone has already had a revolutionary effect on human learning. Absolute IQs have increased at an astonishing and accelerating rate, "the Flynn effect". Extending the period of immaturity indeed makes us much smarter and far more knowledgeable. Neurological and genetic techniques can accelerate this process even further. We all tend to assume that extending this period of flexibility and openness is a good thing — who would argue against making people smarter?

But there may be an intrinsic trade-off between flexibility and effectiveness, between the openness that we require for learning and the focus that we need to act. Child-like brains are great for learning, but not so good for effective decision-making or productive action. There is some evidence that adolescents even now have increasing difficulty making decisions and acting independently, and pathologies of adolescent action like impulsivity and anxiety are at all-time historical highs. Fundamental grown-up human skills we once mastered through apprenticeship, like cooking and caregiving itself, just can't be acquired through schooling. (Think of all those neurotic new parents who have never taken care of a child and try to make up for it with parenting books). When we are all babies for ever, who will be the parents? When we're all children who will be the grown-ups?

kenneth_w_ford's picture
Retired Director of the American Institute of Physics

Not in my lifetime, but someday, somewhere, some team will figure out how to read your thoughts from the signals emitted by your brain. This is not in the same league as human teleportation—theoretically possible, but in truth fictional. Mind reading is, it seems to me, quite likely.

And, as we know from hard disks and flash memories, to be able to read is to be able to write. Thoughts will be implantable.

Some will applaud the development. After all, it will aid the absent minded, enable the mute to communicate, preempt terrorism and crime, and conceivably aid psychiatry. (It will also cut down on texting and provide as reliable a staple for cartoonists as the desert island and the bed.) Some will, quite rightly, deplore it. It will be the ultimate invasion of privacy.

Game-changing indeed. If we choose to play the game. Until about forty years ago, we lived in the "If it is technically feasible, it will happen" era. Now we are in the "If it is technically feasible, we can choose" era. An important moment was the decision in the United States in 1971 not to develop a supersonic transport. An American SST would hardly have been game-changing, but the decision not to build it was a watershed moment in the history of technology. Of course, since then—if I may offer up my own opinions—we should have said no to the International Space Station but didn't, and we should have said yes to the Superconducting Super Collider but didn't. Our skill in choosing needs refinement.

As what is technically feasible proliferates in its complexity, cost, and impact on humankind, we should more often ask the question, "Should we do it?" Take mind reading. We can probably safely assume that the needed device would have to be located close to the brain being read. That would mean that choice is possible. We could let Mind Reader™, Inc. make and market it. Or we could outlaw it. Or we could hold it as an option for special circumstances (much as we now try to do with wiretapping). What we will not have to do is throw up our hands and say, "Since it can be done, it will be done."

I like being able to keep some of my thoughts to myself, and I hope that my descendants will have the same option.

irene_pepperberg's picture
Research Associate & Lecturer, Harvard; Author, Alex & Me

Knowledge of exactly how the brain works will change everything. Despite all our technical advances in brain-mapping, we still do not fully understand how the human or nonhuman brain works as a complete organ—e.g., the interconnectedness of the separate areas we are currently mapping. Just as we are beginning to learn that it is not "the" gene that controls what happens in our bodies, but rather the interplay of many genes, proteins, and environmental influences that turn genes on and off, we will learn how the interplay of various neural tissues, the chemicals in our body, environmental influences, and possibly some current unknowns, come together to affect how the brain works…and that will change everything.

We will, for example:

(a) ameliorate diseases in which the brain stops working properly—from diseases involving cognitive deficits such as Alzheimer's to those involving issues of physical control such as Parkinson's. We will monitor just when the brain stops functioning optimally and begin interventions much earlier. Age-related senility, with its concomitant problems and societal costs, will cease to exist. If dysfunctions such as autism and schizophrenia are indeed the result of faulty interconnections among many disparate areas, we will 'rewire' the appropriate systems either physically or through targeted drug intervention; similarly for problems such as dyslexia and ADHD;

(b) understand and repair brains susceptible to addictions, or criminality that is based on lack of inhibitory control;

(c) use this knowledge to develop models of brain function for advanced robotics and computers to design 'smart' interactive systems for, e.g., space and ocean exploration or seamless interfaces for, e.g., artificial limbs, vision, and hearing;

(d) determine ways in which human and nonhuman brains function similarly and differently, whether human and nonhuman intelligences are distinctly separate or whether a measurable gradient exists, the extent of any overlap of function, and whether the critical issues involve modules or a constellation of inter-functioning areas that both match and are disparate. For example, we will better understand how human intelligence and language evolved and the extent to which parallel intelligence and communication evolved in nonmammalian evolutionary lines. And how they may still be evolving….

(e) maybe frighteningly, attempt to improve upon the current human brain in an anatomical sense, or, in a much more acceptable manner, determine what forms of teaching and training enable learning to proceed most rapidly, by enhancing appropriate connectivity and memory formation. Different types of intelligence will likely be found to be correlated with particular brain organizational patterns; thus we will identify geniuses of particular sorts more readily and cultivate their abilities.

By truly understanding brain function, and harnessing it most effectively, we will affect everything else in the universe—for better or worse.

keith_devlin's picture
Mathematician; Executive Director, H-STAR Institute, Stanford; Author, Finding Fibonacci

This is a tough one. Not because there is a shortage of possibilities for major advances in science, and not because any predictions Edgies make are likely to be way off the mark (history tells us that they assuredly will be); rather, you set the hurdle impossibly high with "change everything" and "expect to live to see". The contraceptive pill "changed everything" for people living in parts of the world where it is available and the Internet "changed everything" for those of us who are connected. But for large parts of the world those advances may as well not have occurred. Moreover, many scientific changes take a generation or more to have a significant effect.

But since you ask, I'll give you an answer, and it's one I am pretty sure will happen in my lifetime (say, thirty more years). The reason for my confidence? The key scientific and technological steps have already been taken. In giving my answer, I'm adopting a somewhat lawyer-like strategy of taking advantage of that word "development" in your question. Scientific advances do not take place purely in the laboratory, particularly game-changing ones. They have to find their way into society as a whole, and that transition is an integral part of any "scientific advance."

History tells us that it can often take some time for a scientific or technological advance to truly "change everything". Understanding germs and diseases, electricity, the light bulb, and the internal combustion engine are classic examples. (Even these examples still have not affected everyone on the planet, of course, at least not directly, but that is surely just a matter of time.) The development I am going to focus on is the final one in the scientific chain that brings the results of the science into everyday use.

My answer? It's staring us in the face. The mobile phone. Within my lifetime I fully expect almost every living human adult, and most children, in the world to own one. (Neither the pen nor the typewriter came even close to that level of adoption, nor did the automobile.) That puts global connectivity, immense computational power, and access to all the world's knowledge amassed over many centuries, in everyone's hands. The world has never, ever, been in that situation before. It really will change everything. From the way individual people live their lives, to the way wealth and power are spread across the globe. It is the ultimate democratizing technology. And if my answer seems less "cutting edge" or scientifically sexy than many of the others you receive, I think that just shows how dramatic and pervasive the change has already been.

What other object do you habitually carry around with you and use all the time, and take for granted? Yet when did you acquire your first mobile phone? Can you think of a reason why anyone else in the world will not react the same way when the technology reaches them? Now imagine the impact on someone in a part of the world that has not had telephones, computers, the Internet, or even easy access to libraries. I'll let your own answers to these questions support my case that this is game-changing on a hitherto unknown global scale.

ernst_p_ppel's picture
Head of Research Group Systems, Neuroscience and Cognitive Research, Ludwig-Maximilians-University Munich, Germany; Guest Professor, Peking University, China

When time came to an end, the gods decided to run a final experiment. They wanted to be prepared after the big crunch for potential trajectories of life after the next big bang. For their experiment they chose two planets in the universe where evolution had resulted in similar developments of life. For planet ONE they decided to interfere with evolution by allowing only ONE species to develop its brain to a high level of complexity. This species referred to itself as being “intelligent”; its members were very proud of their achievements in science, technology, the arts or philosophy.

For planet TWO the gods altered just one variable. On this planet they allowed TWO species with high intelligence to develop. The two species shared the same environment, but—and this was crucial for the divine experiment—they did not communicate directly with each other. Direct communication was possible only within each species. Thus, one species could not directly inform the other about its future plans; each species could only register what had happened to their common environment.

The question was how life would be managed on planet ONE and on planet TWO. As for any organism, the goal on both planets was to maintain an internal balance, or homeostasis, by making optimal use of the available resources. As long as the members of the different species were not too intelligent, stability was maintained. However, when they became more intelligent and, in their own view, really smart, and when the frame of judgment changed, i.e. individual interests became dominant, trouble was preprogrammed. Driven by uncontrolled personal greed, they drew more resources from the environment than could be replaced. Which planet would do better at maintaining the conditions of life with species of such excessive intelligence?

Data analysis after the experimental period of 200 years showed that planet TWO did much better at maintaining the stability of the environment. Why? The species on planet TWO always had to monitor the consequences of the actions of the other species. If one took too many resources for its own satisfaction, sanctions by the other species were the consequence. Thus, drawing resources from the environment was controlled by the other species in a bi-directional way, resulting in a dynamic equilibrium.

When the gods published their results, they drew the following conclusions: long-term stability in complex systems, such as social systems whose members are too intelligent, can be maintained if two complementary systems interact with each other. Where only one system has developed, as on planet ONE, it is recommended to adopt a second system for regulative purposes. For social systems that second system should be the next generation: their future environment should be made present both conceptually and emotionally. By doing so, long-term stability is guaranteed.

Being good brain scientists, the gods knew that making the future present is not only a matter of abstract or explicit knowledge. Such knowledge is necessary but not sufficient for action resulting in a long-term equilibrium. Decisions have to be anchored in the emotional systems as well, i.e. an empathic relationship between the members of the two systems has to be established.

haim_harari's picture
Physicist, former President, Weizmann Institute of Science; Author, A View from the Eye of the Storm

Sometimes you make predictions. Sometimes you have wishful thinking. It is a pleasure to indulge in both, by discussing one and the same development which will change the world.

Today's world, its economy, industry, environment, agriculture, energy, health, food, military power, communications, you name it, are all driven by knowledge. The only way to fight poverty, hunger, diseases, natural catastrophes, terrorism, war, and all other evil, is the creation and dissemination of knowledge, i.e. research and education.

Of the six billion people on our planet, at least four billion are not participating in the knowledge revolution. Hundreds of millions are born to illiterate mothers, never drink clean water, have no medical care and never use a phone.

The "buzz words" of distant learning, individualized learning, and all other technology-driven changes in education, remain largely on paper, far from becoming a daily reality in the majority of the world's schools. The hope that affluent areas will provide remote access good education to others has not materialized. The ideas of bringing all of science, art, music and culture to every corner of the world and the creation of schools designed differently, based on individual and group learning, team work, simulations and special aids to special needs—all of these technology enabled goals remain largely unfulfilled.

It is amazing that, after decades of predictions and projections, education, all around the world, has changed so little. Thirty years ago, pundits talked about the thoroughly computerized school. Many had fantasies regarding an entirely different structure of learning, remote from the standard traditional school-class-teacher complex, which has hardly changed in the last century.

It is even more remarkable that no one has made really significant money by applying the information revolution to education. With a captive consumer audience of all the schoolchildren and teachers in the world, one would think that the money made by eBay, Amazon, Google and Facebook might be dwarfed by the profits of a very clever revolutionary idea regarding education. Yet no education-oriented company is found among the ranks of the web billionaires.

How come the richest person on the globe is not someone who had a brilliant idea about using technology for bringing education to the billions of school children of the world? I do not know the complete answer to this question. A possible guess is that in other fields you can have "quickies" but not in education. The time scale of education is decades, not quarters. Another possible guess is that, in education, you must mix the energy and creativity of the young with the wisdom and experience of the older, while in other areas, the young can do it fast and without the baggage of the earlier generations.

I am not necessarily bemoaning the fact that no one got into the list of richest people in the world by reforming education. But I do regret that no "game-changing" event has taken place on this front, by exploiting what modern technology is offering.

Singapore's four million citizens have a larger absolute GDP than Pakistan's 130 million. This is not unrelated to all the miseries and problems of Pakistan, from poverty to terror to severe earthquake damage. The only way to change this, in the long run, is education. Nothing better can happen to the world than better education for such a country. But relying only on local efforts may take centuries. On the other hand, if Al Qaida can reach other continents from Pakistan by using the web, why can't the world help educate 130 million Pakistanis using better methods?

So, my game-changing hope and prediction is that, finally, something significant will change on this front. The time is ripe. A few novel ideas, aided by technologies that did not exist until recently, and based on humanistic values, on compassion and on true desire to extend help to the uneducated majority of the earth population, can do the trick.

Am I naive, stupid or both? Why do I think that this miracle, predicted for 30 years by many, and impatiently waited for by more, will finally happen in the coming decades?

Here are my clues:

First, a technology-driven globalization is forcing us to see, to recognize and to fear the enormous knowledge gaps between different parts of the world and between segments of society within our countries. It is a major threat to everything that the world has achieved in the last 100 years, including democracy itself. Identifying the problem is an important part of the solution.

Second, the speed and price of data transmission, the advances in software systems, the feasibility of remote video interactions, the price reduction of computers, fancy screens and other gadgets, finally begin to lead to the realization that special tailor-made devices for schools and education are worth designing and producing. Until now, most school computers were business computers used at school and very few special tools were developed exclusively for education. This is beginning to change.

Third, for the first time, the generation that grew up with a computer at home is reaching the teacher ranks. The main obstacle of most education reforms has always been the training of the teachers. This should be much easier now. Just remember the first generation of Americans who grew up in a car-owning family. It makes a significant difference.

Fourth, the web-based social networks in which the children now participate pose a new challenge. The educational system must join them, because it cannot fight them. So the question is no longer "Will there be a revolution in education?" but "Will the revolution be positive or deadly?" Too many revolutions in history have led to more pain and death than to progress. We must get this one right.

Fifth, a child who comes to school with a 3G phone, iPod or whatever, sending messages to his mother's BlackBerry and knowing in real time what is happening in the classroom of his brother or friend miles or continents away, cannot be taught anything in the same way that I was taught. Has anyone seen a slide rule lately? A logarithmic table? A volume of Pedia other than Wiki?

At this point I could produce long lists of specific ideas which one may try or of small steps which have already been taken, somewhere in the world. But that is a matter for long essays or for a book, not for a short comment. It is unlikely that one or three or ten such ideas will do the job. It will have to be an evolutionary process of many innovations, trial and error, self adjustment, avoiding repetition of past mistakes and, above all, patience. It will also have to include one or more big game-changing elements of the order of magnitude of the influence of Google.

This is a change that will create a livable world for the next generations, both in affluent societies and, especially, in the developing or not-even-yet-developing parts of the world. Its time has definitely come. It will happen and it will, indeed, change everything.

frank_tipler's picture
Professor of Mathematical Physics, Tulane University; Coauthor (with John Barrow), The Anthropic Cosmological Principle

I'm 62, so I'll have to limit my projections to what I expect to happen in the next two to three decades. I believe these will be the most interesting times in human history (remember the old Chinese curse about "interesting times"?). Humanity will see, before I die, the "Singularity," the day when we finally create a human-level artificial intelligence. This involves considering the physics advances that will be required to create the computer that is capable of running a strong AI program.

Although, by both my calculations and those of Ray Kurzweil (originator of the "Singularity" idea), the 10-teraflop speed of today's supercomputers is more than enough computing power to run a minimal AI program, we are missing some crucial idea for this program. Conway's Game of Life has been proven to be a universal program, capable of expressing a strong AI program, and it should therefore be capable, if allowed to run long enough, of bootstrapping itself into the complexity of human-level intelligence. But Game of Life programs do not do so. They increase their complexity just so far, and then stop. Why, we don't know. As I said, we are missing something, and what we are missing is the key to human creativity.
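For readers who have never watched it run, here is a minimal sketch of Conway's rules (a standard implementation outline, not anything from the essay): a dead cell with exactly three live neighbours is born, and a live cell with two or three live neighbours survives.

from collections import Counter

# Minimal Game of Life step: birth on exactly 3 live neighbours,
# survival on 2 or 3; everything else dies.
def step(live_cells):
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": after four generations it reproduces itself one cell
# down and one cell to the right, and keeps crawling forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted by (1, 1)

Patterns like the glider can be composed into logic gates, which is what makes the system computationally universal; what such patterns do not do, as the author notes, is grow ever more complex on their own.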

But an AI program can be generated by brute force. We can map an entire human personality, together with a simulated environment, into a program and run it. Such a program would be roughly equivalent to the program being run in the movie The Matrix, and it would require enormous computing power, power far beyond today's supercomputers. The power required can only be provided by a quantum computer.

A quantum computer works by parallel processing across the multiverse. That is, part of the computation is done in this universe by you and your part of the quantum computer, and the other parts of the computation are done by your analogues with their parts of the computer in the other universes of the multiverse. The full potential of the quantum computer has not been realized because the existence of the multiverse has not yet been accepted, even by workers in the field of quantum computation, in spite of the fact that the multiverse's existence is required by quantum mechanics, and by classical mechanics in its most powerful form, Hamilton-Jacobi theory.

Other new technologies become possible via action across the multiverse. For example, the Standard Model of particle physics, the theory of all forces and particles except gravity, a theory confirmed by many experiments done over the past forty years, tells us that it is possible to transcend the laws of conservation of baryon number (number of protons plus neutrons) and conservation of lepton number (number of electrons plus neutrinos) and thereby convert matter into energy in a process far more efficient than nuclear fission or fusion. According to the Standard Model, the proton and electron making up a hydrogen atom can be combined to yield pure energy in the form of photons, or neutrino-anti-neutrino pairs. If the former, then we would have a mechanism that would allow us to convert garbage into energy, a device Doc in the movie Back to the Future obtained from his trip to the future. If the latter, then the directed neutrino-anti-neutrino beam would provide the ultimate rocket: the exhaust would be completely invisible to those nearby, just like the propulsion mechanism that Doc also obtained from the future. The movie's writers got it right: Doc's future devices are indeed in our future.
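A back-of-the-envelope comparison shows why full conversion would dwarf fission and fusion (the percentages below are standard textbook values, not figures from the essay):

C = 3.0e8  # speed of light, m/s

mass = 1e-3                 # one gram of matter, in kg
full_conversion = mass * C**2   # E = m c^2, all of the rest mass released (~9e13 J)
fission_fraction = 0.0009       # U-235 fission releases roughly 0.09% of the mass
fusion_fraction = 0.007         # hydrogen fusion releases roughly 0.7% of the mass

print(f"full conversion: {full_conversion:.1e} J")
print(f"fission:         {mass * fission_fraction * C**2:.1e} J")
print(f"fusion:          {mass * fusion_fraction * C**2:.1e} J")

Complete matter-to-energy conversion is thus more than a hundred times as efficient, per kilogram, as the best nuclear processes we use today.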

A quantum computer running an AI program, direct conversion of matter into energy, and the ultimate rocket that would allow the AIs and the human downloads to begin interstellar travel at near light speed all depend on the same physics, and should appear at about the same time in the future.

Provided we have the courage to develop the technology allowed by the known laws of physics. I have grave doubts that we will.

In order to have advances in physics and engineering, one must first have physicists and engineers. The number of students majoring in these subjects has dropped enormously in the quarter century that I have been a professor. Worse, the quality of the few students we do have has dropped precipitously. The next decade will see the retirement of Stephen Hawking, and others less well-known but of similar ability, but I know of no one of remotely equal creativity to replace them. Small wonder, given that the starting salary of a Wall Street lawyer fresh out of school is currently three times my own physicist's salary. As a result, most American engineers and physicists are now foreign born.

But can foreign countries continue to supply engineers and physicists? That is, will engineers and physicists be available in any country? The birth rate of the vast majority of the developed nations has been far below replacement level for a decade and more. This birth dearth also holds for China, due to their one-child policy, and remarkably is developing even in the Muslim and southern nations. We may not have enough people in the next twenty years to sustain the technology we already have, to say nothing of developing the technology allowed by the known laws of physics that I describe above.

The great Galileo scholar Giorgio de Santillana, who taught me history of science when I was an undergraduate at MIT in the late 1960's, wrote that Greek scientific development ended in the century or so before the Christian era because of a birth dearth and a simultaneous bureaucratization of intellectual inquiry. I fear we are seeing a repeat of this historical catastrophe today.

However, I remain cautiously optimistic that we will develop the ultimate technology described above, and transfer it with faltering hands to our ultimate successors, the AI's and the human downloads, who will be thus enabled to expand outward into interstellar space, engulf the universe, and live forever.

lawrence_m_krauss's picture
Theoretical Physicist; Foundation Professor, School of Earth and Space Exploration and Physics Department, ASU; Author, The Greatest Story Ever Told . . . So Far

"With Nuclear Weapons, everything has changed, save our way of thinking."  So said Albert Einstein, sixty three years ago, following the Hiroshima and Nagasaki bombings at the end of World War II.  Having been forced to choose a single game changer, I have turned away from the fascinating scientific developments I might like to see, and will instead focus on the one game changer that I will hopefully never directly witness, but nevertheless expect will occur during my lifetime:  the use of nuclear weapons against a civilian population.  Whether used by one government against the population of another, or by a terrorist group, the detonation of even a small nuclear explosive, similar in size, for example, to the one that destroyed hiroshima, would produce an impact on the economies, politics, and lifestyles of the first world in a way that would make the impact of 9/11 seem trivial.   I believe the danger of nuclear weapons use remains one of the biggest dangers of this century.  It is remarkable that we have gone over 60 years without their use, but the clock is ticking.  I fear that Einstein's admonition remains just as true today as it did then, and I that we are unlikely to go another half century with impunity, at least without confronting the need for a global program of disarmament that goes far beyond the present current Nuclear Non-Proliferation, and strategic arms treaties.

Following forty years of Mutually Assured Destruction, with the two Superpowers like two scorpions in a bottle, each held at bay by the certainty of the destruction that would occur at the first whiff of nuclear aggression on the part of the other, we have become complacent. Two generations have come to maturity in a world where nuclear weapons have not been used. The Nuclear Non-Proliferation Treaty has been largely ignored, not just by nascent nuclear states like North Korea, or India and Pakistan, or pre-nuclear wannabes like Iran. Together the United States and Russia possess 26,000 of the world's 27,000 known nuclear warheads. This in spite of the NPT's strict requirement for these countries to significantly reduce their arsenals. Each country has perhaps 1,000 warheads on hair-trigger alert. This in spite of the fact that there is no strategic utility at the current time associated with possessing so many nuclear weapons on alert.

Ultimately, what so concerned Einstein, and is of equal concern today, is the fact that first use of nuclear weapons cannot be justified on moral or strategic grounds. Nevertheless, it may surprise some people to learn that the United States has no strict anti-first-use policy. In fact, in its 2002 Nuclear Posture Review, the U.S. declared that nuclear weapons "provide credible military options to deter a wide range of threats" including "surprising military developments."  

And while we spend $10 billion a year on flawed ballistic missile defense systems against currently non-existent threats, the slow effort to disarm means that thousands of nuclear weapons remain in regions that are unstable and that could, in principle, be accessed by well-organized and well-financed terrorist groups. Yet we have not spent even a noticeable fraction of that missile-defense money on outfitting ports and airports to detect nuclear devices smuggled into this country in containers.

Will it take a nuclear detonation used against a civilian population to stir a change in thinking?  The havoc wreaked on what we now call the civilized world, no matter where a nuclear confrontation takes place, would be orders of magnitude greater than that which we have experienced since the Second World War.   Moreover, as recent calculations have demonstrated, even a limited nuclear exchange between, say India and Pakistan, could have a significant global impact for almost a decade on world climates and growing seasons.  

I sincerely hope that whatever initiates a global realization that the existence of large nuclear stockpiles throughout the world is a threat to everyone on the planet, changing the current blind business-as-usual mentality permeating global strategic planning, does not result from a nuclear tragedy.  But physics has taught me that the world is the way it is whether we like it or not.  And my gut tells me that to continue to ignore the likelihood that a game changer that exceeds our worst nightmares will occur in this century is merely one way to encourage that possibility.

michael_shermer's picture
Publisher, Skeptic magazine; Monthly Columnist, Scientific American; Presidential Fellow, Chapman University; Author, Heavens on Earth

It is January, named for the Roman God Janus (Latin for door), the doorway to the new year, and yet Janus-faced in looking to the past to forecast the future. This January, 2009, in particular, finds us at a crisis tipping point both economically and environmentally. If ever we needed to look to the past to save our future it is now. In particular, we need to do two things: (1) stop the implosion of the economy and enable markets to function once again both freely and fairly, and (2) make the transition from nonrenewable fossil fuels as the primary source of our energy to renewable energy sources that will allow us to flourish into the future. Failure to make these transformations will doom us to the endless tribal political machinations and economic conflicts that have plagued civilization for millennia. We need to make the transition to Civilization 1.0. Let me explain.

In a 1964 article on searching for extraterrestrial civilizations, the Soviet astronomer Nikolai Kardashev suggested using radio telescopes to detect energy signals from other solar systems in which there might be civilizations of three levels of advancement: Type 1 can harness all of the energy of its home planet; Type 2 can harvest all of the power of its sun; and Type 3 can master the energy from its entire galaxy.

Based on our energy efficiency at the time, in 1973 the astronomer Carl Sagan estimated that Earth represented a Type 0.7 civilization on a Type 0 to Type 1 scale. (More current assessments put us at 0.72.) As the Kardashevian scale is logarithmic — where any increase in power consumption requires a huge leap in power production — fossil fuels won't get us there. Renewable sources such as solar, wind and geothermal are a good start, and, coupled to nuclear power (perhaps even nuclear fusion instead of the fission reactors we have now), could eventually get us to Civilization 1.0.
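For the curious, Sagan's interpolation formula makes the logarithmic scale concrete (a standard formula, not spelled out in the essay; the power figures below are rough, assumed values):

from math import log10

# Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10,
# with P the civilization's power use in watts; Type 1 sits near 10^16 W.
def kardashev(power_watts: float) -> float:
    return (log10(power_watts) - 6) / 10

print(round(kardashev(1e13), 2))    # 0.70 -- roughly Sagan's 1973 estimate
print(round(kardashev(1.8e13), 2))  # 0.73 -- near the 0.72 quoted above
print(round(kardashev(1e16), 2))    # 1.0  -- a full Type 1 civilization

Because the scale is logarithmic, moving from 0.72 to 1.0 means multiplying our power production by roughly a factor of a thousand, which is why the choice of energy source matters so much.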

We are close. Taking our Janus-faced look to the past in order to see the future, let’s quickly review the history of humanity on its climb to become a Civilization 1.0:

Type 0.1: Fluid groups of hominids living in Africa. Technology consists of primitive stone tools. Intra-group conflicts are resolved through dominance hierarchy, and between-group violence is common.

Type 0.2: Bands of roaming hunter-gatherers that form kinship groups, with a mostly horizontal political system and egalitarian economy.

Type 0.3: Tribes of individuals linked through kinship but with a more settled and agrarian lifestyle. The beginnings of a political hierarchy and a primitive economic division of labor.

Type 0.4: Chiefdoms consisting of a coalition of tribes into a single hierarchical political unit with a dominant leader at the top, and with the beginnings of significant economic inequalities and a division of labor in which lower-class members produce food and other products consumed by non-producing upper-class members.

Type 0.5: The state as a political coalition with jurisdiction over a well-defined geographical territory and its corresponding inhabitants, with a mercantile economy that seeks a favorable balance of trade in a win-lose game against other states.

Type 0.6: Empires extend their control over peoples who are not culturally, ethnically or geographically within their normal jurisdiction, with a goal of economic dominance over rival empires.

Type 0.7: Democracies that divide power over several institutions, which are run by elected officials voted for by some citizens. The beginnings of a market economy.

Type 0.8: Liberal democracies that give the vote to all citizens. Markets that begin to embrace a nonzero, win-win economic game through free trade with other states.

Type 0.9: Democratic capitalism, the blending of liberal democracy and free markets, now spreading across the globe through democratic movements in developing nations and broad trading blocs such as the European Union.

Type 1.0: Globalism that includes worldwide wireless Internet access with all knowledge digitized and available to everyone. A global economy with free markets in which anyone can trade with anyone else without interference from states or governments. A planet where all states are democracies in which everyone has the franchise.

Looking from this past toward the future, we can see that the forces at work that could prevent us from reaching Civilization 1.0 are primarily political and economic, not technological. The resistance by nondemocratic states to turning power over to the people is considerable, especially in theocracies whose leaders would prefer we all revert to Type 0.4 chiefdoms. The opposition to a global economy is substantial, even in the industrialized West, where economic tribalism still dominates the thinking of most people.

The game-changing scientific idea is the combination of energy and economics — the development of renewable energy sources made cheap and available to everyone everywhere on the planet by allowing anyone to trade in these game-changing technologies with anyone else. That will change everything.

charles_seife's picture
Professor of Journalism, New York University; Former Journalist, Science Magazine; Author, Hawking Hawking

For the first time, humans are within reach of a form of immortality. Just a few years ago, we had to be content with archiving a mere handful of events in our lives—storing what we could in a few faded photographs of a day at the zoo, a handful of manuscript pages, a jittery video of an anniversary, or a family legend that gets passed down for three or four generations. All else, all of our memory and knowledge, melts away when we die.

That era is over. It's now within your means to record, in real time, audio and video of your entire existence. A tiny camera and microphone could wirelessly transmit and store everything that you hear and see for the rest of your life. It would take only a few thousand terabytes of hard-drive space to archive a human's entire audiovisual experience from cradle to grave.
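A quick back-of-the-envelope calculation (my assumed bitrate, not the author's) shows the claim is the right order of magnitude:

# Storage needed to record one life continuously at a modest compressed bitrate.
BITRATE_MBPS = 2                       # assumed combined audio+video bitrate
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIFESPAN_YEARS = 85

bytes_total = BITRATE_MBPS * 1e6 / 8 * SECONDS_PER_YEAR * LIFESPAN_YEARS
print(f"{bytes_total / 1e12:.0f} TB")  # ~670 TB at 2 Mbps; a few thousand TB
                                       # if you record at a higher bitrate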

Cheap digital memory has already begun to alter our society, at least on a small scale. CDs have become just as quaint as LPs; now, you can carry your entire music collection on a device the size of a credit card. Photographers no longer have to carry bandoliers full of film rolls. Vast databases, once confined to rooms full of spinning magnetic tapes, now wander freely about the world every time a careless government employee misplaces his laptop. Google is busy trying to snaffle up all the world's literature and convert it into a digital format: a task that, astonishingly, now has more legal hurdles than technical ones.

Much more important, though, is that vast amounts of digital memory will change the relationship that humans have with information. For most of our existence, our ability to store and relay knowledge has been very limited. Every time we figured out a better way to preserve and transmit data to our peers and to our descendants—as we moved from oral history to written language to the printing press to the computer age—our civilization took a great leap. Now we are reaching the point where we have the ability to archive every message, every telephone conversation, every communication between human beings anywhere on the planet. For the first time, we as a species have the ability to remember everything that ever happens to us. For millennia, we were starving for information to act as raw material for ideas. Now, we are about to have a surfeit.

Alas, there will be famine in the midst of all that plenty. There are some hundred million blogs, and the number is roughly doubling every year. The vast majority are unreadable. Several hundred billion e-mail messages are sent every day; most of them—current estimates run around 70%—are spam. There seems to be a Malthusian principle at work: information grows exponentially, but useful information grows only linearly. Noise will drown out signal. The moment that we, as a species, finally have the memory to store our every thought, etch our every experience into a digital medium, it will be hard to avoid slipping into a Borgesian nightmare where we are engulfed by our own mental refuse.
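A toy calculation makes the Malthusian point vivid (the growth rates are illustrative assumptions, nothing more):

# Total information doubling yearly, useful information growing by a fixed
# amount each year: the signal fraction collapses toward zero.
useful, total = 1.0, 1.0
for year in range(1, 11):
    total *= 2     # exponential growth of everything recorded
    useful += 1    # linear growth of what is worth reading
    print(year, f"signal fraction = {useful / total:.4f}")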

We are at the brink of a colossal change: our knowledge is now being limited not only by our ability to gather information and to remember it, but also by our wisdom about when to ignore information—and when to forget.

paul_j_steinhardt's picture
Albert Einstein Professor in Science, Departments of Physics and Astrophysical Sciences, Princeton University; Coauthor, Endless Universe

One of the sacred principles of physics is that information is never lost. It can be scrambled, encrypted, dissipated and shredded, but never lost. This tenet underlies the second law of thermodynamics and a concept called "unitarity," an essential component of unified theories of particles and forces. Discovering a counterexample or new ways to preserve information could be a real game-changer: one that alters our understanding of the fundamental laws of nature, transforms our concept of space and time, triggers a reconstruction of the history of the universe and leads to new prognostications about its future.

There is a real chance of breakthrough in the foreseeable future as theorists converge on one of the greatest threats to information preservation: black holes. According to Einstein's general theory of relativity, a black hole forms when matter is so concentrated that nothing, not even light, can escape its gravitational field. Any information that passes through the event horizon surrounding the black hole—the "point of no return"—is lost forever to the outside world. Suppose, for example, that Bob pilots a spaceship into the black hole carrying along three books of his choice. It appears that the titles and contents of the three books vanish. Either that or Einstein's general theory of relativity is wrong.

There is nothing shocking about having to correct Einstein's general theory of relativity. It's known to be missing an essential element, quantum physics. Einstein, and generations of theorists since, have sought an improved theory of gravity that incorporates quantum physics in a way that is mathematically and physically consistent. String theory and loop quantum gravity are the most recent attempts.

There is no doubt that quantum physics alters the event horizon and the evolution of a black hole in a fundamental way, as first pointed out in the work of Jacob Bekenstein, Gary Gibbons and Stephen Hawking in the 1970s. According to quantum physics, matter and energy are composed of discrete chunks known as quanta (such as electrons, quarks and photons) whose positions and velocities undergo constant random fluctuations. Even empty space—a pure vacuum—is seething with microscopic fluctuations that create and annihilate pairs of quanta and anti-quanta. The seething vacuum just outside the event horizon occasionally produces a pair of quanta, such as an electron-positron duo, in which one escapes and one falls into the black hole. From afar, it appears that the black hole radiates a particle. This phenomenon repeats continuously, producing a spectrum of particles known as "Hawking radiation," whose properties are similar to the "thermal radiation" emitted by a hot body. Very slowly, the black hole radiates away energy and shrinks in mass and size until—well, here is where the story really begins to get interesting.
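For reference, the standard formulas from that 1970s work (not quoted in the essay) for the temperature and entropy of a black hole of mass M and horizon area A are

\[
T_H = \frac{\hbar c^3}{8\pi G k_B M}, \qquad S_{BH} = \frac{k_B c^3 A}{4 G \hbar},
\]

so the temperature rises as the hole loses mass, and the evaporation accelerates as the black hole shrinks.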

Thermal radiation only depends on the temperature of the emitting body, providing no other details about the body itself. So, if Hawking radiation is truly thermal, then the information inside the black hole is truly lost. For the last decade, though, leading physicists including Gerard 't Hooft, Leonard Susskind, and Stephen Hawking fiercely debated (and even bet on) the outcome—Susskind refers to the debate as the "black hole war." Aided by new theoretical tools developed by Juan Maldacena and other string theorists, physicists discovered that Hawking radiation is not quite thermal after all! The radiation deviates by a tiny amount from a perfectly thermal signal, and the tiny deviation incorporates information about whatever was inside. The titles of Bob's three books, for example, are not lost forever, although the information dribbles out incredibly slowly and is unimaginably scrambled. Thus, victory was declared in the black hole war.

But it may be an uneasy peace, for there remains the question of what happens to information after it falls into the horizon. This is a reasonable question because, curiously enough, passage through the horizon can be unremarkable (if the black hole is very big). There are no signposts indicating to Bob that he has passed the point of no return, and his books remain intact. Now suppose Bob scribbles some notes in the margins of his book. What happens to this information?

Here there is a diversity of views. Some suggest that this information, too, is radiated away through the Hawking process and the black hole simply disappears. Some suggest that quantum physics makes the event horizon penetrable so that some information is radiated by the Hawking process but some escapes directly. Yet others suggest that the information is copied; one copy is radiated away and the other strikes the singularity, entering a new section of space-time that is causally disconnected from observers outside the black hole, so the two copies never meet.

Theorists have recently developed a number of new theoretical tools to attack the problem and are hard at work. Although the subject lies in the domain of quantum gravity, the implications for other fields, including my own, cosmology, will be profound. The answer will shape any future formulation of the laws of thermodynamics, quantum gravity and unified field theory. Since the scrambling of information, a.k.a. entropy, determines the arrow of time, the results may inform us about how time first emerged at the cosmic singularity known as the big bang. Or, if it proves possible for copies to bounce from the black hole singularity to a separate piece of space-time, the same may apply to an even more famous singularity, the big bang. This would lend support to recent ideas suggesting that the large-scale properties of the universe were shaped by events before the big bang and that these conditions (a form of information) were transmitted across the cosmic singularity into a new phase of expansion. In fact, if information is forever preserved across singularities, the universe may undergo regularly repeating cycles of big bangs, expansion, and big crunches, forever into the past and into the future. To me, a breakthrough with these kinds of implications would be the ultimate game-changer.

terrence_j_sejnowski's picture
Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain

Scientific ideas change when new instruments are developed that detect something new about nature. Electron microscopes, radio telescopes, and patch recordings from single ion channels have all led to game-changing discoveries.

We are in the midst of a technological revolution in computing that has been unfolding since 1950 and is having a profound impact on all areas of science and technology. As computing power doubles every 18 months according to Moore's Law, unprecedented levels of data collection, storage and analysis have revolutionized many areas of science.
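To put the quoted doubling time in perspective, a line of arithmetic (illustrative only) shows what it compounds to:

# Compound growth implied by a doubling every 18 months.
def growth_factor(years: float, doubling_time_years: float = 1.5) -> float:
    return 2 ** (years / doubling_time_years)

print(f"{growth_factor(10):,.0f}x per decade")        # ~100x
print(f"{growth_factor(30):,.0f}x over three decades")  # ~1,000,000x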

For example, optical microscopy is undergoing a renaissance as computers have made it possible to localize single molecules with nanometer precision and image the extraordinarily complex molecular organization inside cells. This has become possible because computers allow beams to be formed and photons collected over long stretches of time, perfectly preserved and processed into synthetic pictures. High-resolution movies are revealing the dynamics of macromolecular structures and molecular interactions for the first time.

In trying to understand brain function we have until recently relied on microelectrode technology that limited us to recording from one neuron at a time. Coupled with advances in molecular labels and reporters, new two-photon microscopes guided by computers will soon make it possible to image the electrical activity and chemical reactions occurring inside millions of neurons simultaneously. This will realize Sherrington's dream of seeing brain activity as an "enchanted loom where millions of flashing shuttles weave a dissolving pattern, always a meaningful pattern though never an abiding one; a shifting harmony of subpatterns."

By 2015 computer power will begin to approach the neural computation that occurs in brains. This does not mean we will be able to understand it, only that we can begin to approach the complexity of a brain on its own terms. Coupled with advances in large-scale recordings from neurons we should by then be in a position to crack many of the brain's mysteries, such as how we learn and where memories reside. However, I would not expect a computer model of human level intelligence to emerge from these studies without other breakthroughs that cannot be predicted.

Computers have become the new microscopes, allowing us to see behind the curtains. Without computers none of this would be possible, at least not in my lifetime.

stephen_schneider's picture
Climatologist; Professor, Department of Biological Sciences, Stanford University

Scientists have been talking about the risks of human induced climate changes for decades now in front of places like Congress, scientific conventions, media events, corporate board rooms, and at visible cultural extravaganzas like Live Earth. Yet, a half century after serious scientific concerns surfaced, the world is still far from a meaningful deal to implement actions to curb the threats by controlling the offending emissions.

The reason is obvious: controlling the basic activities that brought us our prosperity—burning fossil fuels—is not going to be embraced by those who benefit from using the atmosphere as a free dump for their tailpipe and smokestack effluents, nor will developing economies like China and India easily give up the techniques we used to get rich because of some threat perceived as distant and not yet certain. To be sure, there is real action at local, state, national and international levels, but a game-changing global deal is still far from likely. Documented impacts like the loss of the Inuit hunting culture, the threat to small island states' survival in the face of inexorable sea level rise, threats of species extinction in critical places like mountaintops, or a fivefold increase in wildfires in the US West since 1970 have not been game changing—yet. What might change the game?

In order to give up something—the traditional pathway to wealth, burning coal, oil and gas—nations will have to viscerally perceive they are getting something—protection from unacceptably severe impacts. The latter has been difficult to achieve because most scientific assessments are honest that, along with many credible and major risks, many uncertainties remain.

We cannot pin down whether sea levels will rise a few feet or a few meters in the next century or two—the former is nasty but relatively manageable with adaptation investments, the latter would mean abandoning coastline installations or cultures where a sizeable chunk of humanity lives and works. If we could show scientifically that such a threat was likely, it would be game changing in terms of motivating the kinds of compromises required to achieve the actions needed that are currently politically difficult to achieve.

This is where the potential for up to 7 meters of sea level rise stored as ice on Greenland comes in to tip us toward meaningful action. Already Greenland is apparently melting at an unprecedented rate, far faster than any of our theories or models predicted. It can be—and has been—argued that this is just a short-term fluctuation, since large changes in ice volume typically come and go on millennial timescales—though mounting evidence from ice cores suggests that unprecedented melting is indeed going on right now. Another decade or two of such scientifically documented acceleration of melting could indeed imply we will get the unlucky outcome: meters of sea level rise within the lifetimes of human infrastructure for ports and cities—to say nothing of vulnerable natural places like coastal wetlands.
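The 7-meter figure itself is easy to check on the back of an envelope, using standard reference values for the ice sheet and the ocean (my numbers, not the essay's):

ICE_VOLUME_KM3 = 2.9e6   # approximate volume of the Greenland ice sheet
OCEAN_AREA_KM2 = 3.6e8   # approximate surface area of the world ocean
ICE_TO_WATER = 0.9       # ice is about 90% as dense as the water it becomes

rise_m = ICE_VOLUME_KM3 * ICE_TO_WATER / OCEAN_AREA_KM2 * 1000
print(f"{rise_m:.1f} m of potential sea level rise")  # roughly 7 m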

Unfortunately, the longer we wait for more confident "proof" of game-changing melt rates in Greenland (or West Antarctica as well, where another 5 meters of potential sea level rise lurks), the higher the risk of passing a tipping point at which the melting becomes an unstoppable, self-driven process. That game-changing occurrence would force an unprecedented retreat from the sea, a major abandonment or rebuilding of coastal civilization, and the loss of coastal wetlands. This is a gamble with "Laboratory Earth" that we can't afford to lose.

daniel_l_everett's picture
Linguistic Researcher; Dean of Arts and Sciences, Bentley University; Author, How Language Began

"We should really not be studying sentences; we should not be studying language — we should be studying people" Victor Yngve

Communication is the key to cooperation. Although cross-cultural communication for the masses requires translation techniques that exceed our current capabilities, the groundwork of this technology has already been laid and many of us will live to see a revolution in automatic translation that will change everything about cooperation and communication across the world.

This goal was conceived in the late 1940s in a famous memorandum by Rockefeller Foundation scientist, Warren Weaver, in which he suggested the possibility of machine translation and tied its likelihood to four proposals, still controversial today: that there was a common logic to languages; that there were likely to be language universals; that immediate context could be understood and linked to translation of individual sentences; and that cryptographic methods developed in World War II would apply to language translation. Weaver's proposals got off the ground financially in the early 1950s as the US military invested heavily in linguistics and machine translation across the US, with particular emphasis on the research of the team of Victor Yngve at the Massachusetts Institute of Technology's Research Laboratory of Electronics (a team that included the young Noam Chomsky).

Yngve, like Weaver, wanted to contribute to international understanding by applying the methods of the then incipient field that he helped found, computational linguistics, to communication, especially machine translation. Early innovators in this area also included Claude Shannon at Bell Labs and Yehoshua Bar-Hillel who preceded Yngve at MIT before returning to Israel. Shannon was arguably the inventor of the concept of information as an entity that could be scientifically studied and Bar-Hillel was the first person to work full-time on machine translation, beginning the program that Yngve inherited at MIT.

This project was challenged early on, however, by the work of Chomsky, from within Yngve's own lab. Chomsky's conclusions about different grammar types and their relative generative power convinced people that grammars of natural languages were not amenable to machine translation efforts as they were practiced at the time, leading to a slowdown in and reduction of enthusiasm for computationally-based translation.

As we have subsequently learned, however, the principal problem faced in machine-translation is not the formalization of grammar per se, but the inability of any formalization known, including Chomsky's, to integrate context and culture (semantics and pragmatics in particular) into a model of language appropriate for translation. Without this integration, mechanical translation from one language to another is not possible.

Still, mechanical procedures able to translate most contents from any source language into accurate, idiomatically natural constructions of any target language seem less utopian to us now because of major breakthroughs that have led to several programs in machine translation (e.g. the Language Technologies Institute at Carnegie Mellon University). I believe that we will see within our lifetime the convergence of developments in artificial intelligence, knowledge representation, statistical grammar theories, and an emerging field — computational anthropology (informatic-based analysis and modeling of cultural values) — that will facilitate powerful new forms of machine translation to match the dreams of early pioneers of computation.

The conceptual breakthroughs necessary for universal machine translation will also require contributions from Construction Grammars, which view language as a set of conventional signs (varieties of the idea that the building blocks of grammar are not rules or formal constraints, but conventional phrase and word forms that combine cultural values and grammatical principles), rather than a list of formal properties. They will have to look at differences in the encoding of language and culture across communities, rather than trying to find a 'universal grammar' that unites all languages.

At least some of the steps are easy enough to imagine. First, we come up with a standard format for writing statistically-based Construction Grammars of any language, a format that displays the connections between constructions, culture, and local context (such as the other likely words in the sentence or other likely sentences in the paragraph in which the construction appears). This format might be as simple as a flowchart or a list. Second, we develop a method for encoding context and values. For example, what are the values associated with words; what are the values associated with certain idioms; what are the values associated with the ways in which ideas are expressed? The latter can be seen in the notion of sentence complexity, for example, as in the rejection by the Pirahã of the Amazon (among others) of recursive structures in syntax because they violate principles of information rate and of new versus old information in utterances that are very important in Pirahã culture. Third, we establish lists of cultural values and most common contexts and how these link to individual constructions. Automating the procedure for discovering or enumerating these links will take us to the threshold of automatic translation in the original sense.
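As a purely illustrative sketch (the field names are my own invention, not an existing standard), a single entry in such a format might record a construction's form, its likely local context, and the cultural values it carries:

from dataclasses import dataclass, field

@dataclass
class ConstructionEntry:
    form: str                     # conventional phrase pattern
    gloss: str                    # rough meaning
    likely_context: list[str] = field(default_factory=list)          # co-occurring words or topics
    cultural_values: dict[str, float] = field(default_factory=dict)  # value -> weight

entry = ConstructionEntry(
    form="X won't hurt",
    gloss="understated endorsement of X",
    likely_context=["progress", "cooperation"],
    cultural_values={"indirectness": 0.8, "modesty": 0.6},
)
print(entry.form, entry.cultural_values)

A translation engine could then match entries by form, weigh them by context, and choose target-language constructions carrying comparable values, rather than translating word by word.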

Information and its exchange form the soul of human cultures. So just imagine the possible change in our perceptions of 'others' when we are able to type in a story and have it automatically and idiomatically translated with 100% accuracy into any language for which we have a grammar of constructions. Imagine speaking into a microphone and having your words come out in the language of your audience, heard and understood naturally. Imagine anyone being able to take a course in any language from any university in the world over the internet or in person, without having to first learn the language of the instructor.

These will always be unreachable goals to some degree. It seems unlikely, for example, that all grammars and cultures are even capable of expressing everything from all languages. However, we are developing tools that will dramatically narrow the gaps and help us decide where and how we can communicate particular ideas cross-culturally. Success at machine translation might not end all the world's sociocultural or political tensions, but it won't hurt. One struggles to think of a greater contribution to world cooperation than progress to universal communication, enabling all and sundry to communicate with nearly all and sundry. Babel means 'the gate of god'. In the Bible it is about the origin of world competition and suspicion. As humans approached the entrance to divine power by means of their universal cooperation via universal communication, so the biblical story goes, language diversity was introduced to destroy our unity and deprive us of our full potential.

But automated, near-universal translation is coming. And it will change everything.

gino_segre's picture
Professor of Physics & Astronomy, University of Pennsylvania; Author, The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age

Einstein’s Theory of General Relativity, first presented in the fall of 1915, and his earlier Special Theory of Relativity have changed very little of our day to day world, but they have radically altered the way we think about both space and time and have also launched the modern theory of cosmology. If in the near future we discover additional space-time dimensions we will undergo a shift in our perceptions every bit as radical as the one experienced almost a hundred years ago.

Though proof of their existence would necessarily alter our view of the Universe, there is also a way in which our psyches would be changed. I believe we would gain a new confidence that great almost unimaginable phenomena are yet to be discovered. It would also make us realize once again the power that lies in a few simple equations, in the tools we can build to test them and in the human imagination.

At the November 6, 1919 joint meeting of the Royal Society and the Royal Astronomical Society, Sir Frank Watson Dyson reported on the observations of starlight made during the previous May’s solar eclipse. “After a careful study of the plates I am prepared to say that they confirm Einstein’s prediction. A very definite result had been obtained, that light is deflected in accordance with Einstein’s law of gravitation.” Sir John Joseph Thomson, presiding, afterwards called the result “one of the highest achievements of human thought.” It was a triumphant moment for both theoretical physics and observational astronomy.

A few years after the momentous Royal Society meeting, a German and a Swedish physicist, Theodor Kaluza and Oskar Klein, reached a striking conclusion. They noticed that the equations of general relativity, when solved in five rather than four dimensions, led to additional solutions that were identical to the well-known Maxwell equations of electromagnetism. Since the apparent fifth dimension had not, and still has not, been observed, a necessary additional postulate for this theory to correspond to possible reality was that the fifth dimension was curled up so tightly that any motion in its direction had not been detected.

Einstein, finding this extension of his General Theory of Relativity extraordinarily attractive, tried more than once, without success, to make it part of his lifelong dream of a unified field theory of interactions. But this direction of research fell into relative disfavor during the first post World War II decades during which theoretical physics turned its attention to other matters. It returned with a vengeance during the late 1970s, gaining momentum in the 1980s as physicists began to seriously examine theories that could unite all fundamental interactions into one comprehensive scheme. The rising popularity of superstring theory, mathematically consistent only if additional space-time dimensions are present, has provided the decisive impetus for such considerations.

There are striking differences from the 1915 situation, most particularly the lack of a clear test for the detection of extra dimensions. The novel theories now in fashion do predict that additional particles must be present in nature because of these extensions of space and time, but since the mass of these particles is related to the unknown scale of the extra dimensions, it also remains unknown. Roughly speaking, the smaller the one, the larger the other. Nevertheless the hunt has begun; we are beginning to see in the literature publications from major laboratories with titles such as “Search for Gamma Rays from the Lightest Kaluza-Klein Particle”, that being the name frequently given to the as yet undiscovered particles associated with extra dimensions.
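The inverse relation just alluded to ("the smaller the one, the larger the other") is, in the simplest picture of a single extra dimension curled into a circle of radius R, the statement that momentum around the circle is quantized, producing a tower of particles with masses

\[
m_n \approx \frac{n \hbar}{R c}, \qquad n = 1, 2, 3, \ldots
\]

so a smaller compactification radius means heavier Kaluza-Klein particles, and vice versa.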

These searches are largely motivated by the desire to identify Dark Matter, estimated to be several times more plentiful in our Universe’s makeup than all known species of matter. Kaluza-Klein particles are one possible candidate, perhaps hard to distinguish from other candidates even if found. Challenges abound, but the stakes are very high as well.

mark_pagel's picture
Professor of Evolutionary Biology, Reading University, UK; Fellow, Royal Society; Author, Wired for Culture

We all develop from a single cell known as a zygote. This zygote divides and becomes two cells, then four, eight and so on. At first, most of the cells are alike, but as this division goes on something wondrous occurs: the cells begin to commit themselves to adopting different fates as eyes or ears, or livers or kidneys, or brains and blood cells. Eventually they produce a body of immense and unimaginable complexity, making things like supercomputers and space shuttles look like Lego toys. No one knows how they do it. No one is there to tell the cells how to behave, there is no homunculus directing cellular traffic, and no template to work to. It just happens.

If scientists could figure out how cells enact this miracle of development they could produce phenotypes—the outward form of our bodies—at will and from scratch, or at least from a zygote. This, or something close to it, will happen in our lifetimes. When we perfect it—and we are well on the way—we will be able to recreate ourselves, even redefine the nature of our lives.

The problem is that development isn't just a matter of finding a cell and getting it to grow and divide. As our cells differentiate into our various body parts they lose what is known as their 'potency': they forget how to go back to their earlier states where, like the zygote, all fates are possible. When we cut ourselves, the skin nearby knows how to grow back, erasing all or most of the damage. But we can only do this on a very local scale. If you cut off your arm, it does not grow back. What scientists are learning, bit by bit, is how to reverse cells back to their earlier potent states, how to re-program them so that they could replace a limb.

Every year brings new discoveries and new successes. Cloning is one of the more visible. At the moment most cloning is a bit of a cheat, achieved by taking special cells from an adult animal's body that still retain some of their potency. But this will change as cell re-programming becomes possible, and the consequences could be alarming. Someone might be able to clone you by collecting a bit of your hair or other cells left behind when you touch something or sit somewhere. Why someone would want to do this—and wait for you to grow up—might limit this in practice but it could happen. You could become your own "father" or at least a very grown up twin.

More in the realm of the everyday and of real consequence is that once we can re-program cells, whole areas of science and medicine, including aging, injury and disease, will vanish or become unimportant. All of the contentious work on 'embryonic stem cells' that regularly features in debates about whether it is moral to use embryos in research exists solely because scientists want a source of 'totipotent' cells, cells that haven't committed themselves to a fate. Embryos are full of them. Scientists aren't interested in embryonic stem cells per se; they simply want totipotent cells. Once scientists acquire the ability to return cells to their totipotent state, or even what is known as a 'multi-potent' state—a cell that is not quite yet fully committed—all this stem cell research will become unnecessary. This could happen within a decade.

School children learn that some lizards and crabs can re-grow limbs. What they are not taught is that this is because their cells retain multi- or even toti-potency. Because ours don't, this makes car crashes, ski accidents, gun shot wounds and growing old a nuisance. But once we unlock the door of development, we will be able to re-grow our limbs, heal our wounds and much more. Scientists will for once make the science-fiction writers look dull. The limbs (and organs, nerves, body parts, etc) that we re-grow will be real, making those bionic things like Anakin Skywalker gets fitted with after a light-sabre accident seem primitive. This will make transplants obsolete or just temporary, and things like heart disease will be treatable by growing new hearts. Nerve damage and paralysis will be reversible and some brain diseases will become treatable. Some of these things are already happening as scientists inch-by-inch figure out how to re-program cells.

If these developments are not life-changing enough, they will, in the longer term, usher in a new era in which our minds, the thing that we think of as "us", can become separated from our body, or nearly separated anyway. I don't suggest we will be able to transplant our mind to another body, but we will be able to introduce new body parts into existing bodies with a resident mind. With enough such replacements, we will become potentially immortal: like ancient buildings that exist only because over the centuries each of their many stones has been replaced. An intriguing aspect of re-programming cells is that they can be induced to 'forget' how old they are. Aging will become a thing of the past if you can afford enough new pieces. We will then discover the extent to which our minds arise from perceptions of our bodies and the passage of time. If you give an old person the body of a teenager, do they start to behave and think like one? Who knows, but it will be game-changing to find out.

helen_fisher's picture
Biological Anthropologist, Rutgers University; Author, Why Him? Why Her? How to Find and Keep Lasting Love

"Mind is primarily a verb," wrote philosopher John Dewey. Every time we do or think or feel anything the brain is doing something. But what? And can we use what scientists are learning about these neural gymnastics to get what we want? I think we can and we will, in my life time, due to some mind—bending developments in contemporary neuroscience. Brain scanning; genetic studies; antidepressant drug use; estrogen replacement therapy; testosterone patches; L-dopa and newer drugs to prevent or retard brain diseases; recreational drugs; sex change patients; gene doping by athletes: all these and other developments are giving us data on how the mind works—and opening new avenues to use brain chemistry to change who we are and what we want. As the field of epigenetics takes on speed, we are also beginning to understand how the environment affects brain systems, even turns genes on and off—further enabling us (and others) to adjust brain chemistry, affecting who we are, how we feel and what we think we need.

But is this new? Our forebears have been manipulating brain chemistry for millions of years. Take "hooking up," the current version of the "one night stand," one of humankind's oldest forms of chemical persuasion. During sex, stimulation of the genitals escalates activity in the dopamine system, the neurotransmitter network that my colleagues and I have found to be associated with feelings of romantic love. And with orgasm you experience a flood of oxytocin and vasopressin, neurochemicals associated with feelings of attachment. Casual sex isn't always casual. And I suspect our ancestors seduced their peers to (unconsciously) alter their brain chemistry, thereby nudging "him" or "her" toward feelings of passion and/or attachment. Indeed, this chemical persuasion works. In a recent study of 507 college students, anthropologist Justin Garcia found that 50% of women and 52% of men hopped into bed with an acquaintance or a stranger in hopes of starting a longer relationship. And about one third of these hook ups turned into romance.

In 1957 Vance Packard wrote The Hidden Persuaders to unmask the subtle psychological techniques that advertisers use to manipulate people's feelings and induce them to buy. We have long been using psychology to persuade others' minds. But now we are learning why our psychological strategies work. Holding hands, for example, generates feelings of trust, in part, because it triggers oxytocin activity. As you see another person laugh, you naturally mimic him or her, moving muscles in your face that trigger nerves to alter your neurochemistry so that you feel happy too. That's one reason why we feel good when we are around happy people. "Mirror neurons" also enable us to feel what another feels. Novelty drives up dopamine activity to make you more susceptible to romantic love. The placebo effect is real. And wet kissing transfers testosterone in the saliva, helping to stimulate lust.

The black box of our humanity, the brain, is inching open. And as we peer inside for the first time in human time, you and I will hold the biological codes that direct our deepest wants and feelings. We have begun to use these codes too. I, for example, often tell people that if they want to ignite or sustain feelings of romantic love in a relationship, they should do novel and exciting things together—to trigger or sustain dopamine activity. Some 100 million prescriptions for antidepressants are written annually in the United States. And daily many alter who we are in other chemical ways. As scientists learn more about the chemistry of trust, empathy, forgiveness, generosity, disgust, calm, love, belief, wanting and myriad other complex emotions, motivations and cognitions, even more of us will begin to use this new arsenal of weapons to manipulate ourselves and others. And as more people around the world use these hidden persuaders, one by one we may subtly change everything.

aubrey_de_grey's picture
Gerontologist; Chief Science Officer, SENS Foundation; Author, Ending Aging

Since I think I have a fair chance of living long enough to see the defeat of aging, it follows that I expect to live long enough to see many momentous scientific and technological developments. Does one such event stand out? Yes and no.

You don't have to be a futurophile, these days, to have heard of "the Singularity". What was once viewed as an oversimplistic extrapolation has now become mainstream: it is almost heterodox in technologically sophisticated circles not to take the view that technological progress will accelerate within the next few decades to a rate that, if not actually infinite, will so far exceed our imagination that it is fruitless to attempt to predict what life will be like thereafter.

Which technologies will dominate this march? Surveying the torrent of literature on this topic, we can with reasonable confidence identify three major areas: software, hardware and wetware. Artificial intelligence researchers will, numerous experts attest, probably build systems that are "recursively self-improving"—that understand their own workings well enough to design improvements to themselves, thereby bootstrapping to a state of ever more unimaginable intellectual performance.

On the hardware side, it is now widely accepted as technically feasible to build structures in which every atom is exactly where we wish it to be. The positioning of each atom will be painstaking, so one might view this as of purely academic interest—if not for the prospect of machines that can build copies of themselves. Such "assemblers" have yet to be completely designed, let alone built, but cellular automata research indicates that the smallest possible assembler is probably quite simple and small. The advent of such devices would rather thoroughly remove the barrier to practicability that arises from the time it takes to place each atom: exponentially accelerating parallelism is not to be sneezed at.

And finally, when it comes to biology, the development of regenerative medicine to a level of comprehensiveness that can give a few extra decades of healthy life to those who are already in middle age will herald a similarly accelerating sequence of refinements—not necessarily accelerating in terms of the rate at which such therapies are improved, but in the rate at which they diminish our risk of succumbing to aging at any age, as I've described using the concept of "longevity escape velocity".

I don't single out one of these areas as dominant. They're all likely to happen, but all have some way to go before their tipping point, so the timeframe for their emergence is highly speculative. Moreover, each of them will hasten the others: superintelligent computers will advance all technological development, molecular machines will surpass enzymes in their medical versatility, and the defeat of our oldest and most implacable foe (aging) will raise our sights to the point where we will pursue other transformative technologies seriously as a society, rather than leaving them to a few rare visionaries. Thus, any of the three—if they don't just wipe us all out, but unlike Martin Rees I personally think that is unlikely—could be "the one".

Or... none of them. And this is where I return to the Singularity. I'll get to human nature soon, fear not.

When I discuss longevity escape velocity, I am fond of highlighting the history of aviation. It took centuries for the designs of da Vinci (who was arguably not even the first) to evolve far enough to become actually functional, and many confident and smart engineers were proven wrong in the meantime. But once the decisive breakthrough was made, progress was rapid and smooth. I claim that this exemplifies a very general difference between fundamental breakthroughs (unpredictable) and incremental refinements (remarkably predictable).

But to make my aviation analogy stick, I of course need to explain the dramatic lack of progress in the past 40 years (since Concorde). Where are our flying cars? My answer is clear: we haven't developed them because we couldn't be bothered, an obstacle that is not likely to occur when it comes to postponing aging. Progress only accelerates while provided with impetus from human motivation. Whether it's national pride, personal greed, or humanitarian concern, something—someone—has to be the engine room.

Which brings me, at last, to human nature. The transformative technologies I have mentioned will, in my view, probably all arrive within the next few decades—a timeframe that I personally expect to see. And we will use them, directly or indirectly, to address all the other slings and arrows that humanity is heir to: biotechnology to combat aging will also combat infections, molecular manufacturing to build unprecedentedly powerful machines will also be able to perform geoengineering and prevent hurricanes and earthquakes and global warming, and superintelligent computers will orchestrate these and other technologies to protect us even from cosmic threats such as asteroids—even, in relatively short order, nearby supernovae. (Seriously.) Moreover, we will use these technologies to address any irritations of which we are not yet even aware, but which grow on us as today's burdens are lifted from our shoulders. Where will it all end?

You may ask why it should end at all—but it will. It is reasonable to conclude, based on the above, that there will come a time when all avenues of technology will, roughly simultaneously, reach the point seen today with aviation: where we are simply not motivated to explore further sophistication in our technology, but prefer to focus on enriching our and each other's lives using the technology that already exists. Progress will still occur, but fitfully and at a decelerating rather than accelerating rate. Humanity will at that point be in a state of complete satisfaction with its condition: complete identity with its deepest goals. Human nature will at last be revealed.

thomas_metzinger's picture
Professor of Theoretical Philosophy, Johannes Gutenberg-Universität Mainz; Adjunct Fellow, Frankfurt Institute for Advanced Study; Author, The Ego Tunnel

John Brockman points out that new technology leads not only to new ways of perceiving ourselves, but also to a process he calls "recreating ourselves." Could this become true in an even deeper and more radical way than through gene-technology? The answer is yes.

It is entirely plausible that we may one day control virtual models of our own bodies directly with our brains. In 2007, I first experienced taking control of a computer-generated whole-body model myself. It took place in a virtual reality lab where my own physical motions were filmed by 18 cameras picking up signals from sensors attached to my body. Over the past two years, different research groups in Switzerland, England, Germany and Sweden have demonstrated how, in a passive condition, subjects can consciously identify with the content of a computer-generated virtual body representation, fully re-locating the phenomenal sense of self into an artificial, visual model of their body.

In 2008, in another experiment, we saw that a monkey on a treadmill could control the real-time walking patterns of a humanoid robot via a brain-machine interface directly implanted into its brain. The synchronized robot was in Japan, while the poor monkey was located thousands of miles away, in the US. Even after it stopped walking, the monkey was able to sustain locomotion of the synchronized robot for a few minutes—just by using the visual feedback transmitted from Japan plus his own "thoughts" (whatever those may turn out to be).

Now imagine two further steps.

First, we manage to selectively block the high-bandwidth "interoceptive" input into the human self-model—all the gut feelings and the incessant flow of inner body perceptions that anchor the conscious self in the physical body. After all, we already have selective motor control for an artificial body-model and robust phenomenal self-identification via touch and sight. By blocking the internal self-perception of the body, we might be able to suspend the persistent causal link to the physical body.

Second, we develop richer and more complex avatars, virtual agents emulating not only the proprioceptive feedback generated by situated movement, but also certain abstract aspects of ongoing global control itself—new tools, as Brockman would call them. Then suddenly it happens that the functional core process initiating the complex control loop connecting physical and virtual body jumps from the biological brain into the avatar.

I don't believe this will happen tomorrow. I also don't believe that it would change everything. But it would change a lot.

steven_pinker's picture
Johnstone Family Professor, Department of Psychology; Harvard University; Author, Rationality

I have little faith in anyone’s ability to predict what will change everything. A look at the futurology of the past turns up many chastening examples of confident predictions of technological revolutions that never happened, such as domed cities, nuclear-powered cars, and meat grown in dishes. By the year 2001, according to the eponymous movie, we were supposed to have suspended animation, missions to Jupiter, and humanlike mainframe computers (though not laptop computers or word processing – the characters used typewriters). And remember interactive television, the internet refrigerator, and the paperless office?

Technology may change everything, but it’s impossible to predict how. Take another way in which 2001: A Space Odyssey missed the boat. The American women in the film were “girl assistants”: secretaries, receptionists, and flight attendants. As late as 1968, few people foresaw the second feminist revolution that would change everything in the 1970s. It’s not that the revolution didn’t have roots in technological change. Not only did oral contraceptives make it possible for women to time their childbearing, but a slew of earlier technologies (sanitation, mass production, modern medicine, electricity) had reduced the domestic workload, extended the lifespan, and shifted the basis of the economy from brawn to brains, collectively emancipating women from round-the-clock childrearing.

The effects of technology depend not just on what the gadgets do but on billions of people’s judgments of their costs and benefits (do you really want to have to call a help line to debug your refrigerator?). They also depend on countless nonlinear network effects, sleeper effects, and other nuisances. The popularity of baby names (Mildred, Deborah, Jennifer, Chloe), and the rates of homicide (down in the 1940s, up in the 1960s, down again in the 1990s) are just two of the social trends that fluctuate wildly in defiance of the best efforts of social scientists to explain them after the fact, let alone predict them beforehand.

But if you insist. This past year saw the introduction of direct-to-consumer genomics. A number of new companies have been recently launched. You can get everything from a complete sequencing of your genome (for a cool $350,000), to a screen of more than a hundred Mendelian disease genes, to a list of traits, disease risks, and ancestry data. Here are some possible outcomes:

• Personalized medicine, in which drugs are prescribed according to the patient’s molecular background rather than by trial and error, and in which prevention and screening recommendations are narrowcasted to those who would most benefit.

• An end to many genetic diseases. Just as Tay-Sachs has almost been wiped out in the decades since Ashkenazi Jews have tested themselves for the gene, a universal carrier screen, combined with preimplantation genetic diagnosis for carrier couples who want biological children, will eliminate a hundred others.

• Universal insurance for health, disability, and home care. Forget the political debates about the socialization of medicine. Cafeteria insurance will no longer be actuarially viable if the highest-risk consumers can load up on generous policies while the low-risk ones get by with the bare minimum.

• An end to the genophobia of many academics and pundits, whose blank-slate doctrines will look increasingly implausible as people learn about the genes that affect their temperament and cognition.

• The ultimate empowerment of medical consumers, who will know their own disease risks and seek commensurate treatment, rather than relying on the hunches and folklore of a paternalistic family doctor.

But then again, maybe not.

brian_goodwin's picture
Professor of Biology at Schumacher College

I anticipate that biology will go through a transforming revelation/revolution that is like the revolution that happened in physics with the development of quantum mechanics nearly 100 years ago. In biology this will involve the realisation that to make sense of the complexity of gene activity in development, the prevailing model of local mechanical causality will have to be abandoned. In its place we will have a model of interactive relationships within gene transcription networks that is like the pattern of interactions between words in a language, where ambiguity is essential to the creation of emergent meaning that is sensitive to cultural history and to context. The organism itself is the emergent meaning of the developmental process as embodied form, sensitive to both historical constraint within the genome and to environmental context, as we see in the adaptive creativity of evolution. What contemporary studies have revealed is that genes are not independent units of information that can be transferred between organisms to alter phenotypes, but elements of complex networks that act together in a morphogenetic process that produces coherent form and function as embodied meaning.

A major consequence that I see of this revelation in biology is the realisation that the separation we have made between human creativity as expressed in culture, and natural creativity as expressed in evolution, is mistaken. The two are much more deeply related than we have previously recognised. That humans are embedded in and dependent on nature is something that no-one can deny. This has become dramatically evident recently as our economic system has collapsed, along with the collapse of many crucial ecosystems, due to our failure to integrate human economic activity as a sustainable part of Gaian regulatory networks. We now face dramatic changes in the climate that require equally dramatic changes in our technologies connected with energy generation, farming, travel, and human life-style in general.

On the other hand, the recognition that culture is embedded in nature is not so evident but will, I believe, emerge as part of the biological revelation/revolution. Biologists will realise that all life, from bacteria to humans, involves a creative process that is grounded in natural languages as the foundation of their capacity for self-generation and continuous adaptive transformation. The complexity of the molecular networks regulating gene activity in organisms reveals a structure and a dynamic that has the self-similar characteristics and long-range order of languages. The coherent form of an organism emerges during its development as the embodied meaning of the historical genetic text, created through the process of resolving ambiguity and multiple possibilities of form into appropriate functional order that reflects sensitivity to context. Such use of language in all its manifestations in the arts and the sciences is the essence of cultural creativity.

In conclusion, I see the deep conceptual changes that are currently happening in biology as a prelude and accompaniment to the changes that are occurring in culture, facilitating these and ushering in a new age of sustainable living on the planet.

lera_boroditsky's picture
Assistant Professor of Cognitive Science, UCSD

There is an old joke about a physicist, a biologist, and an epistemologist being asked to name the most impressive invention or scientific advance of modern times. The physicist does not hesitate—"It is quantum theory. It has completely transformed the way we understand matter." The biologist says "No. It is the discovery of DNA—it has completely transformed the way we understand life." The epistemologist looks at them both and says "I think it's the thermos." The thermos? Why on earth the thermos? "Well," the epistemologist explains patiently, "If you put something cold in it, it will keep it cold. And if you put something hot in it, it will keep it hot." "Yeah, so what?" everyone asks. "Aha!" the epistemologist raises a triumphant finger. "How does it know?"

With this in mind, it may seem foolhardy to claim that epistemology will change the world. And yet, that is precisely what I intend to do here. I think that knowledge about how we know will change everything. By understanding the mechanisms of how humans create knowledge, we will be able to break through normal human cognitive limitations and think the previously unthinkable.

The reason the change is happening now is that modern Cognitive Science has taken on the role of empirical epistemology. The empirical approach to the origins of knowledge is bringing about breathtaking breakthroughs and turning what once were age-old philosophical mysteries into mere scientific puzzles.

Let me give you an example. One of the great mysteries of the mind is how we are able to think about things we can never see or touch. How do we come to represent and reason about abstract domains like time, justice, or ideas? All of our experience with the world is physical, accomplished through sensory perception and motor action. Our eyes collect photons reflected by surfaces in the world, our ears receive air-vibrations created by physical objects, our noses and tongues collect molecules, and our skin responds to physical pressure. In turn, we are able to exert physical action on the world through motor responses, bending our knees and flexing our toes in just the right amount to defy gravity. And yet our internal mental lives go far beyond those things observable through physical experience; we invent sophisticated notions of number and time, we theorize about atoms and invisible forces, and we worry about love, justice, ideas, goals, and principles. So, how is it possible for the simple building blocks of perception and action to give rise to our ability to reason about domains like mathematics, time, justice, or ideas?

Previous approaches to this question have vexed scholars. Plato, for example, concluded that we cannot learn these things, and so we must instead recollect them from past incarnations of our souls. As silly as this answer may seem, it was the best we could do for several thousand years. And even some of our most elegant and modern theories (e.g., Chomskyan linguistics) have been awkwardly forced to conclude that highly improbable modern concepts like ‘carburetor' and ‘bureaucrat' must be coded into our genes (a small step forward from past incarnations of our souls).

But in the past ten years, research in cognitive science has started uncovering the neural and psychological substrates of abstract thought, tracing the acquisition and consolidation of information from motor movements to abstract notions like mathematics and time. These studies have discovered that human cognition, even in its most abstract and sophisticated form, is deeply embodied, deeply dependent on the processes and representations underlying perception and motor action. We invent all kinds of complex abstract ideas, but we have to do it with old hardware: machinery that evolved for moving around, eating, and mating, not for playing chess, composing symphonies, inventing particle colliders, or engaging in epistemology for that matter. Being able to re-use this old machinery for new purposes has allowed us to build tremendously rich knowledge repertoires. But it also means that the evolutionary adaptations made for basic perception and motor action have inadvertently shaped and constrained even our most sophisticated mental efforts. Understanding how our evolved machinery both helps and constrains us in creating knowledge will allow us to create new knowledge, either by using our old mental machinery in yet new ways, or by using new and different machinery for knowledge-making, augmenting our normal cognition.

So why will knowing more about how we know change everything? Because everything in our world is based on knowledge. Humans, leaps and bounds beyond any other creatures, acquire, create, share, and pass on vast quantities of knowledge. All scientific advances, inventions, and discoveries are acts of knowledge creation. We owe civilization, culture, science, art, and technology all to our ability to acquire and create knowledge. When we study the mechanics of knowledge building, we are approaching an understanding of what it means to be human—the very nature of the human essence. Understanding the building blocks and the limitations of the normal human knowledge building mechanisms will allow us to get beyond them. And what lies beyond is, well, yet unknown...


donald_d_hoffman's picture
Cognitive Scientist, UC, Irvine; Author, The Case Against Reality

Everything will change with the advent of the laptop quantum computer (QC). The transition from PCs to QCs will not merely continue the doubling of computing power, in accord with Moore's Law. It will induce a paradigm shift, both in the power of computing (at least for certain problems) and in the conceptual frameworks we use to understand computation, intelligence, neuroscience, social interactions, and sensory perception.

Today's PCs depend, of course, on quantum mechanics for their proper operation. But their computations do not exploit two computational resources unique to quantum theory: superposition and entanglement. To call them computational resources is already a major conceptual shift. Until recently, superposition and entanglement have been regarded primarily as mathematically well-defined but psychologically incomprehensible oddities of the quantum world—fodder for interminable and apparently unfruitful philosophical debate. But they turn out to be more than idle curiosities. They are bona fide computational resources that can solve certain problems that are intractable with classical computers. The best known example is Peter Shor's quantum algorithm which can, in principle, break encryptions that are impenetrable to classical algorithms.

The issue is the "in principle" part. Quantum theory is well established and quantum computation, although a relatively young discipline, has an impressive array of algorithms that can in principle run circles around classical algorithms on several important problems. But what about in practice? Not yet, and not by a long shot. There are formidable materials-science problems that must be solved—such as instantiating quantum bits (qubits) and quantum gates, and avoiding an unwanted noise called decoherence—before the promise of quantum computation can be fulfilled by tangible quantum computers. Many experts bet the problems can't adequately be solved. I think this bet is premature. We will have laptop QCs, and they will transform our world.

When laptop QCs become commonplace, they will naturally lead us to rethink the notion of intelligence. At present, intelligence is modeled by computations, sometimes simple and sometimes complex, that allow a system to learn, often by interacting with its environment, how to plan, reason, generalize and act to achieve goals. The computations might be serial or parallel, but they have heretofore been taken to be classical.

One hallmark of a classical computation is that it can be traced, i.e., one can in principle observe the states of all the variables at each step of the computation. This is helpful for debugging. But one hallmark of quantum computations is that they cannot in general be traced. Once the qubits have been initialized and the computation started, you cannot observe intermediate stages of the computation without destroying it. You aren't allowed to peek at a quantum computation while it is in progress.

The full horsepower of a quantum computation is only unleashed when, so to speak, you don't look. This is jarring. It clashes with our classical way of thinking about computation. It also clashes with our classical notion of intelligence. In the quantum realm, intelligence happens when you don't look. Insist on looking, and you destroy this intelligence. We will be forced to reconsider what we mean by intelligence in light of quantum computation. In the process we might find new conceptual tools for understanding those creative insights that seem to come from the blue, i.e., whose origin and development can't seem to be traced.
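To make the point concrete, here is a minimal single-qubit sketch in Python with NumPy (my illustration, not the author's). Applying a Hadamard gate twice returns the qubit to |0> with certainty, but "peeking" (measuring) in between collapses the superposition, destroys the interference that restored |0>, and turns the final answer into a coin flip.

```python
# A single-qubit illustration (hypothetical example, assuming NumPy) of why
# looking at a quantum computation in progress destroys it.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
ket0 = np.array([1.0, 0.0])                    # the state |0>

# Untraced run: H followed by H always returns |0>.
untraced = H @ (H @ ket0)
print("no peeking, P(|1>) =", round(abs(untraced[1]) ** 2, 3))   # 0.0

# Traced run: peek (measure) after the first H. Whichever outcome the peek
# yields, the second H now leaves |1> with probability 1/2.
after_first_H = H @ ket0
for outcome, basis_state in enumerate(np.eye(2)):
    p_peek = abs(after_first_H[outcome]) ** 2    # chance of seeing this outcome
    final = H @ basis_state                      # computation resumes from the collapsed state
    print(f"peeked, saw |{outcome}> (p={p_peek:.2f}); P(|1>) =",
          round(abs(final[1]) ** 2, 3))          # 0.5 either way
```

A real quantum computer is not, of course, simulated this way; the sketch only dramatizes the "don't look" rule described above.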

Laptop QCs will make us rethink neuroscience. A few decades ago we peered inside brains and saw complex telephone switch boards. Now we peer inside brains and see complex classical computations, both serial and parallel. What will we see once we have thoroughly absorbed the mindset of quantum computation? Some say we will still find only classical computations, because the brain and its neurons are too massive for quantum effects to survive. But evolution by natural selection leads to surprising adaptations, and there might in fact be selective pressures toward quantum computations.

One case in point arises in a classic problem of social interaction: the prisoner's dilemma. In one version of this dilemma, someone yells "Fire!" in a crowded theater. Each person in the crowd has a choice. They can cooperate with everyone else, by exiting in turn in an orderly fashion. Or they can defect, and bolt for the exit. Everyone cooperating would be best for the whole crowd; it is a so-called Pareto optimal solution. But defecting is best for each individual no matter what the others do; everyone defecting is a so-called Nash equilibrium.

What happens is that everyone defects, and the crowd as a whole suffers. But this problem of the prisoner's dilemma, viz., that the Nash equilibrium is not Pareto optimal, is an artifact of the classical computational approach to the dilemma. There are quantum strategies, involving superpositions of cooperation and defection, for which the Nash equilibrium is Pareto optimal. In other words, the prisoner's dilemma can be resolved, and the crowd as a whole needn't suffer if quantum strategies are available. If the prisoner's dilemma is played out in an evolutionary context, there are quantum strategies that drive all classical strategies to extinction. This is suggestive. Could there be selective pressures that built quantum strategies into our nervous systems, and into our social interactions? Do such strategies provide an alternative way to rethink the notion of altruism, perhaps as a superposition of cooperation and defection?
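The classical half of that claim is easy to check. Below is a small Python sketch (the payoff numbers are standard illustrative values, not taken from the essay) showing that with an ordinary prisoner's-dilemma payoff table, mutual defection is the only Nash equilibrium yet is Pareto-dominated by mutual cooperation; the quantum strategies discussed above are not modeled here.

```python
# Classical prisoner's dilemma: (defect, defect) is the unique Nash
# equilibrium but is not Pareto optimal. Payoff numbers are illustrative.
from itertools import product

C, D = "cooperate", "defect"
payoff = {                  # payoff[(row move, column move)] = (row payoff, column payoff)
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(a, b):
    """Neither player gains by unilaterally switching moves."""
    return (payoff[(a, b)][0] >= max(payoff[(x, b)][0] for x in (C, D)) and
            payoff[(a, b)][1] >= max(payoff[(a, y)][1] for y in (C, D)))

def pareto_dominated(a, b):
    """Some other outcome is at least as good for both players and better for one."""
    pa, pb = payoff[(a, b)]
    return any(qa >= pa and qb >= pb and (qa, qb) != (pa, pb)
               for qa, qb in payoff.values())

for a, b in product((C, D), repeat=2):
    print(f"({a}, {b}): Nash = {is_nash(a, b)}, "
          f"Pareto-dominated = {pareto_dominated(a, b)}")
```

Running it prints that only (defect, defect) is a Nash equilibrium, and that it is the only outcome Pareto-dominated by another (namely mutual cooperation), which is exactly the tension the essay describes.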

Laptop QCs will alter our view of sensory perception. Superposition seems to be telling us that our sensory representations, which carve the world into discrete objects with properties such as position and momentum, simply are an inadequate description of reality: No definite position or momentum can be ascribed to, say, an electron when it is not being observed. Entanglement seems to be telling us that the very act of carving the world into discrete objects is an inadequate description of reality: Two electrons, billions of light years apart in our sensory representations, are in fact intimately and instantly linked as a single entity.

When superposition and entanglement cease to be abstract curiosities, and become computational resources indispensable to the function of our laptops, they will transform our understanding of perception and of the relation between perception and reality.

jesse_bering's picture
Psychologist; Associate Professor, Centre for Science Communication, University of Otago, New Zealand; Author, Perv

What if I were to tell you that God were all in your mind? That God, like a tiny speck floating at the edge of your cornea producing the image of a hazy, out-of-reach orb accompanying your every turn, were in fact an illusion, a psychological blemish etched onto the core cognitive substrate of your brain? It may feel like there is something grander out there... watching, knowing, caring. Perhaps even judging. But in fact there is only the air you breathe. Consider, briefly, the implications of seeing God this way, as a sort of scratch on our psychological lenses rather than the enigmatic figure out there in the heavenly world most people believe him to be. Subjectively, God would still be present in our lives. In fact rather annoyingly so. As a way of perceiving, he would continue to suffuse our experiences with an elusive meaning and give the sense that the universe is communicating with us in various ways. But objectively, the notion of God as an illusion is a radical and some would say even dangerous idea, since it raises important questions about God as an autonomous, independent agent that lives outside human brain cells.

In fact, the illusion of God is more plausible a notion than some other related thought experiments, such as the possibility that our brains are sitting in an electrified vat somewhere and we're merely living out simulated lives. In contrast to the vat exercise or some other analogy to the science-fiction movie The Matrix, it is rather uncontroversial to say that our species' ability to think about God—even an absent God—is made possible only by our very naturally derived brains. In particular, by virtue of the fact that our brains have evolved over the eons in the unusual manner they have. In philosophical discourse, the idea that God is an illusion would be a scientifically inspired twist on a very ancient debate, since it deals with the nature and veridicality of God's actual being.

That's all very well, you may be thinking. But perhaps God isn't an illusion at all. Rather than a scratch on our psychological lenses, our brain's ability to reason about the supernatural—about such things as purpose, the afterlife, destiny—is in fact God's personal signature on our brains. One can never rule out the possibility that God micro-engineered the evolution of the human brain so that we've come to see him more clearly, a sort of divine Lasik procedure, or a scraping off the bestial glare that clouds the minds of other animals. In fact some scholars, such as psychologists Justin Barrett and Michael Murray, hold something like this "theistic evolution" view in their writings. Yet as a psychological scientist who studies religion, I take explanatory parsimony seriously. After all, parsimony is the basic premise of Occam's Razor, the very cornerstone of all scientific enquiry. Occam's Razor holds that, of two equally plausible theories, science shaves off the extra fat by favoring the one that makes the fewest unnecessary assumptions. And in the natural sciences, the concept of God as a causal force tends to be an unpalatable lump of gristle. Although treating God as an illusion may not be entirely philosophically warranted, therefore, it is in fact a scientifically valid treatment. Because the human brain, like any physical organ, is a product of evolution, and since natural selection works without recourse to intelligent forethought, this mental apparatus of ours evolved to think about God quite without need of the latter's consultation, let alone his being real.

Indeed, the human brain has many such odd quirks that systematically alter, obscure, or misrepresent entirely the world outside our heads. That's not a bad thing necessarily; nor does it imply poor adaptive design. You have undoubtedly seen your share of optical illusions before, such as the famous Müller-Lyer image where a set of arrows of equal length with their tails in opposite directions creates the subjective impression that one line is actually longer than the other. You know, factually, the lines are of equal length, yet despite this knowledge your mind does not allow you to perceive the image this way. There are also well-documented social cognitive illusions that you may not be so familiar with. For example, David Bjorklund, a developmental psychologist, reasons that young children's overconfidence in their own abilities keeps them engaging in challenging tasks rather than simply giving up when they fail. Ultimately, with practice and over time, children's actual skills can ironically begin to more closely approximate these earlier, favorably warped self-judgments. Similarly, evolutionary psychologists David Buss and Martie Haselton argue that men's tendency to over-interpret women's smiles as sexual overtures prompts them to pursue courtship tactics more often, sometimes leading to real reproductive opportunities with friendly women.

In other words, from both a well-being and a biological perspective, whether our beliefs about the world 'out there' are true and accurate matters little. Rather, psychologically speaking, it's whether they work for us—or for our genes—that counts. As you read this, cognitive scientists are inching their way towards a more complete understanding of the human mind as a reality-bending prism. What will change everything? The looming consensus among those who take Occam's Razor seriously that the existence of God is a question for psychologists and not physicists.

lewis_wolpert's picture
Professor of Biology

We know much about the mechanisms involved in the development of embryos, but given the genome of the egg we cannot predict the way the embryo will develop. That will require an enormous computation in which all the many thousands of components, particularly proteins, are involved, so that the behavior of every cell will be known. Given a fertilized human egg, we would be able to have a picture of all the details of the newborn baby, including any abnormalities. We would also be able to programme the egg to develop into any shape we desire. The time will come when this is possible.

carlo_rovelli's picture
Theoretical Physicist; Aix-Marseille University, in the Centre de Physique Théorique, Marseille, France; Author, Helgoland; There Are Places in the World Where Rules Are Less Important Than Kindness

I grew up expecting that, when adult, I'd travel to Mars. I expected cancer and the flu—and all illnesses—to be cured, robots taking care of labor, the biochemistry of life fully unraveled, the possibility of recreating damaged organs in every hospital, the nations of the Earth living prosperously in peace thanks to new technology, and physics having understood the center of a black hole. I expected great changes, which did not come. Let's be open minded: it is still possible for them to come. It is possible for unexpected advances to change everything—it has happened in the past. But—let's indeed be open minded—it is also possible that big changes will not come.

Maybe I am biased by my own research field, theoretical physics. I grew up in awe of the physics of the second half of the XIX century and the first third of the XX century. What a marvel! The discovery of the electromagnetic field and waves, understanding thermodynamics with probability, special relativity, quantum mechanics, general relativity... Curved spacetimes, probability waves and black holes. What a feast! The world transforming every 10 years under our eyes; reality becoming more subtle, more beautiful. Seeing new worlds. I got into theoretical physics. What has happened big in the last 30 years? We are not sure. Perhaps not much. Big dreams, like string theory and multi-universes, but are they credible? We do not know. Perhaps the same passion that charmed me towards the future has driven large chunks of today's research into useless dead-end dreams. Maybe not. Maybe we are really understanding what happened before the Big Bang (a "Big Bounce"?) and what takes place deep down at the Planck scale ("loops"? space and time losing their meaning?). Let's be open to the possibility we are getting there—let's work hard to get there. But let's also be ready to recognize that perhaps we are not there. Perhaps our dreams are just that: dreams. Too often I have been hearing that somebody is "on the brink of" the great leap ahead. I now tend to fall asleep when I hear "on the brink of". In physics, I have been hearing for 15 years that we are "on the brink of observing supersymmetry". Please wake me up when we are actually there.

I do not want to sound pessimistic. I just want to put in a word of caution. Maybe what really changes everything is not something that sounds so glamorous. What did really change everything in the past? Here are two examples. Until no more than a couple of centuries ago, 95% of humanity worked the countryside as peasants. That is, humanity needed the labour of 95 out of 100 of its members just to feed the group. This left a happy few to do everything else. Today only a few percent of humans work the fields. A few are enough to feed everybody else. This means that the large majority of us, including me and most probably you, my reader, are free to do something else, participating in constructing the world we inhabit, a better one, perhaps. What made such a huge change in our lives possible? Mostly, just one technological tool: the tractor. The humble rural machine has changed our life perhaps more than the wheel or electricity. Another example? Hygiene. Our life expectancy has nearly doubled from little more than washing hands and taking showers. Change comes often from where it is not expected. The famous note from IBM's top management at the beginning of computer history estimated that "there is no market for more than a few dozen computers in the world".

So, what is my moral? Making predictions is difficult, of course, especially about the future. It is good to dream about big changes, actively seek them and be open minded to them. Otherwise we are stuck here. But let us not get blinded by hopes. Dreams and hopes of humanity sometimes succeed, sometimes fail big. The century just ended has shown us momentous examples of both. The Edge question is about what will change everything, which I'll see in my lifetime: and what if the answer were "nothing"? Are we able to discern hype from substance? Dolly may be scientifically important, but I tend to see her just as a funny-born twin sister: she hasn't changed much in my life, yet. Will she really?

tor_norretranders's picture
Writer; Speaker; Thinker, Copenhagen, Denmark

Understanding that the outside world is really inside us and the inside world is really outside us will change everything. Both inside and outside. Why?

"There is no out there out there", physicist John Wheeler said in his attempt to explain quantum physics. All we know is how we correlate with the world. We do not really know what the world is really like, uncorrelated with us. When we seem to experience an external world that is out there, independent of us, it is something we dream up.

Modern neurobiology has reached the exact same conclusion. The visual world, what we see, is an illusion, but then a very sophisticated one. There are no colours, no tones, no constancy in the "real" world; it is all something we make up. We do so for good reasons and with great survival value, because colours, tones and constancy are expressions of how we correlate with the world.

The merging of the epistemological lesson from quantum mechanics with the epistemological lesson from neurobiology attests to a very simple fact: What we perceive as being outside of us is indeed a fancy and elegant projection of what we have inside. We do make this projection as a result of interacting with something not inside, but everything we experience is inside.

Is it not real? It embodies a correlation that is very real. As physicist N. David Mermin has argued, we do have correlations, but we do not know what it is that correlates, or if any correlata exist at all. It is a modern formulation of quantum pioneer Niels Bohr's view: "Physics is not about nature, it is about what we can say about nature."

So what is real, then? Inside us humans a lot of relational emotions exist. We feel affection, awe, warmth, glow, mania, belonging and refusal towards other humans and to the world as a whole. We relate and it provokes deep inner emotional states. These are real and true, inside our bodies, and perceived not as "real states" of the outside world, but more like a kind of weather phenomenon inside us.

That raises the simple question: Where do these internal states come from? Are they an effect of us? Did we make them or did they make us? Love exists before us (most of us were conceived in an act of love). Friendship, family bonds, hate, anger, trust, distrust, all of these entities exist before the individual. They are primary. The illusion of the ego denies the fact that they are there before the ego consciously decided to love or hate or care or not. But the inner states predate the conscious ego. And they predate the bodily individual.

The emotional states inside us are very, very real and the product of biological evolution. They are helpful to us in our attempt to survive. Experimental economics and behavioral sciences have recently shown us how important they are to us as social creatures: To cooperate you have to trust the other party, even though a rational analysis will tell you that both the likelihood and the cost of being cheated are very high. When you trust, you experience a physiologically detectable inner glow of pleasure. So the inner emotional state says yes. However, if you rationally consider the objects in the outside world, the other parties, and consider their trade-offs and motives, you ought to choose not to cooperate. Analyzing the outside world makes you say no. Human cooperation is dependent on our giving weight to what we experience as the inner world compared to what we experience as the outer world.

Traditionally, the culture of science has denied the relevance of the inner states. Now, they become increasingly important to understanding humans. And highly relevant when we want to build artefacts that mimic us.

Soon we will be building not only Artificial Intelligence. We will be building Artificial Will. Systems with an ability to convert internal decisions and values into external change. They will be able to decide that they want to change the world. A plan inside becomes an action on the outside. So they will have to know what is inside and outside.

In building these machines we ourselves will learn something that will change everything: The trick of perception is the trick of mistaking an inner world for the outside world. The emotions inside are the evolutionary reality. The things we see and hear outside are just elegant ways of imagining correlata that can explain our emotions, our correlations. We don't hear the croak, we hear the frog.

When we understand that the inner emotional states are more real than what we experience as the outside world, cooperation becomes easier. The epoch of insane mania for rational control will be over.

What really changes is the way we see things, the way we experience everything. For anything to change out there you have to change everything in here. That is the epistemological situation. All spiritual traditions have been talking about it. But now it grows from the epistemology of quantum physics, neurobiology and the building of robots.

We will be sitting there, building those Artificial Will-robots. Suddenly we will start laughing. There is no out there out there. It is in here. There is no in here in here. It is out there. The outside is in here. Who is there?

That laughter will change everything.

james_j_odonnell's picture
Classics Scholar, University Librarian, ASU; Author, Pagans

"Africa" is the short answer to this question. But it needs explanation.

Historians can't predict black swan game-changers any better than economists can. An outbreak of plague, a nuclear holocaust, an asteroid on collision course, or just an unassassinated pinchbeck dictator at the helm of a giant military machine—any of those can have transformative effect and will always come as a surprise.

But at a macro level, it's easier to see futures, just hard to time them. The expansion of what my colleague, the great environmental historian John McNeill, calls "the human web" to build a planet-wide network of interdependent societies is simply inevitable, but it's taken a long time. Rome, Persia, and ancient China built a network of empires stretching from Atlantic to Pacific, but never made fruitful contact with each other and their empire-based model of "globalization" fell apart in late antique times. A religion-based model kicked in then, with Christianity and Islam taking their swings: those were surprising developments, but they only went so far.

It took until early modern times and the development of new technologies for a real "world-wide web" of societies to develop. Even then, development was Euro-centric for a very long time. Now in our time, we've seen one great game-changer. In the last two decades, the Euro-centric model of economic and social development has been swamped by the sudden rise of the great emerging market nations: China, India, Brazil, and many smaller ones. The great hope of my youth—that "foreign aid" would help the poor nations bootstrap themselves—has come true, sometimes to our thinly-veiled disappointment: disappointment because we suddenly find ourselves competed with for steel and oil and other resources, suddenly find our products competed with by other economies' output, and wonder if we really wanted that game to change after all. The slump we're in now is the inevitable second phase of that expansion of the world community, and the rise that will follow is the inevitable third—and we all hope it comes quickly.

But a great reservoir of misery and possibility awaits: Africa. Humankind's first continent and homeland has been relegated for too long to disease, poverty, and sometimes astonishingly bad government. There is real progress in many places, but astonishing failures persist. That can't last. The final question facing humankind's historical development is whether the whole human family, including Africa's billion, can together achieve sustainable levels of health and comfort.

When will we know? That's a scary question. One future timeline has us peaking now and subsiding, as we wrestle with the challenges we have made for ourselves, into some long period of not-quite-success, while Africa and the failed states of other continents linger in waiting for—what? Decades? Centuries? There are no guarantees about the future. But as we think about the financial crises of the present, we have to remember that what is at risk is not merely the comfort and prosperity of the rich nations but the very lives and opportunity for the poorest.

richard_foreman's picture
Playwright & Director; Founder, The Ontological-Hysteric Theater

The belief that there is anything that will change things, in and of itself stymies, I believe, real change. To believe that anything "will change things" focuses one on the superficial surface of things, which indeed change all the time. Such changes—which have been and will continue to be—create an orientation of consciousness that focuses always on "the future".

But I propose that the only thing that will in fact 'change everything' is, or would be, the refusal to think about the future. And this, of course, is almost impossible for almost all human beings to do.

Therefore, nothing will change everything.

(I admit that I myself have fallen prey to this unavoidable human tendency, having written "of the future" in these pages, proposing that the internet is now creating, and will radicalize in the future, wide-ranging yet depthless "pancake people".)

But if we could "think not" about the future, the present moment would obviously expand and become the full (and very different) universe. One can say "ah, but this is the animal state".

I would answer—no, the animal achieves this automatically, while the human being who achieves this only does so by erecting it on a foundational superstructure which postulates a necessary 'future' (past-based) much as Freud (and others before him) postulated a necessary "unconscious'—out of which the 'conscious' human being emerged.

(I am aware, obviously, that this theme has been engaged by philosophers and mystics down through the ages).

So for a human being to not think about the future would be to become a non-animal inhabiting the pure present (the dream of so called 'avant-garde' art, by the way). And animals do not (apparently) make avant-garde art.

Take John Brockman's offered example of a future event that changes everything—through genetic manipulation "your dog could become your cat" (and by implication, I could become you, etc.)

I say, this changes only the shell. Such alterations and achievements, along with many others similarly imaginable, add but another room onto the "home" inhabited by human beings—who will still spend most of their time "thinking about the future." And nothing, at the deepest level, therefore will ever change a postulated 'everything'—not so long as we keep imagining possible "change" which only reinforces the psychic dwelling of our un-changing selves in a "future" that is always imaginary and beyond us.

stephon_h_alexander's picture
Professor of Physics at Brown University; Author, The Jazz of Physics

I grew up in the northeast Bronx, where in the ’80s pretty much everyone’s heroes were basketball sensations Michael Jordan and Dominique Wilkins. Most of my friends, including myself, fantasized about playing in the NBA. True, playing basketball was fun. But another obvious incentive was that aside from drug dealers, athletes were the only ones from our socioeconomic background that we saw earning serious money and respect. Despite my early tendencies toward science and math, I also played hooky quite a bit, spending many hours on the P. S. 16 basketball court. There, I would fantasize about one day making my high school basketball team and doing a 360 dunk. Neither happened. At 15, in the middle of a layup, I stumbled and broke my kneecap, which forced me off the basketball playground for half a year. I was relegated to homework and consistent class attendance.

Most of my street-court pals didn’t end up graduating from high school. But, although they were far better ball players than I, only one made it to the NBA. A few others did get scouted and ended up playing on Big Ten basketball teams. To this day, whenever I return to my old neighborhood, I see some of my diploma-less pals doing old school moves with kneepads on.

The year of the broken knee led to a scholarship from a private donor for a summer physics camp for teens called ISI (International Summer Institute). The camp took place in Southampton, Long Island, an environment far different from any I’d ever experienced. Most of the other kids were from foreign countries. I made strange new friends, including Hong, a South Korean boy who spent the summer trying to compute Pi to some decimal point or other. Or the group of young chess players being coached by a Russian chess master. I took college physics. Most of these students went on to become excellent scientists, one of whom I am still in touch with. At some point, I met the organizer of the summer camp, a gentleman wearing a leather jacket in summer who turned out to be Nobel Laureate Sheldon Glashow (who coincidentally went to my neighboring high school). He gave us a physics/inspirational talk. During that talk, I realized that there are other types of Michael Jordans, in areas other than basketball, and that, like Shelly, I could be different and make a good living as a scientist. More importantly, we teenagers really bonded with each other and, in a sense, formed a young global community of future scientists.

When I returned to the Bronx, I couldn’t really talk much about my experience. After all, a discussion on the Heisenberg principle is far less interesting than ball-park trash talk. I began playing less basketball and eventually went on to college and became a physicist. I could not help feeling a little guilty. In the back of my mind, I knew the real mathematical genius in my neighborhood was a guy named Eric Deabreu. But he never finished high school.

What if there were a global organization of scientists and educators dedicated to identifying (or scouting) the potential Michael Jordans of science, regardless of what part of the world they are from and regardless of socioeconomic background? This is happening on local levels, but not globally. What if these students were provided the resources to reach their full potential and naturally forge a global community of scientific peers and friends? What we would have is, among many benefits, an orchestrated global effort to address the most pressing scientific problems that current and future generations must confront: the energy crisis, global warming, HIV, diplomacy, to name a few. I think an initiative that markets the virtues of science on every corner of the planet, with the same urgency as the basketball scouts on corners of street ball courts, would change the world. Such a reality has long been my vision, which, in light of the past efforts of some in the science community, including Clifford Johnson and Jim Gates and Neil Turok, I believe I will see come to pass.

jonathan_haidt's picture
Social Psychologist; Thomas Cooley Professor of Ethical Leadership, New York University Stern School of Business; Author, The Righteous Mind

The most offensive idea in all of science for the last 40 years is the possibility that behavioral differences between racial and ethnic groups have some genetic basis. Knowing nothing but the long-term offensiveness of this idea, a betting person would have to predict that as we decode the genomes of people around the world, we're going to find deeper differences than most scientists now expect. Expectations, after all, are not based purely on current evidence; they are biased, even if only slightly, by the gut feelings of the researchers, and those gut feelings include disgust toward racism.

A wall has long protected respectable evolutionary inquiry from accusations of aiding and abetting racism. That wall is the belief that genetic change happens at such a glacial pace that there simply was not time, in the 50,000 years since humans spread out from Africa, for selection pressures to have altered the genome in anything but the most trivial way (e.g., changes in skin color and nose shape were adaptive responses to cold climates). Evolutionary psychology has therefore focused on the Pleistocene era – the period from about 1.8 million years ago to the dawn of agriculture — during which our common humanity was forged for the hunter-gatherer lifestyle.

But the writing is on the wall. Russian scientists showed in the 1990s that a strong selection pressure (picking out and breeding only the tamest fox pups in each generation) created what was — in behavior as well as body — essentially a new species in just 30 generations. That would correspond to about 750 years for humans. Humans may never have experienced such a strong selection pressure for such a long period, but they surely experienced many weaker selection pressures that lasted far longer, and for which some heritable personality traits were more adaptive than others. It stands to reason that local populations (not continent-wide "races") adapted to local circumstances by a process known as "co-evolution" in which genes and cultural elements change over time and mutually influence each other. The best documented example of this process is the co-evolution of genetic mutations that maintain the ability to fully digest lactose in adulthood with the cultural innovation of keeping cattle and drinking their milk. This process has happened several times in the last 10,000 years, not to whole "races" but to tribes or larger groups that domesticated cattle.

Recent "sweeps" of the genome across human populations show that hundreds of genes have been changing during the last 5-10 millennia in response to local selection pressures. (See papers by Benjamin Voight, Scott Williamson, and Bruce Lahn). No new mental modules can be created from scratch in a few millennia, but slight tweaks to existing mechanisms can happen quickly, and small genetic changes can have big behavioral effects, as with those Russian foxes. We must therefore begin looking beyond the Pleistocene and turn our attention to the Holocene era as well – the last 10,000 years. This was the period after the spread of agriculture during which the pace of genetic change sped up in response to the enormous increase in the variety of ways that humans earned their living, formed larger coalitions, fought wars, and competed for resources and mates.

The protective "wall" is about to come crashing down, and all sorts of uncomfortable claims are going to pour in. Skin color has no moral significance, but traits that led to Darwinian success in one of the many new niches and occupations of Holocene life — traits such as collectivism, clannishness, aggressiveness, docility, or the ability to delay gratification — are often seen as virtues or vices. Virtues are acquired slowly, by practice within a cultural context, but the discovery that there might be ethnically-linked genetic variations in the ease with which people can acquire specific virtues is — and this is my prediction — going to be a "game changing" scientific event. (By "ethnic" I mean any group of people who believe they share common descent, actually do share common descent, and that descent involved at least 500 years of a sustained selection pressure, such as sheep herding, rice farming, exposure to malaria, or a caste-based social order, which favored some heritable behavioral predispositions and not others.)

I believe that the "Bell Curve" wars of the 1990s, over race differences in intelligence, will seem genteel and short-lived compared to the coming arguments over ethnic differences in moralized traits. I predict that this "war" will break out between 2012 and 2017.

There are reasons to hope that we'll ultimately reach a consensus that does not aid and abet racism. I expect that dozens or hundreds of ethnic differences will be found, so that any group — like any person — can be said to have many strengths and a few weaknesses, all of which are context-dependent. Furthermore, these cross-group differences are likely to be small when compared to the enormous variation within ethnic groups and the enormous and obvious effects of cultural learning. But whatever consensus we ultimately reach, the ways in which we now think about genes, groups, evolution and ethnicity will be radically changed by the unstoppable progress of the human genome project.

emanuel_derman's picture
Professor, Financial Engineering, Columbia University; Author, Models.Behaving.Badly

The biggest game-changer looming in your future, if not mine, is Life Prolongation. It works for mice and worms, and surely one of these days it'll work for the rest of us.

The current price for Life Prolongation seems to be semi-starvation; the people who try it wear loose clothes to hide their ribs and intentions. There's something desperate and shameful about starving yourself in order to live longer. But right now biologists are tinkering with resveratrol and sirtuins, trying to get you the benefit of life prolongation without cutting back on calories.

Life and love get their edge from the possibility of their ending. What will life be like when we live forever? Nothing will be the same.

The study of financial options shows that there is no free lunch. What you lose on the swings you gain on the roundabouts. If you want optionality, you have to pay a price, and part of that price is that the value of your option erodes every day. That's time decay. If you want a world where nothing fades away with time anymore, it will be because there's nothing left to fade away.
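
A minimal sketch of that erosion, assuming the standard Black-Scholes formula for a European call and purely illustrative parameter values (spot, strike, rate, and volatility below are not from the essay): holding everything else fixed, the option is worth less and less as its expiry approaches.

```python
# Time decay (theta) illustrated with the Black-Scholes value of a
# European call. All numbers are illustrative assumptions.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot, strike, rate, vol, t_years):
    # Black-Scholes value of a European call with t_years left to expiry.
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

# Hold everything fixed except the time left on the clock:
for t in (1.0, 0.5, 0.25, 0.1, 0.01):
    print(f"{t:5.2f} years to expiry -> call worth {bs_call(100, 100, 0.03, 0.2, t):6.3f}")
# The value shrinks steadily as expiry approaches: that erosion is theta.
```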

No one dies. No one gets older. No one gets sick. You can't tell how old someone is by looking at them or touching them. No May-September romances. No room for new people. Everyone's an American car in Havana, endlessly repaired and maintained long after its original manufacturer is defunct. No breeding. No one born. No more evolution. No sex. No need to hurry. No need to console anyone. If you want something done, give it to a busy man, but no one need be busy when you have forever. Life without death changes absolutely everything.

If everyone is an extended LP, the turntable has to turn very slowly.

Who's going to do the real work, then? Chosen people who will volunteer or be volunteered to be mortal.

If you want things to stay the same, then things will have to change (Giuseppe di Lampedusa in The Leopard).

gregory_benford's picture
Emeritus Professor of Physics and Astronomy, UC-Irvine; Novelist, The Berlin Project

I expect to see this happen, because I'll be living longer. Maybe even to 150, about 30 more years than any human is known to have lived.

I expect this because I've worked on it, seen the consequences of genomics when applied to the complex problem of our aging.

Since Aristotle, many scientists and even some physicians (who should know better) thought that aging arises from a few mechanisms that make our bodies deteriorate. Instead, the genomic revolution of the last decade now promises a true 21st Century path to extending longevity: follow the pathways.

Genomics now reveals what physicians intuited: the staggering complexity of aging pathophysiology among real clinical patients. We can't solve "the aging problem" using the standard research methods of cell biology, despite the great success such methods had with some other medical problems.

Aging is not a process of deterioration actively built by natural selection. Instead it arises from a lack of such natural selection in later adulthood. Not understanding this explains the age-old failures to explain or control aging and the chronic diseases underlying it.

Aging comes from multiple genetic deficiencies, not a single biochemical problem.

But now we have genomics to reveal all the genes in an organism. More, we can monitor how each and every one of them expresses in our bodies. Genomics, working with geriatric pathology, now unveils the intricate problems of coordination among aging organ systems. Population genetics illuminates aging's cause and so, soon enough, its control. Aging arises from interconnected complexity hundreds of times greater than cell biologists thought before the late 1990s.

The many-headed monster of aging can't be stopped by any vaccine or by supplying a single missing enzyme. There are no "master regulatory" genes, or avenues of accumulating damage. Instead, there are many complex pathways that inevitably trade current performance for longterm decay. Eventually that evolutionary strategy catches up with us.

So the aging riddle is inherently genomic in scale. There is no biochemical or cellular necessity to aging—it arises from side effects of evolution, through natural selection. But this also means we can attack it by using directed evolution.

Michael Rose at UC Irvine has produced "Methuselah flies" that live over four times longer than control flies in the lab. He did this, for hundreds of generations, by not allowing their eggs to hatch until half the flies had died. Methuselah flies are more robust, not less, and so resist stress.

Methuselah fly genomics shows us densely overlapping pathways. Directed evolution uses these to enhance longevity. Since flies have about ¾ of their genes in common with us, this tells us much about our own pathways. We now know many of these pathways and can enhance their resistance to the many disorders of aging.

By finding substances that can enhance the action of those pathways, we have a 21st Century approach to aging. Such research is rapidly ongoing in private companies, including one I co-founded only three years ago. The field is moving fast. The genomic revolution makes the use of multi-pathway treatments to offset aging inevitable.

Knowledge comes first, then its use. Science yields engineering. Already there seems no fundamental reason why we cannot live to 150 years or longer. After all, nature has done quite well on her own. We know of a 4,800-year-old bristlecone pine, a 400-year-old clam—plus whales, a tortoise, and koi fish over 200 years old—all without technology. And these organisms use pathways that we share, and can now understand.

It will take decades to find the many ways of acting on the longevity genes we already know. Nature spent several billion years developing these pathways; we must plumb them with smart modern tools. The technology emerging now acts on these basic pathways to immediately affect all types of organs. Traditionally, medicine focuses on disease by isolating and studying organs. Fair enough, for then. Now it is better to focus on entire organisms. Only genomics can do this. It looks at the entire picture.

Quite soon, simple pills containing designer supplements will target our most common disorders — cardiovascular, diabetes, neurological. Beyond that, the era of affordable, personal genomics makes possible designer supplements, now called nutrigenomics. Tailored to each personal genome, these can reinforce the repair mechanisms and augmentations that nature herself provided to the genomically fortunate.

So…what if it works?

The prospect of steadily extending our lifespans terrifies some governments. These will yield, over time, to pressures to let us work longer—certainly far beyond the 65 years imposed by most European Union countries. Slowly it will dawn that vibrant old age is a boon, not a curse.

Living to 150 ensures that you take the long view. You're going to live in a future ecology, so better be sure it's livable. You'll need longterm investments, so think longterm. Social problems will belong to you, not some distant others, because problems evolve and you'll be around to see them.

Rather than isolating people, "old age" will lead to social growth. With robust health to go with longer lives, the older will become more socially responsible, bringing both experience and steady energy to bear.

We need fear no senioropolis of caution and withdrawal. Once society realizes that people who get educated in 20 years can use that education for another century or so, working well beyond 100, all the 20th Century social agenda vanishes. Nobody will retire at 65. People will switch careers, try out their dreams, perhaps find new mates and passions. We will see that experience can damp the ardent passions of glib youth, if it has a healthy body to work through. That future will be more mature, and richer for it.

All this social promise emerges from the genomic revolution. The 21st Century has scarcely begun, and already it looks as though most who welcomed it in will see it out–happily, after a good swim in the morning and a vigorous party that night, to welcome in the 22nd. The first person to live to 150 may be reading this right now.

clifford_pickover's picture
Author, The Math Book, The Physics Book, and The Medical Book trilogy

Many mathematical surveys indicate that the "Proof of the Riemann Hypothesis" is the most important open question in mathematics. The rapid pace of mathematics, along with computer-assisted mathematical proofs and visualizations, leads me to believe that this question will be resolved in my lifetime. Math aficionado John Fry once said that he thought we would have a better chance of finding life on Mars than finding a counterexample for the Riemann Hypothesis.

In the early 1900s, British mathematician Godfrey Harold Hardy sometimes took out a quirky form of life insurance when embarking on ocean voyages. In particular, he would mail a postcard to a colleague on which he would claim to have found the solution of the Riemann Hypothesis. Hardy was never on good terms with God and felt that God would not let him die in a sinking ship while Hardy was in such a revered state, with the world always wondering if he had really solved the famous problem.

The proof of the Riemann Hypothesis involves the zeta function, which can be represented by a complicated-looking curve that is useful in number theory for investigating properties of prime numbers. Written as f(x), the function was originally defined as the infinite sum:

$$ f(x) \;=\; 1 + \frac{1}{2^x} + \frac{1}{3^x} + \frac{1}{4^x} + \cdots \;=\; \sum_{n=1}^{\infty} \frac{1}{n^x} $$

When x = 1, this series has no finite sum. For values of x larger than 1, the series adds up to a finite number. If x is less than 1, the sum is again infinite. The complete zeta function, studied and discussed in the literature, is a more complicated function that is equivalent to this series for values of x greater than 1, but it has finite values for any real or complex number, except at the single point x = 1. We know that the function equals zero when x is -2, -4, -6, ... . We also know that the function has an infinite number of zero values for the set of complex numbers, the real part of which is between zero and one—but we do not know exactly for what complex numbers these zeros occur. In 1859, mathematician Georg Bernhard Riemann (1826–1866) conjectured that these zeros occur for those complex numbers the real part of which equals 1/2. Although vast numerical evidence exists that favors this conjecture, it is still unproven.
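
A minimal numerical sketch of that evidence, assuming Python with the mpmath library is available: it asks mpmath for the first few nontrivial zeros (which the library locates on the critical line, consistent with the conjecture) and checks that the zeta function does vanish there to working precision. This illustrates the numerical evidence; it proves nothing.

```python
# A small numerical illustration (not a proof): the first few nontrivial
# zeros of the zeta function lie on the critical line Re(s) = 1/2.
# Assumes the mpmath library is installed (pip install mpmath).
from mpmath import mp, zetazero, zeta

mp.dps = 25  # working precision in decimal digits

for n in range(1, 6):
    rho = zetazero(n)   # nth nontrivial zero, e.g. 0.5 + 14.1347...j
    print(f"zero #{n}: {rho}   |zeta(rho)| = {abs(zeta(rho))}")
    # The imaginary parts are the famous heights 14.13, 21.02, 25.01,
    # 30.42, 32.93, ... and |zeta| at each zero is numerically ~0.
```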

The proof of Riemann's Hypothesis will have profound consequences for the theory of prime numbers and for our understanding of the properties of complex numbers. A generalized version of the Hypothesis, when proven true, will allow mathematicians to solve numerous important mathematical problems. Amazingly, physicists may have found a mysterious connection between quantum physics and number theory through investigations of the Riemann Hypothesis. I do not know if God is a mathematician, but mathematics is the loom upon which God weaves the fabric of the universe.

Today, over 11,000 volunteers around the world are working on the Riemann Hypothesis, using a distributed computer software package at Zetagrid.Net to search for the zeros of the Riemann zeta function. More than 1 billion zeros for the zeta function are calculated every day.

In modern times, mathematics has permeated every field of scientific endeavor and plays an invaluable role in biology, physics, chemistry, economics, sociology, and engineering. Mathematics can be used to help explain the colors of a sunset or the architecture of our brains. Mathematics helps us build supersonic aircraft and roller coasters, simulate the flow of Earth's natural resources, explore subatomic quantum realities, and image faraway galaxies. Mathematics has changed the way we look at the cosmos.

Physicist Paul Dirac once noted that the abstract mathematics we study now gives us a glimpse of physics in the future. In fact, his equations predicted the existence of antimatter, which was subsequently discovered. Similarly, mathematician Nikolai Lobachevsky said that "there is no branch of mathematics, however abstract, which may not someday be applied to the phenomena of the real world."

robert_provine's picture
Professor Emeritus, University of Maryland, Baltimore County; Author, Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond

The survival of our ancestors on the savannah depended on their ability to detect change. Change is where the action is. You don't need to know that things are the same, the same, the same.

Our nervous system is biased for the detection of change. Do you feel the watch on your wrist or the ring on your finger? Probably not, unless you have just put them on. You don't see the blind spot of each retina because it is unchanging and filled in by your brain with information from the visual surround. If the image on your retina is experimentally stabilized, the entire visual field fades in a few seconds and you can see only visual stimuli that move through the field of view. You notice the sound of your home's air control system when it turns on or off, but not when it's running.

Our perception of changing stimulus amplitude is usually nonlinear. The sensation of loudness grows much more slowly (exponent of 0.6) than the amplitude of the physical stimulus, a reason why rock bands have huge amplifiers and speakers. Perceived brightness grows even more slowly than loudness (exponent of 0.33). The sensation of electric shock grows at an accelerating rate (exponent of 3.5), quickly shifting from a just detectable tingle to an agonizing jolt. Our estimate of length grows linearly (exponent of 1.0); a two-inch line appears twice as long as a one-inch line. We are lousy sound, light, and volt meters, but half-way decent rulers.
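
A minimal sketch of the standard power-law reading of these exponents (Stevens' law, psi = k * phi^n, with the exponents quoted above): it asks how much the sensation grows when the physical stimulus doubles. The constant k cancels out of the comparison and is set to 1 here; only the exponents come from the essay.

```python
# Stevens' power law, psi = k * phi**n, with the exponents quoted above.
# Question: if the physical stimulus doubles, by what factor does the
# sensation grow? The ratio psi(2*phi)/psi(phi) = 2**n, so k drops out.
exponents = {
    "brightness":     0.33,
    "loudness":       0.6,
    "length":         1.0,
    "electric shock": 3.5,
}

for name, n in exponents.items():
    growth = 2.0 ** n
    print(f"{name:14s} (n = {n:>4}): doubling the stimulus "
          f"multiplies the sensation by {growth:.2f}")
# Brightness and loudness grow far less than twofold, length tracks the
# stimulus exactly, and shock grows more than tenfold: poor meters,
# halfway decent rulers.
```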

We are poor at making absolute judgments of stimulus amplitude, basing decisions on relative, ever changing standards. We judge ourselves to be warm or cool relative to "physiological zero," our adaptation level. The same room can seem either warm or cool, depending on whether you entered it from a chilly basement or an overheated sunroom. The lesson of temperature judgment is applicable to other, more complex measures of change associated with wealth and success. For a highly paid CEO, this year's million dollar bonus does not feel as good as last year's bonus of the same size, the adaptation level. The second term of a presidency does not feel as momentous as the first.

The above exploration of how we perceive changes in anything suggests the difficulty of identifying something that changes everything, from the perspective of the individual. The velocity of change is also critical. Did the Renaissance, the Reformation, the industrial revolution, or the computer revolution leave ordinary people amazed at the changes in their lives? Historical and futuristic speculation about events that change everything features time compression and overestimates the rate of cultural and psychological change. As with previous generations, we may be missing the slow-motion revolution that is taking place around us, unaware that we are part of an event that will change everything. What is it?

andy_clark's picture
Professor of Cognitive Philosophy, Department of Philosophy and Department of Informatics, University of Sussex, Brighton, UK; Author, Surfing Uncertainty: Prediction, Action, and the Embodied Mind

What will change everything is the onset of celebratory species self re-engineering.

The technologies are pouring in, from wearable, implantable, and pervasive computing, to the radical feature blends achieved using gene transfer techniques, to thought-controlled cursors freeing victims of locked-in syndrome, to funkier prosthetic legs able to win track races, and on to the humble but transformative iPhone.

But what really matters is the way we are, as a result of this tidal wave of self re-engineering opportunity, just starting to know ourselves: not as firmly bounded biological organisms but as delightfully reconfigurable nodes in a flux of information, communication, and action. As we learn to celebrate our own potential, we will embrace ever-more-dramatic variations in bodily form and in our effective cognitive profiles. The humans of the next century will be vastly more heterogeneous, more varied along physical and cognitive dimensions, than those of the past, as we deliberately engineer a new Cambrian explosion of body and mind.

gregory_cochran's picture
Consultant; Adaptive Optics and Adjunct Professor of Anthropology, University of Utah; Coauthor (with Henry Harpending), The 10,000 Year Explosion

Our most reliable engine of change has been increased understanding of the physical world. First it was Galilean dynamics and Newtonian gravity, then electromagnetism, later quantum mechanics and relativity. In each case, new observations revealed new physics, physics that went beyond the standard models—physics that led to new technologies and to new ways of looking at the universe. Often those advances were the result of new measurement techniques. The Greeks never found artificial ways of extending their senses, which hobbled their protoscience. But ever since Tycho Brahe, a man with a nose for instrumentation, better measurements have played a key role in Western science.

We can expect significantly improved observations in many areas over the next decade. Some of that is due to sophisticated, expensive, and downright awesome new machines. The Large Hadron Collider should begin producing data next year, and maybe even information. We can scan the heavens for the results of natural experiments that you wouldn't want to try in your backyard—events that shatter suns and devour galaxies—and we're getting better at that. That means devices like the 30-meter telescope under development by a Caltech-led consortium, or the 100-meter OWL (Overwhelmingly Large Telescope) under consideration by the European Southern Observatory. Those telescopes will actively correct for the atmospheric fluctuations which make stars twinkle—but that's almost mundane, considering that we have a neutrino telescope at the bottom of the Mediterranean and another buried deep in the Antarctic ice. We have the world's first real gravitational telescope (LIGO, the Laser Interferometer Gravitational-Wave Observatory) running now, and planned improvements should increase its sensitivity enough to study cosmic fender-benders in the neighborhood, as (for example) when two black holes collide. An underground telescope, of course….

There's no iron rule ensuring that revolutionary discoveries must cost an arm and a leg: ingenious experimentalists are testing quantum mechanics and gravity in table-top experiments, as well. They'll find surprises. When you think about it, even historians and archaeologists have a chance of shaking gold out of the physics-tree: we know the exact date of the Crab Nebula supernova from old Chinese records, and with a little luck we'll find some cuneiform tablets that give us some other astrophysical clue, as well as the real story about the battle of Kadesh…

We have a lot of all-too-theoretical physics underway, but there's a widespread suspicion that the key shortage is data, not mathematics. The universe may not be stranger than we can imagine but it's entirely possible that it's stranger than we have imagined thus far. We have string theory, but what Bikini test has it brought us? Experiments led the way in the past and they will lead the way again.

We will probably discover new physics in the next generation, and there's a good chance that the world will, as a consequence, become unimaginably different. For better or worse.

steve_nadis's picture
Contributing Editor to Astronomy Magazine and a freelance writer

What would change everything? Well, if you think of the universe as everything, then something that changes the universe—or at least changes our whole conception of it—would change everything. So I think I’ll go with the universe (which is generally a safe pick when you want to cover all bases). I just have to figure out the thing that’s changing. And the biggest, most dramatic thing I can think of would be discovering another universe in our universe.

Now what exactly does that mean? To some extent, it comes down to definitions. If you define the universe as “all there is,” then the idea of discovering another universe doesn’t really make sense. But there are other ways of picturing this. And the way many cosmologists view it is that our universe is, in fact, an expanding bubble--an honest-to-god bubble with a wall and everything. Not so different from a soap bubble really, except for its size and longevity. For this bubble has kept it together for billions of years. And as viewed from the inside, it appears infinitely large. Even so, there’s still room for other bubbles out there--an infinite number of them--and they could appear infinitely large too.

I guess the picture I’m painting here has lots of bubbles. And it’s not necessarily wrong to think of them as different universes, because they could be made of entirely different stuff that obeys different physical laws and sits at a different general energy level (or vacuum state) than our bubble. The fact is, we can never see all of our own bubble, or even see its edge, let alone see another bubble that might be floating outside. We can only see as far as light will take us, and right now that’s about 13.7 billion light-years, which means we only get to observe a small portion of our bubble and nothing more. That’s why it’s fair to consider a bubble outside ours as a universe unto itself. It could be out there, just as real as ours, and we’ll never have any prospect of knowing about it. Unless, perchance, it makes a dramatic entrance into our world by summarily crashing into us.

This sounds like the stuff of fantasy, and it may well be, but I’m not just making it up. One of our leading theories in cosmology, called inflation, predicts—at least in some versions—that our bubble universe will eventually experience an infinite number of collisions with other bubble universes. The first question one might ask is whether we could withstand such a crash and live to tell about it. The small number of physicists and cosmologists who’ve explored this issue have concluded that in many cases we would survive, protected to some extent by the vastness of our bubble and its prodigious wall.

The next question to consider is whether we could ever see traces of such a collision. There’s no definitive answer to that yet, and until we detect the imprint of another bubble we won’t know for sure. But theorists have some pretty specific ideas of what we might see—namely, disk-shaped features lurking somewhere amidst the fading glow of Big Bang radiation known as the cosmic microwave background. And if we were to look at such a disk in gravitational waves, rather than in electromagnetic waves (which we should be able to do in the near future), we might even see it glow.

The probability of seeing a disk of this nature is hard to assess because it appears to be the product of three numbers whose values we can only guess at. One of those numbers has to do with the rate at which other bubbles are forming. The other two numbers have to do with the rate at which space is expanding both inside and outside our bubble. Since we don’t know how to get all these numbers by direct measurements, there doesn’t seem to be much hope of refining that calculation in the near-term. So our best bet, for now, may be trying to obtain a clearer sense of the possible observational signatures and then going out and looking. The good news is that we won’t need any new observatories in the sky. We can just sift through the available cosmic microwave data, which gets better every year, and see what turns up.

If we find another universe, I’m not sure exactly what that means. The one thing I do know is that it’s big. It should be of interest to everybody, though it will undoubtedly mean different things to different folks. One thing that I think most people will agree on is that the place we once called the universe is even grander and more complex than we ever imagined.

john_gottman's picture
Psychologist; Co-founder, The Gottman Relationship Institute; Author, The Seven Principles for Making Marriage Work

The technological changes were small at first. In 2007 a telescope was developed that could search for planets in the Milky Way within 100 light years of Earth. The next version of the telescope in 2008 did not have to block out the light of the parent star to see the planets. It could directly see the reflected light of the planets closest to every star. That made it possible to do spectroscopic analysis of reflected light and search for blue planets like Earth. Within a decade, 100 Earth-like planets had been identified within 100 light years. In the next two centuries that number increased to 50,000 blue planets.

Within the next two centuries the seemingly impossible technical problems of space travel began to be solved. Problems of foil sails were solved. Designs emerged for ships that could get up to 85% of the speed of light within 2 years, using acceleration from stars and from harnessing the creative energy of empty space itself. The Moon, Europa and Mars were colonized. Terra-forming technologies developed. Many designs emerged for complete, spinning 2-mile Earth-habitat ships that produced a 1-g environment. Thousands of people wanted to make the trips.

Laboratory Earth colonies were formed for simulating conditions for the galactic trips. Based on these experiments, social scientists soon recognized that the major unsolved problem of galactic colonization was a social psychological one: how could humans live together for up to 52 years, raising children who would become the explorers of the blue planets? Much had been learned, of course, from the social psychological studies early in the 21st and 22nd Centuries for obtaining planet-wide cooperation in solving global warming and sustainable energy production, and in curing world-wide hunger and disease. But that work was primitive and rudimentary compared to the challenges of galactic colonization.

The subsequent classic social psychological studies were all funded privately by one man. Thousands of scientists participated. Studies of all kinds were initially devised, and the results were carefully replicated. The entire series of social psychological experiments took a century to perform. It rapidly became clear that a military or any hierarchical social structure could not last without the threats of continual external danger. The work of Peggy Sanday had demonstrated that fact without question. The problem was to foster creative collaboration and minimize self-interest. Eventually, it was deemed necessary for each ship to spend 5 years prior to the trip selecting a problem that all the members would creatively and cooperatively face. The work had to easily consume the crew of a ship for 60 years. In addition, each ship represented a microcosm of all Earth's activities, including all the occupations and professions, adventure, play, and sports.

In the year 2500, more than 20,000 ships set out, two headed for each planet. It was inevitable that many ships would successfully make the journey. No one knew what they would find. There was no plan for communication between the stars. The colonization of the Milky Way had begun.

william_h_calvin's picture
Theoretical Neurobiologist; Affiliate Professor Emeritus, University of Washington; Author, Global Fever

Climate will change our worldview. That each of us will die someday ranks up there with 2+2=4 as one of the great certainties of all time. But we are accustomed to think of our civilization as perpetual, despite all of the history and prehistory that tells us that societies are fragile. The junior-sized slices of society such as the church or the corporation, also assumed to outlive the participant, provide us with everyday reminders of bankruptcy. Climate change is starting to provide daily reminders, challenging us to devise ways to build in resiliency, an ability to bounce back when hit hard.

Climate may well force on us a major change in how science is distilled into major findings. There are many examples of the ponderous nature of big organizations and big projects. While I think that the IPCC deserves every bit of its hemi-Nobel, the emphasis on "certainty" and the time required for a thousand scientists and a hundred countries to reach unanimous agreement probably added up to a considerable delay in public awareness and political action.

Climate will change our ways of doing science, making some areas more like medicine with its combination of science and interventional activism, where delay to resolve uncertainties is often not an option. Few scientists are trained to think this way — and certainly not climate scientists, who are having to improvise as the window of interventional opportunity shrinks.

Climate will, at times, force a hiatus on doing science as usual, much like what happened during World War II when many academics laid aside their usual teaching and research interests to intensively focus on the war effort.

The big working models of fluid dynamics used to simulate ocean and atmospheric circulation will themselves be game-changing for other fields of dynamics, such as brain processing and decision making. They should be especially important as they are incorporated into economic research. Climate problems will cause economies to stagger and we have just seen how fragile they are. Unlike 1997 when currency troubles were forced by a big El Niño and its associated fires in southeast Asia, the events of 2008 show that, even without the boat being rocked by external events, our economy can partially crash just from internal instabilities, equivalent to trying to dance in a canoe. Many people will first notice climate change elsewhere via the economic collapse that announces it.

That something as local as a U.S. housing bubble could trigger a worldwide recession shows us just how much work we have to do in "earthquake retrofits" for our economy. Climate-proofing our financial flows will rely heavily on good models of economic dynamics, studies of how things can go badly wrong within a month. With such models, we can test candidates for economic crash barriers.

Finally, climate's challenges will change our perspective on the future. Long-term thinking can be dangerous if it causes us to neglect the short term hazards. A mid-century plan for emissions reduction will be worthless if the Amazon rain forest burns down during the next El Niño.

ed_regis's picture
Science writer; Author, Monsters

Nothing has a greater potential for changing everything than the successful implementation of good old-fashioned nanotechnology.

I specify the old-fashioned version because nanotechnology is decidedly no longer what it used to be. Back in the mid-1980s when Eric Drexler first popularized the concept in his book Engines of Creation, the term referred to a radical and grandiose molecular manufacturing scheme. The idea was that scientists and engineers would construct vast fleets of "assemblers," molecular-scale, programmable devices that would build objects of practically any arbitrary size and complexity, from the molecules up. Program the assemblers to put together an SUV, a sailboat, or a spacecraft, and they'd do it—automatically, and without human aid or intervention. Further, they'd do it using cheap, readily-available feedstock molecules as raw materials.

The idea sounds fatuous in the extreme…until you remember that objects as big and complex as whales, dinosaurs, and sumo wrestlers got built in a moderately analogous fashion: they began as minute, nanoscale structures that duplicated themselves, and whose successors then differentiated off into specialized organs and other components. Those growing ranks of biological marvels did all this repeatedly until, eventually, they had automatically assembled themselves into complex and functional macroscale entities. And the initial seed structures, the gametes, were not even designed, built, or programmed by scientists: they were just out there in the world, products of natural selection. But if nature can do that all by itself, then why can't machines be intelligently engineered to accomplish relevantly similar feats?

Latter-day "nanotechnology," by contrast, is nothing so imposing. In fact, the term has been co-opted, corrupted, and reduced to the point where what it refers to is essentially just small-particle chemistry. And so now we have "nano-particles" in products raging from motor oils to sunscreens, lipstick, car polish and ski wax, and even a $420 "Nano Gold Energizing Cream" that its manufacturer claims transports beneficial compounds into the skin. Nanotechnology in this bastardized sense is largely a marketing gimmick, not likely to change anything very much, much less "everything."

But what if nanotechnology in the radical and grandiose sense actually became possible? What if, indeed, it became an operational reality? That would be a fundamentally transformative development, changing forever how manufacturing is done and how the world works. Imagine all of our material needs being produced at trivial cost, without human labor, and with no waste. No more sweat shops, no more smoke-belching factories, no more grinding workdays or long commutes. The magical molecular assemblers will do it all, permanently eliminating poverty in the process.

Then there would be the medical miracles performed by other types of molecular-scale devices that would repair or rejuvenate your body's cells, killing the cancerous or other bad ones, and nudging the rest of them toward unprecedented levels of youth, health, and durability. All without $420 bottles of face cream.

There's a downside to all this, of course, and it has nothing to do with Michael Crichton-ish swarms of uncontrolled, predatory nanobots hunting down people and animals. Rather, it has to do with the question of what the mass of men and women are going to do when, newly unchained from their jobs, and blessed or cursed with longer life spans, they have oceans of free time to kill. Free time is not a problem for the geniuses and creators. But for the rest of us, what will occupy our idle hands? There is only so much golf you can play.

But perhaps this is a problem that will never have to be faced. The bulk of mainstream scientists pay little attention to radical nanotechnology, regarding its more extravagant claims as science-fictional and beyond belief. Before he died, chemist Richard Smalley, a Nobel prizewinner, made a cottage industry out of arguing that insurmountable technical difficulties at the chemical bonding level would keep radical nanotechnology perpetually in the pipe dream stage. Nobody knows whether he was right about that.

Some people may hope that he was. Maybe changing everything is not so attractive an idea as it seems at first glance.

christopher_j_anderson's picture
Curator, TED conferences, TED Talks; author, TED Talks

Today when we think of the world's teeming billions of humans, we tend to think: overpopulation, poverty, disease, instability, environmental destruction. They are the cause of most of the planet's problems.

What if that were to change? What if the average human were able to contribute more than consume? To add more than subtract? Think of the world as if each person drives a balance sheet. On the negative side are the resources they consume without replacing, on the positive side are the contributions they make to the planet in the form of the resources they produce, the lasting artifacts-of-value they build, and the ideas and technologies that might create a better future for their family, their community and for the planet as a whole. Our whole future hangs on whether the sum of those balance sheets can turn positive.

What might make that possible? One key reason for hope is that so far we have barely scraped the surface of human potential. Throughout history, the vast majority of humans have not been the people they could have been.

Take this simple thought experiment. Pick your favorite scientist, mathematician or cultural hero. Now imagine that instead of being born when and where they were, they had instead been born with the same in-built-but-unlocked abilities in a typical poverty-stricken village in, say, the France of 1200 or the Ethiopia of 1980. Would they have made the contribution they made? Of course not. They would never have received the education and encouragement it took to achieve what they did. Instead they would have simply lived out a life of poverty, with perhaps an occasional yearning that there must be a better way.

Conversely, an unknown but vast number of those grinding out a living today have the potential to be world-changers... if only we could find a way of unlocking that potential.

Two ingredients might be enough to do that. Knowledge and inspiration. If you learn of ideas that could transform your life, and you feel the inspiration necessary to act on that knowledge, there's a real chance your life will indeed be transformed.

There are many scary things about today's world. But one that is truly thrilling is that the means of spreading both knowledge and inspiration have never been greater. Five years ago, an amazing teacher or professor with the ability to truly catalyze the lives of his or her students could realistically hope to impact maybe 100 people each year. Today that same teacher can have their words spread on video to millions of eager students. There are already numerous examples of powerful talks that have spread virally to massive Internet audiences.

Driving this unexpected phenomenon is the fact that the physical cost of distributing a recorded talk or lecture anywhere in the world via the internet has fallen effectively to zero. This has happened with breathtaking speed and its implications are not yet widely understood. But it is surely capable of transforming global education.

For one thing, the realization that today's best teachers can become global celebrities is going to boost the caliber of those who teach. For the first time in many years it's possible to imagine ambitious, brilliant 18-year-olds putting 'teacher' at the top of their career choice list. Indeed the very definition of "great teacher" will expand, as numerous others outside the profession with the ability to communicate important ideas find a new incentive to make that talent available to the world. Additionally every existing teacher can greatly amplify their own abilities by inviting into their classroom, on video, the world's greatest scientists, visionaries and tutors. (Can a teacher inspire over video? Absolutely. We hear jaw-dropping stories of this every day.)

Now think about this from the pupils' perspective. In the past, everyone's success has depended on whether they were lucky enough to have a great mentor or teacher in their neighborhood. The vast majority have not been fortunate. But a young girl born in Africa today will probably have access in 10 years' time to a cell phone with a high-resolution screen, a web connection, and more power than the computer you own today. We can imagine her obtaining face-to-face insight and encouragement from her choice of the world's great teachers. She will get a chance to be what she can be. And she might just end up being the person who saves the planet for our grandchildren.

douglas_rushkoff's picture
Media Analyst; Documentary Writer; Author, Throwing Rocks at the Google Bus

We're talking about changing everything—not just our abilities, relationships, politics, economy, religion, biology, language, mathematics, history and future, but all of these things at once. The only single event I can see shifting pretty much everything at once is our first encounter with intelligent, extra-terrestrial life.

Developments in our current capabilities—genetics, computing, language, even compassion—all feel like incremental advances in existing abilities. As we've seen before, the culmination of one branch of inquiry always just opens the door to a new branch, and never yields the wholesale change of state we anticipated. Nothing we've done in the past couple of hundred thousand years has truly changed everything, so I don't see us doing anything in the future that would change everything, either.

No, I have the feeling that the only way to change everything is for something to be done to us, instead. Just imagining the encounter of humanity with an "other" implies a shift beyond the solipsism that has characterized our civilization since our civilization was born. It augurs a reversal as big as the encounter of an individual with its offspring, or a creature with its creator. Even if it's the result of something we've done, it's now independent of us and our efforts.

To meet a neighbor, whether outer, inner, cyber- or hyper- spatial, finally turns us into an "us." To encounter an other, whether a god, a ghost, a biological sibling, an independently evolved life form, or an emergent intelligence of our own creation, changes what it means to be human.

Our computers may never inform us that they are self-aware, extra-terrestrials may never broadcast a signal to our SETI dishes, and interdimensional creatures may never appear to those who aren't taking psychedelics at the time—but if any of them did, it would change everything.

gregory_paul's picture
Independent Researcher; Author, The Princeton Field Guide of Dinosaurs

Predicting what has the potential to change everything — really change everything — in this century is not difficult. What I cannot know is whether I will live to see it, the data needed to reliably calculate the span of my mind's existence being insufficient.

According to the current norm I can expect to last another third of century. Perhaps more if I match my grandmother's life span — born in a Mormon frontier town the same year Butch Cassidy, the Sundance Kid and Etta Place sailed for Argentina, she happily celebrated her 100th birthday in 2001. But my existence may exceed the natural ceiling. Modern medicine has maximized life spans by merely inhibiting premature death. Sooner or later that will become passé as advancing technology renders death optional.

Evolution, whether biological or technological, has been speeding up over time as the ability to acquire, process and exploit information builds upon itself. Human minds adapted to comprehend arithmetic growth tend to underestimate exponential future progress. Born two years before the Wrights' first flight, my young grandmother never imagined she would cross continents and oceans in near-sonic flying machines. Even out-of-the-box thinkers did not predict the hyperexpansion of computing power over the last half century. It looks like medicine is about to undergo a similar explosion. Extracellular matrix powder derived from pig bladders can regrow a chopped-off finger with a brand new tip complete with nail. Why not regenerate entire human arms and legs, and organs?

DARPA-funded researchers predict that we may soon be "replacing damaged and diseased body parts at will, perhaps indefinitely." Medical corporations foresee a gold mine in repairing and replacing defective organs using cells from the patient's own body (avoiding the whole rejection problem). If assorted body parts ravaged by age can be reconstructed with tissues biologically as young and healthy as those of children, then those with the will and resources will reconstruct their entire bodies.

Even better is stopping and then reversing the very process of aging. Humans, like parrots, live exceptionally long lives because we are genetically endowed with unusually good cellular repair mechanisms for correcting the damage created by free radicals. Lured by the enormous market potential, drugs are being developed to tweak genes to further upgrade the human repair system. Other pharmaceuticals are expected to mimic the life extension that appears to stem from the body's protective reaction to suppressed caloric intake. It's quite possible, albeit not certain, that middle-aged humans will be able to utilize the above methods to extend their lives indefinitely. But keeping our obsolescing primate bodies and brains up and running for centuries and millennia will not be the Big Show.

The human brain and the mind it generates have not undergone a major upgrade since the Pleistocene. And they violate the basic safety rule of information processing — that it is necessary to back up the data. Something more sophisticated and redundant is required. With computing power doubling every year or two, cheap personal computers should match the raw processing power of the human brain in a couple of decades, and then leave it in the dust.
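
A back-of-the-envelope sketch of that timeline, using rough, commonly cited figures that are assumptions rather than anything settled: a brain throughput of about 10^16 operations per second, a 2009 desktop at about 10^11, and a doubling time of 18 months.

```python
# Rough sketch of the catch-up arithmetic. All three inputs are crude,
# commonly quoted estimates, not figures from the essay:
#   brain     ~1e16 operations per second (highly uncertain)
#   desktop   ~1e11 operations per second (~100 gigaflops, circa 2009)
#   doubling  every 1.5 years
from math import log2

brain_ops = 1e16
pc_ops = 1e11
doubling_years = 1.5

doublings_needed = log2(brain_ops / pc_ops)       # about 16.6 doublings
years_needed = doublings_needed * doubling_years  # about 25 years

print(f"{doublings_needed:.1f} doublings, i.e. roughly {years_needed:.0f} years")
# Under these assumptions a cheap PC catches up in a couple of decades,
# which is the order of magnitude the essay has in mind.
```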

If so, it should be possible to use alternative, technological means to produce conscious thought. Efforts are already underway to replace damaged brain parts such as the hippocampus with hypercomputer implants. If and when the initial medical imperative is met, elective implants will undoubtedly be used to upgrade normal brain operations. As the fast-evolving devices improve, they will begin to outperform the original brain; it will make less and less sense to continue to do one's thinking in the old biological clunker, and formerly human minds will become entirely artificial as they move into ultra-sophisticated, dispersed robot systems.

Assuming that the above developments are practical, technological progress will not merely improve the human condition, it should replace it. The conceit that humans in anything like their present form will be able to compete in a world of immortal superminds with unlimited intellectual capacity is naïve; there simply will not be much for people to do. Do not for a minute imagine a society of crude Terminators, or Datas that crave to be as human as possible. Future robots will be devices of subtle sophistication and sensitivity that will expose humans as the big brained apes we truly are. The logic predicts that most humans will choose to become robotic.

Stopping the CyberRevolution is probably not possible; the growing knowledge base should make the production of superintelligent minds less difficult and much faster than replicating, growing and educating human beings. Trying to ban the technology will work as well as the war on drugs. The replacement of humanity with a more advanced system will be yet another evolutionary event on the scale of the Cambrian revolution, the Permian and K/T extinctions that produced and then killed off the nonavian dinosaurs, and the advent of humans and the industrial age.

The scenario herein is not radical or particularly speculative; it seems so only because it has not happened yet. If the robotic civilization comes to pass, it will quickly become mundane to us. The ability of cognitive minds to adjust is endless.

Here's a pleasant secondary effect — supernaturalistic religion will evaporate as ordinary minds become as powerful as gods. What will the cybersociety be like? I hardly have a clue. How much of this will I live to see? I'll find out.

juan_enriquez's picture
Managing Director, Excel Venture Management; Co-author (with Steve Gullans), Evolving Ourselves

Speciation is coming. Fast. We keep forgetting that we are but one of several hominids that have walked the Earth (erectus, habilis, neanderthalensis, heidelbergensis, ergaster, australopithecus). We keep thinking we are the one and only, the special. But we easily could not have been a dominant species. Or even a species anymore. We blissfully ignore the fact that we came within about 2,000 specimens of going extinct (which is why human DNA is virtually identical).

There is not much evidence, historically, that we are the be-all and end-all, or that we will remain the dominant species. The fossil history of the planet tells tales of at least six mass extinctions. In each cycle, most life was toast as DNA/RNA hit a reboot key. New species emerged to adapt to new conditions. Asteroid hits? Do away with oceans of slime. World freezes to the Equator? Microbes dominate. Atmosphere fills with poisonous oxygen? No worries, life eventually blurts out obnoxious mammals.

Unless we believe that we have now stabilized all planetary and galactic variables, these cycles of growth and extinction will continue time and again. 99% of species, including all other hominids, have gone extinct. Often this has happened over long periods of time. What is interesting today, 200 years after Darwin's birth, is that we are taking direct and deliberate control over the evolution of many, many species, including ourselves. So the single biggest game changer will likely be the beginning of human speciation. We will begin to get glimpses of it in our lifetime. Our grandchildren will likely live it.

There are at least three parallel tracks on which this change is running towards us. The easiest to see and comprehend is taking place among the "handicapped." As we build better prostheses, we begin to see equality. Legless Oscar Pistorius attempting to put aside the Paralympics and run against able-bodied Olympians is but one example. In Beijing he came very close, but did not meet the qualifying times. However, as materials science, engineering, and design advance, by the next Olympics he and his disciples will be competitive. And one Olympics after that, the "handicapped" could be unbeatable.

It's not just limbs: what started out as large cones for the hard of hearing eventually became pesky, malfunctioning hearing aids. Then came discreet, effective, miniaturized buds. Now internally implanted cochlear implants allow the deaf to hear. But unlike natural evolution, which requires centuries, digital technologies double in power and halve in price every few months. Soon those with implants will hear as well as we do, and, a few months after that, their hearing may be more acute than ours. Likely the devices will span a broad and adjustable tonal range, including that of species like dogs, bats, or dolphins. Wearers will be able to adapt to various environments at will. Perhaps those with natural hearing will file discrimination lawsuits because they were not hired by symphony orchestras…

Speciation does not have to be mechanical; there is a second, parallel, fast-moving track in stem cell and tissue engineering. While the global economy melted down this year, a series of extraordinary discoveries opened interesting options that will be remembered far longer than the current NASDAQ index. Labs in Japan and Wisconsin rebooted skin cells and turned them into stem cells. We are now closer to a point where any cell in our body can be rebooted back to its original factory settings (pluripotent stem cell) and can rebuild any part of our body. At the same time, a Harvard team stripped a mouse heart of all its cells, leaving only cartilage. The cartilage was covered in mouse stem cells, which self-organized into a beating heart. A Wake Forest group was regrowing human bladders and implanting them into accident and cancer victims. By year end, a European team had taken a trachea from a dead donor, taken the cells off, and then covered the sinew with bone marrow cells taken from a patient dying of tuberculosis. These cells self-organized and regrew a fully functional trachea which was implanted into the patient. There was no need for immunosuppressants; her body recognized the cells covering the new organ as her own…

Again, this is an instance where treating the sick and the needy can quickly expand into a "normal" population with elective procedures. The global proliferation of plastic surgery shows how many are willing to undergo great expense, pain, and inconvenience to enhance their bodies. Between 1996 and 2002, elective cosmetic surgery increased 297% and minimally invasive procedures increased 4146%. As artificial limbs, eyes, ears, and cartilage begin to provide significant advantages, procedures developed to enhance the quality of life for the handicapped may become common.

After the daughter of one of my friends tore her tendons horseback riding, doctors told her they would have to harvest parts of her own tendons and hamstrings to rebuild her leg. Because she was so young, the crippling procedure would have to be repeated three times as her body grew. But her parents knew tissue engineers were growing tendons in a lab, so she was one of the first recipients of a procedure that allows natural growth and no harvesting. Today she is a successful ski racer, but her coach feels her "damaged" knee is far stronger and has asked whether the same procedure could be done on the undamaged knee…

As we regrow or engineer more body parts we will likely significantly increase average life span and run into a third track of speciation. Those with access to Google already have an extraordinary evolutionary advantage over the digitally illiterate. Next decade we will be able to store everything we see, read, and hear in our lifetime. The question is: can we re-upload and upgrade this data as the basic storage organ deteriorates? And can we enhance this organ's cognitive capacity internally and externally? MIT has already brought together many of those interested in cognition—neuroscientists, surgeons, radiologists, psychologists, psychiatrists, computer scientists—to begin to understand this black box. But rebooting other body parts will likely be easier than rebooting the brain, so this will likely be the slowest track but, over the long term, the one with the greatest speciation impact.

Speciation will not be a deliberate, programmed event. Instead it will involve an ever faster accumulation of small, useful improvements that eventually turn Homo sapiens into a new hominid. We will likely see glimpses of this long-lived, partly mechanical, partly regrown creature that continues to rapidly drive its own evolution. As the branches of the tree of life, and of hominids, continue to grow and spread, many of our grandchildren will likely engineer themselves into what we would consider a new species, one with extraordinary capabilities, a Homo evolutis.

george_dyson's picture
Science Historian; Author, Analogia

The detection of extraterrestrial life, extraterrestrial intelligence, or extraterrestrial technology (there’s a difference) will change everything. The game could be changed completely by an extraterrestrial presence discovered (or perhaps not discovered) here on earth.

SETI@home, our massively-distributed search for extraterrestrial communication, now links some five million terrestrial computers to a growing array of radio telescopes, delivering a collective 500 teraflops of fast Fourier transforms representing a cumulative two million years of individual processing time. Not a word (or even a picture) so far. However, as Marvin Minsky warned in 1970: "Instead of sending a picture of a cat, there is one area in which you can send the cat itself."

Life, assuming it exists elsewhere in the universe, will have had time to explore an unfathomable diversity of forms. Those best able to survive the passage of time, adapt to changing environments, and migrate unscathed across interstellar distances will become the most widespread. Life forms that assume digital representation, for all or part of their life cycle, will not only be able to send messages at the speed of light, they will be able to send themselves.

Digital organisms can be propagated economically even with extremely low probability of finding a host environment in which to germinate and grow. If the kernel is intercepted by a host that has discovered digital computing (whose ability to translate between sequence and structure, as Alan Turing and John von Neumann demonstrated, is as close to a universal common denominator as life and intelligence running on different platforms may be able to get) it has a chance. If we discovered such a kernel, we would immediately replicate it widely. Laboratories all over the planet would begin attempting to decode it, eventually compiling the coded sequence — intentionally or inadvertently — to utilize our local resources, the way a virus is allocated privileges within a host cell. The read-write privileges granted to digital organisms already include material technology, human minds, and, increasingly, biology itself. (What, exactly, are those screen savers doing at Dr. Venter’s laboratory during the night?)

According to Edward Teller, Enrico Fermi asked "Where is everybody?" at Los Alamos in 1950, when the subject of extraterrestrial beings came up over lunch. The answer to Fermi’s Paradox could be "We’ve arrived! Now help us unpack!" Fifty years later, over lunch at Stanford, I asked a 91-year-old Edward Teller (holding a wooden staff at his side like an Old Testament prophet) how Fermi’s question was holding up.

"Let me ask you," Teller interjected in his thick Hungarian accent. "Are you uninterested in extraterrestrial intelligence? Obviously not. If you are interested, what would you look for?"

"There's all sorts of things you can look for," I answered.  "But I think the thing not to look for is some intelligible signal... Any civilization that is doing useful communication, any efficient transmission of information will be encoded, so it won't be intelligible to us — it will look like noise."

"Where would you look for that?" asked Teller.

"I don't know..."

"I do!" 

"Where?"

"Globular clusters!" answered Teller.  "We cannot get in touch with anybody else because they choose to be so far away from us. In globular clusters, it is much easier for people at different places to get together.  And if there is interstellar communication at all, it must be in the globular clusters."

"That seems reasonable," I agreed. "My own personal theory is that extraterrestrial life could be here already... and how would we necessarily know? If there is life in the universe, the form of life that will prove to be most successful at propagating itself will be digital life; it will adopt a form that is independent of the local chemistry, and migrate from one place to another as an electromagnetic signal, as long as there's a digital world — a civilization that has discovered the Universal Turing Machine — for it to colonize when it gets there.  And that's why von Neumann and you other Martians got us to build all these computers, to create a home for this kind of life."

There was a long, drawn-out pause. "Look," Teller finally said, lowering his voice, "may I suggest that instead of explaining this, which would be hard... you write a science fiction book about it."

"Probably someone has," I said.

"Probably," answered Teller, "someone has not."

roger_schank's picture
CEO, Socratic Arts Inc.; John Evans Professor Emeritus of Computer Science, Psychology and Education, Northwestern University; Author, Make School Meaningful-And Fun!

An executive of a consumer products company whom I know was worrying about how to make the bleach his company produces better. He thought it would be nice if the bleach didn't cause "collateral damage." That is, he wanted it to harm bad stuff without harming good stuff. He seized upon the notion of collateral damage and began to wonder where else collateral damage was a problem. Chemotherapy came to mind, and he visited some oncologists who gave him some ideas about what they did to make chemotherapy less harmful to patients. He then applied those same ideas to improve his company's bleach.

He began to wonder about what he had done and how he had done it. He wanted to be able to do this sort of thing again. But what is this sort of thing and how can one do it again?

In bygone days we lived in groups that had wise men (and women) who told stories to younger people if they thought that those stories might be relevant to their needs. This was called wisdom and teaching, and it served as a way of passing one generation's experiences to the next.

We have lost this ability to some extent because we live in a much larger world, where the experts are not likely to be in the next cave over and where there is a lot more to have expertise about. Nevertheless, we, as humans, are set up to deliver and make use of just-in-time wisdom. We just aren't that sure where to find it. We have created books, and schools, and now search engines to replace what we have lost. Still, it would be nice if there were wisdom to be had without having to look hard to find it.

Those days of just-in-time storytelling will return. The storyteller will be your computer. The computers we have today are capable of understanding your needs and finding just the right (previously archived and indexed) wise man (or woman) to tell you a story, just when you need it, that will help you think something out. Some work needs to be done to make this happen, of course.

No more looking for information. No more libraries. No more key words. No more search engines.

Information will find you, and just in the nick of time. And this will "change everything."

You are seeing the beginning of this today, but it is being done in a mindless and commercial way, led of course by Google ads that watch the words you type and match them to ads they have written that contain those words. (I receive endless offers of online degrees, for example, because that is what I often write about.) Three things will change:

1. The information that finds you will be relevant and important to what you are working on and will arrive just in time.
2. The size of information will change. No more book-length chunks of information (book size is an artifact of what length sells—there are no ten-page books).
3. A new form of publishing will arrive that serves to vet the information you receive. Experts will be interviewed and their best stories will be indexed. Those stories will live forever, waiting to be told to someone at the right moment.

In the world that I am describing the computer has to know what you are trying to accomplish, not what words you just typed, and it needs to have an enormous archive of stories to tell you. Additionally it needs to have indexed all the stories it has in its archives to activities you are working on in such a way that the right story comes up at the right time.

What needs to happen to make this a reality? Computers need an activity model. They need to know what you are doing and why. As software becomes more complex and more responsible for what we do in our daily lives, this state of affairs is inevitable.

An archive needs to be created that has all the wisdom of the world in it. People have sought to do this for years, in the form of encyclopedias and such, but they have failed to do what was necessary to make those encyclopedias useful. There is too much in a typical encyclopedia entry, not to mention the absurd amount of information in a book. People are set up to hear stories, and stories don't last all that long before we lose our ability to concentrate on their main point, their inherent wisdom, if you will. People tell each other stories all the time, but when they write or lecture they are permitted (or encouraged) to go on way too long (as I am doing now).

Wisdom depends upon goal-directed prompts that say what to do when certain conditions are encountered. To put this another way, an archive of key strategic ideas about how to achieve goals under certain conditions is just the right resource to be interacting with, enabling a good story to pop up when you need it. The solution involves goal-directed indexing. Ideas such as "collateral damage" are indices to knowledge. We are not far from the point where computers will be able to recognize collateral damage when it happens and find other examples that help you think something out.
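A toy sketch of what goal-directed indexing might look like in code (the stories, goal labels, and exact-match rule below are invented for illustration; a real system of the kind described here would index far more richly):

```python
# A toy goal-directed story index (the stories, goal labels, and matching rule
# are invented for illustration; real story archives index far more richly
# than an exact string match).
from dataclasses import dataclass

@dataclass
class Story:
    teller: str
    text: str
    goal: str        # what the listener is trying to accomplish
    condition: str   # the situation that makes the story relevant

ARCHIVE = [
    Story("oncologist", "How we targeted chemotherapy to spare healthy tissue.",
          goal="improve a product", condition="collateral damage"),
    Story("engineer", "How we isolated a failure without shutting the plant down.",
          goal="diagnose a fault", condition="cannot stop the system"),
]

def remind(goal: str, condition: str) -> list[Story]:
    """Return the archived stories indexed under the caller's goal and condition."""
    return [s for s in ARCHIVE if s.goal == goal and s.condition == condition]

# The bleach executive's situation maps onto the same index entry as the
# oncologists' experience, so the archive surfaces their story unprompted.
for story in remind("improve a product", "collateral damage"):
    print(f"{story.teller}: {story.text}")
```

The point of the sketch is the index, not the matching: once activities are labeled by goal and condition, the right story can "find you" without a search.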

Having a "reminding machine" that gets reminding of universal wisdom as needed will indeed change everything. We will all become much more likely to profit from humanity's collective wisdom by having a computer at the ready to help us think.

stuart_a_kauffman's picture
Professor of Biological Sciences, Physics, Astronomy, University of Calgary; Author, Reinventing the Sacred

John Brockman's question is dramatic: What will change everything? Of course, no one knows. But the fact that no one knows may be the feature of our lives and the universe that does change everything. Reductionism has reigned as our dominant world view for 350 years in Western society. Physicist Steven Weinberg states that when the science shall have been done, all the explanatory arrows will point downward, from societies to people, to organs, to cells, to biochemistry, to chemistry and ultimately to physics and the final theory.

I think he is wrong: the evolution of the biosphere, the economy, our human culture and perhaps aspects of the abiotic world, stand partially free of physical law and are not entailed by fundamental physics. The universe is open.

Many physicists now doubt the adequacy of reductionism, including Philip Anderson and Robert Laughlin. Laughlin argues for laws of organization that need not derive from the fundamental laws of physics. I give one example. Consider a sufficiently diverse collection of molecular species, such as peptides, RNA, or small molecules, that can undergo reactions and are also candidates to catalyze those very reactions. It can be shown analytically that at a sufficient diversity of molecular species and reactions, so many of these reactions are expected to be catalyzed by members of the system that a giant catalyzed reaction network arises that is collectively autocatalytic. It reproduces itself.

The central point about the autocatalytic set theory is that it is a mathematical theory, not reducible to the laws of physics, even if any specific instantiation of it requires actual physical "stuff". It is a law of organization that may play a role in the origin of life.
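A back-of-envelope sketch of why catalytic closure becomes nearly unavoidable as diversity grows (the per-species catalysis probability p and the pairwise ligation scheme below are illustrative assumptions, not the full analytic theory):

```python
# Back-of-envelope sketch of catalytic closure.  The catalysis probability p
# and the pairwise ligation scheme are illustrative assumptions.
def closure_stats(n_species: int, p: float):
    n_reactions = n_species * (n_species - 1)            # ordered ligations a + b -> ab
    catalysts_per_reaction = n_species * p                # expected catalysts per reaction
    p_reaction_catalyzed = 1.0 - (1.0 - p) ** n_species   # P(at least one catalyst)
    return n_reactions, catalysts_per_reaction, p_reaction_catalyzed

if __name__ == "__main__":
    p = 1e-3  # assumed chance that a given species catalyzes a given reaction
    print(f"{'N':>6} {'reactions':>10} {'catalysts/rxn':>14} {'P(catalyzed)':>13}")
    for n in (10, 100, 1000, 10000):
        r, c, q = closure_stats(n, p)
        print(f"{n:>6} {r:>10} {c:>14.2f} {q:>13.3f}")
    # Once the expected number of catalysts per reaction passes ~1, nearly every
    # reaction is catalyzed by some member of the system, and a collectively
    # autocatalytic network becomes overwhelmingly likely.
```

Because the number of possible reactions grows roughly as the square of the number of species while each reaction needs only one catalyst among them, the ratio eventually tips in favor of closure.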

Consider next the number of proteins with 200 amino acids: 20 to the 200th power. Were the 10 to the 80th particles in the known universe doing nothing but making proteins of length 200 on the Planck time scale, and were the universe some 10 to the 17th seconds old, it would require 10 to the 39th lifetimes of the universe to make all possible proteins of length 200 just once. But this means that, above the level of atoms, the universe is on a unique trajectory. It is vastly non-ergodic. Then we will never make all complex molecules, organs, organisms, or social systems.
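For scale, a quick calculation along these lines (the bookkeeping of one protein attempted per particle per Planck tick is an assumption for illustration; the exact exponents depend on it, but the conclusion of vast non-ergodicity does not):

```python
# Order-of-magnitude check of the non-ergodicity argument.  The bookkeeping
# (one protein attempted per particle per Planck tick) is an illustrative
# assumption; the qualitative conclusion does not depend on it.
from math import log10

AMINO_ACIDS, LENGTH = 20, 200
PARTICLES = 1e80          # particles in the known universe (order of magnitude)
PLANCK_TIME = 5.4e-44     # seconds
AGE = 1e17                # seconds, as in the essay

log_sequences = LENGTH * log10(AMINO_ACIDS)                 # ~260
log_attempts = log10(PARTICLES) + log10(AGE / PLANCK_TIME)  # ~140

print(f"possible proteins of length {LENGTH}: about 10^{log_sequences:.0f}")
print(f"maximum assembly attempts in one universe lifetime: about 10^{log_attempts:.0f}")
# Even at this absurd rate, only a vanishing fraction of protein sequence
# space could ever be sampled: the universe cannot visit all its possibilities.
```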

In this second sense, the universe is indefinitely open "upward" in complexity.

Consider the human heart, which evolved in the non-ergodic universe. I claim the physicist can neither deduce nor simulate the evolutionary becoming of the heart. Simulation, given all the quantum throws of the dice, for example cosmic rays from somewhere mutating genes, seems out of the question. And were such infinitely or vastly many simulations carried out there would be no way to confirm which one captured the evolution of this biosphere.

Suppose we asked Darwin the function of the heart. "Pumping blood" is his brief reply. But there is more. Darwin noted that features of an organism of no selective use in the current environment might be selected in a different environment. These are called Darwinian "preadaptations" or "exaptations". Here is an example: Some fish have swim bladders, partially filled with air and partially with water, that adjust neutral buoyancy in the water column. They arose from lungfish. Water got into the lungs of some fish, and now there was a sac partially filled with air, partially filled with water, poised to become a swim bladder. Three questions arise: Did a new function arise in the biosphere? Yes, neutral buoyancy in the water column. Did it have cascading consequences for the evolution of the biosphere? Yes, new species, proteins and so forth.

Now comes the essential third question: Do you think you could say ahead of time all the possible Darwinian preadaptations of all organisms alive now, or just for humans? We all seem to agree that the answer is a clear "No". Pause. We cannot say ahead of time what the possible preadaptations are. As in the first paragraph, we really do not know what will happen. Part of the problem seems to be that we cannot prespecify all possible selective environments. How would we know we had succeeded? Nor can we prespecify the feature(s) of one or several organisms that might become preadaptations.

Then we can make no probability statement about such preadaptations: We do not know the space of possibilities, the sample space, so can construct no probability measure.

Can we have a natural law that describes the evolution of the swim bladder? If a natural law is a compact description available beforehand, the answer seems a clear No. But then it is not true that the unfolding of the universe is entirely describable by natural law. This contradicts our views since Descartes, Galileo and Newton. The unfolding of the universe seems to be partially lawless. In its place is a radically creative becoming.

Let me point to the Adjacent Possible of the biosphere. Once there were lungfish, swim bladders were in the Adjacent Possible of the biosphere. Before there were multicelled organisms, the swim bladder was not in the Adjacent Possible of the biosphere. Something wonderful is happening right in front of us: When the swim bladder arose it was of selective advantage in its context. It changed what was Actual in the biosphere, which in turn created a new Adjacent Possible of the biosphere. The biosphere self-consistently co-constructs itself into its ever changing, unstatable Adjacent Possible.

If the becoming of the swim bladder is partially lawless, it certainly is not entailed by the fundamental laws of physics, so cannot be deduced from physics. Then its existence in the non-ergodic universe requires an explanation that cannot be had by that missing entailment. The universe is open.

Part of the explanation rests in the fact that life seems to be evolving ever more positive sum games. As organismic diversity increases, and the "features" per organism increase, there are more ways for selection to select for mutualisms that become the conditions of joint existence in the universe. The hummingbird, sticking her beak in the flower for nectar, rubs pollen off the flower, flies to the next flower for nectar, and pollen rubs off on the stigma of that flower, pollinating it. But these mutualistic features are the very conditions of one another's existence in the open universe. The biosphere is rife with mutualisms. In biologist Scott Gilbert's fine phrase, these are codependent origination—an ancient Buddhist term. In this open universe, beyond entailment by fundamental physics, we have partial lawlessness, ceaseless creativity, and forever co-dependent origination that changes the Actual and the ever new Adjacent Possible we ceaselessly self-consistently co-construct. More, the way this unfolds is neither fully lawful, nor is it random. We need to re-envision ourselves and the universe.

karl_sabbagh's picture
Producer; Founder, Managing Director, Skyscraper Productions; Author, The Antisemitism Wars: How the British Media Failed Their Public

Much of the misery in the world today — as it always has been — is due to the human propensity to contemplate, or actually commit, violence against another human being. It's not just assaults and murders that display that propensity. Someone who designs a weapon, punishes a child, declares war or leaves a hit-and-run victim by the side of the road has defined 'harming another human being' as a justifiable action for himself. How different the world would be if, as a biologically determined characteristic of future human beings, there were such a cognitive inhibition against such actions that people would be incapable of carrying them out, just as most of us are incapable of moving our ears.

It must be the case that in the brains of everyone, from abusive parents and rapists to arms dealers and heads of state, there can arise a concatenation of nerve impulses which allows someone to see as 'normal' — or at least acceptable — the mutilation, maiming or death of another for one's own pleasure, greed or benefit. Suppose the pattern of that series of impulses was analysable exactly, with future developments of fMRI, PET scans or technology as yet uninvented. Perhaps every decision to kill or harm another person can be traced to a series of nerve impulses that arise in brain centre A, travel in a microsecond to areas B, C, and D, inhibit areas E and F, and lead to a previously unacceptable decision becoming acceptable. Perhaps we would discover a common factor between the brain patterns of someone who is about to murder a child, and a head of state signing a bill to initiate a nuclear weapons programme, or an engineer designing a new type of cluster bomb. All of them accept at some intellectual level that it is perfectly all right for their actions to cause harm or death to another human. The brains of all of them, perhaps, experience pattern D, the 'death pattern'.

If such a specific pattern of brain activity were detectable, could methods then be devised that prevented or disrupted it whenever it was about to arise? At its most plausible — and least socially acceptable — everyone could wear microcircuit-based devices that detected the pattern and suppressed or disrupted it, such that anyone in whom the impulse arose would instantaneously lose any will to carry it out. Less plausible, but still imaginable, would be some sophisticated chemical suppressant of 'pattern D', genetically engineered to act at specific synapses or on specific neurotransmitters, and delivered in some way that reached every single member of the world's population. The 'pattern D suppressant' could be used as a water additive, like chlorine, acceptable now to prevent deaths from dirty water; or as inhalants sprayed from the air; or in genetically modified foodstuffs; even, perhaps, alteration of the germ cell line in one generation that would forever remove pattern D from future generations.

Rapes would be defused before they happened; soldiers — if there were still armies — would be inhibited from firing as their trigger fingers tightened, except of course there would be no one to fire at since enemy soldiers, insurgents, or terrorists would themselves be unable to carry their violent acts to completion.

Would the total elimination of murderous impulses from the human race have a down side? Well, of course, one single person who escaped the elimination process could then rule the world. He — probably a man — could oppress and kill with impunity since no one else would have the will to kill him. Measures would have to be devised to deal with such a situation. Such a person would be so harmful to the human race that, perhaps, plans would have to be laid to control him if he should arise. Tricky, this one, since he couldn't be killed, as there would be no one able to kill him or even to design a machine that would kill him, as that also would involve an ability to contemplate the death of another human being.

But setting that possibility aside, what would be the disadvantages of a world in which, chemically or electronically, the ability to kill or harm another human being would be removed from all people? Surely, only good could come from it. Crimes motivated by greed would still be possible, but robberies would be achieved with trickery rather than at the point of a pistol; gang members might attack each other with insults and taunts rather than razors or coshes; governments might play chess to decide on tricky border issues; and deaths from road accidents would go down because even the slightest thought about one's own behaviour causing the death of another would be so reminiscent of 'pattern D' that we would all drive much more carefully to avoid it. Deaths from natural disasters would continue, but charitable giving and international aid in such situations would soar as people realised that not helping to prevent them in future would be almost as bad as the old and now eliminated habit of killing people.

A method to eliminate 'pattern D' will lead to the most significant change ever in the way humans — and therefore societies — behave. And somewhere, in the fields of neurobiology or genetic modification today the germ of that change may already be present.

Science fiction writers traffic in possible worlds, trying them on for size. What if, as in the Hollywood blockbuster Minority Report, we could read people's intentions before they act and thus preempt violence? An intentionality detector would be a terrific device to have, but talk about ethical nightmares. If you ever worried about big brother tapping your phone lines, how about tapping your neural lines? What about aliens from another planet? What will they look like? How do they reproduce? How do they solve problems? If you want to find out, just go back and watch reruns of Star Trek, or get out the popcorn and watch Men In Black, War of the Worlds, The Thing, Signs, and The Blob.

But here's the rub on science fiction: it's all basically the same stuff, one gimmick with a small twist. Look at all the aliens in these movies. They are always the same, a bit wispy, often with oversized heads, see-through body parts, and awesome powers. And surprisingly, this is how it has been for 75 or so years of Hollywood, even though our technologies have greatly expanded the range of special effects that are possible. Why the lack of creativity? Why such a poverty of the imagination?

The answer is simple, and reveals a deep fact about our biology, and the biology of all other organisms. The brain, as a physical device, evolved to process information and make predictions about the future. Though the generative capacity of the brain, especially the human brain, is spectacular — providing us with a system for massive creativity — it is also highly constrained. The constraints arise both from the physics of brain operation and from the requirements of learnability.

These constraints establish what we, and other organisms, have achieved — the actual — and what they could, in the future and with the right conditions, potentially achieve — the possible. Where things get interesting is in thinking about the unimaginable. Poof! But there is a different way of thinking about this problem that takes advantage of exciting new developments in molecular biology, evolutionary developmental biology, morphology, neurobiology, and linguistics. In a nutshell, for the first time we have a science that enables us to understand the actual, the possible and the unimaginable, a landscape that will forever change our understanding of what it means to be human, including how we arrived at our current point in evolutionary theory, and where we might end up in ten or ten million years.

To illustrate, consider a simple example from the field of theoretical morphology, a discipline that aims to map out the space of possible morphologies and in so doing, reveal not only why some parts of this space were never explored, but also why they never could be explored. The example concerns an extinct group of animals called the ammonoids, swimming cephalopod mollusks with a shell that spirals out from the center before opening up.

In looking at the structure of their shells — the ones that actually evolved, that is — there are two relevant dimensions that account for the variation: the rate at which the spiral spirals out and the distance between the center of this coil or spiral and the opening. If you plot the actual ammonoid species on a graph that includes spiral rate and distance to the opening, you see a density of animals in a few areas, and then some gaps. The occupied spaces in this map show what actually evolved, whereas the vacant spaces suggest either possible (not yet evolved) or impossible morphologies.
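For the curious, here is a minimal sketch of this kind of coiling model in code, in the spirit of Raup's classic analysis of shell geometry (the parameter names W and D and the ranges sampled are assumptions for illustration, not values taken from the ammonoid literature):

```python
# A minimal sketch of a Raup-style shell-coiling morphospace.  W is the whorl
# expansion rate per revolution; D is the relative distance of the whorl's
# inner margin from the coiling axis.  The grid below is illustrative only.
import math

def outer_margin(W: float, n_turns: float = 4.0, steps: int = 400):
    """Points on the outer margin of a planispiral shell: r = W**(theta / 2*pi)."""
    pts = []
    for i in range(steps + 1):
        theta = 2 * math.pi * n_turns * i / steps
        r = W ** (theta / (2 * math.pi))
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def whorls_overlap(W: float, D: float) -> bool:
    """Successive whorls just touch when D * W == 1; for smaller products the
    whorls overlap (the tightly coiled, ammonoid-like corner of the map)."""
    return D * W < 1.0

if __name__ == "__main__":
    # Sample a coarse grid of the two dimensions and report which cells fall in
    # the tightly coiled region versus the open, loosely coiled region.
    for W in (1.5, 2.0, 3.0, 10.0, 100.0):
        row = ["overlap" if whorls_overlap(W, D) else "open   "
               for D in (0.1, 0.3, 0.5, 0.7)]
        print(f"W={W:>6}:  " + "  ".join(row))
    x, y = outer_margin(2.0)[-1]
    print(f"aperture radius for W=2 after 4 whorls: {math.hypot(x, y):.1f}")
```

Sweeping such a grid and marking which cells real specimens occupy is exactly the morphospace exercise described here; the empty cells are the candidates for possible, or impossible, forms.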

Of great interest in this line of research is the cause of the impossible. Why, that is, have certain species never taken over a particular swath of morphological turf? What is it about this space that leaves it vacant? Skipping many details, some of the causes are intrinsic to the organisms (e.g., no genetic material or developmental programs for building wheels instead of legs) and some extrinsic (e.g., circles represent an impossible geometry or natural habitats would never support wheels).

What is exciting about these ideas is that they have a family resemblance to those that Noam Chomsky mapped out over 50 years ago in linguistics. That is, the biology that allows us to acquire a range of possible languages also puts constraints on this system, leaving in its wake a space of impossible languages, those that could never be acquired or, if acquired, would never remain stable. And the same moves can be translated into other domains of cultural expression, including music, morality, and mathematics. Are there musical scores that no one, not even John Cage, could dream up because the mind can't fathom certain frequencies and temporal arrangements? Are there evolvable moral systems that we will never see because our current social systems and environments make these toxic to our moral sensibilities? Regardless of how these questions are resolved, they open up new research opportunities, using methods that are only now being refined.

Here are some of my favorites, examples that reveal how we can extend the range of the possible, invading the terra incognita of the impossible. Thanks to work by neuroscientists such as Evan Balaban, we now know that we can combine the brain parts of different animals to create chimeras. For example, we can take a part of a quail's brain and pop it into a chicken, and when the young chick develops, it head-bobs like a quail and crows like a chicken.

Functionally, we have allowed the chicken to invade an empty space of behavior, something unimaginable, to a chicken that is. Now let your imagination run wild. What would a chimpanzee do with the generative machinery that a human has when it is running computations in language, mathematics and music? Could it imagine the previously unimaginable? What if we gave a genius like Einstein the key components that made Bach a different kind of genius? Could Einstein now imagine different dimensions of musicality? These very same neural manipulations are now even possible at the genetic level. Genetic engineering allows us to insert genes from one species into another, or manipulate the expressive range of a gene, jazzing it up or turning it off.

This revolutionary science is here, and it will forever change how we think. It will change what is possible, potentially remove what is possible but deleterious, and open our eyes to the previously impossible.

kevin_kelly's picture
Senior Maverick, Wired; Author, What Technology Wants and The Inevitable

It is hard to imagine anything that would "change everything" as much as a cheap, powerful, ubiquitous artificial intelligence—the kind of synthetic mind that learns and improves itself. A very small amount of real intelligence embedded into an existing process would boost its effectiveness to another level. We could apply mindfulness wherever we now apply electricity. The ensuing change would be hundreds of times more disruptive to our lives than even the transforming power of electrification. We'd use artificial intelligence the same way we've exploited previous powers—by wasting it on seemingly silly things. Of course we'd plan to apply AI to tough research problems like curing cancer, or solving intractable math problems, but the real disruption will come from inserting wily mindfulness into vending machines, our shoes, books, tax returns, automobiles, email, and pulse meters.

This additional intelligence need not be super-human, or even human-like at all. In fact, the greatest benefit of an artificial intelligence would come from a mind that thought differently than humans, since we already have plenty of those around. The game-changer is neither how smart this AI is, nor its variety, but how ubiquitous it is. Alan Kay quips that perspective is worth 80 IQ points. For an artificial intelligence, ubiquity is worth 80 IQ points. A distributed AI, embedded everywhere that electricity goes, becomes ai—a low-level background intelligence that permeates the technium, and through this saturation morphs it.

Ideally this additional intelligence should not be just cheap, but free. A free ai, like the free commons of the web, would feed commerce and science like no other force I can imagine, and would pay for itself in no time. Until recently, conventional wisdom held that supercomputers would first host this artificial mind, and then perhaps we'd get mini-ones at home, or add them to the heads of our personal robots. They would be bounded entities. We would know where our thoughts ended and theirs began.

However, the snowballing success of Google this past decade suggests the coming AI will not be bounded inside a definable device. It will be on the web, like the web. The more people that use the web, the more it learns. The more it knows, the more we use it. The smarter it gets, the more money it makes, the smarter it will get, the more we will use it. The smartness of the web is on an increasing-returns curve, self-accelerating each time someone clicks on a link or creates a link. Instead of dozens of geniuses trying to program an AI in a university lab, there are a billion people training the dim glimmers of intelligence arising between the quadrillion hyperlinks on the web. Long before the computing capacity of a plug-in computer overtakes the supposed computing capacity of a human brain, the web—encompassing all its connected computing chips—will dwarf the brain. In fact it already has.

As more commercial life, science work, and daily play of humanity moves onto the web, the potential and benefits of a web AI compound. The first genuine AI will most likely not be birthed in a standalone supercomputer, but in the superorganism of a billion CPUs known as the web. It will be planetary in dimensions, but thin, embedded, and loosely connected. Any device that touches this web AI will share — and contribute to — its intelligence. Therefore all devices and processes will (need to) participate in this web intelligence.

Standalone minds are likely to be viewed as handicapped, a penalty one might pay in order to have mobility in distant places. A truly off-the-grid AI could not learn as fast, as broadly, or as smartly as one plugged into 6 billion human minds, a quintillion online transistors, hundreds of exabytes of real-life data, and the self-correcting feedback loops of the entire civilization.

When this emerging AI, or ai, arrives it won't even be recognized as intelligence at first. Its very ubiquity will hide it. We'll use its growing smartness for all kinds of humdrum chores, including scientific measurements and modeling, but because the smartness lives on thin bits of code spread across the globe in windowless boring warehouses, and it lacks a unified body, it will be faceless. You can reach this distributed intelligence in a million ways, through any digital screen anywhere on earth, so it will be hard to say where it is. And because this synthetic intelligence is a combination of human intelligence (all past human learning, all current humans online) and the coveted zip of fast alien digital memory, it will be difficult to pinpoint what it is as well. Is it our memory, or a consensual agreement? Are we searching it, or is it searching us?

While we will waste the web's ai on trivial pursuits and random acts of entertainment, we'll also use its new kind of intelligence for science. Most importantly, an embedded ai will change how we do science. Really intelligent instruments will speed and alter our measurements; really huge sets of constant real time data will speed and alter our model making; really smart documents will speed and alter our acceptance of when we "know" something. The scientific method is a way of knowing, but it has been based on how humans know. Once we add a new kind of intelligence into this method, it will have to know differently. At that point everything changes.

rodney_a_brooks's picture
Panasonic Professor of Robotics (emeritus); Former Director, MIT Computer Science and Artificial Intelligence Lab (1997-2007); Founder, CTO, Robust.AI; Author, Flesh and Machines

I am very sure that in my lifetime we will have a definitive answer to one question that has been debated, with little data, for hundreds of years. The answer to whether or not there is life on Mars will either be a null result, if negative, or will profoundly impact science (and perhaps philosophy and religion), if positive.

As NASA's Administrator in the 1990s, Dan Goldin rightly reasoned that the biggest possible positive public relations coup for his agency, and therefore for its continued budget, would be if it discovered unambiguous evidence of life somewhere else in the Universe, besides on Earth.

One of the legacies we see today of that judgment is the almost weekly flow of new planets being discovered orbiting nearby stars. If life does exist outside of our solar system the easy bet is that it exists on planets, so we had better find planets to look at for direct evidence of life. We have been able to infer the existence of very large planets by carefully measuring star wobbles, and more recently we have detected smaller planets by measuring their transits, the way they dim a star as they cross between it and Earth. And just in the last months of 2008 we have our first direct images of planets orbiting other stars.

NASA has an ambitious program using the Hubble and Spitzer space telescopes and the 2016 launch of the Terrestrial Planet Finder to get higher and higher resolution images of extra-solar planets and look for tell-tale chemical signatures of the large-scale biochemical impact of Earth-like life on these planets. If we do indeed discover life elsewhere through these methods it will have a large impact on our views of life, and will no doubt stimulate much creative thinking which will lead to new science about Earth-life. But it will take a long, long time to infer many details about the nature of that distant life and the detailed levels of similarities and differences to our own versions of life.

The second of Goldin's legacies is about life much closer to home. NASA has a strong, but somewhat endangered at this moment, direct exploration program for the surface of Mars. We have not yet found direct evidence of life there, but neither have the options for its existence narrowed appreciably. And we are very rapidly learning much more about likely locations for life; again just in the last months of 2008 we have discovered vast water glaciers with just a shallow covering of soil. We have many more exciting places to go look for life on Mars than we will be able to send probes over the next handful of years. If we do discover life on Mars (alive or extinct) one can be sure that there will be a flurry of missions to go and examine the living entities or the remnants in great detail.

There is a range of possible outcomes for what life might look like on Mars, and it may leave ambiguity as to whether its creation was a spontaneous event independent of that on Earth or whether there has been cross contamination of our two planets with only one genesis for life.

At one extreme, life on Mars could turn out to be DNA-based with exactly the same coding scheme for amino acids that all life on Earth uses. Or it could look like a precursor to Earth life, again sharing a compatible precursor encoding, perhaps an RNA-based life form, or even a PNA-based (Peptide Nucleic Acid) form. Any of these outcomes would help us immensely in our understanding of the development of life from non-life, whether it happened on Mars or Earth.

Another set of possibilities for what we might discover would be one of these same forms with a different or incompatible encoding for amino acids. That would be a far more radical outcome. It would tell us two things. Life arose twice, spontaneously and separately, on two adjacent planets in one particular little solar system. The Universe must in that case be absolutely teeming with life. But more than that it would say that the space of possible life biochemistries is probably rather narrow, so we will immediately know a lot about all those other life forms out there. And it will inform us about the probable spaces that we should be searching in our synthetic biology efforts to build new life forms.

The most mind-expanding outcome would be if life on Mars is not at all based on a genetic coding scheme of long chains of nucleotide bases that decode in triplets to select an amino acid to be tacked on to a protein under construction. This would revolutionize our understanding of the possibilities for biology. It would provide us with a completely different form to study. It would open the possibilities for what must be invariant in biology and what can be manipulated and engineered. It would completely change our understanding of ourselves and our Universe.

howard_gardner's picture
Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; Author, A Synthesizing Mind

What is talent? If you ask the average grade school teacher to identify her most talented student, she is likely to reject the question: "All my students are equally talented." But of course, this answer is rubbish. Anyone who has worked with numerous young people over the years knows that some catch on quickly, almost instantly, to new skills or understandings, while others must go through the same drill, with depressingly little improvement over time.

As wrongheaded as the teacher's response is the viewpoint put forward by some psychological researchers, and most recently popularized in Malcolm Gladwell's Outliers: The Story of Success. This is the notion that there is nothing mysterious about talent, no need to crack open the lockbox: anyone who works hard enough over a long period of time can end up at the top of her field. Anyone who has the opportunity to observe or read about a prodigy—be it Mozart or Yo-Yo Ma in music, Tiger Woods in golf, John von Neumann in mathematics—knows that achievement is not just hard work: the differences between performance at time 1 and successive performances at times 2, 3, and 4 are vast, not simply the result of additional sweat. It is said that if algebra had not already existed, precocious Saul Kripke would have invented it in elementary school: such a characterization would be ludicrous if applied to most individuals.

For the first time, it should be possible to delineate the nature of talent. This breakthrough will come about through a combination of findings from genetics (do highly talented individuals have a distinctive, recognizable genetic profile?); neuroscience (are there structural or functional neural signatures, and, importantly, can these be recognized early in life?); cognitive psychology (are the mental representations of talented individuals distinctive when contrasted to those of hard workers?); and the psychology of motivation (why are talented individuals often characterized as having "a rage to learn, a passion to master"?).

This interdisciplinary scientific breakthrough will allow us to understand what is special about Picasso, Gauss, J.S. Mill. Importantly, it will illuminate whether a talented person could have achieved equally in different domains (could Mozart have been a great physicist? Could Newton have been a great musician?) Note, however, that it will not illuminate two other issues:

1. What makes someone original, creative? Talent and expertise are necessary but not sufficient.
2. What determines whether talents are applied to constructive or destructive ends?

These answers are likely to come from historical or cultural case studies, rather than from biological or psychological science. Part of the maturity of the sciences is an appreciation of which questions are best left to other disciplinary approaches.

marcelo_gleiser's picture
Appleton Professor of Natural Philosophy, Dartmouth College; Author, The Island of Knowledge

There is no question more fundamental to us than our mortality. We die and we know it. It is a terrifying, inexorable truth, one of the few absolute truths we can count on. Other noteworthy absolute truths tend to be mathematical, such as 2+2=4. Nothing horrified the French philosopher and mathematician Blaise Pascal more than "the silence of infinitely open spaces," the nothingness that surrounds the end of time and our ignorance of it.

For death is the end of time, the end of experience. Even if you are religious and believe in an afterlife, things certainly are different then: either you exist in a timeless Paradise (or Hell), or as some reincarnate soul. If you are not religious, death is the end of consciousness. And with consciousness goes the end of tasting a good meal, reading a good book, watching a pretty sunset, having sex, loving someone. Pretty grim in either case.

We only exist while people remember us. I think of my great-grandparents in nineteenth-century Ukraine. Who were they? No writings, no photos, nothing. Just their genes remain, diluted, in our current generation.

What to do? We spread our genes, write books and essays, prove theorems, invent family recipes, compose poems and symphonies, paint and sculpt, anything to create some sort of permanence, something to defy oblivion. Can modern science do better? Can we contemplate a future when we control mortality? I know I am being way too optimistic considering this a possibility, but the temptation to speculate is far too great for me to pass on it. Maybe I'll live for 101 years like Irving Berlin, having still half of my life ahead of me.

I can think of two ways in which mortality can be tamed: one at the cellular level, and the other through an integration of the body with genetics, cognitive science, and cyber technology. I'm sure there are others. But first, let me make clear that, at least according to current science, mortality could never be completely stopped. Speculations aside, modern physics forbids time travel to the past. Unfortunately, we can't just jump into a time machine to relive our youth over and over again. (Sounds a bit horrifying, actually.)

Causality is an unforgiving mistress. Also, unless you are a vampire (and there were times in my past when I wished I were one) and thus beyond submitting to the laws of physics, you can't really escape the second law of thermodynamics: even an open system like the human body, able to interact with its external environment and absorb nutrients and energy from it, will slowly deteriorate. In time, we burn too much oxygen. We live and we rust. Herein life's cruel compromise: we need to eat to stay alive. But by eating we slowly kill ourselves.

At the cellular level, the mitochondria are the little engines that convert food into energy. Starving cells live longer. Apparently, proteins from the sirtuin family contribute to this process, interfering with normal apoptosis, the cellular self-destruction program.

Could the right dose of sirtuin or something else be found to significantly slow down aging in humans? Maybe, in a few decades… Still at the cellular level, genetic action may also interfere with the usual mitochondrial respiration. Reduced expression of the mclk1 gene has been shown to slow down aging in mice. Something similar was shown to happen in C. elegans worms. The results suggest that the same molecular mechanism for aging is shared throughout the animal kingdom.

We can speculate that, say, by 2040, a combination of these two mechanisms may have allowed scientists to unlock the secrets of cellular aging. It's not the elixir of life that alchemists have dreamt of, but the average life span could possibly be increased to 125 years or even longer, a significant jump from the current US average of about 75 years. Of course, this would create a terrible burden on social security. But retirement age by then would be around 100 or so.

A second possibility is more daring and probably much harder to become a reality within my next 50 or so years of life. Combine human cloning with a mechanism to store all our memories in a giant database. Inject the clone of a certain age with the corresponding memories. Voilà! Will this clone be you? No one really knows. Certainly, just the clone without the memories won't do. We are what we remember.

To keep on living with the same identity, we must keep on remembering. Unless, of course, you don't like yourself and want to forget the past. So, assuming such a tremendous technological jump is even feasible, we could migrate to a new copy of ourselves when the current one gets old and rusty. Some colleagues are betting such technologies will become available within the century.

Although I'm an optimist by nature, I seriously doubt it. I probably will never know, and my colleagues won't either. However, there is no question that controlling death is the ultimate human dream, the one "thing that can change everything else." I leave the deeply transforming social and ethical upheaval this would cause to another essay. Meanwhile, I take advice from Mary Shelley's Frankenstein. Perhaps there are things we are truly unprepared for.

timothy_taylor's picture
Jan Eisner Professor of Archaeology, Comenius University in Bratislava; Author, The Artificial Ape

Culture changes everything because culture contains everything, in the sense of things that can be named, and so what can be conceived. Wittgenstein implied that what cannot be said cannot be thought. He meant by this that language relies on a series of prior agreements. Such grammar has been shown by anthropologists to underpin the idea of any on-going community, not just its language, but its broader categories, its institutions, its metaphysics. And the same paradox is presented: how can anything new ever happen? If by 'happen' we only think of personal and historical events, we miss the most crucial novelty—the way that new things, new physical objects, devices and techniques, insinuate themselves into our lives. They have new names which we must learn, and new, revolutionary effects.

It does not always work like that. Resistance is common. Paradoxically, the creative force of culture also tries to keep everything the same. Ernest Gellner said that humans, taken as a whole, present the most extensive behavioural variation of any species while every particular cultural community is characterized by powerful norms. These are ways of being that, often through appeals to some apparently natural order, are not just mildly claimed as quintessentially human, but lethally enforced at a local level, in a variety of more or less public ways. Out groups (whether a different ethnicity, class, sexuality, creed, whether being one of twins, an albino, someone disabled or an unusually talented individual) are suspect and challenging in their abnormality. Categories of special difference are typical foci for sacrifice, banishment, and ridicule through which the in-group becomes not just the in-group but, indeed, a distinctly perceptible group, confident, refreshed and culturally reproductive. This makes some sense: aberrance subverts the grammar of culture.

The level at which change can be tolerated varies greatly across social formations, but there is always a point beyond which things become intolerably incoherent. We may rightly label the most unprecedented behaviour mad because, whatever relativization might be invoked to explain it, it is, by definition, strategically doomed: we seek to ignore it. Yet the routine expulsion of difference, apparently critical in the here and now, becomes maladaptive in any longer-term perspective. Clearly, it is change that has created our species' resilience and success, creating the vast inter- (not intra-) cultural diversity that Gellner noted. So how does change happen?

Major change often comes stealthily. Its revolutionary effect may often reside in the very fact that we do not recognize what it is doing to our behaviour, and so cannot resist it. Often we lack the words to articulate resistance, as the invention is a new noun whose verbal effect lags in its wake. Such major change operates far more effectively through things than directly through people, not brought about by the mad, but rather by 'mad scientists', whose inventions can be forgiven their inventors.

Unsurprisingly then, the societies that tolerate the least behavioural deviance are the most science-averse. Science, in the broadest sense of effective material invention, challenges quotidian existence. The Amish (a quaint static ripple whose way of life will never uncover the simplest new technological fix for the unfolding hazards of a dynamic universe) have long recognized that material culture embodies weird inspirations, challenging us, as eventual consumers, not with 'copy what I do', but a far, far more subversive 'try me.'

Material culture is the thing that makes us human, driving human evolution from the outset with its continually modifying power. Our species' particular dilemma is that in order to safeguard what we have, we have continually to change. The culture of things—invention and technology—is ever changing under the tide of words and routines whose role is to image fixity and agreement when, in reality, none exists. This form of change is no trivial thing because it is essential to our longer term survival. At least, the longer term survival of anything we may be proud to call universally human.

nick_bostrom's picture
Professor, Oxford University; Director, Future of Humanity Institute; Author, Superintelligence: Paths, Dangers, Strategies

Intelligence is a big deal. Humanity owes its dominant position on Earth not to any special strength of our muscles, nor any unusual sharpness of our teeth, but to the unique ingenuity of our brains. It is our brains that are responsible for the complex social organization and the accumulation of technical, economic, and scientific advances that, for better and worse, undergird modern civilization.

All our technological inventions, philosophical ideas, and scientific theories have gone through the birth canal of the human intellect. Arguably, human brain power is the chief rate-limiting factor in the development of human civilization.

Unlike the speed of light or the mass of the electron, human brain power is not an eternally fixed constant. Brains can be enhanced. And, in principle, machines can be made to process information as efficiently as — or more efficiently than — biological nervous systems.

There are multiple paths to greater intelligence. By "intelligence" I here refer to the panoply of cognitive capacities, including not just book-smarts but also creativity, social intuition, wisdom, etc.

Let's look first at how we might enhance our biological brains. There are of course the traditional means: education and training, and development of better methodologies and conceptual frameworks. Also, neurological development can be improved through better infant nutrition, reduced pollution, adequate sleep and exercise, and prevention of diseases that affect the brain. We can use biotech to enhance cognitive capacity, by developing pharmaceuticals that improve memory, concentration, and mental energy; or we could achieve these ends with genetic selection and genetic engineering. We can invent external aids to boost our effective intelligence — notepads, spreadsheets, visualization software.

We can also improve our collective intelligence. We can do so via norms and conventions — such as the norm against using ad hominem arguments in scientific discussions — and by improving epistemic institutions such as the scientific journal, anonymous peer review, and the patent system. We can increase humanity's joint problem-solving capacity by creating more people or by integrating a greater fraction of the world's existing population into productive endeavours, and we can develop better tools for communication and collaboration — various internet applications being recent examples.

Each of these ways of enhancing individual and collective human intelligence holds great promise. I think they ought to be vigorously pursued. Perhaps the smartest and wisest thing the human species could do would be to work on making itself smarter and wiser.

In the longer run, however, biological human brains might cease to be the predominant nexus of Earthly intelligence.

Machines will have several advantages: most obviously, faster processing speed — an artificial neuron can operate a million times faster than its biological counterpart. Machine intelligences may also have superior computational architectures and learning algorithms. These "qualitative" advantages, while harder to predict, may be even more important than the advantages in processing power and memory capacity. Furthermore, artificial intellects can be easily copied, and each new copy can — unlike humans — start life fully-fledged and endowed with all the knowledge accumulated by its predecessors. Given these considerations, it is possible that one day we may be able to create "superintelligence": a general intelligence that vastly outperforms the best human brains in every significant cognitive domain.
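To give a rough sense of that speed gap, one can compare a cortical neuron's peak firing rate (on the order of a kilohertz) with a commodity processor's clock (on the order of a gigahertz). The sketch below is only a back-of-envelope illustration using those assumed, order-of-magnitude figures, not measurements from the essay.

```python
# Back-of-envelope comparison of per-element signalling rates.
# Both figures are assumed order-of-magnitude stand-ins, not measurements.
NEURON_PEAK_FIRING_HZ = 1e3      # biological neurons spike at most ~1 kHz
PROCESSOR_CLOCK_HZ = 1e9         # digital logic switches at ~GHz rates

speed_ratio = PROCESSOR_CLOCK_HZ / NEURON_PEAK_FIRING_HZ
print(f"Per-element speed advantage: ~{speed_ratio:.0e}x")   # ~1e+06, i.e. a million-fold
```

Under these assumed numbers the per-element advantage comes out at roughly a million-fold, which is the order of magnitude the argument turns on.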

The spectrum of approaches to creating artificial (general) intelligence ranges from completely unnatural techniques, such as those used in good old-fashioned AI, to architectures modelled more closely on the human brain. The extreme of biological imitation is whole brain emulation, or "uploading". This approach would involve creating a very detailed 3D map of an actual brain — showing neurons, synaptic interconnections, and other relevant detail — by scanning slices of it and generating an image using computer software. Using computational models of how the basic elements operate, the whole brain could then be emulated on a sufficiently capacious computer.
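The phrase "computational models of how the basic elements operate" can be made concrete with a deliberately toy example: a single leaky integrate-and-fire neuron stepped forward in discrete time. The model and its parameters are illustrative assumptions only; a genuine emulation would run billions of far richer component models, reconstructed from the scan and wired together.

```python
# Toy leaky integrate-and-fire model of one "basic element" (all parameters assumed).
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Step a single model neuron through time; return spike times in seconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and is driven by the input current.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_thresh:            # crossing threshold emits a spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant 2 nA drive for 100 ms yields a regular spike train.
print(simulate_lif([2e-9] * 1000))
```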

The ultimate success of biology-inspired approaches seems more certain, since they can progress by piecemeal reverse-engineering of the one physical system already known to be capable of general intelligence, the brain. However, some unnatural or hybrid approach might well get there sooner.

It is difficult to predict how long it will take to develop human-level artificial general intelligence. The prospect does not seem imminent. But whether it will take a couple of decades, many decades, or centuries, is probably not something that we are currently in a position to know. We should acknowledge this uncertainty by assigning some non-trivial degree of credence to each of these possibilities.

However long it takes to get from here to roughly human-level machine intelligence, the step from there to superintelligence is likely to be much quicker. In one type of scenario, "the singularity hypothesis", some sufficiently advanced and easily modifiable machine intelligence (a "seed AI") applies its wits to create a smarter version of itself. This smarter version uses its greater intelligence to improve itself even further. The process is iterative, and each cycle is faster than its predecessor. The result is an intelligence explosion. Within some very short period of time — weeks, hours — radical superintelligence is attained.
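A toy numerical model makes the shape of this scenario visible: suppose each cycle multiplies capability by a fixed factor and, because the system is now smarter, shortens the time needed for the next cycle by the same factor. The parameters below are pure assumptions chosen only to exhibit the dynamic, not a forecast.

```python
# Toy model of recursive self-improvement (all parameters are assumptions, not forecasts).
capability = 1.0        # stylised "human level"
cycle_time = 30.0       # days needed for the first self-improvement cycle
gain_per_cycle = 1.5    # capability multiplier achieved each cycle
elapsed = 0.0

for cycle in range(1, 21):
    elapsed += cycle_time
    capability *= gain_per_cycle
    cycle_time /= gain_per_cycle   # a smarter system finishes its next redesign sooner
    print(f"cycle {cycle:2d}: {capability:9.1f}x human level at day {elapsed:5.1f}")

# The cycle times form a geometric series, so total elapsed time stays below
# 30 / (1 - 1/1.5) = 90 days even as capability grows without bound.
```

The point of the sketch is only that, once each improvement also shortens the next cycle, capability can grow without bound while total elapsed time stays finite; whether anything like the assumed numbers obtains is exactly what remains uncertain.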

Whether abrupt and singular, or more gradual and multi-polar, the transition from human-level to superintelligence would be of pivotal significance. Superintelligence would be the last invention biological man would ever need to make, since, by definition, it would be much better at inventing than we are. All sorts of theoretically possible technologies could be developed quickly by superintelligence — advanced molecular manufacturing, medical nanotechnology, human enhancement technologies, uploading, weapons of all kinds, lifelike virtual realities, self-replicating space-colonizing robotic probes, and more. It would also be super-effective at creating plans and strategies, working out philosophical problems, persuading and manipulating, and much else besides.

It is an open question whether the consequences would be for the better or the worse. The potential upside is clearly enormous; but the downside includes existential risk. Humanity's future might one day depend on the initial conditions we create, in particular on whether we successfully design the system (e.g., the seed AI's goal architecture) in such a way as to make it "human-friendly" — in the best possible interpretation of that term.