2007 : WHAT ARE YOU OPTIMISTIC ABOUT?

Terrence J. Sejnowski
Computational Neuroscientist; Francis Crick Professor, the Salk Institute; Investigator, Howard Hughes Medical Institute; Co-author (with Patricia Churchland), The Computational Brain

A Breakthrough In Understanding Intelligence Is Around The Corner

The clinically depressed often have a more realistic view of their problems than those who are optimistic. Without a biological drive for optimism, it might be difficult to motivate humans to take on difficult problems and face long odds. What optimistic view of the future drives string theorists in physics, working on theories that are probably hundreds of years ahead of their time? There is always the hope that a breakthrough is just around the corner.

In 1956 a small group of optimists met for a summer conference at Dartmouth, inspired by the recent invention of digital computers and breakthroughs in writing computer programs that could prove mathematical theorems and play games. Since mathematics was among the highest levels of human achievement, they thought that engineered intelligence was imminent. Last summer, 50 years later, another meeting was held at Dartmouth that brought together the founders of Artificial Intelligence and a new generation of researchers. Despite all the evidence to the contrary, the pioneers from the first meeting were still optimistic and chided the younger generation for having given up the goal of achieving human-level intelligence.

Problems that seem easy, like seeing, hearing and moving about, are much more difficult to program than theorem proving and chess. How could this be?  It took hundreds of millions of years to evolve efficient ways for animals to find food, avoid danger and interact with one another, but humans have been developing mathematics for only a few thousand years, probably using bits of our brains that were meant to do something altogether different. We vastly underestimated the complexity of our interactions with the world because we are unaware of the immense computation our brains perform to make seeing objects and turning doorknobs seem effortless.

The early pioneers of AI sought logical descriptions that were black or white and geometric models with a few parameters, but the world is high dimensional and comes in shades of gray. The new generation of researchers has made progress by focusing on specific problems in computer vision, planning, and other areas of AI. Intractable problems have yielded to probabilistic analysis of large databases using powerful statistical techniques. The first algorithms that could handle this complexity were neural networks with many thousands of parameters that learned to categorize input patterns from labeled examples. New machine learning algorithms have been discovered that can extract hidden statistical structure from large datasets without the need for any labels. Progress is accelerating now that the internet provides truly large datasets of text and images. Computational linguists, for example, have adopted statistical algorithms for parsing sentences and translating languages, having found transformational grammars too impoverished.
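As a rough illustration of learning from labeled examples (a toy sketch, not a description of any particular system from that period), a single artificial neuron can learn to categorize its inputs by nudging its weights each time it makes a mistake:

    # Toy perceptron: learns to categorize labeled inputs by correcting its
    # weights after each mistake. Data and parameters are invented for illustration.
    def train_perceptron(examples, n_inputs, epochs=20, lr=0.1):
        weights, bias = [0.0] * n_inputs, 0.0
        for _ in range(epochs):
            for inputs, label in examples:              # label is +1 or -1
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                prediction = 1 if activation > 0 else -1
                if prediction != label:                 # learn only from errors
                    for i, x in enumerate(inputs):
                        weights[i] += lr * label * x
                    bias += lr * label
        return weights, bias

    # Hypothetical labeled data: points labeled +1 when x + y exceeds 1, else -1.
    data = [((0.2, 0.1), -1), ((0.9, 0.8), 1), ((0.4, 0.3), -1), ((0.7, 0.9), 1)]
    weights, bias = train_perceptron(data, n_inputs=2)

The networks described above had many thousands of such adjustable parameters and used more sophisticated error-driven rules, but the basic loop of prediction, comparison with a label, and correction is the same.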

One of the most impressive learning systems is TD-Gammon, a computer program that taught itself to play backgammon at the championship level. Built by Gerald Tesauro at IBM Yorktown Heights, TD-Gammon started out with little more than the board position and the rules of the game, and the only feedback was who won. TD-Gammon solved the temporal credit assignment problem: if after a long string of choices you win, how do you know which choices were responsible for the victory? Unlike rule-based game programs, TD-Gammon discovered better ways to play positions on its own, and developed a surprisingly subtle sense of when to play safely and when to be aggressive. This captures some important aspects of human intelligence.
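The idea behind temporal difference learning can be sketched in a few lines (a toy illustration, not Tesauro's actual program): the estimated value of each position is nudged toward the estimated value of the position that follows it, so a reward that arrives only at the end of a game gradually propagates back through the sequence of choices that led to it.

    # Toy TD(0) update over one episode of states, with a single reward at the end.
    # The state names, step size, and rewards are invented for illustration.
    def td_update(values, episode, final_reward, alpha=0.1, gamma=1.0):
        for t in range(len(episode) - 1):
            s, s_next = episode[t], episode[t + 1]
            target = gamma * values.get(s_next, 0.0)     # no reward until the end
            values[s] = values.get(s, 0.0) + alpha * (target - values.get(s, 0.0))
        last = episode[-1]
        values[last] = values.get(last, 0.0) + alpha * (final_reward - values.get(last, 0.0))
        return values

    # Replaying games repeatedly makes early positions inherit credit for eventual wins.
    values = {}
    for _ in range(100):
        for game in (["opening", "midgame", "win"], ["opening", "midgame", "loss"]):
            td_update(values, game, final_reward=1.0 if game[-1] == "win" else 0.0)

Roughly speaking, TD-Gammon produced its value estimates with a neural network rather than a table and applied the updates after every move, but the underlying bookkeeping is the same.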

Neuroscientists have discovered that dopamine neurons, found in the brains of all vertebrates, are central to reward learning. The transient responses of dopamine neurons signal to the rest of the brain predictions of future reward, which are used to guide behavior and regulate synaptic plasticity. These dopamine responses have the same properties as the error signal in the temporal difference learning algorithm used by TD-Gammon. Reinforcement learning was dismissed years ago as too weak a learner to handle the complexity of cognition. This belief needs to be re-evaluated in light of the successes of TD-Gammon and of learning algorithms in other areas of AI.
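In the standard textbook formulation of this correspondence (a summary of the temporal difference idea, not of any particular experiment), the quantity the dopamine response is thought to resemble is the prediction error

    delta(t) = r(t) + gamma * V(t+1) - V(t)

where r(t) is the reward just received, V(t) is the brain's current estimate of future reward from moment t, and gamma discounts delays. The error is the difference between what just happened plus what is now expected, and what had been expected a moment earlier; a positive error strengthens the choices that preceded it, while an error of zero, once rewards are fully predicted, leaves them unchanged.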

What would a biological theory of intelligence look like, based on internal brain states derived from experimental studies rather than introspection? I am optimistic that we are finally on the right track and that, before too long, an unexpected breakthrough will occur.