However, there's more to computer science than that. Many people think of computer science as the science of what computers do, but I think of it quite differently: Computer Science is a new collection of ways to describe and think about complicated systems. It comes with a huge library of new, useful concepts about how mental processes might work. For example, most of the ancient theories of memory envisioned knowledge as being like facts stored in a box. Later theories began to distinguish between short-term and long-term memories, and conjectured that skills are stored in other ways.

However, Computer Science suggests dozens of plausible ways to store knowledge away - as items in a database, as sets of "if-then" reaction rules, in the form of semantic networks (in which little fragments of information are connected by links that themselves have properties), as program-like procedural scripts, as neural networks, and so on. Neural networks are wonderful for learning certain things, but almost useless for other kinds of knowledge, because few higher-level processes can 'reflect' on what's inside a neural network. This means that the rest of the brain cannot think and reason about what was learned in that particular way. In artificial intelligence, we have learned many tricks that make programs faster - but in the long run they lead to limitations, because the results of neural-network-style learning are too 'opaque' for other programs to understand.
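To make one of those representations concrete, here is a minimal sketch - purely illustrative, not something from the text - of a semantic network in which the links themselves carry properties. The class names and the particular link attributes ("source", "confidence") are my own assumptions.

class Link:
    def __init__(self, relation, target, **properties):
        self.relation = relation      # e.g. "is-a" or "part-of"
        self.target = target          # the node this link points to
        self.properties = properties  # the link itself carries attributes

class Node:
    def __init__(self, name):
        self.name = name
        self.links = []

    def connect(self, relation, target, **properties):
        self.links.append(Link(relation, target, **properties))

# A tiny fragment of knowledge: "a canary is a bird", learned from a textbook.
bird = Node("bird")
canary = Node("canary")
canary.connect("is-a", bird, source="textbook", confidence=0.95)

for link in canary.links:
    print(canary.name, link.relation, link.target.name, link.properties)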

Yet even today, most brain scientists do not seem to know, for example, about cache memory. If you buy a computer today you'll be told that it has a big memory on its slow hard disk, but that it also has a much faster memory called a cache, which remembers the last few things it did in case it needs them again, so that it doesn't have to go and look somewhere else for them. And modern machines each use several such schemes - but I've not heard anyone talk about the hippocampus that way. All this suggests that brain scientists have been too conservative; they've not made enough hypotheses - and therefore, most experiments have been trying to distinguish between wrong alternatives.
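To show the idea in the terms a programmer would use, here is a minimal sketch in Python - my own illustration, not anything proposed in the text - of such a cache: the last few answers are kept close at hand so the program needn't go back to the slower store.

from functools import lru_cache

@lru_cache(maxsize=8)          # remember only the most recent results
def slow_lookup(key):
    # stand-in for fetching something from slow storage (a disk, a database, ...)
    print("fetching", key, "from slow storage")
    return key.upper()

slow_lookup("hippocampus")     # first call: goes to the slow store
slow_lookup("hippocampus")     # second call: answered from the cache, no fetch

Whether the hippocampus really works anything like this is exactly the kind of hypothesis the paragraph above says has gone largely unexplored.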

Reinforcement vs. Credit assignment.

There have been several projects aimed at making some sort of "Baby Machine" that would learn and develop by itself - to eventually become intelligent. However, all such projects, so far, have only progressed to a certain point and then grown weaker or even deteriorated. One problem has been finding adequate ways to represent the knowledge they were acquiring. Another problem was not having good schemes for what we sometimes call 'credit assignment' - that is, how do you learn the things that are relevant, the things that are essential rather than accidental? For example, suppose that you find a new way to handle a screwdriver so that the screw remains in line and doesn't fall out. What is it that you learn? It certainly won't suffice merely to learn the exact sequence of motions (because the spatial relations will be different next time) - so you have to learn at some higher level of representation. How do you make the right abstractions? Also, when some experiment works, and you've done ten different things on the path toward success, which of those should you remember, and how should you represent them? How do you figure out which parts of your activity were relevant? Older psychology theories used the simple idea of 'reinforcing' what you did most recently. But that doesn't seem to work so well as the problems at hand get more complex. Clearly, one has to reinforce plans and not actions - which means that good Credit-Assignment has to involve some thinking about the things that you've done. But still, no one has designed and debugged a good architecture for doing such things.
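As a toy illustration of that last point - my own sketch, since the essay proposes no specific algorithm - the following contrasts 'reinforce whatever you did most recently' with spreading credit over the steps that belonged to the plan. The part_of_plan flags are a stand-in for the kind of reflection on one's own activity that real credit assignment would require.

episode = [
    {"action": "pick up screwdriver", "part_of_plan": True},
    {"action": "whistle a tune",      "part_of_plan": False},  # an accident, not an essential
    {"action": "keep screw in line",  "part_of_plan": True},
    {"action": "turn handle",         "part_of_plan": True},
]

def reinforce_last_action(episode, reward, weights):
    # the older scheme: all credit goes to whatever happened last
    last = episode[-1]["action"]
    weights[last] = weights.get(last, 0.0) + reward

def reinforce_plan(episode, reward, weights):
    # credit the steps that were part of the plan, not the accidents
    relevant = [step for step in episode if step["part_of_plan"]]
    for step in relevant:
        weights[step["action"]] = weights.get(step["action"], 0.0) + reward / len(relevant)

w_last, w_plan = {}, {}
reinforce_last_action(episode, reward=1.0, weights=w_last)
reinforce_plan(episode, reward=1.0, weights=w_plan)
print("last-action credit:", w_last)
print("plan-level credit: ", w_plan)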

We need better programming languages and architectures.

I find it strange how little progress we've seen in the design of problem-solving programs - or of languages for describing them, or of machines for implementing those designs. The first experiments to get programs to simulate human problem-solving started in the early 1950s, just before computers became available to the general public; for example, the work of Newell, Simon, and Shaw using the early machine designed by John von Neumann's group. To do this, they developed the list-processing language IPL. Around 1960, John McCarthy developed a higher-level language, LISP, which made it easier to do such things; now one could write programs that could modify themselves in real time. Unfortunately, the rest of the programming community did not recognize the importance of this, so the world is now dominated by clumsy languages like Fortran, C, and their successors - which describe programs that cannot change themselves. Modern operating systems suffered the same fate, so we see the industry turning to the 35-year-old system called Unix, a fossil retrieved from the ancient past because its competitors became so filled with stuff that no one could understand and modify them. So now we're starting over again, most likely to make the same mistakes again. What's wrong with the computing community?
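For readers unfamiliar with the point about self-modification, here is a minimal sketch - in Python rather than LISP, and entirely my own illustration - of a program treating its code as data that it can rebuild while running.

source = "def greet(name):\n    return 'hello, ' + name\n"
namespace = {}
exec(source, namespace)                 # the program builds new code at run time
print(namespace["greet"]("world"))      # -> hello, world

# The running program now rewrites that definition and installs the new version.
source = source.replace("hello", "goodbye")
exec(source, namespace)
print(namespace["greet"]("world"))      # -> goodbye, world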
