2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Anthropologist; Emeritus Research Director, Centre National de la Recherche Scientifique, Institut Jean Nicod, Paris; Co-Founder, Centre for the Resolution of Intractable Conflict, University of Oxford; Author, Talking to the Enemy
What Neuroscience And Machine Models Of The Mind Should Be Looking For


Machines can perfectly imitate some of the ways humans think all of the time, and can consistently outperform humans on some thinking tasks all of the time, but computing machines as usually envisioned will not get human thinking right all of the time, because in the domains commonly associated with human creativity they actually process information in ways opposite to how humans do.

Machines can faithfully imitate the results of some human thought processes whose outcomes are fixed (remembering people's favorite movies, recognizing familiar objects) or dynamic (jet piloting, grand master chess play). And machines can outperform human thought processes, in little time and with little energy, in matters both simple (memorizing indefinitely many telephone numbers) and complex (identifying, from trillions of global communications, social networks whose members may be unaware they are part of the network).

However underdeveloped they are now, I see no principled reason why machines operating independently of direct human control cannot learn from people's—or their own—fallibilities, and so evolve, create new forms of art and architecture, excel in sports (some novel combination of Deep Blue and Oscar Pistorius), invent new medicines, spot talent and exploit educational opportunities, provide quality assurance, or even build and use weapons that destroy people but not other machines.

But if the current focus in artificial intelligence and neuroscience persists, namely reliably identifying patterns of connection and wiring as a function of past connections and forward probabilities, then I don't think machines will ever be able to capture (imitate) critically creative human thought processes, including novel hypothesis formation in science or even ordinary language production.

Newton's laws of motion or Einstein's insights into relativity required imagining ideal worlds without precedent in any past or plausible future experience, such as moving in a world without friction or chasing a beam of light through a vacuum. Such thoughts require levels of abstraction and idealization that disregard, rather than assimilate, as much information as possible to begin with.

Increasingly sophisticated and efficient patterns of input and output, using supercomputers accessing massive data sets and constantly refined by Bayesian probabilities or other statistics based on degrees of belief in states of nature, may well produce ever better sentences and translations, or pleasing musical melodies and novel techno variations. In this way, machines may come to approximate, through a sort of reverse engineering, what human children or experts effortlessly do when they begin with fairly well-articulated internal structures in order to draw in and interpret relevant input from an otherwise impossibly noisy world. Humans know from the outset what they are looking for through the noise: in a sense they are there before they start; computing machines can never be sure they are there.
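
To fix ideas, here is a minimal sketch of such statistical refinement, assuming only textbook Bayes' rule; the probabilities and the helper name are invented for illustration, not drawn from any particular system:

    # Bayes' rule: posterior = likelihood x prior / evidence.
    # 'update' is a hypothetical helper, not any particular system's API.
    def update(prior, p_data_if_h, p_data_if_not_h):
        """Return the revised degree of belief in hypothesis H after one datum."""
        evidence = p_data_if_h * prior + p_data_if_not_h * (1 - prior)
        return p_data_if_h * prior / evidence

    belief = 0.5  # begin agnostic about H
    for _ in range(3):  # each datum is twice as likely if H is true
        belief = update(belief, 0.8, 0.4)
    print(round(belief, 3))  # prints 0.889: belief rises with consistent data

Note the direction of information flow in the sketch: the degree of belief is entirely a function of accumulated input, the reverse of beginning with internal structure and discarding as much input as possible.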

Can machines that operate independently of direct human control consistently interact with humans in ways such that humans believe themselves to be always interacting with other humans and not machines? Machines can come vanishingly close in many areas, and surpass humans mightily in others; but just as even the most highly skilled con artist always has some probability—however small—of being caught in deception, whereas the honest person never deceives and so can never be caught, so the associationist-connectionist machine that operates on stochastic rather than structure-dependent principles may never quite get the sense or sensibility of it all.
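
To make that contrast concrete, here is a minimal sketch of purely stochastic, associationist text generation, assuming only word-to-word co-occurrence; the toy corpus and names are invented for illustration. Each step is a draw from local transition statistics, with no representation of sentence structure anywhere in the model:

    import random
    from collections import defaultdict

    # Toy corpus; purely illustrative.
    corpus = "the child chased the ball and the dog chased the child".split()

    # Record which words have followed which: bare association, no grammar.
    transitions = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev].append(nxt)

    def babble(start, length=8):
        """Chain words by local co-occurrence alone, blind to structure."""
        word, out = start, [start]
        for _ in range(length - 1):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)  # stochastic, structure-blind step
            out.append(word)
        return " ".join(out)

    # One possible output: "the dog chased the ball and the child"
    print(babble("the"))

Such a generator can sound increasingly fluent as its statistics improve, yet at no point does it contain, or need, the structure-dependent principles at issue.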

In principle, structurally richer machines, with internal architecture—beyond "read," "write," and "address"—can be built (indeed, earlier advocates of AI added logical syntax); such machines could interact with some degree of fallibility (for without error, no learning is possible) and culturally evolve. But the current emphasis in much AI and neuroscience, which is to replace posits of abstract psychological structures with physically palpable neural networks and the like, seems to be going in precisely the wrong direction.

Rather, the cognitive structures that psychologists posit (provided they are descriptively adequate, plausibly explanatory, and empirically tested against alternatives and the null hypothesis) should be the point of departure—what it is that neuroscience and machine models of the mind should be looking for. If we then discover that different abstract structures operate through the same physical substrate, or that similar structures operate through different substrates, then we have a novel and interesting problem that may lead to a revision in our conception of both structure and substrate. The fact that such simple and basic matters as these are puzzling (or even excluded, a priori, from the puzzle) tells us how very primitive the science of mind still is, whether of human brain or machine.