2015 : WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

Professor of Cognitive Philosophy, Department of Philosophy and Department of Informatics, University of Sussex, Brighton, UK; Author, Surfing Uncertainty: Prediction, Action, and the Embodied Mind
You Are What You Eat: Home-Grown A.I.s and the Big Data Food Chain

A common theme in recent writings about machine intelligence is that the best new learning machines will constitute rather alien forms of intelligence. I'm not so sure. The reasoning behind the 'alien AIs' image usually goes something like this. The best way to get machines to solve hard real-world problems is to set them up as statistically-sensitive learning machines able to benefit maximally from exposure to 'big data'. Such machines will often learn to solve complex problems by detecting patterns, and patterns among patterns, and patterns within patterns, hidden deep in the massed data streams to which they are exposed. This will most likely be achieved using 'deep learning' algorithms to mine deeper and deeper into the data streams. After such learning is complete, what results may be a system that works but whose knowledge structures are opaque to the engineers and programmers who set the system up in the first place.
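
To make the 'works but opaque' point concrete, here is a minimal illustrative sketch of my own (not anything the essay depends on), written in Python with numpy: a tiny two-layer network learns the classic XOR mapping by gradient descent. After training it behaves correctly, yet the knowledge it has acquired is just a pair of weight matrices with no obviously human-readable structure.

```python
import numpy as np

# Toy data: XOR, a problem no single-layer (purely linear) learner can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: detects simple patterns
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer: patterns among patterns

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error, applied layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print("predictions:", out.round(2).ravel())  # close to [0, 1, 1, 0]
print("hidden weights:\n", W1.round(2))      # correct behaviour, opaque encoding
```

The point of the toy is only this: even at miniature scale, what the system 'knows' after learning is smeared across numerical parameters rather than stored as inspectable rules, which is the sense of opacity at issue.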

Opaque? In one sense, yes. We won't (at least without further work) know in detail what has become encoded as a result of all that deep, multi-level, statistically-driven learning. But alien? I'm going to take a big punt at this point and road-test a possibly outrageous claim. I suspect that the more these machines learn, the more they will end up thinking in ways that are recognizably human. They will end up having a broad structure of human-like concepts with which to approach their tasks and decisions. They may even learn to apply emotional and ethical labels in roughly the same ways we do. If I am right, this somewhat undermines the common worry that these are emerging alien intelligences whose goals and interests we cannot fathom, and that might therefore turn on us in unexpected ways. By contrast, I suspect that the ways they might turn on us will be all-too-familiar—and thus hopefully avoidable by the usual steps of extending due respect and freedom.

Why would the machines think like us? The reason for this has nothing to do with our ways of thinking being objectively right or unique. Rather, it has to do with what I'll dub the 'big data food chain'. These AIs, if they are to emerge as plausible forms of general intelligence, will have to learn by consuming the vast electronic trails of human experience and human interests. For this is the biggest repository of general facts about the world that we have available. To break free of restricted uni-dimensional domains, these AIs will have to trawl the mundane seas of words and images that we lay down on Facebook, Google, Amazon, and Twitter. Where before they may have been force-fed a diet of astronomical objects or protein-folding puzzles, the breakthrough general intelligences will need a richer and more varied diet. That diet will be the massed strata of human experience preserved in our daily electronic media.

The statistical baths in which we immerse these potent learning machines will thus be all-too-familiar. They will feed off the fossil trails of our own engagements: a zillion images of bouncing babies, bouncing balls, LOL-cats, and potatoes that look like the Pope. These are the things that they must crunch into a multi-level world-model, finding the features, entities, and properties (latent variables) that best capture the streams of data to which they are exposed. Fed on such a diet, these AIs may have little choice but to develop a world-model that has much in common with our own. They are probably more in danger of becoming Super Mario freaks than becoming super-villains intent on world domination.
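
As a toy illustration of what 'finding the latent variables that best capture the data' can mean (again a hypothetical sketch of my own, using principal component analysis as the simplest stand-in for far richer learning methods), the features a learner extracts track whatever structure dominates its training diet; change the diet and the learned features change with it.

```python
import numpy as np

rng = np.random.default_rng(1)

def learned_directions(data):
    """Return the top two latent directions (principal components) of a dataset."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:2]

# Two 'diets': 3-dimensional observations generated from different hidden factors.
diet_a = rng.normal(size=(1000, 2)) @ np.array([[3.0, 0.1, 0.1],
                                                [0.1, 2.0, 0.1]])
diet_b = rng.normal(size=(1000, 2)) @ np.array([[0.1, 0.1, 3.0],
                                                [0.1, 2.0, 0.1]])

print("features learned from diet A:\n", learned_directions(diet_a).round(2))
print("features learned from diet B:\n", learned_directions(diet_b).round(2))
# The extracted directions differ because the data differ: you are what you eat.
```

Deep networks trained on human-generated media are doing something vastly more elaborate than this, but the moral carries over: the latent variables a learner settles on are shaped by the statistics of what it is fed.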

Such a diagnosis (which is tentative and at least a little playful) goes against two prevailing views. First, as mentioned earlier, it goes against the view that current and future AIs are basically alien forms of intelligence feeding off big data and crunching statistics in ways that will render their intelligences increasingly opaque to human understanding. On the contrary, access to more and more data, of the kind most freely available, won't make them more alien but less so.

Second, it questions the view that the royal route to human-style understanding is human-style embodiment, with all the interactive potentialities (to stand, sit, jump, etc.) that such embodiment implies. For although our own typical route to understanding the world goes via a host of such interactions, it seems quite possible that theirs need not. Such systems will doubtless enjoy some (probably many and various) means of interacting with the physical world. These encounters will be combined, however, with exposure to rich information trails reflecting our own modes of interaction with the world. So it seems possible that they could come to understand and appreciate soccer and baseball just as much as the next person. An apt comparison here might be with a differently-abled human being.

There's lots more to think about here, of course. For example, the AIs will see huge swathes of human electronic trails, and will thus be able to discern patterns of influence among them over time. That means they may come to model us less as individuals and more as a kind of complex distributed system. That's a difference that might make a difference. And what about motivation and emotion? Maybe these depend essentially upon features of our human embodiment such as gut feelings and visceral responses to danger? Perhaps, but notice that these features of human life have themselves left fossil trails in our electronic repositories.

I might be wrong. But at the very least, I think we should think twice before casting our home-grown AIs as emerging forms of alien intelligence. You are what you eat, and these learning systems will have to eat us. Big time.