Why should people think about machines that think (or anything that thinks, for that matter)? People do ponder others' thoughts—under certain circumstances. One tipping point might involve considering others as "agents" rather than "automata." On the one hand, automata act at the behest of their creators (even if removed in space or time). Thus, if automata misbehave, the creator gets the blame. On the other hand, agents act based on their own agendas. When agents misbehave, they themselves are to blame.
While agency is difficult to define, people naturally and rapidly distinguish agents from nonagents, and may even use specialized neural circuits to infer others' feelings and thoughts. In fact, designers can co-opt features associated with agency to fool people into thinking that they are interacting with agents (including physical similarity, responsiveness to feedback, and self-generated action). But beyond external appearances, what is necessary to endow an entity with agency? While at least three alternatives present themselves, two of the most popular and seductive possibilities may not be necessary:
1. Physical similarity. There are infinite ways to make machines similar to humans, both in terms of appearance and behavior—but ultimately, only one of these is accurate. It is not enough to duplicate the software—one also has to implement it on the underlying hardware, with all of its associated affordances and limitations.
One of the first automata, de Vaucanson's duck, appeared remarkably similar to a duck, right down to its digestion. But while it may have looked like a duck and quacked like a duck (and even crapped like a duck), it was still not a duck. Nonetheless, maximizing physical similarity is an easy way to trick others into inferring agency (at least, initially).
2. Self-awareness. Many seem concerned that if machines consume enough information, they will become self-aware, and that self-aware machines will then develop their own sense of agency—but neither logic nor evidence supports these extrapolations. While robots have apparently been trained to recognize themselves in mirrors and sense the position of their appendages, these trappings of self-awareness have not led to laboratory revolts or surgical lapses. Perhaps conveying a sense of self-awareness would cause others to infer that a machine had greater agency (or at least entertain philosophers), but self-awareness alone does not seem necessary for agency.
3. Self-interest. Humans are not mere information processors. They are survival processors. They prefer to focus and act on information that promotes their continuance and procreation. Thus, humans process information based on self-interest. Self-interest can provide a unified but open framework for prioritizing and acting on almost any input.
Thanks to a clever evolutionary trick, humans do not even need to be aware of their goals, since intermediate states like emotions can stand in for self-interest. Armed with self-interest and an ability to flexibly align responses to changing opportunities and threats, machines might develop agency. Thus, self-interest might provide a necessary building block of agency, and also could powerfully evoke agentic inferences from others.
Self-interest might transform machines that act on the world (or "robots") from automata into agents. Self-interest also flips the ordering (but not the content) of Asimov's prescient laws of robotics:
(1) robots must not harm humans,
(2) robots must help humans (unless this violates the first law), and
(3) robots must protect themselves (unless this violates the first two laws).
A self-interested robot would instead protect itself before helping or averting harm to humans. Constructing a self-interested robot would then seem straightforward: endow it with survival and procreation goals, allow it to learn what promotes those goals, and motivate it to continually act on what it learns.
Still, we should think twice before building self-interested robots. Self-interest can conflict with others' interests. Witness the destructive impact of viruses' simple drives to survive and spawn in the virtual world. If self-interested robots did exist, we would have to think about them more seriously. Their presence would raise basic questions: Should these robots have self-interest? Should they be allowed to act on it? Should they do so without awareness of why they were acting that way?
And, don't we have enough of these robots already?