I see many misunderstandings in current discussions about the nature of mind, such as the assumption that if we create sophisticated AI, it will inevitably be conscious. There is also this idea that we should “merge with AI”—that in order for humans to keep up with developments in AI and not succumb to hostile superintelligent AIs or AI-based technological unemployment, we need to enhance our own brains with AI technology.
One thing that worries me about all this is that I don't think AI companies should be settling issues involving the shape of the mind. The future of the mind should be a cultural decision and an individual decision. Many of the issues at stake here involve classic philosophical problems that have no easy solutions. I'm thinking, for example, of theories of the nature of the person in the field of metaphysics. Suppose that you add a microchip to enhance your working memory, and then years later you add another microchip to integrate yourself with the Internet, and you just keep adding enhancement after enhancement. At what point are you even still you? When you think about enhancing the brain, the idea is to improve your life—to make you smarter, or happier, maybe even to live longer, or have a sharper brain as you grow older—but what if those enhancements change us in such drastic ways that we're no longer the same person?
SUSAN SCHNEIDER holds the Distinguished Scholar chair at the Library of Congress and is the director of the AI, Mind and Society ("AIMS") Group at the University of Connecticut.