In November, the site Edge.org hosted a discussion of the dangers of artificial intelligence (AI), prompted by recent warnings from Stephen Hawking and the philosopher Nick Bostrom about what superior and possibly malevolent artificial intelligences could get up to. Only two contributions were worth reading: Jaron Lanier's, which is critical of AI, and Rodney Brooks's, which is positive. ...
This was quite a week. Settle into your favorite easy chair, pour yourself some freshly brewed Sumatra coffee and enjoy these longer-form weekend reads:
- What Do You Think About Machines That Think? (Edge)
Whether it’s the four bodily humors, the geocentric universe, or the steady state theory, sometimes an old idea has to die before new science can flourish. (Just ask Copernicus.) A new anthology edited by Edge.org’s John Brockman aims to speed that process along by asking scientists and big thinkers which scientific concepts they’d target for extinction. Ira talks with two contributors to This Idea Must Die—theoretical physicist Sean Carroll and quantum mechanic Seth Lloyd— about the ideas they’d like to give a good shove out the door. Read an excerpt from the book here, and vote for which ideas you think should die.
Once again, the online magazine Edge has returned to stimulate an exciting intellectual debate of the highest order, posing its annual question to some of the brightest minds of our time. On this occasion, its brilliant editor John Brockman has raised the challenge of dissecting the lights and shadows of artificial intelligence (AI): "What do you think about machines that think?" The responses reflect a wide range of views among some of the great scientists and thinkers of the world today, showing that there is no clear consensus on whether we should celebrate or fear the emergence of thinking machines.
At one end is the great American philosopher Daniel C. Dennett, who scornfully mocks the "urban legend" according to which "the robots will dominate us" in the near future. At the other are scientists of the stature of NASA astrophysicist and Nobel laureate John C. Mather, who is convinced that artificial intelligence "will become a reality, and quite soon," given the massive amount of money already being invested in this field and the enormous potential benefits awaiting the entrepreneurs who build the first computers with human (or superhuman) intelligence.
However, although the experts do not venture to predict whether the era of AI is near or far, there is very broad consensus on the unstoppable advent, sooner or later, of this revolution. The reason is explained very well by the physicist and Nobel laureate Frank Wilczek, citing the famous "astonishing hypothesis" of the co-discoverer of the structure of DNA, Francis Crick: the human mind is nothing more than "an emergent property of matter," and therefore "all intelligence is machine intelligence" (whether a brain formed of neurons or a robot built from silicon chips).
As the great Spanish neuroscientist Rafael Yuste said in a memorable interview: "Inside the skull there is no magic. The human mind and all our thoughts, our memories and our personality, everything is based on the firing of groups of neurons. There is nothing more; there is no spirit in the ether... There is a great lack of knowledge about how this machine operates. But I am sure that consciousness arises from the physical substrate we have in the brain."
And so, as the biologist George Church says in his own answer to the Edge question, "I am a thinking machine, made of atoms." If this is true, the appearance of another type of machine that can also think is only a matter of time.
From the self to left brain vs. right brain to romantic love, a catalog of broken theories that hold us back from the conquest of Truth.
“To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact,” asserted Charles Darwin in one of the eleven rules for critical thinking known as Prospero’s Precepts. If science and human knowledge progress in leaps and bounds of ignorance, then the recognition of error and the transcendence of falsehood are the springboard for the leaps of progress. That’s the premise behind This Idea Must Die: Scientific Theories That Are Blocking Progress (public library) — a compendium of answers Edge founder John Brockman collected by posing his annual question — “What scientific idea is ready for retirement?” — to 175 of the world’s greatest scientists, philosophers, and writers. Among them are Nobel laureates, MacArthur geniuses, and celebrated minds like theoretical physicist and mathematician Freeman Dyson, biological anthropologist Helen Fisher, cognitive scientist and linguist Steven Pinker, media theorist Douglas Rushkoff, philosopher Rebecca Newberger Goldstein, psychologist Howard Gardner, social scientist and technology scholar Sherry Turkle, actor and author Alan Alda, futurist and Wired founding editor Kevin Kelly, and novelist, essayist, and screenwriter Ian McEwan.
Brockman paints the backdrop for the inquiry:
Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858–1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals.
Many of the answers are redundant — but this is a glorious feature rather than a bug of Brockman’s series, for its chief reward is precisely this cumulative effect of discerning the zeitgeist of ideas with which some of our era’s greatest minds are tussling in synchronicity. They point to such retirement-ready ideas as IQ, the self, race, the left brain vs. right brain divide, human nature and essentialism, free will, and even science itself. What emerges is the very thing Carl Sagan deemed vital to truth in his Baloney Detection Kit — a “substantive debate on the evidence by knowledgeable proponents of all points of view.” ...
Complement This Idea Must Die, the entirety of which weaves a mind-stretching mesh of complementary and contradictory perspectives on our relationship with knowledge, with some stimulating answers to previous editions of Brockman’s annual question, exploring the only thing worth worrying about (2013), the single most elegant theory of how the world works (2012), and the best way to make ourselves smarter (2011).
We live in a reformatory whose message is that the future is already determined. What awaits us are robots, artificial life and superior artificial intelligence. We have no choice: we are not to think that any other future is possible. The only thing we can do is bite the bullet and accept this future technology and the dilemmas that come with it, whether we like it or not. The sooner we accept it, the better prepared we will be. The prepackaged cyber future sold to us with so much gloomy moralizing is akin to entering into an arranged marriage in the bad old days. I would like to put a spotlight on the strange, retroactive destiny that characterizes the debate about the future: we are asked to accept the whole package in advance, long before any of it is reality, so that it eventually becomes just as inevitable as it actually is not. ...
"Science advances by a series of funerals," writes John Brockman, founder of the online discussion forum Edge.org. Sometimes, he says, old ideas have to be put to bed before new ones can flourish. With that in mind, he asked researchers, journalists and other science enthusiasts to weigh in on which established theories need to go. From the replies, Brockman compiled This Idea Must Die, a fascinating smorgasbord of 175 short essays about every field and facet of research. ...
A few of the arguments are bound to be controversial. For example, a journalist asserts that the information gleaned from massive particle accelerators isn’t worth their equally massive price tags. And while Brockman’s question inspired some thought-provoking responses, the short essays can provide only a brief overview of complex problems. Readers will want to do some research of their own before deciding which, if any, of these ideas really requires a funeral.
I was seduced by infinity at an early age. Georg Cantor’s diagonality proof that some infinities are bigger than others mesmerized me, and his infinite hierarchy of infinities blew my mind. The assumption that something truly infinite exists in nature underlies every physics course I’ve ever taught at MIT—and, indeed, all of modern physics. But it’s an untested assumption, which begs the question: Is it actually true? ...
Excerpted from This Idea Must Die, edited by John Brockman. Used with permission.
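The diagonal argument mentioned in the excerpt above is standard mathematics, not part of the book; a minimal sketch of Cantor's proof that the reals outnumber the naturals runs as follows. Suppose the real numbers in $(0,1)$ could be listed as $r_1, r_2, r_3, \ldots$, each with a decimal expansion

$$r_n = 0.\,d_{n1}\,d_{n2}\,d_{n3}\ldots$$

Define a new number $x = 0.\,e_1\,e_2\,e_3\ldots$ by choosing each digit to differ from the diagonal:

$$e_n = \begin{cases} 5 & \text{if } d_{nn} \neq 5 \\ 6 & \text{if } d_{nn} = 5 \end{cases}$$

Then $x$ differs from every $r_n$ in the $n$th decimal place, so $x$ appears nowhere in the list. Hence no such list can be complete, $(0,1)$ is uncountable, and $|\mathbb{R}| > |\mathbb{N}|$: one infinity is strictly bigger than another.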
...[O]ne of the most controversial topics in the field of advanced technologies is 'artificial intelligence (AI)'... The exponential increase in computing power described by Moore's Law, the rise of web services from Google, Facebook, Twitter and the like, and machine learning over (really) big data, especially deep-learning techniques, have together sparked a renewed boom in artificial intelligence. ...
Will super artificial intelligence bring an 'existential risk' of the destruction of the human race? ...[C]omputer scientist Jaron Lanier posted an Edge.org comment titled 'The Myth of AI,' and eminent scholars such as Steven Pinker responded heatedly to it. ...[F]or its annual question to its distinguished contributors, Edge chose as its 2015 theme 'What do you think about machines that think?' In response, 186 certified professionals in fields directly or indirectly related to AI, including physics, psychology, cognitive science, neuroscience, computer science, journalism and art, have offered their answers. ...
Love is… at root, biology. A host of endocrine-system-regulated hormones relay chemical messages around the body and brain. Complex loops of physiological feedback between endocrine, nervous, and reproductive systems regulate our sexual responses and maintain homeostasis via hormone-producing glands such as the pituitary and thyroid. We feel the effects of ‘love’ throughout our bodies; even with the reproductive system completely excised our hormones would continue their thrilling course. And we feel it in our brains, in our minds. Modulated by hormones such as oxytocin, neurotransmitting chemicals at synapses lead to inhibition or firing of networks of neurons (baby, you flood my synaptic clefts like no other). Firing or inhibition consolidates or weakens these networks – thus do we fall in, or out of, love.
According to Steven Pinker, ‘Love is not all you need, and does not make the world go round.’ That is true. However, this fluke of natural selection can come to be our everything. Sometimes, the end of love can be the end of meaningful life (and for an unhappy few, literally the end of life). The neurochemical, neurostructural resonances within close relationships – couples, families, tribes – can gift members a sense of shared purpose. When we draw significance from these bonds, from their apparent strength and continuity, we are often driven to try to shape our environments to uphold and sanctify them. This drive has myriad positive effects, but it can also be perilously narrow. If we are to avoid relationship conservatism – and exclusion of those who do not identify with the love paradigm – we must allow the flourishing of love in the widest possible sense. ...
 Steven Pinker, “Evolutionary Genetics and the Conflicts of Human Social Life,” in This Explains Everything: Deep, Beautiful, and Elegant Theories of How the World Works, ed. John Brockman, 1st ed. (New York: Harper Perennial, 2013), 45.
"If at first the idea is not absurd, then there is no hope for it," Albert Einstein reportedly said. I’d like to broaden the definition of addiction—and also retire the scientific idea that all addictions are pathological and harmful.
...Scientists have now shown that food, sex, and gambling compulsions employ many of the same brain pathways activated by substance abuse. Indeed, the 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders (the DSM) has finally acknowledged that at least one form of non-substance abuse—gambling—can be regarded as an addiction. The abuse of sex and food have not yet been included. Neither has romantic love.
I shall propose that love addiction is just as real as any other addiction, in terms of its behavior patterns and brain mechanisms. Moreover, it’s often a positive addiction. ...
Excerpted from This Idea Must Die, edited by John Brockman. Used with permission.
There are few more damning responses to a new study or book or proposal than to say that it relies on “anecdotal” evidence — implying not just that the underlying idea lacks seriousness and objectivity, but that the author is lazy or even untrustworthy. Editors also tend to recoil from anecdotal openings for news stories (in part because most anecdotal ledes are awful), and book critics love to display their smartypants-ness by dissing some new volume as anecdotal.
Nicholas Carr, author of “The Shallows: What the Internet is Doing to Our Brains” (2010), wants to rehabilitate the anecdote. So when Edge.org asked him and other thinkers to answer the question “What scientific idea is ready for retirement?” he had his answer. In “This Idea Must Die: Scientific Theories That Are Blocking Progress,” a collection of 175 short essays from top thinkers, Carr makes his case against anti-anecdotalism in two sharp paragraphs: ...
“This Idea Must Die,” edited by John Brockman, is forthcoming from Harper Perennial on Feb. 17.
Stephen Hawking famously warned in 2010 that based on the history of humankind, an alien, more-advanced civilization would probably destroy us. "We only have to look at ourselves to see how intelligent life might develop into something we wouldn't want to meet," he said. Hawking expressed a similar fear of advanced artificial intelligence (AI) machines. In 2014 he pronounced, "The development of full artificial intelligence could spell the end of the human race." Taken seriously, these two statements could even imply that we should neither search for extrasolar advanced civilizations nor strive for superior AI machines.
I was contemplating these issues when I received the annual EDGE question from "intellectual impresario" John Brockman. Every year Brockman sends to about 200 thinkers a single question, and he posts all the answers on his website, edge.org. The question for 2015 was "What do you think about machines that think?"... I strongly recommend reading all the answers, since they are quite fascinating. ...
It’s difficult to deny that humans began as Homo sapiens, an evolutionary offshoot of the primates. Nevertheless, for most of what is properly called "human history" (that is, the history starting with the invention of writing), most of Homo sapiens have not qualified as "human"—and not simply because they were too young or too disabled.
In sociology, we routinely invoke a trinity of shame—race, class, and gender—to characterize the gap that remains between the normal existence of Homo sapiens and the normative ideal of full humanity. Much of the history of social science can be understood as either directly or indirectly aimed at extending the attribution of humanity to as much of Homo sapiens as possible. It’s for this reason that the welfare state is reasonably touted as social science’s great contribution to politics in the modern era. But perhaps membership in Homo sapiens is neither sufficient nor even necessary to qualify a being as "human." What happens then? ...
Excerpted from This Idea Must Die, edited by John Brockman. Used with permission.
UVM robotics expert contributes essay to world-famous Edge conversation
John Brockman's Edge Question is a major event in the intellectual calendar each year—its roots go back to talks he had with Isaac Asimov and others in 1980. This year's question, "What do you think about machines that think?" drew essays from Daniel C. Dennett, Nicholas Carr, Steven Pinker, Freeman Dyson, George Church and nearly two hundred other luminaries and Nobel Prize winners.
UVM computer scientist and robotics expert Joshua Bongard was asked to weigh in, too. ...
...[R]ead the whole essay. It’s online now and will appear in a printed book as each of the Edge questions—like “What will change everything?” (2009) and “What is your dangerous idea?” (2006)—has for the last decade.
Thinking machines are consistently in the news these days, and often a topic of discussion here at 13.7. Last week, Alva Noë came out as a singularity skeptic, and three of us contributed to Edge.org's annual question for 2015: What do you think about machines that think?
In response to the Edge.org question, I argued that we shouldn't be chauvinists when it comes to defining thinking — that is, we should resist the temptation to restrict what counts as thinking to "thinking like adult humans" or "thinking like contemporary computers." Marcelo Gleiser suggested that we're already living as transhumans, enhanced by our technogadgets and medical improvements. And Stuart Kauffman considered Turing machines, the quantum and human choice.
In addressing the relationship between humans and thinking machines, all three of our responses — and those by many others — raised questions about what (if anything) makes us uniquely human. Part of what's fascinating about the idea of thinking machines, after all, is that they seem to approach and encroach on a uniquely human niche, Homo sapiens — the wise.
— Oh, to have lived in the age of the Parisian salons of the Enlightenment and been privy to some of the great intellectual discussions that went on there between writers, artists, philosophers, politicians and perhaps some budding scientists. Then again, I'm rather fond of the 21st century and its modern medicine, indoor plumbing and smart websites. On the subject of clever websites, I give you Edge.org, a place where brilliant minds from many disciplines gather to mull a big annual question. In 2014, the annual question was "What scientific idea is ready for retirement?"
The responses covered things like the concept of race, barriers of scientific understanding, Moore's Law and the robustness (or lack thereof) of large studies. There's a lot to wade through there, but if you're into science, it's worth a peek.
Sunday, January 25, 2015 | lanacion.com (Buenos Aires)
More than 180 scientists, philosophers, writers and technologists responded to the annual call of the Edge.org website with original reflections on the scope, risks and possibilities of artificial intelligence, a cutting-edge field of science that is already bringing the future into the present.
Is artificial intelligence one of the most promising developments of modern science, or a risk to humanity? Between these two poles, with irony, optimism and caution, the 186 scientists, writers and thinkers convened this year by Edge.org, a website associated with a publisher that promotes thinking and state-of-the-art discussion in science, the arts and literature, took up its annual question. The contributors wrote brief essays available on the web (www.edge.org) and, as every year, the answers will soon be published on paper. Here is a selection of their responses.
Pamela McCorduck, Steven Pinker, Irene Pepperberg, Thomas A. Bass, Paul Davies, Nicholas G. Carr.