Digital Reality

Neil Gershenfeld [1.23.15]

 

...Today, you can send a design to a fab lab and you need ten different machines to turn the data into something. Twenty years from now, all of that will be in one machine that fits in your pocket. This is the sense in which it doesn't matter. You can do it today. How it works today isn't how it's going to work in the future but you don't need to wait twenty years for it. Anybody can make almost anything almost anywhere.              

...Finally, when I could own all these machines I got that the Renaissance was when the liberal arts emerged—liberal for liberation, humanism, the trivium and the quadrivium—and those were a path to liberation, they were the means of expression. That's the moment when art diverged from artisans. And there were the illiberal arts that were for commercial gain. ... We've been living with this notion that making stuff is an illiberal art for commercial gain and it's not part of means of expression. But, in fact, today, 3D printing, micromachining, and microcontroller programming are as expressive as painting paintings or writing sonnets but they're not means of expression from the Renaissance. We can finally fix that boundary between art and artisans.

...I'm happy to take credit for saying computer science is one of the worst things to happen to computers or to science because, unlike physics, it has arbitrarily segregated the notion that computing happens in an alien world.


[57 min]

NEIL GERSHENFELD is a Physicist and the Director of MIT's Center for Bits and Atoms. He is the author of FAB. Neil Gershenfeld's Edge Bio Page


Digital Reality

 

What interests me is how bits and atoms relate—the boundary between digital and physical. Scientifically, it's the most exciting thing I know. It has all sorts of implications that are widely covered almost exactly backwards. Playing it out, what I thought was hard technically is proving to be pretty easy. What I didn't think was hard was the implications for the world, so a bigger piece of what I do now is that. Let's start with digital.

Digital is everywhere; digital is everything. There's a lot of hubbub about what's the next MIT, what's the next Silicon Valley, and those were all the last war. Technology is leading to very different answers. To explain that, let's go back to the science underneath it and then look at what it leads to.

Jesse Dylan's Documentary On The Edge Question — 2014

An Edge World Premiere

Jesse Dylan [12.17.14]


Following January's publication of the Edge Question—2014, "What Scientific Idea Is Ready for Retirement?", the director Jesse Dylan approached Edge about putting together a documentary film on the project.

The result: Edge is pleased to present the world premiere of Dylan's interesting and engaging four-minute impressionistic montage, featuring appearances by a number of Edgies: Jerry Coyne, Daniel C. Dennett, George Dyson, David Gelernter, Rebecca Newberger Goldstein, Alison Gopnik, Kevin Kelly, Alex Pentland, Irene Pepperberg, Steven Pinker, Lee Smolin, Paul Steinhardt, and Frank Wilczek.

JESSE DYLAN is a filmmaker and founder, Creative Director and CEO of Wondros, a production company based in LA. He has created media projects for a diverse group of organizations, including George Soros and the Open Society Foundations, Clinton Global Initiative, Council on Foreign Relations, MIT Media Lab, the Columbia Journalism School, and Harvard Medical School. Among his best-known works is the Emmy Award-winning "Yes We Can—Barack Obama Music Video".
Jesse Dylan's Edge Bio Page.


The book version of the Edge Question—2014 is being published in February by HarperCollins, retitled This Idea Must Die: Scientific Theories That Are Blocking Progress. Available for pre-order.

Formulating Science in Terms of Possible and Impossible Tasks

Chiara Marletto [12.6.14]

It turns out that in the constructor theoretic view, humans, as knowledge creating systems, are quite central to fundamental physics in an objective, non-anthropocentric, way. This is a very deep change in perspective. One of the ideas that will be dropped if constructor theory turns out to be effective is that the only fundamental entities in physics are laws of motion and initial conditions. In order for physics to accommodate more of physical reality, there needs to be a switch to this new mode of explanation, which accepts that scientific explanation is more than just predictions. Predictions will be supplemented with statements about what tasks are possible, what are impossible and why.


[32:58]

CHIARA MARLETTO is a Junior Research Fellow at Wolfson College and a Postdoctoral Research Assistant in the Materials Department, University of Oxford, currently working with David Deutsch.

Chiara Marletto's Edge Bio Page

THE REALITY CLUB: Arnold Trehub


FORMULATING SCIENCE IN TERMS OF POSSIBLE AND IMPOSSIBLE TASKS 

I’ve been thinking about constructor theory a lot in the past few years. Constructor theory is this theory that David Deutsch proposed—a proposal for a new fundamental theory to formulate science in a completely different way from the prevailing conception of fundamental physics. It has the potential to change the way we formulate science because it’s a new mode of explanation.

When you think about physics, you usually describe things in terms of initial conditions and laws of motion; so what you say is, for example, where a comet goes given that it started in a certain place and time. In constructor theory, what you say is what transformations are possible, what are impossible, and why. The idea is that you can formulate the whole of fundamental physics this way; so, not only do you say where the comet goes, you say where it can go. This incorporates a lot more than what it is possible to incorporate now in fundamental physics.
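
To make the contrast concrete, here is a minimal sketch, in Python, of the prevailing mode of explanation Marletto describes: predicting where a comet goes by integrating Newton's laws of motion forward from initial conditions. The constants, step size, and number of steps are hypothetical, chosen only for illustration; constructor theory would instead make statements about which transformations of the comet's state are possible, which are impossible, and why.

    import numpy as np

    G_M_SUN = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
    AU = 1.496e11        # one astronomical unit, in meters

    def step(r, v, dt):
        """Advance position r and velocity v by one leapfrog step under solar gravity."""
        a = -G_M_SUN * r / np.linalg.norm(r) ** 3
        v_half = v + 0.5 * dt * a
        r_new = r + dt * v_half
        a_new = -G_M_SUN * r_new / np.linalg.norm(r_new) ** 3
        v_new = v_half + 0.5 * dt * a_new
        return r_new, v_new

    # Initial conditions: a comet 5 AU from the Sun, moving tangentially at 10 km/s.
    r = np.array([5.0 * AU, 0.0, 0.0])
    v = np.array([0.0, 1.0e4, 0.0])

    for _ in range(10000):            # ten thousand one-day steps, roughly 27 years
        r, v = step(r, v, dt=86400.0)

    print("Predicted position after ~27 years (m):", r)

The point of the contrast is that a program like this can only answer where this particular comet goes; it cannot, by itself, express counterfactual statements about which trajectories are possible at all, and why.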

Entwined Fates

Margaret Levi [11.24.14]

 

We keep coming back to the issue of a community of fate: can it be for good or for bad, right? We can imagine the beer hall in Munich and what happened there that created a community of fate, and we can imagine the left-wing union organizers developing a different kind of community of fate. The real distinction between them is not just the ethical principles that inform them—that's clearly an important distinction—but what kind of community of fate it is. The terminology that I use there, and I keep repeating and want to get that through, is between an inclusive and an expansive community of fate versus an exclusive and narrowing community of fate. That's the difference.

 
[45:25]

MARGARET LEVI is the Director of the Center for Advanced Study in the Behavioral Sciences and Professor of Political Science at Stanford University. She is the Jere L. Bacharach Professor Emerita of International Studies at the University of Washington.

Margaret Levi's Edge Bio Page


ENTWINED FATES

The thing that interests me has to do with how we evoke, from people, the ethical commitments that they have, or can be encouraged to have, that make it possible to have better government, that make it possible to produce collective goods, that make it possible to have a better society. 

I'm a political scientist, political economist, so I think about this not so much from the perspective of moral reasoning, or philosophy, or psychology for that matter—though all those disciplines come into play in my thinking—but I think about it in terms of the institutional arrangements and contextual arrangements in which people find themselves. It is about those that evoke certain behaviors as opposed to other kinds of behaviors, and certain attitudes as opposed to other kinds of attitudes, that ultimately lead to actions. I'm ultimately interested not just in how the individual's mind works, but how individual minds work together to create an aggregate outcome.

HeadCon '14

Sarah-Jayne Blakemore, Molly Crockett, Jennifer Jacquet, Michael McCullough, Hugo Mercier, L.A. Paul, David Rand, Lawrence Ian Reed, Simone Schnall [11.18.14]

"To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves." 

HEADCON '14

In September a group of social scientists gathered for HEADCON '14, an Edge Conference at Eastover Farm. Speakers addressed a range of topics concerning the social (or moral, or emotional) brain: Sarah-Jayne Blakemore: "The Teenager's Sense Of Social Self"; Lawrence Ian Reed: "The Face Of Emotion"; Molly Crockett: "The Neuroscience of Moral Decision Making"; Hugo Mercier: "Toward The Seamless Integration Of The Sciences"; Jennifer Jacquet: "Shaming At Scale"; Simone Schnall: "Moral Intuitions, Replication, and the Scientific Study of Human Nature"; David Rand: "How Do You Change People's Minds About What Is Right And Wrong?"; L.A. Paul: "The Transformative Experience"; Michael McCullough: "Two Cheers For Falsification". Also participating as "kibitzers" were four speakers from HEADCON '13, the previous year's event: Fiery Cushman, Joshua Knobe, David Pizarro, and Laurie Santos.

We are now pleased to present the program in its entirety: nearly six hours of Edge Video and a downloadable PDF of the 55,000-word transcript.


[6 hours] 

John Brockman, Editor
Russell Weinberger, Associate Publisher

 Download PDF of Manuscript  

Copyright (c) 2014 by Edge Foundation, Inc. All Rights Reserved. Please feel free to use for personal, noncommercial use (only).

_____

Related on Edge:

HeadCon '13
Edge Meetings & Seminars
Edge Master Classes


THE ÉMINENCE GRISE

Georg Diez [11.17.14]
MASTERMIND BROCKMAN

THE ÉMINENCE GRISE
As a New York agent, John Brockman manages the star authors of science; as a visionary behind the scenes, he creates a new image of man for the 21st century. By Georg Diez

Who is John Brockman? Even in New York, the world capital of people who know just about everybody, the answer is uncertain.

"Brockman, Brockman?" Shake of the head. "I don't know", says the reporter from the New Yorker. Says the colleague of the New York Review of Books. Says the young writer who cofounded the magazine n + 1.

In the literary milieu, where he is more ignored than despised, John Brockman is about as well known as the third decimal place of the number pi.

"This crowd sees everything through the lenses of culture and politics," he says. "But an understanding of life, of the world, can only come through biology, through science."

Ebola, stem cells, brain research—Who needs the new David Foster Wallace, the new Philip Roth?

"The great questions of the world concern scientific news," says Brockman. "We are at the beginning of a revolution. And what we hear from the mainstream is: "Please make it go away."


"He is a key figure of the late 20th and early 21st century, the éminence grise and major source of inspiration for the globally dominant culture, which he himself named as the 'third culture'."


And there you are—this is how it goes with John Brockman, who doesn't like to waste time in the midst of the contradictions of the present. "Come, let's start," he says in a good mood and puts a recording device on his desk. "I'm turning it on, you don't mind?"

He is charming without hiding his own interests. He is proud of his life and his intelligence, without feeling that he has to apologize for either. He is a key figure of the late 20th and early 21st century, the éminence grise and major source of inspiration for the globally dominant culture, which he himself named the "third culture".

It is not Brockman, but his authors, who are well-known: Richard Dawkins, Steven Pinker, Daniel C. Dennett, Jared Diamond, Daniel Kahneman. Physicists, neuroscientists, geneticists, evolutionary biologists, fixed stars of the science age, superstars of nonfiction bestseller lists, the reason for Brockman's financial success and good mood.

The Myth Of AI

Jaron Lanier [11.14.14]

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us. ...That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. ... That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allows the data schemes to operate, contributing to the fortunes of whoever runs the computers. You're saying, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," and, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.


[39:47]

JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future? 

Jaron Lanier's Edge Bio Page

THE REALITY CLUB: George Church, Peter Diamandis, Lee Smolin, Rodney Brooks, Nathan Myhrvold, George Dyson, Pamela McCorduck, Sendhil Mullainathan, Steven Pinker, Neil Gershenfeld, D.A. Wallach, Michael Shermer, Stuart Kauffman, Kevin Kelly, Lawrence Krauss, Robert Provine, Stuart Russell, Kai Krause

INTRODUCTION

by John Brockman

This past weekend, during a trip to San Francisco, Jaron Lanier stopped by to talk to me for an Edge feature. He had something on his mind: news reports about comments by Elon Musk and Stephen Hawking, two of the most highly respected and distinguished members of the science and technology community, on the dangers of AI. ("Elon Musk, Stephen Hawking and fearing the machine" by Alan Wastler, CNBC 6.21.14). He then talked, uninterrupted, for an hour.

As Lanier was about to depart, John Markoff, the Pulitzer Prize-winning technology correspondent for The New York Times, arrived. Informed of the topic of the previous hour's conversation, he said, "I have a piece in the paper next week. Read it." A few days later, his article, "Fearing Bombs That Can Pick Whom to Kill" (11.12.14), appeared on the front page. It's one of a continuing series of articles by Markoff pointing to the darker side of the digital revolution.

This is hardly new territory. Cambridge cosmologist Martin Rees, the Astronomer Royal and former President of the Royal Society, addressed similar topics in his 2004 book, Our Final Hour: A Scientist's Warning, as did computer scientist Bill Joy, co-founder of Sun Microsystems, in his highly influential 2000 article in Wired, "Why The Future Doesn't Need Us: Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species".

But these topics are back on the table again, and informing the conversation in part is Superintelligence: Paths, Dangers, Strategies, the recently published book by Nick Bostrom, founding director of Oxford University's Future of Humanity Institute. In his book, Bostrom asks questions such as "What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?"

I am encouraging, and hope to publish, a Reality Club conversation, with comments (up to 500 words) on, but not limited to, Lanier's piece. This is a very broad topic that involves many different scientific fields and I am sure the Edgies will have lots of interesting things to say. 

—JB

Related on Edge:

Jaron Lanier: "Digital Maoism: The Hazards of the New Online Collectivism" (2006) "One Half A Manifesto" (2000) 
Kevin Kelly: "The Technium" (2014) 
George Dyson: "Turing's Cathedral" (2004) 


THE MYTH OF AI

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to decide a question it hadn't been asked to decide, and declare that corporations are people. That's a cover for making it easier for big money to have an influence in politics. But there's another angle to it, which I don't think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they're essentially algorithms.

If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do is to create a channel between people and algorithms. And the algorithms run on these big cloud computer facilities.

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us.

Salon Culture: Network of Ideas

Andrian Kreye [10.2.14]

 

Despite the intense scientific depth of these gatherings, John Brockman runs them with the cool of an old-school bohemian. A lot of these meetings indeed mark the beginning of a new phase in science history. One such example was a few years back, when he brought together the luminaries of behavioral economics, just before the financial crisis plunged mainstream economics into a massive identity crisis. Or the meeting of researchers on the new science of morality, at which it was noted that the widening political divides were signs of the disintegration of American society. Organizing these gatherings over summer weekends at his country farm, he assumes a role that actually dates from the 17th and 18th centuries, when the ladies of the big salons held morning and evening meetings in their living rooms under the guise of sociability, while they were actually fostering the convergence of the key ideas of the Enlightenment.

By the Late John Brockman - NEW E-BOOK EDITION ON SALE NOW!

John Brockman [9.16.14]

 


Also available from HarperCollins in the UK at amazon.co.uk and in Germany (in translation as Nachworte) from S. Fischer Verlag at amazon.de


Articles of Note

John Brockman, literary über agent and intellectual arbiter, wrote a trilogy of experimental, divisive books. Then, at age 32, he retired from writing ...

 
