Edge Video Library

A Post-Galilean Paradigm

Philip Goff
[9.24.19]

We're now going through a phase of history where people are so blown away by the success of physical science and the wonderful technology it's produced that they've forgotten its philosophical underpinnings. They've forgotten its inherent limitations. If we want a science of consciousness, we need to move beyond Galileo. We need to move to what I call a post-Galilean paradigm. We need to rethink what science is. That doesn't mean we stop doing physical science or we do physical science differently—I'm not here to tell physical scientists how to do their jobs. It does, however, mean that physical science is not the full story. We need to embed it within a more expansive conception of the scientific method. We need to adopt a worldview that can accommodate both the quantitative data of physical science and the qualitative reality of consciousness. That's essentially the problem.

Fortunately, there is a way forward. There is a framework that could allow us to make progress on this. It's inspired by certain writings from the 1920s by the philosopher Bertrand Russell and the scientist Arthur Eddington, who, incidentally, was the first scientist to confirm general relativity, just after the First World War. I'm inclined to think that these guys did in the 1920s for the science of consciousness what Darwin did in the 19th century for the science of life. It's a tragedy of history that this was then completely forgotten for a long time, for various historical reasons we could talk about. But it's been rediscovered in the last five or ten years in academic philosophy, and it's causing a lot of excitement and interest.

PHILIP GOFF is a philosopher and consciousness researcher at Durham University, UK, and author of Galileo's Error: Foundations for a New Science of Consciousness (forthcoming, 2019). Philip Goff's Edge Bio Page



The Universe Is Not in a Box

Julian Barbour
[9.11.19]

One of the great books in science was published in 1824 by a young Frenchman called Sadi Carnot. This is one of the most wonderful books, the title of which is Reflections on the Motive Power of Fire. In about six pages, he explains how you would make a steam engine that would work with the absolute maximum efficiency possible. It was almost entirely ignored, and he died before anything much could come out of it. It was rediscovered in 1849 when William Thomson, who later became Lord Kelvin, wrote a paper which publicized this work. Within a couple of years, thermodynamics had been created as a science.
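An illustrative aside that is not in the transcript: the maximum efficiency Carnot derived depends only on the temperatures of the hot and cold reservoirs, 1 - T_cold/T_hot with temperatures in kelvin. A minimal Python sketch of the bound, with invented example temperatures:

```python
# Sketch of Carnot's bound (not from the transcript; the temperatures below are invented examples).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat from the hot reservoir that any engine can turn into work."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require t_hot_k > t_cold_k > 0 (temperatures in kelvin)")
    return 1.0 - t_cold_k / t_hot_k

# A 19th-century boiler at roughly 150 degrees C (423 K) exhausting at roughly 20 degrees C (293 K):
print(f"Carnot limit: {carnot_efficiency(423.0, 293.0):.1%}")  # about 31%
```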

It caused a tremendous amount of excitement from the 1850s onwards. The key thing about this work of Carnot's is that if you have a steam engine, the steam has to remain in a cylinder in a box. You want the steam engine to work continuously, so you keep having to bring the steam and the cylinder back to the condition they were in before. It's remarkable that the development of what's called statistical mechanics—understanding how steam behaves—led to the discovery of entropy, one of the great discoveries in the history of science, and it all followed from this work of Carnot's on how steam engines work. And moreover, it was very anthropocentric thinking, about how human beings could exploit coal to drive steam engines and do work for them. At that stage, nobody was thinking about the universe as a whole; they were just thinking about how they could make steam engines work better.

This way of thinking, I believe, has survived more or less unchanged to this day. You still find that people who work on this problem of the arrow of time are assuming conditions that are appropriate for a steam engine. But in the 1920s and early 1930s, Hubble showed that the universe was expanding, that we live in an expanding universe. Is that going to be well modeled by steam in a box? My belief is that people haven't realized that we have to think out of the box. We have to think in different ways. We keep on finding ways in which the mathematics that was developed earlier to understand systems confined in a box has to be modified, with quite surprising consequences and, above all, possibly to explain why we have this incredibly powerful sense of the passage of time, why the past is so different from the future.

JULIAN BARBOUR is a theoretical physicist specializing in the study of time and motion; visiting professor of physics at the University of Oxford; and author of The Janus Point (forthcoming) and The End of Time. Julian Barbour's Edge Bio Page



Emergences

W. Daniel Hillis
[9.4.19]

My perspective is closest to George Dyson's. I liked his introducing himself as being interested in intelligence in the wild. I will copy George in that; that is what I’m interested in, too, but from a perspective in which it’s all in the wild. My interest in AI comes from a broader interest in a much more interesting question to which I have no answers (and which I can barely articulate): How do lots of simple things interacting emerge into something more complicated? Then how does that create the next system out of which that happens, and so on?

Consider the phenomenon, for instance, of chemicals organizing themselves into life, or single-cell organisms organizing themselves into multi-cellular organisms, or individual people organizing themselves into a society with language and things like that—I suspect that there’s more of that organization to happen. The AI that I’m interested in is a higher level of that and, like George, I suspect that not only will it happen, but it probably already is happening, and we’re going to have a lot of trouble perceiving it as it happens. We have trouble perceiving it because of this notion, which Ian McEwan so beautifully described, of the Golem being such a compelling idea that we get distracted by it, and we imagine it to be like that. That blinds us from seeing it as it really is emerging. Not that I think such things are impossible, but I don’t think those are going to be the first to emerge.

There's a pattern in all of those emergences, which is that they start out as analog systems of interaction—chemicals, for instance, have chains of circular pathways that metabolize stuff from the outside world and turn it into more of those circular, metabolizing pathways—and then what always happens going up to the next level is that those analog systems invent a digital system, like DNA, where they start to abstract out the information processing. So, they put the information processing in a separate system of its own. From then on, the interesting story becomes the story of the information processing. The complexity happens more in the information processing system. That certainly happens again with multi-cellular organisms: the information processing system is neurons, and they eventually go from just a bunch of cells to having this special information processing system, and that’s where the action is, in the brains and behavior. It drags along much more complicated bodies and makes them much more interesting once you have behavior.

W. DANIEL HILLIS is an inventor, entrepreneur, and computer scientist, Judge Widney Professor of Engineering and Medicine at USC, and author of The Pattern on the Stone: The Simple Ideas That Make Computers Work. W. Daniel Hillis's Edge Bio Page



Epistemic Virtues

Peter Galison
[8.21.19]

I’m interested in the question of epistemic virtues, their diversity, and the epistemic fears that they’re designed to address. By epistemic I mean how we gain and secure knowledge. What I’d like to do here is talk about what we might be afraid of, where our knowledge might go astray, and what aspects of our fears about what might misfire can be addressed by particular strategies, and then to see how that’s changed quite radically over time.

~~

James Clerk Maxwell, just by way of background, had done these very mechanical representations of electromagnetism—gears and ball bearings, and strings and rubber bands. He loved doing that. He’s also the author of the most abstract treatise on electricity and magnetism, which used the least action principle and doesn’t go by the pictorial, sensorial path at all. In this very short essay, he wrote, "Some people gain their understanding of the world by symbols and mathematics. Others gain their understanding by pure geometry and space. There are some others that find an exhilaration in the muscular effort that is brought to them in understanding, in feeling the force of objects moving through the world. What they want are words of power that stir their souls like the memory of childhood. For the sake of persons of these different types, whether they want the paleness and tenuity of mathematical symbolism, or they want the robust aspects of this muscular engagement, we should present all of these ways. It’s the combination of them that gives us our best access to truth."

PETER GALISON is a science historian; Joseph Pellegrino University Professor and co-founder of the Black Hole Initiative at Harvard University; and author of Einstein's Clocks and Poincaré’s Maps: Empires of Time. Peter Galison's Edge Bio Page

 



AI That Evolves in the Wild

George Dyson
[8.14.19]

I’m interested not in domesticated AI—the stuff that people are trying to sell. I'm interested in wild AI—AI that evolves in the wild. I’m a naturalist, so that’s the interesting thing to me. Thirty-four years ago there was a meeting just like this in which Stanislaw Ulam said to everybody in the room—they were all mathematicians—"What makes you so sure that mathematical logic corresponds to the way we think?" It’s a higher-level symptom. It’s not how the brain works. All those guys knew full well that the brain was not fundamentally logical.

We’re in a transition similar to the one at the first Macy Conferences. The Teleological Society, which became the Cybernetics Group, started in 1943 at a time of transition, when the world was full of analog electronics at the end of World War II. We had built all these vacuum tubes, and suddenly there was free time to do something with them, so we decided to make digital computers. And we had the digital revolution. We’re now at exactly the same tipping point in history, where we have all this digital equipment, all these machines. Most of the time they’re doing nothing except waiting for the next single instruction. The funny thing is, now it’s happening without people intending it. Back then we had a very deliberate group of people who said, "Let’s build digital machines." Now, I believe we are building analog computers in a very big way, but nobody’s organizing it; it’s just happening.

GEORGE DYSON is a historian of science and technology and author of Darwin Among the Machines and Turing’s Cathedral. George Dyson's Edge Bio Page



The Language of Mind

David Chalmers
[8.8.19]

Will every possible intelligent system somehow experience itself or model itself as having a mind? Is the language of mind going to be inevitable in an AI system that has some kind of model of itself? If you’ve just got an AI system that's modeling the world and not bringing itself into the equation, then it may need the language of mind to talk about other people if it wants to model them and model itself from the third-person perspective. If we’re working towards artificial general intelligence, it's natural to have AIs with models of themselves, particularly with introspective self-models, where they can know what’s going on in some sense from the first-person perspective.

Say you do something that negatively affects an AI, something that in an ordinary human would correspond to damage and pain. Your AI is going to say, "Please don’t do that. That’s very bad." Introspectively, it’s a model that recognizes someone has caused one of those states it calls pain. Is it going to be an inevitable consequence of introspective self-models in AI that they start to model themselves as having something like consciousness? My own suspicion is that there's something about the mechanisms of self-modeling and introspection that is going to naturally lead to these intuitions, where an AI will model itself as being conscious. The next step is whether an AI of this kind is going to naturally experience consciousness as somehow puzzling, as something that is potentially hard to square with basic underlying mechanisms and hard to explain.

DAVID CHALMERS is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is best known for his work on consciousness, including his formulation of the "hard problem" of consciousness. David Chalmers's Edge Bio Page



Morphogenesis for the Design of Design

Neil Gershenfeld
[7.31.19]

As we work on the self-reproducing assembler and on writing software that looks like hardware and respects geometry, the two meet in morphogenesis. This is the thing I’m most excited about right now: the design of design. Your genome doesn’t store anywhere that you have five fingers. It stores a developmental program, and when you run it, you get five fingers. It’s one of the oldest parts of the genome. Hox genes are an example. It’s essentially the only part of the genome where the spatial order matters. It gets read off as a program, and the program never represents the physical thing it’s constructing. The morphogenes are a program that specifies morphogens that do things like climb gradients and break symmetry; the program never represents the thing it’s constructing, but the morphogens, following the morphogenes, give rise to you.

What’s going on in morphogenesis, in part, is compression. So, a billion bases can specify a trillion cells, but the more interesting thing that’s going on is that almost anything you perturb in the genome is either inconsequential or fatal. The morphogenes are a curated search space where rearranging them is interesting—you go from gills to wings to flippers. The heart of success in machine learning, however you represent it, is function representation. The real progress in machine learning is learning representation. How you search hasn’t changed all that much, but how you represent search has. These morphogenes are a beautiful way to represent design. Technology today doesn’t do it. Technology today generally doesn’t distinguish genotype and phenotype, in the sense that you explicitly represent what you’re designing. In morphogenesis, you never represent the thing you’re designing; it's done in a beautifully abstract way. So, for these self-reproducing assemblers, what we’re building is morphogenesis for the design of design. Rather than a combinatorial search over billions of degrees of freedom, you search over these developmental programs. This is one of the core research questions we’re looking at.
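A minimal sketch of that genotype/phenotype split, not Gershenfeld's code and with every name invented for illustration: the search mutates a short developmental program, a develop() routine grows the program into a structure, and fitness is scored only on the grown structure.

```python
# Toy illustration of searching over a developmental program (genotype) rather
# than over the artifact it grows into (phenotype). All rules and numbers are invented.
import random

def develop(genotype, steps=8):
    """Run the developmental program: start from a seed and repeatedly apply growth rules.
    The genotype never stores the final shape, only the rules for growing it."""
    pattern = [1]  # the seed "cell"
    for _ in range(steps):
        rule = genotype[len(pattern) % len(genotype)]
        if rule == "split":
            pattern = [c for c in pattern for _ in (0, 1)]   # every cell divides
        elif rule == "cap":
            pattern = pattern[: max(1, len(pattern) // 2)]   # prune half the cells
        elif rule == "flip":
            pattern = pattern[::-1] + [0]                     # reverse and add one cell
    return pattern

def fitness(phenotype, target_length=20):
    """Toy objective: grow a body close to a target size."""
    return -abs(len(phenotype) - target_length)

def evolve(generations=200, genome_length=5):
    """Hill-climb in the space of programs; only the grown structure is evaluated."""
    rules = ["split", "cap", "flip"]
    best = [random.choice(rules) for _ in range(genome_length)]
    for _ in range(generations):
        mutant = best[:]
        mutant[random.randrange(genome_length)] = random.choice(rules)
        if fitness(develop(mutant)) >= fitness(develop(best)):
            best = mutant
    return best, develop(best)

if __name__ == "__main__":
    genotype, phenotype = evolve()
    print("genotype:", genotype)
    print("phenotype size:", len(phenotype))
```

The toy only illustrates the point in the paragraph above: variation happens in the compressed program, while evaluation happens on the thing it grows into.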

NEIL GERSHENFELD is director of MIT’s Center for Bits and Atoms; founder of the global fab lab network; the author of FAB; and co-author (with Alan Gershenfeld & Joel Cutcher-Gershenfeld) of Designing Reality. Neil Gershenfeld's Edge Bio Page



Ecology of Intelligence

Frank Wilczek
[7.23.19]

I don't think a singularity is imminent, although there has been quite a bit of talk about it. I don't think the prospect of artificial intelligence outstripping human intelligence is imminent because the engineering substrate just isn’t there, and I don't see the immediate prospects of getting there. I haven’t said much about quantum computing; other people will. But if you’re waiting for quantum computing to create a singularity, you’re misguided. That crossover, fortunately, will take decades, if not centuries.

There’s this tremendous drive for intelligence, but there will be a long period of coexistence in which there will be an ecology of intelligence. Humans will become enhanced in different ways, at first in relatively trivial ways with smartphones and access to the Internet, but the integration will become more intimate as time goes on. Younger people who interact with these devices from childhood will be cyborgs from the very beginning. They will think in different ways than current adults do.

FRANK WILCZEK is the Herman Feshbach Professor of Physics at MIT, recipient of the 2004 Nobel Prize in physics, and author of A Beautiful Question: Finding Nature’s Deep Design. Frank Wilczek's Edge Bio Page



Humans: Doing More With Less

Tom Griffiths
[7.16.19]

Imagine a superintelligent system with far more computational resources than us mere humans that’s trying to make inferences about what the humans who are surrounding it—which it thinks of as cute little pets—are trying to achieve, so that it is then able to act in a way that is consistent with what those human beings might want. That system needs to be able to simulate what an agent with greater constraints on its cognitive resources should be doing, and it should be able to make inferences like this: the fact that we’re not able to calculate the zeros of the Riemann zeta function or discover a cure for cancer doesn’t mean we’re not interested in those things; it’s just a consequence of the cognitive limitations that we have.

As a parent of two small children, this is a problem that I face all the time, which is trying to figure out what my kids want, kids who are operating in an entirely different mode of computation, and having to build a kind of internal model of how a toddler’s mind works such that it’s possible to unravel that and work out that there’s a particular motivation for the very strange pattern of actions that they’re taking.

Both from the perspective of understanding human cognition and from the perspective of being able to build AI systems that can understand human cognition, it’s desirable for us to have a better model of how rational agents should act if those rational agents have limited cognitive resources. That’s something that I’ve been working on for the last few years. We have an approach to thinking about this that we call resource rationality. And this is closely related to similar ideas that are being proposed in the artificial intelligence literature. One of these ideas is the notion of bounded optimality, proposed by Stuart Russell.
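A toy illustration of the flavor of resource rationality, my own sketch rather than Griffiths' model and with made-up numbers: an agent facing a two-option choice picks how much deliberation to do by weighing the expected gain in decision quality against a fixed cost per unit of thinking.

```python
# Hypothetical sketch of a resource-rational choice of how much to deliberate.
# Not from Griffiths' work; every quantity here is an invented toy value.
import random

def utility_of_choice(n_samples, true_p=0.6, reward=1.0):
    """Draw n noisy samples about which of two options is better, then return the
    expected reward of the option the agent ends up picking."""
    if n_samples == 0:
        return reward * 0.5                      # no thinking: guess at random
    heads = sum(random.random() < true_p for _ in range(n_samples))
    picks_good_option = heads >= n_samples / 2
    return reward * (true_p if picks_good_option else 1 - true_p)

def resource_rational_n(cost_per_sample=0.005, budget=50, trials=3000):
    """Choose the amount of deliberation that maximizes expected reward minus thinking cost."""
    best_n, best_value = 0, float("-inf")
    for n in range(budget + 1):
        avg_reward = sum(utility_of_choice(n) for _ in range(trials)) / trials
        net_value = avg_reward - cost_per_sample * n
        if net_value > best_value:
            best_n, best_value = n, net_value
    return best_n, best_value

if __name__ == "__main__":
    n, value = resource_rational_n()
    print(f"resource-rational amount of deliberation: {n} samples (net value {value:.3f})")
```

Raising the cost per sample shrinks the optimal amount of deliberation, which is the bounded-optimality trade-off in miniature.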

TOM GRIFFITHS is Henry R. Luce Professor of Information, Technology, Consciousness, and Culture at Princeton University. He is co-author (with Brian Christian) of Algorithms to Live By. Tom Griffiths's Edge Bio Page



A Separate Kind of Intelligence

Alison Gopnik
[7.10.19]

Back in 1950, Turing argued that for a genuine AI we might do better by simulating a child’s mind than an adult’s. This insight has particular resonance given recent work on "life history" theory in evolutionary biology—the developmental trajectory of a species, particularly the length of its childhood, is highly correlated with adult intelligence and flexibility across a wide range of species. This trajectory is also reflected in brain development, with its distinctive transition from early proliferation to later pruning. I’ve argued that this developmental pattern reflects a distinctive evolutionary way of resolving explore-exploit tensions that bedevil artificial intelligence. Childhood allows for a protected period of broad, high-temperature search through the space of solutions and hypotheses, before the requirements of focused, goal-directed planning set in. This distinctive exploratory childhood intelligence, with its characteristic playfulness, imagination and variability, may be the key to the human ability to innovate creatively yet intelligently, an ability that is still far beyond the purview of AI. More generally, a genuine understanding of intelligence requires a developmental perspective.
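One way to make the explore-then-exploit trajectory concrete, purely as an illustration and not anything from the piece: an annealed search that takes large, noisy steps while its "temperature" is high and settles into local refinement as it cools, loosely mirroring broad childhood exploration followed by focused adult planning.

```python
# Toy sketch of high-temperature exploration followed by low-temperature exploitation.
# The objective and all parameters are invented for illustration.
import math
import random

def landscape(x):
    """A bumpy objective with many local peaks; the global peak sits near x = 8."""
    return math.sin(x) + 0.1 * x - 0.02 * (x - 8) ** 2

def annealed_search(steps=5000, t_start=2.0, t_end=0.01):
    x = random.uniform(0, 15)
    best_x, best_val = x, landscape(x)
    for i in range(steps):
        # Temperature decays over the "lifetime" of the search.
        t = t_start * (t_end / t_start) ** (i / steps)
        candidate = x + random.gauss(0, 1.0) * t          # big jumps early, small ones late
        delta = landscape(candidate) - landscape(x)
        # Always accept improvements; accept regressions with probability exp(delta / t),
        # which shrinks as the temperature drops.
        if delta > 0 or random.random() < math.exp(delta / t):
            x = candidate
        if landscape(x) > best_val:
            best_x, best_val = x, landscape(x)
    return best_x, best_val

if __name__ == "__main__":
    x, val = annealed_search()
    print(f"best x found: {x:.2f}, value {val:.3f}")
```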

ALISON GOPNIK is a developmental psychologist at UC Berkeley. Her books include The Philosophical Baby and, most recently, The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children. Alison Gopnik's Edge Bio Page

