Conversations

The Third Law

The Future of Computing is Analog
George Dyson
[2.12.19]

The history of computing can be divided into an Old Testament and a New Testament: before and after electronic digital computers and the codes they spawned proliferated across the Earth. The Old Testament prophets, who delivered the underlying logic, included Thomas Hobbes and Gottfried Wilhelm Leibniz. The New Testament prophets included Alan Turing, John von Neumann, Claude Shannon, and Norbert Wiener. They delivered the machines.

Alan Turing wondered what it would take for machines to become intelligent. John von Neumann wondered what it would take for machines to self-reproduce. Claude Shannon wondered what it would take for machines to communicate reliably, no matter how much noise intervened. Norbert Wiener wondered how long it would take for machines to assume control.

Wiener’s warnings about control systems beyond human control appeared in 1949, just as the first generation of stored-program electronic digital computers was being introduced. Because those machines required direct supervision by human programmers, his concerns seemed beside the point: what’s the problem, as long as programmers are in control of the machines? Ever since, debate over the risks of autonomous control has remained tied to debate over the powers and limitations of digitally coded machines. Despite their astonishing powers, these machines have shown little real autonomy, and the assumption that they never will is a dangerous one. What if digital computing is being superseded by something else?

Electronics underwent two fundamental transitions over the past hundred years: from analog to digital and from vacuum tubes to solid state. That these transitions occurred together does not mean they are inextricably linked. Just as digital computation was implemented using vacuum tube components, analog computation can be implemented in solid state. Analog computation is alive and well, even though vacuum tubes are commercially extinct.

There is no precise distinction between analog and digital computing. In general, digital computing deals with integers, binary sequences, deterministic logic, and time that is idealized into discrete increments, whereas analog computing deals with real numbers, nondeterministic logic, and continuous functions, including time as it exists as a continuum in the real world.

Imagine you need to find the middle of a road. You can measure its width using any available increment and then digitally compute the middle to the nearest increment. Or you can use a piece of string as an analog computer, mapping the width of the road to the length of the string and finding the middle, without being limited to increments, by doubling the string back upon itself.
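To make the contrast concrete, here is a minimal sketch in Python; the road width and increment are arbitrary illustrative values, and the functions are toys, not claims about how either kind of computer is built:

    # Toy contrast between digital and analog midpoint-finding.
    # Width and increment are arbitrary illustrative values.

    def digital_middle(width, increment):
        # Measure in fixed increments, then halve: the answer is forced
        # onto the nearest tick of the chosen increment.
        ticks = round(width / increment)       # quantized measurement
        return round(ticks / 2) * increment    # midpoint, to the nearest increment

    def analog_middle(width):
        # The folded string maps the width onto a continuous length;
        # halving is limited by physics, not by any increment.
        return width / 2

    print(digital_middle(7.3, increment=0.5))  # 4.0  (off by the quantization)
    print(analog_middle(7.3))                  # 3.65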

Many systems operate across both analog and digital regimes. A tree integrates a wide range of inputs as continuous functions, but if you cut down that tree, you find that it has been counting the years digitally all along.

In analog computing, complexity resides in network topology, not in code. Information is processed as continuous functions of values such as voltage and relative pulse frequency rather than by logical operations on discrete strings of bits. Digital computing, intolerant of error or ambiguity, depends upon error correction at every step along the way. Analog computing tolerates errors, allowing you to live with them.

Nature uses digital coding for the storage, replication, and recombination of sequences of nucleotides, but relies on analog computing, running on nervous systems, for intelligence and control. The genetic system in every living cell is a stored-program computer. Brains aren’t.

Digital computers execute transformations between two species of bits: bits representing differences in space and bits representing differences in time. The transformations between these two forms of information, sequence and structure, are governed by the computer’s programming, and as long as computers require human programmers, we retain control.

Analog computers also mediate transformations between two forms of information: structure in space and behavior in time. There is no code and no programming. Somehow—and we don’t fully understand how—nature evolved analog computers known as nervous systems, which embody information absorbed from the world. They learn. One of the things they learn is control. They learn to control their own behavior, and they learn to control their environment to the extent that they can.

Computer science has a long history—going back to before there even was computer science—of implementing neural networks, but for the most part these have been simulations of neural networks by digital computers, not neural networks as evolved in the wild by nature herself. This is starting to change: from the bottom up, as the threefold drivers of drone warfare, autonomous vehicles, and cell phones push the development of neuromorphic microprocessors that implement actual neural networks, rather than simulations of neural networks, directly in silicon (and other potential substrates); and from the top down, as our largest and most successful enterprises increasingly turn to analog computation in their infiltration and control of the world.

While we argue about the intelligence of digital computers, analog computing is quietly supervening upon the digital, in the same way that analog components like vacuum tubes were repurposed to build digital computers in the aftermath of World War II. Individually deterministic finite-state processors, running finite codes, are forming large-scale, nondeterministic, non-finite-state metazoan organisms running wild in the real world. The resulting hybrid analog/digital systems treat streams of bits collectively, the way the flow of electrons is treated in a vacuum tube, rather than individually, as bits are treated by the discrete-state devices generating the flow. Bits are the new electrons. Analog is back, and its nature is to assume control.

Governing everything from the flow of goods to the flow of traffic to the flow of ideas, these systems operate statistically, as pulse-frequency coded information is processed in a neuron or a brain. The emergence of intelligence gets the attention of Homo sapiens, but what we should be worried about is the emergence of control.

~~

Imagine it is 1958 and you are trying to defend the continental United States against airborne attack. To distinguish hostile aircraft, one of the things you need, besides a network of computers and early-warning radar sites, is a map of all commercial air traffic, updated in real time. The United States built such a system and named it SAGE (Semi-Automatic Ground Environment). SAGE in turn spawned Sabre, the first integrated reservation system for booking airline travel in real time. Sabre and its progeny soon became not just a map of what seats were available but also a system that began to control, with decentralized intelligence, where airliners would fly, and when.

But isn’t there a control room somewhere, with someone at the controls? Maybe not. Say, for example, you build a system to map highway traffic in real time, simply by giving cars access to the map in exchange for reporting their own speed and location as they go. The result is a fully decentralized control system. Nowhere is there any controlling model of the system except the system itself.
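A minimal sketch of that loop, with invented names (report, current_map) and a toy aggregation standing in for whatever a real system would use; the point is only that the shared map is nothing but the statistics of what the cars themselves report:

    # Toy decentralized traffic map: each car trades its own observation
    # for access to the aggregate, and the aggregate is the only model.
    from collections import defaultdict

    segment_speeds = defaultdict(list)      # road segment -> recent reported speeds

    def current_map():
        # The "map" is just the running statistics of the reports.
        return {seg: sum(v) / len(v) for seg, v in segment_speeds.items()}

    def report(segment, speed):
        # A car reports its speed and location and gets the map in return.
        speeds = segment_speeds[segment]
        speeds.append(speed)
        del speeds[:-100]                   # keep only recent reports
        return current_map()

    # Cars consult the map to choose routes, which changes what they report,
    # which changes the map: the map begins to steer the traffic it describes.
    report("segment-12", 45)
    print(report("segment-12", 30))         # {'segment-12': 37.5}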

Imagine it is the first decade of the 21st century and you want to track the complexity of human relationships in real time. For social life at a small college, you could construct a central database and keep it up to date, but its upkeep would become overwhelming if taken to any larger scale. Better to pass out free copies of a simple semi-autonomous code, hosted locally, and let the social network update itself. This code is executed by digital computers, but the analog computing performed by the system as a whole far exceeds the complexity of the underlying code. The resulting pulse-frequency coded model of the social graph becomes the social graph. It spreads wildly across the campus and then the world.

What if you wanted to build a machine to capture what everything known to the human species means? With Moore’s Law behind you, it doesn’t take too long to digitize all the information in the world. You scan every book ever printed, collect every email ever written, and gather forty-nine years of video every twenty-four hours, while tracking where people are and what they do, in real time. But how do you capture the meaning?

Even in the age of all things digital, this cannot be defined in any strictly logical sense, because meaning, among humans, isn’t fundamentally logical. The best you can do, once you have collected all possible answers, is to invite well-defined questions and compile a pulse-frequency weighted map of how everything connects. Before you know it, your system will not only be observing and mapping the meaning of things, it will start constructing meaning as well. In time, it will control meaning, in the same way the traffic map starts to control the flow of traffic even though no one seems to be in control.
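One way to read "pulse-frequency weighted" in concrete terms, purely as an illustration and not as a description of any actual system (the names observe and connections are invented for the sketch): treat each question as a pulse, and let the weight of a connection be nothing more than a count of how often the pulses link two things.

    # Illustrative only: edge weights are counts of how often two things
    # are linked by the questions asked; the frequencies are the map.
    from collections import Counter
    from itertools import combinations

    connections = Counter()                 # (thing, thing) -> pulse count

    def observe(question_terms):
        # Every pair of terms a question connects gets a slightly stronger edge.
        for a, b in combinations(sorted(set(question_terms)), 2):
            connections[(a, b)] += 1

    observe(["tree", "rings", "age"])
    observe(["tree", "rings", "climate"])
    observe(["tree", "age"])

    # No edge stores a definition of meaning; the weight is all the system
    # "knows" about how strongly two things belong together.
    print(connections[("rings", "tree")])   # 2
    print(connections[("age", "tree")])     # 2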

~~

There are three laws of artificial intelligence. The first, known as Ashby’s Law, after cybernetician W. Ross Ashby, author of Design for a Brain, states that any effective control system must be as complex as the system it controls.

The second law, articulated by John von Neumann, states that the defining characteristic of a complex system is that it constitutes its own simplest behavioral description. The simplest complete model of an organism is the organism itself. Trying to reduce the system’s behavior to any formal description makes things more complicated, not less.

The third law states that any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.

The third law offers comfort to those who believe that until we understand intelligence, we need not worry about superhuman intelligence arising among machines. But there is a loophole in the third law. It is entirely possible to build something without understanding it. You don’t need to fully understand how a brain works in order to build one that works. This is a loophole that no amount of supervision over algorithms by programmers and their ethical advisers can ever close. Provably “good” AI is a myth. Our relationship with true AI will always be a matter of faith, not proof.

We worry too much about machine intelligence and not enough about self-reproduction, communication, and control. The next revolution in computing will be signaled by the rise of analog systems over which digital programming no longer has control. Nature’s response to those who believe they can build machines to control everything will be to allow them to build a machine that controls them instead.

Biological and Cultural Evolution

Six Characters in Search of an Author
Freeman Dyson
[2.19.19]

In the near future, we will be in possession of genetic engineering technology which allows us to move genes precisely and massively from one species to another. Careless or commercially driven use of this technology could make the concept of species meaningless, mixing up populations and mating systems so that much of the individuality of species would be lost. Cultural evolution gave us the power to do this. To preserve our wildlife as nature evolved it, the machinery of biological evolution must be protected from the homogenizing effects of cultural evolution.

Unfortunately, the first of our two tasks, the nurture of a brotherhood of man, has been made possible only by the dominant role of cultural evolution in recent centuries. The cultural evolution that damages and endangers natural diversity is the same force that drives human brotherhood through the mutual understanding of diverse societies. Wells's vision of human history as an accumulation of cultures, Dawkins's vision of memes bringing us together by sharing our arts and sciences, Pääbo's vision of our cousins in the cave sharing our language and our genes, show us how cultural evolution has made us what we are. Cultural evolution will be the main force driving our future.

FREEMAN DYSON is an emeritus professor of physics at the Institute for Advanced Study in Princeton. In addition to fundamental contributions ranging from number theory to quantum electrodynamics, he has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, Maker of Patterns, and Origins of Life. Freeman Dyson's Edge Bio Page


BIOLOGICAL AND CULTURAL EVOLUTION: SIX CHARACTERS IN SEARCH OF AN AUTHOR

In the Pirandello play, "Six Characters in Search of an Author", the six characters come on stage, one after another, each of them pushing the story in a different unexpected direction. I use Pirandello's title as a metaphor for the pioneers in our understanding of the concept of evolution over the last two centuries. Here are my six characters with their six themes.

1. Charles Darwin (1809-1882): The Diversity Paradox.
2. Motoo Kimura (1924-1994): Smaller Populations Evolve Faster.
3. Ursula Goodenough (1943- ): Nature Plays a High-Risk Game.
4. Herbert Wells (1866-1946): Varieties of Human Experience.
5. Richard Dawkins (1941- ): Genes and Memes.
6. Svante Pääbo (1955- ): Cousins in the Cave.

The story that they are telling is of a grand transition that occurred about fifty thousand years ago, when the driving force of evolution changed from biology to culture, and the direction changed from diversification to unification of species. The understanding of this story can perhaps help us to deal more wisely with our responsibilities as stewards of our planet.

Judith Rich Harris: 1938 - 2018

Judith Rich Harris
[1.9.19]

It was in the 1990s that I received a phone call from Steven Pinker, who wanted to make the world aware of the work of Judith Rich Harris, an unheralded psychologist advocating a revolutionary idea. In her 1999 Edge interview, “Children don’t do things half way: children don’t compromise,” she said, “How the parents rear the child has no long-term effects on the child's personality, intelligence, or mental health.”

From the very early days of Edge, Judith Rich Harris was the gift that kept giving. Beginning in 1998, with her response to “What Questions Are You Asking Yourself” through “The Last Question” in 2016, she exemplified the role of the Third Culture intellectual: “those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are.”

Her subsequent Edge essays over the years, on subjects as varied as natural selection, parenting styles, the effect of genes on human behavior, twin studies, the survival of friendship, and beauty as truth, are evidence of a keen intellect and a fearless thinker determined to advance science-based thinking as well as her own controversial ideas.

In this special 16,000-word edition of Edge, dedicated to the memory of Judith Rich Harris, we take a deep dive into her ideas.

—JB

Collaboration and the Evolution of Disciplines

Robert Axelrod
[7.1.19]

The questions that I’ve been interested in more recently are about collaboration and what can make it succeed, also about the evolution of disciplines themselves. The part of collaboration that is well understood is that if a team has a diversity of tools and backgrounds available to them—they come from different cultures, they come from different knowledge sets—then that allows them to search a space and come up with solutions more effectively. Diversity is very good for teamwork, but the problem is that there are clearly barriers to people from diverse backgrounds working together. That part of it is not well understood. The way people usually talk about it is that they have to learn each other’s language and each other’s terminology. So, if you talk to somebody from a different field, they’re likely to use a different word for the same concept.

ROBERT AXELROD, Walgreen Professor for the Study of Human Understanding at the University of Michigan, is best known for his interdisciplinary work on the evolution of cooperation. He is author of The Evolution of Cooperation. Robert Axelrod's Edge Bio Page

The Brain Is Full of Maps

Freeman Dyson
[6.11.19]

 I was talking about maps and feelings, and whether the brain is analog or digital. I’ll give you a little bit of what I wrote:

Brains use maps to process information. Information from the retina goes to several areas of the brain where the picture seen by the eye is converted into maps of various kinds. Information from sensory nerves in the skin goes to areas where the information is converted into maps of the body. The brain is full of maps. And a big part of the activity is transferring information from one map to another.

As we know from our own use of maps, mapping from one picture to another can be done either by digital or by analog processing. Because digital cameras are now cheap and film cameras are old fashioned and rapidly becoming obsolete, many people assume that the process of mapping in the brain must be digital. But the brain has been evolving over millions of years and does not follow our ephemeral fashions. A map is in its essence an analog device, using a picture to represent another picture. The imaging in the brain must be done by direct comparison of pictures rather than by translations of pictures into digital form.

FREEMAN DYSON, emeritus professor of physics at the Institute for Advanced Study in Princeton, has worked on nuclear reactors, solid-state physics, ferromagnetism, astrophysics, and biology, looking for problems where elegant mathematics could be usefully applied. His books include Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and Maker of Patterns. Freeman Dyson's Edge Bio Page

Machines Like Me

Ian McEwan
[4.16.19]

I would like to set aside the technological constraints in order to imagine how an embodied artificial consciousness might negotiate the open system of human ethics—not how people think they should behave, but how they do behave. For example, we may think the rule of law is preferable to revenge, but matters get blurred when the cause is just and we love the one who exacts the revenge.

A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.

IAN MCEWAN is a novelist whose works have earned him worldwide critical acclaim. He is the recipient of the Man Booker Prize for Amsterdam (1998), the National Book Critics' Circle Fiction Award, and the Los Angeles Times Prize for Fiction for Atonement (2003). His most recent novel is Machines Like Me. Ian McEwan's Edge Bio Page


MACHINES LIKE ME

IAN MCEWAN: I feel something like an imposter here amongst so much technical expertise. I’m the breakfast equivalent of an after-dinner mint.

What’s been preoccupying me the last two or three years is what it would be like to live with a fully embodied artificial consciousness, which means leaping over every difficulty that we’ve heard described this morning by Rod Brooks. The building of such a thing is probably scientifically useless, much like putting a man on the moon when you could put a machine there, but it has an ancient history.

Paul Allen Remembered

Edward H. "Eddie" Currie
[10.22.18]

Jean Pigozzi & Paul Allen at the Edge Dinner (March 17, 2014)

It was Microsoft’s phenomenal success, early in the evolution of the microcomputer, that made it possible for Paul to make so many other significant contributions to the world, and that success may well never have occurred without Paul’s ability to deliver on the dream to supply the software for all of the microcomputers in the world, beginning with BASIC, and to do so in the early days of the industry. Those at MITS who knew Paul always referred to him as a brilliant polymath and a true gentleman. His quiet, easygoing manner, great sense of humor, love of music, guitars, and software in all of its forms, and compassion and concern for others, together with a totally committed work ethic, made him a great role model at MITS.

EDDIE CURRIE is an Associate Professor in Hofstra’s School of Engineering and Applied Sciences. From 1975 to 1978, he was Executive Vice President and General Manager for MITS, the first commercially viable, microcomputer-based personal computer company, where he was involved in the development and manufacture of the Altair and recruitment and supervision of the staff, which included Paul Allen and Bill Gates (founders of Microsoft). Eddie Currie's Edge Bio Page

Reality Club: Jean Pigozzi


PAUL ALLEN REMEMBERED

In 1974, a small company called MITS, in Albuquerque, New Mexico, was struggling to survive as it watched its kit calculator business being destroyed by Texas Instruments’ draconian calculator price-cutting. An Intel salesman dropped off a data sheet for a new microprocessor called the 8080. That evening the president of the company, Ed Roberts, took the data sheet home and, using an HP calculator, concluded that Intel’s newest product was quite capable of becoming the heart of a microcomputer. Roberts went to a local bank and presented a hastily drafted business plan, and when asked by a bank officer how many computers could be sold, he responded “800.” The bank assumed that he meant 800/month and granted the loan. In actuality, Roberts meant 800/year.


Eddie Currie

Also in 1974, some two thousand miles away in Boston, two young men were monitoring Intel’s microprocessor evolution and dreaming of building a microcomputer based on a microprocessor. The Intel 8080 was announced in April of 1974, but it wasn’t until December of that year, in Electronic Design magazine, that a full instruction set was published. As soon as Paul Allen and Bill Gates saw the article, the two set to work, using Harvard computing resources, on what ultimately became an 8080 simulator that ran on a DEC PDP-10 and a 4K version of the BASIC language that would run on the simulator.

Thus MITS had hardware but no computer language, and Bill and Paul had a computer language but no hardware. The breakthrough came in the January 1975 issue of Popular Electronics, published in late December of 1974, featuring the MITS Altair, “the world’s first microcomputer kit,” on the cover.

Paul Allen sent a letter to MITS on January 2, 1975, explaining that his company had a BASIC interpreter that could be marketed on paper tape and/or floppy disk and run on the MITS microcomputer. Ed Roberts' response was immediate and resulted in Paul flying to Albuquerque a few days later with a paper tape and successfully demonstrating Microsoft BASIC running on an Altair. Microsoft was founded as Micro-Soft, on April 4, 1975.

I also played a role. Ed Roberts and I had grown up together, attended the same elementary school, junior high, and high school, and were best friends. From 1975 to 1978 I worked alongside him at MITS as Executive Vice President and General Manager, which involved supervising the work of Paul and Bill. At one point, I asked Paul and Bill about Microsoft’s ultimate goal. Their response came quickly and succinctly, viz., “… Microsoft wants to supply software for all of the microcomputers in the world…” This unequivocal mission statement was to serve Microsoft well in the years to come. It also resulted in Microsoft staking a claim to what would later become a hotly contested domain. Such a clear vision was surprising not only for its breadth and scope but also because they were 18 and 20, respectively, at the time. While Bill and Paul collaborated on software development early on, over time Bill’s focus shifted primarily to business development and Paul’s to software development.

Schirrmacher's Heritage

Andrian Kreye
[2.14.18]

A book of essays claims interpretive authority, carries the militant title "Reclaim Autonomy," and demands self-empowerment.
 

The questions of how science and technology are transforming life and society are among the greatest intellectual challenges, yet ones that surprisingly few of today's intellectuals take on. One of the first to do so was FAZ editor Frank Schirrmacher, who died in 2014. So it was not only a gesture of respect but also an attempt at a programmatic continuation when the publisher of the weekly Freitag, Jakob Augstein, dedicated a symposium on digital debate to Frank Schirrmacher.
