BRAIN TIME

David M. Eagleman [6.23.09]

Your brain, after all, is encased in darkness and silence in the vault of the skull. Its only contact with the outside world is via the electrical signals exiting and entering along the super-highways of nerve bundles. Because different types of sensory information (hearing, seeing, touch, and so on) are processed at different speeds by different neural architectures, your brain faces an enormous challenge: what is the best story that can be constructed about the outside world?

DAVID M. EAGLEMAN is director of Baylor College of Medicine's Laboratory for Perception and Action, whose long-range goal is to understand the neural mechanisms of time perception. He also directs BCM's Initiative on Law, Brains, and Behavior, which seeks to determine how new discoveries in neuroscience will change our laws and criminal justice system. He is the author of Sum: Forty Tales from the Afterlives and Wednesday Is Indigo Blue: Discovering the Brain of Synesthesia.


From WHAT'S NEXT?
Dispatches on the Future of Science
Edited By Max Brockman 


BRAIN TIME

[DAVID M. EAGLEMAN:] At some point, the Mongol military leader Kublai Khan (1215–94) realized that his empire had grown so vast that he would never be able to see what it contained. To remedy this, he commissioned emissaries to travel to the empire's distant reaches and convey back news of what he owned. Since his messengers returned with information from different distances and traveled at different rates (depending on weather, conflicts, and their fitness), the messages arrived at different times. Although no historians have addressed this issue, I imagine that the Great Khan was constantly forced to solve the same problem a human brain has to solve: what events in the empire occurred in which order?

Your brain, after all, is encased in darkness and silence in the vault of the skull. Its only contact with the outside world is via the electrical signals exiting and entering along the super-highways of nerve bundles. Because different types of sensory information (hearing, seeing, touch, and so on) are processed at different speeds by different neural architectures, your brain faces an enormous challenge: what is the best story that can be constructed about the outside world?

The days of thinking of time as a river—evenly flowing, always advancing—are over. Time perception, just like vision, is a construction of the brain and is shockingly easy to manipulate experimentally. We all know about optical illusions, in which things appear different from how they really are; less well known is the world of temporal illusions. When you begin to look for temporal illusions, they appear everywhere. In the movie theater, you perceive a series of static images as a smoothly flowing scene. Or perhaps you've noticed when glancing at a clock that the second hand sometimes appears to take longer than normal to move to its next position—as though the clock were momentarily frozen.

More subtle illusions can be teased out in the laboratory. Perceived durations are distorted during rapid eye movements, after watching a flickering light, or simply when an "oddball" is seen in a stream of repeated images. If we inject a slight delay between your motor acts and their sensory feedback, we can later make the temporal order of your actions and sensations appear to reverse. Simultaneity judgments can be shifted by repeated exposure to nonsimultaneous stimuli. And in the laboratory of the natural world, distortions in timing are induced by narcotics such as cocaine and marijuana or by such disorders as Parkinson's disease, Alzheimer's disease, and schizophrenia.

Try this exercise: Put this book down and go look in a mirror. Now move your eyes back and forth, so that you're looking at your left eye, then at your right eye, then at your left eye again. When your eyes shift from one position to the other, they take time to move and land on the other location. But here's the kicker: you never see your eyes move. What is happening to the time gaps during which your eyes are moving? Why do you feel as though there is no break in time while you're changing your eye position? (Remember that it's easy to detect someone else's eyes moving, so the answer cannot be that eye movements are too fast to see.)

All these illusions and distortions are consequences of the way your brain builds a representation of time. When we examine the problem closely, we find that "time" is not the unitary phenomenon we may have supposed it to be. This can be illustrated with some simple experiments: for example, when a stream of images is shown over and over in succession, an oddball image thrown into the series appears to last for a longer period, although presented for the same physical duration. In the neuroscientific literature, this effect was originally termed a subjective "expansion of time," but that description begs an important question of time representation: when durations dilate or contract, does time in general slow down or speed up during that moment? If a friend, say, spoke to you during the oddball presentation, would her voice seem lower in pitch, like a slowed-down record?

If our perception works like a movie camera, then when one aspect of a scene slows down, everything should slow down. In the movies, if a police car launching off a ramp is filmed in slow motion, not only will it stay in the air longer but its siren will blare at a lower pitch and its lights will flash at a lower frequency. An alternative hypothesis suggests that different temporal judgments are generated by different neural mechanisms—and while they often agree, they are not required to. The police car may seem suspended longer, while the frequencies of its siren and its flashing lights remain unchanged.

Available data support the second hypothesis.[1] Duration distortions are not the same as a unified time slowing down, as it does in movies. Like vision, time perception is underpinned by a collaboration of separate neural mechanisms that usually work in concert but can be teased apart under the right circumstances.

This is what we find in the lab, but might something different happen during real-life events, as in the common anecdotal report that time "slows down" during brief, dangerous events such as car accidents and robberies? My graduate student Chess Stetson and I decided to turn this claim into a real scientific question, reasoning that if time as a single unified entity slows down during fear, then this slow motion should confer a higher temporal resolution—just as watching a hummingbird in slow-motion video allows finer temporal discrimination upon replay at normal speed, because more snapshots are taken of the rapidly beating wings.

We designed an experiment in which participants could see a particular image only if they were experiencing such enhanced temporal resolution. We leveraged the fact that the visual brain integrates stimuli over a small window of time: if two or more images arrive within a single window of integration (usually under one hundred milliseconds), they are perceived as a single image. For example, the toy known as a thaumatrope may have a picture of a bird on one side of its disc and a picture of a tree branch on the other; when the toy is wound up and spins so that both sides of the disc are seen in rapid alternation, the bird appears to be resting on the branch. We decided to use stimuli that rapidly alternated between images and their negatives. Participants had no trouble identifying the image when the rate of alternation was slow, but at faster rates the images perceptually overlapped, just like the bird and the branch, with the result that they fused into an unidentifiable background.

To accomplish this, we engineered a device (the perceptual chronometer) that alternated randomized digital numbers and their negative images at adjustable rates. Using this, we measured participants' threshold frequencies under normal, relaxed circumstances. Next, we harnessed participants to a platform that was then winched fifteen stories above the ground. The perceptual chronometer, strapped to the participant's forearm like a wristwatch, displayed random numbers and their negative images alternating just a bit faster than the participant's determined threshold. Participants were released and experienced free fall for three seconds before landing (safely!) in a net. During the fall, they attempted to read the digits. If higher temporal resolution were experienced during the free fall, the alternation rate should appear slowed, allowing for the accurate reporting of numbers that would otherwise be unreadable.[2]
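To make the logic of the prediction concrete, here is a minimal sketch in Python, assuming a simple integration-window model of visual fusion; the window size, threshold, and names are illustrative assumptions, not the actual experimental code.

```python
# Minimal sketch of the readability prediction, assuming a simple
# integration-window model of visual fusion. The window size, threshold,
# and names are illustrative assumptions, not the actual experimental code.

INTEGRATION_WINDOW_MS = 100.0  # images arriving within this window fuse into one

def digit_is_readable(alternation_period_ms: float,
                      integration_window_ms: float = INTEGRATION_WINDOW_MS) -> bool:
    """A digit and its negative cancel out when both fall inside one
    integration window, i.e. when the alternation period is shorter
    than the window."""
    return alternation_period_ms >= integration_window_ms

# In this toy model the measured threshold is the window itself; the display
# in free fall runs "just a bit faster" than that threshold.
threshold_period_ms = INTEGRATION_WINDOW_MS
free_fall_period_ms = 0.9 * threshold_period_ms

print(digit_is_readable(free_fall_period_ms))                             # False
# If fear genuinely doubled temporal resolution, the effective window would
# halve and the same display should become readable:
print(digit_is_readable(free_fall_period_ms, INTEGRATION_WINDOW_MS / 2))  # True
```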

The result? Participants weren't able to read the numbers in free fall any better than in the laboratory. This was not because they closed their eyes or didn't pay attention (we monitored for that) but because they could not, after all, see time in slow motion (or in "bullet time," like Neo in The Matrix). Nonetheless, their perception of the elapsed duration itself was greatly affected. We asked them to retrospectively reproduce the duration of their fall using a stopwatch. ("Re-create your free fall in your mind. Press the stopwatch when you are released, then press it again when you feel yourself hit the net.") Here, consistent with the anecdotal reports, their duration estimates of their own fall were a third greater, on average, than their re-creations of the fall of others.

How do we make sense of the fact that participants in free fall reported a duration expansion yet gained no increased discrimination capacities in the time domain during the fall? The answer is that time and memory are tightly linked. In a critical situation, a walnut-size area of the brain called the amygdala kicks into high gear, commandeering the resources of the rest of the brain and forcing everything to attend to the situation at hand. When the amygdala gets involved, memories are laid down by a secondary memory system, providing the later flashbulb memories of post-traumatic stress disorder. So in a dire situation, your brain may lay down memories in a way that makes them "stick" better. Upon replay, the higher density of data would make the event appear to last longer. This may be why time seems to speed up as you age: you develop more compressed representations of events, and the memories to be read out are correspondingly impoverished. When you are a child, and everything is novel, the richness of the memory gives the impression of increased time passage—for example, when looking back at the end of a childhood summer.

To further appreciate how the brain builds its perception of time, we have to understand where signals are in the brain, and when. It has long been recognized that the nervous system faces the challenge of feature-binding—that is, keeping an object's features perceptually united, so that, say, the redness and the squareness do not bleed off a moving red square. That feature-binding is usually performed correctly would not come as a surprise were it not for our modern picture of the mammalian brain, in which different kinds of information are processed in different neural streams. Binding requires coordination—not only among different senses (vision, hearing, touch, and so on) but also among different features within a sensory modality (within vision, for example: color, motion, edges, angles, and so on).

But there is a deeper challenge the brain must tackle, without which feature-binding would rarely be possible. This is the problem of temporal binding: the assignment of the correct timing of events in the world. The challenge is that different stimulus features move through different processing streams and are processed at different speeds. The brain must account for speed disparities between and within its various sensory channels if it is to determine the timing relationships of features in the world.

What is mysterious about the wide temporal spread of neural signals is the fact that humans have quite good resolution when making temporal judgments. Two visual stimuli can be accurately deemed simultaneous down to five milliseconds, and their order can be assessed down to twenty-millisecond resolutions. How is the resolution so precise, given that the signals are so smeared out in space and time?

To answer this question, we have to look at the tasks and resources of the visual system. As one of its tasks, the visual system—couched in blackness, at the back of the skull—has to get the timing of outside events correct. But it has to deal with the peculiarities of the equipment that supplies it: the eyes and parts of the thalamus. These structures feeding into the visual cortex have their own evolutionary histories and idiosyncratic circuitry. As a consequence, signals become spread out in time from the first stages of the visual system (for example, based on how bright or dim the object is).

So if the visual brain wants to get events correct timewise, it may have only one choice: wait for the slowest information to arrive. To accomplish this, it must wait about a tenth of a second. In the early days of television broadcasting, engineers worried about the problem of keeping audio and video signals synchronized. Then they accidentally discovered that they had around a hundred milliseconds of slop: as long as the signals arrived within this window, viewers' brains would automatically resynchronize the signals; outside that tenth-of-a-second window, it suddenly looked like a badly dubbed movie.

This brief waiting period allows the visual system to discount the various delays imposed by the early stages; however, it has the disadvantage of pushing perception into the past. There is a distinct survival advantage to operating as close to the present as possible; an animal does not want to live too far in the past. Therefore, the tenth-of-a-second window may be the smallest delay that allows higher areas of the brain to account for the delays created in the first stages of the system while still operating near the border of the present. This window of delay means that awareness is postdictive, incorporating data from a window of time after an event and delivering a retrospective interpretation of what happened.[3]

Among other things, this strategy of waiting for the slowest information has the great advantage of allowing object recognition to be independent of lighting conditions. Imagine a striped tiger coming toward you under the forest canopy, passing through successive patches of sunlight. Imagine how difficult recognition would be if the bright and dim parts of the tiger caused incoming signals to be perceived at different times. You would perceive the tiger breaking into different space-time fragments just before you became aware that you were the tiger's lunch. Somehow the visual system has evolved to reconcile different speeds of incoming information; after all, it is advantageous to recognize tigers regardless of the lighting.

This hypothesis—that the system waits to collect information over the window of time during which it streams in—applies not only to vision but more generally to all the other senses. Whereas we have measured a tenth-of-a-second window of postdiction in vision, the breadth of this window may be different for hearing or touch. If I touch your toe and your nose at the same time, you will feel those touches as simultaneous. This is surprising, because the signal from your nose reaches your brain well before the signal from your toe. Why didn't you feel the nose-touch when it first arrived? Did your brain wait to see what else might be coming up in the pipeline of the spinal cord until it was sure it had waited long enough for the slower signal from the toe? Strange as that sounds, it may be correct.

It may be that a unified polysensory perception of the world has to wait for the slowest overall information. Given conduction times along limbs, this leads to the bizarre but testable suggestion that tall people may live further in the past than short people. The consequence of waiting for temporally spread signals is that perception becomes something like the airing of a live television show. Such shows are not truly live but are delayed by a small window of time, in case editing becomes necessary.
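To put rough numbers on these conduction delays, here is a back-of-the-envelope sketch; the conduction velocity and path lengths are ballpark illustrative values, not measurements from any particular person or study.

```python
# Back-of-the-envelope sketch of conduction delays along the body, to make
# the "tall people live further in the past" point concrete. The velocity
# and path lengths below are rough illustrative values, not measurements.

CONDUCTION_VELOCITY_M_S = 60.0   # ballpark for touch (A-beta) fibers

def conduction_delay_ms(path_length_m: float,
                        velocity_m_s: float = CONDUCTION_VELOCITY_M_S) -> float:
    """Time for a touch signal to travel the given path length to the brain."""
    return path_length_m / velocity_m_s * 1000.0

for label, path_m in [("nose to brain", 0.1),
                      ("toe to brain, shorter person", 1.5),
                      ("toe to brain, taller person", 1.9)]:
    print(f"{label}: ~{conduction_delay_ms(path_m):.0f} ms")

# nose to brain: ~2 ms
# toe to brain, shorter person: ~25 ms
# toe to brain, taller person: ~32 ms
# A unified "now" must wait at least as long as the slowest signal,
# so a longer body implies a slightly longer wait.
```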

Waiting to collect all the information solves part of the temporal-binding problem, but not all of it. A second problem is this: if the brain collects information from different senses in different areas and at different speeds, how does it determine how the signals are supposed to line up with one another? To illustrate the problem, snap your fingers in front of your face. The sight of your fingers and the sound of the snap appear simultaneous. But it turns out that impression is laboriously constructed by your brain. After all, your hearing and your vision process information at different speeds. A gun is used to start sprinters, instead of a flash, because you can react faster to a bang than to a flash. This behavioral fact has been known since the 1880s and in recent decades has been corroborated by physiology: the cells in your auditory cortex can change their firing rate more quickly in response to a bang than your visual cortex cells can in response to a flash.

The story seems as though it should be wrapped up here. Yet when we go outside the realm of motor reactions and into the realm of perception (what you report you saw and heard), the plot thickens. When it comes to awareness, your brain goes through a good deal of trouble to perceptually synchronize incoming signals that were synchronized in the outside world. So a firing gun will seem to you to have banged and flashed at the same time. (At least when the gun is within thirty meters; past that, the different speeds of light and sound cause the signals to arrive too far apart to be synchronized.)
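The thirty-meter figure follows from a quick calculation, sketched below, assuming the same roughly hundred-millisecond resynchronization window described earlier; light's travel time over such distances is negligible, so the lag is essentially the sound's travel time.

```python
# Quick check on the thirty-meter figure, assuming a ~100 ms perceptual
# resynchronization window. Speed of sound is the standard ~343 m/s at
# room temperature; light's delay over these distances is negligible.

SPEED_OF_SOUND_M_S = 343.0
RESYNC_WINDOW_MS = 100.0

def audio_visual_lag_ms(distance_m: float) -> float:
    """Approximate lag of the bang behind the flash at a given distance."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

for d in (10, 30, 50):
    lag = audio_visual_lag_ms(d)
    verdict = "perceived together" if lag <= RESYNC_WINDOW_MS else "bang trails the flash"
    print(f"{d} m: lag ~{lag:.0f} ms -> {verdict}")

# 10 m: lag ~29 ms -> perceived together
# 30 m: lag ~87 ms -> perceived together
# 50 m: lag ~146 ms -> bang trails the flash
```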

But given that the brain received the signals at different times, how can it know what was supposed to be simultaneous in the outside world? How does it know that a bang didn't really happen before a flash? It has been shown that the brain constantly recalibrates its expectations about arrival times. And it does so by starting with a single, simple assumption: if it sends out a motor act (such as a clap of the hands), all the feedback should be assumed to be simultaneous and any delays should be adjusted until simultaneity is perceived. In other words, the best way to predict the expected relative timing of incoming signals is to interact with the world: each time you kick or touch or knock on something, your brain makes the assumption that the sound, sight, and touch are simultaneous.

While this is a normally adaptive mechanism, we have discovered a strange consequence of it: Imagine that every time you press a key, you cause a brief flash of light. Now imagine we sneakily inject a tiny delay (say, two hundred milliseconds) between your key-press and the subsequent flash. You may not even be aware of the small, extra delay. However, if we suddenly remove the delay, you will now believe that the flash occurred before your key-press, an illusory reversal of action and sensation. Your brain tells you this, of course, because it has adjusted to the timing of the delay.
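A toy adaptation model makes the illusory reversal easy to see; the baseline lag, learning rate, and class name below are illustrative assumptions, not a claim about the underlying neural mechanism.

```python
# Toy model of sensorimotor recalibration: the brain keeps a running estimate
# of the lag it should treat as "simultaneous" and nudges it toward whatever
# lag it actually observes. All numbers here are illustrative assumptions.

class TimingCalibrator:
    def __init__(self, expected_lag_ms: float = 50.0, learning_rate: float = 0.1):
        self.expected_lag_ms = expected_lag_ms  # lag currently treated as simultaneous
        self.learning_rate = learning_rate

    def observe(self, actual_lag_ms: float) -> float:
        """Return the perceived lag (negative means the flash seems to come
        before the key-press), then adapt the expectation toward the data."""
        perceived = actual_lag_ms - self.expected_lag_ms
        self.expected_lag_ms += self.learning_rate * perceived
        return perceived

brain = TimingCalibrator()

# Phase 1: many key-presses, each followed by a flash 200 ms later than usual
# (50 ms natural lag plus the 200 ms injected delay).
for _ in range(100):
    brain.observe(actual_lag_ms=250.0)

# Phase 2: the injected delay is removed; only the natural 50 ms lag remains.
print(f"perceived lag: {brain.observe(actual_lag_ms=50.0):.0f} ms")
# perceived lag: -200 ms  ->  the flash now seems to precede the key-press
```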

Note that the recalibration of subjective timing is not a party trick of the brain; it is critical to solving the problem of causality. At bottom, causality requires a temporal order judgment: did my motor act come before or after that sensory signal? The only way this problem can be accurately solved in a multisensory brain is by keeping the expected time of signals well calibrated, so that "before" and "after" can be accurately determined even in the face of different sensory pathways of different speeds.

It must be emphasized that everything I've been discussing is in regard to conscious awareness. It seems clear from preconscious reactions that the motor system does not wait for all the information to arrive before making its decisions but instead acts as quickly as possible, before the participation of awareness, by way of fast subcortical routes. This raises a question: what is the use of perception, especially since it lags behind reality, is retrospectively attributed, and is generally outstripped by automatic (unconscious) systems? The most likely answer is that perceptions are representations of information that cognitive systems can work with later. Thus it is important for the brain to take sufficient time to settle on its best interpretation of what just happened rather than stick with its initial, rapid interpretation. Its carefully refined picture of what just happened is all it will have to work with later, so it had better invest the time.

Neurologists can diagnose the variety of ways in which brains can be damaged, shattering the fragile mirror of perception into unexpected fragments. But one question has gone mostly unasked in modern neuroscience: what do disorders of time look like? We can roughly imagine what it is like to lose color vision, or hearing, or the ability to name things. But what would it feel like to sustain damage to your time-construction systems?

Recently, a few neuroscientists have begun to consider certain disorders—for example, in language production or reading—as potential problems of timing rather than disorders of language as such. For example, stroke patients with language disorders are worse at distinguishing different durations, and reading difficulties in dyslexia may be problems with getting the timing right between the auditory and visual representations.[4]

We have recently discovered that a deficit in temporal-order judgments may underlie some of the hallmark symptoms of schizophrenia, such as misattributions of credit ("My hand moved, but I didn't move it") and auditory hallucinations, which may be an order reversal of the generation and hearing of normal internal monolog.

As the study of time in the brain moves forward, it will likely uncover many contact points with clinical neurology. At present, most imaginable disorders of time would be lumped into a classification of dementia or disorientation, catch-all diagnoses that miss the important clinical details we hope to discern in coming years.

Finally, the more distant future of time research may change our views of other fields, such as physics. Most of our current theoretical frameworks include the variable t in a Newtonian, river-flowing sense. But as we begin to understand time as a construction of the brain, as subject to illusion as the sense of color is, we may eventually be able to remove our perceptual biases from the equation. Our physical theories are mostly built on top of our filters for perceiving the world, and time may be the most stubborn filter of all to budge out of the way.


[1] V. Pariyadath and D. M. Eagleman, "The Effect of Predictability on Subjective Duration," PLoS ONE (2007).

[2] A critical point is that the speed at which one can discriminate alternating patterns is not limited by the eyes themselves, since retinal ganglion cells have extremely high temporal resolution. For more details on this study, see C. Stetson et al., "Does Time Really Slow Down During a Frightening Event?" PLoS ONE (2007).

[3] We introduced the term postdiction in 2000 to describe the brain's act of collecting information well after an event and then settling on a perception (D. M. Eagleman and T. J. Sejnowski, "Motion Integration and Postdiction in Visual Awareness," Science 287 (2000): 2036–38).

[4] R. Efron, "Temporal Perception, Aphasia, and Déjà Vu," Brain 86 (1963): 403–24; M. M. Merzenich et al., "Temporal Processing Deficits of Language-Learning Impaired Children Ameliorated by Training," Science 271, no. 5245 (1996): 77–81.