WHAT'S THE QUESTION ABOUT YOUR FIELD THAT YOU DREAD BEING ASKED?
"Don't ask."
What question about your field do you dread being asked? Maybe it's a sore point: your field should have an answer (people think you do) but there isn't one yet. Perhaps it's simple to pose but hard to answer. Or it's a question that belies a deep misunderstanding: the best answer is to question the question.
For me, it would be "Why are there recessions?" Don't ask.
SENDHIL MULLAINATHAN is Professor of Economics at Harvard. In 2011, he was appointed by the U.S. Treasury Department to serve as Assistant Director for Research at The Consumer Financial Protection Bureau (CFPB).
[ED. NOTE: Word Length for comments: up to 500 words.]
CONTRIBUTORS: Raj Chetty, Lawrence Krauss *, Adam Alter *, George Dyson, Jens Ludwig, Emanuel Derman, Scott Atran, Jaron Lanier, Haim Harari, Richard Thaler, Paul Bloom, Michael Norton *, Samuel Arbesman *, Lillian Lee *, Jon Kleinberg *, Nicholas Epley *, Susan Blackmore, Nicholas Epley
* Sendhil Mullainathan responds
Reality Club Discussion
Why do American students perform poorly relative to students in other countries and how can we fix education in the U.S.?
Here are three questions that come to mind, which I dread answering as an economist working on policy issues:
1. If you were in charge, what policies would you enact today to raise growth rates and incomes for the average family in America?
2. Why do American students perform poorly relative to students in other countries and how can we fix education in the U.S.?
3. When are house prices going to recover to pre-recession levels?
What happened before the big bang?
This is a frustrating question because first, it presumes that just because we know there was a big bang, we understand it back to t=0 or 'before'. This is like presuming evolutionary biology can explain the origin of life itself, which it cannot, since we need to know chemistry and the conditions of the early earth to address it. Similarly, we simply don't have a physical theory that works back to t=0 so we cannot answer the question. But more than that, all of our current ideas suggest, alas, that it isn't a good question!
Saying that always frustrates the listener, but science often tells us that naive questions aren't good questions, because they make presumptions that are unwarranted. Because space and time are coupled to matter and energy in general relativity, it is eminently plausible that if space spontaneously popped into existence, so did time. Namely, time and space are both classical concepts that may have a limited domain of validity, and so the question of what was 'before the big bang' may simply be a bad question, because there was no 'before'! Time didn't exist.
That answer of course makes no one happy, but it may be true. As I often say, the business of the universe is not to make you happy.
"How do we know that other people experience the world the same way we do?"
The question I dread being asked is, "How do we know that other people experience the world the same way we do?"
When people learn that I'm colorblind, it often dawns on them to question whether they see the world the same way other people do. It's an old question that's difficult to answer, made more difficult because we're poorly engineered to recognize that our own view of the world isn't veridical.
Social psychologists have documented this so-called "false-consensus effect" in dozens of domains, showing, for example, that we overestimate the overlap between our own and others' political beliefs, moral sensibilities, judgments, and decisions. If we struggle to imagine that other people might hold different opinions and tastes from our own (subjective matters that admit many valid positions), how are we supposed to grasp the much harder question of whether we all share the same basic sensory and perceptual representation of the world?
The question has vexed philosophers for millennia, but early psychologists struggled as well. Many Behaviorists, who dominated the field for several decades during the early- and mid-twentieth century, found the question so overwhelming and incapable of being tackled rigorously that they petulantly declared subjective introspection irrelevant to psychological science. Instead, they restricted their inquiries to visible, measurable responses, refusing to consider the woolier (but unarguably interesting) concepts of love, happiness, and the self. We've made plenty of progress since picking up the thread in the mid-twentieth century, but psychologists still can't tell you whether your version of a blue sky matches the version experienced by billions of other people who share your basic visual and neural apparatus.
"Who is the next Alan Turing?"
As a technology historian, I get asked "Who is the next Alan Turing?" or "Who is the next John von Neumann?" It is impossible to say! No one expected Alan Turing to be the next Alan Turing, or John von Neumann to be the next John von Neumann. Turing was 24, and von Neumann 26, upon arrival in the United States. I'm at a loss for an answer—except to point out that we gave both of them visas, during a 1930s job market that was just as bad as today.
"Why in the world is there so much violence in Chicago right now?"
As an economist who studies crime and works at the University of Chicago, the question I most dread being asked is: "Why in the world is there so much violence in Chicago right now?"
One reason I dread this question is that my initial instinct is always to say "Chicago's current homicide rate per 100,000 people (homicide being the best measured of all crimes) is on the order of half of what it was in the early 1990s, at the peak of the crack epidemic." Even though that's true, I hate to say it because it trivializes the problem we have today in Chicago, where the city's overall homicide rate (in recent years ranging between 15 and 18 per 100,000 people) is still about 5 to 10 times what we tend to see in other developed nations. Moreover, the crime drop in Chicago has been quite uneven; I live near the University of Chicago in what the Chicago PD calls the "Prairie" police district, where the homicide rate dropped by over 90% from 1990 to 2010. In the high-crime Austin district on the west side of the city, the homicide rate over this period dropped by 50%, which is a lot, but Austin's homicide rate today is about the same as what you see in the country of Colombia. In the Harrison district, right next to Austin, the homicide rate today is nearly identical to what it was in the early 1990s, comparable to that of El Salvador. There are lots of low-income moms trying to raise their kids in Harrison today who never let their kids leave the house, and who are looking around and asking, "What in the world are all these people talking about with this 'crime drop' in America?"
Another reason I hate being asked this question is that it gets people to focus on reasons why the people who live in Chicago are prone to violence. But if you compare rates of violent crime in general per 100,000 residents, Chicago doesn't have so many more robberies or assaults or whatever compared to cities in other developed countries. What we do have is a much higher rate of homicides, with most of the difference across cities coming from differences in the rate of gun homicides. That is, we don't have a crime problem, or a violence problem; we have a gun violence problem.
A third reason I hate being asked this question is that people usually follow up on my point about gun violence being the real problem that makes Chicago (and America in general) unusual by saying, "But isn't that problem hopeless, given that we have over 300 million guns already in circulation and more than half of Congress acts like a paid subsidiary of the NRA?" It's true we have a lot of guns, but most of those guns are in the hands of middle-aged, middle-class people living in mostly rural and to some extent suburban areas; that is, people at statistically low risk of misusing those guns. Moreover, for the most part they tend to hold on to their guns for a fairly long time. On the other hand, gun crime is disproportionately committed by young people between the ages of, say, 15 and 25. Put differently, most "criminal careers" are fairly short. There is always a new generation of young people entering their high-risk years who have to solve anew the problem of how to get a gun. If you think about the U.S. as a giant bathtub filled with 300 million guns, from a policy perspective we don't need to worry about the whole tub; we just need to worry about the drain, that is, the smaller and more manageable subset of guns that are changing hands each year. (Operations research people would say "don't worry about the stock of guns, just worry about the flow.")
I think there are a bunch of pragmatic things on the law enforcement side that we could do to disrupt the flow of guns into high-risk hands, such as "buy and bust" operations in the underground gun market, which I think would be more effective than this strategy has been in the illegal drug market, for reasons that I won't get into here. But most of the public attention, particularly after the Newtown, Connecticut tragedy, has been on new firearm regulations, that is, gun control. I always tell people that while it's true that we don't have a surplus of success stories of city and state gun regulations that have dramatically reduced gun crime, borders are porous: guns flow easily across city and state boundaries, which makes it hard for individual jurisdictions to regulate their way out of this problem unilaterally. The problem of gun violence is a lot like, say, air quality: what happens in one state matters a lot for what happens in nearby places. My hypothesis is that serious regulation at the federal level (for example, President Obama's proposal for a new universal background check requirement) may be more effective than what we've seen with city- and state-level regulations. There are a few data points here and there to support that hypothesis. For example, FDR's national ban on machine guns seems to have done some good (when was the last time anyone robbed a bank with a Tommy gun?), and state-level gun laws enacted by Hawaii, the one state that doesn't have to worry so much about cross-state gun trafficking, seem to have done some good. But this is still just a hypothesis, albeit one with pretty high stakes.
How does one justify having worked and continuing to work in the financial sector?
As a financial engineer for the past twenty-five years, the question I have increasingly come to hate being asked is: Wouldn't the world be better off if people like you used their skills on real scientific and engineering problems? Given the taxpayer-funded bailouts of the past few years, the crony capitalism, the refusal to prosecute banks for money laundering, I ask myself this question too. How does one justify having worked and continuing to work in the financial sector?
Some people answer this question by claiming that financial engineering makes markets more efficient. That may or may not be true, but to me efficiency isn't a goal. Here are some of my answers.
First, I think everything on earth is interesting and worthy of understanding, provided that you try to understand it honestly, provided that you try to see the world as it really is.
Second, like most people I have some measure of ambition and vanity. In 1950, Norman Mailer commented in a letter to William Styron:
"I didn't write [The] Naked [and the Dead] because I wanted to say war was horrible, or that history is complex, resistant, and almost inscrutable, or because I wanted to say that the coming battle between the naked fanatics and the dead mass was approaching, but because really what I wanted to say was, "Look at me, Norman Mailer, I'm alive, I'm a genius, I want people to know that; I'm a cripple, I want to hide that," and so forth."
Mailer's "Look at me" is a good part of what drives any writer or scientist, and what drives capitalists too, to do what they're good at.
Third, I feel it's my duty to try to convey what I understand from experience, namely that financial models are attempts to describe the relatively calm ripples on a vast and ill-understood sea of violent and ephemeral human passions, using variables such as volatility and liquidity that are crude but quantitative proxies for complex human behaviors. Such models are roughly reliable only as long as the world doesn't change too much. When it does, when crowds panic, anything can happen. Models are therefore useful, but intrinsically fallible. To confuse a model with the world of humans is a form of idolatry, and dangerous. This understanding, I hope, will enable practitioners and academics to use models more wisely.
"When will there be an end to war?"
As an anthropologist who studies political and religious violence, I'm often asked a question I dread: "When will there be an end to war?" Despite the popular myth propagated by Konrad Lorenz that killing members of one's own species is rare, and equally cheery fairytales that some of our hominid ancestors (Ralph Solecki's Neanderthals) or previous cultures and civilizations (Sergei Eisenstein's Maya) were true "flower children," the comparative zoological, paleo-archaeological, historical, and cross-cultural evidence overwhelmingly indicates that social mammals regularly kill their own (felines, canines, monkeys, apes), and that mass murder of members of human outgroups is a recurrent complement of intense ingroup love. Will this recurrence ever end, or at least significantly abate and decline in the magnitude of its death and destruction?
In The Wealth of Nations, Adam Smith lauded the development of war-making among the "civilized nations," such as Great Britain, as "the noblest of all arts" because this would supposedly prevent the more indiscriminately murderous impulses of "barbarians" from threatening the social and political tranquility necessary for regular commerce between peoples. In fact, the near monopoly of ever more high-performance lethal weapons by state power has led to a drastic reduction in interpersonal violence within nations, and to significantly fewer wars between nations. Only in the last 70 years (since WWII), however, has there been a marked leveling off in what had been (since the 16th century) a geometrically rising rate (following a power law distribution) of civilian casualties, social dislocations, and collapse of existing political orders associated with each successive major international war. But also since WWII, committed revolutionaries and terrorists who gained relatively easy access to medium-performance weapons through the unregulated international arms trade have, on average, been able to resist and overcome state forces with up to ten times greater manpower and firepower. This is because committed "devoted actors" are willing to carry out actions for a cause, independently of calculated risks, costs, consequences or likely prospects of success: they are ideologically blind to exit strategies; whereas state actors (military and police) much more readily respond to reward structures that appeal to "rational actor" calculations, such as better pay or promotion.
Today, we find that economically fragile and failed states, and some terrorist groups, are seeking nuclear and biological weapons of mass destruction to break the monopoly on large-scale war-making capacity that the Great Powers have enjoyed. In the Middle East, the world's most volatile region, a covert nuclear arms race has already begun. Some scholars (Steven Pinker) conjecture that the intermittent but generally upward progress of reason, empathy, tolerance of diversity, and human rights since the Enlightenment days of Adam Smith is now poised to permanently cap the upward jerk of massive death and destruction in international wars that appears to have stalled since WWII, as well as to reduce and contain smaller-scale wars that might otherwise provoke large-scale wars.
That China and Russia seem to side with America to constrain North Korea's nuclear war fantasies goes in this direction. But when Egyptian leaders tell me that they need nukes to parry the power of the West (and that Iran and Turkey might as well also have them) so that Islam will be able to compete on a level power field to inevitably win humankind's salvation, hope fades that the violent eruptions associated with conflicts between the great ideological -isms spawned by the Enlightenment (colonialism, anarchism, communism, fascism, or even democratic liberalism and now jihadism) will ever remain dormant.
"Where does it happen in the brain?"
I often get some version of this question when I give a public presentation on some aspect of psychology, and my heart sinks when I hear it. I usually don't know the answer. And I usually don't care. As an example, much of my recent work focuses on moral psychology. There is a lot of neuroscientific research in this area, and some of it is terrific. But the value of such research is that it helps us explore interesting and substantive issues, such as the precise influence that emotions such as disgust and anger have on moral judgment. The brute facts about brain localization—that the posterior cingulate gyrus is active during certain sorts of moral deliberation, say—are, in and of themselves, boring.
Most of all, the sort of person who asks this question usually knows nothing about the brain. I could make up an answer—"it's mostly in the flurbus murbus"—and my questioner would be satisfied. What the person really wants is some reassurance that there is true science going on, that the phenomenon that I'm discussing actually exists—and this means that I have to say something specific about the brain. This assumption reflects a fundamental and widespread confusion about the mind and how to study it, and so it's a question that I dread.
"How well do you understand what your cloud software does?"
When a company trumpets its cloud security design, it will eventually be breached. If a cloud service is supposed to be robust and "always on," it will repeatedly go down. When private data is supposed to be segregated, it will sometimes leak.
Cloud software violates our expectations often enough that we are rarely surprised. Perhaps we maintain unrealistic expectations because it is organically impossible for our brains to become completely cynical.
If I am asked to explain exactly how a cloud design will perform, I cannot answer, but I am also not perturbed. I can fall back on an inexhaustible spectrum of excuses. There are theorems that limit our ability to understand software perfectly, like the famous undecidability of the halting problem, and one can always blame the human element, the bull in our china shop.
However, engineering is ostensibly going on, and it might seem reasonable to a questioner to press for at least a predictable rate of failure. Other schools of engineering generally provide such guidance. Why is it so difficult in computational designs?
Before engineering can yield predictable results, enough variables must be tied down that an expectation can be fully specified. In the case of information, the most specified situation is chip design, and it therefore yields the most predictable results. In fact, in that case, specificity is maximized, and the result is Moore's Law, a steady course of improved results.
This is the real difference between software and hardware. We can carefully specify the environment a chip is connected to (a circuit board), while software connects to reality, that vague ideal that neither science nor engineering ever fully conquers.
Fortunately, we can recall cases of unreliable information system designs undergoing reform until they became reliable. For the first couple of decades, personal computers were notoriously unstable. They used to crash all the time. I remember Larry Tesler, the user interface pioneer, crashing an early Mac just by moving the mouse in a certain way, not even clicking.
The tendency to crash is a result of what I call the "brittleness" of software. It breaks before it bends. A tiny error can have great consequences.
Naturally enough, when we gain more visibility into how an information system is connecting to its environment, it becomes possible to make it more reliable. The first cloud service most consumers connected to was an error reporting service to help a computer company debug its operating system. Both Apple and Microsoft initiated such services as soon as many of their customers went online. The cloud provided visibility into software failures. There was finally a way to close the empirical loop in the real world. As a result, modern personal devices rarely crash. I have talked to young people who have never seen a personal device crash.
What the cloud gives, the cloud takes away. While devices have stopped crashing, for the most part, the way they connect together has become unreliable. Brittleness reigns anew. It is now completely commonplace for devices to connect when they shouldn't (violating privacy and security desires) or to fail to connect when they should (as when you can't get onto a cloud service, or when it's hard to back up your data).
So naturally, we seek to close the empirical loop once again. So far, we haven't done so. We imagine the cloud as an all-seeing eye, but actually it is haunted by blind spots. It does not provide us with adequate visibility into its own failures, in the way it looked down upon lowly PCs to tell us about their failures.
Clouds spawn more diversity in the environment surrounding our devices than clouds can measure. Therefore, relying on data alone will not be enough. We need to do more than we did to make PCs reliable.
This is a frontier of computer science. Don't ask.
When will there be a single unified "behavioral" theory of economic activity?
Never.
If you want a single, unified theory of economic behavior, we already have the best one available: the selfish, rational agent model. For simplicity and elegance this cannot be beat. Expected utility theory is a great example of such a theory. Von Neumann was no dummy! And if you want to teach someone how to make good decisions under uncertainty, you should teach them to maximize expected utility.
The problem comes if, instead of trying to advise them how to make decisions, you are trying to predict what they will actually do. Expected utility theory is not as good for this task. That is the purpose for which Kahneman and Tversky's descriptive alternative, prospect theory, was invented. But of course prospect theory was incomplete. It did not tell us where the reference point comes from, and did not provide a theory of mental accounting to tell us how people will combine gains and losses in various domains.
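For readers who want the contrast in symbols, here is a minimal textbook-style sketch (the notation is the standard one from Kahneman and Tversky's formulation, not anything specific to this essay): expected utility weighs outcomes by their probabilities, while prospect theory evaluates gains and losses relative to a reference point and distorts the probability weights.

```latex
% Expected utility of a gamble with outcomes x_i and probabilities p_i:
EU = \sum_i p_i \, u(x_i)
% Prospect theory value, with reference point r, a value function v that is
% concave for gains and convex and steeper for losses, and a probability
% weighting function w that overweights small probabilities:
V = \sum_i w(p_i) \, v(x_i - r)
```

The extra ingredients (the reference point r, the value function v, and the weighting function w) are exactly the pieces that, as the next paragraph argues, have to be pinned down by evidence rather than treated as free parameters.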
As people make progress filling in the holes in prospect theory we can do a better job of understanding and predicting behavior, but prospect theory was already more complicated than expected utility theory, and adding these dimensions further adds to the complexity. Some people wrongly conclude from this that behavioral economists are just adding degrees of freedom, allowing us to "explain anything".
This is an incorrect conclusion to draw, because these additional complexities are not coming out of thin air to deal with some empirical anomaly; they are based on evidence about how people actually behave.
Just as psychology has no unified theory but rather a multitude of findings and theories, so behavioral economics will have a multitude of theories and variations on those theories. You need to know both physics and engineering to be able to build a structurally sound bridge, and as far as I know there is no general theory of structural engineering. But (most) bridges are still standing. As economics becomes more like engineering, it will become more useful, but it will not have a unified theory.
Sorry.
No one wants to be asked such a question
I would like to propose to you an explanation of why Sendhil's question will not "fly." My answer is serious; it is not a joke or a logical exercise.
Here is the reason:
Q: WHAT'S THE QUESTION ABOUT YOUR FIELD THAT YOU FEAR BEING ASKED?
A: The question about my field that I fear being asked is: "WHAT'S THE QUESTION ABOUT YOUR FIELD THAT YOU FEAR BEING ASKED?"
No one wants to be asked such a question.
Truly !!!
"Isn't That Obvious?"
When I used to teach introductory social psychology, at some point in the semester I let it slip to my students that I was the oldest child in a family of five siblings. Their response was always the same: "You're totally the oldest child!" When I asked why, they came up with an impressive list of supporting attributes (I assume they were primarily positive because I was grading them): "You're smart," "hard-working," "funny," "successful," and on and on.
But then I played a little trick on my students. I told them the truth: I am in fact the youngest of five. Their response? A one-second pause, followed by: "You're totally the youngest child!" Why? "You're smart, hardworking, funny, successful," and on and on again.
Unlike some other branches of science, social science has an unusual curse: regardless of what social scientists discover, laypeople tend to feel that they "knew it all along" such that even the most counterintuitive finding quickly feels, well, obvious. (In contrast, people don't usually think that the general theory of relativity is obvious after their first—or fiftieth—encounter).
Another example from the classroom: I tell students to think of reasons why the phrase "birds of a feather flock together" is true, and then to think of reasons why the phrase "opposites attract" is true. Again, an impressive ability to support the veracity of the statements is evident—despite the fact that the two phrases are in direct contradiction, both feel completely obvious in isolation.
Social scientists refer to this phenomenon as hindsight bias: once we know something, we are unable to not know it—making it feel as though we've always known it. In fact, now that you've learned what hindsight bias is, you probably instantly had the feeling that you've always known about hindsight bias.
Some of the most brilliant and insightful findings in social science are brilliant and insightful precisely because once they are explained to us, they feel deeply and powerfully true—even though one moment before, we believed that the exact opposite was deeply and powerfully true.
An example. It makes perfect sense that the more someone is paid to complete a series of tasks, the more tasks they will complete. All else equal, who wouldn't work harder when being paid twice as much for completing the same task? Well, it turns out that in some cases, paying people more can actually make them work less. A lot less. In particular, when people love some task because they find it intrinsically rewarding, paying them to perform it "crowds out" their motivation. Students who love to study because they enjoy learning study less when they are suddenly paid for their grades; musicians frequently lose their love of music once they "sell out."
Crowding out seems obvious now, right? But it doesn't to the many people who continue to believe that higher and higher wages will invariably lead to better performance. Just like my students, once we hear one truism (birds of a feather) the whole world seems to align; but when we hear another truism (opposites attract), the world instantly realigns.
The thing is, it's rarely all A or all B, despite our inclination to see the world that way. Under some circumstances, birds of a feather do flock together; under others, opposites really do attract. Under some circumstances, paying more does improve performance; under others, it greatly harms it. Rather than see both A and B as true, social scientists explore the moderating and mitigating factors that allow us to predict when each is true. In part, social scientists test whether—and under what conditions—the theories laypeople have about the world are accurate.
Here's how to combat the "isn't this obvious?" dilemma. When I am ruining a party by describing some new research project, I often first describe the results exactly backwards: "People who we told to do X were far more likely to engage in behavior Y than people who we told to do Z." The response is invariable: "We already knew that." When I then inform people that in fact our results show that it was people told to do Z who are more likely to engage in Y, I can prevent hindsight bias, though inevitably at the cost of further ruining the party.
"If we thought some things are true but now know they’re no longer correct, doesn’t this therefore mean that we really don’t know anything at all?"
When it comes to studying the growth of scientific knowledge, one question that’s commonly asked is “If we thought some things are true but now know they’re no longer correct, doesn’t this therefore mean that we really don’t know anything at all?"
I’m unhappy with this question not because it can’t be answered, but rather because it’s so often asked in a Gotcha fashion, and belies a misunderstanding of the scientific process.
Yes, it’s true that science is inherently uncertain and that everything is in a draft form. This is exemplified in a story a professor of mine told me. He once lectured on a certain topic on a Tuesday, only to read a paper the next day that invalidated what he had spoken to the class about the day before. So he went into class on Thursday and said "Remember what I told you on Tuesday? It’s wrong. And if that worries you, you need to get out of science."
But just because science is in a draft form (and scientists rejoice in this) doesn’t mean that we don’t know anything. Clearly our scientific understanding of the world allows us to make predictions about the world, such as in meteorology, or create sophisticated technologies. Our willingness to fly on an airplane is due to more than simply the hope that aerodynamics won’t be invalidated while we’re in the air.
But even more than that, when certain things we thought were true are overturned, we don’t simply return to our previous state of ignorance. We have learned something through this process, and come closer to the truth of the world around us. Isaac Asimov, when writing about humanity’s improved understanding of the shape of the Earth—which we now know to be an oblate spheroid—makes this clear:
"[W]hen people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."
Science is constantly approaching a better understanding of the world. Along the way, much of what we know might be overturned. But that’s okay. Because this improves our knowledge. And thinking it’s anything different misses the excitement of how science works.
"How can we have this much data and still not understand collective human behavior?"
If you want to study the inner workings of a giant organization distributed around the globe, here are two approaches you could follow—each powerful, but very different from each other. First, you could take the functioning of a large multinational corporation as your case study, embed yourself within it, watch people in different roles, and assemble a picture from these interactions. Alternatively, you could do something very different: take the production of articles on Wikipedia as your focus, and download the site's complete edit-by-edit history back to the beginning: every revision to every page, and every conversation between two editors, time-stamped and labeled with the people involved. Whatever happened in the sprawling organization that we think of as Wikipedia—whatever process of distributed self-organization on the Internet it took to create this repository of knowledge—a reflection of it should be present and available in this dataset. And you can study it down to the finest resolution without ever getting up from your couch.
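As a small illustration of just how accessible that history is, here is a minimal sketch in Python using the public MediaWiki API; the article title, the fields requested, and the limit are placeholders chosen for the example, not anything from the essay.

```python
# A minimal sketch: pull the most recent edits to one article via the
# public MediaWiki API. Title, fields, and limit are placeholders.
import requests

API = "https://en.wikipedia.org/w/api.php"

def revision_history(title, limit=20):
    """Return recent revisions (timestamp, editor, edit summary) for one article."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    }
    pages = requests.get(API, params=params, timeout=30).json()["query"]["pages"]
    page = next(iter(pages.values()))  # we asked about a single title
    return page.get("revisions", [])

for rev in revision_history("Battle of Gettysburg", limit=5):
    print(rev["timestamp"], rev.get("user", ""), rev.get("comment", ""))
```

The complete edit-by-edit history mentioned above is also available in bulk from Wikipedia's public database dumps; the API call is just the couch-level version.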
These Wikipedia datasets—and many other sources like them—are completely public; the same story plays out with restricted access if you're a data scientist at Facebook, Amazon, Google, or any of a number of other companies: every conversation within people's interlocking social circles, every purchase, every expression of intent or pursuit of information. And with this hurricane of digital records, carried along in its wake, comes a simple question: How can we have this much data and still not understand collective human behavior?
There are several issues implicit in a question like this. To begin with, it's not about having the data, but about the ideas and computational follow-through needed to make use of it—a distinction that seems particularly acute with massive digital records of human behavior. When you personally embed yourself in a group of people to study them, much of your data-collection there will be guided by higher-level structures: hypotheses and theoretical frameworks that suggest which observations are important. When you collect raw digital traces, on the other hand, you enter a world where you're observing both much more and much less—you see many things that would have escaped your detection in person, but you have much less idea what the individual events mean, and have no a priori framework to guide their interpretation. How do we reconcile such radically different approaches to these questions?
In other words, this strategy of recording everything is conceptually very simple in one sense, but it relies on a complex premise: that we must be able to take the resulting datasets and define richer, higher-level structures that we can build on top of them.
What could a higher-level structure look like? Consider one more example—suppose you have a passion for studying the history of the Battle of Gettysburg, and I offer to provide you with a dataset containing the trajectory of every bullet fired during that engagement, and all the movements and words uttered by every soldier on the battlefield. What would you do with this resource? For example, if you processed the final day of the data, here are three distinct possibilities. First, maybe you would find a cluster of actions, movements, and words that corresponded closely to what we think of as Pickett's Charge, the ill-fated Confederate assault near the close of the action. Second, maybe you would discover that Pickett's Charge was too coarse a description of what happened—that there is a more complex but ultimately more useful way to organize what took place on the final day at Gettysburg. Or third, maybe you wouldn't find anything interesting at all; your analysis might spin its wheels but remain mired in a swamp of data that was recorded at the wrong granularity.
We don't have that dataset for the Battle of Gettysburg, but for public reaction to the 2012 U.S. Presidential Election, or the 2012 U.S. Christmas shopping season, we have a remarkable level of action-by-action detail. And in such settings, there is an effort underway to try defining what the consequential structures might be, and what the current datasets are missing—for even with their scale, they are missing many important things. It's a convergence of researchers with backgrounds in computation, applied mathematics, and the social and behavioral sciences, at the start of what is by every indication a very hard problem. We see glimpses of the structures that can be found—Trending Topics on Twitter, for example, is in effect a collection of summary news events induced by computational means from the sheer volume of raw tweets—but a general attack on this question is still in its very early stages.
And you shouldn't stop there—you can and should ask about the potential dangers of recording, stockpiling, and analyzing so much data on human behavior. In doing so you'd tap into an ongoing conversation bringing together scientists, policy-makers, large Internet sites, and the public, focused on a changing landscape around notions of individual privacy. It's important to push forward on all these questions, because in a world where both the possibilities and the risks are evolving rapidly, the worst thing you can do is not ask.
"Why can't computers understand what people say as well as a five-year-old can?"
Q: Why can't computers understand what people say as well as a five-year-old can? (The same question goes for phones, or websites, or mega-computers that can beat people at Jeopardy and chess—anything that is claimed to be "smart". More about mega-computers later.)
A: You're not giving five-year-olds enough credit.
We take language for granted. We do this because we are immersed in language every day from the day we are born. But even everyday language is incredibly complex and requires access to enormous resources to comprehend. To see this, let's look at a few simple examples. (All but the last are well-known to my research community.)
Suppose it's Sunday, and we ask a flight-booking system, "List all flights on Tuesday". We humans know that this means finding out about all the flights that take off on Tuesday; that is, "on Tuesday" modifies "flights". It rarely occurs to us that there's an alternate interpretation of that sentence: wait until Tuesday, and then list all 10,000 or however many flights there are in the system's database; in that case, "on Tuesday" modifies "list".
But other sentences with the same apparent structure take on the opposite default interpretation: in
"Eat Chinese food with chopsticks,"
"with chopsticks" modifies "eat"—we don't interpret this sentence as a command to order shrimp chow fun laced with chopstick fragments, and then choke the whole thing down, splinters and all.
Once you start looking for these alternate interpretations that your brain hasn't been consciously registering, you start realizing that they are everywhere. How come "After dinner, we all saw a movie" usually means we all saw the same movie, but "After dinner, we all had a beer" usually means we each had separate drinks? (Otherwise, eeeew). And why, in
"Retrieve all the local patient files"
does there not seem to be a single strong default interpretation? It could be the patients that are local, or it could be the files.
To understand even these simple sentences correctly—to see which interpretations are plausible and which aren't—requires knowing what people usually mean and how the world usually works, which is a very tall order.
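To see how a program might even represent the choice, here is a toy sketch (mine, not from the essay; the association scores are invented placeholders standing in for real-world knowledge) in the spirit of lexical-association approaches to prepositional-phrase attachment:

```python
# Toy illustration of prepositional-phrase attachment ambiguity.
# The association scores are invented placeholders standing in for the
# world knowledge that tells us which reading is plausible.

ASSOCIATION = {
    ("list", "on Tuesday"): 0.2,       # "wait until Tuesday, then list"
    ("flights", "on Tuesday"): 0.9,    # "flights that depart on Tuesday"
    ("eat", "with chopsticks"): 0.9,   # "eat by means of chopsticks"
    ("food", "with chopsticks"): 0.1,  # "food that contains chopsticks"
}

def attach(verb, noun, phrase):
    """Decide whether the phrase modifies the verb or the noun."""
    scores = {verb: ASSOCIATION.get((verb, phrase), 0.0),
              noun: ASSOCIATION.get((noun, phrase), 0.0)}
    return max(scores, key=scores.get)

print(attach("list", "flights", "on Tuesday"))    # -> flights
print(attach("eat", "food", "with chopsticks"))   # -> eat
```

The hard part, of course, is filling in that table for every verb, noun, and phrase in the language, which is exactly where the five-year-old's knowledge of how the world works comes in.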
Incidentally, whether we humans really want computers that exhibit "true intelligence" isn't clear. When Watson, the computer that handily beat human champions at the quiz game Jeopardy, responded to an item in the category "U.S. Cities" with a clearly off-base "What is Toronto???", there was a practically audible collective sigh of relief, as manifested by the Interwebs focusing on this gaffe. This reaction echoed what happened when the computer Deep Blue defeated human chess champion Kasparov. Before the match, Newsweek carried the solemn headline, "The Brain's Last Stand". Immediately afterwards, public response exhibited what Colby Cosh called "a hysterical self-reassurance" and a sour-grapes shifting of the goal posts.
Sample excerpt: "The truth of the matter is that Deep Blue ... is just a brute-force computational device. ... It is unconscious, unaware, literally thoughtless. It is not even stupid." Another: "Left on its own, Deep Blue wouldn't even know to come in out of the rain." Notice that there wasn't much mention of such objections beforehand.
"What about me?"
I am willing to bet that every psychologist, no matter what aspect of the human mind he or she studies, has experienced something like the following more than once. You meet a stranger, mention that you work as a psychologist, and then the stranger asks, "Are you analyzing me?"
I dread this question because it reflects a superficial misunderstanding of what many psychologists do for a living, but also because it highlights a deeper limitation of psychological science. The superficial misunderstanding comes from the long shadow that Sigmund Freud still casts on our field. Although many psychologists working as clinicians and therapists do indeed analyze individuals, trying carefully to understand their problems, many work as scientists trying to understand how the mind works, just as a chemist tries to understand how chemical bonds work. I am of the latter type, as is every other psychologist you'll see on the Edge site.
This misconception is easy to fix, and the great popular writing of many authors from this very site is shortening Freud's shadow. But I dread this question more because of the deeper issue it raises. "Are you analyzing me?" implies that I, as a psychologist, could indeed analyze you. The problem is that psychological science is, and always will be, a group-based enterprise. We randomly assign volunteers in our experiments to one condition or another and then analyze the average differences between our conditions relative to the variability within those conditions. We do not analyze individuals in our experiments, nor do we know why some people within a given condition in an experiment behave differently than others. Our understanding of our own research participants is relatively coarse.
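For readers who want to see what "analyzing average differences between conditions" looks like in practice, here is a minimal sketch with simulated data (the numbers are made up; only the shape of the analysis matters):

```python
# A minimal sketch of group-based analysis: compare the average difference
# between two randomly assigned conditions to the variability within them.
# The data are simulated; no real participants or findings are implied.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
condition_a = rng.normal(loc=5.4, scale=1.0, size=50)  # e.g., treatment-group ratings
condition_b = rng.normal(loc=5.0, scale=1.0, size=50)  # e.g., control-group ratings

t_stat, p_value = stats.ttest_ind(condition_a, condition_b)
print(f"mean difference = {condition_a.mean() - condition_b.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
# The conclusion is about the groups on average; it says nothing certain
# about any one participant, which is exactly the limitation described above.
```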
All sciences work this way. In medicine, for instance, doctors prescribe drugs because the average outcome of those in the treatment group of a drug trial was better than the outcome of those in the placebo group. Not everyone in the treatment group improved, and some improved more than others, but enough got better that doctors think it's likely that you'll get better if you take this drug as well. But as a psychologist, I often field questions that call on me to offer more individualized answers than our science can warrant. I'm asked to give precise advice and recommend exact solutions when what we can offer is general advice and broad solutions that may or may not apply exactly to your particular problem. I dread having to explain all of this to people. In fact, I dreaded trying to explain it in this little essay.
So if you should happen to sit next to me on a train or a plane, I'll happily start up a conversation with you and explain that I'm a psychologist. Just rest assured that I am not, in fact, analyzing you.
To Adam Alter:
Adam, your question made me wonder about the Berlin-Kay findings on the universality of color categorization in language. What do psychologists now believe about that? Putting aside defects such as color blindness, is there any sense in which we all categorize colors similarly? Do you know of any psychologists who have a forthcoming book on color who might be able to help us answer this question?
To Lawrence Krauss:
As a naïve reader, I wonder: is this partly a problem of framing? If we had scaled time t as T = log(t), then we'd have a scale that went from negative to positive infinity, and no one would ask "What happened before negative infinity?" Or am I only further obscuring the issue?
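To spell out the reframing I have in mind (my restatement, not Krauss's argument):

```latex
% The substitution T = \log t maps cosmic time t \in (0, \infty) onto
% T \in (-\infty, \infty). On the T scale, "before t = 0" corresponds to
% the limit T \to -\infty, which is never actually reached, so there is
% no "before" left to ask about.
T = \log t, \qquad t \in (0, \infty) \;\longleftrightarrow\; T \in (-\infty, \infty)
```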
To Nicholas Epley:
Nick, since you dreaded answering this question, let me add to your dread. Isn't there a deeper mistake people are making when they ask, "Are you analyzing me?"
They are emphasizing the me; they are making a presumption of individuality.
Isn't one of the big insights that individual differences are over-stated? That in fact the biases and tendencies we all share may be much larger than the biases and tendencies that are distinctive? Loss aversion, the hedonic treadmill, cognitive dissonance and so on impact all of us. For me, this creates a fundamental connection between all of us: we are not so different from each other as we presume.
Isn't the presumption of individuality, uniqueness, the deeper error here?
To Jon Kleinberg:
Your story of Gettysburg makes a persuasive case that we have "too much data". But consider this: if I went to one of my colleagues and said, "The real problem of social science is that we have way too much data," they'd laugh at me (more than they already do!). And they'd proceed to list off dozens (hundreds) of things we have few measurements of.
So I am torn: do we have too much data or too little data? This conundrum is at the heart of so many of the issues surrounding "big data". Yes, Twitter has a firehose of data. And yes it is "too much" in the way you describe. But isn't it too little in another sense? It seems like a very thin (but very, very long) slice of social life.
To Lillian Lee:
Your piece raised one question for me: "Why can't computers understand what people say as well as a five-year-old can?"
Joking aside, I see the problem is much harder than I had understood it to be. But what is it that the five-year-old has? Access to a lot more (and more diverse kinds of) data? A better processor? Why is the five-year-old such an amazing parser of context? What is lacking in the computer?
To Samuel Arbesman:
Your essay made me wonder whether or not we are at fault here. Are we communicating science correctly? If the heart of science is the recognition that truth is transitory, always ready to be overturned by better evidence, then should we not be communicating it that way? The popularization of science always seems to trend in the other direction, to emphasize the great discovery.
To Michael Norton:
I love your way of countering this problem. Here's another I have seen used.
I once did a paper on discrimination and faced the same problem. After the fact, people said, "Oh yes, of course you find that level of discrimination". The thing was that when I asked people before running the experiment, they all said "You will never find that". What I should have done—and what people like Dan Gilbert do brilliantly—is to run another study, a prediction study. Ask a group of people—experts, lay people—to predict what I will find. Then I would have two cells: the reality and the prediction. The "that was obvious" criticism falls by the wayside pretty quickly.
"My Granny saw a real ghost—how do you explain that?"
For more than twenty years my field was parapsychology. I started out as a wide-eyed student, convinced of the reality of telepathy, clairvoyance, precognition, spirits, ghosts and all things psychic. A few years of increasingly careful experimentation convinced me that most of the phenomena did not exist. So I turned to investigating some genuinely puzzling experiences.
This was far more satisfying. For example, alien abductions can turn out to be sleep paralysis, a very scary experience that can easily be misinterpreted, and the creepy feeling of déjà vu is an inappropriate feeling of familiarity related to temporal lobe lability.
So if someone says, "I had an out-of-body experience; how do you explain that?" I can help them. I can explain about the construction of the body image in areas near the temporo-parietal junction, and how research has discovered that disturbances to this part of the brain induce sensations of flying, floating, and leaving the body. If they say, "I keep waking up terrified and I can't move and I know there's someone—or something—in the room with me. Am I possessed?" I can explain about sleep paralysis: how the normal paralysis that occurs during dreaming can carry on into a half-waking state that includes buzzing and humming noises, crawly feelings on the skin, and sexual arousal. If they say, "When I was a kid I used to be able to wake up in my dreams and control them, but I can't do it anymore. Why not?" I can explain about lucid dreaming: the physiological state in which it occurs, the people most likely to have it; I can even suggest ways of inducing it.
But those wretched ghost stories drive me mad. The problem is not that they cannot be explained but that I am always given far too much completely irrelevant information (My Granny's house is a hundred years old and an old man once died there and it was the full moon and she'd been thinking about …..) and none of the information I need. Indeed the information needed is usually unobtainable so long after the event. So I have to reply "I'm sorry I cannot explain what happened to your Granny," and accept the smug look that says "Ha Ha, I thought so. You scientists won't accept that there are more things in heaven and earth ……"
I want to scream. And this is one of the reasons I eventually left the field for good, unwilling any longer to be a "rent-a-skeptic" for all those TV and radio shows; no longer willing to go on explaining why coincidences are not what they seem to be; no longer able to endure the hate mail from true believers or the cruel accusations of "If only you had an open mind ….."
Surely our chronic sense of uniqueness is one of the biggest barriers to learning from others. That sense also underlies the "are you analyzing me" question. Not only does the question imply that I CAN provide some unique analysis of you, but it also implies that there's a great deal of uniqueness to be analyzed. Surely there is, just not as much as people tend to think.