Learning By Thinking

Tania Lombrozo [7.28.17]

Sometimes you think you understand something, and when you try to explain it to somebody else, you realize that maybe you gained some new insight that you didn't have before. Maybe you realize you didn't understand it as well as you thought you did. What I think is interesting about this process is that it’s a process of learning by thinking. When you're explaining to yourself or to somebody else without them providing feedback, insofar as you gain new insight or understanding, it isn't driven by that new information that they've provided. In some way, you've rearranged what was already in your head in order to get new insight.

The process of trying to explain to yourself is a lot like a thought experiment in science. For the most part, the way that science progresses is by going out, conducting experiments, getting new empirical data, and so on. But occasionally in the history of science, there've been these important episodes—Galileo, Einstein, and so on—where somebody will get some genuinely new insight from engaging in a thought experiment. 

TANIA LOMBROZO is a professor of psychology at the University of California, Berkeley, as well as an affiliate of the Department of Philosophy and a member of the Institute for Cognitive and Brain Sciences. She is a contributor to Psychology Today and the NPR blog 13.7: Cosmos and Culture. Tania Lombrozo's Edge Bio Page

LEARNING BY THINKING

The questions that motivate my research concern how we come to understand the social and physical world the way we do. Why are we so motivated to get an understanding of the world? What does that understanding do for us? Those are pretty broad questions that have been approached from lots of different disciplinary perspectives.

My own work is most informed by a few different disciplines. One of them is psychology, where people have been interested in the learning mechanisms that allow us to understand aspects of the world; another is philosophy. Traditionally, epistemologists and philosophers of science have been interested in how we can get a grip on what's going on in the world, how we can effectively interact with the world, and when we arrive at something that we might believe is justified, true, and so on.

Those are very broad questions, and part of the way I've tried to get a grip on them empirically is to focus on the question of explanation. People are extremely motivated to explain. If you start eavesdropping on your friends and your neighbors, you'll notice that a lot of what they do is try to explain things that happened in their experience. They try to explain why someone was happy or upset, or why things happened the way that they did. 

In some ways this is puzzling, that we spend so much of our time looking for explanations. Explanation is a backwards-looking activity, on the face of it at least. If you were engaged in the project of trying to predict things, it's clear why that's valuable. If you're trying to figure out why things occur so you can predict them, you're going to be able to structure your life to do things more effectively. If you are trying to figure out how to control or intervene on the world to bring about particular outcomes, it's clear why that has some sort of instrumental value. 

But why are we so interested and engaged in the process of explaining? Once a thing has already occurred, what's the value that we get in trying to explain and understand it? The answer that I and many other people have argued for is that something about the process of explanation, the process of trying to get an understanding of something, is crucial to how we're able to then predict and intervene in the future. Something about this more backwards-looking process of trying to understand why something came to be gives us the kind of information we need to then be able to navigate the world more effectively.

In order to study explanation, we do a few different things. One of the things that I find fascinating about this process is that it gives us some insight into the way that learning and inference work, but it does so in a way that departs from how we normally think about those processes. When people typically think about learning, they imagine that there's something you don't know, so you look for it in the world in some way. You look for it in a book, you ask an expert, you get new observations from the external world, and you come to learn something new because you got this new piece of external information from somebody else.

However, if you think about the process of explanation, it covers cases that don't fit that standard model of learning from someone's testimony or learning from observation. It exhibits a phenomenon that I refer to as learning by thinking. This is the phenomenon where sometimes you can come to learn something or understand something new in the absence of any external data or input from another person. The way this relates to explanation is through an experience that I think most people have had, which is the experience of coming to understand something better as a result of explaining it to yourself.

Teachers have this experience routinely, and so do parents. Sometimes you think you understand something, and when you try to explain it to somebody else, you realize that maybe you gained some new insight that you didn't have before. Maybe you realize you didn't understand it as well as you thought you did. What I think is interesting about this process is that it’s a process of learning by thinking. When you're explaining to yourself or to somebody else without them providing feedback, insofar as you gain new insight or understanding, it isn't driven by that new information that they've provided. In some way, you've rearranged what was already in your head in order to get new insight.

The process of trying to explain to yourself is a lot like a thought experiment in science. For the most part, the way that science progresses is by going out, conducting experiments, getting new empirical data, and so on. But occasionally in the history of science, there've been these important episodes—Galileo, Einstein, and so on—where somebody will get some genuinely new insight from engaging in a thought experiment. 

That happens pretty frequently in the course of everyday life, too, though with much more mundane kinds of understanding. We do that when we explain to ourselves, when we engage in various mental simulations or imaginative exercises, and we do that when we make certain analogical inferences. That process is fascinating and interesting from the perspective of psychology.

What consequences does it have for your ability to then learn things about the world and to make predictions? It also raises interesting questions from the perspective of epistemology and philosophy of science. What role do these processes play in scientific discovery? To what extent are the insights we get when we engage in these processes likely to be reliable reflections of the way that the world actually is? When we engage in a process like explaining to ourselves, when are we likely to end up with something that's true or justified? When are we misleading ourselves? When are we perhaps just reinforcing our prior beliefs and misconceptions? One of the things that my lab has done is bring these ideas and questions into the lab to study them in an experimental way using the tools of cognitive psychology.

There's a long history of people being interested in explanation and when it's valuable: poets talk about it, historians talk about it, scientists do—Aristotle certainly did. A relatively new development is being able to think about how we can study explanation using the tools of cognitive psychology, social psychology, and developmental psychology.

Maybe I can give you an example to make this easier to think about. This is an example that comes from a set of studies in which we looked at how people, in particular young children, learn to draw some sort of abstract generalization from a particular concrete case. The thought was that engaging in the process of explanation might be one of the crucial ways that you're able to do this. That might be one of the mechanisms by which we can go from our particular concrete experiences and figure out the underlying principle that will allow us to generalize in other cases. 

In this study, we took advantage of a pretty well-known phenomenon: Children are pretty bad at extracting the moral of a story. If you give them one of Aesop's fables, or another short story that's intended to teach a lesson like "patience is a virtue," or, "you should be kinder to people who are different from you," what you typically find is that they do learn something from the story, but they learn it at a very concrete, particular level. They don't learn that it's generally good to be kind to people who are different from you. What they learn is that these dogs should be nice to three-legged dogs. They do extract some lesson from the story, but it's concrete and specific.

As you go through a story like this, if you engage in the process of asking yourself why something happened, part of what you're doing is trying to relate the concrete particulars of that story to a more general principle or generalization. In the course of doing that, you're going to perhaps come to that generalization for the first time, but also realize how the thing you're explaining is an instance of that.

That process of trying to make sense of something or explain it is going to perhaps help these young kids extract more general lessons. That's what we found when we had kids go through these storybooks. Half of them were prompted at particular points in the story to explain why particular events in the story happened. They provided explanations, but they were not given any feedback on whether these explanations were right or wrong, good or bad. In the control condition, we would stop at the same points in the story and ask them yes/no questions that would draw attention to the same aspect of the story. In the end we found that when we asked the kids what they thought the lesson of the story was—the thing that the author wanted them to get out of the story—the kids we had prompted to explain were much more likely to articulate the more abstract moral like patience is a virtue, or, you should be kind to people who are different from you. 

Part of the reason why that fits into this broader story about the role of explanation in governing and guiding our lives is that you need these more abstract principles if you want to be able to generalize broadly—if you want to be able to have one particular experience or story and learn something from it that you're then going to be able to apply to a context which is superficially different. Instead of patience in the context of planting a seed and waiting for a plant to grow, now you need to understand the value of patience in a context that perhaps involves a large project at work.

In order to figure out the similarity between those two superficially different situations, you have to be able to think about things in terms of these more abstract generalizations or properties. It seems like explanation is playing an important role in allowing us to engage in that kind of abstraction and, we think, therefore leading to future predictions.

~ ~ ~ ~

A lab in cognitive psychology can mean a lot of different things. Often, it's just workstations with computers. We will have adults come into the lab, sit at a computer, and do a particular task. When we do studies with children, we often go to them. That means going to a science museum or a preschool, and then creating an experiment that's more like an interactive activity or game that a child is going to want to engage with you in. From the perspective of the child, they're just playing a game with an adult, or just reading a storybook with an adult. Of course, the materials in that game or in that storybook have been carefully constructed to allow us to test particular hypotheses about how children are learning and reasoning.

Increasingly, it's becoming common in the kind of research I do, and in the field more generally, to collect data online through crowdsourcing platforms. That has a lot of advantages in terms of the number and diversity of participants that you can reach, but it also has limitations. Historically, a lot of psychology studies focused on college students as a population. That was very convenient, and it also gave them an opportunity to see what psychological research looks like. But then you worry that you're just studying Western college students, rather than something more general about human cognition.

With online samples, which we use a lot in my lab, you've got greater diversity in terms of age and socioeconomic background. Nonetheless, you're testing people who are choosing to spend their time participating in experiments online, which is not going to be a representative sample of the American population, and certainly not a representative sample of the human species.

One thing that we have to be mindful of is thinking about who we're testing, and how we can generalize our findings to whatever population we think it's appropriate to generalize to, whether it's a particular age group or cultural demographic. One important question that arises through this research is the extent to which we're studying something that's true of human reasoning in general, or something that's particular to the populations that we're studying. 

In the last decade, probably even more in just the last few years, there's been an increasing appreciation in psychology that we need to think more carefully about who we are testing. We need to do a better job of testing people in different cultures and different contexts. For the kinds of phenomena that I study, there are interesting questions that arise here.

Are you going to find important cross-cultural differences in the extent to which people are motivated to explain, or the extent to which they think certain kinds of explanations are appropriate or satisfying? The research that's been done to date suggests that, indeed, there are differences like that.

For example, a lot of Western-educated adults will not accept what are called teleological or functional explanations for many aspects of the natural world. If I said that earthquakes happen in order to relieve pressure at the Earth's surface, that framing, in terms of it being for the purpose of doing that, is something that a lot of Western-educated adults might reject as a legitimate explanation. Research by Deborah Kelemen and her colleagues has shown that in other cultures, where people are less exposed to Western-style scientific education, adults are much more willing to accept those kinds of explanations.

I found in a study that I did with some collaborators that patients with Alzheimer's disease are pretty willing to accept those kinds of explanations. There seems to be something very compelling about that kind of explanation, but you find it less in Western-educated adults than in other populations. There are two different ways to make sense of that finding.

One view is that the underlying cognitive machinery is just fundamentally different in these different groups; they're doing something completely different when it comes to evaluating explanations. That may be, but I don't find that interpretation the most plausible. What is more plausible is that, in a lot of important ways, the machinery is basically the same: when it comes to evaluating explanations, you're trying to relate something to your prior beliefs about the way the world works. What varies across these groups is their prior beliefs about the way the world works.

If you are somebody who believes that God created the world, designed the particular characteristics it has, and is governing the way that things unfold, then you might be perfectly happy to accept things like earthquakes happen for this particular reason, or, there are mountains because they're for climbing, or, we have the sun because it provides fuel for plants so that we can eat them. That's a perfectly reasonable explanation if you already antecedently have this particular belief about the structure of the world, if you already believe that there was a designer who created everything.

If you don't have that belief, and you believe that the sun is a result of physical laws and processes, and that certain things happen due to geological processes and natural selection, then you're not going to find those kinds of explanations very compelling for many aspects of the natural world.

The way that I would characterize some of these cross-cultural differences is not that people have fundamentally different things they're looking for in explanations, or that they differ fundamentally in the way that they reason about explanations; it's that they have different beliefs about the causal structure of the world. The relationship between those beliefs about the causal structure of the world and what makes something a good explanation is pretty much the same.

That's an empirical hypothesis. I don't think it's been adequately tested yet, but I do think it's consistent with the data we have so far. I'd bet on that, rather than the view that there's something different about the mechanism involved in explanatory reasoning.

~ ~ ~ ~

I happened to grow up very close to UCSD—the University of California at San Diego—which turned out to play an important role in my getting exposed to cognitive science when I was pretty young. When I was a junior in high school, I happened to read The Language Instinct by Steven Pinker. At the time, my favorite subjects in school were math and English. Part of what fascinated me about this book was that it seemed to be applying something as formal as mathematics to the study of language. That seemed appealing. It was my first introduction to the idea that you could have a rigorous science of aspects of behavior and mental life.

In the course of reading that book, Steven Pinker kept mentioning this Noam Chomsky person as if he were a really important person I should've heard of. As a junior in high school, I did not know about Noam Chomsky, but I thought he sounded like an important person I should learn about. I went to my local used bookstore, DG Wills, which was a fantastic resource, and I looked for something by Noam Chomsky. By sheer luck, I found something on linguistics rather than on politics. I chose the shortest book that seemed the least daunting to me. That was one of his more accessible books, and it was a good choice. In that book, he kept talking about the cognitive revolution, and that seemed important.

I ended up reading a lot of books which, in retrospect, I recognize were very idiosyncratic choices. I was basically looking for books that had cognition, or cognitive science, or something like that in the title. In the course of doing that, I discovered that the university I was growing up next to—UCSD—was one of the best places for cognitive science. It was one of the first places to start a Department of Cognitive Science and to identify that as an important discipline to bring people together around.

In the course of my reading, I learned about the Churchlands, Paul and Patricia Churchland, who are philosophers. They have made important contributions to our understanding of the mind and philosophy. I discovered that Paul Churchland was teaching a class called "The Philosophy of Cognitive Science." I showed up to his office hours as a high school student and asked if he'd let me take this class. Miraculously, he said yes. As a senior in high school, I was able to take this phenomenal class with Paul Churchland and a couple of other classes at UCSD, which were terrific experiences.

I also had an opportunity to work in a lab at UCSD the summer before I went on to college. The way this worked was that a teacher at my high school had volunteered to find a lab that would be willing to let me work with them. She asked me to go to the UCSD website and come up with a list of faculty I thought were interesting, so I did. She contacted most of these people, who said they weren't interested in having a high school student work with them.

The one person who agreed was someone named Marta Kutas, who's a phenomenal researcher. She works on language, using event-related potentials, EEG recorded along the scalp, to study aspects of linguistic processing. I had a phenomenal experience working in her lab when I was in high school. By that point, I knew that I wanted to do something related to cognitive science. I knew that I loved research, but I didn't know what direction to go in.

From there, I went to undergrad at Stanford. In my first year there, two things happened. One was that I took a philosophy of science class and just loved it. My instructor for that class was Peter Godfrey-Smith, who went on to be my honors thesis advisor as an undergraduate. I also took a class on visual perception. I spent some time in that period trying to figure out if I wanted to be a cognitive psychologist, a philosopher of science, or a visual psychophysicist. Those seemed to be the main options.

Part of what I discovered in that period was that the questions that I was most interested in were the big-picture, cognitive psychology kinds of questions that also have roots in philosophy—questions about how we learn, how we draw inferences, what our concepts are like, how we learn new concepts. What attracted me to the study of visual perception was that it seemed like an area where we could ask precise questions and make systematic scientific progress in answering them. It's the great example of a success story, and a case where we've made a huge amount of progress in understanding how the human visual system works and relating that to the underlying neuroscience. That was very appealing, and I wanted to do that but with the questions that drove me, which came more from cognitive psychology and philosophy.

In the course of trying to figure this all out, one thing that I realized was that within psychology, a lot of the ideas I was most interested in, a lot of the theoretical perspectives I was most interested in, kept appealing to the role of explanation. There's a perspective that I'm sympathetic to in cognitive development and social psychology and, to some extent, cognitive psychology, which is that there's an important analogy between the way that human cognition works and the way that the scientific process works.

If we think about a toy model of science, you imagine that scientists are making observations about the world; they're generating theories on the basis of those observations; they're revising those theories as they make new observations; and these theories are what then guide them to be able to do things like effectively intervene on the world. The perspective in psychology is to think of, for example, the child as scientist, which is one way that people put this—associated strongly with my colleague Alison Gopnik—or the person as scientist.

The idea is that in some ways, analogous to the way that science works, we as individuals come into the world and make observations; we construct intuitive or folk theories about the way the world works; we revise those theories as we have new experiences and new observations; and we use these theories in the way that we go about deciding which actions to take. When you push people to try to articulate that perspective, one of the important things to try to get more clarity on is what is known as an intuitive theory.

When we say that a person has an intuitive theory of the social world, an intuitive theory of how people's minds work, an intuitive theory of physics or biology, what do we mean by theory there? Certainly, what we mean by theory in that context can't be the same as what we mean by theory in science, where a theory is extremely explicit and articulated.

These intuitive theories are much messier, typically implicit sorts of things. What people will typically say is that what's key to a theory is that it somehow embodies our explanations for how things work. Causal explanatory principles—that's what theories are doing, helping us explain things. The thing that differentiates a theory from other kinds of mental representations is the explanations it contains.

Something about that seems right, but then of course, you just want to ask the further question. Okay, we had one mystery, which is what a theory is, and now we have this new mystery. What do we mean by an explanation in these cases? It seemed to me like psychologists had not yet tackled that question by trying to articulate different theories of what explanation could be, and then trying to test them. In contrast, in philosophy of science, people have been interested in this question for a long time, certainly since Aristotle and before.

In terms of the more contemporary literature, starting in the 1940s and 50s there was an enormous effort in philosophy of science to try to develop a formal account of what constitutes an explanation. What makes something a good explanation? Why does explanation play a role in science? Why does that seem to be such an important part of the scientific process?

It seems like there were these complementary endeavors that hadn't been adequately put together yet. There were the philosophers who were articulating these theories in a way that was largely divorced from considerations of how everyday human cognition works, and how explanation works in our everyday lives. They were thinking about explanation in science, but also about normative questions: How should explanation operate in science? What ought to count as an explanation?

Then there were these psychological projects, where people were invoking the idea of explanation but without having investigated it empirically. That's part of what got me interested in studying explanation. It seemed like there are these fascinating questions that we want to answer, and we have some conceptual tools that philosophy has already provided for how we might go about asking them. A lot of my early research involved taking distinctions that philosophers have appreciated for decades and using them to ask empirical questions about the way that everyday explanatory judgments work.

One place where I can give you an example of that is something like the notion of simplicity. Intuitively, we like simple explanations. Certainly, scientists have said that, philosophers have said that. Many people have extolled the virtues of simple explanations. But as philosophers have appreciated for a long time, it's difficult to give a precise characterization of what simplicity is supposed to be. 

If you take something like simplicity, intuitively we have some sense of what that means. One thing that philosophy is very good at doing is taking an intuitive notion like that, and then pushing on it and forcing people to articulate different things you might mean by it, and to be precise about those alternative definitions. One of the things that I've done is try to differentiate things that you might mean by simplicity, and then empirically test which of those is the one that affects people's judgments of how good an explanation is. 

One way that we've done this is in the context of causal explanation. In a simple case, we're trying to explain why somebody has some symptoms, and you want to explain it by one disease or a conjunction of various diseases. One thing you might think makes an explanation like that simple is just how many different diseases or causes in the causal process it invokes. If an explanation invokes five different causal steps, that's going to be more complicated than an explanation that invokes two or three.

An alternative is that maybe what matters is not the absolute number of causes that you invoke in an explanation, but how many of those causes are themselves unexplained. How many of those causes are ones that you just have to posit and assume to be true? That would go along with the idea that what we really care about in a simple explanation is not that there is little to it, but that it has few independent parts that we need to accept. We want to reduce things to the fewest number of unexplained parts, with the fewest number of assumptions.

We pitted these two ideas against each other and came up with some explanations. For example, if you want to explain two particular facts, one possibility is that they're explained by two independent causes—cause one generated the first fact, and cause two generated the second. That involves two causes. Now I'm going to make it more complicated in the sense that I'm going to introduce one more cause: I'm going to say that both of these causes have a common cause. Now I've introduced a third cause.

If what you care about is just the number of causes in the explanation, I've made it more complicated because I've added another cause. But if what you care about is how many causes are themselves unexplained, I've made it simpler, because I can now explain those two causes by appeal to the common cause; it's just the one common cause that's unexplained. What we find is that it's the latter notion of simplicity that people seem to be responsive to and care about. We found no evidence that they care about the total number of causes.
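To make that contrast concrete, here is a minimal sketch, using hypothetical event names rather than the actual stimuli from these studies, that represents the two candidate explanations as small cause-effect maps and computes both notions of simplicity:

```python
# Two candidate explanations for the same pair of observed facts,
# represented as a mapping from each event to the causes said to produce it.
# (Illustrative only -- these are not the actual stimuli from the studies.)

def total_causes(explanation):
    """Count every cause the explanation invokes."""
    causes = set()
    for parents in explanation.values():
        causes.update(parents)
    return len(causes)

def unexplained_causes(explanation):
    """Count causes that are posited but never themselves explained,
    i.e., causes that do not appear as effects of anything else."""
    causes = set()
    for parents in explanation.values():
        causes.update(parents)
    explained = set(explanation.keys())
    return len(causes - explained)

# Explanation A: two independent causes, one per observed fact.
explanation_a = {
    "fact_1": ["cause_1"],
    "fact_2": ["cause_2"],
}

# Explanation B: adds a third, common cause that explains cause_1 and cause_2.
explanation_b = {
    "fact_1": ["cause_1"],
    "fact_2": ["cause_2"],
    "cause_1": ["common_cause"],
    "cause_2": ["common_cause"],
}

for name, expl in [("A", explanation_a), ("B", explanation_b)]:
    print(name, total_causes(expl), unexplained_causes(expl))
# A -> 2 total causes, 2 unexplained
# B -> 3 total causes, but only 1 unexplained (the common cause)
```

Explanation B invokes more causes in total, yet leaves fewer causes unexplained, and it is that second count that people's preferences appear to track in these studies.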

In an experiment like that, we pit the idea that people favor explanations involving fewer causes in total against the idea that they favor explanations with the fewest unexplained causes. The evidence from those studies suggests that people don't seem to be at all sensitive to the total number of causes in an explanation, but they are very sensitive to the number of unexplained causes. People show a systematic preference for explanations that involve a smaller number of unexplained causes.

That's just one illustration of a way that you might take the conceptual work that is standard in philosophy, where you're trying to identify different distinctions within a notion, and then you use those conceptual tools to address an empirical question. In this case, we're not addressing the question of what should make a good explanation, but rather the question of why people in their everyday intuitive judgments find some explanations better than others. What is the notion of simplicity that they're responsive to in those cases?

There are a lot of ways you might try to carve up the landscape of cognitive psychology. One of the important division points is the extent to which people think that the important explanatory work, in accounting for the aspects of human cognition that we care about, is done by what's innate versus what's learned. Like most of these dichotomies, both endpoints are incoherent.

Insofar as I lean one way or the other, I'm fascinated by learning mechanisms and the fact that we are such flexible learners that we can do so well in different environments. I'm typically drawn more to perspectives that try to explain how we can learn something about the structure of the environment. In some sense, the building blocks we have for those learning mechanisms had to have been there to begin with. We can't get something from nothing. But we can understand what those mechanisms are and how to flexibly deploy them in different contexts. That's going to be a key part of explaining human intelligence.

Some of the other divisions in the field are more matters of emphasis. One thing that is often discussed in the media is the extent to which humans are rational, or good at reasoning versus bad at reasoning. To some extent, that's a question of emphasis. If we get things right 80 percent of the time, is that phenomenal because we're getting it right 80 percent of the time? Or is that abysmal because 20 percent of the time we're getting it wrong?

Often when you look at some of the debates around these issues, people don't disagree about the cases where people get things right or wrong. They disagree about how important those cases are, and to what extent that undermines the claim that humans are pretty sophisticated, incredible learners.

I tend to fall more on the glass half-full side, where I think it's important to acknowledge the cases where we get things wrong. Those cases are extremely practically consequential because we want to be able to identify them and improve human reasoning. Overall, I'm more often impressed by how good human reasoning is. I'm impressed by the inferences that we see even three, four, and five-year-olds being able to make, or the fact that most adults are able to navigate an extremely complicated environment so effectively.

With that in mind, we can then turn to thinking about how we can do even better. It's important not to lose sight of the fact that we are extremely impressive learners, and still, although perhaps not for much longer, the best learners we know of. For the most part, artificial systems do not compete with humans when it comes to a lot of these tasks.

~ ~ ~ ~

One thing that is a conflict in terms of how I think about the role of explanation is that on alternate days, I go back and forth between two perspectives. What I'm interested in are the problems where people have a limited amount of information and they have to draw some inference that goes beyond that information. That's what's called an inductive inference—solving inductive problems. The normative benchmark for how you should solve those problems is what's called Bayes' rule, or Bayesian inference. That's a rule from statistics that tells you how you can take the information that you have and your prior beliefs and combine them to update your beliefs appropriately. For the most part, that's accepted as the right way to do it, and we can then see whether humans are doing it well or not.
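For concreteness, here is a minimal sketch of that updating rule with made-up numbers for a hypothetical diagnostic test; nothing in it comes from the studies being discussed:

```python
# Bayes' rule: P(hypothesis | evidence) =
#     P(evidence | hypothesis) * P(hypothesis) / P(evidence)
# Toy illustration with made-up numbers.

def bayes_update(prior, likelihood, likelihood_if_false):
    """Posterior probability of a hypothesis after one observation.

    prior               -- P(H), belief in the hypothesis before the evidence
    likelihood          -- P(E | H), probability of the evidence if H is true
    likelihood_if_false -- P(E | not H), probability of the evidence if H is false
    """
    evidence = likelihood * prior + likelihood_if_false * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a condition with a 1% base rate and a test that is
# 90% sensitive but gives false positives 5% of the time.
posterior = bayes_update(prior=0.01, likelihood=0.90, likelihood_if_false=0.05)
print(round(posterior, 3))  # ~0.154: the positive result raises the belief,
                            # but far less than "90% accurate" might suggest
```

The point is only that the posterior is a principled combination of prior belief and new evidence; that combination is the benchmark that human updating gets compared against.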

There are some questions about whether or not we should be taking that as the right normative benchmark. The thing that I go back and forth on in my own research is thinking about the relationship of explanation to that kind of inference. One view that you could have, and this is the view that I had earlier in my career, is that there are lots of cases where human behavior seems to approximate Bayesian inference reasonably well. It seems like we are learning from the environment in a way that's pretty close to the optimal way to learn.

However we're doing that, it's clearly not by explicitly applying this rule. It's just not the case that people intuitively know Bayes' rule and go about the world getting observations and saying, "I'm going to plug this into Bayes' rule," and doing the math in their heads. That's clearly not the way learning works. Even the people who advocate the idea that a lot of cognition is Bayesian believe that there is a cognitive process by which we come to approximate that kind of reasoning, and it's not going to look anything like an explicit application of this mathematical rule.

Early in my career, I thought maybe this process of trying to explain things is one of the key mechanisms by which we come to approximate Bayesian inference. Maybe the way this works is that when you encounter some new observation, you have to update your beliefs in light of that observation. By trying to explain that observation in light of what you already believe, you're performing this computation of integrating the new evidence with your prior beliefs in order to update them in a way that's going to turn out to be a good proxy for a Bayesian posterior—what you should believe after this type of updating process.

On that view, explanation is basically a means to Bayesian inference. Increasingly, I'm coming to think that that's not the right way to think about what explanation is doing. That's motivated both by some empirical findings that have come out of the literature, which suggest that when you're explaining, you seem to get further away from calculating a Bayesian posterior than when you're not explaining, and also from thinking about the range of epistemic and social goals that we have.

The sense in which Bayesian inference is the normative standard, or optimal, is that if in the long run you want to minimize your inaccuracy, then you probably should be updating your beliefs in light of Bayes' rule. That would be a good thing for you to be doing if what you want to do is minimize your long-term inaccuracy. But that's not the only thing that we care about. That's an important thing that we care about, but sometimes you just want to be mostly right quickly, so you're willing to sacrifice some amount of accuracy for efficiency, even if it's going to mean you get something wrong in some cases.

Sometimes what we want to do is be persuasive. Sometimes what we want to do is come up with a convenient way for solving a particular type of problem. Again, it might be wrong some of the time, but it's going to be much easier to implement in other cases. There are all sorts of different epistemic and social goals that we might have. Increasingly, I'm thinking that maybe explanation doesn't have just one goal; it probably has multiple goals. Whatever it is, it's probably not just the thing that Bayesian inference tracks. It's probably tracking some of these other things.

There are going to be some features of explanation that we have because they're the sorts of things that facilitate the development of mental representations, which can be used or communicated easily. That's going to lead us to some systematic departures from accuracy, but maybe those trade-offs are worthwhile in various cases. That's a question that I go back and forth on. I don't think I have a clear view about it, except to say that I'm moving away from the view that I had earlier in my career.

I've been influenced by a lot of different people. A couple of them I've mentioned already. One very important influence on my thinking was my PhD advisor Susan Carey. She's best known for her contributions in cognitive development, and for thinking about the kinds of representations that infants are born with. One of the most valuable things that I learned from her was how to take big, complicated questions about the nature of learning and conceptual representation, and try to address those questions empirically, in a way that engages with the big-picture philosophical issues that motivated the questions to begin with. She's someone who's continuing to do fascinating work as well.

A lot of the inspiration for the way I'm thinking about explanation comes from some recent developments in a field called formal epistemology. Formal epistemology is an extension of epistemology in philosophy. Epistemology is concerned with questions about what knowledge is, how we come to know, when our beliefs are justified, and so on. For the most part, historically, that was approached in the same way that other philosophical areas were approached, with a certain kind of conceptual analysis and argument.

What formal epistemology does is try to answer some of those questions, and close neighbors of those questions, using the formal tools that have historically been more characteristic of fields like statistics, computer science, and mathematics. Within that area, some people have started to take a formal approach to thinking about explanation. That's nice both because it gives you the quantitative precision you need to test predictions against human behavior, and because it allows you to be very precise in your characterization of what you mean by explanation and how it relates to something like Bayes' rule.

Within that literature, there are two people, Igor Douven and Jonah Schupbach, who've been exploring what they've been calling explanationism as an alternative to the idea that the way we ought to be updating our beliefs is just by following something like Bayes' rule. That's been influential in my thinking lately. I can't say exactly where it might go in terms of either the empirical studies or the theoretical picture that will emerge from it, but they're certainly raising more interesting possibilities in this landscape.

A reasonable question for someone to be wondering is if there are any real-world implications of this type of research. Why should we care about being able to articulate the fine microstructure of people's explanatory judgments and the fact that we explain in particular ways? There are a few different reasons why this is important. The one that's motivating my research for the most part so far is the one that's more internal to cognitive science, and that has to do with the theoretical implications.

What does this mean for the way we think about learning? If we take seriously these phenomena of learning by thinking, or learning without some new data, we need to rethink our theories of learning. That's the reason this matters for cognitive science. Cognitive science cares about learning, about inference, about reasoning, and explanation is just fundamental to that.

There are also two realms where this matters for the real world. One of them has to do with cases where we make mistakes, and the potential for a better understanding of human reasoning to help us correct those mistakes, to engage in what's sometimes called de-biasing. If we can better understand the cognitive errors we make, then maybe we can intervene on the real world to generate better decision-making. If we know that people prefer simpler explanations, where simplicity is defined in a particular way, and that they do so even when those explanations are not the ones best supported by the evidence, then we know they're making a mistake there.

Do we see people in the real world making that mistake? When doctors are faced with a decision, a diagnosis problem that has the structure where they're pitting the explanation that's simpler in one sense against the one that's more likely in light of the data, do we see them making the wrong decision? If so, then maybe we know where to look for those kinds of errors, based on this theory and empirical work, and we can think about how to correct those errors. One broad goal is to develop some tools for identifying real-world mistakes and how to correct those mistakes, because those decisions can be very consequential.

The other way that this might turn out to have some implications is for artificial intelligence. There are two ways to think about that. One is that, if it turns out that explanation is just this crucial part of how humans learn, then maybe understanding how explanation plays that role can help us build better machine-learning systems, for example. We know that humans solve certain problems much better than the best AI. To the extent those problems are ones that in fact humans solve using things like explanation, we want to be able to understand that so that we can build better artificial systems.

The other way that explanation fits in is through what's now being called explainable AI. This points to a challenge that has emerged from the recent advances that we've seen in areas of artificial intelligence and machine learning, which is that you can end up with a situation where you have, say, a particular deep-learning system, a very complicated neural network that solves your problem extremely well. It might have extremely good predictive accuracy, but if you ask someone what it's doing, or why it got something right, or why it made a mistake, it is extremely opaque. It can even be opaque to the person who designed it and implemented the system.

There's something these systems are doing, but they've gotten to a level of complexity, learned from the data themselves, that can make them extremely opaque. One of the challenges for explainable AI is to think about how we can understand these systems. As engineers, as consumers, as doctors, we need to know how seriously to take the output of some system that tells us some diagnosis is likely.

In order to do that, we need to know a lot about what it takes to give us that sense of understanding. What is it that's going on when we receive an explanation? What makes it a good explanation? How does that affect our downstream behavior? All of these kinds of empirical questions that I'm addressing end up having this practical application when it comes to thinking about the latest technology we're developing and how we get it to interact effectively with humans.

One of the ways people have tried to get a handle on what makes a good explanation is by differentiating it from other things we do. An explanation versus a description, an explanation versus a prediction. I don't think that I've seen a very satisfying way of cleanly articulating what that distinction is, but what seems to be crucial to explanation, which I think you might not get from these other things like prediction or description, is a sense of understanding.

Some people do go so far as to say that what makes something an explanation is that it generates understanding, and other people will define things the other way first. They'll say that what generates understanding is having an explanation. I don't have a well-articulated view on which comes first, explanation or understanding, but it seems to me that those are intimately related. You don't necessarily get understanding from a description, and you don't necessarily get understanding from a prediction. 

Part of the reason why I think these problems that emerge with this new technology in the context of explainable AI are related specifically to explanation is because explanation is the thing that is intimately related to understanding. I've been talking a lot about explanation because it's a theoretically rich topic, and an important, consequential one, but in some ways what I want to emphasize by talking about explanation is ways in which philosophy and psychology can be fruitfully brought together to address these topics. 

What's characteristic of the way that I've approached explanation in my research is that it tries to borrow the best of the conceptual tools we get from philosophy and combine them with the empirical tools that we get from psychology. That approach isn't something that only applies to the case of explanation. That applies extremely broadly. There are so many areas of rich and fruitful contact between philosophy and psychology.

Some of the ones that I focus on in my own research include aspects of causal reasoning, which relates to explanation but also diverges from it in various cases; also, aspects of moral reasoning. Why do we have the particular moral beliefs that we do? How do we evaluate when someone is blameworthy and when they should be punished? You also see this in many other cases: aspects of language, aspects of how we think about other people's minds, aspects of how we think about social structure. These are all topics where we benefit from the insights of both philosophy and psychology.