L.A. Paul: "The Transformative Experience"

HeadCon '14
L.A. Paul [11.18.14]

We're going to pretend that modern-day vampires don't drink the blood of humans; they're vegetarian vampires, which means they only drink the blood of humanely farmed animals. You have a one-time-only chance to become a modern-day vampire. You think, "This is a pretty amazing opportunity, do I want to gain immortality, amazing speed, strength, and power? But do I want to become undead, become an immortal monster and have to drink blood? It's a tough call." Then you go around asking people for their advice and you discover that all of your friends and family members have already become vampires. They tell you, "It is amazing. It is the best thing ever. It's absolutely fabulous. It's incredible. You get these new sensory capacities. You should definitely become a vampire." Then you say, "Can you tell me a little more about it?" And they say, "You have to become a vampire to know what it's like. You can't, as a mere human, understand what it's like to become a vampire just by hearing me talk about it. Until you're a vampire, you're just not going to know what it's going to be like."


[48:42 minutes]

L.A. PAUL is Professor of Philosophy at the University of North Carolina at Chapel Hill, and Professorial Fellow in the Arché Research Centre at the University of St. Andrews.


THE TRANSFORMATIVE EXPERIENCE

My name is Laurie Paul, and I'm a professor of philosophy at the University of North Carolina at Chapel Hill. I'm a metaphysician. I'm especially interested in metaphysics and philosophy of mind. I have been developing what I think of as formal phenomenology. In other words, I'm especially interested in looking at formal techniques engaging with the nature of experience, and I've paid special attention to temporal experience. One thing I've been thinking a lot about lately is the notion of transformative experience, which I'll tell you a little bit about today.

The questions that have been occupying me involve questions that come up when we as individuals think about making big life decisions. Metaphorically, it's when we think about making decisions when we're at life's crossroads. As we live our lives, all of us experience a series of these crossroad-style big decisions.

 

The worries or puzzles that I've been thinking about and exploring come from drawing together a number of strands in philosophy that haven't been drawn together before. The first strand involves a relatively new area of inquiry in philosophy that goes by the name of formal epistemology. It's an interesting and engaging new development, and formal epistemologists are interested in the way that individuals make decisions. They're interested in looking at formal decision theory, but within the context of epistemic questions. The thought is to explore how we can make rational decisions by taking agents to have psychologically real utilities or desires, particular degrees of belief or credences, and psychologically real preferences, and then to think about how we want, in an epistemic context, to model individuals making decisions so that these individuals can know how they should act. The "should" is important; we're exploring these questions from a normative perspective.

I'm interested in normative decision theory, as opposed to behavioral decision theory. I'm interested in what the epistemic gold standard is that we as individuals should be aspiring to reach when we make decisions. In particular, what I'll talk about a little bit more concerns the normative gold standard for when we make important decisions.                                 

The formal epistemologist usually thinks of the individual in a third-personal sense. Namely, it's as though we're observing individuals, and thinking about their epistemic states and how they're making their decisions. But there's another perspective that's also important and draws in another strand from philosophy, a strand of work that's been important over the last 30 or 40 years in philosophy. People like Dave Chalmers have made important contributions to philosophy involving the notion of consciousness and trying to understand what consciousness is.                                 

What I want to look at closely is what philosophers have learned about the value of experience—how we've learned about what experience teaches us. A lot of times this discussion occurs in the context of worrying about the mind-body problem, or questions about physicalism. That's not my focus. I want to get a better understanding and think about how important it is, in some contexts, that we have certain experiences in order to know or understand certain information. There are disputes in the philosophical community about whether experience is required to know certain facts, or what exactly it is that experience teaches. I'm not worried about that dispute. I just want us to be able to see that sometimes experience is important. It's necessary, at the very least, for us to have conceptual or imaginative abilities in order to grasp certain kinds of imaginative content.                                

If we draw the strands together—formal epistemology, normative decision theory, consciousness with a focus on what experience teaches—then we get a different perspective on how we need to think about how we make big decisions. What I'm going to say is going to connect a little bit to what Molly talked about earlier today and some of the things that Josh Knobe talked about the last time we had this session. When each of you thinks about how you make a big decision, you need to consider how you—your current self—wants to perform some act or decide what to do in order to maximize the utility for your future self. The choices I'm especially interested in are ones that are life-changing decisions. As I said, they're high-stakes—the things that we care about very much.                                 

What we ordinarily do is imagine ourselves into different possible scenarios: "Maybe I could do A, maybe I could do B, maybe I could do C. What should I do? How do I want to live my life? What kind of person do I want to be?" You can think of this philosophically as what kind of future self do I want to become? What kind of future do I want to occupy? I care about what it's going to be like to be me after I undergo this central experience that's part of this big decision. That's the question I'm interested in.    

As a philosopher, it's kind of funny to tell a bunch of scientists about a fictional example. The reason why it's important to look at these fictional examples, or at least the one about vampires I'm giving you, is that the structure is present in a number of real-life cases. It's important to get the structure out there so we can understand it. We're not worried about questions of morality here. Obviously those questions are important, but as a metaphysician I don't think about morality; I lack the relevant expertise.

We're going to pretend that modern-day vampires don't drink the blood of humans; they're vegetarian vampires, which means they only drink the blood of humanely farmed animals. You have a one-time-only chance to become a modern-day vampire. You think, "This is a pretty amazing opportunity, do I want to gain immortality, amazing speed, strength, and power? But do I want to become undead, become an immortal monster and have to drink blood? It's a tough call." Then you go around asking people for their advice and you discover that all of your friends and family members have already become vampires. They tell you, "It is amazing. It is the best thing ever. It's absolutely fabulous. It's incredible. You get these new sensory capacities. You should definitely become a vampire." Then you say, "Can you tell me a little more about it?" And they say, "You have to become a vampire to know what it's like. You can't, as a mere human, understand what it's like to become a vampire just by hearing me talk about it. Until you're a vampire, you're just not going to know what it's going to be like."

The question you need to ask yourself is how could you possibly make a rational decision about whether or not to become a vampire? You don't know, and you can't know what it's like. You can't know what you'd be choosing to do if you became a vampire, and you can't know what you're missing if you pass it up. This would be a problem if we faced these choices on a regular basis because what it suggests is that there is a principled, philosophical reason why, when faced with this big choice, we would be unable to reach our epistemic gold standard.

If that were the only case in which this situation arose, most of us probably wouldn't have to worry about it, but I don't think it's the only situation in which this kind of thing arises. Now I want to talk about a case that is different in important ways from the vampire case because it's a low-stakes case. It's a little closer to real life, so we can see how this philosophical problem is one that we grapple with even if we're not always recognizing that we're grappling with it on a regular basis.

I've never tried a durian fruit. If you've tried a durian before, then bear with me. You can probably remember back to before you'd tried durian, and for those of you who haven't tried durian, we're in the same epistemic boat. The thing to know about durian is it's an exotic Southeast Asian fruit; it's very distinctive. One important chef says, "The only way to describe its taste is 'indescribable.'" The thought is, until you've tasted a durian fruit, you can't know what it tastes like. There are various evocative descriptions people have: "Eating vanilla ice cream by a sewer" or "French kissing a dead rat." These evocative descriptions are interesting, but they're not going to give you the information that you might like to have, namely, what it's like to taste a durian. The only way that you can know what it's going to be like for you is to taste one.

It's not about being sophisticated or liking exotic things because, as I already mentioned, even those with sophisticated palates, like chefs, differ widely in how they respond. Some people find it absolutely repulsive; other people call it the king of fruits, ambrosia. In this situation, when someone asks, "Do you like the taste of durian?" you don't know. You would have what I think of as an epistemic transformation if you tasted durian. Once you taste durian for the first time, you know what it's like.

The philosophical example in the literature that parallels this, about the value or what experience can teach you, is an example that was developed by Frank Jackson. He talks about black and white Mary. Mary, we suppose, has grown up in a black and white room. She's never seen color, she's just seen shades of gray and black and white. When she's finally let out of her black and white cell and sees a red fire engine, she learns something. She learns something that she couldn't have learned by reading all the literature about color science or about how we see or hearing testimony of other people. She learns what it's like to see red. The thought is that we can all recognize that there is something important that we gain by experience and by experience alone. We gain an ability to grasp a certain phenomenal concept. We gain a certain imaginative ability—the ability to imagine redness in various contexts.

 This is important because if we think about what experience teaches us, then we can see how the puzzle that I was sketching with the vampire comes up again in the case of the durian fruit. Imagine that you're in Thailand. It's breakfast time, you're looking at the menu and you're trying to decide what you're going to have for breakfast. You have a choice between having some ripe pineapple for breakfast, or having some ripe durian. I'm going to assume we've all had ripe pineapple, and let's just assume you like pineapple, you think it's pretty good, but you've never had ripe durian.

The problem, when you're looking at your breakfast menu, is that you can't make a decision about what to have for breakfast based on which taste you prefer. Why? Because you've never had durian. You can't assign a value to the outcome of what it's like for you to taste durian. In a certain sense, the utility of that outcome is not defined. If that's the case, then there's no way to make sense of determining how best to maximize your utility, or how best to respect your preferences in terms of picking whatever you would like best to have for breakfast that day. Because you can't assign a value to what it's like for you to taste durian, you can't, in a sense, have a preference, at least based on the way that we're thinking of the options. You can't step back and think about what the epistemic gold standard would be for you, that you should apply to yourself, when you're thinking about how best to choose what you want to have for breakfast.
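The structural problem can be made vivid with a toy sketch. The code below is purely illustrative, with invented numbers and function names (nothing from the talk): the pineapple outcome has a known utility, while the durian outcome's utility is undefined, so the standard maximize-expected-utility rule cannot even run.

```python
def expected_utility(outcomes):
    """Sum p * u over (probability, utility) pairs; fails if any utility is undefined."""
    if any(u is None for _, u in outcomes):
        raise ValueError("utility undefined: expected utility cannot be computed")
    return sum(p * u for p, u in outcomes)

# You know what ripe pineapple is like for you, so its value is defined (number invented).
pineapple = [(1.0, 7.0)]
# Tasting durian is epistemically transformative: no value can be assigned.
durian = [(1.0, None)]

print(expected_utility(pineapple))  # 7.0

try:
    expected_utility(durian)
except ValueError as err:
    print(err)  # utility undefined: expected utility cannot be computed
```

The point of the sketch is that the failure is not probabilistic ignorance: the probabilities here are known with certainty, and the calculation still cannot get started.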

When we're in context where we face epistemically transformative experiences, there's a way to make the decision that's just not accessible to us because we lack certain information, or we lack a certain ability. Why does this matter? As I said before, one way in which we assess our different options is by imaginatively projecting ourselves forward into different possible scenarios: "There's me having durian for breakfast," or "There's me having ripe pineapple for breakfast." We decide which scenario meets our desires in a more satisfying way, which scenario we assign a higher utility, then act so as to maximize that utility. That's the gold standard route.

In a low-stakes case, like deciding what you want to have for breakfast, there are other things we might want to do. We might say, "I value discovery. I'll just flip a coin. I'll just try durian for the heck of it. It's not a big deal." That's just fine in low-stakes cases. What matters are high-stakes cases. The vampire case I was describing to you is a high-stakes case. What makes it high-stakes is that it's both epistemically transformative and personally transformative. It's the personal transformation, the fact that it's going to affect the rest of your life and your very being, that makes it important.

These high-stakes cases are the cases where we care most about meeting the epistemic gold standard, or at least we should care most, because the decision has big effects on you or maybe your loved ones. If any of those personally transformative decisions are also epistemically transformative, then the same problem we faced with the durian case resurfaces with the big decision case.

There are some real-life cases that have this structure. Let me just sketch two. The first case involves sensory capacities. Imagine a congenitally deaf person who has never been able to hear contemplating whether or not he should have a cochlear implant. Let's say he's built a lot of his life around being a member of the Deaf community. He's contemplating the possible outcome of getting a cochlear implant and then, presumably after he's learned how to interpret the signals from his implant, knowing what it's like to hear. Because he has never heard, he can't, in principle, know what it's like to hear until he becomes a hearing person. There's a certain sense in which he can't assign a value to the outcome of what it's like to hear. It's a high-stakes case because, presumably, what it's like for him to hear is going to have a huge effect on the way he lives his life, and a lot of the features of his life.

It's not a matter of thinking more carefully or reflecting in a deliberate manner. For principled reasons, there's something that's epistemically inaccessible to him, and we can't expect him to make a decision based on information that he can't have access to. There has to be a different way to make that decision. Part of what I want to say is we need to recognize that agents can find themselves in that kind of epistemic situation.

There are lots of other cases involving disability and similar issues, but there's another case that is maybe a little more familiar to those of us who never had to face the possibility of having a cochlear implant: the choice of whether or not to have one's first child. Having one's first child is also an epistemically transformative experience. One of the most important and salient features of becoming a parent is what it's like to experience the attachment to the actual child that you produce—the loving, satisfying attachment relation that you stand in to the child that you produce. In order for you to stand in this attachment relation, first you have to produce the child. Second, the character of that attachment relation is going to be highly defined by the particular characteristics of the actual child that you produce. Until you stand in that relation, you can't know what it's like. You might know some very general features, but it's the particular features that matter and that are going to have the biggest impact on your experience of being a parent.

When you make the choice or you think about whether or not you want to become a parent, and you cognitively evolve yourself forward and imagine holding your baby and what it would be like to be a mother or a father, performing that act might be an interesting exercise in imaginative fiction, but it's not going to give you information about what it's going to be like for you to become a parent. That means that the utility of that outcome is not defined for you. And of course, this is a high-stakes decision. Becoming a parent is one of the classic cases where people's preferences and other things about their situation change dramatically. Often people do take themselves to be a different person. Some people say they're less selfish, they care about different things, they don't party as much. There are lots of different things that happen.

This is another case, one that many people face: when they think about whether or not they want to have their first child, there's something important that's epistemically inaccessible to them. It's the thing that we care about, and the thing that's going to personally transform you if you have a child. When you contemplate whether or not to have a child, if you want to do it by assessing what it would be like for you to be a parent, then it's, in principle, not possible for you to make that decision while reaching the epistemic gold standard. In other words, it's not possible to do it by acting so as to maximize your utility in the way that you understand yourself to be doing it.

It's important to recognize this philosophical issue, and to recognize how a natural and intuitive way that we want to deliberate and introspect and think about who we are might be in conflict with the thought that rational decision-making defines our epistemic gold standard. (Decision theory does define our epistemic gold standard, so there's a real tension here.) There's a lot of value in introspecting. It's important for us to try to think about who we are and who we want to become when we make these big decisions. Yet there might be an in-principle conflict between this desire we have to be authentic in this sense, and the desire we have to reach the epistemic gold standard.

I want to close with another problem that comes up, because there is a cluster of issues here. The other problem, which stems from the conflict between authenticity and the epistemic gold standard, is the following decision-theoretic issue. A natural thing to do is to say, "Let's just do some empirical research. Let's look at this question from a scientific perspective." I'm in favor of that. Doing empirical research on these questions is absolutely the best way to go, but imagine that we find ourselves in the following situation. Imagine yourself as a child-free person, as somebody who takes themselves to be essentially someone who is child-free. You're a person who has no kids, you love your life the way it is, and you think of yourself as intrinsically child-free; you have no desire to have children. When you think about what it would be like to be a parent, you think, "I wouldn't be happy; that's just not the life that I want to live."

You go around and you talk to people. Let's pretend that all the empirical research out there tells you that once you become a parent, the way that you're going to evaluate the quality of life as a parent, the utility of becoming a parent, is going to skyrocket. Once you become a parent, you're going to think that being a parent is fabulous, that it's the best way to live your life, far better than living your life child-free. Let's say that all the description and testimony—from your parents, your friends who have children—says the same thing. Now you're in a situation where you value who you are as a child-free person, and your preferences are to remain child-free. You also, in principle, cannot introspect into what it would be like for you to be a parent. If you were to be rational, you should replace your assessment, your imaginative projection, with this empirical information. All the empirical information and testimony that you have tells you that once you've undergone this experience, your preferences will change so that you'll be much happier—you'll be maximizing utility as a parent.

If we're to meet the epistemic gold standard, obviously we're supposed to be utility maximizers, right? If the way to do that is to listen to the empirical research in this case, then the right thing to do is to reject your current self and replace it with the future self that's a parent. There's a problem here. The way that I've set the case up, your current self assigns a reasonably low utility to becoming a parent, but because you undergo a transformative experience in virtue of becoming a parent, your future self as a parent assigns a very high utility to being a parent. Because you want to maximize utility, you should give up your present self and replace it with your future self.               
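The self-alienation worry can be put in toy numbers. In the sketch below (utilities and labels invented purely for illustration), the same act receives a low utility from the current self and a high utility from the post-transformation self, so "maximize utility" gives opposite verdicts depending on whose utilities the theory is allowed to use.

```python
# Invented utilities, for illustration only: whose preferences should count?
utilities = {
    "current_self": {"have_child": 2.0, "stay_child_free": 8.0},
    "future_self": {"have_child": 9.0, "stay_child_free": 4.0},
}

def best_act(self_name):
    """Return the act that maximizes utility under the given self's preferences."""
    acts = utilities[self_name]
    return max(acts, key=acts.get)

print(best_act("current_self"))  # stay_child_free
print(best_act("future_self"))   # have_child
```

There is no extra fact inside the model that says which row is the right one to maximize over; that is exactly the philosophical tension, not a bug in the arithmetic.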

That again illustrates the philosophical tension that comes out here. Some people think that to be rational, you need to respect your current preferences. To be authentic, you have to respect who you are now. It looks like if we want to meet the epistemic gold standard in this case, we have to violate the preferences of our current self—violate who we take ourselves to be now, who we take our current self to be—and replace it with a different self. That suggests that rationality can entail a kind of self-alienation that I find worth exploring.


THE REALITY CLUB

MOLLY CROCKETT: That was so fascinating and engaging and touches on a lot of things that I think about, both intellectually and personally. One thought that comes to mind is that there's a lot of evidence from psychology that not only do we choose the things that we prefer, but we also come to prefer the things we choose. I know Laurie Santos has done some cool work on this, so maybe you want to follow up after. I've never thought about this cognitive dissonance reduction stuff from a functional perspective, but I wonder if maybe one reason that we do this is to make up for the fact that we have to sometimes make these epistemically transformative choices. Maybe one interesting question for empirical research would be whether cognitive dissonance reduction and choice-induced preference change is stronger for these decisions where there is this epistemic transformation going on.

PAUL:  Absolutely. One of the things that fascinates me is this notion of preference capture, where you're contemplating the possibility of changing your preferences, and where you can't forecast how they're going to evolve. That's an important component here. You don't know what's going to happen to you, but maybe you know, "Well, it's going to be the case that I'm going to change myself so that I'll be happier with the result." Philosophers need to think about this. You're right, there might be this interesting evolutionary or adaptive feature to this so there's a way to make sense of this and think about what the epistemic gold standard should be in that context.

It also raises interesting philosophical questions about how we want to think about authenticity in the light of that issue.

HUGO MERCIER: One question is, how different are these cases, in which you don't know what it's going to be like to be a vampire, from any other source of uncertainty? You have to make a decision, and you just don't know; why are they different? The other question is, if you look at cases like the durian example, you might use a bunch of heuristics, such as how much you like novel foods, how much you like novel fruits, whether you tend to like things that some people hate. You can use a bunch of things to make the decision less blind.

PAUL: There are some subtleties here. Normally, uncertainty regards the probabilities or the credences involved with the situation. In the way that I'm thinking about decision theory—which is a very orthodox, natural way—we've got various probabilities that we would assign to states, and then the utilities of the outcomes. Normally, standard models involve uncertainty with respect to the probabilities. The standard assumption is that we know enough to be able to assign the values, but what we don't know are the probabilities, so that's where the uncertainty is. My problem is different from that because what we don't know is the value. Sometimes we might have probabilistic uncertainty, but sometimes we might have perfect certainty about the various likelihoods. We just don't know what the values are.

That said, there is a decision-theoretic move that one could make. I've been exploring different ways of developing decision-theoretic models to accommodate these issues. Interestingly, all of these models accommodate the problem by forcing it into a problem, say, of uncertainty, and that means you get even more problems on the authenticity side. It's a dilemma, and you have to figure out which horn you want to choose.

There is a move you could make, where instead of the utility being undefined, you say, "I'm just going to describe every possible utility." Then the decision involves a massive amount of uncertainty about which utility is going to come into play. That's a way of pushing it all into massive uncertainty. But as you can see, then the decision becomes horrible in a different sense.
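One way to picture this move, in a hypothetical model with invented numbers rather than Paul's own formalism, is to treat the unknown utility as a random variable: spreading credence over every candidate value makes the expectation defined, but the resulting spread is so wide that the "decision" is nearly uninformative.

```python
# Invented illustration: replace an undefined utility with a credence
# distribution over every candidate utility value.
candidate_utilities = [-10.0, -5.0, 0.0, 5.0, 10.0]  # repulsive ... ambrosia
credences = [0.2] * 5  # uniform: no idea which value is right

expected = sum(c * u for c, u in zip(credences, candidate_utilities))
variance = sum(c * (u - expected) ** 2 for c, u in zip(credences, candidate_utilities))

print(expected)  # 0.0: the expectation is now defined...
print(variance)  # 50.0: ...but the spread is enormous, so the verdict is nearly worthless
```

The arithmetic goes through, which is the point of the move; what it buys is a number whose huge variance reflects exactly the ignorance the move was supposed to dissolve.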

MERCIER:  In more realistic cases you can use heuristics to approximate how you feel.

PAUL:  Yes, I definitely agree. In some of the work that I've been exploring, there are a couple of different issues that need to be separated here. One question is: are we changing the way that we're making the decision, in a way that takes us away from the way we want to make it? We might have to. As a philosopher, I want to say it might be the case that the natural way you want to think about whether you should become a parent just isn't the right way to do it. We need to think about how to replace that natural model with a better model. The next question is: what other models should we use? Two of my favorite options: the first is a model where we know that there are these outcomes, but we don't know what the outcomes are. Then you can use principles, or what some people call heuristics; I prefer to call them principles. Think of a chess game where, for example, I haven't memorized a bunch of chess moves and I'm playing against a grandmaster. I move my queen a certain way. I know the rules of the game and how the pieces move, and I know that when I move my queen a certain way something's going to happen, but I don't know what all the different configurations are that are going to result.

The best way to make my decision would be to endorse certain principles about how one should move in those situations, even if I can't assess the utilities and compare the outcomes explicitly. That does seem to be one way to start to address these questions. The second option: there are interesting things you can do by looking at people who have come out on the other end, who have had these transformative experiences. It might be the case that you can eat all the other fruit you want and it's not going to help you know what a durian tastes like, but it turns out that if you've done a lot of sewage work and inhaled a lot of fumes, there's a certain first-personal experience you have that will give you some insight into the first-personal experience of tasting a durian. Discovering what those things might be, which might be very different from imagining tasting something, would be another way to get at least a partial value. It's not going to take away all of the problems.

It's also important, not that you were doing this, but sometimes people slip into, "Well, maybe I can introspect a little bit and then use some information and do it that way." Part of my point is that natural way of thinking about things works for familiar situations, but for these radically new contexts, it's just not going to work. We just have to be careful about not slipping into that way of thinking.

LAURIE SANTOS:  A lot of what we know in judgment and decision-making suggests that, because of choice-induced preference changes, because we don't have access to our preferences, and because our new situation changes our evaluation of the old, there's a sense in which you could take every choice and every decision as a mini-version of a transformative experience. If that's the case, then this is a fundamental problem, not just for deciding to become a vampire or having a kid. Every time I choose one of those cookies, it's going to affect my future. It's going to be a mini-transformative experience that affects my future preferences.

PAUL: Here's where I make some philosophical distinctions that are relevant. It's a context-dependent situation. I think of it in terms of experiential, natural kinds. If I'm thinking, "Do I want to have a chocolate chip cookie?" I've had chocolate chip cookies before, so in a thick sense, I know what it's like to have a chocolate chip cookie, and I can make a decision based on what I think it's going to be like. Here's what I don't have: I don't have the fine-grained experience of having that particular, totally fabulous, amazing chocolate chip cookie.

When you start playing around with the context and the stakes, you do get the problem right back. What I'm trying to push is that we have to be incredibly precise about how we're defining these things, or much more precise to try to avoid unknowingly finding ourselves with these in-principle structural problems.

DAVID PIZARRO: You can't know what a transformative experience will be in that precise way; it's transformative experiences all the way down. The very thing that you're saying should be a warning sign—"when it is transformative, don't make this error"—is something I don't know yet. Why was your category "chocolate chip cookie" and not "baked good in Connecticut"? I don't think that you would argue that there's a sharp divide between things that are transformative and things that are not. You could say you've had pain, and let's say that severe pain is five percent of being a mom; then you know something. You could say you don't know anything about what it's like to be in space; it's the ultimate transformative experience. Then you could say that somehow you could rank all possible experiences by roughly how much you would know about each. I don't think that's what you're trying to say. What you're trying to say is that some things are unknowable. I just don't think you even know what's unknowable.

PAUL:  First of all, philosophers want to understand the structure as opposed to my particular situation. Second thing—we've been going very meta, and as a philosopher I have to go even more meta—it might be that you discover the category of transformative experiences, but you have to have one to know what it's like to have a transformative experience. There's a little wiggle room there.

SANTOS:  You think you know, but you don't know.

PAUL:  You can always raise these questions.

PIZARRO:  Some people say, "I've had a dog so I'm totally with you about the mom thing."

CROCKETT: You talked about natural kinds. I wonder if there can be a distinction between experiences that are truly unknowable and other experiences that you've never experienced before, but you can simulate. There's an interesting paper by Helen Barron and Tim Behrens published last year, where they looked at the neural mechanisms of making decisions about novel goods. The goods were food items that were composed of familiar foods—a raspberry avocado milkshake, for example, or tea-flavored JELL-O. Even though you've never had a raspberry avocado milkshake, you can simulate raspberry and avocado and what that would be like. What the brain is doing in these decisions is, you can see the trace of avocado and the trace of raspberry being combined and simulated in that way. I'm just wondering if you make that distinction between non-experienced but potentially simulated.

PAUL:  I read that article. I thought it was a cool article. It's important to see that these decision problems are arising at the individual level. It obviously doesn't mean that individuals can't make generalizations or draw on past experience, but the way that each of us faces this problem is going to be highly dependent on the previous experiences that each of us have had. That is crucial.

In some experiences, past experiences of parts can be conjoined to allow us to imagine what the whole experience would be like, but other experiences don't seem to be like that. Again, having a child, speaking from experience, didn't seem to be that way. There are interesting empirical and philosophical questions here about which experiences can be collected or conjoined together so that you can perform an imaginative simulation or model, and which ones aren't, and why not.

FIERY CUSHMAN: I feel like part of the core question is going to have to be whether unitary utility comprises the full space of decision-relevant qualia. If even parenthood finally grounds out in utility, then because I know what utility is, you can tell me how much you have, and the only relevant simulation I need to perform is one of utility, not one of the particular experiences that happen to afford utility, if utility is the only kind of qualia that has decision value. But if somehow it were possible that there were qualia that were not utility, that could not be translated into utility, but that would still bear on decisions, then it would be necessary for me to simulate those, and they might be unknowable because I've never had them before, unlike utility, which I've certainly had before.

It feels like there are two interesting ways to go. One is to say utility just is decision-relevant things. If you've ever experienced making a decision at all, then you know what it is to make a good one, and parenthood is a good one. It's going to be one of those ones that you like.

PAUL: I was with you until that last bit. It's absolutely the case that we need to think in a more sophisticated way about utility. One thing that I'm convinced of is that we should not be thinking in terms of simple hedonic pleasure and pain. One thing I'm fascinated by is the intrinsic value of experience. There's work in philosophy about color experience; a lot of that work is focused on defining color terms. Some of the interesting work concerns this notion of revelation. In other words, what do certain experiences reveal? Whatever it is that they reveal, it's very hard to pin down. We also think that they're valuable. This comes from Aristotle, who argued that, in principle, experience has a value to us. It's not clear to me that we can measure that in terms of hedons. I'm not saying the values are not comparable, it's not a straightforward issue about incommensurability either. Rather, this is another place where there are pressing questions that philosophers, in particular, need to think more about. There's been less attention paid to this formal approach towards phenomenology than there needs to be. Again, it seems to me there are obvious empirical ways that we could think about exploring this.

MICHAEL MCCULLOUGH:  Someone brought up the usefulness of relying on past experience, and I wanted to get in on that and combine that with Hugo's comments about heuristics. There is another way you can draw on past experience, which is to draw on deep past experience. One of the things that natural selection does is it capitalizes on invisible correlations ancestrally. You could say it's more or less self-evident that ancestrally there was a correlation between having offspring and fitness, on average. I'm just going to put that out there. Dawkins has this idea of child lust. It's as real a quality as sexual lust. That is the product of this invisible correlation in deep time between setting yourself up for having children and getting more copies of your genes out into the world.

JENNIFER JACQUET:  But birth rates are going down.

PAUL:  I say we grant him this. There's some work by Gary Brase and some other people about baby fever that might call that into question, but let's let it pass.

MCCULLOUGH:  We can use that for anything—food, maybe that's a better one. To what extent can we ask for a free pass on trusting some of our intuitions for some of these big questions? The intuitions, on average, are going to be reliable ones. You can also extend this to after you've had the child, when you begin to regret it. What does that say? That's going to happen to some people. The mind is built around these invisible correlations that build up over time. You can imagine, as horrible as it is to contemplate, that the compulsion some people have to get rid of their child is reflective of some information in the environment. It's not 100 percent reliable information coming through; it's a noisy signal that says perhaps this is not the time when taking this child forward is in your best inclusive fitness interest.

PAUL: Let's go back to the philosophical picture. Again, there is an ancient philosophical picture, where the best self is the rational, deliberative self, who thinks carefully, assesses their intrinsic inner nature, and then chooses in a calm, epistemically wonderful way. We've got a picture where that involves a certain reflection on who you are and involves certain imaginative or mental capacities. What I'm hearing you suggest is maybe that's just not the right story for a lot of our big decisions. Maybe there's a different biological story that we should look to. That's worth exploring. But again, it illustrates this tension between this picture that we have of ourselves as introspective agents and what rationality might demand. When you're faced with a big decision, let's say, in cases of informed consent, or you're thinking about writing an advance directive, you are supposed to think very carefully about what you want. There's something unsatisfying about being told, "I'm going to replace that picture with something else."

MCCULLOUGH:  When you ended your talk by saying we find ourselves alienated from ourselves, maybe the thing I'd want to say there is that this problem alienates us from our system 2 self.

DAVID RAND:  No, I don't think so. From a rational perspective, the problem is how do I predict what my post-child-having utility is going to be? If you believe in natural selection, you can say from a completely rational perspective, "I understand that it must be the case that I will be glad that I had the child afterwards. Otherwise, we would all be dead."

JACQUET:  Well, no, we never had the choice before. The whole idea of us having a preference about this decision is a new one.

PAUL:  This particular case is very modern. The more choices that we get, the more control we have over our futures, the more we face these issues. It's a distinctively modern problem in a certain way, as well as having ancient connections. From an evolutionary perspective, we know that our preferences are going to change, and we know we're going to be happy—supposedly; let's disregard the confusing empirical results—so we should do it. All you're saying then is that we should replace our current self with our future self because we should be utility maximizers. That needs to be questioned. That is one route, but it's not an obviously satisfactory route, especially when you don't want to have kids. It might be like, "You've got to have a kid because you're not rational if you don't choose to have a child. You're not even biologically fit in some way." That's a deeply problematic claim to make.

PIZARRO:  But you're also saying you're not rational if you don't choose to have a child.

PAUL:  Well, that's right. I was just responding to Dave's suggestion there.

PIZARRO:  Here's one example of perhaps the most epistemically and phenomenologically unavailable state: Death. Surely, one implication of what you're saying is that it is not rational to not want to die. Because after all, not only do I not know what it is, I can't even ask anybody.

SIMONE SCHNALL:  But you don't have much of a choice in the matter.

PIZARRO:  Sure you have a choice—suicide.

PAUL: Death involves the absence of experience, in a certain way.

Here's a problem. In principle, we can't know until we do it. That's why there are these decision-theoretic problems, because look, do you want status quo bias? Is discovery always good? There's no simple, straightforward answer.

PIZARRO:  I was trying to use this as a reductio where you would say, "Surely it is irrational to prefer death, all things being equal!"

PAUL:  I wasn't suggesting that we should prefer death, all things being equal.

PIZARRO: No, but the rationality of it. That you would say that I am able to make a rational decision based on something about which I know nothing, phenomenologically or epistemologically.

PAUL:  You have to be very careful about how you're framing that decision. My claim isn't that there's no way to make rational decisions about these different things. You can, for example, rationally choose to have children. You can rationally choose to try durian and other things. But there are certain bases that you can't use to make your decision. This way of thinking is a mistake: I want to commit suicide because I know it's going to be better when I'm dead. That doesn't make any sense, for obvious reasons. It's also the problematic form of the decision that you see in a number of other cases: I want to be a parent because I know it's going to be better, because I know it's going to form my preferences. That reasoning is problematic. We have to be quite careful here because although I do want to say that this is a fundamental problem and it's fairly far-reaching, it doesn't destroy decision theory, and it doesn't destroy the way that we can rationally plan. I'm a big fan of using principles, and I'm a big fan of building more sophisticated decision models, where we distinguish between different types of utility. My point is that we need to see the structure here and see the complexity so that we can successfully attack the problem.