Reconsider the Lobster

Jeff Sebo is a Research Assistant Professor of Philosophy and the Associate Director of the Parr Center for Ethics at the University of North Carolina at Chapel Hill. He gave a talk at this year’s Rocky Mountain Ethics Congress entitled “Reconsider the Lobster” in which he asked how we should treat nonhuman animals in cases of uncertainty about whether or not they are sentient (on the assumption that sentience is necessary and sufficient for having moral status). The paper considers three answers to this question and concludes that we morally ought to treat many animals, such as invertebrates, much better than we currently do. What’s Wrong? is grateful to Professor Sebo for making the full text of his paper available.

RECONSIDER THE LOBSTER

1. Introduction

In 2003 David Foster Wallace took a trip to Maine to write an article for Gourmet Magazine about what it was like to attend the Maine Lobster Festival. But the article that he ended up publishing, titled “Consider the Lobster,” is not so much a travelogue as a deeply personal examination of whether or not we have a moral duty not to kill lobsters for food.[1] About halfway through this article, Wallace puts his finger on a problem that, I think, deserves much more philosophical attention than it currently receives. He writes:

[T]he questions of whether and how different kinds of animals feel pain … turn out to be extremely complex and difficult. And comparative neuroanatomy is only part of the problem. … [The] principles by which we can infer that others experience pain … involve hard-core philosophy – metaphysics, epistemology, value theory. … And everything gets progressively more abstract and convoluted as we move farther and farther out from the higher-type mammals into cattle and swine and dogs and cats and rodents, and then birds and fish, and finally invertebrates like lobsters.[2]

The problem that Wallace is describing here, which I will call the sentience problem, is this: Suppose that (a) an individual has moral status if and only if they are sentient and (b) we are not always certain which individuals are sentient. How should we treat individuals in cases of uncertainty about whether or not they are sentient?

My aim in this paper is to present and evaluate three possible solutions to the sentience problem. First, we can follow an incautionary principle that permits us to treat individuals as non-sentient in cases of uncertainty. Second, we can follow a precautionary principle that requires us to treat individuals as sentient in cases of uncertainty. Finally, we can follow an expected value principle that requires us to multiply our credence that individuals are sentient by the amount of moral value that they would have if they were. I will not try to say which one of these principles is correct in this paper. Instead, I will argue for two preliminary conclusions. First, we should reject the incautionary principle. Second, if we accept either the precautionary principle or the expected value principle, then we morally ought to treat many animals, such as invertebrates, much better than we currently do.

I will proceed as follows. In section 2, I will present the background assumptions in moral philosophy and philosophy of mind that motivate the sentience problem. In sections 3-5, I will present and evaluate the incautionary principle, precautionary principle, and expected value principle, respectively. Finally, in section 6, I will draw my preliminary conclusions.

2. The sentience problem

I begin by presenting the assumptions in moral philosophy and philosophy of mind that motivate the sentience problem.

The first assumption that we need to make in order to motivate the sentience problem is sentientism about moral status. On this assumption, an individual has moral status, i.e. matters morally for their own sake, if and only if they are sentient, i.e. capable of consciously experiencing pleasure or pain. Note that the kind of consciousness that matters for our purposes here is phenomenal consciousness. That is, on the theory of moral status that we will be considering, your experiences count as conscious in the morally relevant sense if and only if there is “something that it is like” for you to be having them.[3] As we will see, this requirement that you have a private, subjective, qualitative feeling corresponding to your pleasure and pain experience is part of what makes the sentience problem, as defined here, so problematic.

The second assumption that we need to make in order to motivate the sentience problem is uncertainty about other minds. On this assumption (which is widely accepted), we are not always certain which individuals are sentient. This uncertainty has scientific as well as philosophical sources. Scientifically, we are not always certain which individuals experience pleasure and pain. For example, many people now accept that all vertebrates experience pain.[4] But the category of invertebrates is much less clear. With respect to some species, like squid, many people think that the behavioral and physiological continuities with humans outweigh the discontinuities, and so they proceed on the assumption that these animals do experience pain. With respect to other species, like ants, many people think that the behavioral and physiological discontinuities with humans outweigh the continuities, and so they proceed on the assumption that these animals do not experience pain (an inference that, we should note, is less well motivated). And with respect to still other species, like lobsters, many people think that the continuities and discontinuities are so well balanced that they have no idea what to think.

Moreover, philosophically, we are not always certain which individuals are conscious. Unlike your behavior and physiology, your private mental life is not publicly observable even in principle. Sure, I might perceive you as having private, subjective experiences, and I might also infer that you do by analogy with me. But this perception may be inaccurate, and this inference may be based on bad reasoning.[5] Of course, we may think that the problem of other minds has a solution. But even if we do, it is not likely that this solution will ground certainty about whether or not others are conscious in all cases. Instead, and at most, it will ground a high degree of confidence that, say, other humans are conscious, and then a decreasing degree of confidence that nonhumans are conscious depending on how behaviorally, physiologically, and evolutionarily continuous they are with us.

If we combine these assumptions, the upshot is that we are not always certain which individuals have moral status.[6] Depending on how persuaded we are by the problem of other minds, we might think that this uncertainty will last for a very long time, if not forever. Yet in the meantime, we still have to decide how to treat many individuals about whom we are uncertain. How many exactly? It is impossible to say for sure. But, to put things in perspective, the Encyclopedia Smithsonian estimates that, at any given time, “there are some 10 quintillion (10,000,000,000,000,000,000) individual insects alive.”[7] It follows that, if there is even a 1-in-10,000 chance that the average insect consciously experiences even 1/10,000th the pleasure and pain that, say, the average dog does at any given time, then the expected total amount of pleasure and pain consciously experienced by insects at any given time is equal to that of 100 billion dogs. (And of course, this is leaving aside all the other, non-insect individuals about whom we are uncertain.) It is therefore extremely important that, rather than wait for developments in cognitive ethology, comparative psychology, and philosophy of mind that may never come, we think as carefully as possible, right now, about how we morally ought to treat individuals in cases of uncertainty about whether or not they are sentient.
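To check the arithmetic explicitly, here is the calculation behind that figure, written out in expected-value form using the paper's own numbers:

\[
  \underbrace{10^{19}}_{\text{insects}}
  \times
  \underbrace{10^{-4}}_{\text{chance of sentience}}
  \times
  \underbrace{10^{-4}}_{\text{dog-equivalent welfare}}
  \;=\; 10^{11},
\]

which is the 100 billion dog-equivalents of expected pleasure and pain cited above.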

Since the question that we are asking here falls into the general category of the ethics of risk and uncertainty, we will proceed by considering the three main types of principles that one can accept in such cases – an incautionary principle, a precautionary principle, and an expected value principle – and by exploring the strengths and limitations of each principle as it applies to the sentience problem.

3. The incautionary principle

Start with the incautionary principle. This principle holds that, in cases of uncertainty about whether or not a particular individual is sentient, we are morally permitted to treat them as though they are not.[8]

As far as I can tell, philosophers do not usually defend the incautionary principle in cases of risk and uncertainty, but I am starting with it anyway since many people seem to at least implicitly accept it in this area. So, what should we think about it? In my view, it faces at least two problems. First of all, the incautionary principle, as currently stated, is far too extreme. For example, it implies that, if you are less than 100% certain that the problem of other minds has a solution (and therefore that other individuals have conscious experiences), then you are morally permitted to treat everyone in the world other than you as non-sentient. But I think that most of us would agree that this kind of moral solipsism is a non-starter. The mere fact that your friends and family members are only, say, 99.99999% likely to be sentient, given your evidence, is not sufficient reason to treat them as though they are not.

A proponent of the incautionary principle might reply to this objection by pointing out that we can restrict the scope of the principle so that it applies in the case of, say, lobsters but not in the case of, say, our friends and family members. For example, perhaps we can say that the incautionary principle applies if and only if you are, say, less than 15% confident that a particular individual is sentient; otherwise one of the other principles that we will be considering here applies. (Of course, it is a separate question whether or not this revision would be enough to rule out moral solipsism, as well as whether or not we could find a principled basis for this revision. But we can ignore those complications for the sake of argument here.)

But even if we accept this reply to the first objection, the incautionary principle still faces another, more important objection. Even if we restrict the scope of this principle in this kind of way, it still has implausible implications for our treatment of individuals who fall within its restricted scope. For example, suppose that you are only, say, 12% confident that a lobster is sentient (and therefore this lobster falls within the restricted scope of the incautionary principle). Suppose further that you feel inclined to boil this lobster alive – not even in order to eat him, but rather only for the simple pleasure of doing so. In this case, the incautionary principle implies that you have no moral reason at all not to act on this inclination. However, I suspect that many of us think that this assessment of the situation is too simple: We think that, if there is a real chance that this lobster is sentient, then you need to take that possibility into account in your thinking about how to treat him. And if you do not – for example, if you boil this lobster alive for the simple pleasure of doing so despite believing that there is a 12% chance that he is phenomenally aware of every single moment of his torment – then your action is at least prima facie morally wrong. Moreover, what makes your action at least prima facie morally wrong is not only, as Kant claimed, that you might be conditioning yourself to be “harder in your dealings with” other human and nonhuman animals, but also, indeed primarily, that you might be harming this particular lobster here and now.[9]

If this is right, then the question that we face is: Can we find a way to make our moral thinking sensitive to the possibility that a particular individual is sentient? And if so, how?

4. The precautionary principle

This brings us to the precautionary principle. This principle holds that, in cases of uncertainty about whether or not a particular individual is sentient, we are morally required to treat them as though they are.

The precautionary principle is widely accepted, at least by many animal rights advocates. However, this principle faces at least two problems as well, which mirror the problems that the incautionary principle faces. The first problem is that the precautionary principle, as currently stated, is implausibly extreme. For example, it implies that, if you are less than 100% certain that panpsychism is false (and therefore that other individuals do not have conscious experiences), then you are morally required to treat everyone and everything in the world as though they are sentient. But I think that most of us would agree that this kind of moral panpsychism is a non-starter as well. The mere fact that tables and chairs are, say, .00001% likely to be sentient, given your evidence, is not sufficient reason to treat them as though they are.

A proponent of the precautionary principle might reply to this objection in the same kind of way that the proponent of the incautionary principle might: by pointing out that we can restrict the scope of the principle so that it applies in the case of, say, lobsters but not in the case of, say, tables and chairs. For example, perhaps we can say that the precautionary principle applies if and only if you are, say, more than 5% confident that a particular individual is sentient; otherwise one of the other principles that we will be considering here applies. (As before, it is a separate question whether or not this revision would be enough to rule out moral panpsychism, as well as whether or not we could find a principled basis for this revision. But we can ignore those complications for the sake of argument here.)

But even if we accept this reply to the first objection, the precautionary principle, like the incautionary principle, still faces another, more important objection. Even if we restrict the scope of this principle in this kind of way, it still has implausible implications for our treatment of individuals who fall within its restricted scope. For example, suppose that you are, say, 12% confident that a lobster is sentient, and you are, say, 8% confident that a functionally identical robot lobster is sentient. (Why the difference? Because even though these lobsters are functionally identical, the real lobster is physiologically and evolutionarily continuous with you whereas the robot lobster is not, and therefore you have at least a bit more (subjective) reason to think that the real lobster is sentient than that the robot lobster is.) Finally, suppose that a house containing both lobsters is burning down, and you have time to save only one of them. Which lobster should you save? The precautionary principle implies that you are morally required to treat both lobsters as fully sentient, and therefore it is neutral about which lobster you should save. Yet I suspect that many of us think that this assessment of the situation is too simple: We think that, while both of these lobsters might be sentient, the real lobster is also a bit more likely to be sentient than the robot lobster is, given your evidence, and therefore you morally ought to break the tie in favor of saving the real lobster, all else equal.

If this is right, then the question that we now face is: Can we find a way to make our moral thinking sensitive to not only the possibility but also the (subjective) probability that different individuals are sentient? And if so, would this scalar solution to the sentience problem be better or worse, all things considered, than a suitably restricted all-or-nothing precautionary principle?

5. The expected value principle

This brings us, finally, to the expected value principle. This principle holds that, in cases of uncertainty about whether or not a particular individual is sentient, we are morally required to multiply our credence that they are by the amount of moral value they would have if they were, and to treat the resulting product as the amount of moral value that they actually have. Of course, different moral theorists will flesh this principle out in different ways, depending on the kind of value that they attach to moral status. In what follows I will briefly consider how utilitarians and Kantians might flesh it out, and I will also discuss the strengths and limitations of each approach.
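Stated schematically (the notation here is mine, not the paper's): if $p$ is our credence that an individual is sentient, and $V$ is the moral value they would have if they were, the principle directs us to treat them as having

\[
  \text{expected moral value} \;=\; p \cdot V + (1 - p) \cdot 0 \;=\; p \cdot V,
\]

where the zero reflects the sentientist assumption that a non-sentient individual has no moral status at all.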

Start with utilitarianism. Utilitarians are primarily concerned with how much conscious pleasure and pain we bring about. Therefore, a utilitarian will interpret the expected value principle as an expected utility principle, according to which, in cases of uncertainty about whether or not a particular individual is sentient, we are morally required to multiply our credence that they are by the amount of conscious pleasure and pain they would be experiencing if they were, and to treat the resulting product as the expected utility of our treatment of them. If we can make this principle work, then it will have plausible results in many cases. For example, in the burning house case, it implies that we should save the real lobster all else equal, which seems plausible.
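To make the comparison concrete, here is a minimal sketch of the expected utility calculation in the burning house case. The credences (0.12 and 0.08) come from the example above; the utility scale is an arbitrary placeholder of my own, since the paper assigns no numbers.

```python
def expected_utility_of_saving(credence_sentient: float,
                               utility_if_sentient: float) -> float:
    """Expected utility principle: discount the welfare at stake by the
    credence that the individual is sentient at all. (If they are not
    sentient, saving them produces no welfare on this view.)"""
    return credence_sentient * utility_if_sentient

# Placeholder scale: saving a sentient lobster from burning is worth
# 1 unit of welfare, whichever lobster it happens to be.
UTILITY_IF_SENTIENT = 1.0

real_lobster = expected_utility_of_saving(0.12, UTILITY_IF_SENTIENT)
robot_lobster = expected_utility_of_saving(0.08, UTILITY_IF_SENTIENT)

# 0.12 > 0.08, so the principle breaks the tie in favor of the real
# lobster, all else equal -- matching the verdict in the text.
assert real_lobster > robot_lobster
```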

There are two related worries that we might have about this expected utility principle, however. First, we might worry that, even if we set aside radical skepticism about other minds, we do not have enough information about probabilities and utilities to be able to make reliable comparative judgments about sentience. This is true whether we make these judgments critically or intuitively. For example, think about what it would take to make them critically. We would have to calculate the probability that each individual is phenomenally conscious, and then we would have to calculate the probability that each individual experiences pleasure and pain. Then we would have to calculate the probability that each individual phenomenally consciously experiences pleasure and pain, based on our answers to these questions. I think we can all agree that it would be difficult for us to perform these calculations in any kind of rigorous, systematic, and generally reliable way.[10] We might even worry that, if we try to engage in this kind of risk-benefit analysis in everyday life, then we will end up doing more harm than good overall.[11]

Now think about what it would take to make these judgments intuitively. We would have to make snap judgments about how likely each individual is to be sentient, and we would have to “round up” or “round down” the expected utility of our treatment of them accordingly. This seems to be what many of us do in practice, at least for certain species. For example, many of us are much more cavalier about harming insects than mammals, in part because we have the intuition that insects are much less likely to experience pain than mammals are. But we also have good reason to think that these snap judgments are unreliable in a wide range of cases. For example, numerous studies have shown that our capacities for sympathy and empathy (as well as the judgments about sentience that they depend on) are sensitive to a variety of factors that we think, on reflection, are not particularly likely to be relevant, including symmetrical vs. asymmetrical features, furry vs. slimy skin, four vs. six limbs, and so on.[12] Thus, we have at least some reason to worry that our intuitive judgments about sentience are no more reliable than our critical judgments are.

A utilitarian might reply to this worry, however, by pointing out that, even if we do not have reason to trust our comparative judgments about sentience in all cases, we might still have reason to trust them in some cases. For example, in the burning house case, we do not have to assign a precise credence to the proposition that each lobster is sentient. Instead, all we have to do is break a tie between them. And, given that these lobsters are qualitatively identical except that the real lobster is physiologically and evolutionarily continuous with us whereas the robot lobster is not, we might think that we can trust our comparative judgment enough in this case to justify following the expected utility principle rather than the precautionary principle, even if the same is not true in other cases. If so, then a utilitarian might accept a hybrid solution to the sentience problem. In particular, they might say that (a) in easy cases where we are clearly capable of engaging in the relevant kind of risk-benefit analysis, we should do so, whereas (b) in hard cases where we are not clearly capable of engaging in the relevant kind of risk-benefit analysis, we should follow a suitably restricted precautionary principle instead, on the grounds that false negatives are worse than false positives in this area, i.e., treating sentient individuals as non-sentient is worse than treating non-sentient individuals as sentient, within certain limits.
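One way to see the shape of this hybrid procedure is as a decision rule. The following is a sketch under my own assumptions: the 5% precautionary floor and the binary "reliable analysis" flag are placeholders that the paper does not specify.

```python
def moral_weight(credence_sentient: float,
                 value_if_sentient: float,
                 analysis_is_reliable: bool,
                 precautionary_floor: float = 0.05) -> float:
    """Hybrid procedure: expected value in easy cases, a restricted
    precautionary principle in hard ones. Thresholds are placeholders."""
    if analysis_is_reliable:
        # Easy case: we trust our comparative judgment, so we follow
        # the expected utility principle directly.
        return credence_sentient * value_if_sentient
    # Hard case: false negatives are worse than false positives here,
    # so treat the individual as fully sentient once our credence
    # clears a modest floor.
    return value_if_sentient if credence_sentient > precautionary_floor else 0.0

# Burning house: the comparison is trusted, so expected value applies
# and the real lobster (0.12) outweighs the robot lobster (0.08).
assert moral_weight(0.12, 1.0, True) > moral_weight(0.08, 1.0, True)

# A hard case with no reliable comparison: the precautionary branch
# treats a possibly sentient individual as fully sentient.
assert moral_weight(0.12, 1.0, False) == 1.0
```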

This reply might be enough to assuage concerns about the clear limits of our critical and intuitive reasoning skills. But it might not be enough to assuage a second, related worry, which is that, even if expertly applied, the expected utility principle is unacceptably anthropocentric. That is, if we follow this principle correctly, then we will end up systematically favoring individuals who are similar to us in certain respects, since, as we have seen, we have at least a bit more reason to think that individuals who are similar to us in certain respects are sentient than individuals who are not, all else equal. Thus, for example, in a future world where we all co-exist with functionally identical robots, the expected utility principle would direct us to systematically favor real humans over robot humans (and it would direct robot humans to do the same). And we might worry that this kind of discrimination might not only be mistaken (since functionalism might be true) but might also result in the kinds of social conflicts that will do more harm than good overall. In response to the first concern – that anthropocentrism is possibly wrong – utilitarians might simply accept that our epistemic standpoint is limited, and that the actions that we subjectively ought to perform (i.e. the actions that maximize expected utility) are not always the same as the actions that we objectively ought to perform (i.e. the actions that maximize actual utility). But in response to this second concern – that anthropocentrism, right or wrong, might do more harm than good overall – utilitarians might grant that, insofar as this worry is accurate, we should place further restrictions on our use of the expected utility principle in practice.

Now consider Kantianism. Kantians are primarily concerned not with how much pleasure and pain we produce overall, but with whether or not we are treating subjects of moral concern as ends in themselves, with a “dignity beyond all price.”[13] Therefore, Kantians will flesh out the expected value principle as an expected dignity principle, according to which, in cases of uncertainty about whether or not a particular individual is sentient, we are morally required to multiply our credence that they are by the infinite and incomparable moral value that they would have if they were, and to treat the resulting product as the moral value that they actually have. This principle sounds reasonable as far as it goes. But how does it work in practice? For example, in the burning house case, how exactly are you supposed to multiply an infinite and incomparable moral value by .12 and .08, respectively?

At first glance, we might think that the answer to this question is simple. If Kantians think that moral status is a kind of infinite and incomparable moral value, then the expected dignity principle implies that, as long as we have any non-zero credence that a particular individual is sentient, we morally ought to treat them as though they are fully sentient – since, after all, if we multiply any non-zero number by infinity, the product is infinity.[14] We might therefore think that the expected dignity principle collapses back into the precautionary principle. For example, in the burning house case, we might think that this principle implies that we are morally required to treat both the real lobster and the robot lobster as having full and equal moral status, and therefore we should be neutral about whom to save all else equal.
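On this reading the collapse can be put in one line, using the credences from the burning house case:

\[
  0.12 \cdot \infty \;=\; \infty \;=\; 0.08 \cdot \infty,
\]

and in general $p \cdot \infty = \infty$ for any credence $p > 0$, so every individual with any non-zero chance of being sentient comes out with the same infinite expected dignity.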

There is another interpretation of the expected dignity principle available to Kantians, however, that avoids this implication. This interpretation involves thinking about this principle in the same kind of way that Kantians think about risk and uncertainty in general. Consider the following case. A boulder is rolling down a hill, and if it stays on its current path it will crash into two houses, killing anyone inside. Fortunately (since this is a philosophy thought experiment), you have access to a lever such that, if you pull it to the left, the boulder will hit only the house on the left, and if you pull it to the right, the boulder will hit only the house on the right. You have no other options available to you. Meanwhile, the only information you have about the houses is: The house on the left has lights on, whereas the house on the right has lights on and music playing. What should you do? As far as I can tell, there is no universal Kantian answer to this question. But many Kantians would say, and it is certainly consistent with Kantianism to say, that you morally ought to divert the boulder to the left all else equal. Why? Because even though anyone who might be home has full and equal moral status, someone is a bit more likely to be home on the right, given your evidence, and therefore you minimize the risk of harm to individuals with full and equal moral status if you divert the boulder to the left.
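In expected-harm terms (a formalization of mine, not the paper's): if $q_L$ and $q_R$ are the probabilities that someone is home in the left and right houses respectively, the evidence gives $q_L < q_R$, so

\[
  \text{expected harm}(\text{divert left}) \;\propto\; q_L \;<\; q_R \;\propto\; \text{expected harm}(\text{divert right}),
\]

and diverting the boulder to the left minimizes the risk of killing someone with full and equal moral status.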

If this is right, then Kantians can interpret the expected dignity principle in the same kind of way. On this interpretation, the question is not how much moral status you should treat others as having, but rather how much risk you should assume there is that others have “someone home.” For example, in the burning house case, Kantians can say that you morally ought to save the real lobster all else equal – not because you should treat the real lobster as having a higher moral status than the robot lobster (they both have full and equal moral status if “someone is home”), but rather because someone is a bit more likely to be home in the real lobster, given your evidence, and therefore you minimize the risk of harm to individuals with full and equal moral status if you save the real lobster.

As this analysis shows, the Kantian expected dignity principle, on this latter interpretation, allows for the same kinds of comparative judgments as the utilitarian expected utility principle (and is also, therefore, subject to similar concerns about reliability and anthropocentrism). Further parallels between the utilitarian and Kantian interpretations are worth noting as well. For example, as we have seen, utilitarians might accept a hybrid decision procedure that directs them to follow an expected utility principle in some cases and a suitably restricted precautionary principle in others, so as to maximize expected utility overall. Similarly, as we have seen, Kantians might accept an expected dignity principle that resembles an expected utility principle in some cases (since it implies that we should favor individuals who are more likely to be sentient all else equal) and a suitably restricted precautionary principle in others (since it implies that we should minimize unnecessary risk to individuals who might be sentient in general). Of course, utilitarians and Kantians will still use these principles in different ways, since they will still disagree about whether or not we should, for example, aggregate harms and benefits across individuals. But as long as they are sentientists, they will at least agree that we should expand the moral radar beyond vertebrates, and they will also (if they accept the expected value principle) agree that we should make comparative judgments in at least some cases.

6. Conclusion

My aim in this paper has been to present and evaluate three possible solutions to the sentience problem: an incautionary principle, a precautionary principle, and an expected value principle. I think that my discussion here supports two general, if preliminary, conclusions.

First, we should reject the incautionary principle. If there is a real chance that a particular individual is sentient, then we have to take that possibility into account when thinking about how to treat them. It follows that, unless and until other, better solutions to the sentience problem become available, we should accept either the precautionary principle, the expected value principle, or a combination of the two.

Second, if we accept either the precautionary principle, the expected value principle, or a combination of the two, then we should think of the moral radar as much more inclusive than many moral philosophers, including many sentientists, have in the past. For example, we should attribute at least partial moral status not only to invertebrates such as lobsters but also to fetuses, patients in a persistent vegetative state (PVS), sophisticated robots, and many other such individuals. Of course, it is a further question how we should treat these individuals all things considered. For example, Judith Jarvis Thomson famously argued that, even if we assume that a fetus has moral status, we might still think that abortion is morally permissible in certain cases.[15] Similarly, if it turns out that we should treat quintillions of insects (among many other individuals) as having moral status, we might still think that killing them is morally permissible in certain cases, for reasons including cluelessness, demandingness, and more. Still, if this conclusion holds, then many common human practices, ranging from aquaculture to pest control, will turn out to be at least prima facie morally wrong – and some might even turn out to be morally wrong all things considered.[16]

[1] David Foster Wallace, “Consider the Lobster,” Gourmet Magazine (August 2004).

[2] http://www.gourmet.com/magazine/2000s/2004/08/consider_the_lobster, p. 5.

[3] For discussion of the distinction between access consciousness and phenomenal consciousness, see Ned Block, “On a confusion about the function of consciousness.” Behavioral and Brain Sciences 18 (1995): 227–47.

[4] For discussion of evidence that fish experience pain, see Victoria Braithwaite, Do Fish Feel Pain? (Oxford: Oxford University Press, 2010).

[5] For discussion of the problem of other minds, see Peter Carruthers, “The Problem of Other Minds,” in The Nature of the Mind: An Introduction (Routledge, 2004), pp. 6-35.

[6] Many philosophers are interested in what we morally ought to do in cases of uncertainty about which moral theory is true. But note that our question here is relevantly different. We are asking what we morally ought to do in cases of uncertainty about whether or not a particular individual has moral status given a certain theory of moral status, rather than what we morally ought to do in cases of uncertainty about which theory of moral status is true. This means that certain issues that arise in cases of moral uncertainty, for example how to identify a neutral “covering value” against which to compare alternative theories, will not arise in this case.

[7] http://www.si.edu/encyclopedia_si/nmnh/buginfo/bugnos.htm

[8] Of course, you may or may not be rational in your uncertainty. But since my aim in this paper is to find a principle that we can apply first-personally, I will assume for the sake of simplicity that our epistemic states are rational.

[9] Immanuel Kant, “Duties to Animals and Spirits,” in Lectures on Ethics.

[10] For discussion of how difficult intersubjective comparisons of utility are even on the assumption that the entities in question are sentient, see Lori Gruen, “Experimenting with Animals,” in Ethics and Animals (Cambridge: Cambridge University Press, 2011), pp. 105-129.

[11] For an argument that risk-benefit analysis concerning the distant future might do more harm than good overall, see Dale Jamieson, “Ethics, Public Policy, and Global Warming,” Science, Technology, and Human Values 17:2 (1992), pp. 139-53.

[12] For discussion of the many ways in which snap judgments about sentience can mislead, see Hal Herzog, “The Importance of Being Cute,” in Some We Love, Some We Hate, Some We Eat (Harper Collins, 2010), pp. 37-66. For a critique of anthropomorphism in cognitive ethology and comparative psychology, see Clive Wynne, “What are Animals? Why Anthropomorphism Is Still Not a Scientific Approach to Behavior,” Comparative Cognition and Behavior Reviews 2 (2007), pp. 125-135. For a defense of anthropomorphism in these fields, see John Fisher, “The Myth of Anthropomorphism,” in Colin Allen & D. Jamieson (eds.), Readings in Animal Cognition (Cambridge: MIT Press, 1996), pp. 3-16.

[13] Immanuel Kant, The Metaphysics of Morals, ed. Mary Gregor (Cambridge: Cambridge University Press, 1996), 6:434-435.

[14] The question what happens when we multiply zero by infinity is quite a bit more complex, but we can ignore that complexity for our purposes here.

[15] Judith Jarvis Thomson, “A Defense of Abortion,” Philosophy and Public Affairs 1:1 (1971), pp. 47–66.

[16] Thanks to participants at the 2013 NYU-Columbia Animal Studies Conference and the 2015 NIH WIP series for useful questions and comments on previous drafts. Thanks especially to Dale Jamieson and Joe Millum for detailed feedback and discussion.
