Don’t Know, Don’t Kill: Moral Ignorance, Culpability, and Caution

Abstract

This paper takes on several distinct but related tasks. First, I present and discuss what I will call the “Ignorance Thesis,” which states that whenever an agent acts from ignorance, whether factual or moral, she is culpable for the act only if she is culpable for the ignorance from which she acts. Second, I offer a counterexample to the Ignorance Thesis, an example that applies most directly to the part I call the “Moral Ignorance Thesis.” Third, I argue for a principle—Don’t Know, Don’t Kill—that supports the view that the purported counterexample actually is a counterexample. Finally, I suggest that my arguments in this direction can supply a novel sort of argument against many instances of killing and eating certain sorts of animals.

Notes

  1. Zimmerman (1997); Rosen (2003).

  2. Zimmerman, p. 411.

  3. Rosen, p. 64.

  4. Strawson (1982).

  5. Slote (1982), p. 71.

  6. Moody-Adams (1994).

  7. Rosen, p. 64.

  8. Slote, p. 72.

  9. Rosen, p. 66.

  10. In cases in which the relevant proposition p is true, the person will be acting from ignorance. In cases in which the relevant proposition p is false, the person will not actually be acting from ignorance (from lack of true belief), but their action is explained by the fact that they lack some particular false belief, and so it will sometimes be relevant to discuss these cases, though I will include this under the heading of ‘acting from absence of belief’ rather than under the heading of ‘acting from ignorance.’

  11. In this paper, I will for the most part ignore the possibility of offering this account in terms of degrees of belief. This will simplify the presentation, though there are places in which it would be natural to recast the account in those terms. The basic points stand on either way of presenting them, though there are places in which I will call attention to possible differences that might arise on the degrees of belief account, or places in which it would be difficult to capture exactly what I’m after in those terms.

  12. In general, I will focus on the simple case, on which an individual either does or does not have a true belief about some particular fact F. Obviously, there could be cases in which an individual has contradictory beliefs about the same fact F, such that they believe both that F and that not-F. This could happen from irrationality, from confusion, or in cases in which an individual’s beliefs are sensitive to particular presentations or descriptions of the fact in question (e.g. someone might believe that some particular frog lacks moral status while believing that some particular prince has moral status, without realizing that the frog is the prince). There are cases in this vein which can raise interesting problems, but I will leave them aside for the purposes of simplifying the discussion here.

  13. Roughly, and without wanting to take a stand regarding the metaphysics of beliefs, one has an implicit belief that p if one believes p, but non-occurrently, and if one’s mind does not possess an explicit representation with that content. I might, for example, only implicitly believe things like the following: that there isn’t a koala bear in the tree outside my window, that I couldn’t name all the countries in the world, that I do no one moral injury by pressing the spacebar on my keyboard (of course, I come to explicitly believe all these things after coming up with the examples). Most important, for our purposes, is the fact that people have not brought their implicit beliefs under conscious scrutiny.

  14. In considering whether an individual case serves to impugn the MIT, we are faced with the following situation: if it is relatively easy to be culpably ignorant, then many apparent counterexamples to the MIT will fail; if it is harder to be culpably ignorant (if the obligations governing deliberation, etc., are quite low), then, arguably, there will be many more counterexamples to the MIT. Basically, we’ll end up in one of two dialectical situations when a purported counterexample to the MIT has been presented. In the first situation, we can grant that the person is blamelessly ignorant, and then find them ‘still culpable’ for what they do, acting in ignorance. In this situation, the case stands as a counterexample to the MIT. Alternatively, we can argue that the person wasn’t blamelessly ignorant, that they should have done more (perhaps the investigative burden was higher because of the type of thing they were doing, I’ll suggest), and this is why they are culpable for the act done from ignorance—they are culpable for the ignorance. This option doesn’t serve as a counterexample to the MIT.

  15. Rosen, p. 63.

  16. Rosen, p. 63.

  17. This view is similar to one that is implicit in Smith (1983), in which she sets out three main types of cases in which a person can be culpably ignorant. Id. at 544–545. On her view, one can be culpable for conducting deficient investigation, preventing subsequent discovery, and/or making deficient inferences.

  18. Some may feel that this is to leave aside the truly compelling issue. I disagree, and explain my disagreement in Part Five below.

  19. Rosen, p. 63.

  20. DeRose (1992), p. 914.

  21. DeRose, p. 914.

  22. Moody-Adams, pp. 295–296.

  23. This terminology affords us another way to define moral epistemic contextualism: How much one is morally required to do from an epistemic point of view with regard to investigating some proposition p varies depending on the moral context—on whether p describes a state of affairs that is a blocker for actions that the person is contemplating performing in that particular context. If p describes a state of affairs that is a blocker for actions that the person is contemplating performing, then one is morally required to do more from an epistemic point of view with regard to investigating whether p is true, whether the relevant state of affairs obtains.
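
    Put schematically (the notation here is only an informal gloss, not part of the official statement of the view): let $A_c$ be the set of actions the agent is contemplating performing in context $c$, let $\mathrm{Blocker}(p, a)$ hold when $p$ describes a state of affairs that is a blocker for action $a$, and let $R(p, c)$ be how much the agent is morally required to do, epistemically, toward investigating whether $p$ is true. Then the view says:

    $$\exists a \in A_c \,\mathrm{Blocker}(p, a) \;\Rightarrow\; R(p, c) > R_{\mathrm{default}}(p).$$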

  24. Rosen, p. 64.

  25. Cases in which someone acts objectively permissibly but is reckless in doing so (because they know they are ignorant of what the relevant moral requirements are) are cases in which a person is not actually acting from ignorance at all, though they are acting in ignorance. Their ignorance does not cause or explain their action. I think that including such cases as counterexamples to the MIT is unfair to the proponent of the MIT. Individuals acting in this way do something for which they are culpable, and their culpability stems from their behaving recklessly, but I think the most natural understanding of the source of their moral error is that it comes from their behaving recklessly with regard to their epistemic obligations, from their being culpably ignorant, in precisely the way described earlier when discussing the (BI1) cases. As noted above, what moral epistemic contextualism tells us about (BI1) cases is that it will generally be unreasonable to fail to investigate whether the blockers for the actions that one is performing obtain. This remains the case even if the blockers turn out not to obtain in the particular case.

  26. This isn’t to say that gustatory pleasure won’t ever be relevant, or perhaps even morally relevant, if only minimally. And it is true that the reasoning that applies in this case might well apply in every case, so that Douglas would never be able to eat pigs or maybe any meat at all. There are several things to say here. First, in this particular case, Douglas would enjoy eating other things as well, things the moral status of which he was certain. We might imagine another case in which this weren’t true, though. In such a case, I still think that one does something for which one is blameworthy if killing the organism has higher expected disvalue than not killing it, which will only be true if the moral value of eating the organism is relatively low. Perhaps at some point (and depending on one’s account of what is of moral value), the enjoyment of eating some particular organism would rise to a level of moral significance such that one would not violate any moral principle (such as the one I’ll offer in a moment) in eating an organism the moral status of which one was uncertain. But I don’t think that will be the ordinary case, at least not on a plausible account of what is of moral value.
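
    To make the expected-disvalue comparison concrete, here is a toy calculation (the numbers and symbols are purely illustrative and carry no independent weight): let $q$ be one’s credence that the pig has significant moral status, let $D > 0$ be the moral disvalue of killing a being with that status, and let $g \geq 0$ be whatever moral value attaches to the gustatory pleasure gained by killing and eating it. Killing has higher expected disvalue than not killing just in case

    $$q \cdot D > g.$$

    With, say, $q = 0.2$, $D = 100$, and $g = 1$, the expected disvalue of killing is $20$, far exceeding the value gained; the inequality fails only when $g$ is large relative to $q \cdot D$, that is, only when the moral value of eating the organism is not relatively low.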

  27. Except insofar as we struggle to come up with a compelling account of what talk of degrees of belief amounts to. See, for example, Eriksson and Hájek (2007).

  28. In his recent book, Moral Uncertainty and Its Consequences, Ted Lockhart defends an even more general principle that is stronger than DKDK and, in fact, arguably entails DKDK:

    PR2: In situations of moral uncertainty, I (the decision-maker) should (rationally) choose some action that has the maximum probability of being morally right.

    In the binary comparison between, say, killing the pig and not killing the pig, it seems clear that PR2 would require not-killing the pig. There is some chance that killing the pig will be morally wrong, and little chance that killing the pig will be morally right (since there is nothing of substantial moral significance at stake that would require killing the pig in cases to which DKDK would apply). And this will be true for all such cases to which DKDK would apply. Any case in which there are no substantial moral considerations on the other side will always, under PR2, be a case in which one should not do the thing that risks significant moral harm (killing something that may, for all you know, have significant moral status). See Chapter Two (and particularly pp. 26–28) of Moral Uncertainty and Its Consequences (Oxford University Press, 2000). I think that DKDK is a more perspicuous formulation of what is driving the intuition in these kinds of cases, and it doesn’t yield many of the results which PR2 does, some of which strike me as mistaken. Additionally, Lockhart’s account requires not just specific views regarding the probabilities and moral valuations of various outcomes, but also specific views regarding the likelihood that some particular moral theory is true, another reason not to prefer this account.
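
    Stated schematically (again, the notation is an informal gloss rather than Lockhart’s own formulation), PR2 directs the decision-maker to choose

    $$a^{*} \in \operatorname{arg\,max}_{a \in A} \Pr(a \text{ is morally right}).$$

    In the binary case, if $\Pr(\text{killing the pig is wrong}) = q > 0$ and nothing of substantial moral significance requires killing, so that $\Pr(\text{not killing is wrong}) \approx 0$, then $\Pr(\text{not killing is right}) \approx 1 > 1 - q \geq \Pr(\text{killing is right})$, and PR2 selects not killing.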

  29. I find the requirement of this extra caution plausible, in part because it allows us to keep the epistemic state component subjective—requiring only that an individual reasonably believe that they are morally compelled—without putting quite as much weight on defining when such belief will be reasonable. If the principle were just that one must reasonably believe that one is morally permitted, almost all of the debate would center on when such a belief is reasonable. I think that the end result of this debate is likely to be extensionally similar to the result we get when we focus just on the question of whether one reasonably believes that one is morally compelled, for reasons discussed below. Additionally, the distinction between being morally permitted and being morally required only arises in this way for those moral views that are non-maximizing. Of course, many such views seem plausible, but it is worth noting this fact.

  30. Hooker (2000), p. 32.

  31. Scanlon (1998), p. 153.

  32. This is obviously far from anything that Rawls would endorse, given that he explicitly wasn’t giving an account of interpersonal morality.

  33. I have not defined what a living organism is, nor will I try to do so. On my understanding of the term, a human fetus clearly counts as a living organism. Living organisms need not be particularly complicated: a single bacterium counts as a living organism. And many organisms live inside us at all times. Still, there are natural objections both that a fetus isn’t living—it doesn’t undergo or isn’t capable of the right sort of biological processes; and that a fetus isn’t an organism—it isn’t ‘independent’ in the right sort of way to count as a separate organism. Both of these are misguided. The term ‘fetus’—as opposed to ‘embryo’—is applied at the end of the eighth week, roughly at the point at which the fetus’s heart is beating and brain waves can be detected. It seems plausible to say that any entity that has a beating heart and detectable brain waves should count as a living organism. How to individuate organisms is not entirely clear (do Siamese twins count as one or two organisms?), but given that the heart, limbs, spine, nervous and circulatory systems all begin to form at 4–5 weeks, in the pre-fetal, embryonic stage, it seems hard to argue that the fetus and the person in which the fetus resides are only one organism.

  34. Thomson (1971), No. I; Kamm (1992).

  35. There might be tricky things here regarding the way in which my driving is part of the joint cause of the world’s becoming uninhabitable. I might, for instance, be certain that my driving by itself isn’t going to cause catastrophic consequences for future generations, while also acknowledging that I am engaging in a practice which, if everyone engaged in it, would cause catastrophic consequences for future generations. Still, it seems that if I believe (or don’t fully disbelieve) this latter fact about the practice, this provides me with familiar moral grounds for not engaging in the practice—whether those are Kantian, fair play, rule-consequentialist, no free riding, or some other sort of moral grounds. I think that Don’t Know, Don’t Drive could be reformulated to include these kinds of joint-cause stories as well.

  36. There is a footnote very early in Rosen’s paper (discussing an unrelated point) which suggests that he simply was not considering cases in which one might be blamelessly ignorant of some fact, F, and know that one is ignorant or uncertain of that fact, and yet still act from ignorance of F. Rosen writes, “A better formulation of the relevant principle is rather as follows: When X does A from blameless ignorance, then X is blameless for doing A, provided the act would have been blameless if things had been as the agent blamelessly took them to be.” Rosen, n. 4, p. 63 (italics in original). It is simply unclear what should be filled in as the way an uncertain agent ‘takes things to be’—if the agent is truly uncertain. It is not as if such an agent takes there to be no fact of the matter, or at least this need not generally be the case.

  37. Rosen, p. 80.

  38. Rosen, pp. 74–75.

  39. Zimmerman may have been on to something similar when he asserts that “lack of ignorance” that one ought not to perform the act in question is a “root requirement for responsibility” for performing the act in question. Zimmerman, p. 424. He thinks this, and not anything to do with the ‘avoidability’ of acting in a certain way, is what explains why it is true that “one is culpable for behaving ignorantly only if one is culpable for being ignorant.” Id. at 423. Where I disagree with both Zimmerman and Rosen is in their assessment (implicit or explicit) of when one is reasonable in believing that there is nothing morally objectionable about one’s action.

  40. I would like to thank Tyler Doggett, Liz Harman, Derek Parfit, and an anonymous referee for their insightful and invaluable written comments on drafts of this paper. Thanks also to Simon Rippon and Stephen Schiffer for their helpful comments in response to presentations of an earlier version of this paper, and to audiences at the 2005 Harvard-MIT Graduate Student Conference and the NYU Thesis Preparation Seminar.

References

  • DeRose, K. (1992). Contextualism and knowledge attributions. Philosophy and Phenomenological Research, 52, 913–929.

  • Eriksson, L., & Hájek, A. (2007). What are degrees of belief? Studia Logica (special issue on formal epistemology, B. Fitelson, Ed.).

  • Hooker, B. (2000). Ideal code, real world. Oxford: Oxford University Press.

  • Kamm, F. M. (1992). Creation and abortion: A study in moral and legal philosophy. Oxford: Oxford University Press.

  • Lockhart, T. (2000). Moral uncertainty and its consequences. Oxford: Oxford University Press.

  • Moody-Adams, M. (1994). Culture, responsibility, and affected ignorance. Ethics, 104(2), 291–309.

  • Rosen, G. (2003). Culpability and ignorance. Proceedings of the Aristotelian Society, 103(1), 61–84.

  • Scanlon, T. M. (1998). What we owe to each other. Cambridge, MA: Harvard University Press.

  • Slote, M. (1982). Is virtue possible? Analysis, 42, 70–76.

  • Smith, H. (1983). Culpable ignorance. The Philosophical Review, 92(4), 543–571.

  • Strawson, P. (1982). Freedom and resentment. Reprinted in G. Watson (Ed.), Free will. Oxford: Oxford University Press.

  • Thomson, J. J. (1971). A defense of abortion. Philosophy & Public Affairs, 1, 47–66.

  • Zimmerman, M. (1997). Moral responsibility and ignorance. Ethics, 107(3), 410–426.

Author information

Correspondence to Alexander A. Guerrero.

Cite this article

Guerrero, A.A. Don’t Know, Don’t Kill: Moral Ignorance, Culpability, and Caution. Philos Stud 136, 59–97 (2007). https://doi.org/10.1007/s11098-007-9143-7
