Category Archives: Normative Ethics

Are All Wrongs Equal? Exploring Infinite Sins and Divine Justice

In my Bible study with my church recently, we talked about Acts 4-5, which include the account of Ananias and Sapphira being immediately struck down for their deception in selling land and donating the proceeds. This passage raises some interesting questions, and I am curious about its implications for Christian ethics, if any. In this article, I will explore what these verses might imply about the relative badness of lying and killing, the value of human beings, and the badness of sins against God.

In this article, I defend the view that human beings are infinitely valuable, that some sins are worse than others, that not all sins against God are infinitely bad, that a particularly egregious sin against God may warrant eternal punishment, and that God’s judgment against Ananias and Sapphira is warranted by features specific to the 1st century church. Join me for the ride!

Note: This is a long post, so feel free to skip around to the sections of particular interest using the linked section headers below. Additionally, this post is available as a PDF or Word document.

  1. Some Introductory Questions
  2. Justice and Punishment
  3. The Badness of Killing and the Value of Humans
    1. The Case for the Infinite Value of Humans
    2. The Case for the Finite Value of Humans
    3. Summary
  4. Are All Sins Equally Bad? The Disvalue of Lying vs Killing
    1. The Status Principle, Equal Value, and Infinite Value
      1. Infinite Sin without Equal Sin
      2. Finite Sins Against Infinite Beings
    2. Responses to Some Arguments for the Equality of Sins
      1. “All Sin is Equally Sin”
      2. “All Sin is Equally Wrong”
      3. “All Sin Deserves Hell (Infinite Punishment)”
    3. Scripture on the (In)equality of Sins
      1. Scriptural Support for the Equality of Sins
        1. The Wages of Sin is Death (Romans 6:23)
        2. Sins of Thought vs Sins of the Body (Matthew 5) 
        3. Guilty of One = Guilty of All (James 2:10)
      2. Scriptural Support for the Inequality of Sins
        1. Old Testament
        2. New Testament
  5. Contextual Specificities (Why Ananias and Sapphira are Special)
    1. The Finite Case
    2. The Infinite Case
  6. Conclusion
  7. Endnotes

Some Introductory Questions

A question that is often asked of the Acts 5 account of Ananias and Sapphira (A&S) is: why is lying deserving of death? People wonder what the exact sin was that Ananias and Sapphira committed when they told Peter that the money they gave was the full proceeds from selling the field when, in actuality, it was less than the full proceeds. Ananias and Sapphira were killed immediately upon their proclamation (or implication) that they had donated more than they, in fact, did donate. Were they killed for lying? Acts 5:4 says, “You have not lied just to human beings but to God.” So, it appears as though lying was the sin for which they were killed.

Was there something about the nature of their lie that made it special compared to everyday instances of direct lying in my own life? I think the answer is “clearly not.” We are not told any details that would differentiate this case of lying or deception from any other case, even though this case was considered lying to God (and Acts 5:3 specifically says it was lying to the Holy Spirit). If Acts 5:3 exhibits parallelism of repetition, then the lie to the Holy Spirit just was keeping the money back, not something especially externally differentiable from typical lies. I think one implication of this passage might be that, since their lie was obviously relevantly similar to everyday direct lies, every lie against humans is also a lie against God. This generalizes the concept of sins against God.

However, this doesn’t help us at all with the original question: why is a lie, even a lie against God, deserving of death? In order to answer this question, we need to get into a few things, including 1) the nature of justice and punishment, 2) the badness of killing and the value of humans, 3) the disvalue of lying vs killing, particularly against an infinite being, and 4) contextual specificities.

Ultimately, I think that the judgment against Ananias and Sapphira will rely on certain facts specific to their context, but I think for a fuller picture, we will need to investigate broader facts of Christian ethics, such as the infinite value of humans, inequality of sins, infinite sins, and the first century church context.

Justice and Punishment

I won’t dwell on this for long, but we need to talk briefly about divine justice and punishment. Although the passage does not explicitly say that God struck A&S dead, saying instead that they “fell down and died” at the moment Peter pronounces their wrongdoing, it seems apparent that the passage portrays the deaths of Ananias and Sapphira as a kind of divine punishment, an administration of justice. Justice requires the punishment of wrongdoing, so justice demands that the sin of Ananias and Sapphira be punished.

The requirement of punishment does not demand immediate punishment, but it does require proportional punishment. The punishment given should be deserved by the wrongdoing. So, divine punishment will be proportional to the magnitude of the wrong or evil act. Sometimes, people say that all sin is equally bad (because it is all sin against an infinite God); I will come back to this idea. It is counterintuitive, so it would be a strike against a Christian ethic if it entailed that all sins are equally bad.

So, the punishment for lying must be proportional to the wrong of lying. I think it (reasonably) strikes many of us as disproportionate to kill someone for lying, including a lie of the magnitude of Ananias and Sapphira’s. I will come back to this. The point is that, somehow, we need to square how an apparent punishment of death is proportionate to the wrongdoing of lying. To answer this question, we need to know how bad killing is.

The Badness of Killing and the Value of Humans

Death is bad. I won’t get into a debate on the badness of death itself; what is relevant here is not the mere badness of death, but the badness of killing specifically. If God struck down Ananias and Sapphira, then God intentionally killed Ananias and Sapphira. Perhaps, as I saw some website propose, God merely withdrew his support and protection from them, and since no one can survive without that, they immediately died. On this view, God merely let Ananias and Sapphira die rather than killing them. This is a great example of why the distinction between killing and letting die is, at times, too ridiculous to be taken seriously (at least in terms of value differential). In any case, I think letting someone die when you have the legitimate option to save them is not significantly better than killing them. Plus, to best respond to the objection, we can respond to the hardest case, which is killing, rather than attempting to soften God’s act into something less bad.

Okay, so God killed two people as a punishment for lying. How bad is killing? I would think that the disvalue of killing or letting die is at least as great as the value of the person in question. This is true even if, on a technicality, humans do not cease to exist upon death.[1] Human lives have value simply in virtue of being human lives. To deny this results in extremely problematic implications for the value of those with severe cognitive impairments.[2] So, taking a human life seems to be as bad as the value of a human life.

How valuable is a human life? Plausibly, a human being is infinitely valuable. There is a long tradition of Christians affirming the infinite value of humans. Admittedly, I have rarely seen this explicitly discussed, especially by analytic philosophers, and I have seen almost no explorations of its implications for Christian ethics. If anybody can point me to substantive discussions of this in the Christian tradition (or anywhere), please do!

The Case for the Infinite Value of Humans

By far the most extensive discussion of the infinite value of humans that I have seen is in the paper “How Valuable Could a Person Be?” by Andrew Bailey and Josh Rasmussen.[3] I have found this to be a convincing argument that people are infinitely valuable. The argument relies on two basic and widely accepted ideas, which are the equal and extremely high value of people, and it suggests that the infinite value of people would explain both the equal and extreme value of people. The argument can be structured as below (or as a Bayesian argument[4]):

  1. People are equally valuable.
  2. People are extremely valuable.
  3. The best explanation of (1) and (2) is that people are infinitely valuable.

I take it as a given that all humans are equal. Every person is equally valuable. No human is superior to any other in their moral worth or dignity. The equal value of humans is true regarding either their intrinsic (non-instrumental) value or their total value (intrinsic + instrumental), as the argument works either way. I think it is more plausible on the “intrinsic” value version (usually called “final” value in the ethics literature), which focuses on value for its own sake or as an end in itself. In moral evaluation, every human is intrinsically worthy of the same consideration, even if their instrumental value may differ and will affect the overall moral evaluation. Therefore, I take the first premise to be on extremely solid ground. Denying the equal (intrinsic) value of humans is a pretty insane result with catastrophic consequences.

I also take it as a given that people are extremely valuable. Human life is precious, and it should not be taken lightly or whimsically. It is a very good thing that you exist, and you are more important than every star or donut or animal or sunset that has ever been. Something of immense value has been lost when someone passes away. Some might say something priceless is lost. So, people are extremely valuable.

The strange thing is that this combination of extreme and equal value produces a puzzle. Consider the following analogy (adapted from Bailey and Rasmussen): You go to an art museum of all the best artwork in all the world of all different kinds, styles, and methods. There are watercolor, acrylic, and oil-based paintings. There is realist, surrealist, abstract, Dada, expressionist, and (my favorite) pointillist artwork. There is art from the 1st century and every other century until modern times. There are pieces from van Gogh, da Vinci, Picasso, Rembrandt, Dali, and many more. Clearly, there is a huge variety of artwork, each piece with very different properties, or properties expressed in different ways and to different degrees, and each expressing aesthetic value.

You approach the museum curator and ask, “How much are each of these paintings and beautiful variety of artworks worth? This is an incredible and varied collection.” The curator responds, “Each and every single piece in the museum is worth exactly $37,635,127,099.74.” You would be incredulous! “I’m sorry, what?! Every single piece, from the most realist to the most abstract, from the 1st to 21st century, from da Vinci to van Gogh, is worth exactly and precisely thirty-seven billion, six hundred thirty-five million, one hundred twenty-seven thousand, ninety-nine dollars and seventy-four cents?!”[5] The artistic experts, having examined the wide variety of art and its beauty exemplified in very different ways throughout the museum, and assuming they priced each piece exactly according to its aesthetic value, judged them all to have equal and extreme aesthetic value.

This would be bizarre! An incredible, unbelievable coincidence. It is so unbelievable, I would suggest, that it is literally unbelievable. You should not believe these artworks have identical aesthetic value, particularly when it appears to be so arbitrarily applied to end up with equal worth, down to the level of cents. At the least, this equality would demand a very good explanation, one that is not forthcoming.

Hopefully, the analogy is obvious. Humans come in a wide variety of shapes, sizes, colors, and a wide variety of properties exemplified in different ways to different degrees. The human race is relevantly similar to a museum of the finest artwork, not necessarily with respect to aesthetic value (though I think the human race does include great aesthetic value, and a subset of them may be museum worthy), but with respect to moral value.

We all differ in numerous ways, many of which people take to be morally significant. When we ask what makes humans uniquely valuable, especially in comparison to various other animals, the answer usually is something in the ballpark of: cognitive capabilities, consciousness, intelligence, self-awareness, rationality. Alternatively, one can give explicitly morally significant properties, such as moral deliberation and having moral intuitions or things like that. The trouble is that it seems obvious that each of these properties is a degreed property. One can exemplify self-awareness or rationality to different degrees, and some humans are better at moral deliberation or performing various cognitive tasks compared to other humans. This observation is true even restricting ourselves to those without significant cognitive impairment/disability, as Einstein is obviously much more cognitively capable than me, and some people are more reflective and self-aware than others.

Yet, humans are still equally valuable, in spite of their many differences in these properties. There are two conclusions one can take away from this. The first, which is irrelevant to my argument here, is that all human beings, independent of their cognitive status or location in time, space, or life cycle, have the same (extreme) value in virtue of being a human being.[6] The second takeaway is that humans are infinitely valuable.

The infinite value of humans guarantees the extreme value of humans (since infinity is an extremely high value), and it strongly suggests, or at least can easily make sense of, the equal value of humans. While I later discuss ways to get unequal value even assuming infinite value,[7] there are easy and natural ways to get equal value on infinite value, especially given that equal value is a desideratum of our current reasoning. We can see that using ordinal numbers to represent various value-enhancing properties (see “Infinite Sin without Equal Sin” for discussion of infinite ordinal and cardinal numbers) would not be adequate, as the different degrees of properties in humans would lead to different values, conflicting with premise 1. So, we must use cardinal numbers (or we could all have the smallest infinite ordinal value ω). Therefore, the lowest infinite cardinal number ℵ0 makes sense for the value of humans, which would imply all humans have equal value. The next largest infinite number that we know of[8] is a quantum leap higher that could not be reached by the finite differences in human properties, so we are justified in thinking humans do not have different levels of infinite value.

So, the infinite value of humans (ℵ0) makes perfect sense of the extreme and equal value of humans. The value of humans no longer amounts to the bizarre claim that each human is worth some arbitrarily high finite number in the trillions, or beyond that, which coincidentally is identical to everyone else’s value down to the decimal places. Human life is, in a real sense, priceless. It is limitless. The infinite value of humans answers the demand for explanation of the extreme and equal value of humans.

Figure 1: Definitive proof of the infinite value of human beings. I mean, you see the infinity symbol on the chest, right?

Richard Swinburne (and Josh Rasmussen) have given some arguments that one should prefer infinite or unlimited values over arbitrarily high finite values, where possible. For example, in the history of science, it was assumed that light traveled at infinite velocity before it was experimentally measured to have a finite velocity. Swinburne defends the view that limitless quantities are simpler and thus preferable to finite or limited quantities, all else equal.[9]

In summary, I think there is a strong case for the infinite value of humans, and I think it well explains the equal and extreme value of human beings.

The Case for the Finite Value of Humans

On the other hand, we can build a case for the finite value of humans. Probably the best case for the finite value of humans is just that the infinite value of humans appears to have some absurd implications. These implications are supposed to be sufficiently morally unacceptable to warrant the rejection of infinite value of humans.

For example, Matthew Adelstein argues for some of these purported counterexamples, but his arguments have some problems. His first four counterexamples, in order, appear to:

  1. assume we cannot make infinite comparisons (i.e., comparisons among worlds that contain infinite value; see Infinite Sin without Equal Sin for more discussion), so we cannot conclude that an infinitely valuable human + human pleasure is better than an infinitely valuable human + human suffering,
  2. ignore the instrumental value of humans[10] or the difference between the value of the human person (a person) and the value of the human life[11] (an event),
  3. neglect the difference between the marginal vs intrinsic value of humans (see picture below), so that even if a human itself is of infinite value, it does not follow that every second of a human life is, and
  4. also neglect infinite comparisons (which he does again later when saying that, on the infinite value view, saving two humans is no better than saving one human).

Figure 2: Rasmussen and Bailey distinguish the value of a human being from the marginal value of human life and from the lifetime of events of a human being, and they think only the first of these is plausibly of infinite value.

Matthew’s next argument is that if humans are infinitely valuable, then shortening a human life by 1 second is infinitely disvaluable. If so, then shortening a human life by 1 second is worse than thousands of lethal headaches or a quadrillion animals being tortured. Therefore, humans don’t have infinite value. However, we already saw that we do not have good reason to think that the infinite value of a human being implies the infinite value of every second of a human being’s life, and the authors (and I) already find this implausible. Matthew gives no reason to think this link is plausible. So, I am happy to conclude with Matthew that shortening a human life by 1 second is not infinitely disvaluable, but I just don’t know what that has to do with the infinite value of a human being.

Perhaps the best response to this is to argue that killing is infinitely bad and that killing is relevantly similar to shortening a human life by 1 second. The thought would be that there is only an arbitrary difference between shortening a human life by 1 second and shortening a human life by force by, say, 50 years. I don’t think these are relevantly similar, but it partly depends on whether we think that killing someone is causing them to cease to exist. If killing them does not actually remove this person from the whole of reality, then it would not seem to be infinitely bad. If the whole of reality includes the past, however, it would be impossible to remove this person from the whole of reality. In fact, on eternalism, the person exists tenselessly, including past temporal stages of that person. I think that is not a morally interesting fact, but it is a fun spooky true sentence nonetheless.

The implication of the previous paragraph is just that there are some things that do challenge the idea that killing people is infinitely bad, even if humans are infinitely valuable. After writing this post (since I happened to write this section second to last), I am definitely less confident that killing someone is infinitely bad. Yet I don’t think a finite disvalue for murder is a challenge at all to the infinite value of humans, as killing with an afterlife is merely transporting the person to another dimension (or, given my version of soul sleep, only temporarily causing them to cease to exist), and killing on eternalism would merely be causing their temporal stages to not extend further into the future, not removing their existence. Since the person would not be removed from the totality of reality, it is not obvious there is infinite value removed from reality compared to if the person were not killed.[12]

Killing only deprives a person of a subset of their full life, not of their life in its entirety. So, I think it is plausible that it is only finitely bad to kill an infinitely valuable being. Probably the most popular view, and a reasonable one, is that the badness of killing amounts to the deprivation of the wellbeing that the person would have had if you had not killed them. So, the badness of killing is proportional to the number of life-years you removed. I don’t think I have made up my mind on this yet.

The remaining concerning objection that Matthew raises is that it would seem to produce a paradox: if humans are infinitely valuable, then creating a human creates something of infinite value, and so everybody should have as many babies as possible (or at least we should maximize the number of humans), even if it is at significant finite cost. He says that the infinite value view implies that “torturing 1000000000000000000000000000000000000000000000000 animals in order to produce one extra human would be good overall, producing infinite value at a cost of merely finite value.”

This is a really interesting and powerful objection. A similar objection is raised in a paper by Andrew Lee arguing that life has no intrinsic value, but that the value of a human life lies only in the goods it contains.[13] Lee points out that if a life has intrinsic value (which would be especially true if that value is infinite), then even a short life containing nothing but suffering would be worth living, but surely this is absurd.

I think Lee’s variation of the objection is easier to respond to.[14] It is usually agreed (with Lee) that if my wife and I know we have some genetic defect that would guarantee that any child of ours would live for a short time and experience virtually nothing but pain, we have a moral obligation not to have that child. It would be wrong to create a person whose life would be short, miserable, and consist almost exclusively of suffering (and I don’t mean something like Down syndrome). But it might seem that if I created something of infinite value, then infinite value minus the finite disvalue of suffering is still infinitely good overall (especially if this child would go to heaven and experience another round of infinite good[15]).

What this does not consider is that this child’s death is within the foreseeable consequences that follow directly from your action of having a child with a known genetic defect. Therefore, while creating this child would create infinite value, the act of knowingly letting them die (because you knew about this defect) would be an act of infinite disvalue. These two infinite values would cancel each other out, and the only remaining value is the great disvalue of suffering, which is clearly worse than not having the child.

The response to this will likely be that anytime you create a human being, you know that they will eventually die, and so if my response works, then no act of creating a human being would be intrinsically valuable, as it would always be offset by the disvalue of letting die, and so we are back to the view that the value of a human life is just the goods it contains. However, there seems to be an obvious difference between the “letting die” involved in creating a human with a known genetic defect, where the death follows directly from your act of creating the child in the first place, and the “letting die” where, for all you know, your child will end up dying of old age, of natural causes, of their own negligence or bad health choices, or by having their life taken against their will; in none of these cases do the parents have any responsibility for the death of their child, even though had they not produced the child, the child would not have eventually died. It’s just obvious these situations are wildly different.

Secondly, even if this argument fails, I’m fine with the implication that creating a human being is net neutral in intrinsic value, which is still distinct from the assessment of the intrinsic value of humans. Creating a human being might be net neutral while the intrinsic value of humans is infinite and the act of letting die or killing is infinitely bad.

I think this second approach might be the best response to Matthew’s objection about torturing animals to create more humans. If creating humans is net neutral intrinsically, then obviously it would be unacceptable to torture animals to create more humans. The other thing to consider, for more realistic scenarios that don’t stretch the imagination to include the choice to torture a bajillion animals to produce more humans, is the opportunity cost of creating more humans vs other, more valuable endeavors that may also be infinitely valuable, such as ensuring more people go to heaven (which is a worthy consideration on risk analysis alone, even if you don’t think there is such a thing as an eternal hell but consider it to have a nonzero probability).[16]

Therefore, I think there is plenty of room to affirm the infinite value of humans even if other things, such as creating a human, killing a human, letting a human die, or hurting a human, are all only finitely valuable or disvaluable. My credence in the infinite disvalue of killing has been tempered in writing this post, but I think it remains my default for the time being. It does seem to problematically reduce all of ethics down to questions of minimizing human death (or getting people on the path to heaven), which is counterintuitive, but I think it can be salvaged by appealing to synergy between minimizing human death and broader moral considerations. At the end of the day, I recognize it will require biting some bullets to say killing is infinitely bad, but thankfully the infinite value of humans remains perfectly intact without this assumption about killing.

Summary

There are good objections to the infinite value of humans that I do not know how to fully deal with, the finite disvalue of killing has some intuitive appeal, and I certainly have not systematically worked through how to handle infinite value in a moral theory. Still, it remains extremely plausible to me that killing a human is infinitely disvaluable,[17] and I am hopeful the objections can be dealt with, even though this question has not received sufficient attention. Plus, it is stronger to respond to the hardest case, so I will move forward assuming that killing is infinitely bad.

Are All Sins Equally Bad? The Disvalue of Lying vs Killing

The conclusion of the previous section is that God carried out an action that, in isolation, has infinite disvalue. Now, I will grant that, assuming this punishment is warranted (and thus must be proportionate), this punishment (though not necessarily applied immediately) is the best thing for God to do, i.e., results in the most overall good, given that the wrongdoing happened. We can model the retributive justice here as a composite[18] of 1) lying (wrongdoing), 2) killing (punishment), and 3) the relevant relationship between (1) and (2) such that the punishment was deserved, appropriately given, and proportionate. In this way, two bads make a good (in virtue of (3)), as giving warranted punishment to a wrongdoer is morally better than letting a wrongdoing go unpunished, on the retributive view. I won’t defend but will merely assume the retributive view here.

Therefore, if killing as a punishment is proportionate to the wrongdoing of lying, then it is appropriate for God to kill Ananias and Sapphira. But surely this seems wrong?

One thing I have heard Christians say, whether in connection to this passage or in general, is that all sin is equally bad, especially insofar as all sin is against an infinite God. The principle that the badness of a wrongdoing is related or proportional to the worth/value/status of the one who is wronged is termed the “status principle”. One area of theology in which this claim comes out explicitly is in discussions of the justification of an eternal hell. If all sin is infinitely bad, then all sin is equally bad, and each and every individual sin is individually deserving of eternal hell. I will discuss the status principle and its potential implications for our investigation.

The Status Principle, Equal Value, and Infinite Value

We can construct a straightforward argument for the view that all sin (i.e., moral wrongdoing) is infinitely evil based on the fact it is against an infinite God.[19]

  1. All sin is against God.
  2. God is infinitely worthy of regard.
  3. The gravity of an offense against a being is principally determined by the being’s worth or dignity. (Status Principle)
  4. There is infinite demerit in all sin against God. (from 2 and 3)
  5. Therefore, all sin is infinitely heinous.

I hope that premise two, that God is infinitely worthy of regard or has infinite moral status/worth, can be reasonably accepted by all. Premise 1, that all sin is against God, is more controversial, and Jonathan Kvanvig dedicates 7 pages of his book on hell to exploring this question.[20] I will leave this question for another time, as I think we can grant in this context that the sin in view with questionable disvalue, lying, was explicitly identified as a sin against God in Acts 5:4.

At this juncture, if one already grants the status principle (and thus the entirety of the argument), it may be tempting to conclude that all sin is equally evil, since it is all infinitely evil. This temptation, however, must be resisted, as one can have 1) different infinite sins that are unequal to each other, and 2) finite sins against infinite beings, which undermines premise 4 above.

Infinite Sin without Equal Sin

As it turns out, one can have varying levels of infinitely bad sin, so even if all or a range of sins were infinitely bad, that would not imply that these sins are equally bad. This section will explore why that is the case.

Naively, one can appeal to standard transfinite (cardinal) arithmetic to defend the view that all infinite sins are equally bad. One way to “measure” infinity is to map it to the size of the set of an infinite series of numbers. For example, there are an infinite number of natural numbers, {1,2,3…}. By convention, if you count the number of numbers in this set (i.e. the “cardinality” of the set of natural numbers), you get the number ℵ0 (pronounced “aleph-zero” or “aleph-naught”).

In transfinite arithmetic, adding finite numbers to or subtracting them from an infinite number like ℵ0 does not change its value. Thus, ℵ0 + 5 = ℵ0 – 27 = ℵ0 + 1,356,874 = ℵ0 (see Wolfram Alpha’s computation of this and play with the numbers, if you wish). In fact, even multiplying ℵ0 by nonzero finite numbers does not change its value: ℵ0 x 54 = ℵ0 x 999,999,999 = ℵ0. This is because the set of natural numbers can incorporate any finite (or countably infinite) number of additional members and still be put into a 1-to-1 correspondence with the set of natural numbers (or the set of integers or the set of rational numbers), and thus has identical size (cardinality). Therefore, someone may say, the ways in which sins are worse or better than one another, which must be finite differences, make no difference to their ultimate evil, which remains ℵ0. So, all sins are equally evil.
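To make that 1-to-1 correspondence concrete, here is a minimal sketch (my own illustration, not from any source; the five “extra” elements are hypothetical) of the pairing behind ℵ0 + 5 = ℵ0:

```python
# Sketch of the bijection behind aleph_0 + 5 = aleph_0: the naturals plus five
# extra elements can still be paired one-to-one with the naturals themselves.
extras = ["e0", "e1", "e2", "e3", "e4"]  # five hypothetical extra members

def pair(n):
    # send naturals 0..4 to the extras, then shift every other natural down by 5
    return extras[n] if n < len(extras) else n - len(extras)

# every member of extras-plus-naturals gets hit exactly once, so the two
# collections are the same size despite the five additions
print([pair(n) for n in range(10)])  # ['e0', ..., 'e4', 0, 1, 2, 3, 4]
```

Nothing is left over and nothing is hit twice, which is exactly why finite additions vanish in cardinal arithmetic.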

There are two problems with this reasoning: the first is mathematical and the second is moral. There are actually two mathematical disputes one can have with the aforementioned reasoning. The first is that ℵ0 is not the only infinite number; it is merely the smallest infinite number. There are an infinite number of infinite numbers, each larger than the last. One can obtain ever larger infinite numbers by taking the power set of the previous set, starting with the set of natural numbers (or the integers). The power set of a set S is the set that contains all subsets of S (which would include the empty/null set ∅). For example, consider the set S = {1,2,3}. The subsets would include the empty set, each individual member, each group of two, and the group of three. So, the power set P(S) = {∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}}. As you can see, the power set is much larger than the original set, and this difference only increases as you increase the size of the set. In fact, there is a proof called Cantor’s theorem that the cardinality of the power set is strictly larger than the cardinality of the original set. The relevant implication is that taking the power set of the natural numbers produces a much larger set (with cardinality 2^ℵ0 > ℵ0)[21] than the set of natural numbers itself. Therefore, sins could potentially have very different levels of badness if they corresponded to different transfinite numbers, whether ℵ0, ℵ1, ℵ2, ℵ3 … ℵn. Perhaps, however, if God has a value of ℵ1, all sins would be evil of level ℵ1, but this is questionable.[22]
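For the finite analogue of Cantor’s theorem, here is a quick sketch (my own illustration) showing that the power set of an n-member set has 2^n members, strictly more than n:

```python
from itertools import chain, combinations

def power_set(s):
    # all subsets of s, from the empty set up to s itself
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

S = {1, 2, 3}
P = power_set(S)
print(P)                      # [set(), {1}, {2}, {3}, {1, 2}, ..., {1, 2, 3}]
assert len(P) == 2 ** len(S)  # 8 subsets from a 3-member set, and 8 > 3
```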

Figure 3: A picture I took of Cantor’s paradise of the different sizes of infinities approaching heaven.

The second mathematical dispute is that transfinite ordinal arithmetic, unlike transfinite cardinal arithmetic, does straightforwardly allow for comparisons among sizes of infinity even with finite differences. Ordinal numbers are those numbers like “first”, “second”, “third,” etc., that indicate ordering (larger/smaller or higher/lower than), whereas cardinal numbers are those that measure the size of sets. Usually, the smallest infinite ordinal number is termed ω, and in transfinite ordinal arithmetic, the following comparisons are correct: ω² > ω·2 > ω + 5 > ω (order matters here, since ordinal arithmetic is non-commutative: 2·ω = ω, but ω·2 = ω + ω > ω). So, if the badness of infinite sins can be faithfully represented as transfinite ordinal numbers, and I see every reason to think it can, then not all infinite sins are equally bad. I won’t get into this further, as I think this kind of reasoning leads naturally to the moral objection about not being able to make infinite comparisons.
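As a sanity check on those comparisons, here is a small sketch (my own toy representation, not a standard library) of ordinals below ω^ω in Cantor normal form, stored as coefficient tuples from the highest power of ω down:

```python
def ordinal(*coeffs):
    # strip leading zeros so the tuple length reflects the highest power of omega
    i = 0
    while i < len(coeffs) - 1 and coeffs[i] == 0:
        i += 1
    return tuple(coeffs[i:])

def less_than(a, b):
    # a higher power of omega dominates; equal powers compare lexicographically
    return (len(a), a) < (len(b), b)

w         = ordinal(1, 0)     # omega
w_plus_5  = ordinal(1, 5)     # omega + 5
w_times_2 = ordinal(2, 0)     # omega * 2
w_squared = ordinal(1, 0, 0)  # omega^2

# the chain from the text: omega < omega + 5 < omega * 2 < omega^2
assert less_than(w, w_plus_5) and less_than(w_plus_5, w_times_2)
assert less_than(w_times_2, w_squared)
```

Unlike the cardinal picture, the finite “+ 5” genuinely registers here, which is why ordinal representations let some infinite sins be worse than others.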

Now, let us move to the moral objection. If there are such things as infinitely bad actions, then, plausibly, this could entirely break ethics (at least, assuming certain things that I find obviously true, such as an aggregation principle). The good news is that, unlike Christian ethicists, who have largely ignored this problem (although they, even more than consequentialists, need to reckon with the version of it that arises from the infinite value of the afterlife), consequentialists have worked substantially on the question of comparing worlds that involve infinite value.

While I would love to get into the messy details of how this may work,[23] I really think the only thing you need to see is that it is incredibly obvious that some infinite evil is less bad than other infinite evil. For example, Oliver Crisp provides a comparison of two people, Trevor and Gary, consigned to eternal hell, “Both suffer an infinite punishment in hell. But Trevor is only punished for one hour a day whereas Gary is punished for twelve hours a day. Clearly, in this state of affairs they are both punished infinitely but not equally. Therefore, an [Infinite Punishment] does not entail an [Equal Punishment].”[24] An eternity of daily suffering to degree n+1 is worse than an eternity of daily suffering to degree n. One does not need to merely appeal to the naïve quantitative sum of evil in transfinite cardinal arithmetic in order to make reasonable moral evaluations.

If humans are infinitely valuable, then one must also consider the implications for a status principle when applied to wronging humans. One of those implications is that, obviously, some wrong actions against beings of infinite status are worse than others. For example, slapping my friend is morally better than chopping his finger off, all else equal. If all sins against infinitely valuable beings are equally bad, then sins against other humans are all equally bad, which is clearly incorrect. In fact, this reasoning suggests that not all sins against beings of infinite value are even infinitely evil, as most minor wrongdoings against humans are clearly only finitely disvaluable. We can give a similar parity argument about good actions against this kind of status principle (see the end of the Scriptural Support for the Inequality of Sins section below for more). Further, lying against humans seems to be exactly one such case of a finite sin, even though it is done against a being of infinite value (humans). Therefore, there is some reason to suspect that lying against a different infinite being (God) would similarly be of finite disvalue.[25]

Clearly, even granting that a sin is infinite, we do not get the conclusion that the sin is just as bad as another infinite sin. The relevant takeaway is that lying may be infinitely bad and yet, since it is presumably less bad than killing, killing may still not be a justified punishment due to lack of proportionality.

But can we even get to the idea that all sin against God, including lying, is infinitely bad? Next, we must investigate this aspect of the Status Principle.

Finite Sins Against Infinite Beings

As I stated previously, the debate about infinite sins is usually in the context of debating the justification of an eternal hell. This makes sense, as any defense that a sin is infinitely bad would amount to a justification for eternal hell (assuming retributivism). If killing humans is infinitely bad, then hell is justified for killing humans. If lying by itself is sufficient for eternal hell, then lying is infinitely bad, and vice versa.[26] Yet, to many people, telling one lie does not seem sufficient for eternal damnation. So, if a status principle entails that lying is infinitely bad, then so much for that status principle; it should be rejected.

What exactly is the Status Principle? One rendering is,

Status Principle: Other things being equal, the higher the status of the offended party the worse the act of the offender, and the greater the guilt of the offender.[27]

One kind of example people appeal to is an interhuman example, such as saying that slapping Gandhi is worse than slapping a heinous criminal. An even more obvious case, which is more relevant because it appeals to the very different “statuses” of the parties in question: killing a human is much worse than killing a pig (and that is in virtue of the difference in moral status or worth). I think some version of this principle seems very plausible.

The most plausible version of the Status Principle (SP) is going to be a version that does not settle the question of the badness of the wrongdoing by the status of the offended party alone. So, premise (3), which says the badness of an act is principally determined by the wronged party’s worth/dignity, needs to be replaced with a more plausible status principle. The key contemporary defenders of the status principle do just that, combining the relevance of the status of the offended with some kind of intrinsic magnitude of the wrongdoing, such that the badness of the action is proportional to both of these things.

For example, Crisp words his SP as, “for any person who commits a sin, the guilt accruing for that sin leading to punishment is proportional to both, (a) the severity of the actual or intended harm to the person or object concerned, and (b) the kind of being against whom the wrong is committed.”[28] The specifics of (a) are subject to numerous objections regarding the widespread applicability of either actual or intended harm, but I think they can plausibly be adapted to be widely appealing, including by discussing dishonor rather than just harm, and even considering the possibility of God experiencing infinite dishonor or infinite harm.[29]

The real takeaway from this discussion so far is that the most plausible status principle does not guarantee an entailment that all sins are infinitely bad, but only some sins. This is true because while all sins against God are against an infinite being, all sins are not infinite in the (actual or intended) harm or dishonor they cause God. For example, Francis Howard-Snyder states that “one needn’t argue that all sin is equally deserving of the ultimate punishment in order to argue that all sinners are equally deserving. All one needs to argue is that there is a class of sins (‘mortal sins,’ perhaps) such that we’ve all committed at least one member of this class and that one of them is enough to qualify one for [eternal hell].”[30] I lean toward the view that there is only one such sin that actually deserves eternal hell, which is the intentional rejection of the infinite good of God and a relationship with Him.[31]

So, taming a status principle to allow for both finite and infinite sins against an infinitely worthy being is certainly the more plausible way to go, and I have yet to see any argument for the view that lying would specifically be one of those sins against God that is infinitely bad. As we saw earlier, it makes much more sense of sins against humans to say that lying is generally a finitely bad sin, and I see no reason to think this would change when lying to God (especially if humans are infinitely valuable).

Therefore, I conclude that endorsing a status principle does not give any reason to think that the sin against God of lying is infinitely bad. In that case, I see no reason to think that, for general reasons, killing would be a proportionate response to lying.

Responses to Some Arguments for the Equality of Sins

Sometimes I have heard[32] Christians (including my past self) say phrases like “all sin is equally sin” or “all sin is equally wrong” or that “all sin is equally worthy of damnation.” These may be true on a technicality (and I now probably think the last phrase is just incorrect), but I think they are best avoided, as they are at best misleading and tautological. I suspect that the use of these and related phrases betrays an improper appreciation of the varying magnitudes of sin, but this lack of appreciation is sourced in a good desire and motivation to emphasize just how awful all sin is, and how opposed God is to sin of any kind, which often goes insufficiently appreciated.

“All Sin is Equally Sin”

All sin is sin. This is true because sin is sin, by definition. Trivially, all x’s are x’s, in virtue of being an x. Saying that it is “equally” an x adds nothing to that sentence. So, yes, it is true that each sinful action is equally a sinful action or equally just as much fulfilling the criterion of being a sinful action, in virtue of it being a sinful action. That’s not what anyone cares about or means when they ask about if sins are equal. “All sin is equally sin” is a trivial tautology that tells us nothing about the nature of sin or its badness. Thus, it is probably best to avoid using this phrase.

“All Sin is Equally Wrong”

Since sin is identical to moral wrongdoing, all sins are morally wrong actions. And because all sins fit the criterion of being morally wrong by definition, all sins are “equally” morally wrong, since there is no way for things to be “unequally” morally wrong; that does not even make sense. Contrary to what my ethics professor argues, moral wrongness does not come in degrees, as it is binary. An action is either wrong or it is not. In standard ethics, there is a threefold classification of actions: an action is either obligatory, optional, or impermissible (wrong). Consequentialism,[33] which is the correct moral theory, simplifies things further to only two categories: obligatory and impermissible. If an action is obligatory and you fail to do it, you have done something impermissible (wrong). If an action is permissible, then you do nothing wrong. As long as you are not obligated to do not-x, then doing x is permissible (i.e., not wrong).

What does come in degrees is how evil a morally wrong action is, which is exactly what people are actually getting at when talking about the equality of sins. Some wrong actions are better or worse than other wrong actions. In other words, while all sin is morally wrong, different wrongdoings have different magnitudes. Since the magnitude of the wrongdoing is what people are actually talking about when asking if all sin is equal, to say that all sin is equally morally wrong is, at best, misleading.

The fact that there are different magnitudes of moral wrongdoing is obvious considering what it means to do wrong. What it means to do wrong is to do anything but the best option from your list of available actions.[34] The list of actions available to you is a function of time, and it might be that you find yourself in a (pseudo[35]-)moral dilemma such that all the possible actions are evil, in which case the least evil action is right and all others are wrong. Alternatively, if you have the opportunity to save 1000 people, and you only save 999, you have done something wrong, but you have still done something really good (and clearly something better than killing all 1000 people, which is “just as wrong” but way worse). Moral right and wrong are distinct from moral good and bad, as right actions can still have negative results or be negative intrinsically, and thus be morally evil actions, while wrong actions can be very good and even revolutionize society for the better, and still be wrong because they were not the best you could do.[36]

“All Sin Deserves Hell (Infinite Punishment)”

Sometimes, Christians will emphasize that it only takes a single sin to, in short, send people to hell. They may say that only one sin, any sin, is sufficient for eternal damnation. I think I used to believe this, but I am currently skeptical of it, or at least it requires some finagling to work. I do believe that a single sin is sufficient to make you unable to enter into the fullness of God’s presence, lest you be destroyed, due to God’s maximal perfection and holiness. If it is true that one sin disqualifies you from heaven, and hell just is the total absence of God’s presence and nothing more, then it appears as though one sin, any sin, is sufficient for damnation. However, it is not true that any sin guarantees infinite or eternal damnation, as we saw earlier with the status principle discussion. In this case, it seems annihilation is the best option.[37]

While I think the most defensible view is that only a subset of sins, or perhaps a single sin (the unforgiveable sin), would justify eternal damnation, I want to explore one way in which any sin might be sufficient to justify it. Consider an idealistic case of a non-Christian who has heard and understands the Gospel and has heard reasonable versions of arguments for Christianity. The idea is that each and every sin is going to be implicitly accompanied by an ongoing sin in the background, which is the rejection of a relationship with the one true God and a refusal to repent for that sin. For a Christian, each and every sin has forgiveness associated with it, and thus while damnation may be warranted, it is not imposed. On the other hand, the unrepentant sinner has, in association with each and every sin, a lack of repentance toward the almighty God of the universe and a rejection of the greatest gift of all, a relationship with God. If this lack of repentance and relationship weren’t there (i.e., if there were repentance and relationship), then this sin would be forgiven and damnation would not be imposed (and plausibly not warranted, based on the status principle discussion, i.e., if nothing infinitely bad is done).

However, I think a key part of this idea is that it is a sin itself to have this lingering background rejection of repentance and relationship, a sin that is distinct from any other given sin of lying, adultery, etc. If so, then it is really just this sin of rejecting the Gospel that warrants eternal damnation, not any given sin of lying or whatever. Therefore, it sounds a bit weird to say that all sin deserves eternal damnation, when it is really one particular sin that deserves it.

We can take the case to its limits to test this idea: consider two non-Christians, one of whom has sinned only once in their life (apart from rejecting the Gospel), by telling their parents at 16 that they were going to study for an exam when they were actually going to hang out with friends. The other non-Christian has never sinned in any specific instance other than their rejection of the Gospel.[38] My suggestion here is that if both these non-Christians warrant eternal punishment, the lie has absolutely nothing to do with that. It could only plausibly be the rejection of the Gospel that could bear the weight of justifying eternal damnation. The only reason the first non-Christian warrants infinite punishment is the accompanying sin of rejecting God’s infinitely great gift and God Himself, not the lie. Therefore, I doubt it is the case that any sin warrants infinite punishment.

So, it would be true that any sin warrants infinite punishment only if we assume that every sin entails this other background sin of rejecting the Gospel and God Himself. Even if that is true, I again think it is misleading to say that all sin warrants infinite punishment. Rather, all sin (with its associated background entailments) warrants infinite punishment, and the associated background entailments are really what do the warranting.

One final note is not quite an argument about the equality of sin as much as it is an argument about God’s response to sin. Some Christians will say that God created us, so he can do literally whatever he wants with us, and it would be perfectly justified for God to strike someone dead for basically no reason whatsoever. This represents a “He brought you into this world, He can take you out” mentality. I think this is plainly incorrect, except on a super uninteresting gloss of the word “can.” The best version of this argument, in its strongest form and with additional assumptions, concludes that God does not have moral obligations toward us, not that God would be justified in acting in any possible way toward us (just as parents cannot abuse their children simply because they brought them into this world). This still does not imply that God can do anything to us in any interesting sense. It is still the case that, since God is morally good and perfect and all-loving, God would act in certain ways, respecting us and acting in line with our wellbeing, even if, technically, it is not true that God should act in those ways. It is for this reason that God’s lack of obligations does not defeat the problem of evil, as divine obligations or their lack is completely irrelevant for making evidential assessments of how God would act. The point is: God merely being the creator of Ananias and Sapphira does not imply that God is automatically justified in striking them dead. That’s just not how ethics works, including God’s Own Ethics.

Scripture on the (In)equality of Sins

Some Christians might say that while all sin is not equal in our flawed human eyes, all sin is equal in God’s eyes, and we should trust Him and His omniscient nature over our own. So, in this section, we will look at Scripture to see what the Bible has to say about whether all sins are equal.

Scriptural Support for the Equality of Sins

The Wages of Sin is Death (Romans 6:23)

One may appeal to Romans 6:23 in support of the equality of sins. Romans 6:23 says that “the wages of sin is death.” One way to read this verse in support of the equality of sins is as implying that “the wages of each and every sin is death.” The “death” here is commonly understood as referring to spiritual death, which usually translates to “eternal hell” for evangelicals, something I won’t dispute here.

The first problem with this reading is that it appears to be opposed to 1 John 5:16-17, which distinguishes between a kind of sin that leads to death and a kind of sin that does not. 1 John 5:17 reads, “All wrongdoing is sin, but there is sin that does not lead to death.” The normal interpretation of this verse appears to be that the sin that does not lead to death is a sin from which one repents. This understanding aligns nicely with my earlier proposal that it is unrepentance that leads to spiritual death. Therefore, the kind of sin that will actually result in spiritual death is that accompanied by unrepentance (and thus a rejection of God’s offer of forgiveness). Finally, even if one is able to conclude that Romans 6:23 implies that every sin deserves eternal hell, as we saw earlier, hell has gradings of punishment, and some eternal punishments are worse than others, so not all sins are equal simply in virtue of deserving eternal punishment. Therefore, Romans 6 cannot imply that all sins are equal in God’s eyes.

The best response to 1 John 5 might be to say that Romans 6 is talking about wages, about what one deserves, while 1 John 5 is talking about what actually happens, and one may not get what one deserves just in case one repents; so what sin deserves and what sin leads to may come apart. That is the whole basis of the Gospel, after all: that sinners do not get what they deserve when they repent. However, an obvious reason a sin may not lead to death is that it does not actually deserve death (since God is just and acts in accordance with what is warranted). So, 1 John 5:17 could be understood to teach that not all sins warrant spiritual death (e.g., if the sin is repented of), which is why it will not lead to death. Therefore, 1 John 5:17 gives some reason to read Romans 6:23 as referring to sin generally and not each and every sin individually. Further, I still stand by the claim that levels of eternal hell are sufficient to block this argument.

In addition, I will point out that the verse does not say “the wages of each and every sin is death,” but just that “the wages of sin is death.” It makes sense to understand this comment to be about sin in general, which, in the relevant cases, is always accompanied by unrepentance if it is going to be the kind of sin that deserves damnation. So, my earlier discussion of why I don’t think all sin warrants eternal hell is one reason not to read Romans 6:23 in this “each and every” way.

In conclusion, I do not think Romans 6:23 gives any reason to think all sin is equal in God’s eyes.

Sins of Thought vs Sins of the Body (Matthew 5) 

The next Scripture people may appeal to in support of the idea that all sin is equal in God’s eyes is Jesus’ teaching in the Sermon on the Mount, particularly Matthew 5:21-22 (murder) and 5:27-28 (adultery). The idea here is that Jesus connects sins of the heart (in your thoughts) with sins of the body (in your actions), so that lust is adultery and hate/anger is murder. Therefore, the argument goes, having thoughts of adultery is identical in its badness to committing adultery (with the body).

This is not a good argument, for multiple reasons. First, let us look at what Jesus actually taught. Jesus states the uncontroversial claim that “anyone who murders will be subject to judgment,” followed immediately by, “But I tell you that anyone who is angry with a brother or sister will be subject to judgment.” Let us go ahead and clarify that the kind of anger in question is not the justified kind of righteous anger Jesus espouses and supports elsewhere, but an inappropriate kind of anger. So, Jesus clearly thinks that anger can be sinful. But note what Jesus does not say.

What Jesus says is that murder will be subject to judgment, and that anger will be subject to judgment. He does not say that anger will be subject to the same judgment that murder will be. Jesus’ teaching is perfectly compatible with anger being subject to a less severe judgment than murder, even if anger is subject to a severe judgment that his audience would not have expected.

Similarly, Jesus says that “anyone who says, ‘You fool!’ will be in danger of the fire of hell.” This statement says nothing about the expected amount of punishment in hell being equal to killing someone. In fact, since he does not say “eternal hell,” we cannot without argument conclude that calling someone a fool warrants infinite punishment.[39] The statement suggests that insults warrant punishment, but it doesn’t say anything about deserving the same amount of punishment as killing someone. It is only through reading the text with a preconfigured lens that we would come away with that conclusion.

Figure 4: An authentic 1st century photograph of Jesus arguing with the Jewish religious leaders about lust and adultery.

The same analysis applies to Jesus’ comments about adultery. Jesus quotes the commandment “You shall not commit adultery,” followed by “But I tell you that anyone who looks at a woman lustfully has already committed adultery with her in his heart.” It just does not follow from that statement that committing adultery in your heart is as bad as committing adultery with your body. Jesus designates it as, at minimum, a distinct subcategory, adultery “of the heart,” different from the subcategory “of the body.” Nothing in his statement precludes these two subcategories from having different levels of badness, even if both are, without hesitation, wrong.

In fact, even within one of these subcategories, sins of the body, there are clearly levels of badness. Not all adultery (of the body) is the same. Kissing an unmarried woman once while mildly intoxicated is not as morally bad as a full-on affair for months. The fact that lust is in the category of adultery (of the heart) does not mean lusting for a woman after a bad day once in your life is as bad as cheating on your wife with a different woman every week for 3 decades. Jesus’ innovation to the typical 1st century ethic was that these sins of the mind and heart are sins at all, not that these two subcategories are of equal badness.

That, I think, is the main point: Jesus was correcting a Pharisaic tendency to ignore sins of thought completely. Jesus and Paul both repeatedly taught that it is not what goes into the body that defiles a person, but what comes out of a corrupt heart; sinful actions are the natural outcomes of such a heart (Matthew 15:17-20, Mark 7:20-23). So, Jesus was teaching that there are, in fact, sins that occur solely in the mind. The idea that sins of thought were a legitimate category of sins at all was foreign to the Pharisees, and Jesus was correcting their sole focus on external traditions as the basis for evaluating one’s righteousness.

Here is one way of analyzing this argument that I think makes clear how it goes astray. Sometimes, when making this argument, Christians will say that all sin is the same in God’s eyes, but some sins have worse consequences than others, are more disastrous for interpersonal relations, and so on. In other words, we might distinguish between the “intrinsic” disvalue of some action and its “instrumental” disvalue (i.e., the negative consequences it leads to). So, these people would say that adultery of the heart has the same intrinsic disvalue as adultery of the body, but adultery of the body has more negative consequences; as Paul says, unlike other sins, sexual sins are “sins against their own body” (1 Corinthians 6:18).[40]

Let’s grant that sins of thought have the same intrinsic disvalue as sins of the body (i.e., as externally carrying out one’s thoughts of murder, adultery, etc.). The fact that they differ in their instrumental disvalue just is to say that some sins are worse than others! Differentiating sins (or grading the moral value of any action, good or bad) involves summing both intrinsic and instrumental value. This is explicitly built into consequentialism, but any plausible moral theory includes known consequences to some degree in the moral evaluation of the goodness or badness of an action.
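To make the bookkeeping explicit, here is one way to write this down (the notation is mine, offered just for illustration): the overall disvalue of an action sums its intrinsic and instrumental components,

$$D_{\mathrm{overall}}(a) = D_{\mathrm{intrinsic}}(a) + D_{\mathrm{instrumental}}(a),$$

so two sins with equal intrinsic disvalue but unequal instrumental disvalue automatically differ in overall badness. The two scenarios below instantiate exactly this.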

Consider the following two hypothetical scenarios, and let us judge the moral badness of these two actions.

Option 1: you shoot someone, and they fall down and die.
Option 2: you shoot someone, and they fall back onto a big red button that launches nuclear missiles at every square meter on earth, destroying all human civilization, and you knew this would happen before you shot them.

Which one of these actions is morally worse, or are they morally equal? Obviously, the second action is morally much worse than the first one, despite shooting someone having equal intrinsic disvalue in both scenarios. Therefore, consequences factor into moral evaluation of actions, so saying that sins are “equal in God’s eyes” but differ in consequences doesn’t make any sense.

In summary, we have no reason to think that Jesus’ teachings on sins of thought implied that lust and anger of the heart are just as bad as committing physical adultery or murder, respectively. I believe it is admirable that Christians want to take sins of thought so seriously, and that seriousness is well-deserved. God hates sin, including sins of thought. Sins of thought are also likely to lead to sins of the body. As Paul said, it is essential for us to “take captive every thought to make it obedient to Christ” (2 Corinthians 10:5). Sins of thought derail our relationship with God and need to be repented of. If we ever find ourselves dismissing our sinful thoughts with “well, it’s better than actually doing it,” then we are in need of correction and mortification of the flesh, which includes putting these fleshly thoughts to death. All of this we can affirm while still recognizing that some sins are worse than others; we do not need to throw away these clear moral intuitions in order to align with Scripture.

Guilty of One = Guilty of All (James 2:10)

I saved what is perhaps the most difficult challenge for last. James 2:10 says, “For whoever keeps the whole law and yet stumbles in one point, he has become guilty of all.” At first glance, this might imply that all sins are equal because if you break one commandment, you become guilty of breaking every commandment. At second glance, however, I believe the point is that the law is taken as a whole, not in individual parts; for the purposes of this passage, comparisons between individual sins cannot be made in either direction, whether to establish equality or inequality.

The point in James 2 appears to be the categorical or qualitative change that takes place when one moves from the category of sinless to sinful, from innocent to transgressor. James 2:9 picks out an individual sin, the sin of partiality (as opposed to love), and says that committing this sin means you “are convicted by the law as transgressors.” It is relevant that “the law” is taken as a whole. You are a transgressor of “the law,” not of “the command to love.” So, James 2 isn’t thinking about the badness of individual sins at all.

Similarly, James 2:11 says that, even though the law includes commands against both adultery and murder, “If you do not commit adultery but do murder, you have become a transgressor of the law.” Again, “the law” is holistic and categorical. Committing one sin is sufficient to move you from the category of righteous, pure, and sinless to the category of sinful, unrighteous, and in need of repentance, forgiveness, and mercy in order to ever be in the fullness of God’s presence again.

In other words, in terms of your need for a Savior, committing one sin makes it so (in a sense) you might as well have committed them all, as you won’t make it to heaven unless you repent and trust in Christ as your Savior. The person who has committed 147,000 sins is equally in need of a Savior as the person who has committed 4 sins, which is to say that every sinner is fully, 100% in need of a Savior. In recognizing this point, we see that every sinner is on equal ground before the Savior: we are all completely and categorically at God’s mercy for salvation. Thankfully, “mercy triumphs over judgment” (James 2:13). So, we can affirm wholeheartedly the point of the verse, that any sin makes one 100% in need of a Savior, while making room for differences in the badness of individual sins.

I think Barnes’ Commentary on the meaning of “he is guilty of all” in James 2:10 is worth quoting at length here, as it explicitly defends differences among sins in the face of this verse:

He is guilty of violating the law as a whole, or of violating the law of God as such; he has rendered it impossible that he should be justified and saved by the law. This does not affirm that he is as guilty as if he had violated every law of God; or that all sinners are of equal grade because all have violated some one or more of the laws of God; but the meaning is, that he is guilty of violating the law of God as such; he shows that he has not the true spirit of obedience; he has exposed himself to the penalty of the law, and made it impossible now to be saved by it. His acts of obedience in other respects, no matter how many, will not screen him from the charge of being a violator of the law, or from its penalty.

I conclude that James 2:10 is about the equality of each sinner in need of a Savior, having crossed from the category of sinless to sinner in virtue of only one sin, and that different magnitudes of sin are compatible with the teaching in James 2.

Scriptural Support for the Inequality of Sins

Unlike the Scriptural support for the equality of sins, which rested on just three passages, there are many verses or groups of verses that support different magnitudes of sin. Let’s start in the Old Testament.

Old Testament

First, we can see different magnitudes of sin reflected in the punishments in Old Testament law. Since punishment should be proportional to the magnitude of the moral wrongdoing for justice to be met, and since God instituted the punishments in the OT in line with his justice, different punishments in OT law reflect differences in the moral magnitude of sins. This is true even assuming we can differentiate a tripartite law of ceremonial, moral, and civil law.

For example, Numbers 15 describes offerings for unintentional sins, such as a goat for a sin offering, contrasting that with someone who sins “defiantly,” whose punishment involves being cut off from the people of Israel. Both of these differ from various other sins, such as violating the Sabbath, which incur the death penalty.

In a different way, Proverbs 6:16-19 singles out “six things the LORD hates; seven that are detestable to him,” suggesting these might be especially offensive to God compared to other sins. Lamentations 4:6 says, “For the wrongdoing of the daughter of my people is greater than the sin of Sodom.”[41] This sounds like a very straightforward proclamation that some wrongdoing has a higher magnitude than others.

The Old Testament affirms the concept of ultimate justice in Psalm 62:12 and Proverbs 24:12 (and probably many other verses that I have not yet collected). Proverbs 24:12 rhetorically asks, “Will he not repay everyone according to what they have done?” and Psalm 62:12 says of God, “You reward everyone according to what they have done.” This concept of justice suggests that punishment will be based on one’s works. I think it is assumed here that these works are differentiated in magnitude (just as OT law punishments are differentiated in magnitude), but see the New Testament section for more discussion.

New Testament

The New Testament even more clearly indicates a difference in the magnitude of wrongdoing of different sins. When Jesus sent out his disciples, he said that for towns that reject them, “it will be more bearable for Sodom and Gomorrah on the day of judgment than for that town” (Matthew 10:15). In the next chapter, this is applied to the cities of Chorazin and Bethsaida, stating that “it will be more bearable for Tyre and Sidon on the day of judgment than for you” (Matthew 11:22), and then to Capernaum: “it will be more bearable for Sodom on the day of judgment than for you” (Matthew 11:24). If some cities will be better off on judgment day, then their punishment will be less, and thus their wrongdoing was lesser in magnitude.

Jesus, in a parable where God is the master and we are the servants, teaches that “The servant who knows the master’s will and does not get ready or does not do what the master wants will be beaten with many blows. But the one who does not know and does things deserving punishment will be beaten with few blows” (Luke 12:47-48). This suggests that intentional sin is worse than unintentional sin, given the difference in proportional punishment, in agreement with the OT teaching above.

Jesus told Pilate that “the one who handed me over to you is guilty of a greater sin” (John 19:11), which is about as explicit a statement that some sins are worse than others as you can possibly get. If that doesn’t imply that not all sins are equal, I don’t know what does. James 3:1 comments that “Not many of you should become teachers, my fellow believers, because you know that we who teach will be judged more strictly,” implying that a teacher doing a wrong action is worse than a non-teacher doing the same wrong action, presumably because, by doing so, he would lead more than just himself astray.

Plausibly, Jesus singling out a specific sin, that of “whoever causes one of these little ones who believe in me to sin,” and saying that “it would be better for him to have a great millstone fastened around his neck and to be drowned in the depth of the sea” (Matthew 18:6), suggests that this sin is worthy of a uniquely bad punishment, worse than that of other sins. If the claim were true of every sin, then Jesus would merely be repeating himself (and that seems independently implausible). Presumably, Jesus is making a point that is only true of a subset of sins: ones for which it would be better to be drowned than to commit them.

Like the Old Testament, the New Testament affirms ultimate justice, which presumes that sin is differentiated. Many verses claim that God “will judge/repay/reward each person according to their deeds/works.” This statement is found in Matthew 16:27, Romans 2:6, Revelation 2:23, and Revelation 22:12, and comparable claims are found in 1 Corinthians 3:13 and probably others. Assuming authorial intent guides our interpretation, we must consider how the audience would have understood these passages. Since the audience was ordinary people without bizarre moral intuitions, they likely would have understood and believed that repayment according to one’s works implies different magnitudes of punishment. I think this is decent reason for us to accept that there are different magnitudes of punishment, and thus different magnitudes of wrongdoing. If repayment according to one’s works actually meant everyone got the same punishment, a standard amount consistent across all people (which would, in essence, be independent of the vast majority of one’s works), then I would instead expect a reference to “the ultimate punishment” or something like it in place of every instance of “repayment according to their works.”

Finally, we see different magnitudes of rewards in the NT, so, all else equal, I would expect to see different magnitudes of punishments. 2 Corinthians 5:10 says, “For we must all appear before the judgment seat of Christ, so that each one may receive what is due for what he has done in the body, whether good or evil.” This verse sets up a parity between good and evil in receiving one’s due at the judgment. Other verses speak of particular rewards, such as a crown of glory for shepherding God’s flock (1 Peter 5:2-4) or a crown of life for persevering under trials or martyrdom (James 1:12, Revelation 2:10), and these suggest that not all good actions will get the same reward. Similarly, Jesus in Mark 10:29-30 teaches that those who have “left house or brothers or sisters or mother or father or children or lands, for my sake and for the gospel” will receive rewards a hundredfold, both now in this life “and in the age to come.” Since I would expect the repayment for good and bad to be on a par, as suggested by 2 Corinthians 5:10, the presence of differentiated magnitudes of reward implies the presence of differentiated magnitudes of wrongdoing.

In fact, this parity between good and evil actions suggests a parity argument, based on the status principle, against the equal magnitude of sins. The status principle analysis above focuses only on wrongdoing, but what about the equivalent principle for good actions? If every act of disobedience against an infinite being is infinitely bad, then I would think that every act of obedience is infinitely good. However, clearly, not every act of obedience is infinitely good (and therefore deserving of infinite reward, aka heaven). The New Testament seems to designate particular good actions as deserving certain rewards that not all will get, and further, these seem like finite rewards. Both of these facts are at least prima facie incompatible with a seemingly equally justified status principle about the goodness of acts of obedience or other acts of goodness toward an infinite being (e.g., hugging another human). So, this parity gives us reason to doubt the implication that all wrongs against an infinite being are infinitely or equally bad.
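For concreteness, here is a hedged formalization of that parity argument (my rendering, not a quotation from any of the authors cited): let $\mathrm{Wrong}(a)$ and $\mathrm{Good}(a)$ say that act $a$ wrongs or benefits an infinite being, and let the infinity subscript mark infinite (dis)value.

$$
\begin{aligned}
&1.\ \forall a\,[\mathrm{Wrong}(a) \to \mathrm{Bad}_\infty(a)] \;\to\; \forall a\,[\mathrm{Good}(a) \to \mathrm{Good}_\infty(a)] && \text{(parity)}\\
&2.\ \neg\,\forall a\,[\mathrm{Good}(a) \to \mathrm{Good}_\infty(a)] && \text{(finite, differentiated rewards)}\\
&3.\ \therefore\ \neg\,\forall a\,[\mathrm{Wrong}(a) \to \mathrm{Bad}_\infty(a)] && \text{(modus tollens, 1, 2)}
\end{aligned}
$$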

Contextual Specificities (Why Ananias and Sapphira are Special)

In the previous section, we saw that, on general grounds, we have no reason to think that killing is a proportionate punishment for lying. For the average non-evangelical, I suppose, we could have taken this as our starting point, but I wanted to explore and rule out some claims I have somewhat commonly, though certainly not universally, heard from evangelicals about all sins being equal.

If killing is not a proportionate punishment for lying, then something else must be going on; there must be particular circumstances that make Ananias and Sapphira special, circumstances that would make God justified in killing A&S for lying, if God would be justified at all.

There is some reason to suspect special circumstances from the outset, as lying is not normally met with immediate death in Scripture. For example, Abraham lied to Pharaoh that his wife was his sister, Joseph’s brothers deceived Jacob about Joseph’s death, Laban tricked Jacob about giving his daughter in marriage, and the serpent lied to Eve. None of these people were struck dead for their sin of deception. Although we know generally that God does not always mete out punishment immediately when it is deserved, as he is merciful and relents many times, we still have some limited reason to think there may be something special happening here, unique to the context of the 1st century church, since it is so rarely the case that we (or others in Scripture) die immediately upon telling a lie.

The Finite Case

One proposal that I find reasonable is that, in the beginning stages of the early church, the church needed to be very carefully and exquisitely pruned to set up the rest of the church’s future for success (this idea was mentioned by one of my Bible study members and also discussed here). It would be essential to protect the church from sins that could destroy its unity or undermine its trust, both of which would be threatened if its members were regularly engaging in any kind of deception toward one another. The New Testament emphasizes the unity of the church so heavily that any challenge to it needed to be treated with much seriousness. The live possibility of deception by any given church member would undermine this trust and unity.

Therefore, God striking down Ananias and Sapphira can serve to emphasize this importance of trust and to rule out deception among the church body. This could help solidify opposition to sin and God’s hatred of wrongdoing in the hearts of the early church. Acts even records the success of this effect, saying that “great fear seized the whole church and all who heard about these events” (Acts 5:5,11). The effect would be a reduction in sin, as living in the fear of God tends to produce.


God was beginning a new stage of His relationship with His people, so it was important to swiftly enact justice to set a precedent and example while the early church was in its infant stages. This event seems similar to the stoning of Achan, who held back for himself some of the treasure from the conquest just as Israel was finally able to enter the Promised Land, entering a new stage of the divine-Israel relationship. Further, the recording of this event in Scripture guarantees that the example would be set in stone for the ages, so all future generations could learn from it and appreciate the same importance of honesty, trust, and hatred of sin. Thus, this one event likely prevented many future sins from occurring and was probably justified if killing a person is only finitely disvaluable.

The Infinite Case

But even if this makes sense, we haven’t obviously made sense of the killing of Ananias and Sapphira if killing is infinitely disvaluable. The infinite value of humans is likely to swamp our moral calculus. This means that if someone is killed, the only way to justify this action is by saving human lives or changing someone’s trajectory from hell to heaven, not just by preventing some finite sum of finite sins.

I propose that the event of Ananias and Sapphira, as well as its recording in Scripture, has cumulatively resulted, or eventually will, in saving at least two human lives, whether in the physical or spiritual sense. One such mechanism: a greater appreciation of the fear of God and hatred of sin decreases the chance that one develops the kind of character (the cumulative effect of one’s right and wrongdoing) that would result in someone being killed or carelessly left to die. This only needs to happen for one or two people across the past 2,000 years, or however many years until Jesus returns, in order for this passage to make sense morally. That doesn’t seem unreasonable to me.
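To see how weak the empirical claim needs to be, here is a back-of-the-envelope sketch; every number in it is a hypothetical placeholder chosen only to illustrate the arithmetic, not an estimate I am defending.

```python
# Back-of-the-envelope expected-value sketch. All numbers are
# hypothetical placeholders, not estimates defended in this post.

readers_per_year = 1_000_000  # hypothetical yearly exposure to Acts 5
years = 2_000                 # roughly, from the event until now
p_save = 1e-8                 # hypothetical chance that one exposure
                              # averts a death (physical or spiritual)

expected_saves = readers_per_year * years * p_save
print(expected_saves)  # 20.0 -- comfortably above the 2 required
```

Even a one-in-a-hundred-million chance per exposure yields an expected number of averted deaths well above two.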

Alternatively, the character improvement from having the example of Ananias and Sapphira in history and Scripture is likely to make us less sinful and better influences on others, and to make Christianity more attractive to others, which makes it more likely for people to get saved. This last step is essentially an application of the meager moral fruits argument (which doesn’t require the argument to be sound, only convincing to non-Christians). Since many people are turned off from Christianity by hypocrisy or other moral failings, and A&S leads to less hypocrisy and fewer other moral failings, fewer people will be turned off from Christianity. Therefore, the event of A&S will likely lead to more people being in heaven than there would otherwise be. Thus, A&S is justified.

One counterargument to this would be to appeal to people who have left Christianity as a result of perceiving A&S as morally problematic, whether or not they were correct.[42] This is a good point (at least insofar as A&S may, in practice, be an incremental nudge in that direction, even if my argument succeeds). I think my only available independent response is that the attraction to Christianity produced by the improved character from A&S outweighs the detraction from it. Honestly, this seems like a stretch. People explicitly raise the issue of A&S in deconversion narratives, but no one explicitly talks about how A&S motivated them to be a much better person or how it drew them to Christianity (hence why it is a frequent objection, not a selling point). Therefore, I’m probably better off talking about divine sovereignty, which I will bring up in the next paragraph.

Finally, one can appeal to God’s sovereignty over the events of history: God would ensure that the event and recording of A&S bring more people closer to him in relationship and moral character than they push away from him. If God is in charge of history, then the storyline is at his disposal, and it is His choice whether more people are drawn to Him because of A&S or away from Him. As a Calvinist and thus a theological determinist, I am happy to say that I think God ultimately determines whatever happens according to his (ultimate) will (see this book or this one for a nice discussion of various objections to this idea). Therefore, I think that God did, in fact, set up the states of affairs of the world so that at least two (net) people were prevented from being killed or damned as a result of the event and recording of Ananias and Sapphira.

Alternatively, this claim about divine sovereignty can be cast in Molinist terms: God chose to create a world where God knew that people would freely respond in such a way that A&S would result in the relevant salvations, or whatever is required to ensure that the actions involved in Ananias and Sapphira were justified. Just as William Lane Craig says that God chose to create the world that maximizes the number of saved people, we can add a further condition about how people freely respond to A&S.

Critics will respond that since I have added yet another highly specific claim to my view, my view now has a lower prior probability. I have not yet figured out what makes for a good theory of prior probability; it sounds like people just make up whatever they want when they talk about this, and there are many competing proposals for what makes a good prior, so I am skeptical that this point does much of anything. In order to rule out my view, critics also have to make very specific claims about what God could or would not do, just as I do. Since the argument against Ananias and Sapphira is being leveraged by the critic as an argument against (a specific version of) Christianity, I think that at this point the critic is unable to finish the task he set out to accomplish: there appears to be a perfectly viable view that dismantles the objection, and the critic does not appear to be in any better position to defend with great confidence the view that A&S is not justified.

The response in this section appealed to contextual specificities of Ananias and Sapphira to provide an undermining defeater for the objection to God’s justice in light of Acts 5. It appears as though we have no good reason to think that God is unjustified in striking Ananias and Sapphira dead. Therefore, it is perfectly reasonable, consistent, and plausible for the Christian to retain their belief that the events surrounding Ananias and Sapphira are justified and that they constitute no concerning objection to any version of Christianity.

Conclusion

In this way-too-long blog post, we covered a lot of territory. First, I made the case for the infinite value of human beings (and the infinite disvalue of killing) and responded to objections to this view. I recognize now more than ever that there are serious challenges to this view that I still need to work through.

Second, I talked about the status principle and whether all sins against God are infinitely bad and whether all sins are equally bad. I argued that not all sins against God are infinitely bad, though at least one plausibly is, and that not all sins are equally bad. I responded to objections to these views that come in the form of common sayings or Scripture, concluding that Scripture itself supports the inequality of sins. I touched on the justification for eternal hell in light of these conclusions and gestured at a plausibly infinitely bad sin that might justify hell.

Finally, I focused back on the specific case of Ananias and Sapphira. My takeaway from Acts 5 is that, since there wasn’t anything special about the lying aspect of their sin, all lies against humans are also lies against God, which is some evidence that all sins are sins against God. I strongly suspect that the justification behind the deaths of Ananias and Sapphira lies in context specific to the 1st century church, so that Ananias and Sapphira serve as examples of God’s hatred of sin and an imperative to live a pure life in all things and at all times. I conjecture that God’s sovereignty would ensure that His actions with respect to the couple guarantee the greatest good overall by influencing people toward Himself in relationship or moral character, so that the deaths of Ananias and Sapphira are justified after all.

This post allowed me to explore in much further depth and rigor ideas that have been floating in my mind for a while, even though I probably could have skipped everything but the final section if I only cared about Ananias and Sapphira, as it was obvious from the outset that it would have to be specifics of their situation, rather than any general judgment for lying, that justify their deaths. But hopefully it added a bit more rigor than the discussions I have seen so far. It increased my confidence in my views about differentiating sins, clarified my views about finite sins against God, and made me question further my views about the infinite badness of killing.

Hopefully, this post added to your exploration of the ideas about infinite value of humans, of infinite sins, sins against God, equal sins, and perhaps Ananias and Sapphira specifically, and helped you make some sense of the ideas at play here.


Endnotes

[1] This is assuming humans immediately upon death go to some temporary version of heaven/hell in some state of existence, even though their bodies cease to exist, which I take to be the usual view of Christians who hold substance dualism. On another technicality, on my specific view of soul sleep motivated by emergent substance dualism, humans do cease to exist upon death, but they are later resurrected/resuscitated for the final judgment. The usual objection people raise here is to claim there is a Bible verse in which Paul says “to be absent from the body IS to be present with the Lord.” You will find, however, if you read your Bible, that this verse simply does not exist. The verse in question, 2 Corinthians 5:8, says Paul is “willing rather to be absent from the body AND to be present with the Lord.” The verse that actually is in Scripture does not come anywhere close to implying the former view. Anyway, I don’t think it makes a difference to the argument whether one ceases to exist upon death or dies without ceasing to exist.

[2] See Miller, Calum. “Human equality arguments against abortion.” Journal of Medical Ethics 49.8 (2023): 569-572.

[3] Bailey, Andrew M., and Joshua Rasmussen. “How Valuable Could a Person Be?” Philosophy and Phenomenological Research 103.2 (2021): 264-277.

[4] Bailey and Rasmussen also provide a Bayesian (probabilistic) argument that runs as follows:
1. Equal Human Value
2. P(Equal Human Value | Infinite Value Hypothesis) [= 1] >> P(Equal Human Value | Finite Value Hypothesis) [= low]
3. If so, then Equal Human Value is strong evidence for Infinite Value Hypothesis
4. Therefore, Equal Human Value is strong evidence for Infinite Value Hypothesis
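In odds form (my rendering, not Bailey and Rasmussen’s own notation), with $E$ = Equal Human Value, $H_\infty$ = the Infinite Value Hypothesis, and $H_{\mathrm{fin}}$ = the Finite Value Hypothesis:

$$\frac{P(H_\infty \mid E)}{P(H_{\mathrm{fin}} \mid E)} = \frac{P(E \mid H_\infty)}{P(E \mid H_{\mathrm{fin}})} \cdot \frac{P(H_\infty)}{P(H_{\mathrm{fin}})}$$

Since the likelihood ratio is very large (roughly 1 divided by a low probability), observing Equal Human Value shifts the odds strongly toward the Infinite Value Hypothesis.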

[5] My brother raised an objection that this intuition pump is misleading because I have chosen an arbitrarily high number whose digits are nearly all non-zero. If I had chosen a round number, then it would be less counterintuitive. I don’t believe this is correct. The surprising thing is that we all have the same exact value, down to the ones digit or the decimal places, not that the value fails to be a nice pretty number with a lot of zeros. It would be similarly surprising if everyone was worth exactly and precisely $40,000,000,000,000.00. The first assumption people would make is that this “worth” represents some arbitrary, convenient convention of a nice pretty number, not the factually accurate value of the worth of human beings. Now, if you’re saying that everyone’s value rounds to 40 trillion but is not exactly 40 trillion, that would be an entirely different story. But that is exactly my point! We don’t all round to the same intrinsic value, but we all possess identical intrinsic value. And that is surprising.

[6] Or, being the kind of thing that exemplifies such and such properties. I will definitely be coming back to this question in future posts. Also see Calum Miller’s paper from footnote 2 again. It’s possible that there is an interference effect between my two conclusions; if you are convinced that the value of humans comes from a binary membership in human kind, then maybe that same kind membership could explain an arbitrarily high finite value of all humans? But it seems reasonable to ask why it is that the value from that kind membership would have that arbitrarily high finite value and not some slightly higher or lower value. Rasmussen still gives his reasons to prefer infinite value, but this consideration might collapse the worry about equal value into just the worry about having an arbitrarily high finite value. I’m not settled on whether this collapse occurs quite yet.
Another objection might be that some non-human animals might also have extreme and equal value, so then these animals would also have infinite value. I think it is obvious that humans have orders of magnitude higher value than the highest value animals, so the extreme value premise is much weaker. Further, I think it is much less obvious that animals have equal value compared to the equal value of humans. If value is based on kind membership as in the case of humans, then they would have equal value. But in any case, I think both premises are noticeably weaker for non-human animals and thus unlikely to be successful.  

[7] Note however that the argument discussed later, which is that infinite value implies equal value, is the inverse of the argument discussed here, which is that equal value + extreme value implies infinite value.

[8] It is an open question, called the continuum hypothesis, whether there is an infinite cardinal strictly between the cardinality of the natural numbers (ℵ0) and the cardinality of the real numbers (2^ℵ0).

[9] See Swinburne’s Is There a God? or The Existence of God. See also thorough discussion and defense in Miller, Calum. “Is theism a simple hypothesis? The simplicity of omni-properties.” Religious Studies 52.1 (2016): 45-61. But see a response to Swinburne in Gwiazda, Jeremy. “Richard Swinburne’s argument to the simplicity of God via the infinite.” Religious Studies 45.4 (2009): 487-493. I personally am opposed to simplicity-based arguments since I think it is a pragmatic and not epistemic virtue. But if you ignore the simplicity language, I do think infinity is a better explanation of arbitrarily high values, all else equal, particularly in the case where you have equal value. In fact, I think the equal value is what really settles it in this case and makes infinity a much better explanation.

[10] It does seem as though the proposed counterexample would apply to the second version of Rasmussen and Bailey’s argument, which is that human beings are “overall” (intrinsic + instrumental value) equally and infinitely valuable. I don’t buy their second argument, and they admit that it is a logically much stronger claim to say human beings are equally overall valuable rather than just equally intrinsically (finally) valuable, so the infinite value of humans can still proceed on their first argument, as I think it does. Further, they still clarify that they mean the overall value of the human person is equal and infinite; they don’t mean a human life, which would include the value of all that person’s actions, is equally valuable to all others. They say, in agreement with Matthew, that “some human lives – particularly those that produce great evil (i.e., that have vast quantities of instrumental disvalue) – may be overall disvaluable on that account, even if they contain vast quantities of final value too” (p. 10).

[11] I probably have not been careful to distinguish a human being from a human life in most of this post. Outside of this section, human life is meant to be a colloquial expression for a human being.

[12] When you kill someone, you are not removing them from the whole of reality, but you are depriving them of many valuable years of life. Further, it would be strange if the badness of killing a healthy 4-year-old were identical to the badness of killing a 100-year-old who was already in the middle of breathing his last breath.

[13] Lee, Andrew Y. “The neutrality of life.” Australasian Journal of Philosophy 101.3 (2023): 685-703.

[14] Okay I admit I’m not going to engage directly with the paper and I still need to read it. Forgive me.

[15] This raises a whole new host of questions, some of which have been explored, such as in Kershnar, Stephen. “Hell, Threshold Deontology, and Abortion.” Philosophia Christi 12.1 (2010): 80-101. or Kershnar, Stephen. “The Strange Implications for Bioethics of Taking Christianity Seriously.” Sophia 63.1 (2024): 13-33.

[16] Sampson, Eric. “Effective Altruism, Disaster Prevention, and the Possibility of Hell: A Dilemma for Secular Longtermists.” Oxford Studies in Philosophy of Religion. (pdf)

[17] And so would letting a human die that you can save be infinitely disvaluable.

[18] More specifically, a Moorean organic unity, such as discussed in Dancy, Jonathan. “Moore’s account of vindictive punishment: A test case for theories of organic unities.” Themes from GE Moore: New essays in epistemology and ethics (2007): 325-42.

[19] This reconstruction comes from William Wainwright, “Original Sin,” Philosophy and the Christian Faith, Thomas V. Morris, ed. (Notre Dame, 1988), p. 33. Wainwright is himself formalizing an argument given by Jonathan Edwards in his Original Sin, ed. Clyde Holbrook (New Haven, 1970), p. 130.

[20] Kvanvig, Jonathan L. The Problem of Hell. Oxford University Press, USA, 1993, pp. 33-40.

[21] It is currently an open question, labeled the “continuum hypothesis,” whether there are any infinite numbers in between ℵ0 (the cardinality of the set of natural numbers) and 2^ℵ0 (the cardinality of the real numbers, which is the same as the cardinality of the power set of the natural numbers).

[22] Since there is no highest infinite number, I think many Christians would take this as evidence that we cannot or should not even attempt to represent God’s value mathematically in any way. I won’t get into this dispute here, but I think ordinal ranking is the next logical step of reasoning when thinking about the badness of sins anyway, which I will turn to now. 

[23] The starting point of this work is Vallentyne, Peter, and Shelly Kagan. “Infinite value and finitely additive value theory.” The Journal of Philosophy 94.1 (1997): 5-26.

[24] Crisp, Oliver D. “Divine Retribution: A Defence.” Sophia 42 (2003): 35-52, p. 38.

[25] Obviously, theists would say that God is infinite in a very different way than humans are. But in terms of infinite moral worth or consideration or value as used in a Status Principle, I do not see any principled way to make this a relevant difference between the two. Even if we assign God a higher infinite value, I don’t think this makes a qualitative difference in trying to make inferences about sins against infinitely worthy beings.

[26] I’m leaving out some assumptions about God having the relevant authority, victimhood, etc.

[27] Kabay, Paul. “Is the status principle beyond salvation? Toward redeeming an unpopular theory of hell.” Sophia 44 (2005): 91-103, p. 91. The more fully worked out version given by Kabay is, “If S wrongs P and Q by doing some act and P has a higher status than Q, then S’s wrongdoing against P is qualitatively more serious than S’s wrongdoing against Q. As such S accrues greater guilt in wronging P than in wronging Q” (pp. 91-92).

[28] Crisp, “Divine Retribution: A Defence,” p. 39. 

[29] See discussion in Rogers, Andrew, and Nathan Conroy. “A New Defense of the Strong View of Hell.” The Concept of Hell. London: Palgrave Macmillan UK, 2015. 49-65, pp. 52-58. They only talk about the possibility of infinite divine harm in terms of infinite pain, but I think this can easily be generalized to other models of harm, including one based on desire satisfactionism, as clearly sin frustrates divine desires (the divine will), which may be held of infinite strength, or perhaps even objective list views.

[30] Howard-Snyder, Frances. “The Problem of Hell.” Faith and Philosophy 12.3 (1995): 442-450, p. 443.

[31] I presume this to be identical to the unforgivable sin of blasphemy against the Holy Spirit. Previously, I would have said it was the only sin that is explicitly identified as unforgivable in Scripture, but thinking recently about forgiveness has reminded me of the several verses which state that if you do not forgive others’ sins against you, your sins will not be forgiven. I think these can be distinguished to defend my original thought, but I won’t do that here.

Actually, I might also put killing or letting humans die on this list of deserving eternal hell as well. It seems to follow from the rest of my commitments. But minimally I am only going to claim one, and it is the one that I am reasonably confident in.

[32] See some popular level discussions on the equality of sin: https://www.youthpastortheologian.com/blog/are-all-sins-equal, https://pastormikestone.com/are-all-sins-equal/, https://www.gotquestions.org/sins-equal.html. These articles seem to get some things more right and some things less right.

[33] Technically, this is only true of maximizing consequentialism, while scalar consequentialism and satisficing consequentialism both eschew the bipartite classification. They do so to their own demise though, as these are not plausible consequentialist theories, getting rid of what makes maximizing consequentialism appealing and intuitive (by my lights).

[34] Yes, consequentialism is true. Bite me. More specifically, maximizing act (divine glory) consequentialism based on foreseeable consequences is the correct moral theory, but I get ahead of myself.

[35] I say pseudo- because there is no such thing as a genuine moral dilemma. This trivially follows from the truth of maximizing consequentialism. If you really want to be a madman, you can construct a consequentialist theory that includes moral dilemmas by totally breaking your theory beyond recognition of sanity (see this paper), but I think seeing what is required to do this reveals yet another reason to reject the existence of genuine moral dilemmas.

[36] Yes, this is very demanding. Welcome to Christianity, Christian Ethics, and What the Gospel Demands.

[37] Well, if humans are infinitely valuable, then annihilation would be an infinite punishment, right? Actually, Rasmussen even explicitly argues that the infinite value of humans rules out the possibility of annihilationism. I’m not really sure what to do here. I suppose the only way out is to try to give a really weird model of hell which has ever decreasing amounts of punishment over time (I was told recently that Brian Cutter has defended this view), or perhaps a consistent amount of infinitesimal punishment for eternity (credit to @SolarxPvP on Twitter for this view), both of which sum to a finite amount.
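For instance (my illustration of the arithmetic only, not Cutter’s own model), if the punishment delivered in period $n$ is $p \cdot r^{\,n-1}$ for some ratio $0 < r < 1$, then the total punishment over an eternity of periods converges:

$$\sum_{n=1}^{\infty} p\,r^{\,n-1} = \frac{p}{1-r} < \infty$$

So an everlasting hell can still deliver only a finite total quantity of punishment.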
Perhaps an even weirder view of hell is that, in the complete absence of God’s presence, one has perfectly neutral wellbeing. So, maybe a finite punishment for one’s sins is given on judgment day, but then one spends eternity in the absence of God’s presence in a state of perfect neutrality. On this view, any sin would then reasonably justify a finite punishment followed by an eternal hell of neutrality. So, all sins justify eternal hell. I don’t suppose there is a single human on earth who finds any component of this view plausible.

[38] It’s super implausible to think that a Christian ethic is correct and that any non-Christian’s only sin is being a non-Christian (without any further entailments), but I will assume it is possible here.

[39] This gets into a bunch of issues that I don’t want to get into here, but after reading the biblical analysis in The Inescapable Love of God by Thomas Talbott, I’m less confident that hell=eternal hell is as easy of an equation in Scripture as I once thought.

[40] Proverbs concurs with Paul, saying that “a man who commits adultery has no sense; whoever does so destroys himself” (Proverbs 6:32). It is interesting that Proverbs also specifically mentions lust of the heart. Proverbs 6:25 says about the wayward woman, “Do not lust in your heart after her beauty.”

[41] Some translations say “punishment” rather than “wrongdoing” (or “iniquity”), but, on the assumption that God is just and God is the one dishing out the punishment, then a greater punishment implies a greater wrongdoing anyway.

[42] One could also appeal to the worsening of character that would result from leaving Christianity on the basis of A&S as a counterargument, but I doubt my critics would want to appeal to or accept such a premise.

Resolving the Conflict Between Maximal Justice and Mercy

It is commonly thought that there is a tension between mercy and justice, and it is indeed not obvious how to make sense of God being both perfectly just and perfectly merciful. In this blog post, I characterize mercy and justice and discuss how God might be said to maximize both of these, building on recent developments in perfect being theology by Yujin Nagasawa, Mark Murphy, and Daniel Hill.

What are Mercy and Justice?

Mercy and justice are responses by an agent to the actions of an individual who is the recipient of mercy or justice. An agent responds to a recipient depending on whether the action is good or evil (positive or negative). A good action deserves reward, and an evil action deserves punishment. In short, justice is getting what you deserve (good or bad), and mercy is not getting something negative (i.e., punishment) you do deserve.[1] We can extend the characterization to add grace, which is getting something positive you do not deserve, but I will focus my discussion on mercy and justice for simplicity. There are two questions to ask to distinguish justice, mercy, and grace:

  1. Is the response deserved? (Yes: justice, No: mercy or grace)
  2. Is the response positive, negative, or non-negative? (Positive: grace or justice, Negative: justice, Non-negative: mercy)

The combinations of answers to these questions are summarized in Table 1 below. It is just to pay an agreed-upon wage for a job, or to give a misbehaving high school student detention. It is gracious to give someone a Christmas gift, especially to one who has never given you a gift (where there is definitely no obligation). It is merciful for a police officer who pulls you over for speeding to give only a verbal warning, or for a judge to drop or reduce a charge or sentence for someone who committed a crime.

| Attribute | Justice | Justice | Grace | Mercy |
| --- | --- | --- | --- | --- |
| Deserved? | Yes | Yes | No | No |
| Action | Positive | Negative | None | Negative |
| Response | Positive | Negative | Positive | Non-negative |
| Examples | Wages from job | Jail, detention | Christmas gifts, free samples | No sentencing, verbal warning from officer |

Table 1: A summary of the attributes of justice, grace, and mercy

Thus, justice is deserved punishment or reward, mercy is withholding deserved punishment, and grace is giving undeserved reward. As it stands, it appears as though mercy and justice are incompatible in the sense that an action cannot be both merciful and just, at least at the same time in the same way with respect to the same people. In the case of justice, the result is deserved, while in the case of mercy, the result is undeserved; these seem mutually exclusive.
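As a quick illustration, the two-question test can be encoded directly. This sketch is just my encoding of Table 1; the function name and error case are mine, not anything from the sources discussed.

```python
# A minimal encoding of the two-question classification in Table 1.

def classify(deserved: bool, response: str) -> str:
    """Classify a response as justice, grace, or mercy.

    response: "positive", "negative", or "non-negative" (withholding
    a deserved punishment counts as a non-negative response).
    """
    if deserved:
        return "justice"  # deserved reward or punishment
    if response == "positive":
        return "grace"    # undeserved positive response
    if response == "non-negative":
        return "mercy"    # deserved punishment withheld
    raise ValueError("an undeserved negative response is unjust")

print(classify(True, "positive"))       # justice (wages for a job)
print(classify(True, "negative"))       # justice (detention)
print(classify(False, "positive"))      # grace (a Christmas gift)
print(classify(False, "non-negative"))  # mercy (a verbal warning only)
```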

The Incoherence of Theism

Justice is good, and mercy is good. It is good to perform acts of justice and acts of mercy. So, just and merciful are likely attributes, or properties, that make agents good, or great. We naturally think that just leaders are better than unjust leaders, and merciful bosses (or friends) are greater than merciless bosses (or friends). Thus, we can call justice and mercy great-making properties (or good-making properties, or perfections). At the same time, God is commonly understood as the greatest possible being (or the greatest conceivable being). This is a commitment of perfect being theology. If God is the greatest possible being (GPB), then it seems as though God would possess all great-making properties (GMPs) to the highest degree possible. In other words, a GPB maximizes all GMPs.

If an action cannot be both merciful and just in an important way due to an inherent incompatibility, then there appears to be a problem with maximizing both of these properties. If there is a problem with maximizing all GMPs, then it looks like a problem for the existence of a GPB. If no GPB exists, and if God has to be a GPB, then God does not exist. This can perhaps be formalized (adapted from a previous Twitter conversation I had) as:

  1. God is (defined as) the greatest metaphysically possible being (GPB)
  2. Justice and mercy are great-making properties (GMPs)
  3. A GPB maximizes all GMPs
  4. It is not possible for both justice and mercy to be maximized in one being
  5. It is not possible for all GMPs to be maximized in one being
  6. If (5), it is not possible for a GPB to exist 
  7. It is not possible for a GPB to exist 
  8. Therefore, God does not exist

Let’s go through the premises to determine where we will focus in this post.

We will grant perfect being theology as characterized by (1) for the sake of the post, though I think it is quite reasonable to reject (1) and avoid the problem entirely, opting instead for a variant of perfect being theology or an entirely different fundamental characterization of God (i.e., metatheology). Alternative metatheologies include the creator of all else (creator theology), a worship-worthy being (worship-worthiness theology), the combination of creator and worship-worthiness theology (as Jonathan Kvanvig defends in his book, Depicting Deity, dedicated to exploring these options), or some mysterious fourth thing. I briefly motivated and do accept (2). I will briefly discuss later a reason to possibly exclude justice or mercy as GMPs due to their being unable to be maximized coherently.

The argument outlined above is a great opportunity to explore these ideas more in depth, with a particular focus on premises 3-6. The first helpful item to explore is (3), regarding the sense in which a GPB maximizes GMPs. Are GMPs maximized individually or collectively? What are the interaction effects between the properties on their maximum values? With respect to (4), what does it mean to be individually either maximally just or maximally merciful? Is it possible to maximize them collectively? Finally, regarding (6): if all GMPs cannot be maximized (in whatever sense we determine to be relevant), does that imply there is no GPB? We will discuss each of these in turn.

Introduction to Great-Making Properties

Before moving too far into our investigation, we need to understand exactly what is meant by great-making properties, what types of properties can be GMPs, and how to understand their maximums and their most valuable degrees. We can use some vocabulary developed by Daniel Hill in Divinity and Maximal Greatness to (hopefully) offer some clarity on this concept. Hill distinguishes between properties that have a maximum, a highest degree, and those that do not (such as set-theoretic cardinality due to the power set axiom), as well as those that have an optimum, a most valuable degree (not necessarily the highest possible degree), and those that do not.

The difference between the optimum and the maximum is that the maximum concerns only what it is possible for a being to have, so a being has something maximally if it is not possible to have more of that thing (e.g., power). The optimum is the degree such that possessing the property to any greater or lesser degree would be less valuable (not necessarily the highest possible degree). In analytic terms, a being has property F maximally if and only if it is not possible that there be a being that has more F, and a being, x, has great-making property F optimally if and only if nothing could be greater than x in virtue of having more or less F.[2]
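In my own shorthand (not Hill’s formalization), writing $\deg_F(x)$ for the degree to which $x$ has $F$ and $\succ$ for “is greater than”:

$$\mathrm{Max}_F(x) \iff \neg\Diamond\,\exists y\,[\deg_F(y) > \deg_F(x)]$$

$$\mathrm{Opt}_F(x) \iff \neg\Diamond\,\exists y\,[\deg_F(y) \neq \deg_F(x) \,\wedge\, y \succ x \text{ in virtue of that difference}]$$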

The optimum, as far as I can tell, is identical to what has previously regularly been called the intrinsic maximum of the value of that property. I find the language of optimum vs maximum much more helpful than the clunky ‘intrinsic maximum of value’ (not to mention misleading, since, if Atomism is false (see next section), then the ‘intrinsic’ maximum can be affected by external properties).

There are some properties whose maximum is its optimum, but other properties will have their optimum that is less than their maximum. Hill calls properties where the optimum = maximum maxi-optimality properties, and he calls properties where the optimum is not the maximum duality properties because they have a dual nature of a distinct optimum and maximum. In other words, a being possessing some duality property to its optimal degree is greater than a being that possesses that property to a greater degree (higher than the optimum).

Maximize Collectively or Individually?

A greatest possible being is said to maximize all great-making properties, but what if the GMPs themselves affect the maximum, or at least the maximum greatness, of one of the other GMPs? A common example is God’s omnipotence and omnibenevolence: if God can do anything metaphysically possible, can God sin? This is a standard question. A standard response is: no, God cannot sin, but that is not a problem; it would actually be a kind of weakness in God to be able to sin, not a strength. It is a liability, not a capability. Thus, there is no true capability, as opposed to liability, that God lacks. Is this response adequate for showing that God can, in fact, maximize both power and goodness in the relevant sense? We will explore this question in a general way in this section. 

Overall, we need to assess two theses regarding how GMPs are maximized (Distribution and Atomism as termed by Mark Murphy in God’s Own Ethics[3]):

  • Distribution: for each great-making property that God exhibits, God exhibits that property to the intrinsic maximum of its value
  • Atomism: for each great-making property, what constitutes the intrinsic maximum of the value of that property is independent of that great-making property’s relation to other divine great-making properties

Our stance on Distribution will determine if we think God has to maximize each GMP individually (independently), where each GMP is possessed to its most valuable degree, or collectively, where the GPB has the highest overall value and maximizes the set of GMPs rather than each GMP individually. For example, a Distributivist may say that a GPB can have more power, but not more power in a more valuable way. See Figure 1 for a visual depiction between a Distributivist and a non-Distributivist set of GMP values and maximums.

Figure 1: Left is a Distributive set of GMPs, since all GMPs are possessed to the optimum (the most valuable) degree. On the right, one GMP, knowledge, is not at its optimum level, so the set violates Distribution. Note that if Atomism is true, then the maximum values for power and knowledge would need to be reduced below the optimum values, and this would need to be motivated by the metaphysical impossibility of possessing those GMPs to a higher degree. If Atomism is false, then it could be that the interaction between power, knowledge, and goodness lowers the optimum values of power and knowledge to their current levels.

Yujin Nagasawa in Maximal God rejects Distribution and says that a GPB has the maximal consistent set of all GMPs. Thus, the GPB maximizes greatness overall, independent of whether each individual GMP is at its intrinsic maximum of value. It is perfectly acceptable for a being to have less power than it could most valuably have if that amount of power would produce an inconsistency with another maximized GMP, such as omnibenevolence. Murphy challenges Nagasawa’s view and defends Distribution, but I will not assess that argument here. (I think it fails, and Distribution is plausibly rejected for a parallel reason that one can reject Atomism.)
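To make the contrast concrete, here is a toy numerical sketch of the “maximal consistent set” idea; the scale, the consistency constraint, and the additive greatness measure are all hypothetical simplifications of mine, not Nagasawa’s.

```python
# Toy model: pick the consistent profile of GMP degrees with the
# greatest overall value, rather than requiring each GMP at its
# individual optimum (Distribution). All numbers are hypothetical.

from itertools import product

LEVELS = range(11)  # toy 0-10 scale for each GMP

def consistent(power: int, knowledge: int, goodness: int) -> bool:
    # Hypothetical constraint: maximal goodness rules out maximal
    # power (think: no power to sin), an interaction between GMPs.
    return not (goodness == 10 and power == 10)

def greatness(power: int, knowledge: int, goodness: int) -> int:
    return power + knowledge + goodness  # toy additive measure

best = max(
    (p for p in product(LEVELS, repeat=3) if consistent(*p)),
    key=lambda p: greatness(*p),
)
print(best)  # (9, 10, 10): the overall-greatest consistent profile
```

The point of the toy is only that the overall-greatest consistent profile need not put every property at its individual ceiling.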

If Distribution is false, there is a fairly easy way to avoid consistency issues: simply say God has the maximal consistent set of justice and mercy. If maximizing GMPs just means taking the maximum of the properties combined in some way that requires consistency, then coherence is baked into the definition.

  1. God is (defined as) the greatest metaphysically possible being (GPB)
  2. Justice, mercy, & grace are great-making properties (GMPs)
  3. The GPB has the maximal consistent set of all GMPs
  4. God has the maximal consistent set of justice, mercy, & grace

This sounds like a nice (though cheap) save, but, as Jeff Speaks has pointed out in The Greatest Possible Being, something fishy must be going on here. There is still more to explore about how to maximize the consistent set, whatever it is, of the two properties. It remains unanswered whether this GPB meets some absolute standard of greatness (in case the GMPs are in tension to the extent that the “maximal consistent set” is no greater overall than a human[4] or even a rock). I am not going to assume that Distribution is false in the rest of this post, so I will consider alternative ways to respond to the challenge of the consistency of the GMPs.

Even assuming Distribution is true, to say that each GMP is at its intrinsic maximum is not to say that there are no interaction effects between the GMPs. The GMPs might interact in such a way that one reduces the intrinsic maximum of value of another. Thus, our stance on Atomism determines whether we think the value maximum of a GMP can be affected by another GMP.

Without Atomism, we have no problem saying God could have more power by having the ability to sin, but not power possessed in a more valuable way, which means that God still maximizes all GMPs in the relevant sense. An Atomist must say that the maximum value of power does not depend on anything to do with the goodness of an action or of the agent, which appears to imply that God would be greater if he could sin. Murphy does not think this (or Atomism generally) is implausible, but that seems to me pretty clearly incorrect. We can, in fact, say that one who can sin is not greater than one who cannot. Plausibly, one is greater if one cannot sin due to one's impeccability (not due to a lack of power or ability) than if one can sin. Thus, power cannot be realized in a more valuable way by adding the ability to sin.

The key takeaway from this section is that GMPs can interact with each other in ways that affect what degree of a property is most valuable to possess, which is to say that Atomism is false. I am neutral on Distribution, but I find no issue with rejecting it. As Murphy contends, rejecting Distribution requires some absolute minimum overall standard of greatness to be met; however, (as Murphy also defends) we should perhaps include that absolute minimum either way.

This section applies to mercy and justice because (1) without Distribution, mercy and justice do not need to reach their individual maximum values for the GPB to maximize them collectively, and (2) without Atomism, mercy and justice can interact in such a way that their value maxima are changed compared to considering them individually.

How to Maximize Justice or Mercy?

When maximizing a great-making property, we need to understand what it means for that attribute to be exhibited to a greater degree (assuming it is, in fact, a degreed property), and what it could look like to exhibit that attribute at the greatest degree. In the case of justice and mercy, maximization is likely over numerous dimensions, including degree or magnitude, quality or type, recipients, times, and/or possible worlds.

Justice and mercy are relational properties, and they require multiple parties to be involved. Agents are the only objects that can be just or merciful, and agents are also the only objects that can be recipients of justice and mercy. Every act of justice and mercy has a recipient, someone who receives mercy or justice from an agent.

Let us consider a natural starting point for maximizing over all the dimensions listed earlier:

  • Maximal justice1 = just in the highest degree with respect to all people at all times in all possible worlds in the retributive, compensatory, and restorative aspects of justice.
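Rendered semi-formally (my notation, and setting aside the "degree" clause, which I argue below is redundant), writing Just(G, x, t, a, w) for "God acts justly toward person x at time t in aspect a (retributive, compensatory, or restorative) in world w," this proposal is:

  • Maximal justice1: ∀w ∀x ∀t ∀a Just(G, x, t, a, w)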

Now, we will go through this piece by piece.

The highest "degree" aspect is probably redundant because one can only be just or unjust. If one gives less than the reward that is earned, it is weird to call that "partially just." Punishment or reward is either fitting or it is not. We might instead understand justice as exhibited to the highest degree if it is just in the way described by the rest of the definition. Consequently, justice in the highest degree just is justice with respect to all people at all times in all possible worlds in the retributive, compensatory, and restorative aspects of justice.

Regarding the different qualities, or aspects, of justice, this would depend on our independent account of justice, but I think it is reasonable to say there are multiple aspects of justice, including retributive and restorative aspects.

Maximizing over all possible worlds makes sense; it is to say that God is necessarily just. We would need to clarify that, since justice is relational, it would be necessary relative to worlds in which God creates agents. (See the next section for an objection along these lines.)

A bigger issue arises when maximizing over all times and all objects. The intrinsic maximum of the value of justice likely does not include all times and all objects. The reason is that it is likely morally better, or at least more valuable, for God not to act this way at all times toward all people. Namely, it is better if God is at least sometimes merciful, or sometimes gracious, at least toward some people. In the same way, it is better if God is not always merciful, but at least sometimes gives people the punishment they deserve rather than always letting them go scot-free.

Consider the videos that go viral on Facebook or YouTube of judges showing mercy to people who deserve a harsher sentence. We usually consider this praiseworthy but not obligatory, and thus a supererogatory action on the judge's part. We would say that this judge is better than a stickler who never lets people off the hook (i.e., is never merciful).

Plausibly, it is worse to be always merciful, or always just, with respect to any given person than to be sometimes merciful and sometimes just. So, in the terms we used earlier, there is a point at which justice could not be realized in a more valuable way, and the same goes for mercy.

What about being always just, but with respect to some person or other rather than with respect to each given individual? In other words, God may be just at all times, but not at all times toward all people. This is how I used to reconcile maximal justice and mercy: God is just with respect to someone (anyone) at any given time, and God is merciful with respect to someone at any given time; alternatively, we could simply say: God, at all times, exhibits mercy and justice. Therefore, God is maximally merciful and just.[5]

However, there is a counterexample. Consider a possible world with only one person. God being always just would require God to be just with respect to this person at all times, but as we already considered, it is better if, for a given individual, there is a mixture of justice and mercy at different times. Therefore, this cannot be the most valuable amount of justice. We need a different analysis of the maximum value of justice.

I think the conclusion is that the intrinsic maximum value of justice maximizes over possible worlds, types of justice, and objects, but it does not maximize over all times. I think the analysis of mercy will be similar. If God never punishes the wicked, but always gives them mercy, that is worse than sometimes giving justice.

Crystallizing the Compatibility

Does justice or mercy have a true maximum? I explored in the previous section what maximizing justice or mercy might look like. One objection is that there is no maximum because God could always create more people to receive mercy, and being merciful to more people would make God more merciful.[6] A quite plausible response is analogous to "person-affecting" views in population ethics, on which an outcome matters only insofar as it affects some actual person. On this response, creating more people and being merciful toward them would not make God more merciful, because if God did not create them, they would not have done anything that warranted mercy in the first place (or existed at all). In this scenario, God is as merciful as possible per person, and that is all that is needed for maximal mercy.

We may say that it is greater to have mercy on 100 persons than on 10 persons, but that does not mean it is greater in virtue of being more merciful, particularly if we are comparing different possible worlds. It may be greater in virtue of other aspects of the ability to have mercy, such as overcoming the limitations in power, space, time, and knowledge needed to create and have mercy on 100 persons rather than 10. (The same applies, even more clearly, to the parallel worry Murphy has about God being maximally loving.) God can have the same level of mercifulness whether or not he creates more people toward whom to actualize it, similar to how omnipotence does not mean God creates as many objects as possible: God does not need to actively use all his power to be maximally powerful. Thus, God can be maximally merciful without creating more people to be merciful toward, merely by being maximally merciful toward the people that God does create.

A similar worry arises when considering the necessity of God's mercy. If mercy requires creatures, and God only contingently creates, then there might be a problem for God being necessarily merciful (or just). For now, while I am engaging with Murphy on this, I adopt a stance parallel to Murphy's on necessary love[7]: "given the existence of created persons, the Anselmian being necessarily [is just and merciful towards] them."[8] One might press that even if God creates, it does not follow that necessarily anyone has sinned and is thus a candidate for receiving mercy. One can appeal to a kind of transworld depravity: in any possible world with agents, some agent sins at some time and is thus a candidate for mercy. If all of this is unsatisfactory, I am perfectly fine with saying God is just and merciful in a contingent way, namely in those worlds in which God creates moral agents; there is no way for God to be merciful or just in worlds without agents, so such worlds do not make God any less merciful or just.

Let us go back and reconsider properties that have an optimum distinct from their maximum, what Hill calls duality properties. In fact, Hill gives lenience (aka mercy), which he contrasts with justice, as an example of one! Hill says, "I contend that it is possible to be too lenient, i.e., that a maximally lenient being would not be optimally lenient… It is great, I think, to be lenient, and a more lenient being is greater than a less lenient being – up to a point, the point of being optimally lenient."[9] I completely agree with him here.

Hill further comments on the relationship between mercy and justice by saying, "[I]t seems that too much lenience implies not enough justice, and that being optimally lenient is compatible with being maximally (or optimally) just."[10] Again, I completely agree. If one is always merciful, always forgiving and letting wrongdoers off the hook, then that is not enough justice. There is some optimum level, which need not be exactly identified, where value is maximized, and that optimum is not God acting justly toward every person at all times in all possible worlds, etc. Figure 2 depicts how the interaction effects between justice and mercy affect their optimum values.

Figure 2: The left depicts the GMPs of justice and mercy considered individually, while the right considers them collectively, including their interaction effects. In this case, there are interaction effects that result in a depression of the optimum degree of both justice and mercy to below the maximum degree, which constitutes a rejection of Atomism. This graphic suggests something like the optimum percentage of actions is 75% just and 25% merciful, which sounds reasonable, but I am not married to that ratio.

These considerations culminate in a new plausible account of maximal justice that is compatible with maximal mercy:

  • Maximal justice2 = justice with respect to all people in all possible worlds (that have agents) in all aspects of justice.
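In the same notation as before, the revision drops the universal quantifier over times and restricts worlds to those containing agents; on one natural reading, justice toward each person requires acting justly toward them at some time or other:

  • Maximal justice2: ∀w (Agents(w) → ∀x ∀a ∃t Just(G, x, t, a, w))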

In this way, it appears as though maximal justice and maximal mercy, in the sense of maximizing them as great-making properties (i.e., achieving their value maximum), are perfectly compatible simultaneously. In this analysis, we made use of the interactions between the GMPs of justice and mercy, which suggests that the value maximum of each is affected by the other, and thus that Atomism is false.

Conclusions

In this post, I proposed brief accounts of mercy and justice and analyzed what problems might arise from them for God's existence and perfect being theology. I analyzed what it might mean to be maximally just or maximally merciful, concluding that God would be just in all possible worlds (in which there are creatures that do morally relevant actions) in all senses toward all agents, but God does not need to be just or merciful toward all agents at all times. Mercy and justice have their intrinsic maxima of value affected by one another, which means that Atomism is false and each great-making property is not maximized purely individually. The optimum amount of justice and mercy is neither 100% just nor 100% merciful, but somewhere in between. Something that remains to be explored is how the Gospel so beautifully and perfectly combines justice, mercy, and grace in a way that punishes all evil and rewards all good while offering mercy and grace to all.


[1] I am neglecting a lot here and am giving a fairly naïve account, as I have not done much sustained reading on this front. This account of justice may be limited to retributive justice, which I am perfectly fine with. It may also be incompatible with compensation (e.g., for undeserved suffering), which again I am perfectly fine with for several reasons I leave for elsewhere. I also acknowledge this video for the featured image.

[2] Hill, Daniel. Divinity and Maximal Greatness. Routledge, 2004, pp. 10-11.

[3] Murphy, Mark C. God’s Own Ethics: Norms of divine agency and the argument from evil. Oxford University Press, 2017, p. 12. Definitions adapted for internal and external parallelism.

[4] This is Jeff Speaks’s Michael Jordan objection, that PBT may end in the conclusion that Michael Jordan is the greatest possible being, and thus we have clearly gone astray somewhere.

[5] There is a more general question I have here, and I'm not sure where to look for the answer (action theory? metaphysics?). How does one make inferences in either direction between attributes and actions? Given one's set of actions over all time, how exactly do you draw conclusions about the attributes of that individual? Or, more useful for doing perfect being theology, how do we draw conclusions about one's actions based on one's attributes? Specifically, can one have the property of justice if one does not act justly at all points in time? Does the property disappear if one does not "act out of justice"? Does the phrase "acting out of justice" even make sense? This is language that I have seen used, and use myself, when describing God's actions. God is always loving, always just, always merciful, but that does not mean that all his actions are primarily out of mercy. Some actions are more related to his justice than his mercy. Is there a more robust way to make sense of this language? It may be that perfect being theologians have talked about this at length in the context of maximal properties without my knowing. There's a lot of work here that I haven't read yet. There is especially a lot of work on maximal love by Talbott and others that I have not yet adequately sifted through. I could add to this post based on what I have read thus far, but I'm trying to keep this short.

[6] Thanks to Johnny Waldrop for raising this objection.

[7] The problem does seem worse than the problem of contingent love, since inter-Trinitarian relations can easily have love, but it is hard to make sense of justice or mercy as being contained in inter-Trinitarian relations.

[8] Murphy, p. 32.

[9] Hill, Daniel. Divinity and Maximal Greatness. Routledge, 2004, p. 11.

[10] Ibid.

A Systematic Response to Criticisms of Effective Altruism in the Wake of the FTX Scandal

Summary

Effective altruism (EA) has been in the news recently following the crash of a cryptocurrency exchange and trading firm, the head of which was publicly connected to EA. The highly publicized event resulted in several articles arguing that EA is incorrect or morally problematic because EA increases the probability of a similar scandal, or that EA implies the ends justify the means, or that EA is inherently utilitarian, or that EA can be used to justify anything. In this post, I will demonstrate the failures of these arguments and others that have been amassed. There is not much we can conclude about EA as an intellectual project or a moral framework from this cryptocurrency scandal. EA remains a defensible and powerful tool for good and a sound framework for assessing charitable donations and career choices.

Note: This is a long post, so feel free to skip around to the sections of particular interest using the linked section headers below. Additionally, this post is available as a PDF or Word document.

  1. Summary
  2. Introduction
  3. Effective Altruism Revealed
  4. My Background
  5. SBF Association Argument Against Effective Altruism
    1. Was SBF Acting in Alignment with EA?
    2. SBF Denies Adhering to EA?
    3. EA is Not Tainted by SBF
      1. An Irrelevant “Peculiar” Connection
      2. Skills in Charity Evaluation ≠ Skills in Fraud Detection in Friends
      3. EA Does Not Problematically Increase the Risk of Wrongdoing
  6. Genetic Utilitarian Arguments Against Effective Altruism
    1. Genetic Personal Argument Against EA
    2. Genetic Precursor Argument Against EA
    3. A Movement’s Commitments are not Dictated by the Belief Set of Its Leaders
    4. EA Leaders are Not All Utilitarians
  7. Do the Ends Justify the Means?
    1. Some Ends Justify Some Means
    2. Some Ends Justify Trivially Negative Means
    3. No End Can Justify Any Means
    4. A Sufficiently Positive End Can Justify a Negative Means
    5. Absolutism is the Problem
    6. Paradoxes of Absolute Deontology
    7. Application to the FTX Scandal
  8. Effective Altruism is Not Inherently Utilitarian
    1. [Minimal] EA Does Not Make Normative Claims
    2. EA is Independently Motivated
      1. Theory-Independent Motivation: The Drowning Child
      2. Martin Luther’s Drowning Person
      3. Virtue Theoretic Motivation: Generosity and Others-Centeredness
    3. EA Does Not Have a Global Scope
    4. EA Incorporates Side Constraints
    5. EA is Not Committed to the Same Value Theory
    6. EA Incorporates Moral Uncertainty
    7. Objections
    8. Sub-Conclusion
  9. Can EA/Consequentialism/Longtermism be Used to Justify Anything?
    1. All Families of Moral Theories Can Justify Anything
    2. Specific Moral Theories Do Not Justify Any Action
    3. Specific EA and Longtermism Frameworks Do Not Justify Any Action
  10. Takeaways and Conclusion
  11. Post-Script
  12. Endnotes

Introduction

Recently, there has been a serious scandal primarily involving Sam Bankman-Fried (SBF) and his cryptocurrency exchange FTX, precipitating a multibillion-dollar collapse into bankruptcy. I am talking about this because SBF has been publicly connected to the effective altruism movement, including being upheld as a good example of "earning to give," where people purposely take lucrative jobs in order to donate even more money to effective charities. For example, Oliver Yeung took a job at Google and is able to donate 85% of his six-figure income to charities while living in New York City; for four years, he lived in a van to push this up to 90-95% of his income.

SBF met William MacAskill, one of the founders and leaders of the effective altruism (EA) movement, in undergrad, and MacAskill convinced him to go into finance to "earn to give." SBF did very well, working at a top quantitative trading firm, Jane Street, and he then teamed up with some other effective altruists (EAs) to start a trading firm, Alameda Research, and eventually a cryptocurrency exchange, FTX, that was intimately connected with Alameda. FTX and Alameda did very well, ballooning over the past several years. At his peak, right before the downfall, SBF had a net worth of $26 billion.

Like many other cryptocurrency exchanges, FTX produced its own altcoin, FTT, which gives some discounts and rewards to customers and acts somewhat like stock, and SBF held some of his companies' assets in FTT. Trouble started in early November when CoinDesk published an article expressing concern over Alameda's balance sheet, revealing an unhealthy amount of assets held in FTT, essentially FTX's own made-up currency. FTT-related assets amounted to over $6 billion of Alameda's $14 billion in assets, leaving Alameda extremely vulnerable to a sudden drop in FTT's value, given its limited ability to liquidate enough assets to pay sellers.

Unfortunately for SBF, the Binance CEO decided to sell all of Binance's FTT tokens, collectively worth $529 million. He also publicly announced the sale, triggering a bank run in which many other customers decided to sell their FTT and withdraw their funds from FTX entirely. As a result of the run, $6 billion was withdrawn from FTX within 72 hours. FTX did not have the liquid assets to cover this and rapidly collapsed, declaring bankruptcy.

It became apparent that Alameda's investments were extremely risky, even though they repeatedly told customers they had loans with "no downside" and high returns with "no risk." It was revealed that Alameda's risky bets were made with customer deposits, which is apparently a big "no-no." As far as I can tell, it is not clear whether SBF actually committed fraud, but he clearly mishandled funds and misled customers about them, possibly in a way that violated the business's terms and conditions.

In the fallout of this disaster, which included the closing of over 100 other organizations, the loss of many employees' life savings, and more, effective altruism came under fire for its connection to SBF. SBF was, after all, following suggestions given by EA organizations when he decided to "earn to give." Further, he has explicitly advocated EA-adjacent reasoning in maximizing expected value, though he also champions a more risk-tolerant approach than EAs tend to prefer.

The question everyone is asking (and most are poorly answering) is: “Is effective altruism to be blamed for SBF’s behavior?”

Many articles in popular media have denounced effective altruism in the wake of the crash, characterizing the philanthropic approach as "morally bankrupt," "ineffective altruism," and "defective altruism." They say the FTX scandal "is more than a black eye for EA," "killed EA," or "casts a pall on [EA]." Articles linking the scandal and EA, most of them critical of EA, have been published in the New York Times, the Guardian, the Washington Post, New York Magazine, the Economist, MIT Technology Review, Philanthropy Daily, Slate, the New Republic, and many other sites.

In this post, I am going to subject these articles and their arguments to scrutiny to see what exactly we can conclude about EA’s framework of evaluating the effectiveness of charities and careers and how they advocate for why and how we should do so in the first place. In short, my answer is: not much. There is not much we can conclude about EA from the FTX scandal.

I am only going to assess these articles insofar as they are, or contain, critiques of effective altruism. Some of these articles might have additional or entirely different purposes but sound sufficiently negative toward EA that I will nonetheless assess whether an argument against EA can be constructed from them.

Furthermore, I want EA to be criticized, in the same sense that, for any given position, I want the best arguments and evidence for and against it to be raised and assessed in the most rigorous way. Of course, that doesn't mean every argument is equally good. I have spent much time looking at academic critiques of effective altruism, which I (normally) find more compelling, as they are more rigorous. However, most recent online criticisms are just not good.

In this post, I will 1) give a precise characterization of effective altruism, 2) mention possibly relevant background information that informs my perspective in evaluating EA, 3) address what seems to be the most frequent, yet to my mind most perplexing, concern: that SBF's association with EA reveals that EA has an incorrect framework, 4) respond to arguments against EA that rely on the utilitarian origins of EA or its leadership, 5) clarify "ends justify the means" reasoning in recent discourse and normative ethics more broadly, 6) introduce six differences between EA and utilitarianism, showing that EA is independent of any commitment to consequentialism, and, finally, 7) respond to the concern that EA or consequentialism or longtermism can be used to justify anything and is therefore incorrect. With each argument, I try to reconstruct the best version of the critique against EA, since much of the argumentative work in these articles is left implicit or neglected entirely.

I welcome responses, better reconstructed arguments, corrections, challenges, counter-arguments, etc. Let’s dive in.

Effective Altruism Revealed

In “The Definition of Effective Altruism,”[1] William MacAskill characterizes effective altruism with two parts, an intellectual project (or research field) and a practical project (or social movement). Effective altruism is:

  1. the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources…
  2. the use of the findings from (1) to try to improve the world.

We could perhaps summarize this by saying that someone is an effective altruist only if they try to maximize the good with their resources, particularly with respect to charitable donations and career choice, since those are EA's emphases. A few features of this definition that MacAskill emphasizes are that it is: non-normative, maximizing, science-aligned, and tentatively impartial and welfarist.

We can further distinguish between different kinds of effective altruists[2]: normative EAs think that charitable donations that maximize good are morally obligatory, and radical EAs think that one is morally obligated to donate a substantial portion of one's surplus income to charity. Normative, radical EAs combine the two, and I independently argue for normative, radical EA in a draft paper (see n. 2). It is helpful to distinguish these kinds of EAs (minimal, normative, radical, or normative radical), where the summary of MacAskill's definition above is the minimal definition that constitutes the core of effective altruism, while the normative and radical commitments are auxiliary hypotheses. I will revisit this in the Effective Altruism is Not Inherently Utilitarian section.

Based on the characterization above, we can quickly dispel two key errors that these articles repeatedly make. One error is that "effective altruism requires utilitarianism" (then "utilitarianism is false," concluding "EA is incorrect"). The truth is that utilitarianism (trivially) implies effective altruism, but effective altruism does not imply utilitarianism. In fact, I would put effective altruism at the center of the Venn diagram of the three moral theories (see Figure 1). There are strong deontological and virtue ethical arguments to be made for effective altruism. See the Effective Altruism is Not Inherently Utilitarian section for more on this, including one theory-independent and two virtue ethical arguments for EA. Also, see this 80,000 Hours Podcast episode on deontological motivations for EA.

Figure 1: A Venn diagram showing what moral theories imply effective altruism

The second flawed criticism is that longtermism is an essential part of effective altruism. The core commitments of effective altruism do not imply longtermism, and longtermism does not require effective altruism. Instead, longtermism is an auxiliary hypothesis of EA. Longtermism could be false while EA is correct, and EA could be false while longtermism is correct. To get from EA to longtermism, you need an additional premise such as "the best use of one's resources is to affect the far future," which longtermists defend but EAs can reasonably reject. EA is committed to cause neutrality, so it is open to those who think non-longtermist causes should be prioritized.

As we will see, many people writing articles with criticisms of effective altruism could really stand to read the FAQ page on effectivealtruism.org, as many of the objections have been replied to at length there (not to mention in academic-level pieces), including the difference between EA and utilitarianism and the charge of neglecting systemic change. Another, slightly more advanced but more precise, discussion of characterizing effective altruism is the chapter "The Definition of Effective Altruism" by MacAskill. The very first topic MacAskill covers in the "Misunderstandings of effective altruism" section is "Effective altruism is just utilitarianism."

My Background

I call myself an effective altruist. I think that effective altruism is obviously correct, with solid arguments in its favor. It follows from very simple assumptions, such as i) it is always permissible to do the morally best thing,[3] ii) acting on strong evidence is better than acting on weak evidence, and iii) if you can help someone in great need without sacrificing anything of moral significance, you should do so. If you care about helping people, you are spending money on things you don't need, and you don't have infinite money, then you might as well give where it helps the most. This just makes sense. On the other hand, I wouldn't call myself a longtermist[4] (regarding either weak longtermism, which says affecting the long-term future is a key moral priority, or strong longtermism, which says it is the most important moral priority), as I am skeptical about many of their claims. I simultaneously think most critiques I have heard of longtermism (I have not read much, if any, academic work on this) are lacking.

I have known about effective altruism since early 2021 and took the Giving What We Can pledge in March 2021. However, I had been convinced of its way of thinking for several years before that, since early in undergrad. I have mostly been a part of Effective Altruism for Christians (EACH) rather than the broader EA movement. I have not worked for an EA organization directly and do not have a local EA group to be a part of. I had never even heard of Sam Bankman-Fried until this whole scandal happened, though I had heard other people talking about the FTX Future Fund (without knowing what FTX was).

The closest thing to an "insider look" I have gotten into EA as an institutional structure is conversations with some people at an EACH retreat in San Francisco, one of whom worked for an EA startup and started an EA city chapter. Another has been involved in the EA Berkeley community. Some of the things they said suggested that there are ways various EA suborganizations could further optimize their use of funding, but nothing super concerning.

I will mostly be looking at recent pieces insofar as they contribute to the debate about the intellectual project and moral framework of EA, as I find those to be the most interesting, important, and fundamental questions at hand. The end result of this inquiry has direct bearing on whether we should give to charities recommended by EA evaluators like GiveWell, rather than on, e.g., whether the Center for Effective Altruism should spend less on advertising EA books, which is a different question entirely and not central to the EA project. Additionally, I have engaged with enough material on the moral frameworks in question (and normative ethics more broadly) to hopefully have something to contribute to evaluating the EA moral framework.

SBF Association Argument Against Effective Altruism

A lot of recent critiques of EA appear to have the general form:

  1. Sam Bankman-Fried (SBF) engaged in extremely problematic practices.
  2. SBF was an EA/was intimately connected to EA/was a leader of EA.
  3. Therefore, EA is a bad or incorrect framework.

(1) is uncontroversial. On (2), SBF was clearly connected in a very public way to EA. The extent to which he was following or had internalized EA principles can be challenged, and I will also question the inference from (1) and (2) to (3). What exactly is the argument from SBF's actions and connection to EA to the conclusion that EA is either inherently or practically problematic?

Was SBF Acting in Alignment with EA?

The most relevant question in this whole debacle is whether the EA framework implies that SBF acted in a morally permissible manner. The answer: it is extremely unlikely that, given the EA framework, what SBF did was morally permissible.

EA leaders have repeatedly repudiated the general type of behavior SBF engaged in. In fact, William MacAskill and Benjamin Todd give financial fraud as a go-to example of an impermissible career choice on an EA framework. Eric Levitz in the Intelligencer acknowledges this, noting that "MacAskill and Todd's go-to example of an impermissible career is 'a banker who commits fraud'" and that they specifically argue against engaging in harmful economic activity to generate funds for charity. Additionally, "they suggest that performing a socially destructive job for the sake of bankrolling effective altruism is liable to fail on its own terms."

It is very difficult to see how a virtually guaranteed bankruptcy, when thousands of people are depending on you for their life savings, jobs, and altruistic projects, is actually the best moral choice. Fraud is just a bad idea, completely independent of effective altruism. The disagreement here may merely be on the empirical question rather than the moral question (it is notoriously difficult, at times, to separate the two, as empirical disagreement is often disguised as moral disagreement).

MacAskill calls out SBF’s behavior as not aligned with EA: “For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.” Furthermore, “if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.”

Additionally, his practices were just clearly horrible financially. He misplaced $8 billion. John J. Ray III, who oversaw the restructuring of Enron and is now overseeing FTX, said about the FTX financial situation, "Never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy financial information as occurred here. From compromised systems integrity and faulty regulatory oversight abroad, to the concentration of control in the hands of a very small group of inexperienced, unsophisticated and potentially compromised individuals, this situation is unprecedented." These practices obviously do not maximize expected value on any plausible view.

SBF Denies Adhering to EA?

In addition, Sam Bankman-Fried himself appeared to deny that he was actually attempting to implement an EA framework, though he later clarified that his comments were about crypto regulation rather than EA. Nitasha Tiku in The Washington Post (non-paywalled) puts it this way: "[SBF] denied he was ever truly an adherent [of EA] and suggested that his much-discussed ethical persona was essentially a scam." Tiku is referring to an interview between SBF and Kelsey Piper in Vox. Piper had interviewed SBF sometime in the summer, when SBF said that doing bad for the greater good does not work because of the risk of doing more harm than good as well as the second-order effects. Piper asked if he still thought that, to which he replied, "Man all the dumb sh*t I said. It's not true, not really."

When asked if that was just a front, a PR answer rather than reality, he said, "everyone goes around pretending that perception reflects reality. It doesn't." He also said that most of the ethics stuff was a front, not all of it, but a lot of it, since in the end it's just about winners and losers on the balance sheet. When asked about his knack for frequently talking about ethics, he said, "I had to be. It's what reputations are made of, to some extent…I feel bad for those who get f—ed by it, by this dumb game we woke Westerners play where we say all the right shiboleths [sic] and so everyone likes us." He said later, though, that the reference to the "dumb game we woke Westerners play" was to social responsibility and environmental, social, and governance (ESG) criteria for crypto investment rather than to effective altruism.

The most pessimistic and antagonistic of people might say, as Tiku perhaps did, that SBF only said what he did to protect EA. The idea is that he actually was an effective altruist, and believed it, but lied about it all being a front in order to help save face for EA. Tiku says that EA's brand "helped deflect the kind of scrutiny that might otherwise greet an executive who got rich quick in an unregulated offshore industry," as also reflected in the title of the article, "The do-gooder movement that shielded Sam Bankman-Fried from scrutiny." Since we do not have access to SBF's mental states, I do not care to speculate much about his reasons for saying what he said. Armchair psychoanalysis is not exactly a reliable methodology.

People argue about whether SBF was being truthful here. He appeared to believe he was speaking off the record, suggesting honesty. If so, then he did not believe he was actively trying to implement the EA framework (unless SBF's answers about his ethics in the Vox interview were intended to be disconnected from the EA framework and solely about regulations, which to me is not clear either way, but they did not seem entirely disconnected). Ultimately, I do not think much hinges on whether SBF believed he was implementing the EA framework, since what matters is whether SBF's actions are a reflection of what is inherent in the EA framework, which they are not.

Now, I have little interest in attempting to disown SBF because he is now a black sheep. There is no doubt that EA painted SBF as a paradigm case of an actor doing great moral good by using his money to invest in and donate to charity. We EAs have to own that; we got it wrong due to our lack of knowledge about what was happening behind the scenes. Could more have been done to prevent this? Probably, and EAs are taking this very seriously, doing a lot of soul searching. It is likely that more safeguards will be put into place. These are reasonable questions, but they have little to do with the moral framework of EA itself, since the EA framework still renders SBF's gamble impermissible.

Next, I will investigate whether or not the mere connection between SBF and EA, rather than an alignment between EA’s framework and SBF’s actions, is sufficient to challenge EA’s framework.

EA is Not Tainted by SBF

Now that we know SBF’s actions do not coincide with EA principles, we can investigate how the connection between SBF and EA could be used as an argument against EA. Recent articles mostly seem to just toss the two names next to each other in an obscure way without making any clear argument, hoping that one will be tainted by the other.

An Irrelevant “Peculiar” Connection

For example, Jonathan Hannah in Philanthropy Daily says, “MacAskill claims to be an ethicist concerned with the most disadvantaged in the world, and so it seems peculiar that he was inextricably linked to Bankman-Fried and FTX given that FTX claimed to make money by trading cryptocurrencies, an activity that carries serious negative environmental consequences and may play a role in human trafficking.” The environmental consequences have to do with crypto mining that uses a lot of electricity (more than some countries as a whole), and the role in human trafficking is that virtual currencies are harder to track, so they are frequently used in black market activities. 

It is hard to overstate how much of a stretch this argument is. Here is an equivalent argument against myself (relevant background: I studied chemical engineering at Texas A&M, which also has a strong petroleum engineering program). I say I care about the disadvantaged, yet I have many friends who went into the oil and gas industry (and some of them listened to my suggestions about charitable donations). Oil and gas bad. Curious! Further, I have many more friends who love, watch, and/or attend football and other public sporting events, and yet these events are associated with an increase in human trafficking.[5] Therefore…I don't care about the disadvantaged? And therefore my thoughts (or knowledge of evidence like randomized controlled trials) about helping others are wrong? Looks not much better than Figure 2.

Figure 2: I am very intelligent.

Of course, effective altruists have spent a great deal of time working on the issue of weighing the moral costs and benefits of working in plausibly harmful industries vs working for charities. This isn't exactly their first rodeo. See 80,000 Hours: Find a Fulfilling Career That Does Good and Doing Good Better: Effective Altruism and How You Can Make a Difference (you can get a free copy of either at 80,000 Hours). We can also quickly consider SBF's scenario (I am only offering my first-glance personal thoughts, not attempting to apply the 80,000 Hours framework). SBF earned enough money from cryptocurrency to carbon offset all of the U.S.'s cryptocurrency greenhouse emissions many times over.[6] Additionally, it is hard to see why employees (or employers) of cryptocurrency exchanges can be blamed for human trafficking purchases made with crypto, any more than the U.S. Treasury can be blamed for human trafficking purchases made with cash (which seems negligible at best). Plus, there are many other things he could do with the remaining sum not spent on carbon offsetting, resulting in a net good (especially compared to the other job opportunities he could have taken, many of which have comparable negative effects).

Skills in Charity Evaluation ≠ Skills in Fraud Detection in Friends

The same author also asks, "If these 'experts' failed to see what appears to be outright fraud committed by someone they were close to, why should we look to these utilitarians to learn how to be effective with our philanthropy?" This is again a strange conditional. Admittedly, I have not had many friends who committed billions of dollars' worth of fraud (perhaps the author has more experience), but I would not expect them to go to their close friends and say, "Hey, I'm committing fraud with billions of dollars, what do you think?" Acts like SBF's are done in desperation on a sinking ship, like a mouse backed into a corner, or someone with a gambling habit (especially apropos for the given situation). You get deeper into debt and take more risks, assuming and desperately hoping that it will work out in the next round. Repeat until bankruptcy. This is not something you go telling all your friends about (instead, you lie and try to siphon money from them, as was recently done by a Twitch scammer).

In addition, the skills and techniques it takes to assess the effectiveness of charities are quite different from the skills it takes to discover that your friend is committing massive fraud with his business. The reason we should look to EAs to be effective in philanthropy is that they have good evidence for charity effectiveness. Randomized controlled trials (and other comparable methods) are not exactly tools optimized for detecting fraud in friends' businesses.

Now, was there nothing suspicious about SBF prior to this point? No. There was some reason for suspicion. And of course, hindsight is 20/20. EAs evidently attempted to evaluate SBF and his ethical approach in 2018. I'm unsure of the details, and I don't know how much SBF's behavior changed over four years. As I mentioned earlier, like the desperation of a gambler, the risks and bad behavior likely increased rapidly over time, leading to the present failure. Thus, we would expect most of the negative behavior to be weighted heavily toward 2022 rather than 2018, when he was reviewed. This debacle will likely increase scrutiny of this type of behavior (as much as is possible across organizational lines), and with good reason. I won't say EA as an organization or community is blameless here. But that does not stop the EA framework from being the best (and correct) framework for evaluating charity effectiveness.

Without making this connection more explicit, this looks like a fallacious argument; however, like all informal fallacies, there is likely a reasonable argument form in the vicinity. Let us try to consider some of these possibilities.

EA Does Not Problematically Increase the Risk of Wrongdoing

Here is one way of putting the key inference of this argument: if something increases the probability of believing or doing something wrong, then it is bad or incorrect (and EA does this, so EA is incorrect). Of course, this is implausible, as then we couldn't do anything (cf. MacAskill's paralysis argument). If we always had to minimize the probability of engaging in wrongdoing (through violating constraints) or holding false beliefs, then we should do (or believe) nothing.[7] This is one standard argument for global skepticism. If the only epistemic value is minimizing false beliefs, then having zero beliefs would ensure you have the minimum number of false beliefs, namely zero. This approach is clearly incorrect, since we do have knowledge and it is permissible to get out of bed in the morning.
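The skeptical reasoning can be rendered as a trivial optimization (my gloss): if the only epistemic goal is to minimize |{p in B : p is false}| over belief sets B, then the empty belief set B = ∅ guarantees the minimum of zero false beliefs, so the "optimal" agent believes nothing. The lesson is that there must be at least two epistemic values in play, believing truths and avoiding falsehoods, just as there must be more to practical rationality than minimizing the chance of wrongdoing.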

Here's another reductio: becoming a deontologist increases the probability that you will believe we have a deontological requirement to punch every stranger we see in the face. Consequentialism does not include deontological requirements while deontology does, so deontologists must put higher credence in variants of deontology, including this absurd one. However, this is an implausible view that no one defends, so this mild increase in probability is uninteresting at best.

A second, more plausible version of the inference is: if something substantially increases the probability of believing or doing something wrong, then it is bad or incorrect (and EA does this, so EA is incorrect). Someone on Twitter seems to suggest something like this in response to Peter Singer's (too) brief article, identifying the criticism as being that EA is "a philosophy that tends to lead practitioners to believe the ends justify the means when that's not the case." In any case, this is an extremely difficult and unwieldy claim to deal with, as the empirical premise is quite difficult to substantiate. First of all, increases the probability compared to what? What is the base rate for how frequently someone does the relevant wrong in question? And what is the probability given that one is an EA? Do we only compare billionaires? Do we compare millionaires and beyond? Do we only compare SBF to other crypto businessmen?
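To see why the empirical premise is so unwieldy, it helps to state it in terms of conditional probabilities (my formulation, not the critics'). The claim would have to be something like:

  P(serious wrongdoing | accepts EA) >> P(serious wrongdoing | member of the relevant comparison class)

Everything turns on the unspecified comparison class (all people? billionaires? crypto executives?), and a single case like SBF's cannot establish the inequality for any choice of class.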

In the absence of a clearer and better-substantiated argument, it is hard to see how this argument can succeed. Maybe we can ask, of the people we know who made incorrect assessments of ends vs means and thought the ends sometimes justify the means, what percent of them accept the EA framework? Good luck with that investigation. Plus, we would inevitably end up doing armchair psychoanalysis, a notoriously unreliable method.

Furthermore, there is another response. Plausibly, a framework can substantially increase the probability of people doing something wrong even though the framework entails that we should not do that thing. In such a case, it is hard to see why the framework goes in the trash if it gives the correct results, even if in practice people's attempted implementations end up doing the wrong thing.

To see this, consider the difference between a criterion of rightness, which is how we evaluate and conclude whether an action is morally right or wrong (as a third party), and a decision-making procedure, which is the method an agent consciously implements when deciding what to do. This is a standard distinction in normative ethics that diffuses various kinds of objections, especially those having to do with improper motivations for action. It may be that the decision procedure that was implemented is wrong, but this does not show that the normative or radical EA's criterion of rightness is incorrect. I suspect that Richard Chappell's meme about this distinction is actually a reference to this (or a closely related) mistake, since his other tweets and blog posts around the same time refer to similar errors in commentary on EA and the FTX scandal (such as this thread on a possible connection between guilt-by-association arguments and the inability to distinguish a criterion of rightness from a decision procedure).

Figure 3: Richard Chappell’s meme on bad EA criticism, referring to philosophers on Twitter that confuse the two

In summary, to answer Eric Levitz’s question “Did Sam Bankman-Fried corrupt effective altruism, or did effective altruism corrupt Sam Bankman-Fried?”, the answer is “Neither.” SBF did not act in a way aligned with EA, whether he thought he was or not. Until a better argument is forthcoming that SBF’s incorrect approach implies that EA’s framework is flawed, I conclude very little about the EA framework.

The EA framework is well-motivated, even on non-consequentialist grounds (as we will see later), and EA is an excellent way to help others through your charitable donations and career. To the extent that the FTX scandal makes EA look bad, it is only because of improper reasoning. There are likely additional institutional enhancements that can be implemented as protections against these kinds of disasters, but my intent here was to investigate the EA framework more than the EA practice in all of its institutional details, to which I am not privy. Therefore, I can conclude that the EA framework is correct and unmoved by the SBF and FTX scandal.

Genetic Utilitarian Arguments Against Effective Altruism

There is another set of claims I will assess in these critical articles related to effective altruism’s connection to utilitarianism in the form of historical and intellectual origins. Inevitably, especially from opponents of utilitarianism, any connection to utilitarianism is deemed hazardous and not to be touched with a ten-foot pole. For example, I have had several Christian friends be terrified of effective altruism because they hear that Peter Singer is connected to it.[8]

Genetic Personal Argument Against EA

Let me briefly consider this genetic personal argument against EA. The best version of the principle needed for an inference against EA is probably something like: "if a person is wrong about the majority of claims you have heard from them, then the prior probability of that person being right about a new claim is fairly low." The principle should likely be restricted to claims encountered in a source that includes many more of that person's beliefs, along with their arguments for those positions; otherwise, you risk making inferences from an exaggerated caricature, and the principle would be false. Even then, the principle would only tell you the prior probability. You need to update your background knowledge with further evidence to get the posterior probability of any given claim, so it remains important to actually investigate the person's reasons for believing the new claim before rendering a definitive judgment. Therefore, EA cannot be dismissed on a personal basis without assessing the arguments for EA, such as those referenced in the independent motivation section.
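The prior/posterior structure here is just Bayes' theorem. For a hypothesis H (say, "EA is correct") and new evidence E (the person's actual arguments for the claim):

  P(H | E) = P(E | H) × P(H) / P(E)

A low prior P(H) inherited from the source's poor track record can be swamped by strong evidence E, which is why the arguments for EA must be weighed on their own merits before any verdict is rendered.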

Genetic Precursor Argument Against EA

There may be another genetic argument raised against EA, which is that “the historical and intellectual precursors to EA involved utilitarian commitments, and so EA is inextricably linked to utilitarianism. Further, utilitarianism is false, and therefore EA is false.” I will examine each part of this argument in turn.

First, we need to examine the factual basis of the historical and intellectual connection between EA and utilitarianism. A number of recent critical articles point out the genetics of the EA tradition. Facts about this connection are worth pointing out; yet it is important to clarify the contingent nature of the linkage, especially given how despised utilitarianism is by the average person. If this clarification was neglected as a kind of "poisoning the well" or "guilt by association," shame on the author, though I do not make that assumption.

The Economist (non-paywalled) writes, "The [EA] movement…took inspiration from the utilitarian ethics of Peter Singer." It would be more accurate to say that the movement took inspiration from arguments using common-sense intuitions from Peter Singer, and Peter Singer is a utilitarian. Of course, it is much less zingy to acknowledge that the arguments from Singer that inspired EA (from his "Famine, Affluence, and Morality") were not utilitarian in nature, as we discuss in more detail in the utilitarian-independent motivation subsection of the Effective Altruism is Not Inherently Utilitarian section.

Rebecca Ackermann in Slate writes, "The [EA] concept stemmed from applied ethics and utilitarianism, and was supported by tech entrepreneurs like Moskovitz." This is a strangely worded sentence. It would make more sense to say it stemmed from arguments in applied ethics, since applied ethics is merely a field of inquiry. Moreover, utilitarianism is a moral theory, so you could say EA is an implication of utilitarianism, but proposing that EA stemmed from a moral theory is a bit weird. That is mostly nit-picking, and I also have absolutely no idea what the support from tech entrepreneurs has to do with anything. I guess the "technology" audience cares? Other articles appear to poison the well against EA merely by saying rich tech billionaires support it, as though everything tech billionaires support is automatically incorrect, though this article may not be attempting to make such a faulty 'argument'.

Rebecca Ackermann in MIT Technology Review writes, "EA's philosophical genes came from Peter Singer's brand of utilitarianism and Oxford philosopher Nick Bostrom's investigations into potential threats to humanity." As above, the 'genes' of utilitarianism are connected to EA through the person of Peter Singer but not through the arguments of Peter Singer, which is an incredibly important distinction. EA does not rely on his brand of utilitarianism, and it is important to clarify this non-reliance to a public that wants to throw up any time the word "utilitarianism" is mentioned. Also, Bostrom's existential risks are not even a core part of EA; they are a more recent development. From my perspective, this development is much less part of the genes of EA (though Bostrom was writing about longtermism- and extinction-related topics before EA existed) and more a grafting into EA, at least as far as the weight or significance existential risks now carry.

Now, it is quite possible that the authors of these articles were merely noting the historical roots of the movement, which is of perfectly legitimate interest. Given that the average person finds utilitarianism detestable, however, it would be important for neutrality's sake to clarify that effective altruism is not, in fact, wedded to the exact beliefs of the originators or even the current leaders.

If this connection was made to critique EA, it amounts to a kind of genetic argument against effective altruism. Whether these authors were (implicitly) attempting this approach is not my primary concern, and I will not comment either way, but since this is a fairly popular type of argument to make, I will investigate it. In fact, the general structure of recent critiques of EA in view of SBF and FTX does seem to be a guilt-by-association argument, which I explored in the SBF Association Argument Against Effective Altruism section. My best attempted reconstruction of the genetic utilitarian argument is of the form:

  1. If the originators and/or leaders of a movement espouse a view, then the movement is ineliminably committed to that view
  2. The originators and/or leaders of the EA movement espouse utilitarianism
  3. Therefore, the EA movement is ineliminably committed to utilitarianism
  4. If a movement is ineliminably committed to a false view, then the movement has an incorrect framework
  5. Utilitarianism is false
  6. Therefore, the EA movement has an incorrect framework

A Movement’s Commitments are not Dictated by the Belief Set of Its Leaders

One problem with this argument is that premise (1) is obviously false. Regarding the originators, movements can change. Additionally, leaders have many beliefs that are 1) unrelated to the movement, and 2) even when related, may neither imply nor be implied by the framework. This can be true even if the originators and leaders all share some set of views P1 = {p1,p2,p3…p7}, as the movement may be characterized by a subset of those views P2 = {p1,p2}, where P2 does not imply {p3…p7}. This is likely the case in the effective altruism movement, as P2 does not encapsulate an entire global moral structure and so does not imply the entirety of the leaders’ related views. Further, there can be a common cause of the beliefs of the leaders that is non-identical to the common cause of the beliefs of the core of the movement.
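
To make the structure explicit, here is a minimal sketch in my own notation (the variables are just the P1/P2 example above):

\[ P_1 = \{p_1, p_2, p_3, \ldots, p_7\}, \qquad P_2 = \{p_1, p_2\} \subset P_1, \qquad P_2 \nRightarrow \{p_3, \ldots, p_7\} \]

A leader may believe all of P1, but if the movement is characterized only by P2, a member can reject p3 through p7 wholesale and remain in good standing.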

Another way to address the concern above is to consider the core of the theory vs auxiliary hypotheses, as discussed in philosophy of science. If P2 is the core of effective altruism, then beliefs in P1 that are not in P2 are auxiliary hypotheses that can be freely rejected by those in the movement while remaining true to EA.

There is a parallel in Christianity as well. There is substantial diversity in the movement that is Christianity, yet there is a common core of essential commitments of Christianity, called “essential doctrine”. These commitments constitute the core of the theory of Christian theism. Beyond that, we can have reasonable disagreements as brothers and sisters in Christ. As 17th century theologian Rupertus Meldenius said, “In Essentials Unity, In Non-Essentials Liberty, In All Things Charity.”

This disagreement extends from laymen to pastors and “leaders” of the faith as well. I think this should be fairly obvious to people who have spent much time in Christian bubbles. Laymen can and do disagree with pastors of their own denomination, pastors of other denominations, the early church fathers, etc., and they remain Christian without rejecting essential doctrine. (Of course, some church leaders and laymen are better than others at not calling everyone else heretics.)

EA Leaders are Not All Utilitarians

The second point of contention with this argument is that premise (2) is also false. William MacAskill can rightly be called both an originator and a leader of EA, and he does not espouse utilitarianism. He thinks that it is sometimes better not to do what results in the overall greatest moral good. He builds in side-constraints (though sophisticated forms of utilitarianism can do a limited version of this, and consequentialism can do precisely this in effect). Furthermore, he builds in uncertainty in the form of a risk-averse expected utility function with credences distributed between (at least) utilitarianism and deontology, which motivates side-constraints.

In this section, we examined two arguments against effective altruism in view of its connection to utilitarianism, finding both arguments substantially lacking. Taking the previous two sections together, we do not find a successful argument against effective altruism due to its theoretical or historical connection to utilitarianism. EA remains a highly defensible intellectual project.

Do the Ends Justify the Means?

There is a need for clarity around “ends-justifying-means” reasoning and claims like “the end doesn’t justify the means.” Many recent criticisms make this claim in response to the FTX scandal. They connect effective altruism to what they see as “ends-justifying-means” reasoning in Sam Bankman-Fried (SBF) and use that as a reductio against effective altruism.

This argument fails on virtually every point.

First, let’s see what people have said about it. Eric Levitz in the Intelligencer says that “the SBF saga spotlights the philosophy’s greatest liabilities. Effective altruism invites ‘ends justify the means’ reasoning, no matter how loudly EAs disavow such logic.” Eric also writes, “Effective altruists’ insistence on the supreme importance of consequences invites the impression that they would countenance any means for achieving a righteous end. But EAs have long disavowed that position.” Rebecca Ackermann in Slate mentions, “EA needs a clear story that rejects ends-justifying-means approaches,” referencing Dustin Moskovitz’s Tweets.

As the authors above mention, EA thinkers typically, on paper at least, disavow “ends justify the means” reasoning. MacAskill, in a recent Twitter thread, says, “A clear-thinking EA should strongly oppose ‘ends justify the means’ reasoning.” Holden Karnofsky, co-founder of Open Philanthropy and GiveWell, says in a recent forum post, “I dislike ‘end justify the means’-type reasoning.” This explicit rejection is not solely in the wake of the downfall of FTX; MacAskill 2019 in “The Definition of Effective Altruism” says, “as suggested in the guiding principles, there is a strong community norm against ‘ends justify the means’ reasoning.”[9] I talk more substantively about the use of side constraints in EA in the 4th difference between EA and utilitarianism below.

Of course, critics of EA readily acknowledge that EA, on paper, disavows ends-means reasoning. The problem, they think, is that EA “invites” ends-means reasoning, or that EA “invites the impression that they would countenance any means for achieving a righteous end” over and against EA’s claims. 

All of the above discussion fails to acknowledge two key points, a failure owing to the ambiguity in what “ends justify the means,” in fact, means. These two points become obvious once we adequately explore ends-means reasoning[10]; they are: (1) some ends justify some means, and (2) “ends justify the means” is a problem for every plausible moral theory.

Some Ends Justify Some Means

Obviously, some ends justify some means. Let’s say I strongly desire an ice cream cone and consuming it would make me very happy for the rest of the day with no negative results. Call me crazy, but I submit to you that this end (i.e., Ice Cream) justifies the means of giving $1 to the cashier. If this is correct, then some ends justify[11] some means. Therefore, it is false that “the end never justifies the means.”

Various ethicists have pointed this out. Joseph Fletcher says that people “take an action for a purpose, to bring about some end or ends. Indeed, to act aimlessly is aberrant and evidence of either mental or emotional illness.”[12] Though it may be that this description, in line with the “Standard Story” of action in action theory, entails a teleological conception of reasons that has distorted debates in normative ethics in favor of consequentialism, as Paul Hurley has argued.[13]

Nonetheless, Fletcher is right that even this commonsense thinking on everyday justification for any action “leads one to wonder how so many people may say so piously, ‘The end cannot justify the means.’ Such a result stems from a misinterpretation of the fundamental question concerning the relationship between ends and means. The proper question is – ‘Will any end justify any means?’ – and the necessary reply is negative.”[14] It is obviously false that any end justifies any means, and everyone in the debate accepts that, including the hardcore utilitarian.

What happens when we raise the stakes of either the end or the means? 

Some Ends Justify Trivially Negative Means

We can consider raising the moral significance of the end in question. Let us consider the end of preventing the U.S. from launching nuclear missiles at every other country on the globe (i.e., Nuclear Strike). Although lying is generally not morally good, I submit that it is morally permissible to fill in your birthday incorrectly on your Facebook account if it prevents Nuclear Strike. An end of great moral magnitude like Nuclear Strike justifies a mildly negative means like a single instance of deception on a relatively unimportant issue. Therefore, a very good moral end justifies a mildly negative means.

Similarly, when James Sterba considers the Pauline Principle that we should not do evil so that good may come of it, he acknowledges it is “rejected as an absolute principle…because there clearly seem to be exceptions to it.” Sterba gives two seemingly obvious cases where doing evil so that good may come “is justified when the resulting evil or harm is: (1) trivial (e.g., as in the case of stepping on someone’s foot to get out of a crowded subway) or (2) easily reparable (e.g., as in the case of lying to a temporarily depressed friend to keep him from committing suicide).”[15]

No End Can Justify Any Means

Further, there is no end that can justify any means. For any given end, we can consider means that are far worse. For example, consider the end of saving 1 million people from death. Is any means justified to save them? Of course not. For example, killing 1 billion people would not be justified as a means to save 1 million people from death. For any end, we can consider means that are 10x as bad as the end is good, and the result is that the means is not justified. From one perspective, in the scenario of killing 1 to save 1 million, the absolutist deontologist justifies terrible means (i.e., letting 1 million people die) to the end of avoiding killing 1; of course, they would not word it this way, but it amounts to the same thing. Ultimately, for a particular end, no matter how good, it is false that any means possible to achieve that end would be morally permissible.

As Joseph Fletcher (a consequentialist) said, “‘Does a worthy end justify any means? Can an action, no matter what, be justified by saying it was done for a worthy aim?’ The answer is, of course, a loud and resounding NO!” Instead, “ends and means should be in balance.”[16]

A Sufficiently Positive End Can Justify a Negative Means

Let us investigate further just how negative a means can be justified. Let us reconsider Ice Cream with a more negative means. Clearly, Ice Cream does not justify shooting someone non-fatally in the leg to get the ice cream cone. For an end to even possibly justify a non-fatal shooting, it would have to be something much more significant. Is there any scenario that would make a non-fatal shooting morally permissible? I think there is. Consider a scenario that is rigged such that if you non-fatally shoot a person, one billion people will be saved from a painful death. It should be obvious that preventing the death of a billion people does justify shooting someone non-fatally in the leg. Therefore, it is possible for a massively positive end to justify a negative means.

Uh oh! Did I just admit I am a horrible person? I think it is okay to shoot someone (non-fatally) if the circumstances justify it, after all. Of course, most people think it is permissible to kill in some cases, such as self-defense or limited instances of just war.[17] After explaining the typical EA stance of deferring to constraints (citing a document by MacAskill and Todd) and noting that MacAskill said SBF violated them, Eric Levitz in the Intelligencer complains that “yet, that same document suggests that, in some extraordinary circumstances, profoundly good ends can justify odious means.” My response is, “Yes, and that is trivially correct.” If I could prevent 100,000,000 people from being tortured and killed by slapping someone in the face, I would and should do it. And that shouldn’t be controversial.

As MacAskill and Todd note (which the author also quotes), “Almost all ethicists agree that these rights and rules are not absolute. If you had to kill one person to save 100,000 others, most would agree that it would be the right thing to do.” If you would sacrifice a million people to avoid killing one person, you are the one that needs to have your moral faculties reexamined. Killing a person, while more evil than letting a person die, is not 999,999 times more evil than letting one person die. Probably, the value difference between killing a person and letting a person die is much less than the value of a person, i.e., the disvalue of letting a person die. Therefore, letting two people die is already worse than killing one person, and it is even more obvious that letting 1,000,000 people die is worse than killing one person.
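
To make the arithmetic behind this explicit (a rough sketch with stipulated quantities, not a full axiology): let L be the disvalue of letting one person die and K the disvalue of killing one person, with K = L + Δ for some Δ > 0. If, as suggested above, Δ < L, then

\[ 2L = L + L > L + \Delta = K, \]

so letting two people die is already worse than killing one, and letting 1,000,000 die (a disvalue of 1,000,000 × L) is worse by an enormous margin.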

I do not believe I have said much that is particularly controversial when looking at these manufactured scenarios.[18] We are stipulating in these tradeoff considerations that the tradeoff is actually a known tradeoff and there is no other way, etc.

In sum, the ends don’t justify the means…except, of course, when they do. The ends neither never justify the means nor always justify the means, and virtually no one in this debate thinks otherwise. Almost everyone thinks ends sometimes justify the means (depending on the means). What we have to do is assess the ends and assess the means to discern exactly which means are justified for which ends.

Absolutism is the Problem

This whole question has very little to do with consequentialism or deontology, contrary to popular belief, and everything to do with absolute vs relative ethics (not individually or culturally relative, but situationally relative).[19] There is a debate internal to non-consequentialist traditions about when the ends justify the means. For example, within deontology there is what is called threshold or moderate deontology, and in natural law theory there is a view called proportionalism. Neither of these is an absolutist view, and both count the results of actions as justification for some means. Internal to these non-consequentialist families of theories, typically characterized as absolutist, remains the exact same debate about ends-means reasoning. In fact, the most plausible theories in all moral families allow extreme (implausible but possible) cases to override otherwise absolute rules.

For example, it is uncommon to find a true absolutist deontologist among contemporary ethicists. As Aboodi, Borer, and Enoch point out, “hardly any (secular) contemporary deontologist is an absolutist. Contemporary deontologists are typically ‘moderate deontologists,’ deontologists who believe that deontological constraints come with thresholds, so that sometimes it is impermissible to violate a constraint in order to promote the good, but if enough good (or bad) is at stake, a constraint may justifiably be infringed.”[20] In other words, almost all (secular) deontologists also think the ends sometimes justify the means. Absolutism is subject to numerous paradoxes and counterexamples discussed previously and in the next subsection (see Figure 4).

Figure 4: Absolutism in a nutshell

Paradoxes of Absolute Deontology

Why is it that even deontologists think there are exceptions to constraints? Because absolute deontology is subject to substantial paradoxes and implausible implications that render it unpalatable, even worse than the alternatives. One example is the problem of risk: any action carries some probability of violating absolute constraints, and no action gives 100% certainty of avoiding violations. Therefore, it looks like the absolutist must either say that any action that produces a risk of violation is wrong, leading to moral paralysis since you would be prohibited from taking any action, or pick an (arbitrary) risk threshold, which implies that, in fact, two wrongs do make a right, and two rights make a wrong (in certain cases).[21] There have been responses, but what is perhaps the best response, using stochastic dominance to motivate a risk threshold, is still subject to a sorites paradox that again appears to render absolutism false.[22] MacAskill offers a distinct but related argument from cluelessness that deontology implies moral paralysis.[23]
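
A toy illustration of the “two rights make a wrong” half of this dilemma (the numbers and the independence assumption are mine): suppose the absolutist picks a risk threshold of t = 0.1, and actions A and B each carry an independent 0.06 probability of violating a constraint. Each action is individually permissible, yet performing both gives a violation probability of

\[ 1 - (1 - 0.06)^2 = 1 - 0.8836 = 0.1164 > 0.1, \]

so two individually permissible actions are jointly impermissible.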

Alternatively, we can merely consider cases of extreme circumstances just like the one I gave earlier. A standard example is lying to a murderer at your door about the location of his intended victim hiding in your house, a lie which Kant famously and psychopathically rejected. Michael Huemer considers a case where aliens will kill all 8 billion people on earth unless you kill one innocent person. Should you do so? The answer, as Huemer and any sane person agrees, is obviously yes.[24] (If the reader still thinks the answer is no, add another 3 zeros to the number of people you are letting die and ask yourself again. Repeat until you reject absolutism.) These types of cases show quite quickly and simply that absolutism is not a plausible position in the slightest, and it is justified to do something morally bad if it results in something good enough (or, alternatively, prevents something far worse). There are other problems for absolutist deontology that I neglect here.[25]

Of course, in a trivial sense, consequentialists are absolutist: it is always wrong to do something that does not result in the most good. However, that is not what anyone means when they call theories absolutist, which refers to theories that render specific classes of actions (e.g., intentional killing, lying, torture, etc.) as always impermissible.[26]

In summary, any plausible moral theory or framework has to reckon with the fact that something negative is permissible if it prevents something orders of magnitude worse. When people say “the end doesn’t justify the means” when condemning an action, they, in practice, more frequently mean those ends don’t justify those means. Equivalently, they mean that the ends don’t justify the means in this circumstance, rather than never, as the latter results in a completely implausible view.

Application to the FTX Scandal

So, where does that leave us in the FTX scandal? Everyone in the debate can say that, in this case, the ends did not justify the means. While criticizing EA, Eric Levitz in the Intelligencer appears to challenge this, saying perhaps SBF may reasonably be considered justified if there are exceptions to absolute rules: “In ‘exceptional circumstances,’ the EAs allow, consequentialism may trump other considerations. And Sam Bankman-Fried might reasonably have considered his own circumstances exceptional,” describing the uniqueness of SBF’s case. Levitz asks, “If killing one person to save 100,000 is morally permissible, then couldn’t one say the same of scamming crypto investors for the sake of feeding the poor (and/or, preventing the robot apocalypse)?” If I were to put this into an argument, it may be: (1) if the ends sometimes justify the means, then SBF’s actions are justified; (2) if EA is true, then the ends sometimes justify the means; (3) therefore, if EA is true, then SBF’s actions are justified (or reasonably considered so).

There are several problems here (located in premise 1). First, it is not consequentialism that may trump other considerations, but consequences.[27] The significance of the difference is that any moral theory can say (and the most plausible ones do say) that consequences can, in the extreme, trump other considerations, as we saw earlier. Second, SBF’s circumstances may be exceptional in the generic sense of being rare and unique, but the question is whether they are exceptional in the relevant sense: that his circumstances are such that violating the constraint against illegal actions or fraud would result in a sufficient overall good to warrant breaking the constraint. It is a general rule that fraud is not good in the long run for your finances or your moral evaluation.

Third, it is much too low a bar to ask whether it was reasonable for SBF to think that his circumstances were exceptional in the relevant sense; we are (or should be) much more interested in whether SBF was correct in thinking his circumstances were exceptional in the relevant sense. An assessment of irrationality requires us to know his belief structure and evidence base for this primary claim, as well as the many background beliefs that informed them (and possibly the correct view of decision theory, which is highly controversial).

Fourth, one can say anything one wants (see the next section, Can EA/Consequentialism/Longtermism be Used to Justify Anything?). We are and should be interested in what one can accurately say about such a comparison between killing one person to save 100,000 and ‘scamming crypto investors for the sake of feeding the poor (and/or, preventing the robot apocalypse).’ Fifth, it is unlikely that one can accurately say that these are relevantly similar, so it is incredibly unlikely that SBF was correct in his assessment. This rhetorical question comparing saving 100,000 lives to scamming crypto investors does very little to demonstrate otherwise.

SBF’s approach, which approved of continuing double-or-nothing bets for eternity, evidently did not account for the fallout associated with the nearly inevitable bankruptcy and how that would set the movement back, which would render each gamble less than net zero. Secondly, almost everyone agrees his approach was far too risk-loving. Nothing about EA or utilitarianism or decision theory, etc. suggests that we should take this risk-loving approach. As MacAskill and other EA leaders argue, we should be risk averse, especially with the types of scenarios SBF was dealing with (relevant EA forum post). Plus, there is the disvalue associated with breaking the law and the chance of further lawsuits.
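
A toy model of why “double-or-nothing for eternity” is self-defeating (illustrative numbers of my own, not SBF’s actual odds): suppose each bet doubles the bankroll with probability p = 0.6 and loses everything otherwise. Each individual bet has positive expected value (1.2 times the stake), yet the probability of remaining solvent after n consecutive bets is

\[ P(\text{solvent after } n \text{ bets}) = 0.6^{\,n} \rightarrow 0 \text{ as } n \rightarrow \infty, \]

so a policy of never stopping goes bankrupt with probability 1, at which point the fallout (lost funds plus damage to the movement) dominates the ledger.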

Levitz appears to accept the above points and concedes that it would be unfair to attribute SBF’s “bizarre financial philosophy” to effective altruism, and that EA leaders would likely have strongly disagreed with implementing this approach with his investments. Given Levitz’s acceptance of this, it is unclear what the critique is supposed to be from the above points. Levitz does move to another critique though, which is that EAs have fetishized expected value calculations, which I will address in the next section.

In summary, the ends sometimes justify the means, but violating constraints almost never actually produces the best result, as EA leaders are well-aware. Just because SBF made a horrible call does not mean that the EA framework is incorrect, as the typical EA framework makes very different predictions that would not include such risk-loving actions.

Effective Altruism is Not Inherently Utilitarian

There was a lot of confusion in these critiques about the connection between utilitarianism and effective altruism. Many of these articles assume that effective altruism implies or requires utilitarianism, such as (not including the quotes below) Erik Hoel, Elizabeth Weil in the Intelligencer, Rebecca Ackermann in MIT Technology Review (see a point-by-point response here), Giles Fraser in the Guardian, James W. Lenman in IAI News, and many more. I will survey and briefly respond to some individual quotations to this effect and showcase the differences between effective altruism and utilitarianism. Throughout, I will extensively refer to MacAskill’s 2019 characterization of effective altruism in “The Definition of Effective Altruism.”

As a first example, Linda Kinstler in the Economist (non-paywalled) writes, “[MacAskill] taught an introductory lecture course on utilitarianism, the ethical theory that underwrites effective altruism.” Nitasha Tiku in The Washington Post (non-paywalled) writes, “[EA’s] underlying philosophy marries 18th-century utilitarianism with the more modern argument that people in rich nations should donate disposable income to help the global poor.” It is curious to call it 18th century utilitarianism when the version of utilitarianism EA is closest to (yet still quite distinct from) is “rule utilitarianism”, only hints of which were found in the 19th century, with its primary development in the 20th century. Furthermore, while it may be a modern development that one can easily transfer money and goods across continents, it is certainly no modern argument that the wealthy should give disposable income to the poor, including across national lines. The Parable of the Good Samaritan explicitly advocates helping across national lines, the Old Testament commanded concern for the poor by those with resources (for a fuller treatment, see Christians in an Age of Wealth: A Biblical Theology of Stewardship), and “the early Church Fathers took luxury to be a sign of idolatry and of neglect of the poor.”[28] The fourth century St. Ambrose condemns the rich’s neglect of the poor: “You give coverings to walls and bring men to nakedness. The naked cries out before your house unheeded; your fellow-man is there, naked and crying, while you are perplexed by the choice of marble to clothe your floor.”[29]

Timothy Noah in The New Republic writes, “E.A. tries to distinguish itself from routine philanthropy by applying utilitarian reasoning with academic rigor and a youthful sense of urgency,” and also “Hard-core utilitarians tend not to concern themselves very much with the problem of economic inequality, so perhaps I shouldn’t be surprised to find little discussion of the topic within the E.A. sphere.” It is blatantly false that economic inequality is of little concern to utilitarians (as explained in the link that the author provided himself), including “hard-core” ones, as the state of economic inequality in the world leads to great suffering and death. Now, it is correct that utilitarians do not see equality as an intrinsic good, but merely an instrumental good. Yet, I do not see the problem with accepting equality’s instrumental value while rejecting its intrinsic value; it would be surprising if, on a perhaps extreme version of egalitarianism, two equally unhappy people were better than one slightly happy person and one extremely happy person. Alternatively, we should be much more concerned that people’s basic needs are met, so they are not dying of starvation and preventable disease, than that, if everyone already had their needs met, the rich have equal amounts of frivolous luxuries, as sufficientarianism well-accommodates. Finally, as MacAskill 2019 notes, EA is actually compatible with utilitarianism, prioritarianism, sufficientarianism, and egalitarianism (see next section).

Eric Levitz in the Intelligencer states, “Many people think of effective altruism as a ruthlessly utilitarian philosophy. Like utilitarians, EAs strive to do the greatest good for the greatest number. And they seek to subordinate common-sense moral intuitions to that aim.” EAs are not committed to doing the greatest good for the greatest number (see the next section for clarification), and they do not think any EA commitments subvert commonsense intuitions. In fact, EAs attempt to take common sense intuitions seriously along with their implications. The starting point for EA was originally that, if we can fairly easily save a drowning child, we should.[30] This is hardly a counterintuitive claim. Then, upon investigating the relevant similarities between this situation and charitable giving, we get effective altruism.

Jonathan Hannah in Philanthropy Daily asks, “why should we look to these utilitarians to learn how to be effective with our philanthropy?” First, we should look to EAs because EAs have evidence backing up claims of effectiveness. Secondly, again, EAs are not committed to utilitarianism, though many EAs are, in fact, utilitarians.

Theo Hobson in the Spectator claims, “Effective altruism is reheated utilitarianism… Even without the ‘longtermist’ aspect, this new utilitarianism is a thin and chilling philosophy.” Beyond the false utilitarianism claim, the accusation of thinness is surprising, since there are substantial and life-changing implications of taking EA seriously. These are profound implications that have resulted in protecting 70 million people from malaria, giving $100 million directly to those in extreme poverty, giving out hundreds of millions of deworming treatments, setting 100 million hens free from a caged existence, and much more. GiveWell estimates that the $1 billion donated through them will collectively save 150,000 lives.

The aforementioned claims are misguided, as not everything that is an attempt to do the morally best thing is utilitarianism (see Figure 5).

Figure 5: Utilitarianism is a specific moral theory (or, rather, a family of specific theories), actually

Now, I seek to make good on my claim that effective altruism and utilitarianism are distinct. There are six things that distinguish EA from a reliance on utilitarianism, and I will examine each in turn:

  1. [Minimal] EA does not make normative claims
  2. EA is independently motivated
  3. EA does not have a global scope
  4. EA incorporates side constraints
  5. EA is not committed to the same “value theory”
  6. EA incorporates moral uncertainty

[Minimal] EA Does Not Make Normative Claims

Effective altruism is defined most precisely in MacAskill 2019, who clarifies explicitly that EA is non-normative. MacAskill says, “Effective altruism consists of two projects [an intellectual and a practical], rather than a set of normative claims.”[31] The idea is that EA is committed to trying to do the best with one’s resources, but not necessarily that it is morally obligatory to do so. Part of the reason for this definition is to be in alignment with the preferences and beliefs of those in the movement. There were surveys of both leaders and members of the movement, in 2015 and 2017 respectively, which suggested a non-normative definition may be more representative of current EA adherents. Furthermore, it is more ecumenical, which is a desirable trait for a social movement as it expands.

Of course, a restriction to non-normative claims is limited, and Singer’s original argument that prompted many towards EA was explicitly normative in nature. His premises included talk of moral obligation. Many people in EA do think it is morally obligatory to be an EA. Thus, I think it is helpful to distinguish between different types or levels of EA, including minimal EA, normative EA, radical EA, and radical, normative EA.

Minimal EA makes no normative claims, while normative EA includes conditional obligations.[32] Normative EA claims that if one decides to donate, one is morally obligated to donate to the most effective charities, but it does not indicate how much one should donate. This could be claimed to be absolute, a general rule of thumb, or somewhere in between. Radical EA, on the other hand, includes unconditional obligations, but no conditional obligations. Brian Berkey, for example, argues that effective altruism is committed to unconditional obligations of beneficence.[33] Radical EA, as I characterize it, says one is morally obligated to donate a substantial portion of one’s surplus income to charities. Finally, radical, normative EA (RNEA) combines conditional and unconditional obligations of beneficence, claiming one is morally obligated to donate a substantial portion of one’s surplus income to effective charities. I expand on and defend these further elsewhere.[34]
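
One rough way to symbolize the four variants (my own shorthand): let D(x) = ‘x donates’, E(x) = ‘x donates to the most effective charities’, S(x) = ‘x donates a substantial portion of surplus income’, and O(…) = ‘it is obligatory that…’. Then:

  • Minimal EA: no O-claims at all
  • Normative EA: D(x) → O(E(x))
  • Radical EA: O(S(x))
  • Radical, normative EA: O(S(x) ∧ E(x))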

Thus, while minimal EA does not include normative claims, there are expanded versions of EA that include conditional and/or unconditional obligations of beneficence. Minimal EA, then, constitutes the core of the EA theory, while these claims of obligations constitute auxiliary hypotheses of the EA theory. Since the core of EA does not include normative claims, it cannot be identical to (any version of) utilitarianism, whose core includes a normative claim to maximize impartial welfare.

EA is Independently Motivated

Effective altruism is distinct from utilitarianism in that EA can be motivated on non-consequentialist grounds. In fact, even Peter Singer’s original argument, inspiring much of EA, was non-consequentialist in nature. Singer’s original “drowning child” argument relied only on a simple, specific thought experiment, proposed midlevel principles (principles that stand in between specific cases and moral theories) to explain the intuition from the thought experiment, and derived a further conclusion by comparing relevant similarities between the thought experiment and a real world situation, all of which is standard procedure in applied ethics. Of course, this article has been critically responded to in the philosophy community many, many times, some responses more revolting[35] than others,[36] but many (such as I) still find it a compelling and sound argument that also demonstrates EA’s independence from utilitarianism.

Theory-Independent Motivation: The Drowning Child

Singer’s original thought experiment is: “if I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing.”[37]

Singer proposes two variants[38] of a midlevel principle that would explain this obvious result:

  • If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.

He also proposed a weaker principle,

  • If it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do it.

These principles are extremely plausible, are quite intuitive, and would explain why we have the intuitions we do in various rescue cases comparable to the above. Next, Singer defended why this principle can be extended to the case of charitable giving by examining the relevant similarities. The reasoning is that, given the existence of charities, we are in a position to prevent something bad from happening, e.g., starvation and preventable disease. We can do something about it by ‘sacrificing’ our daily Starbucks, monthly Netflix subscription, yearly luxury vacations, or even more clearly unnecessary purchases, for example additional sports cars or boats that are not vocationally necessary. None of these things are (obviously) morally significant, and they are certainly not of comparable moral importance to the lives of other human beings. Therefore, we have a moral obligation to take action in donating to effective charities, particularly from the income that we are using for surplus items.

Notice that we did not appeal to any kind of utilitarian reasoning in the above argument, and one can accept either of Singer’s midlevel principles without accepting utilitarianism. This example shows how effective altruism can be independently motivated apart from utilitarianism. This fact was pointed out previously by Jeff McMahan when he noticed that even philosophical critiques of EA make this false assumption of reliance on utilitarianism. McMahan, writing in 2016, said, “It is therefore insufficient to refute the claims of effective altruism simply to haul out [Bernard] Williams’s much debated objections to utilitarianism. To justify their disdain, critics must demonstrate that the positive arguments presented by Singer, Unger, and others, which are independent of any theoretical commitments, are mistaken.”[39]

Martin Luther’s Drowning Person

Interestingly, the Christian has a surprising connection to Singer’s Drowning Child thought experiment, as a nearly identical thought experiment and comparison was made by Martin Luther in the 16th century.[40] In his commentary on the 5th commandment “Thou shalt not kill” in The Large Catechism, Luther connects the commandment to Jesus’ words in Matthew 25, “For I was hungry and you gave me nothing to eat, I was thirsty and you gave me nothing to drink, I was a stranger and you did not invite me in, I needed clothes and you did not clothe me, I was sick and in prison and you did not look after me.” Luther then gives a drowning person comparison: “It is just as if I saw some one navigating and laboring in deep water [and struggling against adverse winds] or one fallen into fire, and could extend to him the hand to pull him out and save him, and yet refused to do it. What else would I appear, even in the eyes of the world, than as a murderer and a criminal?”

Luther condemns in the strongest words those who could “defend and save [his neighbor], so that no bodily harm or hurt happen to him and yet does not do it.” He says, “If…you see one suffer hunger and do not give him food, you have caused him to starve. So also, if you see any one innocently sentenced to death or in like distress, and do not save him, although you know ways and means to do so, you have killed him.” Finally, he says, “Therefore God also rightly calls all those murderers who do not afford counsel and help in distress and danger of body and life, and will pass a most terrible sentence upon them in the last day.”

Virtue Theoretic Motivation: Generosity and Others-Centeredness

Beyond a theory-independent approach to motivate EA, we can also employ a non-consequentialist theory, virtue ethics, to motivate EA. Some limited connections between effective altruism and virtue ethics have been previously explored,[41] but I will briefly give two arguments for effective altruism from virtue ethics. Specifically, I will argue from the virtues of generosity and others-centeredness for normative EA and radical EA, respectively. Thus, if both arguments go through, the result is radical, normative EA.

First, I assume the qualified-agent account of the criterion of right action[42] of virtue ethics given by Rosalind Hursthouse.[43] Second, I employ T. Ryan Byerly’s accounts of both generosity and others-centeredness.[44] Both of these, especially from the Christian perspective, are virtues. The argument from generosity is:

  1. An action is right only if it is what a virtuous agent would characteristically do
  2. A virtuous agent would characteristically be generous
  3. To be generous is to be skillful in gift-giving (i.e., giving the right gifts in right amounts to the right people)
  4. A charitable donation is right only if it is skillful in gift-giving
  5. A charitable donation is skillful in gift-giving only if it results in maximal good
  6. A charitable donation is right only if it results in maximal good (NEA)

The argument from others-centeredness is:

  1. An action is right only if it is what a virtuous agent would characteristically do
  2. A virtuous agent would characteristically be others-centered
  3. To be others-centered includes treating others’ interests as more important than your own
  4. Satisfying one’s interests in luxuries before trying to satisfy others’ interests in basic needs is not others-centered
  5. An action is right only if it prioritizes others’ basic needs before your luxuries
  6. A substantial portion of one’s surplus income typically goes to luxuries
  7. Therefore, a person is morally obligated to donate a substantial portion of one’s surplus income to charity (REA)

I don’t have time to go into an in-depth defense of these arguments (though see my draft paper [pdf] for a characterization and assessment of luxuries as in the above argument, as well as independent arguments for premises 5-7 regarding others-centeredness), but this at least shows how one can reasonably motivate effective altruism from virtue ethical principles.

EA Does Not Have a Global Scope

Unlike utilitarianism, effective altruism is not a global moral theory in that it cannot, in principle, give deontic outcomes (i.e., right, wrong, obligatory, permissible, etc.) to any given option set (a set of actions that can be done by an agent at some time t). Utilitarianism is a claim about what explains why any given action is right, wrong, obligatory, etc., as well as the truth conditions for the same. In other words, utilitarianism makes a claim of the form ‘an action is right if and only if xyz’, which gives the truth conditions of deontic claims, and a claim of the form ‘an action is right because abc’, which is the explanatory claim corresponding to the structure of reasons of the theory (that explains why actions are right/wrong).
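
Schematically (my own gloss on the contrast, not a quotation), utilitarianism offers both a universally quantified truth-condition claim and an explanatory claim:

\[ \forall a\,[\mathrm{Right}(a) \leftrightarrow a \text{ maximizes aggregate wellbeing}], \qquad \mathrm{Right}(a) \text{ because } a \text{ maximizes aggregate wellbeing}. \]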

While minimal EA trivially does not match utilitarianism in making global normative claims, even radical, normative EA does not govern every possible action set, and it does not propose to. At most, EA (including RNEA) makes claims about actions related to (1) charitable donations and (2) career choice. As MacAskill 2019 says, “Effective altruism is not claiming to be a complete account of the moral life.” There are many actions, such as, say, those governing social interactions, that are out of scope of EA and yet within the scope of utilitarianism.

Therefore, utilitarianism and effective altruism differ in their scopes, as EA is not a comprehensive moral theory, so EA does not require utilitarianism.

EA Incorporates Side Constraints

In “The Definition of Effective Altruism,” MacAskill 2019 is clear that EA includes constraints, and not any means can be justified for the greater good. MacAskill says that the best course of action, according to EA, is an action “that will do the most good (in expectation, without violating any side constraints).”[45] He only considers a value maximization where “whatever action will maximize the good, subject to not violating any side constraints.”[46] He says that EA is “open in principle to using any (non-side-constraint violating) means to addressing that problem.”[47]

In What We Owe the Future, MacAskill says that “naïve calculations that justify some harmful action because it has good consequences are, in practice, almost never correct” and that “plausibly it’s wrong to do harm even when doing so will bring about the best outcome.”[48] On Twitter, MacAskill shared relevant portions of his book on side constraints when responding to the FTX scandal, including the page shown below. He states that “concern for the longterm future does not justify violating others’ rights,” and “we should accept that the ends do not always justify the means…we should respect moral side-constraints, such as against harming others. So even on those rare occasions when some rights violation would bring about better longterm consequences, doing so would not be morally acceptable.”[49]

Figure 6: Excerpt from What We Owe the Future

Utilitarianism, on the other hand, does not have side constraints, or at least, not easily. Act utilitarianism (which is normally the implied view if the modifier is neglected) certainly does not. However, rule utilitarianism can function as a kind of constrained utilitarianism in two ways: one way is strong rule utilitarianism, which has no exceptions and is thus absolutist. Another is weak rule utilitarianism, which still allows some exceptions. MacAskill’s wording above makes it sound like there would not be any exceptions, “even when some rights violation would bring about better longterm consequences.”[50]

However, elsewhere, he makes it sound as though there can be exceptions. He (with Benjamin Todd) says, “Almost all ethicists agree that these rights and rules are not absolute. If you had to kill one person to save 100,000 others, most would agree that it would be the right thing to do.” I am in perfect agreement there. I think, as I discussed above in the Do the Ends Justify the Means? section, absolute rules are trivially false. In fact, MacAskill has an entire paper (with Andreas Mogensen) arguing that absolute constraints lead to moral paralysis because, to minimize your chance of violating any constraints, you should do nothing.[51] It is likely that MacAskill thinks there are extreme exceptions, though these would never happen in real life.

Finally, there remains a distinction between constrained effective altruism and rule utilitarianism, and that distinction is the same as the difference between a consequentialized deontological theory and a standard deontological theory. The difference is that even rule utilitarianism ultimately explains the wrongness of all wrong actions by appeal to consequences (we should follow rules whose acceptance or teaching or following would lead to the best consequences), while constrained effective altruism explains the wrongness of constraint violations by appeal to constraints and rights, without a further justification in terms of overall better outcomes.

In conclusion, EA incorporates side constraints, though with exceptions (as any plausible ethical theory would allow), while act utilitarianism does not. In addition, while EA has some structural similarities to rule utilitarianism, EA has different explanations of the wrongness of actions than utilitarianism, which turns out to be the key difference between (families of) moral theories,[52] and thus the two are quite distinct.

EA is Not Committed to the Same Value Theory

The fifth reason effective altruism is not utilitarian is that the value theory is not identical between the two. One reason they are not identical is that EA is not, strictly speaking, committed to a value theory. However, that does not mean the value theory is a free-for-all. EA is compatible with other theories in the vicinity of utilitarianism, such as prioritarianism, sufficientarianism, and egalitarianism.

Utilitarianism is committed to impartial welfarism in its value theory. Welfarism includes a range of views about what makes someone well-off, including hedonism, desire or preference satisfactionism, and objective list theories. Hence, we can have hedonistic utilitarianism, preference utilitarianism, or objective list utilitarianism. Further, utilitarianism is committed to a simple aggregation function that makes the good equal to the sum total of wellbeing, as opposed to a variously weighted aggregation function, such as in prioritarianism, which gives additional weight to the wellbeing of those worse off.
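
In rough formal terms (a standard way of writing the contrast; the concave function f is an assumption for illustration), where w_i(o) is individual i’s wellbeing in outcome o:

\[ V_{\text{util}}(o) = \sum_i w_i(o), \qquad V_{\text{prior}}(o) = \sum_i f\big(w_i(o)\big), \text{ with } f \text{ strictly concave, e.g., } f(w) = \sqrt{w} \text{ for nonnegative wellbeing}. \]

The concavity of f is what makes a unit of wellbeing count for more when it goes to someone worse off.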

The value theory that MacAskill 2019 describes in the definition of EA is “tentative impartial welfarism,”[53] where the ‘tentative’ implies this is a first approximation or working assumption. MacAskill expresses the difficulty here that arises from intra-EA disagreement: we do not want the scope of value maximization to be so large that it can include maximizing whatever the individual wants, but we do not want the scope of maximization so small as to exclude a substantial portion of the (current or future) movement.

MacAskill seems to do some hand-waving on this point. When defending EA as distinct from utilitarianism, he says, “it does not claim that wellbeing is the only thing of value,” so EA is compatible “with views on which non-welfarist goods are of value.”[54] However, two pages previously, his “preferred solution” of “tentative impartial welfarism…excludes non-welfarist views on which, for example, biodiversity or art has intrinsic value.” On the same page, he suggests that if the EA movement became convinced that “the best way to do good might well involve promoting non-welfarist goods, then we would revise the definition to simply talk about ‘doing good’ rather than ‘benefiting others.’”[55]

Perhaps one way of reconciling these is to say that, while “tentative impartial welfarism…excludes non-welfarist views,” the commitment is a tentative commitment to ‘impartial welfarism’, as opposed to a firm commitment to ‘tentative impartial welfarism’, and it is the impartial welfarism (setting aside the tentativeness) that excludes non-welfarist views. When Amy Berg considers the same problem of “how big should the tent be?”, she concludes that EA needs to commit to promoting the impartial good in order to ensure that effectiveness can be objectively measured.[56]

I suggest that the best way to combine these is to say that EA is committed to maximizing the impartial good that can be approximated by welfarism. If a view cannot even be approximated by welfarism, then it would be fighting a different battle than EA is fighting. This approach combines the tentative nature of the commitment with ensuring it can be objectively measured and in line with the current EA movement, while remaining open to including some non-welfarist goods that remain similar enough to the movement as it currently stands.

Finally, MacAskill says that EA can work with “different views of population ethics and different views of how to weight the wellbeing of different creatures,”[57] which is why EA is compatible with prioritarianism, sufficientarianism, and egalitarianism, in addition to utilitarianism.

Therefore, EA is distinct from utilitarianism by having a different commitment in both what is valuable as well as the aggregation principle.

EA Incorporates Moral Uncertainty

The final reason I will discuss on why EA is not utilitarianism is that EA incorporates moral uncertainty, which is an inherently metatheoretical consideration, while utilitarianism does not. Utilitarians do have to deal with moral uncertainty, just as everyone else does, but utilitarianism itself does not automatically include it. Since EA includes inherently metatheoretical considerations, it cannot be the same as a theory, which by definition does not inherently include metatheoretical considerations.

The first way EA includes moral uncertainty was above in the characterization of “tentative impartial welfarism.” EA is open to multiple different normative views; at the very least, it is open to hedonistic, preference, or objective list utilitarianism, while no single theory of utilitarianism can be open to multiple theories of utilitarianism, by definition. Further, this value theory does not rule out non-consequentialist views, and, if my virtue theoretic arguments above (or others) are successful, then virtue ethicists can be EAs. Therefore, EAs can reasonably distribute their credences across many different normative views, both utilitarian and non-utilitarian.

EA does not endorse a specific approach to moral uncertainty (such an approach would likely be considered an auxiliary hypothesis of EA), though EA leaders do seem to clearly favor one particular approach: maximizing expected choiceworthiness. Furthermore, MacAskill, who has done much work on moral uncertainty, reasons quite explicitly using uncertainty to distribute non-negligible credence between utilitarianism and deontology, combining that with a risk-averse expected utility theory to motivate incorporating side constraints (aka agent-centered or deontic restrictions). I personally tentatively support the My Favorite Theory[58] approach to moral uncertainty, though EA requires no particular approach.
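
Roughly, maximizing expected choiceworthiness tells you to pick the option a that maximizes

\[ \mathrm{EC}(a) = \sum_i C(T_i)\cdot \mathrm{CW}_i(a), \]

where C(T_i) is one’s credence in moral theory T_i and CW_i(a) is the choiceworthiness of a according to T_i. (This presupposes that choiceworthiness is comparable across theories, itself a controversial assumption.)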

Objections

Savannah Pearlman argues that even though EA and utilitarianism are distinct moral frameworks, they share core philosophical commitments, and therefore EA is still dependent on utilitarianism. As I argue above, the exact differences between the two are such that EA is not dependent on utilitarianism. It is perfectly sufficient that EA and utilitarianism are (1) distinct frameworks and (2) independently motivated to conclude that EA is not inherently utilitarian. I showed the independent motivation (in the form of theory-independent midlevel principles as well as virtue ethical motivation) in the second of the six differences above.

Pearlman evidently was not convinced that the theory-independent motivation was, in fact, theory-independent because there are shared commitments between EA and utilitarianism. Of course, we would expect plausible moral theories to share some commitments. For example, any plausible moral theory holds that wellbeing is morally significant and that the consequences of one’s actions matter. Shared commitments, unless they are the totality of the theories’ commitments, do not show dependence. In the case of EA and utilitarianism, utilitarianism is sufficient for EA, but not necessary, since we can use virtue ethical arguments (or deontological ones, though I do not discuss those here).

Pearlman, however, misidentified the shared commitments. She says, “Rather clearly, Effective Altruism and Utilitarianism share the core philosophical commitments to Consequentialism, Impartiality, and Hedonism (repackaged by Effective Altruists into Welfarism).” A few noteworthy items on this. First, utilitarianism is not committed to hedonism; hedonistic utilitarianism is committed to hedonism, while preference utilitarianism is committed to preference satisfactionism, etc. In other words, utilitarianism is committed to some version of welfarism, which can be cashed out in various ways, which is the same as EA’s welfarism. Neither the family of utilitarian theories nor EA is committed to a specific account of well-being.

Secondly, Pearlman includes consequentialism as part of the core commitments of EA, which she does without argument. It is unclear why she does so. There are a non-negligible number of non-consequentialist EAs. I would guess Pearlman thinks that maximizing only makes sense given consequentialism. Apparently I have more faith in other moral theories than Pearlman does: since maximizing welfare with a given unit of resources is the morally correct option, I think deontology and virtue ethics can make sense of it, particularly in the restricted domains of concern to EA, such as charitable donations and career choice. Maximizing in this restricted domain can also be understood as an implication of the theory-independent principles that Singer proposed in the drowning child case.

Pearlman appears to take issue with some of the deontic outcomes in question, namely that, in comparing two charities, one should donate to the charity that is 100x more effective than the other. Although minimal EA does not even commit to any obligation, we can consider the auxiliary commitment of normative EA (though this would still mean EA is not inherently utilitarian). Pearlman takes this moral obligation to imply that EA must be committed to a more general utilitarian principle. However, ignoring any moral theorizing, it just makes sense that you should not intentionally do an action that is much less good than another when choosing the better one costs you little. Normative EAs do not need to say more than this, while utilitarians do. As Richard Chappell points out in the comments, normative EA is only committed to efficient benevolence, not to constraint-less benevolence or unlimited beneficence that requires actions at great personal cost.

All things considered, from the clarification above we can see that Pearlman is incorrect that EA is inherently utilitarian, and incorrect that criticisms of utilitarianism fairly apply to EA as well.

Sub-Conclusion

In summary, effective altruism incorporates moral uncertainty in a way that distinguishes it from being inherently utilitarian in any interesting sense of the term. Of course, even an absolutist deontologist should have nonzero credence in some form of consequentialism to avoid being irrational, but that hardly makes them a consequentialist. So, EA is not inherently utilitarian.

Altogether, we saw six reasons that effective altruism is not reliant on utilitarianism. One is that minimal EA does not make normative claims. Furthermore, we saw that EA is also motivated by non-consequentialist reasoning, both theory-independent and virtue ethical in nature. More generally, EA, unlike utilitarianism, has a restricted scope, incorporates side constraints, has a different value theory, and includes moral uncertainty.

Can EA/Consequentialism/Longtermism be Used to Justify Anything?

Multiple authors express worries suggesting that EA or consequentialism or longtermism can be used to justify anything. In this section, I will show that this claim is either false or uninteresting, depending on how the claim is interpreted. 

Émile P. Torres, a big fan of “scare quotes,” wrote in a Salon article titled “What the Sam Bankman-Fried debacle can teach us about ‘longtermism’” that “For years, I have been warning that longtermism could ‘justify’ actions much worse than fraud, which Bankman-Fried appears to have committed in his effort to ‘get filthy rich, for charity’s sake’.” Eric Levitz in the Intelligencer says that effective altruism “lends itself to maniacal fetishization of ‘expected-value’ calculations, which can then be used to justify virtually anything.” I have also heard this claim made about consequentialism and utilitarianism maybe 400 times, so I will address the issue broadly here.

Drawing from my own manuscript titled “Worst Objections to Consequentialism,” I will show why these attempted points are silly. We can generalize the concept of a moral theory to that of a moral framework, which would include effective altruism and longtermism as moral frameworks in their own right, alongside moral theories themselves. I will focus on moral theories because they are more well-defined and more thoroughly discussed among ethicists.

All Families of Moral Theories Can Justify Anything

First, any family of moral theories (e.g., consequentialism, deontology, virtue ethics) can justify any action as morally permissible. If this is correct, then it amounts to an entirely uninteresting claim that, e.g., consequentialism can justify anything, since any family of theories can justify anything until you flesh out the details of the specific theory you actually want to compare. The reason these are called families, and not theories, is that each comprises many different versions of the theory united by a family resemblance. Moral theories have a global scope, meaning they apply to all actions, and a deontic predicate, meaning they say whether an action is permissible, impermissible, obligatory, etc.

For any given family of theories, we can construct a theory in that family that renders any given action permissible by manipulating what we find valuable, dutiful, or virtuous. For example, we can construct a consequentialist theory that says that only harmful intentions have value. We can have a deontological theory that says that our only obligation is to punch as many people in the face as possible every day. We can invent a virtue ethical theory where an action is virtuous if and only if it has the worst moral consequences. All of these theories are part of the respective family of theories (consequentialism, deontology, and virtue ethics). Now, none of these are particularly plausible versions of these theories, but adhering to these views would justify some pretty terrible actions. Thus, it is uninteresting to make the point that these kinds of moral theory families (including utilitarianism, which is a family subset of consequentialism) can justify immoral actions (see Figure 7).

Figure 7: As it turns out, it is not helpful to point out that [insert moral theory or theory family here] “can justify” [insert immoral action here], and this is especially true since EA is not inherently utilitarian.
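To make the construction recipe concrete, here is a minimal sketch in Python, where every name and “theory” is a hypothetical illustration of my own rather than a view anyone holds. Each family is treated as a template with a free parameter; plugging in a perverse parameter yields a member of that family that endorses terrible actions.

```python
# A minimal sketch: treat each family of moral theories as a template with a
# free parameter. Perverse parameters yield (wildly implausible) members of
# the family that endorse terrible actions. All names here are hypothetical
# illustrations, not real theories.

from typing import Callable, List

Action = str

def consequentialism(value: Callable[[Action], float]) -> Callable[[List[Action]], Action]:
    """A consequentialist template: the right act maximizes `value`."""
    return lambda options: max(options, key=value)

def deontology(duty: Callable[[Action], bool]) -> Callable[[Action], bool]:
    """A deontological template: an act is obligatory iff it fulfills `duty`."""
    return duty

# Perverse parameter 1: only harmful intentions have value.
harmful_consequentialism = consequentialism(lambda a: 1.0 if "harm" in a else 0.0)
print(harmful_consequentialism(["help a stranger", "harm a stranger"]))
# -> "harm a stranger"

# Perverse parameter 2: the only duty is punching people in the face.
punching_deontology = deontology(lambda a: "punch" in a)
print(punching_deontology("punch a stranger in the face"))  # -> True (obligatory)
```

Both outputs are members in good standing of their respective families; what makes them absurd is the parameter, not the family.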

Another way to see why any family of theories can justify any action as permissible is that these families are interchangeable in terms of their deontic predicates. In other words, for any deontological theory, we can construct a consequentialist theory that has all the same moral outcomes (deontic predicates like permissible and obligatory) for all the same actions, and vice versa. This construction is called consequentializing.[59] In the same way, we can construct a deontological theory for any consequentialist theory, using a method called deontologizing.[60] There is debate over the significance of this, but the key conclusion here is that any specific action that a deontologist can say is wrong, a consequentialist can also say is wrong, and vice versa.
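As a toy illustration of the consequentializing recipe, on a deliberately simplified picture where a theory is just a verdict function over options (the actual literature uses much richer structures): given any deontological verdict function, define a value function that ranks permitted options above forbidden ones; maximizing that value function then reproduces the original verdicts.

```python
# A toy sketch of consequentializing, on a simplified picture where a theory
# is just a verdict function over options. The real literature uses much
# richer structures; this only illustrates the basic trick.

from typing import Callable, List

Action = str
Theory = Callable[[Action, List[Action]], bool]  # permissible given alternatives?

def consequentialize(deontic_theory: Theory) -> Theory:
    """Build a maximizing theory with the same deontic verdicts."""
    def maximizing_theory(act: Action, options: List[Action]) -> bool:
        # Assign value 1 to deontically permitted options, 0 to forbidden ones.
        value = lambda a: 1.0 if deontic_theory(a, options) else 0.0
        # Permissible iff no available option has higher value. Assuming some
        # option is always permitted, the verdicts coincide exactly.
        return value(act) >= max(value(a) for a in options)
    return maximizing_theory

# Hypothetical deontological theory: lying is forbidden, everything else is fine.
never_lie: Theory = lambda act, options: "lie" not in act

as_consequentialism = consequentialize(never_lie)
options = ["tell the truth", "lie"]
print(as_consequentialism("tell the truth", options))  # -> True
print(as_consequentialism("lie", options))             # -> False
```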

The takeaway from our exploration so far is that any objection to some theory for making some actions permissible needs to reference a specific version of the theory rather than the whole family of theories. For example, it is no objection to consequentialism that hedonistic utilitarianism makes it morally obligatory to go through the experience machine, since hedonistic utilitarianism is just one member of the family of theories, but it is a legitimate objection to hedonistic utilitarianism. Therefore, the claim that consequentialism can justify anything is true but uninteresting, since the exact same claim can be made of deontology, virtue ethics, or any other theory or anti-theory.

Specific Moral Theories Do Not Justify Any Action

Second, while any specific theory “can” justify any action, no specific theory does in fact justify any action. A significant chunk of applied ethics, and one of its primary methods, is taking a moral theory (or framework) and plugging in the relevant descriptive and evaluative information in order to ascertain the moral outcome of various actions. In other words, a large goal in ethics is to figure out what a moral theory actually implies for any given situation. People write many papers for and against various views, even when working from the same starting points, sometimes even from the same specific theory. All of these contradictory implications cannot be correct. However, there is a fact of the matter about the proper implication of the theory for specific actions, and therefore a specific theory does not, though it can, justify any action.

Part of the issue here is obscured by the lack of definition of the word “can” in the claim. The word “can” (or “could”) is doing all the work, and it is never specified how it should be interpreted. It is common in philosophical circles to distinguish different types of possibility (or claims about what can, but not necessarily will, happen): physical (aka nomological), metaphysical, epistemic, and broad logical possibility. Most common (depending on the context), especially in ethics circles, is metaphysical possibility, which is typically cashed out in terms of possible worlds as implemented in modal logic.

In other words, my best guess is that to say a theory “can justify” an action means that the theory implies that some action is permissible in a possible world (aka a way that the world could have been). Presumably, the worry here is about classes of actions, like lying, running, boxing, stealing, killing, etc. So, to say a theory can justify any action is to say that for any class of actions, there is a possible world where performing an action of that class is permissible. If conceivability is at least a good guide to possibility, then any suitable thought experiment will do to show that a class of actions can be permissible in other possible worlds.

Furthermore, as we discussed earlier, on any plausible theory (including versions of consequentialism, deontology, and virtue ethics), there is some point where contextual considerations render the results so significant that the action must be permissible. To deny this is to accept absolutism, with all of its many problems discussed earlier. Therefore, all plausible moral theories will have members of all classes of actions that are permissible in some possible world, however fantastical. Therefore, all specific moral theories “can” justify any action in the sense that there are possible worlds where some action of that type is permitted.

However, any given specific theory does not justify just any action. The reason for this is simple: the actual world is a single member (a subset of cardinality 1) of the set of all possible worlds, which is infinite. So, while a theory “can” justify any action, it does not actually justify any action, on pain of incoherence. While a theory can justify an action in a world very different from our own, with different physics, people, circumstances, and laws (physical and political), it does not thereby justify that action in the actual world.
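To put the contrast in symbols (a rough formalization of the possible-worlds reading above; the notation is my own): let $W$ be the infinite set of possible worlds, $w_{@} \in W$ the actual world, and $\mathrm{Perm}_T(a, w)$ the claim that theory $T$ deems an action of type $a$ permissible at world $w$. Then:

$$T \text{ can justify } a \iff \exists w \in W : \mathrm{Perm}_T(a, w)$$

$$T \text{ does justify } a \iff \mathrm{Perm}_T(a, w_{@})$$

The first condition quantifies over all of $W$ and is therefore extremely easy to satisfy; the second is pinned to the single world $w_{@}$, which is why it is the condition worth arguing about.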

Since the much more interesting concern is about what is permissible or impermissible in the actual world, we care much more about whether theories do in fact justify various actions rather than that they can justify various actions.

Specific EA and Longtermism Frameworks Do Not Justify Any Action

The same applies to moral frameworks like effective altruism and longtermism, not just theories. EA and longtermism can also be understood as families of models united by a family resemblance. There is a correct way of filling in the details, but since we are not certain what that is at this time, and since we have substantial disagreement, EA is committed to cause neutrality. Because of this substantial disagreement over filling in the details, these frameworks “can” justify a wide range of actions. Yet, just like with moral theories, there is a correct way of working out the details. Thus, we need to investigate the question seriously to know what the exact implications of their commitments are.

In addition, Levitz has a suspicion that ‘expected-value’ calculations can be used to justify anything. Well, if all you have is an equation for expected value, and you ignore the rest of a moral framework, then yes. But that is exactly why you have the rest of the moral framework. If you only have agent-centered restrictions without filling in the details of what they are, you can say that it is obligatory to punch every stranger in the face as soon as you see them. Therefore, deontology can justify virtually anything, right? Not really. Obviously, you have to fill in the details, and the details need to be remotely plausible to be worth consideration. If I defend a version of virtue ethics where the only virtue is being self-centered, I will justify many terrible actions. You have to compare the actual theories themselves, and you need to compare plausible theories. See the helpful discussions on this general point by Richard Yetter Chappell here and here.
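To sketch the point with hypothetical option names and numbers of my own invention: a bare expected-value calculation endorses whatever maximizes EV, but a framework that adds even a single side constraint screens such options out before the calculation ever matters.

```python
# A bare expected-value calculation vs. the same calculation inside a
# framework with one side constraint. All options and numbers are
# hypothetical, purely for illustration.

options = {
    "donate honestly": {"ev": 100.0, "violates_constraint": False},
    "commit fraud, then donate": {"ev": 500.0, "violates_constraint": True},
}

# Expected value alone: picks fraud, since 500 > 100.
ev_only = max(options, key=lambda name: options[name]["ev"])

# The rest of the framework: side constraints filter options first.
permitted = {name: o for name, o in options.items() if not o["violates_constraint"]}
with_constraint = max(permitted, key=lambda name: permitted[name]["ev"])

print(ev_only)          # -> "commit fraud, then donate"
print(with_constraint)  # -> "donate honestly"
```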

Therefore, the claim considered at the beginning of this section is either false or uninteresting, depending on how it is interpreted. I will reemphasize Fletcher’s comments: “‘Does a worthy end justify any means? Can an action, no matter what, be justified by saying it was done for a worthy aim?’ The answer is, of course, a loud and resounding NO!”[61] At least, not in any interesting way.

Takeaways and Conclusion

The FTX scandal is very sad for effective altruism, cryptocurrency, and beyond, since a lot of money that was going, or would have gone, to saving (or sustaining) people’s lives no longer will. Lots of people were hurt and will be worse off as a result, to say the least. But as far as presenting an argument against effective altruism goes, I think there are, fortunately, no takeaways whatsoever here. The people who used SBF as an opportunity to critique a commitment to “doing the most good with one’s donations and career” failed to present a decent argument.

From a Christian perspective, this debacle is similar to many scandals that have occurred in Christendom, where important or powerful leaders have committed vicious actions or formed cults of personality that have completely wrecked many people’s lives and entire churches and communities. Examples include Mark Driscoll, Ravi Zacharias, and many others. These are tragedies, and the actions of these leaders must be vehemently condemned. Yet, from the very beginning, we know people go horribly astray. They make terrible mistakes. The only person we can have perfect faith in, and always strive to exemplify, is Jesus. Leaders do not always (and in fact rarely do) perfectly reflect the core of their commitments. We’ve all heard this point 50,000 times, and yet somehow people keep thinking that leaders’ mistakes are a direct result of following the teachings they supposedly espouse. This is not always (perhaps even rarely) the case.

For someone purely interested in assessing how effective altruism’s framework and approach fare, and whether EA should change its key commitments, the scandal remains entirely uninteresting and uneventful. Another day, another round of horrid critiques of effective altruism. It remains a very good thing to prevent people from dying of starvation and preventable disease, and if we can save more people’s lives by donating to effective charities, I am going to keep donating to effective charities.

If you have not yet been convinced by my arguments, listen to what ChatGPT (an artificial intelligence chatbot recently launched by OpenAI) had to say about the implications of SBF for EA in Figure 8: the scandal does not necessarily reflect the moral principles of EA, and the same conclusion holds for any given individual. ChatGPT also agreed that EA is not inherently utilitarian.

Figure 8: ChatGPT knows what’s up regarding SBF and the implications for EA (i.e., not much). Note: I only include this on a lighthearted note, not as a particularly substantive argument (though I 100% agree with ChatGPT).

Post-Script

If I have time and energy (and there appears to remain a need or interest), I will write a part 2, perhaps in early January. Part 2 would include criticisms I found even less interesting or plausible, those that relate to the connection between longtermism and EA, the danger of maximizing, the homogeneity of EA, concerns about community norms, and more point-by-point responses to various critical pieces published online recently. Perhaps more relevant information will be revealed, or more pointed responses published, between now and then; one very recent piece has suggested more thoroughly that EA leaders should have known about SBF’s dealings, and I may investigate that more carefully. Let me know what else, if anything, I should include, and whether you would be interested in a follow-up.[62]

Endnotes


[1] MacAskill, William. “The Definition of Effective Altruism.” in Effective Altruism: Philosophical Issues (2019), p. 14.

[2] See Strasser, Alex. “Consequentialism, Effective Altruism, and God’s Glory.” (2022). Manuscript. [pdf] for more discussion about these distinctions and their motivation.

[3] This is the so-called Compelling Idea of consequentialism, which trivially entails normative EA and non-trivially entails radical, normative EA.

[4] Although, I realized when writing this that I might actually be a strong longtermist for Christian reasons. Namely, I probably think evangelism is the most important moral priority of our time, and the concern for the longterm future (e.g., afterlife) is sufficient to make evangelism the most important moral priority of our time. It looks like this makes me a strong longtermist after all. I need to consider this further.

[5] Boecking, Benedikt, et al. “Quantifying the relationship between large public events and escort advertising behavior.” Journal of Human Trafficking 5.3 (2019): 220-237.

[6] Cryptocurrency emissions estimated as the maximum of the range given by the White House, which is 50 million metric tons of carbon dioxide per year. Cost per metric ton of offset is $14.62 by Cool Effect (accessed 11.27.22). This amounts to $731 million to carbon offset the entirety, which is 1/35 of SBF’s net worth before the scandal. Of course, SBF and FTX’s contribution to the U.S. crypto emissions is a small fraction of that, so he could even more easily offset his own carbon emissions. Another difficulty is that it is unlikely that Cool Effect could easily (or at all) implement projects at the scale required to offset this amount of emissions, which is more than some countries in their entirety.
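Spelling out the arithmetic behind these figures (the final net-worth figure is my back-calculation from the 1/35 ratio, not a number from either source):

$$5.0 \times 10^{7}\ \text{t CO}_2/\text{yr} \times \$14.62/\text{t} \approx \$731 \text{ million}, \qquad 35 \times \$731 \text{ million} \approx \$25.6 \text{ billion}.$$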

[7] This may assume that we have more negative duties than positive duties. It is frequently defended (or assumed) that we have stronger reasons to prevent harm than to promote beneficence, in which case the argument would go through.

[8] This distaste is mostly because of his views on abortion and infanticide. While I vehemently disagree with Singer on these specific issues, Singer’s thoughts on these issues do not affect his thoughts on poverty alleviation or the EA framework in general. It is also true that Singer’s views on these issues are sometimes distorted, which is why Eric Sampson (a Christian) wrote a clarifying piece on Singer’s views in context of backlash to Singer’s visit to Eric’s campus.

[9] MacAskill, William. “The Definition of Effective Altruism.” in Effective Altruism: Philosophical Issues (2019), p. 20.

[10] For an insightful and conversational introduction to this debate, including on whether the ends justify the means or there are intrinsically evil acts that cannot ever be done, and more, see Fletcher, Joseph and Wassmer, Thomas. Edited by May, William E. Hello, Lovers! An Introduction to Situation Ethics. Cleveland: Corpus, 1970.

[11] I will assume “justify” means something like “renders morally permissible,” whether as a truth condition or an explanation of its moral permissibility.

[12] Fletcher, Joseph. “Situation Ethics, Law and Watergate.” Cumb. L. Rev. 6 (1975): 35-60, p. 52.

[13] Hurley, Paul. “Consequentialism and the standard story of action.” The Journal of Ethics 22.1 (2018): 25-44.

[14] Fletcher, “Situation Ethics, Law and Watergate,” p. 52.

[15] Sterba, James P. “The Pauline Principle and the Just Political State.” Is a Good God Logically Possible? Palgrave Macmillan, Cham, 2019. 49-69, p. 49.

[16] Fletcher, “Situation Ethics, Law and Watergate,” p. 51, emphasis in original.

[17] Ironically, I am quite skeptical that killing in 1-to-1 (or, more generally, m-attackers-vs-n-victims) self-defense scenarios or war is ever justified in real-world scenarios. We can construct scenarios where it obviously would be, but I am skeptical we have sufficient reason, before starting a large-scale war, to think the foreseeable consequences of the war would result in fewer deaths (or other goods) long term than we would have without the war. I still need to investigate further, though. It is ironic that I am less likely to think killing is ever permissible in the real world than those who frequently verbalize their opposition to ends-means reasoning.

[18] Of course, some natural law theorists and some Kantians may disagree, but I am more concerned about those with plausible moral theories.

[19] It is possible that this phrase is intended as a claim about the moral explanation of any deontic outcome, or its structure of reasons: namely, that what makes actions right or wrong is never their consequences. This denies the distinguishing aspect of consequentialism, that the rightness/wrongness of actions is ultimately explained by consequences instead of, e.g., duty. As such, it would merely be a restatement of the claim “Consequentialism is false,” and then it could not even be used in the debate, since it begs the question against the consequentialist. I do not think the principle is intended to make a technical point about the proper structure of normative theories and normative explanation, but if so, it remains impotent as a moral principle.

Also, for threshold deontology, it may be the case that the explanation for why post-threshold actions are right is by appeal to the consequences, so this understanding of the phrase would then be more clearly neutral between theories.

[20] Aboodi, Ron, Adi Borer, and David Enoch. “Deontology, individualism, and uncertainty: A reply to Jackson and Smith.” The Journal of Philosophy 105.5 (2008): 259-272, p. 261 n. 5.

[21] Huemer, Michael. “Lexical priority and the problem of risk.” Pacific Philosophical Quarterly 91.3 (2010): 332-351.

[22] Tarsney, Christian. “Moral uncertainty for deontologists.” Ethical Theory and Moral Practice 21.3 (2018): 505-520.

[23] Mogensen, Andreas, and William MacAskill. “The Paralysis Argument.” Philosophers’ Imprint 21.15 (2021).

[24] Huemer, Michael. Knowledge, Reality and Value. Independently published (2021), p. 297 of pdf.

[25] For example, there is the paradox of deontology as well as the related problem of inconsistent temporal discounting. The paradox of deontology is that deontology implies violating constraints is impermissible even when doing so means that you (and/or others) will violate the constraint many fewer times in the future, which is quite counterintuitive. The second problem occurs because modelling absolute constraints requires infinite disvalue for the immediate action but a discounted, finite disvalue for the same action in comparable circumstances in the future. The circumstances are only finitely different yet there is an infinite difference in the disvalue of the same action, which appears inconsistent. 

[26] See related and helpful discussion in Fletcher, Joseph and Wassmer, Thomas. Edited by May, William E. Hello, Lovers! An Introduction to Situation Ethics. Cleveland: Corpus, 1970, pp. 6-7. Fletcher, who identifies situation ethics as necessarily consequentialist or teleological, also says that for the single principle of situation ethics, he is deontological in a twisted sense.

[27] Consequences, as understood in moral theory, encompass more than the term does in common parlance. Consequences, in this sense, refers to the action and everything that follows from that action; it is not merely the effects after the action. Consequentialism sums the intrinsic value of the action and everything that follows from that action for all time. Lying, for example, can have intrinsic disvalue, and so can the results of lying, such as destroying a relationship. Anything, in principle, can be assigned value in a consequentialist theory, including intentions, motivations, virtues, and any subcategory of action. Further, these categories can be assigned infinite disvalue so that there are absolute constraints, if so desired.

[28] Cloutier, David. The Vice of Luxury: Economic Excess in a Consumer Age. Georgetown University Press, 2015, p. 137.

[29] Ambrose, “On Naboth”, cited in Phan, Peter C. Social Thought. Message of the Fathers of the Church series, Vol. 20, 1984, p. 175.

[30] Singer, Peter. “Famine, Affluence, and Morality.” Philosophy and Public Affairs 1.3 (1972), pp. 229-243.

[31] MacAskill, “The Definition of Effective Altruism,” p. 14.

[32] See Pummer, Theron. “Whether and Where to Give.” Philosophy & Public Affairs 44.1 (2016): 77-95 for a defense of this view, and Sinclair, Thomas. “Are we conditionally obligated to be effective altruists?” Philosophy and Public Affairs 46.1 (2018) for a response.

[33] Berkey, Brian. “The Philosophical Core of Effective Altruism.” Journal of Social Philosophy 52.1 (2021): 93-115.

[34] See Strasser, Alex. “Consequentialism, Effective Altruism, and God’s Glory.” (2022). Manuscript. [pdf]

[35] For example, Timmerman, Travis. “Sometimes there is nothing wrong with letting a child drown.” Analysis 75.2 (2015): 204-212 or Kekes, John. “On the supposed obligation to relieve famine.” Philosophy 77.4 (2002): 503-517.

[36] Haydar, Bashshar, and Gerhard Øverland. “Hypocrisy, poverty alleviation, and two types of emergencies.” The Journal of Ethics 23.1 (2019): 3-17.

[37] Singer, “Famine, Affluence, and Morality,” p. 231.

[38] He also proposed a third one in The Life You Can Save: (3) if it is in your power to prevent something bad from happening, without sacrificing anything nearly as important, it is wrong not to do so. See discussion in Haydar, Bashshar, and Gerhard Øverland. “Hypocrisy, poverty alleviation, and two types of emergencies.” The Journal of Ethics 23.1 (2019): 3-17, who argue that none of these three principles are needed to retain the intuition in the drowning pond case. We only need a weaker principle: (4) if it is in your power to prevent something bad from happening, without sacrificing anything significant, it is wrong not to do so.

[39] McMahan, Jeff. “Philosophical critiques of effective altruism.” The Philosophers’ Magazine 73 (2016): 92-99.

[40] Thanks to Dominic Roser for pointing this out to me.

[41] See Miller, Ryan. “80,000 Hours for the Common Good: A Thomistic Appraisal of Effective Altruism.” Proceedings of the American Catholic Philosophical Association (forthcoming) and Synowiec, Jakub. “Temperance and prudence as virtues of an effective altruist.” Logos i Ethos 54 (2020): 73-93.

[42] For discussion of different criteria of right action proposed in virtue ethics, see Van Zyl, Liezl. “Virtue Ethics and Right Action.” The Cambridge Companion to Virtue Ethics (2013): 172-196.

[43] Hursthouse, Rosalind. On Virtue Ethics. OUP Oxford, 1999, p. 28.

[44] Byerly, T. Ryan. Putting Others First: The Christian Ideal of Others-Centeredness. Routledge, 2018.

[45] MacAskill, “The Definition of Effective Altruism,” p. 23.

[46] MacAskill, “The Definition of Effective Altruism,” p. 17.

[47] MacAskill, “The Definition of Effective Altruism,” p. 20.

[48] MacAskill, William. What We Owe the Future. Basic Books, 2022, p. 241.

[49] MacAskill, What We Owe the Future, pp. 276-277 of my pdf, emphasis in original.

[50] Ibid.

[51] Mogensen, Andreas, and William MacAskill. “The Paralysis Argument.” Philosophers’ Imprint 21.15 (2021).

[52] Schroeder, S. Andrew. “Consequentializing and its consequences.” Philosophical Studies 174.6 (2017): 1475-1497.

[53] MacAskill, “The Definition of Effective Altruism,” p. 18.

[54] MacAskill, “The Definition of Effective Altruism,” p. 20.

[55] MacAskill, “The Definition of Effective Altruism,” p. 18.

[56] Berg, Amy. “Effective altruism: How big should the tent be?” Public Affairs Quarterly 32.4 (2018): 269-287.

[57] MacAskill, “The Definition of Effective Altruism,” p. 18.

[58] One of the biggest challenges here is theory individuation, or how you distribute credences in theories with slightly varied parameters or structures. See discussion in papers with “My Favorite Theory” in the title by Gustafsson and also MacAskill’s book Moral Uncertainty.

[59] Portmore, Douglas W. “Consequentializing.” Philosophy Compass 4.2 (2009): 329-347. There are various challenges to the success of this project, but I won’t address those here. I think the challenges can be met.

[60] Hurley, Paul. “Consequentializing and deontologizing: Clogging the consequentialist vacuum.” Oxford Studies in Normative Ethics 3 (2013).

[61] Fletcher, “Situation Ethics, Law and Watergate,” p. 51, emphasis in original.

[62] Featured image adapted from FTX Bankruptcy, Creative Commons license, downloaded here.