Category Archives: General Ethics

A Systematic Response to Criticisms of Effective Altruism in the Wake of the FTX Scandal


Effective altruism (EA) has been in the news recently following the crash of a cryptocurrency exchange and trading firm whose head was publicly connected to EA. The highly publicized event prompted several articles arguing that EA is incorrect or morally problematic: that EA increases the probability of similar scandals, that EA implies the ends justify the means, that EA is inherently utilitarian, or that EA can be used to justify anything. In this post, I will demonstrate the failures of these arguments and others that have amassed. Ultimately, there is not much we can conclude about EA as an intellectual project or a moral framework from this cryptocurrency scandal. EA remains a defensible and powerful tool for good and a framework for assessing charitable donations and career choices.

Note: This is a long post, so feel free to skip around to the sections of particular interest using the linked section headers below. Additionally, this post is available as a PDF or Word document.

  1. Summary
  2. Introduction
  3. Effective Altruism Revealed
  4. My Background
  5. SBF Association Argument Against Effective Altruism
    1. Was SBF Acting in Alignment with EA?
    2. SBF Denies Adhering to EA?
    3. EA is Not Tainted by SBF
      1. An Irrelevant “Peculiar” Connection
      2. Skills in Charity Evaluation ≠ Skills in Fraud Detection in Friends
      3. EA Does Not Problematically Increase the Risk of Wrongdoing
  6. Genetic Utilitarian Arguments Against Effective Altruism
    1. Genetic Personal Argument Against EA
    2. Genetic Precursor Argument Against EA
    3. A Movement’s Commitments are not Dictated by the Belief Set of Its Leaders
    4. EA Leaders are Not All Utilitarians
  7. Do the Ends Justify the Means?
    1. Some Ends Justify Some Means
    2. Some Ends Justify Trivially Negative Means
    3. No End Can Justify Any Means
    4. A Sufficiently Positive End Can Justify a Negative Means
    5. Absolutism is the Problem
    6. Paradoxes of Absolute Deontology
    7. Application to the FTX Scandal
  8. Effective Altruism is Not Inherently Utilitarian
    1. [Minimal] EA Does Not Make Normative Claims
    2. EA is Independently Motivated
      1. Theory-Independent Motivation: The Drowning Child
      2. Martin Luther’s Drowning Person
      3. Virtue Theoretic Motivation: Generosity and Others-Centeredness
    3. EA Does Not Have a Global Scope
    4. EA Incorporates Side Constraints
    5. EA is Not Committed to the Same Value Theory
    6. EA Incorporates Moral Uncertainty
    7. Objections
    8. Sub-Conclusion
  9. Can EA/Consequentialism/Longtermism be Used to Justify Anything?
    1. All Families of Moral Theories Can Justify Anything
    2. Specific Moral Theories Do Not Justify Any Action
    3. Specific EA and Longtermism Frameworks Do Not Justify Any Action
  10. Takeaways and Conclusion
  11. Post-Script
  12. Endnotes


Recently, there has been a serious scandal primarily involving Sam Bankman-Fried (SBF) and his cryptocurrency exchange FTX, which precipitated a multibillion-dollar collapse into bankruptcy. I am talking about this because SBF has been publicly connected to the effective altruism movement, including being upheld as a good example of “earning to give,” in which people purposely take lucrative jobs in order to donate even more money to effective charities. For example, Oliver Yeung took a job at Google and is able to donate 85% of his six-figure income to charities while living in New York City; for four years, he lived in a van to push this up to 90-95% of his income.

SBF met William MacAskill, one of the leaders and founders of the effective altruism (EA) movement, in undergrad, and MacAskill convinced him to go into finance to “earn to give.” SBF did very well, working at a top quantitative trading firm, Jane Street, and he decided to work with some other effective altruists (EAs) to start a trading firm Alameda Research and eventually a cryptocurrency exchange FTX that was intimately connected with Alameda. FTX and Alameda were doing really well, ballooning in the past several years. At his peak, right before the downfall, SBF had a net worth of $26 billion.

Like many other cryptocurrency exchanges, FTX produced its own altcoin, FTT, which gives some discounts and rewards to customers and acts like a stock, and SBF had some of his company’s own assets in FTT. Trouble started in early November when CoinDesk published an article expressing concern over Alameda’s balance sheet, revealing an unhealthy amount of assets invested in FTT, which is essentially FTX’s own made-up currency. FTT-related assets amounted to over $6 billion of Alameda’s $14 billion in assets, leaving Alameda extremely vulnerable to sudden drops in investment due to its limited ability to liquidate enough assets to pay the sellers.

Unfortunately for SBF, the Binance CEO decided to sell all of Binance’s FTT tokens, collectively worth $529 million. The CEO also publicly announced the sale, triggering a bank run in which many other customers decided to sell their FTT and withdraw their funds from FTX entirely. As a result of the run, $6 billion was withdrawn from FTX within 72 hours. FTX did not have the liquid assets to cover all of this and rapidly collapsed, declaring bankruptcy.

It became apparent that Alameda’s investments were extremely risky, even though it repeatedly told customers it offered loans with “no downside” and high returns with “no risk.” It was revealed that Alameda’s risky bets were made with customer deposits, which is apparently a big “no-no.” As far as I can tell, it is not clear whether SBF actually committed fraud, but he clearly mishandled funds and misled customers about their funds, possibly in a way that violated the business’s terms and conditions.

In the fallout of this disaster, which included the closing of over 100 other organizations, the loss of many employees’ life savings, and more, effective altruism came under fire for its connection to SBF. SBF was, after all, following suggestions given by EA organizations when he decided to “earn to give.” Further, he has explicitly advocated for EA-adjacent reasoning in maximizing expected value, though he also champions a more risk-tolerant approach than EAs tend to prefer.

The question everyone is asking (and most are poorly answering) is: “Is effective altruism to be blamed for SBF’s behavior?”

Many articles in popular media have denounced effective altruism in the wake of the crash, characterizing the philanthropic approach as “morally bankrupt,” “ineffective altruism,” and “defective altruism.” They say the FTX scandal “is more than a black eye for EA,” “killed EA,” or “casts a pall on [EA].” Articles linking the scandal and EA, most of them critical of EA, have been published in the New York Times, the Guardian, the Washington Post, New York Magazine, the Economist, MIT Technology Review, Philanthropy Daily, Slate, the New Republic, and many other sites.

In this post, I am going to subject these articles and their arguments to scrutiny to see what exactly we can conclude about EA’s framework of evaluating the effectiveness of charities and careers and how they advocate for why and how we should do so in the first place. In short, my answer is: not much. There is not much we can conclude about EA from the FTX scandal.

I am going to assess these articles only insofar as they serve as critiques of effective altruism. Some of them might have additional or entirely different purposes, but they sound sufficiently negative toward EA that I will nonetheless assess whether an argument against EA can be constructed from them.

Furthermore, I want EA to be criticized in the same sense that, for any given position, I want the best arguments and evidence for and against each side to be raised and assessed in the most rigorous way. Of course, that doesn’t mean every argument is equally good. I have spent much time looking at academic critiques of effective altruism, which I (normally) find more compelling, as they are more rigorous. However, most recent online criticisms are just not good.

In this post, I will 1) give a precise characterization of effective altruism, 2) mention possibly relevant background information that informs my perspective in evaluating EA, 3) address what seems to be the most frequent, and to my mind most perplexing, concern: that SBF’s association with EA reveals that EA has an incorrect framework, 4) respond to arguments against EA that rely on the utilitarian origins of EA or its leadership, 5) clarify “ends justify the means” reasoning in recent discourse and normative ethics more broadly, 6) introduce six differences between EA and utilitarianism, showing that EA is independent of any commitment to consequentialism, and, finally, 7) respond to the concern that EA or consequentialism or longtermism can be used to justify anything and is therefore incorrect. With each argument, I try to reconstruct the best version of the critique against EA, since much of the argumentative work in these articles is left implicit or neglected entirely.

I welcome responses, better reconstructed arguments, corrections, challenges, counter-arguments, etc. Let’s dive in.

Effective Altruism Revealed

In “The Definition of Effective Altruism,”[1] William MacAskill characterizes effective altruism with two parts, an intellectual project (or research field) and a practical project (or social movement). Effective altruism is:

  1. the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources…
  2. the use of the findings from (1) to try to improve the world.

We could perhaps summarize this by saying that someone is an effective altruist only if they try to maximize the good with their resources, particularly with respect to charitable donations and career choice, since those are EA’s emphases. A few features of this definition that MacAskill emphasizes are that it is: non-normative, maximizing, science-aligned, and tentatively impartial and welfarist.

We can further distinguish between different kinds of effective altruists[2]: normative EAs think that charitable donations that maximize good are morally obligatory, and radical EAs think that one is morally obligated to donate a substantial portion of one’s surplus income to charity. Normative, radical EAs combine these two, and I independently argue for normative, radical EA in a draft paper (see n. 2). It is helpful to distinguish these kinds of EAs (minimal, normative, radical, or normative radical), where the summary of MacAskill’s definition is considered the minimal definition that constitutes the core of effective altruism, while the normative and radical commitments are auxiliary hypotheses of effective altruism. I will revisit this in the Effective Altruism is Not Inherently Utilitarian section.

Based on the characterization above, we can quickly dispel two key errors that articles repeatedly made. One error is that “effective altruism requires utilitarianism” (then “utilitarianism is false”, concluding “EA is incorrect”). The truth is that utilitarianism (trivially) implies effective altruism, but effective altruism does not imply utilitarianism. In fact, I would put effective altruism at the center of the Venn diagram of the three moral theories (see Figure 1). There are strong deontological and virtue ethical arguments to be made for effective altruism. See Effective Altruism is Not Inherently Utilitarian section for more on this, including one theory-independent and two virtue ethical arguments for EA. Also, see this 80,000 Hours Podcast episode on deontological motivations for EA.

Figure 1: A Venn diagram showing what moral theories imply effective altruism

The second important flawed criticism is that longtermism is an essential part of effective altruism. The core commitments of effective altruism do not imply longtermism, and longtermism does not require effective altruism. Instead, longtermism is an auxiliary hypothesis of EA. Longtermism could be false while EA is correct, and EA could be false while longtermism is correct. To get from EA to longtermism, you need an additional premise that “the best use of one’s resources should be put towards affecting the far future,” which longtermists defend, but EAs can reasonably reject. EA is committed to cause neutrality, so it is open to those who think non-longtermist causes should be prioritized.

As we will see, many people writing articles with criticisms of effective altruism could really stand to read effective altruism’s FAQ page, as many of the objections have been replied to at length (not to mention academic-level pieces), including the difference between EA and utilitarianism or the neglect of systemic change. Another, slightly more advanced but more precise, discussion of characterizing effective altruism is in the chapter “The Definition of Effective Altruism” by MacAskill. The very first topic MacAskill covers in the “Misunderstandings of effective altruism” section is “Effective altruism is just utilitarianism.”

My Background

I call myself an effective altruist. I think that effective altruism is obviously correct with solid arguments in its favor. It follows from very simple assumptions, such as i) it is always permissible to do the morally best thing,[3] ii) acting on strong evidence is better than acting on weak evidence, iii) if you can help someone in great need without sacrificing anything of moral significance, you should do so, etc. If you care about helping people, you are spending money on things you don’t need, and you don’t have infinite money, then you might as well give to where it helps the most. This just makes sense. On the other hand, I wouldn’t call myself a longtermist[4] (regarding either weak longtermism that says affecting the longterm future is a key moral priority or strong longtermism that says it is the most important moral priority), as I am skeptical about many of their claims. I simultaneously think most critiques I have heard of longtermism (I have not read much, if any, academic work on this) are lacking.

I have known about effective altruism since early 2021 and took the Giving What We Can pledge in March 2021. However, I was convinced of its way of thinking for several years, since early in undergrad. I have mostly been a part of Effective Altruism for Christians (EACH) more than the broader EA movement. I have not worked for an EA organization directly and do not have a local EA group to be a part of. I had never even heard of Sam Bankman-Fried until this whole scandal happened, though I heard other people talking about the FTX Future Fund (but I didn’t know what FTX was).

The closest to an “insider look” I have gotten into EA as an institutional structure is conversations with some people at an EACH retreat in San Francisco, one of whom worked for an EA startup and started an EA city chapter. The other has been involved in the EA Berkeley community. Some of the things they said suggested that there are ways that various EA suborganizations could further optimize their use of funding, but nothing super concerning.

I will be mostly looking at recent pieces insofar as they contribute to the debate about the intellectual project and moral framework of EA, as I find those to be the most interesting, important, and fundamental questions at hand. The end result of this inquiry has direct bearing on whether we should give to EA-recommended charities like GiveWell, rather than asking, e.g., whether the Center for Effective Altruism should spend less on advertising EA books, which is a different question entirely and not central to the EA project. Additionally, I have engaged with enough material on the moral frameworks in question (and normative ethics more broadly) to hopefully have something to contribute to evaluating the EA moral framework.

SBF Association Argument Against Effective Altruism

Many recent critiques of EA appear to follow this general form:

  1. Sam Bankman-Fried (SBF) engaged in extremely problematic practices.
  2. SBF was an EA/was intimately connected to EA/was a leader of EA.
  3. Therefore, EA is a bad or incorrect framework.

(1) is uncontroversial. On (2), SBF was clearly connected in a very public way to EA. The extent to which he was following or had internalized EA principles can be challenged, and I will also question the inference from (1) and (2) to (3). What exactly is the argument from SBF’s actions and connection to EA to the conclusion that EA is either inherently or practically problematic?

Was SBF Acting in Alignment with EA?

The most relevant question in this whole debacle is whether the EA framework implies that SBF acted in a morally permissible manner. The answer is this: it is extremely unlikely that, given the EA framework, what SBF did was morally permissible.

EA leaders have repeatedly repudiated the general type of behavior that SBF engaged in. In fact, William MacAskill and Benjamin Todd give financial fraud as a go-to example of what would be an impermissible career choice on an EA framework. Eric Levitz in the Intelligencer acknowledges this, noting that “MacAskill and Todd’s go-to example of an impermissible career is ‘a banker who commits fraud.’” Levitz adds that MacAskill and Todd specifically argue against “engaging in harmful economic activity to generate funds for charity.” Additionally, “they suggest that performing a socially destructive job for the sake of bankrolling effective altruism is liable to fail on its own terms.”

It is very difficult to see how a virtually guaranteed bankruptcy, when thousands of people are depending on you for their life savings, jobs, and altruistic projects, is actually the best moral choice. Fraud is just a bad idea, completely independent of effective altruism. The disagreement here may merely be on the empirical question rather than the moral question (it is notoriously difficult, at times, to separate empirical from moral disagreement, as empirical disagreement is often disguised as moral disagreement).

MacAskill calls out SBF’s behavior as not aligned with EA: “For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations.” Furthermore, “if he lied and misused customer funds he betrayed me, just as he betrayed his customers, his employees, his investors, & the communities he was a part of.”

Additionally, his practices were just clearly horrible financially. He misplaced $8 billion. John J. Ray III, who oversaw the restructuring of Enron and is now overseeing FTX, said about the FTX financial situation, “Never in my career have I seen such a complete failure of corporate controls and such a complete absence of trustworthy financial information as occurred here. From compromised systems integrity and faulty regulatory oversight abroad, to the concentration of control in the hands of a very small group of inexperienced, unsophisticated and potentially compromised individuals, this situation is unprecedented.” These practices obviously do not give a maximum expected value on any plausible view.

SBF Denies Adhering to EA?

In addition, Sam Bankman-Fried himself appeared to deny that he was actually attempting to implement an EA framework, though he later clarified his comments were about crypto regulation rather than EA. Nitasha Tiku in The Washington Post (non-paywalled) puts it as, “[SBF] denied he was ever truly an adherent [of EA] and suggested that his much-discussed ethical persona was essentially a scam.” Tiku is referring to an interview between SBF and Kelsey Piper in Vox. Piper interviewed SBF sometime in the summer, where SBF said that doing bad for the greater good does not work because of the risk of doing more harm than good as well as the 2nd order effects. Piper asked if he still thought that, to which he replied, “Man all the dumb sh*t I said. It’s not true, not really.”

When asked if that was just a front, a PR answer rather than reality, he said, “everyone goes around pretending that perception reflects reality. It doesn’t.” He also said that most of the ethics stuff was a front, not all of it, but a lot of it, since it’s just about winners and losers on the balance sheet in the end. When asked about him being good at frequently talking about ethics, he said, “I had to be. It’s what reputations are made of, to some extent…I feel bad for those who get f—ed by it, by this dumb game we woke Westerners play where we say all the right shiboleths [sic] and so everyone likes us.” He said later, though, that the reference to the “dumb game we woke Westerners play” is to social responsibility and environmental, social, and governance (ESG) criteria for crypto investment rather than effective altruism.

Perhaps the most pessimistic and antagonistic of people would say, perhaps as Tiku did, that SBF only said what he did to protect EA. The idea is that he actually was an effective altruist, believed it, but lied about it just being a front in order to help save face for EA. Tiku says that EA’s brand “helped deflect the kind of scrutiny that might otherwise greet an executive who got rich quick in an unregulated offshore industry,” also reflected in the title of the article, “The do-gooder movement that shielded Sam Bankman-Fried from scrutiny.” Since we do not have access to SBF’s mental states, I do not care to speculate much about his reasoning for saying what he said. Armchair psychoanalysis is not exactly a reliable methodology.

People argue about whether or not SBF was being truthful here. He appeared to believe he was speaking off the record, suggesting honesty. If so, then he did not believe he was actively trying to implement the EA framework (unless SBF’s answers about his ethics in the Vox interview were intended to be disconnected from the EA framework and solely about regulations, which to me is not clear either way, but they didn’t seem entirely disconnected). Ultimately, I do not think much hinges on whether SBF believed he was implementing the EA framework, since it is more important whether or not SBF’s actions are a reflection of what is inherent in the EA framework, which they are not.

Now, I have little interest in attempting to disown SBF because he is now a black sheep. There is no doubt that EA painted SBF as a paradigm case of an actor doing great moral good by using his money to invest in and donate to charity. We EAs have to own that, and we got it wrong due to our lack of knowledge about what was happening behind the scenes. Could more have been done to prevent this from happening? Probably, and EAs are taking this very seriously, doing a lot of soul searching. It is likely there will be more safeguards put into place. These are reasonable questions, but they have little to do with the moral framework of EA itself, since the EA framework still renders SBF’s gamble impermissible.

Next, I will investigate whether or not the mere connection between SBF and EA, rather than an alignment between EA’s framework and SBF’s actions, is sufficient to challenge EA’s framework.

EA is Not Tainted by SBF

Now that we know SBF’s actions do not coincide with EA principles, we can investigate how the connection between SBF and EA could be used as an argument against EA. Recent articles mostly seem to just toss the two names next to each other in an obscure way without making any clear argument, hoping that one will be tainted by the other.

An Irrelevant “Peculiar” Connection

For example, Jonathan Hannah in Philanthropy Daily says, “MacAskill claims to be an ethicist concerned with the most disadvantaged in the world, and so it seems peculiar that he was inextricably linked to Bankman-Fried and FTX given that FTX claimed to make money by trading cryptocurrencies, an activity that carries serious negative environmental consequences and may play a role in human trafficking.” The environmental consequences have to do with crypto mining that uses a lot of electricity (more than some countries as a whole), and the role in human trafficking is that virtual currencies are harder to track, so they are frequently used in black market activities. 

It is hard to overstate how much of a stretch this argument is. Here is an equivalent argument against myself (relevant background is that I studied chemical engineering at Texas A&M, which also has a strong petroleum engineering program). I say I care about the disadvantaged, yet I have many friends who went into the oil and gas industry (and some of them listened to my suggestions about charitable donations). Oil and gas bad. Curious! Further, I have many more friends who love, watch, and/or attend football and other public sporting events, and yet these events are associated with an increase in human trafficking.[5] Therefore…I don’t care about the disadvantaged? And therefore my thoughts (or knowledge of evidence like randomized controlled trials) about helping others are wrong? This looks not much better than Figure 2.

Figure 2: I am very intelligent.

Of course, effective altruists have spent a great deal of time working on the issue of weighing the moral costs and benefits of working in plausibly harmful industries vs. working for charities. This isn’t exactly their first rodeo. See 80,000 Hours: Find a Fulfilling Career That Does Good and Doing Good Better: Effective Altruism and How You Can Make a Difference (you can get a free copy of either of these at 80,000 Hours). We can also quickly consider SBF’s scenario (I am only offering my first-glance personal thoughts, not attempting to use the 80,000 Hours framework). In SBF’s case, he earned enough money from cryptocurrency to carbon offset all the cryptocurrency greenhouse emissions in all of the U.S. many times over.[6] Additionally, it is hard to see why employees (or employers) of cryptocurrency companies can be blamed for human trafficking purchases made with crypto, any more than the U.S. Treasury can be blamed for human trafficking purchases made with cash (which seems negligible at best). Plus, there are many other things he could do with the remaining sum not spent on carbon offsetting, resulting in a net good (especially compared to the other job opportunities he could have taken, many of which have comparable negative effects).

Skills in Charity Evaluation ≠ Skills in Fraud Detection in Friends

The same author also asks, “If these ‘experts’ failed to see what appears to be outright fraud committed by someone they were close to, why should we look to these utilitarians to learn how to be effective with our philanthropy?” This is again a strange conditional. Admittedly, I have not had many friends that committed billions of dollars’ worth of fraud (perhaps the author has more experience), but I would not expect them to go to their close friends and say, “Hey I’m committing fraud with billions of dollars, what do you think?” Acts like those done by SBF are done in desperation with a sinking ship, like a mouse backed into a corner, or someone with a gambling habit (especially apropos for the given situation). You get deeper into debt, take more risks, assuming and desperately hoping that it will work out in the next round. Repeat until bankruptcy. This is not something you go telling all your friends about (instead, you lie and try to siphon money from them, as was recently done by a Twitch scammer).

In addition, the skills and techniques it takes to assess the effectiveness of charities are quite different from the skills it takes to discover that your friend is committing massive fraud with his business. So, the reason we should look to EAs to be effective in philanthropy is because they have good evidence for charity effectiveness. Randomized control trials (or other comparable methods) are not exactly the tools optimized for detecting fraud in friends’ businesses.

Now, was there nothing suspicious about SBF prior to this point? No. There was some reason for suspicion. And of course, hindsight is 20/20. They evidently attempted to evaluate SBF and his ethical approach in 2018. I’m unsure of the details of this, and I don’t know how much SBF’s behavior changed over those four years. As I mentioned earlier, like the desperation of a gambler, the risks and bad behavior likely increased exponentially over time, leading to the present failure. Thus, we would expect most of the negative behavior to be heavily weighted towards 2022 rather than 2018, when he was reviewed. This debacle will likely increase scrutiny into this type of behavior (as much as possible across organizational lines), and with good reason. I won’t say EA as an organization or community is blameless here. But that doesn’t change the fact that the EA framework is the best (and correct) framework for evaluating charity effectiveness.

Without making this connection more explicit, this looks like a fallacious argument; however, like all informal fallacies, there is likely a reasonable argument form in the vicinity. Let us try to consider some of these possibilities.

EA Does Not Problematically Increase the Risk of Wrongdoing

Here is one way of putting the key inference for this argument: if something increases the probability of believing or doing something wrong, then it is bad or incorrect (and EA does this, so EA is incorrect). Of course, this is implausible, as then we couldn’t do anything (re: MacAskill’s paralysis argument). If we always had to minimize the probability of engaging in wrongdoing (through violating constraints) or false beliefs, then we should do (or believe) nothing.[7] This is one standard argument for global skepticism. If the only epistemic value is minimizing false beliefs, then having zero beliefs would ensure you have the minimum number of false beliefs, which is zero. This approach is clearly incorrect, since we do have knowledge and it is permissible to get out of bed in the morning.

Here’s another reductio: becoming a deontologist increases the probability that you will believe that we have a deontological requirement to punch every stranger we see in the face, since consequentialism does not include deontological requirements while deontology does, so deontologists need to put higher credence in variants of deontology. However, this is an implausible view that no one defends, so this mild increase in probability is uninteresting at best. 

A second, more plausible version of the inference for this argument is: if something substantially increases the probability of believing or doing something wrong, then it is bad or incorrect (and EA does this, so EA is incorrect). A random commenter on Twitter seems to suggest something like this in response to Peter Singer’s (too) brief article, which identified the criticism as being that EA is “a philosophy that tends to lead practitioners to believe the ends justify the means when that’s not the case.” In any case, this is an extremely difficult and unwieldy claim to deal with, as the empirical premise is quite difficult to substantiate. First of all, increases the probability compared to what? What is the base rate for how frequently someone commits the relevant wrong in question? And what is the probability given that one is an EA? Do we only compare billionaires? Do we compare millionaires and beyond? Do we only compare SBF to other crypto businessmen?

In the absence of a clearer, more substantiated argument, it is hard to see how this argument can succeed. Maybe we can ask: of the people we know who made incorrect assessments of ends vs. means and thought the ends sometimes justify the means, what percent of them accept the EA framework? Good luck with that investigation. Plus, we are inevitably going to end up doing armchair psychoanalysis, a notoriously unreliable method.

Furthermore, there is another response. Plausibly, a framework can substantially increase the probability of people doing something wrong, and yet the framework entails that we should not do that thing. In such a case, it is hard to see why the framework goes in the trash if it gives the correct results, even if in practice people’s attempted implementations end up doing the wrong thing.

To see this, consider the difference between a criterion of rightness, which is how we evaluate and conclude whether an action is morally right or wrong (as a third party), and a decision-making procedure, which is the method an agent consciously implements when deciding what to do. This is a standard distinction in normative ethics that defuses various kinds of objections, especially those having to do with improper motivations for action. It may be that the decision procedure that was implemented is wrong, but this does not show that normative or radical EA’s criterion of rightness is incorrect. I suspect that Richard Chappell’s meme about this distinction is actually a reference to this (or a closely related) mistake, since his other tweets and blog posts around the same time refer to similar errors in commentary on EA and the FTX scandal (such as this thread on a possible connection between guilt-by-association arguments and the inability to distinguish a criterion of rightness from a decision procedure).

Figure 3: Richard Chappell’s meme on bad EA criticism, referring to philosophers on Twitter that confuse the two

In summary, to answer Eric Levitz’s question “Did Sam Bankman-Fried corrupt effective altruism, or did effective altruism corrupt Sam Bankman-Fried?”, the answer is “Neither.” SBF did not act in a way aligned with EA, whether he thought he was or not. Until a better argument is forthcoming that SBF’s incorrect approach implies that EA’s framework is flawed, I conclude very little about the EA framework.

The EA framework is well-motivated, even on non-consequentialist grounds (as we will see later), and EA is an excellent way to help others through your charitable donations and career. To the extent that the FTX scandal makes EA look bad, it is only because of improper reasoning. There are likely additional institutional enhancements that can be implemented as protections against these kinds of disasters, but my intent here was to investigate the EA framework more than the EA practice in all of its institutional details, to which I am not privy. Therefore, I conclude that the EA framework remains correct and unmoved by the SBF and FTX scandal.

Genetic Utilitarian Arguments Against Effective Altruism

There is another set of claims I will assess in these critical articles, related to effective altruism’s connection to utilitarianism in the form of historical and intellectual origins. Inevitably, especially among opponents of utilitarianism, any connection to utilitarianism is deemed hazardous and not to be touched with a ten-foot pole. For example, several of my Christian friends have been terrified of effective altruism simply because they heard that Peter Singer is connected to it.[8]

Genetic Personal Argument Against EA

I can briefly consider this genetic personal argument against EA. The best version of the principle needed to make an inference against EA is probably something like: “if a person has been wrong about the majority of claims you have heard from that person, then the prior probability of the person being right about a new claim is fairly low.” The principle should likely be restricted to claims you encountered through a source that includes many more of that person’s beliefs, and even their arguments for the positions in question; otherwise, you risk drawing inferences from an exaggerated sample, and the principle would be false. Even then, the principle would only tell you the prior probability. You need to update on further evidence to get the posterior probability of any given claim, so it remains important to actually investigate the person’s reasons for believing the new claim before making a definitive judgment on it. Therefore, EA cannot be dismissed on a personal basis without assessing the arguments for EA, such as those referenced in the independent motivation section.
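The prior/posterior point can be made concrete with a toy Bayes’ rule calculation. This is a minimal sketch; all the probability numbers are illustrative assumptions of mine, not estimates about any real person:

```python
# Toy illustration: even if a speaker's track record gives a new claim
# a low prior, independent evidence (e.g., a strong argument) can still
# push the posterior well above that prior.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(claim | evidence)."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Low prior because the speaker has been wrong about most past claims.
prior = 0.2

# Suppose we then examine the speaker's actual argument and find it strong:
# such an argument is far more likely if the claim is true than if false.
post = posterior(prior, p_evidence_if_true=0.9, p_evidence_if_false=0.1)
print(round(post, 3))  # 0.692: the low prior alone does not settle the question
```

The point of the sketch is only structural: a low prior from a person’s track record is an input to judgment, not a verdict, and investigating the actual argument can rationally move you far from it.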

Genetic Precursor Argument Against EA

There may be another genetic argument raised against EA, which is that “the historical and intellectual precursors to EA involved utilitarian commitments, and so EA is inextricably linked to utilitarianism. Further, utilitarianism is false, and therefore EA is false.” I will examine each part of this argument in turn.

First, we need to examine the factual basis of the historical and intellectual connection between EA and utilitarianism in the first place. A number of recent critical articles point out the genetics of the EA tradition. I think facts about this connection are worth pointing out; yet it is important to clarify the contingent nature of this linkage, especially given how despised utilitarianism is to the average person. If this clarification was neglected as a kind of “poisoning the well” or “guilt by association”, shame on the author, though I do not make that assumption.

The Economist (non-paywalled) writes, “The [EA] movement…took inspiration from the utilitarian ethics of Peter Singer.” It would be more accurate to say that “the movement took inspiration from arguments using common-sense intuitions from Peter Singer, and Peter Singer is a utilitarian.” Of course, it’s much less zingy to acknowledge that the arguments from Singer that inspired EA were not utilitarian in nature (coming from his “Famine, Affluence, and Morality”), as we discuss in more detail in the utilitarian-independent motivation subsection of the Effective Altruism is Not Inherently Utilitarian section.

Rebecca Ackermann in Slate writes, “The [EA] concept stemmed from applied ethics and utilitarianism, and was supported by tech entrepreneurs like Moskovitz.” This is just a strangely worded sentence. It would make more sense to say it stemmed from arguments in applied ethics, since applied ethics is merely a field of inquiry. Moreover, utilitarianism is a moral theory; you could say EA is an implication of utilitarianism, but proposing that it stemmed from a moral theory is a bit weird. That is mostly nit-picking, and I also have absolutely no idea what the support from tech entrepreneurs has to do with anything. I guess the “technology” audience cares? Other articles appear to poison the well against EA merely by noting that rich tech billionaires support EA, as though everything tech billionaires support is automatically incorrect, though this article may not be attempting to make such a faulty ‘argument’.

Rebecca Ackermann in MIT Technology Review writes, “EA’s philosophical genes came from Peter Singer’s brand of utilitarianism and Oxford philosopher Nick Bostrom’s investigations into potential threats to humanity.” As above, the ‘genes’ of utilitarianism are connected in the person of Peter Singer but not in the arguments of Peter Singer, which is an incredibly important distinction. EA does not rely on his brand of utilitarianism, and it is important to clarify this non-reliance to a public that wants to throw up anytime the word “utilitarianism” is mentioned. Also, Bostrom’s existential risks aren’t even a core part of EA; they are a more recent development. From my perspective, this development is much less part of the genes of EA (though Bostrom was writing about longtermism- and extinction-related topics before EA) and more of a grafting into EA, at least as far as the weight or significance existential risks carry.

Now, it is quite possible that the authors of these articles were merely noting the historical roots of the movement, which is of perfectly legitimate interest. That the average person finds utilitarianism detestable, however, suggests that it would be important for neutrality’s sake to clarify that effective altruism is not, in fact, wedded to the exact beliefs of its originators or even its current leaders.

If this connection was made to critique EA, it amounts to a kind of genetic argument against effective altruism. Whether these authors were attempting this approach (implicitly) is not my primary concern, and I will not comment either way, but since this is a fairly popular type of argument to make, I will investigate it. In fact, it does seem like the general structure of recent critiques of EA due to SBF and FTX is that of a guilt-by-association argument, which I explored in the SBF Association Argument Against Effective Altruism. My best attempted reconstruction of the genetic utilitarian argument is of the form:

  1. If the originators and/or leaders of a movement espouse a view, then the movement is ineliminably committed to that view
  2. The originators and/or leaders of the EA movement espouse utilitarianism
  3. Therefore, the EA movement is ineliminably committed to utilitarianism
  4. If a movement is ineliminably committed to a false view, then the movement has an incorrect framework
  5. Utilitarianism is false
  6. Therefore, the EA movement has an incorrect framework

A Movement’s Commitments are not Dictated by the Belief Set of Its Leaders

One problem with this argument is that premise (1) is obviously false. Regarding the originators, movements can change. Additionally, leaders have many beliefs that are 1) unrelated to the movement, and 2) even when related, may neither imply nor be implied by the framework. This can be true even if the originators and leaders all share some set of views P1 = {p1,p2,p3…p7}, as the movement may be characterized by a subset of those views P2 = {p1,p2}, where P2 does not imply {p3…p7}. This is likely the case in the effective altruism movement, as P2 does not encapsulate an entire global moral structure and so does not imply the entirety of the leaders’ related views. Further, there can be a common cause of the leaders’ beliefs that is non-identical to the common cause of the beliefs at the core of the movement.

Another way to address the concern above is to consider the distinction between the core of a theory and its auxiliary hypotheses, as discussed in philosophy of science. If P2 is the core of effective altruism, it can be true that beliefs in P1 that are not in P2 are auxiliary hypotheses, which can be freely rejected by those in the movement while remaining true to EA.

There is a parallel in Christianity as well. There is substantial diversity in the movement that is Christianity, yet there is a common core of essential commitments of Christianity, called “essential doctrine”. These commitments constitute the core of the theory of Christian theism. Beyond that, we can have reasonable disagreements as brothers and sisters in Christ. As the 17th-century theologian Rupertus Meldenius said, “In Essentials Unity, In Non-Essentials Liberty, In All Things Charity.”

This disagreement extends from laymen to pastors and “leaders” of the faith as well. I think this should be fairly obvious for people that have spent much time in Christian bubbles. Laymen can and do disagree with pastors of their own denomination, pastors of other denominations, the early church fathers, etc., and they remain Christian without rejecting essential doctrine. (Of course, some church leaders and laymen are better than others at not calling everyone else heretics).

EA Leaders are Not All Utilitarians

The second point of contention with this argument is that premise (2) is also false. William MacAskill can rightly be called both an originator and a leader of EA, and he does not espouse utilitarianism. He thinks that sometimes it is better to not do what results in the overall greatest moral good. He builds in side-constraints (though sophisticated forms of utilitarianism can do a limited version of this, and consequentialism can do precisely this in effect). Furthermore, he builds in uncertainty in the form of a risk-averse expected utility function with distributed credences between (at least) utilitarianism and deontology, which motivates side-constraints.

In this section, we examined two arguments against effective altruism in view of its connection to utilitarianism, finding both arguments substantially lacking. In conclusion from the previous two sections, we do not see a successful argument against effective altruism due to its theoretical or historical connection to utilitarianism. EA remains a highly defensible intellectual project.

Do the Ends Justify the Means?

There is a need for clarity around “ends-justifying-means” reasoning and claims like “the end doesn’t justify the means.” Many recent criticisms make this claim in response to the FTX scandal. They connect effective altruism to what they see as “ends-justifying-means” reasoning in Sam Bankman-Fried (SBF) and use that as a reductio against effective altruism.

This argument fails on virtually every point.

First, let’s see what people have said about it. Eric Levitz in the Intelligencer says that “the SBF saga spotlights the philosophy’s greatest liabilities. Effective altruism invites ‘ends justify the means’ reasoning, no matter how loudly EAs disavow such logic.” Eric also writes, “Effective altruists’ insistence on the supreme importance of consequences invites the impression that they would countenance any means for achieving a righteous end. But EAs have long disavowed that position.” Rebecca Ackermann in Slate mentions, “EA needs a clear story that rejects ends-justifying-means approaches,” referencing Dustin Moskovitz’s Tweets.

As the authors above mention, EA thinkers typically, on paper at least, disavow “ends justify the means” reasoning. More recently, MacAskill in a recent Twitter thread says, “A clear-thinking EA should strongly oppose ‘ends justify the means’ reasoning.” Holden Karnofsky, co-founder of Open Philanthropy and GiveWell, in a recent forum post says, “I dislike ‘end justify the means’-type reasoning.” This explicit rejection is not solely in the wake of the downfall of FTX; MacAskill 2019 in “The Definition of Effective Altruism” says, “as suggested in the guiding principles, there is a strong community norm against ‘ends justify the means’ reasoning.”[9] I talk more substantively about the use of side constraints in EA in the 4th difference between EA and utilitarianism below.

Of course, critics of EA readily acknowledge that EA, on paper, disavows ends-means reasoning. The problem, they think, is that EA “invites” ends-means reasoning, or that EA “invites the impression that they would countenance any means for achieving a righteous end” over and against EA’s claims. 

All of the above discussion fails to acknowledge two key points, owing to the ambiguity in what “ends justify the means” in fact means. These two points become obvious once we adequately explore ends-means reasoning[10]; they are: (1) some ends justify some means, and (2) “ends justify the means” is a problem for every plausible moral theory.

Some Ends Justify Some Means

Obviously, some ends justify some means. Let’s say I strongly desire an ice cream cone and consuming it would make me very happy for the rest of the day with no negative results. Call me crazy, but I submit to you that this end (i.e., Ice Cream) justifies the means of giving $1 to the cashier. If this is correct, then some ends justify[11] some means. Therefore, it is false that “the end never justifies the means.”

Various ethicists have pointed this out. Joseph Fletcher says that people “take an action for a purpose, to bring about some end or ends. Indeed, to act aimlessly is aberrant and evidence of either mental or emotional illness.”[12] Though it may be that this description, in line with the “Standard Story” of action in action theory, entails a teleological conception of reasons that has distorted debates in normative ethics in favor of consequentialism, as Paul Hurley has argued.[13]

Nonetheless, Fletcher is right that even this commonsense thinking on everyday justification for any action “leads one to wonder how so many people may say so piously, ‘The end cannot justify the means.’ Such a result stems from a misinterpretation of the fundamental question concerning the relationship between ends and means. The proper question is – ‘Will any end justify any means?’ – and the necessary reply is negative.”[14] It is obviously false that any end justifies any means, and everyone in the debate accepts that, including the hardcore utilitarian.

What happens when we raise the stakes of either the end or the means? 

Some Ends Justify Trivially Negative Means

We can consider raising the moral significance of the end in question. Let us consider the end of preventing the U.S. from launching nuclear missiles at every other country on the globe (i.e., Nuclear Strike). Although lying is generally not morally good, I submit that it is morally permissible to fill in your birthday incorrectly on your Facebook account if it prevents Nuclear Strike. An end of great moral magnitude like Nuclear Strike justifies a mildly negative means like a single instance of deception on a relatively unimportant issue. Therefore, a very good moral end justifies a mildly negative means.

Similarly, when James Sterba considers the Pauline Principle that we should not do evil so that good may come of it, he acknowledges it is “rejected as an absolute principle…because there clearly seem to be exceptions to it.” Sterba gives two seemingly obvious cases where doing evil so that good may come “is justified when the resulting evil or harm is: (1) trivial (e.g., as in the case of stepping on someone’s foot to get out of a crowded subway) or (2) easily reparable (e.g., as in the case of lying to a temporarily depressed friend to keep him from committing suicide).”[15]

No End Can Justify Any Means

Further, there is no end that can justify any means. For any given end, we can consider means that are far worse. For example, consider the end of saving 1 million people from death. Is any means justified to save them? Of course not. Killing 1 billion people, for instance, would not be justified as a means to save 1 million people from death. For any end, we can consider a means 10x as bad as the end, and the result is that the means is not justified. From one perspective, in the scenario of killing 1 to save 1 million, the absolutist deontologist sanctions a terrible means (i.e., letting 1 million people die) for the end of not killing 1; of course, they would not word it this way, but it amounts to the same thing. Ultimately, for any particular end, no matter how good, it is false that we may permissibly use any means whatsoever to achieve it.

As Joseph Fletcher (a consequentialist) said, “‘Does a worthy end justify any means? Can an action, no matter what, be justified by saying it was done for a worthy aim?’ The answer is, of course, a loud and resounding NO!” Instead, “ends and means should be in balance.”[16]

A Sufficiently Positive End Can Justify a Negative Means

Let us investigate further just how negative of means can be justified. Let us reconsider Ice Cream with a more negative means. Clearly, Ice Cream does not justify shooting someone non-fatally in the leg to get the ice cream cone. For an end to even possibly justify non-fatal shooting, it would require something much more significant. Is there any scenario that would make a non-fatal shooting morally permissible? I think there is. Consider a scenario that is rigged such that if you non-fatally shoot a person, one billion people will be saved from a painful death. It should be obvious that preventing the death of a billion people does justify shooting someone non-fatally in the leg. Therefore, it is possible for a massively positive end to justify a negative means.

Uh oh! Did I just admit I am a horrible person? I think it is okay to shoot someone (non-fatally) if the circumstances justify it, after all. Of course, most people think it is permissible to kill in some cases, such as self-defense or limited instances of just war.[17] After explaining the typical EA stance on deferring to constraints including a document by MacAskill and Todd, and how MacAskill said that SBF violated them, Eric Levitz in the Intelligencer complains that “yet, that same document suggests that, in some extraordinary circumstances, profoundly good ends can justify odious means.” My response is, “Yes, and that is trivially correct.” If I could prevent 100,000,000 people from being tortured and killed by slapping someone in the face, I would and should do it. And that shouldn’t be controversial.

As MacAskill and Todd note (which the author also quotes), “Almost all ethicists agree that these rights and rules are not absolute. If you had to kill one person to save 100,000 others, most would agree that it would be the right thing to do.” If you will sacrifice a million people to save one person, you are the one who needs to have your moral faculties reexamined. Killing a person, while more evil than letting a person die, is not 999,999 times more evil than letting one person die. Probably, the value difference between killing a person and letting a person die is much less than the value of a person, i.e., the disvalue of letting a person die. Therefore, letting two people die is already worse than killing one person, and it is even more obvious that letting 1,000,000 people die is worse than killing one person.
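The arithmetic here can be made explicit with stipulated toy values. This is only a sketch of the inequality the paragraph relies on; the specific numbers are my assumptions, and only the ordering matters:

```python
# Stipulated toy values (assumptions for illustration only).
# Let the disvalue of letting one person die be 1 unit.
letting_die = 1.0

# Killing is worse than letting die, but by less than the value of a life:
# disvalue(killing) = letting_die + delta, with 0 < delta < 1.
delta = 0.5
killing = letting_die + delta

# Then letting two die already outweighs killing one...
assert 2 * letting_die > killing

# ...and letting 1,000,000 die is vastly worse than killing one.
assert 1_000_000 * letting_die > killing
```

Any choice of delta below the value of one life yields the same ordering, which is all the argument needs.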

I do not believe I have said much that is particularly controversial when looking at these manufactured scenarios.[18] We are stipulating in these tradeoff considerations that the tradeoff is actually a known tradeoff and there is no other way, etc.

In sum, the ends don’t justify the means…except, of course, when they do. Ends don’t never justify the means and don’t always justify the means, and virtually no one in this debate thinks otherwise. Almost everyone thinks ends sometimes justify the means (depending on the means). What we have to do is assess the ends and assess the means to discern when exactly what means are justified for what ends.

Absolutism is the Problem

This whole question has very little to do with consequentialism or deontology, contrary to popular belief, and everything to do with absolute vs relative ethics (not individually or culturally relative, but situationally relative).[19] There is a debate internal to non-consequentialist traditions about this question of when the ends justify the means. For example, with deontology there is what is called threshold or moderate deontology, and in natural law theory there is a view called proportionalism. Neither of these are absolutist views, and both views include the results of actions as justification for some means. Internal to these non-consequentialist families of theories typically characterized as absolutist remains the exact same debate about ends-means reasoning. In fact, the most plausible theories in all moral families allow extreme (implausible but possible) cases to violate absolute rules.

For example, it is uncommon to find a true absolutist deontologist among contemporary ethicists. As Aboodi, Borer, and Enoch point out, “hardly any (secular) contemporary deontologist is an absolutist. Contemporary deontologists are typically ‘moderate deontologists,’ deontologists who believe that deontological constraints come with thresholds, so that sometimes it is impermissible to violate a constraint in order to promote the good, but if enough good (or bad) is at stake, a constraint may justifiably be infringed.”[20] In other words, almost all (secular) deontologists also think the ends sometimes justify the means. Absolutism is subject to numerous paradoxes and counterexamples discussed previously and in the next subsection (see Figure 4).

Figure 4: Absolutism in a nutshell

Paradoxes of Absolute Deontology

Why is it that even deontologists think there are exceptions to constraints? Because absolute deontology is subject to substantial paradoxes and implausible implications that render it unpalatable, even worse than the alternatives. One example is the problem of risk, which is that any action raises the probability of violating absolute constraints, and no action gives 100% certainty of violating constraints. Therefore, it looks like the absolutist needs to say either that any action that produces a risk of violation is wrong, leading to moral paralysis since you would be prohibited from taking any action, or pick an (arbitrary) risk threshold, which implies that, in fact, two wrongs do make a right, and two rights make a wrong (in certain cases).[21] There have been responses, but what is perhaps the best response, stochastic dominance to motivate a risk threshold, is still subject to a sorites paradox that again appears to render absolutism false.[22] MacAskill offers a distinct but related argument from cluelessness that deontology implies moral paralysis.[23]

Alternatively, we can merely consider cases of extreme circumstances just like the one I gave earlier. A standard example is lying to a visitor to your house in order to prevent someone from being murdered, which Kant famously and psychopathically rejected. Michael Huemer considers a case where aliens will kill all 8 billion people on earth unless you kill one innocent person. Should you do so? The answer, as Huemer and any sane person agrees, is obviously yes.[24] (If the reader still thinks the answer is no, add another 3 zeros to the number of people you are letting die and ask yourself again. Repeat until you reject absolutism). These types of cases show quite quickly and simply that absolutism is not a plausible position in the slightest, and it is justified to do something morally bad if it results in something good enough (or, alternatively, prevents something way worse). There are other problems for absolutist deontology I neglect here.[25]

Of course, in a trivial sense, consequentialists are absolutist: it is always wrong to do something that does not result in the most good. However, that is not what anyone means when they call theories absolutist, which refers to theories that render specific classes of actions (e.g., intentional killing, lying, torture, etc.) as always impermissible.[26]

In summary, any plausible moral theory or framework has to reckon with the fact that something negative is permissible if it prevents something orders of magnitude worse. When people say “the end doesn’t justify the means” when condemning an action, they, in practice, more frequently mean those ends don’t justify those means. Equivalently, they mean that the ends don’t justify the means in this circumstance, rather than never, as the latter results in a completely implausible view.

Application to the FTX Scandal

So, where does that leave us in the FTX scandal? Everyone in the debate can say that, in this case, the ends did not justify the means. Although criticizing EA, Eric Levitz in the Intelligencer appears to challenge this, saying perhaps SBF may reasonably be considered justified if there are exceptions to absolute rules, “In ‘exceptional circumstances,’ the EAs allow, consequentialism may trump other considerations. And Sam Bankman-Fried might reasonably have considered his own circumstances exceptional,” describing the uniqueness of SBF’s case. Levitz asks, “If killing one person to save 100,000 is morally permissible, then couldn’t one say the same of scamming crypto investors for the sake of feeding the poor (and/or, preventing the robot apocalypse)?” If I were to put this into an argument, it may be: (1) if ends justify the means sometimes, then SBF’s actions are justified, (2) if EA, then ends justify the means sometimes, (3) if EA, then SBF’s actions are justified (or reasonably considered so).

There are several problems here (found in premise 1). First, it is not consequentialism that may trump other considerations, but consequences.[27] The significance of the difference is that any moral theory can say (and the most plausible ones do say) that consequences can, in the extreme, trump other considerations, as we saw earlier. Second, SBF’s circumstances may be exceptional in the generic sense of being rare and unique, but the question is “are they exceptional in the relevant sense,” which is that his circumstances are such that violating the constraint of committing illegal actions or fraud would result in a sufficient overall good to warrant breaking the constraint. It is a general rule that fraud is not good in the long run for your finances or moral evaluation.

Third, it is much too low a bar to say that it is reasonable for SBF to think that his circumstances were exceptional in the relevant sense, but we are (or should be) much more interested in whether SBF was correct in thinking his circumstances were exceptional in the relevant sense. An assessment of irrationality requires us to know his belief structure and evidence base for this primary claim as well as many background beliefs that informed his evidence and belief structure of the primary claim (and possibly knowing the correct view of decision theory, which is highly controversial). 

Fourth, one can say anything one wants (see next section Can EA/Consequentialism/Longtermism be Used to Justify Anything?). We are and should be interested in what one can accurately say about such a comparison between killing one person to save 100,000 and ‘scamming crypto investors for the sake of feeding the poor (and/or, preventing the robot apocalypse).’ Fifth, it is unlikely that one can accurately say that these are comparably similar, such that it is incredibly unlikely that SBF was correct in his assessment. This rhetorical question about comparing saving 100,000 lives vs scamming crypto investors does very little to demonstrate otherwise.

SBF’s approach, which approved of continuing double-or-nothing bets in perpetuity, evidently did not consider the fallout associated with the nearly inevitable bankruptcy and how it would set the movement back, which would render each gamble less than net zero. Second, almost everyone agrees his approach was far too risk-loving. Nothing about EA, utilitarianism, decision theory, etc. suggests that we should take this risk-loving approach. As MacAskill and other EA leaders argue, we should be risk-averse, especially with the types of scenarios SBF was dealing with (relevant EA forum post). Plus, there is the disvalue associated with breaking the law and the chance of further lawsuits.
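The ruin point can be illustrated with a minimal simulation. This is a sketch under my own simplifying assumptions, namely a fair 50/50 double-or-nothing bet repeated indefinitely from one unit of bankroll; real bets and payoffs would differ, but the structural point survives:

```python
import random

# Even though each fair double-or-nothing bet has zero expected value,
# a strategy that never stops betting can only terminate at zero:
# ruin is all but certain, and the rare survivor holds astronomically
# large (but vanishingly improbable) winnings.

def gamble_forever(max_rounds=1_000, seed=None):
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(max_rounds):
        if rng.random() < 0.5:
            bankroll *= 2       # win: double the stake
        else:
            return 0.0          # lose: everything was staked, so ruin
    return bankroll

# Across many simulated gamblers, essentially all go bankrupt long
# before max_rounds (the chance of surviving n rounds is 2**-n).
ruined = sum(gamble_forever(seed=i) == 0.0 for i in range(10_000))
print(ruined / 10_000)  # 1.0: every simulated gambler is eventually ruined
```

This is one way to see why “each gamble less than net zero” follows once the fallout of the near-certain bankruptcy is priced in: the strategy’s endpoint is ruin with probability approaching one.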

Levitz appears to accept the above points and concedes that it would be unfair to attribute SBF’s “bizarre financial philosophy” to effective altruism, and that EA leaders would likely have strongly disagreed with implementing this approach with his investments. Given Levitz’s acceptance of this, it is unclear what the critique is supposed to be from the above points. Levitz does move to another critique though, which is that EAs have fetishized expected value calculations, which I will address in the next section.

In summary, the ends sometimes justify the means, but violating constraints almost never actually produces the best result, as EA leaders are well-aware. Just because SBF made a horrible call does not mean that the EA framework is incorrect, as the typical EA framework makes very different predictions that would not include such risk-loving actions.

Effective Altruism is Not Inherently Utilitarian

There was a lot of confusion in these critiques about the connection between utilitarianism and effective altruism. Many of these articles assume that effective altruism implies or requires utilitarianism, such as (not including the quotes below) Erik Hoel, Elizabeth Weil in the Intelligencer, Rebecca Ackermann in MIT Technology Review (see a point-by-point response here), Giles Fraser in the Guardian, James W. Lenman in IAI News, and many more. I will survey and briefly respond to some individual quotations to this effect, showcasing the differences between effective altruism and utilitarianism. Throughout, I will extensively refer to MacAskill’s 2019 characterization of effective altruism in “The Definition of Effective Altruism.”

As a first example, Linda Kinstler in the Economist (non-paywalled) writes “[MacAskill] taught an introductory lecture course on utilitarianism, the ethical theory that underwrites effective altruism.” Nitasha Tiku in The Washington Post (non-paywalled) writes, “[EA’s] underlying philosophy marries 18th-century utilitarianism with the more modern argument that people in rich nations should donate disposable income to help the global poor.” It is curious to call it 18th-century utilitarianism when the version of utilitarianism EA is closest to (yet still quite distinct from) is “rule utilitarianism”, only hints of which were found in the 19th century with its primary development in the 20th century. Furthermore, while it may be a modern development that one can easily transfer money and goods across continents, it is certainly no modern argument that the wealthy should give disposable income to the poor, including across national lines. The Parable of the Good Samaritan explicitly advocates helping across national lines, the Old Testament commanded concern for the poor by those with resources (for a fuller treatment, see Christians in an Age of Wealth: A Biblical Theology of Stewardship), and “the early Church Fathers took luxury to be a sign of idolatry and of neglect of the poor.”[28] The fourth-century St. Ambrose condemns the rich’s neglect of the poor: “You give coverings to walls and bring men to nakedness. The naked cries out before your house unheeded; your fellow-man is there, naked and crying, while you are perplexed by the choice of marble to clothe your floor.”[29]

Timothy Noah in The New Republic writes, “E.A. tries to distinguish itself from routine philanthropy by applying utilitarian reasoning with academic rigor and a youthful sense of urgency,” and also “Hard-core utilitarians tend not to concern themselves very much with the problem of economic inequality, so perhaps I shouldn’t be surprised to find little discussion of the topic within the E.A. sphere.” It is blatantly false that economic inequality is of little concern to utilitarians (as explained in the link that the author provided himself), including “hard-core” ones, as the state of economic inequality in the world leads to great suffering and death. Now, it is correct that utilitarians do not see equality as an intrinsic good, but merely an instrumental good. Yet, I do not see the problem with rejecting equality’s intrinsic value rather than its instrumental value; it would be surprising if, on a perhaps extreme version of egalitarianism, there being two equally unhappy people were better than one slightly happy person and one extremely happy person. Alternatively, we should be much more concerned that people’s basic needs are met, so they are not dying of starvation and preventable disease, than we should be that, once everyone’s needs are met, the rich have equal amounts of frivolous luxuries, as sufficientarianism well accommodates. Finally, as MacAskill 2019 notes, EA is actually compatible with utilitarianism, prioritarianism, sufficientarianism, and egalitarianism (see next section).

Eric Levitz in the Intelligencer states, “Many people think of effective altruism as a ruthlessly utilitarian philosophy. Like utilitarians, EAs strive to do the greatest good for the greatest number. And they seek to subordinate common-sense moral intuitions to that aim.” EAs are not committed to doing the greatest good for the greatest number (see the next section for clarification), and they do not think any EA commitments subvert commonsense intuitions. In fact, EAs attempt to take common sense intuitions seriously along with their implications. The starting point for EA was originally that, if we can fairly easily save a drowning child, we should.[30] This is hardly a counterintuitive claim. Then, upon investigating the relevant similarities between this situation and charitable giving, we get effective altruism.

Jonathan Hannah in Philanthropy Daily asks, “why should we look to these utilitarians to learn how to be effective with our philanthropy?” First, we should look to EAs because EAs have evidence backing up claims of effectiveness. Secondly, again, EAs are not committed to utilitarianism, though many EAs are, in fact, utilitarians.

Theo Hobson in the Spectator claims, “Effective altruism is reheated utilitarianism… Even without the ‘longtermist’ aspect, this new utilitarianism is a thin and chilling philosophy.” Beyond the false utilitarianism claim, the accusation of thinness is surprising, since there are substantial and life-changing implications of taking EA seriously. These are profound implications that have resulted in protecting 70 million people from malaria, giving $100 million directly to those in extreme poverty, distributing hundreds of millions of deworming treatments, freeing 100 million hens from a caged existence, and much more. Collectively, GiveWell estimates that the $1 billion donated through them will save 150,000 lives.

The aforementioned claims are misguided, as not everything that is an attempt to do the morally best thing is utilitarianism (see Figure 5).

Figure 5: Utilitarianism is a specific moral theory (or, rather, a family of specific theories), actually

Now, I seek to make good on my claim that effective altruism and utilitarianism are distinct. There are six things that distinguish EA from a reliance on utilitarianism, and I will examine each in turn:

  1. [Minimal] EA does not make normative claims
  2. EA is independently motivated
  3. EA does not have a global scope
  4. EA incorporates side constraints
  5. EA is not committed to the same “value theory”
  6. EA incorporates moral uncertainty

[Minimal] EA Does Not Make Normative Claims

Effective altruism is defined most precisely in MacAskill 2019, who clarifies explicitly that EA is non-normative. MacAskill says, “Effective altruism consists of two projects [an intellectual and a practical], rather than a set of normative claims.”[31] The idea is that EA is committed to trying to do the best with one’s resources, but not necessarily that it is morally obligatory to do so. Part of the reason for this definition is to align with the preferences and beliefs of those in the movement. Surveys of both leaders and members of the movement, in 2015 and 2017 respectively, suggested that a non-normative definition may be more representative of current EA adherents. Furthermore, it is more ecumenical, which is a desirable trait for a social movement as it expands.

Of course, a restriction to non-normative claims is limited, and Singer’s original argument that prompted many towards EA was explicitly normative in nature. His premises included talk of moral obligation. Many people in EA do think it is morally obligatory to be an EA. Thus, I think it is helpful to distinguish between different types or levels of EA, including minimal EA, normative EA, radical EA, and radical, normative EA.

Minimal EA makes no normative claims, while normative EA includes conditional obligations.[32] Normative EA claims that if one decides to donate, one is morally obligated to donate to the most effective charities, but it does not indicate how much one should donate. This could be claimed to be absolute, a general rule of thumb, or somewhere in between. Radical EA, on the other hand, includes unconditional obligations, but no conditional obligations. Brian Berkey, for example, argues that effective altruism is committed to unconditional obligations of beneficence.[33] Radical EA, as I characterize it, says one is morally obligated to donate a substantial portion of one’s surplus income to charities. Finally, radical, normative EA (RNEA) combines conditional and unconditional obligations of beneficence, claiming one is morally obligated to donate a substantial portion of one’s surplus income to effective charities. I expand on and defend these further elsewhere.[34]

Thus, while minimal EA does not include normative claims, there are expanded versions of EA that include conditional and/or unconditional obligations of beneficence. Minimal EA, then, constitutes the core of the EA theory, while these claims of obligations constitute auxiliary hypotheses of the EA theory. Since the core of EA does not include normative claims, it cannot be identical to (any version of) utilitarianism, whose core includes a normative claim to maximize impartial welfare.

EA is Independently Motivated

Effective altruism is distinct from utilitarianism in that EA can be motivated on non-consequentialist grounds. In fact, even Peter Singer’s original argument, inspiring much of EA, was non-consequentialist in nature. Singer’s original “drowning child” argument relied only on a simple, specific thought experiment, midlevel principles (principles that stand in between specific cases and moral theories) proposed to explain the intuition from the thought experiment, and a further conclusion derived by comparing relevant similarities between the thought experiment and a real-world situation, all of which is standard procedure in applied ethics. Of course, this article has been critically responded to in the philosophy community many, many times, some more revolting[35] than others,[36] but many (such as I) still find it a compelling and sound argument that also demonstrates EA’s independence from utilitarianism.

Theory-Independent Motivation: The Drowning Child

Singer’s original thought experiment is: “if I am walking past a shallow pond and see a child drowning in it, I ought to wade in and pull the child out. This will mean getting my clothes muddy, but this is insignificant, while the death of the child would presumably be a very bad thing.”[37]

Singer proposes two variants[38] of a midlevel principle that would explain this obvious result:

  • If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it.

He also proposed a weaker principle,

  • If it is in our power to prevent something very bad from happening, without thereby sacrificing anything morally significant, we ought, morally, to do it.

These principles are extremely plausible, quite intuitive, and would explain why we have the intuitions we do in various rescue cases comparable to the above. Next, Singer defended why this principle can be extended to the case of charitable giving by examining the relevant similarities. The reasoning is that, given the existence of charities, we are in a position to prevent something bad from happening, e.g., starvation and preventable disease. We can do something about it by ‘sacrificing’ our daily Starbucks, monthly Netflix subscription, yearly luxury vacations, or even more clearly unnecessary purchases, such as additional sports cars or boats that are not vocationally necessary. None of these things are (obviously) morally significant, and they are certainly not of comparable moral importance to the lives of other human beings. Therefore, we have a moral obligation to take action in donating to effective charities, particularly from the income that we are using for surplus items.

Notice that we did not appeal to any kind of utilitarian reasoning in the above argument, and one can accept either of Singer’s midlevel principles without accepting utilitarianism. This example shows how effective altruism can be independently motivated apart from utilitarianism. This fact was pointed out previously by Jeff McMahan when he noticed that even philosophical critiques of EA make this false assumption of reliance on utilitarianism. McMahan, writing in 2016, said, “It is therefore insufficient to refute the claims of effective altruism simply to haul out [Bernard] Williams’s much debated objections to utilitarianism. To justify their disdain, critics must demonstrate that the positive arguments presented by Singer, Unger, and others, which are independent of any theoretical commitments, are mistaken.”[39]

Martin Luther’s Drowning Person

Interestingly, the Christian has a surprising connection to Singer’s Drowning Child thought experiment, as a nearly identical thought experiment and comparison was made by Martin Luther in the 16th century.[40] In his commentary on the 5th commandment “Thou shalt not kill” in The Large Catechism, Luther connects the commandment to Jesus’ words in Matthew 25, “For I was hungry and you gave me nothing to eat, I was thirsty and you gave me nothing to drink, I was a stranger and you did not invite me in, I needed clothes and you did not clothe me, I was sick and in prison and you did not look after me.” Luther then gives a drowning person comparison: “It is just as if I saw some one navigating and laboring in deep water [and struggling against adverse winds] or one fallen into fire, and could extend to him the hand to pull him out and save him, and yet refused to do it. What else would I appear, even in the eyes of the world, than as a murderer and a criminal?”

Luther condemns in the strongest words those who could “defend and save [his neighbor], so that no bodily harm or hurt happen to him” and yet do not do it. He says, “If…you see one suffer hunger and do not give him food, you have caused him to starve. So also, if you see any one innocently sentenced to death or in like distress, and do not save him, although you know ways and means to do so, you have killed him.” Finally, he says, “Therefore God also rightly calls all those murderers who do not afford counsel and help in distress and danger of body and life, and will pass a most terrible sentence upon them in the last day.”

Virtue Theoretic Motivation: Generosity and Others-Centeredness

Beyond a theory-independent approach to motivate EA, we can also employ a non-consequentialist theory, virtue ethics, to motivate EA. Some limited connections between effective altruism and virtue ethics have been previously explored,[41] but I will briefly give two arguments for effective altruism from virtue ethics. Specifically, I will argue from the virtues of generosity and others-centeredness for normative EA and radical EA, respectively. Thus, if both arguments go through, the result is radical, normative EA.

First, I assume the qualified-agent account of the criterion of right action[42] of virtue ethics given by Rosalind Hursthouse.[43] Second, I employ T. Ryan Byerly’s accounts of both generosity and others-centeredness.[44] Both of these, especially from the Christian perspective, are virtues. The argument from generosity is:

  1. An action is right only if it is what a virtuous agent would characteristically do
  2. A virtuous agent would characteristically be generous
  3. To be generous is to be skillful in gift-giving (i.e., giving the right gifts in right amounts to the right people)
  4. A charitable donation is right only if it is skillful in gift-giving
  5. A charitable donation is skillful in gift-giving only if it results in maximal good
  6. A charitable donation is right only if it results in maximal good (NEA)

The argument from others-centeredness is:

  1. An action is right only if it is what a virtuous agent would characteristically do
  2. A virtuous agent would characteristically be others-centered
  3. To be others-centered includes treating others’ interests as more important than your own
  4. Satisfying one’s interests in luxuries before trying to satisfy others’ interests in basic needs is not others-centered
  5. An action is right only if it prioritizes others’ basic needs before your luxuries
  6. A substantial portion of one’s surplus income typically goes to luxuries
  7. Therefore, a person is morally obligated to donate a substantial portion of one’s surplus income to charity (REA)

I don’t have time to go into an in-depth defense of these arguments (though see my draft paper [pdf] for a characterization and assessment of luxuries as in the above argument, as well as independent arguments for premises 5-7 regarding others-centeredness), but it at least shows how one can reasonably motivate effective altruism from virtue ethical principles.

EA Does Not Have a Global Scope

Unlike utilitarianism, effective altruism is not a global moral theory in that it cannot, in principle, give deontic outcomes (i.e., right, wrong, obligatory, permissible, etc.) to any given option set (a set of actions that can be done by an agent at some time t). Utilitarianism is a claim about what explains why any given action is right, wrong, obligatory, etc., as well as the truth conditions for the same. In other words, utilitarianism makes a claim of the form, an action is right if and only if xyz, which are the truth conditions of deontic claims, and a claim of the form, an action is right because abc, which is the explanatory claim corresponding to the structure of reasons of the theory (that explains why actions are right/wrong).

While minimal EA trivially does not match utilitarianism in making global normative claims, even radical, normative EA does not govern every possible option set, nor does it propose to. At the most, EA (including RNEA) makes claims about actions related to (1) charitable donations and (2) career choice. As MacAskill 2019 says, “Effective altruism is not claiming to be a complete account of the moral life.” There are many actions, such as those governing social interactions, that are out of the scope of EA and yet within the scope of utilitarianism.

Therefore, utilitarianism and effective altruism differ in their scopes, as EA is not a comprehensive moral theory, so EA does not require utilitarianism.

EA Incorporates Side Constraints

In “The Definition of Effective Altruism,” MacAskill 2019 is clear that EA includes constraints, and not any means can be justified for the greater good. MacAskill says that the best course of action, according to EA, is an action “that will do the most good (in expectation, without violating any side constraints).”[45] He only considers a value maximization where “whatever action will maximize the good, subject to not violating any side constraints.”[46] He says that EA is “open in principle to using any (non-side-constraint violating) means to addressing that problem.”[47]

In What We Owe the Future, MacAskill says that “naïve calculations that justify some harmful action because it has good consequences are, in practice, almost never correct” and that “plausibly it’s wrong to do harm even when doing so will bring about the best outcome.”[48] On Twitter, MacAskill shared relevant portions of his book on side constraints when responding to the FTX scandal, including the page shown below. He states that “concern for the longterm future does not justify violating others’ rights,” and “we should accept that the ends do not always justify the means…we should respect moral side-constraints, such as against harming others. So even on those rare occasions when some rights violation would bring about better longterm consequences, doing so would not be morally acceptable.”[49]

Figure 6: Excerpt from What We Owe the Future

Utilitarianism, on the other hand, does not have side constraints, or at least, not easily. Act utilitarianism (which is normally the implied view if the modifier is neglected) certainly does not. However, rule utilitarianism can function as a kind of constrained utilitarianism in two ways; one way is strong rule utilitarianism that has no exceptions, which is absolutist. Another is with weak rule utilitarianism that still allows some exceptions. MacAskill’s wording above makes it sound like there would not be any exceptions, “even when some rights violation would bring about better longterm consequences.”[50]

However, elsewhere, he makes it sound as though there can be exceptions. He (with Benjamin Todd) says, “Almost all ethicists agree that these rights and rules are not absolute. If you had to kill one person to save 100,000 others, most would agree that it would be the right thing to do.” I am in perfect agreement there. I think, as I discuss below in the Do the Ends Justify the Means? section, absolute rules are trivially false. In fact, MacAskill has an entire paper (with Andreas Mogensen) arguing that absolute constraints lead to moral paralysis because, to minimize your chance of violating any constraints, you should do nothing.[51] It is likely that MacAskill thinks there are extreme exceptions, though these would never happen in real life.

Finally, there remains a distinction between constrained effective altruism and rule utilitarianism, and that distinction is the same difference as between a consequentialized deontological theory and a standard deontological theory. The difference is that even rule utilitarianism explains the wrongness of all wrong actions ultimately by appeal to consequences (we should follow rules whose acceptance or teaching or following would lead to the best consequences), while constrained effective altruism explains the wrongness of constraint violations by appeal to constraints and to rights without a further justification in terms of the overall better outcomes.

In conclusion, EA incorporates side constraints, though with exceptions (as any plausible ethical theory would allow), while act utilitarianism does not. In addition, while EA has some structural similarities to rule utilitarianism, EA explains the wrongness of actions differently than utilitarianism does, which turns out to be the key difference between (families of) moral theories,[52] and thus the two are quite distinct.

EA is Not Committed to the Same Value Theory

The fifth reason effective altruism is not utilitarian is that the two are not committed to identical value theories. One reason is that EA is not, strictly speaking, committed to a value theory at all. However, that does not mean the value theory is a free-for-all. EA is compatible with other theories in the vicinity of utilitarianism, such as prioritarianism, sufficientarianism, and egalitarianism.

Utilitarianism is committed to impartial welfarism in its value theory. Welfarism includes a range of views about what makes someone well-off, including hedonism, desire or preference satisfactionism, and objective list theories. Hence, we can have hedonistic utilitarianism, preference utilitarianism, or objective list utilitarianism. Further, utilitarianism is committed to a simple aggregation function that makes good equal to the sum total of wellbeing, as opposed to a variously weighted aggregation function, such as that of prioritarianism, which gives additional weight to the wellbeing of those worse off.
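The difference in aggregation functions can be made concrete with a toy calculation. The square-root transform below is one illustrative concave weighting I am assuming for the prioritarian, not a canonical choice:

```python
# Aggregation functions: utilitarianism sums wellbeing directly, while
# prioritarianism sums a concave transform of wellbeing, so that a unit
# of wellbeing counts for more when it goes to someone worse off.
import math

def utilitarian_value(wellbeings):
    # Simple (unweighted) sum of individual wellbeing levels.
    return sum(wellbeings)

def prioritarian_value(wellbeings):
    # Concave transform (illustrative: square root) before summing.
    return sum(math.sqrt(w) for w in wellbeings)

equal = [50, 50]
unequal = [99, 1]
print(utilitarian_value(equal), utilitarian_value(unequal))    # 100 100
print(prioritarian_value(equal) > prioritarian_value(unequal))  # True
```

Utilitarianism is indifferent between the two distributions, since the totals are equal, while the prioritarian aggregation prefers the equal one because gains to the worse-off count for more.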

The value theory that MacAskill 2019 describes in the definition of EA is “tentative impartial welfarism,”[53] where the ‘tentative’ implies this is a first approximation or working assumption. MacAskill expresses the difficulty here that arises from intra-EA disagreement: we do not want the scope of value maximization to be too large so that it can include maximizing whatever the individual wants, but we do not want the scope of maximization too small to exclude a substantial portion of the (current or future) movement.

MacAskill seems to do some hand-waving on this point. When defending EA as distinct from utilitarianism, he says, “it does not claim that wellbeing is the only thing of value,” so EA is compatible “with views on which non-welfarist goods are of value.”[54] However, two pages previously, his “preferred solution” of “tentative impartial welfarism…excludes non-welfarist views on which, for example, biodiversity or art has intrinsic value.” On the same page, he suggests that if the EA movement became convinced that “the best way to do good might well involve promoting non-welfarist goods, then we would revise the definition to simply talk about ‘doing good’ rather than ‘benefiting others.’”[55]

Perhaps one way of reconciling these is to say that, while “tentative impartial welfarism…excludes non-welfarist views,” there is instead a tentative commitment to ‘impartial welfarism’, as opposed to a commitment to ‘tentative impartial welfarism’, and it is the impartial welfarism (ignoring the tentative here) that excludes non-welfarist views. When Amy Berg considers the same problem of “how big should the tent be?”, she concludes that EA needs to commit to promoting the impartial good in order to ensure that effectiveness can be objectively measured.[56]

I suggest that the best way to combine these is to say that EA is committed to maximizing the impartial good that can be approximated by welfarism. If a view cannot even be approximated by welfarism, then it would be fighting a different battle than EA is fighting. This approach combines the tentative nature of the commitment with ensuring it can be objectively measured and in line with the current EA movement, while remaining open to including some non-welfarist goods that remain similar enough to the movement as it currently stands.

Finally, MacAskill says that EA can work with “different views of population ethics and different views of how to weight the wellbeing of different creatures,”[57] which is why EA is compatible with prioritarianism, sufficientarianism, and egalitarianism, in addition to utilitarianism.

Therefore, EA is distinct from utilitarianism by having a different commitment in both what is valuable as well as the aggregation principle.

EA Incorporates Moral Uncertainty

The final reason I will discuss for why EA is not utilitarianism is that EA incorporates moral uncertainty, which is an inherently metatheoretical consideration, while utilitarianism does not. Utilitarians, like everyone else, have to deal with moral uncertainty, but utilitarianism itself does not automatically include it. Since EA includes inherently metatheoretical considerations, it cannot be the same as a theory, which, by definition, does not inherently include metatheoretical considerations.

The first way EA includes moral uncertainty was above in the characterization of “tentative impartial welfarism.” EA is open to multiple different normative views; at the very least, it is open to hedonistic, preference, or objective list utilitarianism, while no single theory of utilitarianism can be open to multiple theories of utilitarianism, by definition. Further, this value theory does not rule out non-consequentialist views, and, if my virtue theoretic arguments above (or others) are successful, then virtue ethicists can be EAs. Therefore, EAs can reasonably distribute their credences across many different normative views, both utilitarian and non-utilitarian.

EA does not endorse a specific approach to moral uncertainty, which would likely be considered an auxiliary hypothesis of EA, though EA leaders do seem to clearly favor one particular approach: maximizing expected choiceworthiness. Furthermore, MacAskill, who has done much work in moral uncertainty, reasons quite explicitly using uncertainty, distributing non-negligible credence across both utilitarianism and deontology and combining that with a risk-averse expected utility theory to motivate incorporating side constraints (aka agent-centered or deontic restrictions). I personally tentatively support the My Favorite Theory[58] approach to moral uncertainty, though EA requires neither approach in particular.
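Maximizing expected choiceworthiness can be illustrated with a toy calculation. All credences and choiceworthiness scores below are made up for illustration; the point is only the structure of the reasoning, on which modest credence in deontology can suffice to favor respecting a constraint:

```python
# Maximizing expected choiceworthiness (MEC): weight each theory's
# choiceworthiness score for an option by one's credence in that theory,
# then pick the option with the highest credence-weighted sum.
# All numbers are hypothetical, for illustration only.

credences = {"utilitarianism": 0.6, "deontology": 0.4}

# Choiceworthiness of two options under each theory (made-up scale;
# deontology assigns a severe penalty to the rights violation).
choiceworthiness = {
    "harm_one_to_save_five": {"utilitarianism": 4, "deontology": -100},
    "refrain_from_harming": {"utilitarianism": -4, "deontology": 0},
}

def expected_choiceworthiness(option: str) -> float:
    """Credence-weighted choiceworthiness of an option across theories."""
    return sum(credences[t] * cw for t, cw in choiceworthiness[option].items())

best = max(choiceworthiness, key=expected_choiceworthiness)
print(best)
```

Even with majority credence in utilitarianism, the severe deontological penalty for the rights violation dominates the expected choiceworthiness, mirroring how side constraints can be motivated under moral uncertainty.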


Savannah Pearlman argues that even though EA and utilitarianism are distinct moral frameworks, they share core philosophical commitments, and therefore EA is still dependent on utilitarianism. As I argue above, the exact differences between the two are such that EA is not dependent on utilitarianism. It is perfectly sufficient that EA and utilitarianism are (1) distinct frameworks and (2) independently motivated to conclude that EA is not inherently utilitarian. I showed the independent motivation (in the form of theory-independent midlevel principles as well as virtue ethical motivation) in section 2 above.

Pearlman evidently was not convinced that the theory-independent motivation was, in fact, theory-independent because there are shared commitments between EA and utilitarianism. Of course, we would expect that plausible moral theories will share some commitments. For example, that wellbeing is morally significant, and so are the consequences of one’s actions, is true on any plausible moral theory. Shared commitments, unless they are the totality of the theories’ commitments, do not show dependence. In the case of EA and utilitarianism, utilitarianism is sufficient for EA, but not necessary, since we can use virtue ethical arguments (or deontological, but I do not discuss that here).

Pearlman, however, misidentified the shared commitments. She says, “Rather clearly, Effective Altruism and Utilitarianism share the core philosophical commitments to Consequentialism, Impartiality, and Hedonism (repackaged by Effective Altruists into Welfarism).” A few noteworthy items on this. First, utilitarianism is not committed to hedonism; hedonistic utilitarianism is committed to hedonism, while preference utilitarianism is committed to preference satisfactionism, etc. In other words, utilitarianism is committed to some version of welfarism, which can be cashed out in various ways, which is the same as EA’s welfarism. Neither the family of utilitarian theories nor EA is committed to a specific account of well-being.

Secondly, Pearlman includes consequentialism as part of the core commitments of EA, which she does without argument. It is unclear why she does so, as there are a non-negligible number of non-consequentialist EAs. I would guess Pearlman thinks that maximizing only makes sense given consequentialism. Apparently, I have more faith in other moral theories than Pearlman does (given that maximizing is the morally correct option), since I think that deontology and virtue ethics can make sense of maximizing welfare with a given unit of resources, particularly in the restricted domains of concern to EA, such as charitable donations and career choice. Maximizing in this restricted domain can also be understood as an implication of the theory-independent principles that Singer proposed in the drowning child case.

Pearlman appears to take issue with some deontic outcomes in question, namely, in comparing two charities, that one should donate to a charity that is 100x more effective than another. Although minimal EA does not even commit to any obligation, we can consider the auxiliary commitment of normative EA (though this would still mean EA is not inherently utilitarian). Pearlman takes this moral obligation to imply that EA must be committed to a more general utilitarian principle. However, ignoring any moral theorizing, it just makes sense that you should not intentionally do an action that is much less good than another when it does not affect you much to do so. Normative EAs do not need to say more than this, while utilitarians do. As Richard Chappell points out in the comments, normative EA is only committed to efficient benevolence, but not constraint-less benevolence or unlimited beneficence that requires actions at great personal cost.

All things considered, from the clarification above we can see that Pearlman is incorrect that EA is inherently utilitarian and that criticisms of utilitarianism fairly apply to EA, as well.


In summary, effective altruism incorporates moral uncertainty in such a way that distinguishes itself from being inherently utilitarian in any interesting sense of the term. Of course, even an absolutist deontologist should have nonzero credence in some form of consequentialism to avoid being irrational, but that hardly makes them a consequentialist. So, EA is not inherently utilitarian.

All together, we saw six reasons that effective altruism is not reliant on utilitarianism. One is that minimal EA does not make normative claims. Furthermore, we saw that EA is also motivated by non-consequentialist reasoning, both theory-independent and virtue ethical in nature. More generally, EA, unlike utilitarianism, has a restricted scope, incorporates side constraints, has a different value theory, and includes moral uncertainty.

Can EA/Consequentialism/Longtermism be Used to Justify Anything?

Multiple authors express worries suggesting that EA or consequentialism or longtermism can be used to justify anything. In this section, I will show that this claim is either false or uninteresting, depending on how the claim is interpreted. 

Émile P. Torres, a big fan of “scare quotes,” wrote in a Salon article titled “What the Sam Bankman-Fried debacle can teach us about ‘longtermism’” that “For years, I have been warning that longtermism could ‘justify’ actions much worse than fraud, which Bankman-Fried appears to have committed in his effort to ‘get filthy rich, for charity’s sake’.” Eric Levitz in the Intelligencer says that effective altruism “lends itself to maniacal fetishization of ‘expected-value’ calculations, which can then be used to justify virtually anything.” I have also heard this claim made about consequentialism and utilitarianism maybe 400 times, so I will address the issue broadly here.

Drawing from my own manuscript titled “Worst Objections to Consequentialism,” I will show why these attempted points are silly. We can generalize the concept of a moral theory to that of a moral framework, a category that includes effective altruism and longtermism, with moral theories counting as moral frameworks of their own. I will focus on moral theories because these are more well-defined and more widely discussed among ethicists.

All Families of Moral Theories Can Justify Anything

First, any family of moral theories (e.g., consequentialism, deontology, virtue ethics) can justify any action as morally permissible. If this is correct, then it is entirely uninteresting to claim that, e.g., consequentialism can justify anything, since any family of theories can justify anything until you flesh out the details of the specific theory you actually want to compare. The reason these are called families, and not theories, is that each comprises many different versions united by a family resemblance. Moral theories have a global scope, meaning they apply to all actions, and a deontic predicate, meaning they say whether an action is permissible, impermissible, obligatory, etc.

For any given family of theories, we can construct a theory in that family that renders any given action permissible by manipulating what we find valuable, dutiful, or virtuous. For example, we can construct a consequentialist theory that says that only harmful intentions have value. We can have a deontological theory that says that our only obligation is to punch as many people in the face as possible every day. We can invent a virtue ethical theory where an action is virtuous if and only if it has the worst moral consequences. All of these theories are part of the respective family of theories (consequentialism, deontology, and virtue ethics). Now, none of these are particularly plausible versions of these theories, but adhering to these views would justify some pretty terrible actions. Thus, it is uninteresting to make the point that these kinds of moral theory families (including utilitarianism, which is a family subset of consequentialism) can justify immoral actions (see Figure 7).

Figure 7: As it turns out, it is not helpful to point out that [insert moral theory or theory family here] “can justify” [insert immoral action here], and this is especially true since EA is not inherently utilitarian.

Another way to see why any family of theories can justify any action as permissible is that these families are interchangeable in terms of their deontic predicates. In other words, for any deontological theory, we can construct a consequentialist theory that has all the same moral outcomes for all the same actions (deontic predicates like permissible, obligatory), and vice versa. This construction is called consequentializing.[59] In the same way, we can construct a deontological theory for any consequentialist theory, using a method called deontologizing.[60] There is debate over the significance of this, but the key conclusion here is that any specific action a deontologist can call wrong, a consequentialist can call wrong, and vice versa.

The takeaway from our exploration so far is that any objection to some theory for making some actions permissible needs to reference a specific version of the theory rather than the whole family of theories. For example, it is no objection to consequentialism that hedonistic utilitarianism makes it morally obligatory to go through the experience machine, since hedonistic utilitarianism is a subset of the family of theories, but it is a legitimate objection to hedonistic utilitarianism. Therefore, the claim that consequentialism can justify anything is true but uninteresting, since the same exact claim can be made of deontology, virtue ethics, or any other theory or anti-theory.

Specific Moral Theories Do Not Justify Any Action

Second, while any specific theory “can” justify any action, any specific theory does not in fact justify any action. A significant chunk of applied ethics, and one of its primary methods, is taking a moral theory (or framework) and plugging in the relevant descriptive and evaluative information in order to ascertain the moral status of various actions. In other words, a large goal in ethics is to figure out what a moral theory actually implies for any given situation. People write many papers for and against various views, even when working from the same starting points, sometimes even the same specific theory. These contradictory implications cannot all be correct. However, there is a fact of the matter about what the theory properly implies for specific actions, and therefore a specific theory does not, though it can, justify any action.

Part of the issue here is obscured by the lack of definition of the word “can” in this claim. The word “can” (or “could”) is doing all the work in this claim. It is never specified how this is supposed to be translated. It is common in philosophical circles to distinguish different types of possibility (or claims about what can but not necessarily will happen): physical (aka nomological), metaphysical, epistemic, and broad logical possibility. Most common (depending on the context), especially for ethics circles, is metaphysical possibility, which is typically cashed out in terms of possible worlds as implemented in modal logic.

In other words, my best guess is that to say a theory “can justify” an action means that the theory implies that the action is permissible in some possible world (i.e., a way that the world could have been). Presumably, the worry here is about classes of actions, like lying, running, boxing, stealing, killing, etc. So, to say a theory can justify any action is to say that for any class of actions, there is a possible world where performing an action of that class is permissible. If conceivability is at least a good guide to possibility, then any thought experiment will do to show that a class of actions can be permissible in other possible worlds.

Furthermore, as we discussed earlier, on any plausible theory (including versions of consequentialism, deontology, and virtue ethics), there is some point where contextual considerations render the stakes so significant that the action must be permissible. To deny this is to accept absolutism, with all of its many problems discussed earlier. Therefore, all plausible moral theories will have members of all classes of actions that are permissible in some possible world, however fantastical. Therefore, all specific moral theories “can” justify any action in the sense that there are possible worlds where actions of that type are permitted.

However, any given specific theory does not justify any action in the actual world. The reason for this is simple: the actual world is a single member (a subset of cardinality 1) of the infinite set of all possible worlds. So, while a theory “can” justify any action, it does not follow that it does justify any action; to conflate the two is to fall into incoherence. A theory can justify an action in a world very different from our own, with different physics, people, circumstances, and laws (physical and political), without justifying that action in the actual world.

Since the much more interesting concern is about what is permissible or impermissible in the actual world, we care much more about whether theories do in fact justify various actions rather than that they can justify various actions.

Specific EA and Longtermism Frameworks Do Not Justify Any Action

The same applies to moral frameworks like effective altruism and longtermism, not just theories. EA and longtermism can also be understood as families of models bearing a family resemblance. There is a correct way of filling in the details, but since we are not certain what that is at this time, and there is substantial disagreement, EA is committed to cause neutrality. So, because there is substantial disagreement on filling in these details, these frameworks “can” justify a wide range of actions. Yet, just as with moral theories, there is a correct way of working out the details. Thus, we need to investigate this question seriously to know what the exact implications of their commitments are.

In addition, Levitz has a suspicion that ‘expected-value’ calculations can be used to justify anything. Well, if all you have is an equation for expected value, and you ignore the rest of the moral framework, then yes. But that’s why you have the rest of the moral framework. If you only have agent-centered restrictions without filling in the details of what they are, you can say that it’s obligatory to punch every stranger in the face as soon as you see them. Therefore, deontology can justify virtually anything, right? Not really. Obviously, you have to fill in the details, and the details need to be remotely plausible to be worth consideration. If I defend a version of virtue ethics where the only virtue is being self-centered, I will justify many terrible actions. You have to compare the actual theories themselves, and you need to compare plausible theories. See the helpful discussions on this general point by Richard Yetter Chappell here and here.

Therefore, the phrase considered at the beginning is either false or uninteresting, depending on how it is interpreted. I will reemphasize Fletcher’s comments, “‘Does a worthy end justify any means? Can an action, no matter what, be justified by saying it was done for a worthy aim?’ The answer is, of course, a loud and resounding NO!”[61] At least, not in any interesting way.

Takeaways and Conclusion

The FTX scandal is very sad for effective altruism, cryptocurrency, and beyond, since a lot of money that was going, or would have gone, toward saving (or sustaining) people’s lives no longer will. Lots of people were hurt and will be worse off as a result, to say the least. But as far as presenting an argument against effective altruism goes, I think there are, fortunately, no takeaways whatsoever here. The people who used SBF as an opportunity to critique a commitment to “doing the most good with one’s donations and career” failed to present a decent argument.

From a Christian perspective, this debacle is similar to many scandals in Christendom that have occurred, where important or powerful leaders have committed vicious actions or formed cults of personality that have completely wrecked many people’s lives and entire churches and communities. Examples include Mark Driscoll, Ravi Zacharias, and many others. These are tragedies, and the actions of these leaders must be vehemently condemned. Yet, from the very beginning, we know people go horribly astray. They make terrible mistakes. The only person we can have perfect faith in, and always strive to exemplify, is Jesus. Leaders do not always (and in fact rarely do always) reflect the core of their commitments. We’ve all heard this point 50,000 times, and yet somehow people keep thinking that leaders’ mistakes are a direct result of following the teachings that they supposedly espouse. This is not always (perhaps even rarely) the case.

For someone interested in purely assessing how effective altruism’s framework and approach fares, and whether EA should change its key commitments, the scandal remains entirely uninteresting and uneventful. Another day, another round of horrid critiques of effective altruism. It remains a very good thing to prevent people from dying of starvation and preventable disease, and if we can save more people’s lives by donating to effective charities, I am going to keep donating to effective charities.

If you have not yet been convinced of my arguments, listen to what ChatGPT (an artificial intelligence chatbot recently launched by OpenAI) had to say about the implications of SBF for EA in Figure 8, which is that the scandal does not necessarily reflect the moral principles of EA, and this same conclusion is true for any given individual. ChatGPT also agreed that EA is not inherently utilitarian.

Figure 8: ChatGPT knows what’s up regarding SBF and the implications for EA (i.e., not much). Note: I only include this on a lighthearted note, not as a particularly substantive argument (though I 100% agree with ChatGPT).


If I have time and energy (and there appears to remain a need or interest), I will write a part 2 to this in perhaps early January. Part 2 would include criticisms I found even less interesting or plausible, those that relate to the connection between longtermism and EA, the danger of maximizing, the homogeneity of EA, concerns about community norms, and more point-by-point responses to various critical pieces published online recently. Perhaps there also will be more relevant information revealed or more poignant responses between now and then; one very recent piece has argued at greater length that EA leaders should have known about SBF’s dealings, and I may investigate that more carefully. Let me know what else, if anything, I should include, and if you would be interested in a follow-up.[62]


[1] MacAskill, William. “The Definition of Effective Altruism.” in Effective Altruism: Philosophical Issues (2019), p. 14.

[2] See Strasser, Alex. “Consequentialism, Effective Altruism, and God’s Glory.” (2022). Manuscript. [pdf] for more discussion about these distinctions and their motivation.

[3] This is the so-called Compelling Idea of consequentialism, which trivially entails normative EA and non-trivially entails radical, normative EA.

[4] Although, I realized when writing this that I might actually be a strong longtermist for Christian reasons. Namely, I probably think evangelism is the most important moral priority of our time, and the concern for the longterm future (e.g., afterlife) is sufficient to make evangelism the most important moral priority of our time. It looks like this makes me a strong longtermist after all. I need to consider this further.

[5] Boecking, Benedikt, et al. “Quantifying the relationship between large public events and escort advertising behavior.” Journal of Human Trafficking 5.3 (2019): 220-237.

[6] Cryptocurrency emissions estimated as the maximum of the range given by the White House, which is 50 million metric tons of carbon dioxide per year. Cost per metric ton of offset is $14.62 by Cool Effect (accessed 11.27.22). This amounts to $731 million to carbon offset the entirety, which is 1/35 of SBF’s net worth before the scandal. Of course, SBF and FTX’s contribution to the U.S. crypto emissions is a small fraction of that, so he could even more easily offset his own carbon emissions. Another difficulty is that it is unlikely that Cool Effect could easily (or at all) implement projects at the scale required to offset this amount of emissions, which is more than some countries in their entirety.
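The arithmetic in this footnote can be checked with a quick sketch. The emissions and price figures are the estimates cited above; the net-worth figure is not stated in the footnote and is back-solved here from the 1/35 ratio, so treat it as an illustrative assumption:

```python
# Back-of-the-envelope check of the offset arithmetic above.
# Figures are the post's cited estimates, not authoritative data.
us_crypto_emissions_tons = 50_000_000  # upper end of White House estimate (metric tons CO2/yr)
cost_per_ton_usd = 14.62               # Cool Effect price per metric ton (accessed 11.27.22)

# Cost to offset the entire U.S. crypto emissions estimate
total_offset_cost = us_crypto_emissions_tons * cost_per_ton_usd
print(f"${total_offset_cost:,.0f}")    # ≈ $731,000,000

# The footnote says this is 1/35 of SBF's pre-scandal net worth,
# which implies a net worth of roughly $25.6 billion (assumption, back-solved).
implied_net_worth = 35 * total_offset_cost
print(f"${implied_net_worth:,.0f}")
```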

[7] This may assume that we have more negative duties than positive duties. It is frequently defended (or assumed) that we have stronger reasons to prevent harm than to promote beneficence, in which case the argument would go through.

[8] This distaste is mostly because of his views on abortion and infanticide. While I vehemently disagree with Singer on these specific issues, Singer’s thoughts on these issues do not affect his thoughts on poverty alleviation or the EA framework in general. It is also true that Singer’s views on these issues are sometimes distorted, which is why Eric Sampson (a Christian) wrote a clarifying piece on Singer’s views in context of backlash to Singer’s visit to Eric’s campus.

[9] MacAskill, William. “The Definition of Effective Altruism.” in Effective Altruism: Philosophical Issues (2019), p. 20.

[10] For an insightful and conversational introduction to this debate, including on whether the ends justify the means or there are intrinsically evil acts that cannot ever be done, and more, see Fletcher, Joseph and Wassmer, Thomas. Edited by May, William E. Hello, Lovers! An Introduction to Situation Ethics. Cleveland: Corpus, 1970.

[11] I will assume “justify” means something like “renders morally permissible,” whether as a truth condition or an explanation of its moral permissibility.

[12] Fletcher, Joseph. “Situation Ethics, Law and Watergate.” Cumb. L. Rev. 6 (1975): 35-60, p. 52.

[13] Hurley, Paul. “Consequentialism and the standard story of action.” The Journal of Ethics 22.1 (2018): 25-44.

[14] Fletcher, “Situation Ethics, Law and Watergate,” p. 52.

[15] Sterba, James P. “The Pauline Principle and the Just Political State.” Is a Good God Logically Possible? Palgrave Macmillan, Cham, 2019. 49-69, p. 49.

[16] Fletcher, “Situation Ethics, Law and Watergate,” p. 51, emphasis in original.

[17] Ironically, I am quite skeptical that killing in self-defense scenarios (1-to-1, or more generally m attackers vs. n victims) or war is ever justified in real-world scenarios. We can construct scenarios where it obviously would be, but I am skeptical we have sufficient reason before starting a large-scale war to think the foreseeable consequences of the war would result in fewer deaths (or other goods) long term than we would have without the war. I still need to investigate further, though. It is ironic that I am less likely to think killing is ever permissible in the real world than those who frequently verbalize their opposition to ends-means reasoning.

[18] Of course, some natural law theorists and some Kantians may disagree, but I am more concerned about those with plausible moral theories.

[19] It is possible that this phrase is intended to claim what the moral explanation of any deontic outcome is, or its structure of reasons: namely, that what makes actions right or wrong is never their consequences. This denies the distinguishing claim of consequentialism, that the rightness/wrongness of actions is ultimately explained by consequences instead of, e.g., duty. As such, the phrase would merely be a restatement of the claim “Consequentialism is false,” and then it could not even be used in the debate, since it begs the question against the consequentialist. I do not think the principle is intended to make a technical point about the proper structure of normative theories and normative explanation, but if so, it remains impotent as a moral principle.

Also, for threshold deontology, it may be the case that the explanation for why post-threshold actions are right is by appeal to the consequences, so this understanding of the phrase would then be more clearly neutral between theories.

[20] Aboodi, Ron, Adi Borer, and David Enoch. “Deontology, individualism, and uncertainty: A reply to Jackson and Smith.” The Journal of Philosophy 105.5 (2008): 259-272, p. 261 n. 5.

[21] Huemer, Michael. “Lexical priority and the problem of risk.” Pacific Philosophical Quarterly 91.3 (2010): 332-351.

[22] Tarsney, Christian. “Moral uncertainty for deontologists.” Ethical Theory and Moral Practice 21.3 (2018): 505-520.

[23] Mogensen, Andreas, and William MacAskill. “The Paralysis Argument.” Philosophers’ Imprint 21.15 (2021).

[24] Huemer, Michael. Knowledge, Reality and Value. Independently published (2021), p. 297 of pdf.

[25] For example, there is the paradox of deontology as well as the related problem of inconsistent temporal discounting. The paradox of deontology is that deontology implies violating constraints is impermissible even when doing so means that you (and/or others) will violate the constraint many fewer times in the future, which is quite counterintuitive. The second problem occurs because modelling absolute constraints requires infinite disvalue for the immediate action but a discounted, finite disvalue for the same action in comparable circumstances in the future. The circumstances are only finitely different yet there is an infinite difference in the disvalue of the same action, which appears inconsistent. 

[26] See related and helpful discussion in Fletcher, Joseph and Wassmer, Thomas. Edited by May, William E. Hello, Lovers! An Introduction to Situation Ethics. Cleveland: Corpus, 1970, pp. 6-7. Fletcher, who identifies situation ethics as necessarily consequentialist or teleological, also says that for the single principle of situation ethics, he is deontological in a twisted sense.

[27] Consequences as understood in moral theory encompass more than the term does in common parlance. Consequences, in this sense, refers to the action and everything that follows from that action; it is not merely the effects after the action. Consequentialism sums the intrinsic value of the action and everything that follows from it for all time. Lying, for example, can have intrinsic disvalue, and so can the results of lying, such as destroying a relationship. Anything, in principle, can be assigned value in a consequentialist theory, including intentions, motivations, virtues, and any subcategory of action. Further, these categories can be assigned infinite disvalue so that there are absolute constraints, if so desired.

[28] Cloutier, David. The Vice of Luxury: Economic Excess in a Consumer Age. Georgetown University Press, 2015, p. 137.

[29] Ambrose, “On Naboth”, cited in Phan, Peter C. Social Thought. Message of the Fathers of the Church series, Vol. 20, 1984, p. 175.

[30] Singer, Peter. “Famine, Affluence, and Morality.” Philosophy and Public Affairs 1.3 (1972), pp. 229-243.

[31] MacAskill, “The Definition of Effective Altruism,” p. 14.

[32] See Pummer, Theron. “Whether and Where to Give.” Philosophy & Public Affairs 44.1 (2016): 77-95 for a defense of this view, and Sinclair, Thomas. “Are we conditionally obligated to be effective altruists?” Philosophy and Public Affairs 46.1 (2018) for a response.

[33] Berkey, Brian. “The Philosophical Core of Effective Altruism.” Journal of Social Philosophy 52.1 (2021): 93-115.

[34] See Strasser, Alex. “Consequentialism, Effective Altruism, and God’s Glory.” (2022). Manuscript. [pdf]

[35] For example, Timmerman, Travis. “Sometimes there is nothing wrong with letting a child drown.” Analysis 75.2 (2015): 204-212 or Kekes, John. “On the supposed obligation to relieve famine.” Philosophy 77.4 (2002): 503-517.

[36] Haydar, Bashshar, and Gerhard Øverland. “Hypocrisy, poverty alleviation, and two types of emergencies.” The Journal of Ethics 23.1 (2019): 3-17.

[37] Singer, “Famine, Affluence, and Morality,” p. 231.

[38] He also proposed a third one in The Life You Can Save: (3) if it is in your power to prevent something bad from happening, without sacrificing anything nearly as important, it is wrong not to do so. See discussion in Haydar, Bashshar, and Gerhard Øverland. “Hypocrisy, poverty alleviation, and two types of emergencies.” The Journal of Ethics 23.1 (2019): 3-17, who argue that none of these three principles are needed to retain the intuition in the drowning pond case. We only need a weaker principle: (4) if it is in your power to prevent something bad from happening, without sacrificing anything significant, it is wrong not to do so.

[39] McMahan, Jeff. “Philosophical critiques of effective altruism.” The Philosophers’ Magazine 73 (2016): 92-99.

[40] Thanks to Dominic Roser for pointing this out to me.

[41] See Miller, Ryan. “80,000 Hours for the Common Good: A Thomistic Appraisal of Effective Altruism.” Proceedings of the American Catholic Philosophical Association (forthcoming) and Synowiec, Jakub. “Temperance and prudence as virtues of an effective altruist.” Logos i Ethos 54 (2020): 73-93.

[42] For discussion of different criteria of right action proposed in virtue ethics, see Van Zyl, Liezl. “Virtue Ethics and Right Action.” The Cambridge Companion to Virtue Ethics (2013): 172-196.

[43] Hursthouse, Rosalind. On Virtue Ethics. OUP Oxford, 1999, p. 28.

[44] Byerly, T. Ryan. Putting Others First: The Christian Ideal of Others-Centeredness. Routledge, 2018.

[45] MacAskill, “The Definition of Effective Altruism,” p. 23

[46] MacAskill, “The Definition of Effective Altruism,” p. 17

[47] MacAskill, “The Definition of Effective Altruism,” p. 20

[48] MacAskill, William. What We Owe the Future. Basic Books, 2022, p. 241.

[49] MacAskill, What We Owe the Future, pp. 276-277 of my pdf, emphasis in original.

[50] Ibid.

[51] Mogensen, Andreas, and William MacAskill. “The Paralysis Argument.” Philosophers’ Imprint 21.15 (2021).

[52] Schroeder, S. Andrew. “Consequentializing and its consequences.” Philosophical Studies 174.6 (2017): 1475-1497.

[53] MacAskill, “The Definition of Effective Altruism,” p. 18

[54] MacAskill, “The Definition of Effective Altruism,” p. 20

[55] MacAskill, “The Definition of Effective Altruism,” p. 18

[56] Berg, Amy. “Effective altruism: How big should the tent be?” Public Affairs Quarterly 32.4 (2018): 269-287.

[57] MacAskill, “The Definition of Effective Altruism,” p. 18

[58] One of the biggest challenges here is theory individuation, or how you distribute credences in theories with slightly varied parameters or structures. See discussion in papers with “My Favorite Theory” in the title by Gustafsson and also MacAskill’s book Moral Uncertainty.

[59] Portmore, Douglas W. “Consequentializing.” Philosophy Compass 4.2 (2009): 329-347. There are various challenges to the success of this project, but I won’t address those here. I think the challenges can be met.

[60] Hurley, Paul. “Consequentializing and deontologizing: Clogging the consequentialist vacuum.” Oxford Studies in Normative Ethics 3 (2013).

[61] Fletcher, “Situation Ethics, Law and Watergate,” p. 51, emphasis in original.

[62] Featured image adapted from FTX Bankruptcy, Creative Commons license, downloaded here.


Defining Objective Morality, Subjectivism, Relativism, and More


If you have ever been confused trying to figure out what someone means by “objective morality,” or gotten mixed up between moral subjectivism and relativism, you are not alone. Here, I will first define what is meant by “objective morality” (or moral realism, as it is known to ethicists), as well as subjectivism, relativism, absolute vs. contextual moral claims, and first- and second-order moral judgments. In short, objective morality (or “moral realism”) is the view that there are moral statements that are true independent of anyone’s desires, beliefs, or subjective states about those moral truths.

Defining Objective Morality

When people talk about objectivity, or objective facts, they are talking about things that are independent of what people believe or feel. Feelings and desires can be called “subjective states,” where subjective is the opposite of objective and depends on the individual. Gravity holds a person walking on Earth down regardless of whether that person believes they can fly. In metaethics, objective morality often goes by another name: moral realism. “Realism” is a term used in pretty much every field, such as scientific realism, and it implies that certain things exist.

Objective morality, or moral realism,[1] is taken to be the combination of three claims about moral reality: a semantic, alethic (this has to do with what things are possibly true), and metaphysical claim,[2] which together can be summarized as saying “there are objective moral truths.”[3]

  1. Semantic: Moral claims are either true or false
  2. Alethic: Some moral claims are true
  3. Metaphysical: Moral facts are objective (independent of subjective states about that fact), relevantly similar to certain amoral [non-moral] facts  

Objective morality means there are some moral truths that are independent of anyone’s beliefs, feelings, or preferences about that truth

The semantic thesis is that moral claims (or propositions) are the type of thing that can be true or false, as opposed to something like an emotion, which cannot be true or false. In other words, moral claims are truth-apt. The semantic thesis distinguishes cognitivism (moral propositions represent cognitive states) from non-cognitivism (moral propositions represent subjective states). This truth-aptness is consistent with moral relativism, since relativists can identify moral claims as true relative to some framework.[4]

The alethic thesis is that some moral propositions are true, as opposed to all of them being false. The view that all moral propositions are false is called “moral error theory.” The most famous defender (and, to my knowledge, the first proponent) of moral error theory is J.L. Mackie in Ethics: Inventing Right and Wrong.[5] Error theory is cognitivist, since it holds that moral claims are truth-apt, but the alethic claim that some are true distinguishes realism from error theory.

Finally, the metaphysical thesis is that moral facts are similar to amoral [non-moral] facts in that they are objective, independent of any subjective states about those facts.[6] Objective facts are taken to be “mind-independent.”[7] There are also subjective facts, such as I am happy, which depend on subjective states. However, the metaphysical thesis is limited to the types of amoral facts that are not dependent on subjective states. Another way to phrase this objectivity thesis is, “Which moral judgments are true does not depend on what we (either individually or collectively) accept.”[8] Additionally, the caveat that moral facts are independent of subjective states about those facts is important. If “torture is wrong” is true independent of any subjective states whatsoever, then we cannot say, “Torture is wrong because it causes unnecessary suffering or pain,” because suffering is itself a subjective state. Torture may be wrong in virtue of subjective states of suffering, but not in virtue of my approval of the statement “torture is wrong” or my disapproval of torture.  

Overall, objective morality is the claim that there are some moral truths (i.e. values or duties) that are objective, which means that they are independent of any beliefs, feelings, or preferences about the claim’s truth value. There are additional technicalities to consider on these semantics (independent of human subjective states vs any subjective states, including those of aliens, God, or an ideal observer) when considering some edge cases, including theistic morality.[9] The arguments for objective morality need to be carefully analyzed to consider whether they are arguments for independence from any subjective states or only independence from human subjective states while possibly leaving other subjective dependencies open.[10]

It is common in Christian circles to hear that ‘of course, morality is objective’ and also that without God, there are no objective moral values and duties. Given the frequency of this claim, and the centrality of ethical discussion in the Christian life, I am interested to see what the Bible has to say on the topic of the objectivity of Christian morality. This topic I take up in a future post.

Distinguishing Objective/Subjective, Universal/Relative, and Absolute/Contextual

Above we distinguished objective from subjective, but we need to introduce two more distinctions that are important and often get confused and intermixed with the objective/subjective distinction. Namely, we need to clarify the distinction between universal and relative moral theories, as well as absolute and contextual moral theories.

  • Objective moral truth = moral truth independent of any beliefs, feelings, or preferences about the claim’s truth value
  • Subjective moral truth = moral truth dependent on a belief, feeling, or preference about the claim’s truth value
  • Universal moral truth = moral truth that applies to all moral agents (usually an ethical theory)
  • Relative moral truth = moral truth that is true relative to a framework (individual or culture)
  • Absolute moral truth = moral truth that holds for all agents in all contexts at all times
  • Contextual moral truth = moral truth that holds depending on the situational context

Subjective vs Relative

The first thing I want to emphasize is that moral relativism is logically independent of moral subjectivism. Neither implies the other: either can be true while the other is false, both can be true, or both can be false. The Ethics Toolkit states that “it’s wrong to identify, as so many do, relativism with subjectivism.”[11] The Stanford Encyclopedia of Philosophy (SEP) states, “the subjectivist need not be a relativist.”[12] Susan Wolf states, “In principle, one may be a subjectivist without being a relativist.”[13] Subjective and relative truth are fairly well-defined ideas from epistemology, and the definitions above are their adaptations to moral truths.

First, an ethical theory can be subjective but not relative. One prominent ethical theory, ideal observer theory, is subjective but not relative (thus a form of universal subjectivism). It says that moral truths represent the preferences of a hypothetical “ideal observer,” one who is neutral, fully informed, dispassionate, etc. In other words, when considering right and wrong, you ask, “What would an ideal observer do?” Ideal observer theory also illustrates the distinction between “independent of human subjective states” and “independent of all subjective states.” It is consistently identified as a universal subjectivist theory, so its close theistic analogue, divine preference theory, is universal subjectivist as well. Ideal observer theory is thus an example of why subjectivism cannot be equated with or logically tied to relativism.

Secondly, an ethical theory can also be relative but not subjective. As The Ethics Toolkit says, “Different societies might have different moralities for different objective reasons.”[14] For example, those reasons could include “the objective conditions of scarcity, the distribution of wealth, or, as some have argued, even the climate of that society.”[15] The SEP states, “It may be that what determines the difference in the two contexts [different individuals or cultures] is something ‘mind-dependent’—in which case it would be subjectivist relativism—but it need not be. Perhaps what determines the relevant difference is an entirely mind-independent affair, making for an objectivist relativism.”[16] Susan Wolf has a section in her paper “Two Levels of Pluralism” dedicated to explaining a form of “Relativism Without Subjectivism.”[17]

As Richard Joyce summarizes, “In short, the subjectivism vs. objectivism and the relativism vs. absolutism polarities are orthogonal to each other, and it is the former pair that matters when it comes to characterizing anti-realism.”[18] The point that moral relativism is (or can be) a form of moral realism, or objective morality, was also made by Gilbert Harman (though this reflects a recent change of mind).[19]
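The orthogonality claim can be made concrete in a minimal sketch: treat each distinction as an independent boolean axis and note that all four combinations are occupied by some view named above. (The class and labels here are my own illustrative naming, not standard philosophical machinery.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class View:
    """A metaethical view classified along two logically independent axes."""
    name: str
    subjective: bool  # does moral truth depend on subjective states?
    relative: bool    # is moral truth relative to a framework?

views = [
    View("robust moral realism", subjective=False, relative=False),
    View("ideal observer theory (universal subjectivism)", subjective=True, relative=False),
    View("objectivist relativism", subjective=False, relative=True),
    View("subjectivist relativism", subjective=True, relative=True),
]

# Every combination of the two axes is occupied by some view,
# so neither distinction implies or excludes the other.
occupied = {(v.subjective, v.relative) for v in views}
assert occupied == {(False, False), (True, False), (False, True), (True, True)}
```

Since all four cells of the 2x2 grid are filled, knowing a view’s position on one axis tells you nothing about its position on the other, which is exactly Joyce’s point.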

Although we have seen that subjectivism is or can be independent of relativism, they are often combined (call this subjectivist relativism or relativist subjectivism), since most theories that are relative are based on the preferences of individuals or cultures, and most of the time, theories that are subjective also hold that moral truth is relative to individual or culture. Susan Wolf states that “commonly relativism and subjectivism are linked: one suspects that moral standards may legitimately differ from one individual or society to another and” the offered explanation for why and how they differ is the “subjective judgments of the people to whom the standards apply.”[20] I think this common linkage between subjectivism and relativism is why they so often get confused,[21] even in philosophy or ethics journals or books.[22]

Absolute/Contextual Distinction

The last distinction to make is between absolute and contextual moral truths. Is it ever okay to lie? Most people would say yes, depending on the context. Consider the dreaded words, “Do I look fat in this dress?” Do you really need to think about it? The correct answer is always no. More seriously, the most common example of a permissible lie is telling a Nazi at your door that there are no Jews hiding in your home. A true absolutist, such as Immanuel Kant, would have to say it is wrong to lie even in this scenario, even if one or more deaths followed as a (more or less) direct consequence.[23] But just how far can this ‘context’ go?

We can distinguish between two types of context, agential context and situational context. There may be other types of context we can discuss, such as spatiotemporal context,[24] but this is less helpful for producing a taxonomy of ethical views. Agential context is what distinguishes between relative and universal ethical truths, and situational context is what distinguishes between contextual and absolute truths.

Agential context addresses how many agents and on which agents the moral truth depends. Starting small and expanding our scope, we can go from individuals, to cultures, to the universal. Thus, we have the two types of relativism: individual relativism and cultural relativism, where truth is relative to the individual and culture, respectively. In relativism, the same ethical claim (e.g. abortion is wrong) can be true relative to America and false relative to Africa. That is, true for an American and false for an African, given their cultural context. A universal morality, which applies to all moral agents, is not considered a form of relativism.

By situational context, I mean different general situations or states of affairs that one may find oneself in or choose to do. For example, cheating on the ACT (versus cheating simpliciter, aka cheating without qualification), or killing during war (versus killing simpliciter). The situational context might be significant to moral truths. It may be morally permissible to kill someone for self-defense or during war, but not as a hitman or just for fun. If you agree, then you think context is important and are not a true absolutist. Situational context can get even more specific, such as hurting Susy’s feelings, which can potentially be a combination of agential and situational context where all the relationships involved matter.

However, the absolute-contextual spectrum is just that – it is a spectrum, and it is based on how much context is taken into consideration for the rightness or wrongness of an action. As you move up the ladder from individual to universal, you think ethical truths are less agent-specific, and up from contextual to absolute, you think ethical truths are less situation-specific. Most ethical theories are universal theories (though they can be relativized), meaning they intend to apply to all moral agents, or at least all human moral agents, and they take some type of situational context into consideration, making them contextual theories. Act utilitarianism is about as contextual as you can get, where each action is evaluated completely independently, whereas rule utilitarianism generalizes this a bit. Graded absolutism, which is probably the most prominent evangelical Christian ethic, resolves some moral dilemmas by permitting the violation of a divine command when it is required in order to obey some other divine command of greater magnitude. Figure 1 displays the relative-universal and contextual-absolute scales, where agential and situational context are shown as the relevant factors distinguishing these scales, respectively.

Figure 1: Spectra representing the (left) universal vs relative distinction, which depends on agential context, and (right) the absolute vs contextual distinction, which depends on situational context.

In an approximate sense, the entire contextual spectrum, including agential and situational context, ranges from relative to contextual to absolute (which may be preferred since absolute is often understood as the opposite of relative), which is shown in Figure 2. Since most ethical theories that are universal are also contextual, this is not too problematic. It is conceptually possible to have a form of cultural or individual relativism that ignores situational context (and would be absolute in this respect), but this would be wildly implausible and not worth discussing.[25] Another way of putting this is that only universal theories tend to restrict context even further beyond agents into specific situations, yielding forms of graded or ungraded absolutism. Cultural and individual relativism also contextualize with respect to situations, so they should sit lower down the overall contextual spectrum, below “contextual.”

Figure 2: The full relative-absolute spectrum, including agential and situational context.

Overall, we have a continuous restriction of context from individual relativism to absolute, starting with ethical truths relative to specific agential frameworks in specific situations and then being true for all agents in relevant situations, finally ending in ethical truths that do not depend on the agent or the situation. In the next section, we look at three types of moral judgments: first- and second-order moral judgments as well as moral principles.

Types of Moral Judgments

Philosophers like to distinguish between first-order and second-order things, such as beliefs, evidence, or ethical judgments. A first-order belief would be something like “I believe there is an apple on the table.” Symbolically, this could be represented as Bp, or belief B in some proposition p. A second-order belief would be “I believe that I believe there is an apple on the table.” Symbolically, this is BBp. There are parallel constructions for knowledge: knowing that you know p would be KKp. Second-order evidence, or evidence of evidence, might be a book that documents arguments and evidence against the textual reliability of the Bible; if you have not read it, you do not know what first-order evidence the book presents, but knowing that there is first-order evidence for or against a position is itself second-order evidence. The first-order evidence could be the papyrus manuscripts from the first three centuries CE, for example.
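For readers who like the symbols laid out in one place, the nesting of these operators can be written explicitly in the standard doxastic/epistemic style (this is just the notation from the paragraph above, collected):

```latex
\begin{align*}
Bp  &: \text{first-order belief, i.e. belief } B \text{ in proposition } p \\
BBp &: \text{second-order belief, i.e. belief that one believes } p \\
Kp  &: \text{first-order knowledge of } p \\
KKp &: \text{second-order knowledge, i.e. knowing that one knows } p
\end{align*}
```

The pattern iterates: each additional prefixed operator moves the claim up one order.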

Similarly, you can talk of first- and second-order ethical (or moral) judgments, which, roughly, correspond to applied ethics and metaethics, respectively. “Cheating on a test is wrong” is a first-order ethical judgment, while “moral truths are objective” is a second-order ethical judgment. Additionally, we can talk about moral principles, which are general principles that are prominent in normative ethics. Thus, we can give definitions of these three types of moral claims, noting that “judgments” here could just as easily be replaced with “facts,” “propositions,” or “truths.”

  • first-order ethical judgments = ethical judgments with a specific context, such as those in thought experiments like the trolley problem, drowning child, violinist argument, etc.
  • second-order ethical judgments = ethical judgments about first-order ethical judgments (metaethical judgments), such as “ethical facts are relative to the individual”
  • moral principles = general abstract principles in ethical theories, such as “maximize the good”

These three types of moral claims are important in ethics for various reasons. For example, it is (or may be) consistent for a relativist to claim that first-order moral truths are relative to specific frameworks, but second-order moral truths are absolutely true (true in all frameworks). Thus, moral relativism may not be self-defeating.[26]

Let’s talk more about the distinction between moral principles and first-order ethical judgments, as their difference is not well-defined. If push came to shove, moral principles should probably be classified as a subset of first-order ethical judgments, as first- and second-order judgments should be mutually exclusive and jointly exhaustive of morality (at least, of the relevant moral claims of interest to us). However, it is helpful to distinguish the “up-close-and-personal” judgments of the first-order, those arising frequently in thought experiments, and the “impersonal” judgments of abstract moral principles, usually seen in the broad statements of normative ethical theories. This difference is important in moral epistemology and our reliance on intuitions during thought experiments. For example, Peter Singer is skeptical of the reliability of intuitions in thought experiments, but he accepts intuitions about abstract moral principles.[27] I think I tend to agree with this.


Various questions for the Christian arise upon investigation of the above topics, such as the objectivity of Christian morality. Additionally, the above distinctions raise the question of where the proper Christian ethic lies on the full relative-absolute spectrum, and why. I hope to investigate these questions and others in the future.  

This blog post set out to establish some working definitions to allow more rigorous and productive conversation around objective morality (moral realism), moral subjectivism and relativism, absolute and contextual moral truths, and types of moral judgments. All of these definitions will be important as we dive into arguments for and against objective morality and relativism, as well as other metaethical topics.

In sum, objective morality (moral realism) is the view that there are some objective moral truths, truths that are independent of any subjective states about the claim’s truth value. Subjective moral truths are those that depend on someone’s beliefs, feelings, or preferences about the moral claim in question. Subjectivism is distinct from relativism, which says that moral truths are true relative to a framework, either individual or cultural. Absolute moral truths do not consider any context, while contextual moral truths consider situational context. First-order moral judgments are up-close-and-personal judgments with specific context, while second-order moral judgments (metaethical judgments) are moral judgments about first-order moral judgments. Finally, there are moral principles, which are general abstract principles used in ethical theories.

In upcoming posts, we will explore various arguments for objective morality as well as whether the Bible teaches objective morality or not.  


[1] Moral realism and objective morality (moral objectivism) are not exactly synonyms, but we are simplifying terms for now. I elaborate on some of the complications with terms such as minimal moral realism, moral universalism (moral objectivism), universal subjectivism, and more throughout the other footnotes.

[2] Simplified from Väyrynen, Pekka. “Moral Realism” in Borchert, Donald M. (ed.) Encyclopedia of Philosophy, 2nd Edition. Vol. 6. (2005), p. 379-380.

[3] McGrath, Sarah. “Moral realism without convergence.” Philosophical Topics 38.2 (2010): 59-90, p. 61

[4] If we used “objectively” true rather than true in the semantic thesis, then the combination of (1) and (2) would be objective morality, and then (3) could be the distinguishing factor for “robust” moral realism (1-3) vs minimal moral realism (1 and 2). Moral relativists would still say moral truths are “really true,” so torturing children is “really wrong” to (presumably most or all) moral relativists, so the charge that moral relativists cannot say what Hitler did is “really wrong” is false. It is just that the claim is made relative to a framework. I can’t remember or find where I saw this point made, but unfortunately this idea still seemingly pervades the metaethical literature, where “really” is often (intentionally) assumed as a synonym for “objectively.” However, to say something is “really” true just is to affirm its truth. If by really you mean objectively, then just say “objectively.” Then, obviously relativists would disagree but the point is obscured by this handwaving tactic. The claim “you can’t call the Nazis really wrong” reduces to “you can’t call the Nazis objectively wrong” and the reasonable response is, “Uh yeah, that’s the definition of relativism.” A similar point is made on IEP.

This assumption of equating “really” and “objectively” is made by Tan, Seow Hon. “The problems with moral subjectivism.” Think 46 (2016): 25-36, p. 31, 34-35; Dworkin, Ronald. “Objectivity and truth: You’d better believe it.” Philosophy & Public Affairs 25.2 (1996): 87-139, throughout; Bennigson, Thomas. “Is relativism really self-refuting?.” Philosophical Studies (1999): 211-236, p. 211 (this paper defends moral relativism but says relativism claims, “There is no sense to, or at least no answer to, the question of which is really right – there are no framework-neutral facts.”); Kramer, Matthew H. Moral Realism as a Moral Doctrine. Vol. 3. John Wiley & Sons, 2009, p. 200-201. Strangely, Kramer cites Simon Blackburn (a quasi-realist) in support of his reasoning here when Blackburn is essentially making the same point that I am making. Claims about what is “really true” reduce to things that are “true.” Blackburn talks about these word additions, “We can add flowers without end.” Relativists affirm that there are moral truths, just that they depend on what people believe. To claim that it is objectively true is to claim more than just that something is true (in the strictest sense and in the dialectical context here it is relevant). I think Blackburn may be assuming a type of truth minimalism here though, which I do not defend. In sum, saying relativists can’t say Nazism was “really wrong” is mere rhetoric and not substance.

[5] Mackie, John. Ethics: Inventing Right and Wrong. Penguin UK, (1990). His main argument is 1) moral propositions aim to be objective (they are implicitly objective truth claims), 2) there are no objective moral propositions (i.e. moral values or duties), 3) therefore, all moral propositions are false. He summarizes the argument on p. 35 as “But the denial of objective values will have to be put forward not as the result of an analytic approach, but as an ‘error theory’, a theory that although most people in making moral judgements implicitly claim, among other things, to be pointing to something objectively prescriptive, these claims are all false. It is this that makes the name ‘moral scepticism’ appropriate.” He then uses two arguments for the second proposition, that there are no objective moral values or duties, which are the argument from relativity (which is really from disagreement), and the argument from queerness.

[6] There are also substantial metaphysical complications that I would prefer to minimize. First, there is the question of whether there are moral properties in the external world (in the fabric of reality). The “metaphysical thesis” from the Encyclopedia of Philosophy’s “Moral Realism” article affirms moral properties, which is what makes moral facts “obtain,” but I wanted to ensure this wording does not commit myself to a particular metaphysics like truthmaker theory. If we say there are moral properties, there are still “robust” or “modest” forms of moral realism referring to primary vs secondary status of these properties, where secondary properties may be response-dependent, such as color properties. A final problem with identifying moral properties is that it is hard to make sense of a very prominent understanding of substances (Aristotle’s substance theory) with this (as opposed to Hume’s bundle theory). Moral discourse is covered with identifying actions as having properties, but on substance theory objects have properties but an action (as an event) does not (I may return to this problem in the future). If we neglect the idea of moral properties, a difficulty comes here when we consider non-human subjective states, such as the subjective states of an “ideal observer” or God. If a moral truth is categorical in the Kantian sense, then it is independent of any rational agent’s subjective states; this truth would be objective in a rationalist sense, then. I will only really be considering the rationalist or robust ontological senses of objectivity.

On any understanding of objective morality, with or without identifying moral properties, “There is some ‘reality’…that ‘makes true’ certain claims.” (Horgan, Terry, and Mark Timmons. “What does moral phenomenology tell us about moral objectivity?” Social Philosophy & Policy 25.1 (2008), p. 272.)

[7] A final problem arises from the generic definition of objective as “mind-independent.” If God is a mind, then everything in the universe is mind-dependent in some sense because a (disembodied) mind created the entire universe. Does that mean that all facts about the world are subjective? This hardly makes any sense. A parallel is seen when talking about mental causation: human minds can exert causal effects resulting in changes in the external, mind-independent world, but this causal type of mind-dependence is not the sense of “mind-independent” we mean. This point is made by William Lane Craig here. Thus, it is better to explicitly render “mind-independent” as independent of any subjective states.

[8] McGrath, Sarah. “Moral realism without convergence.” Philosophical Topics 38.2 (2010): 59-90, p. 61.

[9] A less robust definition would be to say that objective morality is independent of any human subjective states. However, this could leave the option of alien preferences being the guidelines of morality. Additionally, one moral theory, ideal observer theory, identifies moral truths with the preferences of an ideal observer (the question of the existence of the ideal observer is irrelevant). This theory is called a universal subjectivist theory since it is independent of any human and thus applies to all (i.e. universally), but it depends on subjective states of an observer. If one affirms that the ideal observer exists and is God, it is called divine preference theory (see Thomas Carson’s work). However, there is a substantial distinction between divine preferences and divine commands. Divine preferences are clearly subjective, but divine commands are not clearly dependent on God’s subjective states. For example, William of Ockham famously bit the bullet on the arbitrariness objection by not allowing any restriction on God’s commands from God’s moral nature, which maximizes God’s freedom. Therefore, an Ockhamist DCT would seem to be independent of any subjective states (since commands are not subjective states, especially when one identifies those commands as exegeted from the biblical text).

However, most modern DCTs are not Ockhamist and have a grounding relation between God’s commands and God’s nature or His commands and His will. A grounding relation (I think) confers a dependence of some sort, especially in this sense because God’s nature or His will puts a restriction on the range of possible divine commands (if this is included in the grounding relation). A grounding relation between God’s commands and God’s nature would allow for DCT to remain objective, and this view is considered a hybrid view where moral values are based on God’s nature and moral obligations come from God’s commands, and values are more fundamental than obligations. This view is presented by Adams’ Finite and Infinite Goods: A Framework for Ethics, William Lane Craig accepts and defends this view, and this is a plausible solution to the Euthyphro Dilemma.

The grounding or identification of moral obligations in or with the divine will, however, is more likely to still be considered subjectivist. I don’t know enough about this view (defended by Mark Murphy and Philip Quinn) to say much. Christian Miller in “Divine Will Theory: Desires or Intentions?” suggests that while Murphy and Quinn focus on grounding moral obligations in divine intentions, it would be better to focus on divine desires. I think now (according to Christian Miller’s “Divine Desire Theory and Obligation”) these theories are considered distinctly and identified by divine intention theory and divine desire theory, respectively. Either way, my understanding is that intentional states are very much mind-dependent and subjective, and desires are explicitly subjective states. Therefore, it seems like divine will theory in either desire or intentions form would be a type of universal subjectivism. However, a divine command theory with commands grounded in God’s nature (or ungrounded) would remain objective. I will investigate these ideas more in-depth when investigating theistic morality and its objectivity.

[10] Additionally, perhaps “subjective” could mean subject-dependent, dependent on anything about some subject, instead of dependent on the subjective states of some subject. That would be a different story, as subject-dependent is different than mind-dependent. However, I have never seen anyone ever use this definition, so I will not consider this further.

[11] Baggini, Julian, and Peter S. Fosl. The Ethics toolkit: A compendium of ethical concepts and methods. Wiley-Blackwell, 2007, p. 130. All references are to the pdf of the epub version (no page numbers are given).


[13] Wolf, Susan. “Two levels of pluralism.” Ethics 102.4 (1992): 785-798, p. 786.

[14] The Ethics Toolkit, p. 133.  

[15] Ibid, p. 130.

[16] (Emphasis in original).  

[17] Wolf, Susan. “Two levels of pluralism.” Ethics 102.4 (1992): 785-798, pp. 792-797.

[18] Ibid.

[19] Harman, Gilbert. “Moral relativism is moral realism.” Philosophical Studies 172.4 (2015): 855-863.

[20] Wolf, Susan. “Two levels of pluralism.” Ethics 102.4 (1992): 785-798, p. 786. She explains the full line of reasoning to get to subjectivism as (p. 786), “Pondering the existence of persistent disagreement leads one to relativism. Pondering the conditions under which relativism would be true leads one to subjectivism.”

[21] This agrees with a point made in The Ethics Toolkit on p. 133: “[I]t may be possible to speak of a subjectivism that’s collective or social. For this reason many conflate social relativism with social subjectivism. But while different social subjects are likely, according to subjectivism, to yield different moralities, relativism is possible even if subjectivism is wrong. Different societies might have different moralities for different objective reasons.”

[22] I might prepare a giant list of all the places I have seen relativism confused with subjectivism or vice versa, as this distinction has caused me much pain to sort out (and is in part why I was so delayed in finishing this post). Two such places I have seen it that are absolutely inexcusable are The Professional Ethics Toolkit and Michael Huemer’s Ethical Intuitionism.

[23] Kant himself used an example of lying to a murderer at your door who is looking for the would-be victim, and he says it is wrong to lie in such a scenario. This example was, post-World War II, adapted to be a Nazi at the door looking for Jews, and this example is commonly used to show the absurdity of Kant’s absolutist deontology. However, this seemingly obvious extrapolation of Kant’s views has been challenged, see Varden, H. (2010), “Kant and Lying to the Murderer at the Door…One More Time: Kant’s Legal Philosophy and Lies to Murderers and Nazis.” Journal of Social Philosophy, 41: 403-421. I do not know enough to comment.

[24] By spatiotemporal context, I mean something like “France in the 1800s” or “1920s USA” or “at the McDonalds down the street in Texas in 2021.” These give a time (or time period) and spatial location or geography. This type of context is likely more important for a cultural relativist that thinks moral truths are relative to a culture (or a subjectivist who thinks moral truths depend on cultural subjective states), which usually has spatiotemporally significant moral factors that contribute to moral truth values according to a relativist or subjectivist.

[25] An example would be saying that lying is always wrong for Bob, but lying is always permissible for Alice, no matter the situation of either of them. Another example could be that in France, abortion is always wrong no matter the reasoning, but in China, abortion is always permissible for any reason whatsoever.

[26] I will likely revisit this in the future to see how well a relativist can hold her ground here. There are different ways of pushing on this claim. I am not sure if it works or not. Naïve global relativism is straightforwardly self-defeating, though.

[27] Singer may use a strong intuition to justify the principle, “We ought to be preventing as much suffering as we can without sacrificing something else of comparable moral importance,” but reject the reliability of intuitions in his own Drowning Child thought experiment. Singer even offers an evolutionary debunking argument for these types of intuitions in Singer, Peter. “Ethics and Intuitions.” The Journal of Ethics 9.3-4 (2005): 331-352. However, it is consistent for Singer to offer an argument of this sort, since his interlocutors accept the reliability of first-order contextual intuitions. This point was made in my least favorite paper ever: Timmerman, Travis. “Sometimes there is nothing wrong with letting a child drown.” Analysis 75.2 (2015): 204-212, p. 211. The way he words it is that Singer “famously rejects the reliability of intuitions about first-order normative judgments” but “is not similarly skeptical of the reliability of intuitions about abstract moral principles.” It is for this reason I mention this dichotomy, with which I have great sympathies. This is the same idea behind talking about “up-close-and-personal” intuitions versus “impersonal” intuitions, which I take to correspond to first-order moral claims and moral principles, respectively. These two ‘types’ of intuitions, in connection to Singer’s views and evolutionary debunking argument, were discussed but challenged in Holtzman, Geoffrey S. “Famine, Affluence and Intuitions: Evolutionary Debunking Proves Too Much.” Disputatio 10.48 (2018): 57-70.

A Roadmap into Ethics


Questions of morality enter our lives every single day. For any adult, breaking the speed limit or paying taxes. For a student, cheating on exams or homework. For an academic, plagiarizing someone else’s work or findings. Or how about, should I call in sick to work today so I can relax? How much of my work time can I spend on personal issues and phone calls, even if my boss will never know?

How about more general questions: how do I decide what is the right thing to do in any of the above situations? Do I base it on what I feel like doing in the moment? Should I have a robust system in place? Is something only wrong if I get caught?

Now even more general questions: where do moral obligations come from? Are moral values and obligations specific to me, or are they the same for every human? Did God implant these values and duties, did they evolve over time for survival, or do humans just make up a system and run with it?

Each of these sets of questions corresponds to the three subfields of ethics: applied ethics, normative ethics, and metaethics, respectively. In this article, I will outline and describe these topics and how I will approach them systematically in this blog.

Outline of Ethics

Ethics is broken down into three subfields (given in my first post):

  • Metaethics (what are morals, and what grounds them?)
  • Normative ethics (how do we decide what is moral?)
  • Applied ethics (what specific action is moral?)
Figure 1: Outline of Ethics

These fields flow naturally into each other, but your stance in one field does not usually commit you to particular views in other fields (though this is less clear-cut from meta- to normative ethics). For example, I can be a moral objectivist and hold to utilitarianism or virtue ethics. I can be a deontologist and be for or against abortion. Any normative ethical theory can be used to analyze any particular applied ethical issue.  


Metaethics

The most fundamental problem in metaethics, and perhaps ethics as a whole, is the “is-ought problem” (attributed to Hume): how can we derive moral obligations from mere factual statements? It is a fact that the dirty dishes are piled high by the sink. Does that necessarily imply that I am obligated to wash the dishes today? It is a fact that this person on the street is choking and will die unless I perform the Heimlich. Does that mean that I am obligated to perform the Heimlich? Does the answer change if I do not know how to perform the Heimlich (this is Kant’s “ought implies can” principle)? These questions populate the realm of metaethics.

Metaethics also asks questions like, “Is morality objective or relative?” “Is moral obligation actually just emotion?” “Can there be a secular grounding for objective morality?” “Is objective morality only possible if there is a God?” The connection of these questions to Christianity is quite obvious. Additionally, there is the area of moral epistemology: how do we know right and wrong, or the moral guiding principles for ascertaining right and wrong? Finally, moral psychology discusses our motivations for performing moral actions.

From where do moral obligations originate? How do I decide when action is necessary?

Normative Ethics

The connections between normative ethics and Christianity may be less obvious. This might explain why I felt no compelling interest to explore the ethical theories once I learned about them in my Ethics and Engineering class. I thought the ethic of the Christian life was pretty much “Obey God; therefore, follow the commands in the Bible” – that is what makes a faithful Christian. This roughly translates to divine command theory as a normative ethical theory. Right and wrong, aka moral obligation, is based on God’s commands. This is a form of deontological ethics and is the predominant Protestant view, which can be seen in a psychological study on Christian opposition to consequentialist reasoning.[1] However, Western Christianity was dominated by a completely different view for over 1,000 years, natural law ethics,[2] which says that the right thing to do is based on properly seeking the ‘end’ of humanity, which is happiness.[3] The most prominent thinkers in this tradition are St. Augustine (4th century) and Thomas Aquinas (13th century).[4] This type of ethical norm is of a completely different sort, teleological rather than deontological. Now, this is still grounded (in metaethical terms) by God creating humans and empowering them with reason and grace. Therefore, we have two examples of Christian normative ethical theories (divine command ethics and natural law ethics) with two opposing frameworks: deontology and teleology. Which, if either, is correct?

Normative ethics, then, seeks to find guiding principles for ascertaining what is right or wrong. The key disagreement is whether the justification for the right action should be based on consequences (consequentialism), rules (deontological ethics), or character (virtue ethics). There are many variants and disagreements within each of these umbrellas, and they are not 100% separate (pluralist consequentialism can draw on multiple values, rule-consequentialism can implement rules), but their frameworks remain distinct. Normative ethics also seeks to understand the importance of intentions or motivations when performing any ethical action.

Applied Ethics

Next, there is applied ethics. This topic is usually where the rage comes flying out. Merely the words abortion, homosexuality, or racism can bring substantial emotional baggage to the forefront (not saying it isn’t deserved!). It is often, and increasingly, associated with political affiliation, unfortunately.[5] I am interested in a robust analysis of a variety of these practical issues from a purely ethical perspective. The “correct” answers to applied ethical questions hinge on what we take to be the best normative theory, so we need to know how to evaluate normative theories (and whether or not there is a “correct” answer depends on our metaethical views).

Christians and non-Christians end up on all sides of any number of modern ethical issues, including abortion, animal rights, gay marriage, wealth and altruism, etc. I plan to be very selective about topics in applied ethics, as they are quite controversial and I want to only talk about those things I am informed about (i.e. can adequately engage with what contemporary ethicists have written on the topic). Therefore, for the foreseeable future, I only plan to talk about 1) wealth and altruism/theology of possessions, 2) abortion, and (probably) 3) animal rights and human dignity (which relate closely to abortion). These topics played an important role in how I got interested in ethics in the first place.

Beyond these highly controversial practical questions, ethics can be applied to things like Christian doctrine or philosophy of religion in a multidisciplinary setting (not technically the conventional ‘applied ethics’). I find two topics particularly interesting here: the atonement and the problem of evil. The problem of evil is rich with ethical thought and extends to other questions about God’s nature, such as God’s own moral obligations and moral agency. I plan to address both of these topics, the atonement and the problem of evil, in detail.

My Approach to This Blog

There are many possible topics to discuss, and I very much like a systematic approach. Therefore, I will be systematically working through the field of ethics from the top down (metaethics > normative ethics > applied ethics), exploring various topics and connecting the ideas to Christian thought as we go. I will likely do a detailed “first pass,” hitting on the most interesting and central ideas in each of the 3 fields, and then come back and revisit other relevant issues that warrant further attention.

Next time, I will be kicking off our series on metaethics, which consists of some of the deepest and toughest questions in all of ethics. I will begin by discussing arguments for the objectivity of morality.

In what topics or questions are you particularly interested? Do you have any suggestions for things you would really like me to discuss or (attempt to) address? Let me know!

[1] Piazza, Jared. “‘If you love me keep my commandments’: Religiosity increases preference for rule-based moral arguments.” International Journal for the Psychology of Religion 22.4 (2012): 285-302. Piazza, Jared, and Justin Landy. “‘Lean not on your own understanding’: Belief that morality is founded on divine authority and non-utilitarian moral thinking.” Judgment and Decision Making 8.6 (2013): 639-661.

[2] “Natural law ethics – Christianized and church-controlled – more or less dominated the West for over a millennium.” in Perry, John, ed. God, the Good, and Utilitarianism: Perspectives on Peter Singer. Cambridge University Press, 2014, p. 21.

[3] Summa Theologiae, First Part of Second Part, Question 1, Article 8.


[5] For a collection of essays and critical responses that are ethical analyses on important political issues, such as immigration, minimum wage, environmental regulation, health care, abortion, privilege, feminism, affirmative action, racial profiling, and more, see Fischer, Bob (ed.). Ethics, Left and Right: The Moral Issues that Divide Us. Oxford University Press (2019). For a discussion on how people end up so up in arms with their tribe about this stuff, see Haidt, Jonathan. The Righteous Mind: Why Good People are Divided by Politics and Religion. Vintage, 2012.

My Winding Journey into Ethics

Upon reflection, it is surprising to me that it took me so long to get interested in the academic field of ethics. I have been interested in and passionate about many issues in ethics since high school, long before I knew what the field of “ethics” actually included. I will give some background on my life, especially how a preliminary (unknown) interest in ethics developed into an academic interest in ethics (in other words, how we got here).


For starters, ethical issues surrounding the Atonement and their beautiful coherency[1] were the biggest reason I became a Christian 16 years ago (16 years to this day: September 13, 2004). The parallel ethical issues in Islam of sin, judgment, and the afterlife remain, by far, my biggest concern with Islam, given their apparent incoherence.[2] My “extreme views”[3] on the ethics of wealth and possessions have caused a couple of Sunday School teachers, a pastor, and several friends to be uncomfortable or upset. I debated the ethics of abortion in my high school debate club. I made a survey of questions on abortion as a project in my sophomore government class that was intended to show the immoral absurdity of abortion. Abortion also played a central role in an admissions essay[4] to my current university (Texas A&M).

The ethics of the Atonement was the biggest reason I became a Christian 16 years ago.

Given all this, you would think ethics would be a natural extension of the above; however, my actual journey into academic ethics was a bit more complicated. My first encounter with ethics as a field of philosophy was a class called “Ethics and Engineering” during my sophomore year. I was the only person I knew who enjoyed that class[5] and learning about the ethical theories (ethical egoism, utilitarianism, and virtue ethics specifically). I did not find any of them compelling in and of themselves, though, mostly because I saw no connection at all to what I viewed as the correct ethic and decision-making framework, which was following Christianity/the Bible. However, I did “incorporate some of the framework of utilitarianism into my life philosophy.”[6]

At this point in time, my only knowledge about philosophy came from twice-a-year discussions with my cousin Nathan, who was already interested in philosophy. In fact, I likely would not have gotten interested in philosophy at all if it were not for my cousin Nathan and the very difficult questions he was refusing to leave inadequately answered, especially on issues of epistemology (how do we know anything?), predestination, free will, and arguments for God’s existence. I would thus probably mark my true initial interest in philosophy with watching the William Lane Craig vs Christopher Hitchens debate in July of 2017 (which I re-watched last month to see how I felt after studying the arguments in depth for 3 years).[7] The next step for me was listening to Craig’s Reasonable Faith podcast (which I highly recommend), which talks about a wide range of issues in philosophy and Christianity. This led me to considering issues of logic, epistemology, and the cosmological argument for God’s existence more in depth by reading the Stanford Encyclopedia of Philosophy, books, and academic papers. This, in turn, led to the avalanche that resulted in where I am today. Thankfully, this journey was taking place in parallel with my doing undergraduate research and literature reviews, so I was learning how to ‘Google things’ at a scholarly level. My philosophical interests, therefore, reside pretty squarely within philosophy of religion, epistemology, and ethics.

Into Academic Ethics

It was not until my last semester of undergrad (January 2020) that my political science professor’s silly comments about abortion gave me the prompting I needed to do a rigorous investigation into abortion (now that I knew how to do a rigorous investigation). It started with legal issues and the history of abortion,[8] then moved into metaphysical issues about personhood.[9] A couple of the latter papers mentioned the ethical impact, but not often. I was also first exposed to the violinist argument at this point. In the summer, I finally was able to dive into the ethical aspect, including the arguments from the violinist, the embryo rescue case, the future-like-ours, and much more. However, I did not recognize at this time that I was reading applied ethics papers. I was just so engrossed in a topic that I was passionate about that I wasn’t paying attention to what journals these papers were being published in or the broader field in question. In my mind, I was just reading “papers on abortion.” Thus, the ethics of abortion was my real point of entry into the field of ethics. It helped me realize that thought in applied ethics could even help us tease out the ethical implications of Scripture and the relationship between ethical intuitionism, divine command theory, and situational ethics.

The next stage of ethical inquiry came from my friend Emily sharing her moral case for veganism. She mentioned the name Peter Singer several times, whom I had not heard of previously (or, at least, I thought I had not).[10] I began to (try to) think seriously about these ideas, which is still an ongoing process. I watched a video on Singer’s ideas, and his name came up several times in the 50 book recommendations of CosmicSkeptic (a vegan, atheist YouTuber). I began reading a little bit on the moral argument for veganism based on opposition to industrial animal farming practices that result in massive amounts of animal suffering. I have not quite come to a position on this topic.

I soon realized via his website that Singer was not just the guy who is the front-man for principles that can support abortion, selective infanticide, and euthanasia, but also for ideas that support substantial giving to charities under the name of effective altruism (see his book, which you can get for free, The Life You Can Save). At this point, I was extremely intrigued: there are secular proponents of giving substantial amounts of our income to charities? There are secular arguments for a moral obligation for the wealthy to give possibly a majority of their income to charity? I had discovered plenty of secular pro-life organizations via Twitter,[11] but I was honestly surprised at this.

I found out that Peter Singer wrote a game-changing paper in 1972 called “Famine, Affluence, and Morality” (cited over 3500 times!), and this paper has inspired many critiques, further development by Singer, and more. I think effective altruism was really what led me to jump into ethics, knowing I wanted to go deeper. But it wasn’t Singer’s positive arguments that really sealed the deal; it was two revolting responses I read that were so incredibly stupid I couldn’t believe they existed. Namely, “Sometimes there is nothing wrong with letting a child drown”[12] and “On the supposed obligation to relieve famine.”[13]

Figure 1: Probably my least favorite philosophy paper in the world. I label it as the philosophy paper that annoyed me the most, even beating out all the abortion papers I’ve read.

This made me start thinking back to plans I made a long time ago. During my sophomore year of college, after being influenced by David Platt (which I will elaborate on in the next post), I started studying theology of possessions, which could be considered an area of applied Christian ethics. A prominent Christian view is stewardship theology, which says that God made us stewards over the planet (which gives us an obligation to take care of animals and the environment) as well as of our money and our possessions. In practice (not necessarily in theory), this seems to be taken to mean that I can pretty much do what I want with my money and you can’t tell me anything I ought to do because “Christian liberty.” I find this both revolting and starkly unbiblical. The short version is, I thought of a stronger form of a theology of possessions and developed it slightly along with my dad (who even gave presentations on it to at least one church). I made plans to come back and study the topic more rigorously in the future, even contemplating doing a master’s degree in theology where my thesis would be on this topic. This discovery of effective altruism and applied ethics, however, made me realize that I could incorporate Singer and related arguments into a type of theology of possessions for an even stronger case.

During this same timeframe, I was also beginning to study metaethics. I was introduced to William Lane Craig’s moral argument for God’s existence a while back. I plan to study some of the other moral arguments for (and against) God’s existence in the future and discuss them here. I read Andrew Fisher’s introduction to metaethics this summer and saw a lot of interesting questions there, especially those surrounding divine command theory. I started to think about the connections between metaethics and normative ethics. Can someone believe in subjective morality and still think right and wrong is based on God’s commands? Can someone think morality is grounded in God but think that right and wrong is based on “natural law”?

At some point during this process, the two strands above (theology of possessions and meta-normative connections) came together such that I realized that I could turn some theological ideas that I’ve had, namely the ones about our purpose in life and my “life philosophy,” into a normative ethical theory. Glorifying God is really what I saw as our primary obligation the whole time, but I only recently began thinking about it in terms of an ethical framework. Divine glory utilitarianism is the result.

Glorifying God is what I saw as our primary obligation, but I only recently began thinking about it in terms of an ethical framework. Divine glory utilitarianism is the result.

Next, I was looking into normative ethical theories, especially utilitarianism, and came across the demandingness objection frequently. I started thinking about how that objection would apply to Christianity and Christian ethics, which made me think of the name What the Gospel Demands; then I had the idea for this blog! I started the website two years ago, originally setting out to start a blog on discipleship and missional community, but I didn’t have/make the time and energy to do it. So this is take two. Considering I didn’t even make it to my first blog post last time, we’re doing great so far.


Here we are today! In summary, my pathway into ethics was abortion > animal rights > altruism > theology of possessions > normative ethics. As you can see, I have interests in all three fields of ethics: meta, normative, and applied. I just started diving into academic ethics this summer, so I’m still kind of a n00b. It’s been a good journey to get here, and I’m excited for the path forward, exploring many new ideas.

What areas or questions in ethics do you find interesting? How did you get interested in ethics?

[1] In Christianity, God rewards every good deed and punishes every wrongdoing (e.g. Romans 2:6, Ephesians 6:8, Revelation 22:12). Given that this is the definition of justice, God is perfectly just. God’s perfect mercy is displayed by Jesus voluntarily taking on the sin of humanity to offer forgiveness to all. There are complications here worth exploring, but in the end, only the innocent are rewarded and only the guilty are punished, and yet all have the opportunity for reconciliation.

[2] In Islam, God does not reward every good deed nor punish every wrongdoing. The two most problematic cases are 1) nullification and 2) the 70,000 that skip Judgment Day. Nullification refers to the 10 or so groups of people (based on specific sins they have committed) who will have their good deeds “nullified,” i.e. cancelled or ignored, on Judgment Day. Secondly, there are 70,000 individuals who will not have an account of their good or bad deeds, and will be sent to heaven regardless (source: the most authentic Islamic tradition collection, Sahih al-Bukhari). There are more problems to be explored here.  

[3] See Divine Glory Utilitarianism for my proposal of Christian ethics and its application to wealth and possessions at the end.

[4] When I reread this essay, which was on interacting with people of different beliefs, I cringed at my use of language and terminology. My entrance into philosophy, especially analytic philosophy that emphasizes clarity and precise argumentation, has made me a bit more careful about definitions and precision in speech (and not being so unnecessarily charged).

[5] This is probably because every other person in the class was a graduating senior; the class is normally restricted to them. By the grace of God, they let me take it as a sophomore because I had no other options.

[6] This is what I told the teaching assistant of my 2016 Ethics and Engineering class in an email dated January 2019. It is clear that at this point, the seeds of ‘divine glory utilitarianism’ had already taken hold. I did not yet think of it as an ethical framework, but more about the purpose of our lives being to maximize God’s glory rather than that being our (primary) moral obligation and reason for things being right or wrong. It is worth noting that Alasdair MacIntyre argued in After Virtue that the purpose of our lives (aka the telos of humanity) informs and should be the source of our moral obligation, which would connect my understanding of ‘life philosophy’ and ethical framework.

[7] In case you were wondering, like most debates involving Craig, Craig was victorious and widely admitted as such on both sides due to his precise and (relatively) rigorous philosophical argumentation. Though, I find who “won” a debate to be largely irrelevant; the soundness of the arguments is what matters. The funniest part when re-watching was Hitchens’ summaries of free will. He describes free will on atheism as, “We have no choice but to have free will,” and on theism as, “Of course we have free will. The boss demands it.”

[8] See this video for example. It is made by a very pro-life organization, Live Action, but I was surprised to see that a preliminary investigation confirmed the pieces I had time to look at. It was confirmed in part, for example, by the whopping 1200-page book Dispelling the Myths of Abortion History by Joseph Dellapenna, published by Carolina Academic Press.

[9] Usually humans are seen as “persons” when they have a certain developed form of rationality. It is usually said that “persons” have rights, rather than humans, including the right to life.

[10] It turns out, I wrote a response essay to Singer’s “All Animals are Equal” in my Ethics and Engineering class, but I had no recollection of this whatsoever.

[11] Especially Secular Pro-Life, Feminists for Life, and Pro-Life Humanists.

[12] Timmerman, Travis. “Sometimes there is nothing wrong with letting a child drown.” Analysis 75.2 (2015): 204-212.

[13] Kekes, John. “On the supposed obligation to relieve famine.” Philosophy 77.302 (2002): 503-517.

Welcome to “What the Gospel Demands”

Is morality absolute, objective, or subjective? How do we know what is right and wrong? Is morality rooted in God’s commands, God’s will, or something else? What should be our decision-making criteria? How do we import morals from the Bible into principles or specific applications? What is the importance of ethical intuition and situational context? If these types of questions pique your interest at all, you’re in the right place.

Welcome to What the Gospel Demands! This blog will be talking about issues in ethics (also known as moral philosophy) and how those issues intersect with Christian thought. When I initially heard about “ethics,” I thought to myself, “How boring. My ‘ethic’ is to live by the Bible. The end.” My mind has since changed (on the first part, at least). I have also found the wondrous ways in which ethical theory intersects important Christian issues and greatly affects how we understand the relationship between God and morality, obedience to God, decision-making criteria, and how these apply to specific (and often controversial) issues like abortion, death penalty, wealth, war, animals, and more.

Is morality absolute, objective, or subjective? How do we know what is right and wrong?

This project is now very different from how I originally conceived it in 2018 (and when I bought the domain name). However, I realized that the name, What the Gospel Demands, still applies quite nicely (see my next post to learn the origin of the name). “Demandingness” is one of the most discussed topics in ethics when evaluating ethical theories and applications of those theories. It is often posed as an objection (the demandingness objection) and is the subject of entire books, such as The Limits of Morality by Shelly Kagan. In popular discourse, the “demandingness” of Christian morals is perceived negatively as disgruntled obedience to a list of rules. However, the transformative life-change from the Holy Spirit causes a decrease in the desire for worldly things and a desire to mimic God and obey Him. One way this is reflected is in the Psalms, where David perceives God’s laws as beautiful, refreshing, and as a means of meditation. There is much more to be said here that I will leave for another time.

One thing I want to clarify is that I will be discussing “ethics and Christianity” rather than “Christian ethics.” The difference is that “Christian ethics” is its own field, with which I am much less familiar, but “ethics” is the broader field in academic philosophy. There is obvious substantial overlap, and I am interested in exploring this area. One reason I am focusing on the broader field is that it has a well-defined structure and seems to cover many more topics, and they are all relevant to Christianity.

Ethics is broken down into three main fields: metaethics (what is the source of moral values and duties, and what grounds them?), normative ethics (how do we decide what is moral?), and applied ethics (what specific action is moral?). A fourth field is sometimes included, descriptive ethics, which is more of an empirical social science focused on what people believe about morality. We will focus on the first three. There are questions in each of these fields that are (or at least should be) important to every person on Earth, especially to the Christian.

Figure 1: Outline of the Field of Ethics. Thanks to Abner Telan for the design.

If these topics interest you, then great! This blog is for anyone who wants to join me on this journey as I navigate the various topics within ethics and how they relate to Christianity. Really, I think one reason I’m doing this blog is to help me formulate and refine my own thoughts on these issues both through the writing process and also from getting feedback and pushback on my ideas from readers (you guys and gals). Along the way, perhaps someone can learn from my always-tortuous journey of trying to learn far too many things.

I hope to connect and engage with you. Feedback is appreciated and encouraged. Let me know if you disagree and why. You can reach out by filling out the contact form, leaving comments, or at my Twitter, @AStrasser116.