
Giggling on the toilet so loud that my dog started barking at the bathroom door. He thinks I’m dying and I am.


I can't figure out what ACX means, but I get the joke anyway.


Rule utilitarianism isn't arbitrary (neither are the 10 commandments, for that matter; they have other issues).

What rule utilitarianism allows is heuristics/norms in the face of uncertainty/odd cases, and a bit of simplicity in the face of complexity.

It's almost cheating, because it takes the useful bits of deontology (but, ideally, with better-informed priors and a focus on consequences/empiricism) and combines them with the best bits of pure utilitarianism.

Rule Utilitarianism Isn’t So Crazy

https://fakenous.net/?p=2789


I don't know if I agree or disagree, but I just wanted to say that fakenous is the best url I've ever seen.


The funny thing here is that if pure utilitarianism--a form of consequentialism plus math--leads to bad consequences, then obviously it's not being implemented correctly in that context. Different people with different values--often different axioms--will have very different definitions of good vs. bad consequences, and therefore very different utility calculations. Rule utilitarianism and preference utilitarianism coexist quite well with democracy and liberalism (which help set certain baseline values and peaceful means by which to contest values) and are more practical forms of implementable utilitarianism at scale over time.

So sure, if you strawman utilitarians a bit and then don't allow them to make any pragmatic alterations to their consequentialist framework(s), then yeah, it all sounds totally insane.


I just came here to say something similar. So thank you.


I have virtually no education on the topic but it seems that rule utilitarianism makes sense for, say, governmental action, while something like virtue ethics makes sense on an interpersonal level.

Am I a dunderhead for thinking different philosophical frameworks suit different purposes?


No, you're not a dunderhead.

The laws of physics don't even hold uniformly across different scales (to the best of our current knowledge). There's no simple system that can apply to any given moral problem.

The key thing, in my opinion, is that there is a way to pursue, say, virtue ethics in a manner consistent with consequentialism. How ought we to determine what is virtuous if not by the consequences that come about (in accordance with our base values, like human well-being)?


Of course you're not a dunderhead for thinking that.

I have a stupid amount of education on this topic -- PhD in philosophy with an emphasis in ethics. I think that virtue ethics is pretty but often doesn't provide good guidance on specific actions and is very hard to ground, ultimately. But that's a digression.


I’m adding that to my resume.

“Not a dunderhead.” —Mariana Trench, PhD.


As an analytically trained dunderhead myself, I feel obliged to point out that, technically, her comment allows that you might be a dunderhead for reasons unrelated to, let's call it, the polyethical principle.


Now you’re just adding caveats.


Care to read the first few chapters of Singer's Practical Ethics? You left out the word "preference" in "preference utilitarianism" (as opposed to "hedonic utilitarianism" which seems to be your straw man).

An unconscious person still has (or rather had) "preferences" in the same way a dead person did. You care about what happens in the world after you die.

Also, I don't know which utilitarians you've talked to, but the ones I know _love_ trolley problems. They just can't get enough of them. Being allergic to thought experiments is a criticism I'd lay at the feet of just about _any_ other group before utilitarians.

author

"You care about what happens in the world after you die."

I assure you that I really, really don't.


It doesn't matter whether you care now; it matters whether you care after you die.


"What happens after I die", is a question only asked by the living.


Our opinions about it while alive are irrelevant. Either we don't care after we die (because there's no afterlife or anything like it), or we do. Utilitarianism doesn't imply that we should only care about the dead who, when alive, believed they would care about what happens in the world after they die. It says that we should care either about all the dead, if there is some kind of afterlife, or about none of them, if there isn't. Utilitarianism itself doesn't take a stance on an afterlife.


I don't think that our opinions about it while alive are irrelevant, because for most people the thought of our own death is a great contributor to decisions in life... many of them sub-optimal. If we did not care, why would we strive to survive? Just look at the COVID crazies... people so insane with fear of death that they were willing to destroy society over it... when the risk of their own death and others' deaths could be very accurately calculated and was always generally very low.


I was talking about our opinions about "life after death" and I meant that those don't matter for how we ought to be treated, according to utilitarianism, after we die.

I wasn't talking about our opinions about the value of life while alive, nor denying that our opinions now (on anything) matter for our actions and desires now.


Sooo... Given the option between dying today and dying in 30 years but also the world blows up and everybody else dies too, you take the latter?


Beat me to it! I just posted a similar comment.


I don't understand this. Are you saying that if an all-knowing wizard or genie told you, "One hour after you die of natural causes, planet Earth will be destroyed in a fiery explosion," you would not care? Not even about your friends or loved ones who may be alive at the time?


There's harm in learning that the world is going to explode an hour after you die, so morally the genie shouldn't tell you. But you won't have any reaction to the actual end of the world because you won't be there. So your feeling about the fact (unlike learning about it) has no moral weight.


It bothers me that utilitarianism is often taken for granted in EA, policy circles, economics, etc., when it's hardly the consensus position in normative ethics. The last PhilPapers survey on the views of academic philosophers showed that utilitarianism was the third most popular view, slightly behind virtue ethics and deontological views (https://survey2020.philpeople.org/survey/results/all). When you restrict it to philosophers who specialise in normative ethics, the gap widens in favour of deontology and against utilitarianism. I get the appeal – it's a wonkish, anti-common-sense view that tells you to break out your calculator and make the harsh tradeoffs that other people are too squeamish to make – but it shouldn't be uncontested gospel just because it excites nerds.


What's the consensus among philosophers on using consensus among philosophers to establish normative ethics?


Getting to the real questions, here.


I'd argue that it's not really utilitarianism if it only includes certain groups in its calculation of well-being, so most of the policy circles and economics are not really doing utilitarianism. Doesn't Singer's work directly challenge what most of them are doing?

I guess this depends, though, on whether we define utilitarianism by doing the math, or by considering the well-being of all participants.


The last objection is particularly annoying since utilitarians are fond of pointing out that adherents of other moral systems get the answer to trolley problems "wrong."


Ursula K. Le Guin's classic treatment is "The Ones Who Walk Away From Omelas".


Personally speaking, Omelas is to Le Guin what Harrison Bergeron is to Vonnegut.


There's a reason that "Harrison Bergeron" is one of the most famous things Vonnegut ever wrote--there's a better-than-even chance that any student of American literature has heard of it and knows instantly what's implied when someone brings it up.

This is despite the fact that it's amateurishly written, dating from an early stage of Vonnegut's career when his work was still unpolished. In the end the idea is so striking that it has gone on to overcome the weakness of the writing and establish itself as a touchstone of modern American culture.


Indeed... which means that in the minds of most Americans who have heard of him, he is equated with libertarianism rather than socialism. It's like George Orwell all over again!

I agree it has striking imagery though. I guess the same with Omelas.

But I think the world would be marginally better if 'Always Coming Home', say, were nearly as widely read (or 'Deadeye Dick' with Vonnegut).


I don't think "Harrison Bergeron" really has any coherent political message. Its message has less to do with libertarianism than with a rejection of enforced conformity.

As for Vonnegut, I think the only decent thing he ever wrote is "Slaughterhouse-Five". I am not a huge fan of Le Guin, to be honest, but some of her short stories are pretty good.


Kind of off topic, but... I'm really interested in reading Le Guin's work. Do you (or others) have recommendations about which one(s) I should start with? I.e., ones good for a beginner who knows almost nothing about Le Guin?


Do you prefer science fiction or fantasy? (mind, her fantasy tends towards the rigorous and anthropological rather than airy-fairy!) Also, do you prefer weird, high-concept sci-fi or political stuff?


I'm not sure, exactly. I probably prefer fantasy? However, I'm less into airy-fairy. (It's hard for me to know the difference between sci-fi and fantasy.) For reference, here are some works I like:

C. S. Lewis: the Narnia chronicles; the sci-fi trilogy; Till We Have Faces

Tolkien: LOTR

Orson Scott Card's short stories (the one book-length work, about a haunted house, I didn't like very much)

(some of) Philip K. Dick's work

John Christopher: the Tripods trilogy; the Prince in Waiting trilogy

Some of the above might make it seem that I lean toward religiously inflected writing (Lewis, Tolkien, Card), but one reason I'm interested in Le Guin is that I've heard she writes from an atheistic perspective.

Thanks, by the way, for responding to my request for info.


No worries - I love Le Guin so I'm happy to recommend!

The novel 'The Left Hand of Darkness' would probably be a good place to start by my reckoning, alongside the short story collection 'The Wind's Twelve Quarters'.

The 'Earthsea' books are probably her closest to Lewis and Tolkien and you can get them collected all together now.

I hope you enjoy if you decide to read them!


Yah, there’s no God so don’t have a moral philosophy, I just vibe man lol


The main critique here falls flat for me, simply because you're allowed to fold into your definition of utility ideas like "people are better off in a society where we don't rape unconscious people or allow anyone to starve," and I think you'd be right to do so. Placing a large utility premium on these things lets us resolve all your counterexamples without much trouble, but importantly I *do* think there is some amount of human happiness we should trade one person starving for. Living in a world bound by physical reality requires making decisions with tradeoffs; you don't get to only win.

To me, utilitarianism is the insistence that you consider the consequences of your actions when you're taking them, not just how they feel ex ante. If your project is to alleviate climate change, then deontologically maybe recycling and spending tens of minutes thinking about how to reuse tote bags is the appropriate action, but a utilitarian will demand you acknowledge that these things are a worse use of your resources and less effective against climate change than things like buying carbon credits, donating to climate lobbying groups, lobbying for nuclear power, etc.

A lot of EA's project is to get people to apply a minimally utilitarian lens to situations that aren't so thorny, and I think they are a very strong driving force in holding charity and philanthropy accountable for their actions and not just their words. I am extremely sympathetic to many of your criticisms of EA as a philosophical school, but I think I would much prefer to live in this world than in a world where EA ideas are subscribed to by nobody. Any person living in abject poverty would prefer to gain basic resources and healthcare in exchange for adding to the world some billionaires and over-eager college students that the American philosopher class dislikes, and we, including you, have a duty to take that seriously.


If we're going to place a large utility premium on "people are better off in a society where we don't rape unconscious people or allow anyone to starve," is it OK to place another large utility premium on not taking the Lord's name in vain, and on remembering the Sabbath and keeping it holy?


In short, yes. I don't personally place value on those things, but society's values are an aggregate, and if other people valued them, I think it would be right for society to take them into account in the utilitarian calculus.

Arguing about what constitutes utility is a bit second-order to utilitarianism itself, imo.


"Arguing about what constitutes utility is a bit second-order to utilitarianism itself, imo"

I'm curious: how could that be? I feel like I must be misunderstanding something. You want to maximize something, but determining *what it is you want to maximize* is not the most important question?

To me, calling it an increase in happiness / benefit / advantage / well-being or a decrease in the same has two problems: 1) those things are subjective, and 2) they are impossible to measure except by comparison in the most specific of situations with a full set of context. And what that really amounts to, then, is not a principled ethics but rather your personal weighting of the relative increases / decreases in "good" in a given situation.


Object-level application of theoretical principles does constitute a legitimate domain of criticism, though--repugnant-conclusion arguments, e.g. They're not knockdown arguments, because one can always just bite the bullet, but they do direct our attention to the gaps between our felt predictions and those we logically derive, and that can be a fruitful thing to interrogate.


That's why all my EA dollars go to preventing the raping of the comatose.


>>Placing a large utility premium on these things lets us resolve all your counterexamples without much trouble, but importantly I *do* think there is some amount of human happiness we should trade one person starving for. Living in a world bound by physical reality requires making decisions with tradeoffs; you don't get to only win.

Just out of curiosity, what are your thoughts on Ursula K. Le Guin's short story/thought experiment, "The Ones Who Walk Away from Omelas"?


"There are of course many other examples where utilitarian logic violates our basic moral instincts."

Ah yes, the basic moral instinct argument. Does it really exist?

If your dog gets out and ends up in the yard of neighbors who are recently immigrated Hmong... and they kill and eat your dog, what is the moral argument?

On some primitive tribal islands, if you are accidentally shipwrecked there, the natives will kill you and eat you. They might rape you first. What is that moral argument?

American liberals have "progressed" to a belief that more victims of crime are an acceptable consequence of reducing the number of incarcerated, and that young children who question their gender identity should be actively encouraged to physically alter their gender. How do you square these positions in terms of natural morality?

C.S. Lewis argued, in his letters to a British people despondent over yet another world war, that God is naturally present in the natural human reaction to cruelty, unfairness, and harm done to others. However, those that are the most cruel, unfair, and harmful often claim to be virtuous and moral in their actions.

I am not convinced that there is a natural morality. Morality seems to be more a social and cultural construct, and hence it is malleable and corruptible. This distinction is important as the secular left "progresses" without a committed religious grounding of base morality.


There really isn't any perfect moral philosophy. It'd be nice if there were. With deontology you just run into all of the opposite problems.


Many a great Star Trek episode was written from this premise.


I feel that in a utilitarian system, there would be no vegetative patients to be raped. Think of all the resources that go into maintaining those patients. Wouldn't that money be better spent stocking food-banks? Wouldn't the trained staff be better deployed treating people who have urgent needs? Personally I would triage those who are actively suffering over the comatose. Ideally, we could care for both, but realistically there are people who are actively suffering who are currently going untreated.

I think you are mistaken that utilitarians would support the raping of comatose people because in a non-utopian utilitarian society those patients simply would not exist.


What about utilitarians in a non-utilitarian society, though?


How do you mean?


Should utilitarians start living according to utilitarian principles now, while comatose patients still exist, or should they live by a different set of ethics until the utilitarian society gets declared?


Well, if I were the security guard witnessing the rape of a patient, I would balance what's best for the patient, what's best for the rapist, what's best for society, and what's best for myself.

Best for the patient - I'm pretty sure the patient would want help

Best for the rapist - Don't interfere

Best for myself - performing my job well = protecting the patient

Best for society - the pleasure of the villain/rapist is less than the good of preventing the rape. We have to assume that the patient might wake up someday, or why is there even a patient? If the patient awoke and learned of the rape then that would cause "badness" that exceeds the "goodness" of a happy rapist.

I would intervene and stop the rape, because that is what is best for most of the interests involved.

To answer your question: yes, utilitarians should live according to their principles in a non-utilitarian world. This will not lead to a more horrible world (than if they lived by other principles), as Freddie suggests it would.

---

I would like to add that I wouldn't sit down and figure all this out while the attempted rape was happening. If I'm that security guard, I've already made the choice to protect my patients, just as, as a data analyst, I've already made the choice not to share the confidential data of my clients.


“We have to assume that the patient might wake up someday, or why is there even a patient?”

Okay, so limit the question to patients who have been diagnosed as irreversible, and whose families have signed off on pulling the plug, but for whatever reason the actual deed hasn’t been done yet.

“If the patient awoke and learned of the rape then that would cause ‘badness’ that exceeds the ‘goodness’ of a happy rapist.”

How do you know? What if the rapist REALLY loves raping? What if there are two rapists, or three, or a football team’s worth, just trailing out into the hall waiting their turn? (All of whom REALLY love raping.) At some point we must admit that their collective happiness would outweigh the pain of one person. I mean, it’s just one person! Think of all the people you’re helping. And the patient never even felt it! And you don’t even have to tell them it happened.
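For what it's worth, the aggregation step being gestured at here reduces to a bare arithmetic claim (the symbols are mine, not the commenter's or Freddie's): for any fixed harm \(H > 0\) to the one victim and any per-perpetrator pleasure \(\varepsilon > 0\), naive summation guarantees some number of perpetrators \(n\) with

\[
n\,\varepsilon > H,
\]

at which point the calculus flips. The reply below pushes back by enlarging the harm side of that ledger -- counting the distress of family, staff, and strangers who hear of the crime -- rather than by rejecting the summation itself.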


"Okay, so limit the question to patients who have been diagnosed as irreversible, and whose families have signed off on pulling the plug, but for whatever reason the actual deed hasn’t been done yet. "

Well, I'd still try to do what's best.

Best for the patient: who cares, they're already dead... But you said they have family, so the family would likely be very upset by the rape.

Best for rapist: Don't intervene

Best for me: Do my job, don't get fired, live to stop another rapist.

Best for society: Live to stop another rapist.

I'd still stop that rapist.

---

"How do you know?"

Same way I know anything - I use empathy and intellect to guess.

---

"At some point we must admit that their collective happiness would outweigh the pain of one person."

I think something that you and Freddie are overlooking is that the crime of rape greatly upsets far more people than merely the victim. It harms society itself.

I think the fact that rape is so upsetting to those not involved is why Freddie chose that subject for his thought experiment in the first place. There is good in preventing not only the harm done to friends and family members of the victim but also the harm done to complete strangers who hear about it. Would you not worry if you heard that a patient was raped at a facility your loved one was also at?

Probably yes, even though you don't know anyone involved.

So there is a real utilitarian good in preventing the rape that has nothing to do with the victim at all.


My guess (and it’s a guess informed only by anecdotal evidence, if it’s informed at all)—

is that this situation is similar to the one with libertarians, where they argue that whatever bad outcomes are generated by their philosophy will self-correct, or can be easily corrected with almost no muss and fuss. A few caveats, a few tweaks, and we’re done. But in real life, where people don’t bother to take the tweaks and caveats seriously, bad outcomes happen and the philosophers are long gone when the suffering of others takes place.


Sadly the world requires infinite decisions be made, yet provides a dearth of good choices. How is one to navigate in these circumstances? That is my quandary.


I would not call myself a strict utilitarian, so I feel no need to defend the position, but this feels a bit too much like a straw man to be really useful.

There is short-term utility and long-term utility, and the two often collide. If I'm having surgery, it might create more utility for society in general if the doctor took all my usable organs and saved many lives, instead of doing the operation I was expecting to have. But the long-term negative effect of this--fear of getting medical treatment lest one be turned into an unwilling organ donor--far outweighs the positive short-term utility. I think it's rather easy to find similar counterarguments to the examples you're using.

"Utilitarianism places no value on duty to personal responsibilities." I think that's wrong, and I could see a utilitarian argument for such values similar to the one above: showing personal responsibility and acting on duty can have positive consequences--social utility--and thus can be defended within a utilitarian philosophy.

In the end, I don't think all moral and practical questions can be solved by one universal principle, so I don't feel the need to defend utilitarian thinking at any cost. It's more, as you say, a good school of thought for shaking up one's own moral intuitions, for rethinking some of our mainstream approaches to problems.


In physics, we describe this as "non-perturbative," i.e., the higher-order corrections can be larger than the first-order behavior. Most moral philosophers don't know how to do this kind of calculation, if they even know it exists. As a consequentialist, I find most naive "dilemmas" posed as arguments against utilitarianism to be rather facile on these grounds.
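A toy sketch of the physics analogy, in my own notation rather than anything from this thread: write the total utility of a policy as an expansion in a small parameter \(g\) that scales the indirect effects,

\[
U(g) = U_0 + g\,U_1 + g^2\,U_2 + \dots
\]

In the organ-harvesting example above, \(U_0\) would be the direct lives-saved term and the later terms the society-wide erosion of trust in doctors. Calling the problem "non-perturbative" is, loosely, the claim that those higher-order terms are not actually small, so truncating at first order can get even the sign of \(U(g)\) wrong, and the naive calculation is no guide at all.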


Hey, you scooped my planned comment!


It turns out the comment I was going to make is the obvious one! I'm also no utilitarian, but I'm not all that much of anything else, either, and weighing the utility (over some time frame) of things/decisions seems like the general way I'm supposed to think about the world. It's not blindingly obvious how to be rational if I'm not doing something like this.


I think this particular critique relies on the scenarios having very limited time horizons & pools of people whose happiness is being considered.

1) A society in which people are not allowed to rape women in vegetative states will produce greater happiness for a greater number in the long run.

2) Fighting the set of norms that makes it possible to quell civil unrest by framing an innocent black man for rape, instead of upholding those norms for short-term gain, will, if you and your allies are successful, produce greater happiness for a greater number in the long run.

3) Give your loaf of bread to the homeless woman and her kids, then go back to the store and buy another loaf for your own kids. If you can't afford another loaf, you're in the same situation as the people asking for your help, and it's a coin toss whether you keep the bread or donate it.

4) "[I]f everyone followed such a project, the credit system would collapse"—yes, which is why it's good for you personally to uphold the credit system, since the stability of the system produces greater happiness for a greater number in the long run.

I'm not a utilitarian, but I think there's a steel man here you haven't yet taken on.

[My in-defense-of-utilitarianism argument for 2) doesn't fully satisfy me—a society doesn't have to base its scapegoating system on racial prejudice, but every society has a scapegoating system, and scapegoating is probably, cf. Girard, necessary to keep human societies stable. Utilitarianism does provide a defense of scapegoating in general, as long as you can't find a better system, and as long as you keep the number of scapegoats as low as possible. However, I'm not sure it follows from this that utilitarianism is bad, because I'm not sure it's possible to eliminate scapegoating from human societies.]

author

OK but your 1) epitomizes my frustration - if we're attaching all those riders, where is the famous flexibility and simplicity of utilitarianism?


But you're just stipulating the definition of Utilitarianism. You can do that, but it's generally better to address how the word (and the idea) are used in the world. Rule Utilitarianism is probably more common than Act Utilitarianism. You can say "Well, that's not really Utilitarianism then" but you're using the term very differently from the way the rest of the world uses it.

author

But the point isn't semantic, it's functional. Utilitarians constantly say "our system is so flexible and simple." Then when you point out repugnant conclusions they say "well here's a huge list of provisos and complications." It's bogus.


Who are they? When I studied moral philosophy, there were a ton of complicated arguments about Utilitarianism. I'm not suggesting that you read these, but if you do a search in the Stanford Encyclopedia of Philosophy for "Utilitarianism", you get a monstrous amount of stuff. Arguments, distinctions, objections, replies, more objections, more replies, more adjectives -- act, rule, hedonic, classical, and on and on. https://plato.stanford.edu/search/searcher.py?query=utilitarianism


So again, who are these people? I know Scott Alexander hates deontology and likes Utilitarianism, but I don't recall his saying it's super simple. (Maybe he did and I missed it.) But anyone who says it's super simple is just wrong. They haven't studied the topic.

author

Scott has a set of priors he calls a moral system, same as everyone else.


Are those claims about the virtues of utilitarianism really important to utilitarian moral logic, though? Seems neither here nor there--utilitarians could (assuming they do largely endorse this view, it's not something I've heard too much myself) be wrong about that, but right about the reasoning. It's not like the claim is that utilitarianism is true *because* it's simple and flexible.


I think you could argue Alexander is the most influential utilitarian. Singer is very popular, but I don’t see his program as very influential among elites. Alexander, by contrast, seems to put into words a particular strain of new elite (particularly tech) thinking partaken in by people from Peter Thiel to Pete Buttigieg. It’s utilitarian in its basic aims and style, particularly its hostility to older “irrational” schemes of values, but in practice it tends to produce justification for surveillance, zero tolerance policing, and “pro-growth” urban politics in terms of some hypothetical greater happiness. At least to me, his work has made that corner of the elite make sense.


I like Alexander's writing very much; I haven't read much from him about utilitarianism explicitly (I'm a fairly recent reader). However, utilitarianism seems to me to be the attempt to apply rationalism to ethics, and ethics are inherently provisional and contextual and values-based, which makes rationalism a fairly unwieldy tool.

I'd like to think that he would decide that, at a certain point, utilitarianism is unhelpful, and at that point it's time to use a different method, similar to how he seems to argue that what constitutes absurdity is inherently a pragmatic matter of whose skepticism one is trying to satisfy.


That's my sense of what he would argue, too. (I don't recall reading anything by him about utilitarianism, though maybe I have. I've read him a lot, off and on, over the last several years.)


Also not a committed utilitarian, but this was my instinct for countering the given examples as well--"A world in which x generally obtains is worse". My problem with it, though, is that it feels like I'm using non-utilitarian intuitions to inform my moral judgments and then backfilling them with utilitarian logic. If the hope is to proceed from utilitarian principles to make moral discoveries or resolve moral disputes, then smuggling in little bits of deontological or virtue ethical or whatever intuitions won't do.


Well said. Applying generalities sounds Kantian, or at least halfway-Kantian (not fully, since it doesn't necessitate looking at ends).

There are those who try to marry deontology with utilitarianism, but at that point you may as well just jettison the utilitarianism, since there's no real basis for restricting moral judgement just to utility if you've already opened the door to, say, virtue ethics.
