266 Comments
Comment deleted

if markets were actually efficient mosquito nets would already cost the marginal value of the lives they save

Good article. I think you can’t talk about EA, however, without mentioning the rationalist movement which underpins it, which starts with equally good if vague ideas (“we should try to be wrong less often”), has some really good methods for getting there, but has one terrible mistake: the belief that all rational people should eventually agree. Meaning that there is one right answer, and if no one can disprove it, then you have that right answer. Ironically, a framework meant to achieve intellectual humility has within it the seeds of creating the exact opposite.

I kind of wonder what would happen if all EA came with the failsafe of “if the answer I came up with is the answer that makes me most happy it’s wrong.” I feel like it’d be a better program.

“Yaaaay I get to work on AI, like I always wanted.” WRONG

“Yaaaaay I get to buy a castle.” WRONG

“Yaaaay let’s go to Mars!” WRONG

At least it seems to me that more of the EA types are realizing that utilitarianism leads to disaster, which is a good thing.

I think that the focus on rationalism ends up meaning that you get a special kick out of violating the seemingly arbitrary morality of normies. Like, it's a bit of a thrill to talk shit about raising money for dog shelters and posture as though the people who donate to such efforts are worse than scum because they could be saving future human lives.

I'm just waiting for the first pro-life EAs and how things might turn militantly natalist when it happens...

Your first paragraph immediately made me think of Bay Area polyamory.

Your second paragraph immediately made me think of Bay Area polyamory.

Dec 3, 2023·edited Dec 4, 2023

That's already happened, and it made no waves. There are a number of sincere Catholic EAs, and the atheists are uncomfortable with them but accepting of their perspective; it hasn't moved a significant fraction of donations because most of the interventions that cost-effectively save unborn babies also save mothers and recently-born babies.

>I kind of wonder what would happen if all EA came with the failsafe of “if the answer I came up with is the answer that makes me most happy it’s wrong.” I feel like it’d be a better program.

I don't think it was phrased *exactly* like this, but my memory is that such an attitude was much more present in the early days of EA. Of course, that also led to a lot of burnout and they realized it was bad for recruitment and retention, so they reformulated. EA, as is the risk of any well-meaning ideology, was a victim of its own success.

Many rationalists are EAs. Many EAs are rationalists.

Far from all in both cases.

More importantly, the underlying moral philosophy of EA can be derived completely independent of the rationality community's canon. So the influence is there, but the overlap is not strictly necessary.

Aumann's agreement theorem is what you seem to be trying to describe and it's logically true, not something rationalists cooked up. https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem
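Roughly stated (my paraphrase, with the usual hedges): if two agents update from a common prior $P$ on private information $\mathcal{I}_1$ and $\mathcal{I}_2$, and their posterior probabilities for an event $A$ are common knowledge between them, then those posteriors must be equal:

$$P(A \mid \mathcal{I}_1) = P(A \mid \mathcal{I}_2)$$

The "common prior" assumption is doing the heavy lifting there, which is exactly where the reply below pushes back.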

Further, EA highly emphasizes "personal fit" in considering how to choose one's career or charity strategy. Self-sacrifice is not a sustainable strategy that maximizes the potential to have a positive impact.

"More importantly, the underlying moral philosophy of EA can be derived completely independent of the rationality community's canon. So the influence is there, but the overlap is not strictly necessary."

Well, of course. I do not speak of some sort of abstract, theoretical EA. I speak of the actual movement as it actually exists, and in that movement the rationalist upbringing has major influence, and some of the thought patterns of that movement are inhibiting the saner bits.

"Aumann's agreement theorem is what you seem to be trying to describe and it's logically true, not something rationalists cooked up. https://en.wikipedia.org/wiki/Aumann%27s_agreement_theorem"

Thank you, yes. It's that "common prior" where the whole thing starts to break down, because how could rational people not share the same priors? What, you don't share that prior? Well, rationalists X, Y, and Z all do, so you must be wrong here. (I'm summarizing a post from a number of people much more closely connected to the movement than I am.) EA is arguably hamstringing itself by not including enough people with different priors (thus the homing in on longtermism).

"Further, EA highly emphasizes "personal fit" in considering how to choose one's career or charity strategy. Self-sacrifice is not a sustainable strategy that maximizes the potential to have a positive impact."

Good! But I'm not sure that's apposite to anything I said.

Rational people could fail to share the same prior for any number of reasons. Experiences and brains differ.

EA (as distinct from, say, LessWrong) is hilariously open to criticism and tries so, so hard to take in new people and views. EA is so very agreeable and intentionally open (and rationalists so very disagreeable and much less willing to tolerate views that seem obviously wrong).

You wrote this: "I kind of wonder what would happen if all EA came with the failsafe of “if the answer I came up with is the answer that makes me most happy it’s wrong.” I feel like it’d be a better program."

Which I misread, because in the early days of EA, people purposely doing what didn't make them happy, and the self-sacrifice that entailed, were leading to a lot of burnout and other problems, so people came around to more sustainable ways of doing good.

So your idea is pretty ironic, not least because many AI safety people started out as huge advocates of AI progress. They don't want AI risk to be real.

“On this Giving Tuesday I’d like to explain to you why effective altruism is bad” and then reasonably backing it up makes this the epitome of a Freddie post

I agree EA has gotten weird, but, at its heart, isn't EA basically just "evidence-based philanthropy"? As in medicine, I agree that suggesting we use evidence to determine what works seems obvious, but, as in medicine, it is actually not that common.

As far back as the 90s, when I was doing Race for the Cure walks as a kid with my family, I was aware of the philanthropic report cards that graded Susan G Komen as an inefficient spender of donated funds, and scored other philanthropies similarly - isn't that evidence-based philanthropy? And if that's different from EA, what's the difference such that EA is a vast philosophical departure from prior charitable giving?

I guess the question is, evidence for what? Evidence that some giving is more effective than others is so general and self-evident as to make EA a meaningless distinction, and "evidence" that, say, donating to global antiviral research is inherently better than donating to the arts or even other medical research is so flimsy and subjective as to not be the basis for a coherent argument.

I don't think EA invented evaluating charitable giving, but it certainly championed it. My understanding is that they also are/were big on measuring actual outcomes, ideally through experimentation. Earlier "report cards" looked at process markers, like how much is spent on overhead, which aren't always good stand-ins for actual results (sometimes overhead is necessary).

As for "evidence for what," I think EA at least started out as agnostic on this. The idea wasn't exactly that saving human lives (or something) was the only reason for giving, but thst, if you wanted to save human lives, we should look into whether spending on bed nets vs spending on new wells accomplishes that, but if you want to advance animal rights neither will be a good cause for you and we have to look at something else.

None of this is to say that some of EA (particularly big names) haven't strayed from this mission and basically looked for excuses to talk about robots and rocket ships and other things they find fun.

I think that groups have been grading charities on efficient spending for a while (not that everyone pays attention to them), so that you can look up administrative and overhead costs for Susan G Komen and St. Jude's and the rest and try to get most of your money spent on charity and not administrative salaries. In theory, EA takes this a step further by grading charitable causes on utilitarian grounds, so that mosquito nets take precedence over mural painting or whatever. In practice, it seems that EA's utilitarianism has been corrupted by setting the human population in 30,000 AD to infinity, so that a study of existential risk that decreases the chance of nuclear war or runaway AI by 0.00000001% works out to be more moral than saving a bunch of starving kids somewhere.
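A minimal sketch of that arithmetic (every number is an illustrative assumption, not anyone's actual estimate):

```python
# Naive longtermist expected-value comparison; all figures here are
# made-up assumptions for the sake of the example.
future_population = 1e15     # stipulated size of humanity's future
risk_reduction = 1e-10       # a 0.00000001% cut in extinction risk

expected_future_lives = future_population * risk_reduction  # 100,000.0
lives_saved_feeding_kids = 1_000                            # concrete, present-day lives

# Once the future population is set high enough, the calculus always
# "prefers" the existential-risk study:
print(expected_future_lives > lives_saved_feeding_kids)  # True
```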

The thing is that “fraction of money spent on administrative overhead” is a really weak proxy for what makes a charity good. If one charity has 25% overhead but saves 1 life per $100 donated, that is a much better charity to donate to than one with 5% overhead that only saves 1 life per $500 donated. For the same reason it might be beneficial for a private firm to offer higher salaries to attract quality employees, a charity may be able to do more good by doing the same. And as soon as people start paying attention to the proxy, charities themselves can start *optimizing* for it, which likely leads to adverse outcomes.

As far as I know, it was EAs who popularized the (correct) criticism of “fraction spent on overhead” as the main way in which charities are evaluated, and drove a lot of focus on superior alternatives.
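A minimal sketch of that comparison, using the illustrative numbers from the comment above (the cost-per-life figures are hypothetical):

```python
# Overhead fraction vs. cost per life saved: the overhead "proxy"
# picks the worse charity here.
charities = {
    "low overhead":  {"overhead": 0.05, "cost_per_life": 500},
    "high overhead": {"overhead": 0.25, "cost_per_life": 100},
}

donation = 1_000
for name, c in charities.items():
    lives = donation / c["cost_per_life"]
    print(f"{name}: {c['overhead']:.0%} overhead, "
          f"{lives:.0f} lives saved per ${donation:,} donated")

# low overhead: 5% overhead, 2 lives saved per $1,000 donated
# high overhead: 25% overhead, 10 lives saved per $1,000 donated
```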

Yes. GiveWell completely changed the game here. Before that, Charity Navigator was really the one organization with public charity evaluations. And while these evaluations were useful for spotting charities that were essentially scams, they provided no insight into what charities were actually accomplishing. Even worse, they arguably created bad incentives for charities, since charities were penalized in the ratings for spending money on organizational infrastructure, which in some circumstances may have been entirely the right thing to do.

I would argue that EA goes much further with its analysis and is really able to pinpoint very effective and cost-effective causes for giving. Like, literally, $100 for mosquito nets will save 2.4 lives in West Africa (or something). There have been attempts to evaluate charities in the past, but the metrics were simpler, like what percentage of giving actually went to recipients.

Will changing the name of altruism make it more common or effective? If so, what does that say about us?

Yeah, Freddie seems to be deliberately overstating how much philanthropists have tracked whether the charities they give to actually do much good in the world. While I'm sure philanthropists in theory want to do good, they have historically not paid all that much attention to whether they actually did so, at least not in the way they meticulously account for how well their businesses operate. Much charitable giving is likely as much about making the philanthropist feel good about giving and/or about being seen as being charitable. To the extent that philanthropists are now checking on whether their giving does anything, it's at least partly because EA methods are more widespread now.

The evidence-based-philanthropy model is especially helpful for small-dollar donors. The large-dollar contributor could, at least in theory, have the connections to check up on a charity; the small-dollar donor rarely had any means of knowing whether the organizations they donated to were doing what they claimed to be doing. The emphasis on transparency and measurement enables that kind of insight where it didn't exist before. While transparency and measurement were not exclusively created by EA, EA certainly has pushed those ideas into the mainstream of public consciousness, which, especially in an era of small-dollar giving, seems to be a net benefit.

I do think that the methods of evidence-based philanthropy can be decoupled from the philosophy of EA, just like methods of statistical inference have been decoupled from the social Darwinian eugenics where they were initially developed.

But I think the comparison with evidence-based medicine can show us some potential pitfalls with these methods. For some ailments, there just isn't much evidence for what works and what doesn't, and EBM can lead physicians to overemphasize the things they can measure easily and ignore or underemphasize things they can't. I've even seen EBM methods promoted in public health situations to try to prevent environmental regulations from being enacted. For example, we may have data suggesting that more of contaminant A is linked to higher levels of disease X, but just not enough data to conclusively prove that link with a high degree of statistical power. A pro-business shill with a PhD may know that there isn't enough evidence yet to conclusively prove the link between A and X, and so invoke EBM to say that without enough evidence there should be no regulation on A, whereas other scientific methods would suggest more caution and study in response to a lack of evidence. A similar thing happens in evidence-based philanthropy: easily measured things get prioritized, and worthy but hard-to-measure things get pushed to the side. And malign forces can hijack the whole system for their insidious purposes.

Nov 30, 2023·edited Nov 30, 2023

>Much charitable giving is likely as much about making the philanthropist feel good about giving and/or about being seen as being charitable.

Don't forget replicating wealthy donors' ideologies and biases at societal scale and ensuring that they can control how social/cultural programs are carried out. If a donor's revealed preference through the decisions they make in their professional lives is to suppress employee wages, they're never going to allow the charitable projects they support to conflict with that preference in a way that might meaningfully reduce poverty.

I don’t think this necessarily follows. A business owner’s position could be something like “I will play the game that exists but also try to improve it.” Raising wages may make them less competitive relative to rivals, but alleviating poverty on a broad scale will not.

Dec 1, 2023·edited Dec 1, 2023

This. Most "foundations" are the uber wealthy using their resources to bypass democratic institutions and unilaterally bribe/coerce their social/political policies into effect. They even get a sweet tax writeoff on top of it!

More than that.

Lots and lots of charities are deliberately not based on evidence or indeed on doing much good. They are often appeals to "feel good factors" (help the poor pretty little girl) or debased (let's give more money to Yale! It's not like a $40B endowment is enough...)

EA was novel for arguing against those type of charities. Whether you consider that self evident, well... whatev'.

The people who give to Yale don't think they're wasting their money. They think they're ensuring the continued existence of an institution that has given both them and the wider world immense benefits.

Sure. And it's not 'wasted' in the sense that it gives them an ego kick. They get what they paid for. But this is definitely not efficient - Yale would do just as much good without their money. As I said, it has an endowment of $40B and has correctly been described as a hedge fund masquerading as a university (a play on GM being a bank masquerading as an auto manufacturer).

And that's the value of EA - pointing out that the above is incorrect, if your aim is to do the most good with your money. Something Freddie mistakenly considered so banal and well established that it's a truism.

They are also buying their child a place in that institution, and buying themselves a place in a network of the world elites. It’s probably a worthwhile investment for many.

Right. Investment, not charity.

EA is evidenced based in the same manner that Flat Earth is evidence based.

The "branding" or "cult-like" aspects of EA are probably necessary to get a lot of people to donate, in the same way that "branding" and "cult-like" aspects of Taylor Swift enjoyment gets more people to pay ungodly amounts for her concerts.

People don't generally like giving away money and getting nothing in return. Churches used to be a good answer to "what do I get?" and a big part of that was "community." If EA doesn't have a brand, community, leadership figures, etc., then it doesn't work for that. To some extent it's doing the work of branding causes that aren't well branded. Direct cash transfers to Rwandans don't give you the same in-group-identifying bumper sticker as NPR, NRA, Harvard, etc., which puts them at a massive disadvantage without the positive auspices of EA.

This all seems fine. If people are going to get involved in a community, centering it on giving money away seems about as good as you can get, even if you're not on board with all the philosophy.

author

This is a fair point. The question, though, is whether a moral system can be healthy when ultimately backstopped by the same kind of appeal that creates Taylor Swift fans.

Nov 28, 2023·edited Nov 28, 2023 · Liked by Freddie deBoer

That said, I think the question in turn is whether one can skim the applies-everywhere cream off an ethical community (which I think Tyler G usefully distinguishes from a philosophy) and retain the aspects that make it a movement.

Your question is useful [and] important to argue about, as is Tyler's. But I think that both religions and effective altruism have community-building aspects that you're naturally suspicious about and have reasonable arguments against. But creating sustained ethical communities with healthy moral systems is a hard and very much unsolved problem.

In short, I think a keeping-all-the-charitable-stuff-you-like approach risks leading to individually optimal choices that will sooner or later extinguish the sense of community that reinforces the ethics.

Marxist irony alert!

And above you criticized EAs for being too utilitarian and impartial!

Next you’re going to argue EAs don’t sufficiently consider Hayekian limits on helping others!

I think there's a lot in here that's accurate, but I do want to push back somewhat with an affirmative case for EA. I'm not really an EA, but I am the guy who says we should spend money on mosquito nets instead of public libraries (in fact, I'm THE GUY who said that in the comments section on the last EA post), and I'll have you know that I do NOT mutter about Roko's Basilisk or anything weird or longtermist.

You're right that to the extent EA is a philosophy, it's basically just utilitarianism, and I think utilitarianism is underrated. I'm writing this fast so admittedly this is a bit of a drive-by, but I think it's not a coincidence that Bentham was an abolitionist and advocate for women's rights, and Kant was a racist. That is, to the extent utilitarianism pushes people in a direction against the current climate, I think it tends to push people in the right direction.

As I indicated earlier, I'm much more interested in the mosquito nets than longtermism, so I'd be happy with a less weird EA that focuses more on the mundane. But the counterpoint to that is, if, for example, you are genuinely very concerned about AI alignment, you want to encourage more weirdos to get involved in studying AI safety, so you kind of want to be weird and quirky and draw in the right sort of people to get them on the project. To put it another way, my vision of EA is millions of people all tithing to Against Malaria Foundation or GiveWell without thinking too hard about what they're doing; another vision is having a few thousand people working on AI alignment rather than only a few hundred. I'm at least a little concerned about AI, so I can respect where the latter thing is coming from.

I'm with you on utilitarianism being underrated (well, maybe not underrated, since a lot of people like it; maybe unfairly maligned). It definitely breaks down at the edges (few things don't), but, as people say about democracy, it's the worst moral philosophy except for all the others. It isn't like deontology has a spotless record.

I tend to think a good way to approach things is utilitarianism (based on whatever version of "good" works for you, and we can disagree on that) with a healthy dose of humility. If utilitarianism is telling you something increases the good when it's clearly abhorrent (e.g., killing drifters for their organs), you should strongly consider, and maybe even assume, that you are somehow miscalculating utility, even if you can't quite spot the error. Basically, if your math is telling you that bumblebees can't fly, you need to recognize your math is bad, even if you can't figure out exactly why; but if your math generally works, there's no need to dump it altogether just because it can't explain bumblebees.

Agree 100% with both Alex and Ennui. Would add that utilitarianism should virtually always be the approach for politicians. That is really the message of Politics as a Vocation. But even Weber recognizes, in that amazing passage near the end, moments when something else is called for:

"Surely, politics is made with the head, but it is certainly not made with the head alone. In this the proponents of an ethic of ultimate ends are right. One cannot prescribe to anyone whether he should follow an ethic of absolute ends or an ethic of responsibility, or when the one and when the other. One can say only this much: If in these times, which, in your opinion, are not times of 'sterile' excitation­­ is not, after all, genuine passion, ­­if now suddenly the Weltanschauungs­ politicians crop up en masse and pass the watchword, 'The world is stupid and base, not I,' 'The responsibility for the consequences does not fall upon me but upon the others whom I serve and whose stupidity or baseness I shall eradicate,' then I declare frankly that I would first inquire into the degree of inner poise backing this ethic of ultimate ends. I am under the impression that in nine out of ten cases I deal with windbags who do not fully realize what they take upon themselves but who intoxicate themselves with romantic sensations. From a human point of view this is not very interesting to me, nor does it move me profoundly.

However, it is immensely moving when a mature man -- no matter whether old or young in years -- is aware of a responsibility for the consequences of his conduct and really feels such responsibility with heart and soul. He then acts by following an ethic of responsibility and somewhere he reaches the point where he says: 'Here I stand; I can do no other.' That is something genuinely human and moving. And every one of us who is not spiritually dead must realize the possibility of finding himself at some time in that position. In so far as this is true, an ethic of ultimate ends and an ethic of responsibility are not absolute contrasts but rather supplements, which only in unison constitute a genuine man -- a man who can have the 'calling for politics.'"

author

But this is subject to the exact complaint I made in the piece. If the message is "use utilitarianism, but make exceptions when you have to," the immediate question is a) when do I have to and b) exceptions according to which moral system? Utilitarians do this CONSTANTLY - "our system is simple and intuitive and results in the most moral outcomes, except when it isn't and doesn't." A moral acrostic that you have to constantly make exceptions for is not a moral system!

I think this is an often-made and rather incorrect point about utilitarianism, and somewhat about consequentialism more generally. The whole practice of rule-based heuristics that, e.g., virtue ethics relies on is almost explicitly for the purpose of simplistic decision-making. Moving to a moral system that cares about the consequences is in essence the same as saying "ok, let's stop eyeballing these things and actually do the math." I don't think anyone would say that that's easier or simpler, even if it is more intuitive and cares about fewer things.

author

But the measure of a moral system lies in its ability to guide the actual decisions of actually-existing regular people.

I think that's only a subset of the things a moral system helps with. Utilitarianism in particular is most useful in its ability to guide the decisions of non-regular people, generally those in a policy-making capacity where the individual human they affect is less meaningful, they have fewer intuitions, and they have the ability to do actual math rather than guess something is probably better. I don't think humanity is best served by everyone having the same moral system, because different moral systems perform better for different people and different types of problems. Most people are not benefited by utilitarianism in the same way that most people are not benefited by math - it's just a situation where most people are better served learning a different tool. I know it's not the populist answer, but that's how I tend to view it.

That's why Rawls started invoking "reflective equilibrium" in his deontological-ish theory which is essentially just checking the outcome of your theory against your common sense (pardon me, your Bayesian priors). There is no theory that will universally crank out the correct moral decision in every situation. You have to look at it from various perspectives and ask/read thoughtful people and take your best shot. It isn't perfect. But it's better than what most people actually do.

Here we are, thinking we can invent a moral system with axioms that work like math.

It's a strange instinct to want a moral system that is consistent at every edge case possible.

I really don't want to sound trite, but I think we are all looking to integrate a moral system with some degree of social intelligence and empathy, which, in the olden days, we probably called wisdom.

Utilitarianism is a model. Like all models, it goes wonky sometimes. Newtonian physics is almost always useful in engineering, but if you are making transistors or GPS satellites, you have to start applying quantum physics and relativity, and if you are trying to explain black holes or the big bang, even those fail. That doesn't mean you should stop using Newtonian physics to build bridges.

As for "how do you know when it's failing," the answer is when it becomes clear yhst the answer it is giving you isn't actually increasing the "good." I think this often happens when you've failed to define the "good."

In a lot of edge cases utilitarianism tells you to do something that advances the "good" as you initially defined it: killing an innocent for multiple life-saving organ transplants saves lives on net, so if that is your chosen metric, utilitarianism tells you to do it. But what it really does is reveal that saving net lives is not really the metric you want.

An important thing about utilitarianism is that it's value-agnostic. It doesn't define what the "good" is; it just says that once you identify the "good," you should make choices based on what advances that "good" rather than being focused on specific inviolable rules. To me, it generally works better than less flexible deontological systems, which wind up with a lot more edge cases that go wrong ("don't lie" - OK, what if Nazis are asking about the Jews in your basement?). I'm open to a better system, but people tend not to propose one; they just complain about flaws in utilitarianism.

>An important thing about utilitarianism is that it's value-agnostic. It doesn't define what the "good" is, it just says that once identify the "good" you should make choices based on what advances that "good" rather than being focused on specific inviolable rules.

I don't remember where I got it, but my preferred analogy continues to be: utilitarianism is a *yardstick*, and trying to use a yardstick as a compass doesn't work. Using the wrong tool for the job (among other issues) is part of the EA problem (or rather the problem many people have with EA; EAs are making each other rich and buying castles, so they don't see a problem).

The best defense I can see for utilitarianism -- and it's a poor one, but it's often implicit in the defenses offered that aren't full Peter Singer -- is basically, "it's a tool, not a moral system. Use it within a different moral system when your moral sense or virtues or whatever offer no clear guidance and you have to weigh your options." Plus, I suppose, maybe sometimes use it to see if your primary moral system is misaligned: "wait, if I follow this belief to its logical conclusion, a whole lot of people will get hurt. Maybe I should rethink this..."

Which is not a roaring defense, I know -- in fact, one could even say it's trivial to say "if it's not clear which option is more moral, or at least less immoral, sit down and weigh your options". But I'm not a utilitarian, so that's not really my problem.

I think you are to some extent underrating utilitarianism/consequentialism in general, but I agree the form in which most people use it is extremely ridiculous. Its big problem is that it almost never deals with the self as a potentially irrational actor. Your example with your own child vs. a stranger is a great one, because once you start the sentence with 'given that I am a human being that has strong kin-protection instincts' the moral calculus becomes different.

So you probably wind up with something like rules-based utilitarianism, which in the end becomes fairly similar to virtue ethics. The idea would be that you try to subject the rules/virtues themselves to consequentialist thinking, trying to tease out whether, as rules, they make the world a better place or not, always being aware of what irrational humans (including yourself) might do with them. Universal human rights are a great example of such a rule. I still think in the absence of a deity ethics HAS to come from something like consequentialism because otherwise it's just evolved instinct, but obviously these ridiculous overly-certain equations and discount rates and future quality lives don't really work that way.

I agree with you, and I think I disagree with Gordon below. If you use utilitarianism to get to an answer that you find morally abhorrent, there are two possibilities: that your math is wrong (i.e., you're using inputs for "utility" that do not actually correspond with what actually creates utility for you as a person), or that you already have a strong idea of what is "right" based on another moral system entirely.

As a side note, it's always interesting to me when the primary argument against utilitarianism is that it sometimes produces answers that go against our moral intuitions. Which, to Freddie's point, invites the natural response: why are you doing a utilitarian calculation in the first place, then? If you believe that we can intuit what is right and wrong already, based on our gut feelings, what use is an external moral framework at all? It seems obviously true that, for any moral criterion to actually be useful as such, it would *necessarily* sometimes tell us that the moral answer is something other than what we already assumed it was. Christian morality embraces this idea: "Of course these rules go against your moral intuitions. You're a bag of sin! You shouldn't trust your moral intuitions at all." Unfortunately, non-supernatural moral frameworks lack this certitude.

I don't think it's accurate to say that Christian morality says "you shouldn't trust your moral intuitions at all." Of course, there are many different strands of Christian morality, and there are some which emphasize man's depravity to such an extent that morality does become about following a set of incomprehensible rules. But that's a relatively small subset of Christianity. The broad Christian tradition has devoted a great deal of thought to the importance of the individual conscience, which has been placed in us by God and does reflect, if incompletely, the moral fabric of the universe He has created. When we sin, it's less that our intuitions led us astray and more that we deliberately ignore, stifle, and rationalize away those intuitions for the sake of competing desires. This is historically the more dominant view within Christianity, which is why classical Christian theologians such as Augustine stressed the primacy of reason, which also contains our moral intuitions, over our passions.

But public libraries help people better understand why it’s important to donate money to improve public health in developing countries.

If you're dead from malaria, you can't go to a public library.

But libraries (among other public programs like education and the arts) help me develop the knowledge, empathy, and wisdom to appreciate and respond to the moral imperative to support anti-malaria efforts.

Military expenditure dwarfs library expenditure. Maybe go after that first since it's premised on killing people?

But public libraries give people the tools for lots more than mosquito nets. That's just not a good trade-off. How about distributing mosquito nets at public libraries? Also, in summer, when low-income kids are not in school for free lunch programs, libraries are distribution points. Well, don't get me started...

The sort of kids who live in areas with public libraries are not going to die from malaria for the want of a $2 mosquito net. If you're thinking about distribution points for free lunch programs, you're not really thinking about the level of poverty at play here.

"I think it's not a coincidence that Bentham was an abolitionist and advocate for women's rights, and Kant was a racist"

Yes, and Kant thought animals matter only insofar as our actions toward them impact humans, e.g. their owners; while the modern animal rights/welfare movement has some of its roots in utilitarians like Bentham.

Some of Singer's utilitarian conclusions are unseemly, I agree, but a far more obvious reductio can be performed against most of his ethicist peers, who are wholly silent on the animal question. Kind of like all those bioethicists on research hospital boards who show no interest in the animals being medically tortured down the hall. Singer looks pretty good for at least shining a light on the topic.

Nov 28, 2023·edited Nov 28, 2023

One of the things I struggle with is keeping individual acts of altruism separate from decisions made at a policy level. I know that giving the schizophrenic guy who lives in a flophouse around the corner from me a buck or two when I see him is not the most effective help for his situation. But it feels good for him and for me in the immediate moment. However, were I a policymaker, I think it's hard to approach ameliorating the issues he and the other men he lives with face without getting into some kind of utilitarian calculus. Where those lines are drawn is, I guess, not always clear to me.

The "Very Bad Wizards" podcast had some pretty cool discussions around this which I really liked including their critique of Utilitarianism. (https://verybadwizards.fireside.fm/135)

Thank you for this - I have been involved in the nonprofit world for most of my [approaching] 30-year career. I get so tired of people who bemoan the large number of nonprofits, as though it's wasteful for people to spend their money on tiny ones instead of pooling their money into bigger - presumptively more effective - ones. The bottom line is that people give to charities for a lot of different reasons that are usually personal to them [e.g., their relative died of a disease] and often local. You're not going to convince them to just change their donations to be more effective. Even the rich people who are touting effective altruism are getting something out of it - see SBF, Elon Musk, and many others who make national headlines for their giving. If my friends want to donate to a local charity that takes veterans with PTSD fly-fishing because 1) they want to honor military service and take care of veterans; and 2) they grew up in Montana and want to preserve fly-fishing as a local tradition, or maybe love to do it themselves, then how is that a bad thing? How is that not effective, especially if there are 40,000 similarly small organizations that help veterans? That's an actual figure, by the way. And I for one love the fact that there are enough people out there who honor our country and military service to create 40,000 ways to help veterans and families who may be struggling.

And the communal aspect of giving to something personally meaningful is important. Our local non-profits engage many students (in a university town), which is likely something they'll carry with them through life.

Nov 28, 2023·edited Nov 28, 2023

People act in what they perceive to be their own best interest. Mother Teresa enjoyed the endorphin hit from her good works. Firemen running into burning buildings do so because they dread returning to the firehouse knowing that they could have changed the outcome. They also may dread criticism and enjoy that Mother Teresa heroism endorphin hit, too. Those of us less selfless-appearing are all driven the same way. Once that inescapable reality is internalized, life and the behavior of others become clearer. All that remains to analyze is the extent to which others seek to influence or deceive us about any unspoken goals or hidden agendas behind the actions they take, which, to restate, represent at the moment what they believe to be in their own best interest.

You are engaging in Motte and Bailey reasoning. Your Motte is that everyone technically must want to do the things they choose to do on some level, otherwise they wouldn't do them. Your Bailey is that everyone is selfish, using the vernacular definition of that term. You are equivocating between two different definitions of "their own best interest."

If you expand the definition of "own best interest" so wide that it just means "people do what they want to" then it becomes a meaningless tautology. Everyone has a vernacular definition of "self interest" that does not encompass literally everything a person desires to do, it only encompasses the things we desire to do that are related to our own personal happiness and fulfillment. A firefighter running into a burning building to save people is acting against their own self interest and for the interests of others. The fact that they want to do that, otherwise they wouldn't, is irrelevant, because our self interest is not the same as everything we want to do.

Nov 28, 2023·edited Nov 28, 2023

The whole EA thing is massively disappointing because there really are huge inefficiencies in the way charitable aid is allocated; I work in that sector and am always looking for ways to do it better. I share the conclusion that public health in the developing world is one of the most important things you can work on, and I work on that. At some point, it felt like the EA movement could be a good reaction to the inefficiencies of huge international NGOs, one that could steer resources in a better direction. Alas, not once when I've engaged with Effective Altruism as it exists has it given me any insights into how to do my work better, beyond the banal truths you cite – although you'd be shocked to see how hard it can be to implement those banal truths in the aid industry.

EA is ultimately just the prosperity gospel for tech bros, a convenient excuse to feel moral while making lots of money and doing the kind of work you want to do anyway. That's why it's so focused on longtermism and AI – that's what's cool, that's what's lucrative, science fiction is a favourite genre so thinking about that is fun. They get to be rich AI dudes while maintaining smugness over their peers. The story of someone insisting 'I can't do any good without power' and then being corrupted by the pursuit of power is as old as time. And a lot of the flaws in EA (and utilitarianism) stem from adherents' total inability to see _themselves_ as merely human as well. It's a real tragedy, and ridiculous to boot.

GiveWell and GiveDirectly are both useful guides to channeling money though.

Reminds me of carbon offsets.

Silicon Valley, reinventing the wheel for half a century.

Good article. This somewhat clarifies a question I had, given your previous article in which you said the problem with EA is that so few people really want to be charitable and the problem with the Rationalism movement is that so few people want to be rational.

- You have an entire chapter in How Elites Ate The Social Justice Movement devoted to why nonprofits are ineffective altruism. Arguably the whole book is how altruism becomes ineffective. Clearly that's real.

- Your book The Cult Of Smart details how the education system doesn't educate. (IIRC, you may have also commented about how the health care system is oriented toward things other than providing health care.) Clearly that's real too.

- You've complained that your critics will say "but Freddie, nobody's saying that, and if they are, they're unimportant". Now you're saying nobody's saying altruism should be ineffective and if they are, they're unimportant.

It now appears your critique is actually not that nobody is against doing altruism ineffectively, but that nobody's explicitly arguing that we shouldn't care about altruism. (They just act like they think that?)

Perhaps more importantly (and in my view more effectively), you point out the "sage on the stage" celebrity social group, the bait-and-switch, and the tendency to get away from concrete material issues in front of our noses and into abstractions.

author

I'm saying that a) saying that charitable giving and development should be done effectively rather than ineffectively is not anything remotely unique to EA, and b) what is unique to EA is this utterly batshit set of obsessions that have next to nothing to do with a), and I think we should just jettison that whole social culture that produces all of the bizarre excess and accept that the simple desire to do charity more effectively doesn't require all of this baggage. And the deeper point is that a lot of people only get onboard BECAUSE of the baggage, because that's the fun part. But doing good isn't about having fun.

All true and important. Thanks for clarifying and sorry the clarification was necessary.

>a) saying that charitable giving and development should be done effectively rather than ineffectively is not anything remotely unique to EA

That may be so, but there's a big difference between "everybody in the world already agrees that charity should be done effectively" and "some people who are not EAs agree that charity should be done effectively". You argued for the former in the article, not the latter.

I know a lot of EA people and I think they're generally a good bunch, but I think they suffer from a sort of anti-humanism as described by Matthew Crawford - they are aware of the flaws in human cognition, but instead of saying "huh maybe those are a part of being human, and we can work together to overcome them" the takeaway is that we need a super- (or non-)human class of reasoners to make plans without the filthy, matter-bound constraints of human existence getting in the way.

But we are humans, not computers - sometimes our instincts are right, sometimes a disgust response isn't something to be overcome but a valuable signal about what is right and wrong, sometimes we think differently about those close to us than those we don't really know.

Acting like there will be a human world where those facets of humanity will be winnowed away is ahistorical and misunderstands us hairless apes. I think this is why so many are into transhumanism - they see existence as a curse of suffering to be overcome with technology, not a gift of limitless value to be appreciated, puzzled over, laughed at, and enjoyed.

I think utilitarianism can bring some great insights, but so often the EA "Fermi estimates" are just motivated wish fulfilment that gives an answer in the ballpark they want. Utilitarianism is an interesting and useful lens through which to explore decision-making, but if it's not tempered by some kind of virtue ethics it's a recipe for big, bad ideas.

The refusal to understand that I and my kin are rightfully more important to me than a stranger is, is so fucking stupid I really can't get past it.

And even more stupid is the widespread attitude that those who haven't evolved into galaxy-brained bodhisattvahood are moral relics who have nothing interesting to say about the modern world.

EAs understand this.

Even EAs are partial to themselves and their kin. The point is if you’re going to donate and you want to do the most good, maybe figure out how to do that instead of merely looking at what you already know.

The most good I can do is to help people close to me, where I have local knowledge and my efforts are more likely to be effective and efficient, not wasted on layers of middlemen.

What’s crazy is that EAs have considered that and come to a different conclusion. If everyone around you already leads a pretty good life but people a little further away are in desperate straits, maybe the math changes. What is “close to me” in an age of instantaneous communication and international flight? How do you know where to draw the line for “local”?

The relative value of a dollar changes a lot around the world but a human life is a human life, if we are being impartial. EAs tend to be pretty econ-pilled and aware of local knowledge concerns and bureaucratic inefficiency risks and strive to find areas that avoid or overcome these types of issues.

Overall, it’s just super ironic for this line of argument to be made in support of a Marxist criticizing EA.

Nov 28, 2023·edited Nov 28, 2023

You obviously didn't read my first comment if you think that I believe a life is a life. I am not impartial. I do not think it is valuable to be impartial in this domain. There are no utils to be weighed and compared. The people close to me might be living decently, but throw a rock in any local community and you will find people who are not, and you are better able to help those people than you are someone on a different continent; your effort will strengthen the community you live in, in a way which cannot be replicated by an NGO.

It doesn't matter that Freddie is a Marxist, and my critique does not rest on his in any way. His critique is obvious to people of many stripes. Randolph Carter and I are about as hostile to Marxism as it gets, frankly.

Nov 29, 2023·edited Nov 29, 2023

I’ll note you addressed none of my questions about how you decide where to draw your circle of moral concern.

EAs do not argue anyone should be impartial in all areas of their life all the time.

You say “in a way which cannot be replicated by an NGO” but you have no evidence for this. If I save a stranger from starving this is a good thing near or far. If I can use $1 to save 10 from starving further away so much the better most would say.

Perhaps your personal moral theory places the value of lives outside say a 10-mile radius at zero and so increased purchasing power to save more lives is irrelevant, but most people don’t hold such views.

It doesn’t matter that Freddie is a Marxist in me debating you directly, but it is pretty fucking ironic for Freddie to make arguments that seem to contradict the whole “From each according to his ability, to each according to his needs” line of thinking as context for why we are here at all.

Effective altruists always make me think of a writing class I taught many years ago. We were discussing whether we give money to panhandlers, and every last student said they never did, because we should give our money to food pantries instead. I asked the students if any of them actually had ever given money to a food pantry, and--you guessed it--every single one said no.

My problem with effective altruism is that in order for it to actually work in the real world--in order for effective altruists to ignore the promptings of conscience and donate to strangers instead--its practitioners must be perfectly rational. They must be impervious to such base emotions as greed, pride, and plain old obliviousness. Most effective altruists purport to be rationalists, but if you've spent much time in rationalist communities online (I have) you will rapidly discover that the last thing they are is rational. (As just one example, a commenter on the Astral Codex Ten blog once said that to maximize human happiness, we should force all girls and women of childbearing age to give birth once a year until either they die or enter menopause: More people means more happiness, right? I wish I could say that this guy was joking, but he was serious about the merits of this plan.)

We have seen where this irrationality leads--to Sam Bankman-Fried's thievery, to other colossal wastes of money, to boondoggles such as cruelty-free insect farms, to building longtermism castles in the sky, to--as Freddie points out--buying actual castles. (I get that it's more cost-effective for effective altruists to own a building instead of renting, but have these people never heard of FaceTime and Zoom? Why not hold their meetings online so they can donate more money?). I also suspect that some effective altruists who earn-to-give wind up succumbing to the lure of wealth and keep the money too.

The truth is that our emotions are a better guide to being charitable than pure reason is. The vast majority of us are more easily moved to generosity by the plight of the person in our own neighborhood than we are by abstract principles. Or, as Tolstoy once said, "The most important time is now. The most important person is the one standing right in front of us. And the most important thing to do is to do good for that person. That is why we are here."

You’re making up a standard of “EA can only work if EAs are perfectly rational.” You’re setting up an isolated demand for rigor when you say things like “they could just use virtual meetings instead of owning physical places,” as if the same situation doesn’t apply to all orgs and as if trade-offs don’t exist.

Marginal improvement is possible. EA wants to focus on things that are impactful, tractable, and neglected and use rigorous measurement to do evaluations.

Going off pure emotion is rarely the right answer in any situation.

I think perfect rationality is a total disaster because it is perfectly able to do horrible things in pursuit of better things. Doing horrible things in pursuit of better things is frequently immoral. There is a reason the Hippocratic oath is "first do no harm" and not "always choose the result that creates the most good."

After reading a New Yorker piece about SBF and his parents I couldn't help but conclude that these EA types don't actually care about people as individuals. Sure, they care about People (and naturally see themselves as saviors of the People), but normal human emotions of affection, interest, involvement with other humans seem to be missing entirely. There's no attempt at genuine connection, and that is probably the most damning and dangerous thing about EA.

I think there is high autistic representation in EA, which could explain a lot of this - as well as the strange polyamory/constant-communication-with-a-pod-of-people culture that has grown up with EA. I find that a lot of the polyamory talk about how you should have multiple partners to "fulfill your needs because one person can't be everything" speaks directly to this - yes, no individual can do literally everything, but having to make choices and commitments because we have limited time on Earth is just a fundamental part of being human. The expectation that constant hedonic stimulation will make people "happy" is tied up in it too, when actual happiness seems to come out of webs of obligation and the feeling of being needed. It's like the whole setup demands that everyone be maximally selfish while expecting others not to have a bad reaction to being told they're not necessary, but merely a part of the hedonic tapestry of their partners.

The autistic aspect makes sense, but overall the EA philosophy you describe - selfish, hedonistic, etc - strikes me as an incredibly immature and limited way to live. And that's fine in a sense - people can live as they like - but it would be nice to see a larger society-wide pushback on not just on the technical shortcomings of EA but on the underlying humanistic deficits the movement espouses. Maybe there is and I'm not just seeing it.

How is polyamory talk about needing multiple partners to fulfill your needs any different from the way typical people talk about needing lots of different friends? Everyone seems to agree that having a lot of friends is good and having just one close friend is unhealthy, it seems like polyamory is just extending it to romantic partners. If you had a friend who was offended that you had other friends because it made them feel less "needed," we would correctly view their behavior as horrible and toxic. If a parent divorced their spouse and fought for sole custody of their children because they were concerned that their children did not "need" them as much since they had a second parent, we would recognize that behavior as psychotic and narcissistic.

I am not in a poly relationship, but my wife has a bad reaction to being told that she is necessary. When I have wondered if my life might fall apart without her, she has been horrified, she has not found it to be sweet or romantic. She wants to be part of my life, but also wants me to be able to stand on my own. I have to admit that until I read and started composing this reply to your post, I did not appreciate what a truly special and emotionally mature woman that makes her.

This feeling of being needed that you are talking about seems like one of those things that sounds good in a steamy romance novel, but is bad in real life. Maybe it's good in small doses, but I suspect that poly people are able to obtain it as well as mono people. You also seem to think poly relationships are purely sexual and hedonistic in nature, doesn't it strike you as possible that people in such relationships love and depend on each other? I have definitely heard of asexual people who are in poly relationships.

The sex is the defining part of poly relationships, otherwise it's just "friendship." And I don't mean being "needed" in terms of "oh no the world will crumble without you," I mean it in terms of "you aren't a replaceable interchangeable cog."

A secret known by one is a secret. A secret shared by two is no secret. A secret shared by three is told to the world.

Perhaps a better word than "secret", is "private". Over many years, through both closed and open relationships, I've discovered that I'm largely in relationships for intimacy, and a critical component of that intimacy is privacy.

A shared private life is troublesome to maintain with multiple partners. So I have one relationship (a marriage, now) where intimacy and privacy are as good as they can get for me. And then many other relationships with my friends, where those things aren't such a big deal. Or, at least, the secrets are different.

So that's a distinction one might draw between friends and romantic partners, and I hope it helps answer your question.

I've mixed noun and adjective above, opening the door for a sublime pun about sharing "privates", but I won't try for it.


The problem is that we live in a world where genuine connection happens not to be the most effective way to help people, because it is financially possible to help more people than it is physically possible to meet face to face. Sending money to get malaria prevention or HIV prevention treatment to people in Africa will ultimately save more Africans than personally going to Africa and helping a few face to face ever could.

To me, talking about genuine connection instead of about saving lives seems profoundly narcissistic. It makes charity about you instead of about others.


Thanks for the reply. I disagree, mostly because I think that connection between people is the best way to foster a change in perspective - a broadening of perspective, perhaps. Falling in love and dedicating yourself to another, raising kids, being a loyal friend and family member for years or decades - personally, all these have inspired me to do good in the sense of being a more generous and charitable person. Giving money is easy - you donate and then you move on with your life. Not to say there's no value in mosquito net charities - of course there is! - but as Freddie wrote, it's easy for a certain type of person to take those types of moral calculations to an extreme and become so focused on expending money efficiently that they forget about the actual people involved.


There's actually a famous EA essay that discusses this, called "Purchase Fuzzies and Utilons Separately." The author basically agrees with you that many people need to connect with other people face to face in order to remain motivated to do good. Therefore, he recommends EAs engage in both targeted giving and face-to-face charity work.

Nov 28, 2023·edited Nov 28, 2023

Freddie wrote: "Public commenters like Scott Alexander and Matt Yglesias have complained that the Bankman-Fried affair has resulted in an overly harsh backlash to EA. The question I would ask of them is, why not just keep the actual charitable stuff you like, and jettison all the nonsense that took effective altruism in that regrettable direction?"

I don't read Scott Alexander closely enough to talk about him, but Yglesias pretty much does exactly that. He donates a percentage of his revenue to GiveWell and talks about them and GiveDirectly all the time. He rarely talks about the other stuff and has been somewhat skeptical when he does.

author

Yes, but he is very Yglesias-y about it all - because people turned against the bad stuff in EA, particularly the lefties with whom he is in a mutually parasitic relationship, his instinct is to defend capital-letter Effective Altruism and not just lowercase effective altruism. And I think he could just say "I care about smart giving" and drop all the histrionics.


Why don't you say "I care about equality," abandon capital-S Socialism, and drop all the histrionics?

Surely whatever complaints you make about EA can be made about socialists. They have really weird beliefs if you dig too deep? Check - "the dialectic," communes, the state withering away into nothing after the Revolution, Posadism, Fourier insisting that once the world became socialist the oceans would turn into lemonade.

They have a history of terrible people associating with them and causing disasters in their name? Check - Mao, Stalin, Pol Pot, etc.

If you file off all the serial numbers and take a super-zoomed-out view, it's just common sense? Check - probably 90%+ of people agree everyone should be equal and free and it's kind of weird that Jeff Bezos has $100 billion while poor people are starving.

People accuse it of being a cult? Check - sometimes it's literal cults (like Bob Avakian's or Fred Newman's), and even normal socialists have their own weird jargon and preferred media and clubs and groups.

So how about all world socialist parties disband, everyone stops identifying as a socialist, and everyone just agrees to support equality however much they already support it?

If you have objections to this proposal - like that some people care about these things more than others, that it would destroy your ability to organize and accomplish goals, or that you actually like some of the things everyone else considers crazy - then those are my objections to your similar proposal for EA.

Comment deleted

I think technically that’s just normal capitalism


Isn't that just Social Security?

Comment deleted

Nobody has ever provided hard evidence that workers will exist in the future, so positing that they will is just a bizarre sci-fi scenario.


Agreed.

Also, to make the argument that EA orgs should disband since their ideas are already "commonplace," FdB would have to believe that there's no purpose in having coordinating entities to promote and guide real-world applications of socialist theory either. It's probably obvious to most people that absent those political entities, the "ideas" wouldn't go anywhere.

In addition, I really don't understand how this crowd dismisses AI risk so easily. Many top researchers in the field (who I assume aren't commenting here) are VERY convinced that superintelligence could be extremely detrimental to humanity. Even if this isn't a guarantee, EA's ahead-of-the-curve push to highlight these risks was prescient.


>They have really weird beliefs if you dig too deep?

Come on, there was a multi-million dollar promotion for MacAskill's WWOTF, and *your own review* sounded like you were halfway turning away from (that shell of) EA.

Disagree all you want about that not being *central* EA, so we can keep playing the shell game of what counts and what stage of the tower of assumptions we're on, but the ideas in a prominent book by the literal founder, a book that got several months of major media push, are not "digging too deep." They're right there, front and center, from the Big Man's mouth.


I don't know what your complaint is; I just said they had really weird beliefs if you dig too deep. That sentence wasn't intended as a denial - it was a tu quoque.


My complaint was that the really weird beliefs aren't deep; if one disagrees with MacAskill's longtermism, the weird stuff is front and center.

If someone brings up, say, Brian Tomasik's blog on suffering electrons, that's digging (kinda) deep.


I think lots of entry-level socialist ideas are really weird too.


We missed you being based, Scott.


Best comment on this thread.


Yglesias considers AI risk to be a real thing and has said that he doesn't write about it because he has nothing he considers worth saying.


I agree entirely with the Utilitarian Trojan Horse critique. The EA/internet rationalist crowd is 100% committed to moral realism but doesn't have any argument for it. And of course, moral realism is completely at odds with the rest of their views. When they describe EA as:

> a research field, which aims to identify the world’s most pressing problems

they are starting with the assumptions “there is no fact-value distinction” and “we can have knowledge of values.”

It’s not so much a philosophy as a failure to understand what philosophy is.

Expand full comment