284 Comments
author

I have to tell you that I find this dynamic frustrating - every time I write about EA, there are a lot of comments of the type "oh, just ignore the weirdos." But you go to EA spaces and it's all weirdos! They are the movement! SBF became a god figure among them for a reason! They're the ones who are going to steer the ship into the future, and they're the ones who are clamoring to sideline poverty and need now in favor of extinction risk or whatever in the future, which I find a repugnant approach to philanthropy. You can't ignore the weirdos. And to the extent that you can, that's what I'm advocating for here - just break up with "Effective Altruism" as an entity and push for the smaller-bore efficiency stuff within existing charitable structures and organizations. I don't know why you'd bother trying to convert the people who are obsessed with how much pain tuna feel when they're caught in a net. They got into it for the weird shit. Just preach efficiency and evidence-based charity in the broader world.


I agree EA has gotten weird, but, at its heart, isn't EA basically just "evidence-based philanthropy"? Like in medicine, suggesting that we use evidence to determine what works seems obvious, but, like in medicine, it is actually not that common.


I think the basic principles of EA (warning: me saying things in response to this essay, not some well-known movement constitution) are:

1. Donate some fixed and considered amount of your income (traditionally 10%) to charity, or get a job in a charitable field.

2. Think really hard about what charities are most important, using something like consequentialist reasoning (where eg donating to a fancy college endowment seems less good than saving the lives of starving children). Preferably do some math to make sure you're not just consulting your prejudices. Check with other people to see if your assessments agree.

3. ACTUALLY DO THESE THINGS! DON'T JUST WRITE ESSAYS SAYING THEY'RE "OBVIOUS" BUT THEN NOT DO THEM!

I don't think any of these things are obvious. I think fewer than 5% of people do (1), fewer than 5% of people who do (1) also do (2), and fewer than 5% of people who do (1) and (2) also do (3). (Compounded, that's fewer than 0.05^3 of the population, i.e. fewer than about one person in eight thousand.)

I agree that (2) can be interpreted in more or less arcane ways (is the most effective thing donating to starving children, or to a 0.0001% chance of future utopia?) and I think anyone who thinks about the question honestly and engages with the other people thinking about the question is part of the movement (I don't think most people who donate to eg their favorite US political candidate or cause meet this bar).
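
(To make the arcane version concrete, here is a toy expected-value comparison in Python; every number below is invented for illustration, not a real estimate:)

```python
# Toy expected-value comparison; every number here is hypothetical.
certain_lives_saved = 1_000      # e.g. a sure-thing global health donation
utopia_probability = 0.000001    # the 0.0001% long shot mentioned above
utopia_lives = 10**12            # an assumed astronomically large payoff

print(certain_lives_saved)                # 1000
print(utopia_probability * utopia_lives)  # ~1,000,000: naive EV favors the long shot
```

This is why the question stays contested: multiply a small enough probability by a large enough payoff and the arithmetic alone will always favor the speculative bet.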

If the movement has clear principles, and most people outside the movement don't follow these principles, and most people in the movement do, then I'm not sure what's left of your complaint.

I've written more about this kind of thing at https://www.astralcodexten.com/p/effective-altruism-as-a-tower-of, and will have a slightly related post up today.


I think there's a lot in here that's accurate, but I do want to push back somewhat with an affirmative case for EA. I'm not really an EA, but I am the guy who says we should spend money on mosquito nets instead of public libraries (in fact, I'm THE GUY who said that in the comments section on the last EA post), and I'll have you know that I do NOT mutter about Roko's Basilisk or anything weird or longtermist.

You're right that to the extent EA is a philosophy, it's basically just utilitarianism, and I think utilitarianism is underrated. I'm writing this fast so admittedly this is a bit of a drive-by, but I think it's not a coincidence that Bentham was an abolitionist and advocate for women's rights, and Kant was a racist. That is, to the extent utilitarianism pushes people in a direction against the current climate, I think it tends to push people in the right direction.

As I indicated earlier, I'm much more interested in the mosquito nets than longtermism, so I'd be happy with a less weird EA that focuses more on the mundane. But the counterpoint is that if, for example, you are genuinely very concerned about AI alignment, you want to encourage more weirdos to get involved in studying AI safety, so you kind of want to be weird and quirky to draw in the right sort of people and get them on the project. To put it another way, my vision of EA is millions of people all tithing to the Against Malaria Foundation or GiveWell without thinking too hard about what they're doing; another vision is having a few thousand people working on AI alignment rather than only a few hundred. I'm at least a little concerned about AI, so I can respect where the latter is coming from.


Effective altruists always make me think of a writing class I taught many years ago. We were discussing whether we give money to panhandlers, and every last student said they never did, because we should give our money to food pantries instead. I asked the students if any of them actually had ever given money to a food pantry, and--you guessed it--every single one said no.

My problem with effective altruism is that in order for it to actually work in the real world--in order for effective altruists to ignore the promptings of conscience and donate to strangers instead--its practitioners must be perfectly rational. They must be impervious to such base impulses as greed, pride, and plain old obliviousness. Most effective altruists purport to be rationalists, but if you've spent much time in rationalist communities online (I have), you will rapidly discover that the last thing they are is rational. (As just one example, a commenter on the Astral Codex Ten blog once said that to maximize human happiness, we should force all girls and women of childbearing age to give birth once a year until they either die or enter menopause: More people means more happiness, right? I wish I could say that this guy was joking, but he was serious about the merits of this plan.)

We have seen where this irrationality leads--to Sam Bankman-Fried's thievery, to other colossal wastes of money, to boondoggles such as cruelty-free insect farms, to building longtermist castles in the sky, to--as Freddie points out--buying actual castles. (I get that it's more cost-effective for effective altruists to own a building instead of renting, but have these people never heard of FaceTime and Zoom? Why not hold their meetings online so they can donate more money?) I also suspect that some effective altruists who earn-to-give wind up succumbing to the lure of wealth and keep the money too.

The truth is that our emotions are a better guide to being charitable than pure reason is. The vast majority of us are more easily moved to generosity by the plight of the person in our own neighborhood than we are by abstract principles. Or, as Tolstoy once said, "The most important time is now. The most important person is the one standing right in front of us. And the most important thing to do is to do good for that person. That is why we are here."


The whole EA thing is massively disappointing because there really are huge inefficiencies in the way charitable aid is allocated; I work in that sector and am always looking for ways to do it better. I share the conclusion that public health in the developing world is one of the most important things you can work on, and I work on that. At some point, it felt like the EA movement could be a good reaction to the inefficiencies of huge international NGOs, one that could steer resources in a better direction. Alas, not once when I've engaged with Effective Altruism as it exists has it given me any insights into how to do my work better, beyond the banal truths you cite – although you'd be shocked to see how hard it can be to implement those banal truths in the aid industry.

EA is ultimately just the prosperity gospel for tech bros, a convenient excuse to feel moral while making lots of money and doing the kind of work you want to do anyway. That's why it's so focused on longtermism and AI – that's what's cool, that's what's lucrative, science fiction is a favourite genre so thinking about that is fun. They get to be rich AI dudes while maintaining smugness over their peers. The story of someone insisting 'I can't do any good without power' and then being corrupted by the pursuit of power is as old as time. And a lot of the flaws in EA (and utilitarianism) stem from adherents' total inability to see _themselves_ as merely human as well. It's a real tragedy, and ridiculous to boot.

GiveWell and GiveDirectly are both useful guides to channeling money though.


When EA was first introduced to me, the concept was genuinely new. That feels embarrassing now, as I was into my 20s when this happened, but I'd just sort of always accepted that if an organization was performing "charity," then it must be good. Thinking about how to allocate limited funds most effectively across a gigantic range of problems, and ruthlessly determining whether a given intervention was actually working, was a new framework for me, especially at a time when I was still deep in a community where the identitarian markers of the people doing the fundraising were more important than what they were doing.

Then I joined an EA Discord, and the first conversation I witnessed was someone having a legitimate panic attack in the chat because he thought he might have solved some important problem for the future of AI-risk, and he wanted to reach out to the AI-risk bigwigs to see if they could use his solution, but was utterly paralyzed by terror that if he was wrong, the time they spent checking his work would have less utility than the time they would otherwise presumably have been spending solving AI-risk. This was clearly someone in the throes of an anxiety-induced mental health episode--he had literally catastrophized himself into the belief that the world might end if somebody took too long to read his email. But the whole group talked him through the calculus of whether he should email the AI-risk bigwigs as though this were a serious and rational problem to be having. I walked out and never looked back.

These days I still donate to GiveDirectly and keep up with GiveWell, because the framework remains one I appreciate. I like how the best parts of EA keep trying to innovate on valuable projects that are already in the works, and I respect their commitment to transparency. I think you're right that it's time to stop talking about "EA" as though it's only its best parts and not its worst. Maybe drop the label altogether. The more an EA-influenced project has to interface with the real world (GiveDirectly), the better it is; the more it's about "community" (the Discord server), the less functional it is.


I know a lot of EA people and I think they're generally a good bunch, but I think they suffer from a sort of anti-humanism as described by Matthew Crawford - they are aware of the flaws in human cognition, but instead of saying "huh maybe those are a part of being human, and we can work together to overcome them" the takeaway is that we need a super- (or non-)human class of reasoners to make plans without the filthy, matter-bound constraints of human existence getting in the way.

But we are humans, not computers - sometimes our instincts are right, sometimes a disgust response isn't something to be overcome but a valuable signal about what is right and wrong, sometimes we think differently about those close to us than those we don't really know.

Acting like there will be a human world where those facets of humanity will be winnowed away is ahistorical and misunderstands us hairless apes. I think this is why so many are into transhumanism: they see existence as a curse of suffering to be overcome with technology, not a gift of limitless value to be appreciated, puzzled over, laughed at, and enjoyed.

I think utilitarianism can bring some great insights, but so often the EA "Fermi estimates" are just motivated wish fulfilment that gives an answer in the ballpark they want. Utilitarianism is an interesting and useful lens through which to explore decision-making, but if it's not tempered by some kind of virtue ethics, it's a recipe for big, bad ideas.
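
(As an illustration of how pliable those estimates are, here is a toy Fermi estimate in Python; every input is an invented guess, which is exactly the problem:)

```python
# Toy "Fermi estimate" of expected lives saved by a speculative intervention.
# Every input is an invented guess; note how easily the guesses move the answer.
def expected_lives_saved(p_catastrophe, p_my_work_averts_it, lives_at_stake):
    return p_catastrophe * p_my_work_averts_it * lives_at_stake

print(expected_lives_saved(0.10, 0.001, 8e9))   # ~800,000 with optimistic guesses
print(expected_lives_saved(0.01, 0.0001, 8e9))  # ~8,000 with each guess 10x smaller
```

Two equally defensible sets of guesses, a 100x difference in the bottom line: plenty of room to land in whatever ballpark you wanted.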


Thank you for this - I have been involved in the nonprofit world for most of my [approaching] 30-year career. I get so tired of people who bemoan the large number of nonprofits, as though it's wasteful for people to spend their money on tiny ones instead of pooling their money into bigger - presumptively more effective - ones. The bottom line is that people give to charities for a lot of different reasons that are usually personal to them [e.g., their relative died of a disease] and often local. You're not going to convince them to just change their donations to be more effective. Even the rich people who are touting effective altruism are getting something out of it - see SBF, Elon Musk, and many others who make national headlines for their giving. If my friends want to donate to a local charity that takes veterans with PTSD fly-fishing because 1) they want to honor military service and take care of veterans; and 2) they grew up in Montana and want to preserve fly-fishing as a local tradition, or maybe love to do it themselves, then how is that a bad thing? How is that not effective, especially if there are 40,000 similarly small organizations that help veterans? That's an actual figure, by the way. And I for one love the fact that there are enough people out there who honor our country and military service to create 40,000 ways to help veterans and families who may be struggling.


https://web.archive.org/web/20211108155321/https://freddiedeboer.substack.com/p/please-just-fucking-tell-me-what

Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Political And Social Changes I Demand.


Sacrificing the needs of actual, living people in favor of the hypothetical needs of future people who may or may not ever exist is nothing short of monstrous. This position, which seems to be sincerely held by a lot of EA folks, is sufficient to invalidate the entire philosophy and everything associated with it.


The "branding" or "cult-like" aspects of EA are probably necessary to get a lot of people to donate, in the same way that "branding" and "cult-like" aspects of Taylor Swift enjoyment gets more people to pay ungodly amounts for her concerts.

People don't generally like giving away money and getting nothing in return. Churches used to be a good answer to "what do I get?" and a big part of that answer was "community." If EA doesn't have a brand, community, leadership figures, etc., then it doesn't work for that. To some extent it's doing the work of branding causes that aren't well branded. Direct cash transfers to Rwandans don't give you the same in-group-identifying bumper sticker as NPR, the NRA, Harvard, etc., which puts them at a massive disadvantage without the positive auspices of EA.

This all seems fine. If people are going to get involved in a community, centering it on giving money away seems about as good as you can get, even if you're not on board with all the philosophy.


“On this Giving Tuesday I’d like to explain to you why effective altruism is bad” and then reasonably backing it up makes this the epitome of a Freddie post


Wouldn’t it be relatively easy for the governments of the world to get together and produce decades, centuries even, worth of mosquito nets, thus freeing us from ever having to compare any other social good to mosquito nets ever again?
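
(For what it's worth, the back-of-the-envelope version, where every figure is an assumption rather than a sourced number:)

```python
# Back-of-the-envelope for the "just buy all the nets" idea.
# Every input below is an assumption for illustration, not a sourced figure.
people_at_risk = 1e9      # assumed population needing coverage
net_lifespan_years = 3    # nets wear out and lose their insecticide
horizon_years = 100
cost_per_net = 5.0        # assumed dollars per net, delivered

nets_needed = people_at_risk * horizon_years / net_lifespan_years
total_cost_billions = nets_needed * cost_per_net / 1e9
print(round(total_cost_billions, 1))  # ~166.7 billion dollars over a century
```

On those assumptions it's a large but not unthinkable sum for "the governments of the world," though manufacturing nets is the easy part; distributing and replacing them is where the cost and difficulty actually live.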


Good article. I think you can’t talk about EA, however, without mentioning the rationalist movement which underpins it, which starts with equally good if vague ideas (“we should try to be wrong less often”), has some really good methods for getting there, but has one terrible mistake: the belief that all rational people should eventually agree. Meaning that there is one right answer, and if no one can disprove it, then you have that right answer. Ironically, a framework to achieve intellectual humility has within it the seeds of creating the exact opposite.

I kind of wonder what would happen if all of EA came with the failsafe of “if the answer I came up with is the answer that makes me most happy, it’s wrong.” I feel like it’d be a better program.

“Yaaaay I get to work on AI, like I always wanted.” WRONG

“Yaaaaay I get to buy a castle.” WRONG

“Yaaaay let’s go to Mars!” WRONG

At least it seems to me that more of the EA types are realizing that utilitarianism leads to disaster, which is a good thing.


After reading a New Yorker piece about SBF and his parents, I couldn't help but conclude that these EA types don't actually care about people as individuals. Sure, they care about People (and naturally see themselves as saviors of the People), but the normal human emotions of affection, interest, and involvement with other humans seem to be missing entirely. There's no attempt at genuine connection, and that is probably the most damning and dangerous thing about EA.
