Scott Alexander and Freddie DeBoer both published articles on Effective Altruism in the past few days. Scott’s piece gives a load of examples of Effective Altruists doing good things. EAs have saved 200,000 human lives (mostly by paying for malaria nets). They’ve convinced farms to move 400 million chickens from tiny cages to slightly-less-unpleasant barns or outdoor areas. They’ve also given quite a lot of cash to weirder causes that are probably good: the YIMBY movement, think tanks trying to reduce the risk of nuclear war, various groups trying to make Artificial Intelligence safer, and so on.
Freddie’s piece, on the other hand, makes the point that Effective Altruists are very weird, that some of them think extremely weird things, and that those weird things are often bad. Effective Altruists claim that the whole EA schtick is a commitment to doing as much good as possible, but basically everyone would agree we should do a lot of good! That can’t really be what makes someone an Effective Altruist, so what defines the movement must actually be all of the weird stuff.
And there really is a ton of weird stuff. As Freddie points out, some people think we should eliminate a load of predators in the wild because it might be good for their prey. Others think, if given the chance, we should flip a coin that has a 51% chance of doubling all the good in the world and a 49% chance of killing everyone. Some EAs think the best thing you can do with your money is try to improve the lives of shrimp. If you go to an Effective Altruism conference, much of the discussion will be about how we can get Artificial Intelligence not to kill everyone (and why it’s worth spending much more of our money to reduce this risk, even by a tiny fraction).
So, who’s right? Are EAs weird, or are they good?
Well, por qué no los dos? It’s definitely true that lots of Effective Altruists believe things that are very weird, including a serious and prominent focus on risks from Artificial Intelligence. Effective Altruists sometimes try to gloss over this, pointing to the fact that EAs still donate significantly more money to causes related to Global Health and Development (think malaria nets and deworming) than they do to groups trying to reduce AI risk.
Another point you might hear, made by the most influential Effective Altruist philosopher William MacAskill, is that EA isn’t a package of specific views about how to improve the world. Instead, it’s the general principle that we ought to use evidence to do the most good possible.
If some crackpot scientist claims he’s found a hidden dimension that allows instant travel across the universe via special vibrational frequencies, you don’t say ‘Hmm, I guess science was a load of bullshit after all, what a pity!’ Instead, you say ‘this particular scientist seems nuts, but that doesn’t mean I should toss out the idea of formulating and testing predictions about the world’. MacAskill argues that we should think about EA in a similar way: even if you don’t buy the stuff about AI killing everyone, you shouldn’t throw out the general principle of using evidence to do good.
And yes, sort of. But I also think these are pretty misleading ways of talking about Effective Altruism. Whatever else it may be, EA is primarily a social movement. Someone who has never heard of Effective Altruism but happens to do her own research on which charities to donate to could technically be called an Effective Altruist, but she isn’t an EA in the way that most people use the term.
The defence from MacAskill here seems a bit like an MP for the Conservative party responding to a criticism of specific policies by saying, ‘Conservatism is not a package of particular views. It’s about a commitment to pragmatism, preserving long-standing institutions and customs, and a focus on the actual over the possible’. Well, that may be. But if I think the policies are ghastly, I’m not going to call myself a Conservative any time soon[1].
It is true that most EAs donate more to global health and development charities than to AI charities, and the big EA funders do the same. But if you go to an EA social event or an EA conference, there will probably be lots of, ahem, unusual people. They might ask you about your p(doom), or make controversial philosophical points about the meat eater problem, or encourage you to stop donating your money to charities that help people today and start donating it to animal charities or AI safety organisations[2].
And you know what? It’s good that EAs are weird. Responding to the EA claim that we ought to try to do a lot of good, Freddie writes:
This sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all. The immediate response to such a definition, if you’re not particularly impressionable or invested in your status within certain obscure internet communities, should be to point out that this is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably.
And it’s true that many people, if asked, would probably agree that when trying to do good you should try to do the most good possible. But it’s insanely weird to actually put it into practice. It’s weird to donate 10% of your income to malaria charities. It’s weird to give your kidney to a stranger. It’s weird to spend a significant amount of your time thinking about how exactly you should structure your career in order to do as much good as possible. So while I’m sympathetic to the argument that the idea is banal, it’s extremely weird to actually go and do this stuff.
Couldn’t we have all the good stuff and get rid of the especially weird stuff? Why does donating 10% of your income to malaria charities have to be associated with people who take seriously the idea that we should eliminate some species of predators in the wild?
Well, if you’re weird enough to take seriously the idea that you should donate a hefty chunk of your income to help people who live on the other side of the planet, you might also be weird enough to think it’s worth considering spending a lot of time and effort trying to help wild animals or people who don’t yet exist. You don’t get the good stuff without the weird stuff.
If you’re trying to decide how to do good, and you only consider options that sound reasonable and normal to most people, you probably won’t end up with the conclusion that we should donate huge amounts of our income to people abroad. You probably won’t end up with all the good things EA has done: no malaria nets, no pandemic preparedness funding, no huge campaigns to get animals out of cages. Effective Altruists are weird people doing good things, and long may they stay that way.
[1] Yes, technically I could call myself a small-c conservative. But there’s not really such a thing as a small-e small-a effective altruist, so I can still make this point.
[2] I’m not opposed to any of these things; I’m just saying they’re weird.
Nice piece. But I disagree that it's purely a movement. I think the core idea of trying to do good effectively is also really distinctive. As you say, "it’s insanely weird to actually put [these principles] into practice." So there's plenty of room to defend the idea of effective altruism as *really obviously good and worth pursuing* even if one questions whether Big EA actually does a good job of realizing its ideals.
Though if it's helpful to have a different name to distinguish the core ideas from the actual movement, I quite like "beneficentrism":
https://rychappell.substack.com/p/beneficentrism
Enjoyed this non-totalising reaction, essentially to FdB's churlish piece. His is typical of an approach that assumes the validity of a very subjective deontological morality and uses that as the basis for disparaging a much less subjective mode of thought. I'm getting tired of it.
As for the weird stuff, deontological ethics leads one into stupid-seeming conclusions too. But, unlike utilitarianism, it's rarely as transparent about how it gets there. Anyway, thanks for putting a thoughtful, reasonable case.