[Note: I’m not super familiar with the literature on the Harm-Benefit Asymmetry so some of this stuff might have been covered a lot by academic philosophers, sorry!]
I was reading through Will MacAskill and Andreas Mogensen’s great paper The Paralysis Argument recently, and thought it was a pretty interesting challenge to certain types of non-consequentialists. The specific non-consequentialist view being challenged is this: if there is an action you could perform that does serious harm but also provides benefits to other people (and let’s assume the amounts of harm and benefit are equal in this instance), you ought not to do it, because reasons against doing harm to others are weightier than reasons for benefitting them.
They argue against this view with the ‘Paralysis Argument’. The argument is this: virtually anything you do will inevitably result in indirect harm. Suppose that a traffic accident is going to happen somewhere in London today, and that my driving to work changes who dies in that accident compared to who would have died if I hadn’t driven. I am doing serious harm to the person who ends up dying, but I am providing an equally large benefit in that I prevent someone else from dying. My actions can have indirect knock-on effects that change who is harmed. On the non-consequentialist view, I should simply remain at home: even though the total harm hasn’t increased as a result of my driving to work, if reasons not to harm are weightier than reasons to benefit others, I should not leave the house at all, in order to avoid inadvertent harms, even ones whose overall consequences are neutral. If you reject the conclusion that people ought to simply sit around and do nothing, you need to acknowledge that an action that harms some people and benefits others in equal measure is morally neutral rather than morally wrong.
So, non-consequentialists see an asymmetry between doing harm and benefitting others, and MacAskill and Mogensen reject this asymmetry (henceforth HBA, for Harm-Benefit Asymmetry). But I think rejecting the HBA brings serious problems of its own. Suppose there is a person who enjoys finding homeless people without friends or family on the streets and murdering them. But then imagine that this serial killer feels guilty about his actions and decides to offset his killings by donating enough money to charity to save as many people as he killed (if you think murder has additional negative consequences such that saving an equal number of lives is not a full offset, imagine that he saves many more lives than he took, so as to make the sum of his actions utility-neutral). Call him the Altruistic Killer. It doesn’t seem like many people would argue that these offsets make the Altruistic Killer’s life morally comparable to the life of someone who neither kills nor donates.
The example is slightly absurd, but I think moral offsets do raise some interesting questions about the HBA. Scott Alexander once wrote a piece about whether we ought to eat chicken or beef, arguing that beef was preferable because it costs more to morally offset eating chicken than to buy carbon offsets for eating beef: you can pay a small amount of money to reduce carbon emissions, whereas animal suffering offsets, where you pay a certain amount of money to save animals that would otherwise have died, cost quite a lot. But are these moral offsets actually morally neutral? If you accept the HBA, they aren’t: the animal suffering offsets don’t cancel the harm, and carbon offsets probably don’t either. If you reject the HBA, and think that both carbon offsets and animal suffering offsets are legitimate, where do you draw the line? Is reporting the Altruistic Killer to the police morally wrong, because the total utility of his actions was neutral and sending him to prison would cause suffering that wouldn’t otherwise have occurred? That can’t be right, right?
MacAskill and Mogensen give another example, the Dice of Fortuna. Suppose you have a box containing dice; if you shake the box and the dice come up even, you save a life, and if they come up odd, someone dies. Each time you shake the box, you are given $10 (note: this is a slightly adapted version of the Dice of Fortuna, for the sake of brevity). Is it morally wrong to shake the box? I’m actually not convinced that it is, so I guess I’m more of an HBA-sceptic than I thought. But at the same time, I think the Altruistic Killer is a worse person than an ordinary person who neither donates nor kills, and that reporting the Altruistic Killer is the right thing to do.
In one sense, hypotheticals are totally unrelated to how morality works in reality (if a serial killer who gives to AMF ever does turn up, please let me know), but I will note that the Altruistic Killer may not be as farfetched as it seems. About a year ago I listened to the incredible podcast Hunting Warhead, about the police officers and journalists who try to find and arrest men who commit horrific crimes against children. One interesting thing mentioned in the podcast is that many of the men who commit these crimes do lots of charitable work in their personal lives. The hosts speculate (not particularly convincingly, I might add) that the point of the charitable work is to convince themselves that they can’t really be bad people, in spite of their crimes. I think it’s pretty clear that the amount of good they do through charity work is much, much smaller than the amount of harm they do to children. But the fact that they apparently do attempt to offset their crimes with charity work is interesting, and I think the HBA has serious real-world implications. Let me know what you think in the comments or reach out on Twitter.
EDIT (23/02/22): I’ve just been made aware of this old SSC post, which seems to propose basically the same thought experiment as the Altruistic Killer. I’ll leave my post up because maybe it adds some value by linking the idea to the HBA and MacAskill’s paper specifically, but you should check out Scott’s original post!