There are a ton of interesting challenges to utilitarianism, and I thought there might be some value in putting them together in one place. Most of the ‘classic’ challenges aren’t ones that I find particularly troubling. Take the ‘transplant problem’:
Imagine a hypothetical scenario in which there are five patients, each of whom will soon die unless they receive an appropriate transplanted organ—a heart, two kidneys, a liver, and lungs. A healthy patient, Chuck, comes into the hospital for a routine check-up and the doctor finds that Chuck is a perfect match as a donor for all five patients. Should the doctor kill Chuck and use his organs to save the five others?
If you set aside the common objections about whether this really passes a utilitarian cost-benefit analysis (‘will this create a norm that deters people from going to hospital when they need to?’ and so on) and stipulate that nobody else will ever find out, so that this really is the utilitarian thing to do, then I’m fairly happy to say that it is a good thing if the doctor kills Chuck. That’s a bullet I’m content to bite.
Similarly, with the experience machine (where you are asked whether you want to enter a machine that will simulate a blissful existence), I’m happy to say both that I would enter the machine and that I ought to enter it. Status quo bias seems relevant here - if you flip the question to ‘Suppose you are already in the experience machine; would you opt to leave if you knew that your non-simulated existence is one of immense suffering?’, it becomes more apparent to most people that they may not have as strong an objection to simulated pleasure as they thought they did.
Even the Repugnant Conclusion doesn’t particularly bother me. If you don’t know what the Repugnant Conclusion is, head over to the Wikipedia page linked in the previous sentence, but the gist is that total utilitarians are forced to accept that a world with a huge number of people whose lives are barely worth living could be better than a world with a smaller number of much happier people. In Derek Parfit’s example, a world where people pop into existence for only a moment and get to eat a potato and hear some muzak is better (given enough people) than a world with a much smaller number of flourishing people living wonderful lives.
Again, I don’t find this so troubling. An interesting response I heard from Robert Wiblin (though I suspect it has been made elsewhere) is that we may already be in the Repugnant Conclusion. While the typical examples are ones in which everyone has lives that are both brief and boring, it could be that the average life on earth at the moment is only ‘barely worth living’, given the huge amount of human suffering, and not many of us would have much trouble saying that 8 billion people living the average life on earth is probably a lot better than only a few hundred people living wonderful lives. But even without that response, I’m content to say that trillions of brief muzak-and-potato lives are better than a very small number of wonderful lives.
So far, I’m a satisfied bullet biter. So, where do I get off the proverbial train to crazy town? Tyler Cowen’s variant of the St. Petersburg paradox is one objection to utilitarianism that I accept as a serious problem. Suppose you are offered a deal - you can press a button that has a 51% chance of creating a new world and doubling the total amount of utility, but a 49% chance of destroying the world and all utility in existence (let’s assume that there are no aliens in the universe, or alternatively that the button also doubles the number of aliens or something). If you want to maximise total expected utility, you ought to press the button - after all, the odds are rigged in your favour, so pressing the button has a higher expected utility than declining.
But the problem comes when you are asked whether you want to press the button again and again and again - at each point, someone trying to maximise expected utility ought to agree to press it, but of course, eventually they will destroy everything. I’m not happy to almost certainly destroy all utility in existence because utilitarianism tells me to. My friend Eli Lifland (who I believe does bite this bullet) has a useful response, though: are there any odds at which you would take the bet?
Suppose that rather than there being a 49% chance you lose everything, there’s a one-in-a-trillion chance. It seems like you ought to push the button over and over again, although of course if you press it enough times you run into the same problem as with the original odds: you will almost certainly eventually destroy everything. Most ordinary people I’ve spoken to about this say ‘I would just press the button a ton of times until I feel like I’ve done a load of good, and then I would take my winnings’, which seems irrational but also appealing. I’m not sure what to do about this one.
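To make the tension concrete, here is a minimal sketch in Python. The 51% and one-in-a-trillion odds come from the thought experiment above, but the press counts are numbers I’ve picked purely for illustration; the point is just that each press looks good in expectation even though repeated pressing almost guarantees ruin.

```python
# Illustrative sketch only: the odds come from the thought experiment above,
# but the press counts are arbitrary numbers chosen for illustration.

def expected_utility(p_win: float, n_presses: int, start_utility: float = 1.0) -> float:
    """Expected total utility after n presses, if each press doubles utility with
    probability p_win and destroys everything (utility 0) otherwise."""
    return start_utility * (2 * p_win) ** n_presses

def survival_probability(p_win: float, n_presses: int) -> float:
    """Probability that the world still exists after n presses."""
    return p_win ** n_presses

# Original odds: every press raises expected utility...
print(expected_utility(0.51, 100))      # ~7.2x the starting utility
# ...yet the world almost certainly does not survive 100 presses.
print(survival_probability(0.51, 100))  # ~6e-30

# One-in-a-trillion odds: the same pattern, just over vastly more presses.
print(survival_probability(1 - 1e-12, 10**13))  # ~4.5e-5: press it enough and you still lose almost surely
```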
I’m also unwilling to be a victim of Pascal’s mugging. Here is the description of the problem from Wikipedia:
Blaise Pascal is accosted by a mugger who has forgotten their weapon. However, the mugger proposes a deal: the philosopher gives them his wallet, and in exchange the mugger will return twice the amount of money tomorrow. Pascal declines, pointing out that it is unlikely the deal will be honoured. The mugger then continues naming higher rewards, pointing out that even if it is just one chance in 1000 that they will be honourable, it would make sense for Pascal to make a deal for a 2000 times return. Pascal responds that the probability for that high return is even lower than one in 1000.
The mugger argues back that for any low probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet—and given human fallibility and philosophical scepticism a rational person must admit there is at least some non-zero chance that such a deal would be possible. In one example, the mugger succeeds by promising Pascal 1,000 quadrillion happy days of life. Convinced by the argument, Pascal gives the mugger the wallet.
While the offer of coming back with money probably fails because of the diminishing marginal utility of money (at some point, getting extra cash just doesn’t make you any happier), the version where the mugger claims to be able to create simulations of huge numbers of people and torture them does seem to pose a problem for utilitarians. Should you give your money away? Of course you shouldn’t, but I think this indicates that maximising expected utility fails when it comes to extremely small probabilities of extremely large payoffs or harms. In fact, the standard problem of Pascal’s wager remains fairly serious.
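To see why the expected value calculation ends up recommending the handover, here is a toy comparison in Python. The wallet value, credence, and promised payoff are all numbers I’ve made up for illustration; the point is only that for any non-zero credence you assign to the mugger’s promise, some promised payoff is large enough to tip the calculation.

```python
# Toy numbers only: the wallet value, credence, and promised payoff below are
# invented for illustration; they are not part of the original thought experiment.

wallet_utility = 1.0        # the utility you lose by handing over your wallet
p_mugger_honest = 1e-15     # your (tiny) credence that the mugger can deliver
promised_utility = 1e18     # e.g. the promised "1,000 quadrillion happy days of life"

expected_value_of_paying = p_mugger_honest * promised_utility - wallet_utility
expected_value_of_refusing = 0.0

print(expected_value_of_paying)                               # 999.0
print(expected_value_of_paying > expected_value_of_refusing)  # True: hand over the wallet
```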
The ‘Very Repugnant Conclusion’ is another such problem (and it is pretty similar to Ursula Le Guin’s famous story ‘The Ones Who Walk Away from Omelas’). Here, the point is that total utilitarians must accept not only the original Repugnant Conclusion, but also that a world containing a huge number of people whose lives are barely worth living, alongside a smaller number of people whose lives are filled with nothing but torture and extreme suffering, can be better than a world with a smaller number of people living wonderful lives. Here is a brief explanation, from the EA Forum:
There seems to be more trouble ahead for total [symmetric] utilitarians. Once they assign some positive value, however small, to the creation of each person who has a weak preference for leading her life rather than no life, then how can they stop short of saying that some large number of such lives can compensate for the creation of lots of dreadful lives, lives in pain and torture that nobody would want to live? (Fehige, 1998, pp. 534–535.)
So, I do get off the train to crazy town eventually. If the price of a train ticket is that I accept that all utility is virtually guaranteed to be destroyed, or that I hand over thousands of pounds to a mugger who says he will simulate miserable existences should I refuse, or that I accept that many people will live awful lives so that many more can eat potatoes and listen to muzak, I’m not going to ride the train. But I suppose the more meta question is: what are the principles by which we should decide when to get off the train? I guess the guiding principle for me is that I ought to get off when my intuition to get off is stronger than the intuitions that drew me towards utilitarianism in the first place.
But I think that there’s a problem with getting off the train using this principle. Sometimes I imagine talking to someone who gives large amounts of their money to local animal shelters, and telling them that they ought to give to effective charities instead (although I don’t actually criticise people’s charitable giving in reality). What if they invoke this principle to defend their ineffective giving? If we’re back to intuitions about what seems crazy to us, why shouldn’t they get off the train to crazy town at the point where it asks them to donate to AMF rather than to local animal shelters? Let me know what you think in the comments, message me, or @ me on Twitter.
I will say, I think the 'status quo bias' objection to the experience machine argument relies on a sleight-of-hand, making asymmetric situations out to be symmetric. Of course, some people really would exit the experience machine if asked, even knowing that suffering awaits them: they have a felt need for freedom or truthfulness experienced as an intrinsic value. (For what I take to be a good illustration of this basic idea, see the behaviour of the character Maeve in Westworld Season 1: although her life within the 'machine' is certainly not one of pure bliss, it's still illustrative of what the basic motivation might be.) But that isn't my primary concern: I think that, even if you assume that you shouldn't exit the machine, you can consistently and without status quo bias say that you shouldn't enter it either.
If you are told 'you are already in the experience machine, would you like to come out?', you'd be emerging into a world you'd never lived in, where you'd never built relationships or had concrete projects or goals or aspirations. You'd have never related to this world at all; the only world you care about would be the one inside the machine. By contrast, if someone comes up to you and says 'you live in reality, but would you like to enter the machine?', you _do_ have a connection to this world. You have a family, you probably have friends, you probably have goals and dreams; while you also potentially have a lot of suffering, you might choose not to sacrifice your 'categorical desires' (as they're called) to get rid of it. But if you're already in the machine, you probably don't _have_ any categorical desires, at least as ordinarily understood.* As such, the situations are asymmetric. If I were told that I was in the experience machine, I probably wouldn't leave, because nothing of particular value to me would be waiting in the real world. But it's not valid at all to infer that, rationally, I should get in the experience machine if given the choice, because there _are_ things of particular value to me in this world. This isn't status quo bias: it's a rational response to my existing patterns of desire and value.
Nozick's objection to utilitarianism _just is_ that it takes these asymmetric situations to be symmetric. Things like relationships, or being-in-the-world, or projects and aspirations, simply cannot be represented in classical utilitarianism except insofar as they might cause positive or negative affect; but as a matter of fact they are _intrinsically_, not just instrumentally, relevant to our ethical decisions. The entire point of the experience machine argument is to point out that classical utilitarianism cannot understand the importance of categorical desires. The status quo bias response, which assumes that the two situations are symmetrical and thus ignores the relevance of categorical desire, does not refute the argument: it strengthens it.
*You likely would have some analogue of categorical desires, desires directed towards the world of the machine that are not conditional on your presence in the machine (as opposed to desires directed towards the real world that are not conditional on you being alive). But this would make the situation even more asymmetrical, and only strengthen my point, by providing reasons to stay in the machine that do not correspond with reasons to enter it.
I guess most people think that the point of charity is to build a good society. From that point of view, if people think a society with animal shelters is a good society, giving to animal shelters is entirely correct.
Utilitarians disagree. They don't think the point of charity is to build a good society, but to build a good world. There is one big problem with this idea: the world consists of different societies, and those societies never have entirely peaceful intentions towards each other. If most people try to build good societies, while a few try to build a good world, those few risk being overrun by the good societies.