19 Comments

I will say, I think the 'status quo bias' objection to the experience machine argument relies on a sleight-of-hand, making asymmetric situations out to be symmetric. Of course, some people really would exit the experience machine if asked, even knowing that suffering awaits them: they have a felt need for freedom or truthfulness experienced as an intrinsic value. (For what I take to be a good illustration of this basic idea, see the behaviour of the character Maeve in Westworld Season 1: although her life within the 'machine' is certainly not one of pure bliss, it's still illustrative of what the basic motivation might be.) But that isn't my primary concern: I think that, even if you assume that you shouldn't exit the machine, you can consistently and without status quo bias say that you shouldn't enter it either.

If you are told 'you are already in the experience machine, would you like to come out?', you'd be emerging into a world you'd never lived in, where you'd never built relationships or had concrete projects or goals or aspirations. You'd have never related to this world at all; the only world you care about would be the one inside the machine. By contrast, if someone comes up to you and says 'you live in reality, but would you like to enter the machine?', you _do_ have a connection to this world. You have a family, you probably have friends, you probably have goals and dreams; while you also potentially have a lot of suffering, you might choose not to sacrifice your 'categorical desires' (as they're called) to get rid of it. But if you're already in the machine, you probably don't _have_ any categorical desires, at least as ordinarily understood.* As such, the situations are asymmetric. If I were told that I was in the experience machine, I probably wouldn't leave, because nothing of particular value to me would be waiting in the real world. But it's not valid at all to infer that, rationally, I should get in the experience machine if given the choice, because there _are_ things of particular value to me in this world. This isn't status quo bias: it's a rational response to my existing patterns of desire and value.

Nozick's objection to utilitarianism _just is_ that it takes these asymmetric situations to be symmetric. Things like relationships, or being-in-the-world, or projects and aspirations, simply cannot be represented in classical utilitarianism except insofar as they might cause positive or negative affect; but as a matter of fact they are _intrinsically_, not just instrumentally, relevant to our ethical decisions. The entire point of the experience machine argument is to point out that classical utilitarianism cannot understand the importance of categorical desires. The status quo bias response, which assumes that the two situations are symmetrical and thus ignores the relevance of categorical desire, does not refute the argument: it strengthens it.

*You likely would have some analogue of categorical desires, desires directed towards the world of the machine that are not conditional on your presence in the machine (as opposed to desires directed towards the real world that are not conditional on you being alive). But this would make the situation even more asymmetrical, and only strengthen my point, by providing reasons to stay in the machine that do not correspond with reasons to enter it.


I think I'd go further and say that the experience machine objection is mixing up two separate questions. First, is simulated reality real/valuable? Second, does only pleasure matter? The reversal test, where you imagine you are already in an experience machine, tests the first of these questions: this life I lead feels real and valuable to me even if it is simulated. But as a defense of pleasure being the only good, the reversal test does nothing, as you say above, because what I seem to like about this (simulated or not) life I lead is my relationships, projects, desires, ...


I guess most people think that the point of charity is to build a good society. From that point of view, if people think a society with animal shelters is a good society, giving to animal shelters is entirely correct.

Utilitarians disagree. They don't think the point of charity is to build a good society, but to build a good world. There is one big problem with this idea: The world consists of different societies and those societies never have entirely peaceful intentions towards each other. If most peoples try to build good societies, while a few try to build a good world, those few risk being overrun by good societies.


I escape both the Repugnant Conclusion and the St Petersburg Paradox by not assigning any positive value at all to the creation of a new life, however good that life is expected to be.

I feel pretty comfortable about this, although I believe this does lead to other unintuitive conclusions and I *do* ascribe negative value to the creation of lives of suffering which looks a bit inconsistent.

I'm with you on the organ donor one, I think (in principle, yes; in virtually any practical situation that looks a bit like this, clearly no); I feel uncomfortable about the simulation one, but do believe my discomfort is irrational.

I don't really have a solution for Pascal's Mugging, though.


Hi Andrew, one thing worth noting is that neutrality about new lives doesn't help with versions of the puzzles that apply just within your own life. You presumably assign positive value to your own positive future, but would you rather have Z-many barely-worth-living moments, or a century of bliss? To answer the latter, you need a more positive account of how quality can trump quantity. For further discussion, see: https://rychappell.substack.com/p/puzzles-for-everyone#%C2%A7population-ethics


I feel like I don't assign positive value to my own future exactly (and share something like the view of Andrew above). Assuming both of your options contain no significant suffering (so "barely worth living" means "kind of boring but with some highlights", not "intense torture followed by enough orgasms that Professor Chappell thinks it's worth it"), then I take Barely Worth It, because I think my life has positive expected value to other creatures, hopefully reducing their suffering with my donations. So a longer life has more of that positive effect on others. But assuming a brain-in-a-vat isolation where I'm only thinking about the experiences I'm having, not my instrumental effects, I think I choose Bliss over Barely Worth It because (1) if both scenarios contain equal suffering per moment, Bliss is shorter, so less suffering, and (2) if the two options contain different amounts of suffering per moment, I assume Bliss contains less per moment than Barely Worth It.


Good point. I'm not sure how I'd answer either.


Excellent summary of some serious objections to utilitarianism. I have one thought on the St. Petersburg paradox, though I admit I have not thought it through carefully. If time is finite, then the number of times the button can be pressed is finite, so there is never a guarantee that the universe will be destroyed, and pressing as many times as possible does yield some chance of immense value. This seems acceptable to me. If time is infinite, then the universe will eventually be destroyed; still, the expected rate of value accumulation at any given finite time is greater with more button presses: it's (rate of utility accumulation given no presses at time t) * 2^n * 0.51^n, where n is the number of presses by time t. Especially since both pressing and not pressing may result in infinite net utility in this case, I have no idea what to make of this. Infinities in moral reasoning in general make me squirm; maybe I need to read more about infinite ethics.


Good point on time being finite; I hadn't considered that, although I guess it remains a problem for utilitarianism if you simply stipulate, when posing the question, that time is infinite. Agree about infinities being really weird. I assume you've read this already, but you definitely should if not: https://handsandcities.com/2022/01/30/on-infinite-ethics/


I don't see how saying time is infinite helps the argument here. If time is infinite, you can show that the universe's continued existence has probability 0, but that's not the same thing as saying it's impossible. Indeed, take any continuous probability distribution on the reals: every single outcome has probability zero, yet that doesn't mean one can't occur.

So I think there's a problematic slip here from probability 0 to impossible that's not justified.
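The probability-zero-but-possible point can be illustrated with a one-line draw. This is my example, not the commenter's: in the idealized continuous model, a uniform draw from [0, 1) assigns probability exactly 0 to every individual point, yet each run produces some point.

```python
import random

# In the idealized continuous model, every individual real in [0, 1)
# has probability exactly 0 of being drawn, yet a value is drawn every
# time -- probability 0 does not entail impossibility.
x = random.random()
print(f"drew {x!r}; in the continuous model this exact value had probability 0")
```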


I feel like this doesn't help. The only reason that time is finite is that we live in a non-classical universe (classical universes are infinite in time), and this poses a huge problem to any ethical theory that seeks to be global and consistent. Ethical theories typically assume something like a classical background context, because that's the only way to get ideas about e.g. causation or knowledge off the ground easily. By appealing to the finitude of time, you escape the St Petersburg Paradox but only at the expense of having to explain the structure of your decision-making in ways that are globally compatible with GR - and, well, good luck with that!


All meta-ethical traincars end in a train wreck near Reductio station. Your best bet is to mind the is-ought gap on your way out, and respect the other passengers. 😉


If you ask me, many of the weird utilitarian thought experiments are rooted in the idea that making a new person is an act with moral weight. That it's good if they live a good life and bad if they live a bad one.

I don't buy that.

I think morality should be concerned with people who exist, and not with hypothetical people who might exist. Making a baby is morally neutral, whether your genes predispose your offspring to bliss or to depression. And gambling the world against the hope of a second world would be insane, because creating a new world is not a good thing even though destroying one is very very bad.


This is a famous view in population ethics (commonly referred to as 'the asymmetry'). Roughly: 'we should care about making people happy, but not about making happy people'. Fair enough! But perhaps read some of the objections before you settle on this view.

Here's a quick point from Richard Y Chappell: 'You learn that a new colony of awesome, happy, flourishing people will pop into existence in some distant, otherwise inaccessible realm, unless you pluck and eat a particular apple.' Many of us, me included, have the intuition that it would be wrong to eat the apple even if I would enjoy it (or at least, there is something morally praiseworthy about refraining from eating the apple). If you think this intuition is right, you're forced to reject asymmetry. More here: https://philpapers.org/archive/CHARTA-5.pdf


Maybe my intuitions are weird, but I don't think there's anything morally wrong with eating the apple. Asymmetry just seems correct to me.

And that's not because I hate the idea of creating people. I wouldn't eat the apple myself; I like the idea of creating something good. But I don't see that as a question of morality.

I would feel similarly if not eating the apple created magnificent art in some distant realm, I think. Morality aside, I have a desire to create.


Regarding the repugnant conclusion, I think the problem comes from the social pressure we face not to place the zero-utility bar too high.

I mean, if the conclusion was merely that we should be willing to give up a small number of extremely happy lives in exchange for a large number of decently happy lives -- no problem. The only reason the repugnant conclusion feels so repugnant is because we imagine those barely positive lives as being quite bad. However, I think we should instead take our repugnance as evidence that those lives we are imagining are actually net negative utility.

It's just that saying that forces us to admit that some existing human lives are net negative utility, which feels scarily close to a position that says it's ok to treat them as if they lacked value. Though I think the better answer is to say that so treating them creates harmful precedents and the like, I understand the reluctance to say so. But there is no problem with utilitarianism saying the best thing to do is to advocate some other moral theory.

--

The very repugnant conclusion doesn't seem to add much imo. I mean, we'd all be willing to accept a few moments of great pain for a long enough period of great bliss, and I suspect even for a really really really long period of moderate bliss. So once again the problem is really just one of zero-setting.


Here I advocate going all the way to crazytown (with the partial exception of the St. Petersburg thing). https://benthams.substack.com/p/going-all-the-way-to-crazy-town


One thing worth noting: these paradoxes and such arise for every satisfiable axiology. Everyone thinks there's a prima facie duty to promote utility -- thus, this is just as much a problem for everyone else as it is for the utilitarian.
