
Created: March 3, 2025

Answered using O1 Pro from Chat01

Question

I am writing a research monograph in ethics titled The Diffusion of Moral Responsibility (《道德责任分散》). Below is an overall outline of Chapter 1, Section 2.

Building on the previous section's clarification of the concept of "diffusion of moral responsibility," this section analyzes the specific factors that may expand or blur the concept's extension, so as to delimit its scope of application in theory and practice and guard against its overgeneralization or misuse. In our view, two main factors may blur or enlarge the boundaries of the concept. First, moral judgment has gradually shifted from attending to the agent's autonomy toward emphasizing the concrete situation in which action takes place, so that individual responsibility is progressively diluted, or even obscured, by situational factors. Second, the growing complexity, and even technicalization, of systems of moral evaluation may give rise to a "technical diffusion of moral responsibility" that further hollows out the actual responsibility of moral agents. To distinguish these cases from diffusion of moral responsibility in the genuine sense, the book's subsequent argument proceeds in two steps. First, by tracing the shift in how the debate over free will and determinism has been conducted, it shows that the way moral responsibility is discussed has substantially changed, revealing that the relevant context has moved from an emphasis on individual autonomy to an embedded analysis of situational factors. Second, drawing on the paradox of supererogation, it argues that although multi-dimensional moral thinking can accommodate more of our intuitions and express a richer range of ethical positions, the increased theoretical complexity and explanatory cost may leave the responsible agent indeterminate, diluting or weakening the core binding force that moral evaluation ought to possess.

Based on the first and second steps described above, please draft the subheadings to be used in the manuscript, that is, the third-level headings that follow the section heading.

For the first step, I would like the discussion divided into four third-level headings, and I would like you to draft each of these four headings. The content I have in mind for the first two headings is as follows: first, explain the violence of the "general will" in the French Revolution and its predicament; second, explain that Rousseau and the theory he proposed can be regarded as the instigator of this large-scale political experiment; third, explain that Kant, writing in the same period and in a certain sense a follower of Rousseau, used "universal principles" to remedy the defect of an arbitrary collective will (for example, the abuse and misunderstanding of the "general will" during the Revolution, which led to factional struggle and the abuse of power), but also left behind the defect of excessively individualized responsibility; fourth, show through a historical example that the utilitarian theory Bentham proposed a century later exposed the substance of the crisis more clearly; fifth, explain Bentham's "overcorrection"; sixth, explain Mill's improvement; seventh, show, by constructing a thought experiment, that this improvement would still lead to the situation of the "Panopticon" that Bentham envisaged early on; eighth, by describing the problems of the present day, stress that contemporary society exposes the defects of deontology and utilitarianism even more severely; ninth, explain that in order to spell out how these defects might be remedied, one must start from philosophers' discussions of the problem of free will and determinism.

Please devise the remaining two third-level headings in line with the purpose of the argument.

Then complete the exposition of the content under these headings.

For the second step, I would likewise like four third-level headings. They should discuss the paradox of supererogation and the puzzles derived from it, Horton's "All or Nothing Problem" and Kamm's "Intransitivity Paradox," as well as their solution. On this, please refer to the paper in 【】. I would like you to discuss the problem with this kind of solution and show that it leads to the dissolution of moral responsibility (although it can accommodate more intuitive content and express a richer range of ethical positions, the increased theoretical complexity and explanatory cost may leave the responsible agent indeterminate, further diluting or weakening the core binding force that moral evaluation ought to possess). Then, I think you could complete this argument from the perspective of the "Social Connection Model" proposed by the political philosopher Iris Marion Young. Young holds that when injustice is produced by structural processes (the joint action of many people within institutional settings), we need to broaden our understanding of responsibility beyond direct causal relations. This view shifts the moral focus from guilt (backward-looking) to responsibility for remedy (forward-looking): instead of asking "Who is to blame?", it asks "Who should act to address this problem, and what do we need to do together?" It turns diffused responsibility from an excuse for avoiding action into a call for collaborative action.

Likewise, please draft these four third-level headings and write out the content under each of them.

Three paradoxes of supererogation

Daniel Muñoz


Monash University

Correspondence

Daniel Muñoz, Monash University. Email: [email protected]


Abstract

Supererogatory acts, good deeds "beyond the call of duty," are a part of moral common sense, but conceptually puzzling. I propose a unified solution to three of the most infamous puzzles: the classic Paradox of Supererogation (if it's so good, why isn't it just obligatory?), Horton's All or Nothing Problem, and Kamm's Intransitivity Paradox. I conclude that supererogation makes sense if, and only if, the grounds of rightness are multi-dimensional and comparative.

KEYWORDS

deontology, Effective Altruism, moral obligation, supererogation, The All or Nothing Problem

1 | INTRODUCTION

We don't always have to do what's best. Some good deeds are supererogatory: they are optional and yet better than other permissible options. Examples include enduring injury to save a life, doing a favor, and granting forgiveness.¹ Lovely stuff; hardly required.

But as innocent as the examples may sound, the theory of supererogation is fraught with paradoxes. Three in particular have been influential. The classic Paradox of Supererogation asks how supererogation could be possible even in a simple pairwise choice, like the choice between saving someone from a crumbling building or safely loitering outside (Archer, 2018: §4; Dreier, 2004: 148; Heyd, 2016: §3; Raz, 1975). If being the hero is really better, why isn't it just obligatory?

The other puzzles add a third option. In Horton's (2017) All or Nothing Problem, the agent has two ways to be the hero: either rescue the one from the building, or rescue two, at the same level of cost and effort—say, two crushed arms either way. It seems wrong to save just one person; letting the other die is pointless badness. And yet, saving one seems no worse than permissibly saving zero. But surely it's always worse to do wrong. How could it not be?

In Kamm's (1985, 1996, 2007: 30-1) Intransitivity Paradox, meanwhile, the agent has two ways not to be heroic. Suppose you come upon the building while on your way to the cafe, where you have promised to meet a friend. You may break your promise to save a life, or you may keep the promise, but it would be wrong to just sit there. This leads to an infamous intransitivity: you may do nothing instead of saving the one (in a pairwise choice), and may save the one instead of keeping your promise, but you may not do nothing instead of keeping the promise. How is this possible?

These puzzles are usually discussed in isolation. It is not standard to check whether a solution to Kamm's conundrum could work for Horton's, or vice versa, nor do people discuss these two in light of the classic Paradox. But we need a unified solution. My goal is to find one.

Why take on the puzzles together? For one thing, the choices involved are undeniably linked. In the classic Paradox, we have:

Do Nothing ~ Save 1

Where 'Do Nothing ~ Save 1' means that Do Nothing "ties" Save 1: both options are permissible in a pairwise choice. In Horton's case, we seem to have:

Do Nothing ~ Save 1
Do Nothing ~ Save 2
Save 2 > Save 1

Where 'Save 2 > Save 1' means that Save 2 "defeats" Save 1: only Save 2 is permissible in a pairwise choice (cf. Dietrich & List, 2017). Kamm's case is isomorphic:

Do Nothing ~ Save 1
Save 1 ~ Keep Promise
Keep Promise > Do Nothing

Indeed, the cases are built in the same way. We add an option that defeats only one of the classic pair. Kamm's addition beats Do Nothing; Horton's beats Save 1.

Intriguing links aside, the deeper reason to treat the puzzles together is that we need our solutions to cohere. To capture the intuitions in Kamm's or Horton's case, some philosophers propose revisionary principles of reasons and wrongness. But these proposals, I argue, can't be extended to the combined case, where you have two ways to be heroic and two ways not to:

[Diagram: the combined four-option choice {Do Nothing, Save 1, Save 2, Keep Promise}, joining Kamm's and Horton's additions to the classic pair]

Other philosophers argue that we can't capture the intuitions in Kamm's and Horton's cases. But in fact these intuitions naturally follow from our best solutions to the classic Paradox. To make sense of the hard cases, we should start with the more basic question of how supererogation could be possible at all.

What makes supererogation possible is that wrongness doesn't just depend on worseness. Wrongness depends on multiple factors that are independent—for example, moral reasons and non-moral reasons. This independence can lead to some wrong acts (like Save 1) being no worse than permissible ones (like Do Nothing). It also leads to intransitivities of the kind described by Kamm, though in her case, I argue that the intransitivity arises from more than the presence of independent factors; one factor must also be "comparative." I will later explain what this amounts to, and why I think comparativity is implicit in, or at least winked at by, our everyday views of supererogation.

We begin with the classic Paradox (§§2–3), then Horton's (§§4–5) and Kamm's (§§6–7). If all goes well, the reward is a coherent picture of how morality works beyond and beneath the call of duty, and the happy conclusion (§§8–9) is that supererogation isn't really paradoxical, just more interesting than we expected.

2 | THE CLASSIC PARADOX: AGAINST 1D SOLUTIONS

The classic Paradox of Supererogation asks: if these fabulous deeds are really so good, why aren't they obligatory? The question has bite if we are drawn to:

The More Reason Principle

An option is obligatory if there is more reason to choose it than any alternative.

Which sounds hard to resist. Why permit worse choices? Or as Raz (1975: 165) asks: "How can one be permitted to refrain from action that is required by reason?" And yet the More Reason Principle entails that anything less than best is wrong. Supererogation is impossible.²

To save the supererogatory, we need to replace this principle with something less stringent. Now, one strategy is to stick with plain reasons, and pick a fancier, more forgiving way of moving from reasons to rightness. Why not just do that?

It's worth a shot, but the fancy principles have their own problems. The main option is to say that we are only required to be "good enough," clearing a baseline of decency:

The Baseline Principle

An option is permissible just if it is better than the baseline. (Hurka, 1990; Slote, 1985)

Where "the baseline" is either an absolute standard (one's choice isn't too awful) or, like an average, relative to the set of one's options (one's choice isn't too much worse than the alternatives). Either way, this view allows for supererogation in simple two-option cases; Do Nothing might be permissible, even though Save 1 is better, if both are above the line. The problem is that the Baseline Principle permits too much, since it permits acts that are above the line yet gratuitously bad. Although this problem predates Horton's puzzle (Bradley, 2006), his building case is a perfect example. It's wrong to save only one, given that saving two was no costlier, and yet Save 1 must be above the line, since Do Nothing is. The Baseline Principle's permissions are too indiscriminate.

The two principles we have seen invite us to rank options from worst to best like numbers on a line. The More Reason Principle says anything below the top option is wrong; the Baseline Principle permits anything "high enough." But neither principle can rule out gratuitously worse options (like Save 1) while leaving in those that are justifiably worse (like Do Nothing).³ To properly solve the classic Paradox, we need the resources to say what it is about the permissible suboptimal acts that makes them permissible, something beyond raw betterness.

Solving the Paradox of Supererogation thus requires us to think of an action's moral status as more than a number on a line. We need a second factor, something inherently permissive—and this is exactly what we find in the two leading solutions to the Paradox of Supererogation.

3 | THE CLASSIC PARADOX: 2D SOLUTIONS

If supererogating is better, why permit anything less?

The leading answer is that the supererogatory act is only better in a respect. Supererogation is favored by one kind of reason, but is opposed by weighty reasons of another kind, which are said to be "justifying" rather than requiring (Archer, 2016; cf. Gert, 2007; Lazar, 2019), "partial" rather than impartial (Parfit, 2011), or "non-moral" rather than moral (Portmore, 2011; Slote, 1991)—whatever they are, they counterbalance the reasons to supererogate, stopping them from grounding an obligation.⁴ So we get something like:

The Non-Moral Reasons Principle

An option x is obligatory just if, for any alternative y, there is more moral reason to choose x than there is combined moral and non-moral reason to choose y.

Where moral and non-moral reasons are two independent factors, and only moral reasons favor supererogating.

Already, this is enough to solve the paradox. Heroic sacrifices are optional, even though they are morally better, because we have mighty non-moral reasons not to harm ourselves. (This view does, however, require us to gently revise our concept of supererogation, since it's no longer better overall, just morally better. See §5, below.)

My own preferred view admits just one kind of reason—which tends to favor, justify, and require—and posits something further: prerogatives, which justify without at all requiring or favoring.⁵ Prerogatives are purely permissive. They have weights, like reasons, but their sole function is to justify acts that would otherwise be wrong, putting a targeted gap between "most reason" and "must." They are like non-moral reasons, except they don't affect what's best.

This view also lets us replace the More Reason Principle. We say:

The Prerogatives Principle

An option x is obligatory just if, for any alternative y, there is more reason to choose x than there is combined reason and prerogative to choose y.

And this gives us an easy way to explain the optionality of self-sacrifice. On any view of prerogatives, I have a prerogative not to give up my limbs, and so even though the reasons favor self-sacrifice, it lies well beyond the call of duty.⁶
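
For readers who want the two principles side by side, here is a compact symbolic restatement (the notation Ob, R_m, R_n, R, and P is introduced here for convenience; the paper states the principles only in prose):

$$\text{Non-Moral Reasons Principle:}\quad \mathrm{Ob}(x) \iff \forall y \neq x:\ R_m(x) > R_m(y) + R_n(y)$$

$$\text{Prerogatives Principle:}\quad \mathrm{Ob}(x) \iff \forall y \neq x:\ R(x) > R(y) + P(y)$$

Permissibility is the mirror image: an option x is permissible just if no alternative y has reasons that outweigh the combined reasons and prerogatives (or non-moral reasons) to do x instead.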

These are the leading solutions in the literature, and I can't stress enough that they both involve the idea that right and wrong depend on two dimensions. There isn't just a cutoff along a single scale from naughty to nice. Moreover, on either view, the two dimensions are independent, in the sense that an option's ranking along one doesn't tell us how it ranks with respect to the other. The weights of the moral reasons don't tell us how the non-moral reasons lie; the balance of reasons doesn't settle which prerogatives we have; and so on (cf. Lazar, 2019: 246).

The crucial upshot of this independence is that an option can outrank its rival on one dimension while being outranked on the other. Indeed, this sort of moral mismatch is just what we would expect to find in our simple case of supererogation. Self-sacrifice is a wonderful thing, but a harm to oneself can have special significance. Do Nothing thus gets protected by the weightier prerogative (or non-moral reasons), even though Save 1 is (morally) better.

We need two dimensions to make sense of supererogation, and having two dimensions can lead to moral mismatches. That is the main lesson of the Paradox of Supererogation. It's a simple point, but even on its own, it is powerful enough to solve the All or Nothing Problem.

4 | ALL OR NOTHING: REASONS AND PREROGATIVES

We begin by stating the problem more carefully.

In Horton's case, I am faced with a collapsing building and three options. I can do nothing, which costs me nothing. Or I could sacrifice my arms to hold open a small escape route, saving one of the people inside. I could also, for the same blood price, open a bigger route and save both.

Intuitively, I do not have to be the hero.

May Save None

It is permissible for me to save no one.

After all, two arms would be a massive sacrifice. But it would be wrong to put that sacrifice to waste, saving only one person and abandoning the other:

Wrong to Save One

It is wrong for me to save just one.

There is no way to justify such gratuitous badness. Still, saving one is no worse than saving zero.

No Worse to Save One

I don't have more reason, all things considered, to save no one rather than saving just one.


Morality wouldn't recommend saving fewer lives.

But as nice as these intuitions are, they leave us with a wrong act (saving one) that is no worse than a permissible act (saving zero). And surely:

Worse to Do Wrong

If x is wrong and y is permissible, then I have more reason to do y.

Indeed, it is a "common assumption" in the literature that any permissible act is better than any wrong alternative (Pummer, 2019: 284n.19).⁷

That is the All or Nothing Problem: if the only permissible options are saving all of the people and saving none, then we must revise our intuitions, or else deny the seemingly undeniable-Worse to Do Wrong.

Most solutions give up an intuition. Horton (2017) denies May Save None (he says it is false when the agent is willing to make the sacrifice);⁸ Portmore (2019: §5.4) denies that there is more moral reason to save one than none; and McMahan (2018: 99-100) suggests that saving one might be wrong and yet somehow not impermissible. Although these concessions are costly, they would allow us to preserve Worse to Do Wrong.

But that principle is seriously undermined, and the intuitions are strongly predicted, by the idea of independent dimensions, which we got from working through the Paradox of Supererogation. Wrong acts would always be worse, if 'wrong' meant 'below the cutoff on the one true moral scale from bad to good'. Any permissible act would be above the line and therefore better than anything below. But what if there are two scales? Then we get counterexamples. Let's see why, starting with a view of reasons and prerogatives.

Recall the basic pair: Do Nothing ~ Save 1. (Where 'x ~ y' means that x and y are both permissible in a pairwise choice.) Given that Save 1 is better than Do Nothing, we might expect Save 1 to be obligatory. But it isn't. We have prerogatives not to harm ourselves, so Do Nothing is justifiable despite being suboptimal. The reason to save a life doesn't outweigh the combined reasons and prerogatives to self-preserve.

Now the key question. What should we expect to happen when we add the option to Save 2? Consider how it fares in pairwise choices. Intuitively, Save 2 is obligatory in a pairwise choice with Save 1: Save 2 > Save 1. The cost is the same either way, so in effect, the agent is just choosing whether to pointlessly allow the second person to die. I assume that pointlessly letting someone die is wrong. Whatever prerogatives we have, they don't make a decisive difference in this kind of choice, where no additional harm to self (or weighty rights) are in play.

But our prerogatives do permit Do Nothing in a choice with Save 2: Do Nothing ~ Save 2. A massive harm to oneself can allow us to forgo a whole range of sacrifices; it is not as if anything beyond one life has got to tip the scales. I assume that two lives won't be the tipping point.

So in the choice from {Do Nothing, Save 1, Save 2}, we should expect only one wrong option: Save 1. The balance of reasons and prerogatives will be something like:

|              | Do Nothing | Save 1 | Save 2 |
|--------------|------------|--------|--------|
| Reasons      | 1          | 5      | 10     |
| Prerogatives | 10         | 0      | 0      |

The result is that wrongly saving one is better than permissibly doing nothing. And this is possible because reasons and prerogatives are independent. Do Nothing is worse than Save 1, but I have a mighty prerogative not to sacrifice, and only a negligible prerogative to sacrifice to save the one; I can't justify Save 1 over purely better rescues. That is why the addition of Save 2 is able to rule out only the better of two options. "Tied" options don't have to be morally equivalent, and they don't have to be defeated by the same things. Even the lesser option in a tie can be more resiliently permissible.⁹
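
As a sanity check on that table, here is a minimal Python sketch (my own illustration; the weights are the toy numbers above, not anything the paper commits to) that applies the pairwise test behind the Prerogatives Principle and reproduces the verdicts:

```python
# Toy weights from the table above (illustrative only, not the paper's own code).
reasons = {"Do Nothing": 1, "Save 1": 5, "Save 2": 10}
prerogs = {"Do Nothing": 10, "Save 1": 0, "Save 2": 0}

def permissible(x, menu):
    """x is permissible iff no alternative's reasons outweigh the
    combined reasons and prerogatives to do x instead."""
    return all(reasons[y] <= reasons[x] + prerogs[x]
               for y in menu if y != x)

menu = ["Do Nothing", "Save 1", "Save 2"]
for option in menu:
    print(option, "permissible" if permissible(option, menu) else "wrong")
# Do Nothing: permissible (1 + 10 is not outweighed by 5 or by 10)
# Save 1:     wrong       (5 + 0 is outweighed by the 10 for Save 2)
# Save 2:     permissible (10 + 0 is not outweighed by 1 or by 5)
```

Run on the pair {Do Nothing, Save 1} alone, the same test leaves both options permissible, which is just the classic supererogation verdict.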

So here is my main claim about the All or Nothing Problem. If we start with reasons, prerogatives, and pairwise judgments, we should expect that wrongly saving one will be better than permissibly saving no one. It is a prediction, not a paradox, that the three intuitions are true and that Worse to Do Wrong is false. We can solve the All or Nothing Problem without compromising on our intuitive judgments, all thanks to the insights we got from working through the Paradox of Supererogation.

5 | ALL OR NOTHING: PARITY OF REASONS

But what about reasons-based views of supererogation? Do they make the same predictions about Horton's case? It's a bit complicated, but mostly: yes.

For simplicity, let's stick with the Non-Moral Reasons Principle, which says that supererogation emerges from a clash of moral and non-moral reasons: an act is obligatory just if the moral reasons to do it outweigh the combined moral and non-moral reasons against. Given this principle, we would expect Save 2 to be morally best, followed by Save 1, and we would expect both of these sacrifices to be non-morally worse than Do Nothing. So the prediction is that Save 1 will be wrong and yet morally better than permissibly saving no one. This is a violation of what we might call "Morally Worse to Do Wrong," which is the version of Worse to Do Wrong that is in fact endorsed by fans of non-moral reasons (Portmore, 2019: 22).

But can this view accommodate our three intuitions? On the face of it, no. The trouble is with No Worse to Save One. Since Do Nothing is permissible, we know that it is at least as good overall as Save 2, which is strictly better than Save 1. Don't we have to conclude that Do Nothing is also better than Save 1?

This is a well-known problem.¹⁰ But there is a solution, due to Derek Parfit: we say that Do Nothing is on a par with Save 1 and Save 2 (2011: 137-141). Two options are on a par when they are comparable, though neither is better, and they aren't exactly equal in goodness (Chang, 2002). Using '~' to denote parity and '>' plain betterness, we get a familiar structure:

[Diagram: Save 2 > Save 1, with Do Nothing on a par (~) with both Save 1 and Save 2]

So Save 1 is wrong because it's strictly worse than Save 2, but Save 1 is still on a par with Do Nothing, which remains permissible because it is the best non-morally. With parity, I conclude, we can capture all three intuitions in the All or Nothing Problem and debunk Worse to Do Wrong.¹¹

Indeed, we can do all this even if we don't use moral and non-moral reasons in particular. We might instead say that the reasons to save lives are "other-regarding" while the reasons to do nothing are "prudential." Or we could call it a conflict between "impartial" reasons (like it would save two lives) and "partial" reasons (like it would save my arms). All good. So long as the reasons to sacrifice are on a par with the reasons to self-preserve, we can solve the All or Nothing Problem.¹²

6 | THE INTRANSITIVITY PARADOX

Now the twist: we have already solved Kamm's Intransitivity Paradox. (And the bucket of cold water: it's not entirely obvious what "Kamm's Paradox" amounts to. We'll clear this up as we go.)

The core of the puzzle, as originally presented, is that some cases appear to feature a spooky intransitivity. In Kamm's case, I have three options: keep a promise to meet someone for lunch, save a life at great personal cost, or do nothing.¹³ Because life-saving is costly, I am free to do nothing instead: Do Nothing ~ Save 1. (Where this "tie" means that I may do either option in a pairwise choice.) But since the sacrifice is for a splendid end (saving a life), it is permissible even when it entails breaking lunch obligations: Save 1 ~ Keep Promise. And yet I may not skip lunch to lounge around: Keep Promise > Do Nothing. (This "defeat" means that I would have to keep the promise in a pairwise choice.) All together:

Do Nothing ~ Save 1
Save 1 ~ Keep Promise
Keep Promise > Do Nothing

Here, we get an intransitivity in the relation of weak defeat, or '≳'.¹⁴ Let 'x ≳ y' mean that x is permissible in the pairwise choice. ('x ≳ y' is equivalent to 'x > y or x ~ y'.) Kamm's intuitions about her case entail that '≳' is intransitive: Do Nothing ≳ Save 1 ≳ Keep Promise > Do Nothing. This means that we have Do Nothing ≳ Save 1 ≳ Keep Promise, but not Do Nothing ≳ Keep Promise. That's Kamm's intransitivity.

The air of paradox creeps in when we ask: how is this possible? If x is tall enough to at least tie with y in a height contest, and y is tall enough to at least tie with z, then x will at least tie with z. Why shouldn't the same hold for contests of permissibility? If x is at least good enough to choose over y, and y over z, doesn't x also have to be good enough to choose over z?¹⁵

But we have already shown that this kind of question is confused, because being permissible, unlike being tallest, is a matter of two independent dimensions: reasons and prerogatives. Indeed, we have just seen how reasons and prerogatives lead to intransitivity in Horton's case:

Do Nothing ~ Save 1
Do Nothing ~ Save 2
Save 2 > Save 1

Save 1 ≳ Do Nothing ≳ Save 2, but not Save 1 ≳ Save 2. The key is a moral mismatch. I have reasons to help others and prerogatives not to self-harm. So, I have a greater prerogative to Do Nothing despite having more reason to Save 1. (Mutatis mutandis for views with multi-flavored reasons.)

The fear of intransitivity, like the pull of Worse to Do Wrong, depends on the assumption that right and wrong depend on only one dimension—a dogma that would rule out supererogation from the start. We should be glad to get rid of Worse to Do Wrong. For the same reason, we should enjoy our polite farewell to the transitivity of weak defeat.

7 | COMPARATIVE PREROGATIVES

Now things get tricky. Kamm's Paradox is kaput; there is nothing fishy about intransitivity per se. But we are still in a pickle insofar as we haven't figured out Kamm's case, which is trickier than Horton's. Maybe this isn't part of the paradox proper, since it's not strictly about transitivity (this is the unclarity alluded to earlier). Still, we need a solution.

Here's the problem. Because Do Nothing ≳ Save 1, we have to say that there is a mighty prerogative to Do Nothing. But clearly, in the choice between Do Nothing and Keep Promise, there can't be such a prerogative. If there were, promise-breaking would be permissible, which it isn't. It's a datum that Keep Promise > Do Nothing.

An essential part of this problem is that we can't just say that Keep Promise rules out Do Nothing by virtue of its being even better than Save 1. Keeping the promise isn't better than giving up one's arms to save a life, as shown by the fact that saving the life is supererogatory. The core mystery is that the prerogative to Do Nothing seems to kick in against Save 1, the best option, but disappear up against Keep Promise.

This is really puzzling. What's going on?

Kamm (1996: 313) proposes an answer: we are dealing with two independent dimensions. Save 1 scores highest in one respect (it may "take precedence" over the biggest swath of options, weakly defeating them), whereas Keep Promise ranks highest in another: it requires the greatest efforts. The idea seems to be that Keep Promise is urged by the bossier moral reasons, but Save 1 is protected by the bigger prerogative (for a similar interpretation, see Lazar, 2019: 246). Clearly, Kamm's view will have trouble capturing the betterness of Save 1. But a deeper problem is that it makes the wrong predictions about Horton's option to Save 2.

Recall the combined choice:

[Diagram: the combined four-option choice {Do Nothing, Save 1, Save 2, Keep Promise}]

Now, if Kamm is right that we have a mighty prerogative to save lives, and only relatively weak moral reasons to do so, then the justification to Save 1 should be enough to outweigh the reasons to Save 2, and Save 1 will be permissible.

Here's why. We know that there is some justification for Keep Promise, since Keep Promise ≳ Save 1. We also know that there is even more justification for Save 1; that is Kamm's main idea. But then adding Save 2 can't make Save 1 alone wrong. Since Save 1 is even more justifiable than Keep Promise, anything that makes Save 1 wrong will also rule out Keep Promise. But Save 2 doesn't rule out Keep Promise. So it can't rule out Save 1, either. Kamm's view thus conflicts with the idea that it's wrong to save one in Horton's case. The view lets us be gratuitously nasty.¹⁶

The other leading take on Kamm's case, due to Alfred Archer (2016: 459), is that it involves three dimensions. Basically, Archer's view is just Kamm's plus the idea that an option can be "morally favored" independently of how strongly it is justified or required. This allows Archer to say that Save 1 is more favored than Keep Promise. Now, like Portmore (2016: 293n.11), I think it’s odd that "favoring" should be independent of other factors. But even granting Archer this, he hasn't solved the fundamental problem for Kamm's view: his view also permits me to Save 1 even given the option to Save 2. Neither Kamm's view nor Archer's can be plausibly extended to Horton-style cases, where there are better and worse ways to save.

Now what? If we can't treat Kamm's case even with three dimensions, what on earth could do the trick? A leap to four dimensions? A formal deus ex?

I don't think we need anything so fancy. Just ask yourself: why am I free to stay out of the building, but not free to break the promise? The answer is obvious: keeping the promise wouldn't crush my arms! Promise-keeping isn't more harmful to me than doing nothing, so I don't have any harm-based prerogative to Do Nothing rather than keep my promise.

The big mistake was to think that we could always represent reasons and prerogatives with a single weight. Sometimes, as in Kamm's case, it matters what we are comparing to what. I have a prerogative to Do Nothing rather than Save 1, because saving the life is more harmful to me; but I don't have any such prerogative to Do Nothing rather than Keep Promise, since the harm to me is the same either way. What matters is comparative harmfulness. That is why we can't just say that Do Nothing gets a prerogative of weight 5, or weight 10, or what have you. The weight doesn't just depend on how harmful the act is considered in itself; we also need to know the alternative. Prerogatives are in this sense "contrastive" (Snedegar, 2015), or, as I like to put it, comparative: we have a prerogative to do x rather than y if x harms us less than y does.

The key to Kamm's case is that prerogatives (or non-moral reasons) are comparative.¹⁷ Stated in the abstract, this might sound like an ad hoc addition. But it's not. It is just common sense that our prerogatives depend on comparative harms, and there is nothing natural about the idea that the costs of heroism should justify one non-heroic option over another. This idea is just confused, as Kamm's case shows, and the solution is that harms matter in a comparative way.
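
To see how little machinery the comparative move needs, here is a sketch in the same style as before (made-up reason and harm weights, introduced only for illustration) in which the prerogative to do x rather than y is a function of how much less x harms the agent:

```python
# Made-up reason and harm weights, introduced only for illustration.
reasons = {"Do Nothing": 1, "Save 1": 5, "Keep Promise": 3}
harms   = {"Do Nothing": 0, "Save 1": 10, "Keep Promise": 0}  # harm to the agent

def prerog(x, y):
    """Comparative prerogative to do x rather than y: it exists only
    insofar as x harms the agent less than y does."""
    return max(0, harms[y] - harms[x])

def permissible(x, menu):
    """x is permissible iff no alternative's reasons outweigh the reasons
    plus the comparative prerogative to do x rather than that alternative."""
    return all(reasons[y] <= reasons[x] + prerog(x, y)
               for y in menu if y != x)

print(permissible("Do Nothing", ["Do Nothing", "Save 1"]))          # True
print(permissible("Save 1", ["Do Nothing", "Save 1"]))              # True
print(permissible("Do Nothing", ["Do Nothing", "Keep Promise"]))    # False
print(permissible("Keep Promise", ["Do Nothing", "Keep Promise"]))  # True
print(permissible("Save 1", ["Save 1", "Keep Promise"]))            # True
print(permissible("Keep Promise", ["Save 1", "Keep Promise"]))      # True
```

With these toy numbers, the three verdicts in Kamm's case come out as intended: Do Nothing ~ Save 1, Save 1 ~ Keep Promise, and Keep Promise > Do Nothing, even though no single, context-free "weight of the prerogative to Do Nothing" appears anywhere.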

Let me close with one more case, introduced by Dale Dorsey (2013), which has been treated as a variant on Kamm's.¹⁸ Suppose I am able to beat up a stranger, Roy, and take his money, which I can use either to buy a car for myself, or to save ten lives via an amazing NGO. I have three options: Get a Car, Respect Roy's Rights, or Save 10. Intuitively, Dorsey thinks, I don't have to give up my own luxury for charity: Get a Car ~ Save 10. (A bit heartless, but let's grant it.) I may, however, beat up Roy when lives are on the line: Respect Roy's Rights ~ Save 10. And yet I would not be justified in assaulting someone just for the sake of new wheels: Respect Roy's Rights > Get a Car.

We get the same old structure:

Get a Car ~ Save 10
Respect Roy's Rights ~ Save 10
Respect Roy's Rights > Get a Car

This time, however, we can't say that the two non-lifesaving acts involve the same null cost to self. Respecting Roy costs me a whole car! So why is my prerogative to Get a Car comparative? Why does it count for so much against the greater good and for so little against Roy's rights?

Again, I don't think the explanation here has to be ad hoc. On the contrary, the verdicts follow from two familiar views of prerogatives. First, on Hurka and Shubert's view, we have a prerogative to promote our own interests except if doing so would violate a right. They argue:

Surely if it's wrong to kill one innocent person to save two others, as deontological moralities hold, it's also and equally wrong when one of the two is you: your agent-favoring permission to pursue your own good must have more weight against the duty to promote the good than it does against deontological constraints. (2012: 10; cf. Kamm, 2007: 30-1)

On a rights-based view, meanwhile, it seems clear that a right to spend on oneself doesn't entail a right to steal from others, much less beat them up!

Both views entail that we have no prerogative to violate rights, even self-interestedly. We don't have to agree with this idea. But if we do, then we can explain the presence of a comparative prerogative in Dorsey's case, which turns out to be more complex than Kamm's. Instead of a difference in marginal costs, we find a difference in the significance of the same marginal cost of one shiny new car. The prospect of new wheels does little to justify stealing, but seems to do a fair bit to justify spending on oneself over strangers. This is the same comparativity as in Kamm's case—only with a different source.

8 | MENU RELATIVITY

We have finally hacked our way through the paradoxes of supererogation. Let's recap.

The fundamental idea is that we build our way up to the outré many-option choices from what we already know about simple choices between pairs. We start with the classic Paradox: why may we choose a morally subpar option? The answer is that it is justified by a second factor, prerogatives (or non-moral reasons), which is independent of moral reasons and inherently permissive. That gives us the Prerogatives Principle: we may choose an option when the reasons and prerogatives in favor aren't outweighed by the reasons to do any alternative.

We then use this principle to generate pairwise judgments about our full set of options:

[Diagram: pairwise verdicts over the full set {Do Nothing, Save 1, Save 2, Keep Promise}]

This instantly yields Kamm's intransitivity: Do Nothing ~ Save 1 ~ Keep Promise > Do Nothing. We even find the same intransitivity with Horton's three options: Save 1 ~ Do Nothing ~ Save 2 > Save 1.

Moreover, the Prerogatives Principle helps us debunk Worse to Do Wrong. The reason why Save 1 is wrong, even though it is better than permissibly doing nothing, is that I have a prerogative to Do Nothing rather than save lives, whereas I have no prerogative to Save 1. A prerogative to keep my arms doesn't entail a right to sacrifice them however I like. By that same token, a prerogative to keep my arms doesn't entail a prerogative to keep them however I like. I have no prerogative to Do Nothing rather than keep my promise. Prerogatives depend on comparative harms: if two options are equally harmful to me, I have no harm-based prerogative to pick one over the other. That's why I may not Do Nothing rather than Keep Promise, just as I may not Save 1 rather than Save 2.

The result is a unified and minimal solution to the paradoxes. It uses the basic ideas that make supererogation possible, along with some easy pairwise judgments, to debunk transitivity and Worse to Do Wrong—no exotic extras, no controversial apparatus.

Now I want to clarify something. My solution sounds like something very controversial. It sounds like I am saying: when we add new options, we change the balance of reasons and prerogatives (cf. Kamm, 2007: Chapter 1, n.55). This idea has been defended by Lazar and Barry (ms), who think that adding Save 2 to {Do Nothing, Save 1} generates a new moral reason against Save 1, which makes it morally worse than Do Nothing. There's definitely something to this idea. (Isn't it bad to gratuitously let someone die? Doesn't the presence of the second child make Save 1 into a gratuitous letting-die?) But I think changing reasons are hard to defend, and not essential to solving our paradoxes.

On Lazar and Barry's view, reasons "change" in the sense that they depend on the menu:

Menu Relativity of Reasons and Prerogatives

The weight of the reasons and prerogatives to do x rather than y may differ depending on the availability of a further option z.

In particular, there isn't more reason to Save 1 rather than Do Nothing in a pairwise choice, but there is when Save 2 is on the menu. The presence of Save 2 makes Save 1 better than Do Nothing. That is why Save 1 is permissible in the pairwise choice but wrong with Save 2 on the menu.

Again, this sounds like my view. I have argued that Do Nothing compares more favorably to Save 1 in the presence of Save 2 (and less favorably, in the presence of Keep Promise). Pummer agrees:

it is a familiar feature of nonconsequentialist ethics that the moral status of an act can depend on which alternative acts are available. In this case, the presence of [Save 2] alters the moral status of [Save 1], thereby altering the way that [Save 1] and [Do Nothing] compare morally. (2016: 86-7)

Isn't this just Menu Relativity? Aren't we saying that Save 2 makes Save 1 worse, and that this explains why Save 1 becomes wrong?

No. We already have a simpler explanation. Save 2 makes Save 1 wrong, but leaves Do Nothing permissible, because only Do Nothing is protected by a prerogative. We don't need to posit changing reasons to capture this; it's implicit in defeat relations:

[Diagram: the pairwise defeat relations among Do Nothing, Save 1, and Save 2, with Save 2 > Save 1]

Because Save 2 > Save 1, we know that the reasons to Save 2 outweigh the reasons and prerogatives to Save 1 in a pairwise choice. Assuming that this doesn’t change when we add the option to Do Nothing, Save 1 will be wrong whenever Save 2 is on the menu, though Do Nothing needn't be.

Adding options, however, clearly does have some effect. Even on my view, the presence of Save 2 alters the way in which Save 1 "compares morally" to Do Nothing. But rather than a change in reasons and prerogatives, we have a divergence in deontic status. Only one option is made wrong by the new addition. This seems to be how Pummer sees the case, too:

...with the full choice situation in view, it is clear that there is something to be said against [Save 1] that cannot be said against [Do Nothing] or [Save 2]: the performance of [Save 1] constitutes a deliberate refusal to do something much better at no extra cost. This is a serious moral failing. (2016: 86-7)

The failing consists in how Save 1 compares to Save 2, not in how it directly compares to Do Nothing. Only Save 1 is gratuitously worse than an alternative; it alone is defeated. Similarly, in Kamm's case, only Do Nothing becomes wrong because it alone is defeated by Keep Promise. This explanation doesn't require any Menu Relativity.¹⁹

So we haven't used, and don't need, Menu Relativity to handle Kamm's and Horton's cases. All we need are commonsense reasons and prerogatives, which we already needed to solve the classic Paradox. By contrast, Menu Relativity can play no role in that Paradox, since it concerns a single fixed menu, {Do Nothing, Save 1}.

That's why it's important to distinguish my view from the Menu Relativist's. Not only are the views different: mine is more unified. I am trying to solve a whole cluster of puzzles using only the familiar idea of a prerogative not to harm oneself, and I don't appeal to menu-relative reasons, which makes the view much less controversial than it might have sounded.

But why is Menu Relativity so controversial? And why isn't my view, with its comparativity and intransitivity, just as fishy? As it turns out, the Menu Relative view has much stronger, and stranger, implications. To see why, we need to slow down and talk about what are sometimes called principles of "consistency" (Sen, 1993) or "rationality" (Sen, 2017: 63). A more neutral name for them would be "principles of menu-insensitivity." They require options to keep their deontic properties even as we alter the rest of the menu.

Our cases, as I've described them, respect two key principles of menu-insensitivity. First:

Property α (Basic Contraction Consistency)

If x is permissible to choose from a set of options S, then x is permissible to choose from any subset S* of S to which x belongs. (Sen, 2017: Chapter 1*6)

Property α says that a permissible option stays permissible even if we prune other options from the menu.²⁰ This is how things work in Kamm's and Horton's cases, as well as the combined case. Nothing is ever made wrong by the removal of an alternative; Save 2 doesn't become wrong when we take away the option to Keep Promise. The cases also obey:

Property γ (Basic Expansion Consistency)

Given some sets of options Sᵢ, if x is permissible to choose from any Sᵢ, then x is permissible to choose from the union of all Sᵢ. (Sen, 1993: 500)

For instance: Save 1 is permissible in pairwise choices with either Do Nothing or Keep Promise, and it remains permissible in the three-way choice.

Now here is the kicker. Properties α and γ, though consistent with my view, are deeply in tension with Menu Relativity. The upshot of the properties is that we may derive the facts about what's permissible from facts about what defeats what. More precisely: the properties hold just if the permissible options in any subset S* of S are exactly those that weakly defeat everything in S*.²¹ If either property is violated, then the permissibility facts are so menu-sensitive that we would need Menu Relativity to explain it. Consider an α-violation: x is permissible to pick from {x, y, z} but not from {x, y}. Do the reasons to do y outweigh the reasons and prerogatives to do x? The answer has to be: it depends. The reasons for y don't rule out x in {x, y, z}, but they do in {x, y}, so z's presence must make x more justifiable over y. Similarly, we need Menu Relativity to understand cases where Property γ doesn't hold. In fact, it is only in these two kinds of cases that we need Menu Relativity. So long as Property α and Property γ are obeyed, we can find some way to derive the judgments about what's permissible from a possible array of non-relative reasons and prerogatives.

This is bad news for the friend of Menu Relativity. The best kind of evidence for their view is missing in our examples, where there is no violation of α or γ in sight. Moreover, given their view, we should expect to find violations of both of these principles, somewhere or other. That is a significant cost, especially in the case of α, since it is quite plausible that pruning the menu shouldn't make an option wrong. As Vallentyne & Tungodden (2005: 143) ask: "If [x] is a winner against [y] and z, why would it not also be a winner against y alone?"²²

Our examples do exhibit one kind of menu-sensitivity. Rather than offending against α or γ, they infringe on their less legendary cousin:

Property β

If x and y are both permissible to choose from a set of options S, then if x is permissible to choose from a superset S* of S, so is y. (Sen, 2017: Chapter 1*6)

β says: if two options are both permissible, then adding an option can't make only one of them wrong. Equally permissible options are beaten by the same things. Clearly this isn't how things work in our paradoxes, where Save 2 makes only Save 1 wrong, as Keep Promise does to Do Nothing.

Is this evidence for Menu Relativity? Interestingly, no. When β fails, the upshot isn't any kind of relativity—it's intransitivity! Property β is equivalent to what I call:

Transmission Over Ties


If x > y ~ z, then x > z.²³


And this is exactly the principle to which Kamm's and Horton's cases are counterexamples. Recall:

[Diagram: the pairwise verdicts recalled from above]

Here we have a smorgasbord of Transmission failures. (For example: Save 2 > Save 1 ~ Do Nothing ~ Save 2.) And it is easy to see why defeat doesn't transmit across these tied options. Since we are dealing with two dimensions, the tied options are relevantly different; there is more reason to Save 1, but a much greater prerogative to Do Nothing. When reasons and prerogatives are mismatched, tied options can be defeated by different things, which means that β fails and intransitivities abound.
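
The same pairwise test can be mechanized to check the Sen-style properties directly. The sketch below (again reusing the toy weights from the table in §4, purely for illustration) builds the choice function from pairwise comparisons and confirms that α and γ hold over the Horton menu while β fails:

```python
from itertools import combinations

# Toy weights from the table in section 4 (illustrative only).
reasons = {"Do Nothing": 1, "Save 1": 5, "Save 2": 10}
prerogs = {"Do Nothing": 10, "Save 1": 0, "Save 2": 0}
options = list(reasons)

def choose(menu):
    """The permissible options in a menu: x survives iff no alternative's
    reasons outweigh the combined reasons and prerogatives to do x."""
    return {x for x in menu
            if all(reasons[y] <= reasons[x] + prerogs[x] for y in menu)}

menus = [set(m) for r in (2, 3) for m in combinations(options, r)]

# Alpha: a permissible option stays permissible in any submenu containing it.
alpha = all(x in choose(sub)
            for menu in menus for sub in menus if sub <= menu
            for x in choose(menu) if x in sub)

# Gamma: an option permissible in each of two menus is permissible in their union.
gamma = all(x in choose(m1 | m2)
            for m1 in menus for m2 in menus
            for x in choose(m1) & choose(m2))

# Beta: if x and y are both permissible in a menu, then in any supermenu,
# x is permissible only if y is too.
beta = all(not (x in choose(sup) and y not in choose(sup))
           for menu in menus for sup in menus if menu <= sup
           for x in choose(menu) for y in choose(menu))

print(alpha, gamma, beta)                          # True True False
print(sorted(choose({"Do Nothing", "Save 1"})))    # both permissible
print(sorted(choose(set(options))))                # Save 1 drops out
```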

This is a rather nice result. My approach to the paradoxes, based on the Prerogatives Principle, fits with the failure of β, and unlike Menu Relativity, it doesn't conflict at all with the more plausible α and γ. Why no conflict? Because the approach is all about pairs. To figure out if x is permissible, just ask, for each alternative y: do the reasons to do y outweigh the combined reasons and prerogatives to do x instead? If the answer is always "no," then x is permissible. If we run this test in Kamm's and Horton's cases, we get the exact answers we wanted simply by considering the reasons and prerogatives at work in pairwise choices; this leaves us with no need to posit Menu Relativity. All we need to give up is Property β, and its equivalent, Transmission Over Ties, which is no great loss. Any view that vindicates our intuitions will have to lose β and Transmission. The goal is to give up only these principles, and keep their more plausible relatives.

In that same vein, just as my approach lets us keep α and γ, it avoids the more scandalous kinds of intransitivity. In particular, we may hang on to:

Acyclicity

If x > y > ⋯ > z, then it's not true that z > x.

What a relief. The possibility of cycles is legendarily controversial (cf. Temkin, 2012), since they leave the agent with no undefeated option. Transmission failures entail no such moral doom.

Menu-Relative reasons are unnecessary for treating Horton's case; they are costly to posit; and they aren't motivated by any more basic ideas about supererogation. That is why I prefer a solution that uses non-relative (but still comparative!) reasons and prerogatives.

Let me note one last advantage. My view gives a unified account of both Horton's and Kamm's cases. Menu Relativity can at best explain Horton's. On Lazar and Barry's view, adding the option to Save 2 makes Save 1 morally worse vis-à-vis Do Nothing, as Save 1 now involves letting the second child die for no reason. That strikes me as a reasonable proposal. But we can't extend it to Kamm's case.

Throwing Keep Promise into the mix does make Do Nothing worse, but it also makes Save 1 worse. Since both options become ways of breaking the promise, they are worsened equally. So what explains why Keep Promise makes only Do Nothing wrong? If we really wanted to stick with Menu Relativity, we could try to cook up a new answer. But I think we should instead go back to our Supererogation ABCs: take each option pair by pair, weigh up the reasons and prerogatives, and watch as the paradoxes unravel.

9 | CONCLUSION

There are two kinds of objection to the possibility of supererogation (Dancy, 1993b). The first is that the very concept of supererogation is confused or problematic. The second is that the concept, however coherent, is empty, because morality wouldn't allow anyone to do less than best.

My concern here has been the conceptual objections—the paradoxes. I have argued that these shouldn't scare us, and that we can construct an elegant account of supererogation using nothing more than two dimensions—reasons and prerogatives, or flavors of reasons—so long as we allow for comparativity. We don't need baselines, yet more dimensions, Menu-Relative reasons, or cycles of defeat. The solution is simple: you must do best unless you have a prerogative not to. You have to save lives, for example, unless it harms you much more than the alternative.

Solving paradoxes can be fun. But in this case it has another purpose. We are clearing the ground for a more fruitful debate about which acts (if any) are supererogatory. Does a non-moral reason to keep one's arms outweigh the moral reasons to save a life? Do we have any prerogative to eat the meat of non-human animals? To turn away refugees? Pollute the oceans? Or are we forbidden from doing these things? These questions are urgent and difficult. But we might see them in a new light, or at least with more clarity, if we can just get them out of the shadow of paradox.²⁴

ENDNOTES

¹ When I say that supererogation is "better," I just mean that there is more reason to do it, not that it has nice effects or merits praise. Later I will discuss views on which supererogation isn't better all things considered, but only morally better. None of my arguments depend on how we define 'better' or 'supererogatory'.

² Some discussions of the paradox also link reasons to some other notion, like praise (Raz, 1975: 164) or goodness (Heyd, 2016: §3.1). By contrast, I am not making substantive claims about how value links up to reasons; I use 'better' only as a snappy way to say 'more reason' (see n.1).

³ Another proposal is to assign each option an interval on the real line, not a point, and use this rule: an option is wrong just if its interval lies wholly below that of another option (cf. Gert, 2004: 505). But this proposal can't handle the combined case (see §1, above), where the options are {Keep Promise, Do Nothing, Save 2, Save 1}. This case intuitively violates:

Interval Order Property

If x₁ > x₂ and y₁ > y₂, then either x₁ > y₂ or y₁ > x₂.

And no relation that violates this property can be represented with intervals and the proposed rule for determining wrongness (see Fishburn, 1970: 20-23; Rabinowicz, 2008: 33n.23).

⁴ For other 2D reasons-based views of supererogation, see Raz (1975), Dancy (1993a) (critiqued in Postow, 2005), Bedke (2011) (critiqued in Snedegar, 2016), and Portmore (2019) (whose views have changed since his Commonsense Consequentialism, in 2011). Some argue that supererogatory beneficence is favored by "merely commendatory" reasons, which make options choiceworthy without requiring them (Dreier, 2004; Horgan & Timmons, 2010; Little & McNamara, 2017). The stock objection: this view permits us to let people die even when the cost is negligible (Archer, 2016: 460; Dorsey, 2013: 361). Horton's case is a special instance of this: Save 1 seems wrong, but it would be permissible if the reasons to Save 2 were merely commendatory.

⁵ Hurka and Shubert (2012) prefer "prima facie permissions." ('Prerogative' is from Scheffler, 1982; Slote, 1984; and Kamm, 1996). But just as nothing turns on how we define 'supererogation', nothing turns on how we define 'reason' and 'prerogative'; the substantive ideas here are justifying, favoring, requiring, and permissibility.

⁶ Some say we have a basic prerogative to do what's prudent (Scheffler, 1982; Slote, 1984; Hurka & Shubert, 2012); others derive prerogatives from rights against harm (e.g. Benn, 2017; I am also happy to derive prerogatives from other rights, like body and property rights; see Muñoz ms.).

⁷ Worse to Do Wrong is endorsed in some form by Darwall (2013), Ferguson (2016: 959n.23), Horton (2017: 96) (who says we "ought rather" to do any permissible act over any wrong one), Ferguson and Köhler (2019) (as "reasons monotonicity of permissibility"), and Portmore (2019: 22) (who thinks the moral reasons must always favor a morally permissible act over a wrong alternative, or else permissions couldn't depend on reasons). Two important exceptions are Bader (2019) and Pummer (2019). For discussion of cases like Horton's, see Parfit, 1982; Kagan, 1989: 16; Tadros, 2011: 161-62; Portmore, 2011: 147; and Snedegar, 2015: 379. Barnett (ms.) discusses some fascinating examples that make trouble for the analogue of Worse to Do Wrong in decision theory.

⁸ Sinclair (2018: 49) appears to share Horton's view of the building case.

⁹ This case also shows that we can't recover the facts about what's permissible from the sum of reasons and prerogatives in favor of each option. (Do Nothing has a higher sum than Save 2, but only Save 2 makes Save 1 wrong.) Prerogatives and reasons play different roles; only reasons can require.

¹⁰ See Kagan, 1991: 927-28; Portmore, 2011: Chapter 5; Hurka & Shubert, 2012: n.7.

¹¹ Indeed, even non-moral parity makes trouble for Worse to Do Wrong. For example, start with Tea ~ Coffee, then add a "sweetened" third option: Coffee+ > Coffee. Plausibly, if Coffee+ is just a slight improvement, we have Tea ~ Coffee+. But that means a wrong option, Coffee, is on a par with (and therefore no worse than) a permissible option, Tea. In such cases, parity is "insensitive to mild sweetening" (Hare, 2010). Sweetening is a recipe for counterexamples to Worse to Do Wrong.

¹² One limitation of the parity view, however, is that, since Save 1 and Do Nothing are on a par, we can't say that Save 1 is better. This is one reason why I prefer prerogatives to non-moral reasons.

¹³ Kamm's (1985) hero gives a kidney rather than losing arms. I ignore this cosmetic difference.

¹⁴ Tying is also intransitive. Indeed, if weak defeat is intransitive, complete, and reflexive, then either tying or defeat must be intransitive, though not necessarily both (see n.23). Kamm (1985) herself doesn't use 'tie', and her term for weak defeat is 'may permissibly take precedence over'.

¹⁵ Archer (2016: 445) and Portmore (2017: 294) bring out the oddness in terms of reasons. If the reasons in favor of x can justify acting against the reasons for y, and same for y's reasons against z's, then why can't the reasons in favor of x justify acting against the reasons in favor of z?

¹⁶ Kamm (1996: 336) is sensitive to the worry that we may have to save lives, noting that the proposed view does not explain "why it is obligatory to save a life at small cost to ourselves rather than keep an important business appointment." She then suggests, roughly, that costless lifesaving is supported by stronger moral reasons than promise-keeping. I'm not sure that this will help in the four-option case, since Save 2 isn't costless, and yet it is favored by strong moral reasons.

¹⁷ Let me be clear: I am not saying that prerogatives are "essentially comparative" in Temkin's sense (2012: 371). That would mean that the weight of the prerogative to do x over y is not a function of the options' intrinsic properties even taken together. But for all I have said so far, the prerogative's weight might be a function of how much x and y harm the agent.

¹⁸ See Archer, 2016; Portmore, 2017. Dorsey doesn't himself mention transitivity or Kamm's paper. I have cleaned up his cases so that they fit into a single three-option choice.

¹⁹ Interestingly, in our cases, it does seem menu-relative whether two options have the same deontic status. For example, Do Nothing and Save 1 are both permissible in a pairwise choice, but only Do Nothing may be chosen from {Do Nothing, Save 1, Save 2}. I don't deny this. When I say I reject "Menu Relativity," I am only denying that z's presence can change the reasons and prerogatives to do x over y. Our two factors are independent and comparative, but not themselves menu-relative.

²⁰ Property α, also known as "The Independence of Irrelevant Alternatives" (IIA), is often mixed up with nearby ideas. Kamm (1996: 344) confuses IIA with its homophone in social choice theory (which says: society's preference over a pair depends only on how individuals rank that pair; see Arrow, 1951; Sen, 2017: Chapter 3). Rulli and Worsnip (2016) define IIA in a way that entails Property β.

²¹ I assume that, for any subset S* of our (finite!) set of options S, there is an option that would be permissible in a choice from S*; in such conditions, we say that there is a choice function f defined over S; α and γ are properties of f necessary and sufficient for f to be generated from a non-menu-relative relation '≳', such that x ∈ f(S*) just if x ∈ S* and, for all y in S*, x ≳ y (Sen, 1993: 499-500).

²² There is also Morgenbesser's joke (adapted from Kamm, 1996: n.31):

BARTENDER: Would you prefer red or white wine?

PATRON: White, please.

BARTENDER: I forgot: we also have beer.

PATRON: In that case, I'll take the red.

For some potential counterexamples to α, see Pettit, 1991; Sen, 1993; Kamm, 1996: 343.

23{}^{23} This equivalence holds given that α\alpha and γ\gamma obtain and that a choice function is defined (see n.21). Sen (2017: 66) elegantly explains the relation between Transmission Over Ties and transitivity. Transmission is strictly weaker than the transitivity of ‘ \gtrsim ’,which is equivalent to the conjunction of Transmission with the transitivity of ‘ >> ’. (Assuming that ’ \gtrsim ’ is complete and reflexive.)

24. This paper began as a chapter of my dissertation entitled "Supererogation and Rational Choice: Incommensurability, Intransitivity, Independence." It has come a long way. My thanks to Kieran Setiya, Tamar Schapiro, Caspar Hare, Kerah Gordon-Solmon, Jack Spencer, Anders Herlitz, Yael Loewenstein, Oli Rawle, Tom Hurka, and Theron Pummer for invaluable comments on drafts. For discussion, I thank Joe Bowen, Kelly Gaus, Jocelyn Wang, Katie Steele, Sam Dishaw, Justin D'Ambrosio, Brian Hedden, Steve Yablo, Anni Räty, Frances Kamm, Quinn White, Justin Khoo, Kida Lin, Nathaniel Baron-Schmitt, David Builes, Toby Handfield, Joe Horton, David Barnett, and audiences at the Australian National University and St Andrews's Centre for Ethics, Philosophy, and Public Affairs. I am also grateful to Seth Lazar and Al Hájek, who sponsored my visit in 2018 to the ANU, where I started this project. Finally, my sincere thanks to the editors of Noûs, and especially to an anonymous referee, whose supererogatory comments led to many improvements.

ORCID

Daniel Muñoz | https://orcid.org/0000-0001-6175-5675

REFERENCES

Archer, A. (2016). Moral obligation, self-interest, and the transitivity problem. Utilitas, 28, 441-464. https://doi.org/10.1017/S0953820816000091

Archer, A. (2018). Supererogation. Philosophy Compass, 13. https://doi.org/10.1111/phc3.12476

Arrow, K. (1951) [1963]. Social choice and individual values (2nd ed.). New York: Wiley.

Bader, R. (2019). Agent-relative prerogatives and suboptimal beneficence. In M. Timmons (Ed.), Oxford studies in normative ethics (Vol. 9, pp. 223-250). Oxford: Oxford University Press.

Barnett, D. (ms.). Graded ratifiability.

Bedke, M. (2011). Passing the deontic buck. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 6, pp. 128-152). Oxford: Oxford University Press.

Benn, C. (2017). Supererogatory spandrels. Ethics & Politics, 19, 269-290.

Bradley, B. (2006). Against satisficing consequentialism. Utilitas, 18, 97-108. https://doi.org/10.1017/S0953820806001877

Chang, R. (2002). The possibility of parity. Ethics, 112, 659-688. https://doi.org/10.1086/339673

Dancy, J. (1993a). Moral reasons. Oxford: Blackwell.

Dancy, J. (1993b). Beyond the Call of Duty: Supererogation, Obligation, and Offence by Gregory Mellema. Philosophical Books, 34, 48-49.

Darwall, S. (2013). But it would be wrong. In Morality, authority, and law: Essays in second-personal ethics I (pp. 52-71). Oxford: Oxford University Press.

Dietrich, F., & List, C. (2017). What matters and how it matters: A choice-theoretic interpretation of moral theories. Philosophical Review, 126, 421-479. https://doi.org/10.1215/00318108-4173412

Dorsey, D. (2013). The supererogatory, and how to accommodate it. Utilitas, 25, 355-382. https://doi.org/10.1017/S095382081200060X

Dreier, J. (2004). Why ethical satisficing makes sense and rational satisficing doesn't. In M. Byron (Ed.), Satisficing and maximizing (pp. 131-154). Cambridge: Cambridge University Press.

Ferguson, B. (2016). The paradox of exploitation. Erkenntnis, 81, 951-972. https://doi.org/10.1007/s10670-015-9776-4

Ferguson, B., & Köhler, S. (2019). Betterness of permissibility. Philosophical Studies, Print version forthcoming. https://doi.org/10.1007/s11098-019-01319-8

Fishburn, P. (1970). Utility theory for decision making. New York: John Wiley & Sons.

Gert, J. (2004). Value and parity. Ethics, 114, 492-510. https://doi.org/10.1086/381697

Gert, J. (2007). Normative strength and the balance of reasons. Philosophical Review, 116, 533-562. https://doi.org/10.1215/00318108-2007-013

Hare, C. (2010). Take the sugar. Analysis, 70, 237-247. https://doi.org/10.1093/analys/anp174

Heyd, D. (2016). Supererogation. In E. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2016 Edition). Retrieved from https://plato.stanford.edu/archives/spr2016/entries/supererogation/

Horgan, T., & Timmons, M. (2010). Untying a knot from the inside out: Reflections on the ‘paradox’ of supererogation. Social Philosophy and Policy, 27, 29–63. https://doi.org/10.1017/S026505250999015X

Horton, J. (2017). The all or nothing problem. Journal of Philosophy, 114, 94-104. https://doi.org/10.5840/jphil201711427

Hurka, T. (1990). Two kinds of satisficing. Philosophical Studies, 59, 107-111.

Hurka, T., & Shubert, E. (2012). Permissions to do less than best: A moving band. In M. Timmons (Ed.), Oxford studies in normative ethics (Vol. 2, pp. 1-27). Oxford: Oxford University Press.

Kamm, F. (1985). Supererogation and obligation. Journal of Philosophy, 82, 118-138.

Kamm, F. (1996). Morality, mortality, volume II: Rights, duties, and status. New York: Oxford University Press.

Kamm, F. (2007). Intricate ethics: Rights, responsibilities, and permissible harm. Oxford: Oxford University Press.

Lazar, S. (2019). Accommodating options. Pacific Philosophical Quarterly, 100, 233-255. https://doi.org/10.1111/papq.12252

Lazar, S., & Barry, C. (ms.). Acting beyond the call of duty: Supererogation and optimization.

Little, M., & McNamara, C. (2017). For better or worse: Commendatory reasons and latitude. In M. Timmons (Ed.), Oxford studies in normative ethics (Vol. 7, pp. 138-160). Oxford: Oxford University Press.

McMahan, J. (2018). Doing good and doing the best. In P. Woodruff (Ed.), The ethics of giving: Philosophers' perspectives on philanthropy (pp. 78-102). New York: Oxford University Press.

Muñoz, D. (ms.). From rights to prerogatives.

Parfit, D. (1982). Future generations: Further problems. Philosophy and Public Affairs, 11, 113-172.

Parfit, D. (2011). On what matters, volume one. Oxford: Oxford University Press.

Pettit, P. (1991). Decision theory and folk psychology. In M. Bacharach & S. Hurley (Eds.), Essays in the foundations of decision theory. Oxford: Basil Blackwell.

Portmore, D. (2011). Commonsense consequentialism: Wherein morality meets rationality. New York: Oxford University Press.

Portmore, D. (2017). Transitivity, moral latitude, and supererogation. Utilitas, 29, 286-298. https://doi.org/10.1017/S0953820816000364

Portmore, D. (2019). Opting for the best: Oughts and options. Oxford: Oxford University Press.

Postow, B. (2005). Supererogation again. Journal of Value Inquiry, 39, 245-253. https://doi.org/10.1007/s10790-006-1344-3

Pummer, T. (2016). Whether and where to give. Philosophy and Public Affairs, 44, 77-95. https://doi.org/10.1111/papa.12065

Pummer, T. (2019). All or nothing, but if not all, next best or nothing. Journal of Philosophy, 116, 278-291. https://doi.org/10.5840/jphil2019116518

Rabinowicz, W. (2008). Value relations. Theoria, 74, 18-49. https://doi.org/10.1111/j.1755-2567.2008.00008.x

Raz, J. (1975). Permissions and supererogation. American Philosophical Quarterly, 12, 161-168.

Scheffler, S. (1982). The rejection of consequentialism: A philosophical investigation of the considerations underlying rival moral conceptions. Oxford: Oxford University Press.

Sen, A. (1993). Internal consistency of choice. Econometrica, 61, 495-521.

Sen, A. (2017). Collective welfare and social choice: Expanded edition. Cambridge: Harvard University Press.

Sinclair, T. (2018). Are we conditionally obligated to be effective altruists? Philosophy and Public Affairs, 46, 36-59. https://doi.org/10.1111/papa.12107

Slote, M. (1984). Morality and self-other asymmetry. Journal of Philosophy, 81, 179-192.

Slote, M. (1985). Common-sense morality and consequentialism. London: Routledge & Kegan-Paul.

Slote, M. (1991). Shelly Kagan's The Limits of Morality. Philosophy and Phenomenological Research, 51, 915-917.

Snedegar, J. (2015). Contrastivism about reasons and ought. Philosophy Compass, 10, 379-388. https://doi.org/10.1111/phc3.12231

Snedegar, J. (2016). Reasons, oughts, and requirements. In R. Shafer-Landau (Ed.), Oxford studies in metaethics (Vol. 9, pp. 183-211). Oxford: Oxford University Press.

Tadros, V. (2011). The ends of harm: The moral foundations of criminal law. Oxford: Oxford University Press.

Temkin, L. (2012). Rethinking the good: Moral ideals and the nature of practical reasoning. Oxford: Oxford University Press.

Tungodden, B., & Vallentyne, P. (2005). On the possibility of Paretian egalitarianism. Journal of Philosophy, 102, 126-154. https://doi.org/10.2307/3655706

How to cite this article: Muñoz D. Three paradoxes of supererogation. Noûs. 2020;1-18. https://doi.org/10.1111/nous.12326

Answer

下面按照“第一步”“第二步”这两大部分,分别拟定各自的小标题(即“三级标题”),并在每个标题下给出相应内容的叙述示例。为便于清晰展现,第一步将有四个三级标题;第二步也将有四个三级标题。由于您在第一步中已经明确了想讨论的九个层次要点(法国大革命、卢梭、康德、边沁的功利主义、边沁的“矫枉过正”、密尔的改进、对功利主义改进的思想实验(Panopticon问题)、当代社会更显著的缺陷暴露、以及回到自由意志与决定论的思路),这里将它们合理整合进四个三级标题之中。然后,在第一步的后两个三级标题(第三、第四)中,会依照您所说的“请你按照论证目的设想后面两个三级标题”来进行拟定和展开。


(一)第一步:从“公意暴力”到“责任个体化”之路径考察

1. 公意暴力与法国大革命的困境

标题缘由

  • 您希望第一步首先讨论法国大革命的“公意暴力”及其所面临的困境。这可被视为探究“集体意志”或“公意”在实践中走向极端化、导致道德责任分散或模糊化的早期经典案例。

内容示例

  • 公意概念的反噬:法国大革命期间,“公意”(volonté générale)一度被理解为必须绝对服从的主导力量。一旦某个派系自诩代表“公意”,它便可能借此合理化暴力、清洗乃至极端恐怖政策(如雅各宾派的恐怖统治)。
  • 困境所在:当“公意”被片面地等同于某种抽象或激进的主流意见,其结果往往是不容异议的高压手段,“暴力即合理”也得以蔓延。这里蕴含的“道德责任”问题在于:在如此强调集体信念或目标的氛围里,个人很容易将自身责任“稀释”到所谓“集体的整体意志”之中,从而削弱了对个体行为的伦理检视。

2. 卢梭与大型政治实验的理论鼓动

标题缘由

  • 卢梭被视为法国大革命主要思想根源之一。您希望在此说明卢梭及其所提出的理论在何种意义上可以看作这场政治实验的“教唆者”。

内容示例

  • 社会契约理论与人民主权:卢梭在《社会契约论》中提出了“公意”(volonté générale)的核心观点,强调人民主权不可分割且必须无条件服从;这在理论上为“一体化”的激进实践提供了正当性框架。
  • 内在张力:卢梭原初构想中虽包含对“公意”理性化、正当化的期盼,但被当时的革命者解读为“真正的公意永远高于个人意志”,甚至引发了“大革命中公意至上、以至于公意暴力化”的偏狭实践。
  • 教唆者之争议:将卢梭称作“教唆者”并非指其本意就是倡导暴力,而是指他的理论为当时的革命思潮提供了“集体意志可以凌驾一切个体”的哲学话语支撑;后人在实践层面将其激进化,遂导致个人责任在“公意”面前被大大弱化。

3. 康德的“普遍原则”与责任个体化的隐忧

标题缘由(根据您的“后面两个三级标题”之需求)

  • 您提及第三步想说明:康德是卢梭的某种意义上的追随者,他用“普遍原则”试图修正公意专断的问题(例如大革命时期对“公意”的误用),但同时也留下了“责任过度个体化”的缺陷。

内容示例

  1. 康德对“公意”的修正

    • 康德确实推崇卢梭的人文关怀与公意理念,但他在道德哲学中强调“理性的普遍律令”(如《道德形而上学奠基》中的“定言令式”),试图将任何政治或道德行动都置于可普遍化的理性要求下。
    • 这在理论上可以纠正法国大革命时那种“谁都可自诩代表公意并打压异见”的滥用,因为康德认为只有经由理性普遍化检验的法则才具备道德正当性。
  2. 责任的过度个体化

    • 不过,康德式的“自律”与“自我立法”也带来了另一个极端:当一切责任都被认为是“每个理性主体自我反思、严格自律的产物”,实际上可能弱化对群体情境、社会结构等外在因素的关注。
    • 在某些情形下,这种“过度个体化”会导致对社会或制度性不正义缺乏整体批判,也为“只要我个人遵从了绝对善意志”就足以免责的倾向提供了空间。结果仍可能出现某种形态的“责任分散”,只是和法国大革命时期“集体淹没个体”的方向相反,这里却成为“个体只顾自身良心,不管结构性后果”的另一种盲区。
  3. 连接到后续理论演进

    • 正是在此张力下,后来者(例如边沁、密尔)才转向另一条路:用功利主义框架来重新衡量“个体行为的道德价值”。康德思想与卢梭思想的复杂交融,推动了近代西方道德与政治理论的下一波发展。

4. 从边沁的功利主义到当代缺陷暴露:矫枉过正与改进之循环

标题缘由(根据您的“后面两个三级标题”之需求,结合第五步到第九步的讨论)

  • 您在后面特别指出:
    • 第四步要举一个其后约一个世纪的历史事例,说明边沁所提出的功利主义理论如何更清晰地暴露了危机的实质;
    • 第五步是边沁的“矫枉过正”;
    • 第六步是密尔的改进;
    • 第七步用思想实验说明改进仍会带来“圆形监狱”(Panopticon)的问题;
    • 第八步当代社会又使义务论与功利主义的缺陷进一步暴露;
    • 第九步则提出要想真正弥补缺陷,需要回到自由意志与决定论的讨论。
  • 因此在本小节中,可将这几部分串联起来,以“矫枉过正—修正—再陷困境—再思反省”的方式展现功利主义和义务论在实践中的拉锯与循环。

内容示例

  1. 边沁功利主义的历史检验:危机更为凸显

    • 边沁(Jeremy Bentham)在法国大革命前后提出以“最大多数人的最大幸福”来判断道德或法律正当性;在其后的一个世纪里,这一理念在立法与社会改革中得到广泛应用。
    • 但从历史案例看,过度强调“结果最大化”常使少数人利益被忽视,或者容易在短期功利计算中忽略长程后果,导致制度设计虽初衷在“幸福最大化”,却可能产生新的权力滥用或弱势群体被牺牲的现象。
  2. “矫枉过正”与密尔的改进

    • 边沁理念一度“矫枉过正”,为追求效率或总幸福,有时忽略了个体权利或尊严。
    • 密尔(J. S. Mill)在此基础上引入对个体自由的保障,并区分快乐在“质”上的高低差异,使功利主义更“人性化”,但也因此不得不为“幸福”提供更复杂的衡量标准。
  3. 思想实验与“圆形监狱”隐喻

    • 即便经密尔改进后的功利主义,仍然可能在制度设计中衍生出“圆形监狱”(Panopticon)式的全面监控和权力配置:边沁当年设想的监狱模型,初衷是高效率监管、节省人力,却暗含着对个体隐私与自主的严重消解。
    • 类似的思想实验表明,即便我们在理论上融合了对自由和幸福的考量,在具体实施中,若制度性安排过度集中于“最优结果”而无视个体处境的复杂性,也会形成另一种形式的“道德责任淡化”——因为人们可能将责任归于“系统”或“最优算法”,而忽略自身的伦理关切。
  4. 当今社会对义务论与功利主义缺陷的进一步揭示:回到自由意志与决定论

    • 在大数据、算法决策、全球性政治-经济结构更加复杂的时代,传统义务论(康德取向)与功利主义(边沁/密尔取向)都面临新的挑战:前者被批评忽视情境的系统性,后者被批评把道德化约为效用核算。
    • 要克服这些缺陷,或至少找到平衡点,就必须回到对自由意志与决定论的再讨论:究竟个体在多大程度上对其所处之社会结构、技术环境等负有责任?若自由意志并非绝对,那么“分散在制度或科技系统”之中的责任又该如何认定?
    • 这正为后续(第二步骤)的更深入分析埋下伏笔:只有进一步澄清个人自主与结构性情境的互动,才能避免“责任分散”成为推卸道德义务的借口,也才能真正甄别什么时候是“不可抗”或“情境注定”,什么时候又是个体“故意或漠视”。

(二)第二步:从“超义务悖论”到“社会联系”模型——多维化责任的再思

这一部分将讨论伦理学中“超义务悖论”(supererogation paradox)及其引申出的各类难题,如霍顿(Horton)提出的“全无或全有悖论”(All or Nothing Problem)与卡姆(Kamm)提出的“不可传递性悖论”(Intransitivity Paradox),并结合学界最新研究(如您提供的【Muñoz的论文】),探讨多维度或技术性解决方案所带来的道德责任消解风险。最后,从艾里斯·玛丽恩·杨(Iris Marion Young)的“社会联系模型”切入,为重新界定“分散责任”与“协作行动”提供思路。以下四个三级标题范例:

1. 超义务悖论与其衍生争议

内容示例

  • 概念梳理:什么是“超义务行为”(Supererogation)?一般指超出常规道德要求的自愿善行,如在不违背任何道德底线的前提下做出“更好”的选择。
  • 三大典型难题:依据【Muñoz的论文】等学界讨论,围绕超义务行为至少存在以下三重难题:
    1. “经典悖论”:若某行为道德上更好,为什么不变成义务?
    2. 霍顿的“全无或全有问题”:当行动者能够以同样代价救两人时,只救一人似乎是错的;但只救一人又似乎并不比被允许的“谁都不救”更糟,这与“做错的事总归更糟”的直觉相冲突。
    3. 卡姆的“不可传递性悖论”:在逐对比较中,选项 x 可被允许地取代 y,y 可被允许地取代 z,但 x 却不可被允许地取代 z;这种“可容许取代”关系的非传递性,使人难以说明哪些行为才是“可容许的次优选择”(其形式结构见下方示意)。
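下面给出一个极简的形式化示意(记号为本书所拟,对应 Muñoz 文中作为弱击败关系的“可被允许地优先于”;此处只刻画结构,并非原文公式):

$$x \succcurlyeq y,\qquad y \succcurlyeq z,\qquad \text{但并非}\ x \succcurlyeq z,$$

其中 $a \succcurlyeq b$ 表示“在 a 与 b 的两两选择中,选择 a 是可容许的”。也就是说,这一“可容许取代”关系在卡姆式案例中不满足传递性。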

2. 多维度/比较性解决方案的提出:从理论到实践的困境

内容示例

  • 理由与特权(prerogatives)的多重维度:一些当代理论(参考【Muñoz】)主张道德评判不再只有单一“更好/更坏”维度,还要考量非道德理由(如自我保护、个人自主)或“特权”权重,从而解释为什么“次优”却仍然不算错误(其大致结构可参见本小节列表之后的形式化示意)。
  • 霍顿和卡姆难题的“统一式”回应:当道德评判须考虑多重且独立的理由向度时,就可能出现“做得更好”却未必“必须做”的情况,也会出现“看似不做更好选项仍可容许”而又与常识冲突的状况。这些模型能较好地对应霍顿“全或全无”与卡姆“不可传递”这类悖论。
  • 实践困境:然而,一旦引入多维度,理论解释将变得复杂,解释成本增加,也为“把责任分散到不同维度”带来空间。例如,个人可声称自己在“道德维度”虽不完美,但在“个人特权维度”仍有正当性,从而模糊了整体责任判断。
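为直观起见,下面给出本书对这类“双因素比较性”方案的一个示意性重构(记号与不等式均为本书所拟,仅用于刻画论证结构,并非 Muñoz 原文的表述):

$$y\ \text{击败}\ x \;\Longleftrightarrow\; R(y,x) > R(x,y) + P(x,y),$$

$$x\ \text{可容许} \;\Longleftrightarrow\; \text{不存在击败}\ x\ \text{的备选项}\ y,$$

其中 $R(a,b)$ 表示支持在 b 之上选择 a 的道德理由之分量,$P(a,b)$ 表示行动者在 a 与 b 之间选择 a 的个人特权之分量。当 $P(x,y)$ 足够大时,“更好”的 y 未必击败“次优”的 x,这正是“次优却不算错”得以成立的结构性条件。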

3. 超义务解决方案的反噬:道德责任的消解风险

内容示例

  • 复杂化导致的“主体不明”:当我们为了容纳更多伦理直觉或特殊情境(如非道德理由)而建立多维体系时,反而容易出现“各维度间难以衡量、主体责任难落实”的局面,使得“谁该负责”或“是否真的有明确义务”变得模糊。
  • 技术化与结构化的推脱:随着社会高度分工与技术工具(算法决策、远程协同等)的发展,原本多维度的道德考量更易被推脱为“系统的综合判断”。这样一来,“责任分散”就不只是个人的自我辩解,而是整个机制或结构在替人做决策,从而削弱了对行为人的核心道德约束力。

4. 从“社会联系模型”到协作行动:重新定位分散责任

内容示例

  • 杨(Iris Marion Young)的社会联系模型:杨认为,当不公正是由结构性过程(即制度环境中众多人的联合行为)产生时,我们需要超越传统的直接因果追责和“罪恶感”导向(向后看)的问责方式,转而关注面向未来的“补救责任”(forward-looking responsibility)。
  • 分散责任不再是逃避,而是协作的起点:在杨的模型中,正因为任何结构性不公正背后都有无数“小行为”与社会要素的共同作用,所以每个人都有一定的道德责任去参与改变;但这责任不是单纯的“谁犯了错就该罚”,而是“谁能对现状改进有所贡献,就应站出来行动”。
  • 对“责任分散”的正面逆用:换言之,如果“责任分散”仅被用来推卸个体义务,则会造成道德评价的瓦解;但若按照杨的思路,将分散责任视为“所有身处此结构中的人,都有呼吁并参与变革的道德义务”,便能重新激发协作伦理。
  • 与超义务悖论的对话:杨的方案表明,就算在多维度下个体难以确定自己有没有“必须做的最佳行动”,至少可以朝“共同承担并改进结构”这一方向努力,从而避免在“可做更好却不做”与“无需做”之间陷入悖论。

小结

通过以上对“第一步”和“第二步”中各四个三级标题(小节)所做的拟定与内容示例,可以在宏观上形成如下脉络:

  • **第一步(四个三级标题)**着重从法国大革命时“公意暴力”与卢梭思想开端,谈到康德对普遍原则的强调,再到边沁、密尔等功利主义思路的出现与改进,最后联系现代社会对义务论与功利主义的双重反思并回归对自由意志与决定论的探讨。这条线索帮助我们理解道德责任如何在历史演变和理论争鸣中被稀释、分散、再度被强调又再度被冲击,进而昭示出在现代复杂社会中“责任”本身所面临的严峻挑战。

  • **第二步(四个三级标题)**则更直接地聚焦当代伦理学的“超义务悖论”,以及由此引申的霍顿“全或全无”难题与卡姆“不可传递性”难题,并讨论学界在多维度理由、比较性理由、技术化等方面的种种应对。最后引入杨的“社会联系模型”,从“向前看”的补救责任和协作行动视角,对“分散责任”进行了重新定位:与其逃避,不如凝聚;与其在悖论和复杂理论里陷于僵局,不如寻找可行的政治哲学路径,敦促所有相关主体共同承担改善不公的责任。

如此,两大步骤的展开既能够避免过度泛化“道德责任分散”概念,也能为后文进一步的细节论证提供全景脉络。希望以上的标题与示例性叙述能为您的研究专著写作提供思路与框架支持。
