Saving Mill’s Utilitarianism

Some ideas have the force of a runaway trolley. When they are first proposed, they are vigorously endorsed and maligned by diverse, forceful personalities. Then they enter the crucible of development, where they are battered by intense scrutiny. Even if the ideas are eventually abandoned, they will have left an imprint upon the centuries, like the corpse of an elder god washed up upon the beach. We gain more from poking and prodding at its corpse than we do from shaking hands with its successors.

Utilitarianism, for example, is just any ethical theory that conforms to the slogan: “Do whatever produces the greatest happiness for the greatest number”. Utilitarianism has been attacked from all sides, but it retains a close following. It is a beloved treasure among compassionate naturalists and bean-counting social engineers, and critiqued by both lazy romantics and sensitive sophisticates. It is used as an intuition-pump for the sympathies of secularists, just as much as it is used to sanction torture in ticking time-bomb scenarios.

The doctrine has roots in the welfarism of David Hume and Aristotle, and owes a healthy dose of accolades to Epicurus. Its modern advocates come easily to mind: Peter Singer, David Brink, Peter Railton, Sam Harris. But it was not until the 18th-century reformer Jeremy Bentham and, after him, John Stuart Mill published their works that utilitarianism found articulation in its contemporary form.

Bentham defined the principle of utility in this way: “By the principle of utility is meant that principle which approves or disapproves of every action whatsoever, according to the tendency it appears to have to augment or diminish the happiness of the party whose interest is in question: or, what is the same thing in other words, to promote or to oppose that happiness.” For Bentham, the primary focus of moral inquiry was the rightness or wrongness of actions, measured in terms of their perceived consequences. Bentham’s utilitarianism is, hence, a form of consequentialism: the rightness and wrongness of acts are a function of their good or bad consequences, and nothing else.

The history of philosophy has not been kind to the Benthamites. A regiment of critics (including 20th-century notables like John Rawls, J.J. Thomson, Philippa Foot, Samuel Scheffler, and Bernard Williams, to name just a few) have rejected utilitarianism as a moral doctrine on a variety of grounds. And, on the whole, I think these critics have successfully shown that Bentham’s utilitarianism is riddled with absurdity. To the extent that utilitarianism belongs to Bentham, we must abandon utility.

Unfortunately, despite all the headway they have made against the Benthamites, critics have not shown much sensitivity to John Stuart Mill’s formulation of utilitarianism. It turns out that the John Stuart Mill we meet in freshman lectures may not bear much kinship with the John Stuart Mill who lived and breathed. So it’s worth noticing, and advertising far and wide, just how the standard picture of Mill is undergoing a rapid change.

For one thing, there is some confusion in the literature over whether Mill counts as an act- or a rule-utilitarian. It is not uncommon to hear his name paired up with one or the other, but rarely both (textual evidence be damned) — if there are any internal contradictions, then it is easy to think that they are a product of Mill’s incoherence, and not a failure on our part to be charitable. And I think Fred Wilson put it nicely:

Mill is … not an “act utilitarian” who holds that the principle of utility is used to judge the rightness or wrongness of each and every act. But neither is he a “rule utilitarian” who holds that individual acts are judged by various moral rules which are themselves judged by the principle of utility acting as a second order principle to determine which set of rules secures the greatest amount of happiness. For the principle of utility judges not simply rules, according to Mill, but rules with sanctions attached.

For another thing, it isn’t even clear whether Mill is a consequentialist at all. In the essay linked, Daniel Jacobson argues that Mill’s utilitarianism was non-consequentialist — which is roughly to say that Mill did not believe that we judge the good or bad consequences of acts while being indifferent towards the identity of the persons affected. Jacobson argues instead that Mill is best understood as an advocate of a commonsense doctrine he calls “sentimentalism” (where, roughly, an act is wrong just in case feelings of guilt are a suitable response to it).

And it’s certainly not the case that Mill was a consequentialist bean-counter, given his strong emphasis upon the importance of developing good character. As Mill remarks in On Liberty, while it is possible for a man to achieve a good life without ever exercising autonomy, this can only be to his detriment as a human being. To take just one of Mill’s quotes, which Kwame Anthony Appiah mentioned favorably (in The Ethics of Identity): “It really is of importance, not only what men do, but also what manner of men they are that do it.”

What can account for such massive neglect of one of utilitarianism’s fiercest defenders? It could be that utilitarianism has been assessed — and rejected — because it has been associated with its weakest proponents. If charity in interpretation has been lacking in our study of Mill, then we may now be seeing a sea change in the study of utilitarianism. I doubt that all of Mill can be salvaged — parts of his doctrine are a bit dotty. But it may be that the old god, Utility, still has some life in him.


Comments

  1. s. wallerstein

    Ben:

    I’ve never read Mill except for On Liberty, but it seems to me that your essay saves Mill, but not Mill’s utilitarianism.

    The points that you score in Mill’s favor, his sentimentalism and his belief that what matters is not only what men do but what manner of men they are, indicate how far Mill strays from standard utilitarianism.

  2. Maybe, maybe not. We’ll see. But there are a few reasons to think that the tide is turning in the other direction.

    It seems to me that it is absolutely beyond dispute that Mill does not fit comfortably as either a rule- or act-utilitarian. Even a cursory (charitable) reading of “Utilitarianism” ought to leave no question on that matter. I first read “Utilitarianism” in high school, ripping it off the shelves at my local Chapters and reading it during lunch hour. Then when I took my first Ethics class in university, I was surprised to hear that he was being pigeonholed into one category or the other. Moreover, my professor, when confronted with the text, wasn’t able to defend the traditional interpretation.

    Also, his remarks on character in “On Liberty” are not tremendously different from those he makes in “Utilitarianism”. Though virtue arises as a product of its ability to produce happiness and avoid pain, virtue may be “felt a good in itself, and desired as such with as great intensity as any other good… there is nothing which makes him so much a blessing to [others] as the cultivation of the disinterested love of virtue”. Hence, the utilitarian standard “enjoins and requires the cultivation of the love of virtue to the greatest strength possible, as being above all things important to the general happiness”. In other words, virtue is a good, and ranked highly on the hierarchy of goods.

  3. s. wallerstein

    Ben:

    I took your title as referring to Mill’s utilitarianism (the philosophy), but if I misread it and it refers to Mill’s book, “Utilitarianism”, then you’re saving it, to be sure.

  4. I’m not sure I see the difference — Mill’s theory of utility is expressed in his book by that name.

  5. Dear Mr Nelson,

    I cannot complain strongly enough.

    Your piece has no reference – whatsoever – to the headline of the week. And it has no relevance – at all – to the interests of crazy people. I further find it gives not the slightest encouragement to those who like to use CAPITAL LETTERS.

    Both Jeremy and yourself really need to stop getting so distracted by trivialities such as completing doctorates and writing books. Get yourselves back to the serious business of posting well-reasoned and well-written blogs, that’s what I say.

    Shame on you sir, there’s not been nearly enough of this kind of thing.

    yours sincerely

    Jim P Houston,

  6. Mr. Houston,

    Touché, sir — touché.

    Yours in perpetuity,
    Benjamin S Nelson,
    Master of Arts (MA)
    Bachelor of Arts (BA)
    Blogger on the Internet (BotI)

  7. Monsieur Nelson,

    pas de touché

    I have decided that your contribution is not a good blog posting after all but a well-written article of philosophical interest. You just happened to self-publish it online here.

    Regarding Daniel Jacobson’s paper: are you at all convinced that “any view that rejects deontic impartiality and agent neutrality is not consequentialist in the standard sense”? That defining consequentialism as the view (roughly speaking) that “whether an act is morally right depends only on consequences” allows some ‘unlikely theories’ to be deemed ‘consequentialist’ seems no objection to me.

    I don’t see that ‘Ethical Egoism’ – rendered as the theory that ‘an action is morally right if the consequences of that action are more favorable than unfavorable, only to the agent performing the action’ – can plausibly be classified as anything other than a consequentialist theory. And the same seems true of ‘Ethical Altruism’ – the theory that ‘an action is morally right if the consequences of that action are more favorable than unfavorable to everyone except the agent’. I don’t think we would be ‘drastically revising our taxonomy’ if we allowed theories like these to be termed consequentialist, and I fail to see what is wrong with a notion of consequentialism that leaves enough room for a few ‘unlikely’ theories. I’m just not buying the line that ‘deontic impartiality’ is necessary for consequentialism. Do you?

    I beg to remain, Sir, your most humble and obedient servant,

    Mr Jim P Houston
    MA (Hons)
    Had an article published online once
    Err..

  8. Jim, I’m convinced of it… but I’m not sure that’s saying very much. Remember, “consequentialism” is a doctrine that was defined by its critics (by GEM Anscombe, I believe, in reference to Henry Sidgwick). And agent-neutrality is the stalking-horse in Sam Scheffler’s polemic against consequentialism. So if objecting to agent-neutrality seems like an easy way to kill off consequentialism, then that may be because the doctrine was made for killing.

    That said, Jacobson’s paper inherits terms of art that are highly technicalised. For my part, I don’t mind if we think of ‘consequentialism’ in non-standard terms, in such a way that it covers the possibility of agent-partiality. If that’s how we want to define consequentialism, then even Jacobson would agree that Mill is still a consequentialist — albeit, in a loose, unsatisfying, non-standard sense (discussed in reference to Sinnott-Armstrong, p.166).

    Perhaps here is one way of articulating Jacobson’s argument for ‘teleological theories’ in the language of consequences. As you recall, in philosophy of mind there is a distinction between ‘type-‘ and ‘token-‘ identity theories. A type-identity theory is one where types of mental states supervene on brain states: e.g., beliefs activate the belief-area of the brain. (This, it turns out, is mostly a false theory.) Then there are token theories: a token mental state will supervene on token brain states. (This is just to say: whatever the mental is, it’s always a brainy sort of thing.) The same logic could be used in favor of consequentialism — a “token consequentialism”, if you like — where we say, “whatever the right is, it’s supposed to be a good-making sort of thing”. That’s just to say that it’s what Jacobson called a teleological theory.

    Is Jacobson right to dislike the language of consequentialism when used in this way? I dunno — it depends on what we expect “consequentialism” to do for us. If we think consequentialism ought to provide us with a normative decision-procedure, then token consequentialism will turn out to be supremely frustrating. After all, we expected from the outset that we should be able to deduce the right from the good. But now all we’re being told is that the right maps onto the good in some way that we’ll have to figure out on independent grounds — in other words, normatively, we’re left adrift on the open ocean. On the other hand, if all we expect consequentialism to do is to tell us something about how to ground and clarify our moral convictions by putting them in objective terms, then token consequentialism might do the trick.

    As for altruism, I can go either way — it depends on the details. For instance, one way of defining altruism might be: ‘A commits to defer to B’s preference’. No matter how you cut it, this sort of altruism is not a consequentialist way of thinking. For there is no assumption that A wants B’s preference to obtain. There’s not even an assumption that A wants to defer to B. For that matter, it’s not even obviously the case that ‘A wants A to defer to B’s preference’, since the mere fact that they have a reason for action need not entail that their reason for action is preceded by the sense that it is a good. I might commit myself begrudgingly to defer to your preference, despite the fact that it causes me all sorts of pains, simply because I’m pathological like that. It might be only later on that I realize that I’ve done a good thing. That doesn’t mean I was aiming at good outcomes all along, even secretly, as much as it means I was getting on with the usual stupid routines in my life that I’m accustomed to. (We might think of this as a kind of Kantian altruism, embodying his characteristically demented idea of virtue.) It is not even clear, in this case, whether the right is supposed to be a good-making sort of thing.

  9. As long as act-utilitarianism is not sloppily equated with consequentialism in general, I have no great problem with desires to dispense with most of it, Benjamin. If utilitarianism is a framework that ultimately focuses only on individual utility (such as the pleasure something brings to a person), I say begone and good riddance! But if utilitarianism is simply seen as measuring as accurately as possible an act’s fitness for some total aggregate and common end (i.e. its final ‘utility’ for a defined group’s specified ‘desire’, conceptual or real), I object. The latter would reduce almost all consequentialist frameworks, including my own, evolutionism and the Basic Imperative, to a form of utilitarianism.

    Do I notice a certain disdain for those who do not purely trust intuition (and rationality applied to such intuitions) to judge the morality of an act? I will admit that I’m perhaps reading between the lines, but the term ‘bean counter’ is pretty derogatory. You attach it to social engineers. It leaves me with the impression of miserly little creatures sitting on high chairs, busily flipping through their tomes, feverishly accounting every lost penny, their minds focused at the tip of their quills and unconcerned with any emotions regarding the subject.

    It’s not enough that social engineering has acquired a negative connotation, despite the fact that all political involvement that doesn’t apply purely to one’s own immediate needs is a form of social engineering. Now they have been slapped with a scarlet Bean Counter on their lapels! Or is it just those engineers who literally count beans that are a problem? Can you count the number of deaths from lung cancer? Increases in average temperatures? Use of resources? Crime levels? Wrongful convictions? Are there any acceptable beans we who believe in the value of empirical measurements are allowed to use to determine inter-subjective right and wrong? Are we allowed to engineer frameworks at all? Or is it each and everyone to themselves? As the madman Aleister Crowley said: Do what thou wilt shall be the whole of the Law!

    And, ah, how I love Reason 7 for dispensing with utilitarianism: it requires us to enter the Experience Machine since virtual reality is bound to give greater pleasure. Holy smokes! Which must obviously mean utilitarianism is nonsense, no? Reason 7 demonstrates such a shallow understanding of simulations that I cringe (see the Simulation Argument as an example for a better comprehension). My strong reaction to Reason 7 is perhaps proof that even the engineering type can be emotionally moved by their endeavors at establishing a richer, more efficient and sound world.

    Note: I hold pleasure seeking to be a poor measure of morality. I’m not a utilitarian. But the acceptability of seeking personal pleasure for the sake of, well, just feeling good, is somewhat of a challenge to my own evolutionism (which seems to demand far more selfless sacrifice for the common good). And trying to find ways to empirically measure morality still seems like a reasonable endeavor, despite the obvious difficulties. And since pleasure remains a reasonable existential goal, ergo act-utilitarianism. Just because Bentham might not have gotten everything right does not mean that it’s all humbug. Despite my own willingness to dispense with vain self-actualization, I would not be too speedy in erecting the final tombstone of Bentham’s work, or in needing to think of ways utilitarianism could be saved. Marxists dispensed with all kinds of supposed nonsense in the century when they put their political weight on the intelligentsia.

    What an awkward situation I find myself in. I’m defending something I have in the past so vehemently criticized. Still, perhaps the references in this discussion to Bentham are not evidence of how long-lasting the spasms of a bad but powerfully controversial idea can be. Could it be evidence of some of Bentham’s profound insights?

  10. Andreas, I can’t respond to all the interesting points you mention. e.g., I will have to wait until later posts before I am ready to discuss the experience machine and your extreme survivalist doctrine. But for right now, I want to clarify some issues that are directly related to the topmost post.

    My derogatory remarks towards “bean-counters” are not meant to disparage principled behaviors based on empirical metrics. I am an equal opportunity jabber — my use of the needling phrase, “lazy romantics”, was tacitly pointed in the direction of casuists and intuition-mongers. I believe that our intuitions and rules must mutually enrich one another, and one should not be compartmentalized away from the other. That’s the problem with “bean-counting”.

    The classical formulation of the principle of utility is hedonistic, in the sense that happiness is the only thing that is intrinsically good, and pain intrinsically bad. (“Good” and “bad” are treated as amoral terms, here.) Further, utilitarianism treats everyone’s happiness as equally valuable as everyone else’s. And, finally, utilitarianism is thought to be consequentialist, in the sense that the right thing to do is determined by the good that an act tends to produce.

    An act can be fit for a common end without it being fit to bring happiness to the common. Everyone may desire wealth, but wealth will not bring happiness to everyone. So orthodox utilitarianism isn’t reducible to the doctrine of “measuring as accurately as possible an act’s fitness for some total aggregate and common end”. There are, however, deviant utilitarian theories that replace “happiness” with objective goods: for example, David Brink’s work. They may be closer to the mark.

    So it’s difficult to say whether your doctrine counts as utilitarian. Personally, I don’t think it counts in that way. However, it is certainly safe to say that the extreme form of survivalism you mentioned is a form of consequentialism.

  11. Hi Ben,

    Thank you for the pointers. I will try to do some more reading; your post requires much more than ‘intuitive’ responses.

    I never really expected much from ‘consequentialism’, truth be told. It just seems a heading under which we put theories that judge actions by their consequences (or intended consequences).

    Regarding Ethical Altruism, the preferences of agents B, C, D etc. seem, in principle, irrelevant (though in practice fulfilling preferences could be linked to achieving the most favourable consequence); all that matters is that the consequences (or intended consequences) of a given action are more favourable than unfavourable to ‘everyone’ but calculating agent A (who does count his interests amongst the other beans). If we equate the place marker ‘favourability’ with, say, happiness, the good is the general happiness (excluding yours), and the right are those acts that promote the general happiness (excluding yours). This seems much the same as the situation with utilitarianism: the utilitarian takes general happiness to be ‘the good’, and ‘the right’ to be those (permissible?) acts that are productive of the most general happiness.

    Of course, you could view ‘the right’ as the rules that constrain the pursuit of ‘the good’. In Mill’s case On Liberty seems to promote the right and Utilitarianism the good. How we deduce ‘liberty’ from ‘utility’ I don’t know.

    One interesting footnote in Utilitarianism is this:

    ‘There is no point which utilitarian thinkers (and Bentham pre-eminently) have taken more pains to illustrate than this. The morality of the action depends entirely upon the intention—that is, upon what the agent wills to do. But the motive, that is, the feeling which makes him will so to do, when it … makes no difference in the act, makes none in the morality: though it makes a great difference in our moral estimation of the agent, especially if it indicates a good or a bad habitual disposition—a bent of character from which useful, or from which hurtful actions are likely to arise.’

    So it seems that to judge an act’s ‘utilitarian rightness’ is to judge only the hedonic value of the intended consequences of the act. Dispositions towards acting rightly we can judge as good. But then there still seems to remain room for moral estimation of the character of the agent, and I don’t know that we are meant to judge him according to whether he is acting to promote the general happiness. Certainly, I’m not sure Mill’s moral thought is exhausted by its utilitarian element.

    I wonder if ‘utilitarianism’ needs to be saved – for the right occasions.

    Sorry Ben, I see the above comment has come out more than a little confused. I thought I’d made more sense than that. I do rather feel like the unprepared undergraduate troubling the tutor with unfocused questions. Partly it got muddled in the editing, but mostly the thinking wasn’t clear and I was being wrong or wrong-headed. There seems insufficient general happiness to be gained from clearing it all up.

    I have been looking at the Collected Works (all available online) and specifically ‘CHAPTER XII: Of the Logic of Practice, or Art; Including Morality and Policy’ – the latter parts of that seem useful to refer to (as Jacobson does). Of course I’m sure you’re well aware of them yourself but I thought them interesting enough to draw attention to.

    Mill notes that “the promotion of happiness is the ultimate principle of Teleology” but says he does not “mean to assert that the promotion of happiness should be itself the end of all actions, or even of all rules of action. It is the justification, and ought to be the controller, of all ends, but is not itself the sole end. There are many virtuous actions, and even virtuous modes of action … by which happiness in the particular instance is sacrificed, more pain being produced than pleasure. But … this … admits of justification only because … on the whole more happiness will exist in the world, if feelings are cultivated which will make people, in certain cases, regardless of happiness.

    …the cultivation of an ideal nobleness of will and conduct, should be to individual human beings an end, to which the specific pursuit either of their own happiness or of that of others (except so far as included in that idea) should, in any case of conflict, give way. But … what constitutes this elevation of character, is itself to be decided by a reference to happiness as the standard. The character itself should be, to the individual, a paramount end, simply because the existence of this ideal nobleness of character, or of a near approach to it, in any abundance, would go further than all things else towards making human life happy; both in the comparatively humble sense, of pleasure and freedom from pain, and in the higher meaning, of rendering life, not what it now is almost universally, puerile and insignificant—but such as human beings with highly developed faculties can care to have.”

  13. All of Mill’s writings can be found here (including ‘Utilitarianism’ and other relevant articles on Bentham etc):

    http://oll.libertyfund.org/?option=com_staticxt&staticfile=show.php%3Fperson=21&Itemid=28

    The passage above can be found here:

    http://oll.libertyfund.org/?option=com_staticxt&staticfile=show.php%3Ftitle=247&chapter=40067&layout=html&Itemid=27

  14. Thanks Jim — those are very useful passages indeed. There, we’re seeing what Mill means by virtue and how it connects with the goodness of acts. Nothing more to add!

  15. Hi Ben,

    It did make things a bit clearer for me. And I wonder if it helps make ‘room’ for the principles of Liberty and to connect it up with Utility?

  16. Oh, I think so — you’ve homed in on perfectly explicit passages in Mill’s “On Liberty”, which echo the same thoughts he expressed in “Utilitarianism”. One is struck not by any incoherence, but rather by how admirably sophisticated and creative his theory has shown itself to be.

    His famous arguments in favor of free speech (e.g., that free discussion, even when we are wrong, produces in us an appreciation of truth) are patently utilitarian. e.g., the defence of liberty is made by appealing to the harm principle, which has an indisputable connection with a particular reading of the principle of utility.

    But with all that having been said, I think the most exciting thing about Jacobson’s critique, and the reignited interest from other scholars like Appiah, is that it is not entirely clear how we can make sense of 20th century critiques of utilitarianism in light of a non-consequentialist reading of Mill. It may very well be that some critiques of Mill can be sustained: for example, one of Rawls’s complaints against utilitarianism was that it seemingly ignored the “publicity condition”, and was therefore elitist. Another defect — again, rightly pointed out by Rawls — is that Mill’s utilitarianism was agnostic about distributive justice.

  17. Another defect — again, rightly pointed out by Rawls — is that Mill’s utilitarianism was agnostic about distributive justice.
         Benjamin S Nelson

    My problem, Benjamin, with most statements such as “it ignores requirement x, y or z” is that x, y or z is just a moral imperative itself. The “engineer” expresses what they personally want and tries to “engineer” a moral framework that suits their own desire of what ought to be. In the end it’s as if we were playing God. All we end up with are axiomatic and absolutist statements about what is right, statements with a basis only in the supposedly intuitive and omnipotent claims of us, the ethicists.

  18. Andreas, phew! I think there’s a lot of contentious stuff in such a short post.

    You suggest, rightly, that people who complain that a theory lacks a certain element X will tend to think that the missing element X is important. No question about it. So I’m at a loss, since this seems like a platitude, not a problem.

    Further, the argument that philosophy is rooted in pretentious intuitive claims (expressed in “absolutes” or foundational “axioms”) is, at best, only an accurate depiction of bad philosophy. But there are some good reasons to think that this isn’t quite so widespread. e.g., if it turned out that there are principled grounds for making a practical distinction between “arguing towards a prior conclusion” and “arriving at a conclusion through reason”, then it would be difficult to see how we can get a comparison to the divine or an engineer.

  19. Benjamin, I agree. It’s obviously perfectly valid to use axioms in any argument, even ones based on loose conjectures and previously bundled conclusions, as long as they are identified as such. To say otherwise would be to say you can’t reason except when you see the perpetrator with a smoking gun beside the dead victim. And, regarding prior conclusions, there is no doubt that distribution of goods is a major challenge in ethics. If a framework is agnostic about it, it is indeed lacking. If we do not address it, we have not, so to say, explored the whole barrel. So, I’m guilty as charged. What I said was indeed a bit of a platitude.

    But…isn’t there always a but…I never spoke of all of philosophy. Not all philosophy is concerned with ethics. So your implication that I made an indictment against philosophy as such goes a little too far. Ethics, in my understanding and in seeming contrast to Rawls, needs to be based on something deeper. And I do, contrary to you, think that many ethical frameworks are just bad philosophy based on (oh no, here we go again) unfounded claims. This is where I think the Benthamites were insightful. They did indeed try to go, so to say, to the bottom of the barrel. And then consequentially and empirically work from there towards the surface. Their main error, in my view, was that they started from the wrong proposition: pleasure of the masses. They still did not dive sufficiently deep to hit the barrel’s bottom. This led them into all kinds of problems. However, I can myself see why one would start at pleasure. Who wants to live a life of dreary survivalism except those entertaining others on Discovery Channel? Rather be dead than live a life of constant drudgery, right? Well, yes, but without survival there can be no pleasure whatsoever. So this is the point we must start at. And consequentially and empirically work from there.

    Regarding the issue of bad philosophy, if John Rawls’s starting position is that the “most reasonable principles of justice are those everyone would accept and agree to from a fair position”, then I can but conclude it’s bad philosophy. The arguments inside Rawlsianism can be as wonderfully astute and as accurate as can be. But because it is based on a flawed premise, it will draw all kinds of flawed conclusions. Rawls’s Veil of Ignorance is at best marginally interesting. I’ll drive it to the extreme. So let’s say we have a grown-up, a teenager, a child, a parrot, a cat and a cow. What will they say about redistribution? Oh, I’m not allowed to use non-adults and animals? From “A Theory of Justice”, edition 1999, page 10, paragraph 1 under “The Main Idea of the Theory of Justice” (with my bold):

    [The] guiding idea is that the principles of justice for the basic structure of society are the object of the original agreement. They are the principles that free and rational persons concerned to further their own interests would accept in an initial position of equality as defining the fundamental terms of their association.

    Oh, I see. So everyone in this thought experiment has to have some inner equality behind the veil? On the outside they can have any type of nose and crazy hairdo and be of any height, but on the inside, that which matters, they must be like John Rawls? So we have to assume that which is to be concluded? There’s a word that comes to mind for such a fallacy: bad philosophy.

    There is only one ethical “commandment” I know for sure is true:

    Live, and produce viable offspring. Or die, and be forever extinct.

    This is another, more forceful, way of expressing what I have called the Basic Imperative. In my view any ethical system must start from this self-evident position. It is the only “commandment” I cannot envision potentially being abrogated by newly discovered universal conditions or temporary circumstances. You have already called my starting position extreme. But the Basic Imperative does not mean everyone has to learn how to walk around in shorts and shoeless in wool socks in the cold Wyoming winter. Or make batteries from coins, paper and salt water. Nor does it preclude altruism when considered from the perspective of a social organism. And I would assume that Bentham was correct in that the desire to live and perpetuate society is somehow associated with pleasure. But pleasure does not trump survival. Jürgen Brandes’ behavior was as deplorable as Armin Meiwes’, despite any amount of consent Brandes gave and pleasure he may have derived from the atrocious act of his own demise.

    I don’t entirely dismiss ethical intuitions as legitimate. Strangely, Rawls does not seem to think of himself as an intuitionist. In the strictest sense he may not be one, but separated from reality behind the veil, what else are we to rely on? Ultimately, I think Rawls puts way too much faith in intuitions. He does not take into account that, protected from reality behind a veil, people (especially the rational) are notoriously bad at assessing their incompetencies, and thereby cause everyone’s, including their own, eventual extinction.

  20. I’m glad to see we agree that it’s acceptable practice to critique omissions.

    But I’m also struggling to see where we disagree when it comes to ethical theories. First, it’s true that I spoke of ‘philosophy’ most widely instead of ethics in particular. That’s because I don’t see any reason to think that critiques of the role of intuitions in ethics won’t also apply to other areas of philosophical inquiry. But if that’s contentious, then just suppose that we are speaking only of ethics.

    Second, when you say that “many ethical frameworks are just bad philosophy based on… unfounded claims”, you’re echoing my claim that “the argument that philosophy is rooted in pretentious intuitive claims… is, at best, only an accurate depiction of bad philosophy.” Since both of these claims may be true at once, they are not contraries.

    Rawls is, I think, a great example of someone who is clearly not driving his work on merely pretentious intuitive claims. Or, at least, that’s not how he envisions his project. Rather, he’s in a wide reflective equilibrium, using his intuitions as provisional starting points, from which he can go on to build a plausible system. Now, I’ve heard some people accuse Rawls of smuggling in dubious intuitions, and you cite one possibly problematic assumption. (I think I would agree with you that it’s not especially obvious what it means to be a free and rational person, or whether freedom or rationality are essential to moral agency.) But in order to be fair to Rawls, we ought to respect the fact that he is self-consciously floating on the top of the barrel (since he’s a constructivist) while also letting us catch a glimpse at what lies beneath (since he has a meta-ethics based on rational deliberation).

    I think, to pull the strings together, that a survivalist view can be condemned for what it omits. Yes, genetic survival is consistent with altruism, and egoism, and various and sundry doctrines. But that’s precisely the problem! Genetic survival is vital as an answer to some questions (e.g., the meaning of life), but imperatives for justice and morality require a finer grain.

  21. WHY I CAN NOT SEE MY POST?

  22. That Guy Montag

    Ben:

    I’ve found myself in the position of defending Mill’s Utilitarianism, but possibly at a slightly more oblique angle. In particular I’m mostly curious about whether or not we need to change our conception of happiness from being some kind of end towards instead being something like a second order property. The doctrine as a whole then starts to look far more like a theory of moral perception than a theory of moral ontology. I certainly think that that kind of a move makes sense of things like the pluralism that Jim mentions and also fits rather nicely with the virtue approach you seem to be taking.

  23. TGMon, I think I’ll need you to say more about what you mean by “second-order property”, since I’m not sure I have a 100% clear idea about what that refers to. You might be talking about the analogy of moral properties to secondary properties (e.g., color perception), or non-natural properties (if we want to say those really are properties).

    I can say a little bit about Mill’s theory of happiness, which he emphasizes in the concluding chapters of “Utilitarianism”. He says that our favorite goals eventually become a “part of” happiness.

    What a bizarre idea! One would have initially thought that happiness is a kind of feeling-tone (to use Sidgwick’s term), a brute sense of positive affect. But no — here, Mill is telling us that happiness itself can have other values ‘cooked into’ it, so to speak. And so virtue, being the sort of thing that is best suited to produce happiness, surely counts as being a part of happiness.

    If that’s the best way of thinking about Mill’s axiology, then it would seem that happiness (for Mill) really would need to be a kind of end. It’s hard to think of him any other way.

    Is Mill a realist about morality, in any sense that’s analogous to color-perception? In other words, does Mill believe that there are such things as “moral properties”, that are independent of the mind (as colors are)? Personally, I don’t think so. If Jacobson is right, then it’s pretty hard to pin the ‘moral realist’ label on Mill, because he’s not a consequentialist. But even if Mill were a consequentialist, it’s not even clear that consequentialism entails moral realism. For instance, Peter Railton has an argument that is both broadly utilitarian and morally realist — but I don’t think Mill (or Bentham, for that matter) would buy into it, because Railton has a very idiosyncratic and metaphysical gloss on the process that’s involved in learning about your own interests.

  24. That Guy Montag

    Sorry, I was aiming to say secondary properties but clearly had one of those brain burps. In terms of the interpretation I’ve been aiming for I’d be more inclined to stay away from Non-Natural properties more generally but that might be more due to my own prejudice against intuition.

    In any case there are a couple of reasons I think that this might be a good way to conceive of Mill. The first is Mill’s supposed failure to establish why we should take happiness to be a general good in the fourth chapter of “Utilitarianism”.

    “The only proof capable of being given that an object is visible, is that people actually see it. The only proof that a sound is audible, is that people hear it: and so of the other sources of our experience. In like manner, I apprehend, the sole evidence it is possible to produce that anything is desirable, is that people do actually desire it. If the end which the utilitarian doctrine proposes to itself were not, in theory and in practice, acknowledged to be an end, nothing could ever convince any person that it was so. No reason can be given why the general happiness is desirable, except that each person, so far as he believes it to be attainable, desires his own happiness. This, however, being a fact, we have not only all the proof which the case admits of, but all which it is possible to require, that happiness is a good: that each person’s happiness is a good to that person, and the general happiness, therefore, a good to the aggregate of all persons. Happiness has made out its title as one of the ends of conduct, and consequently one of the criteria of morality.”

    A useful comparison I think is to look at how Sam Harris goes about talking about well-being. In Harris’s case he treats well-being as something that is a necessary feature of it being relevant to human beings at all. I think an important failure in attempts to understand his point is that people tend to see this as some kind of active goal to be striven for as opposed to a passive necessity of human engagement with moral questions. I also think in particular the second half of Mill’s claim here doesn’t really make sense unless we take this passivity on board and complete it with your argument against Mill being a consequentialist.

    Another point that I believe supports this secondary property view of happiness more generally is that it makes more sense of, for instance, Jim’s pluralism and your point about virtues being ‘cooked into’ happiness. It’s not about saying that there is some object, happiness, that exists there bubbling over the fire and that whenever we come to see some other value as being important we ‘throw it into the happiness pot.’ Instead happiness should be seen as the act of perceiving the real objective reasons that the world provides. In which case there is no adding to the pot: the only thing in the pot is what is added.

    This leads to my final point. I am increasingly of the view that the claim that observation is theory-laden is both ultimately right and of pretty hefty significance. (If I admit that I think “conceptual engineering” is a very good analogy for philosophy you’ll probably see where, for instance, my distrust of intuition is coming from.) It’s in the light of this kind of thinking that his famous dictum “Better Socrates…” makes the most sense. What could he possibly be describing as marking out the difference between Socrates and the pig? Certainly I think we’re not treating the relativist and our relativist intuitions seriously if we think it’s automatically obvious that we should prefer the view of Socrates. On the other hand, if we avoid the common mistake of forgetting how the quote ends (“And if the fool, or the pig, are of a different opinion, it is because they only know their own side of the question”), we start to see, I think, a very strong reason to suspect that Mill at least had some sense of the implications of theory for observation, but again only if we see Mill as aiming more generally at happiness as perception.

    Now I think this forms at least the beginnings of a case for a moral perception theory in Mill but ultimately I don’t think Mill makes it and I think that represents the biggest real confusion in Mill’s Utilitarianism. He seems to me to have had some intuitions at least in the right place but in large part, I suspect because of how strong subjectivist arguments can seem, he falls into the trap of treating happiness too much as a goal and not enough as a means.

  25. Andreas:

    The pre-theoretical intuitions contained in the veil of ignorance are that equality and rationality are required for neutrality. This is distinct from the conclusion of the concept of justice as fairness. They are the necessary conditions for the example that is meant to motivate one in favour of the conclusion. Furthermore, these intuitions are themselves interesting. Rawls extensively details the method of wide reflective equilibrium, so accusing him of smuggling in the conclusion, while he explicitly states the assumption of the intuitions, makes theoretical claims about the methodological suitability of doing so, and then clearly offers reasons for the conditions, is ridiculous. You can reject wide reflective equilibrium as a suitable method, but your criticism comes in the form of a simple accusation of intellectual dishonesty rather than an incredibly technical critique of theoretical method. Finally, expanding my claim that the intuitions (which are unproblematic because they are explicitly declared) are themselves useful, it is interesting to note that these conditions have been used to amend Rawls’s conclusion. Consider the work of Norman Daniels, which claims that any just distribution must necessarily include structures to secure minimum guarantees of healthcare to all individuals, on the claim that the rationality requirement behind the veil necessitates certain assumptions about the objective condition of individuals. He has, in a very important sense, pre-empted Rawls (who claims that the form of institutions results from the principles derived from behind the veil) by giving content to the assumptions of the rhetorical device by claiming that rational, equal bargaining requires all participants enjoy a minimum level of health.

  26. TGMon, those are excellent and clever points. And I have to admit, there’s something very persuasive about that textual evidence you offered from chapter 4. At the very least, it is impossible to argue with the fact that Mill is attempting to naturalize our study of the good. That’s really the point of the monistic project: to bridge ‘is’ and ‘ought’, at least with respect to happiness.

    First, I have to point out that the passage from Ch. 4 is about the epistemology of happiness and goodness. So it’s not obviously about the metaphysics of primary and secondary qualities (though it may end up saying something about those things).

    Second, if I’m not mistaken, your analogy to secondary qualities (I assume) is probably drawn from McDowell (esp. his essay on Mackie). When we consult it, sure enough, we find quotes from McDowell that are uncannily similar to the one you gave from Mill. e.g.,

    For an object to merit fear just is for it to be fearful. So explanations of fear that manifest our capacity to understand ourselves in this region of our lives will simply not cohere with the claim that reality contains nothing in the way of fearfulness.

    If it turns out that McDowell’s metaphysics is a decent fit to Mill’s theory, then your tentative suggestion would seem to be pretty well grounded.

    McDowell, at this point in his essay, is critiquing the idea that we’re essentially just projecting parts of ourselves onto external objects. McDowell argues that there’s some sense in which objects of fear are mind-independent, simply because some objects (e.g., bears) merit fear. Hence, I suppose, colors are mind-independent because they merit color-ascriptions, and values are mind-independent because they merit valuing.

    That sounds like a pretty weird definition of mind-independence. Understandably so, since McDowell has rigged our method of explanation to yield surprising results. For McDowell, we are effectively explaining how we understand things about the world. By contrast, we are not explaining the causal facts of the world. e.g.: “it would be obviously grotesque to fancy that a case of fear might be explained as the upshot of a mechanical… process initiated by an instance of “objective fearfulness”.”

    But is that what Mill thinks he’s doing? If we translate McDowell’s quote into Millian terms, we are left with an open question. Do we think that Mill would say that the claim — “For an object to merit desire is just for it to be desired” — cannot cohere with the claim that “reality contains nothing in the way of desirableness”? McDowell evidently believes that there is no logical space between these two claims. However, we don’t need to agree, and it’s not clear Mill needs to. For instance, we think that certain words merit certain kinds of meaning. Does that mean that meanings are mind-independent in the same way that colors are?

    I guess that’s an open question for Mill, and will depend on a close reading and a charitable reconstruction of his meta-ethics. (Regretfully, on this note: in the interests of space, I’m going to leave aside the textual analysis that you invited on his famous Socrates-pig contrast. For myself, I would like to think of Mill’s competent judges as being capable of changing opinions over time, and hence, being able to make sense of some of the relativist intuitions you might have.)

    But it’s not an open question for me. It seems to me that thinking along McDowell’s lines involves a confusion of the objectivity of meaning with the objectivity of truth. Meanings are objective in the sense that some signs contractually merit some kind of content; secondary qualities are objective in the sense that some observable qualities would persist regardless of our whims. Or, as you put it: there’s a difference between the active and passive.

    You admirably pick up the mantle at this point by saying (on Harris’s behalf) that we really are being passive in a sense when it comes to morals (and the good), because we are discussing the “passive necessity of human engagement with moral questions”. And sure enough, if there were such a thing, then I think I would have to jump on board with you. But I’m not sure there is. As our mutual friend Ophelia put it quite succinctly: “It’s a contingent fact that we care.”

  27. That Guy Montag

    Thank you Ben for the very generous praise and the brilliant argument. I do feel it’s my duty to warn you though that you are going to be thoroughly plagiarised during my next ethics exam. ;)

    I haven’t read any McDowell yet but I’ve heard enough to suggest that he might be saying something similar to what I’m aiming for. Another person who seems to be thinking along the same lines as I, at least more generally, is Thomas Scanlon. I’m in the middle of listening to his John Locke Lectures right now and it’s telling that he’s building his metaphysics more from Quine and Carnap, which is far more explicitly the kind of thinking that’s informing my argument. If we digress slightly, the Quine/Carnap line of metaphysics also makes a lot of sense in interpreting Sam Harris, given David Lewis and given Harris’s emphasis on possible worlds.

    In terms of your reply I can only say yes, the fact that ordinary colour perception at least seems strongly objective in a way that morals don’t is a strong objection. I don’t have a good enough grasp of the metaphysics I’m trying to appeal to to make anything stronger than a gesture here though. (This is essentially what my summer reading list is revolving around.) On the other hand I do want to say that the case is premature. While I don’t deny this subjective experience, a cornerstone of my argument is in fact to say that Mill clearly is seduced by it and that he doesn’t need to be. Having read the post you link to I’d argue that you too are seduced by it.

    Using your text as the starting point I’m thoroughly behind everything up until the point where you suggest will-independence as some kind of proxy for the mind-independence requirement of moral objectivity. This is something that a moral realist of my stripe will object to. Let’s look at a basic case of colour perception. Regardless of whether colours really exist independently of the mind, it’s trivially true that there’s a lot that needs to be said about us in order to say that we are capable of perceiving colour. There’s the brute fact that we need the right physiognomy. There’s also a different set of requirements involved in recognising and being able to report a perception. (You raise the case of autistics in your post and it’s interesting that this is in fact very similar to the kind of account of colour reports that Quine gives at the start of Word and Object, in particular in relation to the colour blind.) Buried in all of this will at some point arise the basic question of my subjective experience of a particular colour, the so called hard problem of consciousness. The point is that if I’m really committed to giving the same kind of account to objective moral facts, I need to in fact have something similar to the idea of qualia or sense data, a supposedly purely subjective feature of experience.

    My contention is that now we have a model of what motivation is. There can be no will-independence, because will just is my subjective experience of perceiving a reason for action. Taken in this light, it’s not surprising that for instance Mill shifts away from a sort of passive model of happiness as perception to a stronger view of happiness as an end because the existence of the will and its connection to action just seems so obvious and something needs to be treated as the goal for that willing. If I’m right however then, just as with colour, all we really need is a proper account of what it is that causes the experience, the action or event we perceive that we consider wrong, and our own ability to perceive reasons, of which motivation is just the part available to conscious experience. All of this is why I think there’s something to your non-Consequentialist view of Mill and Jim’s gestures towards particularism.

    I want to end with the part I’m most tentative about, the connection between theory and perception which is a large part of what interests me in the Quine/Carnap debate and I think the crux of Mill’s rightly famous, if often misquoted, Socrates argument. I can’t agree more with Ophelia’s point that however we look at this, the fact that our moral concerns are closely tied to our culture or our subjective experiences needs to be explained. This is what I mean by respecting relativist intuitions, even though I clearly don’t find them convincing. The argument I’ve given above certainly counts here but ultimately it faces a serious challenge in that theory appears to necessarily play a role in moral perception that is a harder sell for other forms of perception. Just to keep this brief, my answer is that I think there is a lot to be said for Carnap’s view of philosophy as “Conceptual Engineering” to the point that I think a working moral theory plays the same role in moral philosophy as a telescope in astronomy. I also think that this kind of an argument ignores the subjective experience of the way we come to develop skills, and therefore internalise theories. An example I find very compelling here is the so-called Stroop Effect where we have a hard time distinguishing between colour perception and the meanings of words.

  28. That Guy Montag

    I’m aware of a slight confusion in my last paragraph. The end should read:

    “I also think that relativist arguments of this kind ignore the subjective experience of the way we come to develop skills, and therefore internalise theories. An example I find very compelling here is the so-called Stroop Effect where we have a hard time distinguishing between colour perception and the meanings of words.”

    As for the rest of the confusions, well, no excuses.

  29. That Guy Montag

    Oh lord, HTML fail! I swear there was a /b after kind.

  30. TGMon, good points again.

    I’d be glad if our discussion helps you with your exams, since I hate those things as much as the next guy. But if you’re writing papers, then it’s important to credit your interlocutors as sources. In addition to being a mandatory practice (since plagiarism is rightly regarded as an unforgivable sin), being generous with citations and acknowledgments makes you seem like a worldly cosmopolitan type of person just by virtue of arguing with people over the internet. Not a bad deal I think.

    Also, thanks for your link to Scanlon, I’ll have to take a listen.

    Now, back to business. I’m not inclined to agree with your characterization of the will — it seems like you’re pacifying the one thing in life that we thought was active. If we think that the will is just “my subjective experience of perceiving a reason for action”, then we’re turning all of our idle thoughts into expressions of the will. Presumably, you meant something more like “my subjective experience of perceiving a reason for action which actually happens”. But that would mean that post-hoc rationalizations are expressions of the will, too. And post-hoc rationalizations are a strange way of talking about our powers of self-control. Post-hoc rationalization seems less like an account of our self-control, and more like the practice of making up reasons as to why we don’t need to worry about having any self-control. In other words: if we adopt your formulation, as stated, it interprets the practice of confabulating reasons for acting the way that we have as ‘expressions of your will’. Arguably, confabulation is the exact inverse of this — an anesthetic that lulls the will to sleep.

    I think there is a vital question that is underneath all of this, and it is this. Is an expression of the will the same thing as an intentional action? Speaking for myself, I’m inclined to think they break apart. An action does not lose its intentional contents just because the intentional contents occur to a person after the relevant behavior. And it is not clear that all intentional actions are expressions of the will, since I would hope that the core phenomenon we’re trying to make sense of when we talk about ‘expressions of the will’ is the idea of planned behavior.

  31. That Guy Montag

    Ben:

    My comments about plagiarism were intended entirely as both a joke and a compliment. :D  I’m not sure if it’s a latent masochistic streak, but I actually enjoy exams, partly as a way to challenge myself, but also as a way to help make my normally chaotic thought processes cohere just a touch more. Conversations, on the other hand, are never wasted, though I’m not sure if graduating from appearing to be a sad geek who spends most of his spare time in his room, in his pants, eating cold pizza and reading philosophy, to one who does all that and argues with people on the internet, is necessarily a step up. ;)

    I think it’s important for me to get to grips with your question about will and intentional action. I’m not sure how I’ll respond to that so that answer is going to have to wait. The point about post-hoc rationalisations is a good one though and I do have something to say about that now.

    There seems to be a lot more to cases of ordinary perception than simply the fact that I experience it as, say, a colour. Setting aside my confusing physiognomy and physiology (oh brain, why do you fail me so!), I pointed out that for Quine at least there appears to be a special set of requirements for being able to reliably report perceptions of, say, red. The specific idea is that there’s a distinction between following a rule, or in this case responding to a perception, and being guided by it. In the same vein, I recently heard a talk by Tyler Burge in which he draws a significant distinction between stimulus response and perception, with, yet again, a kind of reflection in some sense constituting the difference between the two. I think this is at the very least very suggestive if we take on board the idea of moral motivation as moral perception.

    Ultimately this is where I agree with you the most about Mill, and in particular about the importance of virtues in his account. The first thing to be said about this is that it does contradict my earlier point about how we can start to see virtues as being cooked into happiness, and maybe to a degree it contradicts at least that part of Mill’s account, because virtues on this account start to take on higher-order properties of reflecting back on the contents of perception. On the other hand, if you’re right about Mill’s moral theory valuing virtues, then I think the way to think about them will be as playing just this epistemic role in moral perception.

  32. You can reject wide reflective equilibrium as a suitable method, but your criticism comes in the form of a simple accusation of intellectual dishonesty rather than an incredibly technical critique of theoretical method.
          PETER IRISH

    Whoa, hold on, Peter. When did I accuse Rawls of dishonesty? I may have accused him of performing bad philosophy. Unless I misinterpreted myself, my claim was something along the lines that the foundations of Rawls’ argument are as solid as a cloud. How you interpret this to mean that I think he is dishonest is slightly mysterious. I presume it is the use of the verb to smuggle that has led you to introduce the idea of dishonesty. But I did not use this verb. Benjamin did, and only as a reference to critiques he has heard from others. By mentioning it in conjunction with my claim that Rawls makes problematic assumptions and is (ultimately) an intuitionist, Benjamin seems to have created a wormhole between my claims and those of the anti-smuggling guard. You seem to have fallen into this wormhole, Peter, attaching opinions to me that I don’t think ought to be attached. Unless, of course, I’m severely misunderstanding myself.

    But, more importantly, you think my claim that his assumptions contain the conclusion is ridiculous. I have to be honest, I don’t quite understand why you think this, because the meaning of the following sentence is too complicated for my simple mind:

    Rawls extensively details the method of wide reflective equilibrium so accusing him of smuggling in the conclusion while he explicitly states the assumption of the intuitions, makes theoretical claims about the methodological suitability of doing so, and then clearly offers reasons for the conditions, is ridiculous.
          PETER IRISH

    The construction of the sentence is very German. I assume it must be read as “accusing him of smuggling…is ridiculous”. I find the clause “while he…offers reasons for the conditions” quite confusing. Are you using “while” in the same sense as “whereas”? Does the sentence mean that because Rawls gives technically complicated reasons for what he is about to do and explicitly states his assumptions (and why his assumptions are appropriate), it is ridiculous to accuse him of making the wrong assumptions, ones that presuppose the conclusion? I guess I must have misunderstood something in there. But assuming I understood what you said, it seems to me that what you are saying is something along the following lines: “Rawls wrote a great and wonderfully complex book, and if you don’t understand how great his book is, you ought to be ridiculed.”

    Okidoki. Then perhaps I ought to be a bit more complex in my critique of why his assumptions are not so different from the conclusion they seem to indicate. In other words, how an assumption that fairness is morally foundational leads to the conclusion that the distribution of goods must be fair.

    So Rawls claims that neutrality requires equality and rationality. And neutrality is a requirement to establish a wide reflective equilibrium, right? Justice as fairness is then what emerges. But what is fairness? Is fairness not when everyone starts a race from the same position without any advantages? The game of golf has even ingeniously invented “handicaps” to make the game fun for all, regardless of their difference in skill. Wait a minute. Original position, equality and rationality. Sounds pretty fair to me! So behind the veil we start off at a fair position. It’s just like when track-and-field runners start the race at a diagonal to compensate for the curvature of the track.

    This is the initial equilibrium, the measure of our endeavor to establish what should be: distributive justice. Whenever someone is bestowed with an advantage, we look back at the original position and measure the advantage accordingly. Is it all right to give the person this new advantage? Presumably it is all right to disturb the initial equilibrium only if an advantage can be bestowed upon only a specific person (or set of persons). But we can only bestow the advantage if ultimately it is an advantage to all (i.e. without ultimately disturbing the initial equilibrium). In other words, I can reward you with 1 billion dollars only if you will use it in such a manner as to lift all to a higher level of prosperity. The increased aggregate value of the system as such must compensate for the assignment to a specific subset of the system, thereby in effect not disturbing the equilibrium. Remember, when we started off we were all equal. What justification does anyone have to disturb this equality? Presumably only if the realities of nature so dictate can we mess with the equilibrium. Yet if we do mess with it, we have to make sure it’s not actually messed with, since advantages bestowed by nature are not fair, given that such advantages are not of our own doing. Disequilibria are only fair if we are the true cause of the advantage that caused such disequilibrium. Huh? Qué? What?

    We could adapt Galen Strawson’s Basic Argument here.

        1. You do what you do at any time because of the way you are.
        2. So in order to be ultimately responsible, you have to be ultimately responsible for the way you are – at least in some mental respects.
        3. But you cannot ultimately be responsible for the way you are in any respect.
        4. Therefore, you can’t be responsible for anything you do.

    Hence, there can be no moral desert! Well, that seems like a very powerful argument. I will not critique the Basic Argument again. You can read my previous critique here. And, yes, there I do use words like “bamboozle”. But they were directed at Strawson, not Rawls. And, on reflection, perhaps I was a little too harsh in my use of words. I don’t think Strawson had the intent of fooling us. I presume he and many others believe it is a very sound argument. Importantly, in Rawls’ world you are supposedly not responsible for where you were born or who your parents are. This is, in fact, the original position! But, of course, since infants can’t argue with each other, Rawls has to imagine his free and rational infants.

    Logos is presumed. It exists as an eternal in the realm of determining what is right. Not reality. No. Logos. Presumably, if some belief is brought to our rational infants that seems advantageous to just them, they will reject it out of the rational realization that better advantages could be bestowed on others unbeknownst to them, and the fair equilibrium would be improperly and unfairly disturbed. Or, if they wish to adopt it, they will allow other infants to adopt similar beliefs, all happily living in Kumbayah. Because, after all, Logos rules, and what is fair is fair. Logos is presumably available to our infants through the divine inspiration of intuition. And you only deserve what you have truly caused, which Strawson has “proven” amounts to just about nothing.

    So what Rawls says is: we must start off in a fair position; whatever we do must be fair; and if we cannot be reasonably fair, we must be compensatorily fair in order to be reasonably fair. Fairness is central to the whole argument. If I chuck fairness on the heap of discarded moral concepts, the whole framework falls apart. You can call the original position “neutral” all you want. It is perfectly synonymous with being fair.

    How about a simpler proposition: what works will work; and if it doesn’t work, it won’t work and there will be…nothing. Or is that just too simple a proposition? I’m reminded of a German saying my father Lars would always tell me when I was an overly philosophical teenager:

    Warum einfach es einfach machen, wenn man es so schön kompliziert machen kann?

    Translated:
    Why simply make it simple when you can make it so wonderfully complicated?

  33. Yes, genetic survival is consistent with altruism, and egoism, and various and sundry doctrines. But that’s precisely the problem! Genetic survival is vital as an answer to some questions (e.g., the meaning of life), but imperatives for justice and morality require a finer grain.
          Benjamin S Nelson

    Benjamin, this is in some sense true. But it’s a little bit like saying that using evolution as a measure is useless when considering the functioning of the eye. I reject that argument. We continuously harken back to the idea of variation and selection as we evaluate the complexity of the eye, despite the immense distance between the original eukaryotes and nervous tissue.

    Encapsulation is central to evolutionism. The oft derided Herbert Spencer realized this in his The Factors of Organic Evolution. He did not call it encapsulation. But he spoke of the importance in evolution of outer protective structures and inner stable structures. Each encapsulation is subjected to the same simple evolutionary principle as the previous layer of encapsulation. We use the evolutionary idea as an evaluative framework at all levels.

    I have yet to expand on my proposed evolutionism in any great detail (which I believe, unlike other such frameworks, looks forward, not backwards and downwards). But, for example, I believe that Freedom of Speech can be argued for because it represents a necessary mechanism of variation against which selection occurs (since speech influences behavior). And also the sanctity of life, because without life there will be no new Speech. And I believe one can argue for the fact that predicting who will produce new meaningful Speech is immensely difficult, if not impossible. Even fools can speak the truth, as Shakespeare realized.

    Note: you introduced the adjective “genetic”. I think it is important to realize that Lamarck, Darwin and Spencer knew nothing about the mechanics of inheritance when they formulated the essentials of the simple yet immensely powerful theory of evolution. It was Gregor Mendel who set the seeds for understanding the mechanics. And new discoveries in epigenetics have forced us to revisit Lamarck’s early insights.

  34. There are two ways that I don’t understand that analogy.

    First, I don’t understand how the evolutionary account of the functioning of the eye does not require a finer-grained explanation beyond “evolution did it”. If it were to turn out that an evolutionary explanation could not be produced to make sense of how this complex organ came into being, then that would be fine scientific grounds for seeking out alternative forms of explanation. But, alas, there are perfectly good explanations of how the eye evolved as a result of natural selection. The accounts do not even require very much imagination to concoct independently.

    Second, even if it were not the case that we ought to expect a fine-grained account of evolution, we should still expect such an account when answering questions about morality. These are very different kinds of subject matter. The scientists seek out new discoveries in order to predict what will happen in the future; the moralists find new ways of coping with the world we know about in order to figure out what we’re going to do next. The scientists have ignorance as their excuse. At least on the face of it, the moralists do not.

    (Yes, it’s true that I used the term “genetics”. But that’s because you’re only now making it clear that you have the old-timey steampunk evolutionists in mind, like Spencer, Darwin, and the rest. It will not do for you to draw too much from my use of words when they articulate the best interpretation I could make from the limited clues you gave!)

    Just now, you have accepted my challenge by offering the beginnings of a finer-grained account. And as far as that goes, the proof is in the pudding. I’m pessimistic, and so I taste bad pudding — but that alone isn’t something that you should care about.

    What I’d rather point out is that the analogy to evolution starts to break down when we start to consider your proposals closely. For example, you suggest free speech “represents a necessary mechanism of variation against which selection occurs”. Actually, I doubt that it is necessary; so long as people have free will, a race of mutes might build a society, and that society might vary in all kinds of important respects. It is more realistic to say that free speech is efficient in producing variations. And, keeping with our analogy, direct exposure to radiation represents an efficient mechanism for generating mutations in cells. Does that mean we ought to expose generations of the population to bursts of X-Rays, in the interests of producing more variability? I don’t think so. These ends aren’t rewarding enough to justify the means.

  35. Benjamin, I turned 20 at the height of steampunk and was a great fan of the genre. It never occurred to me that my later interest in evolution, emergentism and other conceptual frameworks with many roots in Britain’s 19th century might be influenced by my earlier interest in a Victorian age reimagined. I suspect it’s only of minor import though. Were I to reference Aristotle, would you associate what I said with hydropunk? Hmm, interesting, hydropunk…a world where the Difference Engine is built with screws, cogs, pulleys and water buckets. Was Pythia just the chief engineer of an ancient Analytical Engine?

    I suppose it’s my mention of Herbert Spencer that inspired your reference to steampunk. I want to make it clearer why I mentioned him, Darwin and Lamarck. Genetics has been so successful in explaining all kinds of biological phenomena that it has become almost synonymous with the theory of evolution. Often it’s no longer the essentials of the evolutionary idea that we are exploring but how the mechanical “letter” sequencing in DNA gives rise to higher level phenomena. For many decades we became completely absorbed by the supervened, forgetting about the supervening. In fact, we became so obsessed with it that for a while we were baffled by studies on the intergenerational effects of famine (such as those by Lars Olov Bygren using historical records from Överkalix). Due to such studies, we have discovered that our previous views about the mechanics of inheritance are flawed.

    Epigenetics still focuses on the mechanics, the supervened. However, there is a deeper realization of the cyclic influence: the overall state of the cell influences how genes express themselves. We no longer treat the cell merely as a result of the underlying genetic code, affected only temporarily by external forces (such as blunt force trauma) during the course of one generation. The cell itself, in turn, affects that which caused it to be what it is (i.e. the expression of the genetic code), thereby causing intergenerational changes. This, to me, once and for all puts the kibosh on the selfish gene. Genetic code is a means by which the greater structure operates, not a pure dictum of its operations. It seems obvious to me that we must treat the whole human being in a similar way vis-à-vis the cell. Again, it is not sufficient to treat our universe as a single-layered clockwork. Rather than organisms being treated as complex machines, machines ought to be considered crude organisms that lack even the ability to self-perpetuate!

    This has led me to revisit what the theory of evolution actually is, and therefore to revisit such thinkers as Herbert Spencer. My own conclusion is that, stripped of all the excessive details, there are three basics of evolutionary theory:

    1. A combination of entities forming a whole through their given relationships (cooperating entities)
    2. A possibility of changing the relationships between such entities (so-called variation)
    3. Persistence and increased occurrence of select variations against a specific background (so-called “natural” selection)

    I have previously expressed the third basic as “what will work will work”. Considering these three basics, it seems obvious to me that the idea ought to apply at all levels of decomposition, whether we are talking about organizations, humans, cells or genes. Once we abstract evolutionary theory to entities and relationships, we are no longer blinded by G-C-A-T sequencing. Of course, a big question is whether such abstraction makes sense.

    When considering the last question, I just have a hard time imagining how the idea is not applicable to almost any reconfigurable process that is subject to an established underlying process against which it must operate, and which it may in turn influence. It seems so obvious to me that discussing whether it’s foundational and universal and applicable to all layers is almost like discussing why there are 6 ways of getting a sum of 7 with two regular dice, regardless of their size and color.
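
    (The dice aside is easy to check mechanically. Here is a minimal sketch in Python, purely for illustration, that enumerates the ordered pairs of faces on two six-sided dice and keeps those summing to 7:)

        # Minimal check of the dice aside: ordered pairs of faces from two
        # six-sided dice whose values sum to 7.
        from itertools import product

        pairs = [(a, b) for a, b in product(range(1, 7), repeat=2) if a + b == 7]
        print(pairs)       # [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]
        print(len(pairs))  # 6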

    And morality to me is nothing but defining how people are permitted to relate to each other. Different moral systems give rise to different variations of cooperation. And again it seems self-evident that certain ways for humans to relate will tear the fabric of society apart, returning humanity to a “savage” (and more endangered) state. What I can imagine is that there are several configurations that work equally well. Just like there are 6 ways of getting a sum of 7, there might be several ways to organize human society effectively. However, not an infinite (or even large) number of ways. If this is true, then to some extent morality can be relative. But, I emphasize, only to some extent. Ultimately, every moral system is measured against the same underlying process.

    Modern evolutionary studies of morality have mostly focused on how morals have emerged. Such studies seem not to have accepted that evolution has given rise to human consciousness. It’s almost as if most modern evolutionary ethics has wanted to bypass consciousness as an incomprehensible nuisance that doesn’t fit into a deterministic view of nature (despite the fact that such studies are themselves made possible by consciousness). Again, I emphasize that Darwin accepted that biological traits could be inherited and varied without knowing how, because he observed at a systemic level that such inheritance and variation took place. It’s like causing friction and noticing that it can produce fire. I don’t have to understand the exact details of combustion. Friction, heat and fire obviously have some kind of potentiating relationship. Discovering oxidation simply confirms (and deepens) that understanding. Similarly, I don’t have to “understand” consciousness and the generation of ideas in any great detail to know that it allows us to formulate guessing rules about how to behave in given circumstances.

    What my evolutionism proposes is that consciousness allows us to reflect on old forms of cooperation, attempt new variations and then measure their effectiveness against the real. This to me is what good ethical engineering should be. We are not bound to just identify various intuitive rules (that, note, have evolved in the course of human history) and then contemplate which ones work. We can, due to our consciousness, propose, adopt and then measure the effectiveness of new (and possibly less intuitive) rules. I see a strong similarity to science here. And yes, I think the “naturalistic fallacy” is bogus. Measuring morality’s effect on social stability is, in fact, what humanity has done on an ad-hoc basis since the inception of human society. But the problem has been that most frequently such rules have been treated as inspired by the Divine (or given by Logos). Call it “blind” moral evolution. Rationality is indeed a very powerful, if not necessary, process in proposing and evaluating variations. But, again, ultimately such proposals of reorganization will be measured not against a potentially fallible Logos but against reality as such. Which is why the wide reflective equilibrium behind a Veil of Ignorance is at best marginally interesting. Through the use of empirical methods, something that requires us to lift the Veil of Ignorance, we can better guess when we are headed for a bad outcome, despite all our wonderful mental cogitations behind the veil about fairness.

    Now, you can legitimately ask: what is the measure against which we evaluate? Well, it seems to me that ultimately it’s success at persisting against our overall background (i.e. success in the context of the phenomenal Universe as such). This is what I think we call survival. However, if we don’t want to go extinct before determining whether something works (which is a tad of a paradox), what do we do? Well, most human knowledge seems reduced at some point to guesswork (heuristics). There are good guesses, better guesses and outright awful guesses. I think the answer is that we can never know for sure whether we are headed for extinction, but by observing trends I think we can make pretty good predictions. We can become “sighted” evolution. Global warming would be such a guess. Yes, natural cycles of temperature change might be the underlying cause. And it deserves consideration. But currently, my own guess would be that the evidence is sufficient that not addressing global warming would be immoral. And that owning a 2001-model SUV when you spend 80% of your time in the city should, by current reasonable guesses, be considered immoral.

    You can claim that it needs “finer grain”. And I have agreed that the Basic Imperative as such is but foundational. But on further reflection, I believe my proposed Basic Imperative is somewhat independent of the granularity (since I believe the theory of evolution can be applied at all levels of decomposition). Whenever there is encapsulation into higher entities that can vary in their cooperative relationship, be it local homeowner associations or entire nations, the Basic Imperative serves as an informative idea for the proposed means of (re)organizing ourselves. Though we extrapolate “sub-rules”, such rules are informed by and measured against the Basic Imperative.

    You questioned my analogy of using the eye. The reason I used it was that on my first extended stay in the U.S., at the age of 15, I found myself watching a televangelist. This was in the mid-80s, the golden age of televangelism! The minister had an anatomical diagram of the eye, explaining to us its beautiful complexity. We were told that there was just no way evolution could explain the intricate relationship between its parts. No way! It was obvious evidence of God’s existence. Contrast that to your reaction. Unlike my 80s televangelist, I expected you to think evolution is very informative about why the eye is the way it is. But when we go to the ophthalmologist, we are unlikely to engage in a discussion about how each part of the eye evolved. Yet, if we engage in a discussion about the eye with a biologist, evolution will inevitably be used at some point as an informative concept. So are philosophers more like ophthalmologists or biologists? Crudely compared, my feeling is that legislators are to moral philosophers what practicing doctors are to biologists.

    Now, I have written extensively and still I have not addressed a very interesting question you asked in reference to my argument for Free Speech: does this mean we should expose people to high doses of X-rays to promote genetic variation? To you this may seem like an absurd question with an obvious answer, illustrating that evolutionism is incapable of providing sufficiently sophisticated answers. To me, it’s an interesting question, because considering it illustrates that evolutionism is indeed capable of answering this question in a way that confirms my gut feeling: no, we should not, and yet we should permit a great degree of Free Speech. But I will refrain from answering this question at this time. It’s too interesting to spend just a few cursory sentences on. It’s a very challenging, and profoundly interesting, question indeed.

    NOTE: The comment about mutes I don’t buy. We are speaking about the communication of ideas, not verbal communication. If you insist, we can call it Freedom of Expression. Any social system requires communication (implying expression). And without social systems, ethics is meaningless. And without such systems, this discussion would be completely pointless.

  36. Andreas, that’s a lot of interesting stuff. I’m tempted by a great many of your proposals. I am currently conducting research on some similar topics, which means my remarks have to be cagey. But I would like to make a few points.

    Despite your Basic Imperative, it seems as though the nub of your concern about morality is the idea of cooperation. So you write: “What my evolutionism proposes is that consciousness allows us to reflect on old forms of cooperation, attempt new variations and then measure their effectiveness against the real.” And conclude: “without social systems, ethics is meaningless.” If the real sticking points are the efficient forms of cooperation, as they seem to be, then I’m inclined to believe that any talk of biological evolution is eliminable from moral debate. It may very well be right that this is just ‘encapsulation’ of our theory of moral psychology and social development away from the rest of the biological evolutionary story. But then again, that’s nothing to brag about — to some extent, encapsulating our lived and real world notion of morality away from the bigger picture is exactly what I think we are permitted to do!

    Of course, we might continue to discuss the evolution of new systems of social cooperation, and those sorts of phrases retain some normative and explanatory power. And that may be one of the ways you would like to talk about evolution — as an analogy to biology, nothing more. But then notice that we’re really just using a new name to discuss the many faces of public utility. And, as I indicated in my illustration above in the X-Ray proposal, I am at the moment disinclined to think that the use of the language of mutation is an effective analogue to free speech (or free expression) for any normative purposes. At the very least, more would need to be said about this.

    Be that as it may, I’ll restrict the rest of my comments to your basic imperative, understood as being ultimately a biological proposal.

    Here are some of my thoughts. First, for instance, I’m inclined to disagree with your rough handling of Rawls. His formalism (the veil of ignorance) should not be dismissed out of hand. It is not just a formalism — it’s rooted in a conception of human nature, and purposed towards the development of a scheme for social cooperation. (IMHO, it is a mostly naive, back-of-the-envelope conception of human nature which he hides in the final sections of Theory of Justice. But it’s still there!) So the main attraction of your view — that it asks that we inform our oughts with ises — is something that Rawls would agree with.

    Second, your view implies that we can make sense of an ethics for species other than our own. Since your foundational criterion is survival, we should be able to deliberate ethically about the demands put upon any species at all. But I am not sure that we can go beyond human nature in our ethics — I am not sure that morality means anything, unless it follows from a certain human picture. This concern can be cashed out in two ways. i) Our knowledge of morality is limited to human (and near-human) experiences and behaviors. It doesn’t matter how the Eloi or Morlocks decide how to cooperate or survive, any more than it matters how cells and plants decide to survive — it’s all going to seem like a fishy amoral business to us, so long as they’re appropriately alien. ii) Our normative warrant is also suitably constrained. We have primary duties to our own species that we don’t have to other animals or to aliens. If you, me, and your dog are stranded in a jungle, I will be obliged to help you before I help your dog.

    The only way you can come around to agreeing with me on this last point (ii) is if you understand that the norms we use to make sense of species-preferences are a function of your basic imperative (i.e., your sub-rules). So, for instance, you might appeal to kin-selection as a legitimate norm. But while that may be true, two things follow: a) it will imply encapsulation, and what follows is exactly what I want — a morality for us, a population of thinkers, living now. b) We have foundational generalizations (like the principle of utility) that are just as general as an account of where morality comes from, and more normatively helpful in telling us where morality ought to go. And it seems to me that we ought to choose the norm that is both explanatorily general and most normatively helpful.

  37. [What] I want [is] a morality for us, a population of thinkers, living now.

    Benjamin, no system of morality is for us in the immediate sense, because this us is a fluid us, a constantly shifting concept. Ethics always deals with a stable Us, an Us in the abstract. This is the public good you mention. In what sense is the public good a concept in the here and now? Let’s use your raft at sea from earlier. On this raft is a very immediate us, those stuck on the raft. Given what we usually think of as a raft, there are presumably fewer people on the raft than the number of people we can personally relate to, the so-called Dunbar’s number.

    Should the fundamental rules of our cooperation depend on who is on the raft? For example, let’s say we found ourselves on the raft with Pol Pot after 1980, knowing what he had done. Should our system of morals be any different than if we found ourselves on the raft with Joseph Stalin some time after the Great Purge? I’d say no, because what matters is not only who they are in the immediate sense, but also what they are in the extended sense. We can imagine that, being on that raft, we find Stalin to be a paranoid paternalistic puff and Pol Pot a humble gentleman. Do their past crimes against humanity matter, or is it only their demeanor on the raft that matters? If we judge them for their crimes and act accordingly, then we are acting for reasons beyond the raft. Our behavior is about a distant place and a possible future for people we don’t yet and might never know.

    We could isolate our system of justice to the raft itself. In this case, whatever Stalin or Pol Pot did prior to finding themselves with us on that raft is completely irrelevant. We can now act for ourselves in the most immediate sense. How we help each other is no longer dictated by knowledge of the prior or predictions about what’s to come. We start, so to say, from a clean slate. We act by ourselves for ourselves. If we deem the leadership of Stalin to be such that it facilitates us getting to shore (resulting in our personal survival), then we are justified in letting Stalin be in command, even if it increases his chance of committing further crimes against humanity.

    So if you agree that we should act not just for those on our little floating world, drifting lost at sea, who should we act for? In other words, what is the public good? Any willing congregation we had previously joined or planned on joining once rescued? This would be odd, because this would seem to imply we could form congregations on the raft as well. There would be no morals per se. What if Joseph decides all of us are scum except his buddy Lazar Kaganovich, who also happens to be on the raft? Are the two of them justified in making a pact right there and then and clobbering us to death? After all, killing us might to them seem in the interest of their congregation (i.e. Stalin and Kaganovich). This just can’t be the case.

    The conclusion is that the public good is not in fact a group bound by a specific social contract. For if it were, Stalin and Kaganovich would be in the right as long as they honor their own pact and those that they had committed to up until that moment. If you were a Soviet communist and I was a citizen of the British Commonwealth, presumably it would be legitimate for them to kill me but not you. That just seems crazy. And yet, this is largely how we have operated until very recently. Only by signing the Universal Declaration of Human Rights did we officially recognize that there are rules that transcend any congregation. But, to be fair, theists have claimed this for a long time.

    In the end, then, the common good must be considered synonymous with humanity as such. Saying “it’s for the sake of the common good” is exactly the same as saying “it’s for the sake of humanity”. Since humanity is the same as our species, to say “act for the sake of our species” is the same as saying “act for the sake of humanity”, assuming the one saying it is a human. But my Basic Imperative has taken things one step further. It speaks not of our species, but our distant descendants. Does this make nonsense of the common good for which our morality exists? I don’t think so.

    If when we act morally, we act for the sake of our species (i.e. humanity), then why should we consider only our species as it exists at this time? After all, that is not what we are doing on the raft. We seem to have an obligation beyond the one to those poor souls caught up in the same misery as ours. Let’s say there are children on our raft. Do we have a greater obligation to sacrifice ourselves for them? I’m pretty sure I know what most parents would instinctually do. It does not even have to be their biological children. There is a visceral reaction that takes place when we consider children’s safety. Our sophistry dissipates and we return to our deep biology. It seems logical that though we are in the immediate sense acting for our children, we are acting for their potential grandchildren as well. Many who choose to have children, myself included, dreamily hope to one day be grandparents. Again, it’s not a rational thought, it’s an emotional desire deeply rooted in our biology.

    If I logically extend such deep-rooted desires, I end up at my distant descendants. Not my dog’s distant descendants, but mine. Yes, there is a dog on that raft too, and unfortunately, as much as I love cats and dogs, I suspect that you are correct. The choice between the children, my fellow adults and the dog would be an easy one. Engaging in cannibalism is far more difficult to imagine than making a meal of my good companion. But what if I was Dr. Han Fastolfe and had R. Daneel Olivaw with me on the raft? Would I really let Olivaw commit himself to the Third Law of Robotics for the sake of my own survival? Being an engineer and knowing how emotionally committed I am to my own creations, I just doubt it. Somehow I think higher consciousness is higher consciousness, whatever form it resides in.

    I can’t know for sure how Fastolfe would act, since I have never helped create anything with a consciousness like my own except my two sons, who are both in form and substance very much like myself. You mention the Eloi and Morlocks that were originally brought up by James (a.k.a. Curious), and then commented on in greater detail by me. You surmise that anything non-human will always seem somewhat creepy to us. I actually doubt this. What H. G. Wells imagined was what I would call a gradual devolution, not an evolution. We have a profound capacity to identify with other species, even octopi. People find octopi fascinating because of their intelligence and their ability to relate to their caretakers. Same with wolves, orcas, lions and parrots. I am by no means well versed in interspecies ethics. But friends of mine like Ralph Acampora take it very seriously. And I deeply respect their efforts to understand our ethical obligations to other animals.

    But my Basic Imperative is squarely directed at humanity’s (presumably evolved) distant descendants and none other. All it really does is implore us to think intergenerationally. I think this is what any moral framework should ultimately be dedicated to. I suspect that what you want is something far more immediate, pragmatic and clear cut, some rules that will tell you exactly how to behave when you consider being lenient towards yourself on your taxes. But before you get to this granularity, I think you ought to consider what it is morality can really achieve. I don’t know if the tax issue is one of them. But I do think that minimizing the chance of getting clobbered to death by Stalin and Kaganovich is indeed one of them (and thereby at least maximizing humanity’s chance of survival for a while longer).

  38. Andreas,

    I think you’re right to say that the relevant ‘us’ is a fuzzy concept. But just because the boundaries that tell us who counts as being part of ‘us’ are flexible doesn’t mean that there are no limits. The Eloi and Morlocks are of no concern to me, nor should they be. Moreover, and more importantly, it doesn’t mean that there are no necessary conditions that are anchored close to home. One may coherently claim both that “‘We’ is a concept with wide parameters” and also claim that “‘We’ always includes the population of the recent living”.

    When you bring up the raft example, your point, I take it, is to ask: who counts as being ‘on the raft’? So I say: the recent living. And when I talk about “the recent living”, I mean to speak of the generation that has most recently died out, all of those who are presently alive, and the generation that has yet to be born. So, the various dictators who purged people are on the raft, and may be judged on the raft, too. Of course, many persons in these generations will be morally vile, but that’s not important — the important thing is that they count at all.

    So then you might press on, and say: “Well, then, suppose we’re talking about somebody two generations ahead. Why aren’t they on the raft?” My answer is more epistemic than moral. We don’t know who they are or what they want (apart from obvious things, like a relatively habitable planet, etc.) And they probably don’t know who you are or what you want. Morality co-evolves with knowledge. The more we doubt the things we know, the more we doubt the things we know about morals.

    There’s pretty much no connection between advocating for public utility and advocating for Stalinist rule. Or at least it takes a lot of misguided thinking in order for that to come out as a viable conclusion. I’d be the last to argue along those lines. The raft analogy starts to unravel at this point, I’m afraid.

    I don’t have much faith in a social contract, either. I think the language of a social contract is an attractive metaphor when used to describe the normativity of meaning. A contract implies a prior equality in power between contractors, such that neither one is under duress. Everyone can reasonably be said to be equals when it comes to language, so long as we think that the point of language is mutual understanding.

    But other than that, the ‘social contract’ is largely a myth, or at least an overly ambitious trope that has dubious moral significance. After all, a person always has to ask, “A contract between who and whom?” Rawls will tell you that it is between yourself (behind the veil) and the order of the society that is to come. Hobbes will tell you that it’s a contract between every man and every man. Gauthier will say that the contract is between those who intend to cooperate. The reality is that the contract is made between active guardians of the society, the people who have a sense of their role in the social order. Anyone else is just a bystander, an externality, an eavesdropper on somebody else’s conversation. They’re on the raft without having any say about what happens to the raft.

    I am doubtful of the proposition that morality is legitimately known or acted upon for the sake of our species. Rather, I think we have an informed picture of humanity, and our morals change along with the picture. No more needs to be said in proof of this than to look at the facts about how we use the language of ‘humanism’. It’s obvious to the point of cliché that “Humanism” is an epic misnomer. It’s a name we use to describe the sunniest angle on being human, while ignoring the darkest traits. You can’t get away with shoving that much dirt under the rug unless it’s on purpose.

    Hence, strictly speaking, I don’t think we legitimately know or act upon things for the sake of our species. We do know and act on the basis of our picture of the species, though. And since that is so, I certainly don’t think we know or act upon things for the sake of the evolution of life. To reject the wedge is to close the door.

    The appeal to the wellbeing of human children is not compelling. Of course we have a sense of fellow-feeling to children, and duties of care — that’s not in question. What is in question, rather, is how we should feel about distant relatives. That is to say, ultimately, how we feel about the tiniest life forms. And the fortunes of a single-celled bacterium do not even enter into my moral deliberations except under the most unwinnable circumstances.

    For instance, suppose that we know that a massive solar flare is going to obliterate all complex life forms from the planet, and the only hope of survival is to bury extremophile bacteria miles underground, so that they will live on to evolve into complex life over the course of the ensuing million years. Under ordinary circumstances, the bacteria are obviously not ‘on the raft’, and we have no obligations towards them. But I would like to say that we have no moral obligations to the bacteria even in this case.

    Of course, when push comes to shove, it’s better to have at least somebody on the raft rather than nobody at all; life has some value. But life doesn’t have intrinsic moral value. Some of us will be tempted to say that by preserving the bacterium, our legacy will somehow live on. But — actually, no it won’t. The bacterium is an alien thing, and it will create more complex alien things, and we have no responsibilities to either.

    So I hope I’ve answered your question about what I expect to achieve. But now, I’d like to pose that same question back to you: what do you expect to achieve? If you expect your legacy to be carried on by a bacterium, then I think that’s about as sensible as saying that you have moral obligations to the wellbeing of the moon.

  39. So I hope I’ve answered your question about what I expect to achieve. But now, I’d like to pose that same question back to you: what do you expect to achieve? If you expect your legacy to be carried on by a bacterium, then I think that’s about as sensible as saying that you have moral obligations to the wellbeing of the moon.
          BENJAMIN S. NELSON

    Interesting. I suppose this exposes the fact that all moral frameworks are a human design. This seems to make some sense, since other animals don’t write things like the Bible. I do think it can be said, though, that other primates have been observed to create unwritten and yet transferable behavioral rules, albeit extremely simple social rules, and obviously not the kind of complex forward-looking rules we humans have dreamed up to guide our societies. Still, our capacity for morality is grounded in something deeper than mere human reason. Morality has a deep biological grounding.

    I suppose I could say that “my interest is in the truth”, but such statements are a bit vapid. There is no denying that all moral frameworks are a construct. It is what “we” think ought to be, not what is. In this sense it’s like a house. You don’t have to build a house. You can live in a cave, or under the bare sky. But if you have the precursive know-how for building what we would call a house, it’s a good thing. And once you decide to build a house you realize that there are good ways and bad ways to build structures. It’s not like dreams where you can suspend yourself in mid-air or survive after falling off a precipice. Quoting from Matthew, Chapter 7 (King James Bible):

    7:24 Therefore whosoever heareth these sayings of mine, and doeth them, I will liken him unto a wise man, which built his house upon a rock:
    7:25 And the rain descended, and the floods came, and the winds blew, and beat upon that house; and it fell not: for it was founded upon a rock.
    7:26 And every one that heareth these sayings of mine, and doeth them not, shall be likened unto a foolish man, which built his house upon the sand:
    7:27 And the rain descended, and the floods came, and the winds blew, and beat upon that house; and it fell: and great was the fall of it.

    So, the first thing I want is a house built on a solid foundation, the is. The second thing I want is a house that will weather storms, the might. After all, what’s the point of having a house that leaks when it rains? Or blows over at the first strong gust? Preferably our house will weather even a tornado. And ultimately, if I’m allowed to dream just enough to set the seeds for something quite new, even an immensely destructive solar flare. I assume we will have no disagreement about the need for a sturdy house. What we might disagree about is how sturdy it can reasonably be. Again, we can dream about building an elevator to space. But the difficulties of such an endeavor are daunting, so daunting it might seem impossible. If a space elevator is not impossible per se, at least it might be impossible at this stage of our evolution.

    Since we are building something, we do indeed have to ask who we are building this something for. You seem to want to build a house for yourself and your fellow villagers. To be fair, not just your fellow villagers but your fellow city-dwellers. It’s not just for the people you know and greet on a daily basis, and it might even be reasonable to say that what you are interested in is a house for a whole nation. My ambition is perhaps a little bit more crazy. I want to build a house not just for them but for nations to come. Nay, make that the species we may evolve into!

    Of course, that means my house must be rather flexible. And, of course, I realize it may, like all other human constructs, be torn down and replaced by something even better at some point. But hopefully at least it will have served as a good starting point for the next big project. This, of course, goes for your house as well. And you might ridicule my architectural plans as unfeasible. Better to build a sturdy 12-story building than a massive crumbling Tower of Babel!

    All right, so maybe I can’t build the actual house for everyone. But I might be able to create a recipe from which all future foundations are concocted and poured. Concrete is a very ancient recipe. Add Joseph Monier’s reinforcements and you get something that will probably remain a part of how we construct buildings for a very long time into the future. But we might eventually see it replaced in some instances with fibre-reinforced plastics. Importantly, once you find a good foundational recipe that has both tensile and compressive strength before you start building, you’re off to a good start.

    It’s really the recipe for a good foundation I’m looking for. Same as Bentham and Rawls, I suppose. Being convinced that neither has invented concrete, I have proposed my own recipe. Perhaps my foundational recipe (the Basic Imperative) has neither tensile nor compressive strength. This is what I have set out to find out. So far I have not seen it crack under pressure. There might be some shearing though. If I understand correctly, so far you have (1) agreed that things cannot have value without life, that it is a prerequisite; (2) claimed that survival as an imperative is insufficient because life does not have “moral” value on its own; (3) claimed that we have no responsibility to anyone beyond perhaps our actual grandchildren. As far as #3 goes, I think you are even saying our responsibility extends only to co-existent human beings.

    The greatest shearing effect occurs for #3 when we push it into the absurd. James (a.k.a. Curious) has been of a similar opinion to you, that somehow it is nonsense that we should have duties towards non-existent beings in a distant future. You don’t even seem to allow us to start seeing how far we can extend ourselves into the absurd, because to you children seem no more compelling than, say, old ladies in wheelchairs. I don’t share this sentiment at all, so for you to just say “it’s not compelling” is not enough for me. I’m forced to address this, because otherwise I can’t even stress-test our responsibility to future generations. Which is really what I am interested in: an intergenerational ethic of sustainable growth (not “growth” in the sense of more people, but more conscious and intelligent life activity, itself a necessity for increasing the chance of our continued existence).

    So how about a good old thought experiment: let’s say we find ourselves at a raging stream in the wilderness with some fellow hikers. A severe storm is approaching and we have to get back to safety. To get to safety, we have to cross the stream. The waters are rising rapidly by the minute. In our company of hikers, there are: a very fat middle-aged person (every thought experiment needs one); a senior who walks with a cane; a five-year-old child; and a pregnant woman in the third trimester. Any moment now the waters will be uncrossable. Being in the company of good people, we’re going to assume that we are willing to help each and every one. For the sake of the experiment, we are going to assume we are able-bodied adults who have the easiest time crossing the stream. We can only help one person at a time. In what order should we help them?

    Do you agree with me that it’s between the pregnant woman and the child? I’d say the child first, because the woman is presumably still stronger despite her cumbersome physique. She should be able to handle a stronger stream. If you agree with this order, then there must be some reason beyond just the fact that they need help and are our fellow hikers. Even if we explain it by their higher potential productivity for the common good, we are in effect saying it has something to do with the fact that they represent our future.

    This may seem like an antiquated question. But it is a real one, and apparently has had real historical consequences. I don’t think it can just be brushed off as old social norms. I’ve stayed clear of the gender issue so far but, I’m sorry fellow men, having now mentioned it I think 8 women and 2 men is better than 8 men and 2 women. For obvious reasons and not personal ones! I just increased my chance of going down with a sinking ship from 1/5 to 4/5. No love for me. But the optimal solution is probably 5 men and 5 women. So, as far as chivalry goes, we’ll stay away from the whole gender thing.
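
    (To make the arithmetic behind those fractions explicit, here is a rough sketch in Python. The population figures are my own assumption for illustration: ten men and ten women competing for ten lifeboat seats, which is one way to reproduce the quoted 1/5 and 4/5.)

        # Hypothetical reconstruction: 10 men, 10 women, 10 lifeboat seats.
        # A given man goes down with the ship if he is not among the men saved.
        def p_man_goes_down(seats_for_men, total_men=10):
            return (total_men - seats_for_men) / total_men

        print(p_man_goes_down(8))  # 0.2 -> 1/5 when 8 seats go to men
        print(p_man_goes_down(2))  # 0.8 -> 4/5 when only 2 seats go to men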

    This concern with the future matters. If we have no intergenerational responsibilities, then we end up with a kind of quarterly-earnings problem. As long as I plan on exiting before a company goes belly up, the company’s future is someone else’s problem. Focusing only on quarterly earnings, my investments are not for the long haul. Burning fossil fuels? Not my problem, buddy! My kids’ problem perhaps, but not mine. Missions to Mars? Who needs them! Not me. I’ll never go there, for sure. Any project that has pay-offs beyond 25 years? Highly dubious. Large Hadron Collider? Are you kidding me! Why are you stealing my hard-earned cash! NASA and ESA, I have two words for you: disband immediately. And I hope, for my own sake, there’s no one receiving tax-payer money to do research in the field of philosophy. Pay-off time? Wah, like, what, 100 years? 250? 500 years?? Get out of my hut and leave me alone.

    Seriously, though, it does indeed matter in terms of how we legislate and spend our tax money. Without an imperative that directs us towards intergenerational thinking, we can gradually increase our public deficit as much as we think might be workable before the inevitability of our own death forces us to bail. We will always be hedging our bets a bit. But if you’re a generation like the Baby Boomers, the numbers are on your side. By the time most of you have thrown in the towel, it’s Generation X’s and the Millennials’ problem. Hmmm, I think that is me (and, I believe, you too). This should not be construed to mean I’m promoting this kind of thinking for my own reasons. I think the Baby Boomer deficit issue is already too late. We are the ones who will have to deal with it. There’s no preventing it anymore. I am thinking about my own children though. And the grandchildren I don’t yet have. You asked the earlier question as if leaving a legacy was for me. That is only partly true. I will not be around if there are any benefits from my way of thinking about morals. What I can admit is that I do this for “my last 8 minutes”. Once I lie there knowing that my brain is in all likelihood about to permanently shut down, I want to feel as if I did everything I could for those who will come after me. It’s true. And therein is my enlightened egoism.

  40. s. wallerstein

    Andreas:

    If I may weigh in on this discussion, let me say that I feel obligations towards those living now and for their sake, to their children. That is, to the children of my children (and others of their generation), not because my yet unborn grandchildren matter to me, but because they will matter to my children and my children matter to me.

    After that, I don’t care much what happens to humanity or to the universe.

    If my genes were to die out after my children, fine. I feel no special attachment to my genes nor to my family as a genetic type. Not that I think that my family is especially messed up, but that there is no reason for our continued presence in the universe, after my children have lived out a normal, hopefully happy, lifespan.

    Maybe I’m missing the gene which programs people to care about their genes. If so, I suspect that there are many more mutants like myself wandering this planet.

  41. Wallerstein,
    First, let me point out that I think considering evolution only in terms of “genetics” is incorrect. As mentioned earlier, epigenetics seems to indicate that this is not a correct model of evolution. So it might be better to speak of heritable traits. And we are speaking of our distant descendants, not exclusively mine or yours. We humans are social beings that cooperate through flexible specialization (unlike bees, which cooperate through rigid specialization).

    That said, systems that have too many people with traits like yours should, theoretically, become less prevalent in the long run. You do admit to caring to some extent about future generations (for whatever reasons), so your traits have a fair chance of being perpetuated. Unchallenged by, say, massive solar flares, a system with lots of entities like yourself might be around for a very long time. But in the long run, people like you should eventually become less common. Unless individuals like yourself serve some kind of buffering function against trying to think too far into the future.

    Shearing occurs when we, so to speak, get ahead of ourselves. We can have fancy goals. But we constantly have to keep the uncertainties of the future in mind. To act as if we knew with any likelihood what the weather will be like on April 21, 2050, is (at least by current models) quite questionable. Let’s call people like yourself the realists. And we’ll call people like myself the dreamers. Too many dreamers and we’re in trouble. We never balance our checkbook. Lots of opportunities, too many expenditures => extinction. Too many realists and we’re in trouble as well. We keep balancing our checkbook. Solid P&L, no new opportunities => extinction.

    Does that mean there should be two Basic Imperatives, one for the realists and one for the dreamers? Of course not. We are not at war with one another and should not encourage it. We’re humans, we cooperate. We are all part of the same system, our society. We may both agree that it makes sense to think in as long a term as possible; just how long is something we argue about, and it may depend on what we are considering. Perhaps the realists are just more likely to focus on sustainability, and the dreamers on growth. Both are a necessary part of fulfilling the Basic Imperative. And since we are flexible specialists, I would assume there is a little bit of a realist and a dreamer in most of us. Most people probably fall within a standard deviation of the midpoint, some with a bit more realism, others with a little more dreaminess.

    Perhaps I am several standard deviations out on the dreamer side. But in strong cooperation with an (optimistic) ultra-realist, I believe people like myself can do very surprising things. And then one day what the sane dreamer and the optimistic realist have accomplished will seem like the most obvious and natural thing in the world. Of course we can build things like the Burj Khalifa! Why shouldn’t we be able to? But, in the short term, mud huts work too. If you find yourself lost in the wilderness, building shelters with a few branches can be very useful. It won’t help our grandkids much on Callisto though.

  42. “a very fat middle-aged person … a senior that walks with a cane; a five year old child; and a pregnant woman in the third trimester… In what order should we help them? Do you agree with me that it’s between the pregnant women and the child? …If you agree with this order… Even if we explain it by their higher potential productivity for the common good, we are in effect saying it has something to do with the fact that they represent our future.”

    You’d say the child first because the woman is presumably stronger? Oh Andreas, I think the thought experiment works best if we assume they all have an equal chance – then you are forced to make a principled choice between the two. Without any religious anti-abortionist thinking, it seems reasonable to say the third-trimester woman counts as two human beings. Surely you can’t say the ‘thing’ inside her isn’t a human being? You can reasonably deny it’s a person, but then you seem more concerned with humans or possible human descendants than persons anyway. Otherwise there’d be space for intelligent aliens, sentient computers of alien design, highly evolved ‘Planet of the Apes’ chimps and long-lost Neanderthals in your Basic Imperative (does shared genus or order help?). All of them seem able to ‘out-value’ the Morlocks to the moral Time Traveller forced to choose which species to save. Perhaps we ought to maximise the survival chances of what should be important to us – personhood, morality, intelligence – and not the continuation of the human ‘bloodline’? (I appreciate you can include human-designed robots amongst our descendants.)

    In any case, choosing to leave the middle-aged person and the senior till third and fourth doesn’t have to be about their lack of importance to the future – it can just be about the fact that they have had a share of life that the child and the mother + child-in-the-mother haven’t. I don’t think we need be valuing the child and the mother + child-in-the-mother over the other two because they ‘represent our future’ – we don’t need to be putting values on their lives like that at all – it’s just a matter of who has had a ‘fair’ share of life. The middle-aged obese person and the elderly person should, and probably would, insist the other two go first on exactly that basis.

  43. s. wallerstein

    It seems that we are going to overdose on obligations.

    We already have obligations to our family, our neighbors, our society, sentient beings, the environment, and who knows to whom and to what else.

    I’m tired of fulfilling obligations.

    Now some want to add on obligations to generations which will be born well after our lifetime.

    Our whole lives will be spent fulfilling obligations.

    There will be no more free time, and free time is what makes life worth living.

  44. Amos,

    Ah yes, duty can get in the way of the good – ethically valuable – life. Of course you could go down the Ayn Rand route and deny that any of us has any ‘duties’ to anybody else at all. But assuming that doesn’t appeal, we tend to take the view that (outside family/friend/tribe circles) we can have duties to other beings and especially other persons. As a matter of contingent fact, all persons we know of are also human (though arguably not all humans are necessarily persons). But we can imagine non-human persons, and all I’m really saying is that persons – thinking moral beings with life-plans, the ability to suffer etc. – matter, but their bloodline or genetic/design origins don’t. I’m not suggesting we assume more duties, just that we assume that what duties we do have to other persons apply because they are persons, not because they happen to be human. (And that, speaking in science-fiction thought-experiment terms, in principle, what is best about humanity may be better preserved by beings with no genetic or design link to us – ‘Planet of the Apes’ chimps or aliens instead of our distant descendants, who may well be ‘devolved’ Morlocks.)

  45. s. wallerstein

    Curious:

    I recognize that I have duties and responsibilities towards others and towards the environment, but there are days when I’d like some time off, time off not to be “evil” or to do harm, but simply to lie back, listen to music and read a good book, time off not to have to respond to the needs of others.

  46. Amos,

    I’m quite sure you’ve more than earnt your days off.

    Those who want to sign away all their material goods and leisure time in return for a life of constant duty are free to do so.

    From a comfortable chair, I raise my glass to them, and, of course, yourself: ¡Salud!

  47. Andreas, I must say, this is an enormously interesting exchange. Thanks so much for your thoughts.

    But straight to business. I endorse something like your propositions (1) and (2), but I have not endorsed (3). That is the proposal “that we have no responsibility to anyone beyond perhaps our actual grandchildren… our responsibility goes only as far as having responsibility to any co-existent human being”. Presumably this is an interpretation of my restriction of moral concern to the recent living, meaning:

    when I talk about “the recent living”, I mean to speak of that generation that has most recently died out, all of those who are presently alive, and the generation that has yet to be born. So, various dictators are purging people on the raft, and may be judged on the raft, too. Of course, many persons in these generations will be morally vile, but that’s not important — the important thing is that they count at all.

    I’d like to say a few things about a couple of portions of that passage.

    First, a clarification. I was not at all clear about what I meant when I said “the generation that has most recently died out, all those presently alive, and the generation that has yet to be born”. I should not have used the word “generation”, since it is an obsolete concept that makes for crap social science. More importantly, that word is bound to trip up what I mean. For all practical purposes, here’s roughly what I mean: the contemporaries of the oldest person to live up to the day of my birth, the entirety of people who live contemporaneously with me, and the contemporaries of the oldest person to be born around the time that I die. In my case, assuming that the oldest person lives 105 years, and given that I was born in 1982, and assuming I live up to 2052 at the sturdy old age of 70, that means that I have sensible moral concern over the legacy and fortunes of those who lived at any time during the period of 1877 – 2157. That’s quite the long haul, and hopefully it is compatible with your concern about quarterly thinking.
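    For readers who want that window spelled out, here is a minimal sketch of the arithmetic just described, in Python. The 105-year maximum lifespan, the 1982 birth year, and the 2052 year of death are simply the assumptions made in this comment, not established figures.

        # Sketch of the "window of moral concern" arithmetic described above.
        # All three numbers are the commenter's own assumptions.
        MAX_LIFESPAN = 105  # assumed maximum human lifespan, in years
        BIRTH_YEAR = 1982   # year of birth
        DEATH_YEAR = 2052   # assumed year of death (age 70)

        # Earliest contemporaries: the birth year of the oldest person
        # still alive on the day of birth.
        window_start = BIRTH_YEAR - MAX_LIFESPAN   # 1877

        # Latest contemporaries: the full lifespan of the oldest-living
        # person born around the time of death.
        window_end = DEATH_YEAR + MAX_LIFESPAN     # 2157

        print(f"Window of moral concern: {window_start} - {window_end}")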

    Second, I do not mean to say that whoever counts as being part of our moral concern is of greater or lesser moral quality. So, for present purposes, I have pretty much nothing to say about your river saving example — all those infirm candidates are moral options. In other words, I might agree with you that it is best to save the child and mother, without it making any difference to my point. If someone were to choose the grandmother, they might be failing to do something that they ought to do (by failing to save the mother and child), but they’re not thereby doing something that they ought not to have done. Unless, of course, we think grandmothers are not morally salient, but that is a very mean attitude towards granny.

    I’d also like to say that I don’t think it is nonsense to care about the fates of persons born in 2158. It is, rather, pointless, just as it is pointless to care about the fate of the moon. I like the way that S’Wally put it — we’re potentially drowning in obligations already!

  48. s. wallerstein

    Curious:

    Thanks.

    ¡Salud!

  49. For all practical purposes, here’s roughly what I mean: the contemporaries of the oldest person to live up to the day of my birth, the entirety of people who live contemporaneously with me, and the contemporaries of the oldest person to be born around the time that I die […] hopefully it is compatible with your concern about quarterly thinking.

    Benjamin, it really depends on what is knowable. Knowability does seem like one of the main critiques of the Basic Imperative. The other seems to be that it’s insufficiently granular. But for now let’s deal with the critique of knowability. So far my argument has been that we are highly conscious social beings that can, through abstraction, extend our thinking beyond Dunbar’s Number. And that the logical consequence of the ability to make such an abstract extension leads us to a responsibility for future generations that are as yet still simply a statistical possibility.

    Your claim, and James’s (a.k.a. Curious’s), is that I have driven our moral responsibility ad absurdum. I disagree. There are cases when it’s a complete unknown what might benefit the people who will come after you, whether it’s the next guy who walks around the corner or some as yet unborn person. Then I will say that whatever you do is beyond the realm of morality. But if there is a way of informing your guesses about what is existentially beneficial to the persons who will follow you, then you do have a responsibility to act according to the Basic Imperative.

    I invite you to consider a thought experiment: the Distant Doomsday Test.

  50. Wallerstein,
    No worries. The Distant Unborn will not come hounding you for your laziness. Relax, have a vacation. Maybe a glass of Merlot, if you like that sort of thing. And as you stare at the beautiful night sky and watch the meteorites burn in the atmosphere, who knows, you might have the Eureka of the millennium, more valuable to our distant descendants than any of our strained thoughts in this modern-day olive grove.

  51. s. wallerstein

    Thanks, Andreas.

    Yes, I’m a wine drinker. I live in wine country, Chile, near the wine growing region. We even grow olives farther to the north, although what fed and watered the Greek sense of wonder does not seem to produce much philosophical speculation here.

    To your health and to the health of the unborn, cheers!

  52. Hi Andreas,

    ‘Your claim, and James’s (a.k.a. Curious’s), is that I have driven our moral responsibility ad absurdum’

    I wouldn’t want to say you have driven moral responsibility to absurdity, Andreas, no. I don’t see that your theory necessarily makes unreasonable demands. Mill didn’t expect us to be constantly trying to calculate what would generate the greatest happiness, or constantly acting with the conscious intention of maximizing the general happiness. I don’t see that your imperative has to be taken to demand more than can realistically be expected. You extend the domain of those we should have concern for, and I don’t think that’s wrong. I see no absurdity in thinking we may have a moral responsibility to our possible descendants – be they biological or cybernetic.

    However, I don’t want to admit at the outset that moral significance is determined by membership of our species or of a descendant group. Personhood grants moral significance, not bloodline (or design history). And thought experiments suggest we could have good moral reason to maximize the survival value of the descendants of a species other than our own – the survival of the evolved apes or aliens outweighs the value of the survival of the Morlocks or Terminators.

  53. Andreas, you pose a fairly significant challenge there. Things look grim for my view. There are, however, a few things that have to be considered, which might not make it so unattractive after all.

    I need to make two distinctions. First, there’s a distinction between moral concern and moral power. To have a moral concern is to have a sense of duty, and to have a moral power is to assume you are able to satisfy that duty through some behavior. There is also a distinction between ethics and morality. Morality involves deontology, the right, the rules we ought to follow; ethics involves the ethical good, the virtuous.

    There is an intimate relationship between these two things. I don’t think that we ought to morally care about things which we are powerless to influence. We surely ought to care, ethically, about things that are currently impossible to influence, and strive to gain the powers needed to influence them. For example, someone trapped in a concentration camp ought not lose their ethical resentment of their captors, even if they have been stripped of all moral options.

    But you cannot morally care about something impossible without thereby making morality impossible. To be morally overambitious is to turn morality into a vain state of helplessness. In other words, there is something right about the bleeding heart (since it motivates us to transform ourselves into better moral animals), but there’s also something very very wrong with it (since it has the capacity to annihilate morality altogether). A bleeding-heart approach to ethics is potentially lethal to moral agency as such.

    Next, I need to point two things out about my clarification. I said that, for purposes of making an analytical approximation, I have a moral concern for those who live during the period of 1877-2157. That means that a baby can be born on December 31, 2157, and I might morally care about what happens to that child well into the rest of their life. So I suppose I have a moral concern for yet another 105 years, for that particular child. But this is probably already too generous, if anything; for whenever you claim to have a moral concern over something, you are claiming that you trust that you know what the right thing to do for them is.

    The seemingly absurd result is that a child born on the very next day is beyond my moral concern. And even if I find a way of caring about that baby, then the people at the end of the world in your thought-experiment in the 24th century are certainly not a part of my moral concern. Why do I say that? Because, for the most part, I do not know what the people of that century want. I have a sense of the human picture that I glean from the people of today, and I project that picture onto the future, as a part of my ethical attempt to stretch myself beyond my moral limitations.

    But it’s a crap shoot. The future may be terrible. Our descendants may be unrecognizable to us as moral people. They may hunt each other for sport. They may have genetically modified themselves into hapless semi-sentient pleasure-machines. They may be Morlocks or Eloi. I owe the future my best, only to the extent that I trust that they are the sorts of people I can recognize as worthy of flourishing. I may even proceed in acting upon that assumption. But it may be the case that our future is a mistake, and (at the outermost extreme) that our descendants are not even worthy of survival. This curbs the sense that I have a moral concern to distant generations.

  54. […] thought experiments suggest we could have good moral reason to maximize the survival value of the descendants of a species other than our own – the survival of the evolved apes or aliens outweighs the value of the survival of the Morlocks or Terminators.
          CURIOUS (a.k.a. JAMES)

    James,
    I understand your concern. It’s the same concern any parent has. We do our best to educate our children. Then one day we find them standing in the parlor dressed up in a neatly ironed blue shirt, their red beret slightly tilted to the side, as they chant the Cara al Sol with an extended right arm. Our Republican blood boils. “What in goodness’ name did we do wrong?!?”, we lament to our neighbor’s son.

    It’s the risk we take when we give birth to conscious beings capable of agency. But if I am right that we should care for our distant descendants, as I think my Distant Doomsday Test indicates, then we have an obligation to at least try to be the best great-grandparents we can. And note that the values and knowledge that we pass on to our descendants are more important to their future survival than our specific genetic codes. I have explored this in some more detail: Distant Descendants, Dystopia or Utopia?

  55. Hi Andreas,

    Absolutely, the concern that our distant descendants might turn out to be terminators or morlocks does not free us from the obligation to try and pass on knowledge, values and a habitable planet – and of course we have some influence over whether they turn out to be terminators or morlocks or not. I also agree that we might have some obligation to start working on ways to save the planet from the meteor disaster that looks almost certain to occur a few centuries hence.

    My true sympathies are with the Martians though, they’re a much more worthy bunch.

  56. I have to admit, Curious, that I have been rather cruel towards our dear Martians since the day I began blogging in earnest: Dirty Space Probes.

  57. But it may be the case that our future is a mistake, and (at the outermost extreme) that our descendants are not even worthy of survival. This curbs the sense that I have a moral concern to distant generations.
          BENJAMIN S. NELSON

    Yes, this is to some extent true, Benjamin. As I commented to James, sometimes our children are not what we hoped for. And yet we do our very best.

    I don’t know if you have children. But I can tell you that in my own experience, once I had children, my sense of responsibility shifted dramatically. It’s not that my emotions and instincts suddenly trumped my rational convictions. Rather, what I had only intellectually suspected prior to the birth of my first son Julien was suddenly not worth questioning, unless someone were to present me with some unexpected argument seriously contradicting evolutionary theory. It seemed so deeply self-evident. My opinion is that our intuitions must confirm our reasoning, which must in turn be confirmed by empirical evidence. Loosely speaking, rationality and empirical evidence form the base on which the triangle stands. Intuition is the third measure, which should drive us to reexamine the other two should the three be out of balance.

    I would not emotionally or rationally hesitate to sacrifice my life for my two sons. Empirical evidence seems to give credence to this anticipated behavior of mine. I wonder how Ayn Rand would have changed her philosophy had she chosen to have children. I’m sure she would have just adjusted Objectivism slightly to make way for some kind of “extended” enlightened egoism. Even I have to appeal to some extent to selfishness in explaining why we should care about other species. And my evolutionism is partially based on organizational self-directedness. Nonetheless, the concept of encapsulation provides unrestricted extensions of this so-called “enlightened egoism”. Ultimately, enlightened egoism represents a reasonable position that should not be ignored.

    I have always been a serious individualist. I don’t think you can challenge convention, and thereby provide variation, without a hint of the character that Ayn Rand excessively adulates. Yet mindfully having children has deeply affected my sense of the profound extent to which I am a social being. Wallerstein claims he cares for his grandchildren only because his children care for them. I’m not going to claim he is deluding himself. If this is what he claims he feels, who am I to say otherwise? What I can claim, however, is that I have anecdotally observed in my own parents a deep longing to bypass me and be directly connected and helpful to my children, that is, their grandchildren. And my grandmother Inga (a quasi-surrogate mother to me) would always ask how her great-grandchildren were, always speaking sweetly of the few times when she had met them.

    Caring for your extended kin makes sense rationally, emotionally and empirically. Even if they are, as they were for my late and beloved grandma Inga, somewhat distant. Blessed be the legacy of her caring.

  58. Andreas, I couldn’t disagree with anything you’ve said there, and wouldn’t want to. Enlightened egoism is perfectly right and good. I think all your intuitions, expressed in the latest post, are ones that I share. But they have no relation to the people of the 24th century, and no defence of Morlocks.

  59. […] they have no relation to the people of the 24th century, and no defence of Morlocks.

    Well, here I would somewhat disagree, Benjamin. Perhaps not the Morlocks (which are separated by at least one degree of speciation). But my grandma Inga’s caring for my two sons, whom she hardly ever met, is indicative of some kind of directionality towards the 24th century. Let’s say that you suddenly found yourself in the 24th century and your great-great-whatever was a wrongheaded futuristic Falangist fighting a “wise” alien species. Would you feel compelled to try to, through some kind of diplomacy or such, convince your great-great-whatever of the benefits of peace? Perhaps even of the benefits of abandoning their Falangist position? Or would you just join forces with the “wise” alien species to exterminate your great-great-whatever for the “common good”?

  60. But you cannot morally care about something impossible without thereby making morality impossible. To be morally overambitious is to turn morality into a vain state of helplessness. In other words, there is something right about the bleeding heart (since it motivates us to transform ourselves into better moral animals), but there’s also something very very wrong with it (since it has the capacity to annihilate morality altogether).
          BENJAMIN S. NELSON

    Yes, but… How do you determine that something is unfeasible, Benjamin? No doubt there are things we obviously, empirically, cannot do. We cannot fly by flapping our arms, no matter how hard we try. But does that mean we cannot fly? We obviously cannot split atoms with a karate chop. But does that mean we cannot split atoms? I could go on and on. Yes, indeed, some of the things that come to my mind must have seemed like a foolish impossibility to some of our distant ancestors. But what can I say? If it weren’t for their hard work at trying the seemingly impossible, there would be no International Space Station orbiting the Earth.

    I have to say, thank goodness for the “hopelessly” optimistic! Bless ‘em all!

  61. s. wallerstein

    Andreas:

    Some people are more into children than others.

    I don’t particularly enjoy the company of small children.

    I prefer kids after age 14 or so, when I can begin to talk to them about politics or ethics or books.

    I don’t have any grandchildren yet, but I suspect that my interest in them will increase, if I live so long, when they begin to ask questions the answers of which, if there are answers, interest me too.

    However, my basic point here is that there is nothing “genetic” in being especially fond of one’s grandchildren, unless I’m missing a gene, which may be the case.

    As to my own son, yes, I prefer to spend time with him now that he’s an adult, capable of reasoning with me and against me and, above all, capable of paying his own bills.

  62. Some people are more into children than others.
          S. WALLERSTEIN

    True. And as social creatures we don’t all have to be into nurturing young children to make society work. We don’t even have to be fertile, as is demonstrated by the sterile members of eusocial systems. But we do need to fulfill certain obligations to society as such (the so-called “common good”). Without that sense of obligation, society begins to falter. One such obligation is to somehow assist in performing the tasks necessary for sustaining the system as such. My claim is that, in the end, our efforts have a directedness towards those who will follow us, towards the future. Ultimately, it is not for ourselves and by ourselves that we act, even if “enlightened egoism” and autonomy may be partial mechanisms by which such altruism and sociality are explained.

    We all need some rest at times. It’s even naturally built into our circadian rhythm in the form of sleep. And some of the most valuable ideas for society have purportedly occurred in a state of mental relaxation. And if we didn’t first take care of ourselves, then there would be no us to speak of and hence no society as such. But the guy who always snoozes on the log as the others schlep it through the mud, and then demands a good place by the bonfire, is a real problem. If they don’t have the strength to carry the log, fine, but then they should try their very hardest at making themselves useful in some other way. Perhaps they can gather leaves and branches for starting the fire. Only a minuscule fraction of people really have no means whatsoever of making themselves useful to the rest of us, both those present and those who will follow.

    Even if you don’t have the natural disposition to baby infants, you have a responsibility to create a social context in which our infants can grow and mature. Perhaps you’re the guy who takes over when the babying is done. Great! Our youth needs to be educated in the sophisticated arts and crafts that will make them outshine us and create a far more sustainable society. Maybe you’re even the wise old man who sits by the tree with a glass of tea and dispenses wisdom to the middle-aged. Wonderful! What I would say is that the Basic Imperative seems to hint that the old have an obligation to help the young. And the young would be foolish not to listen and not to let themselves be helped by the old.

  63. In China, people never knew the word “Utilitarianism” before the ’80s. But right now, even two-year-old kids know Utilitarianism very well.