Tag Archives: John Stuart Mill

Slavery: Consequences & Status

While there is a multitude of moral theories, two of the big dogs of ethics are utilitarianism and deontology. John Stuart Mill presents the paradigm of utilitarian ethics: the morality of an action is dependent on the happiness and unhappiness it creates for the morally relevant beings. Moral status, for this sort of utilitarian, is defined in terms of the being’s capacity to experience happiness and unhappiness. Beings count to the degree they can experience these states. Obviously, a being that could not experience either would not count—except to the degree that what happened to it affected beings that could experience happiness and unhappiness. Of course, even a being that has moral status merely gets included in the utilitarian calculation. As such, all beings are means to an end, namely maximizing happiness and minimizing unhappiness.

Kant, the paradigm deontologist, rejects the utilitarian approach.  Instead, he contends that ethics is a matter of following the correct moral rules. He also contends that rational beings are ends and are not to be treated merely as means to ends. For Kant, the possible moral statuses of a being are binary: rational beings have status as ends, non-rational beings are mere objects and are thus means. As would be expected, these moral theories present two rather different approaches to the ethics of slavery.

For the classic utilitarian, the ethics of slavery would be assessed in terms of the happiness and unhappiness generated by the activities of slavery. On the face of it, such an assessment would seem to result in the conclusion that slavery is morally wrong. After all, slavery typically involves considerable unhappiness on the part of the enslaved. This unhappiness is not only a matter of the usual abuse and exploitation that a slave suffers, but also the general damage to happiness that would tend to arise from being regarded as property rather than a person. While the slave owners are clearly better off than the slaves, the practice of slavery is often harmful to the happiness of the slave owners as well. As such, the harms of slavery would seem to make it immoral on utilitarian grounds.

It is important to note that for the utilitarian the immorality of slavery is a contingent matter: if enslaving people creates more unhappiness than happiness, then it is wrong. However, if enslaving people were to create more happiness than unhappiness, then it would be morally acceptable. The obvious reply to this is to argue that slavery, by its very nature, would always create more unhappiness than happiness. As such, while the evil of slavery is contingent, it would always turn out to be wrong.

Another interesting counter is to put the burden of proof on those who would claim that a happiness-generating slavery would still be wrong. That is, they would need to show that a happy system of slavery is morally wrong. On the face of it, showing that something that created more good than bad is still bad would be challenging. However, there are numerous intuition arguments that aim to do just that. The usual approach is to present a scenario that generates more happiness than unhappiness, but intuitively seems to be wrong—or at least makes one feel morally queasy about the matter. Ursula K. Le Guin’s classic short story “The Ones Who Walk Away from Omelas” is often used in this role. There are also other options, such as arguing within the context of another moral theory. For example, a natural rights theory that included a right to liberty could be used to argue that slavery is wrong because it violates rights—even if it happened to be a happy slavery.

A utilitarian can also “bite the bullet” and argue that even if such a happy enslavement might seem intuitively wrong to our sensibilities, this is a mere prejudice on our part—most likely fueled by the examples of unhappy slavery that pervade history. While utilitarian moral theory can obviously be applied to the ethics of slavery, it is not the only word on the matter. As such, I now turn to the Kantian approach.

As noted above, Kant divides reality into two distinct classes of beings. Rational beings exist as ends and to use them solely as means would be, for Kant, morally wrong. Non-rational beings, which include non-human animals, are mere objects. Interestingly, as I have noted in past essays, Kant does argue that animals should be treated well because treating them badly can incline humans to treat other humans badly. This, I have argued elsewhere, gives animals an ersatz moral status.

On the face of it, under Kant’s theory the very nature of slavery would make it immoral. If persons are rational beings (and rational beings are persons) and slavery treats slaves as objects, then slavery would be wrong. First, it would involve treating a rational being solely as a means. After all, it seems difficult to imagine that enslaving a person is consistent with treating them as an end rather than as a means. Second, it would also seem to involve a willful category error by treating a rational being (which is not an object) as an object. Slavery would thus be fundamentally incoherent because it purports that non-objects are objects.

Since Kantian ethics do not focus on happiness and unhappiness, even a deliriously happy system of slavery would still be wrong for Kant. Kant does, of course, get criticized because his system relegates non-rational beings into the realm of objects, thus lumping together squirrels and stones, apes and asphalt, tapirs and twigs and so on. As such, if non-rational beings could be enslaved, then this would not matter morally (unless doing so impacted rational beings in negative ways). The easy and obvious reply to this concern is to argue that non-rational beings could not be enslaved because slavery is when people are taken to be property and non-rational beings are not people.

It is, of course, possible to have an account of what it is to be a person that extends personhood beyond rational beings. For example, opponents of abortion often contend that the zygote is a person despite its obvious lack of rationality. Fortunately, it would be easy enough to create a modification of Kant’s theory in which what matters is being a person (however defined) rather than being a rational being.

Thus, utilitarian ethical theories leave open the possibility that slavery could be morally acceptable while under a Kantian account slavery would always seem to be morally wrong.



My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Philosophy & My Old Husky II: Difference & Agreement

Isis in the mulch

As mentioned in my previous essay, Isis (my Siberian husky) fell victim to the ravages of time. Once a fast-sprinting, long-running blur of fur, she now merely saunters along. Still, lesser beasts fear her (and to a husky, all creatures are lesser beasts) and the sun is warm—so her life is still good.

Faced with the challenge of keeping her healthy and happy, I have relied a great deal on what I learned as a philosopher. As noted in the preceding essay, I learned to avoid falling victim to the post hoc fallacy and the fallacy of anecdotal evidence. In this essay I will focus on two basic, but extremely useful methods of causal reasoning.

One of the most useful tools for causal reasoning is the method of difference. This method was famously developed by the philosopher John Stuart Mill and has been a staple in critical thinking classes since way before my time. The purpose of the method is figuring out the cause of an effect, such as a husky suffering from a knuckling paw (a paw that folds over, so the dog is walking on the top of the foot rather than the bottom). The method can also be used to try to sort out the effect of a suspected cause, such as the efficacy of an herbal supplement in treating canine arthritis.

Fortunately, the method is quite simple. To use it, you need at least two cases: one in which the effect has occurred and one in which it has not. In terms of working out the cause, more cases are better—although more cases of something bad (like arthritis pain) would certainly be undesirable from other standpoints. The two cases can involve the same individual at different times—it need not be different individuals (though the method works just as well across different individuals). For example, when sorting out Isis’ knuckling problem, the case in which the effect occurred was when Isis was suffering from knuckling and the case in which it did not was when Isis was not suffering from this problem. I also looked into other cases in which dogs suffered from knuckling issues and when they did not.

The cases in which the effect is present and those in which it is absent are then compared in order to determine the difference between the cases. The goal is to sort out which factor or factors made the difference. When doing this, it is important to keep in mind that it is easy to fall victim to the post hoc fallacy—to conclude without adequate evidence that a difference is a cause because the effect occurred after that difference. Avoiding this mistake requires considering that the “connection” between the suspected cause and the effect might be purely a matter of coincidence. For example, Isis ate some peanut butter the day she started knuckling, but it is unlikely that had any effect—especially since she has been eating peanut butter her whole life. It is also important to consider that an alleged cause might actually be an effect caused by a factor that is also producing the effect one is concerned about. For example, a person might think that a dog’s limping is causing the knuckling, but they might both be effects of a third factor, such as arthritis or nerve damage. You must also keep in mind the possibility of reversed causation—that the alleged cause is actually the effect. For example, a person might think that the limping is causing the knuckling, but it might turn out that the knuckling is the cause of the limping.

In some cases, sorting out the cause can be very easy. For example, if a dog slips and falls and then has trouble walking, the most likely cause is the fall (but it could still be something else—perhaps the fall and the walking trouble were both caused by something else). In other cases, sorting out the cause can be very difficult. It might be because there are many possible causal factors. For example, knuckling can be caused by many things (apparently even Lyme disease). It might also be because there are no clear differences (such as when a dog starts limping with no clear preceding event). One useful approach is to do research using reliable sources. Another, which is a good idea with pet problems, is to refer to an expert—such as a vet. Medical tests, for example, are useful for sorting out the difference and finding a likely cause.
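The comparison at the heart of the method can be sketched as a small set operation: a candidate cause is a factor present in every case showing the effect and absent from every case lacking it. The cases and factor names below are invented illustrations, not a serious causal model:

```python
# Toy sketch of Mill's method of difference. Each case is recorded as
# the set of factors present in it; the factor names are hypothetical.

def method_of_difference(effect_cases, non_effect_cases):
    """Return factors present in every effect case but in no non-effect case."""
    in_all_effect = set.intersection(*map(set, effect_cases))
    in_any_non_effect = set().union(*map(set, non_effect_cases))
    return in_all_effect - in_any_non_effect

# The same dog observed at two different times.
with_knuckling = [{"peanut butter", "limping", "nerve damage"}]
without_knuckling = [{"peanut butter"}]

print(method_of_difference(with_knuckling, without_knuckling))
```

The surviving candidates (here, limping and nerve damage) still have to be sifted by hand for coincidence, common causes, and reversed causation, exactly as described above; the set arithmetic only narrows the field.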

The same basic method can also be used in reverse, such as determining the effectiveness of a dietary supplement for treating canine arthritis. For example, when Isis started slowing down and showing signs of some soreness, I started giving her senior dog food, glucosamine and some extra protein. What followed was an improvement in her mobility and the absence of the signs of soreness. While the change might have been a mere coincidence, it is reasonable to consider that one or more of these factors helped her. After all, there is some scientific evidence that diet can have an influence on these things. From a practical standpoint, I decided to keep to this plan since the cost of the extras is low, they have no harmful side effects, and there is some indication that they work. I do consider that I could be wrong. Fortunately, I do have good evidence that the steroids Isis has been prescribed work—she made a remarkable improvement after starting the steroids and there is solid scientific evidence that they are effective at treating pain and inflammation. As such, it is rational to accept that the steroids are the cause of her improvement—though this could also be a coincidence.

The second method is the method of agreement. Like difference, this requires at least two cases. Unlike difference, the effect is present in all the cases. In this method, the cases exhibiting the effect (such as knuckling) are considered in order to find a common thread in all the cases. For example, each incident of knuckling would be examined to determine what they all have in common. The common factor (or factors) that is the most plausible cause of the effect is what should be taken as the likely cause. As with the method of difference, it is important to consider such factors as coincidence so as to avoid falling into a post hoc fallacy.

The method of agreement is most often used to form a hypothesis about a likely cause. The next step is, if possible, to apply the method of difference by comparing similar cases in which the effect did not occur. Roughly put, the approach would be to ask what all the cases have in common, then determine if that common factor is absent in cases in which the effect is also absent. For example, a person investigating knuckling might begin by considering what all the knuckling cases have in common and then see if that common factor is absent in cases in which knuckling did not occur.
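That two-stage procedure—agreement to form a hypothesis, difference to prune it—can be sketched in the same style. Again, the cases and factors are hypothetical illustrations:

```python
# Toy sketch: the method of agreement proposes candidate causes, then a
# difference check drops any candidate that also occurs where the
# effect is absent. All factor names are hypothetical.

def method_of_agreement(effect_cases):
    """Return the factors common to every case showing the effect."""
    return set.intersection(*map(set, effect_cases))

knuckling_cases = [
    {"arthritis", "cold weather", "old age"},
    {"arthritis", "hot weather", "old age"},
]
candidates = method_of_agreement(knuckling_cases)

# Difference step: prune candidates present in similar cases without knuckling.
no_knuckling_cases = [{"old age", "cold weather"}]
candidates -= set().union(*no_knuckling_cases)
print(candidates)  # only arthritis survives both steps
```
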

One of the main weaknesses of these methods is that they tend to have very small sample sizes—sometimes just one individual, such as my husky. While these methods are quite useful, they can be supplemented by general causal reasoning in the form of experiments and studies—the subject of the next essay in this series.



Obligations to People We Don’t Know

English: Statue of Immanuel Kant in Kaliningrad, Russia (Photo credit: Wikipedia)

One of the classic moral problems is the issue of whether or not we have moral obligations to people we do not know.  If we do have such obligations, then there are also questions about the foundation, nature and extent of these obligations. If we do not have such obligations, then there is the obvious question about why there are no such obligations. I will start by considering some stock arguments regarding our obligations to others.

One approach to the matter of moral obligations to others is to ground them on religion. This requires two main steps. The first is establishing that the religion imposes such obligations. The second is making the transition from the realm of religion to the domain of ethics.

Many religions do impose such obligations on their followers. For example, John 15:12 conveys God’s command: “This is my commandment, That you love one another, as I have loved you.”  If love involves obligations (which it seems to), then this would certainly seem to place us under these obligations.  Other faiths also include injunctions to assist others.

In terms of transitioning from religion to ethics, one easy way is to appeal to divine command theory—the moral theory that what God commands is right because He commands it. This does raise the classic Euthyphro problem: is something good because God commands it, or is it commanded because it is good? If the former, goodness seems arbitrary. If the latter, then morality would be independent of God and divine command theory would be false.

Using religion as the basis for moral obligation is also problematic because doing so would require proving that the religion is correct—this would be no easy task. There is also the practical problem that people differ in their faiths and this would make a universal grounding for moral obligations difficult.

Another approach is to argue for moral obligations by using the moral method of reversing the situation.  This method is based on the Golden Rule (“do unto others as you would have them do unto you”) and the basic idea is that consistency requires that a person treat others as she would wish to be treated.

To make the method work, a person would need to want others to act as if they had obligations to her and this would thus obligate the person to act as if she had obligations to them. For example, if I would want someone to help me if I were struck by a car and bleeding out in the street, then consistency would require that I accept the same obligation on my part. That is, if I accept that I should be helped, then consistency requires that I must accept I should help others.

This approach is somewhat like that taken by Immanuel Kant. He argues that because a person necessarily regards herself as an end (and not just a means to an end), then she must also regard others as ends and not merely as means.  He endeavors to use this to argue in favor of various obligations and duties, such as helping others in need.

There are, unfortunately, at least two counters to this sort of approach. The first is that it is easy enough to imagine a person who is willing to forgo the assistance of others and as such can consistently refuse to accept obligations to others. So, for example, a person might be willing to starve rather than accept assistance from other people. While such people might seem a bit crazy, if they are sincere then they cannot be accused of inconsistency.

The second is that a person can argue that there is a relevant difference between himself and others that would justify their obligations to him while freeing him from obligations to them. For example, a person of a high social or economic class might assert that her status obligates people of lesser classes while freeing her from any obligations to them.  Naturally, the person must provide reasons in support of this alleged relevant difference.

A third approach is to present a utilitarian argument. For a utilitarian, like John Stuart Mill, morality is assessed in terms of consequences: the correct action is the one that creates the greatest utility (typically happiness) for the greatest number. A utilitarian argument for obligations to people we do not know would be rather straightforward. The first step would be to estimate the utility generated by accepting a specific obligation to people we do not know, such as rendering aid to an intoxicated person who is about to become the victim of sexual assault. The second step is to estimate the disutility generated by imposing that specific obligation. The third step is to weigh the utility against the disutility. If the utility is greater, then such an obligation should be imposed. If the disutility is greater, then it should not.
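The three steps amount to a simple comparison, which can be put as trivial code. The numeric estimates below are invented placeholders—producing real estimates of utility and disutility is the genuinely hard part of the utilitarian project:

```python
# Toy sketch of the utilitarian weighing described above. The numbers
# are hypothetical placeholders, not real measurements of happiness.

def should_impose(obligation, utility, disutility):
    """Impose the obligation iff its estimated utility outweighs its
    estimated disutility (steps 1-3 of the weighing)."""
    verdict = utility > disutility
    print(f"{obligation}: {'impose' if verdict else 'do not impose'}")
    return verdict

# Placeholder estimates for the duty to render aid to a person in danger.
should_impose("render aid", utility=100, disutility=20)
```
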

This approach, obviously enough, rests on the acceptance of utilitarianism. There are numerous arguments against this moral theory and these can be employed against attempts to ground obligations on utility. Even for those who accept utilitarianism, there is the open possibility that there will always be greater utility in not imposing obligations, thus undermining the claim that we have obligations to others.

A fourth approach is to consider the matter in terms of rational self-interest and operate from the assumption that people should act in their self-interest. In terms of a moral theory, this would be ethical egoism: the moral theory that a person should act in her self-interest rather than acting in an altruistic manner.

While accepting that others have obligations to me would certainly be in my self-interest, it initially appears that accepting obligations to others would be contrary to my self-interest. That is, I would be best served if others did unto me as I would like to be done unto, but I was free to do unto them as I wished. If I could get away with this sort of thing, it would be ideal (assuming that I am selfish). However, as a matter of fact people tend to notice and respond negatively to a lack of reciprocation. So, if having others accept that they have some obligations to me were in my self-interest, then it would seem that it would be in my self-interest to pay the price for such obligations by accepting obligations to them.

For those who like evolutionary just-so stories in the context of providing foundations for ethics, the tale is easy to tell: those who accept obligations to others would be more successful than those who do not.

The stock counter to the self-interest argument is the problem of Glaucon’s unjust man and Hume’s sensible knave. While it certainly seems rational to accept obligations to others in return for getting them to accept similar obligations, it seems preferable to exploit their acceptance of obligations while avoiding one’s supposed obligations to others whenever possible. Assuming that a person should act in accord with self-interest, then this is what a person should do.

It can be argued that this approach would be self-defeating: if people exploited others without reciprocation, the system of obligations would eventually fall apart. As such, each person has an interest in ensuring that others hold to their obligations. Humans do, in fact, seem to act this way—those who fail in their obligations often get a bad reputation and are distrusted. From a purely practical standpoint, acting as if one has obligations to others would thus seem to be in a person’s self-interest because the benefits would generally outweigh the costs.

The counter to this is that each person still has an interest in avoiding the cost of fulfilling obligations and there are various practical ways to do this by the use of deceit, power and such. As such, a classic moral question arises once again: why act on your alleged obligations if you can get away with not doing so? Aside from the practical reply given above, there seems to be no answer from self-interest.

A fifth option is to look at obligations to others as a matter of debts. A person is born into an established human civilization built on thousands of years of human effort. Since each person arrives as a helpless infant, each person’s survival is dependent on others. As the person grows up, she also depends on the efforts of countless other people she does not know. These include soldiers that defend her society, the people who maintain the infrastructure, firefighters who keep fire from sweeping away the town or city, the taxpayers who pay for all this, and so on for all the many others who make human civilization possible. As such, each member of civilization owes a considerable debt to those who have come before and those who are here now.

If debt imposes an obligation, then each person who did not arise ex-nihilo owes a debt to those who have made and continue to make their survival and existence in society possible. At the very least, the person is obligated to make contributions to continue human civilization as a repayment to these others.

One objection to this is for a person to claim that she owes no such debt because her special status obligates others to provide all this for her with nothing owed in return. The obvious challenge is for a person to prove such an exalted status.

Another objection is for a person to claim that all this is a gift that requires no repayment on the part of anyone and hence does not impose any obligation. The challenge is, of course, to prove this implausible claim.

A final option I will consider is that offered by virtue theory. Virtue theory, famously presented by thinkers like Aristotle and Confucius, holds that people should develop their virtues. These classic virtues include generosity, loyalty and other virtues that involve obligations and duties to others. Confucius explicitly argued in favor of duties and obligations as being key components of virtues.

In terms of why a person should have such virtues and accept such obligations, the standard answer is that being virtuous will make a person happy.

Virtue theory is not without its detractors and the criticism of the theory can be employed to undercut it, thus undermining its role in arguing that we have obligations to people we do not know.



God, Rape & Free Will



The stock problem of evil is that the existence of evil in the world is incompatible with the Philosophy 101 conception of God, namely that God is all good, all powerful and all knowing. After all, if God has these attributes, then He knows about all evil, should tolerate no evil and has the power to prevent evil. While some take the problem of evil to show that God does not exist, it can also be taken as showing that this conception of God is in error.

Not surprisingly, those who wish to accept the existence of this all good, all powerful and all-knowing deity have attempted various ways to respond to the problem of evil. One standard response is, of course, that God has granted us free will and this necessitates that He allow us to do evil things. This, it is claimed, gets God off the hook: since we are free to choose evil, God is not accountable for the evil we do.

In a previous essay I discussed Republican Richard Mourdock’s view that “Life is that gift from God. I think that even when life begins in that horrible situation of rape, that it is something God intended to happen.” In the course of that essay, I briefly discussed the matter of free will. In this essay I will expand on this matter.

For the sake of the discussion, I will assume that we have free will. Obviously, this can easily be disputed, but I am interested in seeing whether or not such free will can actually get God off the hook for the evil that occurs, such as rape and its consequences.

On the face of it, free will would seem to free God from being morally accountable for our choices. After all, if God does not compel or influence our choices and we are truly free to select between good and evil, then the responsibility of the choice would rest on the person making the decision. It should also be added that God would presumably also be excused from allowing for evil choices—after all, in order for there to be truly free will in the context of morality there must be the capacity for choosing good or evil. Or so the stock arguments usually claim.

For the sake of the discussion I will also accept this second assumption, namely that free will gets God off the hook in regards to our choices. This does, of course, lead to an interesting question: does allowing free will also require that God allow the consequences of the evil choices to come to pass? That is, could God allow people moral autonomy in their choices, yet prevent their misdeeds from actually bearing their evil fruit?

One way to consider this matter is to take the view that free will requires that a person be able to make a moral decision and that this decision be either good or evil (or possibly neutral). After all, a moral choice must be a moral choice. On this approach, whether or not free will would be compatible with God preventing occurrences (like rape or pregnancy caused by rape) would seem to depend on what makes something good or evil.

There are, of course, a multitude of moral theories that address this matter. For the sake of brevity I will consider two: Kant’s view and the utilitarian view (as exemplified by John Stuart Mill).

Kant famously takes the view that “A good will is good not because of what it performs or effects, not by its aptness for the attainment of some proposed end, but simply by virtue of the volition—that is, it is good in itself, and considered by itself is to be esteemed much higher than all that can be brought about by it in favor of any inclination…Its usefulness or fruitlessness can neither add to nor take away anything from this value.”

For Kant, what makes a willing (decision) good or evil is contained in the act of willing itself. Hence, there would be no need to consider the consequences of an action stemming from a decision when determining the morality of the choice. An interesting illustration of this view can be found in BioWare’s Star Wars: The Old Republic game. Players are often given a chance to select between light side (good) and dark side (evil) options, thus earning light side or dark side points which determine the moral alignment of the character. For example, a player might have to choose to kill or spare a defeated opponent.  Conveniently, the choices are labeled with symbols indicating whether a choice is light side or dark side—which would be very useful in real life.

If Kant’s view is correct, then God could allow the freedom of the will while also preventing evil choices from having any harmful consequences. For example, a person could freely choose to rape a woman and the moral choice would presumably be duly noted by God (in anticipation of judgment day). God could then simply prevent the rape from ever occurring—the rapist could, for example, stumble and fall while lunging towards his intended victim. As another example, a person could freely will the decision to murder someone, yet find that her gun fails to fire when aimed at the intended victim. In short, people could be free to make moral choices while at the same time being unable to actually bring those evil intentions into actuality. Thus, God could allow free will while also preventing anyone from being harmed.

It might be objected that God could not do this on the grounds that people would soon figure out that they could never actualize their evil decisions and hence people would (in general) stop making evil choices. That is, there would be a rather effective deterrent to evil choices, namely that they could never bear fruit, and this would rob people of their free will. For example, those who would otherwise decide to rape would not do so because they would know that any attempt to act on their decision would be thwarted.

The obvious reply is that free will does not mean that a person gets what s/he wills—it merely means that the person is free to will. As such, people who want to rape could still will to rape and do so freely. They just would not be able to harm anyone.

It is, of course, obvious that this is not how the world works—people are able to do all sorts of misdeeds. However, since God could make the world work this way, this would suggest various possibilities such as God not existing or that God is not a Kantian. This leads me to the discussion of the utilitarian option.

On the stock utilitarian approach, the morality of an action depends on the consequences of said action. As Mill put it, “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.” As such, the morality of a willing would not be determined by the willing but by the consequences of the action brought about by the willing in question.

If this is correct, then God would need to allow the consequences of the willing to occur in order for the willing to be good or evil (or neutral). After all, if the willing had no consequences then it would have no moral significance on a consequentialist view like utilitarianism. So, for example, if a person freely wills to rape a woman, then God must not intervene. Otherwise He would be interfering with what determines the ethics of the willing. As such, if God did not allow the rapist to act upon his willing, then the decision to rape would not be an evil decision. If it is assumed that free will is essential to God being able to judge people for their deeds and misdeeds, then He would have to allow misdeeds to bear fruit so that they would be, in fact, misdeeds. On the usual view, He then punishes or rewards people after they die.

One rather obvious problem with this approach is that an all-knowing God would know the consequences of an action even without allowing the action to take place. As such, God could allow people to will their misdeeds and then punish them for what the consequences would have been if they had been able to act upon their intentions. After all, human justice punishes people even when they are prevented from committing their crimes. For example, someone who tries to murder another person is still justly punished even if she is prevented from succeeding.

It might be countered that God can only punish cases of actual evil rather than potential evil. That is, if the misdeed is prevented then it is not an actual misdeed and hence God cannot justly punish a person. On this view, God must allow rape in order to be able to toast rapists in Hell. This would, of course, require that God not consider an attempted evil deed as an evil deed. So, actual murder would be wrong, but attempted murder would not. This, of course, is rather contrary to human justice—but it could be claimed that human law and divine law are rather different. Obviously humans and God take very different approaches: we generally try to keep people from committing misdeeds whereas God apparently never does. Rather, He seems content to punish long after the fact—at least on the usual account of God.



On vitriol and mockery

A few days ago, I posted about a piece by South African author Tauriq Moosa, relating to such things as the practice of charitable interpretation on the internet. He has now followed up with a Part 2, which again emphasizes the Millian conception of intellectual inquiry.

Tauriq (who tells me he’d rather I call him by his first name in posts like this) begins by talking about the danger for truth of engaging in vitriol:

This is a problem that persists not only for blog commenters and trolls, but everyone – including the blogger herself. As Mill said: “Unmeasured vituperation employed on the side of the prevailing opinion, really does deter people from professing contrary opinions, and from listening to those who profess them”. If we create spaces of reasoned debate – websites, magazines, blogs – in which we always praise the prevailing opinion [of that space, not of society] and disregard with “unmeasured vituperation” anything other than full-throated acquiescence, it is no longer a space for inquiry but of dogmatic following, of uncritical agreement.

Quite so. Tauriq responds to this by emphasising the principle of charity, and that, in particular, it is better to assume you are dealing with someone who has somewhat opaque (to you) thought processes or an unfortunate way of expressing their view, or who is, perhaps, simply mistaken about something, rather than that you are dealing with someone who is disingenuous or horrible. Again, that seems like good advice, at least until you see a pattern of behaviour that suggests you really are dealing with a vicious person.

And even if you see what looks like a pattern, you can’t easily be sure where somebody is coming from. The same questions or ideas may be provoked in different people who have very different overall value and belief systems – in most cases, you’re getting only a limited sample of what this person is really like and of what may actually be bugging her. There may be important aspects of her that are hidden from your view.

Tauriq then has quite a bit to say about mocking or deriding people. Here, there might be some room for disagreement, or at least for further thought, so permit me to quote him at some length:

I think a rule of thumb should be that some people are very good at persuading using mockery, satire and so on; but that requires skill that most of us do not have. Most of us should err on the side of trying our best to respond without namecalling and putting emotions before justification – even if a view is horribly wrong. This is difficult and, probably, most of us have failed often at this: due to knee-jerk reactions, being pushed too far, touching a particularly sensitive topic, and so on. Perhaps this gets reinforced when our anger and animosity drives the offender away. But even here, this is probably not a good thing because, as Mill points out, there are greater concerns than getting a kick out of mocking someone to silence: there is the worry that such an attitude helps foster in-group tribalism, non-engagement with alternative ideas and, indeed, prevents debate from occurring since any alternate ideas could be viewed as only stemming from “bad” people.

I’m very much in sympathy with all of this. In particular, nothing is gained by creating an environment so hostile to an interlocutor that he or she gives up, perhaps feeling frustrated or hurt, and goes away. Where’s the intellectual progress in that? Even if you have, in a sense, “defeated” this person, you have not thereby defeated her argument. In fact, you’ve deprived yourself of something that you should normally welcome: an opportunity to hear an argument that might, if you seriously consider what strengths it may turn out to have, prompt you to change your mind (at least on points of detail), or to add extra nuance, depth, or clarity to your position.

Even if it becomes apparent that you’ve heard it all before, there may be others, i.e. bystanders, who have not, and who might profit from seeing your interlocutor add a bit to the debate.

That said, I do think there is a place for mockery and denunciation. For example, it may be legitimate to tell someone that she’s being ridiculous insofar as she’s relying on a premise that seems absurd, or for a conclusion that’s absurd on its face. You might want to use rhetoric, imagery, etc., to try to get your interlocutor and others to shift perspective and see the absurdity that you (think you) see. That’s a legitimate way to argue, and it is not simply a way of trying to drive off a despised opponent.

Sometimes it may, in fact, be legitimate to denounce or ridicule people in the third person – e.g. if someone who really does appear dangerous seeks political power (Mitt Romney may well be an example, though there are even worse political candidates around), or if someone who already has power of some kind is using it unfairly, or cruelly, or with disastrous incompetence.

Sometimes there’s a degree of urgency about responding to this, perhaps opposing it with all your force, or perhaps just dramatically distancing yourself from it. You may need to take a stand and expose someone whom you judge to be truly harmful, or at least acting truly harmfully. But all this is different from situations where the person concerned is involved with you in a Millian quest for truth. And in any situation at all, considerations of fairness and accuracy – and a sense of your own fallibility – might make you pause and reflect before trashing someone who might actually, overall, be a good (and, as most of us are, emotionally vulnerable) person. You don’t want to be a bully, a cheat, or a dogmatist.

I’m tempted to conclude with something as sententious as, “Only mock ideas, not people.” This would allow reductio ad absurdum arguments, but not much else. Although it’s a tempting maxim, I, for one, could never abide by it. I don’t think we should attempt such an onerous, unrealistic reform of our behaviour. There is, I think, a place for vitriol and mockery and denunciation. The take-home lesson, rather, is just that this place is a smaller one, sometimes a much smaller one, than we might reflexively assume. We should at least think about it before we reach for the weapons of vitriol, mockery, and denunciation.

That applies most especially when we are dealing with interlocutors who are, themselves, acting with substantial civility and arguing in what can reasonably be construed as good faith.

Soda & the State


The mayor of New York is considering a ban on selling sweetened drinks larger than 16 ounces. The ban would not cover diet sodas, fruit juice, dairy products (like milkshakes) or alcohol. It also does not cover grocery or convenience stores. As might be suspected, the intent of this ban is to help combat obesity. About half of the adult population of the city is overweight and the mayor presumably hopes that this ban will help with this problem.

There are, of course, two key factual issues here. The first is whether or not the large drinks in question are a causal factor in obesity. The second is whether or not the ban would have the intended effect.

Not surprisingly, the folks in the relevant industries are claiming that the drinks are not the cause of the problem. On the one hand, it can be claimed that they are in error. After all, it does make sense that consuming large quantities of high calorie beverages (a 12 ounce soda has 124-189 calories) would contribute to people being overweight. On the other hand, it can be argued that this is not the case. After all, the drinks are not the only (or even main) source of calories and hence they are just one contributory cause among many. There is also the point that people are not compelled to consume the large beverages and people are, by and large, obese because of their choices. My considered view is that these drinks make it easier to be obese, but that it would be an error to cast them as the primary villains, so to speak.

In regards to the effectiveness of such a ban, it seems likely that the impact will be fairly minor. While people will not be able to get drinks over 16 ounces, they are not prevented from getting refills or buying as many as they want. While the 16 ounce limit will make it slightly less convenient to get a greater volume of sweet drinks, I suspect that this inconvenience factor will not be enough to impact the obesity problem. After all, people already buy multiple burgers or tacos and can probably adjust easily to getting multiple drinks. There is also the fact that the ban does not affect drinks whose calorie content can exceed that of the banned drinks and people can switch to those. For example, unsweetened apple juice has 169-175 calories per 12 ounce serving.
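To put the juice-versus-soda comparison in concrete numbers, here is a quick back-of-the-envelope sketch (in Python, purely for illustration; it uses only the calorie figures quoted above, and the helper function is my own):

```python
# Back-of-the-envelope calorie comparison, using only the figures
# quoted in the text: a 12-ounce soda has 124-189 calories, and
# unsweetened apple juice has 169-175 calories per 12 ounces.

def kcal_for(ounces, kcal_per_12oz):
    """Scale a per-12-ounce calorie figure to a given serving size."""
    return ounces * kcal_per_12oz / 12

# A 16-ounce soda (the proposed cap) at the high end of its range:
print(kcal_for(16, 189))   # 252.0 calories

# A 16-ounce unsweetened apple juice at the low end of its range:
print(kcal_for(16, 169))   # about 225 calories

# Even juice at its high end versus soda at its low end:
print(kcal_for(16, 175) > kcal_for(16, 124))   # True
```

The point is simply that capping soda at 16 ounces while exempting juice does not cap calories: an exempt drink can carry roughly as many calories as, or more than, a banned one.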

While the factual matters are of concern, what is of philosophical interest is whether or not the state has a right to impose such bans. As might be imagined, it is easy enough to argue for and against this right using the very same principles.

One reasonable principle is that the state has a legitimate role in preventing harm to the citizens and has a right to use its coercive power in this capacity. The most obvious examples of this include the state’s role as a military protector and its role as the police. Another reasonable principle, taken from John Stuart Mill, is that the state does not have a right to impose on the liberty of individuals except in cases in which the individual’s actions could cause unwarranted harm to others. For example, the state clearly has a right to prevent citizens from murdering each other.

In the case of the sweet drink ban, it could be argued that the state is acting to prevent harm to the citizens and is thus operating within its legitimate rights. After all, the easy accessibility of high calorie foods in high volume servings makes it far easier for people to over-consume calories and this leads to increased obesity. Obesity presents a clear health threat to individuals as well as imposing significant costs on society (such as lost productivity and increased medical costs). As such, the state would be acting rightly in banning such sweet drinks.

One easy reply is to contend that the ban would not be effective (as argued above) and hence would be an imposition on liberty that fails to achieve its stated goal. It seems reasonable enough to accept that the state should not restrict liberty when doing so would not achieve the stated goal of the imposition-after all, the justification for the imposition would simply not be grounded.

Another reply, and the one I favor, is that even if the ban were to prove effective, it is still an illegitimate violation of liberty. The state does, of course, have a right to protect people from toxic ingredients, especially when such ingredients are not known to the consumer. To use a specific example to illustrate this, the state would be acting legitimately by banning companies from surreptitiously using lead acetate in place of sugar as sweetener. This is because this substance is known to be toxic and most customers would not willingly consume “sweet lead.” In this case, the state would be protecting the customers from being harmed by the manufacturers. After all, companies do not have the liberty to poison ignorant customers.

In the case of the sweet drinks, the customer knows what s/he is getting: a high calorie (typically low nutrient) drink. While it is unwise and unhealthy to consume large amounts of such drinks, as long as the consumer is freely making the choice to drink the beverage and is aware of its contents and effects, then the state has no right to impose on the individual’s liberty. As usual, John Stuart Mill’s arguments in favor of liberty work quite well here. Naturally enough, the state would be well within its rights to require companies to make information about the beverages available to consumers so that they can make informed choices. However, treating adults as if they were children in this regard is not acceptable nor within the legitimate rights of the state. After all, what is solely the business of the individual is not the business of the state and how much sweet drink a person consumes would seem to be solely his or her business. The choice is thus the right of the individual, be it a good choice (to avoid sweet drinks) or a bad choice (to consume mass quantities of sugar water).

The obvious reply to this is that the harms done by obesity are not limited to the individual. Obesity increases health care costs for everyone, impacts productivity, and has other consequences that extend beyond the individual. Given that the obesity of an individual harms others, it would seem that the state would have the right to step in and impose restrictions to counter obesity. After all, while people have every liberty to be as fat as they can and want to be, they do not have the right to expect the rest of society to also bear the consequences and costs of their poor choices. To modify a stock line from the right in the US, why should the rest of us subsidize the cost of obesity? Surely that would be a socialism of fat. If this reasoning is plausible, then there seem to be two reasonable alternatives (and, of course, there might be others).

The first is that the state should act within its legitimate rights to endeavor to counter causal factors that significantly contribute to obesity (such as high volume high calorie beverages). The second is that individuals who wish to enjoy the liberty to be as fat as they choose to be would need to take full responsibility for the consequences of their choices. They would, for example, need to opt out of state medical support in regards to any conditions caused by or aggravated by their obesity, perhaps by purchasing special insurance. Provided that an individual was willing to eliminate the harms his/her choices would impose on others, then s/he would have the perfect right to do as s/he pleases. This is analogous to how certain states allow people to ride motorcycles without helmets provided that they have adequate insurance. Perhaps people could receive special ID cards proving they have obesity insurance and this would allow them to purchase large beverages (and other such things).

A second reply to the liberty argument is that the sweeteners used to create the sweet drinks are actually toxic substances. Interestingly enough, lead acetate was once used as a sweetener until it was clearly established that it is, in fact, toxic. As such, it is not wildly implausible that the sweeteners currently in use are actually toxic substances that should be properly regulated. While it is easy enough to dismiss the idea that, for example, sugar could be toxic because it just sounds silly, it is rather important to make such assessments on the basis of scientific evidence. If sweeteners are not harmful, then an objective scientific investigation would surely show this. As such, those who think that it is silly to consider sugar and other sweeteners as toxic should insist on objective and extensive evaluation. After all, doing so would silence the rational critics of sweeteners and provide hard evidence to counter attempts to ban or restrict sweeteners and products that use them, such as sweet drinks.

My own view on the matter is that people have a right to the liberty of self-abuse (even self-destruction). However, this liberty does not allow them to impose on others. As such, the freedom to be fat comes with the responsibility to ensure that other people are not forced to bear the price that the individual alone should pay. As the hackneyed saying goes, freedom is not free-and this goes for fat freedom as well.


The Individual Mandate


The United States Supreme Court is considering the constitutionality of the Affordable Care Act and this has created quite a political stir. One of the main points of concern is the individual mandate. The gist of this is that individuals are required to buy health insurance. Those who fail to do so will be fined.

Setting aside the rabid rhetoric, the main philosophical issue seems to be whether or not the state has a legitimate right to impose this mandate. Or, as opponents of the mandate put it, whether or not the state has the right to require people to buy a private product.

On the face of it, I am inclined to agree that the state does not have a general right to compel citizens to buy products even when it would be wise and good to do so. As critics have noted, while broccoli is good for people, the state would seem to have no legitimate right to compel people to buy it. This sort of reasoning is consistent with my own view of liberty, which is roughly based on John Stuart Mill’s view. The general idea is that society only has a moral right to compel people when the actions in question can cause unwarranted harm to others. Even if doing something would be good or wise, society has no right to compel an individual into doing (or not doing) something when it is not society’s legitimate concern (that is, when it does not involve harm to others).

Because of my adherence to this view of liberty, I would be against the state compelling people to buy broccoli, to exercise or to quit smoking. After all, in such matters the individual is sovereign. Since I endeavor to be consistent in my principles, I also oppose the illegality of recreational drugs as well as any law that would ban same-sex marriage. After all, if it would violate liberty to force someone to buy broccoli because it is good for them, it would also seem to violate liberty to force someone to forgo marijuana because it is bad for them or to forgo same sex marriage because some people do not like it. Not surprisingly, some folks are not quite consistent in these matters: they scream for freedom when an individual mandate is on the line but are quite happy to impose on others when the issue turns to same-sex marriage.

Given my view on a broccoli mandate, it might be suspected that I would oppose the individual mandate. However, this is not the case-I actually support it. Naturally, some folks might accuse me of supporting it out of blind liberalism. However, my reasons for supporting it are classically conservative. This should not be at all shocking, since the individual mandate actually has a fine conservative pedigree.

Given its origin, it might be tempting to argue that the conservative assault on the mandate is misguided. However, to claim that something is good (or bad) based on its origin would be an error (specifically the genetic fallacy). It might also be tempting to argue that the conservatives are being inconsistent in attacking the mandate given that it was supported by conservatives in the past. However, this would be a mere ad hominem tu quoque. That said, it is certainly interesting to note that the conservative opposition to the mandate seems to be driven by their opposition to Obama rather than being the result of a reasoned repudiation of the conservative arguments in favor of the mandate. As such, one might suspect that the rejection of the mandate is motivated in part by an ad hominem attack amounting to “Obama and the Democrats are for it, so it must be bad.” However, my goal is not to consider the history and psychology of the matter, but to present conservative arguments for the mandate.

One stock conservative principle is that people should take responsibility for themselves. This principle is often taken to entail more specific principles, such as the one that people should pay for what they receive and the one that the state should always endeavor to avoid providing welfare and its ilk.

These principles seem eminently reasonable. After all, if I fail to take responsibility and because of this I get aid from the state that I have not paid for, it would seem reasonable to regard me as a thief. To use a specific example, if I decide that I am tired of working and quit my job to go on welfare, then I would seem to be stealing from my fellows. After all, I could support myself and merely would have chosen not to do so. To use another example, if my company gets subsidies from the state when it is profitable on its own, I would thus seem to be robbing my fellows. After all, my company can easily support itself without sponging off the taxpayers.

At this point, one might be wondering what these principles have to do with the individual mandate. After all, it has been cast as the state imposing on liberty by forcing people to buy a product. However, this is not the proper way to see the mandate. To see that this is the case, consider the following.

Back in 1986 the United States Congress passed the Emergency Medical Treatment and Labor Act. This act mandates that hospitals cannot turn away or transfer a patient unnecessarily when there is an emergency condition. While hospitals can ask about the patient’s ability to pay, they cannot delay or refuse treatment based on a lack of ability to pay. Hospitals can, of course, refuse to provide treatment or examination in non-emergency situations. Hospitals that violate the law can be fined, as can doctors who are complicit in declaring a patient’s condition to be a non-emergency when it actually was one.

Since people know that hospitals cannot turn away emergency cases, people who do not have insurance often turn to emergency rooms for medical treatment. In some cases, they do so even for routine care on the assumption that the medical personnel will provide at least some care even in the case of non-emergencies. While there has been some dispute over the exact numbers, this has been a problem in many hospitals for quite some time.

Obviously enough, when a hospital provides “free” medical care to the uninsured, it still must be paid for. After all, medical personnel do not work for free, nor do the supplies and equipment needed for treatment come free. While hospitals do try to collect from the uninsured patients, this often does not cover the bill. After all, most people who are uninsured are without insurance because they cannot afford it rather than as a matter of choosing to forgo it. As such, the costs must be passed on to those who have insurance as well as on to the state. It is estimated that covering the bills of the uninsured adds $1500 to a family’s insurance premiums and about $500 to that of an individual.

As such, under the current system hospitals are required to provide services to those who cannot pay and the insured and the taxpayers are compelled to pay the bill. Thus, some people are not taking responsibility by paying for what they receive and others are left to pick up the tab-including the state. This is exactly the sort of situation that one would expect a conservative to rail against. After all, it involves people getting something for nothing as well as other people being compelled to pay more. And, of course, it also involves the state in providing “handouts.”

In this situation, there seem to be two main legitimate conservative options. The first is to ensure an end to the free ride and the government handouts by compelling people to get insurance. This way they would be paying for what they received and not being free riders. This, coupled with the Affordable Care Act, would also have the benefit of allowing people affordable access to non-emergency preventative care, which would be better for their health and also reduce the strain on emergency rooms. There is, however, a second option.

A second way to address this problem is to repeal the part of the Emergency Medical Treatment and Labor Act that requires hospitals to provide emergency care to people who cannot pay. If those without insurance or money were not treated, then there would be no extra cost to pass on to the insured or to the state, thus solving the problem at hand.

Obviously, while the second solution would save some people money, it would not come without a price. It would require accepting that people should be left to die if they lack the financial resources to pay for vastly overpriced medical care. I would certainly hope that this is not a value that my fellow Americans would endorse, but perhaps this is not the case. Perhaps we should be free of the burden of caring for others and they should be free to die on the curb of a hospital because the job creators did not create an adequate job for them.


Health Care & Compulsion


The United States Supreme Court will soon be ruling on the constitutionality of the Affordable Care Act. This ruling will, of course, settle the legality of the matter-at least until it is challenged. While the constitutionality (or lack thereof) of the act is certainly interesting, my main interest as a philosopher is the ethics of the matter.

While the law is 2,400 pages long, I will just focus on the ethics of a key part, namely that the law requires people to purchase health care insurance. Opponents of the law assert that this is the first federal law that requires citizens to purchase a product and they typically contend that this goes beyond the legitimate power of the state. Proponents of the law retort by arguing that the state is acting legitimately. As such, a key moral issue is whether or not the state has the right to compel citizens to buy a product in general and insurance in particular.

On the face of it, the idea that the state has a right to compel people to buy a product seems to be absurd-even when such a purchase would be a good idea. After all, buying and consuming fresh fruits and vegetables is a good idea, yet one would be hard pressed to present an argument in favor of compelling this by law (the Broccoli and Orange Act, perhaps).

To present a more substantial argument, I will begin by noting that I favor a presumption in favor of liberty. That is, the burden of proof is upon the state to show that the intrusion on liberty is legitimately warranted. While spelling out the various conditions under which intrusion would be warranted goes beyond the scope of this essay, one obvious justification is that the intrusion prevents an individual or group from inflicting unjust harm onto others. Thus, the restrictions on murder, theft and defective products are warranted. Intrusions done merely for the good of a person would not be justified, at least if we follow John Stuart Mill’s classic arguments regarding liberty. After all, if I am sovereign over myself, then the state (and others) have no moral right to intrude on my actions when they impact only me. As such, while the state can justly prevent me from selling tainted broccoli, it would not seem warranted in compelling me to eat broccoli-despite the fact that doing so would be good for me. This line of reasoning, interestingly enough, would also forbid the state from making the use of marijuana illegal and would also make laws forbidding same-sex marriage morally wrong since they impose on liberty solely to impose a specific religious/moral view rather than to prevent people from harming one another. In fact, the principle that the state cannot compel except to prevent harm would entail a host of libertarian positions on various issues-something that should be duly considered when using such a principle.

Of course, it could be argued that while the state has a right to compel people to not do various things even when they are not harmful to others, it does not have the right to compel people to take positive action. That is, the state can be justified in telling me what not to do, but it has no legitimate right to tell me what to do. As might be imagined, this approach is often taken by folks who want the state to compel people to not do various things (like smoke marijuana or marry someone of the same sex) but who are against this specific act.

While justifying specific acts of compulsion can be challenging, this approach does seem consistent. After all, the principle is that the state cannot compel taking action and can only compel people to not do things. As such, while the state can, for example, forbid abortion, it cannot morally compel people to buy health insurance.

Naturally, if this principle is used to argue that forcing people to buy insurance is wrong, it must be applied consistently-namely that the state is wrong to compel action and it can only forbid. This would entail that compelling young men to sign up for selective service is wrong. It would also entail that compelling people to pay taxes is wrong. Forcing automakers to include seat belts, air bags, and brakes would also be wrong. Forcing women to undergo an ultrasound before getting an abortion would be wrong. So would forcing children to attend school. Compelling people to serve on a jury would also be wrong.  And so on. Naturally, this might have considerable appeal to some people, but this path would seem to take us into the realm of the absurd (although some, such as the anarchists, would say we are already there and doing this would take us out of the absurd).

The obvious counter is to insist on narrowing the principle from “the state has no moral right to compel” to “the state has no moral right to compel people to buy a specific product.” The challenge, of course, lies in justifying this principle. As might be imagined, the reply that “the state has not done this before, so doing it is wrong” is not a good argument. After all, the mere fact that something has not been done before is no indication of whether it is good, bad or indifferent. This sort of “reasoning” could be seen as a variant on the appeal to tradition fallacy, in that the idea is not that something is good because it is a tradition but that something is bad because it is not.

What, then, is needed is something that shows that the state lacks the legitimate authority to morally compel people to have health insurance while it does possess a right to compel people to do other things. The challenge is to show a relevant difference between the insurance case and the other cases in which state compulsion is (allegedly) morally acceptable. The obvious thing to point to is that the state would be forcing people to buy a product. However, the mere fact that this is different does not entail that it is a relevant difference that makes this specific compulsion wrong. After all, the state does compel us to pay for Medicare, Medicaid and Social Security and is thus compelling us to buy what amounts to health and retirement insurance.

Naturally, it could be argued (as some have) that the state should not do this-that being forced to pay for Medicare, Medicaid and Social Security involves an unjust compulsion and thus the state should not do this. This view would allow someone to consistently oppose the Affordable Health Care Act’s compulsion to buy health insurance but at an obvious price-namely the need to be opposed to these three things.  As such, those who do not want to be rid of these things will need another line of attack.

The most fruitful line is that health care coverage is a private product rather than a state product. This does provide a significant difference between the state’s “products” and health care coverage. Of course, it does not provide an argument against the state compelling people to pay into a national health insurance (as it does with the current national health insurance programs, Medicare and Medicaid). As such, focusing on the fact that the product is a private one would seem to shore up the view that the state should instead compel people to buy national health care; after all, this seems to be well within the legitimate power of the state (at least as it is currently conceived).

Nations other than the United States do, of course, have national health care. This sort of thing, however, is often presented as a worse demon than forcing people to buy private insurance. But that is a matter for another time.



Enhanced by Zemanta

Bribing the Poor

Image: Esther Duflo at Pop!Tech 2009 (via Wikipedia)

Anya Kamenetz recently wrote an article, “Bribing the Poor,” about Esther Duflo’s strategy of giving the poor incentives to be immunized. While the article mainly just reported on the practice, it did get me thinking about the ethics of this approach. But before getting to the moral matter, a little background is in order.

In developed countries, about 90% of children receive immunization. This has had a significant impact on the health of the population. In contrast, less-developed countries tend to have far lower immunization rates. For example, India has an overall rate of 44%, but specific areas have rates that drop to 22% or even 2%. While humans can have natural resistance to diseases, the lack of immunization means that people get sick (and sometimes die) needlessly.

Duflo focused on India, and hence the best information is available for that country. Duflo found that there were various obstacles to immunization. The first is that many clinics in the rural area Duflo studied were closed because the government-paid nurses did not show up for work. The second is superstition. Many people still believe in supernatural causes of illness, and such people will tend to put little faith in immunization (unless, perhaps, it were presented as magic, something Duflo did not propose). The third is that immunizations have an image problem. When they work, there is nothing to see. When they do not work or they cause a harmful effect, the results are visible and tend to stick in people’s minds. People then tend to “reason” that immunizations are harmful in general, thus falling victim to misleading vividness, hasty generalization or the fallacy of anecdotal evidence. This is not, of course, confined to the developing world. In the United States, unfounded fears about vaccination causing autism have led people to forgo immunization for their children. Irrationality, like disease, is a global phenomenon. The fourth obstacle is that getting immunized can require effort. The fifth is that a clear and obvious incentive (other than avoiding disease) was not provided.

Duflo’s solution involved two parts. The first was aimed at making immunization easy. This was done by setting up camps in villages. To ensure that the nurses showed up, they were paid only when they did so. This provided the nurses with a financial incentive to actually do their jobs. Making it easier to get the shots boosted the rate of immunization from 2% to 18%.

The second part was aimed at giving people a clear incentive to get immunized. As many thinkers have noted, people tend to place less value on the future and also seem to find a negative (not getting a disease) less appealing than a positive (a gain, such as a gift). As such, the incentive to get an immunization that will prevent something from happening later will tend to be relatively low. However, an incentive that involves getting something right now will tend to be more effective. Duflo’s solution was to offer a $1 bag of lentils as an incentive to get one’s child immunized. This tactic increased the immunization rate from 2% to 38%, which is certainly a significant boost. As an added bonus, the overall cost was lower: the nurses are paid by the hour, so more people were immunized in less time.
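The cost arithmetic behind that last point is simple: since the nurses are paid hourly, the cost per immunization falls as turnout rises, and a large enough boost in throughput more than offsets the $1 incentive. A minimal sketch of this reasoning, using made-up wage and throughput figures (these numbers are illustrative assumptions, not Duflo’s data):

```python
# Illustration of the hourly-pay cost logic: paying nurses by the hour means
# the per-child cost drops as more children show up per hour, even after
# adding a $1 incentive. All figures below are hypothetical.

def cost_per_immunization(hourly_wage, children_per_hour, incentive=0.0):
    """Cost to immunize one child: nurse time plus any incentive given."""
    return hourly_wage / children_per_hour + incentive

# Without the lentil incentive: low turnout, so few children per hour.
without = cost_per_immunization(hourly_wage=6.0, children_per_hour=2)   # 3.00

# With the $1 incentive: turnout (hypothetically) triples.
with_lentils = cost_per_immunization(hourly_wage=6.0, children_per_hour=6,
                                     incentive=1.0)                     # 2.00

# Higher throughput outweighs the incentive's cost per child.
assert with_lentils < without
```

The point is not the particular numbers but the structure: a fixed hourly cost divided over more immunizations can shrink faster than a per-child incentive grows it.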

While this seems like a very sensible approach, people on both the left and the right have attacked it as unethical (which might be taken as evidence in its favor).

People on the left tend to advance the argument that bribing the poor to get immunized is patronizing and paternalistic. To use an analogy, it could be compared to giving a child a treat so she will cooperate and get her shots. While this is fine with an actual child (they do not know better), it might well be regarded as condescending paternalism that casts the poor as children who must be bribed to do what a rational person would do without a bribe. This would seem to be wrong.

While this does have some appeal, it can be countered. One reply would be to follow John Stuart Mill’s view: “Despotism is a legitimate mode of government in dealing with barbarians, provided the end be their improvement, and the means justified by actually effecting that end.” Swap out “paternalism” for “despotism” and keep the appeal to consequences, and this would be a possible approach. After all, the good that is done for the children and others would seem to outweigh any harm done by giving people an incentive to get immunized.

A second reply is that this incentive approach need not be paternalistic. After all, offering people an incentive hardly seems to be inherently patronizing. To use an example, students might be offered extra credit to go to an event that would benefit them. This hardly seems paternalistic. Or, to use another example, companies often provide free stuff at expos to get people to look at their goods and services. That hardly seems patronizing. Another point worth considering is that people do not claim that paying the nurses to give the immunizations is patronizing. If paying the nurse to do her duty is not patronizing, then paying the people to do their social duty is not patronizing either.

On the right, the usual objection is that the poor should be responsible and should not be given a handout. As a moral argument it does have some appeal. After all, bribing someone to do what they should do because it is right does seem to be morally questionable (at least on some grounds). To use an analogy, if a person is given $1 when she tells the truth and tells the truth for the sake of the money, then she is not acting on the basis of morality. The person who bribes her might have good intentions, but s/he can be seen as acting wrongly, at least on some views. For example, Kant would regard this in a rather negative light: for him, people are supposed to do good out of a sense of duty rather than a desire for gain.

Despite the appeal, this can be countered in various ways. One obvious way is to argue on utilitarian grounds: handing out free lentils with the free immunizations ends up preventing the harms of illness and death. Put in the financial terms so beloved by the right, it is a good investment in terms of the money saved on later medical care and the worker productivity that would otherwise be lost to illness and death. A second way to argue is that while the parents are being bribed to do the right thing, the folks on the right should be more worried about the children than the adults. While it might be wrong to bribe parents to get their children immunized, it would be worse to allow children to go without immunization. As such, while it might be claimed that the parents have acted wrongly, it would seem that the people doing the bribing have acted rightly. Finally, the folks on the right should appreciate the value of providing financial incentives to get people to do things. After all, that is what capitalism is all about.

In light of the above arguments, bribing the poor in this manner seems to be morally acceptable.


Saving Mill’s Utilitarianism

Some ideas have the force of a runaway trolley. When they are first proposed, they are vigorously endorsed and maligned by diverse, forceful personalities. Then they enter the crucible of development, where they are battered with intense scrutiny. Even if the ideas are eventually abandoned, they will have left an imprint upon the centuries, like the corpse of an elder god washed up upon the beach. We gain more from poking and prodding at that corpse than we do from shaking hands with its successors.

Utilitarianism, for example. At its core, it is the ethical theory that conforms to the slogan: “Do whatever produces the greatest happiness for the greatest number.” Utilitarianism has been attacked from all sides, but it retains a close following. It is a beloved treasure among compassionate naturalists and bean-counting social engineers, and is critiqued by both lazy romantics and sensitive sophisticates. It is used as an intuition pump for the sympathies of secularists just as much as it is used to sanction torture in ticking time-bomb scenarios.

The doctrine has roots in the welfarism of David Hume and Aristotle, and owes a healthy dose of accolades to Epicurus. Its modern advocates come easily to mind: Peter Singer, David Brink, Peter Railton, Sam Harris. But it was not until the 18th-century reformer Jeremy Bentham and, later, John Stuart Mill published their works that utilitarianism found articulation in its contemporary form.

Bentham defined the principle of utility in this way: “By the principle of utility is meant that principle which approves or disapproves of every action whatsoever, according to the tendency it appears to have to augment or diminish the happiness of the party whose interest is in question: or, what is the same thing in other words, to promote or to oppose that happiness.” For Bentham, the primary focus of moral inquiry was the rightness or wrongness of actions, measured in terms of their perceived consequences. Bentham’s utilitarianism is, hence, a form of consequentialism: the rightness and wrongness of acts are a function of their good or bad consequences, and nothing else.

The history of philosophy has not been kind to the Benthamites. A regiment of critics (including 20th-century notables like John Rawls, J.J. Thomson, Philippa Foot, Samuel Scheffler and Bernard Williams, to name just a few) have rejected utilitarianism as a moral doctrine on a variety of grounds. And, on the whole, I think these critics have successfully shown that Bentham’s utilitarianism is riddled with absurdity. To the extent that utilitarianism belongs to Bentham, we must abandon utility.

Unfortunately, despite all the headway they have made against the Benthamites, critics have not shown much sensitivity to John Stuart Mill’s formulation of utilitarianism. It turns out that the John Stuart Mill we meet in freshman lectures may not bear much kinship with the John Stuart Mill who lived and breathed. So it is worth noticing, and advertising far and wide, just how the standard picture of Mill is undergoing a rapid change.

For one thing, there is some confusion in the literature over whether Mill counts as an act- or a rule-utilitarian. It is not uncommon to hear his name paired with one or the other, but rarely both (textual evidence be damned); if there are any internal contradictions, it is easy to think that this is a product of Mill’s incoherence, and not a failure on our part to be charitable. And I think Fred Wilson put it nicely:

Mill is … not an “act utilitarian” who holds that the principle of utility is used to judge the rightness or wrongness of each and every act. But neither is he a “rule utilitarian” who holds that individual acts are judged by various moral rules which are themselves judged by the principle of utility acting as a second order principle to determine which set of rules secures the greatest amount of happiness. For the principle of utility judges not simply rules, according to Mill, but rules with sanctions attached.

For another thing, it is not even clear whether Mill is a consequentialist. In the essay linked, Daniel Jacobsen argues that Mill’s version of utilitarianism was non-consequentialist, which is roughly to say that it is unclear whether Mill believed that we should judge the good or bad consequences of acts while being indifferent to the identity of the persons affected. Instead, Jacobsen argues that Mill is best understood as an advocate of a commonsense doctrine he calls “sentimentalism” (on which an act is wrong just so long as an agent’s feelings of guilt would be suitable).

And it is certainly not the case that Mill was a consequentialist bean-counter, given his strong emphasis upon the importance of developing good character. As Mill remarks in On Liberty, while it is possible for a man to achieve a good life without ever exercising autonomy, this can only be to his detriment as a human being. To take just one of Mill’s quotes, which Kwame Anthony Appiah mentioned favorably (in The Ethics of Identity): “It really is of importance, not only what men do, but also what manner of men they are that do it.”

What can account for such massive neglect of one of utilitarianism’s fiercest defenders? It could be that utilitarianism has been assessed, and rejected, because it has been associated with its weakest proponents. If charity in interpretation has been lacking in our study of Mill, then it may be that we are now seeing a sea change in the study of utilitarianism. I doubt that all of Mill can be salvaged; parts of his doctrine are a bit dotty. But still it may be that the old god, Utility, still has some life in him.
