Monthly Archives: December 2011

God & Punishment

Image: Governor Rick Perry of Texas (via Wikipedia)

A while back I saw Rick Perry receive thunderous applause for the number of executions in the state of Texas. More recently I saw his video in which he claims that he is not ashamed to admit he is a Christian. Thanks to Rick, I started thinking about God and punishment.

On many conceptions of God, God punishes and rewards people for their deeds and misdeeds when they reach the afterlife. This afterlife might be in Heaven or Hell, or it might take the form of a bodily resurrection followed by judgement and reward or punishment. In any case, those who believe in God generally also believe in a system of divine rewards and punishments that are granted or inflicted after death.

Interestingly, people who believe in such a divine system generally also accept a system of punishment here on earth. Some, like Perry, strongly support capital punishment here on earth while also professing to be of the Christian faith (and thus believing in divine punishment).

The stock justifications for punishment (like executions) include retribution, reparation, and deterrence. In the case of retribution, the idea is that a misdeed warrants a comparable punishment as a just response. In the case of reparation, the idea is that the wrongdoer should be compelled to provide compensation for the damage done by his/her misdeeds. Deterrence, obviously enough, aims at motivating the wrongdoer not to do wrong again and at motivating others not to do wrong.

When it comes to punishment, it seems reasonable to accept certain moral limits. At the very least, the severity and quantity of punishment would need to be justified: the punishment should be on par with the crime in terms of its severity and quantity (otherwise it merely creates more wrong). Punishment without adequate moral justification would seem to be morally unacceptable; it would be wrongdoing under the name of punishment rather than justice.

Getting back to God, suppose that God exists and does inflict divine punishments for misdeeds. If this is the case, then it would seem to be unreasonable, perhaps even immoral, for human courts to inflict punishment for crimes that God also punishes.

First, if God punishes people for their misdeeds, then there is no need to seek retribution for crimes here on earth. After all, those who believe in divine justice would also need to believe that mortal retribution is unnecessary: whether we punish the wrongdoer or not, just retribution will occur after the wrongdoer dies. If we do punish a wrongdoer, then God would presumably need to subtract our punishment from the punishment He inflicts; otherwise He would be overdoing it. As such, mortal retribution is simply a waste of time, unless, of course, it takes some of the load off an allegedly omnipotent being.

Second, if God rewards good deeds and punishes misdeeds, then there would seem to be no need for reparations here on earth. After all, if someone steals my laptop, then God will see to it that s/he gets what s/he deserves, and so will I. That is, all the books will be balanced after death. As such, if someone believes in divine justice, then there seems to be little sense in worrying about reparation here on earth. After all, if we will just be here for a very little while, then what will my laptop matter in the scope of eternity? Not a bit, I assure you.

Third, if God inflicts divine punishments and hands out divine rewards, it would seem absurd to try to deter people with mortal punishments. If someone believes that murderers are not deterred by the threat of Hell (or the hope of Heaven), then they surely would not think that the mere threat of bodily death would have deterrent value. To use an analogy, if I knew that a friend of mine would shoot anyone who tried to hurt me, it would be odd of me to tell someone who threatened to harm me that I would poke them with a toothpick. After all, if the threat of being shot would not deter them, the threat of a poke with a toothpick surely would not work.

It might be argued that we need to punish people here because not everyone believes in God. To use an analogy, if I told people that I am protected by a sniper armed with a .50 caliber rifle, they might still make a go at me if they did not believe in the sniper. As such, I would want to show them my pistol to deter them. Likewise, to deter non-believers we would want to have jails and lethal injections to scare them away from misdeeds. After all, while some people might not believe in God, everyone believes in prison.

Of course, the fact that we rely on prisons and other punishments for deterrence does seem to indicate that we regard God’s divine justice as having very little deterrence value, unless, of course, it is claimed that criminals are atheists or agnostics.

There is also the usual concern that God does not seem particularly concerned with deterring misdeeds. After all, while religious texts present various threats of divine punishment, there is no evidence that God actually punishes the wicked, and this certainly cuts into the deterrence value of His punishments. To use an analogy, imagine if I told my students that cheating in my class would be punished by the Chair of Student Punishments for Philosophy Classes and that the punishment would take place after graduation. Imagine that a student turned in a plagiarized paper and cheated like mad on the tests, yet I did nothing and simply entered grades as if everything was fine and nothing had happened. Imagine that the students never see the alleged chair and the only evidence they have for her existence is the fact that she is listed on my syllabus and on a little sign I put up on an empty office. As might be imagined, the students would not be deterred from cheating.

If there really were a Chair of Student Punishments for Philosophy Classes, she would make an appearance in the class and administer punishments as soon as she was aware of the violations. The same would seem to be true of God. Crudely put, if He does exist and metes out justice, then we would not need to punish (at least in the case of the misdeeds that concern Him). If we do need to punish, then it would seem that either He does not exist or He does not dispense divine justice.


The Antinomies of Privilege

There’s a tired old argument that seems to have gained a new lease of life in these less exacting times (bad internet!), which holds that privilege functions as an epistemological barrier when it comes to understanding sexism, racism, inequality, etc; and, conversely, that being part of a group that is in various ways marginalized, oppressed or subordinated confers a sort of epistemological privilege when it comes to understanding the nature and reality of that situation.

Obviously, there is a kernel of truth to this argument, but it is also highly problematic (especially for people committed to the importance of reason, evidence, etc., as mechanisms for assessing truth-claims). Here are some of the things you need to get straight about if you’re tempted to deploy this argument.

1. If you think that one’s lived experience has systematic and predictable epistemic consequences, then you have to accept that this might flow in the opposite direction to the one suggested by the argument above. In other words, it is entirely possible that structural privilege confers epistemological privilege even when it comes to understanding the nature and reality of the situations of the subordinated, marginalized, etc. This is not a particularly counterintuitive thought (indeed, one could argue that it underpins most of our ideas about education). It’s easy enough to find examples of precisely this sort of argument from amongst even those who champion the cause of the underprivileged. So, for example, you’ll find that Marxists bang on about false class consciousness, ideological state apparatuses, hegemonic projects, etc., to explain how the marginalization and powerlessness of the proletariat messes with its head so it can’t see the reality of its true situation.

2. Yes, yes, I know, it’s one thing to know something in principle, but that’s not the same as experiencing it – there’s a sort of knowledge that comes with experience (some might claim). Well, there’s certainly a sort of something that comes with experience, but whether it is knowledge, and what sort of knowledge, is a difficult issue to sort out. Consider, for example: (a) that people disagree about the nature of their experience as members of purportedly marginalized groups (and some get called “gender traitors” for their trouble); (b) that there’s a wealth of data that suggests we’re actually pretty bad at correctly understanding the situations we inhabit (and indeed, even our thoughts about these situations); and (c) that people do not necessarily experience what most of us would take to be marginalized situations as being problematic (check out, for example, some of the literature on FGM; or ask yourself whether slaves in the ancient world would have accepted the legitimacy of the institution of slavery).

3. There is also the annoying tendency of (some of) the marginalized and subordinated not to see or experience their own marginalization and subordination in quite the same terms as those of us who are less marginalized and subordinated would have it. This is a problem of individual differences (i.e., the fact that individuals cannot be reduced to group characteristics). It comes up in a different guise in a row that played out between socialist and radical feminists in the 1970s, and which is still relevant today. In essence, the problem is that it is… implausible to suppose that there is enough that unites all women, or the working class, for example, for it to make sense to think that mere “membership” of these groups entails a common identity or interests. So, for example, the idea that the Queen of England has more in common with a working-class woman than does a working-class man, and is consequently better qualified to talk about their shared lived experience as women, is… well, problematic, to say the least. (Similarly, one might consider how working-class politics in the UK in the 1970s and 1980s was characterized by endless rows over pay differentials.)

4. There’s an epistemological problem with the argument to epistemological privilege. Specifically, it’s not easy to see how it is possible to substantiate the claim that epistemological privilege necessarily flows from certain kinds of marginalized experience without falling into contradiction. This is because the moment you appeal to evidence, argument, etc., you are operating precisely on the terrain of epistemic equality. The trouble is that if you deny that this evidence is generally accessible – if you really are committed to the view that there are certain privileged ways of knowing (and that you can’t know this to be the case unless you’re in a position of privilege) – then your position is simply an article of faith (in fact, it’s disconcertingly similar to the proof of God from religious experience).

5. Finally, there’s a rather subtle point about how you can know that some particular belief you have about your experience as a marginalized person is genuinely flowing from your epistemological privilege, rather than just being a possibly flawed everyday sort of belief. Or, to put this crudely, if you’re committed to the idea of epistemological privilege, it’s hard to see that you can ever be sure you’ve got it. Basically, the problem here is that if epistemological privilege (about certain sorts of things) belongs uniquely to the marginalized, then it seems to be required that the beliefs that are acquired via this privilege are valid even if they do not stand up to scrutiny in the court of universal reason (because if they do have to pass this test, then it seems there’s nothing in principle privileged about the epistemological situation of being marginalized – albeit de facto it might still be true that it’ll be easier to come by particular beliefs that turn out to be true if one is marginalized). However, if the court of universal reason has no jurisdiction here, it’s not clear you can subject your own beliefs to any sort of test. This is because it seems to be the case that even the most minimal of tests – for example, determining whether your beliefs are in accord with your experiences – requires that one makes use of the normal rules of rationality, evidential warrant, etc., all of which would also be available to the court of universal reason.

Okay, that’ll do for now. If you can sort that lot out, then good luck to you, you should carry on using the privilege argument. But the really cool thing here is that if you can’t sort any of it out, no problem, you can just tell yourself that these arguments are themselves a function of privilege. How lovely it must be to have recourse to a hermetically sealed argument that means you get to be right even if you have no idea why you’re right.

Athletes & God


Did God knock those guys down?

While professional athletes get the most attention when they thank God for their successes and victories, athletes thanking God is not that uncommon. It is also not uncommon for this sort of thing to attract both negative and positive attention. As should come as no surprise, there are some matters of philosophical interest here.

I will begin in a somewhat non-philosophical vein by noting that I have no problems with people expressing their faith in the context of sports. When I ran in college, I noticed that quite a few of my fellow runners were religious; I distinctly remember seeing people praying before the start of a cross country race (on some courses, divine protection was something well worth having) and flipping their crosses from the front to the back (also a good idea: racing downhill can result in a cross to the face). I was, at that time, an atheist. But, as a runner, I have a respect for devotion and faith. Plus, most of these people proved to be decent human beings and I certainly respect that.

When I race now, some of the races I compete in are put on by churches or have religious race directors. As such, I participate in races that often have a prayer before the start. While I am not known for my faith, I am generally fine with the prayers; they tend to be ones that express gratitude for the opportunity to be healthy and the hope that the runners will be watched over and come to no harm. I agree with both sentiments. What I find to be a matter of potential concern is, of course, when athletes credit God with their successes and wins.

On the one hand, if someone does believe in God, it does make sense to give God a general thanks. After all, if God did create the world and all that, then we would all owe Him thanks for existing and for having a universe in which we can compete in sports. There is also the fact that such thanks can be seen as the sort of thing one does: just as one thanks the little people for one’s success in the movies or politics, one should thank the Big Guy for His role in literally making it all possible.

On the other hand, an athlete thanking God for his or her specific success over others does raise some matters of philosophical interest that I will now explore.

One point of concern that is commonly raised is that it seems rather odd that God would intervene to, for example, help a pro football player score a touchdown while He allows untold amounts of suffering to occur. If He can help push a ball into the hands of a quarterback, why could He not deflect, just a bit, a bullet fired by a murderer? Why could He not just tweak a virus a bit so that it does not cause AIDS? The idea that God is so active in sports and so inactive in things that really matter would certainly raise questions about God’s benevolence and priorities.

Another point of concern is that to thank God for a victory is to indicate that God wanted the other side or other athletes to be defeated. While this would make sense if one was, for example, doing a marathon against demons or on the field against a team of devils, it seems less reasonable when one is just playing a game or running a race. When I beat people in a race, there seems to generally be no evidence that they are more wicked than I or any less morally or theologically deserving in the eyes of God (with some notable exceptions: you know who you are). It seems odd to think that God regards some teams or some athletes as His foes that must be defeated by His champions (I will, of course, make the obvious exception for the damn Yankees). So, if I beat you and I thank God for the victory, I would seem to be saying that God wanted you to lose. That would, of course, raise questions about why that would be the case. It seems to make more sense to say that I won because I ran faster rather than because God did something to bless me on the course or smite you.

The notion that God did something also raises an important moral point. A key part of athletic ethics is competing fairly, without things like illegal performance enhancing drugs or outside intervention. If I win a race because I was blood doping and had people tackling other runners in the woods, then I would be a cheater and not a winner. If God steps into athletic events and starts intervening for one side or person, then God is cheating. Given that God is supposed to be God, surely He would not cheat and would thus allow the better team or athlete to win. He might, of course, act to offset or prevent cheating and be morally just. However, while Jesus turned water to wine, God generally does not seem to turn steroids into saline.

As a final point, there is also the rather broad matter of freedom. If our athletic victories are due to God (and also our losses, but no one praises God for those on TV), then it would seem that our agency is lacking in these contests. God would be like a child playing with action figures (“zoom, Mike surges ahead for the win!” or “zap, Jeremy blasts past the Kenyans to win the NYC marathon!”) and the athletes would no more deserve the credit or the blame than the action figures. After all, the agency of both is simply lacking and all agency lies with the one moving the figures about. As would be imagined, this lack of agency would seem to extend throughout life: if God is responsible for my 5K time, then He would also seem responsible for my publications and whether I stab someone in the face or not. This is, of course, a classic problem, only now in the context of sports. Naturally (or supernaturally), the universe could in fact work this way. Of course, this would also mean that the athletes who praise God would be like sock puppets worn by a puppeteer who is praising himself or herself.

Now, if God does actually intervene in sports, I would like to make a modest request: God, could you see fit to shave two minutes off my 5K time this coming year? Oh, and as always, smite the Yankees. The Gators, too.


A World Less Violent?



Although the Libyan and Iraq wars recently ended, the world still seems like a violent place. After all, the twenty-four-hour news cycles are awash with stories of crime, war, riots and other violent activities. However, Steven Pinker contends, in his The Better Angels of Our Nature: Why Violence Has Declined, that we are living in a time in which violence is at an all-time low.

Pinker bases his claim on statistical data. For example, the records of 14th century Oxford reveal 110 homicides per 100,000 people, while the middle of the 20th century saw London with a murder rate of less than 1 per 100,000. As another example, even the 20th century (which saw two world wars and multitudes of lesser wars) killed only 0.7% of the population (3% if all war-connected deaths are counted).

Not surprisingly, people have pointed to the fact that modern wars have killed millions of people and that the number of people who die violently is fairly large. Pinker makes the obvious reply: the number of violent deaths is higher, but the percentage is far lower, mainly because there are so many more people today relative to the past.

As the title suggests, Pinker attributes the change, in part, to people being better at impulse control, considering consequences, and also considering others. This view runs contrary to the idea that people today are not very good at such things-but perhaps people are generally better than people in the past. Pinker does also acknowledge that states have far more control now than in the past, which tends to reduce crime.

While Pinker makes a good case, it is also reasonable to consider other explanations that can be added to the mix.

In the case of war, improved medicine and improved weapons have reduced the number of deaths. Wounds that would have been fatal in the past can often be handled by battlefield medicine, thus lowering the percentage of soldiers who die as the result of combat. Weapon technology also has a significant impact. Improvements in defensive technology mean that a lower percentage of combatants are killed, and improvements in weapon accuracy mean that fewer non-combatants are killed. The newer technology has also changed the nature of warfare in terms of civilian involvement. With some notable exceptions, siege warfare is largely a thing of the past because of the changes in technology. So, instead of starving a city into surrendering, soldiers now just take the city using combined arms.

The improved technology also means that modern soldiers are far more effective than soldiers in the past, which reduces the percentage of the population that needs to be involved in combat, thus lowering the percentage of people killed.

There is also the fact that the nature of competition between human groups has changed. At one time the conflict was directly over land and resources and these conflicts were settled with violence. While this still occurs, we now have far broader avenues of competition, such as economics, sports, and so on. As such, people might be just as violently inclined as ever, only now we have far more avenues into which to channel that violence. So, for example, back in the day an ambitious man might have as his main option being a noble and achieving his ends by violence. Today a person with ambitions of conquest might start a business or waste away his life in computer games.

In the case of violent crime, people are more distracted, more medicated, and more separated than in the past. This would tend to reduce violent crimes, at least in terms of the percentages.

A rather interesting factor to consider is natural selection. Societies tend to respond to violent crimes with violence, often killing such criminals. Wars also tend to kill the violent. As such, centuries of war and violent crime might be performing natural selection on the human species: the more violent humans would tend to be killed, thus leaving those less prone to crime and violence to reproduce more. Crudely put, perhaps we are killing our way towards peace.


Telling People to Shut Up

A little while ago I started to write a book for Continuum called, Identity Crisis: Against Multiculturalism. Its basic thesis is – or would have been – that the sort of multiculturalism practised in the UK is misguided and dangerous because it inevitably exacerbates the all too human tendency to divide the world into “people like us” and “people like them”.

I say “would have been” because it is now very unlikely I’m going to complete it. There are a number of reasons for my (almost a) decision to abandon the project, but the main one has to do with the rise of the EDL in the UK. Basically, I think the emergence of the EDL has changed the moral calculus here: it is one thing to write a book that is critical of multiculturalism when multiculturalism is getting a free pass; it is quite a different thing to write such a book when minority groups are under systematic and concerted attack by a bunch of racist football hooligans. Of course, this is a judgement call, and I can quite see how somebody else might come to a different determination: a reasonable person could easily think that I’m wrong to abandon the project for this reason.

Okay, so why is this of any interest? Well, imagine a world in which I’m a blogger at Socialist Unity (okay, that’s a stretch even for a thought experiment), and in this world “Jeremy” has decided to go ahead with the book. In this situation, if I found out about “Jeremy’s” decision, would I be justified in publicly urging him not to write the book (assuming I agree with the real-world Jeremy that the book is a bad idea in the current political climate)? In other words, if I thought he wasn’t helping in going after multiculturalism, would I be justified in telling him to shut up?

My view is that it isn’t at all clear that I wouldn’t be justified. It doesn’t seem implausible to think that any justification of a speech act has to take into account its perlocutionary effects (which is part of the reason why this whole tone troll meme is so absurd). It would seem to follow from this that if there were reasonable grounds for supposing that some particular speech act – or a book length variant – is likely to have bad effects, then I have a prima facie moral reason at least for urging silence. This is pretty obvious stuff: if I know that somebody is about to shout “fire” in a crowded theatre, and I think a stampede will likely be the result, then I am surely justified in urging the person to keep their trap shut.

Obviously there is complexity here. There are freedom of speech implications, for example: so, for instance, if one takes the naive act utilitarian view that every speech act must be justified by its particular consequences, then an individual or group can shut down all criticism just by making the consequences of such criticism sufficiently bad. And, of course, there are also complications to do with the absence of perfect knowledge: we can’t know with certainty what the outcome of any particular speech act is likely to be, etc.

But, in a way, the complexity is precisely the point. Reasonable people can disagree in good faith about the wisdom of writing a book, employing a particular rhetorical style, or articulating a particular speech act. They can do a proper moral calculus, and come to a different conclusion. They can be attentive to the same evidence, worry about the same moral issues, and come to a different determination.

If one accepts this point, how should one react if somebody else suggests that perhaps one ought not to write a book, or that one ought to tone down some rhetoric, or go easy with some criticism?

Well, at least one answer, which in my more pious moments I’m inclined to favour, is that one should ask whether their request – or even demand – has any merit. Are their concerns legitimate – can you see what they’re worrying about? Is their position held in good faith (since even if you think they’re mistaken, this is a relevant datum in terms of how one should view their character, etc)? Does their position have at least some evidential merit? In other words, one should react in a spirit of rational enquiry – after all, it’s possible they’ve got a point, and it’s possible that a lot is riding on getting things right.

How one should not react is simply to assume that they are beyond the moral pale because they make the request or demand. Sometimes, shutting up is the best option. And sometimes telling people to shut up is morally justified (and perhaps even obligated).

They Eat Horses, Don’t They?

Horse meat in Mongolia (image via Wikipedia)

In 2006, the United States Congress banned the use of federal money for inspecting horses intended to be slaughtered for food. Since the USDA requires federal inspection of all food-grade meat, this effectively ended the slaughter of horses for food in the United States. This ban was, however, lifted in November 2011, opening the door to the slaughter of horses for food.

While some people might wonder why there might be a need to resume slaughtering horses for food, there are some arguments that have been presented in its favor. I will consider some of these before moving on to some objections against killing horses for food.

One stock argument is the economic argument that while American slaughterhouses are not profiting or creating horse slaughtering related jobs, other countries (such as Mexico and Canada) are doing so. By having moral and sentimental qualms about killing horses for food, the United States missed out on the opportunity to create jobs and make profits in the horse meat market. Rectifying this will allow the job creators to create more jobs and will enable Americans to profit from the slaughter of horses, rather than allowing other countries to dominate the horse meat market.

In these troubled economic times, this argument does have a certain appeal. However, there is also the stock reply that just because something could be profitable and create jobs, it does not follow that we should do it. For example, legalizing various drugs would create American jobs and allow legitimate companies to profit; however, some people might regard this as morally unacceptable. As another example, prostitution could be made legal across America, thus creating many legal jobs of various sorts (pun intended) and allowing American companies to make a profit. But this might be regarded as morally unacceptable. Likewise, if using horses for food is morally unacceptable, then it would seem that we should not do it, even if doing so creates profits and jobs.

A second argument that has been advanced is that the economic downturn has resulted in more people abandoning their horses or being unable to properly care for them. Since horses cannot be slaughtered for food, these horses are left to suffer. Being able to slaughter horses for food would solve the problem of these suffering horses.

One obvious reply to this argument is that there seems to be no need to allow horses to be slaughtered for food to address the alleged problem of abandoned or neglected horses. After all, it would seem more humane to use the federal money to care for them rather than to inspect them to see if they are fit for hamburger. To use an analogy, imagine if it were suggested that we should start slaughtering children for food because the economic downturn has made it harder for parents to care for them. This would be a rather horrific suggestion. While horses are not children, it seems horrific to say that we can best help them by seeing to it that they are made into hamburger.

Even if it were accepted that the best way to address the abandoned or neglected horses was by killing them, it would hardly follow that this should be done by the meat industry in order to create meat to sell. That said, it could be argued that such meat should not go to waste. This principle would, it would seem, also indicate that abandoned dogs, cats and other pets should be inspected and made into food as a solution. This might be taken as a reductio, or perhaps as a business plan.

A second obvious reply is that it seems unlikely that the abandoned or neglected horses could supply enough meat to actually make a significant economic difference. That is, there are certainly not enough such horses to support an industry. As such, in order for the economic argument to work, another source of horses would be needed, such as horses raised specifically for food or horses harvested from public lands. While this would allow the economic argument to remain, it would certainly reduce the impact of the “mercy killing” argument.

Not surprisingly, I am not in favor of slaughtering horses for food. In part, as some proponents of horse slaughtering contend, this is due to sentimental reasons. My parents worked at a summer camp which had horses and, as such, I literally grew up with horses, learning to ride them and care for them. It is, as might be imagined, difficult for me to see horses as food. After all, friends do not eat friends. Also, like many Americans, I grew up with cowboy movies and I can no more accept the idea of eating Trigger or Silver than I can accept the idea of eating Lassie, Rin Tin Tin or the Little Rascals.

This, of course, merely reports on my psychology and, as such, has no logical weight by itself. After all, there are plenty of folks who would have no qualms sitting down to a main dish of Trigger with a side of Lassie.

There are, of course, various stock arguments against eating any animals and they can be pressed into service here. However, my objective is to present some arguments specific to horses.

For my first argument, I will steal from Kant. While horses are non-rational beings and would thus be mere objects in Kant’s moral theory, Kant does argue that we have indirect duties to animals. Roughly put, he contends that we can treat animals as analogous to humans when assessing how we should treat them (at least in a somewhat limited context). For example, if Ted has a dog Blue that has served him faithfully and well, while Blue is but an object, a human who had served faithfully and well would have earned proper treatment. As such, it would be wrong of Ted to simply dispose of Blue because he is too old to serve any longer. Kant also contends that we should treat animals well because doing so, crudely put, trains us to treat humans well. Likewise, we should not treat animals badly because doing so trains us to treat humans badly. Since humans matter morally to Kant, this is why our treatment of animals would matter.

Horses have clearly served humans very well. They have fought in our wars, carried us around the world, and been good companions. As such, we owe them a debt for that service. To simply treat them as meat would be small-minded and an act of ingratitude.

One obvious reply is that even if we assume that we might owe individual horses a debt, this does not apply to all horses. To use the obvious analogy, simply because one member of a family helped you out it does not follow that you then owe anything to other members of that family.

This does have an appeal to it. After all, the notion of owing a collective debt seems as mysterious as the notion of collective sin or collective rights. This is especially mysterious when one is speaking of owing a species. I do, as such, admit that this argument would only have bite with those who are willing to consider the notion that a collective can be owed for the action of the individuals who took specific actions.

For my second argument, I will steal from C.S. Lewis. In his classic The Abolition of Man, Lewis writes, “until quite modern times all teachers and even all men believed the universe to be such that certain emotional reactions on our part could be either congruous or incongruous to it -believed, in fact, that objects did not merely receive, but could merit, our approval or disapproval, our reverence, or our contempt.”

It is, of course, easy enough to take issue with Lewis. However, there is considerable appeal in his view and it seems appealing enough to extend it from objects to animals, actions and people.

For example, imagine that Ted the Just falls into raging flood waters and Sally the Brave leaps in to save him. After she pulls him from the water, Larry the Loather goes up and spits on her, saying “How contemptible and cowardly of you to have done that. I feel nothing but loathing for you, Sally.” Imagine that Ted says “What the hell? She was brave and deserves your respect!” If Larry says, “Fah, I feel no respect for her. I feel naught but contempt and loathing”, then he may very well be speaking honestly. However, it also seems clear that his feelings are not apt: Sally merits approval and respect regardless of what Larry feels or does not feel.

While it is obviously true that horses are regarded by some people as mere meat (and/or profits), there is the question of whether or not this regard matches what horses in fact merit. Do they merit being looked at as something to be butchered and sold by the pound, or do they merit better?

As might be imagined, I contend that horses merit better. To regard them with sentiment and respect is not simply a matter of emotional sappiness or being soft-hearted. Rather, it is to have the sort of feelings that horses do, in fact, merit. As such, to mass slaughter them and make them into hamburger is to act in ways that horses do not deserve and in ways that diminish us emotionally and morally.


Climate ethics: is sustainability possible?

Here’s the last in a series of three posts about shifting facts and climate change, from a talk in a series called Rights to a Green Future in Utrecht earlier this month. The general idea is that the facts of climate change are shifting around, and I think that’s doing something to moral reflection on action. The first post, about history and cumulative emissions, is here. The second, about the present state of play and equal per capita shares, is here. This post is about arguments for action that depend on some future good.

These arguments are hypothetical in form: if we value a sustainable world, a green future, a nice and habitable planet like the one we’ve got for those who come after us, then, the argument goes, such and such a sort of mitigation or adaptation strategy is now demanded. Sometimes the argument is reversed: if we want to avoid a future with a lot of miserable lives in it – suffering we might dodge if we choose wisely now – then, again, such and such a strategy is morally demanded of us.

Commitments to a sustainable future, when they do appear in international negotiations, typically mention the Brundtland Report’s definition: sustainability ‘implies meeting the needs of the present without compromising the ability of future generations to meet their own needs’. The question at the back of everyone’s mind when they hear this, the question which needs now to shift to the front, is this: is it possible for our needs and future needs to be met?

There are now a lot of people around – we were just joined by number 7 billion, and it looks like we’ll have 10 billion before the world’s population levels out. Most of our basic needs are met by burning fossil fuels. In other words, the moral argument for action now might be thought to boil down to the question of whether we really can act to meet everyone’s needs. In a nutshell, is sustainability possible? Is it possible to meet our own needs and leave a habitable world in our wake?

That’s partly an empirical question. The world seems to have settled on two kinds of targets or limits – the thought is that if we pass them we’re in for dangerous climate change and an unsustainable world. One is 2 degrees Celsius of warming above pre-industrial levels, and indeed this target was loudly endorsed at the United Nations Framework Convention on Climate Change conference in Cancun in 2010, which called for all countries to take urgent action to limit the increase in global average temperature to beneath this temperature threshold. (There’s a good summary here.) Some nations, particularly low-lying island states with a lot to lose as sea levels rise, have argued that 1.5 degrees or less is the only safe maximum. (112 countries argue for this more ambitious target. There’s a list and details here, of the so-called Least Developed Countries and the Association of Small Island States.)

How likely are we to stay under the 2 degree target? We have already warmed the world by 0.74 degrees, and another half a degree or so is thought to already be in the climate system. In a paper which appeared in October of this year, an examination of published emission scenarios from different climate models found that in the set of scenarios with a ‘likely’ chance of staying below 2 °C – and by that the authors mean merely a better than 66% chance – emissions must peak and begin falling rapidly very soon, between 2010 and 2020. (Joeri Rogelj et al., ‘Emission pathways consistent with a 2 °C global temperature limit’, Nature Climate Change 1 (2011), 413–418.)

As they put it,

“Without a firm commitment to put in place the mechanisms to enable an early global emissions peak followed by steep reductions thereafter, there are significant risks that the 2 °C target, endorsed by so many nations, is already slipping out of reach.”

The related target is 450 parts per million of greenhouse gases in the atmosphere, the maximum concentration thought to be compatible with staying beneath the 2 degree threshold. The level at present is about 390 ppm. It turns out that while the projected date at which passing 450 becomes unavoidable is still several years ahead, the choices we make now about building power plants and extracting energy can ‘lock us in’ to pathways that overshoot 450. According to a report released last month by the International Energy Agency (World Energy Outlook 2011), the world’s existing infrastructure is already producing 80% of the carbon budget we’ve got left if we want to stay under 450 ppm. If trends continue and we build more fossil fuel burning energy plants, by 2015, 90% of the available “carbon budget” will be spent. By 2017, the remaining carbon budget that might keep us under 450 ppm will be gone, and we’ll have no chance at all of staying under 2 degrees. As the Guardian reported,

“The door is closing,” Fatih Birol, chief economist at the International Energy Agency, said. “I am very worried – if we don’t change direction now on how we use energy, we will end up beyond what scientists tell us is the minimum [for safety] … If we do not have an international agreement, whose effect is put in place by 2017, then the door to [holding temperatures to 2C of warming] will be closed forever,” said Birol.

Are we likely to have such an agreement? Copenhagen was viewed by many as the world’s last chance at a global agreement, and of course that did not materialise. As I write this, newspaper reports from the current UN Climate Conference in Durban say that the world’s leading economies now privately admit that no new global climate agreement will be reached before 2016. The EU is pressing for targets now, but the US, Canada, Russia, Japan, India and China say new negotiations should not begin until 2015, to come into effect in 2020 at the earliest.

The IEA, again in its 2011 report,

“projects that world CO2 emissions from fuel combustion will continue to grow unabated, albeit at a lower rate … [this] is in line with the worst case scenario presented by the Intergovernmental Panel on Climate Change (IPCC) in the Fourth Assessment Report (2007), which projects a world average temperature increase of between 2.4°C and 6.4°C by 2100.”

For what it’s worth, this kind of talk jibes with the results of a 2009 poll, undertaken by the Guardian, which showed that,

“Almost nine out of 10 climate scientists do not believe political efforts to restrict global warming to 2C will succeed … An average rise of 4-5C by the end of this century is more likely, they say, given soaring carbon emissions and political constraints.”

What exactly does passing the 2 degree limit mean? No one is sure. It’s synonymous with so-called ‘dangerous climate change’ or ‘runaway climate change’. The IPCC associates temperature rises above 2 degrees with ‘more and more negative impacts’. Mark Lynas put some flesh on these conservative bones with a book called Six Degrees, an attempt to work out what we’re in for as the world heats up, degree by degree, by looking at what the world has been like, in its long history, at those temperatures. It’s just one take on our prospects past 2 degrees, but it’s well-researched, compelling stuff. Here’s a summary:

Between 2 and 3 degrees of warming, one ‘tipping point’ is crossed. Enough heat to cause the eventual complete melting of the Greenland ice sheet is in the system, which would eventually raise global sea levels by as much as seven metres and change the planet’s weather systems. Heat waves are likely to be responsible for many deaths each summer in Europe, coral reefs die and the marine food chain is disrupted, and the loss of fresh water from melting glaciers and snowpack affects both food production and the availability of drinking water.

Between 3 and 4 degrees, a large tipping point is crossed, where it’s thought that climate mechanisms might run out of control, with tipping points leading to the emission of more greenhouse gases and more tipping points leading to the emission of still more greenhouse gases, and so on until warming is, in effect, runaway. If the Amazon rainforest collapses, dries and burns, as is consistent with a 3 degree world, the carbon released could be enough to push us up another 1.5 degrees past a four degree world. Beyond three degrees, Africa, Australia and parts of North America turn into deserts on some climate models – food production obviously suffers, and water becomes scarce.

Between 4 and 5 degrees another tipping point is crossed, the Arctic permafrost melts, and huge amounts of methane and carbon dioxide are released into the atmosphere, further increasing the effects of climate change and pushing us up to 5 or 6 degrees. The Arctic melts, again increasing sea level. Humanity heads towards the poles, as other parts of the world become uninhabitable.

Beyond 5 degrees … there’s nothing like a clear picture. The world hasn’t been that hot for millions of years. Lynas talks of methane hydrates on the ocean floor erupting up in warmer waters and pushing the greenhouse effect out of control, and real questions arise here about the possibility of human beings joining the other 95% of the earth’s species in extinction. There’s talk of the Earth becoming a hot, desolate, lifeless ball, like Venus.

So what can we make of arguments for sustainability in the light of all this? Is sustainability still a live possibility?  The argument is of the form, if we want a world like x, we must do y — but it’s possible that a world like x is becoming less and less likely.

It seems to me that sustainability arguments can still take hold of us, with a particular sort of urgency, but perhaps only for a few years more, after which it becomes more and more likely that we’ll be unable to do anything to avoid the possibility of runaway climate change. I have to admit that it’s not easy to say things like this and keep a straight face. One sounds very much like some end-of-the-world cultist, warning that the end is nigh, but the voices telling us that we’ve only got a few years left to leave a habitable world in our wake are coming from the authors of peer reviewed papers, the heads of respected research institutions, the writers of books that win the Royal Society Science Prize. The world’s nations have agreed a 2 degree target, calling climate change an ‘urgent and potentially irreversible threat to human societies and the planet’. These aren’t crazy people talking. It’s the agreed language of representatives of our governments.

There are thoughts to be had here about civil disobedience, as well as other thoughts about human nature. But since I wrote this, we’ve had something of a conclusion in Durban. (Mark Lynas’ valuable discussion of the meaning of the Durban Platform is here.) It looks like there’s a commitment to have a commitment in 2015, which will come into legal force, if all hurdles are cleared, in 2020. Whether or not we’ve left it too late is unclear, but there’s room for philosophical reflection on how to think about this possibility, about what it does to arguments for action on climate change, and about what to make of ourselves against this backdrop.

Atheists, India and Australia

I’ve blogged elsewhere about a little trick that is embedded within the Morality Play interactive activity.

Very quickly, one of the questions asks whether there is a moral obligation to help a person who is in severe need.

You see a charity advertisement in a newspaper about a person in severe need in India/Australia. There is no state welfare available to this person, but you can help them at little cost to yourself. You have good reason to believe that any help you offer will make a difference. Are you morally obliged to help the person?

Half the people undertaking the activity are told that the person lives in India; the other half that the person lives in Australia. They are then asked to state whether they think we are “Strongly Obliged”, “Weakly Obliged” or “Not Obliged” to help the person.

After nearly 1000 responses, this is what the results are showing us.

The thing that has really caught my attention is the results for people who self-identify as Christians and atheists, respectively (more precisely, the atheist group self-identify as having “No Religion”, so they could be agnostics, or perhaps even deists of some sort, but for the sake of convenience, I’m going to call them atheists).

The headline news is that atheists are twice as likely as Christians to think we’re “Not Obliged” to help the person in need in India (currently, 43% as opposed to 21%).

I actually find that quite shocking. But perhaps even more shocking is the fact that the atheist group is much less likely to respond that way when asked about the person in Australia. Here (only) 35% think we’re not morally obliged to help. There are two further points here: (1) this gap is four times as large as the average gap across all respondents (and it’s easily statistically significant – I checked!); and (2) if you look at the Christian group, in complete contrast to the atheist group, you find that they are more likely to think we’re not obliged to help the person in Australia.
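For readers curious what a significance check on figures like these involves, here is a minimal sketch of a pooled two-proportion z-test. The per-group counts below are hypothetical (the post reports roughly 1000 total responses and the 43% vs 21% split, but not the exact group sizes), so the numbers are illustrative only, not the check actually performed for the post.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pool the two samples to estimate the common proportion under H0.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF, built from erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 43% of 230 atheists vs 21% of 230 Christians
# answering "Not Obliged" about the person in India.
z, p = two_proportion_z(99, 230, 48, 230)
```

With gaps this large and samples in the hundreds, z comfortably exceeds the usual 1.96 cutoff, which is consistent with the post's claim that the difference is easily significant.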

My first reaction to these figures was to think I had messed up the programming somewhere. But I have double and triple-checked, and I’m almost certain that I haven’t. Plus, I’ve checked the numbers manually (so to speak); and the figures in the charts correctly add up to 100, so I think this really is what the numbers are saying.

My second reaction, of course, was to think about confounding variables and systematic biases. (Note to any stray new atheists reading this: I am fully aware of the dangers of a non-randomised, self-selecting sample, and that it is not possible to generalize these results, but the fact remains that these results are curious, and rather shocking, in and of themselves – we’re not talking about tiny numbers of people here).

So what’s going on? I don’t really know, but if I had to guess, I’d say it’s possible there is some correlation between youth and irreligiosity specific to these activities (because they tend to get picked up by European schools and colleges), and that it might be that young people are less likely to think in terms of moral obligation than older people; it also seems possible that various stripes of moral nihilism might result in non-religious people denying that one is morally obliged to help others (even if they would in fact help others).

But the difference between the atheist response to the India and Australia conditions is… well, harder to explain (and, as I said, it’s a little disturbing). Anybody got any ideas?

Are Professors Laborers?

Graduation cap and diploma


Members of many professions like to hold to a certain image of their profession. In some cases this is a mere illusion or even a delusion. In the case of professors, we often like to think of ourselves as more than just paid laborers but rather as important members of a learning community.  Administrators and others often like to cultivate this view (or delusion). After all, members of a learning community will do unpaid work for “the good of the community” while a smart laborer never works for free.

On one hand, a professor is clearly a paid worker. Professors get a salary and benefits (if they are lucky) in return for doing work for the school. While professors typically do not punch the clock or record the hours (or minutes) of their work, they are still expected to earn their pay. As such, professors can be seen as any other worker or laborer.

On the other hand, professors (as noted above) are also often seen as being members of a learning community. While they are paid for the work, they are also expected by tradition (and often by assignment of responsibilities) to engage in various unpaid endeavors such as publishing articles, doing community service, doing professional service, assisting student clubs, and so on. These activities are seen as being valuable, but they also generate value for the professor in that s/he is adding to the community, a contributor to the general good.

Like many professors, I was very much of the “good of the community” sort of professor in the days of my youth. I made my work on fallacies freely available, accepted all invitations to speak (for free), helped students prepare for graduate school, wrote letters for students who had graduated long ago, and did a multitude of other extra (and unpaid) things. While none of this was required or had any impact on my pay, I regarded all of it as part of the “good of the community” duties of a professor.

In recent years I noticed the increasing tendency to look at the academy as a business and to approach it using certain business models. While I am all for greater efficiency and a smooth running business aspect of the university, I did look upon the expansion of this model with some concern.

One effect of this view is what seems to be an obsession with assessment and metrics. Professors are finding that they need to quantify their activities in ways set by administrators or the state. While I do agree that professors should be accountable, one unfortunate aspect of this approach is that often little (or no) value is placed on the unpaid “community good” work of professors (or the unpaid work is simply rolled into the paid work but the pay is not increased).

Also, casting professors as workers to be carefully monitored can have a negative impact on the “community good” aspects of being a professor. One reason for this lies in the difference between the reasonable attitude of a paid laborer and a member of a community.

If I am a member of a learning community, then I have a stake in the general good of that community and part of my compensation and motivation can be that I am contributing to that good. After all, as a member of the community, I have a stake in the good of that community and thus it is worth my while to contribute to that good. The analogy to a family or group of friends is obvious. As such, this view can incline professors to do unpaid work for the “good of the community.” Of course, for professors to justly believe they are a part of a community, there must actually be such a community, rather than a mere business.

However, if I am simply a worker in the education business and the quality and extent of my efforts are disconnected from reward (at many schools, merit pay is a thing of the distant past and bonuses apparently only go to top administrators), then it would seem I have little economic incentive to do more than what is required to keep my job.

Even if my efforts did yield economic rewards, I would only have an incentive to go above and beyond the basic level in regards to things that would yield economic results for me. Obviously, merely being good for the community would hardly provide a suitable motivation to do anything extra.

After all, if the goal of a business is to get maximum revenue for minimum expenditure, the goal of a worker would seem to be a comparable sort of thing: to get the maximum pay for the minimal effort. If doing the job with greater quality or doing more work yields no economic benefit, then there would seem to be no incentive to work beyond what is required to simply stay employed (unless, of course, one is looking to move to a better job with another job creator).

Employers can, of course, counter this by compelling workers to work more or do higher quality work through the threat of unemployment. The worse the economy, the bigger the stick that employers wield, and these days employers can swing a rather big stick. However, compelled employees tend to be demoralized employees, and threatening people in order to achieve excellence generally does not meet with much success. Also, CEOs and their supporters argue that quality work must be duly compensated, but perhaps that only applies to the top executives and not mere workers.

It can be argued that professors have had it too easy over the years and that it is time that they be locked into the same sort of business reality that almost everyone else is compelled to endure. While this might make some gray-haired folks cry out as their ivory towers are stripped and sold on the free market, this is the new economic reality: universities are not learning communities; they are businesses that deal in the commodity of education (and sports, merchandise, etc.). Professors will need to awaken from their delusional dreams and accept that they are workers in this education factory. True, some of these education workers might deserve some additional compensation for improving the product, offering quality customer service or otherwise aiding the business. Naturally, they cannot expect too much; as always, the lion’s share of compensation belongs not to the mere employees, but to the top executives.


Who said this? To whom? Redux, redux.

Since it’s Christmas, here’s a little quiz. Which philosopher is responsible for the following, and to whom was he or she writing? No Googling, and marks will only be awarded if you offer an explanation for your guess… deduction.

I prefer this situation to that even of your delicious villa… One is not alone so frequently in the country as one could wish: a number of impertinent visitors are continually besieging you. Here, as all the world, except myself, is occupied in commerce, it depends merely on myself to live unknown to the world. I walk every day amongst immense ranks of people, with as much tranquility as you do in your green alleys. The men I meet with make the same impression on my mind as would the trees of your forests, or the flocks of sheep grazing on your common. The busy hum too of these merchants does not disturb one more than the purling of your brooks. If sometimes I amuse myself in contemplating their anxious motions, I receive the same pleasure which you do in observing those men who cultivate your land; for I reflect that the end of all their labours is to embellish the city which I inhabit, and to anticipate my wants.

Remember, no cheating. And the decision of the judge – that’s me – is final. Your prize for winning will be to have your name up in lights on Twitter. Now, who could resist such an inducement, right…?