Monthly Archives: November 2012

Knowing I am Not the Best

Long ago, when I was a young boy, I was afflicted with the dread three Ss. That is, I was Small, Smart and (worst of all) Sensitive. As a good father, my dad endeavored to see to it that I developed the proper virtues of a young man. Fortunately, his efforts were ultimately successful although the path was, I am sure, not quite what he expected. Mainly because the path was mostly track, road and trail rather than field, court and gridiron.

As part of this process, I was sent to basketball camp to develop my skills in this reputable game. I was a terrible player with no real skill and no genuine interest in the sport; I much preferred reading to shooting hoops. However, I went to the camp and tried to do the best I could within the limits of my abilities.

During one drill, the coach yelled out for the best player to run to the center of the court. Immediately all the other boys rushed to the center of the court. Being honest in my assessment of my abilities I did not move. While I might not have been the worst player present, I was clearly not the best. I was not even within free throw distance of the best. For some reason, the coach made all the boys do pushups. He also made me do pushups, albeit double the number done by the other boys.

I thought this was very odd since this sort of thing seemed to encourage self-deception and that seemed, even to the young me, wrong. I recall quite well getting considerable abuse for my actions, which made me think even more about the matter. I did know better than to discuss this with anyone at the time, but I have thought about it over the years.

In recent years, I have run into something similar. I am always asked before I go to a race if I will win. I always give an honest answer, which is usually “no.” This always results in an expression of dismay. While I have won races, I am now 46 years old and folks with far fewer years and miles show up to take their rightful place ahead of me, earning this because they are better than I am. My pride and arrogance, of course, compel me to say that when I was the age of many of my competitors, I was faster than they are now. But, as the saying goes, that was then and this is now. Barring a TARDIS picking up my twenty-something self to go to the races of now (to save the galaxy, of course—racing is very important) I am forced to content myself with a folly of age: looking back on how good I was and comparing the younger me with my current competition.

On the one hand, I do get the point of self-deception in regards to one’s abilities. After all, it could be argued that incorrectly thinking he is the best would help a person do better. That is, thinking he is the best will push him in the direction of being the best. I do, in fact, know people who are like this and they often push very hard in competition because they believe they are better than they actually are and are thus driven to contend against people who are, in fact, better than they are. On the downside, when such people are defeated by those who are better, they sometimes grow angry and concoct excuses for their defeat to maintain the illusion of their superiority.

On the other hand, such self-deception could be problematic. After all, a person who wrongly thinks he is the best and operates on this assumption will not be acting rationally. There are, in fact, two well-known cognitive biases that involve a person thinking he is better than he is.

One is known as the “overconfidence effect.” This bias causes a person to believe that she has done better than she has in fact done. As a professor, I commonly see this bias when students get their grades. For example, I have lost track of the times a student has said “my paper felt like an A” when it was a D (or worse) or has said “I think I did great on the test” when it turns out that they did not do so great.

A closely related bias is the “better-than-average illusion.” A person falls victim to this when she overestimates her abilities relative to others, usually those she is engaged in competition with. Since people often think very highly of themselves, people commonly fall into this trap.

While confidence can be a good thing (and thinking that one is going to do poorly is a way of contributing to making that a reality), this bias obviously has negative consequences. One rather serious problem is that it can lead people to actually do worse. A person who overestimates her performance or abilities might not try as hard as she should—after all, she will think she is already doing much better than she is, and thus come to a false conclusion about, for example, her grade. This is most likely to occur when the person does not have immediate feedback, such as on a test or paper.

It can also have the impact of causing a person to “burn out” by trying too hard based on a false assessment of his abilities. For example, a common sight at road races is inexperienced runners sprinting out ahead of the experienced (and better) runners only to quickly discover that they are not as capable as they had believed. It can even happen to people who should know better. For example, some years ago I went to the USA 15K championship race as part of a team. Our supposed best runner was bragging about running with the Kenyans. Unfortunately, he got passed by some female runners (as did I—the race attracts top talent) and this apparently broke him to the point where he gave up. I knew my capabilities and was honest about them, so when the fast ladies surged past me I just stuck to my plan. I knew what I could do and what I could not do—and I knew I had a lot of race left and no reason to burn myself out due to a false belief in my abilities. Fortunately, the rest of the team delivered solid races and we took an honorable third place. My experience has been that I do better when I have an accurate assessment of my abilities relative to my competition, most especially in running. Naturally, I do my best—but to do this, I must have a reasonable gauge of what this is to avoid being overconfident and to resist being defeated by my own foolish and unfounded pride.

It might be objected that my rational assessment of my abilities robs me of the critical passion that one must have to be a true competitor. This is, however, not the case. As my friends will attest, while I am gracious in defeat I also hate to lose. In fact, honesty compels me to say that I hate losing slightly more than I love winning. And I really love to win. As such, when I get to the starting line, start presenting a philosophical paper to people looking to score philosophical pissing points, or join a competitive video game, I am there to win and to make others lose. But, victory often rests on knowing what I and my competitors can and cannot do. I gain no advantage by deluding myself into thinking I am better than I am or they are worse than they are. True, I am not free of self-deception. But I do not willfully add to it.

My Amazon Author Page

Technology and Freedom [Freedom, part II]

In my earlier post, I suggested that we could look at freedom from three perspectives, and I will get back to that at the end of this post. But I want to also look at the way that the ideal of freedom has been affected by technological shifts.

The environment of nature has always put limitations on freedom in that it has always required certain behaviors and disallowed others: there have always been “laws” in nature that we do not have the freedom to surpass. The environment demands a certain amount of food, air, water, work and rest, regardless of how those things are achieved. Nonetheless, so long as no person interferes, the natural difficulties which arise are shrugged off as amoral, merely luck and not much to account for. By this understanding, freedom as an ideal is only limited when human laws get in the way, not when disaster, illness, accident or other natural causes do. This classic American vision of freedom at first seems to contain a Rousseauian assumption that a social contract is unnecessary, that life without a social contract consists of individuals who leave one another alone and seek out what they need in relative peace.

However, such a viewpoint is radically at odds with a world of business. In order for industry and technology to grow, for capitalism to achieve its goals, it is vital that networks and groups – companies and corporations – are formed, compete and grow as well. In fact it seems that the 19th century assumption is more Hobbesian in its premise but just draws a different conclusion: life without a social contract is nasty, brutish, short – and totally awesome. The fewer rules prescribed, the more battles must be fought, but this is a benefit rather than a cost, and the “collateral damage” of those lost in the fight is worth the rise of empire.

But all of this becomes more complicated as technology expands. While nature provided limitations that could not be denied, the freedoms of individuals allow for the alteration of nature and new rules are put into play. In other words, the environment of a contemporary person is less limited by natural factors than by the structure of society. Unless born into specific circumstances, a person cannot simply start hiking, foraging, farming or hunting to survive. Instead, to afford food, shelter and transportation it’s necessary to take part in the economy, and this is thanks to the revolutionary changes put into place by businessmen. Thus the freedom to do anything leads, through technology, to particular limitations for the citizen. It is not the forces of government that put those rules into place, but the forces of invention; even Amish communities allow themselves limited use of certain technologies just to be able to survive (once local resources like lumber get used up and trading becomes necessary).

In other words, society takes over for nature as the primary environmental setting in which people live, and the needs and options are determined according to social rules. The very basics – a job and a place to live – come with various strings attached, and many other aspects will seem necessary to the majority as well, things like the right sort of clothing, cable TV, household appliances, a diamond ring, a nice car, or an iPhone. Conveniences and achievable luxuries in life change expectations until it is assumed that everyone ought to be taking advantage of their availability, and they become simply “the norm”. The more such social roles become defined, not just according to gender or family but also generation, musical preferences, political parties, brands or stores, and all manner of interests, the more identity is socially secured, and freedom is harder to reach. (While one may be free to break social norms, it is always easier for those with resources than those without, as social approval is usually needed to get a job, and in any case social acceptance is a constant component of life choices.)

To return to the three aspects of freedom I discussed in part one of this post, we can link back to a classic trichotomy: one could think of these forms of freedom as elements of the true, the good and the beautiful. The first form, freedom as what you are physically able to do, describes what is actually possible and factual—but truth as potential, seen through the lens of technology, is an active and relative descriptor. What is possible is always becoming, not a final determination. As technology grows, even nails in coffins are looked upon as puzzles that might unlock.

The second, the choice an individual can make, is clearly in keeping with the history of the good, the right, or the legal. This too is entangled with the changing options of a world with new identities and roles. Goodness has always been perspectival in practice given the necessity of conflicting interests, even if certain thinkers have maintained belief in an ultimate form, but here it takes on a Sartrean component—what is good is whatever you are willing to live with. The individual bears the burden of complete freedom to make moral decisions, as even those who claim absolute answers can at best be “one absolute answer among many.”

Finally, the notion of what is most beautiful or appealing to the soul includes freedom in another way. Here it is the feeling of freedom as an emotion being connected to the feeling of beauty. Kant’s theory of beauty speaks of aesthetic judgment, or the mental sensation of recognizing something as beautiful, as a “free play” between imagination and understanding. Since the understanding is the ability to conceptualize or see things as belonging to categories, beauty is the ability to go beyond that and experience the item in a way that breaks free from rules or standards. Although it is merely concerned with a direct experience of the environment, and not the meaning of one’s larger social role or way of life, there is something analogous about beauty and freedom in an anarchic sense.

Altogether, then, the larger idea of freedom seems to combine an awareness of an unknown future, the weight of responsibility, and the sense of excitement of breaking out of routines. Which aspects are people worried about? It is probable that when spoken of in theoretical terms, it is the second one, a moral freedom to determine one’s own values, that is cited most, but when referred to simply as a broad worry, there are aspects of the other two as well—a sense of fear that opportunities just won’t be available or social constrictions will hold us all hostage.

In fact, I think a strong case could be made that it is that third one, the aesthetic of freedom, that drives concerns about losing freedom. And of course, the more determinations are made to assure factual freedoms, the less the aesthetic of freedom has any place. In reality, the aesthetic of freedom includes tragedy, pain, and risk – it includes competition and even violence – but the volatility inherent to this sensory freedom is at odds with the stability and reliability expected from guarantees and laws, even those that protect freedoms. Freedom writ large cannot be simply defended, but has to be understood as a whole variety of different issues and desires that can be taken in turn.

If the post-Industrial age has brought with it new problems of freedom, they are not tied to certain policies but a much more complex series of historical and technological changes that has produced roles not of family members or craftsmen, but of consumers and servers – roles heavily tied into an economy rather than a community.

The Multiplicity of Freedom [Freedom, part I]

There is a claim made by a portion of Americans—especially among those who lost the most recent election—that they defend the ideal of “freedom” and that it is in danger of slipping away, either under the current administration or just in contemporary culture generally. But the idea of freedom is both vague and complex. Although this is an enormous topic, there are a couple points I’d like to make, one regarding the multiple angles of the concept to begin with, and one regarding how history and technology have had an effect. Today, I’ll look at three ways that the concept of freedom may be grasped: as ability, as choice, and as feeling. In my next post, I’ll follow up with what this means in context.

The first version of freedom is the simple capacity to do something. This is originally inhibited only by the laws of nature—I can walk but I can’t fly, and though I am free to be lazy I still must find food if I wish to stay alive. However, as history progresses this aspect of freedom is impacted by technology and society. For instance, my first example is now false in everyday parlance—modern human beings fly all the time. Donna Haraway’s theory of cyborgs exploits this use of freedom: ultimately, what we are able to do is what makes us free, so technology is a beneficent force. For Haraway, women in particular suffer when reduced to that which nature intends—or demands—and not allowed the creativity of the artificial. Once intertwined with technological possibilities, embracing a “cyborg” nature as she calls it, women can actuate a new level of freedom. This goes against tradition and any idea of natural law, of course, in which freedom is met by clear boundaries.

The second concept is the idea of free will or autonomy, which is not the physical possibility of performing a particular action, but the process of choosing intentionally to do so. (This is the kind of freedom that usually gets tied up in theories of determinism, which I am not going to address here.) Nonetheless, autonomy is always complicated by secondary pressures and forces. That is, the individual may define this notion of freedom externally by some form of law or moral boundary that is not identical across the population. It is easy to say we should all be free, but harder to agree on whether that freedom includes certain choices—and as it turns out, much of what is considered taking away freedom by one group is seen as a way to save or protect freedom by another. It is an argument of definitions as much as policy: Is it the freedom of the mother or the fetus that should be under consideration when discussing abortion? Is it freedom of speech to be able to demean someone for their belief, or freedom of religion to be able to practice that religion without persecution? The autonomy of multiple parties has to be accounted for, and is commonly in conflict. The most libertarian approach, where existence and action always win over persecution and impediment, runs into trouble when trying to explain why people can’t be watched, used, and generally exploited, since it’s the freedom of the big guys to keep expanding their enterprises. Limitations that recognize protecting freedoms to, for instance, pursue happiness and not just maintain one’s existence, complicate definitions and also leave the edges of each person’s liberty rubbing against everyone else’s.

The third is a less specific ideal and one that permeates the American psyche. It is the fantasy of a new beginning, of wild horses and open land on an uncharted continent allowing for anything to happen. This notion can change as time passes, and history begins to settle in. America is a young country, but no longer adolescent. When Emerson wondered what the “new American Scholar” would be like, the Civil War had not even taken place yet. He advised members of the childlike country to stick closer to Nature and Action than Books, to explore things anew instead of being weighed down by history, but now Americans are bound to the traditions of our own books, quoting Emerson instead of following his advice. Even so, the feeling of excitement towards free, open space, a sense of boundlessness and lawlessness, is clearly universal, and there are multiple ways that this desire manifests. The question may be how it is related to the more distilled forms of freedom mentioned earlier.

In our most everyday use, we might say freedom is the ability to do as you choose. This definition could be thought to include both capacity and self-rule. One might presume it to be boundless unless directly challenged, but on closer inspection neither component requires there to be an immediate enemy in order to be reduced. Both the potential avenues a person can travel, as well as their own awareness and determination in making active choices, can face severe erosion due to social and environmental factors alone. In other words, a person’s freedom can be limited by the chance experiences they undergo in life, so that they are stuck in a situation where there truly is no other choice, or in terms of our definition, where they have no freedom.

Does such a situation count as a society taking away freedom? I will look into how this multiplicity of freedom can clarify the nature of the concept, as well as discuss the historical arc of technological change, next.

Race in America

While the United States professes that all men are created equal and there has been talk of a post-racial America, race is still a significant factor. To use but one example, the 2012 Presidential election involved considerable focus on race. Some, like Bill O’Reilly, lamented what they seem to have taken as the end of the dominance of the white establishment. Others merely focus on the demographic lines drawn in accord with race and hope to appeal to those groups when election time comes.

Despite this unfortunate obsession with race, the concept is incredibly vague. There have been various attempts to sort out clear definitions of the races. For example, the “one drop rule” was an attempt to distinguish whites from blacks, primarily for the purposes of slavery. More recently, there have been attempts to sort out race based on genetics. This has had some interesting results, including some people finding out that the race they identified with is not the same as their genetic “race.”

In many ways, of course, these sorts of findings illustrate that the concept of race is also a matter of perception. That is, being white (or black or whatever) is often a matter of being perceived (or perceiving oneself) as being white (or black or whatever). Race is, to a large extent, a social construct with little correlation to genetics.

Getting back to genetics, many Americans are mixed rather than “pure.” This, of course, creates the problem of sorting people into those allegedly important racial demographics. After all, if a person has a mixed ancestry, they would not seem to fall clearly into a category (other than mixed). To “solve” this “problem” the tendency is to go with how the person is perceived. To use one example, consider President Obama. While his mother was white and his father black, he is considered black (after all, his place in history is as America’s first black president). The fact that he is considered black is thus a matter of perception. After all, he is just as white as he is black—although, of course, he looks black. As might be imagined, appearance is often taken as the major determining factor in regards to race. Obama looks more black than white, so he is classified as black. Or so it might be claimed.

There is, of course, a problem in regards to people who are “mixed” but look “pure.” Interestingly enough, in the United States a “mixed” person who looks “pure” typically means one who looks white enough. After all, people who are “mixed” but do not look clearly white are typically classified as belonging to the “other” race. Like, for example, President Obama. People who look white enough are typically classified as white, despite their actual ancestry.

I can use myself as an example in this case. While my mother’s side is documented “white” all the way back to the Mayflower, my father’s side is mixed. While my grandfather’s ancestry is French and some Native American, we really have no idea about the specific mix. My grandmother, however, was at least 50% “pure” Mohawk. As such, I am mixed. However, I look rather white and I have consistently been treated as white. Since many official forms and job applications require that a person identify by race, I always pause and look through the categories—especially when there are supposed to be consequences for not being honest. When a form allows multiple selections, I go with “white” and “Native American” since that is true. If I can only pick one, I usually go with “other” and if that is not an option, “white.” After all, no one would doubt that I am white simply by looking at me. As such, I might “really” be white—at least in the way that matters most in society (namely appearance). However, the race categories continue to annoy me and I always worry a tiny bit that I will be busted someday for putting down the wrong race.

My Amazon Author Page

Rockets & Ethics

In a repeat of events in 2008 (and earlier), Hamas stepped up its rocket attacks from Gaza against Israel. Israel, not surprisingly, responded with attacks of its own. In addition to the political and humanitarian concerns, this matter raises numerous ethical issues.

One issue of concern is that Hamas generally locates its launch sites close to or in civilian areas. As such, Israel runs the risk of killing civilians when it attempts to destroy the launchers. This raises the general issue of launching attacks from within a civilian population.

On the face of it, this tactic seems to be immoral. To use the obvious analogy, if I am involved in a gun fight and I grab a child to use as a human shield, I am acting wrongly. After all, I am intentionally endangering an innocent to protect myself. If the child is hurt or killed, I clearly bear some of the moral blame. While my opponent should not endanger the child, I leave her few options if I keep attacking her while hiding behind the child. Naturally, if I were shooting at her innocent children while using a child as a shield, I would certainly be acting very wrongly indeed.

One possible counter is that the analogy is flawed. In the child example, the child is coerced into serving as a shield. If the civilians support Hamas and freely allow themselves to be used as human shields, then Hamas would not be acting wrongly. To use an analogy, if I am in a gun fight and people volunteer to take bullets for me by acting as human shields, I would seem to be acting in a way that would be morally acceptable. As such, as long as the civilians are not coerced or kept in ignorance (that is, employed as shields by force or fraud), then it would seem that Hamas could be acting in a morally acceptable way.

There is, of course, a rather obvious concern. To go back to the gunfight analogy, suppose my fellows volunteer to serve as human shields while I shoot randomly at my opponent’s friends and family. If my opponent returns fire and hits one of my shields while trying to stop me, it would seem that my opponent would not be acting wrongly. After all, she is not trying to kill my shields—she is trying to stop me from shooting randomly at her friends and family.

This, of course, leads to another point of moral concern: Hamas fires rockets into populated areas as opposed to aiming at military targets. That is, Hamas seems intent on hurting random Israelis. One main argument in defense of Hamas is that the rockets are being fired in retaliation for Israeli wrongdoings. As such, the rockets are intended as retribution for wrongs. In general, punishing people for their misdeeds is morally acceptable and can be argued for in terms of deterrence and retribution. Of course, it must be shown that Israel has done wrong and that the retribution is proportional and justified.

However, the fact that Hamas is shooting rockets that randomly hurt people seems to remove the retribution justification from Hamas’ attack on Israel. After all, punishment is something that should be directed at the guilty party and not randomly inflicted on whoever happens to be at the receiving end of a rocket. To punish the innocent would simply be to commit a crime against them; it would not be an act of justice.

One stock reply is that the people hurt by the rockets are (usually) Israelis and hence they are not innocent. That is, they are fully accountable for whatever wrongs Israel has allegedly committed. However, being a member of a large group seems to be a rather weak basis for justifying such random retribution. To use an analogy, imagine that Professor Sally is fired from her job at Big University so that the president of the university can give her boyfriend Sally’s job. Now suppose that, in revenge, Sally starts randomly slashing the tires of students’ cars and that she defends her actions by pointing out that the students are associated with Big University and hence just targets of her retribution.

On the face of it, Sally’s justification seems absurd: the students are hardly accountable for the doings of the president. Likewise, one might argue, random people are unlikely to be accountable for any alleged misdeeds attributed to Israel.

One obvious counter is that being a citizen comes with moral accountability that would not hold in the case of students. A citizen of a democratic state, it can be argued, is responsible for what is done by her nation. After all, a citizen of a democracy has the right to elect officials and make decisions regarding the actions of the country. So, the rocket attacks could be just retaliation provided that the actions of the Israeli state warranted such retribution.

The obvious reply is that while citizens of a democratic state do bear some responsibility for the actions of their nation, such random attacks fail to take into account important distinctions. To be specific, it seems clear that every citizen does not bear the guilt of every misdeed (or perceived misdeed) of a nation. For example, a random rocket attack could kill an Israeli who opposes violence or it could murder a child. Surely such people do not deserve death, whatever the alleged misdeeds of the country.

Obviously, it could be argued that collective guilt somehow overrides all other normally relevant aspects (such as past actions).  However, the burden of proof seems to be on those who would make this claim.

As such, these random rocket attacks fired from within civilian areas seem to be morally wrong.

Naturally, a similar sort of argument can be applied to any cases in which Israeli attacks kill random people in Gaza, or in which random attacks kill anyone anywhere.

My Amazon Author Page

Brian Leiter – “Should we respect religion?”

In Chapter IV of Why Tolerate Religion? Brian Leiter asks whether/why we should respect religion. The point here is to consider whether religion might merit something more than mere toleration, i.e. putting up with something that you don’t (necessarily) approve of.

At an earlier stage of the book, Leiter has argued that both Kantians and utilitarians have reasons to tolerate religious views and practices that they disapprove of. So far, so good – although Kantian and utilitarian moral theories are controversial, and I’d be looking for a rather different basis for toleration myself (I actually ground it in what I think many people, including many religious people, can see as the point or role of the institution of the state … but let’s skip over that).

Very well, let’s stipulate that there is some moral basis for tolerating religion, particularly in the sense of not bringing organised political power to bear (with fire, swords, police cars, jails, and so on) in an attempt to suppress it, even if we’re talking about a form of religion that we dislike. But Leiter wants to know whether we should be doing more than that, perhaps based on a claim that religion merits respect in some strong sense.

Here he offers what seems to me a useful discussion of respect. He leans on some terminology from Stephen Darwall, distinguishing between recognition respect and appraisal respect. Recognition respect is what I would simply call “respect” – i.e. recognising something’s properties that ought to be taken into account in some way, and moulding your behaviour so that you actually do take them into account in whatever is the appropriate way. Appraisal respect is more like deciding that something is worthy of esteem. (I’ve made a similar distinction many times, without being aware of Darwall’s 1977 article that Leiter refers to. I’m not the only one, as, irrespective of terminology, these different conceptions of respect are frequently discussed in one way or another. In an endnote, Leiter observes that Darwall’s views have changed since the 1977 article, but that need not detain us.)

Let’s all concede that religion has certain properties that we’d better take into account in some way, perhaps by not making it a political issue whether a particular religion ought to be imposed by the power of the state or whether certain religions ought to be suppressed by state power. Thus, we could agree that we ought to give religion recognition respect, which will then make us circumscribe our behaviour in certain ways. These ways might be important if they make the difference between whether or not we live in a society with bloody religious persecutions. All the same, the effect on our behaviour as individuals may be slight. The appropriate level of recognition may not be demanding in how it constrains our behaviour, at least for most of us.

It does not follow that religion per se merits any esteem, or anything similar that might motivate us to treat it with special deference or solicitude. Does religion (again, religion per se, not some particular, especially “nice” religion) merit appraisal respect, i.e., ought we to appraise it as meritorious, worthy of esteem, and so on? I don’t see why, and neither does Leiter. Religion may have its good side, but it also has a dark side. Taken as a whole, it is not obviously something that is worthy of our esteem, or even something that is all to the good.

For Leiter, it follows that there is no requirement, above and beyond his basic argument for toleration, to give religion any special rights. It is in the same boat as other matters of individual conscience, deserving no more (though no less) deference by the state. Although I argue for religious toleration from a different philosophical viewpoint, I think Leiter is clearly right on the basic issue here.

[Pssst my Amazon author’s page, and the link to Freedom of Religion and the Secular State.]

Russell vs. Ryle – A Philosophical Spat

As is well-known, Bertrand Russell wasn’t too keen on the “ordinary language philosophy” that was popular among Oxford philosophers in the middle of the twentieth century. This meant that when the sociologist Ernest Gellner wrote a book, Words and Things (pub: 1959), that was highly critical of the approach, Russell was only too happy to write its Preface.

At this time, the editor of Mind was Gilbert Ryle, a leading exponent of the Oxford approach, and he refused to allow Words and Things to be reviewed in the journal on the grounds that it was abusive and could not therefore be regarded as a serious contribution to academic debate.

This annoyed Russell, who promptly penned a letter to The Times, which resulted in a philosophical spat that played out in the newspaper’s letters pages during November 1959.

I reproduce it below.

Accommodations for religious and family/cultural purposes

I’ve just begun reading Brian Leiter’s new book, Why Tolerate Religion?, about which I’ll doubtless have more to say – here and elsewhere. Meanwhile, I can report that the book is focused on one main topic within the larger field of freedom of religion (and/or secular government). Leiter concentrates on the topic of why we should accommodate religious practices, even if they fall within the terms of prohibitory laws that are religiously neutral and of general applicability.

For those of you who are familiar with my book, Freedom of Religion and the Secular State, Leiter is covering the terrain that I deal with mainly in Chapter 7 (although the issues do come up to an extent elsewhere).

Leiter raises the particular issues that he has in mind by presenting us with the example of a Sikh boy who is required by the canons of conduct of his religion to wear a dagger at all times. Should he be exempt from a generally applicable legal rule, with no religious or anti-religious purpose behind it, that forbids weapons at school? If so, what do we say of a boy of the same age who is required to carry a particular dagger that is a family heirloom: one that has been passed down to him ceremonially as part of a longstanding family custom that is, in turn, well grounded in the local culture? Imagine that Boy A (the Sikh) and his family will suffer about the same amount of emotional distress as Boy B and his family… if they are not exempted from the rule to the necessary extent.

Thus, we assume that the family/cultural custom binding Boy B is very meaningful or emotionally important to Boy B and his family, even though the custom is not enjoined by anything that courts of law would regard as a religion (e.g., the custom is not entangled with beliefs about an otherworldly order, or a transcendent way for human beings to flourish, or ideas of immortality or spiritual salvation, or anything that seems closely analogous to any of these).

Leiter offers a fair bit of detail about the two scenarios to make them seem emotionally about equivalent. Should Boy A be exempt from the rule? Should Boy B be exempt from the rule? Both of them, perhaps? Neither of them?

Leiter hasn’t raised this so far, but who, in a liberal democracy, should decide this issue? The legislature (or someone with delegated authority to create rules with the status of subordinate legislation)? The courts? Someone else?

Please discuss.

[Pssst … my Amazon author page.]

Republicans & “Minorities”

No longer a white elephant? (Photo credit: Wikipedia)

As Bill O’Reilly pointed out, the majority of black & Hispanic voters supported Obama over Romney in the 2012 election. While O’Reilly presented this as a moral failing on the part of blacks and Hispanics (as O’Reilly saw it, they supported Obama because they wanted “stuff”), more practical Republican politicians have taken a different perspective.

To be specific, these politicians are saying that the Republican Party needs to attract these voters and this will require that the party undergo some changes (or at least the appearance of change). This has already led some politicians to say that the party needs to reconsider its stance on immigration so as to win over Hispanic voters. Interestingly, the party had previously professed to have taken a principled stance on this and related issues. However, that was before they lost the election to Obama.

While politicians profess principles and ideologies, these are typically means to the end of being elected rather than actual commitments. That is, politicians profess what they believe will get them elected.

There are, of course, some true believers. However, there are clearly more politicians who are like Romney (who changed his professed views with consistent inconsistency) than like Ron Paul (who is well known for his constancy in belief).

As such, it makes sense that the practical Republicans would begin to change their professed views on the matter of immigration. After all, they believe that doing so will increase their chances of being elected (or re-elected). As might be imagined, it has been pointed out that Hispanics do not care solely about immigration and that merely saying something different about immigration will not be enough to win over voters.

It is also interesting that the main focus is on Hispanics rather than other minorities. However, this is not surprising—Hispanics are a rapidly growing “minority” and, even before the Republicans publicly acknowledged the need to get their vote, they were a coveted demographic for advertisers. Also, as some might point out, it had been assumed that blacks would support Obama and hence little effort was made to woo black voters. This might, however, change.

There has also been an effort to win over women voters and this began before the election. Romney was able to make inroads against Obama’s lead, but Obama did well with single women, making this a demographic that Republicans will need to win over in future elections.

It is, of course, tempting to criticize politicians for doing this. After all, if O’Reilly can criticize voters for supporting Obama because they want “stuff,” it seems very reasonable to criticize politicians for abandoning their professed principles and ideologies simply to get votes. They are not acting on principle—other than the principle that one should do whatever it takes to get elected. When they thought they could win by appealing to white and socially conservative voters, they pandered to them. Now that they have realized that the demographics are not as their narrative told them, they are changing their pandering targets.

In defense of the Republicans who are advocating a change in professed values, it could be argued that they are not merely being cynical and practical politicians. Rather, it could be argued that they are following the principles of democracy and modifying their views in a principled way to match the values of their potential constituents. That is, the Republicans are legitimately undergoing a re-evaluation of their values and assessing them in a principled manner—as opposed to changing their rhetoric to pander to the new demographics so as to get elected.

However, if the Republicans truly change their professed principles on key issues to win over black, Hispanic and women voters, then there is the important question of determining what the party and its members stand for (other than winning elections). Of course, the party could contend that they will still retain their core values while changing what are now the more peripheral values (although these values seemed rather core last time around).

My Amazon Author Page

How can you say that if you’re an error theorist?!

Now and then, when I’m involved in discussion of some question of normative ethics or the like, I’ll get a response along the lines of, “How can you say that when you’re an error theorist?!”

Note that large assumptions are being made here. One is that I am, in fact, an error theorist as that is understood in contemporary metaethics. In fact, I tend to use formulations such as that I think moral error theory “has a point”, or that it’s the standard metaethical position that I think is “closest to the truth”, or that I am “attracted” to moral error theory, etc. What I try to avoid doing, though I don’t say I’ve always succeeded (since it’s often necessary to take conversational shortcuts), is to say, outright, “I am a moral error theorist.”

That’s partly because moral error theory has come to mean something quite specific that is not necessarily what J.L. Mackie advocated in the first place, and I don’t necessarily buy a theory quite that specific even though I agree with 90 per cent of what I read in Mackie’s Ethics: Inventing Right and Wrong. I actually prefer to call myself a moral sceptic (or “skeptic” if you prefer), which is a vaguer term that can cover a range of positions.

The difficulty here is that moral error theory has come to mean the claim that all of our first-order moral judgments (or perhaps just a very large sub-set of our standard kinds of first-order moral judgments) are truth-apt but false. The usual way, though by no means the only way, that this result is derived is to begin with the claim that there are no objectively binding behavioural standards or objectively prescriptive moral properties. This is then combined with a semantic claim that first-order moral judgments purport to refer to such standards or properties. For example, it might be that “Torturing babies is morally wrong” means something like “Torturing babies is prohibited by an objectively binding behavioural standard.” Since no such objectively binding behavioural standards exist, “Torturing babies is morally wrong” turns out to be false – in much the same way that “Samantha is a (real) witch” will always turn out to be false because there are no (real) witches in the requisite sense (i.e., no women with supernatural powers, involvement with the devil, etc.).

Even if we think that there are no objectively binding behavioural standards, in the relevant sense, or objectively prescriptive moral properties, in the relevant sense, it does not follow that moral error theory is true. It would only follow that moral error theory is true if we accepted a moral semantics in which first-order moral judgments purport to refer to such non-existent standards, properties, etc. Perhaps we should accept such a moral semantics, but it might get complicated. And of course there are, notoriously, analyses of moral language that do not require any such semantics – non-cognitivist analyses, moral naturalist analyses, relativist analyses of various kinds, and doubtless others.

It’s also likely, I think, that our moral language is not monolithic and is not simple even in particular cases. For example, some of our moral language, but not all of it, might best be analysed along non-cognitivist lines. Some of it might be best analysed along moral naturalist lines – for example, if I say, “Torturing babies is cruel” I might be saying something that is quite true, and yet this is a moral judgment. Perhaps it combines a factual statement about the painful consequences of torturing babies with an expression of repugnance at the practice and/or a prescription that others avoid it. Moral judgments, particularly “thick” ones, but perhaps not only those, might have mixed content of some kind.

The point that I want to suggest at this stage is that scepticism about objectively binding behavioural standards, objectively prescriptive moral properties, and the like, need not cash out in the belief that first-order moral judgments are simply false, or that all of them are.

This can actually get very messy, and I don’t claim to have got to the bottom of it all. For what it’s worth, I do tend to think that at least some of our first-order moral judgments are, strictly speaking, false, for the sorts of reasons typically advanced by moral error theorists. But that is a long way from accepting moral error theory of the simplistic kind that is usually portrayed in undergraduate philosophy courses or even in philosophy text books.

But let’s assume for the sake of argument that all first-order moral judgments actually are false. Perhaps so! Does it follow that we should give up making such judgments? Not obviously. Take the judgment that torturing babies is morally wrong. If this means that torturing babies is forbidden by an objectively binding behavioural standard, and assuming there are no such standards, then, strictly speaking, the sentence is false. But there may well be – I’m sure there are – true statements in the vicinity.

For example, it might still be true that: “Torturing babies is forbidden by standards that everyone in this conversation accepts.” And/or it might still be true that: “Torturing babies is forbidden by standards that it would be prudent for me to follow as a package, to promote my own long-term self-interest.” Or it might still be true that “Torturing babies is forbidden by standards that it would be prudent for everyone involved in this conversation and everyone else in their societies to follow as a package, in order to produce mutual advantage.” Or it might still be true that “Torturing babies is forbidden by standards that I try to follow and invite you to follow.” And so on.

I’m not suggesting that “Torturing babies is morally wrong” means any of the things in the previous paragraph, though we could probably find theorists who would defend one or other of these meanings. Nonetheless, there may be a causal story as to why we make the moral judgments that we do, involving the truth of some of these and related propositions, even though the statements that we make when we make moral judgments are, strictly speaking, false. We actually do, for example, have moral standards, these are largely shared, and they are not entirely arbitrary. They may not be objectively binding on us, but they may well have personal (for long-term self-interest) and social benefit.

Moral error theorists don’t have to deny any of this. In which case, it’s not obvious that moral error theorists should advocate abolishing language such as “Torturing babies is morally wrong.”

Even if I were a full-blown textbook moral error theorist, with no misgivings about the theory at all, I might think that there is utility in continuing to employ this kind of language, even in my own self-talk, thereby buying into a useful fiction that torturing babies is forbidden by an objectively binding standard (not merely a personally or socially beneficial one).

Or I might think that there is benefit in going on using such language while having in mind something more like “Torturing babies is wrong by a standard that I accept and invite you to accept, and which I think you probably have good reasons to accept given your own values.” If a moral error theorist is open, in appropriate contexts, about the fact that this is what she has in mind, she might think that the meaning of such sentences will ultimately be revised – people generally, or at least those she is likely to be talking to, will eventually come to use the language in this revisionary way. After all, she might think, the real point (in some sense) of first-order moral language is to make judgments based on standards that are personally and socially beneficial, and it is not strictly necessary for us to rationalise these standards as also being objectively binding.

All that said, some moral error theorists – moral abolitionists – actually do think it is more beneficial to give up on making moral judgments, once we see through them, as it were. There might also be a partial abolitionist position that suggests that we stop making some kinds of moral judgments but not others – the complexities of moral semantics and our social situations might support some nuanced approach along these lines.

In the upshot, scepticism about such things as objectively binding moral standards (a scepticism that I definitely share) goes only part of the way toward moral abolitionism: advocacy of the total abolition of moral judgments. In my own case, I am certainly aware of these issues when writing about how we should behave, what dispositions of character are virtuous or vicious, etc., and the language that I use is, indeed, moulded to an extent by my tentative views about the issues I’ve discussed in this post. I engage, I suppose, in a mix of revisionism and partial abolitionism.

The point is simply that even a moral sceptic – indeed, even a textbook moral error theorist – can have plenty of reasons not to abandon moral talk entirely. To assume otherwise is to skate over a host of complex and controversial issues.