Category Archives: Philosophy

Divisive Obama

One of the relentless talking points of conservative pundits and many Republicans is that Obama is divisive. Perhaps even the most divisive president in American history. It is, in fact, a common practice to engage in a point-by-point analysis of Obama’s alleged divisiveness. As should be expected, supporters of Obama deny that he is divisive; or at least claim he is not the most divisive president.

It is almost certainly pointless to try to argue about the issue of whether Obama is divisive or not. Since this is a matter of political identity, the vast majority of people cannot be influenced by any amount of evidence or argumentation against their position. However, one of the purposes of philosophy is the rational assessment of beliefs even when doing so will convince no one to change their views. That said, this endeavor is not pointless: while I do not expect to change any hearts (for this is a matter of feeling and not reason) it is still worthwhile to advance our understanding of divisiveness and accusations about it.

Since analogies are often useful for enhancing understanding, I will make a comparison with fright. This requires a story from my own past. When I was in high school, our English teacher suggested a class trip to Europe. As with just about anything involving education, fundraising was necessary and this included what amounted to begging (with permission) at the local Shop N’ Save grocery store. As beggars, we worked in teams of two and I was paired up with Gopal. When the teacher found out about this (and our failure to secure much, if any, cash) she was horrified: we were frightening the old people; hence they were not inclined to even approach us, let alone donate to send us to Europe. As I recall, she said the old folks saw us as “thugs.”

I have no reason to doubt that some of the old folks were, in fact, frightened of us. As such, it is true that we were frightening. The same can be said about Obama: it is obviously true that many people see him as divisive and thus he is divisive. This is also analogous to being offensive: if a person is offended by, for example, a person’s Christian faith or her heterosexuality, then those things are offensive. To use another analogy, if a Christian is hired into a philosophy department composed mainly of devout atheists and they dislike her for her faith and it causes trouble in the department, then she is divisive. After all, the department would not be divided but for her being Christian.

While it is tempting to leave it at this, there seems to be more to the charge of divisiveness than a mere assertion about how other people respond to a person. After all, when Obama is accused of being divisive, the flaw is supposed to lie with Obama—he is condemned for this. As such, the charge of divisiveness involves placing blame on the divider. This leads to the obvious question of whether or not the response is justified.

Turning back to my perceived thuggery at Shop N’ Save, while it was true that Gopal and I frightened some old people, the question is whether or not they were justified in their fear. I would say not, but since I am biased in my own favor I need to support this claim. While Gopal and I were both young men (and thus a source of fear to some), we were hardly thugs. In fact, we were hardcore nerds: we played Advanced Dungeons & Dragons, we were on the debate team, and we did the nerdiest of sports—track. For teenagers, we were polite and well behaved. We were certainly not inclined to engage in any thuggery towards older folks in the grocery store. As such, the fear was unwarranted. In fairness, the old people might not have known this.

In the case of Obama, the question is whether or not his alleged divisiveness has a foundation. This would involve assessing his words and deeds to determine whether an objective observer would regard them as divisive. In this case, divisive words and deeds would be such that initially neutral and unbiased Americans would be moved apart and inclined to regard each other with hostility.

There is, of course, an almost insurmountable obstacle here: those who regard Obama as divisive will perceive his words and deeds as having these qualities and will insist that a truly objective observer would see things as they do. His supporters will, of course, contend the opposite. While Obama has spoken more honestly and openly about such subjects as race than past presidents, his words and deeds do not seem to be such that a neutral person would be turned against other Americans on their basis. He does not, for example, make sweeping and hateful claims based on race and religion. Naturally, those who think Obama is divisive will think I am merely expressing my alleged liberal biases, while they regard themselves as gazing upon his divisiveness via the illumination of the light of pure truth. Should Trump win in 2016, the Democrats will certainly accuse him of being divisive—and his supporters will insist that he is a uniter and not a divider.

While the question of whether a charge of divisiveness is well founded is a matter of concern, there is also the matter of intent. It is to this I now turn.

Continuing the analogy, a person could have qualities that frighten others and legitimately do so; yet the person might have no intention of creating such fear. For example, a person might not understand social rules about how close he should get to other people and when he can and cannot touch others. His behavior might thus scare people, but acting from ignorance rather than malice, he has no intention to scare others—in fact, he might intend quite the opposite. Such a person could be blamed for the fear he creates to the degree that he should know better, but intent would certainly matter. After all, to frighten through ignorance is rather different from intentionally frightening people.

The same can be true of divisiveness: a person might divide in ignorance and perhaps do so while attempting to bring about greater unity. If the divisive person does not intend to be divisive, then the appropriate response would be (to borrow from Socrates) to take the person aside and assist them in correcting their behavior. If a person intends to be divisive, then they would deserve blame for whatever success they achieve and whatever harm they cause. While intent can be difficult to establish (since the minds of others are inaccessible), consideration of what a person does can go a long way in making this determination.

In the case of Obama, his intent does not seem to be to divide Americans. Naturally, those who think Obama is divisive will tend to also accept that he is an intentional divider (rather than an accidental divider) and will attribute nefarious motives to him. Those who support him will do the opposite. There is, of course, almost no possibility of reason and evidence changing the minds of the committed about this matter. However, it is certainly worth the effort to consider the evidence, or lack of evidence, for the claim that Obama is an intentional divider. I do not believe that he is the most divisive president ever, or even particularly divisive in a sense that is blameworthy. It is true that some disagree with him and dislike him; but it is their choice to expand the divide rather than close it. It is like a person who runs away, all the while insisting the other person is the one to blame for the growing distance.

In closing, what I have written will change no minds—those who think Obama is divisive still think that. Those who think otherwise still think as they did before. This is, after all, a matter of how people feel rather than a matter of reason.


My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Third Parties & Voting for the Lesser Evil

I, along with some other philosophers, was recently interviewed about voting for an article by Olivia Goldhill of Quartz. While I certainly stand by what I said, interviews do have inherent problems. One common problem is the lack of depth. In some cases, this is due to the interview being short. For the Quartz piece, I spoke to the author for about five minutes. In other cases, the interview might be longer, but the content must be slashed down to fit in a limited amount of time or space. An interview I did about D&D alignments and the real world was about thirty minutes long; but only a few minutes were used in the final broadcast. Another problem is that material aimed at the general public typically has to be simplified. This is because most people are not experts on the subject at hand. As such, I need to expand a bit on my quote in the article.

After briefly discussing the difference between deontological and utilitarian approaches to voting, I presented my soundbite view of the issue:

“As a citizen, I have a duty to others because it’s not just me and my principles, but everybody. I have to consider how what I do will impact other people. For example, if I was a die-hard Bernie supporter, I might say my principles tell me to vote for Bernie. But I’m not going to let my principles condemn other people to suffering.”

Interestingly enough, my position can be taken as either a deontological approach or a utilitarian approach. For the deontologist, an action is right or wrong in and of itself—the consequences are not what matter morally. For the utilitarian, the morality of an action is determined by its consequences. Looked at from a deontological perspective, acting on a duty to the general good would be the right thing to do. The fact that doing so would have good consequences is not what makes the action good. From the utilitarian perspective, the foundation of my duty would be utility: I should do what brings about the greatest good for the greatest number.

In the upcoming election, I intend to follow my principle. While I voted for Sanders in the primary and prefer him over Hillary, I think that a Trump presidency would be vastly worse for the country as a whole than another Clinton presidency. Hillary, as I see her, is essentially a 1990s moderate Republican with a modern liberal paint job. As such, she can be counted on as a competent business-as-usual politician who will march along with the majority of the population in regards to social policy (such as same-sex marriage and gun regulation). Trump has no experience in office and I have no real idea what he would do as president. As such, I am taking the classic approach of choosing the lesser evil and the devil I know. If I were voting for the greater evil, Cthulhu would have my vote.

It might be objected that my approach is flawed. After all, if a person votes based on a rational assessment of the impact of an election on everyone, then she could end up voting against her own self-interest. What a person should do, it could be argued, is consider the matter selfishly—to vote based on what is in her interest regardless of the general good.

This approach does have considerable appeal and is based on an established moral philosophy, known as ethical egoism. This is the view that a person should always take the action that maximizes her self-interest. Roughly put, for the ethical egoist, she is the only one with moral value. The opposing moral view is altruism; the view that other people count morally. Ayn Rand is probably the best known proponent of ethical egoism and the virtue of selfishness. This ideology has also been embraced by Paul Ryan and explicitly by many in the American Tea Party.

While supporters of selfishness claim that the collective result of individual selfishness will be the general good (a view advanced by Adam Smith), history and reason show the opposite. Everyone being selfish has exactly the result one would suspect—most people are worse off than they would be if people were more altruistic. To use an analogy, everyone being cruel does not make the world a kinder place. More people being kind makes it a kinder place.

This is not to say that people should not consider their interests, just that they should also consider the interests of others. This is, after all, what makes civilization possible. Pure selfishness without regulation, as Hobbes argued, is the state of nature and the state of war—which is not in anyone’s interest.

It can also be objected that my approach is flawed because it perpetuates the two-party lockdown of the American political system. While many people are unaware of this, there are many third party candidates running in 2016. Perhaps the best known is libertarian Gary Johnson. He received 1% of the popular vote in 2012 and is polling in the double digits in some polls. It is all but certain that he will not win, thus a vote for Johnson merely helps either Trump or Hillary get elected (depending on whether the person would have otherwise voted for one of them). Nader’s ill-fated bid for president enabled Bush to win the election, something that is often regarded as a disaster (but, to be fair, Al Gore might have done worse). While voting for a third party candidate can be seen as, at best, throwing away one’s vote, a case can be made for voting this way.

Like the approach I took in the interview, the argument for voting third party can be based on utilitarian considerations (one can also make a deontological argument based on the notion of a duty to vote one’s conscience). The difference is that the vote for the third party would be justified by the hope of long term consequences. To be specific, the justification would be that voting for a third party candidate could allow the greater evil to win this election. And the next election. And probably several more elections after that. But, eventually, the lockdown on politics by Democrats and Republicans could be broken by a viable third party. If the third party is likely to be better than the Democrats or Republicans, then this could be a good utilitarian argument. It could also be a good argument if having a viable third party merely improved things for the population. The deciding factor would be whether or not the positive consequences of eventually getting a viable third party would be worth the cost of getting there. Naturally, the likelihood of viability is also a factor.

I am split on this issue. On the one hand, there seems to be a good reason to stick with voting for the lesser evil, namely the fact that third party viability is quite a gamble. There is also the concern about whether any third party candidate is better than a lesser evil. On the other hand, voting for the lesser evil does lock us in the two party system and this could prove more damaging than allowing the greater evil to win numerous times on the way towards having a viable third party.


Put a bit simply, a silencer is a device attached to a gun for the purpose of suppressing the sound it makes. This is usually done to avoid drawing attention to the shooter. This makes an excellent analogy for what happens to proposals for gun regulation: the sound is quickly suppressed so as to ensure that attention moves on to something new.

Part of this suppression is deliberate. After each mass shooting, the NRA and other similar groups step up pressure on the politicians they influence to ensure that new regulations are delayed, defeated or defanged. While it is tempting to cast the NRA as a nefarious player that subverts democracy, the truth seems to be that the NRA has mastered the democratic process: it organizes and guides very motivated citizens to give money (which is used to lobby politicians) and to contact their representatives in the government. This has proven vastly more effective than protests, sit-ins and drum circles. While it is true that the NRA represents but a fraction of the population, politics is rather like any sport: you have to participate to win. While most citizens do not even bother to vote, NRA member turnout is apparently quite good—thus they gain influence by voting. This is, of course, democracy. Naturally, another tale could be told of the NRA and its power and influence. A tale that presents the NRA and its members as subverting the will of the majority.

Certain pundits and politicians also engage in suppression. One standard tactic is, after a shooting, to claim that it is “too soon” to engage in discussion and lawmaking. Rather, the appropriate response involves moments of silence and prayer. While it is appropriate to pay respects to the wounded and dead, there is a difference between doing this and trying to run out the clock with this delaying tactic. Those that use it know quite well that if the discussion can be delayed, interest will fade and along with it the chances of any action being taken.

It is, in fact, appropriate to take action as soon as possible. To use the obvious analogy, if a fire is ravaging through a neighborhood, then the time to put out that fire is now. This way there will be less need of moments of silence and prayers for victims.

Another stock tactic is to accuse those proposing gun regulation of playing politics and exploiting the tragedy for political points or to advance an agenda. This approach can have some moral merit—if a person is engaged in a Machiavellian exploitation of some awful event (be it a mass shooting, a terrorist attack or a wave of food poisoning) without any real concern for the suffering of others, then that person would be morally awful. That said, the person could still be acting rightly, albeit for all the wrong reasons. This would be in terms of the consequences, which could be quite good despite the problematic motivations. For example, if a politician cynically exploited the harm inflicted by lead contaminated water in order to gain national attention, then that person would hardly be a good person. However, if this resulted in changes that significantly reduced lead poisoning in the United States, then consequences would certainly seem good and desirable.

It is also worth considering that using an awful event to motivate change for the better could result from laudable motives and a recognition of how human psychology generally works. To use an analogy, a person who loves someone who just suffered from a lifestyle inflicted heart attack could use that event to get the person to change her lifestyle and do so for commendable reasons. After all, people are most likely to do something when an awful event is fresh in their minds; hence this is actually the ideal time to address a problem—which leads to the final part of the discussion.

Although active suppression can be an effective tactic, it often relies on the fact that interest in a matter fades as time passes—this is why those opposed to new gun regulation use delaying tactics. They know that public attention will shift and fade.

On the one hand, the human tendency to lose interest can be regarded as a bad thing. As Merlin said in Excalibur, “for it is the doom of men that they forget.” In the case of mass shootings and gun violence, people quickly forget an incident—at least until another incident reminds them. This allows a problem to persist and is why action needs to be taken as soon as possible.

On the other hand, our forgetting is often our salvation. If the memory of fear and pain did not fade over time, they would be as wounds that did not heal. Just as a person would bleed to death physically from wounds that never healed, a person would bleed out emotionally if memory did not fade.

To use another analogy, if the mind is like a ship and memory is like a cargo, just as a ship that could never lighten its load would plunge to the ocean floor, a person that could never lighten her emotional load would be dragged into the great abyss of emotions and thus be ruined. Thus, forgetting is both our doom and our salvation. Of course, we would have far less need to forget if we remembered what we need to fix. And fixed it.


Modern Philosophy

Here is a (mostly) complete course in Modern Philosophy.

Notes & Readings

Modern Readings SP 2014

Modern Notes SP 2014

Modern Philosophy Part One (Hobbes & Descartes)

#1 This is the unedited video from the 1/7/2016 Modern class. It covers the syllabus and some of the historical background for the Modern era.

#2 This is the unedited video from the 1/12/2016 Modern philosophy class. It concludes the background for the modern era and the start of argument basics.

#3 This is the unedited video from the 1/14/2016 modern philosophy class. It covers the analogical argument, the argument by example, the argument from authority, appeal to intuition, and the background for Thomas Hobbes.

#4 This is the unedited video from the 1/19/2016 Modern Philosophy class. It covers Thomas Hobbes.

#5 This is the unedited video from the 1/21/2016 Modern Philosophy class. It covers Descartes’ first meditation as well as the paper for the class.

#6 This is the unedited video from the 1/26/2016 Modern class. It covers Descartes’ Meditations II & III.

#7 This is the unedited video from the 1/28/2016 Modern Philosophy course. It covers Descartes’ Meditations 4-6 and more about Descartes.

Modern Philosophy Part Two (Spinoza & Leibniz)

#8 This is the unedited video from the 2/2/2016 Modern Philosophy class. It covers the start of Spinoza’s philosophy. It could not be otherwise.

#9 No Video

#10 This is the unedited video from the 2/9/2016 Modern Philosophy class. It covers Spinoza.

#11 This is the unedited video from the 2/11/2016 Modern Philosophy class. It covers the end of Spinoza and the start of Leibniz.

#12 This is the unedited video from the 2/16/2016 Modern philosophy class. It covers Leibniz.

#13 This is the unedited video from the 2/18/2016 Modern philosophy class. It covers Leibniz addressing the problem of evil and the start of monads.

#14 This is the unedited video from the 2/23/2016 Modern philosophy class. It covers Leibniz’s monads, pre-established harmony and the city of God.

#15 This is the unedited video from the 2/25/2016 Modern philosophy class. It covers the end of Leibniz and the start of the background for the Enlightenment.

Modern Philosophy Part Three (Locke & Berkeley)

#16 This is the unedited video from the 3/1/2016 Modern Philosophy class. It finishes the Enlightenment background and covers the start of John Locke.

#17 This is the unedited video from the 3/3/2016 Modern Philosophy class. It covers John Locke’s epistemology.

#18 This is the unedited video from the 3/15/2016 Modern Philosophy class. It includes a recap of Locke’s reply to skepticism and the start of his theory of personal identity.

#19 No Video

#20 This is the unedited video from the 3/22/2016 Modern Philosophy class. It covers Locke’s political philosophy.

#21 This is the unedited video from the 3/29/2016 Modern Philosophy class. It covers the first part of George Berkeley’s immaterialism.

#22 This unedited video is from the 3/31/2016 Modern Philosophy class. It covers the final part of Berkeley, including his arguments for God as well as the classic problems with his theory.

Modern Philosophy Part Four (Hume & Kant)

#23 This is the unedited video from the 4/5/2016 Modern Philosophy class. It covers the introduction to David Hume and his theory of necessary connections.

#24 This is the unedited video from the 4/7/2016 Modern philosophy class. It covers Hume’s skepticism regarding the senses.

#25 This is the unedited video from the 4/12/2016 Modern Philosophy class. It covers David Hume’s theory of personal identity, ethical theory and theory of religion.

#26 This is the unedited video from the 4/19/2016 Modern Philosophy class. It covers Kant’s philosophy.

#27 This is the unedited video from the 4/19/2016 Modern class. It covers Kant’s epistemology and metaphysics.

#28 This is the unedited video from the 4/21/2016 Modern Philosophy class. It covers Kant’s antinomies, God, and the categorical imperative.


Denmark’s Refugee “Fee”

In January 2016, Denmark passed a law under which refugees who enter the country with assets greater than about US $1,450 will have their valuables taken in order to help pay for the cost of their being in the country. In response to international criticism, Denmark modified the law to allow refugees to keep items of sentimental value, such as wedding rings. This matter is certainly one of moral concern.

Critics have been quick to deploy a Nazi analogy, likening this policy to how the Nazis stole the valuables of those they sent to the concentration camps. While taking from refugees does seem morally problematic, the Nazi analogy does not really stick—there are too many relevant differences between the situations. Most importantly, the Danes would be caring for the refugees rather than murdering them. There is also the fact that the refugees are voluntarily going to Denmark rather than being rounded up, robbed, imprisoned and murdered. While the Danes have clearly not gone full Nazi, there are still grounds for moral criticism. However, I will endeavor to provide a short defense of the law—a rational consideration requires at least considering the pro side of the argument.

The main motivation of the law seems to be to deter refugees from coming to Denmark. This is a strategy of making their country less appealing than other countries in the hopes that refugees will go somewhere else and be someone else’s burden. Countries, like individuals, do seem to have the right to make themselves less appealing.  While this sort of approach is certainly not morally commendable, it does not seem to be morally wrong. After all, the Danes are not simply banning refugees but trying to provide a financial disincentive. Somewhat ironically, the law would not deter the poorest of refugees. It would only deter those who have enough property to make losing it a worthwhile deterrent.

The main moral argument in favor of the law is based on the principle that people should help pay for the cost of their upkeep to at least the degree they can afford to do so. To use an analogy, if people show up at my house and ask to live with me and eat my food, it would certainly be fair of me to expect them to at least chip in for the costs of the utilities and food. After all, I do not get my utilities and food for free. This argument does have considerable appeal, but can be countered.

One counter to the argument is based on the fact that the refugees are fleeing a disaster. Going back to the house analogy, if survivors of a disaster showed up at my door asking for a place to stay until they could get back on their feet, taking their few remaining possessions to offset the cost of their food and shelter would seem to be cruel and heartless. They have lost so much already, and to take what little remains to them would add insult to injury. To use another analogy, it would be like a rescue crew stripping people of their valuables to help pay for the rescue. While rescues are expensive, such a practice certainly would seem awful.

One counter is that refugees who are well off should pay for what they receive. After all, if relatively well-off people showed up at my door asking for food and shelter, it would not seem wrong of me to expect that they contribute to the cost of things. After all, if they can afford it, then they have no grounds to claim a free ride off me. Likewise for well-off refugees. That said, the law does not actually address this point, unless having more than $1,450 counts as being well off.

Another point of consideration is that it is one thing to have people pay for lodging and food with money they have; quite another to take a person’s remaining worldly possessions. It seems like a form of robbery, using whatever threat drove the refugees from home as the weapon. The obvious reply is that the refugees would be choosing to go to Denmark; they could go to a more generous country. The problem is, however, that refugees might soon have little choice about where they go.


Against accommodationism: How science undermines religion

There is currently a fashion for religion/science accommodationism, the idea that there’s room for religious faith within a scientifically informed understanding of the world.

Accommodationism of this kind gains endorsement even from official science organizations such as, in the United States, the National Academy of Sciences and the American Association for the Advancement of Science. But how well does it withstand scrutiny?

Not too well, according to a new book by distinguished biologist Jerry A. Coyne.

Gould’s magisteria

The most famous, or notorious, rationale for accommodationism was provided by the celebrity palaeontologist Stephen Jay Gould in his 1999 book Rocks of Ages. Gould argues that religion and science possess separate and non-overlapping “magisteria”, or domains of teaching authority, and so they can never come into conflict unless one or the other oversteps its domain’s boundaries.

If we accept the principle of Non-Overlapping Magisteria (NOMA), the magisterium of science relates to “the factual construction of nature”. By contrast, religion has teaching authority in respect of “ultimate meaning and moral value” or “moral issues about the value and meaning of life”.

On this account, religion and science do not overlap, and religion is invulnerable to scientific criticism. Importantly, however, this is because Gould is ruling out many religious claims as being illegitimate from the outset even as religious doctrine. Thus, he does not attack the fundamentalist Christian belief in a young earth merely on the basis that it is incorrect in the light of established scientific knowledge (although it clearly is!). He claims, though with little real argument, that it is illegitimate in principle to hold religious beliefs about matters of empirical fact concerning the space-time world: these simply fall outside the teaching authority of religion.

I hope it’s clear that Gould’s manifesto makes an extraordinarily strong claim about religion’s limited role. Certainly, most actual religions have implicitly disagreed.

The category of “religion” has been defined and explained in numerous ways by philosophers, anthropologists, sociologists, and others with an academic or practical interest. There is much controversy and disagreement. All the same, we can observe that religions have typically been somewhat encyclopedic, or comprehensive, explanatory systems.

Religions usually come complete with ritual observances and standards of conduct, but they are more than mere systems of ritual and morality. They typically make sense of human experience in terms of a transcendent dimension to human life and well-being. Religions relate these to supernatural beings, forces, and the like. But religions also make claims about humanity’s place – usually a strikingly exceptional and significant one – in the space-time universe.

It would be naïve or even dishonest to imagine that this somehow lies outside of religion’s historical role. While Gould wants to avoid conflict, he creates a new source for it, since the principle of NOMA is itself contrary to the teachings of most historical religions. At any rate, leaving aside any other, or more detailed, criticisms of the NOMA principle, there is ample opportunity for religion(s) to overlap with science and come into conflict with it.

Coyne on religion and science

The genuine conflict between religion and science is the theme of Jerry Coyne’s Faith versus Fact: Why Science and Religion are Incompatible (Viking, 2015). This book’s appearance was long anticipated; it’s a publishing event that prompts reflection.

In pushing back against accommodationism, Coyne portrays religion and science as “engaged in a kind of war: a war for understanding, a war about whether we should have good reasons for what we accept as true.” Note, however, that he is concerned with theistic religions that include a personal God who is involved in history. (He is not, for example, dealing with Confucianism, pantheism or austere forms of philosophical deism that postulate a distant, non-interfering God.)

Accommodationism is fashionable, but that has less to do with its intellectual merits than with widespread solicitude toward religion. There are, furthermore, reasons why scientists in the USA (in particular) find it politically expedient to avoid endorsing any “conflict model” of the relationship between religion and science. Even if they are not religious themselves, many scientists welcome the NOMA principle as a tolerable compromise.

Some accommodationists argue for one or another very weak thesis: for example, that this or that finding of science (or perhaps our scientific knowledge base as a whole) does not logically rule out the existence of God (or the truth of specific doctrines such as Jesus of Nazareth’s resurrection from the dead). For example, it is logically possible that current evolutionary theory and a traditional kind of monotheism are both true.

But even if we accept such abstract theses, where does it get us? After all, the following may both be true:

1. There is no strict logical inconsistency between the essentials of current evolutionary theory and the existence of a traditional sort of Creator-God.


2. Properly understood, current evolutionary theory nonetheless tends to make Christianity as a whole less plausible to a reasonable person.

If 1. and 2. are both true, it’s seriously misleading to talk about religion (specifically Christianity) and science as simply “compatible”, as if science – evolutionary theory in this example – has no rational tendency at all to produce religious doubt. In fact, the cumulative effect of modern science (not least, but not solely, evolutionary theory) has been to make religion far less plausible to well-informed people who employ reasonable standards of evidence.

For his part, Coyne makes clear that he is not talking about a strict logical inconsistency. Rather, incompatibility arises from the radically different methods used by science and religion to seek knowledge and assess truth claims. As a result, purported knowledge obtained from distinctively religious sources (holy books, church traditions, and so on) ends up being at odds with knowledge grounded in science.

Religious doctrines change, of course, as they are subjected over time to various pressures. Faith versus Fact includes a useful account of how they are often altered for reasons of mere expediency. One striking example is the decision by the Mormons (as recently as the 1970s) to admit blacks into their priesthood. This was rationalised as a new revelation from God, which raises an obvious question as to why God didn’t know from the start (and convey to his worshippers at an early time) that racial discrimination in the priesthood was wrong.

It is, of course, true that a system of religious beliefs can be modified in response to scientific discoveries. In principle, therefore, any direct logical contradictions between a specified religion and the discoveries of science can be removed as they arise and are identified. As I’ve elaborated elsewhere (e.g., in Freedom of Religion and the Secular State (2012)), religions have seemingly endless resources to avoid outright falsification. In the extreme, almost all of a religion’s stories and doctrines could gradually be reinterpreted as metaphors, moral exhortations, resonant but non-literal cultural myths, and the like, leaving nothing to contradict any facts uncovered by science.

In practice, though, there are usually problems when a particular religion adjusts. Depending on the circumstances, a process of theological adjustment can meet with internal resistance, splintering and mutual anathemas. It can lead to disillusionment and bitterness among the faithful. The theological system as a whole may eventually come to look very different from its original form; it may lose its original integrity and much of what once made it attractive.

All forms of Christianity – Catholic, Protestant, and otherwise – have had to respond to these practical problems when confronted by science and modernity.

Coyne emphasizes, I think correctly, that the all-too-common refusal by religious thinkers to accept anything as undercutting their claims has a downside for believability. To a neutral outsider, or even to an insider who is susceptible to theological doubts, persistent tactics to avoid falsification will appear suspiciously ad hoc.

To an outsider, or to anyone with doubts, those tactics will suggest that religious thinkers are not engaged in an honest search for truth. Rather, they are preserving their favoured belief systems through dogmatism and contrivance.

How science subverted religion

In principle, as Coyne also points out, the important differences in methodology between religion and science might (in a sense) not have mattered. That is, it could have turned out that the methods of religion, or at least those of the true religion, gave the same results as science. Why didn’t they?

Let’s explore this further. The following few paragraphs are my analysis, drawing on earlier publications, but I believe they’re consistent with Coyne’s approach. (Compare also Susan Haack’s non-accommodationist analysis in her 2007 book, Defending Science – within Reason.)

At the dawn of modern science in Europe – back in the sixteenth and seventeenth centuries – religious worldviews prevailed without serious competition. In such an environment, it should have been expected that honest and rigorous investigation of the natural world would confirm claims that were already found in the holy scriptures and church traditions. If the true religion’s founders had genuinely received knowledge from superior beings such as God or angels, the true religion should have been, in a sense, ahead of science.

There might, accordingly, have been a process through history by which claims about the world made by the true religion (presumably some variety of Christianity) were successively confirmed. The process might, for example, have shown that our planet is only six thousand years old (give or take a little), as implied by the biblical genealogies. It might have identified a global extinction event – just a few thousand years ago – resulting from a worldwide cataclysmic flood. Science could, of course, have added many new details over time, but not anything inconsistent with pre-existing knowledge from religious sources.

Unfortunately for the credibility of religious doctrine, nothing like this turned out to be the case. Instead, as more and more evidence was obtained about the world’s actual structures and causal mechanisms, earlier explanations of the appearances were superseded. As science advances historically, it increasingly reveals religion as premature in its attempts at understanding the world around us.

As a consequence, religion’s claims to intellectual authority have become less and less rationally believable. Science has done much to disenchant the world – once seen as full of spiritual beings and powers – and to expose the pretensions of priests, prophets, religious traditions, and holy books. It has provided an alternative, if incomplete and provisional, image of the world, and has rendered much of religion anomalous or irrelevant.

By now, the balance of evidence has turned decisively against any explanatory role for beings such as gods, ghosts, angels, and demons, and in favour of an atheistic philosophical naturalism. Regardless of what other factors were involved, the consolidation and success of science played a crucial role in this. In short, science has shown a historical, psychological, and rational tendency to undermine religious faith.

Not only the sciences!

I need to add that the damage to religion’s authority has come not only from the sciences, narrowly construed, such as evolutionary biology. It has also come from work in what we usually regard as the humanities. Christianity and other theistic religions have especially been challenged by the efforts of historians, archaeologists, and academic biblical scholars.

Those efforts have cast doubt on the provenance and reliability of the holy books. They have implied that many key events in religious accounts of history never took place, and they’ve left much traditional theology in ruins. In the upshot, the sciences have undermined religion in recent centuries – but so have the humanities.

Coyne would not tend to express it that way, since he favours a concept of “science broadly construed”. He elaborates this as: “the same combination of doubt, reason, and empirical testing used by professional scientists.” On his approach, history (at least in its less speculative modes) and archaeology are among the branches of “science” that have refuted many traditional religious claims with empirical content.

But what is science? Like most contemporary scientists and philosophers, Coyne emphasizes that there is no single process that constitutes “the scientific method”. Hypothetico-deductive reasoning is, admittedly, very important to science. That is, scientists frequently make conjectures (or propose hypotheses) about unseen causal mechanisms, deduce what further observations could be expected if their hypotheses are true, then test to see what is actually observed. However, the process can be untidy. For example, much systematic observation may be needed before meaningful hypotheses can be developed. The precise nature and role of conjecture and testing will vary considerably among scientific fields.

Likewise, experiments are important to science, but not to all of its disciplines and sub-disciplines. Fortunately, experiments are not the only way to test hypotheses (for example, we can sometimes search for traces of past events). Quantification is also important… but not always.

However, Coyne says, a combination of reason, logic and observation will always be involved in scientific investigation. In particular, some kind of testing, whether by experiment or observation, is needed to filter out non-viable hypotheses.

If we take this sort of flexible and realistic approach to the nature of science, the line between the sciences and the humanities becomes blurred. Though they tend to be less mathematical and experimental, for example, and are more likely to involve mastery of languages and other human systems of meaning, the humanities can also be “scientific” in a broad way. (From another viewpoint, of course, the modern-day sciences, and to some extent the humanities, can be seen as branches from the tree of Greek philosophy.)

It follows that I don’t terribly mind Coyne’s expansive understanding of science. If the English language eventually evolves in the direction of employing his construal, nothing serious is lost. In that case, we might need some new terminology – “the cultural sciences” anyone? – but that seems fairly innocuous. We already talk about “the social sciences” and “political science”.

For now, I prefer to avoid confusion by saying that the sciences and humanities are continuous with each other, forming a unity of knowledge. With that terminological point under our belts, we can then state that both the sciences and the humanities have undermined religion during the modern era. I expect they’ll go on doing so.

A valuable contribution

In challenging the undeserved hegemony of religion/science accommodationism, Coyne has written a book that is notably erudite without being dauntingly technical. The style is clear, and the arguments should be understandable and persuasive to a general audience. The tone is rather moderate and thoughtful, though opponents will inevitably cast it as far more polemical and “strident” than it really is. This seems to be the fate of any popular book, no matter how mild-mannered, that is critical of religion.

Coyne displays a light touch, even while drawing on his deep involvement in scientific practice (not to mention a rather deep immersion in the history and detail of Christian theology). He writes, in fact, with such seeming simplicity that it can sometimes be a jolt to recognize that he’s making subtle philosophical, theological, and scientific points.

In that sense, Faith versus Fact testifies to a worthwhile literary ideal. If an author works at it hard enough, even difficult concepts and arguments can usually be made digestible. It won’t work out in every case, but this is one where it does. That’s all the more reason why Faith versus Fact merits a wide readership. It’s a valuable, accessible contribution to a vital debate.

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

Yoga & Cultural Appropriation

Homo sum, humani nihil a me alienum puto.


In the fall of 2015, a free yoga class at the University of Ottawa was suspended out of concern that it might be an act of cultural appropriation. Staff at the Centre for Students with Disabilities, where the class was offered, made this decision on the basis of a complaint.  A Centre official noted that many cultures, including the culture from which yoga originated, “have experienced oppression, cultural genocide and diasporas due to colonialism and western supremacy … we need to be mindful of this and how we express ourselves while practising yoga.”  In response, there was an attempt to “rebrand” the class as “mindful stretching.” Due to issues regarding a French translation of the phrase, the rebranding failed and the class was suspended.

When I first heard about this story, I inferred it was satire on the part of The Onion because it seemed to be an absurd lampooning of political correctness. It turned out that it was real, but still absurd. But, as absurdities sometimes do, it does provide an interesting context for discussing a serious subject—in this case that of cultural appropriation.

The concept of cultural appropriation is somewhat controversial, but the basic idea is fairly simple. In general terms, cultural appropriation takes place when a dominant culture takes (“appropriates”) from a marginalized culture for morally problematic reasons. For example, white college students have been accused of cultural appropriation (and worse) when they have made mocking use of aspects of black culture for theme parties. Some on the left (or “the politically correct” as they are called by their detractors) regard cultural appropriation as morally wrong. Some on the right think the idea of cultural appropriation is ridiculous and people should just get over it and forget about past oppressions.

While I am no fan of what can justly be considered mere political correctness, I do agree that there are moral problems with what is often designated as cultural appropriation. One common area of cultural appropriation is that which is intended to lampoon. While comedy, as Aristotle noted, is a species of the ugly, it should not enter into the realm of what is actually hurtful. As such, lampooning of cultural stereotypes that crosses over into being actually hurtful would cease to be comedic and would instead be merely insulting mockery. An excellent (or awful) example of this would be the use of blackface by people who are not black. Naturally, specific cases would need to be given due consideration—it can be aesthetically legitimate to use the shock of apparent cultural appropriation to make a point.

It can, of course, be objected that lampooning is exempt from the usual moral concerns about insulting people and thus that such mocking insults would be morally fine. It must also be noted that I am making no assertions here about what should be forbidden by law. My view is, in fact, that even the most insulting mockery should not be restricted by law. Morality is, after all, distinct from legality.

Another common area of cultural appropriation is the misuse of symbols from a culture. For example, having an underwear model prance around in a war bonnet is not intended as lampooning, but is an insult to the culture that regards the war bonnet as an honor to be earned. It would be comparable to having underwear models prancing around displaying unearned honors such as the Purple Heart or the Medal of Honor. This misuse can, of course, be unintentional—people often use cultural marks of honor as “cool accessories” without any awareness of what they actually mean. While people should, perhaps, do some research before borrowing from other cultures, innocent ignorance is certainly forgivable.

It could be objected that such misuse is not morally problematic since there is no real harm being done when a culture is insulted by the misuse of its symbols. This, of course, would need to be held to consistently—a person making this argument to allow the misuse of the symbols of another culture would need to accept a comparable misuse of her own most sacred symbols as morally tolerable. Once again, I am not addressing the legality of this matter—although cultures do often have laws protecting their own symbols, such as military medals or religious icons.

While it would be easy to run through a multitude of cases that would be considered cultural appropriation, I prefer to focus on presenting a general principle about what would be morally problematic cultural appropriation. Given the above examples and consideration of the others that can be readily found, what seems to make appropriation inappropriate is the misuse or abuse of the cultural elements. That is, there needs to be meaningful harm inflicted by the appropriation. This misuse or abuse could be intentional (which would make it morally worse) or unintentional (which might make it an innocent error of ignorance).

It could be contended that any appropriation of culture is harmful by using an analogy to trademark, patent, and copyright law. A culture could be regarded as holding the moral “trademark”, “patent” or “copyright” (as appropriate) on its cultural items and thus people who are not part of that culture would be inflicting harm by appropriating these items. This would be analogous to another company appropriating, for example, Disney’s trademarks, violating the copyrights held by Random House or the patents held by Google. Culture could be thus regarded as a property owned by members of that culture and passed down as a matter of inheritance. This would seem to make any appropriation of culture by outsiders morally problematic—although a culture could give permission for such use by intentionally sharing the culture. Those who are fond of property rights should find this argument appealing.

One interesting way to counter the ownership argument is to note that humans are born into culture by chance and any human could be raised in any culture. As such, it could be claimed that humans have an ownership stake in all human cultures and thus are entitled to adopt culture as they see fit. The culture should, of course, be shown proper respect. This would, of course, be a form of cultural communism—which those who like strict property rights might find unappealing.

The response to this is to note that humans are also born by chance to families and any human could be designated the heir of a family, yet there are strict rules governing the inheritance of property. As such, cultural inheritance could work the same way—only the true heirs can give permission to others to use the culture. This should appeal to those who favor strict protections for inherited property.

My own inclination is that humans are the inheritors of all human culture and thus we all have a right to the cultural wealth our species has produced.  Naturally, individual ownership of specific works should be properly respected. However, as with any gift, it must be treated with due respect and used appropriately—rather than misused through appropriation. So, cancelling the yoga class was absurd.



My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Performance Based Funding & Adjustments


Photo by Paula O’Neil

I have written numerous essays on the issue of performance based funding of Florida state universities. This essay adds to the stack by addressing the matter of adjusting the assessment on the basis of impediments. I will begin, as I so often do, with a running analogy.

This coming Thursday is Thanksgiving and I will, as I have for the past few decades, run the Tallahassee Turkey Trot. By ancient law, the more miles you run on Thanksgiving, the more pumpkin pie and turkey you can stuff into your pie port. This is good science.

Back in the day, people wanted me to be on their Turkey Trot team because I was (relatively) fast. These days, I am asked to be on a team because I am (relatively) old but still (relatively) mobile.  As to why age and not just speed would be important in team selection, the answer is that the team scoring involves the use of an age grade calculator. While there is some debate about the accuracy of the calculators, the basic idea is sound: the impact of aging on performance can be taken into account in order to “level the playing field” (or “running road”) so as to allow fair comparisons and assessments of performance between people of different ages.

Suppose, for example, I wanted to compare my performance as a 49-year-old runner relative to a young man (perhaps my younger and much faster self). The most obvious way to do this is to simply compare our times in the same race and this would be a legitimate comparison. If I ran the 5K in 20 minutes and the young fellow ran it in 19 minutes, he would have performed better than I did. However, if a fair comparison were desired, then the effect of aging should be taken into account—after all, as I like to say, I am dragging the weight of many more years. Using an age grade calculator, my 20-minute 5K would be age adjusted to be equivalent to a 17:45 run by a young man. As such, I would have performed better than the young fellow given the temporal challenge I faced.
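The adjustment itself is just a multiplication by an age factor. Here is a minimal sketch; the 0.8875 factor is a hypothetical value chosen to reproduce the 20:00 → 17:45 example above, not an official age-grading table entry:

```python
def age_adjusted_time(actual_seconds: float, age_factor: float) -> float:
    """Scale a race time by an age factor (below 1.0 for older runners),
    yielding the equivalent open-class (young adult) time."""
    return actual_seconds * age_factor

def fmt(seconds: float) -> str:
    """Format a time in seconds as M:SS."""
    m, s = divmod(round(seconds), 60)
    return f"{m}:{s:02d}"

# Hypothetical factor for a 49-year-old male 5K runner (illustrative only)
factor = 0.8875
adjusted = age_adjusted_time(20 * 60, factor)
print(fmt(adjusted))  # 17:45
```

Real age-grade calculators look the factor up from published tables by age, sex, and event distance, but the comparison they enable is this simple rescaling.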

While assessing running times is different from assessing the performance of a university, the situations do seem similar in relevant ways. To be specific, the goal is to assess performance and to do so fairly. In the case of running, measuring the performance can be done by using only the overall times, but this does not truly measure the performance in terms of how well each runner has done in regards to the key challenge of age. Likewise, universities could be compared in terms of the unadjusted numbers, but this would not provide a fair basis for measuring performance without considering the key challenges faced by each university.

As I have mentioned in previous essays, my university, Florida A&M University, has fared poorly under the state’s assessment system. As with using just the actual times from a race, this assessment is a fair evaluation given the standards. My university really is doing worse than the other schools, given the assigned categories and the way the results are calculated. However, Florida A&M University (and other schools) face challenges that the top ranked schools do not face (or do not face to the same degree). As such, a truly fair assessment of the performance of the schools would need to employ something analogous to the age graded calculations.

As noted in another essay, Florida A&M University is well ranked in terms of its contribution to social mobility. One reason for this is that the majority of Florida A&M University students are low-income students and the school does reasonably well at helping them move up. However, lower income students face numerous challenges that would lower their chances of graduation and success. These factors include the fact that students from poor schools (which tend to be located in economically disadvantaged areas) will tend to be poorly prepared for college. Another factor is that poverty negatively impacts brain development as well as academic performance. There is also the obvious fact that disadvantaged students need to borrow more money than students from wealthier backgrounds. This entails more student debt, and seventy percent of African American students say that student debt is their main reason for dropping out. In contrast, less than fifty percent of white students make this claim.

Given the impediments faced by lower income students, the assessment of university performance should be economically graded—that is, there should be an adjustment that compensates for the negative effect of the economic disadvantages of the students. Without this, the performance of the university cannot be properly assessed. Even though a university’s overall numbers might be lower than other schools, the school’s actual performance in terms of what it is doing for its students might be quite good.

In addition to the economic factors, there is also the factor of racism (which is also intertwined with economics). As I have mentioned in prior essays, African-American students are still often victims of segregation in regards to K-12 education and receive generally inferior education relative to white students. This clearly will impact college performance.

Race is also a major factor in regards to economic success. As noted in a previous essay, people with white sounding names are more likely to get interviews and call backs. For whites, the unemployment rate is 5.3% and it is 11.4% for blacks. The poverty rate for whites is 9.7% while that for blacks is 27.2%. The median household wealth for whites is $91,405 and for blacks $6,446. Blacks own homes at a rate of 43.5% while whites do so at 72.9%. Median household income is $35,416 for blacks and $59,754 for whites. Since many of the factors used to assess Florida state universities use economic and performance factors that are impacted by the effects of racism, fairness would require that there be a racism graded calculation. This would factor in how the impact of racism lowers the academic and economic success of black college graduates, thus allowing an accurate measure of the performance of Florida A&M University and other schools. Without such adjustments, there is no clear measure of how the schools actually are performing.


Refugees & Terrorists

In response to the recent terrorist attack in Paris (but presumably not those outside the West, such as in Beirut) many governors have stated they will try to prevent the relocation of Syrian refugees into their states. These states include my home state of Maine, my university state of Ohio and my adopted state of Florida. Recognizing a chance to score political points, some Republican presidential candidates have expressed their opposition to allowing more Syrian refugees into the country. Some, such as Ted Cruz, have proposed a religious test for entry into the country: Christian refugees would be allowed, while Muslim refugees would be turned away.

On the one hand, it is tempting to dismiss this as mere political posturing and pandering to fear, racism and religious intolerance. On the other hand, it is worth considering the legitimate worries that lie under the posturing and the pandering. One worry is, of course, the possibility that terrorists could masquerade as refugees to enter the country. Another worry is that refugees who are not already terrorists might be radicalized and become terrorists.

In matters of politics, it is rather unusual for people to operate on the basis of consistently held principles. Instead, views tend to be held on the basis of how a person feels about a specific matter or what the person thinks about the political value of taking a specific position. However, a proper moral assessment requires considering the matter in terms of general principles and consistency.

In the case of the refugees, the general principle justifying excluding them would be something like this: it is morally acceptable to exclude from a state groups who include people who might pose a threat. This principle seems, in general, quite reasonable. After all, excluding people who might present a threat serves to protect people from harm.

Of course, this principle is incredibly broad and would justify excluding almost anyone and everyone. After all, nearly every group of people (tourists, refugees, out-of-staters, men, Christians, atheists, cat fanciers, football players, and so on) includes people who might pose a threat. While excluding everyone would increase safety, it would certainly make for a rather empty state. As such, this general principle should be subject to some additional refinement in terms of such factors as the odds that a dangerous person will be in the group in question, the harm such a person is likely to do, and the likely harms from excluding such people.

As noted above, the concern about refugees from Syria (and the Middle East) is that they might include terrorists or terrorists-to-be. One factor to consider is the odds that this will occur. The United States has a fairly extensive (and slow) vetting process for refugees and, as such, it is not surprising that of “745,000 refugees resettled since September 11th, only two Iraqis in Kentucky have been arrested on terrorist charges, for aiding al-Qaeda in Iraq.” This indicates that although the chance of a terrorist arriving masquerading as a refugee is not zero, it is exceptionally unlikely.

It might be countered, using the usual hyperbolic rhetoric of such things, that if even one terrorist gets into the United States, that would be an intolerable disaster. While I do agree that this would be a bad thing, there is the matter of general principles. In this case, would it be reasonable to operate on a principle that the possibility of even one bad outcome is sufficient to warrant a broad ban on something? That, I would contend, would generally seem to be unreasonable. This principle would justify banning guns, nuts, cars and almost all other things. It would also justify banning tourists and visitors from other states. After all, tourists and people from other states do bad things in states from time to time. As such, this principle seems unreasonable.

There is, of course, the matter of the political risk. A politician who supports allowing refugees to come into her state will be vilified by certain pundits and a certain news outlet if even a single incident happens. This, of course, would be no more reasonable than vilifying a politician who supports the second amendment just because a person is wrongly shot in her state.  But, reason is usually absent in the realm of political punditry.

Another factor to consider is the harm that would be done by excluding such refugees. If they cannot be settled someplace, they will be condemned to live as involuntary nomads and suffer all that entails. There is also the ironic possibility that such excluded refugees will become, as pundits like to say, radicalized. After all, people who are deprived of hope and who are treated as pariahs tend to become a bit resentful and some might decide to actually become terrorists. There is also the fact that banning refugees provides a nice bit of propaganda for the terrorist groups.

Given that the risk is very small and the harm to the refugees would be significant, the moral thing to do is to allow the refugees into the United States. Yes, one of them could be a terrorist. But so could a tourist. Or some American coming from another state. Or already in the state.

In addition to the sort of utilitarian calculation just made, an argument can also be advanced on the basis of moral duties to others, even when acting on such a duty involves risk. In terms of religious-based ethics, a standard principle is to love thy neighbor as thyself, which would seem to require that the refugees be aided, even at a slight risk. There is also the golden rule: if the United States fell into chaos and war, Americans fleeing the carnage would want other people to help them. Even though we Americans have a reputation for violence. As such, we need to accept refugees.

As a closing point, we Americans love to make claims about the moral superiority and exceptionalism of our country. Talk is cheap, so if we want to prove our alleged superiority and exceptionalism, we have to act in an exceptional way. Refusing to help people out of fear is to show a lack of charity, compassion and courage. This is not what an exceptional nation would do.



The Left’s Defection from Progress

Note: This is a slightly abridged (but otherwise largely warts and all) version of an article that I had published in Quadrant magazine in April 1999. It has not previously been published online (except that I am cross-posting on my own blog, Metamagician and the Hellfire Club). While my views have developed somewhat in the interim, there may be some advantage in republishing it for a new audience, especially at a time when there is much discussion of a “regressive left”.


In a recent mini-review of David Stove’s Anything Goes: Origins of Scientific Irrationalism (originally published in 1982 as Popper and After), Diane Carlyle and Nick Walker make a casual reference to Stove’s “reactionary polemic”. By contrast, they refer to the philosophies of science that Stove attacks as “progressive notions of culture-based scientific knowledge”. To say the least, this appears tendentious.

To be fair, Carlyle and Walker end up saying some favourable things about Stove’s book. What is nonetheless alarming about their review is that it evidences just how easy it has become to write as if scientific realism were inherently “reactionary” and the more or less relativist views of scientific knowledge that predominate among social scientists and humanities scholars were “progressive”.

The words “reactionary” and “progressive” usually attach themselves to political and social movements, some kind of traditionalist or conservative backlash versus an attempt to advance political liberties or social equality. Perhaps Carlyle and Walker had another sense in mind, but the connotations of their words are pretty inescapable. Moreover, they would know as well as I do that there is now a prevalent equation within the social sciences and humanities of relativist conceptions of truth and reality with left-wing social critique, and of scientific realism with the political right. Carlyle and Walker wrote their piece against that background. But where does it leave those of us who retain at least a temperamental attachment to the left, however nebulous that concept is becoming, while remaining committed to scientific realism? To adapt a phrase from Christina Hoff Sommers, we are entitled to ask about who has been stealing socially liberal thought in general.

Is the life of reason and liberty (intellectual and otherwise) that some people currently enjoy in some countries no more than an historical anomaly, a short-lived bubble that will soon burst? It may well appear so. Observe the dreadful credulity of the general public in relation to mysticism, magic and pseudoscience, and the same public’s preponderant ignorance of genuine science. Factor in the lowbrow popularity of religious fundamentalism and the anti-scientific rantings of highbrow conservatives such as Bryan Appleyard. Yet the sharpest goad to despair is the appearance that what passes for the intellectual and artistic left has now repudiated the Enlightenment project of conjoined scientific and social progress.

Many theorists in the social sciences and humanities appear obsessed with dismantling the entirety of post-Enlightenment political, philosophical and scientific thought. This is imagined to be a progressive act, desirable to promote the various social, cultural and other causes that have become politically urgent in recent decades, particularly those associated with sex, race, and the aftermath of colonialism. The positions on these latter issues taken by university-based theorists give them a claim to belong to, if not actually constitute, the “academic left”, and I’ll refer to them with this shorthand expression.

There is, however, nothing inherently left-wing about wishing to sweep away our Enlightenment legacy. Nor is a commitment to scientific inquiry and hard philosophical analysis inconsistent with socially liberal views. Abandonment of the project of rational inquiry, with its cross-checking of knowledge in different fields, merely opens the door to the worst kind of politics that the historical left could imagine, for the alternative is that “truth” be determined by whoever, at particular times and places, possesses sufficient political or rhetorical power to decide what beliefs are orthodox. The rationality of our society is at stake, but so is the fate of the left itself, if it is so foolish as to abandon the standards of reason for something more like a brute contest for power.

It is difficult to know where to start in criticising the academic left’s contribution to our society’s anti-rationalist drift. The approaches I am gesturing towards are diverse among themselves, as well as being professed in the universities side by side with more traditional methods of analysing society and culture. There is considerable useful dialogue among all these approaches, and it can be difficult obtaining an accurate idea of specific influences within the general intellectual milieu.

However, amidst all the intellectual currents and cross-currents, it is possible to find something of a common element in the thesis or assumption (sometimes one, sometimes the other) that reality, or our knowledge of it, is “socially constructed”. There are many things this might mean, and I explain below why I do not quarrel with them all.

In the extreme, however, our conceptions of reality, truth and knowledge are relativised, leading to absurd doctrines, such as the repudiation of deductive logic or the denial of a mind-independent world. Symptomatic of the approach I am condemning is a subordination of the intellectual quest for knowledge and understanding to political and social advocacy. Some writers are prepared to misrepresent mathematical and scientific findings for the purposes of point scoring or intellectual play, or the simple pleasure of ego-strutting. All this is antithetical to Enlightenment values, but so much – it might be said – for the Enlightenment.


The notion that reality is socially constructed would be attractive and defensible if it were restricted to a thesis about the considerable historical contingency of any culture’s social practices and mores, and its systems of belief, understanding and evaluation. These are, indeed, shaped partly by the way they co-evolve and “fit” with each other, and by the culture’s underlying economic and other material circumstances.

The body of beliefs available to anyone will be constrained by the circumstances of her culture, including its attitude to free inquiry, the concepts it has already built up for understanding the world, and its available technologies for the gathering of data. Though Stove is surely correct to emphasise that the accumulation of empirical knowledge since the 17th century has been genuine, the directions taken by science have been influenced by pre-existing values and beliefs. Meanwhile, social practices, metaphysical and ethical (rather than empirical) beliefs, the methods by which society is organised and by which human beings understand their experience are none of them determined in any simple, direct or uniform way by human “nature” or biology, or by transcendental events.

So far, so good – but none of this is to suggest that all of these categories should or can be treated in exactly the same way. Take the domain of metaphysical questions. Philosophers working in metaphysics are concerned to understand such fundamentals as space, time, causation, the kinds of substances that ultimately exist, the nature of consciousness and the self. The answers cannot simply be “read off” our access to empirical data or our most fundamental scientific theories, or some body of transcendental knowledge. Nonetheless, I am content to assume that all these questions, however intractable we find them, have correct answers.

The case of ethical disagreement may be very different, and I discuss it in more detail below. It may be that widespread and deep ethical disagreement actually evidences the correctness of a particular metaphysical (and meta-ethical) theory – that there are no objectively existing properties of moral good and evil. Yet, to the extent that they depend upon empirical beliefs about the consequences of human conduct, practical moral judgements may often be reconcilable. Your attitude to the rights of homosexuals will differ from mine if yours is based on a belief that homosexual acts cause earthquakes.

Again, the social practices of historical societies may turn out to be constrained by our biology in a way that is not true of the ultimate answers to questions of metaphysics. All these are areas where human behaviour and belief may be shaped by material circumstances and the way they fit with each other, and relatively unconstrained by empirical knowledge. But, to repeat, they are not all the same.

Where this appears to lead us is that, for complicated reasons and in awkward ways, there is much about the practices and beliefs of different cultures that is contingent on history. In particular, the way institutions are built up around experience is more or less historically contingent, dependent largely upon economic and environmental circumstances and on earlier or co-evolving layers of political and social structures. Much of our activity as human beings in the realms of understanding, organising, valuing and responding to experience can reasonably be described as “socially constructed”, and it will often make perfectly good sense to refer to social practices, categories, concepts and beliefs as “social constructions”.

Yet this modest insight cries out for clear intellectual distinctions and detailed application to particular situations, with conscientious linkages to empirical data. It cannot provide a short-cut to moral perspicuity or sound policy formulation. Nor is it inconsistent with a belief in the actual existence of law-governed events in the empirical world, which can be the subject of objective scientific theory and accumulating knowledge.


As Antony Flew once expressed it, what is socially constructed is not reality itself but merely “reality”: the beliefs, meanings and values available within a culture.

Thus, none of what I’ve described so far amounts to “social constructionism” in a pure or philosophical sense, since this would require, in effect, that we never have any knowledge. It would require a thesis that all beliefs are so deeply permeated by socially specific ideas that they never transcend their social conditions of production to the extent of being about objective reality. To take this a step further, even the truth about physical nature would be relative to social institutions – relativism applies all the way down.

Two important points need to be made here. First, even without such a strong concept of socially constructed knowledge, social scientists and humanities scholars have considerable room to pursue research programs aimed at exploring the historically contingent nature of social institutions. In the next section, I argue that this applies quintessentially to socially accepted moral beliefs.

Second, however, there is a question as to why anyone would insist upon the thesis that the nature of reality is somehow relative to social beliefs all the way down, that there is no point at which we ever hit a bedrock of truth and falsity about anything. It is notorious that intellectuals who use such language sometimes retreat, when challenged, to a far more modest or equivocal kind of position.

Certainly, there is no need for anyone’s political or social aims to lead them to deny the mind-independent existence of physical nature, or to suggest that the truth about it is, in an ultimate sense, relative to social beliefs or subjective to particular observers. Nonetheless, many left-wing intellectuals freely express a view in which reality, not “reality”, is a mere social construction.


If social construction theory is to have any significant practical bite, then it has to assert that moral beliefs are part of what is socially constructed. I wish to explore this issue through some more fundamental considerations about ethics.

It is well-documented that there are dramatic contrasts between different societies’ practical beliefs about what is right and wrong, so much so that the philosopher J.L. Mackie said that these “make it difficult to treat those judgements as apprehensions of objective truths.” As Mackie develops the argument, it is not part of some general theory that “the truth is relative”, but involves a careful attempt to show that the diversity of moral beliefs is not analogous to the usual disagreements about the nature of the physical world.

Along with other arguments put by philosophers in Hume’s radical empiricist tradition, Mackie’s appeal to cultural diversity may persuade us that there are no objective moral truths. Indeed, it seems to me that there are only two positions here that are intellectually viable. The first is that Mackie is simply correct. This idea might seem to lead to cultural relativism about morality, but things are not always what they seem.

The second viable position is that there are objective moral truths, but they take the form of principles of an extremely broad nature, broad enough to help shape – rather than being shaped by – a diverse range of social practices in different environmental, economic and other circumstances.

If this is so, particular social practices and practical moral beliefs have some ultimate relationship to fundamental moral principles, but there can be enormous “slippage” between the two, depending on the range of circumstances confronting different human societies. Moreover, during times of rapid change such as industrialised societies have experienced in the last three centuries – and especially the last several decades – social practices and practical moral beliefs might tend to be frozen in place, even though they have become untenable. Conversely, there might be more wisdom, or at least rationality, than is apparent to most Westerners in the practices and moral beliefs of traditional societies. All societies, however, might have practical moral beliefs that are incorrect because of lack of empirical knowledge about the consequences of human conduct.

Taken with my earlier, more general, comments about various aspects of social practices and culturally-accepted “reality”, this approach gives socially liberal thinkers much of what they want. It tends to justify those who would test and criticise the practices and moral beliefs of Western nations while defending the rationality and sophistication of people from colonised cultures.


The academic left’s current hostility to science and the Enlightenment project may have its origins in a general feeling, brought on by the twentieth century’s racial and ideological atrocities, that the Enlightenment has failed. Many intellectuals have come to see science as complicit in terror, oppression and mass killing, rather than as an inspiration for social progress.

The left’s hostility has surely been intensified by a quite specific fear that the reductive study of human biology will cross a bridge from the empirical into the normative realm, where it may start to dictate the political and social agenda in ways that can aptly be described as reactionary. This, at least, is the inference I draw from left-wing intellectuals’ evident detestation of human sociobiology or evolutionary psychology.

The fear may be that dubious research in areas such as evolutionary psychology and/or cognitive neuroscience will be used to rationalise sexist, racist or other illiberal positions. More radically, it may be feared that genuine knowledge of a politically unpalatable or otherwise harmful kind will emerge from these areas. Are such fears justified? To dismiss them lightly would be irresponsible and naive. I can do no more than place them in perspective. The relationship between the social sciences and humanities, on the one hand, and the “hard” end of psychological research, on the other, is one of the most important issues to be tackled by intellectuals in all fields – the physical sciences, social sciences and humanities.

One important biological lesson we have learned is that human beings are not, in any reputable sense, divided into “races”. As an empirical fact of evolutionary history and genetic comparison, we are all so alike that superficial characteristics such as skin or hair colour signify nothing about our moral or intellectual worth, or about the character of our inner experience. Yet, what if it had turned out otherwise? It is understandable if people are frightened by our ability to research such issues. At the same time, the alternative is to suppress rational inquiry in some areas, leaving questions of orthodoxy to whoever wins the naked contest for power. This is neither rational nor safe.

What implications could scientific knowledge about ourselves have for moral conduct or social policy? No number of factual statements about human nature, by themselves, can ever entail statements that amount to moral knowledge, as Hume demonstrated. What is required is an ethical theory, persuasive on other grounds, that already links “is” and “ought”. This might be found, for example, in a definition of moral action in terms of human flourishing, though it is not clear why we should, as individuals, be concerned about something as abstract as that – why not merely the flourishing of ourselves or our particular loved ones?

One comfort is that, even if we had a plausible set of empirical and meta-ethical gadgets to connect what we know of human nature to high-level questions about social policy, we would discover significant slippage between levels. Nature does not contradict itself, and no findings from a field such as evolutionary psychology could be inconsistent with the observed facts of cultural diversity. If reductive explanations of human nature became available in more detail, these must turn out to be compatible with the existence of the vast spectrum of viable cultures that human beings have created so far. And there is no reason to believe that a lesser variety of cultures will be workable in the material circumstances of a high-technology future.

The dark side of evolutionary psychology includes, among other things, some scary-looking claims about the reproductive and sociopolitical behaviour of the respective sexes. True, no one seriously asserts that sexual conduct in human societies and the respective roles of men and women within families and extra-familial hierarchies are specified by our genes in a direct or detailed fashion. What, however, are we to make of the controversial analyses of male and female reproductive “strategies” that have been popularised by several writers in the 1990s? Perhaps the best-known exposition is that of Matt Ridley in The Red Queen: Sex and the Evolution of Human Nature (1993). Such accounts offer evidence and argument that men are genetically hardwired to be highly polygamous or promiscuous, while women are similarly programmed to be imperfectly monogamous, as well as sexually deceitful.

In responding to this, first, I am in favour of scrutinising the evidence for such claims very carefully, since they can so readily be adapted to support worn-out stereotypes about the roles of the sexes. That, however, is a reason to show scientific and philosophical rigour, not to accept strong social constructionism about science. Secondly, even if findings similar to those synthesised by Ridley turned out to be correct, the social consequences are by no means apparent. Mere biological facts cannot tell us in some absolute way what are the correct sexual mores for a human society.

To take this a step further, theories about reproductive strategies suggest that there are in-built conflicts between the interests of men and women, and of higher and lower status men, which will inevitably need to be moderated by social compromise, not necessarily in the same way by different cultures. If all this were accepted for the sake of argument, it might destroy a precious notion about ourselves: that there is a simple way for relations between the sexes to be harmonious. On the other hand, it would seem to support rather than refute what might be considered a “progressive” notion: that no one society, certainly not our own, has the absolutely final answer to questions about sexual morality.

Although evolutionary psychology and cognitive neuroscience are potential minefields, it is irrational to pretend that they are incapable of discovering objective knowledge. Fortunately, such knowledge will surely include insight into the slippage between our genetic similarity and the diversity of forms taken by viable cultures. The commonality of human nature will be at a level that is consistent with the (substantial) historical contingency of social practices and of many areas of understanding and evaluative belief. The effect on social policy is likely to be limited, though we may become more charitable about what moral requirements are reasonable for the kinds of creatures that we are.

I should add that evolutionary psychology and cognitive neuroscience are not about to put the humanities, in particular, out of business. There are good reasons why the natural sciences cannot provide a substitute for humanistic explanation, even if we obtain a far deeper understanding of our own genetic and neurophysiological make-up. This is partly because reductive science is ill-equipped to deal with the particularity of complex events, partly because causal explanation may not be all that we want, anyway, when we try to interpret and clarify human experience.


Either there are no objective moral truths or they are of an extremely general kind. Should we, therefore, become cultural relativists?

Over a quarter of a century ago, Bernard Williams made the sharp comment that cultural relativism is “possibly the most absurd view to have been advanced even in moral philosophy”. To get this clear, Williams was criticising a cluster of beliefs that has a great attraction for left-wing academics and many others who preach inter-cultural tolerance: first, that what is “right” means what is right for a particular culture; second, that what is right for a particular culture refers to what is functionally valuable for it; and third, that it is “therefore” wrong for one culture to interfere with the organisation or values of another.

As Williams pointed out, these propositions are internally inconsistent. Not only does the third not follow from the others; it cannot be asserted while the other two are maintained. After all, it may be functionally valuable to culture A (and hence “right” within that culture) for it to develop institutions for imposing its will on culture B. These may include armadas and armies, colonising expeditions, institutionalised intolerance, and aggressively proselytising religions. In fact, nothing positive in the way of moral beliefs, political programs or social policy can ever be derived merely from a theory of cultural relativism.

That does not mean that there are no implications at all from the insight that social practices and beliefs are, to a large degree, contingent on history and circumstance. Depending upon how we elaborate this insight, we may have good reason to suspect that another culture’s odd-looking ways of doing things are more justifiable against universal principles of moral value than is readily apparent. In that case, we may also take the view that the details of how our own society, or an element of it, goes about things are open to challenge as to how far they are (or remain?) justifiable against such universal principles.

If, on the other hand, we simply reject the existence of any objective moral truths – which I have stated to be a philosophically viable position – we will have a more difficult time explaining why we are active in pursuing social change. Certainly, we will not be able to appeal to objectively applicable principles to justify our activity. All the same, we may be able to make positive commitments to ideas such as freedom, equality or benevolence that we find less arbitrary and more psychologically satisfying than mere acquiescence in “the way they do things around here”. In no case, however, can we intellectually justify a course of political and social activism without more general principles or commitments to supplement the bare insight that, in various complicated ways, social beliefs and practices are largely contingent.


An example of an attempt to short-circuit the kind of hard thinking about moral foundations required to deal with contentious issues is Martin F. Katz’s well-known article, “After the Deconstruction: Law in the Age of Post-Structuralism”. Katz is a jurisprudential theorist who is committed to a quite extreme form of relativism about empirical knowledge. In particular, his article explicitly assigns the findings of physical science the same status as the critical interpretations of literary works.

Towards the end of “After the Deconstruction”, Katz uses the abortion debate as an example of how what he calls “deconstructionism” or the “deconstructionist analysis” can clarify and arbitrate social conflict. He begins by stating the debate much as it might be seen by its antagonists:

One side of the debate holds that abortion is wrong because it involves the murder of an unborn baby. The other side of the debate sees abortion as an issue of self-determination; the woman’s right to choose what she does to her body. How do we measure which of these “rights” should take priority?

In order to avoid any sense of evasion, I’ll state clearly that the second of these positions, the “pro-choice” position, is closer to my own. However, either position has more going for it in terms of rationality than what Katz actually advocates.

Weighing these competing “rights”, however, is not how Katz proposes to solve the problem of abortion. He begins by stating that “deconstructionism” recommends that we “resist the temptation to weigh the legitimacy of . . . these competing claims.” Instead, we should consider the different “subjugations” supposedly instigated by the pro-life and pro-choice positions. The pro-life position is condemned because it denies women the choice of what role they wish to take in society, while the pro-choice position is apparently praised (though even this is not entirely clear) for shifting the decision about whether and when to have children directly to women.

The trouble with this is that it prematurely forecloses on the metaphysical and ethical positions at stake, leaving everything to be solved in terms of power relations. However, if we believe that a foetus (say at a particular age) is a person in some sense that entails moral regard, or a being that possesses a human soul, then there are moral consequences. Such beliefs, together with some plausible assumptions about our moral principles or commitments, entail that we should accept that aborting the foetus is an immoral act. The fact that banning the abortion may reduce the political power of the woman concerned, or of women generally, over against that of men will seem to have little moral bite, unless we adopt a very deep principle of group political equality. That would require ethical argument of an intensity which Katz never attempts.

If we take it that the foetus is not a person in the relevant sense, we may be far more ready to solve the problem (and to advocate an assignment of “rights”) on the basis of utilitarian, or even libertarian, principles. By contrast, the style of “deconstructionist” thought advocated by Katz threatens to push rational analysis aside altogether, relying on untheorised hunches or feelings about how we wish power to be distributed in our society. This approach can justifiably be condemned as irrational. At the same time, the statements that Katz makes about the political consequences for men or women of banning or legalising abortion are so trite that it is difficult to imagine how anyone not already beguiled by an ideology could think that merely stating them could solve the problem.


In the example of Katz’s article, as in the general argument I have put, the insight that much in our own society’s practices and moral beliefs is “socially constructed” can do only a modest amount of intellectual work. We may have good reason to question the way they do things around here, to subject it to deeper analysis. We may also have good reason to believe that the “odd” ways they do things in other cultures make more sense than is immediately apparent to the culture-bound Western mind. All very well. None of this, however, can undermine the results of systematic empirical inquiry. Nor can it save us from the effort of grappling with inescapable metaphysical and ethical questions, just as we had to do before the deconstruction.
