Category Archives: Philosophy

Modern Philosophy

Portrait of René Descartes

Here is a (mostly) complete course in Modern Philosophy.

Notes & Readings

Modern Readings SP 2014

Modern Notes SP 2014

Modern Philosophy Part One (Hobbes & Descartes)

#1 This is the unedited video from the 1/7/2016 Modern class. It covers the syllabus and some of the historical background for the Modern era.

#2 This is the unedited video from the 1/12/2016 Modern philosophy class. It concludes the background for the modern era and the start of argument basics.

#3 This is the unedited video from the 1/14/2016 modern philosophy class. It covers the analogical argument, the argument by example, the argument from authority, appeal to intuition, and the background for Thomas Hobbes.

#4 This is the unedited video from the 1/19/2016 Modern Philosophy class. It covers Thomas Hobbes.

#5 This is the unedited video from the 1/21/2016 Modern Philosophy class. It covers Descartes’ first meditation as well as the paper for the class.

#6 This is the unedited video from the 1/26/2016 Modern class. It covers Descartes’ Meditations II & III.

#7 This is the unedited video from the 1/28/2016 Modern Philosophy course. It covers Descartes’ Meditations 4-6 and more about Descartes.

Modern Philosophy Part Two (Spinoza & Leibniz)

#8 This is the unedited video from the 2/2/2016 Modern Philosophy class. It covers the start of Spinoza’s philosophy. It could not be otherwise.

#9 No Video

#10 This is the unedited video from the 2/9/2016 Modern Philosophy class. It covers Spinoza.

#11 This is the unedited video from the 2/11/2016 Modern Philosophy class. It covers the end of Spinoza and the start of Leibniz.

#12 This is the unedited video from the 2/16/2016 Modern philosophy class. It covers Leibniz.

#13 This is the unedited video from the 2/18/2016 Modern philosophy class. It covers Leibniz addressing the problem of evil and the start of monads.

#14 This is the unedited video from the 2/23/2016 Modern philosophy class. It covers Leibniz’s monads, pre-established harmony and the city of God.

#15 This is the unedited video from the 2/25/2016 Modern philosophy class. It covers the end of Leibniz and the start of the background for the Enlightenment.

Modern Philosophy Part Three (Locke & Berkeley)

#16 This is the unedited video from the 3/1/2016 Modern Philosophy class. It covers the end of the Enlightenment background and the start of John Locke.

#17 This is the unedited video from the 3/3/2016 Modern Philosophy class. It covers John Locke’s epistemology.

#18 This is the unedited video from the 3/15/2016 Modern Philosophy class. It includes a recap of Locke’s reply to skepticism and the start of his theory of personal identity.

#19 No Video

#20 This is the unedited video from the 3/22/2016 Modern Philosophy class. It covers Locke’s political philosophy.

#21 This is the unedited video from the 3/29/2016 Modern Philosophy class. It covers the first part of George Berkeley’s immaterialism.

#22 This unedited video is from the 3/31/2016 Modern Philosophy class. It covers the final part of Berkeley, including his arguments for God as well as the classic problems with his theory.

Modern Philosophy Part Four (Hume & Kant)

#23 This is the unedited video from the 4/5/2016 Modern Philosophy class. It covers the introduction to David Hume and his theory of necessary connections.

#24 This is the unedited video from the 4/7/2016 Modern philosophy class. It covers Hume’s skepticism regarding the senses.

#25 This is the unedited video from the 4/12/2016 Modern Philosophy class. It covers David Hume’s theory of personal identity, ethical theory and theory of religion.

#26 This is the unedited video from the 4/19/2016 Modern Philosophy class. It covers Kant’s philosophy.

#27 This is the unedited video from the 4/19/2016 Modern class. It covers Kant’s epistemology and metaphysics.

#28 This is the unedited video from the 4/21/2016 Modern Philosophy class. It covers Kant’s antinomies, God, and the categorical imperative.

 

Denmark’s Refugee “Fee”

In January 2016, Denmark passed a law under which refugees who enter the country with assets worth more than about US $1,450 have their valuables taken in order to help pay for the cost of their being in the country. In response to international criticism, Denmark modified the law to allow refugees to keep items of sentimental value, such as wedding rings. This matter is certainly one of moral concern.

Critics have been quick to deploy a Nazi analogy, likening this policy to how the Nazis stole the valuables of those they sent to the concentration camps. While taking from refugees does seem morally problematic, the Nazi analogy does not really stick—there are too many relevant differences between the situations. Most importantly, the Danes would be caring for the refugees rather than murdering them. There is also the fact that the refugees are voluntarily going to Denmark rather than being rounded up, robbed, imprisoned and murdered. While the Danes have clearly not gone full Nazi, there are still grounds for moral criticism. However, I will endeavor to provide a short defense of the law—a rational consideration requires at least considering the pro side of the argument.

The main motivation of the law seems to be to deter refugees from coming to Denmark. This is a strategy of making the country less appealing than other countries in the hopes that refugees will go somewhere else and be someone else’s burden. Countries, like individuals, do seem to have the right to make themselves less appealing. While this sort of approach is certainly not morally commendable, it does not seem to be morally wrong. After all, the Danes are not simply banning refugees but trying to provide a financial disincentive. Somewhat ironically, the law would not deter the poorest of refugees; it would only deter those who have enough property for its loss to serve as a deterrent.

The main moral argument in favor of the law is based on the principle that people should help pay for the cost of their upkeep to at least the degree they can afford to do so. To use an analogy, if people show up at my house and ask to live with me and eat my food, it would certainly be fair of me to expect them to at least chip in for the costs of the utilities and food. After all, I do not get my utilities and food for free. This argument does have considerable appeal, but can be countered.

One counter to the argument is based on the fact that the refugees are fleeing a disaster. Going back to the house analogy, if survivors of a disaster showed up at my door asking for a place to stay until they could get back on their feet, taking their few remaining possessions to offset the cost of their food and shelter would seem cruel and heartless. They have lost so much already, and taking what little remains to them would add insult to injury. To use another analogy, it would be like a rescue crew stripping people of their valuables to help pay for the rescue. While rescues are expensive, such a practice would certainly seem awful.

One counter to this is that refugees who are well off should pay for what they receive. After all, if relatively well-off people showed up at my door asking for food and shelter, it would not seem wrong of me to expect them to contribute to the cost of things. If they can afford it, they have no grounds to claim a free ride off me. Likewise for well-off refugees. That said, the law does not actually address this point, unless having more than $1,450 counts as being well off.

Another point of consideration is that it is one thing to have people pay for lodging and food with money they have; it is quite another to take a person’s remaining worldly possessions. The latter seems like a form of robbery, with whatever threat drove the refugees from home serving as the weapon. The obvious reply is that the refugees would be choosing to go to Denmark; they could go to a more generous country. The problem, however, is that refugees might soon have little choice about where they go.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Against accommodationism: How science undermines religion

Faith versus Fact
There is currently a fashion for religion/science accommodationism, the idea that there’s room for religious faith within a scientifically informed understanding of the world.

Accommodationism of this kind gains endorsement even from official science organizations such as, in the United States, the National Academy of Sciences and the American Association for the Advancement of Science. But how well does it withstand scrutiny?

Not too well, according to a new book by distinguished biologist Jerry A. Coyne.

Gould’s magisteria

The most famous, or notorious, rationale for accommodationism was provided by the celebrity palaeontologist Stephen Jay Gould in his 1999 book Rocks of Ages. Gould argues that religion and science possess separate and non-overlapping “magisteria”, or domains of teaching authority, and so they can never come into conflict unless one or the other oversteps its domain’s boundaries.

If we accept the principle of Non-Overlapping Magisteria (NOMA), the magisterium of science relates to “the factual construction of nature”. By contrast, religion has teaching authority in respect of “ultimate meaning and moral value” or “moral issues about the value and meaning of life”.

On this account, religion and science do not overlap, and religion is invulnerable to scientific criticism. Importantly, however, this is because Gould is ruling out many religious claims as being illegitimate from the outset even as religious doctrine. Thus, he does not attack the fundamentalist Christian belief in a young earth merely on the basis that it is incorrect in the light of established scientific knowledge (although it clearly is!). He claims, though with little real argument, that it is illegitimate in principle to hold religious beliefs about matters of empirical fact concerning the space-time world: these simply fall outside the teaching authority of religion.

I hope it’s clear that Gould’s manifesto makes an extraordinarily strong claim about religion’s limited role. Certainly, most actual religions have implicitly disagreed.

The category of “religion” has been defined and explained in numerous ways by philosophers, anthropologists, sociologists, and others with an academic or practical interest. There is much controversy and disagreement. All the same, we can observe that religions have typically been somewhat encyclopedic, or comprehensive, explanatory systems.

Religions usually come complete with ritual observances and standards of conduct, but they are more than mere systems of ritual and morality. They typically make sense of human experience in terms of a transcendent dimension to human life and well-being. Religions relate these to supernatural beings, forces, and the like. But religions also make claims about humanity’s place – usually a strikingly exceptional and significant one – in the space-time universe.

It would be naïve or even dishonest to imagine that this somehow lies outside of religion’s historical role. While Gould wants to avoid conflict, he creates a new source for it, since the principle of NOMA is itself contrary to the teachings of most historical religions. At any rate, leaving aside any other, or more detailed, criticisms of the NOMA principle, there is ample opportunity for religion(s) to overlap with science and come into conflict with it.

Coyne on religion and science

The genuine conflict between religion and science is the theme of Jerry Coyne’s Faith versus Fact: Why Science and Religion are Incompatible (Viking, 2015). This book’s appearance was long anticipated; it’s a publishing event that prompts reflection.

In pushing back against accommodationism, Coyne portrays religion and science as “engaged in a kind of war: a war for understanding, a war about whether we should have good reasons for what we accept as true.” Note, however, that he is concerned with theistic religions that include a personal God who is involved in history. (He is not, for example, dealing with Confucianism, pantheism or austere forms of philosophical deism that postulate a distant, non-interfering God.)

Accommodationism is fashionable, but that has less to do with its intellectual merits than with widespread solicitude toward religion. There are, furthermore, reasons why scientists in the USA (in particular) find it politically expedient to avoid endorsing any “conflict model” of the relationship between religion and science. Even if they are not religious themselves, many scientists welcome the NOMA principle as a tolerable compromise.

Some accommodationists argue for one or another very weak thesis: for example, that this or that finding of science (or perhaps our scientific knowledge base as a whole) does not logically rule out the existence of God (or the truth of specific doctrines such as Jesus of Nazareth’s resurrection from the dead). For example, it is logically possible that current evolutionary theory and a traditional kind of monotheism are both true.

But even if we accept such abstract theses, where does it get us? After all, the following may both be true:

1. There is no strict logical inconsistency between the essentials of current evolutionary theory and the existence of a traditional sort of Creator-God.

AND

2. Properly understood, current evolutionary theory nonetheless tends to make Christianity as a whole less plausible to a reasonable person.

If 1. and 2. are both true, it’s seriously misleading to talk about religion (specifically Christianity) and science as simply “compatible”, as if science – evolutionary theory in this example – has no rational tendency at all to produce religious doubt. In fact, the cumulative effect of modern science (not least, but not solely, evolutionary theory) has been to make religion far less plausible to well-informed people who employ reasonable standards of evidence.

For his part, Coyne makes clear that he is not talking about a strict logical inconsistency. Rather, incompatibility arises from the radically different methods used by science and religion to seek knowledge and assess truth claims. As a result, purported knowledge obtained from distinctively religious sources (holy books, church traditions, and so on) ends up being at odds with knowledge grounded in science.

Religious doctrines change, of course, as they are subjected over time to various pressures. Faith versus Fact includes a useful account of how they are often altered for reasons of mere expediency. One striking example is the decision by the Mormon church (as recently as the 1970s) to admit blacks into its priesthood. This was rationalised as a new revelation from God, which raises an obvious question as to why God didn’t know from the start (and convey to his worshippers at an early time) that racial discrimination in the priesthood was wrong.

It is, of course, true that a system of religious beliefs can be modified in response to scientific discoveries. In principle, therefore, any direct logical contradictions between a specified religion and the discoveries of science can be removed as they arise and are identified. As I’ve elaborated elsewhere (e.g., in Freedom of Religion and the Secular State (2012)), religions have seemingly endless resources to avoid outright falsification. In the extreme, almost all of a religion’s stories and doctrines could gradually be reinterpreted as metaphors, moral exhortations, resonant but non-literal cultural myths, and the like, leaving nothing to contradict any facts uncovered by science.

In practice, though, there are usually problems when a particular religion adjusts. Depending on the circumstances, a process of theological adjustment can meet with internal resistance, splintering and mutual anathemas. It can lead to disillusionment and bitterness among the faithful. The theological system as a whole may eventually come to look very different from its original form; it may lose its original integrity and much of what once made it attractive.

All forms of Christianity – Catholic, Protestant, and otherwise – have had to respond to these practical problems when confronted by science and modernity.

Coyne emphasizes, I think correctly, that the all-too-common refusal by religious thinkers to accept anything as undercutting their claims has a downside for believability. To a neutral outsider, or even to an insider who is susceptible to theological doubts, persistent tactics to avoid falsification will appear suspiciously ad hoc.

To an outsider, or to anyone with doubts, those tactics will suggest that religious thinkers are not engaged in an honest search for truth. Rather, they are preserving their favoured belief systems through dogmatism and contrivance.

How science subverted religion

In principle, as Coyne also points out, the important differences in methodology between religion and science might (in a sense) not have mattered. That is, it could have turned out that the methods of religion, or at least those of the true religion, gave the same results as science. Why didn’t they?

Let’s explore this further. The following few paragraphs are my analysis, drawing on earlier publications, but I believe they’re consistent with Coyne’s approach. (Compare also Susan Haack’s non-accommodationist analysis in her 2007 book, Defending Science – within Reason.)

At the dawn of modern science in Europe – back in the sixteenth and seventeenth centuries – religious worldviews prevailed without serious competition. In such an environment, it should have been expected that honest and rigorous investigation of the natural world would confirm claims that were already found in the holy scriptures and church traditions. If the true religion’s founders had genuinely received knowledge from superior beings such as God or angels, the true religion should have been, in a sense, ahead of science.

There might, accordingly, have been a process through history by which claims about the world made by the true religion (presumably some variety of Christianity) were successively confirmed. The process might, for example, have shown that our planet is only six thousand years old (give or take a little), as implied by the biblical genealogies. It might have identified a global extinction event – just a few thousand years ago – resulting from a worldwide cataclysmic flood. Science could, of course, have added many new details over time, but not anything inconsistent with pre-existing knowledge from religious sources.

Unfortunately for the credibility of religious doctrine, nothing like this turned out to be the case. Instead, as more and more evidence was obtained about the world’s actual structures and causal mechanisms, earlier explanations of the appearances were superseded. As science advances historically, it increasingly reveals religion as premature in its attempts at understanding the world around us.

As a consequence, religion’s claims to intellectual authority have become less and less rationally believable. Science has done much to disenchant the world – once seen as full of spiritual beings and powers – and to expose the pretensions of priests, prophets, religious traditions, and holy books. It has provided an alternative, if incomplete and provisional, image of the world, and has rendered much of religion anomalous or irrelevant.

By now, the balance of evidence has turned decisively against any explanatory role for beings such as gods, ghosts, angels, and demons, and in favour of an atheistic philosophical naturalism. Regardless of what other factors were involved, the consolidation and success of science played a crucial role in this. In short, science has shown a historical, psychological, and rational tendency to undermine religious faith.

Not only the sciences!

I need to add that the damage to religion’s authority has come not only from the sciences, narrowly construed, such as evolutionary biology. It has also come from work in what we usually regard as the humanities. Christianity and other theistic religions have been especially challenged by the efforts of historians, archaeologists, and academic biblical scholars.

Those efforts have cast doubt on the provenance and reliability of the holy books. They have implied that many key events in religious accounts of history never took place, and they’ve left much traditional theology in ruins. In the upshot, the sciences have undermined religion in recent centuries – but so have the humanities.

Coyne would not tend to express it that way, since he favours a concept of “science broadly construed”. He elaborates this as: “the same combination of doubt, reason, and empirical testing used by professional scientists.” On his approach, history (at least in its less speculative modes) and archaeology are among the branches of “science” that have refuted many traditional religious claims with empirical content.

But what is science? Like most contemporary scientists and philosophers, Coyne emphasizes that there is no single process that constitutes “the scientific method”. Hypothetico-deductive reasoning is, admittedly, very important to science. That is, scientists frequently make conjectures (or propose hypotheses) about unseen causal mechanisms, deduce what further observations could be expected if their hypotheses are true, then test to see what is actually observed. However, the process can be untidy. For example, much systematic observation may be needed before meaningful hypotheses can be developed. The precise nature and role of conjecture and testing will vary considerably among scientific fields.

Likewise, experiments are important to science, but not to all of its disciplines and sub-disciplines. Fortunately, experiments are not the only way to test hypotheses (for example, we can sometimes search for traces of past events). Quantification is also important… but not always.

However, Coyne says, a combination of reason, logic and observation will always be involved in scientific investigation. Importantly, some kind of testing, whether by experiment or observation, is important to filter out non-viable hypotheses.

If we take this sort of flexible and realistic approach to the nature of science, the line between the sciences and the humanities becomes blurred. Though they tend to be less mathematical and experimental, for example, and are more likely to involve mastery of languages and other human systems of meaning, the humanities can also be “scientific” in a broad way. (From another viewpoint, of course, the modern-day sciences, and to some extent the humanities, can be seen as branches from the tree of Greek philosophy.)

It follows that I don’t terribly mind Coyne’s expansive understanding of science. If the English language eventually evolves in the direction of employing his construal, nothing serious is lost. In that case, we might need some new terminology – “the cultural sciences” anyone? – but that seems fairly innocuous. We already talk about “the social sciences” and “political science”.

For now, I prefer to avoid confusion by saying that the sciences and humanities are continuous with each other, forming a unity of knowledge. With that terminological point under our belts, we can then state that both the sciences and the humanities have undermined religion during the modern era. I expect they’ll go on doing so.

A valuable contribution

In challenging the undeserved hegemony of religion/science accommodationism, Coyne has written a book that is notably erudite without being dauntingly technical. The style is clear, and the arguments should be understandable and persuasive to a general audience. The tone is rather moderate and thoughtful, though opponents will inevitably cast it as far more polemical and “strident” than it really is. This seems to be the fate of any popular book, no matter how mild-mannered, that is critical of religion.

Coyne displays a light touch, even while drawing on his deep involvement in scientific practice (not to mention a rather deep immersion in the history and detail of Christian theology). He writes, in fact, with such seeming simplicity that it can sometimes be a jolt to recognize that he’s making subtle philosophical, theological, and scientific points.

In that sense, Faith versus Fact testifies to a worthwhile literary ideal. If an author works at it hard enough, even difficult concepts and arguments can usually be made digestible. It won’t work out in every case, but this is one where it does. That’s all the more reason why Faith versus Fact merits a wide readership. It’s a valuable, accessible contribution to a vital debate.

Russell Blackford, Conjoint Lecturer in Philosophy, University of Newcastle

This article was originally published on The Conversation. Read the original article.

Yoga & Cultural Appropriation

Homo sum, humani nihil a me alienum puto.

-Terence

In the fall of 2015, a free yoga class at the University of Ottawa was suspended out of concern that it might be an act of cultural appropriation. Staff at the Centre for Students with Disabilities, where the class was offered, made this decision on the basis of a complaint. A Centre official noted that many cultures, including the culture from which yoga originated, “have experienced oppression, cultural genocide and diasporas due to colonialism and western supremacy … we need to be mindful of this and how we express ourselves while practising yoga.” In response, there was an attempt to “rebrand” the class as “mindful stretching.” Due to issues regarding a French translation of the phrase, the rebranding failed and the class was suspended.

When I first heard about this story, I inferred it was satire on the part of the Onion, because it seemed to be an absurd lampooning of political correctness. It turned out that it was real, but still absurd. But, as absurdities sometimes do, it does provide an interesting context for discussing a serious subject—in this case that of cultural appropriation.

The concept of cultural appropriation is somewhat controversial, but the basic idea is fairly simple. In general terms, cultural appropriation takes place when a dominant culture takes (“appropriates”) from a marginalized culture for morally problematic reasons. For example, white college students have been accused of cultural appropriation (and worse) when they have made mocking use of aspects of black culture for theme parties. Some on the left (or “the politically correct” as they are called by their detractors) regard cultural appropriation as morally wrong. Some on the right think the idea of cultural appropriation is ridiculous and that people should just get over past oppressions and forget about them.

While I am no fan of what can justly be considered mere political correctness, I do agree that there are moral problems with what is often designated as cultural appropriation. One common area of cultural appropriation is that which is intended to lampoon. While comedy, as Aristotle noted, is a species of the ugly, it should not enter into the realm of what is actually hurtful. As such, lampooning of cultural stereotypes that crosses over into being actually hurtful would cease to be comedic and would instead be merely insulting mockery. An excellent (or awful) example of this would be the use of blackface by people who are not black. Naturally, specific cases would need to be given due consideration—it can be aesthetically legitimate to use the shock of apparent cultural appropriation to make a point.

It can, of course, be objected that lampooning is exempt from the usual moral concerns about insulting people and thus that such mocking insults would be morally fine. It must also be noted that I am making no assertions here about what should be forbidden by law. My view is, in fact, that even the most insulting mockery should not be restricted by law. Morality is, after all, distinct from legality.

Another common area of cultural appropriation is the misuse of symbols from a culture. For example, having an underwear model prance around in a war bonnet is not intended as lampooning, but is an insult to the culture that regards the war bonnet as an honor to be earned. It would be comparable to having underwear models prancing around displaying unearned honors such as the Purple Heart or the Medal of Honor. This misuse can, of course, be unintentional—people often use cultural marks of honor as “cool accessories” without any awareness of what they actually mean. While people should, perhaps, do some research before borrowing from other cultures, innocent ignorance is certainly forgivable.

It could be objected that such misuse is not morally problematic since there is no real harm being done when a culture is insulted by the misuse of its symbols. This, of course, would need to be held to consistently—a person making this argument to allow the misuse of the symbols of another culture would need to accept a comparable misuse of her own most sacred symbols as morally tolerable. Once again, I am not addressing the legality of this matter—although cultures do often have laws protecting their own symbols, such as military medals or religious icons.

While it would be easy to run through a multitude of cases that would be considered cultural appropriation, I prefer to focus on presenting a general principle about what would be morally problematic cultural appropriation. Given the above examples and consideration of the others that can be readily found, what seems to make appropriation inappropriate is the misuse or abuse of the cultural elements. That is, there needs to be meaningful harm inflicted by the appropriation. This misuse or abuse could be intentional (which would make it morally worse) or unintentional (which might make it an innocent error of ignorance).

It could be contended that any appropriation of culture is harmful by using an analogy to trademark, patent, and copyright law. A culture could be regarded as holding the moral “trademark”, “patent” or “copyright” (as appropriate) on its cultural items and thus people who are not part of that culture would be inflicting harm by appropriating these items. This would be analogous to another company appropriating, for example, Disney’s trademarks, violating the copyrights held by Random House or the patents held by Google. Culture could be thus regarded as a property owned by members of that culture and passed down as a matter of inheritance. This would seem to make any appropriation of culture by outsiders morally problematic—although a culture could give permission for such use by intentionally sharing the culture. Those who are fond of property rights should find this argument appealing.

One interesting way to counter the ownership argument is to note that humans are born into culture by chance and any human could be raised in any culture. As such, it could be claimed that humans have an ownership stake in all human cultures and thus are entitled to adopt culture as they see fit. The culture should, of course, be shown proper respect. This would, of course, be a form of cultural communism—which those who like strict property rights might find unappealing.

The response to this is to note that humans are also born by chance to families and any human could be designated the heir of a family, yet there are strict rules governing the inheritance of property. As such, cultural inheritance could work the same way—only the true heirs can give permission to others to use the culture. This should appeal to those who favor strict protections for inherited property.

My own inclination is that humans are the inheritors of all human culture and thus we all have a right to the cultural wealth our species has produced. Naturally, individual ownership of specific works should be properly respected. However, as with any gift, culture must be treated with due respect and used appropriately—rather than misused through appropriation. Since the yoga class involved no such misuse, cancelling it was absurd.

 

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Performance Based Funding & Adjustments

Photo by Paula O’Neil

I have written numerous essays on the issue of performance based funding of Florida state universities. This essay adds to the stack by addressing the matter of adjusting the assessment on the basis of impediments. I will begin, as I so often do, with a running analogy.

This coming Thursday is Thanksgiving and I will, as I have for the past few decades, run the Tallahassee Turkey Trot. By ancient law, the more miles you run on Thanksgiving, the more pumpkin pie and turkey you can stuff into your pie port. This is good science.

Back in the day, people wanted me to be on their Turkey Trot team because I was (relatively) fast. These days, I am asked to be on a team because I am (relatively) old but still (relatively) mobile.  As to why age and not just speed would be important in team selection, the answer is that the team scoring involves the use of an age grade calculator. While there is some debate about the accuracy of the calculators, the basic idea is sound: the impact of aging on performance can be taken into account in order to “level the playing field” (or “running road”) so as to allow fair comparisons and assessments of performance between people of different ages.

Suppose, for example, I wanted to compare my performance as a 49-year-old runner relative to a young man (perhaps my younger and much faster self). The most obvious way to do this is to simply compare our times in the same race and this would be a legitimate comparison. If I ran the 5K in 20 minutes and the young fellow ran it in 19 minutes, he would have performed better than I did. However, if a fair comparison were desired, then the effect of aging should be taken into account—after all, as I like to say, I am dragging the weight of many more years. Using an age grade calculator, my 20-minute 5K would be age adjusted to be equivalent to a 17:45 run by a young man. As such, I would have performed better than the young fellow given the temporal challenge I faced.

While assessing running times is different from assessing the performance of a university, the situations do seem similar in relevant ways. To be specific, the goal is to assess performance and to do so fairly. In the case of running, measuring the performance can be done by using only the overall times, but this does not truly measure the performance in terms of how well each runner has done in regards to the key challenge of age. Likewise, universities could be compared in terms of the unadjusted numbers, but this would not provide a fair basis for measuring performance without considering the key challenges faced by each university.

As I have mentioned in previous essays, my university, Florida A&M University, has fared poorly under the state’s assessment system. As with using just the actual times from a race, this assessment is a fair evaluation given the standards. My university really is doing worse than the other schools, given the assigned categories and the way the results are calculated. However, Florida A&M University (and other schools) face challenges that the top ranked schools do not face (or do not face to the same degree). As such, a truly fair assessment of the performance of the schools would need to employ something analogous to the age graded calculations.

As noted in another essay, Florida A&M University is well ranked in terms of its contribution to social mobility. One reason for this is that the majority of Florida A&M University students are low-income students and the school does reasonably well at helping them move up. However, lower income students face numerous challenges that lower their chances of graduation and success. These factors include the fact that students from poor schools (which tend to be located in economically disadvantaged areas) will tend to be poorly prepared for college. Another factor is that poverty negatively impacts brain development as well as academic performance. There is also the obvious fact that disadvantaged students need to borrow more money than students from wealthier backgrounds. This entails more student debt, and seventy percent of African American students say that student debt is their main reason for dropping out. In contrast, less than fifty percent of white students make this claim.

Given the impediments faced by lower income students, the assessment of university performance should be economically graded—that is, there should be an adjustment that compensates for the negative effect of the economic disadvantages of the students. Without this, the performance of the university cannot be properly assessed. Even though a university’s overall numbers might be lower than other schools, the school’s actual performance in terms of what it is doing for its students might be quite good.

In addition to the economic factors, there is also the factor of racism (which is also intertwined with economics). As I have mentioned in prior essays, African-American students are still often victims of segregation in regards to K-12 education and receive generally inferior education relative to white students. This clearly will impact college performance.

Race is also a major factor in regards to economic success. As noted in a previous essay, people with white-sounding names are more likely to get interviews and callbacks. For whites, the unemployment rate is 5.3% while for blacks it is 11.4%. The poverty rate for whites is 9.7% while for blacks it is 27.2%. The median household wealth for whites is $91,405 and for blacks $6,446. Blacks own homes at a rate of 43.5% while whites do so at 72.9%. Median household income is $35,416 for blacks and $59,754 for whites. Since many of the metrics used to assess Florida state universities involve economic and performance factors that are impacted by the effects of racism, fairness would require that there be a racism graded calculation. This would factor in how the impact of racism lowers the academic and economic success of black college graduates, thus allowing an accurate measure of the performance of Florida A&M University and other schools. Without such adjustments, there is no clear measure of how the schools actually are performing.


Refugees & Terrorists

In response to the recent terrorist attack in Paris (but presumably not those outside the West, such as in Beirut) many governors have stated they will try to prevent the relocation of Syrian refugees into their states. These states include my home state of Maine, my university state of Ohio and my adopted state of Florida. Recognizing a chance to score political points, some Republican presidential candidates have expressed their opposition to allowing more Syrian refugees into the country. Some, such as Ted Cruz, have proposed a religious test for entry into the country: Christian refugees would be allowed, while Muslim refugees would be turned away.

On the one hand, it is tempting to dismiss this as mere political posturing and pandering to fear, racism and religious intolerance. On the other hand, it is worth considering the legitimate worries that lie under the posturing and the pandering. One worry is, of course, the possibility that terrorists could masquerade as refugees to enter the country. Another worry is that refugees who are not already terrorists might be radicalized and become terrorists.

In matters of politics, it is rather unusual for people to operate on the basis of consistently held principles. Instead, views tend to be held on the basis of how a person feels about a specific matter or what the person thinks about the political value of taking a specific position. However, a proper moral assessment requires considering the matter in terms of general principles and consistency.

In the case of the refugees, the general principle justifying excluding them would be something like this: it is morally acceptable to exclude from a state groups who include people who might pose a threat. This principle seems, in general, quite reasonable. After all, excluding people who might present a threat serves to protect people from harm.

Of course, this principle is incredibly broad and would justify excluding almost anyone and everyone. After all, nearly every group of people (tourists, refugees, out-of-staters, men, Christians, atheists, cat fanciers, football players, and so on) includes people who might pose a threat. While excluding everyone would increase safety, it would certainly make for a rather empty state. As such, this general principle should be subject to some additional refinement in terms of such factors as the odds that a dangerous person will be in the group in question, the harm such a person is likely to do, and the likely harms from excluding such people.

As noted above, the concern about refugees from Syria (and the Middle East) is that they might include terrorists or terrorists to be. One factor to consider is the odds that this will occur. The United States has a fairly extensive (and slow) vetting process for refugees and, as such, it is not surprising that of “745,000 refugees resettled since September 11th, only two Iraqis in Kentucky have been arrested on terrorist charges, for aiding al-Qaeda in Iraq.”  This indicates that although the chance of a terrorist arriving masquerading as a refugee is not zero, it is exceptionally unlikely.

It might be countered, using the usual hyperbolic rhetoric of such things, that if even one terrorist gets into the United States, that would be an intolerable disaster. While I do agree that this would be a bad thing, there is the matter of general principles. In this case, would it be reasonable to operate on a principle that the possibility of even one bad outcome is sufficient to warrant a broad ban on something? That, I would contend, would generally seem to be unreasonable. This principle would justify banning guns, nuts, cars and almost all other things. It would also justify banning tourists and visitors from other states. After all, tourists and people from other states do bad things in states from time to time. As such, this principle seems unreasonable.

There is, of course, the matter of the political risk. A politician who supports allowing refugees to come into her state will be vilified by certain pundits and a certain news outlet if even a single incident happens. This, of course, would be no more reasonable than vilifying a politician who supports the second amendment just because a person is wrongly shot in her state.  But, reason is usually absent in the realm of political punditry.

Another factor to consider is the harm that would be done by excluding such refugees. If they cannot be settled someplace, they will be condemned to live as involuntary nomads and suffer all that entails. There is also the ironic possibility that such excluded refugees will become, as pundits like to say, radicalized. After all, people who are deprived of hope and who are treated as pariahs tend to become a bit resentful and some might decide to actually become terrorists. There is also the fact that banning refugees provides a nice bit of propaganda for the terrorist groups.

Given that the risk is very small and the harm to the refugees would be significant, the moral thing to do is to allow the refugees into the United States. Yes, one of them could be a terrorist. But so could a tourist. Or some American coming from another state. Or already in the state.

In addition to the sort of utilitarian calculation just made, an argument can also be advanced on the basis of moral duties to others, even when acting on such a duty involves risk. In terms of religious-based ethics, a standard principle is to love thy neighbor as thyself, which would seem to require that the refugees be aided, even at a slight risk. There is also the golden rule: if the United States fell into chaos and war, Americans fleeing the carnage would want other people to help them, even though we Americans have a reputation for violence. As such, we should accept refugees.

As a closing point, we Americans love to make claims about the moral superiority and exceptionalism of our country. Talk is cheap, so if we want to prove our alleged superiority and exceptionalism, we have to act in an exceptional way. Refusing to help people out of fear is to show a lack of charity, compassion and courage. This is not what an exceptional nation would do.

The Left’s Defection from Progress

Note: This is a slightly abridged (but otherwise largely warts and all) version of an article that I had published in Quadrant magazine in April 1999. It has not previously been published online (except that I am cross-posting on my own blog, Metamagician and the Hellfire Club). While my views have developed somewhat in the interim, there may be some advantage in republishing it for a new audience, especially at a time when there is much discussion of a “regressive left”.

I.

In a recent mini-review of David Stove’s Anything Goes: Origins of Scientific Irrationalism (originally published in 1982 as Popper and After), Diane Carlyle and Nick Walker make a casual reference to Stove’s “reactionary polemic”. By contrast, they refer to the philosophies of science that Stove attacks as “progressive notions of culture-based scientific knowledge”. To say the least, this appears tendentious.

To be fair, Carlyle and Walker end up saying some favourable things about Stove’s book. What is nonetheless alarming about their review is that it evidences just how easy it has become to write as if scientific realism were inherently “reactionary” and the more or less relativist views of scientific knowledge that predominate among social scientists and humanities scholars were “progressive”.

The words “reactionary” and “progressive” usually attach themselves to political and social movements, some kind of traditionalist or conservative backlash versus an attempt to advance political liberties or social equality. Perhaps Carlyle and Walker had another sense in mind, but the connotations of their words are pretty inescapable. Moreover, they would know as well as I do that there is now a prevalent equation within the social sciences and humanities of relativist conceptions of truth and reality with left-wing social critique, and of scientific realism with the political right. Carlyle and Walker wrote their piece against that background. But where does it leave those of us who retain at least a temperamental attachment to the left, however nebulous that concept is becoming, while remaining committed to scientific realism? To adapt a phrase from Christina Hoff Sommers, we are entitled to ask about who has been stealing socially liberal thought in general.

Is the life of reason and liberty (intellectual and otherwise) that some people currently enjoy in some countries no more than an historical anomaly, a short-lived bubble that will soon burst? It may well appear so. Observe the dreadful credulity of the general public in relation to mysticism, magic and pseudoscience, and the same public’s preponderant ignorance of genuine science. Factor in the lowbrow popularity of religious fundamentalism and the anti-scientific rantings of highbrow conservatives such as Bryan Appleyard. Yet the sharpest goad to despair is the appearance that what passes for the intellectual and artistic left has now repudiated the Enlightenment project of conjoined scientific and social progress.

Many theorists in the social sciences and humanities appear obsessed with dismantling the entirety of post-Enlightenment political, philosophical and scientific thought. This is imagined to be a progressive act, desirable to promote the various social, cultural and other causes that have become politically urgent in recent decades, particularly those associated with sex, race, and the aftermath of colonialism. The positions on these latter issues taken by university-based theorists give them a claim to belong to, if not actually constitute, the “academic left”, and I’ll refer to them with this shorthand expression.

There is, however, nothing inherently left-wing about wishing to sweep away our Enlightenment legacy. Nor is a commitment to scientific inquiry and hard philosophical analysis inconsistent with socially liberal views. Abandonment of the project of rational inquiry, with its cross-checking of knowledge in different fields, merely opens the door to the worst kind of politics that the historical left could imagine, for the alternative is that “truth” be determined by whoever, at particular times and places, possesses sufficient political or rhetorical power to decide what beliefs are orthodox. The rationality of our society is at stake, but so is the fate of the left itself, if it is so foolish as to abandon the standards of reason for something more like a brute contest for power.

It is difficult to know where to start in criticising the academic left’s contribution to our society’s anti-rationalist drift. The approaches I am gesturing towards are diverse among themselves, as well as being professed in the universities side by side with more traditional methods of analysing society and culture. There is considerable useful dialogue among all these approaches, and it can be difficult obtaining an accurate idea of specific influences within the general intellectual milieu.

However, amidst all the intellectual currents and cross-currents, it is possible to find something of a common element in the thesis or assumption (sometimes one, sometimes the other) that reality, or our knowledge of it, is “socially constructed”. There are many things this might mean, and I explain below why I do not quarrel with them all.

In the extreme, however, our conceptions of reality, truth and knowledge are relativised, leading to absurd doctrines, such as the repudiation of deductive logic or the denial of a mind-independent world. Symptomatic of the approach I am condemning is a subordination of the intellectual quest for knowledge and understanding to political and social advocacy. Some writers are prepared to misrepresent mathematical and scientific findings for the purposes of point scoring or intellectual play, or the simple pleasure of ego-strutting. All this is antithetical to Enlightenment values, but so much – it might be said – for the Enlightenment.

II.

The notion that reality is socially constructed would be attractive and defensible if it were restricted to a thesis about the considerable historical contingency of any culture’s social practices and mores, and its systems of belief, understanding and evaluation. These are, indeed, shaped partly by the way they co-evolve and “fit” with each other, and by the culture’s underlying economic and other material circumstances.

The body of beliefs available to anyone will be constrained by the circumstances of her culture, including its attitude to free inquiry, the concepts it has already built up for understanding the world, and its available technologies for the gathering of data. Though Stove is surely correct to emphasise that the accumulation of empirical knowledge since the 17th century has been genuine, the directions taken by science have been influenced by pre-existing values and beliefs. Meanwhile, social practices, metaphysical and ethical (rather than empirical) beliefs, the methods by which society is organised and by which human beings understand their experience are none of them determined in any simple, direct or uniform way by human “nature” or biology, or by transcendental events.

So far, so good – but none of this is to suggest that all of these categories should or can be treated in exactly the same way. Take the domain of metaphysical questions. Philosophers working in metaphysics are concerned to understand such fundamentals as space, time, causation, the kinds of substances that ultimately exist, the nature of consciousness and the self. The answers cannot simply be “read off” our access to empirical data or our most fundamental scientific theories, or some body of transcendental knowledge. Nonetheless, I am content to assume that all these questions, however intractable we find them, have correct answers.

The case of ethical disagreement may be very different, and I discuss it in more detail below. It may be that widespread and deep ethical disagreement actually evidences the correctness of a particular metaphysical (and meta-ethical) theory – that there are no objectively existing properties of moral good and evil. Yet, to the extent that they depend upon empirical beliefs about the consequences of human conduct, practical moral judgements may often be reconcilable. Your attitude to the rights of homosexuals will differ from mine if yours is based on a belief that homosexual acts cause earthquakes.

Again, the social practices of historical societies may turn out to be constrained by our biology in a way that is not true of the ultimate answers to questions of metaphysics. All these are areas where human behaviour and belief may be shaped by material circumstances and the way they fit with each other, and relatively unconstrained by empirical knowledge. But, to repeat, they are not all the same.

Where this appears to lead us is that, for complicated reasons and in awkward ways, there is much about the practices and beliefs of different cultures that is contingent on history. In particular, the way institutions are built up around experience is more or less historically contingent, dependent largely upon economic and environmental circumstances and on earlier or co-evolving layers of political and social structures. Much of our activity as human beings in the realms of understanding, organising, valuing and responding to experience can reasonably be described as “socially constructed”, and it will often make perfectly good sense to refer to social practices, categories, concepts and beliefs as “social constructions”.

Yet this modest insight cries out for clear intellectual distinctions and detailed application to particular situations, with conscientious linkages to empirical data. It cannot provide a short-cut to moral perspicuity or sound policy formulation. Nor is it inconsistent with a belief in the actual existence of law-governed events in the empirical world, which can be the subject of objective scientific theory and accumulating knowledge.

III.

As Antony Flew once expressed it, what is socially constructed is not reality itself but merely “reality”: the beliefs, meanings and values available within a culture.

Thus, none of what I’ve described so far amounts to “social constructionism” in a pure or philosophical sense, since this would require, in effect, that we never have any knowledge. It would require a thesis that all beliefs are so deeply permeated by socially specific ideas that they never transcend their social conditions of production to the extent of being about objective reality. To take this a step further, even the truth about physical nature would be relative to social institutions – relativism applies all the way down.

Two important points need to be made here. First, even without such a strong concept of socially constructed knowledge, social scientists and humanities scholars have considerable room to pursue research programs aimed at exploring the historically contingent nature of social institutions. In the next section, I argue that this applies quintessentially to socially accepted moral beliefs.

Second, however, there is a question as to why anyone would insist upon the thesis that the nature of reality is somehow relative to social beliefs all the way down, that there is no point at which we ever hit a bedrock of truth and falsity about anything. It is notorious that intellectuals who use such language sometimes retreat, when challenged, to a far more modest or equivocal kind of position.

Certainly, there is no need for anyone’s political or social aims to lead them to deny the mind-independent existence of physical nature, or to suggest that the truth about it is, in an ultimate sense, relative to social beliefs or subjective to particular observers. Nonetheless, many left-wing intellectuals freely express a view in which reality, not “reality”, is a mere social construction.

IV.

If social construction theory is to have any significant practical bite, then it has to assert that moral beliefs are part of what is socially constructed. I wish to explore this issue through some more fundamental considerations about ethics.

It is well-documented that there are dramatic contrasts between different societies’ practical beliefs about what is right and wrong, so much so that the philosopher J.L. Mackie said that these “make it difficult to treat those judgements as apprehensions of objective truths.” As Mackie develops the argument, it is not part of some general theory that “the truth is relative”, but involves a careful attempt to show that the diversity of moral beliefs is not analogous to the usual disagreements about the nature of the physical world.

Along with other arguments put by philosophers in Hume’s radical empiricist tradition, Mackie’s appeal to cultural diversity may persuade us that there are no objective moral truths. Indeed, it seems to me that there are only two positions here that are intellectually viable. The first is that Mackie is simply correct. This idea might seem to lead to cultural relativism about morality, but things are not always what they seem.

The second viable position is that there are objective moral truths, but they take the form of principles of an extremely broad nature, broad enough to help shape – rather than being shaped by – a diverse range of social practices in different environmental, economic and other circumstances.

If this is so, particular social practices and practical moral beliefs have some ultimate relationship to fundamental moral principles, but there can be enormous “slippage” between the two, depending on the range of circumstances confronting different human societies. Moreover, during times of rapid change such as industrialised societies have experienced in the last three centuries – and especially the last several decades – social practices and practical moral beliefs might tend to be frozen in place, even though they have become untenable. Conversely, there might be more wisdom, or at least rationality, than is apparent to most Westerners in the practices and moral beliefs of traditional societies. All societies, however, might have practical moral beliefs that are incorrect because of lack of empirical knowledge about the consequences of human conduct.

Taken with my earlier, more general, comments about various aspects of social practices and culturally-accepted “reality”, this approach gives socially liberal thinkers much of what they want. It tends to justify those who would test and criticise the practices and moral beliefs of Western nations while defending the rationality and sophistication of people from colonised cultures.

V.

The academic left’s current hostility to science and the Enlightenment project may have its origins in a general feeling, brought on by the twentieth century’s racial and ideological atrocities, that the Enlightenment has failed. Many intellectuals have come to see science as complicit in terror, oppression and mass killing, rather than as an inspiration for social progress.

The left’s hostility has surely been intensified by a quite specific fear that the reductive study of human biology will cross a bridge from the empirical into the normative realm, where it may start to dictate the political and social agenda in ways that can aptly be described as reactionary. This, at least, is the inference I draw from left-wing intellectuals’ evident detestation of human sociobiology or evolutionary psychology.

The fear may be that dubious research in areas such as evolutionary psychology and/or cognitive neuroscience will be used to rationalise sexist, racist or other illiberal positions. More radically, it may be feared that genuine knowledge of a politically unpalatable or otherwise harmful kind will emerge from these areas. Are such fears justified? To dismiss them lightly would be irresponsible and naive. I can do no more than place them in perspective. The relationship between the social sciences and humanities, on the one hand, and the “hard” end of psychological research, on the other, is one of the most important issues to be tackled by intellectuals in all fields – the physical sciences, social sciences and humanities.

One important biological lesson we have learned is that human beings are not, in any reputable sense, divided into “races”. As an empirical fact of evolutionary history and genetic comparison, we are all so alike that superficial characteristics such as skin or hair colour signify nothing about our moral or intellectual worth, or about the character of our inner experience. Yet, what if it had turned out otherwise? It is understandable if people are frightened by our ability to research such issues. At the same time, the alternative is to suppress rational inquiry in some areas, leaving questions of orthodoxy to whoever wins the naked contest for power. This is neither rational nor safe.

What implications could scientific knowledge about ourselves have for moral conduct or social policy? No number of factual statements about human nature, by themselves, can ever entail statements that amount to moral knowledge, as Hume demonstrated. What is required is an ethical theory, persuasive on other grounds, that already links “is” and “ought”. This might be found, for example, in a definition of moral action in terms of human flourishing, though it is not clear why we should, as individuals, be concerned about something as abstract as that – why not merely the flourishing of ourselves or our particular loved ones?

One comfort is that, even if we had a plausible set of empirical and meta-ethical gadgets to connect what we know of human nature to high-level questions about social policy, we would discover significant slippage between levels. Nature does not contradict itself, and no findings from a field such as evolutionary psychology could be inconsistent with the observed facts of cultural diversity. If reductive explanations of human nature became available in more detail, these must turn out to be compatible with the existence of the vast spectrum of viable cultures that human beings have created so far. And there is no reason to believe that a lesser variety of cultures will be workable in the material circumstances of a high-technology future.

The dark side of evolutionary psychology includes, among other things, some scary-looking claims about the reproductive and sociopolitical behaviour of the respective sexes. True, no one seriously asserts that sexual conduct in human societies and the respective roles of men and women within families and extra-familial hierarchies are specified by our genes in a direct or detailed fashion. What, however, are we to make of the controversial analyses of male and female reproductive “strategies” that have been popularised by several writers in the 1990s? Perhaps the best-known exposition is that of Matt Ridley in The Red Queen: Sex and the Evolution of Human Nature (1993). Such accounts offer evidence and argument that men are genetically hardwired to be highly polygamous or promiscuous, while women are similarly programmed to be imperfectly monogamous, as well as sexually deceitful.

In responding to this, first, I am in favour of scrutinising the evidence for such claims very carefully, since they can so readily be adapted to support worn-out stereotypes about the roles of the sexes. That, however, is a reason to show scientific and philosophical rigour, not to accept strong social constructionism about science. Secondly, even if findings similar to those synthesised by Ridley turned out to be correct, the social consequences are by no means apparent. Mere biological facts cannot tell us in some absolute way what are the correct sexual mores for a human society.

To take this a step further, theories about reproductive strategies suggest that there are in-built conflicts between the interests of men and women, and of higher and lower status men, which will inevitably need to be moderated by social compromise, not necessarily in the same way by different cultures. If all this were accepted for the sake of argument, it might destroy a precious notion about ourselves: that there is a simple way for relations between the sexes to be harmonious. On the other hand, it would seem to support rather than refute what might be considered a “progressive” notion: that no one society, certainly not our own, has the absolutely final answer to questions about sexual morality.

Although evolutionary psychology and cognitive neuroscience are potential minefields, it is irrational to pretend that they are incapable of discovering objective knowledge. Fortunately, such knowledge will surely include insight into the slippage between our genetic similarity and the diversity of forms taken by viable cultures. The commonality of human nature will be at a level that is consistent with the (substantial) historical contingency of social practices and of many areas of understanding and evaluative belief. The effect on social policy is likely to be limited, though we may become more charitable about what moral requirements are reasonable for the kinds of creatures that we are.

I should add that evolutionary psychology and cognitive neuroscience are not about to put the humanities, in particular, out of business. There are good reasons why the natural sciences cannot provide a substitute for humanistic explanation, even if we obtain a far deeper understanding of our own genetic and neurophysiological make-up. This is partly because reductive science is ill-equipped to deal with the particularity of complex events, partly because causal explanation may not be all that we want, anyway, when we try to interpret and clarify human experience.

VI.

Either there are no objective moral truths or they are of an extremely general kind. Should we, therefore, become cultural relativists?

Over a quarter of a century ago, Bernard Williams made the sharp comment that cultural relativism is “possibly the most absurd view to have been advanced even in moral philosophy”. To get this clear, Williams was criticising a cluster of beliefs that has a great attraction for left-wing academics and many others who preach inter-cultural tolerance: first, that what is “right” means what is right for a particular culture; second, that what is right for a particular culture refers to what is functionally valuable for it; and third, that it is “therefore” wrong for one culture to interfere with the organisation or values of another.

As Williams pointed out, these propositions are internally inconsistent. Not only does the third not follow from the others; it cannot be asserted while the other two are maintained. After all, it may be functionally valuable to culture A (and hence “right” within that culture) for it to develop institutions for imposing its will on culture B. These may include armadas and armies, colonising expeditions, institutionalised intolerance, and aggressively proselytising religions. In fact, nothing positive in the way of moral beliefs, political programs or social policy can ever be derived merely from a theory of cultural relativism.

That does not mean that there are no implications at all from the insight that social practices and beliefs are, to a large degree, contingent on history and circumstance. Depending upon how we elaborate this insight, we may have good reason to suspect that another culture’s odd-looking ways of doing things are more justifiable against universal principles of moral value than is readily apparent. In that case, we may also take the view that the details of how our own society, or an element of it, goes about things are open to challenge as to how far they are (or remain?) justifiable against such universal principles.

If, on the other hand, we simply reject the existence of any objective moral truths – which I have stated to be a philosophically viable position – we will have a more difficult time explaining why we are active in pursuing social change. Certainly, we will not be able to appeal to objectively applicable principles to justify our activity. All the same, we may be able to make positive commitments to ideas such as freedom, equality or benevolence that we find less arbitrary and more psychologically satisfying than mere acquiescence in “the way they do things around here”. In no case, however, can we intellectually justify a course of political and social activism without more general principles or commitments to supplement the bare insight that, in various complicated ways, social beliefs and practices are largely contingent.

VII.

An example of an attempt to short-circuit the kind of hard thinking about moral foundations required to deal with contentious issues is Martin F. Katz’s well-known article, “After the Deconstruction: Law in the Age of Post-Structuralism”. Katz is a jurisprudential theorist who is committed to a quite extreme form of relativism about empirical knowledge. In particular, his article explicitly assigns the findings of physical science the same status as the critical interpretations of literary works.

Towards the end of “After the Deconstruction”, Katz uses the abortion debate as an example of how what he calls “deconstructionism” or the “deconstructionist analysis” can clarify and arbitrate social conflict. He begins by stating the debate much as it might be seen by its antagonists:

One side of the debate holds that abortion is wrong because it involves the murder of an unborn baby. The other side of the debate sees abortion as an issue of self-determination; the woman’s right to choose what she does to her body. How do we measure which of these “rights” should take priority?

In order to avoid any sense of evasion, I’ll state clearly that the second of these positions, the “pro-choice” position, is closer to my own. However, either position has more going for it in terms of rationality than what Katz actually advocates.

Weighing these positions is not, however, how Katz proposes to solve the problem of abortion. He states that “deconstructionism” recommends that we “resist the temptation to weigh the legitimacy of . . . these competing claims.” Instead, we should consider the different “subjugations” supposedly instigated by the pro-life and pro-choice positions. The pro-life position is condemned because it denies women the choice of what role they wish to take in society, while the pro-choice position is apparently praised (though even this is not entirely clear) for shifting the decision about whether and when to have children directly to women.

The trouble with this is that it prematurely forecloses on the metaphysical and ethical positions at stake, leaving everything to be solved in terms of power relations. However, if we believe that a foetus (say at a particular age) is a person in some sense that entails moral regard, or a being that possesses a human soul, then there are moral consequences. Such beliefs, together with some plausible assumptions about our moral principles or commitments, entail that we should accept that aborting the foetus is an immoral act. The fact that banning the abortion may reduce the political power of the woman concerned, or of women generally, over against that of men will seem to have little moral bite, unless we adopt a very deep principle of group political equality. That would require ethical argument of an intensity which Katz never attempts.

If we take it that the foetus is not a person in the relevant sense, we may be far more ready to solve the problem (and to advocate an assignment of “rights”) on the basis of utilitarian, or even libertarian, principles. By contrast, the style of “deconstructionist” thought advocated by Katz threatens to push rational analysis aside altogether, relying on untheorised hunches or feelings about how we wish power to be distributed in our society. This approach can justifiably be condemned as irrational. At the same time, the statements that Katz makes about the political consequences for men or women of banning or legalising abortion are so trite that it is difficult to imagine how anyone not already beguiled by an ideology could think that merely stating them could solve the problem.

VIII.

In the example of Katz’s article, as in the general argument I have put, the insight that much in our own society’s practices and moral beliefs is “socially constructed” can do only a modest amount of intellectual work. We may have good reason to question the way they do things around here, to subject it to deeper analysis. We may also have good reason to believe that the “odd” ways they do things in other cultures make more sense than is immediately apparent to the culture-bound Western mind. All very well. None of this, however, can undermine the results of systematic empirical inquiry. Nor can it save us from the effort of grappling with inescapable metaphysical and ethical questions, just as we had to do before the deconstruction.

[My Amazon author page]

Solving the Attendance Problem

While philosophy is about inquiry and students should be encouraged to ask questions, there used to be one question I hoped students would not ask. That question was “do I need the book?” I did realize that some students asked this question out of a legitimate concern based on the often limited finances of students. In other cases, it arose from a soul-deep hope to avoid the unbearable pain of reading philosophy.

My answer was always an honest “yes.” I must confess that I have heard the evil whispers of the Book Devil trying to tempt me to line my shelves with desk copies or, even worse, get free books to sell to the book buyers. But I have always resisted this temptation. My will, I must say, was fortified by memories of buying expensive books that were never actually used by the professors in the classes. Despite the fact that the books for my courses were legitimately required and I diligently sought the best books for the lowest costs, the students still lamented my cruel practice of actually requiring books.

Moved by their terrible suffering, I quested for a solution and found it: technology. Since most of the great philosophers are not only dead but really, really dead, their works are typically in the public domain. This allowed me to assemble free texts for all my classes except Critical Inquiry. These were first distributed via 3.5 inch floppies (kids, ask your parents about these), then via the internet. While I could not include the latest (and allegedly greatest) of contemporary philosophy, the digital books are clearly as good as most of the expensive offerings. The students are, I am pleased to say, happy that the books they will not read will not cost them a penny. Yes, sometimes students now ask “do I have to read the book?” I say “yes.”

Since I make a point of telling the students on day one that the book is a free PDF file (except for the Critical Inquiry text), I rarely hear “do I need to buy the book?” these days. Now students ask “do I have to come to class?” I have to take some of the blame for this—my classes are designed so that all the coursework can be completed or turned in online via Black Board. Technology is thus shown, once again, to be a two-edged sword: it solved the “do I have to buy the book?” problem, but helped create the “do I have to come to class?” problem.

When I was first asked this, I was a bit bothered. After all, a reasonable interpretation of the question is “I think I have nothing to learn. I believe you have nothing to teach me. But I’d rather not fail.” Since I have a reasonably good understanding of what people are like, I am confident that this interpretation is often correct. Honesty even compels me to admit that the student could be right: perhaps the student does have nothing to learn from me. After all, various arguments have been advanced over the centuries that philosophy is useless and presumably not worth learning. Things like logic, critical thinking and ethics could be worthless—after all, some people seem to do just fine without them. Some even manage to hold high positions. Or at least want to. However, I am reasonably confident that the majority of students do have something to learn that I can teach them.

After overcoming my initial annoyance, I gave the matter considerable thought. As with the “do I have to buy the book?” question, there could be a good reason for asking. This reason could be that the student needs the time that would otherwise be spent in my class to do things for other classes. Or time to grind for engrams and materials in Destiny. The student might even need the time to work in order to earn money to pay for school.

This was not the first time that I had thought about why students skipped my class. Since April 2014, I have been collecting survey data from students. While as of this writing I only have 233 responses, 28.8% of students surveyed claimed that work was the primary reason they missed class. 15% claimed that the fact that they could turn in work via Black Board was the reason they skipped class. This reason is currently in second place. 6% claimed they needed to spend time on other classes.

There are some obvious concerns with my survey. The first is that the sample is relatively small at 233 students. The second is that although the survey is completely anonymous, the respondents might be inclined to select the answer they regard as the most laudable reason to miss class. That said, these results do make intuitive sense. One reason is that the majority of students at Florida A&M University are from low-income families and hence often need to work to pay for school. Another reason is that I routinely overhear students talking about their jobs and I sometimes even see students wearing their work uniforms in class.
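The arithmetic behind these percentages is simple enough to reproduce. Here is a minimal Python sketch; the response counts are hypothetical reconstructions chosen to match the reported figures (n = 233), not the actual survey data:

```python
# Tally survey responses and report each reason's share of the total.
# Counts are hypothetical, reconstructed from the reported percentages.
from collections import Counter

responses = Counter({
    "work": 67,            # 67/233 ≈ 28.8%
    "black_board": 35,     # 35/233 ≈ 15.0%
    "other_classes": 14,   # 14/233 ≈ 6.0%
    "other": 117,          # remainder of the 233 responses
})

total = sum(responses.values())
for reason, count in responses.most_common():
    print(f"{reason}: {count / total:.1%}")
```

Running a tally like this on the real responses is all the analysis the survey requires; the caveats about sample size and self-reporting remain, of course, untouched by the arithmetic.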

While it might be suspected that my main concern about attendance is a matter of ego, it is actually a matter of concern for my students. In addition to being curious about why students were skipping my class, I was also interested in why students failed my courses. Fortunately, I had considerable objective data in the form of attendance records, grades, and coursework.

I found a clear correlation between lack of attendance and failing grades. None of the students who failed had perfect attendance and only 27% had better than 50% attendance. This was hardly surprising: students who do not attend class miss out on the lectures, class discussion and the opportunity to ask questions. To use the obvious analogy, these students are like athletes skipping practice and the coursework is analogous to meets or games.

I have been testing a solution to this problem: I am creating YouTube videos of one of my classes and putting the links into Black Board. This way students can view the videos at their convenience and skip or rewind as they desire. As might be suspected given the cast and production values, the view counts are rather low. However, some students have already expressed appreciation for the availability of the videos. If they can reduce the number of students who fail by even a few each semester, then the effort will be worthwhile. It would also be worthwhile if I went viral and was able to ride that sweet wave of internet fame to some boosted book sales. I do not, however, see that happening. The fame, that is.

I also found that 67.7% of the students who failed did so because of failing scores on work. While this might elicit a response of “duh”, 51% of those who failed did not complete the exams, 45% did not complete the quizzes, and 42% did not complete the paper. As such, while failing grades on the work was a major factor, simply not doing the work was also a significant cause. Interestingly, none of the students who failed completed all the work—part of the reason for the failure was not completing the work. While they might have failed the work even if they had completed it, failure was assured by not making the attempt.
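The record-keeping behind figures like these is straightforward. The sketch below is a hypothetical illustration—the field names and sample records are my invention, not my actual gradebook:

```python
# Sketch: summarizing incomplete work among failing students.
# Each record marks whether a student completed each course component.
failed_students = [
    {"exams": False, "quizzes": True,  "paper": True},
    {"exams": True,  "quizzes": False, "paper": False},
    {"exams": False, "quizzes": False, "paper": True},
    {"exams": True,  "quizzes": True,  "paper": False},
]

def skipped_share(records, component):
    """Fraction of records in which the component was not completed."""
    return sum(not r[component] for r in records) / len(records)

for component in ("exams", "quizzes", "paper"):
    share = skipped_share(failed_students, component)
    print(f"did not complete {component}: {share:.0%}")

# Checking the claim that no failing student completed everything:
completed_all = [r for r in failed_students if all(r.values())]
print(f"completed all work: {len(completed_all)}")
```

With real records in place of the toy ones, the same few lines yield the completion percentages and confirm (or refute) the pattern that failure tracks uncompleted work.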

My initial attempt at solving the problem involved having all coursework either on Black Board or capable of being turned in via Black Board. My obvious concern with this solution was the possibility that students would cheat. While there are some awkward and expensive solutions (such as video monitoring), I decided to rely on something I had learned about the homework assigned in my courses: despite having every opportunity to cheat, student performance on out-of-class work was consistent with their performance on monitored in-class work. It was simply a matter of designing questions and tests to make cheating unrewarding. The solution was fairly easy—questions aimed mainly at comprehension, a tight time limit on exams, and massive question banks to generate random exams. This approach seems to have worked: student grades remained very close to those in pre-Black Board days. Students can, of course, try to cheat—but either they are not cheating or they are cheating in ways that have had no impact on their grades. On the plus side, there was an increase in the completion rate of the coursework. However, the increase was not as significant as I had hoped.
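The question-bank approach is, in outline, very simple. Here is a minimal sketch of the idea; the bank contents and exam length are placeholders, and the real randomization is done by Black Board, not by this code:

```python
# Sketch: drawing a randomized exam from a large question bank.
# When the bank is much larger than the exam, two students are
# unlikely to see the same questions in the same order.
import random

def make_exam(bank, num_questions, seed=None):
    """Return a random subset of the bank, in shuffled order."""
    rng = random.Random(seed)
    return rng.sample(bank, num_questions)

bank = [f"Question {i}" for i in range(1, 101)]  # toy 100-question bank
exam = make_exam(bank, num_questions=10)
print(exam)
```

The design point is that a tight time limit plus per-student random draws makes copying answers from a classmate far less rewarding than simply knowing the material.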

In the light of work left uncompleted, I decided to have very generous deadlines for work. Students get a month to complete the quizzes for a section. For exams 1-3 (which cover sections 1-3), students get one month after we finish a section to complete the exam. Exam 4 is due at the end of the last day of classes, and the final is due at the end of the scheduled final exam period. The paper deadlines are unchanged from the pre-Black Board days, although now the students can turn in papers from anywhere with internet access and can do so round the clock.

The main impact of this change has been another increase in the completion rate of work, thus decreasing the failure rate in my classes. As should be suspected, there are still students who do not complete all the work and fail much of the work they do complete. While I can certainly do more to provide students with the opportunity to pass, they still have responsibilities. One of mine is, of course, to record their failure.

 

My Amazon Author Page

My Paizo Page

My DriveThru RPG Page

Follow Me on Twitter

Ontological Zombies

As a gamer and horror fan I have an undecaying fondness for zombies. Some years back, I was intrigued to learn about philosophical zombies—I had a momentary hope that my fellow philosophers were doing something…well…interesting. But, as so often has been the case, professional philosophers managed to suck the life out of even the already lifeless. Unlike proper flesh devouring products of necromancy or mad science, philosophical zombies lack all coolness.

To bore the reader a bit, philosophical zombies are beings who look and act just like normal humans, but lack consciousness. They are no more inclined to seek the brains of humans than standard humans, although discussions of them can numb the brain. Rather than causing the horror proper to zombies (or the joy of easy XP), philosophical zombies merely bring about a feeling of vague disappointment. This is the same sort of disappointment that you might recall from childhood trick or treating when someone gave you pennies or an apple rather than real candy.

Rather than serving as creepy cannon fodder for vile necromancers or metaphors for vacuous and excessive American consumerism, philosophical zombies serve as victims in philosophical discussions about the mind and consciousness.

The dullness of current philosophical zombies does raise an important question—is it possible to have a philosophical discussion about proper zombies? There is also a second and equally important question—is it possible to have an interesting philosophical discussion about zombies? As I will show, the answers are “yes” and “obviously not.”

Since there is, at least in this world, no Bureau of Zombie Standards and Certification, there are many varieties of zombies. In my games and fiction, I generally define zombies in terms of beings that are biologically dead yet animated (or re-animated, to be more accurate). Traditionally, zombies are “mindless” or at least possess extremely basic awareness (enough to move about and seek victims).

In fiction, many beings called “zombies” do not have these qualities. The zombies in 28 Days Later are “mindless”, but are still alive. As such, they are not really zombies at all—just infected people. The zombies in Return of the Living Dead are dead and re-animated, but retain their human intelligence. Zombie lords and juju zombies in D&D and Pathfinder are dead and re-animated, but are intelligent. In the real world, there are also what some call zombies—these are organisms taken over and controlled by another organism, such as an ant controlled by a rather nasty fungus. To keep the discussion focused and narrow, I will stick with what I consider proper zombies: biologically dead, yet animated. While I generally consider zombies to be unintelligent, I do not consider that a definitive trait. For folks concerned about how zombies differ from other animate dead, such as vampires and ghouls, the main difference is that stock zombies lack the special powers of more luxurious undead—they have the same basic capabilities as the living creature (mostly moving around, grabbing and biting).

One key issue regarding zombies is whether or not they are possible. There are, of course, various ways to “cheat” in creating zombies—for example, a mechanized skeleton could be embedded in dead flesh to move the flesh about. This would make a rather impressive horror weapon—so look for it in a war coming soon. Another option is to have a corpse driven about by another organism—wearing the body as a “meat suit.” However, these would not be proper zombies since they are not self-propelling—they are just being moved about by something else.

In terms of “scientific” zombies, the usual approaches include strange chemicals, viruses, funguses or other such means of animation. Since it is well-established that electrical shocks can cause dead organisms to move, getting a proper zombie would seem to be an engineering challenge—although making one work properly could require substantial “cheating” (for example, having computerized control nodes in the body that coordinate the manipulation of the dead flesh).

A much more traditional means of animating corpses is via supernatural means. In games like Pathfinder, D&D and Call of Cthulhu, zombies are animated by spells (the classic being animate dead) or by an evil spirit occupying the flesh. In the D&D tradition, zombies (and all undead) are powered by negative energy (while living creatures are powered by positive energy). It is this energy that enables the dead flesh to move about (and violate the usual laws of biology).

While the idea of negative energy is mostly a matter of fantasy games, the notion of unintelligent animating forces is not unprecedented in the history of science and philosophy. For example, Aristotle seems to have considered that the soul (or perhaps a “part” of it) served to animate the body. Past thinkers also considered forces that would animate non-living bodies. As such, it is easy enough to imagine a similar sort of force that could animate a dead body (rather than returning it to life).

The magic “explanation” is the easiest approach, in that it is not really an explanation. It seems safe to hold that magic zombies are not possible in the actual world—though all the zombie stories and movies show it is rather easy to imagine possible worlds inhabited by them.

The idea of a truly dead body moving around in the real world the way fictional zombies do in their fictional worlds does seem somewhat hard to accept. After all, it seems essential to biological creatures that they be alive (to some degree) in order for them to move about under their own power. What would be needed is some sort of force or energy that could move truly dead tissue. While this is clearly conceivable (in the sense that it is easy to imagine), it certainly does not seem possible—at least in this world. Dualists might, of course, be tempted to consider that the immaterial mind could drive the dead shell—after all, this would only be marginally more mysterious than the ghost driving around a living machine. Physicalists, of course, would almost certainly balk at proper zombies—at least until the zombie apocalypse. Then they would be running.


Total Validation Experience

There are many self-help books on the market, but they all suffer from one fatal flaw. That flaw is the assumption that the solution to your problems lies in changing yourself. This is a clearly misguided approach for many reasons.

The first is the most obvious. As the principle of identity states, A=A. Or, put in wordy words, “each thing is the same with itself and different from another.” As such, changing yourself is impossible: to change yourself, you would cease to be you. The new person might be better. And, let’s face it, probably would be. But, it would not be you. As such, changing yourself would be ontological suicide and you do not want any part of that.

The second is less obvious, but is totally historical. Parmenides of Elea, a very dead ancient Greek philosopher, showed that change is impossible. I know that “Parmenides” sounds like a cheese, perhaps one that would be good on spaghetti. But, trust me, he was a philosopher and would probably make a poor pasta topping. Best of all, he laid it out in poetic form, the most truthful of truth conveying word wording:

How could what is perish? How could it have come to be? For if it came into being, it is not; nor is it if ever it is going to be. Thus coming into being is extinguished, and destruction unknown.

Nor was [it] once, nor will [it] be, since [it] is, now, all together, / One, continuous; for what coming-to-be of it will you seek? / In what way, whence, did [it] grow? Neither from what-is-not shall I allow / You to say or think; for it is not to be said or thought / That [it] is not. And what need could have impelled it to grow / Later or sooner, if it began from nothing? Thus [it] must either be completely or not at all.

[What exists] is now, all at once, one and continuous… Nor is it divisible, since it is all alike; nor is there any more or less of it in one place which might prevent it from holding together, but all is full of what is.

And it is all one to me / Where I am to begin; for I shall return there again.

That, I think we can all agree, is completely obvious and utterly decisive. Since you cannot change, you cannot self-help yourself by changing. That is just good logic. I would say more, but I do not get paid by the word to write this stuff. Hell, I do not get paid at all.

But, obviously enough, you want to help yourself to a better life. Since you cannot change and it should be assumed with 100% confidence that you are not the problem, an alternative explanation for your woes is needed. Fortunately, the problem is obvious: other people. The solution is equally obvious: get new people. Confucius said “Refuse the friendship of all who are not like you.” This was close to the solution, but if you are annoying or a jerk, being friends with annoying jerks is not going to help you. A better solution is to tweak Confucius just a bit: “Refuse the friendship of all who do not like you.” This is a good start, but more is needed. After all, it is obvious that you should just be around people who like you. But that will not be totally validating.

The goal is, of course, to achieve a Total Validation Experience (TVE). A TVE is an experience that fully affirms and validates whatever you feel needs to be validated at the time. It might be your opinion on Mexicans or your belief that your beauty rivals that of Adonis and Helen. Or it might be that your character build in Warcraft is fully and truly optimized.

By following this simple dictate “Refuse the friendship of all who do not totally validate you”, you will achieve the goal that you will never achieve with any self-help book: a vast ego, a completely unshakeable belief that you are right about everything, and all that is good in life. You will never be challenged and never feel doubt. It will truly be the best of all possible worlds. So, get to work on surrounding yourself with Validators.
